Sample records for on-board computer algorithms

  1. Satellite on-board processing for earth resources data

    NASA Technical Reports Server (NTRS)

    Bodenheimer, R. E.; Gonzalez, R. C.; Gupta, J. N.; Hwang, K.; Rochelle, R. W.; Wilson, J. B.; Wintz, P. A.

    1975-01-01

    Results of a survey of earth resources user applications and their data requirements, earth resources multispectral scanner sensor technology, and preprocessing algorithms for correcting the sensor outputs and for bulk data reduction are presented along with a candidate data format. The computational requirements for implementing the data analysis algorithms are included along with a review of computer architectures and organizations. Computer architectures capable of handling those computational requirements are suggested, and the environmental effects of an on-board processor are discussed. By relating performance parameters to the system requirements of each user application, the feasibility of on-board processing is determined for each user. A tradeoff analysis is performed to determine the sensitivity of the results to each of the system parameters. Significant results and conclusions are discussed, and recommendations are presented.

  2. Fast gradient-based algorithm on extended landscapes for wave-front reconstruction of Earth observation satellite

    NASA Astrophysics Data System (ADS)

    Thiebaut, C.; Perraud, L.; Delvit, J. M.; Latry, C.

    2016-07-01

    We present an on-board satellite implementation of a gradient-based (optical flow) algorithm for estimating the shifts between images of a Shack-Hartmann wave-front sensor over extended landscapes. The proposed algorithm has low complexity in comparison with classical correlation methods, a major advantage for use on board a satellite at high instrument data rates and in real time. The electronic board used for this implementation is designed for space applications and is composed of radiation-hardened hardware and software. The processing times of both the shift estimation and the pre-processing steps are compatible with on-board real-time computation.
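
    A minimal sketch of the gradient-based shift estimation described above, assuming a pure global translation between the reference and current sensor subimages; the flight implementation, its pre-processing chain and its fixed-point details are not reproduced here:

      import numpy as np

      def estimate_shift(ref, img):
          """Least-squares estimate of a global (dx, dy) shift between two
          images from the brightness-constancy equation Ix*dx + Iy*dy = -It.
          Minimal gradient-based sketch; practical systems iterate and warp."""
          Iy, Ix = np.gradient(ref.astype(float))     # spatial gradients
          It = img.astype(float) - ref.astype(float)  # temporal difference
          A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
          b = -It.ravel()
          (dx, dy), *_ = np.linalg.lstsq(A, b, rcond=None)
          return dx, dy

    One linear solve over image gradients replaces a correlation search over candidate shifts, which suggests how a gradient method can undercut the cost of classical correlation.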

  3. A lateral guidance algorithm to reduce the post-aerobraking burn requirements for a lift-modulated orbital transfer vehicle. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Herman, G. C.

    1986-01-01

    A lateral guidance algorithm which controls the location of the line of intersection between the actual and desired orbital planes (the hinge line) is developed for the aerobraking phase of a lift-modulated orbital transfer vehicle. The on-board targeting algorithm associated with this lateral guidance algorithm is simple and concise which is very desirable since computation time and space are limited on an on-board flight computer. A variational equation which describes the movement of the hinge line is derived. Simple relationships between the plane error, the desired hinge line position, the position out-of-plane error, and the velocity out-of-plane error are found. A computer simulation is developed to test the lateral guidance algorithm for a variety of operating conditions. The algorithm does reduce the total burn magnitude needed to achieve the desired orbit by allowing the plane correction and perigee-raising burn to be combined in a single maneuver. The algorithm performs well under vacuum perigee dispersions, pot-hole density disturbance, and thick atmospheres. The results for many different operating conditions are presented.

  4. Real-time optical flow estimation on a GPU for a skid-steered mobile robot

    NASA Astrophysics Data System (ADS)

    Kniaz, V. V.

    2016-04-01

    Accurate egomotion estimation is required for mobile robot navigation. Egomotion is often estimated using optical flow algorithms. For accurate estimation of optical flow, most modern algorithms require large memory resources and high processor speed. However, the simple single-board computers that control the motion of a robot usually do not provide such resources. On the other hand, most modern single-board computers are equipped with an embedded GPU that can be used in parallel with the CPU to improve the performance of the optical flow estimation algorithm. This paper presents a new Z-flow algorithm for efficient computation of optical flow using an embedded GPU. The algorithm is based on phase correlation optical flow estimation and provides real-time performance on a low-cost embedded GPU. A layered optical flow model is used, with layer segmentation performed by a graph-cut algorithm with a time-derivative-based energy function. This approach makes the algorithm both fast and robust in low-light and low-texture conditions. The algorithm's implementation for a Raspberry Pi Model B computer is discussed. For evaluation, the computer was mounted on a Hercules skid-steered mobile robot equipped with a monocular camera. The evaluation was performed using hardware-in-the-loop simulation and experiments with the Hercules mobile robot. The algorithm was also evaluated on the KITTI Optical Flow 2015 dataset. The resulting endpoint error of the optical flow calculated with the developed algorithm was low enough for navigation of the robot along the desired trajectory.
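
    For reference, a compact sketch of phase-correlation shift estimation, the mechanism the Z-flow algorithm is said to build on; the layer segmentation, subpixel refinement and GPU kernels are omitted:

      import numpy as np

      def phase_correlation(a, b):
          """Estimate the integer translation between images a and b from the
          peak of the normalized cross-power spectrum (phase correlation)."""
          F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
          F /= np.abs(F) + 1e-12                  # keep phase only
          corr = np.fft.ifft2(F).real
          dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
          # map peaks past the midpoint to negative shifts
          if dy > a.shape[0] // 2:
              dy -= a.shape[0]
          if dx > a.shape[1] // 2:
              dx -= a.shape[1]
          return dx, dy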

  5. Smart Payload Development for High Data Rate Instrument Systems

    NASA Technical Reports Server (NTRS)

    Pingree, Paula J.; Norton, Charles D.

    2007-01-01

    This slide presentation reviews the development of smart payload instrument systems with high data rates. On-board computation has become a bottleneck for advanced science instrument and engineering capabilities. In order to improve on-board computation capability, smart payloads have been proposed. A smart payload is a localized instrument that can offload extensive computing cycles from the flight processor, simplify the interfaces, and minimize the dependency of the instrument on the flight system. This has been proposed for the Mars mission Mars Atmospheric Trace Molecule Spectroscopy (MATMOS). The design of this system is discussed, the features of the Virtex-4 are described, and the technical approach is reviewed. The proposed hybrid Field Programmable Gate Array (FPGA) technology has been shown to deliver breakthrough performance by tightly coupling hardware and software. Smart payload designs for instruments such as MATMOS can meet science data return requirements with more competitive use of available on-board resources and can provide algorithm acceleration in hardware, leading to the implementation of better (more advanced) algorithms in on-board systems for improved science data return.

  6. A Formal Algorithm for Routing Traces on a Printed Circuit Board

    NASA Technical Reports Server (NTRS)

    Hedgley, David R., Jr.

    1996-01-01

    This paper addresses the classical problem of printed circuit board routing: that is, the problem of automatic routing by a computer other than by brute force, which causes the execution time to grow exponentially as a function of the complexity. Most of the present solutions are either inexpensive but not efficient and fast, or efficient and fast but very costly. Many solutions are proprietary, so not much is written or known about the actual algorithms upon which they are based. This paper presents a formal algorithm for routing traces on a printed circuit board. The solution presented is very fast and efficient and for the first time speaks to the question eloquently by way of symbolic statements.
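
    The formal algorithm itself is not reproduced in the abstract; for orientation, a minimal breadth-first (Lee-style) maze router, the classical grid-routing baseline that such work improves upon, on a hypothetical 0/1 obstacle grid:

      from collections import deque

      def lee_route(grid, start, goal):
          """Classical Lee-style maze routing: breadth-first search over the
          free cells of a grid (0 = free, 1 = blocked). Returns a shortest
          trace as a list of (row, col) cells, or None if no route exists.
          Baseline illustration only, not the paper's formal algorithm."""
          rows, cols = len(grid), len(grid[0])
          prev = {start: None}
          queue = deque([start])
          while queue:
              cell = queue.popleft()
              if cell == goal:
                  path = []
                  while cell is not None:
                      path.append(cell)
                      cell = prev[cell]
                  return path[::-1]
              r, c = cell
              for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                  if 0 <= nr < rows and 0 <= nc < cols \
                          and grid[nr][nc] == 0 and (nr, nc) not in prev:
                      prev[(nr, nc)] = cell
                      queue.append((nr, nc))
          return None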

  7. Algorithmic support for graphic images rotation in avionics

    NASA Astrophysics Data System (ADS)

    Kniga, E. V.; Gurjanov, A. V.; Shukalov, A. V.; Zharinov, I. O.

    2018-05-01

    Avionics design faces the problem of developing and evaluating algorithms to rotate the images shown on the on-board display. The image rotation algorithms are part of the program software of avionics devices, which belong to the on-board computers of airplanes and helicopters. The images to be rotated contain flight location map fragments. Image rotation in the display system can be done either in software or in hardware; the software option is slower than the hardware one. A comparison of several rotation algorithms on test images is presented, with the algorithms realized in hardware using the Altera Quartus II design environment.

  8. Rapid Onboard Trajectory Design for Autonomous Spacecraft in Multibody Systems

    NASA Astrophysics Data System (ADS)

    Trumbauer, Eric Michael

    This research develops automated, on-board trajectory planning algorithms in order to support current and new mission concepts. These include orbiter missions to Phobos or Deimos, Outer Planet Moon orbiters, and robotic and crewed missions to small bodies. The challenges stem from the limited on-board computing resources which restrict full trajectory optimization with guaranteed convergence in complex dynamical environments. The approach taken consists of leveraging pre-mission computations to create a large database of pre-computed orbits and arcs. Such a database is used to generate a discrete representation of the dynamics in the form of a directed graph, which acts to index these arcs. This allows the use of graph search algorithms on-board in order to provide good approximate solutions to the path planning problem. Coupled with robust differential correction and optimization techniques, this enables the determination of an efficient path between any boundary conditions with very little time and computing effort. Furthermore, the optimization methods developed here based on sequential convex programming are shown to have provable convergence properties, as well as generating feasible major iterates in case of a system interrupt -- a key requirement for on-board application. The outcome of this project is thus the development of an algorithmic framework which allows the deployment of this approach in a variety of specific mission contexts. Test cases related to missions of interest to NASA and JPL such as a Phobos orbiter and a Near Earth Asteroid interceptor are demonstrated, including the results of an implementation on the RAD750 flight processor. This method fills a gap in the toolbox being developed to create fully autonomous space exploration systems.
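
    A toy sketch of the on-board search step described above: Dijkstra's algorithm over a directed graph whose nodes index precomputed arcs and whose edge weights stand in for transfer cost; the graph encoding and cost model here are illustrative assumptions, not the dissertation's database format:

      import heapq

      def cheapest_path(graph, start, goal):
          """Dijkstra search over a directed graph of precomputed arcs.
          graph: dict mapping node -> iterable of (neighbor, delta_v_cost).
          Returns (total_cost, node_sequence): the 'good approximate
          solution' handed to differential correction and optimization."""
          frontier = [(0.0, start, [start])]
          settled = set()
          while frontier:
              cost, node, path = heapq.heappop(frontier)
              if node == goal:
                  return cost, path
              if node in settled:
                  continue
              settled.add(node)
              for nxt, dv in graph.get(node, ()):
                  if nxt not in settled:
                      heapq.heappush(frontier, (cost + dv, nxt, path + [nxt]))
          return float("inf"), []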

  9. Performance comparison of attitude determination, attitude estimation, and nonlinear observers algorithms

    NASA Astrophysics Data System (ADS)

    Si Mohammed, M. A.; Boussadia, H.; Bellar, A.; Adnane, A.

    2017-01-01

    This paper presents a brief synthesis and a useful performance analysis of different attitude filtering algorithms (attitude determination algorithms, attitude estimation algorithms, and nonlinear observers) applied to a Low Earth Orbit satellite, in terms of accuracy, convergence time, memory usage, and computation time. The latter is measured in two ways: using a personal computer, and using the On-Board Computer 750 (OBC 750) that is used in many SSTL Earth observation missions. This comparative study can serve as a design aid for choosing among attitude determination, attitude estimation, and attitude observer algorithms. The simulation results clearly indicate that the nonlinear observer is the more logical choice.

  10. Special-purpose computer for holography HORN-4 with recurrence algorithm

    NASA Astrophysics Data System (ADS)

    Shimobaba, Tomoyoshi; Hishinuma, Sinsuke; Ito, Tomoyoshi

    2002-10-01

    We designed and built a special-purpose computer for holography, HORN-4 (HOlographic ReconstructioN), using PLD (Programmable Logic Device) technology. HORN computers have a pipeline architecture. We use HORN-4 as an attached processor to enhance the performance of a general-purpose computer when it is used to generate holograms with the "recurrence formulas" algorithm developed in our previous paper. In the HORN-4 system, we designed the pipeline around this recurrence algorithm, which calculates the phase on a hologram. As a result, we could integrate the pipeline, composed of 21 units, into one PLD chip. The units in the pipeline consist of one BPU (Basic Phase Unit) and twenty CUs (Cascade Units). The CU units can compute twenty light intensities on a hologram plane at one time. By mounting two of the PLD chips on a PCI (Peripheral Component Interconnect) universal board, HORN-4 can calculate holograms at a high speed of about 42 Gflops equivalent. The cost of the HORN-4 board is about 1,700 US dollars. We could obtain an 800×600-grid hologram from a 3D image composed of 415 points in about 0.45 s with the HORN-4 system.
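
    A rough sketch of the recurrence idea, assuming the Fresnel approximation in which the phase contributed by one object point is quadratic in the pixel index, so each new sample costs only two additions in the inner loop; the actual BPU/CU pipeline and its fixed-point format are not described in the abstract:

      import math

      def hologram_line(points, n_pixels, pitch, wavelength):
          """Accumulate object-beam intensity along one hologram scan line.
          The Fresnel phase phi(n) = gamma * (n - x0/pitch)**2 is quadratic
          in pixel index n, so it is updated with first/second differences
          (two additions per pixel): the 'recurrence formulas' idea.
          points: iterable of (x0, z, amplitude) object points."""
          line = [0.0] * n_pixels
          for x0, z, amp in points:
              gamma = math.pi * pitch * pitch / (wavelength * z)
              u = x0 / pitch
              phase = gamma * u * u            # phi(0)
              delta = gamma * (1.0 - 2.0 * u)  # phi(1) - phi(0)
              for n in range(n_pixels):
                  line[n] += amp * math.cos(phase)
                  phase += delta               # addition 1
                  delta += 2.0 * gamma         # addition 2 (constant)
          return line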

  11. GLAS Spacecraft Pointing Study

    NASA Technical Reports Server (NTRS)

    Born, George H.; Gold, Kenn; Ondrey, Michael; Kubitschek, Dan; Axelrad, Penina; Komjathy, Attila

    1998-01-01

    Science requirements for the GLAS mission demand that the laser altimeter be pointed to within 50 m of the location of the previous repeat ground track. The satellite will be flown in a repeat orbit of 182 days. Operationally, the required pointing information will be determined on the ground using the nominal ground track, to which pointing is desired, and the current propagated orbit of the satellite as inputs to the roll computation algorithm developed by CCAR. The roll profile will be used to generate a set of fit coefficients which can be uploaded on a daily basis and used by the on-board attitude control system. In addition, an algorithm has been developed for computation of the associated command quaternions which will be necessary when pointing at targets of opportunity. It may be desirable in the future to perform the roll calculation in an autonomous real-time mode on-board the spacecraft. GPS can provide near real-time tracking of the satellite, and the nominal ground track can be stored in the on-board computer. It will be necessary to choose the spacing of this nominal ground track to meet storage requirements in the on-board environment. Several methods for generating the roll profile from a sparse reference ground track are presented.

  12. The Gaia On-Board Scientific Data Handling

    NASA Astrophysics Data System (ADS)

    Arenou, F.; Babusiaux, C.; Chéreau, F.; Mignot, S.

    2005-01-01

    Because Gaia will perform a continuous all-sky survey at medium (Spectro) or very high (Astro) angular resolution, the on-board processing needs to cope with a wide variety of objects and densities, which calls for generic and adaptive algorithms at the detection level and beyond. Consequently, the Pyxis scientific algorithms developed for the on-board data handling cover a large range of applications: detection and confirmation of astronomical objects, background sky estimation, classification of detected objects, on-board detection of Near-Earth Objects, and window selection and positioning. Very dense fields, where the real-time computing requirements should remain within fixed bounds, are particularly challenging. Another constraint stems from the limited telemetry bandwidth, and an additional compromise has to be found between scientific requirements and constraints in terms of the mass, volume and power budgets of the satellite. The rationale for the on-board data handling procedure is described here, together with the developed algorithms, the main issues and the expected scientific performances in the Astro and Spectro instruments.

  13. 40 CFR 86.004-16 - Prohibition of defeat devices.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... information which the Administrator may request to be submitted) regarding test programs, engineering evaluations, design specifications, calibrations, on-board computer algorithms, and design strategies...

  14. 40 CFR 86.004-16 - Prohibition of defeat devices.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... information which the Administrator may request to be submitted) regarding test programs, engineering evaluations, design specifications, calibrations, on-board computer algorithms, and design strategies...

  15. 40 CFR 86.004-16 - Prohibition of defeat devices.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... information which the Administrator may request to be submitted) regarding test programs, engineering evaluations, design specifications, calibrations, on-board computer algorithms, and design strategies...

  16. High-Speed On-Board Data Processing for Science Instruments

    NASA Technical Reports Server (NTRS)

    Beyon, Jeffrey Y.; Ng, Tak-Kwong; Lin, Bing; Hu, Yongxiang; Harrison, Wallace

    2014-01-01

    Development of a new on-board data processing platform has been in progress at NASA Langley Research Center since April 2012, and an overall review of that work is presented in this paper. The project, called High-Speed On-Board Data Processing for Science Instruments (HOPS), focuses on a high-speed, scalable data processing platform for three National Research Council Decadal Survey missions: Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS), Aerosol-Cloud-Ecosystems (ACE), and Doppler Aerosol Wind Lidar (DAWN) 3-D Winds. HOPS combines advanced general-purpose computing with Field Programmable Gate Array (FPGA) based algorithm implementation techniques. The significance of HOPS is to enable high-speed on-board data processing for current and future science missions with its reconfigurable and scalable data processing platform. A single HOPS processing board is expected to provide approximately 66 times faster data processing for ASCENDS, more than 70% reduction in both power and weight, and about two orders of magnitude cost reduction compared to the state-of-the-art (SOA) on-board data processing system. These benchmark predictions are based on the data available when HOPS was originally proposed in August 2011. The details of these improvement measures are also presented. The two facets of HOPS development are identifying the most computationally intensive algorithm segments of each mission and implementing them on an FPGA-based data processing board. A general introduction of these facets is also the purpose of this paper.

  17. On-board attitude determination for the Explorer Platform satellite

    NASA Technical Reports Server (NTRS)

    Jayaraman, C.; Class, B.

    1992-01-01

    This paper describes the attitude determination algorithm for the Explorer Platform satellite. The algorithm, which is baselined on the Landsat code, is a six-element linear quadratic state estimation processor, in the form of a Kalman filter augmented by an adaptive filter process. Improvements to the original Landsat algorithm were required to meet mission pointing requirements. These consisted of a more efficient sensor processing algorithm and the addition of an adaptive filter which acts as a check on the Kalman filter during satellite slew maneuvers. A 1750A processor will be flown on board the satellite for the first time as a coprocessor (COP) in addition to the NASA Standard Spacecraft Computer. The attitude determination algorithm, which will be resident in the COP's memory, will make full use of its improved processing capabilities to meet mission requirements. Additional benefits were gained by writing the attitude determination code in Ada.

  18. GRAPE-6A: A Single-Card GRAPE-6 for Parallel PC-GRAPE Cluster Systems

    NASA Astrophysics Data System (ADS)

    Fukushige, Toshiyuki; Makino, Junichiro; Kawai, Atsushi

    2005-12-01

    In this paper, we describe the design and performance of GRAPE-6A, a special-purpose computer for gravitational many-body simulations. It was designed to be used with a PC cluster, in which each node has one GRAPE-6A. Such a configuration is particularly cost-effective in running parallel tree algorithms. Though the use of parallel tree algorithms was possible with the original GRAPE-6 hardware, it was not very cost-effective since a single GRAPE-6 board was still too fast and too expensive. Therefore, we designed GRAPE-6A as a single PCI card to minimize the reproduction cost and to optimize the computing speed. The peak performance is 130 Gflops for one GRAPE-6A board and 3.1 Tflops for our 24 node cluster. We describe the implementation of the tree, TreePM and individual timestep algorithms on both a single GRAPE-6A system and GRAPE-6A cluster. Using the tree algorithm on our 16-node GRAPE-6A system, we can complete a collisionless simulation with 100 million particles (8000 steps) within 10 days.

  19. Fast computational scheme of image compression for 32-bit microprocessors

    NASA Technical Reports Server (NTRS)

    Kasperovich, Leonid

    1994-01-01

    This paper presents a new computational scheme for image compression based on the discrete cosine transform (DCT), which underlies the JPEG and MPEG international standards. The algorithm for the 2-D DCT computation uses integer operations only (register shifts and additions/subtractions); its computational complexity is about 8 additions per image pixel. As a meaningful example of an on-board image compression application, we consider the software implementation of the algorithm for the Mars Rover (Marsokhod, in Russian) imaging system being developed as part of the Mars-96 international space project. It is shown that a fast software solution for 32-bit microprocessors can compete with DCT-based image compression hardware.
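
    The paper's exact eight-additions-per-pixel scheme is not given in the abstract; as a flavor of the multiplier-free integer style it relies on, a sketch approximating scaling by a DCT cosine with register shifts and additions only (the shift set is an ad hoc example, not the paper's constants):

      def shift_add_scale(x, shifts):
          """Approximate y = c * x using only right shifts and additions,
          where c ~ sum(2**-s for s in shifts)."""
          return sum(x >> s for s in shifts)

      # cos(pi/8) ~ 0.92388 ~ 1/2 + 1/4 + 1/8 + 1/32 + 1/64 = 0.921875
      COS_PI_8_SHIFTS = (1, 2, 3, 5, 6)

      print(shift_add_scale(1000, COS_PI_8_SHIFTS))  # 921, vs. exact 924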

  20. A low-cost vector processor boosting compute-intensive image processing operations

    NASA Technical Reports Server (NTRS)

    Adorf, Hans-Martin

    1992-01-01

    Low-cost vector processing (VP) is within reach of everyone seriously engaged in scientific computing. The advent of affordable add-on VP boards for standard workstations, complemented by mathematical/statistical libraries, is beginning to impact compute-intensive tasks such as image processing. A case in point is the restoration of distorted images from the Hubble Space Telescope. A low-cost implementation of the standard Tarasko-Richardson-Lucy restoration algorithm is presented on an Intel i860-based VP board which is seamlessly interfaced to a commercial, interactive image processing system. First experience is reported (including some benchmarks for standalone FFTs) and some conclusions are drawn.
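
    A brief sketch of the standard Richardson-Lucy update, whose FFT-based convolutions dominate the cost and are the natural target for a vector-processing board; this is the textbook iteration, not the paper's i860 implementation:

      import numpy as np
      from scipy.signal import fftconvolve

      def richardson_lucy(observed, psf, iterations=20):
          """Richardson-Lucy deconvolution:
          estimate *= conv(observed / conv(estimate, psf), mirrored psf)."""
          estimate = np.full_like(observed, observed.mean(), dtype=float)
          psf_mirror = psf[::-1, ::-1]
          for _ in range(iterations):
              blurred = fftconvolve(estimate, psf, mode="same")
              ratio = observed / (blurred + 1e-12)
              estimate *= fftconvolve(ratio, psf_mirror, mode="same")
          return estimate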

  21. User's Guide for Computer Program that Routes Signal Traces

    NASA Technical Reports Server (NTRS)

    Hedgley, David R., Jr.

    2000-01-01

    This disk contains both a FORTRAN computer program and the corresponding user's guide, which facilitates both the program's incorporation into your system and its use. The computer program implements an efficient algorithm that routes signal traces on the layers of a printed circuit board with both through-pins and surface mounts. The program is an implementation of the ideas presented in the theoretical paper "A Formal Algorithm for Routing Signal Traces on a Printed Circuit Board" (NASA TP-3639, 1996). The computer program in the "connects" file can be read with a FORTRAN compiler and readily integrated into software unique to each particular environment where it might be used.

  22. Design of on-board parallel computer on nano-satellite

    NASA Astrophysics Data System (ADS)

    You, Zheng; Tian, Hexiang; Yu, Shijie; Meng, Li

    2007-11-01

    This paper presents a design for the on-board parallel computer system of a nano-satellite. Based on the development requirements that a nano-satellite have small volume, low weight, low power consumption and intelligence, this design abandons the traditional single-computer and dual-computer systems in an effort to improve dependability, capability and intelligence simultaneously. Following an integrated design approach, it employs a shared-memory parallel computer as the main structure; connects the telemetry system, attitude control system and payload system by an intelligent bus; provides management that can handle static tasks and dynamic task scheduling, protect and recover the on-site status, and so forth, in light of the parallel algorithms; and establishes mechanisms for fault diagnosis, restoration and system reconfiguration. The result is an on-board parallel computer system with high dependability, capability and intelligence, flexible management of hardware resources, a sound software system, and high extensibility, fully in keeping with the concept and trend of integrated electronics design.

  23. Comparative Evaluation of Different Optimization Algorithms for Structural Design Applications

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Coroneos, Rula M.; Guptill, James D.; Hopkins, Dale A.

    1996-01-01

    Non-linear programming algorithms play an important role in structural design optimization. Fortunately, several algorithms with computer codes are available. At NASA Lewis Research Center, a project was initiated to assess the performance of eight different optimizers through the development of the computer code CometBoards. This paper summarizes the conclusions of that research. CometBoards was employed to solve sets of small, medium and large structural problems, using the eight different optimizers on a Cray-YMP8E/8128 computer. The reliability and efficiency of the optimizers were determined from the performance on these problems. For small problems, the performance of most of the optimizers could be considered adequate. For large problems, however, three optimizers (two sequential quadratic programming routines, DNCONG of IMSL and SQP of IDESIGN, along with the sequential unconstrained minimization technique, SUMT) outperformed the others. At the optimum, most optimizers captured an identical number of active displacement and frequency constraints, but the number of active stress constraints differed among the optimizers. This discrepancy can be attributed to singularity conditions in the optimization, and alleviating it can improve the efficiency of the optimizers.

  24. Performance Trend of Different Algorithms for Structural Design Optimization

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Coroneos, Rula M.; Guptill, James D.; Hopkins, Dale A.

    1996-01-01

    Nonlinear programming algorithms play an important role in structural design optimization. Fortunately, several algorithms with computer codes are available. At NASA Lewis Research Center, a project was initiated to assess the performance of different optimizers through the development of the computer code CometBoards. This paper summarizes the conclusions of that research. CometBoards was employed to solve sets of small, medium and large structural problems, using different optimizers on a Cray-YMP8E/8128 computer. The reliability and efficiency of the optimizers were determined from the performance on these problems. For small problems, the performance of most of the optimizers could be considered adequate. For large problems, however, three optimizers (two sequential quadratic programming routines, DNCONG of IMSL and SQP of IDESIGN, along with the sequential unconstrained minimization technique, SUMT) outperformed the others. At the optimum, most optimizers captured an identical number of active displacement and frequency constraints, but the number of active stress constraints differed among the optimizers. This discrepancy can be attributed to singularity conditions in the optimization, and alleviating it can improve the efficiency of the optimizers.

  25. 40 CFR 86.004-16 - Prohibition of defeat devices.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... exceptions set forth in the definition of “defeat device” in § 86.004-2 has been met. (2) Information... evaluations, design specifications, calibrations, on-board computer algorithms, and design strategies...

  26. 40 CFR 86.004-16 - Prohibition of defeat devices.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... exceptions set forth in the definition of “defeat device” in § 86.004-2 has been met. (2) Information... evaluations, design specifications, calibrations, on-board computer algorithms, and design strategies...

  27. Molecular computation: RNA solutions to chess problems.

    PubMed

    Faulhammer, D; Cukras, A R; Lipton, R J; Landweber, L F

    2000-02-15

    We have expanded the field of "DNA computers" to RNA and present a general approach for the solution of satisfiability problems. As an example, we consider a variant of the "Knight problem," which asks what configurations of knights one can place on an n × n chess board such that no knight is attacking any other knight on the board. Using specific ribonuclease digestion to manipulate strands of a 10-bit binary RNA library, we developed a molecular algorithm and applied it to a 3 × 3 chessboard as a 9-bit instance of this problem. Here, the nine spaces on the board correspond to nine "bits" or placeholders in a combinatorial RNA library. We recovered a set of "winning" molecules that describe solutions to this problem.
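
    The 9-bit instance is small enough to verify exhaustively in software, which makes a useful cross-check on the molecular computation; a sketch enumerating all 512 configurations of the 3 × 3 board:

      from itertools import product

      def attacks(a, b):
          """Knight attack test for cells 0..8 of a 3 x 3 board (row-major)."""
          (r1, c1), (r2, c2) = divmod(a, 3), divmod(b, 3)
          return {abs(r1 - r2), abs(c1 - c2)} == {1, 2}

      solutions = []
      for bits in product((0, 1), repeat=9):      # all 512 configurations
          knights = [i for i, b in enumerate(bits) if b]
          if all(not attacks(a, b) for a in knights for b in knights if a < b):
              solutions.append(bits)

      print(len(solutions))   # count of non-attacking configurations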

  28. Spacecube: A Family of Reconfigurable Hybrid On-Board Science Data Processors

    NASA Technical Reports Server (NTRS)

    Flatley, Thomas P.

    2015-01-01

    SpaceCube is a family of Field Programmable Gate Array (FPGA) based on-board science data processing systems developed at the NASA Goddard Space Flight Center (GSFC). The goal of the SpaceCube program is to provide 10x to 100x improvements in on-board computing power while lowering relative power consumption and cost. SpaceCube is based on the Xilinx Virtex family of FPGAs, which include processor, FPGA logic and digital signal processing (DSP) resources. These processing elements are leveraged to produce a hybrid science data processing platform that accelerates the execution of algorithms by distributing computational functions to the most suitable elements. This approach enables the implementation of complex on-board functions that were previously limited to ground based systems, such as on-board product generation, data reduction, calibration, classification, event/feature detection, data mining and real-time autonomous operations. The system is fully reconfigurable in flight, including data parameters, software and FPGA logic, through either ground commanding or autonomously in response to detected events/features in the instrument data stream.

  29. 40 CFR 1068.110 - What other provisions apply to engines/equipment in service?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... information regarding test programs, engineering evaluations, design specifications, calibrations, on-board computer algorithms, and design strategies. It is a violation of the Clean Air Act for anyone to make...

  30. 40 CFR 1068.110 - What other provisions apply to engines/equipment in service?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... information regarding test programs, engineering evaluations, design specifications, calibrations, on-board computer algorithms, and design strategies. It is a violation of the Clean Air Act for anyone to make...

  31. 40 CFR 1068.110 - What other provisions apply to engines/equipment in service?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... information regarding test programs, engineering evaluations, design specifications, calibrations, on-board computer algorithms, and design strategies. It is a violation of the Clean Air Act for anyone to make...

  32. A low-cost test-bed for real-time landmark tracking

    NASA Astrophysics Data System (ADS)

    Csaszar, Ambrus; Hanan, Jay C.; Moreels, Pierre; Assad, Christopher

    2007-04-01

    A low-cost vehicle test-bed system was developed to iteratively test, refine and demonstrate navigation algorithms before attempting to transfer them to more advanced rover prototypes. The platform used here was a modified radio-controlled (RC) car. A microcontroller board and an onboard laptop computer allow for either autonomous operation or remote operation via a computer workstation. The sensors onboard the vehicle represent the types currently used on NASA-JPL rover prototypes. For dead-reckoning navigation, optical wheel encoders, a single-axis gyroscope, and a 2-axis accelerometer were used. An ultrasound ranger is available to calculate distance as a substitute for the stereo vision systems presently used on rovers. The prototype also carries a small laptop computer with a USB camera and a wireless transmitter to send real-time video to an off-board computer. A real-time user interface was implemented that combines an automatic image feature selector, tracking parameter controls, a streaming video viewer, and user-generated or autonomous driving commands. Using the test-bed, real-time landmark tracking was demonstrated by autonomously driving the vehicle through the JPL Mars Yard. The algorithms tracked rocks as waypoints, generating coordinates for calculating relative motion and for visually servoing to science targets. A limitation of the current system is serial computing (each additional landmark is tracked in order), but since each landmark is tracked independently, adding targets would not significantly diminish system speed if the tracking were transferred to appropriate parallel hardware.

  33. Modification and fixed-point analysis of a Kalman filter for orientation estimation based on 9D inertial measurement unit data.

    PubMed

    Brückner, Hans-Peter; Spindeldreier, Christian; Blume, Holger

    2013-01-01

    A common approach for high-accuracy sensor fusion based on 9D inertial measurement unit data is Kalman filtering. State-of-the-art floating-point filter algorithms differ in their computational complexity; nevertheless, real-time operation on a low-power microcontroller at high sampling rates is not possible. This work presents algorithmic modifications to reduce the computational demands of a two-step minimum-order Kalman filter. Furthermore, the required bit-width of a fixed-point filter version is explored. For evaluation, real-world data captured using an Xsens MTx inertial sensor are used. Changes in computational latency and orientation estimation accuracy due to the proposed algorithmic modifications and the fixed-point number representation are evaluated in detail on a variety of processing platforms, enabling on-board processing on wearable sensor platforms.
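
    A small illustration of the fixed-point trade-off being explored, using a Q15 multiply with rounding; the filter's actual bit-widths are the subject of the paper and are not assumed here:

      Q = 15                     # Q15: 1 sign bit, 15 fractional bits
      SCALE = 1 << Q

      def to_fixed(x):
          return int(round(x * SCALE))

      def fixed_mul(a, b):
          """Q15 multiply with rounding: (a*b + 2**14) >> 15. Python integers
          stand in for the microcontroller's wide accumulator here."""
          return (a * b + (1 << (Q - 1))) >> Q

      a, b = 0.7071, 0.5
      fx = fixed_mul(to_fixed(a), to_fixed(b))
      print(fx / SCALE, a * b)   # quantized result vs. float reference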

  34. Development of hardware accelerator for molecular dynamics simulations: a computation board that calculates nonbonded interactions in cooperation with fast multipole method.

    PubMed

    Amisaki, Takashi; Toyoda, Shinjiro; Miyagawa, Hiroh; Kitamura, Kunihiro

    2003-04-15

    Evaluation of long-range Coulombic interactions still represents a bottleneck in the molecular dynamics (MD) simulations of biological macromolecules. Despite the advent of sophisticated fast algorithms, such as the fast multipole method (FMM), accurate simulations still demand a great amount of computation time due to the accuracy/speed trade-off inherently involved in these algorithms. Unless higher-order multipole expansions, which are extremely expensive to evaluate, are employed, a large amount of the execution time is still spent directly calculating particle-particle interactions within the nearby region of each particle. To reduce this execution time for pair interactions, we developed a computation unit (board), called MD-Engine II, that calculates nonbonded pairwise interactions using specially designed hardware. Four custom arithmetic processors and a processor for memory manipulation ("particle processor") are mounted on the computation board. The arithmetic processors are responsible for calculation of the pair interactions. The particle processor plays a central role in realizing efficient cooperation with the FMM. The results of a series of 50-ps MD simulations of a protein-water system (50,764 atoms) indicated that a more stringent setting of accuracy in the FMM computation, compared with those previously reported, was required for accurate simulations over long time periods. Such a level of accuracy was efficiently achieved using the cooperative calculations of the FMM and MD-Engine II. On an Alpha 21264 PC, the FMM computation at a moderate but tolerable level of accuracy was accelerated by a factor of 16.0 using three boards. At a high level of accuracy, the cooperative calculation achieved a 22.7-fold acceleration over the corresponding conventional FMM calculation. In the cooperative calculations of the FMM and MD-Engine II, it was possible to achieve more accurate computation at a comparable execution time by incorporating larger nearby regions. Copyright 2003 Wiley Periodicals, Inc. J Comput Chem 24: 582-592, 2003

  35. 40 CFR 86.1809-12 - Prohibition of defeat devices.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... manufacturer must provide an explanation containing detailed information regarding test programs, engineering evaluations, design specifications, calibrations, on-board computer algorithms, and design strategies..., with the Part II certification application, an engineering evaluation demonstrating to the satisfaction...

  36. DANoC: An Efficient Algorithm and Hardware Codesign of Deep Neural Networks on Chip.

    PubMed

    Zhou, Xichuan; Li, Shengli; Tang, Fang; Hu, Shengdong; Lin, Zhi; Zhang, Lei

    2017-07-18

    Deep neural networks (NNs) are the state-of-the-art models for understanding the content of images and videos. However, implementing deep NNs in embedded systems is a challenging task; e.g., a typical deep belief network could exhaust gigabytes of memory and result in bandwidth and computational bottlenecks. To address this challenge, this paper presents an algorithm and hardware codesign for efficient deep neural computation. A hardware-oriented deep learning algorithm, named the deep adaptive network, is proposed to exploit the sparsity of neural connections. By adaptively removing the majority of neural connections and robustly representing the reserved connections using binary integers, the proposed algorithm can save up to 99.9% of memory and computational resources without undermining classification accuracy. An efficient sparse-mapping-memory-based hardware architecture is proposed to take full advantage of the algorithmic optimization. Unlike a traditional Von Neumann architecture, the deep-adaptive network on chip (DANoC) brings communication and computation into close proximity to avoid power-hungry parameter transfers between on-board memory and on-chip computational units. Experiments on different image classification benchmarks show that the DANoC system achieves competitively high accuracy and efficiency compared with state-of-the-art approaches.
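
    A generic NumPy illustration of the two ideas named above, removing most connections and representing the survivors with binary signs plus a shared scale; this is plain magnitude pruning used as a stand-in, not the paper's deep adaptive network training rule:

      import numpy as np

      def sparsify_binarize(W, keep=0.01):
          """Keep only the largest-magnitude fraction of weights, then store
          the survivors as signs times one shared scale (binary + mask)."""
          k = max(1, int(keep * W.size))
          thresh = np.sort(np.abs(W), axis=None)[-k]
          mask = np.abs(W) >= thresh            # ~1% of connections kept
          scale = np.abs(W[mask]).mean()        # one shared magnitude
          W_hat = np.where(mask, np.sign(W) * scale, 0.0)
          return W_hat, mask, scale

      W = np.random.randn(256, 256)
      W_hat, mask, scale = sparsify_binarize(W)
      print(mask.mean(), scale)                 # fraction kept, shared scale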

  37. 40 CFR 86.1809-10 - Prohibition of defeat devices.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... detailed information regarding test programs, engineering evaluations, design specifications, calibrations, on-board computer algorithms, and design strategies incorporated for operation both during and... HLDT/MDPVs the manufacturer must submit, with the Part II certification application, an engineering...

  38. Validation of On-board Cloud Cover Assessment Using EO-1

    NASA Technical Reports Server (NTRS)

    Mandl, Dan; Miller, Jerry; Griffin, Michael; Burke, Hsiao-hua

    2003-01-01

    The purpose of this NASA Earth Science Technology Office-funded effort was to flight-validate an on-board cloud detection algorithm and to determine the performance that can be achieved with a Mongoose V flight computer. This validation was performed on the operational EO-1 satellite by uploading new flight code to perform the cloud detection. The algorithm was developed by MIT Lincoln Laboratory and is based on the use of selected spectral bands from 0.4 to 2.5 microns from the Hyperion hyperspectral instrument. The Technology Readiness Level (TRL) of this technology was 5 at the beginning of the task and 6 upon completion. In the final validation, an 8-second (0.75-Gbyte) Hyperion image was processed on-board and assessed for percentage cloud cover within 30 minutes; this had been expected to take many hours, perhaps a day, considering that the Mongoose V is only a 6-8 MIPS machine. To accomplish this test, the image had to undergo level 0 and level 1 processing on-board before the cloud algorithm was applied. For almost all of the ground test cases and all of the flight cases, the cloud assessment was within 5% of the correct value, and in most cases within 1-2%.

  39. GRAPE-5: A Special-Purpose Computer for N-Body Simulations

    NASA Astrophysics Data System (ADS)

    Kawai, Atsushi; Fukushige, Toshiyuki; Makino, Junichiro; Taiji, Makoto

    2000-08-01

    We have developed a special-purpose computer for gravitational many-body simulations, GRAPE-5. GRAPE-5 accelerates the force calculation which dominates the calculation cost of the simulation. All other calculations, such as the time integration of orbits, are performed on a general-purpose computer (host computer) connected to GRAPE-5. A GRAPE-5 board consists of eight custom pipeline chips (G5 chip) and its peak performance is 38.4 Gflops. GRAPE-5 is the successor of GRAPE-3. The differences between GRAPE-5 and GRAPE-3 are: (1) The newly developed G5 chip contains two pipelines operating at 80 MHz, while the GRAPE chip, which was used for GRAPE-3, had one at 20 MHz. The calculation speed of GRAPE-5 is 8-times faster than that of GRAPE-3. (2) The GRAPE-5 board adopted a PCI bus as the interface to the host computer instead of VME of GRAPE-3, resulting in a communication speed one order of magnitude faster. (3) In addition to the pure 1/r potential, the G5 chip can calculate forces with arbitrary cutoff functions, so that it can be applied to the Ewald or P3M methods. (4) The pairwise force calculated on GRAPE-5 is about 10-times more accurate than that on GRAPE-3. On one GRAPE-5 board, one timestep with a direct summation algorithm takes 14 (N/128k)^2 seconds. With the Barnes-Hut tree algorithm (θ = 0.75), one timestep can be done in 15 (N/10^6) seconds.

  40. Parallel image reconstruction for 3D positron emission tomography from incomplete 2D projection data

    NASA Astrophysics Data System (ADS)

    Guerrero, Thomas M.; Ricci, Anthony R.; Dahlbom, Magnus; Cherry, Simon R.; Hoffman, Edward T.

    1993-07-01

    The problem of excessive computational time in 3D Positron Emission Tomography (3D PET) reconstruction is defined, and we present an approach to solving it through the construction of an inexpensive parallel processing system and the adoption of the FAVOR algorithm. Currently, the 3D reconstruction of the 610 images of a total-body procedure would require 80 hours, and the 3D reconstruction of the 620 images of a dynamic study would require 110 hours. An inexpensive parallel processing system for 3D PET reconstruction is constructed by integrating board-level products from multiple vendors. The system achieves its computational performance through the use of 6U VME boards carrying four i860 processors each; the processor boards from five manufacturers are discussed from our perspective. The new 3D PET reconstruction algorithm FAVOR (FAst VOlume Reconstructor), which promises a substantial speed improvement, is adopted. Preliminary results from parallelizing FAVOR are utilized in formulating architectural improvements for this problem. In summary, we are addressing the problem of excessive computational time in 3D PET image reconstruction through the construction of an inexpensive parallel processing system and the parallelization of a 3D reconstruction algorithm that uses the incomplete data set produced by current PET systems.

  41. Virtualizing Super-Computation On-Board UAS

    NASA Astrophysics Data System (ADS)

    Salami, E.; Soler, J. A.; Cuadrado, R.; Barrado, C.; Pastor, E.

    2015-04-01

    Unmanned aerial systems (UAS, also known as UAVs, RPAS or drones) have great potential to support a wide variety of aerial remote sensing applications. Most UAS work by acquiring data using on-board sensors for later post-processing. Some require the data gathered to be downlinked to the ground in real time. However, depending on the volume of data and the cost of the communications, this latter option is not sustainable in the long term. This paper develops the concept of virtualizing super-computation on board a UAS, as a method to ease operations by facilitating the downlink of high-level information products instead of raw data. Exploiting recent developments in miniaturized multi-core devices is the way to speed up on-board computation. This hardware must satisfy size, power and weight constraints. Several technologies are appearing with promising results for high-performance computing on unmanned platforms, such as the 36 cores of the TILE-Gx36 by Tilera (now EZchip) or the 64 cores of the Epiphany-IV by Adapteva. The strategy for virtualizing super-computation on board includes benchmarking for hardware selection, the software architecture and communications-aware design. A parallelization strategy is given for the 36-core TILE-Gx36 for a UAS in a fire mission or similar target-detection applications. Results are obtained for payload image processing algorithms and determine in real time the data snapshot to gather and transfer to the ground according to the needs of the mission, the processing time, and the power consumed.

  42. Detection of circuit-board components with an adaptive multiclass correlation filter

    NASA Astrophysics Data System (ADS)

    Diaz-Ramirez, Victor H.; Kober, Vitaly

    2008-08-01

    A new method for reliable detection of circuit-board components is proposed. The method is based on an adaptive multiclass composite correlation filter. The filter is designed with the help of an iterative algorithm using complex synthetic discriminant functions. The impulse response of the filter contains information needed to localize and classify geometrically distorted circuit-board components belonging to different classes. Computer simulation results obtained with the proposed method are provided and compared with those of known multiclass correlation based techniques in terms of performance criteria for recognition and classification of objects.

  43. Model predictive and reallocation problem for CubeSat fault recovery and attitude control

    NASA Astrophysics Data System (ADS)

    Franchi, Loris; Feruglio, Lorenzo; Mozzillo, Raffaele; Corpino, Sabrina

    2018-01-01

    In recent years, thanks to increased know-how in machine-learning techniques and advances in the computational capabilities of on-board processing, computationally expensive algorithms, such as Model Predictive Control, have begun to spread to space applications, even on small on-board processors. The paper presents an algorithm for optimal fault recovery of a 3U CubeSat, developed in the MathWorks Matlab & Simulink environment. This algorithm involves optimization techniques aimed at obtaining the optimal recovery solution, together with a Model Predictive Control approach for attitude control. The simulated system is a CubeSat in Low Earth Orbit whose attitude control is performed with three magnetic torquers and a single reaction wheel. The simulation neglects errors in the attitude determination of the satellite and focuses on the recovery approach and control method. The optimal recovery approach takes advantage of the properties of magnetic actuation, which allows the control action to be redistributed when a fault occurs on a single magnetic torquer, even in the absence of redundant actuators. In addition, the paper presents the results of implementing the Model Predictive approach to control the attitude of the satellite.
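
    A minimal sketch of the reallocation idea: the magnetic control torque is tau = m × B, so dropping a failed torquer's column from the allocation matrix and re-solving in the least-squares sense redistributes the command over the healthy axes; the field value and fault case below are made-up illustrations:

      import numpy as np

      def skew(v):
          """Cross-product matrix [v]x, so that skew(v) @ w == np.cross(v, w)."""
          return np.array([[0.0, -v[2], v[1]],
                           [v[2], 0.0, -v[0]],
                           [-v[1], v[0], 0.0]])

      def reallocate_dipole(tau_des, B, healthy):
          """Re-solve magnetorquer dipole moments after a fault.
          tau = m x B = -[B]x m, so remove the failed column and take the
          least-squares solution (torque along B remains unreachable)."""
          A = -skew(B)                  # 3x3 allocation matrix, tau = A @ m
          m_healthy, *_ = np.linalg.lstsq(A[:, healthy], tau_des, rcond=None)
          m = np.zeros(3)
          m[healthy] = m_healthy
          return m

      B = np.array([0.0, 2e-5, 4e-5])           # local field, tesla
      tau = np.array([1e-6, 0.0, 0.0])          # desired torque, N*m
      print(reallocate_dipole(tau, B, healthy=[0, 2]))   # torquer 1 failed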

  44. 40 CFR 86.1809-12 - Prohibition of defeat devices.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... programs, engineering evaluations, design specifications, calibrations, on-board computer algorithms, and... manufacturer must submit, with the Part II certification application, an engineering evaluation demonstrating... vehicles, the engineering evaluation must also include particulate emissions. [75 FR 25685, May 7, 2010] ...

  45. A precise goniometer/tensiometer using a low cost single-board computer

    NASA Astrophysics Data System (ADS)

    Favier, Benoit; Chamakos, Nikolaos T.; Papathanasiou, Athanasios G.

    2017-12-01

    Measuring the surface tension and the Young contact angle of a droplet is extremely important for many industrial applications. Here, considering the booming interest in small and cheap but precise experimental instruments, we have constructed a low-cost contact angle goniometer/tensiometer based on a single-board computer (Raspberry Pi). The device runs an axisymmetric drop shape analysis (ADSA) algorithm written in Python. The code, here named DropToolKit, was developed in-house. We initially present the mathematical framework of our algorithm and then validate our software tool against other well-established ADSA packages, including the commercial ramé-hart DROPimage Advanced as well as the DropAnalysis plugin in ImageJ. After successful tests with various combinations of liquids and solid surfaces, we concluded that our prototype device, compared with commercial solutions, would be highly beneficial for industrial applications as well as for scientific research in wetting phenomena.

  46. A Streaming Language Implementation of the Discontinuous Galerkin Method

    NASA Technical Reports Server (NTRS)

    Barth, Timothy; Knight, Timothy

    2005-01-01

    We present a Brook streaming language implementation of the 3-D discontinuous Galerkin method for compressible fluid flow on tetrahedral meshes. Efficient implementation of the discontinuous Galerkin method using the streaming model of computation introduces several algorithmic design challenges. Using a cycle-accurate simulator, performance characteristics have been obtained for the Stanford Merrimac stream processor. The current Merrimac design achieves 128 Gflops per chip and the desktop board is populated with 16 chips yielding a peak performance of 2 Teraflops. Total parts cost for the desktop board is less than $20K. Current cycle-accurate simulations for discretizations of the 3-D compressible flow equations yield approximately 40-50% of the peak performance of the Merrimac streaming processor chip. Ongoing work includes the assessment of the performance of the same algorithm on the 2 Teraflop desktop board with a target goal of achieving 1 Teraflop performance.

  47. F-8C adaptive control law refinement and software development

    NASA Technical Reports Server (NTRS)

    Hartmann, G. L.; Stein, G.

    1981-01-01

    An explicit adaptive control algorithm based on maximum likelihood estimation of parameters was designed. To avoid iterative calculations, the algorithm uses parallel channels of Kalman filters operating at fixed locations in parameter space. This algorithm was implemented in NASA/DFRC's Remotely Augmented Vehicle (RAV) facility. Real-time sensor outputs (rate gyro, accelerometer, surface position) are telemetered to a ground computer which sends new gain values to an on-board system. Ground test data and flight records were used to establish design values of noise statistics and to verify the ground-based adaptive software.
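
    A toy scalar sketch of the parallel-channel structure described: a bank of Kalman filters fixed at different parameter hypotheses, each scored by its innovation likelihood, with the parameter estimate taken as the likelihood-weighted combination; the F-8C state dimensions, gain schedules and RAV interfaces are not reproduced:

      import numpy as np

      def mmae_step(filters, z):
          """One update of a parallel bank of scalar Kalman filters, each a
          dict with state x, variance P, model parameter a, noise Q and R.
          Returns the likelihood-weighted estimate of the parameter a."""
          weights = []
          for f in filters:
              f["x"] = f["a"] * f["x"]                 # time update
              f["P"] = f["a"] ** 2 * f["P"] + f["Q"]
              nu = z - f["x"]                          # innovation
              S = f["P"] + f["R"]
              weights.append(np.exp(-0.5 * nu * nu / S) / np.sqrt(2 * np.pi * S))
              K = f["P"] / S                           # measurement update
              f["x"] += K * nu
              f["P"] *= 1.0 - K
          w = np.array(weights)
          w /= w.sum()
          return sum(wi * f["a"] for wi, f in zip(w, filters))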

  48. An Enhanced MWR-Based Wet Tropospheric Correction for Sentinel-3: Inheritance from Past ESA Altimetry Missions

    NASA Astrophysics Data System (ADS)

    Lazaro, Clara; Fernandes, Joanna M.

    2015-12-01

    The GNSS-derived Path Delay (GPD) and the Data Combination (DComb) algorithms were developed by the University of Porto (U.Porto), in the scope of different projects funded by ESA, to compute a continuous and improved wet tropospheric correction (WTC) for use in satellite altimetry. Both algorithms are mission independent and are based on a linear space-time objective analysis procedure that combines various wet path delay data sources. A new algorithm that takes the best of each aforementioned algorithm (GNSS-derived Path Delay Plus, GPD+) has been developed at U.Porto in the scope of the SL_cci project, where the use of consistent and temporally stable datasets is of major importance. The algorithm has been applied to the main eight altimetric missions (TOPEX/Poseidon, Jason-1, Jason-2, ERS-1, ERS-2, Envisat, CryoSat-2 and SARAL). The upcoming Sentinel-3 possesses a two-channel on-board radiometer similar to those deployed on ERS-1/2 and Envisat. Consequently, fine-tuning the GPD+ algorithm to these missions' datasets will enrich it by increasing its capability to quickly deal with Sentinel-3 data. Foreseeing that the computation of an improved MWR-based WTC for use with Sentinel-3 data will be required, this study focuses on the results obtained for the ERS-1/2 and Envisat missions, which are expected to give insight into the computation of this correction for the upcoming ESA altimetric mission. The various WTC corrections available for each mission (in general, the original correction derived from the on-board MWR, the model correction and the one derived from GPD+) are inter-compared either directly or through various sea level anomaly variance statistical analyses. Results show that the GPD+ algorithm is efficient in generating global and continuous datasets, corrected for land and ice contamination and for spurious measurements of instrumental origin, with significant impacts on all ESA missions.

  49. 40 CFR 86.1809-10 - Prohibition of defeat devices.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... programs, engineering evaluations, design specifications, calibrations, on-board computer algorithms, and..., with the Part II certification application, an engineering evaluation demonstrating to the satisfaction... not occur in the temperature range of 20 to 86 °F. For diesel vehicles, the engineering evaluation...

  50. Computing Optic Flow with ArduEye Vision Sensor

    DTIC Science & Technology

    2013-01-01

    This report describes an optical flow processing algorithm for the ArduEye vision sensor, a vision-based approach that can be applied to the flight control of other robotic platforms. The hardware consists of an ArduEye vision chip on a Stonyman breakout board connected to an Arduino Mega. The work is motivated by the significant need for small, light, less power-hungry sensors and sensory data processing algorithms to control such platforms. Subject terms: optical flow, ArduEye, vision based …

  51. Real-time implementation of optimized maximum noise fraction transform for feature extraction of hyperspectral images

    NASA Astrophysics Data System (ADS)

    Wu, Yuanfeng; Gao, Lianru; Zhang, Bing; Zhao, Haina; Li, Jun

    2014-01-01

    We present a parallel implementation of the optimized maximum noise fraction (G-OMNF) transform algorithm for feature extraction from hyperspectral images on commodity graphics processing units (GPUs). The proposed approach exploits the algorithm's data-level concurrency and optimizes the computing flow. We first define a three-dimensional grid in which each thread calculates a sub-block of data, to easily facilitate the spatial and spectral neighborhood data searches in noise estimation, one of the most important steps in OMNF. We then optimize the processing flow by computing the noise covariance matrix before the image covariance matrix, reducing the original hyperspectral image data transmission. These optimization strategies can greatly improve computing efficiency and can be applied to other feature extraction algorithms. The parallel feature extraction algorithm was implemented on an Nvidia Tesla GPU using the compute unified device architecture and the basic linear algebra subroutines library. In experiments on several real hyperspectral images, our GPU implementation provides a significant speedup over the CPU implementation, especially for highly data-parallel and arithmetically intensive algorithm parts such as noise estimation. To further evaluate the effectiveness of G-OMNF, we used two different applications, spectral unmixing and classification, for evaluation. Considering the sensor scanning rate and the data acquisition time, the proposed parallel implementation met the on-board real-time feature extraction requirement.
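
    For reference, a compact CPU-side sketch of the computation being parallelized; noise is estimated from horizontal neighbor differences (a common simple estimator that may differ from the paper's), and the transform solves a generalized eigenproblem between the noise and data covariances:

      import numpy as np
      from scipy.linalg import eigh

      def mnf(cube, n_components=10):
          """Maximum noise fraction transform sketch for a hyperspectral
          cube of shape (rows, cols, bands); assumes the data covariance
          is positive definite. Solves Sigma_noise v = lambda Sigma_data v;
          small eigenvalues give the most coherent components."""
          rows, cols, bands = cube.shape
          X = cube.reshape(-1, bands).astype(float)
          X -= X.mean(axis=0)
          diff = (cube[:, 1:, :] - cube[:, :-1, :]).reshape(-1, bands)
          Sn = np.cov(diff / np.sqrt(2.0), rowvar=False)   # noise covariance
          Sd = np.cov(X, rowvar=False)                     # data covariance
          evals, V = eigh(Sn, Sd)          # ascending noise fraction
          return X @ V[:, :n_components]   # most coherent components first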

  12. Essential Autonomous Science Inference on Rovers (EASIR)

    NASA Technical Reports Server (NTRS)

    Roush, Ted L.; Shipman, Mark; Morris, Robert; Gazis, Paul; Pedersen, Liam

    2003-01-01

    Existing constraints on the time, computational, and communication resources of Mars rover missions suggest that on-board science evaluation of sensor data can help decrease human-directed operational planning, optimize returned science data volumes, and recognize unique or novel data, all of which act to increase the scientific return from a mission. Many different levels of science autonomy exist, and each impacts the data collected and returned by, and the activities of, rovers. Several computational algorithms, designed to recognize objects of interest to geologists and biologists, are discussed. The algorithms implement various functions that produce scientific opinions, and several scenarios illustrate how these opinions can be used.

  13. Optimized feature-detection for on-board vision-based surveillance

    NASA Astrophysics Data System (ADS)

    Gond, Laetitia; Monnin, David; Schneider, Armin

    2012-06-01

    The detection and matching of robust features in images is an important step in many computer vision applications. In this paper, the importance of keypoint detection algorithms and their inherent parameters is studied in the particular context of an image-based change detection system for IED detection. Through extensive application-oriented experiments, we evaluate and compare the most popular feature detectors proposed by the computer vision community. We analyze how to automatically adjust these algorithms to changing imaging conditions and suggest improvements in order to achieve more flexibility and robustness in their practical implementation.

  14. A study of autonomous satellite navigation methods using the global positioning satellite system

    NASA Technical Reports Server (NTRS)

    Tapley, B. D.

    1980-01-01

    Special orbit determination algorithms were developed to accommodate the size and speed limitations of on-board computer systems of the NAVSTAR Global Positioning System. The algorithms use square root sequential filtering methods. A new method for the time update of the square root covariance matrix was also developed. In addition, the time update method was compared with another square root covariance propagation method to determine relative performance characteristics. Comparisons were based on the results of computer simulations of the LANDSAT-D satellite processing pseudo range and pseudo range-rate measurements from the phase one GPS. A summary of the comparison results is presented.
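
    A covariance time update of this kind can be written compactly with a QR factorization. The sketch below is a generic square-root propagation step under assumed names, not necessarily the specific method developed in the study.

        import numpy as np

        def sqrt_time_update(S, Phi, Q_sqrt):
            """One square-root covariance time update: given P = S S^T for
            x_{k+1} = Phi x_k + w with Cov(w) = Q_sqrt Q_sqrt^T, return a
            square root of Phi P Phi^T + Q without ever forming P itself."""
            A = np.hstack([Phi @ S, Q_sqrt]).T   # stack the two square roots
            _, R = np.linalg.qr(A)               # A^T A = R^T R
            return R.T                           # triangular square root

    Propagating the square root rather than the covariance roughly doubles the effective numerical precision, which is why such methods suit short-word-length on-board machines.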

  15. Computer-aided US diagnosis of breast lesions by using cell-based contour grouping.

    PubMed

    Cheng, Jie-Zhi; Chou, Yi-Hong; Huang, Chiun-Sheng; Chang, Yeun-Chung; Tiu, Chui-Mei; Chen, Kuei-Wu; Chen, Chung-Ming

    2010-06-01

    To develop a computer-aided diagnostic algorithm with automatic boundary delineation for differential diagnosis of benign and malignant breast lesions at ultrasonography (US) and investigate the effect of boundary quality on the performance of a computer-aided diagnostic algorithm. This was an institutional review board-approved retrospective study with waiver of informed consent. A cell-based contour grouping (CBCG) segmentation algorithm was used to delineate the lesion boundaries automatically. Seven morphologic features were extracted. The classifier was a logistic regression function. Five hundred twenty breast US scans were obtained from 520 subjects (age range, 15-89 years), including 275 benign (mean size, 15 mm; range, 5-35 mm) and 245 malignant (mean size, 18 mm; range, 8-29 mm) lesions. The newly developed computer-aided diagnostic algorithm was evaluated on the basis of boundary quality and differentiation performance. The segmentation algorithms and features in two conventional computer-aided diagnostic algorithms were used for comparative study. The CBCG-generated boundaries were shown to be comparable with the manually delineated boundaries. The area under the receiver operating characteristic curve (AUC) and differentiation accuracy were 0.968 +/- 0.010 and 93.1% +/- 0.7, respectively, for all 520 breast lesions. At the 5% significance level, the newly developed algorithm was shown to be superior to the use of the boundaries and features of the two conventional computer-aided diagnostic algorithms in terms of AUC (0.974 +/- 0.007 versus 0.890 +/- 0.008 and 0.788 +/- 0.024, respectively). The newly developed computer-aided diagnostic algorithm that used a CBCG segmentation method to measure boundaries achieved a high differentiation performance. Copyright RSNA, 2010

  16. Concepts and algorithms for terminal-area traffic management

    NASA Technical Reports Server (NTRS)

    Erzberger, H.; Chapel, J. D.

    1984-01-01

    The nation's air-traffic-control system is the subject of an extensive modernization program, including the planned introduction of advanced automation techniques. This paper gives an overview of a concept for automating terminal-area traffic management. Four-dimensional (4D) guidance techniques, which play an essential role in the automated system, are reviewed. One technique, intended for on-board computer implementation, is based on application of optimal control theory. The second technique is a simplified approach to 4D guidance intended for ground computer implementation. It generates advisory messages to help the controller maintain scheduled landing times of aircraft not equipped with on-board 4D guidance systems. An operational system for the second technique, recently evaluated in a simulation, is also described.

  17. Flight data processing with the F-8 adaptive algorithm

    NASA Technical Reports Server (NTRS)

    Hartmann, G.; Stein, G.; Petersen, K.

    1977-01-01

    An explicit adaptive control algorithm based on maximum likelihood estimation of parameters has been designed for NASA's DFBW F-8 aircraft. To avoid iterative calculations, the algorithm uses parallel channels of Kalman filters operating at fixed locations in parameter space. This algorithm has been implemented in NASA/DFRC's Remotely Augmented Vehicle (RAV) facility. Real-time sensor outputs (rate gyro, accelerometer and surface position) are telemetered to a ground computer which sends new gain values to an on-board system. Ground test data and flight records were used to establish design values of noise statistics and to verify the ground-based adaptive software. The software and its performance evaluation based on flight data are described.

  18. System and Method for an Integrated Satellite Platform

    NASA Technical Reports Server (NTRS)

    Starin, Scott R. (Inventor); Sheikh, Salman I. (Inventor); Hesse, Michael (Inventor); Clagett, Charles E. (Inventor); Santos Soto, Luis H. (Inventor); Hesh, Scott V. (Inventor); Paschalidis, Nikolaos (Inventor); Ericsson, Aprille J. (Inventor); Johnson, Michael A. (Inventor)

    2018-01-01

    A system, method, and computer-readable storage devices for a 6U CubeSat with a magnetometer boom. The example 6U CubeSat can include an on-board computing device connected to an electrical power system, wherein the electrical power system receives power from at least one of a battery and at least one solar panel, a first fluxgate sensor attached to an extendable boom, a release mechanism for extending the extendable boom, at least one second fluxgate sensor fixed within the satellite, an ion neutral mass spectrometer, and a relativistic electron/proton telescope. The on-board computing device can receive data from the first fluxgate sensor, the at least one second fluxgate sensor, the ion neutral mass spectrometer, and the relativistic electron/proton telescope via the bus, and can then process the data via an algorithm to deduce a geophysical signal.

  19. CometBoards Users Manual Release 1.0

    NASA Technical Reports Server (NTRS)

    Guptill, James D.; Coroneos, Rula M.; Patnaik, Surya N.; Hopkins, Dale A.; Berke, Lazlo

    1996-01-01

    Several nonlinear mathematical programming algorithms for structural design applications are available at present. These include the sequence of unconstrained minimizations technique, the method of feasible directions, and the sequential quadratic programming technique. The optimality criteria technique and the fully utilized design concept are two other structural design methods. A project was undertaken to bring all these design methods under a common computer environment so that a designer can select any one of these tools that may be suitable for his/her application. To facilitate selection of a design algorithm, to validate and check out the computer code, and to ascertain the relative merits of the design tools, modest finite element structural analysis programs based on the concept of stiffness and integrated force methods have been coupled to each design method. The code that contains both these design and analysis tools, by reading input information from analysis and design data files, can cast the design of a structure as a minimum-weight optimization problem. The code can then solve it with a user-specified optimization technique and a user-specified analysis method. This design code is called CometBoards, which is an acronym for Comparative Evaluation Test Bed of Optimization and Analysis Routines for the Design of Structures. This manual describes for the user a step-by-step procedure for setting up the input data files and executing CometBoards to solve a structural design problem. The manual includes the organization of CometBoards; instructions for preparing input data files; the procedure for submitting a problem; illustrative examples; and several demonstration problems. A set of 29 structural design problems have been solved by using all the optimization methods available in CometBoards. A summary of the optimum results obtained for these problems is appended to this users manual. CometBoards, at present, is available for Posix-based Cray and Convex computers, Iris and Sun workstations, and the VM/CMS system.

  20. High-performance floating-point image computing workstation for medical applications

    NASA Astrophysics Data System (ADS)

    Mills, Karl S.; Wong, Gilman K.; Kim, Yongmin

    1990-07-01

    The medical imaging field relies increasingly on imaging and graphics techniques in diverse applications with needs similar to (or more stringent than) those of the military, industrial and scientific communities. However, most image processing and graphics systems available for use in medical imaging today are either expensive, specialized, or in most cases both. High performance imaging and graphics workstations which can provide real-time results for a number of applications, while maintaining affordability and flexibility, can facilitate the application of digital image computing techniques in many different areas. This paper describes the hardware and software architecture of a medium-cost floating-point image processing and display subsystem for the NeXT computer, and its applications as a medical imaging workstation. Medical imaging applications of the workstation include use in a Picture Archiving and Communications System (PACS), in multimodal image processing and 3-D graphics workstation for a broad range of imaging modalities, and as an electronic alternator utilizing its multiple monitor display capability and large and fast frame buffer. The subsystem provides a 2048 x 2048 x 32-bit frame buffer (16 Mbytes of image storage) and supports both 8-bit gray scale and 32-bit true color images. When used to display 8-bit gray scale images, up to four different 256-color palettes may be used for each of four 2K x 2K x 8-bit image frames. Three of these image frames can be used simultaneously to provide pixel selectable region of interest display. A 1280 x 1024 pixel screen with 1:1 aspect ratio can be windowed into the frame buffer for display of any portion of the processed image or images. In addition, the system provides hardware support for integer zoom and an 82-color cursor. This subsystem is implemented on an add-in board occupying a single slot in the NeXT computer. Up to three boards may be added to the NeXT for multiple display capability (e.g., three 1280 x 1024 monitors, each with a 16-Mbyte frame buffer). Each add-in board provides an expansion connector to which an optional image computing coprocessor board may be added. Each coprocessor board supports up to four processors for a peak performance of 160 MFLOPS. The coprocessors can execute programs from external high-speed microcode memory as well as built-in internal microcode routines. The internal microcode routines provide support for 2-D and 3-D graphics operations, matrix and vector arithmetic, and image processing in integer, IEEE single-precision floating point, or IEEE double-precision floating point. In addition to providing a library of C functions which links the NeXT computer to the add-in board and supports its various operational modes, algorithms and medical imaging application programs are being developed and implemented for image display and enhancement. As an extension to the built-in algorithms of the coprocessors, 2-D Fast Fourier Transform (FFT), 2-D inverse FFT, convolution, warping and other algorithms (e.g., Discrete Cosine Transform) which exploit the parallel architecture of the coprocessor board are being implemented.

  1. ROI-Based On-Board Compression for Hyperspectral Remote Sensing Images on GPU.

    PubMed

    Giordano, Rossella; Guccione, Pietro

    2017-05-19

    In recent years, hyperspectral sensors for Earth remote sensing have become very popular. Such systems are able to provide the user with images having both spectral and spatial information. The current hyperspectral spaceborne sensors are able to capture large areas with increased spatial and spectral resolution. For this reason, the volume of acquired data needs to be reduced on board in order to avoid a low orbital duty cycle due to limited storage space. Recently, the literature has focused attention on efficient ways of performing on-board data compression. This topic is a challenging task due to the difficult environment (outer space) and the limited time, power and computing resources. Often, the hardware properties of Graphic Processing Units (GPU) have been adopted to reduce the processing time using parallel computing. The current work proposes a framework for on-board operation on a GPU, using NVIDIA's CUDA (Compute Unified Device Architecture) architecture. The algorithm aims at performing on-board compression using a target-related strategy. In detail, the main operations are: the automatic recognition of land cover types or detection of events in near real time in regions of interest (a user-related choice) with an unsupervised classifier; the compression of specific regions with space-variant bit rates, using Principal Component Analysis (PCA), wavelets and arithmetic coding; and data volume management to the Ground Station. Experiments are provided using a real dataset taken from an AVIRIS (Airborne Visible/Infrared Imaging Spectrometer) airborne sensor in a harbor area.

  2. Pre-Hardware Optimization of Spacecraft Image Processing Algorithms and Hardware Implementation

    NASA Technical Reports Server (NTRS)

    Kizhner, Semion; Petrick, David J.; Flatley, Thomas P.; Hestnes, Phyllis; Jentoft-Nilsen, Marit; Day, John H. (Technical Monitor)

    2002-01-01

    Spacecraft telemetry rates and telemetry product complexity have steadily increased over the last decade presenting a problem for real-time processing by ground facilities. This paper proposes a solution to a related problem for the Geostationary Operational Environmental Spacecraft (GOES-8) image data processing and color picture generation application. Although large super-computer facilities are the obvious heritage solution, they are very costly, making it imperative to seek a feasible alternative engineering solution at a fraction of the cost. The proposed solution is based on a Personal Computer (PC) platform and synergy of optimized software algorithms, and reconfigurable computing hardware (RC) technologies, such as Field Programmable Gate Arrays (FPGA) and Digital Signal Processors (DSP). It has been shown that this approach can provide superior inexpensive performance for a chosen application on the ground station or on-board a spacecraft.

  3. Optimal boarding method for airline passengers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steffen, Jason H.; /Fermilab

    2008-02-01

    Using a Markov Chain Monte Carlo optimization algorithm and a computer simulation, I find the passenger ordering which minimizes the time required to board the passengers onto an airplane. The model that I employ assumes that the time that a passenger requires to load his or her luggage is the dominant contribution to the time needed to completely fill the aircraft. The optimal boarding strategy may reduce the time required to board an airplane by over a factor of four and possibly more, depending upon the dimensions of the aircraft. I explore some features of the optimal boarding method and discuss practical modifications to the optimum. Finally, I mention some of the benefits that could come from implementing an improved passenger boarding scheme.
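
    The optimization itself fits in a short script. The sketch below pairs a toy aisle-blocking simulation with a simulated-annealing (Markov chain) search over passenger orderings; the cabin geometry, stow time and cooling schedule are invented for illustration and are much simpler than the model in the report.

        import math
        import random

        ROWS, PER_ROW, LUGGAGE_T = 10, 4, 8   # assumed cabin and stow time

        def boarding_time(order):
            """Simulate boarding; order[i] is passenger i's row (0 = front).
            One aisle cell per row: a passenger blocks the aisle at his or
            her row while stowing and can never pass the passenger ahead."""
            n = len(order)
            pos = [-1] * n                     # -1 outside, -2 seated
            stow = [LUGGAGE_T] * n
            t = 0
            while any(p != -2 for p in pos):
                t += 1
                occupied = {p for p in pos if p >= 0}
                for i in range(n):
                    if pos[i] == -2:
                        continue
                    if pos[i] == order[i]:     # at own row: stow luggage
                        stow[i] -= 1
                        if stow[i] == 0:
                            occupied.discard(pos[i])
                            pos[i] = -2        # sit down, aisle clears
                    elif pos[i] + 1 not in occupied:
                        occupied.discard(pos[i])
                        pos[i] += 1            # advance one row
                        occupied.add(pos[i])
            return t

        def anneal(order, steps=500, t0=5.0):
            """Markov chain (simulated annealing) search over orderings."""
            cur = best = boarding_time(order)
            best_order = order[:]
            for k in range(steps):
                temp = max(1e-3, t0 * (1 - k / steps))
                cand = order[:]
                i, j = random.sample(range(len(cand)), 2)
                cand[i], cand[j] = cand[j], cand[i]
                c = boarding_time(cand)
                if c < cur or random.random() < math.exp((cur - c) / temp):
                    order, cur = cand, c
                    if c < best:
                        best, best_order = c, cand[:]
            return best_order, best

        start = [r for r in range(ROWS) for _ in range(PER_ROW)]  # front-to-back
        print(anneal(start))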

  4. An Algorithm for Pedestrian Detection in Multispectral Image Sequences

    NASA Astrophysics Data System (ADS)

    Kniaz, V. V.; Fedorenko, V. V.

    2017-05-01

    The growing interest in self-driving cars creates demand for scene understanding and obstacle detection algorithms. One of the most challenging problems in this field is pedestrian detection. The main difficulties arise from the diverse appearance of pedestrians. Poor visibility conditions such as fog and low light also significantly decrease the quality of pedestrian detection. This paper presents a new optical-flow-based algorithm, BipedDetect, that provides robust pedestrian detection on a single-board computer. The algorithm is based on the idea of simplified Kalman filtering suitable for realization on modern single-board computers. To detect a pedestrian, a synthetic optical flow of the scene without pedestrians is generated using a slanted-plane model. The estimate of the real optical flow is generated using a multispectral image sequence. The difference between the synthetic and the real optical flow yields the optical flow induced by pedestrians. The final detection of pedestrians is done by segmenting this difference. To evaluate the BipedDetect algorithm, a multispectral dataset was collected using a mobile robot.
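
    The detection step reduces to a residual-flow threshold. The fragment below is a minimal sketch of that idea, assuming precomputed flow fields; the threshold value is invented, and the slanted-plane synthesis and Kalman filtering of BipedDetect are not reproduced.

        import numpy as np

        def detect_moving_regions(flow_real, flow_synth, thresh=1.5):
            """flow_* : (H, W, 2) arrays of per-pixel (u, v) flow in pixels.
            Returns a boolean mask of pixels whose residual flow magnitude
            exceeds thresh, i.e. motion not explained by the static scene."""
            residual = flow_real - flow_synth
            mag = np.linalg.norm(residual, axis=-1)
            return mag > thresh

    Connected regions of the returned mask would then be grouped into pedestrian candidates.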

  5. The SpaceCube Family of Hybrid On-Board Science Data Processors: An Update

    NASA Astrophysics Data System (ADS)

    Flatley, T.

    2012-12-01

    SpaceCube is an FPGA based on-board hybrid science data processing system developed at the NASA Goddard Space Flight Center (GSFC). The goal of the SpaceCube program is to provide 10x to 100x improvements in on-board computing power while lowering relative power consumption and cost. The SpaceCube design strategy incorporates commercial rad-tolerant FPGA technology and couples it with an upset mitigation software architecture to provide "order of magnitude" improvements in computing power over traditional rad-hard flight systems. Many of the missions proposed in the Earth Science Decadal Survey (ESDS) will require "next generation" on-board processing capabilities to meet their specified mission goals. Advanced laser altimeter, radar, lidar and hyper-spectral instruments are proposed for at least ten of the ESDS missions, and all of these instrument systems will require advanced on-board processing capabilities to facilitate the timely conversion of Earth Science data into Earth Science information. Both an "order of magnitude" increase in processing power and the ability to "reconfigure on the fly" are required to implement algorithms that detect and react to events, to produce data products on-board for applications such as direct downlink, quick look, and "first responder" real-time awareness, to enable "sensor web" multi-platform collaboration, and to perform on-board "lossless" data reduction by migrating typical ground-based processing functions on-board, thus reducing on-board storage and downlink requirements. This presentation will highlight a number of SpaceCube technology developments to date and describe current and future efforts, including the collaboration with the U.S. Department of Defense - Space Test Program (DoD/STP) on the STP-H4 ISS experiment pallet (launch June 2013) that will demonstrate SpaceCube 2.0 technology on-orbit.

  6. Pre-Hardware Optimization and Implementation Of Fast Optics Closed Control Loop Algorithms

    NASA Technical Reports Server (NTRS)

    Kizhner, Semion; Lyon, Richard G.; Herman, Jay R.; Abuhassan, Nader

    2004-01-01

    One of the main heritage tools used in scientific and engineering data spectrum analysis is the Fourier Integral Transform and its high performance digital equivalent - the Fast Fourier Transform (FFT). The FFT is particularly useful in two-dimensional (2-D) image processing (FFT2) within optical systems control. However, timing constraints of a fast optics closed control loop would require a supercomputer to run the software implementation of the FFT2 and its inverse, as well as other representative image processing algorithms, such as numerical image folding and fringe feature extraction. A laboratory supercomputer is not always available even for ground operations and is not feasible for a flight project. However, the computationally intensive algorithms still warrant alternative implementation using reconfigurable computing (RC) technologies such as Digital Signal Processors (DSP) and Field Programmable Gate Arrays (FPGA), which provide low cost compact super-computing capabilities. We present a new RC hardware implementation and utilization architecture that significantly reduces the computational complexity of a few basic image-processing algorithms, such as FFT2, image folding and phase diversity, for the NASA Solar Viewing Interferometer Prototype (SVIP), using a cluster of DSPs and FPGAs. The DSP cluster utilization architecture also assures avoidance of a single point of failure, while using commercially available hardware. This, combined with pre-hardware optimization of the control algorithms, for the first time allows construction of image-based 800 Hertz (Hz) optics closed control loops on board a spacecraft, based on the SVIP ground instrument. That spacecraft is the proposed Earth Atmosphere Solar Occultation Imager (EASI) to study the greenhouse gases CO2, C2H, H2O, O3, O2 and N2O from the Lagrange-2 point in space. This paper provides insight into a new type of science capability for future space exploration missions based on on-board image processing for control, and for robotics missions using vision sensors. It presents a top-level description of the technologies required for the design and construction of SVIP and EASI and to advance spatial-spectral imaging and large-scale space interferometry science and engineering.

  7. Apollo LM guidance computer software for the final lunar descent.

    NASA Technical Reports Server (NTRS)

    Eyles, D.

    1973-01-01

    In all manned lunar landings to date, the lunar module Commander has taken partial manual control of the spacecraft during the final stage of the descent, below roughly 500 ft altitude. This report describes programs developed at the Charles Stark Draper Laboratory, MIT, for use in the LM's guidance computer during the final descent. At this time computational demands on the on-board computer are at a maximum, and particularly close interaction with the crew is necessary. The emphasis is on the design of the computer software rather than on justification of the particular guidance algorithms employed. After the computer and the mission have been introduced, the current configuration of the final landing programs and an advanced version developed experimentally by the author are described.

  8. An on-board near-optimal climb-dash energy management

    NASA Technical Reports Server (NTRS)

    Weston, A. R.; Cliff, E. M.; Kelley, H. J.

    1982-01-01

    On-board real time flight control is studied in order to develop algorithms which are simple enough to be used in practice, for a variety of missions involving three dimensional flight. The intercept mission in symmetric flight is emphasized. Extensive computation is required on the ground prior to the mission but the ensuing on-board exploitation is extremely simple. The scheme takes advantage of the boundary layer structure common in singular perturbations, arising with the multiple time scales appropriate to aircraft dynamics. Energy modelling of aircraft is used as the starting point for the analysis. In the symmetric case, a nominal path is generated which fairs into the dash or cruise state. Feedback coefficients are found as functions of the remaining energy to go (dash energy less current energy) along the nominal path.

  9. Optimization of image processing algorithms on mobile platforms

    NASA Astrophysics Data System (ADS)

    Poudel, Pramod; Shirvaikar, Mukul

    2011-03-01

    This work presents a technique to optimize popular image processing algorithms on mobile platforms such as cell phones, net-books and personal digital assistants (PDAs). The increasing demand for video applications like context-aware computing on mobile embedded systems requires the use of computationally intensive image processing algorithms. The system engineer has a mandate to optimize them so as to meet real-time deadlines. A methodology to take advantage of the asymmetric dual-core processor, which includes an ARM and a DSP core supported by shared memory, is presented with implementation details. The target platform chosen is the popular OMAP 3530 processor for embedded media systems. It has an asymmetric dual-core architecture with an ARM Cortex-A8 and a TMS320C64x Digital Signal Processor (DSP). The development platform was the BeagleBoard with 256 MB of NAND RAM and 256 MB SDRAM memory. The basic image correlation algorithm is chosen for benchmarking as it finds widespread application for various template matching tasks such as face-recognition. The basic algorithm prototypes conform to OpenCV, a popular computer vision library. OpenCV algorithms can be easily ported to the ARM core which runs a popular operating system such as Linux or Windows CE. However, the DSP is architecturally more efficient at handling DFT algorithms. The algorithms are tested on a variety of images and performance results are presented measuring the speedup obtained due to dual-core implementation. A major advantage of this approach is that it allows the ARM processor to perform important real-time tasks, while the DSP addresses performance-hungry algorithms.

  10. Application of square-root filtering for spacecraft attitude control

    NASA Technical Reports Server (NTRS)

    Sorensen, J. A.; Schmidt, S. F.; Goka, T.

    1978-01-01

    Suitable digital algorithms are developed and tested for providing on-board precision attitude estimation and pointing control for potential use in the Landsat-D spacecraft. These algorithms provide pointing accuracy of better than 0.01 deg. To obtain the necessary precision with efficient software, a six state-variable square-root Kalman filter combines two star tracker measurements to update attitude estimates obtained from processing three gyro outputs. The validity of the estimation and control algorithms is established, and the sensitivity of their performance to various error sources and software parameters is investigated by detailed digital simulation. Spacecraft computer memory, cycle time, and accuracy requirements are estimated.

  11. On-board ephemeris representation for Topex/Poseidon

    NASA Technical Reports Server (NTRS)

    Salama, Ahmed H.

    1990-01-01

    The Topex/Poseidon satellite requires real-time on-board knowledge of the satellite and TDRS ephemeris for attitude determination and control and High-Gain Antenna (HGA) pointing. The ephemeris representation concept for the MMS (Multimission Modular Spacecraft) satellites has shown that compressing the predicted ephemeris in a Fourier Power Series (FPS) before uplinking in conjunction with the On-Board Computer (OBC) ephemeris reconstruction algorithms is an efficient technique for ephemeris representation. As an MMS-based satellite, Topex/Poseidon has inherited the Landsat ephemeris representation concept including a daily FPS upload. This paper presents the Topex/Poseidon concept, analysis, and results including the conclusion that the ephemeris representation duration could be extended to 10 days or more and convenient weekly uploading is adopted without an increase in OBC memory requirements.
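
    The representation concept reduces to fitting and re-evaluating a truncated Fourier series. The sketch below separates the ground-side fit from the on-board reconstruction; the least-squares fit, the harmonic count and the coefficient layout are illustrative assumptions, not the OBC coefficient format.

        import numpy as np

        def fit_fps(t, x, n_harmonics, period):
            """Ground side: least-squares fit of a truncated Fourier series
            x(t) ~ a0 + sum_k (a_k cos(k w t) + b_k sin(k w t))."""
            w = 2.0 * np.pi / period
            cols = [np.ones_like(t)]
            for k in range(1, n_harmonics + 1):
                cols += [np.cos(k * w * t), np.sin(k * w * t)]
            A = np.stack(cols, axis=1)
            coeffs, *_ = np.linalg.lstsq(A, x, rcond=None)
            return coeffs                      # this is what gets uplinked

        def eval_fps(t, coeffs, period):
            """On-board side: reconstruct the ephemeris component at time t."""
            w = 2.0 * np.pi / period
            x = np.full_like(t, coeffs[0])
            for k in range(1, (len(coeffs) - 1) // 2 + 1):
                x += coeffs[2 * k - 1] * np.cos(k * w * t) \
                   + coeffs[2 * k] * np.sin(k * w * t)
            return x

    A ground pass would run fit_fps on each predicted ephemeris component and uplink only the coefficients; extending the validity of one fit from one day to ten is then purely a question of prediction accuracy, not of OBC memory.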

  12. Molecular computation: RNA solutions to chess problems

    PubMed Central

    Faulhammer, Dirk; Cukras, Anthony R.; Lipton, Richard J.; Landweber, Laura F.

    2000-01-01

    We have expanded the field of “DNA computers” to RNA and present a general approach for the solution of satisfiability problems. As an example, we consider a variant of the “Knight problem,” which asks generally what configurations of knights can one place on an n × n chess board such that no knight is attacking any other knight on the board. Using specific ribonuclease digestion to manipulate strands of a 10-bit binary RNA library, we developed a molecular algorithm and applied it to a 3 × 3 chessboard as a 9-bit instance of this problem. Here, the nine spaces on the board correspond to nine “bits” or placeholders in a combinatorial RNA library. We recovered a set of “winning” molecules that describe solutions to this problem. PMID:10677471
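
    On a conventional computer the same 9-bit instance is a one-screen enumeration, which makes a useful cross-check on the molecular result; the script below is such a brute-force count (the square indexing is an implementation choice).

        # Each of the 2^9 bit patterns encodes one placement of knights on
        # the 3x3 board; keep those in which no knight attacks another.
        KNIGHT_MOVES = {(1, 2), (2, 1), (-1, 2), (-2, 1),
                        (1, -2), (2, -1), (-1, -2), (-2, -1)}

        def attacks(a, b):
            (r1, c1), (r2, c2) = divmod(a, 3), divmod(b, 3)
            return (r2 - r1, c2 - c1) in KNIGHT_MOVES

        solutions = []
        for config in range(2 ** 9):              # one bit per square
            knights = [s for s in range(9) if config >> s & 1]
            if all(not attacks(a, b) for a in knights for b in knights if a != b):
                solutions.append(config)
        print(len(solutions), "non-attacking configurations")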

  13. Numerical Arc Segmentation Algorithm for a Radio Conference-NASARC (version 4.0) technical manual

    NASA Technical Reports Server (NTRS)

    Whyte, Wayne A., Jr.; Heyward, Ann O.; Ponchak, Denise S.; Spence, Rodney L.; Zuzek, John E.

    1988-01-01

    The information contained in the NASARC (Version 4.0) Technical Manual and NASARC (Version 4.0) User's Manual relates to the Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) software development through November 1, 1988. The Technical Manual describes the NASARC concept and the algorithms used to implement the concept. The User's Manual provides information on computer system considerations, installation instructions, description of input files, and program operation instructions. Significant revisions were incorporated in the Version 4.0 software over prior versions. These revisions have further enhanced the modeling capabilities of the NASARC procedure and provide improved arrangements of predetermined arcs within the geostationary orbits. Array dimensions within the software were structured to fit within the currently available 12 megabyte memory capacity of the International Frequency Registration Board (IFRB) computer facility. A piecewise approach to predetermined arc generation in NASARC (Version 4.0) allows worldwide planning problem scenarios to be accommodated within computer run time and memory constraints with enhanced likelihood and ease of solution.

  14. Numerical arc segmentation algorithm for a radio conference-NASARC (version 2.0) technical manual

    NASA Technical Reports Server (NTRS)

    Whyte, Wayne A., Jr.; Heyward, Ann O.; Ponchak, Denise S.; Spence, Rodney L.; Zuzek, John E.

    1987-01-01

    The information contained in the NASARC (Version 2.0) Technical Manual (NASA TM-100160) and NASARC (Version 2.0) User's Manual (NASA TM-100161) relates to the state of NASARC software development through October 16, 1987. The Technical Manual describes the Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) concept and the algorithms used to implement the concept. The User's Manual provides information on computer system considerations, installation instructions, description of input files, and program operating instructions. Significant revisions have been incorporated in the Version 2.0 software. These revisions have enhanced the modeling capabilities of the NASARC procedure while greatly reducing the computer run time and memory requirements. Array dimensions within the software have been structured to fit within the currently available 6-megabyte memory capacity of the International Frequency Registration Board (IFRB) computer facility. A piecewise approach to predetermined arc generation in NASARC (Version 2.0) allows worldwide scenarios to be accommodated within these memory constraints while at the same time effecting an overall reduction in computer run time.

  15. Numerical Arc Segmentation Algorithm for a Radio Conference-NASARC, Version 2.0: User's Manual

    NASA Technical Reports Server (NTRS)

    Whyte, Wayne A., Jr.; Heyward, Ann O.; Ponchak, Denise S.; Spence, Rodney L.; Zuzek, John E.

    1987-01-01

    The information contained in the NASARC (Version 2.0) Technical Manual (NASA TM-100160) and the NASARC (Version 2.0) User's Manual (NASA TM-100161) relates to the state of the Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) software development through October 16, 1987. The technical manual describes the NASARC concept and the algorithms which are used to implement it. The User's Manual provides information on computer system considerations, installation instructions, description of input files, and program operation instructions. Significant revisions have been incorporated in the Version 2.0 software over prior versions. These revisions have enhanced the modeling capabilities of the NASARC procedure while greatly reducing the computer run time and memory requirements. Array dimensions within the software have been structured to fit into the currently available 6-megabyte memory capacity of the International Frequency Registration Board (IFRB) computer facility. A piecewise approach to predetermined arc generation in NASARC (Version 2.0) allows worldwide scenarios to be accommodated within these memory constraints while at the same time reducing computer run time.

  16. Method of mobile robot indoor navigation by artificial landmarks with use of computer vision

    NASA Astrophysics Data System (ADS)

    Glibin, E. S.; Shevtsov, A. A.; Enik, O. A.

    2018-05-01

    The article describes an algorithm for mobile robot indoor navigation based on visual odometry. Experimental results identifying the errors in computed distance traveled under wheel slip are presented. It is shown that the use of computer vision allows one to correct erroneous coordinates of the robot with the help of artificial landmarks. A control system utilizing the proposed method has been realized on the basis of an Arduino Mega 2560 controller and a single-board computer Raspberry Pi 3. The results of an experiment on mobile robot navigation using this control system are presented.
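
    The correction idea can be sketched in a few lines: dead-reckoned coordinates drift under slip and are pulled toward a surveyed artificial landmark whenever the camera identifies one. The landmark table, the blend factor and the 2-D pose are assumptions for illustration, not the article's implementation.

        # Known landmark positions (id -> (x, y) in metres): an assumed
        # survey table; alpha sets how strongly a sighting overrides odometry.
        LANDMARKS = {7: (2.0, 3.5), 12: (8.0, 1.0)}

        def update_pose(pose, odom_delta, seen_id=None, alpha=0.9):
            """Dead-reckon with the odometry increment, then snap toward a
            recognized artificial landmark to cancel accumulated slip error."""
            x, y = pose[0] + odom_delta[0], pose[1] + odom_delta[1]
            if seen_id in LANDMARKS:
                lx, ly = LANDMARKS[seen_id]
                x = (1 - alpha) * x + alpha * lx
                y = (1 - alpha) * y + alpha * ly
            return (x, y)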

  17. Software algorithm and hardware design for real-time implementation of new spectral estimator

    PubMed Central

    2014-01-01

    Background Real-time spectral analyzers can be difficult to implement for PC computer-based systems because of the potential for high computational cost, and algorithm complexity. In this work a new spectral estimator (NSE) is developed for real-time analysis, and compared with the discrete Fourier transform (DFT). Method Clinical data in the form of 216 fractionated atrial electrogram sequences were used as inputs. The sample rate for acquisition was 977 Hz, or approximately 1 millisecond between digital samples. Real-time NSE power spectra were generated for 16,384 consecutive data points. The same data sequences were used for spectral calculation using a radix-2 implementation of the DFT. The NSE algorithm was also developed for implementation as a real-time spectral analyzer electronic circuit board. Results The average interval for a single real-time spectral calculation in software was 3.29 μs for NSE versus 504.5 μs for DFT. Thus for real-time spectral analysis, the NSE algorithm is approximately 150× faster than the DFT. Over a 1 millisecond sampling period, the NSE algorithm had the capability to spectrally analyze a maximum of 303 data channels, while the DFT algorithm could only analyze a single channel. Moreover, for the 8 second sequences, the NSE spectral resolution in the 3-12 Hz range was 0.037 Hz while the DFT spectral resolution was only 0.122 Hz. The NSE was also found to be implementable as a standalone spectral analyzer board using approximately 26 integrated circuits at a cost of approximately $500. The software files used for analysis are included as a supplement, please see the Additional files 1 and 2. Conclusions The NSE real-time algorithm has low computational cost and complexity, and is implementable in both software and hardware for 1 millisecond updates of multichannel spectra. The algorithm may be helpful to guide radiofrequency catheter ablation in real time. PMID:24886214

  18. Algorithms for detection of objects in image sequences captured from an airborne imaging system

    NASA Technical Reports Server (NTRS)

    Kasturi, Rangachar; Camps, Octavia; Tang, Yuan-Liang; Devadiga, Sadashiva; Gandhi, Tarak

    1995-01-01

    This research was initiated as a part of the effort at the NASA Ames Research Center to design a computer vision based system that can enhance the safety of navigation by aiding the pilots in detecting various obstacles on the runway during critical section of the flight such as a landing maneuver. The primary goal is the development of algorithms for detection of moving objects from a sequence of images obtained from an on-board video camera. Image regions corresponding to the independently moving objects are segmented from the background by applying constraint filtering on the optical flow computed from the initial few frames of the sequence. These detected regions are tracked over subsequent frames using a model based tracking algorithm. Position and velocity of the moving objects in the world coordinate is estimated using an extended Kalman filter. The algorithms are tested using the NASA line image sequence with six static trucks and a simulated moving truck and experimental results are described. Various limitations of the currently implemented version of the above algorithm are identified and possible solutions to build a practical working system are investigated.

  19. Automated and real-time segmentation of suspicious breast masses using convolutional neural network

    PubMed Central

    Gregory, Adriana; Denis, Max; Meixner, Duane D.; Bayat, Mahdi; Whaley, Dana H.; Fatemi, Mostafa; Alizad, Azra

    2018-01-01

    In this work, a computer-aided detection tool was developed to segment breast masses from clinical ultrasound (US) scans. The underlying Multi U-net algorithm is based on convolutional neural networks. Under the Mayo Clinic Institutional Review Board protocol, a prospective study of the automatic segmentation of suspicious breast masses was performed. The cohort consisted of 258 female patients who were clinically identified with suspicious breast masses and underwent clinical US scan and breast biopsy. The computer-aided detection tool effectively segmented the breast masses, achieving a mean Dice coefficient of 0.82, a true positive fraction (TPF) of 0.84, and a false positive fraction (FPF) of 0.01. By avoiding placement of an initial seed, the algorithm is able to segment images in real time (13–55 ms per image) and can have potential clinical applications. The algorithm is on par with a conventional seeded algorithm, which had a mean Dice coefficient of 0.84, and performs significantly better (P < 0.0001) than the original U-net algorithm. PMID:29768415

  20. Method and apparatus for detection of catalyst failure on-board a motor vehicle using a dual oxygen sensor and an algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clemmens, W.B.; Koupal, J.W.; Sabourin, M.A.

    1993-07-20

    Apparatus is described for detecting motor vehicle exhaust gas catalytic converter deterioration comprising a first exhaust gas oxygen sensor adapted for communication with an exhaust stream before passage of the exhaust stream through a catalytic converter and a second exhaust gas oxygen sensor adapted for communication with the exhaust stream after passage of the exhaust stream through the catalytic converter, an on-board vehicle computational means, said computational means adapted to accept oxygen content signals from the before and after catalytic converter oxygen sensors and adapted to generate signal threshold values, said computational means adapted to compare over repeated time intervals the oxygen content signals to the signal threshold values and to store the output of the compared oxygen content signals, and in response after a specified number of time intervals for a specified mode of motor vehicle operation to determine and indicate a level of catalyst deterioration.
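
    The patent's computational means compares both sensor signals to thresholds over repeated intervals; a common concrete realization of that comparison, sketched below, is the rear-to-front switch-ratio index. The threshold and pass limit here are invented values for illustration, not the patent's calibration.

        def catalyst_deterioration(front, rear, threshold=0.45, limit=0.75):
            """front, rear: sequences of oxygen sensor voltages sampled over
            a monitoring window. A healthy catalyst's oxygen storage damps
            the rear sensor's switching relative to the front sensor's."""
            def switches(signal):
                above = [v > threshold for v in signal]
                return sum(a != b for a, b in zip(above, above[1:]))
            f, r = switches(front), switches(rear)
            ratio = r / f if f else 0.0
            return ratio > limit, ratio       # True means deteriorated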

  1. Sensorimotor Assessment and Rehabilitative Apparatus

    DTIC Science & Technology

    2017-10-01

    Excerpts (the abstract is only partially preserved and partly garbled in the record): the report describes vestibulo-ocular assessment without measuring eye movements per se. VON uses a head-mounted motion sensor and a laptop computer with extensive processing algorithms; frequent occlusion of the pupil ... [text garbled]. The apparatus consists of a laptop computer, mirror galvanometer, back-projected laser target, data acquisition board, rate sensor, and motion-gain ...

  2. On-Board Mining in the Sensor Web

    NASA Astrophysics Data System (ADS)

    Tanner, S.; Conover, H.; Graves, S.; Ramachandran, R.; Rushing, J.

    2004-12-01

    On-board data mining can contribute to many research and engineering applications, including natural hazard detection and prediction, intelligent sensor control, and the generation of customized data products for direct distribution to users. The ability to mine sensor data in real time can also be a critical component of autonomous operations, supporting deep space missions, unmanned aerial and ground-based vehicles (UAVs, UGVs), and a wide range of sensor meshes, webs and grids. On-board processing is expected to play a significant role in the next generation of NASA, Homeland Security, Department of Defense and civilian programs, providing for greater flexibility and versatility in measurements of physical systems. In addition, the use of UAV and UGV systems is increasing in military, emergency response and industrial applications. As research into the autonomy of these vehicles progresses, especially in fleet or web configurations, the applicability of on-board data mining is expected to increase significantly. Data mining in real time on board sensor platforms presents unique challenges. Most notably, the data to be mined is a continuous stream, rather than a fixed store such as a database. This means that the data mining algorithms must be modified to make only a single pass through the data. In addition, the on-board environment requires real time processing with limited computing resources, thus the algorithms must use fixed and relatively small amounts of processing time and memory. The University of Alabama in Huntsville is developing an innovative processing framework for the on-board data and information environment. The Environment for On-Board Processing (EVE) and the Adaptive On-board Data Processing (AODP) projects serve as proofs-of-concept of advanced information systems for remote sensing platforms. The EVE real-time processing infrastructure will upload, schedule and control the execution of processing plans on board remote sensors. These plans provide capabilities for autonomous data mining, classification and feature extraction using both streaming and buffered data sources. A ground-based testbed provides a heterogeneous, embedded hardware and software environment representing both space-based and ground-based sensor platforms, including wireless sensor mesh architectures. The AODP project explores the EVE concepts in the world of sensor-networks, including ad-hoc networks of small sensor platforms.

  3. Finite element computation on nearest neighbor connected machines

    NASA Technical Reports Server (NTRS)

    Mcaulay, A. D.

    1984-01-01

    Research aimed at faster, more cost effective parallel machines and algorithms for improving designer productivity with finite element computations is discussed. A set of 8 boards, containing 4 nearest neighbor connected arrays of commercially available floating point chips and substantial memory, are inserted into a commercially available machine. One-tenth Mflop (64 bit operation) processors provide an 89% efficiency when solving the equations arising in a finite element problem for a single variable regular grid of size 40 by 40 by 40. This is approximately 15 to 20 times faster than a much more expensive machine such as a VAX 11/780 used in double precision. The efficiency falls off as faster or more processors are envisaged because communication times become dominant. A novel successive overrelaxation algorithm which uses cyclic reduction in order to permit data transfer and computation to overlap in time is proposed.
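
    To make the overlap idea concrete, the sketch below shows a two-color (red/black) successive overrelaxation sweep for a 2-D Laplace problem: each half-sweep depends only on the other color, so on a nearest-neighbor machine the boundary exchange for one color can proceed while the other color is computed. The red/black ordering stands in here for the paper's cyclic-reduction variant, and this is plain single-process numpy, not the array hardware.

        import numpy as np

        def sor_sweep(u, omega=1.7):
            """One red/black SOR sweep for the 2-D Laplace equation; the
            boundary rim of u holds the Dirichlet values."""
            for color in (0, 1):
                # All points of one color depend only on the other color,
                # so each half-sweep is fully parallel across the grid.
                for i in range(1, u.shape[0] - 1):
                    for j in range(1, u.shape[1] - 1):
                        if (i + j) % 2 == color:
                            avg = 0.25 * (u[i-1, j] + u[i+1, j]
                                          + u[i, j-1] + u[i, j+1])
                            u[i, j] += omega * (avg - u[i, j])
            return u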

  4. A Real Time Controller For Applications In Smart Structures

    NASA Astrophysics Data System (ADS)

    Ahrens, Christian P.; Claus, Richard O.

    1990-02-01

    Research in smart structures, especially the area of vibration suppression, has warranted the investigation of advanced computing environments. Real time PC computing power has limited development of high order control algorithms. This paper presents a simple Real Time Embedded Control System (RTECS) in an application of Intelligent Structure Monitoring by way of modal domain sensing for vibration control. It is compared to a PC AT based system for overall functionality and speed. The system employs a novel Reduced Instruction Set Computer (RISC) microcontroller capable of 15 million instructions per second (MIPS) continuous performance and burst rates of 40 MIPS. Advanced Complementary Metal Oxide Semiconductor (CMOS) circuits are integrated on a single 100 mm by 160 mm printed circuit board requiring only 1 Watt of power. An operating system written in Forth provides high speed operation and short development cycles. The system allows for implementation of Input/Output (I/O) intensive algorithms and provides capability for advanced system development.

  5. Speed and accuracy improvements in FLAASH atmospheric correction of hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Perkins, Timothy; Adler-Golden, Steven; Matthew, Michael W.; Berk, Alexander; Bernstein, Lawrence S.; Lee, Jamine; Fox, Marsha

    2012-11-01

    Remotely sensed spectral imagery of the earth's surface can be used to fullest advantage when the influence of the atmosphere has been removed and the measurements are reduced to units of reflectance. Here, we provide a comprehensive summary of the latest version of the Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes atmospheric correction algorithm. We also report some new code improvements for speed and accuracy. These include the re-working of the original algorithm in C-language code parallelized with message passing interface and containing a new radiative transfer look-up table option, which replaces executions of the MODTRAN model. With computation times now as low as ~10 s per image per computer processor, automated, real-time, on-board atmospheric correction of hyper- and multi-spectral imagery is within reach.

  6. Singular perturbation techniques for real time aircraft trajectory optimization and control

    NASA Technical Reports Server (NTRS)

    Calise, A. J.; Moerder, D. D.

    1982-01-01

    The usefulness of singular perturbation methods for developing real time computer algorithms to control and optimize aircraft flight trajectories is examined. A minimum time intercept problem using F-8 aerodynamic and propulsion data is used as a baseline. This provides a framework within which issues relating to problem formulation, solution methodology and real time implementation are examined. Theoretical questions relating to separability of dynamics are addressed. With respect to implementation, situations leading to numerical singularities are identified, and procedures for dealing with them are outlined. Also, particular attention is given to identifying quantities that can be precomputed and stored, thus greatly reducing the on-board computational load. Numerical results are given to illustrate the minimum time algorithm, and the resulting flight paths. An estimate is given for execution time and storage requirements.

  7. On Gamma Ray Instrument On-Board Data Processing Real-Time Computational Algorithm for Cosmic Ray Rejection

    NASA Technical Reports Server (NTRS)

    Kizhner, Semion; Hunter, Stanley D.; Hanu, Andrei R.; Sheets, Teresa B.

    2016-01-01

    Richard O. Duda and Peter E. Hart of Stanford Research Institute in [1] described the recurring problem in computer image processing as the detection of straight lines in digitized images. The problem is to detect the presence of groups of collinear or almost collinear figure points. It is clear that the problem can be solved to any desired degree of accuracy by testing the lines formed by all pairs of points. However, the computation required for an image of n = N x M points is approximately proportional to n², i.e. O(n²), becoming prohibitive for large images or when the data processing cadence is in milliseconds. Rosenfeld in [2] described an ingenious method due to Hough [3] for replacing the original problem of finding collinear points by a mathematically equivalent problem of finding concurrent lines. This method involves transforming each of the figure points into a straight line in a parameter space. Hough chose to use the familiar slope-intercept parameters, and thus his parameter space was the two-dimensional slope-intercept plane. A parallel Hough transform running on multi-core processors was elaborated in [4]. There are many other proposed methods for solving similar problems, such as the sampling-up-the-ramp (SUTR) algorithm [5] and algorithms involving artificial swarm intelligence techniques [6]. However, all state-of-the-art algorithms lack real-time performance: they are slow for large images that require a processing cadence of a few dozen milliseconds (50 ms). This problem arises in spaceflight applications such as near real-time analysis of gamma ray measurements contaminated by an overwhelming number of traces of cosmic rays (CR). Future spaceflight instruments such as the Advanced Energetic Pair Telescope instrument (AdEPT) [7-9] for cosmic gamma ray survey employ large detector readout planes registering multitudes of cosmic ray interference events and sparse science gamma ray event traces' projections. The AdEPT science of interest is in the gamma ray events, and the problem is to detect and reject the much more voluminous cosmic ray projections so that the remaining science data can be telemetered to the ground over the constrained communication link. The state of the art in cosmic ray detection and rejection does not provide an adequate computational solution. This paper presents a novel approach to AdEPT on-board data processing, burdened by the CR detection bottleneck. It introduces the data processing object, demonstrates object segmentation and distribution for processing among many processing elements (PEs), and presents a solution algorithm for the processing bottleneck - the CR-Algorithm. The algorithm is based on the a priori knowledge that a CR pierces the entire instrument pressure vessel. This phenomenon is also the basis for a straightforward CR simulator, allowing CR-Algorithm performance testing. Parallel processing of the readout image's (2(N+M) - 4) peripheral voxels detects all CRs, resulting in O(n) computational complexity. This algorithm's near real-time performance makes AdEPT-class spaceflight instruments feasible.
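
    The a priori fact that a CR pierces the vessel means every CR track reaches the border of the readout image, which is what makes a peripheral scan sufficient. The sketch below seeds from the 2(N+M)-4 border pixels and removes every hit component connected to the border; the serial 8-connected flood fill is an illustrative stand-in for the paper's parallel track-following across processing elements.

        import numpy as np
        from collections import deque

        def reject_border_tracks(hits):
            """hits: (N, M) boolean array of detector hits. Returns a copy
            with every connected component touching the border removed,
            leaving candidate gamma ray event traces."""
            N, M = hits.shape
            keep = hits.copy()
            seeds = [(i, j) for i in range(N) for j in range(M)
                     if (i in (0, N - 1) or j in (0, M - 1)) and hits[i, j]]
            q = deque(seeds)                   # the 2(N+M)-4 border pixels
            while q:
                i, j = q.popleft()
                if not keep[i, j]:
                    continue
                keep[i, j] = False             # erase the CR track pixel
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1),
                               (1, 1), (1, -1), (-1, 1), (-1, -1)):
                    a, b = i + di, j + dj
                    if 0 <= a < N and 0 <= b < M and keep[a, b]:
                        q.append((a, b))
            return keep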

  8. Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC), version 4.0: User's manual

    NASA Technical Reports Server (NTRS)

    Whyte, Wayne A., Jr.; Heyward, Ann O.; Ponchak, Denise S.; Spence, Rodney L.; Zuzek, John E.

    1988-01-01

    The information in the NASARC (Version 4.0) Technical Manual (NASA-TM-101453) and NASARC (Version 4.0) User's Manual (NASA-TM-101454) relates to the state of Numerical Arc Segmentation Algorithm for a Radio Conference (NASARC) software development through November 1, 1988. The Technical Manual describes the NASARC concept and the algorithms used to implement the concept. The User's Manual provides information on computer system considerations, installation instructions, description of input files, and program operation instructions. Significant revisions were incorporated in the Version 4.0 software over prior versions. These revisions have further enhanced the modeling capabilities of the NASARC procedure and provide improved arrangements of predetermined arcs within the geostationary orbit. Array dimensions within the software were structured to fit within the currently available 12-megabyte memory capacity of the International Frequency Registration Board (IFRB) computer facility. A piecewise approach to predetermined arc generation in NASARC (Version 4.0) allows worldwide planning problem scenarios to be accommodated within computer run time and memory constraints with enhanced likelihood and ease of solution.

  9. A computer-based servo system for controlling isotonic contractions of muscle.

    PubMed

    Smith, J P; Barsotti, R J

    1993-11-01

    We have developed a computer-based servo system for controlling isotonic releases in muscle. This system is a composite of commercially available devices: an IBM personal computer, an analog-to-digital (A/D) board, an Akers AE801 force transducer, and a Cambridge Technology motor. The servo loop controlling the force clamp is generated by computer via the A/D board, using a program written in QuickBASIC 4.5. Results are shown that illustrate the ability of the system to clamp the force generated by either skinned cardiac trabeculae or single rabbit psoas fibers down to the resolution of the force transducer within 4 ms. This rate is independent of the level of activation of the tissue and the size of the load imposed during the release. The key to the effectiveness of the system consists of two algorithms that are described in detail. The first is used to calculate the error signal to hold force to the desired level. The second algorithm is used to calculate the appropriate gain of the servo for a particular fiber and the size of the desired load to be imposed. The results show that the described computer-based method for controlling isotonic releases in muscle represents a good compromise between simplicity and performance and is an alternative to the custom-built digital/analog servo devices currently being used in studies of muscle mechanics.
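
    Schematically, the loop pairs the two algorithms the abstract names: one computes the error signal each A/D cycle, the other sets a per-fiber gain. The proportional-only law and the gain formula below are simplifying assumptions for illustration, not the paper's algorithms.

        def servo_step(force_measured, force_target, gain, motor_command):
            """Algorithm 1 (schematic): error signal from the transducer
            reading, scaled by the servo gain to update the motor command."""
            error = force_target - force_measured
            return motor_command + gain * error

        def servo_gain(fiber_stiffness, load_fraction, k=0.5):
            """Algorithm 2 (schematic): softer fibers and lighter loads need
            a larger length change per unit force error."""
            return k * load_fraction / fiber_stiffness

    In the described system this update would run inside the QuickBASIC loop through the A/D board, fast enough to clamp force within 4 ms.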

  10. On-Board Entry Trajectory Planning Expanded to Sub-orbital Flight

    NASA Technical Reports Server (NTRS)

    Lu, Ping; Shen, Zuojun

    2003-01-01

    A methodology for on-board planning of sub-orbital entry trajectories is developed. The algorithm is able to generate in a time frame consistent with on-board environment a three-degree-of-freedom (3DOF) feasible entry trajectory, given the boundary conditions and vehicle modeling. This trajectory is then tracked by feedback guidance laws which issue guidance commands. The current trajectory planning algorithm complements the recently developed method for on-board 3DOF entry trajectory generation for orbital missions, and provides full-envelope autonomous adaptive entry guidance capability. The algorithm is validated and verified by extensive high fidelity simulations using a sub-orbital reusable launch vehicle model and difficult mission scenarios including failures and aborts.

  11. High-Speed On-Board Data Processing Platform for LIDAR Projects at NASA Langley Research Center

    NASA Astrophysics Data System (ADS)

    Beyon, J.; Ng, T. K.; Davis, M. J.; Adams, J. K.; Lin, B.

    2015-12-01

    The project called High-Speed On-Board Data Processing for Science Instruments (HOPS) was funded by the NASA Earth Science Technology Office (ESTO) Advanced Information Systems Technology (AIST) program from April 2012 to April 2015. HOPS is an enabler for science missions with extremely high data processing rates. In this three-year effort, Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) and 3-D Winds were of particular interest. For ASCENDS, HOPS replaces time-domain data processing with frequency-domain processing while making real-time on-board data processing possible. For 3-D Winds, HOPS offers real-time high-resolution wind profiling with a 4,096-point fast Fourier transform (FFT). HOPS is adaptable with a quick turn-around time: since HOPS offers reusable, user-friendly computational elements, its FPGA IP core can be modified within a short development period if the algorithm changes. The FPGA and memory bandwidth of HOPS is 20 GB/sec, while the typical maximum processor-to-SDRAM bandwidth of commercial radiation-tolerant high-end processors is about 130-150 MB/sec. The inter-board communication bandwidth of HOPS is 4 GB/sec, while the effective processor-to-cPCI bandwidth of commercial radiation-tolerant high-end boards is about 50-75 MB/sec. HOPS also offers VHDL cores for the easy and efficient implementation of ASCENDS, 3-D Winds, and other similar algorithms. A general overview of the three-year development of HOPS is the goal of this presentation.

  12. High-Speed On-Board Data Processing for Science Instruments: HOPS

    NASA Technical Reports Server (NTRS)

    Beyon, Jeffrey

    2015-01-01

    The project called High-Speed On-Board Data Processing for Science Instruments (HOPS) was funded by the NASA Earth Science Technology Office (ESTO) Advanced Information Systems Technology (AIST) program from April 2012 to April 2015. HOPS is an enabler for science missions with extremely high data processing rates. In this three-year effort, Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) and 3-D Winds were of particular interest. For ASCENDS, HOPS replaces time-domain data processing with frequency-domain processing while making real-time on-board data processing possible. For 3-D Winds, HOPS offers real-time high-resolution wind profiling with a 4,096-point fast Fourier transform (FFT). HOPS is adaptable with a quick turn-around time: since HOPS offers reusable, user-friendly computational elements, its FPGA IP core can be modified within a short development period if the algorithm changes. The FPGA and memory bandwidth of HOPS is 20 GB/sec, while the typical maximum processor-to-SDRAM bandwidth of commercial radiation-tolerant high-end processors is about 130-150 MB/sec. The inter-board communication bandwidth of HOPS is 4 GB/sec, while the effective processor-to-cPCI bandwidth of commercial radiation-tolerant high-end boards is about 50-75 MB/sec. HOPS also offers VHDL cores for the easy and efficient implementation of ASCENDS, 3-D Winds, and other similar algorithms. A general overview of the three-year development of HOPS is the goal of this presentation.

  13. A Highly Parallelized Special-Purpose Computer for Many-Body Simulations with an Arbitrary Central Force: MD-GRAPE

    NASA Astrophysics Data System (ADS)

    Fukushige, Toshiyuki; Taiji, Makoto; Makino, Junichiro; Ebisuzaki, Toshikazu; Sugimoto, Daiichiro

    1996-09-01

    We have developed a parallel, pipelined special-purpose computer for N-body simulations, MD-GRAPE (for "GRAvity PipE"). In gravitational N-body simulations, almost all computing time is spent on the calculation of interactions between particles. GRAPE is specialized hardware to calculate these interactions. It is used with a general-purpose front-end computer that performs all calculations other than the force calculation. MD-GRAPE is the first parallel GRAPE that can calculate an arbitrary central force. A force different from a pure 1/r potential is necessary for N-body simulations with periodic boundary conditions using the Ewald or particle-particle/particle-mesh (P^3M) method. MD-GRAPE accelerates the calculation of the particle-particle force for these algorithms. An MD-GRAPE board has four MD chips and its peak performance is 4.2 GFLOPS. On an MD-GRAPE board, a cosmological N-body simulation takes 600(N/10^6)^(3/2) s per step for the Ewald method, where N is the number of particles, and would take 240(N/10^6) s per step for the P^3M method, in a uniform distribution of particles.
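
    Plugging numbers into the quoted scaling laws gives a feel for these timings; a small Python check using only the formulas above:

        def ewald_seconds_per_step(n):
            return 600.0 * (n / 1e6) ** 1.5  # 600 (N/10^6)^(3/2) s on one board

        def p3m_seconds_per_step(n):
            return 240.0 * (n / 1e6)         # 240 (N/10^6) s on one board

        for n in (1e5, 1e6, 1e7):
            print(f"N = {n:.0e}: Ewald {ewald_seconds_per_step(n):9.1f} s/step, "
                  f"P^3M {p3m_seconds_per_step(n):8.1f} s/step")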

  14. On-Board, Real-Time Preprocessing System for Optical Remote-Sensing Imagery

    PubMed Central

    Qi, Baogui; Zhuang, Yin; Chen, He; Chen, Liang

    2018-01-01

    With the development of remote-sensing technology, optical remote-sensing imagery processing has played an important role in many application fields, such as geological exploration and natural disaster prevention. However, relative radiation correction and geometric correction are key steps in preprocessing because raw image data without preprocessing will cause poor performance during application. Traditionally, remote-sensing data are downlinked to the ground station, preprocessed, and distributed to users. This process generates long delays, which is a major bottleneck in real-time applications for remote-sensing data. Therefore, on-board, real-time image preprocessing is greatly desired. In this paper, a real-time processing architecture for on-board imagery preprocessing is proposed. First, a hierarchical optimization and mapping method is proposed to realize the preprocessing algorithm in a hardware structure, which can effectively reduce the computation burden of on-board processing. Second, a co-processing system using a field-programmable gate array (FPGA) and a digital signal processor (DSP; altogether, FPGA-DSP) based on optimization is designed to realize real-time preprocessing. The experimental results demonstrate the potential application of our system to an on-board processor, for which resources and power consumption are limited. PMID:29693585

  15. On-Board, Real-Time Preprocessing System for Optical Remote-Sensing Imagery.

    PubMed

    Qi, Baogui; Shi, Hao; Zhuang, Yin; Chen, He; Chen, Liang

    2018-04-25

    With the development of remote-sensing technology, optical remote-sensing imagery processing has played an important role in many application fields, such as geological exploration and natural disaster prevention. However, relative radiation correction and geometric correction are key steps in preprocessing because raw image data without preprocessing will cause poor performance during application. Traditionally, remote-sensing data are downlinked to the ground station, preprocessed, and distributed to users. This process generates long delays, which is a major bottleneck in real-time applications for remote-sensing data. Therefore, on-board, real-time image preprocessing is greatly desired. In this paper, a real-time processing architecture for on-board imagery preprocessing is proposed. First, a hierarchical optimization and mapping method is proposed to realize the preprocessing algorithm in a hardware structure, which can effectively reduce the computation burden of on-board processing. Second, a co-processing system using a field-programmable gate array (FPGA) and a digital signal processor (DSP; altogether, FPGA-DSP) based on optimization is designed to realize real-time preprocessing. The experimental results demonstrate the potential application of our system to an on-board processor, for which resources and power consumption are limited.

  16. WE-AB-303-09: Rapid Projection Computations for On-Board Digital Tomosynthesis in Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iliopoulos, AS; Sun, X; Pitsianis, N

    2015-06-15

    Purpose: To facilitate fast and accurate iterative volumetric image reconstruction from limited-angle on-board projections. Methods: Intrafraction motion hinders the clinical applicability of modern radiotherapy techniques, such as lung stereotactic body radiation therapy (SBRT). The LIVE system may impact clinical practice by recovering volumetric information via Digital Tomosynthesis (DTS), thus entailing low time and radiation dose for image acquisition during treatment. The DTS is estimated as a deformation of prior CT via iterative registration with on-board images; this shifts the challenge to the computational domain, owing largely to repeated projection computations across iterations. We address this issue by composing efficient digital projection operators from their constituent parts. This allows us to separate the static (projection geometry) and dynamic (volume/image data) parts of projection operations by means of pre-computations, enabling fast on-board processing, while also relaxing constraints on underlying numerical models (e.g. regridding interpolation kernels). Further decoupling the projectors into simpler ones ensures the incurred memory overhead remains low, within the capacity of a single GPU. These operators depend only on the treatment plan and may be reused across iterations and patients. The dynamic processing load is kept to a minimum and maps well to the GPU computational model. Results: We have integrated efficient, pre-computable modules for volumetric ray-casting and FDK-based back-projection with the LIVE processing pipeline. Our results show a 60x acceleration of the DTS computations, compared to the previous version, using a single GPU; presently, reconstruction is attained within a couple of minutes. The present implementation allows for significant flexibility in terms of the numerical and operational projection model; we are investigating the benefit of further optimizations and accurate digital projection sub-kernels. Conclusion: Composable projection operators constitute a versatile research tool which can greatly accelerate iterative registration algorithms and may be conducive to the clinical applicability of LIVE. National Institutes of Health Grant No. R01-CA184173; GPU donation by NVIDIA Corporation.

  17. Real-time plasma control based on the ISTTOK tomography diagnostic

    NASA Astrophysics Data System (ADS)

    Carvalho, P. J.; Carvalho, B. B.; Neto, A.; Coelho, R.; Fernandes, H.; Sousa, J.; Varandas, C.; Chávez-Alarcón, E.; Herrera-Velázquez, J. J. E.

    2008-10-01

    The presently available processing power in generic processing units (GPUs) combined with state-of-the-art programmable logic devices benefits the implementation of complex, real-time driven, data processing algorithms for plasma diagnostics. A tomographic reconstruction diagnostic has been developed for the ISTTOK tokamak, based on three linear pinhole cameras each with ten lines of sight. The plasma emissivity in a poloidal cross section is computed locally on a submillisecond time scale, using a Fourier-Bessel algorithm, allowing the use of the output signals for active plasma position control. The data acquisition and reconstruction (DAR) system is based on ATCA technology and consists of one acquisition board with integrated field programmable gate array (FPGA) capabilities and a dual-core Pentium module running real-time application interface (RTAI) Linux. In this paper, the DAR real-time firmware/software implementation is presented, based on (i) front-end digital processing in the FPGA; (ii) a device driver specially developed for the board which enables streaming data acquisition to the host GPU; and (iii) a fast reconstruction algorithm running in Linux RTAI. This system behaves as a module of the central ISTTOK control and data acquisition system (FIRESIGNAL). Preliminary results of the above experimental setup are presented and a performance benchmarking against the magnetic coil diagnostic is shown.

  18. A real-time KLT implementation for radio-SETI applications

    NASA Astrophysics Data System (ADS)

    Melis, Andrea; Concu, Raimondo; Pari, Pierpaolo; Maccone, Claudio; Montebugnoli, Stelio; Possenti, Andrea; Valente, Giuseppe; Antonietti, Nicoló; Perrodin, Delphine; Migoni, Carlo; Murgia, Matteo; Trois, Alessio; Barbaro, Massimo; Bocchinu, Alessandro; Casu, Silvia; Lunesu, Maria Ilaria; Monari, Jader; Navarrini, Alessandro; Pisanu, Tonino; Schilliró, Francesco; Vacca, Valentina

    2016-07-01

    SETI, the Search for ExtraTerrestrial Intelligence, is the search for radio signals emitted by alien civilizations living in the Galaxy. Narrow-band FFT-based approaches have been preferred in SETI, since their computation time only grows like N*ln(N), where N is the number of time samples. On the contrary, a wide-band approach based on the Karhunen-Loève Transform (KLT) algorithm would be preferable, but it would scale like N*N. In this paper, we describe a hardware-software infrastructure based on FPGA boards and GPU-based PCs that circumvents this computation-time problem, allowing for a real-time KLT.
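
    To see where the N*N cost comes from, here is a textbook discrete-KLT sketch in NumPy (an illustration of the underlying transform, not the paper's FPGA/GPU pipeline): the dense eigendecomposition of the N x N autocorrelation matrix is the step the hardware must accelerate.

        import numpy as np
        from scipy.linalg import toeplitz

        def klt(x, order):
            # Return the `order` dominant KL basis vectors and coefficients of x.
            n = len(x)
            acf = np.correlate(x, x, mode="full")[n - 1:] / n  # autocorrelation lags
            R = toeplitz(acf)                    # N x N Toeplitz covariance estimate
            eigval, eigvec = np.linalg.eigh(R)   # dense eigenproblem, the costly step
            basis = eigvec[:, ::-1][:, :order]   # dominant eigenvectors
            return basis, basis.T @ x            # KL coefficients of the signal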

  19. Development of embedded real-time and high-speed vision platform

    NASA Astrophysics Data System (ADS)

    Ouyang, Zhenxing; Dong, Yimin; Yang, Hua

    2015-12-01

    Currently, high-speed vision platforms are widely used in many applications, such as robotics and the automation industry. However, traditional high-speed vision platforms depend on a personal computer (PC) for human-computer interaction, and its bulk makes it unsuitable for compact systems. Therefore, this paper develops an embedded real-time, high-speed vision platform, ER-HVP Vision, which is able to work entirely without a PC. In this new platform, an embedded CPU-based board is designed as a substitute for the PC, and a combined DSP and FPGA board is developed that implements image-parallel algorithms in the FPGA and image-sequential algorithms in the DSP. The resulting ER-HVP Vision unit measures 320 mm x 250 mm x 87 mm and therefore suits more compact installations. Experimental results indicate that real-time detection and counting of a moving target at a frame rate of 200 fps at 512 x 512 pixels are feasible on this newly developed vision platform.

  20. A filter circuit board for the Earthworm Seismic Data Acquisition System

    USGS Publications Warehouse

    Jensen, Edward Gray

    2000-01-01

    The Earthworm system is a seismic network data acquisition and processing system used by the Northern California Seismic Network as well as many other seismic networks. The input to the system comprises many real-time electronic waveforms fed to a multi-channel digitizer on a PC platform. The digitizer consists of one or more National Instruments Corp. AMUX-64T multiplexer boards attached to an A/D converter board located in the computer. Originally, passive filters were installed on the multiplexers to eliminate electronic noise picked up in cabling. It was later discovered that a small amount of crosstalk occurred between successive channels in the digitizing sequence. Though small, this crosstalk will cause what appear to be small earthquake arrivals at the wrong time on some channels. This can result in erroneous calculation of earthquake arrival times, particularly by automated algorithms. To deal with this problem, an Earthworm filter board was developed to provide the needed filtering while eliminating crosstalk. This report describes the tests performed to find a suitable solution and the design of the circuit board, along with all the details needed to build and install this board in an Earthworm system or any other system using the AMUX-64T board.

  1. FPGA-based real-time swept-source OCT systems for B-scan live-streaming or volumetric imaging

    NASA Astrophysics Data System (ADS)

    Bandi, Vinzenz; Goette, Josef; Jacomet, Marcel; von Niederhäusern, Tim; Bachmann, Adrian H.; Duelk, Marcus

    2013-03-01

    We have developed a Swept-Source Optical Coherence Tomography (Ss-OCT) system with high-speed, real-time signal processing on a commercially available Data-Acquisition (DAQ) board with a Field-Programmable Gate Array (FPGA). The Ss-OCT system simultaneously acquires OCT and k-clock reference signals at 500MS/s. From the k-clock signal of each A-scan we extract a remap vector for the k-space linearization of the OCT signal. The linear but oversampled interpolation is followed by a 2048-point FFT, additional auxiliary computations, and a data transfer to a host computer for real-time, live-streaming of B-scan or volumetric C-scan OCT visualization. We achieve a 100 kHz A-scan rate by parallelization of our hardware algorithms, which run on standard and affordable, commercially available DAQ boards. Our main development tool for signal analysis as well as for hardware synthesis is MATLAB® with add-on toolboxes and 3rd-party tools.
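
    A minimal NumPy sketch of the per-A-scan chain described above (an illustration, not the FPGA implementation): the remap vector is derived from the k-clock zero crossings, the fringe is interpolated onto a uniform k grid, and a 2048-point FFT yields the depth profile.

        import numpy as np

        def a_scan(oct_signal, k_clock, n_fft=2048):
            # k-clock zero crossings mark equal increments in optical frequency (k)
            crossings = np.where(np.diff(np.signbit(k_clock).astype(np.int8)))[0]
            # remap vector: n_fft sample positions spaced uniformly in k
            remap = np.interp(np.linspace(0, len(crossings) - 1, n_fft),
                              np.arange(len(crossings)), crossings)
            # linear (oversampled) interpolation of the fringe, then FFT
            fringe = np.interp(remap, np.arange(len(oct_signal)), oct_signal)
            return np.abs(np.fft.fft(fringe, n_fft))[: n_fft // 2]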

  2. Real-time implementation of logo detection on open source BeagleBoard

    NASA Astrophysics Data System (ADS)

    George, M.; Kehtarnavaz, N.; Estevez, L.

    2011-03-01

    This paper presents the real-time implementation of our previously developed logo detection and tracking algorithm on the open source BeagleBoard mobile platform. This platform has an OMAP processor that incorporates an ARM Cortex processor. The algorithm combines Scale Invariant Feature Transform (SIFT) with k-means clustering, online color calibration and moment invariants to robustly detect and track logos in video. Various optimization steps that are carried out to allow the real-time execution of the algorithm on BeagleBoard are discussed. The results obtained are compared to the PC real-time implementation results.

  3. A Single LiDAR-Based Feature Fusion Indoor Localization Algorithm.

    PubMed

    Wang, Yun-Ting; Peng, Chao-Chung; Ravankar, Ankit A; Ravankar, Abhijeet

    2018-04-23

    In past years, there has been significant progress in the field of indoor robot localization. To precisely recover the position, robots usually rely on multiple on-board sensors. Nevertheless, this affects the overall system cost and increases computation. In this research work, we considered a light detection and ranging (LiDAR) device as the only sensor for detecting surroundings and propose an efficient indoor localization algorithm. To attenuate the computation effort and preserve localization robustness, a weighted parallel iterative closed point (WP-ICP) with interpolation is presented. As compared to the traditional ICP, the point cloud is first processed to extract corners and line features before applying point registration. Later, points labeled as corners are only matched with the corner candidates. Similarly, points labeled as lines are only matched with the line candidates. Moreover, their ICP confidence levels are also fused in the algorithm, which makes the pose estimation less sensitive to environment uncertainties. The proposed WP-ICP architecture reduces the probability of mismatch and thereby reduces the ICP iterations. Finally, based on given well-constructed indoor layouts, experiment comparisons are carried out under both clean and perturbed environments. It is shown that the proposed method is effective in significantly reducing computation effort and is simultaneously able to preserve localization precision.
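
    A simplified 2-D sketch of one weighted ICP iteration in the spirit of WP-ICP (an illustration, not the authors' implementation): it would be called separately for the corner set and the line set, so each feature class only matches its own candidates, and the per-point confidence weights enter the SVD-based pose estimate.

        import numpy as np
        from scipy.spatial import cKDTree

        def weighted_icp_step(src, dst, weights):
            # src (N,2), dst (M,2): points of a single feature class;
            # weights (N,): confidence of each source point.
            _, idx = cKDTree(dst).query(src)       # closest same-class candidate
            matched = dst[idx]
            w = weights / weights.sum()
            mu_s = (w[:, None] * src).sum(axis=0)  # weighted centroids
            mu_d = (w[:, None] * matched).sum(axis=0)
            H = (w[:, None] * (src - mu_s)).T @ (matched - mu_d)
            U, _, Vt = np.linalg.svd(H)            # weighted Kabsch alignment
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:               # guard against a reflection
                Vt[-1] *= -1
                R = Vt.T @ U.T
            return R, mu_d - R @ mu_s              # rotation and translation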

  4. A Single LiDAR-Based Feature Fusion Indoor Localization Algorithm

    PubMed Central

    Wang, Yun-Ting; Peng, Chao-Chung; Ravankar, Ankit A.; Ravankar, Abhijeet

    2018-01-01

    In past years, there has been significant progress in the field of indoor robot localization. To precisely recover the position, robots usually rely on multiple on-board sensors. Nevertheless, this affects the overall system cost and increases computation. In this research work, we considered a light detection and ranging (LiDAR) device as the only sensor for detecting surroundings and propose an efficient indoor localization algorithm. To attenuate the computation effort and preserve localization robustness, a weighted parallel iterative closed point (WP-ICP) with interpolation is presented. As compared to the traditional ICP, the point cloud is first processed to extract corners and line features before applying point registration. Later, points labeled as corners are only matched with the corner candidates. Similarly, points labeled as lines are only matched with the line candidates. Moreover, their ICP confidence levels are also fused in the algorithm, which makes the pose estimation less sensitive to environment uncertainties. The proposed WP-ICP architecture reduces the probability of mismatch and thereby reduces the ICP iterations. Finally, based on given well-constructed indoor layouts, experiment comparisons are carried out under both clean and perturbed environments. It is shown that the proposed method is effective in significantly reducing computation effort and is simultaneously able to preserve localization precision. PMID:29690624

  5. FPGA implementation of ICA algorithm for blind signal separation and adaptive noise canceling.

    PubMed

    Kim, Chang-Min; Park, Hyung-Min; Kim, Taesu; Choi, Yoon-Kyung; Lee, Soo-Young

    2003-01-01

    A field-programmable gate array (FPGA) implementation of an independent component analysis (ICA) algorithm is reported for blind signal separation (BSS) and adaptive noise canceling (ANC) in real time. In order to provide enormous computing power for ICA-based algorithms with multipath reverberation, a special digital processor is designed and implemented in FPGA. The chip design fully utilizes a modular concept, and several chips may be put together for complex applications with a large number of noise sources. Experimental results with a fabricated test board are reported for ANC only, BSS only, and simultaneous ANC/BSS, which demonstrate successful speech enhancement in real environments in real time.
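
    For reference, one classic ICA update of the kind such hardware parallelizes is the natural-gradient rule with a tanh nonlinearity; the NumPy sketch below is a generic instantaneous-mixture version, not the paper's feedback architecture for multipath reverberation.

        import numpy as np

        def ica_natural_gradient(x, lr=1e-3, n_iter=200):
            # x: (channels, samples) zero-mean mixtures; returns unmixing matrix W.
            n, t = x.shape
            W = np.eye(n)
            for _ in range(n_iter):
                y = W @ x                 # current source estimates
                g = np.tanh(y)            # nonlinear score function
                W += lr * (np.eye(n) - g @ y.T / t) @ W  # natural-gradient step
            return W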

  6. Increasing the object recognition distance of compact open air on board vision system

    NASA Astrophysics Data System (ADS)

    Kirillov, Sergey; Kostkin, Ivan; Strotov, Valery; Dmitriev, Vladimir; Berdnikov, Vadim; Akopov, Eduard; Elyutin, Aleksey

    2016-10-01

    The aim of this work was to develop an algorithm that eliminates atmospheric distortion and improves image quality. The proposed algorithm is implemented entirely in software, without additional photographic hardware, and requires no preliminary calibration. It works equally effectively with images obtained at distances from 1 to 500 meters. An algorithm for improving open-air images, designed for Raspberry Pi Model B on-board vision systems, is proposed. The results of an experimental examination are given.

  7. Adversarial search by evolutionary computation.

    PubMed

    Hong, T P; Huang, K Y; Lin, W Y

    2001-01-01

    In this paper, we consider the problem of finding good next moves in two-player games. Traditional search algorithms, such as minimax and alpha-beta pruning, incur great costs in time and space when exploring deep into search trees to find better next moves. Genetic algorithms, with their ability to find global or near-global optima in limited time, seem promising, but they are inept at finding compound optima, such as the minimax of a game-search tree. We thus propose a new genetic-algorithm-based approach that can find a good next move by reserving the board evaluation values of new offspring in a partial game-search tree. Experiments show that solution accuracy and search speed are greatly improved by our algorithm.

  8. Chess games: a model for RNA based computation.

    PubMed

    Cukras, A R; Faulhammer, D; Lipton, R J; Landweber, L F

    1999-10-01

    Here we develop the theory of RNA computing and a method for solving the 'knight problem' as an instance of a satisfiability (SAT) problem. Using only biological molecules and enzymes as tools, we developed an algorithm for solving the knight problem (3 x 3 chess board) using a 10-bit combinatorial pool and sequential RNase H digestions. The results of preliminary experiments presented here reveal that the protocol recovers far more correct solutions than expected at random, but the persistence of errors still presents the greatest challenge.
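
    The underlying SAT instance is small enough to check exhaustively in silico; in the sketch below each of the 512 values of a 9-bit word encodes one board configuration, and a configuration is kept if no knight attacks another (the attack table lists the knight moves on the 3 x 3 board, the center square 4 having none).

        KNIGHT_ATTACKS = {0: (5, 7), 1: (6, 8), 2: (3, 7), 3: (2, 8),
                          5: (0, 6), 6: (1, 5), 7: (0, 2), 8: (1, 3)}

        def valid(bits):
            occupied = [sq for sq in range(9) if bits >> sq & 1]
            return all(not (bits >> a & 1)
                       for sq in occupied for a in KNIGHT_ATTACKS.get(sq, ()))

        solutions = [b for b in range(512) if valid(b)]
        print(len(solutions), "of 512 configurations are conflict-free")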

  9. The star identification, pointing and tracking system of UVSTAR, an attached payload instrument system for the Shuttle Hitchhiker-M platform

    NASA Technical Reports Server (NTRS)

    Decarlo, Francesco; Stalio, Roberto; Trampus, Paolo; Broadfoot, A. Lyle; Sandel, Bill R.; Sicuranza, Giovanni

    1993-01-01

    We describe an algorithm for star identification and pointing/tracking of a spaceborne electro-optical system and simulation analyses to test the algorithm. The algorithm will be implemented in the guiding system of UVSTAR, a spectrographic telescope for observations of astronomical and planetary sources operating in the 500-1250 Å waveband at approximately 1 Å resolution. The experiment is an attached payload and will fly as a Hitchhiker-M payload on the Shuttle. UVSTAR includes capabilities for independent target acquisition and tracking. The spectrograph package has internal gimbals that allow angular movement of plus or minus 3 deg from the central position. Rotation about the azimuth axis (parallel to the Shuttle z axis) and elevation axis (parallel to the Shuttle x axis) will actively position the field of view to center the target of interest in the fields of the spectrographs. The algorithm is based on an on-board catalog of stars. To identify star fields, the algorithm compares the positions of stars recorded by the guiding imager to positions computed from the on-board catalog. When the field has been identified, its position within the guiding imager field of view can be used to compute the pointing corrections necessary to point to a target of interest. In tracking mode, the software uses the past history to predict the quasi-periodic attitude control motions of the Shuttle and sends pointing commands to cancel the motion and stabilize UVSTAR on the target. The guiding imager (guider) will have an 80-mm focal length and f/1.4 optics giving a field of view of 6 deg x 4.5 deg using a 385 x 288 pixel intensified CCD. It will be capable of providing high accuracy (better than 2 arc-sec) attitude determination from coarse (6 deg x 4.5 deg) initial knowledge of the pointing direction, and of pointing toward the target. It will also be capable of tracking at the same high accuracy with a processing time of less than a few hundredths of a second.

  10. Architecture and Implementation of OpenPET Firmware and Embedded Software

    PubMed Central

    Abu-Nimeh, Faisal T.; Ito, Jennifer; Moses, William W.; Peng, Qiyu; Choong, Woon-Seng

    2016-01-01

    OpenPET is an open source, modular, extendible, and high-performance platform suitable for multi-channel data acquisition and analysis. Due to the flexibility of the hardware, firmware, and software architectures, the platform is capable of interfacing with a wide variety of detector modules not only in medical imaging but also in homeland security applications. Analog signals from radiation detectors share similar characteristics – a pulse whose area is proportional to the deposited energy and whose leading edge is used to extract a timing signal. As a result, a generic design method of the platform is adopted for the hardware, firmware, and software architectures and implementations. The analog front-end is hosted on a module called a Detector Board, where each board can filter, combine, timestamp, and process multiple channels independently. The processed data is formatted and sent through a backplane bus to a module called Support Board, where 1 Support Board can host up to eight Detector Board modules. The data in the Support Board, coming from 8 Detector Board modules, can be aggregated or correlated (if needed) depending on the algorithm implemented or runtime mode selected. It is then sent out to a computer workstation for further processing. The number of channels (detector modules), to be processed, mandates the overall OpenPET System Configuration, which is designed to handle up to 1,024 channels using 16-channel Detector Boards in the Standard System Configuration and 16,384 channels using 32-channel Detector Boards in the Large System Configuration. PMID:27110034

  11. Prior image constrained scatter correction in cone-beam computed tomography image-guided radiation therapy.

    PubMed

    Brunner, Stephen; Nett, Brian E; Tolakanahalli, Ranjini; Chen, Guang-Hong

    2011-02-21

    X-ray scatter is a significant problem in cone-beam computed tomography when thicker objects and larger cone angles are used, as scattered radiation can lead to reduced contrast and CT number inaccuracy. Advances have been made in x-ray computed tomography (CT) by incorporating a high quality prior image into the image reconstruction process. In this paper, we extend this idea to correct scatter-induced shading artifacts in cone-beam CT image-guided radiation therapy. Specifically, this paper presents a new scatter correction algorithm which uses a prior image with low scatter artifacts to reduce shading artifacts in cone-beam CT images acquired under conditions of high scatter. The proposed correction algorithm begins with an empirical hypothesis that the target image can be written as a weighted summation of a series of basis images that are generated by raising the raw cone-beam projection data to different powers, and then, reconstructing using the standard filtered backprojection algorithm. The weight for each basis image is calculated by minimizing the difference between the target image and the prior image. The performance of the scatter correction algorithm is qualitatively and quantitatively evaluated through phantom studies using a Varian 2100 EX System with an on-board imager. Results show that the proposed scatter correction algorithm using a prior image with low scatter artifacts can substantially mitigate scatter-induced shading artifacts in both full-fan and half-fan modes.
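
    The weighting step lends itself to a compact sketch; the NumPy illustration below assumes non-negative projection data, a placeholder fbp() reconstructor, and an arbitrary set of powers, none of which are taken from the paper.

        import numpy as np

        def scatter_corrected(projections, prior, fbp, powers=(0.6, 0.8, 1.0, 1.2)):
            # basis images: FBP reconstructions of the raw projections raised
            # to different powers (projections assumed non-negative)
            basis = [fbp(projections ** p) for p in powers]
            A = np.stack([b.ravel() for b in basis], axis=1)  # (pixels, n_basis)
            # weights minimizing the difference to the low-scatter prior image
            w, *_ = np.linalg.lstsq(A, prior.ravel(), rcond=None)
            return (A @ w).reshape(prior.shape)               # corrected image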

  12. Performance Assessment of Different Pulse Reconstruction Algorithms for the ATHENA X-Ray Integral Field Unit

    NASA Technical Reports Server (NTRS)

    Peille, Phillip; Ceballos, Maria Teresa; Cobo, Beatriz; Wilms, Joern; Bandler, Simon; Smith, Stephen J.; Dauser, Thomas; Brand, Thorsten; Den Haretog, Roland; de Plaa, Jelle

    2016-01-01

    The X-ray Integral Field Unit (X-IFU) microcalorimeter, on board Athena, with its focal plane comprising 3840 Transition Edge Sensors (TESs) operating at 90 mK, will provide unprecedented spectral-imaging capability in the 0.2-12 keV energy range. It will rely on the on-board digital processing of current pulses induced by the heat deposited in the TES absorber, so as to recover the energy of each individual event. Assessing the capabilities of the pulse reconstruction is required to understand the overall scientific performance of the X-IFU, notably in terms of energy resolution degradation with both increasing energies and count rates. Using synthetic data streams generated by the X-IFU End-to-End simulator, we present here a comprehensive benchmark of various pulse reconstruction techniques, ranging from standard optimal filtering to more advanced algorithms based on noise covariance matrices. Besides deriving the spectral resolution achieved by the different algorithms, a first assessment of the computing power and ground calibration needs is presented. Overall, all methods show similar performances, with the reconstruction based on noise covariance matrices showing the best improvement with respect to the standard optimal filtering technique. Due to prohibitive calibration needs, this method might however not be applicable to the X-IFU, and the best compromise currently appears to be the so-called resistance space analysis, which also features very promising high count rate capabilities.
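
    As a reference point, the "standard optimal filtering" baseline named above admits a very short frequency-domain form; the NumPy sketch below is the textbook estimator (noise_psd must hold one noise-power value per rfft bin), not the X-IFU flight algorithm.

        import numpy as np

        def optimal_filter_energy(pulse, template, noise_psd):
            # Weight each frequency bin by the pulse template over the noise
            # power; the result is the best-fit amplitude of the template,
            # i.e. the event's energy relative to the calibration pulse.
            D = np.fft.rfft(pulse)
            S = np.fft.rfft(template)
            num = np.sum((np.conj(S) * D).real / noise_psd)
            den = np.sum(np.abs(S) ** 2 / noise_psd)
            return num / den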

  13. An on-board pedestrian detection and warning system with features of side pedestrian

    NASA Astrophysics Data System (ADS)

    Cheng, Ruzhong; Zhao, Yong; Wong, ChupChung; Chan, KwokPo; Xu, Jiayao; Wang, Xin'an

    2012-01-01

    Automotive Active Safety (AAS) is the main branch of intelligent automobile study, and pedestrian detection is the key problem of AAS because it is related to the casualties of most vehicle accidents. For on-board pedestrian detection algorithms, the main problem is to balance efficiency and accuracy so that the on-board system is usable in real scenes, so an on-board pedestrian detection and warning system with an algorithm that considers the features of side pedestrians is proposed. The system includes two modules: pedestrian detection and warning. Haar features and a cascade of stage classifiers trained by AdaBoost are applied first, and then HOG features and an SVM classifier are used to reject false positives. To make these time-consuming algorithms usable in real time, a divide-window method together with an operator context scanning (OCS) method is applied to increase efficiency. To merge the velocity information of the vehicle, the distance of the detected pedestrian is also obtained, so the system can judge whether there is a potential danger for the pedestrian in front. With a new dataset captured in an urban environment with side pedestrians on zebra crossings, the embedded system and its algorithm achieve on-board usable results for side pedestrian detection.
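
    A hedged sketch of this two-stage cascade using OpenCV's stock detectors as stand-ins (the paper trains its own Haar/AdaBoost and HOG/SVM models and adds side-pedestrian features, divide-window, and OCS scanning, none of which are reproduced here):

        import cv2

        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_fullbody.xml")
        hog = cv2.HOGDescriptor()
        hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

        def detect(frame_gray):
            # stage 1: fast Haar cascade proposes candidate windows
            candidates = cascade.detectMultiScale(frame_gray, 1.1, 3)
            confirmed = []
            for (x, y, w, h) in candidates:
                roi = frame_gray[max(0, y - h // 4): y + h + h // 4,
                                 max(0, x - w // 4): x + w + w // 4]
                # stage 2: HOG + SVM refines away false positives
                found, _ = hog.detectMultiScale(roi)
                if len(found):
                    confirmed.append((x, y, w, h))
            return confirmed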

  14. Nuclear IHC enumeration: A digital phantom to evaluate the performance of automated algorithms in digital pathology.

    PubMed

    Niazi, Muhammad Khalid Khan; Abas, Fazly Salleh; Senaras, Caglar; Pennell, Michael; Sahiner, Berkman; Chen, Weijie; Opfer, John; Hasserjian, Robert; Louissaint, Abner; Shana'ah, Arwa; Lozanski, Gerard; Gurcan, Metin N

    2018-01-01

    Automatic and accurate detection of positive and negative nuclei from images of immunostained tissue biopsies is critical to the success of digital pathology. The evaluation of most nuclei detection algorithms relies on manually generated ground truth prepared by pathologists, which is unfortunately time-consuming and suffers from inter-pathologist variability. In this work, we developed a digital immunohistochemistry (IHC) phantom that can be used for evaluating computer algorithms for enumeration of IHC positive cells. Our phantom development consists of two main steps: 1) extraction of individual nuclei as well as nuclei clumps of both positive and negative nuclei from real whole-slide images (WSIs), and 2) systematic placement of the extracted nuclei clumps on an image canvas. The resulting images are visually similar to the original tissue images. We created a set of 42 images with different concentrations of positive and negative nuclei. These images were evaluated by four board-certified pathologists in the task of estimating the ratio of positive to total number of nuclei. The resulting concordance correlation coefficients (CCC) between the pathologists and the true ratio range from 0.86 to 0.95 (point estimates). The same ratio was also computed by an automated computer algorithm, which yielded a CCC value of 0.99. Reading the phantom data with known ground truth, the human readers show substantial variability and lower average performance than the computer algorithm in terms of CCC. This shows the limitation of using a human reader panel to establish a reference standard for the evaluation of computer algorithms, thereby highlighting the usefulness of the phantom developed in this work. Using our phantom images, we further developed a function that can approximate the true ratio from the area of the positive and negative nuclei, hence avoiding the need to detect individual nuclei. The predicted ratios of 10 held-out images using the function (trained on 32 images) are within ±2.68% of the true ratio. Moreover, we also report the evaluation of a computerized image analysis method on the synthetic tissue dataset.

  15. On-Board Cryospheric Change Detection By The Autonomous Sciencecraft Experiment

    NASA Astrophysics Data System (ADS)

    Doggett, T.; Greeley, R.; Castano, R.; Cichy, B.; Chien, S.; Davies, A.; Baker, V.; Dohm, J.; Ip, F.

    2004-12-01

    The Autonomous Sciencecraft Experiment (ASE) is operating on board Earth Observing-1 (EO-1) with the Hyperion hyper-spectral visible/near-IR spectrometer. ASE science activities include autonomous monitoring of cryospheric changes, triggering the collection of additional data when change is detected and filtering of null data such as no change or cloud cover. This would have application to the study of cryospheres on Earth, Mars, and the icy moons of the outer solar system. A cryosphere classification algorithm, in combination with a previously developed cloud algorithm [1], was tested on board ten times from March through August 2004. The cloud algorithm correctly screened out three scenes with total cloud cover, while the cryosphere algorithm detected alpine snow cover in the Rocky Mountains, lake thaw near Madison, Wisconsin, and the presence and subsequent break-up of sea ice in the Barrow Strait of the Canadian Arctic. Hyperion has 220 bands ranging from 400 to 2400 nm, with a spatial resolution of 30 m/pixel and a spectral resolution of 10 nm. Limited on-board memory and processing speed imposed the constraint that only partially processed Level 0.5 data could be used, with dark image subtraction and gain factors applied but not full radiometric calibration. In addition, a maximum of 12 bands could be used for any stacked sequence of algorithms run for a scene on board. The cryosphere algorithm was developed to classify snow, water, ice, and land, using six Hyperion bands at 427, 559, 661, 864, 1245, and 1649 nm. Of these, only 427 nm overlaps with the bands used by the cloud algorithm. The cloud algorithm was developed with Level 1 data, which introduces complications because of the incomplete calibration of SWIR in Level 0.5 data, including a high level of noise in the 1377 nm band used by the cloud algorithm. Development of a more robust cryosphere classifier, including cloud classification specifically adapted to Level 0.5 data, is in progress for deployment on EO-1 as part of continued ASE operations. [1] Griffin, M.K. et al., Cloud Cover Detection Algorithm For EO-1 Hyperion Imagery, SPIE 17, 2003.
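
    Although the flight classifier's decision rules are not reproduced here, the band list above supports a standard illustration: a normalized-difference snow index (NDSI) built from the 559 nm and 1649 nm bands separates snow/ice from land, and a near-IR darkness test flags water. The thresholds below are generic textbook values, not ASE's.

        import numpy as np

        def classify(b559, b864, b1649, ndsi_thresh=0.4, water_thresh=0.05):
            # b559, b864, b1649: co-registered band images (scaled radiances)
            ndsi = (b559 - b1649) / (b559 + b1649 + 1e-9)
            labels = np.full(b559.shape, "land", dtype=object)
            labels[b864 < water_thresh] = "water"      # dark in the near-IR
            labels[ndsi > ndsi_thresh] = "snow/ice"    # bright green, dark SWIR
            return labels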

  16. GPU-accelerated algorithms for compressed signals recovery with application to astronomical imagery deblurring

    NASA Astrophysics Data System (ADS)

    Fiandrotti, Attilio; Fosson, Sophie M.; Ravazzi, Chiara; Magli, Enrico

    2018-04-01

    Compressive sensing promises to enable bandwidth-efficient on-board compression of astronomical data by lifting the encoding complexity from the source to the receiver. The signal is recovered off-line, exploiting GPUs' parallel computation capabilities to speed up the reconstruction process. However, inherent GPU hardware constraints limit the size of the recoverable signal and the speedup practically achievable. In this work, we design parallel algorithms that exploit the properties of circulant matrices for efficient GPU-accelerated sparse signal recovery. Our approach reduces the memory requirements, allowing us to recover very large signals with limited memory. In addition, it achieves a tenfold signal recovery speedup thanks to ad-hoc parallelization of matrix-vector multiplications and matrix inversions. Finally, we practically demonstrate our algorithms in a typical application of circulant matrices: deblurring a sparse astronomical image in the compressed domain.
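
    The key property such kernels exploit is that a circulant matrix is diagonalized by the DFT, so a matrix-vector product reduces to FFTs and a pointwise multiply, with no need to materialize the matrix; a NumPy sketch of the idea (the paper's GPU kernels are more elaborate):

        import numpy as np

        def circulant_matvec(first_col, x):
            # C @ x where C is circulant with the given first column
            return np.fft.ifft(np.fft.fft(first_col) * np.fft.fft(x)).real

        # e.g. sensing a sparse length-n signal with a random circulant operator
        n = 1024
        c = np.random.randn(n)
        x = np.zeros(n)
        x[np.random.choice(n, 10, replace=False)] = 1.0
        y = circulant_matvec(c, x)  # measurement without storing the n x n matrix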

  17. Mastering the game of Go with deep neural networks and tree search

    NASA Astrophysics Data System (ADS)

    Silver, David; Huang, Aja; Maddison, Chris J.; Guez, Arthur; Sifre, Laurent; van den Driessche, George; Schrittwieser, Julian; Antonoglou, Ioannis; Panneershelvam, Veda; Lanctot, Marc; Dieleman, Sander; Grewe, Dominik; Nham, John; Kalchbrenner, Nal; Sutskever, Ilya; Lillicrap, Timothy; Leach, Madeleine; Kavukcuoglu, Koray; Graepel, Thore; Hassabis, Demis

    2016-01-01

    The game of Go has long been viewed as the most challenging of classic games for artificial intelligence owing to its enormous search space and the difficulty of evaluating board positions and moves. Here we introduce a new approach to computer Go that uses ‘value networks’ to evaluate board positions and ‘policy networks’ to select moves. These deep neural networks are trained by a novel combination of supervised learning from human expert games, and reinforcement learning from games of self-play. Without any lookahead search, the neural networks play Go at the level of state-of-the-art Monte Carlo tree search programs that simulate thousands of random games of self-play. We also introduce a new search algorithm that combines Monte Carlo simulation with value and policy networks. Using this search algorithm, our program AlphaGo achieved a 99.8% winning rate against other Go programs, and defeated the human European Go champion by 5 games to 0. This is the first time that a computer program has defeated a human professional player in the full-sized game of Go, a feat previously thought to be at least a decade away.

  18. Mastering the game of Go with deep neural networks and tree search.

    PubMed

    Silver, David; Huang, Aja; Maddison, Chris J; Guez, Arthur; Sifre, Laurent; van den Driessche, George; Schrittwieser, Julian; Antonoglou, Ioannis; Panneershelvam, Veda; Lanctot, Marc; Dieleman, Sander; Grewe, Dominik; Nham, John; Kalchbrenner, Nal; Sutskever, Ilya; Lillicrap, Timothy; Leach, Madeleine; Kavukcuoglu, Koray; Graepel, Thore; Hassabis, Demis

    2016-01-28

    The game of Go has long been viewed as the most challenging of classic games for artificial intelligence owing to its enormous search space and the difficulty of evaluating board positions and moves. Here we introduce a new approach to computer Go that uses 'value networks' to evaluate board positions and 'policy networks' to select moves. These deep neural networks are trained by a novel combination of supervised learning from human expert games, and reinforcement learning from games of self-play. Without any lookahead search, the neural networks play Go at the level of state-of-the-art Monte Carlo tree search programs that simulate thousands of random games of self-play. We also introduce a new search algorithm that combines Monte Carlo simulation with value and policy networks. Using this search algorithm, our program AlphaGo achieved a 99.8% winning rate against other Go programs, and defeated the human European Go champion by 5 games to 0. This is the first time that a computer program has defeated a human professional player in the full-sized game of Go, a feat previously thought to be at least a decade away.

  19. Solder creep-fatigue interactions with flexible leaded parts

    NASA Technical Reports Server (NTRS)

    Ross, R. G., Jr.; Wen, L. C.; Mon, G. R.; Jetter, E.

    1992-01-01

    With flexible leaded parts, the solder-joint failure process involves a complex interplay of creep and fatigue mechanisms. To better understand the role of creep in typical multi-hour cyclic loading conditions, a specialized non-linear finite-element creep simulation computer program has been formulated. The numerical algorithm includes the complete part-lead-solder-PWB system, accounting for strain-rate dependence of creep on applied stress and temperature, and the role of the part-lead dimensions and flexibility that determine the total creep deflection (solder strain range) during stress relaxation. The computer program has been used to explore the effects of various solder creep-fatigue parameters such as lead height and stiffness, thermal-cycle test profile, and part/board differential thermal expansion properties. One of the most interesting findings is the strong presence of unidirectional creep-ratcheting that occurs during thermal cycling due to temperature dominated strain-rate effects. To corroborate the solder fatigue model predictions, a number of carefully controlled thermal-cycle tests have been conducted using special bimetallic test boards.

  20. Smart lighting using a liquid crystal modulator

    NASA Astrophysics Data System (ADS)

    Baril, Alexandre; Thibault, Simon; Galstian, Tigran

    2017-08-01

    Now that LEDs have massively invaded the illumination market, a clear trend has emerged for more efficient and targeted lighting. The project described here is at the leading edge of the trend and aims at developing an evaluation board to test smart lighting applications. This is made possible thanks to a new liquid crystal light modulator recently developed for broadening LED light beams. The modulator is controlled by electrical signals and is characterized by a linear working zone. This feature allows the implementation of a closed loop control with a sensor feedback. This project shows that the use of computer vision is a promising opportunity for cheap closed loop control. The developed evaluation board integrates the liquid crystal modulator, a webcam, a LED light source and all the required electronics to implement a closed loop control with a computer vision algorithm.

  1. Hardware accelerator design for change detection in smart camera

    NASA Astrophysics Data System (ADS)

    Singh, Sanjay; Dunga, Srinivasa Murali; Saini, Ravi; Mandal, A. S.; Shekhar, Chandra; Chaudhury, Santanu; Vohra, Anil

    2011-10-01

    Smart cameras are important components in human-computer interaction. In any remote surveillance scenario, smart cameras have to take intelligent decisions to select frames of significant change in order to minimize communication and processing overhead. Among the many algorithms for change detection, a clustering-based scheme was proposed for smart camera systems. However, such an algorithm achieves only a low frame rate, far from real-time requirements, on the general-purpose processors (such as the PowerPC) available on FPGAs. This paper proposes a hardware accelerator capable of detecting changes in a scene in real time, using the clustering-based change detection scheme. The system is designed and simulated using VHDL and implemented on a Xilinx XUP Virtex-II Pro FPGA board. The resulting frame rate is 30 frames per second for QVGA resolution in gray scale.
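
    For orientation, a minimal NumPy sketch of per-pixel cluster-based change detection follows (a generic formulation for illustration; the paper's VHDL accelerator and its exact update rules are not reproduced): each pixel keeps K intensity cluster centres, and a pixel is flagged when no centre explains its new value.

        import numpy as np

        def detect_changes(frame, centers, thresh=20.0, alpha=0.05):
            # centers: (K, H, W) per-pixel intensity cluster centres
            d = np.abs(centers - frame[None])  # distance to every centre
            nearest = d.argmin(axis=0)         # best-matching centre per pixel
            changed = d.min(axis=0) > thresh   # no centre explains the pixel
            rows, cols = np.indices(frame.shape)
            sel = (nearest, rows, cols)
            # for brevity the nearest centre always adapts; a fuller scheme
            # would instead spawn a new cluster for "changed" pixels
            centers[sel] = (1 - alpha) * centers[sel] + alpha * frame
            return changed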

  2. Intelligent On-Board Processing in the Sensor Web

    NASA Astrophysics Data System (ADS)

    Tanner, S.

    2005-12-01

    Most existing sensing systems are designed as passive, independent observers. They are rarely aware of the phenomena they observe, and are even less likely to be aware of what other sensors are observing within the same environment. Increasingly, intelligent processing of sensor data is taking place in real-time, using computing resources on-board the sensor or the platform itself. One can imagine a sensor network consisting of intelligent and autonomous space-borne, airborne, and ground-based sensors. These sensors will act independently of one another, yet each will be capable of both publishing and receiving sensor information, observations, and alerts among other sensors in the network. Furthermore, these sensors will be capable of acting upon this information, perhaps altering acquisition properties of their instruments, changing the location of their platform, or updating processing strategies for their own observations to provide responsive information or additional alerts. Such autonomous and intelligent sensor networking capabilities provide significant benefits for collections of heterogeneous sensors within any environment. They are crucial for multi-sensor observations and surveillance, where real-time communication with external components and users may be inhibited, and the environment may be hostile. In all environments, mission automation and communication capabilities among disparate sensors will enable quicker response to interesting, rare, or unexpected events. Additionally, an intelligent network of heterogeneous sensors provides the advantage that all of the sensors can benefit from the unique capabilities of each sensor in the network. The University of Alabama in Huntsville (UAH) is developing a unique approach to data processing, integration and mining through the use of the Adaptive On-Board Data Processing (AODP) framework. AODP is a key foundation technology for autonomous internetworking capabilities to support situational awareness by sensors and their on-board processes. The two primary research areas for this project are (1) the on-board processing and communications framework itself, and (2) data mining algorithms targeted to the needs and constraints of the on-board environment. The team is leveraging its experience in on-board processing, data mining, custom data processing, and sensor network design. Several unique UAH-developed technologies are employed in the AODP project, including EVE, an EnVironmEnt for on-board processing, and the data mining tools included in the Algorithm Development and Mining (ADaM) toolkit.

  3. The techniques of quality operations computational and experimental researches of the launch vehicles in the drawing-board stage

    NASA Astrophysics Data System (ADS)

    Rozhaeva, K.

    2018-01-01

    The aim of this research is to improve the quality of the design process at the research stage of developing an active on-board descent system for spent launch-vehicle stages with liquid-propellant rocket engines, by simulating the gasification of unused fuel residues in the tanks. A technique for designing the gasification process of liquid rocket propellant residues in the tank is proposed, which increases the accuracy of the calculation results by finding and fixing errors in the calculation algorithm. Experimental modeling of the evaporation of a model liquid in a confined reservoir on an experimental stand is also described: by rejecting false measurements against given criteria and detecting faults, it enhances the reliability of the experimental results and reduces the cost of the experiments.

  4. Testing of the on-board attitude determination and control algorithms for SAMPEX

    NASA Technical Reports Server (NTRS)

    Mccullough, Jon D.; Flatley, Thomas W.; Henretty, Debra A.; Markley, F. Landis; San, Josephine K.

    1993-01-01

    Algorithms for on-board attitude determination and control of the Solar, Anomalous, and Magnetospheric Particle Explorer (SAMPEX) have been expanded to include a constant gain Kalman filter for the spacecraft angular momentum, pulse width modulation for the reaction wheel command, an algorithm to avoid pointing the Heavy Ion Large Telescope (HILT) instrument boresight along the spacecraft velocity vector, and the addition of digital sun sensor (DSS) failure detection logic. These improved algorithms were tested in a closed-loop environment for three orbit geometries, one with the sun perpendicular to the orbit plane, and two with the sun near the orbit plane - at Autumnal Equinox and at Winter Solstice. The closed-loop simulator was enhanced and used as a truth model for the control systems' performance evaluation and sensor/actuator contingency analysis. The simulations were performed on a VAX 8830 using a prototype version of the on-board software.
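
    The appeal of a constant-gain filter on flight hardware is that no covariance propagation is needed on board; a minimal sketch of the assumed form (not the SAMPEX flight code) for the angular-momentum estimate:

        import numpy as np

        K = 0.1  # fixed Kalman gain, tuned on the ground (illustrative value)

        def momentum_filter(h_est, h_meas, torque, dt):
            # h_est, h_meas, torque: 3-vectors in the body frame
            h_pred = h_est + torque * dt           # propagate with known torques
            return h_pred + K * (h_meas - h_pred)  # constant-gain update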

  5. The discrete Fourier transform algorithm for determining decay constants—Implementation using a field programmable gate array

    NASA Astrophysics Data System (ADS)

    Bostrom, G.; Atkinson, D.; Rice, A.

    2015-04-01

    Cavity ringdown spectroscopy (CRDS) uses the exponential decay constant of light exiting a high-finesse resonance cavity to determine analyte concentration, typically via absorption. We present a high-throughput data acquisition system that determines the decay constant in near real time using the discrete Fourier transform algorithm on a field programmable gate array (FPGA). A commercially available, high-speed, high-resolution, analog-to-digital converter evaluation board system is used as the platform for the system, after minor hardware and software modifications. The system outputs decay constants at a maximum rate of 4.4 kHz using an 8192-point fast Fourier transform by processing the intensity decay signal between ringdown events. We present the details of the system, including the modifications required to adapt the evaluation board to accurately process the exponential waveform. We also demonstrate the performance of the system, both stand-alone and incorporated into our existing CRDS system. Details of FPGA, microcontroller, and circuitry modifications are provided in the Appendix, and computer code is available upon request from the authors.
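
    The essence of the DFT method can be shown in a few lines: for an ideal ringdown A*exp(-t/tau), the first nonzero DFT bin Y_1 satisfies tau = -Im(Y_1)/(omega_1 * Re(Y_1)), so a single complex bin yields the decay constant without a nonlinear fit. The NumPy illustration below (sample rate and decay values are arbitrary, not the paper's) recovers tau to within discretization error.

        import numpy as np

        fs, n = 100e6, 8192                 # sample rate (Hz) and record length
        t = np.arange(n) / fs
        tau_true = 10e-6                    # 10 microsecond ringdown
        signal = np.exp(-t / tau_true)

        Y1 = np.fft.rfft(signal)[1]         # first nonzero frequency bin
        omega1 = 2 * np.pi * fs / n         # its angular frequency (rad/s)
        tau_est = -Y1.imag / (omega1 * Y1.real)
        print(f"true {tau_true * 1e6:.2f} us, estimated {tau_est * 1e6:.2f} us")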

  6. Calibrating LOFAR using the Black Board Selfcal System

    NASA Astrophysics Data System (ADS)

    Pandey, V. N.; van Zwieten, J. E.; de Bruyn, A. G.; Nijboer, R.

    2009-09-01

    The Black Board SelfCal (BBS) system is designed as the final processing system to carry out the calibration of LOFAR in an efficient way. In this paper we give a brief description of its architectural and software design including its distributed computing approach. A confusion limited deep all sky image (from 38-62 MHz) by calibrating LOFAR test data with the BBS suite is shown as a sample result. The present status and future directions of development of BBS suite are also touched upon. Although BBS is mainly developed for LOFAR, it may also be used to calibrate other instruments once their specific algorithms are plugged in.

  7. A micro-CL system and its applications

    NASA Astrophysics Data System (ADS)

    Wei, Zenghui; Yuan, Lulu; Liu, Baodong; Wei, Cunfeng; Sun, Cuili; Yin, Pengfei; Wei, Long

    2017-11-01

    The computed laminography (CL) method is preferable to computed tomography for the non-destructive testing of plate-like objects. A micro-CL system is developed for three-dimensional imaging of plate-like objects. The details of the micro-CL system are described, including the system architecture, scanning modes, and reconstruction algorithm. The experiment results of plate-like fossils, an insulated gate bipolar transistor (IGBT) module, ball grid array packaging, and a printed circuit board are also presented to demonstrate micro-CL's ability for 3D imaging of flat specimens and its universal applicability in various fields.

  8. A micro-CL system and its applications.

    PubMed

    Wei, Zenghui; Yuan, Lulu; Liu, Baodong; Wei, Cunfeng; Sun, Cuili; Yin, Pengfei; Wei, Long

    2017-11-01

    The computed laminography (CL) method is preferable to computed tomography for the non-destructive testing of plate-like objects. A micro-CL system is developed for three-dimensional imaging of plate-like objects. The details of the micro-CL system are described, including the system architecture, scanning modes, and reconstruction algorithm. The experiment results of plate-like fossils, an insulated gate bipolar transistor (IGBT) module, ball grid array packaging, and a printed circuit board are also presented to demonstrate micro-CL's ability for 3D imaging of flat specimens and its universal applicability in various fields.

  9. Architecture and Implementation of OpenPET Firmware and Embedded Software

    DOE PAGES

    Abu-Nimeh, Faisal T.; Ito, Jennifer; Moses, William W.; ...

    2016-01-11

    OpenPET is an open source, modular, extendible, and high-performance platform suitable for multi-channel data acquisition and analysis. Due to the versatility of the hardware, firmware, and software architectures, the platform is capable of interfacing with a wide variety of detector modules not only in medical imaging but also in homeland security applications. Analog signals from radiation detectors share similar characteristics - a pulse whose area is proportional to the deposited energy and whose leading edge is used to extract a timing signal. As a result, a generic design method of the platform is adopted for the hardware, firmware, and software architectures and implementations. The analog front-end is hosted on a module called a Detector Board, where each board can filter, combine, timestamp, and process multiple channels independently. The processed data is formatted and sent through a backplane bus to a module called Support Board, where 1 Support Board can host up to eight Detector Board modules. The data in the Support Board, coming from 8 Detector Board modules, can be aggregated or correlated (if needed) depending on the algorithm implemented or runtime mode selected. It is then sent out to a computer workstation for further processing. The number of channels (detector modules), to be processed, mandates the overall OpenPET System Configuration, which is designed to handle up to 1,024 channels using 16-channel Detector Boards in the Standard System Configuration and 16,384 channels using 32-channel Detector Boards in the Large System Configuration.

  10. Localization algorithms for micro-channel x-ray telescope on board SVOM space mission

    NASA Astrophysics Data System (ADS)

    Gosset, L.; Götz, D.; Osborne, J.; Willingale, R.

    2016-07-01

    SVOM is a French-Chinese space mission to be launched in 2021, whose goal is the study of Gamma-Ray Bursts, the most powerful stellar explosions in the Universe. The Micro-channel X-ray Telescope (MXT) is an X-ray focusing telescope, on board SVOM, with a field of view of 1 degree (working in the 0.2-10 keV energy band), dedicated to the rapid follow-up of the Gamma-Ray Burst counterparts and to their precise localization (smaller than 2 arc minutes). In order to reduce the optics mass and to have an angular resolution of a few arc minutes, a "lobster-eye" configuration has been chosen. Using a numerical model of the MXT Point Spread Function (PSF) we simulated MXT observations of point sources in order to develop and test different localization algorithms to be implemented on board MXT. We included preliminary estimations of the instrumental and sky background. The on-board algorithms have to combine speed and precision (the brightest sources are expected to be localized with a precision better than 10 arc seconds in the MXT reference frame). We present the comparison between different methods such as barycentre and PSF fitting in one or two dimensions. The temporal performance of the algorithms is being tested using the X-ray afterglow database of the XRT telescope on board the NASA Swift satellite.
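
    Of the methods compared, the barycentre is the simplest to sketch; a minimal Python version (not the flight code) of a centroid localizer on a background-subtracted detector image could look like this:

```python
import numpy as np

def barycentre_localization(img):
    """Centroid (barycentre) source localization on a detector image.
    A crude median background subtraction stands in for the instrumental
    and sky background estimation mentioned in the abstract."""
    img = np.clip(img - np.median(img), 0.0, None)
    total = img.sum()
    if total == 0:
        raise ValueError("no counts above background")
    ys, xs = np.indices(img.shape)
    # intensity-weighted mean position, in pixel coordinates
    return (xs * img).sum() / total, (ys * img).sum() / total
```

    A PSF fit (e.g., least-squares fitting of the modeled PSF in one or two dimensions) trades extra computation for better precision on bright sources, which is the speed/precision balance the abstract describes.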

  11. Determination of stores pointing error due to wing flexibility under flight load

    NASA Technical Reports Server (NTRS)

    Lokos, William A.; Bahm, Catherine M.; Heinle, Robert A.

    1995-01-01

    The in-flight elastic wing twist of a fighter-type aircraft was studied to provide for an improved on-board real-time computed prediction of pointing variations of three wing store stations. This is an important capability to correct sensor pod alignment variation or to establish initial conditions of iron bombs or smart weapons prior to release. The original algorithm was based upon coarse measurements. The electro-optical Flight Deflection Measurement System (FDMS) measured the deformed wing shape in flight under maneuver loads to provide a higher-resolution database from which an improved twist prediction algorithm could be developed. The FDMS produced excellent repeatable data. In addition, a NASTRAN finite-element analysis was performed to provide additional elastic deformation data. The FDMS data combined with the NASTRAN analysis indicated that an improved prediction algorithm could be derived by using a different set of aircraft parameters, namely normal acceleration, stores configuration, Mach number, and gross weight.
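
    A prediction algorithm driven by those flight parameters could be as simple as a per-store-configuration regression fitted to the FDMS database; the sketch below shows the idea with a least-squares fit. All numbers are synthetic placeholders, not FDMS data.

```python
import numpy as np

# Synthetic training rows: [normal acceleration (g), Mach, gross weight (kg)],
# with the FDMS-measured store pointing error (mrad) as the target.
X = np.array([[1.0, 0.80, 15000.0],
              [3.0, 0.85, 14500.0],
              [5.0, 0.90, 14000.0],
              [2.0, 0.95, 13500.0]])
y = np.array([0.5, 1.6, 2.8, 1.0])

# Affine model: bias + one coefficient per aircraft parameter.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_pointing_error(nz, mach, gross_weight):
    """Real-time prediction from the current flight condition."""
    return coef @ np.array([1.0, nz, mach, gross_weight])
```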

  12. The Kepler Science Operations Center Pipeline Framework Extensions

    NASA Technical Reports Server (NTRS)

    Klaus, Todd C.; Cote, Miles T.; McCauliff, Sean; Girouard, Forrest R.; Wohler, Bill; Allen, Christopher; Chandrasekaran, Hema; Bryson, Stephen T.; Middour, Christopher; Caldwell, Douglas A.; hide

    2010-01-01

    The Kepler Science Operations Center (SOC) is responsible for several aspects of the Kepler Mission, including managing targets, generating on-board data compression tables, monitoring photometer health and status, processing the science data, and exporting the pipeline products to the mission archive. We describe how the generic pipeline framework software developed for Kepler is extended to achieve these goals, including pipeline configurations for processing science data and other support roles, and custom unit of work generators that control how the Kepler data are partitioned and distributed across the computing cluster. We describe the interface between the Java software that manages the retrieval and storage of the data for a given unit of work and the MATLAB algorithms that process these data. The data for each unit of work are packaged into a single file that contains everything needed by the science algorithms, allowing these files to be used to debug and evolve the algorithms offline.

  13. Exploiting Artificial Intelligence for Analysis and Data Selection on-board the Puerto Rico CubeSat

    NASA Astrophysics Data System (ADS)

    Bergman, J. E. S.; Bruhn, F.; Funk, P.; Isham, B.; Rincón-Charris, A. A.; Capo-Lugo, P.; Åhlén, L.

    2015-10-01

    CubeSat missions are constrained by the limited resources provided by the platform. Many payload providers have learned to cope with the low mass and power, but the poor telemetry allocation remains a bottleneck. In the end, it is the data delivered to ground which determines the value of the mission. However, transmitting more data does not necessarily guarantee high value, since the value also depends on the data quality. By exploiting fast on-board computing and efficient artificial intelligence (AI) algorithms for analysis and data selection, one could optimize the usage of the telemetry link and so increase the value of the mission. In a pilot project, we attempt to do this on the Puerto Rico CubeSat, where science objectives include the acquisition of space weather data to aid better understanding of the Sun-Earth connection.

  14. Real-Time On-Board Airborne Demonstration of High-Speed On-Board Data Processing for Science Instruments (HOPS)

    NASA Technical Reports Server (NTRS)

    Beyon, Jeffrey Y.; Ng, Tak-Kwong; Davis, Mitchell J.; Adams, James K.; Bowen, Stephen C.; Fay, James J.; Hutchinson, Mark A.

    2015-01-01

    The project called High-Speed On-Board Data Processing for Science Instruments (HOPS) has been funded by the NASA Earth Science Technology Office (ESTO) Advanced Information Systems Technology (AIST) program since April 2012. The HOPS team recently completed two flight campaigns during the summer of 2014 on two different aircraft with two different science instruments. The first flight campaign was in July 2014, based at NASA Langley Research Center (LaRC) in Hampton, VA, on NASA's HU-25 aircraft. The science instrument that flew with HOPS was the Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) CarbonHawk Experiment Simulator (ACES) funded by NASA's Instrument Incubator Program (IIP). The second campaign was in August 2014, based at NASA Armstrong Flight Research Center (AFRC) in Palmdale, CA, on NASA's DC-8 aircraft. HOPS flew with the Multifunctional Fiber Laser Lidar (MFLL) instrument developed by Excelis Inc. The goal of the campaigns was to perform an end-to-end demonstration of the capabilities of the HOPS prototype system (HOPS COTS) while running the most computationally intensive part of the ASCENDS algorithm real-time on-board. The comparison of the two flight campaigns and the results of the functionality tests of the HOPS COTS are presented in this paper.

  15. Fast and Adaptive Lossless On-Board Hyperspectral Data Compression System for Space Applications

    NASA Technical Reports Server (NTRS)

    Aranki, Nazeeh; Bakhshi, Alireza; Keymeulen, Didier; Klimesh, Matthew

    2009-01-01

    Efficient on-board lossless hyperspectral data compression reduces the data volume necessary to meet NASA and DoD limited downlink capabilities. The technique also improves signature extraction, object recognition and feature classification capabilities by providing exact reconstructed data on constrained downlink resources. At JPL a novel, adaptive and predictive technique for lossless compression of hyperspectral data was recently developed. This technique uses an adaptive filtering method and achieves a combination of low complexity and compression effectiveness that far exceeds state-of-the-art techniques currently in use. The JPL-developed 'Fast Lossless' algorithm requires no training data or other specific information about the nature of the spectral bands for a fixed instrument dynamic range. It is of low computational complexity and thus well-suited for implementation in hardware, which makes it practical for flight implementations of pushbroom instruments. A prototype of the compressor (and decompressor) of the algorithm is available in software, but this implementation may not meet speed and real-time requirements of some space applications. Hardware acceleration provides performance improvements of 10x-100x vs. the software implementation (about 1M samples/sec on a Pentium IV machine). This paper describes a hardware implementation of the JPL-developed 'Fast Lossless' compression algorithm on a Field Programmable Gate Array (FPGA). The FPGA implementation targets the current state-of-the-art FPGAs (Xilinx Virtex IV and V families) and compresses one sample every clock cycle to provide a fast and practical real-time solution for space applications.
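
    To make the adaptive-predictive idea concrete, here is a much-simplified Python sketch of one band's prediction stage: a sign-LMS filter predicts each sample from its predecessors, and only the integer residuals would be passed to an entropy coder (e.g., Golomb/Rice). This is an illustration of the general technique, not the JPL 'Fast Lossless' algorithm itself.

```python
import numpy as np

def predict_residuals(band, mu=1e-6, order=3):
    """Adaptive predictive stage of a lossless compressor (illustrative).
    Returns integer residuals; decompression runs the same filter in reverse.
    The step size mu must be small relative to the data scale."""
    w = np.zeros(order)          # adaptive predictor weights
    hist = np.zeros(order)       # most recent samples, newest first
    residuals = []
    for x in band.astype(np.int64):
        pred = int(round(w @ hist))
        err = x - pred
        residuals.append(err)
        w += mu * np.sign(err) * hist   # sign-LMS update toward smaller error
        hist = np.roll(hist, 1)
        hist[0] = x
    return np.array(residuals)
```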

  16. Smart Board in the Music Classroom

    ERIC Educational Resources Information Center

    Baker, Jean

    2007-01-01

    A Smart Board is an interactive whiteboard connected to a computer and a data projector. Images can be projected on the board, and the Smart Board can be used as a computer. A person can control the computer using his finger, and can mark directly on the screen using various colors. Best of all, users can easily import many types of information,…

  17. FPGA implementation of sparse matrix algorithm for information retrieval

    NASA Astrophysics Data System (ADS)

    Bojanic, Slobodan; Jevtic, Ruzica; Nieto-Taladriz, Octavio

    2005-06-01

    Information text data retrieval requires a tremendous amount of processing time because of the size of the data and the complexity of information retrieval algorithms. In this paper a solution to this problem is proposed via hardware-supported information retrieval algorithms. Reconfigurable computing may adopt frequent hardware modifications through its tailorable hardware and exploits parallelism for a given application through reconfigurable and flexible hardware units. The degree of parallelism can be tuned to the data. In this work we implemented the standard BLAS (basic linear algebra subprogram) sparse matrix algorithm named Compressed Sparse Row (CSR), which has been shown to be more efficient in terms of storage space requirement and query-processing time than the other sparse matrix algorithms for information retrieval applications. Although the inverted index algorithm has been treated as the de facto standard for information retrieval for years, an alternative approach that stores the index of a text collection in a sparse matrix structure is gaining more attention. This approach performs query processing using sparse matrix-vector multiplication and, due to parallelization, achieves a substantial efficiency gain over the sequential inverted index. The parallel implementations of the information retrieval kernel presented in this work target the Virtex II Field Programmable Gate Array (FPGA) board from Xilinx. A recent development in scientific applications is the use of FPGAs to achieve high-performance results. Computational results are compared to implementations on other platforms. The design achieves a high level of parallelism for the overall function while retaining highly optimised hardware within the processing unit.
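
    The kernel in question is CSR sparse matrix-vector multiplication, where each row (query term weighting against a document) is independent and therefore parallelizable in hardware. A minimal reference version in Python:

```python
import numpy as np

def csr_matvec(values, col_idx, row_ptr, x):
    """y = A @ x for a CSR matrix; rows are independent, which is the
    parallelism the FPGA implementation exploits."""
    y = np.zeros(len(row_ptr) - 1)
    for i in range(len(y)):
        start, end = row_ptr[i], row_ptr[i + 1]
        y[i] = values[start:end] @ x[col_idx[start:end]]
    return y

# 3x3 example: row 0 has entries at columns 0 and 2, row 1 is empty, row 2 at column 1.
values = np.array([2.0, 1.0, 3.0])
col_idx = np.array([0, 2, 1])
row_ptr = np.array([0, 2, 2, 3])
print(csr_matvec(values, col_idx, row_ptr, np.array([1.0, 2.0, 3.0])))  # [5. 0. 6.]
```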

  18. An acquisition system for CMOS imagers with a genuine 10 Gbit/s bandwidth

    NASA Astrophysics Data System (ADS)

    Guérin, C.; Mahroug, J.; Tromeur, W.; Houles, J.; Calabria, P.; Barbier, R.

    2012-12-01

    This paper presents a high-data-throughput acquisition system for pixel detector readout such as CMOS imagers. This CMOS acquisition board offers a genuine 10 Gbit/s bandwidth to the workstation and can provide an on-line and continuous high frame rate imaging capability. On-line processing can be implemented either on the Data Acquisition Board or on the multi-core workstation, depending on the complexity of the algorithms. The different parts composing the acquisition board have been designed to be used first with a single-photon detector called LUSIPHER (800×800 pixels), developed in our laboratory for scientific applications ranging from nano-photonics to adaptive optics. The architecture of the acquisition board is presented and the performance achieved by the produced boards is described. The future developments (hardware and software) concerning the on-line implementation of algorithms dedicated to single-photon imaging are also outlined.

  19. Spline Trajectory Algorithm Development: Bezier Curve Control Point Generation for UAVs

    NASA Technical Reports Server (NTRS)

    Howell, Lauren R.; Allen, B. Danette

    2016-01-01

    A greater need for sophisticated autonomous piloting systems has arisen in direct correlation with the ubiquity of Unmanned Aerial Vehicle (UAV) technology. Whether surveying unknown or unexplored areas of the world, collecting scientific data from regions in which humans are typically incapable of entering, locating lost or wanted persons, or delivering emergency supplies, an unmanned vehicle moving in close proximity to people and other vehicles should fly smoothly and predictably. The mathematical application of spline interpolation can play an important role in autopilots' on-board trajectory planning. Spline interpolation allows for the connection of Three-Dimensional Euclidean Space coordinates through a continuous set of smooth curves. This paper explores the motivation, application, and methodology used to compute the spline control points, which shape the curves in such a way that the autopilot trajectory is able to meet vehicle-dynamics limitations. The spline algorithms developed to generate these curves supply autopilots with the information necessary to compute vehicle paths through a set of coordinate waypoints.
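
    For reference, once the control points are computed, evaluating the resulting Bezier segment is straightforward; a small Python sketch using De Casteljau's algorithm (a standard method, not necessarily the paper's exact evaluator):

```python
import numpy as np

def de_casteljau(control_points, t):
    """Evaluate a Bezier curve at parameter t in [0, 1] by repeated
    linear interpolation of its control points."""
    pts = np.asarray(control_points, dtype=float)
    while len(pts) > 1:
        pts = (1.0 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

# Cubic 3D segment: endpoints are interpolated, the inner two points shape the curve.
ctrl = [(0, 0, 0), (1, 2, 0.5), (3, 2, 1.0), (4, 0, 1.5)]
path = [de_casteljau(ctrl, t) for t in np.linspace(0.0, 1.0, 50)]
```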

  20. ICC '86; Proceedings of the International Conference on Communications, Toronto, Canada, June 22-25, 1986, Conference Record. Volumes 1, 2, & 3

    NASA Astrophysics Data System (ADS)

    Papers are presented on ISDN, mobile radio systems and techniques for digital connectivity, centralized and distributed algorithms in computer networks, communications networks, quality assurance and impact on cost, adaptive filters in communications, the spread spectrum, signal processing, video communication techniques, and digital satellite services. Topics discussed include performance evaluation issues for integrated protocols, packet network operations, the computer network theory and multiple-access, microwave single sideband systems, switching architectures, fiber optic systems, wireless local communications, modulation, coding, and synchronization, remote switching, software quality, transmission, and expert systems in network operations. Consideration is given to wide area networks, image and speech processing, office communications application protocols, multimedia systems, customer-controlled network operations, digital radio systems, channel modeling and signal processing in digital communications, earth station/on-board modems, computer communications system performance evaluation, source encoding, compression, and quantization, and adaptive communications systems.

  1. Recycling of WEEE: Characterization of spent printed circuit boards from mobile phones and computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yamane, Luciana Harue, E-mail: lucianayamane@uol.com.br; Tavares de Moraes, Viviane, E-mail: tavares.vivi@gmail.com; Crocce Romano Espinosa, Denise, E-mail: espinosa@usp.br

    Highlights: > This paper presents new and important data on the characterization of waste electric and electronic equipment. > Copper concentration is increasing in mobile phones and remaining constant in personal computers. > Printed circuit boards from mobile phones and computers should not be mixed prior to treatment. - Abstract: This paper presents a comparison between printed circuit boards from computers and mobile phones. Since printed circuit boards are becoming more complex and smaller, the amount of materials is constantly changing. The main objective of this work was to characterize spent printed circuit boards from computers and mobile phones, applying mineral processing techniques to separate the metal, ceramic, and polymer fractions. The processing was performed by comminution in a hammer mill, followed by particle size analysis, and by magnetic and electrostatic separation. Aqua regia leaching, loss-on-ignition and chemical analysis (inductively coupled plasma atomic emission spectroscopy - ICP-OES) were carried out to determine the composition of printed circuit boards and the metal-rich fraction. The composition of the studied mobile phone printed circuit boards (PCB-MP) was 63 wt.% metals, 24 wt.% ceramics and 13 wt.% polymers; that of the printed circuit boards from the studied personal computers (PCB-PC) was 45 wt.% metals, 27 wt.% polymers and 28 wt.% ceramics. The chemical analysis showed that the copper concentration in printed circuit boards from personal computers was 20 wt.% and in printed circuit boards from mobile phones was 34.5 wt.%. According to the characteristics of each type of printed circuit board, the recovery of precious metals may be the main goal of the recycling process of printed circuit boards from personal computers, and the recovery of copper should be the main goal of the recycling process of printed circuit boards from mobile phones. Hence, these printed circuit boards should not be mixed prior to treatment. The results of this paper show that copper concentration is increasing in mobile phones and remaining constant in personal computers.

  2. A Fine-Grained Pipelined Implementation for Large-Scale Matrix Inversion on FPGA

    NASA Astrophysics Data System (ADS)

    Zhou, Jie; Dou, Yong; Zhao, Jianxun; Xia, Fei; Lei, Yuanwu; Tang, Yuxing

    Large-scale matrix inversion plays an important role in many applications. However, to the best of our knowledge, there is no FPGA-based implementation. In this paper, we explore the possibility of accelerating large-scale matrix inversion on FPGA. To exploit the computational potential of FPGA, we introduce a fine-grained parallel algorithm for matrix inversion. A scalable linear array of processing elements (PEs), which is the core component of the FPGA accelerator, is proposed to implement this algorithm. A total of 12 PEs can be integrated into an Altera StratixII EP2S130F1020C5 FPGA on our self-designed board. Experimental results show that a factor of 2.6 speedup and a maximum power-performance of 41 can be achieved compared to a Pentium Dual CPU with double SSE threads.

  3. Creating a transducer electronic datasheet using I2C serial EEPROM memory and PIC32-based microcontroller development board

    NASA Astrophysics Data System (ADS)

    Croitoru, Bogdan; Tulbure, Adrian; Abrudean, Mihail; Secara, Mihai

    2015-02-01

    The present paper describes a software method for creating / managing one type of Transducer Electronic Datasheet (TEDS) according to the IEEE 1451.4 standard, in order to develop a prototype of a smart multi-sensor platform (with up to ten different analog sensors simultaneously connected) with Plug and Play capabilities over ETHERNET and Wi-Fi. The experiments used one analog temperature sensor, one analog light sensor, one PIC32-based microcontroller development board with analog and digital I/O ports and other computing resources, and one 24LC256 I2C (Inter-Integrated Circuit standard) serial Electrically Erasable Programmable Read Only Memory (EEPROM) with 32 KB of available space and a 3-byte internal buffer for page writes (1 byte for data and 2 bytes for address). A prototype algorithm was developed for writing and reading TEDS information to / from I2C EEPROM memories using the standard C language (with up to ten different TEDS blocks coexisting in the same EEPROM device at once). The algorithm is able to write and read one type of TEDS: transducer information with standard TEDS content. A second software application, written on the VB.NET platform, was developed in order to access the EEPROM sensor information from a computer through a serial interface (USB).
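
    The paper's implementation is in C on a PIC32; as a rough analogue of the same write pattern, here is a hedged Python sketch for a Linux host using the smbus2 library. The device address, bus number, and payload are assumptions for illustration; the key point is the 2-byte memory address prefix and page-boundary splitting that 24LC256-class EEPROMs require.

```python
import time
from smbus2 import SMBus, i2c_msg

EEPROM_ADDR = 0x50   # typical 24LC256 bus address (A0-A2 tied low); assumption
PAGE = 64            # 24LC256 physical page size in bytes

def eeprom_write(bus, mem_addr, data):
    """Write bytes starting at mem_addr, never crossing a page boundary;
    each transfer is a 2-byte memory address followed by the payload."""
    while data:
        n = min(PAGE - (mem_addr % PAGE), len(data))
        msg = i2c_msg.write(EEPROM_ADDR,
                            [mem_addr >> 8, mem_addr & 0xFF] + list(data[:n]))
        bus.i2c_rdwr(msg)
        time.sleep(0.005)        # wait out the internal write cycle (~5 ms)
        mem_addr += n
        data = data[n:]

with SMBus(1) as bus:            # I2C bus number is platform-dependent
    eeprom_write(bus, 0x0000, b"TEDS#0|analog temperature sensor|...")
```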

  4. Spacecube V2.0 Micro Single Board Computer

    NASA Technical Reports Server (NTRS)

    Petrick, David J. (Inventor); Geist, Alessandro (Inventor); Lin, Michael R. (Inventor); Crum, Gary R. (Inventor)

    2017-01-01

    A single board computer system radiation hardened for space flight includes a printed circuit board having a top side and bottom side; a reconfigurable field programmable gate array (FPGA) processor device disposed on the top side; a connector disposed on the top side; a plurality of peripheral components mounted on the bottom side; and wherein a size of the single board computer system is not greater than approximately 7 cm × 7 cm.

  5. Evaluation of odometry algorithm performances using a railway vehicle dynamic model

    NASA Astrophysics Data System (ADS)

    Allotta, B.; Pugi, L.; Ridolfi, A.; Malvezzi, M.; Vettori, G.; Rindi, A.

    2012-05-01

    In modern railway Automatic Train Protection and Automatic Train Control systems, odometry is a safety-relevant on-board subsystem which estimates the instantaneous speed and the travelled distance of the train; high reliability of the odometry estimate is fundamental, since an error on the train position may lead to a potentially dangerous overestimation of the distance available for braking. To improve the accuracy of the odometry estimate, data fusion of different inputs coming from a redundant sensor layout may be used. Simplified two-dimensional models of railway vehicles have usually been used for Hardware-in-the-Loop test rig testing of conventional odometry algorithms and of on-board safety-relevant subsystems (like the Wheel Slide Protection braking system) in which the train speed is estimated from measurements of the wheel angular speed. Two-dimensional models are not suitable for developing solutions like inertial localisation algorithms (using 3D accelerometers and 3D gyroscopes) or for introducing Global Positioning System (or similar) receivers and magnetometers. In order to test these algorithms correctly and increase odometry performance, a three-dimensional multibody model of a railway vehicle has been developed, using Matlab-Simulink™, including an efficient contact model which can simulate degraded adhesion conditions (the development and prototyping of odometry algorithms involve the simulation of realistic environmental conditions). In this paper, the authors show how a 3D railway vehicle model, able to simulate the complex interactions arising between different on-board subsystems, can be useful to evaluate the performance of odometry algorithms and of safety-relevant on-board subsystems.
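
    A minimal sketch of the kind of data fusion such an odometry subsystem performs: a one-state Kalman filter that integrates a longitudinal accelerometer and corrects with the wheel-speed-derived velocity, skipping the correction when slip/slide is flagged (degraded adhesion). This is an illustration of the principle, not the paper's algorithm; all tuning values are assumptions.

```python
import numpy as np

def fuse_speed(acc, wheel_speed, slipping, dt=0.01, q=0.05, r=0.5):
    """Fuse accelerometer and wheel-speed measurements into a train speed
    estimate. acc: longitudinal acceleration (m/s^2); wheel_speed: speed
    derived from wheel angular rate (m/s); slipping: adhesion flags."""
    v, P = 0.0, 1.0                      # state (speed) and its variance
    estimates = []
    for a, z, slip in zip(acc, wheel_speed, slipping):
        v, P = v + a * dt, P + q         # predict: integrate acceleration
        if not slip:                     # wheel speed unreliable under slip/slide
            K = P / (P + r)              # Kalman gain
            v, P = v + K * (z - v), (1.0 - K) * P
        estimates.append(v)
    return np.array(estimates)
```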

  6. Reducing On-Board Computer Propagation Errors Due to Omitted Geopotential Terms by Judicious Selection of Uploaded State Vector

    NASA Technical Reports Server (NTRS)

    Greatorex, Scott (Editor); Beckman, Mark

    1996-01-01

    Several future, and some current, missions use an on-board computer (OBC) force model that is very limited. The OBC geopotential force model typically includes only the J(2), J(3), J(4), C(2,2) and S(2,2) terms to model non-spherical Earth gravitational effects. The Tropical Rainfall Measuring Mission (TRMM), Wide-field Infrared Explorer (WIRE), Transition Region and Coronal Explorer (TRACE), Submillimeter Wave Astronomy Satellite (SWAS), and X-ray Timing Explorer (XTE) all plan to use this geopotential force model on-board. The Solar, Anomalous, and Magnetospheric Particle Explorer (SAMPEX) is already flying this geopotential force model. Past analysis has shown that one of the leading sources of error in the OBC propagated ephemeris is the omission of the higher-order geopotential terms. However, these same analyses have shown a wide range of accuracies for the OBC ephemerides. Analysis was performed using EUVE state vectors that showed the EUVE four-day OBC propagated ephemerides varied in accuracy from 200 m to 45 km depending on the initial vector used to start the propagation. The vectors used in the study were from a single EUVE orbit at one-minute intervals in the ephemeris. Since each vector propagated practically the same path as the others, the differences seen had to be due to differences in the initial state vector only. An algorithm was developed that will optimize the epoch of the uploaded state vector. Proper selection can reduce the previous errors of anywhere from 200 m to 45 km to generally less than one km over four days of propagation. This would enable flight projects to minimize state vector uploads to the spacecraft. Additionally, this method is superior to other methods in that no additional orbit estimates need be done. The definitive ephemeris generated on the ground can be used as long as the proper epoch is chosen. This algorithm can be easily coded in software that would pick the epoch within a specified time range that would minimize the OBC propagation error. This technique should greatly improve the accuracy of the OBC propagation on-board future spacecraft such as TRMM, WIRE, SWAS, and XTE without increasing complexity in the ground processing.
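
    The epoch-selection idea reduces to a scan: propagate with the limited OBC force model from each candidate epoch of the definitive ephemeris and keep the epoch with the smallest worst-case position error. A hedged Python sketch, where `definitive_state` and `propagate_obc` are placeholders for the ground ephemeris lookup and the on-board (J2-J4, C22/S22) propagator:

```python
import numpy as np

def pick_upload_epoch(epochs, definitive_state, propagate_obc, eval_times):
    """Return (best_epoch, worst_case_error). definitive_state(t) gives the
    ground-determined state at t; propagate_obc(state, t0, t) propagates it
    with the limited on-board force model. Both are assumed callables."""
    best = None
    for t0 in epochs:
        x0 = definitive_state(t0)
        worst = max(
            np.linalg.norm(propagate_obc(x0, t0, t)[:3] - definitive_state(t)[:3])
            for t in eval_times
        )
        if best is None or worst < best[1]:
            best = (t0, worst)
    return best
```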

  7. Visual Servoing for an Autonomous Hexarotor Using a Neural Network Based PID Controller.

    PubMed

    Lopez-Franco, Carlos; Gomez-Avila, Javier; Alanis, Alma Y; Arana-Daniel, Nancy; Villaseñor, Carlos

    2017-08-12

    In recent years, unmanned aerial vehicles (UAVs) have gained significant attention. However, we face two major drawbacks when working with UAVs: high nonlinearities and unknown position in 3D space, since the vehicle is not provided with on-board sensors that can measure its position with respect to a global coordinate system. In this paper, we present a real-time implementation of a servo control, integrating vision sensors with a neural proportional integral derivative (PID) controller, in order to develop a hexarotor image-based visual servo control (IBVS) that knows the position of the robot by using a velocity vector as a reference to control the hexarotor position. This integration requires tight coordination between control algorithms, models of the system to be controlled, sensors, hardware and software platforms and well-defined interfaces, to allow the real-time implementation, as well as the design of different processing stages with their respective communication architecture. All of these issues and others support the view that real-time implementations can be considered a difficult task. For the purpose of showing the effectiveness of the sensor integration and control algorithm in addressing these issues on a highly nonlinear system with noisy sensors such as cameras, experiments were performed on the Asctec Firefly on-board computer, including both simulation and experimental results.
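
    As a minimal sketch of the control building block (not the authors' controller): a discrete PID step whose gains are passed in per call, which is where a neural network tuner would plug in; the example error and gain values are placeholders.

```python
class PID:
    """Discrete PID step. In a neural-PID scheme the gains would be
    produced by a network at each step rather than fixed, so they are
    arguments to step()."""
    def __init__(self):
        self.integral = 0.0
        self.prev_err = None

    def step(self, err, kp, ki, kd, dt):
        self.integral += err * dt
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / dt
        self.prev_err = err
        return kp * err + ki * self.integral + kd * deriv

# IBVS-style loop: the error is desired minus measured image-plane velocity.
pid = PID()
command = pid.step(err=0.2, kp=1.5, ki=0.1, kd=0.05, dt=0.02)
```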

  8. Visual Servoing for an Autonomous Hexarotor Using a Neural Network Based PID Controller

    PubMed Central

    Lopez-Franco, Carlos; Alanis, Alma Y.; Arana-Daniel, Nancy; Villaseñor, Carlos

    2017-01-01

    In recent years, unmanned aerial vehicles (UAVs) have gained significant attention. However, we face two major drawbacks when working with UAVs: high nonlinearities and unknown position in 3D space, since the vehicle is not provided with on-board sensors that can measure its position with respect to a global coordinate system. In this paper, we present a real-time implementation of a servo control, integrating vision sensors with a neural proportional integral derivative (PID) controller, in order to develop a hexarotor image-based visual servo control (IBVS) that knows the position of the robot by using a velocity vector as a reference to control the hexarotor position. This integration requires tight coordination between control algorithms, models of the system to be controlled, sensors, hardware and software platforms and well-defined interfaces, to allow the real-time implementation, as well as the design of different processing stages with their respective communication architecture. All of these issues and others support the view that real-time implementations can be considered a difficult task. For the purpose of showing the effectiveness of the sensor integration and control algorithm in addressing these issues on a highly nonlinear system with noisy sensors such as cameras, experiments were performed on the Asctec Firefly on-board computer, including both simulation and experimental results. PMID:28805689

  9. Using ACIS on the Chandra X-ray Observatory as a Particle Radiation Monitor II

    NASA Technical Reports Server (NTRS)

    Grant, C. E.; Ford, P. G.; Bautz, M. W.; ODell, S. L.

    2012-01-01

    The Advanced CCD Imaging Spectrometer (ACIS) is an instrument on the Chandra X-ray Observatory. CCDs are vulnerable to radiation damage, particularly by soft protons in the radiation belts and solar storms. The Chandra team has implemented procedures to protect ACIS during high-radiation events, including autonomous protection triggered by an on-board radiation monitor. Elevated temperatures have reduced the effectiveness of the on-board monitor. The ACIS team has developed an algorithm which uses data from the CCDs themselves to detect periods of high radiation, and a flight software patch applying this algorithm is currently active on-board the instrument. In this paper, we explore the ACIS response to particle radiation through comparisons to a number of external measures of the radiation environment. We hope to better understand the efficiency of the algorithm as a function of the flux and spectrum of the particles and the time-profile of the radiation event.
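
    The flight algorithm is not detailed in the abstract; purely as an illustration of the general idea of detecting high-radiation periods from the detector's own data, a hedged sketch of a smoothed event-rate threshold trigger (window and threshold are assumptions):

```python
import numpy as np

def radiation_trigger(frame_event_rates, window=8, threshold=3.0):
    """Flag frames whose smoothed event rate exceeds `threshold` times a
    quiescent baseline. A simplified stand-in, not the ACIS flight patch."""
    rates = np.asarray(frame_event_rates, dtype=float)
    baseline = np.median(rates)                      # quiescent level
    kernel = np.ones(window) / window
    smoothed = np.convolve(rates, kernel, mode="same")
    return smoothed > threshold * baseline
```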

  10. Multi-channel pre-beamformed data acquisition system for research on advanced ultrasound imaging methods.

    PubMed

    Cheung, Chris C P; Yu, Alfred C H; Salimi, Nazila; Yiu, Billy Y S; Tsang, Ivan K H; Kerby, Benjamin; Azar, Reza Zahiri; Dickie, Kris

    2012-02-01

    The lack of open access to the pre-beamformed data of an ultrasound scanner has limited the research of novel imaging methods to a few privileged laboratories. To address this need, we have developed a pre-beamformed data acquisition (DAQ) system that can collect data over 128 array elements in parallel from the Ultrasonix series of research-purpose ultrasound scanners. Our DAQ system comprises three system-level blocks: 1) a connector board that interfaces with the array probe and the scanner through a probe connector port; 2) a main board that triggers DAQ and controls data transfer to a computer; and 3) four receiver boards that are each responsible for acquiring 32 channels of digitized raw data and storing them to the on-board memory. This system can acquire pre-beamformed data with 12-bit resolution when using a 40-MHz sampling rate. It houses a 16 GB RAM buffer that is sufficient to store 128 channels of pre-beamformed data for 8,000 to 25,000 transmit firings, depending on imaging depth, corresponding to nearly a 2-s period in typical imaging setups. Following the acquisition, the data can be transferred through a USB 2.0 link to a computer for offline processing and analysis. To evaluate the feasibility of using the DAQ system for advanced imaging research, two proof-of-concept investigations have been conducted on beamforming and plane-wave B-flow imaging. Results show that adaptive beamforming algorithms such as the minimum variance approach can generate sharper images of a wire cross-section whose diameter is equal to the imaging wavelength (150 μm in our example). Also, plane-wave B-flow imaging can provide more consistent visualization of blood speckle movement given the higher temporal resolution of this imaging approach (2500 fps in our example).
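
    Pre-beamformed channel data is exactly what offline beamforming experiments consume; as a baseline reference (simpler than the minimum-variance method the paper evaluates), a plane-wave delay-and-sum sketch for one focal point:

```python
import numpy as np

def delay_and_sum(rf, element_x, focus_x, focus_z, fs, c=1540.0):
    """Beamform one focal point from pre-beamformed channel data.
    rf: (n_elements, n_samples) raw channel data; element_x: element
    positions (m); plane-wave transmit assumed, so the transmit path
    is just the depth focus_z. c is the assumed speed of sound (m/s)."""
    out = 0.0
    for ch, xe in enumerate(element_x):
        dist = focus_z + np.hypot(focus_x - xe, focus_z)  # tx + rx path
        idx = int(round(dist / c * fs))                   # round-trip sample index
        if idx < rf.shape[1]:
            out += rf[ch, idx]
    return out
```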

  11. Automated recognition of helium speech. Phase I: Investigation of microprocessor based analysis/synthesis system

    NASA Astrophysics Data System (ADS)

    Jelinek, H. J.

    1986-01-01

    This is the Final Report of Electronic Design Associates on its Phase I SBIR project. The purpose of this project is to develop a method for correcting helium speech, as experienced in diver-surface communication. The goal of the Phase I study was to design, prototype, and evaluate a real-time helium speech corrector system based upon digital signal processing techniques. The general approach was to develop hardware (an IBM PC board) to digitize helium speech and software (a LAMBDA computer based simulation) to translate the speech. As planned in the study proposal, this initial prototype may now be used to assess expected performance from a self-contained real-time system which uses an identical algorithm. The Final Report details the work carried out to produce the prototype system. The four major project tasks were: a signal processing scheme for converting helium speech to normal-sounding speech was generated; the signal processing scheme was simulated on a general-purpose (LAMBDA) computer, with actual helium speech supplied to the simulation and the converted speech generated; an IBM-PC based 14-bit data input/output board was designed and built; and a bibliography of references on speech processing was generated.

  12. Development of the HERMIES III mobile robot research testbed at Oak Ridge National Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Manges, W.W.; Hamel, W.R.; Weisbin, C.R.

    1988-01-01

    The latest robot in the Hostile Environment Robotic Machine Intelligence Experiment Series (HERMIES) is now under development at the Center for Engineering Systems Advanced Research (CESAR) in the Oak Ridge National Laboratory. The HERMIES III robot incorporates a larger-than-human-size 7-degree-of-freedom manipulator mounted on a 2-degree-of-freedom mobile platform, including a variety of sensors and computers. The deployment of this robot represents a significant increase in research capabilities for the CESAR laboratory. The initial on-board computer capacity of the robot exceeds that of 20 VAX 11/780s. The navigation and vision algorithms under development make extensive use of the on-board NCUBE hypercube computer, while the sensors are interfaced through five VME computers running the OS-9 real-time, multitasking operating system. This paper describes the motivation, key issues, and detailed design trade-offs of implementing the first phase (basic functionality) of the HERMIES III robot. 10 refs., 7 figs.

  13. Multispectral atmospheric mapping sensor of mesoscale water vapor features

    NASA Technical Reports Server (NTRS)

    Menzel, P.; Jedlovec, G.; Wilson, G.; Atkinson, R.; Smith, W.

    1985-01-01

    The Multispectral Atmospheric Mapping Sensor was checked out for specified spectral response and detector noise performance in the eight visible and three infrared (6.7, 11.2, 12.7 micron) spectral bands. A calibration algorithm was implemented for the infrared detectors. Engineering checkout flights on board the ER-2 produced imagery at 50 m resolution in which water vapor features in the 6.7 micron spectral band are most striking. These images were analyzed on the Man computer Interactive Data Access System (McIDAS). Ground truth and ancillary data were accessed to verify the calibration.

  14. Real-time spectral analysis of HRV signals: an interactive and user-friendly PC system.

    PubMed

    Basano, L; Canepa, F; Ottonello, P

    1998-01-01

    We present a real-time system, built around a PC and a low-cost data acquisition board, for the spectral analysis of the heart rate variability signal. The Windows-like operating environment on which it is based makes the computer program very user-friendly, even for non-specialized personnel. The Power Spectral Density is computed through the use of a hybrid method, in which a classical FFT analysis follows an autoregressive finite extension of the data; the stationarity of the sequence is continuously checked. The use of this algorithm gives a high degree of robustness to the spectral estimation. Moreover, always in real time, the FFT of every data block is computed and displayed in order to corroborate the results as well as to allow the user to interactively choose a proper AR model order.
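
    A minimal Python sketch of the hybrid AR-extension-plus-FFT idea (not the authors' implementation; the AR fit here uses plain least squares, and order/lengths are illustrative):

```python
import numpy as np

def hybrid_psd(rr, order=12, n_extend=256, nfft=1024):
    """Hybrid PSD estimate: fit an AR model to the detrended RR series,
    extend the data forward with the model, then apply a classical FFT."""
    x = np.asarray(rr, dtype=float)
    x = x - x.mean()
    # Least-squares AR fit: x[n] ~= sum_k a[k] * x[n-k]
    rows = np.array([x[i - order:i][::-1] for i in range(order, len(x))])
    a, *_ = np.linalg.lstsq(rows, x[order:], rcond=None)
    ext = list(x)
    for _ in range(n_extend):
        ext.append(a @ np.array(ext[-order:][::-1]))   # AR-predicted extension
    spec = np.fft.rfft(np.hanning(len(ext)) * np.array(ext), nfft)
    return (np.abs(spec) ** 2) / len(ext)
```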

  15. Low-power wearable respiratory sound sensing.

    PubMed

    Oletic, Dinko; Arsenali, Bruno; Bilas, Vedran

    2014-04-09

    Building upon the findings from the field of automated recognition of respiratory sound patterns, we propose a wearable wireless sensor implementing on-board respiratory sound acquisition and classification, to enable continuous monitoring of symptoms, such as asthmatic wheezing. Low-power consumption of such a sensor is required in order to achieve long autonomy. Considering that the power consumption of its radio is kept minimal if transmitting only upon (rare) occurrences of wheezing, we focus on optimizing the power consumption of the digital signal processor (DSP). Based on a comprehensive review of asthmatic wheeze detection algorithms, we analyze the computational complexity of common features drawn from short-time Fourier transform (STFT) and decision tree classification. Four algorithms were implemented on a low-power TMS320C5505 DSP. Their classification accuracies were evaluated on a dataset of prerecorded respiratory sounds in two operating scenarios of different detection fidelities. The execution times of all algorithms were measured. The best classification accuracy of over 92%, while occupying only 2.6% of the DSP's processing time, is obtained for the algorithm featuring the time-frequency tracking of shapes of crests originating from wheezing, with spectral features modeled using energy.
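
    A hedged sketch of the STFT-feature-plus-decision-tree pattern the review covers (the specific band, features, and tree depth here are assumptions, not the paper's tuned algorithm):

```python
import numpy as np
from scipy.signal import stft
from sklearn.tree import DecisionTreeClassifier

def frame_features(audio, fs=8000):
    """Per-frame spectral features of the kind used for wheeze detection:
    band energy and spectral peakiness in an assumed 100-2500 Hz band."""
    f, _, Z = stft(audio, fs=fs, nperseg=256)
    mag = np.abs(Z)
    band = (f >= 100) & (f <= 2500)
    energy = mag[band].sum(axis=0)
    peakiness = mag[band].max(axis=0) / (mag[band].mean(axis=0) + 1e-12)
    return np.column_stack([energy, peakiness])

# Train on labeled frames, then classify new recordings frame by frame:
clf = DecisionTreeClassifier(max_depth=4)
# clf.fit(X_train, y_train)
# labels = clf.predict(frame_features(new_recording))
```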

  16. Digital data, composite video multiplexer and demultiplexer boards for an IBM PC/AT compatible computer

    NASA Technical Reports Server (NTRS)

    Smith, Dean Lance

    1993-01-01

    Work continued on the design of two IBM PC/AT compatible computer interface boards. The boards will permit digital data to be transmitted over a composite video channel from the Orbiter. One board combines data with a composite video signal. The other board strips the data from the video signal.

  17. Evaluation of the OpenCL AES Kernel using the Intel FPGA SDK for OpenCL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Zheming; Yoshii, Kazutomo; Finkel, Hal

    The OpenCL standard is an open programming model for accelerating algorithms on heterogeneous computing systems. OpenCL extends the C-based programming language for developing portable codes on different platforms such as CPUs, Graphics Processing Units (GPUs), Digital Signal Processors (DSPs) and Field Programmable Gate Arrays (FPGAs). The Intel FPGA SDK for OpenCL is a suite of tools that allows developers to abstract away the complex FPGA-based development flow for a high-level software development flow. Users can focus on the design of hardware-accelerated kernel functions in OpenCL and then direct the tools to generate the low-level FPGA implementations. The approach makes the FPGA-based development more accessible to software users as the need for hybrid computing using CPUs and FPGAs increases. It can also significantly reduce the hardware development time, as users can evaluate different ideas with a high-level language without deep FPGA domain knowledge. In this report, we evaluate the performance of the AES kernel using the Intel FPGA SDK for OpenCL and a Nallatech 385A FPGA board. Compared to the M506 module, the board provides more hardware resources for a larger design exploration space. The kernel performance is measured with the compute kernel throughput, an upper bound to the FPGA throughput. The report presents the experimental results in detail. The Appendix lists the kernel source code.

  18. Random On-Board Pixel Sampling (ROPS) X-Ray Camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Zhehui; Iaroshenko, O.; Li, S.

    Recent advances in compressed sensing theory and algorithms offer new possibilities for high-speed X-ray camera design. In many CMOS cameras, each pixel has an independent on-board circuit that includes an amplifier, noise rejection, signal shaper, an analog-to-digital converter (ADC), and optional in-pixel storage. When X-ray images are sparse, i.e., when one of the following cases is true: (a) the number of pixels with true X-ray hits is much smaller than the total number of pixels; (b) the X-ray information is redundant; or (c) some prior knowledge about the X-ray images exists, sparse sampling may be allowed. Here we first illustrate the feasibility of random on-board pixel sampling (ROPS) using an existing set of X-ray images, followed by a discussion about signal to noise as a function of pixel size. Next, we describe a possible circuit architecture to achieve random pixel access and in-pixel storage. The combination of a multilayer architecture, sparse on-chip sampling, and computational image techniques is expected to facilitate the development and applications of high-speed X-ray camera technology.
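
    To make the sampling step concrete, a tiny Python stand-in for the readout side: select a random subset of pixel addresses and return them with their values; a compressed-sensing solver would reconstruct the full frame from these measurements. This is an illustration, not the proposed circuit's behavior.

```python
import numpy as np

rng = np.random.default_rng(0)

def rops_readout(frame, fraction=0.1):
    """Randomly sample `fraction` of the pixels of a frame, returning
    (addresses, values) as a sparse measurement set."""
    n = frame.size
    idx = rng.choice(n, size=int(fraction * n), replace=False)
    return idx, frame.ravel()[idx]
```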

  19. Development of a two wheeled self balancing robot with speech recognition and navigation algorithm

    NASA Astrophysics Data System (ADS)

    Rahman, Md. Muhaimin; Ashik-E-Rasul, Haq, Nowab. Md. Aminul; Hassan, Mehedi; Hasib, Irfan Mohammad Al; Hassan, K. M. Rafidh

    2016-07-01

    This paper discusses the modeling, construction, and development of the navigation algorithm of a two-wheeled self-balancing mobile robot in an enclosure. In this paper, we discuss the design of two of the main controller algorithms, namely PID algorithms, on the robot model. Simulation is performed in the SIMULINK environment. The controller is developed primarily for self-balancing of the robot and also for its positioning. As for the navigation in an enclosure, a template matching algorithm is proposed for precise measurement of the robot position. The navigation system needs to be calibrated before the navigation process starts. Almost all of the earlier template matching algorithms that can be found in the open literature can only trace the robot, but the algorithm proposed here can also locate the position of other objects in an enclosure, like furniture, tables etc. This enables the robot to know the exact location of every stationary object in the enclosure. Moreover, some additional features, such as Speech Recognition and Object Detection, are added. For Object Detection, the single-board computer Raspberry Pi is used. The system is programmed to analyze images captured via the camera, which are then processed through background subtraction, followed by active noise reduction.
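
    A standard way to implement the template matching step (shown here with OpenCV's normalized cross-correlation; the paper's exact variant is not specified, so treat this as a generic sketch):

```python
import cv2

def locate(template, scene):
    """Find the best match of `template` in `scene` via normalized
    cross-correlation; returns the bounding box and match score."""
    result = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
    _, score, _, top_left = cv2.minMaxLoc(result)   # max of the correlation map
    h, w = template.shape[:2]
    bottom_right = (top_left[0] + w, top_left[1] + h)
    return top_left, bottom_right, score
```

    Running this once per known object template (robot, furniture, tables) against the calibrated overhead view yields the position of every stationary object, which is the extension the paper claims over robot-only tracing.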

  20. Comparative assessment of techniques for initial pose estimation using monocular vision

    NASA Astrophysics Data System (ADS)

    Sharma, Sumant; D'Amico, Simone

    2016-06-01

    This work addresses the comparative assessment of initial pose estimation techniques for monocular navigation to enable formation-flying and on-orbit servicing missions. Monocular navigation relies on finding an initial pose, i.e., a coarse estimate of the attitude and position of the space resident object with respect to the camera, based on a minimum number of features from a three-dimensional computer model and a single two-dimensional image. The initial pose is estimated without the use of fiducial markers, without any range measurements, and without any a priori relative motion information. Prior work has been done to compare different pose estimators for terrestrial applications, but there is a lack of functional and performance characterization of such algorithms in the context of missions involving rendezvous operations in the space environment. Use of state-of-the-art pose estimation algorithms designed for terrestrial applications is challenging in space due to factors such as limited on-board processing power, low carrier-to-noise ratio, and high image contrasts. This paper focuses on performance characterization of three initial pose estimation algorithms in the context of such missions and suggests improvements.
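
    The core operation such estimators perform is Perspective-n-Point: recovering attitude and position from 2D-3D feature correspondences. A generic sketch using OpenCV's solver (not one of the paper's three algorithms; all point values and intrinsics below are illustrative placeholders):

```python
import cv2
import numpy as np

# 3D feature points from the target's computer model (model frame, metres)
# and their matched 2D detections in the single image (pixels); placeholders.
object_pts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0],
                       [0, 0, 1], [1, 1, 0], [1, 0, 1]], dtype=float)
image_pts = np.array([[320, 240], [420, 250], [310, 140],
                      [330, 260], [415, 150], [430, 270]], dtype=float)
K = np.array([[800, 0, 320],
              [0, 800, 240],
              [0,   0,   1]], dtype=float)   # assumed camera intrinsics

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, distCoeffs=None)
# rvec/tvec: coarse attitude (Rodrigues vector) and position of the target
# with respect to the camera, i.e., the initial pose.
```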

  1. On-board autonomous attitude maneuver planning for planetary spacecraft using genetic algorithms

    NASA Technical Reports Server (NTRS)

    Kornfeld, Richard P.

    2003-01-01

    A key enabling technology that leads to greater spacecraft autonomy is the capability to autonomously and optimally slew the spacecraft from and to different attitudes while operating under a number of celestial and dynamic constraints. The task of finding an attitude trajectory that meets all the constraints is a formidable one, in particular for orbiting or fly-by spacecraft where the constraints and initial and final conditions are of a time-varying nature. This paper presents an approach for attitude path planning that makes full use of a priori constraint knowledge and is computationally tractable enough to be executed on board a spacecraft. The approach is based on incorporating the constraints into a cost function and using a Genetic Algorithm to iteratively search for and optimize the solution. This results in a directed random search that explores a large part of the solution space while maintaining knowledge of good solutions from iteration to iteration. A solution obtained this way may be used 'as is' or as an initial solution to initialize additional deterministic optimization algorithms. A number of example simulations are presented, including the case examples of a generic Europa Orbiter spacecraft in cruise as well as in orbit around Europa. The search times are typically on the order of minutes, thus demonstrating the viability of the presented approach. The results are applicable to all future deep space missions where greater spacecraft autonomy is required. In addition, on-board autonomous attitude planning greatly facilitates navigation and science observation planning, thus also benefiting missions to planet Earth.
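
    A bare-bones sketch of the search scheme described (penalized cost plus a genetic algorithm); the encoding, population size, and operators are assumptions for illustration, not the paper's tuned planner:

```python
import numpy as np

rng = np.random.default_rng(1)

def ga_minimize(cost, dim, pop=60, gens=200, sigma=0.1):
    """Minimize `cost` over a real parameter vector encoding the attitude
    trajectory; celestial/dynamic constraints enter as penalty terms
    inside `cost`."""
    P = rng.uniform(-1.0, 1.0, (pop, dim))
    n_child = pop - pop // 4
    for _ in range(gens):
        fitness = np.array([cost(p) for p in P])
        elite = P[np.argsort(fitness)[: pop // 4]]              # selection
        parents = elite[rng.integers(0, len(elite), (n_child, 2))]
        mask = rng.random((n_child, dim)) < 0.5
        children = np.where(mask, parents[:, 0], parents[:, 1])  # uniform crossover
        children += sigma * rng.normal(size=children.shape)      # mutation
        P = np.vstack([elite, children])
    return P[np.argmin([cost(p) for p in P])]
```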

  2. Time-of-Travel Methods for Measuring Optical Flow on Board a Micro Flying Robot

    PubMed Central

    Vanhoutte, Erik; Mafrica, Stefano; Ruffier, Franck; Bootsma, Reinoud J.; Serres, Julien

    2017-01-01

    For use in autonomous micro air vehicles, visual sensors must not only be small, lightweight and insensitive to light variations; on-board autopilots also require fast and accurate optical flow measurements over a wide range of speeds. Using an auto-adaptive bio-inspired Michaelis–Menten Auto-adaptive Pixel (M2APix) analog silicon retina, in this article, we present comparative tests of two optical flow calculation algorithms operating under lighting conditions from 6×10⁻⁷ to 1.6×10⁻² W·cm⁻² (i.e., from 0.2 to 12,000 lux for human vision). Contrast “time of travel” between two adjacent light-sensitive pixels was determined by thresholding and by cross-correlating the two pixels’ signals, with measurement frequency up to 5 kHz for the 10 local motion sensors of the M2APix sensor. While both algorithms adequately measured optical flow between 25°/s and 1000°/s, thresholding gave rise to a lower precision, especially due to a larger number of outliers at higher speeds. Compared to thresholding, cross-correlation also allowed for a higher rate of optical flow output (99 Hz and 1195 Hz, respectively) but required substantially more computational resources. PMID:28287484
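
    A minimal Python sketch of the cross-correlation variant of the time-of-travel measurement (offline and simplified; the on-board version runs per motion sensor at up to 5 kHz):

```python
import numpy as np

def time_of_travel(sig_a, sig_b, fs, pitch):
    """Angular speed from two adjacent pixels: find the delay of pixel B's
    signal relative to pixel A by cross-correlation, then convert via the
    inter-pixel angular pitch (rad). fs is the sampling rate (Hz)."""
    a = sig_a - sig_a.mean()
    b = sig_b - sig_b.mean()
    xcorr = np.correlate(b, a, mode="full")
    lag = np.argmax(xcorr) - (len(a) - 1)     # delay in samples
    if lag == 0:
        return np.inf                          # no measurable travel time
    dt = lag / fs
    return np.degrees(pitch / dt)              # optical flow in deg/s
```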

  3. Recycling of WEEE: characterization of spent printed circuit boards from mobile phones and computers.

    PubMed

    Yamane, Luciana Harue; de Moraes, Viviane Tavares; Espinosa, Denise Crocce Romano; Tenório, Jorge Alberto Soares

    2011-12-01

    This paper presents a comparison between printed circuit boards from computers and mobile phones. Since printed circuit boards are becoming more complex and smaller, the amount of materials is constantly changing. The main objective of this work was to characterize spent printed circuit boards from computers and mobile phones, applying mineral processing techniques to separate the metal, ceramic, and polymer fractions. The processing was performed by comminution in a hammer mill, followed by particle size analysis, and by magnetic and electrostatic separation. Aqua regia leaching, loss-on-ignition and chemical analysis (inductively coupled plasma atomic emission spectroscopy - ICP-OES) were carried out to determine the composition of printed circuit boards and the metal-rich fraction. The composition of the studied mobile phone printed circuit boards (PCB-MP) was 63 wt.% metals, 24 wt.% ceramics and 13 wt.% polymers; that of the printed circuit boards from the studied personal computers (PCB-PC) was 45 wt.% metals, 27 wt.% polymers and 28 wt.% ceramics. The chemical analysis showed that the copper concentration in printed circuit boards from personal computers was 20 wt.% and in printed circuit boards from mobile phones was 34.5 wt.%. According to the characteristics of each type of printed circuit board, the recovery of precious metals may be the main goal of the recycling process of printed circuit boards from personal computers, and the recovery of copper should be the main goal of the recycling process of printed circuit boards from mobile phones. Hence, these printed circuit boards should not be mixed prior to treatment. The results of this paper show that copper concentration is increasing in mobile phones and remaining constant in personal computers. Copyright © 2011 Elsevier Ltd. All rights reserved.

  4. 47 CFR 15.32 - Test procedures for CPU boards and computer power supplies.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... result in a complete personal computer system. If the oscillator and the microprocessor circuits are... microprocessor circuits are contained on separate circuit boards, both boards, typical of the combination that...

  5. 47 CFR 15.32 - Test procedures for CPU boards and computer power supplies.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... result in a complete personal computer system. If the oscillator and the microprocessor circuits are... microprocessor circuits are contained on separate circuit boards, both boards, typical of the combination that...

  6. 47 CFR 15.32 - Test procedures for CPU boards and computer power supplies.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... result in a complete personal computer system. If the oscillator and the microprocessor circuits are... microprocessor circuits are contained on separate circuit boards, both boards, typical of the combination that...

  7. 47 CFR 15.32 - Test procedures for CPU boards and computer power supplies.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... result in a complete personal computer system. If the oscillator and the microprocessor circuits are... microprocessor circuits are contained on separate circuit boards, both boards, typical of the combination that...

  8. 47 CFR 15.32 - Test procedures for CPU boards and computer power supplies.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... result in a complete personal computer system. If the oscillator and the microprocessor circuits are... microprocessor circuits are contained on separate circuit boards, both boards, typical of the combination that...

  9. An ATR architecture for algorithm development and testing

    NASA Astrophysics Data System (ADS)

    Breivik, Gøril M.; Løkken, Kristin H.; Brattli, Alvin; Palm, Hans C.; Haavardsholm, Trym

    2013-05-01

    A research platform with four cameras in the infrared and visible spectral domains is under development at the Norwegian Defence Research Establishment (FFI). The platform will be mounted on a high-speed jet aircraft and will primarily be used for image acquisition and for development and test of automatic target recognition (ATR) algorithms. The sensors on board produce large amounts of data, the algorithms can be computationally intensive and the data processing is complex. This puts great demands on the system architecture; it has to run in real-time and at the same time be suitable for algorithm development. In this paper we present an architecture for ATR systems that is designed to be flexible, generic and efficient. The architecture is module based so that certain parts, e.g. specific ATR algorithms, can be exchanged without affecting the rest of the system. The modules are generic and can be used in various ATR system configurations. A software framework in C++ that handles large data flows in non-linear pipelines is used for implementation. The framework exploits several levels of parallelism and lets the hardware processing capacity be fully utilised. The ATR system is under development and has reached a first level that can be used for segmentation algorithm development and testing. The implemented system consists of several modules, and although their content is still limited, the segmentation module includes two different segmentation algorithms that can be easily exchanged. We demonstrate the system by applying the two segmentation algorithms to infrared images from sea trial recordings.

  10. Hyperspectral processing in graphical processing units

    NASA Astrophysics Data System (ADS)

    Winter, Michael E.; Winter, Edwin M.

    2011-06-01

    With the advent of the commercial 3D video card in the mid 1990s, we have seen an order of magnitude performance increase with each generation of new video cards. While these cards were designed primarily for visualization and video games, it became apparent after a short while that they could be used for scientific purposes. These Graphical Processing Units (GPUs) are rapidly being incorporated into data processing tasks usually reserved for general purpose computers. It has been found that many image processing problems scale well to modern GPU systems. We have implemented four popular hyperspectral processing algorithms (N-FINDR, linear unmixing, Principal Components, and the RX anomaly detection algorithm). These algorithms show an across-the-board speedup of at least a factor of 10, with some special cases showing extreme speedups of a hundred times or more.
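
    Of the four, the RX anomaly detector illustrates why GPUs help: after a shared covariance estimate, every pixel's score is an independent Mahalanobis distance. A compact NumPy reference (the global-background variant; GPU versions parallelize the per-pixel loop):

```python
import numpy as np

def rx_anomaly(cube):
    """Global RX detector on a (height, width, bands) hyperspectral cube:
    Mahalanobis distance of each pixel spectrum from the scene mean."""
    h, w, b = cube.shape
    X = cube.reshape(-1, b).astype(float)
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(b)   # regularized covariance
    cov_inv = np.linalg.inv(cov)
    d = X - mu
    scores = np.einsum("ij,jk,ik->i", d, cov_inv, d)   # per-pixel distance
    return scores.reshape(h, w)
```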

  11. An economical semi-analytical orbit theory for micro-computer applications

    NASA Technical Reports Server (NTRS)

    Gordon, R. A.

    1988-01-01

    An economical algorithm is presented for predicting the position of a satellite perturbed by drag and zonal harmonics J2 through J4. Simplicity being of the essence, drag is modeled as a secular decay rate in the semimajor axis (retarded motion), with the zonal perturbations modeled from a modified version of Brouwer's formulas. The algorithm is developed as: an alternative on-board orbit predictor; a back-up propagator requiring low energy consumption; or a ground-based propagator for microcomputer applications (e.g., at the foot of an antenna). An O(J2) secular retarded state partial matrix (matrizant) is also given for use with state estimation. The theory was implemented in BASIC on an inexpensive microcomputer, the program occupying under 8K bytes of memory. Simulated trajectory data and real tracking data are employed to illustrate the theory's ability to accurately accommodate oblateness and drag effects.
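
    The dominant zonal effect such a theory captures is the J2 secular drift of the node and perigee; the standard first-order formulas are cheap enough for exactly this class of microcomputer propagator. A sketch with assumed Earth constants (these are the textbook rates, not the paper's Brouwer-based code):

```python
import numpy as np

MU, RE, J2 = 398600.4418, 6378.137, 1.08263e-3   # km^3/s^2, km, dimensionless

def j2_secular_rates(a, e, i):
    """Secular rates (rad/s) of the ascending node and argument of perigee
    for semimajor axis a (km), eccentricity e, inclination i (rad)."""
    n = np.sqrt(MU / a**3)                # mean motion
    p = a * (1.0 - e**2)                  # semilatus rectum
    k = 1.5 * J2 * (RE / p) ** 2 * n
    raan_dot = -k * np.cos(i)             # nodal regression
    argp_dot = k * (2.0 - 2.5 * np.sin(i) ** 2)   # apsidal rotation
    return raan_dot, argp_dot
```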

  12. An economical semi-analytical orbit theory for micro-computer applications

    NASA Technical Reports Server (NTRS)

    Gordon, R. A.

    1986-01-01

    An economical algorithm is presented for predicting the position of a satellite perturbed by drag and zonal harmonics J2 through J4. Simplicity being of the essence, drag is modeled as a secular decay rate in the semimajor axis (retarded motion), with the zonal perturbations modeled from a modified version of Brouwer's formulas. The algorithm is developed as an alternative on-board orbit predictor; a back-up propagator requiring low energy consumption; or a ground-based propagator for microcomputer applications (e.g., at the foot of an antenna). An O(J2) secular retarded state partial matrix (matrizant) is also given for use with state estimation. The theory has been implemented in BASIC on an inexpensive microcomputer, the program occupying under 8K bytes of memory. Simulated trajectory data and real tracking data are employed to illustrate the theory's ability to accurately accommodate oblateness and drag effects.

  13. 40 CFR 51.357 - Test procedures and standards.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... invalid test condition, unsafe conditions, fast pass/fail algorithms, or, in the case of the on-board... using approved fast pass or fast fail algorithms and multiple pass/fail algorithms may be used during the test cycle to eliminate false failures. The transient test procedure, including algorithms and...

  14. 40 CFR 51.357 - Test procedures and standards.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... invalid test condition, unsafe conditions, fast pass/fail algorithms, or, in the case of the on-board... using approved fast pass or fast fail algorithms and multiple pass/fail algorithms may be used during the test cycle to eliminate false failures. The transient test procedure, including algorithms and...

  15. 40 CFR 51.357 - Test procedures and standards.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... invalid test condition, unsafe conditions, fast pass/fail algorithms, or, in the case of the on-board... using approved fast pass or fast fail algorithms and multiple pass/fail algorithms may be used during the test cycle to eliminate false failures. The transient test procedure, including algorithms and...

  16. 40 CFR 51.357 - Test procedures and standards.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... invalid test condition, unsafe conditions, fast pass/fail algorithms, or, in the case of the on-board... using approved fast pass or fast fail algorithms and multiple pass/fail algorithms may be used during the test cycle to eliminate false failures. The transient test procedure, including algorithms and...

  17. Interconnection arrangement of routers of processor boards in array of cabinets supporting secure physical partition

    DOEpatents

    Tomkins, James L [Albuquerque, NM]; Camp, William J [Albuquerque, NM]

    2007-07-17

    A multiple processor computing apparatus includes a physical interconnect structure that is flexibly configurable to support selective segregation of classified and unclassified users. The physical interconnect structure includes routers in service or compute processor boards distributed in an array of cabinets connected in series on each board and to respective routers in neighboring row cabinet boards with the routers in series connection coupled to routers in series connection in respective neighboring column cabinet boards. The array can include disconnect cabinets or respective routers in all boards in each cabinet connected in a toroid. The computing apparatus can include an emulator which permits applications from the same job to be launched on processors that use different operating systems.

  18. Evaluation of a prototype correction algorithm to reduce metal artefacts in flat detector computed tomography of scaphoid fixation screws.

    PubMed

    Filli, Lukas; Marcon, Magda; Scholz, Bernhard; Calcagni, Maurizio; Finkenstädt, Tim; Andreisek, Gustav; Guggenberger, Roman

    2014-12-01

    The aim of this study was to evaluate a prototype correction algorithm to reduce metal artefacts in flat detector computed tomography (FDCT) of scaphoid fixation screws. FDCT has gained interest in imaging small anatomic structures of the appendicular skeleton. Angiographic C-arm systems with flat detectors allow fluoroscopy and FDCT imaging in a one-stop procedure emphasizing their role as an ideal intraoperative imaging tool. However, FDCT imaging can be significantly impaired by artefacts induced by fixation screws. Following ethical board approval, commercially available scaphoid fixation screws were inserted into six cadaveric specimens in order to fix artificially induced scaphoid fractures. FDCT images corrected with the algorithm were compared to uncorrected images both quantitatively and qualitatively by two independent radiologists in terms of artefacts, screw contour, fracture line visibility, bone visibility, and soft tissue definition. Normal distribution of variables was evaluated using the Kolmogorov-Smirnov test. In case of normal distribution, quantitative variables were compared using paired Student's t tests. The Wilcoxon signed-rank test was used for quantitative variables without normal distribution and all qualitative variables. A p value of < 0.05 was considered to indicate statistically significant differences. Metal artefacts were significantly reduced by the correction algorithm (p < 0.001), and the fracture line was more clearly defined (p < 0.01). The inter-observer reliability was "almost perfect" (intra-class correlation coefficient 0.85, p < 0.001). The prototype correction algorithm in FDCT for metal artefacts induced by scaphoid fixation screws may facilitate intra- and postoperative follow-up imaging. Flat detector computed tomography (FDCT) is a helpful imaging tool for scaphoid fixation. The correction algorithm significantly reduces artefacts in FDCT induced by scaphoid fixation screws. This may facilitate intra- and postoperative follow-up imaging.
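
    The statistical workflow described (normality check on the paired differences, then a paired t test or a Wilcoxon signed-rank test) is straightforward to mirror with SciPy. The sketch below uses invented paired artefact scores, and a Shapiro-Wilk check stands in for the Kolmogorov-Smirnov test used in the paper.

```python
import numpy as np
from scipy import stats

# Hypothetical paired scores: artefact severity on the same six specimens
# without and with the correction algorithm (lower is better).
uncorrected = np.array([4.1, 3.8, 4.5, 4.0, 3.6, 4.3])
corrected = np.array([1.9, 2.1, 2.4, 1.8, 2.0, 2.2])

diff = corrected - uncorrected
_, p_norm = stats.shapiro(diff)          # normality of the paired differences

if p_norm > 0.05:                        # differences look normal
    stat, p = stats.ttest_rel(corrected, uncorrected)
else:                                    # fall back to the rank-based test
    stat, p = stats.wilcoxon(corrected, uncorrected)

print(f"p = {p:.4f}")                    # p < 0.05 -> significant reduction
```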

  19. The operational cloud retrieval algorithms from TROPOMI on board Sentinel-5 Precursor

    NASA Astrophysics Data System (ADS)

    Loyola, Diego G.; Gimeno García, Sebastián; Lutz, Ronny; Argyrouli, Athina; Romahn, Fabian; Spurr, Robert J. D.; Pedergnana, Mattia; Doicu, Adrian; Molina García, Víctor; Schüssler, Olena

    2018-01-01

    This paper presents the operational cloud retrieval algorithms for the TROPOspheric Monitoring Instrument (TROPOMI) on board the European Space Agency Sentinel-5 Precursor (S5P) mission scheduled for launch in 2017. Two algorithms working in tandem are used for retrieving cloud properties: OCRA (Optical Cloud Recognition Algorithm) and ROCINN (Retrieval of Cloud Information using Neural Networks). OCRA retrieves the cloud fraction using TROPOMI measurements in the ultraviolet (UV) and visible (VIS) spectral regions, and ROCINN retrieves the cloud top height (pressure) and optical thickness (albedo) using TROPOMI measurements in and around the oxygen A-band in the near infrared (NIR). Cloud parameters from TROPOMI/S5P will be used not only for enhancing the accuracy of trace gas retrievals but also for extending the satellite data record of cloud information derived from oxygen A-band measurements, a record initiated with the Global Ozone Monitoring Experiment (GOME) on board the second European Remote-Sensing Satellite (ERS-2) over 20 years ago. The OCRA and ROCINN algorithms are integrated in the S5P operational processor UPAS (Universal Processor for UV/VIS/NIR Atmospheric Spectrometers), and we present here UPAS cloud results using the Ozone Monitoring Instrument (OMI) and GOME-2 measurements. In addition, we examine anticipated challenges for the TROPOMI/S5P cloud retrieval algorithms, and we discuss the future validation needs for OCRA and ROCINN.

  20. The implementation of aerial object recognition algorithm based on contour descriptor in FPGA-based on-board vision system

    NASA Astrophysics Data System (ADS)

    Babayan, Pavel; Smirnov, Sergey; Strotov, Valery

    2017-10-01

    This paper describes an aerial object recognition algorithm for on-board and stationary vision systems. The suggested algorithm is intended to recognize objects of a specific kind using a set of reference objects defined by 3D models. The proposed algorithm is based on building an outer-contour descriptor. The algorithm consists of two stages: learning and recognition. The learning stage is devoted to exploring the reference objects. Using the 3D models, a database of training images is built by rendering each model from viewpoints evenly distributed on a sphere; the viewpoints are placed according to the geosphere principle. The gathered training image set is used for calculating the descriptors that are then used in the recognition stage of the algorithm. The recognition stage focuses on estimating the similarity of the captured object and the reference objects by matching an observed image descriptor against the reference object descriptors. The experimental research was performed using a set of aircraft models of different types (airplanes, helicopters, UAVs). The proposed orientation estimation algorithm showed good accuracy in all case studies. Real-time performance of the algorithm in an FPGA-based vision system was demonstrated.
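
    The paper does not spell out the exact descriptor, so the sketch below shows one plausible stand-in: a translation-, scale-, and rotation-invariant Fourier descriptor of the outer contour, matched against a reference database by Euclidean distance. The database, its labels, and the fixed resampling length are hypothetical.

```python
import numpy as np

def resample(contour, n=256):
    """Resample a closed (N, 2) contour to n points by periodic interpolation."""
    t = np.linspace(0.0, 1.0, len(contour), endpoint=False)
    ti = np.linspace(0.0, 1.0, n, endpoint=False)
    return np.stack([np.interp(ti, t, contour[:, k], period=1.0)
                     for k in (0, 1)], axis=1)

def contour_descriptor(contour, n_harmonics=16):
    """Translation-, scale-, and rotation-invariant Fourier descriptor."""
    pts = resample(contour)
    z = pts[:, 0] + 1j * pts[:, 1]             # boundary as a complex signal
    spectrum = np.fft.fft(z - z.mean())        # mean removal: translation inv.
    mags = np.abs(spectrum[1:n_harmonics + 1]) # magnitudes: rotation inv.
    return mags / mags[0]                      # normalization: scale inv.

def recognize(observed_contour, reference_db):
    """Label of the reference descriptor closest to the observed one."""
    d = contour_descriptor(observed_contour)
    return min(reference_db, key=lambda k: np.linalg.norm(d - reference_db[k]))

# reference_db would be built offline from renderings of the 3D models on a
# geosphere, e.g. {"airplane": descriptor, "helicopter": descriptor, ...}
```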

  1. Synthetic aperture radar signal data compression using block adaptive quantization

    NASA Technical Reports Server (NTRS)

    Kuduvalli, Gopinath; Dutkiewicz, Melanie; Cumming, Ian

    1994-01-01

    This paper describes the design and testing of an on-board SAR signal data compression algorithm for ESA's ENVISAT satellite. The Block Adaptive Quantization (BAQ) algorithm was selected, and optimized for the various operational modes of the ASAR instrument. A flexible BAQ scheme was developed which allows a selection of compression ratio/image quality trade-offs. Test results show the high quality of the SAR images processed from the reconstructed signal data, and the feasibility of on-board implementation using a single ASIC.
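
    BAQ itself is compact enough to sketch: the raw signal is split into blocks, each block's magnitude is estimated, and samples are quantized to a few bits relative to that per-block scale. The toy NumPy round trip below uses an illustrative block size, bit depth, and roughly 3-sigma full scale, not the tuned parameters of the ASAR design.

```python
import numpy as np

def baq_compress(signal, block=128, bits=4):
    """Toy Block Adaptive Quantization: per-block gain plus a uniform
    quantizer; returns per-block scale factors and integer codes."""
    n_levels = 2 ** bits
    pads = (-len(signal)) % block
    x = np.pad(signal.astype(np.float64), (0, pads)).reshape(-1, block)
    scale = x.std(axis=1, keepdims=True) + 1e-12      # block magnitude estimate
    q = np.clip(np.round(x / scale * (n_levels / 6)), # ~3-sigma full scale
                -n_levels // 2, n_levels // 2 - 1).astype(np.int8)
    return scale.squeeze(1), q

def baq_decompress(scale, q, bits=4):
    n_levels = 2 ** bits
    return q.astype(np.float64) * scale[:, None] * (6 / n_levels)

x = np.random.randn(1000)
scale, q = baq_compress(x)
x_hat = baq_decompress(scale, q).ravel()[:1000]
print(np.corrcoef(x, x_hat)[0, 1])   # high correlation despite 4-bit codes
```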

  2. FPGA implementation of image dehazing algorithm for real time applications

    NASA Astrophysics Data System (ADS)

    Kumar, Rahul; Kaushik, Brajesh Kumar; Balasubramanian, R.

    2017-09-01

    Weather degradation such as haze, fog, and mist severely reduces the effective range of visual surveillance. This degradation is a spatially varying phenomenon, which makes the problem nontrivial. Dehazing is an essential preprocessing stage in applications such as long range imaging, border security, and intelligent transportation systems; however, these applications require low latency of the preprocessing block. In this work, the single-image dark channel prior algorithm is modified and implemented for fast processing with comparable visual quality of the restored image/video. Although the conventional single-image dark channel prior algorithm is computationally expensive, it yields impressive results. Moreover, a two-stage image dehazing architecture is introduced, wherein the dark channel and airlight are estimated in the first stage, and the transmission map and intensity restoration are computed in the next stages. The algorithm is implemented using Xilinx Vivado software and validated using a Xilinx zc702 development board, which contains an Artix7-equivalent Field Programmable Gate Array (FPGA) and an ARM Cortex A9 dual core processor. Additionally, a high definition multimedia interface (HDMI) has been incorporated for video feed and display purposes. The results show that the dehazing algorithm attains 29 frames per second at an image resolution of 1920x1080, which is suitable for real-time applications. The design utilizes 9 18K_BRAM, 97 DSP_48, 6508 FFs and 8159 LUTs.
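
    For reference, a compact floating-point version of the dark-channel-prior pipeline being accelerated is sketched below. The patch size and the omega/t0 constants follow common practice for this algorithm; the FPGA design's fixed-point arithmetic and refinement stages are not modeled.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dehaze(img, patch=15, omega=0.95, t0=0.1):
    """Minimal single-image dark-channel-prior dehazing for a float RGB
    image in [0, 1]; no guided-filter refinement of the transmission map."""
    # Dark channel: per-pixel minimum over RGB, then a local minimum filter.
    dark = minimum_filter(img.min(axis=2), size=patch)
    # Airlight: mean color of the brightest ~0.1% dark-channel pixels.
    k = max(1, dark.size // 1000)
    idx = np.unravel_index(np.argsort(dark, axis=None)[-k:], dark.shape)
    A = img[idx].mean(axis=0)
    # Transmission estimate, then scene radiance recovery.
    t = 1.0 - omega * minimum_filter((img / A).min(axis=2), size=patch)
    t = np.clip(t, t0, 1.0)
    return np.clip((img - A) / t[..., None] + A, 0.0, 1.0)

# Usage: img = plt.imread("hazy.png")[..., :3]; clear = dehaze(img)
```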

  3. Design of an FPGA-Based Algorithm for Real-Time Solutions of Statistics-Based Positioning

    PubMed Central

    DeWitt, Don; Johnson-Williams, Nathan G.; Miyaoka, Robert S.; Li, Xiaoli; Lockhart, Cate; Lewellen, Tom K.; Hauck, Scott

    2010-01-01

    We report on the implementation of an algorithm and hardware platform to allow real-time processing of the statistics-based positioning (SBP) method for continuous miniature crystal element (cMiCE) detectors. The SBP method allows an intrinsic spatial resolution of ~1.6 mm FWHM to be achieved using our cMiCE design. Previous SBP solutions have required a postprocessing procedure due to the computation- and memory-intensive nature of SBP. This new implementation takes advantage of a combination of algebraic simplifications, conversion to fixed-point math, and a hierarchical search technique to greatly accelerate the algorithm. For the presented seven stage, 127 × 127 bin LUT implementation, these algorithm improvements result in a reduction from >7 × 10^6 floating-point operations per event for an exhaustive search to <5 × 10^3 integer operations per event. Simulations show nearly identical FWHM positioning resolution for this accelerated SBP solution, and positioning differences of <0.1 mm from the exhaustive search solution. A pipelined field programmable gate array (FPGA) implementation of this optimized algorithm is able to process events in excess of 250 K events per second, which is greater than the maximum expected coincidence rate for an individual detector. In contrast with all detectors being processed at a centralized host, as in the current system, a separate FPGA is available at each detector, thus dividing the computational load. These methods allow SBP results to be calculated and presented to the image generation components in real time. A hardware implementation has been developed using a commercially available prototype board. PMID:21197135
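
    The hierarchical search is the key to the operation-count reduction: rather than scanning all 127 × 127 LUT bins, each stage evaluates a small neighbourhood and halves the step. The sketch below is a generic coarse-to-fine search of that shape; the cost function is only a placeholder for the SBP likelihood that the real system evaluates from the LUT.

```python
def hierarchical_search(lut_cost, grid=127, stages=7):
    """Coarse-to-fine search over a (grid x grid) LUT: each stage evaluates
    a 3x3 neighbourhood at the current step size, then halves the step,
    replacing an exhaustive grid scan."""
    x = y = grid // 2
    step = grid // 4
    for _ in range(stages):
        cands = [(max(0, min(grid - 1, x + dx * step)),
                  max(0, min(grid - 1, y + dy * step)))
                 for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
        x, y = min(cands, key=lut_cost)   # keep the cheapest candidate
        step = max(1, step // 2)
    return x, y

# Placeholder cost with its minimum at bin (90, 30); the search homes in on it.
print(hierarchical_search(lambda p: (p[0] - 90) ** 2 + (p[1] - 30) ** 2))
```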

  4. FIVE YEARS OF SYNTHESIS OF SOLAR SPECTRAL IRRADIANCE FROM SDID/SISA AND SDO/AIA IMAGES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fontenla, J. M.; Codrescu, M.; Fedrizzi, M.

    In this paper we describe the synthetic solar spectral irradiance (SSI) calculated from 2010 to 2015 using data from the Atmospheric Imaging Assembly (AIA) instrument, on board the Solar Dynamics Observatory spacecraft. We used the algorithms for solar disk image decomposition (SDID) and the spectral irradiance synthesis algorithm (SISA) that we had developed over several years. The SDID algorithm decomposes the images of the solar disk into areas occupied by nine types of chromospheric and five types of coronal physical structures. With this decomposition and a set of pre-computed angle-dependent spectra for each of the features, the SISA algorithm is used to calculate the SSI. We discuss the application of the basic SDID/SISA algorithm to a subset of the AIA images and the observed variation occurring in the 2010–2015 period of the relative areas of the solar disk covered by the various solar surface features. Our results consist of the SSI and total solar irradiance variations over the 2010–2015 period. The SSI results include soft X-ray, ultraviolet, visible, infrared, and far-infrared observations and can be used for studies of the solar radiative forcing of the Earth's atmosphere. These SSI estimates were used to drive a thermosphere–ionosphere physical simulation model. Predictions of neutral mass density at low Earth orbit altitudes in the thermosphere and peak plasma densities at mid-latitudes are in reasonable agreement with the observations. The correlation between the simulation results and the observations was consistently better when fluxes computed by SDID/SISA procedures were used.

  5. Scheduling Operations for Massive Heterogeneous Clusters

    NASA Technical Reports Server (NTRS)

    Humphrey, John; Spagnoli, Kyle

    2013-01-01

    High-performance computing (HPC) programming has become increasingly difficult with the advent of hybrid supercomputers consisting of multicore CPUs and accelerator boards such as the GPU. Manual tuning of software to achieve high performance on this type of machine has been performed by programmers. This is needlessly difficult and prone to being invalidated by new hardware, new software, or changes in the underlying code. A system was developed for task-based representation of programs, which when coupled with a scheduler and runtime system, allows for many benefits, including higher performance and utilization of computational resources, easier programming and porting, and adaptations of code during runtime. The system consists of a method of representing computer algorithms as a series of data-dependent tasks. The series forms a graph, which can be scheduled for execution on many nodes of a supercomputer efficiently by a computer algorithm. The schedule is executed by a dispatch component, which is tailored to understand all of the hardware types that may be available within the system. The scheduler is informed by a cluster mapping tool, which generates a topology of available resources and their strengths and communication costs. Software is decoupled from its hardware, which aids in porting to future architectures. A computer algorithm schedules all operations, which for systems of high complexity (i.e., most NASA codes), cannot be performed optimally by a human. The system aids in reducing repetitive code, such as communication code, and aids in the reduction of redundant code across projects. It adds new features to code automatically, such as recovering from a lost node or the ability to modify the code while running. In this project, the innovators at the time of this reporting intend to develop two distinct technologies that build upon each other and both of which serve as building blocks for more efficient HPC usage. First is the scheduling and dynamic execution framework, and the second is scalable linear algebra libraries that are built directly on the former.

  6. Efficient Hardware Implementation of the Horn-Schunck Algorithm for High-Resolution Real-Time Dense Optical Flow Sensor

    PubMed Central

    Komorkiewicz, Mateusz; Kryjak, Tomasz; Gorgon, Marek

    2014-01-01

    This article presents an efficient hardware implementation of the Horn-Schunck algorithm that can be used in an embedded optical flow sensor. An architecture is proposed that realises the iterative Horn-Schunck algorithm in a pipelined manner. This modification makes it possible to achieve a data throughput of 175 MPixels/s and to process a Full HD video stream (1,920 × 1,080 @ 60 fps). The structure of the optical flow module as well as the pre- and post-filtering blocks and a flow reliability computation unit is described in detail. Three versions of the optical flow module, with different numerical precision, working frequency, and result accuracy, are proposed. The errors caused by switching from floating- to fixed-point computations are also evaluated. The described architecture was tested on popular sequences from the Middlebury University optical flow dataset. It achieves state-of-the-art results among hardware implementations of single-scale methods. The designed fixed-point architecture achieves a performance of 418 GOPS with a power efficiency of 34 GOPS/W. The proposed floating-point module achieves 103 GFLOPS, with a power efficiency of 24 GFLOPS/W. Moreover, a 100 times speedup compared to a modern CPU with SIMD support is reported. A complete, working vision system realized on a Xilinx VC707 evaluation board is also presented. It is able to compute optical flow for a Full HD video stream received from an HDMI camera in real time. The obtained results prove that FPGA devices are an ideal platform for embedded vision systems. PMID:24526303
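
    As a software baseline for what the pipeline computes, here is a single-scale, floating-point Horn-Schunck iteration using the standard derivative and averaging stencils. The paper's fixed-point arithmetic, pipelining, and flow-reliability unit are not modeled.

```python
import numpy as np
from scipy.ndimage import convolve

def horn_schunck(im1, im2, alpha=15.0, n_iter=100):
    """Reference single-scale Horn-Schunck optical flow between two
    grayscale frames; returns the (u, v) flow fields."""
    im1 = im1.astype(np.float64)
    im2 = im2.astype(np.float64)
    # Derivative estimates from 2x2 stencils averaged over both frames.
    kx = np.array([[-1, 1], [-1, 1]]) * 0.25
    ky = np.array([[-1, -1], [1, 1]]) * 0.25
    kt = np.ones((2, 2)) * 0.25
    Ix = convolve(im1, kx) + convolve(im2, kx)
    Iy = convolve(im1, ky) + convolve(im2, ky)
    It = convolve(im2, kt) - convolve(im1, kt)
    # Weighted neighbourhood average used in the iterative update.
    avg = np.array([[1, 2, 1], [2, 0, 2], [1, 2, 1]]) / 12.0
    u = np.zeros_like(im1)
    v = np.zeros_like(im1)
    for _ in range(n_iter):
        u_bar = convolve(u, avg)
        v_bar = convolve(v, avg)
        deriv = (Ix * u_bar + Iy * v_bar + It) / (alpha**2 + Ix**2 + Iy**2)
        u = u_bar - Ix * deriv
        v = v_bar - Iy * deriv
    return u, v
```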

  7. Efficient hardware implementation of the Horn-Schunck algorithm for high-resolution real-time dense optical flow sensor.

    PubMed

    Komorkiewicz, Mateusz; Kryjak, Tomasz; Gorgon, Marek

    2014-02-12

    This article presents an efficient hardware implementation of the Horn-Schunck algorithm that can be used in an embedded optical flow sensor. An architecture is proposed that realises the iterative Horn-Schunck algorithm in a pipelined manner. This modification makes it possible to achieve a data throughput of 175 MPixels/s and to process a Full HD video stream (1,920 × 1,080 @ 60 fps). The structure of the optical flow module as well as the pre- and post-filtering blocks and a flow reliability computation unit is described in detail. Three versions of the optical flow module, with different numerical precision, working frequency, and result accuracy, are proposed. The errors caused by switching from floating- to fixed-point computations are also evaluated. The described architecture was tested on popular sequences from the Middlebury University optical flow dataset. It achieves state-of-the-art results among hardware implementations of single-scale methods. The designed fixed-point architecture achieves a performance of 418 GOPS with a power efficiency of 34 GOPS/W. The proposed floating-point module achieves 103 GFLOPS, with a power efficiency of 24 GFLOPS/W. Moreover, a 100 times speedup compared to a modern CPU with SIMD support is reported. A complete, working vision system realized on a Xilinx VC707 evaluation board is also presented. It is able to compute optical flow for a Full HD video stream received from an HDMI camera in real time. The obtained results prove that FPGA devices are an ideal platform for embedded vision systems.

  8. Rapid cable tension estimation using dynamic and mechanical properties

    NASA Astrophysics Data System (ADS)

    Martínez-Castro, Rosana E.; Jang, Shinae; Christenson, Richard E.

    2016-04-01

    Main tension elements are critical to the overall stability of cable-supported bridges. A dependable and rapid determination of cable tension is desired to assess the state of a cable-supported bridge and evaluate its operability. A portable smart sensor setup is presented that reduces post-processing time and deployment complexity while reliably determining cable tension using dynamic characteristics extracted from spectral analysis. A self-recording accelerometer is coupled with a single-board microcomputer that communicates wirelessly with a remote host computer. The portable smart sensing device is designed such that additional algorithms, sensors, and controlling devices for various monitoring applications can be installed and operated for additional structural assessment. The tension-estimating algorithms are based on taut string theory and expand to consider bending stiffness. The successful combination of cable properties allows the use of a cable's dynamic behavior to determine tension force. The tension-estimating algorithms are experimentally validated on a through-arch steel bridge subject to ambient vibration induced by passing traffic. The estimated tensions are in good agreement with previously determined tension values for the structure.
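
    The taut-string relation underlying such estimators links the n-th natural frequency directly to tension; a one-function version is shown below with invented example numbers. The paper's extension additionally corrects for bending stiffness, which this sketch omits.

```python
def cable_tension(freq_hz, mode_n, length_m, mass_per_m):
    """Taut-string tension estimate from the n-th measured natural frequency:
        f_n = (n / 2L) * sqrt(T / m)   =>   T = 4 m L^2 (f_n / n)^2
    Bending stiffness would add a correction term to this relation."""
    return 4.0 * mass_per_m * length_m**2 * (freq_hz / mode_n) ** 2

# Example: first mode at 2.2 Hz on a 40 m cable with 50 kg/m mass density.
print(cable_tension(2.2, 1, 40.0, 50.0) / 1e3, "kN")  # ~1548.8 kN
```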

  9. A trajectory planning scheme for spacecraft in the space station environment. M.S. Thesis - University of California

    NASA Technical Reports Server (NTRS)

    Soller, Jeffrey Alan; Grunwald, Arthur J.; Ellis, Stephen R.

    1991-01-01

    Simulated annealing is used to solve a minimum-fuel trajectory problem in the space station environment. The environment is special because the space station will define a multivehicle environment in space. The optimization surface is a complex nonlinear function of the initial conditions of the chase and target craft. Small perturbations in the input conditions can result in abrupt changes to the optimization surface. Since no prior knowledge about the number or location of local minima on the surface is available, the optimization must be capable of functioning on a multimodal surface. It has been reported in the literature that the simulated annealing algorithm is more effective on such surfaces than descent techniques using random starting points. The simulated annealing optimization was found to be capable of identifying a minimum-fuel, two-burn trajectory subject to four constraints, which are integrated into the optimization using a barrier method. The computations required to solve the optimization are fast enough that missions could be planned on board the space station. Potential applications for on-board planning of missions are numerous. Future research topics may include optimal planning of multi-waypoint maneuvers using a knowledge base to guide the optimization, and a study aimed at developing robust annealing schedules for potential on-board missions.
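
    A generic simulated-annealing loop of the kind described is sketched below. In the trajectory application, cost() would wrap the two-burn maneuver simulation plus barrier penalties for the four constraints; the Rastrigin-style test function here is only a multimodal placeholder.

```python
import numpy as np

def simulated_annealing(cost, x0, step=0.1, t0=1.0, cooling=0.995, n_iter=20000):
    """Minimize cost() by random perturbations with a cooling temperature."""
    rng = np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)
    fx = cost(x)
    best_x, best_f, temp = x.copy(), fx, t0
    for _ in range(n_iter):
        cand = x + rng.normal(scale=step, size=x.shape)
        fc = cost(cand)
        # Always accept downhill moves; accept uphill with Boltzmann probability.
        if fc < fx or rng.random() < np.exp(-(fc - fx) / temp):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x.copy(), fx
        temp *= cooling
    return best_x, best_f

# Multimodal placeholder objective (Rastrigin); the best cost found should
# end far below the starting value, illustrating escape from local minima.
rastrigin = lambda p: np.sum(p**2 - 10.0 * np.cos(2.0 * np.pi * p) + 10.0)
print(simulated_annealing(rastrigin, [3.0, -2.0])[1])
```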

  10. 17 CFR Appendix A to Part 37 - Guidance on Compliance With Registration Criteria

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... facility should include the system's trade-matching algorithm and order entry procedures. A submission involving a trade-matching algorithm that is based on order priority factors other than on a best price/earliest time basis should include a brief explanation of the alternative algorithm. (b) A board of trade's...

  11. 17 CFR Appendix A to Part 37 - Guidance on Compliance With Registration Criteria

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... facility should include the system's trade-matching algorithm and order entry procedures. A submission involving a trade-matching algorithm that is based on order priority factors other than on a best price/earliest time basis should include a brief explanation of the alternative algorithm. (b) A board of trade's...

  12. Wave front sensing for next generation earth observation telescope

    NASA Astrophysics Data System (ADS)

    Delvit, J.-M.; Thiebaut, C.; Latry, C.; Blanchet, G.

    2017-09-01

    High resolution observation systems are highly dependent on optics quality and are usually designed to be nearly diffraction limited. Such performance allows the Nyquist frequency to be set closer to the cut-off frequency or, equivalently, the pupil diameter to be minimized for a given ground sampling distance target. Up to now, defocus is the only aberration that is allowed to evolve slowly and that may be corrected in flight, using an open-loop correction based upon ground estimation and upload of a refocusing command. For instance, the defocus of the Pleiades satellites is assessed from star acquisitions, and refocusing is done with a thermal actuation of the M2 mirror. Next generation systems under study at CNES should include active optics in order to handle evolving aberrations not limited to defocus, due for instance to variable in-orbit thermal conditions. Active optics relies on aberration estimation through an on-board Wave Front Sensor (WFS). One option is a Shack-Hartmann sensor, which can be used on extended scenes (unknown landscapes). A wave-front computation algorithm should then be implemented on board the satellite to provide the wave-front error measure for the control loop. In the worst-case scenario, this measure should be computed before each image acquisition. A robust and fast shift estimation algorithm between Shack-Hartmann images is then needed to fulfill this last requirement. A fast gradient-based algorithm using optical flows with a Lucas-Kanade method has been studied and implemented on an electronic device developed by CNES. Measurement accuracy depends on the wave front error (WFE), the landscape frequency content, the number of searched aberrations, the a priori knowledge of high-order aberrations, and the characteristics of the sensor. CNES has carried out a full-scale sensitivity analysis of the whole parameter set with our internally developed algorithm.
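
    The core of such a shift estimator is a single least-squares solve of the Lucas-Kanade normal equations between two sub-aperture images. The minimal single-window version below (no iteration, no pyramid, with a synthetic test scene invented for the example) illustrates the idea for small sub-pixel shifts.

```python
import numpy as np

def lk_shift(ref, img):
    """Global (dx, dy) translation between two images via Lucas-Kanade:
    one least-squares solve of the optical-flow normal equations.
    Accurate for sub-pixel shifts; larger motion needs iteration."""
    ref = ref.astype(np.float64)
    img = img.astype(np.float64)
    Iy, Ix = np.gradient(ref)                 # spatial gradients
    It = img - ref                            # temporal difference
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    dx, dy = np.linalg.solve(A, b)
    return dx, dy

# Synthetic check: a smooth scene shifted by 0.3 px in x.
y, x = np.mgrid[0:64, 0:64]
scene = np.sin(x / 6.0) + np.cos(y / 9.0)
shifted = np.sin((x - 0.3) / 6.0) + np.cos(y / 9.0)
print(lk_shift(scene, shifted))   # ~ (0.3, 0.0)
```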

  13. In-camera video-stream processing for bandwidth reduction in web inspection

    NASA Astrophysics Data System (ADS)

    Jullien, Graham A.; Li, QiuPing; Hajimowlana, S. Hossain; Morvay, J.; Conflitti, D.; Roberts, James W.; Doody, Brian C.

    1996-02-01

    Automated machine vision systems are now widely used for industrial inspection tasks, where video-stream data are captured by the camera and then sent to the inspection system for further processing. In this paper we describe a prototype system for on-line programming of arbitrary real-time video data stream bandwidth reduction algorithms; the output of the camera contains only information that has to be further processed by a host computer. The processing system is built into a DALSA CCD camera and uses a microcontroller interface to download bit-stream data to a Xilinx FPGA. The FPGA is directly connected to the video data stream and outputs data to a low-bandwidth output bus. The camera communicates with a host computer via an RS-232 link to the microcontroller. Static memory is used both to provide a FIFO interface for buffering defect burst data and for off-line examination of defect detection data. In addition to providing arbitrary FPGA architectures, the internal program of the microcontroller can also be changed via the host computer and a ROM monitor. This paper describes a prototype system board, mounted inside a DALSA camera, and discusses some of the algorithms currently being implemented for web inspection applications.

  14. An approach for finding long period elliptical orbits for precursor SEI missions

    NASA Technical Reports Server (NTRS)

    Fraietta, Michael F.; Bond, Victor R.

    1993-01-01

    Precursors for Solar System Exploration Initiative (SEI) missions may require long-period elliptical orbits about a planet. These orbits will typically have periods on the order of tens to hundreds of days. Some potential uses for these orbits include: studying the effects of galactic cosmic radiation, parking orbits for engineering and operational tests of systems, and ferrying orbits between libration points and low altitude orbits. This report presents an approach that can be used to find these orbits. The approach consists of three major steps. First, it uses a restricted three-body targeting algorithm to determine the initial conditions which satisfy certain desired final conditions in a system of two massive primaries. Then the initial conditions are transformed to an inertial coordinate system for use by a special perturbation method. Finally, using the special perturbation method, other perturbations (e.g., solar third-body gravity and solar radiation pressure) can be easily incorporated to determine their effects on the nominal trajectory. An algorithm potentially suitable for on-board guidance is also discussed. This algorithm uses an analytic method relying on Chebyshev polynomials to compute the desired position and velocity of the satellite as a function of time. Together with navigation updates, this algorithm can be implemented to predict the size and timing of ΔV corrections.
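
    The Chebyshev idea is that a trajectory segment is fit once and thereafter evaluated with a handful of multiplies; the sketch below does this for a toy coordinate with NumPy's Chebyshev utilities. The orbit, segment length, and polynomial degree are invented for illustration.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Fit a Chebyshev polynomial to one precomputed trajectory segment so the
# on-board computer can evaluate position cheaply at any time in [0, 5400] s.
t = np.linspace(0.0, 5400.0, 181)                   # sample times, s
x = 7000.0 * np.cos(2 * np.pi * t / 5400.0)         # toy x-coordinate, km

u = 2.0 * t / 5400.0 - 1.0                          # map time to [-1, 1]
coef = C.chebfit(u, x, deg=12)                      # one-time fit

# Fast on-board evaluation at t = 2700 s (true value: -7000 km).
x_fast = C.chebval(2.0 * 2700.0 / 5400.0 - 1.0, coef)
print(abs(x_fast - (-7000.0)))                      # small interpolation error
```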

  15. Unsupervised texture image segmentation by improved neural network ART2

    NASA Technical Reports Server (NTRS)

    Wang, Zhiling; Labini, G. Sylos; Mugnuolo, R.; Desario, Marco

    1994-01-01

    Here we propose a texture image segmentation algorithm for a computer vision system on a space robot. An improved adaptive resonance theory network (ART2) for analog input patterns is adapted to classify the image based on a set of texture image features extracted by a fast spatial gray level dependence method (SGLDM). The nonlinear thresholding functions in the input layer of the neural network are constructed in two parts: first, to reduce the effects of image noise on the features, a set of sigmoid functions is chosen depending on the type of the feature; second, to enhance the contrast of the features, fuzzy mapping functions are adopted. The number of clusters in the output layer can be increased by an auto-growing mechanism whenever a new pattern appears. Experimental results and original and segmented pictures are shown, including a comparison between this approach and the K-means algorithm. The system, written in C, runs on a SUN-4/330 SPARCstation with an IT-150 image board and a CCD camera.

  16. Obtaining Approximate Values of Exterior Orientation Elements of Multi-Intersection Images Using Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Li, X.; Li, S. W.

    2012-07-01

    In this paper, an efficient global optimization algorithm from the field of artificial intelligence, named Particle Swarm Optimization (PSO), is introduced into close range photogrammetric data processing. PSO can be applied to obtain the approximate values of exterior orientation elements under the condition that multi-intersection photography and a small portable plane control frame are used. PSO, put forward by the American social psychologist J. Kennedy and the electrical engineer R. C. Eberhart, is a stochastic global optimization method based on swarm intelligence, inspired by the social behavior of bird flocking and fish schooling. The strategy for obtaining the approximate values of exterior orientation elements using PSO is as follows: from the observed image coordinates and the space coordinates of a few control points, equations for the image coordinate residual errors can be written. The image coordinate residual error is defined as the difference between the observed image coordinate and the image coordinate computed through the collinearity equations, and the sum of the absolute values of these residuals is taken as the objective function to be minimized. First, a gross search area for the exterior orientation elements is given, and the other parameters are then adjusted so that the particles fly within this area. After a certain number of iterations, satisfactory approximate values of the exterior orientation elements are obtained. By doing so, procedures such as positioning and measuring space control points in close range photogrammetry can be avoided. This method can improve surveying efficiency greatly and at the same time decrease surveying cost; during the process, only one small portable control frame with a couple of control points is employed, and there are no strict requirements for the spatial distribution of control points. In order to verify the effectiveness of this algorithm, two experiments were carried out. In the first experiment, images of a standard grid board were taken by multi-intersection photography using a digital camera. Three or six points located on the lower-left corner of the standard grid were used as control points, and the exterior orientation elements of each image were computed through PSO and compared with the elements computed through bundle adjustment. In the second experiment, the exterior orientation elements obtained from the first experiment were used as approximate values in bundle adjustment, and the space coordinates of the other grid points on the board were computed. The differences between these computed space coordinates and the known coordinates of the grid points were used to compute the accuracy. The point accuracies obtained in the two experiments are ±0.76 mm and ±0.43 mm, respectively. These experiments prove the effectiveness of PSO in computing approximate values of exterior orientation elements in close range photogrammetry, and show that the algorithm can meet higher accuracy requirements. In short, PSO can get better results in a faster, cheaper way compared with other surveying methods in close range photogrammetry.
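
    A minimal PSO loop of the kind described is sketched below. In the photogrammetric application, cost() would be the summed absolute image-coordinate residuals from the collinearity equations and the bounds would be the gross search area for the six exterior orientation elements; a smooth test function stands in for both here.

```python
import numpy as np

def pso(cost, bounds, n_particles=40, n_iter=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer over box bounds [(lo, hi), ...]."""
    rng = np.random.default_rng(1)
    lo, hi = np.asarray(bounds, dtype=float).T
    x = rng.uniform(lo, hi, (n_particles, lo.size))
    v = np.zeros_like(x)
    p_best = x.copy()
    p_val = np.apply_along_axis(cost, 1, x)
    g_best = p_best[p_val.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # Inertia plus attraction to personal and global bests.
        v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(cost, 1, x)
        better = val < p_val
        p_best[better], p_val[better] = x[better], val[better]
        g_best = p_best[p_val.argmin()].copy()
    return g_best, p_val.min()

# Sanity check on a smooth bowl with its minimum at (3, 3, 3).
print(pso(lambda p: np.sum((p - 3.0) ** 2), [(-10, 10)] * 3))
```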

  17. Fixed-point image orthorectification algorithms for reduced computational cost

    NASA Astrophysics Data System (ADS)

    French, Joseph Clinton

    Imaging systems have been applied to many new applications in recent years. With the advent of low-cost, low-power focal planes and more powerful, lower cost computers, remote sensing applications have become more widespread. Many of these applications require some form of geolocation, especially when relative distances are desired. However, when greater global positional accuracy is needed, orthorectification becomes necessary. Orthorectification is the process of projecting an image onto a Digital Elevation Map (DEM), which removes terrain distortions and corrects the perspective distortion by changing the viewing angle to be perpendicular to the projection plane. Orthorectification is used in disaster tracking, landscape management, wildlife monitoring, and many other applications. However, orthorectification is a computationally expensive process due to the floating point operations and divisions in the algorithm. To reduce the computational cost of on-board processing, two novel algorithm modifications are proposed. One modification is projection using fixed-point arithmetic, which removes the floating point operations and reduces processing time by operating only on integers. The second modification replaces the division inherent in projection with multiplication by the inverse. Since computing the inverse exactly would require iteration, the inverse is replaced with a linear approximation. As a result of these modifications, the processing time of projection is reduced by a factor of 1.3 with an average pixel position error of 0.2% of a pixel size for 128-bit integer processing, and by over 4x with an average pixel position error of less than 13% of a pixel size for 64-bit integer processing. A secondary inverse function approximation is also developed that replaces the linear approximation with a quadratic. The quadratic approximation produces a more accurate approximation of the inverse, allowing an integer multiplication to be used in place of the traditional floating point division. This method increases the throughput of the orthorectification operation by 38% when compared to floating point processing. Additionally, this method improves the accuracy of the existing integer-based orthorectification algorithms in terms of average pixel distance, increasing the accuracy of the algorithm by more than 5x. The quadratic function reduces the pixel position error to 2% and is still 2.8x faster than the 128-bit floating point algorithm.
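
    The division-replacement idea fits in a few lines: fit 1/x once over the divisor's expected range, then turn every division into a multiply. The toy below uses a two-point linear fit in floating point for readability; the thesis's versions work in scaled integer arithmetic and add a quadratic term, and the numeric range here is invented.

```python
def make_linear_inverse(b_min, b_max):
    """Linear approximation of 1/x over [b_min, b_max], anchored at the
    endpoints; in fixed-point hardware this becomes a multiply-and-shift."""
    slope = (1.0 / b_max - 1.0 / b_min) / (b_max - b_min)
    return lambda b: 1.0 / b_min + slope * (b - b_min)

inv = make_linear_inverse(900.0, 1100.0)     # divisor expected near 1000
a, b = 123456.0, 1042.0
approx = a * inv(b)                          # division replaced by a multiply
exact = a / b
print(approx, exact, abs(approx - exact) / exact)  # relative error under 1%
```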

  18. A real time, FEM based optimal control algorithm and its implementation using parallel processing hardware (transputers) in a microprocessor environment

    NASA Technical Reports Server (NTRS)

    Patten, William Neff

    1989-01-01

    There is an evident need to discover a means of establishing reliable, implementable controls for systems that are plagued by nonlinear and/or uncertain model dynamics. The development of a generic controller design tool for tough-to-control systems is reported. The method utilizes a moving-grid, time finite element based solution of the necessary conditions that describe an optimal controller for a system. The technique produces a discrete feedback controller. Real-time laboratory experiments are now being conducted to demonstrate the viability of the method. The resulting algorithm is being implemented in a microprocessor environment. Critical computational tasks are accomplished using low-cost, on-board multiprocessors (INMOS T800 Transputers) and parallel processing. Progress to date validates the methodology presented. Applications of the technique to the control of highly flexible robotic appendages are suggested.

  19. Practices in source code sharing in astrophysics

    NASA Astrophysics Data System (ADS)

    Shamir, Lior; Wallin, John F.; Allen, Alice; Berriman, Bruce; Teuben, Peter; Nemiroff, Robert J.; Mink, Jessica; Hanisch, Robert J.; DuPrie, Kimberly

    2013-02-01

    While software and algorithms have become increasingly important in astronomy, the majority of authors who publish computational astronomy research do not share the source code they develop, making it difficult to replicate and reuse the work. In this paper we discuss the importance of sharing scientific source code with the entire astrophysics community, and propose that journals require authors to make their code publicly available when a paper is published. That is, we suggest that a paper that involves a computer program not be accepted for publication unless the source code becomes publicly available. The adoption of such a policy by editors, editorial boards, and reviewers will improve the ability to replicate scientific results, and will also make computational astronomy methods more available to other researchers who wish to apply them to their data.

  20. An artificial retina processor for track reconstruction at the LHC crossing rate

    DOE PAGES

    Bedeschi, F.; Cenci, R.; Marino, P.; ...

    2017-11-23

    The goal of the INFN-RETINA R&D project is to develop and implement a computational methodology that makes it possible to reconstruct events with a large number (> 100) of charged-particle tracks in pixel and silicon strip detectors at 40 MHz, thus matching the requirements for processing LHC events at the full bunch-crossing frequency. Our approach relies on a parallel pattern-recognition algorithm, dubbed artificial retina, inspired by the early stages of image processing by the brain. In order to demonstrate that a track-processing system based on this algorithm is feasible, we built a sizable prototype of a tracking processor tuned to 3000 patterns, based on already existing readout boards equipped with Altera Stratix III FPGAs. The detailed geometry and charged-particle activity of a large tracking detector currently in operation are used to assess its performance. Here, we report on the test results with such a prototype.

  1. A system for real-time measurement of the brachial artery diameter in B-mode ultrasound images.

    PubMed

    Gemignani, Vincenzo; Faita, Francesco; Ghiadoni, Lorenzo; Poggianti, Elisa; Demi, Marcello

    2007-03-01

    The measurement of the brachial artery diameter is frequently used in clinical studies for evaluating the flow-mediated dilation and, in conjunction with the blood pressure value, for assessing arterial stiffness. This paper presents a system for computing the brachial artery diameter in real-time by analyzing B-mode ultrasound images. The method is based on a robust edge detection algorithm which is used to automatically locate the two walls of the vessel. The measure of the diameter is obtained with subpixel precision and with a temporal resolution of 25 samples/s, so that the small dilations induced by the cardiac cycle can also be retrieved. The algorithm is implemented on a standalone video processing board which acquires the analog video signal from the ultrasound equipment. Results are shown in real-time on a graphical user interface. The system was tested both on synthetic ultrasound images and in clinical studies of flow-mediated dilation. Accuracy, robustness, and intra/inter observer variability of the method were evaluated.

  2. An artificial retina processor for track reconstruction at the LHC crossing rate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bedeschi, F.; Cenci, R.; Marino, P.

    The goal of the INFN-RETINA R&D project is to develop and implement a computational methodology that makes it possible to reconstruct events with a large number (> 100) of charged-particle tracks in pixel and silicon strip detectors at 40 MHz, thus matching the requirements for processing LHC events at the full bunch-crossing frequency. Our approach relies on a parallel pattern-recognition algorithm, dubbed artificial retina, inspired by the early stages of image processing by the brain. In order to demonstrate that a track-processing system based on this algorithm is feasible, we built a sizable prototype of a tracking processor tuned to 3000 patterns, based on already existing readout boards equipped with Altera Stratix III FPGAs. The detailed geometry and charged-particle activity of a large tracking detector currently in operation are used to assess its performance. Here, we report on the test results with such a prototype.

  3. 20 CFR 225.4 - Limitation on amount of earnings used to compute a PIA.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... compute a PIA. 225.4 Section 225.4 Employees' Benefits RAILROAD RETIREMENT BOARD REGULATIONS UNDER THE... earnings used to compute a PIA. Certain PIA's used by the Board are based on a combination of compensation... purposes of crediting earnings when computing any PIA, compensation is always treated as wages. Regardless...

  4. The Effect of Computer Automation on Institutional Review Board (IRB) Office Efficiency

    ERIC Educational Resources Information Center

    Oder, Karl; Pittman, Stephanie

    2015-01-01

    Companies purchase computer systems to make their processes more efficient through automation. Some academic medical centers (AMC) have purchased computer systems for their institutional review boards (IRB) to increase efficiency and compliance with regulations. IRB computer systems are expensive to purchase, deploy, and maintain. An AMC should…

  5. 75 FR 36147 - Self-Regulatory Organizations; Chicago Board Options Exchange, Incorporated; Order Approving...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-24

    ..., as Modified by Amendment No. 1 Thereto, Related to the Hybrid Matching Algorithms June 17, 2010. On... Hybrid System. Each rule currently provides allocation algorithms the Exchange can utilize when executing incoming electronic orders, including the Ultimate Matching Algorithm (``UMA''), and price-time and pro...

  6. Real-time, autonomous precise satellite orbit determination using the global positioning system

    NASA Astrophysics Data System (ADS)

    Goldstein, David Ben

    2000-10-01

    The desire for autonomously generated, rapidly available, and highly accurate satellite ephemeris is growing with the proliferation of constellations of satellites and the cost and overhead of ground tracking resources. Autonomous Orbit Determination (OD) may be done on the ground in a post-processing mode or in real-time on board a satellite, and may be accomplished days, hours, or immediately after observations are processed. The Global Positioning System (GPS) is now widely used as an alternative to ground tracking resources to supply observation data for satellite positioning and navigation. GPS is accurate, inexpensive, provides continuous coverage, and is an excellent choice for autonomous systems. In an effort to estimate precise satellite ephemeris in real-time on board a satellite, the Goddard Space Flight Center (GSFC) created the GPS Enhanced OD Experiment (GEODE) flight navigation software. This dissertation offers alternative methods and improvements to GEODE to increase on-board autonomy and real-time total position accuracy and precision without increasing computational burden. First, GEODE is modified to include a Gravity Acceleration Approximation Function (GAAF) to replace the traditional spherical harmonic representation of the gravity field. Next, an ionospheric correction method called Differenced Range Versus Integrated Doppler (DRVID) is applied to correct for ionospheric errors in the GPS measurements used in GEODE. Then, Dynamic Model Compensation (DMC) is added to estimate unmodeled and/or mismodeled forces in the dynamic model and to provide an alternative process noise variance-covariance formulation. Finally, a Genetic Algorithm (GA) is implemented in the form of Genetic Model Compensation (GMC) to optimize DMC forcing noise parameters. Application of GAAF, DRVID and DMC improved GEODE's position estimates by 28.3% when applied to GPS/MET data collected in the presence of Selective Availability (SA), 17.5% when SA is removed from the GPS/MET data, and 10.8% on SA-free TOPEX data. Position estimates with RSS errors below 1 meter are now achieved using SA-free TOPEX data. DRVID causes an increase in computational burden while GAAF and DMC reduce computational burden. The net effect of applying GAAF, DRVID and DMC is an improvement in GEODE's accuracy/precision without an increase in computational burden.

  7. Visual navigation of the UAVs on the basis of 3D natural landmarks

    NASA Astrophysics Data System (ADS)

    Karpenko, Simon; Konovalenko, Ivan; Miller, Alexander; Miller, Boris; Nikolaev, Dmitry

    2015-12-01

    This work considers the tracking of a UAV (unmanned aerial vehicle) on the basis of onboard observations of natural landmarks, including azimuth and elevation angles. It is assumed that the UAV's cameras are able to capture the angular position of reference points and to measure the angles of the sight line. Such measurements involve the real position of the UAV in implicit form, and therefore a nonlinear filter such as the Extended Kalman Filter (EKF) or others must be used in order to exploit these measurements for UAV control. Recently it was shown that a modified pseudomeasurement method may be used to control a UAV on the basis of the observation of reference points assigned along the UAV path in advance. However, the use of such a set of points requires a cumbersome recognition procedure and a huge volume of on-board memory. Natural landmarks serving as reference points, which may be determined on-line, can significantly reduce the on-board memory requirements and the computational difficulties. The principal difference of this work is the use of 3D reference-point coordinates, which permits the position of the UAV to be determined more precisely and the vehicle to be guided along the path with higher accuracy, which is extremely important for the successful performance of autonomous missions. The article suggests the new RANSAC for ISOMETRY algorithm and the use of recently developed estimation and control algorithms for tracking a given reference path under external perturbations and noisy angular measurements.

  8. Near real-time stereo vision system

    NASA Technical Reports Server (NTRS)

    Anderson, Charles H. (Inventor); Matthies, Larry H. (Inventor)

    1993-01-01

    The apparatus for a near real-time stereo vision system for use with a robotic vehicle is described. The system is comprised of two cameras mounted on three-axis rotation platforms, image-processing boards, a CPU, and specialized stereo vision algorithms. Bandpass-filtered image pyramids are computed, stereo matching is performed by least-squares correlation, and confidence ranges are estimated by means of Bayes' theorem. In particular, Laplacian image pyramids are built and disparity maps are produced from the 60 x 64 level of the pyramids at rates of up to 2 seconds per image pair. The first autonomous cross-country robotic traverses (of up to 100 meters) have been achieved using the stereo vision system of the present invention with all computing done onboard the vehicle. The overall approach disclosed herein provides a unifying paradigm for practical domain-independent stereo ranging.
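
    The matching step described, correlation over a band-passed pyramid level, can be illustrated with a tiny block-matching routine. The sketch below computes disparities by window-aggregated squared differences over a fixed search range; the Laplacian pyramid construction and the Bayesian confidence estimate of the patent are omitted, and the window and range values are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def disparity_ssd(left, right, max_disp=16, win=9):
    """Per-pixel disparity by minimizing window-aggregated squared
    differences; left/right are float grayscale images of equal shape."""
    h, w = left.shape
    costs = np.empty((max_disp + 1, h, w))
    for d in range(max_disp + 1):
        shifted = np.roll(right, d, axis=1)          # shift right image by d
        costs[d] = uniform_filter((left - shifted) ** 2, size=win)
    disp = costs.argmin(axis=0)
    disp[:, :max_disp] = 0                           # wrap-around columns invalid
    return disp

# Usage: disp = disparity_ssd(left_pyramid_level, right_pyramid_level)
```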

  9. Impact of freeway weaving segment design on light-duty vehicle exhaust emissions.

    PubMed

    Li, Qing; Qiao, Fengxiang; Yu, Lei; Chen, Shuyan; Li, Tiezhu

    2018-06-01

    In the United States, 26% of greenhouse gas emissions are emitted from the transportation sector; these emissions are meanwhile accompanied by emissions of substances toxic to humans, such as carbon monoxide (CO), nitrogen oxides (NOx), and hydrocarbons (HC), approximately 2.5% and 2.44% of total exhaust emissions for a petrol and a diesel engine, respectively. These exhaust emissions are typically subject to vehicles' intermittent operations, such as hard acceleration and hard braking. In practice, drivers are inclined to operate intermittently while driving through a weaving segment, due to the complex vehicle maneuvering required for weaving. As a result, the exhaust emissions within a weaving segment ought to vary from those on a basic segment. However, existing emission models usually rely on vehicle operation information and compute a generalized emission result, regardless of road configuration. This research proposes to explore the impacts of weaving segment configuration on vehicle emissions, identify important predictors for emission estimation, and develop a nonlinear normalized emission factor (NEF) model for weaving segments. An on-board emission test was conducted with 12 subjects on State Highway 288 in Houston, Texas. Vehicles' activity information, road conditions, and real-time exhaust emissions were collected by on-board diagnostics (OBD), a smartphone-based roughness app, and a portable emission measurement system (PEMS), respectively. Five feature selection algorithms were used to identify the important predictors for the response of NEF and the modeling algorithm. The predictive power of four algorithm-based emission models was tested by 10-fold cross-validation. Results showed that emissions are also susceptible to the type and length of a weaving segment. A bagged decision tree algorithm was chosen to develop a 50-tree NEF model, which provided a validation error of 0.0051. The estimated NEFs are highly correlated with the observed NEFs in both the training and validation data sets, with R values of 0.91 and 0.90, respectively. Existing emission models usually rely on vehicle operation information to compute a generalized emission result, regardless of road configuration. In practice, while driving through a weaving segment, drivers are inclined to perform erratic maneuvers, such as hard braking and hard acceleration, due to the complex weaving maneuver required. As a result, the exhaust emissions within a weaving segment vary from those on a basic segment. This research involves road configuration, in terms of the type and length of a weaving segment, in constructing a nonlinear emission model, which significantly improves emission estimates at a microscopic level.
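
    The modeling step described, bagged regression trees scored by 10-fold cross-validation, maps directly onto standard library calls. The sketch below uses scikit-learn with synthetic stand-in features, since the study's data and exact predictor set are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Synthetic stand-ins for predictors such as speed, acceleration, roughness,
# weaving-segment type, and weaving-segment length.
X = rng.normal(size=(500, 5))
y = 0.4 * X[:, 1] + 0.2 * X[:, 3] * X[:, 4] + rng.normal(scale=0.1, size=500)

# 50 bagged regression trees (the default base estimator is a decision tree),
# scored by 10-fold cross-validated mean squared error.
model = BaggingRegressor(n_estimators=50, random_state=0)
scores = cross_val_score(model, X, y, cv=10, scoring="neg_mean_squared_error")
print(-scores.mean())   # validation error of the bagged-tree model
```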

  10. Space station data system analysis/architecture study. Task 2: Options development DR-5. Volume 1: Technology options

    NASA Technical Reports Server (NTRS)

    1985-01-01

    The second task in the Space Station Data System (SSDS) Analysis/Architecture Study is the development of an information base that will support the conduct of trade studies and provide sufficient data to make key design/programmatic decisions. This volume identifies the preferred options in the technology category and characterizes these options with respect to performance attributes, constraints, cost, and risk. The technology category includes advanced materials, processes, and techniques that can be used to enhance the implementation of SSDS design structures. The specific areas discussed are mass storage, including space and ground on-line storage and off-line storage; man/machine interface; data processing hardware, including flight computers and advanced/fault tolerant computer architectures; and software, including data compression algorithms, on-board high level languages, and software tools. Also discussed are artificial intelligence applications and hard-wire communications.

  11. An embedded vision system for an unmanned four-rotor helicopter

    NASA Astrophysics Data System (ADS)

    Lillywhite, Kirt; Lee, Dah-Jye; Tippetts, Beau; Fowers, Spencer; Dennis, Aaron; Nelson, Brent; Archibald, James

    2006-10-01

    In this paper an embedded vision system and control module is introduced that is capable of controlling an unmanned four-rotor helicopter and processing live video for various law enforcement, security, military, and civilian applications. The vision system is implemented on a newly designed compact FPGA board (Helios). The Helios board contains a Xilinx Virtex-4 FPGA chip and memory, making it capable of implementing real-time vision algorithms. A Smooth Automated Intelligent Leveling daughter board (SAIL), attached to the Helios board, collects attitude and heading information to be processed in order to control the unmanned helicopter. The SAIL board uses an electrolytic tilt sensor, compass, voltage level converters, and analog-to-digital converters to perform its operations. While level flight can be maintained, problems stemming from the characteristics of the tilt sensor limit the maneuverability of the helicopter. The embedded vision system has proven to give very good results in its performance of a number of real-time robotic vision algorithms.

  12. 75 FR 53005 - Privacy Act of 1974, as amended; Notice of Computer Matching Program (Railroad Retirement Board...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-30

    ... notice of its renewal of an ongoing computer-matching program with the Social Security Administration... computer-matching program with the Committee on Homeland Security and Governmental Affairs of the Senate... RAILROAD RETIREMENT BOARD Privacy Act of 1974, as amended; Notice of Computer Matching Program...

  13. 78 FR 34678 - Privacy Act of 1974, as Amended; Notice of Computer Matching Program (Railroad Retirement Board...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-10

    ... notice of its renewal of an ongoing computer-matching program with the Social Security Administration... computer-matching program with the Committee on Homeland Security and Governmental Affairs of the Senate... RAILROAD RETIREMENT BOARD Privacy Act of 1974, as Amended; Notice of Computer Matching Program...

  14. 75 FR 53004 - Privacy Act of 1974, as Amended; Notice of Computer-Matching Program (Railroad Retirement Board...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-30

    ... report of this computer-matching program with the Committee on Homeland Security and Governmental Affairs... INFORMATION: A. General The Computer-Matching and Privacy Protection Act of 1988, (Pub. L. 100-503), amended... RAILROAD RETIREMENT BOARD Privacy Act of 1974, as Amended; Notice of Computer-Matching Program...

  15. Computer optimization of cutting yield from multiple ripped boards

    Treesearch

    A.R. Stern; K.A. McDonald

    1978-01-01

    RIPYLD is a computer program that optimizes the cutting yield from multiple-ripped boards. Decisions are based on automatically collected defect information, cutting bill requirements, and sawing variables. The yield of clear cuttings from a board is calculated for every possible permutation of specified rip widths and both the maximum and minimum percent yield...

  16. Adaptive approach for on-board impedance parameters and voltage estimation of lithium-ion batteries in electric vehicles

    NASA Astrophysics Data System (ADS)

    Farmann, Alexander; Waag, Wladislaw; Sauer, Dirk Uwe

    2015-12-01

    Robust algorithms using reduced-order equivalent circuit models (ECMs) for accurate and reliable estimation of battery states in various applications are becoming more popular. In this study, a novel adaptive, self-learning heuristic algorithm for on-board impedance parameter and voltage estimation of lithium-ion batteries (LIBs) in electric vehicles is introduced. The presented approach is verified using LIBs with different chemistries (NMC/C, NMC/LTO, LFP/C) at different aging states. An impedance-based reduced-order ECM incorporating an ohmic resistance and a combination of a constant phase element and a resistance (a so-called ZARC element) is employed. Existing algorithms in vehicles are much more limited in the complexity of their ECMs. The algorithm is validated using seven days of real vehicle data with high temperature variation, including very low temperatures (from -20 °C to +30 °C), at different depths of discharge (DoDs). Two possibilities for approximating the ZARC elements with a finite number of RC elements on-board are shown and the results of the voltage estimation are compared. Moreover, the current dependence of the charge-transfer resistance is considered by employing the Butler-Volmer equation. The achieved results indicate that both models yield almost the same grade of accuracy.
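
    As a rough sketch of the kind of model the record describes, the Python fragment below simulates the terminal voltage of a one-RC approximation to a ZARC element with a Butler-Volmer-scaled charge-transfer resistance; all parameter names and values are illustrative, not the authors' identified values.

      import math
      import numpy as np

      def butler_volmer_rct(i, r_ct0, i0):
          """Current-dependent charge-transfer resistance (illustrative).

          From the symmetric Butler-Volmer relation i = 2*i0*sinh(eta/a),
          R_ct(i) = R_ct0 * asinh(x)/x with x = |i|/(2*i0); the factor
          tends to 1 at small currents and shrinks at high currents.
          """
          x = abs(i) / (2.0 * i0)
          if x < 1e-9:
              return r_ct0              # small-signal limit of asinh(x)/x
          return r_ct0 * math.asinh(x) / x

      def terminal_voltage(i_seq, dt, ocv=3.7, r0=2e-3, r_ct0=1.5e-3,
                           tau=10.0, i0=5.0):
          """Terminal voltage of a one-RC stand-in for a ZARC element.

          i_seq: current samples in A (positive = discharge); dt in s.
          """
          u_rc, out = 0.0, []
          for i in i_seq:
              r_ct = butler_volmer_rct(i, r_ct0, i0)
              u_rc += dt / tau * (r_ct * i - u_rc)   # first-order RC update
              out.append(ocv - r0 * i - u_rc)
          return np.array(out)

      # e.g. a 50 A discharge pulse for 30 s sampled at 0.1 s:
      v = terminal_voltage([50.0] * 300, dt=0.1)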

  17. An approximate, maximum terminal velocity descent to a point

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eisler, G.R.; Hull, D.G.

    1987-01-01

    No closed form control solution exists for maximizing the terminal velocity of a hypersonic glider at an arbitrary point. As an alternative, this study uses neighboring extremal theory to provide a sampled-data feedback law to guide the vehicle to a constrained ground range and altitude. The guidance algorithm is divided into two parts: 1) computation of a nominal, approximate, maximum terminal velocity trajectory to a constrained final altitude and computation of the resulting unconstrained ground range, and 2) computation of the neighboring extremal control perturbation at the sample value of flight path angle to compensate for changes in the approximate physical model and enable the vehicle to reach the on-board computed ground range. The trajectories are characterized by glide and dive flight to the target to minimize the time spent in the denser parts of the atmosphere. The proposed on-line scheme successfully brings the final altitude and range constraints together, as well as compensates for differences in flight model, atmosphere, and aerodynamics at the expense of guidance update computation time. Comparison with an independent, parameter optimization solution for the terminal velocity is excellent. 6 refs., 3 figs.

  18. Evaluation of On-Board kV Cone Beam Computed Tomography–Based Dose Calculation With Deformable Image Registration Using Hounsfield Unit Modifications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Onozato, Yusuke; Kadoya, Noriyuki, E-mail: kadoya.n@rad.med.tohoku.ac.jp; Fujita, Yukio

    2014-06-01

    Purpose: The purpose of this study was to estimate the accuracy of the dose calculation of On-Board Imager (Varian, Palo Alto, CA) cone beam computed tomography (CBCT) with deformable image registration (DIR), using the multilevel-threshold (MLT) algorithm and histogram matching (HM) algorithm in pelvic radiation therapy. Methods and Materials: One pelvis phantom and 10 patients with prostate cancer treated with intensity modulated radiation therapy were studied. To minimize the effect of organ deformation and different Hounsfield unit values between planning CT (PCT) and CBCT, we modified CBCT (mCBCT) with DIR by using the MLT (mCBCT_MLT) and HM (mCBCT_HM) algorithms. To evaluate the accuracy of the dose calculation, we compared dose differences in dosimetric parameters (mean dose [D_mean], minimum dose [D_min], and maximum dose [D_max]) for planning target volume, rectum, and bladder between PCT (reference) and CBCTs or mCBCTs. Furthermore, we investigated the effect of organ deformation compared with DIR and rigid registration (RR). We determined whether dose differences between PCT and mCBCTs were significantly lower than in CBCT by using the Student t test. Results: For patients, the average dose differences in all dosimetric parameters of CBCT with DIR were smaller than those of CBCT with RR (eg, rectum; 0.54% for DIR vs 1.24% for RR). For the mCBCTs with DIR, the average dose differences in all dosimetric parameters were less than 1.0%. Conclusions: We evaluated the accuracy of the dose calculation in CBCT, mCBCT_MLT, and mCBCT_HM with DIR for 10 patients. The results showed that dose differences in D_mean, D_min, and D_max in mCBCTs were within 1%, which were significantly better than those in CBCT, especially for the rectum (P<.05). Our results indicate that mCBCT_MLT and mCBCT_HM can be useful for improving the dose calculation for adaptive radiation therapy.

  19. 77 FR 59444 - Self-Regulatory Organizations; Chicago Board Options Exchange, Incorporated; Notice of Filing and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-27

    ... provides a ``menu'' of matching algorithms to choose from when executing incoming electronic orders. The menu format allows the Exchange to utilize different matching algorithms on a class-by-class basis. The menu includes, among other choices, the ultimate matching algorithm (``UMA''), as well as price-time...

  20. 75 FR 27850 - Self-Regulatory Organizations; Chicago Board Options Exchange, Incorporated; Notice of Filing of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-18

    ... Change, as Modified by Amendment No. 1 Thereto, Related to the Hybrid Matching Algorithms May 12, 2010... allocation algorithms to choose from when executing incoming electronic orders. The menu format allows the Exchange to utilize different allocation algorithms on a class-by-class basis. The menu includes, among...

  1. Test-bench system for a borehole azimuthal acoustic reflection imaging logging tool

    NASA Astrophysics Data System (ADS)

    Liu, Xianping; Ju, Xiaodong; Qiao, Wenxiao; Lu, Junqiang; Men, Baiyong; Liu, Dong

    2016-06-01

    The borehole azimuthal acoustic reflection imaging logging tool (BAAR) is a new generation of imaging logging tool, able to investigate strata in a relatively large region of space around the borehole. The BAAR is designed on the principle of modularization and has a very complex structure, so a dedicated test-bench system for debugging each module of the BAAR became an urgent need. With the test-bench system introduced in this paper, testing and calibration of the BAAR can be easily achieved. The test-bench system is designed on the client/server model. The hardware system mainly consists of a host computer, an embedded controlling board, a bus interface board, a data acquisition board and a telemetry communication board. The host computer serves as the human-machine interface and processes the uploaded data. The software running on the host computer is developed in VC++. The embedded controlling board uses an Advanced RISC Machines 7 (ARM7) core as the microcontroller and communicates with the host computer via Ethernet. The software for the embedded controlling board is developed on the uClinux operating system. The bus interface board, data acquisition board and telemetry communication board are designed around a field programmable gate array (FPGA) and provide test interfaces for the logging tool. To examine the feasibility of the test-bench system, it was set up to perform a test on the BAAR. By analyzing the test results, an unqualified channel of the electronic receiving cabin was discovered. The test-bench system can thus be used to quickly determine the working condition of the BAAR's sub-modules, which is of great significance in improving production efficiency and accelerating industrial production of the logging tool.

  2. On a Three-Channel Cosmic Ray Detector based on Aluminum Blocks

    NASA Astrophysics Data System (ADS)

    Arceo, L.; Félix, J.

    2017-10-01

    There are many general-purpose cosmic ray detectors based on plastic scintillators and commercial electronic boards. This is a new cosmic ray detector designed around three 2.54 cm × 5.08 cm × 20.32 cm aluminum blocks in a stack arrangement and three Hamamatsu S12572-100P photodiodes. The photodiode board, the passive electronic board, and the discriminator board are of our own design. The electronic signals are stored with a CompactRIO (cRIO) by National Instruments. The design, the construction, the data acquisition system algorithm, and the preliminary physical results are presented.

  3. Digital data acquisition and preliminary instrumentation study for the F-16 laminar flow control vehicle

    NASA Technical Reports Server (NTRS)

    Ostowari, Cyrus

    1992-01-01

    Preliminary studies have shown that maintenance of laminar flow through active boundary-layer control is viable. Current research activity at NASA Langley and NASA Dryden is utilizing the F-16XL-1 research vehicle fitted with a laminar-flow suction glove connected to a vacuum manifold in order to create and control laminar flow at supersonic flight speeds. This experimental program has been designed to establish the feasibility of obtaining laminar flow at supersonic speeds with a highly swept wing and to provide data for computational fluid dynamics (CFD) code calibration. Flight experiments conducted at supersonic speeds have indicated that it is possible to achieve laminar flow under controlled suction at flight Mach numbers greater than 1. Currently this glove is fitted with a series of pressure belts and flush-mounted hot-film sensors for determining the pressure distributions and the extent of the laminar flow region past the stagnation point. The present mode of data acquisition relies on an outdated on-board multi-channel FM analogue tape recorder system. At the end of each flight, the analogue data are digitized through a long, laborious process and then analyzed. It is proposed to replace this outdated system with an on-board state-of-the-art digital data acquisition system capable of a throughput rate of up to 1 MHz. The purpose of this study was three-fold: (1) to develop a simple algorithm for acquiring data via two analogue-to-digital converter boards simultaneously (a total of 32 channels); (2) to interface hot-film/wire anemometry instrumentation with a PC/AT-type computer; and (3) to characterize the frequency response of a flush-mounted film sensor. A brief description of each of the above tasks along with recommendations is given.

  4. 78 FR 70971 - Privacy Act of 1974, as Amended; Notice of Computer Matching Program (Railroad Retirement Board...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-27

    ... will file a report of this computer-matching program with the Committee on Homeland Security and... . SUPPLEMENTARY INFORMATION: A. General The Computer Matching and Privacy Protection Act of 1988, (Pub. L. 100-503... RAILROAD RETIREMENT BOARD Privacy Act of 1974, as Amended; Notice of Computer Matching Program...

  5. A compressed sensing X-ray camera with a multilayer architecture

    NASA Astrophysics Data System (ADS)

    Wang, Zhehui; Iaroshenko, O.; Li, S.; Liu, T.; Parab, N.; Chen, W. W.; Chu, P.; Kenyon, G. T.; Lipton, R.; Sun, K.-X.

    2018-01-01

    Recent advances in compressed sensing theory and algorithms offer new possibilities for high-speed X-ray camera design. In many CMOS cameras, each pixel has an independent on-board circuit that includes an amplifier, noise rejection, a signal shaper, an analog-to-digital converter (ADC), and optional in-pixel storage. When X-ray images are sparse, i.e., when one of the following cases is true: (a) the number of pixels with true X-ray hits is much smaller than the total number of pixels; (b) the X-ray information is redundant; or (c) some prior knowledge about the X-ray images exists, sparse sampling may be allowed. Here we first illustrate the feasibility of random on-board pixel sampling (ROPS) using an existing set of X-ray images, followed by a discussion of signal-to-noise as a function of pixel size. Next, we describe a possible circuit architecture to achieve random pixel access and in-pixel storage. The combination of a multilayer architecture, sparse on-chip sampling, and computational imaging techniques is expected to facilitate the development and applications of high-speed X-ray camera technology.
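
    To make the sampling idea concrete, here is a toy random-pixel-sampling reconstruction: a DCT-sparse image is recovered from 30% of its pixels by iterative soft thresholding (ISTA). This is a generic compressed-sensing sketch under invented sizes and thresholds, not the camera's actual read-out or reconstruction pipeline.

      import numpy as np
      from scipy.fft import dctn, idctn

      rng = np.random.default_rng(1)
      n = 32
      # ground truth: an image that is sparse in the 2-D DCT domain
      coeffs = np.zeros((n, n))
      coeffs[:4, :4] = rng.normal(size=(4, 4))
      img = idctn(coeffs, norm='ortho')

      # random on-board pixel sampling: keep ~30% of the pixels
      mask = rng.random((n, n)) < 0.3
      y = img * mask

      # iterative soft thresholding on DCT coefficients (ISTA)
      x = np.zeros_like(img)
      lam = 0.01
      for _ in range(300):
          x = x + mask * (y - x)                 # gradient step on data term
          c = dctn(x, norm='ortho')
          c = np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)   # shrink
          x = idctn(c, norm='ortho')

      print(np.linalg.norm(x - img) / np.linalg.norm(img))  # relative error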

  6. Managing Emergency Situations in VANET Through Heterogeneous Technologies Cooperation.

    PubMed

    Santamaria, Amilcare Francesco; Tropea, Mauro; Fazio, Peppino; De Rango, Floriano

    2018-05-08

    Nowadays, research on vehicular computing has enabled a huge number of services and protocols aimed at vehicle security and comfort. The investigation of the IEEE 802.11p, Wireless Access in Vehicular Environments (WAVE) and Dedicated Short Range Communication (DSRC) standards gave the scientific world the chance to integrate new services, protocols, algorithms and devices inside vehicles. This opportunity attracted the attention of private/public organizations, which spent a lot of resources and money to promote vehicular technologies. In this paper, the attention is focused on the design of a new approach for vehicular environments able to gather information during mobile node trips, for advising of dangerous or emergency situations by exploiting on-board sensors. It is assumed that each vehicle has an integrated on-board unit composed of several sensors and a Global Positioning System (GPS) device, able to spread alerting messages around the network regarding warning and dangerous situations/conditions. On-board units, based on the standard communication protocols, share the collected information with the surrounding road-side units, while the sensing platform is able to recognize the environment that vehicles are passing through (obstacles, accidents, emergencies, dangerous situations, etc.). Finally, through the use of the GPS receiver, the exact location of the detected event is determined and spread along the network. In this way, if an accident occurs, the arriving cars can avoid delays and dangerous situations.

  7. Managing Emergency Situations in VANET Through Heterogeneous Technologies Cooperation

    PubMed Central

    Tropea, Mauro; De Rango, Floriano

    2018-01-01

    Nowadays, research on vehicular computing has enabled a huge number of services and protocols aimed at vehicle security and comfort. The investigation of the IEEE 802.11p, Wireless Access in Vehicular Environments (WAVE) and Dedicated Short Range Communication (DSRC) standards gave the scientific world the chance to integrate new services, protocols, algorithms and devices inside vehicles. This opportunity attracted the attention of private/public organizations, which spent a lot of resources and money to promote vehicular technologies. In this paper, the attention is focused on the design of a new approach for vehicular environments able to gather information during mobile node trips, for advising of dangerous or emergency situations by exploiting on-board sensors. It is assumed that each vehicle has an integrated on-board unit composed of several sensors and a Global Positioning System (GPS) device, able to spread alerting messages around the network regarding warning and dangerous situations/conditions. On-board units, based on the standard communication protocols, share the collected information with the surrounding road-side units, while the sensing platform is able to recognize the environment that vehicles are passing through (obstacles, accidents, emergencies, dangerous situations, etc.). Finally, through the use of the GPS receiver, the exact location of the detected event is determined and spread along the network. In this way, if an accident occurs, the arriving cars can avoid delays and dangerous situations. PMID:29738453

  8. Implementing Legacy-C Algorithms in FPGA Co-Processors for Performance Accelerated Smart Payloads

    NASA Technical Reports Server (NTRS)

    Pingree, Paula J.; Scharenbroich, Lucas J.; Werne, Thomas A.; Hartzell, Christine

    2008-01-01

    Accurate, on-board classification of instrument data is used to increase science return by autonomously identifying regions of interest for priority transmission or generating summary products to conserve transmission bandwidth. Due to on-board processing constraints, such classification has been limited to using the simplest functions on a small subset of the full instrument data. FPGA co-processor designs for support vector machine (SVM) classifiers will lead to significant improvement in on-board classification capability and accuracy.

  9. Efficient 3D geometric and Zernike moments computation from unstructured surface meshes.

    PubMed

    Pozo, José María; Villa-Uriol, Maria-Cruz; Frangi, Alejandro F

    2011-03-01

    This paper introduces and evaluates a fast exact algorithm and a series of faster approximate algorithms for the computation of 3D geometric moments from an unstructured surface mesh of triangles. Being based on the object surface reduces the computational complexity of these algorithms with respect to volumetric grid-based algorithms. In contrast, they can only be applied to the computation of geometric moments of homogeneous objects. This advantage and restriction is shared with other proposed algorithms based on the object boundary. The proposed exact algorithm reduces the computational complexity for computing geometric moments up to order N with respect to previously proposed exact algorithms, from N^9 to N^6. The approximate series algorithm appears as a power series on the ratio between triangle size and object size, which can be truncated at any desired degree. The higher the number and quality of the triangles, the better the approximation. This approximate algorithm reduces the computational complexity to N^3. In addition, the paper introduces a fast algorithm for the computation of 3D Zernike moments from the computed geometric moments, with a computational complexity of N^4, while the previously proposed algorithm is of order N^6. The error introduced by the proposed approximate algorithms is evaluated on different shapes, and the cost-benefit ratio in terms of error and computational time is analyzed for different moment orders.
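
    As a minimal illustration of boundary-based moment computation (not the paper's N^6 algorithm), the sketch below evaluates the volume and first-order moments of a closed, homogeneous triangle mesh exactly, by summing the signed tetrahedra each surface triangle forms with the origin.

      import numpy as np

      def low_order_moments(vertices, triangles):
          """Volume and first geometric moments of a closed triangle mesh.

          Each surface triangle (v0, v1, v2) forms a signed tetrahedron
          with the origin; summing them gives exact volume integrals for
          a homogeneous interior (divergence theorem).
          """
          v = vertices[triangles]            # shape (T, 3, 3)
          v0, v1, v2 = v[:, 0], v[:, 1], v[:, 2]
          det = np.einsum('ij,ij->i', v0, np.cross(v1, v2))  # 6 * signed vol
          volume = det.sum() / 6.0
          # tet centroid is (v0+v1+v2+0)/4, so the first moment is
          # sum(det/6 * (v0+v1+v2)/4)
          first = ((v0 + v1 + v2) * det[:, None]).sum(axis=0) / 24.0
          return volume, first

      # Usage: the 12-triangle surface of a unit cube (outward-oriented)
      # yields volume 1 and first moments equal to centroid * volume.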

  10. Interesting viewpoints to those who will put Ada into practice

    NASA Technical Reports Server (NTRS)

    Carlsson, Arne

    1986-01-01

    Ada will most probably be used as the programming language for computers in the NASA Space Station. It is reasonable to suppose that Ada will be used for at least the embedded computers, because the high software costs for these embedded computers were the reason why Ada activities were initiated about ten years ago. The on-board computers are designed for use in space applications, where maintenance by man is impossible. All manipulation of such computers has to be performed in an autonomous way or remotely with commands from the ground. In a manned Space Station some maintenance work can be performed by service people on board, but there are still many applications which require autonomous computers, for example, vital Space Station functions and unmanned orbital transfer vehicles. Those aspects which have emerged from the analysis of Ada characteristics, together with the experience of requirements for embedded on-board computers in space applications, are examined.

  11. Deployment of the OSIRIS EM-PIC code on the Intel Knights Landing architecture

    NASA Astrophysics Data System (ADS)

    Fonseca, Ricardo

    2017-10-01

    Electromagnetic particle-in-cell (EM-PIC) codes such as OSIRIS have found widespread use in modelling the highly nonlinear and kinetic processes that occur in several relevant plasma physics scenarios, ranging from astrophysical settings to high-intensity laser plasma interaction. Being computationally intensive, these codes require large-scale HPC systems and a continuous effort in adapting the algorithm to new hardware and computing paradigms. In this work, we report on our efforts in deploying the OSIRIS code on the new Intel Knights Landing (KNL) architecture. Unlike the previous generation (Knights Corner), these boards are standalone systems and introduce several new features, including the new AVX-512 instructions and on-package MCDRAM. We will focus on the parallelization and vectorization strategies followed, as well as memory management, and present a detailed evaluation of code performance in comparison with the CPU code. This work was partially supported by Fundação para a Ciência e a Tecnologia (FCT), Portugal, through Grant No. PTDC/FIS-PLA/2940/2014.

  13. FPGA-based architecture for motion recovering in real-time

    NASA Astrophysics Data System (ADS)

    Arias-Estrada, Miguel; Maya-Rueda, Selene E.; Torres-Huitzil, Cesar

    2002-03-01

    A key problem in the computer vision field is the measurement of object motion in a scene. The main goal is to compute an approximation of the 3D motion from the analysis of an image sequence. Once computed, this information can be used as a basis to reach higher-level goals in different applications. Motion estimation algorithms pose a significant computational load for sequential processors, limiting their use in practical applications. In this work we propose a hardware architecture for motion estimation in real time based on FPGA technology. The technique used for motion estimation is optical flow, due to its accuracy and the density of the velocity estimates; however, other techniques are being explored. The architecture is composed of parallel modules working in a pipeline scheme to reach high throughput rates near gigaflops. The modules are organized in a regular structure to provide a high degree of flexibility to cover different applications. Some results will be presented and the real-time performance will be discussed and analyzed. The architecture is prototyped on an FPGA board with a Virtex device interfaced to a digital imager.
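
    For reference, a single-window gradient-based (Lucas-Kanade style) flow estimate can be written in a few lines; this is a generic software sketch, unrelated to the FPGA pipeline described above.

      import numpy as np

      def lucas_kanade_patch(f0, f1):
          """Least-squares optical flow (u, v) for one image patch.

          Solves the classic normal equations
          [sum Ix^2, sum IxIy; sum IxIy, sum Iy^2] [u, v]^T = -[sum IxIt, sum IyIt]
          over a single window.
          """
          iy, ix = np.gradient(f0.astype(float))   # axis 0 = y, axis 1 = x
          it = f1.astype(float) - f0.astype(float)
          a = np.array([[(ix * ix).sum(), (ix * iy).sum()],
                        [(ix * iy).sum(), (iy * iy).sum()]])
          b = -np.array([(ix * it).sum(), (iy * it).sum()])
          return np.linalg.solve(a, b)   # fails if the patch lacks texture

      # e.g. f1 = np.roll(f0, 1, axis=1) on a textured patch gives u ~ +1
      # (roughly, up to wrap-around edge effects).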

  14. Spatiotemporal Local-Remote Senor Fusion (ST-LRSF) for Cooperative Vehicle Positioning.

    PubMed

    Jeong, Han-You; Nguyen, Hoa-Hung; Bhawiyuga, Adhitya

    2018-04-04

    Vehicle positioning plays an important role in the design of protocols, algorithms, and applications in the intelligent transport systems. In this paper, we present a new framework of spatiotemporal local-remote sensor fusion (ST-LRSF) that cooperatively improves the accuracy of absolute vehicle positioning based on two state estimates of a vehicle in the vicinity: a local sensing estimate, measured by the on-board exteroceptive sensors, and a remote sensing estimate, received from neighbor vehicles via vehicle-to-everything communications. Given both estimates of vehicle state, the ST-LRSF scheme identifies the set of vehicles in the vicinity, determines the reference vehicle state, proposes a spatiotemporal dissimilarity metric between two reference vehicle states, and presents a greedy algorithm to compute a minimal weighted matching (MWM) between them. Given the outcome of MWM, the theoretical position uncertainty of the proposed refinement algorithm is proven to be inversely proportional to the square root of matching size. To further reduce the positioning uncertainty, we also develop an extended Kalman filter model with the refined position of ST-LRSF as one of the measurement inputs. The numerical results demonstrate that the proposed ST-LRSF framework can achieve high positioning accuracy for many different scenarios of cooperative vehicle positioning.
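
    The following toy sketch shows one plausible shape for the greedy matching step, pairing local and remote position estimates by ascending dissimilarity; plain Euclidean distance stands in for the paper's spatiotemporal metric, and the gating threshold is invented.

      import numpy as np

      def greedy_match(local_pos, remote_pos, max_dist=5.0):
          """Greedily pair local and remote estimates by ascending distance.

          Returns index pairs (i, j); each estimate is used at most once,
          and pairs farther apart than max_dist are rejected.
          """
          pairs = sorted(
              (np.linalg.norm(l - r), i, j)
              for i, l in enumerate(local_pos)
              for j, r in enumerate(remote_pos))
          used_i, used_j, match = set(), set(), []
          for d, i, j in pairs:
              if d <= max_dist and i not in used_i and j not in used_j:
                  used_i.add(i)
                  used_j.add(j)
                  match.append((i, j))
          return match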

  15. A Comprehensive Training Data Set for the Development of Satellite-Based Volcanic Ash Detection Algorithms

    NASA Astrophysics Data System (ADS)

    Schmidl, Marius

    2017-04-01

    We present a comprehensive training data set covering a large range of atmospheric conditions, including disperse volcanic ash and desert dust layers. These data sets contain all the information required for the development of volcanic ash detection algorithms based on artificial neural networks, which are urgently needed since volcanic ash in the airspace is a major concern of aviation safety authorities. Selected parts of the data are used to train the volcanic ash detection algorithm VADUGS. They contain atmospheric and surface-related quantities as well as the corresponding simulated satellite data for the channels in the infrared spectral range of the SEVIRI instrument on board MSG-2. To get realistic results, ECMWF, IASI-based, and GEOS-Chem data are used to calculate all parameters describing the environment, whereas the software package libRadtran is used to perform radiative transfer simulations returning the brightness temperatures for each atmospheric state. As optical properties are a prerequisite for radiative simulations accounting for aerosol layers, the development also included the computation of optical properties for a set of different aerosol types from different sources. A description of the developed software and methods is given, along with an overview of the resulting data sets.

  16. DSPACE hardware architecture for on-board real-time image/video processing in European space missions

    NASA Astrophysics Data System (ADS)

    Saponara, Sergio; Donati, Massimiliano; Fanucci, Luca; Odendahl, Maximilian; Leupers, Reiner; Errico, Walter

    2013-02-01

    On-board data processing is a vital task for any satellite and spacecraft because of the importance of processing the sensing data before sending them to Earth, in order to exploit effectively the bandwidth to the ground station. In recent years the amount of sensing data collected by scientific and commercial space missions has increased significantly, while the available downlink bandwidth has remained comparatively stable. The increasing demand for on-board real-time processing capability represents one of the critical issues in forthcoming European missions. Ever faster signal and image processing algorithms are required to accomplish planetary observation, surveillance, Synthetic Aperture Radar imaging and telecommunications. The only available space-qualified Digital Signal Processor (DSP) free of International Traffic in Arms Regulations (ITAR) restrictions offers inadequate performance, so the need for a next-generation European DSP is well recognized in the space community. The DSPACE space-qualified DSP architecture fills the gap between the computational requirements and the available devices. It leverages a pipelined and massively parallel core based on the Very Long Instruction Word (VLIW) paradigm, with 64 registers and 8 operational units, along with cache memories, memory controllers and SpaceWire interfaces. Both the synthesizable VHDL and the software development tools are generated from the LISA high-level model. A Xilinx XC7K325T FPGA was chosen to realize a compact PCI demonstrator board. Finally, first synthesis results on CMOS standard cell technology (ASIC 180 nm) show an area of around 380 kgates and a peak performance of 1000 MIPS and 750 MFLOPS at 125 MHz.

  17. Single frequency GPS measurements in real-time artificial satellite orbit determination

    NASA Astrophysics Data System (ADS)

    Chiaradia, A. P. M.; Kuga, H. K.; Prado, A. F. B. A.

    2003-07-01

    A simplified and compact algorithm with low computational cost, providing accuracy around tens of meters for artificial satellite orbit determination in real time and on board, is developed in this work. The state estimation method is the extended Kalman filter. Cowell's method is used to propagate the state vector, through a simple fourth-order Runge-Kutta numerical integrator with fixed step size. The modeled forces are due to the geopotential up to degree and order 50 of the JGM-2 model. To time-update the state error covariance matrix, a simplified force model is considered. In other words, in computing the state transition matrix, the effect of J_2 (Earth flattening) is treated analytically, which dramatically reduces the processing time. In the measurement model, single-frequency GPS pseudoranges are used, considering the effects of the ionospheric delay, clock offsets of the GPS and user satellites, and relativistic effects. To validate this model, real live data from the Topex/Poseidon satellite are used and the results are compared with the Topex/Poseidon Precision Orbit Ephemeris (POE) generated by NASA/JPL, for several test cases. It is concluded that this compact algorithm enables accuracies of tens of meters with such a simplified force model, an analytical approach for computing the transition matrix, and a cheap GPS receiver providing single-frequency pseudorange measurements.
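
    As an illustration of the measurement side of such a filter, the fragment below applies one EKF update for a single pseudorange with a geometry-plus-clock-bias model; the ionospheric and relativistic corrections used in the paper are omitted, and the state layout is our assumption.

      import numpy as np

      def ekf_pseudorange_update(x, P, gps_pos, rho_meas, sigma=10.0):
          """EKF measurement update with one pseudorange.

          State x = [rx, ry, rz, vx, vy, vz, c*dt]: position, velocity,
          and receiver clock bias (in meters). Measurement model:
          rho = ||r_user - r_gps|| + c*dt.
          """
          r = x[:3]
          los = r - gps_pos
          dist = np.linalg.norm(los)
          h = dist + x[6]
          H = np.zeros((1, 7))
          H[0, :3] = los / dist           # d rho / d position
          H[0, 6] = 1.0                   # d rho / d clock bias
          S = H @ P @ H.T + sigma**2      # innovation covariance (1x1)
          K = P @ H.T / S                 # Kalman gain
          x = x + (K * (rho_meas - h)).ravel()
          P = (np.eye(7) - K @ H) @ P
          return x, P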

  18. A Hybrid FPGA/Tilera Compute Element for Autonomous Hazard Detection and Navigation

    NASA Technical Reports Server (NTRS)

    Villalpando, Carlos Y.; Werner, Robert A.; Carson, John M., III; Khanoyan, Garen; Stern, Ryan A.; Trawny, Nikolas

    2013-01-01

    To increase safety for future missions landing on other planetary or lunar bodies, the Autonomous Landing and Hazard Avoidance Technology (ALHAT) program is developing an integrated sensor for autonomous surface analysis and hazard determination. The ALHAT Hazard Detection System (HDS) consists of a Flash LIDAR for measuring the topography of the landing site, a gimbal to scan across the terrain, and an Inertial Measurement Unit (IMU), along with terrain analysis algorithms to identify the landing site and the local hazards. An FPGA and Manycore processor system was developed to interface all the devices in the HDS, to provide high-resolution timing to accurately measure system state, and to run the surface analysis algorithms quickly and efficiently. In this paper, we will describe how we integrated COTS components such as an FPGA evaluation board, a TILExpress64, and multi-threaded/multi-core aware software to build the HDS Compute Element (HDSCE). The ALHAT program is also working with the NASA Morpheus Project and has integrated the HDS as a sensor on the Morpheus Lander. This paper will also describe how the HDS is integrated with the Morpheus lander and the results of the initial test flights with the HDS installed. We will also describe future improvements to the HDSCE.

  19. A hybrid FPGA/Tilera compute element for autonomous hazard detection and navigation

    NASA Astrophysics Data System (ADS)

    Villalpando, C. Y.; Werner, R. A.; Carson, J. M.; Khanoyan, G.; Stern, R. A.; Trawny, N.

    To increase safety for future missions landing on other planetary or lunar bodies, the Autonomous Landing and Hazard Avoidance Technology (ALHAT) program is developing an integrated sensor for autonomous surface analysis and hazard determination. The ALHAT Hazard Detection System (HDS) consists of a Flash LIDAR for measuring the topography of the landing site, a gimbal to scan across the terrain, and an Inertial Measurement Unit (IMU), along with terrain analysis algorithms to identify the landing site and the local hazards. An FPGA and Manycore processor system was developed to interface all the devices in the HDS, to provide high-resolution timing to accurately measure system state, and to run the surface analysis algorithms quickly and efficiently. In this paper, we will describe how we integrated COTS components such as an FPGA evaluation board, a TILExpress64, and multi-threaded/multi-core aware software to build the HDS Compute Element (HDSCE). The ALHAT program is also working with the NASA Morpheus Project and has integrated the HDS as a sensor on the Morpheus Lander. This paper will also describe how the HDS is integrated with the Morpheus lander and the results of the initial test flights with the HDS installed. We will also describe future improvements to the HDSCE.

  20. Development and Control of the Naval Postgraduate School Planar Autonomous Docking Simulator (NPADS)

    NASA Astrophysics Data System (ADS)

    Porter, Robert D.

    2002-09-01

    The objective of this thesis was to design, construct, and develop the initial autonomous control algorithm for the NPS Planar Autonomous Docking Simulator (NPADS). The effort included hardware design, fabrication, installation and integration; mass property determination; and the development and testing of control laws utilizing MATLAB and Simulink for modeling and LabVIEW for NPADS control. The NPADS vehicle uses air pads and a granite table to simulate a 2-D, drag-free, zero-g space environment. It is a completely self-contained vehicle equipped with eight cold-gas, bang-bang type thrusters and a reaction wheel for motion control. A 'star sensor' CCD camera locates the vehicle on the table, while a color CCD docking camera and two robotic arms will locate and dock with a target vehicle. The on-board computer system leverages PXI technology and a single source, simplifying systems integration. The vehicle is powered by two lead-acid batteries for completely autonomous operation. A graphical user interface and wireless Ethernet enable the user to command and monitor the vehicle from a remote command and data acquisition computer. Two control algorithms were developed; they allow the user to either control the thrusters and reaction wheel manually or simply specify a desired location and rotation angle.

  1. 47 CFR 15.102 - CPU boards and power supplies used in personal computers.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... computers. 15.102 Section 15.102 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Unintentional Radiators § 15.102 CPU boards and power supplies used in personal computers. (a... modifications that must be made to a personal computer, peripheral device, CPU board or power supply during...

  2. 47 CFR 15.102 - CPU boards and power supplies used in personal computers.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... computers. 15.102 Section 15.102 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Unintentional Radiators § 15.102 CPU boards and power supplies used in personal computers. (a... modifications that must be made to a personal computer, peripheral device, CPU board or power supply during...

  3. 47 CFR 15.102 - CPU boards and power supplies used in personal computers.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... computers. 15.102 Section 15.102 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Unintentional Radiators § 15.102 CPU boards and power supplies used in personal computers. (a... modifications that must be made to a personal computer, peripheral device, CPU board or power supply during...

  4. 47 CFR 15.102 - CPU boards and power supplies used in personal computers.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... computers. 15.102 Section 15.102 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Unintentional Radiators § 15.102 CPU boards and power supplies used in personal computers. (a... modifications that must be made to a personal computer, peripheral device, CPU board or power supply during...

  5. 47 CFR 15.102 - CPU boards and power supplies used in personal computers.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... computers. 15.102 Section 15.102 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Unintentional Radiators § 15.102 CPU boards and power supplies used in personal computers. (a... modifications that must be made to a personal computer, peripheral device, CPU board or power supply during...

  6. An FPGA-based High Speed Parallel Signal Processing System for Adaptive Optics Testbed

    NASA Astrophysics Data System (ADS)

    Kim, H.; Choi, Y.; Yang, Y.

    In this paper a state-of-the-art FPGA (Field Programmable Gate Array) based high-speed parallel signal processing system (SPS) for an adaptive optics (AO) testbed with 1 kHz wavefront error (WFE) correction frequency is reported. The AO system consists of a Shack-Hartmann sensor (SHS), a deformable mirror (DM), a tip-tilt sensor (TTS), a tip-tilt mirror (TTM) and an FPGA-based high-performance SPS to correct wavefront aberrations. The SHS is composed of 400 subapertures and the DM of 277 actuators in a Fried geometry, requiring an SPS with high-speed parallel computing capability. In this study, the target WFE correction speed is 1 kHz; therefore, massive parallel computing capability is required, as well as strict hard real-time constraints on measurements from sensors, matrix computation latency for the correction algorithms, and output of control signals for the actuators. To meet these requirements, an FPGA-based real-time SPS with parallel computing capability is proposed. In particular, the SPS is made up of a National Instruments (NI) real-time computer and five FPGA boards based on the state-of-the-art Xilinx Kintex-7 FPGA. Programming is done in NI's LabVIEW environment, providing flexibility when applying different algorithms for WFE correction. It also provides a faster programming and debugging environment compared to conventional ones. One of the five FPGAs is assigned to measure the TTS and calculate control signals for the TTM, while the other four are used to receive the SHS signal, calculate the slopes for each subaperture and the correction signal for the DM. With the parallel processing capability of the SPS, an overall closed-loop WFE correction speed of 1 kHz has been achieved. System requirements, architecture and implementation issues are described; furthermore, experimental results are also given.
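
    A software analogue of the per-subaperture work the FPGAs perform might look as follows: centroid-based slope extraction over a subaperture grid, followed by one matrix-vector product with a reconstructor. The reconstructor here is a random placeholder, and the 20x20 grid merely approximates the 400-subaperture layout.

      import numpy as np

      def subaperture_slopes(frame, grid=20):
          """Centroid x/y slopes for a grid x grid Shack-Hartmann frame."""
          h, w = frame.shape
          sy, sx = h // grid, w // grid
          yy, xx = np.mgrid[0:sy, 0:sx]
          slopes = []
          for gy in range(grid):
              for gx in range(grid):
                  spot = frame[gy*sy:(gy+1)*sy, gx*sx:(gx+1)*sx].astype(float)
                  tot = spot.sum() or 1.0
                  # centroid offset from the subaperture centre approximates
                  # the local wavefront slope
                  slopes.append(((xx * spot).sum() / tot - (sx - 1) / 2,
                                 (yy * spot).sum() / tot - (sy - 1) / 2))
          return np.asarray(slopes).ravel()   # 2 * grid^2 slope values

      # DM command: one matrix-vector product with a precomputed
      # reconstructor R (random placeholder of shape 277 x 800 here).
      R = np.random.randn(277, 800) * 1e-3
      # frame = ... 400-subaperture SHS image ...
      # dm_cmd = R @ subaperture_slopes(frame)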

  7. Validation of a wireless modular monitoring system for structures

    NASA Astrophysics Data System (ADS)

    Lynch, Jerome P.; Law, Kincho H.; Kiremidjian, Anne S.; Carryer, John E.; Kenny, Thomas W.; Partridge, Aaron; Sundararajan, Arvind

    2002-06-01

    A wireless sensing unit for use in a Wireless Modular Monitoring System (WiMMS) has been designed and constructed. Drawing upon advanced technological developments in the areas of wireless communications, low-power microprocessors and micro-electro-mechanical system (MEMS) sensing transducers, the wireless sensing unit represents a high-performance yet low-cost solution for monitoring the short-term and long-term performance of structures. A sophisticated reduced instruction set computer (RISC) microcontroller is placed at the core of the unit to accommodate on-board computation, measurement filtering and data interrogation algorithms. The functionality of the wireless sensing unit is validated through various experiments involving multiple sensing transducers interfaced to the sensing unit. In particular, MEMS-based accelerometers are used as the primary sensing transducer in this study's validation experiments. A five-degree-of-freedom scaled test structure mounted upon a shaking table is employed for system validation.

  8. An efficient parallel termination detection algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, A. H.; Crivelli, S.; Jessup, E. R.

    2004-05-27

    Information local to any one processor is insufficient to monitor the overall progress of most distributed computations. Typically, a second distributed computation for detecting termination of the main computation is necessary. In order to be a useful computational tool, the termination detection routine must operate concurrently with the main computation, adding minimal overhead, and it must promptly and correctly detect termination when it occurs. In this paper, we present a new algorithm for detecting the termination of a parallel computation on distributed-memory MIMD computers that satisfies all of those criteria. A variety of termination detection algorithms have been devised. Of these, the algorithm presented by Sinha, Kale, and Ramkumar (henceforth, the SKR algorithm) is unique in its ability to adapt to the load conditions of the system on which it runs, thereby minimizing the impact of termination detection on performance. Because their algorithm also detects termination quickly, we consider it to be the most efficient practical algorithm presently available. The termination detection algorithm presented here was developed for use in the PMESC programming library for distributed-memory MIMD computers. Like the SKR algorithm, our algorithm adapts to system loads and imposes little overhead. Also like the SKR algorithm, ours is tree-based, and it does not depend on any assumptions about the physical interconnection topology of the processors or the specifics of the distributed computation. In addition, our algorithm is easier to implement and requires only half as many tree traversals as does the SKR algorithm. This paper is organized as follows. In section 2, we define our computational model. In section 3, we review the SKR algorithm. We introduce our new algorithm in section 4, and prove its correctness in section 5. We discuss its efficiency and present experimental results in section 6.

  9. Computer Science and Telecommunications Board summary of activities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blumenthal, M.S.

    1992-03-27

    The Computer Science and Telecommunications Board (CSTB) considers technical and policy issues pertaining to computer science, telecommunications, and associated technologies. CSTB actively disseminates the results of its completed projects to those in a position to help implement their recommendations or otherwise use their insights. It provides a forum for the exchange of information on computer science, computing technology, and telecommunications. This report discusses the major accomplishments of CSTB.

  10. Two-dimensional thermography image retrieval from zig-zag scanned data with TZ-SCAN

    NASA Astrophysics Data System (ADS)

    Okumura, Hiroshi; Yamasaki, Ryohei; Arai, Kohei

    2008-10-01

    TZ-SCAN is a simple and low-cost thermal imaging device which consists of a single-point radiation thermometer on a tripod with a pan-tilt rotator, a DC motor controller board with a USB interface, and a laptop computer for rotator control, data acquisition, and data processing. TZ-SCAN acquires a series of zig-zag scanned data and stores the data as a CSV file. A 2-D thermal distribution image can be retrieved by using the second quefrency peak calculated from the TZ-SCAN data. An experiment was conducted to confirm the validity of the thermal retrieval algorithm. The experimental result shows sufficient accuracy for 2-D thermal distribution image retrieval.
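
    The record does not spell out the retrieval procedure, but one plausible reading is sketched below: the dominant quefrency peak of the cepstrum gives the per-line sample count, after which the zig-zag stream is reshaped into rows with alternate rows flipped. Names and the minimum-period guard are our assumptions.

      import numpy as np

      def scan_period_from_cepstrum(signal, min_period=8):
          """Estimate the per-line sample count of a zig-zag scan.

          Periodic line structure appears as peaks in the cepstrum
          (inverse FFT of the log magnitude spectrum); the dominant
          quefrency peak gives the period in samples.
          """
          signal = np.asarray(signal, float)
          spec = np.abs(np.fft.rfft(signal - signal.mean()))
          ceps = np.fft.irfft(np.log(spec + 1e-12))
          return np.argmax(ceps[min_period:len(ceps) // 2]) + min_period

      def to_image(signal, period):
          """Reshape the 1-D stream into rows, flipping alternate rows
          to undo the zig-zag direction reversal."""
          signal = np.asarray(signal, float)
          rows = len(signal) // period
          img = signal[:rows * period].reshape(rows, period).copy()
          img[1::2] = img[1::2, ::-1]
          return img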

  11. Method and system for environmentally adaptive fault tolerant computing

    NASA Technical Reports Server (NTRS)

    Copenhaver, Jason L. (Inventor); Jeremy, Ramos (Inventor); Wolfe, Jeffrey M. (Inventor); Brenner, Dean (Inventor)

    2010-01-01

    A method and system for adapting fault tolerant computing. The method includes the steps of measuring an environmental condition representative of an environment. An on-board processing system's sensitivity to the measured environmental condition is measured. It is determined whether to reconfigure a fault tolerance of the on-board processing system based in part on the measured environmental condition. The fault tolerance of the on-board processing system may be reconfigured based in part on the measured environmental condition.
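
    A toy illustration of the idea (not the patented method): choose a redundancy configuration from a measured environmental condition and the system's sensitivity to it. Thresholds, units, and mode names are all invented.

      from enum import Enum

      class Mode(Enum):
          SIMPLEX = 1   # single string, fastest
          DUPLEX = 2    # two copies plus comparison
          TMR = 3       # triple modular redundancy with voting

      def select_mode(dose_rate_rad_s, sensitivity):
          """Pick a fault-tolerance mode from environment and sensitivity.

          dose_rate_rad_s: measured radiation dose rate (illustrative units).
          sensitivity: expected upsets per rad for this processing system.
          """
          upset_rate = dose_rate_rad_s * sensitivity
          if upset_rate < 1e-6:
              return Mode.SIMPLEX
          if upset_rate < 1e-3:
              return Mode.DUPLEX
          return Mode.TMR

      # e.g. passing through a high-radiation region might trigger TMR:
      # select_mode(dose_rate_rad_s=0.5, sensitivity=1e-2) -> Mode.TMR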

  12. 78 FR 37647 - Privacy Act of 1974, as Amended; Computer Matching Program (SSA/Railroad Retirement Board (RRB...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-21

    ... SOCIAL SECURITY ADMINISTRATION [Docket No. SSA 2013-0010] Privacy Act of 1974, as Amended; Computer Matching Program (SSA/ Railroad Retirement Board (RRB))--Match Number 1006 AGENCY: Social Security Administration. ACTION: Notice of a renewal of an existing computer matching program that will expire on...

  13. All solid state mid-infrared dual-comb spectroscopy platform based on QCL technology

    NASA Astrophysics Data System (ADS)

    Hugi, Andreas; Geiser, Markus; Villares, Gustavo; Cappelli, Francesco; Blaser, Stephane; Faist, Jérôme

    2015-01-01

    We develop a spectroscopy platform for industrial applications based on semiconductor quantum cascade laser (QCL) frequency combs. The platform's key features are an unmatched combination of bandwidth (100 cm^-1), resolution (100 kHz), speed (tens to hundreds of μs), as well as size and robustness, opening doors to previously unreachable markets. The sensor can be built extremely compact and robust since the laser source is an all-electrically pumped semiconductor optical frequency comb and no mechanical elements are required. However, the parallel acquisition of dual-comb spectrometers comes at the price of enormous data rates. For system scalability, robustness and optical simplicity we use free-running QCL combs, so no complicated optical locking mechanisms are required. To reach high signal-to-noise ratios, we develop an algorithm based on a combination of coherent and non-coherent averaging. This algorithm is specifically optimized for free-running, small-footprint, and therefore high-repetition-rate comb sources. As a consequence, our system generates data rates of up to 3.2 GB/s. These data rates need to be reduced by several orders of magnitude in real time in order to be useful for spectral fitting algorithms. We present the development of a data-treatment solution which reaches a single-channel throughput of 22% using a standard laptop computer. Using a state-of-the-art desktop computer, the throughput is increased to 43%. This is combined with a data-acquisition board into a stand-alone data processing unit, allowing real-time industrial process observation and continuous averaging to achieve the highest signal fidelity.
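
    The contrast between the two averaging modes can be demonstrated on synthetic data: complex spectra are re-phased on a known comb line before summation (coherent), versus summing magnitudes (non-coherent). The single tone, noise level, and phase-drift model are stand-ins, not the platform's algorithm.

      import numpy as np

      rng = np.random.default_rng(0)
      n, frames, bin_ = 4096, 200, 410
      tone = np.exp(2j * np.pi * bin_ * np.arange(n) / n)  # stand-in comb line

      coh = np.zeros(n, complex)
      noncoh = np.zeros(n)
      for _ in range(frames):
          drift = np.exp(1j * rng.uniform(0, 2 * np.pi))   # free-running phase
          meas = tone * drift + rng.normal(0, 5, n) + 1j * rng.normal(0, 5, n)
          spec = np.fft.fft(meas)
          coh += spec * np.exp(-1j * np.angle(spec[bin_]))  # re-phase, then sum
          noncoh += np.abs(spec)

      # line-to-floor ratio: coherent averaging keeps improving with frames,
      # while averaging magnitudes leaves a rectified noise floor
      print(np.abs(coh[bin_]) / np.median(np.abs(coh)),
            noncoh[bin_] / np.median(noncoh))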

  14. Model and algorithm for container ship stowage planning based on bin-packing problem

    NASA Astrophysics Data System (ADS)

    Zhang, Wei-Ying; Lin, Yan; Ji, Zhuo-Shang

    2005-09-01

    In the general case, a container ship serves many different ports on each voyage. A stowage plan made at one port must take account of its influence on subsequent ports, so the complexity of the stowage planning problem increases due to its multi-port nature. This problem is NP-hard. In order to reduce the computational complexity, the problem is decomposed into two sub-problems in this paper. First, the container ship stowage problem (CSSP) is regarded as a packing problem: the ship bays on board the vessel are regarded as bins, the number of slots in each bay as the bin capacities, and containers with different characteristics (homogeneous container groups) as the items to be packed. At this stage there are two objective functions: one is to minimize the number of bays occupied by containers and the other is to minimize the number of overstows. Second, the containers assigned to each bay in the first stage are allocated to specific slots; here the objective functions are to minimize the metacentric height, heel and overstows. A tabu search heuristic is used to solve the sub-problems. The main focus of this paper is on the first sub-problem. A case study confirms the feasibility of the model and algorithm.
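
    For the first-stage packing view, a generic first-fit-decreasing sketch is shown below (the paper itself uses a tabu search and also penalizes overstows, which this toy version ignores); the group sizes and capacities are invented.

      def first_fit_decreasing(group_sizes, bay_capacity, n_bays):
          """Assign container groups (sizes in slots) to bays, largest first.

          Returns a list of per-bay group sizes, or None if they do not fit.
          A fuller model would also track overstows across ports.
          """
          loads = [0] * n_bays
          plan = [[] for _ in range(n_bays)]
          for size in sorted(group_sizes, reverse=True):
              for b in range(n_bays):
                  if loads[b] + size <= bay_capacity:
                      loads[b] += size
                      plan[b].append(size)
                      break
              else:
                  return None   # no bay can take this group
          return plan

      print(first_fit_decreasing([40, 25, 25, 20, 10, 10],
                                 bay_capacity=60, n_bays=3))
      # -> [[40, 20], [25, 25, 10], [10]]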

  15. Monte-Carlo Tree Search in Settlers of Catan

    NASA Astrophysics Data System (ADS)

    Szita, István; Chaslot, Guillaume; Spronck, Pieter

    Games are considered important benchmark opportunities for artificial intelligence research. Modern strategic board games can typically be played by three or more people, which makes them suitable test beds for investigating multi-player strategic decision making. Monte-Carlo Tree Search (MCTS) is a recently published family of algorithms that achieved successful results with classical, two-player, perfect-information games such as Go. In this paper we apply MCTS to the multi-player, non-deterministic board game Settlers of Catan. We implemented an agent that is able to play against computer-controlled and human players. We show that MCTS can be adapted successfully to multi-agent environments, and present two approaches for providing the agent with a limited amount of domain knowledge. Our results show that the agent has considerable playing strength when compared to the game implementation's existing heuristics. We may therefore conclude that MCTS is a suitable tool for building a strong Settlers of Catan player.
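
    To show the select/expand/rollout/backpropagate cycle that MCTS rests on, here is a compact UCT implementation for a toy two-player Nim game; the paper's multi-player, non-deterministic Catan agent is of course far richer.

      import math, random

      def moves(state):
          # Nim: a move takes 1-3 stones; taking the last stone wins
          return [m for m in (1, 2, 3) if m <= state]

      class Node:
          def __init__(self, state, parent=None):
              self.state, self.parent = state, parent
              self.children = {}                  # move -> child Node
              self.visits, self.wins = 0, 0.0     # wins for the player who just moved

      def uct(root_state, iters=20000, c=1.4):
          root = Node(root_state)
          for _ in range(iters):
              node = root
              # 1. selection: descend while the node is fully expanded
              while node.state and len(node.children) == len(moves(node.state)):
                  node = max(node.children.values(),
                             key=lambda n: n.wins / n.visits
                             + c * math.sqrt(math.log(n.parent.visits) / n.visits))
              # 2. expansion: add one untried child
              if node.state:
                  m = random.choice([m for m in moves(node.state)
                                     if m not in node.children])
                  node.children[m] = Node(node.state - m, node)
                  node = node.children[m]
              # 3. rollout: random play, counting plies taken after `node`
              state, plies = node.state, 0
              while state:
                  state -= random.choice(moves(state))
                  plies += 1
              # an even ply count means the player who moved into `node`
              # also takes the last stone, i.e. wins from node's viewpoint
              reward = 1.0 if plies % 2 == 0 else 0.0
              # 4. backpropagation, flipping the perspective at each level
              while node:
                  node.visits += 1
                  node.wins += reward
                  reward = 1.0 - reward
                  node = node.parent
          return max(root.children.items(), key=lambda kv: kv[1].visits)[0]

      print(uct(10))   # optimal play in 10-stone Nim takes 2 stones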

  16. Space Debris Detection on the HPDP, a Coarse-Grained Reconfigurable Array Architecture for Space

    NASA Astrophysics Data System (ADS)

    Suarez, Diego Andres; Bretz, Daniel; Helfers, Tim; Weidendorfer, Josef; Utzmann, Jens

    2016-08-01

    Stream processing, widely used in communications and digital signal processing applications, requires high-throughput data processing that is achieved in most cases using Application-Specific Integrated Circuit (ASIC) designs. Lack of programmability is an issue especially in space applications, which use on-board components with long life cycles that require application updates. To this end, the High Performance Data Processor (HPDP) architecture integrates an array of coarse-grained reconfigurable elements to provide both flexible and efficient computational power suitable for stream-based data processing applications in space. In this work the capabilities of the HPDP architecture are demonstrated with the implementation of a real-time image processing algorithm for space debris detection in a space-based space surveillance system. The implementation challenges and alternatives are described, making trade-offs to improve performance at the expense of negligible degradation of detection accuracy. The proposed implementation uses over 99% of the available computational resources. Performance estimates based on simulations show that the HPDP can amply match the application requirements.

  17. A case study for the real-time experimental evaluation of the VIPER microprocessor

    NASA Astrophysics Data System (ADS)

    Carreno, Victor A.; Angellatta, Rob K.

    1991-09-01

    An experiment to evaluate the applicability of the Verifiable Integrated Processor for Enhanced Reliability (VIPER) microprocessor to real time control is described. The VIPER microprocessor was invented by the Royal Signals and Radar Establishment (RSRE), U.K., and is an example of the use of formal mathematical methods for developing electronic digital systems with a high degree of assurance on the system design and implementation correctness. The experiment consisted of selecting a control law, writing the control law algorithm for the VIPER processor, and providing real time, dynamic inputs into the processor and monitoring the outputs. The control law selected and coded for the VIPER processor was the yaw damper function of an automatic landing program for a 737 aircraft. The mechanisms for interfacing the VIPER Single Board Computer to the VAX host are described. Results include run time experiences, performance evaluation, and comparison of VIPER and FORTRAN yaw damper algorithm output for accuracy estimation.

  18. A case study for the real-time experimental evaluation of the VIPER microprocessor

    NASA Technical Reports Server (NTRS)

    Carreno, Victor A.; Angellatta, Rob K.

    1991-01-01

    An experiment to evaluate the applicability of the Verifiable Integrated Processor for Enhanced Reliability (VIPER) microprocessor to real time control is described. The VIPER microprocessor was invented by the Royal Signals and Radar Establishment (RSRE), U.K., and is an example of the use of formal mathematical methods for developing electronic digital systems with a high degree of assurance on the system design and implementation correctness. The experiment consisted of selecting a control law, writing the control law algorithm for the VIPER processor, and providing real time, dynamic inputs into the processor and monitoring the outputs. The control law selected and coded for the VIPER processor was the yaw damper function of an automatic landing program for a 737 aircraft. The mechanisms for interfacing the VIPER Single Board Computer to the VAX host are described. Results include run time experiences, performance evaluation, and comparison of VIPER and FORTRAN yaw damper algorithm output for accuracy estimation.

  19. Assessment of a visually guided autonomous exploration robot

    NASA Astrophysics Data System (ADS)

    Harris, C.; Evans, R.; Tidey, E.

    2008-10-01

    A system has been developed to enable a robot vehicle to autonomously explore and map an indoor environment using only visual sensors. The vehicle is equipped with a single camera, whose output is wirelessly transmitted to an off-board standard PC for processing. Visual features within the camera imagery are extracted and tracked, and their 3D positions are calculated using a Structure from Motion algorithm. As the vehicle travels, obstacles in its surroundings are identified and a map of the explored region is generated. This paper discusses suitable criteria for assessing the performance of the system by computer-based simulation and practical experiments with a real vehicle. Performance measures identified include the positional accuracy of the 3D map and the vehicle's location, the efficiency and completeness of the exploration and the system reliability. Selected results are presented and the effect of key system parameters and algorithms on performance is assessed. This work was funded by the Systems Engineering for Autonomous Systems (SEAS) Defence Technology Centre established by the UK Ministry of Defence.

  20. Orbital theory in terms of KS elements with luni-solar perturbations

    NASA Astrophysics Data System (ADS)

    Sellamuthu, Harishkumar; Sharma, Ram

    2016-07-01

    Precise orbit computation of Earth-orbiting satellites is essential for efficient mission planning of planetary exploration, navigation and satellite geodesy. The third-body perturbations of the Sun and the Moon predominantly affect satellite motion in high-altitude and elliptical orbits, where the effect of atmospheric drag is negligible. The physics of the luni-solar gravity effect on Earth satellites has been studied extensively over the years. The combined luni-solar gravitational attraction induces a cumulative effect on the dynamics of satellite orbits, which mainly oscillates the perigee altitude. Though accurate orbital parameters are computed by numerical integration with complex force models, analytical theories are highly valued for the manifold of solutions they provide with relatively simple force models. During close approach, the classical equations of motion of celestial mechanics are almost singular and unstable for long-term orbit propagation. A new singularity-free analytical theory in terms of KS (Kustaanheimo and Stiefel) regular elements with respect to luni-solar perturbations is developed. These equations are regular everywhere, with the eccentric anomaly as the independent variable. The Plataforma Solar de Almería (PSA) algorithm and a Fourier series algorithm are used to compute accurate positions of the Sun and the Moon, respectively. Numerical studies are carried out for a wide range of initial parameters and the analytical solutions are found to be satisfactory when compared with numerically integrated values. The symmetrical nature of the equations allows only two of the nine equations to be solved for computing the state vectors and the time. Only a change in the initial conditions is required to solve the other equations. This theory will find multiple applications, including on-board software packages and mission analysis.

  1. A simplified approach to characterizing a kilovoltage source spectrum for accurate dose computation.

    PubMed

    Poirier, Yannick; Kouznetsov, Alexei; Tambasco, Mauro

    2012-06-01

    To investigate and validate the clinical feasibility of using half-value layer (HVL) and peak tube potential (kVp) for characterizing a kilovoltage (kV) source spectrum for the purpose of computing kV x-ray dose accrued from imaging procedures. To use this approach to characterize a Varian® On-Board Imager® (OBI) source and perform experimental validation of a novel in-house hybrid dose computation algorithm for kV x-rays. We characterized the spectrum of an imaging kV x-ray source using the HVL and the kVp as the sole beam quality identifiers using third-party freeware Spektr to generate the spectra. We studied the sensitivity of our dose computation algorithm to uncertainties in the beam's HVL and kVp by systematically varying these spectral parameters. To validate our approach experimentally, we characterized the spectrum of a Varian® OBI system by measuring the HVL using a Farmer-type Capintec ion chamber (0.06 cc) in air and compared dose calculations using our computationally validated in-house kV dose calculation code to measured percent depth-dose and transverse dose profiles for 80, 100, and 125 kVp open beams in a homogeneous phantom and a heterogeneous phantom comprising tissue, lung, and bone equivalent materials. The sensitivity analysis of the beam quality parameters (i.e., HVL, kVp, and field size) on dose computation accuracy shows that typical measurement uncertainties in the HVL and kVp (±0.2 mm Al and ±2 kVp, respectively) source characterization parameters lead to dose computation errors of less than 2%. Furthermore, for an open beam with no added filtration, HVL variations affect dose computation accuracy by less than 1% for a 125 kVp beam when field size is varied from 5 × 5 cm(2) to 40 × 40 cm(2). The central axis depth dose calculations and experimental measurements for the 80, 100, and 125 kVp energies agreed within 2% for the homogeneous and heterogeneous block phantoms, and agreement for the transverse dose profiles was within 6%. The HVL and kVp are sufficient for characterizing a kV x-ray source spectrum for accurate dose computation. As these parameters can be easily and accurately measured, they provide for a clinically feasible approach to characterizing a kV energy spectrum to be used for patient specific x-ray dose computations. Furthermore, these results provide experimental validation of our novel hybrid dose computation algorithm. © 2012 American Association of Physicists in Medicine.
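
    The characterization rests on matching a measured HVL, the thickness of aluminum that halves the beam intensity. A minimal sketch of locating the HVL for a given spectrum follows; the energy grid, fluence and attenuation values are placeholders rather than real physics data, and for simplicity the halving is applied to total fluence rather than to air kerma as the rigorous definition requires.

        import numpy as np
        from scipy.optimize import brentq

        energies = np.array([30., 50., 70., 90., 110.])    # keV (hypothetical grid)
        fluence  = np.array([0.2, 1.0, 0.8, 0.5, 0.2])     # relative fluence (hypothetical)
        mu_al    = np.array([3.0, 1.0, 0.5, 0.3, 0.25])    # Al attenuation, 1/cm (hypothetical)

        def transmitted(t_cm):
            """Total fluence transmitted through t_cm of aluminum (Beer-Lambert per bin)."""
            return np.sum(fluence * np.exp(-mu_al * t_cm))

        # HVL: aluminum thickness at which the transmitted beam drops to half.
        hvl = brentq(lambda t: transmitted(t) - 0.5 * transmitted(0.0), 0.0, 10.0)
        print(f"HVL = {hvl * 10:.2f} mm Al")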

  2. QPSO-Based Adaptive DNA Computing Algorithm

    PubMed Central

    Karakose, Mehmet; Cigdem, Ugur

    2013-01-01

    DNA (deoxyribonucleic acid) computing, a new computation model that uses DNA molecules for information storage, has been increasingly used for optimization and data analysis in recent years. However, the DNA computing algorithm has some limitations in terms of convergence speed, adaptability, and effectiveness. In this paper, a new approach for the improvement of DNA computing is proposed. This approach aims to run the DNA computing algorithm with parameters adapted towards the desired goal using quantum-behaved particle swarm optimization (QPSO). The contributions of the proposed QPSO-based adaptive DNA computing algorithm are as follows: (1) the population size, crossover rate, maximum number of operations, enzyme and virus mutation rates, and fitness function of the DNA computing algorithm are tuned simultaneously for the adaptive process, (2) the adaptation is performed by the QPSO algorithm for goal-driven progress, faster operation, and flexibility in data, and (3) a numerical realization of the DNA computing algorithm with the proposed approach is implemented for system identification. Two experiments with different systems were carried out to evaluate the performance of the proposed approach, with comparative results. Experimental results obtained with Matlab and FPGA demonstrate effective optimization, considerable convergence speed, and high accuracy relative to the DNA computing algorithm. PMID:23935409
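
    The QPSO position update that drives the adaptation can be stated compactly. The sketch below is a generic QPSO minimizer on a toy objective standing in for the DNA algorithm's fitness; the contraction-expansion coefficient and search bounds are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)

        def qpso(objective, dim=4, n_particles=20, iters=100, beta=0.75):
            x = rng.uniform(-5.0, 5.0, (n_particles, dim))
            pbest, pcost = x.copy(), np.array([objective(p) for p in x])
            g = pbest[pcost.argmin()].copy()                    # global best
            for _ in range(iters):
                mbest = pbest.mean(axis=0)                      # mean of personal bests
                phi = rng.random((n_particles, dim))
                attractor = phi * pbest + (1.0 - phi) * g       # local attractor per particle
                u = rng.random((n_particles, dim))
                sign = np.where(rng.random((n_particles, dim)) < 0.5, -1.0, 1.0)
                x = attractor + sign * beta * np.abs(mbest - x) * np.log(1.0 / u)
                cost = np.array([objective(p) for p in x])
                better = cost < pcost
                pbest[better], pcost[better] = x[better], cost[better]
                g = pbest[pcost.argmin()].copy()
            return g, pcost.min()

        best, val = qpso(lambda v: float(np.sum(v**2)))         # toy sphere objective
        print(best, val)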

  3. Quantitative Features of Liver Lesions, Lung Nodules, and Renal Stones at Multi-Detector Row CT Examinations: Dependency on Radiation Dose and Reconstruction Algorithm.

    PubMed

    Solomon, Justin; Mileto, Achille; Nelson, Rendon C; Roy Choudhury, Kingshuk; Samei, Ehsan

    2016-04-01

    To determine if radiation dose and reconstruction algorithm affect the computer-based extraction and analysis of quantitative imaging features in lung nodules, liver lesions, and renal stones at multi-detector row computed tomography (CT). Retrospective analysis of data from a prospective, multicenter, HIPAA-compliant, institutional review board-approved clinical trial was performed by extracting 23 quantitative imaging features (size, shape, attenuation, edge sharpness, pixel value distribution, and texture) of lesions on multi-detector row CT images of 20 adult patients (14 men, six women; mean age, 63 years; range, 38-72 years) referred for known or suspected focal liver lesions, lung nodules, or kidney stones. Data were acquired between September 2011 and April 2012. All multi-detector row CT scans were performed at two different radiation dose levels; images were reconstructed with filtered back projection, adaptive statistical iterative reconstruction, and model-based iterative reconstruction (MBIR) algorithms. A linear mixed-effects model was used to assess the effect of radiation dose and reconstruction algorithm on extracted features. Among the 23 imaging features assessed, radiation dose had a significant effect on five, three, and four of the features for liver lesions, lung nodules, and renal stones, respectively (P < .002 for all comparisons). Adaptive statistical iterative reconstruction had a significant effect on three, one, and one of the features for liver lesions, lung nodules, and renal stones, respectively (P < .002 for all comparisons). MBIR reconstruction had a significant effect on nine, 11, and 15 of the features for liver lesions, lung nodules, and renal stones, respectively (P < .002 for all comparisons). Of note, the measured size of lung nodules and renal stones with MBIR was significantly different than those for the other two algorithms (P < .002 for all comparisons). Although lesion texture was significantly affected by the reconstruction algorithm used (average of 3.33 features affected by MBIR throughout lesion types; P < .002, for all comparisons), no significant effect of the radiation dose setting was observed for all but one of the texture features (P = .002-.998). Radiation dose settings and reconstruction algorithms affect the extraction and analysis of quantitative imaging features in lesions at multi-detector row CT.

  4. An advanced retrieval algorithm for greenhouse gases using polarization information measured by GOSAT TANSO-FTS SWIR I: Simulation study

    NASA Astrophysics Data System (ADS)

    Kikuchi, N.; Yoshida, Y.; Uchino, O.; Morino, I.; Yokota, T.

    2016-11-01

    We present an algorithm for retrieving column-averaged dry air mole fraction of carbon dioxide (XCO2) and methane (XCH4) from reflected spectra in the shortwave infrared (SWIR) measured by the TANSO-FTS (Thermal And Near infrared Sensor for carbon Observation Fourier Transform Spectrometer) sensor on board the Greenhouse gases Observing SATellite (GOSAT). The algorithm uses the two linear polarizations observed by TANSO-FTS to improve corrections to the interference effects of atmospheric aerosols, which degrade the accuracy in the retrieved greenhouse gas concentrations. To account for polarization by the land surface reflection in the forward model, we introduced a bidirectional reflection matrix model that has two parameters to be retrieved simultaneously with other state parameters. The accuracy in XCO2 and XCH4 values retrieved with the algorithm was evaluated by using simulated retrievals over both land and ocean, focusing on the capability of the algorithm to correct imperfect prior knowledge of aerosols. To do this, we first generated simulated TANSO-FTS spectra using a global distribution of aerosols computed by the aerosol transport model SPRINTARS. Then the simulated spectra were submitted to the algorithms as measurements both with and without polarization information, adopting a priori profiles of aerosols that differ from the true profiles. We found that the accuracy of XCO2 and XCH4, as well as profiles of aerosols, retrieved with polarization information was considerably improved over values retrieved without polarization information, for simulated observations over land with aerosol optical thickness greater than 0.1 at 1.6 μm.

  5. Computation of Symmetric Discrete Cosine Transform Using Bakhvalov's Algorithm

    NASA Technical Reports Server (NTRS)

    Aburdene, Maurice F.; Strojny, Brian C.; Dorband, John E.

    2005-01-01

    A number of algorithms for recursive computation of the discrete cosine transform (DCT) have been developed recently. This paper presents a new method for computing the discrete cosine transform and its inverse using Bakhvalov's algorithm, a method developed for evaluation of a polynomial at a point. In this paper, we will focus on both the application of the algorithm to the computation of the DCT-I and its complexity. In addition, Bakhvalov's algorithm is compared with Clenshaw's algorithm for the computation of the DCT.
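
    For reference, the DCT-I being computed is X_k = x_0 + (-1)^k x_{N-1} + 2 * sum_{n=1}^{N-2} x_n cos(pi*n*k/(N-1)). The sketch below evaluates this definition directly and checks it against SciPy; it shows the target transform in naive O(N^2) form, not Bakhvalov's polynomial-evaluation scheme itself.

        import numpy as np
        from scipy.fft import dct

        def dct1_direct(x):
            """Direct O(N^2) evaluation of the (unnormalized) DCT-I."""
            N = len(x)
            k = np.arange(N)[:, None]
            n = np.arange(1, N - 1)[None, :]
            inner = 2.0 * np.cos(np.pi * k * n / (N - 1)) @ x[1:-1]
            return x[0] + ((-1.0) ** k[:, 0]) * x[-1] + inner

        x = np.random.default_rng(1).standard_normal(8)
        print(np.allclose(dct1_direct(x), dct(x, type=1)))     # True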

  6. A model-based 3D template matching technique for pose acquisition of an uncooperative space object.

    PubMed

    Opromolla, Roberto; Fasano, Giancarmine; Rufino, Giancarlo; Grassi, Michele

    2015-03-16

    This paper presents a customized three-dimensional template matching technique for autonomous pose determination of uncooperative targets. This topic is relevant to advanced space applications, like active debris removal and on-orbit servicing. The proposed technique is model-based and produces estimates of the target pose without any prior pose information, by processing three-dimensional point clouds provided by a LIDAR. These estimates are then used to initialize a pose tracking algorithm. Peculiar features of the proposed approach are the use of a reduced number of templates and the idea of building the database of templates on-line, thus significantly reducing the amount of on-board stored data with respect to traditional techniques. An algorithm variant is also introduced, aimed at further accelerating pose acquisition and reducing the computational cost. The technique's performance is investigated within a realistic numerical simulation environment comprising a target model, LIDAR operation and various target-chaser relative dynamics scenarios relevant to close-proximity flight operations. Specifically, the capability of the proposed techniques to provide a pose solution suitable for initializing the tracking algorithm is demonstrated, as well as their robustness against highly variable pose conditions determined by the relative dynamics. Finally, a criterion for autonomous failure detection of the presented techniques is given.

  7. A Sensor-Aided H.264/AVC Video Encoder for Aerial Video Sequences with In-the-Loop Metadata Correction

    NASA Astrophysics Data System (ADS)

    Cicala, L.; Angelino, C. V.; Ruatta, G.; Baccaglini, E.; Raimondo, N.

    2015-08-01

    Unmanned Aerial Vehicles (UAVs) are often employed to collect high-resolution images in order to perform image mosaicking and/or 3D reconstruction. Images are usually stored on board and then processed with on-ground desktop software. In this way the computational load, and hence the power consumption, is moved to the ground, leaving on board only the task of storing data. Such an approach is important for small multi-rotorcraft UAVs because of their low endurance due to short battery life. Images can be stored on board with either still-image or video data compression. Still-image systems are preferred when low frame rates are involved, because video coding systems are based on motion estimation and compensation algorithms which fail when the motion vectors are significantly long and when the overlap between subsequent frames is very small. In this scenario, UAV attitude and position metadata from the Inertial Navigation System (INS) can be employed to estimate global motion parameters without video analysis. A low-complexity image analysis can still be performed in order to refine the motion field estimated using only the metadata. In this work, we propose to use this refinement step to improve the position and attitude estimates produced by the navigation system, in order to maximize encoder performance. Experiments are performed on both simulated and real-world video sequences.

  8. An FTIR point sensor for identifying chemical WMD and hazardous materials

    NASA Astrophysics Data System (ADS)

    Norman, Mark L.; Gagnon, Aaron M.; Reffner, John A.; Schiering, David W.; Allen, Jeffrey D.

    2004-03-01

    A new point sensor for identifying chemical weapons of mass destruction and other hazardous materials based on Fourier transform infrared (FT-IR) spectroscopy is presented. The sensor is a portable, fully functional FT-IR system that features a miniaturized Michelson interferometer, an integrated diamond attenuated total reflection (ATR) sample interface, and an embedded on-board computer. Samples are identified by an automated search algorithm that compares their infrared spectra to digitized databases that include reference spectra of nerve and blister agents, toxic industrial chemicals, and other hazardous materials. The hardware and software are designed for use by technicians with no background in infrared spectroscopy. The unit, which is fully self-contained, can be hand-carried and used in a hot zone by personnel in Level A protective gear, and subsequently decontaminated by spraying or immersion. Wireless control by a remote computer is also possible. Details of the system design and performance, including results of field validation tests, are discussed.
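
    The automated search algorithm compares a measured spectrum against digitized reference libraries. A minimal correlation-based library search, of the general kind such systems use, is sketched below; the library is an assumed dictionary of reference spectra, not the instrument's actual database or matching metric.

        import numpy as np

        def best_match(spectrum, library):
            """Return (name, score) of the library entry most correlated with the query.
            spectrum: 1D absorbance array; library: dict of name -> same-length arrays."""
            s = (spectrum - spectrum.mean()) / spectrum.std()
            scores = {}
            for name, ref in library.items():
                r = (ref - ref.mean()) / ref.std()
                scores[name] = float(np.dot(s, r) / len(s))    # Pearson correlation
            name = max(scores, key=scores.get)
            return name, scores[name]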

  9. Passive range estimation for rotorcraft low-altitude flight

    NASA Technical Reports Server (NTRS)

    Sridhar, B.; Suorsa, R.; Hussien, B.

    1991-01-01

    The automation of rotorcraft low-altitude flight presents challenging problems in control, computer vision and image understanding. A critical element in this problem is the ability to detect and locate obstacles, using on-board sensors, and modify the nominal trajectory. This requirement is also necessary for the safe landing of an autonomous lander on Mars. This paper examines some of the issues in the location of objects using a sequence of images from a passive sensor, and describes a Kalman filter approach to estimate the range to obstacles. The Kalman filter is also used to track features in the images leading to a significant reduction of search effort in the feature extraction step of the algorithm. The method can compute range for both straight line and curvilinear motion of the sensor. A laboratory experiment was designed to acquire a sequence of images along with sensor motion parameters under conditions similar to helicopter flight. Range estimation results using this imagery are presented.
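
    The feature-tracking role of the Kalman filter, predicting where a feature will reappear so that the image search can be confined to a small window, can be sketched with a constant-velocity model on one image coordinate. The noise levels and 3-sigma gate below are illustrative assumptions, not the paper's actual filter design.

        import numpy as np

        F = np.array([[1.0, 1.0], [0.0, 1.0]])      # state: [pixel position, pixel velocity]
        H = np.array([[1.0, 0.0]])                  # only position is measured
        Q = np.diag([0.5, 0.1])                     # process noise (illustrative)
        R = np.array([[2.0]])                       # measurement noise, pixels^2 (illustrative)

        def kf_step(x, P, z):
            """One predict/update cycle; also returns the +/- search window in pixels."""
            x, P = F @ x, F @ P @ F.T + Q                      # predict next feature position
            window = 3.0 * float(np.sqrt(P[0, 0] + R[0, 0]))   # 3-sigma search region
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + (K @ (np.atleast_1d(z) - H @ x)).ravel()   # update with the found feature
            P = (np.eye(2) - K @ H) @ P
            return x, P, window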

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poivey, C.; Notebaert, O.; Garnier, P.

    The ARIANE5 On Board Computer (OBC) and Inertial Reference System (SRI) are based on the Motorola MC68020 processor and MC68882 coprocessor. The SRI data acquisition board also uses the DSP TMS320C25 from Texas Instruments. These devices were characterized for proton-induced SEUs, but the representativeness of SEU test results on processors was questioned during ARIANE5 studies. Proton tests of these devices were therefore also performed in the actual equipment running flight software (or software representative of it). The results show that the On Board Computer and the Inertial Reference System can satisfy the requirements of the ARIANE5 missions.

  11. Smartphones as image processing systems for prosthetic vision.

    PubMed

    Zapf, Marc P; Matteucci, Paul B; Lovell, Nigel H; Suaning, Gregg J

    2013-01-01

    The feasibility of implants for prosthetic vision has been demonstrated by research and commercial organizations. In most devices, an essential forerunner to the internal stimulation circuit is an external electronics solution for capturing, processing and relaying image information as well as extracting useful features from the scene surrounding the patient. The capabilities and multitude of image processing algorithms that can be performed by the device in real time play a major part in the final quality of the prosthetic vision. It is therefore optimal to use powerful hardware while avoiding bulky, straining solutions. Recent publications have reported on portable single-board computers fast enough for computationally intensive image processing. Following the rapid evolution of commercial, ultra-portable ARM (Advanced RISC machine) mobile devices, the authors investigated the feasibility of modern smartphones running complex face detection as external processing devices for vision implants. The role of dedicated graphics processors in speeding up computation was evaluated while performing a demanding noise reduction algorithm (image denoising). The time required for face detection was found to decrease by 95% from 2.5-year-old to recent devices. In denoising, graphics acceleration played a major role, speeding up denoising by a factor of 18. These results demonstrate that the technology has matured sufficiently to be considered a valid external electronics platform for visual prosthetic research.

  12. A class of parallel algorithms for computation of the manipulator inertia matrix

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Bejczy, Antal K.

    1989-01-01

    Parallel and parallel/pipeline algorithms for computation of the manipulator inertia matrix are presented. An algorithm based on the composite rigid-body spatial inertia method, which provides better features for parallelization, is used for the computation of the inertia matrix. Two parallel algorithms are developed which achieve the time lower bound in computation. Also described is the mapping of these algorithms, with topological variation, onto a two-dimensional processor array with nearest-neighbor connections, and, with cardinality variation, onto a linear processor array. An efficient parallel/pipeline algorithm for the linear array was also developed, achieving significantly higher efficiency.

  13. Enhancement of the Computer Lumber Grading Program to Support Polygonal Defects

    Treesearch

    Powsiri Klinkhachorn; R. Kathari; D. Yost; Philip A. Araman

    1993-01-01

    Computer grading of hardwood lumber promises to avoid regrading of the same lumber because of disagreements between the buyer and the seller. However, the first generation of computer programs for hardwood lumber grading simplify the process by modeling defects on the board as rectangles. This speeds up the grading process but can inadvertently put a board into a lower...

  14. Some implications of remanufacturing hardwood lumber

    Treesearch

    Charles J. Gatchell; R. Edward Thomas; Elizabeth S. Walker

    2000-01-01

    Research on several hundred well-manufactured 1 and 2A Common red oak boards shows that better edging and/or trimming or division-based remanufacturing can produce boards of higher grade and value. Division-based remanufacturing divides a board into as many as four smaller boards. The UGRS computer program (3,4) grades digitized boards, examines their remanufacturing...

  15. On-board Payload Data Processing from Earth to Space Segment

    NASA Astrophysics Data System (ADS)

    Tragni, M.; Abbattista, C.; Amoruso, L.; Cinquepalmi, L.; Bgongiari, F.; Errico, W.

    2013-09-01

    Matching user application requirements with the ever-larger data streams of satellite missions is becoming very complex, yet both must be served. To address both data management (memory availability) and transmission (bandwidth availability), many recent R&D activities are studying how to move data processing from the ground segment to the space segment through the development of so-called On-board Payload Data Processing (OPDP). Space designers are seeking new strategies to increase on-board computation capacity and its viability in order to overcome these memory and bandwidth limitations, focusing the transmission of remote sensing information (not only data) towards its final use. Typical applications which can benefit from on-board payload data processing include the automatic control of a satellite constellation, which can modify its scheduled acquisitions directly on board according to the information extracted from just-acquired data, increasing, for example, the capability of monitoring a specific objective (such as oil spills or illegal traffic) with greater versatility than a traditional ground-segment workflow. The authors and their companies have sound experience in the design and development of open, modular and compact on-board processing systems. They are currently involved in a program, the Space Payload Data Processing (SpacePDP), whose main objective is to develop a hardware and software framework able to perform both the standard tasks of a space mission (sensor control, mass storage device management, uplink and downlink) and the specific tasks required by each mission. SpacePDP is an open and modular payload data processing system, composed of hardware and software modules and including an SDK. The whole system is characterised by flexible and customizable building blocks that form the system architecture, and by easy integration into missions through the SDK (a development environment with encapsulated low-level drivers, HW support and a testing environment). Furthermore, SpacePDP presents an advanced processing system that can be adopted both as an on-board module for EO spacecraft and for extra-planetary exploration rovers. The main innovative aspects are: • HW and SW modularity - scalability for the Payload Data Processing and AOC S/S • Complex processing capabilities fully available on board (on spacecraft or rovers) • Reduced effort in mission SW design, implementation, verification and validation tasks • HW abstraction level comparable to present multitasking Unix-like systems, allowing SW and algorithm re-use (also from available GS applications). The development approach of SpacePDP is based both on re-use and resource sharing, with flexible elements adjustable to different missions and to different tasks within the same mission (e.g. shared between AOCS and data management S/S), and on strong specialization of the system elements, which are designed to satisfy specific mission needs and specific technological innovations. The processing system has been proven in many possible scenarios of use, from a standard compression task up to the most complex one, image classification directly on board. The first is useful for standard benchmark trade-off analysis of HW and SW capabilities with respect to the other common processing modules.
    Classification is the system's more ambitious objective: processing data from the sensor directly on board (by down-sampling or in a non-full-resolution acquisition modality if necessary) to detect at flight time features on the ground or observed phenomena. For Earth applications this could be cloud coverage (to abort the acquisition and discard the data), burning areas, vessel detection and the like. On planetary or universe exploration missions it could be path recognition for a rover, or high-energy events in distant galaxies. Sometimes the GS algorithms need to be reviewed to approach the problem in the space scenario; for Synthetic Aperture Radar (SAR) applications, for instance, the typical focusing of the raw image needs to be improved to be effective in this context. Many works are available on this subject; the authors have developed a specific one for neural network algorithms. With information directly "acquired" (that is, computed) on board, without the intervention of typical ground system facilities, the spacecraft can autonomously take decisions regarding re-planning of acquisitions for itself (in high-performance modalities) or for other platforms in the constellation or affiliated with it, reducing the elapsed time compared with the present approach. For non-EO missions, reducing the long round-trip transmission is a major advantage. In general, the saving of resources extends to memory and RF transmission band, reaction time (as in civil protection applications), etc., enlarging the flexibility of missions and improving the final results. SpacePDP main HW and SW characteristics: • Compactness: the size and weight of each module fit a Eurocard 3U 8HP format, with «Inter-Board» connection through a cPCI peripheral bus. • Modularity: the payload is usually composed of several sub-systems. • Flexibility: the coprocessor FPGA, on-board memory and supported avionic protocols are flexible, allowing customization of modules according to mission needs. • Completeness: the two core boards (CPU and Companion) are enough to obtain a first complete payload data processing system in a basic configuration. • Integrability: the payload data processing system is open to accepting custom modules connected on its open peripheral bus. • CPU HW module (one or more) based on a RISC processor (LEON2FT, a SPARC V8 architecture, 80 Mips @100MHz on ASIC ATMEL AT697F). • DSP HW module (optional, with more instances) based on a dedicated FPGA architecture to ensure effective multitasking control and to offer high numerical computation with large memory availability. • Real-time OS RTEMS and SW libraries (with C/C++ external interfaces) acting as a HW abstraction level. • SDK with a development environment, a tool chain and an integrated graphical user interface. • "Callbacks" management and support for HW events (interrupts, timers, ...), including external devices (via SpaceWire), and priority definition and management. • Large amount of volatile memory on the CPU board (80 Mb SRAM and 2 Gb SDR-SDRAM), plus non-volatile memory (64 Mb Flash and up to 2 Mb EEPROM). • Remote programmability of the LEON bootable code. • Debug access point: for software debug and tuning with the LEON serial port (DSU) or for «in flight» monitoring via SpaceWire-RMAP.
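
    The simplest instance of the on-board screening described above is estimating cloud cover on a down-sampled tile and discarding the acquisition when coverage is too high. The sketch below illustrates that idea only; the brightness threshold and decision rule are assumptions, not the framework's actual classifier.

        import numpy as np

        def keep_acquisition(tile, bright_thresh=0.8, max_cloud_frac=0.4):
            """tile: 2D array of reflectances in [0, 1]; False means discard on board."""
            cloud_frac = float(np.mean(tile > bright_thresh))  # crude cloud-cover estimate
            return cloud_frac <= max_cloud_frac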

  16. Impact of the Shodan Computer Search Engine on Internet-facing Industrial Control System Devices

    DTIC Science & Technology

    2014-03-27

    bridge implementation. The transparent bridge is designed using a Raspberry Pi configured with Linux IPtables and bridge-utils to bridge the on board...Ethernet card and a second USB Ethernet adapter. A Raspberry Pi is a credit-card-sized single-board computer running a version of Debian Linux. There

  17. Ares I-X Best Estimated Trajectory Analysis and Results

    NASA Technical Reports Server (NTRS)

    Karlgaard, Christopher D.; Beck, Roger E.; Starr, Brett R.; Derry, Stephen D.; Brandon, Jay; Olds, Aaron D.

    2011-01-01

    The Ares I-X trajectory reconstruction produced best estimated trajectories of the flight test vehicle ascent through stage separation, and of the first and upper stage entries after separation. The trajectory reconstruction process combines on-board, ground-based, and atmospheric measurements to produce the trajectory estimates. The Ares I-X vehicle had a number of on-board and ground based sensors that were available, including inertial measurement units, radar, air-data, and weather balloons. However, due to problems with calibrations and/or data, not all of the sensor data were used. The trajectory estimate was generated using an Iterative Extended Kalman Filter algorithm, which is an industry standard processing algorithm for filtering and estimation applications. This paper describes the methodology and results of the trajectory reconstruction process, including flight data preprocessing and input uncertainties, trajectory estimation algorithms, output transformations, and comparisons with preflight predictions.
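
    The Iterated Extended Kalman Filter named here refines the linearization point during each measurement update. A minimal sketch of that update follows; the measurement function h, its Jacobian, and the noise covariance are assumed inputs (e.g., radar or air-data models), not the actual Ares I-X models.

        import numpy as np

        def iekf_update(x_pred, P, z, h, H_jac, R, iters=5):
            """Iterated EKF measurement update: relinearize h about each new iterate."""
            x = x_pred.copy()
            for _ in range(iters):
                H = H_jac(x)                                   # Jacobian at current iterate
                S = H @ P @ H.T + R
                K = P @ H.T @ np.linalg.inv(S)
                x = x_pred + K @ (z - h(x) - H @ (x_pred - x))
            P = (np.eye(len(x)) - K @ H) @ P                   # covariance after final iterate
            return x, P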

  18. Ares I-X Best Estimated Trajectory and Comparison with Pre-Flight Predictions

    NASA Technical Reports Server (NTRS)

    Karlgaard, Christopher D.; Beck, Roger E.; Derry, Stephen D.; Brandon, Jay M.; Starr, Brett R.; Tartabini, Paul V.; Olds, Aaron D.

    2011-01-01

    The Ares I-X trajectory reconstruction produced best estimated trajectories of the flight test vehicle ascent through stage separation, and of the first and upper stage entries after separation. The trajectory reconstruction process combines on-board, ground-based, and atmospheric measurements to produce the trajectory estimates. The Ares I-X vehicle had a number of on-board and ground based sensors that were available, including inertial measurement units, radar, air-data, and weather balloons. However, due to problems with calibrations and/or data, not all of the sensor data were used. The trajectory estimate was generated using an Iterative Extended Kalman Filter algorithm, which is an industry standard processing algorithm for filtering and estimation applications. This paper describes the methodology and results of the trajectory reconstruction process, including flight data preprocessing and input uncertainties, trajectory estimation algorithms, output transformations, and comparisons with preflight predictions.

  19. A Novel Online Data-Driven Algorithm for Detecting UAV Navigation Sensor Faults.

    PubMed

    Sun, Rui; Cheng, Qi; Wang, Guanyu; Ochieng, Washington Yotto

    2017-09-29

    The use of Unmanned Aerial Vehicles (UAVs) has increased significantly in recent years. On-board integrated navigation sensors are a key component of UAVs' flight control systems and are essential for flight safety. In order to ensure flight safety, timely and effective navigation sensor fault detection capability is required. In this paper, a novel data-driven Adaptive Neuro-Fuzzy Inference System (ANFIS)-based approach is presented for the detection of on-board navigation sensor faults in UAVs. Contrary to classic UAV sensor fault detection algorithms, which are based on predefined or modelled faults, the proposed algorithm combines an online data training mechanism with the ANFIS-based decision system. The main advantages of this algorithm are that it allows real-time, model-free residual analysis from Kalman Filter (KF) estimates and the ANFIS to build a reliable fault detection system, and that it allows fast and accurate detection of faults, which makes it suitable for real-time applications. Experimental results have demonstrated the effectiveness of the proposed fault detection method in terms of accuracy and misdetection rate.

  20. Vehicle dynamic analysis using neuronal network algorithms

    NASA Astrophysics Data System (ADS)

    Oloeriu, Florin; Mocian, Oana

    2014-06-01

    Theoretical developments in certain engineering areas, the emergence of new and more precise investigation tools, and their implementation on board everyday vehicles are the main factors influencing the theoretical and experimental study of a vehicle's dynamic behavior. The implementation of these new technologies in vehicle construction has led to more and more complex systems. Some of the most important, such as the electronic control of engine, transmission, suspension, steering, braking and traction, have had a positive impact on the vehicle's dynamic behavior. The existence of CPUs on board vehicles allows data acquisition and storage, and leads to a more accurate experimental and theoretical study of vehicle dynamics, using information offered directly by the built-in elements of the electronic control systems. The technical literature on vehicle dynamics is focused almost entirely on parametric analysis. This kind of approach adopts two simplifying assumptions: that functional parameters obey distribution laws known from classical statistics, and that the mathematical models are known in advance, with coefficients that are not time-dependent. Neither assumption is confirmed in real situations: the functional parameters do not follow known statistical distribution laws, and the mathematical models are not known in advance, contain families of parameters, and are mostly time-dependent. The purpose of the paper is to present a more accurate analysis methodology for studying a vehicle's dynamic behavior. A method that provides non-parametric mathematical models of the vehicle's dynamic behavior, with time-dependent coefficients, relies on neural networks. Neural networks are widely used in various types of system control, serving as a non-linear process identification algorithm. Their common use for non-linear processes is justified by the fact that both have the ability to self-organize, which is why neural networks best exemplify intelligent systems; the word 'neural' evokes the biological neuron cell. The paper presents how to better interpret data fed from the on-board computer and a new way of processing that data to better model the real-life dynamic behavior of the vehicle.

  1. Medical Device Plug-and-Play Interoperability Standards and Technology Leadership

    DTIC Science & Technology

    2012-10-01

    External Network Pump Adapter PulseOx Adapter • MD MP3 cart is a platform for the development of smart pump control algorithms • It includes...delivery with bounded latency Medical Device Mobile PnP Prototype Platform (MD MP3 ) • Got MDCF code to run on the BeagleBoard development boards we are

  2. Edge Pushing is Equivalent to Vertex Elimination for Computing Hessians

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Mu; Pothen, Alex; Hovland, Paul

    We prove the equivalence of two different Hessian evaluation algorithms in AD. The first is the Edge Pushing algorithm of Gower and Mello, which may be viewed as a second-order Reverse mode algorithm for computing the Hessian. In earlier work, we have derived the Edge Pushing algorithm by exploiting a Reverse mode invariant based on the concept of live variables in compiler theory. The second algorithm is based on eliminating vertices in a computational graph of the gradient, in which intermediate variables are successively eliminated from the graph, and the weights of the edges are updated suitably. We prove that if the vertices are eliminated in a reverse topological order while preserving symmetry in the computational graph of the gradient, then the Vertex Elimination algorithm and the Edge Pushing algorithm perform identical computations. In this sense, the two algorithms are equivalent. This insight, which unifies two seemingly disparate approaches to Hessian computations, could lead to improved algorithms and implementations for computing Hessians. Read More: http://epubs.siam.org/doi/10.1137/1.9781611974690.ch11

  3. Common Board Design for the OBC I/O Unit and The OBC CCSDS Unit of The Stuttgart University Satellite "Flying Laptop"

    NASA Astrophysics Data System (ADS)

    Eickhoff, Jens; Cook, Barry; Walker, Paul; Habinc, Sadi; Witt, Rouven; Roser, Hans-Peter

    2011-08-01

    As already published in another paper at DASIA 2010 in Budapest [1], the University of Stuttgart, Germany, is developing an advanced 3-axis stabilized small satellite applying industry standards for command/control techniques, onboard software design and onboard computer components. The satellite has a launch mass of approx. 120 kg and is foreseen to be launched end 2013 as a piggy-back payload on an Indian PSLV launcher. During phase C, the main challenge was the conceptual design of an ultra-compact, high-performance onboard computer (OBC) able to support an industry-standard operating system, a PUS-standard-based onboard software (OBSW) and CCSDS-standard-based ground/space communication. The developed architecture is based on 4 main elements (see [1] and Figure 4): • the OBC core board (single board computer based on the LEON3 FT architecture), • an I/O board for all OBC digital interfaces to S/C equipment, • a CCSDS TC/TM pre-processor board, • a CPDU embedded in the PCDU. The EM for the OBC core has meanwhile been shipped to the University by the supplier Aeroflex Colorado Springs, USA, and has been in use in Stuttgart since January 2011. Figure 2 and Figure 3 provide brief impressions. This paper concentrates on the common design of the I/O board and the CCSDS processor boards.

  4. Automated mixed traffic transit vehicle microprocessor controller

    NASA Technical Reports Server (NTRS)

    Marks, R. A.; Cassell, P.; Johnston, A. R.

    1981-01-01

    An improved Automated Mixed Traffic Vehicle (AMTV) speed control system employing a microprocessor and a transistor chopper motor current controller is described, and its performance is presented in terms of velocity-versus-time curves. The on-board computer hardware and software systems are described, as is the software development system. All of the programming used in this controller was implemented in FORTRAN. This microprocessor controller made possible a number of safety features and improved the comfort associated with starting and stopping. In addition, most of the vehicle's performance characteristics can be altered by simple program parameter changes. A failure analysis of the microprocessor controller was generated and the results are included. Flow diagrams for the speed control algorithms and complete FORTRAN code listings are also included.

  5. A compressed sensing X-ray camera with a multilayer architecture

    DOE PAGES

    Wang, Zhehui; Laroshenko, O.; Li, S.; ...

    2018-01-25

    Recent advances in compressed sensing theory and algorithms offer new possibilities for high-speed X-ray camera design. In many CMOS cameras, each pixel has an independent on-board circuit that includes an amplifier, noise rejection, signal shaper, an analog-to-digital converter (ADC), and optional in-pixel storage. When X-ray images are sparse, i.e., when one of the following cases is true: (a.) the number of pixels with true X-ray hits is much smaller than the total number of pixels; (b.) the X-ray information is redundant; or (c.) some prior knowledge about the X-ray images exists, sparse sampling may be allowed. In this work, we first illustrate the feasibility of random on-board pixel sampling (ROPS) using an existing set of X-ray images, followed by a discussion about signal to noise as a function of pixel size. Next, we describe a possible circuit architecture to achieve random pixel access and in-pixel storage. The combination of a multilayer architecture, sparse on-chip sampling, and computational image techniques is expected to facilitate the development and applications of high-speed X-ray camera technology.

  6. A compressed sensing X-ray camera with a multilayer architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Zhehui; Laroshenko, O.; Li, S.

    Recent advances in compressed sensing theory and algorithms offer new possibilities for high-speed X-ray camera design. In many CMOS cameras, each pixel has an independent on-board circuit that includes an amplifier, noise rejection, signal shaper, an analog-to-digital converter (ADC), and optional in-pixel storage. When X-ray images are sparse, i.e., when one of the following cases is true: (a.) the number of pixels with true X-ray hits is much smaller than the total number of pixels; (b.) the X-ray information is redundant; or (c.) some prior knowledge about the X-ray images exists, sparse sampling may be allowed. In this work, we first illustrate the feasibility of random on-board pixel sampling (ROPS) using an existing set of X-ray images, followed by a discussion about signal to noise as a function of pixel size. Next, we describe a possible circuit architecture to achieve random pixel access and in-pixel storage. The combination of a multilayer architecture, sparse on-chip sampling, and computational image techniques is expected to facilitate the development and applications of high-speed X-ray camera technology.

  7. Design and analysis of advanced flight planning concepts

    NASA Technical Reports Server (NTRS)

    Sorensen, John A.

    1987-01-01

    The objectives of this continuing effort are to develop and evaluate new algorithms and advanced concepts for flight management and flight planning. This includes the minimization of fuel or direct operating costs, the integration of the airborne flight management and ground-based flight planning processes, and the enhancement of future traffic management systems design. Flight management (FMS) concepts are for on-board profile computation and steering of transport aircraft in the vertical plane between a city pair and along a given horizontal path. Flight planning (FPS) concepts are for the pre-flight ground based computation of the three-dimensional reference trajectory that connects the city pair and specifies the horizontal path, fuel load, and weather profiles for initializing the FMS. As part of these objectives, a new computer program called EFPLAN has been developed and utilized to study advanced flight planning concepts. EFPLAN represents an experimental version of an FPS. It has been developed to generate reference flight plans compatible as input to an FMS and to provide various options for flight planning research. This report describes EFPLAN and the associated research conducted in its development.

  8. Spatiotemporal Local-Remote Sensor Fusion (ST-LRSF) for Cooperative Vehicle Positioning

    PubMed Central

    Bhawiyuga, Adhitya

    2018-01-01

    Vehicle positioning plays an important role in the design of protocols, algorithms, and applications in the intelligent transport systems. In this paper, we present a new framework of spatiotemporal local-remote sensor fusion (ST-LRSF) that cooperatively improves the accuracy of absolute vehicle positioning based on two state estimates of a vehicle in the vicinity: a local sensing estimate, measured by the on-board exteroceptive sensors, and a remote sensing estimate, received from neighbor vehicles via vehicle-to-everything communications. Given both estimates of vehicle state, the ST-LRSF scheme identifies the set of vehicles in the vicinity, determines the reference vehicle state, proposes a spatiotemporal dissimilarity metric between two reference vehicle states, and presents a greedy algorithm to compute a minimal weighted matching (MWM) between them. Given the outcome of MWM, the theoretical position uncertainty of the proposed refinement algorithm is proven to be inversely proportional to the square root of matching size. To further reduce the positioning uncertainty, we also develop an extended Kalman filter model with the refined position of ST-LRSF as one of the measurement inputs. The numerical results demonstrate that the proposed ST-LRSF framework can achieve high positioning accuracy for many different scenarios of cooperative vehicle positioning. PMID:29617341
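
    The greedy minimum weighted matching step pairs local-sensing and remote-sensing states by increasing dissimilarity, never reusing a state. A sketch is below; the dissimilarity function and cost gate are assumed inputs standing in for the paper's spatiotemporal metric.

        def greedy_mwm(local_states, remote_states, dissim, max_cost=5.0):
            """Greedily match local and remote vehicle-state estimates by lowest cost."""
            pairs = sorted(
                (dissim(a, b), i, j)
                for i, a in enumerate(local_states)
                for j, b in enumerate(remote_states)
            )
            used_i, used_j, matching = set(), set(), []
            for cost, i, j in pairs:
                if cost > max_cost:
                    break                      # remaining pairs are too dissimilar to match
                if i not in used_i and j not in used_j:
                    matching.append((i, j, cost))
                    used_i.add(i)
                    used_j.add(j)
            return matching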

  9. High-Rate Digital Receiver Board

    NASA Technical Reports Server (NTRS)

    Ghuman, Parminder; Bialas, Thomas; Brambora, Clifford; Fisher, David

    2004-01-01

    A high-rate digital receiver (HRDR) implemented as a peripheral component interface (PCI) board has been developed as a prototype of compact, general-purpose, inexpensive, potentially mass-producible data-acquisition interfaces between telemetry systems and personal computers. The installation of this board in a personal computer together with an analog preprocessor enables the computer to function as a versatile, highrate telemetry-data-acquisition and demodulator system. The prototype HRDR PCI board can handle data at rates as high as 600 megabits per second, in a variety of telemetry formats, transmitted by diverse phase-modulation schemes that include binary phase-shift keying and various forms of quadrature phaseshift keying. Costing less than $25,000 (as of year 2003), the prototype HRDR PCI board supplants multiple racks of older equipment that, when new, cost over $500,000. Just as the development of standard network-interface chips has contributed to the proliferation of networked computers, it is anticipated that the development of standard chips based on the HRDR could contribute to reductions in size and cost and increases in performance of telemetry systems.

  10. A PC-Based Controller for Dextrous Arms

    NASA Technical Reports Server (NTRS)

    Fiorini, Paolo; Seraji, Homayoun; Long, Mark

    1996-01-01

    This paper describes the architecture and performance of a PC-based controller for 7-DOF dextrous manipulators. The computing platform is a 486-based personal computer equipped with a bus extender to access the robot Multibus controller, together with a single board computer as the graphical engine, and with a parallel I/O board to interface with a force-torque sensor mounted on the manipulator wrist.

  11. Human Centered Design and Development for NASA's MerBoard

    NASA Technical Reports Server (NTRS)

    Trimble, Jay

    2003-01-01

    This viewgraph presentation provides an overview of the design and development process for NASA's MerBoard. These devices are large interactive display screens which can be shown on the user's computer, which will allow scientists in many locations to interpret and evaluate mission data in real-time. These tools are scheduled to be used during the 2003 Mars Exploration Rover (MER) expeditions. Topics covered include: mission overview, Mer Human Centered Computers, FIDO 2001 observations and MerBoard prototypes.

  12. CACTUS: Calculator and Computer Technology User Service.

    ERIC Educational Resources Information Center

    Hyde, Hartley

    1998-01-01

    Presents an activity in which students use computer-based spreadsheets to find out how much grain should be added to a chess board when a grain of rice is put on the first square, the amount is doubled for the next square, and the chess board is covered. (ASK)
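
    The arithmetic behind the activity: with the amount doubling square by square, the board holds 1 + 2 + 4 + ... + 2^63 = 2^64 - 1 grains. A two-line check of what the spreadsheet computes:

        # Total grains after doubling across all 64 squares of the chess board.
        total = sum(2**square for square in range(64))
        print(total)    # 18446744073709551615, i.e. 2**64 - 1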

  13. A survey of the state-of-the-art and focused research in range systems, task 1

    NASA Technical Reports Server (NTRS)

    Omura, J. K.

    1986-01-01

    This final report presents the latest research activity in voice compression. We have designed a non-real-time simulation system implemented around the IBM-PC, which is used as a speech workstation for data acquisition and analysis of voice samples. A real-time implementation is also proposed. This real-time Voice Compression Board (VCB) is built around the Texas Instruments TMS-3220. The voice compression algorithm investigated here was described in an earlier report by the author, titled "Low Cost Voice Compression for Mobile Digital Radios"; we assume the reader is familiar with the voice compression algorithm discussed in that report. The VCB compresses speech waveforms at data rates ranging from 4.8 K bps to 16 K bps. The board interfaces to the IBM-PC 8-bit bus and plugs into a single expansion slot on the mother board.

  14. A model-based approach for detection of runways and other objects in image sequences acquired using an on-board camera

    NASA Technical Reports Server (NTRS)

    Kasturi, Rangachar; Devadiga, Sadashiva; Tang, Yuan-Liang

    1994-01-01

    This research was initiated as a part of the Advanced Sensor and Imaging System Technology (ASSIST) program at NASA Langley Research Center. The primary goal of this research is the development of image analysis algorithms for the detection of runways and other objects using an on-board camera. Initial effort was concentrated on images acquired using a passive millimeter wave (PMMW) sensor. The images obtained using PMMW sensors under poor visibility conditions due to atmospheric fog are characterized by very low spatial resolution but good image contrast compared to those images obtained using sensors operating in the visible spectrum. Algorithms developed for analyzing these images using a model of the runway and other objects are described in Part 1 of this report. Experimental verification of these algorithms was limited to a sequence of images simulated from a single frame of PMMW image. Subsequent development and evaluation of algorithms was done using video image sequences. These images have better spatial and temporal resolution compared to PMMW images. Algorithms for reliable recognition of runways and accurate estimation of spatial position of stationary objects on the ground have been developed and evaluated using several image sequences. These algorithms are described in Part 2 of this report. A list of all publications resulting from this work is also included.

  15. SU-E-J-246: A Deformation-Field Map Based Liver 4D CBCT Reconstruction Method Using Gold Nanoparticles as Constraints

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harris, W; Zhang, Y; Ren, L

    2014-06-01

    Purpose: To investigate the feasibility of using nanoparticle markers to validate liver tumor motion together with a deformation-field-map-based four-dimensional (4D) cone-beam computed tomography (CBCT) reconstruction method. Methods: A technique for lung 4D-CBCT reconstruction has been previously developed using a deformation field map (DFM)-based strategy. In this method, each phase of the 4D-CBCT is considered as a deformation of a prior CT volume. The DFM is solved by a motion modeling and free-form deformation (MM-FD) technique, using a data fidelity constraint and deformation energy minimization. For liver imaging, a liver tumor has low contrast in on-board projections. A validation of liver tumor motion using implanted gold nanoparticles, along with the MM-FD deformation technique, is implemented to reconstruct on-board 4D-CBCT liver radiotherapy images. These nanoparticles were placed around the liver tumor to reflect the tumor positions in both CT simulation and on-board image acquisition. When reconstructing each phase of the 4D-CBCT, the migrations of the gold nanoparticles act as a constraint to regularize the deformation field, along with the data fidelity and energy minimization constraints. In this study, multiple tumor diameters and positions were simulated within the liver for on-board 4D-CBCT imaging. The on-board 4D-CBCT reconstructed by the proposed method was compared with the "ground truth" image. Results: The preliminary data, which use the reconstruction for lung radiotherapy, suggest that the advanced reconstruction algorithm including the gold nanoparticle constraint will result in volume percentage differences (VPD) between lesions in images reconstructed by MM-FD and "ground truth" on-board images of 11.5% (± 9.4%) and a center-of-mass shift of 1.3 mm (± 1.3 mm) for liver radiotherapy. Conclusion: The advanced MM-FD technique enforcing the additional constraints from gold nanoparticles results in improved accuracy for reconstructing on-board 4D-CBCT of liver tumors. Varian Medical Systems research grant.

  16. Sensitivity Analysis of ProSEDS (Propulsive Small Expendable Deployer System) Data Communication System

    NASA Technical Reports Server (NTRS)

    Park, Nohpill; Reagan, Shawn; Franks, Greg; Jones, William G.

    1999-01-01

    This paper discusses analytical approaches to evaluating the performance of spacecraft on-board computing systems, with the ultimate goal of a reliable spacecraft data communications system. The sensitivity analysis approach for the memory system on ProSEDS (Propulsive Small Expendable Deployer System), as part of its data communication system, is investigated. General issues and possible approaches to a reliable spacecraft on-board interconnection network and processor array are also shown. Performance issues of spacecraft on-board computing systems, such as sensitivity, throughput, delay and reliability, are introduced and discussed.

  17. A Functional Description of the Geophysical Data Acquisition System

    DTIC Science & Technology

    1990-08-10

    less than 50 SPS nor greater than 250 SPS 3.0 SENSORS/TRANSDUCERS 3.1 CHAPTER OVERVIEW Most of the research supported by GDAS has primarily involved two...signal for the computer. The SRUN signal from the computer is fed to a retriggerable oneshot multivibrator on the board. SRUN consists of a pulse train...that is present when the computer is running. The oneshot output drives the RUN lamp on the front panel. Finally, one pin on the board edge connector is

  18. Results of the NFIRAOS RTC trade study

    NASA Astrophysics Data System (ADS)

    Véran, Jean-Pierre; Boyer, Corinne; Ellerbroek, Brent L.; Gilles, Luc; Herriot, Glen; Kerley, Daniel A.; Ljusic, Zoran; McVeigh, Eric A.; Prior, Robert; Smith, Malcolm; Wang, Lianqi

    2014-07-01

    With two large deformable mirrors with a total of more than 7000 actuators that need to be driven from the measurements of six 60x60 LGS WFSs (total 1.23Mpixels) at 800Hz with a latency of less than one frame, NFIRAOS presents an interesting real-time computing challenge. This paper reports on a recent trade study to evaluate which current technology could meet this challenge, with the plan to select a baseline architecture by the beginning of NFIRAOS construction in 2014. We have evaluated a number of architectures, ranging from very specialized layouts with custom boards to more generic architectures made from commercial off-the-shelf units (CPUs with or without accelerator boards). For each architecture, we have found the most suitable algorithm, mapped it onto the hardware and evaluated the performance through benchmarking whenever possible. We have evaluated a large number of criteria, including cost, power consumption, reliability and flexibility, and proceeded with scoring each architecture based on these criteria. We have found that, with today's technology, the NFIRAOS requirements are well within reach of off-the-shelf commercial hardware running a parallel implementation of the straightforward matrix-vector multiply (MVM) algorithm for wave-front reconstruction. Even accelerators such as GPUs and Xeon Phis are no longer necessary. Indeed, we have found that the entire NFIRAOS RTC can be handled by seven 2U high-end PC-servers using 10GbE connectivity. Accelerators are only required for the off-line process of updating the control matrix every ~10s, as observing conditions change.
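
    The selected reconstructor is a single matrix-vector multiply: actuator commands a = R s, with roughly 7000 actuators and about 43,200 slopes from the six 60x60 WFSs. The sketch below times one such product; the sizes are indicative assumptions, and a real RTC splits the product across servers to fit the 1.25 ms frame budget at 800 Hz.

        import time
        import numpy as np

        n_act, n_slopes = 7000, 43200               # indicative NFIRAOS-scale dimensions
        R = np.random.default_rng(0).standard_normal((n_act, n_slopes)).astype(np.float32)
        s = np.random.default_rng(1).standard_normal(n_slopes).astype(np.float32)

        t0 = time.perf_counter()
        a = R @ s                                   # actuator commands = control matrix x slopes
        dt = time.perf_counter() - t0
        print(f"one MVM: {dt * 1e3:.1f} ms (frame budget at 800 Hz is 1.25 ms)")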

  19. Development and Flight Results of a PC104/QNX-Based On-Board Computer and Software for the YES2 Tether Experiment

    NASA Astrophysics Data System (ADS)

    Spiliotopoulos, I.; Mirmont, M.; Kruijff, M.

    2008-08-01

    This paper highlights the flight preparation and mission performance of a PC104-based On-Board Computer for ESA's second Young Engineer's Satellite (YES2), with additional attention to the flight software design and experience of QNX as multi-process real-time operating system. This combination of Commercial-Of-The-Shelf (COTS) technologies is an accessible option for small satellites with high computational demands.

  20. Thermodynamic cost of computation, algorithmic complexity and the information metric

    NASA Technical Reports Server (NTRS)

    Zurek, W. H.

    1989-01-01

    Algorithmic complexity is discussed as a computational counterpart to the second law of thermodynamics. It is shown that algorithmic complexity, which is a measure of randomness, sets limits on the thermodynamic cost of computations and casts a new light on the limitations of Maxwell's demon. Algorithmic complexity can also be used to define distance between binary strings.

  1. A Nonlinear Framework of Delayed Particle Smoothing Method for Vehicle Localization under Non-Gaussian Environment.

    PubMed

    Xiao, Zhu; Havyarimana, Vincent; Li, Tong; Wang, Dong

    2016-05-13

    In this paper, a novel nonlinear smoothing framework, the non-Gaussian delayed particle smoother (nGDPS), is proposed, which enables vehicle state estimation (VSE) with high accuracy while taking into account the non-Gaussianity of the measurement and process noises. Within the proposed method, the multivariate Student's t-distribution is adopted to compute the probability density function (PDF) of the process and measurement noises, which are assumed to be non-Gaussian distributed. A computation approach based on the Ensemble Kalman Filter (EnKF) is designed to cope with the mean and covariance matrix of the proposal non-Gaussian distribution. A delayed Gibbs sampling algorithm, which incorporates smoothing of the sampled trajectories over a fixed delay, is proposed to deal with the sample degeneracy of particles. The performance is investigated on real-world data collected by low-cost on-board vehicle sensors. The comparison study based on the real-world experiments and the statistical analysis demonstrates that the proposed nGDPS yields a significant improvement in vehicle state accuracy and outperforms the existing filtering and smoothing methods.
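
    The heavy-tailed noise model at the core of nGDPS is the multivariate Student's t, which can be drawn as a Gaussian scale mixture. A sketch follows; the dimensions, scale matrix and degrees of freedom are illustrative, and this shows only the noise model, not the smoother itself.

        import numpy as np

        def mvt_sample(rng, mean, scale, dof, size):
            """Draw `size` samples from a multivariate Student's t (Gaussian scale mixture)."""
            d = len(mean)
            L = np.linalg.cholesky(scale)                  # factor of the scale matrix
            z = rng.standard_normal((size, d))
            g = rng.chisquare(dof, size=size) / dof        # shared chi-square mixing variable
            return mean + (z @ L.T) / np.sqrt(g)[:, None]  # heavier tails than a Gaussian

        rng = np.random.default_rng(0)
        noise = mvt_sample(rng, np.zeros(2), np.eye(2), dof=3.0, size=1000)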

  2. Characterization of Adrenal Adenoma by Gaussian Model-Based Algorithm.

    PubMed

    Hsu, Larson D; Wang, Carolyn L; Clark, Toshimasa J

    2016-01-01

    We confirmed that computed tomography (CT) attenuation values of pixels in an adrenal nodule approximate a Gaussian distribution. Building on this and the previously described histogram analysis method, we created an algorithm that uses mean and standard deviation to estimate the percentage of negative attenuation pixels in an adrenal nodule, thereby allowing differentiation of adenomas and nonadenomas. The institutional review board approved both components of this study in which we developed and then validated our criteria. In the first, we retrospectively assessed CT attenuation values of adrenal nodules for normality using a 2-sample Kolmogorov-Smirnov test. In the second, we evaluated a separate cohort of patients with adrenal nodules using both the conventional 10 HU mean attenuation method and our Gaussian model-based algorithm. We compared the sensitivities of the 2 methods using McNemar's test. A total of 183 of 185 observations (98.9%) demonstrated a Gaussian distribution in adrenal nodule pixel attenuation values. The sensitivity and specificity of our Gaussian model-based algorithm for identifying adrenal adenoma were 86.1% and 83.3%, respectively. The sensitivity and specificity of the mean attenuation method were 53.2% and 94.4%, respectively. The sensitivities of the 2 methods were significantly different (P value < 0.001). In conclusion, the CT attenuation values within an adrenal nodule follow a Gaussian distribution. Our Gaussian model-based algorithm can characterize adrenal adenomas with higher sensitivity than the conventional mean attenuation method. The use of our algorithm, which does not require additional postprocessing, may increase workflow efficiency and reduce unnecessary workup of benign nodules. Copyright © 2016 Elsevier Inc. All rights reserved.
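
    The core of the described estimate fits in a few lines: under the (verified) Gaussian assumption, the fraction of negative-attenuation pixels follows directly from the nodule's mean and standard deviation. A minimal sketch; the paper's diagnostic cutoffs are not reproduced here:

    ```python
    from scipy.stats import norm

    def pct_negative_pixels(mean_hu, sd_hu):
        """Estimate the percentage of pixels below 0 HU in a nodule,
        assuming pixel attenuation values are Gaussian with the given
        mean and standard deviation (as the study confirms)."""
        return 100.0 * norm.cdf(0.0, loc=mean_hu, scale=sd_hu)

    # Example: a nodule with mean 15 HU and SD 20 HU
    print(f"{pct_negative_pixels(15, 20):.1f}% of pixels below 0 HU")  # ~22.7%
    ```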

  3. A parallel Jacobson-Oksman optimization algorithm. [parallel processing (computers)

    NASA Technical Reports Server (NTRS)

    Straeter, T. A.; Markos, A. T.

    1975-01-01

    A gradient-dependent optimization technique which exploits the vector-streaming or parallel-computing capabilities of some modern computers is presented. The algorithm, derived by assuming that the function to be minimized is homogeneous, is a modification of the Jacobson-Oksman serial minimization method. In addition to describing the algorithm, conditions ensuring the convergence of the iterates of the algorithm and the results of numerical experiments on a group of sample test functions are presented. The results of these experiments indicate that this algorithm will solve optimization problems in less computing time than conventional serial methods on machines having vector-streaming or parallel-computing capabilities.

  4. Connect Computer Education to Policies.

    ERIC Educational Resources Information Center

    Kimmelman, Paul

    1985-01-01

    The computer phenomenon has made rapid inroads into school curricula, often without proper board guidance or approval. Accordingly, this pamphlet discusses why and how computer education should be provided in schools and sets forth guidelines for school board policy regarding computers. An umbrella policy is proposed, defining "computer literacy"…

  5. A highly efficient multi-core algorithm for clustering extremely large datasets

    PubMed Central

    2010-01-01

    Background In recent years, the demand for computational power in computational biology has increased due to rapidly growing data sets from microarray and other high-throughput technologies. This demand is likely to increase. Standard algorithms for analyzing data, such as cluster algorithms, need to be parallelized for fast processing. Unfortunately, most approaches for parallelizing algorithms largely rely on network communication protocols connecting and requiring multiple computers. One answer to this problem is to utilize the intrinsic capabilities in current multi-core hardware to distribute the tasks among the different cores of one computer. Results We introduce a multi-core parallelization of the k-means and k-modes cluster algorithms based on the design principles of transactional memory for clustering gene expression microarray type data and categorical SNP data. Our new shared-memory parallel algorithms prove to be highly efficient. We demonstrate their computational power and show their utility in cluster stability and sensitivity analysis employing repeated runs with slightly changed parameters. Computation speed of our Java-based algorithm was increased by a factor of 10 for large data sets while preserving computational accuracy compared to single-core implementations and a recently published network-based parallelization. Conclusions Most desktop computers and even notebooks provide at least dual-core processors. Our multi-core algorithms show that, using modern algorithmic concepts, parallelization makes it possible to perform even such laborious tasks as cluster sensitivity and cluster number estimation on the laboratory computer. PMID:20370922
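
    A minimal process-based sketch of the idea in Python (the paper's implementation is a shared-memory Java one built on transactional-memory principles; all names here are illustrative): the assignment step of Lloyd's k-means is embarrassingly parallel across data chunks.

    ```python
    import numpy as np
    from concurrent.futures import ProcessPoolExecutor

    def assign_chunk(args):
        """Nearest-centroid assignment for one chunk of rows (the
        embarrassingly parallel step of k-means)."""
        X_chunk, centroids = args
        d = ((X_chunk[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        return d.argmin(axis=1)

    def parallel_kmeans(X, k, n_iter=20, workers=4, seed=0):
        rng = np.random.default_rng(seed)
        centroids = X[rng.choice(len(X), k, replace=False)]
        chunks = np.array_split(X, workers)
        with ProcessPoolExecutor(workers) as pool:
            for _ in range(n_iter):
                labels = np.concatenate(list(
                    pool.map(assign_chunk, [(c, centroids) for c in chunks])))
                # Update step: each centroid becomes the mean of its members;
                # keep the old centroid if a cluster happens to empty out.
                centroids = np.array([X[labels == j].mean(axis=0)
                                      if np.any(labels == j) else centroids[j]
                                      for j in range(k)])
        return labels, centroids

    if __name__ == "__main__":
        X = np.random.default_rng(1).normal(size=(10000, 8))
        labels, C = parallel_kmeans(X, k=5)
    ```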

  6. Developments in the Aerosol Layer Height Retrieval Algorithm for the Copernicus Sentinel-4/UVN Instrument

    NASA Astrophysics Data System (ADS)

    Nanda, Swadhin; Sanders, Abram; Veefkind, Pepijn

    2016-04-01

    The Sentinel-4 mission is a part of the European Commission's Copernicus programme, the goal of which is to provide geo-information to manage environmental assets, and to observe, understand and mitigate the effects of the changing climate. The Sentinel-4/UVN instrument design is motivated by the need to monitor trace gas concentrations and aerosols in the atmosphere from a geostationary orbit. The on-board instrument is a high resolution UV-VIS-NIR (UVN) spectrometer system that provides hourly radiance measurements over Europe and northern Africa with a spatial sampling of 8 km. The main application area of Sentinel-4/UVN is air quality. One of the data products being developed for Sentinel-4/UVN is the Aerosol Layer Height (ALH). The goal is to determine the height of aerosol plumes with a resolution of better than 0.5 - 1 km. The ALH product thus targets aerosol layers in the free troposphere, such as desert dust, volcanic ash and biomass burning plumes. KNMI is responsible for the development of the ALH algorithm. Its heritage is the ALH algorithm developed by Sanders and De Haan (ATBD, 2016) for the TROPOMI instrument on board the Sentinel-5 Precursor mission, which is to be launched in June or July 2016 (tentative date). The retrieval algorithm designed so far for the aerosol height product is based on the absorption characteristics of the oxygen A-band (759-770 nm). New aspects for Sentinel-4/UVN include the higher spectral resolution (0.116 nm compared to 0.4 nm for TROPOMI) and hourly observation from geostationary orbit. The algorithm uses optimal estimation to obtain a spectral fit of the reflectance across the absorption band, while assuming a single uniform layer with fixed width to represent the aerosol vertical distribution. The state vector includes, among other elements, the height of this layer and its aerosol optical thickness. We will present the development work around the ALH retrieval algorithm in the framework of the Sentinel-4/UVN instrument. The main challenges are highlighted and retrieval simulation results are provided. Also, an outlook towards application of the S4 breadboard algorithm to Sentinel-5 Precursor data later this year will be discussed.
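
    For readers unfamiliar with optimal estimation, the fit described above reduces to a Gauss-Newton iteration over a forward model. A minimal sketch with an abstract forward model standing in for the radiative transfer code; all names and the fixed iteration count are illustrative:

    ```python
    import numpy as np

    def oe_retrieval(y, forward, jacobian, x_a, S_a, S_e, n_iter=10):
        """Gauss-Newton optimal estimation fit of a state vector (e.g.
        aerosol layer height and optical thickness) to a measured
        reflectance spectrum y, given prior (x_a, S_a) and measurement
        noise covariance S_e."""
        x = x_a.copy()
        Sa_inv = np.linalg.inv(S_a)
        Se_inv = np.linalg.inv(S_e)
        for _ in range(n_iter):
            K = jacobian(x)                     # Jacobian of the forward model
            A = K.T @ Se_inv @ K + Sa_inv       # approximate Hessian
            b = K.T @ Se_inv @ (y - forward(x)) - Sa_inv @ (x - x_a)
            x = x + np.linalg.solve(A, b)       # Gauss-Newton update
        return x
    ```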

  7. A unifying framework for rigid multibody dynamics and serial and parallel computational issues

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Jain, Abhinandan

    1989-01-01

    A unifying framework for various formulations of the dynamics of open-chain rigid multibody systems is discussed, and their suitability for serial and parallel processing is assessed. The framework is based on the derivation of intrinsic, i.e., coordinate-free, equations of the algorithms, which provides a suitable abstraction and permits a distinction to be made between the computational redundancy in the intrinsic and extrinsic equations. A set of spatial notation is used which allows the derivation of the various algorithms in a common setting and thus clarifies the relationships among them. The three classes of algorithms, viz., O(n), O(n^2) and O(n^3), for the solution of the dynamics problem are investigated. The researchers begin with the derivation of O(n^3) algorithms based on the explicit computation of the mass matrix, which provides insight into the underlying basis of the O(n) algorithms. From a computational perspective, the optimal choice of a coordinate frame for the projection of the intrinsic equations is discussed and the serial computational complexity of the different algorithms is evaluated. The three classes of algorithms are also analyzed for suitability for parallel processing. It is shown that the problem belongs to the class NC and that the time and processor bounds are O(log^2(n)) and O(n^4), respectively. However, the algorithm that achieves these bounds is not stable. The researchers show that the fastest stable parallel algorithm achieves a computational complexity of O(n) with O(n^2) processors, and results from the parallelization of the O(n^3) serial algorithm.

  8. Speed Approach for UAV Collision Avoidance

    NASA Astrophysics Data System (ADS)

    Berdonosov, V. D.; Zivotova, A. A.; Htet Naing, Zaw; Zhuravlev, D. O.

    2018-05-01

    The article presents a new approach to detecting potential collisions of two or more UAVs in a common aviation area. UAV trajectories are approximated from two or three trajectory points obtained from the ADS-B system. In the process of finding the meeting points of the trajectories, two cutoff values of the critical speed range, within which a UAV collision is possible, are calculated. As the expressions for the meeting points and the cutoff values of the critical speed are given in analytical form, even an on-board computer system with limited computational capacity can complete the calculation in far less time than it takes to receive new data from ADS-B. For this reason, the calculations can be updated at each cycle of data reception, and the trajectory approximation can be bounded by straight lines. This approach allows a compact collision-avoidance algorithm to be developed even for a significant number of UAVs (more than several dozen). To prove the adequacy of the approach, modeling was performed using a software system developed specifically for this purpose.
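
    A minimal planar sketch of the two geometric ingredients, under assumptions not taken from the paper (straight-line tracks, a symmetric safety window); the paper's closed-form expressions differ in detail:

    ```python
    import numpy as np

    def meeting_point(p1, d1, p2, d2):
        """Intersection of two straight-line tracks in the plane.
        p1, p2 are current positions; d1, d2 are unit direction vectors
        (e.g. fitted from two or three ADS-B fixes). Raises for
        parallel tracks."""
        t = np.linalg.solve(np.column_stack((d1, -d2)), p2 - p1)
        return p1 + t[0] * d1, t      # meeting point, distances along tracks

    def critical_speed_range(dist1, dist2, speed1, t_safe):
        """Illustrative cutoff speeds for UAV 2: the speeds at which it
        would reach the meeting point within +/- t_safe of UAV 1's
        arrival time."""
        t1 = dist1 / speed1
        return dist2 / (t1 + t_safe), dist2 / max(t1 - t_safe, 1e-9)
    ```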

  9. Airborne optical tracking control system design study

    NASA Astrophysics Data System (ADS)

    1992-09-01

    The Kestrel LOS Tracking Program involves the development of a computer and algorithms for use in passive tracking of airborne targets from a high altitude balloon platform. The computer receives track error signals from a video tracker connected to one of the imaging sensors. In addition, an on-board IRU (gyro), accelerometers, a magnetometer, and a two-axis inclinometer provide inputs which are used for initial acquisition and for coarse and fine tracking. Signals received by the control processor from the video tracker, IRU, accelerometers, magnetometer, and inclinometer are utilized by the control processor to generate drive signals for the payload azimuth drive, the Gimballed Mirror System (GMS), and the Fast Steering Mirror (FSM). The hardware which will be procured under the LOS tracking activity is the Controls Processor (CP), the IRU, and the FSM. The performance specifications for the GMS and the payload canister azimuth driver are established by the LOS tracking design team in an effort to achieve a tracking jitter of less than 3 micro-rad, 1 sigma, for one axis.

  10. Semantic modeling and structural synthesis of onboard electronics protection means as open information system

    NASA Astrophysics Data System (ADS)

    Zhevnerchuk, D. V.; Surkova, A. S.; Lomakina, L. S.; Golubev, A. S.

    2018-05-01

    The article describes a component-based representation approach and semantic models for protecting on-board electronics from ionizing radiation of various kinds. Semantic models are constructed whose distinguishing feature is the representation of electronic elements, protection modules, and radiation sources as blocks with interfaces. Rules of logical inference and algorithms are developed for synthesizing the object properties of the semantic network, modeling the interfaces between the components of the protection system and the radiation sources. The results of the algorithm are illustrated using the radiation-resistant microcircuits 1645RU5U and 1645RT2U and a combined computational and experimental method for estimating the durability of on-board electronics.

  11. 78 FR 89 - Announcing an Open Meeting of the Information Security and Privacy Advisory Board

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-02

    ... Management and Budget, and the Director of NIST on security and privacy issues pertaining to federal computer... Computer Security Division. Note that agenda items may change without notice because of possible unexpected... of the Information Security and Privacy Advisory Board AGENCY: National Institute of Standards and...

  12. Research on the precise positioning of customers in large data environment

    NASA Astrophysics Data System (ADS)

    Zhou, Xu; He, Lili

    2018-04-01

    Customer positioning has always been a problem on which enterprises focus. In this paper, the FCM clustering algorithm is used to cluster customer groups. However, the traditional FCM clustering algorithm is sensitive to the choice of initial cluster centers and prone to falling into local optima; this shortcoming of FCM is addressed with the grey wolf optimizer (GWO) to achieve efficient and accurate processing of large volumes of retailer data.
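
    For reference, the FCM iteration that such a scheme seeds with optimized centers looks, in a minimal sketch, like this (random initialization shown; the GWO step would replace it):

    ```python
    import numpy as np

    def fcm(X, k, m=2.0, n_iter=100, seed=0):
        """Plain fuzzy c-means with fuzzifier m. In the paper's scheme,
        GWO-optimized centers would replace the random initialization."""
        rng = np.random.default_rng(seed)
        U = rng.random((len(X), k))
        U /= U.sum(axis=1, keepdims=True)              # fuzzy memberships
        C = None
        for _ in range(n_iter):
            W = U ** m
            C = (W.T @ X) / W.sum(axis=0)[:, None]     # weighted cluster centers
            d = np.linalg.norm(X[:, None] - C[None], axis=2) + 1e-12
            inv = d ** (-2.0 / (m - 1.0))
            U = inv / inv.sum(axis=1, keepdims=True)   # membership update
        return U, C
    ```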

  13. Parallel Directionally Split Solver Based on Reformulation of Pipelined Thomas Algorithm

    NASA Technical Reports Server (NTRS)

    Povitsky, A.

    1998-01-01

    In this research an efficient parallel algorithm for 3-D directionally split problems is developed. The proposed algorithm is based on a reformulated version of the pipelined Thomas algorithm that starts the backward step computations immediately after the completion of the forward step computations for the first portion of lines. This makes data available for other computational tasks while processors would otherwise be idle in the Thomas algorithm. The proposed 3-D directionally split solver is based on static scheduling of processors, where local and non-local, data-dependent and data-independent computations are scheduled while processors are idle. A theoretical model of parallelization efficiency is used to define optimal parameters of the algorithm, to show an asymptotic parallelization penalty and to obtain an optimal cover of a global domain with subdomains. It is shown by computational experiments and by the theoretical model that the proposed algorithm reduces the parallelization penalty by about a factor of two relative to the basic algorithm for the range of the number of processors (subdomains) considered and the number of grid nodes per subdomain.
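
    For reference, the serial building block being pipelined is the classical Thomas algorithm; a minimal sketch of its two sweeps:

    ```python
    import numpy as np

    def thomas(a, b, c, d):
        """Thomas algorithm for a tridiagonal system: a is the sub-,
        b the main, c the super-diagonal, d the right-hand side
        (a[0] and c[-1] are unused). The pipelined variant described
        above overlaps the backward sweep of early lines with the
        forward sweep of later ones."""
        n = len(b)
        cp, dp = np.empty(n), np.empty(n)
        cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
        for i in range(1, n):                      # forward elimination
            denom = b[i] - a[i] * cp[i - 1]
            cp[i] = c[i] / denom if i < n - 1 else 0.0
            dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
        x = np.empty(n)
        x[-1] = dp[-1]
        for i in range(n - 2, -1, -1):             # back substitution
            x[i] = dp[i] - cp[i] * x[i + 1]
        return x
    ```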

  14. Evaluation of Demons- and FEM-Based Registration Algorithms for Lung Cancer.

    PubMed

    Yang, Juan; Li, Dengwang; Yin, Yong; Zhao, Fen; Wang, Hongjun

    2016-04-01

    We evaluated and compared the accuracy of 2 deformable image registration algorithms in 4-dimensional computed tomography images for patients with lung cancer. Ten patients with non-small cell lung cancer or small cell lung cancer were enrolled in this institutional review board-approved study. The displacement vector fields relative to a specific reference image were calculated by using the diffeomorphic demons (DD) algorithm and the finite element method (FEM)-based algorithm. The registration accuracy was evaluated by using normalized mutual information (NMI), the sum of squared intensity difference (SSD), modified Hausdorff distance (dH_M), and the ratio of gross tumor volume (rGTV) difference between the reference image and the deformed phase image. We also compared the registration speed of the 2 algorithms. Across all patients, the FEM-based algorithm showed a stronger ability to align the 2 images than the DD algorithm. The means (±standard deviation) of NMI were 0.86 (±0.05) and 0.90 (±0.05) using the DD algorithm and the FEM-based algorithm, respectively. The means of SSD were 0.006 (±0.003) and 0.003 (±0.002) using the DD algorithm and the FEM-based algorithm, respectively. The means of dH_M were 0.04 (±0.02) and 0.03 (±0.03) using the DD algorithm and the FEM-based algorithm, respectively. The means of rGTV were 3.9% (±1.01%) and 2.9% (±1.1%) using the DD algorithm and the FEM-based algorithm, respectively. However, the FEM-based algorithm takes longer than the DD algorithm, with an average running time of 31.4 minutes compared to 21.9 minutes across all patients. The preliminary results showed that the FEM-based algorithm was more accurate than the DD algorithm at the cost of registration speed. © The Author(s) 2015.
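
    Of the four metrics, NMI is easy to illustrate from a joint intensity histogram; a minimal sketch using one common definition (the paper's exact normalization may differ):

    ```python
    import numpy as np

    def normalized_mutual_information(img1, img2, bins=64):
        """NMI = (H(A) + H(B)) / H(A, B), estimated from a joint
        intensity histogram of the two images."""
        h, _, _ = np.histogram2d(img1.ravel(), img2.ravel(), bins=bins)
        p = h / h.sum()
        px, py = p.sum(axis=1), p.sum(axis=0)
        hx = -np.sum(px[px > 0] * np.log(px[px > 0]))
        hy = -np.sum(py[py > 0] * np.log(py[py > 0]))
        hxy = -np.sum(p[p > 0] * np.log(p[p > 0]))
        return (hx + hy) / hxy
    ```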

  15. A review on quantum search algorithms

    NASA Astrophysics Data System (ADS)

    Giri, Pulak Ranjan; Korepin, Vladimir E.

    2017-12-01

    The use of superposition of states in quantum computation, known as quantum parallelism, has a significant advantage in terms of speed over classical computation. This is evident from early quantum algorithms such as Deutsch's algorithm, the Deutsch-Jozsa algorithm and its variation the Bernstein-Vazirani algorithm, Simon's algorithm, Shor's algorithms, etc. Quantum parallelism also significantly speeds up the database search algorithm, which is important in computer science because it appears as a subroutine in many important algorithms. Grover's quantum database search achieves the task of finding the target element in an unsorted database in a time quadratically faster than a classical computer. We review Grover's quantum search algorithms for single and multiple target elements in a database. The partial search algorithm of Grover and Radhakrishnan and its optimization by Korepin, called the GRK algorithm, are also discussed.
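
    Grover's search for a single target is compact enough to simulate classically with a state vector; a minimal sketch:

    ```python
    import numpy as np

    def grover(n_qubits, target):
        """State-vector simulation of Grover search for one target."""
        N = 2 ** n_qubits
        psi = np.full(N, 1 / np.sqrt(N))          # uniform superposition
        for _ in range(int(np.pi / 4 * np.sqrt(N))):
            psi[target] *= -1                     # oracle: phase-flip the target
            psi = 2 * psi.mean() - psi            # diffusion about the mean
        return np.abs(psi) ** 2                   # measurement probabilities

    probs = grover(8, target=42)
    print(f"P(target) = {probs[42]:.4f}")         # close to 1 after ~12 iterations
    ```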

  16. Application of Reconfigurable Computing Technology to Multi-KiloHertz Micro-Laser Altimeter (MMLA) Data Processing

    NASA Technical Reports Server (NTRS)

    Powell, Wesley; Dabney, Philip; Hicks, Edward; Pinchinat, Maxime; Day, John H. (Technical Monitor)

    2002-01-01

    The Multi-KiloHertz Micro-Laser Altimeter (MMLA) is an aircraft-based instrument developed by NASA Goddard Space Flight Center with several potential spaceflight applications. This presentation describes how reconfigurable computing technology was employed to perform MMLA signal extraction in real time under realistic operating constraints. The MMLA is a "single-photon-counting" airborne laser altimeter that is used to measure land surface features such as topography and vegetation canopy height. This instrument has to date flown a number of times aboard the NASA P3 aircraft, acquiring data at a number of sites in the Mid-Atlantic region. The instrument pulses a relatively low-powered laser at a very high rate (10 kHz) and then measures the time-of-flight of discrete returns from the target surface. The instrument then bins these measurements into a two-dimensional array (vertical height vs. horizontal ground track) and selects the most likely signal path through the array. Return data that do not correspond to the selected signal path are classified as noise returns and discarded. The MMLA signal extraction algorithm is very compute-intensive in that a score must be computed for every possible path through the two-dimensional array in order to select the most likely signal path. Given a typical array size of 50 x 6, up to 33 arrays must be processed per second, and for each of these arrays, roughly 12,000 individual paths must be scored. Furthermore, the number of paths increases exponentially with the horizontal size of the array, and linearly with the vertical size. Yet increasing the horizontal and vertical sizes of the array offers science advantages such as improved range, resolution, and noise rejection. Due to the volume of return data and the compute-intensive signal extraction algorithm, the existing PC-based MMLA data system has been unable to perform signal extraction in real time unless the array is limited in size to one column. This limits the ability of the MMLA to operate in environments with sparse signal returns and a high number of noise returns. However, under an IR&D project, an FPGA-based, reconfigurable computing data system has been developed that has been demonstrated to perform real-time signal extraction under realistic operating constraints. This reconfigurable data system is based on the commercially available Firebird board from Annapolis Microsystems. This PCI board consists of a Xilinx Virtex 2000E FPGA along with 36 MB of SRAM arranged in five separately addressable banks. The board is housed in a rackmount PC with dual 850 MHz Pentium processors running the Windows 2000 operating system. This data system performs all signal extraction in hardware on the Firebird, but also runs the existing "software-based" signal extraction in tandem for comparison purposes. Using a relatively small amount of the Virtex XCV2000E resources, the reconfigurable data system has been demonstrated to improve performance over the existing software-based data system by an order of magnitude. Performance could be further improved by exploiting additional parallelism. Ground testing and a preliminary engineering test flight aboard the NASA P3 have been performed, during which the reconfigurable data system was shown to match the results of the existing data system.
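
    As an illustration of the path-scoring problem (not the flight code, which scores paths exhaustively), a dynamic-programming variant under assumed constraints (one bin per along-track column, bounded bin-to-bin jumps, score equal to the summed counts) finds the best path in time linear in the array size:

    ```python
    import numpy as np

    def best_signal_path(counts, max_jump=1):
        """Viterbi-style search for the highest-count path through a
        (range bins x along-track columns) array. The path constraints
        and scoring here are illustrative assumptions."""
        n_rows, n_cols = counts.shape
        score = counts[:, 0].astype(float).copy()
        back = np.zeros((n_rows, n_cols), dtype=int)
        for j in range(1, n_cols):
            new = np.empty(n_rows)
            for r in range(n_rows):
                lo, hi = max(0, r - max_jump), min(n_rows, r + max_jump + 1)
                k = lo + int(np.argmax(score[lo:hi]))  # best predecessor bin
                back[r, j] = k
                new[r] = score[k] + counts[r, j]
            score = new
        # Trace the winning path back from the best final bin
        path = [int(np.argmax(score))]
        for j in range(n_cols - 1, 0, -1):
            path.append(back[path[-1], j])
        return path[::-1], score.max()
    ```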

  17. Efficient mapping algorithms for scheduling robot inverse dynamics computation on a multiprocessor system

    NASA Technical Reports Server (NTRS)

    Lee, C. S. G.; Chen, C. L.

    1989-01-01

    Two efficient mapping algorithms are presented for scheduling the robot inverse dynamics computation, which consists of m computational modules with precedence relationships, on a multiprocessor system of p identical homogeneous processors with processing and communication costs, so as to achieve minimum computation time. An objective function is defined in terms of the sum of the processor finishing time and the interprocessor communication time. A minimax optimization is performed on the objective function to obtain the best mapping. This mapping problem can be formulated as a combination of the graph partitioning and scheduling problems, both of which are known to be NP-complete. Thus, to speed up the search for a solution, two heuristic algorithms are proposed to obtain fast but suboptimal mapping solutions. The first algorithm utilizes the level and the communication intensity of the task modules to construct an ordered priority list of ready modules, and the module assignment is performed by a weighted bipartite matching algorithm. For a near-optimal mapping solution, the problem can be solved by a heuristic algorithm with simulated annealing. These optimization algorithms can solve various large-scale problems within a reasonable time. Computer simulations were performed to evaluate and verify the performance and validity of the proposed mapping algorithms. Finally, experiments for computing the inverse dynamics of a six-jointed PUMA-like manipulator based on the Newton-Euler dynamic equations were implemented on an NCUBE/ten hypercube computer to verify the proposed mapping algorithms. Computer simulation and experimental results are compared and discussed.

  18. Onboard spectral imager data processor

    NASA Astrophysics Data System (ADS)

    Otten, Leonard J.; Meigs, Andrew D.; Franklin, Abraham J.; Sears, Robert D.; Robison, Mark W.; Rafert, J. Bruce; Fronterhouse, Donald C.; Grotbeck, Ronald L.

    1999-10-01

    Previous papers have described the concept behind the MightySat II.1 program, the satellite's Fourier Transform imaging spectrometer's optical design, the design for the spectral imaging payload, and its initial qualification testing. This paper discusses the on board data processing designed to reduce the amount of downloaded data by an order of magnitude and provide a demonstration of a smart spaceborne spectral imaging sensor. Two custom components, a spectral imager interface 6U VME card that moves data at over 30 MByte/sec, and four TI C-40 processors mounted to a second 6U VME and daughter card, are used to adapt the sensor to the spacecraft and provide the necessary high speed processing. A system architecture that offers both on board real time image processing and high-speed post data collection analysis of the spectral data has been developed. In addition to the on board processing of the raw data into a usable spectral data volume, one feature extraction technique has been incorporated. This algorithm operates on the basic interferometric data. The algorithm is integrated within the data compression process to search for uploadable feature descriptions.

  19. Athena X-IFU event reconstruction software: SIRENA

    NASA Astrophysics Data System (ADS)

    Ceballos, Maria Teresa; Cobo, Beatriz; Peille, Philippe; Wilms, Joern; Brand, Thorsten; Dauser, Thomas; Bandler, Simon; Smith, Stephen

    2015-09-01

    This contribution describes the status and technical details of the SIRENA package, the software currently in development to perform the on-board event energy reconstruction for the Athena calorimeter X-IFU. This on-board processing will be done in the X-IFU DRE unit and will consist of an initial triggering of event pulses followed by an analysis (with the SIRENA package) to determine the energy content of such events. The current algorithm used by SIRENA is the optimal filtering technique (also used by the ASTRO-H processor), although other algorithms are also being tested. Here we present these studies and some preliminary results on the energy resolution of the instrument, based on simulations done with the SIXTE simulator (http://www.sternwarte.uni-erlangen.de/research/sixte/), in which SIRENA is integrated.
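
    In its simplest (white-noise, time-domain) form, optimal filtering reduces to a least-squares template fit; a minimal sketch, not the flight implementation, which weights by the measured noise spectrum in the frequency domain:

    ```python
    import numpy as np

    def optimal_filter_energy(record, template, calib=1.0):
        """Event energy taken as proportional to the least-squares
        amplitude of the known pulse template in a triggered record,
        the white-noise limit of optimal filtering."""
        amplitude = np.dot(template, record) / np.dot(template, template)
        return calib * amplitude
    ```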

  20. A Method for Measuring the Effective Throughput Time Delay in Simulated Displays Involving Manual Control

    NASA Technical Reports Server (NTRS)

    Jewell, W. F.; Clement, W. F.

    1984-01-01

    The advent and widespread use of the computer-generated image (CGI) device to simulate visual cues has had a mixed impact on the realism and fidelity of flight simulators. On the plus side, CGIs provide greater flexibility in scene content than terrain boards and closed-circuit-television-based visual systems, and they have the potential for a greater field of view. On the minus side, however, CGIs introduce relatively long time delays into the visual simulation. In many CGIs, this delay is as much as 200 ms, which is comparable to the inherent delay time of the pilot. Because most CGIs use multiloop processing and smoothing algorithms and are linked to a multiloop host computer, it is seldom possible to identify a unique throughput time delay, and it is therefore difficult to quantify the performance of the closed-loop pilot-simulator system relative to the real-world task. A method to address these issues using the critical task tester is described. Some empirical results from applying the method are presented, and a novel technique for improving the performance of CGIs is discussed.

  1. Implementation of Multispectral Image Classification on a Remote Adaptive Computer

    NASA Technical Reports Server (NTRS)

    Figueiredo, Marco A.; Gloster, Clay S.; Stephens, Mark; Graves, Corey A.; Nakkar, Mouna

    1999-01-01

    As the demand for higher performance computers for the processing of remote sensing science algorithms increases, the need to investigate new computing paradigms is justified. Field Programmable Gate Arrays enable the implementation of algorithms at the hardware gate level, leading to orders of magnitude performance increase over microprocessor-based systems. The automatic classification of spaceborne multispectral images is an example of a computation-intensive application that can benefit from implementation on an FPGA-based custom computing machine (adaptive or reconfigurable computer). A probabilistic neural network is used here to classify pixels of a multispectral LANDSAT-2 image. The implementation described utilizes Java client/server application programs to access the adaptive computer from a remote site. Results verify that a remote hardware version of the algorithm (implemented on an adaptive computer) is significantly faster than a local software version of the same algorithm implemented on a typical general-purpose computer.
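
    A minimal sketch of the probabilistic neural network classification rule (a Parzen sum of Gaussian kernels per class); names and the smoothing parameter are illustrative, and the FPGA version pipelines the kernel evaluations in hardware:

    ```python
    import numpy as np

    def pnn_classify(x, train_X, train_y, sigma=1.0):
        """Assign pixel x to the class whose Parzen-window (Gaussian
        kernel) likelihood over its training pixels is largest."""
        classes = np.unique(train_y)
        scores = []
        for c in classes:
            d2 = ((train_X[train_y == c] - x) ** 2).sum(axis=1)
            scores.append(np.exp(-d2 / (2 * sigma ** 2)).mean())
        return classes[int(np.argmax(scores))]
    ```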

  2. A design of an interface board between a MRC thermistor probe and a personal computer.

    DOT National Transportation Integrated Search

    2013-09-01

    The main purpose of this project was to design and build a prototype of an interface board between an MRC temperature probe (thermistor array) and a personal laptop computer. This interface board replaces and significantly improves the capabilities ...

  3. THE EFFECTS OF COMPUTER-BASED FIRE SAFETY TRAINING ON THE KNOWLEDGE, ATTITUDES, AND PRACTICES OF CAREGIVERS

    PubMed Central

    Harrington, Susan S.; Walker, Bonnie L.

    2010-01-01

    Background Older adults in small residential board and care facilities are at a particularly high risk of fire death and injury because of their characteristics and environment. Methods The authors investigated computer-based instruction as a way to teach fire emergency planning to owners, operators, and staff of small residential board and care facilities. Participants (N = 59) were randomly assigned to a treatment or control group. Results Study participants who completed the training significantly improved their scores from pre- to posttest when compared to a control group. Participants indicated on the course evaluation that the computers were easy to use for training (97%) and that they would like to use computers for future training courses (97%). Conclusions This study demonstrates the potential for using interactive computer-based training as a viable alternative to instructor-led training to meet the fire safety training needs of owners, operators, and staff of small board and care facilities for the elderly. PMID:19263929

  4. A Red Oak Data Bank for Computer Simulations of Secondary Processing

    Treesearch

    Charles J. Gatchell; Janice K. Wiedenbeck; Elizabeth S. Walker

    1993-01-01

    An extensive data bank for red oak lumber that is compatible with most secondary manufacturing computer simulator tools is now available. Currently, the data bank contains 10,718 board feet in 1,578 boards. The National Hardwood Lumber Association's (NHLA) Special Kiln Dried Rule was used to grade the boards. The percentage of a board's surface measure contained in...

  5. Design and initial validation of a wireless control system based on WSN

    NASA Astrophysics Data System (ADS)

    Yu, Yan; Li, Luyu; Li, Peng; Wang, Xu; Liu, Hang; Ou, Jinping

    2013-04-01

    At present, cantilever structures widely used in civil engineering vibrate persistently under external forces because of their low damping, which seriously affects working performance and service life. It is therefore very important to control the vibration of these structures. Active vibration control is the primary means of controlling vibration with high precision and strong adaptive ability. Piezoelectric materials are cheap and reliable, and they provide actuation and sensing without harming the structure, so they are widely used; much current civil engineering research employs them for structural vibration control. In traditional sensor applications, information is exchanged with the monitoring center or a computer system through wires. If wireless sensor network (WSN) technology is used, no cabling is needed, so the cost of the whole system is greatly reduced. Based on these advantages, a wireless control system was designed and validated through preliminary tests. The system consists of a cantilever, a PVDF sensor, a signal conditioning module (SCM), an A/D acquisition board, a control arithmetic unit, a D/A output board, a power amplifier, and a piezoelectric bimorph actuator. A DSP chip is used as the control arithmetic unit, with a PD control algorithm embedded in it. The PVDF sensor collects the vibration parameters and sends them to the SCM after A/D conversion. The SCM passes the data to the DSP wirelessly, and the DSP calculates and outputs the control values according to the control algorithm. The output signal is amplified by the power amplifier to drive the piezoelectric bimorph for vibration control. The structural vibration duration is reduced to 1/4 of the uncontrolled case, which verifies the feasibility of the system.
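
    A minimal sketch of the PD law as it might run on the DSP each sample period; the gains and variable names are illustrative, not taken from the paper:

    ```python
    def pd_control(setpoint, measured, prev_error, dt, kp=2.0, kd=0.05):
        """One PD update: drive command for the bimorph actuator from
        the PVDF deflection error and its finite-difference derivative."""
        error = setpoint - measured            # e.g. setpoint = 0 deflection
        derivative = (error - prev_error) / dt
        u = kp * error + kd * derivative       # actuator drive signal
        return u, error

    # Example of one control step at a 1 kHz sample rate
    u, e = pd_control(setpoint=0.0, measured=0.3, prev_error=0.35, dt=1e-3)
    ```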

  6. Network Community Detection based on the Physarum-inspired Computational Framework.

    PubMed

    Gao, Chao; Liang, Mingxin; Li, Xianghua; Zhang, Zili; Wang, Zhen; Zhou, Zhili

    2016-12-13

    Community detection is a crucial and essential problem in the structure analytics of complex networks, which can help us understand and predict the characteristics and functions of complex networks. Many methods, ranging from the optimization-based algorithms to the heuristic-based algorithms, have been proposed for solving such a problem. Due to the inherent complexity of identifying network structure, how to design an effective algorithm with a higher accuracy and a lower computational cost still remains an open problem. Inspired by the computational capability and positive feedback mechanism in the wake of foraging process of Physarum, which is a large amoeba-like cell consisting of a dendritic network of tube-like pseudopodia, a general Physarum-based computational framework for community detection is proposed in this paper. Based on the proposed framework, the inter-community edges can be identified from the intra-community edges in a network and the positive feedback of solving process in an algorithm can be further enhanced, which are used to improve the efficiency of original optimization-based and heuristic-based community detection algorithms, respectively. Some typical algorithms (e.g., genetic algorithm, ant colony optimization algorithm, and Markov clustering algorithm) and real-world datasets have been used to estimate the efficiency of our proposed computational framework. Experiments show that the algorithms optimized by Physarum-inspired computational framework perform better than the original ones, in terms of accuracy and computational cost. Moreover, a computational complexity analysis verifies the scalability of our framework.

  7. Optimization of dynamic soaring maneuvers to enhance endurance of a versatile UAV

    NASA Astrophysics Data System (ADS)

    Mir, Imran; Maqsood, Adnan; Akhtar, Suhail

    2017-06-01

    Dynamic soaring is a process of harvesting the energy available in atmospheric wind shears and is commonly exhibited by soaring birds performing long-distance flights. This paper aims to demonstrate a viable algorithm that can be implemented in a near real-time environment to formulate optimal trajectories for dynamic soaring maneuvers of a small-scale Unmanned Aerial Vehicle (UAV). The objective is to harness maximum energy from atmospheric wind shear to improve loiter time for Intelligence, Surveillance and Reconnaissance (ISR) missions. Three-dimensional point-mass UAV equations of motion and a linear wind-gradient profile are used to model the flight dynamics. Using the UAV states, controls, operational constraints, and initial and terminal conditions that enforce a periodic flight, the dynamic soaring problem is formulated as an optimal control problem. Optimized trajectories of the maneuver are then generated employing pseudospectral techniques for different UAV performance parameters. The discussion also covers the requirement to generate optimal trajectories for dynamic soaring in a real-time environment and the ability of the proposed algorithm to produce solutions quickly. Coupled with the fact that dynamic soaring is all about immediately utilizing the available energy from the wind shear encountered, the proposed algorithm promises viability for practical on-board implementations requiring computation of trajectories in near real time.

  8. 76 FR 81477 - Announcing an Open Meeting of the Information Security and Privacy Advisory Board

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-28

    ... sessions will be open to the public. The ISPAB was established by the Computer Security Act of 1987 (Pub. L... Secure Mobile Devices, --Panel Discussion on cyber R&D Strategy, and --Update of NIST Computer Security... of the Information Security and Privacy Advisory Board AGENCY: National Institute of Standards and...

  9. Portable control device for networked mobile robots

    DOEpatents

    Feddema, John T.; Byrne, Raymond H.; Bryan, Jon R.; Harrington, John J.; Gladwell, T. Scott

    2002-01-01

    A handheld control device provides a way for controlling one or multiple mobile robotic vehicles by incorporating a handheld computer with a radio board. The device and software use a personal data organizer as the handheld computer with an additional microprocessor and communication device on a radio board for use in controlling one robot or multiple networked robots.

  10. CROMAX : a crosscut-first computer simulation program to determine cutting yield

    Treesearch

    Pamela J. Giese; Jeanne D. Danielson

    1983-01-01

    CROMAX simulates crosscut-first, then rip operations as commonly practiced in furniture manufacture. This program calculates cutting yields from individual boards based on board size and defect location. Such information can be useful in predicting yield from various grades and grade mixes thereby allowing for better management decisions in the rough mill. The computer...

  11. Algorithm integration using ADL (Algorithm Development Library) for improving CrIMSS EDR science product quality

    NASA Astrophysics Data System (ADS)

    Das, B.; Wilson, M.; Divakarla, M. G.; Chen, W.; Barnet, C.; Wolf, W.

    2013-05-01

    Algorithm Development Library (ADL) is a framework that mimics the operational IDPS (Interface Data Processing Segment) system currently used to process data from instruments aboard the Suomi National Polar-orbiting Partnership (S-NPP) satellite. The satellite was launched successfully in October 2011. The Cross-track Infrared and Microwave Sounder Suite (CrIMSS) consists of the Advanced Technology Microwave Sounder (ATMS) and Cross-track Infrared Sounder (CrIS) instruments on board S-NPP. These instruments will also be on board JPSS (Joint Polar Satellite System), to be launched in early 2017. The primary products of the CrIMSS Environmental Data Record (EDR) include global atmospheric vertical temperature, moisture, and pressure profiles (AVTP, AVMP and AVPP) and the Ozone IP (Intermediate Product from CrIS radiances). Several algorithm updates have recently been proposed by CrIMSS scientists, including fixes to the handling of forward-modeling errors, a more conservative identification of clear scenes, indexing corrections for daytime products, and relaxed constraints between surface temperature and air temperature for daytime land scenes. We have integrated these improvements into the ADL framework. This work compares the results of the ADL emulation of the future IDPS system, incorporating all the suggested algorithm updates, with the current official processing results through qualitative and quantitative evaluations. The results show that these algorithm updates improve science product quality.

  12. Autonomous On-Board Calibration of Attitude Sensors and Gyros

    NASA Technical Reports Server (NTRS)

    Pittelkau, Mark E.

    2007-01-01

    This paper presents the state of the art and future prospects for autonomous real-time on-orbit calibration of gyros and attitude sensors. The current practice in ground-based calibration is presented briefly to contrast it with on-orbit calibration. The technical and economic benefits of on-orbit calibration are discussed. Various algorithms for on-orbit calibration are evaluated, including some that are already operating on board spacecraft. Because Redundant Inertial Measurement Units (RIMUs, which are IMUs that have more than three sense axes) are almost ubiquitous on spacecraft, special attention will be given to calibration of RIMUs. In addition, we discuss autonomous on board calibration and how it may be implemented.

  13. Open-source meteor detection software for low-cost single-board computers

    NASA Astrophysics Data System (ADS)

    Vida, D.; Zubović, D.; Šegon, D.; Gural, P.; Cupec, R.

    2016-01-01

    This work aims to overcome the current price threshold of meteor stations which can sometimes deter meteor enthusiasts from owning one. In recent years small card-sized computers became widely available and are used for numerous applications. To utilize such computers for meteor work, software which can run on them is needed. In this paper we present a detailed description of newly-developed open-source software for fireball and meteor detection optimized for running on low-cost single board computers. Furthermore, an update on the development of automated open-source software which will handle video capture, fireball and meteor detection, astrometry and photometry is given.

  14. Terrain mapping and control of unmanned aerial vehicles

    NASA Astrophysics Data System (ADS)

    Kang, Yeonsik

    In this thesis, methods for terrain mapping and control of unmanned aerial vehicles (UAVs) are proposed. First, robust obstacle detection and tracking algorithms are introduced to eliminate clutter noise uncorrelated with real obstacles. This is an important problem since most types of sensor measurements are vulnerable to noise. To eliminate such noise, a Kalman filter-based interacting multiple model (IMM) algorithm is employed to effectively detect obstacles and estimate their positions precisely. Using the outcome of the IMM-based obstacle detection algorithm, a new method of building a probabilistic occupancy grid map is proposed based on Bayes' rule. Since the proposed map update law uses the outputs of the IMM-based obstacle detection algorithm, simultaneous tracking of moving targets and mapping of stationary obstacles are possible. This is helpful especially in a noisy outdoor environment where different types of obstacles exist. Another feature of the algorithm is its capability to eliminate clutter noise as well as measurement noise. The proposed algorithm is simulated in Matlab using realistic sensor models, and the results show close agreement with the layout of real obstacles. An efficient method called "quadtree" is used to process massive geographical information in a convenient manner. The algorithm is evaluated in a realistic simulation environment called RIPTIDE, which the NASA Ames Research Center developed to assess the performance of complicated software for UAVs. Supposing that a UAV is equipped with the abovementioned obstacle detection and mapping algorithm, the control problem of a small fixed-wing UAV is then studied. A Nonlinear Model Predictive Controller (NMPC) is designed as a high-level controller for the fixed-wing UAV using a kinematic model of the UAV; the kinematic model is employed under the assumption that low-level controllers exist on the UAV. The UAV dynamics are nonlinear with input constraints, which is the main challenge explored in this thesis. The control objective of the NMPC is to track a desired line, and an analysis of the designed NMPC's stability follows to find the conditions that assure stability. The control objective is then extended to track adjoined multiple line segments with obstacle avoidance capability. In simulation, the performance of the NMPC is superb, with fast convergence and small overshoot. The computation time is not a burden for a fixed-wing UAV controller with a Pentium-level on-board computer, which provides a reasonable control update rate.
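
    The Bayes-rule grid update reduces, in its standard log-odds form, to a few lines; a minimal sketch (the thesis feeds it IMM-filtered detections rather than raw hits, and the inverse sensor model values here are illustrative):

    ```python
    import numpy as np

    class OccupancyGrid:
        """Log-odds Bayes update of a probabilistic occupancy grid."""

        def __init__(self, shape, l_occ=0.85, l_free=-0.4):
            self.L = np.zeros(shape)               # log-odds; 0 means p = 0.5
            self.l_occ, self.l_free = l_occ, l_free

        def update(self, cell, hit):
            """Add the inverse-sensor-model evidence for one cell."""
            self.L[cell] += self.l_occ if hit else self.l_free

        def probability(self):
            """Convert log-odds back to occupancy probabilities."""
            return 1.0 / (1.0 + np.exp(-self.L))

    grid = OccupancyGrid((100, 100))
    grid.update((10, 20), hit=True)                # detection at cell (10, 20)
    ```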

  15. Algorithmic Mechanism Design of Evolutionary Computation.

    PubMed

    Pei, Yan

    2015-01-01

    We consider algorithmic design, enhancement, and improvement of evolutionary computation as a mechanism design problem. All individuals or several groups of individuals can be considered as self-interested agents. The individuals in evolutionary computation can manipulate parameter settings and operations by satisfying their own preferences, which are defined by an evolutionary computation algorithm designer, rather than by following a fixed algorithm rule. Evolutionary computation algorithm designers or self-adaptive methods should construct proper rules and mechanisms for all agents (individuals) to conduct their evolution behaviour correctly in order to definitely achieve the desired and preset objective(s). As a case study, we propose a formal framework on parameter setting, strategy selection, and algorithmic design of evolutionary computation by considering the Nash strategy equilibrium of a mechanism design in the search process. The evaluation results present the efficiency of the framework. This primary principle can be implemented in any evolutionary computation algorithm that needs to consider strategy selection issues in its optimization process. The final objective of our work is to solve evolutionary computation design as an algorithmic mechanism design problem and establish its fundamental aspect by taking this perspective. This paper is the first step towards achieving this objective by implementing a strategy equilibrium solution (such as Nash equilibrium) in evolutionary computation algorithm.

  16. Algorithmic Mechanism Design of Evolutionary Computation

    PubMed Central

    2015-01-01

    We consider algorithmic design, enhancement, and improvement of evolutionary computation as a mechanism design problem. All individuals or several groups of individuals can be considered as self-interested agents. The individuals in evolutionary computation can manipulate parameter settings and operations by satisfying their own preferences, which are defined by an evolutionary computation algorithm designer, rather than by following a fixed algorithm rule. Evolutionary computation algorithm designers or self-adaptive methods should construct proper rules and mechanisms for all agents (individuals) to conduct their evolution behaviour correctly in order to definitely achieve the desired and preset objective(s). As a case study, we propose a formal framework on parameter setting, strategy selection, and algorithmic design of evolutionary computation by considering the Nash strategy equilibrium of a mechanism design in the search process. The evaluation results present the efficiency of the framework. This primary principle can be implemented in any evolutionary computation algorithm that needs to consider strategy selection issues in its optimization process. The final objective of our work is to solve evolutionary computation design as an algorithmic mechanism design problem and establish its fundamental aspect by taking this perspective. This paper is the first step towards achieving this objective by implementing a strategy equilibrium solution (such as Nash equilibrium) in evolutionary computation algorithm. PMID:26257777

  17. Parallel conjugate gradient algorithms for manipulator dynamic simulation

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Scheld, Robert E.

    1989-01-01

    Parallel conjugate gradient algorithms for the computation of multibody dynamics are developed for the specialized case of a robot manipulator. For an n-dimensional positive-definite linear system, the Classical Conjugate Gradient (CCG) algorithm is guaranteed to converge in n iterations, each with a computation cost of O(n); this leads to a total computational cost of O(n^2) on a serial processor. Conjugate gradient algorithms are presented that provide greater efficiency by using a preconditioner, which reduces the number of iterations required, and by exploiting parallelism, which reduces the cost of each iteration. Two Preconditioned Conjugate Gradient (PCG) algorithms are proposed which respectively use a diagonal and a tridiagonal matrix, composed of the diagonal and tridiagonal elements of the mass matrix, as preconditioners. Parallel algorithms are developed to compute the preconditioners and their inversions in O(log_2 n) steps using n processors. A parallel algorithm is also presented which, on the same architecture, achieves a computational time of O(log_2 n) for each iteration. Simulation results for a seven-degree-of-freedom manipulator are presented. Variants of the proposed algorithms are also developed which can be efficiently implemented on the Robot Mathematics Processor (RMP).
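
    A minimal serial sketch of the first PCG variant (diagonal, i.e. Jacobi, preconditioner); the paper's contribution of computing and applying such preconditioners in O(log_2 n) parallel steps is not reproduced here:

    ```python
    import numpy as np

    def pcg(A, b, M_diag, tol=1e-8, max_iter=200):
        """Jacobi-preconditioned conjugate gradient for SPD A, with
        M_diag holding the diagonal preconditioner entries."""
        x = np.zeros_like(b, dtype=float)
        r = b - A @ x
        z = r / M_diag                    # apply M^-1 (diagonal inverse)
        p = z.copy()
        rz = r @ z
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rz / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) < tol:
                break
            z = r / M_diag
            rz_new = r @ z
            p = z + (rz_new / rz) * p     # new search direction
            rz = rz_new
        return x
    ```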

  18. Shor's factoring algorithm and modern cryptography. An illustration of the capabilities inherent in quantum computers

    NASA Astrophysics Data System (ADS)

    Gerjuoy, Edward

    2005-06-01

    The security of messages encoded via the widely used RSA public key encryption system rests on the enormous computational effort required to find the prime factors of a large number N using classical (conventional) computers. In 1994 Peter Shor showed that for sufficiently large N, a quantum computer could perform the factoring with much less computational effort. This paper endeavors to explain, in a fashion comprehensible to the nonexpert, the RSA encryption protocol; the various quantum computer manipulations constituting the Shor algorithm; how the Shor algorithm performs the factoring; and the precise sense in which a quantum computer employing Shor's algorithm can be said to accomplish the factoring of very large numbers with less computational effort than a classical computer. It is made apparent that factoring N generally requires many successive runs of the algorithm. Our analysis reveals that the probability of achieving a successful factorization on a single run is about twice as large as commonly quoted in the literature.
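
    The classical post-processing half of the algorithm, and the reason several runs are generally needed, fits in a few lines; a minimal sketch:

    ```python
    from math import gcd

    def factor_from_period(N, a, r):
        """Classical post-processing of Shor's algorithm: given the
        period r of a^x mod N (the quantity the quantum computer finds),
        recover nontrivial factors. Fails when r is odd or when
        a^(r/2) = -1 (mod N), which is why repeated runs are needed."""
        if r % 2:
            return None
        y = pow(a, r // 2, N)
        if y == N - 1:
            return None
        f1, f2 = gcd(y - 1, N), gcd(y + 1, N)
        return (f1, f2) if 1 < f1 < N else None

    # Toy example with N = 15, a = 7: the period of 7^x mod 15 is 4
    print(factor_from_period(15, 7, 4))   # (3, 5)
    ```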

  19. VMEbus based computer and real-time UNIX as infrastructure of DAQ

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yasu, Y.; Fujii, H.; Nomachi, M.

    1994-12-31

    This paper describes what the authors have constructed as the infrastructure of a data acquisition system (DAQ). The paper reports recent developments concerning an HP VME board computer with LynxOS (HP742rt/HP-RT) and Alpha/OSF1 with a VMEbus adapter. The paper also reports the current status of development of a Benchmark Suite for Data Acquisition (DAQBENCH) for measuring not only the performance of VME/CAMAC access but also that of context switching, inter-process communications, and so on, for various computers including workstation-based systems and VME board computers.

  20. Development of Labview based data acquisition and multichannel analyzer software for radioactive particle tracking system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rahman, Nur Aira Abd, E-mail: nur-aira@nuclearmalaysia.gov.my; Yussup, Nolida; Ibrahim, Maslina Bt. Mohd

    2015-04-29

    A DAQ (data acquisition) software package called RPTv2.0 has been developed for the Radioactive Particle Tracking System at the Malaysian Nuclear Agency. RPTv2.0 features a scanning control GUI, data acquisition from a 12-channel counter via an RS-232 interface, and a multichannel analyzer (MCA). The software is fully developed on the National Instruments Labview 8.6 platform. A Ludlum Model 4612 Counter is used to count the signals from the scintillation detectors while a host computer is used to send control parameters, acquire and display data, and compute results. Each detector channel has independent high-voltage control, threshold or sensitivity value, and window settings. The counter is configured with a host board and twelve slave boards. The host board collects the counts from each slave board and communicates with the computer via the RS-232 data interface.

  1. Computer methods for sampling from the gamma distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, M.E.; Tadikamalla, P.R.

    1978-01-01

    Considerable attention has recently been directed at developing ever faster algorithms for generating gamma random variates on digital computers. This paper surveys the current state of the art including the leading algorithms of Ahrens and Dieter, Atkinson, Cheng, Fishman, Marsaglia, Tadikamalla, and Wallace. General random variate generation techniques are explained with reference to these gamma algorithms. Computer simulation experiments on IBM and CDC computers are reported.
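
    For integer ("Erlang") shape the textbook generator is nearly a one-liner, and its O(k) cost per variate is precisely what the surveyed algorithms improve on for large or non-integer shape; a minimal sketch:

    ```python
    import numpy as np

    def erlang_sample(k, theta, rng):
        """Gamma(k, theta) variate for integer shape k as the sum of k
        exponentials, i.e. -theta * log of a product of k uniforms."""
        return -theta * np.log(rng.random(k).prod())

    rng = np.random.default_rng(0)
    samples = np.array([erlang_sample(3, 2.0, rng) for _ in range(10000)])
    print(samples.mean())   # should be near k * theta = 6
    ```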

  2. Accuracy and speed in computing the Chebyshev collocation derivative

    NASA Technical Reports Server (NTRS)

    Don, Wai-Sun; Solomonoff, Alex

    1991-01-01

    We studied several algorithms for computing the Chebyshev spectral derivative and compared their roundoff error. For a large number of collocation points, the elements of the Chebyshev differentiation matrix, if constructed in the usual way, are not computed accurately. A subtle cause is found to account for the poor accuracy when computing the derivative by the matrix-vector multiplication method. Methods for accurately computing the elements of the matrix are presented, and we find that if the entries of the matrix are computed accurately, the roundoff error of the matrix-vector multiplication is as small as that of the transform-recursion algorithm. Results of CPU time usage are shown for several different algorithms for computing the derivative by the Chebyshev collocation method for a wide variety of two-dimensional grid sizes on both an IBM and a Cray 2 computer. We found that which algorithm is fastest on a particular machine depends not only on the grid size, but also on small details of the computer hardware. For most practical grid sizes used in computation, the even-odd decomposition algorithm is found to be faster than the transform-recursion method.
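
    One standard remedy from this literature is the "negative sum trick" for the matrix diagonal (whether it is this paper's exact method is not established here); a minimal sketch of the differentiation matrix built that way:

    ```python
    import numpy as np

    def cheb_diff_matrix(N):
        """Chebyshev collocation differentiation matrix on the N+1
        Gauss-Lobatto points, with the diagonal set to minus the sum of
        the off-diagonal row entries so each row annihilates constants."""
        x = np.cos(np.pi * np.arange(N + 1) / N)         # collocation points
        c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
        X = np.tile(x, (N + 1, 1)).T
        dX = X - X.T
        D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))  # off-diagonal entries
        np.fill_diagonal(D, 0.0)
        np.fill_diagonal(D, -D.sum(axis=1))              # negative sum trick
        return D, x

    # Sanity check: differentiate sin(x) on [-1, 1]
    D, x = cheb_diff_matrix(32)
    print(np.max(np.abs(D @ np.sin(x) - np.cos(x))))     # ~1e-13
    ```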

  3. Algorithm for space-time analysis of data on geomagnetic field

    NASA Technical Reports Server (NTRS)

    Kulanin, N. V.; Golokov, V. P. (Editor); Tyupkin, S. (Editor)

    1984-01-01

    The algorithm for the execution of the space-time analysis of data on geomagnetic fields is described. The primary constraints figuring in the specific realization of the algorithm on a computer stem exclusively from the limited possibilities of the computer involved. It is realized in the form of a program for the BESM-6 computer.

  4. Algorithmic complexity of quantum capacity

    NASA Astrophysics Data System (ADS)

    Oskouei, Samad Khabbazi; Mancini, Stefano

    2018-04-01

    We analyze the notion of quantum capacity from the perspective of algorithmic (descriptive) complexity. To this end, we resort to the concept of semi-computability in order to describe quantum states and quantum channel maps. We introduce algorithmic entropies (like algorithmic quantum coherent information) and derive relevant properties for them. Then we show that quantum capacity based on semi-computable concept equals the entropy rate of algorithmic coherent information, which in turn equals the standard quantum capacity. Thanks to this, we finally prove that the quantum capacity, for a given semi-computable channel, is limit computable.

  5. A review of classification algorithms for EEG-based brain-computer interfaces.

    PubMed

    Lotte, F; Congedo, M; Lécuyer, A; Lamarche, F; Arnaldi, B

    2007-06-01

    In this paper we review classification algorithms used to design brain-computer interface (BCI) systems based on electroencephalography (EEG). We briefly present the commonly employed algorithms and describe their critical properties. Based on the literature, we compare them in terms of performance and provide guidelines to choose the suitable classification algorithm(s) for a specific BCI.

  6. Distributed computation of graphics primitives on a transputer network

    NASA Technical Reports Server (NTRS)

    Ellis, Graham K.

    1988-01-01

    A method is developed for distributing the computation of graphics primitives on a parallel processing network. Off-the-shelf transputer boards are used to perform the graphics transformations and scan-conversion tasks that would normally be assigned to a single transputer based display processor. Each node in the network performs a single graphics primitive computation. Frequently requested tasks can be duplicated on several nodes. The results indicate that the current distribution of commands on the graphics network shows a performance degradation when compared to the graphics display board alone. A change to more computation per node for every communication (perform more complex tasks on each node) may cause the desired increase in throughput.

  7. PERSONAL COMPUTER MONITORS: A SCREENING EVALUATION OF VOLATILE ORGANIC EMISSIONS FROM EXISTING PRINTED CIRCUIT BOARD LAMINATES AND POTENTIAL POLLUTION PREVENTION ALTERNATIVES

    EPA Science Inventory

    The report gives results of a screening evaluation of volatile organic emissions from printed circuit board laminates and potential pollution prevention alternatives. In the evaluation, printed circuit board laminates, without circuitry, commonly found in personal computer (PC) m...

  8. Evaluation of forearm support provided by the Workplace Board on perceived tension, comfort and productivity in pregnant and non-pregnant computer users.

    PubMed

    Slot, Tegan; Charpentier, Karine; Dumas, Geneviève; Delisle, Alain; Leger, Andy; Plamondon, André

    2009-01-01

    The aim of the study was to evaluate the effect of forearm support provided by the Workplace Board on perceived tension, comfort and productivity among pregnant and non-pregnant female computer workers. Ten pregnant and 18 non-pregnant women participated in the study. Participants completed three sets of tension/discomfort questionnaires at two week intervals. The first set was completed prior to any workstation intervention; the second set was completed after two weeks working with an ergonomically adjusted workstation; the third set was completed after two weeks working with the Workplace Board integrated into the office workstation. With the Workplace Board, decreased perceived tension was reported in the left shoulder, wrist and low back in non-pregnant women only. The Board was generally liked by all participants, and increased comfort and productivity in all areas, with the exception of a negative effect on productivity of general office tasks. The board is suitable for integration in most office workstations and for most users, but has no special benefits for pregnant women.

  9. Mental Computation or Standard Algorithm? Children's Strategy Choices on Multi-Digit Subtractions

    ERIC Educational Resources Information Center

    Torbeyns, Joke; Verschaffel, Lieven

    2016-01-01

    This study analyzed children's use of mental computation strategies and the standard algorithm on multi-digit subtractions. Fifty-eight Flemish 4th graders of varying mathematical achievement level were individually offered subtractions that either stimulated the use of mental computation strategies or the standard algorithm in one choice and two…

  10. Rapid prototyping of an EEG-based brain-computer interface (BCI).

    PubMed

    Guger, C; Schlögl, A; Neuper, C; Walterspacher, D; Strein, T; Pfurtscheller, G

    2001-03-01

    The electroencephalogram (EEG) is modified by motor imagery and can be used by patients with severe motor impairments (e.g., late stage of amyotrophic lateral sclerosis) to communicate with their environment. Such a direct connection between the brain and the computer is known as an EEG-based brain-computer interface (BCI). This paper describes a new type of BCI system that uses rapid prototyping to enable a fast transition of various types of parameter estimation and classification algorithms to real-time implementation and testing. Rapid prototyping is possible by using Matlab, Simulink, and the Real-Time Workshop. It is shown how to automate real-time experiments and perform the interplay between on-line experiments and offline analysis. The system is able to process multiple EEG channels on-line and operates under Windows 95 in real-time on a standard PC without an additional digital signal processor (DSP) board. The BCI can be controlled over the Internet, LAN or modem. This BCI was tested on 3 subjects whose task it was to imagine either left or right hand movement. A classification accuracy between 70% and 95% could be achieved with two EEG channels after some sessions with feedback using an adaptive autoregressive (AAR) model and linear discriminant analysis (LDA).
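
    The abstract names an adaptive autoregressive (AAR) model feeding linear discriminant analysis; as a rough illustration of that pipeline (not the authors' implementation; the model order and step size below are invented values), this sketch tracks AR coefficients with an LMS-style update and builds Fisher LDA weights from two labeled feature sets.

    import numpy as np

    def aar_features(signal, order=6, mu=0.005):
        """Per-sample adaptively estimated AR coefficients (LMS update)."""
        a = np.zeros(order)
        feats = []
        for t in range(order, len(signal)):
            past = signal[t - order:t][::-1]   # most recent sample first
            err = signal[t] - a @ past         # one-step prediction error
            a = a + mu * err * past            # LMS coefficient update
            feats.append(a.copy())
        return np.asarray(feats)

    def lda_weights(X0, X1):
        """Fisher discriminant: w = pooled_scatter^-1 (mean1 - mean0)."""
        Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
        return np.linalg.solve(Sw, X1.mean(axis=0) - X0.mean(axis=0))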

  11. Microscope self-calibration based on micro laser line imaging and soft computing algorithms

    NASA Astrophysics Data System (ADS)

    Apolinar Muñoz Rodríguez, J.

    2018-06-01

    A technique to perform microscope self-calibration via micro laser line and soft computing algorithms is presented. In this technique, the microscope vision parameters are computed by means of soft computing algorithms based on laser line projection. To implement the self-calibration, a microscope vision system is constructed by means of a CCD camera and a 38 μm laser line. From this arrangement, the microscope vision parameters are represented via Bezier approximation networks, which are accomplished through the laser line position. In this procedure, a genetic algorithm determines the microscope vision parameters by means of laser line imaging. Also, the approximation networks compute the three-dimensional vision by means of the laser line position. Additionally, the soft computing algorithms re-calibrate the vision parameters when the microscope vision system is modified during the vision task. The proposed self-calibration improves accuracy of the traditional microscope calibration, which is accomplished via external references to the microscope system. The capability of the self-calibration based on soft computing algorithms is determined by means of the calibration accuracy and the micro-scale measurement error. This contribution is corroborated by an evaluation based on the accuracy of the traditional microscope calibration.

  12. Implementation of a cone-beam backprojection algorithm on the cell broadband engine processor

    NASA Astrophysics Data System (ADS)

    Bockenbach, Olivier; Knaup, Michael; Kachelrieß, Marc

    2007-03-01

    Tomographic image reconstruction is computationally very demanding. In all cases the backprojection represents the performance bottleneck due to the high operation count and the high demand placed on the memory subsystem. In the past, solving this problem has led to the implementation of specific architectures, connecting Application Specific Integrated Circuits (ASICs) or Field Programmable Gate Arrays (FPGAs) to memory through dedicated high-speed busses. More recently, there have also been attempts to use Graphics Processing Units (GPUs) to perform the backprojection step. Originally aimed at the gaming market, the Cell Broadband Engine (CBE) processor, introduced by IBM, Toshiba and Sony, is often considered a multicomputer on a chip. Clocked at 3 GHz, the Cell allows for a theoretical performance of 192 GFlops and a peak data transfer rate over the internal bus of 200 GB/s. This performance indeed makes the Cell a very attractive architecture for implementing tomographic image reconstruction algorithms. In this study, we investigate the relative performance of a perspective backprojection algorithm when implemented on a standard PC and on the Cell processor. We compare these results to the performance achievable with FPGA-based boards and high-end GPUs. The cone-beam backprojection performance was assessed by backprojecting a full circle scan of 512 projections of 1024x1024 pixels into a volume of size 512x512x512 voxels. This took 3.2 minutes on the PC (single CPU) and 13.6 seconds on the Cell.

  13. Optimized design of embedded DSP system hardware supporting complex algorithms

    NASA Astrophysics Data System (ADS)

    Li, Yanhua; Wang, Xiangjun; Zhou, Xinling

    2003-09-01

    The paper presents an optimized design method for a flexible and economical embedded DSP system that can implement complex processing algorithms such as biometric recognition, real-time image processing, etc. It consists of a floating-point DSP, 512 Kbytes of data RAM, 1 Mbyte of FLASH program memory, a CPLD for flexible logic control of the input channel, and an RS-485 transceiver for local network communication. Because the design employs a DSP with a high performance-price ratio, the TMS320C6712, and a large FLASH, the system permits loading and performing complex algorithms with little algorithm optimization or code reduction. The CPLD provides flexible logic control for the whole DSP board, especially the input channel, and allows convenient interfacing between different sensors and the DSP system. The transceiver circuit transfers data between the DSP and a host computer. The paper also introduces some key technologies that make the whole system work efficiently. Owing to the characteristics described above, the hardware is an excellent platform for multi-channel data collection, image processing, and other signal processing with high performance and adaptability. The application section of the paper presents how this hardware is adapted to a biometric identification system with high identification precision. The results reveal that the hardware interfaces easily with a CMOS imager and is capable of carrying out complex biometric identification algorithms that require real-time processing.

  14. Algorithm for Determination of Orion Ascent Abort Mode Achievability

    NASA Technical Reports Server (NTRS)

    Tedesco, Mark B.

    2011-01-01

    For human spaceflight missions, a launch vehicle failure poses the challenge of returning the crew safely to earth through environments that are often much more stressful than the nominal mission. Manned spaceflight vehicles require continuous abort capability throughout the ascent trajectory to protect the crew in the event of a failure of the launch vehicle. To provide continuous abort coverage during the ascent trajectory, different types of Orion abort modes have been developed. If a launch vehicle failure occurs, the crew must be able to quickly and accurately determine the appropriate abort mode to execute. Early in the ascent, while the Launch Abort System (LAS) is attached, abort mode selection is trivial, and any failures will result in a LAS abort. For failures after LAS jettison, the Service Module (SM) effectors are employed to perform abort maneuvers. Several different SM abort mode options are available depending on the current vehicle location and energy state. During this region of flight the selection of the abort mode that maximizes the survivability of the crew becomes non-trivial. To provide the most accurate and timely information to the crew and the onboard abort decision logic, on-board algorithms have been developed to propagate the abort trajectories based on the current launch vehicle performance and to predict the current abort capability of the Orion vehicle. This paper will provide an overview of the algorithm architecture for determining abort achievability as well as the scalar integration scheme that makes the onboard computation possible. Extension of the algorithm to assessing abort coverage impacts from Orion design modifications and launch vehicle trajectory modifications is also presented.

  15. 2007 Beyond SBIR Phase II: Bringing Technology Edge to the Warfighter

    DTIC Science & Technology

    2007-08-23

    Systems trade-off analysis and optimization; verification and validation; safety and reliability analysis of flight and mission critical systems; on-board diagnostics and self-healing; model-based monitoring and self-healing; autonomic computing; network intrusion detection and prevention; security, anti-tampering and trust; rapid...verification

  16. Compressed sensing of hyperspectral images based on scrambled block Hadamard ensemble

    NASA Astrophysics Data System (ADS)

    Wang, Li; Feng, Yan

    2016-11-01

    A fast measurement matrix based on the scrambled block Hadamard ensemble for compressed sensing (CS) of hyperspectral images (HSI) is investigated. The proposed measurement matrix offers several attractive features. First, it exhibits Gaussian behavior, which indicates that the matrix is universal and requires a near-optimal number of samples for exact reconstruction. In addition, it can be easily implemented in the optical domain due to its integer-valued elements. More importantly, the measurement matrix needs only a small amount of memory for storage during the sampling process. Experimental results on HSIs reveal that the reconstruction performance of the proposed measurement matrix is comparable to or better than that of the Gaussian and Bernoulli matrices under different reconstruction algorithms, while consuming less computational time. The proposed matrix could be used in CS of HSI to save storage memory on board, improve sampling efficiency, and improve reconstruction quality.
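
    A minimal sketch of how a scrambled block Hadamard measurement can be applied (the block length, measurement count and sizes below are illustrative assumptions, not the paper's parameters): permute the samples, run a small orthonormal Hadamard transform block-wise, and keep a random subset of the outputs.

    import numpy as np
    from scipy.linalg import hadamard

    def sbhe_measure(x, B=32, M=256, seed=0):
        """Compressive measurements via a scrambled block Hadamard ensemble."""
        rng = np.random.default_rng(seed)
        n = x.size
        assert n % B == 0
        H = hadamard(B) / np.sqrt(B)                 # orthonormal block
        xs = x[rng.permutation(n)]                   # scramble sample order
        y_full = (xs.reshape(-1, B) @ H.T).ravel()   # block-wise transform
        return y_full[rng.choice(n, size=M, replace=False)]

    x = np.random.default_rng(1).standard_normal(1024)
    print(sbhe_measure(x).shape)                     # (256,) measurements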

  17. Autonomous Evolution of Dynamic Gaits with Two Quadruped Robots

    NASA Technical Reports Server (NTRS)

    Hornby, Gregory S.; Takamura, Seichi; Yamamoto, Takashi; Fujita, Masahiro

    2004-01-01

    A challenging task that must be accomplished for every legged robot is creating the walking and running behaviors needed for it to move. In this paper we describe our system for autonomously evolving dynamic gaits on two of Sony's quadruped robots. Our evolutionary algorithm runs on board the robot and uses the robot's sensors to compute the quality of a gait without assistance from the experimenter. First we show the evolution of a pace and trot gait on the OPEN-R prototype robot. With the fastest gait, the robot moves at over 10 m/min, which is more than forty body-lengths/min. While these first gaits are somewhat sensitive to the robot and environment in which they are evolved, we then show the evolution of robust dynamic gaits, one of which is used on the ERS-110, the first consumer version of AIBO.

  18. QCCM Center for Quantum Algorithms

    DTIC Science & Technology

    2008-10-17

    Quantum algorithms (e.g., quantum walks and adiabatic computing), as well as theoretical advances relating algorithms to physical implementations (e.g...). Subject terms: quantum algorithms, quantum computing, fault-tolerant error correction. Representative publication: A. Ambainis, M. Beaudry, M. Golovkins, A. Kikusts, M. Mercer, D. Thérien, "Algebraic results on quantum automata," Theory of Computing Systems 39 (2006).

  19. Autonomous mobile robot for radiologic surveys

    DOEpatents

    Dudar, A.M.; Wagner, D.G.; Teese, G.D.

    1994-06-28

    An apparatus is described for conducting radiologic surveys. The apparatus comprises in the main a robot capable of following a preprogrammed path through an area, a radiation monitor adapted to receive input from a radiation detector assembly, ultrasonic transducers for navigation and collision avoidance, and an on-board computer system including an integrator for interfacing the radiation monitor and the robot. Front and rear bumpers are attached to the robot by bumper mounts. The robot may be equipped with memory boards for the collection and storage of radiation survey information. The on-board computer system is connected to a remote host computer via a UHF radio link. The apparatus is powered by a rechargeable 24-volt DC battery, and is stored at a docking station when not in use and/or for recharging. A remote host computer contains a stored database defining paths between points in the area where the robot is to operate, including but not limited to the locations of walls, doors, stationary furniture and equipment, and sonic markers if used. When a program consisting of a series of paths is downloaded to the on-board computer system, the robot conducts a floor survey autonomously at any preselected rate. When the radiation monitor detects contamination, the robot resurveys the area at reduced speed and resumes its preprogrammed path if the contamination is not confirmed. If the contamination is confirmed, the robot stops and sounds an alarm. 5 figures.

  20. Autonomous mobile robot for radiologic surveys

    DOEpatents

    Dudar, Aed M.; Wagner, David G.; Teese, Gregory D.

    1994-01-01

    An apparatus for conducting radiologic surveys. The apparatus comprises in the main a robot capable of following a preprogrammed path through an area, a radiation monitor adapted to receive input from a radiation detector assembly, ultrasonic transducers for navigation and collision avoidance, and an on-board computer system including an integrator for interfacing the radiation monitor and the robot. Front and rear bumpers are attached to the robot by bumper mounts. The robot may be equipped with memory boards for the collection and storage of radiation survey information. The on-board computer system is connected to a remote host computer via a UHF radio link. The apparatus is powered by a rechargeable 24-volt DC battery, and is stored at a docking station when not in use and/or for recharging. A remote host computer contains a stored database defining paths between points in the area where the robot is to operate, including but not limited to the locations of walls, doors, stationary furniture and equipment, and sonic markers if used. When a program consisting of a series of paths is downloaded to the on-board computer system, the robot conducts a floor survey autonomously at any preselected rate. When the radiation monitor detects contamination, the robot resurveys the area at reduced speed and resumes its preprogrammed path if the contamination is not confirmed. If the contamination is confirmed, the robot stops and sounds an alarm.

  1. Automatic maintenance payload on board of a Mexican LEO microsatellite

    NASA Astrophysics Data System (ADS)

    Vicente-Vivas, Esaú; García-Nocetti, Fabián; Mendieta-Jiménez, Francisco

    2006-02-01

    A few research institutions in Mexico are working together to finalize the integration of a technological demonstration microsatellite called Satex, aiming at the launch of the first fully domestically designed and manufactured space vehicle. The project is based on technical knowledge gained in previous space experiences, particularly in developing GASCAN automatic experiments for NASA's space shuttle, and on support from the local team which assembled the México-OSCAR-30 microsatellites. Satex includes three autonomous payloads and a power subsystem, each with a local microcomputer to provide intelligent and dedicated control. It also contains a flight computer (FC) with a pair of full redundancies, which enables remote maintenance of processing boards from the ground station. A fourth, communications payload depends on the flight computer for control purposes. A fifth payload was subsequently developed for the satellite. It adds value to the available on-board computers and extends the opportunity for a developing country to learn and to generate domestic space technology. Its aim is to provide automatic maintenance capabilities for the most critical on-board computer in order to achieve continuous satellite operations. This paper presents the virtual computer architecture specially developed to provide maintenance capabilities to the flight computer. The architecture is periodically implemented in software with a small number of physical processors (FC processors) and virtual redundancies (payload processors) to emulate a hybrid-redundancy computer. Communications among processors are accomplished over a fault-tolerant LAN, which allows versatile operating behavior in terms of data communication as well as distributed fault tolerance. Obtained results, payload validation and reliability results are also presented.

  2. Direct volumetric rendering based on point primitives in OpenGL.

    PubMed

    da Rosa, André Luiz Miranda; de Almeida Souza, Ilana; Yuuji Hira, Adilson; Zuffo, Marcelo Knörich

    2006-01-01

    The aim of this project is to present a software rendering algorithm for acquired volumetric data. The algorithm was implemented in the Java language using the LWJGL graphics library, allowing the volume to be rendered in software and thus avoiding the need for dedicated graphics boards for 3D reconstruction. The algorithm creates an OpenGL model from point primitives, where each voxel becomes a point with the color values of the corresponding pixel position in the source images.

  3. Performance Evaluation of Multichannel Adaptive Algorithms for Local Active Noise Control

    NASA Astrophysics Data System (ADS)

    DE DIEGO, M.; GONZALEZ, A.

    2001-07-01

    This paper deals with the development of a multichannel active noise control (ANC) system inside an enclosed space. The purpose is to design a practical system which works well in local ANC applications. The algorithm implemented in the adaptive controller should be robust and of low computational complexity, and it should generate a uniform, useful-size zone of quiet that allows head motion of a person seated inside a car. Experiments were carried out under semi-anechoic and listening room conditions to verify the successful implementation of the multichannel system. The developed prototype consists of an array of up to four microphones used as error sensors mounted on the headrest of a seat placed inside the enclosure. One loudspeaker was used as the single primary source and two secondary sources were placed facing the seat. The aim of this multichannel system is to reduce the sound pressure levels in an area around the error sensors, following a local control strategy. With this technique, the cancellation points are not only the error sensor positions but an area around them, which is measured using a monitoring microphone. Different multichannel adaptive algorithms for ANC have been analyzed and their performance verified. Multiple-error algorithms are used in order to cancel out different types of primary noise (engine noise and random noise) with several configurations (up to a four-channel system). As alternatives to the multiple error LMS algorithm (the multichannel version of the filtered-X LMS algorithm, MELMS), the least maximum mean squares (LMMS) and the scanning error-LMS algorithms have been developed in this work in order to reduce computational complexity and achieve a more uniform residual field. The ANC algorithms were programmed on a digital signal processing board equipped with a TMS320C40 floating-point DSP processor. Measurements concerning real-time experiments on local noise reduction in two environments and at frequencies below 230 Hz are presented. Better noise attenuation is obtained in the semi-anechoic chamber due to the simplicity of the acoustic field. The size of the zone of quiet makes the system useful at relatively low frequencies and is large enough to cover a listener's head movements. The spatial extent of the zones of quiet is generally observed to increase as the error sensors are moved away from the secondary source, are put closer together, or increase in number. In summary, the performance of the different algorithms and the viability of the multichannel system for local active noise control in real listening conditions are evaluated, and some guidelines for designing such systems are proposed.
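
    The single-channel filtered-x LMS update that MELMS and its variants generalize can be sketched in a few lines (the filter length, step size and secondary-path model are illustrative assumptions, not the TMS320C40 implementation):

    import numpy as np

    def fxlms(ref, primary_at_error, s_hat, L=64, mu=1e-3):
        """Adapt a control filter so the secondary source cancels the
        primary noise observed at the error microphone (FxLMS)."""
        w = np.zeros(L)                     # control filter
        xbuf = np.zeros(L)                  # reference history
        fxbuf = np.zeros(L)                 # filtered-reference history
        ybuf = np.zeros(len(s_hat))         # control-output history
        sbuf = np.zeros(len(s_hat))         # reference history for s_hat
        e = np.zeros(len(ref))
        for n in range(len(ref)):
            xbuf = np.roll(xbuf, 1); xbuf[0] = ref[n]
            ybuf = np.roll(ybuf, 1); ybuf[0] = w @ xbuf
            e[n] = primary_at_error[n] + s_hat @ ybuf   # residual at mic
            sbuf = np.roll(sbuf, 1); sbuf[0] = ref[n]
            fxbuf = np.roll(fxbuf, 1); fxbuf[0] = s_hat @ sbuf
            w -= mu * e[n] * fxbuf          # LMS step on filtered reference
        return e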

  4. Massively parallel algorithms for real-time wavefront control of a dense adaptive optics system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fijany, A.; Milman, M.; Redding, D.

    1994-12-31

    In this paper massively parallel algorithms and architectures for real-time wavefront control of a dense adaptive optics system (SELENE) are presented. The authors have already shown that the computation of a near optimal control algorithm for SELENE can be reduced to the solution of a discrete Poisson equation on a regular domain. Although this represents an optimal computation, due to the large size of the system and the high sampling rate requirement, the implementation of this control algorithm poses a computationally challenging problem since it demands a sustained computational throughput of the order of 10 GFlops. They develop a novel algorithm, designated the Fast Invariant Imbedding algorithm, which offers a massive degree of parallelism with simple communication and synchronization requirements. Due to these features, this algorithm is significantly more efficient than other Fast Poisson Solvers for implementation on massively parallel architectures. The authors also discuss two massively parallel, algorithmically specialized, architectures for low-cost and optimal implementation of the Fast Invariant Imbedding algorithm.
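
    The Fast Invariant Imbedding algorithm itself is not spelled out in the abstract; for context, the target problem (a discrete Poisson equation on a regular domain) admits standard fast solvers such as the sine-transform method sketched below.

    import numpy as np
    from scipy.fft import dstn, idstn

    def poisson_solve(f, h):
        """Solve -Laplace(u) = f (5-point stencil, Dirichlet BCs) via DST-I."""
        n = f.shape[0]
        k = np.arange(1, n + 1)
        lam = (2.0 - 2.0 * np.cos(np.pi * k / (n + 1))) / h**2
        uhat = dstn(f, type=1) / (lam[:, None] + lam[None, :])
        return idstn(uhat, type=1)

    n = 63; h = 1.0 / (n + 1)
    g = np.linspace(h, 1 - h, n)
    X, Y = np.meshgrid(g, g, indexing="ij")
    u = np.sin(np.pi * X) * np.sin(2 * np.pi * Y)
    f = 5 * np.pi**2 * u                       # -Laplace of u
    print(abs(poisson_solve(f, h) - u).max())  # small, O(h^2)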

  5. Development of a Laparoscopic Box Trainer Based on Open Source Hardware and Artificial Intelligence for Objective Assessment of Surgical Psychomotor Skills.

    PubMed

    Alonso-Silverio, Gustavo A; Pérez-Escamirosa, Fernando; Bruno-Sanchez, Raúl; Ortiz-Simon, José L; Muñoz-Guerrero, Roberto; Minor-Martinez, Arturo; Alarcón-Paredes, Antonio

    2018-05-01

    A trainer for online laparoscopic surgical skills assessment based on the performance of experts and nonexperts is presented. The system uses computer vision, augmented reality, and artificial intelligence algorithms, implemented into a Raspberry Pi board with Python programming language. Two training tasks were evaluated by the laparoscopic system: transferring and pattern cutting. Computer vision libraries were used to obtain the number of transferred points and simulated pattern cutting trace by means of tracking of the laparoscopic instrument. An artificial neural network (ANN) was trained to learn from experts and nonexperts' behavior for pattern cutting task, whereas the assessment of transferring task was performed using a preestablished threshold. Four expert surgeons in laparoscopic surgery, from hospital "Raymundo Abarca Alarcón," constituted the experienced class for the ANN. Sixteen trainees (10 medical students and 6 residents) without laparoscopic surgical skills and limited experience in minimal invasive techniques from School of Medicine at Universidad Autónoma de Guerrero constituted the nonexperienced class. Data from participants performing 5 daily repetitions for each task during 5 days were used to build the ANN. The participants tend to improve their learning curve and dexterity with this laparoscopic training system. The classifier shows mean accuracy and receiver operating characteristic curve of 90.98% and 0.93, respectively. Moreover, the ANN was able to evaluate the psychomotor skills of users into 2 classes: experienced or nonexperienced. We constructed and evaluated an affordable laparoscopic trainer system using computer vision, augmented reality, and an artificial intelligence algorithm. The proposed trainer has the potential to increase the self-confidence of trainees and to be applied to programs with limited resources.

  6. The FELICIA bulletin board system and the IRBIS anonymous FTP server: Computer security information sources for the DOE community. CIAC-2302

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Orvis, W.J.

    1993-11-03

    The Computer Incident Advisory Capability (CIAC) operates two information servers for the DOE community, FELICIA (formerly FELIX) and IRBIS. FELICIA is a computer Bulletin Board System (BBS) that can be accessed by telephone with a modem. IRBIS is an anonymous ftp server that can be accessed on the Internet. Both of these servers contain all of the publicly available CIAC, CERT, NIST, and DDN bulletins, virus descriptions, the VIRUS-L moderated virus bulletin board, copies of public domain and shareware virus-detection/protection software, and copies of useful public domain and shareware utility programs. This guide describes how to connect to these systems and obtain files from them.

  7. SPARTAN: A High-Fidelity Simulation for Automated Rendezvous and Docking Applications

    NASA Technical Reports Server (NTRS)

    Turbe, Michael A.; McDuffie, James H.; DeKock, Brandon K.; Betts, Kevin M.; Carrington, Connie K.

    2007-01-01

    bd Systems (a subsidiary of SAIC) has developed the Simulation Package for Autonomous Rendezvous Test and ANalysis (SPARTAN), a high-fidelity on-orbit simulation featuring multiple six-degree-of-freedom (6DOF) vehicles. SPARTAN has been developed in a modular fashion in Matlab/Simulink to test next-generation automated rendezvous and docking guidance, navigation, and control algorithms for NASA's new Vision for Space Exploration. SPARTAN includes autonomous state-based mission manager algorithms responsible for sequencing the vehicle through various flight phases based on on-board sensor inputs and closed-loop guidance algorithms, including Lambert transfers, Clohessy-Wiltshire maneuvers, and glideslope approaches. The guidance commands are implemented using an integrated translation and attitude control system to provide 6DOF control of each vehicle in the simulation. SPARTAN also includes high-fidelity representations of a variety of absolute and relative navigation sensors that may be used for NASA missions, including radio frequency, lidar, and video-based rendezvous sensors. Proprietary navigation sensor fusion algorithms have been developed that allow the integration of these sensor measurements through an extended Kalman filter framework to create a single optimal estimate of the relative state of the vehicles. SPARTAN provides capability for Monte Carlo dispersion analysis, allowing for rigorous evaluation of the performance of the complete proposed AR&D system, including software, sensors, and mechanisms. SPARTAN also supports hardware-in-the-loop testing through conversion of the algorithms to C code using Real-Time Workshop in order to be hosted in a mission computer engineering development unit running an embedded real-time operating system. SPARTAN also contains both a runtime TCP/IP socket interface and post-processing compatibility with bdStudio, a visualization tool developed by bd Systems, allowing for intuitive evaluation of simulation results. A description of the SPARTAN architecture and capabilities is provided, along with details on the models and algorithms utilized and results from representative missions.
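
    One of the guidance building blocks named above, the Clohessy-Wiltshire maneuver, rests on a closed-form relative-motion solution. A textbook-form propagation sketch (radial x, along-track y, cross-track z; mean motion n), offered as a reference rather than SPARTAN's code:

    import numpy as np

    def cw_propagate(state0, n, t):
        """Propagate [x, y, z, vx, vy, vz] by the Clohessy-Wiltshire solution."""
        x0, y0, z0, vx0, vy0, vz0 = state0
        s, c = np.sin(n * t), np.cos(n * t)
        x = (4 - 3 * c) * x0 + (s / n) * vx0 + (2 / n) * (1 - c) * vy0
        y = (6 * (s - n * t) * x0 + y0 - (2 / n) * (1 - c) * vx0
             + ((4 * s - 3 * n * t) / n) * vy0)
        z = c * z0 + (s / n) * vz0
        vx = 3 * n * s * x0 + c * vx0 + 2 * s * vy0
        vy = 6 * n * (c - 1) * x0 - 2 * s * vx0 + (4 * c - 3) * vy0
        vz = -n * s * z0 + c * vz0
        return np.array([x, y, z, vx, vy, vz])

    # e.g. a chaser 100 m behind the target in LEO, coasting for one minute
    print(cw_propagate([0.0, -100.0, 0.0, 0.0, 0.0, 0.0], n=1.13e-3, t=60.0))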

  8. Theory and experiments in model-based space system anomaly management

    NASA Astrophysics Data System (ADS)

    Kitts, Christopher Adam

    This research program consists of an experimental study of model-based reasoning methods for detecting, diagnosing and resolving anomalies that occur when operating a comprehensive space system. Using a first principles approach, several extensions were made to the existing field of model-based fault detection and diagnosis in order to develop a general theory of model-based anomaly management. Based on this theory, a suite of algorithms were developed and computationally implemented in order to detect, diagnose and identify resolutions for anomalous conditions occurring within an engineering system. The theory and software suite were experimentally verified and validated in the context of a simple but comprehensive, student-developed, end-to-end space system, which was developed specifically to support such demonstrations. This space system consisted of the Sapphire microsatellite which was launched in 2001, several geographically distributed and Internet-enabled communication ground stations, and a centralized mission control complex located in the Space Technology Center in the NASA Ames Research Park. Results of both ground-based and on-board experiments demonstrate the speed, accuracy, and value of the algorithms compared to human operators, and they highlight future improvements required to mature this technology.

  9. Parallel algorithms for computation of the manipulator inertia matrix

    NASA Technical Reports Server (NTRS)

    Amin-Javaheri, Masoud; Orin, David E.

    1989-01-01

    The development of an O(log2N) parallel algorithm for the manipulator inertia matrix is presented. It is based on the most efficient serial algorithm which uses the composite rigid body method. Recursive doubling is used to reformulate the linear recurrence equations which are required to compute the diagonal elements of the matrix. It results in O(log2N) levels of computation. Computation of the off-diagonal elements involves N linear recurrences of varying-size and a new method, which avoids redundant computation of position and orientation transforms for the manipulator, is developed. The O(log2N) algorithm is presented in both equation and graphic forms which clearly show the parallelism inherent in the algorithm.
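
    Recursive doubling, the device used above to reformulate the linear recurrences, can be seen on the simplest recurrence s_i = s_(i-1) + a_i: each pass below is one parallel level, so all N partial results emerge after O(log2 N) steps (a minimal illustration, not the inertia-matrix code).

    import numpy as np

    def prefix_sum_recursive_doubling(a):
        s = np.asarray(a, dtype=float).copy()
        shift = 1
        while shift < len(s):
            # all additions at one level are independent: one parallel step
            s[shift:] = s[shift:] + s[:-shift]
            shift *= 2
        return s

    print(prefix_sum_recursive_doubling([1, 2, 3, 4, 5]))  # [1 3 6 10 15]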

  10. Imaging Performance of a Handheld Ultrasound System With Real-Time Computer-Aided Detection of Lumbar Spine Anatomy: A Feasibility Study.

    PubMed

    Tiouririne, Mohamed; Dixon, Adam J; Mauldin, F William; Scalzo, David; Krishnaraj, Arun

    2017-08-01

    The aim of this study was to evaluate the imaging performance of a handheld ultrasound system and the accuracy of an automated lumbar spine computer-aided detection (CAD) algorithm in the spines of human subjects. This study was approved by the institutional review board of the University of Virginia. The authors designed a handheld ultrasound system with enhanced bone image quality and fully automated CAD of lumbar spine anatomy. The imaging performance was evaluated by imaging the lumbar spines of 68 volunteers with body mass index between 18.5 and 48 kg/m. The accuracy, sensitivity, and specificity of the lumbar spine CAD algorithm were assessed by comparing the algorithm's results to ground-truth segmentations of neuraxial anatomy provided by radiologists. The lumbar spine CAD algorithm detected the epidural space with a sensitivity of 94.2% (95% confidence interval [CI], 85.1%-98.1%) and a specificity of 85.5% (95% CI, 81.7%-88.6%) and measured its depth with an error of approximately ±0.5 cm compared with measurements obtained manually from the 2-dimensional ultrasound images. The spine midline was detected with a sensitivity of 93.9% (95% CI, 85.8%-97.7%) and specificity of 91.3% (95% CI, 83.6%-96.9%), and its lateral position within the ultrasound image was measured with an error of approximately ±0.3 cm. The bone enhancement imaging mode produced images with 5.1- to 10-fold enhanced bone contrast when compared with a comparable handheld ultrasound imaging system. The results of this study demonstrate the feasibility of CAD for assisting with real-time interpretation of ultrasound images of the lumbar spine at the bedside.

  11. A Parallel Nonrigid Registration Algorithm Based on B-Spline for Medical Images.

    PubMed

    Du, Xiaogang; Dang, Jianwu; Wang, Yangping; Wang, Song; Lei, Tao

    2016-01-01

    The nonrigid registration algorithm based on B-spline Free-Form Deformation (FFD) plays a key role and is widely applied in medical image processing due to its flexibility and robustness. However, it requires a tremendous amount of computing time to obtain accurate registration results, especially for large amounts of medical image data. To address this issue, a parallel nonrigid registration algorithm based on B-splines is proposed in this paper. First, the Logarithm Squared Difference (LSD) is used as the similarity metric in the B-spline registration algorithm to improve registration precision. We then create a parallel computing strategy and lookup tables (LUTs) to reduce the complexity of the B-spline registration algorithm. As a result, the computing time of the three time-consuming steps (B-spline interpolation, LSD computation, and the analytic gradient computation of LSD) is efficiently reduced within the Nonlinear Conjugate Gradient (NCG) optimization employed by the registration algorithm. Experimental results on registration quality and execution efficiency for large amounts of medical images show that our algorithm achieves better registration accuracy, in terms of the differences between the best deformation fields and ground truth, and a speedup of 17 times over the single-threaded CPU implementation, owing to the powerful parallel computing ability of the Graphics Processing Unit (GPU).
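
    The lookup-table idea is natural for B-spline FFD because, on a regular control grid, the four cubic basis weights depend only on a voxel's fractional offset inside its control cell; they can therefore be tabulated once, as in this sketch (the table resolution is an illustrative assumption).

    import numpy as np

    def bspline_weight_lut(Q=256):
        """Tabulated cubic B-spline basis weights for Q fractional offsets."""
        t = (np.arange(Q) + 0.5) / Q
        B = np.empty((Q, 4))
        B[:, 0] = (1 - t) ** 3 / 6.0
        B[:, 1] = (3 * t**3 - 6 * t**2 + 4) / 6.0
        B[:, 2] = (-3 * t**3 + 3 * t**2 + 3 * t + 1) / 6.0
        B[:, 3] = t**3 / 6.0
        return B

    LUT = bspline_weight_lut()
    print(LUT.sum(axis=1)[:3])   # partition of unity: each row sums to 1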

  12. On the performances of computer vision algorithms on mobile platforms

    NASA Astrophysics Data System (ADS)

    Battiato, S.; Farinella, G. M.; Messina, E.; Puglisi, G.; Ravì, D.; Capra, A.; Tomaselli, V.

    2012-01-01

    Computer Vision enables mobile devices to extract the meaning of the observed scene from the information acquired with the on-board sensor cameras. Nowadays, there is a growing interest in Computer Vision algorithms able to work on mobile platforms (e.g., phone cameras, point-and-shoot cameras, etc.). Indeed, bringing Computer Vision capabilities to mobile devices opens new opportunities in different application contexts. The implementation of vision algorithms on mobile devices is still a challenging task, since these devices have poor image sensors and optics as well as limited processing power. In this paper we consider different algorithms covering classic Computer Vision tasks: keypoint extraction, face detection, and image segmentation. Several tests have been performed to compare the performance of the involved mobile platforms: Nokia N900, LG Optimus One, Samsung Galaxy SII.

  13. Study on the algorithm of computational ghost imaging based on discrete fourier transform measurement matrix

    NASA Astrophysics Data System (ADS)

    Zhang, Leihong; Liang, Dong; Li, Bei; Kang, Yi; Pan, Zilan; Zhang, Dawei; Gao, Xiumin; Ma, Xiuhua

    2016-07-01

    On the basis of analyzing the cosine light field with a determined analytic expression and the pseudo-inverse method, the object is illuminated by a preset light field with a determined discrete Fourier transform measurement matrix, and the object image is reconstructed by the pseudo-inverse method. The analytic expression of the algorithm of computational ghost imaging based on a discrete Fourier transform measurement matrix (FGI) is deduced theoretically and compared with the algorithm of compressive computational ghost imaging based on a random measurement matrix (PGI). The reconstruction process and the reconstruction error are analyzed, and simulations are carried out to verify the theoretical analysis. When the number of sampling measurements is similar to the number of object pixels, the rank of the discrete Fourier transform matrix is the same as that of the random measurement matrix; the PSNRs of the images reconstructed by the FGI and PGI algorithms are then similar, and the reconstruction error of the traditional CGI algorithm is lower than that of the FGI and PGI algorithms. As the number of sampling measurements decreases, the PSNR of the image reconstructed by the FGI algorithm decreases slowly, while the PSNRs for the PGI and CGI algorithms decrease sharply. The reconstruction time of the FGI algorithm is lower than that of the other algorithms and is not affected by the number of sampling measurements. The FGI algorithm can effectively filter out random white noise through a low-pass filter and realize denoising reconstruction, with a higher denoising capability than the CGI algorithm. The FGI algorithm thus improves both the reconstruction accuracy and the reconstruction speed of computational ghost imaging.
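
    The core of the reconstruction described above is linear: measure with rows derived from a DFT matrix, then invert with the pseudo-inverse. A toy-size numpy sketch (the dimensions are illustrative, and the real part of the DFT rows stands in for the cosine illumination patterns):

    import numpy as np

    rng = np.random.default_rng(0)
    N, M = 64, 48                      # object pixels, illumination patterns
    x = rng.random(N)                  # unknown object reflectivity

    F = np.fft.fft(np.eye(N))          # full DFT matrix
    A = np.real(F[:M, :])              # M deterministic patterns
    y = A @ x                          # bucket-detector measurements

    x_hat = np.linalg.pinv(A) @ y      # pseudo-inverse reconstruction
    print(np.linalg.norm(x - x_hat) / np.linalg.norm(x))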

  14. Reversible Data Hiding Based on DNA Computing

    PubMed Central

    Xie, Yingjie

    2017-01-01

    Biocomputing, and especially DNA computing, has developed greatly and is widely used in information security. In this paper, a novel algorithm for reversible data hiding based on DNA computing is proposed. Inspired by the histogram modification algorithm, a classical algorithm for reversible data hiding, we combine it with DNA computing to realize the algorithm with biological technology. Compared with previous results, our experimental results significantly improve the ER (embedding rate). Furthermore, the PSNR (peak signal-to-noise ratio) of some test images is also improved. Experimental results show that the scheme is suitable for protecting the copyright of cover images in DNA-based information security. PMID:28280504
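
    The histogram-modification step that the scheme builds on can be sketched directly (grayscale embedding only; full reversibility additionally requires a truly empty bin and a location map, omitted here for brevity):

    import numpy as np

    def embed(img, bits):
        """Histogram-shift embedding at the peak gray level."""
        hist = np.bincount(img.ravel(), minlength=256)
        p = int(hist.argmax())                    # peak bin
        z = int(np.argmin(hist[p + 1:])) + p + 1  # (near-)empty bin above it
        out = img.astype(np.int32)
        out[(out > p) & (out < z)] += 1           # shift to free level p+1
        flat = out.ravel()                        # view into out
        peak_px = np.flatnonzero(img.ravel() == p)
        for k, b in enumerate(bits[:len(peak_px)]):
            flat[peak_px[k]] = p + b              # bit 0 -> p, bit 1 -> p+1
        return out.astype(np.uint8), p, z

    img = np.random.default_rng(0).integers(0, 200, (64, 64), dtype=np.uint8)
    marked, p, z = embed(img, [1, 0, 1, 1])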

  15. [Orthogonal Vector Projection Algorithm for Spectral Unmixing].

    PubMed

    Song, Mei-ping; Xu, Xing-wei; Chang, Chein-I; An, Ju-bai; Yao, Li

    2015-12-01

    Spectral unmixing is an important part of hyperspectral technology and is essential for material quantity analysis in hyperspectral imagery. Most linear unmixing algorithms require matrix multiplication and matrix inversion or matrix determinant computations. These are difficult to program and especially hard to realize in hardware. At the same time, the computational cost of these algorithms increases significantly as the number of endmembers grows. Here, based on the traditional Orthogonal Subspace Projection algorithm, a new method called Orthogonal Vector Projection is proposed using the orthogonality principle. It simplifies the process by avoiding matrix multiplication and inversion. It first computes the final orthogonal vector via the Gram-Schmidt process for each endmember spectrum. These orthogonal vectors are then used as projection vectors for the pixel signature. The unconstrained abundance can be obtained directly by projecting the signature onto the projection vectors and computing the ratio of the projected vector length to the orthogonal vector length. Compared to the Orthogonal Subspace Projection and Least Squares Error algorithms, this method does not need matrix inversion, which is computationally costly and hard to implement in hardware. It completes the orthogonalization process by repeated vector operations, making it easy to apply in both parallel computation and hardware. The reasonableness of the algorithm is proved by its relationship with the Orthogonal Subspace Projection and Least Squares Error algorithms. Its computational complexity is also compared with those of the other two algorithms and is the lowest of the three. Finally, experimental results on synthetic and real images are provided, giving further evidence of the effectiveness of the method.
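
    A minimal numpy rendition of the projection step described above, assuming the endmember matrix is known (a sketch of the idea, not the authors' code; QR factorization here plays the role of the Gram-Schmidt sweep): for each endmember, remove the span of the others, then take the ratio of projections.

    import numpy as np

    def ovp_abundances(pixel, E):
        """Unconstrained abundances; E is a (bands, endmembers) matrix."""
        P = E.shape[1]
        a = np.empty(P)
        for k in range(P):
            others = np.delete(E, k, axis=1)
            Q, _ = np.linalg.qr(others)           # orthonormal basis of others
            q = E[:, k] - Q @ (Q.T @ E[:, k])     # part orthogonal to that span
            a[k] = (q @ pixel) / (q @ E[:, k])    # ratio of projections
        return a

    E = np.array([[1.0, 0.2], [0.3, 1.0], [0.5, 0.4]])  # 3 bands, 2 endmembers
    x = E @ np.array([0.7, 0.3])
    print(ovp_abundances(x, E))                   # -> [0.7, 0.3]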

  16. DSP-Based dual-polarity mass spectrum pattern recognition for bio-detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riot, V; Coffee, K; Gard, E

    2006-04-21

    The Bio-Aerosol Mass Spectrometry (BAMS) instrument analyzes single aerosol particles using a dual-polarity time-of-flight mass spectrometer, recording simultaneous spectra of thirty to a hundred thousand points for each polarity. We describe here a real-time pattern recognition algorithm developed at Lawrence Livermore National Laboratory that has been implemented on a nine-Digital-Signal-Processor (DSP) system from Signatec Incorporated. The algorithm first preprocesses the raw time-of-flight data independently through an adaptive baseline removal routine. The next step is a polarity-dependent calibration to a mass-to-charge representation, reducing the data to about five hundred to a thousand channels per polarity. The last step is identification using a pattern recognition algorithm based on a library of known particle signatures, including threat agents and background particles. The identification step integrates the two polarities for a final determination using a score-based rule tree. This algorithm, operating on multiple channels per polarity and multiple polarities, is well suited for parallel real-time processing. It has been implemented on the PMP8A from Signatec Incorporated, a computer-based board that interfaces directly to the two one-Giga-Sample digitizers (PDA1000 from Signatec Incorporated) used to record the two polarities of time-of-flight data. By using optimized data separation, pipelining, and parallel processing across the nine DSPs, it is possible to achieve a processing speed of up to a thousand particles per second while maintaining the recognition rate observed in a non-real-time implementation. This embedded system has allowed the BAMS technology to improve its throughput, and therefore its sensitivity, while maintaining a large dynamic range (number of channels and two polarities) and thus the system's specificity for bio-detection.

  17. A ground-based memory state tracker for satellite on-board computer memory

    NASA Technical Reports Server (NTRS)

    Quan, Alan; Angelino, Robert; Hill, Michael; Schwuttke, Ursula; Hervias, Felipe

    1993-01-01

    The TOPEX/POSEIDON satellite, currently in Earth orbit, will use radar altimetry to measure sea surface height over 90 percent of the world's ice-free oceans. In combination with a precise determination of the spacecraft orbit, the altimetry data will provide maps of ocean topography, which will be used to calculate the speed and direction of ocean currents worldwide. NASA's Jet Propulsion Laboratory (JPL) has primary responsibility for mission operations for TOPEX/POSEIDON. Software applications have been developed to automate mission operations tasks. This paper describes one of these applications, the Memory State Tracker, which allows the ground analyst to examine and track the contents of satellite on-board computer memory quickly and efficiently, in a human-readable format, without having to receive the data directly from the spacecraft. This process is accomplished by maintaining a ground-based mirror-image of spacecraft on-board computer memory.

  18. A parallel algorithm for the initial screening of space debris collisions prediction using the SGP4/SDP4 models and GPU acceleration

    NASA Astrophysics Data System (ADS)

    Lin, Mingpei; Xu, Ming; Fu, Xiaoyu

    2017-05-01

    Currently, a tremendous amount of space debris in Earth's orbit imperils operational spacecraft. It is essential to undertake risk assessments of collisions and predict dangerous encounters in space. However, collision predictions for an enormous amount of space debris give rise to large-scale computations. In this paper, a parallel algorithm is established on the Compute Unified Device Architecture (CUDA) platform of NVIDIA Corporation for collision prediction. According to the parallel structure of NVIDIA graphics processors, a block decomposition strategy is adopted in the algorithm. Space debris is divided into batches, and the computation and data transfer operations of adjacent batches overlap. As a consequence, the latency to access shared memory during the entire computing process is significantly reduced, and a higher computing speed is reached. Theoretically, a simulation of collision prediction for space debris of any amount and for any time span can be executed. To verify this algorithm, a simulation example including 1382 pieces of debris, whose operational time scales vary from 1 min to 3 days, is conducted on Tesla C2075 of NVIDIA. The simulation results demonstrate that with the same computational accuracy as that of a CPU, the computing speed of the parallel algorithm on a GPU is 30 times that on a CPU. Based on this algorithm, collision prediction of over 150 Chinese spacecraft for a time span of 3 days can be completed in less than 3 h on a single computer, which meets the timeliness requirement of the initial screening task. Furthermore, the algorithm can be adapted for multiple tasks, including particle filtration, constellation design, and Monte-Carlo simulation of an orbital computation.
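
    The shape of the screening computation, stripped of the SGP4/SDP4 propagation itself, is a batched minimum-distance test over all object pairs. A vectorized numpy sketch (positions are assumed already propagated to common epochs, and the threshold is an invented value):

    import numpy as np

    def screen(positions, threshold_km):
        """positions: (objects, epochs, 3) array of positions in km."""
        diff = positions[:, None, :, :] - positions[None, :, :, :]
        dmin = np.linalg.norm(diff, axis=-1).min(axis=-1)  # per-pair minimum
        i, j = np.triu_indices(positions.shape[0], k=1)
        hits = dmin[i, j] < threshold_km
        return list(zip(i[hits], j[hits], dmin[i, j][hits]))

    pos = np.random.default_rng(0).normal(0.0, 7000.0, (20, 50, 3))
    print(screen(pos, threshold_km=2000.0))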

  19. Laboratory and exterior decay of wood plastic composite boards: voids analysis and computed tomography

    Treesearch

    Grace Sun; Rebecca E. Ibach; Meghan Faillace; Marek Gnatowski; Jessie A. Glaeser; John Haight

    2016-01-01

    After exposure in the field and laboratory soil block culture testing, the void content of wood–plastic composite (WPC) decking boards was compared to unexposed samples. A void volume analysis was conducted based on calculations of sample density and from micro-computed tomography (microCT) data. It was found that reference WPC contains voids of different sizes from...

  20. Twelve tips for use of a white board in clinical teaching: reviving the chalk talk.

    PubMed

    Orlander, Jay D

    2007-03-01

    Little has been written on the art of using a board in clinical teaching. The technological development of the white board appears to have coincided with that of the laptop computer and accompanying LCD projector, so that fewer and fewer teaching sessions appear to utilize the board as an efficient teaching tool. I have observed this most commonly among younger faculty who are most comfortable with technology and who may lack training and experience with a blank board. This paper offers suggestions on using the board in clinical teaching in order to enhance the educational process through better engagement of the learners.

  1. Multi-sensor Navigation System Design

    DOT National Transportation Integrated Search

    1971-03-01

    This report treats the design of navigation systems that collect data from two or more on-board measurement subsystems and process this data in an on-board computer. Such systems are called Multi-sensor Navigation Systems. The design begins with t...

  2. A Discussion of Using a Reconfigurable Processor to Implement the Discrete Fourier Transform

    NASA Technical Reports Server (NTRS)

    White, Michael J.

    2004-01-01

    This paper presents the design and implementation of the Discrete Fourier Transform (DFT) algorithm on a reconfigurable processor system. While highly applicable to many engineering problems, the DFT is an extremely computationally intensive algorithm. Consequently, the eventual goal of this work is to enhance the execution of a floating-point precision DFT algorithm by off-loading the algorithm from the computing system. This computing system, within the context of this research, is a typical high-performance desktop computer with an array of field programmable gate arrays (FPGAs). FPGAs are hardware devices that are configured by software to execute an algorithm. If it is desired to change the algorithm, the software is changed to reflect the modification and then downloaded to the FPGA, which is then itself modified. This paper discusses the methodology for developing the DFT algorithm to be implemented on the FPGA. We discuss the algorithm, the FPGA code effort, and the results to date.
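
    A direct-form DFT reference model of the kind that maps onto FPGA multiply-accumulate logic (a software sketch for comparison, not the paper's FPGA design):

    import cmath

    def dft(x):
        """O(N^2) direct evaluation of the Discrete Fourier Transform."""
        N = len(x)
        return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                    for n in range(N))
                for k in range(N)]

    print([round(abs(X), 6) for X in dft([1, 0, 0, 0])])  # impulse -> flat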

  3. Quantitative evaluation of low-cost frame-grabber boards for personal computers.

    PubMed

    Kofler, J M; Gray, J E; Fuelberth, J T; Taubel, J P

    1995-11-01

    Nine moderately priced frame-grabber boards for both Macintosh (Apple Computers, Cupertino, CA) and IBM-compatible computers were evaluated using a Society of Motion Pictures and Television Engineers (SMPTE) pattern and a video signal generator for dynamic range, gray-scale reproducibility, and spatial integrity of the captured image. The degradation of the video information ranged from minor to severe. Some boards are of reasonable quality for applications in diagnostic imaging and education. However, price and quality are not necessarily directly related.

  4. A Parallel Compact Multi-Dimensional Numerical Algorithm with Aeroacoustics Applications

    NASA Technical Reports Server (NTRS)

    Povitsky, Alex; Morris, Philip J.

    1999-01-01

    In this study we propose a novel method to parallelize high-order compact numerical algorithms for the solution of three-dimensional PDEs (Partial Differential Equations) in a space-time domain. For this numerical integration most of the computer time is spent in the computation of spatial derivatives at each stage of the Runge-Kutta temporal update. The most efficient direct method to compute spatial derivatives on a serial computer is a version of Gaussian elimination for narrow linear banded systems known as the Thomas algorithm. In a straightforward pipelined implementation of the Thomas algorithm, processors are idle due to the forward and backward recurrences of the algorithm. To utilize processors during this time, we propose to use them for either non-local data-independent computations, solving lines in the next spatial direction, or local data-dependent computations by the Runge-Kutta method. To achieve this goal, control of processor communication and computation by a static schedule is adopted. Thus, our parallel code is driven by a communication and computation schedule instead of the usual "creative programming" approach. The parallelization speed-up of the novel algorithm is about twice that of the standard pipelined algorithm and close to that of the explicit DRP algorithm.
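
    A serial reference version of the Thomas algorithm whose forward and backward recurrences are discussed above (standard textbook form, not the parallel schedule-driven code):

    import numpy as np

    def thomas(a, b, c, d):
        """Tridiagonal solve; a = sub-, b = main, c = super-diagonal."""
        n = len(b)
        cp, dp = np.empty(n), np.empty(n)
        cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
        for i in range(1, n):                    # forward recurrence
            m = b[i] - a[i] * cp[i - 1]
            cp[i] = c[i] / m if i < n - 1 else 0.0
            dp[i] = (d[i] - a[i] * dp[i - 1]) / m
        x = np.empty(n)
        x[-1] = dp[-1]
        for i in range(n - 2, -1, -1):           # backward recurrence
            x[i] = dp[i] - cp[i] * x[i + 1]
        return x

    A = np.diag([4.0] * 5) + np.diag([1.0] * 4, 1) + np.diag([1.0] * 4, -1)
    rhs = np.arange(5.0)
    sol = thomas(np.r_[0.0, [1.0] * 4], [4.0] * 5, np.r_[[1.0] * 4, 0.0], rhs)
    print(np.allclose(sol, np.linalg.solve(A, rhs)))   # True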

  5. Exact parallel algorithms for some members of the traveling salesman problem family

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pekny, J.F.

    1989-01-01

    The traveling salesman problem and its many generalizations comprise one of the best known combinatorial optimization problem families. Most members of the family are NP-complete problems, so that exact algorithms require an unpredictable and sometimes large computational effort. Parallel computers offer hope for providing the power required to meet these demands. A major barrier to applying parallel computers is the lack of parallel algorithms. The contributions presented in this thesis center around new exact parallel algorithms for the asymmetric traveling salesman problem (ATSP), the prize collecting traveling salesman problem (PCTSP), and the resource constrained traveling salesman problem (RCTSP). The RCTSP is a particularly difficult member of the family since finding a feasible solution is an NP-complete problem. An exact sequential algorithm is also presented for the directed hamiltonian cycle problem (DHCP). The DHCP algorithm is superior to current heuristic approaches and represents the first exact method applicable to large graphs. Computational results presented for each of the algorithms demonstrate the effectiveness of combining efficient algorithms with parallel computing methods. Performance statistics are reported for randomly generated ATSPs with 7,500 cities, PCTSPs with 200 cities, RCTSPs with 200 cities, DHCPs with 3,500 vertices, and assignment problems of size 10,000. Sequential results were collected on a Sun 4/260 engineering workstation, while parallel results were collected using a 14- and a 100-processor BBN Butterfly Plus computer.

  6. GNSS-derived Path Delay Plus (GPD+): a methodology for the computation of improved wet tropospheric corrections for coastal altimetry

    NASA Astrophysics Data System (ADS)

    Fernandes, Joana; Lázaro, Clara; Ambrózio, Américo; Restano, Marco; Benveniste, Jérôme

    2017-04-01

    Satellite altimetry missions provide the sea surface height above a reference ellipsoid with centimetric accuracy as long as all terms involved in the altimeter measurement system (satellite orbit, altimeter range between the satellite and the sea surface, and instrumental, range and geophysical corrections) are known with the same accuracy. The wet tropospheric correction (WTC), the range correction that accounts for the delay induced by the presence of water vapour and liquid water in the troposphere, has an absolute value of less than 50 cm but large space-time variability, and is therefore difficult to model. Despite the progress observed in WTC modelling from numerical weather models (NWM), the accuracy of present NWM-derived WTC is still insufficient for most altimetry applications such as e.g. sea level variation. In practice, accurate WTC at the time and location of the altimetric measurements can only be achieved through observations of the atmospheric water vapour content, acquired by on-board microwave radiometers (MWR). In the open ocean, MWR-derived WTC are centimeter-level accurate; in coastal regions, the WTC degrades for several reasons, among which is the contamination, from the surrounding land surfaces, of the signal measured by the MWR. The presence of ice and rain also contaminates the MWR observations. Therefore, MWR-derived WTC are generally incorrect or invalid in coastal, rainy and high-latitude regions, and altimeter measurements there cannot benefit from MWR corrections. The GNSS-derived Path Delay (GPD) algorithm was developed by the University of Porto (UPorto) to compute the WTC for coastal regions where MWR observations are invalid, envisaging the recovery of the altimeter data in these regions. The GPD-derived WTC is based on a space-time optimal interpolation that combines path delays measured by MWR and computed at more than 800 coastal/island GNSS stations. Its most recent version, the GPD Plus (GPD+), estimates the WTC globally, relying also on path delay observations from 19 scanning imaging MWR on board various remote sensing missions. After adequate tuning, the GPD+ is applicable to any altimetric mission with or without an on-board MWR, such as CryoSat-2, for which only a NWM-derived WTC would otherwise be available. To ensure consistency and WTC long-term stability, and prior to their use in the GPD+, path delay observations from all radiometers were inter-calibrated with respect to the Special Sensor Microwave Imager (SSM/I) and the SSM/I Sounder (SSM/IS). The GPD+ WTC were computed, in the scope of several ESA-funded projects (e.g., Sea Level CCI, CP4O), for 9 altimetry missions and were independently validated through statistical analyses of sea level anomaly variance. Overall, results show that GPD+ recovers a significant number of measurements in coastal regions, ensuring the continuity and consistency of the correction in the open-ocean/coastal transition zone and also at high latitudes. As a consequence, GPD+ WTC have been chosen as the best available WTC for climate studies and adopted as reference in the Sea Level CCI products; the GPD+ has also been adopted as reference in CryoSat-2 Level 2 IOP and GOP products. The GPD+ algorithm, its implementation, the path delay datasets used and the sensor calibration are described here.

  7. Developing Subdomain Allocation Algorithms Based on Spatial and Communicational Constraints to Accelerate Dust Storm Simulation

    PubMed Central

    Gui, Zhipeng; Yu, Manzhu; Yang, Chaowei; Jiang, Yunfeng; Chen, Songqing; Xia, Jizhe; Huang, Qunying; Liu, Kai; Li, Zhenlong; Hassan, Mohammed Anowarul; Jin, Baoxuan

    2016-01-01

    Dust storms have serious, disastrous impacts on the environment, human health, and assets. The development and application of dust storm models have contributed significantly to better understanding and predicting the distribution, intensity and structure of dust storms. However, dust storm simulation is a data- and computing-intensive process. To improve computing performance, high performance computing has been widely adopted by dividing the entire study area into multiple subdomains and allocating each subdomain to different computing nodes in a parallel fashion. Inappropriate allocation may introduce imbalanced task loads and unnecessary communications among computing nodes. Therefore, allocation is a key factor that may impact the efficiency of the parallel process. An allocation algorithm is expected to consider the computing cost and communication cost for each computing node to minimize total execution time and reduce overall communication cost for the entire simulation. This research introduces three algorithms to optimize the allocation by considering the spatial and communicational constraints: 1) an Integer Linear Programming (ILP) based algorithm from a combinatorial optimization perspective; 2) a K-Means and Kernighan-Lin combined heuristic algorithm (K&K) integrating geometric and coordinate-free methods by merging local and global partitioning; 3) an automatic seeded region growing based geometric and local partitioning algorithm (ASRG). The performance and effectiveness of the three algorithms are compared based on different factors. Further, we adopt the K&K algorithm for the experiment of dust model simulation with the non-hydrostatic mesoscale model (NMM-dust) and compare its performance with the MPI default sequential allocation. The results demonstrate that the K&K method significantly improves the simulation performance with better subdomain allocation. This method can also be adopted for other relevant atmospheric and numerical modeling. PMID:27044039
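
    A toy sketch of the geometric half of such an allocation, under the assumption of a regular grid: K-Means clusters the cells into subdomains, and the number of cut edges between 4-neighbour cells serves as a crude proxy for halo-exchange communication cost. The Kernighan-Lin refinement and the real cost model of the K&K algorithm are omitted.

        # K-Means partition of an nx-by-ny grid into k subdomains, plus a
        # cut-edge count as a communication-cost proxy (illustrative only).
        import numpy as np

        def kmeans_partition(nx, ny, k, iters=20, seed=0):
            rng = np.random.default_rng(seed)
            cells = np.array([(i, j) for i in range(nx) for j in range(ny)], float)
            centers = cells[rng.choice(len(cells), k, replace=False)]
            for _ in range(iters):
                d = ((cells[:, None, :] - centers[None, :, :])**2).sum(-1)
                label = d.argmin(1)
                for c in range(k):
                    if (label == c).any():
                        centers[c] = cells[label == c].mean(0)
            return label.reshape(nx, ny)

        def cut_edges(label):
            # each 4-neighbour pair split across subdomains needs a halo exchange
            return int((label[1:, :] != label[:-1, :]).sum()
                       + (label[:, 1:] != label[:, :-1]).sum())

        labels = kmeans_partition(64, 64, k=8)
        print("cut edges:", cut_edges(labels))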

  8. BASKET on-board software library

    NASA Astrophysics Data System (ADS)

    Luntzer, Armin; Ottensamer, Roland; Kerschbaum, Franz

    2014-07-01

    The University of Vienna is a provider of on-board data processing software with a focus on data compression, such as that used on board the highly successful Herschel/PACS instrument, as well as in the small BRITE-Constellation fleet of CubeSats. Current contributions are made to CHEOPS, SAFARI and PLATO. An effort was made to review the various functions developed for Herschel and provide a consolidated software library to facilitate the work for future missions. This library is a shopping basket of algorithms. Its contents are separated into four classes: auxiliary functions (e.g. circular buffers), preprocessing functions (e.g. for calibration), lossless data compression (arithmetic or Rice coding) and lossy reduction steps (ramp fitting etc.). The "BASKET" has all the functionality needed to create an on-board data processing chain. All sources are written in C, supplemented by optimized versions in assembly targeting popular CPU architectures for space applications. BASKET is open source and constantly growing.
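
    The library itself is C; as a rough Python illustration of the auxiliary-function class it describes, the sketch below implements the kind of circular buffer used to stage samples in an on-board processing chain.

        # Ring (circular) buffer sketch; oldest data is overwritten when full.
        class RingBuffer:
            def __init__(self, capacity):
                self.buf = [None] * capacity
                self.head = 0            # index of the oldest element
                self.count = 0

            def push(self, item):
                tail = (self.head + self.count) % len(self.buf)
                self.buf[tail] = item
                if self.count < len(self.buf):
                    self.count += 1
                else:                    # full: drop the oldest element
                    self.head = (self.head + 1) % len(self.buf)

            def pop(self):
                if self.count == 0:
                    raise IndexError("empty ring buffer")
                item = self.buf[self.head]
                self.head = (self.head + 1) % len(self.buf)
                self.count -= 1
                return item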

  9. Parallel Algorithms for Least Squares and Related Computations.

    DTIC Science & Technology

    1991-03-22

    for dense computations in linear algebra. The work has recently been published in a general reference book on parallel algorithms by SIAM. AFOSR...written his Ph.D. dissertation with the principal investigator. (See publication 6.) • Parallel Algorithms for Dense Linear Algebra Computations. Our...describe and put into perspective a selection of the more important parallel algorithms for numerical linear algebra. We give a major new

  10. Parallel, stochastic measurement of molecular surface area.

    PubMed

    Juba, Derek; Varshney, Amitabh

    2008-08-01

    Biochemists often wish to compute surface areas of proteins. A variety of algorithms have been developed for this task, but they are designed for traditional single-processor architectures. The current trend in computer hardware is towards increasingly parallel architectures for which these algorithms are not well suited. We describe a parallel, stochastic algorithm for molecular surface area computation that maps well to the emerging multi-core architectures. Our algorithm is also progressive, providing a rough estimate of surface area immediately and refining this estimate as time goes on. Furthermore, the algorithm generates points on the molecular surface which can be used for point-based rendering. We demonstrate a GPU implementation of our algorithm and show that it compares favorably with several existing molecular surface computation programs, giving fast estimates of the molecular surface area with good accuracy.
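
    A minimal sketch of the stochastic idea (not the authors' implementation): sample points on each atom's sphere, keep those not buried inside a neighbouring atom, and scale by the sphere area. The estimate is progressive in the sense that it sharpens as the sample count grows.

        # Monte Carlo van der Waals surface area; brute-force neighbour
        # test for clarity (the paper's algorithm is GPU-parallel).
        import numpy as np

        def surface_area(centers, radii, n_samples=2000, seed=1):
            rng = np.random.default_rng(seed)
            total = 0.0
            for i, (c, r) in enumerate(zip(centers, radii)):
                v = rng.normal(size=(n_samples, 3))
                v /= np.linalg.norm(v, axis=1, keepdims=True)
                pts = c + r * v                  # points on sphere i
                exposed = np.ones(n_samples, bool)
                for j, (cj, rj) in enumerate(zip(centers, radii)):
                    if j != i:
                        exposed &= np.linalg.norm(pts - cj, axis=1) >= rj
                total += 4.0 * np.pi * r * r * exposed.mean()
            return total

        # two overlapping unit spheres: exact exposed area is 6*pi ~ 18.85
        print(surface_area(np.array([[0., 0., 0.], [1., 0., 0.]]), [1.0, 1.0]))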

  11. A Nonlinear Framework of Delayed Particle Smoothing Method for Vehicle Localization under Non-Gaussian Environment

    PubMed Central

    Xiao, Zhu; Havyarimana, Vincent; Li, Tong; Wang, Dong

    2016-01-01

    In this paper, a novel nonlinear smoothing framework, the non-Gaussian delayed particle smoother (nGDPS), is proposed, which enables vehicle state estimation (VSE) with high accuracy, taking into account the non-Gaussianity of the measurement and process noises. Within the proposed method, the multivariate Student's t-distribution is adopted to compute the probability density function (PDF) related to the process and measurement noises, which are assumed to be non-Gaussian distributed. A computation approach based on the Ensemble Kalman Filter (EnKF) is designed to cope with the mean and the covariance matrix of the proposal non-Gaussian distribution. A delayed Gibbs sampling algorithm, which incorporates smoothing of the sampled trajectories over a fixed delay, is proposed to deal with the sample degeneracy of particles. The performance is investigated using real-world data collected by low-cost on-board vehicle sensors. The comparison study based on the real-world experiments and the statistical analysis demonstrates that the proposed nGDPS significantly improves the vehicle state accuracy and outperforms existing filtering and smoothing methods. PMID:27187405

  12. Advancements of In-Flight Mass Moment of Inertia and Structural Deflection Algorithms for Satellite Attitude Simulators

    DTIC Science & Technology

    2015-03-26

    pendulum [15] to estimate the MOI. The benefit of this methodology is that instead of a direct comparison to Euler's equations when using an on-board ACS...the equations of pendulum motion are evaluated to estimate the resistance to angular acceleration. Instead of attempting to compare noisy...sensor data instantaneously when using on-board ACS data, the pendulum oscillation frequency is estimated, which can be globally smoothed for highly

  13. Computer Starters!

    ERIC Educational Resources Information Center

    Instructor, 1983

    1983-01-01

    Instructor's Computer-Using Teachers Board members give practical tips on how to get a classroom ready for a new computer, introduce students to the machine, and help them learn about programming and computer literacy. Safety, scheduling, and supervision requirements are noted. (PP)

  14. SBAS-InSAR analysis of surface deformation at Mauna Loa and Kilauea volcanoes in Hawaii

    USGS Publications Warehouse

    Casu, F.; Lanari, Riccardo; Sansosti, E.; Solaro, G.; Tizzani, Pietro; Poland, M.; Miklius, Asta

    2009-01-01

    We investigate the deformation of Mauna Loa and Kīlauea volcanoes, Hawai'i, by exploiting the advanced differential Synthetic Aperture Radar Interferometry (InSAR) technique referred to as the Small BAseline Subset (SBAS) algorithm. In particular, we present time series of line-of-sight (LOS) displacements derived from SAR data acquired by the ASAR instrument, on board the ENVISAT satellite, from the ascending (track 93) and descending (track 429) orbits between 2003 and 2008. For each coherent pixel of the radar images we compute time-dependent surface displacements as well as the average LOS deformation rate. Our results quantify, in space and time, the complex deformation of Mauna Loa and Kīlauea volcanoes. The derived InSAR measurements are compared to continuous GPS data to assess the quality of the SBAS-InSAR products. ©2009 IEEE.

  15. 2005 Science and Technology for Chem-Bio Information Systems (S and T CBIS) volume 3 Thursday

    DTIC Science & Technology

    2005-10-28

    radar, lidar, or sodar with computer on-board. Temperature and moisture MW radiometer with computer on-board. Portable meteorological sensors... Wireless on the go is a way of life now – my cell phone, my PDA, my iPod (look, I'm "Podcasting"!) and dock it when I'm at home – same components...Team.. Other specifications will follow... Standardization of the interfaces across all CBRN sensors/devices! JPEO-CBD 20 Joint Program Executive Office

  16. Demonstration of a small programmable quantum computer with atomic qubits.

    PubMed

    Debnath, S; Linke, N M; Figgatt, C; Landsman, K A; Wright, K; Monroe, C

    2016-08-04

    Quantum computers can solve certain problems more efficiently than any possible conventional computer. Small quantum algorithms have been demonstrated on multiple quantum computing platforms, many specifically tailored in hardware to implement a particular algorithm or execute a limited number of computational paths. Here we demonstrate a five-qubit trapped-ion quantum computer that can be programmed in software to implement arbitrary quantum algorithms by executing any sequence of universal quantum logic gates. We compile algorithms into a fully connected set of gate operations that are native to the hardware and have a mean fidelity of 98 per cent. Reconfiguring these gate sequences provides the flexibility to implement a variety of algorithms without altering the hardware. As examples, we implement the Deutsch-Jozsa and Bernstein-Vazirani algorithms with average success rates of 95 and 90 per cent, respectively. We also perform a coherent quantum Fourier transform on five trapped-ion qubits for phase estimation and period finding with average fidelities of 62 and 84 per cent, respectively. This small quantum computer can be scaled to larger numbers of qubits within a single register, and can be further expanded by connecting several such modules through ion shuttling or photonic quantum channels.

  17. Demonstration of a small programmable quantum computer with atomic qubits

    NASA Astrophysics Data System (ADS)

    Debnath, S.; Linke, N. M.; Figgatt, C.; Landsman, K. A.; Wright, K.; Monroe, C.

    2016-08-01

    Quantum computers can solve certain problems more efficiently than any possible conventional computer. Small quantum algorithms have been demonstrated on multiple quantum computing platforms, many specifically tailored in hardware to implement a particular algorithm or execute a limited number of computational paths. Here we demonstrate a five-qubit trapped-ion quantum computer that can be programmed in software to implement arbitrary quantum algorithms by executing any sequence of universal quantum logic gates. We compile algorithms into a fully connected set of gate operations that are native to the hardware and have a mean fidelity of 98 per cent. Reconfiguring these gate sequences provides the flexibility to implement a variety of algorithms without altering the hardware. As examples, we implement the Deutsch-Jozsa and Bernstein-Vazirani algorithms with average success rates of 95 and 90 per cent, respectively. We also perform a coherent quantum Fourier transform on five trapped-ion qubits for phase estimation and period finding with average fidelities of 62 and 84 per cent, respectively. This small quantum computer can be scaled to larger numbers of qubits within a single register, and can be further expanded by connecting several such modules through ion shuttling or photonic quantum channels.
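
    For a flavour of one of the algorithms demonstrated, the sketch below simulates Bernstein-Vazirani on a statevector: one query to the phase oracle (-1)^(s.x), sandwiched between Hadamard layers, recovers the hidden string s. This is plain linear algebra and models nothing of the ion-trap hardware.

        # Statevector simulation of the Bernstein-Vazirani algorithm.
        import numpy as np

        def bernstein_vazirani(s_bits):
            n, dim = len(s_bits), 2 ** len(s_bits)
            H = np.array([[1., 1.], [1., -1.]]) / np.sqrt(2)
            Hn = H
            for _ in range(n - 1):
                Hn = np.kron(Hn, H)                   # Hadamard on every qubit
            s = int("".join(map(str, s_bits)), 2)
            phase = np.array([(-1) ** bin(x & s).count("1") for x in range(dim)])
            state = np.zeros(dim)
            state[0] = 1.0                            # |00...0>
            state = Hn @ (phase * (Hn @ state))       # H layer, oracle, H layer
            return [int(b) for b in format(int(np.argmax(np.abs(state))), f"0{n}b")]

        print(bernstein_vazirani([1, 0, 1]))          # -> [1, 0, 1]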

  18. Parallelizing flow-accumulation calculations on graphics processing units—From iterative DEM preprocessing algorithm to recursive multiple-flow-direction algorithm

    NASA Astrophysics Data System (ADS)

    Qin, Cheng-Zhi; Zhan, Lijun

    2012-06-01

    As one of the important tasks in digital terrain analysis, the calculation of flow accumulations from gridded digital elevation models (DEMs) usually involves two steps in a real application: (1) using an iterative DEM preprocessing algorithm to remove the depressions and flat areas commonly contained in real DEMs, and (2) using a recursive flow-direction algorithm to calculate the flow accumulation for every cell in the DEM. Because both algorithms are computationally intensive, quick calculation of the flow accumulations from a DEM (especially for a large area) presents a practical challenge to personal computer (PC) users. In recent years, rapid increases in hardware capacity of the graphics processing units (GPUs) provided in modern PCs have made it possible to meet this challenge in a PC environment. Parallel computing on GPUs using a compute-unified-device-architecture (CUDA) programming model has been explored to speed up the execution of the single-flow-direction algorithm (SFD). However, the parallel implementation on a GPU of the multiple-flow-direction (MFD) algorithm, which generally performs better than the SFD algorithm, has not been reported. Moreover, GPU-based parallelization of the DEM preprocessing step in the flow-accumulation calculations has not been addressed. This paper proposes a parallel approach to calculate flow accumulations (including both iterative DEM preprocessing and a recursive MFD algorithm) on a CUDA-compatible GPU. For the parallelization of an MFD algorithm (MFD-md), two different parallelization strategies using a GPU are explored. The first parallelization strategy, which has been used in the existing parallel SFD algorithm on GPU, has the problem of computing redundancy. Therefore, we designed a parallelization strategy based on graph theory. The application results show that the proposed parallel approach to calculate flow accumulations on a GPU performs much faster than either sequential algorithms or other parallel GPU-based algorithms based on existing parallelization strategies.
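
    The recursion at the heart of step (2) fits in a few lines for a tiny depression-free DEM. The sequential toy below (not the paper's GPU code) splits each cell's outflow among all lower 8-neighbours in proportion to the elevation drop and memoizes the per-cell accumulation.

        # Recursive multiple-flow-direction (MFD) accumulation, sequential toy.
        import numpy as np
        from functools import lru_cache

        dem = np.array([[9., 8., 7.],
                        [8., 6., 4.],
                        [7., 4., 1.]])
        nx, ny = dem.shape
        NBR = [(-1,-1), (-1,0), (-1,1), (0,-1), (0,1), (1,-1), (1,0), (1,1)]

        def downslope_fractions(i, j):
            drops = {}
            for di, dj in NBR:
                u, v = i + di, j + dj
                if 0 <= u < nx and 0 <= v < ny and dem[u, v] < dem[i, j]:
                    drops[(u, v)] = dem[i, j] - dem[u, v]
            s = sum(drops.values())
            return {c: d / s for c, d in drops.items()} if s else {}

        @lru_cache(maxsize=None)
        def accumulation(i, j):
            acc = 1.0                                  # the cell's own area
            for di, dj in NBR:                         # gather upslope inflow
                u, v = i + di, j + dj
                if 0 <= u < nx and 0 <= v < ny:
                    acc += downslope_fractions(u, v).get((i, j), 0.0) * accumulation(u, v)
            return acc

        print(accumulation(2, 2))                      # global minimum drains all 9 cells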

  19. The Air Force Geophysics Laboratory Standalone Data Acquisition System: A Functional Description.

    DTIC Science & Technology

    1980-10-09

    the board are a buffer for the RUN/HALT front panel switch and a retriggerable one-shot multivibrator. This latter circuit senses the SRUN pulse train...recording on the data tapes, and providing the master timing source for data acquisition. An Electronic Research Company (ERC) model 2446 digital...the computer is fed to a retriggerable one-shot multivibrator on the board. (SRUN consists of a pulse train that is present when the computer is running

  20. Three Way Comparison between Two OMI/Aura and One POLDER/PARASOL Cloud Pressure Products

    NASA Technical Reports Server (NTRS)

    Sneep, M.; deHaan, J. F.; Stammes, P.; Vanbaunce, C.; Joiner, J.; Vasilkov, A. P.; Levelt, P. F.

    2007-01-01

    The cloud pressures determined by three different algorithms, operating on reflectances measured by two space-borne instruments in the "A" train, are compared with each other. The retrieval algorithms are based on absorption in the oxygen A-band near 760 nm, collision-induced absorption by oxygen (O2-O2) near 477 nm, and the filling-in of Fraunhofer lines by rotational Raman scattering. The first algorithm operates on data collected by the POLDER instrument on board PARASOL, while the latter two operate on data from the OMI instrument on board Aura. The satellites sample the same air mass within about 15 minutes. Using one month of data, the cloud pressures from the three algorithms are found to show similar behavior, with correlation coefficients larger than 0.85 between the data sets for thick clouds. The average differences in cloud pressure are also small, between 2 and 45 hPa, for the whole data set. For optically thin to medium-thick clouds, the cloud pressure distribution found by POLDER is very similar to that found by OMI using the O2-O2 absorption. Somewhat larger differences are found for very thick clouds, and we hypothesise that the strong absorption in the oxygen A-band causes the POLDER instrument to retrieve lower pressures for those scenes.

  1. Exploration of a physiologically-inspired hearing-aid algorithm using a computer model mimicking impaired hearing.

    PubMed

    Jürgens, Tim; Clark, Nicholas R; Lecluyse, Wendy; Meddis, Ray

    2016-01-01

    To use a computer model of impaired hearing to explore the effects of a physiologically-inspired hearing-aid algorithm on a range of psychoacoustic measures. A computer model of a hypothetical impaired listener's hearing was constructed by adjusting parameters of a computer model of normal hearing. Absolute thresholds, estimates of compression, and frequency selectivity (summarized as a hearing profile) were assessed using this model with and without pre-processing the stimuli by a hearing-aid algorithm. The influence of different settings of the algorithm on the impaired profile was investigated. To validate the model predictions, the effect of the algorithm on the hearing profiles of human impaired listeners was measured. A computer model simulating impaired hearing (total absence of basilar membrane compression) was used, and three hearing-impaired listeners participated. The hearing profiles of the model and the listeners showed substantial changes when the test stimuli were pre-processed by the hearing-aid algorithm. These changes consisted of lower absolute thresholds, steeper temporal masking curves, and sharper psychophysical tuning curves. The hearing-aid algorithm moved the impaired hearing profile of the model toward a normal hearing profile. Qualitatively similar results were found with the impaired listeners' hearing profiles.

  2. Considerations for the Use of STEREO -HI Data for Astronomical Studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tappin, S. J., E-mail: james.tappin@stfc.ac.uk

    Recent refinements to the photometric calibrations of the Heliospheric Imagers (HI) on board the Solar TErrestrial RElations Observatory (STEREO) have revealed a number of subtle effects in the measurement of stellar signals with those instruments. These effects need to be considered in the interpretation of STEREO-HI data for astronomy. In this paper we present an analysis of these effects and how to compensate for them when using STEREO-HI data for astronomical studies. We determine how saturation of the HI CCD detectors affects the apparent count rates of stars after the on-board summing of pixels and exposures. Single-exposure calibration images are analyzed and compared with binned and summed science images to determine the influence of saturation on the science images. We also analyze how the on-board cosmic-ray scrubbing algorithm affects stellar images. We determine how this interacts with the variations of instrument pointing to affect measurements of stars. We find that saturation is a significant effect only for the brightest stars, and that its onset is gradual. We also find that degraded pointing stability, whether of the entire spacecraft or of the imagers, leads to reduced stellar count rates and also increased variation thereof through interaction with the on-board cosmic-ray scrubbing algorithm. We suggest ways in which these effects can be mitigated for astronomical studies and also suggest how the situation can be improved for future imagers.

  3. Special-purpose computer for holography HORN-2

    NASA Astrophysics Data System (ADS)

    Ito, Tomoyoshi; Eldeib, Hesham; Yoshida, Kenji; Takahashi, Shinya; Yabe, Takashi; Kunugi, Tomoaki

    1996-01-01

    We designed and built a special-purpose computer for holography, HORN-2 (HOlographic ReconstructioN). HORN-2 calculates light intensity at a speed of 0.3 Gflops per board with single (32-bit floating point) precision. The cost of one board is 500,000 Japanese yen (5,000 US dollars). We made three boards; operating them in parallel, we obtain about 1 Gflops.

  4. Report of the Defense Science Board Task Force on Military Applications of New-Generation Computing Technologies.

    DTIC Science & Technology

    1984-12-01

    1980's we are seeing enhancement of breadth, power, and accessibility of computers in many dimensions: o Powerful, costly, fragile mainframes for...During the 1980's we are seeing enhancement of breadth, power and accessibility of computers in many dimensions. (1) Powerful, costly, fragile mainframes...MEMORANDUM FOR THE CHAIRMAN, DEFENSE...SUBJECT: Defense Science Board Task Force on Supercomputer Applications. You are requested to

  5. Hardware Implementation of Lossless Adaptive and Scalable Hyperspectral Data Compression for Space

    NASA Technical Reports Server (NTRS)

    Aranki, Nazeeh; Keymeulen, Didier; Bakhshi, Alireza; Klimesh, Matthew

    2009-01-01

    On-board lossless hyperspectral data compression reduces data volume in order to meet the limited downlink capabilities of NASA and DoD missions. The technique also improves signature extraction, object recognition and feature classification capabilities by providing exact reconstructed data on constrained downlink resources. At JPL a novel, adaptive and predictive technique for lossless compression of hyperspectral data was recently developed. This technique uses an adaptive filtering method and achieves a combination of low complexity and compression effectiveness that far exceeds state-of-the-art techniques currently in use. The JPL-developed 'Fast Lossless' algorithm requires no training data or other specific information about the nature of the spectral bands for a fixed instrument dynamic range. It is of low computational complexity and thus well-suited for implementation in hardware. A modified form of the algorithm that is better suited for data from pushbroom instruments is generally appropriate for flight implementation. A scalable field programmable gate array (FPGA) hardware implementation was developed. The FPGA implementation achieves a throughput performance of 58 Msamples/sec, which can be increased to over 100 Msamples/sec in a parallel implementation that uses twice the hardware resources. This paper describes the hardware implementation of the 'Modified Fast Lossless' compression algorithm on an FPGA. The FPGA implementation targets the current state-of-the-art FPGAs (Xilinx Virtex IV and V families) and compresses one sample every clock cycle to provide a fast and practical real-time solution for space applications.
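
    As a generic illustration of the predict-then-encode idea behind such compressors (expressly not the JPL 'Fast Lossless' algorithm), the sketch below predicts each sample from the previous spectral band and Golomb-Rice codes the mapped residuals.

        # Previous-band prediction + Golomb-Rice coding of residuals.
        import numpy as np

        def rice_encode(u, k):
            q, r = u >> k, u & ((1 << k) - 1)       # unary quotient, k-bit remainder
            return "1" * q + "0" + format(r, f"0{k}b")

        def compress_band(band, prev_band, k=4):
            residual = band.astype(int) - prev_band.astype(int)
            # map signed residuals to non-negative integers: 0, -1, 1, -2, ...
            mapped = np.where(residual >= 0, 2 * residual, -2 * residual - 1)
            return "".join(rice_encode(int(u), k) for u in mapped.ravel())

        rng = np.random.default_rng(0)
        prev = rng.integers(0, 4096, (8, 8))
        band = prev + rng.integers(-10, 10, (8, 8))  # spectrally correlated band
        bits = compress_band(band, prev)
        print(len(bits), "bits vs", band.size * 12, "raw bits")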

  6. A Parallel Nonrigid Registration Algorithm Based on B-Spline for Medical Images

    PubMed Central

    Wang, Yangping; Wang, Song

    2016-01-01

    The nonrigid registration algorithm based on B-spline Free-Form Deformation (FFD) plays a key role and is widely applied in medical image processing due to its flexibility and robustness. However, it requires a tremendous amount of computing time to obtain accurate registration results, especially for large amounts of medical image data. To address this issue, a parallel nonrigid registration algorithm based on B-splines is proposed in this paper. First, the Logarithm Squared Difference (LSD) is used as the similarity metric in the B-spline registration algorithm to improve registration precision. After that, we create a parallel computing strategy and lookup tables (LUTs) to reduce the complexity of the B-spline registration algorithm. As a result, the computing time of three time-consuming steps, B-spline interpolation, LSD computation, and the analytic gradient computation of LSD, is efficiently reduced, because the B-spline registration algorithm employs the Nonlinear Conjugate Gradient (NCG) optimization method. Experimental results on registration quality and execution efficiency for a large set of medical images show that our algorithm achieves better registration accuracy, in terms of the differences between the best deformation fields and ground truth, and a speedup of 17 times over the single-threaded CPU implementation, thanks to the powerful parallel computing ability of the Graphics Processing Unit (GPU). PMID:28053653
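
    The lookup-table idea is easy to sketch: for a fixed control-point spacing, the four cubic B-spline weights depend only on a voxel's fractional offset within its control-grid cell, so they can be tabulated once instead of re-evaluated at every optimization iteration. The spacing below is an illustrative assumption.

        # Precomputed cubic B-spline weight table for FFD interpolation.
        import numpy as np

        def bspline_weights(t):
            # uniform cubic B-spline blending functions, 0 <= t < 1
            return np.array([(1 - t) ** 3 / 6.0,
                             (3 * t**3 - 6 * t**2 + 4) / 6.0,
                             (-3 * t**3 + 3 * t**2 + 3 * t + 1) / 6.0,
                             t**3 / 6.0])

        def build_lut(spacing):
            # one row of four weights per in-cell offset 0 .. spacing-1
            return np.stack([bspline_weights(o / spacing) for o in range(spacing)])

        LUT = build_lut(spacing=8)
        assert np.allclose(LUT.sum(axis=1), 1.0)   # weights sum to one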

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Zheming; Yoshii, Kazutomo; Finkel, Hal

    Open Computing Language (OpenCL) is a high-level language that enables software programmers to explore Field Programmable Gate Arrays (FPGAs) for application acceleration. The Intel FPGA software development kit (SDK) for OpenCL allows a user to specify applications at a high level and explore the performance of low-level hardware acceleration. In this report, we present the FPGA performance and power consumption results of the single-precision floating-point vector add OpenCL kernel using the Intel FPGA SDK for OpenCL on the Nallatech 385A FPGA board. The board features an Arria 10 FPGA. We evaluate the FPGA implementations using the compute unit duplication and kernel vectorization optimization techniques. On the Nallatech 385A FPGA board, the maximum compute kernel bandwidth we achieve is 25.8 GB/s, approximately 76% of the peak memory bandwidth. The power consumption of the FPGA device when running the kernels ranges from 29W to 42W.

  8. 160-fold acceleration of the Smith-Waterman algorithm using a field programmable gate array (FPGA)

    PubMed Central

    Li, Isaac TS; Shum, Warren; Truong, Kevin

    2007-01-01

    Background To infer homology and subsequently gene function, the Smith-Waterman (SW) algorithm is used to find the optimal local alignment between two sequences. When searching sequence databases that may contain hundreds of millions of sequences, this algorithm becomes computationally expensive. Results In this paper, we focused on accelerating the Smith-Waterman algorithm by using FPGA-based hardware that implemented a module for computing the score of a single cell of the SW matrix. Then, using a grid of this module, the entire SW matrix was computed at the speed of field propagation through the FPGA circuit. These modifications dramatically accelerated the algorithm's computation time by up to 160-fold compared to a pure software implementation running on the same FPGA with an Altera Nios II soft processor. Conclusion This design of FPGA-accelerated hardware offers a promising new direction for improving the computation of genomic database searching. PMID:17555593

  9. 160-fold acceleration of the Smith-Waterman algorithm using a field programmable gate array (FPGA).

    PubMed

    Li, Isaac T S; Shum, Warren; Truong, Kevin

    2007-06-07

    To infer homology and subsequently gene function, the Smith-Waterman (SW) algorithm is used to find the optimal local alignment between two sequences. When searching sequence databases that may contain hundreds of millions of sequences, this algorithm becomes computationally expensive. In this paper, we focused on accelerating the Smith-Waterman algorithm by using FPGA-based hardware that implemented a module for computing the score of a single cell of the SW matrix. Then, using a grid of this module, the entire SW matrix was computed at the speed of field propagation through the FPGA circuit. These modifications dramatically accelerated the algorithm's computation time by up to 160-fold compared to a pure software implementation running on the same FPGA with an Altera Nios II soft processor. This design of FPGA-accelerated hardware offers a promising new direction for improving the computation of genomic database searching.
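
    For reference, the recurrence that the FPGA grid evaluates is the standard Smith-Waterman cell update; the plain-Python version below (with a simple linear gap penalty) shows the three-neighbour dependency that makes anti-diagonal parallelism possible in hardware.

        # Reference Smith-Waterman local alignment score (linear gaps).
        def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
            H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
            best = 0
            for i in range(1, len(a) + 1):
                for j in range(1, len(b) + 1):
                    s = match if a[i - 1] == b[j - 1] else mismatch
                    H[i][j] = max(0,
                                  H[i - 1][j - 1] + s,   # diagonal: (mis)match
                                  H[i - 1][j] + gap,     # up: gap in b
                                  H[i][j - 1] + gap)     # left: gap in a
                    best = max(best, H[i][j])
            return best

        print(smith_waterman("ACACACTA", "AGCACACA"))    # best local score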

  10. A New Augmentation Based Algorithm for Extracting Maximal Chordal Subgraphs.

    PubMed

    Bhowmick, Sanjukta; Chen, Tzu-Yi; Halappanavar, Mahantesh

    2015-02-01

    A graph is chordal if every cycle of length greater than three contains an edge between non-adjacent vertices. Chordal graphs are of interest both theoretically, since they admit polynomial time solutions to a range of NP-hard graph problems, and practically, since they arise in many applications including sparse linear algebra, computer vision, and computational biology. A maximal chordal subgraph is a chordal subgraph that is not a proper subgraph of any other chordal subgraph. Existing algorithms for computing maximal chordal subgraphs depend on dynamically ordering the vertices, which is an inherently sequential process and therefore limits the algorithms' parallelizability. In this paper we explore techniques to develop a scalable parallel algorithm for extracting a maximal chordal subgraph. We demonstrate that an earlier attempt at developing a parallel algorithm may induce a non-optimal vertex ordering and is therefore not guaranteed to terminate with a maximal chordal subgraph. We then give a new algorithm that first computes and then repeatedly augments a spanning chordal subgraph. After proving that the algorithm terminates with a maximal chordal subgraph, we then demonstrate that this algorithm is more amenable to parallelization and that the parallel version also terminates with a maximal chordal subgraph. That said, the complexity of the new algorithm is higher than that of the previous parallel algorithm, although the earlier algorithm computes a chordal subgraph which is not guaranteed to be maximal. We experimented with our augmentation-based algorithm on both synthetic and real-world graphs. We provide scalability results and also explore the effect of different choices for the initial spanning chordal subgraph on both the running time and on the number of edges in the maximal chordal subgraph.

  11. Advanced On-Board Processor (AOP). [for future spacecraft applications

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The Advanced On-board Processor (AOP) uses large-scale integration throughout and is the most advanced space-qualified computer of its class in existence today. It was designed to satisfy most spacecraft requirements anticipated over the next several years. The AOP design utilizes custom metallized multigate arrays (CMMA) which have been designed specifically for this computer. This approach provides the most efficient use of circuits; it reduces volume, weight and assembly costs, and provides a significant increase in reliability through the significant reduction in conventional circuit interconnections. The required 69 CMMA packages are assembled on a single multilayer printed circuit board which, together with associated connectors, constitutes the complete AOP. This approach also reduces conventional interconnections, thus further reducing weight, volume and assembly costs.

  12. The Ship Movement Trajectory Prediction Algorithm Using Navigational Data Fusion.

    PubMed

    Borkowski, Piotr

    2017-06-20

    It is essential for a marine navigator conducting maneuvers of his ship at sea to know the future positions of his own ship and of target ships in a specific time span in order to effectively resolve collision situations. This article presents an algorithm for ship movement trajectory prediction which, through data fusion, takes into account measurements of the ship's current position from a number of doubled autonomous devices. This increases the reliability and accuracy of the prediction. The algorithm has been implemented in NAVDEC, a navigation decision support system, and is used in practice on board ships.

  13. The Ship Movement Trajectory Prediction Algorithm Using Navigational Data Fusion

    PubMed Central

    Borkowski, Piotr

    2017-01-01

    It is essential for a marine navigator conducting maneuvers of his ship at sea to know the future positions of his own ship and of target ships in a specific time span in order to effectively resolve collision situations. This article presents an algorithm for ship movement trajectory prediction which, through data fusion, takes into account measurements of the ship's current position from a number of doubled autonomous devices. This increases the reliability and accuracy of the prediction. The algorithm has been implemented in NAVDEC, a navigation decision support system, and is used in practice on board ships. PMID:28632176
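
    One simple way to fuse doubled position devices, shown here as an illustrative assumption rather than NAVDEC's actual method, is inverse-variance weighting of the two fixes, which gives the minimum-variance linear combination.

        # Fuse two independent position fixes by inverse-variance weighting.
        import numpy as np

        def fuse(pos_a, var_a, pos_b, var_b):
            w_a, w_b = 1.0 / var_a, 1.0 / var_b
            fused = (w_a * np.asarray(pos_a) + w_b * np.asarray(pos_b)) / (w_a + w_b)
            return fused, 1.0 / (w_a + w_b)   # fused variance <= min(var_a, var_b)

        print(fuse([54.0001, 18.0002], 25.0, [54.0003, 18.0001], 100.0))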

  14. Star adaptation for two algorithms used on serial computers

    NASA Technical Reports Server (NTRS)

    Howser, L. M.; Lambiotte, J. J., Jr.

    1974-01-01

    Two representative algorithms used on a serial computer and presently executed on the Control Data Corporation 6000 computer were adapted to execute efficiently on the Control Data STAR-100 computer. Gaussian elimination for the solution of simultaneous linear equations and the Gauss-Legendre quadrature formula for the approximation of an integral are the two algorithms discussed. A description is given of how the programs were adapted for STAR and why these adaptations were necessary to obtain an efficient STAR program. Some points to consider when adapting an algorithm for STAR are discussed. Program listings of the 6000 version coded in 6000 FORTRAN, the adapted STAR version coded in 6000 FORTRAN, and the STAR version coded in STAR FORTRAN are presented in the appendices.

  15. On-line, adaptive state estimator for active noise control

    NASA Technical Reports Server (NTRS)

    Lim, Tae W.

    1994-01-01

    Dynamic characteristics of airframe structures are expected to vary as aircraft flight conditions change. Accurate knowledge of the changing dynamic characteristics is crucial to enhancing the performance of the active noise control system using feedback control. This research investigates the development of an adaptive, on-line state estimator using a neural network concept to conduct active noise control. In this research, an algorithm has been developed that can be used to estimate displacement and velocity responses at any locations on the structure from a limited number of acceleration measurements and input force information. The algorithm employs band-pass filters to extract from the measurement signal the frequency contents corresponding to a desired mode. The filtered signal is then used to train a neural network which consists of a linear neuron with three weights. The structure of the neural network is designed as simple as possible to increase the sampling frequency as much as possible. The weights obtained through neural network training are then used to construct the transfer function of a mode in z-domain and to identify modal properties of each mode. By using the identified transfer function and interpolating the mode shape obtained at sensor locations, the displacement and velocity responses are estimated with reasonable accuracy at any locations on the structure. The accuracy of the response estimates depends on the number of modes incorporated in the estimates and the number of sensors employed to conduct mode shape interpolation. Computer simulation demonstrates that the algorithm is capable of adapting to the varying dynamic characteristics of structural properties. Experimental implementation of the algorithm on a DSP (digital signal processing) board for a plate structure is underway. The algorithm is expected to reach the sampling frequency range of about 10 kHz to 20 kHz which needs to be maintained for a typical active noise control application.
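
    The three-weight linear neuron can be sketched as an LMS-adapted FIR filter; the training data and constants below are invented for illustration (in the estimator proper, the input would be a band-pass-filtered acceleration signal for one mode).

        # LMS adaptation of a three-weight linear neuron.
        import numpy as np

        def lms_train(x, d, n_weights=3, mu=0.01):
            """Adapt w so that w . [x[k], x[k-1], x[k-2]] tracks d[k]."""
            w = np.zeros(n_weights)
            for k in range(n_weights - 1, len(x)):
                u = x[k - n_weights + 1:k + 1][::-1]   # newest sample first
                e = d[k] - w @ u                       # instantaneous error
                w += mu * e * u                        # LMS gradient step
            return w

        rng = np.random.default_rng(0)
        x = rng.standard_normal(5000)                  # stand-in input signal
        d = 0.8 * x + 0.1 * np.roll(x, 1)              # target response
        print(np.round(lms_train(x, d), 2))            # ~ [0.8, 0.1, 0.0]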

  16. Development and application of unified algorithms for problems in computational science

    NASA Technical Reports Server (NTRS)

    Shankar, Vijaya; Chakravarthy, Sukumar

    1987-01-01

    A framework is presented for developing computationally unified numerical algorithms for solving nonlinear equations that arise in modeling various problems in mathematical physics. The concept of computational unification is an attempt to encompass efficient solution procedures for computing various nonlinear phenomena that may occur in a given problem. For example, in Computational Fluid Dynamics (CFD), a unified algorithm will be one that allows for solutions to subsonic (elliptic), transonic (mixed elliptic-hyperbolic), and supersonic (hyperbolic) flows for both steady and unsteady problems. The objectives are: development of superior unified algorithms emphasizing accuracy and efficiency aspects; development of codes based on selected algorithms leading to validation; application of mature codes to realistic problems; and extension/application of CFD-based algorithms to problems in other areas of mathematical physics. The ultimate objective is to achieve integration of multidisciplinary technologies to enhance synergism in the design process through computational simulation. Specific unified algorithms are presented for a hierarchy of gas dynamics equations, together with their applications to two other areas: electromagnetic scattering, and laser-materials interaction accounting for melting.

  17. Exact and heuristic algorithms for Space Information Flow.

    PubMed

    Uwitonze, Alfred; Huang, Jiaqing; Ye, Yuanqing; Cheng, Wenqing; Li, Zongpeng

    2018-01-01

    Space Information Flow (SIF) is a new promising research area that studies network coding in geometric space, such as Euclidean space. The design of algorithms that compute the optimal SIF solutions remains one of the key open problems in SIF. This work proposes the first exact SIF algorithm and a heuristic SIF algorithm that compute min-cost multicast network coding for N (N ≥ 3) given terminal nodes in 2-D Euclidean space. Furthermore, we find that the Butterfly network in Euclidean space is the second example besides the Pentagram network where SIF is strictly better than Euclidean Steiner minimal tree. The exact algorithm design is based on two key techniques: Delaunay triangulation and linear programming. Delaunay triangulation technique helps to find practically good candidate relay nodes, after which a min-cost multicast linear programming model is solved over the terminal nodes and the candidate relay nodes, to compute the optimal multicast network topology, including the optimal relay nodes selected by linear programming from all the candidate relay nodes and the flow rates on the connection links. The heuristic algorithm design is also based on Delaunay triangulation and linear programming techniques. The exact algorithm can achieve the optimal SIF solution with an exponential computational complexity, while the heuristic algorithm can achieve the sub-optimal SIF solution with a polynomial computational complexity. We prove the correctness of the exact SIF algorithm. The simulation results show the effectiveness of the heuristic SIF algorithm.
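
    A sketch of the candidate-relay step under stated assumptions: Delaunay-triangulate the terminals with SciPy and take the triangle centroids as candidate relay nodes (the centroid choice is an illustrative assumption; the linear-programming selection of relays and flow rates is omitted).

        # Delaunay triangulation of terminals -> candidate relay nodes.
        import numpy as np
        from scipy.spatial import Delaunay

        terminals = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 3.0], [2.0, 1.0]])
        tri = Delaunay(terminals)
        candidates = terminals[tri.simplices].mean(axis=1)  # one centroid per triangle
        print(candidates)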

  18. Onboard Radar Processing Development for Rapid Response Applications

    NASA Technical Reports Server (NTRS)

    Lou, Yunling; Chien, Steve; Clark, Duane; Doubleday, Josh; Muellerschoen, Ron; Wang, Charles C.

    2011-01-01

    We are developing onboard processor (OBP) technology to streamline data acquisition on-demand and explore the potential of the L-band SAR instrument onboard the proposed DESDynI mission and UAVSAR for rapid response applications. The technology would enable the observation and use of surface change data over rapidly evolving natural hazards, both as an aid to scientific understanding and to provide timely data to agencies responsible for the management and mitigation of natural disasters. We are adapting complex science algorithms for surface water extent to detect flooding, snow/water/ice classification to assist in transportation/ shipping forecasts, and repeat-pass change detection to detect disturbances. We are near completion of the development of a custom FPGA board to meet the specific memory and processing needs of L-band SAR processor algorithms and high speed interfaces to reformat and route raw radar data to/from the FPGA processor board. We have also developed a high fidelity Matlab model of the SAR processor that is modularized and parameterized for ease to prototype various SAR processor algorithms targeted for the FPGA. We will be testing the OBP and rapid response algorithms with UAVSAR data to determine the fidelity of the products.

  19. Tools for Analyzing Computing Resource Management Strategies and Algorithms for SDR Clouds

    NASA Astrophysics Data System (ADS)

    Marojevic, Vuk; Gomez-Miguelez, Ismael; Gelonch, Antoni

    2012-09-01

    Software defined radio (SDR) clouds centralize the computing resources of base stations. The computing resource pool is shared between radio operators and dynamically loads and unloads digital signal processing chains for providing wireless communications services on demand. Each new user session request particularly requires the allocation of computing resources for executing the corresponding SDR transceivers. The huge amount of computing resources of SDR cloud data centers and the numerous session requests at certain hours of a day require efficient computing resource management. We propose a hierarchical approach, where the data center is divided into clusters that are managed in a distributed way. This paper presents a set of computing resource management tools for analyzing computing resource management strategies and algorithms for SDR clouds. We use the tools to evaluate different strategies and algorithms. The results show that more sophisticated algorithms can achieve higher resource occupations and that a tradeoff exists between cluster size and algorithm complexity.

  20. Generalization of the Lord-Wingersky Algorithm to Computing the Distribution of Summed Test Scores Based on Real-Number Item Scores

    ERIC Educational Resources Information Center

    Kim, Seonghoon

    2013-01-01

    With known item response theory (IRT) item parameters, Lord and Wingersky provided a recursive algorithm for computing the conditional frequency distribution of number-correct test scores, given proficiency. This article presents a generalized algorithm for computing the conditional distribution of summed test scores involving real-number item…
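
    The original dichotomous recursion is short enough to state directly; the sketch below builds the conditional number-correct distribution one item at a time (the article's generalization to real-number item scores is not reproduced).

        # Lord-Wingersky recursion for dichotomous items at a fixed theta.
        def lord_wingersky(p):
            """p: list of P(correct | theta) per item; returns f with f[x] = P(X = x)."""
            f = [1.0]                        # zero items: score 0 with probability 1
            for pj in p:
                g = [0.0] * (len(f) + 1)
                for x, fx in enumerate(f):
                    g[x] += fx * (1 - pj)    # item answered incorrectly
                    g[x + 1] += fx * pj      # item answered correctly
                f = g
            return f

        dist = lord_wingersky([0.8, 0.6, 0.4])
        print(dist, sum(dist))               # distribution sums to 1.0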

  1. Unified algorithm of cone optics to compute solar flux on central receiver

    NASA Astrophysics Data System (ADS)

    Grigoriev, Victor; Corsi, Clotilde

    2017-06-01

    Analytical algorithms to compute the flux distribution on a central receiver are considered a faster alternative to ray tracing. Many variants exist, with HFLCAL and UNIZAR being the most recognized and verified. In this work, a generalized algorithm is presented which is valid for an arbitrary radially symmetric sun shape. Heliostat mirrors can have a nonrectangular profile, and the effects of shading and blocking, strong defocusing and astigmatism can be taken into account. The algorithm is suitable for parallel computing and can benefit from hardware acceleration of polygon texturing.

  2. Symbolic Computation of Strongly Connected Components Using Saturation

    NASA Technical Reports Server (NTRS)

    Zhao, Yang; Ciardo, Gianfranco

    2010-01-01

    Finding strongly connected components (SCCs) in the state-space of discrete-state models is a critical task in formal verification of LTL and fair CTL properties, but the potentially huge number of reachable states and SCCs constitutes a formidable challenge. This paper is concerned with computing the sets of states in SCCs or terminal SCCs of asynchronous systems. Because of its advantages in many applications, we employ saturation on two previously proposed approaches: the Xie-Beerel algorithm and transitive closure. First, saturation speeds up state-space exploration when computing each SCC in the Xie-Beerel algorithm. Then, our main contribution is a novel algorithm to compute the transitive closure using saturation. Experimental results indicate that our improved algorithms achieve a clear speedup over previous algorithms in some cases. With the help of the new transitive closure computation algorithm, up to 10^150 SCCs can be explored within a few seconds.
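
    The Xie-Beerel skeleton that the paper accelerates can be sketched with explicit Python sets standing in for the symbolic (decision-diagram) sets: intersect the forward and backward reachable sets of a seed state to obtain one SCC, then recurse on the remaining partitions.

        # Set-based Xie-Beerel-style SCC enumeration (explicit, not symbolic).
        def reach(seed, states, step):
            frontier, seen = {seed}, {seed}
            while frontier:
                frontier = {t for s in frontier for t in step(s)} & states - seen
                seen |= frontier
            return seen

        def sccs(states, succ, pred):
            if not states:
                return []
            v = next(iter(states))
            fwd, bwd = reach(v, states, succ), reach(v, states, pred)
            scc = fwd & bwd
            return [scc] + sccs(fwd - scc, succ, pred) + sccs(states - fwd, succ, pred)

        E = {1: [2], 2: [3], 3: [1, 4], 4: [5], 5: [4]}   # edges
        R = {1: [3], 2: [1], 3: [2], 4: [3, 5], 5: [4]}   # reversed edges
        print(sccs({1, 2, 3, 4, 5}, lambda s: E[s], lambda s: R[s]))  # {1,2,3}, {4,5}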

  3. Synthesis of the unmanned aerial vehicle remote control augmentation system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tomczyk, Andrzej, E-mail: A.Tomczyk@prz.edu.pl

    A medium-size Unmanned Aerial Vehicle (UAV) usually flies as an autonomous aircraft, including the automatic take-off and landing phases. However, in the case of an on-board control system failure, remote steering is used as an emergency procedure, so remote manual control of the UAV is used most often during the take-off and landing phases. Depending on the UAV take-off mass and speed (total energy), a potential crash can be very dangerous for the airplane and the environment, so the handling qualities of the UAV are important from the pilot-operator's point of view. In many cases the dynamic properties of a remotely controlled UAV are not suitable for obtaining the desired handling qualities, in which case a control augmentation system (CAS) should be applied. Because of the potential failure of the on-board control system, the better solution is to place the CAS algorithms on the ground station computers. A method of shaping UAV handling qualities in the case of basic control system failure is presented in this paper. The main idea of this method is that the UAV reaction to the operator's steering signals should be similar, almost the same, as the reaction of an 'ideal' remotely controlled aircraft. The model-following method was used to calculate the controller parameters. The numerical example concerns the medium-size MP-02A UAV applied as an aerial observer system.

  4. Pixel Perfect

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perrine, Kenneth A.; Hopkins, Derek F.; Lamarche, Brian L.

    2005-09-01

    Biologists and computer engineers at Pacific Northwest National Laboratory have specified, designed, and implemented a hardware/software system for performing real-time, multispectral image processing on a confocal microscope. This solution is intended to extend the capabilities of the microscope, enabling scientists to conduct advanced experiments on cell signaling and other kinds of protein interactions. FRET (fluorescence resonance energy transfer) techniques are used to locate and monitor protein activity. In FRET, it is critical that spectral images be precisely aligned with each other despite disturbances in the physical imaging path caused by imperfections in lenses and cameras, and expansion and contraction of materials due to temperature changes. The central importance of this work is therefore automatic image registration. This runs in a framework that guarantees real-time performance (processing pairs of 1024x1024, 8-bit images at 15 frames per second) and enables the addition of other types of advanced image processing algorithms such as image feature characterization. The supporting system architecture consists of a Visual Basic front-end containing a series of on-screen interfaces for controlling various aspects of the microscope and a script engine for automation. One of the controls is an ActiveX component written in C++ for handling the control and transfer of images. This component interfaces with a pair of LVDS image capture boards and a PCI board containing a 6-million gate Xilinx Virtex-II FPGA. Several types of image processing are performed on the FPGA in a pipelined fashion, including the image registration. The FPGA offloads work that would otherwise need to be performed by the main CPU and has a guaranteed real-time throughput. Image registration is performed in the FPGA by applying a cubic warp to one image to precisely align it with the other image. Before each experiment, an automated calibration procedure is run in order to set up the cubic warp. During image acquisitions, the cubic warp is evaluated by way of forward differencing. Unwanted pixelation artifacts are minimized by bilinear sampling. The resulting system is state-of-the-art for biological imaging. Precisely registered images enable the reliable use of FRET techniques. In addition, real-time image processing performance allows computed images to be fed back and displayed to scientists immediately, and the pipelined nature of the FPGA allows additional image processing algorithms to be incorporated into the system without slowing throughput.
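
    The forward-differencing idea used in the warp stage is easy to show in miniature: a cubic along a scanline can be advanced with three additions per pixel instead of re-evaluating the polynomial at every x (the coefficients below are arbitrary examples).

        # Evaluate p(x) = a*x^3 + b*x^2 + c*x + d at x = 0..n-1 by forward differences.
        def cubic_scanline(a, b, c, d, n):
            p = d                          # p(0)
            d1 = a + b + c                 # first difference at x = 0
            d2 = 6 * a + 2 * b             # second difference at x = 0
            d3 = 6 * a                     # third difference (constant)
            out = []
            for _ in range(n):
                out.append(p)
                p, d1, d2 = p + d1, d1 + d2, d2 + d3
            return out

        assert cubic_scanline(1, -2, 3, 4, 5) == [x**3 - 2*x**2 + 3*x + 4 for x in range(5)]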

  5. Approximation algorithms for planning and control

    NASA Technical Reports Server (NTRS)

    Boddy, Mark; Dean, Thomas

    1989-01-01

    A control system operating in a complex environment will encounter a variety of different situations, with varying amounts of time available to respond to critical events. Ideally, such a control system will do the best possible with the time available. In other words, its responses should approximate those that would result from having unlimited time for computation, where the degree of the approximation depends on the amount of time it actually has. There exist approximation algorithms for a wide variety of problems. Unfortunately, the solution to any reasonably complex control problem will require solving several computationally intensive problems. Algorithms for successive approximation are a subclass of the class of anytime algorithms, algorithms that return answers for any amount of computation time, where the answers improve as more time is allotted. An architecture is described for allocating computation time to a set of anytime algorithms, based on expectations regarding the value of the answers they return. The architecture described is quite general, producing optimal schedules for a set of algorithms under widely varying conditions.

  6. On Algorithms for Nonlinear Minimax and Min-Max-Min Problems and Their Efficiency

    DTIC Science & Technology

    2011-03-01

    balance the accuracy of the approximation with problem ill-conditioning. The simplest smoothing algorithm creates an accurate smooth approximating...sizing in electronic circuit boards (Chen & Fan, 1998), obstacle avoidance for robots (Kirjner-Neto & Polak, 1998), optimal design centering

  7. Vectorization of transport and diffusion computations on the CDC Cyber 205

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abu-Shumays, I.K.

    1986-01-01

    The development and testing of alternative numerical methods and computational algorithms specifically designed for the vectorization of transport and diffusion computations on a Control Data Corporation (CDC) Cyber 205 vector computer are described. Two solution methods for the discrete ordinates approximation to the transport equation are summarized and compared. Factors of 4 to 7 reduction in run times for certain large transport problems were achieved on a Cyber 205 as compared with run times on a CDC-7600. The solution of tridiagonal systems of linear equations, central to several efficient numerical methods for multidimensional diffusion computations and essential for fluid flow and other physics and engineering problems, is also dealt with. Among the methods tested, a combined odd-even cyclic reduction and modified Cholesky factorization algorithm for solving linear symmetric positive definite tridiagonal systems is found to be the most effective for these systems on a Cyber 205. For large tridiagonal systems, computation with this algorithm is an order of magnitude faster on a Cyber 205 than computation with the best algorithm for tridiagonal systems on a CDC-7600.

  8. CAD system for footwear design based on whole real 3D data of last surface

    NASA Astrophysics Data System (ADS)

    Song, Wanzhong; Su, Xianyu

    2000-10-01

    Two major parts of the application of CAD in footwear design are studied: the development (flattening) of the last surface, and the computer-aided design of the planar shoe-template. A new quasi-experiential development algorithm for the last surface, based on triangulation approximation, is presented. This algorithm consumes less time and needs no interactive operation for precise development compared with other last-surface development algorithms. Based on this algorithm, a software package, SHOEMAKER(TM), has been developed that contains computer-aided automatic measurement, automatic development of the last surface, and computer-aided design of the shoe-template.

  9. Algorithm implementation on the Navier-Stokes computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krist, S.E.; Zang, T.A.

    1987-03-01

    The Navier-Stokes Computer is a multi-purpose parallel-processing supercomputer which is currently under development at Princeton University. It consists of multiple local memory parallel processors, called Nodes, which are interconnected in a hypercube network. Details of the procedures involved in implementing an algorithm on the Navier-Stokes computer are presented. The particular finite difference algorithm considered in this analysis was developed for simulation of laminar-turbulent transition in wall bounded shear flows. Projected timing results for implementing this algorithm indicate that operation rates in excess of 42 GFLOPS are feasible on a 128 Node machine.

  10. Algorithm implementation on the Navier-Stokes computer

    NASA Technical Reports Server (NTRS)

    Krist, Steven E.; Zang, Thomas A.

    1987-01-01

    The Navier-Stokes Computer is a multi-purpose parallel-processing supercomputer which is currently under development at Princeton University. It consists of multiple local memory parallel processors, called Nodes, which are interconnected in a hypercube network. Details of the procedures involved in implementing an algorithm on the Navier-Stokes computer are presented. The particular finite difference algorithm considered in this analysis was developed for simulation of laminar-turbulent transition in wall bounded shear flows. Projected timing results for implementing this algorithm indicate that operation rates in excess of 42 GFLOPS are feasible on a 128 Node machine.

  11. An efficient parallel algorithm for the solution of a tridiagonal linear system of equations

    NASA Technical Reports Server (NTRS)

    Stone, H. S.

    1971-01-01

    Tridiagonal linear systems of equations are solved on conventional serial machines in a time proportional to N, where N is the number of equations. The conventional algorithms do not lend themselves directly to parallel computations on computers of the ILLIAC IV class, in the sense that they appear to be inherently serial. An efficient parallel algorithm is presented in which computation time grows as log2 N. The algorithm is based on recursive doubling solutions of linear recurrence relations, and can be used to solve recurrence relations of all orders.
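
    A sketch of recursive doubling for the simplest case, the first-order recurrence x[i] = a[i]*x[i-1] + b[i]: each sweep composes every element's affine map with the map `step` positions back, so all N values emerge after ceil(log2 N) sweeps. On an ILLIAC-IV-class machine each sweep is one fully parallel vector step; here it is an ordinary NumPy operation.

        # Recursive doubling for x[i] = a[i]*x[i-1] + b[i], with x[-1] = 0.
        import numpy as np

        def recursive_doubling(a, b):
            a, b = a.astype(float), b.astype(float)
            n, step = len(b), 1
            while step < n:
                new_a, new_b = a.copy(), b.copy()
                new_a[step:] = a[step:] * a[:-step]             # compose coefficients
                new_b[step:] = a[step:] * b[:-step] + b[step:]  # compose offsets
                a, b, step = new_a, new_b, 2 * step
            return b            # each map now reaches past x[-1] = 0

        a = np.array([0.5, 0.5, 0.5, 0.5])
        b = np.array([1.0, 1.0, 1.0, 1.0])
        print(recursive_doubling(a, b))   # [1.0, 1.5, 1.75, 1.875]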

  12. Multipole Algorithms for Molecular Dynamics Simulation on High Performance Computers.

    NASA Astrophysics Data System (ADS)

    Elliott, William Dewey

    1995-01-01

    A fundamental problem in modeling large molecular systems with molecular dynamics (MD) simulations is the underlying N-body problem of computing the interactions between all pairs of N atoms. The simplest algorithm to compute pair-wise atomic interactions scales in runtime O(N^2), making it impractical for interesting biomolecular systems, which can contain millions of atoms. Recently, several algorithms have become available that solve the N-body problem by computing the effects of all pair-wise interactions while scaling in runtime less than O(N^2). One algorithm, which scales O(N) for a uniform distribution of particles, is called the Greengard-Rokhlin Fast Multipole Algorithm (FMA). This work describes an FMA-like algorithm called the Molecular Dynamics Multipole Algorithm (MDMA). The algorithm contains several features that are new to N-body algorithms. MDMA uses new, efficient series expansion equations to compute general 1/r^n potentials to arbitrary accuracy. In particular, the 1/r Coulomb potential and the 1/r^6 portion of the Lennard-Jones potential are implemented. The new equations are based on multivariate Taylor series expansions. In addition, MDMA uses a cell-to-cell interaction region of cells that is closely tied to worst-case error bounds. The worst-case error bounds for MDMA are derived in this work also. These bounds apply to other multipole algorithms as well. Several implementation enhancements are described which apply to MDMA as well as other N-body algorithms such as FMA and tree codes. The mathematics of the cell-to-cell interactions are converted to the Fourier domain for reduced operation count and faster computation. A relative indexing scheme was devised to locate cells in the interaction region which allows efficient pre-computation of redundant information and prestorage of much of the cell-to-cell interaction. Also, MDMA was integrated into the MD program SIgMA to demonstrate the performance of the program over several simulation timesteps. One MD application described here highlights the utility of including long range contributions to the Lennard-Jones potential in constant pressure simulations. Another application shows the time dependence of long range forces in a multiple time step MD simulation.

  13. SOLARTRAK. Solar Array Tracking Control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Manish, A.B.; Dudley, J.

    1995-06-01

    SolarTrak, used in conjunction with various versions of 68HC11-based SolarTrak hardware boards, provides a control system for one- or two-axis solar tracking arrays. Sun position is computed from stored position data and time from an on-board clock/calendar chip. Position feedback can be by one or two offset motor turn-counter square-wave signals per axis, or by a position potentiometer. A limit of 256 counts resolution is imposed by the on-board analog-to-digital (A/D) converter. Control is provided for one or two motors. Numerous options are provided to customize the controller for specific applications: some are imposed at compile time, some are settable during operation. Software and hardware board designs are provided for the Control Board and for a separate User Interface Board that accesses and displays variables from the Control Board. The controller can be used with sensor options ranging from a single turn-count sensor per motor to systems using dual turn-count sensors, limit sensors, and a zero-reference sensor. Dual-axis trackers oriented azimuth-elevation, east-west, north-south, or polar-declination can be controlled. Misalignments from these orientations can also be accommodated: the software performs a coordinate transformation using six parameters to compute the sun position in the misaligned coordinates of the tracker. The parameters account for tilt of the tracker in two directions, rotation about each axis, and gear ratio errors in each axis. The software can even measure and compute these parameters during an initial setup period if the current from a sun position sensor or the output from the photovoltaic array is available as an analog voltage to the control board's A/D port. Wind or emergency stow to a preset position is available, triggered by digital or analog signals. Night stow is also available. The tracking dead band is adjustable from narrow to wide. Numerous features of the hardware and software conserve energy for use with battery-powered systems.
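
    The record does not give the controller's sun-position formulas. As a hedged illustration, the sketch below evaluates the standard spherical-astronomy relations (latitude, declination and hour angle to elevation and azimuth) that a stored-ephemeris tracker of this kind typically computes; all names and the test values are ours.

        import math

        # Standard solar elevation/azimuth from latitude, declination and hour
        # angle; a tracker controller evaluates something like this from the
        # time kept by its clock/calendar chip.
        def sun_position(lat_deg, decl_deg, hour_angle_deg):
            lat, dec, ha = (math.radians(v) for v in (lat_deg, decl_deg, hour_angle_deg))
            sin_el = (math.sin(lat) * math.sin(dec)
                      + math.cos(lat) * math.cos(dec) * math.cos(ha))
            el = math.asin(sin_el)
            # azimuth measured clockwise from north
            az = math.atan2(-math.cos(dec) * math.sin(ha),
                            math.cos(lat) * math.sin(dec)
                            - math.sin(lat) * math.cos(dec) * math.cos(ha))
            return math.degrees(el), math.degrees(az) % 360.0

        # Solar noon (hour angle 0) at 35 N with declination +20:
        # sun due south (azimuth 180), elevation 90 - 15 = 75 degrees.
        print(sun_position(35.0, 20.0, 0.0))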

  14. Solar Array Tracking Control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maish, Alexander

    1995-06-22

    SolarTrak, used in conjunction with various versions of 68HC11-based SolarTrak hardware boards, provides a control system for one- or two-axis solar tracking arrays. Sun position is computed from stored position data and time from an on-board clock/calendar chip. Position feedback can be by one or two offset motor turn-counter square-wave signals per axis, or by a position potentiometer. A limit of 256 counts resolution is imposed by the on-board analog-to-digital (A/D) converter. Control is provided for one or two motors. Numerous options are provided to customize the controller for specific applications: some are imposed at compile time, some are settable during operation. Software and hardware board designs are provided for the Control Board and for a separate User Interface Board that accesses and displays variables from the Control Board. The controller can be used with sensor options ranging from a single turn-count sensor per motor to systems using dual turn-count sensors, limit sensors, and a zero-reference sensor. Dual-axis trackers oriented azimuth-elevation, east-west, north-south, or polar-declination can be controlled. Misalignments from these orientations can also be accommodated: the software performs a coordinate transformation using six parameters to compute the sun position in the misaligned coordinates of the tracker. The parameters account for tilt of the tracker in two directions, rotation about each axis, and gear ratio errors in each axis. The software can even measure and compute these parameters during an initial setup period if the current from a sun position sensor or the output from the photovoltaic array is available as an analog voltage to the control board's A/D port. Wind or emergency stow to a preset position is available, triggered by digital or analog signals. Night stow is also available. The tracking dead band is adjustable from narrow to wide. Numerous features of the hardware and software conserve energy for use with battery-powered systems.

  15. Simulation Experiment on Landing Site Selection Using a Simple Geometric Approach

    NASA Astrophysics Data System (ADS)

    Zhao, W.; Tong, X.; Xie, H.; Jin, Y.; Liu, S.; Wu, D.; Liu, X.; Guo, L.; Zhou, Q.

    2017-07-01

    Safe landing is an important part of a planetary exploration mission. Even fine-scale terrain hazards (such as rocks, small craters and steep slopes, which would not be accurately detected from orbital reconnaissance) can pose a serious risk to a planetary lander or rover and the scientific instruments on board. In this paper, a simple geometric approach to planetary landing hazard detection and safe landing site selection is proposed. To achieve a full implementation of this algorithm, two easy-to-compute metrics are presented for extracting terrain slope and roughness information. Unlike conventional methods, which must perform robust plane fitting and elevation interpolation for DEM generation, in this work hazards are identified by processing the LiDAR point cloud directly. For safe landing site selection, a Generalized Voronoi Diagram is constructed; based on the idea of the maximum empty circle, the safest landing site can then be determined. In this algorithm, hazards are treated as general polygons, without special simplification (e.g., regarding hazards as discrete circles or ellipses), which conforms more closely to a real planetary exploration scenario. To validate the approach, a simulated planetary terrain model was constructed using volcanic ash with rocks in an indoor environment, and a commercial laser scanner mounted on a rail was used to scan the terrain surface at different hanging positions. The results demonstrate fair hazard detection capability and reasonable site selection compared with a conventional method, while consuming less computational time and memory. Hence, it is a feasible candidate approach for future precision landing site selection on planetary surfaces.
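
    The paper's two metrics are not reproduced in the record. In the same spirit, the sketch below computes a simple per-point slope and roughness directly on a raw point cloud, with no DEM interpolation; the neighbourhood radius and the thresholds are illustrative assumptions of ours.

        import numpy as np

        # Illustrative per-point hazard metrics on a LiDAR-like cloud (x, y, z):
        # slope from the least-squares plane of the local neighbourhood,
        # roughness as the RMS residual of the points about that plane.
        def hazard_metrics(points, radius=0.5):
            slopes, rough = np.zeros(len(points)), np.zeros(len(points))
            for i, p in enumerate(points):
                d = np.linalg.norm(points[:, :2] - p[:2], axis=1)
                nb = points[d < radius]
                if len(nb) < 3:
                    continue                       # too sparse: leave zeros
                # least-squares plane z = a*x + b*y + c over the neighbourhood
                A = np.c_[nb[:, 0], nb[:, 1], np.ones(len(nb))]
                (a, b, c), *_ = np.linalg.lstsq(A, nb[:, 2], rcond=None)
                slopes[i] = np.degrees(np.arctan(np.hypot(a, b)))
                rough[i] = np.sqrt(np.mean((nb[:, 2] - A @ np.array([a, b, c])) ** 2))
            return slopes, rough

        pts = np.random.default_rng(1).uniform(0, 5, (2000, 3)) * [1, 1, 0.05]
        s, r = hazard_metrics(pts)
        safe = (s < 10.0) & (r < 0.1)   # threshold choices purely illustrative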

  16. Collaborative Strategic Board Games as a Site for Distributed Computational Thinking

    ERIC Educational Resources Information Center

    Berland, Matthew; Lee, Victor R.

    2011-01-01

    This paper examines the idea that contemporary strategic board games represent an informal, interactional context in which complex computational thinking takes place. When games are collaborative--that is, a game requires that players work in joint pursuit of a shared goal--the computational thinking is easily observed as distributed across…

  17. 75 FR 59780 - Privacy Act of 1974, as Amended; Computer Matching Program (SSA/Railroad Retirement Board (RRB...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-28

    ... SOCIAL SECURITY ADMINISTRATION [Docket No. SSA 2010-0040] Privacy Act of 1974, as Amended; Computer Matching Program (SSA/Railroad Retirement Board (RRB))--Match Number 1006 AGENCY: Social Security...: A. General The Computer Matching and Privacy Protection Act of 1988 (Pub. L. 100-503) amended the...

  18. Next Processor Module: A Hardware Accelerator of UT699 LEON3-FT System for On-Board Computer Software Simulation

    NASA Astrophysics Data System (ADS)

    Langlois, Serge; Fouquet, Olivier; Gouy, Yann; Riant, David

    2014-08-01

    On-Board Computers (OBC) increasingly use integrated systems-on-chip (SOC) that embed processors running from 50 MHz up to several hundreds of MHz, around which dedicated communication controllers and other input/output channels are attached. For ground testing and On-Board SoftWare (OBSW) validation purposes, a representative simulation of these systems, faster than real time and with cycle-true timing of execution, is not achieved with current purely software simulators. For a few years some hybrid solutions have been put in place ([1], [2]), including hardware in the loop so as to add accuracy and performance to the computer software simulation. This paper presents the results of the work engaged by Thales Alenia Space (TAS-F) at the end of 2010, which led to a validated hardware simulator of the UT699 by mid-2012 and which is now qualified and fully used in operational contexts.

  19. A systematic investigation of computation models for predicting Adverse Drug Reactions (ADRs).

    PubMed

    Kuang, Qifan; Wang, MinQi; Li, Rong; Dong, YongCheng; Li, Yizhou; Li, Menglong

    2014-01-01

    Early and accurate identification of adverse drug reactions (ADRs) is critically important for drug development and clinical safety. Computer-aided prediction of ADRs has attracted increasing attention in recent years, and many computational models have been proposed. However, because of the lack of systematic analysis and comparison of the different computational models, there remain limitations in designing more effective algorithms and selecting more useful features. There is therefore an urgent need to review and analyze previous computational models to obtain general conclusions that can provide useful guidance for constructing more effective models to predict ADRs. In the current study, the main work is to compare and analyze the performance of existing computational methods for predicting ADRs, by implementing and evaluating additional algorithms that had earlier been used for predicting drug targets. Our results indicated that topological and intrinsic features were complementary to an extent and that the Jaccard coefficient had an important and general effect on the prediction of drug-ADR associations. By comparing the structure of each algorithm, we found that the final formulas of these algorithms could all be converted to linear models in form; based on this finding, we propose a new algorithm called the general weighted profile method, which yielded the best overall performance among the algorithms investigated in this paper. Several meaningful conclusions and useful findings regarding the prediction of ADRs are provided for selecting optimal features and algorithms.
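
    As a hedged sketch of what a weighted-profile scorer with a Jaccard kernel can look like (our own formulation, not necessarily the authors' exact method): a query drug inherits the ADR profiles of known drugs, weighted by the Jaccard similarity of binary feature fingerprints.

        import numpy as np

        # Weighted-profile scoring: score each ADR for a query drug as the
        # Jaccard-similarity-weighted average of known drugs' ADR indicators.
        def jaccard(a, b):
            inter = np.logical_and(a, b).sum()
            union = np.logical_or(a, b).sum()
            return inter / union if union else 0.0

        def weighted_profile_scores(fingerprints, adr_matrix, query_fp):
            # fingerprints: (n_drugs, n_features) bool; adr_matrix: (n_drugs, n_adrs) 0/1
            w = np.array([jaccard(query_fp, fp) for fp in fingerprints])
            if w.sum() == 0:
                return np.zeros(adr_matrix.shape[1])
            return w @ adr_matrix.astype(float) / w.sum()   # per-ADR score in [0, 1]

        rng = np.random.default_rng(2)
        fps = rng.random((50, 128)) < 0.2       # invented fingerprints
        adrs = rng.random((50, 10)) < 0.1       # invented drug-ADR associations
        print(weighted_profile_scores(fps, adrs, fps[0]))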

  20. SOI layout decomposition for double patterning lithography on high-performance computer platforms

    NASA Astrophysics Data System (ADS)

    Verstov, Vladimir; Zinchenko, Lyudmila; Makarchuk, Vladimir

    2014-12-01

    In this paper, silicon-on-insulator layout decomposition algorithms for double patterning lithography on high-performance computing platforms are discussed. Our approach is based on the use of a contradiction graph and a modified concurrent breadth-first search algorithm. We evaluate our technique on the 45 nm Nangate Open Cell Library, including non-Manhattan geometry. Experimental results show that our soft computing algorithms decompose the layout successfully and that the minimal distance between polygons in the layout is increased.
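
    At its core, layout decomposition for double patterning is 2-coloring the conflict (contradiction) graph: features joined by an edge are too close to print on the same mask, and an odd cycle means no legal two-mask assignment exists. A serial breadth-first-search version is sketched below; the paper's concurrent variant is not reproduced.

        from collections import deque

        # Two-colour the conflict graph by BFS. Returns a mask assignment per
        # feature, or None when an odd conflict cycle makes decomposition fail.
        def decompose(n, conflict_edges):
            adj = [[] for _ in range(n)]
            for u, v in conflict_edges:
                adj[u].append(v)
                adj[v].append(u)
            colour = [-1] * n                  # -1 = unassigned; masks are 0 and 1
            for s in range(n):
                if colour[s] != -1:
                    continue
                colour[s] = 0
                q = deque([s])
                while q:
                    u = q.popleft()
                    for v in adj[u]:
                        if colour[v] == -1:
                            colour[v] = colour[u] ^ 1
                            q.append(v)
                        elif colour[v] == colour[u]:
                            return None        # odd cycle of conflicts
            return colour

        print(decompose(4, [(0, 1), (1, 2), (2, 3)]))   # [0, 1, 0, 1]
        print(decompose(3, [(0, 1), (1, 2), (2, 0)]))   # None: conflict triangle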

  1. GPU-accelerated computing for Lagrangian coherent structures of multi-body gravitational regimes

    NASA Astrophysics Data System (ADS)

    Lin, Mingpei; Xu, Ming; Fu, Xiaoyu

    2017-04-01

    Based on a well-established theoretical foundation, Lagrangian Coherent Structures (LCSs) have elicited widespread research on the intrinsic structures of dynamical systems in many fields, including the field of astrodynamics. Although the application of LCSs in dynamical problems seems straightforward theoretically, its associated computational cost is prohibitive. We propose a block decomposition algorithm developed on Compute Unified Device Architecture (CUDA) platform for the computation of the LCSs of multi-body gravitational regimes. In order to take advantage of GPU's outstanding computing properties, such as Shared Memory, Constant Memory, and Zero-Copy, the algorithm utilizes a block decomposition strategy to facilitate computation of finite-time Lyapunov exponent (FTLE) fields of arbitrary size and timespan. Simulation results demonstrate that this GPU-based algorithm can satisfy double-precision accuracy requirements and greatly decrease the time needed to calculate final results, increasing speed by approximately 13 times. Additionally, this algorithm can be generalized to various large-scale computing problems, such as particle filters, constellation design, and Monte-Carlo simulation.
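
    The quantity being accelerated is standard: the FTLE is sigma = ln(sqrt(lambda_max(C))) / |T|, where C = (d phi)^T (d phi) is the Cauchy-Green tensor of the flow map phi over integration time T. A compact serial sketch follows; the CUDA block decomposition itself is not reproduced, and the test flow is our own.

        import numpy as np

        # FTLE on a 2-D grid from a precomputed flow map (phi_x, phi_y):
        # finite-difference the map, form C = F^T F, take its largest eigenvalue.
        def ftle(phi_x, phi_y, dx, dy, T):
            dpxdx, dpxdy = np.gradient(phi_x, dx, dy)
            dpydx, dpydy = np.gradient(phi_y, dx, dy)
            sigma = np.zeros_like(phi_x)
            for i in np.ndindex(phi_x.shape):
                F = np.array([[dpxdx[i], dpxdy[i]], [dpydx[i], dpydy[i]]])
                lmax = np.linalg.eigvalsh(F.T @ F)[-1]
                sigma[i] = np.log(np.sqrt(lmax)) / abs(T)
            return sigma

        # Saddle flow x' = x, y' = -y: the time-T flow map is known in closed form,
        # and the FTLE field should be uniform and equal to the stretch rate 1.
        T = 1.0
        x, y = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64),
                           indexing="ij")
        sigma = ftle(x * np.exp(T), y * np.exp(-T),
                     x[1, 0] - x[0, 0], y[0, 1] - y[0, 0], T)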

  2. Bio-inspired algorithms applied to molecular docking simulations.

    PubMed

    Heberlé, G; de Azevedo, W F

    2011-01-01

    Nature as a source of inspiration has been shown to have a great beneficial impact on the development of new computational methodologies. In this scenario, analyses of the interactions between a protein target and a ligand can be simulated by biologically inspired algorithms (BIAs). These algorithms mimic biological systems to create new paradigms for computation, such as neural networks, evolutionary computing, and swarm intelligence. This review provides a description of the main concepts behind BIAs applied to molecular docking simulations. Special attention is devoted to evolutionary algorithms, guided-directed evolutionary algorithms, and Lamarckian genetic algorithms. Recent applications of these methodologies to protein targets identified in the Mycobacterium tuberculosis genome are described.

  3. Harem: Hardwood lumber remanufacturing program for maxmizing value based on size, grade and current market prices

    Treesearch

    C.J. Schwehm; P. Klinkhachorn; Charles W. McMillin; Henry A. Huber

    1990-01-01

    This paper describes an expert system computer program which will determine the optimum way to edge and trim a hardwood board so as to yield the highest dollar value based on the grade, size of each board, and current market prices. The program uses the Automated Hardwood Lumber Grading Program written by Klinkhachorn, et al. for determining the grade of each board...

  4. Accelerating phylogenetics computing on the desktop: experiments with executing UPGMA in programmable logic.

    PubMed

    Davis, J P; Akella, S; Waddell, P H

    2004-01-01

    Having greater computational power on the desktop for processing taxa data sets has been a dream of biologists and statisticians involved in phylogenetics data analysis. Many existing algorithms have been highly optimized; one example is Felsenstein's PHYLIP code, written in C, for the UPGMA and neighbor-joining algorithms. However, the ability to process more than a few tens of taxa in a reasonable amount of time using conventional computers has not yielded a satisfactory speedup in data processing, making it difficult for phylogenetics practitioners to quickly explore data sets, such as might be done from a laptop computer. We discuss the application of custom computing techniques to phylogenetics. In particular, we apply this technology to speed up UPGMA algorithm execution by a factor of one hundred against that of PHYLIP code running on the same PC. We report on these experiments and discuss how custom computing techniques can be used to accelerate phylogenetics algorithm performance not only on the desktop, but also on larger, high-performance computing engines, thus enabling the high-speed processing of data sets involving thousands of taxa.
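
    For reference, the UPGMA recurrence being accelerated fits in a few lines; the naive O(N^3) loop below is the kind of kernel such custom hardware speeds up. The distance matrix is an invented example.

        import numpy as np

        # Naive UPGMA: repeatedly merge the closest pair of clusters; distances
        # to the merged cluster are size-weighted averages of its parts.
        def upgma(D, labels):
            D = D.astype(float).copy()
            clusters = [(lab, 1) for lab in labels]       # (subtree, leaf count)
            while len(clusters) > 1:
                n = len(clusters)
                iu = np.triu_indices(n, k=1)
                k = np.argmin(D[iu])
                i, j = iu[0][k], iu[1][k]                 # closest pair
                (a, na), (b, nb) = clusters[i], clusters[j]
                merged = ((a, b), na + nb)
                # weighted-average distance from every other cluster to the new one
                row = (na * D[i] + nb * D[j]) / (na + nb)
                keep = [m for m in range(n) if m not in (i, j)]
                D = np.vstack([D[keep][:, keep], row[keep]])
                D = np.hstack([D, np.append(row[keep], 0.0)[:, None]])
                clusters = [clusters[m] for m in keep] + [merged]
            return clusters[0][0]

        D = np.array([[0, 2, 6, 10], [2, 0, 6, 10], [6, 6, 0, 8], [10, 10, 8, 0]])
        print(upgma(D, ["A", "B", "C", "D"]))   # A and B join first, then C, then D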

  5. Cloud computing task scheduling strategy based on improved differential evolution algorithm

    NASA Astrophysics Data System (ADS)

    Ge, Junwei; He, Qian; Fang, Yiqiu

    2017-04-01

    In order to optimize the cloud computing task scheduling scheme, an improved differential evolution algorithm for cloud computing task scheduling is proposed. First, a cloud computing task scheduling model is established and a fitness function is derived from it; the improved differential evolution algorithm then optimizes this fitness function, using a generation-dependent dynamic selection strategy and a dynamic mutation strategy to maintain both global and local search ability. A performance test was carried out on the CloudSim simulation platform; the experimental results show that the improved differential evolution algorithm reduces cloud computing task execution time and saves user cost, achieving effective optimal scheduling of cloud computing tasks.
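
    A minimal DE/rand/1/bin scheduler sketch, with our own decoding and a plain makespan fitness; the paper's dynamic selection and mutation refinements are not reproduced. Continuous genes are decoded into task-to-VM assignments and evolved with the standard mutation, crossover and greedy selection steps.

        import numpy as np

        rng = np.random.default_rng(3)
        n_tasks, n_vms, pop_size, F, CR = 30, 5, 40, 0.5, 0.9
        task_len = rng.uniform(1, 10, n_tasks)     # task lengths (invented)
        vm_speed = rng.uniform(1, 3, n_vms)        # VM speeds (invented)

        def makespan(vec):
            # decode: each continuous gene in [0, n_vms) picks a VM for that task
            assign = np.clip(vec.astype(int), 0, n_vms - 1)
            loads = np.zeros(n_vms)
            np.add.at(loads, assign, task_len / vm_speed[assign])
            return loads.max()

        pop = rng.uniform(0, n_vms, (pop_size, n_tasks))
        fit = np.array([makespan(x) for x in pop])
        for _ in range(200):
            for i in range(pop_size):
                a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
                mutant = a + F * (b - c)                    # DE/rand/1 mutation
                cross = rng.random(n_tasks) < CR
                cross[rng.integers(n_tasks)] = True         # binomial crossover
                trial = np.clip(np.where(cross, mutant, pop[i]), 0, n_vms - 1e-9)
                f = makespan(trial)
                if f <= fit[i]:                             # greedy selection
                    pop[i], fit[i] = trial, f
        print("best makespan:", fit.min())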

  6. Parallel grid generation algorithm for distributed memory computers

    NASA Technical Reports Server (NTRS)

    Moitra, Stuti; Moitra, Anutosh

    1994-01-01

    A parallel grid-generation algorithm and its implementation on the Intel iPSC/860 computer are described. The grid-generation scheme is based on an algebraic formulation of homotopic relations. Methods for utilizing the inherent parallelism of the grid-generation scheme are described, and the implementation of multiple levels of parallelism on multiple-instruction multiple-data machines is indicated. The algorithm is capable of providing near orthogonality and spacing control at solid boundaries while requiring minimal interprocessor communications. Results obtained on the Intel hypercube for a blended wing-body configuration are used to demonstrate the effectiveness of the algorithm. Fortran implementations based on the native programming model of the iPSC/860 computer and the Express system of software tools are reported. Computational gains in execution time speed-up ratios are given.
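
    The algebraic, homotopy-based flavor of such schemes can be illustrated in a few lines: each grid point is a blend between an inner and an outer boundary curve, so every point is independent of the others and the computation is trivially parallel. The boundary curves below are invented stand-ins, not the paper's wing-body geometry.

        import numpy as np

        # Algebraic grid by homotopic blending between two boundary curves.
        # Every (i, j) point is computed independently, which is what makes the
        # scheme easy to distribute across processors.
        ni, nj = 33, 17
        theta = np.linspace(0.0, 2.0 * np.pi, ni)
        inner = np.stack([np.cos(theta), np.sin(theta)], axis=-1)   # unit circle
        outer = 2.5 * np.stack(
            [np.sign(np.cos(theta)) * np.abs(np.cos(theta)) ** 0.5,
             np.sign(np.sin(theta)) * np.abs(np.sin(theta)) ** 0.5], axis=-1)
        t = np.linspace(0.0, 1.0, nj)[None, :, None]                # blend parameter
        grid = (1.0 - t) * inner[:, None, :] + t * outer[:, None, :]  # (ni, nj, 2)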

  7. Concepts for on board satellite image registration. Volume 4: Impact of data set selection on satellite on board signal processing

    NASA Technical Reports Server (NTRS)

    Ruedger, W. H.; Aanstoos, J. V.; Snyder, W. E.

    1982-01-01

    The NASA NEEDS program goals present a requirement for on-board signal processing to achieve user-compatible, information-adaptive data acquisition. This volume addresses the impact of data set selection on the data formatting required for efficient telemetering of the acquired satellite sensor data. More specifically, the FILE algorithm developed by Martin-Marietta provides a means for determining which pixels from the data stream should be retained, effecting an improvement in the achievable system throughput. It will be seen that, owing to the lack of statistical stationarity in the spatial distribution of cloud cover, periods exist where data acquisition rates exceed the throughput capability. The study therefore addresses various approaches to data compression and truncation applicable to this sensor mission.

  8. A new augmentation based algorithm for extracting maximal chordal subgraphs

    DOE PAGES

    Bhowmick, Sanjukta; Chen, Tzu-Yi; Halappanavar, Mahantesh

    2014-10-18

    A graph is chordal if every cycle of length greater than three contains an edge between two non-adjacent vertices of the cycle. Chordal graphs are of interest both theoretically, since they admit polynomial time solutions to a range of NP-hard graph problems, and practically, since they arise in many applications including sparse linear algebra, computer vision, and computational biology. A maximal chordal subgraph is a chordal subgraph that is not a proper subgraph of any other chordal subgraph. Existing algorithms for computing maximal chordal subgraphs depend on dynamically ordering the vertices, which is an inherently sequential process and therefore limits the algorithms' parallelizability. In our paper we explore techniques to develop a scalable parallel algorithm for extracting a maximal chordal subgraph. We demonstrate that an earlier attempt at developing a parallel algorithm may induce a non-optimal vertex ordering and is therefore not guaranteed to terminate with a maximal chordal subgraph. We then give a new algorithm that first computes and then repeatedly augments a spanning chordal subgraph. After proving that the algorithm terminates with a maximal chordal subgraph, we then demonstrate that this algorithm is more amenable to parallelization and that the parallel version also terminates with a maximal chordal subgraph. That said, the complexity of the new algorithm is higher than that of the previous parallel algorithm, although the earlier algorithm computes a chordal subgraph which is not guaranteed to be maximal. Finally, we experimented with our augmentation-based algorithm on both synthetic and real-world graphs. We provide scalability results and also explore the effect of different choices for the initial spanning chordal subgraph on both the running time and on the number of edges in the maximal chordal subgraph.
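
    The augmentation strategy can be sketched serially: a spanning forest is chordal, and remaining edges are kept whenever the subgraph stays chordal, tested here by naive simplicial elimination. This is our illustration of the idea, not the paper's parallel algorithm, and the complexity of the naive test is far from optimal.

        # Maximal chordal subgraph by augmentation: start from a spanning forest
        # (always chordal), then add edges greedily while chordality holds.
        def is_chordal(adj):
            adj = {v: set(ns) for v, ns in adj.items()}
            while adj:
                # chordal graphs always contain a simplicial vertex
                # (one whose neighbourhood is a clique)
                for v in adj:
                    ns = list(adj[v])
                    if all(b in adj[a] for i, a in enumerate(ns) for b in ns[i + 1:]):
                        break
                else:
                    return False
                for u in adj[v]:
                    adj[u].discard(v)
                del adj[v]
            return True

        def maximal_chordal_subgraph(n, edges):
            adj = {v: set() for v in range(n)}
            parent = list(range(n))
            def find(x):
                while parent[x] != x:
                    parent[x] = x = parent[parent[x]]
                return x
            rest = []
            for u, v in edges:                   # spanning forest first
                if find(u) != find(v):
                    parent[find(u)] = find(v)
                    adj[u].add(v); adj[v].add(u)
                else:
                    rest.append((u, v))
            changed = True
            while changed:                       # augment until nothing fits
                changed, still = False, []
                for u, v in rest:
                    adj[u].add(v); adj[v].add(u)
                    if is_chordal(adj):
                        changed = True
                    else:
                        adj[u].discard(v); adj[v].discard(u)
                        still.append((u, v))
                rest = still
            return adj

        # A 4-cycle plus one chord: every edge is eventually kept (the closing
        # edge fails on the first pass, then succeeds once the chord is in).
        print(maximal_chordal_subgraph(4, [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]))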

  9. A design approach for small vision-based autonomous vehicles

    NASA Astrophysics Data System (ADS)

    Edwards, Barrett B.; Fife, Wade S.; Archibald, James K.; Lee, Dah-Jye; Wilde, Doran K.

    2006-10-01

    This paper describes the design of a small autonomous vehicle based on the Helios computing platform, a custom FPGA-based board capable of supporting on-board vision. Target applications for the Helios computing platform are those that require lightweight equipment and low power consumption. To demonstrate the capabilities of FPGAs in real-time control of autonomous vehicles, a 16 inch long R/C monster truck was outfitted with a Helios board. The platform provided by such a small vehicle is ideal for testing and development. The proof of concept application for this autonomous vehicle was a timed race through an environment with obstacles. Given the size restrictions of the vehicle and its operating environment, the only feasible on-board sensor is a small CMOS camera. The single video feed is therefore the only source of information from the surrounding environment. The image is then segmented and processed by custom logic in the FPGA that also controls direction and speed of the vehicle based on visual input.

  10. Risk Mitigation for the Development of the New Ariane 5 On-Board Computer

    NASA Astrophysics Data System (ADS)

    Stransky, Arnaud; Chevalier, Laurent; Dubuc, Francois; Conde-Reis, Alain; Ledoux, Alain; Miramont, Philippe; Johansson, Leif

    2010-08-01

    In the frame of Ariane 5 production, some equipment will become obsolete and needs to be redesigned and redeveloped. This is the case for the On-Board Computer, which has to be completely redesigned and re-qualified by RUAG Space, together with all its on-board software and associated development tools by ASTRIUM ST. This paper presents this obsolescence treatment, which started in 2007 under an ESA contract, in the frame of the ACEP and ARTA accompaniment programmes, and which is very critical in technical terms but also from a schedule point of view: it gives the context and overall development plan, and details the risk mitigation actions agreed with ESA, especially those related to the development of the input/output ASIC, as well as the on-board software porting and revalidation strategy. The efficiency of these risk mitigation actions has been proven by the achieved schedule; this development constitutes an up-to-date case study in good practices, including experience reports and feedback for future developments.

  11. Concept and realization of unmanned aerial system with different modes of operation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Czyba, Roman; Szafrański, Grzegorz; Janusz, Wojciech

    2014-12-10

    In this paper we describe the development process of an unmanned aerial system: its mechanical components, electronics and software solutions. During the design stage, we formulated the necessary requirements for the multirotor vehicle and the ground control station in order to build an optimal system suitable for reconnaissance missions. The platform is controlled from the ground control station (GCS) and can accomplish video-based observation tasks. To fulfil this requirement the on-board payload consists of a mechanically stabilized camera augmented with machine vision algorithms that enable object tracking. A novelty of the system is its four modes of flight, which give full functionality to the developed UAV system. The ground control station consists not only of the application itself but also of dedicated components built into the chassis, which together create an advanced UAV system supporting the control and management of the flight. The mechanical part of the quadrotor is designed to ensure robustness while minimizing the weight of the platform. Finally, the designed electronics allow control and estimation algorithms to be implemented without the need for excessive computational optimization.

  12. Symmetric log-domain diffeomorphic Registration: a demons-based approach.

    PubMed

    Vercauteren, Tom; Pennec, Xavier; Perchant, Aymeric; Ayache, Nicholas

    2008-01-01

    Modern morphometric studies use non-linear image registration to compare anatomies and perform group analysis. Recently, log-Euclidean approaches have contributed to promote the use of such computational anatomy tools by permitting simple computations of statistics on a rather large class of invertible spatial transformations. In this work, we propose a non-linear registration algorithm perfectly fit for log-Euclidean statistics on diffeomorphisms. Our algorithm works completely in the log-domain, i.e. it uses a stationary velocity field. This implies that we guarantee the invertibility of the deformation and have access to the true inverse transformation. This also means that our output can be directly used for log-Euclidean statistics without relying on the heavy computation of the log of the spatial transformation. As it is often desirable, our algorithm is symmetric with respect to the order of the input images. Furthermore, we use an alternate optimization approach related to Thirion's demons algorithm to provide a fast non-linear registration algorithm. First results show that our algorithm outperforms both the demons algorithm and the recently proposed diffeomorphic demons algorithm in terms of accuracy of the transformation while remaining computationally efficient.
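
    The log-domain ingredient can be pictured in one step: a stationary velocity field v is exponentiated to a deformation by scaling and squaring, starting from the small displacement v / 2^K and composing the map with itself K times. A 2-D sketch using scipy interpolation follows; the grid conventions and test field are our own, not the paper's implementation.

        import numpy as np
        from scipy.ndimage import map_coordinates

        # Compose two displacement fields: phi_d(phi_e(x)) = x + e(x) + d(x + e(x)).
        def compose(d, e):
            grid = np.mgrid[0:d.shape[1], 0:d.shape[2]].astype(float)
            warped = np.stack([map_coordinates(d[k], grid + e, order=1,
                                               mode="nearest") for k in range(2)])
            return e + warped

        # Scaling and squaring: exp(v) approximated by K self-compositions of v/2^K.
        def exp_field(v, K=6):
            d = v / 2.0 ** K
            for _ in range(K):
                d = compose(d, d)
            return d

        v = np.zeros((2, 64, 64))
        v[0] = 3.0   # constant velocity: exp(v) is a shift by exactly 3 pixels
        phi = exp_field(v)
        assert np.allclose(phi[0], 3.0) and np.allclose(phi[1], 0.0)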

  13. 75 FR 81704 - Self-Regulatory Organizations; Chicago Board Options Exchange, Incorporated; Notice of Filing and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-28

    ... algorithm \\5\\ for HOSS and to make related changes to Interpretation and Policy .03. Currently, there are... applicable allocation algorithm for the HOSS and modified HOSS rotation procedures. Paragraph (c)(iv) of the... allocation algorithm in effect for the option class pursuant to Rule 6.45A or 6.45B), then to limit orders...

  14. Extracting potential bus lines of Customized City Bus Service based on public transport big data

    NASA Astrophysics Data System (ADS)

    Ren, Yibin; Chen, Ge; Han, Yong; Zheng, Huangcheng

    2016-11-01

    Customized City Bus Service (CCBS) can effectively reduce the traffic congestion and environmental pollution caused by the increase in private cars. This study aims to extract potential CCBS bus lines and each line's passenger density by mining public transport big data. The datasets used in this study are mainly Smart Card Data (SCD) and bus GPS data of Qingdao, China, from October 11th to November 7th, 2015. Firstly, we compute the temporal origin-destination (TOD) of passengers by mining the SCD and bus GPS data. Compared with the traditional OD, a TOD not only has the spatial locations but also contains the trip's boarding time. Secondly, building on the traditional DBSCAN algorithm, we put forward an algorithm named TOD-DBSCAN that incorporates the spatial-temporal features of TOD. TOD-DBSCAN is used to cluster the TOD trajectories in peak hours of all working days. Then, we define two variables, P and N, to describe the possibility and passenger density of a potential CCBS line: P is the probability of the CCBS line, and N represents the line's potential passenger density. Lastly, we visualize the potential CCBS lines extracted by our procedure on the map and analyse the relationship between potential CCBS lines and the urban spatial structure.
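
    A toy version of density clustering over (x, y, t) boarding records, in the spirit of TOD-DBSCAN: a record is a neighbour of another only if it is close both in space and in boarding time. The exact parameterization below (separate spatial and temporal radii) is our assumption, not the paper's.

        import numpy as np
        from collections import deque

        # DBSCAN over trip origins with separate spatial and temporal radii.
        def tod_dbscan(xy, t, eps_xy=300.0, eps_t=600.0, min_pts=5):
            n = len(t)
            labels = np.full(n, -1)            # -1 = noise / unassigned
            neigh = [np.where((np.linalg.norm(xy - xy[i], axis=1) < eps_xy)
                              & (np.abs(t - t[i]) < eps_t))[0] for i in range(n)]
            cid = 0
            for i in range(n):
                if labels[i] != -1 or len(neigh[i]) < min_pts:
                    continue
                labels[i] = cid
                q = deque(neigh[i])
                while q:                        # grow the cluster from core points
                    j = q.popleft()
                    if labels[j] == -1:
                        labels[j] = cid
                        if len(neigh[j]) >= min_pts:
                            q.extend(neigh[j])
                cid += 1
            return labels

        rng = np.random.default_rng(4)
        xy = np.vstack([rng.normal(0, 50, (80, 2)), rng.normal(2000, 50, (80, 2))])
        t = np.concatenate([rng.normal(8 * 3600, 120, 80),
                            rng.normal(8 * 3600, 120, 80)])
        print(np.bincount(tod_dbscan(xy, t) + 1))   # noise count, then cluster sizes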

  15. Algorithm-enabled exploration of image-quality potential of cone-beam CT in image-guided radiation therapy

    NASA Astrophysics Data System (ADS)

    Han, Xiao; Pearson, Erik; Pelizzari, Charles; Al-Hallaq, Hania; Sidky, Emil Y.; Bian, Junguo; Pan, Xiaochuan

    2015-06-01

    A kilo-voltage (kV) cone-beam computed tomography (CBCT) unit mounted onto a linear accelerator treatment system, often referred to as an on-board imager (OBI), plays an increasingly important role in image-guided radiation therapy. While the FDK algorithm is currently used for reconstructing images from clinical OBI data, optimization-based reconstruction has also been investigated for OBI CBCT. An optimization-based reconstruction involves numerous parameters, which can significantly impact reconstruction properties (or utility). The success of an optimization-based reconstruction for a particular class of practical applications thus relies strongly on appropriate selection of parameter values. In this work, we focus on tailoring constrained-TV-minimization-based reconstruction, an optimization-based reconstruction previously shown to have potential for CBCT imaging conditions of practical interest, to OBI imaging through appropriate selection of parameter values. In particular, for real data of phantoms and a patient collected with OBI CBCT, we first devise utility metrics specific to OBI-quality-assurance tasks and then apply them to guide the selection of parameter values in constrained-TV-minimization-based reconstruction. The results show that the reconstructions improve on clinical FDK reconstruction in both visualization and quantitative assessment in terms of the devised utility metrics.

  16. Concepts for on-board satellite image registration, volume 1

    NASA Technical Reports Server (NTRS)

    Ruedger, W. H.; Daluge, D. R.; Aanstoos, J. V.

    1980-01-01

    The NASA-NEEDS program goals present a requirement for on-board signal processing to achieve user-compatible, information-adaptive data acquisition. One very specific area of interest is the preprocessing required to register imaging sensor data which have been distorted by anomalies in subsatellite-point position and/or attitude control. The concepts and considerations involved in using state-of-the-art positioning systems such as the Global Positioning System (GPS) in concert with state-of-the-art attitude stabilization and/or determination systems to provide the required registration accuracy are discussed with emphasis on assessing the accuracy to which a given image picture element can be located and identified, determining those algorithms required to augment the registration procedure and evaluating the technology impact on performing these procedures on-board the satellite.

  17. On Parallel Push-Relabel based Algorithms for Bipartite Maximum Matching

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Langguth, Johannes; Azad, Md Ariful; Halappanavar, Mahantesh

    2014-07-01

    We study multithreaded push-relabel based algorithms for computing maximum cardinality matching in bipartite graphs. Matching is a fundamental combinatorial (graph) problem with applications in a wide variety of problems in science and engineering. We are motivated by its use in the context of sparse linear solvers for computing maximum transversal of a matrix. We implement and test our algorithms on several multi-socket multicore systems and compare their performance to state-of-the-art augmenting path-based serial and parallel algorithms using a testset comprised of a wide range of real-world instances. Building on several heuristics for enhancing performance, we demonstrate good scaling for the parallel push-relabel algorithm. We show that it is comparable to the best augmenting path-based algorithms for bipartite matching. To the best of our knowledge, this is the first extensive study of multithreaded push-relabel based algorithms. In addition to a direct impact on the applications using matching, the proposed algorithmic techniques can be extended to preflow-push based algorithms for computing maximum flow in graphs.
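
    For contrast with push-relabel, the augmenting-path baseline such studies compare against can be stated in a few lines (Kuhn's algorithm; serial, O(V*E)):

        # Kuhn's augmenting-path algorithm for maximum bipartite matching:
        # repeatedly search for an alternating path ending in a free right vertex.
        def max_bipartite_matching(n_left, n_right, adj):
            match_r = [-1] * n_right               # right vertex -> matched left vertex

            def try_augment(u, seen):
                for v in adj[u]:
                    if not seen[v]:
                        seen[v] = True
                        if match_r[v] == -1 or try_augment(match_r[v], seen):
                            match_r[v] = u         # flip edges along the path
                            return True
                return False

            matched = sum(try_augment(u, [False] * n_right) for u in range(n_left))
            return matched, match_r

        # 3x3 example with a perfect matching
        adj = [[0, 1], [0], [1, 2]]
        print(max_bipartite_matching(3, 3, adj))   # (3, [1, 0, 2])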

  18. Experimental quantum computing to solve systems of linear equations.

    PubMed

    Cai, X-D; Weedbrook, C; Su, Z-E; Chen, M-C; Gu, Mile; Zhu, M-J; Li, Li; Liu, Nai-Le; Lu, Chao-Yang; Pan, Jian-Wei

    2013-06-07

    Solving linear systems of equations is ubiquitous in all areas of science and engineering. With rapidly growing data sets, such a task can be intractable for classical computers, as the best known classical algorithms require a time proportional to the number of variables N. A recently proposed quantum algorithm shows that quantum computers could solve linear systems in a time scale of order log(N), giving an exponential speedup over classical computers. Here we realize the simplest instance of this algorithm, solving 2×2 linear equations for various input vectors on a quantum computer. We use four quantum bits and four controlled logic gates to implement every subroutine required, demonstrating the working principle of this algorithm.

  19. Quantum Statistical Mechanics on a Quantum Computer

    NASA Astrophysics Data System (ADS)

    Raedt, H. D.; Hams, A. H.; Michielsen, K.; Miyashita, S.; Saito, K.

    We describe a quantum algorithm to compute the density of states and thermal equilibrium properties of quantum many-body systems. We present results obtained by running this algorithm on a software implementation of a 21-qubit quantum computer for the case of an antiferromagnetic Heisenberg model on triangular lattices of different size.

  20. Quantum rendering

    NASA Astrophysics Data System (ADS)

    Lanzagorta, Marco O.; Gomez, Richard B.; Uhlmann, Jeffrey K.

    2003-08-01

    In recent years, computer graphics has emerged as a critical component of the scientific and engineering process, and it is recognized as an important computer science research area. Computer graphics are extensively used for a variety of aerospace and defense training systems and by Hollywood's special effects companies. All these applications require the computer graphics systems to produce high quality renderings of extremely large data sets in short periods of time. Much research has been done in "classical computing" toward the development of efficient methods and techniques to reduce the rendering time required for large datasets. Quantum Computing's unique algorithmic features offer the possibility of speeding up some of the known rendering algorithms currently used in computer graphics. In this paper we discuss possible implementations of quantum rendering algorithms. In particular, we concentrate on the implementation of Grover's quantum search algorithm for Z-buffering, ray-tracing, radiosity, and scene management techniques. We also compare the theoretical performance between the classical and quantum versions of the algorithms.

  1. Signal and image processing algorithm performance in a virtual and elastic computing environment

    NASA Astrophysics Data System (ADS)

    Bennett, Kelly W.; Robertson, James

    2013-05-01

    The U.S. Army Research Laboratory (ARL) supports the development of classification, detection, tracking, and localization algorithms using multiple sensing modalities including acoustic, seismic, E-field, magnetic field, PIR, and visual and IR imaging. Multimodal sensors collect large amounts of data in support of algorithm development, and the resulting data volume, with its associated high-performance computing needs, increasingly challenges existing computing infrastructures. Purchasing computer power as a commodity using a Cloud service offers low-cost, pay-as-you-go pricing models, scalability, and elasticity that may provide solutions to develop and optimize algorithms without having to procure additional hardware and resources. This paper provides a detailed look at using a commercial cloud service provider, such as Amazon Web Services (AWS), to develop and deploy simple signal and image processing algorithms in a cloud and run the algorithms on a large set of data archived in the ARL Multimodal Signatures Database (MMSDB). Analytical results will provide performance comparisons with existing infrastructure. A discussion on using cloud computing with government data will cover the best security practices that exist within cloud services, such as AWS.

  2. An Annotated Partial List of Science-Related Computer Bulletin Board Systems.

    ERIC Educational Resources Information Center

    Journal of Student Research, 1990

    1990-01-01

    A list of science-related computer bulletin board systems is presented. Entries include geographic area, phone number, and a short explanation of services. Also included are the addresses and phone numbers of selected commercial services. (KR)

  3. Robust tuning of robot control systems

    NASA Technical Reports Server (NTRS)

    Minis, I.; Uebel, M.

    1992-01-01

    The computed torque control problem is examined for a robot arm with flexible, geared joint drive systems, which are typical in many industrial robots. The standard computed torque algorithm is not directly applicable to this class of manipulators because of the dynamics introduced by the joint drive system. The proposed approach combines a computed torque algorithm with a torque controller at each joint. Three such control schemes are proposed. The first scheme uses the joint torque control system currently implemented on the robot arm and a novel form of the computed torque algorithm. The other two use the standard computed torque algorithm together with a novel torque control system based on model-following techniques. Standard tasks and performance indices are used to evaluate the performance of the controllers, using both numerical simulations and experiments. The study shows that all three proposed systems lead to improved tracking performance over a conventional PD controller.
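
    The standard rigid-body computed-torque law is tau = M(q)(qdd_des + Kv*de + Kp*e) + C(q, qd)*qd + g(q); with an exact model it linearizes the error dynamics. A one-joint sketch with invented pendulum-like dynamics follows; the flexible-joint case addressed by the paper is precisely what this simple form omits.

        import numpy as np

        # Computed-torque control of a single rigid link:
        # tau = I * (qdd_des + Kv*de + Kp*e) + g(q), with e = q_des - q.
        M, g0, L, Kp, Kv, dt = 1.0, 9.81, 0.5, 100.0, 20.0, 1e-3
        I = M * L**2                                   # link inertia (assumed)

        def gravity(q):
            return M * g0 * L * np.sin(q)

        q, qd = 0.0, 0.0
        for k in range(3000):
            t = k * dt
            q_des, qd_des, qdd_des = np.sin(t), np.cos(t), -np.sin(t)
            e, de = q_des - q, qd_des - qd
            tau = I * (qdd_des + Kv * de + Kp * e) + gravity(q)   # feedback linearization
            qdd = (tau - gravity(q)) / I               # plant: I*qdd + g(q) = tau
            qd += qdd * dt
            q += qd * dt
        # with exact cancellation, error dynamics are e'' + Kv e' + Kp e = 0
        print("tracking error:", abs(np.sin(3000 * dt) - q))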

  4. A parallel simulated annealing algorithm for standard cell placement on a hypercube computer

    NASA Technical Reports Server (NTRS)

    Jones, Mark Howard

    1987-01-01

    A parallel version of a simulated annealing algorithm is presented which is targeted to run on a hypercube computer. A strategy for mapping the cells in a two dimensional area of a chip onto processors in an n-dimensional hypercube is proposed such that both small and large distance moves can be applied. Two types of moves are allowed: cell exchanges and cell displacements. The computation of the cost function in parallel among all the processors in the hypercube is described along with a distributed data structure that needs to be stored in the hypercube to support parallel cost evaluation. A novel tree broadcasting strategy is used extensively in the algorithm for updating cell locations in the parallel environment. Studies on the performance of the algorithm on example industrial circuits show that it is faster and gives better final placement results than the uniprocessor simulated annealing algorithms. An improved uniprocessor algorithm is proposed which is based on the improved results obtained from parallelization of the simulated annealing algorithm.
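
    The serial kernel being parallelized, in miniature: random cell moves accepted by the Metropolis rule against a half-perimeter wirelength cost. Only exchange moves are shown (the paper also uses displacements), and every parameter is illustrative. Recomputing the full cost per move keeps the sketch short; real placers use incremental cost updates.

        import math, random

        # Simulated annealing for toy standard-cell placement on a grid.
        random.seed(5)
        n_cells, grid = 20, 8
        nets = [random.sample(range(n_cells), 3) for _ in range(25)]
        pos = {c: (random.randrange(grid), random.randrange(grid))
               for c in range(n_cells)}

        def wirelength():
            total = 0
            for net in nets:   # half-perimeter bounding box per net
                xs = [pos[c][0] for c in net]
                ys = [pos[c][1] for c in net]
                total += (max(xs) - min(xs)) + (max(ys) - min(ys))
            return total

        T, cost = 10.0, wirelength()
        while T > 0.01:
            for _ in range(200):
                a, b = random.sample(range(n_cells), 2)
                pos[a], pos[b] = pos[b], pos[a]          # exchange move
                new = wirelength()
                if new <= cost or random.random() < math.exp((cost - new) / T):
                    cost = new                           # accept (Metropolis rule)
                else:
                    pos[a], pos[b] = pos[b], pos[a]      # reject: undo
            T *= 0.9                                     # geometric cooling
        print("final wirelength:", cost)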

  5. Arbitrated Quantum Signature with Hamiltonian Algorithm Based on Blind Quantum Computation

    NASA Astrophysics Data System (ADS)

    Shi, Ronghua; Ding, Wanting; Shi, Jinjing

    2018-03-01

    A novel arbitrated quantum signature (AQS) scheme is proposed motivated by the Hamiltonian algorithm (HA) and blind quantum computation (BQC). The generation and verification of signature algorithm is designed based on HA, which enables the scheme to rely less on computational complexity. It is unnecessary to recover original messages when verifying signatures since the blind quantum computation is applied, which can improve the simplicity and operability of our scheme. It is proved that the scheme can be deployed securely, and the extended AQS has some extensive applications in E-payment system, E-government, E-business, etc.

  6. Arbitrated Quantum Signature with Hamiltonian Algorithm Based on Blind Quantum Computation

    NASA Astrophysics Data System (ADS)

    Shi, Ronghua; Ding, Wanting; Shi, Jinjing

    2018-07-01

    A novel arbitrated quantum signature (AQS) scheme is proposed motivated by the Hamiltonian algorithm (HA) and blind quantum computation (BQC). The generation and verification of signature algorithm is designed based on HA, which enables the scheme to rely less on computational complexity. It is unnecessary to recover original messages when verifying signatures since the blind quantum computation is applied, which can improve the simplicity and operability of our scheme. It is proved that the scheme can be deployed securely, and the extended AQS has some extensive applications in E-payment system, E-government, E-business, etc.

  7. Algorithms for computing the geopotential using a simple density layer

    NASA Technical Reports Server (NTRS)

    Morrison, F.

    1976-01-01

    Several algorithms have been developed for computing the potential and attraction of a simple density layer. These are numerical cubature, Taylor series, and a mixed analytic and numerical integration using a singularity-matching technique. A computer program has been written to combine these techniques for computing the disturbing acceleration on an artificial earth satellite. A total of 1640 equal-area, constant surface density blocks on an oblate spheroid are used. The singularity-matching algorithm is used in the subsatellite region, Taylor series in the surrounding zone, and numerical cubature on the rest of the earth.
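
    The numerical-cubature branch in miniature: the potential of a simple density layer is V = G * (surface integral of sigma / r), approximated by summing over equal-area surface blocks. A sphere stands in for the oblate spheroid here for brevity; for a uniform layer the shell theorem gives an exact check value of G*M/d.

        import numpy as np

        # Potential of a simple density layer by block summation:
        # V = G * sum over blocks of sigma * dS / r.
        G, R = 6.674e-11, 6.371e6
        nlat, nlon = 40, 80
        lat = np.linspace(-np.pi / 2, np.pi / 2, nlat + 1)
        lon = np.linspace(0.0, 2.0 * np.pi, nlon + 1)
        sigma = 1000.0                               # uniform surface density, kg/m^2
        sat = np.array([0.0, 0.0, 3.0 * R])          # field point at 3 radii

        V = 0.0
        for i in range(nlat):
            for j in range(nlon):
                clat = (lat[i] + lat[i + 1]) / 2     # block centre
                clon = (lon[j] + lon[j + 1]) / 2
                # exact area of the spherical patch
                dS = R**2 * (lon[j + 1] - lon[j]) * (np.sin(lat[i + 1]) - np.sin(lat[i]))
                p = R * np.array([np.cos(clat) * np.cos(clon),
                                  np.cos(clat) * np.sin(clon), np.sin(clat)])
                V += G * sigma * dS / np.linalg.norm(sat - p)

        mass = sigma * 4.0 * np.pi * R**2
        print(V, G * mass / (3.0 * R))               # cubature vs shell-theorem value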

  8. Sorting on STAR. [CDC computer algorithm timing comparison

    NASA Technical Reports Server (NTRS)

    Stone, H. S.

    1978-01-01

    Timing comparisons are given for three sorting algorithms written for the CDC STAR computer. One algorithm is Hoare's (1962) Quicksort, which is the fastest or nearly the fastest sorting algorithm for most computers. A second algorithm is a vector version of Quicksort that takes advantage of the STAR's vector operations. The third algorithm is an adaptation of Batcher's (1968) sorting algorithm, which makes especially good use of vector operations but has a complexity of N(log N)^2 as compared with a complexity of N log N for the Quicksort algorithms. In spite of its worse complexity, Batcher's sorting algorithm is competitive with the serial version of Quicksort for vectors up to the largest that can be treated by STAR. Vector Quicksort outperforms the other two algorithms and is generally preferred. These results indicate that unusual instruction sets can introduce biases in program execution time that counter results predicted by worst-case asymptotic complexity analysis.
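
    The vector friendliness of Batcher-style networks is visible in a few lines: every inner step is one whole-array compare-exchange, at the price of an N(log N)^2 operation count. Below is a bitonic-sort sketch with numpy; Batcher's odd-even merge, used in the paper, has the same data-parallel character.

        import numpy as np

        # Bitonic sort: each inner step is a single whole-array compare-exchange,
        # which maps directly onto vector hardware.
        def bitonic_sort(a):
            a = a.copy()
            n = len(a)                       # n must be a power of two
            idx = np.arange(n)
            k = 2
            while k <= n:
                j = k // 2
                while j >= 1:
                    partner = idx ^ j
                    ascending = (idx & k) == 0
                    lo = np.minimum(a, a[partner])
                    hi = np.maximum(a, a[partner])
                    a = np.where((idx < partner) == ascending, lo, hi)
                    j //= 2
                k *= 2
            return a

        x = np.random.default_rng(6).integers(0, 1000, 256)
        assert np.array_equal(bitonic_sort(x), np.sort(x))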

  9. Interface between a printed circuit board computer aided design tool (Tektronix 4051 based) and a numerical paper tape controlled drill press (Slo-Syn 530: 100 w/ Dumore Automatic Head Number 8391)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heckman, B.K.; Chinn, V.K.

    1981-01-01

    The development and use of computer programs written to produce the paper tape needed for the automation, or numeric control, of drill presses employed to fabricate computer-designed printed circuit boards are described. (LCL)

  10. Hardware Acceleration of Adaptive Neural Algorithms.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    James, Conrad D.

    As traditional numerical computing has faced challenges, researchers have turned towards alternative computing approaches to reduce power-per-computation metrics and improve algorithm performance. Here, we describe an approach towards non-conventional computing that strengthens the connection between machine learning and neuroscience concepts. The Hardware Acceleration of Adaptive Neural Algorithms (HAANA) project has developed neural machine learning algorithms and hardware for applications in image processing and cybersecurity. While machine learning methods are effective at extracting relevant features from many types of data, the effectiveness of these algorithms degrades when subjected to real-world conditions. Our team has generated novel neural-inspired approaches to improve the resiliency and adaptability of machine learning algorithms. In addition, we have also designed and fabricated hardware architectures and microelectronic devices specifically tuned towards the training and inference operations of neural-inspired algorithms. Finally, our multi-scale simulation framework allows us to assess the impact of microelectronic device properties on algorithm performance.

  11. Reconfigurable modular computer networks for spacecraft on-board processing

    NASA Technical Reports Server (NTRS)

    Rennels, D. A.

    1978-01-01

    The core electronics subsystems on unmanned spacecraft, which have been sent over the last 20 years to investigate the moon, Mars, Venus, and Mercury, have progressed through an evolution from simple fixed controllers and analog computers in the 1960's to general-purpose digital computers in current designs. This evolution is now moving in the direction of distributed computer networks. Current Voyager spacecraft already use three on-board computers. One is used to store commands and provide overall spacecraft management. Another is used for instrument control and telemetry collection, and the third computer is used for attitude control and scientific instrument pointing. An examination of the control logic in the instruments shows that, for many, it is cost-effective to replace the sequencing logic with a microcomputer. The Unified Data System architecture considered consists of a set of standard microcomputers connected by several redundant buses. A typical self-checking computer module will contain 23 RAMs, two microprocessors, one memory interface, three bus interfaces, and one core building block.

  12. On-board multicarrier demodulator for mobile applications using DSP implementation

    NASA Astrophysics Data System (ADS)

    Yim, W. H.; Kwan, C. C. D.; Coakley, F. P.; Evans, B. G.

    1990-11-01

    This paper describes the design and implementation of an on-board multicarrier demodulator using commercial digital signal processors, for use in a mobile satellite communication system employing an up-link SCPC/FDMA scheme. Channels are separated by a flexible multistage digital filter bank followed by a channel-multiplexed digital demodulator array. The cross/dot-product design approach to the error detector leads to a new QPSK frequency control algorithm that allows fast acquisition without a special preamble pattern. Timing correction is performed digitally using an extended stack of polyphase sub-filters.
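
    The cross/dot-product idea in scalar form: for successive complex samples, p = z[n] * conj(z[n-1]) has a real part (the "dot" product) proportional to the cosine of the phase increment and an imaginary part (the "cross" product) proportional to its sine, so atan2(cross, dot) reads off the frequency offset. The toy below strips QPSK modulation with a fourth-power step, which is one standard trick and not necessarily the paper's exact detector; all signal parameters are invented.

        import numpy as np

        # Frequency offset estimation on a QPSK burst via averaged cross/dot
        # products of one-sample-lag conjugate products, raised to the 4th
        # power to remove the QPSK modulation.
        rng = np.random.default_rng(7)
        n, f_off = 4000, 0.013                     # true offset: cycles per sample
        symbols = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, n)))
        z = symbols * np.exp(2j * np.pi * f_off * np.arange(n))
        z += 0.05 * (rng.normal(size=n) + 1j * rng.normal(size=n))

        p = (z[1:] * np.conj(z[:-1])) ** 4         # modulation-free increments
        dot, cross = p.real.sum(), p.imag.sum()    # averaged dot/cross products
        f_est = np.arctan2(cross, dot) / (2.0 * np.pi * 4.0)
        print(f_off, f_est)                        # estimate recovers the offset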

  13. Continuous monitoring of the lunar or Martian subsurface using on-board pattern recognition and neural processing of Rover geophysical data

    NASA Technical Reports Server (NTRS)

    Glass, Charles E.; Boyd, Richard V.; Sternberg, Ben K.

    1991-01-01

    The overall aim is to provide base technology for an automated vision system for on-board interpretation of geophysical data. During the first year's work, it was demonstrated that geophysical data can be treated as patterns and interpreted using single neural networks. Current research is developing an integrated vision system comprising neural networks, algorithmic preprocessing, and expert knowledge. This system is to be tested incrementally using synthetic geophysical patterns, laboratory generated geophysical patterns, and field geophysical patterns.

  14. Computational Approaches to Simulation and Optimization of Global Aircraft Trajectories

    NASA Technical Reports Server (NTRS)

    Ng, Hok Kwan; Sridhar, Banavar

    2016-01-01

    This study examines three possible approaches to improving the speed in generating wind-optimal routes for air traffic at the national or global level. They are: (a) using the resources of a supercomputer, (b) running the computations on multiple commercially available computers and (c) implementing those same algorithms into NASAs Future ATM Concepts Evaluation Tool (FACET) and compares those to a standard implementation run on a single CPU. Wind-optimal aircraft trajectories are computed using global air traffic schedules. The run time and wait time on the supercomputer for trajectory optimization using various numbers of CPUs ranging from 80 to 10,240 units are compared with the total computational time for running the same computation on a single desktop computer and on multiple commercially available computers for potential computational enhancement through parallel processing on the computer clusters. This study also re-implements the trajectory optimization algorithm for further reduction of computational time through algorithm modifications and integrates that with FACET to facilitate the use of the new features which calculate time-optimal routes between worldwide airport pairs in a wind field for use with existing FACET applications. The implementations of trajectory optimization algorithms use MATLAB, Python, and Java programming languages. The performance evaluations are done by comparing their computational efficiencies and based on the potential application of optimized trajectories. The paper shows that in the absence of special privileges on a supercomputer, a cluster of commercially available computers provides a feasible approach for national and global air traffic system studies.

  15. New design environment for defect detection in web inspection systems

    NASA Astrophysics Data System (ADS)

    Hajimowlana, S. Hossain; Muscedere, Roberto; Jullien, Graham A.; Roberts, James W.

    1997-09-01

    One of the aims of industrial machine vision is to develop computer and electronic systems intended to replace human vision in the quality control of industrial production. In this paper we discuss a new design environment developed for real-time defect detection using a reconfigurable FPGA and DSP processor mounted inside a DALSA programmable CCD camera. The FPGA is directly connected to the video data stream and outputs data to a low-bandwidth output bus. The system is targeted at web inspection but has the potential for broader application areas. We describe and show test results of the prototype system board, mounted inside a DALSA camera, and discuss some of the algorithms currently simulated and implemented for web inspection applications.

  16. A fast D.F.T. algorithm using complex integer transforms

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1978-01-01

    Winograd (1976) has developed a new class of algorithms which depend heavily on the computation of a cyclic convolution for computing the conventional DFT (discrete Fourier transform); this new algorithm, for a few hundred transform points, requires substantially fewer multiplications than the conventional FFT algorithm. Reed and Truong have defined a special class of finite Fourier-like transforms over GF(q squared), where q = 2 to the p power minus 1 is a Mersenne prime for p = 2, 3, 5, 7, 13, 17, 19, 31, 61. In the present paper it is shown that Winograd's algorithm can be combined with the aforementioned Fourier-like transform to yield a new algorithm for computing the DFT. A fast method for accurately computing the DFT of a sequence of complex numbers of very long transform-lengths is thus obtained.

  17. Characterization of robotics parallel algorithms and mapping onto a reconfigurable SIMD machine

    NASA Technical Reports Server (NTRS)

    Lee, C. S. G.; Lin, C. T.

    1989-01-01

    The kinematics, dynamics, Jacobian, and their corresponding inverse computations are six essential problems in the control of robot manipulators. Efficient parallel algorithms for these computations are discussed and analyzed. Their characteristics are identified and a scheme for mapping these algorithms onto a reconfigurable parallel architecture is presented. Based on the characteristics, including type of parallelism, degree of parallelism, uniformity of the operations, fundamental operations, data dependencies, and communication requirements, it is shown that most of the algorithms for robotic computations possess highly regular properties and some common structures, especially the linear recursive structure. Moreover, they are well suited to implementation on a single-instruction-stream multiple-data-stream (SIMD) computer with a reconfigurable interconnection network. The model of a reconfigurable dual-network SIMD machine with internal direct feedback is introduced, and a systematic procedure to map these computations onto the proposed machine is presented. A new scheduling problem for SIMD machines is investigated and a heuristic algorithm, called neighborhood scheduling, that reorders the processing sequence of subtasks to reduce the communication time is described. Mapping results of a benchmark algorithm are illustrated and discussed.

  18. Research in Wireless Networks and Communications

    DTIC Science & Technology

    2008-05-01

    TESTBED SETUP AND INITIAL MULTI-HOP EXPERIENCE As a proof of concept, we assembled a testbed platform of nodes based on 400MHz AMD Geode single-board...experiments on a testbed network consisting of 400MHz AMD Geode single-board computers made by Thecus Inc. We equipped each of these nodes with two...ground nodes were placed on a line, with about 3 feet of separation between adjacent nodes. The nodes were powered by 400MHz AMD Geode single-board

  19. CUDA Optimization Strategies for Compute- and Memory-Bound Neuroimaging Algorithms

    PubMed Central

    Lee, Daren; Dinov, Ivo; Dong, Bin; Gutman, Boris; Yanovsky, Igor; Toga, Arthur W.

    2011-01-01

    As neuroimaging algorithms and technology continue to grow faster than CPU performance in complexity and image resolution, data-parallel computing methods will be increasingly important. The high performance, data-parallel architecture of modern graphical processing units (GPUs) can reduce computational times by orders of magnitude. However, its massively threaded architecture introduces challenges when GPU resources are exceeded. This paper presents optimization strategies for compute- and memory-bound algorithms for the CUDA architecture. For compute-bound algorithms, the registers are reduced through variable reuse via shared memory and the data throughput is increased through heavier thread workloads and maximizing the thread configuration for a single thread block per multiprocessor. For memory-bound algorithms, fitting the data into the fast but limited GPU resources is achieved through reorganizing the data into self-contained structures and employing a multi-pass approach. Memory latencies are reduced by selecting memory resources whose cache performance are optimized for the algorithm's access patterns. We demonstrate the strategies on two computationally expensive algorithms and achieve optimized GPU implementations that perform up to 6× faster than unoptimized ones. Compared to CPU implementations, we achieve peak GPU speedups of 129× for the 3D unbiased nonlinear image registration technique and 93× for the non-local means surface denoising algorithm. PMID:21159404

  20. CUDA optimization strategies for compute- and memory-bound neuroimaging algorithms.

    PubMed

    Lee, Daren; Dinov, Ivo; Dong, Bin; Gutman, Boris; Yanovsky, Igor; Toga, Arthur W

    2012-06-01

    As neuroimaging algorithms and technology continue to grow faster than CPU performance in complexity and image resolution, data-parallel computing methods will be increasingly important. The high performance, data-parallel architecture of modern graphical processing units (GPUs) can reduce computational times by orders of magnitude. However, its massively threaded architecture introduces challenges when GPU resources are exceeded. This paper presents optimization strategies for compute- and memory-bound algorithms for the CUDA architecture. For compute-bound algorithms, the registers are reduced through variable reuse via shared memory and the data throughput is increased through heavier thread workloads and maximizing the thread configuration for a single thread block per multiprocessor. For memory-bound algorithms, fitting the data into the fast but limited GPU resources is achieved through reorganizing the data into self-contained structures and employing a multi-pass approach. Memory latencies are reduced by selecting memory resources whose cache performance are optimized for the algorithm's access patterns. We demonstrate the strategies on two computationally expensive algorithms and achieve optimized GPU implementations that perform up to 6× faster than unoptimized ones. Compared to CPU implementations, we achieve peak GPU speedups of 129× for the 3D unbiased nonlinear image registration technique and 93× for the non-local means surface denoising algorithm. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  1. Going Paperless: How One School Board Made the Move to Electronic Agendas.

    ERIC Educational Resources Information Center

    Mills, Nancy V.

    2000-01-01

    An effort to improve communications between school board members and the superintendent and administrators of the Katy (Texas) Independent School District has evolved into electronic board agendas and paperless board meetings. Installation of laptop computers, printers, fax machines, and dedicated phone lines in board members' homes was key. (MLH)

  2. GPD+ wet tropospheric corrections for eight altimetric missions for the Sea Level ECV generation

    NASA Astrophysics Data System (ADS)

    Fernandes, Joana; Lázaro, Clara; Benveniste, Jérôme

    2016-04-01

    Due to its large spatio-temporal variability, the delay induced by the water vapour and liquid water content of the atmosphere in the altimeter signal, or wet tropospheric correction (WTC), is still one of the largest sources of uncertainty in satellite altimetry. In the scope of the Sea Level (SL) Climate Change Initiative (cci) project, the University of Porto (UPorto) has been developing methods to improve the WTC (Fernandes et al., 2015). Started as a coastal algorithm to remove land effects in the microwave radiometers (MWR) on board altimeter missions, the GNSS-derived Path Delay (GPD) methodology evolved to cover the open ocean, including high latitudes, correcting for observations rendered invalid by land, ice and rain contamination, and instrument malfunction. The most recent version of the algorithm, GPD Plus (GPD+), computes wet path delays based on: i) WTC from the on-board MWR measurements, whenever they exist and are valid; ii) new WTC values estimated through space-time objective analysis of all available data sources, whenever the former are considered invalid. In the estimation of the new WTC values, the following data sets are used: valid measurements from the on-board MWR, water vapour products derived from a set of 17 scanning imaging radiometers (SI-MWR) on board various remote sensing satellites, and tropospheric delays derived from Global Navigation Satellite Systems (GNSS) coastal and island stations. In the estimation process, the WTC derived from an atmospheric model, such as the European Centre for Medium-range Weather Forecasts (ECMWF) ReAnalysis (ERA) Interim or the operational (Op) model, is used as first guess and is the adopted value in the absence of measurements. The corrections are provided for all missions used to generate the SL Essential Climate Variable (ECV): TOPEX/Poseidon (T/P), Jason-1, Jason-2, ERS-1, ERS-2, Envisat, CryoSat-2 and SARAL/AltiKa. To ensure consistency and long-term stability of the WTC datasets, the radiometers used in the GPD+ estimations have been inter-calibrated against the stable and independently calibrated Special Sensor Microwave Imager (SSM/I) and SSM/I Sounder (SSM/IS) sensors on board the Defense Meteorological Satellite Program satellite series (F10, F11, F13, F14, F16 and F17). The new products reduce the sea level anomaly variance, both along-track and at crossovers, with respect to previous non-calibrated versions and to other WTC data sets such as the AVISO Composite (Comp) correction and atmospheric models. Improvements are particularly significant for T/P and all ESA missions, especially in coastal regions and at high latitudes. In comparison with previous GPD versions, the main impacts are on the sea level trends at decadal time scales and on regional sea level trends. For CryoSat-2, the GPD+ WTC improves the SL ECV when compared to the baseline correction from the ECMWF Op model. With a view to obtaining the best WTC for use in version 2 of the SL_cci ECV, new products are under development, based on recently released on-board MWR WTC for missions such as Jason-1, Envisat and SARAL. Fernandes, M.J., Lázaro, C., Ablain, M., Pires, N., Improved wet path delays for all ESA and reference altimetric missions, Remote Sensing of Environment, 169, 50-74, November 2015, ISSN 0034-4257, http://dx.doi.org/10.1016/j.rse.2015.07.023
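
    The GPD+ selection cascade described above reduces, per along-track point, to a three-way fallback. A minimal Python sketch of that logic follows (field and function names are hypothetical; the real algorithm also performs the space-time objective analysis that produces the estimated value):

      def select_wtc(mwr_wtc, mwr_valid, estimated_wtc, model_wtc):
          """Use the on-board MWR value when valid, else a value estimated
          from surrounding data sources (MWR, SI-MWR, GNSS), else fall
          back to the atmospheric-model first guess."""
          if mwr_valid:
              return mwr_wtc
          if estimated_wtc is not None:
              return estimated_wtc
          return model_wtc  # e.g. ERA-Interim or operational ECMWF first guess

      # over open ocean with a healthy radiometer, the MWR value wins
      print(select_wtc(-0.152, True, -0.148, -0.140))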

  3. Partitioning sparse matrices with eigenvectors of graphs

    NASA Technical Reports Server (NTRS)

    Pothen, Alex; Simon, Horst D.; Liou, Kang-Pu

    1990-01-01

    The problem of computing a small vertex separator in a graph arises in the context of computing a good ordering for the parallel factorization of sparse, symmetric matrices. An algebraic approach for computing vertex separators is considered in this paper. It is shown that lower bounds on separator sizes can be obtained in terms of the eigenvalues of the Laplacian matrix associated with a graph. The Laplacian eigenvectors of grid graphs can be computed from Kronecker products involving the eigenvectors of path graphs, and these eigenvectors can be used to compute good separators in grid graphs. A heuristic algorithm is designed to compute a vertex separator in a general graph by first computing an edge separator in the graph from an eigenvector of the Laplacian matrix, and then using a maximum matching in a subgraph to compute the vertex separator. Results on the quality of the separators computed by the spectral algorithm are presented, and these are compared with separators obtained from other algorithms for computing separators. Finally, the time required to compute the Laplacian eigenvector is reported, and the accuracy with which the eigenvector must be computed to obtain good separators is considered. The spectral algorithm has the advantage that it can be implemented on a medium-size multiprocessor in a straightforward manner.
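
    The first stage of the spectral method, splitting a graph by the sign pattern of the Fiedler vector, fits in a few lines of dense linear algebra. The sketch below (toy scale only; the paper's matching-based refinement to a vertex separator is omitted) uses a median split of the Laplacian eigenvector with the second-smallest eigenvalue:

      import numpy as np

      def spectral_edge_separator(adj):
          """Split a graph in two by the sign of the Fiedler vector, i.e.
          the eigenvector of the Laplacian with the second-smallest
          eigenvalue. `adj` is a dense symmetric 0/1 adjacency matrix."""
          degrees = adj.sum(axis=1)
          laplacian = np.diag(degrees) - adj
          eigvals, eigvecs = np.linalg.eigh(laplacian)
          fiedler = eigvecs[:, 1]                  # second-smallest eigenvalue
          median = np.median(fiedler)
          part_a = np.where(fiedler < median)[0]
          part_b = np.where(fiedler >= median)[0]
          return part_a, part_b

      # 6-cycle: the split should cut it into two contiguous arcs
      adj = np.zeros((6, 6), int)
      for i in range(6):
          adj[i, (i + 1) % 6] = adj[(i + 1) % 6, i] = 1
      print(spectral_edge_separator(adj))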

  4. The Construction of 3-d Neutral Density for Arbitrary Data Sets

    NASA Astrophysics Data System (ADS)

    Riha, S.; McDougall, T. J.; Barker, P. M.

    2014-12-01

    The Neutral Density variable allows inference of water pathways from thermodynamic properties in the global ocean, and is therefore an essential component of global ocean circulation analysis. The widely used algorithm for the computation of Neutral Density yields accurate results for data sets which are close to the observed climatological ocean. Long-term numerical climate simulations, however, often generate a significant drift from present-day climate, which renders the existing algorithm inaccurate. To remedy this problem, new algorithms which operate on arbitrary data have been developed, which may potentially be used to compute Neutral Density during runtime of a numerical model. We review existing approaches for the construction of Neutral Density in arbitrary data sets, detail their algorithmic structure, and present an analysis of the computational cost for implementations on a single-CPU computer. We discuss possible strategies for the implementation in state-of-the-art numerical models, with a focus on distributed computing environments.

  5. An innovative localisation algorithm for railway vehicles

    NASA Astrophysics Data System (ADS)

    Allotta, B.; D'Adamio, P.; Malvezzi, M.; Pugi, L.; Ridolfi, A.; Rindi, A.; Vettori, G.

    2014-11-01

    In modern railway automatic train protection and automatic train control systems, odometry is a safety-relevant on-board subsystem which estimates the instantaneous speed and the travelled distance of the train; high reliability of the odometry estimate is fundamental, since an error in the train position may lead to a potentially dangerous overestimation of the distance available for braking. To improve the accuracy of the odometry estimate, data fusion of different inputs coming from a redundant sensor layout may be used. The aim of this work has been to develop an innovative localisation algorithm for railway vehicles able to improve on the speed and position estimation accuracy of classical odometry algorithms, such as the Italian Sistema Controllo Marcia Treno (SCMT). The proposed strategy consists of a sensor fusion between the information coming from a tachometer and an Inertial Measurement Unit (IMU). The sensor outputs have been simulated through a 3D multibody model of a railway vehicle. The work also included the development of a custom IMU, designed by ECM S.p.A. to meet industrial and business requirements. The industrial requirements have to be compliant with the European Train Control System (ETCS) standards: the European Rail Traffic Management System (ERTMS), a project developed by the European Union to improve interoperability among different countries, in particular as regards train control and command systems, fixes standard values for odometric (ODO) performance in terms of speed and travelled-distance estimation. The reliability of the ODO estimation has to be assessed based on the allowed speed profiles. The results of the currently used ODO algorithms can be improved, especially in the case of degraded adhesion conditions; it has been verified in the simulation environment that the results of the proposed localisation algorithm are always compliant with the ERTMS requirements. The estimation strategy performs well even under degraded adhesion conditions and could be put on board high-speed railway vehicles; it represents an accurate and reliable solution. The IMU board is tested via a dedicated Hardware in the Loop (HIL) test rig, which includes an industrial robot able to replicate the motion of the railway vehicle. Through the generated experimental outputs, the performance of the innovative localisation algorithm has been evaluated: the HIL test rig made it possible to test the proposed algorithm without expensive and time-consuming on-track tests, with encouraging results. In fact, the preliminary results show a significant improvement of the position and speed estimation performance compared to that obtained with the SCMT algorithms currently in use on the Italian railway network.
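
    The core of such a tachometer/IMU fusion can be conveyed by a one-dimensional Kalman-style filter: the IMU's longitudinal acceleration propagates the speed estimate and the tachometer corrects it. The Python sketch below is only the textbook skeleton of the idea, with made-up noise parameters, not the ETCS/SCMT-compliant algorithm developed in the paper:

      def fuse_speed(acc_meas, tacho_meas, dt=0.01, q=0.5, r=0.2):
          """Minimal 1-D Kalman-style fusion: IMU longitudinal acceleration
          propagates the speed estimate, tachometer speed corrects it.
          q and r are illustrative process/measurement noise levels."""
          v_est, p = 0.0, 1.0                 # state (speed) and its variance
          history = []
          for acc, v_meas in zip(acc_meas, tacho_meas):
              v_est += acc * dt               # predict with the IMU
              p += q * dt
              k = p / (p + r)                 # Kalman gain
              v_est += k * (v_meas - v_est)   # correct with the tachometer
              p *= (1.0 - k)
              history.append(v_est)
          return history

      acc = [0.1] * 100                        # constant acceleration
      tacho = [0.001 * i for i in range(100)]  # consistent tachometer speeds
      print(fuse_speed(acc, tacho)[-1])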

  6. Plasmid mapping computer program.

    PubMed Central

    Nolan, G P; Maina, C V; Szalay, A A

    1984-01-01

    Three new computer algorithms are described which rapidly order the restriction fragments of a plasmid DNA that has been cleaved with two restriction endonucleases in single and double digestions. Two of the algorithms are contained within a single computer program (called MPCIRC). The Rule-Oriented algorithm constructs all logical circular map solutions within sixty seconds (14 double-digestion fragments) when used in conjunction with the Permutation method. The program is written in Apple Pascal and runs on an Apple II Plus Microcomputer with 64K of memory. A third algorithm is described which rapidly maps double digests and uses the above two algorithms as adjuncts. Modifications of the algorithms for linear mapping are also presented. PMID:6320105
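
    The Permutation method's brute-force flavour can be sketched directly: place the double-digest fragments around a circle, assign each cut site to one enzyme or the other, and keep arrangements that reproduce both single digests. The Python toy below (exponential, tiny inputs only; names are ours, not MPCIRC's) illustrates the search space the Rule-Oriented algorithm prunes:

      from itertools import permutations, product

      def circular_fragments(sites, total):
          """Fragment lengths between consecutive cut sites on a circle."""
          s = sorted(sites)
          if len(s) == 1:
              return [total]          # one cut linearizes the whole plasmid
          return sorted((s[(i + 1) % len(s)] - s[i]) % total
                        for i in range(len(s)))

      def double_digest_maps(frags_a, frags_b, frags_ab):
          """Keep circular orderings of double-digest fragments whose cut
          sites, labelled by enzyme, reproduce both single digests."""
          total = sum(frags_ab)
          hits = set()
          for order in permutations(frags_ab):
              sites = [sum(order[:i]) for i in range(len(order))]
              for labels in product("AB", repeat=len(sites)):
                  a_sites = [s for s, l in zip(sites, labels) if l == "A"]
                  b_sites = [s for s, l in zip(sites, labels) if l == "B"]
                  if not a_sites or not b_sites:
                      continue
                  if (circular_fragments(a_sites, total) == sorted(frags_a)
                          and circular_fragments(b_sites, total) == sorted(frags_b)):
                      hits.add(order)
          return hits

      # plasmid of length 6: enzyme A cuts at {0, 3}, enzyme B at {1}
      print(double_digest_maps([3, 3], [6], [1, 2, 3]))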

  7. Initialization and Restart in Stochastic Local Search: Computing a Most Probable Explanation in Bayesian Networks

    NASA Technical Reports Server (NTRS)

    Mengshoel, Ole J.; Wilkins, David C.; Roth, Dan

    2010-01-01

    For hard computational problems, stochastic local search has proven to be a competitive approach to finding optimal or approximately optimal problem solutions. Two key research questions for stochastic local search algorithms are: Which algorithms are effective for initialization? When should the search process be restarted? In the present work we investigate these research questions in the context of approximate computation of most probable explanations (MPEs) in Bayesian networks (BNs). We introduce a novel approach, based on the Viterbi algorithm, to explanation initialization in BNs. While the Viterbi algorithm works on sequences and trees, our approach works on BNs with arbitrary topologies. We also give a novel formalization of stochastic local search, with focus on initialization and restart, using probability theory and mixture models. Experimentally, we apply our methods to the problem of MPE computation, using a stochastic local search algorithm known as Stochastic Greedy Search. By carefully optimizing both initialization and restart, we reduce the MPE search time for application BNs by several orders of magnitude compared to using uniform at random initialization without restart. On several BNs from applications, the performance of Stochastic Greedy Search is competitive with clique tree clustering, a state-of-the-art exact algorithm used for MPE computation in BNs.
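
    The interplay of initialization and restart is easiest to see in a generic stochastic local search skeleton. The sketch below is a simplified stand-in for Stochastic Greedy Search (the paper's Viterbi-based initializer would be passed as the `init` argument); it maximizes a score over binary assignments with noisy greedy flips and periodic restarts:

      import random

      def stochastic_greedy_search(score, n_vars, init,
                                   restarts=20, cutoff=200, noise=0.1):
          """Start from init(), greedily flip the best variable (or a
          random one with probability `noise`), restart after `cutoff`
          flips, and keep the best state seen across all restarts."""
          best, best_score = None, float("-inf")
          for _ in range(restarts):
              state = init()
              for _ in range(cutoff):
                  if random.random() < noise:
                      i = random.randrange(n_vars)
                  else:
                      i = max(range(n_vars), key=lambda j: score(
                          state[:j] + [1 - state[j]] + state[j + 1:]))
                  state[i] = 1 - state[i]
                  s = score(state)
                  if s > best_score:
                      best, best_score = list(state), s
          return best, best_score

      # toy objective: number of ones (optimum is all ones)
      random.seed(0)
      print(stochastic_greedy_search(
          lambda s: sum(s), 8,
          lambda: [random.randint(0, 1) for _ in range(8)]))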

  8. Modeling inter-signal arrival times for accurate detection of CAN bus signal injection attacks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moore, Michael Roy; Bridges, Robert A; Combs, Frank L

    Modern vehicles rely on hundreds of on-board electronic control units (ECUs) communicating over in-vehicle networks. As external interfaces to the car control networks (such as the on-board diagnostic (OBD) port, auxiliary media ports, etc.) become common, and with vehicle-to-vehicle / vehicle-to-infrastructure technology in the near future, the attack surface for vehicles grows, exposing control networks to potentially life-critical attacks. This paper addresses the need for securing the CAN bus by detecting anomalous traffic patterns via unusual refresh rates of certain commands. While previous works have identified signal frequency as an important feature for CAN bus intrusion detection, this paper provides the first such algorithm with experiments on five attack scenarios. Our data-driven anomaly detection algorithm requires only five seconds of training time (on normal data) and achieves true positive / false discovery rates of 0.9998/0.00298, respectively (micro-averaged across the five experimental tests).
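
    A frequency-based detector of this kind reduces to learning per-ID inter-arrival statistics on attack-free traffic and flagging frames that arrive off-schedule. The Python sketch below is a minimal illustration under our own simplifying assumptions (Gaussian gap model, fixed k-sigma threshold), not the authors' exact model:

      from statistics import mean, stdev

      def train_arrival_model(timestamps_by_id):
          """Learn per-arbitration-ID inter-arrival statistics from
          attack-free CAN traffic (cf. the five-second training window)."""
          model = {}
          for can_id, ts in timestamps_by_id.items():
              gaps = [b - a for a, b in zip(ts, ts[1:])]
              if len(gaps) >= 2:
                  model[can_id] = (mean(gaps), stdev(gaps))
          return model

      def is_anomalous(model, can_id, gap, k=4.0):
          """Flag a frame whose inter-arrival gap deviates more than
          k sigma from the learned refresh rate of its ID."""
          if can_id not in model:
              return False
          mu, sigma = model[can_id]
          return abs(gap - mu) > k * max(sigma, 1e-9)

      m = train_arrival_model({0x244: [0.00, 0.10, 0.20, 0.31, 0.40]})
      print(is_anomalous(m, 0x244, 0.01))   # injected frame -> True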

  9. Semiannual Report, April 1, 1989 through September 30, 1989 (Institute for Computer Applications in Science and Engineering)

    DTIC Science & Technology

    1990-02-01

    noise. Tobias B. Orloff Work began on developing a high quality rendering algorithm based on the radiosity method. The algorithm is similar to...previous progressive radiosity algorithms except for the following improvements: 1. At each iteration vertex radiosities are computed using a modified scan-line approach, thus eliminating the quadratic cost associated with a ray tracing computation of vertex radiosities. 2. At each iteration the scene is

  10. A Secure Alignment Algorithm for Mapping Short Reads to Human Genome.

    PubMed

    Zhao, Yongan; Wang, Xiaofeng; Tang, Haixu

    2018-05-09

    The elastic and inexpensive computing resources such as clouds have been recognized as a useful solution to analyzing massive human genomic data (e.g., acquired by using next-generation sequencers) in biomedical research. However, outsourcing human genome computation to public or commercial clouds has been hindered by privacy concerns: even a small number of human genome sequences contain sufficient information for identifying the donor of the genomic data. This issue cannot be directly addressed by existing security and cryptographic techniques (such as homomorphic encryption), because they are too heavyweight to carry out practical genome computation tasks on massive data. In this article, we present a secure algorithm to accomplish read mapping, one of the most basic tasks in human genomic data analysis, based on a hybrid cloud computing model. Compared with existing approaches, our algorithm delegates most computation to the public cloud, while performing only encryption and decryption on the private cloud, and thus makes maximum use of the computing resources of the public cloud. Furthermore, our algorithm reports results similar to those of nonsecure read mapping algorithms, including the alignment between reads and the reference genome, which can be directly used in downstream analysis such as the inference of genomic variations. We implemented the algorithm in C++ and Python on a hybrid cloud system, in which the public cloud uses an Apache Spark system.
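
    One way to picture the hybrid-cloud split is seed matching over keyed hashes: the private cloud masks k-mer seeds with a secret key, and the public cloud performs the cheap, massively parallel exact matching without ever seeing nucleotides. The sketch below is our illustration of that division of labour only; the paper's actual protocol is considerably more involved:

      import hmac, hashlib

      KEY = b"private-cloud-secret"       # never leaves the private cloud

      def mask_seeds(read, k=4):
          """Private cloud: replace each k-mer seed with a keyed hash so
          the public cloud can match seeds without seeing the sequence."""
          return [hmac.new(KEY, read[i:i + k].encode(),
                           hashlib.sha256).hexdigest()
                  for i in range(len(read) - k + 1)]

      def public_cloud_match(masked_read, masked_ref_index):
          """Public cloud: exact matching over masked seeds."""
          return [masked_ref_index[s] for s in masked_read
                  if s in masked_ref_index]

      ref = "ACGTACGGT"
      index = {h: i for i, h in enumerate(mask_seeds(ref))}
      print(public_cloud_match(mask_seeds("TACG"), index))   # -> [3]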

  11. Selection of bi-level image compression method for reduction of communication energy in wireless visual sensor networks

    NASA Astrophysics Data System (ADS)

    Khursheed, Khursheed; Imran, Muhammad; Ahmad, Naeem; O'Nils, Mattias

    2012-06-01

    A Wireless Visual Sensor Network (WVSN) is an emerging field which combines an image sensor, an on-board computation unit, a communication component and an energy source. Compared to the traditional wireless sensor network, which operates on one-dimensional data such as temperature or pressure values, a WVSN operates on two-dimensional data (images), which requires higher processing power and communication bandwidth. Normally, WVSNs are deployed in areas where installation of wired solutions is not feasible. The energy budget in these networks is limited to batteries, because of the wireless nature of the application. Due to the limited availability of energy, the processing at Visual Sensor Nodes (VSN) and communication from VSN to server should consume as little energy as possible. Transmitting raw images wirelessly consumes a great deal of energy and requires high communication bandwidth. Data compression methods reduce data efficiently and hence are effective in reducing communication cost in a WVSN. In this paper, we have compared the compression efficiency and complexity of six well known bi-level image compression methods. The focus is to determine the compression algorithms which can efficiently compress bi-level images and whose computational complexity is suitable for the computational platforms used in WVSNs. These results can be used as a road map for the selection of compression methods for different sets of constraints in a WVSN.
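
    Selecting a compression method for a VSN amounts to trading computational complexity against transmitted bytes, and hence radio energy. The Python sketch below frames that selection with a toy energy model and zlib as a stand-in codec (both are our assumptions; the paper compares six dedicated bi-level codecs on real platforms):

      import zlib

      def communication_energy(n_bytes, e_per_byte=1.2e-6):
          """Toy radio-energy model: joules per transmitted byte. The
          constant is hypothetical; real figures depend on the radio."""
          return n_bytes * e_per_byte

      def pick_codec(bilevel_bitmap):
          """Compare candidate codecs on a packed bi-level image and pick
          the one minimizing transmitted bytes. zlib is illustrative only;
          the paper evaluates dedicated bi-level coders."""
          candidates = {
              "raw": bilevel_bitmap,
              "zlib": zlib.compress(bilevel_bitmap, 9),
          }
          name, data = min(candidates.items(), key=lambda kv: len(kv[1]))
          return name, len(data), communication_energy(len(data))

      bitmap = bytes([0x00] * 500 + [0xFF] * 12 + [0x00] * 500)  # mostly background
      print(pick_codec(bitmap))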

  12. A Systematic Investigation of Computation Models for Predicting Adverse Drug Reactions (ADRs)

    PubMed Central

    Kuang, Qifan; Wang, MinQi; Li, Rong; Dong, YongCheng; Li, Yizhou; Li, Menglong

    2014-01-01

    Background Early and accurate identification of adverse drug reactions (ADRs) is critically important for drug development and clinical safety. Computer-aided prediction of ADRs has attracted increasing attention in recent years, and many computational models have been proposed. However, because of the lack of systematic analysis and comparison of the different computational models, there remain limitations in designing more effective algorithms and selecting more useful features. There is therefore an urgent need to review and analyze previous computational models to obtain general conclusions that can provide useful guidance for constructing more effective computational models to predict ADRs. Principal Findings In the current study, the main work is to compare and analyze the performance of existing computational methods for predicting ADRs, by implementing and evaluating additional algorithms that were earlier used for predicting drug targets. Our results indicated that topological and intrinsic features were complementary to an extent and that the Jaccard coefficient had an important and general effect on the prediction of drug-ADR associations. By comparing the structure of each algorithm, we found that the final formulas of these algorithms could all be converted to a linear model in form; based on this finding, we propose a new algorithm, called the general weighted profile method, which yielded the best overall performance among the algorithms investigated in this paper. Conclusion Several meaningful conclusions and useful findings regarding the prediction of ADRs are provided for selecting optimal features and algorithms. PMID:25180585
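
    The role of the Jaccard coefficient in a weighted-profile scheme can be made concrete: an unseen drug-ADR pair is scored by similarity-weighted votes of drugs known to cause that ADR. The sketch below is our illustrative reading of the linear-model form, not the paper's exact formula:

      def jaccard(a, b):
          """Jaccard coefficient between two sets of known ADRs."""
          return len(a & b) / len(a | b) if a | b else 0.0

      def weighted_profile_score(query_drug, drug_adrs, target_adr):
          """Score a drug-ADR pair by similarity-weighted votes of the
          other drugs that are known to cause the target ADR."""
          others = [d for d in drug_adrs if d != query_drug]
          num = sum(jaccard(drug_adrs[query_drug], drug_adrs[d])
                    for d in others if target_adr in drug_adrs[d])
          den = sum(jaccard(drug_adrs[query_drug], drug_adrs[d])
                    for d in others)
          return num / den if den else 0.0

      profiles = {"drugA": {"nausea", "rash"},
                  "drugB": {"nausea", "headache"},
                  "drugC": {"rash", "dizziness"}}
      print(weighted_profile_score("drugA", profiles, "headache"))  # 0.5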

  13. Parallel fuzzy connected image segmentation on GPU

    PubMed Central

    Zhuge, Ying; Cao, Yong; Udupa, Jayaram K.; Miller, Robert W.

    2011-01-01

    Purpose: Image segmentation techniques using fuzzy connectedness (FC) principles have shown their effectiveness in segmenting a variety of objects in several large applications. However, one challenge in these algorithms has been their excessive computational requirements when processing large image datasets. Nowadays, commodity graphics hardware provides a highly parallel computing environment. In this paper, the authors present a parallel fuzzy connected image segmentation algorithm implementation on NVIDIA’s Compute Unified Device Architecture (CUDA) platform for segmenting medical image data sets. Methods: In the FC algorithm, there are two major computational tasks: (i) computing the fuzzy affinity relations and (ii) computing the fuzzy connectedness relations. These two tasks are implemented as CUDA kernels and executed on the GPU. A dramatic improvement in speed for both tasks is achieved as a result. Results: Our experiments based on three data sets of small, medium, and large data size demonstrate the efficiency of the parallel algorithm, which achieves speed-up factors of 24.4x, 18.1x, and 10.3x, respectively, for the three data sets on the NVIDIA Tesla C1060 over the implementation of the algorithm on CPU, and takes 0.25, 0.72, and 15.04 s, respectively, for the three data sets. Conclusions: The authors developed a parallel algorithm of the widely used fuzzy connected image segmentation method on NVIDIA GPUs, which are far more cost- and speed-effective than both clusters of workstations and multiprocessing systems. A near-interactive speed of segmentation has been achieved, even for the large data set. PMID:21859037
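
    The second of the two computational tasks, propagating fuzzy connectedness from a seed, is the expensive one the GPU version parallelizes. A serial Dijkstra-style reference in Python (with a toy intensity-difference affinity of our own choosing, not the authors' definition) makes the weakest-link path semantics explicit:

      import heapq
      import numpy as np

      def fuzzy_connectedness(image, seed):
          """Connectedness of each pixel to the seed: the best, over all
          paths, of the minimum affinity along the path, propagated
          Dijkstra-style over 4-neighbours."""
          h, w = image.shape
          conn = np.zeros((h, w))
          conn[seed] = 1.0
          heap = [(-1.0, seed)]
          while heap:
              neg, (y, x) = heapq.heappop(heap)
              if -neg < conn[y, x]:
                  continue                      # stale queue entry
              for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                  ny, nx = y + dy, x + dx
                  if 0 <= ny < h and 0 <= nx < w:
                      # toy affinity: high for similar intensities
                      affinity = 1.0 - abs(float(image[y, x]) - image[ny, nx]) / 255.0
                      cand = min(-neg, affinity)  # path strength = weakest link
                      if cand > conn[ny, nx]:
                          conn[ny, nx] = cand
                          heapq.heappush(heap, (-cand, (ny, nx)))
          return conn

      img = np.array([[10, 12, 200], [11, 13, 210], [9, 14, 205]], dtype=float)
      print(fuzzy_connectedness(img, (0, 0)).round(2))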

  14. Parallel fuzzy connected image segmentation on GPU.

    PubMed

    Zhuge, Ying; Cao, Yong; Udupa, Jayaram K; Miller, Robert W

    2011-07-01

    Image segmentation techniques using fuzzy connectedness (FC) principles have shown their effectiveness in segmenting a variety of objects in several large applications. However, one challenge in these algorithms has been their excessive computational requirements when processing large image datasets. Nowadays, commodity graphics hardware provides a highly parallel computing environment. In this paper, the authors present a parallel fuzzy connected image segmentation algorithm implementation on NVIDIA's Compute Unified Device Architecture (CUDA) platform for segmenting medical image data sets. In the FC algorithm, there are two major computational tasks: (i) computing the fuzzy affinity relations and (ii) computing the fuzzy connectedness relations. These two tasks are implemented as CUDA kernels and executed on the GPU. A dramatic improvement in speed for both tasks is achieved as a result. Our experiments based on three data sets of small, medium, and large data size demonstrate the efficiency of the parallel algorithm, which achieves speed-up factors of 24.4x, 18.1x, and 10.3x, respectively, for the three data sets on the NVIDIA Tesla C1060 over the implementation of the algorithm on CPU, and takes 0.25, 0.72, and 15.04 s, respectively, for the three data sets. The authors developed a parallel algorithm of the widely used fuzzy connected image segmentation method on NVIDIA GPUs, which are far more cost- and speed-effective than both clusters of workstations and multiprocessing systems. A near-interactive speed of segmentation has been achieved, even for the large data set.

  15. Concurrent extensions to the FORTRAN language for parallel programming of computational fluid dynamics algorithms

    NASA Technical Reports Server (NTRS)

    Weeks, Cindy Lou

    1986-01-01

    Experiments were conducted at NASA Ames Research Center to define multi-tasking software requirements for multiple-instruction, multiple-data stream (MIMD) computer architectures. The focus was on specifying solutions for algorithms in the field of computational fluid dynamics (CFD). The program objectives were to allow researchers to produce usable parallel application software as soon as possible after acquiring MIMD computer equipment, to provide researchers with an easy-to-learn and easy-to-use parallel software language which could be implemented on several different MIMD machines, and to enable researchers to list preferred design specifications for future MIMD computer architectures. Analysis of CFD algorithms indicated that extensions of an existing programming language, adaptable to new computer architectures, provided the best solution to meeting program objectives. The CoFORTRAN Language was written in response to these objectives and to provide researchers a means to experiment with parallel software solutions to CFD algorithms on machines with parallel architectures.

  16. A High-Performance Genetic Algorithm: Using Traveling Salesman Problem as a Case

    PubMed Central

    Tsai, Chun-Wei; Tseng, Shih-Pang; Yang, Chu-Sing

    2014-01-01

    This paper presents a simple but efficient algorithm for reducing the computation time of genetic algorithm (GA) and its variants. The proposed algorithm is motivated by the observation that genes common to all the individuals of a GA have a high probability of surviving the evolution and ending up being part of the final solution; as such, they can be saved away to eliminate the redundant computations at the later generations of a GA. To evaluate the performance of the proposed algorithm, we use it not only to solve the traveling salesman problem but also to provide an extensive analysis on the impact it may have on the quality of the end result. Our experimental results indicate that the proposed algorithm can significantly reduce the computation time of GA and GA-based algorithms while limiting the degradation of the quality of the end result to a very small percentage compared to traditional GA. PMID:24892038
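
    The gene-caching observation translates into a simple mechanism: detect positions on which the whole population agrees and exclude them from further variation. The binary-genome sketch below conveys the idea (the paper works on TSP tours, and its operators differ):

      import random

      def common_genes(population):
          """Positions where every individual agrees: the genes observed
          to have a high probability of surviving into the final solution."""
          return {i: population[0][i] for i in range(len(population[0]))
                  if all(ind[i] == population[0][i] for ind in population)}

      def mutate(ind, frozen, rate=0.05):
          """Mutate only non-frozen positions, skipping the redundant work
          on genes shared by the entire population."""
          return [g if i in frozen or random.random() > rate else 1 - g
                  for i, g in enumerate(ind)]

      random.seed(1)
      pop = [[1, 0, 1, 1], [1, 1, 1, 0], [1, 0, 1, 1]]
      frozen = common_genes(pop)          # positions 0 and 2 are common
      print(frozen, mutate(pop[1], frozen))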

  17. A high-performance genetic algorithm: using traveling salesman problem as a case.

    PubMed

    Tsai, Chun-Wei; Tseng, Shih-Pang; Chiang, Ming-Chao; Yang, Chu-Sing; Hong, Tzung-Pei

    2014-01-01

    This paper presents a simple but efficient algorithm for reducing the computation time of genetic algorithm (GA) and its variants. The proposed algorithm is motivated by the observation that genes common to all the individuals of a GA have a high probability of surviving the evolution and ending up being part of the final solution; as such, they can be saved away to eliminate the redundant computations at the later generations of a GA. To evaluate the performance of the proposed algorithm, we use it not only to solve the traveling salesman problem but also to provide an extensive analysis on the impact it may have on the quality of the end result. Our experimental results indicate that the proposed algorithm can significantly reduce the computation time of GA and GA-based algorithms while limiting the degradation of the quality of the end result to a very small percentage compared to traditional GA.

  18. Low-complexity R-peak detection in ECG signals: a preliminary step towards ambulatory fetal monitoring.

    PubMed

    Rooijakkers, Michiel; Rabotti, Chiara; Bennebroek, Martijn; van Meerbergen, Jef; Mischi, Massimo

    2011-01-01

    Non-invasive fetal health monitoring during pregnancy has become increasingly important. Recent advances in signal processing technology have enabled fetal monitoring during pregnancy using abdominal ECG recordings. Ubiquitous ambulatory monitoring for continuous fetal health measurement is, however, still infeasible due to the computational complexity of noise-robust solutions. In this paper, an ECG R-peak detection algorithm for ambulatory use is proposed as part of a fetal ECG detection algorithm. The proposed algorithm is optimized to reduce computational complexity while increasing the R-peak detection quality compared to existing R-peak detection schemes. Validation of the algorithm is performed on two manually annotated datasets, the MIT/BIH Arrhythmia database and an in-house abdominal database. Both R-peak detection quality and computational complexity are compared to state-of-the-art algorithms as described in the literature. With detection error rates of 0.22% and 0.12% on the MIT/BIH Arrhythmia and in-house databases, respectively, the quality of the proposed algorithm is comparable to the best state-of-the-art algorithms, at a reduced computational complexity.
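
    A low-complexity R-peak detector of the kind discussed typically combines a cheap slope-emphasizing transform with an adaptive threshold and a refractory period. The Python sketch below is an illustrative baseline along those lines, not the algorithm validated in the paper:

      import numpy as np

      def detect_r_peaks(ecg, fs=250, refractory=0.2):
          """Emphasize slopes with a squared first difference, then pick
          samples above a slowly adapting threshold, enforcing a
          refractory period between detections."""
          feature = np.diff(ecg) ** 2
          threshold = 0.5 * feature.max()
          min_gap = int(refractory * fs)
          peaks, last = [], -min_gap
          for i, v in enumerate(feature):
              if v > threshold and i - last >= min_gap:
                  peaks.append(i)
                  last = i
                  threshold = 0.5 * threshold + 0.25 * v   # slow adaptation
          return peaks

      t = np.arange(0, 2.0, 1 / 250)
      ecg = np.where(np.mod(t, 0.8) < 0.02, 1.0, 0.0)      # synthetic spikes
      print(detect_r_peaks(ecg))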

  19. Parametric diagnosis of the adaptive gas path in the automatic control system of the aircraft engine

    NASA Astrophysics Data System (ADS)

    Kuznetsova, T. A.

    2017-01-01

    The paper presents an adaptive multimode mathematical model of a gas-turbine aircraft engine (GTE) embedded in the automatic control system (ACS). The mathematical model is based on the throttle performances and is characterized by highly accurate identification of engine parameters in stationary and dynamic modes. The proposed on-board engine model is a state-space linearized low-level simulation. The engine health is identified through the influence coefficient matrix. The influence coefficients are determined by the high-level GTE mathematical model based on measurements of gas-dynamic parameters. In the automatic control algorithm, the sum of squares of the deviations between the parameters of the mathematical model and the real GTE is minimized. The proposed mathematical model is effectively used for detecting gas path defects in on-line GTE health monitoring. The accuracy of the on-board mathematical model embedded in the ACS determines the quality of adaptive control and the reliability of the engine. To improve the accuracy and robustness of the identification, the Monte Carlo numerical method was used. A parametric diagnostic algorithm based on the LPτ sequence was developed and tested. Analysis of the results suggests that the developed algorithms achieve higher identification accuracy and reliability than similar models used in practice.
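
    Minimizing the sum of squared deviations between the on-board model and the real engine is, in its simplest linear reading, an ordinary least-squares fit of the influence-coefficient matrix. The numpy sketch below shows that toy version (synthetic data and our own notation; the paper's LPτ-sequence-based procedure is more elaborate):

      import numpy as np

      def identify_influence_matrix(x_model, y_engine):
          """Least-squares fit of the matrix H relating model states to
          measured gas-path parameters, minimizing the sum of squared
          deviations between model and engine."""
          # solve min_H || X H^T - Y ||^2 column by column via lstsq
          h_t, *_ = np.linalg.lstsq(x_model, y_engine, rcond=None)
          return h_t.T

      rng = np.random.default_rng(0)
      X = rng.normal(size=(100, 3))              # model state trajectories
      H_true = np.array([[1.0, 0.5, 0.0], [0.2, -1.0, 0.3]])
      Y = X @ H_true.T + 0.01 * rng.normal(size=(100, 2))  # noisy measurements
      print(identify_influence_matrix(X, Y).round(2))      # recovers H_true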

  20. Pre-Hardware Optimization of Spacecraft Image Processing Software Algorithms and Hardware Implementation

    NASA Technical Reports Server (NTRS)

    Kizhner, Semion; Flatley, Thomas P.; Hestnes, Phyllis; Jentoft-Nilsen, Marit; Petrick, David J.; Day, John H. (Technical Monitor)

    2001-01-01

    Spacecraft telemetry rates have steadily increased over the last decade, presenting a problem for real-time processing by ground facilities. This paper proposes a solution to a related problem for the Geostationary Operational Environmental Spacecraft (GOES-8) image processing application. Although large supercomputer facilities are the obvious heritage solution, they are very costly, making it imperative to seek a feasible alternative engineering solution at a fraction of the cost. The solution is based on a Personal Computer (PC) platform and a synergy of optimized software algorithms and re-configurable computing (RC) hardware technologies, such as Field Programmable Gate Arrays (FPGA) and Digital Signal Processing (DSP). It has been shown in [1] and [2] that this configuration can provide superior, inexpensive performance for a chosen application on the ground station or on board a spacecraft. However, since this technology is still maturing, intensive pre-hardware steps are necessary to achieve the benefits of hardware implementation. This paper describes these steps for the GOES-8 application, a software project developed using Interactive Data Language (IDL) (Trademark of Research Systems, Inc.) on a Workstation/UNIX platform. The solution involves converting the application to a PC/Windows/RC platform, selected mainly for the availability of low-cost, adaptable high-speed RC hardware. In order for the hybrid system to run, the IDL software was modified to account for platform differences. It was instructive to examine the gains and losses in performance on the new platform, as well as unexpected observations, before implementing hardware. After substantial pre-hardware optimization steps, hardware implementation of the bottleneck code in the PC environment became both evidently necessary and solvable, beginning with the methodology described in [1] and [2] and implementing a novel methodology for this specific application [6]. The PC-RC interface bandwidth problem for the class of applications with moderate input-output data rates but large intermediate multi-thread data streams has been addressed and mitigated. This opens a new class of satellite image processing applications to bottleneck-problem solution using RC technologies. The level of abstraction of a science algorithm necessary for RC hardware implementation is also described. Selected Matlab functions already implemented in hardware were investigated for their direct applicability to the GOES-8 application, with the intent to create a library of Matlab and IDL RC functions for ongoing work. A complete class of spacecraft image processing applications using embedded re-configurable computing technology to meet real-time requirements, including performance results and comparison with the existing system, is described in this paper.
