Sample records for floating point arithmetic

  1. Algorithm XXX: functions to support the IEEE standard for binary floating-point arithmetic.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cody, W. J.; Mathematics and Computer Science

    1993-12-01

    This paper describes C programs for the support functions copysign(x,y), logb(x), scalb(x,n), nextafter(x,y), finite(x), and isnan(x) recommended in the Appendix to the IEEE Standard for Binary Floating-Point Arithmetic. In the case of logb, the modified definition given in the later IEEE Standard for Radix-Independent Floating-Point Arithmetic is followed. These programs should run without modification on most systems conforming to the binary standard.
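
    The six support functions above map directly onto Python's math module (a rough sketch, not Cody's C programs; logb and scalb are reconstructed here from frexp and ldexp):

```python
import math

def logb(x: float) -> float:
    """IEEE logb: the exponent e with 1 <= |x| / 2**e < 2 (normal x)."""
    _, e = math.frexp(x)     # x = m * 2**e with 0.5 <= |m| < 1
    return float(e - 1)

def scalb(x: float, n: int) -> float:
    """IEEE scalb(x, n) = x * 2**n, without computing 2**n explicitly."""
    return math.ldexp(x, n)

# copysign, nextafter, finite and isnan ship with the math module itself:
print(math.copysign(3.0, -0.0))   # -3.0
print(logb(10.0))                 # 3.0, since 2**3 <= 10 < 2**4
print(scalb(1.5, 4))              # 24.0
print(math.nextafter(1.0, 2.0))   # the double just above 1.0
print(math.isfinite(1e308), math.isnan(float("nan")))   # True True
```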

  2. Floating point arithmetic in future supercomputers

    NASA Technical Reports Server (NTRS)

    Bailey, David H.; Barton, John T.; Simon, Horst D.; Fouts, Martin J.

    1989-01-01

    Considerations in the floating-point design of a supercomputer are discussed. Particular attention is given to word size, hardware support for extended precision, format, and accuracy characteristics. These issues are discussed from the perspective of the Numerical Aerodynamic Simulation Systems Division at NASA Ames. The features believed to be most important for a future supercomputer floating-point design include: (1) a 64-bit IEEE floating-point format with 11 exponent bits, 52 mantissa bits, and one sign bit and (2) hardware support for reasonably fast double-precision arithmetic.
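
    The 1 sign / 11 exponent / 52 mantissa-bit layout recommended above can be inspected by reinterpreting a double's bits (a small illustration, not part of the NASA report):

```python
import struct

def fields(x: float):
    """Split a 64-bit double into its IEEE sign, exponent and fraction fields."""
    (bits,) = struct.unpack(">Q", struct.pack(">d", x))
    sign = bits >> 63                   # 1 sign bit
    exponent = (bits >> 52) & 0x7FF     # 11 exponent bits (bias 1023)
    fraction = bits & ((1 << 52) - 1)   # 52 mantissa (fraction) bits
    return sign, exponent, fraction

print(fields(1.0))    # (0, 1023, 0): exponent 0 + bias, implicit leading 1
print(fields(-2.0))   # (1, 1024, 0)
```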

  3. Paranoia.Ada: A diagnostic program to evaluate Ada floating-point arithmetic

    NASA Technical Reports Server (NTRS)

    Hjermstad, Chris

    1986-01-01

    Many essential software functions in the mission critical computer resource application domain depend on floating point arithmetic. Numerically intensive functions associated with the Space Station project, such as emphemeris generation or the implementation of Kalman filters, are likely to employ the floating point facilities of Ada. Paranoia.Ada appears to be a valuabe program to insure that Ada environments and their underlying hardware exhibit the precision and correctness required to satisfy mission computational requirements. As a diagnostic tool, Paranoia.Ada reveals many essential characteristics of an Ada floating point implementation. Equipped with such knowledge, programmers need not tremble before the complex task of floating point computation.

  4. Defining the IEEE-854 floating-point standard in PVS

    NASA Technical Reports Server (NTRS)

    Miner, Paul S.

    1995-01-01

    A significant portion of the ANSI/IEEE-854 Standard for Radix-Independent Floating-Point Arithmetic is defined in PVS (Prototype Verification System). Since IEEE-854 is a generalization of the ANSI/IEEE-754 Standard for Binary Floating-Point Arithmetic, the definition of IEEE-854 in PVS also formally defines much of IEEE-754. This collection of PVS theories provides a basis for machine checked verification of floating-point systems. This formal definition illustrates that formal specification techniques are sufficiently advanced that is is reasonable to consider their use in the development of future standards.

  5. Instabilities caused by floating-point arithmetic quantization.

    NASA Technical Reports Server (NTRS)

    Phillips, C. L.

    1972-01-01

    It is shown that an otherwise stable digital control system can be made unstable by signal quantization when the controller operates on floating-point arithmetic. Sufficient conditions for instability are determined, and an example of loss of stability with only a single quantizer in operation is treated.
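
    A toy illustration (not the paper's analysis) of how a floating-point quantizer in the loop shifts a closed-loop system's behavior: a stable first-order loop run through a 4-bit-mantissa quantizer settles well short of its exact fixed point of 1.0:

```python
import math

def quantize(x: float, bits: int) -> float:
    """Round x to the nearest float with `bits` mantissa bits."""
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)          # x = m * 2**e with 0.5 <= |m| < 1
    scale = 1 << bits
    return math.ldexp(round(m * scale) / scale, e)

# Stable loop y[k] = a*y[k-1] + u: the exact fixed point is u/(1-a) = 1.0,
# but with the coarse quantizer in the loop the output stalls early.
a, u, y = 0.9, 0.1, 0.0
for _ in range(100):
    y = quantize(a * y + u, 4)
print(y)   # 0.75
```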

  6. Floating-point geometry: toward guaranteed geometric computations with approximate arithmetics

    NASA Astrophysics Data System (ADS)

    Bajard, Jean-Claude; Langlois, Philippe; Michelucci, Dominique; Morin, Géraldine; Revol, Nathalie

    2008-08-01

    Geometric computations can fail because of inconsistencies due to floating-point inaccuracy. For instance, the computed intersection point between two curves does not lie on the curves: this is unavoidable when the intersection point coordinates are non-rational, and thus not representable using floating-point arithmetic. A popular heuristic approach tests equalities and nullities up to a tolerance ɛ. But transitivity of equality is lost: we can have A approx B and B approx C, but A not approx C (where A approx B means ||A - B|| < ɛ for A, B two floating-point values). Interval arithmetic is another, self-validated, alternative; the difficulty is to limit the growth of the width of intervals during computations. Unfortunately, interval arithmetic cannot decide equality or nullity, even in cases where it is decidable by other means. A new approach, developed in this paper, consists in modifying the geometric problems and algorithms to account for the undecidability of the equality test and for unavoidable inaccuracy. In particular, all curves come with a non-zero thickness, so two curves (generically) cut in a region with non-zero area, an inner and outer representation of which is computable. This last approach no longer assumes that an equality or nullity test is available. The question which arises is: which geometric problems can still be solved with this last approach, and which cannot? This paper begins with the description of some cases where every known arithmetic fails in practice. Then, for each arithmetic, some properties of the problems it can solve are given. We end this work by proposing the bases of a new approach which aims to fulfill the requirements of geometric computation.
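
    The loss of transitivity described above is easy to reproduce (a minimal sketch; the tolerance 1e-6 and the values A, B, C are arbitrary illustrative choices, not from the paper):

```python
EPS = 1e-6   # an arbitrary tolerance for illustration

def approx(a: float, b: float) -> bool:
    """Equality up to the tolerance EPS, as in the heuristic approach."""
    return abs(a - b) < EPS

A, B, C = 0.0, 0.6e-6, 1.2e-6
print(approx(A, B), approx(B, C), approx(A, C))   # True True False
```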

  7. Rational Arithmetic in Floating-Point.

    DTIC Science & Technology

    1986-09-01

    AD-A175 190: Rational Arithmetic in Floating-Point, W. Kahan, Center for Pure and Applied Mathematics, University of California, Berkeley, Report PAM-343, September 1986. Only a fragment of the OCR'd abstract survives: "... delicate balance between, on the one hand, the simplicity and aesthetic appeal of the specifications and, on the other hand, the complexity and ..."

  8. Hardware math for the 6502 microprocessor

    NASA Technical Reports Server (NTRS)

    Kissel, R.; Currie, J.

    1985-01-01

    A floating-point arithmetic unit is described which is being used in the Ground Facility of Large Space Structures Control Verification (GF/LSSCV). The experiment uses two complete inertial measurement units and a set of three gimbal torquers in a closed loop to control the structural vibrations in a flexible test article (beam). A 6502 (8-bit) microprocessor controls four AMD 9511A floating-point arithmetic units to do all the computation in 20 milliseconds.

  9. High-precision arithmetic in mathematical physics

    DOE PAGES

    Bailey, David H.; Borwein, Jonathan M.

    2015-05-12

    For many scientific calculations, particularly those involving empirical data, IEEE 32-bit floating-point arithmetic produces results of sufficient accuracy, while for other applications IEEE 64-bit floating-point is more appropriate. But for some very demanding applications, even higher levels of precision are often required. This article discusses the challenge of high-precision computation in the context of mathematical physics, and highlights what facilities are required to support future computation in light of emerging developments in computer architecture.
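
    A minimal illustration of the precision levels discussed above, using Python's stdlib decimal module to stand in for a high-precision package (the article itself surveys dedicated high-precision software):

```python
from decimal import Decimal, getcontext

# Binary double precision cannot represent 0.1 exactly, so the sum drifts:
print(0.1 + 0.1 + 0.1 == 0.3)        # False
# 50-digit decimal arithmetic gives the expected result:
getcontext().prec = 50
s = Decimal("0.1") + Decimal("0.1") + Decimal("0.1")
print(s == Decimal("0.3"))           # True
```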

  10. Formal verification of mathematical software

    NASA Technical Reports Server (NTRS)

    Sutherland, D.

    1984-01-01

    Methods are investigated for formally specifying and verifying the correctness of mathematical software (software which uses floating point numbers and arithmetic). Previous work in the field was reviewed. A new model of floating point arithmetic called the asymptotic paradigm was developed and formalized. Two different conceptual approaches to program verification, the classical Verification Condition approach and the more recently developed Programming Logic approach, were adapted to use the asymptotic paradigm. These approaches were then used to verify several programs; the programs chosen were simplified versions of actual mathematical software.

  11. Constrained Chebyshev approximations to some elementary functions suitable for evaluation with floating point arithmetic

    NASA Technical Reports Server (NTRS)

    Manos, P.; Turner, L. R.

    1972-01-01

    Approximations which can be evaluated with precision using floating-point arithmetic are presented. The particular set of approximations thus far developed are for the function TAN and the functions of USASI FORTRAN excepting SQRT and EXPONENTIATION. These approximations are, furthermore, specialized to particular forms which are especially suited to a computer with a small memory, in that all of the approximations can share one general purpose subroutine for the evaluation of a polynomial in the square of the working argument.
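
    The shared-subroutine structure described above, one Horner routine evaluating a polynomial in the square of the working argument, can be sketched as follows (the coefficients below are truncated Taylor coefficients for sin, purely illustrative, not the paper's constrained Chebyshev fits):

```python
import math

def poly(c, t):
    """Shared general-purpose subroutine: Horner evaluation of sum c[i]*t**i."""
    r = 0.0
    for coeff in reversed(c):
        r = r * t + coeff
    return r

def odd_approx(c, x):
    """Odd-function form x * p(x*x), usable for TAN, SIN and similar."""
    return x * poly(c, x * x)

# Illustrative truncated Taylor series for sin: x - x**3/6 + x**5/120.
val = odd_approx([1.0, -1.0 / 6.0, 1.0 / 120.0], 0.1)
print(abs(val - math.sin(0.1)) < 1e-9)   # True
```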

  12. Basic mathematical function libraries for scientific computation

    NASA Technical Reports Server (NTRS)

    Galant, David C.

    1989-01-01

    Ada packages implementing selected mathematical functions for the support of scientific and engineering applications were written. The packages provide the Ada programmer with the mathematical function support found in the languages Pascal and FORTRAN as well as an extended precision arithmetic and a complete complex arithmetic. The algorithms used are fully described and analyzed. Implementation assumes that the Ada type FLOAT objects fully conform to the IEEE 754-1985 standard for single binary floating-point arithmetic, and that INTEGER objects are 32-bit entities. Codes for the Ada packages are included as appendixes.

  13. Paranoia.Ada: Sample output reports

    NASA Technical Reports Server (NTRS)

    1986-01-01

    Paranoia.Ada is a program to diagnose floating point arithmetic in the context of the Ada programming language. The program evaluates the quality of a floating point arithmetic implementation with respect to the proposed IEEE Standards P754 and P854. Paranoia.Ada is derived from the original BASIC programming language version of Paranoia. It replicates in Ada the test algorithms originally implemented in BASIC and adheres to the evaluation criteria established by W. M. Kahan. Paranoia.Ada incorporates a major structural redesign and employs applicable Ada architectural and stylistic features.

  14. Verification of IEEE Compliant Subtractive Division Algorithms

    NASA Technical Reports Server (NTRS)

    Miner, Paul S.; Leathrum, James F., Jr.

    1996-01-01

    A parameterized definition of subtractive floating point division algorithms is presented and verified using PVS. The general algorithm is proven to satisfy a formal definition of an IEEE standard for floating point arithmetic. The utility of the general specification is illustrated using a number of different instances of the general algorithm.
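
    A toy integer model of the subtractive (digit-recurrence) division family verified in the paper, here plain restoring division producing one quotient bit per step (a sketch of the algorithm class, not the verified parameterized specification):

```python
def restoring_divide(a: int, b: int, bits: int):
    """Restoring subtractive division of a/b (0 <= a < b), producing `bits`
    quotient bits by trial subtraction, one bit per iteration."""
    q, r = 0, a
    for _ in range(bits):
        r <<= 1                  # shift the partial remainder left
        q <<= 1
        if r >= b:               # trial subtraction succeeds: quotient bit is 1
            r -= b
            q |= 1
    return q, r                  # invariant: a * 2**bits == q * b + r

q, r = restoring_divide(1, 3, 8)
print(q, r)   # 85 1: 1/3 rounds down to 85/256, remainder 1 unit in the last place
```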

  15. Floating-point system quantization errors in digital control systems

    NASA Technical Reports Server (NTRS)

    Phillips, C. L.; Vallely, D. P.

    1978-01-01

    This paper considers digital controllers (filters) operating in floating-point arithmetic in either open-loop or closed-loop systems. A quantization error analysis technique is developed, and is implemented by a digital computer program that is based on a digital simulation of the system. The program can be integrated into existing digital simulations of a system.

  16. Gauss Elimination: Workhorse of Linear Algebra.

    DTIC Science & Technology

    1995-08-05

    linear algebra computation for solving systems, computing determinants and determining the rank of matrix. All of these are discussed in varying contexts. These include different arithmetic or algebraic setting such as integer arithmetic or polynomial rings as well as conventional real (floating-point) arithmetic. These have effects on both accuracy and complexity analyses of the algorithm. These, too, are covered here. The impact of modern parallel computer architecture on GE is also

  17. Software Aspects of IEEE Floating-Point Computations for Numerical Applications in High Energy Physics

    ScienceCinema

    Arnold, Jeffrey

    2018-05-14

    Floating-point computations are at the heart of much of the computing done in high energy physics. The correctness, speed and accuracy of these computations are of paramount importance. The lack of any of these characteristics can mean the difference between new, exciting physics and an embarrassing correction. This talk will examine practical aspects of IEEE 754-2008 floating-point arithmetic as encountered in HEP applications. After describing the basic features of IEEE floating-point arithmetic, the presentation will cover: common hardware implementations (SSE, x87); techniques for improving the accuracy of summation, multiplication and data interchange; compiler options for gcc and icc affecting floating-point operations; and hazards to be avoided. About the speaker: Jeffrey M Arnold is a Senior Software Engineer in the Intel Compiler and Languages group at Intel Corporation. He has been part of the Digital->Compaq->Intel compiler organization for nearly 20 years; part of that time, he worked on both low- and high-level math libraries. Prior to that, he was in the VMS Engineering organization at Digital Equipment Corporation. In the late 1980s, Jeff spent 2½ years at CERN as part of the CERN/Digital Joint Project. In 2008, he returned to CERN to spend 10 weeks working with CERN/openlab. Since that time, he has returned to CERN multiple times to teach at openlab workshops and consult with various LHC experiments. Jeff received his Ph.D. in physics from Case Western Reserve University.
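
    One classic summation-accuracy technique in this area is compensated (Kahan) summation; a minimal sketch (the data vector below is an illustrative choice, not from the talk):

```python
def kahan_sum(values):
    """Compensated (Kahan) summation: carries the low-order bits that a
    naive running sum would discard."""
    s = 0.0
    c = 0.0                  # running compensation for lost low-order bits
    for v in values:
        y = v - c
        t = s + y
        c = (t - s) - y      # (t - s) recovers the part of y actually added
        s = t
    return s

data = [1.0, 1e-16, 1e-16, 1e-16, 1e-16]   # each tiny term is below half an ulp of 1.0
print(sum(data) == 1.0)          # True: naive addition drops every 1e-16 term
print(kahan_sum(data) > 1.0)     # True: the compensation retains them
```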

  18. Interpretation of IEEE-854 floating-point standard and definition in the HOL system

    NASA Technical Reports Server (NTRS)

    Carreno, Victor A.

    1995-01-01

    The ANSI/IEEE Standard 854-1987 for floating-point arithmetic is interpreted by converting the lexical descriptions in the standard into mathematical conditional descriptions organized in tables. The standard is represented in higher-order logic within the framework of the HOL (Higher Order Logic) system. The paper is divided in two parts with the first part the interpretation and the second part the description in HOL.

  19. Exploring the Feasibility of a DNA Computer: Design of an ALU Using Sticker-Based DNA Model.

    PubMed

    Sarkar, Mayukh; Ghosal, Prasun; Mohanty, Saraju P

    2017-09-01

    Since its inception, DNA computing has advanced to offer an extremely powerful, energy-efficient emerging technology for solving hard computational problems with its inherent massive parallelism and extremely high data density. It would be much more powerful and general-purpose when combined with the well-known algorithmic solutions that exist for conventional computing architectures, via a suitable ALU. Thus, a specifically designed DNA Arithmetic and Logic Unit (ALU) that can address operations suitable for both domains can bridge the gap between the two. An ALU must be able to perform all possible logic operations (NOT, OR, AND, XOR, NOR, NAND, XNOR, comparison, shifts, etc.) as well as integer and floating point arithmetic operations (addition, subtraction, multiplication, and division). In this paper, the design of an ALU using a sticker-based DNA model is proposed, with an experimental feasibility analysis. The novelties of this paper are manifold. First, the integer arithmetic operations use two's-complement arithmetic, and the floating point operations follow the IEEE 754 floating point format, closely resembling a conventional ALU. Also, the output of each operation can be reused for any subsequent operation, so any algorithm or program logic that users can think of can be implemented directly on the DNA computer without modification. Second, once the basic operations of the sticker model are automated, the implementations proposed in this paper become highly suitable for the design of a fully automated ALU. Third, the proposed approaches are easy to implement. Finally, they can work on sufficiently large binary numbers.
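
    The two's-complement convention such an ALU must reproduce can be modeled in a few lines (an illustration of the arithmetic convention only, not of the sticker-based DNA implementation; the 8-bit width is an arbitrary choice):

```python
MASK = 0xFF   # 8-bit datapath, chosen for illustration

def twos_add(a: int, b: int) -> int:
    """8-bit two's-complement addition: the carry out of bit 7 is discarded."""
    return (a + b) & MASK

def to_signed(w: int) -> int:
    """Reinterpret an 8-bit word as a signed two's-complement value."""
    return w - 0x100 if w & 0x80 else w

print(to_signed(twos_add(0x05, 0xFB)))   # 0, since 0xFB encodes -5
print(to_signed(twos_add(0x7F, 0x01)))   # -128: signed overflow wraps around
```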

  20. On the design of a radix-10 online floating-point multiplier

    NASA Astrophysics Data System (ADS)

    McIlhenny, Robert D.; Ercegovac, Milos D.

    2009-08-01

    This paper describes an approach to design and implement a radix-10 online floating-point multiplier. An online approach is considered because it offers computational flexibility not available with conventional arithmetic. The design was coded in VHDL and compiled, synthesized, and mapped onto a Virtex 5 FPGA to measure cost in terms of LUTs (look-up-tables) as well as the cycle time and total latency. The routing delay which was not optimized is the major component in the cycle time. For a rough estimate of the cost/latency characteristics, our design was compared to a standard radix-2 floating-point multiplier of equivalent precision. The results demonstrate that even an unoptimized radix-10 online design is an attractive implementation alternative for FPGA floating-point multiplication.

  1. Toward a formal verification of a floating-point coprocessor and its composition with a central processing unit

    NASA Technical Reports Server (NTRS)

    Pan, Jing; Levitt, Karl N.; Cohen, Gerald C.

    1991-01-01

    Discussed here is work to formally specify and verify a floating point coprocessor based on the MC68881. The HOL verification system developed at Cambridge University was used. The coprocessor consists of two independent units: the bus interface unit, used to communicate with the CPU, and the arithmetic processing unit, used to perform the actual calculation. Reasoning about the interaction and synchronization among processes using higher order logic is demonstrated.

  2. Universal Number Library

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lloyd, G. Scott

    This floating-point arithmetic library contains a software implementation of Universal Numbers (unums) as described by John Gustafson [1]. The unum format is a superset of IEEE 754 floating point with several advantages. Computing with unums provides more accurate answers without rounding errors, underflow, or overflow. In contrast to fixed-sized IEEE numbers, a variable number of bits can be used to encode unums. This allows numbers with only a few significant digits, or with a small dynamic range, to be represented more compactly.

  3. Design of permanent magnet synchronous motor speed control system based on SVPWM

    NASA Astrophysics Data System (ADS)

    Wu, Haibo

    2017-04-01

    A permanent magnet synchronous motor (PMSM) speed control system based on the TMS320F28335 is designed and applied to an all-electric injection molding machine. The control method used is SVPWM; by sampling the motor current and the position information from a rotary transformer (resolver), double closed-loop control of speed and current is realized. The hardware floating-point core of the TMS320F28335 allows the PMSM control algorithms to be implemented in floating-point arithmetic, replacing the earlier fixed-point algorithms and improving the efficiency of the code.

  4. On the Floating Point Performance of the i860 Microprocessor

    NASA Technical Reports Server (NTRS)

    Lee, King; Kutler, Paul (Technical Monitor)

    1997-01-01

    The i860 microprocessor is a pipelined processor that can deliver two double precision floating point results every clock. It is being used in the Touchstone project to develop a teraflop computer by the year 2000. With such high computational capabilities it was expected that memory bandwidth would limit performance on many kernels. Measured performance of three kernels showed performance is less than what memory bandwidth limitations would predict. This paper develops a model that explains the discrepancy in terms of memory latencies and points to some problems involved in moving data from memory to the arithmetic pipelines.

  5. Floating-to-Fixed-Point Conversion for Digital Signal Processors

    NASA Astrophysics Data System (ADS)

    Menard, Daniel; Chillet, Daniel; Sentieys, Olivier

    2006-12-01

    Digital signal processing applications are specified with floating-point data types but they are usually implemented in embedded systems with fixed-point arithmetic to minimise cost and power consumption. Thus, methodologies which establish automatically the fixed-point specification are required to reduce the application time-to-market. In this paper, a new methodology for the floating-to-fixed point conversion is proposed for software implementations. The aim of our approach is to determine the fixed-point specification which minimises the code execution time for a given accuracy constraint. Compared to previous methodologies, our approach takes into account the DSP architecture to optimise the fixed-point formats and the floating-to-fixed-point conversion process is coupled with the code generation process. The fixed-point data types and the position of the scaling operations are optimised to reduce the code execution time. To evaluate the fixed-point computation accuracy, an analytical approach is used to reduce the optimisation time compared to the existing methods based on simulation. The methodology stages are described and several experiment results are presented to underline the efficiency of this approach.
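
    The basic fixed-point conversion underlying such methodologies can be sketched as follows (the format with 12 fractional bits is an arbitrary illustrative choice, not the output of the paper's optimisation):

```python
FRAC_BITS = 12   # 12 fractional bits, an assumed Q-format for illustration

def to_fixed(x: float) -> int:
    """Quantize a float to an integer representing x * 2**FRAC_BITS."""
    return round(x * (1 << FRAC_BITS))

def to_float(q: int) -> float:
    """Convert a fixed-point integer back to a float."""
    return q / (1 << FRAC_BITS)

def fixed_mul(a: int, b: int) -> int:
    """Fixed-point multiply: the raw product carries 2*FRAC_BITS, so rescale.
    The right shift is the 'scaling operation' whose placement the paper optimises."""
    return (a * b) >> FRAC_BITS

x, y = to_fixed(1.5), to_fixed(0.25)
print(to_float(fixed_mul(x, y)))   # 0.375
```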

  6. An Input Routine Using Arithmetic Statements for the IBM 704 Digital Computer

    NASA Technical Reports Server (NTRS)

    Turner, Don N.; Huff, Vearl N.

    1961-01-01

    An input routine has been designed for use with FORTRAN or SAP coded programs which are to be executed on an IBM 704 digital computer. All input to be processed by the routine is punched on IBM cards as declarative statements of the arithmetic type resembling the FORTRAN language. The routine is 850 words in length. It is capable of loading fixed- or floating-point numbers, octal numbers, and alphabetic words, and of performing simple arithmetic as indicated on input cards. Provisions have been made for rapid loading of arrays of numbers in consecutive memory locations.

  7. Y-MP floating point and Cholesky factorization

    NASA Technical Reports Server (NTRS)

    Carter, Russell

    1991-01-01

    The floating point arithmetics implemented in the Cray 2 and Cray Y-MP computer systems are nearly identical, but large scale computations performed on the two systems have exhibited significant differences in accuracy. The difference in accuracy is analyzed for the Cholesky factorization algorithm, and it is found that its source is the subtract magnitude operation of the Cray Y-MP. The results from numerical experiments for a range of problem sizes are presented, along with an efficient method for improving the accuracy of the factorization obtained on the Y-MP.
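
    A toy base-10 model of why a subtract-magnitude operation without a guard digit loses accuracy (a simplified illustration of the general phenomenon; the actual Cray hardware is binary and considerably more involved):

```python
def sub_magnitude(ma, ea, mb, eb, guard):
    """Toy 3-digit base-10 subtract magnitude for values m * 10**e (ea >= eb).
    Without a guard digit, digits of the smaller operand shifted off during
    alignment are simply truncated; one guard digit preserves the first of
    them. A simplified model, not the actual Cray hardware."""
    g = 1 if guard else 0
    shift = ea - eb
    assert shift >= g                       # model limit: alignment shift needed
    ma_aligned = ma * 10 ** g               # widen a by the guard digit
    mb_aligned = mb // 10 ** (shift - g)    # align b, dropping shifted-out digits
    return ma_aligned - mb_aligned, ea - g  # result mantissa and exponent

# 1.00 - 0.999: the exact answer is 0.001, i.e. (1, -3).
print(sub_magnitude(100, -2, 999, -3, guard=False))  # (1, -2): 0.01, ten times too big
print(sub_magnitude(100, -2, 999, -3, guard=True))   # (1, -3): 0.001, exact
```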

  8. Research in the design of high-performance reconfigurable systems

    NASA Technical Reports Server (NTRS)

    Slotnick, D. L.; Mcewan, S. D.; Spry, A. J.

    1984-01-01

    An initial design for the Bit Processor (BP), referred to in prior reports as the Processing Element or PE, has been completed. Eight BP's, together with their supporting random-access memory, a 64 k x 9 ROM to perform addition, routing logic, and some additional logic, constitute the components of a single stage. An initial stage design is given. Stages may be combined to perform high-speed fixed- or floating-point arithmetic. Stages can be configured into a range of arithmetic modules that includes bit-serial one- or two-dimensional arrays; one- or two-dimensional arrays of fixed- or floating-point processors; and specialized uniprocessors, such as long-word arithmetic units. One to eight BP's represent a likely initial chip level. The Stage would then correspond to a first-level pluggable module. As both this project and VLSI CAD/CAM progress, however, it is expected that the chip level would migrate upward to the stage and, perhaps, ultimately the box level. The BP RAM, consisting of two banks, holds only operands and indices. Programs are at the box (high-level function) and system level. At the system level, initial effort has been concentrated on specifying the tools needed to evaluate design alternatives.

  9. Inconsistencies in Numerical Simulations of Dynamical Systems Using Interval Arithmetic

    NASA Astrophysics Data System (ADS)

    Nepomuceno, Erivelton G.; Peixoto, Márcia L. C.; Martins, Samir A. M.; Rodrigues, Heitor M.; Perc, Matjaž

    Over the past few decades, interval arithmetic has been attracting widespread interest from the scientific community. With the expansion of computing power, scientific computing is encountering a noteworthy shift from floating-point arithmetic toward increased use of interval arithmetic. Notwithstanding the significant reliability of interval arithmetic, this paper presents a theoretical inconsistency in a simulation of dynamical systems using a well-known implementation of interval arithmetic. We have observed that two natural interval extensions present an empty intersection during a finite time range, which is contrary to the fundamental theorem of interval analysis. We have proposed a procedure to at least partially overcome this problem, based on the union of the two generated pseudo-orbits. This paper also shows a successful application of interval arithmetic to reducing interval widths in the simulation of a discrete map. The implications of our findings for the reliability of scientific computing using interval arithmetic are addressed using two numerical examples.
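
    The sensitivity of results to the chosen natural interval extension, the effect underlying the paper's observation, can be seen with a minimal interval class (outward rounding omitted for brevity; this is not the implementation the paper tested):

```python
class Interval:
    """Minimal interval arithmetic; outward rounding omitted for brevity."""
    def __init__(self, lo: float, hi: float):
        self.lo, self.hi = lo, hi
    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o):
        return Interval(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        ps = (self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi)
        return Interval(min(ps), max(ps))
    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

one = Interval(1.0, 1.0)
x = Interval(0.0, 1.0)
# Two natural interval extensions of the same function f(x) = x * (1 - x):
f1 = x - x * x          # the form x - x^2
f2 = x * (one - x)      # the factored form
print(f1, f2)           # [-1.0, 1.0] vs [0.0, 1.0]: same f, different enclosures
```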

  10. Rapid Prototyping in PVS

    NASA Technical Reports Server (NTRS)

    Munoz, Cesar A.; Butler, Ricky (Technical Monitor)

    2003-01-01

    PVSio is a conservative extension to the PVS prelude library that provides basic input/output capabilities to the PVS ground evaluator. It supports rapid prototyping in PVS by enhancing the specification language with built-in constructs for string manipulation, floating point arithmetic, and input/output operations.

  11. Desirable floating-point arithmetic and elementary functions for numerical computation

    NASA Technical Reports Server (NTRS)

    Hull, T. E.

    1978-01-01

    The topics considered are: (1) the base of the number system, (2) precision control, (3) number representation, (4) arithmetic operations, (5) other basic operations, (6) elementary functions, and (7) exception handling. The possibility of doing without fixed-point arithmetic is also mentioned. The specifications are intended to be entirely at the level of a programming language such as FORTRAN. The emphasis is on convenience and simplicity from the user's point of view. Conforming to such specifications would have obvious beneficial implications for the portability of numerical software and for proving programs correct, as well as for providing facilities that are most suitable for the user. The specifications are not complete in every detail, but it is intended that they be complete in spirit - some further details, especially syntactic details, would have to be provided, but the proposals are otherwise relatively complete.

  12. Bit-parallel arithmetic in a massively-parallel associative processor

    NASA Technical Reports Server (NTRS)

    Scherson, Isaac D.; Kramer, David A.; Alleyne, Brian D.

    1992-01-01

    A simple but powerful new architecture based on a classical associative processor model is presented. Algorithms for performing the four basic arithmetic operations both for integer and floating point operands are described. For m-bit operands, the proposed architecture makes it possible to execute complex operations in O(m) cycles as opposed to O(m^2) for bit-serial machines. A word-parallel, bit-parallel, massively-parallel computing system can be constructed using this architecture with VLSI technology. The operation of this system is demonstrated for the fast Fourier transform and matrix multiplication.

  13. A floating-point/multiple-precision processor for airborne applications

    NASA Technical Reports Server (NTRS)

    Yee, R.

    1982-01-01

    A compact input output (I/O) numerical processor capable of performing floating-point, multiple precision and other arithmetic functions at execution times which are at least 100 times faster than comparable software emulation is described. The I/O device is a microcomputer system containing a 16 bit microprocessor, a numerical coprocessor with eight 80 bit registers running at a 5 MHz clock rate, 18K random access memory (RAM) and 16K electrically programmable read only memory (EPROM). The processor acts as an intelligent slave to the host computer and can be programmed in high order languages such as FORTRAN and PL/M-86.

  14. Fixed-point image orthorectification algorithms for reduced computational cost

    NASA Astrophysics Data System (ADS)

    French, Joseph Clinton

    Imaging systems have been applied to many new applications in recent years. With the advent of low-cost, low-power focal planes and more powerful, lower-cost computers, remote sensing applications have become more widespread. Many of these applications require some form of geolocation, especially when relative distances are desired. However, when greater global positional accuracy is needed, orthorectification becomes necessary. Orthorectification is the process of projecting an image onto a Digital Elevation Map (DEM), which removes terrain distortions and corrects the perspective distortion by changing the viewing angle to be perpendicular to the projection plane. Orthorectification is used in disaster tracking, landscape management, wildlife monitoring and many other applications. However, orthorectification is a computationally expensive process due to the floating point operations and divisions in the algorithm. To reduce the computational cost of on-board processing, two novel algorithm modifications are proposed. One modification is projection using fixed-point arithmetic, which removes the floating point operations and reduces the processing time by operating only on integers. The second modification is replacement of the division inherent in projection with multiplication by the inverse. Computing the inverse exactly would itself require iteration, so the inverse is replaced with a linear approximation. As a result of these modifications, the processing time of projection is reduced by a factor of 1.3x with an average pixel position error of 0.2% of a pixel size for 128-bit integer processing, and by over 4x with an average pixel position error of less than 13% of a pixel size for 64-bit integer processing. A secondary inverse function approximation is also developed that replaces the linear approximation with a quadratic.
The quadratic approximation produces a more accurate approximation of the inverse, allowing for an integer multiplication calculation to be used in place of the traditional floating point division. This method increases the throughput of the orthorectification operation by 38% when compared to floating point processing. Additionally, this method improves the accuracy of the existing integer-based orthorectification algorithms in terms of average pixel distance, increasing the accuracy of the algorithm by more than 5x. The quadratic function reduces the pixel position error to 2% and is still 2.8x faster than the 128-bit floating point algorithm.
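
    The division-free projection idea above can be sketched with reciprocal approximations on a normalized interval (the endpoint linear fit and three-point quadratic fit below are illustrative choices on [1, 2], not the paper's fitted coefficients):

```python
def recip_lin(d: float) -> float:
    """Linear reciprocal approximation on [1, 2]: fit through 1/1 and 1/2."""
    return 1.5 - 0.5 * d

def recip_quad(d: float) -> float:
    """Quadratic reciprocal approximation on [1, 2]: fit through d = 1, 1.5, 2."""
    return (d / 3.0 - 1.5) * d + 13.0 / 6.0

# A projection divide n/d becomes the multiply n * recip(d);
# the quadratic fit shrinks the error considerably.
d = 1.25
print(abs(recip_lin(d) - 1.0 / d))    # ~0.075
print(abs(recip_quad(d) - 1.0 / d))   # ~0.0125
```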

  15. Floating-point system quantization errors in digital control systems

    NASA Technical Reports Server (NTRS)

    Phillips, C. L.

    1973-01-01

    The results are reported of research into the effects on system operation of signal quantization in a digital control system. The investigation considered digital controllers (filters) operating in floating-point arithmetic in either open-loop or closed-loop systems. An error analysis technique is developed, and is implemented by a digital computer program that is based on a digital simulation of the system. As output, the program gives the programming form required for minimum system quantization errors (either maximum or rms errors), and the maximum and rms errors that appear in the system output for a given bit configuration. The program can be integrated into existing digital simulations of a system.
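The flavor of such an analysis can be sketched by simulating a digital filter in which every arithmetic result is rounded to a reduced significand, then comparing against full double precision. The filter, word length, and quantizer below are illustrative assumptions, not the paper's program.

```python
import math

def quantize(x: float, bits: int) -> float:
    """Round x to `bits` significand bits (a crude float quantizer)."""
    if x == 0.0:
        return 0.0
    e = math.frexp(x)[1]          # x = m * 2**e with 0.5 <= |m| < 1
    scale = 2.0 ** (bits - e)
    return round(x * scale) / scale

def filter_run(bits=None, n=200, a=0.9, b=0.1):
    """First-order low-pass y[k] = a*y[k-1] + b*u[k] with unit-step input.
    If `bits` is given, every operation result is quantized."""
    q = (lambda v: quantize(v, bits)) if bits else (lambda v: v)
    y = 0.0
    for _ in range(n):
        y = q(q(a * y) + q(b * 1.0))
    return y

exact = filter_run()            # full double precision
coarse = filter_run(bits=12)    # 12-bit significand throughout
print(exact, coarse, abs(exact - coarse))
```

Sweeping `bits` and the placement of the quantizer over different filter realizations is exactly the kind of search for a minimum-error "programming form" the abstract describes.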

  16. UNIX as an environment for producing numerical software

    NASA Technical Reports Server (NTRS)

    Schryer, N. L.

    1978-01-01

    The UNIX operating system supports a number of software tools: a mathematical equation-setting language, a phototypesetting language, a FORTRAN preprocessor language, a text editor, and a command interpreter. The design, implementation, documentation, and maintenance of a portable FORTRAN test of the floating-point arithmetic unit of a computer is used to illustrate these tools at work.

  17. Numerical computation of spherical harmonics of arbitrary degree and order by extending exponent of floating point numbers

    NASA Astrophysics Data System (ADS)

    Fukushima, Toshio

    2012-04-01

    By extending the exponent of floating point numbers with an additional integer as the power index of a large radix, we compute fully normalized associated Legendre functions (ALF) by recursion without underflow problems. The new method enables us to evaluate ALFs of extremely high degree, such as 2^32 = 4,294,967,296, which corresponds to around 1 cm resolution on the Earth's surface. By limiting the application of exponent extension to a few working variables in the recursion, choosing a suitable large power of 2 as the radix, and embedding the contents of the basic arithmetic procedure of floating point numbers with the exponent extension directly in the program computing the recurrence formulas, we achieve the evaluation of ALFs in the double-precision environment at the cost of around a 10% increase in computational time per single ALF. This formulation realizes meaningful execution of spherical harmonic synthesis and/or analysis of arbitrary degree and order.
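The idea of pairing a double with an integer power-of-radix index can be sketched as follows. The radix 2**500 and the normalization thresholds are illustrative choices; the paper applies the same mechanism inside the ALF recursion.

```python
# An "extended-exponent" number is a pair (f, i) representing f * RADIX**i,
# with f an ordinary double and i an unbounded Python integer.

RADIX = 2.0 ** 500
INV_RADIX = 2.0 ** -500
BIG = 2.0 ** 250       # normalization thresholds keep f safely
SMALL = 2.0 ** -250    # inside the double's dynamic range

def normalize(f, i):
    """Pull the significand part f back into [SMALL, BIG)."""
    while abs(f) >= BIG:
        f, i = f * INV_RADIX, i + 1
    while 0 < abs(f) < SMALL:
        f, i = f * RADIX, i - 1
    return f, i

def xmul(a, b):
    """Multiply two extended-exponent numbers."""
    return normalize(a[0] * b[0], a[1] + b[1])

# Product of 5000 factors of 1e-100: plain doubles underflow to 0.0,
# the extended representation does not.
x = (1.0, 0)
plain = 1.0
for _ in range(5000):
    x = xmul(x, (1e-100, 0))
    plain *= 1e-100

print(plain)   # 0.0: underflowed
print(x)       # nonzero significand with a large negative radix index
```

Because only the few working variables of a recursion need this representation, the overhead stays small, which is the point the abstract makes.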

  18. A quasi-spectral method for Cauchy problem of 2/D Laplace equation on an annulus

    NASA Astrophysics Data System (ADS)

    Saito, Katsuyoshi; Nakada, Manabu; Iijima, Kentaro; Onishi, Kazuei

    2005-01-01

    Real numbers are usually represented in the computer as finite-digit hexadecimal floating point numbers. Accordingly, numerical analysis often suffers from rounding errors, which particularly deteriorate the precision of numerical solutions of inverse and ill-posed problems. We attempt to use multi-precision arithmetic to reduce the effect of rounding errors. The multi-precision arithmetic system is used by courtesy of Dr Fujiwara of Kyoto University. In this paper we try to show the effectiveness of multi-precision arithmetic with two typical examples: the Cauchy problem of the Laplace equation in two dimensions and the shape identification problem by inverse scattering in three dimensions. It is concluded from a few numerical examples that multi-precision arithmetic works well in resolving those numerical solutions, when combined with a high-order finite difference method for the Cauchy problem and with the eigenfunction expansion method for the inverse scattering problem.
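The paper's examples are too large to reproduce here, but the benefit of multi-precision arithmetic for error-amplifying problems can be shown with a classic stand-in: Muller's recurrence converges to 6 in exact arithmetic, yet double-precision round-off drives it to the spurious fixed point 100. Python's decimal module (not Dr Fujiwara's system) plays the role of the multi-precision arithmetic.

```python
from decimal import Decimal, getcontext

def muller(u0, u1, n, div):
    """Muller's recurrence u_{k+1} = 111 - 1130/u_k + 3000/(u_k*u_{k-1}).
    With u0 = 2, u1 = -4 the exact limit is 6, but tiny rounding errors
    excite a mode that converges to 100 instead."""
    a, b = u0, u1
    for _ in range(n):
        a, b = b, 111 - div(1130, b) + div(3000, b * a)
    return b

# Double precision: collapses toward the spurious fixed point 100.
f = muller(2.0, -4.0, 30, lambda p, q: p / q)

# 60-digit decimal arithmetic: still tracking the true limit 6.
getcontext().prec = 60
d = muller(Decimal(2), Decimal(-4), 30, lambda p, q: Decimal(p) / q)
print(f)   # near 100
print(d)   # near 6
```

Inverse and ill-posed problems amplify rounding errors in the same qualitative way, which is why raising the working precision restores usable solutions.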

  19. DFT algorithms for bit-serial GaAs array processor architectures

    NASA Technical Reports Server (NTRS)

    Mcmillan, Gary B.

    1988-01-01

    Systems and Processes Engineering Corporation (SPEC) has developed an innovative array processor architecture for computing Fourier transforms and other commonly used signal processing algorithms. This architecture is designed to extract the highest possible array performance from state-of-the-art GaAs technology. SPEC's architectural design includes a high performance RISC processor implemented in GaAs, along with a Floating Point Coprocessor and a unique Array Communications Coprocessor, also implemented in GaAs technology. Together, these data processors represent the latest in technology, both from an architectural and implementation viewpoint. SPEC has examined numerous algorithms and parallel processing architectures to determine the optimum array processor architecture. SPEC has developed an array processor architecture with integral communications ability to provide maximum node connectivity. The Array Communications Coprocessor embeds communications operations directly in the core of the processor architecture. A Floating Point Coprocessor architecture has been defined that utilizes Bit-Serial arithmetic units, operating at very high frequency, to perform floating point operations. These Bit-Serial devices reduce the device integration level and complexity to a level compatible with state-of-the-art GaAs device technology.

  20. A sparse matrix algorithm on the Boolean vector machine

    NASA Technical Reports Server (NTRS)

    Wagner, Robert A.; Patrick, Merrell L.

    1988-01-01

    VLSI technology is being used to implement a prototype Boolean Vector Machine (BVM), a large network of very small processors with equally small memories that operate in SIMD mode, use bit-serial arithmetic, and communicate via a cube-connected cycles network. The BVM's bit-serial arithmetic and the small memories of individual processors are noted to compromise the system's effectiveness in large numerical problem applications. Attention is presently given to the implementation of a basic matrix-vector iteration algorithm for sparse matrices on the BVM, in order to generate over 1 billion useful floating-point operations/sec for this iteration algorithm. The algorithm is expressed in a novel language designated 'BVM'.
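Bit-serial arithmetic, used by both this entry and the GaAs array processor above, processes one bit per clock with a single carry flip-flop of state, trading time for silicon area. A minimal sketch of a bit-serial adder:

```python
def bit_serial_add(a: int, b: int, width: int = 32) -> int:
    """Add two unsigned integers one bit per 'clock tick', LSB first.
    Only one carry bit of state is kept between ticks, which is why a
    bit-serial ALU needs so little hardware."""
    carry, out = 0, 0
    for k in range(width):
        x = (a >> k) & 1
        y = (b >> k) & 1
        s = x ^ y ^ carry                          # full-adder sum bit
        carry = (x & y) | (x & carry) | (y & carry)  # full-adder carry
        out |= s << k
    return out

print(bit_serial_add(123456, 654321))  # 777777
```

An n-bit add costs n ticks instead of one, which is the throughput compromise the abstract mentions; running many such adders in parallel across the processor array recovers aggregate performance.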

  1. A Comparative Study of Randomized Constraint Solvers for Random-Symbolic Testing

    NASA Technical Reports Server (NTRS)

    Takaki, Mitsuo; Cavalcanti, Diego; Gheyi, Rohit; Iyoda, Juliano; dAmorim, Marcelo; Prudencio, Ricardo

    2009-01-01

    The complexity of constraints is a major obstacle for constraint-based software verification. Automatic constraint solvers are fundamentally incomplete: input constraints often build on some undecidable theory or some theory the solver does not support. This paper proposes and evaluates several randomized solvers to address this issue. We compare the effectiveness of a symbolic solver (CVC3), a random solver, three hybrid solvers (i.e., a mix of random and symbolic), and two heuristic search solvers. We evaluate the solvers on two benchmarks: one consisting of manually generated constraints and another generated with concolic execution of 8 subjects. In addition to fully decidable constraints, the benchmarks include constraints with non-linear integer arithmetic, integer modulo and division, bitwise arithmetic, and floating-point arithmetic. As expected, symbolic solving (in particular, CVC3) subsumes the other solvers for the concolic execution of subjects that only generate decidable constraints. For the remaining subjects the solvers are complementary.

  2. CADNA: a library for estimating round-off error propagation

    NASA Astrophysics Data System (ADS)

    Jézéquel, Fabienne; Chesneaux, Jean-Marie

    2008-06-01

    The CADNA library enables one to estimate round-off error propagation using a probabilistic approach. With CADNA the numerical quality of any simulation program can be controlled. Furthermore, by detecting all the instabilities which may occur at run time, a numerical debugging of the user code can be performed. CADNA provides new numerical types on which round-off errors can be estimated. Slight modifications are required to control a code with CADNA, mainly changes in variable declarations, input and output. This paper describes the features of the CADNA library and shows how to interpret the information it provides concerning round-off error propagation in a code.

    Program summary
    Program title: CADNA
    Catalogue identifier: AEAT_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAT_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 53 420
    No. of bytes in distributed program, including test data, etc.: 566 495
    Distribution format: tar.gz
    Programming language: Fortran
    Computer: PC running LINUX with an i686 or an ia64 processor, UNIX workstations including SUN, IBM
    Operating system: LINUX, UNIX
    Classification: 4.14, 6.5, 20
    Nature of problem: A simulation program which uses floating-point arithmetic generates round-off errors, due to the rounding performed at each assignment and at each arithmetic operation. Round-off error propagation may invalidate the result of a program. The CADNA library enables one to estimate round-off error propagation in any simulation program and to detect all numerical instabilities that may occur at run time.
    Solution method: The CADNA library [1] implements Discrete Stochastic Arithmetic [2-4] which is based on a probabilistic model of round-off errors.
The program is run several times with a random rounding mode generating different results each time. From this set of results, CADNA estimates the number of exact significant digits in the result that would have been computed with standard floating-point arithmetic.
    Restrictions: CADNA requires a Fortran 90 (or newer) compiler. In the program to be linked with the CADNA library, round-off errors on complex variables cannot be estimated. Furthermore, array functions such as product or sum must not be used. Only the arithmetic operators and the abs, min, max and sqrt functions can be used for arrays.
    Running time: The version of a code which uses CADNA runs at least three times slower than its floating-point version. This cost depends on the computer architecture and can be higher if the detection of numerical instabilities is enabled. In this case, the cost may be related to the number of instabilities detected.
    References:
    [1] The CADNA library, URL address: http://www.lip6.fr/cadna.
    [2] J.-M. Chesneaux, L'arithmétique Stochastique et le Logiciel CADNA, Habilitation à diriger des recherches, Université Pierre et Marie Curie, Paris, 1995.
    [3] J. Vignes, A stochastic arithmetic for reliable scientific computation, Math. Comput. Simulation 35 (1993) 233-261.
    [4] J. Vignes, Discrete stochastic arithmetic for validating results of numerical software, Numer. Algorithms 37 (2004) 377-390.
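The estimation principle can be imitated crudely: nudge every intermediate result by one ulp and compare runs to count the digits on which they agree. CADNA uses randomly chosen rounding over several runs; the sketch below is a deterministic two-run simplification with directed nudges, and the expressions are illustrative.

```python
import math

def nudged(direction):
    """Return a 'rounding' that pushes each intermediate result one ulp
    toward +inf or -inf -- a crude imitation of a perturbed rounding mode."""
    target = math.inf if direction > 0 else -math.inf
    return lambda x: math.nextafter(x, target)

def common_digits(lo, hi):
    """Decimal digits on which two bracketing results agree."""
    if lo == hi:
        return 15
    mid = (lo + hi) / 2
    return max(int(math.log10(abs(mid) / abs(hi - lo))), 0)

# Cancellation-prone expression: (1 + 1e-12) - 1 keeps few digits.
shaky = lambda r: r(r(1.0 + 1e-12) - 1.0)
# Benign expression: 2*pi + 1 keeps nearly all of them.
solid = lambda r: r(r(2.0 * math.pi) + 1.0)

for name, expr in (("shaky", shaky), ("solid", solid)):
    lo, hi = sorted(expr(nudged(d)) for d in (-1, +1))
    print(name, "~", common_digits(lo, hi), "significant digits")
```

A result whose runs disagree in most digits is exactly the kind of instability CADNA flags at run time.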

  3. Shift-connected SIMD array architectures for digital optical computing systems, with algorithms for numerical transforms and partial differential equations

    NASA Astrophysics Data System (ADS)

    Drabik, Timothy J.; Lee, Sing H.

    1986-11-01

    The intrinsic parallelism characteristics of easily realizable optical SIMD arrays prompt their present consideration in the implementation of highly structured algorithms for the numerical solution of multidimensional partial differential equations and the computation of fast numerical transforms. Attention is given to a system, comprising several spatial light modulators (SLMs), an optical read/write memory, and a functional block, which performs simple, space-invariant shifts on images with sufficient flexibility to implement the fastest known methods for partial differential equations as well as a wide variety of numerical transforms in two or more dimensions. Either fixed or floating-point arithmetic may be used. A performance projection of more than 1 billion floating point operations/sec using SLMs with 1000 x 1000-resolution and operating at 1-MHz frame rates is made.

  4. Efficient Boundary Extraction of BSP Solids Based on Clipping Operations.

    PubMed

    Wang, Charlie C L; Manocha, Dinesh

    2013-01-01

    We present an efficient algorithm to extract the manifold surface that approximates the boundary of a solid represented by a Binary Space Partition (BSP) tree. Our polygonization algorithm repeatedly performs clipping operations on volumetric cells that correspond to a spatial convex partition and computes the boundary by traversing the connected cells. We use point-based representations along with finite-precision arithmetic to improve the efficiency and generate the B-rep approximation of a BSP solid. The core of our polygonization method is a novel clipping algorithm that uses a set of logical operations to make it resistant to degeneracies resulting from limited precision of floating-point arithmetic. The overall BSP to B-rep conversion algorithm can accurately generate boundaries with sharp and small features, and is faster than prior methods. At the end of this paper, we use this algorithm for a few geometric processing applications including Boolean operations, model repair, and mesh reconstruction.

  5. Fixed-Rate Compressed Floating-Point Arrays.

    PubMed

    Lindstrom, Peter

    2014-12-01

    Current compression schemes for floating-point data commonly take fixed-precision values and compress them to a variable-length bit stream, complicating memory management and random access. We present a fixed-rate, near-lossless compression scheme that maps small blocks of 4^d values in d dimensions to a fixed, user-specified number of bits per block, thereby allowing read and write random access to compressed floating-point data at block granularity. Our approach is inspired by fixed-rate texture compression methods widely adopted in graphics hardware, but has been tailored to the high dynamic range and precision demands of scientific applications. Our compressor is based on a new, lifted, orthogonal block transform and embedded coding, allowing each per-block bit stream to be truncated at any point if desired, thus facilitating bit rate selection using a single compression scheme. To avoid compression or decompression upon every data access, we employ a software write-back cache of uncompressed blocks. Our compressor has been designed with computational simplicity and speed in mind to allow for the possibility of a hardware implementation, and uses only a small number of fixed-point arithmetic operations per compressed value. We demonstrate the viability and benefits of lossy compression in several applications, including visualization, quantitative data analysis, and numerical simulation.

  6. Simulation of an Air Cushion Vehicle

    DTIC Science & Technology

    1977-03-01

    [Only fragments of this scanned report are legible.] Massachusetts 02139. March 1977. Final Report for Period January 1975 - December 1976. DOD Distribution Statement: Approved for public release; reproduction in whole or in part is permitted for any purpose of the United States Government. Legible fragments list fault conditions (arithmetic overflow, floating point fault, decimal arithmetic fault, watchdog timer runout) and program entry/load procedures. (NAVTRAEQUIPCEN 75-C-0057-1)

  7. Translation of one high-level language to another: COBOL to ADA, an example

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hill, J.A.

    1986-01-01

    This dissertation discusses the difficulties encountered in, and explores possible solutions to, the task of automatically converting programs written in one HLL, COBOL, into programs written in another HLL, Ada, while still maintaining readability. This paper presents at least one set of techniques and algorithms to solve many of the problems that were encountered. The differing view of records is solved by isolating those instances where it is a problem, then using the RENAMES option of Ada. Several solutions for the decimal-arithmetic translation are discussed. One method is to emulate COBOL arithmetic in an arithmetic package. Another partial solution suggested is to convert the values to decimal-scaled integers and use modular arithmetic. Conversion to fixed-point type and floating-point type are the third and fourth methods. The work of another researcher, Bobby Othmer, is utilized to correct any unstructured code, to remap statements not directly translatable such as ALTER, and to pull together isolated code sections. Algorithms are then presented to convert this restructured COBOL code into Ada code with local variables, parameters, and packages. The input/output requirements are partially met by mapping them to a series of procedure calls that interface with Ada's standard input-output package. Several examples are given of hand translations of COBOL programs. In addition, a possibly new method is shown for measuring the readability of programs.
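The decimal-scaled-integer strategy mentioned above can be sketched briefly. The format, helper names, and truncation rule are illustrative assumptions (a COBOL-like PIC 9(7)V99 with two implied decimal places, non-negative values only), not the dissertation's algorithms.

```python
# Emulating COBOL fixed-decimal arithmetic with scaled integers:
# a value like 19.99 is stored as the integer 1999.

SCALE = 100  # two implied decimal places

def from_str(s: str) -> int:
    """Parse a non-negative decimal literal into a scaled integer."""
    units, _, cents = s.partition(".")
    return int(units) * SCALE + int((cents + "00")[:2])

def add(a: int, b: int) -> int:
    return a + b          # same scale: plain integer addition

def mul(a: int, b: int) -> int:
    # the raw product has four decimal places; rescale by truncation,
    # which matches COBOL's default behaviour for excess digits
    return (a * b) // SCALE

def to_str(v: int) -> str:
    return f"{v // SCALE}.{v % SCALE:02d}"

total = mul(from_str("19.99"), from_str("3.00"))
print(to_str(total))                                 # 59.97
print(to_str(add(from_str("0.10"), from_str("0.20"))))  # 0.30, exactly
```

Unlike binary floating point (where 0.1 + 0.2 != 0.3), every amount here is exact, which is why this mapping preserves COBOL's decimal semantics.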

  8. The MONGOOSE Rational Arithmetic Toolbox.

    PubMed

    Le, Christopher; Chindelevitch, Leonid

    2018-01-01

    The modeling of metabolic networks has seen a rapid expansion following the complete sequencing of thousands of genomes. The constraint-based modeling framework has emerged as one of the most popular approaches to reconstructing and analyzing genome-scale metabolic models. Its main assumption is that of a quasi-steady-state, requiring that the production of each internal metabolite be balanced by its consumption. However, due to the multiscale nature of the models, the large number of reactions and metabolites, and the use of floating-point arithmetic for the stoichiometric coefficients, ensuring that this assumption holds can be challenging. The MONGOOSE toolbox addresses this problem by using rational arithmetic, thus ensuring that models are analyzed in a reproducible manner and consistently with modeling assumptions. In this chapter we present a protocol for the complete analysis of a metabolic network model using the MONGOOSE toolbox, via its newly developed GUI, and describe how it can be used as a model-checking platform both during and after the model construction process.
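Why rational arithmetic matters for the steady-state assumption can be shown with a toy mass balance. This is an illustrative example using Python's fractions module, not MONGOOSE's API: a metabolite produced by two reactions and consumed by a third, with fluxes 1/10, 1/5 and 3/10.

```python
from fractions import Fraction

row = [Fraction(1), Fraction(1), Fraction(-1)]           # stoichiometry
flux = [Fraction(1, 10), Fraction(1, 5), Fraction(3, 10)]  # candidate fluxes

# Exact rational dot product: the balance holds identically.
exact = sum(a * b for a, b in zip(row, flux))

# The same balance in binary floating point: 0.1 + 0.2 - 0.3 != 0.
approx = sum(float(a) * float(b) for a, b in zip(row, flux))

print(exact == 0)    # True: steady state verified exactly
print(approx == 0)   # False: spurious residual from round-off
```

In a genome-scale model, such spurious residuals can misclassify reactions as blocked or unblocked, which is precisely the inconsistency exact arithmetic removes.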

  9. An exact arithmetic toolbox for a consistent and reproducible structural analysis of metabolic network models

    PubMed Central

    Chindelevitch, Leonid; Trigg, Jason; Regev, Aviv; Berger, Bonnie

    2014-01-01

    Constraint-based models are currently the only methodology that allows the study of metabolism at the whole-genome scale. Flux balance analysis is commonly used to analyse constraint-based models. Curiously, the results of this analysis vary with the software being run, a situation that we show can be remedied by using exact rather than floating-point arithmetic. Here we introduce MONGOOSE, a toolbox for analysing the structure of constraint-based metabolic models in exact arithmetic. We apply MONGOOSE to the analysis of 98 existing metabolic network models and find that the biomass reaction is surprisingly blocked (unable to sustain non-zero flux) in nearly half of them. We propose a principled approach for unblocking these reactions and extend it to the problems of identifying essential and synthetic lethal reactions and minimal media. Our structural insights enable a systematic study of constraint-based metabolic models, yielding a deeper understanding of their possibilities and limitations. PMID:25291352

  10. The most precise computations using Euler's method in standard floating-point arithmetic applied to modelling of biological systems.

    PubMed

    Kalinina, Elizabeth A

    2013-08-01

    The explicit Euler's method is known to be easy to implement and effective for many applications. This article extends results previously obtained for systems of linear differential equations with constant coefficients to arbitrary systems of ordinary differential equations. The optimal step size (providing minimum total error) is calculated at each step of Euler's method. Several examples of solving stiff systems are included. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
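The idea of choosing a step size from an error estimate at each step can be sketched as follows. The y'' estimate and the step rule below are a common textbook heuristic balancing the local truncation error against a tolerance, offered as an illustration only, not the paper's optimal-step formula.

```python
import math

def euler_adaptive(f, t0, y0, t_end, tol=1e-6):
    """Explicit Euler with the step size h chosen at each step so the
    estimated local truncation error |y''| * h**2 / 2 stays near tol."""
    t, y = t0, y0
    while t < t_end:
        dy = f(t, y)
        eps = 1e-6
        # crude second-derivative estimate along the solution direction
        ypp = (f(t + eps, y + eps * dy) - dy) / eps
        h = math.sqrt(2.0 * tol / max(abs(ypp), 1e-12))
        h = min(h, t_end - t)        # do not step past the endpoint
        y += h * dy
        t += h
    return y

# Test problem y' = -y, y(0) = 1, exact solution exp(-t).
approx = euler_adaptive(lambda t, y: -y, 0.0, 1.0, 1.0)
print(approx, math.exp(-1.0))
```

The paper's contribution is a sharper choice: its optimal step also accounts for the rounding error of the floating-point arithmetic, which dominates once h becomes very small.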

  11. Multinode reconfigurable pipeline computer

    NASA Technical Reports Server (NTRS)

    Nosenchuck, Daniel M. (Inventor); Littman, Michael G. (Inventor)

    1989-01-01

    A multinode parallel-processing computer is made up of a plurality of interconnected, large-capacity nodes, each including a reconfigurable pipeline of functional units such as Integer Arithmetic Logic Processors, Floating Point Arithmetic Processors, Special Purpose Processors, etc. The reconfigurable pipeline of each node is connected to a multiplane memory by a Memory-ALU switch NETwork (MASNET). The reconfigurable pipeline includes three (3) basic substructures formed from functional units which have been found to be sufficient to perform the bulk of all calculations. The MASNET controls the flow of signals from the memory planes to the reconfigurable pipeline and vice versa. The nodes are connectable together by an internode data router (hyperspace router) so as to form a hypercube configuration. The capability of the nodes to conditionally configure the pipeline at each tick of the clock, without requiring a pipeline flush, permits many powerful algorithms to be implemented directly.

  12. Fine-grained parallelization of fitness functions in bioinformatics optimization problems: gene selection for cancer classification and biclustering of gene expression data.

    PubMed

    Gomez-Pulido, Juan A; Cerrada-Barrios, Jose L; Trinidad-Amado, Sebastian; Lanza-Gutierrez, Jose M; Fernandez-Diaz, Ramon A; Crawford, Broderick; Soto, Ricardo

    2016-08-31

    Metaheuristics are widely used to solve large combinatorial optimization problems in bioinformatics because of the huge set of possible solutions. Two representative problems are gene selection for cancer classification and biclustering of gene expression data. In most cases, these metaheuristics, as well as other non-linear techniques, apply a fitness function to each possible solution in a size-limited population; this step incurs higher latencies than the other parts of the algorithm, so the execution time of the application depends mainly on the execution time of the fitness function. In addition, fitness functions are usually formulated in floating-point arithmetic. Consequently, a careful parallelization of these functions using reconfigurable hardware technology will accelerate the computation, especially if they are applied in parallel to several solutions of the population. A fine-grained parallelization of two floating-point fitness functions of different complexities and features, involved in biclustering of gene expression data and gene selection for cancer classification, yielded higher speedups and lower-power computation than usual microprocessors. The results show better performance using reconfigurable hardware technology instead of usual microprocessors, in terms of computing time and power consumption, not only because of the parallelization of the arithmetic operations, but also thanks to the concurrent fitness evaluation of several individuals of the population in the metaheuristic. This is a good basis for building accelerated and low-energy solutions for intensive computing scenarios.

  13. Extreme-Scale Algorithms & Software Resilience (EASIR) Architecture-Aware Algorithms for Scalable Performance and Resilience on Heterogeneous Architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Demmel, James W.

    This project addresses both communication-avoiding algorithms and reproducible floating-point computation. Communication, i.e. moving data, either between levels of memory or between processors over a network, is much more expensive per operation than arithmetic (measured in time or energy), so we seek algorithms that greatly reduce communication. We developed many new algorithms for both dense and sparse, and both direct and iterative, linear algebra, attaining new communication lower bounds and getting large speedups in many cases. We also extended this work in several ways: (1) We minimize writes separately from reads, since writes may be much more expensive than reads on emerging memory technologies, like Flash, sometimes doing asymptotically fewer writes than reads. (2) We extend the lower bounds and optimal algorithms to arbitrary algorithms that may be expressed as perfectly nested loops accessing arrays, where the array subscripts may be arbitrary affine functions of the loop indices (e.g. A(i), B(i, j+k, k+3*m-7, …), etc.). (3) We extend our communication-avoiding approach to some machine learning algorithms, such as support vector machines. This work has won a number of awards. We also address reproducible floating-point computation. We define reproducibility to mean getting bitwise identical results from multiple runs of the same program, perhaps with different hardware resources or other changes that should ideally not change the answer. Many users depend on reproducibility for debugging or correctness. However, dynamic scheduling of parallel computing resources, combined with the non-associativity of floating point addition, makes attaining reproducibility a challenge even for simple operations like summing a vector of numbers, or more complicated operations like the Basic Linear Algebra Subprograms (BLAS). We describe an algorithm that computes a reproducible sum of floating point numbers, independent of the order of summation.
The algorithm depends only on a subset of the IEEE Floating Point Standard 754-2008, uses just 6 words to represent a “reproducible accumulator,” and requires just one read-only pass over the data, or one reduction in parallel. New instructions based on this work are being considered for inclusion in the future IEEE 754-2018 floating-point standard, and new reproducible BLAS are being considered for the next version of the BLAS standard.
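The reproducibility problem and its resolution can be demonstrated directly. The sketch below restores order-independence by accumulating in exact rational arithmetic, exploiting the fact that every IEEE double is exactly a rational m * 2^e; this is only a correctness demonstration, as the project's algorithm achieves the same bitwise-identical result far more cheaply with a fixed-size six-word accumulator.

```python
from fractions import Fraction
import random

def reproducible_sum(xs):
    """Order-independent sum: accumulate exactly as rationals, then
    round once at the end. Any permutation of xs gives identical bits."""
    total = sum(map(Fraction, xs), Fraction(0))
    return float(total)

# Data mixing huge and tiny magnitudes, where float addition order matters.
random.seed(42)
data = [random.uniform(-1e16, 1e16) for _ in range(1000)] + [1e-8] * 1000
shuffled = data[:]
random.shuffle(shuffled)

naive_a, naive_b = sum(data), sum(shuffled)
repro_a, repro_b = reproducible_sum(data), reproducible_sum(shuffled)

print(naive_a == naive_b)   # typically False: float addition is not associative
print(repro_a == repro_b)   # True: identical bits for any ordering
```

Dynamic scheduling changes the summation order between runs, which is exactly why the naive sum is irreproducible while the exact (or fixed-accumulator) sum is not.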

  14. Fpga based L-band pulse doppler radar design and implementation

    NASA Astrophysics Data System (ADS)

    Savci, Kubilay

    As its name implies RADAR (Radio Detection and Ranging) is an electromagnetic sensor used for detection and locating targets from their return signals. Radar systems propagate electromagnetic energy, from the antenna which is in part intercepted by an object. Objects reradiate a portion of energy which is captured by the radar receiver. The received signal is then processed for information extraction. Radar systems are widely used for surveillance, air security, navigation, weather hazard detection, as well as remote sensing applications. In this work, an FPGA based L-band Pulse Doppler radar prototype, which is used for target detection, localization and velocity calculation has been built and a general-purpose Pulse Doppler radar processor has been developed. This radar is a ground based stationary monopulse radar, which transmits a short pulse with a certain pulse repetition frequency (PRF). Return signals from the target are processed and information about their location and velocity is extracted. Discrete components are used for the transmitter and receiver chain. The hardware solution is based on Xilinx Virtex-6 ML605 FPGA board, responsible for the control of the radar system and the digital signal processing of the received signal, which involves Constant False Alarm Rate (CFAR) detection and Pulse Doppler processing. The algorithm is implemented in MATLAB/SIMULINK using the Xilinx System Generator for DSP tool. The field programmable gate arrays (FPGA) implementation of the radar system provides the flexibility of changing parameters such as the PRF and pulse length therefore it can be used with different radar configurations as well. A VHDL design has been developed for 1Gbit Ethernet connection to transfer digitized return signal and detection results to PC. An A-Scope software has been developed with C# programming language to display time domain radar signals and detection results on PC. Data are processed both in FPGA chip and on PC. 
The FPGA uses fixed point arithmetic operations, as they are fast and reduce resource requirements, consuming less hardware than floating point arithmetic operations. The software uses floating point arithmetic operations, which ensure precision in processing at the expense of speed. The functionality of the radar system has been tested for experimental validation in the field with a moving car, and the submodules are validated with synthetic data simulated in MATLAB.

  15. A CPU benchmark for protein crystallographic refinement.

    PubMed

    Bourne, P E; Hendrickson, W A

    1990-01-01

    The CPU time required to complete a cycle of restrained least-squares refinement of a protein structure from X-ray crystallographic data using the FORTRAN codes PROTIN and PROLSQ is reported for 48 different processors, ranging from single-user workstations to supercomputers. Sequential, vector, VLIW, multiprocessor, and RISC hardware architectures are compared using both a small and a large protein structure. Representative compile times for each hardware type are also given, and the improvement in run-time when coding for a specific hardware architecture is considered. The benchmarks involve scalar integer and vector floating point arithmetic and are representative of the calculations performed in many scientific disciplines.

  16. A decimal carry-free adder

    NASA Astrophysics Data System (ADS)

    Nikmehr, Hooman; Phillips, Braden; Lim, Cheng-Chew

    2005-02-01

    Recently, decimal arithmetic has become attractive in the financial and commercial world including banking, tax calculation, currency conversion, insurance and accounting. Although computers are still carrying out decimal calculation using software libraries and binary floating-point numbers, it is likely that in the near future, all processors will be equipped with units performing decimal operations directly on decimal operands. One critical building block for some complex decimal operations is the decimal carry-free adder. This paper discusses the mathematical framework of the addition, introduces a new signed-digit format for representing decimal numbers and presents an efficient architectural implementation. Delay estimation analysis shows that the adder offers improved performance over earlier designs.
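The carry-free property comes from a redundant signed-digit representation: each position emits a transfer that depends only on its own operand digits, so no carry chain can ripple. The sketch below uses the digit set {-6, ..., 6} and a simple two-step rule as one workable choice; the paper's signed-digit format and hardware architecture differ.

```python
# Two-step carry-free addition on signed decimal digits in {-6,...,6},
# lists ordered least-significant digit first.

def sd_add(x, y):
    n = max(len(x), len(y))
    x = x + [0] * (n - len(x))
    y = y + [0] * (n - len(y))
    transfer = [0] * (n + 1)
    interim = [0] * (n + 1)
    for i in range(n):              # step 1: every position in parallel
        w = x[i] + y[i]             # w lies in [-12, 12]
        if w >= 5:
            t, u = 1, w - 10
        elif w <= -5:
            t, u = -1, w + 10
        else:
            t, u = 0, w
        transfer[i + 1] = t         # sent to the next position only
        interim[i] = u              # |u| <= 5
    # step 2: absorbing the incoming transfer keeps |digit| <= 6,
    # so this addition can never generate a new carry
    return [interim[i] + transfer[i] for i in range(n + 1)]

def to_int(digits):
    """Value of an LSB-first signed-digit number."""
    return sum(d * 10 ** i for i, d in enumerate(digits))

a = [-3, 2, 3]    # -3 + 2*10 + 3*100 = 317
b = [-2, -3, 5]   # -2 - 3*10 + 5*100 = 468
print(to_int(sd_add(a, b)))   # 785
```

Because both steps are position-local, the addition time is independent of operand length, which is what makes such adders attractive building blocks for complex decimal operations.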

  17. A new version of the CADNA library for estimating round-off error propagation in Fortran programs

    NASA Astrophysics Data System (ADS)

    Jézéquel, Fabienne; Chesneaux, Jean-Marie; Lamotte, Jean-Luc

    2010-11-01

    The CADNA library enables one to estimate, using a probabilistic approach, round-off error propagation in any simulation program. CADNA provides new numerical types, the so-called stochastic types, on which round-off errors can be estimated. Furthermore, CADNA contains the definition of arithmetic and relational operators which are overloaded for stochastic variables and the definition of mathematical functions which can be used with stochastic arguments. On 64-bit processors, depending on the rounding mode chosen, the mathematical library associated with the GNU Fortran compiler may provide incorrect results or generate severe bugs. Therefore the CADNA library has been improved to enable the numerical validation of programs on 64-bit processors.

    New version program summary
    Program title: CADNA
    Catalogue identifier: AEAT_v1_1
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAT_v1_1.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 28 488
    No. of bytes in distributed program, including test data, etc.: 463 778
    Distribution format: tar.gz
    Programming language: Fortran
    NOTE: A C++ version of this program is available in the Library as AEGQ_v1_0
    Computer: PC running LINUX with an i686 or an ia64 processor, UNIX workstations including SUN, IBM
    Operating system: LINUX, UNIX
    Classification: 6.5
    Catalogue identifier of previous version: AEAT_v1_0
    Journal reference of previous version: Comput. Phys. Commun. 178 (2008) 933
    Does the new version supersede the previous version?: Yes
    Nature of problem: A simulation program which uses floating-point arithmetic generates round-off errors, due to the rounding performed at each assignment and at each arithmetic operation. Round-off error propagation may invalidate the result of a program.
The CADNA library enables one to estimate round-off error propagation in any simulation program and to detect all numerical instabilities that may occur at run time. Solution method: The CADNA library [1-3] implements Discrete Stochastic Arithmetic [4,5] which is based on a probabilistic model of round-off errors. The program is run several times with a random rounding mode generating different results each time. From this set of results, CADNA estimates the number of exact significant digits in the result that would have been computed with standard floating-point arithmetic. Reasons for new version: On 64-bit processors, the mathematical library associated with the GNU Fortran compiler may provide incorrect results or generate severe bugs with rounding towards -∞ and +∞, which the random rounding mode is based on. Therefore a particular definition of mathematical functions for stochastic arguments has been included in the CADNA library to enable its use with the GNU Fortran compiler on 64-bit processors. Summary of revisions: If CADNA is used on a 64-bit processor with the GNU Fortran compiler, mathematical functions are computed with rounding to the nearest, otherwise they are computed with the random rounding mode. It must be pointed out that the knowledge of the accuracy of the stochastic argument of a mathematical function is never lost. Restrictions: CADNA requires a Fortran 90 (or newer) compiler. In the program to be linked with the CADNA library, round-off errors on complex variables cannot be estimated. Furthermore array functions such as product or sum must not be used. Only the arithmetic operators and the abs, min, max and sqrt functions can be used for arrays. Additional comments: In the library archive, users are advised to read the INSTALL file first. The doc directory contains a user guide named ug.cadna.pdf which shows how to control the numerical accuracy of a program using CADNA, provides installation instructions and describes test runs. 
The source code, which is located in the src directory, consists of one assembly language file (cadna_rounding.s) and eighteen Fortran language files. cadna_rounding.s is a symbolic link to the assembly file corresponding to the processor and the Fortran compiler used. This assembly file contains routines which are frequently called in the CADNA Fortran files to change the rounding mode. The Fortran language files contain the definition of the stochastic types on which the control of accuracy can be performed, CADNA specific functions (for instance to enable or disable the detection of numerical instabilities), the definition of arithmetic and relational operators which are overloaded for stochastic variables and the definition of mathematical functions which can be used with stochastic arguments. The examples directory contains seven test runs which illustrate the use of the CADNA library and the benefits of Discrete Stochastic Arithmetic. Running time: The version of a code which uses CADNA runs at least three times slower than its floating-point version. This cost depends on the computer architecture and can be higher if the detection of numerical instabilities is enabled. In this case, the cost may be related to the number of instabilities detected.
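The random-rounding idea behind Discrete Stochastic Arithmetic can be sketched in a few lines. The following Python fragment is an illustrative toy, not the CADNA implementation: the names `random_round` and `unstable_sum` are hypothetical, and the shared-digits estimate uses the spread of the runs as a loose stand-in for CADNA's statistical estimator. Each operation is perturbed by one ulp in a random direction, the computation is run several times, and the runs are compared:

```python
import math
import random

random.seed(0)

def random_round(x):
    # Simulate a random rounding mode: move x up or down by one unit
    # in the last place with equal probability.
    return math.nextafter(x, math.inf if random.random() < 0.5 else -math.inf)

def unstable_sum(samples=3):
    # Evaluate the same computation several times under random rounding.
    results = []
    for _ in range(samples):
        s = 0.0
        for k in range(1, 10001):
            s = random_round(s + random_round(1.0 / k))
        results.append(s)
    return results

results = unstable_sum()
mean = sum(results) / len(results)
spread = max(results) - min(results)
# Digits on which the runs agree ~ number of exact significant digits.
digits = math.log10(abs(mean) / spread) if spread else 15
print(f"mean={mean:.15g}, agreed digits = {digits:.1f}")
```

Here the harmonic sum is well conditioned, so the runs agree to roughly 13-14 digits; an unstable computation would show far fewer.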

  18. 20-GFLOPS QR processor on a Xilinx Virtex-E FPGA

    NASA Astrophysics Data System (ADS)

    Walke, Richard L.; Smith, Robert W. M.; Lightbody, Gaye

    2000-11-01

Adaptive beamforming can play an important role in sensor array systems in countering directional interference. In high-sample-rate systems, such as radar and communications, the calculation of adaptive weights is a computationally demanding task that requires highly parallel solutions. For systems where low power consumption and volume are important, the only viable implementation is an Application Specific Integrated Circuit (ASIC). However, the rapid advancement of Field Programmable Gate Array (FPGA) technology is enabling highly credible re-programmable solutions. In this paper we present the implementation of a scalable linear array processor for weight calculation using QR decomposition. We employ floating-point arithmetic with a mantissa size optimized to the target application to minimize component size, and implement the operators as relationally placed macros (RPMs) on Xilinx Virtex FPGAs to achieve a predictable dense layout and high-speed operation. We present results showing that 20 GFLOPS of sustained computation on a single XCV3200E-8 Virtex-E FPGA is possible. We also describe the parameterized implementation of the floating-point operators and QR-processor, and the design methodology that enables us to rapidly generate complex FPGA implementations using the industry-standard hardware description language VHDL.
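The weight-calculation step can be sketched as a least-squares problem solved via QR decomposition. The NumPy fragment below is an illustrative toy: the array geometry, signal model, and all numbers are assumptions, and it does not reproduce the paper's systolic array or reduced-mantissa arithmetic. It shows why QR is the natural kernel: the decomposition turns the normal equations into a triangular solve.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scenario: 8-sensor uniform linear array, desired signal at broadside
# plus a strong directional interferer, posed as least squares X w ~ d.
n_sensors, n_snapshots = 8, 200
steer = lambda theta: np.exp(1j * np.pi * np.arange(n_sensors) * np.sin(theta))
desired = rng.standard_normal(n_snapshots)
interf = rng.standard_normal(n_snapshots)
X = (np.outer(desired, steer(0.0)) + 5.0 * np.outer(interf, steer(0.6))
     + 0.01 * (rng.standard_normal((n_snapshots, n_sensors))
               + 1j * rng.standard_normal((n_snapshots, n_sensors))))

# QR decomposition reduces the least-squares problem to a back-substitution,
# which maps naturally onto a systolic linear array of cells.
Q, R = np.linalg.qr(X)
w = np.linalg.solve(R, Q.conj().T @ desired)

output = X @ w
print("residual interference power:", np.var(output - desired))
```

With eight degrees of freedom and only two directional sources, the residual drops to the noise floor.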

  19. Implementation of the Sun Position Calculation in the PDC-1 Control Microprocessor

    NASA Technical Reports Server (NTRS)

    Stallkamp, J. A.

    1984-01-01

The several computational approaches to providing the local azimuth and elevation angles of the Sun as a function of local time, and the utilization of the most appropriate method in the PDC-1 microprocessor, are presented. The full algorithm, in its FORTRAN form, is felt to be useful on any kind or size of computer. It was used in the PDC-1 unit to generate efficient code for the microprocessor with its floating point arithmetic chip. The balance of the presentation consists of a brief discussion of the tracking requirements for PDC-1, the planetary motion equations from the first to the final version, and the local azimuth-elevation geometry.
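The local azimuth-elevation geometry mentioned above reduces to standard spherical-astronomy relations. A minimal Python sketch follows; the function name and example numbers are hypothetical, and the solar declination is taken as a given input rather than computed from the planetary motion equations the report describes.

```python
import math

def sun_elevation_azimuth(lat_deg, decl_deg, hour_angle_deg):
    # Standard spherical-astronomy relations; all inputs in degrees.
    lat, dec, ha = map(math.radians, (lat_deg, decl_deg, hour_angle_deg))
    sin_el = (math.sin(lat) * math.sin(dec)
              + math.cos(lat) * math.cos(dec) * math.cos(ha))
    el = math.asin(sin_el)
    # Azimuth measured clockwise from north.
    az = math.atan2(-math.cos(dec) * math.sin(ha),
                    math.sin(dec) * math.cos(lat)
                    - math.cos(dec) * math.sin(lat) * math.cos(ha))
    return math.degrees(el), math.degrees(az) % 360.0

# At local solar noon (hour angle 0) the Sun sits due south of a
# northern-hemisphere site, at elevation 90 - |lat - declination| degrees.
el, az = sun_elevation_azimuth(35.0, 23.44, 0.0)
print(f"elevation={el:.2f}, azimuth={az:.2f}")
```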

  20. AN ADA LINEAR ALGEBRA PACKAGE MODELED AFTER HAL/S

    NASA Technical Reports Server (NTRS)

    Klumpp, A. R.

    1994-01-01

This package extends the Ada programming language to include linear algebra capabilities similar to those of the HAL/S programming language. The package is designed for avionics applications such as Space Station flight software. In addition to the HAL/S built-in functions, the package incorporates the quaternion functions used in the Shuttle and Galileo projects, and routines from LINPACK that solve systems of equations involving general square matrices. Language conventions in this package follow those of HAL/S to the maximum extent practical, to minimize the effort required for writing new avionics software and translating existing software into Ada. Valid numeric types in this package include scalar, vector, matrix, and quaternion declarations. (Quaternions are four-component vectors used in representing motion between two coordinate frames.) Single precision and double precision floating point arithmetic is available in addition to the standard double precision integer manipulation. Infix operators are used instead of function calls to define dot products, cross products, quaternion products, and mixed scalar-vector, scalar-matrix, and vector-matrix products. The package contains two generic programs: one for floating point, and one for integer. The actual component type is passed as a formal parameter to the generic linear algebra package. The procedures for solving systems of linear equations defined by general matrices include GEFA, GECO, GESL, and GIDI. The HAL/S functions include ABVAL, UNIT, TRACE, DET, INVERSE, TRANSPOSE, GET, PUT, FETCH, PLACE, and IDENTITY. This package is written in Ada (Version 1.2) for batch execution and is machine independent. The linear algebra software depends on nothing outside the Ada language except for a call to a square root function for floating point scalars (such as SQRT in the DEC VAX MATHLIB library). This program was developed in 1989, and is a copyrighted work with all copyright vested in NASA.

  1. Chosen interval methods for solving linear interval systems with special type of matrix

    NASA Astrophysics Data System (ADS)

    Szyszka, Barbara

    2013-10-01

The paper is devoted to chosen direct interval methods for solving linear interval systems with a special type of matrix: a band matrix with a parameter, obtained from a finite difference problem. Such linear systems occur while solving the one-dimensional wave equation (a partial differential equation of hyperbolic type) by the central difference interval method of the second order. Interval methods are constructed so that the errors of the method are enclosed in the obtained results; therefore the presented linear interval systems contain elements that determine the errors of the difference method. The chosen direct algorithms have been applied for solving the linear systems because they have no errors of method. All calculations were performed in floating-point interval arithmetic.
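The outward rounding that makes interval results guaranteed enclosures can be sketched with a tiny interval type. The Python fragment below is an illustrative toy, not the authors' method: it widens every result by one ulp per side and solves a 2x2 point-coefficient system by Cramer's rule, rather than a parameterized band system from the wave equation.

```python
import math

class Interval:
    """Closed interval [lo, hi] with outward rounding after each operation."""
    def __init__(self, lo, hi=None):
        self.lo, self.hi = lo, lo if hi is None else hi

    def _out(lo, hi):
        # Widen by one ulp on each side so the true result is enclosed.
        return Interval(math.nextafter(lo, -math.inf), math.nextafter(hi, math.inf))

    def __add__(self, o):
        return Interval._out(self.lo + o.lo, self.hi + o.hi)

    def __sub__(self, o):
        return Interval._out(self.lo - o.hi, self.hi - o.lo)

    def __mul__(self, o):
        p = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        return Interval._out(min(p), max(p))

    def __truediv__(self, o):
        assert not (o.lo <= 0.0 <= o.hi), "division by interval containing 0"
        p = [self.lo / o.lo, self.lo / o.hi, self.hi / o.lo, self.hi / o.hi]
        return Interval._out(min(p), max(p))

# 2x2 system 2x + y = 5, x + 3y = 10 solved by Cramer's rule: the computed
# intervals are guaranteed to enclose the exact solution x = 1, y = 3.
a, b, c, d = Interval(2.0), Interval(1.0), Interval(1.0), Interval(3.0)
r1, r2 = Interval(5.0), Interval(10.0)
det = a * d - b * c
x = (r1 * d - b * r2) / det
y = (a * r2 - r1 * c) / det
print(f"x in [{x.lo}, {x.hi}], y in [{y.lo}, {y.hi}]")
```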

  2. Compensation for the signal processing characteristics of ultrasound B-mode scanners in adaptive speckle reduction.

    PubMed

    Crawford, D C; Bell, D S; Bamber, J C

    1993-01-01

    A systematic method to compensate for nonlinear amplification of individual ultrasound B-scanners has been investigated in order to optimise performance of an adaptive speckle reduction (ASR) filter for a wide range of clinical ultrasonic imaging equipment. Three potential methods have been investigated: (1) a method involving an appropriate selection of the speckle recognition feature was successful when the scanner signal processing executes simple logarithmic compressions; (2) an inverse transform (decompression) of the B-mode image was effective in correcting for the measured characteristics of image data compression when the algorithm was implemented in full floating point arithmetic; (3) characterising the behaviour of the statistical speckle recognition feature under conditions of speckle noise was found to be the method of choice for implementation of the adaptive speckle reduction algorithm in limited precision integer arithmetic. In this example, the statistical features of variance and mean were investigated. The third method may be implemented on commercially available fast image processing hardware and is also better suited for transfer into dedicated hardware to facilitate real-time adaptive speckle reduction. A systematic method is described for obtaining ASR calibration data from B-mode images of a speckle producing phantom.

  3. Failure detection in high-performance clusters and computers using chaotic map computations

    DOEpatents

    Rao, Nageswara S.

    2015-09-01

A programmable media includes a processing unit capable of independent operation in a machine that is capable of executing 10^18 floating point operations per second. The processing unit is in communication with a memory element and an interconnect that couples computing nodes. The programmable media includes a logical unit configured to execute arithmetic functions, comparative functions, and/or logical functions. The processing unit is configured to detect computing component failures, memory element failures and/or interconnect failures by executing programming threads that generate one or more chaotic map trajectories. The central processing unit or graphical processing unit is configured to detect a computing component failure, memory element failure and/or an interconnect failure through an automated comparison of signal trajectories generated by the chaotic maps.
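The detection principle can be illustrated with the logistic map, a standard chaotic map: on healthy, identical components the trajectories agree bit for bit, while a single-bit fault grows exponentially until it is trivially detectable. A minimal Python sketch follows; the specific map, parameters, and threshold are illustrative choices, not taken from the patent.

```python
def logistic_trajectory(x0, steps=100, r=3.9999):
    # Chaotic logistic map: trajectories from identical inputs match
    # bit-for-bit on healthy hardware; any one-bit fault diverges fast.
    xs = []
    x = x0
    for _ in range(steps):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs

healthy_a = logistic_trajectory(0.3)
healthy_b = logistic_trajectory(0.3)
faulty = logistic_trajectory(0.3 + 2**-52)   # injected one-bit fault

assert healthy_a == healthy_b                # identical components agree exactly
diverged = next(i for i, (u, v) in enumerate(zip(healthy_a, faulty))
                if abs(u - v) > 1e-3)
print("fault detectable after", diverged, "iterations")
```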

  4. Digital control system for space structure dampers

    NASA Technical Reports Server (NTRS)

    Haviland, J. K.

    1985-01-01

A digital controller was developed using an SDK-51 System Design Kit, which incorporates an 8031 microcontroller. The necessary interfaces were installed in the wire-wrap area of the SDK-51, and a pulse width modulator was developed to drive the coil of the actuator. Also, control equations were developed using floating-point arithmetic. The design of the digital control system is emphasized, and it is shown that, provided certain rules are followed, an adequate design can be achieved. It is recommended that the so-called w-plane design method be used, and that the time elapsed before output of the updated coil-force signal be kept as small as possible. However, the cycle time for the controller should be watched carefully, because very small values for this time can lead to digital noise.

  5. JANUS: a bit-wise reversible integrator for N-body dynamics

    NASA Astrophysics Data System (ADS)

    Rein, Hanno; Tamayo, Daniel

    2018-01-01

Hamiltonian systems such as the gravitational N-body problem have time-reversal symmetry. However, all numerical N-body integration schemes, including symplectic ones, respect this property only approximately. In this paper, we present the new N-body integrator JANUS, for which we achieve exact time-reversal symmetry by combining integer and floating point arithmetic. JANUS is explicit, formally symplectic and satisfies Liouville's theorem exactly. Its order is even and can be adjusted between two and ten. We discuss the implementation of JANUS and present tests of its accuracy and speed by performing and analysing long-term integrations of the Solar system. We show that JANUS is fast and accurate enough to tackle a broad class of dynamical problems. We also discuss the practical and philosophical implications of running exactly time-reversible simulations.
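The key idea, advancing the state with exact integer operations so that the update map can be undone bit for bit, can be sketched with a fixed-point leapfrog integrator. This toy integrates a harmonic oscillator, not an N-body system, and is not the JANUS scheme itself; the scale factor and step constants are arbitrary illustrative choices. It recovers its initial state exactly after time reversal:

```python
SCALE = 2**32  # fixed-point scale: one "integer unit" is 2**-32

def accel(x_int):
    # Force evaluation may use floating point internally; only its
    # *rounded* integer result enters the state, so each update is exact.
    x = x_int / SCALE
    return round(-x * SCALE)  # harmonic oscillator, a = -x

def leapfrog(x, v, steps, h_num=1, h_den=100):
    # Integer kick-drift-kick with step h = h_num / h_den.
    for _ in range(steps):
        v += accel(x) * h_num // (2 * h_den)
        x += v * h_num // h_den
        v += accel(x) * h_num // (2 * h_den)
    return x, v

def leapfrog_back(x, v, steps, h_num=1, h_den=100):
    # Undo the updates in reverse order with exact integer subtraction.
    for _ in range(steps):
        v -= accel(x) * h_num // (2 * h_den)
        x -= v * h_num // h_den
        v -= accel(x) * h_num // (2 * h_den)
    return x, v

x0, v0 = 1 * SCALE, 0
x1, v1 = leapfrog(x0, v0, 1000)
xb, vb = leapfrog_back(x1, v1, 1000)
print("recovered initial state exactly:", (xb, vb) == (x0, v0))
```

Because each substep is an exact integer operation, the backward pass subtracts exactly what the forward pass added, so the round trip is bit-wise identical, which no plain floating-point leapfrog can guarantee.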

  6. Data reduction programs for a laser radar system

    NASA Technical Reports Server (NTRS)

    Badavi, F. F.; Copeland, G. E.

    1984-01-01

The listing and description of software routines which were used to analyze the analog data obtained from the LIDAR system are given. All routines are written in FORTRAN IV on an HP-1000/F minicomputer which serves as the heart of the data acquisition system for the LIDAR program. This particular system has 128 kilobytes of high-speed memory and is equipped with a Vector Instruction Set (VIS) firmware package, which is used in all the routines to handle quick execution of different long loops. The system handles floating point arithmetic in hardware in order to enhance the speed of execution. This computer is a 2177 C/F series version of the HP-1000 RTE-IVB data acquisition computer system, which is designed for real-time data capture/analysis and a disk/tape mass storage environment.

  7. Reproducibility of neuroimaging analyses across operating systems

    PubMed Central

    Glatard, Tristan; Lewis, Lindsay B.; Ferreira da Silva, Rafael; Adalat, Reza; Beck, Natacha; Lepage, Claude; Rioux, Pierre; Rousseau, Marc-Etienne; Sherif, Tarek; Deelman, Ewa; Khalili-Mahani, Najmeh; Evans, Alan C.

    2015-01-01

    Neuroimaging pipelines are known to generate different results depending on the computing platform where they are compiled and executed. We quantify these differences for brain tissue classification, fMRI analysis, and cortical thickness (CT) extraction, using three of the main neuroimaging packages (FSL, Freesurfer and CIVET) and different versions of GNU/Linux. We also identify some causes of these differences using library and system call interception. We find that these packages use mathematical functions based on single-precision floating-point arithmetic whose implementations in operating systems continue to evolve. While these differences have little or no impact on simple analysis pipelines such as brain extraction and cortical tissue classification, their accumulation creates important differences in longer pipelines such as subcortical tissue classification, fMRI analysis, and cortical thickness extraction. With FSL, most Dice coefficients between subcortical classifications obtained on different operating systems remain above 0.9, but values as low as 0.59 are observed. Independent component analyses (ICA) of fMRI data differ between operating systems in one third of the tested subjects, due to differences in motion correction. With Freesurfer and CIVET, in some brain regions we find an effect of build or operating system on cortical thickness. A first step to correct these reproducibility issues would be to use more precise representations of floating-point numbers in the critical sections of the pipelines. The numerical stability of pipelines should also be reviewed. PMID:25964757

  8. Reproducibility of neuroimaging analyses across operating systems.

    PubMed

    Glatard, Tristan; Lewis, Lindsay B; Ferreira da Silva, Rafael; Adalat, Reza; Beck, Natacha; Lepage, Claude; Rioux, Pierre; Rousseau, Marc-Etienne; Sherif, Tarek; Deelman, Ewa; Khalili-Mahani, Najmeh; Evans, Alan C

    2015-01-01

    Neuroimaging pipelines are known to generate different results depending on the computing platform where they are compiled and executed. We quantify these differences for brain tissue classification, fMRI analysis, and cortical thickness (CT) extraction, using three of the main neuroimaging packages (FSL, Freesurfer and CIVET) and different versions of GNU/Linux. We also identify some causes of these differences using library and system call interception. We find that these packages use mathematical functions based on single-precision floating-point arithmetic whose implementations in operating systems continue to evolve. While these differences have little or no impact on simple analysis pipelines such as brain extraction and cortical tissue classification, their accumulation creates important differences in longer pipelines such as subcortical tissue classification, fMRI analysis, and cortical thickness extraction. With FSL, most Dice coefficients between subcortical classifications obtained on different operating systems remain above 0.9, but values as low as 0.59 are observed. Independent component analyses (ICA) of fMRI data differ between operating systems in one third of the tested subjects, due to differences in motion correction. With Freesurfer and CIVET, in some brain regions we find an effect of build or operating system on cortical thickness. A first step to correct these reproducibility issues would be to use more precise representations of floating-point numbers in the critical sections of the pipelines. The numerical stability of pipelines should also be reviewed.

  9. CADNA_C: A version of CADNA for use with C or C++ programs

    NASA Astrophysics Data System (ADS)

    Lamotte, Jean-Luc; Chesneaux, Jean-Marie; Jézéquel, Fabienne

    2010-11-01

The CADNA library enables one to estimate round-off error propagation using a probabilistic approach. The CADNA_C version enables this estimation in C or C++ programs, while the previous version had been developed for Fortran programs. The CADNA_C version has the same features as the previous one: with CADNA the numerical quality of any simulation program can be controlled. Furthermore, by detecting all the instabilities which may occur at run time, a numerical debugging of the user code can be performed. CADNA provides new numerical types on which round-off errors can be estimated. Slight modifications are required to control a code with CADNA, mainly changes in variable declarations, input and output. New version program summary Program title: CADNA_C Catalogue identifier: AEGQ_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGQ_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 60 075 No. of bytes in distributed program, including test data, etc.: 710 781 Distribution format: tar.gz Programming language: C++ Computer: PC running LINUX with an i686 or an ia64 processor, UNIX workstations including SUN, IBM Operating system: LINUX, UNIX Classification: 6.5 Catalogue identifier of previous version: AEAT_v1_0 Journal reference of previous version: Comput. Phys. Comm. 178 (2008) 933 Does the new version supersede the previous version?: No Nature of problem: A simulation program which uses floating-point arithmetic generates round-off errors, due to the rounding performed at each assignment and at each arithmetic operation. Round-off error propagation may invalidate the result of a program. The CADNA library enables one to estimate round-off error propagation in any simulation program and to detect all numerical instabilities that may occur at run time. 
Solution method: The CADNA library [1-3] implements Discrete Stochastic Arithmetic [4,5] which is based on a probabilistic model of round-off errors. The program is run several times with a random rounding mode generating different results each time. From this set of results, CADNA estimates the number of exact significant digits in the result that would have been computed with standard floating-point arithmetic. Reasons for new version: The previous version (AEAT_v1_0) enables the estimation of round-off error propagation in Fortran programs [2]. The new version has been developed to enable this estimation in C or C++ programs. Summary of revisions: The CADNA_C source code consists of one assembly language file (cadna_rounding.s) and twenty-three C++ language files (including three header files). cadna_rounding.s is a symbolic link to the assembly file corresponding to the processor and the C++ compiler used. This assembly file contains routines which are frequently called in the CADNA_C C++ files to change the rounding mode. The C++ language files contain the definition of the stochastic types on which the control of accuracy can be performed, CADNA_C specific functions (for instance to enable or disable the detection of numerical instabilities), the definition of arithmetic and relational operators which are overloaded for stochastic variables and the definition of mathematical functions which can be used with stochastic arguments. As a remark, on 64-bit processors, the mathematical library associated with the GNU C++ compiler may provide incorrect results or generate severe bugs with rounding towards -∞ and +∞, which the random rounding mode is based on. Therefore, if CADNA_C is used on a 64-bit processor with the GNU C++ compiler, mathematical functions are computed with rounding to the nearest, otherwise they are computed with the random rounding mode. 
It must be pointed out that the knowledge of the accuracy of the argument of a mathematical function is never lost. Additional comments: In the library archive, users are advised to read the INSTALL file first. The doc directory contains a user guide named ug.cadna.pdf and a reference guide named ref_cadna.pdf. The user guide shows how to control the numerical accuracy of a program using CADNA, provides installation instructions and describes test runs. The reference guide briefly describes each function of the library. The source code (which consists of C++ and assembly files) is located in the src directory. The examples directory contains seven test runs which illustrate the use of the CADNA library and the benefits of Discrete Stochastic Arithmetic. Running time: The version of a code which uses CADNA runs at least three times slower than its floating-point version. This cost depends on the computer architecture and can be higher if the detection of numerical instabilities is enabled. In this case, the cost may be related to the number of instabilities detected.

  10. A comparison of companion matrix methods to find roots of a trigonometric polynomial

    NASA Astrophysics Data System (ADS)

    Boyd, John P.

    2013-08-01

A trigonometric polynomial is a truncated Fourier series of the form f_N(t) ≡ Σ_{j=0}^{N} a_j cos(jt) + Σ_{j=1}^{N} b_j sin(jt). It has been previously shown by the author that zeros of such a polynomial can be computed as the eigenvalues of a companion matrix with elements which are complex-valued combinations of the Fourier coefficients, the "CCM" method. However, previous work provided no examples, so one goal of this new work is to experimentally test the CCM method. A second goal is to introduce a new alternative, the elimination/Chebyshev algorithm, and experimentally compare it with the CCM scheme. The elimination/Chebyshev matrix (ECM) algorithm yields a companion matrix with real-valued elements, albeit at the price of usefulness only for real roots. The new elimination scheme first converts the trigonometric rootfinding problem to a pair of polynomial equations in the variables (c,s) where c ≡ cos(t) and s ≡ sin(t). The elimination method next reduces the system to a single univariate polynomial P(c). We show that this same polynomial is the resultant of the system and is also a generator of the Groebner basis with lexicographic ordering for the system. Both methods give very high numerical accuracy for real-valued roots, typically at least 11 decimal places in Matlab/IEEE 754 16-digit floating point arithmetic. The CCM algorithm is typically one or two decimal places more accurate, though these differences disappear if the roots are "Newton-polished" by a single Newton's iteration. The complex-valued matrix is accurate for complex-valued roots, too, though accuracy decreases with the magnitude of the imaginary part of the root. The cost of both methods scales as O(N^3) floating point operations. 
In spite of intimate connections of the elimination/Chebyshev scheme to two well-established technologies for solving systems of equations, resultants and Groebner bases, and the advantages of using only real-valued arithmetic to obtain a companion matrix with real-valued elements, the ECM algorithm is noticeably inferior to the complex-valued companion matrix in simplicity, ease of programming, and accuracy.
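The companion-matrix idea can be sketched via the substitution z = e^{it}, which turns a degree-N trigonometric polynomial into a degree-2N algebraic polynomial whose unit-circle roots give t. The NumPy fragment below is a simplified illustration in the spirit of the CCM method, not the paper's exact matrix; it delegates to `numpy.roots`, which itself computes companion-matrix eigenvalues.

```python
import numpy as np

def trig_roots(a, b):
    """Real roots in (-pi, pi] of f(t) = sum_j a[j] cos(jt) + b[j] sin(jt).

    Substituting z = exp(it) turns f into a degree-2N algebraic polynomial;
    its roots on the unit circle give t = arg(z).  (b[0] is an unused
    placeholder so that a and b share indexing.)
    """
    N = len(a) - 1
    c = np.zeros(2 * N + 1, dtype=complex)   # coefficient of z^k stored at c[k]
    c[N] = a[0]
    for j in range(1, N + 1):
        c[N + j] = (a[j] - 1j * b[j]) / 2.0
        c[N - j] = (a[j] + 1j * b[j]) / 2.0
    z = np.roots(c[::-1])                    # np.roots wants the z^2N term first
    on_circle = z[np.abs(np.abs(z) - 1.0) < 1e-8]
    return np.sort(np.angle(on_circle))

# f(t) = cos(t) - 1/2 has roots t = +/- pi/3.
roots = trig_roots(a=[-0.5, 1.0], b=[0.0, 0.0])
print(roots)
```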

  11. Design of barrier bucket kicker control system

    NASA Astrophysics Data System (ADS)

Ni, Fa-Fu; Wang, Yan-Yu; Yin, Jun; Zhou, De-Tai; Shen, Guo-Dong; Zheng, Yang-De; Zhang, Jian-Chuan; Yin, Jia; Bai, Xiao; Ma, Xiao-Li

    2018-05-01

The Heavy-Ion Research Facility in Lanzhou (HIRFL) contains two synchrotrons: the main cooler storage ring (CSRm) and the experimental cooler storage ring (CSRe). Beams are extracted from CSRm and injected into CSRe. To apply the Barrier Bucket (BB) method to CSRe beam accumulation, a new BB-technology-based kicker control system was designed and implemented. The controller of the system is implemented using an Advanced RISC Machine (ARM) chip and a field-programmable gate array (FPGA) chip. Within this architecture, the ARM is responsible for data presetting and floating-point arithmetic processing. The FPGA computes the RF phase point of the two rings and offers more accurate control of the time delay. An online preliminary experiment on HIRFL was also designed to verify the functionalities of the control system. The result shows that the reference trigger point of two different sinusoidal RF signals for an arbitrary phase point was acquired with a matched phase error below 1° (approximately 2.1 ns), and a step delay time better than 2 ns was realized.

  12. Stabilizing canonical-ensemble calculations in the auxiliary-field Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Gilbreth, C. N.; Alhassid, Y.

    2015-03-01

    Quantum Monte Carlo methods are powerful techniques for studying strongly interacting Fermi systems. However, implementing these methods on computers with finite-precision arithmetic requires careful attention to numerical stability. In the auxiliary-field Monte Carlo (AFMC) method, low-temperature or large-model-space calculations require numerically stabilized matrix multiplication. When adapting methods used in the grand-canonical ensemble to the canonical ensemble of fixed particle number, the numerical stabilization increases the number of required floating-point operations for computing observables by a factor of the size of the single-particle model space, and thus can greatly limit the systems that can be studied. We describe an improved method for stabilizing canonical-ensemble calculations in AFMC that exhibits better scaling, and present numerical tests that demonstrate the accuracy and improved performance of the method.
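The kind of stabilized multiplication involved can be sketched with the standard factored-product technique: keep the running product in the form Q·diag(d)·T and re-orthogonalize after each factor, so that widely separated scales accumulate in d instead of being lost to round-off. The NumPy toy below illustrates that idea only; it is not the authors' improved canonical-ensemble method, and the test matrix is an arbitrary 2x2 example.

```python
import numpy as np

def stabilized_product(matrices):
    # Maintain the running product as Q @ diag(d) @ T, re-orthogonalizing
    # with a QR decomposition after every multiplication.
    n = matrices[0].shape[0]
    Q, d, T = np.eye(n), np.ones(n), np.eye(n)
    for M in matrices:
        Qn, R = np.linalg.qr((M @ Q) * d)   # scale columns by d, then QR
        dn = np.abs(np.diag(R))
        T = (R / dn[:, None]) @ T           # keep T's rows at unit scale
        Q, d = Qn, dn
    return Q, d, T

# Normal test matrix with eigenvalues 10 and 0.1; its 50th power has
# singular values near 1e50 and 1e-50, which a naive product cannot
# carry simultaneously in double precision.
rng = np.random.default_rng(1)
V, _ = np.linalg.qr(rng.standard_normal((2, 2)))
A = V @ np.diag([10.0, 0.1]) @ V.T

Q, d, T = stabilized_product([A] * 50)
print(d)   # magnitudes roughly 1e+50 and 1e-50
```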

  13. ASIC For Complex Fixed-Point Arithmetic

    NASA Technical Reports Server (NTRS)

    Petilli, Stephen G.; Grimm, Michael J.; Olson, Erlend M.

    1995-01-01

Application-specific integrated circuit (ASIC) performs 24-bit, fixed-point arithmetic operations on arrays of complex-valued input data. High-performance, wide-band arithmetic logic unit (ALU) designed for use in computing fast Fourier transforms (FFTs) and for performing digital filtering functions. Other applications include general computations involved in analysis of spectra and digital signal processing.
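A 24-bit fixed-point complex multiply of the sort such an ALU performs can be modeled in a few lines. The Python sketch below is illustrative only; the actual ASIC's number format, rounding, and saturation behaviour are not specified in the abstract, so a Q1.23 format with round-and-saturate is assumed here.

```python
FRAC_BITS = 23           # assumed Q1.23: 24-bit two's complement, 23 fraction bits
SCALE = 1 << FRAC_BITS
LO, HI = -(1 << 23), (1 << 23) - 1

def to_fix(x):
    return max(LO, min(HI, round(x * SCALE)))

def sat(x):
    return max(LO, min(HI, x))

def cmul_fix(ar, ai, br, bi):
    # Complex multiply on fixed-point parts; the raw products are up to
    # 48 bits wide and are rounded back to 24 bits with saturation,
    # as a fixed-point DSP datapath would do.
    rr = sat((ar * br - ai * bi + (SCALE >> 1)) >> FRAC_BITS)
    ri = sat((ar * bi + ai * br + (SCALE >> 1)) >> FRAC_BITS)
    return rr, ri

# (0.5 + 0.25j) * (0.25 - 0.5j) = 0.25 - 0.1875j
rr, ri = cmul_fix(to_fix(0.5), to_fix(0.25), to_fix(0.25), to_fix(-0.5))
print(rr / SCALE, ri / SCALE)   # 0.25 -0.1875
```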

  14. Program Converts VAX Floating-Point Data To UNIX

    NASA Technical Reports Server (NTRS)

    Alves, Marcos; Chapman, Bruce; Chu, Eugene

    1996-01-01

VAX Floating Point to Host Floating Point Conversion (VAXFC) software converts non-ASCII files to the unformatted floating-point representation of a UNIX machine. This is done by reading bytes bit by bit, converting them to floating-point numbers, then writing the results to another file. Useful when data files created by a VAX computer must be used on other machines. Written in C language.
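The core of such a conversion follows from the VAX F_floating layout: two little-endian 16-bit words holding a sign bit, an excess-128 exponent, and a 23-bit fraction with a hidden leading 0.1 bit. The Python sketch below is a hypothetical helper based on the published format description, not the VAXFC source; its handling of the reserved operand (sign set, exponent zero) is simplified.

```python
import struct

def vax_f_to_float(b):
    # VAX F_floating: two little-endian 16-bit words; word 0 holds the sign
    # (bit 15), an 8-bit excess-128 exponent (bits 14-7) and the high 7
    # fraction bits; word 1 holds the low 16 fraction bits.
    w0, w1 = struct.unpack("<HH", b)
    sign = -1.0 if w0 & 0x8000 else 1.0
    exponent = (w0 >> 7) & 0xFF
    fraction = ((w0 & 0x7F) << 16) | w1
    if exponent == 0:
        # exponent 0 with sign clear is true zero; with sign set it is a
        # reserved operand (returned here as NaN as a simplification).
        return 0.0 if sign > 0 else float("nan")
    # Hidden bit convention: value = sign * 0.1f * 2^(exponent - 128).
    return sign * (0.5 + fraction / (1 << 24)) * 2.0 ** (exponent - 128)

# 1.0 in VAX F format is exponent 129, fraction 0 -> bytes 80 40 00 00.
print(vax_f_to_float(b"\x80\x40\x00\x00"))   # 1.0
```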

  15. Orthogonal polynomials for refinable linear functionals

    NASA Astrophysics Data System (ADS)

    Laurie, Dirk; de Villiers, Johan

    2006-12-01

A refinable linear functional is one that can be expressed as a convex combination, defined by a finite number of mask coefficients, of certain stretched and shifted replicas of itself. The notion generalizes an integral weighted by a refinable function. The key to calculating a Gaussian quadrature formula for such a functional is to find the three-term recursion coefficients for the polynomials orthogonal with respect to that functional. We show how to obtain the recursion coefficients by using only the mask coefficients, and without the aid of modified moments. Our result implies the existence of the corresponding refinable functional whenever the mask coefficients are nonnegative, even when the same mask does not define a refinable function. The algorithm requires O(n^2) rational operations and, thus, can in principle deliver exact results. Numerical evidence suggests that it is also effective in floating-point arithmetic.
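Once the three-term recursion coefficients are in hand, the Gaussian quadrature formula follows from the eigen-decomposition of the symmetric tridiagonal Jacobi matrix (the Golub-Welsch procedure). A NumPy sketch of that final step, checked here against the classical Legendre coefficients rather than coefficients derived from a refinement mask:

```python
import numpy as np

def gauss_from_recursion(alpha, beta):
    """Golub-Welsch: Gauss nodes and weights from three-term recursion
    coefficients alpha[0..n-1], beta[0..n-1], with beta[0] the total mass."""
    n = len(alpha)
    J = (np.diag(alpha)
         + np.diag(np.sqrt(beta[1:n]), 1)
         + np.diag(np.sqrt(beta[1:n]), -1))
    nodes, vecs = np.linalg.eigh(J)
    weights = beta[0] * vecs[0, :] ** 2   # squared first components
    return nodes, weights

# Sanity check with the Legendre weight on [-1, 1]:
# alpha_k = 0, beta_0 = 2, beta_k = k^2 / (4k^2 - 1).
n = 2
alpha = np.zeros(n)
beta = np.array([2.0] + [k * k / (4.0 * k * k - 1.0) for k in range(1, n)])
nodes, weights = gauss_from_recursion(alpha, beta)
print(nodes, weights)   # nodes +/- 1/sqrt(3), weights 1, 1
```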

  16. The fault-tree compiler

    NASA Technical Reports Server (NTRS)

    Martensen, Anna L.; Butler, Ricky W.

    1987-01-01

The Fault Tree Compiler Program is a new reliability tool used to predict the top event probability for a fault tree. Five different gate types are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N gates. The high level input language is easy to understand and use when describing the system tree. In addition, the use of the hierarchical fault tree capability can simplify the tree description and decrease program execution time. The current solution technique provides an answer precise (within the limits of double precision floating point arithmetic) to five digits. The user may vary one failure rate or failure probability over a range of values and plot the results for sensitivity analyses. The solution technique is implemented in FORTRAN; the remaining program code is implemented in Pascal. The program is written to run on a Digital Equipment Corporation VAX with the VMS operating system.
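Top-event evaluation for these gate types can be sketched directly in terms of probabilities of independent basic events. The Python fragment below is an illustrative toy, not the FTC solution technique; the gate helpers, the example tree, and the failure probabilities are all hypothetical, and every basic event is assumed independent.

```python
from itertools import combinations
from math import prod

def AND(*p):   return prod(p)
def OR(*p):    return 1.0 - prod(1.0 - q for q in p)
def XOR(a, b): return a * (1.0 - b) + b * (1.0 - a)
def INVERT(p): return 1.0 - p
def M_OF_N(m, p):
    # Probability that at least m of the independent events p occur.
    total = 0.0
    for k in range(m, len(p) + 1):
        for on in combinations(range(len(p)), k):
            total += prod(p[i] if i in on else 1.0 - p[i]
                          for i in range(len(p)))
    return total

# Example tree: top = OR(AND(e1, e2), 2-of-3(e3, e4, e5))
e1, e2, e3, e4, e5 = 1e-3, 2e-3, 1e-2, 1e-2, 1e-2
top = OR(AND(e1, e2), M_OF_N(2, [e3, e4, e5]))
print(f"{top:.6e}")
```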

  17. The Fault Tree Compiler (FTC): Program and mathematics

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Martensen, Anna L.

    1989-01-01

The Fault Tree Compiler Program is a new reliability tool used to predict the top-event probability for a fault tree. Five different gate types are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and m OF n gates. The high-level input language is easy to understand and use when describing the system tree. In addition, the use of the hierarchical fault tree capability can simplify the tree description and decrease program execution time. The current solution technique provides an answer precise (within the limits of double precision floating point arithmetic) to within a user-specified number of digits of accuracy. The user may vary one failure rate or failure probability over a range of values and plot the results for sensitivity analyses. The solution technique is implemented in FORTRAN; the remaining program code is implemented in Pascal. The program is written to run on a Digital Equipment Corporation (DEC) VAX computer with the VMS operating system.

  18. Multi-input and binary reproducible, high bandwidth floating point adder in a collective network

    DOEpatents

    Chen, Dong; Eisley, Noel A.; Heidelberger, Philip; Steinmacher-Burow, Burkhard

    2016-11-15

    To add floating point numbers in a parallel computing system, a collective logic device receives the floating point numbers from computing nodes. The collective logic device converts the floating point numbers to integer numbers. The collective logic device adds the integer numbers and generates a summation of the integer numbers. The collective logic device converts the summation to a floating point number. The collective logic device performs the receiving, the converting of the floating point numbers, the adding, the generating, and the converting of the summation in one pass. One pass indicates that the computing nodes send inputs only once to the collective logic device and receive outputs only once from the collective logic device.
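
    The core idea can be sketched as follows; the 40-bit fixed-point scale is an assumption chosen for the example, not a detail from the patent. Because integer addition is exact and associative, the result is bit-reproducible regardless of summation order:

    ```python
    # Hedged sketch of float -> integer -> sum -> float reduction.
    SCALE = 1 << 40  # assumed fixed-point scale: 40 fractional bits

    def reproducible_sum(xs):
        # Convert each float to an integer, add exactly, convert back once.
        total = sum(int(round(x * SCALE)) for x in xs)  # integer addition is exact
        return total / SCALE

    xs = [0.1, 0.2, 0.3, 1e-9]
    # Any permutation of the inputs yields the identical bit pattern:
    assert reproducible_sum(xs) == reproducible_sum(list(reversed(xs)))
    ```

    Ordinary floating-point summation is not associative, so a tree reduction over many nodes can give run-to-run differences; the integer detour trades a small, fixed quantization error for exact reproducibility.
    
    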

  19. Comparison of the arithmetic and geometric means in estimating crown diameter and crown cross-sectional area

    Treesearch

    KaDonna Randolph

    2010-01-01

    The use of the geometric and arithmetic means for estimating tree crown diameter and crown cross-sectional area was examined for trees with crown width measurements taken at the widest point of the crown and perpendicular to the widest point of the crown. The average difference between the geometric and arithmetic mean crown diameters was less than 0.2 ft in absolute...
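
    The two estimators being compared, for a crown measured along its widest axis (w1) and perpendicular to it (w2); the widths below are example values, not data from the study:

    ```python
    # Arithmetic vs. geometric mean crown diameter, and the resulting
    # cross-sectional area (elliptical crown assumption).
    import math

    w1, w2 = 20.0, 18.0                      # crown widths, ft (example values)
    d_arith = (w1 + w2) / 2                  # arithmetic mean diameter
    d_geom = math.sqrt(w1 * w2)              # geometric mean diameter
    area = math.pi / 4 * d_geom ** 2         # equals the ellipse area pi/4*w1*w2
    ```

    Note that squaring the geometric-mean diameter recovers the product w1*w2 exactly, so the geometric mean reproduces the elliptical crown area directly, which is one reason the two means differ so little in practice.
    
    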

  20. Selection of floating-point or fixed-point for adaptive noise canceller in somatosensory evoked potential measurement.

    PubMed

    Shen, Chongfei; Liu, Hongtao; Xie, Xb; Luk, Keith Dk; Hu, Yong

    2007-01-01

    An adaptive noise canceller (ANC) has been used to improve the signal-to-noise ratio (SNR) of somatosensory evoked potentials (SEP). For efficient hardware application of the ANC, a fixed-point ANC allows a fast, cost-efficient, and low-power FPGA design. However, it is still questionable whether the SNR improvement achieved by the fixed-point algorithm is as good as that of the floating-point algorithm. This study compares the outputs of floating-point and fixed-point ANC applied to SEP signals. The appropriate selection of the step-size parameter (µ) was found to differ between the fixed-point and floating-point algorithms. In this simulation study, the outputs of the fixed-point ANC showed higher distortion from the real SEP signals than those of the floating-point ANC; however, the difference decreased with increasing µ. With an optimal selection of µ, the fixed-point ANC can achieve results as good as the floating-point algorithm.
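
    The abstract does not give the ANC update rule; assuming the standard LMS algorithm (the usual choice for adaptive noise cancellation), a minimal floating-point sketch with the step-size parameter µ looks like this, using synthetic signal and noise rather than SEP data:

    ```python
    # Illustrative LMS adaptive noise canceller (an assumption: the paper does
    # not specify its update rule). `mu` is the step-size parameter.
    import math, random

    def lms_anc(primary, reference, mu, taps=4):
        w = [0.0] * taps                     # adaptive filter weights
        out = []
        for n in range(len(primary)):
            x = [reference[n - k] if n - k >= 0 else 0.0 for k in range(taps)]
            y = sum(wk * xk for wk, xk in zip(w, x))    # noise estimate
            e = primary[n] - y                          # cleaned signal = error
            w = [wk + 2 * mu * e * xk for wk, xk in zip(w, x)]
            out.append(e)
        return out

    random.seed(0)
    noise = [random.gauss(0, 1) for _ in range(2000)]
    signal = [math.sin(0.05 * n) for n in range(2000)]
    primary = [s + 0.8 * v for s, v in zip(signal, noise)]
    cleaned = lms_anc(primary, noise, mu=0.01)
    ```

    A fixed-point version would quantize `w`, `x`, and the products to a chosen word length, which is where the distortion and the different usable µ range discussed in the abstract come from.
    
    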

  1. Multi-input and binary reproducible, high bandwidth floating point adder in a collective network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Dong; Eisley, Noel A; Heidelberger, Philip

    To add floating point numbers in a parallel computing system, a collective logic device receives the floating point numbers from computing nodes. The collective logic device converts the floating point numbers to integer numbers. The collective logic device adds the integer numbers and generates a summation of the integer numbers. The collective logic device converts the summation to a floating point number. The collective logic device performs the receiving, the converting of the floating point numbers, the adding, the generating, and the converting of the summation in one pass. One pass indicates that the computing nodes send inputs only once to the collective logic device and receive outputs only once from the collective logic device.

  2. Environment parameters and basic functions for floating-point computation

    NASA Technical Reports Server (NTRS)

    Brown, W. S.; Feldman, S. I.

    1978-01-01

    A language-independent proposal for environment parameters and basic functions for floating-point computation is presented. Basic functions are proposed to analyze, synthesize, and scale floating-point numbers. The model provides a small set of parameters and a small set of axioms along with sharp measures of roundoff error. The parameters and functions can be used to write portable and robust codes that deal intimately with the floating-point representation. Subject to underflow and overflow constraints, a number can be scaled by a power of the floating-point radix inexpensively and without loss of precision. A specific representation for FORTRAN is included.
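
    Python's math.frexp and math.ldexp behave like the analyze and scale primitives proposed; this mapping is our illustration, not the paper's. In particular, scaling by a power of the floating-point radix is exact:

    ```python
    # Analyze a float into fraction and exponent, then scale by a power of the
    # radix (2) without any rounding.
    import math

    x = 0.15625
    frac, exp = math.frexp(x)        # x == frac * 2**exp, with 0.5 <= |frac| < 1
    assert x == math.ldexp(frac, exp)

    # Scaling by a power of the radix only changes the exponent field, so it is
    # exact (subject to underflow/overflow constraints, as the abstract notes).
    y = math.ldexp(x, 10)            # x * 2**10
    assert math.ldexp(y, -10) == x
    ```

    
    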

  3. Identification of mothball powder composition by float tests and melting point tests.

    PubMed

    Tang, Ka Yuen

    2018-07-01

    The aim of the study was to identify the composition, as either camphor, naphthalene, or paradichlorobenzene, of mothballs in the form of powder or tiny fragments by float tests and melting point tests. Naphthalene, paradichlorobenzene and camphor mothballs were blended into powder and tiny fragments (with sizes <1/10 of the size of an intact mothball). In the float tests, the mothball powder and tiny fragments were placed in water, saturated salt solution and 50% dextrose solution (D50), and the extent to which they floated or sank in the liquids was observed. In the melting point tests, the mothball powder and tiny fragments were placed in hot water with a temperature between 53 and 80 °C, and the extent to which they melted was observed. Both the float and melting point tests were then repeated using intact mothballs. Three emergency physicians blinded to the identities of the samples and solutions visually evaluated each sample. In the float tests, paradichlorobenzene powder partially floated and partially sank in all three liquids, while naphthalene powder partially floated and partially sank in water. Naphthalene powder did not sink in D50 or saturated salt solution. Camphor powder floated in all three liquids. Float tests identified the compositions of intact mothballs accurately. In the melting point tests, paradichlorobenzene powder melted completely in hot water within 1 min, while naphthalene powder and camphor powder did not melt. The melted portions of paradichlorobenzene mothballs were sometimes too small to be observed within 1 min, but the mothballs either partially or completely melted in 5 min. Neither camphor nor naphthalene intact mothballs melted in hot water. For mothball powder, the melting point tests were more accurate than the float tests in differentiating between paradichlorobenzene and non-paradichlorobenzene (naphthalene or camphor). For intact mothballs, float tests performed better than melting point tests. Float tests can identify camphor mothballs but melting point tests cannot. We suggest melting point tests for identifying mothball powder and tiny fragments, while float tests are recommended for intact mothballs and large fragments.

  4. Floating point only SIMD instruction set architecture including compare, select, Boolean, and alignment operations

    DOEpatents

    Gschwind, Michael K [Chappaqua, NY

    2011-03-01

    Mechanisms for implementing a floating point only single instruction multiple data instruction set architecture are provided. A processor is provided that comprises an issue unit, an execution unit coupled to the issue unit, and a vector register file coupled to the execution unit. The execution unit has logic that implements a floating point (FP) only single instruction multiple data (SIMD) instruction set architecture (ISA). The floating point vector registers of the vector register file store both scalar and floating point values as vectors having a plurality of vector elements. The processor may be part of a data processing system.

  5. Improvements in floating point addition/subtraction operations

    DOEpatents

    Farmwald, P.M.

    1984-02-24

    Apparatus is described for decreasing the latency time associated with floating point addition and subtraction in a computer, using a novel bifurcated, pre-normalization/post-normalization approach that distinguishes between differences of floating point exponents.

  6. Bifurcated method and apparatus for floating point addition with decreased latency time

    DOEpatents

    Farmwald, Paul M.

    1987-01-01

    Apparatus for decreasing the latency time associated with floating point addition and subtraction in a computer, using a novel bifurcated, pre-normalization/post-normalization approach that distinguishes between differences of floating point exponents.

  7. Arithmetic strategy development and its domain-specific and domain-general cognitive correlates: a longitudinal study in children with persistent mathematical learning difficulties.

    PubMed

    Vanbinst, Kiran; Ghesquière, Pol; De Smedt, Bert

    2014-11-01

    Deficits in arithmetic fact retrieval constitute the hallmark of children with mathematical learning difficulties (MLD). It remains, however, unclear which cognitive deficits underpin these difficulties in arithmetic fact retrieval. Many prior studies defined MLD by considering low achievement criteria and not by additionally taking the persistence of the MLD into account. Therefore, the present longitudinal study contrasted children with persistent MLD (MLD-p; mean age: 9 years 2 months) and typically developing (TD) children (mean age: 9 years 6 months) at three time points, to explore whether differences in arithmetic strategy development were associated with differences in numerical magnitude processing, working memory and phonological processing. Our longitudinal data revealed that children with MLD-p had persistent arithmetic fact retrieval deficits at each time point. Children with MLD-p showed persistent impairments in symbolic, but not in nonsymbolic, magnitude processing at each time point. The two groups differed in phonological processing, but not in working memory. Our data indicate that both domain-specific and domain-general cognitive abilities contribute to individual differences in children's arithmetic strategy development, and that the symbolic processing of numerical magnitudes might be a particular risk factor for children with MLD-p. Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. NULL Convention Floating Point Multiplier

    PubMed Central

    Ramachandran, Seshasayanan

    2015-01-01

    Floating point multiplication is a critical part in high dynamic range and computational intensive digital signal processing applications which require high precision and low power. This paper presents the design of an IEEE 754 single precision floating point multiplier using asynchronous NULL convention logic paradigm. Rounding has not been implemented to suit high precision applications. The novelty of the research is that it is the first ever NULL convention logic multiplier, designed to perform floating point multiplication. The proposed multiplier offers substantial decrease in power consumption when compared with its synchronous version. Performance attributes of the NULL convention logic floating point multiplier, obtained from Xilinx simulation and Cadence, are compared with its equivalent synchronous implementation. PMID:25879069

  9. NULL convention floating point multiplier.

    PubMed

    Albert, Anitha Juliette; Ramachandran, Seshasayanan

    2015-01-01

    Floating point multiplication is a critical part in high dynamic range and computational intensive digital signal processing applications which require high precision and low power. This paper presents the design of an IEEE 754 single precision floating point multiplier using asynchronous NULL convention logic paradigm. Rounding has not been implemented to suit high precision applications. The novelty of the research is that it is the first ever NULL convention logic multiplier, designed to perform floating point multiplication. The proposed multiplier offers substantial decrease in power consumption when compared with its synchronous version. Performance attributes of the NULL convention logic floating point multiplier, obtained from Xilinx simulation and Cadence, are compared with its equivalent synchronous implementation.
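
    A rough bit-level sketch of the operation described, with truncation in place of rounding as in the abstract. It handles only normalized, finite inputs and is purely illustrative of IEEE 754 single-precision multiplication, not the paper's NULL convention logic design:

    ```python
    # Single-precision floating-point multiply with truncation (no rounding).
    # Assumes normalized, finite operands; subnormals, overflow, NaN/Inf are
    # not handled in this sketch.
    import struct

    def f32_bits(x):
        return struct.unpack('<I', struct.pack('<f', x))[0]

    def bits_f32(b):
        return struct.unpack('<f', struct.pack('<I', b))[0]

    def f32_mul_truncate(a, b):
        ba, bb = f32_bits(a), f32_bits(b)
        sign = (ba >> 31) ^ (bb >> 31)
        ea, eb = (ba >> 23) & 0xFF, (bb >> 23) & 0xFF
        ma = (ba & 0x7FFFFF) | 0x800000      # restore implicit leading 1
        mb = (bb & 0x7FFFFF) | 0x800000
        prod = ma * mb                        # 48-bit significand product
        exp = ea + eb - 127                   # re-bias the exponent sum
        if prod & (1 << 47):                  # product in [2, 4): normalize
            prod >>= 1
            exp += 1
        mant = (prod >> 23) & 0x7FFFFF        # drop low bits: truncation
        return bits_f32((sign << 31) | (exp << 23) | mant)
    ```

    For example, `f32_mul_truncate(1.5, 2.0)` gives 3.0; exact products round-trip unchanged, while inexact ones lose their low-order product bits instead of being rounded.
    
    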

  10. An Efficient Implementation For Real Time Applications Of The Wigner-Ville Distribution

    NASA Astrophysics Data System (ADS)

    Boashash, Boualem; Black, Peter; Whitehouse, Harper J.

    1986-03-01

    The Wigner-Ville Distribution (WVD) is a valuable tool for time-frequency signal analysis. In order to implement the WVD in real time, an efficient algorithm and architecture have been developed which may be implemented with commercial components. The algorithm successively computes the analytic signal corresponding to the input signal, forms a weighted kernel function, and analyses the kernel via a Discrete Fourier Transform (DFT). To evaluate the analytic signal required by the algorithm, it is shown that the time domain definition implemented as a finite impulse response (FIR) filter is practical and more efficient than the frequency domain definition of the analytic signal. The windowed resolution of the WVD in the frequency domain is shown to be similar to the resolution of a windowed Fourier Transform. A real time signal processor has been designed for evaluation of the WVD analysis system. The system is easily paralleled and can be configured to meet a variety of frequency and time resolutions. The arithmetic unit is based on a pair of high speed VLSI floating-point multiplier and adder chips. Dual operand buses and an independent result bus maximize data transfer rates. The system is horizontally microprogrammed and utilizes a full instruction pipeline. Each microinstruction specifies two operand addresses, a result location, the type of arithmetic and the memory configuration. Input and output are via shared memory blocks with front-end processors to handle data transfers during the non-access periods of the analyzer.

  11. Bounds for the price of discrete arithmetic Asian options

    NASA Astrophysics Data System (ADS)

    Vanmaele, M.; Deelstra, G.; Liinev, J.; Dhaene, J.; Goovaerts, M. J.

    2006-01-01

    In this paper the pricing of European-style discrete arithmetic Asian options with fixed and floating strike is studied by deriving analytical lower and upper bounds. In our approach we use a general technique for deriving upper (and lower) bounds for stop-loss premiums of sums of dependent random variables, as explained in Kaas et al. (Ins. Math. Econom. 27 (2000) 151-168), and additionally, the ideas of Rogers and Shi (J. Appl. Probab. 32 (1995) 1077-1088) and of Nielsen and Sandmann (J. Financial Quant. Anal. 38(2) (2003) 449-473). We are able to create a unifying framework for European-style discrete arithmetic Asian options through these bounds, that generalizes several approaches in the literature as well as improves the existing results. We obtain analytical and easily computable bounds. The aim of the paper is to formulate an advice of the appropriate choice of the bounds given the parameters, investigate the effect of different conditioning variables and compare their efficiency numerically. Several sets of numerical results are included. We also discuss hedging using these bounds. Moreover, our methods are applicable to a wide range of (pricing) problems involving a sum of dependent random variables.
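
    For orientation, this is the quantity that such bounds bracket: the price of a discrete arithmetic Asian call. The sketch below is a plain Monte Carlo estimate under Black-Scholes dynamics, not the paper's analytic bounds; all parameter values are example assumptions:

    ```python
    # Monte Carlo price of a discrete arithmetic-average Asian call with fixed
    # strike, under geometric Brownian motion. Illustrative only.
    import math, random

    def asian_call_mc(s0=100.0, k=100.0, r=0.05, sigma=0.2,
                      t=1.0, n=12, paths=20000):
        random.seed(1)
        dt = t / n
        disc = math.exp(-r * t)               # discount factor
        total = 0.0
        for _ in range(paths):
            s, acc = s0, 0.0
            for _ in range(n):                # simulate the n monitoring dates
                s *= math.exp((r - 0.5 * sigma ** 2) * dt
                              + sigma * math.sqrt(dt) * random.gauss(0, 1))
                acc += s
            total += max(acc / n - k, 0.0)    # arithmetic-average payoff
        return disc * total / paths

    price = asian_call_mc()
    ```

    The arithmetic average of lognormals has no closed-form distribution, which is exactly why the paper derives analytical lower and upper bounds instead of an exact formula.
    
    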

  12. Exploiting data representation for fault tolerance

    DOE PAGES

    Hoemmen, Mark Frederick; Elliott, J.; Sandia National Lab.; ...

    2015-01-06

    Incorrect computer hardware behavior may corrupt intermediate computations in numerical algorithms, possibly resulting in incorrect answers. Prior work models misbehaving hardware by randomly flipping bits in memory. We start by accepting this premise, and present an analytic model for the error introduced by a bit flip in an IEEE 754 floating-point number. We then relate this finding to the linear algebra concepts of normalization and matrix equilibration. In particular, we present a case study illustrating that normalizing both vector inputs of a dot product minimizes the probability of a single bit flip causing a large error in the dot product's result. Moreover, the absolute error is either less than one or very large, which allows detection of large errors. Then, we apply this to the GMRES iterative solver. We count all possible errors that can be introduced through faults in arithmetic in the computationally intensive orthogonalization phase of GMRES, and show that when the matrix is equilibrated, the absolute error is bounded above by one.
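
    A small experiment in the spirit of this error model (our own illustration, not the paper's analysis): flipping a low mantissa bit of an IEEE 754 double perturbs it negligibly, while flipping a high exponent bit produces an enormous error:

    ```python
    # Flip bit k of a double's 64-bit representation and observe the error.
    import struct

    def flip_bit(x, k):
        (b,) = struct.unpack('<Q', struct.pack('<d', x))
        return struct.unpack('<d', struct.pack('<Q', b ^ (1 << k)))[0]

    x = 1.0
    low = abs(flip_bit(x, 0) - x) / x    # least-significant mantissa bit
    high = abs(flip_bit(x, 62) - x) / x  # top exponent bit (here: gives inf)
    ```

    This bimodality (either a tiny relative error or a huge one, depending on which field the flip lands in) is what makes large errors detectable, as the abstract observes.
    
    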

  13. Quantity, Revisited: An Object-Oriented Reusable Class

    NASA Technical Reports Server (NTRS)

    Funston, Monica Gayle; Gerstle, Walter; Panthaki, Malcolm

    1998-01-01

    "Quantity", a prototype implementation of an object-oriented class, was developed for two reasons: to help engineers and scientists manipulate the many types of quantities encountered during routine analysis, and to create a reusable software component for large domain-specific applications. From being used as a stand-alone application to being incorporated into an existing computational mechanics toolkit, "Quantity" appears to be a useful and powerful object. "Quantity" has been designed to maintain the full engineering meaning of values with respect to units and coordinate systems. A value is a scalar, vector, tensor, or matrix, each of which is composed of Value Components, each of which may be an integer, floating point number, fuzzy number, etc., and its associated physical unit. Operations such as coordinate transformation and arithmetic operations are handled by member functions of "Quantity". The prototype has successfully tested such characteristics as maintaining a numeric value, an associated unit, and an annotation. In this paper we further explore the design of "Quantity", with particular attention to coordinate systems.
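
    A toy version of the idea: a value that carries its unit and enforces unit consistency in arithmetic. The names and behavior here are our own simplification for illustration, not the actual interface of the "Quantity" class described:

    ```python
    # Minimal unit-carrying value type: addition requires matching units,
    # multiplication combines them.
    class Quantity:
        def __init__(self, value, unit):
            self.value, self.unit = value, unit

        def __add__(self, other):
            if self.unit != other.unit:
                raise ValueError(f"unit mismatch: {self.unit} vs {other.unit}")
            return Quantity(self.value + other.value, self.unit)

        def __mul__(self, other):
            return Quantity(self.value * other.value,
                            f"{self.unit}*{other.unit}")

        def __repr__(self):
            return f"{self.value} {self.unit}"

    f = Quantity(3.0, "N") + Quantity(4.0, "N")   # forces add: same unit
    w = f * Quantity(2.0, "m")                    # work: units combine to N*m
    ```

    The full class goes much further (scalars through tensors, fuzzy numbers, coordinate transformations), but the principle of binding a value to its physical meaning is the same.
    
    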

  14. Towards cortex sized artificial neural systems.

    PubMed

    Johansson, Christopher; Lansner, Anders

    2007-01-01

    We propose, implement, and discuss an abstract model of the mammalian neocortex. This model is instantiated with a sparse recurrently connected neural network that has spiking leaky integrator units and continuous Hebbian learning. First we study the structure, modularization, and size of neocortex, and then we describe a generic computational model of the cortical circuitry. A characterizing feature of the model is that it is based on the modularization of neocortex into hypercolumns and minicolumns. Both a floating- and fixed-point arithmetic implementation of the model are presented along with simulation results. We conclude that an implementation on a cluster computer is not communication but computation bounded. A mouse and rat cortex sized version of our model executes in 44% and 23% of real-time respectively. Further, an instance of the model with 1.6 x 10(6) units and 2 x 10(11) connections performed noise reduction and pattern completion. These implementations represent the current frontier of large-scale abstract neural network simulations in terms of network size and running speed.

  15. How Is Phonological Processing Related to Individual Differences in Children's Arithmetic Skills?

    ERIC Educational Resources Information Center

    De Smedt, Bert; Taylor, Jessica; Archibald, Lisa; Ansari, Daniel

    2010-01-01

    While there is evidence for an association between the development of reading and arithmetic, the precise locus of this relationship remains to be determined. Findings from cognitive neuroscience research that point to shared neural correlates for phonological processing and arithmetic as well as recent behavioral evidence led to the present…

  16. Extending the BEAGLE library to a multi-FPGA platform.

    PubMed

    Jin, Zheming; Bakos, Jason D

    2013-01-19

    Maximum Likelihood (ML)-based phylogenetic inference using Felsenstein's pruning algorithm is a standard method for estimating the evolutionary relationships amongst a set of species based on DNA sequence data, and is used in popular applications such as RAxML, PHYLIP, GARLI, BEAST, and MrBayes. The Phylogenetic Likelihood Function (PLF) and its associated scaling and normalization steps comprise the computational kernel for these tools. These computations are data intensive but contain fine grain parallelism that can be exploited by coprocessor architectures such as FPGAs and GPUs. A general purpose API called BEAGLE has recently been developed that includes optimized implementations of Felsenstein's pruning algorithm for various data parallel architectures. In this paper, we extend the BEAGLE API to a multiple Field Programmable Gate Array (FPGA)-based platform called the Convey HC-1. The core calculation of our implementation, which includes both the phylogenetic likelihood function (PLF) and the tree likelihood calculation, has an arithmetic intensity of 130 floating-point operations per 64 bytes of I/O, or 2.03 ops/byte. Its performance can thus be calculated as a function of the host platform's peak memory bandwidth and the implementation's memory efficiency, as 2.03 × peak bandwidth × memory efficiency. Our FPGA-based platform has a peak bandwidth of 76.8 GB/s and our implementation achieves a memory efficiency of approximately 50%, which gives an average throughput of 78 Gflops. This represents a ~40X speedup when compared with BEAGLE's CPU implementation on a dual Xeon 5520 and 3X speedup versus BEAGLE's GPU implementation on a Tesla T10 GPU for very large data sizes. The power consumption is 92 W, yielding a power efficiency of 1.7 Gflops per Watt. The use of data parallel architectures to achieve high performance for likelihood-based phylogenetic inference requires high memory bandwidth and a design methodology that emphasizes high memory efficiency. 
To achieve this objective, we integrated 32 pipelined processing elements (PEs) across four FPGAs. For the design of each PE, we developed a specialized synthesis tool to generate a floating-point pipeline with resource and throughput constraints to match the target platform. We have found that using low-latency floating-point operators can significantly reduce FPGA area and still meet timing requirement on the target platform. We found that this design methodology can achieve performance that exceeds that of a GPU-based coprocessor.
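
    The performance model quoted in the abstract can be checked by direct arithmetic: throughput = arithmetic intensity × peak bandwidth × memory efficiency.

    ```python
    # Reproducing the abstract's throughput estimate for the Convey HC-1 port.
    ops_per_byte = 130 / 64          # ~2.03 flops per byte of I/O
    peak_bw_gbs = 76.8               # platform peak memory bandwidth, GB/s
    mem_eff = 0.5                    # measured memory efficiency (~50%)
    gflops = ops_per_byte * peak_bw_gbs * mem_eff
    assert abs(gflops - 78) < 1      # ~78 Gflops, as reported
    ```

    
    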

  17. A 640-MHz 32-megachannel real-time polyphase-FFT spectrum analyzer

    NASA Technical Reports Server (NTRS)

    Zimmerman, G. A.; Garyantes, M. F.; Grimm, M. J.; Charny, B.

    1991-01-01

    A polyphase fast Fourier transform (FFT) spectrum analyzer being designed for NASA's Search for Extraterrestrial Intelligence (SETI) Sky Survey at the Jet Propulsion Laboratory is described. By replacing the time domain multiplicative window preprocessing with polyphase filter processing, much of the processing loss of windowed FFTs can be eliminated. Polyphase coefficient memory costs are minimized by effective use of run length compression. Finite word length effects are analyzed, producing a balanced system with 8 bit inputs, 16 bit fixed point polyphase arithmetic, and 24 bit fixed point FFT arithmetic. Fixed point renormalization midway through the computation is seen to be naturally accommodated by the matrix FFT algorithm proposed. Simulation results validate the finite word length arithmetic analysis and the renormalization technique.

  18. Verification of floating-point software

    NASA Technical Reports Server (NTRS)

    Hoover, Doug N.

    1990-01-01

    Floating point computation presents a number of problems for formal verification. Should one treat the actual details of floating point operations, accept them as imprecisely defined, or ignore round-off error altogether and behave as if floating point operations were perfectly accurate? There is the further problem that a numerical algorithm usually only approximately computes some mathematical function, and we often do not know just how good the approximation is, even in the absence of round-off error. ORA has developed a theory of asymptotic correctness which allows one to verify floating point software with a minimum of entanglement in these problems. This theory and its implementation in the Ariel C verification system are described. The theory is illustrated using a simple program which finds a zero of a given function by bisection. This paper is presented in viewgraph form.
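
    The running example is a zero-finder by bisection; a minimal version of such a program (our own sketch, not ORA's verified code) is:

    ```python
    # Bisection: repeatedly halve a bracketing interval [lo, hi] with
    # f(lo) * f(hi) <= 0 until it is narrower than tol.
    def bisect(f, lo, hi, tol=1e-12):
        assert f(lo) * f(hi) <= 0            # a zero must be bracketed
        while hi - lo > tol:
            mid = (lo + hi) / 2
            if f(lo) * f(mid) <= 0:
                hi = mid                      # zero lies in [lo, mid]
            else:
                lo = mid                      # zero lies in [mid, hi]
        return (lo + hi) / 2

    root = bisect(lambda x: x * x - 2.0, 0.0, 2.0)   # approximates sqrt(2)
    ```

    Even this simple routine illustrates the verification issues above: with round-off, `f` is only approximately the intended mathematical function, so what bisection provably finds is a zero of a nearby function, which is the kind of statement asymptotic correctness makes precise.
    
    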

  19. Design of a reversible single precision floating point subtractor.

    PubMed

    Anantha Lakshmi, Av; Sudha, Gf

    2014-01-04

    In recent years, reversible logic has emerged as a major area of research due to its ability to reduce the power dissipation that is the main requirement in low-power digital circuit design. It has wide applications in low-power CMOS design, nanotechnology, digital signal processing, communication, DNA computing and optical computing. Floating-point operations are needed very frequently in nearly all computing disciplines, and studies have shown floating-point addition/subtraction to be the most used floating-point operation. However, while a few designs exist for efficient reversible BCD subtractors, no work has been reported on a reversible floating-point subtractor. In this paper, an efficient reversible single precision floating-point subtractor is presented. The proposed design requires reversible designs of an 8-bit and a 24-bit comparator unit, an 8-bit and a 24-bit subtractor, and a normalization unit. For normalization, a 24-bit reversible leading zero detector and a 24-bit reversible shift register are implemented to shift the mantissas. To realize a reversible 1-bit comparator, two new 3x3 reversible gates are proposed. The proposed reversible 1-bit comparator is better and optimized in terms of the number of reversible gates used, the transistor count and the number of garbage outputs. The proposed work is analysed in terms of the number of reversible gates, garbage outputs, constant inputs and quantum cost. Using these modules, an efficient design of a reversible single precision floating-point subtractor is proposed. The proposed circuits have been simulated using ModelSim and synthesized using Xilinx Virtex5vlx30tff665-3. The total on-chip power consumed by the proposed 32-bit reversible floating point subtractor is 0.410 W.

  20. Optimization of block-floating-point realizations for digital controllers with finite-word-length considerations.

    PubMed

    Wu, Jun; Hu, Xie-he; Chen, Sheng; Chu, Jian

    2003-01-01

    The closed-loop stability issue of finite-precision realizations was investigated for digital controllers implemented in block-floating-point format. The controller coefficient perturbation was analyzed resulting from using finite word length (FWL) block-floating-point representation scheme. A block-floating-point FWL closed-loop stability measure was derived which considers both the dynamic range and precision. To facilitate the design of optimal finite-precision controller realizations, a computationally tractable block-floating-point FWL closed-loop stability measure was then introduced and the method of computing the value of this measure for a given controller realization was developed. The optimal controller realization is defined as the solution that maximizes the corresponding measure, and a numerical optimization approach was adopted to solve the resulting optimal realization problem. A numerical example was used to illustrate the design procedure and to compare the optimal controller realization with the initial realization.
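
    For readers unfamiliar with the representation under study: in block-floating-point, a block of values shares a single exponent, with each value stored as a fixed-point mantissa. The sketch below is our own minimal illustration of that format; the paper's controller realizations are far more elaborate:

    ```python
    # Block-floating-point encode/decode: one shared exponent per block,
    # fixed-point mantissas per value. mant_bits is an assumed word length.
    import math

    def bfp_encode(xs, mant_bits=15):
        # Shared exponent chosen so the largest magnitude fits the mantissa range.
        e = max(math.frexp(x)[1] for x in xs)
        mants = [int(round(math.ldexp(x, mant_bits - e))) for x in xs]
        return mants, e

    def bfp_decode(mants, e, mant_bits=15):
        return [math.ldexp(m, e - mant_bits) for m in mants]

    xs = [0.75, -0.125, 0.4]
    mants, e = bfp_encode(xs)
    ys = bfp_decode(mants, e)
    assert all(abs(x - y) < 2 ** -14 for x, y in zip(xs, ys))
    ```

    The shared exponent gives the dynamic range of floating point at near-fixed-point cost, but small values in a block dominated by a large one lose precision, which is exactly the dynamic-range/precision trade-off the stability measure must capture.
    
    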

  1. Apparatus and method for implementing power saving techniques when processing floating point values

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Young Moon; Park, Sang Phill

    An apparatus and method are described for reducing power when reading and writing graphics data. For example, one embodiment of an apparatus comprises: a graphics processor unit (GPU) to process graphics data including floating point data; a set of registers, at least one of the registers of the set partitioned to store the floating point data; and encode/decode logic to reduce a number of binary 1 values being read from the at least one register by causing a specified set of bit positions within the floating point data to be read out as 0s rather than 1s.

  2. Automatic Estimation of Verified Floating-Point Round-Off Errors via Static Analysis

    NASA Technical Reports Server (NTRS)

    Moscato, Mariano; Titolo, Laura; Dutle, Aaron; Munoz, Cesar A.

    2017-01-01

    This paper introduces a static analysis technique for computing formally verified round-off error bounds of floating-point functional expressions. The technique is based on a denotational semantics that computes a symbolic estimation of floating-point round-off errors along with a proof certificate that ensures its correctness. The symbolic estimation can be evaluated on concrete inputs using rigorous enclosure methods to produce formally verified numerical error bounds. The proposed technique is implemented in the prototype research tool PRECiSA (Program Round-off Error Certifier via Static Analysis) and used in the verification of floating-point programs of interest to NASA.
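
    As a back-of-envelope version of what such an analysis computes, here is a naive first-order error bound for fl(fl(x*y) + z) using the unit roundoff u = 2^-53. PRECiSA derives formally verified bounds with proof certificates; this unverified propagation is only illustrative:

    ```python
    # First-order round-off bound: each IEEE 754 double operation contributes
    # at most a relative error of u = 2**-53 to its exact result.
    u = 2.0 ** -53

    def bound_mul_add(x, y, z):
        p = abs(x * y)                   # magnitude of the exact product
        s = abs(x * y + z)               # magnitude of the exact sum
        # error(fl(x*y)) <= p*u; that error plus the final rounding of the add:
        return p * u + (s + p * u) * u

    b = bound_mul_add(1.1, 2.2, 3.3)
    ```

    The bound depends on the magnitudes of intermediate exact values, which is why such analyses propagate symbolic estimates rather than single numbers.
    
    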

  3. The Works of Archimedes

    NASA Astrophysics Data System (ADS)

    Archimedes; Heath, Thomas L.

    2009-09-01

    Part I. Introduction: 1. Archimedes; 2. Manuscripts and principle editions; 3. Relation of Archimedes to his predecessors; 4. Arithmetic in Archimedes; 5. On the problem known as neuseis; 6. Cubic equations; 7. Anticipations by Archimedes of the integral calculus; 8. The terminology of Archimedes; Part II. The Works of Archimedes: 1. On the sphere and cylinder; 2. Measurement of a circle; 3. On conoids and spheroids; 4. On spirals; 5. On the equilibrium of planes; 6. The sand-reckoner; 7. Quadrature of the parabola; 8. On floating bodies; 9. Book of lemmas; 10. The cattle-problem.

  4. Floating-Point Units and Algorithms for field-programmable gate arrays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Underwood, Keith D.; Hemmert, K. Scott

    2005-11-01

    The software that we are attempting to copyright is a package of floating-point unit descriptions and example algorithm implementations using those units for use in FPGAs. The floating-point units are best-in-class implementations of add, multiply, divide, and square root floating-point operations. The algorithm implementations are sample (not highly flexible) implementations of FFT, matrix multiply, matrix-vector multiply, and dot product. Together, one could think of the collection as an implementation of parts of the BLAS library, or something similar to the FFTW packages (without the flexibility) for FPGAs. Results from this work have been published multiple times, and we are working on a publication to discuss the techniques we use to implement the floating-point units. For some more background, FPGAs are programmable hardware. "Programs" for this hardware are typically created using a hardware description language (examples include Verilog, VHDL, and JHDL). Our floating-point unit descriptions are written in JHDL, which allows them to include placement constraints that make them highly optimized relative to some other implementations of floating-point units. Many vendors (Nallatech from the UK, SRC Computers in the US) have similar implementations, but our implementations seem to be somewhat higher performance. Our algorithm implementations are written in VHDL, and models of the floating-point units are provided in VHDL as well. FPGA "programs" make multiple "calls" (hardware instantiations) to libraries of intellectual property (IP), such as the floating-point unit library described here. These programs are then compiled using a tool called a synthesizer (such as a tool from Synplicity, Inc.). The compiled file is a netlist of gates and flip-flops. This netlist is then mapped to a particular type of FPGA by a mapper and then a place-and-route tool. These tools assign the gates in the netlist to specific locations on the specific type of FPGA chip used and construct the required routes between them. The result is a "bitstream" that is analogous to a compiled binary. The bitstream is loaded into the FPGA to create a specific hardware configuration.

  5. Floating-point performance of ARM cores and their efficiency in classical molecular dynamics

    NASA Astrophysics Data System (ADS)

    Nikolskiy, V.; Stegailov, V.

    2016-02-01

    Supercomputing in the exascale era will inevitably be limited by power efficiency, and a range of possible CPU architectures is being considered. The development of ARM processors has recently reached the point where their floating point performance can be seriously considered for a range of scientific applications. In this work we present an analysis of the floating point performance of the latest ARM cores and their efficiency for classical molecular dynamics algorithms.

  6. Single-digit arithmetic processing—anatomical evidence from statistical voxel-based lesion analysis

    PubMed Central

    Mihulowicz, Urszula; Willmes, Klaus; Karnath, Hans-Otto; Klein, Elise

    2014-01-01

    Different specific mechanisms have been suggested for solving single-digit arithmetic operations. However, the neural correlates underlying basic arithmetic (multiplication, addition, subtraction) are still under debate. In the present study, we systematically assessed single-digit arithmetic in a group of acute stroke patients (n = 45) with circumscribed left- or right-hemispheric brain lesions. Lesion sites significantly related to impaired performance were found only in the left-hemisphere damaged (LHD) group. Deficits in multiplication and addition were related to subcortical/white matter brain regions differing from those for subtraction tasks, corroborating the notion of distinct processing pathways for different arithmetic tasks. Additionally, our results further point to the importance of investigating fiber pathways in numerical cognition. PMID:24847238

  7. Applying n-bit floating point numbers and integers, and the n-bit filter of HDF5 to reduce file sizes of remote sensing products in memory-sensitive environments

    NASA Astrophysics Data System (ADS)

    Zinke, Stephan

    2017-02-01

    Memory-sensitive applications for remote sensing data require memory-optimized data types in remote sensing products. Hierarchical Data Format version 5 (HDF5) offers user-defined floating point numbers and integers, and an n-bit filter, for creating data types optimized for memory consumption. The European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT) applies a compaction scheme to the disseminated products of the Day and Night Band (DNB) data of the Visible Infrared Imager Radiometer Suite (VIIRS) instrument on the Suomi National Polar-orbiting Partnership (S-NPP) satellite, distributed through the EUMETSAT Advanced Retransmission Service. The scheme converts the original 32-bit floating point numbers of the product's radiance dataset to user-defined floating point numbers in combination with the n-bit filter. The radiance dataset requires a floating point representation due to the high dynamic range of the DNB. A compression factor of 1.96 is reached by using an automatically determined exponent size and an 8-bit trailing significand, thus reducing the bandwidth requirements for dissemination. It is shown how the parameters needed for user-defined floating point numbers are derived or determined automatically from the data present in a product.
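
    The core of such a compaction can be illustrated in software. The sketch below is an illustration only, not EUMETSAT's implementation: it truncates float32 values to an 8-bit trailing significand by masking low mantissa bits, whereas a real user-defined HDF5 type would also shrink the exponent field.

```python
import numpy as np

def truncate_significand(x, keep_bits=8):
    """Keep only the top `keep_bits` bits of a float32 trailing
    significand by zeroing the low mantissa bits."""
    bits = np.asarray(x, dtype=np.float32).view(np.uint32)
    drop = 23 - keep_bits                        # float32 has a 23-bit trailing significand
    mask = np.uint32(0xFFFFFFFF ^ ((1 << drop) - 1))
    return (bits & mask).view(np.float32)

radiance = np.array([1.2345678e-4, 3.14159, 2.5e6], dtype=np.float32)
compact = truncate_significand(radiance, keep_bits=8)
# Truncation toward zero bounds the relative error by 2**-keep_bits
assert np.all(np.abs(compact - radiance) <= np.abs(radiance) * 2.0 ** -8)
```

    Because the low bits are forced to zero, the masked values compress well under a generic filter even before the exponent field is narrowed.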

  8. On the use of inexact, pruned hardware in atmospheric modelling

    PubMed Central

    Düben, Peter D.; Joven, Jaume; Lingamneni, Avinash; McNamara, Hugh; De Micheli, Giovanni; Palem, Krishna V.; Palmer, T. N.

    2014-01-01

    Inexact hardware design, which advocates trading the accuracy of computations in exchange for significant savings in area, power and/or performance of computing hardware, has received increasing prominence in several error-tolerant application domains, particularly those involving perceptual or statistical end-users. In this paper, we evaluate inexact hardware for its applicability in weather and climate modelling. We expand previous studies on inexact techniques, in particular probabilistic pruning, to floating point arithmetic units and derive several simulated set-ups of pruned hardware with reasonable levels of error for applications in atmospheric modelling. The set-up is tested on the Lorenz ‘96 model, a toy model for atmospheric dynamics, using software emulation for the proposed hardware. The results show that large parts of the computation tolerate the use of pruned hardware blocks without major changes in the quality of short- and long-time diagnostics, such as forecast errors and probability density functions. This could open the door to significant savings in computational cost and to higher resolution simulations with weather and climate models. PMID:24842031

  9. Design and evaluation of an architecture for a digital signal processor for instrumentation applications

    NASA Astrophysics Data System (ADS)

    Fellman, Ronald D.; Kaneshiro, Ronald T.; Konstantinides, Konstantinos

    1990-03-01

    The authors present the design and evaluation of an architecture for a monolithic, programmable, floating-point digital signal processor (DSP) for instrumentation applications. An investigation of the most commonly used algorithms in instrumentation led to a design that satisfies the requirements for high computational and I/O (input/output) throughput. In the arithmetic unit, a 16- x 16-bit multiplier and a 32-bit accumulator provide the capability for single-cycle multiply/accumulate operations, and three format adjusters automatically adjust the data format for increased accuracy and dynamic range. An on-chip I/O unit is capable of handling data block transfers through a direct memory access port and real-time data streams through a pair of parallel I/O ports. I/O operations and program execution are performed in parallel. In addition, the processor includes two data memories with independent addressing units, a microsequencer with instruction RAM, and multiplexers for internal data redirection. The authors also present the structure and implementation of a design environment suitable for the algorithmic, behavioral, and timing simulation of a complete DSP system. Various benchmarking results are reported.

  10. Active vibration control of a full scale aircraft wing using a reconfigurable controller

    NASA Astrophysics Data System (ADS)

    Prakash, Shashikala; Renjith Kumar, T. G.; Raja, S.; Dwarakanathan, D.; Subramani, H.; Karthikeyan, C.

    2016-01-01

    This work highlights the design of a reconfigurable Active Vibration Control (AVC) system for aircraft structures using adaptive techniques. The AVC system, with multichannel capability, is realized using the Filtered-x Least Mean Square (FxLMS) algorithm on a Xilinx Virtex-4 Field Programmable Gate Array (FPGA) platform in the Very High Speed Integrated Circuit Hardware Description Language (VHDL). The HDL design is based on a Finite State Machine (FSM) model with floating point Intellectual Property (IP) cores for the arithmetic operations. The use of an FPGA makes it possible to modify system parameters even at runtime as the user's requirements change. The locations of the control actuators are optimized based on a dynamic modal strain approach using a genetic algorithm (GA). The developed system has been successfully deployed for AVC testing of the full-scale wing of an all-composite two-seater transport aircraft. Several closed-loop configurations, including single-channel and multi-channel control, have been tested. The experimental results from the studies presented here are very encouraging and demonstrate the usefulness of the system's reconfigurability for real-time applications.
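
    As background, the FxLMS update named above can be modelled in a few lines of software. The sketch below is purely illustrative, with a hypothetical secondary-path model, step size, and filter lengths; the actual AVC system is a multichannel FPGA implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
s = np.array([0.6, 0.3])       # assumed model of the secondary path (actuator to sensor)
w = np.zeros(4)                # adaptive control filter taps
mu = 0.05                      # adaptation step size
x_hist = np.zeros(4)           # reference signal history
xf_hist = np.zeros(4)          # filtered-reference history
y_hist = np.zeros(2)           # control output history (feeds the secondary path)
err = []

for n in range(5000):
    x = rng.standard_normal()              # reference, correlated with the disturbance
    x_hist = np.roll(x_hist, 1); x_hist[0] = x
    d = 0.9 * x_hist[1]                    # disturbance reaching the error sensor
    y = w @ x_hist                         # control filter output
    y_hist = np.roll(y_hist, 1); y_hist[0] = y
    e = d - s @ y_hist                     # residual vibration at the sensor
    xf = s @ x_hist[:2]                    # reference filtered through the path model
    xf_hist = np.roll(xf_hist, 1); xf_hist[0] = xf
    w += mu * e * xf_hist                  # FxLMS weight update
    err.append(e)

err = np.array(err)
# Residual power drops sharply once the filter converges
assert np.mean(err[-1000:] ** 2) < 0.25 * np.mean(err[:200] ** 2)
```

    Filtering the reference through the secondary-path model before the update is what distinguishes FxLMS from plain LMS; without it, the actuator-to-sensor delay destabilizes the adaptation.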

  11. Accelerating scientific computations with mixed precision algorithms

    NASA Astrophysics Data System (ADS)

    Baboulin, Marc; Buttari, Alfredo; Dongarra, Jack; Kurzak, Jakub; Langou, Julie; Langou, Julien; Luszczek, Piotr; Tomov, Stanimire

    2009-12-01

    On modern architectures, the performance of 32-bit operations is often at least twice as fast as the performance of 64-bit operations. By using a combination of 32-bit and 64-bit floating point arithmetic, the performance of many dense and sparse linear algebra algorithms can be significantly enhanced while maintaining the 64-bit accuracy of the resulting solution. The approach presented here can apply not only to conventional processors but also to other technologies such as Field Programmable Gate Arrays (FPGA), Graphical Processing Units (GPU), and the STI Cell BE processor. Results on modern processor architectures and the STI Cell BE are presented. Program summary: Program title: ITER-REF Catalogue identifier: AECO_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECO_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 7211 No. of bytes in distributed program, including test data, etc.: 41 862 Distribution format: tar.gz Programming language: FORTRAN 77 Computer: desktop, server Operating system: Unix/Linux RAM: 512 Mbytes Classification: 4.8 External routines: BLAS (optional) Nature of problem: On modern architectures, the performance of 32-bit operations is often at least twice as fast as the performance of 64-bit operations. By using a combination of 32-bit and 64-bit floating point arithmetic, the performance of many dense and sparse linear algebra algorithms can be significantly enhanced while maintaining the 64-bit accuracy of the resulting solution. Solution method: Mixed precision algorithms stem from the observation that, in many cases, a single precision solution of a problem can be refined to the point where double precision accuracy is achieved. 
A common approach to the solution of linear systems, either dense or sparse, is to perform the LU factorization of the coefficient matrix using Gaussian elimination. First, the coefficient matrix A is factored into the product of a lower triangular matrix L and an upper triangular matrix U. Partial row pivoting is generally used to improve numerical stability, resulting in a factorization PA=LU, where P is a permutation matrix. The solution for the system is obtained by first solving Ly=Pb (forward substitution) and then solving Ux=y (backward substitution). Due to round-off errors, the computed solution, x, carries a numerical error magnified by the condition number of the coefficient matrix A. In order to improve the computed solution, an iterative process can be applied which, at each iteration, produces a correction to the computed solution; this yields the method commonly known as the iterative refinement algorithm. Provided that the system is not too ill-conditioned, the algorithm produces a solution correct to the working precision. Running time: seconds/minutes
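
    The refinement loop described above can be sketched as follows. This is a simplified model: `numpy.linalg.solve` on a float32 copy of A stands in for the single-precision LU factorization and triangular solves.

```python
import numpy as np

def mixed_precision_solve(A, b, tol=1e-12, max_iter=30):
    """Iterative refinement sketch: solve in float32, correct in float64.

    np.linalg.solve on the float32 copy of A stands in for the
    single-precision LU factorization plus triangular solves.
    """
    A32 = A.astype(np.float32)
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(max_iter):
        r = b - A @ x                    # residual computed in double precision
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break                        # converged to working precision
        d = np.linalg.solve(A32, r.astype(np.float32))
        x += d.astype(np.float64)        # apply the single-precision correction
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
x_true = rng.standard_normal(50)
b = A @ x_true
x = mixed_precision_solve(A, b)
assert np.linalg.norm(x - x_true) < 1e-6 * np.linalg.norm(x_true)
```

    In the real mixed precision codes the O(n³) factorization is done once in single precision and reused for every O(n²) correction solve, which is where the speedup comes from.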

  12. [Controlled observation of the efficacy between floating acupuncture at Tianying point and warm-needling therapy for supraspinous ligament injury].

    PubMed

    Li, Xin-Wei; Shao, Xiao-Mei; Tan, Ke-Ping; Fang, Jian-Qiao

    2013-04-01

    To compare the efficacy in the treatment of supraspinous ligament injury between floating acupuncture at Tianying point and the conventional warm needling therapy. Ninety patients were randomized into a floating acupuncture group and a warm needling group, 45 cases in each one. In the floating acupuncture group, the floating needling technique was adopted at Tianying point. In the warm needling group, the conventional warm needling therapy was applied at Tianying point as the chief point in the prescription. The treatment was given 3 times a week and 6 treatments made one session. The visual analogue scale (VAS) was adopted for pain comparison before and after treatment in the two groups, and the efficacy in the two groups was assessed. The curative and remarkably effective rate was 81.8% (36/44) in the floating acupuncture group and the total effective rate was 95.5% (42/44), which were superior to 44.2% (19/43) and 79.1% (34/43) in the warm needling group, respectively (P < 0.01, P < 0.05). The VAS score after treatment was lower than before treatment in both groups (both P < 0.01), and the score in the floating acupuncture group was lower than that in the warm needling group after treatment (P < 0.01). Thirty-six cases were cured or remarkably effective in the floating acupuncture group after treatment, of which 28 cases were cured or remarkably effective within 3 treatments, accounting for 77.8% (28/36), which was apparently higher than 26.3% (5/19) in the warm needling group (P < 0.01). Floating acupuncture at Tianying point achieves quick and definite efficacy for supraspinous ligament injury and presents an apparent analgesic effect. The efficacy is superior to the conventional warm needling therapy.

  13. Extending the BEAGLE library to a multi-FPGA platform

    PubMed Central

    2013-01-01

    Background: Maximum Likelihood (ML)-based phylogenetic inference using Felsenstein’s pruning algorithm is a standard method for estimating the evolutionary relationships amongst a set of species based on DNA sequence data, and is used in popular applications such as RAxML, PHYLIP, GARLI, BEAST, and MrBayes. The Phylogenetic Likelihood Function (PLF) and its associated scaling and normalization steps comprise the computational kernel for these tools. These computations are data intensive but contain fine grain parallelism that can be exploited by coprocessor architectures such as FPGAs and GPUs. A general purpose API called BEAGLE has recently been developed that includes optimized implementations of Felsenstein’s pruning algorithm for various data parallel architectures. In this paper, we extend the BEAGLE API to a multiple Field Programmable Gate Array (FPGA)-based platform called the Convey HC-1. Results: The core calculation of our implementation, which includes both the phylogenetic likelihood function (PLF) and the tree likelihood calculation, has an arithmetic intensity of 130 floating-point operations per 64 bytes of I/O, or 2.03 ops/byte. Its performance can thus be calculated as a function of the host platform’s peak memory bandwidth and the implementation’s memory efficiency, as 2.03 × peak bandwidth × memory efficiency. Our FPGA-based platform has a peak bandwidth of 76.8 GB/s and our implementation achieves a memory efficiency of approximately 50%, which gives an average throughput of 78 Gflops. This represents a ~40X speedup when compared with BEAGLE’s CPU implementation on a dual Xeon 5520 and 3X speedup versus BEAGLE’s GPU implementation on a Tesla T10 GPU for very large data sizes. The power consumption is 92 W, yielding a power efficiency of 1.7 Gflops per Watt. 
Conclusions: The use of data parallel architectures to achieve high performance for likelihood-based phylogenetic inference requires high memory bandwidth and a design methodology that emphasizes high memory efficiency. To achieve this objective, we integrated 32 pipelined processing elements (PEs) across four FPGAs. For the design of each PE, we developed a specialized synthesis tool to generate a floating-point pipeline with resource and throughput constraints to match the target platform. We have found that using low-latency floating-point operators can significantly reduce FPGA area and still meet timing requirements on the target platform. We found that this design methodology can achieve performance that exceeds that of a GPU-based coprocessor. PMID:23331707
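
    The throughput figure quoted above follows directly from the stated roofline-style model:

```python
# Throughput model from the abstract: intensity × peak bandwidth × efficiency
flops_per_block = 130                       # floating-point ops per 64 bytes of I/O
intensity = flops_per_block / 64.0          # ≈ 2.03 ops/byte
peak_bandwidth = 76.8e9                     # bytes/s on the Convey HC-1
memory_efficiency = 0.5                     # measured fraction of peak bandwidth
throughput = intensity * peak_bandwidth * memory_efficiency
assert abs(throughput - 78e9) < 1e9         # ≈ 78 Gflops, as reported
```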

  14. Applications Performance on NAS Intel Paragon XP/S - 15#

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Simon, Horst D.; Copper, D. M. (Technical Monitor)

    1994-01-01

    The Numerical Aerodynamic Simulation (NAS) Systems Division received an Intel Touchstone Sigma prototype model Paragon XP/S-15 in February, 1993. The i860 XP microprocessor, with an integrated floating point unit and operating in dual-instruction mode, gives a peak performance of 75 million floating point operations per second (MFLOPS) for 64-bit floating point arithmetic. It is used in the Paragon XP/S-15 which has been installed at NAS, NASA Ames Research Center. The NAS Paragon has 208 nodes and its peak performance is 15.6 GFLOPS. Here, we report on early experience using the Paragon XP/S-15. We have tested its performance using both kernels and applications of interest to NAS. We have measured the performance of BLAS 1, 2 and 3, both assembly-coded and Fortran-coded, on the NAS Paragon XP/S-15. Furthermore, we have investigated the performance of a single-node one-dimensional FFT, a distributed two-dimensional FFT and a distributed three-dimensional FFT. Finally, we measured the performance of the NAS Parallel Benchmarks (NPB) on the Paragon and compared it with the performance obtained on other highly parallel machines, such as the CM-5, CRAY T3D, IBM SP1, etc. In particular, we investigated the following issues, which can strongly affect the performance of the Paragon: a. Impact of the operating system: Intel currently uses as a default the operating system OSF/1 AD from the Open Software Foundation. Paging of the Open Software Foundation (OSF) server at 22 MB, done to make more memory available for the application, degrades performance. We found that when the limit of 26 MB per node, out of the 32 MB available, is reached, the application is paged out of main memory using virtual memory. When the application starts paging, performance is considerably reduced. We found that dynamic memory allocation can help application performance under certain circumstances. b. Impact of data cache on the i860 XP: We measured the performance of the BLAS, both assembly-coded and Fortran-coded. We found that the measured performance of the assembly-coded BLAS is much less than what the memory bandwidth limitation would predict. The influence of the data cache on different sizes of vectors is also investigated using one-dimensional FFTs. c. Impact of processor layout: There are several different ways processors can be laid out within the two-dimensional grid of processors on the Paragon. We have used the FFT example to investigate performance differences based on processor layout.

  15. Hydrodynamic and Aerodynamic Tests of Models of Floats for Single-float Seaplanes NACA Models 41-D, 41-E, 61-A, 73, and 73-A

    NASA Technical Reports Server (NTRS)

    Parkinson, J B; HOUSE R O

    1938-01-01

    Tests were made in the NACA tank and in the NACA 7- by 10-foot wind tunnel on two models of transverse-step floats and three models of pointed-step floats considered suitable for use with single-float seaplanes. The object of the program was the reduction of the water resistance and spray of single-float seaplanes without reducing the angle of dead rise believed necessary for satisfactory absorption of the shock loads. The results indicated that all the models have less resistance and spray than the model of the Mark V float, and that the pointed-step floats are somewhat superior to the transverse-step floats in these respects. Models 41-D, 61-A, and 73 were tested by the general method over a wide range of loads and speeds. The results are presented in the form of curves and charts for use in design calculations.

  16. Numerical Demons in Monte Carlo Estimation of Bayesian Model Evidence with Application to Soil Respiration Models

    NASA Astrophysics Data System (ADS)

    Elshall, A. S.; Ye, M.; Niu, G. Y.; Barron-Gafford, G.

    2016-12-01

    Bayesian multimodel inference is increasingly being used in hydrology. Estimating Bayesian model evidence (BME) is of central importance in many Bayesian multimodel analyses such as Bayesian model averaging and model selection. BME is the overall probability of the model in reproducing the data, accounting for the trade-off between goodness-of-fit and model complexity. Yet estimating BME is challenging, especially for high dimensional problems with complex sampling spaces. Estimating BME using Monte Carlo numerical methods is preferred, as these methods yield higher accuracy than semi-analytical solutions (e.g. Laplace approximations, BIC, KIC, etc.). However, numerical methods are prone to numerical demons arising from underflow and round-off errors. Although a few studies have alluded to this issue, to our knowledge this is the first study that illustrates these numerical demons. We show that finite-precision arithmetic can impose a threshold on likelihood values and the Metropolis acceptance ratio, which results in trimming parameter regions (when the likelihood function is less than the smallest floating point number that a computer can represent) and in corrupting the empirical measures of the random states of the MCMC sampler (when using the log-likelihood function). We consider two of the most powerful numerical estimators of BME, the path sampling method of thermodynamic integration (TI) and the importance sampling method of steppingstone sampling (SS). We also consider the two most widely used numerical estimators, the prior sampling arithmetic mean (AM) and the posterior sampling harmonic mean (HM). We investigate the vulnerability of these four estimators to the numerical demons. Interestingly, the most biased estimator, namely the HM, turned out to be the least vulnerable. 
While it is generally assumed that AM is a bias-free estimator that will approximate the true BME given sufficient computational effort, we show that arithmetic underflow can hamper AM, resulting in severe underestimation of BME. TI turned out to be the most vulnerable, resulting in BME overestimation. Finally, we show how SS can be largely invariant to rounding errors, yielding the most accurate and computationally efficient results. These results are useful for Monte Carlo simulations that estimate Bayesian model evidence.
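
    The likelihood underflow described above is commonly mitigated with the log-sum-exp trick. The sketch below is illustrative only, not the estimators evaluated in the study: it averages likelihoods given only their logarithms, without ever forming a value that underflows.

```python
import numpy as np

def log_mean_exp(log_l):
    """Log of the mean of likelihoods, computed from their logs
    without forming exp(log_l) directly (which would underflow)."""
    m = np.max(log_l)                        # factor out the largest term
    return m + np.log(np.mean(np.exp(log_l - m)))

# Log-likelihoods far below the log of the smallest normal float64 (~ -708)
log_l = np.array([-1000.0, -1001.0, -1005.0])
with np.errstate(divide="ignore"):
    naive = np.log(np.mean(np.exp(log_l)))   # exp underflows to 0, log gives -inf
stable = log_mean_exp(log_l)
assert np.isneginf(naive)
assert -1001.0 < stable < -1000.0
```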

  17. The Unified Floating Point Vector Coprocessor for Reconfigurable Hardware

    NASA Astrophysics Data System (ADS)

    Kathiara, Jainik

    There has been increased interest recently in using embedded cores on FPGAs. Many of the applications that make use of these cores have floating point operations. Due to the complexity and expense of floating point hardware, these algorithms are usually converted to fixed point operations or implemented using floating-point emulation in software. As the technology advances, more homogeneous computational resources and fixed-function embedded blocks are added to FPGAs, and implementing floating point hardware hence becomes a feasible option. In this research we have implemented a high performance, autonomous floating point vector coprocessor (FPVC) that works independently within an embedded processor system. We present a unified approach to vector and scalar computation, using a single register file for both scalar operands and vector elements. The hybrid vector/SIMD computational model of the FPVC results in greater overall performance for most applications along with improved peak performance compared to other approaches. By parameterizing the vector length and the number of vector lanes, we can design an application-specific FPVC and take optimal advantage of the FPGA fabric. For this research we have also begun designing a software library of computational kernels, each of which adapts the FPVC's configuration to provide maximal performance. The kernels implemented are from the area of linear algebra and include matrix multiplication and QR and Cholesky decomposition. We have demonstrated the operation of the FPVC on a Xilinx Virtex 5 using the embedded PowerPC.

  18. High-performance floating-point image computing workstation for medical applications

    NASA Astrophysics Data System (ADS)

    Mills, Karl S.; Wong, Gilman K.; Kim, Yongmin

    1990-07-01

    The medical imaging field relies increasingly on imaging and graphics techniques in diverse applications, with needs similar to (or more stringent than) those of the military, industrial and scientific communities. However, most image processing and graphics systems available for use in medical imaging today are either expensive, specialized, or in most cases both. High performance imaging and graphics workstations which can provide real-time results for a number of applications, while maintaining affordability and flexibility, can facilitate the application of digital image computing techniques in many different areas. This paper describes the hardware and software architecture of a medium-cost floating-point image processing and display subsystem for the NeXT computer, and its applications as a medical imaging workstation. Medical imaging applications of the workstation include use in a Picture Archiving and Communications System (PACS), as a multimodal image processing and 3-D graphics workstation for a broad range of imaging modalities, and as an electronic alternator utilizing its multiple-monitor display capability and large, fast frame buffer. The subsystem provides a 2048 x 2048 x 32-bit frame buffer (16 Mbytes of image storage) and supports both 8-bit gray scale and 32-bit true color images. When used to display 8-bit gray scale images, up to four different 256-color palettes may be used for each of four 2K x 2K x 8-bit image frames. Three of these image frames can be used simultaneously to provide pixel-selectable region of interest display. A 1280 x 1024 pixel screen with 1:1 aspect ratio can be windowed into the frame buffer for display of any portion of the processed image or images. In addition, the system provides hardware support for integer zoom and an 82-color cursor. This subsystem is implemented on an add-in board occupying a single slot in the NeXT computer. 
Up to three boards may be added to the NeXT for multiple display capability (e.g., three 1280 x 1024 monitors, each with a 16-Mbyte frame buffer). Each add-in board provides an expansion connector to which an optional image computing coprocessor board may be added. Each coprocessor board supports up to four processors for a peak performance of 160 MFLOPS. The coprocessors can execute programs from external high-speed microcode memory as well as built-in internal microcode routines. The internal microcode routines provide support for 2-D and 3-D graphics operations, matrix and vector arithmetic, and image processing in integer, IEEE single-precision floating point, or IEEE double-precision floating point. In addition to providing a library of C functions which links the NeXT computer to the add-in board and supports its various operational modes, algorithms and medical imaging application programs are being developed and implemented for image display and enhancement. As an extension to the built-in algorithms of the coprocessors, 2-D Fast Fourier Transform (FFT), 2-D inverse FFT, convolution, warping and other algorithms (e.g., Discrete Cosine Transform) which exploit the parallel architecture of the coprocessor board are being implemented.

  19. The ALARM Experiment

    ERIC Educational Resources Information Center

    Gerhardt, Ira

    2015-01-01

    An experiment was conducted over three recent semesters of an introductory calculus course to test whether it was possible to quantify the effect that difficulty with basic algebraic and arithmetic computation had on individual performance. Points lost during the term were classified as being due to either algebraic and arithmetic mistakes…

  20. A Flexible VHDL Floating Point Module for Control Algorithm Implementation in Space Applications

    NASA Astrophysics Data System (ADS)

    Padierna, A.; Nicoleau, C.; Sanchez, J.; Hidalgo, I.; Elvira, S.

    2012-08-01

    The implementation of control loops for space applications is an area with great potential. However, the characteristics of this kind of system, such as its wide dynamic range of numeric values, make fixed-point algorithms inadequate. At the same time, the generic chips available for processing floating point data are, in general, not qualified to operate in space environments, and using an IP module in a space-qualified FPGA/ASIC is not viable due to the low number of logic cells available in these devices, so a viable alternative must be found. For these reasons, this paper presents a VHDL Floating Point Module. This proposal allows floating point algorithms to be designed and executed, with acceptable occupancy, in FPGAs/ASICs qualified for space environments.

  1. Genetic analysis of floating Enteromorpha prolifera in the Yellow Sea with AFLP marker

    NASA Astrophysics Data System (ADS)

    Liu, Cui; Zhang, Jing; Sun, Xiaoyu; Li, Jian; Zhang, Xi; Liu, Tao

    2011-09-01

    Extremely large accumulations of the green alga Enteromorpha prolifera have floated along China's coastal region of the Yellow Sea ever since the summer of 2008. Amplified Fragment Length Polymorphism (AFLP) analysis was applied to assess the genetic diversity and relationships among E. prolifera samples collected from 9 affected areas of the Yellow Sea. Two hundred reproducible fragments were generated with 8 AFLP primer combinations, of which 194 (97%) were polymorphic. The average Nei's genetic diversity, the coefficient of genetic differentiation (Gst), and the average gene flow estimated from Gst in the 9 populations were 0.4018, 0.6404, and 0.2807, respectively. Cluster analysis based on the unweighted pair group method with arithmetic averages (UPGMA) showed that the genetic relationships within one population or among different populations were all related to their collecting locations and sampling times. Large genetic differentiation was detected among the populations. The E. prolifera originated from different areas and was undergoing a course of mixing.

  2. 40 CFR 426.50 - Applicability; description of the float glass manufacturing subcategory.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... float glass manufacturing subcategory. 426.50 Section 426.50 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS GLASS MANUFACTURING POINT SOURCE CATEGORY Float Glass Manufacturing Subcategory § 426.50 Applicability; description of the float glass...

  3. 40 CFR 426.50 - Applicability; description of the float glass manufacturing subcategory.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... float glass manufacturing subcategory. 426.50 Section 426.50 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS GLASS MANUFACTURING POINT SOURCE CATEGORY Float Glass Manufacturing Subcategory § 426.50 Applicability; description of the float glass...

  4. [Floating-point operators and calculation engines and their application to real-time simulation on FPGA]

    NASA Astrophysics Data System (ADS)

    Ould Bachir, Tarek

    The real-time simulation of electrical networks has attracted keen industrial interest in recent years, motivated by the substantial development cost reductions that such a prototyping approach can offer. Real-time simulation allows the progressive inclusion of real hardware during development, allowing it to be tested under realistic conditions. However, CPU-based simulations suffer from certain limitations, such as the difficulty of reaching time-steps of a few microseconds, an important challenge posed by modern power converters. Hence, industrial practitioners have adopted the FPGA as a platform of choice for implementing calculation engines dedicated to the rapid real-time simulation of electrical networks. The reconfigurable technology broke the 5 kHz switching-frequency barrier that is characteristic of CPU-based simulations. Moreover, FPGA-based real-time simulation offers many advantages, including the reduced latency of the simulation loop obtained through direct access to sensors and actuators. The fixed-point format is paradigmatic of FPGA-based digital signal processing. However, the format imposes a time penalty on the development process, since the designer has to assess the required precision of every model variable. This fact has prompted an important research effort on the use of the floating-point format for the simulation of electrical networks. One of the main challenges in the use of the floating-point format is the long latency of the elementary arithmetic operators, particularly when an adder is used as an accumulator, an important building block for the implementation of integration rules such as the trapezoidal method. Hence, single-cycle floating-point accumulation forms the core of this research work. Our results help build such operators as accumulators, multiply-accumulators (MACs), and dot-product (DP) operators. These operators play a key role in the implementation of the proposed calculation engines.
Therefore, this thesis contributes to the realm of FPGA-based real-time simulation in several ways. The research work proposes a new summation algorithm, a generalization of the so-called self-alignment technique; the new formulation is broader and simpler in both its expression and its hardware implementation. Our research helps formulate criteria that guarantee good accuracy, the criteria being established on a theoretical as well as an empirical basis. Moreover, the thesis offers a comprehensive analysis of the use of the redundant high-radix carry-save (HRCS) format, which is used to perform rapid additions of large mantissas. Two new HRCS operators are also proposed, namely an endomorphic adder and an HRCS-to-conventional converter. Once the means to single-cycle accumulation is defined as a combination of the self-alignment technique and the HRCS format, the research focuses on the FPGA implementation of SIMD calculation engines using parallel floating-point MACs or DPs. The proposed operators are characterized by low latencies, allowing the engines to reach very low time-steps. The document finally discusses the modelling of power electronic circuits, and concludes with the presentation of a versatile calculation engine capable of simulating power converters with arbitrary topologies and up to 24 switches, while achieving time-steps below 1 μs and allowing switching frequencies in the tens of kilohertz. The latter realization has led to the commercialization of a product by our industrial partner.
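    The single-cycle accumulation theme above can be illustrated in software. The sketch below is not the thesis's HRCS hardware; it is a minimal Python analogue of mantissa alignment into one wide fixed-point register (a Kulisch-style accumulator), showing why aligned accumulation avoids the per-addition rounding of an ordinary floating-point adder. The `frac_bits` register-sizing parameter is an assumption for illustration.

```python
import math

def wide_accumulate(values, frac_bits=200):
    """Accumulate floats exactly by aligning each mantissa into one
    wide (arbitrary-precision) fixed-point register, in the spirit of
    a Kulisch-style accumulator. frac_bits fixes the binary point."""
    acc = 0  # the wide fixed-point register, held as a Python int
    for v in values:
        m, e = math.frexp(v)        # v = m * 2**e with 0.5 <= |m| < 1
        mant = int(m * 2**53)       # 53-bit signed integer mantissa
        shift = frac_bits + e - 53  # align to the common binary point
        acc += (mant << shift) if shift >= 0 else (mant >> -shift)
    return acc / 2**frac_bits       # back to float: a single rounding

# Naive float summation loses the small terms; the wide register keeps them.
vals = [1e16, 1.0, -1e16, 1.0]
print(sum(vals))              # naive running sum: small terms partially lost
print(wide_accumulate(vals))  # exact: 2.0
```

Because all intermediate additions are exact, only the final conversion rounds, so the result is the correctly rounded sum of the inputs.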

  5. Generating and executing programs for a floating point single instruction multiple data instruction set architecture

    DOEpatents

    Gschwind, Michael K

    2013-04-16

    Mechanisms for generating and executing programs for a floating point (FP) only single instruction multiple data (SIMD) instruction set architecture (ISA) are provided. A computer program product comprising a computer recordable medium having a computer readable program recorded thereon is provided. The computer readable program, when executed on a computing device, causes the computing device to receive one or more instructions and execute the one or more instructions using logic in an execution unit of the computing device. The logic implements a floating point (FP) only single instruction multiple data (SIMD) instruction set architecture (ISA), based on data stored in a vector register file of the computing device. The vector register file is configured to store both scalar and floating point values as vectors having a plurality of vector elements.

  6. Improving energy efficiency in handheld biometric applications

    NASA Astrophysics Data System (ADS)

    Hoyle, David C.; Gale, John W.; Schultz, Robert C.; Rakvic, Ryan N.; Ives, Robert W.

    2012-06-01

    With improved smartphone and tablet technology, it is becoming increasingly feasible to implement powerful biometric recognition algorithms on portable devices. Typical iris recognition algorithms, such as Ridge Energy Direction (RED), utilize two-dimensional convolution in their implementation. This paper explores the energy-consumption implications of 12 different methods of implementing two-dimensional convolution on a portable device. Typically, convolution is implemented using floating-point operations. If a given algorithm implemented integer convolution instead of floating-point convolution, it could drastically reduce the energy consumed by the processor. The 12 methods compared fall into 4 major categories: Integer C, Integer Java, Floating Point C, and Floating Point Java. Each major category is further divided into 3 implementations: variable-size looped convolution, static-size looped convolution, and unrolled looped convolution. All testing was performed on the HTC Thunderbolt, with energy measured directly using a Tektronix TDS5104B Digital Phosphor oscilloscope. Results indicate that energy savings as high as 75% are possible by using Integer C instead of Floating Point C. Considering the relative proportion of processing time that convolution is responsible for in a typical algorithm, the savings in energy would likely result in significantly greater time between battery charges.
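    The integer-versus-float trade-off described above can be sketched as follows. The kernel and the 2**8 scale factor are hypothetical choices for illustration, not the RED algorithm's actual coefficients; the point is that a fixed-point (integer) kernel reproduces the float result up to the kernel's quantization, while using only integer multiply-adds in the inner loop.

```python
def conv2d(img, kernel):
    """Plain correlation-style 2D convolution (valid mode), dtype-agnostic."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(img) - kh + 1, len(img[0]) - kw + 1
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            s = 0
            for i in range(kh):
                for j in range(kw):
                    s += img[y + i][x + j] * kernel[i][j]
            out[y][x] = s
    return out

# Float kernel and its fixed-point (integer) counterpart, scaled by 2**8.
SCALE = 256
fkernel = [[0.25, 0.5], [0.5, -0.25]]
ikernel = [[round(c * SCALE) for c in row] for row in fkernel]

img = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
fout = conv2d(img, fkernel)
# Integer convolution, then one rescale per output pixel.
iout = [[v / SCALE for v in row] for row in conv2d(img, ikernel)]
print(fout)
print(iout)  # matches fout up to the quantization of the kernel
```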

  7. 40 CFR 63.1063 - Floating roof requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... the point of refloating the floating roof shall be continuous and shall be performed as soon as... 40 Protection of Environment 10 2010-07-01 2010-07-01 false Floating roof requirements. 63.1063...) National Emission Standards for Storage Vessels (Tanks)-Control Level 2 § 63.1063 Floating roof...

  8. 50 CFR 679.94 - Economic data report (EDR) for the Amendment 80 sector.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...: NMFS, Alaska Fisheries Science Center, Economic Data Reports, 7600 Sand Point Way NE, F/AKC2, Seattle... Operation Description of code Code NMFS Alaska region ADF&G FCP Catcher/processor Floating catcher processor. FLD Mothership Floating domestic mothership. IFP Stationary Floating Processor Inshore floating...

  9. 50 CFR 86.13 - What is boating infrastructure?

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ..., currents, etc., that provide a temporary safe anchorage point or harbor of refuge during storms); (f) Floating docks and fixed piers; (g) Floating and fixed breakwaters; (h) Dinghy docks (floating or fixed...

  10. Developing an Energy Policy for the United States

    ERIC Educational Resources Information Center

    Keefe, Pat

    2014-01-01

    Al Bartlett's video "Arithmetic, Population, and Energy" spells out many of the complex issues related to energy use in our society. Bartlett makes the point that basic arithmetic is the fundamental obstacle preventing us from being able to grasp the relationships between energy consumption, population, and lifestyles. In an earlier…

  11. Predator Arithmetic

    ERIC Educational Resources Information Center

    Shutler, Paul M. E.; Fong, Ng Swee

    2010-01-01

    Modern Hindu-Arabic numeration is the end result of a long period of evolution, and is clearly superior to any system that has gone before, but is it optimal? We compare it to a hypothetical base 5 system, which we dub Predator arithmetic, and judge which of the two systems is superior from a mathematics education point of view. We find that…

  12. Spontaneous Meta-Arithmetic as a First Step toward School Algebra

    ERIC Educational Resources Information Center

    Caspi, Shai; Sfard, Anna

    2012-01-01

    Taking as the point of departure the vision of school algebra as a formalized meta-discourse of arithmetic, we have been following five pairs of 7th grade students as they progress in algebraic discourse during 24 months, from their informal algebraic talk to the formal algebraic discourse, as taught in school. Our analysis follows changes that…

  13. A High-Level Formalization of Floating-Point Number in PVS

    NASA Technical Reports Server (NTRS)

    Boldo, Sylvie; Munoz, Cesar

    2006-01-01

    We develop a formalization of floating-point numbers in PVS based on a well-known formalization in Coq. We first describe the definitions of all the needed notions, e.g., floating-point number, format, rounding modes, etc.; then, we present an application to polynomial evaluation for elementary function evaluation. The application already existed in Coq, but our formalization shows a clear improvement in the quality of the result due to the automation provided by PVS. We finally integrate our formalization into a PVS hardware-level formalization of the IEEE-854 standard previously developed at NASA.
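    The (sign, exponent, mantissa) view of floating-point numbers that such formalizations manipulate can be made concrete in a few lines. This sketch decomposes an IEEE-754 binary64 value with Python's `struct`; the paper itself works in PVS, not Python.

```python
import struct

def decompose(x):
    """Split an IEEE-754 double into (sign, biased exponent, fraction bits)."""
    bits = struct.unpack('<Q', struct.pack('<d', x))[0]
    sign = bits >> 63
    exponent = (bits >> 52) & 0x7FF     # 11 biased exponent bits
    mantissa = bits & ((1 << 52) - 1)   # 52 fraction bits
    return sign, exponent, mantissa

def recompose(sign, exponent, mantissa):
    """Inverse of decompose: rebuild the double from its fields."""
    bits = (sign << 63) | (exponent << 52) | mantissa
    return struct.unpack('<d', struct.pack('<Q', bits))[0]

s, e, m = decompose(-1.5)
print(s, e, m)  # 1, 1023, 2**51: i.e. -1.5 = -(1 + 0.5) * 2**0
assert recompose(s, e, m) == -1.5
```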

  14. 33 CFR 147.815 - ExxonMobil Hoover Floating OCS Facility safety zone.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 33 Navigation and Navigable Waters 2 2010-07-01 2010-07-01 false ExxonMobil Hoover Floating OCS... Floating OCS Facility safety zone. (a) Description. The ExxonMobil Hoover Floating OCS Facility, Alaminos... (1640.4 feet) from each point on the structure's outer edge is a safety zone. (b) Regulation. No vessel...

  15. Peer-to-peer Monte Carlo simulation of photon migration in topical applications of biomedical optics

    NASA Astrophysics Data System (ADS)

    Doronin, Alexander; Meglinski, Igor

    2012-09-01

    In the framework of further development of the unified approach to photon migration in complex turbid media, such as biological tissues, we present a peer-to-peer (P2P) Monte Carlo (MC) code. Object-oriented programming is used to generalize the MC model for multipurpose use in various applications of biomedical optics. The online user interface providing multiuser access is developed using modern web technologies such as Microsoft Silverlight and ASP.NET. The emerging P2P network, utilizing computers with different types of compute-unified-device-architecture-capable graphics processing units (GPUs), is applied for acceleration and to overcome the limitations imposed by multiuser access in the online MC computational tool. The developed P2P MC was validated by comparing the simulated diffuse reflectance and fluence rate distribution for a semi-infinite scattering medium with known analytical results, with results of the adding-doubling method, and with other GPU-based MC techniques developed in the past. The best speedup of processing multiuser requests, in a range of 4 to 35 s, was achieved using single-precision computing, while double-precision computing for floating-point arithmetic operations provides higher accuracy.
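    The single- versus double-precision accuracy trade-off noted in the abstract can be reproduced with a toy accumulation: rounding the running sum to binary32 after every addition mimics a single-precision GPU accumulator. The 0.1 increment and the iteration count are hypothetical, not the paper's data.

```python
import struct

def to_f32(x):
    """Round a Python float (binary64) to the nearest binary32 value."""
    return struct.unpack('<f', struct.pack('<f', x))[0]

def accumulate(n, term, single=True):
    """Accumulate n equal contributions, optionally rounding the running
    sum to single precision after each addition (as a GPU float would)."""
    acc = 0.0
    for _ in range(n):
        acc = acc + term
        if single:
            acc = to_f32(acc)
    return acc

n, term = 10**5, 0.1
print(accumulate(n, term, single=True))   # drifts visibly from 10000
print(accumulate(n, term, single=False))  # much closer to 10000
```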

  17. Parallel processing of real-time dynamic systems simulation on OSCAR (Optimally SCheduled Advanced multiprocessoR)

    NASA Technical Reports Server (NTRS)

    Kasahara, Hironori; Honda, Hiroki; Narita, Seinosuke

    1989-01-01

    Parallel processing of real-time dynamic systems simulation on a multiprocessor system named OSCAR is presented. In the simulation of dynamic systems, the same calculations are generally repeated at every time step. However, the Do-all and Do-across techniques for parallel processing cannot be applied to such simulations, since data dependencies exist from the end of one iteration to the beginning of the next, and furthermore data input and output are required in every sampling period. Therefore, parallelism inside the calculation required for a single time step, i.e., a large basic block consisting of arithmetic assignment statements, must be used. In the proposed method, near-fine-grain tasks, each consisting of one or more floating-point operations, are generated to extract parallelism from the calculation and are assigned to processors by optimal static scheduling at compile time, in order to reduce the large run-time overhead caused by the use of near-fine-grain tasks. The practicality of the scheme is demonstrated on OSCAR (Optimally SCheduled Advanced multiprocessoR), which has been developed to exploit the advantages of static scheduling algorithms to the maximum extent.
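    The compile-time static-scheduling idea can be sketched minimally. This is a greedy list scheduler with a cost-based priority, a deliberate simplification: OSCAR's actual scheduler is far more sophisticated, and the task graph below is a hypothetical three-operation dataflow fragment.

```python
def list_schedule(tasks, deps, n_procs):
    """Static list scheduling of fine-grain tasks onto processors at
    'compile time': repeatedly assign a ready task to the processor that
    becomes free first. tasks: {name: cost}; deps: {name: predecessors}.
    Returns per-task (processor, start, finish)."""
    free = [0.0] * n_procs  # time at which each processor frees up
    done = {}               # task -> finish time
    sched = {}
    while len(done) < len(tasks):
        ready = [t for t in tasks if t not in done
                 and all(p in done for p in deps.get(t, ()))]
        # Prioritize by cost (a stand-in for critical-path priority).
        t = max(ready, key=lambda r: tasks[r])
        p = min(range(n_procs), key=lambda i: free[i])
        start = max([free[p]] + [done[q] for q in deps.get(t, ())])
        done[t] = start + tasks[t]
        free[p] = done[t]
        sched[t] = (p, start, done[t])
    return sched

# Two multiplies feed an add: a tiny dataflow graph of FP operations.
s = list_schedule({'m1': 2, 'm2': 2, 'a': 1}, {'a': {'m1', 'm2'}}, n_procs=2)
print(s)  # m1 and m2 run in parallel; 'a' starts once both finish
```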

  18. Graphics processing unit (GPU)-based computation of heat conduction in thermally anisotropic solids

    NASA Astrophysics Data System (ADS)

    Nahas, C. A.; Balasubramaniam, Krishnan; Rajagopal, Prabhu

    2013-01-01

    Numerical modeling of anisotropic media is a computationally intensive task since it brings additional complexity to the field problem in such a way that the physical properties are different in different directions. Largely used in the aerospace industry because of their lightweight nature, composite materials are a very good example of thermally anisotropic media. With advancements in video gaming technology, parallel processors are much cheaper today and accessibility to higher-end graphical processing devices has increased dramatically over the past couple of years. Since these massively parallel GPUs are very good in handling floating point arithmetic, they provide a new platform for engineers and scientists to accelerate their numerical models using commodity hardware. In this paper we implement a parallel finite difference model of thermal diffusion through anisotropic media using the NVIDIA CUDA (Compute Unified device Architecture). We use the NVIDIA GeForce GTX 560 Ti as our primary computing device which consists of 384 CUDA cores clocked at 1645 MHz with a standard desktop pc as the host platform. We compare the results from standard CPU implementation for its accuracy and speed and draw implications for simulation using the GPU paradigm.
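    The kind of stencil the paper offloads to the GPU can be sketched as a CPU reference implementation. The grid, the direction-dependent conductivities `kx` and `ky`, and the time step below are hypothetical values (chosen so the arithmetic is exact in binary floating point), not the paper's material data.

```python
def diffuse_step(T, kx, ky, dt=0.125, dx=1.0):
    """One explicit finite-difference step of 2D heat conduction with
    direction-dependent conductivities kx, ky (thermal anisotropy)."""
    n, m = len(T), len(T[0])
    new = [row[:] for row in T]
    for i in range(1, n - 1):
        for j in range(1, m - 1):
            d2x = T[i][j + 1] - 2 * T[i][j] + T[i][j - 1]
            d2y = T[i + 1][j] - 2 * T[i][j] + T[i - 1][j]
            new[i][j] = T[i][j] + dt / dx**2 * (kx * d2x + ky * d2y)
    return new

# Hot spot in the middle of a cold plate; kx > ky spreads heat faster in x.
T = [[0.0] * 5 for _ in range(5)]
T[2][2] = 100.0
T = diffuse_step(T, kx=1.0, ky=0.25)
print(T[2][1], T[2][3])  # x-neighbours warmer...
print(T[1][2], T[3][2])  # ...than y-neighbours
```

A GPU version would assign one thread per grid point and evaluate the same update in parallel; the serial loop above is the correctness reference such implementations are checked against.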

  19. [Study on the experimental application of floating-reference method to noninvasive blood glucose sensing].

    PubMed

    Yu, Hui; Qi, Dan; Li, Heng-da; Xu, Ke-xin; Yuan, Wei-jie

    2012-03-01

    Weak signal, low instrument signal-to-noise ratio, continuous variation of the human physiological environment, and interference from other components in blood make it difficult to extract blood glucose information from the near-infrared spectrum in noninvasive blood glucose measurement. The floating-reference method, which analyses the effect of glucose concentration variation on the absorption and scattering coefficients, acquires spectra at the reference point and at the measurement point, where the light-intensity variations caused by absorption and scattering cancel out and are largest, respectively. By using the spectrum from the reference point as a reference, the floating-reference method can reduce the interference caused by variations in the physiological environment and the experimental circumstances. In the present paper, the effectiveness of the floating-reference method in improving prediction precision and stability was assessed through application experiments. A comparison was made between models whose data were processed with and without the floating-reference method. The results showed that the root mean square error of prediction (RMSEP) decreased by up to 34.7%. The floating-reference method could reduce the influence of changes in the samples' state and of instrument noise and drift, and effectively improve the models' prediction precision and stability.

  20. Real object-based 360-degree integral-floating display using multiple depth camera

    NASA Astrophysics Data System (ADS)

    Erdenebat, Munkh-Uchral; Dashdavaa, Erkhembaatar; Kwon, Ki-Chul; Wu, Hui-Ying; Yoo, Kwan-Hee; Kim, Young-Seok; Kim, Nam

    2015-03-01

    A novel 360-degree integral-floating display based on a real object is proposed. The general procedure of the display system is similar to that of conventional 360-degree integral-floating displays. Unlike previously presented 360-degree displays, the proposed system displays a 3D image generated from a real object in a 360-degree viewing zone. In order to do so, multiple depth cameras have been utilized to acquire depth information around the object. The 3D point cloud representations of the real object are then reconstructed according to the acquired depth information. Using a special point cloud registration method, the multiple virtual 3D point cloud representations captured by each depth camera are combined into a single synthetic 3D point cloud model, and elemental image arrays are generated for the newly synthesized 3D point cloud model from the given anamorphic optic system's angular step. The theory has been verified experimentally, and the results show that the proposed 360-degree integral-floating display is an excellent way to display a real object in a 360-degree viewing zone.

  1. Evaluation of the Triple Code Model of numerical processing-Reviewing past neuroimaging and clinical findings.

    PubMed

    Siemann, Julia; Petermann, Franz

    2018-01-01

    This review reconciles past findings on numerical processing with key assumptions of the most predominant model of arithmetic in the literature, the Triple Code Model (TCM). This is implemented by reporting diverse findings in the literature ranging from behavioral studies on basic arithmetic operations over neuroimaging studies on numerical processing to developmental studies concerned with arithmetic acquisition, with a special focus on developmental dyscalculia (DD). We evaluate whether these studies corroborate the model and discuss possible reasons for contradictory findings. A separate section is dedicated to the transfer of TCM to arithmetic development and to alternative accounts focusing on developmental questions of numerical processing. We conclude with recommendations for future directions of arithmetic research, raising questions that require answers in models of healthy as well as abnormal mathematical development. This review assesses the leading model in the field of arithmetic processing (Triple Code Model) by presenting knowledge from interdisciplinary research. It assesses the observed contradictory findings and integrates the resulting opposing viewpoints. The focus is on the development of arithmetic expertise as well as abnormal mathematical development. The original aspect of this article is that it points to a gap in research on these topics and provides possible solutions for future models. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Performance of FORTRAN floating-point operations on the Flex/32 multicomputer

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas W.

    1987-01-01

    A series of experiments has been run to examine the floating-point performance of FORTRAN programs on the Flex/32 (Trademark) computer. The experiments are described, and the timing results are presented. The time required to execute a floating-point operation is found to vary considerably depending on a number of factors. One factor of particular interest from an algorithm design standpoint is the difference in speed between common memory accesses and local memory accesses. Common memory accesses were found to be slower, and guidelines are given for determining when it may be cost effective to copy data from common to local memory.

  3. Non-uniqueness of the point of application of the buoyancy force

    NASA Astrophysics Data System (ADS)

    Kliava, Janis; Mégel, Jacques

    2010-07-01

    Even though the buoyancy force (also known as the Archimedes force) has always been an important topic of academic studies in physics, its point of application has not been explicitly identified yet. We present a quantitative approach to this problem based on the concept of the hydrostatic energy, considered here for a general shape of the cross-section of a floating body and for an arbitrary angle of heel. We show that the location of the point of application of the buoyancy force essentially depends (i) on the type of motion experienced by the floating body and (ii) on the definition of this point. In a rolling/pitching motion, considerations involving the rotational moment lead to a particular dynamical point of application of the buoyancy force, and for some simple shapes of the floating body this point coincides with the well-known metacentre. On the other hand, from the work-energy relation it follows that in the rolling/pitching motion the energetical point of application of this force is rigidly connected to the centre of buoyancy; in contrast, in a vertical translation this point is rigidly connected to the centre of gravity of the body. Finally, we consider the location of the characteristic points of the floating bodies for some particular shapes of immersed cross-sections. The paper is intended for higher education level physics teachers and students.

  4. VLSI Design Techniques for Floating-Point Computation

    DTIC Science & Technology

    1988-11-18

    J. C. Gibson, The Gibson Mix, IBM Systems Development Division Tech. Report (June 1970). [Heni83] A. Heninger, The Zilog Z8070 Floating-Point... [Figure 7.2: clock distribution among the broadcast clock generator, divide-by-N module, and clock communication bus]

  5. Implementing direct, spatially isolated problems on transputer networks

    NASA Technical Reports Server (NTRS)

    Ellis, Graham K.

    1988-01-01

    Parametric studies were performed on transputer networks of up to 40 processors to determine how to implement and maximize the performance of the solution of problems where no processor-to-processor data transfer is required for the solution (spatially isolated problems). Two types of problems were investigated: a computationally intensive problem whose solution required the transmission of 160 bytes of data through the parallel network, and a communication-intensive example that required the transmission of 3 Mbytes of data through the network. This data consists of solutions being sent back to the host processor, not intermediate results for another processor to work on. Studies were performed on both integer and floating-point transputers. The latter features an on-chip floating-point math unit and offers approximately an order-of-magnitude performance increase over the integer transputer on real-valued computations. The results indicate that a minimum amount of work is required on each node per communication to achieve high network speedups (efficiencies). The floating-point processor requires approximately an order of magnitude more work per communication than the integer processor because of the floating-point unit's increased computing capacity.

  6. CT image reconstruction with half precision floating-point values.

    PubMed

    Maaß, Clemens; Baer, Matthias; Kachelrieß, Marc

    2011-07-01

    Analytic CT image reconstruction is a computationally demanding task. Currently, the even more demanding iterative reconstruction algorithms are finding their way into clinical routine because their image quality is superior to analytic image reconstruction. The authors thoroughly analyze a so-far-unconsidered but valuable tool of tomorrow's reconstruction hardware (CPU and GPU) that allows the forward projection and backprojection steps, the computationally most demanding parts of any reconstruction algorithm, to be implemented much more efficiently. Instead of the standard 32 bit floating-point values (float), a recently standardized floating-point value with 16 bit (half) is adopted for data representation in image domain and in rawdata domain. The reduction in the total data amount reduces the traffic on the memory bus, which is the bottleneck of today's high-performance algorithms, by 50%. In CT simulations and CT measurements, float reconstructions (gold standard) and half reconstructions are visually compared via difference images and by quantitative image quality evaluation. This is done for analytical reconstruction (filtered backprojection) and iterative reconstruction (ordered subset SART). The magnitude of quantization noise, which is caused by the reduction in data precision of both rawdata and image data during image reconstruction, is negligible. This is clearly shown for filtered backprojection and iterative ordered subset SART reconstruction. In filtered backprojection, the implementation of the backprojection should be optimized for low data precision if the image data are represented in half format. In ordered subset SART image reconstruction, no adaptations are necessary and the convergence speed remains unchanged. Half-precision floating-point values thus allow CT image reconstruction to be sped up without compromising image quality.
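    The half-precision quantization the authors evaluate can be probed directly: Python's `struct` supports the IEEE-754 binary16 format, so the quantization error of storing a value in half can be measured in a few lines. The sample values below are hypothetical stand-ins for rawdata samples.

```python
import struct

def to_half(x):
    """Round a float to the nearest IEEE-754 binary16 (half) value."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

# binary16 keeps a 10-bit fraction, so the relative quantization error
# of a normalized value is at most 2**-11 (about 3 decimal digits).
pixel = 0.123456789
q = to_half(pixel)
print(q, abs(q - pixel) / pixel)

# Hypothetical rawdata samples: each is reproduced within half an ulp.
raw = [1.0, 3.14159, 1234.5, 0.0078125]
assert all(abs(to_half(v) - v) <= abs(v) * 2**-11 for v in raw)
```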

  7. Individual differences in children's understanding of inversion and arithmetical skill.

    PubMed

    Gilmore, Camilla K; Bryant, Peter

    2006-06-01

    In order to develop arithmetic expertise, children must understand arithmetic principles, such as the inverse relationship between addition and subtraction, in addition to learning calculation skills. We report two experiments that investigate children's understanding of the principle of inversion and the relationship between their conceptual understanding and arithmetical skills. A group of 127 children from primary schools took part in the study. The children were from 2 age groups (6-7 and 8-9 years). Children's accuracy on inverse and control problems in a variety of presentation formats and in canonical and non-canonical forms was measured. Tests of general arithmetic ability were also administered. Children consistently performed better on inverse than control problems, which indicates that they could make use of the inverse principle. Presentation format affected performance: picture presentation allowed children to apply their conceptual understanding flexibly regardless of the problem type, while word problems restricted their ability to use their conceptual knowledge. Cluster analyses revealed three subgroups with different profiles of conceptual understanding and arithmetical skill. Children in the 'high ability' and 'low ability' groups showed conceptual understanding that was in line with their arithmetical skill, whilst a third group of children had more advanced conceptual understanding than arithmetical skill. The three subgroups may represent different points along a single developmental path or distinct developmental paths. The discovery of the existence of the three groups has important consequences for education. It demonstrates the importance of considering the pattern of individual children's conceptual understanding and problem-solving skills.

  8. Fractionating the neural correlates of individual working memory components underlying arithmetic problem solving skills in children

    PubMed Central

    Metcalfe, Arron W. S.; Ashkenazi, Sarit; Rosenberg-Lee, Miriam; Menon, Vinod

    2013-01-01

    Baddeley and Hitch’s multi-component working memory (WM) model has played an enduring and influential role in our understanding of cognitive abilities. Very little is known, however, about the neural basis of this multi-component WM model and the differential role each component plays in mediating arithmetic problem solving abilities in children. Here, we investigate the neural basis of the central executive (CE), phonological (PL) and visuo-spatial (VS) components of WM during a demanding mental arithmetic task in 7–9 year old children (N=74). The VS component was the strongest predictor of math ability in children and was associated with increased arithmetic complexity-related responses in left dorsolateral and right ventrolateral prefrontal cortices as well as bilateral intra-parietal sulcus and supramarginal gyrus in posterior parietal cortex. Critically, VS, CE and PL abilities were associated with largely distinct patterns of brain response. Overlap between VS and CE components was observed in left supramarginal gyrus and no overlap was observed between VS and PL components. Our findings point to a central role of visuo-spatial WM during arithmetic problem-solving in young grade-school children and highlight the usefulness of the multi-component Baddeley and Hitch WM model in fractionating the neural correlates of arithmetic problem solving during development. PMID:24212504

  9. A hardware-oriented algorithm for floating-point function generation

    NASA Technical Reports Server (NTRS)

    O'Grady, E. Pearse; Young, Baek-Kyu

    1991-01-01

    An algorithm is presented for performing accurate, high-speed, floating-point function generation for univariate functions defined at arbitrary breakpoints. Rapid identification of the breakpoint interval, which includes the input argument, is shown to be the key operation in the algorithm. A hardware implementation which makes extensive use of read/write memories is used to illustrate the algorithm.
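    A software sketch of the described function generator, assuming piecewise-linear interpolation between arbitrary breakpoints: a binary search stands in for the rapid, memory-based interval identification of the hardware implementation, and the breakpoint table below is a hypothetical approximation of x**2.

```python
from bisect import bisect_right

def make_function_generator(breakpoints, values):
    """Piecewise-linear function generator over arbitrary breakpoints.
    Interval identification (the key operation) is a binary search here;
    the hardware uses read/write memories instead."""
    def f(x):
        # Identify the breakpoint interval containing x (clamped at ends).
        i = bisect_right(breakpoints, x) - 1
        i = max(0, min(i, len(breakpoints) - 2))
        x0, x1 = breakpoints[i], breakpoints[i + 1]
        y0, y1 = values[i], values[i + 1]
        return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    return f

# Hypothetical table approximating x**2 at uneven breakpoints.
bp = [0.0, 1.0, 1.5, 4.0]
sq = make_function_generator(bp, [x * x for x in bp])
print(sq(1.0))  # exact at a breakpoint
print(sq(2.0))  # linear interpolation between 1.5 and 4.0
```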

  10. An algorithm for the arithmetic classification of multilattices.

    PubMed

    Indelicato, Giuliana

    2013-01-01

    A procedure for the construction and the classification of monoatomic multilattices in arbitrary dimension is developed. The algorithm allows one to determine the location of the points of all monoatomic multilattices with a given symmetry, or to determine whether two assigned multilattices are arithmetically equivalent. This approach is based on ideas from integral matrix theory, in particular the reduction to the Smith normal form, and can be coded to provide a classification software package.
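    The arithmetic-equivalence test mentioned above reduces, over the integers, to comparing Smith normal forms. A compact sketch computes the invariant factors via determinantal divisors (a textbook route to the Smith normal form, not necessarily the paper's algorithm): d_k is the gcd of all k-by-k minors, and the k-th invariant factor is d_k / d_(k-1).

```python
from math import gcd
from itertools import combinations

def minors(M, k):
    """All k-by-k minors (determinants of k-by-k submatrices) of M."""
    n, m = len(M), len(M[0])
    def det(rows, cols):
        if len(rows) == 1:
            return M[rows[0]][cols[0]]
        return sum((-1) ** j * M[rows[0]][cols[j]] *
                   det(rows[1:], cols[:j] + cols[j + 1:])
                   for j in range(len(cols)))
    return [det(list(r), list(c))
            for r in combinations(range(n), k)
            for c in combinations(range(m), k)]

def smith_invariants(M):
    """Invariant factors of an integer matrix via determinantal divisors.
    Two integer matrices of the same shape are equivalent over Z iff
    their invariant factors agree."""
    n = min(len(M), len(M[0]))
    prev, out = 1, []
    for k in range(1, n + 1):
        d = 0
        for m in minors(M, k):
            d = gcd(d, m)
        if d == 0:
            break  # rank deficient: remaining factors are zero
        out.append(d // prev)
        prev = d
    return out

print(smith_invariants([[2, 4], [6, 8]]))  # [2, 4]: d1 = 2, d2 = |det| = 8
```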

  11. A computerized symbolic integration technique for development of triangular and quadrilateral composite shallow-shell finite elements

    NASA Technical Reports Server (NTRS)

    Anderson, C. M.; Noor, A. K.

    1975-01-01

    Computerized symbolic integration was used in conjunction with group-theoretic techniques to obtain analytic expressions for the stiffness, geometric stiffness, consistent mass, and consistent load matrices of composite shallow shell structural elements. The elements are shear flexible and have variable curvature. A stiffness (displacement) formulation was used with the fundamental unknowns consisting of both the displacement and rotation components of the reference surface of the shell. The triangular elements have six and ten nodes; the quadrilateral elements have four and eight nodes and can have internal degrees of freedom associated with displacement modes which vanish along the edges of the element (bubble modes). The stiffness, geometric stiffness, consistent mass, and consistent load coefficients are expressed as linear combinations of integrals (over the element domain) whose integrands are products of shape functions and their derivatives. The evaluation of the elemental matrices is divided into two separate problems - determination of the coefficients in the linear combination and evaluation of the integrals. The integrals are performed symbolically by using the symbolic-and-algebraic-manipulation language MACSYMA. The efficiency of using symbolic integration in the element development is demonstrated by comparing the number of floating-point arithmetic operations required in this approach with those required by a commonly used numerical quadrature technique.

  12. GRay: A Massively Parallel GPU-based Code for Ray Tracing in Relativistic Spacetimes

    NASA Astrophysics Data System (ADS)

    Chan, Chi-kwan; Psaltis, Dimitrios; Özel, Feryal

    2013-11-01

    We introduce GRay, a massively parallel integrator designed to trace the trajectories of billions of photons in a curved spacetime. This graphics-processing-unit (GPU)-based integrator employs the stream processing paradigm, is implemented in CUDA C/C++, and runs on nVidia graphics cards. The peak performance of GRay using single-precision floating-point arithmetic on a single GPU exceeds 300 GFLOP (or 1 ns per photon per time step). For a realistic problem, where the peak performance cannot be reached, GRay is two orders of magnitude faster than existing central-processing-unit-based ray-tracing codes. This performance enhancement allows more effective searches of large parameter spaces when comparing theoretical predictions of images, spectra, and light curves from the vicinities of compact objects to observations. GRay can also perform on-the-fly ray tracing within general relativistic magnetohydrodynamic algorithms that simulate accretion flows around compact objects. Making use of this algorithm, we calculate the properties of the shadows of Kerr black holes and the photon rings that surround them. We also provide accurate fitting formulae of their dependencies on black hole spin and observer inclination, which can be used to interpret upcoming observations of the black holes at the center of the Milky Way, as well as M87, with the Event Horizon Telescope.

  13. Numerically stable formulas for a particle-based explicit exponential integrator

    NASA Astrophysics Data System (ADS)

    Nadukandi, Prashanth

    2015-05-01

    Numerically stable formulas are presented for the closed-form analytical solution of the X-IVAS scheme in 3D. This scheme is a state-of-the-art particle-based explicit exponential integrator developed for the particle finite element method. Algebraically, this scheme involves two steps: (1) the solution of tangent curves for piecewise linear vector fields defined on simplicial meshes and (2) the solution of line integrals of piecewise linear vector-valued functions along these tangent curves. Hence, the stable formulas presented here have general applicability, e.g. exact integration of trajectories in particle-based (Lagrangian-type) methods, flow visualization and computer graphics. The Newton form of the polynomial interpolation definition is used to express exponential functions of matrices which appear in the analytical solution of the X-IVAS scheme. The divided difference coefficients in these expressions are defined in a piecewise manner, i.e. in a prescribed neighbourhood of removable singularities their series approximations are computed. An optimal series approximation of divided differences is presented which plays a critical role in this methodology. At least ten significant decimal digits in the formula computations are guaranteed to be exact using double-precision floating-point arithmetic. The worst case scenarios occur in the neighbourhood of removable singularities found in fourth-order divided differences of the exponential function.
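    The series treatment of removable singularities can be illustrated in one dimension with the first divided difference of the exponential, (e^x - 1)/x. This is a sketch of the idea only (the paper's formulas handle up to fourth-order divided differences of matrix exponentials); the function name and switchover tolerance below are illustrative:

```python
import math

def expm1_over_x(x, tol=1e-3):
    """First divided difference of exp: (e^x - 1)/x, which has a
    removable singularity at x = 0. Near zero the closed form
    suffers catastrophic cancellation, so a truncated Taylor
    series is used instead, mirroring the piecewise definition
    described in the abstract."""
    if abs(x) < tol:
        term, total, k = 1.0, 1.0, 1
        while abs(term) > 1e-20:
            term *= x / (k + 1)   # next series term x^k / (k+1)!
            total += term
            k += 1
        return total
    return math.expm1(x) / x

# Naive evaluation loses several digits near the singularity:
x = 1e-12
naive = (math.exp(x) - 1.0) / x
stable = expm1_over_x(x)
```

    The naive quotient is wrong in roughly the fifth significant digit at x = 1e-12, while the series evaluation stays accurate to near machine precision.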

  14. A simplified Integer Cosine Transform and its application in image compression

    NASA Technical Reports Server (NTRS)

    Costa, M.; Tong, K.

    1994-01-01

    A simplified version of the integer cosine transform (ICT) is described. For practical reasons, the transform is considered jointly with the quantization of its coefficients. It differs from conventional ICT algorithms in that the combined factors for normalization and quantization are approximated by powers of two. In conventional algorithms, the normalization/quantization stage typically requires as many integer divisions as the number of transform coefficients. By restricting the factors to powers of two, these divisions can be performed by variable shifts in the binary representation of the coefficients, with speed and cost advantages to the hardware implementation of the algorithm. The error introduced by the factor approximations is compensated for in the inverse ICT operation, executed with floating point precision. The simplified ICT algorithm has potential applications in image-compression systems with disparate cost and speed requirements in the encoder and decoder ends. For example, in deep space image telemetry, the image processors on board the spacecraft could take advantage of the simplified, faster encoding operation, which would be adjusted on the ground, with high-precision arithmetic. A dual application is found in compressed video broadcasting. Here, a fast, high-performance processor at the transmitter would precompensate for the factor approximations in the inverse ICT operation, to be performed in real time, at a large number of low-cost receivers.
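    The shift-based normalization/quantization described above can be sketched as follows; the function names and the decoder-side correction factors are illustrative assumptions, not the ICT's actual transform matrices or quantization tables:

```python
def quantize_pow2(coeffs, shifts):
    """Quantize integer transform coefficients by right shifts:
    each combined normalization/quantization factor is approximated
    by a power of two, 2**shift, so no integer division is needed.
    Negative values are shifted on their magnitude so the result
    truncates toward zero."""
    return [c >> s if c >= 0 else -((-c) >> s) for c, s in zip(coeffs, shifts)]

def dequantize_pow2(quantized, shifts, corrections):
    """Decoder side: undo the shift and apply a floating-point
    correction factor that compensates for the power-of-two
    approximation, as the abstract describes for the inverse ICT."""
    return [(q << s) * k for q, s, k in zip(quantized, shifts, corrections)]
```

    The asymmetry matches the application: the encoder does only shifts, while the correction multiplications run on the decoder side, where high-precision arithmetic is affordable.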

  15. 33 CFR 165.704 - Safety Zone; Tampa Bay, Florida.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ..., Florida. (a) A floating safety zone is established consisting of an area 1000 yards fore and aft of a... ending at Gadsden Point Cut Lighted Buoys “3” and “4”. The safety zone starts again at Gadsden Point Cut... the marked channel at Tampa Bay Cut “K” buoy “11K” enroute to Rattlesnake, Tampa, FL, the floating...

  16. Learning to assign binary weights to binary descriptor

    NASA Astrophysics Data System (ADS)

    Huang, Zhoudi; Wei, Zhenzhong; Zhang, Guangjun

    2016-10-01

    Constructing robust binary local feature descriptors is receiving increasing interest because their binary nature enables fast processing while requiring significantly less memory than their floating-point competitors. To bridge the performance gap between binary and floating-point descriptors without increasing the cost of computing and matching, optimal binary weights are learned and assigned to the binary descriptor, since each bit may contribute differently to distinctiveness and robustness. Technically, a large-scale regularized optimization method is applied to learn float weights for each bit of the binary descriptor. Furthermore, a binary approximation of the float weights is computed with an efficient alternating greedy strategy, which significantly improves the discriminative power while preserving the fast matching advantage. Extensive experimental results on two challenging datasets (Brown dataset and Oxford dataset) demonstrate the effectiveness and efficiency of the proposed method.

  17. Developing an Energy Policy for the United States

    NASA Astrophysics Data System (ADS)

    Keefe, Pat

    2014-12-01

    Al Bartlett's video "Arithmetic, Population, and Energy"1 spells out many of the complex issues related to energy use in our society. Bartlett makes the point that basic arithmetic is the fundamental obstacle preventing us from being able to grasp the relationships between energy consumption, population, and lifestyles. In an earlier version of Bartlett's video, he refers to a "Hagar the Horrible" comic strip in which Hagar asks the critical question, "Good…Now can anybody here count?"

  18. Fractionating the neural correlates of individual working memory components underlying arithmetic problem solving skills in children.

    PubMed

    Metcalfe, Arron W S; Ashkenazi, Sarit; Rosenberg-Lee, Miriam; Menon, Vinod

    2013-10-01

    Baddeley and Hitch's multi-component working memory (WM) model has played an enduring and influential role in our understanding of cognitive abilities. Very little is known, however, about the neural basis of this multi-component WM model and the differential role each component plays in mediating arithmetic problem solving abilities in children. Here, we investigate the neural basis of the central executive (CE), phonological (PL) and visuo-spatial (VS) components of WM during a demanding mental arithmetic task in 7-9 year old children (N=74). The VS component was the strongest predictor of math ability in children and was associated with increased arithmetic complexity-related responses in left dorsolateral and right ventrolateral prefrontal cortices as well as bilateral intra-parietal sulcus and supramarginal gyrus in posterior parietal cortex. Critically, VS, CE and PL abilities were associated with largely distinct patterns of brain response. Overlap between VS and CE components was observed in left supramarginal gyrus and no overlap was observed between VS and PL components. Our findings point to a central role of visuo-spatial WM during arithmetic problem-solving in young grade-school children and highlight the usefulness of the multi-component Baddeley and Hitch WM model in fractionating the neural correlates of arithmetic problem solving during development. Copyright © 2013 Elsevier Ltd. All rights reserved.

  19. Fast and efficient compression of floating-point data.

    PubMed

    Lindstrom, Peter; Isenburg, Martin

    2006-01-01

    Large scale scientific simulation codes typically run on a cluster of CPUs that write/read time steps to/from a single file system. As data sets are constantly growing in size, this increasingly leads to I/O bottlenecks. When the rate at which data is produced exceeds the available I/O bandwidth, the simulation stalls and the CPUs are idle. Data compression can alleviate this problem by using some CPU cycles to reduce the amount of data that needs to be transferred. Most compression schemes, however, are designed to operate offline and seek to maximize compression, not throughput. Furthermore, they often require quantizing floating-point values onto a uniform integer grid, which disqualifies their use in applications where exact values must be retained. We propose a simple scheme for lossless, online compression of floating-point data that transparently integrates into the I/O of many applications. A plug-in scheme for data-dependent prediction makes our scheme applicable to a wide variety of data used in visualization, such as unstructured meshes, point sets, images, and voxel grids. We achieve state-of-the-art compression rates and speeds, the latter in part due to an improved entropy coder. We demonstrate that this significantly accelerates I/O throughput in real simulation runs. Unlike previous schemes, our method also adapts well to variable-precision floating-point and integer data.
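    The prediction-plus-residual idea behind such lossless coders can be sketched with the simplest possible predictor, the previous value: residuals are XORs of IEEE-754 bit patterns, so smooth data yields many leading zero bits for the entropy coder, and reconstruction is bit-exact. This is a generic illustration in the spirit of lossless floating-point compressors, not the authors' plug-in predictors or entropy coder:

```python
import struct

def float_residuals(values):
    """Predict each double by its predecessor and XOR the IEEE-754
    bit patterns of prediction and actual value. Slowly varying data
    yields residuals with many leading zero bits, which an entropy
    coder can store compactly."""
    bits = [struct.unpack('<Q', struct.pack('<d', v))[0] for v in values]
    prev, out = 0, []
    for b in bits:
        out.append(b ^ prev)
        prev = b
    return out

def reconstruct(residuals):
    """Exact inverse: running XOR, then reinterpret as doubles.
    No quantization occurs, so exact values are retained."""
    prev, vals = 0, []
    for r in residuals:
        prev ^= r
        vals.append(struct.unpack('<d', struct.pack('<Q', prev))[0])
    return vals
```

    Because the round trip operates on raw bit patterns, it is lossless for every finite value, unlike schemes that quantize onto an integer grid.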

  20. Schema Knowledge Structures for Representing and Understanding Arithmetic Story Problems.

    DTIC Science & Technology

    1987-03-01

    do so on a common unit of measure. Implicit in the CP relation is the concept of one-to-one matching of one element in the problem with the other. As...engages in one-to-one matching, removing one member from each set and setting them apart as a matched pair. The smaller of the two sets is the one...to be critical. As we pointed out earlier, some of the semantic relations can be present in situations that demand any of the four arithmetic

  1. Software Techniques for Non-Von Neumann Architectures

    DTIC Science & Technology

    1990-01-01

    Commtopo programmable Benes net.; hypercubic lattice for QCD Control CENTRALIZED Assign STATIC Memory: SHARED Synch UNIVERSAL Max-cpu 566 Processor...boards (each = 4 floating point units, 2 multipliers) Cpu-size 32-bit floating point chips Perform 11.4 Gflops Market quantum chromodynamics (QCD)...functions there should exist a capability to define hierarchies and lattices of complex objects. A complex object can be made up of a set of simple objects

  2. Towards High Resolution Numerical Algorithms for Wave Dominated Physical Phenomena

    DTIC Science & Technology

    2009-01-30

    results are scaled as floating point operations per second, obtained by counting the number of floating point additions and multiplications in the...black horizontal line. Perhaps the most striking feature at first is the fact that the memory bandwidth measured for flux lifting transcends this...theoretical peak performance values. For a suitable CPU-limited workload, this means that a single workstation equipped with multiple GPUs can do work that

  3. 77 FR 20295 - United States Navy Restricted Area, Menominee River, Marinette Marine Corporation Shipyard...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-04

    ... to the point of origin. The restricted area will be marked by a lighted and signed floating buoy line... a signed floating buoy line without permission from the Supervisor of Shipbuilding, Conversion and...

  4. Routine Microsecond Molecular Dynamics Simulations with AMBER on GPUs. 1. Generalized Born

    PubMed Central

    2012-01-01

    We present an implementation of generalized Born implicit solvent all-atom classical molecular dynamics (MD) within the AMBER program package that runs entirely on CUDA-enabled NVIDIA graphics processing units (GPUs). We discuss the algorithms that are used to exploit the processing power of the GPUs and show the performance that can be achieved in comparison to simulations on conventional CPU clusters. The implementation supports three different precision models in which the contributions to the forces are calculated in single precision floating point arithmetic but accumulated in double precision (SPDP), or everything is computed in single precision (SPSP) or double precision (DPDP). In addition to performance, we have focused on understanding the implications of the different precision models on the outcome of implicit solvent MD simulations. We show results for a range of tests including the accuracy of single point force evaluations and energy conservation as well as structural properties pertaining to protein dynamics. The numerical noise due to rounding errors within the SPSP precision model is sufficiently large to lead to an accumulation of errors which can result in unphysical trajectories for long time scale simulations. We recommend the use of the mixed-precision SPDP model since the numerical results obtained are comparable with those of the full double precision DPDP model and the reference double precision CPU implementation but at significantly reduced computational cost. Our implementation provides performance for GB simulations on a single desktop that is on par with, and in some cases exceeds, that of traditional supercomputers. PMID:22582031
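    The role of the accumulator's precision is easy to see in a drastically reduced setting (a sketch, unrelated to AMBER's actual kernels): when many small single-precision contributions are added to a large single-precision running sum, each addition can round away entirely, while a double-precision accumulator retains them.

```python
import numpy as np

term = np.float32(1e-7)      # a small single-precision contribution
acc32 = np.float32(2.0)      # SPSP-style: single-precision accumulator
acc64 = np.float64(2.0)      # SPDP-style: same terms, double accumulator
for _ in range(100_000):
    # 1e-7 is below half an ulp of 2.0 in float32, so this add
    # rounds back to exactly 2.0 every iteration:
    acc32 = np.float32(acc32 + term)
    # the double accumulator keeps every contribution:
    acc64 = acc64 + np.float64(term)
```

    In the SPDP spirit, the expensive per-term arithmetic stays in single precision; only the running sum is widened, which is enough to avoid this class of error accumulation.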

  5. Floating electrode dielectrophoresis.

    PubMed

    Golan, Saar; Elata, David; Orenstein, Meir; Dinnar, Uri

    2006-12-01

    In practice, dielectrophoresis (DEP) devices are based on micropatterned electrodes. When subjected to applied voltages, the electrodes generate nonuniform electric fields that are necessary for the DEP manipulation of particles. In this study, electrically floating electrodes are used in DEP devices. It is demonstrated that effective DEP forces can be achieved by using floating electrodes. Additionally, DEP forces generated by floating electrodes are different from DEP forces generated by excited electrodes. The floating electrodes' capabilities are explained theoretically by calculating the electric field gradients and demonstrated experimentally by using test-devices. The test-devices show that floating electrodes can be used to collect erythrocytes (red blood cells). DEP devices which contain many floating electrodes ought to have fewer connections to external signal sources. Therefore, the use of floating electrodes may considerably facilitate the fabrication and operation of DEP devices. It can also reduce device dimensions. However, the key point is that DEP devices can integrate excited electrodes fabricated by microtechnology processes and floating electrodes fabricated by nanotechnology processes. Such integration is expected to promote the use of DEP devices in the manipulation of nanoparticles.

  6. Term Cancellations in Computing Floating-Point Gröbner Bases

    NASA Astrophysics Data System (ADS)

    Sasaki, Tateaki; Kako, Fujio

    We discuss the term cancellation that makes floating-point Gröbner basis computation unstable, and show that error accumulation is never negligible in our previous method. We then present a new method that removes accumulated errors as far as possible by applying Gaussian elimination to matrices constructed from coefficient vectors. The method reveals the amount of term cancellation caused by approximately linearly dependent relations among the input polynomials.

  7. Common Pitfalls in F77 Code Conversion

    DTIC Science & Technology

    2003-02-01

    implementation versus another are the source of these errors rather than typography. It is well to use the practice of commenting-out original source file lines...identifier), every I in the format field must be replaced with f followed by an appropriate floating point format designator. Floating point numeric...helps even more. Finally, libraries are a major source of non-portability, with graphics libraries one of the chief culprits. We in Fusion

  8. Compression of 3D Point Clouds Using a Region-Adaptive Hierarchical Transform.

    PubMed

    De Queiroz, Ricardo; Chou, Philip A

    2016-06-01

    In free-viewpoint video, there is a recent trend to represent scene objects as solids rather than using multiple depth maps. Point clouds have been used in computer graphics for a long time and with the recent possibility of real time capturing and rendering, point clouds have been favored over meshes in order to save computation. Each point in the cloud is associated with its 3D position and its color. We devise a method to compress the colors in point clouds which is based on a hierarchical transform and arithmetic coding. The transform is a hierarchical sub-band transform that resembles an adaptive variation of a Haar wavelet. The arithmetic encoding of the coefficients assumes Laplace distributions, one per sub-band. The Laplace parameter for each distribution is transmitted to the decoder using a custom method. The geometry of the point cloud is encoded using the well-established octree scanning. Results show that the proposed solution performs comparably to the current state-of-the-art, often outperforming it, while being much more computationally efficient. We believe this work represents the state-of-the-art in intra-frame compression of point clouds for real-time 3D video.

  9. Physical implication of transition voltage in organic nano-floating-gate nonvolatile memories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Shun; Gao, Xu; Zhong, Ya-Nan

    High-performance pentacene-based organic field-effect transistor nonvolatile memories, using polystyrene as a tunneling dielectric and Au nanoparticles as a nano-floating-gate, show parallelogram-like transfer characteristics with a featured transition point. The transition voltage at the transition point corresponds to a threshold electric field in the tunneling dielectric, over which stored electrons in the nano-floating-gate start to leak out. The transition voltage can be modulated depending on the bias configuration and device structure. For p-type active layers, the optimized transition voltage should be on the negative side of, but close to, the reading voltage, which simultaneously achieves a high ON/OFF ratio and good memory retention.

  10. Experiences modeling ocean circulation problems on a 30 node commodity cluster with 3840 GPU processor cores.

    NASA Astrophysics Data System (ADS)

    Hill, C.

    2008-12-01

    Low-cost graphics cards today use many relatively simple compute cores to deliver memory bandwidth of more than 100 GB/s and theoretical floating point performance of more than 500 GFlop/s. Right now this performance is, however, only accessible to highly parallel algorithm implementations that (i) can use a hundred or more 32-bit floating point, concurrently executing cores, (ii) can work with graphics memory that resides on the graphics card side of the graphics bus and (iii) can be at least partially expressed in a language that can be compiled by a graphics programming tool. In this talk we describe our experiences implementing a complete, but relatively simple, time-dependent shallow-water equations simulation targeting a cluster of 30 computers, each hosting one graphics card. The implementation takes into account considerations (i), (ii) and (iii) above. We code our algorithm as a series of numerical kernels. Each kernel is designed to be executed by multiple threads of a single process. Kernels are passed memory blocks to compute over, which can be persistent blocks of memory on a graphics card. Each kernel is individually implemented using the NVidia CUDA language but driven from a higher-level supervisory code that is almost identical to a standard model driver. The supervisory code controls the overall simulation timestepping and is written to minimize data transfer between main memory and graphics memory (a massive performance bottleneck on current systems). Using the recipe outlined we can boost the performance of our cluster by nearly an order of magnitude relative to the same algorithm executing only on the cluster CPUs. Achieving this performance boost requires that many threads are available to each graphics processor within each numerical kernel and that the simulation's working set fits into the graphics card memory. As we describe, this puts interesting upper and lower bounds on the problem sizes for which this technology is currently most useful. However, many interesting problems fit within this envelope. Looking forward, we extrapolate our experience to estimate full-scale ocean model performance and applicability. Finally, we describe preliminary hybrid 32-bit/64-bit experiments with graphics cards that support 64-bit arithmetic, albeit at lower performance.

  11. Design of crossed-mirror array to form floating 3D LED signs

    NASA Astrophysics Data System (ADS)

    Yamamoto, Hirotsugu; Bando, Hiroki; Kujime, Ryousuke; Suyama, Shiro

    2012-03-01

    3D representation in digital signage improves its impact and speeds the notification of important points. Our goal is to realize floating 3D LED signs. The problem is that no existing optical device adequately forms floating 3D images from LEDs: LED lamp size is around 1 cm including wiring and substrates, and such a large pitch increases display size and sometimes spoils image quality. The purpose of this paper is to develop an optical device that meets these requirements and to demonstrate floating 3D arrays of LEDs. We analytically investigate image formation by a crossed-mirror structure with aerial apertures, called a CMA (crossed-mirror array). A CMA contains dihedral corner reflectors at each aperture. After double reflection, light rays emitted from an LED converge into the corresponding image point. We have fabricated a CMA for a 3D array of LEDs. One CMA unit contains 20 x 20 apertures located diagonally. A floating image of the LEDs was formed over a wide range of incident angles. The image size of the focused beam agreed with the apparent aperture size. When LEDs were located three-dimensionally (LEDs at three depths), the focused distances matched the distances between the real LEDs and the CMA.

  12. DSS 13 Microprocessor Antenna Controller

    NASA Technical Reports Server (NTRS)

    Gosline, R. M.

    1984-01-01

    A microprocessor-based antenna controller system developed as part of the unattended station project for DSS 13 is described. Both the hardware and software top-level designs are presented and the major problems encountered are discussed. Developments useful to related projects include a JPL standard 15-line interface using a single-board computer, a general purpose parser, a fast floating point to ASCII conversion technique, and experience gained in using off-board floating point processors with the 8080 CPU.

  13. Special relativity from observer's mathematics point of view

    NASA Astrophysics Data System (ADS)

    Khots, Boris; Khots, Dmitriy

    2015-09-01

    When we create mathematical models for the quantum theory of light, we assume that the mathematical apparatus used in modeling, at least the simplest mathematical apparatus, is infallible. In particular, this relates to the use of "infinitely small" and "infinitely large" quantities in arithmetic and the use of Newton-Cauchy definitions of the limit and derivative in analysis. We believe that is where the main problem lies in the contemporary study of nature. We have introduced a new concept of Observer's Mathematics (see www.mathrelativity.com). Observer's Mathematics creates new arithmetic, algebra, geometry, topology, analysis and logic that do not contain the concept of continuum but locally coincide with the standard fields. We use Einstein's special relativity principles and obtain the analogue of the classical Lorentz transformation. This work considers that transformation from the Observer's Mathematics point of view.

  14. Numerical calculation of thermo-mechanical problems at large strains based on complex step derivative approximation of tangent stiffness matrices

    NASA Astrophysics Data System (ADS)

    Balzani, Daniel; Gandhi, Ashutosh; Tanaka, Masato; Schröder, Jörg

    2015-05-01

    In this paper a robust approximation scheme for the numerical calculation of tangent stiffness matrices is presented in the context of nonlinear thermo-mechanical finite element problems and its performance is analyzed. The scheme extends the approach proposed in Kim et al. (Comput Methods Appl Mech Eng 200:403-413, 2011) and Tanaka et al. (Comput Methods Appl Mech Eng 269:454-470, 2014) and is based on applying the complex-step-derivative approximation to the linearizations of the weak forms of the balance of linear momentum and the balance of energy. By incorporating consistent perturbations along the imaginary axis to the displacement as well as thermal degrees of freedom, we demonstrate that numerical tangent stiffness matrices can be obtained with accuracy up to computer precision, leading to quadratically converging schemes. The main advantage of this approach is that, contrary to the classical forward difference scheme, no round-off errors due to floating-point arithmetic exist within the calculation of the tangent stiffness. This enables arbitrarily small perturbation values and therefore leads to robust schemes even when choosing small values. An efficient algorithmic treatment is presented which enables a straightforward implementation of the method in any standard finite-element program. By means of thermo-elastic and thermo-elastoplastic boundary value problems at finite strains the performance of the proposed approach is analyzed.
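    The advantage of the complex-step derivative over a forward difference is visible already in the scalar case (a sketch, not the paper's finite-element linearization): the derivative is read off the imaginary part, so no difference of nearly equal numbers is formed and the perturbation can be made arbitrarily small.

```python
import cmath
import math

def complex_step_derivative(f, x, h=1e-30):
    """f'(x) ≈ Im f(x + ih) / h. No subtraction of nearby values
    occurs, so there is no round-off floor and h can be tiny."""
    return f(x + 1j * h).imag / h

def forward_difference(f, x, h=1e-8):
    """Classical forward difference, limited by the competition
    between truncation error and subtractive cancellation."""
    return ((f(x + h) - f(x)) / h).real

f = lambda z: cmath.exp(z) * cmath.sin(z)   # analytic test function
x = 0.5
exact = math.exp(x) * (math.sin(x) + math.cos(x))  # e^x (sin x + cos x)
cs = complex_step_derivative(f, x)
fd = forward_difference(f, x)
```

    With h = 1e-30 the complex step matches the exact derivative to machine precision, while the forward difference is stuck around eight correct digits no matter how h is tuned.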

  15. GRay: A MASSIVELY PARALLEL GPU-BASED CODE FOR RAY TRACING IN RELATIVISTIC SPACETIMES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chan, Chi-kwan; Psaltis, Dimitrios; Özel, Feryal

    We introduce GRay, a massively parallel integrator designed to trace the trajectories of billions of photons in a curved spacetime. This graphics-processing-unit (GPU)-based integrator employs the stream processing paradigm, is implemented in CUDA C/C++, and runs on nVidia graphics cards. The peak performance of GRay using single-precision floating-point arithmetic on a single GPU exceeds 300 GFLOPS (or 1 ns per photon per time step). For a realistic problem, where the peak performance cannot be reached, GRay is two orders of magnitude faster than existing central-processing-unit-based ray-tracing codes. This performance enhancement allows more effective searches of large parameter spaces when comparing theoretical predictions of images, spectra, and light curves from the vicinities of compact objects to observations. GRay can also perform on-the-fly ray tracing within general relativistic magnetohydrodynamic algorithms that simulate accretion flows around compact objects. Making use of this algorithm, we calculate the properties of the shadows of Kerr black holes and the photon rings that surround them. We also provide accurate fitting formulae of their dependencies on black hole spin and observer inclination, which can be used to interpret upcoming observations of the black holes at the center of the Milky Way, as well as M87, with the Event Horizon Telescope.

  16. A novel control process of cyanobacterial bloom using cyanobacteriolytic bacteria immobilized in floating biodegradable plastic carriers.

    PubMed

    Nakamura, N; Nakano, K; Sugiura, N; Matsumura, M

    2003-12-01

    A process using a floating carrier for immobilization of cyanobacteriolytic bacteria, B. cereus N-14, was proposed to realize effective in situ control of natural floating cyanobacterial blooms. The critical concentrations of the cyanobacteriolytic substance and of B. cereus N-14 cells required to exhibit cyanobacteriolytic activity were investigated. The results indicated the necessity of cell growth to produce sufficiently high amounts of the cyanobacteriolytic substance, and of conditions enabling good contact between high concentrations of the cyanobacteriolytic substance and the cyanobacteria. Floating biodegradable plastics made of starch were applied as a carrier material to maintain close contact between the immobilized cyanobacteriolytic bacteria and floating cyanobacteria. The floating starch carriers could eliminate 99% of floating cyanobacteria in 4 days. Since B. cereus N-14 could produce the cyanobacteriolytic substance in the presence of starch and some amino acids, the cyanobacteriolytic activity could be attributed to the carbon source fed from the starch carrier and to amino acids eluted from lysed cyanobacteria. Therefore, the benefit of the floating starch carrier was confirmed from both viewpoints: as an immobilization carrier and as a nutrient source stimulating cyanobacteriolytic activity. The new concept of applying a floating carrier with immobilized useful microorganisms for intensive treatment of a nuisance floating target was demonstrated.

  17. An integrated circuit floating point accumulator

    NASA Technical Reports Server (NTRS)

    Goldsmith, T. C.

    1977-01-01

    Goddard Space Flight Center has developed a large scale integrated circuit (type 623) which can perform pulse counting, storage, floating point compression, and serial transmission, using a single monolithic device. Counts of 27 or 19 bits can be converted to transmitted values of 12 or 8 bits, respectively. Use of the 623 has resulted in substantial savings in weight, volume, and dollar resources on at least 11 scientific instruments to be flown on 4 NASA spacecraft. The design, construction, and application of the 623 are described.

  18. Floating-point function generation routines for 16-bit microcomputers

    NASA Technical Reports Server (NTRS)

    Mackin, M. A.; Soeder, J. F.

    1984-01-01

    Several computer subroutines have been developed that interpolate three types of nonanalytic functions: univariate, bivariate, and map. The routines use data in floating-point form. However, because they are written for use on a 16-bit Intel 8086 system with an 8087 mathematical coprocessor, they execute as fast as routines using data in scaled integer form. Although all of the routines are written in assembly language, they have been implemented in a modular fashion so as to facilitate their use with high-level languages.
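    A univariate lookup of this kind reduces to locating the bracketing table interval and interpolating linearly. A minimal sketch in Python (the original routines are 8086/8087 assembly; the clamping behavior at the table ends is an assumption, not documented above):

```python
from bisect import bisect_right

def interp_univariate(xs, ys, x):
    """Linear interpolation in a tabulated nonanalytic function,
    the kind of univariate lookup described above. xs must be
    strictly increasing; x outside the table is clamped to the
    end values (an assumed convention)."""
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    i = bisect_right(xs, x) - 1             # bracketing interval [xs[i], xs[i+1]]
    t = (x - xs[i]) / (xs[i + 1] - xs[i])   # fractional position in interval
    return ys[i] + t * (ys[i + 1] - ys[i])
```

    Bivariate and map interpolation extend the same pattern by locating a bracketing cell in each coordinate and blending the corner values.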

  19. ROStoJAUSBridge Manual

    DTIC Science & Technology

    2012-03-01

    Description: A class that handles forming the JAUS header portion of JAUS messages. jaus_hmd~_msg is included as a data member in all JAUS messages. Member...scaleToInt16 (float val, float low, float high) [related] Scales val, which is bounded by low and high, to a signed short. Shifts the center point of low...and high to zero, and shifts val accordingly. Val is then upscaled by the ratio of the range of short values to the range of values from high to low
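    The scaling the manual describes can be reconstructed roughly as follows; this is a hypothetical reading of the garbled text, and the exact range constants and rounding used by ROStoJAUSBridge may differ:

```python
def scale_to_int16(val, low, high):
    """Hypothetical sketch of the documented behavior: shift the
    center of [low, high] to zero, then scale val by the ratio of
    the signed-short range to the input range (high - low)."""
    center = (low + high) / 2.0
    ratio = 65534.0 / (high - low)   # maps [low, high] onto [-32767, 32767]
    return int(round((val - center) * ratio))
```

    Under this reading, low maps to -32767, high to 32767, and the midpoint of the range to 0.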

  20. Differential porosimetry and permeametry for random porous media.

    PubMed

    Hilfer, R; Lemmer, A

    2015-07-01

    Accurate determination of geometrical and physical properties of natural porous materials is notoriously difficult. Continuum multiscale modeling has provided carefully calibrated realistic microstructure models of reservoir rocks with floating point accuracy. Previous measurements using synthetic microcomputed tomography (μ-CT) were based on extrapolation of resolution-dependent properties for discrete digitized approximations of the continuum microstructure. This paper reports continuum measurements of volume and specific surface with full floating point precision. It also corrects an incomplete description of rotations in earlier publications. More importantly, the methods of differential permeametry and differential porosimetry are introduced as precision tools. The continuum microstructure chosen to exemplify the methods is a homogeneous, carefully calibrated and characterized model for Fontainebleau sandstone. The sample has been publicly available since 2010 on the worldwide web as a benchmark for methodical studies of correlated random media. High-precision porosimetry gives the volume and internal surface area of the sample with floating point accuracy. Continuum results with floating point precision are compared to discrete approximations. Differential porosities and differential surface area densities allow geometrical fluctuations to be discriminated from discretization effects and numerical noise. Differential porosimetry and Fourier analysis reveal subtle periodic correlations. The findings uncover small oscillatory correlations with a period of roughly 850μm, thus implying that the sample is not strictly stationary. The correlations are attributed to the deposition algorithm that was used to ensure the grain overlap constraint. Differential permeabilities are introduced and studied. Differential porosities and permeabilities provide scale-dependent information on geometry fluctuations, thereby allowing quantitative error estimates.

  1. An Adaptive Prediction-Based Approach to Lossless Compression of Floating-Point Volume Data.

    PubMed

    Fout, N; Ma, Kwan-Liu

    2012-12-01

    In this work, we address the problem of lossless compression of scientific and medical floating-point volume data. We propose two prediction-based compression methods that share a common framework, which consists of a switched prediction scheme wherein the best predictor out of a preset group of linear predictors is selected. Such a scheme is able to adapt to different datasets as well as to varying statistics within the data. The first method, called APE (Adaptive Polynomial Encoder), uses a family of structured interpolating polynomials for prediction, while the second method, which we refer to as ACE (Adaptive Combined Encoder), combines predictors from previous work with the polynomial predictors to yield a more flexible, powerful encoder that is able to effectively decorrelate a wide range of data. In addition, in order to facilitate efficient visualization of compressed data, our scheme provides an option to partition floating-point values in such a way as to provide a progressive representation. We compare our two compressors to existing state-of-the-art lossless floating-point compressors for scientific data, with our data suite including both computer simulations and observational measurements. The results demonstrate that our polynomial predictor, APE, is comparable to previous approaches in terms of speed but achieves better compression rates on average. ACE, our combined predictor, while somewhat slower, is able to achieve the best compression rate on all datasets, with significantly better rates on most of the datasets.
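The switched-prediction idea is compact enough to sketch. The following is a minimal illustration, not the actual APE/ACE predictor families: per block it tries a couple of hypothetical linear predictors and keeps the one whose XOR residuals (against the IEEE-754 bit patterns) have the fewest set bits, a rough proxy for how well the residuals would compress.

```python
import struct

def bits(x):
    """IEEE-754 bit pattern of a double, as an unsigned 64-bit integer."""
    return struct.unpack("<Q", struct.pack("<d", x))[0]

# Two toy linear predictors (stand-ins for a real predictor family).
PREDICTORS = {
    "constant": lambda s, i: s[i - 1],            # repeat previous sample
    "linear":   lambda s, i: 2 * s[i - 1] - s[i - 2],  # linear extrapolation
}

def switched_residuals(samples, block=4):
    """For each block, pick the predictor whose XOR residuals have the
    fewest set bits (a crude entropy proxy), as in switched prediction."""
    out = []
    for start in range(2, len(samples), block):
        best = None
        for name, p in PREDICTORS.items():
            res = [bits(samples[i]) ^ bits(p(samples, i))
                   for i in range(start, min(start + block, len(samples)))]
            cost = sum(bin(r).count("1") for r in res)
            if best is None or cost < best[0]:
                best = (cost, name, res)
        out.append((best[1], best[2]))
    return out

for name, res in switched_residuals([0.1 * i for i in range(12)]):
    print(name, [hex(r) for r in res])
```

A real coder would entropy-code the residuals and the per-block predictor choice; lossless reconstruction works because XOR with the predicted bit pattern is invertible.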

  2. Float processing of high-temperature complex silicate glasses and float baths used for same

    NASA Technical Reports Server (NTRS)

    Cooper, Reid Franklin (Inventor); Cook, Glen Bennett (Inventor)

    2000-01-01

    A float glass process for production of high melting temperature glasses utilizes a binary metal alloy bath having the combined properties of a low melting point, low reactivity with oxygen, low vapor pressure, and minimal reactivity with the silicate glasses being formed. The metal alloy of the float medium is exothermic with a solvent metal that does not readily form an oxide. The vapor pressure of both components in the alloy is low enough to prevent deleterious vapor deposition, and there is minimal chemical and interdiffusive interaction of either component with silicate glasses under the float processing conditions. Alloys having the desired combination of properties include compositions in which gold, silver or copper is the solvent metal and silicon, germanium or tin is the solute, preferably in eutectic or near-eutectic compositions.

  3. ICRF-Induced Changes in Floating Potential and Ion Saturation Current in the EAST Divertor

    NASA Astrophysics Data System (ADS)

    Perkins, Rory; Hosea, Joel; Taylor, Gary; Bertelli, Nicola; Kramer, Gerrit; Qin, Chengming; Wang, Liang; Yang, Jichan; Zhang, Xinjun

    2017-10-01

Injection of waves in the ion cyclotron range of frequencies (ICRF) into a tokamak can potentially raise the plasma potential via RF rectification. Probes are affected both by changes in plasma potential and also by RF-averaging of the probe characteristic, with the latter tending to drop the floating potential. We present the effect of ICRF heating on divertor Langmuir probes in the EAST experiment. Over a scan of the outer gap, probes connected to the antennas have increases in floating potential with ICRF, but probes in between the outer-vessel strike point and flux surface tangent to the antenna have decreased floating potential. This behaviour is investigated using field-line mapping. Preliminary results show that midplane gas puffing can suppress the strong influence of ICRF on the probes' floating potential.

  4. Lithium-ion drifting: Application to the study of point defects in floating-zone silicon

    NASA Technical Reports Server (NTRS)

    Walton, J. T.; Wong, Y. K.; Zulehner, W.

    1997-01-01

    The use of lithium-ion (Li(+)) drifting to study the properties of point defects in p-type Floating-Zone (FZ) silicon crystals is reported. The Li(+) drift technique is used to detect the presence of vacancy-related defects (D defects) in certain p-type FZ silicon crystals. SUPREM-IV modeling suggests that the silicon point defect diffusivities are considerably higher than those commonly accepted, but are in reasonable agreement with values recently proposed. These results demonstrate the utility of Li(+) drifting in the study of silicon point defect properties in p-type FZ crystals. Finally, a straightforward measurement of the Li(+) compensation depth is shown to yield estimates of the vacancy-related defect concentration in p-type FZ crystals.

  5. [Observation on the clinical efficacy of shoulder pain in post-stroke shoulder-hand syndrome treated with floating acupuncture and rehabilitation training].

    PubMed

    Wang, Jun; Cui, Xiao; Ni, Huan-Huan; Huang, Chun-Shui; Zhou, Cui-Xia; Wu, Ji; Shi, Jun-Chao; Wu, Yi

    2013-04-01

To compare the efficacy difference in the treatment of shoulder pain in post-stroke shoulder-hand syndrome among floating acupuncture, oral administration of western medicine and local fumigation of Chinese herbs. Ninety cases of post-stroke shoulder-hand syndrome (stage I) were randomized into a floating acupuncture group, a western medicine group and a local Chinese herbs fumigation group, with 30 cases in each group. In the floating acupuncture group, two obvious tender points were detected on the shoulder and the site 80-100 mm inferior to each tender point was taken as the inserting point and stimulated with the floating needling technique. In the western medicine group, Mobic 7.5 mg was prescribed for oral administration. In the local Chinese herbs fumigation group, a formula for activating blood circulation and relaxing the tendons was used for local fumigation. All the patients in the three groups received rehabilitation training. The floating acupuncture, oral administration of western medicine and local Chinese herbs fumigation were each given once a day in the corresponding group, together with rehabilitation training, and the cases were observed for 1 month. The visual analogue scale (VAS) and the Takagishi shoulder joint function assessment were adopted to evaluate the dynamic change in shoulder pain before and after treatment in the three groups. The modified Barthel index was used to evaluate the dynamic change in the daily life activity of the patients in the three groups. With floating acupuncture, shoulder pain was relieved and daily life activity was improved in the patients with post-stroke shoulder-hand syndrome, with results superior to those of the oral administration of western medicine and local Chinese herbs fumigation (P < 0.01). With local Chinese herbs fumigation, the improvement in shoulder pain was superior to that with the oral administration of western medicine. The difference in the improvement of daily life activity between the local Chinese herbs fumigation and the oral administration of western medicine was not statistically significant; the efficacy of these two therapies was similar (P > 0.05). Floating acupuncture relieves the shoulder pain of patients with post-stroke shoulder-hand syndrome promptly and effectively, and its effects on shoulder pain and the improvement of daily life activity are superior to those of the oral administration of western medicine and local Chinese herbs fumigation.

  6. Expert Systems on Multiprocessor Architectures. Volume 4. Technical Reports

    DTIC Science & Technology

    1991-06-01

Floated-Current-Time0 -> The time that this function is called in user time units, expressed as a floating point number. Halt-Poligono Arrests the...default a statistics file will be printed out, if it can be. To prevent this make No-Statistics true. Unhalt-Poligono Unarrests the process in which the

  7. 76 FR 19290 - Safety Zone; Commencement Bay, Tacoma, WA

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-07

    ... the following points Latitude 47[deg]17'38'' N, Longitude 122[deg]28'43'' W; thence south easterly to... protruding from the shoreline along Ruston Way. Floating markers will be placed by the sponsor of the event... rectangle protruding from the shoreline along Ruston Way. Floating markers will be placed by the sponsor of...

  8. 40 CFR 63.685 - Standards: Tanks.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... in paragraph (c)(2)(i) of this section when a tank is used as an interim transfer point to transfer... fixed-roof tank equipped with an internal floating roof in accordance with the requirements specified in paragraph (e) of this section; (2) A tank equipped with an external floating roof in accordance with the...

  9. Oil/gas collector/separator for underwater oil leaks

    DOEpatents

    Henning, Carl D.

    1993-01-01

An oil/gas collector/separator for recovery of oil leaking, for example, from an offshore or underwater oil well. The separator is floated over the point of the leak and tethered in place so as to receive oil/gas floating, or forced under pressure, toward the water surface from either a broken or leaking oil well casing, line, or sunken ship. The separator is provided with a downwardly extending skirt to contain the oil/gas which floats or is forced upward into a dome wherein the gas is separated from the oil/water, with the gas being flared (burned) at the top of the dome, and the oil is separated from water and pumped to a point of use. Since the density of oil is less than that of water, it can be easily separated from any water entering the dome.

  10. Evaluation of floating-point sum or difference of products in carry-save domain

    NASA Technical Reports Server (NTRS)

    Wahab, A.; Erdogan, S.; Premkumar, A. B.

    1992-01-01

    An architecture to evaluate a 24-bit floating-point sum or difference of products using modified sequential carry-save multipliers with extensive pipelining is described. The basic building block of the architecture is a carry-save multiplier with built-in mantissa alignment for the summation during the multiplication cycles. A carry-save adder, capable of mantissa alignment, correctly positions products with the current carry-save sum. Carry propagation in individual multipliers is avoided and is only required once to produce the final result.
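The payoff of postponing rounding until the final carry-propagate step can be illustrated numerically. The operands below are contrived for the purpose, and this is an illustration of the general principle rather than of the 24-bit architecture above: rounding each product before the addition can wipe out low-order bits that an accumulator keeping full-width products until a single final rounding would preserve.

```python
from fractions import Fraction

# Contrived operands: each product is huge, but their sum is small.
a, b = 1e16, 1.0
c, d = -1e16, 1.0 + 2**-30

naive = a * b + c * d   # both products rounded to double, then added
exact = Fraction(a) * Fraction(b) + Fraction(c) * Fraction(d)

print(naive)            # carries the error of per-product rounding
print(float(exact))     # what a single final rounding would deliver
```

Here the exact sum is -1e16 * 2**-30, but the per-product rounding shifts the naive result by about a quarter unit, which a carry-save accumulator with one final rounding avoids.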

  11. Floating-point scaling technique for sources separation automatic gain control

    NASA Astrophysics Data System (ADS)

    Fermas, A.; Belouchrani, A.; Ait-Mohamed, O.

    2012-07-01

Based on the floating-point representation and taking advantage of scaling factor indetermination in blind source separation (BSS) processing, we propose a scaling technique applied to the separation matrix to avoid saturation or weakness in the recovered source signals. This technique performs an automatic gain control in an on-line BSS environment. We demonstrate the effectiveness of this technique by using the implementation of a division-free BSS algorithm with two inputs and two outputs. The proposed technique is computationally cheaper and more efficient for a hardware implementation than Euclidean normalisation.
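The core trick, scaling only by powers of two so that just the floating-point exponent field changes, can be sketched with the standard library. This is an illustrative analogue, not the paper's hardware datapath:

```python
import math

def pow2_normalize(x):
    """Scale x by a power of two so the result lies in [0.5, 1).
    Only the exponent changes, so no division (or multiplication by an
    arbitrary constant) is needed; in hardware this is an exponent adjust."""
    if x == 0.0:
        return 0.0, 1.0
    m, e = math.frexp(abs(x))    # abs(x) == m * 2**e with 0.5 <= m < 1
    gain = math.ldexp(1.0, -e)   # gain == 2**-e, an exact power of two
    return x * gain, gain

y, g = pow2_normalize(1234.5)
print(y, g)
```

Because the scale of each recovered source is indeterminate in BSS, applying such a gain per output (or per row of the separation matrix) changes nothing about the separation itself, which is what makes this cheap normalisation admissible.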

  12. Onboard Data Processors for Planetary Ice-Penetrating Sounding Radars

    NASA Astrophysics Data System (ADS)

    Tan, I. L.; Friesenhahn, R.; Gim, Y.; Wu, X.; Jordan, R.; Wang, C.; Clark, D.; Le, M.; Hand, K. P.; Plaut, J. J.

    2011-12-01

Among the many concerns faced by outer planetary missions, science data storage and transmission hold special significance. Such missions must contend with limited onboard storage, brief data downlink windows, and low downlink bandwidths. A potential solution to these issues lies in employing onboard data processors (OBPs) to convert raw data into products that are smaller and closely capture relevant scientific phenomena. In this paper, we present the implementation of two OBP architectures for ice-penetrating sounding radars tasked with exploring Europa and Ganymede. Our first architecture utilizes an unfocused processing algorithm extended from the Mars Advanced Radar for Subsurface and Ionosphere Sounding (MARSIS, Jordan et al. 2009). Compared to downlinking raw data, we are able to reduce data volume by approximately 100 times through OBP usage. To ensure the viability of our approach, we have implemented, simulated, and synthesized this architecture using both VHDL and Matlab models (with fixed-point and floating-point arithmetic) in conjunction with ModelSim. Creation of a VHDL model of our processor is the principal step in transitioning to actual digital hardware, whether in an FPGA (field-programmable gate array) or an ASIC (application-specific integrated circuit), and successful simulation and synthesis strongly indicate feasibility. In addition, we examined the tradeoffs faced in the OBP between fixed-point accuracy, resource consumption, and data product fidelity. Our second architecture is based upon a focused fast back projection (FBP) algorithm that requires a modest amount of computing power and on-board memory while yielding high along-track resolution and improved slope detection capability. We present an overview of the algorithm and details of our implementation, also in VHDL. With the appropriate tradeoffs, the use of OBPs can significantly reduce data downlink requirements without sacrificing data product fidelity.
Through the development, simulation, and synthesis of two different OBP architectures, we have proven the feasibility and efficacy of an OBP for planetary ice-penetrating radars.
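The fixed-point accuracy tradeoff examined above can be shown in miniature. This sketch is not the radar pipeline itself: it simply quantizes a toy signal at several fractional word lengths and reports the worst-case error, which halves with every added bit.

```python
def to_fixed(x, frac_bits):
    """Round x to the nearest multiple of 2**-frac_bits, i.e. a signed
    fixed-point value with frac_bits fractional bits, then back to float."""
    scale = 1 << frac_bits
    return round(x * scale) / scale

signal = [0.1 * i - 0.5 for i in range(11)]   # toy samples in [-0.5, 0.5]
for bits in (4, 8, 12):
    worst = max(abs(s - to_fixed(s, bits)) for s in signal)
    print(bits, worst)   # worst-case error is bounded by 2**-(bits + 1)
```

Choosing the word length is then a direct trade between this error bound and the multiplier/register widths consumed on the FPGA or ASIC.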

  13. Collision Visualization of a Laser-Scanned Point Cloud of Streets and a Festival Float Model Used for the Revival of a Traditional Procession Route

    NASA Astrophysics Data System (ADS)

    Li, W.; Shigeta, K.; Hasegawa, K.; Li, L.; Yano, K.; Tanaka, S.

    2017-09-01

Recently, laser-scanning technology, especially mobile mapping systems (MMSs), has been applied to measure 3D urban scenes. Thus, it has become possible to simulate a traditional cultural event in a virtual space constructed using measured point clouds. In this paper, we take as our case study the festival float procession of the Gion Festival, which has a long history in Kyoto City, Japan. The city government plans to revive the original procession route that is narrow and not used at present. For the revival, it is important to know whether a festival float collides with houses, billboards, electric wires or other objects along the original route. Therefore, in this paper, we propose a method for visualizing the collisions of point cloud objects. The advantageous features of our method are (1) a see-through visualization with a correct depth feel that is helpful to robustly determine the collision areas, (2) the ability to visualize areas of high collision risk as well as real collision areas, and (3) the ability to highlight target visualized areas by increasing the point densities there.

  14. Functional dissociations between four basic arithmetic operations in the human posterior parietal cortex: A cytoarchitectonic mapping study

    PubMed Central

    Rosenberg-Lee, Miriam; Chang, Ting Ting; Young, Christina B; Wu, Sarah; Menon, Vinod

    2011-01-01

Although lesion studies over the past several decades have focused on functional dissociations in posterior parietal cortex (PPC) during arithmetic, no consistent view has emerged of its differential involvement in addition, subtraction, multiplication, and division. To circumvent problems with poor anatomical localization, we examined functional overlap and dissociations in cytoarchitectonically-defined subdivisions of the intraparietal sulcus (IPS), superior parietal lobule (SPL) and angular gyrus (AG), across these four operations. Compared to a number identification control task, all operations except addition showed a consistent profile of left posterior IPS activation and deactivation in the right posterior AG. Multiplication and subtraction differed significantly in right, but not left, IPS and AG activity, challenging the view that the left AG differentially subserves retrieval during multiplication. Although addition and multiplication both rely on retrieval, multiplication evoked significantly greater activation in right posterior IPS, as well as the prefrontal cortex, lingual and fusiform gyri, demonstrating that addition and multiplication engage different brain processes. Comparison of PPC responses to the two pairs of inverse operations: division vs. multiplication and subtraction vs. addition revealed greater activation of left lateral SPL during division, suggesting that processing inverse relations is operation specific. Our findings demonstrate that individual IPS, SPL and AG subdivisions are differentially modulated by the four arithmetic operations and they point to significant functional heterogeneity and individual differences in activation and deactivation within the PPC. Critically, these effects are related to retrieval, calculation and inversion, the three key cognitive processes that are differentially engaged by arithmetic operations. 
Our findings point to distributed representation of these processes in the human PPC and also help explain why lesion and previous imaging studies have yielded inconsistent findings. PMID:21616086

  15. Functional dissociations between four basic arithmetic operations in the human posterior parietal cortex: a cytoarchitectonic mapping study.

    PubMed

    Rosenberg-Lee, Miriam; Chang, Ting Ting; Young, Christina B; Wu, Sarah; Menon, Vinod

    2011-07-01

Although lesion studies over the past several decades have focused on functional dissociations in posterior parietal cortex (PPC) during arithmetic, no consistent view has emerged of its differential involvement in addition, subtraction, multiplication, and division. To circumvent problems with poor anatomical localization, we examined functional overlap and dissociations in cytoarchitectonically defined subdivisions of the intraparietal sulcus (IPS), superior parietal lobule (SPL) and angular gyrus (AG), across these four operations. Compared to a number identification control task, all operations except addition showed a consistent profile of left posterior IPS activation and deactivation in the right posterior AG. Multiplication and subtraction differed significantly in right, but not left, IPS and AG activity, challenging the view that the left AG differentially subserves retrieval during multiplication. Although addition and multiplication both rely on retrieval, multiplication evoked significantly greater activation in right posterior IPS, as well as the prefrontal cortex, lingual and fusiform gyri, demonstrating that addition and multiplication engage different brain processes. Comparison of PPC responses to the two pairs of inverse operations: division versus multiplication and subtraction versus addition revealed greater activation of left lateral SPL during division, suggesting that processing inverse relations is operation specific. Our findings demonstrate that individual IPS, SPL and AG subdivisions are differentially modulated by the four arithmetic operations and they point to significant functional heterogeneity and individual differences in activation and deactivation within the PPC. Critically, these effects are related to retrieval, calculation and inversion, the three key cognitive processes that are differentially engaged by arithmetic operations. 
Our findings point to distributed representation of these processes in the human PPC and also help explain why lesion and previous imaging studies have yielded inconsistent findings. Copyright © 2011 Elsevier Ltd. All rights reserved.

  16. Does size and buoyancy affect the long-distance transport of floating debris?

    NASA Astrophysics Data System (ADS)

    Ryan, Peter G.

    2015-08-01

    Floating persistent debris, primarily made from plastic, disperses long distances from source areas and accumulates in oceanic gyres. However, biofouling can increase the density of debris items to the point where they sink. Buoyancy is related to item volume, whereas fouling is related to surface area, so small items (which have high surface area to volume ratios) should start to sink sooner than large items. Empirical observations off South Africa support this prediction: moving offshore from coastal source areas there is an increase in the size of floating debris, an increase in the proportion of highly buoyant items (e.g. sealed bottles, floats and foamed plastics), and a decrease in the proportion of thin items such as plastic bags and flexible packaging which have high surface area to volume ratios. Size-specific sedimentation rates may be one reason for the apparent paucity of small plastic items floating in the world’s oceans.

  17. Integrated use of spatial and semantic relationships for extracting road networks from floating car data

    NASA Astrophysics Data System (ADS)

    Li, Jun; Qin, Qiming; Xie, Chao; Zhao, Yue

    2012-10-01

The update frequency of digital road maps influences the quality of road-dependent services. However, digital road maps surveyed by probe vehicles or extracted from remotely sensed images still have a long updating cycle and their cost remains high. With GPS technology and wireless communication technology maturing and their cost decreasing, floating car technology has been used in traffic monitoring and management, and the dynamic positioning data from floating cars have become a new data source for updating road maps. In this paper, we aim to update digital road maps using the floating car data from China's National Commercial Vehicle Monitoring Platform, and present an incremental road network extraction method suitable for the platform's GPS data, whose sampling frequency is low and which cover a large area. Based on both spatial and semantic relationships between a trajectory point and its associated road segment, the method classifies each trajectory point, and then merges every trajectory point into the candidate road network through the adding or modifying process according to its type. The road network is gradually updated until all trajectories have been processed. Finally, this method is applied in the updating process of major roads in North China and the experimental results reveal that it can accurately derive geometric information of roads under various scenes. This paper provides a highly-efficient, low-cost approach to update digital road maps.

  18. The use of ZFP lossy floating point data compression in tornado-resolving thunderstorm simulations

    NASA Astrophysics Data System (ADS)

    Orf, L.

    2017-12-01

In the field of atmospheric science, numerical models are used to produce forecasts of weather and climate and serve as virtual laboratories for scientists studying atmospheric phenomena. In both operational and research arenas, atmospheric simulations exploiting modern supercomputing hardware can produce a tremendous amount of data. During model execution, the transfer of floating point data from memory to the file system is often a significant bottleneck where I/O can dominate wallclock time. One way to reduce the I/O footprint is to compress the floating point data, which reduces the amount of data saved to the file system. In this presentation we introduce LOFS, a file system developed specifically for use in three-dimensional numerical weather models that are run on massively parallel supercomputers. LOFS utilizes the core (in-memory buffered) HDF5 driver and includes compression options such as ZFP, a lossy floating point data compression algorithm. ZFP offers several mechanisms for specifying the amount of lossy compression to be applied to floating point data, including the ability to specify the maximum absolute error allowed in each compressed 3D array. We explore different maximum error tolerances in a tornado-resolving supercell thunderstorm simulation for model variables including cloud and precipitation, temperature, wind velocity and vorticity magnitude. We find that average compression ratios exceeding 20:1 in scientifically interesting regions of the simulation domain produce visually identical results to uncompressed data in visualizations and plots. Since LOFS splits the model domain across many files, compression ratios for a given error tolerance can be compared across different locations within the model domain. 
We find that regions of high spatial variability (which tend to be where scientifically interesting things are occurring) show the lowest compression ratios, whereas regions of the domain with little spatial variability compress extremely well. We observe that the overhead for compressing data with ZFP is low, and that compressing data in memory reduces the amount of memory overhead needed to store the virtual files before they are flushed to disk.
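The fixed-accuracy guarantee that ZFP's tolerance mode provides can be mimicked by the simplest possible error-bounded coder, uniform scalar quantization. This toy sketch only illustrates the absolute-error bound; ZFP itself achieves its compression through transform coding of small blocks, which this does not attempt.

```python
def compress_abs_err(values, tol):
    """Toy fixed-accuracy coder: store each value as an integer multiple
    of the step 2*tol, which bounds the reconstruction error by tol."""
    step = 2.0 * tol
    return [round(v / step) for v in values]

def decompress_abs_err(codes, tol):
    step = 2.0 * tol
    return [c * step for c in codes]

data = [0.001 * i * i for i in range(100)]
codes = compress_abs_err(data, tol=0.01)
recon = decompress_abs_err(codes, tol=0.01)
print(max(abs(a - b) for a, b in zip(data, recon)))  # never exceeds ~tol
```

The small integers in `codes` are what an entropy coder would then shrink; smoother fields yield more repetitive codes and hence higher compression ratios, mirroring the spatial-variability effect described above.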

  19. 33 CFR 162.130 - Connecting waters from Lake Huron to Lake Erie; general rules.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... vessel astern, alongside, or by pushing ahead; and (iii) Each dredge and floating plant. (4) The traffic... towing another vessel astern, alongside or by pushing ahead; and (iv) Each dredge and floating plant. (c... Captain of the Port of Detroit, Michigan. Detroit River means the connecting waters from Windmill Point...

  20. 75 FR 69034 - United States Navy Restricted Area, Menominee River, Marinette Marine Corporation Shipyard...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-10

    ...]37[min]10.0[sec] W; thence easterly along the Marinette Marine Corporation pier to the point of origin. The restricted area will be marked by a lighted and signed floating boat barrier. (b) The... floating boat barrier without permission from the United States Navy, Supervisor of Shipbuilding Gulf Coast...

  1. 76 FR 30024 - United States Navy Restricted Area, Menominee River, Marinette Marine Corporation Shipyard...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-24

    ... changed so that the restricted area could be marked with a signed floating buoy line instead of a signed floating barrier. That change has been made to the final rule. Procedural Requirements a. Review Under...; thence easterly along the Marinette Marine Corporation pier to the point of origin. The restricted area...

  2. A Floating Cylinder on an Unbounded Bath

    NASA Astrophysics Data System (ADS)

    Chen, Hanzhe; Siegel, David

    2018-03-01

In this paper, we reconsider a circular cylinder horizontally floating on an unbounded reservoir in a gravitational field directed downwards, which was studied by Bhatnagar and Finn (Phys Fluids 18(4):047103, 2006). We follow their approach but with some modifications. We establish the relation between the total energy E_T relative to the undisturbed state and the total force F_T, that is, F_T = -dE_T/dh, where h is the height of the center of the cylinder relative to the undisturbed fluid level. There is a monotone relation between h and the wetting angle φ_0. We study the number of equilibria, the floating configurations and their stability for all parameter values. We find that the system admits at most two equilibrium points for arbitrary contact angle γ; the one with smaller φ_0 is stable and the one with larger φ_0 is unstable. Since the one-sided solution can be translated horizontally, the fluid interfaces may intersect. We show that the stable equilibrium point never lies in the intersection region, while the unstable equilibrium point may lie in the intersection region.

  3. Theoretical lower bounds for parallel pipelined shift-and-add constant multiplications with n-input arithmetic operators

    NASA Astrophysics Data System (ADS)

    Cruz Jiménez, Miriam Guadalupe; Meyer Baese, Uwe; Jovanovic Dolecek, Gordana

    2017-12-01

    New theoretical lower bounds for the number of operators needed in fixed-point constant multiplication blocks are presented. The multipliers are constructed with the shift-and-add approach, where every arithmetic operation is pipelined, and with the generalization that n-input pipelined additions/subtractions are allowed, along with pure pipelining registers. These lower bounds, tighter than the state-of-the-art theoretical limits, are particularly useful in early design stages for a quick assessment in the hardware utilization of low-cost constant multiplication blocks implemented in the newest families of field programmable gate array (FPGA) integrated circuits.
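A shift-and-add constant multiplier replaces a general multiplication by shifted copies of the input feeding an adder tree; the lower bounds above count how few such pipelined operators can suffice. A hypothetical software analogue (the constants 10 and 105 are arbitrary examples):

```python
def mul10(x):
    # 10*x = 8*x + 2*x: two shifted copies, one adder
    return (x << 3) + (x << 1)

def mul105(x):
    # 105 = 64 + 32 + 8 + 1: four shifted copies, three 2-input adders;
    # with n-input adders (as in the paper) fewer stages are needed
    return (x << 6) + (x << 5) + (x << 3) + x

for x in (1, 7, 123, 4096):
    assert mul10(x) == 10 * x
    assert mul105(x) == 105 * x
```

In an FPGA, each shift is free wiring and each addition occupies one pipelined operator, which is exactly the resource the theoretical bounds constrain.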

  4. Hold the Pickle!

    ERIC Educational Resources Information Center

    Lupi, Marsha Mead

    1979-01-01

    The article illustrates the use of commercial jingles as high interest, low-level reading and language arts materials for primary age mildly retarded students. It is pointed out that jingles can be used in teaching initial consonants, vocabulary words, and arithmetic concepts. (SBH)

  5. Fast Image Texture Classification Using Decision Trees

    NASA Technical Reports Server (NTRS)

    Thompson, David R.

    2011-01-01

    Texture analysis would permit improved autonomous, onboard science data interpretation for adaptive navigation, sampling, and downlink decisions. These analyses would assist with terrain analysis and instrument placement in both macroscopic and microscopic image data products. Unfortunately, most state-of-the-art texture analysis demands computationally expensive convolutions of filters involving many floating-point operations. This makes them infeasible for radiation- hardened computers and spaceflight hardware. A new method approximates traditional texture classification of each image pixel with a fast decision-tree classifier. The classifier uses image features derived from simple filtering operations involving integer arithmetic. The texture analysis method is therefore amenable to implementation on FPGA (field-programmable gate array) hardware. Image features based on the "integral image" transform produce descriptive and efficient texture descriptors. Training the decision tree on a set of training data yields a classification scheme that produces reasonable approximations of optimal "texton" analysis at a fraction of the computational cost. A decision-tree learning algorithm employing the traditional k-means criterion of inter-cluster variance is used to learn tree structure from training data. The result is an efficient and accurate summary of surface morphology in images. This work is an evolutionary advance that unites several previous algorithms (k-means clustering, integral images, decision trees) and applies them to a new problem domain (morphology analysis for autonomous science during remote exploration). Advantages include order-of-magnitude improvements in runtime, feasibility for FPGA hardware, and significant improvements in texture classification accuracy.
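The "integral image" trick the classifier relies on is easy to sketch: one pass of integer additions builds a summed-area table, after which any rectangular box sum costs four lookups and three adds regardless of box size. A minimal version:

```python
def integral_image(img):
    """Summed-area table with a one-cell zero border:
    ii[y+1][x+1] = sum of img[0..y][0..x], built with integer adds only."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row = 0
        for x in range(w):
            row += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row
    return ii

def box_sum(ii, y0, x0, y1, x1):
    """Sum over img[y0..y1][x0..x1] in four lookups and three adds."""
    return (ii[y1 + 1][x1 + 1] - ii[y0][x1 + 1]
            - ii[y1 + 1][x0] + ii[y0][x0])

img = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
ii = integral_image(img)
print(box_sum(ii, 0, 0, 1, 1))  # top-left 2x2 block: 1+2+4+5 = 12
```

Box sums like these yield the cheap, integer-only texture features that replace the floating-point filter convolutions, which is what makes the approach FPGA-friendly.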

  6. Efficient and portable acceleration of quantum chemical many-body methods in mixed floating point precision using OpenACC compiler directives

    NASA Astrophysics Data System (ADS)

    Eriksen, Janus J.

    2017-09-01

    It is demonstrated how the non-proprietary OpenACC standard of compiler directives may be used to compactly and efficiently accelerate the rate-determining steps of two of the most routinely applied many-body methods of electronic structure theory, namely the second-order Møller-Plesset (MP2) model in its resolution-of-the-identity approximated form and the (T) triples correction to the coupled cluster singles and doubles model (CCSD(T)). By means of compute directives as well as the use of optimised device math libraries, the operations involved in the energy kernels have been ported to graphics processing unit (GPU) accelerators, and the associated data transfers correspondingly optimised to such a degree that the final implementations (using either double and/or single precision arithmetics) are capable of scaling to as large systems as allowed for by the capacity of the host central processing unit (CPU) main memory. The performance of the hybrid CPU/GPU implementations is assessed through calculations on test systems of alanine amino acid chains using one-electron basis sets of increasing size (ranging from double- to pentuple-ζ quality). For all but the smallest problem sizes of the present study, the optimised accelerated codes (using a single multi-core CPU host node in conjunction with six GPUs) are found to be capable of reducing the total time-to-solution by at least an order of magnitude over optimised, OpenMP-threaded CPU-only reference implementations.
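The single- versus double-precision tradeoff can be made concrete without GPUs: the standard library can emulate IEEE single precision by a pack/unpack round-trip. This sketch is unrelated to the paper's actual kernels; it only shows how accumulation in single precision drifts where double precision does not, which is why mixed-precision codes keep the sensitive reductions in double.

```python
import struct

def to_f32(x):
    """Round a Python float (IEEE double) to single precision and back."""
    return struct.unpack("<f", struct.pack("<f", x))[0]

def sum_f32(values):
    """Accumulate with every intermediate rounded to single precision."""
    acc = 0.0
    for v in values:
        acc = to_f32(acc + to_f32(v))
    return acc

vals = [1.0e-4] * 100_000          # exact sum is 10
print(sum(vals))                   # double-precision accumulation
print(sum_f32(vals))               # single-precision accumulation drifts
```

The same pattern motivates the mixed-precision strategy above: cheap bulk arithmetic can run in single precision as long as the final reductions are protected.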

  7. Network-Physics (NP) BEC DIGITAL(#)-VULNERABILITY; ``Q-Computing"=Simple-Arithmetic;Modular-Congruences=SignalXNoise PRODUCTS=Clock-model;BEC-Factorization;RANDOM-# Definition;P=/=NP TRIVIAL Proof!!!

    NASA Astrophysics Data System (ADS)

    Pi, E. I.; Siegel, E.

    2010-03-01

    Siegel [AMS Natl. Mtg. (2002) Abs. 973-60-124] digits logarithmic-law inversion to ONLY BEQS BEC: Quanta/Bosons = #: EMP-like SEVERE VULNERABILITY of ONLY #-networks (VS. ANALOG INvulnerability) via Barabasi NP (VS. dynamics [Not. AMS (5/2009)] critique); (so called) ``quantum-computing'' (QC) = simple-arithmetic (sans division); algorithmic complexities: INtractibility/UNdecidability/INefficiency/NONcomputability/HARDNESS (so MIScalled) ``noise''-induced-phase-transition (NIT) ACCELERATION: Cook-Levin theorem Reducibility = RG fixed-points; #-Randomness DEFINITION via WHAT? Query (VS. Goldreich [Not. AMS (2002)] How? mea culpa) = ONLY MBCS hot-plasma v #-clumping NON-random BEC; Modular-Arithmetic Congruences = Signal x Noise PRODUCTS = clock-model; NON-Shor [Physica A, 341, 586 (04)] BEC logarithmic-law inversion factorization: Watkins #-theory U statistical-physics); P=/=NP C-S TRIVIAL Proof: Euclid!!! [(So Miscalled) computational-complexity J-O obviation (3 millennia AGO geometry: NO: CC, ``CS''; ``Feet of Clay!!!'']; Query WHAT?: Definition: (so MIScalled) ``complexity'' = UTTER-SIMPLICITY!! v COMPLICATEDNESS MEASURE(S).

  8. Arithmetic learning with the use of graphic organiser

    NASA Astrophysics Data System (ADS)

    Sai, F. L.; Shahrill, M.; Tan, A.; Han, S. H.

    2018-01-01

    For this study, Zollman’s four corners-and-a-diamond mathematics graphic organiser embedded with Polya’s Problem Solving Model was used to investigate secondary school students’ performance on arithmetic word problems. This instructional learning tool was used to help students break down the given information into smaller units for better strategic planning. The participants were Year 7 students, comprising 21 male and 20 female students aged between 11 and 13 years, from a co-ed secondary school in Brunei Darussalam. This study mainly adopted a quantitative approach to investigate the types of differences found between the arithmetic word problem pre- and post-test results from the use of the learning tool. Although the findings revealed slight improvements in the overall comparisons of the students’ test results, the in-depth analysis of the students’ responses in their activity worksheets shows a different outcome. Some students were able to make good attempts at breaking down the key points into smaller pieces of information in order to solve the word problems.

  9. A test data compression scheme based on irrational numbers stored coding.

    PubMed

    Wu, Hai-feng; Cheng, Yu-sheng; Zhan, Wen-fa; Cheng, Yi-fei; Wu, Qiong; Zhu, Shi-juan

    2014-01-01

    The testing problem has become an important factor restricting the development of the integrated circuit industry. A new test data compression scheme, namely irrational numbers stored (INS), is presented. To compress test data efficiently, the test data are converted into floating-point numbers and stored in the form of irrational numbers. An algorithm for converting floating-point numbers to irrational numbers precisely is given. Experimental results for some ISCAS 89 benchmarks show that the compression achieved by the proposed scheme is better than that of coding methods such as FDR, AARLC, INDC, FAVLC, and VRL.

  10. 26 CFR 1.1274-2 - Issue price of debt instruments to which section 1274 applies.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ...- borrower to the seller-lender that is designated as interest or points. See Example 2 of § 1.1273-2(g)(5... ignored. (f) Treatment of variable rate debt instruments—(1) Stated interest at a qualified floating rate... qualified floating rate (or rates) is determined by assuming that the instrument provides for a fixed rate...

  11. 76 FR 71322 - Taking and Importing Marine Mammals; U.S. Navy Training in the Hawaii Range Complex

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-17

    ..., most operationally sound method of initiating a demolition charge on a floating mine or mine at depth...; require building/ deploying an improvised, bulky, floating system for the receiver; and add another 180 ft... charge initiating device are taken to the detonation point. Military forms of C-4 are used as the...

  12. Characterization of airborne float coal dust emitted during continuous mining, longwall mining and belt transport.

    PubMed

    Shahan, M R; Seaman, C E; Beck, T W; Colinet, J F; Mischler, S E

    2017-09-01

    Float coal dust is produced by various mining methods, carried by ventilating air and deposited on the floor, roof and ribs of mine airways. If deposited, float dust is re-entrained during a methane explosion. Without sufficient inert rock dust quantities, this float coal dust can propagate an explosion throughout mining entries. Consequently, controlling float coal dust is of critical interest to mining operations. Rock dusting, which is the adding of inert material to airway surfaces, is the main control technique currently used by the coal mining industry to reduce the float coal dust explosion hazard. To assist the industry in reducing this hazard, the Pittsburgh Mining Research Division of the U.S. National Institute for Occupational Safety and Health initiated a project to investigate methods and technologies to reduce float coal dust in underground coal mines through prevention, capture and suppression prior to deposition. Field characterization studies were performed to determine quantitatively the sources, types and amounts of dust produced during various coal mining processes. The operations chosen for study were a continuous miner section, a longwall section and a coal-handling facility. For each of these operations, the primary dust sources were confirmed to be the continuous mining machine, longwall shearer and conveyor belt transfer points, respectively. Respirable and total airborne float dust samples were collected and analyzed for each operation, and the ratio of total airborne float coal dust to respirable dust was calculated. During the continuous mining process, the ratio of total airborne float coal dust to respirable dust ranged from 10.3 to 13.8. The ratios measured on the longwall face were between 18.5 and 21.5. The total airborne float coal dust to respirable dust ratio observed during belt transport ranged between 7.5 and 21.8.

  13. A floating-point digital receiver for MRI.

    PubMed

    Hoenninger, John C; Crooks, Lawrence E; Arakawa, Mitsuaki

    2002-07-01

    A magnetic resonance imaging (MRI) system requires the highest possible signal fidelity and stability for clinical applications. Quadrature analog receivers have problems with channel matching, dc offset and analog-to-digital linearity. Fixed-point digital receivers (DRs) reduce all of these problems. We have demonstrated that a floating-point DR using large (order 124 to 512) FIR low-pass filters also overcomes these problems, automatically provides long word length and has low latency between signals. A preloaded table of finite impulse response (FIR) filter coefficients provides fast switching between one of 129 different one-stage and two-stage multirate FIR low-pass filters with bandwidths between 4 kHz and 125 kHz. This design has been implemented on a dual-channel circuit board for a commercial MRI system.
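
    The windowed-sinc construction below is one standard way to obtain FIR low-pass coefficients of the kind such a receiver tabulates; the tap count, sample rate, and cutoff here are illustrative, not the board's actual parameters:

```python
import numpy as np

def lowpass_fir(num_taps, cutoff_hz, fs_hz):
    """Windowed-sinc low-pass FIR design (Hamming window); an odd num_taps
    gives a symmetric, linear-phase filter. Normalised to unity DC gain."""
    n = np.arange(num_taps) - (num_taps - 1) / 2
    h = np.sinc(2 * cutoff_hz / fs_hz * n) * np.hamming(num_taps)
    return h / h.sum()

fs = 1_000_000                          # 1 MHz sample rate (illustrative)
taps = lowpass_fir(129, 125_000, fs)    # 129 taps, 125 kHz cutoff

# A 50 kHz tone (passband) plus a 400 kHz tone (stopband):
t = np.arange(2048) / fs
x = np.sin(2 * np.pi * 50_000 * t) + np.sin(2 * np.pi * 400_000 * t)
y = np.convolve(x, taps, mode='same')   # the 50 kHz component survives
```

    A preloaded table of such coefficient sets, one per bandwidth, is what allows fast filter switching without redesigning on the fly.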

  14. Gastroretentive extended-release floating granules prepared using a novel fluidized hot melt granulation (FHMG) technique.

    PubMed

    Zhai, H; Jones, D S; McCoy, C P; Madi, A M; Tian, Y; Andrews, G P

    2014-10-06

    The objective of this work was to investigate the feasibility of using a novel granulation technique, namely, fluidized hot melt granulation (FHMG), to prepare gastroretentive extended-release floating granules. In this study we have utilized FHMG, a solvent free process in which granulation is achieved with the aid of low melting point materials, using Compritol 888 ATO and Gelucire 50/13 as meltable binders, in place of conventional liquid binders. The physicochemical properties, morphology, floating properties, and drug release of the manufactured granules were investigated. Granules prepared by this method were spherical in shape and showed good flowability. The floating granules exhibited sustained release exceeding 10 h. Granule buoyancy (floating time and strength) and drug release properties were significantly influenced by formulation variables such as excipient type and concentration, and the physical characteristics (particle size, hydrophilicity) of the excipients. Drug release rate was increased by increasing the concentration of hydroxypropyl cellulose (HPC) and Gelucire 50/13, or by decreasing the particle size of HPC. Floating strength was improved through the incorporation of sodium bicarbonate and citric acid. Furthermore, floating strength was influenced by the concentration of HPC within the formulation. Granules prepared in this way show good physical characteristics, floating ability, and drug release properties when placed in simulated gastric fluid. Moreover, the drug release and floating properties can be controlled by modification of the ratio or physical characteristics of the excipients used in the formulation.

  15. Holistic Grammar.

    ERIC Educational Resources Information Center

    Pierstorff, Don K.

    1981-01-01

    Parodies holistic approaches to education. Explains an educational approach which simultaneously teaches grammar and arithmetic. Lauds the advantages of the approach as high student attrition, ease of grading, and focus on developing the reptilian portion of the brain. Points out common errors made by students. (AYC)

  16. 26 CFR 1.483-2 - Unstated interest.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... percentage points above the yield on 6-month Treasury bills at the mid-point of the semiannual period immediately preceding each interest payment date. Assume that the interest rate is a qualified floating rate...

  17. A preliminary study of molecular dynamics on reconfigurable computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wolinski, C.; Trouw, F. R.; Gokhale, M.

    2003-01-01

    In this paper we investigate the performance of platform FPGAs on a compute-intensive, floating-point-intensive supercomputing application, Molecular Dynamics (MD). MD is a popular simulation technique to track interacting particles through time by integrating their equations of motion. One part of the MD algorithm was implemented using the Fabric Generator (FG) [11] and mapped onto several reconfigurable logic arrays. FG is a Java-based toolset that greatly accelerates construction of the fabrics from an abstract, technology-independent representation. Our experiments used technology-independent IEEE 32-bit floating-point operators so that the design could be easily re-targeted. Experiments were performed using both non-pipelined and pipelined floating-point modules. We present results for the Altera Excalibur ARM System on a Programmable Chip (SoPC), the Altera Stratix EP1S80, and the Xilinx Virtex-II Pro 2VP50. The best results obtained were 5.69 GFlops at 80 MHz (Altera Stratix EP1S80) and 4.47 GFlops at 82 MHz (Xilinx Virtex-II Pro 2VP50). Assuming a 10 W power budget, these results compare very favorably to a 4 Gflop/40 W processing/power rate for a modern Pentium, suggesting that reconfigurable logic can achieve high performance at low power on floating-point-intensive applications.
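
    A minimal sketch of the MD inner loop being ported, a velocity-Verlet step with a Lennard-Jones pair force in 32-bit floats, might look as follows (our illustration, not the paper's FPGA fabric; two particles, reduced units):

```python
import numpy as np

def lj_force(r):
    # Force on the right-hand particle for the Lennard-Jones potential
    # U(r) = 4*(r**-12 - r**-6) in reduced units (epsilon = sigma = 1).
    return 24.0 * (2.0 * r**-13 - r**-7)

def lj_energy(r):
    return 4.0 * (r**-12 - r**-6)

dt = np.float32(0.001)
x = np.array([0.0, 1.2], dtype=np.float32)   # two unit-mass particles on a line
v = np.zeros(2, dtype=np.float32)

def verlet_step(x, v, dt):
    f = lj_force(x[1] - x[0])
    a = np.array([-f, f], dtype=np.float32)          # equal and opposite
    v_half = v + np.float32(0.5) * dt * a            # half-kick
    x_new = x + dt * v_half                          # drift
    f_new = lj_force(x_new[1] - x_new[0])
    a_new = np.array([-f_new, f_new], dtype=np.float32)
    v_new = v_half + np.float32(0.5) * dt * a_new    # half-kick
    return x_new, v_new

e0 = lj_energy(x[1] - x[0]) + 0.5 * float((v**2).sum())
for _ in range(1000):
    x, v = verlet_step(x, v, dt)
e1 = lj_energy(x[1] - x[0]) + 0.5 * float((v**2).sum())
assert abs(e1 - e0) < 1e-2   # symplectic integrator: energy nearly conserved
```

    The force evaluation dominates the cost, which is why it is the natural candidate for the pipelined floating-point fabric.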

  18. [The validation of the effect of correcting spectral background changes based on floating reference method by simulation].

    PubMed

    Wang, Zhu-lou; Zhang, Wan-jie; Li, Chen-xi; Chen, Wen-liang; Xu, Ke-xin

    2015-02-01

    There are several challenges in near-infrared non-invasive blood glucose measurement, such as the low signal-to-noise ratio of the instrument, unstable measurement conditions, and the unpredictable, irregular changes of the measured object. It is therefore difficult to extract blood glucose concentration information accurately from the complicated signals. A reference measurement is usually considered as a means of eliminating the effect of background changes, but there is no reference substance that changes synchronously with the analyte. After many years of research, our group has proposed the floating reference method, which succeeds in eliminating the spectral effects induced by instrument drifts and by variations in the measured object's background. Our studies indicate, however, that the reference point changes with measurement location and wavelength, so the effectiveness of the floating reference method should be verified comprehensively. In this paper, for simplicity, a Monte Carlo simulation employing Intralipid solutions with concentrations of 5% and 10% is performed to verify the ability of the floating reference method to eliminate the consequences of light source drift, where the drift is introduced by varying the incident photon number. The effectiveness of the floating reference method, with corresponding reference points at different wavelengths, in eliminating variations due to light source drift is estimated. A comparison of the prediction abilities of calibration models with and without the method shows that the RMSEPs are decreased by about 98.57% (5% Intralipid) and 99.36% (10% Intralipid). The results indicate that the floating reference method is clearly effective in eliminating background changes.

  19. A simple combined floating and anchored collagen gel for enhancing mechanical strength of culture system.

    PubMed

    Harada, Ichiro; Kim, Sung-Gon; Cho, Chong Su; Kurosawa, Hisashi; Akaike, Toshihiro

    2007-01-01

    In this study, a simple combined method consisting of floating and anchored collagen gel in a ligament or tendon equivalent culture system was used to produce oriented fibrils in fibroblast-populated collagen matrices (FPCMs) during the remodeling and contraction of the collagen gel. Orientation of the collagen fibrils along a single axis occurred over the whole area of the floating section, and most of the fibroblasts were elongated and aligned along the oriented collagen fibrils, whereas no significant orientation of fibrils was observed in FPCMs contracted normally by the floating method. Higher elasticity and enhanced mechanical strength were obtained using our simple method compared with normally contracted floating FPCMs. The Young's modulus and the breaking point of the FPCMs were dependent on the initial cell densities. This simple method can be applied as a convenient bioreactor to study cellular processes of fibroblasts in tissues with highly oriented fibrils such as ligaments or tendons. (c) 2006 Wiley Periodicals, Inc.

  20. Floating sample-collection platform with stage-activated automatic water sampler for streams with large variation in stage

    USGS Publications Warehouse

    Tarte, Stephen R.; Schmidt, A.R.; Sullivan, Daniel J.

    1992-01-01

    A floating sample-collection platform is described for stream sites where the vertical or horizontal distance between the stream-sampling point and a safe location for the sampler exceeds the suction head of the sampler. The platform allows continuous water sampling over the entire storm-runoff hydrograph. The platform was developed for a site in southern Illinois.

  1. Floating assembly of diatom Coscinodiscus sp. microshells.

    PubMed

    Wang, Yu; Pan, Junfeng; Cai, Jun; Zhang, Deyuan

    2012-03-30

    Diatoms have silica frustules with transparent and delicate micro/nano scale structures, two-dimensional pore arrays, and large surface areas. Although the diatom cells of Coscinodiscus sp. live underwater, we found that their valves can float on water and assemble together. Experiments show that the convex shape and the 40 nm sieve pores of the valves allow them to float on water, and that buoyancy and micro-range attractive forces cause the valves to assemble together at the highest point of the water. As measured by AFM-calibrated glass needles fixed in a manipulator, the buoyancy force on a single floating valve may reach up to 10 μN in water. Turning the valves over, enlarging the sieve pores, reducing the surface tension of the water, or vacuum pumping may cause the floating valves to sink. After the water has evaporated, the floating valves remain in their assembled state and form a monolayer film. The bonded diatom monolayer may be valuable in studies on diatom-based optical devices, biosensors, solar cells, and batteries, to better use the optical and adsorption properties of frustules. The floating assembly phenomenon can also be used as a self-assembly method for fabricating monolayers of circular plates. Copyright © 2012 Elsevier Inc. All rights reserved.

  2. Quality of life associated with perceived stigma and discrimination among the floating population in Shanghai, China: a qualitative study.

    PubMed

    Wang, Ji-Wei; Cui, Zhi-Ting; Cui, Hong-Wei; Wei, Chang-Nian; Harada, Koichi; Minamoto, Keiko; Ueda, Kimiyo; Ingle, Kapilkumar N; Zhang, Cheng-Gang; Ueda, Atsushi

    2010-12-01

    The floating population refers to the large and increasing number of migrants without local household registration status and has become a new demographic phenomenon in China. Most of these migrants move from the rural areas of the central and western parts of China to the eastern and coastal metropolitan areas in pursuit of a better life. The floating population of China was composed of 121 million people in 2000, and this number was expected to increase to 300 million by 2010. Quality of life (QOL) studies of the floating population could provide a critical starting point for recognizing the potential of regions, cities and local communities to improve QOL. This study explored the construct of QOL of the floating population in Shanghai, China. We conducted eight focus groups with 58 members of the floating population (24 males and 34 females) and then performed a qualitative thematic analysis of the interviews. The following five QOL domains were identified from the analysis: personal development, jobs and career, family life, social relationships and social security. The results indicated that stigma and discrimination permeate these life domains and influence the framing of life expectations. Proposals were made for reducing stigma and discrimination against the floating population to improve the QOL of this population.

  3. Digital hardware implementation of a stochastic two-dimensional neuron model.

    PubMed

    Grassia, F; Kohno, T; Levi, T

    2016-11-01

    This study explores the feasibility of stochastic neuron simulation in digital systems (FPGA), which realizes an implementation of a two-dimensional neuron model. The stochasticity is added by a source of current noise in the silicon neuron using an Ornstein-Uhlenbeck process. This approach uses digital computation to emulate individual neuron behavior using fixed point arithmetic operation. The neuron model's computations are performed in arithmetic pipelines. It was designed in VHDL language and simulated prior to mapping in the FPGA. The experimental results confirmed the validity of the developed stochastic FPGA implementation, which makes the implementation of the silicon neuron more biologically plausible for future hybrid experiments. Copyright © 2017 Elsevier Ltd. All rights reserved.
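
    The current-noise source described, an Ornstein-Uhlenbeck process, can be sketched in a few lines of Python (an illustrative Euler-Maruyama simulation with made-up parameters, not the paper's fixed-point VHDL pipeline):

```python
import numpy as np

def ou_noise(n_steps, dt, theta=1.0, mu=0.0, sigma=0.5, seed=0):
    """Euler-Maruyama simulation of dI = theta*(mu - I)*dt + sigma*dW.
    Parameter values are illustrative, not those of the paper's neuron."""
    rng = np.random.default_rng(seed)
    dw = rng.normal(0.0, np.sqrt(dt), size=n_steps - 1)  # Wiener increments
    i = np.empty(n_steps)
    i[0] = mu
    for k in range(n_steps - 1):
        i[k + 1] = i[k] + theta * (mu - i[k]) * dt + sigma * dw[k]
    return i

noise = ou_noise(100_000, dt=0.01)
# Stationary statistics: mean -> mu = 0, variance -> sigma**2/(2*theta) = 0.125
assert abs(noise.mean()) < 0.06
assert abs(noise.var() - 0.125) < 0.03
```

    The mean-reverting drift plus Gaussian increments is what makes this process a convenient hardware noise model: each update is one multiply-accumulate and one random draw.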

  4. Developmental dyscalculia.

    PubMed

    Price, Gavin R; Ansari, Daniel

    2013-01-01

    Developmental dyscalculia (DD) is a learning disorder affecting the acquisition of school level arithmetic skills present in approximately 3-6% of the population. At the behavioral level DD is characterized by poor retrieval of arithmetic facts from memory, the use of immature calculation procedures and counting strategies, and the atypical representation and processing of numerical magnitude. At the neural level emerging evidence suggests DD is associated with atypical structure and function in brain regions associated with the representation of numerical magnitude. The current state of knowledge points to a core deficit in numerical magnitude representation in DD, but further work is required to elucidate causal mechanisms underlying the disorder. Copyright © 2013 Elsevier B.V. All rights reserved.

  5. Elliptic Curve Integral Points on y² = x³ + 3x − 14

    NASA Astrophysics Data System (ADS)

    Zhao, Jianhong

    2018-03-01

    The positive integer points and integral points of elliptic curves are very important in number theory and arithmetic algebra, and have a wide range of applications in cryptography and other fields. There are some results on the positive integer points of the elliptic curve y² = x³ + ax + b, a, b ∈ Z. In 1987, D. Zagier posed the question of the integer points on y² = x³ − 27x + 62, which counts for a great deal in the study of the arithmetic properties of elliptic curves. In 2009, Zhu H L and Chen J H solved the problem of the integer points on y² = x³ − 27x + 62 by using algebraic number theory and the p-adic analysis method. In 2010, using an elementary method, Wu H M obtained all the integral points of the elliptic curve y² = x³ − 27x − 62. In 2015, Li Y Z and Cui B J solved the problem of the integer points on y² = x³ − 21x − 90 by an elementary method. In 2016, Guo J solved the problem of the integer points on y² = x³ + 27x + 62 by an elementary method. In 2017, Guo J proved that y² = x³ − 21x + 90 has no integer points by an elementary method. Up to now, there have been no relevant conclusions on the integral points of the elliptic curve y² = x³ + 3x − 14, which is the subject of this paper. By using congruences and the Legendre symbol, it can be proved that the elliptic curve y² = x³ + 3x − 14 has only one integral point: (x, y) = (2, 0).
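
    The abstract's conclusion is easy to sanity-check by brute force over a bounded range (this search is ours and is no substitute for the congruence/Legendre-symbol proof; it only confirms that no further integral points with |x| ≤ 10⁴ exist):

```python
from math import isqrt

# Search for integral points (x, y) with y >= 0 on y^2 = x^3 + 3x - 14.
points = []
for x in range(-10_000, 10_001):
    rhs = x**3 + 3 * x - 14
    if rhs < 0:
        continue                      # y^2 cannot be negative
    y = isqrt(rhs)
    if y * y == rhs:
        points.append((x, y))

assert points == [(2, 0)]             # matches the abstract's conclusion
```

    Note that x = 2 gives 8 + 6 − 14 = 0, so (2, 0) indeed lies on the curve, and y = 0 is its own negative.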

  6. Determinant Computation on the GPU using the Condensation Method

    NASA Astrophysics Data System (ADS)

    Anisul Haque, Sardar; Moreno Maza, Marc

    2012-02-01

    We report on a GPU implementation of the condensation method designed by Abdelmalek Salem and Kouachi Said for computing the determinant of a matrix. We consider two types of coefficients: modular integers and floating-point numbers. We evaluate the performance of our code by measuring its effective bandwidth and argue that it is numerically stable in the floating-point case. In addition, we compare our code with serial implementations of determinant computation from well-known mathematical packages. Our results suggest that a GPU implementation of the condensation method has large potential for improving those packages in terms of running time and numerical stability.
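
    Chio's condensation, one classical form of the condensation idea (the paper follows a variant due to Salem and Kouachi), shrinks an n×n determinant to an (n−1)×(n−1) one via 2×2 minors against the pivot; a serial Python sketch:

```python
import numpy as np

def det_condensation(a):
    """Determinant via repeated Chio condensation. Each pass replaces the
    matrix by the (n-1)x(n-1) array of 2x2 minors against the pivot a[0,0],
    dividing out the accumulated pivot powers at the end. Row swaps handle
    zero pivots (naive scheme: pivot powers can overflow for large n)."""
    a = np.array(a, dtype=float)
    sign, scale = 1.0, 1.0
    while a.shape[0] > 1:
        n = a.shape[0]
        if a[0, 0] == 0.0:
            nz = np.flatnonzero(a[:, 0])
            if nz.size == 0:
                return 0.0            # whole first column zero => singular
            a[[0, nz[0]]] = a[[nz[0], 0]]
            sign = -sign              # a row swap flips the determinant's sign
        scale *= a[0, 0] ** (n - 2)
        # Each entry is the 2x2 minor | a00 a0j ; ai0 aij |:
        a = a[0, 0] * a[1:, 1:] - np.outer(a[1:, 0], a[0, 1:])
    return sign * a[0, 0] / scale

assert abs(det_condensation([[2, 3, 1], [4, 1, 5], [6, 0, 2]]) - 64.0) < 1e-9
```

    The appeal for GPUs is that every minor in a pass is independent, so each condensation step is one embarrassingly parallel map.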

  7. Investigation of Springing Responses on the Great Lakes Ore Carrier M/V STEWART J. CORT

    DTIC Science & Technology

    1980-12-01

    ... 175k tons. Using these values one can write ... The shifting of the ... will have to write a routine to convert the floating-point numbers into the other machine's internal floating-point format. The CCI record is again ... C COMPUTES THE RESULTS AND WRITES THEM TO THE LINE PRINTER. C IT ALSO PUTS THE RESULTS IN A DISK FILE. C WRITTEN BY JCD NOVEMBER 1970 ...

  8. LDPC decoder with a limited-precision FPGA-based floating-point multiplication coprocessor

    NASA Astrophysics Data System (ADS)

    Moberly, Raymond; O'Sullivan, Michael; Waheed, Khurram

    2007-09-01

    Implementing the sum-product algorithm in an FPGA with an embedded processor invites us to consider a tradeoff between computational precision and computational speed. The algorithm, known outside of the signal processing community as Pearl's belief propagation, is used for iterative soft-decision decoding of LDPC codes. We determined the feasibility of a coprocessor that will perform product computations. Our FPGA-based coprocessor design performs computer algebra with significantly less precision than the standard (e.g. integer, floating-point) operations of general-purpose processors. Using synthesis targeting a 3,168-LUT Xilinx FPGA, we show that key components of a decoder are feasible and that the full single-precision decoder could be constructed using a larger part. Soft-decision decoding by the iterative belief propagation algorithm is impacted both positively and negatively by a reduction in the precision of the computation. Reducing precision reduces the coding gain, but the limited-precision computation can operate faster. A proposed solution offers custom logic to perform computations with less precision, yet uses the floating-point format to interface with the software. Simulation results show the achievable coding gain. Synthesis results help estimate the full capacity and performance of an FPGA-based coprocessor.
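
    The product computation at the heart of the sum-product decoder is the check-node update; below is a sketch of the standard tanh rule, evaluated in double and in single precision to mimic the precision tradeoff discussed (illustrative, not the coprocessor's custom number format):

```python
import numpy as np

def check_node(llrs):
    """Sum-product check-node update (tanh rule):
    L_out = 2 * atanh( prod_i tanh(L_i / 2) )."""
    t = np.tanh(np.asarray(llrs) / 2)
    return 2 * np.arctanh(np.prod(t))

llrs = np.array([1.8, -0.7, 2.5, 0.9])               # incoming extrinsic LLRs
full = float(check_node(llrs))                       # float64
half = float(check_node(llrs.astype(np.float32)))    # reduced-precision analogue

assert full < 0                          # one negative input flips the parity
assert abs(full) < np.abs(llrs).min()    # output is weaker than the weakest input
assert abs(full - half) < 1e-3           # precision loss is small at these magnitudes
```

    The loss from reduced precision grows as messages saturate over iterations, which is why coding gain degrades gracefully rather than immediately.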

  9. Optimal Compression Methods for Floating-point Format Images

    NASA Technical Reports Server (NTRS)

    Pence, W. D.; White, R. L.; Seaman, R.

    2009-01-01

    We report on the results of a comparison study of different techniques for compressing FITS images that have floating-point (real*4) pixel values. Standard file compression methods like GZIP are generally ineffective in this case (with compression ratios only in the range 1.2-1.6), so instead we use a technique of converting the floating-point values into quantized scaled integers which are compressed using the Rice algorithm. The compressed data stream is stored in FITS format using the tiled-image compression convention. This is technically a lossy compression method, since the pixel values are not exactly reproduced; however, all the significant photometric and astrometric information content of the image can be preserved while still achieving file compression ratios in the range of 4 to 8. We also show that introducing dithering, or randomization, when assigning the quantized pixel values can significantly improve the photometric and astrometric precision of the stellar images in the compressed file without adding additional noise. We quantify our results by comparing the stellar magnitudes and positions as measured in the original uncompressed image to those derived from the same image after applying successively greater amounts of compression.
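
    The quantize-to-scaled-integers step with subtractive dithering can be sketched as follows (our illustration of the idea; in the actual convention the dither sequence is regenerated from a stored seed, and the integers are then Rice-coded):

```python
import numpy as np

rng = np.random.default_rng(0)
pixels = rng.normal(1000.0, 5.0, size=10_000).astype(np.float32)  # fake image

# Quantize to scaled integers with subtractive dithering: a reproducible
# uniform offset is added before rounding and subtracted after decoding,
# which decorrelates the quantization error from the signal.
scale = float(pixels.std()) / 16.0        # ~16 levels per noise sigma (illustrative)
dither = rng.random(pixels.size)          # regenerated from a stored seed in practice
q = np.round(pixels / scale + dither).astype(np.int32)   # these ints get Rice-coded
restored = (q - dither) * scale

max_err = float(np.abs(restored - pixels).max())
assert max_err <= 0.505 * scale           # error bounded by half a quantization step
```

    Tying the quantization step to the measured noise level is what keeps the loss below the image's own noise floor while the integer stream compresses well.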

  10. Deflection of Resilient Materials for Reduction of Floor Impact Sound

    PubMed Central

    Lee, Jung-Yoon; Kim, Jong-Mun

    2014-01-01

    Recently, many residents living in apartment buildings in Korea have been bothered by noise coming from the houses above. In order to reduce noise pollution, communities are increasingly imposing bylaws, including the limitation of floor impact sound, minimum thickness of floors, and floor soundproofing solutions. This research effort focused specifically on the deflection of resilient materials in the floor sound insulation systems of apartment houses. The experimental program involved conducting twenty-seven material tests and ten sound insulation floating concrete floor specimens. Two main parameters were considered in the experimental investigation: the seven types of resilient materials and the location of the loading point. The structural behavior of the sound insulation floating floor was predicted using the Winkler method. The experimental and analytical results indicated that the cracking strength of the floating concrete floor significantly increased with increasing the tangent modulus of resilient material. The deflection of the floating concrete floor loaded at the side of the specimen was much greater than that of the floating concrete floor loaded at the center of the specimen. The Winkler model considering the effect of modulus of resilient materials was able to accurately predict the cracking strength of the floating concrete floor. PMID:25574491

  12. A Cryptological Way of Teaching Mathematics

    ERIC Educational Resources Information Center

    Caballero-Gil, Pino; Bruno-Castaneda, Carlos

    2007-01-01

    This work addresses the subject of mathematics education at secondary schools from a current and stimulating point of view intimately related to computational science. Cryptology is a captivating way of introducing into the classroom different mathematical subjects such as functions, matrices, modular arithmetic, combinatorics, equations,…

  13. Speech recognition for embedded automatic positioner for laparoscope

    NASA Astrophysics Data System (ADS)

    Chen, Xiaodong; Yin, Qingyun; Wang, Yi; Yu, Daoyin

    2014-07-01

    In this paper a novel speech recognition methodology based on Hidden Markov Models (HMMs) is proposed for an embedded Automatic Positioner for Laparoscope (APL), built around a fixed-point ARM processor. The APL system is designed to assist the doctor in laparoscopic surgery by implementing a specific doctor's vocal control of the laparoscope. Real-time response to voice commands demands an efficient speech recognition algorithm for the APL. In order to reduce computation cost without significant loss in recognition accuracy, both arithmetic and algorithmic optimizations are applied in the presented method. First, relying mostly on arithmetic optimizations, a fixed-point front end for speech feature analysis is built to match the ARM processor's characteristics. Then a fast likelihood computation algorithm is used to reduce the computational complexity of the HMM-based recognition algorithm. The experimental results show that the method keeps recognition time under 0.5 s with accuracy higher than 99%, demonstrating its ability to achieve real-time vocal control of the APL.

  14. Algorithmic-Reducibility = Renormalization-Group Fixed-Points; ``Noise''-Induced Phase-Transitions (NITs) to Accelerate Algorithmics (``NIT-Picking'') Replacing CRUTCHES!!!: Gauss Modular/Clock-Arithmetic Congruences = Signal X Noise PRODUCTS..

    NASA Astrophysics Data System (ADS)

    Siegel, J.; Siegel, Edward Carl-Ludwig

    2011-03-01

    Cook-Levin computational-"complexity" (C-C) algorithmic-equivalence reduction-theorem reducibility equivalence to renormalization-(semi)-group phase-transitions critical-phenomena statistical-physics universality-classes fixed-points is exploited with Gauss modular/clock-arithmetic/model congruences = signal X noise PRODUCT reinterpretation. Siegel-Baez FUZZYICS=CATEGORYICS (SON of ``TRIZ''): Category-Semantics (C-S) tabular list-format truth-table matrix analytics predicts and implements "noise"-induced phase-transitions (NITs) to accelerate versus to decelerate Harel [Algorithmics (1987)]-Sipser [Intro. Theory Computation (1997)] algorithmic C-C: "NIT-picking" to optimize optimization-problems optimally (OOPO). Versus iso-"noise" power-spectrum quantitative-only amplitude/magnitude-only variation stochastic-resonance, this "NIT-picking" is "noise" power-spectrum QUALitative-type variation via quantitative critical-exponents variation. Computer-"science" algorithmic C-C models: Turing-machine, finite-state-models/automata, are identified as early-days once-workable but NOW ONLY LIMITING CRUTCHES IMPEDING latter-days new-insights!!!

  15. [A case of pure anarithmetia associated with disability in processing of abstract spatial relationship].

    PubMed

    Hirayama, Kazumi; Taguchi, Yuzuru; Tsukamoto, Tetsuro

    2002-10-01

    A 35-year-old right-handed man developed pure anarithmetia after a left parieto-occipital subcortical hemorrhage. His intelligence, memory, language, and construction ability were all within normal limits. No hemispatial neglect, agraphia, finger agnosia, or right-left disorientation was noted. He showed no impairment in reading numbers aloud, pointing to written numbers, writing numbers to dictation, decomposition of numbers, estimation of the number of dots, reading and writing of arithmetic signs, comprehension of arithmetic signs, appreciation of number values, appreciation of the number of dots, counting aloud, aligning numbers, comprehension of the commutative and distributive laws, retrieval of table values (ku-ku), immediate memory for arithmetic problems, or use of an electronic calculator. He showed, however, remarkable difficulty even in addition and subtraction of single-digit numbers, and resorted to counting on his fingers or to intuitive strategies even for problems he could solve. He could not carry out multiplication or division if the problem required anything other than the table values (ku-ku). Thus, he seemed to have difficulties with both elementary arithmetic facts and calculating procedures. In addition, his backward digit span and reading of analogue clocks were impaired, and he showed the logico-grammatical disorder of Luria. Our case supports the notion that there is a neural system shared in part between the processing of abstract spatial relationships and calculation.

  16. 33 CFR 110.127b - Flaming Gorge Lake, Wyoming-Utah.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... launching ramp to a point beyond the floating breakwater and then westerly, as established by the... following points, excluding a 150-foot-wide fairway, extending southeasterly from the launching ramp, as... inclosed by the shore and a line connecting the following points, excluding a 100-foot-wide fairway...

  17. Wind tunnel research comparing lateral control devices, particularly at high angles of attack XI : various floating tip ailerons on both rectangular and tapered wings

    NASA Technical Reports Server (NTRS)

    Weick, Fred E; Harris, Thomas A

    1933-01-01

    Discussed here is a series of systematic tests conducted to compare different lateral control devices, with particular reference to their effectiveness at high angles of attack. The present tests were made with six different forms of floating tip ailerons of symmetrical section. The tests showed the effect of the various ailerons on the general performance characteristics of the wing and on the lateral controllability and stability characteristics. In addition, the hinge moments were measured for the most interesting cases. The results are compared with those for a rectangular wing with ordinary ailerons and also with those for a rectangular wing having full-chord floating tip ailerons. Practically all the floating tip ailerons gave satisfactory rolling moments at all angles of attack and at the same time gave no adverse yawing moments of appreciable magnitude. The general performance characteristics with the floating tip ailerons, however, were relatively poor, especially the rate of climb. None of the floating tip ailerons entirely eliminated the autorotational moments at angles of attack above the stall, but all of them gave lower moments than a plain wing. Some of the floating ailerons fluttered if given sufficiently large deflection, but this could have been eliminated by moving the hinge axis of the ailerons forward. Considering all points, including hinge moments, the floating tip ailerons on the wing with 5:1 taper are probably the best of those tested.

  18. Development of a Novel Floating In-situ Gelling System for Stomach Specific Drug Delivery of the Narrow Absorption Window Drug Baclofen.

    PubMed

    R Jivani, Rishad; N Patel, Chhagan; M Patel, Dashrath; P Jivani, Nurudin

    2010-01-01

    The present study deals with the development of a floating in-situ gel of the narrow absorption window drug baclofen. Sodium alginate-based in-situ gelling systems were prepared by dissolving various concentrations of sodium alginate in deionized water, to which varying concentrations of drug and calcium bicarbonate were added. Fourier transform infrared spectroscopy (FTIR) and differential scanning calorimetry (DSC) were used to check for any interaction between the drug and the excipients. A 3² full factorial design was used for optimization. The concentrations of sodium alginate (X1) and calcium bicarbonate (X2) were selected as the independent variables. The amount of drug released after 1 h (Q1) and 10 h (Q10) and the viscosity of the solution were selected as the dependent variables. The gels were studied for their viscosity, in-vitro buoyancy and drug release. Contour plots were drawn for each dependent variable, and check-point batches were prepared in order to obtain desirable release profiles. The drug release profiles were fitted to different kinetic models. The floating lag time and floating time were found to be 2 min and 12 h, respectively. A decreasing trend in drug release was observed with increasing concentrations of CaCO3. The computed values of Q1 and Q10 for the check-point batch were 25% and 86% respectively, compared to the experimental values of 27.1% and 88.34%. The similarity factor (f2) for the check-point batch, at 80.25, showed that the two dissolution profiles were similar. The drug release from the in-situ gel follows the Higuchi model, which indicates diffusion-controlled release. A stomach-specific in-situ gel of baclofen could thus be prepared using a floating mechanism to increase the residence time of the drug in the stomach and thereby increase absorption.
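    The similarity factor quoted above follows the standard f2 formula for comparing two dissolution profiles. A minimal sketch (the profile values below are made up for illustration, not the batch data from the study):

```python
import math

def similarity_f2(ref, test):
    """Similarity factor f2 between two dissolution profiles (% released
    at matching time points); f2 >= 50 is conventionally taken as similar,
    and identical profiles give exactly 100."""
    n = len(ref)
    msd = sum((r - t) ** 2 for r, t in zip(ref, test)) / n  # mean squared difference
    return 50.0 * math.log10(100.0 / math.sqrt(1.0 + msd))

# hypothetical profiles (% drug released at 1, 2, 4, 8, 10 h) -- illustrative only
predicted = [25.0, 41.0, 60.0, 78.0, 86.0]
observed  = [27.1, 43.0, 62.0, 80.0, 88.3]
f2_value = similarity_f2(predicted, observed)
```

    With differences of a couple of percent at each time point, f2 lands well above the 50 threshold, matching the kind of agreement the study reports.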

  19. 30 CFR 250.428 - What must I do in certain cementing and casing situations?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... point. (h) Need to use less than required cement for the surface casing during floating drilling... permafrost zone uncemented Fill the annulus with a liquid that has a freezing point below the minimum...

  20. Decidable and undecidable arithmetic functions in actin filament networks

    NASA Astrophysics Data System (ADS)

    Schumann, Andrew

    2018-01-01

    The plasmodium of Physarum polycephalum is very sensitive to its environment, and reacts to stimuli with appropriate motions. Both the sensory and motor stages of these reactions are explained by hydrodynamic processes, based on fluid dynamics, with the participation of actin filament networks. This paper is devoted to actin filament networks as a computational medium. The point is that actin filaments, with contributions from many other proteins like myosin, are sensitive to extracellular stimuli (attractants as well as repellents), and appear and disappear at different places in the cell to change aspects of the cell structure—e.g. its shape. By assembling and disassembling actin filaments, some unicellular organisms, like Amoeba proteus, can move in response to various stimuli. As a result, these organisms can be considered a simple reversible logic gate—extracellular signals being its inputs and motions its outputs. In this way, we can implement various logic gates on amoeboid behaviours. These networks can embody arithmetic functions within p-adic valued logic. Furthermore, within these networks we can define the so-called diagonalization for deducing undecidable arithmetic functions.

  1. Characterization of airborne float coal dust emitted during continuous mining, longwall mining and belt transport

    PubMed Central

    Shahan, M.R.; Seaman, C.E.; Beck, T.W.; Colinet, J.F.; Mischler, S.E.

    2017-01-01

    Float coal dust is produced by various mining methods, carried by ventilating air and deposited on the floor, roof and ribs of mine airways. Once deposited, float dust can be re-entrained during a methane explosion. Without sufficient inert rock dust quantities, this float coal dust can propagate an explosion throughout mining entries. Consequently, controlling float coal dust is of critical interest to mining operations. Rock dusting, which is the adding of inert material to airway surfaces, is the main control technique currently used by the coal mining industry to reduce the float coal dust explosion hazard. To assist the industry in reducing this hazard, the Pittsburgh Mining Research Division of the U.S. National Institute for Occupational Safety and Health initiated a project to investigate methods and technologies to reduce float coal dust in underground coal mines through prevention, capture and suppression prior to deposition. Field characterization studies were performed to determine quantitatively the sources, types and amounts of dust produced during various coal mining processes. The operations chosen for study were a continuous miner section, a longwall section and a coal-handling facility. For each of these operations, the primary dust sources were confirmed to be the continuous mining machine, longwall shearer and conveyor belt transfer points, respectively. Respirable and total airborne float dust samples were collected and analyzed for each operation, and the ratio of total airborne float coal dust to respirable dust was calculated. During the continuous mining process, the ratio of total airborne float coal dust to respirable dust ranged from 10.3 to 13.8. The ratios measured on the longwall face were between 18.5 and 21.5. The total airborne float coal dust to respirable dust ratio observed during belt transport ranged between 7.5 and 21.8. PMID:28936001

  2. Predicting Arithmetic Abilities: The Role of Preparatory Arithmetic Markers and Intelligence

    ERIC Educational Resources Information Center

    Stock, Pieter; Desoete, Annemie; Roeyers, Herbert

    2009-01-01

    Arithmetic abilities acquired in kindergarten are found to be strong predictors for later deficient arithmetic abilities. This longitudinal study (N = 684) was designed to examine if it was possible to predict the level of children's arithmetic abilities in first and second grade from their performance on preparatory arithmetic abilities in…

  3. Using Tensor Completion Method to Achieving Better Coverage of Traffic State Estimation from Sparse Floating Car Data

    PubMed Central

    Ran, Bin; Song, Li; Cheng, Yang; Tan, Huachun

    2016-01-01

    Traffic state estimation from the floating car system is a challenging problem. The low penetration rate and random distribution mean that the available floating car samples usually cover only part of the space and time points of the road network. To obtain a wide range of traffic state from the floating car system, many methods have been proposed to estimate the traffic state for the uncovered links. However, these methods cannot provide the traffic state of the entire road network. In this paper, traffic state estimation is transformed into a missing data imputation problem, and a tensor completion framework is proposed to estimate the missing traffic state. A tensor is constructed to model the traffic state, in which observed entries are derived directly from the floating car system and unobserved traffic states are modeled as missing entries of the constructed tensor. The constructed traffic state tensor can represent spatial and temporal correlations of traffic data and encode the multi-way properties of the traffic state. The advantage of the proposed approach is that it can fully mine and utilize the multi-dimensional inherent correlations of the traffic state. We tested the proposed approach on a well-calibrated simulation network. Experimental results demonstrated that the proposed approach yields reliable traffic state estimation from very sparse floating car data, particularly when the floating car penetration rate is below 1%. PMID:27448326
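    The paper's tensor completion machinery is involved, but the underlying idea can be sketched in two dimensions as iterative low-rank imputation of a partially observed matrix. This is a simplified relative of tensor completion, not the paper's algorithm, and the "traffic state" below is synthetic:

```python
import numpy as np

def lowrank_impute(data, mask, rank, iters=500):
    """Fill the unobserved entries (mask == False) of `data` by repeatedly
    projecting onto the set of rank-`rank` matrices and restoring the
    observed entries -- a 2-D simplification of tensor completion."""
    X = np.where(mask, data, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # best rank-r approximation
        X[mask] = data[mask]                        # keep the observed probe entries
    return X

rng = np.random.default_rng(0)
truth = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 20))  # rank-2 ground truth
mask = rng.random((20, 20)) < 0.6                                    # 60% of entries observed
est = lowrank_impute(truth, mask, rank=2)
rel_err = np.linalg.norm((est - truth)[~mask]) / np.linalg.norm(truth[~mask])
```

    Because the ground truth is exactly low rank and well sampled, the missing entries are recovered accurately; the paper's contribution is making this work on real, noisy, very sparsely sampled traffic tensors.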

  4. Using Tensor Completion Method to Achieving Better Coverage of Traffic State Estimation from Sparse Floating Car Data.

    PubMed

    Ran, Bin; Song, Li; Zhang, Jian; Cheng, Yang; Tan, Huachun

    2016-01-01

    Traffic state estimation from the floating car system is a challenging problem. The low penetration rate and random distribution mean that the available floating car samples usually cover only part of the space and time points of the road network. To obtain a wide range of traffic state from the floating car system, many methods have been proposed to estimate the traffic state for the uncovered links. However, these methods cannot provide the traffic state of the entire road network. In this paper, traffic state estimation is transformed into a missing data imputation problem, and a tensor completion framework is proposed to estimate the missing traffic state. A tensor is constructed to model the traffic state, in which observed entries are derived directly from the floating car system and unobserved traffic states are modeled as missing entries of the constructed tensor. The constructed traffic state tensor can represent spatial and temporal correlations of traffic data and encode the multi-way properties of the traffic state. The advantage of the proposed approach is that it can fully mine and utilize the multi-dimensional inherent correlations of the traffic state. We tested the proposed approach on a well-calibrated simulation network. Experimental results demonstrated that the proposed approach yields reliable traffic state estimation from very sparse floating car data, particularly when the floating car penetration rate is below 1%.

  5. What is the size of a floating sheath? An answer

    NASA Astrophysics Data System (ADS)

    Voigt, Farina; Naggary, Schabnam; Brinkmann, Ralf Peter

    2016-09-01

    The formation of a non-neutral boundary sheath in front of material surfaces is a universal plasma phenomenon. Despite several decades of research, however, not all related issues are fully clarified. In a recent paper, Chabert pointed out that this lack of clarity applies even to the seemingly innocuous question ``What is the size of a floating sheath?'' This contribution attempts to provide an answer that is not arbitrary: the size of a floating sheath is defined as the plate separation of an equivalent parallel-plate capacitor. The consequences of the definition are explored with the help of a self-consistent sheath model, and a comparison is made with other sheath size definitions. Supported by the Deutsche Forschungsgemeinschaft within SFB TR 87.

  6. Random Matrix Theory and Elliptic Curves

    DTIC Science & Technology

    2014-11-24

    distribution is unlimited. 1 ELLIPTIC CURVES AND THEIR L-FUNCTIONS 2 points on that curve. Counting rational points on curves is a field with a rich ...deficiency of zeros near the origin of the histograms in Figure 1. While as d becomes large this discretization becomes smaller and has less and less effect...order of 30), the regular oscillations seen at the origin become dominated by fluctuations of an arithmetic origin, influenced by zeros of the Riemann

  7. FloPSy - Search-Based Floating Point Constraint Solving for Symbolic Execution

    NASA Astrophysics Data System (ADS)

    Lakhotia, Kiran; Tillmann, Nikolai; Harman, Mark; de Halleux, Jonathan

    Recently there has been an upsurge of interest in both Search-Based Software Testing (SBST) and Dynamic Symbolic Execution (DSE). Each of these two approaches has complementary strengths and weaknesses, making it natural to explore the degree to which the strengths of one can be exploited to offset the weaknesses of the other. This paper introduces an augmented version of DSE that uses an SBST-based approach to handle floating point computations, which are known to be problematic for vanilla DSE. The approach has been implemented as a plug-in for the Microsoft Pex DSE testing tool. The paper presents results from both standard evaluation benchmarks and two open source programs.
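    The core move in search-based handling of floating point constraints is to replace solving with fitness-guided search: a path condition becomes a branch-distance function to minimize. A minimal sketch in the spirit of the alternating variable method (the path condition `x * x == 2.0` is made up; this is not FloPSy's implementation):

```python
def avm_minimize(fitness, x0=0.0, tol=1e-12):
    """Pattern search in the spirit of the alternating variable method:
    take exploratory steps, accelerate while they keep improving the
    fitness, and halve the step size when neither direction helps."""
    x, best = x0, fitness(x0)
    delta = 1.0
    while delta > tol:
        improved = False
        for sign in (1.0, -1.0):
            step = delta
            while fitness(x + sign * step) < best:
                x += sign * step
                best = fitness(x)
                step *= 2.0          # accelerate in a promising direction
                improved = True
        if not improved:
            delta /= 2.0             # refine the search granularity
    return x

# branch distance for the hypothetical path condition "x * x == 2.0"
branch_distance = lambda x: abs(x * x - 2.0)
solution = avm_minimize(branch_distance)
```

    The search needs no symbolic reasoning about rounding, which is precisely why it complements DSE on floating point code.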

  8. From 16-bit to high-accuracy IDCT approximation: fruits of single architecture affiliation

    NASA Astrophysics Data System (ADS)

    Liu, Lijie; Tran, Trac D.; Topiwala, Pankaj

    2007-09-01

    In this paper, we demonstrate an effective unified framework for high-accuracy approximation of the irrational-coefficient floating-point IDCT by a single integer-coefficient fixed-point architecture. Our framework is based on a modified version of Loeffler's sparse DCT factorization, and the IDCT architecture is constructed via a cascade of dyadic lifting steps and butterflies. We illustrate that simply varying the accuracy of the approximating parameters yields a large family of standard-compliant IDCTs, from rare 16-bit approximations catering to portable computing to ultra-high-accuracy 32-bit versions that virtually eliminate any drifting effect when paired with the 64-bit floating-point IDCT at the encoder. Drifting performance of the proposed IDCTs, along with that of existing popular IDCT algorithms, in H.263+, MPEG-2 and MPEG-4 is also demonstrated.
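    The dyadic lifting steps mentioned above have a property worth spelling out: even with the rounding implied by integer shifts, a cascade of lifting steps is exactly invertible, because each step can be undone by subtracting the same rounded quantity. A toy butterfly with made-up dyadic multipliers (3/8 and 5/8, not the paper's actual coefficients):

```python
def lift_fwd(x, y):
    """Forward transform as three integer lifting steps."""
    x = x + ((3 * y) >> 3)   # x += floor((3/8) * y)
    y = y - ((5 * x) >> 3)
    x = x + ((3 * y) >> 3)
    return x, y

def lift_inv(x, y):
    """Inverse: undo the steps in reverse order with opposite signs.
    Each step reads only the other (unmodified) variable, so the
    rounded terms cancel exactly -- no error, whatever the inputs."""
    x = x - ((3 * y) >> 3)
    y = y + ((5 * x) >> 3)
    x = x - ((3 * y) >> 3)
    return x, y
```

    Exact invertibility under integer rounding is one reason lifting cascades are attractive for building fixed-point transforms that track their floating-point counterparts closely.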

  9. Decipipes: Helping Students to "Get the Point"

    ERIC Educational Resources Information Center

    Moody, Bruce

    2011-01-01

    Decipipes are a representational model that can be used to help students develop conceptual understanding of decimal place value. They provide a non-standard tool for representing length, which in turn can be represented using conventional decimal notation. They are conceptually identical to Linear Arithmetic Blocks. This article reviews theory…

  10. Quantum Theory from Observer's Mathematics Point of View

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khots, Dmitriy; Khots, Boris

    2010-05-04

    This work considers the linear (time-dependent) Schrödinger equation, the quantum theory of two-slit interference, wave-particle duality for single photons, and the uncertainty principle in a setting of arithmetic, algebra, and topology provided by Observer's Mathematics, see [1]. Certain theoretical results and communications pertaining to these theorems are also provided.

  11. Comparison of eigensolvers for symmetric band matrices.

    PubMed

    Moldaschl, Michael; Gansterer, Wilfried N

    2014-09-15

    We compare different algorithms for computing eigenvalues and eigenvectors of a symmetric band matrix across a wide range of synthetic test problems. Of particular interest is a comparison of state-of-the-art tridiagonalization-based methods as implemented in Lapack or Plasma on the one hand, and the block divide-and-conquer (BD&C) algorithm as well as the block twisted factorization (BTF) method on the other hand. The BD&C algorithm does not require tridiagonalization of the original band matrix at all, and the current version of the BTF method tridiagonalizes the original band matrix only for computing the eigenvalues. Avoiding the tridiagonalization process sidesteps the cost of backtransformation of the eigenvectors. Beyond that, we discovered another disadvantage of the backtransformation process for band matrices: In several scenarios, a lot of gradual underflow is observed in the (optional) accumulation of the transformation matrix and in the (obligatory) backtransformation step. According to the IEEE 754 standard for floating-point arithmetic, this implies many operations with subnormal (denormalized) numbers, which causes severe slowdowns compared to the other algorithms without backtransformation of the eigenvectors. We illustrate that in these cases the performance of existing methods from Lapack and Plasma reaches a competitive level only if subnormal numbers are disabled (and thus the IEEE standard is violated). Overall, our performance studies illustrate that if the problem size is large enough relative to the bandwidth, BD&C tends to achieve the highest performance of all methods if the spectrum to be computed is clustered. For test problems with well separated eigenvalues, the BTF method tends to become the fastest algorithm with growing problem size.
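    The gradual underflow described above is easy to observe directly. A small sketch using Python's IEEE 754 doubles:

```python
import sys

tiny = sys.float_info.min        # smallest positive *normal* double, ~2.23e-308
sub = tiny / 2.0                 # a subnormal: nonzero, but below the normal range

# Gradual underflow: subnormals fill the gap between 0 and `tiny`,
# so halving a normal number does not snap straight to zero ...
assert 0.0 < sub < tiny
assert sub * 2.0 == tiny

# ... until the very smallest subnormal (2**-1074) is reached.
smallest = 5e-324
assert smallest / 2.0 == 0.0
```

    Operations on subnormal operands are exactly the ones the abstract reports as severely slow on hardware that handles them in microcode or via traps, which is why disabling them (flush-to-zero) recovers speed at the price of IEEE 754 conformance.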

  12. Robust and Efficient Spin Purification for Determinantal Configuration Interaction.

    PubMed

    Fales, B Scott; Hohenstein, Edward G; Levine, Benjamin G

    2017-09-12

    The limited precision of floating point arithmetic can lead to the qualitative and even catastrophic failure of quantum chemical algorithms, especially when high accuracy solutions are sought. For example, numerical errors accumulated while solving for determinantal configuration interaction wave functions via Davidson diagonalization may lead to spin contamination in the trial subspace. This spin contamination may cause the procedure to converge to roots with undesired ⟨Ŝ²⟩, wasting computer time in the best case and leading to incorrect conclusions in the worst. In hopes of finding a suitable remedy, we investigate five purification schemes for ensuring that the eigenvectors have the desired ⟨Ŝ²⟩. These schemes are based on projection, penalty, and iterative approaches. All of these schemes rely on a direct, graphics processing unit-accelerated algorithm for calculating the Ŝ²c matrix-vector product. We assess the computational cost and convergence behavior of these methods by application to several benchmark systems and find that the first-order spin penalty method is the optimal choice, though first-order and Löwdin projection approaches also provide fast convergence to the desired spin state. Finally, to demonstrate the utility of these approaches, we computed the lowest several excited states of an open-shell silver cluster (Ag₁₉) using the state-averaged complete active space self-consistent field method, where spin purification was required to ensure spin stability of the CI vector coefficients. Several low-lying states with significant multiply excited character are predicted, suggesting the value of a multireference approach for modeling plasmonic nanomaterials.
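    The penalty idea can be made concrete on the smallest nontrivial case, two coupled spin-1/2 particles, where a penalty on Ŝ² pushes states of the wrong total spin out of the low end of the spectrum. This toy uses a quadratic penalty and dense diagonalization purely for illustration; it is not the paper's first-order scheme or its Davidson-based implementation:

```python
import numpy as np

# Spin-1/2 operators (hbar = 1)
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
I2 = np.eye(2, dtype=complex)

# Total-spin operators for two coupled spin-1/2 particles
S_tot = [np.kron(s, I2) + np.kron(I2, s) for s in (sx, sy, sz)]
S2 = sum(S @ S for S in S_tot)     # eigenvalues S(S+1): 0 (singlet), 2 (triplet)

# Ferromagnetic Heisenberg Hamiltonian H = -S1.S2: its ground state is a triplet
H = -0.5 * (S2 - 1.5 * np.eye(4))

# Quadratic spin penalty targeting S(S+1) = 0, i.e. the singlet
mu, target = 10.0, 0.0
shift = S2 - target * np.eye(4)
Hp = H + mu * shift @ shift

evals, evecs = np.linalg.eigh(Hp)
v = evecs[:, 0]                     # lowest root of the penalized operator
spin_sq = (v.conj() @ S2 @ v).real  # expectation value <S^2> of that root
```

    The lowest root of the penalized operator is now the singlet (⟨Ŝ²⟩ = 0, energy 0.75), even though the unpenalized H has triplet ground states at energy -0.25.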

  13. Optimal trajectory planning of free-floating space manipulator using differential evolution algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Mingming; Luo, Jianjun; Fang, Jing; Yuan, Jianping

    2018-03-01

    The existence of path-dependent dynamic singularities limits the volume of the available workspace of a free-floating space robot and induces enormous joint velocities when such singularities are met. In order to overcome this demerit, this paper presents an optimal joint trajectory planning method using the forward kinematics equations of the free-floating space robot, while joint motion laws are delineated by applying the concept of the reaction null-space. Bézier curves, in conjunction with the null-space column vectors, are applied to describe the joint trajectories. Considering the forward kinematics equations of the free-floating space robot, the trajectory planning issue is consequently transformed into an optimization issue in which the control points constructing the Bézier curve are the design variables. A constrained differential evolution (DE) scheme with a premature-handling strategy is implemented to find the optimal solution of the design variables while specific objectives and imposed constraints are satisfied. Differing from traditional methods, we synthesize the null space and a specialized curve to provide a novel viewpoint for trajectory planning of free-floating space robots. Simulation results are presented for trajectory planning of a 7-degree-of-freedom (DOF) kinematically redundant manipulator mounted on a free-floating spacecraft and demonstrate the feasibility and effectiveness of the proposed method.
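    Differential evolution itself is compact enough to sketch. The version below is plain DE/rand/1/bin minimizing a stand-in objective; in the paper the design variables would instead be Bézier control points and the objective a constrained trajectory cost, and the premature-handling strategy is not reproduced here:

```python
import random

def de_minimize(f, bounds, pop_size=20, gens=200, F=0.5, CR=0.9, seed=1):
    """Plain DE/rand/1/bin with bound clipping and greedy selection."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)          # force at least one mutated gene
            trial = [
                pop[a][d] + F * (pop[b][d] - pop[c][d])
                if (rng.random() < CR or d == jrand) else pop[i][d]
                for d in range(dim)
            ]
            trial = [min(max(t, lo), hi) for t, (lo, hi) in zip(trial, bounds)]
            f_trial = f(trial)
            if f_trial <= fit[i]:               # greedy replacement
                pop[i], fit[i] = trial, f_trial
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]

# stand-in objective: the sphere function (the paper's objective is a trajectory cost)
x_best, f_best = de_minimize(lambda x: sum(v * v for v in x), [(-5.0, 5.0)] * 3)
```

    DE needs only objective evaluations, no gradients, which is why it suits the nonsmooth, constraint-laden costs that arise near dynamic singularities.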

  14. rpe v5: an emulator for reduced floating-point precision in large numerical simulations

    NASA Astrophysics Data System (ADS)

    Dawson, Andrew; Düben, Peter D.

    2017-06-01

    This paper describes the rpe (reduced-precision emulator) library which has the capability to emulate the use of arbitrary reduced floating-point precision within large numerical models written in Fortran. The rpe software allows model developers to test how reduced floating-point precision affects the result of their simulations without having to make extensive code changes or port the model onto specialized hardware. The software can be used to identify parts of a program that are problematic for numerical precision and to guide changes to the program to allow a stronger reduction in precision. The development of rpe was motivated by the strong demand for more computing power. If numerical precision can be reduced for an application under consideration while still achieving results of acceptable quality, computational cost can be reduced, since a reduction in numerical precision may allow an increase in performance or a reduction in power consumption. For simulations with weather and climate models, savings due to a reduction in precision could be reinvested to allow model simulations at higher spatial resolution or complexity, or to increase the number of ensemble members to improve predictions. rpe was developed with a particular focus on the community of weather and climate modelling, but the software could be used with numerical simulations from other domains.
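    The effect rpe emulates can be approximated in a few lines by masking off trailing significand bits of an IEEE 754 double. This crude stand-in truncates toward zero, whereas rpe itself emulates round-to-nearest and exponent limits, so it only illustrates the idea:

```python
import struct

def truncate_significand(x, sbits):
    """Keep only the top `sbits` of a double's 52 explicit significand bits
    (truncation toward zero; sign and exponent are left untouched)."""
    (bits,) = struct.unpack('<Q', struct.pack('<d', x))
    mask = 0xFFFFFFFFFFFFFFFF ^ ((1 << (52 - sbits)) - 1)
    (reduced,) = struct.unpack('<d', struct.pack('<Q', bits & mask))
    return reduced

third_10bit = truncate_significand(1.0 / 3.0, 10)   # 1/3 kept to ~10 significand bits
```

    Values exactly representable in the reduced format pass through unchanged (e.g. 1.5 at 10 bits), while others lose their trailing bits, which is the kind of controlled perturbation one injects to probe a model's precision sensitivity.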

  15. Recent advances in lossy compression of scientific floating-point data

    NASA Astrophysics Data System (ADS)

    Lindstrom, P.

    2017-12-01

    With a continuing exponential trend in supercomputer performance, ever larger data sets are being generated through numerical simulation. Bandwidth and storage capacity are, however, not keeping pace with this increase in data size, causing significant data movement bottlenecks in simulation codes and substantial monetary costs associated with archiving vast volumes of data. Worse yet, ever smaller fractions of data generated can be stored for further analysis, where scientists frequently rely on decimating or averaging large data sets in time and/or space. One way to mitigate these problems is to employ data compression to reduce data volumes. However, lossless compression of floating-point data can achieve only very modest size reductions on the order of 10-50%. We present ZFP and FPZIP, two state-of-the-art lossy compressors for structured floating-point data that routinely achieve one to two orders of magnitude reduction with little to no impact on the accuracy of visualization and quantitative data analysis. We provide examples of the use of such lossy compressors in climate and seismic modeling applications to effectively accelerate I/O and reduce storage requirements. We further discuss how the design decisions behind these and other compressors impact error distributions and other statistical and differential properties, including derived quantities of interest relevant to each science application.
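    The accuracy-for-size trade at the heart of such compressors can be illustrated with the simplest possible lossy scheme: a uniform scalar quantizer with a user-chosen absolute error tolerance. This is a toy, nothing like the transform-based ZFP or predictive FPZIP algorithms, but it shows how a tolerance bounds the reconstruction error:

```python
import math

def quantize(values, tol):
    """Map each float to an integer code; reconstruction error <= tol/2."""
    return [round(v / tol) for v in values]

def dequantize(codes, tol):
    return [c * tol for c in codes]

signal = [math.sin(0.1 * i) for i in range(200)]
codes = quantize(signal, tol=1e-3)       # small ints compress far better than doubles
recovered = dequantize(codes, tol=1e-3)
worst = max(abs(a - b) for a, b in zip(signal, recovered))
```

    The integer codes are small and highly compressible with any entropy coder, while the worst-case error stays below half the tolerance; production compressors achieve far better ratios by decorrelating the data (e.g. block transforms or prediction) before quantizing.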

  16. [The effect of floating-needle therapy combined with rehabilitation training for the hand function recovery of post-stroke patients].

    PubMed

    Yang, Jiangxia; Xiao, Hong

    2015-08-01

    To explore the improvement in hand motor function, spasm and self-care ability in daily life for stroke patients treated with floating-needle therapy combined with rehabilitation training. Eighty post-stroke patients with hand spasm, within one year after stroke, were randomly divided into an observation group and a control group, 40 cases in each. In both groups, rehabilitation training was given for eight weeks, once a day, 40 min each time. In the observation group, in addition to the above treatment, 2 to 3 points on the inner and outer sides of the forearm, located according to myofascial trigger points, were treated with floating-needle therapy, combined with active or passive flexion and extension of the wrist and knuckles until relief of the hand spasm. The floating-needle therapy was given for eight weeks, once a day for the first three days and once every other day thereafter. The Modified Ashworth Scale (MAS), activity of daily life (ADL, Barthel index) scores and Fugl-Meyer assessment (FMA) scores were used to assess the degree of hand spasm, activity of daily life and hand motor function before and after 7 days, 14 days and 8 weeks of treatment. After 7 days, 14 days and 8 weeks of treatment, MAS scores were apparently lower than those before treatment in the two groups (all P<0.05), and Barthel scores and FMA scores were obviously higher than those before treatment (all P<0.05). After 14 days and 8 weeks of treatment, FMA scores in the observation group were markedly higher than those in the control group (both P<0.05). Both floating-needle therapy combined with rehabilitation training and rehabilitation training alone can improve the degree of hand spasm, hand function and activity of daily life of post-stroke patients, but the combined treatment is superior to rehabilitation training alone for the improvement of hand function.

  17. Floating shoulders: Clinical and radiographic analysis at a mean follow-up of 11 years

    PubMed Central

    Pailhes, Régis; Bonnevialle, Nicolas; Laffosse, Jean-Michel; Tricoire, Jean-Louis; Cavaignac, Etienne; Chiron, Philippe

    2013-01-01

    Context: The floating shoulder (FS) is an uncommon injury, which can be managed conservatively or surgically. The therapeutic option remains controversial. Aims: The goal of our study was to evaluate the long-term results and to identify predictive factors of functional outcomes. Settings and Design: Retrospective monocentric study. Materials and Methods: Forty consecutive FS were included (24 nonoperated and 16 operated) from 1984 to 2009. Clinical results were assessed with Simple Shoulder Test (SST), Oxford Shoulder Score (OSS), Single Assessment Numeric Evaluation (SANE), Short Form-12 (SF12), Disabilities of the Arm Shoulder and Hand score (DASH), and Constant score (CST). Plain radiographs were reviewed to evaluate secondary displacement, fracture healing, and modification of the lateral offset of the gleno-humeral joint (chest X-rays). New radiographs were made to evaluate osteoarthritis during follow-up. Statistical Analysis Used: T-test, Mann-Whitney test, and the Pearson's correlation coefficient were used. The significance level was set at 0.05. Results: At mean follow-up of 135 months (range 12-312), clinical results were satisfactory regarding different mean scores: SST 10.5 points, OSS 14 points, SANE 81%, SF12 (50 points and 60 points), DASH 14.5 points and CST 84 points. There were no significant differences between operative and non-operative groups. However, the loss of lateral offset influenced the results negatively. Osteoarthritis was diagnosed in five patients (12.5%) without correlation to fracture patterns and type of treatment. Conclusions: This study advocates that floating shoulder may be treated conservatively and surgically with satisfactory clinical long-term outcomes. However, the loss of gleno-humeral lateral offset should be evaluated carefully before taking a therapeutic option. PMID:23960364

  18. 40 CFR 60.691 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... collection point for stormwater runoff received directly from refinery surfaces and for refinery wastewater... chamber in a stationary manner and which does not move with fluctuations in wastewater levels. Floating... separator. Junction box means a manhole or access point to a wastewater sewer system line. No detectable...

  19. Quality of Arithmetic Education for Children with Cerebral Palsy

    ERIC Educational Resources Information Center

    Jenks, Kathleen M.; de Moor, Jan; van Lieshout, Ernest C. D. M.; Withagen, Floortje

    2010-01-01

    The aim of this exploratory study was to investigate the quality of arithmetic education for children with cerebral palsy. The use of individual educational plans, amount of arithmetic instruction time, arithmetic instructional grouping, and type of arithmetic teaching method were explored in three groups: children with cerebral palsy (CP) in…

  20. The unique and shared contributions of arithmetic operation understanding and numerical magnitude representation to children's mathematics achievement.

    PubMed

    Wong, Terry Tin-Yau

    2017-12-01

    The current study examined the unique and shared contributions of arithmetic operation understanding and numerical magnitude representation to children's mathematics achievement. A sample of 124 fourth graders was tested on their arithmetic operation understanding (as reflected by their understanding of arithmetic principles and the knowledge about the application of arithmetic operations) and their precision of rational number magnitude representation. They were also tested on their mathematics achievement and arithmetic computation performance as well as the potential confounding factors. The findings suggested that both arithmetic operation understanding and numerical magnitude representation uniquely predicted children's mathematics achievement. The findings highlight the significance of arithmetic operation understanding in mathematics learning. Copyright © 2017 Elsevier Inc. All rights reserved.

  1. Cognitive mechanisms underlying third graders' arithmetic skills: Expanding the pathways to mathematics model.

    PubMed

    Träff, Ulf; Olsson, Linda; Skagerlund, Kenny; Östergren, Rickard

    2018-03-01

    A modified pathways to mathematics model was used to examine the cognitive mechanisms underlying arithmetic skills in third graders. A total of 269 children were assessed on tasks tapping the four pathways and arithmetic skills. A path analysis showed that symbolic number processing was directly supported by the linguistic and approximate quantitative pathways. The direct contribution from the four pathways to arithmetic proficiency varied; the linguistic pathway supported single-digit arithmetic and word problem solving, whereas the approximate quantitative pathway supported only multi-digit calculation. The spatial processing and verbal working memory pathways supported only arithmetic word problem solving. The notion of hierarchical levels of arithmetic was supported by the results, and the different levels were supported by different constellations of pathways. However, the strongest support for the hierarchical levels of arithmetic was provided by the proximal arithmetic skills. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. Combined GPS/GLONASS Precise Point Positioning with Fixed GPS Ambiguities

    PubMed Central

    Pan, Lin; Cai, Changsheng; Santerre, Rock; Zhu, Jianjun

    2014-01-01

    Precise point positioning (PPP) technology is mostly implemented with an ambiguity-float solution. Its performance may be further improved by performing ambiguity-fixed resolution. Currently, PPP integer ambiguity resolutions (IARs) are mainly based on GPS-only measurements. The integration of GPS and GLONASS can speed up the convergence and increase the accuracy of float ambiguity estimates, which contributes to enhancing the success rate and reliability of fixing ambiguities. This paper presents an approach for combined GPS/GLONASS PPP with fixed GPS ambiguities (GGPPP-FGA), in which GPS ambiguities are fixed into integers while all GLONASS ambiguities are kept as float values. An improved minimum constellation method (MCM) is proposed to enhance the efficiency of GPS ambiguity fixing. Datasets from 20 globally distributed stations on two consecutive days are employed to investigate the performance of the GGPPP-FGA, including the positioning accuracy, convergence time and the time to first fix (TTFF). All datasets are processed for a time span of three hours in three scenarios, i.e., the GPS ambiguity-float solution, the GPS ambiguity-fixed resolution and the GGPPP-FGA resolution. The results indicate that the performance of the GPS ambiguity-fixed resolutions is significantly better than that of the GPS ambiguity-float solutions. In addition, the GGPPP-FGA improves the positioning accuracy by 38%, 25% and 44% and reduces the convergence time by 36%, 36% and 29% in the east, north and up coordinate components over the GPS-only ambiguity-fixed resolutions, respectively. Moreover, the TTFF is reduced by 27% after adding GLONASS observations. Wilcoxon rank sum tests and chi-square two-sample tests are performed to examine the significance of the improvements in positioning accuracy, convergence time and TTFF. PMID:25237901
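
The float-to-fixed step described above can be illustrated with a toy resolution rule: round each float ambiguity to the nearest integer and accept the fix only when the fractional residual and the formal uncertainty are small. This is a hedged sketch only; production PPP ambiguity resolution uses decorrelation-based searches (e.g., the LAMBDA method), and the function name and thresholds here are assumptions, not the paper's method.

```python
def try_fix_ambiguities(float_ambs, sigmas, max_frac=0.15):
    """Toy integer ambiguity fixing: round each float ambiguity to the
    nearest integer and accept it only if the fractional residual is small
    and the formal uncertainty is low. Illustrative sketch only."""
    fixed = []
    for a, s in zip(float_ambs, sigmas):
        n = round(a)
        frac = abs(a - n)
        # accept the integer candidate only for confident estimates
        if frac <= max_frac and s < 0.25:
            fixed.append(n)
        else:
            fixed.append(None)  # keep as float (unresolved)
    return fixed
```

With float ambiguities `[5.04, -3.47, 12.98]` and uncertainties `[0.1, 0.3, 0.05]`, only the first and third would be fixed; the second stays float, mirroring the partial-fixing idea of keeping GLONASS ambiguities float while fixing GPS ones.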

  3. Parametric study of two-body floating-point wave absorber

    NASA Astrophysics Data System (ADS)

    Amiri, Atena; Panahi, Roozbeh; Radfar, Soheil

    2016-03-01

    In this paper, we present a comprehensive numerical simulation of a point wave absorber in deep water. Analyses are performed in both the frequency and time domains. The converter is a two-body floating-point absorber (FPA) with one degree of freedom in the heave direction. Its two parts are connected by a linear mass-spring-damper system. The simulations were performed with the commercial ANSYS-AQWA software, and the velocity potential is obtained by assuming incompressible and irrotational flow. We investigated the effects of wave characteristics on energy conversion and device efficiency, including wave height and wave period, as well as the device diameter, draft, geometry, and damping coefficient. To validate the model, we compared our numerical results with those from similar experiments. Our results can help maximize the converter's efficiency under specific operating conditions.
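
The two-body heave dynamics described above can be sketched as a pair of masses coupled by the linear spring-damper power take-off and driven by a harmonic wave force. This is a minimal time-domain toy model, not the ANSYS-AQWA setup: all parameter values and names are hypothetical, and hydrodynamic added mass, radiation damping, and hydrostatic stiffness are omitted.

```python
import math

def simulate_two_body_heave(m1=1000.0, m2=5000.0, k=2.0e4, c=3.0e3,
                            F0=5.0e3, omega=1.2, dt=0.01, steps=5000):
    """Minimal two-body heave model: a float (m1) and a submerged body (m2)
    coupled by a linear spring (k) and damper (c); a harmonic wave force
    acts on the float. Returns the peak instantaneous power absorbed by
    the damper. All parameter values are hypothetical."""
    x1 = v1 = x2 = v2 = 0.0
    p_max = 0.0
    for i in range(steps):
        t = i * dt
        f_pto = k * (x1 - x2) + c * (v1 - v2)   # coupling (PTO) force
        a1 = (F0 * math.sin(omega * t) - f_pto) / m1
        a2 = f_pto / m2
        v1 += a1 * dt; x1 += v1 * dt            # semi-implicit Euler step
        v2 += a2 * dt; x2 += v2 * dt
        p_max = max(p_max, c * (v1 - v2) ** 2)  # instantaneous damper power
    return p_max
```

The returned peak damper power is the kind of quantity a parametric study like this one would maximize over diameter, draft, geometry, and damping coefficient.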

  4. An array processing system for lunar geochemical and geophysical data

    NASA Technical Reports Server (NTRS)

    Eliason, E. M.; Soderblom, L. A.

    1977-01-01

    A computerized array processing system has been developed to reduce, analyze, display, and correlate a large number of orbital and earth-based geochemical, geophysical, and geological measurements of the moon on a global scale. The system supports the activities of a consortium of about 30 lunar scientists involved in data synthesis studies. The system was modeled after standard digital image-processing techniques but differs in that processing is performed with floating point precision rather than integer precision. Because of flexibility in floating-point image processing, a series of techniques that are impossible or cumbersome in conventional integer processing were developed to perform optimum interpolation and smoothing of data. Recently color maps of about 25 lunar geophysical and geochemical variables have been generated.

  5. Vectorization of a classical trajectory code on a Floating Point Systems, Inc. Model 164 attached processor.

    PubMed

    Kraus, Wayne A; Wagner, Albert F

    1986-04-01

    A triatomic classical trajectory code has been modified by extensive vectorization of the algorithms to achieve much improved performance on an FPS 164 attached processor. Extensive timings on both the FPS 164 and a VAX 11/780 with floating point accelerator are presented as a function of the number of trajectories simultaneously run. The timing tests involve a potential energy surface of the LEPS variety and trajectories with 1000 time steps. The results indicate that vectorization results in timing improvements on both the VAX and the FPS. For larger numbers of trajectories run simultaneously, up to a factor of 25 improvement in speed occurs between VAX and FPS vectorized code. Copyright © 1986 John Wiley & Sons, Inc.

  6. Algebraic Functions, Computer Programming, and the Challenge of Transfer

    ERIC Educational Resources Information Center

    Schanzer, Emmanuel Tanenbaum

    2015-01-01

    Students' struggles with algebra are well documented. Prior to the introduction of functions, mathematics is typically focused on applying a set of arithmetic operations to compute an answer. The introduction of functions, however, marks the point at which mathematics begins to focus on building up abstractions as a way to solve complex problems.…

  7. Implicit Learning of Arithmetic Regularities Is Facilitated by Proximal Contrast

    PubMed Central

    Prather, Richard W.

    2012-01-01

    Natural number arithmetic is a simple, powerful, and important symbolic system. Despite intense focus on learning in cognitive development and educational research, many adults have weak knowledge of the system. In the current study, participants learned arithmetic principles via an implicit learning paradigm: not by solving arithmetic equations, but by viewing and evaluating example equations, similar to the implicit learning of artificial grammars, which we extend here to the symbolic arithmetic system. Specifically, we find that exposure to principle-inconsistent examples facilitates the acquisition of arithmetic principle knowledge if the equations are presented to the learner in a temporally proximate fashion. The results expand on research on the implicit learning of regularities and suggest that contrasting cases, shown to facilitate explicit arithmetic learning, are also relevant to the implicit learning of arithmetic. PMID:23119101

  8. Arithmetic Circuit Verification Based on Symbolic Computer Algebra

    NASA Astrophysics Data System (ADS)

    Watanabe, Yuki; Homma, Naofumi; Aoki, Takafumi; Higuchi, Tatsuo

    This paper presents a formal approach to verifying arithmetic circuits using symbolic computer algebra. Our method describes arithmetic circuits directly with high-level mathematical objects based on weighted number systems and arithmetic formulae. Such circuit descriptions can be effectively verified by polynomial reduction techniques using Gröbner bases. In this paper, we describe how symbolic computer algebra can be used to describe and verify arithmetic circuits. The advantageous effects of the proposed approach are demonstrated through experimental verification of arithmetic circuits such as a multiply-accumulator and an FIR filter. The results show that the proposed approach holds clear promise for verifying practical arithmetic circuits.
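
The paper's idea of checking a gate-level circuit against a word-level arithmetic formula can be illustrated on a one-bit full adder: the gate network must satisfy the weighted-number-system identity a + b + cin = 2*cout + s. The sketch below checks that identity exhaustively rather than by Gröbner-basis polynomial reduction, which is the symbolic machinery the paper actually uses; it only demonstrates the specification being verified.

```python
from itertools import product

def full_adder_gates(a, b, cin):
    """Gate-level full adder built from XOR/AND/OR gates."""
    s1 = a ^ b
    s = s1 ^ cin
    cout = (a & b) | (s1 & cin)
    return s, cout

# Word-level spec in the weighted number system: a + b + cin = 2*cout + s.
ok = all(
    a + b + cin == 2 * cout + s
    for a, b, cin in product((0, 1), repeat=3)
    for s, cout in [full_adder_gates(a, b, cin)]
)
```

A Gröbner-basis approach would instead reduce the polynomial for the gate network modulo the Boolean relations (x² = x) and check that it rewrites to the word-level polynomial, which scales to multipliers and multiply-accumulators where exhaustive checking does not.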

  9. Bearing Capacity of Floating Ice Sheets under Short-Term Loads: Over-Sea-Ice Traverse from McMurdo Station to Marble Point

    DTIC Science & Technology

    2015-01-01

    crafts on floating ice sheets near McMurdo, Antarctica (Katona and Vaudrey 1973; Katona 1974; Vaudrey 1977). To comply with the first criterion, one...Nomographs for operating wheeled aircraft on sea-ice runways: McMurdo Station, Antarctica. In Proceedings of the Offshore Mechanics and Arctic Engineering...Ice Thickness Requirements for Vehicles and Heavy Equipment at McMurdo Station, Antarctica. CRREL Project Report 04-09, “Safe Sea Ice for Vehicle

  10. The anatomy of floating shock fitting. [shock waves computation for flow field

    NASA Technical Reports Server (NTRS)

    Salas, M. D.

    1975-01-01

    The floating shock fitting technique is examined. Second-order difference formulas are developed for the computation of discontinuities. A procedure is developed to compute mesh points that are crossed by discontinuities. The technique is applied to the calculation of internal two-dimensional flows with arbitrary number of shock waves and contact surfaces. A new procedure, based on the coalescence of characteristics, is developed to detect the formation of shock waves. Results are presented to validate and demonstrate the versatility of the technique.
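
The shock-detection idea based on the coalescence of characteristics can be illustrated with the inviscid Burgers equation u_t + u u_x = 0, whose characteristics x(t) = x0 + u0(x0) t are straight lines: a shock forms where neighbouring characteristics first cross. This is only the detection principle in isolation, as a hedged sketch; the paper's procedure is embedded in a full flow solver with fitted discontinuities.

```python
def shock_time(x, u0):
    """Estimate when characteristics of u_t + u u_x = 0 first coalesce.
    Grid point x[i] carries speed u0[i]; the neighbouring characteristics
    x[i] + u0[i]*t and x[i+1] + u0[i+1]*t cross at
    t = (x[i+1] - x[i]) / (u0[i] - u0[i+1]) when u0 decreases.
    Returns the earliest positive crossing time, or None if no pair of
    neighbouring characteristics ever collides."""
    times = []
    for i in range(len(x) - 1):
        du = u0[i] - u0[i + 1]
        if du > 0:  # compressive pair: characteristics will collide
            times.append((x[i + 1] - x[i]) / du)
    return min(times) if times else None
```

For u0(x) = -x the detector returns a formation time of 1, matching the exact result t* = -1/min(u0'); for monotonically increasing data (an expansion) it correctly reports that no shock forms.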

  11. New Evidence on Causal Relationship between Approximate Number System (ANS) Acuity and Arithmetic Ability in Elementary-School Students: A Longitudinal Cross-Lagged Analysis.

    PubMed

    He, Yunfeng; Zhou, Xinlin; Shi, Dexin; Song, Hairong; Zhang, Hui; Shi, Jiannong

    2016-01-01

    Approximate number system (ANS) acuity and mathematical ability have been found to be closely associated in recent studies. However, whether and how these two measures are causally related remains largely unaddressed. There are two hypotheses about the possible causal relationship: ANS acuity influences mathematical performance, or access to math education sharpens ANS acuity. Evidence in support of both hypotheses has been reported, but the two hypotheses have never been tested simultaneously, so it remains unclear whether the association reflects a one-directional or a reciprocal causal relationship. In this work, we provide new evidence on the causal relationship between ANS acuity and arithmetic ability. ANS acuity and mathematical ability of elementary-school students were measured sequentially at three time points within one year, and all possible causal directions were evaluated simultaneously using cross-lagged regression analysis. The results show that ANS acuity influences later arithmetic ability, while the reverse causal direction was not supported. Our finding adds strong evidence for a causal association between ANS acuity and mathematical ability, and it has important implications for educational interventions designed to train ANS acuity and thereby promote mathematical ability.

  12. New Evidence on Causal Relationship between Approximate Number System (ANS) Acuity and Arithmetic Ability in Elementary-School Students: A Longitudinal Cross-Lagged Analysis

    PubMed Central

    He, Yunfeng; Zhou, Xinlin; Shi, Dexin; Song, Hairong; Zhang, Hui; Shi, Jiannong

    2016-01-01

    Approximate number system (ANS) acuity and mathematical ability have been found to be closely associated in recent studies. However, whether and how these two measures are causally related remains largely unaddressed. There are two hypotheses about the possible causal relationship: ANS acuity influences mathematical performance, or access to math education sharpens ANS acuity. Evidence in support of both hypotheses has been reported, but the two hypotheses have never been tested simultaneously, so it remains unclear whether the association reflects a one-directional or a reciprocal causal relationship. In this work, we provide new evidence on the causal relationship between ANS acuity and arithmetic ability. ANS acuity and mathematical ability of elementary-school students were measured sequentially at three time points within one year, and all possible causal directions were evaluated simultaneously using cross-lagged regression analysis. The results show that ANS acuity influences later arithmetic ability, while the reverse causal direction was not supported. Our finding adds strong evidence for a causal association between ANS acuity and mathematical ability, and it has important implications for educational interventions designed to train ANS acuity and thereby promote mathematical ability. PMID:27462291

  13. Fast Fuzzy Arithmetic Operations

    NASA Technical Reports Server (NTRS)

    Hampton, Michael; Kosheleva, Olga

    1997-01-01

    In engineering applications of fuzzy logic, the main goal is not to simulate the way the experts really think, but to come up with a good engineering solution that would (ideally) be better than the expert's control. In such applications, it makes perfect sense to restrict ourselves to simplified approximate expressions for membership functions. If we need to perform arithmetic operations with the resulting fuzzy numbers, then we can use simple and fast algorithms that are known for operations with simple membership functions. In other applications, especially the ones that are related to humanities, simulating experts is one of the main goals. In such applications, we must use membership functions that capture every nuance of the expert's opinion; these functions are therefore complicated, and fuzzy arithmetic operations with the corresponding fuzzy numbers become a computational problem. In this paper, we design a new algorithm for performing such operations. This algorithm is applicable in the case when the negative logarithms -log(u(x)) of the membership functions u(x) are convex, and it reduces computation time from O(n^2) to O(n log n) (where n is the number of points x at which we know the membership functions u(x)).
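
For the simplified membership functions the abstract mentions, arithmetic really is fast: under the sup-min extension principle, the sum of two triangular fuzzy numbers is again triangular, so addition is O(1) on the (left, peak, right) parameters. A small illustrative sketch (function names are assumptions; the paper's O(n log n) algorithm targets the harder general case of convex -log(u), not this easy one):

```python
def tri_add(a, b):
    """Add two triangular fuzzy numbers given as (left, peak, right).
    Under the sup-min extension principle, the sum of two triangular
    fuzzy numbers is triangular, so addition is componentwise and O(1)."""
    return tuple(x + y for x, y in zip(a, b))

def tri_alpha_cut(t, alpha):
    """Alpha-cut interval [lo, hi] of a triangular fuzzy number (l, m, r):
    the set of x with membership at least alpha."""
    l, m, r = t
    return (l + alpha * (m - l), r - alpha * (r - m))
```

For example, `tri_add((1, 2, 3), (2, 3, 5))` gives `(3, 5, 8)`, whose 0.5-cut is the interval `(4.0, 6.5)`; general membership functions require operating on many sampled points instead, which is where the O(n log n) algorithm matters.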

  14. Description of signature scales in a floating wind turbine model wake subjected to varying turbulence intensity

    NASA Astrophysics Data System (ADS)

    Kadum, Hawwa; Rockel, Stanislav; Holling, Michael; Peinke, Joachim; Cal, Raul Bayon

    2017-11-01

    The wake behind a floating model horizontal axis wind turbine during pitch motion is investigated and compared to a fixed wind turbine wake. An experiment is conducted in an acoustic wind tunnel where hot-wire data are acquired at five downstream locations. At each downstream location, a rake of 16 hot-wires was used, with the probes placed at increasing radial distances in the vertical, horizontal, and diagonal (45 deg) directions. In addition, the effect of turbulence intensity on the floating wake is examined by subjecting the wind turbine to different inflow conditions controlled through three wind tunnel grid settings (one passive and two active protocols) of varying intensity. The wakes are inspected through statistics of the point measurements, with the various length and time scales considered. The wake characteristics of the floating wind turbine are compared to those of a fixed turbine to uncover its distinguishing features; this is relevant as the demand for exploiting deep waters for wind energy increases.

  15. The neural circuits for arithmetic principles.

    PubMed

    Liu, Jie; Zhang, Han; Chen, Chuansheng; Chen, Hui; Cui, Jiaxin; Zhou, Xinlin

    2017-02-15

    Arithmetic principles are the regularities underlying arithmetic computation. Little is known about how the brain supports the processing of arithmetic principles. The current fMRI study examined neural activation and functional connectivity during the processing of verbalized arithmetic principles, as compared to numerical computation and general language processing. As expected, arithmetic principles elicited stronger activation in bilateral horizontal intraparietal sulcus and right supramarginal gyrus than did language processing, and stronger activation in left middle temporal lobe and left orbital part of inferior frontal gyrus than did computation. In contrast, computation elicited greater activation in bilateral horizontal intraparietal sulcus (extending to posterior superior parietal lobule) than did either arithmetic principles or language processing. Functional connectivity analysis with the psychophysiological interaction approach (PPI) showed that left temporal-parietal (MTG-HIPS) connectivity was stronger during the processing of arithmetic principle and language than during computation, whereas parietal-occipital connectivities were stronger during computation than during the processing of arithmetic principles and language. Additionally, the left fronto-parietal (orbital IFG-HIPS) connectivity was stronger during the processing of arithmetic principles than during computation. The results suggest that verbalized arithmetic principles engage a neural network that overlaps but is distinct from the networks for computation and language processing. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. Specificity and Overlap in Skills Underpinning Reading and Arithmetical Fluency

    ERIC Educational Resources Information Center

    van Daal, Victor; van der Leij, Aryan; Ader, Herman

    2013-01-01

    The aim of this study was to examine unique and common causes of problems in reading and arithmetic fluency. 13- to 14-year-old students were placed into one of five groups: reading disabled (RD, n = 16), arithmetic disabled (AD, n = 34), reading and arithmetic disabled (RAD, n = 17), reading, arithmetic, and listening comprehension disabled…

  17. High-frequency video capture and a computer program with frame-by-frame angle determination functionality as tools that support judging in artistic gymnastics.

    PubMed

    Omorczyk, Jarosław; Nosiadek, Leszek; Ambroży, Tadeusz; Nosiadek, Andrzej

    2015-01-01

    The main aim of this study was to verify the usefulness of selected simple methods of recording and fast biomechanical analysis, performed by judges of artistic gymnastics, in assessing a gymnast's movement technique. The study participants comprised six artistic gymnastics judges, who assessed back handsprings using two methods: real-time observation and frame-by-frame video analysis. They also determined the flexion angles of the knee and hip joints using a computer program. With the real-time observation method, the judges gave a total of 5.8 error points, with an arithmetic mean of 0.16 points, for flexion of the knee joints; with the high-speed video analysis method, the total amounted to 8.6 error points and the mean to 0.24 error points. For excessive flexion of the hip joints, the sum of the error values was 2.2 error points and the arithmetic mean 0.06 error points during real-time observation, whereas the frame-by-frame analysis method yielded a sum of 10.8 and a mean of 0.30 error points. Error values obtained through frame-by-frame video analysis of movement technique were thus higher than those obtained through real-time observation. The judges were able to indicate the number of the frame in which the maximal joint flexion occurred with good accuracy. Neither real-time observation nor high-speed video analysis performed without determining exact joint angles was found to be a sufficient tool for improving the quality of judging.
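
The frame-by-frame angle determination used here reduces to measuring the angle at a joint landmark between the two body segments digitized in a video frame. A hedged sketch of such a computation (the coordinate convention and function name are assumptions, not the study's actual program):

```python
import math

def joint_angle(p_prox, p_joint, p_dist):
    """Angle in degrees at p_joint between the segments p_joint->p_prox
    and p_joint->p_dist, from 2-D frame coordinates. 180 deg means the
    three landmarks are collinear (fully extended joint)."""
    ax, ay = p_prox[0] - p_joint[0], p_prox[1] - p_joint[1]
    bx, by = p_dist[0] - p_joint[0], p_dist[1] - p_joint[1]
    dot = ax * bx + ay * by
    na = math.hypot(ax, ay)
    nb = math.hypot(bx, by)
    # clamp guards against acos domain errors from rounding
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (na * nb)))))
```

With hip, knee, and ankle landmarks collinear this returns 180 deg; the flexion error a judge would score is the deviation from 180.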

  18. 33 CFR 161.18 - Reporting requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... call. H HOTEL Date, time and point of entry system Entry time expressed as in (B) and into the entry... KILO Date, time and point of exit from system Exit time expressed as in (B) and exit position expressed....; for a dredge or floating plant: configuration of pipeline, mooring configuration, number of assist...

  19. Knowing, Applying, and Reasoning about Arithmetic: Roles of Domain-General and Numerical Skills in Multiple Domains of Arithmetic Learning

    ERIC Educational Resources Information Center

    Zhang, Xiao; Räsänen, Pekka; Koponen, Tuire; Aunola, Kaisa; Lerkkanen, Marja-Kristiina; Nurmi, Jari-Erik

    2017-01-01

    The longitudinal relations of domain-general and numerical skills at ages 6-7 years to 3 cognitive domains of arithmetic learning, namely knowing (written computation), applying (arithmetic word problems), and reasoning (arithmetic reasoning) at age 11, were examined for a representative sample of 378 Finnish children. The results showed that…

  20. Children's use of decomposition strategies mediates the visuospatial memory and arithmetic accuracy relation.

    PubMed

    Foley, Alana E; Vasilyeva, Marina; Laski, Elida V

    2017-06-01

    This study examined the mediating role of children's use of decomposition strategies in the relation between visuospatial memory (VSM) and arithmetic accuracy. Children (N = 78; Age M = 9.36) completed assessments of VSM, arithmetic strategies, and arithmetic accuracy. Consistent with previous findings, VSM predicted arithmetic accuracy in children. Extending previous findings, the current study showed that the relation between VSM and arithmetic performance was mediated by the frequency of children's use of decomposition strategies. Identifying the role of arithmetic strategies in this relation has implications for increasing the math performance of children with lower VSM. Statement of contribution What is already known on this subject? The link between children's visuospatial working memory and arithmetic accuracy is well documented. Frequency of decomposition strategy use is positively related to children's arithmetic accuracy. Children's spatial skill positively predicts the frequency with which they use decomposition. What does this study add? Short-term visuospatial memory (VSM) positively relates to the frequency of children's decomposition use. Decomposition use mediates the relation between short-term VSM and arithmetic accuracy. Children with limited short-term VSM may struggle to use decomposition, decreasing accuracy. © 2016 The British Psychological Society.

  1. WISC-III cognitive profiles in children with developmental dyslexia: specific cognitive disability and diagnostic utility.

    PubMed

    Moura, Octávio; Simões, Mário R; Pereira, Marcelino

    2014-02-01

    This study analysed the usefulness of the Wechsler Intelligence Scale for Children-Third Edition in identifying specific cognitive impairments that are linked to developmental dyslexia (DD) and the diagnostic utility of the most common profiles in a sample of 100 Portuguese children (50 dyslexic and 50 normal readers) between the ages of 8 and 12 years. Children with DD exhibited significantly lower scores in the Verbal Comprehension Index (except the Vocabulary subtest), Freedom from Distractibility Index (FDI) and Processing Speed Index subtests, with larger effect sizes than normal readers in Information, Arithmetic and Digit Span. The Verbal-Performance IQ discrepancies, Bannatyne pattern and the presence of FDI; Arithmetic, Coding, Information and Digit Span subtests (ACID) and Symbol Search, Coding, Arithmetic and Digit Span subtests (SCAD) profiles (full or partial) in the lowest subtests revealed low diagnostic utility. However, the receiver operating characteristic curve and optimal cut-off score analyses of the composite ACID, FDI and SCAD profile scores showed moderate accuracy in correctly discriminating dyslexic readers from normal ones. These results suggest that, in the context of a comprehensive assessment, the Wechsler Intelligence Scale for Children-Third Edition provides some useful information about the presence of specific cognitive disabilities in DD. Practitioner Points. Children with developmental dyslexia revealed significant deficits in the Wechsler Intelligence Scale for Children-Third Edition subtests that rely on verbal abilities, processing speed and working memory. The composite Arithmetic, Coding, Information and Digit Span subtests (ACID), Freedom from Distractibility Index and Symbol Search, Coding, Arithmetic and Digit Span subtests (SCAD) profile scores showed moderate accuracy in correctly discriminating dyslexics from normal readers. The Wechsler Intelligence Scale for Children-Third Edition may provide some useful information about the presence of specific cognitive disabilities in developmental dyslexia. Copyright © 2013 John Wiley & Sons, Ltd.

  2. Reading instead of reasoning? Predictors of arithmetic skills in children with cochlear implants.

    PubMed

    Huber, Maria; Kipman, Ulrike; Pletzer, Belinda

    2014-07-01

    The aim of the present study was to evaluate whether the arithmetic achievement of children with cochlear implants (CI) was lower than or comparable to that of their normal hearing peers and to identify predictors of arithmetic achievement in children with CI. In particular, we related the arithmetic achievement of children with CI to nonverbal IQ, reading skills and hearing variables. 23 children with CI (onset of hearing loss in the first 24 months, cochlear implantation in the first 60 months of life, at least 3 years of hearing experience with the first CI) and 23 normal hearing peers matched by age, gender, and social background participated in this case-control study. All attended grades two to four in primary schools. To assess their arithmetic achievement, all children completed the "Arithmetic Operations" part of the "Heidelberger Rechentest" (HRT), a German arithmetic test. To assess reading skills and nonverbal intelligence as potential predictors of arithmetic achievement, all children completed the "Salzburger Lesetest" (SLS), a German reading screening, and the Culture Fair Intelligence Test (CFIT), a nonverbal intelligence test. Children with CI did not differ significantly from hearing children in their arithmetic achievement. Correlation and regression analyses revealed that in children with CI, arithmetic achievement was significantly (positively) related to reading skills, but not to nonverbal IQ. Reading skills and nonverbal IQ were not related to each other. In normal hearing children, arithmetic achievement was significantly (positively) related to nonverbal IQ, but not to reading skills. Reading skills and nonverbal IQ were positively correlated. Hearing variables were not related to arithmetic achievement. Children with CI do not show lower performance in non-verbal arithmetic tasks, compared to normal hearing peers. Copyright © 2014. Published by Elsevier Ireland Ltd.

  3. Parallel processor for real-time structural control

    NASA Astrophysics Data System (ADS)

    Tise, Bert L.

    1993-07-01

    A parallel processor that is optimized for real-time linear control has been developed. This modular system consists of A/D modules, D/A modules, and floating-point processor modules. The scalable processor uses up to 1,000 Motorola DSP96002 floating-point processors for a peak computational rate of 60 GFLOPS. Sampling rates up to 625 kHz are supported by this analog-in to analog-out controller. The high processing rate and parallel architecture make this processor suitable for computing state-space equations and other multiply/accumulate-intensive digital filters. Processor features include 14-bit conversion devices, low input-to-output latency, 240 Mbyte/s synchronous backplane bus, low-skew clock distribution circuit, VME connection to host computer, parallelizing code generator, and look- up-tables for actuator linearization. This processor was designed primarily for experiments in structural control. The A/D modules sample sensors mounted on the structure and the floating- point processor modules compute the outputs using the programmed control equations. The outputs are sent through the D/A module to the power amps used to drive the structure's actuators. The host computer is a Sun workstation. An OpenWindows-based control panel is provided to facilitate data transfer to and from the processor, as well as to control the operating mode of the processor. A diagnostic mode is provided to allow stimulation of the structure and acquisition of the structural response via sensor inputs.
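
The "state-space equations" this processor computes each sample are the standard discrete-time update x[k+1] = A x[k] + B u[k], y[k] = C x[k] + D u[k], which is exactly the multiply/accumulate-intensive workload the abstract describes. A minimal single-processor sketch in plain Python (rather than DSP96002 code, and with hypothetical matrices):

```python
def state_space_step(A, B, C, D, x, u):
    """One sample of a discrete state-space controller:
    x_next = A x + B u ;  y = C x + D u.
    Written as the plain multiply/accumulate loops a DSP would execute."""
    n, m = len(A), len(u)
    x_next = [sum(A[i][j] * x[j] for j in range(n)) +
              sum(B[i][j] * u[j] for j in range(m)) for i in range(n)]
    y = [sum(C[i][j] * x[j] for j in range(n)) +
         sum(D[i][j] * u[j] for j in range(m)) for i in range(len(C))]
    return x_next, y
```

Each output sample costs on the order of (n + m) multiply/accumulates per state and output row, which is why a parallel array of floating-point processors raises the achievable sampling rate for large structural-control models.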

  4. Unsteady aerodynamic analysis for offshore floating wind turbines under different wind conditions.

    PubMed

    Xu, B F; Wang, T G; Yuan, Y; Cao, J F

    2015-02-28

    A free-vortex wake (FVW) model is developed in this paper to analyse the unsteady aerodynamic performance of offshore floating wind turbines. A time-marching algorithm of third-order accuracy is applied in the FVW model. Owing to the complex floating platform motions, the blade inflow conditions and the positions of initial points of vortex filaments, which are different from the fixed wind turbine, are modified in the implemented model. A three-dimensional rotational effect model and a dynamic stall model are coupled into the FVW model to improve the aerodynamic performance prediction in the unsteady conditions. The effects of floating platform motions in the simulation model are validated by comparison between calculation and experiment for a small-scale rigid test wind turbine coupled with a floating tension leg platform (TLP). The dynamic inflow effect carried by the FVW method itself is confirmed and the results agree well with the experimental data of a pitching transient on another test turbine. Also, the flapping moment at the blade root in yaw on the same test turbine is calculated and compares well with the experimental data. Then, the aerodynamic performance is simulated in a yawed condition of steady wind and in an unyawed condition of turbulent wind, respectively, for a large-scale wind turbine coupled with the floating TLP motions, demonstrating obvious differences in rotor performance and blade loading from the fixed wind turbine. The non-dimensional magnitudes of loading changes due to the floating platform motions decrease from the blade root to the blade tip. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
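
At the core of each free-vortex wake time-marching step is the Biot-Savart law: every straight vortex filament in the marched wake induces a velocity at every other point. A hedged sketch of the induced velocity of a single straight segment, with a simple cut-off in place of a proper viscous core model (the function name and cut-off are assumptions, not the paper's implementation):

```python
import math

def biot_savart_segment(p, p1, p2, gamma, eps=1e-6):
    """Velocity induced at point p by a straight vortex filament from p1
    to p2 with circulation gamma (Biot-Savart law). A small cut-off eps
    regularizes the singularity on the filament axis. Points are 3-tuples."""
    r1 = [p[i] - p1[i] for i in range(3)]
    r2 = [p[i] - p2[i] for i in range(3)]
    cr = [r1[1] * r2[2] - r1[2] * r2[1],        # r1 x r2
          r1[2] * r2[0] - r1[0] * r2[2],
          r1[0] * r2[1] - r1[1] * r2[0]]
    cr2 = sum(c * c for c in cr)
    n1 = math.sqrt(sum(c * c for c in r1))
    n2 = math.sqrt(sum(c * c for c in r2))
    if cr2 < eps or n1 < eps or n2 < eps:
        return [0.0, 0.0, 0.0]                  # on or near the filament
    r0 = [p2[i] - p1[i] for i in range(3)]
    k = gamma / (4 * math.pi * cr2) * sum(r0[i] * (r1[i] / n1 - r2[i] / n2)
                                          for i in range(3))
    return [k * c for c in cr]
```

As a sanity check, for a very long segment the result approaches the 2-D line-vortex value gamma/(2*pi*r); an FVW solver sums this contribution over all wake filaments at every marker, then convects the markers, which is what makes the method costly but able to capture the dynamic inflow effects noted above.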

  5. Unsteady aerodynamic analysis for offshore floating wind turbines under different wind conditions

    PubMed Central

    Xu, B. F.; Wang, T. G.; Yuan, Y.; Cao, J. F.

    2015-01-01

    A free-vortex wake (FVW) model is developed in this paper to analyse the unsteady aerodynamic performance of offshore floating wind turbines. A time-marching algorithm of third-order accuracy is applied in the FVW model. Owing to the complex floating platform motions, the blade inflow conditions and the positions of initial points of vortex filaments, which are different from the fixed wind turbine, are modified in the implemented model. A three-dimensional rotational effect model and a dynamic stall model are coupled into the FVW model to improve the aerodynamic performance prediction in the unsteady conditions. The effects of floating platform motions in the simulation model are validated by comparison between calculation and experiment for a small-scale rigid test wind turbine coupled with a floating tension leg platform (TLP). The dynamic inflow effect carried by the FVW method itself is confirmed and the results agree well with the experimental data of a pitching transient on another test turbine. Also, the flapping moment at the blade root in yaw on the same test turbine is calculated and compares well with the experimental data. Then, the aerodynamic performance is simulated in a yawed condition of steady wind and in an unyawed condition of turbulent wind, respectively, for a large-scale wind turbine coupled with the floating TLP motions, demonstrating obvious differences in rotor performance and blade loading from the fixed wind turbine. The non-dimensional magnitudes of loading changes due to the floating platform motions decrease from the blade root to the blade tip. PMID:25583859

  6. Floating liquid phase in sedimenting colloid-polymer mixtures.

    PubMed

    Schmidt, Matthias; Dijkstra, Marjolein; Hansen, Jean-Pierre

    2004-08-20

    Density functional theory and computer simulation are used to investigate sedimentation equilibria of colloid-polymer mixtures within the Asakura-Oosawa-Vrij model of hard sphere colloids and ideal polymers. When the ratio of buoyant masses of the two species is comparable to the ratio of differences in density of the coexisting bulk (colloid) gas and liquid phases, a stable "floating liquid" phase is found, i.e., a thin layer of liquid sandwiched between upper and lower gas phases. The full phase diagram of the mixture under gravity shows coexistence of this floating liquid phase with a single gas phase or a phase involving liquid-gas equilibrium; the phase coexistence lines meet at a triple point. This scenario remains valid for general asymmetric binary mixtures undergoing bulk phase separation.

  7. The computationalist reformulation of the mind-body problem.

    PubMed

    Marchal, Bruno

    2013-09-01

    Computationalism, or digital mechanism, or simply mechanism, is a hypothesis in cognitive science according to which we can be emulated by a computer without changing our private subjective feeling. We provide a weaker form of that hypothesis, weaker than the one commonly referred to in the (vast) literature, and show how to recast the mind-body problem in that setting. We show that such a mechanist hypothesis does not solve the mind-body problem per se, but does help to partially reduce the mind-body problem to another problem which admits a formulation in pure arithmetic. We explain that once we adopt the computationalist hypothesis, which is a form of mechanist assumption, we have to derive from it how our belief in the physical laws can emerge from *only* arithmetic and classical computer science. In that sense we reduce the mind-body problem to a problem of the appearance of body in computer science, or in arithmetic. The general shape of the possible solution of that subproblem, if it exists, is shown to be closer to "Platonist or neoplatonist theology" than to "Aristotelian theology". In Plato's theology, the physical or observable reality is only the shadow of a vaster hidden nonphysical and nonobservable, perhaps mathematical, reality. The main point is that the derivation is constructive: it provides the technical means to derive physics from arithmetic, and this makes the computationalist hypothesis empirically testable, and thus scientific in the Popperian analysis of science. In case computationalism is wrong, the derivation leads to a procedure for measuring "our local degree of noncomputationalism". Copyright © 2013 Elsevier Ltd. All rights reserved.

  8. Working memory and arithmetic calculation in children: the contributory roles of processing speed, short-term memory, and reading.

    PubMed

    Berg, Derek H

    2008-04-01

    The cognitive underpinnings of arithmetic calculation in children are known to involve working memory; however, cognitive processes related to arithmetic calculation and working memory suggest that this relationship is more complex than previously stated. The purpose of this investigation was to examine the relative contributions of processing speed, short-term memory, working memory, and reading to arithmetic calculation in children. Results suggested four important findings. First, processing speed emerged as a significant contributor to arithmetic calculation only in relation to age-related differences in the general sample. Second, processing speed and short-term memory did not eliminate the contribution of working memory to arithmetic calculation. Third, individual working memory components--verbal working memory and visual-spatial working memory--each contributed unique variance to arithmetic calculation in the presence of all other variables. Fourth, a full model indicated that chronological age remained a significant contributor to arithmetic calculation in the presence of significant contributions from all other variables. Results are discussed in terms of directions for future research on working memory in arithmetic calculation.

  9. Cognitive Predictors of Achievement Growth in Mathematics: A 5-Year Longitudinal Study

    ERIC Educational Resources Information Center

    Geary, David C.

    2011-01-01

    The study's goal was to identify the beginning of 1st grade quantitative competencies that predict mathematics achievement start point and growth through 5th grade. Measures of number, counting, and arithmetic competencies were administered in early 1st grade and used to predict mathematics achievement through 5th (n = 177), while controlling for…

  10. Mistaken Identifiers: Gene name errors can be introduced inadvertently when using Excel in bioinformatics

    PubMed Central

    Zeeberg, Barry R; Riss, Joseph; Kane, David W; Bussey, Kimberly J; Uchio, Edward; Linehan, W Marston; Barrett, J Carl; Weinstein, John N

    2004-01-01

    Background When processing microarray data sets, we recently noticed that some gene names were being changed inadvertently to non-gene names. Results A little detective work traced the problem to default date format conversions and floating-point format conversions in the very useful Excel program package. The date conversions affect at least 30 gene names; the floating-point conversions affect at least 2,000 if Riken identifiers are included. These conversions are irreversible; the original gene names cannot be recovered. Conclusions Users of Excel for analyses involving gene names should be aware of this problem, which can cause genes, including medically important ones, to be lost from view and which has contaminated even carefully curated public databases. We provide work-arounds and scripts for circumventing the problem. PMID:15214961
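    The floating-point conversions described above happen because Riken-style identifiers are syntactically valid scientific notation; the following sketch uses Python's numeric parse to stand in for the spreadsheet's coercion, with an identifier that is merely an example of the vulnerable shape.

```python
# Identifiers of the form <digits>E<digits> match the syntax of
# scientific notation, which is why spreadsheet auto-conversion
# silently turns them into numbers. (Gene symbols such as SEPT2
# suffer the analogous date conversion.)

def looks_like_float(s):
    """True if a numeric parse (as a spreadsheet would attempt) accepts s."""
    try:
        float(s)
        return True
    except ValueError:
        return False

riken_id = "2310009E13"        # an identifier of the vulnerable shape
converted = float(riken_id)    # read as 2310009 x 10^13 = 2.310009e+19
# The conversion is irreversible: the original string is gone.
assert looks_like_float(riken_id)
assert not looks_like_float("TP53")   # ordinary symbols are safe here
```

    The practical work-around described in the paper is to force such columns to be imported as text rather than letting the default type inference run.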

  11. Renormalization group procedure for potential -g/r2

    NASA Astrophysics Data System (ADS)

    Dawid, S. M.; Gonsior, R.; Kwapisz, J.; Serafin, K.; Tobolski, M.; Głazek, S. D.

    2018-02-01

    The Schrödinger equation with potential -g/r2 exhibits a limit cycle, described in the literature in a broad range of contexts using various regularizations of the singularity at r = 0. Instead, we use the renormalization group transformation based on Gaussian elimination, from the Hamiltonian eigenvalue problem, of high-momentum modes above a finite, floating cutoff scale. The procedure identifies a richer structure than the one we found in the literature. Namely, it directly yields an equation that determines the renormalized Hamiltonians as functions of the floating cutoff: solutions to this equation exhibit, in addition to the limit cycle, also asymptotic-freedom, triviality, and fixed-point behaviors, the latter in the vicinity of infinitely many separate pairs of fixed points in different partial waves for different values of g.
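    The limit-cycle behaviour originates in the scale invariance of the -g/r2 potential; the following is a brief standard sketch (textbook material, not taken from the paper) of why a critical coupling separates the two regimes.

```latex
% Why -g/r^2 produces a limit cycle (standard sketch, units \hbar^2/2m = 1).
\[
  -u''(r) - \frac{g}{r^{2}}\,u(r) = E\,u(r), \qquad
  u(r) \sim r^{s} \;\Rightarrow\; s(s-1) = -g,
  \quad s = \tfrac{1}{2} \pm \sqrt{\tfrac{1}{4} - g}.
\]
For $g > 1/4$ the exponents are complex, and near the origin
\[
  u(r) \sim \sqrt{r}\,\cos\!\Bigl(\sqrt{g - \tfrac{1}{4}}\,\ln r + \varphi\Bigr),
\]
a log-periodic form invariant under $r \to \lambda r$ with
$\ln\lambda = 2\pi/\sqrt{g - \tfrac{1}{4}}$. A cutoff regularization
therefore returns to an equivalent Hamiltonian cyclically as the
cutoff scale is lowered -- the renormalization group limit cycle.
```

    For g below the critical value 1/4 the exponents are real and no cycle appears, consistent with the fixed-point behaviors mentioned in the abstract.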

  12. Program Correctness, Verification and Testing for Exascale (Corvette)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sen, Koushik; Iancu, Costin; Demmel, James W

    The goal of this project is to provide tools to assess the correctness of parallel programs written using hybrid parallelism. There is a dire lack of both theoretical and engineering know-how in the area of finding bugs in hybrid or large scale parallel programs, which our research aims to change. In the project we have demonstrated novel approaches in several areas: 1. Low overhead automated and precise detection of concurrency bugs at scale. 2. Using low overhead bug detection tools to guide speculative program transformations for performance. 3. Techniques to reduce the concurrency required to reproduce a bug using partial program restart/replay. 4. Techniques to provide reproducible execution of floating point programs. 5. Techniques for tuning the floating point precision used in codes.
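    Reproducible execution of floating-point programs (item 4 above) is hard precisely because floating-point addition is not associative: a parallel reduction that changes the order of partial sums can change the result bits from run to run. A minimal illustration:

```python
import math

# Floating-point addition is not associative, so different reduction
# orders in a parallel run give (slightly) different answers.
a, b, c = 0.1, 0.2, 0.3
left = (a + b) + c    # one reduction order: 0.6000000000000001
right = a + (b + c)   # another reduction order: 0.6
assert left != right

# At array scale the summation algorithm matters too: naive
# left-to-right accumulation vs. correctly rounded summation.
xs = [0.1] * 10
naive = sum(xs)        # accumulates rounding error
exact = math.fsum(xs)  # correctly rounded sum: exactly 1.0
```

    Tools that enforce a fixed reduction order, or that use exact/compensated summation as above, trade some performance for bitwise reproducibility.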

  13. 33 CFR 183.110 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ..., below a height of 4 inches measured from the lowest point in the boat where liquid can collect when the boat is in its static floating position, except engine rooms. Connected means allowing a flow of water... the engine room or a connected compartment below a height of 12 inches measured from the lowest point...

  14. 40 CFR 63.653 - Monitoring, recordkeeping, and implementation plan for emissions averaging.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) For each emission point included in an emissions average, the owner or operator shall perform testing, monitoring, recordkeeping, and reporting equivalent to that required for Group 1 emission points complying... internal floating roof, external roof, or a closed vent system with a control device, as appropriate to the...

  15. 33 CFR 110.60 - Captain of the Port, New York.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... yachts and other recreational craft. A mooring buoy is permitted. (4) Manhattan, Fort Washington Point... special anchorage area is principally for use by yachts and other recreational craft. A temporary float or... shoreline to the point of origin. Note to paragraph (d)(5): The area will be principally for use by yachts...

  16. Geographic Resources Analysis Support System (GRASS) Version 4.0 User’s Reference Manual

    DTIC Science & Technology

    1992-06-01

    input-image need not be square; before processing, the X and Y dimensions of the input-image are padded with zeroes to the next highest power of two in...structures an input knowledge/control script with an appropriate combination of map layer category values (GRASS raster map layers that contain data on...F cos(x) cosine of x (x is in degrees) F exp(x) exponential function of x F exp(x,y) x to the power y F float(x) convert x to floating point F if

  17. Differences in arithmetic performance between Chinese and German adults are accompanied by differences in processing of non-symbolic numerical magnitude

    PubMed Central

    Lonnemann, Jan; Li, Su; Zhao, Pei; Li, Peng; Linkersdörfer, Janosch; Lindberg, Sven; Hasselhorn, Marcus; Yan, Song

    2017-01-01

    Human beings are assumed to possess an approximate number system (ANS) dedicated to extracting and representing approximate numerical magnitude information. The ANS is assumed to be fundamental to arithmetic learning and has been shown to be associated with arithmetic performance. It is, however, still a matter of debate whether better arithmetic skills are reflected in the ANS. To address this issue, Chinese and German adults were compared regarding their performance in simple arithmetic tasks and in a non-symbolic numerical magnitude comparison task. Chinese participants showed a better performance in solving simple arithmetic tasks and faster reaction times in the non-symbolic numerical magnitude comparison task without making more errors than their German peers. These differences in performance could not be ascribed to differences in general cognitive abilities. Better arithmetic skills were thus found to be accompanied by a higher speed of retrieving non-symbolic numerical magnitude knowledge but not by a higher precision of non-symbolic numerical magnitude representations. The group difference in the speed of retrieving non-symbolic numerical magnitude knowledge was fully mediated by the performance in arithmetic tasks, suggesting that arithmetic skills shape non-symbolic numerical magnitude processing skills. PMID:28384191

  18. Differences in Arithmetic Performance between Chinese and German Children Are Accompanied by Differences in Processing of Symbolic Numerical Magnitude

    PubMed Central

    Lonnemann, Jan; Linkersdörfer, Janosch; Hasselhorn, Marcus; Lindberg, Sven

    2016-01-01

    Symbolic numerical magnitude processing skills are assumed to be fundamental to arithmetic learning. It is, however, still an open question whether better arithmetic skills are reflected in symbolic numerical magnitude processing skills. To address this issue, Chinese and German third graders were compared regarding their performance in arithmetic tasks and in a symbolic numerical magnitude comparison task. Chinese children performed better in the arithmetic tasks and were faster in deciding which one of two Arabic numbers was numerically larger. The group difference in symbolic numerical magnitude processing was fully mediated by the performance in arithmetic tasks. We assume that a higher degree of familiarity with arithmetic in Chinese compared to German children leads to a higher speed of retrieving symbolic numerical magnitude knowledge. PMID:27630606

  19. AmeriFlux US-WPT Winous Point North Marsh

    DOE Data Explorer

    Chen, Jiquan [University of Toledo / Michigan State University

    2016-01-01

    This is the AmeriFlux version of the carbon flux data for the site US-WPT Winous Point North Marsh. Site Description - The marsh site has been owned by the Winous Point Shooting Club since 1856 and has been managed by wildlife biologists since 1946. The hydrology of the marsh is relatively isolated by the surrounding dikes and drainages and only receives drainage from nearby croplands through three connecting ditches. Since 2001, the marsh has been managed to maintain year-round inundation with the lowest water levels in September. Within the 0–250 m fetch of the tower, the marsh comprises 42.9% of floating-leaved vegetation, 52.7% of emergent vegetation, and 4.4% of dike and upland during the growing season. Dominant emergent plants include narrow-leaved cattail (Typha angustifolia), rose mallow (Hibiscus moscheutos), and bur reed (Sparganium americanum). Common floating-leaved species are water lily (Nymphaea odorata) and American lotus (Nelumbo lutea) with foliage usually covering the water surface from late May to early October.

  20. What basic number processing measures in kindergarten explain unique variability in first-grade arithmetic proficiency?

    PubMed

    Bartelet, Dimona; Vaessen, Anniek; Blomert, Leo; Ansari, Daniel

    2014-01-01

    Relations between children's mathematics achievement and their basic number processing skills have been reported in both cross-sectional and longitudinal studies. Yet, some key questions are currently unresolved, including which kindergarten skills uniquely predict children's arithmetic fluency during the first year of formal schooling and the degree to which predictors are contingent on children's level of arithmetic proficiency. The current study assessed kindergarteners' non-symbolic and symbolic number processing efficiency. In addition, the contribution of children's underlying magnitude representations to differences in arithmetic achievement was assessed. Subsequently, in January of Grade 1, their arithmetic proficiency was assessed. Hierarchical regression analysis revealed that children's efficiency to compare digits, count, and estimate numerosities uniquely predicted arithmetic differences above and beyond the non-numerical factors included. Moreover, quantile regression analysis indicated that symbolic number processing efficiency was consistently a significant predictor of arithmetic achievement scores regardless of children's level of arithmetic proficiency, whereas their non-symbolic number processing efficiency was not. Finally, none of the task-specific effects indexing children's representational precision was significantly associated with arithmetic fluency. The implications of the results are 2-fold. First, the findings indicate that children's efficiency to process symbols is important for the development of their arithmetic fluency in Grade 1 above and beyond the influence of non-numerical factors. Second, the impact of children's non-symbolic number processing skills does not depend on their arithmetic achievement level given that they are selected from a nonclinical population. Copyright © 2013 Elsevier Inc. All rights reserved.

  1. Elimination of fecal coliforms and F-specific RNA coliphage from oysters (Crassostrea virginica) relaid in floating containers.

    PubMed

    Kator, H; Rhodes, M

    2001-06-01

    Declining oyster (Crassostrea virginica) production in the Chesapeake Bay has stimulated aquaculture based on floats for off-bottom culture. While advantages of off-bottom culture are significant, the increased use of floating containers raises public health and microbiological concerns, because oysters in floats may be more susceptible to fecal contamination from storm runoff compared to those cultured on-bottom. We conducted four commercial-scale studies with market-size oysters naturally contaminated with fecal coliforms (FC) and a candidate viral indicator, F-specific RNA (FRNA) coliphage. To facilitate sampling and to test for location effects, 12 replicate subsamples, each consisting of 15 to 20 randomly selected oysters in plastic mesh bags, were placed at four characteristic locations within a 0.6- by 3.0-m "Taylor" float, and the remaining oysters were added to a depth not exceeding 15.2 cm. The float containing approximately 3,000 oysters was relaid in the York River, Virginia, for 14 days. During relay, increases in shellfish FC densities followed rain events such that final mean levels exceeded initial levels or did not meet an arbitrary product end point of 50 FC/100 ml. FRNA coliphage densities decreased to undetectable levels within 14 days (16 to 28 degrees C) in all but the last experiment, when temperatures fell between 12 and 16 degrees C. Friedman (nonparametric analysis of variance) tests performed on FC/Escherichia coli and FRNA densities indicated no differences in counts as a function of location within the float. The public health consequences of these observations are discussed, and future research and educational needs are identified.

  2. Measuring Arithmetic: A Psychometric Approach to Understanding Formatting Effects and Domain Specificity

    ERIC Educational Resources Information Center

    Rhodes, Katherine T.; Branum-Martin, Lee; Washington, Julie A.; Fuchs, Lynn S.

    2017-01-01

    Using multitrait, multimethod data, and confirmatory factor analysis, the current study examined the effects of arithmetic item formatting and the possibility that across formats, abilities other than arithmetic may contribute to children's answers. Measurement hypotheses were guided by several leading theories of arithmetic cognition. With a…

  3. Personal Experience and Arithmetic Meaning in Semantic Dementia

    ERIC Educational Resources Information Center

    Julien, Camille L.; Neary, David; Snowden, Julie S.

    2010-01-01

    Arithmetic skills are generally claimed to be preserved in semantic dementia (SD), suggesting functional independence of arithmetic knowledge from other aspects of semantic memory. However, in a recent case series analysis we showed that arithmetic performance in SD is not entirely normal. The finding of a direct association between severity of…

  4. Characteristics of a Single Float Seaplane During Take-off

    NASA Technical Reports Server (NTRS)

    Crowley, J W, Jr; Ronan, K M

    1925-01-01

    At the request of the Bureau of Aeronautics, Navy Department, the National Advisory Committee for Aeronautics at Langley Field is investigating the get-away characteristics of an N-9H, a DT-2, and an F-5L, as representing, respectively, a single float, a double float, and a boat type of seaplane. This report covers the investigation conducted on the N-9H. The results show that a single float seaplane trims aft in taking off. Until a planing condition is reached the angle of attack is about 15 degrees and is only slightly affected by controls. When planing it seeks a lower angle, but is controllable through a widening range, until at the take-off it is possible to obtain angles of 8 degrees to 15 degrees with corresponding speeds of 53 to 41 M. P. H., or about 40 per cent of the speed range. The point of greatest resistance occurs at about the highest angle, a pontoon planing angle of 9 1/2 degrees, and at a water speed of 24 M. P. H.

  5. Analysis of Static Spacecraft Floating Potential at Low Earth Orbit (LEO)

    NASA Technical Reports Server (NTRS)

    Herr, Joel L.; Hwang, K. S.; Wu, S. T.

    1995-01-01

    Spacecraft floating potential is the charge on the external surfaces of an orbiting spacecraft relative to the surrounding space plasma. Charging is caused by unequal negative and positive currents to spacecraft surfaces. The charging process continues until the accelerated particles can be collected rapidly enough to balance the currents, at which point the spacecraft has reached its equilibrium, or floating, potential. In low-inclination Low Earth Orbit (LEO), the collected positive-ion and negative-electron currents in a particular direction are typically not equal. The level of charging required for equilibrium to be established is influenced by the characteristics of the ambient plasma environment, by the spacecraft motion, and by the geometry of the spacecraft. Using kinetic theory, a statistical approach for studying the interaction is developed. The approach used to study the spacecraft floating potential depends on which phenomena are being considered and on the properties of the plasma, especially the density and temperature. The results from the kinetic theory derivation are applied to determine the charging level and the electric potential distribution at an infinite flat plate perpendicular to a streaming plasma using a finite-difference scheme.
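    The current-balance idea in this abstract can be made concrete with a drastically simplified 0-D model: equate the ion ram flux to the Boltzmann-retarded electron thermal flux and solve for the surface potential. The planar geometry, Maxwellian electrons, and all numerical values below are illustrative assumptions, not results from the paper.

```python
import math

# Toy 0-D floating-potential estimate for LEO: the surface charges
# negative until the retarded electron flux equals the ion ram flux.
# (Planar surface, Maxwellian electrons, cold ram ions; density
# cancels out of the balance.)
E_CHARGE = 1.602e-19   # C
M_E      = 9.109e-31   # kg

def floating_potential(te_ev, v_ram):
    """Potential (V) where ion ram flux balances electron thermal flux."""
    kte = te_ev * E_CHARGE                             # J
    v_e_bar = math.sqrt(8.0 * kte / (math.pi * M_E))   # mean electron speed
    def net(phi):  # electron flux minus ion flux at surface potential phi
        return (v_e_bar / 4.0) * math.exp(phi / te_ev) - v_ram
    lo, hi = -10.0, 0.0                                # bracket, volts
    for _ in range(100):                               # bisection
        mid = 0.5 * (lo + hi)
        if net(mid) > 0.0:
            hi = mid                                   # still too much e- flux
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Assumed LEO-like numbers: Te ~ 0.1 eV, orbital (ram) speed ~ 7.8 km/s.
phi_f = floating_potential(te_ev=0.1, v_ram=7800.0)    # a few tenths of a volt negative
```

    Because the mean electron speed far exceeds the orbital speed, the balance always lands at a modestly negative potential, consistent with the qualitative picture in the abstract.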

  6. Current-Voltage and Floating-Potential characteristics of cylindrical emissive probes from a full-kinetic model based on the orbital motion theory

    NASA Astrophysics Data System (ADS)

    Chen, Xin; Sánchez-Arriaga, Gonzalo

    2018-02-01

    To model the sheath structure around an emissive probe with cylindrical geometry, the Orbital-Motion theory takes advantage of three conserved quantities (distribution function, transverse energy, and angular momentum) to transform the stationary Vlasov-Poisson system into a single integro-differential equation. For a stationary collisionless unmagnetized plasma, this equation describes self-consistently the probe characteristics. By solving such an equation numerically, parametric analyses for the current-voltage (IV) and floating-potential (FP) characteristics can be performed, which show that: (a) for strong emission, the space-charge effects increase with probe radius; (b) the probe can float at a positive potential relative to the plasma; (c) a smaller probe radius is preferred for the FP method to determine the plasma potential; (d) the work function of the emitting material and the plasma-ion properties do not influence the reliability of the floating-potential method. Analytical analysis demonstrates that the inflection point of an IV curve for non-emitting probes occurs at the plasma potential. The flat potential is not a self-consistent solution for emissive probes.
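    The stated property — that the inflection point of a non-emitting probe's IV curve sits at the plasma potential — can be checked numerically on an idealized characteristic. The tanh curve below is a toy stand-in with a known inflection at V = Vp; it is not the kinetic model solved in the paper.

```python
import math

# Locate the inflection point of a synthetic probe IV curve by finding
# where the numerical first derivative is maximal (equivalently, where
# the second derivative changes sign). The tanh shape is a toy model
# with its inflection built in at V = VP.
VP = 2.5   # "plasma potential" of the toy curve, volts
TE = 1.0   # temperature-like scale of the toy curve, volts

def current(v):
    return math.tanh((v - VP) / (2.0 * TE))

def inflection(v_lo, v_hi, n=20001):
    h = (v_hi - v_lo) / (n - 1)
    best_v, best_slope = None, -1.0
    for i in range(n):
        v = v_lo + i * h
        slope = (current(v + h) - current(v - h)) / (2.0 * h)  # central difference
        if slope > best_slope:
            best_v, best_slope = v, slope
    return best_v

v_infl = inflection(-5.0, 10.0)   # recovers the built-in VP
```

    On noisy experimental IV data the same idea requires smoothing before differentiation, which is one reason the floating-potential method with a small emitting probe is attractive in practice.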

  7. Gulp: An Imaginatively Different Approach to Learning about Water.

    ERIC Educational Resources Information Center

    Baird, Colette

    1997-01-01

    Provides details of performances by the Floating Point Science Theater working with elementary school children about the characteristics of water. Discusses student reactions to various parts of the performances. (DDR)

  8. 36 CFR 327.3 - Vessels.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ..., community or corporate docks, or at any fixed or permanent mooring point, may only be used for overnight... floating or stationary mooring facilities on, adjacent to, or interfering with a buoy, channel marker or...

  9. Software For Tie-Point Registration Of SAR Data

    NASA Technical Reports Server (NTRS)

    Rignot, Eric; Dubois, Pascale; Okonek, Sharon; Van Zyl, Jacob; Burnette, Fred; Borgeaud, Maurice

    1995-01-01

    The SAR-REG software package registers synthetic-aperture-radar (SAR) image data to a common reference frame based on manual tie-pointing. Image data can be in binary, integer, floating-point, or AIRSAR compressed format. Other data sets, such as a map of soil characteristics, a vegetation map, a digital elevation map, or a SPOT multispectral image, can also be registered, as long as the user can generate a binary image to be used by the tie-pointing routine and the data are available in one of the previously mentioned formats. Written in FORTRAN 77.
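    Tie-point registration typically fits a low-order geometric transform to the manually picked point pairs; the sketch below fits a 6-parameter affine mapping by least squares in pure Python. This is an illustration of the general technique, not a reproduction of SAR-REG's FORTRAN 77 internals.

```python
# Estimate an affine mapping (x', y') = (a x + b y + c, d x + e y + f)
# from tie-point pairs, the kind of transform tie-point registration
# packages fit before resampling one image onto another's grid.

def solve3(m, b):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    a = [row[:] + [bi] for row, bi in zip(m, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        for r in range(col + 1, 3):
            f = a[r][col] / a[col][col]
            for c in range(col, 4):
                a[r][c] -= f * a[col][c]
    x = [0.0] * 3
    for r in (2, 1, 0):  # back substitution
        x[r] = (a[r][3] - sum(a[r][c] * x[c] for c in range(r + 1, 3))) / a[r][r]
    return x

def fit_affine(src, dst):
    """Least-squares affine fit; x' and y' decouple into two 3x3 normal systems."""
    n = [[0.0] * 3 for _ in range(3)]
    rx, ry = [0.0] * 3, [0.0] * 3
    for (x, y), (xp, yp) in zip(src, dst):
        row = (x, y, 1.0)
        for i in range(3):
            for j in range(3):
                n[i][j] += row[i] * row[j]
            rx[i] += row[i] * xp
            ry[i] += row[i] * yp
    return solve3(n, rx), solve3(n, ry)

# Tie points related by an exact affine map: x' = 2x + 10, y' = 0.5x + y - 3.
src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
dst = [(2 * x + 10, 0.5 * x + y - 3) for x, y in src]
coeffs_x, coeffs_y = fit_affine(src, dst)
```

    With more than three tie points the fit is overdetermined, and the residuals give the operator a direct check on badly picked points.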

  10. [Post-stroke shoulder-hand syndrome treated with floating-needle therapy combined with rehabilitation training: a randomized controlled trial].

    PubMed

    Zhou, Zhao-Hui; Zhuang, Li-Xing; Chen, Zhen-Hu; Lang, Jian-Ying; Li, Yan-Hui; Jiang, Gang-Hui; Xu, Zhan-Qiong; Liao, Mu-Xi

    2014-07-01

    To compare the clinical efficacy in the treatment of post-stroke shoulder-hand syndrome between floating-needle therapy and conventional acupuncture on the basis of rehabilitation training. One hundred cases of post-stroke shoulder-hand syndrome were randomized into a floating-needle group and an acupuncture group, 50 cases in each one. Passive and active rehabilitation training was adopted in the two groups. Additionally, in the floating-needle group, the floating-needle therapy was used. The needle was inserted at the site 5 to 10 cm away from the myofascial trigger point (MTrP), manipulated and scattered subcutaneously, for 2 min continuously. In the acupuncture group, the conventional acupuncture was applied at Jianqian (EX-UE), Jianyu (LI 15), Jianliao (TE 14), etc. The treatment was given once every two days, 3 times a week, and 14 days of treatment were required. The shoulder-hand syndrome scale (SHSS), the short-form McGill pain questionnaire (SF-MPQ) and the modified Fugl-Meyer motor function scale (FMA) were used to evaluate the damage severity, pain and motor function of the upper limbs before and after treatment in the two groups. The clinical efficacy was compared between the two groups. SHSS score, SF-MPQ score and FMA score were improved significantly after treatment in the two groups (all P < 0.01), and the improvements in the floating-needle group were superior to those in the acupuncture group (all P < 0.05). The total effective rate was 94.0% (47/50) in the floating-needle group, which was better than 90.0% (45/50) in the acupuncture group (P < 0.05). The floating-needle therapy combined with rehabilitation training achieves a satisfactory efficacy on post-stroke shoulder-hand syndrome, which is better than the combined therapy of conventional acupuncture and rehabilitation training.

  11. Investigating the potential of floating mires as record of palaeoenvironmental changes

    NASA Astrophysics Data System (ADS)

    Zaccone, C.; Adamo, P.; Giordano, S.; Miano, T. M.

    2012-04-01

    Peat-forming floating mires could provide an exceptional resource for palaeoenvironmental and environmental monitoring studies, as much of their own history, as well as the history of their surroundings, is recorded in their peat deposits. In his Naturalis historia (AD 77-79), Pliny the Elder described floating islands on Lake Vadimonis (now Posta Fibreno Lake, Italy). A small floating island (ca. 35 m in diameter and 3 m of submerged thickness) still occurs on this calcareous lake fed by karstic springs at the base of the Apennine Mountains. Here the southernmost Italian populations of Sphagnum palustre occur on the small surface of this floating mire known as "La Rota", i.e., a cup-formed core of Sphagnum peat and rhizomes of helophytes, erratically floating on the water body of a submerged doline annexed to the easternmost edge of the lake, which is characterised by the extension of a large reed bed. Geological evidence points to the existence in the area of a large lacustrine basin since the Late Pleistocene. The progressive filling of the lake, caused by changes in climatic conditions and neotectonic events, brought about the formation of peat deposits in the area, following different depositional cycles in a swampy environment. Then a round-shaped portion of fen, originated around the lake margins in waterlogged areas, was somehow isolated from the bank and started to float.
By coupling data on concentrations and fluxes of several major and trace elements of different origin (i.e., dust particles, volcanic emissions, cosmogenic dusts and marine aerosols) with climate records (plant micro- and macrofossils, pollens, isotopic ratios), biomolecular records (e.g., lipids), detailed age-depth modelling (i.e., 210Pb, 137Cs, 14C), and humification indexes, the present work aims to assess the reliability of this particular "archive" and to identify possible relationships between biogeochemical processes occurring in this floating bog and environmental changes.

  12. [New determinations of the eye rotation center and criteria for the formation of its membrane in terms of the floating eye model and experimental support of the latter].

    PubMed

    Galoian, V R

    1988-01-01

    It is well known that the eye is a phylogenetically stabilized body with rotation properties. The eye has an elastic cover and is filled with uniform fluid. According to the theory of shells and other concepts on the configuration of a rotating fluid mass, we concluded that the eyeball has an elliptic configuration. A classification of the eyeball is presented here, together with studies of the principles governing the position of the eye. The parallelism between this state and different types of heterophory and orthophory was studied. To determine the normal configuration it is necessary to keep in mind some principles for achieving the advisable correct position of the eye in the orbit. We determined the centre of eye rotation and showed that it cannot be situated outside the geometrical centre of the eyeball. It was pointed out that for adequate perception the rotation centre must be situated on the visual axis. Using the well-known theory of floating, we determined experimentally that the centre of eye rotation lies at the level of the floating eye, at the point where the visual line crosses the optical axis. It was shown experimentally, on the basis of recording eye movements during eyelid closing, that the weakening of the eye movements is of a gravitational pattern and proceeds under the action of stability forces, which directly indicates the floating state of the eye. For the first time, using the model of the floating eye, it was possible to show the formation of an extraocular vacuum by straining of the back wall. This effect can be obtained without any difficulty if the face is turned down. The role of negative pressure in the formation of eye ametropia, as well as new conclusions and predictions from this model, are discussed.

  13. Functional outcomes of "floating elbow" injuries in adult patients.

    PubMed

    Yokoyama, K; Itoman, M; Kobayashi, A; Shindo, M; Futami, T

    1998-05-01

    To assess elbow function, complications, and problems of floating elbow fractures in adults receiving surgical treatment. Retrospective clinical review. Level I trauma center in Kanagawa, Japan. Fourteen patients with fifteen floating elbow injuries, excluding one immediate amputation, seen at the Kitasato University Hospital from January 1, 1984, to April 30, 1995. All fractures were managed surgically by various methods. In ten cases, the humeral and forearm fractures were treated simultaneously with immediate fixation. In three cases, both the humeral and forearm fractures were treated with delayed fixation on Day 1, 4, or 7. In the remaining two cases, the open forearm fracture was managed with immediate fixation and the humerus fracture with delayed fixation on Day 10 or 25. All subjects underwent standardized elbow evaluations, and results were compared with an elbow score based on a 100-point scale. The parameters evaluated were pain, motion, elbow and grip strength, and function during daily activities. Complications such as infections, nonunions, malunions, and refractures were investigated. Mean follow-up was forty-three months (range 13 to 112 months). At final follow-up, the mean elbow function score was 79 points, with 67 percent (ten of fifteen) of the subjects having good or excellent results. The functional outcome did not correlate with the Injury Severity Score of the individual patients, the existence of open injuries or neurovascular injuries, or the timing of surgery. Complications comprised one deep infection, two nonunions of the humerus, two nonunions of the forearm, one varus deformity of the humerus, and one forearm refracture. Based on the present data, we could not clarify the factors influencing the final functional outcome after floating elbow injury. These injuries, however, potentially have many complications, such as infection or nonunion, especially when there is associated brachial plexus injury. We consider floating elbow injuries to be severe injuries requiring surgical stabilization; beyond that, no specific form of surgical treatment reliably guarantees excellent results.

  14. 30 CFR 250.907 - Where must I locate foundation boreholes?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... soil boring must not exceed 500 feet. (b) For deepwater floating platforms which utilize catenary or..., other points throughout the anchor pattern to establish the soil profile suitable for foundation design...

  15. 30 CFR 250.907 - Where must I locate foundation boreholes?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... soil boring must not exceed 500 feet. (b) For deepwater floating platforms which utilize catenary or..., other points throughout the anchor pattern to establish the soil profile suitable for foundation design...

  16. Biliary atresia

    MedlinePlus

    ... weight normally for the first month. After that point, the baby will lose weight and become irritable, and will have worsening jaundice. Other symptoms may include: Dark urine Enlarged spleen Floating stools Foul-smelling stools Pale or clay-colored ...

  17. 30 CFR 250.907 - Where must I locate foundation boreholes?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... soil boring must not exceed 500 feet. (b) For deepwater floating platforms which utilize catenary or..., other points throughout the anchor pattern to establish the soil profile suitable for foundation design...

  18. Early but not late blindness leads to enhanced arithmetic and working memory abilities.

    PubMed

    Dormal, Valérie; Crollen, Virginie; Baumans, Christine; Lepore, Franco; Collignon, Olivier

    2016-10-01

    Behavioural and neurophysiological evidence suggests that vision plays an important role in the emergence and development of arithmetic abilities. However, how visual deprivation impacts the development of arithmetic processing remains poorly understood. We compared the performance of early blind (EB), late blind (LB) and sighted control (SC) individuals during various arithmetic tasks involving addition, subtraction and multiplication of varying complexity. We also assessed working memory (WM) performance to determine whether it relates to a blind person's arithmetic capacities. Results showed that EB participants performed better than LB and SC participants in arithmetic tasks, especially in conditions in which verbal routines and WM abilities are needed. Moreover, EB participants also showed higher WM abilities. Together, our findings demonstrate that the absence of developmental vision does not prevent the development of refined arithmetic skills and can even trigger the refinement of these abilities in specific tasks. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. The cognitive foundations of early arithmetic skills: It is counting and number judgment, but not finger gnosis, that count.

    PubMed

    Long, Imogen; Malone, Stephanie A; Tolan, Anne; Burgoyne, Kelly; Heron-Delaney, Michelle; Witteveen, Kate; Hulme, Charles

    2016-12-01

    Following on from ideas developed by Gerstmann, a body of work has suggested that impairments in finger gnosis may be causally related to children's difficulties in learning arithmetic. We report a study with a large sample of typically developing children (N=197) in which we assessed finger gnosis and arithmetic along with a range of other relevant cognitive predictors of arithmetic skills (vocabulary, counting, and symbolic and nonsymbolic magnitude judgments). Contrary to some earlier claims, we found no meaningful association between finger gnosis and arithmetic skills. Counting and symbolic magnitude comparison were, however, powerful predictors of arithmetic skills, replicating a number of earlier findings. Our findings seriously question theories that posit either a simple association or a causal connection between finger gnosis and the development of arithmetic skills. Crown Copyright © 2016. Published by Elsevier Inc. All rights reserved.

  20. Rapid Design of Gravity Assist Trajectories

    NASA Technical Reports Server (NTRS)

    Carrico, J.; Hooper, H. L.; Roszman, L.; Gramling, C.

    1991-01-01

    Several International Solar Terrestrial Physics (ISTP) missions require the design of complex gravity-assisted trajectories in order to investigate the interaction of the solar wind with the Earth's magnetic field. These trajectories present a formidable trajectory design and optimization problem. The philosophy and methodology that enable an analyst to design and analyze such trajectories are discussed. The so-called 'floating end point' targeting, which allows the inherently nonlinear multiple-body problem to be solved with simple linear techniques, is described. The combination of floating end point targeting, analytic approximations, and a Newton method targeter is demonstrated to achieve trajectory design goals quickly, even for the very sensitive double lunar swingby trajectories used by the ISTP missions. A multiconic orbit integration scheme allows fast and accurate orbit propagation. A prototype software tool, Swingby, built for trajectory design and launch window analysis, is described.
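    A Newton-method targeter of the kind described above ultimately reduces to driving a "miss distance" at the end point to zero by differencing trajectory runs. As a hedged illustration only, the propagator below is a hypothetical one-dimensional stand-in (not Swingby's multiconic propagation), but the core iteration is the same:

```python
# Minimal sketch of a Newton-method targeter. `miss_distance` is a
# hypothetical stand-in for a full trajectory propagation; any smooth
# nonlinear function of the control variable illustrates the scheme.

def miss_distance(v0):
    # Stand-in "propagate and measure miss" function (root near v0 ~ 1.328).
    return v0**3 + 2.0 * v0 - 5.0

def newton_target(f, x, tol=1e-12, h=1e-7, max_iter=50):
    """Drive f(x) to zero with finite-difference Newton steps."""
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        dfdx = (f(x + h) - fx) / h   # numerical sensitivity, obtained by
        x -= fx / dfdx               # differencing two propagations
    return x

v0 = newton_target(miss_distance, 1.0)
print(v0, miss_distance(v0))  # converged control, near-zero miss distance
```

The finite-difference sensitivity is the linear technique referred to in the abstract: each iteration treats the nonlinear propagation as locally linear in the control variable.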

  1. Implementation of the DAST ARW II control laws using an 8086 microprocessor and an 8087 floating-point coprocessor. [drones for aeroelasticity research

    NASA Technical Reports Server (NTRS)

    Kelly, G. L.; Berthold, G.; Abbott, L.

    1982-01-01

    A 5 MHz single-board microprocessor system which incorporates an 8086 CPU and an 8087 Numeric Data Processor is used to implement the control laws for the NASA Drones for Aerodynamic and Structural Testing, Aeroelastic Research Wing II. The control laws program was executed in 7.02 msec, with initialization consuming 2.65 msec and the control law loop 4.38 msec. The software emulator execution times for these two tasks were 36.67 and 61.18 msec, respectively, for a total of 97.68 msec. The space, weight, and cost reductions achieved in the present aircraft control application of this combination of a 16-bit microprocessor with an 80-bit floating-point coprocessor may be obtainable in other real-time control applications.

  2. Implementation of kernels on the Maestro processor

    NASA Astrophysics Data System (ADS)

    Suh, Jinwoo; Kang, D. I. D.; Crago, S. P.

    Currently, most microprocessors use multiple cores to increase performance while limiting power usage. Some processors use not just a few cores, but tens of cores or even 100 cores. One such many-core microprocessor is the Maestro processor, which is based on Tilera's TILE64 processor. The Maestro chip is a 49-core, general-purpose, radiation-hardened processor designed for space applications. The Maestro processor, unlike the TILE64, has a floating point unit (FPU) in each core for improved floating point performance. The Maestro processor runs at 342 MHz clock frequency. On the Maestro processor, we implemented several widely used kernels: matrix multiplication, vector add, FIR filter, and FFT. We measured and analyzed the performance of these kernels. The achieved performance was up to 5.7 GFLOPS, and the speedup compared to single tile was up to 49 using 49 tiles.
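    Figures like those reported (GFLOPS, speedup, tile count) relate through standard back-of-envelope formulas. The sketch below is illustrative only, not the Maestro benchmark code:

```python
# How kernel performance numbers are conventionally derived:
# flop counts, achieved GFLOPS, speedup, and parallel efficiency.

def matmul_flops(n):
    # A dense n x n matrix multiply does n^2 dot products of length n:
    # n^3 multiplies plus n^3 adds = 2*n^3 flops.
    return 2 * n**3

def gflops(flops, seconds):
    return flops / seconds / 1e9

def speedup(t_serial, t_parallel):
    return t_serial / t_parallel

def efficiency(sp, n_tiles):
    return sp / n_tiles

# The reported speedup of 49 on 49 tiles corresponds to 100% efficiency:
print(efficiency(speedup(49.0, 1.0), 49))
print(matmul_flops(1000))  # flop count for a 1000 x 1000 multiply
```

A speedup equal to the tile count indicates the kernels scale with essentially no parallel overhead at that problem size.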

  3. Parallel processor for real-time structural control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tise, B.L.

    1992-01-01

    A parallel processor that is optimized for real-time linear control has been developed. This modular system consists of A/D modules, D/A modules, and floating-point processor modules. The scalable processor uses up to 1,000 Motorola DSP96002 floating-point processors for a peak computational rate of 60 GFLOPS. Sampling rates up to 625 kHz are supported by this analog-in to analog-out controller. The high processing rate and parallel architecture make this processor suitable for computing state-space equations and other multiply/accumulate-intensive digital filters. Processor features include 14-bit conversion devices, low input-output latency, a 240 Mbyte/s synchronous backplane bus, a low-skew clock distribution circuit, a VME connection to the host computer, a parallelizing code generator, and look-up tables for actuator linearization. This processor was designed primarily for experiments in structural control. The A/D modules sample sensors mounted on the structure, and the floating-point processor modules compute the outputs using the programmed control equations. The outputs are sent through the D/A module to the power amps used to drive the structure's actuators. The host computer is a Sun workstation. An OpenWindows-based control panel is provided to facilitate data transfer to and from the processor, as well as to control the operating mode of the processor. A diagnostic mode is provided to allow stimulation of the structure and acquisition of the structural response via sensor inputs.
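    The state-space equations such a controller evaluates each sample period follow the standard discrete-time form x[k+1] = A x[k] + B u[k], y[k] = C x[k]. A minimal Python sketch of that multiply/accumulate loop, using hypothetical toy matrices rather than any actual controller gains, looks like this:

```python
# One sample of a discrete state-space controller:
#   x[k+1] = A x[k] + B u[k],   y[k] = C x[k]
# The inner sums are the multiply/accumulate work the text refers to.

def state_space_step(A, B, C, x, u):
    """Advance the controller one sample; returns (x_next, y)."""
    n = len(x)
    x_next = [sum(A[i][j] * x[j] for j in range(n)) +
              sum(B[i][j] * u[j] for j in range(len(u)))
              for i in range(n)]
    y = [sum(C[i][j] * x[j] for j in range(n)) for i in range(len(C))]
    return x_next, y

# Hypothetical 2-state, 1-input, 1-output system.
A = [[0.9, 0.1], [0.0, 0.8]]
B = [[0.0], [1.0]]
C = [[1.0, 0.0]]

x = [0.0, 0.0]
for _ in range(3):              # A/D sample -> compute -> D/A output
    x, y = state_space_step(A, B, C, x, [1.0])
print(x, y)
```

In the real system this loop runs at up to 625 kHz, with the sums parallelized across the floating-point processor modules.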

  4. [Acquisition of arithmetic knowledge].

    PubMed

    Fayol, Michel

    2008-01-01

    The focus of this paper is on contemporary research on the counting and arithmetic competencies that emerge during infancy, the preschool years, and elementary school. I provide a brief overview of the evolution of children's conceptual knowledge of arithmetic, their acquisition and use of counting, and how they solve simple arithmetic problems (e.g. 4 + 3).

  5. The Development of Arithmetic Principle Knowledge: How Do We Know What Learners Know?

    ERIC Educational Resources Information Center

    Prather, Richard W.; Alibali, Martha W.

    2009-01-01

    This paper reviews research on learners' knowledge of three arithmetic principles: "Commutativity", "Relation to Operands", and "Inversion." Studies of arithmetic principle knowledge vary along several dimensions, including the age of the participants, the context in which the arithmetic is presented, and most importantly, the type of knowledge…

  6. How to interpret cognitive training studies: A reply to Lindskog & Winman

    PubMed Central

    Park, Joonkoo; Brannon, Elizabeth M.

    2017-01-01

    In our previous studies, we demonstrated that repeated training on an approximate arithmetic task selectively improves symbolic arithmetic performance (Park & Brannon, 2013, 2014). We proposed that mental manipulation of quantity is the common cognitive component between approximate arithmetic and symbolic arithmetic, driving the causal relationship between the two. In a commentary to our work, Lindskog and Winman argue that there is no evidence of performance improvement during approximate arithmetic training and that this challenges the proposed causal relationship between approximate arithmetic and symbolic arithmetic. Here, we argue that causality in cognitive training experiments is interpreted from the selectivity of transfer effects and does not hinge upon improved performance in the training task. This is because changes in the unobservable cognitive elements underlying the transfer effect may not be observable from performance measures in the training task. We also question the validity of Lindskog and Winman’s simulation approach for testing for a training effect, given that simulations require a valid and sufficient model of a decision process, which is often difficult to achieve. Finally we provide an empirical approach to testing the training effects in adaptive training. Our analysis reveals new evidence that approximate arithmetic performance improved over the course of training in Park and Brannon (2014). We maintain that our data supports the conclusion that approximate arithmetic training leads to improvement in symbolic arithmetic driven by the common cognitive component of mental quantity manipulation. PMID:26972469

  7. The neural correlates of mental arithmetic in adolescents: a longitudinal fNIRS study.

    PubMed

    Artemenko, Christina; Soltanlou, Mojtaba; Ehlis, Ann-Christine; Nuerk, Hans-Christoph; Dresler, Thomas

    2018-03-10

    Arithmetic processing in adults is known to rely on a frontal-parietal network. However, neurocognitive research focusing on the neural and behavioral correlates of arithmetic development has been scarce, even though the acquisition of arithmetic skills is accompanied by changes within the fronto-parietal network of the developing brain. Furthermore, experimental procedures are typically adjusted to constraints of functional magnetic resonance imaging, which may not reflect natural settings in which children and adolescents actually perform arithmetic. Therefore, we investigated the longitudinal neurocognitive development of processes involved in performing the four basic arithmetic operations in 19 adolescents. By using functional near-infrared spectroscopy, we were able to use an ecologically valid task, i.e., a written production paradigm. A common pattern of activation in the bilateral fronto-parietal network for arithmetic processing was found for all basic arithmetic operations. Moreover, evidence was obtained for decreasing activation during subtraction over the course of 1 year in middle and inferior frontal gyri, and increased activation during addition and multiplication in angular and middle temporal gyri. In the self-paced block design, parietal activation in multiplication and left angular and temporal activation in addition were observed to be higher for simple than for complex blocks, reflecting an inverse effect of arithmetic complexity. In general, the findings suggest that the brain network for arithmetic processing is already established in 12-14 year-old adolescents, but still undergoes developmental changes.

  8. VIEW OF FACILITY NO. S 20 NEAR THE POINT WHERE ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    VIEW OF FACILITY NO. S 20 NEAR THE POINT WHERE IT JOINS FACILITY NO. S 21. NOTE THE ASPHALT-FILLED NARROW-GAUGE TRACKWAY WITH SOME AREAS OF STEEL TRACK SHOWING. VIEW FACING NORTHEAST - U.S. Naval Base, Pearl Harbor, Floating Dry Dock Quay, Hurt Avenue at northwest side of Magazine Loch, Pearl City, Honolulu County, HI

  9. 40 CFR 435.14 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS OIL AND GAS EXTRACTION POINT SOURCE CATEGORY Offshore... 40 CFR 125.30-32, any existing point source subject to this subpart must achieve the following... Minimum of 1 mg/l and maintained as close to this concentration as possible. Sanitary M91M Floating solids...

  10. 33 CFR 183.558 - Hoses and connections.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...: (A) The hose is severed at the point where maximum drainage of fuel would occur, (B) The boat is in its static floating position, and (C) The fuel system is filled to the capacity marked on the tank... minutes when: (A) The hose is severed at the point where maximum drainage of fuel would occur, (B) The...

  11. Approximate Arithmetic Training Improves Informal Math Performance in Low Achieving Preschoolers

    PubMed Central

    Szkudlarek, Emily; Brannon, Elizabeth M.

    2018-01-01

    Recent studies suggest that practice with approximate and non-symbolic arithmetic problems improves the math performance of adults, school aged children, and preschoolers. However, the relative effectiveness of approximate arithmetic training compared to available educational games, and the type of math skills that approximate arithmetic targets are unknown. The present study was designed to (1) compare the effectiveness of approximate arithmetic training to two commercially available numeral and letter identification tablet applications and (2) to examine the specific type of math skills that benefit from approximate arithmetic training. Preschool children (n = 158) were pseudo-randomly assigned to one of three conditions: approximate arithmetic, letter identification, or numeral identification. All children were trained for 10 short sessions and given pre and post tests of informal and formal math, executive function, short term memory, vocabulary, alphabet knowledge, and number word knowledge. We found a significant interaction between initial math performance and training condition, such that children with low pretest math performance benefited from approximate arithmetic training, and children with high pretest math performance benefited from symbol identification training. This effect was restricted to informal, and not formal, math problems. There were also effects of gender, socio-economic status, and age on post-test informal math score after intervention. A median split on pretest math ability indicated that children in the low half of math scores in the approximate arithmetic training condition performed significantly better than children in the letter identification training condition on post-test informal math problems when controlling for pretest, age, gender, and socio-economic status. 
Our results support the conclusion that approximate arithmetic training may be especially effective for children with low math skills, and that approximate arithmetic training improves early informal, but not formal, math skills. PMID:29867624

  12. Approximate Arithmetic Training Improves Informal Math Performance in Low Achieving Preschoolers.

    PubMed

    Szkudlarek, Emily; Brannon, Elizabeth M

    2018-01-01

    Recent studies suggest that practice with approximate and non-symbolic arithmetic problems improves the math performance of adults, school aged children, and preschoolers. However, the relative effectiveness of approximate arithmetic training compared to available educational games, and the type of math skills that approximate arithmetic targets are unknown. The present study was designed to (1) compare the effectiveness of approximate arithmetic training to two commercially available numeral and letter identification tablet applications and (2) to examine the specific type of math skills that benefit from approximate arithmetic training. Preschool children ( n = 158) were pseudo-randomly assigned to one of three conditions: approximate arithmetic, letter identification, or numeral identification. All children were trained for 10 short sessions and given pre and post tests of informal and formal math, executive function, short term memory, vocabulary, alphabet knowledge, and number word knowledge. We found a significant interaction between initial math performance and training condition, such that children with low pretest math performance benefited from approximate arithmetic training, and children with high pretest math performance benefited from symbol identification training. This effect was restricted to informal, and not formal, math problems. There were also effects of gender, socio-economic status, and age on post-test informal math score after intervention. A median split on pretest math ability indicated that children in the low half of math scores in the approximate arithmetic training condition performed significantly better than children in the letter identification training condition on post-test informal math problems when controlling for pretest, age, gender, and socio-economic status. 
Our results support the conclusion that approximate arithmetic training may be especially effective for children with low math skills, and that approximate arithmetic training improves early informal, but not formal, math skills.

  13. An Arithmetic-Algebraic Work Space for the Promotion of Arithmetic and Algebraic Thinking: Triangular Numbers

    ERIC Educational Resources Information Center

    Hitt, Fernando; Saboya, Mireille; Cortés Zavala, Carlos

    2016-01-01

    This paper presents an experiment that attempts to mobilise an arithmetic-algebraic way of thinking in order to articulate between arithmetic thinking and the early algebraic thinking, which is considered a prelude to algebraic thinking. In the process of building this latter way of thinking, researchers analysed pupils' spontaneous production…

  14. Non-symbolic arithmetic in adults and young children.

    PubMed

    Barth, Hilary; La Mont, Kristen; Lipton, Jennifer; Dehaene, Stanislas; Kanwisher, Nancy; Spelke, Elizabeth

    2006-01-01

    Five experiments investigated whether adults and preschool children can perform simple arithmetic calculations on non-symbolic numerosities. Previous research has demonstrated that human adults, human infants, and non-human animals can process numerical quantities through approximate representations of their magnitudes. Here we consider whether these non-symbolic numerical representations might serve as a building block of uniquely human, learned mathematics. Both adults and children with no training in arithmetic successfully performed approximate arithmetic on large sets of elements. Success at these tasks did not depend on non-numerical continuous quantities, modality-specific quantity information, the adoption of alternative non-arithmetic strategies, or learned symbolic arithmetic knowledge. Abstract numerical quantity representations therefore are computationally functional and may provide a foundation for formal mathematics.

  15. Rigorous high-precision enclosures of fixed points and their invariant manifolds

    NASA Astrophysics Data System (ADS)

    Wittig, Alexander N.

    The well-established concept of Taylor Models is introduced, which offer highly accurate C0 enclosures of functional dependencies, combining high-order polynomial approximation of functions and rigorous estimates of the truncation error, performed using verified arithmetic. The focus of this work is on the application of Taylor Models in algorithms for strongly non-linear dynamical systems. A method is proposed to extend the existing implementation of Taylor Models in COSY INFINITY from double precision coefficients to arbitrary precision coefficients. Great care is taken to maintain the highest efficiency possible by adaptively adjusting the precision of higher-order coefficients in the polynomial expansion. High precision operations are based on clever combinations of elementary floating point operations yielding exact values for round-off errors. An experimental high precision interval data type is developed and implemented. Algorithms for the verified computation of intrinsic functions based on the high precision interval data type are developed and described in detail. The application of these operations in the implementation of high precision Taylor Models is discussed. An application of Taylor Model methods to the verification of fixed points is presented by verifying the existence of a period-15 fixed point in a near-standard Hénon map. Verification is performed using different verified methods such as double precision Taylor Models, high precision intervals, and high precision Taylor Models. Results and performance of each method are compared. An automated rigorous fixed point finder is implemented, allowing the fully automated search for all fixed points of a function within a given domain. It returns a list of verified enclosures of each fixed point, optionally verifying uniqueness within these enclosures. An application of the fixed point finder to the rigorous analysis of beam transfer maps in accelerator physics is presented.
    Previous work done by Johannes Grote is extended to compute very accurate polynomial approximations to invariant manifolds of discrete maps of arbitrary dimension around hyperbolic fixed points. The algorithm presented allows for automatic removal of resonances occurring during construction. A method for the rigorous enclosure of invariant manifolds of continuous systems is introduced. Using methods developed for discrete maps, polynomial approximations of invariant manifolds of hyperbolic fixed points of ODEs are obtained. These approximations are outfitted with a sharp error bound which is verified to rigorously contain the manifolds. While we focus on the three-dimensional case, verification in higher dimensions is possible using similar techniques. Integrating the resulting enclosures using the verified COSY VI integrator, the initial manifold enclosures are expanded to yield sharp enclosures of large parts of the stable and unstable manifolds. To demonstrate the effectiveness of this method, we construct enclosures of the invariant manifolds of the Lorenz system and show pictures of the resulting manifold enclosures. To the best of our knowledge, these enclosures are the largest verified enclosures of manifolds in the Lorenz system in existence.
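    The "clever combinations of elementary floating point operations yielding exact values for round-off errors" mentioned in this abstract are known as error-free transformations. Knuth's TwoSum is the classic example; the sketch below illustrates the idea and is not claimed to be the exact routine used in COSY INFINITY:

```python
# Error-free transformation of addition (Knuth's TwoSum): for IEEE floats,
# returns (s, e) with s = fl(a + b) and a + b = s + e exactly, so the
# round-off error e is recovered as an exact floating point value.

def two_sum(a, b):
    s = a + b
    b_virtual = s - a
    a_virtual = s - b_virtual
    e = (a - a_virtual) + (b - b_virtual)
    return s, e

s, e = two_sum(1.0, 2.0**-60)
print(s, e)  # the tiny addend, lost in s, is recovered exactly in e
```

Chaining such transformations is what lets a high-precision value be represented as an unevaluated sum of several doubles, with every round-off error accounted for.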

  16. 33 CFR 149.625 - What are the design standards?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... elsewhere in this subpart (for example, single point moorings, hoses, and aids to navigation buoys), must be... components. (c) Heliports on floating deepwater ports must be designed in compliance with the regulations at...

  17. 33 CFR 329.6 - Interstate or foreign commerce.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... United States. Note, however, that the mere presence of floating logs will not of itself make the river... the future, or at a past point in time. (b) Nature of commerce: interstate and intrastate. Interstate...

  18. 30 CFR 250.907 - Where must I locate foundation boreholes?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... foundation pile to a soil boring must not exceed 500 feet. (b) For deepwater floating platforms which utilize... necessary, other points throughout the anchor pattern to establish the soil profile suitable for foundation...

  19. Self-Aware Computing

    DTIC Science & Technology

    2009-06-01

    to floating point, to multi-level logic. 2 Overview Self-aware computation can be distinguished from existing computational models which are...systems have advanced to the point that the time is ripe to realize such a system. To illustrate, let us examine each of the key aspects of self...servers for each service, there are no single points of failure in the system. If an OS or user core has a failure, one of several introspection cores

  20. Single crystal growth of 67%BiFeO3-33%BaTiO3 solution by the floating zone method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rong, Y.; Zheng, H.; Krogstad, M. J.

    The growth conditions and the resultant grain morphologies and phase purities from floating-zone growth of 67%BiFeO3-33%BaTiO3 (BF-33BT) single crystals are reported. We find two formidable challenges for the growth. First, a low-melting point constituent leads to a pre-melt zone in the feed-rod that adversely affects growth stability. Second, constitutional super-cooling (CSC), which was found to lead to dendritic and columnar features in the grain morphology, necessitates slow traveling rates during growth. Both challenges were addressed by modifications to the floating-zone furnace that steepened the temperature gradient at the melt-solid interfaces. Slow growth was also required to counter the effects of CSC. Single crystals with typical dimensions of hundreds of microns have been obtained which possess high quality and are suitable for detailed structural studies.

  1. Rear surface effects in high efficiency silicon solar cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wenham, S.R.; Robinson, S.J.; Dai, X.

    1994-12-31

    Rear surface effects in PERL solar cells can lead not only to degradation in the short circuit current and open circuit voltage, but also in the fill factor. Three mechanisms capable of changing the effective rear surface recombination velocity with injection level are identified, two associated with oxidized p-type surfaces, and the third with two-dimensional effects associated with a rear floating junction. Each of these will degrade the fill factor if the range of junction biases corresponding to the rear surface transition coincides with the maximum power point. Despite the identified non-idealities, PERL cells with rear floating junctions (PERF cells) have achieved record open circuit voltages for silicon solar cells, while simultaneously achieving fill factor improvements relative to standard PERL solar cells. Without optimization, a record efficiency of 22% has been demonstrated for a cell with a rear floating junction. The results of both theoretical and experimental studies are provided.

  2. Single crystal growth of 67%BiFeO3-33%BaTiO3 solution by the floating zone method

    NASA Astrophysics Data System (ADS)

    Rong, Y.; Zheng, H.; Krogstad, M. J.; Mitchell, J. F.; Phelan, D.

    2018-01-01

    The growth conditions and the resultant grain morphologies and phase purities from floating-zone growth of 67%BiFeO3-33%BaTiO3 (BF-33BT) single crystals are reported. We find two formidable challenges for the growth. First, a low-melting point constituent leads to a pre-melt zone in the feed-rod that adversely affects growth stability. Second, constitutional super-cooling (CSC), which was found to lead to dendritic and columnar features in the grain morphology, necessitates slow traveling rates during growth. Both challenges were addressed by modifications to the floating-zone furnace that steepened the temperature gradient at the melt-solid interfaces. Slow growth was also required to counter the effects of CSC. Single crystals with typical dimensions of hundreds of microns have been obtained which possess high quality and are suitable for detailed structural studies.

  3. Increased water temperature renders single-housed C57BL/6J mice susceptible to antidepressant treatment in the forced swim test.

    PubMed

    Bächli, Heidi; Steiner, Michel A; Habersetzer, Ursula; Wotjak, Carsten T

    2008-02-11

    To investigate genotype x environment interactions in the forced swim test, we tested the influence of water temperature (20 °C, 25 °C, 30 °C) on floating behaviour in single-housed male C57BL/6J and BALB/c mice. We observed a contrasting relationship between floating and water temperature between the two strains, with C57BL/6J floating more and BALB/c floating less with increasing water temperature, independent of the lighting conditions and the time point of testing during the animals' circadian rhythm. Both strains showed an inverse relationship between plasma corticosterone concentration and water temperature, indicating that the differences in stress coping are unrelated to different perception of the aversive encounter. Treatment with desipramine (20 mg/kg, i.p.) caused a reduction in immobility time in C57BL/6J mice if the animals were tested at 30 °C water temperature, with no effect at 25 °C and no effects on forced swim stress-induced corticosterone secretion. The same treatment failed to affect floating behaviour in BALB/c at any temperature, but caused a decrease in plasma corticosterone levels. Taken together, we demonstrate that an increase in water temperature in the forced swim test exerts opposite effects on floating behaviour in C57BL/6J and BALB/c and renders single-housed C57BL/6J mice, but not BALB/c mice, susceptible to antidepressant-like behavioural effects of desipramine.

  4. Redox Electrocatalysis of Floating Nanoparticles: Determining Electrocatalytic Properties without the Influence of Solid Supports.

    PubMed

    Peljo, Pekka; Scanlon, Micheál D; Olaya, Astrid J; Rivier, Lucie; Smirnov, Evgeny; Girault, Hubert H

    2017-08-03

    Redox electrocatalysis (catalysis of electron-transfer reactions by floating conductive particles) is discussed from the point of view of Fermi level equilibration, and an overall theoretical framework is given. Examples of redox electrocatalysis in solution, in bipolar configuration, and at liquid-liquid interfaces are provided, highlighting that bipolar and liquid-liquid interfacial systems allow the study of the electrocatalytic properties of particles without effects from the support, but only liquid-liquid interfaces allow measurement of the electrocatalytic current directly. Additionally, photoinduced redox electrocatalysis will be of interest, for example, to achieve water splitting.

  5. Common brain regions underlying different arithmetic operations as revealed by conjunct fMRI-BOLD activation.

    PubMed

    Fehr, Thorsten; Code, Chris; Herrmann, Manfred

    2007-10-03

    The issue of how and where arithmetic operations are represented in the brain has been addressed in numerous studies. Lesion studies suggest that a network of different brain areas is involved in mental calculation. Neuroimaging studies have reported inferior parietal and lateral frontal activations during mental arithmetic using tasks of different complexities and different operators (addition, subtraction, etc.). Indeed, it has been difficult to compare brain activation across studies because of the variety of operators and presentation modalities used. The present experiment examined fMRI-BOLD activity in participants during calculation tasks entailing different arithmetic operations (addition, subtraction, multiplication and division) of different complexities. Functional imaging data revealed a common activation pattern comprising right precuneus and left and right middle and superior frontal regions during all arithmetic operations. All other regional activations were operation-specific and distributed in predominantly frontal, parietal and central regions when contrasting complex and simple calculation tasks. The present results largely confirm former studies suggesting that activation patterns due to mental arithmetic reflect a basic anatomical substrate of working memory, numerical knowledge and processing based on finger counting, derived from a network originally related to finger movement. We emphasize that in mental arithmetic research different arithmetic operations should always be examined and discussed independently of each other in order to avoid invalid generalizations about arithmetic and the brain areas involved.

  6. Examining the relationship between rapid automatized naming and arithmetic fluency in Chinese kindergarten children.

    PubMed

    Cui, Jiaxin; Georgiou, George K; Zhang, Yiyun; Li, Yixun; Shu, Hua; Zhou, Xinlin

    2017-02-01

    Rapid automatized naming (RAN) has been found to predict mathematics performance. However, the nature of the relationship remains unclear. Thus, the purpose of this study was twofold: (a) to examine how RAN (numeric and non-numeric) predicts a subdomain of mathematics (arithmetic fluency) and (b) to examine what processing skills may account for the RAN-arithmetic fluency relationship. A total of 160 third-year kindergarten Chinese children (83 boys and 77 girls, mean age = 5.11 years) were assessed on RAN (colors, objects, digits, and dice), nonverbal IQ, visual-verbal paired associate learning, phonological awareness, short-term memory, speed of processing, approximate number system acuity, and arithmetic fluency (addition and subtraction). The results indicated first that RAN was a significant correlate of arithmetic fluency and that the correlations did not vary as a function of the type of RAN or arithmetic fluency task. In addition, RAN continued to predict addition and subtraction fluency even after controlling for all other processing skills. Taken together, these findings challenge the existing theoretical accounts of the RAN-arithmetic fluency relationship and suggest that, similar to reading fluency, multiple processes underlie the RAN-arithmetic fluency relationship. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. 78 FR 25069 - South Carolina Electric & Gas Company; Notice of Application Accepted for Filing and Soliciting...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-29

    ... Murray Docks, Inc./Windward Point Yacht Club to use project waters to expand an existing boat dock facility through the addition of an 8-slip floating dock to accommodate a maximum of 12 additional boats. The proposed new structures would be for the private use of members of the Windward Point Yacht Club...

  8. 40 CFR 435.12 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS OIL AND GAS EXTRACTION POINT SOURCE CATEGORY... provided in 40 CFR 125.30-32, any existing point source subject to this subpart must achieve the following... maintained as close to this concentration as possible. 3 There shall be no floating solids as a result of the...

  9. Hardware description ADSP-21020 40-bit floating point DSP as designed in a remotely controlled digital CW Doppler radar

    NASA Astrophysics Data System (ADS)

    Morrison, R. E.; Robinson, S. H.

    A continuous wave Doppler radar system has been designed which is portable, easily deployed, and remotely controlled. The heart of this system is a DSP/control board using the Analog Devices ADSP-21020 40-bit floating-point digital signal processor (DSP). Two 18-bit audio A/D converters provide digital input to the DSP/controller board for near-real-time target detection. Program memory for the DSP is dual-ported with an Intel 87C51 microcontroller, allowing DSP code to be uploaded or downloaded from a central controlling computer. The 87C51 provides overall system control for the remote radar and includes a time-of-day/day-of-year real-time clock, system identification (ID) switches, and input/output (I/O) expansion via an Intel 82C55 I/O expander.
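
    The use of audio-rate A/D converters follows from the CW Doppler relation f_d = 2·v·f0/c: target returns land at audio frequencies. A minimal sketch, assuming a hypothetical 10 GHz carrier (the abstract does not state the radar's operating frequency):

```python
# Two-way Doppler shift for a CW radar: a target moving with radial
# speed v shifts the carrier f0 by f_d = 2 * v * f0 / c.
# The 10 GHz carrier below is an assumed illustrative value.

C = 3.0e8  # speed of light, m/s

def doppler_shift(v_mps: float, f0_hz: float) -> float:
    """Return the two-way Doppler shift in Hz."""
    return 2.0 * v_mps * f0_hz / C

# A 15 m/s target at 10 GHz produces an audio-rate shift, which is why
# audio A/D converters suffice for the DSP front end:
print(doppler_shift(15.0, 10.0e9))  # -> 1000.0
```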

  10. Optimized Latching Control of Floating Point Absorber Wave Energy Converter

    NASA Astrophysics Data System (ADS)

    Gadodia, Chaitanya; Shandilya, Shubham; Bansal, Hari Om

    2018-03-01

    There is an increasing demand for energy in today’s world. The main energy resources are currently fossil fuels, which will eventually run out and whose emissions contribute to global warming. For a sustainable future, these fossil fuels should be replaced with renewable, green energy sources. Sea waves are a vast and largely untapped energy resource, and the potential for extracting energy from them is considerable. To capture this energy, wave energy converters (WECs) are needed, and there is a need to increase the energy output and decrease the cost of existing WECs. This paper presents a method that uses prediction as part of the control scheme to increase the energy efficiency of floating point-absorber WECs. A Kalman filter is used for estimation, coupled with latching control, in both regular and irregular sea waves. Modelling and simulation results are also included.
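
    The estimation step of such a scheme can be illustrated with the scalar Kalman recursion; a minimal sketch under an assumed random-walk signal model (the model and noise parameters are illustrative assumptions, not values from the paper):

```python
# Scalar Kalman filter (random-walk model): estimate a slowly varying
# wave signal from noisy samples; the one-step prediction can then be
# used to time the latching decision. q and r are illustrative.

def kalman_step(x, p, z, q=0.01, r=0.1):
    """One predict/update cycle; returns updated estimate and variance."""
    # Predict: random-walk model x_k = x_{k-1} + w, with Var(w) = q
    x_pred, p_pred = x, p + q
    # Update with measurement z = x + v, with Var(v) = r
    k = p_pred / (p_pred + r)        # Kalman gain
    x_new = x_pred + k * (z - x_pred)
    p_new = (1.0 - k) * p_pred
    return x_new, p_new

# Feeding a constant measurement drives the estimate toward it:
x, p = 0.0, 1.0
for _ in range(50):
    x, p = kalman_step(x, p, z=1.0)
print(round(x, 3))  # -> 1.0
```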

  11. Microfluidic quadrupole and floating concentration gradient.

    PubMed

    Qasaimeh, Mohammad A; Gervais, Thomas; Juncker, David

    2011-09-06

    The concept of fluidic multipoles, in analogy to electrostatics, has long been known as a particular class of solutions of the Navier-Stokes equation in potential flows; however, experimental observations of fluidic multipoles and of their characteristics have not been reported. Here we present a two-dimensional microfluidic quadrupole and a theoretical analysis consistent with the experimental observations. The microfluidic quadrupole was formed by simultaneously injecting and aspirating fluids from two pairs of opposing apertures in a narrow gap formed between a microfluidic probe and a substrate. A stagnation point was formed at the centre of the microfluidic quadrupole, and its position could be rapidly adjusted hydrodynamically. Following the injection of a solute through one of the poles, a stationary, tunable, and movable (that is, 'floating') concentration gradient was formed at the stagnation point. Our results lay the foundation for future combined experimental and theoretical exploration of microfluidic planar multipoles, including convective-diffusive phenomena.
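
    The central stagnation point follows directly from superposing 2D point-source solutions of potential flow; a minimal sketch (the aperture positions and strengths are illustrative assumptions):

```python
# 2D potential-flow quadrupole: two injection (source) and two
# aspiration (sink) apertures. A point source of strength q at r0 has
#   v(r) = q/(2*pi) * (r - r0) / |r - r0|**2
# By symmetry the four contributions cancel at the centre.
import math

def source_velocity(q, x0, y0, x, y):
    dx, dy = x - x0, y - y0
    c = q / (2.0 * math.pi * (dx * dx + dy * dy))
    return c * dx, c * dy

def quadrupole_velocity(x, y, q=1.0, d=1.0):
    """Sources at (+-d, 0), sinks at (0, +-d); total velocity at (x, y)."""
    poles = [(q, d, 0.0), (q, -d, 0.0), (-q, 0.0, d), (-q, 0.0, -d)]
    vx = sum(source_velocity(s, x0, y0, x, y)[0] for s, x0, y0 in poles)
    vy = sum(source_velocity(s, x0, y0, x, y)[1] for s, x0, y0 in poles)
    return vx, vy

vx, vy = quadrupole_velocity(0.0, 0.0)
print(vx, vy)  # both ~0: the stagnation point sits at the centre
```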

  12. Atmospheric Modeling And Sensor Simulation (AMASS) study

    NASA Technical Reports Server (NTRS)

    Parker, K. G.

    1984-01-01

    The capabilities of the atmospheric modeling and sensor simulation (AMASS) system were studied in order to enhance them. This system is used to process atmospheric measurements for evaluating sensor performance, conducting design-concept simulation studies, and modeling the physical and dynamical nature of atmospheric processes. The study enumerates tasks proposed both to enhance AMASS system utilization and to integrate the AMASS system with other existing equipment, facilitating the analysis of data for modeling and image processing. The following array processors were evaluated for anticipated effectiveness and/or improvements in throughput by attachment of the device to the P-e: (1) Floating Point Systems AP-120B; (2) Floating Point Systems 5000; (3) CSP, Inc. MAP-400; (4) Analogic AP500; (5) Numerix MARS-432; and (6) Star Technologies, Inc. ST-100.

  13. Individual structural differences in left inferior parietal area are associated with schoolchildren's arithmetic scores

    PubMed Central

    Li, Yongxin; Hu, Yuzheng; Wang, Yunqi; Weng, Jian; Chen, Feiyan

    2013-01-01

    Arithmetic skill is of critical importance for academic achievement, professional success and everyday life, and childhood is the key period to acquire this skill. Neuroimaging studies have identified left parietal regions as a key neural substrate for representing arithmetic skill. Although the relationship between functional brain activity in left parietal regions and arithmetic skill has been studied in detail, the relationship between arithmetic achievement and structural properties of the left inferior parietal area in schoolchildren remains unclear. The current study employed a combination of voxel-based morphometry (VBM) for high-resolution T1-weighted images and fiber tracking on diffusion tensor imaging (DTI) to examine the relationship between structural properties in the inferior parietal area and arithmetic achievement in 10-year-old schoolchildren. VBM of the T1-weighted images revealed that individual differences in arithmetic scores were significantly and positively correlated with the gray matter (GM) volume in the left intraparietal sulcus (IPS). Fiber tracking analysis revealed that the forceps major, left superior longitudinal fasciculus (SLF), bilateral inferior longitudinal fasciculus (ILF) and inferior fronto-occipital fasciculus (IFOF) were the primary pathways connecting the left IPS with other brain areas. Furthermore, the regression analysis of the probabilistic pathways revealed a significant and positive correlation between the fractional anisotropy (FA) values in the left SLF, ILF and bilateral IFOF and arithmetic scores. The brain structure-behavior correlation analyses indicated that the GM volumes in the left IPS and the FA values in the tract pathways connecting the left IPS were both related to children's arithmetic achievement. The present findings provide evidence that individual structural differences in the left IPS are associated with arithmetic scores in schoolchildren. PMID:24367320

  14. Age-related changes in strategic variations during arithmetic problem solving: The role of executive control.

    PubMed

    Hinault, T; Lemaire, P

    2016-01-01

    In this review, we provide an overview of how age-related changes in executive control influence aging effects in arithmetic processing. More specifically, we consider the role of executive control in strategic variations with age during arithmetic problem solving. Previous studies found that age-related differences in arithmetic performance are associated with strategic variations. That is, when they accomplish arithmetic problem-solving tasks, older adults use fewer strategies than young adults, use strategies in different proportions, and select and execute strategies less efficiently. Here, we review recent evidence suggesting that age-related changes in inhibition, cognitive flexibility, and working memory processes underlie age-related changes in strategic variations during arithmetic problem solving. We discuss both behavioral and neural mechanisms underlying age-related changes in these executive control processes. © 2016 Elsevier B.V. All rights reserved.

  15. Reconfigurable data path processor

    NASA Technical Reports Server (NTRS)

    Donohoe, Gregory (Inventor)

    2005-01-01

    A reconfigurable data path processor comprises a plurality of independent processing elements, each advantageously comprising an identical architecture. Each processing element comprises a plurality of data processing means for generating a potential output. Each processor is also capable of passing an input through as a potential output with little or no processing. Each processing element comprises a conditional multiplexer having a first conditional multiplexer input, a second conditional multiplexer input and a conditional multiplexer output. A first potential output value is transmitted to the first conditional multiplexer input, and a second potential output value is transmitted to the second conditional multiplexer input. The conditional multiplexer couples either the first conditional multiplexer input or the second conditional multiplexer input to the conditional multiplexer output, according to an output control command. The output control command is generated by processing a set of arithmetic status bits through a logical mask. The conditional multiplexer output is coupled to a first processing element output. A first set of arithmetic status bits is generated according to the processing of the first processable value. A second set of arithmetic status bits may be generated from a second processing operation. The selection of the arithmetic status bits is performed by an arithmetic-status-bit multiplexer, which selects the desired set of arithmetic status bits from among the first and second sets. The conditional multiplexer evaluates the selected arithmetic status bits according to a logical mask defining an algorithm for evaluating them.

  16. Solidification of floating organic droplet in dispersive liquid-liquid microextraction as a green analytical tool.

    PubMed

    Mansour, Fotouh R; Danielson, Neil D

    2017-08-01

    Dispersive liquid-liquid microextraction (DLLME) is a special type of microextraction in which a mixture of two solvents (an extracting solvent and a disperser) is injected into the sample. The extraction solvent is then dispersed as fine droplets in the cloudy sample through manual or mechanical agitation. The sample is then centrifuged to break the resulting emulsion, and the extracting solvent is manually separated. The organic solvents commonly used in DLLME are halogenated hydrocarbons, which are highly toxic. These solvents are heavier than water, so they sink to the bottom of the centrifugation tube, which makes the separation step difficult. By using solvents of low density, the organic extractant instead floats on the sample surface. If the selected solvent, such as undecanol, has a freezing point in the range 10-25 °C, the floating droplet can be solidified in a simple ice bath and then transferred out of the sample matrix; this step is known as solidification of floating organic droplet (SFOD). Coupling DLLME to SFOD combines the advantages of both approaches. The DLLME-SFOD process is controlled by the same variables as conventional liquid-liquid extraction. The organic solvents used as extractants in DLLME-SFOD must be immiscible with water and have a density lower than water, low volatility, a high partition coefficient, and low melting and freezing points. The extraction efficiency of DLLME-SFOD is affected by the types and volumes of organic extractant and disperser, salt addition, pH, temperature, stirring rate and extraction time. This review discusses the principle, optimization variables, advantages and disadvantages, and selected applications of DLLME-SFOD in water, food and biomedical analysis. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Cognitive Processes that Account for Mental Addition Fluency Differences between Children Typically Achieving in Arithmetic and Children At-Risk for Failure in Arithmetic

    ERIC Educational Resources Information Center

    Berg, Derek H.; Hutchinson, Nancy L.

    2010-01-01

    This study investigated whether processing speed, short-term memory, and working memory accounted for the differential mental addition fluency between children typically achieving in arithmetic (TA) and children at-risk for failure in arithmetic (AR). Further, we drew attention to fluency differences in simple (e.g., 5 + 3) and complex (e.g., 16 +…

  18. Method and apparatus for high speed data acquisition and processing

    DOEpatents

    Ferron, J.R.

    1997-02-11

    A method and apparatus are disclosed for high speed digital data acquisition. The apparatus includes one or more multiplexers for receiving multiple channels of digital data at a low data rate and asserting a multiplexed data stream at a high data rate, and one or more FIFO memories for receiving data from the multiplexers and asserting the data to a real time processor. Preferably, the invention includes two multiplexers, two FIFO memories, and a 64-bit bus connecting the FIFO memories with the processor. Each multiplexer receives four channels of 14-bit digital data at a rate of up to 5 MHz per channel, and outputs a data stream to one of the FIFO memories at a rate of 20 MHz. The FIFO memories assert output data in parallel to the 64-bit bus, thus transferring 14-bit data values to the processor at a combined rate of 40 MHz. The real time processor is preferably a floating-point processor which processes 32-bit floating-point words. A set of mask bits is prestored in each 32-bit storage location of the processor memory into which a 14-bit data value is to be written. After data transfer from the FIFO memories, mask bits are concatenated with each stored 14-bit data value to define a valid 32-bit floating-point word. Preferably, a user can select any of several modes for starting and stopping direct memory transfers of data from the FIFO memories to memory within the real time processor, by setting the content of a control and status register. 15 figs.
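
    The mask-bit technique in this patent (prestoring high-order bits so that a raw integer write yields a valid 32-bit float) can be illustrated with IEEE-754 single precision. A sketch assuming a 0x4B000000 mask (exponent field 150), which is one common choice and not necessarily the patent's exact encoding:

```python
# IEEE-754 trick: with the exponent field preset to 150 (bias 127 + 23),
# the 32-bit word 0x4B000000 | n is the float 2**23 + n for 0 <= n < 2**23.
# Writing a 14-bit sample into the low mantissa bits therefore yields a
# valid float with no explicit int-to-float conversion.
import struct

MASK = 0x4B000000  # sign 0, exponent 150, mantissa 0 (i.e. 8388608.0)

def masked_word_to_float(sample14: int) -> float:
    """Interpret MASK | sample14 as an IEEE-754 single-precision float."""
    word = MASK | (sample14 & 0x3FFF)
    return struct.unpack("<f", struct.pack("<I", word))[0]

f = masked_word_to_float(5000)
print(f - 8388608.0)  # -> 5000.0: subtracting the offset recovers the sample
```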

  19. Method and apparatus for high speed data acquisition and processing

    DOEpatents

    Ferron, John R.

    1997-01-01

    A method and apparatus for high speed digital data acquisition. The apparatus includes one or more multiplexers for receiving multiple channels of digital data at a low data rate and asserting a multiplexed data stream at a high data rate, and one or more FIFO memories for receiving data from the multiplexers and asserting the data to a real time processor. Preferably, the invention includes two multiplexers, two FIFO memories, and a 64-bit bus connecting the FIFO memories with the processor. Each multiplexer receives four channels of 14-bit digital data at a rate of up to 5 MHz per channel, and outputs a data stream to one of the FIFO memories at a rate of 20 MHz. The FIFO memories assert output data in parallel to the 64-bit bus, thus transferring 14-bit data values to the processor at a combined rate of 40 MHz. The real time processor is preferably a floating-point processor which processes 32-bit floating-point words. A set of mask bits is prestored in each 32-bit storage location of the processor memory into which a 14-bit data value is to be written. After data transfer from the FIFO memories, mask bits are concatenated with each stored 14-bit data value to define a valid 32-bit floating-point word. Preferably, a user can select any of several modes for starting and stopping direct memory transfers of data from the FIFO memories to memory within the real time processor, by setting the content of a control and status register.

  20. Fast and Scalable Computation of the Forward and Inverse Discrete Periodic Radon Transform.

    PubMed

    Carranza, Cesar; Llamocca, Daniel; Pattichis, Marios

    2016-01-01

    The discrete periodic Radon transform (DPRT) has been used extensively in applications that involve image reconstruction from projections. Beyond classic applications, the DPRT can also be used to compute fast convolutions that avoid the floating-point arithmetic associated with the fast Fourier transform. Unfortunately, use of the DPRT has been limited by the need to compute a large number of additions and the need for a large number of memory accesses. This paper introduces a fast and scalable approach for computing the forward and inverse DPRT that is based on the use of: a parallel array of fixed-point adder trees; circular shift registers to remove the need for accessing external memory components when selecting the input data for the adder trees; an image block-based approach to DPRT computation that can fit the proposed architecture to available resources; and fast transpositions that are computed in one or a few clock cycles and do not depend on the size of the input image. As a result, for an N × N image (N prime), the proposed approach can compute up to N² additions per clock cycle. Compared with previous approaches, the scalable approach provides the fastest known implementations for different amounts of computational resources. For example, for a 251 × 251 image, with approximately 25% fewer flip-flops than required for a systolic implementation, the scalable DPRT is computed 36 times faster. For the fastest case, we introduce optimized architectures that can compute the DPRT and its inverse in just 2N + ⌈log₂ N⌉ + 1 and 2N + 3⌈log₂ N⌉ + B + 2 cycles, respectively, where B is the number of bits used to represent each input pixel. On the other hand, the scalable DPRT approach requires more one-bit additions than the systolic implementation, providing a tradeoff between speed and additional one-bit additions. All of the proposed DPRT architectures were implemented in VHSIC Hardware Description Language (VHDL) and validated using a Field-Programmable Gate Array (FPGA) implementation.
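
    The DPRT itself is a set of modular line sums, which is why it needs only additions and memory accesses. A minimal reference sketch of the standard definition for an N × N image with N prime (offered as illustration, not the paper's hardware algorithm):

```python
# Discrete periodic Radon transform of an N x N image, N prime.
# Projection m (0 <= m < N): R[m][d] = sum_i x[i][(d + m*i) mod N]
# Extra projection m = N:     R[N][d] = sum_j x[d][j]
# Only integer additions and modular indexing are required.

def dprt(x):
    n = len(x)
    r = [[0] * n for _ in range(n + 1)]
    for m in range(n):
        for d in range(n):
            r[m][d] = sum(x[i][(d + m * i) % n] for i in range(n))
    for d in range(n):
        r[n][d] = sum(x[d])
    return r

# Each projection visits every pixel exactly once, so every row of the
# transform sums to the total mass of the image:
img = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]  # N = 3 (prime)
proj = dprt(img)
print([sum(row) for row in proj])  # -> [45, 45, 45, 45]
```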

  1. Aerial LED signage by use of crossed-mirror array

    NASA Astrophysics Data System (ADS)

    Yamamoto, Hirotsugu; Kujime, Ryousuke; Bando, Hiroki; Suyama, Shiro

    2013-03-01

    3D representation of digital signage improves its impact and the rapid notification of important points. Real 3D display techniques such as volumetric 3D displays are effective for public signs because they provide not only binocular disparity but also motion parallax and other cues, giving a 3D impression even to people with abnormal binocular vision. Our goal is to realize aerial 3D LED signs. We have specially designed and fabricated a reflective optical device that forms an aerial image of LEDs with a wide field angle. The developed device is composed of a crossed-mirror array (CMA), which contains dihedral corner reflectors at each aperture. After double reflection, light rays emitted from an LED converge at the corresponding image point. The depth separation between LED lamps is reproduced as the same depth in the floating 3D image. The floating image of the LEDs was formed over a wide range of incident angles, with a peak reflectance at 35 deg. The image size of the focused beam (point spread function) agreed with the apparent aperture size.
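
    The double reflection at each dihedral corner is what makes the rays converge: reflecting off two perpendicular mirrors exactly reverses the transverse direction components. A minimal sketch of that geometric fact (the mirror orientations are idealized assumptions):

```python
# A dihedral corner reflector = two perpendicular mirrors. Reflecting a
# direction vector off the x-z mirror flips its y component; off the
# y-z mirror, its x component. After both bounces the transverse
# direction is exactly reversed, so rays from a point source
# re-converge at the mirrored image point.

def reflect_xz(v):  # mirror in the x-z plane flips the y component
    return (v[0], -v[1], v[2])

def reflect_yz(v):  # mirror in the y-z plane flips the x component
    return (-v[0], v[1], v[2])

v = (0.3, -0.7, 0.65)            # arbitrary incoming ray direction
out = reflect_yz(reflect_xz(v))  # double reflection at one corner
print(out)  # -> (-0.3, 0.7, 0.65): transverse components reversed
```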

  2. Functional Neuroanatomy Involved in Automatic order Mental Arithmetic and Recitation of the Multiplication Table

    NASA Astrophysics Data System (ADS)

    Wang, Li-Qun; Saito, Masao

    We used 1.5T functional magnetic resonance imaging (fMRI) to explore which brain areas contribute uniquely to numeric computation. The BOLD activation pattern of a mental arithmetic task (successive subtraction: an actual calculation task) was compared with the response to a multiplication-table repetition task (a rote verbal arithmetic memory task). The activation found in the right parietal lobule during the mental arithmetic task suggests that quantitative cognition, or numeric computation, may require the assistance of sensory conversion, such as spatial imagination. In addition, this mechanism may be an ’analog algorithm’ in simple mental arithmetic processing.

  3. Early language and executive skills predict variations in number and arithmetic skills in children at family-risk of dyslexia and typically developing controls

    PubMed Central

    Moll, Kristina; Snowling, Margaret J.; Göbel, Silke M.; Hulme, Charles

    2015-01-01

    Two important foundations for learning are language and executive skills. Data from a longitudinal study tracking the development of 93 children at family-risk of dyslexia and 76 controls were used to investigate the influence of these skills on the development of arithmetic. A two-group longitudinal path model assessed the relationships between language and executive skills at 3–4 years, verbal number skills (counting and number knowledge) and phonological processing skills at 4–5 years, and written arithmetic in primary school. The same cognitive processes accounted for variability in arithmetic skills in both groups. Early language and executive skills predicted variations in preschool verbal number skills, which, in turn, predicted arithmetic skills in school. In contrast, phonological awareness was not a predictor of later arithmetic skills. These results suggest that verbal and executive processes provide the foundation for verbal number skills, which in turn influence the development of formal arithmetic skills. Problems in early language development may explain the comorbidity between reading and mathematics disorder. PMID:26412946

  4. 40 CFR 426.54 - [Reserved

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 30 2011-07-01 2011-07-01 false [Reserved] 426.54 Section 426.54 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS GLASS MANUFACTURING POINT SOURCE CATEGORY Float Glass Manufacturing Subcategory § 426.54 [Reserved] ...

  5. 40 CFR 426.54 - [Reserved

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 29 2010-07-01 2010-07-01 false [Reserved] 426.54 Section 426.54 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS GLASS MANUFACTURING POINT SOURCE CATEGORY Float Glass Manufacturing Subcategory § 426.54 [Reserved] ...

  6. 33 CFR 100.101 - Harvard-Yale Regatta, Thames River, New London, CT.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... race course, between Scotch Cap and Bartlett Point Light. (ii) Within the race course boundaries or in... not cause waves which result in damage to submarines or other vessels in the floating drydocks. (11...

  7. 75 FR 5073 - Alabama Power Company; Notice of Application for Amendment of License and Soliciting Comments...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-01

    ... facilities associated with the Willow Glynn at Willow Point residential subdivision. These facilities include 2 floating docks, with 16 double-slips each, a wooden pedestrian bridge, a wooden boardwalk along 1...

  8. 40 CFR 125.133 - What special definitions apply to this subpart?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Subcategories of the Oil and Gas Extraction Point Source Category Effluent Guidelines in 40 CFR 435.10 or 40 CFR..., floating, mobile, facility engaged in the processing of fresh, frozen, canned, smoked, salted or pickled...

  9. 33 CFR 110.29 - Boston Inner Harbor, Mass.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Park Yacht Club, Winthrop. Southerly of a line bearing 276° from a point on the west side of Pleasant.... [NAD83]. (2) The area is principally for use by yachts and other recreational craft. Temporary floats or...

  10. 33 CFR 110.29 - Boston Inner Harbor, Mass.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Park Yacht Club, Winthrop. Southerly of a line bearing 276° from a point on the west side of Pleasant.... [NAD83]. (2) The area is principally for use by yachts and other recreational craft. Temporary floats or...

  11. Teachers’ Beliefs and Practices Regarding the Role of Executive Functions in Reading and Arithmetic

    PubMed Central

    Rapoport, Shirley; Rubinsten, Orly; Katzir, Tami

    2016-01-01

    The current study investigated early elementary school teachers’ beliefs and practices regarding the role of Executive Functions (EFs) in reading and arithmetic. A new research questionnaire was developed and judged by professionals in academia and the field. Responses were obtained from 144 teachers from Israel. Factor analysis divided the questionnaire into three valid and reliable subscales, reflecting (1) beliefs regarding the contribution of EFs to reading and arithmetic, (2) pedagogical practices, and (3) a connection between the cognitive mechanisms of reading and arithmetic. Findings indicate that teachers believe EFs affect students’ performance in reading and arithmetic. These beliefs were also correlated with pedagogical practices. Additionally, special education teachers scored higher on the different subscales than general education teachers. These findings shed light on the way teachers perceive the cognitive foundations of reading and arithmetic and indicate the extent to which these perceptions guide their teaching practices. PMID:27799917

  12. Teachers' Beliefs and Practices Regarding the Role of Executive Functions in Reading and Arithmetic.

    PubMed

    Rapoport, Shirley; Rubinsten, Orly; Katzir, Tami

    2016-01-01

The current study investigated early elementary school teachers' beliefs and practices regarding the role of Executive Functions (EFs) in reading and arithmetic. A new research questionnaire was developed and judged by professionals in academia and in the field. Responses were obtained from 144 teachers from Israel. Factor analysis divided the questionnaire into three valid and reliable subscales, reflecting (1) beliefs regarding the contribution of EFs to reading and arithmetic, (2) pedagogical practices, and (3) a connection between the cognitive mechanisms of reading and arithmetic. Findings indicate that teachers believe EFs affect students' performance in reading and arithmetic. These beliefs were also correlated with pedagogical practices. Additionally, special education teachers scored higher on the different subscales compared to general education teachers. These findings shed light on the way teachers perceive the cognitive foundations of reading and arithmetic and indicate the extent to which these perceptions guide their teaching practices.

  13. Channel-Like Bottom Features and High Bottom Melt Rates of Petermann Gletscher's Floating Tongue in Northwestern Greenland

    NASA Astrophysics Data System (ADS)

    Steffen, K.; Huff, R. D.; Cullen, N.; Rignot, E.; Stewart, C.; Jenkins, A.

    2003-12-01

Petermann Gletscher is the largest and most influential outlet glacier in central northern Greenland. Located at 81 N, 60 W, it drains an area of 71,580 km2, with a discharge of 12 cubic km of ice per year into the Arctic Ocean. We finished a second field season in spring 2003, collecting in situ data on local climate, ice velocity, strain rates, ice thickness profiles, and bottom melt rates of the floating ice tongue. Last year's findings were confirmed: large channels, several hundred meters deep, on the underside of the floating ice tongue run roughly parallel to the flow direction. We mapped these channels using ground-penetrating radar at 25 MHz and multi-phase radar in profiling mode over half of the glacier's width. In addition, NASA airborne laser altimeter data were collected along and across the glacier for accurate assessment of surface topography. We will present a 3-D model of the floating ice tongue and offer hypotheses on the origin and mechanism of these large ice channels at its base. Multi-phase radar point measurements revealed bottom melt rates that exceed all previous estimates. It is worth mentioning that the largest bottom melt rates were not found at the grounding line, as is common on ice shelves in Antarctica. In addition, GPS tidal motion was measured over one lunar cycle at the flex zone and on the free-floating ice tongue, and the results will be compared to historic measurements made at the beginning of the last century. The surface climate was recorded by two automatic weather stations over a 12-month period, and the local climate of this remote region will be presented.

  14. 76 FR 23964 - Fisheries in the Western Pacific; Pelagic Fisheries; Purse Seine Prohibited Areas Around American...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-29

    ... animals, such as pelagic fishes and sea turtles, tend to congregate to naturally-occurring floating... American Samoa enclosed by straight lines connecting the following coordinates: Point S. latitude W. longitude AS-3-A 11[deg]12[min] 172[deg]18[min] AS-3-B 12[deg]12[min] 169[deg]56[min] and from Point AS-3-A...

  15. 75 FR 33692 - Safety Zone; Tacoma Freedom Fair Air Show, Commencement Bay, Tacoma, WA

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-15

    ... this rule encompasses all waters within the points 47[deg]-17.63' N., 122[deg]-28.724' W.; 47[deg]-17... Ruston Way and extending approximately 1100 yards into Commencement Bay. Floating markers will be placed... designated safety zone: All waters within the points 47[deg]-17.63' N., 122[deg]-28.724' W.; 47[deg]-17.059...

  16. Improving Soldier Training: An Aptitude-Treatment Interaction Approach.

    DTIC Science & Technology

    1979-06-01

magazines. Eighteen percent of American adults lack basic literacy skills to the point where they cannot even fill out basic forms. Dr. Food emphasized...designed to upgrade the literacy and computational skills of Army personnel found deficient. The magnitude of the problem is such, however, that the services...knowledge (WK), arithmetic reasoning (AR), etc.) predict the amount learned or the rate of learning or both. Special abilities such as psychomotor skills

  17. LAVA: Large scale Automated Vulnerability Addition

    DTIC Science & Technology

    2016-05-23

memory copy, e.g., are reasonable attack points. If the goal is to inject divide-by-zero, then arithmetic operations involving division will be...ways. First, it introduces deterministic record and replay, which can be used for iterated and expensive analyses that cannot be performed online...memory. Since our approach records the correspondence between source lines and program basic block execution, it would be just as easy to figure out

  18. Assessment of methane emission and oxidation at Air Hitam Landfill site cover soil in wet tropical climate.

    PubMed

    Abushammala, Mohammed F M; Basri, Noor Ezlin Ahmad; Elfithri, Rahmah

    2013-12-01

Methane (CH₄) emissions and oxidation were measured at the Air Hitam sanitary landfill in Malaysia and were modeled using the Intergovernmental Panel on Climate Change waste model to estimate the CH₄ generation rate constant, k. The emissions were measured at several locations using a fabricated static flux chamber. A combination of gas concentrations in soil profiles and surface CH₄ and carbon dioxide (CO₂) emissions at four monitoring locations was used to estimate the CH₄ oxidation capacity. The temporal variations in CH₄ and CO₂ emissions were also investigated in this study. Geospatial means using point kriging and inverse distance weighting (IDW), as well as arithmetic and geometric means, were used to estimate total CH₄ emissions. The point kriging, IDW, and arithmetic means were almost identical and were two times higher than the geometric mean. The CH₄ emission geospatial means estimated using the kriging and IDW methods were 30.81 and 30.49 g m⁻² day⁻¹, respectively. The total CH₄ emissions from the studied area were 53.8 kg day⁻¹. The mean CH₄ oxidation capacity was 27.5%. The estimated value of k is 0.138 yr⁻¹. Special consideration must be given to CH₄ oxidation in the wet tropical climate for enhancing CH₄ emission reduction.
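The abstract above compares several estimators of mean emissions. A minimal sketch of the arithmetic mean, geometric mean, and an inverse-distance-weighted estimate (kriging omitted), using made-up flux values rather than the study's data:

```python
import math

# Hypothetical CH4 fluxes (g m^-2 day^-1) at chamber locations (x, y) in meters;
# the values are illustrative, not data from the study.
samples = [((0.0, 0.0), 12.0), ((30.0, 0.0), 45.0),
           ((0.0, 30.0), 18.0), ((30.0, 30.0), 60.0)]

def arithmetic_mean(vals):
    return sum(vals) / len(vals)

def geometric_mean(vals):
    # exp of the mean log; always <= the arithmetic mean for positive data
    return math.exp(sum(math.log(v) for v in vals) / len(vals))

def idw_estimate(x, y, samples, power=2.0):
    """Inverse-distance-weighted flux estimate at (x, y)."""
    num = den = 0.0
    for (sx, sy), v in samples:
        d = math.hypot(x - sx, y - sy)
        if d == 0.0:
            return v  # exactly at a sample point
        w = d ** -power
        num += w * v
        den += w
    return num / den

vals = [v for _, v in samples]
print(arithmetic_mean(vals))                        # 33.75
print(round(geometric_mean(vals), 2))               # 27.63
print(round(idw_estimate(15.0, 15.0, samples), 2))  # 33.75 (equidistant point)
```

The gap between the arithmetic and geometric means mirrors the roughly two-fold difference reported in the abstract: the geometric mean down-weights the few large fluxes that dominate a skewed distribution.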

  19. A quantum molecular similarity analysis of changes in molecular electron density caused by basis set flotation and electric field application

    NASA Astrophysics Data System (ADS)

    Simon, Sílvia; Duran, Miquel

    1997-08-01

Quantum molecular similarity (QMS) techniques are used to assess the response of the electron density of various small molecules to the application of a static, uniform electric field. Likewise, QMS is used to analyze the changes in electron density generated by the process of floating a basis set. The results obtained show an interrelation between the floating process, the optimum geometry, and the presence of an external field. Cases involving the Le Chatelier principle are discussed, and insight into the changes in bond critical point properties, self-similarity values, and density differences is provided.

  20. Highly anomalous accumulation rates of C and N recorded by a relic, free-floating peatland in Central Italy

    PubMed Central

    Zaccone, Claudio; Lobianco, Daniela; Shotyk, William; Ciavatta, Claudio; Appleby, Peter G.; Brugiapaglia, Elisabetta; Casella, Laura; Miano, Teodoro M.; D’Orazio, Valeria

    2017-01-01

    Floating islands mysteriously moving around on lakes were described by several Latin authors almost two millennia ago. These fascinating ecosystems, known as free-floating mires, have been extensively investigated from ecological, hydrological and management points of view, but there have been no detailed studies of their rates of accumulation of organic matter (OM), organic carbon (OC) and total nitrogen (TN). We have collected a peat core 4 m long from the free-floating island of Posta Fibreno, a relic mire in Central Italy. This is the thickest accumulation of peat ever found in a free-floating mire, yet it has formed during the past seven centuries and represents the greatest accumulation rates, at both decadal and centennial timescale, of OM (0.63 vs. 0.37 kg/m2/yr), OC (0.28 vs. 0.18 kg/m2/yr) and TN (3.7 vs. 6.1 g/m2/yr) ever reported for coeval peatlands. The anomalously high accretion rates, obtained using 14C age dating, were confirmed using 210Pb and 137Cs: these show that the top 2 m of Sphagnum-peat has accumulated in only ~100 years. As an environmental archive, Posta Fibreno offers a temporal resolution which is 10x greater than any terrestrial peat bog, and promises to provide new insight into environmental changes occurring during the Anthropocene. PMID:28230066

  1. 40 CFR 426.51 - Specialized definitions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 29 2010-07-01 2010-07-01 false Specialized definitions. 426.51 Section 426.51 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS GLASS MANUFACTURING POINT SOURCE CATEGORY Float Glass Manufacturing Subcategory § 426.51...

  2. 40 CFR 426.51 - Specialized definitions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 30 2011-07-01 2011-07-01 false Specialized definitions. 426.51 Section 426.51 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS GLASS MANUFACTURING POINT SOURCE CATEGORY Float Glass Manufacturing Subcategory § 426.51...

  3. Efficient Atomization and Combustion of Emulsified Crude Oil

    DTIC Science & Technology

    2014-09-18

2.26 Naphthenes, vol % 50.72 Aromatics, vol % 16.82 Freezing Point, °F -49.7 Freezing Point, °C -45.4 Smoke Point, mm (ASTM) 19.2 Acid...needed by the proposed method for capturing and oil removal, in particular the same vessels and booms used to herd the floating crude oil into a thick...slicks need to be removed more rapidly than they can be transported, in situ burning offers a rapid disposal method that minimizes risk to marine life

  4. Computer Program to Add NOISEMAP Grids of Different Spacings

    DTIC Science & Technology

    1980-04-01

GRID POINT. C I,J ARE THE INDICES FOR THE "FINE" GRID POINT CLOSEST, C BUT TO THE LEFT AND BELOW, TO THE DESIRED BIG GRID POINT. C RI,RJ ARE THE ACTUAL FLOATING POINT COORDINATES THE BIG C GRID POINT WOULD HAVE WERE IT IN THE FINE GRID. C COMMON /GRIDS/ NBF, NBFL, ...
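The OCR-damaged comments above appear to describe locating a coarse ("big") grid point inside a finer grid before grids of different spacings can be added. A hedged sketch of that mapping for uniform grids; the origins and spacings are hypothetical parameters, not values from the report:

```python
import math

def big_to_fine(i_big, j_big, big_origin, big_spacing, fine_origin, fine_spacing):
    """Locate a big-grid point inside a finer grid.

    Returns (ri, rj), the floating-point coordinates the big-grid point
    would have were it in the fine grid, and (i, j), the indices of the
    fine-grid point closest to it but to the left and below.
    Uniform grids with the given origins/spacings are an assumption."""
    x = big_origin[0] + i_big * big_spacing
    y = big_origin[1] + j_big * big_spacing
    ri = (x - fine_origin[0]) / fine_spacing
    rj = (y - fine_origin[1]) / fine_spacing
    return (ri, rj), (math.floor(ri), math.floor(rj))

# A big grid spaced at 250 units overlaid on a fine grid spaced at 100 units:
(ri, rj), (i, j) = big_to_fine(1, 1, (0.0, 0.0), 250.0, (0.0, 0.0), 100.0)
print(ri, rj, i, j)  # 2.5 2.5 2 2
```

With (i, j) and the fractional parts of (ri, rj) in hand, values from the fine grid can be bilinearly interpolated at the big-grid point before the two noise grids are summed.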

  5. Prospective relations between resting-state connectivity of parietal subdivisions and arithmetic competence.

    PubMed

    Price, Gavin R; Yeo, Darren J; Wilkey, Eric D; Cutting, Laurie E

    2018-04-01

The present study investigates the relation between resting-state functional connectivity (rsFC) of cytoarchitectonically defined subdivisions of the parietal cortex at the end of 1st grade and arithmetic performance at the end of 2nd grade. Results revealed a dissociable pattern of relations between rsFC and arithmetic competence among subdivisions of intraparietal sulcus (IPS) and angular gyrus (AG). rsFC between right hemisphere IPS subdivisions and contralateral IPS subdivisions positively correlated with arithmetic competence. In contrast, rsFC between the left hIP1 and the right medial temporal lobe, and rsFC between the left AG and left superior frontal gyrus, were negatively correlated with arithmetic competence. These results suggest that strong inter-hemispheric IPS connectivity is important for math development, reflecting either neurocognitive mechanisms specific to arithmetic processing, domain-general mechanisms that are particularly relevant to arithmetic competence, or structural 'cortical maturity'. Stronger connectivity between IPS and AG subdivisions and frontal and temporal cortices, however, appears to be negatively associated with math development, possibly reflecting the ability to disengage suboptimal problem-solving strategies during mathematical processing, or to flexibly reorient task-based networks. Importantly, the reported results hold even when controlling for reading, spatial attention, and working memory, suggesting that the observed rsFC-behavior relations are specific to arithmetic competence. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  6. Number comparison and number ordering as predictors of arithmetic performance in adults: Exploring the link between the two skills, and investigating the question of domain-specificity.

    PubMed

    Morsanyi, Kinga; O'Mahony, Eileen; McCormack, Teresa

    2017-12-01

    Recent evidence has highlighted the important role that number-ordering skills play in arithmetic abilities, both in children and adults. In the current study, we demonstrated that number comparison and ordering skills were both significantly related to arithmetic performance in adults, and the effect size was greater in the case of ordering skills. Additionally, we found that the effect of number comparison skills on arithmetic performance was mediated by number-ordering skills. Moreover, performance on comparison and ordering tasks involving the months of the year was also strongly correlated with arithmetic skills, and participants displayed similar (canonical or reverse) distance effects on the comparison and ordering tasks involving months as when the tasks included numbers. This suggests that the processes responsible for the link between comparison and ordering skills and arithmetic performance are not specific to the domain of numbers. Finally, a factor analysis indicated that performance on comparison and ordering tasks loaded on a factor that included performance on a number line task and self-reported spatial thinking styles. These results substantially extend previous research on the role of order processing abilities in mental arithmetic.

  7. Cognitive precursors of arithmetic development in primary school children with cerebral palsy.

    PubMed

    Van Rooijen, M; Verhoeven, L; Smits, D W; Dallmeijer, A J; Becher, J G; Steenbergen, B

    2014-04-01

The aim of this study was to examine the development of arithmetic performance and its cognitive precursors in children with CP from 7 to 9 years of age. Previous research has shown that children with CP are generally delayed in arithmetic performance compared to their typically developing peers. In children with CP, the developmental trajectory of the ability to solve addition and subtraction tasks has, however, rarely been studied, nor have the cognitive factors affecting this trajectory. Sixty children (M=7.2 years, SD=.23 months at study entry) with CP participated in this study. Standardized tests were administered to assess arithmetic performance, word decoding skills, non-verbal intelligence, and working memory. The results showed that the ability to solve addition and subtraction tasks increased over a two-year period. Word decoding skills were positively related to the initial status of arithmetic performance. In addition, non-verbal intelligence and working memory were associated with the initial status and growth rate of arithmetic performance from 7 to 9 years of age. The current study highlights the importance of non-verbal intelligence and working memory to the development of arithmetic performance of children with CP. Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. Separating stages of arithmetic verification: An ERP study with a novel paradigm.

    PubMed

    Avancini, Chiara; Soltész, Fruzsina; Szűcs, Dénes

    2015-08-01

    In studies of arithmetic verification, participants typically encounter two operands and they carry out an operation on these (e.g. adding them). Operands are followed by a proposed answer and participants decide whether this answer is correct or incorrect. However, interpretation of results is difficult because multiple parallel, temporally overlapping numerical and non-numerical processes of the human brain may contribute to task execution. In order to overcome this problem here we used a novel paradigm specifically designed to tease apart the overlapping cognitive processes active during arithmetic verification. Specifically, we aimed to separate effects related to detection of arithmetic correctness, detection of the violation of strategic expectations, detection of physical stimulus properties mismatch and numerical magnitude comparison (numerical distance effects). Arithmetic correctness, physical stimulus properties and magnitude information were not task-relevant properties of the stimuli. We distinguished between a series of temporally highly overlapping cognitive processes which in turn elicited overlapping ERP effects with distinct scalp topographies. We suggest that arithmetic verification relies on two major temporal phases which include parallel running processes. Our paradigm offers a new method for investigating specific arithmetic verification processes in detail. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Do Children Understand Fraction Addition?

    ERIC Educational Resources Information Center

    Braithwaite, David W.; Tian, Jing; Siegler, Robert S.

    2017-01-01

    Many children fail to master fraction arithmetic even after years of instruction. A recent theory of fraction arithmetic (Braithwaite, Pyke, & Siegler, in press) hypothesized that this poor learning of fraction arithmetic procedures reflects poor conceptual understanding of them. To test this hypothesis, we performed three experiments…

  10. Dissociation between arithmetic relatedness and distance effects is modulated by task properties: an ERP study comparing explicit vs. implicit arithmetic processing.

    PubMed

    Avancini, Chiara; Galfano, Giovanni; Szűcs, Dénes

    2014-12-01

    Event-related potential (ERP) studies have detected several characteristic consecutive amplitude modulations in both implicit and explicit mental arithmetic tasks. Implicit tasks typically focused on the arithmetic relatedness effect (in which performance is affected by semantic associations between numbers) while explicit tasks focused on the distance effect (in which performance is affected by the numerical difference of to-be-compared numbers). Both task types elicit morphologically similar ERP waves which were explained in functionally similar terms. However, to date, the relationship between these tasks has not been investigated explicitly and systematically. In order to fill this gap, here we examined whether ERP effects and their underlying cognitive processes in implicit and explicit mental arithmetic tasks differ from each other. The same group of participants performed both an implicit number-matching task (in which arithmetic knowledge is task-irrelevant) and an explicit arithmetic-verification task (in which arithmetic knowledge is task-relevant). 129-channel ERP data differed substantially between tasks. In the number-matching task, the arithmetic relatedness effect appeared as a negativity over left-frontal electrodes whereas the distance effect was more prominent over right centro-parietal electrodes. In the verification task, all probe types elicited similar N2b waves over right fronto-central electrodes and typical centro-parietal N400 effects over central electrodes. The distance effect appeared as an early-rising, long-lasting left parietal negativity. We suggest that ERP effects in the implicit task reflect access to semantic memory networks and to magnitude discrimination, respectively. In contrast, effects of expectation violation are more prominent in explicit tasks and may mask more delicate cognitive processes. Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.

  11. Dissociation between arithmetic relatedness and distance effects is modulated by task properties: An ERP study comparing explicit vs. implicit arithmetic processing

    PubMed Central

    Avancini, Chiara; Galfano, Giovanni; Szűcs, Dénes

    2014-01-01

    Event-related potential (ERP) studies have detected several characteristic consecutive amplitude modulations in both implicit and explicit mental arithmetic tasks. Implicit tasks typically focused on the arithmetic relatedness effect (in which performance is affected by semantic associations between numbers) while explicit tasks focused on the distance effect (in which performance is affected by the numerical difference of to-be-compared numbers). Both task types elicit morphologically similar ERP waves which were explained in functionally similar terms. However, to date, the relationship between these tasks has not been investigated explicitly and systematically. In order to fill this gap, here we examined whether ERP effects and their underlying cognitive processes in implicit and explicit mental arithmetic tasks differ from each other. The same group of participants performed both an implicit number-matching task (in which arithmetic knowledge is task-irrelevant) and an explicit arithmetic-verification task (in which arithmetic knowledge is task-relevant). 129-channel ERP data differed substantially between tasks. In the number-matching task, the arithmetic relatedness effect appeared as a negativity over left-frontal electrodes whereas the distance effect was more prominent over right centro-parietal electrodes. In the verification task, all probe types elicited similar N2b waves over right fronto-central electrodes and typical centro-parietal N400 effects over central electrodes. The distance effect appeared as an early-rising, long-lasting left parietal negativity. We suggest that ERP effects in the implicit task reflect access to semantic memory networks and to magnitude discrimination, respectively. In contrast, effects of expectation violation are more prominent in explicit tasks and may mask more delicate cognitive processes. PMID:25450162

  12. Flow-induced oscillations of a floating moored cylinder

    NASA Astrophysics Data System (ADS)

    Carlson, Daniel; Modarres-Sadeghi, Yahya

    2016-11-01

    An experimental study of flow-induced oscillations of a floating model spar buoy was conducted. The model spar consisted of a floating uniform cylinder moored in a water tunnel test section, and free to oscillate about its mooring attachment point near the center of mass. For the bare cylinder, counter-clockwise (CCW) figure-eight trajectories approaching A* =1 in amplitude were observed at the lower part of the spar for a reduced velocity range of U* =4-11, while its upper part experienced clockwise (CW) orbits. It was hypothesized that the portion of the spar undergoing CCW figure eights is the portion within which the flow excites the structure. By adding helical strakes to the portion of the cylinder with CCW figure eights, the response amplitude was significantly reduced, while adding strakes to portions with clockwise orbital motion had a minimal influence on the amplitude of response. This work is partially supported by the NSF-sponsored IGERT: Offshore Wind Energy Engineering, Environmental Science, and Policy (Grant Number 1068864).

  13. Aztec arithmetic revisited: land-area algorithms and Acolhua congruence arithmetic.

    PubMed

    Williams, Barbara J; Jorge y Jorge, María del Carmen

    2008-04-04

    Acolhua-Aztec land records depicting areas and side dimensions of agricultural fields provide insight into Aztec arithmetic. Hypothesizing that recorded areas resulted from indigenous calculation, in a study of sample quadrilateral fields we found that 60% of the area values could be reproduced exactly by computation. In remaining cases, discrepancies between computed and recorded areas were consistently small, suggesting use of an unknown indigenous arithmetic. In revisiting the research, we discovered evidence for the use of congruence principles, based on proportions between the standard linear Acolhua measure and their units of shorter length. This procedure substitutes for computation with fractions and is labeled "Acolhua congruence arithmetic." The findings also clarify variance between Acolhua and Tenochca linear units, long an issue in understanding Aztec metrology.

  14. Reconfigurable pipelined processor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saccardi, R.J.

    1989-09-19

This patent describes a reconfigurable pipelined processor for processing data. It comprises: a plurality of memory devices for storing bits of data; a plurality of arithmetic units for performing arithmetic functions with the data; cross bar means for connecting the memory devices with the arithmetic units for transferring data therebetween; at least one counter connected with the cross bar means for providing a source of addresses to the memory devices; at least one variable tick delay device connected with each of the memory devices and arithmetic units; and means for providing control bits to the variable tick delay device for variably controlling the input and output operations thereof to selectively delay the memory devices and arithmetic units to align the data for processing in a selected sequence.

  15. The Efficiency and the Scalability of an Explicit Operator on an IBM POWER4 System

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

We present an evaluation of the efficiency and the scalability of an explicit CFD operator on an IBM POWER4 system. The POWER4 architecture exhibits a common trend in HPC architectures: boosting CPU processing power by increasing the number of functional units, while hiding the latency of memory access by increasing the depth of the memory hierarchy. The overall machine performance depends on the ability of the caches, buses, fabric, and memory to feed the functional units with the data to be processed. The operator under study performs computations at the points of a Cartesian grid and involves a few dozen floating point numbers and on the order of 100 floating point operations per grid point. The computations in all grid points are independent. Specifically, we estimate the efficiency of the RHS operator (from the NAS Parallel Benchmark SP) on a single processor as the observed/peak performance ratio. We then estimate the scalability of the operator on a single chip (2 CPUs), a single MCM (8 CPUs), 16 CPUs, and the whole machine (32 CPUs), and repeat the same measurements for a cache-optimized version of the RHS operator. For our measurements we use the HPM (Hardware Performance Monitor) counters available on the POWER4, which allow us to analyze the obtained performance results.

  16. FBC: a flat binary code scheme for fast Manhattan hash retrieval

    NASA Astrophysics Data System (ADS)

    Kong, Yan; Wu, Fuzhang; Gao, Lifa; Wu, Yanjun

    2018-04-01

Hash coding is a widely used technique in approximate nearest neighbor (ANN) search, especially in document search and multimedia (such as image and video) retrieval. Based on the distance measure used, hash methods are generally classified into two categories: Hamming hashing and Manhattan hashing. Benefiting from better neighborhood structure preservation, Manhattan hashing methods outperform earlier methods in search effectiveness. However, because it uses decimal arithmetic operations instead of bit operations, Manhattan hashing is a more time-consuming process, which significantly decreases overall search efficiency. To solve this problem, we present an intuitive hash scheme which uses a Flat Binary Code (FBC) to encode the data points. As a result, the decimal arithmetic used in previous Manhattan hashing can be replaced by the more efficient XOR operator. Experiments show that, with a reasonable growth in memory use, our FBC achieves an average speedup of more than 80% with no loss of search accuracy compared to state-of-the-art Manhattan hashing methods.
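The core idea (replacing decimal Manhattan arithmetic with bit operations) can be illustrated with the classic unary ("thermometer") encoding, under which Hamming distance computed by XOR and popcount reproduces Manhattan distance between quantization levels. This is a sketch of the general principle only, not the paper's actual FBC encoding:

```python
def hamming(a: int, b: int) -> int:
    """Hamming distance via XOR and popcount -- pure bit operations."""
    return bin(a ^ b).count("1")

def manhattan(xs, ys):
    """Manhattan distance on quantization levels -- needs decimal arithmetic."""
    return sum(abs(x - y) for x, y in zip(xs, ys))

def unary(level: int) -> int:
    """Unary ("thermometer") code: level k -> k low-order one bits."""
    return (1 << level) - 1

# Each dimension is quantized into 4 levels (0..3), so each sub-code is 3 bits.
a_levels, b_levels = [3, 0, 2], [1, 1, 2]
a_code = int("".join(format(unary(l), "03b") for l in a_levels), 2)
b_code = int("".join(format(unary(l), "03b") for l in b_levels), 2)

# With unary sub-codes, Hamming distance on the codes equals Manhattan
# distance on the levels, so XOR can stand in for subtraction.
print(manhattan(a_levels, b_levels), hamming(a_code, b_code))  # 3 3
```

The trade-off the abstract alludes to is visible here: the bitwise form is cheaper per comparison but the unary sub-codes are longer than plain binary, hence the "reasonable memory space growth".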

  17. Network-Physics(NP) Bec DIGITAL(#)-VULNERABILITY Versus Fault-Tolerant Analog

    NASA Astrophysics Data System (ADS)

    Alexander, G. K.; Hathaway, M.; Schmidt, H. E.; Siegel, E.

    2011-03-01

    Siegel[AMS Joint Mtg.(2002)-Abs.973-60-124] digits logarithmic-(Newcomb(1881)-Weyl(1914; 1916)-Benford(1938)-"NeWBe"/"OLDbe")-law algebraic-inversion to ONLY BEQS BEC:Quanta/Bosons= digits: Synthesis reveals EMP-like SEVERE VULNERABILITY of ONLY DIGITAL-networks(VS. FAULT-TOLERANT ANALOG INvulnerability) via Barabasi "Network-Physics" relative-``statics''(VS.dynamics-[Willinger-Alderson-Doyle(Not.AMS(5/09)]-]critique); (so called)"Quantum-computing is simple-arithmetic(sans division/ factorization); algorithmic-complexities: INtractibility/ UNdecidability/ INefficiency/NONcomputability / HARDNESS(so MIScalled) "noise"-induced-phase-transitions(NITS) ACCELERATION: Cook-Levin theorem Reducibility is Renormalization-(Semi)-Group fixed-points; number-Randomness DEFINITION via WHAT? Query(VS. Goldreich[Not.AMS(02)] How? mea culpa)can ONLY be MBCS "hot-plasma" versus digit-clumping NON-random BEC; Modular-arithmetic Congruences= Signal X Noise PRODUCTS = clock-model; NON-Shor[Physica A,341,586(04)] BEC logarithmic-law inversion factorization:Watkins number-thy. U stat.-phys.); P=/=NP TRIVIAL Proof: Euclid!!! [(So Miscalled) computational-complexity J-O obviation via geometry.

  18. Real-time Continuous Assessment Method for Mental and Physiological Condition using Heart Rate Variability

    NASA Astrophysics Data System (ADS)

    Yoshida, Yutaka; Yokoyama, Kiyoko; Ishii, Naohiro

It is necessary to monitor daily health condition in order to prevent stress syndrome. In this study, we propose a method for continuously assessing mental and physiological condition, such as work stress or relaxation, in real time using heart rate variability. The instantaneous heart rate (HR) and the ratio of the number of extreme points (NEP) to the number of heart beats were calculated to assess mental and physiological condition. In this method, 20 heart beats were used to calculate these indexes, which were updated at every beat interval. Three conditions (sitting at rest, performing mental arithmetic, and watching a relaxation movie) were assessed using the proposed algorithm. The assessment accuracies were 71.9% and 55.8% for mental arithmetic and the relaxation movie, respectively. Because the method uses only the preceding 20 heart beats, it is suitable for real-time assessment.
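The two indexes described above (instantaneous heart rate and the NEP-to-beats ratio over a 20-beat window) can be sketched as follows; the RR-interval series is synthetic and the exact windowing is an assumption, not the authors' published procedure:

```python
def extreme_points(rr):
    """Count local maxima and minima (extreme points) in an RR-interval series."""
    return sum(1 for i in range(1, len(rr) - 1)
               if (rr[i] - rr[i - 1]) * (rr[i + 1] - rr[i]) < 0)

def hrv_indexes(rr_ms, window=20):
    """For each sliding 20-beat window, yield (mean HR in bpm, NEP/beats ratio).
    rr_ms: RR intervals in milliseconds."""
    for start in range(0, len(rr_ms) - window + 1):
        w = rr_ms[start:start + window]
        hr = 60000.0 / (sum(w) / len(w))   # mean instantaneous heart rate
        yield hr, extreme_points(w) / len(w)

# Synthetic RR series: strictly alternating intervals, so nearly every
# interior beat is an extreme point (high NEP ratio).
rr = [800, 760, 820, 750, 810, 770, 830, 740] * 3
hr0, ratio0 = next(hrv_indexes(rr))
print(round(hr0, 1), round(ratio0, 2))  # 76.5 0.9
```

A high NEP ratio indicates beat-to-beat variability of the kind associated here with physiological state; a smoother series would drive the ratio toward zero.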

  19. Noncommutative geometry and arithmetics

    NASA Astrophysics Data System (ADS)

    Almeida, P.

    2009-09-01

    We intend to illustrate how the methods of noncommutative geometry are currently used to tackle problems in class field theory. Noncommutative geometry enables one to think geometrically in situations in which the classical notion of space formed of points is no longer adequate, and thus a “noncommutative space” is needed; a full account of this approach is given in [3] by its main contributor, Alain Connes. The class field theory, i.e., number theory within the realm of Galois theory, is undoubtedly one of the main achievements in arithmetics, leading to an important algebraic machinery; for a modern overview, see [23]. The relationship between noncommutative geometry and number theory is one of the many themes treated in [22, 7-9, 11], a small part of which we will try to put in a more down-to-earth perspective, illustrating through an example what should be called an “application of physics to mathematics,” and our only purpose is to introduce nonspecialists to this beautiful area.

  20. Measuring mandibular motions

    NASA Technical Reports Server (NTRS)

    Dimeff, J.; Rositano, S.; Taylor, R. C.

    1977-01-01

    Mandibular motion along three axes is measured by three motion transducers on floating yoke that rests against mandible. System includes electronics to provide variety of outputs for data display and processing. Head frame is strapped to test subject's skull to provide fixed point of reference for transducers.

  1. 40 CFR 426.56 - Pretreatment standards for new sources.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 29 2010-07-01 2010-07-01 false Pretreatment standards for new sources. 426.56 Section 426.56 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS GLASS MANUFACTURING POINT SOURCE CATEGORY Float Glass Manufacturing Subcategory...

  2. 40 CFR 426.56 - Pretreatment standards for new sources.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 30 2011-07-01 2011-07-01 false Pretreatment standards for new sources. 426.56 Section 426.56 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS GLASS MANUFACTURING POINT SOURCE CATEGORY Float Glass Manufacturing Subcategory...

  3. 46 CFR 46.10-45 - Nonsubmergence subdivision load lines in salt water.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... which the vessel is floating but not for the weight of fuel, water, etc., required for consumption between the point of departure and the open sea, and no allowance is to be made for bilge or ballast water...

  4. Efficient volume computation for three-dimensional hexahedral cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dukowicz, J.K.

    1988-02-01

    Currently, algorithms for computing the volume of hexahedral cells with ''ruled'' surfaces require a minimum of 122 FLOPs (floating point operations) per cell. A new algorithm is described which reduces the operation count to 57 FLOPs per cell. copyright 1988 Academic Press, Inc.
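    For context, the quantity being computed can be illustrated with a straightforward (and far less FLOP-efficient) decomposition of a planar-faced hexahedron into five tetrahedra. This sketch is not the reduced 57-FLOP algorithm of the abstract, and the vertex ordering is an assumed convention (nodes 0-3 on the bottom face, 4-7 above them in matching order).

    ```python
    def tet_volume(a, b, c, d):
        """Signed tetrahedron volume via a 3x3 determinant / 6."""
        u = [b[i] - a[i] for i in range(3)]
        v = [c[i] - a[i] for i in range(3)]
        w = [d[i] - a[i] for i in range(3)]
        det = (u[0] * (v[1] * w[2] - v[2] * w[1])
             - u[1] * (v[0] * w[2] - v[2] * w[0])
             + u[2] * (v[0] * w[1] - v[1] * w[0]))
        return det / 6.0

    def hex_volume(p):
        """Volume of a planar-faced hexahedron split into 5 tetrahedra."""
        tets = [(0, 1, 3, 4), (1, 2, 3, 6), (1, 4, 5, 6),
                (3, 4, 6, 7), (1, 3, 4, 6)]
        return sum(tet_volume(p[i], p[j], p[k], p[l])
                   for i, j, k, l in tets)

    unit_cube = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),
                 (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)]
    ```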

  5. 75 FR 19880 - Safety Zone; BW PIONEER at Walker Ridge 249, Outer Continental Shelf FPSO, Gulf of Mexico

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-16

    ... BW PIONEER, a Floating Production, Storage and Offloading (FPSO) system, at Walker Ridge 249 in the... point at 26[deg]41'46.25'' N and 090[deg]30'30.16'' W. This action is based on a thorough and... regulations. The FPSO can swing in a 360 degree arc around the center point. The safety zone will reduce...

  6. Program of arithmetic improvement by means of cognitive enhancement: an intervention in children with special educational needs.

    PubMed

    Deaño, Manuel Deaño; Alfonso, Sonia; Das, Jagannath Prasad

    2015-03-01

    This study reports the cognitive and arithmetic improvement produced by a mathematical intervention based on the PASS Remedial Program (PREP), which aims to improve specific cognitive processes underlying academic skills such as arithmetic. For this purpose, a group of 20 students from the last four grades of primary education was divided into two groups. One group (n=10) received training in the program and the other served as control. Students were assessed before and after the intervention on the PASS cognitive processes (planning, attention, simultaneous and successive processing), general level of intelligence, and arithmetic performance in calculation and problem solving. Performance of children in the experimental group was significantly higher than that of the control group in cognitive processes and arithmetic. This joint enhancement of cognitive and arithmetic processes resulted from the operationalization of training, which promotes task encoding, attention and planning, and learning by induction, mediation and verbalization. The implications of this are discussed. Copyright © 2014 Elsevier Ltd. All rights reserved.

  7. Contributions of Domain-General Cognitive Resources and Different Forms of Arithmetic Development to Pre-Algebraic Knowledge

    PubMed Central

    Fuchs, Lynn S.; Compton, Donald L.; Fuchs, Douglas; Powell, Sarah R.; Schumacher, Robin F.; Hamlett, Carol L.; Vernier, Emily; Namkung, Jessica M.; Vukovic, Rose K.

    2012-01-01

    The purpose of this study was to investigate the contributions of domain-general cognitive resources and different forms of arithmetic development to individual differences in pre-algebraic knowledge. Children (n=279; mean age=7.59 yrs) were assessed on 7 domain-general cognitive resources as well as arithmetic calculations and word problems at start of 2nd grade and on calculations, word problems, and pre-algebraic knowledge at end of 3rd grade. Multilevel path analysis, controlling for instructional effects associated with the sequence of classrooms in which students were nested across grades 2–3, indicated arithmetic calculations and word problems are foundational to pre-algebraic knowledge. Also, results revealed direct contributions of nonverbal reasoning and oral language to pre-algebraic knowledge, beyond indirect effects that are mediated via arithmetic calculations and word problems. By contrast, attentive behavior, phonological processing, and processing speed contributed to pre-algebraic knowledge only indirectly via arithmetic calculations and word problems. PMID:22409764

  8. A natural history of mathematics: George Peacock and the making of English algebra.

    PubMed

    Lambert, Kevin

    2013-06-01

    In a series of papers read to the Cambridge Philosophical Society through the 1820s, the Cambridge mathematician George Peacock laid the foundation for a natural history of arithmetic that would tell a story of human progress from counting to modern arithmetic. The trajectory of that history, Peacock argued, established algebraic analysis as a form of universal reasoning that used empirically warranted operations of mind to think with symbols on paper. The science of counting would suggest arithmetic, arithmetic would suggest arithmetical algebra, and, finally, arithmetical algebra would suggest symbolic algebra. This philosophy of suggestion provided the foundation for Peacock's "principle of equivalent forms," which justified the practice of nineteenth-century English symbolic algebra. Peacock's philosophy of suggestion owed a considerable debt to the early Cambridge Philosophical Society culture of natural history. The aim of this essay is to show how that culture of natural history was constitutively significant to the practice of nineteenth-century English algebra.

  9. JPRS Report, Soviet Union, Military Affairs.

    DTIC Science & Technology

    1990-06-14

    live better. What am I proposing? That in order to at least come anywhere close to a 41-hour workweek, it is important to differentiate the ...time off and a 41-hour workweek, are extended to the officer personnel. It was pointed out that any limitations here have no legitimate basis... Procuracy. Here Is the "Arithmetic"... Possibly, somewhere the 41-hour workweek is observed in the troop units but this is not so on

  10. Arithmetic difficulties in children with cerebral palsy are related to executive function and working memory.

    PubMed

    Jenks, Kathleen M; de Moor, Jan; van Lieshout, Ernest C D M

    2009-07-01

    Although it is believed that children with cerebral palsy are at high risk for learning difficulties and arithmetic difficulties in particular, few studies have investigated this issue. Arithmetic ability was longitudinally assessed in children with cerebral palsy in special (n = 41) and mainstream education (n = 16) and controls in mainstream education (n = 16). Second grade executive function and working memory scores were used to predict third grade arithmetic accuracy and response time. Children with cerebral palsy in special education were less accurate and slower than their peers on all arithmetic tests, even after controlling for IQ, whereas children with cerebral palsy in mainstream education performed as well as controls. Although the performance gap became smaller over time, it did not disappear. Children with cerebral palsy in special education showed evidence of executive function and working memory deficits in shifting, updating, visuospatial sketchpad and phonological loop (for digits, not words) whereas children with cerebral palsy in mainstream education only had a deficit in visuospatial sketchpad. Hierarchical regression revealed that, after controlling for intelligence, components of executive function and working memory explained large proportions of unique variance in arithmetic accuracy and response time and these variables were sufficient to explain group differences in simple, but not complex, arithmetic. Children with cerebral palsy are at risk for specific executive function and working memory deficits that, when present, increase the risk for arithmetic difficulties in these children.

  11. Drift trajectories of a floating human body simulated in a hydraulic model of Puget Sound.

    PubMed

    Ebbesmeyer, C C; Haglund, W D

    1994-01-01

    After a young man jumped off a 221-foot (67 meters) high bridge, the drift of the body that beached 20 miles (32 km) away at Alki Point in Seattle, Washington was simulated with a hydraulic model. Simulations for the appropriate time period were performed using a small floating bead to represent the body in the hydraulic model at the University of Washington. Bead movements were videotaped and transferred to Computer Aided Drafting (AutoCAD) charts on a personal computer. Because of strong tidal currents in the narrow passage under the bridge (The Narrows near Tacoma, WA), small changes in the time of the jump (+/- 30 minutes) made large differences in the distance the body traveled (30 miles; 48 km). Hydraulic and other types of oceanographic models may be located by contacting technical experts known as physical oceanographers at local universities, and can be utilized to demonstrate trajectories of floating objects and the time required to arrive at selected locations. Potential applications for forensic death investigators include: to be able to set geographic and time limits for searches; determine potential origin of remains found floating or beached; and confirm and correlate information regarding entry into the water and sightings of remains.

  12. Conceptual Knowledge of Fraction Arithmetic

    ERIC Educational Resources Information Center

    Siegler, Robert S.; Lortie-Forgues, Hugues

    2015-01-01

    Understanding an arithmetic operation implies, at minimum, knowing the direction of effects that the operation produces. However, many children and adults, even those who execute arithmetic procedures correctly, may lack this knowledge on some operations and types of numbers. To test this hypothesis, we presented preservice teachers (Study 1),…

  13. Disabilities of Arithmetic and Mathematical Reasoning: Perspectives from Neurology and Neuropsychology.

    ERIC Educational Resources Information Center

    Rourke, Byron P.; Conway, James A.

    1997-01-01

    Reviews current research on brain-behavior relationships in disabilities of arithmetic and mathematical reasoning from both a neurological and a neuropsychological perspective. Defines developmental dyscalculia and the developmental importance of right versus left hemisphere integrity for the mediation of arithmetic learning and explores…

  14. MM Algorithms for Geometric and Signomial Programming

    PubMed Central

    Lange, Kenneth; Zhou, Hua

    2013-01-01

    This paper derives new algorithms for signomial programming, a generalization of geometric programming. The algorithms are based on a generic principle for optimization called the MM algorithm. In this setting, one can apply the geometric-arithmetic mean inequality and a supporting hyperplane inequality to create a surrogate function with parameters separated. Thus, unconstrained signomial programming reduces to a sequence of one-dimensional minimization problems. Simple examples demonstrate that the MM algorithm derived can converge to a boundary point or to one point of a continuum of minimum points. Conditions under which the minimum point is unique or occurs in the interior of parameter space are proved for geometric programming. Convergence to an interior point occurs at a linear rate. Finally, the MM framework easily accommodates equality and inequality constraints of signomial type. For the most important special case, constrained quadratic programming, the MM algorithm involves very simple updates. PMID:24634545
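    The majorization step described above can be illustrated on a toy problem of our own choosing (not one from the paper). The coupling term x·y of f(x, y) = x·y + 4/x + 4/y is majorized with the arithmetic-geometric mean inequality, x·y ≤ y_m·x²/(2·x_m) + x_m·y²/(2·y_m), which holds with equality at the current iterate (x_m, y_m). The surrogate separates the variables, so each MM step reduces to two one-dimensional problems with closed-form minimizers, and the descent property guarantees f never increases.

    ```python
    def f(x, y):
        """Toy posynomial objective with a coupling term x*y."""
        return x * y + 4.0 / x + 4.0 / y

    def mm_step(xm, ym):
        """One MM iteration: minimize the separated AM-GM surrogate.
        Each coordinate solves min a*t**2 + 4/t  ->  t = (2/a)**(1/3),
        with a = ym/(2*xm) for x and a = xm/(2*ym) for y."""
        x = (4.0 * xm / ym) ** (1.0 / 3.0)
        y = (4.0 * ym / xm) ** (1.0 / 3.0)
        return x, y

    x, y = 3.0, 0.5
    values = [f(x, y)]
    for _ in range(60):
        x, y = mm_step(x, y)
        values.append(f(x, y))
    ```

    For this symmetric objective the iterates converge to x = y = 4^(1/3), the unique interior minimum.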

  15. MM Algorithms for Geometric and Signomial Programming.

    PubMed

    Lange, Kenneth; Zhou, Hua

    2014-02-01

    This paper derives new algorithms for signomial programming, a generalization of geometric programming. The algorithms are based on a generic principle for optimization called the MM algorithm. In this setting, one can apply the geometric-arithmetic mean inequality and a supporting hyperplane inequality to create a surrogate function with parameters separated. Thus, unconstrained signomial programming reduces to a sequence of one-dimensional minimization problems. Simple examples demonstrate that the MM algorithm derived can converge to a boundary point or to one point of a continuum of minimum points. Conditions under which the minimum point is unique or occurs in the interior of parameter space are proved for geometric programming. Convergence to an interior point occurs at a linear rate. Finally, the MM framework easily accommodates equality and inequality constraints of signomial type. For the most important special case, constrained quadratic programming, the MM algorithm involves very simple updates.

  16. Two-point correlation function for Dirichlet L-functions

    NASA Astrophysics Data System (ADS)

    Bogomolny, E.; Keating, J. P.

    2013-03-01

    The two-point correlation function for the zeros of Dirichlet L-functions at a height E on the critical line is calculated heuristically using a generalization of the Hardy-Littlewood conjecture for pairs of primes in arithmetic progression. The result matches the conjectured random-matrix form in the limit as E → ∞ and, importantly, includes finite-E corrections. These finite-E corrections differ from those in the case of the Riemann zeta-function, obtained in Bogomolny and Keating (1996 Phys. Rev. Lett. 77 1472), by certain finite products of primes which divide the modulus of the primitive character used to construct the L-function in question.

  17. Children Learn Spurious Associations in Their Math Textbooks: Examples from Fraction Arithmetic

    ERIC Educational Resources Information Center

    Braithwaite, David W.; Siegler, Robert S.

    2018-01-01

    Fraction arithmetic is among the most important and difficult topics children encounter in elementary and middle school mathematics. Braithwaite, Pyke, and Siegler (2017) hypothesized that difficulties learning fraction arithmetic often reflect reliance on associative knowledge--rather than understanding of mathematical concepts and procedures--to…

  18. A Computational Model of Fraction Arithmetic

    ERIC Educational Resources Information Center

    Braithwaite, David W.; Pyke, Aryn A.; Siegler, Robert S.

    2017-01-01

    Many children fail to master fraction arithmetic even after years of instruction, a failure that hinders their learning of more advanced mathematics as well as their occupational success. To test hypotheses about why children have so many difficulties in this area, we created a computational model of fraction arithmetic learning and presented it…

  19. Arithmetic 400. A Computer Educational Program.

    ERIC Educational Resources Information Center

    Firestein, Laurie

    "ARITHMETIC 400" is the first of the next generation of educational programs designed to encourage thinking about arithmetic problems. Presented in video game format, performance is a measure of correctness, speed, accuracy, and fortune as well. Play presents a challenge to individuals at various skill levels. The program, run on an Apple…

  20. Simulating Network Retrieval of Arithmetic Facts.

    ERIC Educational Resources Information Center

    Ashcraft, Mark H.

    This report describes a simulation of adults' retrieval of arithmetic facts from a network-based memory representation. The goals of the simulation project are to: demonstrate in specific form the nature of a spreading activation model of mental arithmetic; account for three important reaction time effects observed in laboratory investigations;…

  1. Individual Differences in Children's Understanding of Inversion and Arithmetical Skill

    ERIC Educational Resources Information Center

    Gilmore, Camilla K.; Bryant, Peter

    2006-01-01

    Background and aims: In order to develop arithmetic expertise, children must understand arithmetic principles, such as the inverse relationship between addition and subtraction, in addition to learning calculation skills. We report two experiments that investigate children's understanding of the principle of inversion and the relationship between…

  2. The Practice of Arithmetic in Liberian Schools.

    ERIC Educational Resources Information Center

    Brenner, Mary E.

    1985-01-01

    Describes a study of Liberian schools in which students of the Vai tribe are instructed in Western mathematical practices which differ from those of the students' home culture. Reports that the Vai children employed syncretic arithmetic practices, combining two distinct systems of arithmetic in a classroom environment that tacitly facilitated the…

  3. From Arithmetic Sequences to Linear Equations

    ERIC Educational Resources Information Center

    Matsuura, Ryota; Harless, Patrick

    2012-01-01

    The first part of the article focuses on deriving the essential properties of arithmetic sequences by appealing to students' sense making and reasoning. The second part describes how to guide students to translate their knowledge of arithmetic sequences into an understanding of linear equations. Ryota Matsuura originally wrote these lessons for…

  4. Baby Arithmetic: One Object Plus One Tone

    ERIC Educational Resources Information Center

    Kobayashi, Tessei; Hiraki, Kazuo; Mugitani, Ryoko; Hasegawa, Toshikazu

    2004-01-01

    Recent studies using a violation-of-expectation task suggest that preverbal infants are capable of recognizing basic arithmetical operations involving visual objects. There is still debate, however, over whether their performance is based on any expectation of the arithmetical operations, or on a general perceptual tendency to prefer visually…

  5. Conceptual Knowledge of Decimal Arithmetic

    ERIC Educational Resources Information Center

    Lortie-Forgues, Hugues; Siegler, Robert S.

    2016-01-01

    In two studies (N's = 55 and 54), we examined a basic form of conceptual understanding of rational number arithmetic, the direction of effect of decimal arithmetic operations, at a level of detail useful for informing instruction. Middle school students were presented tasks examining knowledge of the direction of effects (e.g., "True or…

  6. IBM system/360 assembly language interval arithmetic software

    NASA Technical Reports Server (NTRS)

    Phillips, E. J.

    1972-01-01

    Computer software designed to perform interval arithmetic is described. An interval is defined as the set of all real numbers between two given numbers, including or excluding one or both endpoints. Interval arithmetic consists of the various elementary arithmetic operations defined on the set of all intervals, such as interval addition, subtraction, union, etc. One of the main applications of interval arithmetic is in the area of error analysis of computer calculations. For example, it has been used successfully to compute bounds on rounding errors in the solution of linear algebraic systems, error bounds in numerical solutions of ordinary differential equations, as well as integral equations and boundary value problems. The described software enables users to implement algorithms of the type described in the references efficiently on the IBM 360 system.
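    The elementary interval operations described above can be sketched as follows. The original software is System/360 assembly, so this Python class is purely illustrative, and it omits the directed (outward) rounding a rigorous implementation would need to guarantee enclosure.

    ```python
    class Interval:
        """Closed interval [lo, hi]; illustrative only, no directed rounding."""

        def __init__(self, lo, hi):
            assert lo <= hi
            self.lo, self.hi = lo, hi

        def __add__(self, other):
            return Interval(self.lo + other.lo, self.hi + other.hi)

        def __sub__(self, other):
            # subtracting an interval flips its endpoints
            return Interval(self.lo - other.hi, self.hi - other.lo)

        def __mul__(self, other):
            # the extremes are attained at endpoint products
            products = [self.lo * other.lo, self.lo * other.hi,
                        self.hi * other.lo, self.hi * other.hi]
            return Interval(min(products), max(products))

        def __contains__(self, x):
            return self.lo <= x <= self.hi

    # Every real result of a op b with a in [1, 2] and b in [-3, -1]
    # is enclosed by the corresponding interval result.
    a = Interval(1.0, 2.0)
    b = Interval(-3.0, -1.0)
    ```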

  7. Factor structure of the Norwegian version of the WAIS-III in a clinical sample: the arithmetic problem.

    PubMed

    Egeland, Jens; Bosnes, Ole; Johansen, Hans

    2009-09-01

    Confirmatory Factor Analyses (CFA) of the Wechsler Adult Intelligence Scale-III (WAIS-III) lend partial support to the four-factor model proposed in the test manual. However, the Arithmetic subtest has been especially difficult to allocate to one factor. Using the new Norwegian WAIS-III version, we tested factor models differing in the number of factors and in the placement of the Arithmetic subtest in a mixed clinical sample (n = 272). Only the four-factor solutions had adequate goodness-of-fit values. Allowing Arithmetic to load on both the Verbal Comprehension and Working Memory factors provided a more parsimonious solution compared to considering the subtest only as a measure of Working Memory. Effects of education were particularly high for both the Verbal Comprehension tests and Arithmetic.

  8. If Gravity is Geometry, is Dark Energy just Arithmetic?

    NASA Astrophysics Data System (ADS)

    Czachor, Marek

    2017-04-01

    Arithmetic operations (addition, subtraction, multiplication, division), as well as the calculus they imply, are non-unique. The examples of four-dimensional spaces, R+^4 and (-L/2, L/2)^4, are considered where different types of arithmetic and calculus coexist simultaneously. In all the examples there exists a non-Diophantine arithmetic that makes the space globally Minkowskian, and thus the laws of physics are formulated in terms of the corresponding calculus. However, when one switches to the `natural' Diophantine arithmetic and calculus, the Minkowskian character of the space is lost and what one effectively obtains is a Lorentzian manifold. I discuss in more detail the problem of electromagnetic fields produced by a pointlike charge. The solution has the standard form when expressed in terms of the non-Diophantine formalism. When the `natural' formalism is used, the same solution looks as if the fields were created by a charge located in an expanding universe, with nontrivially accelerating expansion. The effect is clearly visible also in solutions of the Friedmann equation with vanishing cosmological constant. All of this suggests that phenomena attributed to dark energy may be a manifestation of a mismatch between the arithmetic employed in mathematical modeling and the one occurring at the level of natural laws. Arithmetic is as physical as geometry.
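    Non-Diophantine arithmetics of this kind are typically built from a bijection f between a set and the real line, with addition defined as x ⊕ y = f⁻¹(f(x) + f(y)) and multiplication analogously. The sketch below uses a tangent bijection on (-L/2, L/2) as one convenient illustration, not the specific construction of the paper: the finite box then behaves like the whole real line under its own arithmetic.

    ```python
    import math

    L = 2.0  # box width; illustrative value

    def f(x):
        """Bijection (-L/2, L/2) -> R."""
        return math.tan(math.pi * x / L)

    def f_inv(u):
        """Inverse bijection R -> (-L/2, L/2)."""
        return (L / math.pi) * math.atan(u)

    def oplus(x, y):
        """Non-Diophantine addition: commutative, associative,
        and never leaves the interval (-L/2, L/2)."""
        return f_inv(f(x) + f(y))

    def otimes(x, y):
        """Non-Diophantine multiplication; its unit is f_inv(1)."""
        return f_inv(f(x) * f(y))
    ```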

  9. Children learn spurious associations in their math textbooks: Examples from fraction arithmetic.

    PubMed

    Braithwaite, David W; Siegler, Robert S

    2018-04-26

    Fraction arithmetic is among the most important and difficult topics children encounter in elementary and middle school mathematics. Braithwaite, Pyke, and Siegler (2017) hypothesized that difficulties learning fraction arithmetic often reflect reliance on associative knowledge-rather than understanding of mathematical concepts and procedures-to guide choices of solution strategies. They further proposed that this associative knowledge reflects distributional characteristics of the fraction arithmetic problems children encounter. To test these hypotheses, we examined textbooks and middle school children in the United States (Experiments 1 and 2) and China (Experiment 3). We asked the children to predict which arithmetic operation would accompany a specified pair of operands, to generate operands to accompany a specified arithmetic operation, and to match operands and operations. In both countries, children's responses indicated that they associated operand pairs having equal denominators with addition and subtraction, and operand pairs having a whole number and a fraction with multiplication and division. The children's associations paralleled the textbook input in both countries, which was consistent with the hypothesis that children learned the associations from the practice problems. Differences in the effects of such associative knowledge on U.S. and Chinese children's fraction arithmetic performance are discussed, as are implications of these differences for educational practice. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  10. Is the SNARC effect related to the level of mathematics? No systematic relationship observed despite more power, more repetitions, and more direct assessment of arithmetic skill.

    PubMed

    Cipora, Krzysztof; Nuerk, Hans-Christoph

    2013-01-01

    The SNARC (spatial-numerical association of response codes) described that larger numbers are responded faster with the right hand and smaller numbers with the left hand. It is held in the literature that arithmetically skilled and nonskilled adults differ in the SNARC. However, the respective data are descriptive, and the decisive tests are nonsignificant. Possible reasons for this nonsignificance could be that in previous studies (a) very small samples were used, (b) there were too few repetitions producing too little power and, consequently, reliabilities that were too small to reach conventional significance levels for the descriptive skill differences in the SNARC, and (c) general mathematical ability was assessed by the field of study of students, while individual arithmetic skills were not examined. Therefore we used a much bigger sample, a lot more repetitions, and direct assessment of arithmetic skills to explore relations between the SNARC effect and arithmetic skills. Nevertheless, a difference in SNARC effect between arithmetically skilled and nonskilled participants was not obtained. Bayesian analysis showed positive evidence of a true null effect, not just a power problem. Hence we conclude that the idea that arithmetically skilled and nonskilled participants generally differ in the SNARC effect is not warranted by our data.

  11. Pointright: a system to redirect mouse and keyboard control among multiple machines

    DOEpatents

    Johanson, Bradley E [Palo Alto, CA; Winograd, Terry A [Stanford, CA; Hutchins, Gregory M [Mountain View, CA

    2008-09-30

    The present invention provides a software system, PointRight, that allows for smooth and effortless control of pointing and input devices among multiple displays. With PointRight, a single free-floating mouse and keyboard can be used to control multiple screens. When the cursor reaches the edge of a screen it seamlessly moves to the adjacent screen and keyboard control is simultaneously redirected to the appropriate machine. Laptops may also redirect their keyboard and pointing device, and multiple pointers are supported simultaneously. The system automatically reconfigures itself as displays go on, go off, or change the machine they display.
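    A hypothetical sketch of the edge-crossing idea follows. The screen names, topology table, and resolution are invented for illustration; the patented system is considerably more elaborate (dynamic reconfiguration, multiple simultaneous pointers, keyboard redirection).

    ```python
    # topology: (screen, edge) -> adjacent screen; invented example
    TOPOLOGY = {("left_display", "right"): "right_display",
                ("right_display", "left"): "left_display"}
    WIDTH, HEIGHT = 1920, 1080  # assumed uniform resolution

    def move(screen, x, y, dx, dy):
        """Apply a pointer delta; hop to the adjacent screen when an
        edge with a known neighbor is crossed, else clamp."""
        x, y = x + dx, y + dy
        if x >= WIDTH and (screen, "right") in TOPOLOGY:
            screen, x = TOPOLOGY[(screen, "right")], x - WIDTH
        elif x < 0 and (screen, "left") in TOPOLOGY:
            screen, x = TOPOLOGY[(screen, "left")], x + WIDTH
        x = max(0, min(WIDTH - 1, x))
        y = max(0, min(HEIGHT - 1, y))
        return screen, x, y
    ```

    In the real system, redirecting keyboard focus to the machine that owns the target screen would accompany each hop.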

  12. 33 CFR 207.200 - Mississippi River below mouth of Ohio River, including South and Southwest Passes; use...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... banks of the river, and no floating plant other than launches and similar small craft shall land against... white background readable from the waterway side, placed on each side of the river near the point where...

  13. A Real-Time Marker-Based Visual Sensor Based on a FPGA and a Soft Core Processor

    PubMed Central

    Tayara, Hilal; Ham, Woonchul; Chong, Kil To

    2016-01-01

    This paper introduces a real-time marker-based visual sensor architecture for mobile robot localization and navigation. A hardware acceleration architecture for a post-video-processing system was implemented on a field-programmable gate array (FPGA). The pose calculation algorithm was implemented in a System on Chip (SoC) with an Altera Nios II soft-core processor. For every frame, single-pass image segmentation and Feature Accelerated Segment Test (FAST) corner detection were used for extracting the predefined markers with known geometries in the FPGA. The Coplanar PosIT algorithm was implemented on the Nios II soft-core processor, supplied with floating-point hardware for accelerating floating-point operations. Trigonometric functions have been approximated using Taylor series and cubic approximation using Lagrange polynomials. An inverse square root method has been implemented for approximating square root computations. Real-time results have been achieved, and pixel streams have been processed on the fly without any need to buffer the input frame for further implementation. PMID:27983714
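    The two numeric shortcuts mentioned in the abstract can be sketched as follows. The term counts, iteration counts, and initial guess below are our own choices for a floating-point illustration, not the paper's FPGA implementation details.

    ```python
    def sin_taylor(x, terms=6):
        """sin(x) from its truncated Taylor series; accurate for
        |x| <= pi/2 after range reduction (omitted for brevity)."""
        result, term = 0.0, x
        for n in range(terms):
            result += term
            term *= -x * x / ((2 * n + 2) * (2 * n + 3))
        return result

    def inv_sqrt(a, iterations=6):
        """Approximate 1/sqrt(a) by Newton's method on
        f(y) = 1/y**2 - a, update y <- y*(1.5 - 0.5*a*y*y).
        The crude initial guess is adequate for moderate a."""
        y = 1.0 / a if a > 1.0 else 1.0
        for _ in range(iterations):
            y = y * (1.5 - 0.5 * a * y * y)
        return y
    ```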

  14. Verification of Numerical Programs: From Real Numbers to Floating Point Numbers

    NASA Technical Reports Server (NTRS)

    Goodloe, Alwyn E.; Munoz, Cesar; Kirchner, Florent; Correnson, Loiec

    2013-01-01

    Numerical algorithms lie at the heart of many safety-critical aerospace systems. The complexity and hybrid nature of these systems often requires the use of interactive theorem provers to verify that these algorithms are logically correct. Usually, proofs involving numerical computations are conducted in the infinitely precise realm of the field of real numbers. However, numerical computations in these algorithms are often implemented using floating point numbers. The use of a finite representation of real numbers introduces uncertainties as to whether the properties verified in the theoretical setting hold in practice. This short paper describes work in progress aimed at addressing these concerns. Given a formally proven algorithm, written in the Program Verification System (PVS), the Frama-C suite of tools is used to identify sufficient conditions and verify that under such conditions the rounding errors arising in a C implementation of the algorithm do not affect its correctness. The technique is illustrated using an algorithm for detecting loss of separation among aircraft.
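    A minimal illustration of the concern motivating this work: identities that hold over the reals, such as associativity of addition, can fail in IEEE 754 binary64 arithmetic, so a proof conducted over the reals does not automatically transfer to the floating-point implementation.

    ```python
    # Associativity of addition fails in binary64: adding 1.0 to -1e16
    # is absorbed by rounding (the spacing between doubles near 1e16
    # is 2.0), so grouping changes the result.
    a, b, c = 1e16, -1e16, 1.0
    left = (a + b) + c     # exact cancellation first, then + 1.0
    right = a + (b + c)    # b + c rounds back to -1e16; the 1.0 is lost

    real_associative = (left == right)   # False
    tenth_exact = (0.1 + 0.2 == 0.3)     # also False: 0.1 has no exact
                                         # binary representation
    ```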

  15. A performance comparison of the IBM RS/6000 and the Astronautics ZS-1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, W.M.; Abraham, S.G.; Davidson, E.S.

    1991-01-01

    Concurrent uniprocessor architectures, of which vector and superscalar are two examples, are designed to capitalize on fine-grain parallelism. The authors have developed a performance evaluation method for comparing and improving these architectures, and in this article they present the methodology and a detailed case study of two machines. The runtime of many programs is dominated by time spent in loop constructs - for example, Fortran Do-loops. Loops generally comprise two logical processes: The access process generates addresses for memory operations while the execute process operates on floating-point data. Memory access patterns typically can be generated independently of the data in the execute process. This independence allows the access process to slip ahead, thereby hiding memory latency. The IBM 360/91 was designed in 1967 to achieve slip dynamically, at runtime. One CPU unit executes integer operations while another handles floating-point operations. Other machines, including the VAX 9000 and the IBM RS/6000, use a similar approach.

  16. A Real-Time Marker-Based Visual Sensor Based on a FPGA and a Soft Core Processor.

    PubMed

    Tayara, Hilal; Ham, Woonchul; Chong, Kil To

    2016-12-15

    This paper introduces a real-time marker-based visual sensor architecture for mobile robot localization and navigation. A hardware acceleration architecture for a post-video-processing system was implemented on a field-programmable gate array (FPGA). The pose calculation algorithm was implemented in a System on Chip (SoC) with an Altera Nios II soft-core processor. For every frame, single-pass image segmentation and Feature Accelerated Segment Test (FAST) corner detection were used for extracting the predefined markers with known geometries in the FPGA. The Coplanar PosIT algorithm was implemented on the Nios II soft-core processor, supplied with floating-point hardware for accelerating floating-point operations. Trigonometric functions have been approximated using Taylor series and cubic approximation using Lagrange polynomials. An inverse square root method has been implemented for approximating square root computations. Real-time results have been achieved, and pixel streams have been processed on the fly without any need to buffer the input frame for further implementation.

  17. Floating-Point Modules Targeted for Use with RC Compilation Tools

    NASA Technical Reports Server (NTRS)

    Sahin, Ibrahim; Gloster, Clay S.

    2000-01-01

    Reconfigurable Computing (RC) has emerged as a viable computing solution for computationally intensive applications. Several applications have been mapped to RC systems, and in most cases they provided the smallest published execution times. Although RC systems offer significant performance advantages over general-purpose processors, they require more application development time. This increased development time provides the motivation to develop an optimized module library, with an assembly-language-style instruction interface, for use with future RC systems that will reduce development time significantly. In this paper, we present area/performance metrics for several different types of floating-point (FP) modules that can be utilized to develop complex FP applications. These modules are highly pipelined and optimized for both speed and area. Using these modules, an example application, FP matrix multiplication, is also presented. Our results and experience show that with these modules an 8-10X speedup over general-purpose processors can be achieved.

  18. Fortran Program for X-Ray Photoelectron Spectroscopy Data Reformatting

    NASA Technical Reports Server (NTRS)

    Abel, Phillip B.

    1989-01-01

    A FORTRAN program has been written for an IBM PC/XT, AT, or compatible microcomputer (personal computer, PC) that converts a column of ASCII-format numbers into a binary-format file suitable for interactive analysis on a Digital Equipment Corporation (DEC) computer running the VGS-5000 Enhanced Data Processing (EDP) software package. The incompatible floating-point number representations of the two computers were compared, and a subroutine was created to store floating-point numbers on the IBM PC in a form that can be read directly by the DEC computer. Any file transfer protocol with provision for binary data can be used to transmit the resulting file from the PC to the DEC machine. The data file header required by the EDP programs for an x-ray photoelectron spectrum is also written to the file. The user is prompted for the relevant experimental parameters, which are then properly coded into the format used internally by all of the VGS-5000 series EDP packages.
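    The abstract does not list the conversion subroutine itself, so the sketch below is a hypothetical reconstruction, in Python, of one common IEEE-single to VAX F_floating conversion: rebias the exponent by +2 and emit the result as two 16-bit words in PDP-11 order. The function name and the edge-case handling are assumptions, not the program's actual code.

```python
import struct

def ieee_to_vax_f(x: float) -> bytes:
    """Convert a number to 4-byte VAX F_floating (sketch only:
    overflow, rounding, and reserved operands are not handled)."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    sign = (bits >> 31) & 1
    exp = (bits >> 23) & 0xFF
    frac = bits & 0x7FFFFF
    if exp == 0:                  # IEEE zero/subnormal -> VAX true zero
        return b"\x00\x00\x00\x00"
    vexp = exp + 2                # IEEE 1.f * 2^(E-127) == VAX 0.1f * 2^(E+2-128)
    word0 = (sign << 15) | (vexp << 7) | (frac >> 16)  # sign, exponent, top 7 fraction bits
    word1 = frac & 0xFFFF                              # low 16 fraction bits
    return struct.pack("<HH", word0, word1)            # PDP-11 word order
```

    The word-swapped layout is why a byte-for-byte copy of an IEEE file is unreadable on the DEC side even though both formats use a sign bit, an 8-bit exponent, and a 23-bit fraction.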

  19. Contents and occurrence of cadmium in the coals from Guizhou province, China.

    PubMed

    Song, Dangyu; Wang, Mingshi; Zhang, Junying; Zheng, Chuguang

    2008-10-01

    Eleven raw coal samples were collected from the Liuzhi, Suicheng, Zunyi, Xingren, Xingyi, and Anlong districts in Guizhou Province, Southwest China. The cadmium (Cd) content of the coal was determined using inductively coupled plasma mass spectrometry (ICP-MS). Cd contents ranged from 0.146 to 2.74 ppm (whole-coal basis), with an average of 1.09 ppm, much higher than the arithmetic mean of Cd in Chinese coals (0.25 ppm). To determine its mode of occurrence in coal, float-sink analysis and a progressive-release coal flotation test were conducted on two raw coal samples, and the Cd content and ash yield of the flotation products were determined. The organic matter was removed by low-temperature ashing (LTA). X-ray diffraction (XRD) was used to differentiate the major, minor, and trace minerals in the LTA from the different flotation subproducts. Quartz, kaolinite, pyrite, and calcite were found to dominate the mineral matter, with some anatase, muscovite, and illite. Quantitative analysis of the minerals in the LTA was then conducted using Materials Analysis Using Diffraction (MAUD), based on the Rietveld refinement method. The results show that Cd has a strong association with kaolinite.

  20. Video- Demonstration of Tea and Sugar in Water Onboard the International Space Station (ISS)

    NASA Technical Reports Server (NTRS)

    2003-01-01

    Saturday Morning Science, the science of opportunity series of applied experiments and demonstrations, performed aboard the International Space Station (ISS) by Expedition 6 astronaut Dr. Don Pettit, revealed some remarkable findings. Imagine what would happen if a collection of loosely attractive particles were confined in a relatively small region in the floating environment of space. Would they self organize into a compact structure, loosely organize into a fractal, or just continue to float around in their container? In this video clip, Dr. Pettit explored the possibilities. At one point he remarks, 'These things look like pictures from the Hubble Space Telescope.' Watch the video and see what happens!

  1. High-order computer-assisted estimates of topological entropy

    NASA Astrophysics Data System (ADS)

    Grote, Johannes

    The concept of Taylor Models is introduced, which offers highly accurate C0 enclosures of functional dependencies by combining high-order Taylor polynomial approximation of functions with rigorous bounds on the truncation error, computed using verified interval arithmetic. The focus of this work is on the application of Taylor Models in algorithms for strongly nonlinear dynamical systems. A method to obtain sharp rigorous enclosures of Poincare maps for certain types of flows and surfaces is developed, and numerical examples are presented. Differential-algebraic techniques allow the efficient and accurate computation of polynomial approximations for invariant curves of certain planar maps around hyperbolic fixed points. Subsequently, we introduce a procedure to extend these polynomial curves to verified Taylor Model enclosures of local invariant manifolds with C0 errors of size 10^-10 to 10^-14, and proceed to generate the global invariant manifold tangle up to comparable accuracy through iteration in Taylor Model arithmetic. Knowledge of the global manifold structure up to finite iterations of the local manifold pieces enables us to find all homoclinic and heteroclinic intersections in the generated manifold tangle. Combined with the mapping properties of the homoclinic points and their ordering, we are able to construct a subshift of finite type as a topological factor of the original planar system and thereby obtain rigorous lower bounds for its topological entropy. This construction is fully automatic and yields homoclinic tangles with several hundred homoclinic points. As an example, rigorous lower bounds for the topological entropy of the Henon map are computed, which, to the best of the author's knowledge, are the largest such estimates published so far.
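    Taylor Models pair a high-order polynomial with a rigorous interval bound on the remainder, and that remainder side rests on ordinary verified interval arithmetic. A minimal sketch of the interval layer is given below, using `math.nextafter` for outward rounding; this is an illustration of the general technique, not the author's Taylor Model implementation.

```python
import math
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        # Round each bound outward so the true sum is always enclosed
        # despite floating-point rounding.
        return Interval(math.nextafter(self.lo + other.lo, -math.inf),
                        math.nextafter(self.hi + other.hi, math.inf))

    def __mul__(self, other):
        # Product bounds come from the four endpoint products,
        # again rounded outward.
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(math.nextafter(min(p), -math.inf),
                        math.nextafter(max(p), math.inf))

# Squaring any enclosure of sqrt(2) must produce an enclosure of 2.
x = Interval(1.41, 1.42)
sq = x * x
s = Interval(1.0, 2.0) + Interval(3.0, 4.0)   # encloses [4, 6]
```

    The guarantee, not the tightness, is what matters: every operation returns an interval certain to contain the true result, which is exactly the property the truncation-error bounds of a Taylor Model require.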

  2. Geometrical and quantum mechanical aspects in observers' mathematics

    NASA Astrophysics Data System (ADS)

    Khots, Boris; Khots, Dmitriy

    2013-10-01

    When we create mathematical models for Quantum Mechanics, we assume that the mathematical apparatus used in modeling, at least the simplest mathematical apparatus, is infallible. In particular, this relates to the use of "infinitely small" and "infinitely large" quantities in arithmetic and the use of the Newton-Cauchy definitions of limit and derivative in analysis. We believe that this is where the main problem lies in the contemporary study of nature. We have introduced a new concept of Observer's Mathematics (see www.mathrelativity.com). Observer's Mathematics creates new arithmetic, algebra, geometry, topology, analysis, and logic which do not contain the concept of a continuum but locally coincide with the standard fields. We prove that Euclidean geometry works in a sufficiently small neighborhood of a given line, but when we enlarge the neighborhood, non-Euclidean geometry takes over. We prove that physical speed is a random variable, cannot exceed some constant, and that this constant does not depend on the inertial coordinate system. We prove the following theorems: Theorem A (Lagrangian). Let L be the Lagrange function of a free material point with mass m and speed v. Then the probability P of L = (m/2)v^2 is less than 1: P(L = (m/2)v^2) < 1. Theorem B (Nadezhda effect). On the plane (x, y), on every line y = kx there is a point (x0, y0) with no existing Euclidean distance between the origin (0, 0) and this point. Conjecture (Black Hole). Our space-time nature is a black hole: light cannot go out infinitely far from the origin.

  3. Eddy Seeding in the Labrador Sea: a Submerged Autonomous Launching Platform (SALP) Application

    NASA Astrophysics Data System (ADS)

    Furey, Heather H.; Femke de Jong, M.; Bower, Amy S.

    2013-04-01

    A simplified Submerged Autonomous Launch Platform (SALP) was used to release profiling floats into warm-core Irminger Rings (IRs) in order to investigate their vertical structure and evolution in the Labrador Sea from September 2007 - September 2009. IRs are thought to play an important role in restratification after convection in the Labrador Sea. The SALP is designed to release surface drifters or subsurface floats serially from a traditional ocean mooring, using real-time ocean measurements as criteria for launch. The original prototype instrument used properties measured at multiple depths, with information relayed to the SALP controller via acoustic modems. In our application, two SALP carousels were attached at 500 meters onto a heavily-instrumented deep water mooring, in the path of recently-shed IRs off the west Greenland shelf. A release algorithm was designed to use temperature and pressure measured at the SALP depth only to release one or two APEX profiling drifters each time an IR passed the mooring, using limited historical observations to set release thresholds. Mechanically and electronically, the SALP worked well: out of eleven releases, there was only one malfunction when a float was caught in the cage after the burn-wire had triggered. However, getting floats trapped in eddies met with limited success due to problems with the release algorithm and float ballasting. Out of seven floats launched from the platform using oceanographic criteria, four were released during warm water events that were not related to passing IRs. Also, after float release, it took on average about 2.6 days for the APEX to adjust from its initial ballast depth, about 600 meters, to its park point of 300 meters, leaving the float below the trapped core of water in the IRs. The other mooring instruments (at depths of 100 to 3000 m), revealed that 12 IRs passed by the mooring in the 2-year monitoring period. 
With this independent information, we were able to assess and improve the release algorithm, still based on ocean conditions measured only at one depth. We found that much better performance could have been achieved with an algorithm that detected IRs based on a temperature difference from a long-term running mean rather than a fixed temperature threshold. This highlights the challenge of designing an appropriate release strategy with limited a priori information on the amplitude and time scales of the background variability.
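    The improved strategy the authors describe, triggering on a temperature anomaly relative to a long-term running mean rather than on a fixed threshold, can be sketched as follows. The window length and anomaly threshold here are illustrative placeholders, not values from the study.

```python
from collections import deque

def eddy_flags(temps, window=240, threshold=0.3):
    """Flag samples whose temperature exceeds the running mean of the
    previous `window` samples by more than `threshold` deg C.
    Hypothetical parameters; the launch platform would evaluate this
    on each new reading at its single mooring depth."""
    history = deque(maxlen=window)
    flags = []
    for t in temps:
        baseline = sum(history) / len(history) if history else t
        flags.append(t - baseline > threshold)
        history.append(t)
    return flags

# A warm-core ring passing through a 4.0 deg C background:
series = [4.0] * 10 + [5.0] * 3 + [4.0] * 5
flags = eddy_flags(series, window=10, threshold=0.3)
```

    Unlike a fixed threshold, the running-mean baseline adapts to slow background drift, which in principle helps reject warm-water events unrelated to passing rings.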

  4. A Substituting Meaning for the Equals Sign in Arithmetic Notating Tasks

    ERIC Educational Resources Information Center

    Jones, Ian; Pratt, Dave

    2012-01-01

    Three studies explore arithmetic tasks that support both substitutive and basic relational meanings for the equals sign. The duality of meanings enabled children to engage meaningfully and purposefully with the structural properties of arithmetic statements in novel ways. Some, but not all, children were successful at the adapted task and were…

  5. Children's Acquisition of Arithmetic Principles: The Role of Experience

    ERIC Educational Resources Information Center

    Prather, Richard; Alibali, Martha W.

    2011-01-01

    The current study investigated how young learners' experiences with arithmetic equations can lead to learning of an arithmetic principle. The focus was elementary school children's acquisition of the Relation to Operands principle for subtraction (i.e., for natural numbers, the difference must be less than the minuend). In Experiment 1, children…

  6. Identifying Simple Numerical Stimuli: Processing Inefficiencies Exhibited by Arithmetic Learning Disabled Children.

    ERIC Educational Resources Information Center

    Koontz, Kristine L.; Berch, Daniel B.

    1996-01-01

    Children with arithmetic learning disabilities (n=16) and normally achieving controls (n=16) in grades 3-5 were administered a battery of computerized tasks. Memory spans for both letters and digits were found to be smaller among the arithmetic learning disabled children. Implications for teaching are discussed. (Author/CMS)

  7. Arithmetic Abilities in Children with Developmental Dyslexia: Performance on French ZAREKI-R Test

    ERIC Educational Resources Information Center

    De Clercq-Quaegebeur, Maryse; Casalis, Séverine; Vilette, Bruno; Lemaitre, Marie-Pierre; Vallée, Louis

    2018-01-01

    A high comorbidity between reading and arithmetic disabilities has already been reported. The present study aims at identifying more precisely patterns of arithmetic performance in children with developmental dyslexia, defined with severe and specific criteria. By means of a standardized test of achievement in mathematics ("Calculation and…

  8. Binary Arithmetic From Hariot (CA, 1600 A.D.) to the Computer Age.

    ERIC Educational Resources Information Center

    Glaser, Anton

    This history of binary arithmetic begins with details of Thomas Hariot's contribution and includes specific references to Hariot's manuscripts kept at the British Museum. A binary code developed by Sir Francis Bacon is discussed. Briefly mentioned are contributions to binary arithmetic made by Leibniz, Fontenelle, Gauss, Euler, Benzout, Barlow,…

  9. Arithmetic Performance of Children with Cerebral Palsy: The Influence of Cognitive and Motor Factors

    ERIC Educational Resources Information Center

    van Rooijen, Maaike; Verhoeven, Ludo; Smits, Dirk-Wouter; Ketelaar, Marjolijn; Becher, Jules G.; Steenbergen, Bert

    2012-01-01

    Children diagnosed with cerebral palsy (CP) often show difficulties in arithmetic compared to their typically developing peers. The present study explores whether cognitive and motor variables are related to arithmetic performance of a large group of primary school children with CP. More specifically, the relative influence of non-verbal…

  10. Cognitive Arithmetic: Evidence for the Development of Automaticity.

    ERIC Educational Resources Information Center

    LeFevre, Jo-Anne; Bisanz, Jeffrey

    To determine whether children's knowledge of arithmetic facts becomes increasingly "automatic" with age, 7-year-olds, 11-year-olds, and adults were given a number-matching task for which mental arithmetic should have been irrelevant. Specifically, students were required to verify the presence of a probe number in a previously presented pair (e.g.,…

  11. Continuity in Representation between Children and Adults: Arithmetic Knowledge Hinders Undergraduates' Algebraic Problem Solving

    ERIC Educational Resources Information Center

    McNeil, Nicole M.; Rittle-Johnson, Bethany; Hattikudur, Shanta; Petersen, Lori A.

    2010-01-01

    This study examined if solving arithmetic problems hinders undergraduates' accuracy on algebra problems. The hypothesis was that solving arithmetic problems would hinder accuracy because it activates an operational view of equations, even in educated adults who have years of experience with algebra. In three experiments, undergraduates (N = 184)…

  12. Fostering Formal Commutativity Knowledge with Approximate Arithmetic

    PubMed Central

    Hansen, Sonja Maria; Haider, Hilde; Eichler, Alexandra; Godau, Claudia; Frensch, Peter A.; Gaschler, Robert

    2015-01-01

    How can we enhance the understanding of abstract mathematical principles in elementary school? Several studies have found that nonsymbolic estimation can foster subsequent exact number processing and simple arithmetic. Taking the commutativity principle as a test case, we investigated whether the approximate calculation of symbolic commutative quantities can also alter access to procedural and conceptual knowledge of a more abstract arithmetic principle. Experiment 1 tested first graders who had not yet been instructed about commutativity in school. Approximate calculation with symbolic quantities positively influenced the use of commutativity-based shortcuts in formal arithmetic. We replicated this finding with older first graders (Experiment 2) and third graders (Experiment 3). Despite the positive effect of approximation on the spontaneous application of commutativity-based shortcuts in arithmetic problems, we found no comparable impact on the application of conceptual knowledge of the commutativity principle. Overall, our results show that the use of a specific arithmetic principle can benefit from approximation. However, the findings also suggest that the correct use of certain procedures does not always imply conceptual understanding. Rather, the conceptual understanding of commutativity seems to lag behind procedural proficiency during elementary school. PMID:26560311

  13. Frontoparietal white matter diffusion properties predict mental arithmetic skills in children

    PubMed Central

    Tsang, Jessica M.; Dougherty, Robert F.; Deutsch, Gayle K.; Wandell, Brian A.; Ben-Shachar, Michal

    2009-01-01

    Functional MRI studies of mental arithmetic consistently report blood oxygen level–dependent signals in the parietal and frontal regions. We tested whether white matter pathways connecting these regions are related to mental arithmetic ability by using diffusion tensor imaging (DTI) to measure these pathways in 28 children (age 10–15 years, 14 girls) and assessing their mental arithmetic skills. For each child, we identified anatomically the anterior portion of the superior longitudinal fasciculus (aSLF), a pathway connecting parietal and frontal cortex. We measured fractional anisotropy in a core region centered along the length of the aSLF. Fractional anisotropy in the left aSLF positively correlates with arithmetic approximation skill, as measured by a mental addition task with approximate answer choices. The correlation is stable in adjacent core aSLF regions but lower toward the pathway endpoints. The correlation is not explained by shared variance with other cognitive abilities and did not pass significance in the right aSLF. These measurements used DTI, a structural method, to test a specific functional model of mental arithmetic. PMID:19948963

  14. Numerical aerodynamic simulation facility preliminary study: Executive study

    NASA Technical Reports Server (NTRS)

    1977-01-01

    A computing system was designed with the capability of providing an effective throughput of one billion floating point operations per second for three dimensional Navier-Stokes codes. The methodology used in defining the baseline design, and the major elements of the numerical aerodynamic simulation facility are described.

  15. Floating-Point Numerical Function Generators Using EVMDDs for Monotone Elementary Functions

    DTIC Science & Technology

    2009-01-01

    Villa, R. K. Brayton, and A. L. Sangiovanni-Vincentelli, "Multi-valued decision diagrams: Theory and applications," Multiple-Valued Logic: An... Shmerko, and R. S. Stankovic, Decision Diagram Techniques for Micro- and Nanoelectronic Design, CRC Press, Taylor & Francis Group, 2006. Appendix

  16. 40 CFR 426.57 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... control technology. 426.57 Section 426.57 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS GLASS MANUFACTURING POINT SOURCE CATEGORY Float Glass Manufacturing Subcategory § 426.57 Effluent limitations guidelines representing the degree of effluent reduction...

  17. Ironic Intertextuality in "Six Chapters from a Floating Life" and "Six Chapters from Life at a Cadre School."

    ERIC Educational Resources Information Center

    Wu, Yenna

    1991-01-01

    Exploration of and comparisons between structural, stylistic, and linguistic similarities and differences in two modern Chinese semiautobiographical texts points out both authors' methods for depicting the ironies within their socio-political and ideological conditions. (19 references) (CB)

  18. Torque-balanced vibrationless rotary coupling

    DOEpatents

    Miller, Donald M.

    1980-01-01

    This disclosure describes a torque-balanced vibrationless rotary coupling for transmitting rotary motion without unwanted vibration into the spindle of a machine tool. A drive member drives a driven member using flexible connecting loops which are connected tangentially and at diametrically opposite connecting points through a free floating ring.

  19. 78 FR 46258 - Drawbridge Operation Regulation Lake Washington, Seattle, WA

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-31

    ... Operation Regulation Lake Washington, Seattle, WA AGENCY: Coast Guard, DHS. ACTION: Notice of deviation from... that governs the Evergreen Point Floating Bridge (State Route 520 across Lake Washington) at Seattle... Route 520 across Lake Washington) remain closed to vessel traffic to facilitate safe passage of...

  20. Determination of the Stresses Produced by the Landing Impact in the Bulkheads of a Seaplane Bottom

    NASA Technical Reports Server (NTRS)

    Darevsky, V. M.

    1944-01-01

    The present report deals with the determination of the impact stresses in the bulkhead floors of a seaplane bottom. The dynamic problem is solved on the assumption of a certain elastic system, the floor being treated as a weightless elastic beam with concentrated masses at the ends (due to the mass of the float) and with a spring at the center replacing the elastic action of the keel. The distributed load on the floor is that due to the hydrodynamic force acting over a certain portion of the bottom. The pressure distribution over the width of the float is assumed to follow the Wagner law. The formulas given for the maximum bending moment are derived on the assumption that the keel is relatively elastic, in which case it can be shown that at each instant of time the maximum bending moment is at the point of juncture of the floor with the keel. The bending moment at this point is a function of the half-width of the wetted surface c and reaches its maximum value when c is approximately equal to b/2, where b is the half-width of the float. In general, however, the bending moment at the keel is computed for several values of c and a curve is drawn. The illustrative sample computation gave stresses approximately equal to those obtained by the conventional factory computation.

  1. Working Memory and Arithmetic Calculation in Children: The Contributory Roles of Processing Speed, Short-Term Memory, and Reading

    ERIC Educational Resources Information Center

    Berg, Derek H.

    2008-01-01

    The cognitive underpinnings of arithmetic calculation in children are noted to involve working memory; however, cognitive processes related to arithmetic calculation and working memory suggest that this relationship is more complex than stated previously. The purpose of this investigation was to examine the relative contributions of processing…

  2. Arithmetic Achievement in Children with Cerebral Palsy or Spina Bifida Meningomyelocele

    ERIC Educational Resources Information Center

    Jenks, Kathleen M.; van Lieshout, Ernest C. D. M.; de Moor, Jan

    2009-01-01

    The aim of this study was to establish whether children with a physical disability resulting from central nervous system disorders (CNSd) show a level of arithmetic achievement lower than that of non-CNSd children and whether this is related to poor automaticity of number facts or reduced arithmetic instruction time. Twenty-two children with CNSd…

  3. The Association between Arithmetic and Reading Performance in School: A Meta-Analytic Study

    ERIC Educational Resources Information Center

    Singer, Vivian; Strasser, Katherine

    2017-01-01

    Many studies of school achievement find a significant association between reading and arithmetic achievement. The magnitude of the association varies widely across the studies, but the sources of this variation have not been identified. The purpose of this paper is to examine the magnitude and determinants of the relation between arithmetic and…

  4. 24 CFR Appendix E to Part 3500 - Arithmetic Steps

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 5 2010-04-01 2010-04-01 false Arithmetic Steps E Appendix E to...—Arithmetic Steps I. Example Illustrating Aggregate Analysis: ASSUMPTIONS: Disbursements: $360 for school... Payment: July 1 Step 1—Initial Trial Balance Aggregate pmt disb bal Jun 0 0 0 Jul 130 500 −370 Aug 130 0...
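    The aggregate-analysis arithmetic excerpted above (payment in, disbursement out, running balance) can be sketched as below. This is a simplified illustration of the running-balance step only, omitting the cushion and rounding rules the regulation also specifies.

```python
def trial_balance(payments, disbursements):
    """Aggregate-analysis running balance: bal_n = bal_{n-1} + pmt_n - disb_n.
    The lowest point of the trial balance determines the initial
    escrow deposit needed so the account never goes negative."""
    bal, out = 0.0, []
    for p, d in zip(payments, disbursements):
        bal += p - d
        out.append(bal)
    return out

# Mirrors the excerpt's figures: Jun (0, 0) -> 0; Jul (130, 500) -> -370.
bals = trial_balance([0.0, 130.0, 130.0], [0.0, 500.0, 0.0])
# Initial deposit offsets the lowest point of the trial balance.
deposit = -min(bals)
```

    With the $500 school-tax disbursement in July, the balance bottoms out at -$370, so a $370 initial deposit (before any permitted cushion) keeps the account non-negative.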

  5. Computational Fluency and Strategy Choice Predict Individual and Cross-National Differences in Complex Arithmetic

    ERIC Educational Resources Information Center

    Vasilyeva, Marina; Laski, Elida V.; Shen, Chen

    2015-01-01

    The present study tested the hypothesis that children's fluency with basic number facts and knowledge of computational strategies, derived from early arithmetic experience, predicts their performance on complex arithmetic problems. First-grade students from United States and Taiwan (N = 152, mean age: 7.3 years) were presented with problems that…

  6. Arithmetic Difficulties in Children with Cerebral Palsy Are Related to Executive Function and Working Memory

    ERIC Educational Resources Information Center

    Jenks, Kathleen M.; de Moor, Jan; van Lieshout, Ernest C. D. M.

    2009-01-01

    Background: Although it is believed that children with cerebral palsy are at high risk for learning difficulties and arithmetic difficulties in particular, few studies have investigated this issue. Methods: Arithmetic ability was longitudinally assessed in children with cerebral palsy in special (n = 41) and mainstream education (n = 16) and…

  7. Cognitive Impairments of Children with Severe Arithmetic Difficulties: Cognitive Deficit or Developmental Lag?

    ERIC Educational Resources Information Center

    Berg, Derek H.

    2008-01-01

    An age-matched/achievement-matched design was utilized to examine the cognitive functioning of children with severe arithmetic difficulties. A battery of cognitive tasks was administered to three groups of elementary aged children: 20 children with severe arithmetic difficulties (SAD), 20 children matched in age (CAM) to the children with SAD, and…

  8. A Cross-Cultural Investigation into the Development of Place-Value Concepts of Children in Taiwan and the United States.

    ERIC Educational Resources Information Center

    Yang, Ma Tzu-Lin; Cobb, Paul

    1995-01-01

    Compares mathematics achievement of children in Taiwan and the United States by analyzing the arithmetical learning contexts of each. Interviews with parents and teachers identify cultural beliefs about learning arithmetic; interviews with students identify level of sophistication of arithmetical concepts. Found greater understanding by Chinese…

  9. Comparing the Use of the Interpersonal Computer, Personal Computer and Pen-and-Paper When Solving Arithmetic Exercises

    ERIC Educational Resources Information Center

    Alcoholado, Cristián; Diaz, Anita; Tagle, Arturo; Nussbaum, Miguel; Infante, Cristián

    2016-01-01

    This study aims to understand the differences in student learning outcomes and classroom behaviour when using the interpersonal computer, personal computer and pen-and-paper to solve arithmetic exercises. In this multi-session experiment, third grade students working on arithmetic exercises from various curricular units were divided into three…

  10. Development of performance specifications for hybrid modeling of floating wind turbines in wave basin tests

    DOE PAGES

    Hall, Matthew; Goupee, Andrew; Jonkman, Jason

    2017-08-24

    Hybrid modeling, combining physical testing and numerical simulation in real time, opens new opportunities in floating wind turbine research. Wave basin testing is an important validation step for floating support structure design, but the conventional approaches that use physical wind above the basin are limited by scaling problems in the aerodynamics. Applying wind turbine loads with an actuation system that is controlled by a simulation responding to the basin test in real time offers a way to avoid scaling problems and reduce cost barriers for floating wind turbine design validation in realistic coupled wind and wave conditions. This paper demonstrates the development of performance specifications for a system that couples a wave basin experiment with a wind turbine simulation. Two different points for the hybrid coupling are considered: the tower-base interface and the aero-rotor interface (the boundary between aerodynamics and the rotor structure). Analyzing simulations of three floating wind turbine designs across seven load cases reveals the motion and force requirements of the coupling system. By simulating errors in the hybrid coupling system, the sensitivity of the floating wind turbine response to coupling quality can be quantified. The sensitivity results can then be used to determine tolerances for motion tracking errors, force actuation errors, bandwidth limitations, and latency in the hybrid coupling system. These tolerances can guide the design of hybrid coupling systems to achieve desired levels of accuracy. An example demonstrates how the developed methods can be used to generate performance specifications for a system at 1:50 scale. Results show that sensitivities vary significantly between support structure designs and that coupling at the aero-rotor interface has less stringent requirements than those for coupling at the tower base. As a result, the methods and results presented here can inform the design of future hybrid coupling systems and enhance understanding of how test results are affected by hybrid coupling quality.

  12. Benchmark calculations of excess electrons in water cluster cavities: balancing the addition of atom-centered diffuse functions versus floating diffuse functions.

    PubMed

    Zhang, Changzhe; Bu, Yuxiang

    2016-09-14

    Diffuse functions have proven especially crucial for the accurate characterization of excess electrons, which are usually bound weakly in intermolecular zones far from the nuclei. To examine the effects of diffuse functions on the nature of cavity-shaped excess electrons in water cluster surroundings, the HOMO and LUMO distributions, vertical detachment energies (VDEs) and visible absorption spectra of two selected (H2O)24(-) isomers are investigated in the present work. Two main types of diffuse functions are considered in the calculations: Pople-style atom-centered diffuse functions and ghost-atom-based floating diffuse functions. It is found that augmentation of atom-centered diffuse functions contributes to a better description of the HOMO (corresponding to the VDE convergence), in agreement with previous studies, but also leads to unreasonably diffuse character in the LUMO with significant red-shifts in the visible spectra, contrary to the conventional view that more diffuse functions always yield better results. The design of extra floating functions for excess electrons is also systematically discussed, indicating that floating diffuse functions are necessary not only for reducing the computational cost but also for improving the accuracy of both the HOMO and the LUMO. Thus, basis sets combining partial atom-centered diffuse functions with floating diffuse functions are recommended for a reliable description of weakly bound electrons. This work presents an efficient way to characterize the electronic properties of weakly bound electrons accurately by balancing the addition of atom-centered diffuse functions against floating diffuse functions, and the computational cost against the accuracy of the calculated results, and is therefore very useful in calculations on various solvated-electron systems and weakly bound anionic systems.

  13. See-Through Imaging of Laser-Scanned 3D Cultural Heritage Objects Based on Stochastic Rendering of Large-Scale Point Clouds

    NASA Astrophysics Data System (ADS)

    Tanaka, S.; Hasegawa, K.; Okamoto, N.; Umegaki, R.; Wang, S.; Uemura, M.; Okamoto, A.; Koyamada, K.

    2016-06-01

    We propose a method for the precise 3D see-through imaging, or transparent visualization, of the large-scale and complex point clouds acquired via the laser scanning of 3D cultural heritage objects. Our method is based on a stochastic algorithm and directly uses the 3D points, which are acquired using a laser scanner, as the rendering primitives. This method achieves the correct depth feel without requiring depth sorting of the rendering primitives along the line of sight. Eliminating this need allows us to avoid long computation times when creating natural and precise 3D see-through views of laser-scanned cultural heritage objects. The opacity of each laser-scanned object is also flexibly controllable. For a laser-scanned point cloud consisting of more than 10^7 or 10^8 3D points, the pre-processing requires only a few minutes, and the rendering can be executed at interactive frame rates. Our method enables the creation of cumulative 3D see-through images of time-series laser-scanned data. It also offers the possibility of fused visualization for observing a laser-scanned object behind a transparent high-quality photographic image placed in the 3D scene. We demonstrate the effectiveness of our method by applying it to festival floats of high cultural value. These festival floats have complex outer and inner 3D structures and are suitable for see-through imaging.
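
    The sort-free transparency idea underlying this kind of stochastic point rendering can be illustrated with a minimal one-pixel Monte Carlo sketch (a simplification, not the paper's implementation): if each point is kept independently with probability α and the nearest kept point wins a depth test, then averaging many trials reproduces classical back-to-front alpha compositing in expectation, with no sorting of primitives.

```python
import random

def stochastic_pixel(colors, alpha, trials=200000, seed=1):
    """Monte Carlo estimate of one pixel's color. `colors` lists the
    points covering the pixel front-to-back, each with opacity `alpha`.
    Each trial keeps every point independently with probability alpha
    and takes the nearest kept point (a z-test, not a sort)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        for c in colors:
            if rng.random() < alpha:
                total += c
                break  # nearest kept point wins the depth test
    return total / trials

def composited_pixel(colors, alpha):
    """Classical back-to-front alpha compositing: the i-th nearest
    point contributes alpha * (1 - alpha)**i of its color."""
    return sum(alpha * (1 - alpha) ** i * c for i, c in enumerate(colors))
```

    With three points of intensities 1.0, 0.5, 0.25 and α = 0.4, the stochastic estimate agrees with the composited value (0.556) to Monte Carlo accuracy, which is why no depth sorting of primitives is needed.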

  14. How Stable Is Stable?

    ERIC Educational Resources Information Center

    Baehr, Marie

    1994-01-01

    Provides a problem where students are asked to find the point at which a soda can floating in some liquid changes its equilibrium between stable and unstable as the soda is removed from the can. Requires use of Newton's first law, center of mass, Archimedes' principle, stable and unstable equilibrium, and buoyant force position. (MVL)
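
    A standard sub-computation in this problem can be sketched numerically with purely hypothetical can dimensions (mass 25 g, height 12 cm, radius 3.1 cm, soda density 1 g/cm^3; none of these are from the article): the combined center of mass of can plus soda is lowest exactly at the fill level where it coincides with the soda surface.

```python
import math

# Hypothetical can: shell mass M (g), height H (cm), radius R (cm);
# soda density RHO (g/cm^3). Illustrative values only.
M, H, R, RHO = 25.0, 12.0, 3.1, 1.0
A = math.pi * R ** 2  # cross-sectional area

def com_height(x):
    """Height of the combined center of mass (can + soda) at fill
    level x, treating the empty can as a shell with CoM at H/2."""
    return (M * H / 2 + RHO * A * x * x / 2) / (M + RHO * A * x)

def critical_fill():
    """Fill level minimizing the CoM, by ternary search on [0, H];
    at the minimum the CoM coincides with the soda surface."""
    lo, hi = 0.0, H
    for _ in range(200):
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if com_height(m1) < com_height(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2
```

    Setting the derivative to zero gives the closed form x = (−M + sqrt(M² + ρAMH)) / (ρA), about 2.43 cm for these numbers, and com_height(x) equals x there, confirming the surface-coincidence property.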

  15. Changes of brain response induced by simulated weightlessness

    NASA Astrophysics Data System (ADS)

    Wei, Jinhe; Yan, Gongdong; Guan, Zhiqiang

    The change in characteristics of the brain response was studied during 15° head-down tilt (HDT) compared with 45° head-up tilt (HUT). The brain responses evaluated included the EEG power spectra change at rest and during mental arithmetic, and the event-related potentials (ERPs) of somatosensory, selective attention and mental arithmetic activities. The prominent features of the brain response change during HDT revealed that brain function was inhibited to some extent. This inhibition included that the significant increment of "40 Hz" activity during HUT arithmetic almost disappeared during HDT arithmetic, and that the positive-potential effect induced by HDT was present in all kinds of ERPs measured, while the slow negative wave reflecting mental arithmetic and memory processes was elongated. These data suggest that brain function is affected profoundly by simulated weightlessness; therefore, brain function changes during space flight should be studied systematically.

  16. The relationship between medical impairments and arithmetic development in children with cerebral palsy.

    PubMed

    Jenks, Kathleen M; van Lieshout, Ernest C D M; de Moor, Jan

    2009-05-01

    Arithmetic ability was tested in children with cerebral palsy without severe intellectual impairment (verbal IQ >or= 70) attending special (n = 41) or mainstream education (n = 16) as well as control children in mainstream education (n = 16) throughout first and second grade. Children with cerebral palsy in special education did not appear to have fully automatized arithmetic facts by the end of second grade. Their lower accuracy and consistently slower (verbal) response times raise important concerns for their future arithmetic development. Differences in arithmetic performance between children with cerebral palsy in special or mainstream education were not related to localization of cerebral palsy or to gross motor impairment. Rather, lower accuracy and slower verbal responses were related to differences in nonverbal intelligence and the presence of epilepsy. Left-hand impairment was related to slower verbal responses but not to lower accuracy.

  17. Computational fluency and strategy choice predict individual and cross-national differences in complex arithmetic.

    PubMed

    Vasilyeva, Marina; Laski, Elida V; Shen, Chen

    2015-10-01

    The present study tested the hypothesis that children's fluency with basic number facts and knowledge of computational strategies, derived from early arithmetic experience, predicts their performance on complex arithmetic problems. First-grade students from the United States and Taiwan (N = 152, mean age: 7.3 years) were presented with problems that differed in difficulty: single-, mixed-, and double-digit addition. Children's strategy use varied as a function of problem difficulty, consistent with Siegler's theory of strategy choice. The use of the decomposition strategy interacted with computational fluency in predicting the accuracy of double-digit addition. Further, the frequency of decomposition and computational fluency fully mediated cross-national differences in accuracy on these complex arithmetic problems. The results indicate the importance of both fluency with basic number facts and the decomposition strategy for later arithmetic performance. (c) 2015 APA, all rights reserved.

  18. The MasPar MP-1 As a Computer Arithmetic Laboratory

    PubMed Central

    Anuta, Michael A.; Lozier, Daniel W.; Turner, Peter R.

    1996-01-01

    This paper is a blueprint for the use of a massively parallel SIMD computer architecture for the simulation of various forms of computer arithmetic. The particular system used is a DEC/MasPar MP-1 with 4096 processors in a square array. This architecture has many advantages for such simulations, due largely to the simplicity of the individual processors. Arithmetic operations can be spread across the processor array to simulate a hardware chip. Alternatively, they may be performed on individual processors to allow simulation of a massively parallel implementation of the arithmetic. Compromises between these extremes permit speed-area tradeoffs to be examined. The paper includes a description of the architecture and its features. It then summarizes some of the arithmetic systems which have been, or are to be, implemented. The implementation of the level-index and symmetric level-index, LI and SLI, systems is described in some detail. An extensive bibliography is included. PMID:27805123
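
    The level-index (LI) representation mentioned above can be sketched in a few lines for x ≥ 0 (the symmetric form's handling of reciprocals and signs is omitted): the level counts how many logarithms bring x into [0, 1), and the index is the remainder, so the number is represented as level + index.

```python
import math

def to_li(x):
    """Level-index form of x >= 0: take natural logs until the value
    falls into [0, 1); the count is the level l, the remainder the
    index f, and x is represented as l + f."""
    level = 0
    while x >= 1.0:
        x = math.log(x)
        level += 1
    return level, x

def from_li(level, index):
    """Inverse mapping: apply exp `level` times to the index."""
    for _ in range(level):
        index = math.exp(index)
    return index
```

    For example, to_li(1e10) takes four logs (1e10 → 23.03 → 3.14 → 1.14 → 0.134), giving level 4; the round trip recovers x to floating-point accuracy, and overflow is essentially impossible because huge numbers map to small levels.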

  19. Symbolic Numerical Magnitude Processing Is as Important to Arithmetic as Phonological Awareness Is to Reading

    PubMed Central

    Vanbinst, Kiran; Ansari, Daniel; Ghesquière, Pol; De Smedt, Bert

    2016-01-01

    In this article, we tested, using a 1-year longitudinal design, whether symbolic numerical magnitude processing, i.e., children's numerical representation of Arabic digits, is as important to arithmetic as phonological awareness is to reading. Children completed measures of symbolic comparison, phonological awareness, arithmetic, and reading at the start of third grade, and the latter two were retested at the start of fourth grade. Cross-sectional and longitudinal correlations indicated that symbolic comparison was a powerful domain-specific predictor of arithmetic and that phonological awareness was a unique predictor of reading. Crucially, the strength of these independent associations was not significantly different. This indicates that symbolic numerical magnitude processing is as important to arithmetic development as phonological awareness is to reading and suggests that symbolic numerical magnitude processing is a good candidate for screening children at risk for developing mathematical difficulties. PMID:26942935

  20. Classified one-step high-radix signed-digit arithmetic units

    NASA Astrophysics Data System (ADS)

    Cherri, Abdallah K.

    1998-08-01

    High-radix number systems enable higher information storage density, less complexity, fewer system components, and fewer cascaded gates and operations. A simple one-step fully parallel high-radix signed-digit arithmetic is proposed for parallel optical computing based on new joint spatial encodings. This reduces hardware requirements and improves throughput by reducing the space-bandwidth product needed. The high-radix signed-digit arithmetic operations are based on classifying the neighboring input digit pairs into various groups to reduce the computation rules. A new joint spatial encoding technique is developed to represent both the operands and the computation rules. This technique increases the spatial bandwidth product of the spatial light modulators of the system. An optical implementation of the proposed high-radix signed-digit arithmetic operations is also presented. It is shown that our one-step trinary signed-digit and quaternary signed-digit arithmetic units are much simpler and better than all previously reported high-radix signed-digit techniques.

  1. Visuospatial and verbal memory in mental arithmetic.

    PubMed

    Clearman, Jack; Klinger, Vojtěch; Szűcs, Dénes

    2017-09-01

    Working memory allows complex information to be remembered and manipulated over short periods of time. Correlations between working memory and mathematics achievement have been shown across the lifespan. However, only a few studies have examined the potentially distinct contributions of domain-specific visuospatial and verbal working memory resources in mental arithmetic computation. Here we aimed to fill this gap in a series of six experiments pairing addition and subtraction tasks with verbal and visuospatial working memory and interference tasks. In general, we found higher levels of interference between mental arithmetic and visuospatial working memory tasks than between mental arithmetic and verbal working memory tasks. Additionally, we found that interference that matched the working memory domain of the task (e.g., verbal task with verbal interference) lowered working memory performance more than mismatched interference (verbal task with visuospatial interference). Findings suggest that mental arithmetic relies on domain-specific working memory resources.

  2. The semantic system is involved in mathematical problem solving.

    PubMed

    Zhou, Xinlin; Li, Mengyi; Li, Leinian; Zhang, Yiyun; Cui, Jiaxin; Liu, Jie; Chen, Chuansheng

    2018-02-01

    Numerous studies have shown that the brain regions around bilateral intraparietal cortex are critical for number processing and arithmetical computation. However, the neural circuits for more advanced mathematics such as mathematical problem solving (with little routine arithmetical computation) remain unclear. Using functional magnetic resonance imaging (fMRI), this study (N = 24 undergraduate students) compared neural bases of mathematical problem solving (i.e., number series completion, mathematical word problem solving, and geometric problem solving) and arithmetical computation. Direct subject- and item-wise comparisons revealed that mathematical problem solving typically had greater activation than arithmetical computation in all 7 regions of the semantic system (which was based on a meta-analysis of 120 functional neuroimaging studies on semantic processing). Arithmetical computation typically had greater activation in the supplementary motor area and left precentral gyrus. The results suggest that the semantic system in the brain supports mathematical problem solving. Copyright © 2017 Elsevier Inc. All rights reserved.

  3. Dynamic behavior and deformation analysis of the fish cage system using mass-spring model

    NASA Astrophysics Data System (ADS)

    Lee, Chun Woo; Lee, Jihoon; Park, Subong

    2015-06-01

    Fish cage systems are influenced by various oceanic conditions, and the movements and deformation of the system caused by external forces can affect the safety of the system itself, as well as the species of fish being cultivated. Structural durability of the system against environmental factors has been a major concern for marine aquaculture systems. In this research, a mathematical model and a simulation method are presented for analyzing the performance of a large-scale fish cage system influenced by currents and waves. The cage system consisted of netting, mooring ropes, floats, sinkers and a floating collar. All of the elements were modeled using the mass-spring model. The structures were divided into finite elements, mass points were placed at the mid-point of each element, and the mass points were connected by massless springs. External and internal forces were applied to each mass point, and the total force was calculated at every integration step. The computation method was applied to the dynamic simulation of actual fish cage systems rigged with synthetic fiber and copper wire, simultaneously influenced by currents and waves. Here, we also tried to find a relevant ratio between the buoyancy and sinking force of the fish cages. The simulation results provide improved understanding of the behavior of the structure and valuable information concerning the optimum ratio of buoyancy to sinking force according to current speed.
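
    A 1-D sketch of the lumped mass-spring approach described above, with hypothetical parameters (not from the paper): a hanging chain of mass points joined by springs, each node subject to gravity, spring tension and linear drag, integrated with semi-implicit Euler. Settling to the analytic static stretch is the kind of consistency check such models admit.

```python
def simulate_chain(n=3, m=0.5, k=200.0, c=8.0, L0=1.0, g=9.81,
                   dt=1e-3, steps=40000):
    """Hanging chain of n mass points below a fixed anchor at y = 0,
    a 1-D stand-in for the mass-spring netting model (elements ->
    springs of rest length L0, lumped masses at element midpoints).
    Per-node forces: gravity, tension from both neighbouring springs,
    and linear drag; integration is semi-implicit Euler."""
    y = [-(i + 1) * L0 for i in range(n)]   # start at natural length
    v = [0.0] * n
    for _ in range(steps):
        f = []
        for i in range(n):
            upper = 0.0 if i == 0 else y[i - 1]
            # spring to the node (or anchor) above pulls up when stretched
            force = -m * g - c * v[i] + k * ((upper - y[i]) - L0)
            # spring to the node below pulls down when stretched
            if i + 1 < n:
                force -= k * ((y[i] - y[i + 1]) - L0)
            f.append(force)
        for i in range(n):
            v[i] += dt * f[i] / m   # update velocity first ...
            y[i] += dt * v[i]       # ... then position (semi-implicit)
        f.clear
    return y
```

    At equilibrium the spring above node i carries the weight of all nodes below it, so the bottom node settles at y = −(n·L0 + Σ (n−i)·m·g/k), which the simulation reproduces to sub-millimeter accuracy with these parameters.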

  4. Assessment of the nutrient removal effectiveness of floating treatment wetlands applied to urban retention ponds.

    PubMed

    Wang, Chih-Yu; Sample, David J

    2014-05-01

    The application of floating treatment wetlands (FTWs) in point and non-point source pollution control has received much attention recently. Although the potential of this emerging technology is supported by various studies, quantifying FTW performance in urban retention ponds remains elusive due to significant research gaps. Actual urban retention pond water was utilized in this mesocosm study to evaluate the phosphorus and nitrogen removal efficiency of FTWs. Multiple treatments were used to investigate the contribution of each component in the FTW system with a seven-day retention time. The four treatments included a control, floating mat, pickerelweed (Pontederia cordata L.), and softstem bulrush (Schoenoplectus tabernaemontani). The water samples collected on Days 0 (initial) and 7 were analyzed for total phosphorus (TP), total particulate phosphorus, orthophosphate, total nitrogen (TN), organic nitrogen, ammonia nitrogen, nitrate-nitrite nitrogen, and chlorophyll-a. Statistical tests were used to evaluate the differences between the four treatments. The effects of temperature on the TP and TN removal rates of the FTWs were described by the modified Arrhenius equation. Our results indicated that all three FTW designs, planted and unplanted floating mats, could significantly improve phosphorus and nitrogen removal efficiency (%, E-TP and E-TN) compared to the control treatment during the growing season, i.e., May through August. The E-TP and E-TN were enhanced by 8.2% and 18.2% in the FTW treatments planted with pickerelweed and softstem bulrush, respectively. Organic matter decomposition was likely the primary contributor to nutrient removal by FTWs in urban retention ponds. Such a mechanism is fostered by microbes within the attached biofilms on the floating mats and plant root surfaces.
    Among the results of the four treatments, the FTWs planted with pickerelweed had the highest E-TP, and behaved similarly to the other two FTW treatments for nitrogen removal during the growth period. The temperature effects described by the modified Arrhenius equation revealed that pickerelweed is sensitive to temperature and provides considerable phosphorus removal when the water temperature is greater than 25 °C. However, the nutrient removal effectiveness of this plant species may be negligible for water temperatures below 15 °C. The study also assessed potential effects of shading from the FTW mats on water temperature, DO, pH, and attached-to-substrate periphyton/vegetation. Copyright © 2014 Elsevier Ltd. All rights reserved.
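
    The modified Arrhenius (theta-model) temperature correction referred to above has the standard form k_T = k_20 · θ^(T − 20); the coefficient values below are hypothetical illustrations, not the study's fitted values.

```python
def removal_rate(k20, theta, temp_c):
    """Temperature-corrected first-order removal rate via the
    modified Arrhenius relation k_T = k_20 * theta**(T - 20),
    with k_20 the rate at the 20 deg C reference temperature."""
    return k20 * theta ** (temp_c - 20.0)
```

    With hypothetical k_20 = 0.05 d^-1 and θ = 1.06, the rate is about 0.067 d^-1 at 25 °C but only about 0.037 d^-1 at 15 °C, which illustrates the kind of above-25 °C/below-15 °C contrast reported for pickerelweed.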

  5. Modeling of submarine melting in Petermann Fjord, Northwestern Greenland using an ocean general circulation model

    NASA Astrophysics Data System (ADS)

    Cai, C.; Rignot, E. J.; Xu, Y.; An, L.

    2013-12-01

    Basal melting of the floating tongue of Petermann Glacier in northwestern Greenland is by far the largest process of mass ablation. Melting of the floating tongue is controlled by the buoyancy of the melt water plume, the pressure dependence of the melting point of ice, and the mixing of warm subsurface water with fresh buoyant subglacial discharge. In prior simulations of this melting process, the role of subglacial discharge has been neglected because in similar configurations (floating ice shelves) in the Antarctic, surface runoff is negligible; this is, however, not true in Greenland. Here, we use the Massachusetts Institute of Technology general circulation model (MITgcm) at a high spatial resolution (10 m x 10 m) to simulate the melting process of the ice shelf in 2-D. The model is constrained by ice shelf bathymetry and ice thickness from NASA Operation IceBridge, ocean temperature/salinity data from Johnson et al. (2011), and subglacial discharge estimated from output products of the Regional Atmospheric Climate Model (RACMO). We compare the results obtained in winter (no runoff) with those obtained in summer, and examine the sensitivity of the results to thermal forcing from the ocean and to the magnitude of subglacial runoff. We conclude with an assessment of the impact of the ocean and surface melting on the melting regime of the floating ice tongue of Petermann. This work is performed under a contract with NASA Cryosphere Program.
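
    The pressure dependence of the melting point mentioned above is usually handled with a linearized liquidus relation; the sketch below uses commonly quoted coefficient values (an assumption here, not taken from this paper).

```python
def freezing_point(salinity_psu, z_m):
    """In-situ freezing point of seawater (deg C) from a linearized
    liquidus: T_f = l1*S + l2 + l3*z, with z in meters and negative
    below the sea surface. Coefficients follow a commonly used
    parameterization (assumed here, not from the paper)."""
    l1, l2, l3 = -0.0573, 0.0832, 7.61e-4
    return l1 * salinity_psu + l2 + l3 * z_m
```

    At salinity 34.5 psu the freezing point drops from about −1.89 °C at the surface to about −2.27 °C at a 500 m ice draft, which is why warm subsurface water can melt a deep ice-shelf base even when it could not melt ice at the surface.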

  6. Design and Optimization of Floating Drug Delivery System of Acyclovir

    PubMed Central

    Kharia, A. A.; Hiremath, S. N.; Singhai, A. K.; Omray, L. K.; Jain, S. K.

    2010-01-01

    The purpose of the present work was to design and optimize floating drug delivery systems of acyclovir using psyllium husk and hydroxypropylmethylcellulose K4M as the polymers and sodium bicarbonate as a gas generating agent. The tablets were prepared by wet granulation method. A 3(2) full factorial design was used for optimization of drug release profile. The amount of psyllium husk (X1) and hydroxypropylmethylcellulose K4M (X2) were selected as independent variables. The times required for 50% (t(50%)) and 70% (t(70%)) drug dissolution were selected as dependent variables. All the designed nine batches of formulations were evaluated for hardness, friability, weight variation, drug content uniformity, swelling index, in vitro buoyancy, and in vitro drug release profile. All formulations had floating lag time below 3 min and constantly floated on dissolution medium for more than 24 h. Validity of the developed polynomial equation was verified by designing two check point formulations (C1 and C2). The closeness of predicted and observed values for t(50%) and t(70%) indicates validity of derived equations for the dependent variables. These studies indicated that the proper balance between psyllium husk and hydroxypropylmethylcellulose K4M can produce a drug dissolution profile similar to the predicted dissolution profile. The optimized formulations followed Higuchi's kinetics while the drug release mechanism was found to be anomalous type, controlled by diffusion through the swollen matrix. PMID:21694992
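
    How a 3^2 full factorial design supports a fitted second-order polynomial can be sketched as follows; the response values are synthesized from a known polynomial purely to show that the nine runs identify all six coefficients (none of these numbers are from the study).

```python
import itertools
import numpy as np

def design_matrix(levels):
    """Model matrix rows [1, X1, X2, X1*X2, X1^2, X2^2] for each run
    at coded factor levels."""
    return np.array([[1.0, x1, x2, x1 * x2, x1 ** 2, x2 ** 2]
                     for x1, x2 in levels])

# the nine runs of a 3^2 design at coded levels -1, 0, +1
runs = list(itertools.product((-1, 0, 1), repeat=2))
X = design_matrix(runs)

# hypothetical "true" coefficients, used only to synthesize responses
b_true = np.array([6.0, 1.2, -0.8, 0.3, 0.5, -0.4])
y = X @ b_true

# least-squares fit of the quadratic model to the nine responses
b_fit, *_ = np.linalg.lstsq(X, y, rcond=None)
```

    Because the nine runs make the quadratic model matrix full rank, the fit recovers all six coefficients; with real dissolution data the same fit yields the polynomial used to predict t(50%) and t(70%) at the check-point formulations.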

  7. Design and optimization of floating drug delivery system of acyclovir.

    PubMed

    Kharia, A A; Hiremath, S N; Singhai, A K; Omray, L K; Jain, S K

    2010-09-01

    The purpose of the present work was to design and optimize floating drug delivery systems of acyclovir using psyllium husk and hydroxypropylmethylcellulose K4M as the polymers and sodium bicarbonate as a gas generating agent. The tablets were prepared by wet granulation method. A 3(2) full factorial design was used for optimization of drug release profile. The amount of psyllium husk (X1) and hydroxypropylmethylcellulose K4M (X2) were selected as independent variables. The times required for 50% (t(50%)) and 70% (t(70%)) drug dissolution were selected as dependent variables. All the designed nine batches of formulations were evaluated for hardness, friability, weight variation, drug content uniformity, swelling index, in vitro buoyancy, and in vitro drug release profile. All formulations had floating lag time below 3 min and constantly floated on dissolution medium for more than 24 h. Validity of the developed polynomial equation was verified by designing two check point formulations (C1 and C2). The closeness of predicted and observed values for t(50%) and t(70%) indicates validity of derived equations for the dependent variables. These studies indicated that the proper balance between psyllium husk and hydroxypropylmethylcellulose K4M can produce a drug dissolution profile similar to the predicted dissolution profile. The optimized formulations followed Higuchi's kinetics while the drug release mechanism was found to be anomalous type, controlled by diffusion through the swollen matrix.

  8. The Arithmetic Project Course for Teachers - 8. Topic: Lower Brackets and Upper Brackets. Supplement: Arithmetic With Frames.

    ERIC Educational Resources Information Center

    Education Development Center, Inc., Newton, MA.

    This is one of a series of 20 booklets designed for participants in an in-service course for teachers of elementary mathematics. The course, developed by the University of Illinois Arithmetic Project, is designed to be conducted by local school personnel. In addition to these booklets, a course package includes films showing mathematics being…

  9. Sex Differences in Mental Arithmetic, Digit Span, and "g" Defined as Working Memory Capacity

    ERIC Educational Resources Information Center

    Lynn, Richard; Irwing, Paul

    2008-01-01

    Meta-analyses are presented of sex differences in (1) the (mental) arithmetic subtest of the Wechsler intelligence tests for children and adolescents (the WISC and WPPSI tests), showing that boys obtained a mean advantage of 0.11d; (2) the (mental) arithmetic subtest of the Wechsler intelligence tests for adults (the WAIS tests) showing a mean…

  10. Comparing and Transforming: An Application of Piaget's Morphisms Theory to the Development of Class Inclusion and Arithmetic Problem Solving.

    ERIC Educational Resources Information Center

    Barrouillet, Pierre; Poirier, Louise

    1997-01-01

    Outlines Piaget's late ideas on categories and morphisms and the impact of these ideas on the comprehension of the inclusion relationship and the solution of arithmetic problems. Reports a study in which fourth through sixth graders were given arithmetic problems involving two known quantities associated with changes rather than states. Identified…

  11. Working Memory as a Predictor of Written Arithmetical Skills in Children: The Importance of Central Executive Functions

    ERIC Educational Resources Information Center

    Andersson, Ulf

    2008-01-01

    Background: The study was conducted in an attempt to further our understanding of how working memory contributes to written arithmetical skills in children. Aim: The aim was to pinpoint the contribution of different central executive functions and to examine the contribution of the two subcomponents of children's written arithmetical skills.…

  12. Contributions of Domain-General Cognitive Resources and Different Forms of Arithmetic Development to Pre-Algebraic Knowledge

    ERIC Educational Resources Information Center

    Fuchs, Lynn S.; Compton, Donald L.; Fuchs, Douglas; Powell, Sarah R.; Schumacher, Robin F.; Hamlett, Carol L.; Vernier, Emily; Namkung, Jessica M.; Vukovic, Rose K.

    2012-01-01

    The purpose of this study was to investigate the contributions of domain-general cognitive resources and different forms of arithmetic development to individual differences in pre-algebraic knowledge. Children (n = 279, mean age = 7.59 years) were assessed on 7 domain-general cognitive resources as well as arithmetic calculations and word problems…

  13. Error-correcting codes in computer arithmetic.

    NASA Technical Reports Server (NTRS)

    Massey, J. L.; Garcia, O. N.

    1972-01-01

    Summary of the most important results so far obtained in the theory of coding for the correction and detection of errors in computer arithmetic. Attempts to satisfy the stringent reliability demands upon the arithmetic unit are considered, and special attention is given to attempts to incorporate redundancy into the numbers themselves which are being processed so that erroneous results can be detected and corrected.
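
    The simplest instance of embedding redundancy into the numbers themselves is an AN code (an illustration of the idea, not necessarily the paper's scheme): represent N by A·N with A = 3. A single bit flip changes a codeword by ±2^i, and 2^i mod 3 is never 0, so the corrupted word is no longer divisible by 3 and the error is detected; sums of codewords remain codewords, so the check can follow the arithmetic.

```python
A = 3  # check constant; 2**i mod 3 is in {1, 2}, never 0, so a
       # single bit flip cannot map one multiple of 3 to another

def encode(n):
    """AN-code: represent the integer n by the product A*n."""
    return A * n

def is_valid(c):
    """A word is a valid codeword iff it is divisible by A."""
    return c % A == 0

def decode(c):
    assert is_valid(c)
    return c // A
```

    For example, encode(5) + encode(7) == encode(12), so an adder can operate directly on codewords, and flipping any single bit of encode(5) = 15 produces a word that fails the divisibility check.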

  14. Limitations to Teaching Children 2 + 2 = 4: Typical Arithmetic Problems Can Hinder Learning of Mathematical Equivalence

    ERIC Educational Resources Information Center

    McNeil, Nicole M.

    2008-01-01

    Do typical arithmetic problems hinder learning of mathematical equivalence? Second and third graders (7-9 years old; N= 80) received lessons on mathematical equivalence either with or without typical arithmetic problems (e.g., 15 + 13 = 28 vs. 28 = 28, respectively). Children then solved math equivalence problems (e.g., 3 + 9 + 5 = 6 + __),…

  15. Arithmetic Data Cube as a Data Intensive Benchmark

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael A.; Shabano, Leonid

    2003-01-01

    Data movement across computational grids and across the memory hierarchy of individual grid machines is known to be a limiting factor for applications involving large data sets. In this paper we introduce the Data Cube Operator on an Arithmetic Data Set, which we call the Arithmetic Data Cube (ADC). We propose to use the ADC to benchmark grid capabilities to handle large distributed data sets. The ADC stresses all levels of grid memory by producing 2^d views of an Arithmetic Data Set of d-tuples described by a small number of parameters. We control the data intensity of the ADC by controlling the sizes of the views through the choice of the tuple parameters.
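
    A minimal sketch of the generic Data Cube operator the ADC builds on: for d attributes it materializes all 2^d group-by views of the tuple set (the rows and the SUM aggregate below are hypothetical illustrations).

```python
from itertools import combinations
from collections import defaultdict

def data_cube(rows, d):
    """All 2^d group-by views of rows shaped (attr_1, ..., attr_d,
    measure): one view per attribute subset, aggregating the measure
    by SUM. A minimal sketch of the Data Cube operator."""
    views = {}
    for subset in (s for r in range(d + 1)
                   for s in combinations(range(d), r)):
        agg = defaultdict(int)
        for row in rows:
            key = tuple(row[i] for i in subset)  # group-by key
            agg[key] += row[-1]                  # SUM the measure
        views[subset] = dict(agg)
    return views
```

    For d = 2 this yields four views: the grand total, one per single attribute, and the full group-by; the ADC's data intensity comes from forcing all such views over tuples too numerous to keep in fast memory.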

  16. Patterns of problem-solving in children's literacy and arithmetic.

    PubMed

    Farrington-Flint, Lee; Vanuxem-Cotterill, Sophie; Stiller, James

    2009-11-01

    Patterns of problem-solving among 5- to 7-year-olds were examined on a range of literacy (reading and spelling) and arithmetic-based (addition and subtraction) problem-solving tasks using verbal self-reports to monitor strategy choice. The results showed higher levels of variability in the children's strategy choice across Years 1 and 2 on the arithmetic (addition and subtraction) than the literacy-based tasks (reading and spelling). However, across all four tasks, the children showed a tendency to move from less sophisticated procedural-based strategies, which included phonological strategies for reading and spelling and counting-all and finger modelling for addition and subtraction, to more efficient retrieval methods from Years 1 to 2. Distinct patterns in children's problem-solving skill were identified on the literacy and arithmetic tasks using two separate cluster analyses. There was a strong association between these two profiles, showing that those children with more advanced problem-solving skills on the arithmetic tasks also showed more advanced profiles on the literacy tasks. The results highlight how children of different ages show flexibility in their use of problem-solving strategies across literacy and arithmetical contexts and reinforce the importance of studying variations in children's problem-solving skill across different educational contexts.

  17. Plastic debris in the open ocean

    PubMed Central

    Cózar, Andrés; Echevarría, Fidel; González-Gordillo, J. Ignacio; Irigoien, Xabier; Úbeda, Bárbara; Hernández-León, Santiago; Palma, Álvaro T.; Navarro, Sandra; García-de-Lomas, Juan; Ruiz, Andrea; Fernández-de-Puelles, María L.; Duarte, Carlos M.

    2014-01-01

    There is a rising concern regarding the accumulation of floating plastic debris in the open ocean. However, the magnitude and the fate of this pollution are still open questions. Using data from the Malaspina 2010 circumnavigation, regional surveys, and previously published reports, we show a worldwide distribution of plastic on the surface of the open ocean, mostly accumulating in the convergence zones of each of the five subtropical gyres with comparable density. However, the global load of plastic on the open ocean surface was estimated to be on the order of tens of thousands of tons, far less than expected. Our observations of the size distribution of floating plastic debris point at important size-selective sinks removing millimeter-sized fragments of floating plastic on a large scale. This sink may involve a combination of fast nano-fragmentation of the microplastic into particles of microns or smaller, their transference to the ocean interior by food webs and ballasting processes, and processes yet to be discovered. Resolving the fate of the missing plastic debris is of fundamental importance to determine the nature and significance of the impacts of plastic pollution in the ocean. PMID:24982135

  18. DUST DYNAMICS IN PROTOPLANETARY DISK WINDS DRIVEN BY MAGNETOROTATIONAL TURBULENCE: A MECHANISM FOR FLOATING DUST GRAINS WITH CHARACTERISTIC SIZES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miyake, Tomoya; Suzuki, Takeru K.; Inutsuka, Shu-ichiro, E-mail: miyake.tomoya@e.mbox.nagoya-u.ac.jp, E-mail: stakeru@nagoya-u.jp

    We investigate the dynamics of dust grains of various sizes in protoplanetary disk winds driven by magnetorotational turbulence, by simulating the time evolution of the dust grain distribution in the vertical direction. Small dust grains, which are well-coupled to the gas, are dragged upward with the upflowing gas, while large grains remain near the midplane of a disk. Intermediate-size grains float near the sonic point of the disk wind located at several scale heights from the midplane, where the grains are loosely coupled to the background gas. For the minimum mass solar nebula at 1 au, dust grains with sizes of 25–45 μm float around 4 scale heights from the midplane. Considering the dependence on the distance from the central star, smaller-size grains remain only in an outer region of the disk, while larger-size grains are distributed in a broader region. We also discuss the implications of our result for observations of dusty material around young stellar objects.

  19. Gravity-induced dynamics of a squirmer microswimmer in wall proximity

    NASA Astrophysics Data System (ADS)

    Rühle, Felix; Blaschke, Johannes; Kuhr, Jan-Timm; Stark, Holger

    2018-02-01

    We perform hydrodynamic simulations using the method of multi-particle collision dynamics and a theoretical analysis to study a single squirmer microswimmer at high Péclet number, which moves in a low Reynolds number fluid and under gravity. The relevant parameters are the ratio α of swimming to bulk sedimentation velocity and the squirmer type β. The combination of self-propulsion, gravitational force, hydrodynamic interactions with the wall, and thermal noise leads to a surprisingly diverse behavior. At α > 1 we observe cruising states, while for α < 1 the squirmer resides close to the bottom wall with the motional state determined by stable fixed points in height and orientation. They strongly depend on the squirmer type β. While neutral squirmers permanently float above the wall with upright orientation, pullers float for α larger than a threshold value α_th and are pinned to the wall below α_th. In contrast, pushers slide along the wall at lower heights, from which thermal orientational fluctuations drive them into a recurrent floating state with upright orientation, where they remain on the timescale of orientational persistence.

  20. Plastic debris in the open ocean.

    PubMed

    Cózar, Andrés; Echevarría, Fidel; González-Gordillo, J Ignacio; Irigoien, Xabier; Ubeda, Bárbara; Hernández-León, Santiago; Palma, Alvaro T; Navarro, Sandra; García-de-Lomas, Juan; Ruiz, Andrea; Fernández-de-Puelles, María L; Duarte, Carlos M

    2014-07-15

    There is a rising concern regarding the accumulation of floating plastic debris in the open ocean. However, the magnitude and the fate of this pollution are still open questions. Using data from the Malaspina 2010 circumnavigation, regional surveys, and previously published reports, we show a worldwide distribution of plastic on the surface of the open ocean, mostly accumulating in the convergence zones of each of the five subtropical gyres with comparable density. However, the global load of plastic on the open ocean surface was estimated to be on the order of tens of thousands of tons, far less than expected. Our observations of the size distribution of floating plastic debris point at important size-selective sinks removing millimeter-sized fragments of floating plastic on a large scale. This sink may involve a combination of fast nano-fragmentation of the microplastic into particles of microns or smaller, their transference to the ocean interior by food webs and ballasting processes, and processes yet to be discovered. Resolving the fate of the missing plastic debris is of fundamental importance to determine the nature and significance of the impacts of plastic pollution in the ocean.
