Algorithm XXX: Functions to support the IEEE standard for binary floating-point arithmetic.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cody, W. J.; Mathematics and Computer Science
1993-12-01
This paper describes C programs for the support functions copysign(x,y), logb(x), scalb(x,n), nextafter(x,y), finite(x), and isnan(x) recommended in the Appendix to the IEEE Standard for Binary Floating-Point Arithmetic. In the case of logb, the modified definition given in the later IEEE Standard for Radix-Independent Floating-Point Arithmetic is followed. These programs should run without modification on most systems conforming to the binary standard.
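These support functions later became standard library fare: C99 adopted them into <math.h>, and Python's math module exposes close analogues under slightly different names (finite is isfinite, scalb(x, n) corresponds to ldexp, and logb can be recovered from the exponent returned by frexp). A minimal sketch, assuming Python 3.9+ for math.nextafter:

```python
import math

x = -3.5
print(math.copysign(x, 2.0))      # copysign(x, y): magnitude of x, sign of y -> 3.5
print(math.ldexp(x, 3))           # scalb(x, 3) = x * 2**3 -> -28.0
print(math.frexp(8.0)[1] - 1)     # logb(8.0): unbiased exponent -> 3
print(math.nextafter(1.0, 2.0))   # next representable double above 1.0
print(math.isfinite(x))           # finite(x) -> True
print(math.isnan(float("nan")))   # isnan(x) -> True
```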
Defining the IEEE-854 floating-point standard in PVS
NASA Technical Reports Server (NTRS)
Miner, Paul S.
1995-01-01
A significant portion of the ANSI/IEEE-854 Standard for Radix-Independent Floating-Point Arithmetic is defined in PVS (Prototype Verification System). Since IEEE-854 is a generalization of the ANSI/IEEE-754 Standard for Binary Floating-Point Arithmetic, the definition of IEEE-854 in PVS also formally defines much of IEEE-754. This collection of PVS theories provides a basis for machine-checked verification of floating-point systems. This formal definition illustrates that formal specification techniques are sufficiently advanced that it is reasonable to consider their use in the development of future standards.
Basic mathematical function libraries for scientific computation
NASA Technical Reports Server (NTRS)
Galant, David C.
1989-01-01
Ada packages implementing selected mathematical functions for the support of scientific and engineering applications were written. The packages provide the Ada programmer with the mathematical function support found in the languages Pascal and FORTRAN as well as an extended precision arithmetic and a complete complex arithmetic. The algorithms used are fully described and analyzed. Implementation assumes that the Ada type FLOAT objects fully conform to the IEEE 754-1985 standard for single binary floating-point arithmetic, and that INTEGER objects are 32-bit entities. Codes for the Ada packages are included as appendixes.
Exploring the Feasibility of a DNA Computer: Design of an ALU Using Sticker-Based DNA Model.
Sarkar, Mayukh; Ghosal, Prasun; Mohanty, Saraju P
2017-09-01
Since its inception, DNA computing has advanced to offer an extremely powerful, energy-efficient emerging technology for solving hard computational problems with its inherent massive parallelism and extremely high data density. It would be much more powerful and general purpose when combined, through a suitable ALU, with the well-known algorithmic solutions that already exist for conventional computing architectures. A specifically designed DNA Arithmetic and Logic Unit (ALU) that can address operations suitable for both domains can thus bridge the gap between the two. An ALU must be able to perform all possible logic operations (NOT, OR, AND, XOR, NOR, NAND, XNOR, comparison, shift, etc.) as well as integer and floating-point arithmetic operations (addition, subtraction, multiplication, and division). In this paper, the design of an ALU using the sticker-based DNA model is proposed, with an experimental feasibility analysis. The novelty of this paper is manifold. First, the integer arithmetic operations use 2's complement arithmetic, and the floating-point operations follow the IEEE 754 floating-point format, closely resembling a conventional ALU. The output of each operation can also be reused in any subsequent operation, so any algorithm or program logic can be implemented directly on the DNA computer without modification. Second, once the basic operations of the sticker model are automated, the proposed implementations become highly suitable for the design of a fully automated ALU. Third, the proposed approaches are easy to implement. Finally, they can work on sufficiently large binary numbers.
Floating point arithmetic in future supercomputers
NASA Technical Reports Server (NTRS)
Bailey, David H.; Barton, John T.; Simon, Horst D.; Fouts, Martin J.
1989-01-01
Considerations in the floating-point design of a supercomputer are discussed. Particular attention is given to word size, hardware support for extended precision, format, and accuracy characteristics. These issues are discussed from the perspective of the Numerical Aerodynamic Simulation Systems Division at NASA Ames. The features believed to be most important for a future supercomputer floating-point design include: (1) a 64-bit IEEE floating-point format with 11 exponent bits, 52 mantissa bits, and one sign bit and (2) hardware support for reasonably fast double-precision arithmetic.
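The recommended layout (one sign bit, 11 exponent bits, 52 mantissa bits) is exactly the IEEE 754 binary64 format. A short sketch that unpacks the three fields of a double:

```python
import struct

def fields(x: float):
    """Split a 64-bit IEEE double into (sign, biased exponent, mantissa)."""
    bits = struct.unpack(">Q", struct.pack(">d", x))[0]
    sign = bits >> 63                   # 1 bit
    exponent = (bits >> 52) & 0x7FF     # 11 bits, bias 1023
    mantissa = bits & ((1 << 52) - 1)   # 52 bits, implicit leading 1
    return sign, exponent, mantissa

print(fields(1.0))   # (0, 1023, 0): true exponent 1023 - 1023 = 0
print(fields(-2.0))  # (1, 1024, 0)
```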
Paranoia.Ada: A diagnostic program to evaluate Ada floating-point arithmetic
NASA Technical Reports Server (NTRS)
Hjermstad, Chris
1986-01-01
Many essential software functions in the mission critical computer resource application domain depend on floating point arithmetic. Numerically intensive functions associated with the Space Station project, such as ephemeris generation or the implementation of Kalman filters, are likely to employ the floating point facilities of Ada. Paranoia.Ada appears to be a valuable program to ensure that Ada environments and their underlying hardware exhibit the precision and correctness required to satisfy mission computational requirements. As a diagnostic tool, Paranoia.Ada reveals many essential characteristics of an Ada floating point implementation. Equipped with such knowledge, programmers need not tremble before the complex task of floating point computation.
Instabilities caused by floating-point arithmetic quantization.
NASA Technical Reports Server (NTRS)
Phillips, C. L.
1972-01-01
It is shown that an otherwise stable digital control system can be made unstable by signal quantization when the controller operates on floating-point arithmetic. Sufficient conditions of instability are determined, and an example of loss of stability is treated when only one quantizer is operated.
Floating-point geometry: toward guaranteed geometric computations with approximate arithmetics
NASA Astrophysics Data System (ADS)
Bajard, Jean-Claude; Langlois, Philippe; Michelucci, Dominique; Morin, Géraldine; Revol, Nathalie
2008-08-01
Geometric computations can fail because of inconsistencies due to floating-point inaccuracy. For instance, the computed intersection point between two curves does not lie on the curves: this is unavoidable when the intersection point coordinates are not rational, and thus not representable in floating-point arithmetic. A popular heuristic approach tests equalities and nullities up to a tolerance ɛ. But transitivity of equality is lost: we can have A approx B and B approx C, but A not approx C (where A approx B means ||A - B|| < ɛ for two floating-point values A, B). Interval arithmetic is another, self-validated alternative; the difficulty is to limit the growth of interval widths during computation. Unfortunately, interval arithmetic cannot decide equality or nullity, even in cases where these are decidable by other means. A new approach, developed in this paper, consists in modifying the geometric problems and algorithms to account for the undecidability of the equality test and the unavoidable inaccuracy. In particular, all curves come with a non-zero thickness, so two curves (generically) cut in a region with non-zero area, of which an inner and outer representation is computable. This last approach no longer assumes that an equality or nullity test is available. The question which arises is: which geometric problems can still be solved with this last approach, and which cannot? This paper begins with a description of some cases where every known arithmetic fails in practice. Then, for each arithmetic, some properties of the problems it can solve are given. We end this work by proposing the bases of a new approach which aims to fulfill the requirements of geometric computations.
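The loss of transitivity is easy to reproduce; in this sketch the tolerance and the test values are arbitrary illustrations:

```python
EPS = 1e-9

def approx(a, b, eps=EPS):
    """The popular heuristic: equal up to a tolerance eps."""
    return abs(a - b) < eps

A, B, C = 0.0, 0.6e-9, 1.2e-9
print(approx(A, B))  # True
print(approx(B, C))  # True
print(approx(A, C))  # False: tolerance "equality" is not transitive
```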
Efficient Boundary Extraction of BSP Solids Based on Clipping Operations.
Wang, Charlie C L; Manocha, Dinesh
2013-01-01
We present an efficient algorithm to extract the manifold surface that approximates the boundary of a solid represented by a Binary Space Partition (BSP) tree. Our polygonization algorithm repeatedly performs clipping operations on volumetric cells that correspond to a spatial convex partition and computes the boundary by traversing the connected cells. We use point-based representations along with finite-precision arithmetic to improve the efficiency and generate the B-rep approximation of a BSP solid. The core of our polygonization method is a novel clipping algorithm that uses a set of logical operations to make it resistant to degeneracies resulting from limited precision of floating-point arithmetic. The overall BSP to B-rep conversion algorithm can accurately generate boundaries with sharp and small features, and is faster than prior methods. At the end of this paper, we use this algorithm for a few geometric processing applications including Boolean operations, model repair, and mesh reconstruction.
Rational Arithmetic in Floating-Point.
1986-09-01
W. Kahan, Rational Arithmetic in Floating-Point, Center for Pure and Applied Mathematics, University of California, Berkeley, Report PAM-343, September 1986. The report examines the delicate balance between, on the one hand, the simplicity and aesthetic appeal of the specifications and, on the other hand, their complexity.
Hardware math for the 6502 microprocessor
NASA Technical Reports Server (NTRS)
Kissel, R.; Currie, J.
1985-01-01
A floating-point arithmetic unit is described which is being used in the Ground Facility of Large Space Structures Control Verification (GF/LSSCV). The experiment uses two complete inertial measurement units and a set of three gimbal torquers in a closed loop to control the structural vibrations in a flexible test article (beam). A 6502 (8-bit) microprocessor controls four AMD 9511A floating-point arithmetic units to do all the computation in 20 milliseconds.
High-precision arithmetic in mathematical physics
Bailey, David H.; Borwein, Jonathan M.
2015-05-12
For many scientific calculations, particularly those involving empirical data, IEEE 32-bit floating-point arithmetic produces results of sufficient accuracy, while for other applications IEEE 64-bit floating-point is more appropriate. But for some very demanding applications, even higher levels of precision are often required. This article discusses the challenge of high-precision computation in the context of mathematical physics, and highlights the facilities required to support future computation, in light of emerging developments in computer architecture.
Formal verification of mathematical software
NASA Technical Reports Server (NTRS)
Sutherland, D.
1984-01-01
Methods are investigated for formally specifying and verifying the correctness of mathematical software (software which uses floating point numbers and arithmetic). Previous work in the field was reviewed. A new model of floating point arithmetic called the asymptotic paradigm was developed and formalized. Two different conceptual approaches to program verification, the classical Verification Condition approach and the more recently developed Programming Logic approach, were adapted to use the asymptotic paradigm. These approaches were then used to verify several programs; the programs chosen were simplified versions of actual mathematical software.
NASA Technical Reports Server (NTRS)
Manos, P.; Turner, L. R.
1972-01-01
Approximations which can be evaluated with precision using floating-point arithmetic are presented. The particular set of approximations thus far developed is for the function TAN and the functions of USASI FORTRAN, excepting SQRT and EXPONENTIATION. These approximations are, furthermore, specialized to particular forms which are especially suited to a computer with a small memory, in that all of the approximations can share one general purpose subroutine for the evaluation of a polynomial in the square of the working argument.
Paranoia.Ada: Sample output reports
NASA Technical Reports Server (NTRS)
1986-01-01
Paranoia.Ada is a program to diagnose floating point arithmetic in the context of the Ada programming language. The program evaluates the quality of a floating point arithmetic implementation with respect to the proposed IEEE Standards P754 and P854. Paranoia.Ada is derived from the original BASIC programming language version of Paranoia. It replicates in Ada the test algorithms originally implemented in BASIC and adheres to the evaluation criteria established by W. M. Kahan. Paranoia.Ada incorporates a major structural redesign and employs applicable Ada architectural and stylistic features.
NASA Astrophysics Data System (ADS)
Nikmehr, Hooman; Phillips, Braden; Lim, Cheng-Chew
2005-02-01
Recently, decimal arithmetic has become attractive in the financial and commercial world, including banking, tax calculation, currency conversion, insurance and accounting. Although computers still carry out decimal calculations using software libraries and binary floating-point numbers, it is likely that in the near future all processors will be equipped with units performing decimal operations directly on decimal operands. One critical building block for some complex decimal operations is the decimal carry-free adder. This paper discusses the mathematical framework of the addition, introduces a new signed-digit format for representing decimal numbers and presents an efficient architectural implementation. Delay estimation analysis shows that the adder offers improved performance over earlier designs.
Verification of IEEE Compliant Subtractive Division Algorithms
NASA Technical Reports Server (NTRS)
Miner, Paul S.; Leathrum, James F., Jr.
1996-01-01
A parameterized definition of subtractive floating point division algorithms is presented and verified using PVS. The general algorithm is proven to satisfy a formal definition of an IEEE standard for floating point arithmetic. The utility of the general specification is illustrated using a number of different instances of the general algorithm.
Learning to assign binary weights to binary descriptor
NASA Astrophysics Data System (ADS)
Huang, Zhoudi; Wei, Zhenzhong; Zhang, Guangjun
2016-10-01
Constructing robust binary local feature descriptors is receiving increasing interest because their binary nature enables fast processing while requiring significantly less memory than their floating-point competitors. To bridge the performance gap between binary and floating-point descriptors without increasing the cost of computing and matching, optimal binary weights are learned and assigned to the binary descriptor, since each bit may contribute differently to distinctiveness and robustness. Technically, a large-scale regularized optimization method is applied to learn float weights for each bit of the binary descriptor. Furthermore, a binary approximation of the float weights is computed using an efficient alternating greedy strategy, which significantly improves discriminative power while preserving the fast-matching advantage. Extensive experimental results on two challenging datasets (the Brown dataset and the Oxford dataset) demonstrate the effectiveness and efficiency of the proposed method.
Floating-point system quantization errors in digital control systems
NASA Technical Reports Server (NTRS)
Phillips, C. L.; Vallely, D. P.
1978-01-01
This paper considers digital controllers (filters) operating in floating-point arithmetic in either open-loop or closed-loop systems. A quantization error analysis technique is developed, and is implemented by a digital computer program that is based on a digital simulation of the system. The program can be integrated into existing digital simulations of a system.
Gauss Elimination: Workhorse of Linear Algebra.
1995-08-05
Gauss elimination (GE) is the workhorse of linear algebra computation for solving systems, computing determinants and determining the rank of a matrix. All of these are discussed in varying contexts. These include different arithmetic or algebraic settings, such as integer arithmetic or polynomial rings, as well as conventional real (floating-point) arithmetic. These settings affect both the accuracy and complexity analyses of the algorithm, which are also covered here. The impact of modern parallel computer architecture on GE is also discussed.
Arnold, Jeffrey
2018-05-14
Floating-point computations are at the heart of much of the computing done in high energy physics. The correctness, speed and accuracy of these computations are of paramount importance. The lack of any of these characteristics can mean the difference between new, exciting physics and an embarrassing correction. This talk will examine practical aspects of IEEE 754-2008 floating-point arithmetic as encountered in HEP applications. After describing the basic features of IEEE floating-point arithmetic, the presentation will cover: common hardware implementations (SSE, x87); techniques for improving the accuracy of summation, multiplication and data interchange; compiler options for gcc and icc affecting floating-point operations; and hazards to be avoided. About the speaker: Jeffrey M Arnold is a Senior Software Engineer in the Intel Compiler and Languages group at Intel Corporation. He has been part of the Digital->Compaq->Intel compiler organization for nearly 20 years; part of that time, he worked on both low- and high-level math libraries. Prior to that, he was in the VMS Engineering organization at Digital Equipment Corporation. In the late 1980s, Jeff spent 2½ years at CERN as part of the CERN/Digital Joint Project. In 2008, he returned to CERN to spend 10 weeks working with CERN/openlab. Since that time, he has returned to CERN multiple times to teach at openlab workshops and consult with various LHC experiments. Jeff received his Ph.D. in physics from Case Western Reserve University.
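A canonical example of such a summation-accuracy technique is Kahan's compensated summation, sketched below with illustrative values (the talk itself may cover other variants):

```python
def kahan_sum(values):
    """Compensated summation: carry the rounding error of each addition forward."""
    total = 0.0
    c = 0.0  # running compensation for lost low-order bits
    for v in values:
        y = v - c
        t = total + y
        c = (t - total) - y  # the part of y that did not make it into t
        total = t
    return total

vals = [1.0, 1e-16, 1e-16, 1e-16, 1e-16]
print(sum(vals))        # 1.0: each tiny term is rounded away in turn
print(kahan_sum(vals))  # slightly above 1.0: the lost 4e-16 is recovered
```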
Interpretation of IEEE-854 floating-point standard and definition in the HOL system
NASA Technical Reports Server (NTRS)
Carreno, Victor A.
1995-01-01
The ANSI/IEEE Standard 854-1987 for floating-point arithmetic is interpreted by converting the lexical descriptions in the standard into mathematical conditional descriptions organized in tables. The standard is represented in higher-order logic within the framework of the HOL (Higher Order Logic) system. The paper is divided in two parts with the first part the interpretation and the second part the description in HOL.
On the design of a radix-10 online floating-point multiplier
NASA Astrophysics Data System (ADS)
McIlhenny, Robert D.; Ercegovac, Milos D.
2009-08-01
This paper describes an approach to design and implement a radix-10 online floating-point multiplier. An online approach is considered because it offers computational flexibility not available with conventional arithmetic. The design was coded in VHDL and compiled, synthesized, and mapped onto a Virtex 5 FPGA to measure cost in terms of LUTs (look-up-tables) as well as the cycle time and total latency. The routing delay which was not optimized is the major component in the cycle time. For a rough estimate of the cost/latency characteristics, our design was compared to a standard radix-2 floating-point multiplier of equivalent precision. The results demonstrate that even an unoptimized radix-10 online design is an attractive implementation alternative for FPGA floating-point multiplication.
NASA Technical Reports Server (NTRS)
Pan, Jing; Levitt, Karl N.; Cohen, Gerald C.
1991-01-01
Discussed here is work to formally specify and verify a floating point coprocessor based on the MC68881. The HOL verification system developed at Cambridge University was used. The coprocessor consists of two independent units: the bus interface unit used to communicate with the cpu and the arithmetic processing unit used to perform the actual calculation. Reasoning about the interaction and synchronization among processes using higher order logic is demonstrated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lloyd, G. Scott
This floating-point arithmetic library contains a software implementation of Universal Numbers (unums) as described by John Gustafson [1]. The unum format is a superset of IEEE 754 floating point with several advantages. Computing with unums provides more accurate answers without rounding errors, underflow or overflow. In contrast to fixed-sized IEEE numbers, a variable number of bits can be used to encode unums. This allows numbers with only a few significant digits, or with a small dynamic range, to be represented more compactly.
Multi-input and binary reproducible, high bandwidth floating point adder in a collective network
Chen, Dong; Eisley, Noel A.; Heidelberger, Philip; Steinmacher-Burow, Burkhard
2016-11-15
To add floating point numbers in a parallel computing system, a collective logic device receives the floating point numbers from computing nodes. The collective logic device converts the floating point numbers to integer numbers, adds the integer numbers to generate a summation, and converts the summation back to a floating point number. The collective logic device performs the receiving, the conversion of the floating point numbers, the addition, and the conversion of the summation in one pass. One pass indicates that the computing nodes send inputs only once to the collective logic device and receive outputs only once from the collective logic device.
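The convert-add-convert scheme can be sketched in software: pick a common exponent, convert every input to a scaled integer, add exactly, and convert back. The function below is a hypothetical illustration of the idea, not the patented hardware design; the shift width and rounding choices are assumptions.

```python
import math

def reproducible_sum(values, shift=60):
    """Order-independent float addition: convert inputs to integers at a
    common scale, add exactly, convert the total back to a float."""
    emax = max(math.frexp(v)[1] for v in values)          # largest input exponent
    scale = shift - emax
    ints = [round(math.ldexp(v, scale)) for v in values]  # one rounding per input
    return math.ldexp(sum(ints), -scale)                  # integer addition is exact

a = [0.1, 0.2, 0.3, 1e5]
b = list(reversed(a))
print(reproducible_sum(a) == reproducible_sum(b))  # True: identical bits, any order
```

Because the only roundings happen at conversion, the result is bit-identical regardless of the order in which nodes contribute, which is the reproducibility property the patent targets.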
Design of permanent magnet synchronous motor speed control system based on SVPWM
NASA Astrophysics Data System (ADS)
Wu, Haibo
2017-04-01
A permanent magnet synchronous motor speed control system based on the TMS320F28335 is designed and applied to an all-electric injection molding machine. The control method uses SVPWM; by sampling the motor current and the position information from a rotary transformer (resolver), the system realizes double closed-loop control of speed and current. The hardware floating-point core of the TMS320F28335 allows the permanent magnet synchronous motor application to run in floating-point arithmetic, replacing the earlier fixed-point algorithms and improving code efficiency.
On the Floating Point Performance of the i860 Microprocessor
NASA Technical Reports Server (NTRS)
Lee, King; Kutler, Paul (Technical Monitor)
1997-01-01
The i860 microprocessor is a pipelined processor that can deliver two double precision floating point results every clock. It is being used in the Touchstone project to develop a teraflop computer by the year 2000. With such high computational capabilities it was expected that memory bandwidth would limit performance on many kernels. Measured performance of three kernels showed performance is less than what memory bandwidth limitations would predict. This paper develops a model that explains the discrepancy in terms of memory latencies and points to some problems involved in moving data from memory to the arithmetic pipelines.
Floating-to-Fixed-Point Conversion for Digital Signal Processors
NASA Astrophysics Data System (ADS)
Menard, Daniel; Chillet, Daniel; Sentieys, Olivier
2006-12-01
Digital signal processing applications are specified with floating-point data types but they are usually implemented in embedded systems with fixed-point arithmetic to minimise cost and power consumption. Thus, methodologies which establish automatically the fixed-point specification are required to reduce the application time-to-market. In this paper, a new methodology for the floating-to-fixed point conversion is proposed for software implementations. The aim of our approach is to determine the fixed-point specification which minimises the code execution time for a given accuracy constraint. Compared to previous methodologies, our approach takes into account the DSP architecture to optimise the fixed-point formats and the floating-to-fixed-point conversion process is coupled with the code generation process. The fixed-point data types and the position of the scaling operations are optimised to reduce the code execution time. To evaluate the fixed-point computation accuracy, an analytical approach is used to reduce the optimisation time compared to the existing methods based on simulation. The methodology stages are described and several experiment results are presented to underline the efficiency of this approach.
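The conversion at the heart of such methodologies maps each float to an integer in a Q-format with a chosen number of fractional bits; arithmetic then proceeds on integers, with rescaling after multiplication. A minimal sketch (Q1.15 is simply a common DSP choice, not the paper's specific format):

```python
def to_fixed(x, frac_bits):
    """Quantize a float to a signed fixed-point integer with frac_bits fractional bits."""
    return round(x * (1 << frac_bits))

def fixed_mul(a, b, frac_bits):
    """Fixed-point multiply: rescale the double-width integer product."""
    return (a * b) >> frac_bits

def to_float(q, frac_bits):
    return q / (1 << frac_bits)

FRAC = 15  # Q1.15 format
a = to_fixed(0.5, FRAC)   # 16384
b = to_fixed(0.25, FRAC)  # 8192
print(to_float(fixed_mul(a, b, FRAC), FRAC))  # 0.125
```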
An Input Routine Using Arithmetic Statements for the IBM 704 Digital Computer
NASA Technical Reports Server (NTRS)
Turner, Don N.; Huff, Vearl N.
1961-01-01
An input routine has been designed for use with FORTRAN or SAP coded programs which are to be executed on an IBM 704 digital computer. All input to be processed by the routine is punched on IBM cards as declarative statements of the arithmetic type resembling the FORTRAN language. The routine is 850 words in length. It is capable of loading fixed- or floating-point numbers, octal numbers, and alphabetic words, and of performing simple arithmetic as indicated on input cards. Provisions have been made for rapid loading of arrays of numbers in consecutive memory locations.
Y-MP floating point and Cholesky factorization
NASA Technical Reports Server (NTRS)
Carter, Russell
1991-01-01
The floating point arithmetics implemented in the Cray 2 and Cray Y-MP computer systems are nearly identical, but large scale computations performed on the two systems have exhibited significant differences in accuracy. The difference in accuracy is analyzed for Cholesky factorization algorithm, and it is found that the source of the difference is the subtract magnitude operation of the Cray Y-MP. The results from numerical experiments for a range of problem sizes are presented, and an efficient method for improving the accuracy of the factorization obtained on the Y-MP is presented.
Research in the design of high-performance reconfigurable systems
NASA Technical Reports Server (NTRS)
Slotnick, D. L.; Mcewan, S. D.; Spry, A. J.
1984-01-01
An initial design for the Bit Processor (BP), referred to in prior reports as the Processing Element or PE, has been completed. Eight BP's, together with their supporting random-access memory, a 64K x 9 ROM to perform addition, routing logic, and some additional logic, constitute the components of a single stage. An initial stage design is given. Stages may be combined to perform high-speed fixed- or floating-point arithmetic. Stages can be configured into a range of arithmetic modules that includes bit-serial one- or two-dimensional arrays; one- or two-dimensional arrays of fixed- or floating-point processors; and specialized uniprocessors, such as long-word arithmetic units. One to eight BP's represent a likely initial chip level. The Stage would then correspond to a first-level pluggable module. As both this project and VLSI CAD/CAM progress, however, it is expected that the chip level would migrate upward to the stage and, perhaps, ultimately the box level. The BP RAM, consisting of two banks, holds only operands and indices. Programs are at the box (high-level function) and system level. At the system level, initial effort has been concentrated on specifying the tools needed to evaluate design alternatives.
Inconsistencies in Numerical Simulations of Dynamical Systems Using Interval Arithmetic
NASA Astrophysics Data System (ADS)
Nepomuceno, Erivelton G.; Peixoto, Márcia L. C.; Martins, Samir A. M.; Rodrigues, Heitor M.; Perc, Matjaž
Over the past few decades, interval arithmetic has been attracting widespread interest from the scientific community. With the expansion of computing power, scientific computing is encountering a noteworthy shift from floating-point arithmetic toward increased use of interval arithmetic. Notwithstanding the significant reliability of interval arithmetic, this paper presents a theoretical inconsistency in a simulation of dynamical systems using a well-known interval arithmetic implementation. We have observed that two natural interval extensions present an empty intersection during a finite time range, which is contrary to the fundamental theorem of interval analysis. We have proposed a procedure to at least partially overcome this problem, based on the union of the two generated pseudo-orbits. This paper also shows a successful application of interval arithmetic in reducing interval width in the simulation of a discrete map. The implications of our findings for the reliability of scientific computing using interval arithmetic have been properly addressed using two numerical examples.
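The underlying effect is that two natural interval extensions of the same function can produce different enclosures. The toy interval class below ignores directed rounding, so it illustrates only the dependency effect, not the empty-intersection anomaly the paper reports:

```python
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)
    def __mul__(self, other):
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))
    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

x = Interval(0.0, 1.0)
one = Interval(1.0, 1.0)
# Two natural interval extensions of the same function f(x) = x - x^2:
print(x - x * x)      # [-1.0, 1.0]
print(x * (one - x))  # [0.0, 1.0]: tighter, yet mathematically identical
```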
NASA Technical Reports Server (NTRS)
Munoz, Cesar A.; Butler, Ricky (Technical Monitor)
2003-01-01
PVSio is a conservative extension to the PVS prelude library that provides basic input/output capabilities to the PVS ground evaluator. It supports rapid prototyping in PVS by enhancing the specification language with built-in constructs for string manipulation, floating point arithmetic, and input/output operations.
Desirable floating-point arithmetic and elementary functions for numerical computation
NASA Technical Reports Server (NTRS)
Hull, T. E.
1978-01-01
The topics considered are: (1) the base of the number system, (2) precision control, (3) number representation, (4) arithmetic operations, (5) other basic operations, (6) elementary functions, and (7) exception handling. The possibility of doing without fixed-point arithmetic is also mentioned. The specifications are intended to be entirely at the level of a programming language such as FORTRAN. The emphasis is on convenience and simplicity from the user's point of view. Conforming to such specifications would have obvious beneficial implications for the portability of numerical software and for proving programs correct, as well as for providing the facilities most suitable for the user. The specifications are not complete in every detail, but it is intended that they be complete in spirit - some further details, especially syntactic details, would have to be provided, but the proposals are otherwise relatively complete.
Apparatus and method for implementing power saving techniques when processing floating point values
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Young Moon; Park, Sang Phill
An apparatus and method are described for reducing power when reading and writing graphics data. For example, one embodiment of an apparatus comprises: a graphics processor unit (GPU) to process graphics data including floating point data; a set of registers, at least one of the registers of the set partitioned to store the floating point data; and encode/decode logic to reduce a number of binary 1 values being read from the at least one register by causing a specified set of bit positions within the floating point data to be read out as 0s rather than 1s.
Binary Arithmetic From Hariot (CA, 1600 A.D.) to the Computer Age.
ERIC Educational Resources Information Center
Glaser, Anton
This history of binary arithmetic begins with details of Thomas Hariot's contribution and includes specific references to Hariot's manuscripts kept at the British Museum. A binary code developed by Sir Francis Bacon is discussed. Briefly mentioned are contributions to binary arithmetic made by Leibniz, Fontenelle, Gauss, Euler, Bezout, Barlow,…
Bit-parallel arithmetic in a massively-parallel associative processor
NASA Technical Reports Server (NTRS)
Scherson, Isaac D.; Kramer, David A.; Alleyne, Brian D.
1992-01-01
A simple but powerful new architecture based on a classical associative processor model is presented. Algorithms for performing the four basic arithmetic operations both for integer and floating point operands are described. For m-bit operands, the proposed architecture makes it possible to execute complex operations in O(m) cycles as opposed to O(m^2) for bit-serial machines. A word-parallel, bit-parallel, massively-parallel computing system can be constructed using this architecture with VLSI technology. The operation of this system is demonstrated for the fast Fourier transform and matrix multiplication.
A floating-point/multiple-precision processor for airborne applications
NASA Technical Reports Server (NTRS)
Yee, R.
1982-01-01
A compact input/output (I/O) numerical processor capable of performing floating-point, multiple-precision and other arithmetic functions at execution times which are at least 100 times faster than comparable software emulation is described. The I/O device is a microcomputer system containing a 16-bit microprocessor, a numerical coprocessor with eight 80-bit registers running at a 5 MHz clock rate, 18K random access memory (RAM) and 16K electrically programmable read only memory (EPROM). The processor acts as an intelligent slave to the host computer and can be programmed in high order languages such as FORTRAN and PL/M-86.
Fixed-point image orthorectification algorithms for reduced computational cost
NASA Astrophysics Data System (ADS)
French, Joseph Clinton
Imaging systems have been applied to many new applications in recent years. With the advent of low-cost, low-power focal planes and more powerful, lower cost computers, remote sensing applications have become more widespread. Many of these applications require some form of geolocation, especially when relative distances are desired. However, when greater global positional accuracy is needed, orthorectification becomes necessary. Orthorectification is the process of projecting an image onto a Digital Elevation Map (DEM), which removes terrain distortions and corrects the perspective distortion by changing the viewing angle to be perpendicular to the projection plane. Orthorectification is used in disaster tracking, landscape management, wildlife monitoring and many other applications. However, orthorectification is a computationally expensive process due to floating point operations and divisions in the algorithm. To reduce the computational cost of on-board processing, two novel algorithm modifications are proposed. One modification is projection utilizing fixed-point arithmetic. Fixed-point arithmetic removes the floating point operations and reduces the processing time by operating only on integers. The second modification is replacement of the division inherent in projection with a multiplication of the inverse. The inverse must operate iteratively. Therefore, the inverse is replaced with a linear approximation. As a result of these modifications, the processing time of projection is reduced by a factor of 1.3x with an average pixel position error of 0.2% of a pixel size for 128-bit integer processing and over 4x with an average pixel position error of less than 13% of a pixel size for 64-bit integer processing. A secondary inverse function approximation is also developed that replaces the linear approximation with a quadratic.
The quadratic approximation produces a more accurate approximation of the inverse, allowing for an integer multiplication calculation to be used in place of the traditional floating point division. This method increases the throughput of the orthorectification operation by 38% when compared to floating point processing. Additionally, this method improves the accuracy of the existing integer-based orthorectification algorithms in terms of average pixel distance, increasing the accuracy of the algorithm by more than 5x. The quadratic function reduces the pixel position error to 2% and is still 2.8x faster than the 128-bit floating point algorithm.
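The division-to-multiplication substitution described above can be sketched in a few lines. This is an illustrative reconstruction, not the dissertation's algorithm: the Q16.16 scale, the power-of-two intervals, and the linear interpolation rule are all assumed choices made for the demonstration.

```python
# Sketch: replace x / z with a fixed-point multiply by an approximate
# reciprocal (Q16.16 format; constants are illustrative, not the paper's).
SHIFT = 16
SCALE = 1 << SHIFT  # Q16.16 fixed point

def to_fixed(x: float) -> int:
    return int(round(x * SCALE))

def fixed_mul(a: int, b: int) -> int:
    return (a * b) >> SHIFT

def approx_recip(z: int) -> int:
    """Linear approximation of SCALE*SCALE // z over the power-of-two
    interval [2^k, 2^(k+1)) containing z (z > 0, fixed point)."""
    k = z.bit_length() - 1
    z0 = 1 << k
    r0 = (SCALE * SCALE) // z0          # exact reciprocal at interval start
    r1 = (SCALE * SCALE) // (z0 << 1)   # exact reciprocal at interval end
    # Integer-only linear interpolation inside the interval.
    return r0 + ((r1 - r0) * (z - z0)) // z0

x, z = 3.5, 2.25
fx, fz = to_fixed(x), to_fixed(z)
approx = fixed_mul(fx, approx_recip(fz)) / SCALE
exact = x / z
print(approx, exact)  # agree to within a few percent
```

A quadratic interpolant in place of the linear one, as the abstract describes, would tighten the error while keeping the arithmetic integer-only.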
Floating-point system quantization errors in digital control systems
NASA Technical Reports Server (NTRS)
Phillips, C. L.
1973-01-01
The results are reported of research into the effects on system operation of signal quantization in a digital control system. The investigation considered digital controllers (filters) operating in floating-point arithmetic in either open-loop or closed-loop systems. An error analysis technique is developed, and is implemented by a digital computer program that is based on a digital simulation of the system. As an output the program gives the programming form required for minimum system quantization errors (either maximum or rms errors), and the maximum and rms errors that appear in the system output for a given bit configuration. The program can be integrated into existing digital simulations of a system.
UNIX as an environment for producing numerical software
NASA Technical Reports Server (NTRS)
Schryer, N. L.
1978-01-01
The UNIX operating system supports a number of software tools: a mathematical equation-setting language, a phototypesetting language, a FORTRAN preprocessor language, a text editor, and a command interpreter. The design, implementation, documentation, and maintenance of a portable FORTRAN test of the floating-point arithmetic unit of a computer is used to illustrate these tools at work.
NASA Astrophysics Data System (ADS)
Fukushima, Toshio
2012-04-01
By extending the exponent of floating point numbers with an additional integer as the power index of a large radix, we compute fully normalized associated Legendre functions (ALF) by recursion without underflow problem. The new method enables us to evaluate ALFs of extremely high degree such as 2^32 = 4,294,967,296, which corresponds to around 1 cm resolution on the Earth's surface. By limiting the application of exponent extension to a few working variables in the recursion, choosing a suitable large power of 2 as the radix, and embedding the contents of the basic arithmetic procedure of floating point numbers with the exponent extension directly in the program computing the recurrence formulas, we achieve the evaluation of ALFs in the double-precision environment at the cost of around 10% increase in computational time per single ALF. This formulation realizes meaningful execution of the spherical harmonic synthesis and/or analysis of arbitrary degree and order.
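The exponent-extension scheme can be sketched as a pair (x, i) representing x * B**i, where B is a large power-of-two radix and i an ordinary integer. The constants below (radix 2^960, thresholds 2^±480) follow the spirit of the approach but should be read as illustrative, not as the paper's exact choices.

```python
# Sketch of extended-exponent arithmetic: value = x * B**i.
B = 2.0 ** 960        # large power-of-two radix (illustrative)
INV_B = 2.0 ** -960
BIG = 2.0 ** 480      # normalization thresholds (illustrative)
SMALL = 2.0 ** -480

def normalize(x: float, i: int):
    """Keep the float part of (x, i) inside [SMALL, BIG)."""
    while x != 0.0 and abs(x) >= BIG:
        x *= INV_B
        i += 1
    while x != 0.0 and abs(x) < SMALL:
        x *= B
        i -= 1
    return x, i

def xmul(a, b):
    """Multiply two extended-exponent pairs, renormalizing the result."""
    (xa, ia), (xb, ib) = a, b
    return normalize(xa * xb, ia + ib)

# A product that would underflow to 0.0 in plain double precision:
tiny = normalize(1e-200, 0)
prod = xmul(tiny, tiny)      # represents ~1e-400 without underflow
x, i = prod
print(x, i)                  # float part stays normal; i carries the rest
```

Because only a few working variables in the recursion need this treatment, the bulk of the computation stays in ordinary doubles, which is how the ~10% overhead figure becomes plausible.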
Software For Tie-Point Registration Of SAR Data
NASA Technical Reports Server (NTRS)
Rignot, Eric; Dubois, Pascale; Okonek, Sharon; Van Zyl, Jacob; Burnette, Fred; Borgeaud, Maurice
1995-01-01
The SAR-REG software package registers synthetic-aperture-radar (SAR) image data to a common reference frame based on manual tie-pointing. Image data can be in binary, integer, floating-point, or AIRSAR compressed format. Other data sets, such as a map of soil characteristics, a vegetation map, a digital elevation map, or a SPOT multispectral image, can also be registered, as long as the user can generate a binary image for the tie-pointing routine and the data are available in one of the previously mentioned formats. Written in FORTRAN 77.
A quasi-spectral method for Cauchy problem of 2/D Laplace equation on an annulus
NASA Astrophysics Data System (ADS)
Saito, Katsuyoshi; Nakada, Manabu; Iijima, Kentaro; Onishi, Kazuei
2005-01-01
Real numbers are usually represented in the computer as hexadecimal floating-point numbers with a finite number of digits. Accordingly, numerical analysis often suffers from rounding errors. Rounding errors particularly deteriorate the precision of numerical solutions of inverse and ill-posed problems. We attempt to use multi-precision arithmetic to reduce the effects of rounding errors. The multi-precision arithmetic system is used by courtesy of Dr Fujiwara of Kyoto University. In this paper we try to show the effectiveness of multi-precision arithmetic with two typical examples: the Cauchy problem of the Laplace equation in two dimensions and the shape identification problem by inverse scattering in three dimensions. It is concluded from a few numerical examples that multi-precision arithmetic works well in resolving those numerical solutions when it is combined with a high-order finite difference method for the Cauchy problem and with the eigenfunction expansion method for the inverse scattering problem.
DFT algorithms for bit-serial GaAs array processor architectures
NASA Technical Reports Server (NTRS)
Mcmillan, Gary B.
1988-01-01
Systems and Processes Engineering Corporation (SPEC) has developed an innovative array processor architecture for computing Fourier transforms and other commonly used signal processing algorithms. This architecture is designed to extract the highest possible array performance from state-of-the-art GaAs technology. SPEC's architectural design includes a high performance RISC processor implemented in GaAs, along with a Floating Point Coprocessor and a unique Array Communications Coprocessor, also implemented in GaAs technology. Together, these data processors represent the latest in technology, both from an architectural and implementation viewpoint. SPEC has examined numerous algorithms and parallel processing architectures to determine the optimum array processor architecture. SPEC has developed an array processor architecture with integral communications ability to provide maximum node connectivity. The Array Communications Coprocessor embeds communications operations directly in the core of the processor architecture. A Floating Point Coprocessor architecture has been defined that utilizes Bit-Serial arithmetic units, operating at very high frequency, to perform floating point operations. These Bit-Serial devices reduce the device integration level and complexity to a level compatible with state-of-the-art GaAs device technology.
A sparse matrix algorithm on the Boolean vector machine
NASA Technical Reports Server (NTRS)
Wagner, Robert A.; Patrick, Merrell L.
1988-01-01
VLSI technology is being used to implement a prototype Boolean Vector Machine (BVM), which is a large network of very small processors with equally small memories that operate in SIMD mode; these use bit-serial arithmetic and communicate via a cube-connected-cycles network. The BVM's bit-serial arithmetic and the small memories of individual processors are noted to compromise the system's effectiveness in large numerical problem applications. Attention is presently given to the implementation of a basic matrix-vector iteration algorithm for sparse matrices on the BVM, in order to generate over 1 billion useful floating-point operations/sec for this iteration algorithm. The algorithm is expressed in a novel language designated 'BVM'.
A Comparative Study of Randomized Constraint Solvers for Random-Symbolic Testing
NASA Technical Reports Server (NTRS)
Takaki, Mitsuo; Cavalcanti, Diego; Gheyi, Rohit; Iyoda, Juliano; dAmorim, Marcelo; Prudencio, Ricardo
2009-01-01
The complexity of constraints is a major obstacle for constraint-based software verification. Automatic constraint solvers are fundamentally incomplete: input constraints often build on some undecidable theory or some theory the solver does not support. This paper proposes and evaluates several randomized solvers to address this issue. We compare the effectiveness of a symbolic solver (CVC3), a random solver, three hybrid solvers (i.e., mix of random and symbolic), and two heuristic search solvers. We evaluate the solvers on two benchmarks: one consisting of manually generated constraints and another generated with a concolic execution of 8 subjects. In addition to fully decidable constraints, the benchmarks include constraints with non-linear integer arithmetic, integer modulo and division, bitwise arithmetic, and floating-point arithmetic. As expected, symbolic solving (in particular, CVC3) subsumes the other solvers for the concolic execution of subjects that only generate decidable constraints. For the remaining subjects the solvers are complementary.
Redundant binary number representation for an inherently parallel arithmetic on optical computers.
De Biase, G A; Massini, A
1993-02-10
A simple redundant binary number representation suitable for digital-optical computers is presented. By means of this representation it is possible to build an arithmetic with carry-free parallel algebraic sums carried out in constant time and parallel multiplication in log N time. This redundant number representation naturally fits the 2's complement binary number system and permits the construction of inherently parallel arithmetic units that are used in various optical technologies. Some properties of this number representation and several examples of computation are presented.
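The paper's specific encoding is not reproduced in the abstract. As an illustration of what carry-free, constant-time addition in a redundant binary representation looks like, the sketch below uses the closely related carry-save form, in which a value is kept as a pair (s, c) whose sum is the value; treat it as an analogy, not the authors' scheme.

```python
# Carry-save addition: every bit position is an independent full adder,
# so no carry ever ripples; one ordinary add converts back at the end.
def cs_add(s: int, c: int, y: int):
    """One carry-save step: (s, c, y) -> (s2, c2) with s2 + c2 == s + c + y."""
    s2 = s ^ c ^ y                           # bitwise sum
    c2 = ((s & c) | (s & y) | (c & y)) << 1  # bitwise carry, merely shifted
    return s2, c2

# Accumulate several operands carry-free, converting only once at the end.
s, c = 0, 0
for y in [13, 200, 7, 91]:
    s, c = cs_add(s, c, y)
print(s + c)  # 311: one conventional add resolves all deferred carries
```

The constant per-step depth is what makes such representations attractive for the parallel optical arithmetic units the paper describes.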
CADNA: a library for estimating round-off error propagation
NASA Astrophysics Data System (ADS)
Jézéquel, Fabienne; Chesneaux, Jean-Marie
2008-06-01
The CADNA library enables one to estimate round-off error propagation using a probabilistic approach. With CADNA the numerical quality of any simulation program can be controlled. Furthermore by detecting all the instabilities which may occur at run time, a numerical debugging of the user code can be performed. CADNA provides new numerical types on which round-off errors can be estimated. Slight modifications are required to control a code with CADNA, mainly changes in variable declarations, input and output. This paper describes the features of the CADNA library and shows how to interpret the information it provides concerning round-off error propagation in a code.
Program summary
Program title: CADNA
Catalogue identifier: AEAT_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAT_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 53 420
No. of bytes in distributed program, including test data, etc.: 566 495
Distribution format: tar.gz
Programming language: Fortran
Computer: PC running LINUX with an i686 or an ia64 processor, UNIX workstations including SUN, IBM
Operating system: LINUX, UNIX
Classification: 4.14, 6.5, 20
Nature of problem: A simulation program which uses floating-point arithmetic generates round-off errors, due to the rounding performed at each assignment and at each arithmetic operation. Round-off error propagation may invalidate the result of a program. The CADNA library enables one to estimate round-off error propagation in any simulation program and to detect all numerical instabilities that may occur at run time.
Solution method: The CADNA library [1] implements Discrete Stochastic Arithmetic [2-4], which is based on a probabilistic model of round-off errors. The program is run several times with a random rounding mode, generating different results each time. From this set of results, CADNA estimates the number of exact significant digits in the result that would have been computed with standard floating-point arithmetic.
Restrictions: CADNA requires a Fortran 90 (or newer) compiler. In the program to be linked with the CADNA library, round-off errors on complex variables cannot be estimated. Furthermore, array functions such as product or sum must not be used. Only the arithmetic operators and the abs, min, max and sqrt functions can be used for arrays.
Running time: The version of a code which uses CADNA runs at least three times slower than its floating-point version. This cost depends on the computer architecture and can be higher if the detection of numerical instabilities is enabled. In this case, the cost may be related to the number of instabilities detected.
References:
[1] The CADNA library, URL address: http://www.lip6.fr/cadna.
[2] J.-M. Chesneaux, L'arithmétique Stochastique et le Logiciel CADNA, Habilitation à diriger des recherches, Université Pierre et Marie Curie, Paris, 1995.
[3] J. Vignes, A stochastic arithmetic for reliable scientific computation, Math. Comput. Simulation 35 (1993) 233-261.
[4] J. Vignes, Discrete stochastic arithmetic for validating results of numerical software, Numer. Algorithms 37 (2004) 377-390.
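The random-rounding idea behind Discrete Stochastic Arithmetic can be imitated informally: run the same computation several times with ulp-level random perturbations standing in for the random rounding mode, then look at how much the results scatter. This is an illustration of the principle only, not CADNA's actual algorithm or its statistical estimator.

```python
# Informal imitation of random rounding: perturb each operation by ~1 ulp
# and watch the scatter across repeated runs of a cancellation-prone sum.
import random

random.seed(0)  # reproducible demonstration

def noisy(x: float) -> float:
    """Perturb a value by about one unit in the last place."""
    return x * (1.0 + random.choice([-1, 1]) * random.random() * 2**-52)

def cancellation_prone(a: float, b: float) -> float:
    # (a + b) - a loses almost all of b's digits when |a| >> |b|.
    return noisy(noisy(a + b) - a)

samples = [cancellation_prone(1e16, 3.14159) for _ in range(10)]
mean = sum(samples) / len(samples)
spread = max(samples) - min(samples)
print(f"mean={mean}, spread={spread}")
# A spread comparable to the mean signals that few digits are reliable,
# which is the kind of diagnosis CADNA automates rigorously.
```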
NASA Astrophysics Data System (ADS)
Drabik, Timothy J.; Lee, Sing H.
1986-11-01
The intrinsic parallelism characteristics of easily realizable optical SIMD arrays prompt their present consideration in the implementation of highly structured algorithms for the numerical solution of multidimensional partial differential equations and the computation of fast numerical transforms. Attention is given to a system, comprising several spatial light modulators (SLMs), an optical read/write memory, and a functional block, which performs simple, space-invariant shifts on images with sufficient flexibility to implement the fastest known methods for partial differential equations as well as a wide variety of numerical transforms in two or more dimensions. Either fixed or floating-point arithmetic may be used. A performance projection of more than 1 billion floating point operations/sec using SLMs with 1000 x 1000-resolution and operating at 1-MHz frame rates is made.
Fixed-Rate Compressed Floating-Point Arrays.
Lindstrom, Peter
2014-12-01
Current compression schemes for floating-point data commonly take fixed-precision values and compress them to a variable-length bit stream, complicating memory management and random access. We present a fixed-rate, near-lossless compression scheme that maps small blocks of 4^d values in d dimensions to a fixed, user-specified number of bits per block, thereby allowing read and write random access to compressed floating-point data at block granularity. Our approach is inspired by fixed-rate texture compression methods widely adopted in graphics hardware, but has been tailored to the high dynamic range and precision demands of scientific applications. Our compressor is based on a new, lifted, orthogonal block transform and embedded coding, allowing each per-block bit stream to be truncated at any point if desired, thus facilitating bit rate selection using a single compression scheme. To avoid compression or decompression upon every data access, we employ a software write-back cache of uncompressed blocks. Our compressor has been designed with computational simplicity and speed in mind to allow for the possibility of a hardware implementation, and uses only a small number of fixed-point arithmetic operations per compressed value. We demonstrate the viability and benefits of lossy compression in several applications, including visualization, quantitative data analysis, and numerical simulation.
Simulation of an Air Cushion Vehicle
1977-03-01
Final Report for Period January 1975 - December 1976. Approved for public release; reproduction in whole or in part is permitted for any purpose of the United States Government.
Probability Quantization for Multiplication-Free Binary Arithmetic Coding
NASA Technical Reports Server (NTRS)
Cheung, K. -M.
1995-01-01
A method has been developed to improve on Witten's binary arithmetic coding procedure of tracking a high value and a low value. The new method approximates the probability of the less probable symbol, which improves the worst-case coding efficiency.
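The high/low interval tracking that this abstract refers to can be shown in miniature. The sketch below is a generic illustrative coder, not the improved procedure the report develops: floating point is used for brevity, which limits message length, whereas practical coders renormalize integer intervals (and the report's contribution is avoiding the multiplications entirely).

```python
# Toy binary arithmetic coder: the interval [low, high) shrinks with each
# symbol in proportion to its probability p0 of the symbol 0.
def encode(bits, p0):
    low, high = 0.0, 1.0
    for b in bits:
        mid = low + (high - low) * p0
        if b == 0:
            high = mid          # symbol 0 takes the lower sub-interval
        else:
            low = mid           # symbol 1 takes the upper sub-interval
    return (low + high) / 2     # any number inside the final interval

def decode(code, p0, n):
    low, high = 0.0, 1.0
    out = []
    for _ in range(n):
        mid = low + (high - low) * p0
        if code < mid:
            out.append(0); high = mid
        else:
            out.append(1); low = mid
    return out

msg = [0, 1, 0, 0, 1, 0, 0, 0]
code = encode(msg, p0=0.7)
print(decode(code, 0.7, len(msg)))  # recovers the original bits
```

The `(high - low) * p0` multiplication at every symbol is exactly the cost that multiplication-free schemes replace with approximations of the less probable symbol's probability.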
Translation of one high-level language to another: COBOL to ADA, an example
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hill, J.A.
1986-01-01
This dissertation discusses the difficulties encountered in, and explores possible solutions to, the task of automatically converting programs written in one HLL, COBOL, into programs written in another HLL, Ada, and still maintain readability. This paper presents at least one set of techniques and algorithms to solve many of the problems that were encountered. The differing view of records is solved by isolating those instances where it is a problem, then using the RENAMES option of Ada. Several solutions to doing the decimal-arithmetic translation are discussed. One method used is to emulate COBOL arithmetic in an arithmetic package. Another partial solution suggested is to convert the values to decimal-scaled integers and use modular arithmetic. Conversion to fixed-point type and floating-point type are the third and fourth methods. The work of another researcher, Bobby Othmer, is utilized to correct any unstructured code, to remap statements not directly translatable such as ALTER, and to pull together isolated code sections. Algorithms are then presented to convert this restructured COBOL code into Ada code with local variables, parameters, and packages. The input/output requirements are partially met by mapping them to a series of procedure calls that interface with Ada's standard input-output package. Several examples are given of hand translations of COBOL programs. In addition, a possibly new method is shown for measuring the readability of programs.
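The decimal-scaled-integer strategy mentioned above can be sketched briefly. Everything here is illustrative and not drawn from the dissertation: the helper names are invented, two implied decimal places stand in for a COBOL PIC 9(n)V99 picture, and only nonnegative values are handled for brevity.

```python
# Sketch: COBOL-style decimal values as integers scaled by 10**2,
# so addition and comparison stay exact (nonnegative values only).
SCALE = 100  # two implied decimal places, as in PIC 9(4)V99

def to_scaled(s: str) -> int:
    """Parse 'units.frac' into an integer number of hundredths."""
    units, _, frac = s.partition(".")
    return int(units) * SCALE + int((frac + "00")[:2])

def scaled_str(v: int) -> str:
    return f"{v // SCALE}.{v % SCALE:02d}"

a, b = to_scaled("19.99"), to_scaled("0.01")
print(scaled_str(a + b))  # "20.00", with no binary rounding involved
```

Scaling keeps the exactness COBOL programmers expect from decimal arithmetic, which plain binary floating point cannot guarantee.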
The MONGOOSE Rational Arithmetic Toolbox.
Le, Christopher; Chindelevitch, Leonid
2018-01-01
The modeling of metabolic networks has seen a rapid expansion following the complete sequencing of thousands of genomes. The constraint-based modeling framework has emerged as one of the most popular approaches to reconstructing and analyzing genome-scale metabolic models. Its main assumption is that of a quasi-steady-state, requiring that the production of each internal metabolite be balanced by its consumption. However, due to the multiscale nature of the models, the large number of reactions and metabolites, and the use of floating-point arithmetic for the stoichiometric coefficients, ensuring that this assumption holds can be challenging. The MONGOOSE toolbox addresses this problem by using rational arithmetic, thus ensuring that models are analyzed in a reproducible manner and consistently with modeling assumptions. In this chapter we present a protocol for the complete analysis of a metabolic network model using the MONGOOSE toolbox, via its newly developed GUI, and describe how it can be used as a model-checking platform both during and after the model construction process.
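The point about rational versus floating-point arithmetic can be made with a tiny example. The one-metabolite network below is invented for illustration and has nothing to do with MONGOOSE's internals; it only shows why a quasi-steady-state balance check is fragile in floats and exact with rationals.

```python
# Quasi-steady state requires S @ v == 0 for internal metabolites.
from fractions import Fraction

S_float = [[0.1, 0.2, -0.3]]           # one metabolite, three reactions
v_float = [1.0, 1.0, 1.0]
residual = sum(s * v for s, v in zip(S_float[0], v_float))
print(residual)                         # nonzero purely through rounding

S_exact = [[Fraction(1, 10), Fraction(2, 10), Fraction(-3, 10)]]
v_exact = [Fraction(1)] * 3
print(sum(s * v for s, v in zip(S_exact[0], v_exact)))  # exactly 0
```

With rational coefficients, "balanced or not" is a yes/no question with a reproducible answer, which is the property the toolbox relies on.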
Chindelevitch, Leonid; Trigg, Jason; Regev, Aviv; Berger, Bonnie
2014-01-01
Constraint-based models are currently the only methodology that allows the study of metabolism at the whole-genome scale. Flux balance analysis is commonly used to analyse constraint-based models. Curiously, the results of this analysis vary with the software being run, a situation that we show can be remedied by using exact rather than floating-point arithmetic. Here we introduce MONGOOSE, a toolbox for analysing the structure of constraint-based metabolic models in exact arithmetic. We apply MONGOOSE to the analysis of 98 existing metabolic network models and find that, surprisingly, the biomass reaction is blocked (unable to sustain non-zero flux) in nearly half of them. We propose a principled approach for unblocking these reactions and extend it to the problems of identifying essential and synthetic lethal reactions and minimal media. Our structural insights enable a systematic study of constraint-based metabolic models, yielding a deeper understanding of their possibilities and limitations. PMID:25291352
Towards constructing multi-bit binary adder based on Belousov-Zhabotinsky reaction
NASA Astrophysics Data System (ADS)
Zhang, Guo-Mao; Wong, Ieong; Chou, Meng-Ta; Zhao, Xin
2012-04-01
It has been proposed that the spatial excitable media can perform a wide range of computational operations, from image processing, to path planning, to logical and arithmetic computations. The realizations in the field of chemical logical and arithmetic computations are mainly concerned with single simple logical functions in experiments. In this study, based on Belousov-Zhabotinsky reaction, we performed simulations toward the realization of a more complex operation, the binary adder. Combining with some of the existing functional structures that have been verified experimentally, we designed a planar geometrical binary adder chemical device. Through numerical simulations, we first demonstrated that the device can implement the function of a single-bit full binary adder. Then we show that the binary adder units can be further extended in plane, and coupled together to realize a two-bit, or even multi-bit binary adder. The realization of chemical adders can guide the constructions of other sophisticated arithmetic functions, ultimately leading to the implementation of chemical computer and other intelligent systems.
Receptive fields selection for binary feature description.
Fan, Bin; Kong, Qingqun; Trzcinski, Tomasz; Wang, Zhiheng; Pan, Chunhong; Fua, Pascal
2014-06-01
Feature description for local image patches is widely used in computer vision. While the conventional way to design a local descriptor is based on expert experience and knowledge, learning-based methods for designing local descriptors have become more and more popular because of their good performance and data-driven property. This paper proposes a novel data-driven method for designing binary feature descriptors, which we call the receptive fields descriptor (RFD). Technically, RFD is constructed by thresholding responses of a set of receptive fields, which are selected from a large number of candidates according to their distinctiveness and correlations in a greedy way. Using two different kinds of receptive fields (namely rectangular pooling area and Gaussian pooling area) for selection, we obtain two binary descriptors, RFDR and RFDG, respectively. Image matching experiments on the well-known patch data set and Oxford data set demonstrate that RFD significantly outperforms the state-of-the-art binary descriptors, and is comparable with the best float-valued descriptors at a fraction of processing time. Finally, experiments on object recognition tasks confirm that both RFDR and RFDG successfully bridge the performance gap between binary descriptors and their floating-point competitors.
Kalinina, Elizabeth A
2013-08-01
The explicit Euler's method is known to be very easy and effective in implementation for many applications. This article extends results previously obtained for the systems of linear differential equations with constant coefficients to arbitrary systems of ordinary differential equations. Optimal (providing minimum total error) step size is calculated at each step of Euler's method. Several examples of solving stiff systems are included.
Floating-Point Units and Algorithms for field-programmable gate arrays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Underwood, Keith D.; Hemmert, K. Scott
2005-11-01
The software that we are attempting to copyright is a package of floating-point unit descriptions and example algorithm implementations using those units for use in FPGAs. The floating point units are best-in-class implementations of add, multiply, divide, and square root floating-point operations. The algorithm implementations are sample (not highly flexible) implementations of FFT, matrix multiply, matrix vector multiply, and dot product. Together, one could think of the collection as an implementation of parts of the BLAS library or something similar to the FFTW packages (without the flexibility) for FPGAs. Results from this work have been published multiple times, and we are working on a publication to discuss the techniques we use to implement the floating-point units. For some more background, FPGAs are programmable hardware. "Programs" for this hardware are typically created using a hardware description language (examples include Verilog, VHDL, and JHDL). Our floating-point unit descriptions are written in JHDL, which allows them to include placement constraints that make them highly optimized relative to some other implementations of floating-point units. Many vendors (Nallatech from the UK, SRC Computers in the US) have similar implementations, but our implementations seem to be somewhat higher performance. Our algorithm implementations are written in VHDL and models of the floating-point units are provided in VHDL as well. FPGA "programs" make multiple "calls" (hardware instantiations) to libraries of intellectual property (IP), such as the floating-point unit library described here. These programs are then compiled using a tool called a synthesizer (such as a tool from Synplicity, Inc.). The compiled file is a netlist of gates and flip-flops. This netlist is then mapped to a particular type of FPGA by a mapper and then a place-and-route tool.
These tools assign the gates in the netlist to specific locations on the specific type of FPGA chip used and construct the required routes between them. The result is a "bitstream" that is analogous to a compiled binary. The bitstream is loaded into the FPGA to create a specific hardware configuration.
FBC: a flat binary code scheme for fast Manhattan hash retrieval
NASA Astrophysics Data System (ADS)
Kong, Yan; Wu, Fuzhang; Gao, Lifa; Wu, Yanjun
2018-04-01
Hash coding is a widely used technique in approximate nearest neighbor (ANN) search, especially in document search and multimedia (such as image and video) retrieval. Based on the difference of distance measurement, hash methods are generally classified into two categories: Hamming hashing and Manhattan hashing. Benefiting from better neighborhood structure preservation, Manhattan hashing methods outperform earlier methods in search effectiveness. However, due to using decimal arithmetic operations instead of bit operations, Manhattan hashing becomes a more time-consuming process, which significantly decreases the whole search efficiency. To solve this problem, we present an intuitive hash scheme which uses Flat Binary Code (FBC) to encode the data points. As a result, the decimal arithmetic used in previous Manhattan hashing can be replaced by the more efficient XOR operator. The final experiments show that, with a reasonable growth in memory use, our FBC achieves an average speedup of more than 80% without any loss of search accuracy when compared to the state-of-the-art Manhattan hashing methods.
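Why a "flat" binary code lets XOR and popcount stand in for Manhattan distance can be seen with a unary (thermometer) encoding, whose Hamming distance between codewords equals the absolute difference of the encoded values. The paper's actual FBC layout may differ; the code below is a sketch of the underlying property only.

```python
# Thermometer code: Hamming distance of codewords == |a - b| per dimension,
# so a concatenated code turns Manhattan distance into XOR + popcount.
def thermometer(v: int) -> int:
    """Unary code: 0 -> 000, 1 -> 001, 2 -> 011, 3 -> 111."""
    return (1 << v) - 1

def manhattan(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def flat_code(vec, bits_per_dim):
    code = 0
    for v in vec:
        code = (code << bits_per_dim) | thermometer(v)
    return code

def hamming(x: int, y: int) -> int:
    return bin(x ^ y).count("1")    # XOR + popcount

a, b = [2, 0, 3, 1], [1, 2, 3, 0]
ca, cb = flat_code(a, 3), flat_code(b, 3)
print(manhattan(a, b), hamming(ca, cb))  # both equal 4
```

The trade-off is visible here too: each quantized dimension spends as many bits as it has levels, which is the "reasonable memory space growth" the abstract mentions.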
Float processing of high-temperature complex silicate glasses and float baths used for same
NASA Technical Reports Server (NTRS)
Cooper, Reid Franklin (Inventor); Cook, Glen Bennett (Inventor)
2000-01-01
A float glass process for production of high melting temperature glasses utilizes a binary metal alloy bath having the combined properties of a low melting point, low reactivity with oxygen, low vapor pressure, and minimal reactivity with the silicate glasses being formed. The metal alloy of the float medium is exothermic with a solvent metal that does not readily form an oxide. The vapor pressure of both components in the alloy is low enough to prevent deleterious vapor deposition, and there is minimal chemical and interdiffusive interaction of either component with silicate glasses under the float processing conditions. Alloys having the desired combination of properties include compositions in which gold, silver or copper is the solvent metal and silicon, germanium or tin is the solute, preferably in eutectic or near-eutectic compositions.
Fortran Program for X-Ray Photoelectron Spectroscopy Data Reformatting
NASA Technical Reports Server (NTRS)
Abel, Phillip B.
1989-01-01
A FORTRAN program has been written for use on an IBM PC/XT, AT, or compatible microcomputer (personal computer, PC) that converts a column of ASCII-format numbers into a binary-format file suitable for interactive analysis on a Digital Equipment Corporation (DEC) computer running the VGS-5000 Enhanced Data Processing (EDP) software package. The incompatible floating-point number representations of the two computers were compared, and a subroutine was created to store floating-point numbers on the IBM PC in a form that can be directly read by the DEC computer. Any file transfer protocol with provision for binary data can be used to transmit the resulting file from the PC to the DEC machine. The data file header required by the EDP programs for an X-ray photoelectron spectrum is also written to the file. The user is prompted for the relevant experimental parameters, which are then properly coded into the format used internally by all of the VGS-5000 series EDP packages.
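The record does not give the conversion details, but assuming the PC side uses IEEE 754 single precision and the DEC side uses VAX F_floating (a common pairing for this era), the conversion for normal numbers amounts to adjusting the exponent bias by 2 and swapping the 16-bit words. A hedged sketch, normal numbers only, with a hypothetical function name:

```python
import struct

def ieee_single_to_vax_f(x: float) -> bytes:
    """Sketch: IEEE 754 single -> VAX F_floating bytes (normal numbers only).

    VAX F interprets the significand as 0.1f with exponent bias 128, so the
    stored exponent field is the IEEE field plus 2; the resulting 32-bit
    pattern is then stored with its two 16-bit words swapped.
    """
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    if bits & 0x7FFFFFFF == 0:
        vax = 0                      # VAX true zero
    else:
        vax = bits + (2 << 23)       # exponent bias 127 -> 129
    swapped = ((vax & 0xFFFF) << 16) | (vax >> 16)
    return swapped.to_bytes(4, "big")

# IEEE 1.0 is 0x3F800000; VAX F 1.0 is 0x40800000, word-swapped on disk.
assert ieee_single_to_vax_f(1.0).hex() == "00004080"
```

Subnormals, NaNs, and values that overflow the VAX range need special handling that this sketch omits.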
Multinode reconfigurable pipeline computer
NASA Technical Reports Server (NTRS)
Nosenchuck, Daniel M. (Inventor); Littman, Michael G. (Inventor)
1989-01-01
A multinode parallel-processing computer is made up of a plurality of interconnected, large-capacity nodes, each including a reconfigurable pipeline of functional units such as Integer Arithmetic Logic Processors, Floating Point Arithmetic Processors, Special Purpose Processors, etc. The reconfigurable pipeline of each node is connected to a multiplane memory by a Memory-ALU switch NETwork (MASNET). The reconfigurable pipeline includes three basic substructures formed from functional units, which have been found sufficient to perform the bulk of all calculations. The MASNET controls the flow of signals from the memory planes to the reconfigurable pipeline and vice versa. The nodes are connectable together by an internode data router (hyperspace router) so as to form a hypercube configuration. The capability of the nodes to conditionally configure the pipeline at each tick of the clock, without requiring a pipeline flush, permits many powerful algorithms to be implemented directly.
A simplified Integer Cosine Transform and its application in image compression
NASA Technical Reports Server (NTRS)
Costa, M.; Tong, K.
1994-01-01
A simplified version of the integer cosine transform (ICT) is described. For practical reasons, the transform is considered jointly with the quantization of its coefficients. It differs from conventional ICT algorithms in that the combined factors for normalization and quantization are approximated by powers of two. In conventional algorithms, the normalization/quantization stage typically requires as many integer divisions as the number of transform coefficients. By restricting the factors to powers of two, these divisions can be performed by variable shifts in the binary representation of the coefficients, with speed and cost advantages to the hardware implementation of the algorithm. The error introduced by the factor approximations is compensated for in the inverse ICT operation, executed with floating point precision. The simplified ICT algorithm has potential applications in image-compression systems with disparate cost and speed requirements in the encoder and decoder ends. For example, in deep space image telemetry, the image processors on board the spacecraft could take advantage of the simplified, faster encoding operation, which would be adjusted on the ground, with high-precision arithmetic. A dual application is found in compressed video broadcasting. Here, a fast, high-performance processor at the transmitter would precompensate for the factor approximations in the inverse ICT operation, to be performed in real time, at a large number of low-cost receivers.
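The shift-for-division trick described above is easy to demonstrate. When the combined normalization/quantization factor is a power of two, the integer division becomes a right shift, and adding half the divisor first gives rounding instead of truncation (a sketch for nonnegative coefficients):

```python
def quantize_by_shift(coeff: int, shift: int) -> int:
    # For a nonnegative coefficient, dividing by 2**shift is a right shift;
    # adding half the divisor (1 << (shift - 1)) first rounds to nearest
    # instead of truncating toward zero.
    return (coeff + (1 << (shift - 1))) >> shift

# 77 / 16 = 4.8125 -> 5, computed with no integer division
assert quantize_by_shift(77, 4) == 5
# 1000 / 8 = 125 exactly
assert quantize_by_shift(1000, 3) == 125
```

In hardware a variable shift is a barrel shifter rather than a divider, which is where the speed and cost advantage comes from; the residual error of forcing factors to powers of two is, as the abstract notes, compensated in the floating-point inverse ICT.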
Arithmetic operations in optical computations using a modified trinary number system.
Datta, A K; Basuray, A; Mukhopadhyay, S
1989-05-01
A modified trinary number (MTN) system is proposed in which any binary number can be expressed with the help of signed trinary digits (1, 0, -1). Arithmetic operations can be performed in parallel, without the need for carry and borrow steps, when binary digits are converted to the MTN system. An optical implementation of the proposed scheme that uses spatial light modulators and color-coded light signals is described.
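The paper's exact MTN encoding is not reproduced here, but the standard non-adjacent form (NAF) illustrates the same idea: redundant signed digits {-1, 0, 1} break long carry chains. A sketch:

```python
def to_signed_digits(n: int) -> list:
    """Non-adjacent form of n: digits in {-1, 0, 1}, least significant first.

    Illustrative only -- the paper's MTN coding differs in detail, but both
    use signed trinary digits to enable carry/borrow-free arithmetic.
    """
    digits = []
    while n:
        if n & 1:
            d = 2 - (n % 4)   # -1 when n % 4 == 3, else +1
            n -= d
        else:
            d = 0
        digits.append(d)
        n //= 2
    return digits

# 7 = 8 - 1 -> [-1, 0, 0, 1] (LSB first), with no two adjacent nonzero digits
assert to_signed_digits(7) == [-1, 0, 0, 1]
assert sum(d * 2**i for i, d in enumerate(to_signed_digits(29))) == 29
```

Because each digit position can absorb a local +1 or -1, additions in such systems can proceed position-by-position in parallel, which is what makes the optical implementation attractive.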
Gomez-Pulido, Juan A; Cerrada-Barrios, Jose L; Trinidad-Amado, Sebastian; Lanza-Gutierrez, Jose M; Fernandez-Diaz, Ramon A; Crawford, Broderick; Soto, Ricardo
2016-08-31
Metaheuristics are widely used to solve large combinatorial optimization problems in bioinformatics because of the huge set of possible solutions. Two representative problems are gene selection for cancer classification and biclustering of gene expression data. In most cases, these metaheuristics, as well as other non-linear techniques, apply a fitness function to each possible solution in a size-limited population, and that step involves higher latencies than other parts of the algorithms; consequently, the execution time of the applications depends mainly on the execution time of the fitness function. In addition, fitness functions are usually formulated in floating-point arithmetic. Thus, a careful parallelization of these functions using reconfigurable hardware technology accelerates the computation, especially if they are applied in parallel to several solutions of the population. A fine-grained parallelization of two floating-point fitness functions of different complexities and features, involved in biclustering of gene expression data and gene selection for cancer classification, obtained higher speedups and lower-power computation than conventional microprocessors. The results show better performance, in computing time and power consumption, using reconfigurable hardware technology instead of conventional microprocessors, not only because of the parallelization of the arithmetic operations, but also thanks to the concurrent fitness evaluation of several individuals of the population in the metaheuristic. This is a good basis for building accelerated and low-energy solutions for intensive computing scenarios.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Demmel, James W.
This project addresses both communication-avoiding algorithms, and reproducible floating-point computation. Communication, i.e. moving data, either between levels of memory or processors over a network, is much more expensive per operation than arithmetic (measured in time or energy), so we seek algorithms that greatly reduce communication. We developed many new algorithms for both dense and sparse, and both direct and iterative linear algebra, attaining new communication lower bounds, and getting large speedups in many cases. We also extended this work in several ways: (1) We minimize writes separately from reads, since writes may be much more expensive than reads on emerging memory technologies, like Flash, sometimes doing asymptotically fewer writes than reads. (2) We extend the lower bounds and optimal algorithms to arbitrary algorithms that may be expressed as perfectly nested loops accessing arrays, where the array subscripts may be arbitrary affine functions of the loop indices (e.g. A(i), B(i, j+k, k+3*m-7, …), etc.). (3) We extend our communication-avoiding approach to some machine learning algorithms, such as support vector machines. This work has won a number of awards. We also address reproducible floating-point computation. We define reproducibility to mean getting bitwise identical results from multiple runs of the same program, perhaps with different hardware resources or other changes that should ideally not change the answer. Many users depend on reproducibility for debugging or correctness. However, dynamic scheduling of parallel computing resources, combined with nonassociativity of floating point addition, makes attaining reproducibility a challenge even for simple operations like summing a vector of numbers, or more complicated operations like the Basic Linear Algebra Subprograms (BLAS). We describe an algorithm that computes a reproducible sum of floating point numbers, independent of the order of summation.
The algorithm depends only on a subset of the IEEE Floating Point Standard 754-2008, uses just 6 words to represent a “reproducible accumulator,” and requires just one read-only pass over the data, or one reduction in parallel. New instructions based on this work are being considered for inclusion in the future IEEE 754-2018 floating-point standard, and new reproducible BLAS are being considered for the next version of the BLAS standard.
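The nonassociativity problem and its fix are easy to demonstrate. The sketch below shows order-dependent float summation, then an exact (and therefore trivially order-independent) accumulation using rational arithmetic; the paper's 6-word reproducible accumulator achieves the same bitwise reproducibility far more efficiently, in one pass with O(1) memory:

```python
from fractions import Fraction

def reproducible_sum(xs):
    # Exact accumulation is order-independent by construction. This is only
    # an illustration: the actual algorithm uses a compact pre-rounding
    # accumulator, not arbitrary-precision rationals.
    total = sum((Fraction(x) for x in xs), Fraction(0))
    return float(total)

data = [1e16, 1.0, -1e16, 1.0]
# Naive float summation depends on the order of the operands:
assert sum(data) != sum(reversed(data))
# The exact accumulator gives the same answer in any order:
assert reproducible_sum(data) == reproducible_sum(reversed(data)) == 2.0
```

In a dynamically scheduled parallel reduction the operand order changes from run to run, which is exactly why the naive sum above is not reproducible.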
Design of Arithmetic Circuits for Complex Binary Number System
NASA Astrophysics Data System (ADS)
Jamil, Tariq
2011-08-01
Complex numbers play an important role in various engineering applications. To represent these numbers efficiently for storage and manipulation, a (-1+j)-base complex binary number system (CBNS) has been proposed in the literature. In this paper, designs of nibble-size arithmetic circuits (adder, subtractor, multiplier, divider) are presented. These circuits can be incorporated within von Neumann and associative dataflow processors to achieve higher performance in both sequential and parallel computing paradigms.
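The appeal of base (-1+j) is that every Gaussian integer gets a single unsigned bit string, so one set of binary circuits handles both real and imaginary parts. A sketch of the standard encoding algorithm (repeated division by the base, digits in {0, 1}); the function name is illustrative:

```python
def to_cbns(a: int, b: int) -> str:
    """Encode the Gaussian integer a + bj in base (-1+j) with digits {0, 1}."""
    digits = []
    while a or b:
        d = (a + b) & 1          # digit making (a + bj) - d divisible by (-1+j)
        a -= d
        # divide the remainder exactly by (-1+j):
        a, b = (b - a) // 2, -(a + b) // 2
        digits.append(d)
    return "".join(map(str, reversed(digits))) or "0"

# j = (-1+j)^1 + 1, so j encodes as "11"
assert to_cbns(0, 1) == "11"
# 2 = (-1+j)^3 + (-1+j)^2
assert to_cbns(2, 0) == "1100"
```

Arithmetic on such strings then reduces to binary addition with the CBNS carry rules, which is what the nibble-size circuits in the paper implement.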
Fpga based L-band pulse doppler radar design and implementation
NASA Astrophysics Data System (ADS)
Savci, Kubilay
As its name implies, RADAR (Radio Detection and Ranging) is an electromagnetic sensor used for detecting and locating targets from their return signals. Radar systems propagate electromagnetic energy from the antenna, which is in part intercepted by an object. Objects reradiate a portion of the energy, which is captured by the radar receiver. The received signal is then processed for information extraction. Radar systems are widely used for surveillance, air security, navigation, and weather hazard detection, as well as remote sensing applications. In this work, an FPGA based L-band Pulse Doppler radar prototype, used for target detection, localization and velocity calculation, has been built, and a general-purpose Pulse Doppler radar processor has been developed. This radar is a ground based stationary monopulse radar, which transmits a short pulse with a certain pulse repetition frequency (PRF). Return signals from the target are processed and information about their location and velocity is extracted. Discrete components are used for the transmitter and receiver chain. The hardware solution is based on the Xilinx Virtex-6 ML605 FPGA board, responsible for the control of the radar system and the digital signal processing of the received signal, which involves Constant False Alarm Rate (CFAR) detection and Pulse Doppler processing. The algorithm is implemented in MATLAB/SIMULINK using the Xilinx System Generator for DSP tool. The field programmable gate array (FPGA) implementation of the radar system provides the flexibility of changing parameters such as the PRF and pulse length; therefore, it can be used with different radar configurations as well. A VHDL design has been developed for a 1 Gbit Ethernet connection to transfer the digitized return signal and detection results to a PC. An A-Scope application has been developed in the C# programming language to display time domain radar signals and detection results on the PC. Data are processed both in the FPGA chip and on the PC.
The FPGA uses fixed-point arithmetic operations because they are fast and consume less hardware than floating-point operations, reducing resource requirements. The software uses floating-point arithmetic operations, which ensure precision in processing at the expense of speed. The functionality of the radar system has been validated experimentally in the field with a moving car, and the submodules have been tested with synthetic data simulated in MATLAB.
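The fixed-point versus floating-point trade-off mentioned above can be made concrete with the common Q1.15 format (1 sign bit, 15 fractional bits), a typical choice for FPGA signal paths. A sketch, not tied to this particular radar's word lengths:

```python
def q15(x: float) -> int:
    # Quantize x into Q1.15: 15 fractional bits, representable range [-1, 1).
    return round(x * (1 << 15))

def q15_mul(a: int, b: int) -> int:
    # A 16x16 multiply yields 30 fractional bits; shift back down to 15,
    # adding 0x4000 first so the result is rounded rather than truncated.
    return (a * b + (1 << 14)) >> 15

x, y = q15(0.5), q15(-0.25)
assert q15_mul(x, y) == q15(-0.125)   # 0.5 * -0.25 = -0.125, exact in Q1.15
```

On an FPGA this maps to one hardware multiplier and a wire-level shift, whereas a floating-point multiply needs exponent alignment, normalization, and rounding logic, hence the resource savings the abstract describes.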
Floating liquid phase in sedimenting colloid-polymer mixtures.
Schmidt, Matthias; Dijkstra, Marjolein; Hansen, Jean-Pierre
2004-08-20
Density functional theory and computer simulation are used to investigate sedimentation equilibria of colloid-polymer mixtures within the Asakura-Oosawa-Vrij model of hard sphere colloids and ideal polymers. When the ratio of buoyant masses of the two species is comparable to the ratio of differences in density of the coexisting bulk (colloid) gas and liquid phases, a stable "floating liquid" phase is found, i.e., a thin layer of liquid sandwiched between upper and lower gas phases. The full phase diagram of the mixture under gravity shows coexistence of this floating liquid phase with a single gas phase or a phase involving liquid-gas equilibrium; the phase coexistence lines meet at a triple point. This scenario remains valid for general asymmetric binary mixtures undergoing bulk phase separation.
A CPU benchmark for protein crystallographic refinement.
Bourne, P E; Hendrickson, W A
1990-01-01
The CPU time required to complete a cycle of restrained least-squares refinement of a protein structure from X-ray crystallographic data using the FORTRAN codes PROTIN and PROLSQ is reported for 48 different processors, ranging from single-user workstations to supercomputers. Sequential, vector, VLIW, multiprocessor, and RISC hardware architectures are compared using both a small and a large protein structure. Representative compile times for each hardware type are also given, and the improvement in run time when coding for a specific hardware architecture is considered. The benchmarks involve scalar integer and vector floating-point arithmetic and are representative of the calculations performed in many scientific disciplines.
Optimized stereo matching in binocular three-dimensional measurement system using structured light.
Liu, Kun; Zhou, Changhe; Wei, Shengbin; Wang, Shaoqing; Fan, Xin; Ma, Jianyong
2014-09-10
In this paper, we develop an optimized stereo-matching method used in an active binocular three-dimensional measurement system. A traditional dense stereo-matching algorithm is time consuming due to a long search range and the high complexity of a similarity evaluation. We project a binary fringe pattern in combination with a series of N binary band limited patterns. In order to prune the search range, we execute an initial matching before exhaustive matching and evaluate a similarity measure using logical comparison instead of a complicated floating-point operation. Finally, an accurate point cloud can be obtained by triangulation methods and subpixel interpolation. The experiment results verify the computational efficiency and matching accuracy of the method.
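The two optimizations described, pruning the disparity search range and replacing the floating-point similarity measure with a logical comparison, can be sketched as follows (per-pixel codes packed from the N binary pattern observations; names are illustrative):

```python
def hamming(a: int, b: int) -> int:
    # Logical similarity measure: one XOR plus a popcount per candidate.
    return bin(a ^ b).count("1")

def best_disparity(left_code: int, right_codes, lo: int, hi: int) -> int:
    # Exhaustive matching only inside the pruned range [lo, hi) found by
    # the initial coarse match; no floating-point arithmetic is involved.
    return min(range(lo, hi), key=lambda d: hamming(left_code, right_codes[d]))

# Codes packed from N = 4 binary pattern observations at each candidate:
right = [0b1010, 0b1100, 0b1011, 0b0110]
assert best_disparity(0b1011, right, 0, 4) == 2   # exact match at index 2
```

Against a normalized cross-correlation over intensity windows, this replaces many floating-point multiplies per candidate with a single integer comparison, which is where the reported efficiency gain comes from.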
A new version of the CADNA library for estimating round-off error propagation in Fortran programs
NASA Astrophysics Data System (ADS)
Jézéquel, Fabienne; Chesneaux, Jean-Marie; Lamotte, Jean-Luc
2010-11-01
The CADNA library enables one to estimate, using a probabilistic approach, round-off error propagation in any simulation program. CADNA provides new numerical types, the so-called stochastic types, on which round-off errors can be estimated. Furthermore CADNA contains the definition of arithmetic and relational operators which are overloaded for stochastic variables and the definition of mathematical functions which can be used with stochastic arguments. On 64-bit processors, depending on the rounding mode chosen, the mathematical library associated with the GNU Fortran compiler may provide incorrect results or generate severe bugs. Therefore the CADNA library has been improved to enable the numerical validation of programs on 64-bit processors.
New version program summary
Program title: CADNA
Catalogue identifier: AEAT_v1_1
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAT_v1_1.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 28 488
No. of bytes in distributed program, including test data, etc.: 463 778
Distribution format: tar.gz
Programming language: Fortran (a C++ version of this program is available in the Library as AEGQ_v1_0)
Computer: PC running LINUX with an i686 or an ia64 processor; UNIX workstations including SUN, IBM
Operating system: LINUX, UNIX
Classification: 6.5
Catalogue identifier of previous version: AEAT_v1_0
Journal reference of previous version: Comput. Phys. Commun. 178 (2008) 933
Does the new version supersede the previous version?: Yes
Nature of problem: A simulation program which uses floating-point arithmetic generates round-off errors, due to the rounding performed at each assignment and at each arithmetic operation. Round-off error propagation may invalidate the result of a program.
The CADNA library enables one to estimate round-off error propagation in any simulation program and to detect all numerical instabilities that may occur at run time. Solution method: The CADNA library [1-3] implements Discrete Stochastic Arithmetic [4,5] which is based on a probabilistic model of round-off errors. The program is run several times with a random rounding mode generating different results each time. From this set of results, CADNA estimates the number of exact significant digits in the result that would have been computed with standard floating-point arithmetic. Reasons for new version: On 64-bit processors, the mathematical library associated with the GNU Fortran compiler may provide incorrect results or generate severe bugs with rounding towards -∞ and +∞, which the random rounding mode is based on. Therefore a particular definition of mathematical functions for stochastic arguments has been included in the CADNA library to enable its use with the GNU Fortran compiler on 64-bit processors. Summary of revisions: If CADNA is used on a 64-bit processor with the GNU Fortran compiler, mathematical functions are computed with rounding to the nearest, otherwise they are computed with the random rounding mode. It must be pointed out that the knowledge of the accuracy of the stochastic argument of a mathematical function is never lost. Restrictions: CADNA requires a Fortran 90 (or newer) compiler. In the program to be linked with the CADNA library, round-off errors on complex variables cannot be estimated. Furthermore array functions such as product or sum must not be used. Only the arithmetic operators and the abs, min, max and sqrt functions can be used for arrays. Additional comments: In the library archive, users are advised to read the INSTALL file first. The doc directory contains a user guide named ug.cadna.pdf which shows how to control the numerical accuracy of a program using CADNA, provides installation instructions and describes test runs. 
The source code, which is located in the src directory, consists of one assembly language file (cadna_rounding.s) and eighteen Fortran language files. cadna_rounding.s is a symbolic link to the assembly file corresponding to the processor and the Fortran compiler used. This assembly file contains routines which are frequently called in the CADNA Fortran files to change the rounding mode. The Fortran language files contain the definition of the stochastic types on which the control of accuracy can be performed, CADNA specific functions (for instance to enable or disable the detection of numerical instabilities), the definition of arithmetic and relational operators which are overloaded for stochastic variables and the definition of mathematical functions which can be used with stochastic arguments. The examples directory contains seven test runs which illustrate the use of the CADNA library and the benefits of Discrete Stochastic Arithmetic. Running time: The version of a code which uses CADNA runs at least three times slower than its floating-point version. This cost depends on the computer architecture and can be higher if the detection of numerical instabilities is enabled. In this case, the cost may be related to the number of instabilities detected.
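The random rounding idea behind Discrete Stochastic Arithmetic can be imitated in a few lines using Python's decimal contexts: run the computation several times with each operation rounded in a randomly chosen direction, and the spread of the results reveals how many digits can be trusted. This is only a sketch in the spirit of CADNA, not its implementation:

```python
import random
from decimal import Decimal, localcontext, ROUND_FLOOR, ROUND_CEILING

def random_rounding_sum(xs, prec=8):
    # Each addition runs at low working precision with a randomly chosen
    # rounding direction, mimicking CADNA's random rounding mode.
    total = Decimal(0)
    for x in xs:
        with localcontext() as ctx:
            ctx.prec = prec
            ctx.rounding = random.choice([ROUND_FLOOR, ROUND_CEILING])
            total = total + Decimal(x)
    return total

# Catastrophic cancellation: the exact answer is 3.14159265, but at
# 8 significant digits the large intermediate sum destroys it, and the
# run-to-run spread exposes the loss.
data = ["1e8", "3.14159265", "-1e8"]
samples = {float(random_rounding_sum(data)) for _ in range(8)}
assert samples <= {0.0, 10.0}
```

CADNA does this with directed hardware rounding modes on the native floating-point types and estimates the number of exact significant digits from the agreement between runs.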
20-GFLOPS QR processor on a Xilinx Virtex-E FPGA
NASA Astrophysics Data System (ADS)
Walke, Richard L.; Smith, Robert W. M.; Lightbody, Gaye
2000-11-01
Adaptive beamforming can play an important role in sensor array systems in countering directional interference. In high-sample-rate systems, such as radar and communications, the calculation of adaptive weights is a computationally demanding task that requires highly parallel solutions. For systems where low power consumption and volume are important, the only viable implementation has been an Application Specific Integrated Circuit (ASIC). However, the rapid advancement of Field Programmable Gate Array (FPGA) technology is enabling highly credible re-programmable solutions. In this paper we present the implementation of a scalable linear array processor for weight calculation using QR decomposition. We employ floating-point arithmetic with the mantissa size optimized for the target application to minimize component size, and implement the operators as relationally placed macros (RPMs) on Xilinx Virtex FPGAs to achieve predictable, dense layout and high-speed operation. We present results showing that 20 GFLOPS of sustained computation on a single XCV3200E-8 Virtex-E FPGA is possible. We also describe the parameterized implementation of the floating-point operators and QR processor, and the design methodology that enables us to rapidly generate complex FPGA implementations using the industry-standard hardware description language VHDL.
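QR decomposition maps well onto a linear or triangular processor array because Givens rotations touch only two rows at a time, so data can stream through the cells. A plain-Python sketch of the rotation-based factorization (the paper's systolic mapping and reduced-mantissa arithmetic are not shown):

```python
import math

def givens_qr(A):
    """Reduce A (list of rows) to upper-triangular R via Givens rotations.

    Each rotation eliminates one subdiagonal entry using only two rows --
    the locality that makes the algorithm suit systolic FPGA arrays.
    """
    m, n = len(A), len(A[0])
    R = [row[:] for row in A]
    for j in range(n):
        for i in range(m - 1, j, -1):        # zero column j from the bottom up
            a, b = R[i - 1][j], R[i][j]
            r = math.hypot(a, b)
            if r == 0.0:
                continue
            c, s = a / r, b / r
            for k in range(j, n):            # apply the rotation to both rows
                u, v = R[i - 1][k], R[i][k]
                R[i - 1][k] = c * u + s * v
                R[i][k] = -s * u + c * v
    return R

R = givens_qr([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
assert abs(R[0][0] - 35 ** 0.5) < 1e-9       # column norm lands on the diagonal
assert all(abs(R[i][j]) < 1e-9 for i in range(3) for j in range(2) if i > j)
```

In the adaptive beamforming setting, the same rotations applied to the snapshot matrix yield the triangular system from which the weights are back-substituted.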
NASA Astrophysics Data System (ADS)
Maiti, Anup Kumar; Nath Roy, Jitendra; Mukhopadhyay, Sourangshu
2007-08-01
In the field of optical computing and parallel information processing, several number systems have been used for different arithmetic and algebraic operations; an efficient conversion scheme from one number system to another is therefore very important. The modified trinary number (MTN) system has already taken on a significant role in carry- and borrow-free arithmetic operations. In this communication, we propose a tree-net architecture based all-optical conversion scheme from a binary number to its MTN form. An optical switch using nonlinear material (NLM) plays an important role in the scheme.
An Experimental Comparison of an Intrinsically Programed Text and a Narrative Text.
ERIC Educational Resources Information Center
Senter, R. J.; And Others
The study compared three methods of instruction in binary and octal arithmetic, i.e., (1) Norman Crowder's branched programed text, "The Arithmetic of Computers," (2) another version of this text modified so that subjects could not see the instructional material while answering "branching" questions, and (3) a narrative text…
Implementation of the Sun Position Calculation in the PDC-1 Control Microprocessor
NASA Technical Reports Server (NTRS)
Stallkamp, J. A.
1984-01-01
The several computational approaches to providing the local azimuth and elevation angles of the Sun as a function of local time, and the utilization of the most appropriate method in the PDC-1 microprocessor, are presented. The full algorithm, in its FORTRAN form, should be useful on computers of any kind or size. It was used in the PDC-1 unit to generate efficient code for the microprocessor with its floating-point arithmetic chip. The balance of the presentation consists of a brief discussion of the tracking requirements for PDC-1, the planetary motion equations from the first to the final version, and the local azimuth-elevation geometry.
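The report's own algorithm is not reproduced in this record; as an illustration of the kind of computation involved, here is the textbook approximation that gets solar elevation and azimuth from day of year, local solar time, and latitude (accuracy on the order of a degree, adequate to show the idea but not a tracking-grade formula):

```python
import math

def sun_alt_az(day_of_year: int, hour_solar: float, lat_deg: float):
    """Approximate solar elevation and azimuth (degrees, azimuth from north)."""
    # Standard declination approximation (Cooper's formula):
    decl = math.radians(23.44) * math.sin(2 * math.pi * (284 + day_of_year) / 365)
    H = math.radians(15.0 * (hour_solar - 12.0))        # hour angle
    lat = math.radians(lat_deg)
    sin_alt = (math.sin(lat) * math.sin(decl)
               + math.cos(lat) * math.cos(decl) * math.cos(H))
    alt = math.degrees(math.asin(sin_alt))
    az = math.degrees(math.atan2(
        -math.cos(decl) * math.sin(H),
        math.sin(decl) * math.cos(lat) - math.cos(decl) * math.sin(lat) * math.cos(H)))
    return alt, az % 360.0

# Near the equinox (day 81) at solar noon, the Sun is overhead at the equator
alt, _ = sun_alt_az(81, 12.0, 0.0)
assert abs(alt - 90.0) < 0.1
```

A tracking controller like PDC-1 would evaluate such angles on a fixed cycle and convert them to actuator commands.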
AN ADA LINEAR ALGEBRA PACKAGE MODELED AFTER HAL/S
NASA Technical Reports Server (NTRS)
Klumpp, A. R.
1994-01-01
This package extends the Ada programming language to include linear algebra capabilities similar to those of the HAL/S programming language. The package is designed for avionics applications such as Space Station flight software. In addition to the HAL/S built-in functions, the package incorporates the quaternion functions used in the Shuttle and Galileo projects, and routines from LINPACK that solve systems of equations involving general square matrices. Language conventions in this package follow those of HAL/S to the maximum extent practical and minimize the effort required for writing new avionics software and translating existing software into Ada. Valid numeric types in this package include scalar, vector, matrix, and quaternion declarations. (Quaternions are four-component vectors used in representing motion between two coordinate frames.) Single precision and double precision floating point arithmetic are available in addition to the standard double precision integer manipulation. Infix operators are used instead of function calls to define dot products, cross products, quaternion products, and mixed scalar-vector, scalar-matrix, and vector-matrix products. The package contains two generic programs: one for floating point, and one for integer. The actual component type is passed as a formal parameter to the generic linear algebra package. The procedures for solving systems of linear equations defined by general matrices include GEFA, GECO, GESL, and GIDI. The HAL/S functions include ABVAL, UNIT, TRACE, DET, INVERSE, TRANSPOSE, GET, PUT, FETCH, PLACE, and IDENTITY. This package is written in Ada (Version 1.2) for batch execution and is machine independent. The linear algebra software depends on nothing outside the Ada language except for a call to a square root function for floating point scalars (such as SQRT in the DEC VAX MATHLIB library). This program was developed in 1989, and is a copyrighted work with all copyright vested in NASA.
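The infix quaternion product mentioned above is the Hamilton product; operator overloading in any language reproduces the HAL/S-style infix notation. A sketch (the class and field names are illustrative, not the package's API):

```python
from dataclasses import dataclass

@dataclass
class Quat:
    w: float
    x: float
    y: float
    z: float

    def __mul__(self, q):
        # Hamilton product, exposed as an infix operator as in HAL/S
        return Quat(
            self.w*q.w - self.x*q.x - self.y*q.y - self.z*q.z,
            self.w*q.x + self.x*q.w + self.y*q.z - self.z*q.y,
            self.w*q.y - self.x*q.z + self.y*q.w + self.z*q.x,
            self.w*q.z + self.x*q.y - self.y*q.x + self.z*q.w)

i, j, k = Quat(0, 1, 0, 0), Quat(0, 0, 1, 0), Quat(0, 0, 0, 1)
assert i * j == k        # Hamilton's relation ij = k
```

Composing two frame rotations is then just `q_total = q2 * q1`, which is why quaternions are the representation of choice for relating coordinate frames in flight software.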
Chosen interval methods for solving linear interval systems with special type of matrix
NASA Astrophysics Data System (ADS)
Szyszka, Barbara
2013-10-01
The paper is devoted to chosen direct interval methods for solving linear interval systems with a special type of matrix. This kind of matrix, a band matrix with a parameter, is obtained from a finite-difference problem. Such linear systems occur while solving the one-dimensional wave equation (a Partial Differential Equation of hyperbolic type) by the central difference interval method of the second order. Interval methods are constructed so that the errors of the method are enclosed in the obtained results; therefore the presented linear interval systems contain elements that determine the errors of the difference method. Direct algorithms have been chosen for solving the linear systems because they introduce no additional method error. All calculations were performed in floating-point interval arithmetic.
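The essential property of floating-point interval arithmetic, that every operation rounds its lower bound down and its upper bound up so the exact result is always enclosed, can be sketched with `math.nextafter` (Python 3.9+) standing in for directed hardware rounding:

```python
import math

def iadd(a, b):
    # Outward rounding: widen each endpoint by one ulp so the exact sum
    # is guaranteed to lie inside the returned interval.
    return (math.nextafter(a[0] + b[0], -math.inf),
            math.nextafter(a[1] + b[1], math.inf))

def imul(a, b):
    # Products of endpoints bound the product of the intervals.
    ps = (a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1])
    return (math.nextafter(min(ps), -math.inf),
            math.nextafter(max(ps), math.inf))

lo, hi = iadd((0.1, 0.1), (0.2, 0.2))
assert lo <= 0.3 <= hi and lo < hi   # encloses the exact value 3/10
```

Widening by one ulp is cruder than switching the FPU rounding mode, but it shows why a direct interval solver yields rigorous enclosures: no error can escape the brackets.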
Crawford, D C; Bell, D S; Bamber, J C
1993-01-01
A systematic method to compensate for nonlinear amplification of individual ultrasound B-scanners has been investigated in order to optimise performance of an adaptive speckle reduction (ASR) filter for a wide range of clinical ultrasonic imaging equipment. Three potential methods have been investigated: (1) a method involving an appropriate selection of the speckle recognition feature was successful when the scanner signal processing executes simple logarithmic compressions; (2) an inverse transform (decompression) of the B-mode image was effective in correcting for the measured characteristics of image data compression when the algorithm was implemented in full floating point arithmetic; (3) characterising the behaviour of the statistical speckle recognition feature under conditions of speckle noise was found to be the method of choice for implementation of the adaptive speckle reduction algorithm in limited precision integer arithmetic. In this example, the statistical features of variance and mean were investigated. The third method may be implemented on commercially available fast image processing hardware and is also better suited for transfer into dedicated hardware to facilitate real-time adaptive speckle reduction. A systematic method is described for obtaining ASR calibration data from B-mode images of a speckle producing phantom.
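Method (2), decompressing the B-mode image before filtering, inverts whatever amplitude compression the scanner applied. Assuming a simple logarithmic compression g = a·ln(I) + b (the actual characteristic must be measured for each scanner, as the abstract notes), the inverse transform is:

```python
import math

def decompress(gray: float, a: float, b: float) -> float:
    # Invert an assumed log compression g = a * ln(I) + b, restoring the
    # echo intensity so the speckle statistics match the filter's model.
    return math.exp((gray - b) / a)

# Round trip: compressing then decompressing recovers the intensity.
a, b = 25.0, 10.0
I = 123.4
g = a * math.log(I) + b
assert abs(decompress(g, a, b) - I) < 1e-9
```

This is the full floating-point variant; the abstract's third method avoids the exponential entirely by recharacterizing the speckle statistics in the compressed domain, which suits limited-precision integer hardware.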
CASPER: A GENERALIZED PROGRAM FOR PLOTTING AND SCALING DATA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lietzke, M.P.; Smith, R.E.
A Fortran subroutine was written to scale floating-point data and generate a magnetic tape to plot it on the Calcomp 570 digital plotter. The routine permits a great deal of flexibility, and may be used with any type of FORTRAN or FAP calling program. A simple calling program was also written to permit the user to read in data from cards and plot it without any additional programming. Both the Fortran and binary decks are available.
Failure detection in high-performance clusters and computers using chaotic map computations
Rao, Nageswara S.
2015-09-01
A programmable media includes a processing unit capable of independent operation in a machine that is capable of executing 10^18 floating point operations per second. The processing unit is in communication with a memory element and an interconnect that couples computing nodes. The programmable media includes a logical unit configured to execute arithmetic functions, comparative functions, and/or logical functions. The processing unit is configured to detect computing component failures, memory element failures and/or interconnect failures by executing programming threads that generate one or more chaotic map trajectories. The central processing unit or graphical processing unit is configured to detect a computing component failure, memory element failure and/or an interconnect failure through an automated comparison of signal trajectories generated by the chaotic maps.
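The principle behind chaotic-map failure detection is that chaos amplifies any arithmetic discrepancy: two healthy units iterating the same map from the same seed produce bitwise-identical trajectories, while even a minute fault diverges rapidly. A toy sketch with the logistic map (the patent's specific maps and thresholds are not given here):

```python
def trajectory(x0, n, mul=lambda a, b: a * b):
    # Logistic map x' = r x (1 - x) in its chaotic regime (r = 3.9).
    r, x, out = 3.9, x0, []
    for _ in range(n):
        x = mul(r, x) * (1.0 - x)
        out.append(x)
    return out

good = trajectory(0.51, 50)
# Model a faulty floating-point unit by a tiny relative error per multiply:
bad = trajectory(0.51, 50, mul=lambda a, b: a * b * (1.0 + 1e-15))
assert good[0] != bad[0] and good != bad   # divergence flags the fault
```

Comparing trajectories across nodes (or against a reference) thus turns a one-ulp hardware fault into an easily detected signal.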
Digital control system for space structure dampers
NASA Technical Reports Server (NTRS)
Haviland, J. K.
1985-01-01
A digital controller was developed using an SKD-51 System Design Kit, which incorporates an 8031 microcontroller. The necessary interfaces were installed in the wire wrap area of the SKD-51, and a pulse width modulator was developed to drive the coil of the actuator. Also, control equations were developed, using floating-point arithmetic. The design of the digital control system is emphasized, and it is shown that, provided certain rules are followed, an adequate design can be achieved. It is recommended that the so-called w-plane design method be used, and that the time elapsed before output of the updated coil-force signal be kept as small as possible. However, the cycle time for the controller should be watched carefully, because very small values for this time can lead to digital noise.
JANUS: a bit-wise reversible integrator for N-body dynamics
NASA Astrophysics Data System (ADS)
Rein, Hanno; Tamayo, Daniel
2018-01-01
Hamiltonian systems such as the gravitational N-body problem have time-reversal symmetry. However, all numerical N-body integration schemes, including symplectic ones, respect this property only approximately. In this paper, we present the new N-body integrator JANUS, for which we achieve exact time-reversal symmetry by combining integer and floating point arithmetic. JANUS is explicit, formally symplectic and satisfies Liouville's theorem exactly. Its order is even and can be adjusted between two and ten. We discuss the implementation of JANUS and present tests of its accuracy and speed by performing and analysing long-term integrations of the Solar system. We show that JANUS is fast and accurate enough to tackle a broad class of dynamical problems. We also discuss the practical and philosophical implications of running exactly time-reversible simulations.
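The core idea, that integer state plus a deterministic force rule makes the update exactly invertible, can be shown with a one-dimensional toy (JANUS itself uses higher-order schemes on full N-body systems; this sketch only illustrates bit-wise reversibility):

```python
def step(prev, cur, accel):
    # Stoermer-Verlet two-step form on integers:
    #   x_{n+1} = 2 x_n - x_{n-1} + a(x_n)
    # With integer arithmetic there is no round-off, so stepping the
    # reversed pair retraces every state exactly.
    return cur, 2 * cur - prev + accel(cur)

def accel(x):
    # toy integer restoring force (a coarsely discretized harmonic oscillator)
    return -(x // 64)

p, c = 1000, 1005
for _ in range(1000):
    p, c = step(p, c, accel)
# Time-reverse: swap the last two states and apply the same map.
rp, rc = c, p
for _ in range(1000):
    rp, rc = step(rp, rc, accel)
assert (rc, rp) == (1000, 1005)   # initial state recovered bit for bit
```

With floating-point state the same scheme would be reversible only up to round-off, which is exactly the approximation JANUS removes.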
Data reduction programs for a laser radar system
NASA Technical Reports Server (NTRS)
Badavi, F. F.; Copeland, G. E.
1984-01-01
The listing and description of the software routines used to analyze the analog data obtained from the LIDAR system are given. All routines are written in FORTRAN IV on an HP-1000/F minicomputer, which serves as the heart of the data acquisition system for the LIDAR program. This particular system has 128 kilobytes of high-speed memory and is equipped with a Vector Instruction Set (VIS) firmware package, which is used in all the routines to speed the execution of long loops. The system handles floating-point arithmetic in hardware to enhance execution speed. This computer is a 2177 C/F series version of the HP-1000 RTE-IVB data acquisition computer system, designed for real-time data capture/analysis in a disk/tape mass-storage environment.
Reproducibility of neuroimaging analyses across operating systems
Glatard, Tristan; Lewis, Lindsay B.; Ferreira da Silva, Rafael; Adalat, Reza; Beck, Natacha; Lepage, Claude; Rioux, Pierre; Rousseau, Marc-Etienne; Sherif, Tarek; Deelman, Ewa; Khalili-Mahani, Najmeh; Evans, Alan C.
2015-01-01
Neuroimaging pipelines are known to generate different results depending on the computing platform where they are compiled and executed. We quantify these differences for brain tissue classification, fMRI analysis, and cortical thickness (CT) extraction, using three of the main neuroimaging packages (FSL, Freesurfer and CIVET) and different versions of GNU/Linux. We also identify some causes of these differences using library and system call interception. We find that these packages use mathematical functions based on single-precision floating-point arithmetic whose implementations in operating systems continue to evolve. While these differences have little or no impact on simple analysis pipelines such as brain extraction and cortical tissue classification, their accumulation creates important differences in longer pipelines such as subcortical tissue classification, fMRI analysis, and cortical thickness extraction. With FSL, most Dice coefficients between subcortical classifications obtained on different operating systems remain above 0.9, but values as low as 0.59 are observed. Independent component analyses (ICA) of fMRI data differ between operating systems in one third of the tested subjects, due to differences in motion correction. With Freesurfer and CIVET, in some brain regions we find an effect of build or operating system on cortical thickness. A first step to correct these reproducibility issues would be to use more precise representations of floating-point numbers in the critical sections of the pipelines. The numerical stability of pipelines should also be reviewed. PMID:25964757
CADNA_C: A version of CADNA for use with C or C++ programs
NASA Astrophysics Data System (ADS)
Lamotte, Jean-Luc; Chesneaux, Jean-Marie; Jézéquel, Fabienne
2010-11-01
The CADNA library enables one to estimate round-off error propagation using a probabilistic approach. The CADNA_C version enables this estimation in C or C++ programs, while the previous version had been developed for Fortran programs. The CADNA_C version has the same features as the previous one: with CADNA the numerical quality of any simulation program can be controlled. Furthermore, by detecting all the instabilities which may occur at run time, a numerical debugging of the user code can be performed. CADNA provides new numerical types on which round-off errors can be estimated. Slight modifications are required to control a code with CADNA, mainly changes in variable declarations, input and output.
New version program summary
Program title: CADNA_C
Catalogue identifier: AEGQ_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGQ_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 60 075
No. of bytes in distributed program, including test data, etc.: 710 781
Distribution format: tar.gz
Programming language: C++
Computer: PC running LINUX with an i686 or an ia64 processor, UNIX workstations including SUN, IBM
Operating system: LINUX, UNIX
Classification: 6.5
Catalogue identifier of previous version: AEAT_v1_0
Journal reference of previous version: Comput. Phys. Comm. 178 (2008) 933
Does the new version supersede the previous version?: No
Nature of problem: A simulation program which uses floating-point arithmetic generates round-off errors, due to the rounding performed at each assignment and at each arithmetic operation. Round-off error propagation may invalidate the result of a program. The CADNA library enables one to estimate round-off error propagation in any simulation program and to detect all numerical instabilities that may occur at run time.
Solution method: The CADNA library [1-3] implements Discrete Stochastic Arithmetic [4,5], which is based on a probabilistic model of round-off errors. The program is run several times with a random rounding mode, generating different results each time. From this set of results, CADNA estimates the number of exact significant digits in the result that would have been computed with standard floating-point arithmetic.
Reasons for new version: The previous version (AEAT_v1_0) enables the estimation of round-off error propagation in Fortran programs [2]. The new version has been developed to enable this estimation in C or C++ programs.
Summary of revisions: The CADNA_C source code consists of one assembly language file (cadna_rounding.s) and twenty-three C++ language files (including three header files). cadna_rounding.s is a symbolic link to the assembly file corresponding to the processor and the C++ compiler used. This assembly file contains routines which are frequently called in the CADNA_C C++ files to change the rounding mode. The C++ language files contain the definition of the stochastic types on which the control of accuracy can be performed, CADNA_C specific functions (for instance to enable or disable the detection of numerical instabilities), the definition of arithmetic and relational operators which are overloaded for stochastic variables, and the definition of mathematical functions which can be used with stochastic arguments. As a remark, on 64-bit processors, the mathematical library associated with the GNU C++ compiler may provide incorrect results or generate severe bugs with rounding towards -∞ and +∞, on which the random rounding mode is based. Therefore, if CADNA_C is used on a 64-bit processor with the GNU C++ compiler, mathematical functions are computed with rounding to the nearest; otherwise they are computed with the random rounding mode. It must be pointed out that the knowledge of the accuracy of the argument of a mathematical function is never lost.
Additional comments: In the library archive, users are advised to read the INSTALL file first. The doc directory contains a user guide named ug.cadna.pdf and a reference guide named ref_cadna.pdf. The user guide shows how to control the numerical accuracy of a program using CADNA, provides installation instructions and describes test runs. The reference guide briefly describes each function of the library. The source code (which consists of C++ and assembly files) is located in the src directory. The examples directory contains seven test runs which illustrate the use of the CADNA library and the benefits of Discrete Stochastic Arithmetic.
Running time: The version of a code which uses CADNA runs at least three times slower than its floating-point version. This cost depends on the computer architecture and can be higher if the detection of numerical instabilities is enabled. In this case, the cost may be related to the number of instabilities detected.
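The random-rounding idea behind Discrete Stochastic Arithmetic can be imitated in a few lines. This is a toy illustration, not the CADNA implementation: instead of switching the hardware rounding mode, each intermediate result is nudged up or down by one unit in the last place:

```python
import math
import random
import statistics

random.seed(0)  # fixed seed so the sketch is repeatable

def rnd(x):
    """Randomly round x up or down by one ulp (CESTAC-style random rounding)."""
    return math.nextafter(x, math.inf if random.random() < 0.5 else -math.inf)

def cancellation_prone():
    # (1e16 + 1) - 1e16 suffers catastrophic cancellation in double precision
    s = rnd(1e16 + 1.0)
    return rnd(s - 1e16)

# Run the computation several times; the spread of the sampled results reveals
# how many significant digits of the result can be trusted.
samples = [cancellation_prone() for _ in range(20)]
mean = statistics.mean(samples)
spread = statistics.pstdev(samples)
exact_digits = math.log10(abs(mean) / spread) if spread and mean else 0.0
```

A spread comparable to the mean signals that the result has essentially no exact significant digits, which is the kind of numerical instability CADNA is designed to report.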
Real time pipelined system for forming the sum of products in the processing of video data
NASA Technical Reports Server (NTRS)
Wilcox, Brian (Inventor)
1988-01-01
A 3-by-3 convolver utilizes 9 binary arithmetic units connected in cascade for multiplying 12-bit binary pixel values P sub i, which are positive or two's complement binary numbers, by 5-bit magnitude (plus sign) weights W sub i which may be positive or negative. The weights are stored in registers including the sign bits. For a negative weight, the one's complement of the pixel value to be multiplied is formed at each unit by a bank of 17 exclusive-OR gates G sub i under control of the sign of the corresponding weight W sub i, and a correction is made by adding the sum of the absolute values of all the negative weights for each 3-by-3 kernel. Since this correction value remains constant as long as the weights are constant, it can be precomputed and stored in a register as a value to be added to the product PW of the first arithmetic unit.
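The correction scheme can be checked in a few lines of plain integer arithmetic (a sketch of the idea, not the 12-bit/5-bit hardware): for a negative weight the unit effectively adds (~P)·|W| = (-P-1)·|W|, so adding the sum of |W| over the negative weights restores the exact sum of products.

```python
def weighted_sum(pixels, weights):
    """Sum of products using only |W| multiplies, with a one's complement for
    negative weights and a precomputed correction, as in the patented scheme."""
    acc = 0
    for p, w in zip(pixels, weights):
        if w < 0:
            acc += (~p) * (-w)      # one's complement: ~p == -p - 1
        else:
            acc += p * w
    # constant for a fixed kernel: sum of |W| over the negative weights
    correction = sum(-w for w in weights if w < 0)
    return acc + correction

pixels = [10, -3, 7, 0, 2047, -2048, 5, 1, -1]    # values in 12-bit signed range
weights = [2, -5, -1, 7, 3, -15, 0, 15, -8]       # 5-bit magnitudes with signs
result = weighted_sum(pixels, weights)
```

Because the correction depends only on the weights, it costs nothing per pixel once the kernel is fixed, which is exactly why the hardware precomputes it.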
A comparison of companion matrix methods to find roots of a trigonometric polynomial
NASA Astrophysics Data System (ADS)
Boyd, John P.
2013-08-01
A trigonometric polynomial is a truncated Fourier series of the form f_N(t) ≡ ∑_{j=0}^{N} a_j cos(jt) + ∑_{j=1}^{N} b_j sin(jt). It has been previously shown by the author that zeros of such a polynomial can be computed as the eigenvalues of a companion matrix with elements which are complex-valued combinations of the Fourier coefficients, the "CCM" method. However, previous work provided no examples, so one goal of this new work is to experimentally test the CCM method. A second goal is to introduce a new alternative, the elimination/Chebyshev algorithm, and experimentally compare it with the CCM scheme. The elimination/Chebyshev matrix (ECM) algorithm yields a companion matrix with real-valued elements, albeit at the price of usefulness only for real roots. The new elimination scheme first converts the trigonometric rootfinding problem to a pair of polynomial equations in the variables (c, s) where c ≡ cos(t) and s ≡ sin(t). The elimination method next reduces the system to a single univariate polynomial P(c). We show that this same polynomial is the resultant of the system and is also a generator of the Groebner basis with lexicographic ordering for the system. Both methods give very high numerical accuracy for real-valued roots, typically at least 11 decimal places in Matlab/IEEE 754 16-digit floating-point arithmetic. The CCM algorithm is typically one or two decimal places more accurate, though these differences disappear if the roots are "Newton-polished" by a single Newton's iteration. The complex-valued matrix is accurate for complex-valued roots, too, though accuracy decreases with the magnitude of the imaginary part of the root. The cost of both methods scales as O(N³) floating-point operations.
In spite of intimate connections of the elimination/Chebyshev scheme to two well-established technologies for solving systems of equations, resultants and Groebner bases, and the advantages of using only real-valued arithmetic to obtain a companion matrix with real-valued elements, the ECM algorithm is noticeably inferior to the complex-valued companion matrix in simplicity, ease of programming, and accuracy.
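The CCM idea, zeros of f_N(t) from the eigenvalues of a matrix built from the Fourier coefficients, is easy to prototype by passing through the associated polynomial in z = e^{it}. In this sketch np.roots stands in for an explicit companion-matrix eigensolve, and a unit-circle filter keeps only the real roots t:

```python
import numpy as np

def trig_roots(a, b):
    """Real zeros in [-pi, pi) of f(t) = sum_j a[j] cos(jt) + sum_j b[j-1] sin(jt).
    a = [a0, ..., aN], b = [b1, ..., bN]."""
    N = len(a) - 1
    c = np.zeros(2 * N + 1, dtype=complex)
    c[N] = a[0]
    for j in range(1, N + 1):
        c[N + j] = (a[j] - 1j * b[j - 1]) / 2   # coefficient of z**(N+j)
        c[N - j] = (a[j] + 1j * b[j - 1]) / 2   # coefficient of z**(N-j)
    z = np.roots(c[::-1])                        # roots of the degree-2N polynomial
    on_circle = np.isclose(np.abs(z), 1.0, atol=1e-8)
    return np.sort(np.angle(z[on_circle]))       # t = arg(z) for |z| = 1

# f(t) = cos(t) vanishes at t = +/- pi/2
roots = trig_roots([0.0, 1.0], [0.0])
```

Complex roots of f_N correspond to roots z off the unit circle, which is where the paper notes the accuracy of the complex-valued matrix gradually degrades.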
Design of barrier bucket kicker control system
NASA Astrophysics Data System (ADS)
Ni, Fa-Fu; Wang, Yan-Yu; Yin, Jun; Zhou, De-Tai; Shen, Guo-Dong; Zheng, Yang-De; Zhang, Jian-Chuan; Yin, Jia; Bai, Xiao; Ma, Xiao-Li
2018-05-01
The Heavy-Ion Research Facility in Lanzhou (HIRFL) contains two synchrotrons: the main cooler storage ring (CSRm) and the experimental cooler storage ring (CSRe). Beams are extracted from CSRm and injected into CSRe. To apply the Barrier Bucket (BB) method to CSRe beam accumulation, a new BB-technology-based kicker control system was designed and implemented. The controller of the system is implemented using an Advanced Reduced Instruction Set Computer (RISC) Machine (ARM) chip and a field-programmable gate array (FPGA) chip. Within the architecture, the ARM is responsible for data presetting and floating-point arithmetic processing. The FPGA computes the RF phase point of the two rings and offers more accurate control of the time delay. An online preliminary experiment on HIRFL was also designed to verify the functionalities of the control system. The result shows that the reference trigger point of two different sinusoidal RF signals for an arbitrary phase point was acquired with a matched phase error below 1° (approximately 2.1 ns), and a step delay time better than 2 ns was realized.
Stabilizing canonical-ensemble calculations in the auxiliary-field Monte Carlo method
NASA Astrophysics Data System (ADS)
Gilbreth, C. N.; Alhassid, Y.
2015-03-01
Quantum Monte Carlo methods are powerful techniques for studying strongly interacting Fermi systems. However, implementing these methods on computers with finite-precision arithmetic requires careful attention to numerical stability. In the auxiliary-field Monte Carlo (AFMC) method, low-temperature or large-model-space calculations require numerically stabilized matrix multiplication. When adapting methods used in the grand-canonical ensemble to the canonical ensemble of fixed particle number, the numerical stabilization increases the number of required floating-point operations for computing observables by a factor of the size of the single-particle model space, and thus can greatly limit the systems that can be studied. We describe an improved method for stabilizing canonical-ensemble calculations in AFMC that exhibits better scaling, and present numerical tests that demonstrate the accuracy and improved performance of the method.
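The kind of stabilized matrix multiplication referred to above can be sketched with a QR-based factorization. This is a generic illustration of the standard technique, not the authors' improved canonical-ensemble method: the chain product is maintained in the factored form Q·D·T so that widely different numerical scales stay isolated in the diagonal D.

```python
import numpy as np

def stabilized_chain_product(matrices):
    """Accumulate M_k @ ... @ M_1 as Q @ D @ T, re-factoring with QR at each
    step: Q orthogonal, D diagonal (the scales), T upper triangular with unit
    diagonal. Standard stabilization used in auxiliary-field Monte Carlo."""
    Q, R = np.linalg.qr(matrices[0])
    D = np.diag(np.diag(R))
    T = np.linalg.solve(D, R)              # unit-diagonal upper triangular
    for M in matrices[1:]:
        Q, R = np.linalg.qr((M @ Q) @ D)   # push the accumulated scales through
        D = np.diag(np.diag(R))
        T = np.linalg.solve(D, R) @ T
    return Q, D, T

rng = np.random.default_rng(1)
mats = [rng.standard_normal((4, 4)) for _ in range(8)]  # illustrative matrices
Q, D, T = stabilized_chain_product(mats)
```

For well-conditioned chains the factored form simply reproduces the naive product; its value appears when the scales in D span many orders of magnitude, as in low-temperature AFMC.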
ASIC For Complex Fixed-Point Arithmetic
NASA Technical Reports Server (NTRS)
Petilli, Stephen G.; Grimm, Michael J.; Olson, Erlend M.
1995-01-01
Application-specific integrated circuit (ASIC) performs 24-bit, fixed-point arithmetic operations on arrays of complex-valued input data. High-performance, wide-band arithmetic logic unit (ALU) designed for use in computing fast Fourier transforms (FFTs) and for performing digital filtering functions. Other applications include general computations involved in analysis of spectra and digital signal processing.
Extreme D'Hondt and round-off effects in voting computations
NASA Astrophysics Data System (ADS)
Konstantinov, M. M.; Pelova, G. B.
2015-11-01
The D'Hondt (or Jefferson) method and the Hare-Niemeyer (or Hamilton) method are widely used worldwide for seat allocation in proportional systems. Everything seems to be well known in this area; however, this is not the case. For example, the D'Hondt method can violate the quota rule from above, but this effect has not been analyzed as a function of the number of parties and/or the threshold used. Also, allocation methods are often implemented automatically as computer codes in machine arithmetic, in the belief that following the IEEE standards for double-precision binary arithmetic guarantees correct results. Unfortunately this may fail not only in double-precision arithmetic (which usually provides 15-16 true decimal digits) but for any relative precision of the underlying binary machine arithmetic. This paper deals with the following new issues: finding conditions (the threshold in particular) under which the D'Hondt seat allocation maximally violates the quota rule, and analyzing the possible influence of rounding errors in the automatic implementation of the Hare-Niemeyer method in machine arithmetic. Concerning the first issue, it is known that the maximal deviation of the D'Hondt allocation from the upper quota for the Bulgarian proportional system (240 MPs and a 4% threshold) is 5; this fact was established in 1991. A classical treatment of voting issues is the monograph [1], while electoral problems specific to Bulgaria have been treated in [2, 4]. The effect of the threshold on extreme seat allocations is also analyzed in [3]. Finally, we would like to stress that voting theory may sometimes be mathematically trivial but always has great political impact, which is a strong motivation for further investigations in this area.
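Both allocation rules are tiny to code exactly (a sketch with made-up vote counts): D'Hondt repeatedly awards a seat to the party with the largest quotient v/(s+1), while Hare-Niemeyer gives each party the integer part of its quota and distributes the remaining seats by largest remainder. Using exact rational arithmetic sidesteps precisely the machine-arithmetic rounding the paper warns about.

```python
from fractions import Fraction

def dhondt(votes, seats):
    """Jefferson/D'Hondt: award each seat to the largest quotient votes/(s+1)."""
    alloc = dict.fromkeys(votes, 0)
    for _ in range(seats):
        best = max(votes, key=lambda p: Fraction(votes[p], alloc[p] + 1))
        alloc[best] += 1
    return alloc

def hare_niemeyer(votes, seats):
    """Hamilton: integer parts of the exact quotas, then largest remainders."""
    total = sum(votes.values())
    quotas = {p: Fraction(seats * v, total) for p, v in votes.items()}
    alloc = {p: int(q) for p, q in quotas.items()}
    leftovers = seats - sum(alloc.values())
    for p in sorted(votes, key=lambda p: quotas[p] - alloc[p], reverse=True)[:leftovers]:
        alloc[p] += 1
    return alloc

# Invented example: five parties, seven seats
votes = {"A": 340_000, "B": 280_000, "C": 160_000, "D": 60_000, "E": 15_000}
```

Note how the two rules already disagree on this small example, with D'Hondt favoring the larger parties.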
Program Converts VAX Floating-Point Data To UNIX
NASA Technical Reports Server (NTRS)
Alves, Marcos; Chapman, Bruce; Chu, Eugene
1996-01-01
VAX Floating Point to Host Floating Point Conversion (VAXFC) software converts non-ASCII files to unformatted floating-point representation of UNIX machine. This is done by reading bytes bit by bit, converting them to floating-point numbers, then writing results to another file. Useful when data files created by VAX computer must be used on other machines. Written in C language.
Hossain, Md Selim; Saeedi, Ehsan; Kong, Yinan
2017-01-01
In this paper, we propose a novel parallel architecture for fast hardware implementation of elliptic curve point multiplication (ECPM), which is the key operation of an elliptic curve cryptography processor. The point multiplication over binary fields is synthesized on both FPGA and ASIC technology by designing fast elliptic curve group operations in Jacobian projective coordinates. A novel combined point doubling and point addition (PDPA) architecture is proposed for group operations to achieve high speed and low hardware requirements for ECPM. It has been implemented over the binary field which is recommended by the National Institute of Standards and Technology (NIST). The proposed ECPM supports two Koblitz and random curves for the key sizes 233 and 163 bits. For group operations, a finite-field arithmetic operation, e.g. multiplication, is designed on a polynomial basis. The delay of a 233-bit point multiplication is only 3.05 and 3.56 μs, in a Xilinx Virtex-7 FPGA, for Koblitz and random curves, respectively, and 0.81 μs in an ASIC 65-nm technology, which are the fastest hardware implementation results reported in the literature to date. In addition, a 163-bit point multiplication is also implemented in FPGA and ASIC for fair comparison which takes around 0.33 and 0.46 μs, respectively. The area-time product of the proposed point multiplication is very low compared to similar designs. The performance (1/(Area × Time) = 1/AT) and Area × Time × Energy (ATE) product of the proposed design are far better than the most significant studies found in the literature. PMID:28459831
Extracting the information of coastline shape and its multiple representations
NASA Astrophysics Data System (ADS)
Liu, Ying; Li, Shujun; Tian, Zhen; Chen, Huirong
2007-06-01
From a study of the coastline, a new approach to its multiple representation is put forward in this paper: simulate the way humans think when generalizing, build an appropriate mathematical model, describe the coastline graphically, and extract all kinds of coastline shape information. Automatic coastline generalization is then carried out on the basis of knowledge rules and algorithms. By building a Douglas binary tree for the curve, the shape information of the coastline can be revealed both microscopically and macroscopically. The extracted coastline information includes the local characteristic points and their orientation, the curve structure, and the topological traits; the curve structure is divided into single curves and curve clusters. By establishing the knowledge rules of coastline generalization, the target scale, and the shape parameters, the automatic coastline generalization model is finally obtained. The multiple-scale representation method for coastlines proposed in this paper has several strong points: it follows the human mode of thinking and preserves the essential character of the original curve, and the binary tree structure controls the similarity of the coastline, avoids self-intersection, and maintains consistent topological relationships.
Making the Tent Function Complex
ERIC Educational Resources Information Center
Sprows, David J.
2010-01-01
This note can be used to illustrate to the student such concepts as periodicity in the complex plane. The basic construction makes use of the Tent function which requires only that the student have some working knowledge of binary arithmetic.
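The binary-arithmetic prerequisite amounts to this: writing x in [0, 1] in base 2, the real tent map shifts the expansion left one digit, complementing the remaining digits whenever the leading bit was 1. A quick sketch of the real map that the note then extends to the complex plane:

```python
def tent(x):
    """Tent map: T(x) = 2x for x < 1/2, and 2 - 2x for x >= 1/2."""
    return 2 * x if x < 0.5 else 2 - 2 * x

# On binary expansions x = 0.b1 b2 b3 ..., the map shifts the digits left;
# if the leading bit b1 was 1, the remaining digits are all complemented.
x = 0.375            # 0.011 in binary
y = tent(x)          # shift: 0.11 in binary, i.e. 0.75
z = tent(y)          # leading bit 1: 1 - 0.1000... = 0.0111... = 0.5
```

Since these dyadic values are exactly representable, the orbit above is computed without any rounding error.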
Zhao, Hong-Quan; Kasai, Seiya; Shiratori, Yuta; Hashizume, Tamotsu
2009-06-17
A two-bit arithmetic logic unit (ALU) was successfully fabricated on a GaAs-based regular nanowire network with hexagonal topology. This fundamental building block of central processing units can be implemented on a regular nanowire network structure with simple circuit architecture based on a graphical representation of logic functions using a binary decision diagram and topology control of the graph. The four-instruction ALU was designed by integrating subgraphs representing each instruction, and the circuitry was implemented by transferring the logical graph structure to a GaAs-based nanowire network formed by electron beam lithography and wet chemical etching. A path-switching function was implemented in the nodes by Schottky wrap-gate control of the nanowires. The fabricated circuit, integrating 32 node devices, exhibits the correct output waveforms at room temperature, even allowing for threshold voltage variation.
Orthogonal polynomials for refinable linear functionals
NASA Astrophysics Data System (ADS)
Laurie, Dirk; de Villiers, Johan
2006-12-01
A refinable linear functional is one that can be expressed as a convex combination and defined by a finite number of mask coefficients of certain stretched and shifted replicas of itself. The notion generalizes an integral weighted by a refinable function. The key to calculating a Gaussian quadrature formula for such a functional is to find the three-term recursion coefficients for the polynomials orthogonal with respect to that functional. We show how to obtain the recursion coefficients by using only the mask coefficients, and without the aid of modified moments. Our result implies the existence of the corresponding refinable functional whenever the mask coefficients are nonnegative, even when the same mask does not define a refinable function. The algorithm requires O(n^2) rational operations and, thus, can in principle deliver exact results. Numerical evidence suggests that it is also effective in floating-point arithmetic.
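The step from recursion coefficients to a Gaussian rule is the classical Golub-Welsch eigenvalue computation, sketched below for the ordinary Legendre weight on [-1, 1] (an integral weighted by a refinable function, so it fits the paper's setting); the recursion coefficients alpha_k = 0, beta_k = k²/(4k² - 1) are standard, with beta_0 carrying the total mass:

```python
import numpy as np

def gauss_rule(alpha, beta):
    """Golub-Welsch: nodes are the eigenvalues of the symmetric Jacobi matrix
    built from the three-term recursion coefficients; weights come from the
    first components of the eigenvectors, scaled by the total mass beta[0]."""
    n = len(alpha)
    off = np.sqrt(beta[1:n])
    J = np.diag(alpha) + np.diag(off, 1) + np.diag(off, -1)
    nodes, vecs = np.linalg.eigh(J)
    weights = beta[0] * vecs[0, :] ** 2
    return nodes, weights

# Legendre weight: alpha_k = 0, beta_k = k^2/(4k^2 - 1), beta_0 = 2
n = 2
alpha = np.zeros(n)
beta = np.array([2.0] + [k * k / (4.0 * k * k - 1.0) for k in range(1, n)])
nodes, weights = gauss_rule(alpha, beta)   # two-point Gauss-Legendre rule
```

The paper's contribution is precisely how to obtain the alpha and beta inputs from the mask coefficients of a refinable functional; the eigenvalue step afterwards is unchanged.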
NASA Technical Reports Server (NTRS)
Martensen, Anna L.; Butler, Ricky W.
1987-01-01
The Fault Tree Compiler Program is a new reliability tool used to predict the top-event probability for a fault tree. Five different gate types are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N gates. The high-level input language is easy to understand and use when describing the system tree. In addition, the use of the hierarchical fault tree capability can simplify the tree description and decrease program execution time. The current solution technique provides an answer precise to five digits (within the limits of double-precision floating-point arithmetic). The user may vary one failure rate or failure probability over a range of values and plot the results for sensitivity analyses. The solution technique is implemented in FORTRAN; the remaining program code is implemented in Pascal. The program is written to run on a Digital Equipment Corporation VAX with the VMS operating system.
The Fault Tree Compiler (FTC): Program and mathematics
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Martensen, Anna L.
1989-01-01
The Fault Tree Compiler Program is a new reliability tool used to predict the top-event probability for a fault tree. Five different gate types are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and m OF n gates. The high-level input language is easy to understand and use when describing the system tree. In addition, the use of the hierarchical fault tree capability can simplify the tree description and decrease program execution time. The current solution technique provides an answer precise to within a user-specified number of digits of accuracy (within the limits of double-precision floating-point arithmetic). The user may vary one failure rate or failure probability over a range of values and plot the results for sensitivity analyses. The solution technique is implemented in FORTRAN; the remaining program code is implemented in Pascal. The program is written to run on a Digital Equipment Corporation (DEC) VAX computer with the VMS operating system.
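For independent basic events the gate algebra behind such tools is compact. The sketch below is an illustration of the probability rules for the five gate types, not the FTC solution technique itself:

```python
from itertools import combinations
from math import prod

def p_and(probs):
    """AND gate: all inputs fail."""
    return prod(probs)

def p_or(probs):
    """OR gate: at least one input fails."""
    return 1.0 - prod(1.0 - p for p in probs)

def p_m_of_n(probs, m):
    """m OF n gate: at least m of the n independent inputs fail."""
    n = len(probs)
    total = 0.0
    for k in range(m, n + 1):
        for idx in combinations(range(n), k):
            total += prod(probs[i] if i in idx else 1.0 - probs[i] for i in range(n))
    return total

# Toy tree of independent events: top = OR( AND(e1, e2), e3 )
e1, e2, e3 = 1e-3, 2e-3, 5e-4
top = p_or([p_and([e1, e2]), e3])
```

When basic events are shared between branches the gates are no longer independent, which is one reason a dedicated tool with a controlled-accuracy solution technique is needed.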
Robust image region descriptor using local derivative ordinal binary pattern
NASA Astrophysics Data System (ADS)
Shang, Jun; Chen, Chuanbo; Pei, Xiaobing; Liang, Hu; Tang, He; Sarem, Mudar
2015-05-01
Binary image descriptors have received a lot of attention in recent years, since they provide numerous advantages, such as a low memory footprint and an efficient matching strategy. However, they rely on intermediate representations and are generally less discriminative than floating-point descriptors. We propose an image region descriptor, namely the local derivative ordinal binary pattern, for object recognition and image categorization. In order to preserve more local contrast and edge information, we quantize the intensity differences between the central pixels and their neighbors in the detected local affine covariant regions in an adaptive way. These differences are then sorted, mapped into binary codes, and histogrammed, weighted by the sum of the absolute values of the differences. Furthermore, the gray level of the central pixel is quantized to further improve the discriminative ability. Finally, we combine them to form a joint histogram to represent the features of the image. We observe that our descriptor preserves more local brightness and edge information than traditional binary descriptors. Also, our descriptor is robust to rotation, illumination variations, and other geometric transformations. We conduct extensive experiments on the standard ETHZ and Kentucky datasets for object recognition and on PASCAL for image classification. The experimental results show that our descriptor outperforms existing state-of-the-art methods.
Local Multi-Grouped Binary Descriptor With Ring-Based Pooling Configuration and Optimization.
Gao, Yongqiang; Huang, Weilin; Qiao, Yu
2015-12-01
Local binary descriptors are attracting increasing attention due to their great advantages in computational speed, which enable real-time performance in numerous image/vision applications. Various methods have been proposed to learn data-dependent binary descriptors. However, most existing binary descriptors pursue computational simplicity at the expense of significant information loss, which causes ambiguity in similarity measurement using the Hamming distance. In this paper, considering that multiple features might share complementary information, we present a novel local binary descriptor, referred to as the ring-based multi-grouped descriptor (RMGD), to successfully bridge the performance gap between current binary and floating-point descriptors. Our contributions are twofold. First, we introduce a new pooling configuration based on spatial ring-region sampling, allowing for binary tests on the full set of pairwise regions with different shapes, scales, and distances. This leads to a more meaningful description than the existing methods, which normally apply a limited set of pooling configurations. Then, an extended AdaBoost is proposed for efficient bit selection by emphasizing high variance and low correlation, achieving a highly compact representation. Second, the RMGD is computed from multiple image properties from which binary strings are extracted. We cast multi-grouped feature integration as a rankSVM or sparse support vector machine learning problem, so that different features can compensate strongly for each other, which is the key to discriminativeness and robustness. The performance of the RMGD was evaluated on a number of publicly available benchmarks, where the RMGD outperforms the state-of-the-art binary descriptors significantly.
KaDonna Randolph
2010-01-01
The use of the geometric and arithmetic means for estimating tree crown diameter and crown cross-sectional area was examined for trees with crown width measurements taken at the widest point of the crown and perpendicular to the widest point of the crown. The average difference between the geometric and arithmetic mean crown diameters was less than 0.2 ft in absolute...
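The quantities compared are simply the two means of the perpendicular width measurements and the circular crown cross-sectional area each implies. A sketch with invented widths in feet:

```python
import math

def crown_means(w1, w2):
    """Arithmetic and geometric means of two crown-width measurements."""
    return (w1 + w2) / 2.0, math.sqrt(w1 * w2)

def crown_area(d):
    """Crown cross-sectional area, treating the crown as a circle of diameter d."""
    return math.pi * d * d / 4.0

# widest crown width and the width perpendicular to it (hypothetical values, ft)
am, gm = crown_means(20.0, 16.0)
area_am, area_gm = crown_area(am), crown_area(gm)
```

By the AM-GM inequality the arithmetic mean never falls below the geometric mean, so the arithmetic-mean estimate of crown area is always at least as large.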
Shen, Chongfei; Liu, Hongtao; Xie, Xb; Luk, Keith Dk; Hu, Yong
2007-01-01
An adaptive noise canceller (ANC) has been used to improve the signal-to-noise ratio (SNR) of the somatosensory evoked potential (SEP). To apply the ANC efficiently in a hardware system, a fixed-point ANC allows fast, cost-efficient construction and low power consumption in an FPGA design. However, it is still questionable whether the SNR improvement achieved by the fixed-point algorithm is as good as that achieved by the floating-point algorithm. This study compares the outputs of floating-point and fixed-point ANC when applied to SEP signals. The selection of the step-size parameter (μ) was found to differ between the fixed-point and floating-point algorithms. In this simulation study, the outputs of the fixed-point ANC showed greater distortion from the real SEP signals than those of the floating-point ANC; however, the difference decreased with increasing μ. With an optimal selection of μ, the fixed-point ANC can achieve results as good as the floating-point algorithm.
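The floating-point reference behavior is easy to reproduce with a plain LMS canceller. This is a schematic sketch, not the authors' FPGA design; the sinusoidal interference model, filter length, and step size are invented for illustration:

```python
import math

def lms_anc(primary, reference, mu=0.01, taps=4):
    """Floating-point LMS adaptive noise canceller: predict the noise in
    `primary` from the correlated `reference` channel and output the
    prediction error, i.e. the cleaned signal."""
    w = [0.0] * taps          # adaptive filter weights
    buf = [0.0] * taps        # most recent reference samples
    out = []
    for d, x in zip(primary, reference):
        buf = [x] + buf[:-1]
        y = sum(wi * xi for wi, xi in zip(w, buf))   # noise estimate
        e = d - y                                    # canceller output
        w = [wi + 2.0 * mu * e * xi for wi, xi in zip(w, buf)]
        out.append(e)
    return out

# Pure sinusoidal interference: after convergence the output should be ~0.
n = 2000
reference = [math.sin(0.3 * k) for k in range(n)]
primary = [0.8 * r for r in reference]    # interference as seen in the SEP channel
cleaned = lms_anc(primary, reference)
```

A fixed-point variant would quantize `w`, `buf`, and the update term, which is where the distortion studied in the paper enters and why its usable range of μ differs.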
Environment parameters and basic functions for floating-point computation
NASA Technical Reports Server (NTRS)
Brown, W. S.; Feldman, S. I.
1978-01-01
A language-independent proposal for environment parameters and basic functions for floating-point computation is presented. Basic functions are proposed to analyze, synthesize, and scale floating-point numbers. The model provides a small set of parameters and a small set of axioms along with sharp measures of roundoff error. The parameters and functions can be used to write portable and robust codes that deal intimately with the floating-point representation. Subject to underflow and overflow constraints, a number can be scaled by a power of the floating-point radix inexpensively and without loss of precision. A specific representation for FORTRAN is included.
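Python's math module provides analogous analyze/synthesize/scale primitives, which can illustrate the proposal's claim that scaling by a power of the radix is exact (subject to overflow and underflow):

```python
import math

x = 6.125                            # equals 0.765625 * 2**3
mantissa, exponent = math.frexp(x)   # analyze: x == mantissa * 2**exponent
assert x == math.ldexp(mantissa, exponent)  # synthesize: exact round trip

y = math.ldexp(x, 10)                # scale by radix**10: exact, no rounding
assert y == x * 1024.0
print(mantissa, exponent, y)
```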
Identification of mothball powder composition by float tests and melting point tests.
Tang, Ka Yuen
2018-07-01
The aim of the study was to identify the composition, as either camphor, naphthalene, or paradichlorobenzene, of mothballs in the form of powder or tiny fragments by float tests and melting point tests. Naphthalene, paradichlorobenzene and camphor mothballs were blended into powder and tiny fragments (with sizes <1/10 of the size of an intact mothball). In the float tests, the mothball powder and tiny fragments were placed in water, saturated salt solution and 50% dextrose solution (D50), and the extent to which they floated or sank in the liquids was observed. In the melting point tests, the mothball powder and tiny fragments were placed in hot water with a temperature between 53 and 80 °C, and the extent to which they melted was observed. Both the float and melting point tests were then repeated using intact mothballs. Three emergency physicians blinded to the identities of samples and solutions visually evaluated each sample. In the float tests, paradichlorobenzene powder partially floated and partially sank in all three liquids, while naphthalene powder partially floated and partially sank in water. Naphthalene powder did not sink in D50 or saturated salt solution. Camphor powder floated in all three liquids. Float tests identified the compositions of intact mothballs accurately. In the melting point tests, paradichlorobenzene powder melted completely in hot water within 1 min while naphthalene powder and camphor powder did not melt. The melted portions of paradichlorobenzene mothballs were sometimes too small to be observed in 1 min, but the mothballs either partially or completely melted in 5 min. Neither intact camphor nor intact naphthalene mothballs melted in hot water. For mothball powder, the melting point tests were more accurate than the float tests in differentiating between paradichlorobenzene and non-paradichlorobenzene (naphthalene or camphor). For intact mothballs, float tests performed better than melting point tests.
Float tests can identify camphor mothballs but melting point tests cannot. We suggest melting point tests for identifying mothball powder and tiny fragments, while float tests are recommended for intact mothballs and large fragments.
Pre-Algebra Groups. Concepts & Applications.
ERIC Educational Resources Information Center
Montgomery County Public Schools, Rockville, MD.
Discussion material and exercises related to pre-algebra groups are provided in this five chapter manual. Chapter 1 (mappings) focuses on restricted domains, order of operations (parentheses and exponents), rules of assignment, and computer extensions. Chapter 2 considers finite number systems, including binary operations, clock arithmetic,…
Coding efficiency of AVS 2.0 for CBAC and CABAC engines
NASA Astrophysics Data System (ADS)
Cui, Jing; Choi, Youngkyu; Chae, Soo-Ik
2015-12-01
In this paper we compare, for AVS 2.0[1], the coding efficiency of two entropy-coding engines: the Context-based Binary Arithmetic Coding (CBAC)[2] in AVS 2.0 and the Context-Adaptive Binary Arithmetic Coder (CABAC)[3] in HEVC[4]. For a fair comparison, the CABAC is embedded in the reference code RD10.1, complementing our previous work[5], in which the CBAC was embedded in HEVC. In the RD code, the rate estimation table is employed only for RDOQ; to reduce the computational complexity of the video encoder, we modified the RD code so that the rate estimation table is employed for all RDO decisions. Furthermore, we simplified the rate estimation table by reducing the bit depth of its fractional part from 8 to 2. The simulation results show that the CABAC has a BD-rate loss of about 0.7% compared to the CBAC, suggesting that the CBAC is slightly more efficient than the CABAC in AVS 2.0.
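The effect of shrinking the fractional bit depth of a rate estimate can be sketched as follows; the numbers are illustrative, not taken from the AVS or HEVC tables:

```python
def to_fixed(rate_bits, frac_bits):
    """Quantize a fractional bit cost to `frac_bits` fractional bits."""
    return round(rate_bits * (1 << frac_bits)) / (1 << frac_bits)

rate = 3.3219  # approximately -log2(0.1): bits to code a symbol of probability 0.1
q8 = to_fixed(rate, 8)   # 8 fractional bits: 3.3203125
q2 = to_fixed(rate, 2)   # 2 fractional bits: 3.25
print(q8, q2)            # quantization error grows as fractional bits shrink
```

A coarser table is cheaper in hardware; the paper's result suggests the extra quantization error has little impact on the RDO decisions.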
Gschwind, Michael K [Chappaqua, NY]
2011-03-01
Mechanisms for implementing a floating point only single instruction multiple data instruction set architecture are provided. A processor is provided that comprises an issue unit, an execution unit coupled to the issue unit, and a vector register file coupled to the execution unit. The execution unit has logic that implements a floating point (FP) only single instruction multiple data (SIMD) instruction set architecture (ISA). The floating point vector registers of the vector register file store both scalar and floating point values as vectors having a plurality of vector elements. The processor may be part of a data processing system.
Improvements in floating point addition/subtraction operations
Farmwald, P.M.
1984-02-24
Apparatus is described for decreasing the latency time associated with floating point addition and subtraction in a computer, using a novel bifurcated, pre-normalization/post-normalization approach that distinguishes between differences of floating point exponents.
Bifurcated method and apparatus for floating point addition with decreased latency time
Farmwald, Paul M.
1987-01-01
Apparatus for decreasing the latency time associated with floating point addition and subtraction in a computer, using a novel bifurcated, pre-normalization/post-normalization approach that distinguishes between differences of floating point exponents.
Vanbinst, Kiran; Ghesquière, Pol; De Smedt, Bert
2014-11-01
Deficits in arithmetic fact retrieval constitute the hallmark of children with mathematical learning difficulties (MLD). It remains, however, unclear which cognitive deficits underpin these difficulties in arithmetic fact retrieval. Many prior studies defined MLD by considering low achievement criteria and not by additionally taking the persistence of the MLD into account. Therefore, the present longitudinal study contrasted children with persistent MLD (MLD-p; mean age: 9 years 2 months) and typically developing (TD) children (mean age: 9 years 6 months) at three time points, to explore whether differences in arithmetic strategy development were associated with differences in numerical magnitude processing, working memory and phonological processing. Our longitudinal data revealed that children with MLD-p had persistent arithmetic fact retrieval deficits at each time point. Children with MLD-p showed persistent impairments in symbolic, but not in nonsymbolic, magnitude processing at each time point. The two groups differed in phonological processing, but not in working memory. Our data indicate that both domain-specific and domain-general cognitive abilities contribute to individual differences in children's arithmetic strategy development, and that the symbolic processing of numerical magnitudes might be a particular risk factor for children with MLD-p. Copyright © 2014 Elsevier Ltd. All rights reserved.
NULL Convention Floating Point Multiplier
Albert, Anitha Juliette; Ramachandran, Seshasayanan
2015-01-01
Floating point multiplication is a critical part in high dynamic range and computational intensive digital signal processing applications which require high precision and low power. This paper presents the design of an IEEE 754 single precision floating point multiplier using asynchronous NULL convention logic paradigm. Rounding has not been implemented to suit high precision applications. The novelty of the research is that it is the first ever NULL convention logic multiplier, designed to perform floating point multiplication. The proposed multiplier offers substantial decrease in power consumption when compared with its synchronous version. Performance attributes of the NULL convention logic floating point multiplier, obtained from Xilinx simulation and Cadence, are compared with its equivalent synchronous implementation. PMID:25879069
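The data path of such a multiplier (sign, exponent, and significand handled separately, with truncation in place of rounding as the paper describes) can be sketched in software. Subnormals, infinities, NaNs, and exponent overflow are omitted for brevity; this is an illustration of the algorithm, not the paper's NULL convention logic design.

```python
import struct

def f32_bits(x):
    """Reinterpret a Python float as IEEE 754 single-precision bits."""
    return struct.unpack('<I', struct.pack('<f', x))[0]

def mul_f32_truncate(a, b):
    ba, bb = f32_bits(a), f32_bits(b)
    sign = (ba ^ bb) & 0x80000000
    ea, eb = (ba >> 23) & 0xFF, (bb >> 23) & 0xFF
    ma = (ba & 0x7FFFFF) | 0x800000          # restore implicit leading 1
    mb = (bb & 0x7FFFFF) | 0x800000
    prod = ma * mb                           # 48-bit significand product
    exp = ea + eb - 127                      # remove one bias
    if prod & (1 << 47):                     # product in [2, 4): normalize
        prod >>= 1
        exp += 1
    mant = (prod >> 23) & 0x7FFFFF           # truncate low bits (no rounding)
    bits = sign | (exp << 23) | mant
    return struct.unpack('<f', struct.pack('<I', bits))[0]

print(mul_f32_truncate(1.5, 2.0))  # 3.0
```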
Method for preparing homogeneous single crystal ternary III-V alloys
Ciszek, Theodore F.
1991-01-01
A method for producing homogeneous, single-crystal III-V ternary alloys of high crystal perfection using a floating crucible system in which the outer crucible holds a ternary alloy of the composition desired to be produced in the crystal and an inner floating crucible having a narrow, melt-passing channel in its bottom wall holds a small quantity of melt of a pseudo-binary liquidus composition that would freeze into the desired crystal composition. The alloy of the floating crucible is maintained at a predetermined lower temperature than the alloy of the outer crucible, and a single crystal of the desired homogeneous alloy is pulled out of the floating crucible melt, as melt from the outer crucible flows into the bottom channel of the floating crucible at a rate that corresponds to the rate of growth of the crystal.
Spotting Incorrect Rules in Signed-Number Arithmetic by the Individual Consistency Index.
1981-08-01
...meaning of dimensionality of achievement data. It also shows the importance of construct validity, even in criterion-referenced testing of the cognitive aspect of performance, and that the traditional means of item analysis that are based on taking the variances of binary scores and content analysis...
Fundamentals of Digital Logic.
ERIC Educational Resources Information Center
Noell, Monica L.
This course is designed to prepare electronics personnel for further training in digital techniques, presenting need-to-know information that is basic to any maintenance course on digital equipment. It consists of seven study units: (1) binary arithmetic; (2) boolean algebra; (3) logic gates; (4) logic flip-flops; (5) nonlogic circuits; (6)…
An Efficient Implementation For Real Time Applications Of The Wigner-Ville Distribution
NASA Astrophysics Data System (ADS)
Boashash, Boualem; Black, Peter; Whitehouse, Harper J.
1986-03-01
The Wigner-Ville Distribution (WVD) is a valuable tool for time-frequency signal analysis. In order to implement the WVD in real time, an efficient algorithm and architecture have been developed which may be implemented with commercial components. This algorithm successively computes the analytic signal corresponding to the input signal, forms a weighted kernel function, and analyses the kernel via a Discrete Fourier Transform (DFT). To evaluate the analytic signal required by the algorithm, it is shown that the time domain definition implemented as a finite impulse response (FIR) filter is practical and more efficient than the frequency domain definition of the analytic signal. The windowed resolution of the WVD in the frequency domain is shown to be similar to the resolution of a windowed Fourier Transform. A real-time signal processor has been designed for evaluation of the WVD analysis system. The system is easily paralleled and can be configured to meet a variety of frequency and time resolutions. The arithmetic unit is based on a pair of high speed VLSI floating-point multiplier and adder chips. Dual operand buses and an independent result bus maximize data transfer rates. The system is horizontally microprogrammed and utilizes a full instruction pipeline. Each microinstruction specifies two operand addresses, a result location, the type of arithmetic and the memory configuration. Input and output are via shared memory blocks with front-end processors to handle data transfers during the non-access periods of the analyzer.
Pérez Suárez, Santiago T.; Travieso González, Carlos M.; Alonso Hernández, Jesús B.
2013-01-01
This article presents a design methodology for designing an artificial neural network as an equalizer for a binary signal. Firstly, the system is modelled in floating point format using Matlab. Afterward, the design is described for a Field Programmable Gate Array (FPGA) using fixed point format. The FPGA design is based on the System Generator from Xilinx, which is a design tool built on top of Matlab's Simulink. System Generator allows one to design in a fast and flexible way: it works with low-level details of the circuits, and the functionality of the system can be fully tested. System Generator can be used to check the architecture and to analyse the effect of the number of bits on the system performance. Finally the System Generator design is compiled for the Xilinx Integrated System Environment (ISE) and the system is described using a hardware description language. In ISE the circuits are managed with high level details and physical performances are obtained. In the Conclusions section, some modifications are proposed to improve the methodology and to ensure portability across FPGA manufacturers.
Bounds for the price of discrete arithmetic Asian options
NASA Astrophysics Data System (ADS)
Vanmaele, M.; Deelstra, G.; Liinev, J.; Dhaene, J.; Goovaerts, M. J.
2006-01-01
In this paper the pricing of European-style discrete arithmetic Asian options with fixed and floating strike is studied by deriving analytical lower and upper bounds. In our approach we use a general technique for deriving upper (and lower) bounds for stop-loss premiums of sums of dependent random variables, as explained in Kaas et al. (Ins. Math. Econom. 27 (2000) 151-168), and additionally, the ideas of Rogers and Shi (J. Appl. Probab. 32 (1995) 1077-1088) and of Nielsen and Sandmann (J. Financial Quant. Anal. 38(2) (2003) 449-473). We are able to create a unifying framework for European-style discrete arithmetic Asian options through these bounds, that generalizes several approaches in the literature as well as improves the existing results. We obtain analytical and easily computable bounds. The aim of the paper is to formulate an advice of the appropriate choice of the bounds given the parameters, investigate the effect of different conditioning variables and compare their efficiency numerically. Several sets of numerical results are included. We also discuss hedging using these bounds. Moreover, our methods are applicable to a wide range of (pricing) problems involving a sum of dependent random variables.
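For concreteness, the quantity being bounded, the price of a European-style discrete arithmetic Asian call with fixed strike, can be approximated by plain Monte Carlo under Black-Scholes dynamics. This is a generic sketch of the payoff, not the paper's analytical bounds; all parameter values are illustrative.

```python
import math, random

def asian_call_mc(S0, K, r, sigma, T, n_obs, n_paths, seed=1):
    """Estimate E[e^{-rT} * max(mean(S_{t_i}) - K, 0)] by Monte Carlo."""
    rng = random.Random(seed)
    dt = T / n_obs
    disc = math.exp(-r * T)
    total = 0.0
    for _ in range(n_paths):
        S, acc = S0, 0.0
        for _ in range(n_obs):
            z = rng.gauss(0.0, 1.0)
            S *= math.exp((r - 0.5 * sigma**2) * dt + sigma * math.sqrt(dt) * z)
            acc += S                     # accumulate the monitored prices
        total += max(acc / n_obs - K, 0.0)
    return disc * total / n_paths

# With zero volatility the average is deterministic, so one path is exact:
price0 = asian_call_mc(100, 90, 0.05, 0.0, 1.0, 12, 1)
print(price0)
```

With positive volatility, many paths are needed and the estimate is noisy, which is exactly why cheap analytical bounds of the kind derived in the paper are useful.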
Exploiting data representation for fault tolerance
Hoemmen, Mark Frederick; Elliott, J.; Sandia National Lab.; ...
2015-01-06
Incorrect computer hardware behavior may corrupt intermediate computations in numerical algorithms, possibly resulting in incorrect answers. Prior work models misbehaving hardware by randomly flipping bits in memory. We start by accepting this premise, and present an analytic model for the error introduced by a bit flip in an IEEE 754 floating-point number. We then relate this finding to the linear algebra concepts of normalization and matrix equilibration. In particular, we present a case study illustrating that normalizing both vector inputs of a dot product minimizes the probability of a single bit flip causing a large error in the dot product's result. Moreover, the absolute error is either less than one or very large, which allows detection of large errors. Then, we apply this to the GMRES iterative solver. We count all possible errors that can be introduced through faults in arithmetic in the computationally intensive orthogonalization phase of GMRES, and show that when the matrix is equilibrated, the absolute error is bounded above by one.
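The bit-flip fault model is easy to reproduce: flipping a low mantissa bit of an IEEE 754 binary64 value perturbs it by one unit in the last place, while flipping an exponent bit changes the value drastically. This is the "small or very large" dichotomy that enables detection. A sketch:

```python
import struct

def flip_bit(x, k):
    """Return x with bit k of its binary64 representation flipped (0 = mantissa LSB)."""
    (bits,) = struct.unpack('<Q', struct.pack('<d', x))
    return struct.unpack('<d', struct.pack('<Q', bits ^ (1 << k)))[0]

low = flip_bit(1.0, 0)    # lowest mantissa bit: error of exactly 2**-52
high = flip_bit(1.0, 62)  # highest exponent bit: 1.0 becomes +inf
print(low - 1.0, high)
```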
Quantity, Revisited: An Object-Oriented Reusable Class
NASA Technical Reports Server (NTRS)
Funston, Monica Gayle; Gerstle, Walter; Panthaki, Malcolm
1998-01-01
"Quantity", a prototype implementation of an object-oriented class, was developed for two reasons: to help engineers and scientists manipulate the many types of quantities encountered during routine analysis, and to create a reusable software component to for large domain-specific applications. From being used as a stand-alone application to being incorporated into an existing computational mechanics toolkit, "Quantity" appears to be a useful and powerful object. "Quantity" has been designed to maintain the full engineering meaning of values with respect to units and coordinate systems. A value is a scalar, vector, tensor, or matrix, each of which is composed of Value Components, each of which may be an integer, floating point number, fuzzy number, etc., and its associated physical unit. Operations such as coordinate transformation and arithmetic operations are handled by member functions of "Quantity". The prototype has successfully tested such characteristics as maintaining a numeric value, an associated unit, and an annotation. In this paper we further explore the design of "Quantity", with particular attention to coordinate systems.
Towards cortex sized artificial neural systems.
Johansson, Christopher; Lansner, Anders
2007-01-01
We propose, implement, and discuss an abstract model of the mammalian neocortex. This model is instantiated with a sparse recurrently connected neural network that has spiking leaky integrator units and continuous Hebbian learning. First we study the structure, modularization, and size of neocortex, and then we describe a generic computational model of the cortical circuitry. A characterizing feature of the model is that it is based on the modularization of neocortex into hypercolumns and minicolumns. Both a floating- and fixed-point arithmetic implementation of the model are presented along with simulation results. We conclude that an implementation on a cluster computer is not communication but computation bounded. A mouse and rat cortex sized version of our model executes in 44% and 23% of real time, respectively. Further, an instance of the model with 1.6 x 10^6 units and 2 x 10^11 connections performed noise reduction and pattern completion. These implementations represent the current frontier of large-scale abstract neural network simulations in terms of network size and running speed.
How Is Phonological Processing Related to Individual Differences in Children's Arithmetic Skills?
ERIC Educational Resources Information Center
De Smedt, Bert; Taylor, Jessica; Archibald, Lisa; Ansari, Daniel
2010-01-01
While there is evidence for an association between the development of reading and arithmetic, the precise locus of this relationship remains to be determined. Findings from cognitive neuroscience research that point to shared neural correlates for phonological processing and arithmetic as well as recent behavioral evidence led to the present…
Extending the BEAGLE library to a multi-FPGA platform.
Jin, Zheming; Bakos, Jason D
2013-01-19
Maximum Likelihood (ML)-based phylogenetic inference using Felsenstein's pruning algorithm is a standard method for estimating the evolutionary relationships amongst a set of species based on DNA sequence data, and is used in popular applications such as RAxML, PHYLIP, GARLI, BEAST, and MrBayes. The Phylogenetic Likelihood Function (PLF) and its associated scaling and normalization steps comprise the computational kernel for these tools. These computations are data intensive but contain fine grain parallelism that can be exploited by coprocessor architectures such as FPGAs and GPUs. A general purpose API called BEAGLE has recently been developed that includes optimized implementations of Felsenstein's pruning algorithm for various data parallel architectures. In this paper, we extend the BEAGLE API to a multiple Field Programmable Gate Array (FPGA)-based platform called the Convey HC-1. The core calculation of our implementation, which includes both the phylogenetic likelihood function (PLF) and the tree likelihood calculation, has an arithmetic intensity of 130 floating-point operations per 64 bytes of I/O, or 2.03 ops/byte. Its performance can thus be calculated as a function of the host platform's peak memory bandwidth and the implementation's memory efficiency, as 2.03 × peak bandwidth × memory efficiency. Our FPGA-based platform has a peak bandwidth of 76.8 GB/s and our implementation achieves a memory efficiency of approximately 50%, which gives an average throughput of 78 Gflops. This represents a ~40X speedup when compared with BEAGLE's CPU implementation on a dual Xeon 5520 and 3X speedup versus BEAGLE's GPU implementation on a Tesla T10 GPU for very large data sizes. The power consumption is 92 W, yielding a power efficiency of 1.7 Gflops per Watt. The use of data parallel architectures to achieve high performance for likelihood-based phylogenetic inference requires high memory bandwidth and a design methodology that emphasizes high memory efficiency. 
To achieve this objective, we integrated 32 pipelined processing elements (PEs) across four FPGAs. For the design of each PE, we developed a specialized synthesis tool to generate a floating-point pipeline with resource and throughput constraints to match the target platform. We have found that using low-latency floating-point operators can significantly reduce FPGA area and still meet timing requirement on the target platform. We found that this design methodology can achieve performance that exceeds that of a GPU-based coprocessor.
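The throughput quoted above follows directly from the roofline-style formula in the abstract:

```python
# Arithmetic intensity: 130 floating-point operations per 64 bytes of I/O
ai = 130 / 64            # ~2.03 flops per byte
peak_bw = 76.8           # GB/s, peak memory bandwidth of the Convey HC-1
efficiency = 0.5         # measured memory efficiency (~50%)
gflops = ai * peak_bw * efficiency
print(round(gflops))     # ~78 Gflops
```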
The Power of 2: How an Apparently Irregular Numeration System Facilitates Mental Arithmetic
ERIC Educational Resources Information Center
Bender, Andrea; Beller, Sieghard
2017-01-01
Mangarevan traditionally contained two numeration systems: a general one, which was highly regular, decimal, and extraordinarily extensive; and a specific one, which was restricted to specific objects, based on diverging counting units, and interspersed with binary steps. While most of these characteristics are shared by numeration systems in…
Rounding Technique for High-Speed Digital Signal Processing
NASA Technical Reports Server (NTRS)
Wechsler, E. R.
1983-01-01
An arithmetic technique facilitates high-speed rounding of 2's-complement binary data. Conventional rounding of 2's-complement numbers presents problems in high-speed digital circuits. The proposed technique consists of truncating K + 1 bits and then attaching a 1 bit in the least significant position. The mean output error is zero, eliminating the need to introduce a voltage offset at the input.
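The scheme (sometimes called von Neumann rounding) is easy to check empirically. The sketch below compares its bias with that of plain truncation for K = 3 over all 8-bit unsigned values: the forced-1 scheme has a bias of half an input LSB, effectively zero at the output step of 2**(K+1) = 16, while truncation is biased by nearly half the output step.

```python
def round_truncate_attach(v, k):
    """Drop the low k+1 bits, then attach a 1 bit (weight 2**k)."""
    return (v >> (k + 1) << (k + 1)) | (1 << k)

def plain_truncate(v, k):
    """Drop the low k+1 bits (output step 2**(k+1))."""
    return v >> (k + 1) << (k + 1)

K = 3
vals = range(256)
bias_attach = sum(round_truncate_attach(v, K) - v for v in vals) / 256
bias_trunc = sum(plain_truncate(v, K) - v for v in vals) / 256
print(bias_attach, bias_trunc)  # 0.5 vs -7.5 (in input LSBs)
```

In hardware this wins because no carry-propagating adder is needed in the rounding step: the dropped bits are simply discarded and one bit is wired to 1.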
ERIC Educational Resources Information Center
Marine Corps, Washington, DC.
Targeted for grades 10 through adult, these military-developed curriculum materials consist of a student lesson book with text readings and review exercises designed to prepare electronic personnel for further training in digital techniques. Covered in the five lessons are binary arithmetic (number systems, decimal systems, the mathematical form…
Context adaptive binary arithmetic coding-based data hiding in partially encrypted H.264/AVC videos
NASA Astrophysics Data System (ADS)
Xu, Dawen; Wang, Rangding
2015-05-01
A scheme of data hiding directly in a partially encrypted version of H.264/AVC videos is proposed which includes three parts, i.e., selective encryption, data embedding and data extraction. Selective encryption is performed on context adaptive binary arithmetic coding (CABAC) bin-strings via stream ciphers. By careful selection of CABAC entropy coder syntax elements for selective encryption, the encrypted bitstream is format-compliant and has exactly the same bit rate. Then a data-hider embeds the additional data into partially encrypted H.264/AVC videos using a CABAC bin-string substitution technique without accessing the plaintext of the video content. Since bin-string substitution is carried out on those residual coefficients with approximately the same magnitude, the quality of the decrypted video is satisfactory. Video file size is strictly preserved even after data embedding. In order to adapt to different application scenarios, data extraction can be done either in the encrypted domain or in the decrypted domain. Experimental results have demonstrated the feasibility and efficiency of the proposed scheme.
Model Checking with Edge-Valued Decision Diagrams
NASA Technical Reports Server (NTRS)
Roux, Pierre; Siminiceanu, Radu I.
2010-01-01
We describe an algebra of Edge-Valued Decision Diagrams (EVMDDs) to encode arithmetic functions and its implementation in a model checking library. We provide efficient algorithms for manipulating EVMDDs and review the theoretical time complexity of these algorithms for all basic arithmetic and relational operators. We also demonstrate that the time complexity of the generic recursive algorithm for applying a binary operator on EVMDDs is no worse than that of Multi-Terminal Decision Diagrams. We have implemented a new symbolic model checker with the intention to represent in one formalism the best techniques available at the moment across a spectrum of existing tools. Compared to the CUDD package, our tool is several orders of magnitude faster.
A 640-MHz 32-megachannel real-time polyphase-FFT spectrum analyzer
NASA Technical Reports Server (NTRS)
Zimmerman, G. A.; Garyantes, M. F.; Grimm, M. J.; Charny, B.
1991-01-01
A polyphase fast Fourier transform (FFT) spectrum analyzer being designed for NASA's Search for Extraterrestrial Intelligence (SETI) Sky Survey at the Jet Propulsion Laboratory is described. By replacing the time domain multiplicative window preprocessing with polyphase filter processing, much of the processing loss of windowed FFTs can be eliminated. Polyphase coefficient memory costs are minimized by effective use of run length compression. Finite word length effects are analyzed, producing a balanced system with 8 bit inputs, 16 bit fixed point polyphase arithmetic, and 24 bit fixed point FFT arithmetic. Fixed point renormalization midway through the computation is seen to be naturally accommodated by the matrix FFT algorithm proposed. Simulation results validate the finite word length arithmetic analysis and the renormalization technique.
Verification of floating-point software
NASA Technical Reports Server (NTRS)
Hoover, Doug N.
1990-01-01
Floating point computation presents a number of problems for formal verification. Should one treat the actual details of floating point operations, accept them as imprecisely defined, or ignore round-off error altogether and behave as if floating point operations were perfectly accurate? There is the further problem that a numerical algorithm usually only approximately computes some mathematical function, and we often do not know just how good the approximation is, even in the absence of round-off error. ORA has developed a theory of asymptotic correctness which allows one to verify floating point software with minimal entanglement in these problems. This theory and its implementation in the Ariel C verification system are described. The theory is illustrated using a simple program which finds a zero of a given function by bisection. This paper is presented in viewgraph form.
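The illustrating example, finding a zero of a function by bisection, can be sketched generically (this is a plain sketch, not ORA's verified Ariel program):

```python
def bisect(f, lo, hi, tol=1e-12):
    """Find x in [lo, hi] with f(x) ~ 0, assuming f(lo) and f(hi) differ in sign."""
    assert f(lo) * f(hi) <= 0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:   # root lies in the left half
            hi = mid
        else:                     # root lies in the right half
            lo = mid
    return (lo + hi) / 2

root = bisect(lambda x: x * x - 2.0, 0.0, 2.0)
print(root)  # ~1.4142135623...
```

The verification subtlety the talk addresses is visible even here: in floating point the interval eventually stops shrinking, and the computed root is only asymptotically correct with respect to the tolerance, not exactly a zero of f.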
Design of a reversible single precision floating point subtractor.
Anantha Lakshmi, Av; Sudha, Gf
2014-01-04
In recent years, reversible logic has emerged as a major area of research due to its ability to reduce power dissipation, which is the main requirement in low power digital circuit design. It has wide applications such as low power CMOS design, nanotechnology, digital signal processing, communication, DNA computing and optical computing. Floating-point operations are needed very frequently in nearly all computing disciplines, and studies have shown floating-point addition/subtraction to be the most used floating-point operation. However, while a few designs exist for efficient reversible BCD subtractors, there is no work on a reversible floating point subtractor. In this paper, an efficient reversible single precision floating-point subtractor is presented. The proposed design requires reversible designs of an 8-bit and a 24-bit comparator unit, an 8-bit and a 24-bit subtractor, and a normalization unit. For normalization, a 24-bit reversible leading zero detector and a 24-bit reversible shift register are implemented to shift the mantissas. To realize a reversible 1-bit comparator, two new 3x3 reversible gates are proposed. The proposed reversible 1-bit comparator is better and optimized in terms of the number of reversible gates used, the transistor count and the number of garbage outputs. The proposed work is analysed in terms of number of reversible gates, garbage outputs, constant inputs and quantum costs. Using these modules, an efficient design of a reversible single precision floating point subtractor is proposed. The proposed circuits have been simulated using Modelsim and synthesized using Xilinx Virtex5vlx30tff665-3. The total on-chip power consumed by the proposed 32-bit reversible floating point subtractor is 0.410 W.
Wu, Jun; Hu, Xie-he; Chen, Sheng; Chu, Jian
2003-01-01
The closed-loop stability issue of finite-precision realizations was investigated for digital controllers implemented in block-floating-point format. The controller coefficient perturbation was analyzed resulting from using finite word length (FWL) block-floating-point representation scheme. A block-floating-point FWL closed-loop stability measure was derived which considers both the dynamic range and precision. To facilitate the design of optimal finite-precision controller realizations, a computationally tractable block-floating-point FWL closed-loop stability measure was then introduced and the method of computing the value of this measure for a given controller realization was developed. The optimal controller realization is defined as the solution that maximizes the corresponding measure, and a numerical optimization approach was adopted to solve the resulting optimal realization problem. A numerical example was used to illustrate the design procedure and to compare the optimal controller realization with the initial realization.
Automatic Estimation of Verified Floating-Point Round-Off Errors via Static Analysis
NASA Technical Reports Server (NTRS)
Moscato, Mariano; Titolo, Laura; Dutle, Aaron; Munoz, Cesar A.
2017-01-01
This paper introduces a static analysis technique for computing formally verified round-off error bounds of floating-point functional expressions. The technique is based on a denotational semantics that computes a symbolic estimation of floating-point round-off errors along with a proof certificate that ensures its correctness. The symbolic estimation can be evaluated on concrete inputs using rigorous enclosure methods to produce formally verified numerical error bounds. The proposed technique is implemented in the prototype research tool PRECiSA (Program Round-off Error Certifier via Static Analysis) and used in the verification of floating-point programs of interest to NASA.
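The underlying style of reasoning, propagating the standard relative-error bound per operation through an expression, can be sketched by hand for fl(fl(x*y) + z) with |x|, |y|, |z| bounded by M. This is a simple worst-case derivation for illustration, not PRECiSA's denotational semantics:

```python
u = 2.0 ** -53  # unit roundoff for IEEE 754 binary64

M = 1024.0                           # assumed bound on |x|, |y|, |z|
err_mul = M * M * u                  # |fl(x*y) - x*y| <= |x*y| * u <= M*M*u
# the addition rounds a value of magnitude at most M*M + M + err_mul:
err_add = (M * M + M + err_mul) * u
bound = err_mul + err_add            # total round-off error bound
print(bound)
```

Tools like the one described tighten such bounds with symbolic per-variable ranges and certify them mechanically; the hand version above only shows the shape of the computation.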
NASA Astrophysics Data System (ADS)
Archimedes; Heath, Thomas L.
2009-09-01
Part I. Introduction: 1. Archimedes; 2. Manuscripts and principal editions; 3. Relation of Archimedes to his predecessors; 4. Arithmetic in Archimedes; 5. On the problem known as neuseis; 6. Cubic equations; 7. Anticipations by Archimedes of the integral calculus; 8. The terminology of Archimedes; Part II. The Works of Archimedes: 1. On the sphere and cylinder; 2. Measurement of a circle; 3. On conoids and spheroids; 4. On spirals; 5. On the equilibrium of planes; 6. The sand-reckoner; 7. Quadrature of the parabola; 8. On floating bodies; 9. Book of lemmas; 10. The cattle-problem.
Trajectory NG: portable, compressed, general molecular dynamics trajectories.
Spångberg, Daniel; Larsson, Daniel S D; van der Spoel, David
2011-10-01
We present general algorithms for the compression of molecular dynamics trajectories. The standard ways to store MD trajectories as text or as raw binary floating point numbers result in very large files when efficient simulation programs are used on supercomputers. Our algorithms are based on the observation that differences in atomic coordinates/velocities, in either time or space, are generally smaller than the absolute values of the coordinates/velocities. Also, it is often possible to store values at a lower precision. We apply several compression schemes to compress the resulting differences further. The most efficient algorithms developed here use a block sorting algorithm in combination with Huffman coding. Depending on the frequency of storage of frames in the trajectory, either space, time, or combinations of space and time differences are usually the most efficient. We compare the efficiency of our algorithms with each other and with other algorithms present in the literature for various systems: liquid argon, water, a virus capsid solvated in 15 mM aqueous NaCl, and solid magnesium oxide. We perform tests to determine how much precision is necessary to obtain accurate structural and dynamic properties, as well as benchmark a parallelized implementation of the algorithms. We obtain compression ratios (compared to single precision floating point) of 1:3.3-1:35 depending on the frequency of storage of frames and the system studied.
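The core idea, quantize to the precision actually needed, difference in time or space, then entropy-code the small residuals, can be sketched in a few lines. Here `zlib` stands in for the paper's block-sorting plus Huffman stage (an assumption for brevity, not their algorithm), and `scale=1000` assumes 0.001-unit precision is sufficient:

```python
import struct
import zlib

def compress_frame(coords, prev=None, scale=1000):
    # quantize to 1/scale; deltas vs. the previous frame are small integers
    q = [round(c * scale) for c in coords]
    ref = [round(c * scale) for c in prev] if prev is not None else [0] * len(q)
    deltas = [a - b for a, b in zip(q, ref)]
    return zlib.compress(struct.pack(f'<{len(deltas)}i', *deltas))

def decompress_frame(blob, prev=None, scale=1000):
    raw = zlib.decompress(blob)
    deltas = struct.unpack(f'<{len(raw) // 4}i', raw)
    ref = [round(c * scale) for c in prev] if prev is not None else [0] * len(deltas)
    return [(d + r) / scale for d, r in zip(deltas, ref)]
```

The roundtrip recovers each coordinate to within half a quantization step (0.0005 here), and temporal deltas compress far better than raw IEEE floats because successive MD frames differ little.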
Floating-point performance of ARM cores and their efficiency in classical molecular dynamics
NASA Astrophysics Data System (ADS)
Nikolskiy, V.; Stegailov, V.
2016-02-01
Supercomputing of the exascale era is going to be inevitably limited by power efficiency, and different possible CPU architectures are being considered. Recently the development of ARM processors has reached the point where their floating-point performance can be seriously considered for a range of scientific applications. In this work we present an analysis of the floating-point performance of the latest ARM cores and their efficiency for the algorithms of classical molecular dynamics.
Single-digit arithmetic processing—anatomical evidence from statistical voxel-based lesion analysis
Mihulowicz, Urszula; Willmes, Klaus; Karnath, Hans-Otto; Klein, Elise
2014-01-01
Different specific mechanisms have been suggested for solving single-digit arithmetic operations. However, the neural correlates underlying basic arithmetic (multiplication, addition, subtraction) are still under debate. In the present study, we systematically assessed single-digit arithmetic in a group of acute stroke patients (n = 45) with circumscribed left- or right-hemispheric brain lesions. Lesion sites significantly related to impaired performance were found only in the left-hemisphere damaged (LHD) group. Deficits in multiplication and addition were related to subcortical/white matter brain regions differing from those for subtraction tasks, corroborating the notion of distinct processing pathways for different arithmetic tasks. Additionally, our results further point to the importance of investigating fiber pathways in numerical cognition. PMID:24847238
NASA Astrophysics Data System (ADS)
Zinke, Stephan
2017-02-01
Memory sensitive applications for remote sensing data require memory-optimized data types in remote sensing products. Hierarchical Data Format version 5 (HDF5) offers user defined floating point numbers and integers and the n-bit filter to create data types optimized for memory consumption. The European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT) applies a compaction scheme to the disseminated products of the Day and Night Band (DNB) data of the Suomi National Polar-orbiting Partnership (S-NPP) satellite's instrument Visible Infrared Imager Radiometer Suite (VIIRS) through the EUMETSAT Advanced Retransmission Service, converting the original 32-bit floating point numbers to user defined floating point numbers in combination with the n-bit filter for the radiance dataset of the product. The radiance dataset requires a floating point representation due to the high dynamic range of the DNB. A compression factor of 1.96 is reached by using an automatically determined exponent size and an 8-bit trailing significand, thus reducing the bandwidth requirements for dissemination. It is shown how the parameters needed for user defined floating point numbers are derived or determined automatically based on the data present in a product.
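The EUMETSAT scheme sizes the exponent automatically from the data; as a simpler illustration of the same mechanics, the sketch below keeps the full 8-bit binary32 exponent and truncates the significand to 8 trailing bits (sign + exponent + significand = 17 bits, so this is an assumption-laden stand-in, not the operational format):

```python
import struct

FRAC_KEEP = 8              # trailing significand bits retained
DROP = 23 - FRAC_KEEP      # low fraction bits discarded from binary32

def pack_small(x):
    # reinterpret the float32 bit pattern, then drop the low fraction bits:
    # sign(1) + exponent(8) + trailing significand(8) = 17 bits
    bits = struct.unpack('<I', struct.pack('<f', x))[0]
    return bits >> DROP

def unpack_small(p):
    # zero-fill the discarded bits and reinterpret as float32
    return struct.unpack('<f', struct.pack('<I', p << DROP))[0]
```

For normal numbers, truncating to an 8-bit trailing significand keeps the relative error below 2**-8 (about 0.4%), which is the precision/size trade-off the radiance compaction exploits.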
Heinrich, Mattias P; Blendowski, Max; Oktay, Ozan
2018-05-30
Deep convolutional neural networks (DCNN) are currently ubiquitous in medical imaging. While their versatility and high-quality results for common image analysis tasks, including segmentation, localisation and prediction, are astonishing, the large representational power comes at the cost of highly demanding computational effort. This limits their practical applications for image-guided interventions and diagnostic (point-of-care) support using mobile devices without graphics processing units (GPU). We propose a new scheme that approximates both trainable weights and neural activations in deep networks by ternary values and tackles the open question of backpropagation when dealing with non-differentiable functions. Our solution enables the removal of the expensive floating-point matrix multiplications throughout any convolutional neural network and replaces them by energy- and time-preserving binary operators and population counts. We evaluate our approach for the segmentation of the pancreas in CT. Here, our ternary approximation within a fully convolutional network leads to more than 90% memory reductions and high accuracy (without any post-processing), with a Dice overlap of 71.0% that comes close to the one obtained when using networks with high-precision weights and activations. We further provide a concept for sub-second inference without GPUs and demonstrate significant improvements in comparison with binary quantisation and without our proposed ternary hyperbolic tangent continuation. We present a key enabling technique for highly efficient DCNN inference without GPUs that will help to bring the advances of deep learning to practical clinical applications. It also holds great promise for improving accuracies in large-scale medical data retrieval.
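The replacement of floating-point multiplication by binary operators and population counts can be sketched for the simpler binary case (the paper's ternary scheme additionally masks out zero weights; this is an illustrative reduction, not the authors' implementation):

```python
def binarize(v):
    # map reals to signs {-1, +1}, packed one bit per element (bit=1 means +1)
    bits = 0
    for i, x in enumerate(v):
        if x >= 0:
            bits |= 1 << i
    return bits

def binary_dot(a_bits, b_bits, n):
    # XNOR-popcount dot product over {-1, +1}^n:
    # matching bits contribute +1, mismatches -1, so dot = 2*matches - n
    matches = bin(~(a_bits ^ b_bits) & ((1 << n) - 1)).count('1')
    return 2 * matches - n
```

On hardware, the XNOR and popcount replace n floating-point multiply-adds with a couple of word-wide bit operations, which is where the energy and memory savings come from.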
On the use of inexact, pruned hardware in atmospheric modelling
Düben, Peter D.; Joven, Jaume; Lingamneni, Avinash; McNamara, Hugh; De Micheli, Giovanni; Palem, Krishna V.; Palmer, T. N.
2014-01-01
Inexact hardware design, which advocates trading the accuracy of computations in exchange for significant savings in area, power and/or performance of computing hardware, has received increasing prominence in several error-tolerant application domains, particularly those involving perceptual or statistical end-users. In this paper, we evaluate inexact hardware for its applicability in weather and climate modelling. We expand previous studies on inexact techniques, in particular probabilistic pruning, to floating point arithmetic units and derive several simulated set-ups of pruned hardware with reasonable levels of error for applications in atmospheric modelling. The set-up is tested on the Lorenz ‘96 model, a toy model for atmospheric dynamics, using software emulation for the proposed hardware. The results show that large parts of the computation tolerate the use of pruned hardware blocks without major changes in the quality of short- and long-time diagnostics, such as forecast errors and probability density functions. This could open the door to significant savings in computational cost and to higher resolution simulations with weather and climate models. PMID:24842031
NASA Astrophysics Data System (ADS)
Fellman, Ronald D.; Kaneshiro, Ronald T.; Konstantinides, Konstantinos
1990-03-01
The authors present the design and evaluation of an architecture for a monolithic, programmable, floating-point digital signal processor (DSP) for instrumentation applications. An investigation of the most commonly used algorithms in instrumentation led to a design that satisfies the requirements for high computational and I/O (input/output) throughput. In the arithmetic unit, a 16- x 16-bit multiplier and a 32-bit accumulator provide the capability for single-cycle multiply/accumulate operations, and three format adjusters automatically adjust the data format for increased accuracy and dynamic range. An on-chip I/O unit is capable of handling data block transfers through a direct memory access port and real-time data streams through a pair of parallel I/O ports. I/O operations and program execution are performed in parallel. In addition, the processor includes two data memories with independent addressing units, a microsequencer with instruction RAM, and multiplexers for internal data redirection. The authors also present the structure and implementation of a design environment suitable for the algorithmic, behavioral, and timing simulation of a complete DSP system. Various benchmarking results are reported.
Active vibration control of a full scale aircraft wing using a reconfigurable controller
NASA Astrophysics Data System (ADS)
Prakash, Shashikala; Renjith Kumar, T. G.; Raja, S.; Dwarakanathan, D.; Subramani, H.; Karthikeyan, C.
2016-01-01
This work highlights the design of a Reconfigurable Active Vibration Control (AVC) System for aircraft structures using adaptive techniques. The AVC system with a multichannel capability is realized using the Filtered-X Least Mean Square (FxLMS) algorithm on a Xilinx Virtex-4 Field Programmable Gate Array (FPGA) platform in Very High Speed Integrated Circuits Hardware Description Language (VHDL). The HDL design is based on a Finite State Machine (FSM) model with floating point Intellectual Property (IP) cores for arithmetic operations. The use of an FPGA makes it possible to modify the system parameters even during runtime, depending on changes in the user's requirements. The locations of the control actuators are optimized based on a dynamic modal strain approach using a genetic algorithm (GA). The developed system has been successfully deployed for AVC testing of the full-scale wing of an all-composite two-seater transport aircraft. Several closed-loop configurations, such as single-channel and multi-channel control, have been tested. The experimental results from the studies presented here are very encouraging, and demonstrate the usefulness of the system's reconfigurability for real-time applications.
Accelerating scientific computations with mixed precision algorithms
NASA Astrophysics Data System (ADS)
Baboulin, Marc; Buttari, Alfredo; Dongarra, Jack; Kurzak, Jakub; Langou, Julie; Langou, Julien; Luszczek, Piotr; Tomov, Stanimire
2009-12-01
On modern architectures, the performance of 32-bit operations is often at least twice as fast as the performance of 64-bit operations. By using a combination of 32-bit and 64-bit floating point arithmetic, the performance of many dense and sparse linear algebra algorithms can be significantly enhanced while maintaining the 64-bit accuracy of the resulting solution. The approach presented here can apply not only to conventional processors but also to other technologies such as Field Programmable Gate Arrays (FPGA), Graphical Processing Units (GPU), and the STI Cell BE processor. Results on modern processor architectures and the STI Cell BE are presented. Program summary. Program title: ITER-REF. Catalogue identifier: AECO_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECO_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 7211. No. of bytes in distributed program, including test data, etc.: 41 862. Distribution format: tar.gz. Programming language: FORTRAN 77. Computer: desktop, server. Operating system: Unix/Linux. RAM: 512 Mbytes. Classification: 4.8. External routines: BLAS (optional). Nature of problem: On modern architectures, the performance of 32-bit operations is often at least twice as fast as the performance of 64-bit operations. By using a combination of 32-bit and 64-bit floating point arithmetic, the performance of many dense and sparse linear algebra algorithms can be significantly enhanced while maintaining the 64-bit accuracy of the resulting solution. Solution method: Mixed precision algorithms stem from the observation that, in many cases, a single precision solution of a problem can be refined to the point where double precision accuracy is achieved.
A common approach to the solution of linear systems, either dense or sparse, is to perform the LU factorization of the coefficient matrix using Gaussian elimination. First, the coefficient matrix A is factored into the product of a lower triangular matrix L and an upper triangular matrix U. Partial row pivoting is in general used to improve numerical stability resulting in a factorization PA=LU, where P is a permutation matrix. The solution for the system is achieved by first solving Ly=Pb (forward substitution) and then solving Ux=y (backward substitution). Due to round-off errors, the computed solution, x, carries a numerical error magnified by the condition number of the coefficient matrix A. In order to improve the computed solution, an iterative process can be applied, which produces a correction to the computed solution at each iteration, which then yields the method that is commonly known as the iterative refinement algorithm. Provided that the system is not too ill-conditioned, the algorithm produces a solution correct to the working precision. Running time: seconds/minutes
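The iterative refinement loop described above can be sketched with NumPy, using a float32 solve as a stand-in for the single-precision LU factorization (an illustrative sketch, not the ITER-REF code):

```python
import numpy as np

def mixed_precision_solve(A, b, iters=10, tol=1e-12):
    # solve in single precision (cheap), refine in double (accurate)
    A32 = A.astype(np.float32)
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(iters):
        r = b - A @ x                        # residual in double precision
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        # correction from the cheap single-precision solver
        d = np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
        x += d
    return x
```

Note that np.linalg.solve refactors the matrix on every call; a real implementation reuses the single-precision LU factors, which is where the 2x speedup comes from. Provided the system is not too ill-conditioned relative to single precision, a handful of iterations recovers working (double) precision accuracy, as the abstract states.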
Li, Xin-Wei; Shao, Xiao-Mei; Tan, Ke-Ping; Fang, Jian-Qiao
2013-04-01
To compare the efficacy difference in the treatment of supraspinous ligament injury between floating acupuncture at Tianying point and the conventional warm needling therapy. Ninety patients were randomized into a floating acupuncture group and a warm needling group, 45 cases in each one. In the floating acupuncture group, the floating needling technique was adopted at Tianying point. In the warm needling group, the conventional warm needling therapy was applied at Tianying point as the chief point in the prescription. The treatment was given 3 times a week and 6 treatments made one session. The visual analogue scale (VAS) was adopted for pain comparison before and after treatment of the patients in the two groups, and the efficacy in the two groups was assessed. The curative and remarkably effective rate was 81.8% (36/44) in the floating acupuncture group and the total effective rate was 95.5% (42/44), which were superior to 44.2% (19/43) and 79.1% (34/43) in the warm needling group respectively (P < 0.01, P < 0.05). The VAS score was lower as compared with that before treatment in both groups (both P < 0.01), and the score in the floating acupuncture group was lower than that in the warm needling group after treatment (P < 0.01). Thirty-six cases were cured or remarkably effective in the floating acupuncture group after treatment, of which 28 cases were cured or remarkably effective within 3 treatments, accounting for 77.8% (28/36), which was apparently higher than 26.3% (5/19) in the warm needling group (P < 0.01). Floating acupuncture at Tianying point achieves quick and definite efficacy for supraspinous ligament injury and presents an apparent analgesic effect. The efficacy is superior to the conventional warm needling therapy.
NASA Technical Reports Server (NTRS)
Brown, R. A.
1986-01-01
This research program focuses on analysis of the transport mechanisms in solidification processes, especially one of interest to the Microgravity Sciences and Applications Program of NASA. Research during the last year has focused on analysis of the dynamics of the floating zone process for growth of small-scale crystals, on studies of the effect of applied magnetic fields on convection and solute segregation in directional solidification, and on the dynamics of microscopic cell formation in two-dimensional solidification of binary alloys. Significant findings are given.
Extending the BEAGLE library to a multi-FPGA platform
2013-01-01
Background Maximum Likelihood (ML)-based phylogenetic inference using Felsenstein’s pruning algorithm is a standard method for estimating the evolutionary relationships amongst a set of species based on DNA sequence data, and is used in popular applications such as RAxML, PHYLIP, GARLI, BEAST, and MrBayes. The Phylogenetic Likelihood Function (PLF) and its associated scaling and normalization steps comprise the computational kernel for these tools. These computations are data intensive but contain fine grain parallelism that can be exploited by coprocessor architectures such as FPGAs and GPUs. A general purpose API called BEAGLE has recently been developed that includes optimized implementations of Felsenstein’s pruning algorithm for various data parallel architectures. In this paper, we extend the BEAGLE API to a multiple Field Programmable Gate Array (FPGA)-based platform called the Convey HC-1. Results The core calculation of our implementation, which includes both the phylogenetic likelihood function (PLF) and the tree likelihood calculation, has an arithmetic intensity of 130 floating-point operations per 64 bytes of I/O, or 2.03 ops/byte. Its performance can thus be calculated as a function of the host platform’s peak memory bandwidth and the implementation’s memory efficiency, as 2.03 × peak bandwidth × memory efficiency. Our FPGA-based platform has a peak bandwidth of 76.8 GB/s and our implementation achieves a memory efficiency of approximately 50%, which gives an average throughput of 78 Gflops. This represents a ~40X speedup when compared with BEAGLE’s CPU implementation on a dual Xeon 5520 and 3X speedup versus BEAGLE’s GPU implementation on a Tesla T10 GPU for very large data sizes. The power consumption is 92 W, yielding a power efficiency of 1.7 Gflops per Watt. 
Conclusions The use of data parallel architectures to achieve high performance for likelihood-based phylogenetic inference requires high memory bandwidth and a design methodology that emphasizes high memory efficiency. To achieve this objective, we integrated 32 pipelined processing elements (PEs) across four FPGAs. For the design of each PE, we developed a specialized synthesis tool to generate a floating-point pipeline with resource and throughput constraints to match the target platform. We have found that using low-latency floating-point operators can significantly reduce FPGA area and still meet timing requirement on the target platform. We found that this design methodology can achieve performance that exceeds that of a GPU-based coprocessor. PMID:23331707
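The throughput estimate in the abstract is a roofline-style product of three quantities and can be reproduced directly (all numbers taken from the abstract):

```python
ops_per_byte = 130 / 64        # arithmetic intensity: 130 flops per 64 bytes of I/O
peak_bw_gb_s = 76.8            # platform peak memory bandwidth, GB/s
mem_efficiency = 0.50          # fraction of peak bandwidth actually achieved

# throughput = intensity * peak bandwidth * memory efficiency
throughput_gflops = ops_per_byte * peak_bw_gb_s * mem_efficiency
# 2.03 * 76.8 * 0.5 gives roughly 78 Gflops, the reported average throughput
```

This kind of bandwidth-bound model is why the conclusion emphasizes memory efficiency over raw compute: for a fixed platform, the only term the implementation controls is the efficiency factor.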
Applications Performance on NAS Intel Paragon XP/S - 15#
NASA Technical Reports Server (NTRS)
Saini, Subhash; Simon, Horst D.; Copper, D. M. (Technical Monitor)
1994-01-01
The Numerical Aerodynamic Simulation (NAS) Systems Division received an Intel Touchstone Sigma prototype model Paragon XP/S-15 in February, 1993. The i860 XP microprocessor with an integrated floating point unit and operating in dual-instruction mode gives a peak performance of 75 million floating point operations (MFLOPS) per second for 64-bit floating point arithmetic. It is used in the Paragon XP/S-15 which has been installed at NAS, NASA Ames Research Center. The NAS Paragon has 208 nodes and its peak performance is 15.6 GFLOPS. Here, we report on early experience using the Paragon XP/S-15. We have tested its performance using both kernels and applications of interest to NAS. We have measured the performance of BLAS 1, 2 and 3, both assembly-coded and Fortran-coded, on the NAS Paragon XP/S-15. Furthermore, we have investigated the performance of a single node one-dimensional FFT, a distributed two-dimensional FFT and a distributed three-dimensional FFT. Finally, we measured the performance of the NAS Parallel Benchmarks (NPB) on the Paragon and compare it with the performance obtained on other highly parallel machines, such as the CM-5, CRAY T3D, IBM SP1, etc. In particular, we investigated the following issues, which can strongly affect the performance of the Paragon: a. Impact of the operating system: Intel currently uses as a default the operating system OSF/1 AD from the Open Software Foundation. Paging of the Open Software Foundation (OSF) server at 22 MB to make more memory available for the application degrades the performance. We found that when the limit of 26 MB per node out of the 32 MB available is reached, the application is paged out of main memory using virtual memory. When the application starts paging, the performance is considerably reduced. We found that dynamic memory allocation can help applications performance under certain circumstances. b.
Impact of data cache on the i860/XP: We measured the performance of the BLAS, both assembly-coded and Fortran-coded. We found that the measured performance of the assembly-coded BLAS is much less than what the memory bandwidth limitation would predict. The influence of the data cache on different sizes of vectors is also investigated using one-dimensional FFTs. c. Impact of processor layout: There are several different ways processors can be laid out within the two-dimensional grid of processors on the Paragon. We have used the FFT example to investigate performance differences based on processor layout.
Model-Checking with Edge-Valued Decision Diagrams
NASA Technical Reports Server (NTRS)
Roux, Pierre; Siminiceanu, Radu I.
2010-01-01
We describe an algebra of Edge-Valued Decision Diagrams (EVMDDs) to encode arithmetic functions and its implementation in a model checking library along with state-of-the-art algorithms for building the transition relation and the state space of discrete state systems. We provide efficient algorithms for manipulating EVMDDs and give upper bounds of the theoretical time complexity of these algorithms for all basic arithmetic and relational operators. We also demonstrate that the time complexity of the generic recursive algorithm for applying a binary operator on EVMDDs is no worse than that of Multi-Terminal Decision Diagrams. We have implemented a new symbolic model checker with the intention to represent in one formalism the best techniques available at the moment across a spectrum of existing tools: EVMDDs for encoding arithmetic expressions, identity-reduced MDDs for representing the transition relation, and the saturation algorithm for reachability analysis. We compare our new symbolic model checking EVMDD library with the widely used CUDD package and show that, in many cases, our tool is several orders of magnitude faster than CUDD.
NASA Technical Reports Server (NTRS)
Parkinson, J. B.; House, R. O.
1938-01-01
Tests were made in the NACA tank and in the NACA 7 by 10 foot wind tunnel on two models of transverse step floats and three models of pointed step floats considered to be suitable for use with single float seaplanes. The object of the program was the reduction of water resistance and spray of single float seaplanes without reducing the angle of dead rise believed to be necessary for the satisfactory absorption of the shock loads. The results indicated that all the models have less resistance and spray than the model of the Mark V float and that the pointed step floats are somewhat superior to the transverse step floats in these respects. Models 41-D, 61-A, and 73 were tested by the general method over a wide range of loads and speeds. The results are presented in the form of curves and charts for use in design calculations.
NASA Astrophysics Data System (ADS)
Elshall, A. S.; Ye, M.; Niu, G. Y.; Barron-Gafford, G.
2016-12-01
Bayesian multimodel inference is increasingly being used in hydrology. Estimating Bayesian model evidence (BME) is of central importance in many Bayesian multimodel analyses such as Bayesian model averaging and model selection. BME is the overall probability of the model in reproducing the data, accounting for the trade-off between goodness-of-fit and model complexity. Yet estimating BME is challenging, especially for high dimensional problems with complex sampling spaces. Estimating BME using Monte Carlo numerical methods is preferred, as these methods yield higher accuracy than semi-analytical solutions (e.g. Laplace approximations, BIC, KIC, etc.). However, numerical methods are prone to the numerical demons arising from arithmetic underflow and round-off errors. Although a few studies have alluded to this issue, to our knowledge this is the first study that illustrates these numerical demons. We show that finite-precision arithmetic can become a threshold on likelihood values and on the Metropolis acceptance ratio, which results in trimming parameter regions (when the likelihood function is less than the smallest floating point number that a computer can represent) and in corrupting the empirical measures of the random states of the MCMC sampler (when using the log-likelihood function). We consider two of the most powerful numerical estimators of BME, the path sampling method of thermodynamic integration (TI) and the importance sampling method of steppingstone sampling (SS). We also consider the two most widely used numerical estimators, the prior sampling arithmetic mean (AM) and the posterior sampling harmonic mean (HM). We investigate the vulnerability of these four estimators to the numerical demons. Interestingly, the most biased estimator, namely the HM, turned out to be the least vulnerable.
While it is generally assumed that AM is a bias-free estimator that will approximate the true BME given sufficient computational effort, we show that arithmetic underflow can hamper AM, resulting in severe underestimation of BME. TI turned out to be the most vulnerable, resulting in BME overestimation. Finally, we show how SS can be largely invariant to rounding errors, yielding the most accurate and computationally efficient results. These research results are useful for MC simulations to estimate Bayesian model evidence.
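The likelihood underflow the authors describe is the classic motivation for the log-sum-exp shift. A minimal sketch of the failure and the standard remedy (not the TI/SS estimators themselves; the log-likelihood values are invented for illustration):

```python
import math

def log_mean_exp(log_vals):
    # stable log of the arithmetic mean of exp(log_vals):
    # shift by the maximum so at least one exponent is exactly 0
    m = max(log_vals)
    return m + math.log(sum(math.exp(v - m) for v in log_vals) / len(log_vals))

log_liks = [-800.0, -802.0, -805.0]   # plausible log-likelihood magnitudes

# naive arithmetic mean: every exp() underflows below the smallest
# representable double (~4.9e-324), so the estimate collapses to 0.0
naive = sum(math.exp(v) for v in log_liks) / len(log_liks)

stable = log_mean_exp(log_liks)       # finite, accurate log-domain estimate
```

The collapse of `naive` to exactly 0.0 is the trimming effect described in the abstract: any sample whose likelihood underflows contributes nothing, silently biasing the estimator, while the shifted version stays finite.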
The Unified Floating Point Vector Coprocessor for Reconfigurable Hardware
NASA Astrophysics Data System (ADS)
Kathiara, Jainik
There has been an increased interest recently in using embedded cores on FPGAs. Many of the applications that make use of these cores have floating point operations. Due to the complexity and expense of floating point hardware, these algorithms are usually converted to fixed point operations or implemented using floating-point emulation in software. As the technology advances, more and more homogeneous computational resources and fixed function embedded blocks are added to FPGAs, and hence implementation of floating point hardware becomes a feasible option. In this research we have implemented a high performance, autonomous floating point vector Coprocessor (FPVC) that works independently within an embedded processor system. We have presented a unified approach to vector and scalar computation, using a single register file for both scalar operands and vector elements. The hybrid vector/SIMD computational model of the FPVC results in greater overall performance for most applications along with improved peak performance compared to other approaches. By parameterizing vector length and the number of vector lanes, we can design an application-specific FPVC and take optimal advantage of the FPGA fabric. For this research we have also begun designing a software library of computational kernels, each of which adapts to the FPVC's configuration and provides maximal performance. The kernels implemented are from the area of linear algebra and include matrix multiplication and QR and Cholesky decomposition. We have demonstrated the operation of the FPVC on a Xilinx Virtex 5 using the embedded PowerPC.
High-performance floating-point image computing workstation for medical applications
NASA Astrophysics Data System (ADS)
Mills, Karl S.; Wong, Gilman K.; Kim, Yongmin
1990-07-01
The medical imaging field relies increasingly on imaging and graphics techniques in diverse applications with needs similar to (or more stringent than) those of the military, industrial and scientific communities. However, most image processing and graphics systems available for use in medical imaging today are either expensive, specialized, or in most cases both. High performance imaging and graphics workstations which can provide real-time results for a number of applications, while maintaining affordability and flexibility, can facilitate the application of digital image computing techniques in many different areas. This paper describes the hardware and software architecture of a medium-cost floating-point image processing and display subsystem for the NeXT computer, and its applications as a medical imaging workstation. Medical imaging applications of the workstation include use in a Picture Archiving and Communications System (PACS), in multimodal image processing and 3-D graphics workstation for a broad range of imaging modalities, and as an electronic alternator utilizing its multiple monitor display capability and large and fast frame buffer. The subsystem provides a 2048 x 2048 x 32-bit frame buffer (16 Mbytes of image storage) and supports both 8-bit gray scale and 32-bit true color images. When used to display 8-bit gray scale images, up to four different 256-color palettes may be used for each of four 2K x 2K x 8-bit image frames. Three of these image frames can be used simultaneously to provide pixel selectable region of interest display. A 1280 x 1024 pixel screen with 1:1 aspect ratio can be windowed into the frame buffer for display of any portion of the processed image or images. In addition, the system provides hardware support for integer zoom and an 82-color cursor. This subsystem is implemented on an add-in board occupying a single slot in the NeXT computer.
Up to three boards may be added to the NeXT for multiple display capability (e.g., three 1280 x 1024 monitors, each with a 16-Mbyte frame buffer). Each add-in board provides an expansion connector to which an optional image computing coprocessor board may be added. Each coprocessor board supports up to four processors for a peak performance of 160 MFLOPS. The coprocessors can execute programs from external high-speed microcode memory as well as built-in internal microcode routines. The internal microcode routines provide support for 2-D and 3-D graphics operations, matrix and vector arithmetic, and image processing in integer, IEEE single-precision floating point, or IEEE double-precision floating point. In addition to providing a library of C functions which links the NeXT computer to the add-in board and supports its various operational modes, algorithms and medical imaging application programs are being developed and implemented for image display and enhancement. As an extension to the built-in algorithms of the coprocessors, 2-D Fast Fourier Transform (FFT), 2-D Inverse FFT, convolution, warping and other algorithms (e.g., Discrete Cosine Transform) which exploit the parallel architecture of the coprocessor board are being implemented.
ERIC Educational Resources Information Center
Gerhardt, Ira
2015-01-01
An experiment was conducted over three recent semesters of an introductory calculus course to test whether it was possible to quantify the effect that difficulty with basic algebraic and arithmetic computation had on individual performance. Points lost during the term were classified as being due to either algebraic and arithmetic mistakes…
A Flexible VHDL Floating Point Module for Control Algorithm Implementation in Space Applications
NASA Astrophysics Data System (ADS)
Padierna, A.; Nicoleau, C.; Sanchez, J.; Hidalgo, I.; Elvira, S.
2012-08-01
The implementation of control loops for space applications is an area with great potential. However, the characteristics of these systems, such as their wide dynamic range of numeric values, make fixed-point algorithms inadequate. Because the generic chips available for processing floating-point data are, in general, not qualified to operate in space environments, and because using an IP module in a space-qualified FPGA/ASIC is not viable due to the low number of logic cells available in this type of device, a viable alternative must be found. For these reasons, this paper presents a VHDL Floating Point Module. This proposal allows floating-point algorithms to be designed and executed with an occupancy low enough to be implemented in FPGAs/ASICs qualified for space environments.
Genetic analysis of floating Enteromorpha prolifera in the Yellow Sea with AFLP marker
NASA Astrophysics Data System (ADS)
Liu, Cui; Zhang, Jing; Sun, Xiaoyu; Li, Jian; Zhang, Xi; Liu, Tao
2011-09-01
Extremely large accumulations of the green alga Enteromorpha prolifera have floated along China's coastal region of the Yellow Sea ever since the summer of 2008. Amplified Fragment Length Polymorphism (AFLP) analysis was applied to assess the genetic diversity of, and relationships among, E. prolifera samples collected from 9 affected areas of the Yellow Sea. Two hundred reproducible fragments were generated with 8 AFLP primer combinations, of which 194 (97%) were polymorphic. The average Nei's genetic diversity, the coefficient of genetic differentiation (Gst), and the average gene flow estimated from Gst in the 9 populations were 0.4018, 0.6404, and 0.2807, respectively. Cluster analysis based on the unweighted pair-group method with arithmetic averages (UPGMA) showed that the genetic relationships within one population or among different populations were all related to their collection locations and sampling times. Large genetic differentiation was detected among the populations. The E. prolifera originated from different areas and was undergoing a course of mixing.
40 CFR 426.50 - Applicability; description of the float glass manufacturing subcategory.
Code of Federal Regulations, 2011 CFR
2011-07-01
... float glass manufacturing subcategory. 426.50 Section 426.50 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS GLASS MANUFACTURING POINT SOURCE CATEGORY Float Glass Manufacturing Subcategory § 426.50 Applicability; description of the float glass...
40 CFR 426.50 - Applicability; description of the float glass manufacturing subcategory.
Code of Federal Regulations, 2010 CFR
2010-07-01
... float glass manufacturing subcategory. 426.50 Section 426.50 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS GLASS MANUFACTURING POINT SOURCE CATEGORY Float Glass Manufacturing Subcategory § 426.50 Applicability; description of the float glass...
NASA Astrophysics Data System (ADS)
Schudlo, Larissa C.; Chau, Tom
2014-02-01
Objective. Near-infrared spectroscopy (NIRS) has recently gained attention as a modality for brain-computer interfaces (BCIs), which may serve as an alternative access pathway for individuals with severe motor impairments. For NIRS-BCIs to be used as a real communication pathway, reliable online operation must be achieved. Yet, only a limited number of studies have been conducted online to date. These few studies were carried out under a synchronous paradigm and did not accommodate an unconstrained resting state, limiting their practical clinical applicability. Furthermore, the potential discriminative power of spatiotemporal characteristics of activation has yet to be considered in an online NIRS system. Approach. In this study, we developed and evaluated an online system-paced NIRS-BCI which was driven by a mental arithmetic activation task and accommodated an unconstrained rest state. With a dual-wavelength, frequency-domain near-infrared spectrometer, measurements were acquired over nine sites of the prefrontal cortex, while ten able-bodied participants selected letters from an on-screen scanning keyboard via intentionally controlled brain activity (using mental arithmetic). Participants were provided dynamic NIR topograms as continuous visual feedback of their brain activity, as well as binary feedback of the BCI's decision (i.e. whether the letter was selected or not). To classify the hemodynamic activity, temporal features extracted from the NIRS signals and spatiotemporal features extracted from the dynamic NIR topograms were used in a majority vote combination of multiple linear classifiers. Main results. An overall online classification accuracy of 77.4 ± 10.5% was achieved across all participants. The binary feedback was found to be very useful during BCI use, while not all participants found value in the continuous feedback provided. Significance.
These results demonstrate that mental arithmetic is a potent mental task for driving an online system-paced NIRS-BCI. BCI feedback that reflects the classifier's decision has the potential to improve user performance. The proposed system can provide a framework for future online NIRS-BCI development and testing.
NASA Astrophysics Data System (ADS)
Ould Bachir, Tarek
The real-time simulation of electrical networks has gained vivid industrial interest in recent years, motivated by the substantial development cost reduction that such a prototyping approach can offer. Real-time simulation allows the progressive inclusion of real hardware during its development, allowing its testing under realistic conditions. However, CPU-based simulations suffer from certain limitations, such as the difficulty of reaching time-steps of a few microseconds, an important challenge brought by modern power converters. Hence, industrial practitioners adopted the FPGA as a platform of choice for the implementation of calculation engines dedicated to the rapid real-time simulation of electrical networks. The reconfigurable technology broke the 5 kHz switching frequency barrier that is characteristic of CPU-based simulations. Moreover, FPGA-based real-time simulation offers many advantages, including the reduced latency of the simulation loop obtained thanks to direct access to sensors and actuators. The fixed-point format is paradigmatic to FPGA-based digital signal processing. However, the format imposes a time penalty in the development process, since the designer has to assess the required precision for all model variables. This fact has brought an important research effort on the use of the floating-point format for the simulation of electrical networks. One of the main challenges in the use of the floating-point format is the long latency required by the elementary arithmetic operators, particularly when an adder is used as an accumulator, an important building block for the implementation of integration rules such as the trapezoidal method. Hence, single-cycle floating-point accumulation forms the core of this research work. Our results help build such operators as accumulators, multiply-accumulators (MACs), and dot-product (DP) operators. These operators play a key role in the implementation of the proposed calculation engines.
Therefore, this thesis contributes to the realm of FPGA-based real-time simulation in many ways. The research work proposes a new summation algorithm, a generalization of the so-called self-alignment technique; the new formulation is broader and simpler in both its expression and its hardware implementation. Our research helps formulate criteria that guarantee good accuracy, the criteria being established on a theoretical as well as an empirical basis. Moreover, the thesis offers a comprehensive analysis of the use of the redundant high-radix carry-save (HRCS) format, which is used to perform rapid additions of large mantissas. Two new HRCS operators are also proposed, namely an endomorphic adder and an HRCS-to-conventional converter. Once the means to single-cycle accumulation is defined as a combination of the self-alignment technique and the HRCS format, the research focuses on the FPGA implementation of SIMD calculation engines using parallel floating-point MACs or DPs. The proposed operators are characterized by low latencies, allowing the engines to reach very low time-steps. The document finally discusses power electronic circuit modelling, and concludes with the presentation of a versatile calculation engine capable of simulating power converters with arbitrary topologies and up to 24 switches, while achieving time-steps below 1 μs and allowing switching frequencies in the range of tens of kilohertz. The latter realization has led to the commercialization of a product by our industrial partner.
Gschwind, Michael K
2013-04-16
Mechanisms for generating and executing programs for a floating point (FP) only single instruction multiple data (SIMD) instruction set architecture (ISA) are provided. A computer program product comprising a computer recordable medium having a computer readable program recorded thereon is provided. The computer readable program, when executed on a computing device, causes the computing device to receive one or more instructions and execute the one or more instructions using logic in an execution unit of the computing device. The logic implements a floating point (FP) only single instruction multiple data (SIMD) instruction set architecture (ISA), based on data stored in a vector register file of the computing device. The vector register file is configured to store both scalar and floating point values as vectors having a plurality of vector elements.
Two-bit trinary full adder design based on restricted signed-digit numbers
NASA Astrophysics Data System (ADS)
Ahmed, J. U.; Awwal, A. A. S.; Karim, M. A.
1994-08-01
A 2-bit trinary full adder using a restricted set of a modified signed-digit trinary numeric system is designed. When cascaded together to design a multi-bit adder machine, the resulting system is able to operate at a speed independent of the size of the operands. An optical non-holographic content addressable memory based on binary coded arithmetic is considered for implementing the proposed adder.
Lithium Niobate Arithmetic Logic Unit
1991-03-01
The report compares different division methods, discusses their applicability to simple bit-serial implementation, and presents several designs, building on signed binary multiplication techniques such as Booth's.
Improving energy efficiency in handheld biometric applications
NASA Astrophysics Data System (ADS)
Hoyle, David C.; Gale, John W.; Schultz, Robert C.; Rakvic, Ryan N.; Ives, Robert W.
2012-06-01
With improved smartphone and tablet technology, it is becoming increasingly feasible to implement powerful biometric recognition algorithms on portable devices. Typical iris recognition algorithms, such as Ridge Energy Direction (RED), utilize two-dimensional convolution in their implementation. This paper explores the energy consumption implications of 12 different methods of implementing two-dimensional convolution on a portable device. Typically, convolution is implemented using floating point operations. If a given algorithm implemented integer convolution vice floating point convolution, it could drastically reduce the energy consumed by the processor. The 12 methods compared include 4 major categories: Integer C, Integer Java, Floating Point C, and Floating Point Java. Each major category is further divided into 3 implementations: variable size looped convolution, static size looped convolution, and unrolled looped convolution. All testing was performed using the HTC Thunderbolt with energy measured directly using a Tektronix TDS5104B Digital Phosphor oscilloscope. Results indicate that energy savings as high as 75% are possible by using Integer C versus Floating Point C. Considering the relative proportion of processing time that convolution is responsible for in a typical algorithm, the savings in energy would likely result in significantly greater time between battery charges.
40 CFR 63.1063 - Floating roof requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... the point of refloating the floating roof shall be continuous and shall be performed as soon as... 40 Protection of Environment 10 2010-07-01 2010-07-01 false Floating roof requirements. 63.1063...) National Emission Standards for Storage Vessels (Tanks)-Control Level 2 § 63.1063 Floating roof...
50 CFR 679.94 - Economic data report (EDR) for the Amendment 80 sector.
Code of Federal Regulations, 2010 CFR
2010-10-01
...: NMFS, Alaska Fisheries Science Center, Economic Data Reports, 7600 Sand Point Way NE, F/AKC2, Seattle... Operation Description of code Code NMFS Alaska region ADF&G FCP Catcher/processor Floating catcher processor. FLD Mothership Floating domestic mothership. IFP Stationary Floating Processor Inshore floating...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hirata, So
2003-11-20
We develop a symbolic manipulation program and program generator (Tensor Contraction Engine or TCE) that automatically derives the working equations of a well-defined model of second-quantized many-electron theories and synthesizes efficient parallel computer programs on the basis of these equations. Provided an ansatz of a many-electron theory model, TCE performs valid contractions of creation and annihilation operators according to Wick's theorem, consolidates identical terms, and reduces the expressions into the form of multiple tensor contractions acted on by permutation operators. Subsequently, it determines the binary contraction order for each multiple tensor contraction with the minimal operation and memory cost, factorizes common binary contractions (defines intermediate tensors), and identifies reusable intermediates. The resulting ordered list of binary tensor contractions, additions, and index permutations is translated into an optimized program that is combined with the NWChem and UTChem computational chemistry software packages. The programs synthesized by TCE take advantage of spin symmetry, Abelian point-group symmetry, and index permutation symmetry at every stage of calculations to minimize the number of arithmetic operations and storage requirement, adjust the peak local memory usage by index range tiling, and support parallel I/O interfaces and dynamic load balancing for parallel executions. We demonstrate the utility of TCE through automatic derivation and implementation of parallel programs for various models of configuration-interaction theory (CISD, CISDT, CISDTQ), many-body perturbation theory [MBPT(2), MBPT(3), MBPT(4)], and coupled-cluster theory (LCCD, CCD, LCCSD, CCSD, QCISD, CCSDT, and CCSDTQ).
50 CFR 86.13 - What is boating infrastructure?
Code of Federal Regulations, 2010 CFR
2010-10-01
..., currents, etc., that provide a temporary safe anchorage point or harbor of refuge during storms); (f) Floating docks and fixed piers; (g) Floating and fixed breakwaters; (h) Dinghy docks (floating or fixed...
PCIPS 2.0: Powerful multiprofile image processing implemented on PCs
NASA Technical Reports Server (NTRS)
Smirnov, O. M.; Piskunov, N. E.
1992-01-01
Over the years, the processing power of personal computers has steadily increased. Now, 386- and 486-based PCs are fast enough for many image processing applications, and inexpensive enough even for amateur astronomers. PCIPS is an image processing system based on these platforms that was designed to satisfy a broad range of data analysis needs, while requiring minimum hardware and providing maximum expandability. It will run (albeit at a slow pace) even on an 80286 with 640K memory, but will take full advantage of bigger memory and faster CPUs. Because the actual image processing is performed by external modules, the system can be easily upgraded by the user for all sorts of scientific data analysis. PCIPS supports large-format 1D and 2D images in any numeric type from 8-bit integer to 64-bit floating point. The images can be displayed, overlaid, printed, and any part of the data examined via an intuitive graphical user interface that employs buttons, pop-up menus, and a mouse. PCIPS automatically converts images between different types and sizes to satisfy the requirements of various applications. PCIPS features an API that lets users develop custom applications in C or FORTRAN. While doing so, a programmer can concentrate on the actual data processing, because PCIPS assumes responsibility for accessing images and interacting with the user. This also ensures that all applications, even custom ones, have a consistent and user-friendly interface. The API is compatible with factory programming, a metaphor for constructing image processing procedures that will be implemented in future versions of the system. Several application packages were created under PCIPS. The basic package includes elementary arithmetic and statistics, geometric transformations, and import/export in various formats (FITS, binary, ASCII, and GIF). The CCD processing package and the spectral analysis package were successfully used to reduce spectra from the Nordic Telescope at La Palma.
A photometry package is also available, and other packages are being developed. A multitasking version of PCIPS that utilizes the factory programming concept is currently under development. This version will remain compatible (on the source code level) with existing application packages and custom applications.
Developing an Energy Policy for the United States
ERIC Educational Resources Information Center
Keefe, Pat
2014-01-01
Al Bartlett's video "Arithmetic, Population, and Energy" spells out many of the complex issues related to energy use in our society. Bartlett makes the point that basic arithmetic is the fundamental obstacle preventing us from being able to grasp the relationships between energy consumption, population, and lifestyles. In an earlier…
III-V semiconductor solid solution single crystal growth
NASA Technical Reports Server (NTRS)
Gertner, E. R.
1982-01-01
The feasibility and desirability of space growth of bulk IR semiconductor crystals for use as substrates for epitaxial IR detector material were researched. A III-V ternary compound (GaInSb) and a II-VI binary compound were considered. Vapor epitaxy and quaternary epitaxy techniques were found to be sufficient to permit the use of ground based binary III-V crystals for all major device applications. Float zoning of CdTe was found to be a potentially successful approach to obtaining high quality substrate material, but further experiments were required.
Efficient algorithms for dilated mappings of binary trees
NASA Technical Reports Server (NTRS)
Iqbal, M. Ashraf
1990-01-01
The problem is addressed to find a 1-1 mapping of the vertices of a binary tree onto those of a target binary tree such that the son of a node on the first binary tree is mapped onto a descendent of the image of that node in the second binary tree. There are two natural measures of the cost of this mapping, namely the dilation cost, i.e., the maximum distance in the target binary tree between the images of vertices that are adjacent in the original tree. The other measure, expansion cost, is defined as the number of extra nodes/edges to be added to the target binary tree in order to ensure a 1-1 mapping. An efficient algorithm to find a mapping of one binary tree onto another is described. It is shown that it is possible to minimize one cost of mapping at the expense of the other. This problem arises when designing pipelined arithmetic logic units (ALU) for special purpose computers. The pipeline is composed of ALU chips connected in the form of a binary tree. The operands to the pipeline can be supplied to the leaf nodes of the binary tree which then process and pass the results up to their parents. The final result is available at the root. As each new application may require a distinct nesting of operations, it is useful to be able to find a good mapping of a new binary tree over existing ALU tree. Another problem arises if every distinct required binary tree is known beforehand. Here it is useful to hardwire the pipeline in the form of a minimal supertree that contains all required binary trees.
ERIC Educational Resources Information Center
Shutler, Paul M. E.; Fong, Ng Swee
2010-01-01
Modern Hindu-Arabic numeration is the end result of a long period of evolution, and is clearly superior to any system that has gone before, but is it optimal? We compare it to a hypothetical base 5 system, which we dub Predator arithmetic, and judge which of the two systems is superior from a mathematics education point of view. We find that…
Spontaneous Meta-Arithmetic as a First Step toward School Algebra
ERIC Educational Resources Information Center
Caspi, Shai; Sfard, Anna
2012-01-01
Taking as the point of departure the vision of school algebra as a formalized meta-discourse of arithmetic, we have been following five pairs of 7th grade students as they progress in algebraic discourse during 24 months, from their informal algebraic talk to the formal algebraic discourse, as taught in school. Our analysis follows changes that…
A High-Level Formalization of Floating-Point Number in PVS
NASA Technical Reports Server (NTRS)
Boldo, Sylvie; Munoz, Cesar
2006-01-01
We develop a formalization of floating-point numbers in PVS based on a well-known formalization in Coq. We first describe the definitions of all the needed notions, e.g., floating-point number, format, rounding modes, etc.; then, we present an application to polynomial evaluation for elementary function evaluation. The application already existed in Coq, but our formalization shows a clear improvement in the quality of the result due to the automation provided by PVS. We finally integrate our formalization into a PVS hardware-level formalization of the IEEE-854 standard previously developed at NASA.
33 CFR 147.815 - ExxonMobil Hoover Floating OCS Facility safety zone.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 33 Navigation and Navigable Waters 2 2010-07-01 2010-07-01 false ExxonMobil Hoover Floating OCS... Floating OCS Facility safety zone. (a) Description. The ExxonMobil Hoover Floating OCS Facility, Alaminos... (1640.4 feet) from each point on the structure's outer edge is a safety zone. (b) Regulation. No vessel...
Peer-to-peer Monte Carlo simulation of photon migration in topical applications of biomedical optics
NASA Astrophysics Data System (ADS)
Doronin, Alexander; Meglinski, Igor
2012-09-01
In the framework of further development of the unified approach of photon migration in complex turbid media, such as biological tissues we present a peer-to-peer (P2P) Monte Carlo (MC) code. The object-oriented programming is used for generalization of MC model for multipurpose use in various applications of biomedical optics. The online user interface providing multiuser access is developed using modern web technologies, such as Microsoft Silverlight, ASP.NET. The emerging P2P network utilizing computers with different types of compute unified device architecture-capable graphics processing units (GPUs) is applied for acceleration and to overcome the limitations, imposed by multiuser access in the online MC computational tool. The developed P2P MC was validated by comparing the results of simulation of diffuse reflectance and fluence rate distribution for semi-infinite scattering medium with known analytical results, results of adding-doubling method, and with other GPU-based MC techniques developed in the past. The best speedup of processing multiuser requests in a range of 4 to 35 s was achieved using single-precision computing, and the double-precision computing for floating-point arithmetic operations provides higher accuracy.
NASA Technical Reports Server (NTRS)
Kasahara, Hironori; Honda, Hiroki; Narita, Seinosuke
1989-01-01
Parallel processing of real-time dynamic systems simulation on a multiprocessor system named OSCAR is presented. In the simulation of dynamic systems, generally, the same calculations are repeated at every time step. However, we cannot apply the Do-all or Do-across techniques for parallel processing of the simulation, since there exist data dependencies from the end of an iteration to the beginning of the next iteration, and furthermore data input and data output are required every sampling time period. Therefore, parallelism inside the calculation required for a single time step, or a large basic block which consists of arithmetic assignment statements, must be used. In the proposed method, near-fine-grain tasks, each of which consists of one or more floating point operations, are generated to extract the parallelism from the calculation and assigned to processors by using optimal static scheduling at compile time, in order to reduce the large run-time overhead caused by the use of near-fine-grain tasks. The practicality of the scheme is demonstrated on OSCAR (Optimally SCheduled Advanced multiprocessoR), which has been developed to exploit the advantageous features of static scheduling algorithms to the maximum extent.
Graphics processing unit (GPU)-based computation of heat conduction in thermally anisotropic solids
NASA Astrophysics Data System (ADS)
Nahas, C. A.; Balasubramaniam, Krishnan; Rajagopal, Prabhu
2013-01-01
Numerical modeling of anisotropic media is a computationally intensive task, since the anisotropy brings additional complexity to the field problem: the physical properties differ in different directions. Largely used in the aerospace industry because of their lightweight nature, composite materials are a very good example of thermally anisotropic media. With advancements in video gaming technology, parallel processors are much cheaper today, and accessibility to higher-end graphical processing devices has increased dramatically over the past couple of years. Since these massively parallel GPUs are very good at handling floating point arithmetic, they provide a new platform for engineers and scientists to accelerate their numerical models using commodity hardware. In this paper we implement a parallel finite difference model of thermal diffusion through anisotropic media using NVIDIA CUDA (Compute Unified Device Architecture). We use the NVIDIA GeForce GTX 560 Ti as our primary computing device, which consists of 384 CUDA cores clocked at 1645 MHz, with a standard desktop PC as the host platform. We compare the results against a standard CPU implementation for accuracy and speed, and draw implications for simulation using the GPU paradigm.
Yu, Hui; Qi, Dan; Li, Heng-da; Xu, Ke-xin; Yuan, Wei-jie
2012-03-01
Weak signal, low instrument signal-to-noise ratio, continuous variation of the human physiological environment, and interference from other components in blood make it difficult to extract blood glucose information from the near-infrared spectrum in noninvasive blood glucose measurement. The floating-reference method, which analyses the effect of glucose concentration variation on the absorption and scattering coefficients, acquires spectra at the reference point, where the light-intensity variations due to absorption and scattering cancel each other, and at the measurement point, where they are largest. By using the spectrum from the reference point as a reference, the floating-reference method can reduce the interference from variations in the physiological environment and experimental circumstances. In the present paper, the effectiveness of the floating-reference method in improving prediction precision and stability was assessed through application experiments. A comparison was made between models whose data were processed with and without the floating-reference method. The results showed that the root mean square error of prediction (RMSEP) decreased by 34.7% maximally. The floating-reference method could reduce the influence of changes in the samples' state and of instrument noise and drift, and effectively improve the models' prediction precision and stability.
Capture of free-floating planets by planetary systems
NASA Astrophysics Data System (ADS)
Goulinski, Nadav; Ribak, Erez N.
2018-01-01
Evidence of exoplanets with orbits that are misaligned with the spin of the host star may suggest that not all bound planets were born in the protoplanetary disc of their current planetary system. Observations have shown that free-floating Jupiter-mass objects can exceed the number of stars in our Galaxy, implying that capture scenarios may not be so rare. To address this issue, we construct a three-dimensional simulation of a three-body scattering between a free-floating planet and a star accompanied by a Jupiter-mass bound planet. We distinguish between three different possible scattering outcomes, where the free-floating planet may get weakly captured after the brief interaction with the binary, remain unbound or 'kick out' the bound planet and replace it. The simulation was performed for different masses of the free-floating planets and stars, as well as different impact parameters, inclination angles and approach velocities. The outcome statistics are used to construct an analytical approximation of the cross-section for capturing a free-floating planet by fitting their dependence on the tested variables. The analytically approximated cross-section is used to predict the capture rate for these kinds of objects, and to estimate that about 1 per cent of all stars are expected to experience a temporary capture of a free-floating planet during their lifetime. Finally, we propose additional physical processes that may increase the capture statistics and whose contribution should be considered in future simulations in order to determine the fate of the temporarily captured planets.
Mathematical modelling of Bit-Level Architecture using Reciprocal Quantum Logic
NASA Astrophysics Data System (ADS)
Narendran, S.; Selvakumar, J.
2018-04-01
Efficiency of high-performance computing is in high demand, in terms of both speed and energy. Reciprocal Quantum Logic (RQL) is one technology that promises high speed and zero static power dissipation. RQL uses an AC power supply as input rather than a DC input, and has three basic gate types. Series of reciprocal transmission lines are placed between the gates to avoid power loss and to achieve high speed. An analytical model of a bit-level architecture is developed using RQL. A major drawback of RQL is area: because the gates lack a direct power supply, splitters are needed to distribute power, and these occupy a large area. Distributed arithmetic computes a vector-vector multiplication in which one vector is constant and the other is a signed variable; each word is treated as a binary number, and the partial results are rearranged and combined to form the distributed system. Distributed arithmetic is widely used in convolution and in high-performance computational devices.
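Distributed arithmetic replaces the multiply-accumulate of a dot product with a fixed coefficient vector by bit-serial table lookups. A minimal sketch of the arithmetic only, not of RQL hardware (the coefficient vector and word length are hypothetical, and inputs are taken as unsigned for simplicity):

```python
# Dot product y = sum(c[k] * x[k]) with constant coefficients c,
# computed bit-serially: precompute a table of all partial sums of c,
# then index it with the slice of bits taken across the inputs.
COEFFS = [3, -1, 4, 2]           # constant vector (hypothetical)
BITS = 8                         # unsigned word length of the inputs

# Table entry at address a = sum of COEFFS[k] for which bit k of a is set
TABLE = [sum(c for k, c in enumerate(COEFFS) if (a >> k) & 1)
         for a in range(1 << len(COEFFS))]

def da_dot(xs):
    """Bit-serial distributed-arithmetic dot product for unsigned inputs."""
    acc = 0
    for b in range(BITS - 1, -1, -1):       # from MSB to LSB
        addr = sum(((x >> b) & 1) << k for k, x in enumerate(xs))
        acc = (acc << 1) + TABLE[addr]      # shift-and-add of table lookups
    return acc

xs = [10, 20, 30, 40]
assert da_dot(xs) == sum(c * x for c, x in zip(COEFFS, xs))
```

The multiplierless shift-and-add structure is what makes distributed arithmetic attractive for hardware realizations such as the bit-level architecture discussed here.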
Rahim, Safwan Abdel; Carter, Paul A; Elkordy, Amal Ali
2015-01-01
The aim of this work was to design and evaluate effervescent floating gastro-retentive drug delivery matrix tablets with sustained-release behavior using a binary mixture of hydroxyethyl cellulose and sodium alginate. Pentoxifylline was used as a highly water-soluble, short half-life model drug with a high density. The floating capacity, swelling, and drug release behaviors of drug-loaded matrix tablets were evaluated in 0.1 N HCl (pH 1.2) at 37°C±0.5°C. Release data were analyzed by fitting the Korsmeyer–Peppas power law model. The effect of different formulation variables was investigated, such as wet granulation, the level of the sodium bicarbonate gas-forming agent, and tablet hardness. Statistical analysis was performed using the paired-sample t-test and one-way analysis of variance, depending on the type of data, to determine the significance of the different parameters. All tablets prepared by wet granulation showed acceptable physicochemical properties, and their drug release profiles followed non-Fickian diffusion. They could float on the surface of the dissolution medium and sustain drug release over 24 hours. Tablets prepared with 20% w/w sodium bicarbonate at 50–54 N hardness were promising with respect to their floating lag time, floating duration, swelling ability, and sustained drug release profile. PMID:25848220
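The Korsmeyer–Peppas power law fitted here has the form M_t/M_inf = k t^n, where n near 0.5 indicates Fickian diffusion and 0.5 < n < 1 indicates non-Fickian (anomalous) transport. A minimal sketch of the fit by linear least squares in log-log space (the release fractions below are synthetic, not the paper's data):

```python
import math

def fit_korsmeyer_peppas(times_h, fractions):
    """Fit M_t/M_inf = k * t**n by least squares in log-log space.

    In practice only the early part of the release curve
    (fraction <= 0.6) should be used; here we fit the points given.
    """
    xs = [math.log(t) for t in times_h]
    ys = [math.log(f) for f in fractions]
    m = len(xs)
    xbar, ybar = sum(xs) / m, sum(ys) / m
    n = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
    k = math.exp(ybar - n * xbar)
    return k, n

# Synthetic release data generated with k = 0.12, n = 0.65 (illustrative)
times = [1, 2, 4, 8]
fracs = [0.12 * t ** 0.65 for t in times]
k, n = fit_korsmeyer_peppas(times, fracs)
```

Since the synthetic data are exactly power-law, the fit recovers k and n; with measured data the residuals would quantify goodness of fit.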
Real object-based 360-degree integral-floating display using multiple depth camera
NASA Astrophysics Data System (ADS)
Erdenebat, Munkh-Uchral; Dashdavaa, Erkhembaatar; Kwon, Ki-Chul; Wu, Hui-Ying; Yoo, Kwan-Hee; Kim, Young-Seok; Kim, Nam
2015-03-01
A novel 360-degree integral-floating display based on a real object is proposed. The general procedure of the display system is similar to that of conventional 360-degree integral-floating displays. Unlike previously presented 360-degree displays, the proposed system displays a 3D image generated from a real object in a 360-degree viewing zone. In order to do so, multiple depth cameras were utilized to acquire depth information around the object. Then, 3D point cloud representations of the real object were reconstructed from the acquired depth information. Using a special point cloud registration method, the multiple virtual 3D point clouds captured by the individual depth cameras were combined into a single synthetic 3D point cloud model, and elemental image arrays were generated for the newly synthesized model at the angular step of the given anamorphic optic system. The theory has been verified experimentally, showing that the proposed 360-degree integral-floating display is an excellent way to present a real object in a 360-degree viewing zone.
Siemann, Julia; Petermann, Franz
2018-01-01
This review reconciles past findings on numerical processing with key assumptions of the most predominant model of arithmetic in the literature, the Triple Code Model (TCM). We do so by reporting diverse findings in the literature, ranging from behavioral studies on basic arithmetic operations, through neuroimaging studies on numerical processing, to developmental studies concerned with arithmetic acquisition, with a special focus on developmental dyscalculia (DD). We evaluate whether these studies corroborate the model and discuss possible reasons for contradictory findings. A separate section is dedicated to the transfer of TCM to arithmetic development and to alternative accounts focusing on developmental questions of numerical processing. We conclude with recommendations for future directions of arithmetic research, raising questions that require answers in models of healthy as well as abnormal mathematical development. This review assesses the leading model in the field of arithmetic processing (the Triple Code Model) by presenting knowledge from interdisciplinary research. It assesses the observed contradictory findings and integrates the resulting opposing viewpoints. The focus is on the development of arithmetic expertise as well as abnormal mathematical development. The original aspect of this article is that it points to a gap in research on these topics and provides possible solutions for future models. Copyright © 2017 Elsevier Ltd. All rights reserved.
Performance of FORTRAN floating-point operations on the Flex/32 multicomputer
NASA Technical Reports Server (NTRS)
Crockett, Thomas W.
1987-01-01
A series of experiments has been run to examine the floating-point performance of FORTRAN programs on the Flex/32 (Trademark) computer. The experiments are described, and the timing results are presented. The time required to execute a floating-point operation is found to vary considerably depending on a number of factors. One factor of particular interest from an algorithm design standpoint is the difference in speed between common memory accesses and local memory accesses. Common memory accesses were found to be slower, and guidelines are given for determining when it may be cost effective to copy data from common to local memory.
Cost-sensitive case-based reasoning using a genetic algorithm: application to medical diagnosis.
Park, Yoon-Joo; Chun, Se-Hak; Kim, Byung-Chun
2011-02-01
The paper studies a new learning technique called cost-sensitive case-based reasoning (CSCBR), which incorporates unequal misclassification costs into the CBR model. Conventional CBR is now considered a suitable technique for diagnosis, prognosis and prescription in medicine. However, it lacks the ability to reflect asymmetric misclassification costs, often assuming that misclassifying a positive case (an illness) as negative (no illness) costs the same as the opposite error. Thus, the objective of this research is to overcome this limitation of conventional CBR and to encourage applying CBR to the many real-world medical cases associated with asymmetric misclassification costs. The main idea involves adjusting the optimal cut-off classification point for classifying the absence or presence of disease, and the cut-off distance point for selecting optimal neighbors within search spaces based on the similarity distribution. These steps are dynamically adapted to new target cases using a genetic algorithm. We apply the proposed method to five real medical datasets and compare the results with two other cost-sensitive learning methods, C5.0 and CART. Our findings show that the total misclassification cost of CSCBR is lower than that of the other cost-sensitive methods in many cases. Even though the genetic algorithm has limitations in terms of unstable results and over-fitting of training data, CSCBR results with the GA are better overall than those of the other methods. Paired t-test results also indicate that the total misclassification cost of CSCBR is significantly less than that of C5.0 and CART for several datasets. We have proposed a new CBR method, cost-sensitive case-based reasoning (CSCBR), that can incorporate unequal misclassification costs into CBR and optimize the number of neighbors dynamically using a genetic algorithm. It is meaningful not only for introducing the concept of cost-sensitive learning to CBR, but also for encouraging the use of CBR in the medical area.
The results show that the total misclassification cost of CSCBR does not increase in arithmetic progression as the cost of a false absence increases arithmetically; the method is therefore cost-sensitive. We also show that the total misclassification cost of CSCBR is the lowest among all methods in four of the five datasets, and the result is statistically significant in many cases. A limitation of the proposed CSCBR is that it is confined to binary classification, since it was originally designed to classify binary cases while minimizing misclassification cost. Our future work will extend the method to multi-class problems with more than two groups. Copyright © 2010 Elsevier B.V. All rights reserved.
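The cut-off adjustment at the heart of cost-sensitive classification can be sketched independently of CBR. Below, the scores stand in for CBR similarity-weighted outputs; the cost values and the tiny dataset are hypothetical, chosen only to show the threshold shifting when a false absence becomes costlier:

```python
def best_cutoff(scores, labels, cost_fp, cost_fn):
    """Pick the score threshold minimizing total misclassification cost.

    scores: model outputs in [0, 1]; labels: 1 = disease present.
    cost_fp: cost of predicting disease when absent (false presence).
    cost_fn: cost of predicting no disease when present (false absence).
    """
    candidates = sorted(set(scores)) + [1.1]   # thresholds to try
    def total_cost(t):
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        fn = sum(1 for s, y in zip(scores, labels) if s < t and y == 1)
        return cost_fp * fp + cost_fn * fn
    return min(candidates, key=total_cost)

scores = [0.3, 0.4, 0.6, 0.7, 0.9]
labels = [1,   0,   0,   0,   1]
# When missing an illness is 10x worse, the optimal threshold moves down,
# trading extra false alarms for fewer missed illnesses.
t_sym  = best_cutoff(scores, labels, cost_fp=1, cost_fn=1)
t_asym = best_cutoff(scores, labels, cost_fp=1, cost_fn=10)
assert t_asym < t_sym
```

In CSCBR this search is performed per target case and driven by a genetic algorithm rather than the exhaustive scan used here.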
Non-uniqueness of the point of application of the buoyancy force
NASA Astrophysics Data System (ADS)
Kliava, Janis; Mégel, Jacques
2010-07-01
Even though the buoyancy force (also known as the Archimedes force) has always been an important topic of academic studies in physics, its point of application has not been explicitly identified yet. We present a quantitative approach to this problem based on the concept of the hydrostatic energy, considered here for a general shape of the cross-section of a floating body and for an arbitrary angle of heel. We show that the location of the point of application of the buoyancy force essentially depends (i) on the type of motion experienced by the floating body and (ii) on the definition of this point. In a rolling/pitching motion, considerations involving the rotational moment lead to a particular dynamical point of application of the buoyancy force, and for some simple shapes of the floating body this point coincides with the well-known metacentre. On the other hand, from the work-energy relation it follows that in the rolling/pitching motion the energetical point of application of this force is rigidly connected to the centre of buoyancy; in contrast, in a vertical translation this point is rigidly connected to the centre of gravity of the body. Finally, we consider the location of the characteristic points of the floating bodies for some particular shapes of immersed cross-sections. The paper is intended for higher education level physics teachers and students.
VLSI Design Techniques for Floating-Point Computation
1988-11-18
J. C. Gibson, The Gibson Mix, IBM Systems Development Division Tech. Report (June 1970). [Heni83] A. Heninger, The Zilog Z8070 Floating-Point... [Figure 7.2: Clock distribution between modules, showing the broadcast clock generator, divide-by-N module, and clock communication bus.]
Implementing direct, spatially isolated problems on transputer networks
NASA Technical Reports Server (NTRS)
Ellis, Graham K.
1988-01-01
Parametric studies were performed on transputer networks of up to 40 processors to determine how to implement, and maximize the performance of, solutions to problems in which no processor-to-processor data transfer is required (spatially isolated problems). Two types of problems are investigated: a computationally intensive problem whose solution required the transmission of 160 bytes of data through the parallel network, and a communication-intensive example that required the transmission of 3 Mbytes of data through the network. This data consists of solutions being sent back to the host processor, not intermediate results for another processor to work on. Studies were performed on both integer and floating-point transputers. The latter features an on-chip floating-point math unit and offers approximately an order of magnitude performance increase over the integer transputer on real-valued computations. The results indicate that a minimum amount of work is required on each node per communication to achieve high network speedups (efficiencies). The floating-point processor requires approximately an order of magnitude more work per communication than the integer processor because of the floating-point unit's increased computing capacity.
CT image reconstruction with half precision floating-point values.
Maaß, Clemens; Baer, Matthias; Kachelrieß, Marc
2011-07-01
Analytic CT image reconstruction is a computationally demanding task. Currently, the even more demanding iterative reconstruction algorithms are finding their way into clinical routine because their image quality is superior to that of analytic image reconstruction. The authors thoroughly analyze a so far unconsidered but valuable tool of tomorrow's reconstruction hardware (CPU and GPU) that allows the forward projection and backprojection steps, the computationally most demanding parts of any reconstruction algorithm, to be implemented much more efficiently. Instead of the standard 32 bit floating-point values (float), a recently standardized floating-point value with 16 bits (half) is adopted for data representation in the image domain and in the raw data domain. The reduction in the total data amount reduces the traffic on the memory bus, the bottleneck of today's high-performance algorithms, by 50%. In CT simulations and CT measurements, float reconstructions (the gold standard) and half reconstructions are compared visually via difference images and by quantitative image quality evaluation. This is done for analytical reconstruction (filtered backprojection) and iterative reconstruction (ordered subset SART). The magnitude of the quantization noise caused by reducing the data precision of both raw data and image data during reconstruction is negligible. This is clearly shown for filtered backprojection and iterative ordered subset SART reconstruction. In filtered backprojection, the implementation of the backprojection should be optimized for low data precision if the image data are represented in half format. In ordered subset SART image reconstruction, no adaptations are necessary and the convergence speed remains unchanged. Half precision floating-point values thus make it possible to speed up CT image reconstruction without compromising image quality.
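The quantization noise of the 16-bit half format can be previewed without any reconstruction code: Python's `struct` module supports the IEEE 754 binary16 interchange format via the `'e'` format code. This generic round-trip experiment (not the authors' pipeline) bounds the relative rounding error for normal values:

```python
import struct

def to_half_and_back(x):
    """Round-trip a float through IEEE 754 binary16 ('e' struct format)."""
    return struct.unpack('e', struct.pack('e', x))[0]

# Hypothetical raw data values of order unity, spread over [1, 10]
values = [1.0 + 9.0 * i / 999 for i in range(1000)]
rel_errs = [abs(to_half_and_back(v) - v) / v for v in values]

# binary16 has an 11-bit significand (10 stored + 1 implicit), so
# round-to-nearest keeps the relative error of any normal value
# below 2**-11, i.e. roughly 4.9e-4.
assert max(rel_errs) <= 2.0 ** -11
```

That roughly 0.05% bound is the "negligible quantization noise" regime the paper exploits; values near the subnormal range of binary16 would fare worse.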
Individual differences in children's understanding of inversion and arithmetical skill.
Gilmore, Camilla K; Bryant, Peter
2006-06-01
Background and aims: In order to develop arithmetic expertise, children must understand arithmetic principles, such as the inverse relationship between addition and subtraction, in addition to learning calculation skills. We report two experiments that investigate children's understanding of the principle of inversion and the relationship between their conceptual understanding and arithmetical skills. A group of 127 children from primary schools took part in the study. The children were from two age groups (6-7 and 8-9 years). Children's accuracy on inverse and control problems in a variety of presentation formats and in canonical and non-canonical forms was measured. Tests of general arithmetic ability were also administered. Children consistently performed better on inverse than control problems, which indicates that they could make use of the inverse principle. Presentation format affected performance: picture presentation allowed children to apply their conceptual understanding flexibly regardless of the problem type, while word problems restricted their ability to use their conceptual knowledge. Cluster analyses revealed three subgroups with different profiles of conceptual understanding and arithmetical skill. Children in the 'high ability' and 'low ability' groups showed conceptual understanding that was in line with their arithmetical skill, whilst a third group of children had more advanced conceptual understanding than arithmetical skill. The three subgroups may represent different points along a single developmental path or distinct developmental paths. The discovery of the three groups has important consequences for education: it demonstrates the importance of considering the pattern of each child's conceptual understanding and problem-solving skills.
Metcalfe, Arron W. S.; Ashkenazi, Sarit; Rosenberg-Lee, Miriam; Menon, Vinod
2013-01-01
Baddeley and Hitch’s multi-component working memory (WM) model has played an enduring and influential role in our understanding of cognitive abilities. Very little is known, however, about the neural basis of this multi-component WM model and the differential role each component plays in mediating arithmetic problem solving abilities in children. Here, we investigate the neural basis of the central executive (CE), phonological (PL) and visuo-spatial (VS) components of WM during a demanding mental arithmetic task in 7–9 year old children (N=74). The VS component was the strongest predictor of math ability in children and was associated with increased arithmetic complexity-related responses in left dorsolateral and right ventrolateral prefrontal cortices as well as bilateral intra-parietal sulcus and supramarginal gyrus in posterior parietal cortex. Critically, VS, CE and PL abilities were associated with largely distinct patterns of brain response. Overlap between VS and CE components was observed in left supramarginal gyrus and no overlap was observed between VS and PL components. Our findings point to a central role of visuo-spatial WM during arithmetic problem-solving in young grade-school children and highlight the usefulness of the multi-component Baddeley and Hitch WM model in fractionating the neural correlates of arithmetic problem solving during development. PMID:24212504
A hardware-oriented algorithm for floating-point function generation
NASA Technical Reports Server (NTRS)
O'Grady, E. Pearse; Young, Baek-Kyu
1991-01-01
An algorithm is presented for performing accurate, high-speed, floating-point function generation for univariate functions defined at arbitrary breakpoints. Rapid identification of the breakpoint interval, which includes the input argument, is shown to be the key operation in the algorithm. A hardware implementation which makes extensive use of read/write memories is used to illustrate the algorithm.
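In software terms, the breakpoint-interval identification that the paper realizes in hardware corresponds to a binary search over the breakpoint array, followed by evaluation on that interval. A generic sketch with linear interpolation (the breakpoints and function values are hypothetical, and the hardware uses table lookups rather than `bisect`):

```python
import bisect

# Piecewise-linear function table at arbitrary breakpoints (hypothetical)
BREAKPOINTS = [0.0, 0.5, 1.25, 2.0, 4.0]
VALUES      = [0.0, 1.0, 1.5,  1.0, 3.0]

def fgen(x):
    """Evaluate the tabulated function by breakpoint search + interpolation."""
    if not BREAKPOINTS[0] <= x <= BREAKPOINTS[-1]:
        raise ValueError("argument outside the table")
    i = bisect.bisect_right(BREAKPOINTS, x) - 1   # interval containing x
    i = min(i, len(BREAKPOINTS) - 2)              # clamp at the right edge
    x0, x1 = BREAKPOINTS[i], BREAKPOINTS[i + 1]
    y0, y1 = VALUES[i], VALUES[i + 1]
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

assert fgen(0.5) == 1.0          # exact at a breakpoint
assert fgen(0.25) == 0.5         # midway through the first interval
```

The binary search makes the interval lookup logarithmic in the number of breakpoints, which is why rapid interval identification dominates the cost of the scheme.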
A Taxonomy-Based Approach to Shed Light on the Babel of Mathematical Models for Rice Simulation
NASA Technical Reports Server (NTRS)
Confalonieri, Roberto; Bregaglio, Simone; Adam, Myriam; Ruget, Francoise; Li, Tao; Hasegawa, Toshihiro; Yin, Xinyou; Zhu, Yan; Boote, Kenneth; Buis, Samuel;
2016-01-01
For most biophysical domains, differences in model structures are seldom quantified. Here, we used a taxonomy-based approach to characterise thirteen rice models. Classification keys and binary attributes for each key were identified, and models were categorised into five clusters using a binary similarity measure and the unweighted pair-group method with arithmetic mean. Principal component analysis was performed on model outputs at four sites. Results indicated that (i) differences in structure often resulted in similar predictions and (ii) similar structures can lead to large differences in model outputs. User subjectivity during calibration may have hidden expected relationships between model structure and behaviour. This explanation, if confirmed, highlights the need for shared protocols to reduce the degrees of freedom during calibration, and to limit, in turn, the risk that user subjectivity influences model performance.
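The clustering step pairs a binary similarity measure with the unweighted pair-group method with arithmetic mean (UPGMA, average-linkage agglomerative clustering). A compact sketch with invented attribute vectors (four hypothetical models and four keys, not the paper's thirteen models), using Jaccard distance as the binary measure:

```python
# Binary attribute vectors for four hypothetical models (1 = key present)
models = {
    "A": [1, 1, 0, 1],
    "B": [1, 1, 0, 0],
    "C": [0, 0, 1, 1],
    "D": [0, 0, 1, 0],
}

def jaccard_dist(u, v):
    """1 - |intersection| / |union| over the set bits of two binary vectors."""
    inter = sum(1 for a, b in zip(u, v) if a and b)
    union = sum(1 for a, b in zip(u, v) if a or b)
    return 1.0 - inter / union if union else 0.0

def upgma_step(clusters):
    """Merge the two clusters with the smallest mean pairwise distance."""
    def avg_dist(c1, c2):
        pairs = [(m, n) for m in c1 for n in c2]
        return sum(jaccard_dist(models[m], models[n]) for m, n in pairs) / len(pairs)
    i, j = min(
        ((i, j) for i in range(len(clusters)) for j in range(i + 1, len(clusters))),
        key=lambda ij: avg_dist(clusters[ij[0]], clusters[ij[1]]),
    )
    merged = clusters[i] + clusters[j]
    return [c for k, c in enumerate(clusters) if k not in (i, j)] + [merged]

clusters = [["A"], ["B"], ["C"], ["D"]]
clusters = upgma_step(upgma_step(clusters))   # two merges
assert sorted(map(sorted, clusters)) == [["A", "B"], ["C", "D"]]
```

Repeating `upgma_step` until the desired number of clusters remains reproduces the kind of five-cluster partition reported for the thirteen rice models.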
An algorithm for the arithmetic classification of multilattices.
Indelicato, Giuliana
2013-01-01
A procedure for the construction and the classification of monoatomic multilattices in arbitrary dimension is developed. The algorithm allows one to determine the location of the points of all monoatomic multilattices with a given symmetry, or to determine whether two assigned multilattices are arithmetically equivalent. This approach is based on ideas from integral matrix theory, in particular the reduction to the Smith normal form, and can be coded to provide a classification software package.
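The reduction to Smith normal form that underlies the arithmetic classification can be carried out with elementary integer row and column operations. The generic routine below is an illustration of the reduction, not the paper's implementation; it repeatedly moves a smallest-magnitude pivot into place, reduces its row and column, and enforces the divisibility chain of the invariant factors:

```python
def smith_normal_form(mat):
    """Diagonalize an integer matrix by row/column operations (Smith form)."""
    a = [row[:] for row in mat]
    n, m = len(a), len(a[0])
    for t in range(min(n, m)):
        while True:
            # Move the smallest-magnitude nonzero entry of the submatrix to (t, t)
            pivots = [(abs(a[i][j]), i, j) for i in range(t, n)
                      for j in range(t, m) if a[i][j] != 0]
            if not pivots:
                return a                       # rest of the matrix is zero
            _, pi, pj = min(pivots)
            a[t], a[pi] = a[pi], a[t]
            for row in a:
                row[t], row[pj] = row[pj], row[t]
            # Reduce the pivot column and row modulo the pivot
            clean = True
            for i in range(t + 1, n):
                q = a[i][t] // a[t][t]
                for j in range(m):
                    a[i][j] -= q * a[t][j]
                clean &= a[i][t] == 0
            for j in range(t + 1, m):
                q = a[t][j] // a[t][t]
                for i in range(n):
                    a[i][j] -= q * a[i][t]
                clean &= a[t][j] == 0
            if not clean:
                continue                       # smaller remainders appeared; repeat
            # Enforce divisibility: the pivot must divide every remaining entry
            bad = [(i, j) for i in range(t + 1, n) for j in range(t + 1, m)
                   if a[i][j] % a[t][t] != 0]
            if not bad:
                break
            a[t] = [x + y for x, y in zip(a[t], a[bad[0][0]])]
        if a[t][t] < 0:
            a[t] = [-x for x in a[t]]
    return a

# Classic worked example: the invariant factors come out as 2, 6, 12
M = [[2, 4, 4], [-6, 6, 12], [10, -4, -16]]
snf = smith_normal_form(M)
```

Two multilattices are arithmetically equivalent exactly when the relevant integer matrices share these diagonal invariant factors, which is what makes the reduction a classification tool.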
Augmenting WFIRST Microlensing with a Ground-Based Telescope Network
NASA Astrophysics Data System (ADS)
Zhu, Wei; Gould, Andrew
2016-06-01
Augmenting the Wide Field Infrared Survey Telescope (WFIRST) microlensing campaigns with intensive observations from a ground-based network of wide-field survey telescopes would have several major advantages. First, it would enable full two-dimensional (2-D) vector microlens parallax measurements for a substantial fraction of low-mass lenses as well as planetary and binary events that show caustic crossing features. For a significant fraction of the free-floating planet (FFP) events and all caustic-crossing planetary/binary events, these 2-D parallax measurements directly lead to complete solutions (mass, distance, transverse velocity) of the lens object (or lens system). For even more events, the complementary ground-based observations will yield 1-D parallax measurements. Together with the 1-D parallaxes from WFIRST alone, they can probe the entire mass range M > M_Earth. For luminous lenses, such 1-D parallax measurements can be promoted to complete solutions (mass, distance, transverse velocity) by high-resolution imaging. This would provide crucial information not only about the hosts of planets and other lenses, but also enable a much more precise Galactic model. Other benefits of such a survey include improved understanding of binaries (particularly with low mass primaries), and sensitivity to distant ice-giant and gas-giant companions of WFIRST lenses that cannot be detected by WFIRST itself due to its restricted observing windows. Existing ground-based microlensing surveys can be employed if WFIRST is pointed at lower-extinction fields than is currently envisaged. This would come at some cost to the event rate. Therefore the benefits of improved characterization of lenses must be weighed against these costs.
NASA Technical Reports Server (NTRS)
Anderson, C. M.; Noor, A. K.
1975-01-01
Computerized symbolic integration was used in conjunction with group-theoretic techniques to obtain analytic expressions for the stiffness, geometric stiffness, consistent mass, and consistent load matrices of composite shallow shell structural elements. The elements are shear flexible and have variable curvature. A stiffness (displacement) formulation was used with the fundamental unknowns consisting of both the displacement and rotation components of the reference surface of the shell. The triangular elements have six and ten nodes; the quadrilateral elements have four and eight nodes and can have internal degrees of freedom associated with displacement modes which vanish along the edges of the element (bubble modes). The stiffness, geometric stiffness, consistent mass, and consistent load coefficients are expressed as linear combinations of integrals (over the element domain) whose integrands are products of shape functions and their derivatives. The evaluation of the elemental matrices is divided into two separate problems - determination of the coefficients in the linear combination and evaluation of the integrals. The integrals are performed symbolically by using the symbolic-and-algebraic-manipulation language MACSYMA. The efficiency of using symbolic integration in the element development is demonstrated by comparing the number of floating-point arithmetic operations required in this approach with those required by a commonly used numerical quadrature technique.
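The kind of exact element integral that a symbolic system such as MACSYMA evaluates can be illustrated with the monomial formula over the unit reference triangle, where the integral of x^a y^b equals a! b! / (a+b+2)!. Applying it termwise to products of polynomial shape functions gives closed-form matrix entries with no quadrature at all (a generic sketch using linear barycentric shape functions, not the paper's shear-flexible shell elements):

```python
from math import factorial

def tri_monomial_integral(a, b):
    """Exact integral of x**a * y**b over the triangle x, y >= 0, x + y <= 1."""
    return factorial(a) * factorial(b) / factorial(a + b + 2)

# Shape functions as {(a, b): coeff} polynomials, e.g. the linear
# barycentric functions N1 = 1 - x - y, N2 = x
N1 = {(0, 0): 1.0, (1, 0): -1.0, (0, 1): -1.0}
N2 = {(1, 0): 1.0}

def poly_product_integral(p, q):
    """Integrate the product of two polynomials over the reference triangle."""
    total = 0.0
    for (a1, b1), c1 in p.items():
        for (a2, b2), c2 in q.items():
            total += c1 * c2 * tri_monomial_integral(a1 + a2, b1 + b2)
    return total

# Consistent-mass entries for a linear triangle: 1/12 diagonal, 1/24 off-diagonal
assert abs(poly_product_integral(N2, N2) - 1.0 / 12.0) < 1e-15
assert abs(poly_product_integral(N1, N2) - 1.0 / 24.0) < 1e-15
```

Evaluating such entries once symbolically, instead of at every quadrature point, is the source of the floating-point operation savings reported in the paper.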
GRay: A Massively Parallel GPU-based Code for Ray Tracing in Relativistic Spacetimes
NASA Astrophysics Data System (ADS)
Chan, Chi-kwan; Psaltis, Dimitrios; Özel, Feryal
2013-11-01
We introduce GRay, a massively parallel integrator designed to trace the trajectories of billions of photons in a curved spacetime. This graphics-processing-unit (GPU)-based integrator employs the stream processing paradigm, is implemented in CUDA C/C++, and runs on nVidia graphics cards. The peak performance of GRay using single-precision floating-point arithmetic on a single GPU exceeds 300 GFLOPS (or 1 ns per photon per time step). For a realistic problem, where the peak performance cannot be reached, GRay is two orders of magnitude faster than existing central-processing-unit-based ray-tracing codes. This performance enhancement allows more effective searches of large parameter spaces when comparing theoretical predictions of images, spectra, and light curves from the vicinities of compact objects to observations. GRay can also perform on-the-fly ray tracing within general relativistic magnetohydrodynamic algorithms that simulate accretion flows around compact objects. Making use of this algorithm, we calculate the properties of the shadows of Kerr black holes and the photon rings that surround them. We also provide accurate fitting formulae of their dependencies on black hole spin and observer inclination, which can be used to interpret upcoming observations of the black holes at the center of the Milky Way, as well as M87, with the Event Horizon Telescope.
Numerically stable formulas for a particle-based explicit exponential integrator
NASA Astrophysics Data System (ADS)
Nadukandi, Prashanth
2015-05-01
Numerically stable formulas are presented for the closed-form analytical solution of the X-IVAS scheme in 3D. This scheme is a state-of-the-art particle-based explicit exponential integrator developed for the particle finite element method. Algebraically, this scheme involves two steps: (1) the solution of tangent curves for piecewise linear vector fields defined on simplicial meshes and (2) the solution of line integrals of piecewise linear vector-valued functions along these tangent curves. Hence, the stable formulas presented here have general applicability, e.g. exact integration of trajectories in particle-based (Lagrangian-type) methods, flow visualization and computer graphics. The Newton form of the polynomial interpolation definition is used to express exponential functions of matrices which appear in the analytical solution of the X-IVAS scheme. The divided difference coefficients in these expressions are defined in a piecewise manner, i.e. in a prescribed neighbourhood of removable singularities their series approximations are computed. An optimal series approximation of divided differences is presented which plays a critical role in this methodology. At least ten significant decimal digits in the formula computations are guaranteed to be exact using double-precision floating-point arithmetic. The worst case scenarios occur in the neighbourhood of removable singularities found in fourth-order divided differences of the exponential function.
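The removable-singularity issue can be seen already in the first-order divided difference of the exponential, phi(x) = (e^x - 1)/x: evaluated directly it loses digits near x = 0, while a truncated Taylor series in a small neighbourhood keeps full double precision. A minimal sketch of this standard piecewise remedy (the threshold and truncation order are illustrative, not those of the paper):

```python
import math

def phi_naive(x):
    """(exp(x) - 1)/x evaluated directly; cancels catastrophically near 0."""
    return (math.exp(x) - 1.0) / x

def phi_stable(x):
    """Piecewise definition: Taylor series inside a neighbourhood of 0."""
    if abs(x) > 1e-4:
        return math.expm1(x) / x          # expm1 avoids subtracting 1
    # phi(x) = 1 + x/2 + x**2/6 + x**3/24 + ..., truncated
    return 1.0 + x / 2.0 + x * x / 6.0 + x ** 3 / 24.0

# The series branch is exact to double precision and handles x = 0 itself,
# where the direct formula would divide by zero.
assert phi_stable(0.0) == 1.0
```

The paper's formulas apply the same idea to higher-order divided differences of exponential functions of matrices, where the naive evaluation is far more delicate.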
33 CFR 165.704 - Safety Zone; Tampa Bay, Florida.
Code of Federal Regulations, 2010 CFR
2010-07-01
..., Florida. (a) A floating safety zone is established consisting of an area 1000 yards fore and aft of a... ending at Gadsden Point Cut Lighted Buoys “3” and “4”. The safety zone starts again at Gadsden Point Cut... the marked channel at Tampa Bay Cut “K” buoy “11K” enroute to Rattlesnake, Tampa, FL, the floating...
Developing an Energy Policy for the United States
NASA Astrophysics Data System (ADS)
Keefe, Pat
2014-12-01
Al Bartlett's video "Arithmetic, Population, and Energy"1 spells out many of the complex issues related to energy use in our society. Bartlett makes the point that basic arithmetic is the fundamental obstacle preventing us from being able to grasp the relationships between energy consumption, population, and lifestyles. In an earlier version of Bartlett's video, he refers to a "Hagar the Horrible" comic strip in which Hagar asks the critical question, "Good…Now can anybody here count?"
Trinary optical logic processors using shadow casting with polarized light
NASA Astrophysics Data System (ADS)
Ghosh, Amal K.; Basuray, A.
1990-10-01
An optical implementation of the modified trinary number (MTN) system (Datta et al., 1989) is proposed, in which arithmetic operations can be performed on any binary number in parallel without the need for carry and borrow steps. The present method extends the lensless shadow-casting technique of Tanida and Ichioka (1983, 1985). Three kinds of spatial coding are used for encoding the trinary input states, whereas in the decoding plane the three states are identified by no light and by light with two orthogonal states of polarization.
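The three-state digit sets used in signed-digit schemes of this kind can be illustrated with balanced ternary, where each digit is -1, 0 or +1. The converter below is a generic illustration of such digit sets and their redundancy-friendly arithmetic, not the MTN coding itself:

```python
def to_balanced_ternary(n):
    """Encode an integer as balanced-ternary digits (-1, 0, 1), LSB first."""
    digits = []
    while n != 0:
        r = n % 3
        if r == 2:            # represent 2 as 3 - 1: digit -1, carry +1
            digits.append(-1)
            n = n // 3 + 1
        else:
            digits.append(r)
            n //= 3
    return digits or [0]

def from_balanced_ternary(digits):
    """Decode LSB-first balanced-ternary digits back to an integer."""
    return sum(d * 3 ** i for i, d in enumerate(digits))

assert to_balanced_ternary(5) == [-1, -1, 1]       # 5 = 9 - 3 - 1
assert all(from_balanced_ternary(to_balanced_ternary(n)) == n
           for n in range(-50, 51))
```

Because negation just flips every digit, subtraction needs no borrow, which is the property the optical shadow-casting processor exploits in parallel.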
Fast and efficient compression of floating-point data.
Lindstrom, Peter; Isenburg, Martin
2006-01-01
Large scale scientific simulation codes typically run on a cluster of CPUs that write/read time steps to/from a single file system. As data sets constantly grow in size, this increasingly leads to I/O bottlenecks. When the rate at which data is produced exceeds the available I/O bandwidth, the simulation stalls and the CPUs are idle. Data compression can alleviate this problem by using some CPU cycles to reduce the amount of data that needs to be transferred. Most compression schemes, however, are designed to operate offline and seek to maximize compression, not throughput. Furthermore, they often require quantizing floating-point values onto a uniform integer grid, which disqualifies their use in applications where exact values must be retained. We propose a simple scheme for lossless, online compression of floating-point data that transparently integrates into the I/O of many applications. A plug-in scheme for data-dependent prediction makes our scheme applicable to a wide variety of data used in visualization, such as unstructured meshes, point sets, images, and voxel grids. We achieve state-of-the-art compression rates and speeds, the latter in part due to an improved entropy coder. We demonstrate that this significantly accelerates I/O throughput in real simulation runs. Unlike previous schemes, our method also adapts well to variable-precision floating-point and integer data.
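The core of such predictive lossless schemes can be sketched in a few lines: predict each double (here simply by the previous value, a stand-in for the data-dependent predictors described in the paper), XOR the 64-bit patterns, and observe that slowly varying data yields residuals with many leading zero bits for the entropy coder to exploit:

```python
import struct

def bits(x):
    """64-bit pattern of a double."""
    return struct.unpack('<Q', struct.pack('<d', x))[0]

def residuals(values):
    """XOR each value's bit pattern with the previous value's (the predictor)."""
    prev = 0
    out = []
    for v in values:
        b = bits(v)
        out.append(b ^ prev)
        prev = b
    return out

# Smooth, slowly varying data: the residuals share sign and exponent
# with their predictors, so only low-order mantissa bits survive the XOR
# and most of each 64-bit word needs no storage after entropy coding.
data = [1.0 + 1e-7 * i for i in range(1, 1000)]
lead_zero_bits = [64 - r.bit_length() for r in residuals(data)[1:]]
assert min(lead_zero_bits) >= 8
```

Decompression XORs the residual back onto the prediction, so the original bit pattern, and hence the exact floating-point value, is restored losslessly.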
Logical NAND and NOR Operations Using Algorithmic Self-assembly of DNA Molecules
NASA Astrophysics Data System (ADS)
Wang, Yanfeng; Cui, Guangzhao; Zhang, Xuncai; Zheng, Yan
DNA self-assembly is the most advanced and versatile system that has been experimentally demonstrated for programmable construction of patterned systems on the molecular scale. It has been demonstrated that simple binary arithmetic and logical operations can be computed by the self-assembly of DNA tiles. Here we report a one-dimensional algorithmic self-assembly of DNA triple-crossover molecules that can be used to execute five steps of logical NAND and NOR operations on a string of binary bits. To achieve this, abstract tiles were translated into DNA tiles based on triple-crossover motifs. Serving as input for the computation, long single-stranded DNA molecules were used to nucleate growth of tiles into algorithmic crystals. Our method shows that engineered DNA self-assembly can be treated as a bottom-up design technique, capable of guiding DNA computer organization and architecture.
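The logical target of the assembly, chaining NAND or NOR along a string of input bits, is easy to state in software terms. A toy model of the five-step computation on a six-bit input (purely illustrative; it abstracts away the DNA tiles entirely and assumes each tile combines the previous output with the next input bit):

```python
def nand(a, b):
    return 1 - (a & b)

def nor(a, b):
    return 1 - (a | b)

def fold_gate(gate, bits):
    """Apply a two-input gate left to right along a string of bits,
    mimicking a linear (one-dimensional) tile assembly: each step's
    output feeds the next tile together with the next input bit."""
    acc = bits[0]
    steps = []
    for b in bits[1:]:
        acc = gate(acc, b)
        steps.append(acc)
    return steps

# A 6-bit input string drives five assembly steps
assert fold_gate(nand, [1, 1, 0, 1, 1, 0]) == [0, 1, 0, 1, 1]
assert fold_gate(nor, [0, 0, 1, 0, 0, 0]) == [1, 0, 1, 0, 1]
```

Since NAND (and NOR) is functionally complete, demonstrating it in a self-assembling medium is a step toward arbitrary molecular-scale circuits.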
Lossless compression of VLSI layout image data.
Dai, Vito; Zakhor, Avideh
2006-09-01
We present a novel lossless compression algorithm called Context Copy Combinatorial Code (C4), which integrates the advantages of two very disparate compression techniques: context-based modeling and Lempel-Ziv (LZ) style copying. While the algorithm can be applied to many lossless compression applications, such as document image compression, our primary target application has been lossless compression of integrated circuit layout image data. These images contain a heterogeneous mix of data: dense repetitive data better suited to LZ-style coding, and less dense structured data, better suited to context-based encoding. As part of C4, we have developed a novel binary entropy coding technique called combinatorial coding which is simultaneously as efficient as arithmetic coding, and as fast as Huffman coding. Compression results show C4 outperforms JBIG, ZIP, BZIP2, and two-dimensional LZ, and achieves lossless compression ratios greater than 22 for binary layout image data, and greater than 14 for gray-pixel image data.
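Combinatorial (enumerative) coding of the kind described can be sketched as ranking a binary string among all strings of the same length and weight: transmitting the length, the weight, and the rank is optimal for that class. This is an illustrative Python version of the classical scheme, not the C4 implementation:

```python
from math import comb

def enumerative_encode(bits):
    # Lexicographic rank of the string among strings of equal length/weight.
    n, k = len(bits), bits.count(1)
    rank, ones_left = 0, k
    for i, b in enumerate(bits):
        if b == 1:
            # Count all strings that put a 0 here instead.
            rank += comb(n - i - 1, ones_left)
            ones_left -= 1
    return n, k, rank

def enumerative_decode(n, k, rank):
    bits, ones_left = [], k
    for i in range(n):
        c = comb(n - i - 1, ones_left)
        if rank >= c:
            bits.append(1); rank -= c; ones_left -= 1
        else:
            bits.append(0)
    return bits
```

Since the rank is always below C(n, k), the code length matches the entropy of the class, which is why such coders can approach arithmetic-coding efficiency with table-free integer arithmetic.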
Entropy coders for image compression based on binary forward classification
NASA Astrophysics Data System (ADS)
Yoo, Hoon; Jeong, Jechang
2000-12-01
Entropy coders, as a noiseless compression method, are widely used as the final compression step for images, and many contributions have aimed at increasing entropy-coder performance and reducing entropy-coder complexity. In this paper, we propose entropy coders based on binary forward classification (BFC). The BFC requires classification overhead, but the total amount of classified output information equals the amount of input information, a property we prove in this paper. Using this property, we propose entropy coders in which the BFC is followed by Golomb-Rice coders (BFC+GR) or by arithmetic coders (BFC+A). The proposed entropy coders introduce negligible additional complexity due to the BFC. Simulation results show better performance than other entropy coders of similar complexity.
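A Golomb-Rice coder of the kind used after the BFC can be sketched as a unary-coded quotient followed by k remainder bits. The following minimal version (function names are illustrative, not from the paper) shows the idea on bit strings:

```python
def rice_encode(n, k):
    # Golomb-Rice code, parameter k: unary quotient, then k remainder bits.
    q, r = n >> k, n & ((1 << k) - 1)
    code = "1" * q + "0"              # quotient in unary, '0' terminator
    if k:
        code += format(r, "0{}b".format(k))
    return code

def rice_decode(bits, k):
    q = 0
    while bits[q] == "1":             # read the unary quotient
        q += 1
    i = q + 1                         # skip the '0' terminator
    r = int(bits[i:i + k], 2) if k else 0
    return (q << k) | r, i + k        # decoded value, bits consumed
```

For example, `rice_encode(9, 2)` produces `"11001"` (quotient 2 in unary, remainder 01), which is near-optimal when the classified symbols are geometrically distributed.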
Schema Knowledge Structures for Representing and Understanding Arithmetic Story Problems.
1987-03-01
do so on a common unit of measure. Implicit in the CP relation is the concept of one-to-one matching of one element in the problem with the other. As...engages in one-to-one matching, removing one member from each set and setting them apart as a matched pair. The smaller of the two sets is the one...to be critical. As we pointed out earlier, some of the semantic relations can be present in situations that demand any of the four arithmetic
Software Techniques for Non-Von Neumann Architectures
1990-01-01
Commtopo programmable Benes net.; hypercubic lattice for QCD Control CENTRALIZED Assign STATIC Memory: SHARED Synch UNIVERSAL Max-cpu 566 Processor...boards (each = 4 floating point units, 2 multipliers) Cpu-size 32-bit floating point chips Perform 11.4 Gflops Market quantum chromodynamics (QCD)...functions there should exist a capability to define hierarchies and lattices of complex objects. A complex object can be made up of a set of simple objects
Towards High Resolution Numerical Algorithms for Wave Dominated Physical Phenomena
2009-01-30
results are scaled as floating point operations per second, obtained by counting the number of floating point additions and multiplications in the...black horizontal line. Perhaps the most striking feature at first is the fact that the memory bandwidth measured for flux lifting transcends this...theoretical peak performance values. For a suitable CPU-limited workload, this means that a single workstation equipped with multiple GPUs can do work that
A novel high-frequency encoding algorithm for image compression
NASA Astrophysics Data System (ADS)
Siddeq, Mohammed M.; Rodrigues, Marcos A.
2017-12-01
In this paper, a new method for image compression is proposed whose quality is demonstrated through accurate 3D reconstruction from 2D images. The method is based on the discrete cosine transform (DCT) together with a high-frequency minimization encoding algorithm at the compression stage and a new concurrent binary search algorithm at the decompression stage. The proposed compression method consists of five main steps: (1) divide the image into blocks and apply DCT to each block; (2) apply a high-frequency minimization method to the AC-coefficients, reducing each block by 2/3 and resulting in a minimized array; (3) build a look-up table of probability data to enable the recovery of the original high frequencies at the decompression stage; (4) apply a delta or differential operator to the list of DC-components; and (5) apply arithmetic encoding to the outputs of steps (2) and (4). At the decompression stage, the look-up table and the concurrent binary search algorithm are used to reconstruct all high-frequency AC-coefficients while the DC-components are decoded by reversing the arithmetic coding. Finally, the inverse DCT recovers the original image. We tested the technique by compressing and decompressing 2D images including images with structured light patterns for 3D reconstruction. The technique is compared with JPEG and JPEG2000 through 2D and 3D RMSE. Results demonstrate that the proposed compression method is perceptually superior to JPEG with equivalent quality to JPEG2000. Concerning 3D surface reconstruction from images, it is demonstrated that the proposed method is superior to both JPEG and JPEG2000.
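Two of the five steps above are easy to make concrete: the block DCT of step (1) and the delta coding of DC components in step (4). The sketch below is illustrative only (a textbook 8-point orthonormal DCT-II, not the authors' implementation):

```python
import math

def dct_1d(block):
    # 8-point DCT-II with orthonormal scaling; coefficient 0 is the DC term.
    n = len(block)
    out = []
    for u in range(n):
        s = sum(block[x] * math.cos(math.pi * (2 * x + 1) * u / (2 * n))
                for x in range(n))
        scale = math.sqrt(1 / n) if u == 0 else math.sqrt(2 / n)
        out.append(scale * s)
    return out

def delta_encode(dc_values):
    # Step (4): store each DC component as a difference from its predecessor.
    return [dc_values[0]] + [b - a for a, b in zip(dc_values, dc_values[1:])]

def delta_decode(deltas):
    out = [deltas[0]]
    for d in deltas[1:]:
        out.append(out[-1] + d)
    return out
```

A constant block yields a single DC coefficient and zero AC terms, which is exactly why the AC side is the natural target for the high-frequency minimization step.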
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-04
... to the point of origin. The restricted area will be marked by a lighted and signed floating buoy line... a signed floating buoy line without permission from the Supervisor of Shipbuilding, Conversion and...
Routine Microsecond Molecular Dynamics Simulations with AMBER on GPUs. 1. Generalized Born
2012-01-01
We present an implementation of generalized Born implicit solvent all-atom classical molecular dynamics (MD) within the AMBER program package that runs entirely on CUDA enabled NVIDIA graphics processing units (GPUs). We discuss the algorithms that are used to exploit the processing power of the GPUs and show the performance that can be achieved in comparison to simulations on conventional CPU clusters. The implementation supports three different precision models in which the contributions to the forces are calculated in single precision floating point arithmetic but accumulated in double precision (SPDP), or everything is computed in single precision (SPSP) or double precision (DPDP). In addition to performance, we have focused on understanding the implications of the different precision models on the outcome of implicit solvent MD simulations. We show results for a range of tests including the accuracy of single point force evaluations and energy conservation as well as structural properties pertaining to protein dynamics. The numerical noise due to rounding errors within the SPSP precision model is sufficiently large to lead to an accumulation of errors which can result in unphysical trajectories for long time scale simulations. We recommend the use of the mixed-precision SPDP model since the numerical results obtained are comparable with those of the full double precision DPDP model and the reference double precision CPU implementation but at significantly reduced computational cost. Our implementation provides performance for GB simulations on a single desktop that is on par with, and in some cases exceeds, that of traditional supercomputers. PMID:22582031
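The difference between the SPSP and SPDP precision models comes down to where intermediate results are rounded. This small sketch (illustrative only; it uses `struct` to emulate single-precision rounding in pure Python) shows a tiny force-like contribution that vanishes under single-precision accumulation but survives a double-precision accumulator:

```python
import struct

def to_f32(x):
    # Round a Python float (a double) to the nearest single-precision value.
    return struct.unpack("<f", struct.pack("<f", x))[0]

big, small = 1.0, 1e-8   # accumulator value and a tiny new contribution

# SPSP-style: every intermediate is rounded back to single precision, so a
# contribution below half an ulp of the accumulator is lost entirely.
spsp = to_f32(to_f32(big) + to_f32(small))

# SPDP-style: contributions are computed in single precision but summed in
# double precision, so the tiny contribution is retained.
spdp = to_f32(big) + to_f32(small)
```

Here `spsp` collapses back to exactly 1.0 while `spdp` stays strictly above it; over millions of MD time steps such systematic losses are what produce the unphysical SPSP trajectories.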
Floating electrode dielectrophoresis.
Golan, Saar; Elata, David; Orenstein, Meir; Dinnar, Uri
2006-12-01
In practice, dielectrophoresis (DEP) devices are based on micropatterned electrodes. When subjected to applied voltages, the electrodes generate nonuniform electric fields that are necessary for the DEP manipulation of particles. In this study, electrically floating electrodes are used in DEP devices. It is demonstrated that effective DEP forces can be achieved by using floating electrodes. Additionally, DEP forces generated by floating electrodes are different from DEP forces generated by excited electrodes. The floating electrodes' capabilities are explained theoretically by calculating the electric field gradients and demonstrated experimentally by using test-devices. The test-devices show that floating electrodes can be used to collect erythrocytes (red blood cells). DEP devices which contain many floating electrodes ought to have fewer connections to external signal sources. Therefore, the use of floating electrodes may considerably facilitate the fabrication and operation of DEP devices. It can also reduce device dimensions. However, the key point is that DEP devices can integrate excited electrodes fabricated by microtechnology processes and floating electrodes fabricated by nanotechnology processes. Such integration is expected to promote the use of DEP devices in the manipulation of nanoparticles.
Yu, Jen-Shiang K; Yu, Chin-Hui
2002-01-01
One of the most frequently used packages for electronic structure research, GAUSSIAN 98, is compiled on Linux systems with various hardware configurations, including AMD Athlon (with the "Thunderbird" core), AthlonMP, and AthlonXP (with the "Palomino" core) systems as well as the Intel Pentium 4 (with the "Willamette" core) machines. The default PGI FORTRAN compiler (pgf77) and the Intel FORTRAN compiler (ifc) are respectively employed with different architectural optimization options to compile GAUSSIAN 98 and test the performance improvement. In addition to the BLAS library included in revision A.11 of this package, the Automatically Tuned Linear Algebra Software (ATLAS) library is linked against the binary executables to improve the performance. Various Hartree-Fock, density-functional theories, and the MP2 calculations are done for benchmarking purposes. It is found that the combination of ifc with ATLAS library gives the best performance for GAUSSIAN 98 on all of these PC-Linux computers, including AMD and Intel CPUs. Even on AMD systems, the Intel FORTRAN compiler invariably produces binaries with better performance than pgf77. The enhancement provided by the ATLAS library is more significant for post-Hartree-Fock calculations. The performance on one single CPU is potentially as good as that on an Alpha 21264A workstation or an SGI supercomputer. The floating-point marks by SpecFP2000 have similar trends to the results of GAUSSIAN 98 package.
NASA Astrophysics Data System (ADS)
Hwang, Han-Jeong; Choi, Han; Kim, Jeong-Youn; Chang, Won-Du; Kim, Do-Won; Kim, Kiwoong; Jo, Sungho; Im, Chang-Hwan
2016-09-01
In traditional brain-computer interface (BCI) studies, binary communication systems have generally been implemented using two mental tasks arbitrarily assigned to "yes" or "no" intentions (e.g., mental arithmetic calculation for "yes"). A recent pilot study performed with one paralyzed patient showed the possibility of a more intuitive paradigm for binary BCI communications, in which the patient's internal yes/no intentions were directly decoded from functional near-infrared spectroscopy (fNIRS). We investigated whether such an "fNIRS-based direct intention decoding" paradigm can be reliably used for practical BCI communications. Eight healthy subjects participated in this study, and each participant was administered 70 disjunctive questions. Brain hemodynamic responses were recorded using a multichannel fNIRS device, while the participants were internally expressing "yes" or "no" intentions to each question. Different feature types, feature numbers, and time window sizes were tested to investigate optimal conditions for classifying the internal binary intentions. About 75% of the answers were correctly classified when the individual best feature set was employed (75.89% ±1.39 and 74.08% ±2.87 for oxygenated and deoxygenated hemoglobin responses, respectively), which was significantly higher than a random chance level (68.57% for p<0.001). The kurtosis feature showed the highest mean classification accuracy among all feature types. The grand-averaged hemodynamic responses showed that wide brain regions are associated with the processing of binary implicit intentions. Our experimental results demonstrated that direct decoding of internal binary intention has the potential to be used for implementing more intuitive and user-friendly communication systems for patients with motor disabilities.
Term Cancellations in Computing Floating-Point Gröbner Bases
NASA Astrophysics Data System (ADS)
Sasaki, Tateaki; Kako, Fujio
We discuss the term cancellation that makes floating-point Gröbner basis computation unstable, and show that error accumulation is never negligible in our previous method. We then present a new method that removes accumulated errors as far as possible by reducing matrices constructed from coefficient vectors via Gaussian elimination. The method makes explicit the amount of term cancellation caused by approximate linearly dependent relations among the input polynomials.
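The kind of cancellation at issue can be seen in a two-line example: subtracting nearly equal doubles annihilates the leading digits and leaves mostly the rounding error made when the operands were formed. This sketch is illustrative only and unrelated to the authors' matrix method:

```python
# a and b agree in roughly their first 15 decimal digits; the difference
# "should" be 1e-15, but cancellation exposes the rounding error made
# when 1.0 + 1e-15 was rounded to the nearest double.
a = 1.0 + 1e-15
b = 1.0
computed = a - b            # catastrophic cancellation
exact = 1e-15
rel_err = abs(computed - exact) / exact   # about 11% relative error
```

In a polynomial reduction, the same effect turns a coefficient that should cancel to zero into pure noise, which is why tracking the amount of cancellation matters for stability.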
Common Pitfalls in F77 Code Conversion
2003-02-01
implementation versus another are the source of these errors rather than typography. It is well to use the practice of commenting-out original source file lines...identifier), every I in the format field must be replaced with f followed by an appropriate floating point format designator. Floating point numeric...helps even more. Finally, libraries are a major source of non-portablility[sic], with graphics libraries one of the chief culprits. We in Fusion
NASA Astrophysics Data System (ADS)
Shi, Yu; Wang, Yue; Xu, Shijie
2018-04-01
The motion of a massless particle in the gravity of a binary asteroid system, referred to as the restricted full three-body problem (RF3BP), is fundamental, not only for the evolution of the binary system, but also for the design of relevant space missions. In this paper, equilibrium points and associated periodic orbit families in the gravity of a binary system are investigated, with the binary (66391) 1999 KW4 as an example. The polyhedron shape model is used to describe irregular shapes and corresponding gravity fields of the primary and secondary of (66391) 1999 KW4, which is more accurate than the ellipsoid shape model in previous studies and provides a high-fidelity representation of the gravitational environment. Both the synchronous and non-synchronous states of the binary system are considered. For the synchronous binary system, the equilibrium points and their stability are determined, and periodic orbit families emanating from each equilibrium point are generated by using the shooting (multiple shooting) method and the homotopy method, where the homotopy function connects the circular restricted three-body problem and the RF3BP. In the non-synchronous binary system, trajectories of equivalent equilibrium points are calculated, and the associated periodic orbits are obtained by using the homotopy method, where the homotopy function connects the synchronous and non-synchronous systems. Although only the binary (66391) 1999 KW4 is considered, our methods are also applicable to other binary systems with polyhedron shape data. Our results on equilibrium points and associated periodic orbits provide general insights into the dynamical environment and orbital behaviors in the proximity of small binary asteroids and enable trajectory design and mission operations in future binary system explorations.
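The starting point of the homotopy above, the classical circular restricted three-body problem with point masses, already admits a simple equilibrium computation. As a hedged sketch (point-mass CR3BP only, not the polyhedron gravity model; `mu` is a rough Earth-Moon mass parameter chosen purely for illustration), the collinear equilibrium between the primaries can be found by Newton iteration on the rotating-frame force balance:

```python
def cr3bp_collinear_eq(x, mu):
    # Net x-axis acceleration in the rotating frame; zero at an equilibrium.
    # Primaries sit at x = -mu (mass 1-mu) and x = 1-mu (mass mu).
    r1, r2 = x + mu, x - 1 + mu
    return x - (1 - mu) * r1 / abs(r1) ** 3 - mu * r2 / abs(r2) ** 3

def find_equilibrium(mu, x0, tol=1e-12):
    # Newton iteration with a central-difference derivative.
    x, h = x0, 1e-7
    for _ in range(100):
        f = cr3bp_collinear_eq(x, mu)
        df = (cr3bp_collinear_eq(x + h, mu)
              - cr3bp_collinear_eq(x - h, mu)) / (2 * h)
        step = f / df
        x -= step
        if abs(step) < tol:
            break
    return x

mu = 0.01215                      # approximate Earth-Moon mass parameter
L1 = find_equilibrium(mu, 0.8)    # equilibrium between the two primaries
```

In the RF3BP the right-hand side is replaced by the polyhedron gravity of both bodies, but the shooting and continuation machinery is anchored to solutions of exactly this kind.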
Compression of 3D Point Clouds Using a Region-Adaptive Hierarchical Transform.
De Queiroz, Ricardo; Chou, Philip A
2016-06-01
In free-viewpoint video, there is a recent trend to represent scene objects as solids rather than using multiple depth maps. Point clouds have been used in computer graphics for a long time and with the recent possibility of real time capturing and rendering, point clouds have been favored over meshes in order to save computation. Each point in the cloud is associated with its 3D position and its color. We devise a method to compress the colors in point clouds that is based on a hierarchical transform and arithmetic coding. The transform is a hierarchical sub-band transform that resembles an adaptive variation of a Haar wavelet. The arithmetic encoding of the coefficients assumes Laplace distributions, one per sub-band. The Laplace parameter for each distribution is transmitted to the decoder using a custom method. The geometry of the point cloud is encoded using the well-established octree scanning. Results show that the proposed solution performs comparably to the current state-of-the-art, on many occasions outperforming it, while being much more computationally efficient. We believe this work represents the state-of-the-art in intra-frame compression of point clouds for real-time 3D video.
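One level of the weight-adaptive Haar step that such a transform resembles can be sketched as a 2x2 rotation whose angle depends on how many points each side of the merge carries. This is an illustrative sketch, not the paper's implementation:

```python
import math

def raht_merge(c1, w1, c2, w2):
    # Weight-adaptive Haar step: one low-pass (DC-like) and one high-pass
    # coefficient; the orthonormal rotation depends on the point weights.
    a = math.sqrt(w1 / (w1 + w2))
    b = math.sqrt(w2 / (w1 + w2))
    low = a * c1 + b * c2
    high = -b * c1 + a * c2
    return low, high, w1 + w2      # merged node carries the combined weight

def raht_split(low, high, w1, w2):
    # Exact inverse: the rotation is orthonormal, so invert by its transpose.
    a = math.sqrt(w1 / (w1 + w2))
    b = math.sqrt(w2 / (w1 + w2))
    return a * low - b * high, b * low + a * high
```

With equal weights this reduces to the ordinary Haar transform (the high-pass term of two equal colors is exactly zero); unequal weights bias the low-pass coefficient toward the side representing more points, which is what makes the transform region-adaptive.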
Spin torque oscillator neuroanalog of von Neumann's microwave computer.
Hoppensteadt, Frank
2015-10-01
Frequency and phase of neural activity play important roles in the behaving brain. The emerging understanding of these roles has been informed by the design of analog devices that have been important to neuroscience, among them the neuroanalog computer developed by O. Schmitt and A. Hodgkin in the 1930s. Later J. von Neumann, in a search for high performance computing using microwaves, invented a logic machine based on crystal diodes that can perform logic functions including binary arithmetic. Described here is an embodiment of his machine using nano-magnetics. Electrical currents through point contacts on a ferromagnetic thin film can create oscillations in the magnetization of the film. Under natural conditions these properties of a ferromagnetic thin film may be described by a nonlinear Schrödinger equation for the film's magnetization. Radiating solutions of this system are referred to as spin waves, and communication within the film may be by spin waves or by directed graphs of electrical connections. It is shown here how to formulate an STO logic machine, how by computer simulation this machine can perform several computations simultaneously using multiplexing of inputs, that this system can evaluate iterated logic functions, and that spin waves may communicate frequency, phase and binary information. Neural tissue and the Schmitt-Hodgkin, von Neumann and STO devices share a common bifurcation structure, although these systems operate on vastly different space and time scales; namely, all may exhibit Andronov-Hopf bifurcations. This suggests that neural circuits may be capable of the computational functionality described by von Neumann. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Pavlichin, Dmitri S.; Mabuchi, Hideo
2014-06-01
Nanoscale integrated photonic devices and circuits offer a path to ultra-low power computation at the few-photon level. Here we propose an optical circuit that performs a ubiquitous operation: the controlled, random-access readout of a collection of stored memory phases or, equivalently, the computation of the inner product of a vector of phases with a binary "selector" vector, where the arithmetic is done modulo 2π and the result is encoded in the phase of a coherent field. This circuit, a collection of cascaded interferometers driven by a coherent input field, demonstrates the use of coherence as a computational resource, and of the use of recently developed mathematical tools for modeling optical circuits with many coupled parts. The construction extends in a straightforward way to the computation of matrix-vector and matrix-matrix products, and, with the inclusion of an optical feedback loop, to the computation of a "weighted" readout of stored memory phases. We note some applications of these circuits for error correction and for computing tasks requiring fast vector inner products, e.g. statistical classification and some machine learning algorithms.
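The core readout operation, an inner product of stored phases with a binary selector vector modulo 2π, is easy to state in conventional code. This is a purely classical sketch of the arithmetic the circuit performs optically, with illustrative function names:

```python
import math

def phase_inner_product(phases, selector):
    # Sum the phases picked out by the binary selector, modulo 2*pi.
    total = sum(p for p, s in zip(phases, selector) if s)
    return total % (2 * math.pi)

def phase_matvec(selector_matrix, phases):
    # Matrix-vector extension: one modular inner product per selector row.
    return [phase_inner_product(phases, row) for row in selector_matrix]
```

In the proposed device the sum is accumulated in the phase of a single coherent field traversing cascaded interferometers, so the modular reduction happens for free.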
van den Tillaart-Haverkate, Maj; de Ronde-Brons, Inge; Dreschler, Wouter A; Houben, Rolph
2017-01-01
Single-microphone noise reduction leads to subjective benefit, but not to objective improvements in speech intelligibility. We investigated whether response times (RTs) provide an objective measure of the benefit of noise reduction and whether the effect of noise reduction is reflected in rated listening effort. Twelve normal-hearing participants listened to digit triplets that were either unprocessed or processed with one of two noise-reduction algorithms: an ideal binary mask (IBM) and a more realistic minimum mean square error estimator (MMSE). For each of these three processing conditions, we measured (a) speech intelligibility, (b) RTs on two different tasks (identification of the last digit and arithmetic summation of the first and last digit), and (c) subjective listening effort ratings. All measurements were performed at four signal-to-noise ratios (SNRs): -5, 0, +5, and +∞ dB. Speech intelligibility was high (>97% correct) for all conditions. A significant decrease in response time, relative to the unprocessed condition, was found for both IBM and MMSE for the arithmetic but not the identification task. Listening effort ratings were significantly lower for IBM than for MMSE and unprocessed speech in noise. We conclude that RT for an arithmetic task can provide an objective measure of the benefit of noise reduction. For young normal-hearing listeners, both ideal and realistic noise reduction can reduce RTs at SNRs where speech intelligibility is close to 100%. Ideal noise reduction can also reduce perceived listening effort.
Physical implication of transition voltage in organic nano-floating-gate nonvolatile memories
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Shun; Gao, Xu, E-mail: wangsd@suda.edu.cn, E-mail: gaoxu@suda.edu.cn; Zhong, Ya-Nan
High-performance pentacene-based organic field-effect transistor nonvolatile memories, using polystyrene as a tunneling dielectric and Au nanoparticles as a nano-floating-gate, show parallelogram-like transfer characteristics with a featured transition point. The transition voltage at the transition point corresponds to a threshold electric field in the tunneling dielectric, over which stored electrons in the nano-floating-gate will start to leak out. The transition voltage can be modulated depending on the bias configuration and device structure. For p-type active layers, optimized transition voltage should be on the negative side of but close to the reading voltage, which can simultaneously achieve a high ON/OFF ratio and good memory retention.
NASA Astrophysics Data System (ADS)
Hill, C.
2008-12-01
Low cost graphics cards today use many, relatively simple, compute cores to deliver memory bandwidth of more than 100 GB/s and theoretical floating point performance of more than 500 GFlop/s. Right now this performance is, however, only accessible to highly parallel algorithm implementations that (i) can use a hundred or more, 32-bit floating point, concurrently executing cores, (ii) can work with graphics memory that resides on the graphics card side of the graphics bus and (iii) can be partially expressed in a language that can be compiled by a graphics programming tool. In this talk we describe our experiences implementing a complete, but relatively simple, time dependent shallow-water equations simulation targeting a cluster of 30 computers each hosting one graphics card. The implementation takes into account the considerations (i), (ii) and (iii) listed previously. We code our algorithm as a series of numerical kernels. Each kernel is designed to be executed by multiple threads of a single process. Kernels are passed memory blocks to compute over, which can be persistent blocks of memory on a graphics card. Each kernel is individually implemented using the NVidia CUDA language but driven from a higher level supervisory code that is almost identical to a standard model driver. The supervisory code controls the overall simulation timestepping, but is written to minimize data transfer between main memory and graphics memory (a massive performance bottleneck on current systems). Using the recipe outlined we can boost the performance of our cluster by nearly an order of magnitude, relative to the same algorithm executing only on the cluster CPUs. Achieving this performance boost requires that many threads are available to each graphics processor for execution within each numerical kernel and that the simulation's working set of data can fit into the graphics card memory.
As we describe, this puts interesting upper and lower bounds on the problem sizes for which this technology is currently most useful. However, many interesting problems fit within this envelope. Looking forward, we extrapolate our experience to estimate full-scale ocean model performance and applicability. Finally we describe preliminary hybrid mixed 32-bit and 64-bit experiments with graphics cards that support 64-bit arithmetic, albeit at a lower performance.
Design of crossed-mirror array to form floating 3D LED signs
NASA Astrophysics Data System (ADS)
Yamamoto, Hirotsugu; Bando, Hiroki; Kujime, Ryousuke; Suyama, Shiro
2012-03-01
3D representation of digital signage improves its significance and rapid notification of important points. Our goal is to realize floating 3D LED signs. The problem is that no suitable device exists to form floating 3D images from LEDs. LED lamp size is around 1 cm including wiring and substrates. Such a large pitch increases display size and sometimes spoils image quality. The purpose of this paper is to develop an optical device that meets these requirements and to demonstrate floating 3D arrays of LEDs. We analytically investigate image formation by a crossed mirror structure with aerial apertures, called a CMA (crossed-mirror array). The CMA contains dihedral corner reflectors at each aperture. After double reflection, light rays emitted from an LED will converge into the corresponding image point. We have fabricated a CMA for a 3D array of LEDs. One CMA unit contains 20 x 20 apertures that are located diagonally. A floating image of LEDs was formed over a wide range of incident angles. The image size of the focused beam agreed with the apparent aperture size. When LEDs were located three-dimensionally (LEDs in three depths), the focused distances were the same as the distance between the real LED and the CMA.
DSS 13 Microprocessor Antenna Controller
NASA Technical Reports Server (NTRS)
Gosline, R. M.
1984-01-01
A microprocessor-based antenna controller system developed as part of the unattended station project for DSS 13 is described. Both the hardware and software top level designs are presented and the major problems encountered are discussed. Developments useful to related projects include a JPL standard 15 line interface using a single board computer, a general purpose parser, a fast floating point to ASCII conversion technique, and experience gained in using off-board floating point processors with the 8080 CPU.
Special relativity from observer's mathematics point of view
NASA Astrophysics Data System (ADS)
Khots, Boris; Khots, Dmitriy
2015-09-01
When we create mathematical models for the quantum theory of light we assume that the mathematical apparatus used in modeling, at least the simplest mathematical apparatus, is infallible. In particular, this relates to the use of "infinitely small" and "infinitely large" quantities in arithmetic and the use of Newton-Cauchy definitions of a limit and derivative in analysis. We believe that is where the main problem lies in the contemporary study of nature. We have introduced a new concept of Observer's Mathematics (see www.mathrelativity.com). Observer's Mathematics creates new arithmetic, algebra, geometry, topology, analysis and logic which do not contain the concept of continuum, but locally coincide with the standard fields. We use Einstein's special relativity principles and get the analogue of the classical Lorentz transformation. This work considers this transformation from the Observer's Mathematics point of view.
NASA Astrophysics Data System (ADS)
Balzani, Daniel; Gandhi, Ashutosh; Tanaka, Masato; Schröder, Jörg
2015-05-01
In this paper a robust approximation scheme for the numerical calculation of tangent stiffness matrices is presented in the context of nonlinear thermo-mechanical finite element problems and its performance is analyzed. The scheme extends the approach proposed in Kim et al. (Comput Methods Appl Mech Eng 200:403-413, 2011) and Tanaka et al. (Comput Methods Appl Mech Eng 269:454-470, 2014) and is based on applying the complex-step-derivative approximation to the linearizations of the weak forms of the balance of linear momentum and the balance of energy. By incorporating consistent perturbations along the imaginary axis to the displacement as well as thermal degrees of freedom, we demonstrate that numerical tangent stiffness matrices can be obtained with accuracy up to computer precision, leading to quadratically converging schemes. The main advantage of this approach is that, contrary to the classical forward difference scheme, no round-off errors due to floating-point arithmetic exist within the calculation of the tangent stiffness. This enables arbitrarily small perturbation values and therefore leads to robust schemes even when choosing small values. An efficient algorithmic treatment is presented which enables a straightforward implementation of the method in any standard finite-element program. By means of thermo-elastic and thermo-elastoplastic boundary value problems at finite strains the performance of the proposed approach is analyzed.
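The complex-step approximation underlying the scheme is simple to demonstrate for a scalar function: perturbing along the imaginary axis yields a derivative formula with no subtraction, hence no round-off from cancellation, so the step size can be made arbitrarily small. The test function below is illustrative, not from the paper:

```python
import cmath

def complex_step_derivative(f, x, h=1e-30):
    # f'(x) ~ Im(f(x + i*h)) / h : no subtraction, so no cancellation error,
    # even for an extremely small perturbation h.
    return f(x + 1j * h).imag / h

def forward_difference(f, x, h=1e-8):
    # Classical scheme: the subtraction loses significant digits as h shrinks.
    return ((f(x + h) - f(x)) / h).real

f = lambda z: cmath.exp(z) * cmath.sin(z)   # illustrative smooth function
```

At x = 1 the exact derivative is e(sin 1 + cos 1); the complex-step result matches it to machine precision while the forward difference is limited to roughly half the available digits, mirroring the accuracy gap the paper exploits for tangent stiffness matrices.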
GRay: A MASSIVELY PARALLEL GPU-BASED CODE FOR RAY TRACING IN RELATIVISTIC SPACETIMES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chan, Chi-kwan; Psaltis, Dimitrios; Özel, Feryal
We introduce GRay, a massively parallel integrator designed to trace the trajectories of billions of photons in a curved spacetime. This graphics-processing-unit (GPU)-based integrator employs the stream processing paradigm, is implemented in CUDA C/C++, and runs on nVidia graphics cards. The peak performance of GRay using single-precision floating-point arithmetic on a single GPU exceeds 300 GFLOP (or 1 ns per photon per time step). For a realistic problem, where the peak performance cannot be reached, GRay is two orders of magnitude faster than existing central-processing-unit-based ray-tracing codes. This performance enhancement allows more effective searches of large parameter spaces when comparing theoretical predictions of images, spectra, and light curves from the vicinities of compact objects to observations. GRay can also perform on-the-fly ray tracing within general relativistic magnetohydrodynamic algorithms that simulate accretion flows around compact objects. Making use of this algorithm, we calculate the properties of the shadows of Kerr black holes and the photon rings that surround them. We also provide accurate fitting formulae of their dependencies on black hole spin and observer inclination, which can be used to interpret upcoming observations of the black holes at the center of the Milky Way, as well as M87, with the Event Horizon Telescope.
Nakamura, N; Nakano, K; Sugiura, N; Matsumura, M
2003-12-01
A process using a floating carrier for immobilization of cyanobacteriolytic bacteria, B. cereus N-14, was proposed to realize an effective in situ control of natural floating cyanobacterial blooms. The critical concentrations of the cyanobacteriolytic substance and B. cereus N-14 cells required to exhibit cyanobacteriolytic activity were investigated. The results indicated the necessity of cell growth to produce sufficiently high amounts of the cyanobacteriolytic substance to exhibit its activity and also for conditions enabling good contact between high concentrations of the cyanobacteriolytic substance and cyanobacteria. Floating biodegradable plastics made of starch were applied as a carrier material to maintain close contact between the immobilized cyanobacteriolytic bacteria and floating cyanobacteria. The floating starch-carriers could eliminate 99% of floating cyanobacteria in 4 d. Since B. cereus N-14 could produce the cyanobacteriolytic substance in the presence of starch and some amino acids, the cyanobacteriolytic activity could be attributed to the carbon source fed from the starch carrier and amino acids eluted from lysed cyanobacteria. Therefore, the effect of using a floating starch-carrier was confirmed from both viewpoints: as a carrier for immobilization and as a nutrient source to stimulate cyanobacteriolytic activity. The new concept of applying a floating carrier immobilizing useful microorganisms for intensive treatment of a nuisance floating target was demonstrated.
NETS - A NEURAL NETWORK DEVELOPMENT TOOL, VERSION 3.0 (MACINTOSH VERSION)
NASA Technical Reports Server (NTRS)
Phillips, T. A.
1994-01-01
NETS, A Tool for the Development and Evaluation of Neural Networks, provides a simulation of Neural Network algorithms plus an environment for developing such algorithms. Neural Networks are a class of systems modeled after the human brain. Artificial Neural Networks are formed from hundreds or thousands of simulated neurons, connected to each other in a manner similar to brain neurons. Problems which involve pattern matching readily fit the class of problems which NETS is designed to solve. NETS uses the back propagation learning method for all of the networks which it creates. The nodes of a network are usually grouped together into clumps called layers. Generally, a network will have an input layer through which the various environment stimuli are presented to the network, and an output layer for determining the network's response. The number of nodes in these two layers is usually tied to some features of the problem being solved. Other layers, which form intermediate stops between the input and output layers, are called hidden layers. NETS allows the user to customize the patterns of connections between layers of a network. NETS also provides features for saving the weight values of a network during the learning process, which allows for more precise control over the learning process. NETS is an interpreter. Its method of execution is the familiar "read-evaluate-print" loop found in interpreted languages such as BASIC and LISP. The user is presented with a prompt which is the simulator's way of asking for input. After a command is issued, NETS will attempt to evaluate the command, which may produce more prompts requesting specific information or an error if the command is not understood. The typical process involved when using NETS consists of translating the problem into a format which uses input/output pairs, designing a network configuration for the problem, and finally training the network with input/output pairs until an acceptable error is reached. 
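The training loop the abstract describes, presenting input/output pairs and back-propagating the error, can be sketched in a few lines. This is a minimal, self-contained illustration in Python rather than NETS's ANSI C, with made-up layer sizes, learning rate, and epoch count; it is not the NETS implementation itself:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train(pairs, hidden=3, epochs=2000, lr=0.5, seed=7):
    """Plain back propagation for an input -> hidden -> output network.
    Returns (error_before, error_after, predict)."""
    rnd = random.Random(seed)
    n_in = len(pairs[0][0])
    # Random initial weights, as in any back-propagation setup.
    w_ih = [[rnd.uniform(-1, 1) for _ in range(n_in)] for _ in range(hidden)]
    b_h = [rnd.uniform(-1, 1) for _ in range(hidden)]
    w_ho = [rnd.uniform(-1, 1) for _ in range(hidden)]
    b_o = rnd.uniform(-1, 1)

    def forward(x):
        h = [sigmoid(sum(w_ih[j][i] * x[i] for i in range(n_in)) + b_h[j])
             for j in range(hidden)]
        y = sigmoid(sum(w_ho[j] * h[j] for j in range(hidden)) + b_o)
        return h, y

    def sq_error():
        return sum((forward(x)[1] - t) ** 2 for x, t in pairs)

    err_before = sq_error()
    for _ in range(epochs):
        for x, t in pairs:
            h, y = forward(x)
            d_o = (y - t) * y * (1.0 - y)            # output-layer delta
            for j in range(hidden):
                d_h = d_o * w_ho[j] * h[j] * (1.0 - h[j])  # hidden delta
                w_ho[j] -= lr * d_o * h[j]
                for i in range(n_in):
                    w_ih[j][i] -= lr * d_h * x[i]
                b_h[j] -= lr * d_h
            b_o -= lr * d_o
    return err_before, sq_error(), lambda x: forward(x)[1]
```

Training on XOR pairs, for example, drives the summed squared error down from its random-initialization value, which is the "train until an acceptable error is reached" step described above.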
NETS allows the user to generate C code to implement the network loaded into the system. This permits the placement of networks as components, or subroutines, in other systems. In short, once a network performs satisfactorily, the Generate C Code option provides the means for creating a program separate from NETS to run the network. Other features: files may be stored in binary or ASCII format; multiple input propagation is permitted; bias values may be included; capability to scale data without writing scaling code; quick interactive testing of network from the main menu; and several options that allow the user to manipulate learning efficiency. NETS is written in ANSI standard C language to be machine independent. The Macintosh version (MSC-22108) includes code for both a graphical user interface version and a command line interface version. The machine independent version (MSC-21588) only includes code for the command line interface version of NETS 3.0. The Macintosh version requires a Macintosh II series computer and has been successfully implemented under System 7. Four executables are included on these diskettes, two for floating point operations and two for integer arithmetic. It requires Think C 5.0 to compile. A minimum of 1Mb of RAM is required for execution. Sample input files and executables for both the command line version and the Macintosh user interface version are provided on the distribution medium. The Macintosh version is available on a set of three 3.5 inch 800K Macintosh format diskettes. The machine independent version has been successfully implemented on an IBM PC series compatible running MS-DOS, a DEC VAX running VMS, a SunIPC running SunOS, and a CRAY Y-MP running UNICOS. Two executables for the IBM PC version are included on the MS-DOS distribution media, one compiled for floating point operations and one for integer arithmetic. 
The machine independent version is available on a set of three 5.25 inch 360K MS-DOS format diskettes (standard distribution medium) or a .25 inch streaming magnetic tape cartridge in UNIX tar format. NETS was developed in 1989 and updated in 1992. IBM PC is a registered trademark of International Business Machines. MS-DOS is a registered trademark of Microsoft Corporation. DEC, VAX, and VMS are trademarks of Digital Equipment Corporation. SunIPC and SunOS are trademarks of Sun Microsystems, Inc. CRAY Y-MP and UNICOS are trademarks of Cray Research, Inc.
NETS - A NEURAL NETWORK DEVELOPMENT TOOL, VERSION 3.0 (MACHINE INDEPENDENT VERSION)
NASA Technical Reports Server (NTRS)
Baffes, P. T.
1994-01-01
NETS, A Tool for the Development and Evaluation of Neural Networks, provides a simulation of Neural Network algorithms plus an environment for developing such algorithms. Neural Networks are a class of systems modeled after the human brain. Artificial Neural Networks are formed from hundreds or thousands of simulated neurons, connected to each other in a manner similar to brain neurons. Problems which involve pattern matching readily fit the class of problems which NETS is designed to solve. NETS uses the back propagation learning method for all of the networks which it creates. The nodes of a network are usually grouped together into clumps called layers. Generally, a network will have an input layer through which the various environment stimuli are presented to the network, and an output layer for determining the network's response. The number of nodes in these two layers is usually tied to some features of the problem being solved. Other layers, which form intermediate stops between the input and output layers, are called hidden layers. NETS allows the user to customize the patterns of connections between layers of a network. NETS also provides features for saving the weight values of a network during the learning process, which allows for more precise control over the learning process. NETS is an interpreter. Its method of execution is the familiar "read-evaluate-print" loop found in interpreted languages such as BASIC and LISP. The user is presented with a prompt which is the simulator's way of asking for input. After a command is issued, NETS will attempt to evaluate the command, which may produce more prompts requesting specific information or an error if the command is not understood. The typical process involved when using NETS consists of translating the problem into a format which uses input/output pairs, designing a network configuration for the problem, and finally training the network with input/output pairs until an acceptable error is reached. 
NETS allows the user to generate C code to implement the network loaded into the system. This permits the placement of networks as components, or subroutines, in other systems. In short, once a network performs satisfactorily, the Generate C Code option provides the means for creating a program separate from NETS to run the network. Other features: files may be stored in binary or ASCII format; multiple input propagation is permitted; bias values may be included; capability to scale data without writing scaling code; quick interactive testing of network from the main menu; and several options that allow the user to manipulate learning efficiency. NETS is written in ANSI standard C language to be machine independent. The Macintosh version (MSC-22108) includes code for both a graphical user interface version and a command line interface version. The machine independent version (MSC-21588) only includes code for the command line interface version of NETS 3.0. The Macintosh version requires a Macintosh II series computer and has been successfully implemented under System 7. Four executables are included on these diskettes, two for floating point operations and two for integer arithmetic. It requires Think C 5.0 to compile. A minimum of 1Mb of RAM is required for execution. Sample input files and executables for both the command line version and the Macintosh user interface version are provided on the distribution medium. The Macintosh version is available on a set of three 3.5 inch 800K Macintosh format diskettes. The machine independent version has been successfully implemented on an IBM PC series compatible running MS-DOS, a DEC VAX running VMS, a SunIPC running SunOS, and a CRAY Y-MP running UNICOS. Two executables for the IBM PC version are included on the MS-DOS distribution media, one compiled for floating point operations and one for integer arithmetic. 
The machine independent version is available on a set of three 5.25 inch 360K MS-DOS format diskettes (standard distribution medium) or a .25 inch streaming magnetic tape cartridge in UNIX tar format. NETS was developed in 1989 and updated in 1992. IBM PC is a registered trademark of International Business Machines. MS-DOS is a registered trademark of Microsoft Corporation. DEC, VAX, and VMS are trademarks of Digital Equipment Corporation. SunIPC and SunOS are trademarks of Sun Microsystems, Inc. CRAY Y-MP and UNICOS are trademarks of Cray Research, Inc.
Centroid tracker and aimpoint selection
NASA Astrophysics Data System (ADS)
Venkateswarlu, Ronda; Sujata, K. V.; Venkateswara Rao, B.
1992-11-01
Autonomous fire-and-forget weapons have gained importance for achieving an accurate first-pass kill by hitting the target at an appropriate aimpoint. The centroid of the image presented by a target in the field of view (FOV) of a sensor is generally accepted as the aimpoint for these weapons. Centroid trackers are applicable only when the target image is of significant size in the FOV of the sensor but does not overflow it. As the range between the sensor and the target decreases, however, the target image grows and finally overflows the FOV at close ranges, so the centroid point on the target keeps changing, which is not desirable. Moreover, the centroid need not be the most desired or vulnerable point on the target. For hardened targets like tanks, proper aimpoint selection and guidance up to almost zero range is essential to achieve maximum kill probability. This paper presents a centroid tracker realization. As the centroid offers a stable tracking point, it can be used as a reference to select the proper aimpoint. The centroid and the desired aimpoint are simultaneously tracked to avoid jamming by flares and also to handle the problems arising from image overflow. Thresholding of the gray-level image to a binary image is a crucial step in a centroid tracker. Different thresholding algorithms are discussed and a suitable algorithm is chosen. The real-time hardware implementation of the centroid tracker with a suitable thresholding technique is presented, including the interfacing to a multimode tracker for autonomous target tracking and aimpoint selection. The hardware uses very high speed arithmetic and programmable logic devices to meet the speed requirement, and a microprocessor-based subsystem for system control. The tracker has been evaluated in a field environment.
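The two core steps the abstract identifies, thresholding the gray-level image to a binary image and computing the centroid of the target silhouette, can be sketched as follows. The fixed threshold and tiny image are illustrative assumptions; the actual tracker runs in dedicated high-speed hardware and uses a more sophisticated adaptive threshold:

```python
def binarize(image, threshold):
    """Threshold a gray-level image (list of rows of pixel values)
    into a 0/1 target mask."""
    return [[1 if pixel >= threshold else 0 for pixel in row] for row in image]

def centroid(mask):
    """Centroid of the set pixels: mean (row, column) of the target
    silhouette. Returns None if the mask is empty (no target in the FOV)."""
    count = rsum = csum = 0
    for r, row in enumerate(mask):
        for c, v in enumerate(row):
            if v:
                count += 1
                rsum += r
                csum += c
    return None if count == 0 else (rsum / count, csum / count)
```

The stability of this centroid is what makes it usable as a reference point from which a fixed offset to the chosen aimpoint can be maintained.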
An integrated circuit floating point accumulator
NASA Technical Reports Server (NTRS)
Goldsmith, T. C.
1977-01-01
Goddard Space Flight Center has developed a large-scale integrated circuit (type 623) which can perform pulse counting, storage, floating point compression, and serial transmission using a single monolithic device. Counts of 27 or 19 bits can be converted to transmitted values of 12 or 8 bits, respectively. Use of the 623 has resulted in substantial savings in weight, volume, and dollar resources on at least 11 scientific instruments to be flown on 4 NASA spacecraft. The design, construction, and application of the 623 are described.
Floating-point function generation routines for 16-bit microcomputers
NASA Technical Reports Server (NTRS)
Mackin, M. A.; Soeder, J. F.
1984-01-01
Several computer subroutines have been developed that interpolate three types of nonanalytic functions: univariate, bivariate, and map. The routines use data in floating-point form. However, because they are written for use on a 16-bit Intel 8086 system with an 8087 mathematical coprocessor, they execute as fast as routines using data in scaled integer form. Although all of the routines are written in assembly language, they have been implemented in a modular fashion so as to facilitate their use with high-level languages.
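The univariate case of such a function-generation routine amounts to a breakpoint-table lookup followed by linear interpolation. A minimal sketch in Python rather than 8086/8087 assembly; the convention of clamping inputs outside the table to the end values is an assumption, not taken from the original report:

```python
from bisect import bisect_right

def interp1(xs, ys, x):
    """Piecewise-linear interpolation through breakpoints.
    xs must be strictly ascending; out-of-range inputs are clamped
    to the first/last table value."""
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    i = bisect_right(xs, x) - 1          # locate the bracketing segment
    t = (x - xs[i]) / (xs[i + 1] - xs[i])
    return ys[i] + t * (ys[i + 1] - ys[i])
```

The bivariate and map routines extend the same idea to two independent variables by interpolating first along one axis and then along the other.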
Hwang, Han-Jeong; Choi, Han; Kim, Jeong-Youn; Chang, Won-Du; Kim, Do-Won; Kim, Kiwoong; Jo, Sungho; Im, Chang-Hwan
2016-09-01
In traditional brain-computer interface (BCI) studies, binary communication systems have generally been implemented using two mental tasks arbitrarily assigned to “yes” or “no” intentions (e.g., mental arithmetic calculation for “yes”). A recent pilot study performed with one paralyzed patient showed the possibility of a more intuitive paradigm for binary BCI communications, in which the patient’s internal yes/no intentions were directly decoded from functional near-infrared spectroscopy (fNIRS). We investigated whether such an “fNIRS-based direct intention decoding” paradigm can be reliably used for practical BCI communications. Eight healthy subjects participated in this study, and each participant was administered 70 disjunctive questions. Brain hemodynamic responses were recorded using a multichannel fNIRS device, while the participants were internally expressing “yes” or “no” intentions to each question. Different feature types, feature numbers, and time window sizes were tested to investigate optimal conditions for classifying the internal binary intentions. About 75% of the answers were correctly classified when the individual best feature set was employed (75.89% ± 1.39 and 74.08% ± 2.87 for oxygenated and deoxygenated hemoglobin responses, respectively), which was significantly higher than a random chance level (68.57% for p < 0.001). The kurtosis feature showed the highest mean classification accuracy among all feature types. The grand-averaged hemodynamic responses showed that wide brain regions are associated with the processing of binary implicit intentions. Our experimental results demonstrated that direct decoding of internal binary intention has the potential to be used for implementing more intuitive and user-friendly communication systems for patients with motor disabilities.
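The kurtosis feature that gave the highest mean accuracy can be computed per channel and time window as the fourth standardized moment of the hemodynamic signal. A minimal sketch; the non-excess convention and non-overlapping windows used here are assumptions, since implementations differ (many subtract 3 to report excess kurtosis):

```python
def kurtosis(window):
    """Kurtosis of one time window of one channel: fourth central
    moment divided by the squared variance (non-excess convention)."""
    n = len(window)
    mean = sum(window) / n
    var = sum((x - mean) ** 2 for x in window) / n
    if var == 0.0:
        return 0.0                      # flat window carries no shape info
    m4 = sum((x - mean) ** 4 for x in window) / n
    return m4 / var ** 2

def windowed_features(signal, size):
    """Slice a signal into non-overlapping windows and take the
    kurtosis of each, yielding one feature per window."""
    return [kurtosis(signal[i:i + size])
            for i in range(0, len(signal) - size + 1, size)]
```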
2012-03-01
Description: A class that handles forming the JAUS header portion of JAUS messages. jaus_hmd~_msg is included as a data member in all JAUS messages. Member... scaleToInt16(float val, float low, float high) [related] Scales signed short value val, which is bounded by low and high. Shifts the center point of low and high to zero, and shifts val accordingly. Val is then upscaled by the ratio of the range of short values to the range of values from high to low.
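The scaling routine described above can be sketched as follows. The exact scale factor (the full 65535-count int16 span) and the clamping of out-of-range inputs are assumptions inferred from the description, not taken from the original source:

```python
def scale_to_int16(val, low, high):
    """Map val in [low, high] to the signed 16-bit range, per the
    description above: shift the midpoint of [low, high] to zero,
    then scale by the ratio of the int16 span to (high - low).
    Out-of-range inputs saturate (an assumption)."""
    center = (low + high) / 2.0
    scaled = (val - center) * (65535.0 / (high - low))
    return int(round(max(-32768.0, min(32767.0, scaled))))
```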
Discovery of a Free-Floating Double Planet?
NASA Astrophysics Data System (ADS)
Kohler, Susanna
2017-07-01
An object previously identified as a free-floating, large Jupiter analog turns out to be two objects, each with the mass of a few Jupiters. This system is the lowest-mass binary we've ever discovered. Tracking Down Ages: 2MASS J11193254-1137466 is thought to be a member of the TW Hydrae Association, a group of roughly two dozen young stars moving together in the solar neighborhood. [University of Western Ontario/Carnegie Institution of Washington DTM/David Rodriguez] Brown dwarfs represent the bottom end of the stellar mass spectrum, with masses too low to fuse hydrogen (typically below 75-80 Jupiter masses). Observing these objects provides us a unique opportunity to learn about stellar evolution and atmospheric models, but to properly understand these observations, we need to determine the dwarfs' masses and ages. This is surprisingly difficult, however. Brown dwarfs cool continuously as they age, which creates an observational degeneracy: dwarfs of different masses and ages can have the same luminosity, making it difficult to infer their physical properties from observations. We can solve this problem with an independent measurement of the dwarfs' masses. One approach is to find brown dwarfs that are members of nearby stellar associations called moving groups. The stars within the association share the same approximate age, so a brown dwarf's age can be estimated based on the easier-to-identify ages of other stars in the group. An Unusual Binary: Recently, a team of scientists led by William Best (Institute for Astronomy, University of Hawaii) were following up on such an object: the extremely red, low-gravity L7 dwarf 2MASS J11193254-1137466, possibly a member of the TW Hydrae Association.
With the help of the powerful adaptive optics on the Keck II telescope in Hawaii, however, the team discovered that this Jupiter-like object was hiding something: it's actually two objects of equal flux orbiting each other. Keck images of 2MASS J11193254-1137466 reveal that this object is actually a binary system. A similar image of another dwarf, WISEA J1147-2040, is shown at bottom left for contrast: this one does not show signs of being a binary at this resolution. [Best et al. 2017] To learn more about this unusual binary, Best and collaborators began by using observed properties like sky position, proper motion, and radial velocity to estimate the likelihood that 2MASS J11193254-1137466AB is, indeed, a member of the TW Hydrae Association of stars. They found roughly an 80% chance that it belongs to this group. Under this assumption, the authors then used the distance to the group (around 160 light-years) to estimate that the binary's separation is 3.9 AU. The assumed membership in the TW Hydrae Association also provides the binary's age: roughly 10 million years. This allowed Best and collaborators to estimate the masses and effective temperatures of the components from luminosities and evolutionary models. Planetary-Mass Objects: The positions of 2MASS J11193254-1137466A and B on a color-magnitude diagram for ultracool dwarfs show that the binary components lie among the faintest and reddest planetary-mass L dwarfs. [Best et al. 2017] The team found that each component is a mere 3.7 Jupiter masses, placing them in the fuzzy region between planets and stars. While the International Astronomical Union considers objects below the minimum mass to fuse deuterium (around 13 Jupiter masses) to be planets, other definitions vary, depending on factors such as composition, temperature, and formation. The authors describe the binary as consisting of two planetary-mass objects. Regardless of its definition, 2MASS J11193254-1137466AB qualifies as the lowest-mass binary discovered to date.
The individual masses of the components also place them among the lowest-mass free-floating brown dwarfs known. This system will therefore be a crucial benchmark for tests of evolutionary and atmospheric models for low-mass stars in the future. Citation: William M. J. Best et al. 2017 ApJL 843 L4. doi:10.3847/2041-8213/aa76df
Shin, Jaeyoung; Kwon, Jinuk; Im, Chang-Hwan
2018-01-01
The performance of a brain-computer interface (BCI) can be enhanced by simultaneously using two or more modalities to record brain activity, which is generally referred to as a hybrid BCI. To date, many BCI researchers have tried to implement a hybrid BCI system by combining electroencephalography (EEG) and functional near-infrared spectroscopy (NIRS) to improve the overall accuracy of binary classification. However, since hybrid EEG-NIRS BCI, which will be denoted by hBCI in this paper, has not been applied to ternary classification problems, paradigms and classification strategies appropriate for ternary classification using hBCI are not well investigated. Here we propose the use of an hBCI for the classification of three brain activation patterns elicited by mental arithmetic, motor imagery, and idle state, with the aim to elevate the information transfer rate (ITR) of hBCI by increasing the number of classes while minimizing the loss of accuracy. EEG electrodes were placed over the prefrontal cortex and the central cortex, and NIRS optodes were placed only on the forehead. The ternary classification problem was decomposed into three binary classification problems using the "one-versus-one" (OVO) classification strategy to apply the filter-bank common spatial patterns filter to EEG data. A 10 × 10-fold cross validation was performed using shrinkage linear discriminant analysis (sLDA) to evaluate the average classification accuracies for EEG-BCI, NIRS-BCI, and hBCI when the meta-classification method was adopted to enhance classification accuracy. The ternary classification accuracies for EEG-BCI, NIRS-BCI, and hBCI were 76.1 ± 12.8, 64.1 ± 9.7, and 82.2 ± 10.2%, respectively. The classification accuracy of the proposed hBCI was thus significantly higher than those of the other BCIs ( p < 0.005). The average ITR for the proposed hBCI was calculated to be 4.70 ± 1.92 bits/minute, which was 34.3% higher than that reported for a previous binary hBCI study.
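The one-versus-one decomposition described above can be sketched independently of the EEG/NIRS feature pipeline: train one binary classifier per class pair, then take a majority vote. The nearest-mean stand-in classifier and one-dimensional features below are illustrative assumptions; the study itself uses filter-bank common spatial patterns with shrinkage LDA:

```python
from itertools import combinations

def fit_nearest_mean(pairs):
    """Toy binary classifier for 1-D features: assign to the nearest
    class mean. A stand-in for the sLDA classifier of the study."""
    sums, counts = {}, {}
    for x, y in pairs:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    means = {y: sums[y] / counts[y] for y in sums}
    return lambda x: min(means, key=lambda y: abs(x - means[y]))

def ovo_train(pairs, fit=fit_nearest_mean):
    """One-versus-one: one binary classifier per pair of classes."""
    classes = sorted({y for _, y in pairs})
    return classes, {
        (a, b): fit([(x, y) for x, y in pairs if y in (a, b)])
        for a, b in combinations(classes, 2)
    }

def ovo_predict(classes, models, x):
    """Majority vote over all pairwise classifiers (ties go to the
    first class in sorted order)."""
    votes = {c: 0 for c in classes}
    for model in models.values():
        votes[model(x)] += 1
    return max(classes, key=lambda c: votes[c])
```

For three classes (mental arithmetic, motor imagery, idle state) this yields exactly the three binary problems mentioned in the abstract.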
Evaluation of selected strapdown inertial instruments and pulse torque loops, volume 1
NASA Technical Reports Server (NTRS)
Sinkiewicz, J. S.; Feldman, J.; Lory, C. B.
1974-01-01
Design, operational and performance variations between ternary, binary and forced-binary pulse torque loops are presented. A fill-in binary loop which combines the constant power advantage of binary with the low sampling error of ternary is also discussed. The effects of different output-axis supports on the performance of a single-degree-of-freedom, floated gyroscope under a strapdown environment are illustrated. Three types of output-axis supports are discussed: pivot-dithered jewel, ball bearing and electromagnetic. A test evaluation on a Kearfott 2544 single-degree-of-freedom, strapdown gyroscope operating with a pulse torque loop, under constant rates and angular oscillatory inputs is described and the results presented. Contributions of the gyroscope's torque generator and the torque-to-balance electronics on scale factor variation with rate are illustrated for a SDF 18 IRIG Mod-B strapdown gyroscope operating with various pulse rebalance loops. Also discussed are methods of reducing this scale factor variation with rate by adjusting the tuning network which shunts the torque coil. A simplified analysis illustrating the principles of operation of the Teledyne two-degree-of-freedom, elastically-supported, tuned gyroscope and the results of a static and constant rate test evaluation of that instrument are presented.
Differential porosimetry and permeametry for random porous media.
Hilfer, R; Lemmer, A
2015-07-01
Accurate determination of geometrical and physical properties of natural porous materials is notoriously difficult. Continuum multiscale modeling has provided carefully calibrated realistic microstructure models of reservoir rocks with floating point accuracy. Previous measurements using synthetic microcomputed tomography (μ-CT) were based on extrapolation of resolution-dependent properties for discrete digitized approximations of the continuum microstructure. This paper reports continuum measurements of volume and specific surface with full floating point precision. It also corrects an incomplete description of rotations in earlier publications. More importantly, the methods of differential permeametry and differential porosimetry are introduced as precision tools. The continuum microstructure chosen to exemplify the methods is a homogeneous, carefully calibrated and characterized model for Fontainebleau sandstone. The sample has been publicly available since 2010 on the worldwide web as a benchmark for methodical studies of correlated random media. High-precision porosimetry gives the volume and internal surface area of the sample with floating point accuracy. Continuum results with floating point precision are compared to discrete approximations. Differential porosities and differential surface area densities allow geometrical fluctuations to be discriminated from discretization effects and numerical noise. Differential porosimetry and Fourier analysis reveal subtle periodic correlations. The findings uncover small oscillatory correlations with a period of roughly 850 μm, thus implying that the sample is not strictly stationary. The correlations are attributed to the deposition algorithm that was used to ensure the grain overlap constraint. Differential permeabilities are introduced and studied. Differential porosities and permeabilities provide scale-dependent information on geometry fluctuations, thereby allowing quantitative error estimates.
An Adaptive Prediction-Based Approach to Lossless Compression of Floating-Point Volume Data.
Fout, N; Ma, Kwan-Liu
2012-12-01
In this work, we address the problem of lossless compression of scientific and medical floating-point volume data. We propose two prediction-based compression methods that share a common framework, which consists of a switched prediction scheme wherein the best predictor out of a preset group of linear predictors is selected. Such a scheme is able to adapt to different datasets as well as to varying statistics within the data. The first method, called APE (Adaptive Polynomial Encoder), uses a family of structured interpolating polynomials for prediction, while the second method, which we refer to as ACE (Adaptive Combined Encoder), combines predictors from previous work with the polynomial predictors to yield a more flexible, powerful encoder that is able to effectively decorrelate a wide range of data. In addition, in order to facilitate efficient visualization of compressed data, our scheme provides an option to partition floating-point values in such a way as to provide a progressive representation. We compare our two compressors to existing state-of-the-art lossless floating-point compressors for scientific data, with our data suite including both computer simulations and observational measurements. The results demonstrate that our polynomial predictor, APE, is comparable to previous approaches in terms of speed but achieves better compression rates on average. ACE, our combined predictor, while somewhat slower, is able to achieve the best compression rate on all datasets, with significantly better rates on most of the datasets.
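The switched-prediction idea, trying a preset group of linear predictors on each block and keeping the one with the smallest residuals, can be sketched as below. This is a deliberate simplification: truly lossless floating-point codecs such as the ones described operate on the bit patterns of the floats (or use exact residual coding), whereas this sketch simply subtracts values, which is only exactly invertible for well-behaved data:

```python
# Preset predictor bank: constant, linear, and quadratic extrapolation
# from the last one, two, or three reconstructed samples.
PREDICTORS = [
    lambda h: h[-1],
    lambda h: 2.0 * h[-1] - h[-2],
    lambda h: 3.0 * h[-1] - 3.0 * h[-2] + h[-3],
]

def encode_block(block, history):
    """Try every predictor on the block, keep the one with the smallest
    total absolute residual, and emit (predictor index, residual list).
    history must hold at least three prior samples."""
    best = None
    for idx, predict in enumerate(PREDICTORS):
        h = list(history)
        res = []
        for v in block:
            res.append(v - predict(h))
            h.append(v)
        cost = sum(abs(r) for r in res)
        if best is None or cost < best[0]:
            best = (cost, idx, res)
    return best[1], best[2]

def decode_block(idx, res, history):
    """Invert encode_block: rebuild each value as prediction + residual."""
    h = list(history)
    for r in res:
        h.append(PREDICTORS[idx](h) + r)
    return h[len(history):]
```

Storing a per-block predictor index is what lets the scheme adapt to varying statistics within a dataset, as the abstract describes.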
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hayashi, Kenta; Department of Chemistry, Biology, and Biotechnology, University of Perugia, 06123 Perugia; Gotoda, Hiroshi
2016-05-15
The convective motions within a solution of a photochromic spiro-oxazine being irradiated by UV only on the bottom part of its volume give rise to aperiodic spectrophotometric dynamics. In this paper, we study three nonlinear properties of the aperiodic time series: permutation entropy, short-term predictability and long-term unpredictability, and the degree distribution of the visibility graph networks. After ascertaining the extracted chaotic features, we show how the aperiodic time series can be exploited to implement all the fundamental two-input binary logic functions (AND, OR, NAND, NOR, XOR, and XNOR) and some basic arithmetic operations (half-adder, full-adder, half-subtractor). This is possible due to the wide range of states a nonlinear system accesses in the course of its evolution. Therefore, the solution of the convective photochemical oscillator results in hardware for chaos computing alternative to conventional complementary metal-oxide semiconductor-based integrated circuits.
One-Time Pad as a nonlinear dynamical system
NASA Astrophysics Data System (ADS)
Nagaraj, Nithin
2012-11-01
The One-Time Pad (OTP) is the only known unbreakable cipher, proved mathematically by Shannon in 1949. In spite of several practical drawbacks of using the OTP, it continues to be used in quantum cryptography, DNA cryptography, and even in classical cryptography when the highest form of security is desired (other popular algorithms like RSA, ECC, and AES are not even proven to be computationally secure). In this work, we prove that OTP encryption and decryption are equivalent to finding the initial condition on a pair of binary maps (Bernoulli shift). The binary map belongs to a family of 1D nonlinear chaotic and ergodic dynamical systems known as Generalized Lüroth Series (GLS). Having established these interesting connections, we construct other perfect secrecy systems on the GLS that are equivalent to the One-Time Pad, generalizing for larger alphabets. We further show that OTP encryption is related to Randomized Arithmetic Coding - a scheme for joint compression and encryption.
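At the bit level, OTP encryption is just XOR with a never-reused random pad; the paper's observation is that reading the plaintext bits back out of ciphertext and key amounts to recovering the binary expansion (the Bernoulli-shift itinerary) of an initial condition. A minimal sketch of the cipher itself:

```python
import secrets

def otp_encrypt(plaintext: bytes, key: bytes) -> bytes:
    """One-Time Pad: XOR each plaintext byte with a key byte. For
    perfect secrecy the key must be truly random, as long as the
    message, and never reused."""
    if len(key) != len(plaintext):
        raise ValueError("OTP key must match message length")
    # Each ciphertext bit c_i = p_i XOR k_i; the paper shows recovering
    # p from (c, k) equals finding an initial condition of a binary map.
    return bytes(p ^ k for p, k in zip(plaintext, key))

# XOR is an involution, so decryption is the very same operation.
otp_decrypt = otp_encrypt

message = b"ATTACK AT DAWN"
pad = secrets.token_bytes(len(message))     # fresh one-time pad
assert otp_decrypt(otp_encrypt(message, pad), pad) == message
```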
ICRF-Induced Changes in Floating Potential and Ion Saturation Current in the EAST Divertor
NASA Astrophysics Data System (ADS)
Perkins, Rory; Hosea, Joel; Taylor, Gary; Bertelli, Nicola; Kramer, Gerrit; Qin, Chengming; Wang, Liang; Yang, Jichan; Zhang, Xinjun
2017-10-01
Injection of waves in the ion cyclotron range of frequencies (ICRF) into a tokamak can potentially raise the plasma potential via RF rectification. Probes are affected both by changes in plasma potential and by RF-averaging of the probe characteristic, with the latter tending to lower the floating potential. We present the effect of ICRF heating on divertor Langmuir probes in the EAST experiment. Over a scan of the outer gap, probes connected to the antennas show increases in floating potential with ICRF, but probes between the outer-vessel strike point and the flux surface tangent to the antenna show decreased floating potential. This behaviour is investigated using field-line mapping. Preliminary results show that midplane gas puffing can suppress the strong influence of ICRF on the probes' floating potential.
Lithium-ion drifting: Application to the study of point defects in floating-zone silicon
NASA Technical Reports Server (NTRS)
Walton, J. T.; Wong, Y. K.; Zulehner, W.
1997-01-01
The use of lithium-ion (Li(+)) drifting to study the properties of point defects in p-type Floating-Zone (FZ) silicon crystals is reported. The Li(+) drift technique is used to detect the presence of vacancy-related defects (D defects) in certain p-type FZ silicon crystals. SUPREM-IV modeling suggests that the silicon point defect diffusivities are considerably higher than those commonly accepted, but are in reasonable agreement with values recently proposed. These results demonstrate the utility of Li(+) drifting in the study of silicon point defect properties in p-type FZ crystals. Finally, a straightforward measurement of the Li(+) compensation depth is shown to yield estimates of the vacancy-related defect concentration in p-type FZ crystals.
Wang, Jun; Cui, Xiao; Ni, Huan-Huan; Huang, Chun-Shui; Zhou, Cui-Xia; Wu, Ji; Shi, Jun-Chao; Wu, Yi
2013-04-01
To compare the efficacy in relieving shoulder pain of post-stroke shoulder-hand syndrome among floating acupuncture, oral administration of western medicine and local fumigation of Chinese herbs. Ninety cases of post-stroke shoulder-hand syndrome (stage I) were randomized into a floating acupuncture group, a western medicine group and a local Chinese herbs fumigation group, 30 cases in each one. In the floating acupuncture group, two obvious tender points were detected on the shoulder and the site 80-100 mm inferior to each tender point was taken as the inserting point and stimulated with the floating needling technique. In the western medicine group, Mobic (meloxicam) 7.5 mg was prescribed for oral administration. In the local Chinese herbs fumigation group, a formula for activating blood circulation and relaxing tendons was used for local fumigation. All the patients in the three groups received rehabilitation training. The floating acupuncture, oral administration of western medicine and local Chinese herbs fumigation were each given once a day in the corresponding group, together with rehabilitation training, and the cases were observed for 1 month. The visual analogue scale (VAS) and the Takagishi shoulder joint function assessment were adopted to evaluate the dynamic change in shoulder pain before and after treatment in the three groups. The modified Barthel index was used to evaluate the dynamic change in activities of daily living in the three groups. With floating acupuncture, shoulder pain was relieved and activities of daily living were improved in patients with post-stroke shoulder-hand syndrome, with results superior to those of oral administration of western medicine and local Chinese herbs fumigation (P < 0.01). With local Chinese herbs fumigation, the improvement in shoulder pain was superior to that with oral administration of western medicine.
The difference in the improvement of activities of daily living was not statistically significant between local Chinese herbs fumigation and oral administration of western medicine; the efficacy of these two therapies was similar (P > 0.05). Floating acupuncture relieves shoulder pain in patients with post-stroke shoulder-hand syndrome promptly and effectively, and its effects on shoulder pain and on activities of daily living are superior to those of oral administration of western medicine and local Chinese herbs fumigation.
Expert Systems on Multiprocessor Architectures. Volume 4. Technical Reports
1991-06-01
Floated-Current-Time0 -> The time that this function is called, in user time units, expressed as a floating point number. Halt-Poligono Arrests the...default a statistics file will be printed out, if it can be. To prevent this make No-Statistics true. Unhalt-Poligono Unarrests the process in which the
76 FR 19290 - Safety Zone; Commencement Bay, Tacoma, WA
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-07
... the following points Latitude 47[deg]17'38'' N, Longitude 122[deg]28'43'' W; thence south easterly to... protruding from the shoreline along Ruston Way. Floating markers will be placed by the sponsor of the event... rectangle protruding from the shoreline along Ruston Way. Floating markers will be placed by the sponsor of...
40 CFR 63.685 - Standards: Tanks.
Code of Federal Regulations, 2010 CFR
2010-07-01
... in paragraph (c)(2)(i) of this section when a tank is used as an interim transfer point to transfer... fixed-roof tank equipped with an internal floating roof in accordance with the requirements specified in paragraph (e) of this section; (2) A tank equipped with an external floating roof in accordance with the...
Design of Efficient Mirror Adder in Quantum- Dot Cellular Automata
NASA Astrophysics Data System (ADS)
Mishra, Prashant Kumar; Chattopadhyay, Manju K.
2018-03-01
Lower power consumption is an essential demand for portable multimedia systems using digital signal processing algorithms and architectures. Quantum-dot cellular automata (QCA) is an emerging nanotechnology for the development of high-performance, ultra-dense, low-power digital circuits. Several efficient QCA-based binary and decimal arithmetic circuits have been implemented; however, important improvements are still possible. This paper demonstrates a mirror adder circuit design in QCA. We present a comparative study of mirror adder cells designed using the conventional CMOS technique and mirror adder cells designed using quantum-dot cellular automata. QCA-based mirror adders are better in terms of area by a factor of three.
Dynamical genetic programming in XCSF.
Preen, Richard J; Bull, Larry
2013-01-01
A number of representation schemes have been presented for use within learning classifier systems, ranging from binary encodings to artificial neural networks. This paper presents results from an investigation into using a temporally dynamic symbolic representation within the XCSF learning classifier system. In particular, dynamical arithmetic networks are used to represent the traditional condition-action production system rules to solve continuous-valued reinforcement learning problems and to perform symbolic regression, finding competitive performance with traditional genetic programming on a number of composite polynomial tasks. In addition, the network outputs are later repeatedly sampled at varying temporal intervals to perform multistep-ahead predictions of a financial time series.
Oil/gas collector/separator for underwater oil leaks
Henning, Carl D.
1993-01-01
An oil/gas collector/separator for recovery of oil leaking, for example, from an offshore or underwater oil well. The separator is floated over the point of the leak and tethered in place so as to receive oil/gas floating, or forced under pressure, toward the water surface from either a broken or leaking oil well casing, line, or sunken ship. The separator is provided with a downwardly extending skirt to contain the oil/gas, which floats or is forced upward into a dome wherein the gas is separated from the oil/water, with the gas being flared (burned) at the top of the dome, and the oil is separated from water and pumped to a point of use. Since the density of oil is less than that of water, it can be easily separated from any water entering the dome.
Evaluation of floating-point sum or difference of products in carry-save domain
NASA Technical Reports Server (NTRS)
Wahab, A.; Erdogan, S.; Premkumar, A. B.
1992-01-01
An architecture to evaluate a 24-bit floating-point sum or difference of products using modified sequential carry-save multipliers with extensive pipelining is described. The basic building block of the architecture is a carry-save multiplier with built-in mantissa alignment for the summation during the multiplication cycles. A carry-save adder, capable of mantissa alignment, correctly positions products with the current carry-save sum. Carry propagation in individual multipliers is avoided and is only required once to produce the final result.
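The deferred-carry idea behind this architecture is easy to model in software. Below is a minimal Python sketch of generic carry-save accumulation (an illustration of the 3:2 compression principle only, not of the paper's 24-bit mantissa-aligning hardware): each new operand is folded into a (sum, carry) pair with bitwise operations, and the single carry-propagating addition happens only once, at the end.

```python
def cs_add(s, c, x):
    """Carry-save addition: fold operand x into the (sum, carry)
    pair without propagating carries (3:2 compression)."""
    new_s = s ^ c ^ x                            # bitwise sum, carries ignored
    new_c = ((s & c) | (s & x) | (c & x)) << 1   # generated carries, shifted
    return new_s, new_c

def cs_resolve(s, c):
    """The single carry-propagating addition, done once at the end."""
    return s + c

# Accumulate 13 + 7 + 5 in carry-save form; carries propagate only once.
s, c = 0, 0
for operand in (13, 7, 5):
    s, c = cs_add(s, c, operand)
assert cs_resolve(s, c) == 25
```

In hardware each `cs_add` is a row of independent full adders with no carry chain, which is what makes the per-cycle latency constant regardless of word width.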
Floating-point scaling technique for sources separation automatic gain control
NASA Astrophysics Data System (ADS)
Fermas, A.; Belouchrani, A.; Ait-Mohamed, O.
2012-07-01
Based on the floating-point representation and taking advantage of scaling factor indetermination in blind source separation (BSS) processing, we propose a scaling technique applied to the separation matrix, to avoid saturation or weakness in the recovered source signals. This technique performs automatic gain control in an on-line BSS environment. We demonstrate the effectiveness of this technique using the implementation of a division-free BSS algorithm with two inputs and two outputs. The proposed technique is computationally cheaper and more efficient for a hardware implementation than Euclidean normalisation.
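The underlying trick, rescaling by powers of two so that only the floating-point exponent changes, can be sketched in a few lines of Python. This is an illustrative model of power-of-two automatic gain control, not the authors' division-free BSS implementation; `target_peak` and the landing range chosen below are assumptions made for the example.

```python
import math

def pow2_agc(samples, target_peak=0.5):
    """Rescale by a power of two so the peak magnitude lands in
    [target_peak/4, target_peak). Power-of-two scaling adjusts only
    the floating-point exponent, so it introduces no rounding error."""
    peak = max(abs(x) for x in samples)
    if peak == 0.0:
        return list(samples), 0
    # frexp(v) returns (m, p) with v = m * 2**p and m in [0.5, 1).
    e = math.frexp(target_peak)[1] - math.frexp(peak)[1] - 1
    return [math.ldexp(x, e) for x in samples], e

# A loud block is pulled below the target peak exactly, no division needed.
scaled, e = pow2_agc([3000.0, -1200.0, 750.0])
assert max(abs(x) for x in scaled) < 0.5
```

Because `ldexp` only shifts the exponent field, the mantissas of the separated signals are preserved bit-for-bit, which is why this style of gain control is attractive in fixed hardware pipelines.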
How Math Anxiety Relates to Number-Space Associations.
Georges, Carrie; Hoffmann, Danielle; Schiltz, Christine
2016-01-01
Given the considerable prevalence of math anxiety, it is important to identify the factors contributing to it in order to improve mathematical learning. Research on math anxiety typically focusses on the effects of more complex arithmetic skills. Recent evidence, however, suggests that deficits in basic numerical processing and spatial skills also constitute potential risk factors of math anxiety. Given these observations, we determined whether math anxiety also depends on the quality of spatial-numerical associations. Behavioral evidence for a tight link between numerical and spatial representations is given by the SNARC (spatial-numerical association of response codes) effect, characterized by faster left-/right-sided responses for small/large digits respectively in binary classification tasks. We compared the strength of the SNARC effect between high and low math anxious individuals using the classical parity judgment task in addition to evaluating their spatial skills, arithmetic performance, working memory and inhibitory control. Greater math anxiety was significantly associated with stronger spatio-numerical interactions. This finding adds to the recent evidence supporting a link between math anxiety and basic numerical abilities and strengthens the idea that certain characteristics of low-level number processing such as stronger number-space associations constitute a potential risk factor of math anxiety.
Onboard Data Processors for Planetary Ice-Penetrating Sounding Radars
NASA Astrophysics Data System (ADS)
Tan, I. L.; Friesenhahn, R.; Gim, Y.; Wu, X.; Jordan, R.; Wang, C.; Clark, D.; Le, M.; Hand, K. P.; Plaut, J. J.
2011-12-01
Among the many concerns faced by outer planetary missions, science data storage and transmission hold special significance. Such missions must contend with limited onboard storage, brief data downlink windows, and low downlink bandwidths. A potential solution to these issues lies in employing onboard data processors (OBPs) to convert raw data into products that are smaller and closely capture relevant scientific phenomena. In this paper, we present the implementation of two OBP architectures for ice-penetrating sounding radars tasked with exploring Europa and Ganymede. Our first architecture utilizes an unfocused processing algorithm extended from the Mars Advanced Radar for Subsurface and Ionosphere Sounding (MARSIS; Jordan et al., 2009). Compared to downlinking raw data, we are able to reduce data volume by approximately 100 times through OBP usage. To ensure the viability of our approach, we have implemented, simulated, and synthesized this architecture using both VHDL and Matlab models (with fixed-point and floating-point arithmetic) in conjunction with Modelsim. Creation of a VHDL model of our processor is the principal step in transitioning to actual digital hardware, whether in a FPGA (field-programmable gate array) or an ASIC (application-specific integrated circuit), and successful simulation and synthesis strongly indicate feasibility. In addition, we examined the tradeoffs faced in the OBP between fixed-point accuracy, resource consumption, and data product fidelity. Our second architecture is based upon a focused fast back projection (FBP) algorithm that requires a modest amount of computing power and on-board memory while yielding high along-track resolution and improved slope detection capability. We present an overview of the algorithm and details of our implementation, also in VHDL. With the appropriate tradeoffs, the use of OBPs can significantly reduce data downlink requirements without sacrificing data product fidelity.
Through the development, simulation, and synthesis of two different OBP architectures, we have proven the feasibility and efficacy of an OBP for planetary ice-penetrating radars.
NASA Astrophysics Data System (ADS)
Li, W.; Shigeta, K.; Hasegawa, K.; Li, L.; Yano, K.; Tanaka, S.
2017-09-01
Recently, laser-scanning technology, especially mobile mapping systems (MMSs), has been applied to measure 3D urban scenes. Thus, it has become possible to simulate a traditional cultural event in a virtual space constructed from measured point clouds. In this paper, we take as a case study the festival float procession of the Gion Festival, which has a long history in Kyoto City, Japan. The city government plans to revive the original procession route, which is narrow and not used at present. For the revival, it is important to know whether a festival float collides with houses, billboards, electric wires or other objects along the original route. Therefore, in this paper, we propose a method for visualizing the collisions of point cloud objects. The advantageous features of our method are (1) a see-through visualization with a correct depth feel that is helpful to robustly determine the collision areas, (2) the ability to visualize areas of high collision risk as well as real collision areas, and (3) the ability to highlight target visualized areas by increasing the point densities there.
Rosenberg-Lee, Miriam; Chang, Ting Ting; Young, Christina B; Wu, Sarah; Menon, Vinod
2011-01-01
Although lesion studies over the past several decades have focused on functional dissociations in posterior parietal cortex (PPC) during arithmetic, no consistent view has emerged of its differential involvement in addition, subtraction, multiplication, and division. To circumvent problems with poor anatomical localization, we examined functional overlap and dissociations in cytoarchitectonically defined subdivisions of the intraparietal sulcus (IPS), superior parietal lobule (SPL) and angular gyrus (AG), across these four operations. Compared to a number identification control task, all operations except addition showed a consistent profile of left posterior IPS activation and deactivation in the right posterior AG. Multiplication and subtraction differed significantly in right, but not left, IPS and AG activity, challenging the view that the left AG differentially subserves retrieval during multiplication. Although addition and multiplication both rely on retrieval, multiplication evoked significantly greater activation in right posterior IPS, as well as the prefrontal cortex, lingual and fusiform gyri, demonstrating that addition and multiplication engage different brain processes. Comparison of PPC responses to the two pairs of inverse operations, division vs. multiplication and subtraction vs. addition, revealed greater activation of left lateral SPL during division, suggesting that processing inverse relations is operation specific. Our findings demonstrate that individual IPS, SPL and AG subdivisions are differentially modulated by the four arithmetic operations and they point to significant functional heterogeneity and individual differences in activation and deactivation within the PPC. Critically, these effects are related to retrieval, calculation and inversion, the three key cognitive processes that are differentially engaged by arithmetic operations.
Our findings point to distributed representation of these processes in the human PPC and also help explain why lesion and previous imaging studies have yielded inconsistent findings. PMID:21616086
Advanced hydrogen electrode for hydrogen-bromide battery
NASA Technical Reports Server (NTRS)
Kosek, Jack A.; Laconti, Anthony B.
1987-01-01
Binary platinum alloys are being developed as hydrogen electrocatalysts for use in a hydrogen bromide battery system. These alloys were varied in terms of alloy component mole ratio and heat treatment temperature. Electrocatalyst evaluation, performed in the absence and presence of bromide ion, includes floating half-cell polarization studies, electrochemical surface area measurements, X-ray diffraction analysis, scanning electron microscopy analysis and corrosion measurements. Results obtained to date indicate a platinum-rich alloy has the best tolerance to bromide ion poisoning.
Does size and buoyancy affect the long-distance transport of floating debris?
NASA Astrophysics Data System (ADS)
Ryan, Peter G.
2015-08-01
Floating persistent debris, primarily made from plastic, disperses long distances from source areas and accumulates in oceanic gyres. However, biofouling can increase the density of debris items to the point where they sink. Buoyancy is related to item volume, whereas fouling is related to surface area, so small items (which have high surface area to volume ratios) should start to sink sooner than large items. Empirical observations off South Africa support this prediction: moving offshore from coastal source areas there is an increase in the size of floating debris, an increase in the proportion of highly buoyant items (e.g. sealed bottles, floats and foamed plastics), and a decrease in the proportion of thin items such as plastic bags and flexible packaging which have high surface area to volume ratios. Size-specific sedimentation rates may be one reason for the apparent paucity of small plastic items floating in the world’s oceans.
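The surface-area-to-volume argument is simple arithmetic. For spheres (an idealisation chosen here purely for illustration; real debris shapes vary widely) the ratio is 3/r, so a tenfold reduction in size means ten times more fouling area per unit of buoyant volume:

```python
import math

def sa_to_vol(r):
    """Surface-area-to-volume ratio of a sphere of radius r.
    Analytically this equals 3/r."""
    return (4 * math.pi * r**2) / ((4 / 3) * math.pi * r**3)

# A 1 mm pellet carries 100x more fouling area per unit of buoyant
# volume than a 100 mm float, so biofouling sinks it much sooner.
ratio = sa_to_vol(0.001) / sa_to_vol(0.1)
assert math.isclose(ratio, 100.0)
```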
NASA Astrophysics Data System (ADS)
Li, Jun; Qin, Qiming; Xie, Chao; Zhao, Yue
2012-10-01
The update frequency of digital road maps influences the quality of road-dependent services. However, digital road maps surveyed by probe vehicles or extracted from remotely sensed images still have a long updating cycle and their cost remains high. With GPS technology and wireless communication technology maturing and their costs decreasing, floating car technology has been used in traffic monitoring and management, and the dynamic positioning data from floating cars have become a new data source for updating road maps. In this paper, we aim to update digital road maps using the floating car data from China's National Commercial Vehicle Monitoring Platform, and present an incremental road network extraction method suitable for the platform's GPS data, whose sampling frequency is low and which cover a large area. Based on both spatial and semantic relationships between a trajectory point and its associated road segment, the method classifies each trajectory point, and then merges every trajectory point into the candidate road network through an adding or modifying process according to its type. The road network is gradually updated until all trajectories have been processed. Finally, this method is applied to the updating of major roads in North China, and the experimental results reveal that it can accurately derive geometric information of roads under various scenes. This paper provides a highly efficient, low-cost approach to updating digital road maps.
The use of ZFP lossy floating point data compression in tornado-resolving thunderstorm simulations
NASA Astrophysics Data System (ADS)
Orf, L.
2017-12-01
In the field of atmospheric science, numerical models are used to produce forecasts of weather and climate and serve as virtual laboratories for scientists studying atmospheric phenomena. In both operational and research arenas, atmospheric simulations exploiting modern supercomputing hardware can produce a tremendous amount of data. During model execution, the transfer of floating point data from memory to the file system is often a significant bottleneck where I/O can dominate wallclock time. One way to reduce the I/O footprint is to compress the floating point data, which reduces the amount of data saved to the file system. In this presentation we introduce LOFS, a file system developed specifically for use in three-dimensional numerical weather models that are run on massively parallel supercomputers. LOFS utilizes the core (in-memory buffered) HDF5 driver and includes several compression options, including ZFP, a lossy floating point data compression algorithm. ZFP offers several mechanisms for specifying the amount of lossy compression to be applied to floating point data, including the ability to specify the maximum absolute error allowed in each compressed 3D array. We explore different maximum error tolerances in a tornado-resolving supercell thunderstorm simulation for model variables including cloud and precipitation, temperature, wind velocity and vorticity magnitude. We find that average compression ratios exceeding 20:1 in scientifically interesting regions of the simulation domain produce visually identical results to uncompressed data in visualizations and plots. Since LOFS splits the model domain across many files, compression ratios for a given error tolerance can be compared across different locations within the model domain.
We find that regions of high spatial variability (which tend to be where scientifically interesting things are occurring) show the lowest compression ratios, whereas regions of the domain with little spatial variability compress extremely well. We observe that the overhead for compressing data with ZFP is low, and that compressing data in memory reduces the amount of memory overhead needed to store the virtual files before they are flushed to disk.
33 CFR 162.130 - Connecting waters from Lake Huron to Lake Erie; general rules.
Code of Federal Regulations, 2010 CFR
2010-07-01
... vessel astern, alongside, or by pushing ahead; and (iii) Each dredge and floating plant. (4) The traffic... towing another vessel astern, alongside or by pushing ahead; and (iv) Each dredge and floating plant. (c... Captain of the Port of Detroit, Michigan. Detroit River means the connecting waters from Windmill Point...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-10
...]37[min]10.0[sec] W; thence easterly along the Marinette Marine Corporation pier to the point of origin. The restricted area will be marked by a lighted and signed floating boat barrier. (b) The... floating boat barrier without permission from the United States Navy, Supervisor of Shipbuilding Gulf Coast...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-24
... changed so that the restricted area could be marked with a signed floating buoy line instead of a signed floating barrier. That change has been made to the final rule. Procedural Requirements a. Review Under...; thence easterly along the Marinette Marine Corporation pier to the point of origin. The restricted area...
A Floating Cylinder on an Unbounded Bath
NASA Astrophysics Data System (ADS)
Chen, Hanzhe; Siegel, David
2018-03-01
In this paper, we reconsider a circular cylinder horizontally floating on an unbounded reservoir in a gravitational field directed downwards, which was studied by Bhatnagar and Finn (Phys Fluids 18(4):047103, 2006). We follow their approach but with some modifications. We establish the relation between the total energy E_T relative to the undisturbed state and the total force F_T, that is, F_T = -dE_T/dh, where h is the height of the center of the cylinder relative to the undisturbed fluid level. There is a monotone relation between h and the wetting angle φ_0. We study the number of equilibria, the floating configurations and their stability for all parameter values. We find that the system admits at most two equilibrium points for arbitrary contact angle γ; the one with smaller φ_0 is stable and the one with larger φ_0 is unstable. Since the one-sided solution can be translated horizontally, the fluid interfaces may intersect. We show that the stable equilibrium point never lies in the intersection region, while the unstable equilibrium point may lie in the intersection region.
NASA Astrophysics Data System (ADS)
Cruz Jiménez, Miriam Guadalupe; Meyer Baese, Uwe; Jovanovic Dolecek, Gordana
2017-12-01
New theoretical lower bounds for the number of operators needed in fixed-point constant multiplication blocks are presented. The multipliers are constructed with the shift-and-add approach, where every arithmetic operation is pipelined, and with the generalization that n-input pipelined additions/subtractions are allowed, along with pure pipelining registers. These lower bounds, tighter than the state-of-the-art theoretical limits, are particularly useful in early design stages for a quick assessment in the hardware utilization of low-cost constant multiplication blocks implemented in the newest families of field programmable gate array (FPGA) integrated circuits.
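A shift-and-add constant multiplier of the kind these bounds cover can be sketched in Python. Canonical signed-digit (CSD) recoding, used here for illustration (the paper's bounds cover more general adder graphs), expresses the constant with digits in {+1, -1} so that each digit costs one shifted addition or subtraction:

```python
def csd_digits(c):
    """Canonical signed-digit recoding of a positive constant:
    yields (digit, shift) pairs with digit in {+1, -1}."""
    shift = 0
    while c:
        if c & 1:
            d = 2 - (c & 3)   # +1 if bits end in 01, -1 if they end in 11
            yield d, shift
            c -= d            # clear the low bit (may ripple a carry)
        c >>= 1
        shift += 1

def const_mult(x, c):
    """Multiply x by constant c using only shifts and add/subtract."""
    return sum(d * (x << s) for d, s in csd_digits(c))

# 7 recodes as 8 - 1: one shift and one subtraction instead of two adds.
assert list(csd_digits(7)) == [(-1, 0), (1, 3)]
assert const_mult(13, 7) == 91
```

In hardware each yielded pair becomes one (possibly pipelined) adder/subtractor with a hard-wired shift, which is exactly the operator count the lower bounds constrain.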
ERIC Educational Resources Information Center
Lupi, Marsha Mead
1979-01-01
The article illustrates the use of commercial jingles as high interest, low-level reading and language arts materials for primary age mildly retarded students. It is pointed out that jingles can be used in teaching initial consonants, vocabulary words, and arithmetic concepts. (SBH)
Fast Image Texture Classification Using Decision Trees
NASA Technical Reports Server (NTRS)
Thompson, David R.
2011-01-01
Texture analysis would permit improved autonomous, onboard science data interpretation for adaptive navigation, sampling, and downlink decisions. These analyses would assist with terrain analysis and instrument placement in both macroscopic and microscopic image data products. Unfortunately, most state-of-the-art texture analysis demands computationally expensive convolutions of filters involving many floating-point operations. This makes them infeasible for radiation-hardened computers and spaceflight hardware. A new method approximates traditional texture classification of each image pixel with a fast decision-tree classifier. The classifier uses image features derived from simple filtering operations involving integer arithmetic. The texture analysis method is therefore amenable to implementation on FPGA (field-programmable gate array) hardware. Image features based on the "integral image" transform produce descriptive and efficient texture descriptors. Training the decision tree on a set of training data yields a classification scheme that produces reasonable approximations of optimal "texton" analysis at a fraction of the computational cost. A decision-tree learning algorithm employing the traditional k-means criterion of inter-cluster variance is used to learn tree structure from training data. The result is an efficient and accurate summary of surface morphology in images. This work is an evolutionary advance that unites several previous algorithms (k-means clustering, integral images, decision trees) and applies them to a new problem domain (morphology analysis for autonomous science during remote exploration). Advantages include order-of-magnitude improvements in runtime, feasibility for FPGA hardware, and significant improvements in texture classification accuracy.
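The integral-image transform such features rely on can be sketched in pure integer Python (a generic illustration of the summed-area-table technique, not the classifier's specific feature set). Once the table is built, any rectangular box sum costs at most four lookups:

```python
def integral_image(img):
    """Summed-area table: ii[r][c] = sum of img[0..r][0..c].
    Integer arithmetic only, one pass over the image."""
    rows, cols = len(img), len(img[0])
    ii = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        run = 0
        for c in range(cols):
            run += img[r][c]
            ii[r][c] = run + (ii[r - 1][c] if r else 0)
    return ii

def box_sum(ii, r0, c0, r1, c1):
    """Sum over the inclusive rectangle [r0..r1] x [c0..c1] in O(1)."""
    total = ii[r1][c1]
    if r0: total -= ii[r0 - 1][c1]
    if c0: total -= ii[r1][c0 - 1]
    if r0 and c0: total += ii[r0 - 1][c0 - 1]
    return total

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
ii = integral_image(img)
assert box_sum(ii, 0, 0, 2, 2) == 45            # whole image
assert box_sum(ii, 1, 1, 2, 2) == 5 + 6 + 8 + 9  # any box: four lookups
```

Because both construction and queries use only integer adds and subtracts, the approach maps naturally onto FPGA logic, which is the point made above.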
NASA Astrophysics Data System (ADS)
Eriksen, Janus J.
2017-09-01
It is demonstrated how the non-proprietary OpenACC standard of compiler directives may be used to compactly and efficiently accelerate the rate-determining steps of two of the most routinely applied many-body methods of electronic structure theory, namely the second-order Møller-Plesset (MP2) model in its resolution-of-the-identity approximated form and the (T) triples correction to the coupled cluster singles and doubles model (CCSD(T)). By means of compute directives as well as the use of optimised device math libraries, the operations involved in the energy kernels have been ported to graphics processing unit (GPU) accelerators, and the associated data transfers correspondingly optimised to such a degree that the final implementations (using double and/or single precision arithmetic) are capable of scaling to systems as large as the capacity of the host central processing unit (CPU) main memory allows. The performance of the hybrid CPU/GPU implementations is assessed through calculations on test systems of alanine amino acid chains using one-electron basis sets of increasing size (ranging from double- to pentuple-ζ quality). For all but the smallest problem sizes of the present study, the optimised accelerated codes (using a single multi-core CPU host node in conjunction with six GPUs) are found to be capable of reducing the total time-to-solution by at least an order of magnitude over optimised, OpenMP-threaded CPU-only reference implementations.
NASA Astrophysics Data System (ADS)
Pi, E. I.; Siegel, E.
2010-03-01
Siegel [AMS Natl. Mtg. (2002) Abs. 973-60-124] digits logarithmic-law inversion to ONLY BEQS BEC: Quanta/Bosons = #: EMP-like SEVERE VULNERABILITY of ONLY #-networks (VS. ANALOG INvulnerability) via Barabasi NP (VS. dynamics [Not. AMS (5/2009)] critique); (so-called) "quantum-computing" (QC) = simple-arithmetic (sans division); algorithmic complexities: INtractibility/UNdecidability/INefficiency/NONcomputability/HARDNESS (so MIScalled) "noise"-induced-phase-transition (NIT) ACCELERATION: Cook-Levin theorem Reducibility = RG fixed-points; #-Randomness DEFINITION via WHAT? Query (VS. Goldreich [Not. AMS (2002)] How? mea culpa) = ONLY MBCS hot-plasma v #-clumping NON-random BEC; Modular-Arithmetic Congruences = Signal x Noise PRODUCTS = clock-model; NON-Shor [Physica A, 341, 586 (04)] BEC logarithmic-law inversion factorization: Watkins #-theory U statistical-physics); P=/=NP C-S TRIVIAL Proof: Euclid!!! [(So MIScalled) computational-complexity J-O obviation (3 millennia AGO geometry: NO: CC, "CS"; "Feet of Clay!!!"]; Query WHAT?: Definition: (so MIScalled) "complexity" = UTTER-SIMPLICITY!! v COMPLICATEDNESS MEASURE(S).
Arithmetic learning with the use of graphic organiser
NASA Astrophysics Data System (ADS)
Sai, F. L.; Shahrill, M.; Tan, A.; Han, S. H.
2018-01-01
For this study, Zollman's four corners-and-a-diamond mathematics graphic organiser, embedded with Polya's Problem Solving Model, was used to investigate secondary school students' performance in arithmetic word problems. This instructional learning tool was used to help students break down the given information into smaller units for better strategic planning. The participants were Year 7 students, comprising 21 male and 20 female students aged 11-13 years, from a co-ed secondary school in Brunei Darussalam. This study mainly adopted a quantitative approach to investigate the types of differences found between the arithmetic word problem pre- and post-test results from the use of the learning tool. Although the findings revealed slight improvements in the overall comparisons of the students' test results, the in-depth analysis of the students' responses in their activity worksheets shows a different outcome. Some students were able to make good attempts at breaking down the key points into smaller pieces of information in order to solve the word problems.
A test data compression scheme based on irrational numbers stored coding.
Wu, Hai-feng; Cheng, Yu-sheng; Zhan, Wen-fa; Cheng, Yi-fei; Wu, Qiong; Zhu, Shi-juan
2014-01-01
Testing has already become an important factor restricting the development of the integrated circuit industry. A new test data compression scheme, namely irrational numbers stored (INS), is presented. To compress test data efficiently, the test data are converted into floating-point numbers and stored in the form of irrational numbers. An algorithm for precisely converting a floating-point number into an irrational number is given. Experimental results for some ISCAS 89 benchmarks show that the compression effect of the proposed scheme is better than that of coding methods such as FDR, AARLC, INDC, FAVLC, and VRL.
26 CFR 1.1274-2 - Issue price of debt instruments to which section 1274 applies.
Code of Federal Regulations, 2010 CFR
2010-04-01
...- borrower to the seller-lender that is designated as interest or points. See Example 2 of § 1.1273-2(g)(5... ignored. (f) Treatment of variable rate debt instruments—(1) Stated interest at a qualified floating rate... qualified floating rate (or rates) is determined by assuming that the instrument provides for a fixed rate...
76 FR 71322 - Taking and Importing Marine Mammals; U.S. Navy Training in the Hawaii Range Complex
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-17
..., most operationally sound method of initiating a demolition charge on a floating mine or mine at depth...; require building/ deploying an improvised, bulky, floating system for the receiver; and add another 180 ft... charge initiating device are taken to the detonation point. Military forms of C-4 are used as the...
Shahan, M R; Seaman, C E; Beck, T W; Colinet, J F; Mischler, S E
2017-09-01
Float coal dust is produced by various mining methods, carried by ventilating air and deposited on the floor, roof and ribs of mine airways. If deposited, float dust is re-entrained during a methane explosion. Without sufficient inert rock dust quantities, this float coal dust can propagate an explosion throughout mining entries. Consequently, controlling float coal dust is of critical interest to mining operations. Rock dusting, which is the adding of inert material to airway surfaces, is the main control technique currently used by the coal mining industry to reduce the float coal dust explosion hazard. To assist the industry in reducing this hazard, the Pittsburgh Mining Research Division of the U.S. National Institute for Occupational Safety and Health initiated a project to investigate methods and technologies to reduce float coal dust in underground coal mines through prevention, capture and suppression prior to deposition. Field characterization studies were performed to determine quantitatively the sources, types and amounts of dust produced during various coal mining processes. The operations chosen for study were a continuous miner section, a longwall section and a coal-handling facility. For each of these operations, the primary dust sources were confirmed to be the continuous mining machine, longwall shearer and conveyor belt transfer points, respectively. Respirable and total airborne float dust samples were collected and analyzed for each operation, and the ratio of total airborne float coal dust to respirable dust was calculated. During the continuous mining process, the ratio of total airborne float coal dust to respirable dust ranged from 10.3 to 13.8. The ratios measured on the longwall face were between 18.5 and 21.5. The total airborne float coal dust to respirable dust ratio observed during belt transport ranged between 7.5 and 21.8.
A floating-point digital receiver for MRI.
Hoenninger, John C; Crooks, Lawrence E; Arakawa, Mitsuaki
2002-07-01
A magnetic resonance imaging (MRI) system requires the highest possible signal fidelity and stability for clinical applications. Quadrature analog receivers have problems with channel matching, dc offset and analog-to-digital linearity. Fixed-point digital receivers (DRs) reduce all of these problems. We have demonstrated that a floating-point DR using large (order 124 to 512) FIR low-pass filters also overcomes these problems, automatically provides long word length and has low latency between signals. A preloaded table of finite impulse response (FIR) filter coefficients provides fast switching among 129 different one-stage and two-stage multirate FIR low-pass filters with bandwidths between 4 kHz and 125 kHz. This design has been implemented on a dual-channel circuit board for a commercial MRI system.
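The kind of FIR low-pass filter bank described above can be illustrated with a standard windowed-sinc design. The sample rate, cutoff, and tap count below are hypothetical stand-ins, not values from the paper, and this is a sketch of the general design technique rather than the authors' implementation:

```python
import math

def windowed_sinc_lowpass(cutoff_hz, fs_hz, ntaps):
    """Linear-phase FIR low-pass design: ideal sinc impulse response
    truncated to ntaps coefficients and shaped by a Hamming window."""
    fc = cutoff_hz / fs_hz            # normalized cutoff, cycles/sample
    m = ntaps - 1
    taps = []
    for n in range(ntaps):
        k = n - m / 2.0
        # ideal low-pass impulse response, with the center tap handled separately
        h = 2.0 * fc if k == 0 else math.sin(2.0 * math.pi * fc * k) / (math.pi * k)
        w = 0.54 - 0.46 * math.cos(2.0 * math.pi * n / m)   # Hamming window
        taps.append(h * w)
    return taps

# hypothetical numbers: 4 kHz cutoff at a 250 kHz sample rate, 129 taps
taps = windowed_sinc_lowpass(4000.0, 250000.0, 129)
print(len(taps), round(sum(taps), 3))   # DC gain (sum of taps) is close to 1
```

A receiver like the one described would precompute many such tap tables offline and switch among them at run time.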
Fast Localization in Large-Scale Environments Using Supervised Indexing of Binary Features.
Youji Feng; Lixin Fan; Yihong Wu
2016-01-01
The essence of image-based localization lies in matching 2D key points in the query image and 3D points in the database. State-of-the-art methods mostly employ sophisticated key point detectors and feature descriptors, e.g., Difference of Gaussians (DoG) and the Scale Invariant Feature Transform (SIFT), to ensure robust matching. While a high registration rate is attained, the registration speed is impeded by the expensive key point detection and descriptor extraction. In this paper, we propose to use efficient key point detectors along with binary feature descriptors, since the extraction of such binary features is extremely fast. The naive usage of binary features, however, does not lend itself to significant speedup of localization, since existing indexing approaches, such as hierarchical clustering trees and locality sensitive hashing, are not efficient enough at indexing binary features, and matching binary features turns out to be much slower than matching SIFT features. To overcome this, we propose a much more efficient indexing approach for approximate nearest neighbor search of binary features. This approach resorts to randomized trees that are constructed in a supervised training process by exploiting the label information derived from the fact that multiple features correspond to a common 3D point. In the tree construction process, node tests are selected in a way such that trees have uniform leaf sizes and low error rates, which are two desired properties for efficient approximate nearest neighbor search. To further improve the search efficiency, a probabilistic priority search strategy is adopted. Apart from the label information, this strategy also uses the non-binary pixel intensity differences available in descriptor extraction. By using the proposed indexing approach, matching binary features is no longer much slower but slightly faster than matching SIFT features.
Consequently, the overall localization speed is significantly improved due to the much faster key point detection and descriptor extraction. It is empirically demonstrated that the localization speed is improved by an order of magnitude as compared with state-of-the-art methods, while comparable registration rate and localization accuracy are still maintained.
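The speed advantage of binary descriptors comes from Hamming-distance matching, which reduces to XOR and a population count. The brute-force matcher below is the naive baseline that a tree index such as the one proposed is designed to beat; the descriptors are toy 8-bit patterns, not real image features:

```python
def hamming(a, b):
    """Hamming distance between two descriptors packed as integers:
    XOR, then count the set bits."""
    return bin(a ^ b).count("1")

def nearest(query, database):
    """Brute-force nearest neighbor in Hamming space."""
    return min(database, key=lambda d: hamming(query, d))

db = [0b1111_0000, 0b1010_1010, 0b0000_1111]
print(nearest(0b1110_0000, db) == 0b1111_0000)  # closest entry differs by one bit
```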
Zhai, H; Jones, D S; McCoy, C P; Madi, A M; Tian, Y; Andrews, G P
2014-10-06
The objective of this work was to investigate the feasibility of using a novel granulation technique, namely, fluidized hot melt granulation (FHMG), to prepare gastroretentive extended-release floating granules. In this study we have utilized FHMG, a solvent free process in which granulation is achieved with the aid of low melting point materials, using Compritol 888 ATO and Gelucire 50/13 as meltable binders, in place of conventional liquid binders. The physicochemical properties, morphology, floating properties, and drug release of the manufactured granules were investigated. Granules prepared by this method were spherical in shape and showed good flowability. The floating granules exhibited sustained release exceeding 10 h. Granule buoyancy (floating time and strength) and drug release properties were significantly influenced by formulation variables such as excipient type and concentration, and the physical characteristics (particle size, hydrophilicity) of the excipients. Drug release rate was increased by increasing the concentration of hydroxypropyl cellulose (HPC) and Gelucire 50/13, or by decreasing the particle size of HPC. Floating strength was improved through the incorporation of sodium bicarbonate and citric acid. Furthermore, floating strength was influenced by the concentration of HPC within the formulation. Granules prepared in this way show good physical characteristics, floating ability, and drug release properties when placed in simulated gastric fluid. Moreover, the drug release and floating properties can be controlled by modification of the ratio or physical characteristics of the excipients used in the formulation.
ERIC Educational Resources Information Center
Pierstorff, Don K.
1981-01-01
Parodies holistic approaches to education. Explains an educational approach which simultaneously teaches grammar and arithmetic. Lauds the advantages of the approach as high student attrition, ease of grading, and focus on developing the reptilian portion of the brain. Points out common errors made by students. (AYC)
26 CFR 1.483-2 - Unstated interest.
Code of Federal Regulations, 2010 CFR
2010-04-01
... percentage points above the yield on 6-month Treasury bills at the mid-point of the semiannual period immediately preceding each interest payment date. Assume that the interest rate is a qualified floating rate...
An O(log sup 2 N) parallel algorithm for computing the eigenvalues of a symmetric tridiagonal matrix
NASA Technical Reports Server (NTRS)
Swarztrauber, Paul N.
1989-01-01
An O(log sup 2 N) parallel algorithm is presented for computing the eigenvalues of a symmetric tridiagonal matrix using a parallel algorithm for computing the zeros of the characteristic polynomial. The method is based on a quadratic recurrence in which the characteristic polynomial is constructed on a binary tree from polynomials whose degree doubles at each level. Intervals that contain exactly one zero are determined by the zeros of polynomials at the previous level which ensures that different processors compute different zeros. The exact behavior of the polynomials at the interval endpoints is used to eliminate the usual problems induced by finite precision arithmetic.
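The interval-isolation idea above rests on the classical Sturm-count property of the characteristic-polynomial sequence for a symmetric tridiagonal matrix. The following is a minimal serial sketch of that count plus bisection, not the paper's O(log² N) parallel tree construction:

```python
def negcount(d, e, lam):
    """Number of eigenvalues of the symmetric tridiagonal matrix with
    diagonal d and off-diagonal e that are less than lam, obtained from
    the signs of the Sturm sequence of characteristic polynomials
    (equivalently, the pivots of the LDL^T factorization of A - lam*I)."""
    count, q = 0, 1.0
    for i in range(len(d)):
        q = d[i] - lam - (e[i - 1] ** 2 / q if i > 0 else 0.0)
        if q == 0.0:
            q = -1e-300        # perturb an exact zero, as finite precision demands
        if q < 0.0:
            count += 1
    return count

def kth_eigenvalue(d, e, k, lo, hi, tol=1e-12):
    """Bisection on the Sturm count isolates the k-th smallest eigenvalue;
    distinct k's live in disjoint intervals, so in a parallel setting each
    processor can refine a different eigenvalue independently."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if negcount(d, e, mid) <= k:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# [[2, 1], [1, 2]] has eigenvalues 1 and 3
print(round(kth_eigenvalue([2.0, 2.0], [1.0], 0, 0.0, 2.0), 9))
```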
Ando, S; Sekine, S; Mita, M; Katsuo, S
1989-12-15
An architecture and algorithms for matrix multiplication using optical flip-flops (OFFs) in optical processors are proposed, based on residue arithmetic. The proposed system is capable of processing all elements of the matrices in parallel by utilizing the information retrieving ability of optical Fourier processors. The employment of OFFs enables bidirectional data flow, leading to a simpler architecture, and the burden that residue-to-decimal (or residue-to-binary) conversion places on operation time can be largely reduced by processing all elements in parallel. The calculated characteristics of the operation time suggest a promising use of the system in real-time 2-D linear transforms.
A preliminary study of molecular dynamics on reconfigurable computers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wolinski, C.; Trouw, F. R.; Gokhale, M.
2003-01-01
In this paper we investigate the performance of platform FPGAs on a compute-intensive, floating-point-intensive supercomputing application, Molecular Dynamics (MD). MD is a popular simulation technique to track interacting particles through time by integrating their equations of motion. One part of the MD algorithm was implemented using the Fabric Generator (FG) and mapped onto several reconfigurable logic arrays. FG is a Java-based toolset that greatly accelerates construction of the fabrics from an abstract, technology-independent representation. Our experiments used technology-independent IEEE 32-bit floating point operators so that the design could be easily re-targeted. Experiments were performed using both non-pipelined and pipelined floating point modules. We present results for the Altera Excalibur ARM System on a Programmable Chip (SoPC), the Altera Stratix EP1S80, and the Xilinx Virtex-II Pro 2VP50. The best results obtained were 5.69 GFlops at 80 MHz (Altera Stratix EP1S80) and 4.47 GFlops at 82 MHz (Xilinx Virtex-II Pro 2VP50). Assuming a 10 W power budget, these results compare very favorably to a 4 Gflop/40 W processing/power rate for a modern Pentium, suggesting that reconfigurable logic can achieve high performance at low power on floating-point-intensive applications.
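The equations-of-motion integration that dominates MD is commonly a velocity-Verlet update. This plain-Python sketch of that floating-point kernel is illustrative only; it is not the FG-generated hardware pipeline, and the one-particle harmonic force is a stand-in for a real interaction potential:

```python
import math

def velocity_verlet(pos, vel, force, mass, dt, nsteps):
    """Velocity-Verlet integration of one particle's equation of motion,
    the floating-point kernel at the core of an MD time step."""
    f = force(pos)
    for _ in range(nsteps):
        pos = pos + vel * dt + 0.5 * (f / mass) * dt * dt
        f_new = force(pos)
        vel = vel + 0.5 * (f + f_new) / mass * dt
        f = f_new
    return pos, vel

# harmonic oscillator F = -x (k = m = 1): integrate one full period of 2*pi
x, v = velocity_verlet(1.0, 0.0, lambda p: -p, 1.0, 2.0 * math.pi / 1000, 1000)
print(round(x, 4), round(v, 4))   # should return near the start (1, 0)
```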
Wang, Zhu-lou; Zhang, Wan-jie; Li, Chen-xi; Chen, Wen-liang; Xu, Ke-xin
2015-02-01
There are some challenges in near-infrared non-invasive blood glucose measurement, such as the low signal-to-noise ratio of the instrument, unstable measurement conditions, and the unpredictable and irregular changes of the measured object. It is therefore difficult to extract blood glucose concentration information accurately from the complicated signals. A reference measurement is usually considered for eliminating the effect of background changes, but there is no reference substance that changes synchronously with the analyte. After many years of research, our group has proposed the floating reference method, which succeeded in eliminating the spectral effects induced by instrument drift and by variations in the measured object's background. However, our studies indicate that the reference point changes with measurement location and wavelength, so the effects of the floating reference method should be verified comprehensively. In this paper, for simplicity, Monte Carlo simulations of Intralipid solutions with concentrations of 5% and 10% are performed to verify the effectiveness of the floating reference method in eliminating the consequences of light source drift. The light source drift is introduced by varying the number of incident photons. The effectiveness of the floating reference method, with the corresponding reference points at different wavelengths, in eliminating the variations due to light source drift is estimated. A comparison of the prediction abilities of calibration models with and without the method shows that the RMSEPs are decreased by about 98.57% (5% Intralipid) and 99.36% (10% Intralipid). The results indicate that the floating reference method is clearly effective in eliminating background changes.
Using Model Point Spread Functions to Identify Binary Brown Dwarf Systems
NASA Astrophysics Data System (ADS)
Matt, Kyle; Stephens, Denise C.; Lunsford, Leanne T.
2017-01-01
A Brown Dwarf (BD) is a celestial object that is not massive enough to undergo hydrogen fusion in its core. BDs can form in pairs called binaries. Due to the great distances between Earth and these BDs, they act as point sources of light, and the angular separation between binary BDs can be small enough that they appear as a single, unresolved object in images, according to the Rayleigh criterion. It is not currently possible to resolve some of these objects into separate light sources. Stephens and Noll (2006) developed a method that used model point spread functions (PSFs) to identify binary Trans-Neptunian Objects; we will use this method to identify binary BD systems in the Hubble Space Telescope archive. The method works by comparing model PSFs of single and binary sources to the observed PSFs. We also use a method that compares model spectral data for single and binary fits to determine the best parameter values for each component of the system. We describe these methods, their challenges, and other possible uses in this poster.
Harada, Ichiro; Kim, Sung-Gon; Cho, Chong Su; Kurosawa, Hisashi; Akaike, Toshihiro
2007-01-01
In this study, a simple combined method consisting of floating and anchored collagen gel in a ligament or tendon equivalent culture system was used to produce the oriented fibrils in fibroblast-populated collagen matrices (FPCMs) during the remodeling and contraction of the collagen gel. Orientation of the collagen fibrils along single axis occurred over the whole area of the floating section and most of the fibroblasts were elongated and aligned along the oriented collagen fibrils, whereas no significant orientation of fibrils was observed in normally contracted FPCMs by the floating method. Higher elasticity and enhanced mechanical strength were obtained using our simple method compared with normally contracted floating FPCMs. The Young's modulus and the breaking point of the FPCMs were dependent on the initial cell densities. This simple method will be applied as a convenient bioreactor to study cellular processes of the fibroblasts in the tissues with highly oriented fibrils such as ligaments or tendons. (c) 2006 Wiley Periodicals, Inc.
Tarte, Stephen R.; Schmidt, A.R.; Sullivan, Daniel J.
1992-01-01
A floating sample-collection platform is described for stream sites where the vertical or horizontal distance between the stream-sampling point and a safe location for the sampler exceeds the suction head of the sampler. The platform allows continuous water sampling over the entire storm-runoff hydrograph. The platform was developed for a site in southern Illinois.
Floating assembly of diatom Coscinodiscus sp. microshells.
Wang, Yu; Pan, Junfeng; Cai, Jun; Zhang, Deyuan
2012-03-30
Diatoms have silica frustules with transparent and delicate micro/nano-scale structures, two-dimensional pore arrays, and large surface areas. Although the diatom cells of Coscinodiscus sp. live underwater, we found that their valves can float on water and assemble together. Experiments show that the convex shape and the 40 nm sieve pores of the valves allow them to float on water, and that buoyancy and micro-range attractive forces cause the valves to assemble together at the highest point of the water. As measured by AFM-calibrated glass needles fixed in a manipulator, the buoyancy force on a single floating valve may reach up to 10 μN in water. Turning the valves over, enlarging the sieve pores, reducing the surface tension of the water, or vacuum pumping may cause the floating valves to sink. After the water has evaporated, the floating valves remain in their assembled state and form a monolayer film. The bonded diatom monolayer may be valuable in studies on diatom-based optical devices, biosensors, solar cells, and batteries, to better use the optical and adsorption properties of frustules. The floating assembly phenomenon can also be used as a self-assembly method for fabricating monolayers of circular plates. Copyright © 2012 Elsevier Inc. All rights reserved.
Wang, Ji-Wei; Cui, Zhi-Ting; Cui, Hong-Wei; Wei, Chang-Nian; Harada, Koichi; Minamoto, Keiko; Ueda, Kimiyo; Ingle, Kapilkumar N; Zhang, Cheng-Gang; Ueda, Atsushi
2010-12-01
The floating population refers to the large and increasing number of migrants without local household registration status and has become a new demographic phenomenon in China. Most of these migrants move from the rural areas of the central and western parts of China to the eastern and coastal metropolitan areas in pursuit of a better life. The floating population of China was composed of 121 million people in 2000, and this number was expected to increase to 300 million by 2010. Quality of life (QOL) studies of the floating population could provide a critical starting point for recognizing the potential of regions, cities and local communities to improve QOL. This study explored the construct of QOL of the floating population in Shanghai, China. We conducted eight focus groups with 58 members of the floating population (24 males and 34 females) and then performed a qualitative thematic analysis of the interviews. The following five QOL domains were identified from the analysis: personal development, jobs and career, family life, social relationships and social security. The results indicated that stigma and discrimination permeate these life domains and influence the framing of life expectations. Proposals were made for reducing stigma and discrimination against the floating population to improve the QOL of this population.
Digital hardware implementation of a stochastic two-dimensional neuron model.
Grassia, F; Kohno, T; Levi, T
2016-11-01
This study explores the feasibility of stochastic neuron simulation in digital systems (FPGA) through an implementation of a two-dimensional neuron model. The stochasticity is added by a source of current noise in the silicon neuron using an Ornstein-Uhlenbeck process. This approach uses digital computation to emulate individual neuron behavior with fixed-point arithmetic operations. The neuron model's computations are performed in arithmetic pipelines. It was designed in the VHDL language and simulated prior to mapping onto the FPGA. The experimental results confirmed the validity of the developed stochastic FPGA implementation, which makes the implementation of the silicon neuron more biologically plausible for future hybrid experiments. Copyright © 2017 Elsevier Ltd. All rights reserved.
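An Ornstein-Uhlenbeck noise current in integer fixed-point arithmetic can be sketched as below. The Q16 format, the gain values, and the software Gaussian source are illustrative assumptions, not the word lengths or noise generator of the paper's VHDL design:

```python
import random

SCALE_BITS = 16                 # assumed Q16 word format, not the paper's
SCALE = 1 << SCALE_BITS

def to_fix(x):
    return int(round(x * SCALE))

def ou_step_fixed(q, a_fix, b_fix, rng):
    """One Euler step of zero-mean Ornstein-Uhlenbeck noise,
    x <- x - theta*dt*x + sigma*sqrt(dt)*N(0,1), in Q16 integer arithmetic.
    The >> is an arithmetic shift, as a hardware datapath would use."""
    noise = to_fix(rng.gauss(0.0, 1.0))
    return q - ((a_fix * q) >> SCALE_BITS) + ((b_fix * noise) >> SCALE_BITS)

rng = random.Random(0)
a_fix = to_fix(0.1)             # theta*dt       (illustrative gain)
b_fix = to_fix(0.05)            # sigma*sqrt(dt) (illustrative gain)
q, samples = 0, []
for _ in range(20000):
    q = ou_step_fixed(q, a_fix, b_fix, rng)
    samples.append(q / SCALE)
mean = sum(samples) / len(samples)
std = (sum((s - mean) ** 2 for s in samples) / len(samples)) ** 0.5
print(round(mean, 3), round(std, 3))   # zero-mean noise with a stationary spread
```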
Price, Gavin R; Ansari, Daniel
2013-01-01
Developmental dyscalculia (DD) is a learning disorder affecting the acquisition of school level arithmetic skills present in approximately 3-6% of the population. At the behavioral level DD is characterized by poor retrieval of arithmetic facts from memory, the use of immature calculation procedures and counting strategies, and the atypical representation and processing of numerical magnitude. At the neural level emerging evidence suggests DD is associated with atypical structure and function in brain regions associated with the representation of numerical magnitude. The current state of knowledge points to a core deficit in numerical magnitude representation in DD, but further work is required to elucidate causal mechanisms underlying the disorder. Copyright © 2013 Elsevier B.V. All rights reserved.
Control of broadband optically generated ultrasound pulses using binary amplitude holograms.
Brown, Michael D; Jaros, Jiri; Cox, Ben T; Treeby, Bradley E
2016-04-01
In this work, the use of binary amplitude holography is investigated as a mechanism to focus broadband acoustic pulses generated by high peak-power pulsed lasers. Two algorithms are described for the calculation of the binary holograms; one using ray-tracing, and one using an optimization based on direct binary search. It is shown using numerical simulations that when a binary amplitude hologram is excited by a train of laser pulses at its design frequency, the acoustic field can be focused at a pre-determined distribution of points, including single and multiple focal points, and line and square foci. The numerical results are validated by acoustic field measurements from binary amplitude holograms, excited by a high peak-power laser.
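Direct binary search can be illustrated with a toy one-dimensional acoustic model: each "on" hologram cell is treated as a monochromatic point source, and cells are flipped one at a time, keeping only flips that raise the focal intensity. The geometry, frequency, and forward model below are illustrative assumptions, not the paper's k-space simulation:

```python
import cmath, math

def focal_amplitude(mask, cells, focus, k):
    """|field| at the focus, treating each open hologram cell as a
    monochromatic point source (toy scalar forward model)."""
    field = 0j
    for on, (x, y) in zip(mask, cells):
        if on:
            r = math.sqrt((x - focus[0]) ** 2 + (y - focus[1]) ** 2 + focus[2] ** 2)
            field += cmath.exp(1j * k * r) / r
    return abs(field)

def direct_binary_search(cells, focus, k, sweeps=3):
    """Flip one cell at a time and keep the flip only if the focal
    amplitude improves; start from the all-open aperture."""
    mask = [1] * len(cells)
    best = focal_amplitude(mask, cells, focus, k)
    for _ in range(sweeps):
        for i in range(len(mask)):
            mask[i] ^= 1
            trial = focal_amplitude(mask, cells, focus, k)
            if trial > best:
                best = trial
            else:
                mask[i] ^= 1        # revert a non-improving flip
    return mask, best

# 64 cells on a 10 um pitch, focusing at 1 mm on axis, 50 MHz in water
k = 2.0 * math.pi * 50e6 / 1500.0
cells = [((i - 31.5) * 10e-6, 0.0) for i in range(64)]
focus = (0.0, 0.0, 1e-3)
mask, best = direct_binary_search(cells, focus, k)
all_open = focal_amplitude([1] * 64, cells, focus, k)
print(best >= all_open)   # by construction DBS never ends below its start
```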
Chandra Detection of Intracluster X-Ray sources in Virgo
NASA Astrophysics Data System (ADS)
Hou, Meicun; Li, Zhiyuan; Peng, Eric W.; Liu, Chengze
2017-09-01
We present a survey of X-ray point sources in the nearest and dynamically young galaxy cluster, Virgo, using archival Chandra observations that sample the vicinity of 80 early-type member galaxies. The X-ray source populations at the outskirts of these galaxies are of particular interest. We detect a total of 1046 point sources (excluding galactic nuclei) out to a projected galactocentric radius of ~40 kpc and down to a limiting 0.5-8 keV luminosity of ~2×10³⁸ erg s⁻¹. Based on the cumulative spatial and flux distributions of these sources, we statistically identify ~120 excess sources that are not associated with the main stellar content of the individual galaxies, nor with the cosmic X-ray background. This excess is significant at a 3.5σ level, when Poisson error and cosmic variance are taken into account. On the other hand, no significant excess sources are found at the outskirts of a control sample of field galaxies, suggesting that at least some fraction of the excess sources around the Virgo galaxies are truly intracluster X-ray sources. Assisted with ground-based and HST optical imaging of Virgo, we discuss the origins of these intracluster X-ray sources, in terms of supernova-kicked low-mass X-ray binaries (LMXBs), globular clusters, LMXBs associated with the diffuse intracluster light, stripped nucleated dwarf galaxies and free-floating massive black holes.
Elliptic Curve Integral Points on y² = x³ + 3x − 14
NASA Astrophysics Data System (ADS)
Zhao, Jianhong
2018-03-01
The positive integer points and integral points of elliptic curves are very important in number theory and arithmetic algebra, with a wide range of applications in cryptography and other fields. There are some results on the positive integer points of the elliptic curve y² = x³ + ax + b, a, b ∈ Z. In 1987, D. Zagier posed the question of the integer points on y² = x³ − 27x + 62, which counts a great deal toward the study of the arithmetic properties of elliptic curves. In 2009, Zhu H L and Chen J H solved the problem of the integer points on y² = x³ − 27x + 62 using algebraic number theory and the P-adic analysis method. In 2010, using the elementary method, Wu H M obtained all the integral points of the elliptic curve y² = x³ − 27x − 62. In 2015, Li Y Z and Cui B J solved the problem of the integer points on y² = x³ − 21x − 90 using the elementary method. In 2016, Guo J solved the problem of the integer points on y² = x³ + 27x + 62 using the elementary method. In 2017, Guo J proved that y² = x³ − 21x + 90 has no integer points using the elementary method. Up to now, there have been no relevant conclusions on the integral points of the elliptic curve y² = x³ + 3x − 14, which is the subject of this paper. Using congruences and the Legendre symbol, it can be proved that the elliptic curve y² = x³ + 3x − 14 has only one integer point: (x, y) = (2, 0).
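A finite search can illustrate, though of course not prove, the stated result; the search bound below is an arbitrary choice:

```python
import math

def integer_points(a, b, xmax):
    """All integer points on y^2 = x^3 + a*x + b with |x| <= xmax,
    found by testing whether the right-hand side is a perfect square."""
    pts = []
    for x in range(-xmax, xmax + 1):
        rhs = x ** 3 + a * x + b
        if rhs < 0:
            continue
        y = math.isqrt(rhs)
        if y * y == rhs:
            pts.extend([(x, 0)] if y == 0 else [(x, y), (x, -y)])
    return pts

print(integer_points(3, -14, 10000))  # only (2, 0) appears in this range
```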
Determinant Computation on the GPU using the Condensation Method
NASA Astrophysics Data System (ADS)
Anisul Haque, Sardar; Moreno Maza, Marc
2012-02-01
We report on a GPU implementation of the condensation method designed by Abdelmalek Salem and Kouachi Said for computing the determinant of a matrix. We consider two types of coefficients: modular integers and floating point numbers. We evaluate the performance of our code by measuring its effective bandwidth and argue that it is numerically stable in the floating point case. In addition, we compare our code with serial implementations of determinant computation from well-known mathematical packages. Our results suggest that a GPU implementation of the condensation method has large potential for improving those packages in terms of running time and numerical stability.
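The condensation idea can be sketched with the classical Chio form, in which each step replaces the n×n matrix by the (n−1)×(n−1) array of 2×2 minors against a pivot. This serial Python sketch illustrates the recurrence only; it assumes a nonzero pivot at every step and stands in for, rather than reproduces, the Salem-Kouachi variant and the GPU kernel:

```python
def det_condensation(a):
    """Determinant by Chio-style condensation: replace the n x n matrix by
    the (n-1) x (n-1) matrix of 2 x 2 minors against the pivot a[0][0],
    dividing out the accumulated pivot powers at the end."""
    n = len(a)
    a = [row[:] for row in a]       # work on a copy
    divisor = 1.0
    while n > 1:
        p = a[0][0]                 # assumed nonzero (no pivoting here)
        a = [[p * a[i][j] - a[i][0] * a[0][j] for j in range(1, n)]
             for i in range(1, n)]
        divisor *= p ** (n - 2)
        n -= 1
    return a[0][0] / divisor

m = [[2.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
print(det_condensation(m))  # determinant of this matrix is 8
```

Each condensation step is embarrassingly parallel over the minor entries, which is what makes the method attractive on a GPU.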
Investigation of Springing Responses on the Great Lakes Ore Carrier M/V STEWART J. CORT
1980-12-01
[Garbled excerpt from a scanned report; only fragments are recoverable: "...175k tons. Using these values one can write ... (4.) ..."; "...will have to write a routine to convert the floating-point numbers into the other machine's internal floating-point format. The CCI record is again..."; and Fortran comment lines: "C ... THE RESULTS AND WRITES ... TO THE LINE PRINTER. C IT ALSO PUTS THE RESULTS IN A DISK FILE. C WRITTEN BY JCD3 NOVEMBER 1970 ..."]
LDPC decoder with a limited-precision FPGA-based floating-point multiplication coprocessor
NASA Astrophysics Data System (ADS)
Moberly, Raymond; O'Sullivan, Michael; Waheed, Khurram
2007-09-01
Implementing the sum-product algorithm in an FPGA with an embedded processor invites us to consider a tradeoff between computational precision and computational speed. The algorithm, known outside of the signal processing community as Pearl's belief propagation, is used for iterative soft-decision decoding of LDPC codes. We determined the feasibility of a coprocessor that will perform product computations. Our FPGA-based coprocessor design performs computer algebra with significantly less precision than the standard (e.g. integer, floating-point) operations of general purpose processors. Using synthesis targeting a 3,168-LUT Xilinx FPGA, we show that key components of a decoder are feasible and that the full single-precision decoder could be constructed using a larger part. Soft-decision decoding by the iterative belief propagation algorithm is impacted both positively and negatively by a reduction in the precision of the computation. Reducing precision reduces the coding gain, but the limited-precision computation can operate faster. A proposed solution offers custom logic to perform computations with less precision, yet uses the floating-point format to interface with the software. Simulation results show the achievable coding gain. Synthesis results help estimate the full capacity and performance of an FPGA-based coprocessor.
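The product computation at the heart of the sum-product check-node update can be written with the standard tanh rule. The quantizer below is an illustrative fixed-fraction rounding used to mimic a reduced-precision datapath; it is not the paper's custom number format:

```python
import math

def check_node_llr(incoming):
    """Sum-product check-node update: for each edge, the outgoing
    extrinsic LLR is 2*atanh of the product of tanh(L/2) over the
    other incoming messages."""
    out = []
    for i in range(len(incoming)):
        prod = 1.0
        for j, L in enumerate(incoming):
            if j != i:
                prod *= math.tanh(L / 2.0)
        prod = max(min(prod, 1.0 - 1e-12), -1.0 + 1e-12)  # keep atanh finite
        out.append(2.0 * math.atanh(prod))
    return out

def quantize(x, frac_bits=6):
    """Round to a fixed number of fractional bits (limited-precision stand-in)."""
    s = 1 << frac_bits
    return round(x * s) / s

msgs = [1.5, -0.8, 2.2]          # toy incoming LLRs at one check node
exact = check_node_llr(msgs)
coarse = [quantize(v) for v in exact]
print([round(v, 3) for v in exact])
```

Quantizing to 6 fractional bits bounds the per-message error by half an LSB, which is the kind of precision-versus-speed knob the abstract describes.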
Optimal Compression Methods for Floating-point Format Images
NASA Technical Reports Server (NTRS)
Pence, W. D.; White, R. L.; Seaman, R.
2009-01-01
We report on the results of a comparison study of different techniques for compressing FITS images that have floating-point (real*4) pixel values. Standard file compression methods like GZIP are generally ineffective in this case (with compression ratios only in the range 1.2-1.6), so instead we use a technique of converting the floating-point values into quantized scaled integers, which are compressed using the Rice algorithm. The compressed data stream is stored in FITS format using the tiled-image compression convention. This is technically a lossy compression method, since the pixel values are not exactly reproduced; however, all the significant photometric and astrometric information content of the image can be preserved while still achieving file compression ratios in the range of 4 to 8. We also show that introducing dithering, or randomization, when assigning the quantized pixel values can significantly improve the photometric and astrometric precision of the stellar images in the compressed file without adding additional noise. We quantify our results by comparing the stellar magnitudes and positions as measured in the original uncompressed image to those derived from the same image after applying successively greater amounts of compression.
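The quantization-with-dithering idea can be sketched as follows. The subtractive-dither scheme, the seeded generator, and the step size are illustrative stand-ins, not the exact algorithm of the FITS tiled-image convention:

```python
import random

def quantize_image(pixels, q, seed=42):
    """Quantize floating-point pixels to scaled integers with subtractive
    dithering: add a known uniform offset in [-0.5, 0.5) before rounding,
    so the same offset can be subtracted on restore."""
    rng = random.Random(seed)
    dither = [rng.random() - 0.5 for _ in pixels]
    ints = [round(p / q + d) for p, d in zip(pixels, dither)]
    return ints, dither

def restore_image(ints, dither, q):
    """Invert: subtract the same dither sequence and rescale."""
    return [(i - d) * q for i, d in zip(ints, dither)]

pixels = [0.137, 2.518, -1.204, 0.009]
q = 0.01                        # quantization step; an arbitrary demo value
ints, dither = quantize_image(pixels, q)
restored = restore_image(ints, dither, q)
print(max(abs(a - b) for a, b in zip(pixels, restored)) <= q / 2)
```

Because the dither offsets are reproducible from the seed, the decompressor can subtract them exactly; the residual error stays within half a quantization step while the rounding bias is randomized away.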
Mutual information-based analysis of JPEG2000 contexts.
Liu, Zhen; Karam, Lina J
2005-04-01
Context-based arithmetic coding has been widely adopted in image and video compression and is a key component of the new JPEG2000 image compression standard. In this paper, the contexts used in JPEG2000 are analyzed using the mutual information, which is closely related to the compression performance. We first show that, when combining the contexts, the mutual information between the contexts and the encoded data will decrease unless the conditional probability distributions of the combined contexts are the same. Given I, the initial number of contexts, and F, the final desired number of contexts, there are S(I, F) possible context classification schemes where S(I, F) is called the Stirling number of the second kind. The optimal classification scheme is the one that gives the maximum mutual information. Instead of using an exhaustive search, the optimal classification scheme can be obtained through a modified generalized Lloyd algorithm with the relative entropy as the distortion metric. For binary arithmetic coding, the search complexity can be reduced by using dynamic programming. Our experimental results show that the JPEG2000 contexts capture the correlations among the wavelet coefficients very well. At the same time, the number of contexts used as part of the standard can be reduced without loss in the coding performance.
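The central claim above, that merging contexts loses mutual information unless the merged contexts share the same conditional distribution, is easy to check numerically. Here is a small sketch with a made-up three-context joint distribution over a binary symbol, not JPEG2000's actual contexts:

```python
import math

def mutual_information(joint):
    """I(C;X) in bits from a joint distribution joint[c][x]."""
    pc = [sum(row) for row in joint]
    px = [sum(col) for col in zip(*joint)]
    mi = 0.0
    for c, row in enumerate(joint):
        for x, p in enumerate(row):
            if p > 0:
                mi += p * math.log2(p / (pc[c] * px[x]))
    return mi

def merge(joint, i, j):
    """Merge contexts i and j into one by summing their joint rows."""
    merged = [row for k, row in enumerate(joint) if k not in (i, j)]
    merged.append([a + b for a, b in zip(joint[i], joint[j])])
    return merged

# three contexts over a binary symbol; contexts 0 and 1 have the same
# conditional distribution, while context 2 differs
joint = [[0.2, 0.1], [0.2, 0.1], [0.1, 0.3]]
mi_full = mutual_information(joint)
same = mutual_information(merge(joint, 0, 1))  # identical conditionals: no loss
diff = mutual_information(merge(joint, 0, 2))  # different conditionals: strict loss
print(abs(mi_full - same) < 1e-12 and diff < mi_full - 1e-6)
```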
Deflection of Resilient Materials for Reduction of Floor Impact Sound
Lee, Jung-Yoon; Kim, Jong-Mun
2014-01-01
Recently, many residents living in apartment buildings in Korea have been bothered by noise coming from the houses above. In order to reduce noise pollution, communities are increasingly imposing bylaws, including the limitation of floor impact sound, minimum thickness of floors, and floor soundproofing solutions. This research effort focused specifically on the deflection of resilient materials in the floor sound insulation systems of apartment houses. The experimental program involved conducting twenty-seven material tests and ten sound insulation floating concrete floor specimens. Two main parameters were considered in the experimental investigation: the seven types of resilient materials and the location of the loading point. The structural behavior of sound insulation floor floating was predicted using the Winkler method. The experimental and analytical results indicated that the cracking strength of the floating concrete floor significantly increased with increasing the tangent modulus of resilient material. The deflection of the floating concrete floor loaded at the side of the specimen was much greater than that of the floating concrete floor loaded at the center of the specimen. The Winkler model considering the effect of modulus of resilient materials was able to accurately predict the cracking strength of the floating concrete floor. PMID:25574491
A Cryptological Way of Teaching Mathematics
ERIC Educational Resources Information Center
Caballero-Gil, Pino; Bruno-Castaneda, Carlos
2007-01-01
This work addresses the subject of mathematics education at secondary schools from a current and stimulating point of view intimately related to computational science. Cryptology is a captivating way of introducing into the classroom different mathematical subjects such as functions, matrices, modular arithmetic, combinatorics, equations,…
Speech recognition for embedded automatic positioner for laparoscope
NASA Astrophysics Data System (ADS)
Chen, Xiaodong; Yin, Qingyun; Wang, Yi; Yu, Daoyin
2014-07-01
In this paper a novel speech recognition methodology based on the Hidden Markov Model (HMM) is proposed for an embedded Automatic Positioner for Laparoscope (APL), built around a fixed-point ARM processor as its core. The APL system is designed to assist the doctor in laparoscopic surgery by implementing the doctor's vocal control of the laparoscope. Real-time response to voice commands calls for a more efficient speech recognition algorithm for the APL. In order to reduce computation cost without significant loss in recognition accuracy, both arithmetic and algorithmic optimizations are applied in the presented method. First, relying mostly on arithmetic optimizations, a fixed-point front end for speech feature analysis is built to match the ARM processor's characteristics. Then a fast likelihood computation algorithm is used to reduce the computational complexity of the HMM-based recognition algorithm. The experimental results show that the method keeps the recognition time within 0.5 s while the accuracy remains higher than 99%, demonstrating its ability to achieve real-time vocal control of the APL.
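The kind of fixed-point arithmetic such a front end relies on can be sketched with a pair of illustrative Q15 helpers (a generic format commonly used on integer-only processors, not the APL code itself): values in [-1, 1) are scaled by 2^15, and products are rounded back into the format.

```python
def float_to_q15(x):
    # quantize to Q15, saturating to the representable range [-1, 1 - 2**-15]
    v = int(round(x * 32768))
    return max(-32768, min(32767, v))

def q15_mul(a, b):
    # 16-bit x 16-bit -> 32-bit product, rounded back to Q15
    return (a * b + (1 << 14)) >> 15

a = float_to_q15(0.5)
b = float_to_q15(-0.25)
prod = q15_mul(a, b)
print(prod / 32768.0)  # close to 0.5 * -0.25 = -0.125
```

Replacing floating-point multiply-accumulates in the feature analysis with integer operations like these is what makes the front end cheap on a fixed-point ARM core.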
NASA Astrophysics Data System (ADS)
Siegel, J.; Siegel, Edward Carl-Ludwig
2011-03-01
Cook-Levin computational-"complexity"(C-C) algorithmic-equivalence reduction-theorem reducibility equivalence to renormalization-(semi)-group phase-transitions critical-phenomena statistical-physics universality-classes fixed-points, is exploited with Gauss modular/clock-arithmetic/model congruences = signal X noise PRODUCT reinterpretation. Siegel-Baez FUZZYICS=CATEGORYICS(SON of ``TRIZ''): Category-Semantics(C-S) tabular list-format truth-table matrix analytics predicts and implements "noise"-induced phase-transitions (NITs) to accelerate versus to decelerate Harel [Algorithmics (1987)]-Sipser [Intro. Theory Computation (1997)] algorithmic C-C: "NIT-picking" to optimize optimization-problems optimally(OOPO). Versus iso-"noise" power-spectrum quantitative-only amplitude/magnitude-only variation stochastic-resonance, this "NIT-picking" is "noise" power-spectrum QUALitative-type variation via quantitative critical-exponents variation. Computer-"science" algorithmic C-C models: Turing-machine, finite-state-models/automata, are identified as early-days once-workable but NOW ONLY LIMITING CRUTCHES IMPEDING latter-days new-insights!!!
Hirayama, Kazumi; Taguchi, Yuzuru; Tsukamoto, Tetsuro
2002-10-01
A 35-year-old right-handed man developed pure anarithmetia after a left parieto-occipital subcortical hemorrhage. His intelligence, memory, language, and construction ability were all within normal limits. No hemispatial neglect, agraphia, finger agnosia, or right-left disorientation was noted. He showed no impairment in reading numbers aloud, pointing to written numbers, writing numbers to dictation, decomposition of numbers, estimation of the number of dots, reading and writing of arithmetic signs, comprehension of arithmetic signs, appreciation of number values, appreciation of the number of dots, counting aloud, aligning numbers, comprehension of the commutative and distributive laws, retrieval of the multiplication table (ku-ku), immediate memory for arithmetic problems, or use of an electronic calculator. He showed, however, remarkable difficulty even in addition and subtraction of one-digit numbers, and used finger counting or intuitive strategies even when he could solve the problems. He could not execute multiplication and division if the problems required anything beyond the table values (ku-ku). Thus, he seemed to have difficulties with both elementary arithmetic facts and calculating procedures. In addition, his backward digit span and reading of analogue clocks were impaired, and he showed the logico-grammatical disorder of Luria. Our case supports the notion that there is a neural system shared in part between the processing of abstract spatial relationships and calculation.
33 CFR 110.127b - Flaming Gorge Lake, Wyoming-Utah.
Code of Federal Regulations, 2010 CFR
2010-07-01
... launching ramp to a point beyond the floating breakwater and then westerly, as established by the... following points, excluding a 150-foot-wide fairway, extending southeasterly from the launching ramp, as... inclosed by the shore and a line connecting the following points, excluding a 100-foot-wide fairway...
NASA Technical Reports Server (NTRS)
Weick, Fred E; Harris, Thomas A
1933-01-01
Discussed here is a series of systematic tests conducted to compare different lateral control devices, with particular reference to their effectiveness at high angles of attack. The present tests were made with six different forms of floating tip ailerons of symmetrical section. The tests showed the effect of the various ailerons on the general performance characteristics of the wing and on the lateral controllability and stability characteristics. In addition, the hinge moments were measured for the most interesting cases. The results are compared with those for a rectangular wing with ordinary ailerons and also with those for a rectangular wing having full-chord floating tip ailerons. Practically all the floating tip ailerons gave satisfactory rolling moments at all angles of attack and at the same time gave no adverse yawing moments of appreciable magnitude. The general performance characteristics with the floating tip ailerons, however, were relatively poor, especially the rate of climb. None of the floating tip ailerons entirely eliminated the autorotational moments at angles of attack above the stall, but all of them gave lower moments than a plain wing. Some of the floating ailerons fluttered at sufficiently large deflections, but this could have been eliminated by moving the hinge axis of the ailerons forward. Considering all points, including hinge moments, the floating tip ailerons on the wing with 5:1 taper are probably the best of those tested.
R Jivani, Rishad; N Patel, Chhagan; M Patel, Dashrath; P Jivani, Nurudin
2010-01-01
The present study deals with the development of a floating in-situ gel of the narrow-absorption-window drug baclofen. Sodium alginate-based in-situ gelling systems were prepared by dissolving various concentrations of sodium alginate in deionized water, to which varying concentrations of drug and calcium bicarbonate were added. Fourier transform infrared spectroscopy (FTIR) and differential scanning calorimetry (DSC) were used to check for any interaction between the drug and the excipients. A 3² full factorial design was used for optimization. The concentrations of sodium alginate (X1) and calcium bicarbonate (X2) were selected as the independent variables. The amount of drug released after 1 h (Q1) and 10 h (Q10) and the viscosity of the solution were selected as the dependent variables. The gels were studied for their viscosity, in-vitro buoyancy and drug release. Contour plots were drawn for each dependent variable, and check-point batches were prepared in order to obtain desirable release profiles. The drug release profiles were fitted to different kinetic models. The floating lag time and floating time were found to be 2 min and 12 h, respectively. A decreasing trend in drug release was observed with increasing concentrations of CaCO3. The computed values of Q1 and Q10 for the check-point batch were 25% and 86%, respectively, compared to the experimental values of 27.1% and 88.34%. The similarity factor (f2) for the check-point batch, 80.25, showed that the two dissolution profiles were similar. The drug release from the in-situ gel follows the Higuchi model, which indicates diffusion-controlled release. A stomach-specific in-situ gel of baclofen could thus be prepared using a floating mechanism to increase the residence time of the drug in the stomach and thereby increase absorption.
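The Higuchi model mentioned above is Q = k·√t; fitting its single constant by least squares is a one-liner. The data points below are hypothetical illustrations, not the paper's measurements.

```python
import math

# hypothetical cumulative-release data (time in hours, percent released);
# illustrative only, not the study's measurements
t = [1, 2, 4, 6, 8, 10]
q = [27.1, 38.0, 54.5, 66.0, 76.5, 88.3]

# Higuchi model Q = k*sqrt(t); closed-form least-squares estimate:
# k = sum(Q_i * sqrt(t_i)) / sum(t_i)
k = sum(qi * math.sqrt(ti) for qi, ti in zip(q, t)) / sum(t)
predicted = [k * math.sqrt(ti) for ti in t]
print(round(k, 2), [round(p, 1) for p in predicted])
```

A roughly linear plot of Q against √t (i.e., small residuals from this fit) is what supports the diffusion-controlled-release conclusion.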
Transport properties of gases and binary liquids near the critical point
NASA Technical Reports Server (NTRS)
Sengers, J. V.
1972-01-01
A status report is presented on the anomalies observed in the behavior of transport properties near the critical point of gases and binary liquids. The shear viscosity exhibits a weak singularity near the critical point. An analysis is made of the experimental data for those transport properties, thermal conductivity and thermal diffusivity near the gas-liquid critical point and binary diffusion coefficient near the critical mixing point, that determine the critical slowing down of the thermodynamic fluctuations in the order parameter. The asymptotic behavior of the thermal conductivity appears to be closely related to the asymptotic behavior of the correlation length. The experimental data for the thermal conductivity and diffusivity are shown to be in substantial agreement with current theoretical predictions.
30 CFR 250.428 - What must I do in certain cementing and casing situations?
Code of Federal Regulations, 2010 CFR
2010-07-01
... point. (h) Need to use less than required cement for the surface casing during floating drilling... permafrost zone uncemented Fill the annulus with a liquid that has a freezing point below the minimum...
Decidable and undecidable arithmetic functions in actin filament networks
NASA Astrophysics Data System (ADS)
Schumann, Andrew
2018-01-01
The plasmodium of Physarum polycephalum is very sensitive to its environment, and reacts to stimuli with appropriate motions. Both the sensory and motor stages of these reactions are explained by hydrodynamic processes, based on fluid dynamics, with the participation of actin filament networks. This paper is devoted to actin filament networks as a computational medium. The point is that actin filaments, with contributions from many other proteins like myosin, are sensitive to extracellular stimuli (attractants as well as repellents), and appear and disappear at different places in the cell to change aspects of the cell structure—e.g. its shape. By assembling and disassembling actin filaments, some unicellular organisms, like Amoeba proteus, can move in response to various stimuli. As a result, these organisms can be considered a simple reversible logic gate—extracellular signals being its inputs and motions its outputs. In this way, we can implement various logic gates on amoeboid behaviours. These networks can embody arithmetic functions within p-adic valued logic. Furthermore, within these networks we can define the so-called diagonalization for deducing undecidable arithmetic functions.
Shahan, M.R.; Seaman, C.E.; Beck, T.W.; Colinet, J.F.; Mischler, S.E.
2017-01-01
Float coal dust is produced by various mining methods, carried by ventilating air and deposited on the floor, roof and ribs of mine airways. Once deposited, float coal dust can be re-entrained during a methane explosion. Without sufficient inert rock dust quantities, this float coal dust can propagate an explosion throughout mining entries. Consequently, controlling float coal dust is of critical interest to mining operations. Rock dusting, which is the adding of inert material to airway surfaces, is the main control technique currently used by the coal mining industry to reduce the float coal dust explosion hazard. To assist the industry in reducing this hazard, the Pittsburgh Mining Research Division of the U.S. National Institute for Occupational Safety and Health initiated a project to investigate methods and technologies to reduce float coal dust in underground coal mines through prevention, capture and suppression prior to deposition. Field characterization studies were performed to determine quantitatively the sources, types and amounts of dust produced during various coal mining processes. The operations chosen for study were a continuous miner section, a longwall section and a coal-handling facility. For each of these operations, the primary dust sources were confirmed to be the continuous mining machine, longwall shearer and conveyor belt transfer points, respectively. Respirable and total airborne float dust samples were collected and analyzed for each operation, and the ratio of total airborne float coal dust to respirable dust was calculated. During the continuous mining process, the ratio of total airborne float coal dust to respirable dust ranged from 10.3 to 13.8. The ratios measured on the longwall face were between 18.5 and 21.5. The total airborne float coal dust to respirable dust ratio observed during belt transport ranged between 7.5 and 21.8. PMID:28936001
Predicting Arithmetic Abilities: The Role of Preparatory Arithmetic Markers and Intelligence
ERIC Educational Resources Information Center
Stock, Pieter; Desoete, Annemie; Roeyers, Herbert
2009-01-01
Arithmetic abilities acquired in kindergarten are found to be strong predictors for later deficient arithmetic abilities. This longitudinal study (N = 684) was designed to examine if it was possible to predict the level of children's arithmetic abilities in first and second grade from their performance on preparatory arithmetic abilities in…
Ran, Bin; Song, Li; Cheng, Yang; Tan, Huachun
2016-01-01
Traffic state estimation from the floating car system is a challenging problem. The low penetration rate and random distribution mean that the available floating car samples usually cover only part of the space and time points of the road networks. To obtain a wide range of traffic states from the floating car system, many methods have been proposed to estimate the traffic state for the uncovered links. However, these methods cannot provide the traffic state of the entire road network. In this paper, traffic state estimation is transformed into a missing data imputation problem, and a tensor completion framework is proposed to estimate the missing traffic states. A tensor is constructed to model the traffic state, in which observed entries are derived directly from the floating car system and unobserved traffic states are modeled as missing entries of the constructed tensor. The constructed traffic state tensor can represent spatial and temporal correlations of traffic data and encode the multi-way properties of the traffic state. The advantage of the proposed approach is that it can fully mine and utilize the multi-dimensional inherent correlations of the traffic state. We tested the proposed approach on a well-calibrated simulation network. Experimental results demonstrated that the proposed approach yields reliable traffic state estimation from very sparse floating car data, particularly when the floating car penetration rate is below 1%. PMID:27448326
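The completion idea can be illustrated with a minimal rank-one, two-way analogue (a day × time matrix rather than a full tensor, with alternating least squares standing in for the paper's completion algorithm; the data are synthetic):

```python
import random
random.seed(1)

# synthetic rank-1 "traffic state": speed[day][time] = u_true[day] * v_true[time]
days, times = 15, 24
u_true = [random.uniform(0.5, 1.5) for _ in range(days)]
v_true = [random.uniform(20, 60) for _ in range(times)]
truth = [[ui * vj for vj in v_true] for ui in u_true]

# floating cars observe only ~40% of (day, time) cells
mask = [[random.random() < 0.4 for _ in range(times)] for _ in range(days)]
for i in range(days):                 # ensure every day/time has at least
    mask[i][i % times] = True         # one observation (sparse but covered)
for j in range(times):
    mask[j % days][j] = True

# alternating least squares for a rank-1 factorization, fit only on
# the observed entries; unobserved cells are then filled by u[i]*v[j]
u = [1.0] * days
v = [1.0] * times
for _ in range(50):
    for i in range(days):
        num = sum(truth[i][j] * v[j] for j in range(times) if mask[i][j])
        den = sum(v[j] ** 2 for j in range(times) if mask[i][j])
        u[i] = num / den
    for j in range(times):
        num = sum(truth[i][j] * u[i] for i in range(days) if mask[i][j])
        den = sum(u[i] ** 2 for i in range(days) if mask[i][j])
        v[j] = num / den

err = max(abs(u[i] * v[j] - truth[i][j]) / truth[i][j]
          for i in range(days) for j in range(times))
print(err)  # small relative error, including on unobserved cells
```

The same principle, exploiting low-dimensional structure to fill unobserved entries, is what the tensor formulation applies across more than two correlated modes at once.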
Optical programmable Boolean logic unit.
Chattopadhyay, Tanay
2011-11-10
Logic units are the building blocks of many important computational operations like arithmetic, multiplexer-demultiplexer, radix conversion, parity checking and generation, etc. Multifunctional logic operation is essential in this respect. Here a programmable Boolean logic unit is proposed that can perform 16 Boolean logical operations from a single optical input according to the programming input, without changing the circuit design. The circuit has two outputs, one complementary to the other, so no loss of data can occur. The circuit is built around a 2×2 polarization-independent optical crossbar switch. The performance of the proposed circuit has been verified through numerical simulations. The binary logical states (0,1) are represented by the absence of light (null) and the presence of light, respectively.
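The 16 selectable operations can be modeled in software by treating the programming input as a 4-bit truth table over two logical inputs (an illustrative model of the behavior, not the optical circuit); the dual-rail complementary output is mirrored by returning both the output and its complement.

```python
def boolean_unit(program, a, b):
    """Select one of the 16 two-input Boolean functions.

    `program` is a 4-bit truth table: bit (2*a + b) is the output for
    inputs (a, b). Returns (output, complement), mirroring the proposed
    circuit's two complementary outputs.
    """
    out = (program >> (2 * a + b)) & 1
    return out, out ^ 1

# a few of the 16 programs, written as truth tables for (a,b) = 11,10,01,00
AND, OR, XOR, NAND = 0b1000, 0b1110, 0b0110, 0b0111
print([boolean_unit(XOR, a, b)[0] for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])
```

Changing the operation means changing only the 4-bit program, exactly the property claimed for the crossbar-switch design.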
What is the size of a floating sheath? An answer
NASA Astrophysics Data System (ADS)
Voigt, Farina; Naggary, Schabnam; Brinkmann, Ralf Peter
2016-09-01
The formation of a non-neutral boundary sheath in front of material surfaces is a universal plasma phenomenon. Despite several decades of research, however, not all related issues are fully clarified. In a recent paper, Chabert pointed out that this lack of clarity applies even to the seemingly innocuous question ``What is the size of a floating sheath?'' This contribution attempts to provide an answer that is not arbitrary: the size of a floating sheath is defined as the plate separation of an equivalent parallel plate capacitor. The consequences of this definition are explored with the help of a self-consistent sheath model, and a comparison is made with other sheath size definitions. Supported by the Deutsche Forschungsgemeinschaft within SFB TR 87.
NASA Astrophysics Data System (ADS)
Dagousset, Laure; Pognon, Grégory; Nguyen, Giao T. M.; Vidal, Frédéric; Jus, Sébastien; Aubert, Pierre-Henri
2017-08-01
Electrochemical properties in mesoporous media of three different ionic liquids (1-propyl-1-methylpyrrolidinium-bis(fluorosulfonyl)imide, Pyr13FSI; 1-butyl-1-methylpyrrolidinium-bis(trifluoromethanesulfonyl)imide, Pyr14TFSI; and 1-ethyl-3-methylimidazolium-bis(trifluoromethanesulfonyl)imide, EMITFSI) are investigated from -50 °C to 100 °C and compared with binary mixtures with γ-butyrolactone (GBL). Buckypapers composed of Single-Wall Carbon Nanotubes (SWCNTs) are used to prepare and study coin-cell supercapacitors. Supercapacitors using Pyr13FSI/GBL present a rapid loss of capacitance after only a thousand cycles at 100 °C. On the contrary, EMITFSI/GBL and Pyr14TFSI/GBL prove to be very promising at high temperature (capacitance losses after 10,000 cycles of 9% and 10%, respectively). More drastic ageing tests, such as floating, are also carried out for these two mixtures at 100 °C and -50 °C. Capacitance losses of 23% and 15%, respectively, were recorded after 500 h of floating at 100 °C for EMITFSI/GBL and Pyr14TFSI/GBL. The capacitance of supercapacitors based on Pyr14TFSI/GBL dropped by 20% after 200 h of floating at -50 °C, whereas EMITFSI/GBL showed remarkable stability during floating at -50 °C, with a 6.6% capacitance loss after 500 h (3 V at -50 °C). These results show that the mixture EMITFSI/GBL works properly across the broad temperature range [-50 °C to +100 °C], proving that this approach is very promising for the development of high-performance supercapacitors specifically adapted to extreme environments.
Random Matrix Theory and Elliptic Curves
2014-11-24
… points on that curve. Counting rational points on curves is a field with a rich … deficiency of zeros near the origin of the histograms in Figure 1. While as d becomes large this discretization becomes smaller and has less and less effect … order of 30), the regular oscillations seen at the origin become dominated by fluctuations of an arithmetic origin, influenced by zeros of the Riemann …
FloPSy - Search-Based Floating Point Constraint Solving for Symbolic Execution
NASA Astrophysics Data System (ADS)
Lakhotia, Kiran; Tillmann, Nikolai; Harman, Mark; de Halleux, Jonathan
Recently there has been an upsurge of interest in both Search-Based Software Testing (SBST) and Dynamic Symbolic Execution (DSE). Each of these two approaches has complementary strengths and weaknesses, making it a natural choice to explore the degree to which the strengths of one can be exploited to offset the weaknesses of the other. This paper introduces an augmented version of DSE that uses an SBST-based approach to handle floating point computations, which are known to be problematic for vanilla DSE. The approach has been implemented as a plug-in for the Microsoft Pex DSE testing tool. The paper presents results from both standard evaluation benchmarks and two open-source programs.
From 16-bit to high-accuracy IDCT approximation: fruits of single architecture affiliation
NASA Astrophysics Data System (ADS)
Liu, Lijie; Tran, Trac D.; Topiwala, Pankaj
2007-09-01
In this paper, we demonstrate an effective unified framework for high-accuracy approximation of the irrational-coefficient floating-point IDCT by a single integer-coefficient fixed-point architecture. Our framework is based on a modified version of Loeffler's sparse DCT factorization, and the IDCT architecture is constructed via a cascade of dyadic lifting steps and butterflies. We illustrate that simply varying the accuracy of the approximating parameters yields a large family of standard-compliant IDCTs, from 16-bit approximations catering to portable computing to ultra-high-accuracy 32-bit versions that virtually eliminate any drifting effect when paired with the 64-bit floating-point IDCT at the encoder. Drifting performance of the proposed IDCTs, along with existing popular IDCT algorithms in H.263+, MPEG-2 and MPEG-4, is also demonstrated.
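The key property of dyadic lifting steps, exact integer invertibility while approximating an irrational rotation, can be sketched generically (a three-step lifting rotation with illustrative 12-bit coefficients, not the paper's exact factorization):

```python
import math

def lift_rotate(x, y, p_num, u_num, shift):
    # three dyadic lifting steps approximating a plane rotation;
    # each step reads only the other variable, so it is exactly
    # invertible in integer arithmetic
    x += (p_num * y) >> shift
    y += (u_num * x) >> shift
    x += (p_num * y) >> shift
    return x, y

def lift_unrotate(x, y, p_num, u_num, shift):
    # undo the steps in reverse order with subtraction
    x -= (p_num * y) >> shift
    y -= (u_num * x) >> shift
    x -= (p_num * y) >> shift
    return x, y

theta = 3 * math.pi / 8                 # a rotation angle of the kind used in DCTs
shift = 12                              # 12-bit dyadic coefficients
p_num = round((math.cos(theta) - 1) / math.sin(theta) * (1 << shift))
u_num = round(math.sin(theta) * (1 << shift))

x0, y0 = 181, -97
x1, y1 = lift_rotate(x0, y0, p_num, u_num, shift)
assert lift_unrotate(x1, y1, p_num, u_num, shift) == (x0, y0)  # bit-exact inverse
xr = x0 * math.cos(theta) - y0 * math.sin(theta)
yr = x0 * math.sin(theta) + y0 * math.cos(theta)
print((x1, y1), (round(xr), round(yr)))  # integer result tracks the true rotation
```

Widening `shift` tightens the approximation without changing the architecture, which is exactly how one family of structures spans 16-bit through 32-bit accuracy levels.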
Decipipes: Helping Students to "Get the Point"
ERIC Educational Resources Information Center
Moody, Bruce
2011-01-01
Decipipes are a representational model that can be used to help students develop conceptual understanding of decimal place value. They provide a non-standard tool for representing length, which in turn can be represented using conventional decimal notation. They are conceptually identical to Linear Arithmetic Blocks. This article reviews theory…
Quantum Theory from Observer's Mathematics Point of View
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khots, Dmitriy; Khots, Boris
2010-05-04
This work considers the linear (time-dependent) Schrodinger equation, quantum theory of two-slit interference, wave-particle duality for single photons, and the uncertainty principle in a setting of arithmetic, algebra, and topology provided by Observer's Mathematics, see [1]. Certain theoretical results and communications pertaining to these topics are also provided.
Comparison of eigensolvers for symmetric band matrices.
Moldaschl, Michael; Gansterer, Wilfried N
2014-09-15
We compare different algorithms for computing eigenvalues and eigenvectors of a symmetric band matrix across a wide range of synthetic test problems. Of particular interest is a comparison of state-of-the-art tridiagonalization-based methods as implemented in Lapack or Plasma on the one hand, and the block divide-and-conquer (BD&C) algorithm as well as the block twisted factorization (BTF) method on the other hand. The BD&C algorithm does not require tridiagonalization of the original band matrix at all, and the current version of the BTF method tridiagonalizes the original band matrix only for computing the eigenvalues. Avoiding the tridiagonalization process sidesteps the cost of backtransformation of the eigenvectors. Beyond that, we discovered another disadvantage of the backtransformation process for band matrices: In several scenarios, a lot of gradual underflow is observed in the (optional) accumulation of the transformation matrix and in the (obligatory) backtransformation step. According to the IEEE 754 standard for floating-point arithmetic, this implies many operations with subnormal (denormalized) numbers, which causes severe slowdowns compared to the other algorithms without backtransformation of the eigenvectors. We illustrate that in these cases the performance of existing methods from Lapack and Plasma reaches a competitive level only if subnormal numbers are disabled (and thus the IEEE standard is violated). Overall, our performance studies illustrate that if the problem size is large enough relative to the bandwidth, BD&C tends to achieve the highest performance of all methods if the spectrum to be computed is clustered. For test problems with well separated eigenvalues, the BTF method tends to become the fastest algorithm with growing problem size.
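The subnormal (gradual-underflow) range at issue can be observed directly in any IEEE 754 double-precision environment; the Python sketch below shows it (the hardware slowdown itself is platform-dependent and not reproduced here).

```python
import sys

smallest_normal = sys.float_info.min     # 2.0**-1022, the smallest normal double
smallest_subnormal = 2.0 ** -1074        # the last gradual-underflow step

# gradual underflow: halving the smallest normal keeps producing nonzero
# subnormal values instead of flushing straight to zero
x = smallest_normal
steps = 0
while x > 0.0:
    x /= 2.0
    steps += 1
print(steps)  # halvings through the subnormal range before reaching exact zero

# below the smallest subnormal, the result finally rounds to zero
assert smallest_subnormal / 2 == 0.0
assert smallest_normal / 2 != 0.0        # ...but this is still representable
```

Disabling subnormals (flush-to-zero) would make the loop terminate immediately after the first halving below `smallest_normal`, which is the IEEE-violating trade-off the text describes.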
Robust and Efficient Spin Purification for Determinantal Configuration Interaction.
Fales, B Scott; Hohenstein, Edward G; Levine, Benjamin G
2017-09-12
The limited precision of floating point arithmetic can lead to the qualitative and even catastrophic failure of quantum chemical algorithms, especially when high accuracy solutions are sought. For example, numerical errors accumulated while solving for determinantal configuration interaction wave functions via Davidson diagonalization may lead to spin contamination in the trial subspace. This spin contamination may cause the procedure to converge to roots with undesired ⟨Ŝ²⟩, wasting computer time in the best case and leading to incorrect conclusions in the worst. In hopes of finding a suitable remedy, we investigate five purification schemes for ensuring that the eigenvectors have the desired ⟨Ŝ²⟩. These schemes are based on projection, penalty, and iterative approaches. All of these schemes rely on a direct, graphics processing unit-accelerated algorithm for calculating the Ŝ²c matrix-vector product. We assess the computational cost and convergence behavior of these methods by application to several benchmark systems and find that the first-order spin penalty method is the optimal choice, though first-order and Löwdin projection approaches also provide fast convergence to the desired spin state. Finally, to demonstrate the utility of these approaches, we computed the lowest several excited states of an open-shell silver cluster (Ag19) using the state-averaged complete active space self-consistent field method, where spin purification was required to ensure spin stability of the CI vector coefficients. Several low-lying states with significant multiply excited character are predicted, suggesting the value of a multireference approach for modeling plasmonic nanomaterials.
NASA Astrophysics Data System (ADS)
Wang, Mingming; Luo, Jianjun; Fang, Jing; Yuan, Jianping
2018-03-01
The existence of path-dependent dynamic singularities limits the volume of the available workspace of a free-floating space robot and induces enormous joint velocities when such singularities are met. In order to overcome this demerit, this paper presents an optimal joint trajectory planning method using the forward kinematics equations of the free-floating space robot, while joint motion laws are delineated with application of the concept of the reaction null space. A Bézier curve, in conjunction with the null-space column vectors, is applied to describe the joint trajectories. Considering the forward kinematics equations of the free-floating space robot, the trajectory planning issue is consequently transformed into an optimization problem in which the control points used to construct the Bézier curve are the design variables. A constrained differential evolution (DE) scheme with a premature-convergence handling strategy is implemented to find the optimal values of the design variables while specific objectives and imposed constraints are satisfied. Differing from traditional methods, we synthesize the null space and a specialized curve to provide a novel viewpoint for trajectory planning of free-floating space robots. Simulation results are presented for trajectory planning of a 7-degree-of-freedom (DOF) kinematically redundant manipulator mounted on a free-floating spacecraft and demonstrate the feasibility and effectiveness of the proposed method.
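The Bézier parametrization at the heart of the method is compact to state: a trajectory is a Bernstein-polynomial blend of control points, with the endpoints interpolated exactly. A minimal sketch (the joint-angle control points are hypothetical; the paper's optimizer would tune the interior ones):

```python
from math import comb

def bezier(points, t):
    # Bernstein-form evaluation of a Bezier curve at t in [0, 1]:
    # B(t) = sum_i C(n, i) * (1-t)**(n-i) * t**i * P_i
    n = len(points) - 1
    return sum(comb(n, i) * (1 - t) ** (n - i) * t ** i * p
               for i, p in enumerate(points))

# hypothetical joint-angle control points (radians): the endpoints pin the
# start and goal angles; a DE-style optimizer would adjust the interior points
ctrl = [0.0, 0.4, 1.1, 1.5]
trajectory = [bezier(ctrl, k / 10) for k in range(11)]
print(trajectory[0], trajectory[-1])  # endpoints are interpolated exactly
```

Because the curve is smooth in the control points, treating those points as the DE design variables yields a low-dimensional, well-behaved search space.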
rpe v5: an emulator for reduced floating-point precision in large numerical simulations
NASA Astrophysics Data System (ADS)
Dawson, Andrew; Düben, Peter D.
2017-06-01
This paper describes the rpe (reduced-precision emulator) library, which has the capability to emulate the use of arbitrary reduced floating-point precision within large numerical models written in Fortran. The rpe software allows model developers to test how reduced floating-point precision affects the result of their simulations without having to make extensive code changes or port the model onto specialized hardware. The software can be used to identify parts of a program that are problematic for numerical precision and to guide changes to the program to allow a stronger reduction in precision. The development of rpe was motivated by the strong demand for more computing power. If numerical precision can be reduced for an application under consideration while still achieving results of acceptable quality, computational cost can be reduced, since a reduction in numerical precision may allow an increase in performance or a reduction in power consumption. For simulations with weather and climate models, savings due to a reduction in precision could be reinvested to allow model simulations at higher spatial resolution or complexity, or to increase the number of ensemble members to improve predictions. rpe was developed with a particular focus on the community of weather and climate modelling, but the software could be used with numerical simulations from other domains.
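The core emulation idea, rounding a 64-bit result to a shorter significand after each operation, can be sketched in a few lines. This is a simplified stand-in, not rpe's Fortran API: it rounds half-up on the raw bit pattern, whereas a faithful emulator would round ties to even and could also bound the exponent range.

```python
import struct

def reduce_precision(x, sbits):
    # emulate a shorter significand by rounding a 64-bit double down to
    # `sbits` explicit fraction bits (illustrative sketch for positive
    # normal values; not the rpe library's implementation)
    packed = struct.unpack('<Q', struct.pack('<d', x))[0]
    drop = 52 - sbits
    packed = (packed + (1 << (drop - 1))) >> drop << drop
    return struct.unpack('<d', struct.pack('<Q', packed))[0]

pi64 = 3.141592653589793
print(reduce_precision(pi64, 10))  # pi with a half-precision-sized significand
```

Wrapping every arithmetic result of a model in a call like this is, in spirit, what the emulator's reduced-precision type does automatically through operator overloading.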
Recent advances in lossy compression of scientific floating-point data
NASA Astrophysics Data System (ADS)
Lindstrom, P.
2017-12-01
With a continuing exponential trend in supercomputer performance, ever larger data sets are being generated through numerical simulation. Bandwidth and storage capacity are, however, not keeping pace with this increase in data size, causing significant data movement bottlenecks in simulation codes and substantial monetary costs associated with archiving vast volumes of data. Worse yet, ever smaller fractions of the data generated can be stored for further analysis, forcing scientists to rely on decimating or averaging large data sets in time and/or space. One way to mitigate these problems is to employ data compression to reduce data volumes. However, lossless compression of floating-point data can achieve only very modest size reductions, on the order of 10-50%. We present ZFP and FPZIP, two state-of-the-art lossy compressors for structured floating-point data that routinely achieve one to two orders of magnitude reduction with little to no impact on the accuracy of visualization and quantitative data analysis. We provide examples of the use of such lossy compressors in climate and seismic modeling applications to effectively accelerate I/O and reduce storage requirements. We further discuss how the design decisions behind these and other compressors impact error distributions and other statistical and differential properties, including derived quantities of interest relevant to each science application.
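A small illustration of why lossless coding alone saves little (the low-order mantissa bits of floating-point simulation data are effectively random) and why trading precision for compressibility helps. This toy sketch just zeroes low mantissa bits before zlib coding; it is an assumption-laden stand-in, not the actual ZFP or FPZIP algorithms:

```python
import zlib
import numpy as np

# Smooth 1-D field in float64, a stand-in for simulation output
x = np.linspace(0.0, 8.0 * np.pi, 100_000)
field = np.sin(x)

raw = field.tobytes()
lossless_ratio = len(zlib.compress(raw, 9)) / len(raw)

# "Lossy" preconditioning: clear the low 32 mantissa bits
# (absolute error below ~1e-5 here), making the bytes far more redundant
mask = np.uint64(0xFFFFFFFF00000000)
trunc = (field.view(np.uint64) & mask).view(np.float64)
lossy_ratio = len(zlib.compress(trunc.tobytes(), 9)) / len(raw)
```

On data like this, `lossy_ratio` is far smaller than `lossless_ratio`, mirroring the modest-lossless versus order-of-magnitude-lossy gap the abstract describes.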
Yang, Jiangxia; Xiao, Hong
2015-08-01
To explore the improvement in hand motion function, spasm, and self-care ability in activities of daily living for stroke patients treated with floating-needle therapy combined with rehabilitation training. Eighty patients with hand spasm within one year after stroke were randomly divided into an observation group and a control group, 40 cases in each. Both groups received rehabilitation training for eight weeks, once a day, 40 min each time. In the observation group, in addition to the above treatment, 2-3 points on the internal and external sides of the forearm, located according to myofascial trigger points, were treated with floating-needle therapy, combined with active or passive flexion and extension of the wrist and knuckles until relief of the hand spasm. The floating-needle therapy was given for eight weeks, once a day for the first three days and once every other day thereafter. Modified Ashworth Scale (MAS) scores, activities of daily living (ADL, Barthel index) scores, and Fugl-Meyer assessment (FMA) scores were used to assess the degree of hand spasm, activities of daily living, and hand motion function before treatment and after 7 days, 14 days, and 8 weeks of treatment. After 7-day, 14-day, and 8-week treatment, MAS scores were significantly lower than those before treatment in both groups (all P<0.05), and Barthel and FMA scores were significantly higher than those before treatment (all P<0.05). After 14-day and 8-week treatment, FMA scores in the observation group were markedly higher than those in the control group (both P<0.05). Both floating-needle therapy combined with rehabilitation training and rehabilitation training alone improved the degree of hand spasm, hand function, and activities of daily living in post-stroke patients, but floating-needle therapy combined with rehabilitation training is superior to rehabilitation training alone for the improvement of hand function.
The Gibbs Energy Basis and Construction of Boiling Point Diagrams in Binary Systems
ERIC Educational Resources Information Center
Smith, Norman O.
2004-01-01
An illustration of how excess Gibbs energies of the components in binary systems can be used to construct boiling point diagrams is given. The underlying causes of the various types of behavior of the systems in terms of intermolecular forces and the method of calculating the coexisting liquid and vapor compositions in boiling point diagrams with…
Floating shoulders: Clinical and radiographic analysis at a mean follow-up of 11 years
Pailhes, Régis; Bonnevialle, Nicolas; Laffosse, Jean-Michel; Tricoire, Jean-Louis; Cavaignac, Etienne; Chiron, Philippe
2013-01-01
Context: The floating shoulder (FS) is an uncommon injury, which can be managed conservatively or surgically. The therapeutic option remains controversial. Aims: The goal of our study was to evaluate long-term results and to identify predictive factors of functional outcomes. Settings and Design: Retrospective monocentric study. Materials and Methods: Forty consecutive FS cases (24 nonoperative and 16 operative) from 1984 to 2009 were included. Clinical results were assessed with the Simple Shoulder Test (SST), Oxford Shoulder Score (OSS), Single Assessment Numeric Evaluation (SANE), Short Form-12 (SF12), Disabilities of the Arm Shoulder and Hand score (DASH), and Constant score (CST). Plain radiographs were reviewed to evaluate secondary displacement, fracture healing, and modification of the lateral offset of the gleno-humeral joint (chest X-rays). New radiographs were made to evaluate osteoarthritis during follow-up. Statistical Analysis Used: T-test, Mann-Whitney test, and Pearson's correlation coefficient were used. The significance level was set at 0.05. Results: At a mean follow-up of 135 months (range 12-312), clinical results were satisfactory with regard to the different mean scores: SST 10.5 points, OSS 14 points, SANE 81%, SF12 (50 points and 60 points), DASH 14.5 points, and CST 84 points. There were no significant differences between the operative and non-operative groups. However, loss of lateral offset influenced the results negatively. Osteoarthritis was diagnosed in five patients (12.5%) without correlation to fracture pattern or type of treatment. Conclusions: This study suggests that the floating shoulder may be treated either conservatively or surgically with satisfactory long-term clinical outcomes. However, loss of gleno-humeral lateral offset should be evaluated carefully before choosing a therapeutic option. PMID:23960364
Code of Federal Regulations, 2010 CFR
2010-07-01
... collection point for stormwater runoff received directly from refinery surfaces and for refinery wastewater... chamber in a stationary manner and which does not move with fluctuations in wastewater levels. Floating... separator. Junction box means a manhole or access point to a wastewater sewer system line. No detectable...
Quality of Arithmetic Education for Children with Cerebral Palsy
ERIC Educational Resources Information Center
Jenks, Kathleen M.; de Moor, Jan; van Lieshout, Ernest C. D. M.; Withagen, Floortje
2010-01-01
The aim of this exploratory study was to investigate the quality of arithmetic education for children with cerebral palsy. The use of individual educational plans, amount of arithmetic instruction time, arithmetic instructional grouping, and type of arithmetic teaching method were explored in three groups: children with cerebral palsy (CP) in…
Wong, Terry Tin-Yau
2017-12-01
The current study examined the unique and shared contributions of arithmetic operation understanding and numerical magnitude representation to children's mathematics achievement. A sample of 124 fourth graders was tested on their arithmetic operation understanding (as reflected by their understanding of arithmetic principles and the knowledge about the application of arithmetic operations) and their precision of rational number magnitude representation. They were also tested on their mathematics achievement and arithmetic computation performance as well as the potential confounding factors. The findings suggested that both arithmetic operation understanding and numerical magnitude representation uniquely predicted children's mathematics achievement. The findings highlight the significance of arithmetic operation understanding in mathematics learning. Copyright © 2017 Elsevier Inc. All rights reserved.
Träff, Ulf; Olsson, Linda; Skagerlund, Kenny; Östergren, Rickard
2018-03-01
A modified pathways to mathematics model was used to examine the cognitive mechanisms underlying arithmetic skills in third graders. A total of 269 children were assessed on tasks tapping the four pathways and arithmetic skills. A path analysis showed that symbolic number processing was directly supported by the linguistic and approximate quantitative pathways. The direct contribution of the four pathways to arithmetic proficiency varied: the linguistic pathway supported single-digit arithmetic and word problem solving, whereas the approximate quantitative pathway supported only multi-digit calculation. The spatial processing and verbal working memory pathways supported only arithmetic word problem solving. The notion of hierarchical levels of arithmetic was supported by the results, with the different levels supported by different constellations of pathways. However, the strongest support for the hierarchical levels of arithmetic was provided by the proximal arithmetic skills. Copyright © 2017 Elsevier Inc. All rights reserved.
Combined GPS/GLONASS Precise Point Positioning with Fixed GPS Ambiguities
Pan, Lin; Cai, Changsheng; Santerre, Rock; Zhu, Jianjun
2014-01-01
Precise point positioning (PPP) technology is mostly implemented with an ambiguity-float solution. Its performance may be further improved by performing ambiguity-fixed resolution. Currently, PPP integer ambiguity resolutions (IARs) are mainly based on GPS-only measurements. The integration of GPS and GLONASS can speed up the convergence and increase the accuracy of float ambiguity estimates, which contributes to enhancing the success rate and reliability of fixing ambiguities. This paper presents an approach for combined GPS/GLONASS PPP with fixed GPS ambiguities (GGPPP-FGA), in which GPS ambiguities are fixed into integers while all GLONASS ambiguities are kept as float values. An improved minimum constellation method (MCM) is proposed to enhance the efficiency of GPS ambiguity fixing. Datasets from 20 globally distributed stations on two consecutive days are employed to investigate the performance of the GGPPP-FGA, including the positioning accuracy, convergence time, and the time to first fix (TTFF). All datasets are processed for a time span of three hours in three scenarios, i.e., the GPS ambiguity-float solution, the GPS ambiguity-fixed resolution, and the GGPPP-FGA resolution. The results indicate that the performance of the GPS ambiguity-fixed resolutions is significantly better than that of the GPS ambiguity-float solutions. In addition, the GGPPP-FGA improves the positioning accuracy by 38%, 25%, and 44% and reduces the convergence time by 36%, 36%, and 29% in the east, north, and up coordinate components over the GPS-only ambiguity-fixed resolutions, respectively. Moreover, the TTFF is reduced by 27% after adding GLONASS observations. Wilcoxon rank sum tests and chi-square two-sample tests are performed to examine the significance of the improvement in positioning accuracy, convergence time, and TTFF. PMID:25237901
Parametric study of two-body floating-point wave absorber
NASA Astrophysics Data System (ADS)
Amiri, Atena; Panahi, Roozbeh; Radfar, Soheil
2016-03-01
In this paper, we present a comprehensive numerical simulation of a point wave absorber in deep water. Analyses are performed in both the frequency and time domains. The converter is a two-body floating-point absorber (FPA) with one degree of freedom in the heave direction. Its two parts are connected by a linear mass-spring-damper system. The commercial ANSYS-AQWA software used in this study performed well in the validation cases. The velocity potential is obtained by assuming incompressible and irrotational flow. Using this model, we investigated the effects of wave characteristics on energy conversion and device efficiency, including wave height and wave period, as well as the device diameter, draft, geometry, and damping coefficient. To validate the model, we compared our numerical results with those from similar experiments. Our results can help maximize the converter's efficiency under specific conditions.
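The two-body heave arrangement described above can be sketched as a generic 2-DOF mass-spring-damper system solved in the frequency domain, [-ω²M + iωC + K]x = F. All parameter values below are illustrative assumptions, not the paper's, and the hydrodynamic coefficients are reduced to constants for the sketch:

```python
import numpy as np

def relative_response(omega, m1=1e5, m2=2e5, k_hyd=3e5, k=5e4, c=2e4, F=1e4):
    """Relative heave amplitude |x1 - x2| of a two-body absorber:
    body 1 (surface float, hydrostatic stiffness k_hyd) is excited by a
    harmonic wave force F and coupled to body 2 via spring k and damper c."""
    M = np.array([[m1, 0.0], [0.0, m2]])
    C = np.array([[c, -c], [-c, c]])
    K = np.array([[k_hyd + k, -k], [-k, k]])
    Z = -omega**2 * M + 1j * omega * C + K      # frequency-domain impedance
    x = np.linalg.solve(Z, np.array([F, 0.0]))  # complex heave amplitudes
    return abs(x[0] - x[1])

# time-averaged power absorbed by the PTO damper: P = 0.5 * c * w^2 * |x1-x2|^2
r = relative_response(1.0)
```

Sweeping `omega` and the damping coefficient `c` in such a model is the kind of parametric study the abstract describes, albeit here without the frequency-dependent added mass and radiation damping a real AQWA run provides.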
High resolution time interval counter
Condreva, Kenneth J.
1994-01-01
A high resolution counter circuit measures the time interval between the occurrence of an initial and a subsequent electrical pulse to two nanoseconds resolution using an eight megahertz clock. The circuit includes a main counter for receiving electrical pulses and generating a binary word--a measure of the number of eight megahertz clock pulses occurring between the signals. A pair of first and second pulse stretchers receive the signal and generate a pair of output signals whose widths are approximately sixty-four times the time between the receipt of the signals by the respective pulse stretchers and the receipt by the respective pulse stretchers of a second subsequent clock pulse. Output signals are thereafter supplied to a pair of start and stop counters operable to generate a pair of binary output words representative of the measure of the width of the pulses to a resolution of two nanoseconds. Errors associated with the pulse stretchers are corrected by providing calibration data to both stretcher circuits, and recording start and stop counter values. Stretched initial and subsequent signals are combined with autocalibration data and supplied to an arithmetic logic unit to determine the time interval in nanoseconds between the pair of electrical pulses being measured.
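Using the figures given in the abstract (an 8 MHz clock, hence a 125 ns period, and a ×64 stretch factor yielding the ~2 ns resolution), the final interval arithmetic performed by the ALU can be sketched as below. The calibration terms and the sign convention are assumptions for illustration, not taken from the patent:

```python
CLOCK_HZ = 8_000_000       # 8 MHz reference clock
T0_NS = 1e9 / CLOCK_HZ     # 125 ns clock period
STRETCH = 64               # pulse-stretcher gain -> 125/64 ~ 2 ns LSB

def interval_ns(n_main, n_start, n_stop, cal_start=0.0, cal_stop=0.0):
    """Interpolating-counter arithmetic: coarse clock count plus the
    stretched start/stop fractions, with additive calibration corrections
    (cal_* stand in for the autocalibration data in the abstract)."""
    frac = (n_start - cal_start) - (n_stop - cal_stop)
    return (n_main + frac / STRETCH) * T0_NS

# e.g. 10 whole clock periods plus a 16-count net fractional part:
t = interval_ns(10, 32, 16)   # (10 + 16/64) * 125 ns = 1281.25 ns
```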
High resolution time interval counter
Condreva, K.J.
1994-07-26
A high resolution counter circuit measures the time interval between the occurrence of an initial and a subsequent electrical pulse to two nanoseconds resolution using an eight megahertz clock. The circuit includes a main counter for receiving electrical pulses and generating a binary word--a measure of the number of eight megahertz clock pulses occurring between the signals. A pair of first and second pulse stretchers receive the signal and generate a pair of output signals whose widths are approximately sixty-four times the time between the receipt of the signals by the respective pulse stretchers and the receipt by the respective pulse stretchers of a second subsequent clock pulse. Output signals are thereafter supplied to a pair of start and stop counters operable to generate a pair of binary output words representative of the measure of the width of the pulses to a resolution of two nanoseconds. Errors associated with the pulse stretchers are corrected by providing calibration data to both stretcher circuits, and recording start and stop counter values. Stretched initial and subsequent signals are combined with autocalibration data and supplied to an arithmetic logic unit to determine the time interval in nanoseconds between the pair of electrical pulses being measured. 3 figs.
An array processing system for lunar geochemical and geophysical data
NASA Technical Reports Server (NTRS)
Eliason, E. M.; Soderblom, L. A.
1977-01-01
A computerized array processing system has been developed to reduce, analyze, display, and correlate a large number of orbital and earth-based geochemical, geophysical, and geological measurements of the moon on a global scale. The system supports the activities of a consortium of about 30 lunar scientists involved in data synthesis studies. The system was modeled after standard digital image-processing techniques but differs in that processing is performed with floating-point precision rather than integer precision. Because of the flexibility of floating-point image processing, a series of techniques that are impossible or cumbersome in conventional integer processing were developed to perform optimum interpolation and smoothing of data. Recently, color maps of about 25 lunar geophysical and geochemical variables have been generated.
Kraus, Wayne A; Wagner, Albert F
1986-04-01
A triatomic classical trajectory code has been modified by extensive vectorization of the algorithms to achieve much improved performance on an FPS 164 attached processor. Extensive timings on both the FPS 164 and a VAX 11/780 with floating point accelerator are presented as a function of the number of trajectories simultaneously run. The timing tests involve a potential energy surface of the LEPS variety and trajectories with 1000 time steps. The results indicate that vectorization results in timing improvements on both the VAX and the FPS. For larger numbers of trajectories run simultaneously, up to a factor of 25 improvement in speed occurs between VAX and FPS vectorized code. Copyright © 1986 John Wiley & Sons, Inc.
Algebraic Functions, Computer Programming, and the Challenge of Transfer
ERIC Educational Resources Information Center
Schanzer, Emmanuel Tanenbaum
2015-01-01
Students' struggles with algebra are well documented. Prior to the introduction of functions, mathematics is typically focused on applying a set of arithmetic operations to compute an answer. The introduction of functions, however, marks the point at which mathematics begins to focus on building up abstractions as a way to solve complex problems.…
Memristive effects in oxygenated amorphous carbon nanodevices
NASA Astrophysics Data System (ADS)
Bachmann, T. A.; Koelmans, W. W.; Jonnalagadda, V. P.; Le Gallo, M.; Santini, C. A.; Sebastian, A.; Eleftheriou, E.; Craciun, M. F.; Wright, C. D.
2018-01-01
Computing with resistive-switching (memristive) memory devices has shown much recent progress and offers an attractive route to circumvent the von Neumann bottleneck, i.e. the separation of processing and memory, which limits the performance of conventional computer architectures. Due to their good scalability and nanosecond switching speeds, carbon-based resistive-switching memory devices could play an important role in this respect. However, devices based on elemental carbon, such as tetrahedral amorphous carbon or ta-C, typically suffer from a low cycling endurance. A material that has proven to be capable of combining the advantages of elemental carbon-based memories with simple fabrication methods and good endurance performance for binary memory applications is oxygenated amorphous carbon, or a-COx. Here, we examine the memristive capabilities of nanoscale a-COx devices, in particular their ability to provide the multilevel and accumulation properties that underpin computing type applications. We show the successful operation of nanoscale a-COx memory cells for both the storage of multilevel states (here 3-level) and for the provision of an arithmetic accumulator. We implement a base-16, or hexadecimal, accumulator and show how such a device can carry out hexadecimal arithmetic and simultaneously store the computed result in the self-same a-COx cell, all using fast (sub-10 ns) and low-energy (sub-pJ) input pulses.
Single Crystal Fibers of Yttria-Stabilized Cubic Zirconia with Ternary Oxide Additions
NASA Technical Reports Server (NTRS)
Ritzert, F. J.; Yun, H. M.; Miner, R. V.
1997-01-01
Single crystal fibers of yttria (Y2O3)-stabilized cubic zirconia (ZrO2) with ternary oxide additions were grown using the laser float zone fiber processing technique. Ternary additions to the ZrO2-Y2O3 binary system were studied with the aim of increasing strength while maintaining the high coefficient of thermal expansion of the binary system. Statistical methods aided in identifying the most promising ternary oxide candidate (among Ta2O5, Sc2O3, and HfO2) and the optimum composition. The yttria range investigated was 14 to 24 mol %, and the ternary oxide component ranged from 1 to 5 mol %. Hafnium oxide was the most promising ternary oxide component based on 816 °C tensile strength results and ease of fabrication. The optimum composition for development was 81 ZrO2-14 Y2O3-5 HfO2 (mol %), based upon the same elevated-temperature strength tests. Preliminary results indicate process improvements could further improve fiber performance. We also investigated the effect of crystal orientation on strength.
Implicit Learning of Arithmetic Regularities Is Facilitated by Proximal Contrast
Prather, Richard W.
2012-01-01
Natural number arithmetic is a simple, powerful, and important symbolic system. Despite intense focus on learning in cognitive development and educational research, many adults have weak knowledge of the system. In the current study, participants learned arithmetic principles via an implicit learning paradigm: not by solving arithmetic equations, but by viewing and evaluating example equations, similar to the implicit learning of artificial grammars, which we extend here to the symbolic arithmetic system. Specifically, we find that exposure to principle-inconsistent examples facilitates the acquisition of arithmetic principle knowledge if the equations are presented to the learner in a temporally proximate fashion. The results expand on research into the implicit learning of regularities and suggest that contrasting cases, shown to facilitate explicit arithmetic learning, are also relevant to the implicit learning of arithmetic. PMID:23119101
Arithmetic Circuit Verification Based on Symbolic Computer Algebra
NASA Astrophysics Data System (ADS)
Watanabe, Yuki; Homma, Naofumi; Aoki, Takafumi; Higuchi, Tatsuo
This paper presents a formal approach to verifying arithmetic circuits using symbolic computer algebra. Our method describes arithmetic circuits directly with high-level mathematical objects based on weighted number systems and arithmetic formulae. Such circuit descriptions can be effectively verified by polynomial reduction techniques using Gröbner bases. In this paper, we describe how symbolic computer algebra can be used to describe and verify arithmetic circuits. The advantages of the proposed approach are demonstrated through experimental verification of arithmetic circuits such as a multiply-accumulator and an FIR filter. The results show that the proposed approach is capable of verifying practical arithmetic circuits.
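As a toy sketch of the verification idea only (not the paper's weighted-number-system framework), one can check a gate-level full adder against its arithmetic specification by polynomial reduction modulo the Boolean ideal ⟨a²−a, b²−b, cin²−cin⟩. Those generators already form a Gröbner basis, so reduction amounts to flattening exponents, i.e. working with multilinear polynomials:

```python
# Multilinear polynomials = the quotient ring F[a,b,cin]/<x^2 - x>:
# monomials are subsets of variables, since x*x reduces to x.
def pmul(p, q):
    r = {}
    for m1, c1 in p.items():
        for m2, c2 in q.items():
            m = m1 | m2                      # x*x -> x (Boolean reduction)
            r[m] = r.get(m, 0) + c1 * c2
    return {m: c for m, c in r.items() if c}

def padd(*ps):
    r = {}
    for p in ps:
        for m, c in p.items():
            r[m] = r.get(m, 0) + c
    return {m: c for m, c in r.items() if c}

def scale(p, k):
    return {m: k * c for m, c in p.items()}

def var(name):
    return {frozenset([name]): 1}

a, b, cin = var('a'), var('b'), var('cin')

# Gate-level full adder: XOR(x,y) = x + y - 2xy, AND(x,y) = xy, OR = x + y - xy
t1 = padd(a, b, scale(pmul(a, b), -2))
s = padd(t1, cin, scale(pmul(t1, cin), -2))
c1, c2 = pmul(a, b), pmul(t1, cin)
cout = padd(c1, c2, scale(pmul(c1, c2), -1))

# Specification: 2*cout + s - (a + b + cin) must reduce to the zero polynomial
spec = padd(scale(cout, 2), s, scale(a, -1), scale(b, -1), scale(cin, -1))
assert spec == {}   # remainder 0 -> the circuit matches its spec
```

A remainder of zero certifies correctness for all inputs at once, without enumerating the truth table; this is the essence of the Gröbner-basis approach, though real tools handle far larger circuits and richer number systems.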
2015-01-01
crafts on floating ice sheets near McMurdo, Antarctica (Katona and Vaudrey 1973; Katona 1974; Vaudrey 1977). To comply with the first criterion, one...Nomographs for operating wheeled aircraft on sea- ice runways: McMurdo Station, Antarctica . In Proceedings of the Offshore Mechanics and Arctic Engineering... Ice Thickness Requirements for Vehicles and Heavy Equipment at McMurdo Station, Antarctica . CRREL Project Report 04- 09, “Safe Sea Ice for Vehicle
The anatomy of floating shock fitting. [shock waves computation for flow field
NASA Technical Reports Server (NTRS)
Salas, M. D.
1975-01-01
The floating shock fitting technique is examined. Second-order difference formulas are developed for the computation of discontinuities. A procedure is developed to compute mesh points that are crossed by discontinuities. The technique is applied to the calculation of internal two-dimensional flows with an arbitrary number of shock waves and contact surfaces. A new procedure, based on the coalescence of characteristics, is developed to detect the formation of shock waves. Results are presented to validate and demonstrate the versatility of the technique.
He, Yunfeng; Zhou, Xinlin; Shi, Dexin; Song, Hairong; Zhang, Hui; Shi, Jiannong
2016-01-01
Approximate number system (ANS) acuity and mathematical ability have been found to be closely associated in recent studies. However, whether and how the two are causally related remains largely unaddressed. There are two hypotheses about the possible causal relationship: ANS acuity influences mathematical performance, or access to math education sharpens ANS acuity. Evidence in support of both hypotheses has been reported, but the two hypotheses have never been tested simultaneously, so it remains unknown whether the association reflects a one-directional or a reciprocal causal relationship. In this work, we provide new evidence on the causal relationship between ANS acuity and arithmetic ability. ANS acuity and mathematical ability of elementary-school students were measured sequentially at three time points within one year, and all possible causal directions were evaluated simultaneously using cross-lagged regression analysis. The results show that ANS acuity influences later arithmetic ability, while the reverse causal direction was not supported. Our finding adds strong evidence for the causal association between ANS acuity and mathematical ability, and has important implications for educational interventions designed to train ANS acuity and thereby promote mathematical ability. PMID:27462291
Fast Fuzzy Arithmetic Operations
NASA Technical Reports Server (NTRS)
Hampton, Michael; Kosheleva, Olga
1997-01-01
In engineering applications of fuzzy logic, the main goal is not to simulate the way the experts really think, but to come up with a good engineering solution that would (ideally) be better than the expert's control. In such applications, it makes perfect sense to restrict ourselves to simplified approximate expressions for membership functions. If we need to perform arithmetic operations on the resulting fuzzy numbers, then we can use simple and fast algorithms that are known for operations with simple membership functions. In other applications, especially ones related to the humanities, simulating experts is one of the main goals. In such applications, we must use membership functions that capture every nuance of the expert's opinion; these functions are therefore complicated, and fuzzy arithmetic operations with the corresponding fuzzy numbers become a computational problem. In this paper, we design a new algorithm for performing such operations. This algorithm is applicable in the case when the negative logarithms -log(u(x)) of the membership functions u(x) are convex, and it reduces computation time from O(n^2) to O(n log(n)) (where n is the number of points x at which we know the membership functions u(x)).
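For the "simple and fast algorithms" available with simple membership functions, a standard sketch (this is generic alpha-cut interval arithmetic, not the paper's log-convexity algorithm, and the helper names are made up) is to add fuzzy numbers level by level, since the alpha-cut of a sum is the sum of the alpha-cuts:

```python
def tri_cut(a, m, b, alpha):
    """Alpha-cut [lo, hi] of a triangular fuzzy number (a, m, b)."""
    return (a + alpha * (m - a), b - alpha * (b - m))

def fuzzy_add_cuts(x, y, levels=5):
    """Add two triangular fuzzy numbers by interval arithmetic
    on a discrete ladder of alpha levels."""
    cuts = []
    for i in range(levels + 1):
        alpha = i / levels
        (xl, xh), (yl, yh) = tri_cut(*x, alpha), tri_cut(*y, alpha)
        cuts.append((alpha, xl + yl, xh + yh))
    return cuts

# (1, 2, 3) + (2, 3, 4) gives the triangular fuzzy number (3, 5, 7)
cuts = fuzzy_add_cuts((1, 2, 3), (2, 3, 4))
```

With n alpha levels this is O(n) per operation for such simple shapes; the computational burden the abstract refers to arises only for arbitrarily detailed membership functions.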
NASA Astrophysics Data System (ADS)
Kadum, Hawwa; Rockel, Stanislav; Holling, Michael; Peinke, Joachim; Cal, Raul Bayon
2017-11-01
The wake behind a floating model horizontal axis wind turbine undergoing pitch motion is investigated and compared to the wake of a fixed wind turbine. An experiment is conducted in an acoustic wind tunnel where hot-wire data are acquired at five downstream locations. At each downstream location, a rake of 16 hot-wires is used, with the probes placed at increasing radial distances along the vertical, the horizontal, and a 45-degree diagonal. In addition, the effect of turbulence intensity on the floating wake is examined by subjecting the wind turbine to different inflow conditions, controlled through three settings of the wind tunnel grid: one passive and two active protocols of varying intensity. The wakes are inspected through statistics of the point measurements, where the various length/time scales are considered. The wake characteristics of the floating wind turbine are compared to those of a fixed turbine to uncover its distinctive features; this is relevant as the demand for exploiting deep waters for wind energy increases.
Generalized Roche potential for misaligned binary systems - Properties of the critical lobe
NASA Technical Reports Server (NTRS)
Avni, Y.; Schiller, N.
1982-01-01
The paper considers the Roche potential for binary systems where the stellar rotation axis is not aligned with the orbital revolution axis. It is shown that, as the degree of misalignment varies, internal Lagrangian points and external Lagrangian points may switch their roles. A systematic method to identify the internal Lagrangian point and to calculate the volume of the critical lobe is developed, and numerical results for a wide range of parameters of binary systems with circular orbits are presented. For binary systems with large enough misalignment, discrete changes occur in the topological structure of the equipotential surfaces as the orbital phase varies. The volume of the critical lobe has minima, as a function of orbital phase, at the two instances when the secondary crosses the equatorial plane of the primary. In semidetached systems, mass transfer may be confined to the vicinity of these two instances.
The neural circuits for arithmetic principles.
Liu, Jie; Zhang, Han; Chen, Chuansheng; Chen, Hui; Cui, Jiaxin; Zhou, Xinlin
2017-02-15
Arithmetic principles are the regularities underlying arithmetic computation. Little is known about how the brain supports the processing of arithmetic principles. The current fMRI study examined neural activation and functional connectivity during the processing of verbalized arithmetic principles, as compared to numerical computation and general language processing. As expected, arithmetic principles elicited stronger activation in bilateral horizontal intraparietal sulcus and right supramarginal gyrus than did language processing, and stronger activation in left middle temporal lobe and left orbital part of inferior frontal gyrus than did computation. In contrast, computation elicited greater activation in bilateral horizontal intraparietal sulcus (extending to posterior superior parietal lobule) than did either arithmetic principles or language processing. Functional connectivity analysis with the psychophysiological interaction approach (PPI) showed that left temporal-parietal (MTG-HIPS) connectivity was stronger during the processing of arithmetic principle and language than during computation, whereas parietal-occipital connectivities were stronger during computation than during the processing of arithmetic principles and language. Additionally, the left fronto-parietal (orbital IFG-HIPS) connectivity was stronger during the processing of arithmetic principles than during computation. The results suggest that verbalized arithmetic principles engage a neural network that overlaps but is distinct from the networks for computation and language processing. Copyright © 2016 Elsevier Inc. All rights reserved.
Specificity and Overlap in Skills Underpinning Reading and Arithmetical Fluency
ERIC Educational Resources Information Center
van Daal, Victor; van der Leij, Aryan; Ader, Herman
2013-01-01
The aim of this study was to examine unique and common causes of problems in reading and arithmetic fluency. 13- to 14-year-old students were placed into one of five groups: reading disabled (RD, n = 16), arithmetic disabled (AD, n = 34), reading and arithmetic disabled (RAD, n = 17), reading, arithmetic, and listening comprehension disabled…
NASA Astrophysics Data System (ADS)
Pape, Dennis R.
1990-09-01
The present conference discusses topics in optical image processing, optical signal processing, acoustooptic spectrum analyzer systems and components, and optical computing. Attention is given to tradeoffs in nonlinearly recorded matched filters, miniature spatial light modulators, detection and classification using higher-order statistics of optical matched filters, rapid traversal of an image data base using binary synthetic discriminant filters, wideband signal processing for emitter location, an acoustooptic processor for autonomous SAR guidance, and sampling of Fresnel transforms. Also discussed are an acoustooptic RF signal-acquisition system, scanning acoustooptic spectrum analyzers, the effects of aberrations on acoustooptic systems, fast optical digital arithmetic processors, information utilization in analog and digital processing, optical processors for smart structures, and a self-organizing neural network for unsupervised learning.
Omorczyk, Jarosław; Nosiadek, Leszek; Ambroży, Tadeusz; Nosiadek, Andrzej
2015-01-01
The main aim of this study was to verify the usefulness of selected simple methods of recording and fast biomechanical analysis performed by judges of artistic gymnastics in assessing a gymnast's movement technique. The study participants comprised six artistic gymnastics judges, who assessed back handsprings using two methods: a real-time observation method and a frame-by-frame video analysis method. They also determined flexion angles of knee and hip joints using a computer program. In the case of the real-time observation method, the judges gave a total of 5.8 error points with an arithmetic mean of 0.16 points for the flexion of the knee joints. In the high-speed video analysis method, the total amounted to 8.6 error points and the mean value amounted to 0.24 error points. For the excessive flexion of hip joints, the sum of the error values was 2.2 error points and the arithmetic mean was 0.06 error points during real-time observation. The sum obtained using the frame-by-frame analysis method equaled 10.8 and the mean equaled 0.30 error points. Error values obtained through the frame-by-frame video analysis of movement technique were higher than those obtained through the real-time observation method. The judges were able to indicate the number of the frame in which the maximal joint flexion occurred with good accuracy. Both the real-time observation method and high-speed video analysis performed without determining exact joint angles were found to be insufficient tools for assessing movement technique and improving the quality of judging.
Recall of patterns using binary and gray-scale autoassociative morphological memories
NASA Astrophysics Data System (ADS)
Sussner, Peter
2005-08-01
Morphological associative memories (MAM's) belong to a class of artificial neural networks that perform the operations erosion or dilation of mathematical morphology at each node. Therefore we speak of morphological neural networks. Alternatively, the total input effect on a morphological neuron can be expressed in terms of lattice induced matrix operations in the mathematical theory of minimax algebra. Neural models of associative memories are usually concerned with the storage and the retrieval of binary or bipolar patterns. Thus far, the emphasis in research on morphological associative memory systems has been on binary models, although a number of notable features of autoassociative morphological memories (AMM's) such as optimal absolute storage capacity and one-step convergence have been shown to hold in the general, gray-scale setting. In previous papers, we gained valuable insight into the storage and recall phases of AMM's by analyzing their fixed points and basins of attraction. We have shown in particular that the fixed points of binary AMM's correspond to the lattice polynomials in the original patterns. This paper extends these results in the following ways. In the first place, we provide an exact characterization of the fixed points of gray-scale AMM's in terms of combinations of the original patterns. Secondly, we present an exact expression for the fixed point attractor that represents the output of either a binary or a gray-scale AMM upon presentation of a certain input. The results of this paper are confirmed in several experiments using binary patterns and gray-scale images.
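The storage and recall operations this abstract describes can be made concrete with a minimal sketch of a gray-scale autoassociative morphological memory in the min-memory (W_XX) form; the pattern matrix below is an invented toy example, not data from the paper:

```python
import numpy as np

def amm_store(X):
    """Min-memory of the Ritter/Sussner type: W[i, j] = min over stored
    patterns of (x_i - x_j).  X holds one pattern per row, shape (k, n)."""
    D = X[:, :, None] - X[:, None, :]   # D[k, i, j] = X[k, i] - X[k, j]
    return D.min(axis=0)

def amm_recall(W, x):
    """Max-plus (dilation-like) product: y_i = max over j of (W[i, j] + x_j).
    Every stored pattern is a fixed point, so recall of a stored input is exact."""
    return (W + x[None, :]).max(axis=1)

# Two toy gray-scale patterns; each is recalled perfectly in one step.
X = np.array([[1.0, 5.0, 3.0],
              [2.0, 0.0, 4.0]])
W = amm_store(X)
print(amm_recall(W, X[0]))   # [1. 5. 3.]
```

This one-step exact recall of stored patterns illustrates the optimal absolute storage capacity and one-step convergence mentioned in the abstract; robustness to erosive versus dilative noise depends on whether the min-memory W or its dual max-memory M is used.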
Binary Colloidal Alloy Test-3 and 4: Critical Point
NASA Technical Reports Server (NTRS)
Weitz, David A.; Lu, Peter J.
2007-01-01
Binary Colloidal Alloy Test - 3 and 4: Critical Point (BCAT-3-4-CP) will determine phase separation rates and add needed points to the phase diagram of a model critical fluid system. Crewmembers photograph samples of polymer and colloidal particles (tiny nanoscale spheres suspended in liquid) that model liquid/gas phase changes. Results will help scientists develop fundamental physics concepts previously cloaked by the effects of gravity.
33 CFR 161.18 - Reporting requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... call. H HOTEL Date, time and point of entry system Entry time expressed as in (B) and into the entry... KILO Date, time and point of exit from system Exit time expressed as in (B) and exit position expressed....; for a dredge or floating plant: configuration of pipeline, mooring configuration, number of assist...
ERIC Educational Resources Information Center
Zhang, Xiao; Räsänen, Pekka; Koponen, Tuire; Aunola, Kaisa; Lerkkanen, Marja-Kristiina; Nurmi, Jari-Erik
2017-01-01
The longitudinal relations of domain-general and numerical skills at ages 6-7 years to 3 cognitive domains of arithmetic learning, namely knowing (written computation), applying (arithmetic word problems), and reasoning (arithmetic reasoning) at age 11, were examined for a representative sample of 378 Finnish children. The results showed that…
Foley, Alana E; Vasilyeva, Marina; Laski, Elida V
2017-06-01
This study examined the mediating role of children's use of decomposition strategies in the relation between visuospatial memory (VSM) and arithmetic accuracy. Children (N = 78; Age M = 9.36) completed assessments of VSM, arithmetic strategies, and arithmetic accuracy. Consistent with previous findings, VSM predicted arithmetic accuracy in children. Extending previous findings, the current study showed that the relation between VSM and arithmetic performance was mediated by the frequency of children's use of decomposition strategies. Identifying the role of arithmetic strategies in this relation has implications for increasing the math performance of children with lower VSM. Statement of contribution What is already known on this subject? The link between children's visuospatial working memory and arithmetic accuracy is well documented. Frequency of decomposition strategy use is positively related to children's arithmetic accuracy. Children's spatial skill positively predicts the frequency with which they use decomposition. What does this study add? Short-term visuospatial memory (VSM) positively relates to the frequency of children's decomposition use. Decomposition use mediates the relation between short-term VSM and arithmetic accuracy. Children with limited short-term VSM may struggle to use decomposition, decreasing accuracy. © 2016 The British Psychological Society.
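The mediation logic in this abstract (VSM → decomposition-strategy use → accuracy) can be sketched with the standard product-of-coefficients decomposition on simulated data; the sample size echoes the study, but every coefficient and variable below is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: visuospatial memory (VSM) drives decomposition use
# (the mediator), which in turn drives arithmetic accuracy.
n = 78
vsm = rng.normal(size=n)
decomp = 0.6 * vsm + rng.normal(scale=0.5, size=n)                    # a-path
accuracy = 0.5 * decomp + 0.1 * vsm + rng.normal(scale=0.5, size=n)   # b- and c'-paths

def slope(y, X):
    """OLS coefficients of y ~ X with an intercept; intercept dropped."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta[1:]

c = slope(accuracy, vsm)[0]                                  # total effect
a = slope(decomp, vsm)[0]                                    # VSM -> mediator
b, c_prime = slope(accuracy, np.column_stack([decomp, vsm]))  # mediator, direct
# The total effect decomposes exactly as c = c' + a*b in OLS.
print(f"total={c:.3f} direct={c_prime:.3f} indirect={a * b:.3f}")
```

The indirect effect a·b is the mediated share of the VSM-accuracy relation; its significance would normally be assessed with a Sobel test or bootstrap, which this sketch omits.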
Moura, Octávio; Simões, Mário R; Pereira, Marcelino
2014-02-01
This study analysed the usefulness of the Wechsler Intelligence Scale for Children-Third Edition in identifying specific cognitive impairments that are linked to developmental dyslexia (DD) and the diagnostic utility of the most common profiles in a sample of 100 Portuguese children (50 dyslexic and 50 normal readers) between the ages of 8 and 12 years. Children with DD exhibited significantly lower scores in the Verbal Comprehension Index (except the Vocabulary subtest), Freedom from Distractibility Index (FDI) and Processing Speed Index subtests, with larger effect sizes than normal readers in Information, Arithmetic and Digit Span. The Verbal-Performance IQs discrepancies, Bannatyne pattern and the presence of FDI; Arithmetic, Coding, Information and Digit Span subtests (ACID) and Symbol Search, Coding, Arithmetic and Digit Span subtests (SCAD) profiles (full or partial) in the lowest subtests revealed a low diagnostic utility. However, the receiver operating characteristic curve and the optimal cut-off score analyses of the composite ACID; FDI and SCAD profiles scores showed moderate accuracy in correctly discriminating dyslexic readers from normal ones. These results suggested that in the context of a comprehensive assessment, the Wechsler Intelligence Scale for Children-Third Edition provides some useful information about the presence of specific cognitive disabilities in DD. Practitioner Points. Children with developmental dyslexia revealed significant deficits in the Wechsler Intelligence Scale for Children-Third Edition subtests that rely on verbal abilities, processing speed and working memory. The composite Arithmetic, Coding, Information and Digit Span subtests (ACID); Freedom from Distractibility Index and Symbol Search, Coding, Arithmetic and Digit Span subtests (SCAD) profile scores showed moderate accuracy in correctly discriminating dyslexics from normal readers. 
Wechsler Intelligence Scale for Children-Third Edition may provide some useful information about the presence of specific cognitive disabilities in developmental dyslexia. Copyright © 2013 John Wiley & Sons, Ltd.
Reading instead of reasoning? Predictors of arithmetic skills in children with cochlear implants.
Huber, Maria; Kipman, Ulrike; Pletzer, Belinda
2014-07-01
The aim of the present study was to evaluate whether the arithmetic achievement of children with cochlear implants (CI) was lower or comparable to that of their normal hearing peers and to identify predictors of arithmetic achievement in children with CI. In particular we related the arithmetic achievement of children with CI to nonverbal IQ, reading skills and hearing variables. 23 children with CI (onset of hearing loss in the first 24 months, cochlear implantation in the first 60 months of life, at least 3 years of hearing experience with the first CI) and 23 normal hearing peers matched by age, gender, and social background participated in this case-control study. All attended grades two to four in primary schools. To assess their arithmetic achievement, all children completed the "Arithmetic Operations" part of the "Heidelberger Rechentest" (HRT), a German arithmetic test. To assess reading skills and nonverbal intelligence as potential predictors of arithmetic achievement, all children completed the "Salzburger Lesetest" (SLS), a German reading screening, and the Culture Fair Intelligence Test (CFIT), a nonverbal intelligence test. Children with CI did not differ significantly from hearing children in their arithmetic achievement. Correlation and regression analyses revealed that in children with CI, arithmetic achievement was significantly (positively) related to reading skills, but not to nonverbal IQ. Reading skills and nonverbal IQ were not related to each other. In normal hearing children, arithmetic achievement was significantly (positively) related to nonverbal IQ, but not to reading skills. Reading skills and nonverbal IQ were positively correlated. Hearing variables were not related to arithmetic achievement. Children with CI do not show lower performance in non-verbal arithmetic tasks, compared to normal hearing peers. Copyright © 2014. Published by Elsevier Ireland Ltd.
Parallel processor for real-time structural control
NASA Astrophysics Data System (ADS)
Tise, Bert L.
1993-07-01
A parallel processor that is optimized for real-time linear control has been developed. This modular system consists of A/D modules, D/A modules, and floating-point processor modules. The scalable processor uses up to 1,000 Motorola DSP96002 floating-point processors for a peak computational rate of 60 GFLOPS. Sampling rates up to 625 kHz are supported by this analog-in to analog-out controller. The high processing rate and parallel architecture make this processor suitable for computing state-space equations and other multiply/accumulate-intensive digital filters. Processor features include 14-bit conversion devices, low input-to-output latency, 240 Mbyte/s synchronous backplane bus, low-skew clock distribution circuit, VME connection to host computer, parallelizing code generator, and look-up tables for actuator linearization. This processor was designed primarily for experiments in structural control. The A/D modules sample sensors mounted on the structure and the floating-point processor modules compute the outputs using the programmed control equations. The outputs are sent through the D/A module to the power amps used to drive the structure's actuators. The host computer is a Sun workstation. An OpenWindows-based control panel is provided to facilitate data transfer to and from the processor, as well as to control the operating mode of the processor. A diagnostic mode is provided to allow stimulation of the structure and acquisition of the structural response via sensor inputs.
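The controller's inner loop (sample inputs, evaluate the state-space equations, write outputs) reduces to one discrete-time update per sampling period; the sketch below uses the standard state-space matrix names, and the toy matrices are invented for illustration, not taken from the report:

```python
import numpy as np

def control_step(A, B, C, D, x, u):
    """One sampling period of a discrete-time state-space controller:
    read the sensor vector u (A/D side), emit the actuator vector y
    (D/A side), then advance the internal state x."""
    y = C @ x + D @ u            # output equation
    x_next = A @ x + B @ u       # state update
    return y, x_next

# Toy first-order controller (all matrices invented for illustration).
A = np.array([[0.5]]); B = np.array([[1.0]])
C = np.array([[1.0]]); D = np.array([[0.0]])
x = np.array([0.0])
for u in ([1.0], [0.0], [0.0]):
    y, x = control_step(A, B, C, D, x, np.array(u))
```

Each pass through `control_step` is dominated by matrix-vector multiply/accumulates, which is why the abstract's architecture maps the equations across many floating-point processors.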
Unsteady aerodynamic analysis for offshore floating wind turbines under different wind conditions.
Xu, B F; Wang, T G; Yuan, Y; Cao, J F
2015-02-28
A free-vortex wake (FVW) model is developed in this paper to analyse the unsteady aerodynamic performance of offshore floating wind turbines. A time-marching algorithm of third-order accuracy is applied in the FVW model. Owing to the complex floating platform motions, the blade inflow conditions and the positions of initial points of vortex filaments, which are different from the fixed wind turbine, are modified in the implemented model. A three-dimensional rotational effect model and a dynamic stall model are coupled into the FVW model to improve the aerodynamic performance prediction in the unsteady conditions. The effects of floating platform motions in the simulation model are validated by comparison between calculation and experiment for a small-scale rigid test wind turbine coupled with a floating tension leg platform (TLP). The dynamic inflow effect carried by the FVW method itself is confirmed and the results agree well with the experimental data of a pitching transient on another test turbine. Also, the flapping moment at the blade root in yaw on the same test turbine is calculated and compares well with the experimental data. Then, the aerodynamic performance is simulated in a yawed condition of steady wind and in an unyawed condition of turbulent wind, respectively, for a large-scale wind turbine coupled with the floating TLP motions, demonstrating obvious differences in rotor performance and blade loading from the fixed wind turbine. The non-dimensional magnitudes of loading changes due to the floating platform motions decrease from the blade root to the blade tip. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
Artificial equilibrium points in binary asteroid systems with continuous low-thrust
NASA Astrophysics Data System (ADS)
Bu, Shichao; Li, Shuang; Yang, Hongwei
2017-08-01
The positions and dynamical characteristics of artificial equilibrium points (AEPs) in the vicinity of a binary asteroid with continuous low-thrust are studied. The restricted ellipsoid-ellipsoid model is employed for the binary asteroid system, and the positions of the AEPs are obtained from this model. It is found that the set of the point L1 or L2 forms the shape of an ellipsoid, while the set of the point L3 forms a shape like a "banana". The effect of the continuous low-thrust on the feasible region of motion is analyzed by zero velocity curves: with the low-thrust applied, otherwise unreachable regions become reachable. The linearized equations of motion are derived for stability analysis. Based on the characteristic equation of the linearized equations, the stability conditions are derived. The stable regions of AEPs are investigated by a parametric analysis, and the effect of the mass ratio and ellipsoid parameters on the stable regions is discussed. The results show that the influence of the mass ratio on the stable regions is more significant than that of the ellipsoid parameters.
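The stability criterion used here (roots of the characteristic equation of the linearized equations of motion) amounts to an eigenvalue check, which can be sketched generically; the matrices below are arbitrary stand-ins for a linearization about an AEP, not derived from the binary-asteroid model:

```python
import numpy as np

def is_linearly_stable(A, tol=1e-9):
    """Spectral stability of the equilibrium of x' = A x: no eigenvalue of
    the linearized system matrix may have a positive real part."""
    return bool(np.all(np.linalg.eigvals(A).real <= tol))

# A center (purely imaginary eigenvalues) passes; a saddle fails.
center = np.array([[0.0, 1.0], [-1.0, 0.0]])
saddle = np.array([[1.0, 0.0], [0.0, -1.0]])
print(is_linearly_stable(center), is_linearly_stable(saddle))   # True False
```

In the Hamiltonian setting of the paper, stability of an AEP corresponds to all characteristic roots being purely imaginary, which is the boundary case this check admits via the tolerance.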
The computationalist reformulation of the mind-body problem.
Marchal, Bruno
2013-09-01
Computationalism, or digital mechanism, or simply mechanism, is a hypothesis in the cognitive science according to which we can be emulated by a computer without changing our private subjective feeling. We provide a weaker form of that hypothesis, weaker than the one commonly referred to in the (vast) literature and show how to recast the mind-body problem in that setting. We show that such a mechanist hypothesis does not solve the mind-body problem per se, but does help to reduce partially the mind-body problem into another problem which admits a formulation in pure arithmetic. We will explain that once we adopt the computationalist hypothesis, which is a form of mechanist assumption, we have to derive from it how our belief in the physical laws can emerge from *only* arithmetic and classical computer science. In that sense we reduce the mind-body problem to the problem of the appearance of a body in computer science, or in arithmetic. The general shape of the possible solution of that subproblem, if it exists, is shown to be closer to "Platonist or neoplatonist theology" than to the "Aristotelian theology". In Plato's theology, the physical or observable reality is only the shadow of a vaster hidden nonphysical and nonobservable, perhaps mathematical, reality. The main point is that the derivation is constructive, and it provides the technical means to derive physics from arithmetic, and this will make the computationalist hypothesis empirically testable, and thus scientific in the Popperian analysis of science. In case computationalism is wrong, the derivation leads to a procedure for measuring "our local degree of noncomputationalism". Copyright © 2013 Elsevier Ltd. All rights reserved.
Berg, Derek H
2008-04-01
The cognitive underpinnings of arithmetic calculation in children are noted to involve working memory; however, cognitive processes related to arithmetic calculation and working memory suggest that this relationship is more complex than stated previously. The purpose of this investigation was to examine the relative contributions of processing speed, short-term memory, working memory, and reading to arithmetic calculation in children. Results suggested four important findings. First, processing speed emerged as a significant contributor of arithmetic calculation only in relation to age-related differences in the general sample. Second, processing speed and short-term memory did not eliminate the contribution of working memory to arithmetic calculation. Third, individual working memory components--verbal working memory and visual-spatial working memory--each contributed unique variance to arithmetic calculation in the presence of all other variables. Fourth, a full model indicated that chronological age remained a significant contributor to arithmetic calculation in the presence of significant contributions from all other variables. Results are discussed in terms of directions for future research on working memory in arithmetic calculation.
Cognitive Predictors of Achievement Growth in Mathematics: A 5-Year Longitudinal Study
ERIC Educational Resources Information Center
Geary, David C.
2011-01-01
The study's goal was to identify the beginning of 1st grade quantitative competencies that predict mathematics achievement start point and growth through 5th grade. Measures of number, counting, and arithmetic competencies were administered in early 1st grade and used to predict mathematics achievement through 5th (n = 177), while controlling for…
Zeeberg, Barry R; Riss, Joseph; Kane, David W; Bussey, Kimberly J; Uchio, Edward; Linehan, W Marston; Barrett, J Carl; Weinstein, John N
2004-01-01
Background When processing microarray data sets, we recently noticed that some gene names were being changed inadvertently to non-gene names. Results A little detective work traced the problem to default date format conversions and floating-point format conversions in the very useful Excel program package. The date conversions affect at least 30 gene names; the floating-point conversions affect at least 2,000 if Riken identifiers are included. These conversions are irreversible; the original gene names cannot be recovered. Conclusions Users of Excel for analyses involving gene names should be aware of this problem, which can cause genes, including medically important ones, to be lost from view and which has contaminated even carefully curated public databases. We provide work-arounds and scripts for circumventing the problem. PMID:15214961
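The floating-point half of the conversion problem is easy to reproduce: an identifier in the RIKEN clone style is syntactically valid scientific notation, so any numeric coercion silently destroys it. A minimal illustration (the identifier and loader below are hypothetical examples of the pattern, not taken from the paper):

```python
# A RIKEN-style clone identifier is also valid scientific notation,
# so numeric coercion (as Excel's default cell typing does) destroys it.
ident = "2310009E13"
as_number = float(ident)      # parsed as 2310009 x 10^13
print(as_number)              # 2.310009e+19 -- the identifier is gone

# Defensive work-around: import such columns as text, never as numbers.
def load_gene_ids(cells):
    """Keep every cell as an opaque string; no float or date coercion."""
    return [str(c) for c in cells]
```

The date half of the problem is analogous (e.g. a symbol like SEPT2 being read as a date); in either case the conversion is irreversible, which is why the paper recommends forcing text format on import.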
Renormalization group procedure for potential -g/r^2
NASA Astrophysics Data System (ADS)
Dawid, S. M.; Gonsior, R.; Kwapisz, J.; Serafin, K.; Tobolski, M.; Głazek, S. D.
2018-02-01
The Schrödinger equation with potential -g/r^2 exhibits a limit cycle, described in the literature in a broad range of contexts using various regularizations of the singularity at r = 0. Instead, we use the renormalization group transformation based on Gaussian elimination, from the Hamiltonian eigenvalue problem, of high momentum modes above a finite, floating cutoff scale. The procedure identifies a richer structure than the one we found in the literature. Namely, it directly yields an equation that determines the renormalized Hamiltonians as functions of the floating cutoff: solutions to this equation exhibit, in addition to the limit-cycle, also the asymptotic-freedom, triviality, and fixed-point behaviors, the latter in the vicinity of infinitely many separate pairs of fixed points in different partial waves for different values of g.
Program Correctness, Verification and Testing for Exascale (Corvette)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sen, Koushik; Iancu, Costin; Demmel, James W
The goal of this project is to provide tools to assess the correctness of parallel programs written using hybrid parallelism. There is a dire lack of both theoretical and engineering know-how in the area of finding bugs in hybrid or large scale parallel programs, which our research aims to change. In the project we have demonstrated novel approaches in several areas: 1. Low overhead automated and precise detection of concurrency bugs at scale. 2. Using low overhead bug detection tools to guide speculative program transformations for performance. 3. Techniques to reduce the concurrency required to reproduce a bug using partial program restart/replay. 4. Techniques to provide reproducible execution of floating point programs. 5. Techniques for tuning the floating point precision used in codes.
Kundeti, Vamsi; Rajasekaran, Sanguthevar
2012-06-01
Efficient tile sets for self-assembling rectilinear shapes are of critical importance in algorithmic self assembly. A lower bound on the tile complexity of any deterministic self assembly system for an n × n square is [Formula: see text] (inferred from the Kolmogorov complexity). Deterministic self assembly systems with an optimal tile complexity have been designed for squares and related shapes in the past. However, designing [Formula: see text] unique tiles specific to a shape is still an intensive task in the laboratory. On the other hand, copies of a tile can be made rapidly using PCR (polymerase chain reaction) experiments. This led to the study of self assembly on tile concentration programming models. We present two major results in this paper on the concentration programming model. First we show how to self-assemble rectangles with a fixed aspect ratio (α:β), with high probability, using Θ(α + β) tiles. This result is much stronger than the existing results by Kao et al. (Randomized self-assembly for approximate shapes, LNCS, vol 5125. Springer, Heidelberg, 2008) and Doty (Randomized self-assembly for exact shapes. In: proceedings of the 50th annual IEEE symposium on foundations of computer science (FOCS), IEEE, Atlanta, pp 85-94, 2009), which can only self-assemble squares and rely on tiles which perform binary arithmetic. Our result is instead based on a technique called staircase sampling. This technique eliminates the need for sub-tiles which perform binary arithmetic, reduces the constant in the asymptotic bound, and eliminates the need for approximate frames (Kao et al. Randomized self-assembly for approximate shapes, LNCS, vol 5125. Springer, Heidelberg, 2008). Our second result applies staircase sampling on the equimolar concentration programming model (The tile complexity of linear assemblies. In: proceedings of the 36th international colloquium automata, languages and programming: Part I on ICALP '09, Springer-Verlag, pp 235-253, 2009), to self-assemble rectangles (of fixed aspect ratio) with high probability. The tile complexity of our algorithm is Θ(log n) and is optimal on the probabilistic tile assembly model (PTAM), n being an upper bound on the dimensions of a rectangle.
Code of Federal Regulations, 2010 CFR
2010-07-01
..., below a height of 4 inches measured from the lowest point in the boat where liquid can collect when the boat is in its static floating position, except engine rooms. Connected means allowing a flow of water... the engine room or a connected compartment below a height of 12 inches measured from the lowest point...
40 CFR 63.653 - Monitoring, recordkeeping, and implementation plan for emissions averaging.
Code of Federal Regulations, 2010 CFR
2010-07-01
...) For each emission point included in an emissions average, the owner or operator shall perform testing, monitoring, recordkeeping, and reporting equivalent to that required for Group 1 emission points complying... internal floating roof, external roof, or a closed vent system with a control device, as appropriate to the...
33 CFR 110.60 - Captain of the Port, New York.
Code of Federal Regulations, 2011 CFR
2011-07-01
... yachts and other recreational craft. A mooring buoy is permitted. (4) Manhattan, Fort Washington Point... special anchorage area is principally for use by yachts and other recreational craft. A temporary float or... shoreline to the point of origin. Note to paragraph (d)(5): The area will be principally for use by yachts...
Geographic Resources Analysis Support System (GRASS) Version 4.0 User’s Reference Manual
1992-06-01
The input-image need not be square; before processing, the X and Y dimensions of the input-image are padded with zeroes to the next highest power of two. ... structures an input knowledge/control script with an appropriate combination of map layer category values (GRASS raster map layers that contain data on ...). Excerpt from the function table: cos(x), cosine of x (x is in degrees); exp(x), exponential function of x; exp(x,y), x to the power y; float(x), convert x to floating point; if...
Lonnemann, Jan; Li, Su; Zhao, Pei; Li, Peng; Linkersdörfer, Janosch; Lindberg, Sven; Hasselhorn, Marcus; Yan, Song
2017-01-01
Human beings are assumed to possess an approximate number system (ANS) dedicated to extracting and representing approximate numerical magnitude information. The ANS is assumed to be fundamental to arithmetic learning and has been shown to be associated with arithmetic performance. It is, however, still a matter of debate whether better arithmetic skills are reflected in the ANS. To address this issue, Chinese and German adults were compared regarding their performance in simple arithmetic tasks and in a non-symbolic numerical magnitude comparison task. Chinese participants showed a better performance in solving simple arithmetic tasks and faster reaction times in the non-symbolic numerical magnitude comparison task without making more errors than their German peers. These differences in performance could not be ascribed to differences in general cognitive abilities. Better arithmetic skills were thus found to be accompanied by a higher speed of retrieving non-symbolic numerical magnitude knowledge but not by a higher precision of non-symbolic numerical magnitude representations. The group difference in the speed of retrieving non-symbolic numerical magnitude knowledge was fully mediated by the performance in arithmetic tasks, suggesting that arithmetic skills shape non-symbolic numerical magnitude processing skills. PMID:28384191
Lonnemann, Jan; Linkersdörfer, Janosch; Hasselhorn, Marcus; Lindberg, Sven
2016-01-01
Symbolic numerical magnitude processing skills are assumed to be fundamental to arithmetic learning. It is, however, still an open question whether better arithmetic skills are reflected in symbolic numerical magnitude processing skills. To address this issue, Chinese and German third graders were compared regarding their performance in arithmetic tasks and in a symbolic numerical magnitude comparison task. Chinese children performed better in the arithmetic tasks and were faster in deciding which one of two Arabic numbers was numerically larger. The group difference in symbolic numerical magnitude processing was fully mediated by the performance in arithmetic tasks. We assume that a higher degree of familiarity with arithmetic in Chinese compared to German children leads to a higher speed of retrieving symbolic numerical magnitude knowledge. PMID:27630606
AmeriFlux US-WPT Winous Point North Marsh
Chen, Jiquan [University of Toledo / Michigan State University
2016-01-01
This is the AmeriFlux version of the carbon flux data for the site US-WPT Winous Point North Marsh. Site Description - The marsh site has been owned by the Winous Point Shooting Club since 1856 and has been managed by wildlife biologists since 1946. The hydrology of the marsh is relatively isolated by the surrounding dikes and drainages and only receives drainage from nearby croplands through three connecting ditches. Since 2001, the marsh has been managed to maintain year-round inundation with the lowest water levels in September. Within the 0–250 m fetch of the tower, the marsh comprises 42.9% of floating-leaved vegetation, 52.7% of emergent vegetation, and 4.4% of dike and upland during the growing season. Dominant emergent plants include narrow-leaved cattail (Typha angustifolia), rose mallow (Hibiscus moscheutos), and bur reed (Sparganium americanum). Common floating-leaved species are water lily (Nymphaea odorata) and American lotus (Nelumbo lutea) with foliage usually covering the water surface from late May to early October.
Flash-point prediction for binary partially miscible mixtures of flammable solvents.
Liaw, Horng-Jang; Lu, Wen-Hung; Gerbaud, Vincent; Chen, Chan-Cheng
2008-05-30
Flash point is the most important variable used to characterize fire and explosion hazard of liquids. Herein, partially miscible mixtures are presented within the context of liquid-liquid extraction processes. This paper describes development of a model for predicting the flash point of binary partially miscible mixtures of flammable solvents. To confirm the predictive efficacy of the derived flash points, the model was verified by comparing the predicted values with the experimental data for the studied mixtures: methanol+octane; methanol+decane; acetone+decane; methanol+2,2,4-trimethylpentane; and, ethanol+tetradecane. Our results reveal that immiscibility in the two liquid phases should not be ignored in the prediction of flash point. Overall, the predictive results of this proposed model describe the experimental data well. Based on this evidence, therefore, it appears reasonable to suggest potential application for our model in assessment of fire and explosion hazards, and development of inherently safer designs for chemical processes containing binary partially miscible mixtures of flammable solvents.
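Models of this family rest on Le Chatelier's rule: at the mixture flash point, the activity-weighted sum of each component's vapor-pressure ratio equals 1. The sketch below solves that condition for the fully miscible, ideal-solution baseline (activity coefficients set to 1), which the paper's model extends to liquid-liquid splitting; the Antoine constants and flash point shown are commonly tabulated illustrative values, not data from this study:

```python
def antoine_psat(T, A, B, C):
    """Saturation vapor pressure via the Antoine equation (T in deg C).
    Units cancel in the ratios below, so mmHg-based constants are fine."""
    return 10.0 ** (A - B / (C + T))

def le_chatelier_sum(T, comps):
    """Sum of x_i * gamma_i * Psat_i(T) / Psat_i(Tfp_i); at the mixture
    flash point this sum equals 1 (Le Chatelier's rule)."""
    return sum(x * g * antoine_psat(T, *ab) / antoine_psat(Tfp, *ab)
               for x, g, ab, Tfp in comps)

def flash_point(comps, lo=-50.0, hi=100.0, tol=1e-6):
    """Bisection in temperature: the sum rises monotonically with T and
    crosses 1 at the flash point."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if le_chatelier_sum(mid, comps) < 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Sanity check: a single pure component recovers its own flash point.
methanol = (8.08097, 1582.271, 239.726)   # Antoine constants (mmHg, deg C); verify before use
print(round(flash_point([(1.0, 1.0, methanol, 11.0)]), 2))   # 11.0
```

For a real binary, each tuple carries the mole fraction x, an activity coefficient gamma from a model such as NRTL or UNIQUAC, the Antoine constants, and the pure-component flash point; the paper's contribution is handling the partially miscible case where the liquid splits into two phases.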
NASA Astrophysics Data System (ADS)
Bag, Rabindranath; Karmakar, Koushik; Singh, Surjeet
2017-01-01
We present here the crystal growth of dilutely Co-doped spin-ladder compounds Sr14(Cu1-x, Cox)24O41 (x = 0, 0.01, 0.03, 0.05, 0.1) using the Travelling Solvent Floating Zone (TSFZ) technique in an image furnace. We carried out detailed microstructure and compositional analyses. The microstructure of the frozen-in floating zone revealed two bands: a lower band consisting of well-aligned single-crystalline stripes of the phase Sr14(Cu, Co)24O41 embedded in a eutectic mixture of composition SrO 18% and (Cu, Co)O 82%; and an upper band consisting of a criss-crossed pattern of these stripes. These analyses were also employed to determine the distribution coefficient of the dopants in Sr14Cu24O41. The distribution coefficient turned out to be close to 1, in contrast with previously reported Sr2CuO3, where Co tends to accumulate in the molten zone. Direct access to the composition of the frozen-in zone eliminates previous ambiguities concerning the composition of the peritectic point of Sr14Cu24O41, and also the eutectic point in the binary SrO-CuO phase diagram. The lattice parameters show an anisotropic variation upon Co doping, with parameters a and b increasing and c decreasing, and an overall decrease of the unit cell volume. Magnetic susceptibility measurements were carried out on the pristine and Co-doped crystals along the principal crystallographic axes. The spin susceptibility of the x = 0.01 crystal exhibits a strong anisotropy, in stark contrast with the isotropic behaviour of the pristine crystal. This anisotropy seems to arise from the intradimer exchange interaction, as inferred from the anisotropy of the dimer contribution to the susceptibility of the Co-doped crystal.
The Curie-tail in the magnetic susceptibility of Sr14(Cu 1-x, Cox)24O41 (x = 0, 0.01, 0.03, 0.05, 0.1) crystals (field applied parallel to the ladder) was found to scale with Co-doping - the scaling is employed to confirm a homogeneous distribution of Co in a x = 0.1 crystal boule.
Bartelet, Dimona; Vaessen, Anniek; Blomert, Leo; Ansari, Daniel
2014-01-01
Relations between children's mathematics achievement and their basic number processing skills have been reported in both cross-sectional and longitudinal studies. Yet, some key questions are currently unresolved, including which kindergarten skills uniquely predict children's arithmetic fluency during the first year of formal schooling and the degree to which predictors are contingent on children's level of arithmetic proficiency. The current study assessed kindergarteners' non-symbolic and symbolic number processing efficiency. In addition, the contribution of children's underlying magnitude representations to differences in arithmetic achievement was assessed. Subsequently, in January of Grade 1, their arithmetic proficiency was assessed. Hierarchical regression analysis revealed that children's efficiency to compare digits, count, and estimate numerosities uniquely predicted arithmetic differences above and beyond the non-numerical factors included. Moreover, quantile regression analysis indicated that symbolic number processing efficiency was consistently a significant predictor of arithmetic achievement scores regardless of children's level of arithmetic proficiency, whereas their non-symbolic number processing efficiency was not. Finally, none of the task-specific effects indexing children's representational precision was significantly associated with arithmetic fluency. The implications of the results are 2-fold. First, the findings indicate that children's efficiency to process symbols is important for the development of their arithmetic fluency in Grade 1 above and beyond the influence of non-numerical factors. Second, the impact of children's non-symbolic number processing skills does not depend on their arithmetic achievement level given that they are selected from a nonclinical population. Copyright © 2013 Elsevier Inc. All rights reserved.
Dynamical evolution of a fictitious population of binary Neptune Trojans
NASA Astrophysics Data System (ADS)
Brunini, Adrián
2018-03-01
We present numerical simulations of the evolution of a synthetic population of binary Neptune Trojans under the influence of solar perturbations and tidal friction (the so-called Kozai cycles and tidal friction evolution). Our model includes the dynamical influence of the four giant planets on the heliocentric orbit of the binary centre of mass. In this paper, we explore the evolution of initially tight binaries around the Neptune L4 Lagrange point. We found that the variation of the heliocentric orbital elements due to the libration around the Lagrange point introduces significant changes in the orbital evolution of the binaries. Collisional processes would not play a significant role in the dynamical evolution of Neptune Trojans. After 4.5 × 10^9 yr of evolution, ˜50 per cent of the synthetic systems end up separated as single objects, most of them with slow diurnal rotation rates. The final orbital distribution of the surviving binary systems is statistically similar to that found for Kuiper Belt binaries when collisional evolution is not included in the model. Systems composed of a primary and a small satellite are more fragile than those composed of components of similar sizes.
Kator, H; Rhodes, M
2001-06-01
Declining oyster (Crassostrea virginica) production in the Chesapeake Bay has stimulated aquaculture based on floats for off-bottom culture. While advantages of off-bottom culture are significant, the increased use of floating containers raises public health and microbiological concerns, because oysters in floats may be more susceptible to fecal contamination from storm runoff compared to those cultured on-bottom. We conducted four commercial-scale studies with market-size oysters naturally contaminated with fecal coliforms (FC) and a candidate viral indicator, F-specific RNA (FRNA) coliphage. To facilitate sampling and to test for location effects, 12 replicate subsamples, each consisting of 15 to 20 randomly selected oysters in plastic mesh bags, were placed at four characteristic locations within a 0.6- by 3.0-m "Taylor" float, and the remaining oysters were added to a depth not exceeding 15.2 cm. The float containing approximately 3,000 oysters was relaid in the York River, Virginia, for 14 days. During relay, increases in shellfish FC densities followed rain events such that final mean levels exceeded initial levels or did not meet an arbitrary product end point of 50 FC/100 ml. FRNA coliphage densities decreased to undetectable levels within 14 days (16 to 28 degrees C) in all but the last experiment, when temperatures fell between 12 and 16 degrees C. Friedman (nonparametric analysis of variance) tests performed on FC/Escherichia coli and FRNA densities indicated no differences in counts as a function of location within the float. The public health consequences of these observations are discussed, and future research and educational needs are identified.
Low-complex energy-aware image communication in visual sensor networks
NASA Astrophysics Data System (ADS)
Phamila, Yesudhas Asnath Victy; Amutha, Ramachandran
2013-10-01
A low-complex, low-bit-rate, energy-efficient image compression algorithm is presented, explicitly designed for resource-constrained visual sensor networks used in surveillance, battlefield, and habitat monitoring applications, where voluminous amounts of image data must be communicated over a bandwidth-limited wireless medium. The proposed method overcomes the energy limitation of individual nodes and is evaluated in terms of image quality, entropy, processing time, overall energy consumption, and system lifetime. The algorithm is highly energy efficient and extremely fast since it applies an energy-aware zonal binary discrete cosine transform (DCT) that computes only the few required significant coefficients and codes them using an enhanced complementary Golomb-Rice code, without any floating-point operations. Experiments were performed using the Atmel ATmega128 and MSP430 processors to measure the resultant energy savings. Simulation results show that the proposed energy-aware fast zonal transform consumes only 0.3% of the energy needed by the conventional DCT, and only 6% of the energy needed by the Independent JPEG Group (fast) version, making it well suited to embedded systems requiring low power consumption. The proposed scheme is unique in that it significantly enhances the lifetime of the camera sensor node and the network without the distributed processing traditionally required by existing algorithms.
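The abstract does not give the exact coder construction, but the Golomb-Rice stage it builds on can be sketched with integer operations only. This is a minimal illustrative sketch, not the paper's "enhanced complementary" variant; the zig-zag signed mapping, the Rice parameter k = 2, and all helper names are assumptions:

```python
# Minimal integer-only Golomb-Rice coding sketch (illustrative only).
# No floating-point operations are used, matching the design constraint.

def zigzag(v):
    # Map a signed coefficient to a non-negative integer: 0,-1,1,-2,... -> 0,1,2,3,...
    return (v << 1) if v >= 0 else ((-v) << 1) - 1

def unzigzag(u):
    # Inverse of zigzag.
    return (u >> 1) if u % 2 == 0 else -((u + 1) >> 1)

def rice_encode(v, k):
    # Unary-coded quotient terminated by '0', then a k-bit binary remainder.
    q, r = v >> k, v & ((1 << k) - 1)
    return "1" * q + "0" + (format(r, "0{}b".format(k)) if k else "")

def rice_decode(bits, k):
    # Returns (decoded value, number of bits consumed).
    q = 0
    while bits[q] == "1":
        q += 1
    r = int(bits[q + 1:q + 1 + k], 2) if k else 0
    return (q << k) | r, q + 1 + k
```

Small coefficients (the common case after a zonal DCT) produce short codewords; for example `rice_encode(zigzag(-1), 2)` yields the 4-bit string "001" plus terminator handling, while large rare values grow only linearly in the unary part.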
ERIC Educational Resources Information Center
Rhodes, Katherine T.; Branum-Martin, Lee; Washington, Julie A.; Fuchs, Lynn S.
2017-01-01
Using multitrait, multimethod data, and confirmatory factor analysis, the current study examined the effects of arithmetic item formatting and the possibility that across formats, abilities other than arithmetic may contribute to children's answers. Measurement hypotheses were guided by several leading theories of arithmetic cognition. With a…
Personal Experience and Arithmetic Meaning in Semantic Dementia
ERIC Educational Resources Information Center
Julien, Camille L.; Neary, David; Snowden, Julie S.
2010-01-01
Arithmetic skills are generally claimed to be preserved in semantic dementia (SD), suggesting functional independence of arithmetic knowledge from other aspects of semantic memory. However, in a recent case series analysis we showed that arithmetic performance in SD is not entirely normal. The finding of a direct association between severity of…
Characteristics of a Single Float Seaplane During Take-off
NASA Technical Reports Server (NTRS)
Crowley, J W , Jr; Ronan, K M
1925-01-01
At the request of the Bureau of Aeronautics, Navy Department, the National Advisory Committee for Aeronautics at Langley Field is investigating the get-away characteristics of an N-9H, a DT-2, and an F-5L, as representing, respectively, a single float, a double float, and a boat type of seaplane. This report covers the investigation conducted on the N-9H. The results show that a single float seaplane trims aft in taking off. Until a planing condition is reached, the angle of attack is about 15 degrees and is only slightly affected by the controls. When planing, the seaplane seeks a lower angle but is controllable through a widening range, until at the take-off it is possible to obtain angles of 8 to 15 degrees with corresponding speeds of 53 to 41 M.P.H., or about 40 per cent of the speed range. The point of greatest resistance occurs at about the highest planing angle of the pontoon, 9 1/2 degrees, and at a water speed of 24 M.P.H.
Analysis of Static Spacecraft Floating Potential at Low Earth Orbit (LEO)
NASA Technical Reports Server (NTRS)
Herr, Joel L.; Hwang, K. S.; Wu, S. T.
1995-01-01
Spacecraft floating potential is the charge on the external surfaces of an orbiting spacecraft relative to the surrounding space plasma. Charging is caused by unequal negative and positive currents to spacecraft surfaces. The charging process continues until the accelerated particles can be collected rapidly enough to balance the currents, at which point the spacecraft has reached its equilibrium, or floating, potential. In low-inclination Low Earth Orbit (LEO), the collection of positive ions and negative electrons in a particular direction is typically not equal. The level of charging required for equilibrium to be established is influenced by the characteristics of the ambient plasma environment, by the spacecraft motion, and by the geometry of the spacecraft. Using kinetic theory, a statistical approach for studying the interaction is developed. The approach used to study the spacecraft floating potential depends on which phenomena are being modelled and on the properties of the plasma, especially the density and temperature. The results from the kinetic theory derivation are applied to determine the charging level and the electric potential distribution at an infinite flat plate perpendicular to a streaming plasma using a finite-difference scheme.
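The current-balance idea can be illustrated numerically with a deliberately simplified model. The sketch below balances a retarded electron thermal flux against a constant ion ram current and solves for the potential at which they cancel; the model form, the LEO-like parameter values, and all function names are illustrative assumptions, not the paper's kinetic-theory derivation:

```python
import math

# Simplified floating-potential sketch: electron random thermal flux,
# exponentially retarded by a negative surface potential, balanced
# against a constant ion ram current density (illustrative model only).
e  = 1.602e-19   # elementary charge, C
kB = 1.381e-23   # Boltzmann constant, J/K
m_e = 9.109e-31  # electron mass, kg

def electron_thermal_current(n_e, T_e, V):
    # Random thermal flux j0, retarded by exp(eV/kTe) for V < 0.
    j0 = e * n_e * math.sqrt(kB * T_e / (2 * math.pi * m_e))
    return j0 * math.exp(e * V / (kB * T_e))

def ion_ram_current(n_i, v_orbital):
    # Ions swept up by the spacecraft's orbital (ram) motion.
    return e * n_i * v_orbital

def floating_potential(n=1e11, T_e=1500.0, v_orb=7500.0):
    # Bisection on j_e(V) - j_i = 0 over a negative potential bracket.
    f = lambda V: electron_thermal_current(n, T_e, V) - ion_ram_current(n, v_orb)
    lo, hi = -5.0, 0.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

With these illustrative parameters the surface floats a few tenths of a volt negative, reflecting the higher thermal mobility of electrons, which is the qualitative behaviour the abstract describes.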
NASA Astrophysics Data System (ADS)
Chen, Xin; Sánchez-Arriaga, Gonzalo
2018-02-01
To model the sheath structure around an emissive probe with cylindrical geometry, the Orbital-Motion theory takes advantage of three conserved quantities (distribution function, transverse energy, and angular momentum) to transform the stationary Vlasov-Poisson system into a single integro-differential equation. For a stationary collisionless unmagnetized plasma, this equation describes self-consistently the probe characteristics. By solving such an equation numerically, parametric analyses for the current-voltage (IV) and floating-potential (FP) characteristics can be performed, which show that: (a) for strong emission, the space-charge effects increase with probe radius; (b) the probe can float at a positive potential relative to the plasma; (c) a smaller probe radius is preferred for the FP method to determine the plasma potential; (d) the work function of the emitting material and the plasma-ion properties do not influence the reliability of the floating-potential method. Analytical analysis demonstrates that the inflection point of an IV curve for non-emitting probes occurs at the plasma potential. The flat potential is not a self-consistent solution for emissive probes.
Gulp: An Imaginatively Different Approach to Learning about Water.
ERIC Educational Resources Information Center
Baird, Colette
1997-01-01
Provides details of performances by the Floating Point Science Theater working with elementary school children about the characteristics of water. Discusses student reactions to various parts of the performances. (DDR)
Code of Federal Regulations, 2010 CFR
2010-07-01
..., community or corporate docks, or at any fixed or permanent mooring point, may only be used for overnight... floating or stationary mooring facilities on, adjacent to, or interfering with a buoy, channel marker or...
A Multidimensional Ideal Point Item Response Theory Model for Binary Data
ERIC Educational Resources Information Center
Maydeu-Olivares, Albert; Hernandez, Adolfo; McDonald, Roderick P.
2006-01-01
We introduce a multidimensional item response theory (IRT) model for binary data based on a proximity response mechanism. Under the model, a respondent at the mode of the item response function (IRF) endorses the item with probability one. The mode of the IRF is the ideal point, or in the multidimensional case, an ideal hyperplane. The model…
Zhou, Zhao-Hui; Zhuang, Li-Xing; Chen, Zhen-Hu; Lang, Jian-Ying; Li, Yan-Hui; Jiang, Gang-Hui; Xu, Zhan-Qiong; Liao, Mu-Xi
2014-07-01
To compare the clinical efficacy of floating-needle therapy and conventional acupuncture, on the basis of rehabilitation training, in the treatment of post-stroke shoulder-hand syndrome. One hundred cases of post-stroke shoulder-hand syndrome were randomized into a floating-needle group and an acupuncture group, 50 cases in each. Passive and active rehabilitation training was adopted in both groups. Additionally, in the floating-needle group, floating-needle therapy was used: the needle was inserted 5 to 10 cm away from the myofascial trigger point (MTrP), then manipulated and scattered subcutaneously for 2 min continuously. In the acupuncture group, conventional acupuncture was applied at Jianqian (EX-UE), Jianyu (LI 15), Jianliao (TE 14), etc. The treatment was given once every two days, 3 times a week, and 14 days of treatment were required. The shoulder-hand syndrome scale (SHSS), the short-form McGill pain questionnaire (SF-MPQ) and the modified Fugl-Meyer motor function assessment (FMA) were used to evaluate the damage severity, pain and motor function of the upper limbs before and after treatment in the two groups, and the clinical efficacy was compared between the groups. SHSS, SF-MPQ and FMA scores improved significantly after treatment in both groups (all P < 0.01), and the improvements in the floating-needle group were superior to those in the acupuncture group (all P < 0.05). The total effective rate was 94.0% (47/50) in the floating-needle group, better than the 90.0% (45/50) in the acupuncture group (P < 0.05). Floating-needle therapy combined with rehabilitation training achieves satisfactory efficacy in post-stroke shoulder-hand syndrome, better than conventional acupuncture combined with rehabilitation training.
Investigating the potential of floating mires as record of palaeoenvironmental changes
NASA Astrophysics Data System (ADS)
Zaccone, C.; Adamo, P.; Giordano, S.; Miano, T. M.
2012-04-01
Peat-forming floating mires could provide an exceptional resource for palaeoenvironmental and environmental monitoring studies, as much of their own history, as well as the history of their surroundings, is recorded in their peat deposits. In his Naturalis historia (AD 77-79), Pliny the Elder described floating islands on Lake Vadimonis (now Posta Fibreno Lake, Italy). A small floating island (ca. 35 m in diameter and 3 m of submerged thickness) still occurs on this calcareous lake, which is fed by karstic springs at the base of the Apennine Mountains. Here the southernmost Italian populations of Sphagnum palustre occur on the small surface of this floating mire known as "La Rota", i.e., a cup-formed core of Sphagnum peat and rhizomes of helophytes, floating erratically on the water body of a submerged doline annexed to the easternmost edge of the lake, which is characterised by the extension of a large reed bed. Geological evidence points to the existence in the area of a large lacustrine basin since the Late Pleistocene. The progressive filling of the lake, caused by changes in climatic conditions and neotectonic events, brought about the formation of peat deposits in the area, following different depositional cycles in a swampy environment. A round-shaped portion of fen, originated around the lake margins in waterlogged areas, was then somehow isolated from the bank and started to float.
By coupling data on the concentrations and fluxes of several major and trace elements of different origin (i.e., dust particles, volcanic emissions, cosmogenic dusts and marine aerosols) with climate records (plant micro- and macrofossils, pollens, isotopic ratios), biomolecular records (e.g., lipids), detailed age-depth modelling (i.e., 210Pb, 137Cs, 14C), and humification indexes, the present work aims to assess the reliability of this particular "archive", and thus to identify possible relationships between biogeochemical processes occurring in this floating bog and environmental changes.
Galoian, V R
1988-01-01
It is well known that the eye is a phylogenetically stabilized organ with rotational properties. The eye has an elastic cover and is filled with a uniform fluid. From the theory of shells and other concepts concerning the configuration of a rotating fluid mass, we conclude that the eyeball has an elliptic configuration. A classification of the eyeball is presented, together with studies of the principles governing the position of the eye. The parallelism between this state and different types of heterophoria and orthophoria was studied. To determine the normal configuration, certain principles for achieving the correct position of the eye in the orbit must be kept in mind. We determined the centre of eye rotation and showed that it cannot be situated outside the geometrical centre of the eyeball. It was pointed out that, for adequate perception, the rotation centre must lie on the visual axis. Using the well-known theory of floating bodies, we determined experimentally that the centre of eye rotation lies at the flotation level of the eye, exactly at the point where the visual line crosses the optical axis. Recordings of eye movements during eyelid closure showed that the damping of these movements follows a gravitational pattern and proceeds under the action of stabilizing forces, which directly indicates the floating state of the eye. For the first time, using the floating-eye model, it was possible to demonstrate the formation of an extraocular vacuum by straining of the back wall; this effect is easily obtained if the face is turned down. The role of negative pressure in the formation of ametropia, as well as new conclusions and predictions arising from this model, are discussed.
Functional outcomes of "floating elbow" injuries in adult patients.
Yokoyama, K; Itoman, M; Kobayashi, A; Shindo, M; Futami, T
1998-05-01
To assess elbow function, complications, and problems of floating elbow fractures in adults receiving surgical treatment. Retrospective clinical review. Level I trauma center in Kanagawa, Japan. Fourteen patients with fifteen floating elbow injuries, excluding one immediate amputation, seen at the Kitasato University Hospital from January 1, 1984, to April 30, 1995. All fractures were managed surgically by various methods. In ten cases, the humeral and forearm fractures were treated simultaneously with immediate fixation. In three cases, both the humeral and forearm fractures were treated with delayed fixation on Day 1, 4, or 7. In the remaining two cases, the open forearm fracture was managed with immediate fixation and the humerus fracture with delayed fixation on Day 10 or 25. All subjects underwent standardized elbow evaluations, and results were compared with an elbow score based on a 100-point scale. The parameters evaluated were pain, motion, elbow and grip strength, and function during daily activities. Complications such as infections, nonunions, malunions, and refractures were investigated. Mean follow-up was forty-three months (range 13 to 112 months). At final follow-up, the mean elbow function score was 79 points, with 67 percent (ten of fifteen) of the subjects having good or excellent results. The functional outcome did not correlate with the Injury Severity Score of the individual patients, the existence of open injuries or neurovascular injuries, or the timing of surgery. There were one deep infection, two nonunions of the humerus, two nonunions of the forearm, one varus deformity of the humerus, and one forearm refracture. Based on the present data, we could not clarify the factors influencing the final functional outcome after floating elbow injury. These injuries, however, potentially have many complications, such as infection or nonunion, especially when there is associated brachial plexus injury. 
We consider that floating elbow injuries are severe injuries and that surgical stabilization is needed; beyond that, there are no specific forms of surgical treatment to reliably guarantee excellent results.
30 CFR 250.907 - Where must I locate foundation boreholes?
Code of Federal Regulations, 2014 CFR
2014-07-01
... soil boring must not exceed 500 feet. (b) For deepwater floating platforms which utilize catenary or..., other points throughout the anchor pattern to establish the soil profile suitable for foundation design...
30 CFR 250.907 - Where must I locate foundation boreholes?
Code of Federal Regulations, 2013 CFR
2013-07-01
... soil boring must not exceed 500 feet. (b) For deepwater floating platforms which utilize catenary or..., other points throughout the anchor pattern to establish the soil profile suitable for foundation design...
... weight normally for the first month. After that point, the baby will lose weight and become irritable, and will have worsening jaundice. Other symptoms may include: Dark urine Enlarged spleen Floating stools Foul-smelling stools Pale or clay-colored ...
30 CFR 250.907 - Where must I locate foundation boreholes?
Code of Federal Regulations, 2012 CFR
2012-07-01
... soil boring must not exceed 500 feet. (b) For deepwater floating platforms which utilize catenary or..., other points throughout the anchor pattern to establish the soil profile suitable for foundation design...
Early but not late blindness leads to enhanced arithmetic and working memory abilities.
Dormal, Valérie; Crollen, Virginie; Baumans, Christine; Lepore, Franco; Collignon, Olivier
2016-10-01
Behavioural and neurophysiological evidence suggests that vision plays an important role in the emergence and development of arithmetic abilities. However, how visual deprivation impacts the development of arithmetic processing remains poorly understood. We compared the performances of early blind (EB), late blind (LB) and sighted control (SC) individuals during various arithmetic tasks involving addition, subtraction and multiplication of various complexities. We also assessed working memory (WM) performance to determine whether it relates to a blind person's arithmetic capacities. Results showed that EB participants performed better than LB and SC participants in arithmetic tasks, especially in conditions in which verbal routines and WM abilities are needed. Moreover, EB participants also showed higher WM abilities. Together, our findings demonstrate that the absence of developmental vision does not prevent the development of refined arithmetic skills and can even trigger the refinement of these abilities in specific tasks. Copyright © 2016 Elsevier Ltd. All rights reserved.
Long, Imogen; Malone, Stephanie A; Tolan, Anne; Burgoyne, Kelly; Heron-Delaney, Michelle; Witteveen, Kate; Hulme, Charles
2016-12-01
Following on from ideas developed by Gerstmann, a body of work has suggested that impairments in finger gnosis may be causally related to children's difficulties in learning arithmetic. We report a study with a large sample of typically developing children (N=197) in which we assessed finger gnosis and arithmetic along with a range of other relevant cognitive predictors of arithmetic skills (vocabulary, counting, and symbolic and nonsymbolic magnitude judgments). Contrary to some earlier claims, we found no meaningful association between finger gnosis and arithmetic skills. Counting and symbolic magnitude comparison were, however, powerful predictors of arithmetic skills, replicating a number of earlier findings. Our findings seriously question theories that posit either a simple association or a causal connection between finger gnosis and the development of arithmetic skills. Crown Copyright © 2016. Published by Elsevier Inc. All rights reserved.
Rapid Design of Gravity Assist Trajectories
NASA Technical Reports Server (NTRS)
Carrico, J.; Hooper, H. L.; Roszman, L.; Gramling, C.
1991-01-01
Several International Solar Terrestrial Physics (ISTP) missions require the design of complex gravity-assisted trajectories in order to investigate the interaction of the solar wind with the Earth's magnetic field. These trajectories present a formidable trajectory design and optimization problem. The philosophy and methodology that enable an analyst to design and analyze such trajectories are discussed. The so-called 'floating end point' targeting, which allows the inherently nonlinear multiple-body problem to be solved with simple linear techniques, is described. The combination of floating end point targeting and analytic approximations with a Newton-method targeter is demonstrated to achieve trajectory design goals quickly, even for the very sensitive double lunar swingby trajectories used by the ISTP missions. A multiconic orbit integration scheme allows fast and accurate orbit propagation. A prototype software tool, Swingby, built for trajectory design and launch window analysis, is described.
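The Newton-method targeting mentioned above can be illustrated with a one-variable sketch: a control parameter is iterated, using a finite-difference derivative, until a goal function is met. The toy `propagate` function below merely stands in for an actual orbit propagation, and all names, the step size, and the tolerance are illustrative assumptions:

```python
# One-variable Newton targeting sketch (illustrative): adjust a control
# until the propagated "end state" hits the goal.

def propagate(control):
    # Toy nonlinear map standing in for a trajectory propagation.
    return control**3 + 2.0 * control - 1.0

def newton_target(goal, x0=0.0, tol=1e-10, h=1e-6):
    x = x0
    for _ in range(50):
        miss = propagate(x) - goal          # miss distance at the end point
        if abs(miss) < tol:
            break
        # Central finite-difference estimate of d(propagate)/d(control).
        slope = (propagate(x + h) - propagate(x - h)) / (2 * h)
        x -= miss / slope                   # Newton update
    return x
```

In a real targeter the scalar `propagate` becomes a numerical integration of the equations of motion, the derivative becomes a sensitivity matrix, and the "floating end point" trick replaces a fixed goal with one re-evaluated each iteration; the linear update step, however, has exactly this shape.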
NASA Technical Reports Server (NTRS)
Kelly, G. L.; Berthold, G.; Abbott, L.
1982-01-01
A 5 MHz single-board microprocessor system incorporating an 8086 CPU and an 8087 Numeric Data Processor is used to implement the control laws for the NASA Drones for Aerodynamic and Structural Testing, Aeroelastic Research Wing II. The control laws program executed in 7.02 msec, with initialization consuming 2.65 msec and the control law loop 4.38 msec. The software emulator execution times for these two tasks were 36.67 and 61.18 msec, respectively, for a total of 97.68 msec. The space, weight, and cost reductions achieved in the present aircraft control application of this combination of a 16-bit microprocessor with an 80-bit floating point coprocessor may be obtainable in other real-time control applications.
Implementation of kernels on the Maestro processor
NASA Astrophysics Data System (ADS)
Suh, Jinwoo; Kang, D. I. D.; Crago, S. P.
Currently, most microprocessors use multiple cores to increase performance while limiting power usage. Some processors use not just a few cores, but tens of cores or even 100 cores. One such many-core microprocessor is the Maestro processor, which is based on Tilera's TILE64 processor. The Maestro chip is a 49-core, general-purpose, radiation-hardened processor designed for space applications. The Maestro processor, unlike the TILE64, has a floating point unit (FPU) in each core for improved floating point performance, and runs at a 342 MHz clock frequency. On the Maestro processor, we implemented several widely used kernels: matrix multiplication, vector add, FIR filter, and FFT. We measured and analyzed the performance of these kernels. The achieved performance was up to 5.7 GFLOPS, and the speedup compared to a single tile was up to 49 using 49 tiles.
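Throughput figures such as "5.7 GFLOPS" for a matrix-multiply kernel are conventionally derived by counting floating-point operations and dividing by wall time: a dense n x n multiply performs about 2n^3 flops (n^3 multiplies plus n^3 adds). A minimal single-threaded sketch of that measurement follows; it is illustrative only and says nothing about the Maestro's actual tiled implementation:

```python
import time

# Sketch of how a GFLOPS figure for a matmul kernel is measured:
# flops / wall-clock seconds / 1e9.

def matmul(a, b, n):
    # Naive i-k-j dense multiply of two n x n matrices (lists of lists).
    c = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for k in range(n):
            aik = a[i][k]
            for j in range(n):
                c[i][j] += aik * b[k][j]
    return c

def measure_gflops(n=64):
    a = [[1.0] * n for _ in range(n)]
    b = [[2.0] * n for _ in range(n)]
    t0 = time.perf_counter()
    matmul(a, b, n)
    dt = time.perf_counter() - t0
    return (2.0 * n**3) / dt / 1e9   # ~2n^3 flops per dense multiply
```

Pure Python will of course report orders of magnitude less than a hardware FPU; the point is only the accounting convention behind the reported number.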
Parallel processor for real-time structural control
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tise, B.L.
1992-01-01
A parallel processor that is optimized for real-time linear control has been developed. This modular system consists of A/D modules, D/A modules, and floating-point processor modules. The scalable processor uses up to 1,000 Motorola DSP96002 floating-point processors for a peak computational rate of 60 GFLOPS. Sampling rates up to 625 kHz are supported by this analog-in to analog-out controller. The high processing rate and parallel architecture make this processor suitable for computing state-space equations and other multiply/accumulate-intensive digital filters. Processor features include 14-bit conversion devices, low input-output latency, a 240 Mbyte/s synchronous backplane bus, a low-skew clock distribution circuit, a VME connection to the host computer, a parallelizing code generator, and look-up tables for actuator linearization. This processor was designed primarily for experiments in structural control. The A/D modules sample sensors mounted on the structure, and the floating-point processor modules compute the outputs using the programmed control equations. The outputs are sent through the D/A module to the power amps used to drive the structure's actuators. The host computer is a Sun workstation. An OpenWindows-based control panel is provided to facilitate data transfer to and from the processor, as well as to control the operating mode of the processor. A diagnostic mode is provided to allow stimulation of the structure and acquisition of the structural response via sensor inputs.
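The multiply/accumulate workload described above, one sample of a discrete state-space controller, x[k+1] = A x[k] + B u[k], y[k] = C x[k] + D u[k], can be sketched in a few lines. The matrices and the pure-Python dot products below are illustrative; the real system evaluates the same arithmetic in compiled DSP code at the sampling rate:

```python
# One sample of a discrete state-space controller, written as plain
# dot products to expose the multiply/accumulate structure.

def mat_vec(M, v):
    # Matrix-vector product: each output entry is one multiply/accumulate chain.
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def vec_add(a, b):
    return [x + y for x, y in zip(a, b)]

def controller_step(A, B, C, D, x, u):
    # y[k] = C x[k] + D u[k]  (sensor inputs -> actuator outputs)
    y = vec_add(mat_vec(C, x), mat_vec(D, u))
    # x[k+1] = A x[k] + B u[k]  (state update for the next sample)
    x_next = vec_add(mat_vec(A, x), mat_vec(B, u))
    return x_next, y
```

For an n-state, m-input controller each sample costs roughly (n + p)(n + m) multiply/accumulates, which is why a high sampling rate times many states motivates the parallel floating-point architecture.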
NASA Astrophysics Data System (ADS)
Neji, N.; Jridi, M.; Alfalou, A.; Masmoudi, N.
2016-02-01
The double random phase encryption (DRPE) method is a well-known all-optical architecture with many advantages, especially in terms of encryption efficiency. However, the method presents some vulnerabilities against attacks and requires a large quantity of information to encode the complex output plane. In this paper, we present an innovative hybrid technique to enhance the performance of the DRPE method in terms of compression and encryption. An optimized simultaneous compression and encryption method is applied to the real and imaginary components of the DRPE output plane. The technique consists of an innovative randomized arithmetic coder (RAC) that compresses the DRPE output planes well and at the same time enhances the encryption. The RAC is obtained by an appropriate selection of some conditions in the binary arithmetic coding (BAC) process and by using a pseudo-random number to encrypt the corresponding outputs. The proposed technique is capable of processing video content and is compliant with modern video coding standards such as H.264 and HEVC. Simulations demonstrate that the proposed crypto-compression system overcomes the drawbacks of the DRPE method: the cryptographic properties of DRPE are enhanced while a compression ratio of one-sixth can be achieved. FPGA implementation results show the high performance of the proposed method in terms of maximum operating frequency, hardware occupation, and dynamic power consumption.
NASA Astrophysics Data System (ADS)
Noll, Keith S.
2015-08-01
The Pluto-Charon binary was the first trans-neptunian binary to be identified in 1978. Pluto-Charon is a true binary with both components orbiting a barycenter located between them. The Pluto system is also the first, and to date only, known binary with a satellite system consisting of four small satellites in near-resonant orbits around the common center of mass. Seven other Plutinos, objects in 3:2 mean motion resonance with Neptune, have orbital companions including 2004 KB19 reported here for the first time. Compared to the Cold Classical population, the Plutinos differ in the frequency of binaries, the relative sizes of the components, and their inclination distribution. These differences point to distinct dynamical histories and binary formation processes encountered by Plutinos.
[Acquisition of arithmetic knowledge].
Fayol, Michel
2008-01-01
The focus of this paper is contemporary research on the counting and arithmetic competencies that emerge during infancy, the preschool years, and elementary school. I provide a brief overview of the development of children's conceptual knowledge of arithmetic, their acquisition and use of counting, and how they solve simple arithmetic problems (e.g., 4 + 3).
The Development of Arithmetic Principle Knowledge: How Do We Know What Learners Know?
ERIC Educational Resources Information Center
Prather, Richard W.; Alibali, Martha W.
2009-01-01
This paper reviews research on learners' knowledge of three arithmetic principles: "Commutativity", "Relation to Operands", and "Inversion." Studies of arithmetic principle knowledge vary along several dimensions, including the age of the participants, the context in which the arithmetic is presented, and most importantly, the type of knowledge…
QCA Gray Code Converter Circuits Using LTEx Methodology
NASA Astrophysics Data System (ADS)
Mukherjee, Chiradeep; Panda, Saradindu; Mukhopadhyay, Asish Kumar; Maji, Bansibadan
2018-07-01
The Quantum-dot Cellular Automata (QCA) paradigm is a prominent nanotechnology candidate for continuing computation in the deep sub-micron regime. QCA realizations of several multilevel arithmetic logic unit circuits have been introduced in recent years. However, although high fan-in Binary-to-Gray (B2G) and Gray-to-Binary (G2B) converters exist in processor-based architectures, little attention has been paid to QCA instantiations of the Gray code converters anticipated in 8-bit, 16-bit, 32-bit, or wider machines using Gray code addressing schemes. In this work the two-input Layered T module is presented to exploit the operation of an Exclusive-OR gate (namely, the LTEx module) as an elemental block. A defect-tolerance analysis of the two-input LTEx module establishes the scalability and reproducibility of the LTEx module in complex circuits. Novel formulations exploiting the operability of the LTEx module are proposed to instantiate area- and delay-efficient B2G and G2B converters that can be used in Gray code addressing schemes. Moreover, this work formulates QCA design metrics such as O-Cost, effective area, delay, and Cost α for the n-bit converter layouts.
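The converter logic itself is standard combinational XOR circuitry: each Gray bit is the XOR of two adjacent binary bits, and the inverse is a running XOR of the Gray bits. A quick software model (for illustration only; the paper's contribution is the QCA layout, not the Boolean function):

```python
def binary_to_gray(b):
    """B2G: Gray bit i is binary bit i XOR binary bit i+1."""
    return b ^ (b >> 1)

def gray_to_binary(g):
    """G2B: each binary bit is the XOR (prefix parity) of all higher Gray bits."""
    b = 0
    while g:
        b ^= g
        g >>= 1
    return b

assert binary_to_gray(0b1010) == 0b1111
assert all(gray_to_binary(binary_to_gray(n)) == n for n in range(256))
```

The G2B direction has the longer XOR chain, which is why high fan-in G2B converters are the harder case for area-delay-efficient layouts.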
How to interpret cognitive training studies: A reply to Lindskog & Winman
Park, Joonkoo; Brannon, Elizabeth M.
2017-01-01
In our previous studies, we demonstrated that repeated training on an approximate arithmetic task selectively improves symbolic arithmetic performance (Park & Brannon, 2013, 2014). We proposed that mental manipulation of quantity is the common cognitive component between approximate arithmetic and symbolic arithmetic, driving the causal relationship between the two. In a commentary to our work, Lindskog and Winman argue that there is no evidence of performance improvement during approximate arithmetic training and that this challenges the proposed causal relationship between approximate arithmetic and symbolic arithmetic. Here, we argue that causality in cognitive training experiments is interpreted from the selectivity of transfer effects and does not hinge upon improved performance in the training task. This is because changes in the unobservable cognitive elements underlying the transfer effect may not be observable from performance measures in the training task. We also question the validity of Lindskog and Winman’s simulation approach for testing for a training effect, given that simulations require a valid and sufficient model of a decision process, which is often difficult to achieve. Finally we provide an empirical approach to testing the training effects in adaptive training. Our analysis reveals new evidence that approximate arithmetic performance improved over the course of training in Park and Brannon (2014). We maintain that our data supports the conclusion that approximate arithmetic training leads to improvement in symbolic arithmetic driven by the common cognitive component of mental quantity manipulation. PMID:26972469
The neural correlates of mental arithmetic in adolescents: a longitudinal fNIRS study.
Artemenko, Christina; Soltanlou, Mojtaba; Ehlis, Ann-Christine; Nuerk, Hans-Christoph; Dresler, Thomas
2018-03-10
Arithmetic processing in adults is known to rely on a frontal-parietal network. However, neurocognitive research focusing on the neural and behavioral correlates of arithmetic development has been scarce, even though the acquisition of arithmetic skills is accompanied by changes within the fronto-parietal network of the developing brain. Furthermore, experimental procedures are typically adjusted to constraints of functional magnetic resonance imaging, which may not reflect natural settings in which children and adolescents actually perform arithmetic. Therefore, we investigated the longitudinal neurocognitive development of processes involved in performing the four basic arithmetic operations in 19 adolescents. By using functional near-infrared spectroscopy, we were able to use an ecologically valid task, i.e., a written production paradigm. A common pattern of activation in the bilateral fronto-parietal network for arithmetic processing was found for all basic arithmetic operations. Moreover, evidence was obtained for decreasing activation during subtraction over the course of 1 year in middle and inferior frontal gyri, and increased activation during addition and multiplication in angular and middle temporal gyri. In the self-paced block design, parietal activation in multiplication and left angular and temporal activation in addition were observed to be higher for simple than for complex blocks, reflecting an inverse effect of arithmetic complexity. In general, the findings suggest that the brain network for arithmetic processing is already established in 12-14 year-old adolescents, but still undergoes developmental changes.
VIEW OF FACILITY NO. S 20 NEAR THE POINT WHERE ...
VIEW OF FACILITY NO. S 20 NEAR THE POINT WHERE IT JOINS FACILITY NO. S 21. NOTE THE ASPHALT-FILLED NARROW-GAUGE TRACKWAY WITH SOME AREAS OF STEEL TRACK SHOWING. VIEW FACING NORTHEAST - U.S. Naval Base, Pearl Harbor, Floating Dry Dock Quay, Hurt Avenue at northwest side of Magazine Loch, Pearl City, Honolulu County, HI
Code of Federal Regulations, 2010 CFR
2010-07-01
... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS OIL AND GAS EXTRACTION POINT SOURCE CATEGORY Offshore... 40 CFR 125.30-32, any existing point source subject to this subpart must achieve the following... Minimum of 1 mg/l and maintained as close to this concentration as possible. Sanitary M91M Floating solids...
33 CFR 183.558 - Hoses and connections.
Code of Federal Regulations, 2010 CFR
2010-07-01
...: (A) The hose is severed at the point where maximum drainage of fuel would occur, (B) The boat is in its static floating position, and (C) The fuel system is filled to the capacity marked on the tank... minutes when: (A) The hose is severed at the point where maximum drainage of fuel would occur, (B) The...
Approximate Arithmetic Training Improves Informal Math Performance in Low Achieving Preschoolers
Szkudlarek, Emily; Brannon, Elizabeth M.
2018-01-01
Recent studies suggest that practice with approximate and non-symbolic arithmetic problems improves the math performance of adults, school aged children, and preschoolers. However, the relative effectiveness of approximate arithmetic training compared to available educational games, and the type of math skills that approximate arithmetic targets are unknown. The present study was designed to (1) compare the effectiveness of approximate arithmetic training to two commercially available numeral and letter identification tablet applications and (2) to examine the specific type of math skills that benefit from approximate arithmetic training. Preschool children (n = 158) were pseudo-randomly assigned to one of three conditions: approximate arithmetic, letter identification, or numeral identification. All children were trained for 10 short sessions and given pre and post tests of informal and formal math, executive function, short term memory, vocabulary, alphabet knowledge, and number word knowledge. We found a significant interaction between initial math performance and training condition, such that children with low pretest math performance benefited from approximate arithmetic training, and children with high pretest math performance benefited from symbol identification training. This effect was restricted to informal, and not formal, math problems. There were also effects of gender, socio-economic status, and age on post-test informal math score after intervention. A median split on pretest math ability indicated that children in the low half of math scores in the approximate arithmetic training condition performed significantly better than children in the letter identification training condition on post-test informal math problems when controlling for pretest, age, gender, and socio-economic status. 
Our results support the conclusion that approximate arithmetic training may be especially effective for children with low math skills, and that approximate arithmetic training improves early informal, but not formal, math skills. PMID:29867624
NCWin — A Component Object Model (COM) for processing and visualizing NetCDF data
Liu, Jinxun; Chen, J.M.; Price, D.T.; Liu, S.
2005-01-01
NetCDF (Network Common Data Form) is a data sharing protocol and library that is commonly used in large-scale atmospheric and environmental data archiving and modeling. The NetCDF tool described here, named NCWin and coded with Borland C++ Builder, was built as a standard executable as well as a COM (component object model) for the Microsoft Windows environment. COM is a powerful technology that enhances the reuse of applications (as components). Environmental model developers from different modeling environments, such as Python, JAVA, VISUAL FORTRAN, VISUAL BASIC, VISUAL C++, and DELPHI, can reuse NCWin in their models to read, write and visualize NetCDF data. Some Windows applications, such as ArcGIS and Microsoft PowerPoint, can also call NCWin within the application. NCWin has three major components: 1) The data conversion part is designed to convert binary raw data to and from NetCDF data. It can process six data types (unsigned char, signed char, short, int, float, double) and three spatial data formats (BIP, BIL, BSQ); 2) The visualization part is designed for displaying grid map series (playing forward or backward) with simple map legend, and displaying temporal trend curves for data on individual map pixels; and 3) The modeling interface is designed for environmental model development, by which a set of integrated NetCDF functions is provided for processing NetCDF data. To demonstrate that NCWin can easily extend the functions of some current GIS software and Office applications, examples of calling NCWin within ArcGIS and MS PowerPoint for showing NetCDF map animations are given.
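The kind of raw-binary interleaving conversion described in component 1) can be sketched with the standard library alone. This is an illustrative reimplementation, not NCWin's API; the function name and the little-endian assumption are mine:

```python
import struct

def bip_to_bsq(raw, nbands, npixels, fmt="f"):
    """Convert band-interleaved-by-pixel (BIP) raw data to band-sequential (BSQ).

    raw: little-endian bytes laid out pixel-major (each pixel's band values adjacent).
    fmt: struct code for one element type, e.g. 'b', 'B', 'h', 'i', 'f', 'd'
         (mirroring the six types the abstract lists).
    Returns bytes laid out band-major (each band's pixels adjacent).
    """
    count = nbands * npixels
    vals = struct.unpack(f"<{count}{fmt}", raw)
    # vals[p * nbands + b] is band b of pixel p; regroup band-major
    bsq = [vals[p * nbands + b] for b in range(nbands) for p in range(npixels)]
    return struct.pack(f"<{count}{fmt}", *bsq)

# 2 bands x 3 pixels: BIP order is (p0b0, p0b1, p1b0, p1b1, p2b0, p2b1)
raw = struct.pack("<6f", 1, 10, 2, 20, 3, 30)
out = struct.unpack("<6f", bip_to_bsq(raw, nbands=2, npixels=3))
assert out == (1.0, 2.0, 3.0, 10.0, 20.0, 30.0)
```

BIL (band-interleaved-by-line) conversion follows the same regrouping idea with an extra line dimension.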
ERIC Educational Resources Information Center
Hitt, Fernando; Saboya, Mireille; Cortés Zavala, Carlos
2016-01-01
This paper presents an experiment that attempts to mobilise an arithmetic-algebraic way of thinking in order to articulate the transition between arithmetic thinking and early algebraic thinking, the latter considered a prelude to algebraic thinking. In the process of building this way of thinking, researchers analysed pupils' spontaneous production…
Non-symbolic arithmetic in adults and young children.
Barth, Hilary; La Mont, Kristen; Lipton, Jennifer; Dehaene, Stanislas; Kanwisher, Nancy; Spelke, Elizabeth
2006-01-01
Five experiments investigated whether adults and preschool children can perform simple arithmetic calculations on non-symbolic numerosities. Previous research has demonstrated that human adults, human infants, and non-human animals can process numerical quantities through approximate representations of their magnitudes. Here we consider whether these non-symbolic numerical representations might serve as a building block of uniquely human, learned mathematics. Both adults and children with no training in arithmetic successfully performed approximate arithmetic on large sets of elements. Success at these tasks did not depend on non-numerical continuous quantities, modality-specific quantity information, the adoption of alternative non-arithmetic strategies, or learned symbolic arithmetic knowledge. Abstract numerical quantity representations therefore are computationally functional and may provide a foundation for formal mathematics.
Segregation simulation of binary granular matter under horizontal pendulum vibrations
NASA Astrophysics Data System (ADS)
Ma, Xuedong; Zhang, Yanbing; Ran, Heli; Zhang, Qingying
2016-08-01
Segregation of binary granular matter with different densities under horizontal pendulum vibrations was investigated through numerical simulation using a 3D discrete element method (DEM). The particle segregation mechanism was theoretically analyzed in terms of gap filling, momentum, and kinetic energy. The effect of vibrator geometry on granular segregation was determined using the Lacey mixing index. This study shows that dynamic changes in particle gaps under periodic horizontal pendulum vibrations create the preconditions for particle segregation. The momentum of heavy particles is higher than that of light particles, which causes heavy particles to sink and light particles to float. Under the same horizontal vibration parameters, the segregation efficiency and stability achieved with a vibrator of cylindrical convex geometry are superior to those of the original vibrator and of a vibrator with a cross-bar structure. Moreover, vibrator geometry influences the segregation speed of granular matter. Simulation results of granular segregation using the DEM are consistent with the final experimental results, thereby confirming the accuracy of the simulation results and the reliability of the analysis.
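The Lacey mixing index used above is a standard measure of mixedness. A small sketch under the usual textbook definition, with variance limits S0² = p(1−p) for a fully segregated bed and SR² = p(1−p)/n for a fully random one (the paper does not spell out its exact estimator, so this is the generic form):

```python
def lacey_index(concs, n_per_sample):
    """Lacey mixing index from per-cell concentrations of one component.

    M ~ 0 for a fully segregated bed, M ~ 1 for a fully (randomly) mixed one.
    concs: fraction of (say) heavy particles in each sampling cell.
    n_per_sample: number of particles counted per cell.
    """
    k = len(concs)
    p = sum(concs) / k
    s2 = sum((c - p) ** 2 for c in concs) / k   # observed variance between cells
    s0 = p * (1 - p)                            # fully segregated limit
    sr = p * (1 - p) / n_per_sample             # fully random limit
    return (s0 - s2) / (s0 - sr)

assert abs(lacey_index([1.0, 1.0, 0.0, 0.0], 100)) < 1e-12   # fully segregated
assert 0.99 < lacey_index([0.5, 0.5, 0.5, 0.5], 100) < 1.02  # well mixed
```

Tracking this index over simulation time is what lets the study compare segregation speed and stability across vibrator geometries.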
Rigorous high-precision enclosures of fixed points and their invariant manifolds
NASA Astrophysics Data System (ADS)
Wittig, Alexander N.
The well-established concept of Taylor Models is introduced; these offer highly accurate C0 enclosures of functional dependencies by combining high-order polynomial approximation of functions with rigorous estimates of the truncation error, performed using verified arithmetic. The focus of this work is on the application of Taylor Models in algorithms for strongly non-linear dynamical systems. A method is proposed to extend the existing implementation of Taylor Models in COSY INFINITY from double precision coefficients to arbitrary precision coefficients. Great care is taken to maintain the highest possible efficiency by adaptively adjusting the precision of higher order coefficients in the polynomial expansion. High precision operations are based on clever combinations of elementary floating-point operations yielding exact values for round-off errors. An experimental high precision interval data type is developed and implemented. Algorithms for the verified computation of intrinsic functions based on the high precision interval data type are developed and described in detail. The application of these operations in the implementation of high precision Taylor Models is discussed. An application of Taylor Model methods to the verification of fixed points is presented by verifying the existence of a period-15 fixed point in a near-standard Henon map. Verification is performed using different verified methods: double precision Taylor Models, high precision intervals, and high precision Taylor Models. Results and performance of each method are compared. An automated rigorous fixed point finder is implemented, allowing the fully automated search for all fixed points of a function within a given domain. It returns a list of verified enclosures of each fixed point, optionally verifying uniqueness within these enclosures. An application of the fixed point finder to the rigorous analysis of beam transfer maps in accelerator physics is presented.
Previous work by Johannes Grote is extended to compute very accurate polynomial approximations to invariant manifolds of discrete maps of arbitrary dimension around hyperbolic fixed points. The algorithm presented allows for automatic removal of resonances occurring during construction. A method for the rigorous enclosure of invariant manifolds of continuous systems is introduced. Using methods developed for discrete maps, polynomial approximations of invariant manifolds of hyperbolic fixed points of ODEs are obtained. These approximations are outfitted with a sharp error bound which is verified to rigorously contain the manifolds. While we focus on the three-dimensional case, verification in higher dimensions is possible using similar techniques. Integrating the resulting enclosures using the verified COSY VI integrator, the initial manifold enclosures are expanded to yield sharp enclosures of large parts of the stable and unstable manifolds. To demonstrate the effectiveness of this method, we construct enclosures of the invariant manifolds of the Lorenz system and show pictures of the resulting manifold enclosures. To the best of our knowledge, these are the largest verified enclosures of manifolds in the Lorenz system in existence.
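The "clever combinations of elementary floating point operations yielding exact values for round-off errors" mentioned in the abstract are what the literature calls error-free transformations. Knuth's TwoSum is the canonical example; a direct Python transcription, shown only to illustrate the idea (the actual COSY implementation is more elaborate):

```python
def two_sum(a, b):
    """Error-free transformation of a sum (Knuth's branch-free TwoSum).

    Returns (s, e) with s = fl(a + b) and s + e equal to a + b exactly
    (as real numbers over the given float values, barring overflow).
    Six elementary floating-point operations, no branches.
    """
    s = a + b
    bv = s - a          # the part of b that actually made it into s
    av = s - bv         # the part of a that actually made it into s
    e = (a - av) + (b - bv)
    return s, e

# The addend 1e-16 is below half an ulp of 1.0, so it vanishes from s
# but is recovered exactly in the error term e.
s, e = two_sum(1.0, 1e-16)
assert s == 1.0 and e == 1e-16
```

Chaining such transformations yields "floating-point expansions", unevaluated sums of doubles whose total carries far more precision than any single double, which is one common way to build multi-precision coefficients from hardware arithmetic.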
33 CFR 149.625 - What are the design standards?
Code of Federal Regulations, 2010 CFR
2010-07-01
... elsewhere in this subpart (for example, single point moorings, hoses, and aids to navigation buoys), must be... components. (c) Heliports on floating deepwater ports must be designed in compliance with the regulations at...
33 CFR 329.6 - Interstate or foreign commerce.
Code of Federal Regulations, 2010 CFR
2010-07-01
... United States. Note, however, that the mere presence of floating logs will not of itself make the river... the future, or at a past point in time. (b) Nature of commerce: interstate and intrastate. Interstate...
30 CFR 250.907 - Where must I locate foundation boreholes?
Code of Federal Regulations, 2011 CFR
2011-07-01
... foundation pile to a soil boring must not exceed 500 feet. (b) For deepwater floating platforms which utilize... necessary, other points throughout the anchor pattern to establish the soil profile suitable for foundation...
2009-06-01
… to floating point, to multi-level logic. Self-aware computation can be distinguished from existing computational models which are … systems have advanced to the point that the time is ripe to realize such a system. To illustrate, let us examine each of the key aspects of self- … servers for each service, there are no single points of failure in the system. If an OS or user core has a failure, one of several introspection cores …
Rear surface effects in high efficiency silicon solar cells
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wenham, S.R.; Robinson, S.J.; Dai, X.
1994-12-31
Rear surface effects in PERL solar cells can lead to degradation not only in the short-circuit current and open-circuit voltage, but also in the fill factor. Three mechanisms capable of changing the effective rear surface recombination velocity with injection level are identified: two associated with oxidized p-type surfaces, and the third with two-dimensional effects associated with a rear floating junction. Each of these will degrade the fill factor if the range of junction biases corresponding to the rear surface transition coincides with the maximum power point. Despite the identified non-idealities, PERL cells with rear floating junctions (PERF cells) have achieved record open-circuit voltages for silicon solar cells, while simultaneously achieving fill factor improvements relative to standard PERL solar cells. Without optimization, a record efficiency of 22% has been demonstrated for a cell with a rear floating junction. The results of both theoretical and experimental studies are provided.
Single crystal growth of 67%BiFeO3-33%BaTiO3 solution by the floating zone method
NASA Astrophysics Data System (ADS)
Rong, Y.; Zheng, H.; Krogstad, M. J.; Mitchell, J. F.; Phelan, D.
2018-01-01
The growth conditions and the resultant grain morphologies and phase purities from floating-zone growth of 67%BiFeO3-33%BaTiO3 (BF-33BT) single crystals are reported. We find two formidable challenges for the growth. First, a low-melting point constituent leads to a pre-melt zone in the feed-rod that adversely affects growth stability. Second, constitutional super-cooling (CSC), which was found to lead to dendritic and columnar features in the grain morphology, necessitates slow traveling rates during growth. Both challenges were addressed by modifications to the floating-zone furnace that steepened the temperature gradient at the melt-solid interfaces. Slow growth was also required to counter the effects of CSC. Single crystals with typical dimensions of hundreds of microns have been obtained which possess high quality and are suitable for detailed structural studies.
Bächli, Heidi; Steiner, Michel A; Habersetzer, Ursula; Wotjak, Carsten T
2008-02-11
To investigate genotype x environment interactions in the forced swim test, we tested the influence of water temperature (20 degrees C, 25 degrees C, 30 degrees C) on floating behaviour in single-housed male C57BL/6J and BALB/c mice. We observed a contrasting relationship between floating and water temperature in the two strains, with C57BL/6J floating more and BALB/c floating less with increasing water temperature, independent of the lighting conditions and the time point of testing during the animals' circadian rhythm. Both strains showed an inverse relationship between plasma corticosterone concentration and water temperature, indicating that the differences in stress coping are unrelated to different perception of the aversive encounter. Treatment with desipramine (20 mg/kg, i.p.) caused a reduction in immobility time in C57BL/6J mice if the animals were tested at 30 degrees C water temperature, with no effect at 25 degrees C and no effects on forced swim stress-induced corticosterone secretion. The same treatment failed to affect floating behaviour in BALB/c at any temperature, but caused a decrease in plasma corticosterone levels. Taken together, we demonstrate that an increase in water temperature in the forced swim test exerts opposite effects on floating behaviour in C57BL/6J and BALB/c mice and renders single-housed C57BL/6J mice, but not BALB/c mice, susceptible to antidepressant-like behavioural effects of desipramine.
Peljo, Pekka; Scanlon, Micheál D; Olaya, Astrid J; Rivier, Lucie; Smirnov, Evgeny; Girault, Hubert H
2017-08-03
Redox electrocatalysis (catalysis of electron-transfer reactions by floating conductive particles) is discussed from the point of view of Fermi level equilibration, and an overall theoretical framework is given. Examples of redox electrocatalysis in solution, in bipolar configuration, and at liquid-liquid interfaces are provided, highlighting that bipolar and liquid-liquid interfacial systems allow the study of the electrocatalytic properties of particles without effects from the support, but only liquid-liquid interfaces allow measurement of the electrocatalytic current directly. Additionally, photoinduced redox electrocatalysis will be of interest, for example, to achieve water splitting.
Jenkins, Martin
2016-01-01
Objective. In clinical trials of RA, it is common to assess effectiveness using end points based upon dichotomized continuous measures of disease activity, which classify patients as responders or non-responders. Although dichotomization generally loses statistical power, there are good clinical reasons to use these end points; for example, to allow for patients receiving rescue therapy to be assigned as non-responders. We adopt a statistical technique called the augmented binary method to make better use of the information provided by these continuous measures and account for how close patients were to being responders. Methods. We adapted the augmented binary method for use in RA clinical trials. We used a previously published randomized controlled trial (Oral SyK Inhibition in Rheumatoid Arthritis-1) to assess its performance in comparison to a standard method treating patients purely as responders or non-responders. The power and error rate were investigated by sampling from this study. Results. The augmented binary method reached similar conclusions to standard analysis methods but was able to estimate the difference in response rates to a higher degree of precision. Results suggested that CI widths for ACR responder end points could be reduced by at least 15%, which could equate to reducing the sample size of a study by 29% to achieve the same statistical power. For other end points, the gain was even higher. Type I error rates were not inflated. Conclusion. The augmented binary method shows considerable promise for RA trials, making more efficient use of patient data whilst still reporting outcomes in terms of recognized response end points. PMID:27338084
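The statistical cost of dichotomization that the augmented binary method works around can be made concrete. Under a normal outcome, the asymptotic efficiency of comparing responder proportions at a cutpoint c, relative to comparing the underlying means, is φ(c)²/(Φ(c)(1−Φ(c))), which peaks at 2/π ≈ 0.64 for a median split. This is a textbook result about dichotomization in general, not the augmented binary method itself:

```python
import math

def norm_pdf(x):
    """Standard normal density φ(x)."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def norm_cdf(x):
    """Standard normal distribution function Φ(x)."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def dichotomization_are(c):
    """Asymptotic efficiency of a responder analysis at cutpoint c,
    relative to analysing the underlying normal measure directly."""
    p = norm_cdf(c)
    return norm_pdf(c) ** 2 / (p * (1 - p))

# A median split (c = 0) is the best case, and it still discards
# roughly 36% of the information; off-center cutpoints discard more.
assert abs(dichotomization_are(0.0) - 2 / math.pi) < 1e-12
assert dichotomization_are(1.0) < dichotomization_are(0.0)
```

This efficiency loss is the headroom the augmented binary method exploits: by modelling the continuous measure and mapping back to the responder scale, it recovers precision without abandoning the clinically accepted end point.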
Fehr, Thorsten; Code, Chris; Herrmann, Manfred
2007-10-03
The issue of how and where arithmetic operations are represented in the brain has been addressed in numerous studies. Lesion studies suggest that a network of different brain areas is involved in mental calculation. Neuroimaging studies have reported inferior parietal and lateral frontal activations during mental arithmetic using tasks of different complexities and different operators (addition, subtraction, etc.). Indeed, it has been difficult to compare brain activation across studies because of the variety of operators and presentation modalities used. The present experiment examined fMRI-BOLD activity in participants during calculation tasks entailing different arithmetic operations -- addition, subtraction, multiplication and division -- of different complexities. Functional imaging data revealed a common activation pattern comprising right precuneus and left and right middle and superior frontal regions during all arithmetic operations. All other regional activations were operation-specific and distributed in prominently frontal, parietal and central regions when contrasting complex and simple calculation tasks. The present results largely confirm former studies suggesting that activation patterns due to mental arithmetic reflect a basic anatomical substrate of working memory, numerical knowledge, and processing based on finger counting, derived from a network originally related to finger movement. We emphasize that in mental arithmetic research different arithmetic operations should always be examined and discussed independently of each other in order to avoid invalid generalizations about arithmetic and the brain areas involved.
Cui, Jiaxin; Georgiou, George K; Zhang, Yiyun; Li, Yixun; Shu, Hua; Zhou, Xinlin
2017-02-01
Rapid automatized naming (RAN) has been found to predict mathematics. However, the nature of this relationship remains unclear. Thus, the purpose of this study was twofold: (a) to examine how RAN (numeric and non-numeric) predicts a subdomain of mathematics (arithmetic fluency) and (b) to examine what processing skills may account for the RAN-arithmetic fluency relationship. A total of 160 third-year kindergarten Chinese children (83 boys and 77 girls, mean age = 5.11 years) were assessed on RAN (colors, objects, digits, and dice), nonverbal IQ, visual-verbal paired associate learning, phonological awareness, short-term memory, speed of processing, approximate number system acuity, and arithmetic fluency (addition and subtraction). The results indicated first that RAN was a significant correlate of arithmetic fluency and the correlations did not vary as a function of type of RAN or arithmetic fluency tasks. In addition, RAN continued to predict addition and subtraction fluency even after controlling for all other processing skills. Taken together, these findings challenge the existing theoretical accounts of the RAN-arithmetic fluency relationship and suggest that, similar to reading fluency, multiple processes underlie the RAN-arithmetic fluency relationship. Copyright © 2016 Elsevier Inc. All rights reserved.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-29
... Murray Docks, Inc./Windward Point Yacht Club to use project waters to expand an existing boat dock facility through the addition of an 8-slip floating dock to accommodate a maximum of 12 additional boats. The proposed new structures would be for the private use of members of the Windward Point Yacht Club...
Code of Federal Regulations, 2010 CFR
2010-07-01
... PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS OIL AND GAS EXTRACTION POINT SOURCE CATEGORY... provided in 40 CFR 125.30-32, any existing point source subject to this subpart must achieve the following... maintained as close to this concentration as possible. 3 There shall be no floating solids as a result of the...
NASA Astrophysics Data System (ADS)
Morrison, R. E.; Robinson, S. H.
A continuous wave Doppler radar system has been designed which is portable, easily deployed, and remotely controlled. The heart of this system is a DSP/control board using the Analog Devices ADSP-21020 40-bit floating-point digital signal processor (DSP). Two 18-bit audio A/D converters provide digital input to the DSP/controller board for near real time target detection. Program memory for the DSP is dual-ported with an Intel 87C51 microcontroller, allowing DSP code to be uploaded or downloaded from a central controlling computer. The 87C51 provides overall system control for the remote radar and includes a time-of-day/day-of-year real time clock, system identification (ID) switches, and input/output (I/O) expansion by an Intel 82C55 I/O expander.
Optimized Latching Control of Floating Point Absorber Wave Energy Converter
NASA Astrophysics Data System (ADS)
Gadodia, Chaitanya; Shandilya, Shubham; Bansal, Hari Om
2018-03-01
There is an increasing demand for energy in today's world. Currently the main energy resources are fossil fuels, which will eventually run out, and the emissions they produce contribute to global warming. For a sustainable future, these fossil fuels should be replaced with renewable and green energy sources. Sea waves are a vast and largely untapped energy resource, and the potential for extracting energy from them is considerable. To trap this energy, wave energy converters (WECs) are needed, and there is a need to increase the energy output and decrease the cost of existing WECs. This paper presents a method that uses prediction as part of the control scheme to increase the energy efficiency of floating point-absorber WECs. A Kalman filter is used for estimation, coupled with latching control, in both regular and irregular sea waves. Modelling and simulation results are also included.
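The latching idea can be sketched with a toy model: a heaving point absorber approximated as a mass-spring-damper driven by a monochromatic wave force, latched (held at rest) each time its velocity crosses zero and released after a fixed delay. Every parameter below is hypothetical, and the paper's Kalman-filter wave prediction is not reproduced; this only shows the control logic's shape.

```python
# Minimal latching-control sketch for a heaving point absorber, modelled as a
# mass-spring-damper under a sinusoidal excitation force.  All parameters are
# illustrative assumptions, not values from the paper.
import math

def simulate(latch_time, m=1000.0, k=4000.0, c_pto=500.0,
             f_amp=2000.0, wave_period=8.0, t_end=200.0, dt=0.01):
    """Return the PTO energy absorbed (J); latch_time=0 gives the unlatched baseline."""
    omega = 2.0 * math.pi / wave_period
    x = v = prev_v = 0.0
    latched_until = -1.0
    energy = 0.0
    t = 0.0
    while t < t_end:
        f_ex = f_amp * math.sin(omega * t)
        if t < latched_until:
            v = 0.0                              # body held fixed while latched
        else:
            a = (f_ex - k * x - c_pto * v) / m   # Newton's law, explicit Euler
            v += a * dt
            # Latch at a velocity zero-crossing (body momentarily at rest).
            if latch_time > 0.0 and prev_v * v < 0.0:
                v = 0.0
                latched_until = t + latch_time
        x += v * dt
        energy += c_pto * v * v * dt             # power dissipated in the PTO damper
        prev_v = v
        t += dt
    return energy

e_base = simulate(latch_time=0.0)
e_latched = simulate(latch_time=2.5)
```

Latching pays off when the absorber's natural period (here about 3.1 s) is shorter than the wave period, which is why the latch duration is chosen to delay the body until the force is favorable; tuning that duration is exactly where the paper's prediction step enters.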
Microfluidic quadrupole and floating concentration gradient.
Qasaimeh, Mohammad A; Gervais, Thomas; Juncker, David
2011-09-06
The concept of fluidic multipoles, in analogy to electrostatics, has long been known as a particular class of solutions of the Navier-Stokes equation in potential flows; however, experimental observations of fluidic multipoles and of their characteristics have not been reported yet. Here we present a two-dimensional microfluidic quadrupole and a theoretical analysis consistent with the experimental observations. The microfluidic quadrupole was formed by simultaneously injecting and aspirating fluids from two pairs of opposing apertures in a narrow gap formed between a microfluidic probe and a substrate. A stagnation point was formed at the centre of the microfluidic quadrupole, and its position could be rapidly adjusted hydrodynamically. Following the injection of a solute through one of the poles, a stationary, tunable, and movable-that is, 'floating'-concentration gradient was formed at the stagnation point. Our results lay the foundation for future combined experimental and theoretical exploration of microfluidic planar multipoles including convective-diffusive phenomena.
Atmospheric Modeling And Sensor Simulation (AMASS) study
NASA Technical Reports Server (NTRS)
Parker, K. G.
1984-01-01
The capabilities of the atmospheric modeling and sensor simulation (AMASS) system were studied in order to enhance them. This system is used in processing atmospheric measurements which are utilized in the evaluation of sensor performance, conducting design-concept simulation studies, and also in the modeling of the physical and dynamical nature of atmospheric processes. The study tasks proposed in order to both enhance the AMASS system utilization and to integrate the AMASS system with other existing equipment to facilitate the analysis of data for modeling and image processing are enumerated. The following array processors were evaluated for anticipated effectiveness and/or improvements in throughput by attachment of the device to the P-e: (1) Floating Point Systems AP-120B; (2) Floating Point Systems 5000; (3) CSP, Inc. MAP-400; (4) Analogic AP500; (5) Numerix MARS-432; and (6) Star Technologies, Inc. ST-100.
Creation of an anti-imaging system using binary optics.
Wang, Haifeng; Lin, Jian; Zhang, Dawei; Wang, Yang; Gu, Min; Urbach, H P; Gan, Fuxi; Zhuang, Songlin
2016-09-13
We present a concealing method in which an anti-point spread function (APSF) is generated using binary optics, which produces a large-scale dark area in the focal region that can hide any object located within it. This result is achieved by generating two identical PSFs of opposite signs, one consisting of positive electromagnetic waves from the zero-phase region of the binary optical element and the other consisting of negative electromagnetic waves from the pi-phase region of the binary optical element.
Creation of an anti-imaging system using binary optics
Wang, Haifeng; Lin, Jian; Zhang, Dawei; Wang, Yang; Gu, Min; Urbach, H. P.; Gan, Fuxi; Zhuang, Songlin
2016-01-01
We present a concealing method in which an anti-point spread function (APSF) is generated using binary optics, which produces a large-scale dark area in the focal region that can hide any object located within it. This result is achieved by generating two identical PSFs of opposite signs, one consisting of positive electromagnetic waves from the zero-phase region of the binary optical element and the other consisting of negative electromagnetic waves from the pi-phase region of the binary optical element. PMID:27620068
Li, Yongxin; Hu, Yuzheng; Wang, Yunqi; Weng, Jian; Chen, Feiyan
2013-01-01
Arithmetic skill is of critical importance for academic achievement, professional success and everyday life, and childhood is the key period to acquire this skill. Neuroimaging studies have identified that left parietal regions are a key neural substrate for representing arithmetic skill. Although the relationship between functional brain activity in left parietal regions and arithmetic skill has been studied in detail, the relationship between arithmetic achievement and the structural properties of the left inferior parietal area in schoolchildren remains unclear. The current study employed a combination of voxel-based morphometry (VBM) for high-resolution T1-weighted images and fiber tracking on diffusion tensor imaging (DTI) to examine the relationship between structural properties in the inferior parietal area and arithmetic achievement in 10-year-old schoolchildren. VBM of the T1-weighted images revealed that individual differences in arithmetic scores were significantly and positively correlated with the gray matter (GM) volume in the left intraparietal sulcus (IPS). Fiber tracking analysis revealed that the forceps major, left superior longitudinal fasciculus (SLF), bilateral inferior longitudinal fasciculus (ILF) and inferior fronto-occipital fasciculus (IFOF) were the primary pathways connecting the left IPS with other brain areas. Furthermore, the regression analysis of the probabilistic pathways revealed a significant and positive correlation between the fractional anisotropy (FA) values in the left SLF, ILF and bilateral IFOF and arithmetic scores. The brain structure-behavior correlation analyses indicated that the GM volumes in the left IPS and the FA values in the tract pathways connecting the left IPS were both related to children's arithmetic achievement. The present findings provide evidence that individual structural differences in the left IPS are associated with arithmetic scores in schoolchildren. PMID:24367320
Hinault, T; Lemaire, P
2016-01-01
In this review, we provide an overview of how age-related changes in executive control influence aging effects in arithmetic processing. More specifically, we consider the role of executive control in strategic variations with age during arithmetic problem solving. Previous studies found that age-related differences in arithmetic performance are associated with strategic variations. That is, when they accomplish arithmetic problem-solving tasks, older adults use fewer strategies than young adults, use strategies in different proportions, and select and execute strategies less efficiently. Here, we review recent evidence, suggesting that age-related changes in inhibition, cognitive flexibility, and working memory processes underlie age-related changes in strategic variations during arithmetic problem solving. We discuss both behavioral and neural mechanisms underlying age-related changes in these executive control processes. © 2016 Elsevier B.V. All rights reserved.
Reconfigurable data path processor
NASA Technical Reports Server (NTRS)
Donohoe, Gregory (Inventor)
2005-01-01
A reconfigurable data path processor comprises a plurality of independent processing elements, each advantageously having an identical architecture. Each processing element comprises a plurality of data processing means for generating a potential output, and each is also capable of passing an input through as a potential output with little or no processing. Each processing element comprises a conditional multiplexer having a first conditional multiplexer input, a second conditional multiplexer input and a conditional multiplexer output. A first potential output value is transmitted to the first conditional multiplexer input, and a second potential output value is transmitted to the second conditional multiplexer input. The conditional multiplexer couples either the first conditional multiplexer input or the second conditional multiplexer input to the conditional multiplexer output, according to an output control command. The output control command is generated by processing a set of arithmetic status bits through a logical mask. The conditional multiplexer output is coupled to a first processing element output. A first set of arithmetic status bits is generated by the first processing operation; a second set may be generated by a second processing operation. An arithmetic-status-bit multiplexer selects the desired set of arithmetic status bits from among the first and second sets. The conditional multiplexer evaluates the selected arithmetic status bits according to a logical mask defining an algorithm for evaluating them.
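The status-bit/mask mechanism can be sketched as follows. The bit layout, mask semantics, and the saturation example are illustrative assumptions, not details taken from the patent.

```python
# Sketch of conditional-multiplexer selection: arithmetic status bits are ANDed
# with a logical mask, and the result steers the multiplexer.  The bit layout
# below is a hypothetical choice for illustration.
Z, N, C, V = 0b0001, 0b0010, 0b0100, 0b1000   # zero, negative, carry; V (overflow) unused here

def status_bits(result, width=8):
    """Arithmetic status bits for a `width`-bit two's-complement result."""
    bits = 0
    if result & ((1 << width) - 1) == 0:
        bits |= Z                    # zero
    if result & (1 << (width - 1)):
        bits |= N                    # negative (sign bit set)
    if result >> width:
        bits |= C                    # carry/borrow out of the top bit
    return bits

def conditional_mux(in_a, in_b, status, mask):
    """Route input A when any masked status bit is set, else input B."""
    return in_a if (status & mask) else in_b

# Example: select a saturated value (0) when a subtraction went negative.
s = status_bits((5 - 9) & 0x1FF)              # 9-bit view keeps the borrow visible
out = conditional_mux(0, (5 - 9) & 0xFF, s, N)
```

Here the mask `N` encodes the "algorithm": the multiplexer passes the clamped value whenever the negative flag is set, without any branch in the data path.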
Mansour, Fotouh R; Danielson, Neil D
2017-08-01
Dispersive liquid-liquid microextraction (DLLME) is a special type of microextraction in which a mixture of two solvents (an extracting solvent and a disperser) is injected into the sample. The extraction solvent is then dispersed as fine droplets in the cloudy sample through manual or mechanical agitation. The sample is then centrifuged to break the formed emulsion, and the extracting solvent is manually separated. The organic solvents commonly used in DLLME are halogenated hydrocarbons, which are highly toxic. These solvents are heavier than water, so they sink to the bottom of the centrifugation tube, which makes the separation step difficult. By using solvents of low density, the organic extractant floats on the sample surface. If the selected solvent, such as undecanol, has a freezing point in the range 10-25°C, the floating droplet can be solidified using a simple ice bath and then transferred out of the sample matrix; this step is known as solidification of a floating organic droplet (SFOD). Coupling DLLME to SFOD combines the advantages of both approaches. The DLLME-SFOD process is controlled by the same variables as conventional liquid-liquid extraction. The organic solvents used as extractants in DLLME-SFOD must be immiscible with water and have a density lower than water, low volatility, a high partition coefficient, and low melting and freezing points. The extraction efficiency of DLLME-SFOD is affected by the types and volumes of organic extractant and disperser, salt addition, pH, temperature, stirring rate and extraction time. This review discusses the principle, optimization variables, advantages and disadvantages, and some selected applications of DLLME-SFOD in water, food and biomedical analysis. Copyright © 2017 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Berg, Derek H.; Hutchinson, Nancy L.
2010-01-01
This study investigated whether processing speed, short-term memory, and working memory accounted for the differential mental addition fluency between children typically achieving in arithmetic (TA) and children at-risk for failure in arithmetic (AR). Further, we drew attention to fluency differences in simple (e.g., 5 + 3) and complex (e.g., 16 +…
An effective method on pornographic images realtime recognition
NASA Astrophysics Data System (ADS)
Wang, Baosong; Lv, Xueqiang; Wang, Tao; Wang, Chengrui
2013-03-01
In this paper, skin detection, texture filtering and face detection are used to extract features from an image library, and a decision tree algorithm is trained on them to create rules that serve as a classifier for unknown images. In experiments based on more than twenty thousand images, the precision reached 76.21% when testing on 13,025 pornographic images, with an elapsed time of less than 0.2 s, indicating good general applicability. Among the steps mentioned above, we propose a new skin detection model, called the irregular polygon region skin detection model, based on the YCbCr color space; it lowers the false detection rate of skin detection. A new method, sequence region labeling on binary connected areas, computes features of connected areas; it is faster and needs less memory than recursive methods.
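The color-space side of such a pipeline can be sketched as follows. The rectangular Cb/Cr bounds below are common literature values standing in for the paper's irregular polygon region, which is not specified in the abstract.

```python
# Skin detection sketch in the YCbCr color space (the paper's irregular-polygon
# region is approximated here by a rectangular Cb/Cr box -- an assumption).
def rgb_to_ycbcr(r, g, b):
    """ITU-R BT.601 full-range RGB -> YCbCr conversion."""
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def is_skin(r, g, b, cb_range=(77.0, 127.0), cr_range=(133.0, 173.0)):
    """Classify a pixel as skin if its (Cb, Cr) falls inside the region."""
    _, cb, cr = rgb_to_ycbcr(r, g, b)
    return cb_range[0] <= cb <= cb_range[1] and cr_range[0] <= cr <= cr_range[1]
```

Working in Cb/Cr rather than RGB makes the decision largely independent of luminance, which is why chroma-plane regions (rectangles or, as in the paper, polygons) are the usual skin models.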
Exploring hurdles to transfer : student experiences of applying knowledge across disciplines
NASA Astrophysics Data System (ADS)
Lappalainen, Jouni; Rosqvist, Juho
2015-04-01
This paper explores the ways students perceive the transfer of learned knowledge to new situations - often a surprisingly difficult prospect. The novel aspect compared to the traditional transfer studies is that the learning phase is not a part of the experiment itself. The intention was only to activate acquired knowledge relevant to the transfer target using a short primer immediately prior to the situation where the knowledge was to be applied. Eight volunteer students from either mathematics or computer science curricula were given a task of designing an adder circuit using logic gates: a new context in which to apply knowledge of binary arithmetic and Boolean algebra. The results of a phenomenographic classification of the views presented by the students in their post-experiment interviews are reported. The degree to which the students were conscious of the acquired knowledge they employed and how they applied it in a new context emerged as the differentiating factors.
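The adder-circuit task the students faced maps directly onto code: a full adder is two XORs, two ANDs and an OR, and chaining full adders gives a ripple-carry adder. A minimal sketch (the gate decomposition is standard; the 8-bit width is an arbitrary choice):

```python
# Gate-level adder: the binary-arithmetic / Boolean-algebra transfer target.
def full_adder(a, b, cin):
    """One-bit full adder built from gate operations (bits are 0/1)."""
    s1 = a ^ b                       # XOR of the inputs
    total = s1 ^ cin                 # sum bit
    carry = (a & b) | (s1 & cin)     # carry-out
    return total, carry

def ripple_add(x, y, width=8):
    """Add two unsigned integers bit by bit, as a ripple-carry adder would."""
    result, carry = 0, 0
    for i in range(width):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result, carry

# 0b1011 (11) + 0b0110 (6) = 0b10001: result 17, no carry out of 8 bits.
```

The transfer step the study probes is exactly the recognition that `^`, `&` and `|` on bits are the Boolean algebra the students already knew, rearranged into arithmetic.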
Method and apparatus for high speed data acquisition and processing
Ferron, J.R.
1997-02-11
A method and apparatus are disclosed for high speed digital data acquisition. The apparatus includes one or more multiplexers for receiving multiple channels of digital data at a low data rate and asserting a multiplexed data stream at a high data rate, and one or more FIFO memories for receiving data from the multiplexers and asserting the data to a real time processor. Preferably, the invention includes two multiplexers, two FIFO memories, and a 64-bit bus connecting the FIFO memories with the processor. Each multiplexer receives four channels of 14-bit digital data at a rate of up to 5 MHz per channel, and outputs a data stream to one of the FIFO memories at a rate of 20 MHz. The FIFO memories assert output data in parallel to the 64-bit bus, thus transferring 14-bit data values to the processor at a combined rate of 40 MHz. The real time processor is preferably a floating-point processor which processes 32-bit floating-point words. A set of mask bits is prestored in each 32-bit storage location of the processor memory into which a 14-bit data value is to be written. After data transfer from the FIFO memories, mask bits are concatenated with each stored 14-bit data value to define a valid 32-bit floating-point word. Preferably, a user can select any of several modes for starting and stopping direct memory transfers of data from the FIFO memories to memory within the real time processor, by setting the content of a control and status register. 15 figs.
Method and apparatus for high speed data acquisition and processing
Ferron, John R.
1997-01-01
A method and apparatus for high speed digital data acquisition. The apparatus includes one or more multiplexers for receiving multiple channels of digital data at a low data rate and asserting a multiplexed data stream at a high data rate, and one or more FIFO memories for receiving data from the multiplexers and asserting the data to a real time processor. Preferably, the invention includes two multiplexers, two FIFO memories, and a 64-bit bus connecting the FIFO memories with the processor. Each multiplexer receives four channels of 14-bit digital data at a rate of up to 5 MHz per channel, and outputs a data stream to one of the FIFO memories at a rate of 20 MHz. The FIFO memories assert output data in parallel to the 64-bit bus, thus transferring 14-bit data values to the processor at a combined rate of 40 MHz. The real time processor is preferably a floating-point processor which processes 32-bit floating-point words. A set of mask bits is prestored in each 32-bit storage location of the processor memory into which a 14-bit data value is to be written. After data transfer from the FIFO memories, mask bits are concatenated with each stored 14-bit data value to define a valid 32-bit floating-point word. Preferably, a user can select any of several modes for starting and stopping direct memory transfers of data from the FIFO memories to memory within the real time processor, by setting the content of a control and status register.
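The mask-bit scheme described above can be imitated in software. The patent does not publish its mask constants; a classic choice that makes the trick visible (an assumption here, not the patent's stated value) is the bit pattern of 2^23, so that OR-ing a small integer into the mantissa yields the valid IEEE-754 single 2^23 + n, and one subtraction recovers n as a float without an integer-to-float conversion.

```python
# Software imitation of "prestored mask bits + 14-bit data = valid 32-bit float".
import struct

MASK = 0x4B000000                 # IEEE-754 single-precision bit pattern of 2.0**23

def masked_to_float(raw14):
    """Concatenate prestored mask bits with a 14-bit data value."""
    assert 0 <= raw14 < (1 << 14)
    word = MASK | raw14           # what a memory location holds after the DMA write
    value = struct.unpack('>f', word.to_bytes(4, 'big'))[0]
    return value - 2.0 ** 23      # one float subtract recovers the integer value
```

Because 2^23 + n is exactly representable for any 14-bit n, the round trip is exact, which is what lets the real-time processor treat raw A/D samples as floating-point words with no conversion instruction.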
Fast and Scalable Computation of the Forward and Inverse Discrete Periodic Radon Transform.
Carranza, Cesar; Llamocca, Daniel; Pattichis, Marios
2016-01-01
The discrete periodic Radon transform (DPRT) has extensively been used in applications that involve image reconstruction from projections. Beyond classic applications, the DPRT can also be used to compute fast convolutions while avoiding the floating-point arithmetic associated with the fast Fourier transform. Unfortunately, the use of the DPRT has been limited by the need to compute a large number of additions and the need for a large number of memory accesses. This paper introduces a fast and scalable approach for computing the forward and inverse DPRT that is based on the use of: a parallel array of fixed-point adder trees; circular shift registers to remove the need for accessing external memory components when selecting the input data for the adder trees; an image block-based approach to DPRT computation that can fit the proposed architecture to available resources; and fast transpositions that are computed in one or a few clock cycles that do not depend on the size of the input image. As a result, for an N × N image (N prime), the proposed approach can compute up to N² additions per clock cycle. Compared with previous approaches, the scalable approach provides the fastest known implementations for different amounts of computational resources. For example, for a 251 × 251 image, using approximately 25% fewer flip-flops than a systolic implementation, the scalable DPRT is computed 36 times faster. For the fastest case, we introduce optimized architectures that can compute the DPRT and its inverse in just 2N + ⌈log₂ N⌉ + 1 and 2N + 3⌈log₂ N⌉ + B + 2 clock cycles, respectively, where B is the number of bits used to represent each input pixel. On the other hand, the scalable DPRT approach requires more 1-bit additions than the systolic implementation, providing a tradeoff between speed and additional 1-bit additions.
All of the proposed DPRT architectures were implemented in VHSIC Hardware Description Language (VHDL) and validated using a Field-Programmable Gate Array (FPGA) implementation.
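For reference, the forward and inverse DPRT that these architectures accelerate can be written down directly. This is a plain-Python sketch of the standard definition for prime N, using only integer additions (the property that lets DPRT-based convolution avoid floating-point arithmetic); it is not the paper's FPGA design.

```python
# Integer-only forward and inverse DPRT for an N x N image, N prime.
import random

def dprt(img):
    """Forward DPRT: N+1 projections of N sums each."""
    n = len(img)
    proj = [[0] * n for _ in range(n + 1)]
    for m in range(n):                   # rays of slope m: r_m(d) = sum_x f(x, (d+mx) mod N)
        for d in range(n):
            proj[m][d] = sum(img[x][(d + m * x) % n] for x in range(n))
    for d in range(n):                   # the extra "vertical" projection: r_N(d) = sum_y f(d, y)
        proj[n][d] = sum(img[d])
    return proj

def idprt(proj):
    """Exact inverse: f(i,j) = (sum_m r_m((j - m*i) mod N) + r_N(i) - S) / N."""
    n = len(proj) - 1
    total = sum(proj[n][d] for d in range(n))     # S: sum of all pixels
    img = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            acc = sum(proj[m][(j - m * i) % n] for m in range(n))
            img[i][j] = (acc + proj[n][i] - total) // n   # exact integer division
    return img

random.seed(0)
N = 7                                    # must be prime for exact inversion
image = [[random.randrange(256) for _ in range(N)] for _ in range(N)]
```

The O(N³) additions visible in the nested loops are exactly the cost the paper's parallel adder trees amortize down to up to N² additions per clock cycle.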
Aerial LED signage by use of crossed-mirror array
NASA Astrophysics Data System (ADS)
Yamamoto, Hirotsugu; Kujime, Ryousuke; Bando, Hiroki; Suyama, Shiro
2013-03-01
3D representation in digital signage increases its impact and speeds the notification of important points. Real 3D display techniques such as volumetric 3D displays are effective for public signs because they provide not only binocular disparity but also motion parallax and other cues, giving a 3D impression even to people with abnormal binocular vision. Our goal is to realize aerial 3D LED signs. We have specially designed and fabricated a reflective optical device to form an aerial image of LEDs with a wide field angle. The developed reflective optical device is composed of a crossed-mirror array (CMA), which contains dihedral corner reflectors at each aperture. After double reflection, light rays emitted from an LED converge onto the corresponding image point. The depth between LED lamps is reproduced as the same depth in the floating 3D image. A floating image of the LEDs was formed over a wide range of incident angles, with peak reflectance at 35 deg. The size of the focused beam (point spread function) agreed with the apparent aperture size.
Where Kinsey, Christ, and Tila Tequila meet: discourse and the sexual (non)-binary.
Callis, April S
2014-01-01
Drawing on 80 interviews and 17 months of participant observation in Lexington, Kentucky, this article details how individuals drew on three areas of national and local discourse to conceptualize sexuality. Media, popular science, and religious discourses can be viewed as portraying sexuality bifocally--as both a binary of heterosexual/homosexual and as a non-binary that encompasses fluidity. However, individuals in Lexington drew on each of these areas of discourse differently. Religion was thought to produce a binary vision of sexuality, whereas popular science accounts were understood as both binary and not. The media was understood as portraying non-binary identities that were not viable, thus strengthening the sexual binary. These differing points of view led identities such as bisexual and queer to lack cultural intelligibility.
NASA Astrophysics Data System (ADS)
Wang, Li-Qun; Saito, Masao
We used 1.5T functional magnetic resonance imaging (fMRI) to explore which brain areas contribute uniquely to numeric computation. The BOLD activation pattern of a mental arithmetic task (successive subtraction: an actual calculation task) was compared with the response to a multiplication-table repetition task (a rote verbal arithmetic memory task). The activation found in the right parietal lobule during the mental arithmetic task suggested that quantitative cognition or numeric computation may need the assistance of sensory conversion, such as spatial imagination and spatial sensory conversion. In addition, this mechanism may be an 'analog algorithm' in simple mental arithmetic processing.
Interpolator for numerically controlled machine tools
Bowers, Gary L.; Davenport, Clyde M.; Stephens, Albert E.
1976-01-01
A digital differential analyzer circuit is provided that depending on the embodiment chosen can carry out linear, parabolic, circular or cubic interpolation. In the embodiment for parabolic interpolations, the circuit provides pulse trains for the X and Y slide motors of a two-axis machine to effect tool motion along a parabolic path. The pulse trains are generated by the circuit in such a way that parabolic tool motion is obtained from information contained in only one block of binary input data. A part contour may be approximated by one or more parabolic arcs. Acceleration and initial velocity values from a data block are set in fixed bit size registers for each axis separately but simultaneously and the values are integrated to obtain the movement along the respective axis as a function of time. Integration is performed by continual addition at a specified rate of an integrand value stored in one register to the remainder temporarily stored in another identical size register. Overflows from the addition process are indicative of the integral. The overflow output pulses from the second integration may be applied to motors which position the respective machine slides according to a parabolic motion in time to produce a parabolic machine tool motion in space. An additional register for each axis is provided in the circuit to allow "floating" of the radix points of the integrand registers and the velocity increment to improve position accuracy and to reduce errors encountered when the acceleration integrand magnitudes are small when compared to the velocity integrands. A divider circuit is provided in the output of the circuit to smooth the output pulse spacing and prevent motor stall, because the overflow pulses produced in the binary addition process are spaced unevenly in time. The divider has the effect of passing only every nth motor drive pulse, with n being specifiable. The circuit inputs (integrands, rates, etc.) 
are scaled to give exactly n times the desired number of pulses out, in order to compensate for the divider.
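The integrate-by-repeated-addition scheme described above is easy to sketch in software: each clock, the integrand is added to a fixed-width remainder register, and the carry out of the register is the output pulse. Chaining two such integrators turns a constant acceleration into parabolic pulse timing. Register width and integrand values below are illustrative, not the patent's.

```python
# Digital differential analyzer (DDA) sketch: integration by repeated addition,
# with register overflows as the output (motor drive) pulses.
def dda_run(v0, accel, steps, width=16):
    """Chain two DDA integrators: accel -> velocity register -> position pulses."""
    mask = (1 << width) - 1
    velocity = v0          # velocity integrand register
    remainder = 0          # remainder register of the position integrator
    pulses = 0             # overflow pulses = quantized position increments
    for _ in range(steps):
        velocity = (velocity + accel) & mask   # first integration (a -> v)
        remainder += velocity                  # second integration (v -> x)
        pulses += remainder >> width           # carry out = output pulse
        remainder &= mask
    return pulses

# Invariant (while the velocity register does not overflow):
# pulses * 2**width + remainder == sum of the velocity sequence,
# so the pulse count tracks v0*t + a*t**2/2 -- parabolic motion in time.
```

The divider mentioned in the abstract would sit after `pulses`, passing every nth pulse to even out the irregular spacing of the raw carries.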
Nonergodicity in binary alloys
NASA Astrophysics Data System (ADS)
Son, Leonid; Sidorov, Valery; Popel, Pjotr; Shulgin, Dmitry
2015-09-01
For binary liquids with limited miscibility of the components, we provide the corrections to the equation of state which arise from nonergodic diffusivity. It is shown that these corrections result in a lowering of the critical miscibility point. In some cases, they may result in a bifurcation of the miscibility curve: mixtures near 50% concentration which are homogeneous at the microscopic level turn out to be too stable to provide a quasi-eutectic triple point. These features provide a new look at the phase diagrams of some binary systems. In the present work, we discuss Ga-Pb, Fe-Cu, and Cu-Zr alloys. Our investigation relates their complex behavior in the liquid state to the shapes of their phase diagrams.
Moll, Kristina; Snowling, Margaret J.; Göbel, Silke M.; Hulme, Charles
2015-01-01
Two important foundations for learning are language and executive skills. Data from a longitudinal study tracking the development of 93 children at family-risk of dyslexia and 76 controls was used to investigate the influence of these skills on the development of arithmetic. A two-group longitudinal path model assessed the relationships between language and executive skills at 3–4 years, verbal number skills (counting and number knowledge) and phonological processing skills at 4–5 years, and written arithmetic in primary school. The same cognitive processes accounted for variability in arithmetic skills in both groups. Early language and executive skills predicted variations in preschool verbal number skills, which in turn, predicted arithmetic skills in school. In contrast, phonological awareness was not a predictor of later arithmetic skills. These results suggest that verbal and executive processes provide the foundation for verbal number skills, which in turn influence the development of formal arithmetic skills. Problems in early language development may explain the comorbidity between reading and mathematics disorder. PMID:26412946
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 30 2011-07-01 2011-07-01 false [Reserved] 426.54 Section 426.54 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS GLASS MANUFACTURING POINT SOURCE CATEGORY Float Glass Manufacturing Subcategory § 426.54 [Reserved] ...
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 29 2010-07-01 2010-07-01 false [Reserved] 426.54 Section 426.54 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS GLASS MANUFACTURING POINT SOURCE CATEGORY Float Glass Manufacturing Subcategory § 426.54 [Reserved] ...
33 CFR 100.101 - Harvard-Yale Regatta, Thames River, New London, CT.
Code of Federal Regulations, 2010 CFR
2010-07-01
... race course, between Scotch Cap and Bartlett Point Light. (ii) Within the race course boundaries or in... not cause waves which result in damage to submarines or other vessels in the floating drydocks. (11...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-01
... facilities associated with the Willow Glynn at Willow Point residential subdivision. These facilities include 2 floating docks, with 16 double-slips each, a wooden pedestrian bridge, a wooden boardwalk along 1...
40 CFR 125.133 - What special definitions apply to this subpart?
Code of Federal Regulations, 2010 CFR
2010-07-01
... Subcategories of the Oil and Gas Extraction Point Source Category Effluent Guidelines in 40 CFR 435.10 or 40 CFR..., floating, mobile, facility engaged in the processing of fresh, frozen, canned, smoked, salted or pickled...
33 CFR 110.29 - Boston Inner Harbor, Mass.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Park Yacht Club, Winthrop. Southerly of a line bearing 276° from a point on the west side of Pleasant.... [NAD83]. (2) The area is principally for use by yachts and other recreational craft. Temporary floats or...
33 CFR 110.29 - Boston Inner Harbor, Mass.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Park Yacht Club, Winthrop. Southerly of a line bearing 276° from a point on the west side of Pleasant.... [NAD83]. (2) The area is principally for use by yachts and other recreational craft. Temporary floats or...
Teachers’ Beliefs and Practices Regarding the Role of Executive Functions in Reading and Arithmetic
Rapoport, Shirley; Rubinsten, Orly; Katzir, Tami
2016-01-01
The current study investigated early elementary school teachers' beliefs and practices regarding the role of Executive Functions (EFs) in reading and arithmetic. A new research questionnaire was developed and judged by professionals in academia and the field. Responses were obtained from 144 teachers from Israel. Factor analysis divided the questionnaire into three valid and reliable subscales, reflecting (1) beliefs regarding the contribution of EFs to reading and arithmetic, (2) pedagogical practices, and (3) a connection between the cognitive mechanisms of reading and arithmetic. Findings indicate that teachers believe EFs affect students' performance in reading and arithmetic. These beliefs were also correlated with pedagogical practices. Additionally, special education teachers scored higher on the different subscales compared to general education teachers. These findings shed light on the way teachers perceive the cognitive foundations of reading and arithmetic and indicate the extent to which these perceptions guide their teaching practices. PMID:27799917
NASA Astrophysics Data System (ADS)
Steffen, K.; Huff, R. D.; Cullen, N.; Rignot, E.; Stewart, C.; Jenkins, A.
2003-12-01
Petermann Gletscher is the largest and most influential outlet glacier in central northern Greenland. Located at 81 N, 60 W, it drains an area of 71,580 km2, discharging 12 cubic km of ice per year into the Arctic Ocean. We finished a second field season in spring 2003, collecting in situ data on local climate, ice velocity, strain rates, ice thickness profiles and bottom melt rates of the floating ice tongue. Last year's findings were confirmed: large channels, several hundred meters deep, on the underside of the floating ice tongue run roughly parallel to the flow direction. We mapped these channels using ground penetrating radar at 25 MHz frequency and multi-phase radar in profiling mode over half of the glacier's width. In addition, NASA airborne laser altimeter data were collected along and across the glacier for accurate assessment of surface topography. We will present a 3-D model of the floating ice tongue and offer hypotheses on the origin and mechanism of these large ice channels at the bottom of the floating ice tongue. Multi-phase radar point measurements revealed interesting bottom melt rates, which exceed all previous estimates. It is worth mentioning that the largest bottom melt rates were not found at the grounding line, as is common on ice shelves in Antarctica. In addition, GPS tidal motion has been measured over one lunar cycle at the flex zone and on the free-floating ice tongue, and the results will be compared to historic measurements made at the beginning of the last century. The surface climate has been recorded by two automatic weather stations over a 12-month period, and the local climate of this remote region will be presented.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-29
... animals, such as pelagic fishes and sea turtles, tend to congregate to naturally-occurring floating... American Samoa enclosed by straight lines connecting the following coordinates: Point S. latitude W. longitude AS-3-A 11[deg]12[min] 172[deg]18[min] AS-3-B 12[deg]12[min] 169[deg]56[min] and from Point AS-3-A...
75 FR 33692 - Safety Zone; Tacoma Freedom Fair Air Show, Commencement Bay, Tacoma, WA
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-15
... this rule encompasses all waters within the points 47[deg]-17.63' N., 122[deg]-28.724' W.; 47[deg]-17... Ruston Way and extending approximately 1100 yards into Commencement Bay. Floating markers will be placed... designated safety zone: All waters within the points 47[deg]-17.63' N., 122[deg]-28.724' W.; 47[deg]-17.059...
Improving Soldier Training: An Aptitude-Treatment Interaction Approach.
1979-06-01
magazines. Eighteen percent of American adults lack basic literacy skills to the point where they cannot even fill out basic forms. Dr. Food emphasized...designed to upgrade the literacy and computational skills of Army personnel found deficient. The magnitude of the problem is such, however, that the services...knowledge (WK); arithmetic reasoning (AR); etc.) predict the amount learned or the rate of learning or both. Special abilities such as psychomotor skills
LAVA: Large scale Automated Vulnerability Addition
2016-05-23
memory copy, e.g., are reasonable attack points. If the goal is to inject divide-by-zero, then arithmetic operations involving division will be...ways. First, it introduces deterministic record and replay, which can be used for iterated and expensive analyses that cannot be performed online...memory. Since our approach records the correspondence between source lines and program basic block execution, it would be just as easy to figure out
Abushammala, Mohammed F M; Basri, Noor Ezlin Ahmad; Elfithri, Rahmah
2013-12-01
Methane (CH₄) emissions and oxidation were measured at the Air Hitam sanitary landfill in Malaysia and were modeled using the Intergovernmental Panel on Climate Change waste model to estimate the CH₄ generation rate constant, k. The emissions were measured at several locations using a fabricated static flux chamber. A combination of gas concentrations in soil profiles and surface CH₄ and carbon dioxide (CO₂) emissions at four monitoring locations were used to estimate the CH₄ oxidation capacity. The temporal variations in CH₄ and CO₂ emissions were also investigated in this study. Geospatial means using point kriging and inverse distance weight (IDW), as well as arithmetic and geometric means, were used to estimate total CH₄ emissions. The point kriging, IDW, and arithmetic means were almost identical and were two times higher than the geometric mean. The CH₄ emission geospatial means estimated using the kriging and IDW methods were 30.81 and 30.49 g m⁻² day⁻¹, respectively. The total CH₄ emissions from the studied area were 53.8 kg day⁻¹. The mean of the CH₄ oxidation capacity was 27.5%. The estimated value of k is 0.138 year⁻¹. Special consideration must be given to the CH₄ oxidation in the wet tropical climate for enhancing CH₄ emission reduction.
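The gap between the arithmetic and geometric means on skewed flux data, and the inverse-distance-weighted estimate used for the geospatial mean, can be sketched in a few lines of Python. The flux values and chamber coordinates below are invented for illustration, not the paper's measurements.

```python
import math

# Hypothetical CH4 flux measurements (g m^-2 day^-1) at four chamber
# locations; values are illustrative, not the paper's data.
fluxes = [5.0, 12.0, 48.0, 60.0]
coords = [(0.0, 0.0), (0.0, 10.0), (10.0, 0.0), (10.0, 10.0)]

arithmetic_mean = sum(fluxes) / len(fluxes)
geometric_mean = math.exp(sum(math.log(f) for f in fluxes) / len(fluxes))

def idw(target, coords, values, power=2):
    """Inverse-distance-weighted estimate at `target`; weights ~ 1/d^power."""
    weights = []
    for i, (x, y) in enumerate(coords):
        d = math.hypot(x - target[0], y - target[1])
        if d == 0:               # exact hit on a sample point
            return values[i]
        weights.append(1.0 / d ** power)
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)

estimate = idw((3.0, 4.0), coords, fluxes)

# Right-skewed flux data pull the arithmetic mean above the geometric
# mean, consistent with the paper's kriging/IDW/arithmetic estimates
# exceeding the geometric mean roughly twofold.
print(arithmetic_mean, geometric_mean, estimate)
```

The IDW estimate always lies between the smallest and largest sample values, since it is a convex combination of them.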
NASA Astrophysics Data System (ADS)
Simon, Sílvia; Duran, Miquel
1997-08-01
Quantum molecular similarity (QMS) techniques are used to assess the response of the electron density of various small molecules to application of a static, uniform electric field. Likewise, QMS is used to analyze the changes in electron density generated by the process of floating a basis set. The results obtained show an interrelation between the floating process, the optimum geometry, and the presence of an external field. Cases involving the Le Chatelier principle are discussed, and an insight on the changes of bond critical point properties, self-similarity values and density differences is performed.
Acceleration of linear stationary iterative processes in multiprocessor computers. II
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romm, Ya.E.
1982-05-01
For pt. I, see Kibernetika, vol. 18, no. 1, p. 47 (1982), and Cybernetics, vol. 18, no. 1, p. 54 (1982). Considers a reduced system of linear algebraic equations x = ax + b, where a = (a_ij) is a real n×n matrix and b is a real vector. Existence and uniqueness of the solution are assumed, i.e., det(e − a) ≠ 0, where e is the unit matrix. The linear iterative process converging to x is x^(k+1) = f x^(k), k = 0, 1, 2, ..., where the operator f maps R^n into R^n. In considering implementation of the iterative process (IP) in a multiprocessor system, it is assumed that the number of processors is constant, and various values of this number are investigated; it is assumed in addition that the processors perform elementary binary arithmetic operations of addition and multiplication, and that the time estimates only include the execution time of arithmetic operations. With any parallelization of an individual iteration, the execution time of the IP is proportional to the number of sequential steps k+1. The author sets the task of reducing the number of sequential steps in the IP so as to execute it in a time proportional to a value smaller than k+1. He also sets the goal of formulating a method of accelerated bit serial-parallel execution of each successive step of the IP, with, in the modification sought, a reduced number of steps executed in a time comparable to the switching time of logical elements. 6 references.
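The stationary iteration described above, x^(k+1) = f x^(k) with f(x) = ax + b, can be sketched directly. The 2×2 matrix, tolerance, and starting point below are illustrative assumptions, not taken from the paper; the iteration converges because the spectral radius of a is below 1.

```python
# Minimal sketch of the linear stationary iteration x^(k+1) = a x^(k) + b.
def iterate(a, b, x0, tol=1e-10, max_steps=10000):
    """Fixed-point iteration for x = a x + b (a: n x n nested list).

    Returns the approximate solution and the number of steps taken.
    """
    n = len(b)
    x = list(x0)
    for step in range(max_steps):
        x_new = [sum(a[i][j] * x[j] for j in range(n)) + b[i]
                 for i in range(n)]
        # Stop once successive iterates agree to within tol.
        if max(abs(x_new[i] - x[i]) for i in range(n)) < tol:
            return x_new, step + 1
        x = x_new
    return x, max_steps

# Contractive illustrative example: spectral radius of a is 0.3 < 1,
# so the iteration converges to the x solving x = a x + b.
a = [[0.1, 0.2],
     [0.0, 0.3]]
b = [1.0, 2.0]
x, steps = iterate(a, b, [0.0, 0.0])
```

Each iteration is one matrix-vector product plus an addition, so with full parallelization of a single iteration the runtime is still proportional to the number of sequential steps — the quantity the paper seeks to reduce.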
Edit distance for marked point processes revisited: An implementation by binary integer programming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hirata, Yoshito; Aihara, Kazuyuki
2015-12-15
We implement the edit distance for marked point processes [Suzuki et al., Int. J. Bifurcation Chaos 20, 3699–3708 (2010)] as a binary integer program. Compared with the previous implementation using minimum cost perfect matching, the proposed implementation has two advantages: first, by using the proposed implementation, we can apply a wide variety of software and hardware, even spin glasses and coherent Ising machines, to calculate the edit distance for marked point processes; second, the proposed implementation runs faster than the previous implementation when the difference between the numbers of events in two time windows for a marked point process is large.
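For orientation, the unmarked core of such an edit distance — delete or insert an event at a fixed cost, or shift a matched event at a cost proportional to the time difference — can be computed with a standard dynamic program in the style of the Victor-Purpura spike-train metric. This is an illustrative sketch only; it does not reproduce the paper's binary-integer-program formulation or its handling of marks.

```python
def edit_distance(ts, ss, shift_cost=1.0, edit_cost=1.0):
    """Edit distance between two sorted lists of event times.

    Deleting or inserting an event costs edit_cost; shifting a matched
    event costs shift_cost per unit of time moved. Classic O(n*m)
    dynamic program over prefix pairs.
    """
    n, m = len(ts), len(ss)
    d = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = i * edit_cost          # delete all events of ts prefix
    for j in range(1, m + 1):
        d[0][j] = j * edit_cost          # insert all events of ss prefix
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d[i][j] = min(
                d[i - 1][j] + edit_cost,                       # delete ts[i-1]
                d[i][j - 1] + edit_cost,                       # insert ss[j-1]
                d[i - 1][j - 1]
                + shift_cost * abs(ts[i - 1] - ss[j - 1]),     # shift/match
            )
    return d[n][m]
```

When two events are far apart, deleting one and inserting the other (cost 2 × edit_cost) undercuts shifting, which is why the optimal value is a minimum over all partial matchings — the structure the binary integer program encodes with 0/1 matching variables.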
Zaccone, Claudio; Lobianco, Daniela; Shotyk, William; Ciavatta, Claudio; Appleby, Peter G.; Brugiapaglia, Elisabetta; Casella, Laura; Miano, Teodoro M.; D’Orazio, Valeria
2017-01-01
Floating islands mysteriously moving around on lakes were described by several Latin authors almost two millennia ago. These fascinating ecosystems, known as free-floating mires, have been extensively investigated from ecological, hydrological and management points of view, but there have been no detailed studies of their rates of accumulation of organic matter (OM), organic carbon (OC) and total nitrogen (TN). We have collected a peat core 4 m long from the free-floating island of Posta Fibreno, a relic mire in Central Italy. This is the thickest accumulation of peat ever found in a free-floating mire, yet it has formed during the past seven centuries and represents the greatest accumulation rates, at both decadal and centennial timescales, of OM (0.63 vs. 0.37 kg/m2/yr), OC (0.28 vs. 0.18 kg/m2/yr) and TN (3.7 vs. 6.1 g/m2/yr) ever reported for coeval peatlands. The anomalously high accretion rates, obtained using 14C age dating, were confirmed using 210Pb and 137Cs: these show that the top 2 m of Sphagnum peat has accumulated in only ~100 years. As an environmental archive, Posta Fibreno offers a temporal resolution ten times greater than any terrestrial peat bog, and promises to provide new insight into environmental changes occurring during the Anthropocene. PMID:28230066
40 CFR 426.51 - Specialized definitions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 29 2010-07-01 2010-07-01 false Specialized definitions. 426.51 Section 426.51 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS GLASS MANUFACTURING POINT SOURCE CATEGORY Float Glass Manufacturing Subcategory § 426.51...
40 CFR 426.51 - Specialized definitions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 30 2011-07-01 2011-07-01 false Specialized definitions. 426.51 Section 426.51 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS GLASS MANUFACTURING POINT SOURCE CATEGORY Float Glass Manufacturing Subcategory § 426.51...
Efficient Atomization and Combustion of Emulsified Crude Oil
2014-09-18
2.26 Naphthenes, vol % 50.72 Aromatics, vol % 16.82 Freezing Point, °F -49.7 Freezing Point, °C -45.4 Smoke Point, mm (ASTM) 19.2 Acid ...needed by the proposed method for capturing and oil removal, in particular the same vessels and booms used to herd the floating crude oil into a thick...slicks need to be removed more rapidly than they can be transported, in situ burning offers a rapid disposal method that minimizes risk to marine life