A Computational Model of Fraction Arithmetic
ERIC Educational Resources Information Center
Braithwaite, David W.; Pyke, Aryn A.; Siegler, Robert S.
2017-01-01
Many children fail to master fraction arithmetic even after years of instruction, a failure that hinders their learning of more advanced mathematics as well as their occupational success. To test hypotheses about why children have so many difficulties in this area, we created a computational model of fraction arithmetic learning and presented it…
The neural circuits for arithmetic principles.
Liu, Jie; Zhang, Han; Chen, Chuansheng; Chen, Hui; Cui, Jiaxin; Zhou, Xinlin
2017-02-15
Arithmetic principles are the regularities underlying arithmetic computation. Little is known about how the brain supports the processing of arithmetic principles. The current fMRI study examined neural activation and functional connectivity during the processing of verbalized arithmetic principles, as compared to numerical computation and general language processing. As expected, arithmetic principles elicited stronger activation in bilateral horizontal intraparietal sulcus and right supramarginal gyrus than did language processing, and stronger activation in left middle temporal lobe and left orbital part of inferior frontal gyrus than did computation. In contrast, computation elicited greater activation in bilateral horizontal intraparietal sulcus (extending to posterior superior parietal lobule) than did either arithmetic principles or language processing. Functional connectivity analysis with the psychophysiological interaction (PPI) approach showed that left temporal-parietal (MTG-HIPS) connectivity was stronger during the processing of arithmetic principles and language than during computation, whereas parietal-occipital connectivities were stronger during computation than during the processing of arithmetic principles and language. Additionally, the left fronto-parietal (orbital IFG-HIPS) connectivity was stronger during the processing of arithmetic principles than during computation. The results suggest that verbalized arithmetic principles engage a neural network that overlaps with but is distinct from the networks for computation and language processing. Copyright © 2016 Elsevier Inc. All rights reserved.
Inexact hardware for modelling weather & climate
NASA Astrophysics Data System (ADS)
Düben, Peter D.; McNamara, Hugh; Palmer, Tim
2014-05-01
The use of stochastic processing hardware and low precision arithmetic in atmospheric models is investigated. Stochastic processors allow hardware-induced faults in calculations, sacrificing exact computation in exchange for improvements in performance, potentially accuracy, and a reduction in power consumption. A similar trade-off is achieved using low precision arithmetic, with improvements in computation and communication speed and savings in storage and memory requirements. As high-performance computing becomes more massively parallel and power intensive, these two approaches may be important stepping stones in the pursuit of global cloud-resolving atmospheric modelling. The impact of both hardware-induced faults and low precision arithmetic is tested in the dynamical core of a global atmosphere model. Our simulations show that neither approach to inexact calculation substantially affects the quality of the model simulations, provided it is restricted to act only on the smaller scales. This suggests that inexact calculations at the small scale could reduce computation and power costs without adversely affecting the quality of the simulations.
The use of imprecise processing to improve accuracy in weather & climate prediction
NASA Astrophysics Data System (ADS)
Düben, Peter D.; McNamara, Hugh; Palmer, T. N.
2014-08-01
The use of stochastic processing hardware and low precision arithmetic in atmospheric models is investigated. Stochastic processors allow hardware-induced faults in calculations, sacrificing bit-reproducibility and precision in exchange for improvements in performance and potentially in forecast accuracy, due to a reduction in power consumption that could allow higher resolution. A similar trade-off is achieved using low precision arithmetic, with improvements in computation and communication speed and savings in storage and memory requirements. As high-performance computing becomes more massively parallel and power intensive, these two approaches may be important stepping stones in the pursuit of global cloud-resolving atmospheric modelling. The impact of both hardware-induced faults and low precision arithmetic is tested using the Lorenz '96 model and the dynamical core of a global atmosphere model. In the Lorenz '96 model there is a natural scale separation; the spectral discretisation used in the dynamical core also allows large and small scale dynamics to be treated separately within the code. Such scale separation allows the impact of lower-accuracy arithmetic to be restricted to components close to the truncation scales and hence close to the necessarily inexact parametrised representations of unresolved processes. By contrast, the larger scales are calculated using high precision deterministic arithmetic. Hardware faults from stochastic processors are emulated using a bit-flip model with different fault rates. Our simulations show that neither approach to inexact calculation substantially affects the large-scale behaviour, provided it is restricted to act only on the smaller scales. By contrast, results from the Lorenz '96 simulations are better when the small scales are calculated on an emulated stochastic processor than when those small scales are parametrised.
This suggests that inexact calculations at the small scale could reduce computation and power costs without adversely affecting the quality of the simulations. This would allow higher resolution models to be run at the same computational cost.
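The reduced-precision trade-off described above can be emulated in software by rounding away mantissa bits of standard doubles. The sketch below is illustrative only (it is not the emulator used in the study) and assumes IEEE 754 double precision:

```python
import struct

def round_to_bits(x, mantissa_bits):
    """Round a float to a reduced number of mantissa bits by rounding off
    the discarded bits of its IEEE 754 double representation."""
    if x == 0.0:
        return 0.0
    bits = struct.unpack('>Q', struct.pack('>d', x))[0]
    drop = 52 - mantissa_bits              # bits to discard from the 52-bit mantissa
    half = 1 << (drop - 1)
    bits = (bits + half) >> drop << drop   # round to nearest
    return struct.unpack('>d', struct.pack('>Q', bits))[0]

# pi kept with only 10 mantissa bits
approx = round_to_bits(3.141592653589793, 10)
```

Applying such rounding only to small-scale model components mirrors the scale-separation strategy tested in the paper.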
Arithmetic Circuit Verification Based on Symbolic Computer Algebra
NASA Astrophysics Data System (ADS)
Watanabe, Yuki; Homma, Naofumi; Aoki, Takafumi; Higuchi, Tatsuo
This paper presents a formal approach to verifying arithmetic circuits using symbolic computer algebra. Our method describes arithmetic circuits directly with high-level mathematical objects based on weighted number systems and arithmetic formulae. Such circuit descriptions can be verified effectively by polynomial reduction techniques using Gröbner bases. In this paper, we describe how symbolic computer algebra can be used to describe and verify arithmetic circuits. The advantages of the proposed approach are demonstrated through experimental verification of arithmetic circuits such as a multiply-accumulator and an FIR filter. The results show that the proposed approach is a promising candidate for verifying practical arithmetic circuits.
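The underlying verification question, whether a gate-level circuit satisfies its arithmetic specification, can be illustrated without computer algebra by exhaustively checking the word-level identity over all Boolean inputs; Gröbner-basis reduction automates this symbolically for circuits where exhaustion is infeasible. A minimal sketch (not the authors' method):

```python
from itertools import product

def full_adder(a, b, cin):
    """Gate-level full adder: returns (sum, carry_out)."""
    s1 = a ^ b
    s = s1 ^ cin
    cout = (a & b) | (s1 & cin)
    return s, cout

# Arithmetic specification: s + 2*cout == a + b + cin for all Boolean inputs.
ok = all(
    (lambda s, c: s + 2 * c == a + b + cin)(*full_adder(a, b, cin))
    for a, b, cin in product((0, 1), repeat=3)
)
```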
NASA Astrophysics Data System (ADS)
Bogdanov, Alexander; Khramushin, Vasily
2016-02-01
The architecture of a digital computing system determines the technical foundation of a unified mathematical language for the exact arithmetic-logical description of phenomena and laws of continuum mechanics, with applications in fluid mechanics and theoretical physics. Deep parallelization of the computing processes enables functional programming at a new technological level, providing traceability of the computing processes with automatic application of multiscale hybrid circuits and adaptive mathematical models for faithful reproduction of the fundamental laws of physics and continuum mechanics.
Exploring the Feasibility of a DNA Computer: Design of an ALU Using Sticker-Based DNA Model.
Sarkar, Mayukh; Ghosal, Prasun; Mohanty, Saraju P
2017-09-01
Since its inception, DNA computing has advanced to offer an extremely powerful, energy-efficient emerging technology for solving hard computational problems with its inherent massive parallelism and extremely high data density. It would be much more powerful and general purpose when combined with the well-known algorithmic solutions that exist for conventional computing architectures, using a suitable ALU. A specifically designed DNA Arithmetic and Logic Unit (ALU) that can address operations suitable for both domains can thus bridge the gap between the two. An ALU must be able to perform all common logic operations, including NOT, OR, AND, XOR, NOR, NAND, and XNOR, as well as compare and shift, and the integer and floating point arithmetic operations (addition, subtraction, multiplication, and division). In this paper, the design of an ALU using a sticker-based DNA model is proposed, together with an experimental feasibility analysis. The novelties of this paper are manifold. First, the integer arithmetic operations performed here use two's complement arithmetic, and the floating point operations follow the IEEE 754 floating point format, closely resembling a conventional ALU. Also, the output of each operation can be reused for any subsequent operation, so any algorithm or program logic that users can think of can be implemented directly on the DNA computer without modification. Second, once the basic operations of the sticker model are automated, the implementations proposed in this paper become highly suitable for designing a fully automated ALU. Third, the proposed approaches are easy to implement. Finally, these approaches can work on sufficiently large binary numbers.
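For reference, the two conventional number formats the proposed DNA ALU mirrors, two's-complement integers and IEEE 754 floating point, can be sketched in a few lines (a hedged illustration of the formats themselves, not of the sticker-model operations):

```python
import struct

def to_twos_complement(value, width):
    """Encode a signed integer as a width-bit two's-complement bit string."""
    return format(value & ((1 << width) - 1), f'0{width}b')

def from_twos_complement(bits):
    """Decode a two's-complement bit string back to a signed integer."""
    n = int(bits, 2)
    return n - (1 << len(bits)) if bits[0] == '1' else n

def float_bits(x):
    """IEEE 754 single-precision bit pattern of x, as a 32-bit string."""
    return format(struct.unpack('>I', struct.pack('>f', x))[0], '032b')
```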
NASA Astrophysics Data System (ADS)
Pi, E. I.; Siegel, E.
2010-03-01
Siegel [AMS Natl. Mtg. (2002) Abs. 973-60-124] digits logarithmic-law inversion to ONLY BEQS BEC: Quanta/Bosons=#: EMP-like SEVERE VULNERABILITY of ONLY #-networks (VS. ANALOG INvulnerability) via Barabasi NP (VS. dynamics [Not. AMS (5/2009)] critique); (so-called) "quantum-computing" (QC) = simple-arithmetic (sans division); algorithmic complexities: INtractibility/UNdecidability/INefficiency/NONcomputability/HARDNESS (so MIScalled) "noise"-induced-phase-transition (NIT) ACCELERATION: Cook-Levin theorem Reducibility = RG fixed-points; #-Randomness DEFINITION via WHAT? Query (VS. Goldreich [Not. AMS (2002)] How? mea culpa) = ONLY MBCS hot-plasma v #-clumping NON-random BEC; Modular-Arithmetic Congruences = Signal x Noise PRODUCTS = clock-model; NON-Shor [Physica A, 341, 586 (04)] BEC logarithmic-law inversion factorization: Watkins #-theory U statistical-physics; P=/=NP C-S TRIVIAL Proof: Euclid!!! [(So MIScalled) computational-complexity J-O obviation (3 millennia AGO geometry: NO: CC, "CS"; "Feet of Clay!!!")]; Query WHAT? Definition: (so MIScalled) "complexity" = UTTER-SIMPLICITY!! v COMPLICATEDNESS MEASURE(S).
ERIC Educational Resources Information Center
Alcoholado, Cristián; Diaz, Anita; Tagle, Arturo; Nussbaum, Miguel; Infante, Cristián
2016-01-01
This study aims to understand the differences in student learning outcomes and classroom behaviour when using the interpersonal computer, personal computer and pen-and-paper to solve arithmetic exercises. In this multi-session experiment, third grade students working on arithmetic exercises from various curricular units were divided into three…
Inconsistencies in Numerical Simulations of Dynamical Systems Using Interval Arithmetic
NASA Astrophysics Data System (ADS)
Nepomuceno, Erivelton G.; Peixoto, Márcia L. C.; Martins, Samir A. M.; Rodrigues, Heitor M.; Perc, Matjaž
Over the past few decades, interval arithmetic has attracted widespread interest from the scientific community. With the expansion of computing power, scientific computing is undergoing a noteworthy shift from floating-point arithmetic toward increased use of interval arithmetic. Notwithstanding the significant reliability of interval arithmetic, this paper presents a theoretical inconsistency in a simulation of dynamical systems using a well-known implementation of interval arithmetic. We have observed that two natural interval extensions present an empty intersection during a finite time range, which is contrary to the fundamental theorem of interval analysis. We propose a procedure to at least partially overcome this problem, based on the union of the two generated pseudo-orbits. This paper also shows a successful application of interval arithmetic to reducing the interval width in the simulation of a discrete map. The implications of our findings for the reliability of scientific computing using interval arithmetic are addressed using two numerical examples.
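The dependency effect behind "natural interval extensions" can be reproduced with a toy interval type: two algebraically equivalent expressions for the logistic map yield enclosures of different widths. This sketch uses naive endpoint arithmetic without directed rounding and is not the implementation examined in the paper:

```python
class Interval:
    """Minimal interval type with naive endpoint arithmetic
    (no directed rounding; for illustration only)."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, o): return Interval(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o): return Interval(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        p = [self.lo*o.lo, self.lo*o.hi, self.hi*o.lo, self.hi*o.hi]
        return Interval(min(p), max(p))
    def width(self): return self.hi - self.lo

r   = 3.8                       # illustrative logistic-map parameter
x   = Interval(0.49, 0.51)
one = Interval(1.0, 1.0)
rr  = Interval(r, r)
f1 = rr * x * (one - x)         # extension of r*x*(1-x)
f2 = rr * (x - x * x)           # extension of r*(x - x^2): same function
```

Both enclosures contain the true range, but `f2` is wider; iterating two such extensions produces the diverging pseudo-orbits the paper analyzes.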
Digital hardware implementation of a stochastic two-dimensional neuron model.
Grassia, F; Kohno, T; Levi, T
2016-11-01
This study explores the feasibility of stochastic neuron simulation in digital systems (FPGA) by implementing a two-dimensional neuron model. The stochasticity is added by a source of current noise in the silicon neuron using an Ornstein-Uhlenbeck process. This approach uses digital computation to emulate individual neuron behaviour using fixed-point arithmetic operations. The neuron model's computations are performed in arithmetic pipelines. It was designed in the VHDL language and simulated prior to mapping onto the FPGA. The experimental results confirmed the validity of the developed stochastic FPGA implementation, which makes the silicon neuron implementation more biologically plausible for future hybrid experiments. Copyright © 2017 Elsevier Ltd. All rights reserved.
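The noise source named above, an Ornstein-Uhlenbeck process, is commonly simulated with the Euler-Maruyama scheme. A floating-point sketch follows (the paper's FPGA design uses fixed-point pipelines; all parameter values here are illustrative):

```python
import math, random

def ou_path(theta, mu, sigma, dt, steps, x0=0.0, seed=42):
    """Euler-Maruyama simulation of an Ornstein-Uhlenbeck process:
    dX = theta*(mu - X)*dt + sigma*dW."""
    rng = random.Random(seed)
    x, path = x0, [x0]
    for _ in range(steps):
        x += theta * (mu - x) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

# Mean-reverting current noise: stationary std is sigma/sqrt(2*theta) ~ 0.14
noise = ou_path(theta=1.0, mu=0.0, sigma=0.2, dt=0.01, steps=1000)
```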
The semantic system is involved in mathematical problem solving.
Zhou, Xinlin; Li, Mengyi; Li, Leinian; Zhang, Yiyun; Cui, Jiaxin; Liu, Jie; Chen, Chuansheng
2018-02-01
Numerous studies have shown that the brain regions around bilateral intraparietal cortex are critical for number processing and arithmetical computation. However, the neural circuits for more advanced mathematics such as mathematical problem solving (with little routine arithmetical computation) remain unclear. Using functional magnetic resonance imaging (fMRI), this study (N = 24 undergraduate students) compared neural bases of mathematical problem solving (i.e., number series completion, mathematical word problem solving, and geometric problem solving) and arithmetical computation. Direct subject- and item-wise comparisons revealed that mathematical problem solving typically had greater activation than arithmetical computation in all 7 regions of the semantic system (which was based on a meta-analysis of 120 functional neuroimaging studies on semantic processing). Arithmetical computation typically had greater activation in the supplementary motor area and left precentral gyrus. The results suggest that the semantic system in the brain supports mathematical problem solving. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Habiby, Sarry F.
1987-01-01
The design and implementation of a digital (numerical) optical matrix-vector multiplier are presented. The objective is to demonstrate the operation of an optical processor designed to minimize computation time in performing a practical computing application. This is done by using the large array of processing elements in a Hughes liquid crystal light valve, and relying on the residue arithmetic representation, a holographic optical memory, and position coded optical look-up tables. In the design, all operations are performed in effectively one light valve response time regardless of matrix size. The features of the design allowing fast computation include the residue arithmetic representation, the mapping approach to computation, and the holographic memory. In addition, other features of the work include a practical light valve configuration for efficient polarization control, a model for recording multiple exposures in silver halides with equal reconstruction efficiency, and using light from an optical fiber for a reference beam source in constructing the hologram. The design can be extended to implement larger matrix arrays without increasing computation time.
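Residue arithmetic, the representation underlying this design, makes addition (and multiplication) digit-wise and carry-free, which is what allows each operation to complete in effectively one light-valve response time. A minimal software sketch with illustrative moduli:

```python
from math import prod

MODULI = (7, 11, 13)   # pairwise coprime; dynamic range 7*11*13 = 1001

def to_residues(x):
    return tuple(x % m for m in MODULI)

def add_rns(a, b):
    """Digit-wise, carry-free addition in the residue number system."""
    return tuple((x + y) % m for x, y, m in zip(a, b, MODULI))

def from_residues(r):
    """Chinese Remainder Theorem reconstruction of the integer result."""
    M = prod(MODULI)
    x = 0
    for ri, mi in zip(r, MODULI):
        Mi = M // mi
        x += ri * Mi * pow(Mi, -1, mi)   # pow(..., -1, m): Python 3.8+
    return x % M
```

Each residue channel is independent, so all channels can be evaluated in parallel by position-coded look-up tables, as in the optical processor described above.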
Aztec arithmetic revisited: land-area algorithms and Acolhua congruence arithmetic.
Williams, Barbara J; Jorge y Jorge, María del Carmen
2008-04-04
Acolhua-Aztec land records depicting areas and side dimensions of agricultural fields provide insight into Aztec arithmetic. Hypothesizing that recorded areas resulted from indigenous calculation, in a study of sample quadrilateral fields we found that 60% of the area values could be reproduced exactly by computation. In remaining cases, discrepancies between computed and recorded areas were consistently small, suggesting use of an unknown indigenous arithmetic. In revisiting the research, we discovered evidence for the use of congruence principles, based on proportions between the standard linear Acolhua measure and their units of shorter length. This procedure substitutes for computation with fractions and is labeled "Acolhua congruence arithmetic." The findings also clarify variance between Acolhua and Tenochca linear units, long an issue in understanding Aztec metrology.
Efficient Craig Interpolation for Linear Diophantine (Dis)Equations and Linear Modular Equations
2008-02-01
Craig interpolants have enabled the development of powerful hardware and software model checking techniques. Efficient algorithms are known for computing...interpolants in rational and real linear arithmetic. We focus on subsets of integer linear arithmetic. Our main results are polynomial time algorithms ...congruences), and linear diophantine disequations. We show the utility of the proposed interpolation algorithms for discovering modular/divisibility predicates
Vasilyeva, Marina; Laski, Elida V; Shen, Chen
2015-10-01
The present study tested the hypothesis that children's fluency with basic number facts and knowledge of computational strategies, derived from early arithmetic experience, predicts their performance on complex arithmetic problems. First-grade students from the United States and Taiwan (N = 152, mean age: 7.3 years) were presented with problems that differed in difficulty: single-, mixed-, and double-digit addition. Children's strategy use varied as a function of problem difficulty, consistent with Siegler's theory of strategy choice. The use of the decomposition strategy interacted with computational fluency in predicting the accuracy of double-digit addition. Further, the frequency of decomposition and computational fluency fully mediated cross-national differences in accuracy on these complex arithmetic problems. The results indicate the importance of both fluency with basic number facts and the decomposition strategy for later arithmetic performance. (c) 2015 APA, all rights reserved.
Gauss Elimination: Workhorse of Linear Algebra.
1995-08-05
linear algebra computation for solving systems, computing determinants, and determining the rank of a matrix. All of these are discussed in varying contexts. These include different arithmetic or algebraic settings, such as integer arithmetic or polynomial rings, as well as conventional real (floating-point) arithmetic. These have effects on both the accuracy and complexity analyses of the algorithm, which are also covered here. The impact of modern parallel computer architecture on GE is also
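The point that the arithmetic setting affects accuracy can be made concrete: over exact rationals, Gaussian elimination is free of rounding error entirely. A compact sketch (illustrative, not taken from the report):

```python
from fractions import Fraction

def gauss_solve(A, b):
    """Gaussian elimination with partial pivoting over exact rationals."""
    n = len(A)
    M = [[Fraction(v) for v in row] + [Fraction(bv)] for row, bv in zip(A, b)]
    for k in range(n):
        piv = max(range(k, n), key=lambda i: abs(M[i][k]))   # partial pivoting
        if M[piv][k] == 0:
            raise ValueError("singular matrix")
        M[k], M[piv] = M[piv], M[k]
        for i in range(k + 1, n):                            # eliminate below pivot
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [Fraction(0)] * n
    for i in range(n - 1, -1, -1):                           # back substitution
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x
```

The same routine run in floating point would accumulate rounding error; in `Fraction` arithmetic the cost shifts to growing numerators and denominators, one of the accuracy/complexity trade-offs the report surveys.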
ASIC For Complex Fixed-Point Arithmetic
NASA Technical Reports Server (NTRS)
Petilli, Stephen G.; Grimm, Michael J.; Olson, Erlend M.
1995-01-01
Application-specific integrated circuit (ASIC) performs 24-bit, fixed-point arithmetic operations on arrays of complex-valued input data. High-performance, wide-band arithmetic logic unit (ALU) designed for use in computing fast Fourier transforms (FFTs) and for performing digital filtering functions. Other applications include general computations involved in analysis of spectra and digital signal processing.
IBM system/360 assembly language interval arithmetic software
NASA Technical Reports Server (NTRS)
Phillips, E. J.
1972-01-01
Computer software designed to perform interval arithmetic is described. An interval is defined as the set of all real numbers between two given numbers, including or excluding one or both endpoints. Interval arithmetic consists of the various elementary arithmetic operations defined on the set of all intervals, such as interval addition, subtraction, union, etc. One of the main applications of interval arithmetic is in the area of error analysis of computer calculations. For example, it has been used successfully to compute bounds on rounding errors in the solution of linear algebraic systems, error bounds in numerical solutions of ordinary differential equations, as well as integral equations and boundary value problems. The described software enables users to implement algorithms of the type described in the references efficiently on the IBM 360 system.
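The key idea, computing guaranteed bounds on rounding errors, rests on outward rounding of each endpoint. A modern sketch using `math.nextafter` (Python 3.9+); the IBM 360 software would have relied on machine-level rounding instead:

```python
import math

def add_outward(alo, ahi, blo, bhi):
    """Interval addition with outward rounding: widen each computed endpoint
    by one ulp so the true real-valued sum is guaranteed to be enclosed."""
    lo = math.nextafter(alo + blo, -math.inf)
    hi = math.nextafter(ahi + bhi, math.inf)
    return lo, hi

# Enclose 0.1 + 0.2, whose exact value 0.3 is not representable in binary:
lo, hi = add_outward(0.1, 0.1, 0.2, 0.2)
```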
Probabilistic arithmetic automata and their applications.
Marschall, Tobias; Herms, Inke; Kaltenbach, Hans-Michael; Rahmann, Sven
2012-01-01
We present a comprehensive review on probabilistic arithmetic automata (PAAs), a general model to describe chains of operations whose operands depend on chance, along with two algorithms to numerically compute the distribution of the results of such probabilistic calculations. PAAs provide a unifying framework to approach many problems arising in computational biology and elsewhere. We present five different applications, namely 1) pattern matching statistics on random texts, including the computation of the distribution of occurrence counts, waiting times, and clump sizes under hidden Markov background models; 2) exact analysis of window-based pattern matching algorithms; 3) sensitivity of filtration seeds used to detect candidate sequence alignments; 4) length and mass statistics of peptide fragments resulting from enzymatic cleavage reactions; and 5) read length statistics of 454 and IonTorrent sequencing reads. The diversity of these applications indicates the flexibility and unifying character of the presented framework. While the construction of a PAA depends on the particular application, we single out a frequently applicable construction method: We introduce deterministic arithmetic automata (DAAs) to model deterministic calculations on sequences, and demonstrate how to construct a PAA from a given DAA and a finite-memory random text model. This procedure is used for all five discussed applications and greatly simplifies the construction of PAAs. Implementations are available as part of the MoSDi package. Its application programming interface facilitates the rapid development of new applications based on the PAA framework.
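The core computation a PAA performs, propagating a distribution through a chain of arithmetic operations on chance-dependent operands, can be illustrated by the distribution of a sum of dice computed via dynamic programming over states. A toy sketch in the spirit of the framework (not the MoSDi implementation):

```python
from collections import defaultdict

def sum_distribution(n_dice, sides=6):
    """Distribution of the sum of n fair dice: a chain of additions whose
    operands depend on chance, propagated state by state."""
    dist = {0: 1.0}
    for _ in range(n_dice):
        nxt = defaultdict(float)
        for total, p in dist.items():
            for face in range(1, sides + 1):
                nxt[total + face] += p / sides
        dist = dict(nxt)
    return dist

d = sum_distribution(2)   # two dice: the familiar triangular distribution
```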
NASA Astrophysics Data System (ADS)
Wang, Li-Qun; Saito, Masao
We used 1.5T functional magnetic resonance imaging (fMRI) to explore that which brain areas contribute uniquely to numeric computation. The BOLD effect activation pattern of metal arithmetic task (successive subtraction: actual calculation task) was compared with multiplication tables repetition task (rote verbal arithmetic memory task) response. The activation found in right parietal lobule during metal arithmetic task suggested that quantitative cognition or numeric computation may need the assistance of sensuous convert, such as spatial imagination and spatial sensuous convert. In addition, this mechanism may be an ’analog algorithm’ in the simple mental arithmetic processing.
FAST TRACK COMMUNICATION: Reversible arithmetic logic unit for quantum arithmetic
NASA Astrophysics Data System (ADS)
Kirkedal Thomsen, Michael; Glück, Robert; Axelsen, Holger Bock
2010-09-01
This communication presents the complete design of a reversible arithmetic logic unit (ALU) that can be part of a programmable reversible computing device such as a quantum computer. The presented ALU is garbage free and uses reversible updates to combine the standard reversible arithmetic and logical operations in one unit. Combined with a suitable control unit, the ALU permits the construction of an r-Turing complete computing device. The garbage-free ALU developed in this communication requires only 6n elementary reversible gates for five basic arithmetic-logical operations on two n-bit operands and does not use ancillae. This remarkably low resource consumption was achieved by generalizing the V-shape design first introduced for quantum ripple-carry adders and nesting multiple V-shapes in a novel integrated design. This communication shows that the realization of an efficient reversible ALU for a programmable computing device is possible and that the V-shape design is a very versatile approach to the design of quantum networks.
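The notion of a garbage-free reversible update can be illustrated at word level: an update (a, b) → (a, a ⊕ b) or (a, b + a) is undone exactly by its inverse, so no intermediate bits need to be discarded. A hedged word-level sketch (the paper's design is gate-level, built from V-shaped adder circuits):

```python
def cnot(a, b):
    """Feynman (CNOT) gate: reversible update (a, b) -> (a, a XOR b).
    It is its own inverse."""
    return a, a ^ b

def reversible_add(a, b, width=8):
    """Reversible in-place addition modulo 2**width: (a, b) -> (a, b + a)."""
    return a, (b + a) % (1 << width)

def reversible_sub(a, b, width=8):
    """Inverse of reversible_add: recovers the original operands."""
    return a, (b - a) % (1 << width)
```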
Rauscher, Larissa; Kohn, Juliane; Käser, Tanja; Mayer, Verena; Kucian, Karin; McCaskey, Ursina; Esser, Günter; von Aster, Michael
2016-01-01
Calcularis is a computer-based training program which focuses on basic numerical skills, spatial representation of numbers and arithmetic operations. The program includes a user model allowing flexible adaptation to the child's individual knowledge and learning profile. The study design to evaluate the training comprises three conditions (Calcularis group, waiting control group, spelling training group). One hundred and thirty-eight children from second to fifth grade participated in the study. Training duration comprised a minimum of 24 training sessions of 20 min within a time period of 6-8 weeks. Compared to the group without training (waiting control group) and the group with an alternative training (spelling training group), the children of the Calcularis group demonstrated a higher benefit in subtraction and number line estimation with medium to large effect sizes. Therefore, Calcularis can be used effectively to support children in arithmetic performance and spatial number representation.
Arabidopsis plants perform arithmetic division to prevent starvation at night
Scialdone, Antonio; Mugford, Sam T; Feike, Doreen; Skeffington, Alastair; Borrill, Philippa; Graf, Alexander; Smith, Alison M; Howard, Martin
2013-01-01
Photosynthetic starch reserves that accumulate in Arabidopsis leaves during the day decrease approximately linearly with time at night to support metabolism and growth. We find that the rate of decrease is adjusted to accommodate variation in the time of onset of darkness and starch content, such that reserves last almost precisely until dawn. Generation of these dynamics therefore requires an arithmetic division computation between the starch content and expected time to dawn. We introduce two novel chemical kinetic models capable of implementing analog arithmetic division. Predictions from the models are successfully tested in plants perturbed by a night-time light period or by mutations in starch degradation pathways. Our experiments indicate which components of the starch degradation apparatus may be important for appropriate arithmetic division. Our results are potentially relevant for any biological system dependent on a food reserve for survival over a predictable time period. DOI: http://dx.doi.org/10.7554/eLife.00669.001 PMID:23805380
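The arithmetic division inferred here, setting the degradation rate to (starch content)/(time to dawn) at dusk, can be checked with a few lines of simulation (an illustrative caricature, not the chemical kinetic models of the paper):

```python
def starch_at_night(s0, hours_to_dawn, dt=0.01):
    """Starch decreasing at the constant rate s0 / T computed at dusk:
    the division sets a slope that exhausts reserves precisely at dawn."""
    rate = s0 / hours_to_dawn          # the division computation
    s, t = s0, 0.0
    trace = []
    while t < hours_to_dawn:
        s = max(0.0, s - rate * dt)    # linear decrease, clamped at zero
        t += dt
        trace.append(s)
    return trace

trace = starch_at_night(100.0, 8.0)    # illustrative units and night length
```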
ERIC Educational Resources Information Center
Zhang, Xiao; Räsänen, Pekka; Koponen, Tuire; Aunola, Kaisa; Lerkkanen, Marja-Kristiina; Nurmi, Jari-Erik
2017-01-01
The longitudinal relations of domain-general and numerical skills at ages 6-7 years to 3 cognitive domains of arithmetic learning, namely knowing (written computation), applying (arithmetic word problems), and reasoning (arithmetic reasoning) at age 11, were examined for a representative sample of 378 Finnish children. The results showed that…
Error-correcting codes in computer arithmetic.
NASA Technical Reports Server (NTRS)
Massey, J. L.; Garcia, O. N.
1972-01-01
Summary of the most important results so far obtained in the theory of coding for the correction and detection of errors in computer arithmetic. Attempts to satisfy the stringent reliability demands upon the arithmetic unit are considered, and special attention is given to attempts to incorporate redundancy into the numbers themselves which are being processed so that erroneous results can be detected and corrected.
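A classical way to incorporate redundancy into the numbers themselves is the AN code: operands are scaled by a check factor A, and a single bit flip, which changes a word by ±2^k, can never preserve divisibility by A = 3. A minimal sketch:

```python
A = 3   # check factor: valid codewords are multiples of 3

def encode(n):
    return A * n

def check(code):
    """2**k mod 3 is always 1 or 2, never 0, so a word changed by ±2**k
    (one flipped bit) loses divisibility by 3 and the residue exposes it."""
    return code % A == 0

word = encode(25)            # codeword 75
corrupted = word ^ (1 << 4)  # flip one bit
```

Because A*n1 + A*n2 = A*(n1 + n2), the check survives addition, which is what makes AN codes suitable for protecting arithmetic units rather than just storage.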
The MasPar MP-1 As a Computer Arithmetic Laboratory
Anuta, Michael A.; Lozier, Daniel W.; Turner, Peter R.
1996-01-01
This paper is a blueprint for the use of a massively parallel SIMD computer architecture for the simulation of various forms of computer arithmetic. The particular system used is a DEC/MasPar MP-1 with 4096 processors in a square array. This architecture has many advantages for such simulations due largely to the simplicity of the individual processors. Arithmetic operations can be spread across the processor array to simulate a hardware chip. Alternatively they may be performed on individual processors to allow simulation of a massively parallel implementation of the arithmetic. Compromises between these extremes permit speed-area tradeoffs to be examined. The paper includes a description of the architecture and its features. It then summarizes some of the arithmetic systems which have been, or are to be, implemented. The implementation of the level-index and symmetric level-index, LI and SLI, systems is described in some detail. An extensive bibliography is included. PMID:27805123
Classified one-step high-radix signed-digit arithmetic units
NASA Astrophysics Data System (ADS)
Cherri, Abdallah K.
1998-08-01
High-radix number systems enable higher information storage density, less complexity, fewer system components, and fewer cascaded gates and operations. A simple one-step fully parallel high-radix signed-digit arithmetic is proposed for parallel optical computing based on new joint spatial encodings. This reduces hardware requirements and improves throughput by reducing the space-bandwidth product needed. The high-radix signed-digit arithmetic operations are based on classifying the neighboring input digit pairs into various groups to reduce the computation rules. A new joint spatial encoding technique is developed to present both the operands and the computation rules. This technique increases the space-bandwidth product of the spatial light modulators of the system. An optical implementation of the proposed high-radix signed-digit arithmetic operations is also presented. It is shown that our one-step trinary signed-digit and quaternary signed-digit arithmetic units are much simpler and better than all previously reported high-radix signed-digit techniques.
Versatile analog pulse height computer performs real-time arithmetic operations
NASA Technical Reports Server (NTRS)
Brenner, R.; Strauss, M. G.
1967-01-01
Multipurpose analog pulse height computer performs real-time arithmetic operations on relatively fast pulses. This computer can be used for identification of charged particles, pulse shape discrimination, division of signals from position sensitive detectors, and other on-line data reduction techniques.
Neural computation of arithmetic functions
NASA Technical Reports Server (NTRS)
Siu, Kai-Yeung; Bruck, Jehoshua
1990-01-01
An area of application of neural networks is considered. A neuron is modeled as a linear threshold gate, and the network architecture considered is the layered feedforward network. It is shown how common arithmetic functions such as multiplication and sorting can be efficiently computed in a shallow neural network. Some known results are improved by showing that the product of two n-bit numbers and sorting of n n-bit numbers can be computed by a polynomial-size neural network using only four and five unit delays, respectively. Moreover, the weights of each threshold element in the neural networks require O(log n)-bit (instead of n-bit) accuracy. These results can be extended to more complicated functions such as multiple products, division, rational functions, and approximation of analytic functions.
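A linear threshold gate, the neuron model used here, already computes XOR in depth two, hinting at why shallow threshold networks can implement arithmetic efficiently. A small illustrative sketch:

```python
def threshold_gate(weights, bias, inputs):
    """Linear threshold gate: fires iff the weighted sum reaches the bias."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= bias else 0

def xor(a, b):
    """XOR in a depth-2 threshold network: AND of OR(a,b) and NAND(a,b)."""
    or_gate   = threshold_gate((1, 1), 1, (a, b))
    nand_gate = threshold_gate((-1, -1), -1, (a, b))
    return threshold_gate((1, 1), 2, (or_gate, nand_gate))
```

Multiplication and sorting in the paper are built from similar small-depth layers, with the stated O(log n)-bit weight accuracy.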
Fault tolerant computing: A preamble for assuring viability of large computer systems
NASA Technical Reports Server (NTRS)
Lim, R. S.
1977-01-01
The need for fault-tolerant computing is addressed from the viewpoints of (1) why it is needed, (2) how to apply it in the current state of technology, and (3) what it means in the context of the Phoenix computer system and other related systems. To this end, the value of concurrent error detection and correction is described. User protection, program retry, and repair are among the factors considered. The technology of algebraic codes to protect memory systems and arithmetic codes to protect arithmetic operations is discussed.
NASA Astrophysics Data System (ADS)
Siegel, J.; Siegel, Edward Carl-Ludwig
2011-03-01
Cook-Levin computational-"complexity" (C-C) algorithmic-equivalence reduction-theorem reducibility equivalence to renormalization-(semi)-group phase-transitions critical-phenomena statistical-physics universality-classes fixed-points is exploited with Gauss modular/clock-arithmetic/model congruences = signal X noise PRODUCT reinterpretation. Siegel-Baez FUZZYICS=CATEGORYICS (SON of "TRIZ"): Category-Semantics (C-S) tabular list-format truth-table matrix analytics predicts and implements "noise"-induced phase-transitions (NITs) to accelerate versus to decelerate Harel [Algorithmics (1987)]-Sipser [Intro. Theory Computation (1997)] algorithmic C-C: "NIT-picking" to optimize optimization-problems optimally (OOPO). Versus iso-"noise" power-spectrum quantitative-only amplitude/magnitude-only variation stochastic-resonance, this "NIT-picking" is "noise" power-spectrum QUALitative-type variation via quantitative critical-exponents variation. Computer-"science" algorithmic C-C models (Turing machines, finite-state models/automata) are identified as early-days once-workable approaches that now serve only as limiting crutches impeding latter-day new insights.
Redundant binary number representation for an inherently parallel arithmetic on optical computers.
De Biase, G A; Massini, A
1993-02-10
A simple redundant binary number representation suitable for digital-optical computers is presented. By means of this representation it is possible to build an arithmetic with carry-free parallel algebraic sums carried out in constant time and parallel multiplication in log N time. This redundant number representation naturally fits the 2's complement binary number system and permits the construction of inherently parallel arithmetic units that are used in various optical technologies. Some properties of this number representation and several examples of computation are presented.
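A minimal sketch of the carry-free property in the simplest signed-digit setting (digits in {-1, 0, 1}; the paper's own representation may differ in detail): the digitwise difference of two binary numbers is already a valid signed-digit string, so no borrow ever propagates and every digit can be produced in parallel.

```python
def to_signed_digits(a, b, n):
    """Digitwise difference of two n-bit binary numbers: each digit is
    a_i - b_i, always in {-1, 0, 1}.  No borrow propagates between
    positions, so all digits can be computed in constant parallel time."""
    return [((a >> i) & 1) - ((b >> i) & 1) for i in range(n)]

def value(digits):
    """Value of a signed-digit string (least-significant digit first)."""
    return sum(d << i for i, d in enumerate(digits))

d = to_signed_digits(0b1011, 0b0110, 4)   # 11 - 6
assert all(x in (-1, 0, 1) for x in d)
assert value(d) == 5
```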
Computer-Based Arithmetic Test Generation
ERIC Educational Resources Information Center
Trocchi, Robert F.
1973-01-01
The computer can be a welcome partner in the instructional process, but only if there is man-machine interaction. Man should not compromise system design because of available hardware; the computer must fit the system design for the result to represent an acceptable solution to instructional technology. The Arithmetic Test Generator system fits…
Modified-Signed-Digit Optical Computing Using Fan-Out
NASA Technical Reports Server (NTRS)
Liu, Hua-Kuang; Zhou, Shaomin; Yeh, Pochi
1996-01-01
Experimental optical computing system containing optical fan-out elements implements modified signed-digit (MSD) arithmetic and logic. In comparison with previous optical implementations of MSD arithmetic, this one is characterized by larger throughput, greater flexibility, and simpler optics.
Efficient Probabilistic Diagnostics for Electrical Power Systems
NASA Technical Reports Server (NTRS)
Mengshoel, Ole J.; Chavira, Mark; Cascio, Keith; Poll, Scott; Darwiche, Adnan; Uckun, Serdar
2008-01-01
We consider in this work the probabilistic approach to model-based diagnosis when applied to electrical power systems (EPSs). Our probabilistic approach is formally well-founded, as it is based on Bayesian networks and arithmetic circuits. We investigate the diagnostic task known as fault isolation, and pay special attention to meeting two of the main challenges (model development and real-time reasoning) often associated with real-world application of model-based diagnosis technologies. To address the challenge of model development, we develop a systematic approach to representing electrical power systems as Bayesian networks, supported by an easy-to-use specification language. To address the real-time reasoning challenge, we compile Bayesian networks into arithmetic circuits. Arithmetic circuit evaluation supports real-time diagnosis by being predictable and fast. In essence, we introduce a high-level EPS specification language from which Bayesian networks that can diagnose multiple simultaneous failures are auto-generated, and we illustrate the feasibility of using arithmetic circuits, compiled from Bayesian networks, for real-time diagnosis on real-world EPSs of interest to NASA. The experimental system is a real-world EPS, namely the Advanced Diagnostic and Prognostic Testbed (ADAPT) located at the NASA Ames Research Center. In experiments with the ADAPT Bayesian network, which currently contains 503 discrete nodes and 579 edges, we find high diagnostic accuracy in scenarios where one to three faults, both in components and sensors, were inserted. The time taken to compute the most probable explanation using arithmetic circuits has a small mean of 0.2625 milliseconds and standard deviation of 0.2028 milliseconds. In experiments with data from ADAPT we also show that arithmetic circuit evaluation substantially outperforms joint tree propagation and variable elimination, two alternative algorithms for diagnosis using Bayesian network inference.
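To illustrate how a Bayesian network compiles to an arithmetic circuit, here is a hand-built sketch for a hypothetical two-node network Cause -> Effect (parameter values and variable names are invented; the ADAPT model itself is far larger). The "network polynomial" consists only of sums and products over evidence indicators and parameters, which is what makes its evaluation fast and predictable.

```python
theta_c = {0: 0.7, 1: 0.3}                    # P(Cause)         (invented)
theta_e = {(0, 0): 0.9, (0, 1): 0.1,          # P(Effect | Cause) (invented)
           (1, 0): 0.2, (1, 1): 0.8}

def network_poly(lam_c, lam_e):
    """Network polynomial f = sum_{c,e} lam_c[c] * lam_e[e] * P(c) * P(e|c).
    Clamping evidence indicators to 0/1 makes f evaluate to P(evidence)."""
    return sum(lam_c[c] * lam_e[e] * theta_c[c] * theta_e[(c, e)]
               for c in (0, 1) for e in (0, 1))

# P(Effect = 1): clamp the Effect indicator, leave Cause unconstrained.
p_e1 = network_poly(lam_c={0: 1, 1: 1}, lam_e={0: 0, 1: 1})
assert abs(p_e1 - (0.7 * 0.1 + 0.3 * 0.8)) < 1e-12   # = 0.31
```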
NASA Astrophysics Data System (ADS)
Oztekin, Halit; Temurtas, Feyzullah; Gulbag, Ali
The Arithmetic and Logic Unit (ALU) design is one of the important topics in the Computer Architecture and Organization course in Computer and Electrical Engineering departments. Existing ALU designs used as educational tools are non-modular in nature. As programmable logic technology has developed rapidly, it is feasible to implement ALU design based on Field Programmable Gate Arrays (FPGAs) in this course. In this paper, we adopt a modular approach to FPGA-based ALU design. All modules in the ALU design are realized as schematics on Altera's Cyclone II development board. Under this model, the ALU is divided into four distinct modules: an arithmetic unit (excluding multiplication and division), a logic unit, a multiplication unit, and a division unit. Because the approach is modular, users can easily design an ALU of any size. The approach was then applied to the microcomputer architecture design BZK.SAU.FPGA10.0, replacing its current ALU.
NASA Astrophysics Data System (ADS)
Ghosh, Amal K.; Bhattacharya, Animesh; Raul, Moumita; Basuray, Amitabha
2012-07-01
The arithmetic logic unit (ALU) is the most important unit in any computing system. Optical computing is becoming more popular by the day because of its ultrahigh processing speed and huge data handling capability. Fast processing requires an optical TALU compatible with multivalued logic. In this regard we present a trinary arithmetic and logic unit (TALU) in the modified trinary number (MTN) system, which is suitable for optical computation and other applications in multivalued logic systems. Savart plate and spatial light modulator (SLM) based optoelectronic circuits have been used to exploit the optical tree architecture (OTA) in the optical interconnection network.
Wong, Terry Tin-Yau
2017-12-01
The current study examined the unique and shared contributions of arithmetic operation understanding and numerical magnitude representation to children's mathematics achievement. A sample of 124 fourth graders was tested on their arithmetic operation understanding (as reflected by their understanding of arithmetic principles and the knowledge about the application of arithmetic operations) and their precision of rational number magnitude representation. They were also tested on their mathematics achievement and arithmetic computation performance as well as the potential confounding factors. The findings suggested that both arithmetic operation understanding and numerical magnitude representation uniquely predicted children's mathematics achievement. The findings highlight the significance of arithmetic operation understanding in mathematics learning. Copyright © 2017 Elsevier Inc. All rights reserved.
Floating-point geometry: toward guaranteed geometric computations with approximate arithmetics
NASA Astrophysics Data System (ADS)
Bajard, Jean-Claude; Langlois, Philippe; Michelucci, Dominique; Morin, Géraldine; Revol, Nathalie
2008-08-01
Geometric computations can fail because of inconsistencies due to floating-point inaccuracy. For instance, the computed intersection point between two curves does not lie on the curves: this is unavoidable when the intersection point coordinates are not rational, and thus not representable using floating-point arithmetic. A popular heuristic approach tests equalities and nullities up to a tolerance ɛ. But transitivity of equality is lost: we can have A approx B and B approx C, but A not approx C (where A approx B means ||A - B|| < ɛ for two floating-point values A, B). Interval arithmetic is another, self-validated, alternative; the difficulty is to limit the growth of interval widths during computation. Unfortunately, interval arithmetic cannot decide equality or nullity, even in cases where this is decidable by other means. A new approach, developed in this paper, consists in modifying the geometric problems and algorithms to account for the undecidability of the equality test and the unavoidable inaccuracy. In particular, all curves come with a non-zero thickness, so two curves (generically) cut in a region with non-zero area, an inner and outer representation of which is computable. This last approach no longer assumes that an equality or nullity test is available. The question which arises is: which geometric problems can still be solved with this last approach, and which cannot? This paper begins with the description of some cases where every known arithmetic fails in practice. Then, for each arithmetic, some properties of the problems it can solve are given. We end this work by proposing the bases of a new approach which aims to fulfill the requirements of geometric computation.
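The loss of transitivity is easy to reproduce; a minimal sketch, assuming a tolerance of 1e-6:

```python
EPS = 1e-6   # assumed tolerance

def approx(a, b):
    """Tolerance-based equality: ||a - b|| < eps."""
    return abs(a - b) < EPS

A, B, C = 0.0, 0.6e-6, 1.2e-6
assert approx(A, B) and approx(B, C)   # each neighbouring pair is "equal"
assert not approx(A, C)                # ...yet equality is not transitive
```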
Arithmetic 400. A Computer Educational Program.
ERIC Educational Resources Information Center
Firestein, Laurie
"ARITHMETIC 400" is the first of the next generation of educational programs designed to encourage thinking about arithmetic problems. Presented in video game format, performance is a measure of correctness, speed, accuracy, and fortune as well. Play presents a challenge to individuals at various skill levels. The program, run on an Apple…
Bit-parallel arithmetic in a massively-parallel associative processor
NASA Technical Reports Server (NTRS)
Scherson, Isaac D.; Kramer, David A.; Alleyne, Brian D.
1992-01-01
A simple but powerful new architecture based on a classical associative processor model is presented. Algorithms for performing the four basic arithmetic operations both for integer and floating point operands are described. For m-bit operands, the proposed architecture makes it possible to execute complex operations in O(m) cycles as opposed to O(m^2) for bit-serial machines. A word-parallel, bit-parallel, massively-parallel computing system can be constructed using this architecture with VLSI technology. The operation of this system is demonstrated for the fast Fourier transform and matrix multiplication.
Mathematical modelling of Bit-Level Architecture using Reciprocal Quantum Logic
NASA Astrophysics Data System (ADS)
Narendran, S.; Selvakumar, J.
2018-04-01
High-performance computing demands both speed and energy efficiency. Reciprocal Quantum Logic (RQL) is a technology that offers high speed and zero static power dissipation. RQL uses an AC power supply as input rather than DC, and has three basic gate types. Series of reciprocal transmission lines are placed between gates to avoid power loss and to achieve high speed. An analytical model of a bit-level architecture is developed using RQL. A major drawback of RQL is area: achieving a proper power supply requires splitters, which occupy a large area. Distributed arithmetic computes a vector-vector multiplication in which one vector is constant and the other is a signed variable; each word is treated as a binary number, and the terms are rearranged and combined to form the distributed system. Distributed arithmetic is widely used in convolution and in high-performance computational devices.
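A software sketch of distributed arithmetic as just described: the constant coefficients are folded into a lookup table, and the variable vector is consumed one bit slice at a time, so no general multiplier is needed. The coefficient values and word width here are illustrative.

```python
def da_inner_product(consts, xs, width):
    """Distributed-arithmetic inner product sum(c_k * x_k) with the x_k
    as two's-complement `width`-bit words.  The LUT holds the sum of the
    constants selected by each possible bit pattern across the inputs."""
    k = len(consts)
    lut = [sum(c for i, c in enumerate(consts) if (p >> i) & 1)
           for p in range(1 << k)]
    enc = [x & ((1 << width) - 1) for x in xs]      # two's-complement encode
    acc = 0
    for j in range(width):
        # bit slice j across all inputs forms the LUT address
        pattern = sum((((x >> j) & 1) << i) for i, x in enumerate(enc))
        # the sign-bit slice is subtracted (two's-complement weight -2^(w-1))
        acc += (-1 if j == width - 1 else 1) * (lut[pattern] << j)
    return acc

assert da_inner_product([3, 5, 7], [2, -1, 4], width=8) == 3*2 + 5*(-1) + 7*4
```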
NASA Astrophysics Data System (ADS)
Power, Sarah D.; Falk, Tiago H.; Chau, Tom
2010-04-01
Near-infrared spectroscopy (NIRS) has recently been investigated as a non-invasive brain-computer interface (BCI). In particular, previous research has shown that NIRS signals recorded from the motor cortex during left- and right-hand imagery can be distinguished, providing a basis for a two-choice NIRS-BCI. In this study, we investigated the feasibility of an alternative two-choice NIRS-BCI paradigm based on the classification of prefrontal activity due to two cognitive tasks, specifically mental arithmetic and music imagery. Deploying a dual-wavelength frequency domain near-infrared spectrometer, we interrogated nine sites around the frontopolar locations (International 10-20 System) while ten able-bodied adults performed mental arithmetic and music imagery within a synchronous shape-matching paradigm. With the 18 filtered AC signals, we created task- and subject-specific maximum likelihood classifiers using hidden Markov models. Mental arithmetic and music imagery were classified with an average accuracy of 77.2% ± 7.0 across participants, with all participants significantly exceeding chance accuracies. The results suggest the potential of a two-choice NIRS-BCI based on cognitive rather than motor tasks.
Binary Arithmetic From Hariot (CA, 1600 A.D.) to the Computer Age.
ERIC Educational Resources Information Center
Glaser, Anton
This history of binary arithmetic begins with details of Thomas Hariot's contribution and includes specific references to Hariot's manuscripts kept at the British Museum. A binary code developed by Sir Francis Bacon is discussed. Briefly mentioned are contributions to binary arithmetic made by Leibniz, Fontenelle, Gauss, Euler, Benzout, Barlow,…
Computer-Assisted Instruction: Stanford's 1965-66 Arithmetic Program.
ERIC Educational Resources Information Center
Suppes, Patrick; And Others
A review of the possibilities and challenges of computer-assisted instruction (CAI), and a brief history of CAI projects at Stanford serve to give the reader the context of the particular program described and analyzed in this book. The 1965-66 arithmetic drill-and-practice program is described, summarizing the curriculum and project operation. An…
Sign Language for K-8 Mathematics by 3D Interactive Animation
ERIC Educational Resources Information Center
Adamo-Villani, Nicoletta; Doublestein, John; Martin, Zachary
2005-01-01
We present a new highly interactive computer animation tool to increase the mathematical skills of deaf children. We aim at increasing the effectiveness of (hearing) parents in teaching arithmetic to their deaf children, and the opportunity of deaf children to learn arithmetic via interactive media. Using state-of-the-art computer animation…
NASA Astrophysics Data System (ADS)
Pavlichin, Dmitri S.; Mabuchi, Hideo
2014-06-01
Nanoscale integrated photonic devices and circuits offer a path to ultra-low power computation at the few-photon level. Here we propose an optical circuit that performs a ubiquitous operation: the controlled, random-access readout of a collection of stored memory phases or, equivalently, the computation of the inner product of a vector of phases with a binary "selector" vector, where the arithmetic is done modulo 2π and the result is encoded in the phase of a coherent field. This circuit, a collection of cascaded interferometers driven by a coherent input field, demonstrates the use of coherence as a computational resource, and the use of recently developed mathematical tools for modeling optical circuits with many coupled parts. The construction extends in a straightforward way to the computation of matrix-vector and matrix-matrix products, and, with the inclusion of an optical feedback loop, to the computation of a "weighted" readout of stored memory phases. We note some applications of these circuits for error correction and for computing tasks requiring fast vector inner products, e.g. statistical classification and some machine learning algorithms.
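Stripped of its optical implementation, the core operation is just a modulo-2π inner product; a sketch with invented phase values:

```python
import math

def phase_readout(phases, selector):
    """Inner product of stored phases with a binary selector vector,
    arithmetic modulo 2*pi -- the operation the proposed circuit encodes
    in the phase of a coherent output field."""
    return math.fmod(sum(p * s for p, s in zip(phases, selector)), 2 * math.pi)

stored = [0.5, 2.0, 3.0, 1.5]                 # invented memory phases
out = phase_readout(stored, [1, 0, 1, 1])     # 0.5 + 3.0 + 1.5 = 5.0
assert abs(out - 5.0) < 1e-12
```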
Preverbal and verbal counting and computation.
Gallistel, C R; Gelman, R
1992-08-01
We describe the preverbal system of counting and arithmetic reasoning revealed by experiments on numerical representations in animals. In this system, numerosities are represented by magnitudes, which are rapidly but inaccurately generated by the Meck and Church (1983) preverbal counting mechanism. We suggest the following. (1) The preverbal counting mechanism is the source of the implicit principles that guide the acquisition of verbal counting. (2) The preverbal system of arithmetic computation provides the framework for the assimilation of the verbal system. (3) Learning to count involves, in part, learning a mapping from the preverbal numerical magnitudes to the verbal and written number symbols and the inverse mappings from these symbols to the preverbal magnitudes. (4) Subitizing is the use of the preverbal counting process and the mapping from the resulting magnitudes to number words in order to generate rapidly the number words for small numerosities. (5) The retrieval of the number facts, which plays a central role in verbal computation, is mediated via the inverse mappings from verbal and written numbers to the preverbal magnitudes and the use of these magnitudes to find the appropriate cells in tabular arrangements of the answers. (6) This model of the fact retrieval process accounts for the salient features of the reaction time differences and error patterns revealed by experiments on mental arithmetic. (7) The application of verbal and written computational algorithms goes on in parallel with, and is to some extent guided by, preverbal computations, both in the child and in the adult.
Nonverbal arithmetic in humans: light from noise.
Cordes, Sara; Gallistel, C R; Gelman, Rochel; Latham, Peter
2007-10-01
Animal and human data suggest the existence of a cross-species system of analog number representation (e.g., Cordes, Gelman, Gallistel, & Whalen, 2001; Meck & Church, 1983), which may mediate the computation of statistical regularities in the environment (Gallistel, Gelman, & Cordes, 2006). However, evidence of arithmetic manipulation of these nonverbal magnitude representations is sparse and lacking in depth. This study uses the analysis of variability as a tool for understanding properties of these combinatorial processes. Human subjects participated in tasks requiring responses dependent upon the addition, subtraction, or reproduction of nonverbal counts. Variance analyses revealed that the magnitude of both inputs and answer contributed to the variability in the arithmetic responses, with operand variability dominating. Other contributing factors to the observed variability and implications for logarithmic versus scalar models of magnitude representation are discussed in light of these results.
An Input Routine Using Arithmetic Statements for the IBM 704 Digital Computer
NASA Technical Reports Server (NTRS)
Turner, Don N.; Huff, Vearl N.
1961-01-01
An input routine has been designed for use with FORTRAN or SAP coded programs which are to be executed on an IBM 704 digital computer. All input to be processed by the routine is punched on IBM cards as declarative statements of the arithmetic type resembling the FORTRAN language. The routine is 850 words in length. It is capable of loading fixed- or floating-point numbers, octal numbers, and alphabetic words, and of performing simple arithmetic as indicated on input cards. Provisions have been made for rapid loading of arrays of numbers in consecutive memory locations.
Item Mass and Complexity and the Arithmetic Computation of Students with Learning Disabilities.
ERIC Educational Resources Information Center
Cawley, John F.; Shepard, Teri; Smith, Maureen; Parmar, Rene S.
1997-01-01
The performance of 76 students (ages 10 to 15) with learning disabilities on four tasks of arithmetic computation within each of the four basic operations was examined. Tasks varied in difficulty level and number of strokes needed to complete all items. Intercorrelations between task sets and operations were examined as was the use of…
Non-symbolic arithmetic in adults and young children.
Barth, Hilary; La Mont, Kristen; Lipton, Jennifer; Dehaene, Stanislas; Kanwisher, Nancy; Spelke, Elizabeth
2006-01-01
Five experiments investigated whether adults and preschool children can perform simple arithmetic calculations on non-symbolic numerosities. Previous research has demonstrated that human adults, human infants, and non-human animals can process numerical quantities through approximate representations of their magnitudes. Here we consider whether these non-symbolic numerical representations might serve as a building block of uniquely human, learned mathematics. Both adults and children with no training in arithmetic successfully performed approximate arithmetic on large sets of elements. Success at these tasks did not depend on non-numerical continuous quantities, modality-specific quantity information, the adoption of alternative non-arithmetic strategies, or learned symbolic arithmetic knowledge. Abstract numerical quantity representations therefore are computationally functional and may provide a foundation for formal mathematics.
The Problem-Solving Nemesis: Mindless Manipulation.
ERIC Educational Resources Information Center
Hawkins, Vincent J.
1987-01-01
Indicates that only 21% of respondents (secondary school math teachers) used computer-assisted instruction for tutorial work, physical models to interpret abstract concepts, or real-life application of the arithmetic or algebraic manipulation. Recommends that creative teaching methods be applied to problem solving. (NKA)
Optimized 4-bit Quantum Reversible Arithmetic Logic Unit
NASA Astrophysics Data System (ADS)
Ayyoub, Slimani; Achour, Benslama
2017-08-01
Reversible logic has received great attention in recent years due to its ability to reduce power dissipation. The main goals in designing reversible logic are to decrease quantum cost, circuit depth, and the number of garbage outputs. The arithmetic logic unit (ALU) is an important part of the central processing unit (CPU), serving as the execution unit. This paper presents a complete design of a new reversible arithmetic logic unit (ALU) that can be part of a programmable reversible computing device such as a quantum computer. The proposed ALU is based on a reversible low-power control unit and a full adder with small performance parameters, built from double Peres gates. The presented ALU can produce the largest number (28) of arithmetic and logic functions and has the smallest quantum cost and delay compared with existing designs.
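For reference, the Peres gate underlying such adders can be checked in a few lines (truth-table behavior only; this says nothing about the quantum-cost figures above):

```python
def peres(a, b, c):
    """Peres gate: (a, b, c) -> (a, a XOR b, (a AND b) XOR c)."""
    return a, a ^ b, (a & b) ^ c

# Reversibility: the 8 input triples map to 8 distinct output triples.
triples = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
assert len({peres(*t) for t in triples}) == 8

# With c = 0 the gate acts as a half adder: sum and carry bits of a + b.
for a in (0, 1):
    for b in (0, 1):
        _, s, carry = peres(a, b, 0)
        assert 2 * carry + s == a + b
```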
ERIC Educational Resources Information Center
Brownell, William A.; And Others
Reported are the results and conclusions of an arithmetic investigation made in the schools of Scotland in the spring and fall of 1966. The first problem in this investigation was to ascertain which, if either, of two unlike programs of instruction was more effective in developing skill in computation. The second was to determine the value of an…
ERIC Educational Resources Information Center
Hankin, Edward K.; And Others
This technical progress report covers the first three months of a project to develop computer-assisted prevocational reading and arithmetic courses for disadvantaged youths and adults. During the first month of operation, project personnel concentrated on such administrative matters as training staff and preparing facilities. An arithmetic program…
High-precision arithmetic in mathematical physics
Bailey, David H.; Borwein, Jonathan M.
2015-05-12
For many scientific calculations, particularly those involving empirical data, IEEE 32-bit floating-point arithmetic produces results of sufficient accuracy, while for other applications IEEE 64-bit floating-point is more appropriate. But for some very demanding applications, even higher levels of precision are often required. This article discusses the challenge of high-precision computation in the context of mathematical physics, and highlights the facilities required to support future computation, in light of emerging developments in computer architecture.
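A sketch of the precision ladder the article describes, using Python's built-in arbitrary-precision types as stand-ins for dedicated high-precision libraries:

```python
from decimal import Decimal, getcontext
from fractions import Fraction

# In 64-bit binary floating point a sufficiently small term is absorbed.
x = (1.0 + 1e-17) - 1.0
assert x == 0.0                      # the 1e-17 is lost entirely

# With 40 significant digits the same computation keeps the small term.
getcontext().prec = 40
y = (Decimal(1) + Decimal("1e-17")) - Decimal(1)
assert y == Decimal("1e-17")

# Exact rational arithmetic avoids rounding altogether.
assert Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10)
assert 0.1 + 0.2 != 0.3              # whereas binary floats round
```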
Optical computation using residue arithmetic.
Huang, A; Tsunoda, Y; Goodman, J W; Ishihara, S
1979-01-15
Using residue arithmetic it is possible to perform additions, subtractions, multiplications, and polynomial evaluation without the necessity for carry operations. Calculations can, therefore, be performed in a fully parallel manner. Several different optical methods for performing residue arithmetic operations are described. A possible combination of such methods to form a matrix vector multiplier is considered. The potential advantages of optics in performing these kinds of operations are discussed.
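A sketch of the carry-free character of residue arithmetic, using an assumed modulus set (7, 11, 13): each residue channel computes independently (hence fully in parallel), and the Chinese Remainder Theorem recovers the conventional result.

```python
MODULI = (7, 11, 13)            # pairwise coprime; dynamic range = 1001

def to_residues(x):
    return tuple(x % m for m in MODULI)

def rns_mul(r1, r2):
    """Multiplication is digitwise and carry-free: each residue channel
    works independently of the others."""
    return tuple((a * b) % m for a, b, m in zip(r1, r2, MODULI))

def from_residues(r):
    """Chinese Remainder Theorem reconstruction (Python 3.8+ for pow)."""
    M = 1
    for m in MODULI:
        M *= m
    x = 0
    for ri, m in zip(r, MODULI):
        Mi = M // m
        x += ri * Mi * pow(Mi, -1, m)   # modular inverse of Mi mod m
    return x % M

assert from_residues(rns_mul(to_residues(23), to_residues(37))) == (23 * 37) % 1001
```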
Exploiting the chaotic behaviour of atmospheric models with reconfigurable architectures
NASA Astrophysics Data System (ADS)
Russell, Francis P.; Düben, Peter D.; Niu, Xinyu; Luk, Wayne; Palmer, T. N.
2017-12-01
Reconfigurable architectures are becoming mainstream: Amazon, Microsoft and IBM are supporting such architectures in their data centres. The computationally intensive nature of atmospheric modelling is an attractive target for hardware acceleration using reconfigurable computing. Performance of hardware designs can be improved through the use of reduced-precision arithmetic, but maintaining appropriate accuracy is essential. We explore reduced-precision optimisation for simulating chaotic systems, targeting atmospheric modelling, in which even minor changes in arithmetic behaviour will cause simulations to diverge quickly. The possibility of equally valid simulations having differing outcomes means that standard techniques for comparing numerical accuracy are inappropriate. We use the Hellinger distance to compare statistical behaviour between reduced-precision CPU implementations to guide reconfigurable designs of a chaotic system, then analyse accuracy, performance and power efficiency of the resulting implementations. Our results show that with only a limited loss in accuracy corresponding to less than 10% uncertainty in input parameters, the throughput and energy efficiency of a single-precision chaotic system implemented on a Xilinx Virtex-6 SX475T Field Programmable Gate Array (FPGA) can be more than doubled.
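The comparison metric used here is the Hellinger distance between output distributions; a sketch with invented histograms standing in for statistics gathered from full- and reduced-precision runs:

```python
import math

def hellinger(p, q):
    """Hellinger distance between two discrete distributions:
    H(p, q) = (1/sqrt(2)) * sqrt(sum_i (sqrt(p_i) - sqrt(q_i))^2).
    Ranges from 0 (identical) to 1 (disjoint support)."""
    return math.sqrt(sum((math.sqrt(a) - math.sqrt(b)) ** 2
                         for a, b in zip(p, q))) / math.sqrt(2)

# Invented histograms of a model variable from double- vs reduced-precision runs.
full    = [0.10, 0.40, 0.40, 0.10]
reduced = [0.12, 0.38, 0.39, 0.11]
assert hellinger(full, full) == 0.0
assert 0.0 < hellinger(full, reduced) < 1.0   # small statistical discrepancy
```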
Fault-tolerant arithmetic via time-shared TMR
NASA Astrophysics Data System (ADS)
Swartzlander, Earl E.
1999-11-01
Fault tolerance is increasingly important as society has come to depend on computers for more and more aspects of daily life. The current concern about the Y2K problems indicates just how much we depend on accurate computers. This paper describes work on time- shared TMR, a technique which is used to provide arithmetic operations that produce correct results in spite of circuit faults.
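The voting step at the heart of TMR can be sketched as follows (the time-sharing aspect of the paper's technique, which reuses one arithmetic unit across the three computations, is not modeled here):

```python
def tmr(replica_results):
    """Majority vote over three redundant computations: a single faulty
    replica is outvoted and the correct result survives."""
    a, b, c = replica_results
    if a == b or a == c:
        return a
    return b       # b == c, or all three disagree (uncorrectable fault)

correct = 6 * 7
assert tmr([correct, correct, correct]) == 42
assert tmr([correct, 41, correct]) == 42     # one faulty arithmetic unit
assert tmr([99, correct, correct]) == 42
```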
Category-theoretic models of algebraic computer systems
NASA Astrophysics Data System (ADS)
Kovalyov, S. P.
2016-01-01
A computer system is said to be algebraic if it contains nodes that implement unconventional computation paradigms based on universal algebra. A category-based approach to modeling such systems that provides a theoretical basis for mapping tasks to these systems' architecture is proposed. The construction of algebraic models of general-purpose computations involving conditional statements and overflow control is formally described by a reflector in an appropriate category of algebras. It is proved that this reflector takes the modulo ring whose operations are implemented in the conventional arithmetic processors to the Łukasiewicz logic matrix. Enrichments of the set of ring operations that form bases in the Łukasiewicz logic matrix are found.
Speech recognition for embedded automatic positioner for laparoscope
NASA Astrophysics Data System (ADS)
Chen, Xiaodong; Yin, Qingyun; Wang, Yi; Yu, Daoyin
2014-07-01
In this paper a novel speech recognition methodology based on the Hidden Markov Model (HMM) is proposed for an embedded Automatic Positioner for Laparoscope (APL), which includes a fixed-point ARM processor as its core. The APL system is designed to assist the doctor in laparoscopic surgery by implementing the doctor's vocal control of the laparoscope. Real-time response to voice commands demands an efficient speech recognition algorithm for the APL. In order to reduce computation cost without significant loss in recognition accuracy, both arithmetic and algorithmic optimizations are applied in the presented method. First, relying mainly on arithmetic optimizations, a fixed-point frontend for speech feature analysis is built to suit the ARM processor's characteristics. Then a fast likelihood computation algorithm is used to reduce the computational complexity of the HMM-based recognition algorithm. The experimental results show that the method keeps recognition time within 0.5 s, with accuracy higher than 99%, demonstrating its ability to achieve real-time vocal control of the APL.
ERIC Educational Resources Information Center
Tolar, Tammy Daun; Lederberg, Amy R.; Fletcher, Jack M.
2009-01-01
The goal of this study was to develop and evaluate a structural model of the relations among cognitive abilities and arithmetic skills and college students' algebra achievement. The model of algebra achievement was compared to a model of performance on the Scholastic Assessment in Mathematics (SAT-M) to determine whether the pattern of relations…
Spatio-Temporal Data Analysis at Scale Using Models Based on Gaussian Processes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stein, Michael
Gaussian processes are the most commonly used statistical model for spatial and spatio-temporal processes that vary continuously. They are broadly applicable in the physical sciences and engineering and are also frequently used to approximate the output of complex computer models, deterministic or stochastic. We undertook research related to theory, computation, and applications of Gaussian processes as well as some work on estimating extremes of distributions for which a Gaussian process assumption might be inappropriate. Our theoretical contributions include the development of new classes of spatial-temporal covariance functions with desirable properties and new results showing that certain covariance models lead to predictions with undesirable properties. To understand how Gaussian process models behave when applied to deterministic computer models, we derived what we believe to be the first significant results on the large sample properties of estimators of parameters of Gaussian processes when the actual process is a simple deterministic function. Finally, we investigated some theoretical issues related to maxima of observations with varying upper bounds and found that, depending on the circumstances, standard large sample results for maxima may or may not hold. Our computational innovations include methods for analyzing large spatial datasets when observations fall on a partially observed grid and methods for estimating parameters of a Gaussian process model from observations taken by a polar-orbiting satellite. In our application of Gaussian process models to deterministic computer experiments, we carried out some matrix computations that would have been infeasible using even extended precision arithmetic by focusing on special cases in which all elements of the matrices under study are rational and using exact arithmetic.
The applications we studied include total column ozone as measured from a polar-orbiting satellite, sea surface temperatures over the Pacific Ocean, and annual temperature extremes at a site in New York City. In each of these applications, our theoretical and computational innovations were directly motivated by the challenges posed by analyzing these and similar types of data.
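The closing point of this abstract, that matrix computations infeasible even in extended-precision floating point become feasible when every matrix element is rational and exact arithmetic is used, can be illustrated with a minimal Python sketch. The Hilbert matrix here is our own example; the report does not specify which matrices were studied.

```python
from fractions import Fraction

def hilbert(n):
    """n-by-n Hilbert matrix with exact rational entries H[i][j] = 1/(i+j+1)."""
    return [[Fraction(1, i + j + 1) for j in range(n)] for i in range(n)]

def det_exact(mat):
    """Determinant via Gaussian elimination on Fractions: every
    intermediate quantity is an exact rational, so no rounding occurs."""
    m = [row[:] for row in mat]
    n = len(m)
    det = Fraction(1)
    for k in range(n):
        pivot = next((r for r in range(k, n) if m[r][k] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != k:
            m[k], m[pivot] = m[pivot], m[k]
            det = -det
        det *= m[k][k]
        for r in range(k + 1, n):
            factor = m[r][k] / m[k][k]
            for c in range(k, n):
                m[r][c] -= factor * m[k][c]
    return det

# The 10x10 Hilbert matrix is so ill-conditioned that its tiny
# determinant is swamped by rounding error in double precision,
# yet exact rational arithmetic computes it precisely.
d10 = det_exact(hilbert(10))
```

The same pattern scales to any matrix operation expressible in rational arithmetic, at the cost of growing numerators and denominators.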
Computer Architecture for Energy Efficient SFQ
2014-08-27
IBM Corporation (T.J. Watson Research Laboratory), 1101 Kitchawan Road, Yorktown Heights, NY 10598. ABSTRACT: ...accomplished during this ARO-sponsored project at IBM Research to identify and model an energy efficient SFQ-based computer architecture. The... IBM Windsor Blue (WB), illustrated schematically in Figure 2. The basic building block of WB is a "tile" comprised of a 64-bit arithmetic logic unit
Basic Techniques in Environmental Simulation.
1982-07-01
the developer is liable for all necessary changes in the model or its supporting computer software. After the 90-day warranty expires, the user... processing unit, that part of a computer which accomplishes arithmetic and logical operations. DCFLOS: dynamic cloud-free line-of-sight, a simulation... Software Development... Operational Environment, Interfaces, and Constraints... Effectiveness Evaluation, Value Analysis, and
NASA Astrophysics Data System (ADS)
Coggins, Porter E.
2015-04-01
The purpose of this paper is (1) to present how general education elementary school age students constructed computer passwords using digital root sums and second-order arithmetic sequences, (2) to argue that computer password construction can be used as an engaging introduction to generate interest in elementary school students to study mathematics related to computer science, and (3) to share additional mathematical ideas accessible to elementary school students that can be used to create computer passwords. This paper serves to fill a current gap in the literature regarding the integration of mathematical content accessible to upper elementary school students and aspects of computer science in general, and computer password construction in particular. In addition, the protocols presented here can serve as a hook to generate further interest in mathematics and computer science. Students learned to create a random-looking computer password by using biometric measurements of their shoe size and height, and their age in months, to generate a second-order arithmetic sequence, then converting the resulting numbers into characters that became their computer passwords. This password protocol can be used to introduce students to good computer password habits that can serve as a foundation for a life-long awareness of data security. A refinement of the password protocol is also presented.
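The abstract does not reproduce the protocol in full; the following sketch illustrates how digital root sums and a second-order arithmetic sequence (one whose second difference is constant) can turn three measurements into a random-looking password. The function names, seeding choices, and digit-to-letter mapping are our own assumptions, not the paper's.

```python
def digital_root(n):
    """Sum decimal digits repeatedly until a single digit remains."""
    while n >= 10:
        n = sum(int(d) for d in str(n))
    return n

def make_password(shoe_size, height_cm, age_months, length=8):
    """Hypothetical protocol sketch: seed a second-order arithmetic
    sequence (constant second difference) from the measurements, then
    map each term's digital root to a letter."""
    term = shoe_size
    step = digital_root(height_cm)
    second_diff = digital_root(age_months)
    letters = []
    for _ in range(length):
        r = digital_root(term)            # a digit from 1 to 9 here
        letters.append("abcdefghij"[r])
        term += step
        step += second_diff               # second-order: the step itself grows
    return "".join(letters)

pw = make_password(shoe_size=38, height_cm=142, age_months=130)
```

Because the sequence grows quadratically and the digital root is a modular (mod 9) reduction, the resulting letters look unstructured while remaining fully reproducible from the three seeds.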
Moore, R. Davis; Drollette, Eric S.; Scudder, Mark R.; Bharij, Aashiv; Hillman, Charles H.
2014-01-01
The current study investigated the influence of cardiorespiratory fitness on arithmetic cognition in forty 9–10 year old children. Measures included a standardized mathematics achievement test to assess conceptual and computational knowledge, self-reported strategy selection, and an experimental arithmetic verification task (including small and large addition problems), which afforded the measurement of event-related brain potentials (ERPs). No differences in math achievement were observed as a function of fitness level, but all children performed better on math concepts relative to math computation. Higher fit children reported using retrieval more often to solve large arithmetic problems, relative to lower fit children. During the arithmetic verification task, higher fit children exhibited superior performance for large problems, as evidenced by greater d' scores, while all children exhibited decreased accuracy and longer reaction time for large relative to small problems, and incorrect relative to correct solutions. On the electrophysiological level, modulations of early (P1, N170) and late ERP components (P3, N400) were observed as a function of problem size and solution correctness. Higher fit children exhibited selective modulations for N170, P3, and N400 amplitude relative to lower fit children, suggesting that fitness influences symbolic encoding, attentional resource allocation and semantic processing during arithmetic tasks. The current study contributes to the fitness-cognition literature by demonstrating that the benefits of cardiorespiratory fitness extend to arithmetic cognition, which has important implications for the educational environment and the context of learning. PMID:24829556
NASA Astrophysics Data System (ADS)
Elshall, A. S.; Ye, M.; Niu, G. Y.; Barron-Gafford, G.
2016-12-01
Bayesian multimodel inference is increasingly being used in hydrology. Estimating Bayesian model evidence (BME) is of central importance in many Bayesian multimodel analyses such as Bayesian model averaging and model selection. BME is the overall probability of the model in reproducing the data, accounting for the trade-off between goodness-of-fit and model complexity. Yet estimating BME is challenging, especially for high dimensional problems with complex sampling spaces. Estimating BME using Monte Carlo numerical methods is preferred, as these methods yield higher accuracy than semi-analytical solutions (e.g. Laplace approximations, BIC, KIC, etc.). However, numerical methods are prone to numerical demons arising from underflow and round-off errors. Although a few studies have alluded to this issue, to our knowledge this is the first study that illustrates these numerical demons. We show that finite-precision arithmetic can impose a threshold on likelihood values and the Metropolis acceptance ratio, which results in trimming parameter regions (when the likelihood function is less than the smallest floating-point number that a computer can represent) and corrupting the empirical measures of the random states of the MCMC sampler (when using the log-likelihood function). We consider two of the most powerful numerical estimators of BME, namely the path sampling method of thermodynamic integration (TI) and the importance sampling method of steppingstone sampling (SS). We also consider the two most widely used numerical estimators, the prior sampling arithmetic mean (AM) and the posterior sampling harmonic mean (HM). We investigate the vulnerability of these four estimators to the numerical demons. Interestingly, the most biased estimator, namely the HM, turned out to be the least vulnerable.
While it is generally assumed that AM is a bias-free estimator that will approximate the true BME given sufficient computational effort, we show that arithmetic underflow can hamper AM, resulting in severe underestimation of BME. TI turned out to be the most vulnerable, resulting in BME overestimation. Finally, we show how SS can be largely invariant to rounding errors, yielding the most accurate and computationally efficient results. These results are useful for Monte Carlo simulations that estimate Bayesian model evidence.
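The underflow mechanism the authors describe, likelihood values smaller than the smallest representable double collapsing to exactly zero, and the standard log-sum-exp remedy can be sketched as follows. This is an illustration of the general numerical issue, not the authors' code.

```python
import math

def mean_likelihood_naive(loglikes):
    """Arithmetic-mean evidence estimator computed naively: math.exp()
    underflows to 0.0 once log-likelihoods drop below about -745 in
    IEEE double precision."""
    return sum(math.exp(ll) for ll in loglikes) / len(loglikes)

def log_mean_likelihood_lse(loglikes):
    """Same estimator in log space via log-sum-exp: shift by the
    largest log-likelihood so at least one term is exp(0) = 1."""
    m = max(loglikes)
    return m + math.log(sum(math.exp(ll - m) for ll in loglikes) / len(loglikes))

loglikes = [-760.0, -761.0, -762.0]       # all below the underflow threshold
naive = mean_likelihood_naive(loglikes)   # silently underflows to 0.0
stable = log_mean_likelihood_lse(loglikes)  # finite log-evidence near -760
```

The naive estimate is exactly zero, i.e. the "trimmed parameter region" effect; the log-sum-exp form preserves the evidence in log space.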
Träff, Ulf; Olsson, Linda; Skagerlund, Kenny; Östergren, Rickard
2018-03-01
A modified pathways to mathematics model was used to examine the cognitive mechanisms underlying arithmetic skills in third graders. A total of 269 children were assessed on tasks tapping the four pathways and arithmetic skills. A path analysis showed that symbolic number processing was directly supported by the linguistic and approximate quantitative pathways. The direct contribution from the four pathways to arithmetic proficiency varied; the linguistic pathway supported single-digit arithmetic and word problem solving, whereas the approximate quantitative pathway supported only multi-digit calculation. The spatial processing and verbal working memory pathways supported only arithmetic word problem solving. The notion of hierarchical levels of arithmetic was supported by the results, and the different levels were supported by different constellations of pathways. However, the strongest support for the hierarchical levels of arithmetic was provided by the proximal arithmetic skills. Copyright © 2017 Elsevier Inc. All rights reserved.
Towards constructing multi-bit binary adder based on Belousov-Zhabotinsky reaction
NASA Astrophysics Data System (ADS)
Zhang, Guo-Mao; Wong, Ieong; Chou, Meng-Ta; Zhao, Xin
2012-04-01
It has been proposed that spatial excitable media can perform a wide range of computational operations, from image processing, to path planning, to logical and arithmetic computations. Experimental realizations of chemical logical and arithmetic computation have mainly been concerned with single, simple logical functions. In this study, based on the Belousov-Zhabotinsky reaction, we performed simulations toward the realization of a more complex operation, the binary adder. Combining some of the existing functional structures that have been verified experimentally, we designed a planar geometrical binary adder chemical device. Through numerical simulations, we first demonstrated that the device can implement the function of a single-bit full binary adder. We then showed that the binary adder units can be extended in the plane and coupled together to realize a two-bit, or even multi-bit, binary adder. The realization of chemical adders can guide the construction of other sophisticated arithmetic functions, ultimately leading to the implementation of a chemical computer and other intelligent systems.
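The Boolean specification that the simulated chemical device must reproduce is the standard single-bit full adder, extended to multi-bit addition by coupling units. A reference sketch of that specification (not a model of the BZ dynamics themselves):

```python
def full_adder(a, b, cin):
    """Single-bit full adder: returns (sum bit, carry-out bit)."""
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def ripple_add(x_bits, y_bits):
    """Multi-bit adder built by coupling full-adder units, least
    significant bit first, mirroring the plane-extension in the paper."""
    carry, out = 0, []
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out + [carry]

# 3 (bits [1, 1], LSB first) + 1 (bits [1, 0]) = 4 (bits [0, 0, 1])
bits = ripple_add([1, 1], [1, 0])
```

Each chemical adder unit must realize the two outputs above; chaining n units with carry coupling yields an n-bit adder.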
NASA Technical Reports Server (NTRS)
Jones, J. R.; Bodenheimer, R. E.
1976-01-01
A simple programmable Tse processor organization and arithmetic operations necessary for extraction of the desired topological information are described. Hardware additions to this organization are discussed along with trade-offs peculiar to the tse computing concept. An improved organization is presented along with the complementary software for the various arithmetic operations. The performance of the two organizations is compared in terms of speed, power, and cost. Software routines developed to extract the desired information from an image are included.
Construction of Rational Maps on the Projective Line with Given Dynamical Structure
2016-05-11
This is a paper in arithmetic dynamics, a relatively young field at the intersection of the older studies of number theory... computers became available. The exponentially increased computational power and access to larger data sets rocketed the field forward, allowing... theory and dynamical systems, have come together to create a new field: arithmetic dynamics. Relative to the study of mathematics as a whole
Learning, Realizability and Games in Classical Arithmetic
NASA Astrophysics Data System (ADS)
Aschieri, Federico
2010-12-01
In this dissertation we provide mathematical evidence that the concept of learning can be used to give a new and intuitive computational semantics of classical proofs in various fragments of Predicative Arithmetic. First, we extend Kreisel's modified realizability to a classical fragment of first order Arithmetic, Heyting Arithmetic plus EM1 (the excluded middle axiom restricted to Sigma^0_1 formulas). We introduce a new realizability semantics we call "Interactive Learning-Based Realizability". Our realizers are self-correcting programs, which learn from their errors and evolve through time. Second, we extend the class of learning based realizers to a classical version PCFclass of PCF and then compare the resulting notion of realizability with Coquand's game semantics, proving a full soundness and completeness result. In particular, we show there is a one-to-one correspondence between realizers and recursive winning strategies in the 1-Backtracking version of Tarski games. Third, we provide a complete and fully detailed constructive analysis of learning as it arises in learning based realizability for HA+EM1, Avigad's update procedures, and the epsilon substitution method for Peano Arithmetic PA. We present new constructive techniques to bound the length of learning processes and apply them to reprove, by means of our theory, the classic result of Gödel that the provably total functions of PA can be represented in Gödel's system T. Last, we give an axiomatization of the kind of learning that is needed to computationally interpret Predicative classical second order Arithmetic. Our work extends Avigad's and generalizes the concept of update procedure to the transfinite case. Transfinite update procedures have to learn values of transfinite sequences of non-computable functions in order to extract witnesses from classical proofs.
Arithmetic Data Cube as a Data Intensive Benchmark
NASA Technical Reports Server (NTRS)
Frumkin, Michael A.; Shabanov, Leonid
2003-01-01
Data movement across computational grids and across the memory hierarchy of individual grid machines is known to be a limiting factor for applications involving large data sets. In this paper we introduce the Data Cube Operator on an Arithmetic Data Set, which we call the Arithmetic Data Cube (ADC). We propose to use the ADC to benchmark grid capabilities to handle large distributed data sets. The ADC stresses all levels of grid memory by producing the 2^d views of an Arithmetic Data Set of d-tuples described by a small number of parameters. We control the data intensity of the ADC by controlling the sizes of the views through the choice of the tuple parameters.
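The 2^d figure arises because each subset of the d attributes defines one group-by view of the cube. A minimal sketch of view generation (the tuple layout, with the measure as the last field, and the sum aggregate are our own assumptions):

```python
from itertools import combinations
from collections import defaultdict

def all_views(tuples, d):
    """Compute all 2^d group-by views (one per attribute subset) of a
    set of d-tuples whose last field is the numeric measure."""
    views = {}
    for k in range(d + 1):
        for attrs in combinations(range(d), k):
            agg = defaultdict(int)
            for row in tuples:
                key = tuple(row[i] for i in attrs)
                agg[key] += row[-1]          # sum the measure per group
            views[attrs] = dict(agg)
    return views

# 2 attributes plus a measure, so 2^2 = 4 views
data = [(1, 'a', 10), (1, 'b', 20), (2, 'a', 30)]
v = all_views(data, 2)
```

Materializing every view touches the data 2^d times, which is exactly why the operator stresses all levels of the memory hierarchy.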
Siemann, Julia; Petermann, Franz
2018-01-01
This review reconciles past findings on numerical processing with key assumptions of the most predominant model of arithmetic in the literature, the Triple Code Model (TCM). This is implemented by reporting diverse findings in the literature ranging from behavioral studies on basic arithmetic operations over neuroimaging studies on numerical processing to developmental studies concerned with arithmetic acquisition, with a special focus on developmental dyscalculia (DD). We evaluate whether these studies corroborate the model and discuss possible reasons for contradictory findings. A separate section is dedicated to the transfer of TCM to arithmetic development and to alternative accounts focusing on developmental questions of numerical processing. We conclude with recommendations for future directions of arithmetic research, raising questions that require answers in models of healthy as well as abnormal mathematical development. This review assesses the leading model in the field of arithmetic processing (Triple Code Model) by presenting knowledge from interdisciplinary research. It assesses the observed contradictory findings and integrates the resulting opposing viewpoints. The focus is on the development of arithmetic expertise as well as abnormal mathematical development. The original aspect of this article is that it points to a gap in research on these topics and provides possible solutions for future models. Copyright © 2017 Elsevier Ltd. All rights reserved.
Unified commutation-pruning technique for efficient computation of composite DFTs
NASA Astrophysics Data System (ADS)
Castro-Palazuelos, David E.; Medina-Melendrez, Modesto Gpe.; Torres-Roman, Deni L.; Shkvarko, Yuriy V.
2015-12-01
An efficient computation of a composite length discrete Fourier transform (DFT), as well as a fast Fourier transform (FFT), of both time and space data sequences in uncertain (non-sparse or sparse) computational scenarios requires specific processing algorithms. Traditional algorithms typically employ pruning methods without any commutations, which prevents them from attaining the potential computational efficiency. In this paper, we propose an alternative unified approach with automatic commutations between three computational modalities aimed at efficient computation of pruned DFTs adapted for variable composite lengths of non-sparse input-output data. The first modality is an implementation of the direct computation of a composite length DFT, the second employs the second-order recursive filtering method, and the third performs the new pruned decomposed transform. The pruned decomposed transform algorithm performs decimation in the time or space (DIT) data acquisition domain and, then, decimation in frequency (DIF). The unified combination of these three algorithms is referred to as the DFTCOMM technique. Based on the treatment of the combinational-type hypotheses testing optimization problem of preferable allocations between all feasible commuting-pruning modalities, we have found the global optimal solution to the pruning problem, one that always requires fewer or, at most, the same number of arithmetic operations as any other feasible modality. The DFTCOMM method thereby outperforms the existing competing pruning techniques reported in the literature in the sense of attainable savings in the number of required arithmetic operations. Finally, we provide a comparison of the DFTCOMM with the recently developed sparse fast Fourier transform (SFFT) algorithmic family.
We show that, in sensing scenarios with sparse or non-sparse data Fourier spectra, the DFTCOMM technique manifests robustness against such model uncertainties, in the sense of insensitivity to sparsity/non-sparsity restrictions and to the variability of the operating parameters.
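Pruning exploits the fact that only part of the input or output is needed. Its simplest form, evaluating a single DFT bin directly in O(N) rather than computing all N bins with an FFT, can be sketched as follows (an illustration of the idea, not the DFTCOMM algorithm):

```python
import cmath

def dft_bin(x, k):
    """Directly evaluate one output bin X[k] of an N-point DFT in O(N):
    output pruning in its simplest form."""
    n_len = len(x)
    w = cmath.exp(-2j * cmath.pi * k / n_len)
    return sum(xn * w ** n for n, xn in enumerate(x))

# A pure tone at bin 3 of a 16-point DFT concentrates all energy in X[3].
x = [cmath.exp(2j * cmath.pi * 3 * n / 16) for n in range(16)]
peak = dft_bin(x, 3)    # close to 16
other = dft_bin(x, 5)   # close to 0
```

When only a few of the N bins are needed, N such direct evaluations beat a full FFT; commutation-pruning schemes like DFTCOMM automate the choice between such modalities.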
ERIC Educational Resources Information Center
Gerhardt, Ira
2015-01-01
An experiment was conducted over three recent semesters of an introductory calculus course to test whether it was possible to quantify the effect that difficulty with basic algebraic and arithmetic computation had on individual performance. Points lost during the term were classified as being due to either algebraic and arithmetic mistakes…
The Performance of Chinese Primary School Students on Realistic Arithmetic Word Problems
ERIC Educational Resources Information Center
Xin, Ziqiang; Lin, Chongde; Zhang, Li; Yan, Rong
2007-01-01
Compared with standard arithmetic word problems demanding only the direct use of number operations and computations, realistic problems are harder to solve because children need to incorporate "real-world" knowledge into their solutions. Using the realistic word problem testing materials developed by Verschaffel, De Corte, and Lasure…
Mathematics: Essential to Marketing. Student's Manual and Teacher's Guide.
ERIC Educational Resources Information Center
Helton, Betty G.; Griffin, Jennie
This document contains both a student's manual and a teacher's guide for high school mathematics essential to marketing. The student's manual contains 34 assignments within the following 11 units: (1) arithmetic fundamentals; (2) application of arithmetic fundamentals; (3) cashiering; (4) inventory procedures; (5) invoices; (6) computing employee…
CADNA: a library for estimating round-off error propagation
NASA Astrophysics Data System (ADS)
Jézéquel, Fabienne; Chesneaux, Jean-Marie
2008-06-01
The CADNA library enables one to estimate round-off error propagation using a probabilistic approach. With CADNA the numerical quality of any simulation program can be controlled. Furthermore, by detecting all the instabilities which may occur at run time, a numerical debugging of the user code can be performed. CADNA provides new numerical types on which round-off errors can be estimated. Slight modifications are required to control a code with CADNA, mainly changes in variable declarations, input and output. This paper describes the features of the CADNA library and shows how to interpret the information it provides concerning round-off error propagation in a code.
Program summary
Program title: CADNA
Catalogue identifier: AEAT_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAT_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 53 420
No. of bytes in distributed program, including test data, etc.: 566 495
Distribution format: tar.gz
Programming language: Fortran
Computer: PC running LINUX with an i686 or an ia64 processor, UNIX workstations including SUN, IBM
Operating system: LINUX, UNIX
Classification: 4.14, 6.5, 20
Nature of problem: A simulation program which uses floating-point arithmetic generates round-off errors, due to the rounding performed at each assignment and at each arithmetic operation. Round-off error propagation may invalidate the result of a program. The CADNA library enables one to estimate round-off error propagation in any simulation program and to detect all numerical instabilities that may occur at run time.
Solution method: The CADNA library [1] implements Discrete Stochastic Arithmetic [2-4], which is based on a probabilistic model of round-off errors.
The program is run several times with a random rounding mode, generating different results each time. From this set of results, CADNA estimates the number of exact significant digits in the result that would have been computed with standard floating-point arithmetic.
Restrictions: CADNA requires a Fortran 90 (or newer) compiler. In the program to be linked with the CADNA library, round-off errors on complex variables cannot be estimated. Furthermore, array functions such as product or sum must not be used. Only the arithmetic operators and the abs, min, max and sqrt functions can be used for arrays.
Running time: The version of a code which uses CADNA runs at least three times slower than its floating-point version. This cost depends on the computer architecture and can be higher if the detection of numerical instabilities is enabled. In this case, the cost may be related to the number of instabilities detected.
References:
[1] The CADNA library, http://www.lip6.fr/cadna.
[2] J.-M. Chesneaux, L'arithmétique Stochastique et le Logiciel CADNA, Habilitation à diriger des recherches, Université Pierre et Marie Curie, Paris, 1995.
[3] J. Vignes, A stochastic arithmetic for reliable scientific computation, Math. Comput. Simulation 35 (1993) 233-261.
[4] J. Vignes, Discrete stochastic arithmetic for validating results of numerical software, Numer. Algorithms 37 (2004) 377-390.
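The principle of Discrete Stochastic Arithmetic, run the computation several times under random rounding and take the digits common to all runs as significant, can be caricatured in a few lines. This is a loose illustration only; CADNA's actual rounding-mode control and estimator are more careful.

```python
import math
import random

def random_round(value, rel=1e-15):
    """Crude stand-in for a random rounding mode: perturb each result
    by a random relative error of roughly one ulp."""
    return value * (1.0 + random.uniform(-rel, rel))

def common_digits(samples):
    """Estimate how many significant decimal digits the randomly
    rounded runs share, the quantity a CADNA-style tool reports."""
    mean = sum(samples) / len(samples)
    spread = max(abs(s - mean) for s in samples)
    if spread == 0.0:
        return 15          # runs agree to full double precision
    return max(0, int(math.log10(abs(mean) / spread)))

random.seed(0)
# Run the same summation three times under random rounding:
runs = [sum(random_round(0.1) for _ in range(1000)) for _ in range(3)]
digits = common_digits(runs)
```

A numerically unstable computation would show the runs diverging in their leading digits, driving the estimated digit count toward zero.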
NASA Astrophysics Data System (ADS)
Chung, Kun-Jen
2012-08-01
Cardenas-Barron [Cardenas-Barron, L.E. (2010), 'A Simple Method to Compute Economic Order Quantities: Some Observations', Applied Mathematical Modelling, 34, 1684-1688] indicates that there are several functions for which the arithmetic-geometric mean method (AGM) does not give the minimum. This article presents another situation revealing that the AGM inequality may be invalid for locating the optimal solution in Teng, Chen, and Goyal [Teng, J.T., Chen, J., and Goyal, S.K. (2009), 'A Comprehensive Note on: An Inventory Model under Two Levels of Trade Credit and Limited Storage Space Derived without Derivatives', Applied Mathematical Modelling, 33, 4388-4396], Teng and Goyal [Teng, J.T., and Goyal, S.K. (2009), 'Comment on "Optimal Inventory Replenishment Policy for the EPQ Model under Trade Credit Derived without Derivatives"', International Journal of Systems Science, 40, 1095-1098] and Hsieh, Chang, Weng, and Dye [Hsieh, T.P., Chang, H.J., Weng, M.W., and Dye, C.Y. (2008), 'A Simple Approach to an Integrated Single-vendor Single-buyer Inventory System with Shortage', Production Planning and Control, 19, 601-604]. The main purpose of this article is therefore to adopt the calculus approach, not only to overcome the shortcomings of the arithmetic-geometric mean method of Teng et al. (2009), Teng and Goyal (2009) and Hsieh et al. (2008), but also to develop complete solution procedures for them.
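The AGM method applies only to cost functions of the form f(Q) = a/Q + bQ, where the AM-GM inequality a/Q + bQ >= 2*sqrt(ab) gives the minimizer in closed form; when the cost function falls outside that form (the situation criticized above), the shortcut fails and calculus is needed. A sketch of the AGM shortcut on the classic EOQ cost function, with illustrative numbers of our own choosing:

```python
import math

def q_agm(a, b):
    """AM-GM minimizer of f(Q) = a/Q + b*Q on Q > 0: since
    a/Q + b*Q >= 2*sqrt(a*b), the minimum is at Q* = sqrt(a/b)."""
    return math.sqrt(a / b)

def q_grid(f, lo, hi, steps=10**5):
    """Brute-force numerical minimizer, standing in here for the
    calculus (first-derivative) approach."""
    return min((lo + (hi - lo) * i / steps for i in range(1, steps)),
               key=f)

# Classic EOQ instance: demand D=1000, ordering cost K=50, holding
# cost h=4, so f(Q) = D*K/Q + (h/2)*Q.
a, b = 1000 * 50, 4 / 2
q_star = q_agm(a, b)                          # sqrt(25000), about 158.11
q_check = q_grid(lambda q: a / q + b * q, 1.0, 500.0)
```

The two agree here precisely because f has the required two-term form; the papers under discussion involve cost functions where the AM-GM equality point is not attainable, which is what invalidates the shortcut.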
Application of a simple cerebellar model to geologic surface mapping
Hagens, A.; Doveton, J.H.
1991-01-01
Neurophysiological research into the structure and function of the cerebellum has inspired computational models that simulate information processing associated with coordination and motor movement. The cerebellar model arithmetic computer (CMAC) has a design structure which makes it readily applicable as an automated mapping device that "senses" a surface, based on a sample of discrete observations of surface elevation. The model operates as an iterative learning process, where cell weights are continuously modified by feedback to improve surface representation. The storage requirements are substantially less than those of a conventional memory allocation, and the model is extended easily to mapping in multidimensional space, where the memory savings are even greater. © 1991.
The computationalist reformulation of the mind-body problem.
Marchal, Bruno
2013-09-01
Computationalism, or digital mechanism, or simply mechanism, is a hypothesis in cognitive science according to which we can be emulated by a computer without changing our private subjective feeling. We provide a weaker form of that hypothesis, weaker than the one commonly referred to in the (vast) literature, and show how to recast the mind-body problem in that setting. We show that such a mechanist hypothesis does not solve the mind-body problem per se, but does help to partially reduce the mind-body problem into another problem which admits a formulation in pure arithmetic. We explain that once we adopt the computationalist hypothesis, which is a form of mechanist assumption, we have to derive from it how our belief in the physical laws can emerge from *only* arithmetic and classical computer science. In that sense we reduce the mind-body problem to a problem about the appearance of a body in computer science, or in arithmetic. The general shape of the possible solution of that subproblem, if it exists, is shown to be closer to "Platonist or neoplatonist theology" than to the "Aristotelian theology". In Plato's theology, the physical or observable reality is only the shadow of a vaster hidden nonphysical and nonobservable, perhaps mathematical, reality. The main point is that the derivation is constructive: it provides the technical means to derive physics from arithmetic, and this makes the computationalist hypothesis empirically testable, and thus scientific in the Popperian analysis of science. In case computationalism is wrong, the derivation leads to a procedure for measuring "our local degree of noncomputationalism". Copyright © 2013 Elsevier Ltd. All rights reserved.
Visuospatial and verbal memory in mental arithmetic.
Clearman, Jack; Klinger, Vojtěch; Szűcs, Dénes
2017-09-01
Working memory allows complex information to be remembered and manipulated over short periods of time. Correlations between working memory and mathematics achievement have been shown across the lifespan. However, only a few studies have examined the potentially distinct contributions of domain-specific visuospatial and verbal working memory resources in mental arithmetic computation. Here we aimed to fill this gap in a series of six experiments pairing addition and subtraction tasks with verbal and visuospatial working memory and interference tasks. In general, we found higher levels of interference between mental arithmetic and visuospatial working memory tasks than between mental arithmetic and verbal working memory tasks. Additionally, we found that interference that matched the working memory domain of the task (e.g., verbal task with verbal interference) lowered working memory performance more than mismatched interference (verbal task with visuospatial interference). Findings suggest that mental arithmetic relies on domain-specific working memory resources.
An Experimental Comparison of an Intrinsically Programed Text and a Narrative Text.
ERIC Educational Resources Information Center
Senter, R. J.; And Others
The study compared three methods of instruction in binary and octal arithmetic, i.e., (1) Norman Crowder's branched programed text, "The Arithmetic of Computers," (2) another version of this text modified so that subjects could not see the instructional material while answering "branching" questions, and (3) a narrative text…
Using Microcomputers To Help Learning Disabled Student with Arithmetic Difficulties.
ERIC Educational Resources Information Center
Brevil, Margarette
The use of microcomputers to help the learning disabled increase their arithmetic skills is examined. The microcomputer should be used to aid the learning disabled student to practice the concepts taught by the teacher. Computer-aided instruction such as drill and practice may help the learning disabled student because it gives immediate feedback…
Trinary signed-digit arithmetic using an efficient encoding scheme
NASA Astrophysics Data System (ADS)
Salim, W. Y.; Alam, M. S.; Fyath, R. S.; Ali, S. A.
2000-09-01
The trinary signed-digit (TSD) number system is of interest for ultrafast optoelectronic computing systems since it permits parallel carry-free addition and borrow-free subtraction of two arbitrary length numbers in constant time. In this paper, a simple coding scheme is proposed to encode the decimal number directly into the TSD form. The coding scheme enables one to perform a parallel one-step TSD arithmetic operation. The proposed coding scheme uses only a 5-combination coding table instead of the 625-combination table recently reported for the recoded TSD arithmetic technique.
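The 5-combination coding table itself is not reproduced in the abstract; what can be sketched is the signed-digit representation that underlies TSD arithmetic, radix 3 with digits -1, 0, 1 (balanced ternary). This is our own illustration of the number system, not the authors' optical encoding.

```python
def to_tsd(n):
    """Encode an integer into trinary signed-digit (balanced ternary)
    form, least significant digit first, digits drawn from {-1, 0, 1}."""
    digits = []
    while n != 0:
        r = n % 3
        if r == 2:                 # represent remainder 2 as 3 - 1
            digits.append(-1)
            n = n // 3 + 1
        else:
            digits.append(r)
            n //= 3
    return digits or [0]

def from_tsd(digits):
    """Decode a TSD digit list back to an integer, for checking."""
    return sum(d * 3 ** i for i, d in enumerate(digits))
```

The redundancy of signed digits is what allows the optical system to add in parallel without carry propagation; negative numbers need no separate sign bit, as the assertion on -4 below shows.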
One-step trinary signed-digit arithmetic using an efficient encoding scheme
NASA Astrophysics Data System (ADS)
Salim, W. Y.; Fyath, R. S.; Ali, S. A.; Alam, Mohammad S.
2000-11-01
The trinary signed-digit (TSD) number system is of interest for ultrafast optoelectronic computing systems since it permits parallel carry-free addition and borrow-free subtraction of two arbitrary length numbers in constant time. In this paper, a simple coding scheme is proposed to encode the decimal number directly into the TSD form. The coding scheme enables one to perform a parallel one-step TSD arithmetic operation. The proposed coding scheme uses only a 5-combination coding table instead of the 625-combination table recently reported for the recoded TSD arithmetic technique.
Basic mathematical function libraries for scientific computation
NASA Technical Reports Server (NTRS)
Galant, David C.
1989-01-01
Ada packages implementing selected mathematical functions for the support of scientific and engineering applications were written. The packages provide the Ada programmer with the mathematical function support found in the languages Pascal and FORTRAN as well as an extended precision arithmetic and a complete complex arithmetic. The algorithms used are fully described and analyzed. Implementation assumes that the Ada type FLOAT objects fully conform to the IEEE 754-1985 standard for single binary floating-point arithmetic, and that INTEGER objects are 32-bit entities. Codes for the Ada packages are included as appendixes.
Network-Physics(NP) Bec DIGITAL(#)-VULNERABILITY Versus Fault-Tolerant Analog
NASA Astrophysics Data System (ADS)
Alexander, G. K.; Hathaway, M.; Schmidt, H. E.; Siegel, E.
2011-03-01
Siegel[AMS Joint Mtg.(2002)-Abs.973-60-124] digits logarithmic-(Newcomb(1881)-Weyl(1914; 1916)-Benford(1938)-"NeWBe"/"OLDbe")-law algebraic-inversion to ONLY BEQS BEC:Quanta/Bosons= digits: Synthesis reveals EMP-like SEVERE VULNERABILITY of ONLY DIGITAL-networks(VS. FAULT-TOLERANT ANALOG INvulnerability) via Barabasi "Network-Physics" relative-``statics''(VS.dynamics-[Willinger-Alderson-Doyle(Not.AMS(5/09)]-]critique); (so called)"Quantum-computing is simple-arithmetic(sans division/ factorization); algorithmic-complexities: INtractibility/ UNdecidability/ INefficiency/NONcomputability / HARDNESS(so MIScalled) "noise"-induced-phase-transitions(NITS) ACCELERATION: Cook-Levin theorem Reducibility is Renormalization-(Semi)-Group fixed-points; number-Randomness DEFINITION via WHAT? Query(VS. Goldreich[Not.AMS(02)] How? mea culpa)can ONLY be MBCS "hot-plasma" versus digit-clumping NON-random BEC; Modular-arithmetic Congruences= Signal X Noise PRODUCTS = clock-model; NON-Shor[Physica A,341,586(04)] BEC logarithmic-law inversion factorization:Watkins number-thy. U stat.-phys.); P=/=NP TRIVIAL Proof: Euclid!!! [(So Miscalled) computational-complexity J-O obviation via geometry.
Paranoia.Ada: A diagnostic program to evaluate Ada floating-point arithmetic
NASA Technical Reports Server (NTRS)
Hjermstad, Chris
1986-01-01
Many essential software functions in the mission critical computer resource application domain depend on floating point arithmetic. Numerically intensive functions associated with the Space Station project, such as ephemeris generation or the implementation of Kalman filters, are likely to employ the floating point facilities of Ada. Paranoia.Ada appears to be a valuable program to ensure that Ada environments and their underlying hardware exhibit the precision and correctness required to satisfy mission computational requirements. As a diagnostic tool, Paranoia.Ada reveals many essential characteristics of an Ada floating point implementation. Equipped with such knowledge, programmers need not tremble before the complex task of floating point computation.
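Paranoia-style diagnostics infer properties of the floating-point arithmetic purely from the behavior of rounded operations. As an illustrative sketch (not the Paranoia.Ada test suite itself), the same probing idea in Python:

```python
import sys

def machine_epsilon():
    """Smallest eps such that 1.0 + eps != 1.0, found by repeated halving."""
    eps = 1.0
    while 1.0 + eps / 2.0 != 1.0:
        eps /= 2.0
    return eps

def radix_guess():
    """Probe the floating-point radix: grow a until adding 1 is absorbed,
    then find the smallest b whose addition is visible -- that gap is the radix."""
    a = 1.0
    while (a + 1.0) - a == 1.0:
        a *= 2.0
    b = 1.0
    while (a + b) - a == 0.0:
        b *= 2.0
    return (a + b) - a
```

On IEEE 754 double-precision hardware these probes recover epsilon = 2^-52 and radix 2.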
Benchmarking Memory Performance with the Data Cube Operator
NASA Technical Reports Server (NTRS)
Frumkin, Michael A.; Shabanov, Leonid V.
2004-01-01
Data movement across a computer memory hierarchy and across computational grids is known to be a limiting factor for applications processing large data sets. We use the Data Cube Operator on an Arithmetic Data Set, called ADC, to benchmark the capabilities of computers and of computational grids to handle large distributed data sets. We present a prototype implementation of a parallel algorithm for computation of the operator. The algorithm follows a known approach for computing views from the smallest parent. The ADC stresses all levels of grid memory and storage by producing some of the 2^d views of an Arithmetic Data Set of d-tuples described by a small number of integers. We control the data intensity of the ADC by selecting the tuple parameters, the sizes of the views, and the number of realized views. Benchmarking results of memory performance of a number of computer architectures and of a small computational grid are presented.
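The data cube over d attributes comprises the 2^d possible group-by views. A minimal Python sketch of that idea (the function name and integer-measure aggregation are illustrative, not the ADC benchmark's actual interface):

```python
from itertools import combinations
from collections import defaultdict

def data_cube(tuples, d):
    """Compute all 2^d group-by views of d-attribute tuples with a measure.

    `tuples` is a list of (attrs, measure) pairs where attrs is a d-tuple.
    Returns {view_dims: {projected_attrs: summed_measure}}.
    """
    views = {}
    for r in range(d + 1):
        for dims in combinations(range(d), r):
            agg = defaultdict(int)
            for attrs, m in tuples:
                key = tuple(attrs[i] for i in dims)
                agg[key] += m
            views[dims] = dict(agg)
    return views
```

For d = 2 this yields four views: the full detail, two one-attribute roll-ups, and the grand total.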
Effect of Numerical Error on Gravity Field Estimation for GRACE and Future Gravity Missions
NASA Astrophysics Data System (ADS)
McCullough, Christopher; Bettadpur, Srinivas
2015-04-01
In recent decades, gravity field determination from low Earth orbiting satellites, such as the Gravity Recovery and Climate Experiment (GRACE), has become increasingly effective due to the incorporation of high accuracy measurement devices. Since instrumentation quality will only increase in the near future and the gravity field determination process is computationally and numerically intensive, numerical error from the use of double precision arithmetic will eventually become a prominent error source. While using double-extended or quadruple precision arithmetic will reduce these errors, the numerical limitations of current orbit determination algorithms and processes must be accurately identified and quantified in order to adequately inform the science data processing techniques of future gravity missions. The most obvious numerical limitation in the orbit determination process is evident in the comparison of measured observables with computed values, derived from mathematical models relating the satellites' numerically integrated state to the observable. Significant error in the computed trajectory will corrupt this comparison and induce error in the least squares solution of the gravitational field. In addition, errors in the numerically computed trajectory propagate into the evaluation of the mathematical measurement model's partial derivatives. These errors amalgamate in turn with numerical error from the computation of the state transition matrix, computed using the variational equations of motion, in the least squares mapping matrix. Finally, the solution of the linearized least squares system, computed using a QR factorization, is also susceptible to numerical error. Certain interesting combinations of each of these numerical errors are examined in the framework of GRACE gravity field determination to analyze and quantify their effects on gravity field recovery.
On Certain Topological Indices of Boron Triangular Nanotubes
NASA Astrophysics Data System (ADS)
Aslam, Adnan; Ahmad, Safyan; Gao, Wei
2017-08-01
The topological index gives information about the whole structure of a chemical graph; degree-based topological indices are especially useful. Boron triangular nanotubes are now replacing conventional carbon nanotubes due to their excellent properties. We have computed the general Randić (Rα), first Zagreb (M1), second Zagreb (M2), atom-bond connectivity (ABC), and geometric-arithmetic (GA) indices of boron triangular nanotubes. We have also computed the fourth version of the atom-bond connectivity index (ABC4) and the fifth version of the geometric-arithmetic index (GA5) for these nanotubes.
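The geometric-arithmetic index named above is defined edge-wise as GA(G) = sum over edges uv of 2*sqrt(du*dv)/(du+dv), where du is the degree of vertex u. A small sketch for an arbitrary graph given as an edge list (illustrative; the paper derives closed forms for the nanotube family rather than summing edges directly):

```python
import math

def ga_index(edges):
    """Geometric-arithmetic index: sum over edges of 2*sqrt(du*dv)/(du+dv)."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    return sum(2 * math.sqrt(deg[u] * deg[v]) / (deg[u] + deg[v])
               for u, v in edges)
```

On the 4-vertex path (degrees 1, 2, 2, 1) the middle edge contributes exactly 1 and each end edge contributes 2*sqrt(2)/3.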
ERIC Educational Resources Information Center
Schoppek, Wolfgang; Tulis, Maria
2010-01-01
The fluency of basic arithmetical operations is a precondition for mathematical problem solving. However, the training of skills plays a minor role in contemporary mathematics instruction. The authors proposed individualization of practice as a means to improve its efficiency, so that the time spent with the training of skills is minimized. As a…
Multiplier Architecture for Coding Circuits
NASA Technical Reports Server (NTRS)
Wang, C. C.; Truong, T. K.; Shao, H. M.; Deutsch, L. J.
1986-01-01
Multipliers based on new algorithm for Galois-field (GF) arithmetic regular and expandable. Pipeline structures used for computing both multiplications and inverses. Designs suitable for implementation in very-large-scale integrated (VLSI) circuits. This general type of inverter and multiplier architecture especially useful in performing finite-field arithmetic of Reed-Solomon error-correcting codes and of some cryptographic algorithms.
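Finite-field multiplication of the kind these circuits implement can be sketched in software with shift-and-XOR ("Russian peasant") reduction. The reduction polynomial 0x11B below is the AES choice, used here only for illustration; Reed-Solomon implementations fix their own primitive polynomial:

```python
def gf256_mul(a, b, poly=0x11B):
    """Carry-less multiply in GF(2^8), reduced modulo x^8+x^4+x^3+x+1."""
    r = 0
    while b:
        if b & 1:
            r ^= a          # add (XOR) the current shifted copy of a
        b >>= 1
        a <<= 1
        if a & 0x100:       # degree-8 overflow: reduce by the field polynomial
            a ^= poly
    return r

def gf256_inv(a, poly=0x11B):
    """Inverse via Fermat's little theorem: a^(2^8 - 2) = a^254."""
    x, result, e = a, 1, 254
    while e:
        if e & 1:
            result = gf256_mul(result, x, poly)
        x = gf256_mul(x, x, poly)
        e >>= 1
    return result
```

The exponentiation-based inverse mirrors the pipelined inverter architecture in spirit: it is built entirely from repeated multiplications.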
Umari, Amjad M.J.; Gorelick, Steven M.
1986-01-01
In the numerical modeling of groundwater solute transport, explicit solutions may be obtained for the concentration field at any future time without computing concentrations at intermediate times. The spatial variables are discretized and time is left continuous in the governing differential equation. These semianalytical solutions have been presented in the literature and involve the eigensystem of a coefficient matrix. This eigensystem may be complex (i.e., have imaginary components) due to the asymmetry created by the advection term in the governing advection-dispersion equation. Previous investigators have either used complex arithmetic to represent a complex eigensystem or chosen large dispersivity values for which the imaginary components of the complex eigenvalues may be ignored without significant error. It is shown here that the error due to ignoring the imaginary components of complex eigenvalues is large for small dispersivity values. A new algorithm that represents the complex eigensystem by converting it to a real eigensystem is presented. The method requires only real arithmetic.
Arithmetic on Your Phone: A Large Scale Investigation of Simple Additions and Multiplications.
Zimmerman, Federico; Shalom, Diego; Gonzalez, Pablo A; Garrido, Juan Manuel; Alvarez Heduan, Facundo; Dehaene, Stanislas; Sigman, Mariano; Rieznik, Andres
2016-01-01
We present the results of a gamified mobile device arithmetic application which allowed us to collect a vast amount of data on simple arithmetic operations. Our results confirm and replicate, on a large sample, six of the main principles derived in a long tradition of investigation: size effect, tie effect, size-tie interaction effect, five-effect, RTs and error rates correlation effect, and most common error effect. Our dataset allowed us to perform a robust analysis of order effects for each individual problem, for which there is controversy both in experimental findings and in the predictions of theoretical models. For addition problems, the order effect was dominated by a max-then-min structure (e.g., 7+4 is easier than 4+7). This result is predicted by models in which additions are performed as a translation starting from the first addend, with a distance given by the second addend. In multiplication, we observed a dominance of two effects: (1) a max-then-min pattern that can be accounted for by the fact that it is easier to perform fewer additions of the largest number (e.g., 8x3 is easier to compute as 8+8+8 than as 3+3+…+3) and (2) a phonological effect by which problems for which there is a rhyme (e.g., "seis por cuatro es veinticuatro") are performed faster. Above and beyond these results, our study bears an important practical conclusion, as proof of concept, that participants can be motivated to perform substantial arithmetic training simply by presenting it in a gamified format.
Fast reversible wavelet image compressor
NASA Astrophysics Data System (ADS)
Kim, HyungJun; Li, Ching-Chung
1996-10-01
We present a unified image compressor with spline biorthogonal wavelets and dyadic rational filter coefficients which gives high computational speed and excellent compression performance. Convolutions with these filters can be performed by using only arithmetic shifting and addition operations. Wavelet coefficients can be encoded with an arithmetic coder which also uses arithmetic shifting and addition operations. Therefore, from beginning to end, the whole encoding/decoding process can be done within a short period of time. The proposed method naturally extends from lossless compression to the lossy, high-compression range and can be easily adapted to progressive reconstruction.
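Shift-and-add wavelet filtering of this kind is exemplified by the reversible LeGall 5/3 lifting scheme (a standard integer wavelet shown here for illustration; the abstract's specific spline filters may differ). It uses only additions and arithmetic shifts, and the inverse reconstructs the input exactly:

```python
def fwd_53(x):
    """Forward reversible LeGall 5/3 lifting: shifts and adds only.

    Returns (lowpass, highpass); borders use a simple mirrored fallback.
    """
    n = len(x)
    assert n >= 2 and n % 2 == 0
    half = n // 2
    d = []  # highpass: odd samples minus the halved sum of even neighbors
    for i in range(half):
        right = x[2 * i + 2] if 2 * i + 2 < n else x[2 * i]
        d.append(x[2 * i + 1] - ((x[2 * i] + right) >> 1))
    s = []  # lowpass: even samples updated from neighboring detail values
    for i in range(half):
        left = d[i - 1] if i > 0 else d[0]
        s.append(x[2 * i] + ((left + d[i] + 2) >> 2))
    return s, d

def inv_53(s, d):
    """Exact inverse: undo the update step, then the predict step."""
    half = len(s)
    x = [0] * (2 * half)
    for i in range(half):
        left = d[i - 1] if i > 0 else d[0]
        x[2 * i] = s[i] - ((left + d[i] + 2) >> 2)
    for i in range(half):
        right = x[2 * i + 2] if 2 * i + 2 < 2 * half else x[2 * i]
        x[2 * i + 1] = d[i] + ((x[2 * i] + right) >> 1)
    return x
```

Because the inverse evaluates the identical shift-and-add expressions and subtracts them back, reconstruction is lossless for any integer input.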
Computations of Eisenstein series on Fuchsian groups
NASA Astrophysics Data System (ADS)
Avelin, Helen
2008-09-01
We present numerical investigations of the value distribution and distribution of Fourier coefficients of the Eisenstein series E(z; s) on arithmetic and non-arithmetic Fuchsian groups. Our numerics indicate a Gaussian limit value distribution for a real-valued rotation of E(z; s) as Re s = 1/2, Im s → ∞, and also, on non-arithmetic groups, a complex Gaussian limit distribution for E(z; s) when Re s > 1/2 near 1/2 and Im s → ∞, at least if we allow Re s → 1/2 at some rate. Furthermore, on non-arithmetic groups and for fixed s with Re s ≥ 1/2 near 1/2, our numerics indicate a Gaussian limit distribution for the appropriately normalized Fourier coefficients.
Enhanced Graphics for Extended Scale Range
NASA Technical Reports Server (NTRS)
Hanson, Andrew J.; Chi-Wing Fu, Philip
2012-01-01
Enhanced Graphics for Extended Scale Range is a computer program for rendering fly-through views of scene models that include visible objects differing in size by large orders of magnitude. An example would be a scene showing a person in a park at night with the moon, stars, and galaxies in the background sky. Prior graphical computer programs exhibit arithmetic and other anomalies when rendering scenes containing objects that differ enormously in scale and distance from the viewer. The present program dynamically repartitions distance scales of objects in a scene during rendering to eliminate almost all such anomalies in a way compatible with implementation in other software and in hardware accelerators. By assigning depth ranges corresponding to rendering precision requirements, either automatically or under program control, this program spaces out object scales to match the precision requirements of the rendering arithmetic. This action includes an intelligent partition of the depth buffer ranges to avoid known anomalies from this source. The program is written in C++, using OpenGL, GLUT, and GLUI standard libraries, and nVidia GEForce Vertex Shader extensions. The program has been shown to work on several computers running UNIX and Windows operating systems.
Zhou, Xinlin; Wei, Wei; Zhang, Yiyun; Cui, Jiaxin; Chen, Chuansheng
2015-01-01
Studies have shown that numerosity processing (e.g., comparison of numbers of dots in two dot arrays) is significantly correlated with arithmetic performance. Researchers have attributed this association to the fact that both tasks share magnitude processing. The current investigation tested an alternative hypothesis, which states that visual perceptual ability (as measured by a figure-matching task) can account for the close relation between numerosity processing and arithmetic performance (computational fluency). Four hundred and twenty-four third- to fifth-grade children (220 boys and 204 girls, 8.0-11.0 years old; 120 third graders, 146 fourth graders, and 158 fifth graders) were recruited from two schools (one urban and one suburban) in Beijing, China. Six classes were randomly selected from each school, and all students in each selected class participated in the study. All children were given a series of cognitive and mathematical tests, including numerosity comparison, figure matching, forward verbal working memory, visual tracing, non-verbal matrices reasoning, mental rotation, choice reaction time, arithmetic tests and a curriculum-based mathematical achievement test. Results showed that figure-matching ability had higher correlations with numerosity processing and computational fluency than did other cognitive factors (e.g., forward verbal working memory, visual tracing, non-verbal matrix reasoning, mental rotation, and choice reaction time). More importantly, hierarchical multiple regression showed that figure-matching ability accounted for the well-established association between numerosity processing and computational fluency. In support of the visual perception hypothesis, the results suggest that visual perceptual ability, rather than magnitude processing, may be the shared component of numerosity processing and arithmetic performance.
High-order computer-assisted estimates of topological entropy
NASA Astrophysics Data System (ADS)
Grote, Johannes
The concept of Taylor Models is introduced, which offers highly accurate C0-estimates for the enclosures of functional dependencies, combining high-order Taylor polynomial approximation of functions and rigorous estimates of the truncation error, performed using verified interval arithmetic. The focus of this work is on the application of Taylor Models in algorithms for strongly nonlinear dynamical systems. A method to obtain sharp rigorous enclosures of Poincare maps for certain types of flows and surfaces is developed and numerical examples are presented. Differential algebraic techniques allow the efficient and accurate computation of polynomial approximations for invariant curves of certain planar maps around hyperbolic fixed points. Subsequently we introduce a procedure to extend these polynomial curves to verified Taylor Model enclosures of local invariant manifolds with C0-errors of size 10^-10 to 10^-14, and proceed to generate the global invariant manifold tangle up to comparable accuracy through iteration in Taylor Model arithmetic. Knowledge of the global manifold structure up to finite iterations of the local manifold pieces enables us to find all homoclinic and heteroclinic intersections in the generated manifold tangle. Combined with the mapping properties of the homoclinic points and their ordering we are able to construct a subshift of finite type as a topological factor of the original planar system to obtain rigorous lower bounds for its topological entropy. This construction is fully automatic and yields homoclinic tangles with several hundred homoclinic points. As an example rigorous lower bounds for the topological entropy of the Henon map are computed, which to the best knowledge of the authors yield the largest such estimates published so far.
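The verified interval arithmetic underpinning the Taylor Model remainder bounds can be sketched as follows. This is a toy version for illustration only: genuinely verified computation must additionally round lower endpoints down and upper endpoints up at every operation.

```python
class Interval:
    """Minimal interval arithmetic sketch (no outward rounding)."""

    def __init__(self, lo, hi=None):
        self.lo = lo
        self.hi = hi if hi is not None else lo

    def __add__(self, other):
        # Sum of intervals: add endpoints componentwise.
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # Product: extremes occur at endpoint products.
        ps = [self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi]
        return Interval(min(ps), max(ps))

    def __contains__(self, x):
        return self.lo <= x <= self.hi
```

Every operation returns an interval guaranteed (up to rounding) to enclose all possible results, which is what makes the downstream entropy bounds rigorous.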
NASA Astrophysics Data System (ADS)
Ford, Eric B.
2009-05-01
We present the results of a highly parallel Kepler equation solver using the Graphics Processing Unit (GPU) on a commercial nVidia GeForce 280GTX and the "Compute Unified Device Architecture" (CUDA) programming environment. We apply this to evaluate a goodness-of-fit statistic (e.g., χ2) for Doppler observations of stars potentially harboring multiple planetary companions (assuming negligible planet-planet interactions). Given the high-dimensionality of the model parameter space (at least five dimensions per planet), a global search is extremely computationally demanding. We expect that the underlying Kepler solver and model evaluator will be combined with a wide variety of more sophisticated algorithms to provide efficient global search, parameter estimation, model comparison, and adaptive experimental design for radial velocity and/or astrometric planet searches. We tested multiple implementations using single precision, double precision, pairs of single precision, and mixed precision arithmetic. We find that the vast majority of computations can be performed using single precision arithmetic, with selective use of compensated summation for increased precision. However, standard single precision is not adequate for calculating the mean anomaly from the time of observation and orbital period when evaluating the goodness-of-fit for real planetary systems and observational data sets. Using all double precision, our GPU code outperforms a similar code using a modern CPU by a factor of over 60. Using mixed precision, our GPU code provides a speed-up factor of over 600, when evaluating nsys > 1024 model planetary systems, each containing npl = 4 planets and assuming nobs = 256 observations of each system. We conclude that modern GPUs also offer a powerful tool for repeatedly evaluating Kepler's equation and a goodness-of-fit statistic for orbital models when presented with a large parameter space.
High-performance wavelet engine
NASA Astrophysics Data System (ADS)
Taylor, Fred J.; Mellot, Jonathon D.; Strom, Erik; Koren, Iztok; Lewis, Michael P.
1993-11-01
Wavelet processing has shown great promise for a variety of image and signal processing applications. Wavelets are also among the most computationally expensive techniques in signal processing. It is demonstrated that a wavelet engine constructed with residue number system arithmetic elements offers significant advantages over commercially available wavelet accelerators based upon conventional arithmetic elements. Analysis is presented predicting the dynamic range requirements of the reported residue number system based wavelet accelerator.
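Residue number system arithmetic, the basis of the engine above, decomposes an integer into independent residues modulo pairwise-coprime moduli, so multiplications proceed per channel with no carry propagation between channels. A sketch with reconstruction by the Chinese Remainder Theorem (moduli chosen for illustration):

```python
from math import prod

def to_rns(x, moduli):
    """Represent x by its residues modulo pairwise-coprime moduli."""
    return tuple(x % m for m in moduli)

def rns_mul(a, b, moduli):
    """Multiply digit-wise: no carries propagate between residue channels."""
    return tuple((ai * bi) % m for ai, bi, m in zip(a, b, moduli))

def from_rns(r, moduli):
    """Chinese Remainder Theorem reconstruction (Python 3.8+ for pow(., -1, m))."""
    M = prod(moduli)
    x = 0
    for ri, m in zip(r, moduli):
        Mi = M // m
        x += ri * Mi * pow(Mi, -1, m)  # modular inverse of Mi mod m
    return x % M
```

The carry-free channels are what make RNS attractive for the short, fixed-length multiply-accumulate chains of a wavelet filter bank.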
Environmental Gradient Analysis, Ordination, and Classification in Environmental Impact Assessments.
1987-09-01
agglomerative clustering algorithms for mainframe computers: (1) the unweighted pair-group method that uses arithmetic averages (UPGMA), (2) the... hierarchical agglomerative unweighted pair-group method using arithmetic averages (UPGMA), which is also called average linkage clustering. This method was... dendrograms produced by weighted clustering (93). Sneath and Sokal (94), Romesburg (84), and Seber (90) also strongly recommend the UPGMA. A dendrogram
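UPGMA itself is short enough to sketch: repeatedly merge the closest pair of clusters, replacing their distances to every other cluster by the size-weighted arithmetic average (hence "average linkage"). The function below is an illustrative implementation, returning the merge order and heights rather than a drawn dendrogram:

```python
def upgma_merge_order(dist):
    """UPGMA on a symmetric distance matrix; returns [(i, j, height), ...].

    New clusters get fresh integer ids starting at len(dist).
    """
    n = len(dist)
    clusters = {i: [i] for i in range(n)}
    d = {(i, j): dist[i][j] for i in range(n) for j in range(i + 1, n)}
    merges = []
    nxt = n
    while len(clusters) > 1:
        i, j = min(d, key=d.get)          # closest pair of clusters
        merges.append((i, j, d[(i, j)]))
        ni, nj = len(clusters[i]), len(clusters[j])
        merged = clusters.pop(i) + clusters.pop(j)
        for k in list(clusters):
            dik = d.pop((min(i, k), max(i, k)))
            djk = d.pop((min(j, k), max(j, k)))
            # size-weighted average = arithmetic mean over all member pairs
            d[(min(k, nxt), max(k, nxt))] = (ni * dik + nj * djk) / (ni + nj)
        d.pop((i, j))
        clusters[nxt] = merged
        nxt += 1
    return merges
```

On an ultrametric input the merge heights reproduce the original between-group distances exactly.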
Egeland, Jens; Bosnes, Ole; Johansen, Hans
2009-09-01
Confirmatory Factor Analyses (CFA) of the Wechsler Adult Intelligence Scale-III (WAIS-III) lend partial support to the four-factor model proposed in the test manual. However, the Arithmetic subtest has been especially difficult to allocate to one factor. Using the new Norwegian WAIS-III version, we tested factor models differing in the number of factors and in the placement of the Arithmetic subtest in a mixed clinical sample (n = 272). Only the four-factor solutions had adequate goodness-of-fit values. Allowing Arithmetic to load on both the Verbal Comprehension and Working Memory factors provided a more parsimonious solution compared to considering the subtest only as a measure of Working Memory. Effects of education were particularly high for both the Verbal Comprehension tests and Arithmetic.
Liu, Peigui; Elshall, Ahmed S.; Ye, Ming; ...
2016-02-05
Evaluating marginal likelihood is the most critical and computationally expensive task, when conducting Bayesian model averaging to quantify parametric and model uncertainties. The evaluation is commonly done by using Laplace approximations to evaluate semianalytical expressions of the marginal likelihood or by using Monte Carlo (MC) methods to evaluate the arithmetic or harmonic mean of a joint likelihood function. This study introduces a new MC method, i.e., thermodynamic integration, which has not been attempted in environmental modeling. Instead of using samples only from the prior parameter space (as in arithmetic mean evaluation) or the posterior parameter space (as in harmonic mean evaluation), the thermodynamic integration method uses samples generated gradually from the prior to the posterior parameter space. This is done through a path sampling that conducts Markov chain Monte Carlo simulation with different power coefficient values applied to the joint likelihood function. The thermodynamic integration method is evaluated using three analytical functions by comparing the method with two variants of the Laplace approximation method and three MC methods, including the nested sampling method that was recently introduced into environmental modeling. The thermodynamic integration method outperforms the other methods in terms of accuracy, convergence, and consistency. The thermodynamic integration method is also applied to a synthetic case of groundwater modeling with four alternative models. The application shows that model probabilities obtained using the thermodynamic integration method improve the predictive performance of Bayesian model averaging. As a result, the thermodynamic integration method is mathematically rigorous, and its MC implementation is computationally general for a wide range of environmental problems.
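The arithmetic-mean estimator named above averages the likelihood over draws from the prior. On a conjugate toy model the estimate can be checked against the exact marginal likelihood; the model, sample size, and seed below are illustrative choices, not from the study:

```python
import math
import random

def lik(y, theta):
    """Gaussian likelihood N(y; theta, 1)."""
    return math.exp(-0.5 * (y - theta) ** 2) / math.sqrt(2 * math.pi)

def arithmetic_mean_ml(y, n=200_000, seed=1):
    """Arithmetic-mean MC estimate of p(y): average lik over prior draws."""
    rng = random.Random(seed)
    return sum(lik(y, rng.gauss(0, 1)) for _ in range(n)) / n

def analytic_ml(y):
    """Exact marginal: theta ~ N(0,1), y|theta ~ N(theta,1) => y ~ N(0,2)."""
    return math.exp(-0.25 * y * y) / math.sqrt(4 * math.pi)
```

The harmonic-mean variant replaces prior draws with posterior draws and averages reciprocal likelihoods; it is notoriously high-variance, which is part of the motivation for thermodynamic integration.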
NASA Astrophysics Data System (ADS)
Ossendrijver, Mathieu
2016-01-01
The idea of computing a body’s displacement as an area in time-velocity space is usually traced back to 14th-century Europe. I show that in four ancient Babylonian cuneiform tablets, Jupiter’s displacement along the ecliptic is computed as the area of a trapezoidal figure obtained by drawing its daily displacement against time. This interpretation is prompted by a newly discovered tablet on which the same computation is presented in an equivalent arithmetical formulation. The tablets date from 350 to 50 BCE. The trapezoid procedures offer the first evidence for the use of geometrical methods in Babylonian mathematical astronomy, which was thus far viewed as operating exclusively with arithmetical concepts.
A Stable Clock Error Model Using Coupled First and Second Order Gauss-Markov Processes
NASA Technical Reports Server (NTRS)
Carpenter, Russell; Lee, Taesul
2008-01-01
Long data outages may occur in applications of global navigation satellite system technology to orbit determination for missions that spend significant fractions of their orbits above the navigation satellite constellation(s). Current clock error models based on the random walk idealization may not be suitable in these circumstances, since the covariance of the clock errors may become large enough to overflow flight computer arithmetic. A model that is stable, but which approximates the existing models over short time horizons is desirable. A coupled first- and second-order Gauss-Markov process is such a model.
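The stability argument can be seen directly in the variances: a random walk's variance grows linearly without bound, while a first-order Gauss-Markov process with time constant tau saturates at q*tau/2 yet matches the random walk over times short relative to tau. A sketch of the standard formulas (symbols q, tau are generic noise-intensity and correlation-time parameters, not mission values):

```python
import math

def fogm_variance(t, tau, q):
    """Variance of a first-order Gauss-Markov process started at zero:
    var(t) = (q * tau / 2) * (1 - exp(-2 t / tau)), bounded by q*tau/2."""
    return 0.5 * q * tau * (1.0 - math.exp(-2.0 * t / tau))

def random_walk_variance(t, q):
    """Random-walk variance grows without bound: var(t) = q * t."""
    return q * t
```

Boundedness is what keeps the clock-error covariance from overflowing flight computer arithmetic during long outages, while the short-horizon agreement preserves the behavior of the existing random-walk models.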
Parent's Guide to Computers in Education.
ERIC Educational Resources Information Center
Moursund, David
Addressed to the parents of children taking computer courses in school, this booklet outlines the rationales for computer use in schools and explains for a lay audience the features and functions of computers. A look at the school of the future shows computers aiding the study of reading, writing, arithmetic, geography, and history. The features…
Arithmetic operations in optical computations using a modified trinary number system.
Datta, A K; Basuray, A; Mukhopadhyay, S
1989-05-01
A modified trinary number (MTN) system is proposed in which any binary number can be expressed with the help of trinary digits (-1, 0, 1). Arithmetic operations can be performed in parallel without the need for carry and borrow steps when binary digits are converted to the MTN system. An optical implementation of the proposed scheme that uses spatial light modulators and color-coded light signals is described.
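The digit set (-1, 0, 1) is that of balanced ternary. The conversion below is the standard sequential one, shown only to make the representation concrete; the paper's point is that the binary-to-MTN mapping admits a parallel, carry-free optical realization, which this sketch does not attempt:

```python
def to_balanced_ternary(n):
    """Balanced-ternary digits of n in {-1, 0, 1}, least significant first."""
    digits = []
    while n:
        r = n % 3
        if r == 2:       # digit 2 becomes -1 with a carry into the next place
            r = -1
            n += 1
        digits.append(r)
        n //= 3
    return digits or [0]

def from_balanced_ternary(digits):
    """Evaluate the digit list back to an integer."""
    return sum(d * 3 ** i for i, d in enumerate(digits))
```

A convenience of the signed digit set is that negation is just digit-wise sign flipping, with no separate sign bit or two's-complement step.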
Multistate Memristive Tantalum Oxide Devices for Ternary Arithmetic
Kim, Wonjoo; Chattopadhyay, Anupam; Siemon, Anne; Linn, Eike; Waser, Rainer; Rana, Vikas
2016-01-01
Redox-based resistive switching random access memory (ReRAM) offers excellent properties to implement future non-volatile memory arrays. Recently, the capability of two-state ReRAMs to implement Boolean logic functionality gained wide interest. Here, we report on seven-states Tantalum Oxide Devices, which enable the realization of an intrinsic modular arithmetic using a ternary number system. Modular arithmetic, a fundamental system for operating on numbers within the limit of a modulus, is known to mathematicians since the days of Euclid and finds applications in diverse areas ranging from e-commerce to musical notations. We demonstrate that multistate devices not only reduce the storage area consumption drastically, but also enable novel in-memory operations, such as computing using high-radix number systems, which could not be implemented using two-state devices. The use of high radix number system reduces the computational complexity by reducing the number of needed digits. Thus the number of calculation operations in an addition and the number of logic devices can be reduced. PMID:27834352
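The digit-count saving claimed for a higher radix is easy to quantify: base 3 needs about log3(N) digits against log2(N) for binary. A generic positional conversion (not the device's actual state encoding) makes the comparison concrete:

```python
def to_radix(n, r):
    """Digits of non-negative integer n in base r, least significant first."""
    digits = []
    while n:
        digits.append(n % r)
        n //= r
    return digits or [0]
```

For example, 1000 takes 10 binary digits but only 7 ternary digits, so a seven-state cell storing more than one ternary digit per device compounds the saving.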
Westenbroek, Stephen M.; Doherty, John; Walker, John F.; Kelson, Victor A.; Hunt, Randall J.; Cera, Timothy B.
2012-01-01
The TSPROC (Time Series PROCessor) computer software uses a simple scripting language to process and analyze time series. It was developed primarily to assist in the calibration of environmental models. The software is designed to perform calculations on time-series data commonly associated with surface-water models, including calculation of flow volumes, transformation by means of basic arithmetic operations, and generation of seasonal and annual statistics and hydrologic indices. TSPROC can also be used to generate some of the key input files required to perform parameter optimization by means of the PEST (Parameter ESTimation) computer software. Through the use of TSPROC, the objective function for use in the model-calibration process can be focused on specific components of a hydrograph.
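The kind of seasonal statistic TSPROC generates can be sketched with the standard library alone. This is an illustrative computation, not TSPROC's actual scripting syntax; the series and the `seasonal_means` helper are invented for the example.

```python
from datetime import date
from statistics import mean

# Illustrative daily discharge series: (date, flow), one pair per (month, day)
series = [(date(2020, m, d), float(m * 10 + d)) for m in (1, 4, 7, 10) for d in (1, 2)]

def seasonal_means(ts):
    """Mean value per meteorological season (DJF/MAM/JJA/SON), one of the
    statistic types a time-series processor would produce."""
    season = {12: 'DJF', 1: 'DJF', 2: 'DJF', 3: 'MAM', 4: 'MAM', 5: 'MAM',
              6: 'JJA', 7: 'JJA', 8: 'JJA', 9: 'SON', 10: 'SON', 11: 'SON'}
    buckets = {}
    for d, v in ts:
        buckets.setdefault(season[d.month], []).append(v)
    return {s: mean(vs) for s, vs in buckets.items()}
```

In a calibration workflow, such summary statistics (rather than raw daily values) would feed the PEST objective function.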
Simulating Network Retrieval of Arithmetic Facts.
ERIC Educational Resources Information Center
Ashcraft, Mark H.
This report describes a simulation of adults' retrieval of arithmetic facts from a network-based memory representation. The goals of the simulation project are to: demonstrate in specific form the nature of a spreading activation model of mental arithmetic; account for three important reaction time effects observed in laboratory investigations;…
Transfluxor circuit amplifies sensing current for computer memories
NASA Technical Reports Server (NTRS)
Milligan, G. C.
1964-01-01
To transfer data from the magnetic memory core to an independent core, a reliable sensing amplifier has been developed. The data in the independent core are later transferred to the arithmetic section of the computer.
Berg, Derek H
2008-04-01
The cognitive underpinnings of arithmetic calculation in children are noted to involve working memory; however, cognitive processes related to arithmetic calculation and working memory suggest that this relationship is more complex than stated previously. The purpose of this investigation was to examine the relative contributions of processing speed, short-term memory, working memory, and reading to arithmetic calculation in children. Results suggested four important findings. First, processing speed emerged as a significant contributor to arithmetic calculation only in relation to age-related differences in the general sample. Second, processing speed and short-term memory did not eliminate the contribution of working memory to arithmetic calculation. Third, individual working memory components--verbal working memory and visual-spatial working memory--each contributed unique variance to arithmetic calculation in the presence of all other variables. Fourth, a full model indicated that chronological age remained a significant contributor to arithmetic calculation in the presence of significant contributions from all other variables. Results are discussed in terms of directions for future research on working memory in arithmetic calculation.
Towards cortex sized artificial neural systems.
Johansson, Christopher; Lansner, Anders
2007-01-01
We propose, implement, and discuss an abstract model of the mammalian neocortex. This model is instantiated with a sparse recurrently connected neural network that has spiking leaky integrator units and continuous Hebbian learning. First we study the structure, modularization, and size of neocortex, and then we describe a generic computational model of the cortical circuitry. A characterizing feature of the model is that it is based on the modularization of neocortex into hypercolumns and minicolumns. Both floating- and fixed-point arithmetic implementations of the model are presented along with simulation results. We conclude that an implementation on a cluster computer is bounded by computation rather than communication. Mouse and rat cortex sized versions of our model execute in 44% and 23% of real-time, respectively. Further, an instance of the model with 1.6 × 10^6 units and 2 × 10^11 connections performed noise reduction and pattern completion. These implementations represent the current frontier of large-scale abstract neural network simulations in terms of network size and running speed.
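A single spiking leaky-integrator unit of the kind the model uses can be sketched in a few lines. The parameter values below are illustrative assumptions, not those of the paper.

```python
def lif(inputs, tau=20.0, dt=1.0, thresh=1.0):
    """Spiking leaky-integrator unit: the potential v decays toward zero,
    integrates the input, and fires (then resets) when it crosses threshold.
    Returns the list of spike time steps."""
    v, spikes = 0.0, []
    for t, i_t in enumerate(inputs):
        v += dt / tau * (-v) + i_t   # leak plus input integration
        if v >= thresh:
            spikes.append(t)
            v = 0.0                  # reset after a spike
    return spikes
```

A constant drive produces regular spiking; no input produces none.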
Identification procedure for epistemic uncertainties using inverse fuzzy arithmetic
NASA Astrophysics Data System (ADS)
Haag, T.; Herrmann, J.; Hanss, M.
2010-10-01
For the mathematical representation of systems with epistemic uncertainties, arising, for example, from simplifications in the modeling procedure, models with fuzzy-valued parameters prove to be a suitable and promising approach. In practice, however, the determination of these parameters turns out to be a non-trivial problem. The identification procedure to appropriately update these parameters on the basis of a reference output (measurement or output of an advanced model) requires the solution of an inverse problem. Against this background, an inverse method for the computation of the fuzzy-valued parameters of a model with epistemic uncertainties is presented. This method stands out due to the fact that it only uses feedforward simulations of the model, based on the transformation method of fuzzy arithmetic, along with the reference output. An inversion of the system equations is not necessary. The advancement of the method presented in this paper consists of the identification of multiple input parameters based on a single reference output or measurement. An optimization is used to solve the resulting underdetermined problems by minimizing the uncertainty of the identified parameters. Regions where the identification procedure is reliable are determined by the computation of a feasibility criterion which is also based on the output data of the transformation method only. For a frequency response function of a mechanical system, this criterion allows a restriction of the identification process to some special range of frequency where its solution can be guaranteed. Finally, the practicability of the method is demonstrated by covering the measured output of a fluid-filled piping system by the corresponding uncertain FE model in a conservative way.
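The forward propagation underlying such methods can be caricatured with α-cut intervals. This is a deliberate simplification: the actual transformation method samples many points per cut to handle non-monotone models, whereas the sketch below assumes a monotone function; all names are invented for the illustration.

```python
def alpha_cuts(peak, spread, levels=5):
    """A symmetric triangular fuzzy number as a list of (alpha, lo, hi) intervals."""
    return [(a / levels,
             peak - (1 - a / levels) * spread,
             peak + (1 - a / levels) * spread)
            for a in range(levels + 1)]

def propagate(cuts, f):
    """Push alpha-cut intervals through a monotone function f by evaluating
    only the interval endpoints (a simplification of the transformation method)."""
    return [(a, min(f(lo), f(hi)), max(f(lo), f(hi))) for a, lo, hi in cuts]
```

An inverse procedure, as in the paper, would instead search for input cuts whose propagated image covers a measured output.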
On the Floating Point Performance of the i860 Microprocessor
NASA Technical Reports Server (NTRS)
Lee, King; Kutler, Paul (Technical Monitor)
1997-01-01
The i860 microprocessor is a pipelined processor that can deliver two double precision floating point results every clock. It is being used in the Touchstone project to develop a teraflop computer by the year 2000. With such high computational capabilities it was expected that memory bandwidth would limit performance on many kernels. Measured performance of three kernels showed performance is less than what memory bandwidth limitations would predict. This paper develops a model that explains the discrepancy in terms of memory latencies and points to some problems involved in moving data from memory to the arithmetic pipelines.
Design of Arithmetic Circuits for Complex Binary Number System
NASA Astrophysics Data System (ADS)
Jamil, Tariq
2011-08-01
Complex numbers play an important role in various engineering applications. To represent these numbers efficiently for storage and manipulation, a (-1+j)-base complex binary number system (CBNS) has been proposed in the literature. In this paper, designs of nibble-size arithmetic circuits (adder, subtractor, multiplier, divider) have been presented. These circuits can be incorporated within von Neumann and associative dataflow processors to achieve higher performance in both sequential and parallel computing paradigms.
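The representational idea behind CBNS can be demonstrated in software: every Gaussian integer has a unique expansion in base (-1+j) with digits {0, 1}, found by repeated division with remainder. This is a hedged software illustration of the number system, not of the paper's hardware circuits.

```python
def to_cbns(a, b=0):
    """Digit string (MSB first) of the Gaussian integer a+bj in base (-1+1j),
    digit set {0, 1}. Uses the greedy division: a+bj is divisible by (-1+1j)
    exactly when a+b is even."""
    if a == 0 and b == 0:
        return "0"
    digits = []
    while a != 0 or b != 0:
        d = (a + b) % 2
        a -= d
        # (a + bj) / (-1 + 1j) = (b - a)/2  +  (-(a + b)/2) j   (both exact here)
        a, b = (b - a) // 2, -(a + b) // 2
        digits.append(d)
    return "".join(str(d) for d in reversed(digits))

def from_cbns(s):
    """Evaluate a CBNS digit string back to a complex number (Horner's rule)."""
    z = 0
    for ch in s:
        z = z * (-1 + 1j) + int(ch)
    return z
```

For example, the real number 2 is written "1100" in CBNS, since (-1+j)^3 + (-1+j)^2 = 2.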
Hardware math for the 6502 microprocessor
NASA Technical Reports Server (NTRS)
Kissel, R.; Currie, J.
1985-01-01
A floating-point arithmetic unit is described which is being used in the Ground Facility of Large Space Structures Control Verification (GF/LSSCV). The experiment uses two complete inertial measurement units and a set of three gimbal torquers in a closed loop to control the structural vibrations in a flexible test article (beam). A 6502 (8-bit) microprocessor controls four AMD 9511A floating-point arithmetic units to do all the computation in 20 milliseconds.
Sparse distributed memory and related models
NASA Technical Reports Server (NTRS)
Kanerva, Pentti
1992-01-01
Described here is sparse distributed memory (SDM) as a neural-net associative memory. It is characterized by two weight matrices and by a large internal dimension - the number of hidden units is much larger than the number of input or output units. The first matrix, A, is fixed and possibly random, and the second matrix, C, is modifiable. The SDM is compared and contrasted to (1) computer memory, (2) correlation-matrix memory, (3) feed-forward artificial neural network, (4) cortex of the cerebellum, (5) Marr and Albus models of the cerebellum, and (6) Albus' cerebellar model arithmetic computer (CMAC). Several variations of the basic SDM design are discussed: the selected-coordinate and hyperplane designs of Jaeckel, the pseudorandom associative neural memory of Hassoun, and SDM with real-valued input variables by Prager and Fallside. SDM research conducted mainly at the Research Institute for Advanced Computer Science (RIACS) in 1986-1991 is highlighted.
Efficient Boundary Extraction of BSP Solids Based on Clipping Operations.
Wang, Charlie C L; Manocha, Dinesh
2013-01-01
We present an efficient algorithm to extract the manifold surface that approximates the boundary of a solid represented by a Binary Space Partition (BSP) tree. Our polygonization algorithm repeatedly performs clipping operations on volumetric cells that correspond to a spatial convex partition and computes the boundary by traversing the connected cells. We use point-based representations along with finite-precision arithmetic to improve the efficiency and generate the B-rep approximation of a BSP solid. The core of our polygonization method is a novel clipping algorithm that uses a set of logical operations to make it resistant to degeneracies resulting from limited precision of floating-point arithmetic. The overall BSP to B-rep conversion algorithm can accurately generate boundaries with sharp and small features, and is faster than prior methods. At the end of this paper, we use this algorithm for a few geometric processing applications including Boolean operations, model repair, and mesh reconstruction.
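The elementary operation the paper builds on, clipping a convex cell against a plane, can be sketched in 2-D. This is a textbook Sutherland-Hodgman-style half-plane clip for illustration, not the authors' point-based, degeneracy-resistant algorithm.

```python
def clip_halfplane(poly, a, b, c):
    """Clip a convex CCW polygon (list of (x, y)) against the half-plane
    a*x + b*y <= c. Points exactly on the boundary are kept."""
    out = []
    n = len(poly)
    for i in range(n):
        p, q = poly[i], poly[(i + 1) % n]
        fp = a * p[0] + b * p[1] - c   # signed "distance" of p
        fq = a * q[0] + b * q[1] - c   # signed "distance" of q
        if fp <= 0:
            out.append(p)
        if (fp < 0 < fq) or (fq < 0 < fp):
            # edge crosses the boundary: add the intersection point
            t = fp / (fp - fq)
            out.append((p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1])))
    return out
```

The sign tests `fp <= 0` and the division for `t` are precisely where finite-precision arithmetic causes the degeneracies the paper's logical-operation scheme is designed to tolerate.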
On the use of inexact, pruned hardware in atmospheric modelling
Düben, Peter D.; Joven, Jaume; Lingamneni, Avinash; McNamara, Hugh; De Micheli, Giovanni; Palem, Krishna V.; Palmer, T. N.
2014-01-01
Inexact hardware design, which advocates trading the accuracy of computations in exchange for significant savings in area, power and/or performance of computing hardware, has received increasing prominence in several error-tolerant application domains, particularly those involving perceptual or statistical end-users. In this paper, we evaluate inexact hardware for its applicability in weather and climate modelling. We expand previous studies on inexact techniques, in particular probabilistic pruning, to floating point arithmetic units and derive several simulated set-ups of pruned hardware with reasonable levels of error for applications in atmospheric modelling. The set-up is tested on the Lorenz ‘96 model, a toy model for atmospheric dynamics, using software emulation for the proposed hardware. The results show that large parts of the computation tolerate the use of pruned hardware blocks without major changes in the quality of short- and long-time diagnostics, such as forecast errors and probability density functions. This could open the door to significant savings in computational cost and to higher resolution simulations with weather and climate models. PMID:24842031
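Software emulation of reduced-precision hardware, as used in the study, can be approximated by discarding low-order mantissa bits of IEEE 754 doubles. The sketch below truncates only; real pruning schemes operate at the gate level and are more nuanced, so treat this as an assumption-laden stand-in.

```python
import struct

def truncate_mantissa(x, bits):
    """Keep the top `bits` of the 52 mantissa bits of an IEEE 754 double;
    zero the rest. Sign and exponent are untouched."""
    (u,) = struct.unpack('<Q', struct.pack('<d', x))
    mask = ~((1 << (52 - bits)) - 1) & 0xFFFFFFFFFFFFFFFF
    return struct.unpack('<d', struct.pack('<Q', u & mask))[0]
```

Running a model such as Lorenz '96 through an operator like this, at various `bits` settings, mimics the accuracy/energy trade-off the paper evaluates.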
Metcalfe, Arron W. S.; Ashkenazi, Sarit; Rosenberg-Lee, Miriam; Menon, Vinod
2013-01-01
Baddeley and Hitch’s multi-component working memory (WM) model has played an enduring and influential role in our understanding of cognitive abilities. Very little is known, however, about the neural basis of this multi-component WM model and the differential role each component plays in mediating arithmetic problem solving abilities in children. Here, we investigate the neural basis of the central executive (CE), phonological (PL) and visuo-spatial (VS) components of WM during a demanding mental arithmetic task in 7–9 year old children (N=74). The VS component was the strongest predictor of math ability in children and was associated with increased arithmetic complexity-related responses in left dorsolateral and right ventrolateral prefrontal cortices as well as bilateral intra-parietal sulcus and supramarginal gyrus in posterior parietal cortex. Critically, VS, CE and PL abilities were associated with largely distinct patterns of brain response. Overlap between VS and CE components was observed in left supramarginal gyrus and no overlap was observed between VS and PL components. Our findings point to a central role of visuo-spatial WM during arithmetic problem-solving in young grade-school children and highlight the usefulness of the multi-component Baddeley and Hitch WM model in fractionating the neural correlates of arithmetic problem solving during development. PMID:24212504
Heading-vector navigation based on head-direction cells and path integration.
Kubie, John L; Fenton, André A
2009-05-01
Insect navigation is guided by heading vectors that are computed by path integration. Mammalian navigation models, on the other hand, are typically based on map-like place representations provided by hippocampal place cells. Such models compute optimal routes as a continuous series of locations that connect the current location to a goal. We propose a "heading-vector" model in which head-direction cells or their derivatives serve both as key elements in constructing the optimal route and as the straight-line guidance during route execution. The model is based on a memory structure termed the "shortcut matrix," which is constructed during the initial exploration of an environment when a set of shortcut vectors between sequential pairs of visited waypoint locations is stored. A mechanism is proposed for calculating and storing these vectors that relies on a hypothesized cell type termed an "accumulating head-direction cell." Following exploration, shortcut vectors connecting all pairs of waypoint locations are computed by vector arithmetic and stored in the shortcut matrix. On re-entry, when local view or place representations query the shortcut matrix with a current waypoint and goal, a shortcut trajectory is retrieved. Since the trajectory direction is in head-direction compass coordinates, navigation is accomplished by tracking the firing of head-direction cells that are tuned to the heading angle. Section 1 of the manuscript describes the properties of accumulating head-direction cells. It then shows how accumulating head-direction cells can store local vectors and perform vector arithmetic to perform path-integration-based homing. Section 2 describes the construction and use of the shortcut matrix for computing direct paths between any pair of locations that have been registered in the shortcut matrix. In the discussion, we analyze the advantages of heading-based navigation over map-based navigation. Finally, we survey behavioral evidence that nonhippocampal, heading-based navigation is used in small mammals and humans. Copyright 2008 Wiley-Liss, Inc.
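The vector arithmetic behind the shortcut matrix reduces to summing stored leg vectors and reading off a compass heading. The toy sketch below makes that concrete; the function names are illustrative, not the authors'.

```python
import math

def build_shortcuts(legs):
    """legs[i] is the (dx, dy) displacement from waypoint i to waypoint i+1.
    Returns shortcut[(i, j)] for all i < j, built purely by vector addition."""
    n = len(legs) + 1
    shortcut = {}
    for i in range(n):
        dx = dy = 0.0
        for j in range(i + 1, n):
            dx += legs[j - 1][0]
            dy += legs[j - 1][1]
            shortcut[i, j] = (dx, dy)
    return shortcut

def heading_deg(v):
    """Compass-style heading of a shortcut vector, in degrees [0, 360)."""
    return math.degrees(math.atan2(v[1], v[0])) % 360
```

Querying the matrix with (current waypoint, goal) yields a vector whose heading the head-direction system could then track during execution.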
Calculating with light using a chip-scale all-optical abacus.
Feldmann, J; Stegmaier, M; Gruhler, N; Ríos, C; Bhaskaran, H; Wright, C D; Pernice, W H P
2017-11-02
Machines that simultaneously process and store multistate data at one and the same location can provide a new class of fast, powerful and efficient general-purpose computers. We demonstrate the central element of an all-optical calculator, a photonic abacus, which provides multistate compute-and-store operation by integrating functional phase-change materials with nanophotonic chips. With picosecond optical pulses we perform the fundamental arithmetic operations of addition, subtraction, multiplication, and division, including a carryover into multiple cells. This basic processing unit is embedded into a scalable phase-change photonic network and addressed optically through a two-pulse random access scheme. Our framework provides first steps towards light-based non-von Neumann arithmetic.
Discrete mathematical physics and particle modeling
NASA Astrophysics Data System (ADS)
Greenspan, D.
The theory and application of the arithmetic approach to the foundations of both Newtonian and special relativistic mechanics are explored. Using only arithmetic, a reformulation of the Newtonian approach is given for: gravity; particle modeling of solids, liquids, and gases; conservative modeling of laminar and turbulent fluid flow, heat conduction, and elastic vibration; and nonconservative modeling of heat convection, shock-wave generation, the liquid drop problem, porous flow, the interface motion of a melting solid, soap films, string vibrations, and solitons. An arithmetic reformulation of special relativistic mechanics is given for theory in one space dimension, relativistic harmonic oscillation, and theory in three space dimensions. A speculative quantum mechanical model of vibrations in the water molecule is also discussed.
A 640-MHz 32-megachannel real-time polyphase-FFT spectrum analyzer
NASA Technical Reports Server (NTRS)
Zimmerman, G. A.; Garyantes, M. F.; Grimm, M. J.; Charny, B.
1991-01-01
A polyphase fast Fourier transform (FFT) spectrum analyzer being designed for NASA's Search for Extraterrestrial Intelligence (SETI) Sky Survey at the Jet Propulsion Laboratory is described. By replacing the time domain multiplicative window preprocessing with polyphase filter processing, much of the processing loss of windowed FFTs can be eliminated. Polyphase coefficient memory costs are minimized by effective use of run length compression. Finite word length effects are analyzed, producing a balanced system with 8 bit inputs, 16 bit fixed point polyphase arithmetic, and 24 bit fixed point FFT arithmetic. Fixed point renormalization midway through the computation is seen to be naturally accommodated by the matrix FFT algorithm proposed. Simulation results validate the finite word length arithmetic analysis and the renormalization technique.
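The polyphase front end described above can be sketched in floating-point NumPy (the paper's hardware uses fixed point). Each FFT input bin is the sum of several input samples weighted by one branch of the prototype filter; with a single tap per branch it degenerates to the ordinary windowed FFT.

```python
import numpy as np

def polyphase_fft(x, taps, n_chan):
    """One output frame of an n_chan-channel polyphase filter bank.
    taps: prototype filter of length n_chan * n_taps; x supplies at least
    that many samples."""
    n_taps = len(taps) // n_chan
    xw = x[: n_chan * n_taps] * taps              # weight by the prototype filter
    summed = xw.reshape(n_taps, n_chan).sum(axis=0)  # fold the taps per channel
    return np.fft.fft(summed)
```

The folding step is what replaces the simple multiplicative window and sharpens the per-channel frequency response.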
Optical systolic array processor using residue arithmetic
NASA Technical Reports Server (NTRS)
Jackson, J.; Casasent, D.
1983-01-01
The use of residue arithmetic to increase the accuracy and reduce the dynamic range requirements of optical matrix-vector processors is evaluated. It is determined that matrix-vector operations and iterative algorithms can be performed totally in residue notation. A new parallel residue quantizer circuit is developed which significantly improves the performance of the systolic array feedback processor. Results are presented of a computer simulation of this system used to solve a set of three simultaneous equations.
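Operating "totally in residue notation" is easy to show in software: with pairwise-coprime moduli, addition (and multiplication) proceeds digit-parallel with no carries, and the Chinese Remainder Theorem recovers the conventional result. The moduli below are illustrative, not those of the optical processor.

```python
from math import prod

def to_residues(x, moduli):
    """Residue representation of x with respect to pairwise-coprime moduli."""
    return tuple(x % m for m in moduli)

def add_rns(a, b, moduli):
    """Carry-free, digit-parallel addition: each residue channel is independent."""
    return tuple((x + y) % m for x, y, m in zip(a, b, moduli))

def from_residues(res, moduli):
    """Chinese Remainder Theorem reconstruction (valid for results below prod(moduli))."""
    M = prod(moduli)
    return sum(r * (M // m) * pow(M // m, -1, m) for r, m in zip(res, moduli)) % M
```

The absence of carries between channels is what makes residue arithmetic attractive for parallel optical (and electronic) hardware.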
Bounds for Asian basket options
NASA Astrophysics Data System (ADS)
Deelstra, Griselda; Diallo, Ibrahima; Vanmaele, Michèle
2008-09-01
In this paper we propose pricing bounds for European-style discrete arithmetic Asian basket options in a Black and Scholes framework. We start from methods used for basket options and Asian options. First, we use the general approach for deriving upper and lower bounds for stop-loss premia of sums of non-independent random variables as in Kaas et al. [Upper and lower bounds for sums of random variables, Insurance Math. Econom. 27 (2000) 151-168] or Dhaene et al. [The concept of comonotonicity in actuarial science and finance: theory, Insurance Math. Econom. 31(1) (2002) 3-33]. We generalize the methods in Deelstra et al. [Pricing of arithmetic basket options by conditioning, Insurance Math. Econom. 34 (2004) 55-57] and Vanmaele et al. [Bounds for the price of discrete sampled arithmetic Asian options, J. Comput. Appl. Math. 185(1) (2006) 51-90]. Afterwards we show how to derive an analytical closed-form expression for a lower bound in the non-comonotonic case. Finally, we derive upper bounds for Asian basket options by applying techniques as in Thompson [Fast narrow bounds on the value of Asian options, Working Paper, University of Cambridge, 1999] and Lord [Partially exact and bounded approximations for arithmetic Asian options, J. Comput. Finance 10 (2) (2006) 1-52]. Numerical results are included and on the basis of our numerical tests, we explain which method we recommend depending on moneyness and time-to-maturity.
Extreme D'Hondt and round-off effects in voting computations
NASA Astrophysics Data System (ADS)
Konstantinov, M. M.; Pelova, G. B.
2015-11-01
D'Hondt (or Jefferson) method and Hare-Niemeyer (or Hamilton) method are widely used worldwide for seat allocation in proportional systems. Everything seems to be well known in this area. However, this is not the case. For example, the D'Hondt method can violate the quota rule from above, but this effect has not been analyzed as a function of the number of parties and/or the threshold used. Also, allocation methods are often implemented automatically as computer codes in machine arithmetic, in the belief that following the IEEE standards for double-precision binary arithmetic guarantees correct results. Unfortunately, this may fail not only for double-precision arithmetic (usually producing 15-16 true decimal digits) but also for any relative precision of the underlying binary machine arithmetic. This paper deals with the following new issues: (1) finding conditions (the threshold in particular) under which the D'Hondt seat allocation maximally violates the quota rule, and (2) analyzing the possible influence of rounding errors in automatic implementations of the Hare-Niemeyer method in machine arithmetic. Concerning the first issue, it is known that the maximal deviation of the D'Hondt allocation from the upper quota for the Bulgarian proportional system (240 MPs and a 4% threshold) is 5. This fact was established in 1991. A classical treatment of voting issues is the monograph [1], while electoral problems specific to Bulgaria have been treated in [2, 4]. The effect of the threshold on extreme seat allocations is also analyzed in [3]. Finally, we would like to stress that voting theory may sometimes be mathematically trivial but always has great political impact. This is a strong motivation for further investigations in this area.
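For reference, the basic D'Hondt highest-averages allocation is a few lines of code; thresholds and the quota-rule analysis that are the paper's subject are omitted, and ties here break by dictionary order, an arbitrary choice.

```python
def dhondt(votes, seats):
    """Highest-averages (D'Hondt) allocation: repeatedly award a seat to the
    party with the largest quotient votes / (seats_won + 1)."""
    alloc = {party: 0 for party in votes}
    for _ in range(seats):
        winner = max(votes, key=lambda p: votes[p] / (alloc[p] + 1))
        alloc[winner] += 1
    return alloc
```

Note that the quotients are exact rationals; comparing them in floating point is precisely the kind of machine-arithmetic pitfall the paper warns about.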
Fun and Arithmetic Practice with Days and Dates.
ERIC Educational Resources Information Center
Richbart, Lynn A.
1985-01-01
Two worksheets are given, outlining algorithms to help students determine the day of the week an event will occur and to find the date for Easter. The activity provides computational practice. A computer program for determining Easter is also included. (MNS)
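A standard computus for the Easter date is the anonymous Gregorian ("Meeus/Jones/Butcher") algorithm, sketched below. It is not necessarily the exact procedure on the worksheet or in the article's program, just a well-known equivalent.

```python
def easter(year):
    """Gregorian Easter Sunday as (month, day), via the Meeus/Jones/Butcher computus."""
    a = year % 19                    # position in the 19-year Metonic cycle
    b, c = divmod(year, 100)         # century and year-within-century
    d, e = divmod(b, 4)
    f = (b + 8) // 25
    g = (b - f + 1) // 3
    h = (19 * a + b - d - g + 15) % 30   # epact-related term
    i, k = divmod(c, 4)
    l = (32 + 2 * e + 2 * i - h - k) % 7  # day-of-week correction
    m = (a + 11 * h + 22 * l) // 451
    month, day = divmod(h + l - 7 * m + 114, 31)
    return month, day + 1
```

Every quantity is an integer, so the whole computation is pure arithmetic practice with quotients and remainders.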
A quasi-spectral method for Cauchy problem of 2/D Laplace equation on an annulus
NASA Astrophysics Data System (ADS)
Saito, Katsuyoshi; Nakada, Manabu; Iijima, Kentaro; Onishi, Kazuei
2005-01-01
Real numbers are usually represented in the computer as hexadecimal floating-point numbers with a finite number of digits. Accordingly, numerical analysis often suffers from rounding errors. Rounding errors particularly deteriorate the precision of numerical solutions of inverse and ill-posed problems. We attempt to use multi-precision arithmetic to mitigate these rounding errors. The use of the multi-precision arithmetic system is by courtesy of Dr Fujiwara of Kyoto University. In this paper we try to show the effectiveness of multi-precision arithmetic with two typical examples: the Cauchy problem of the Laplace equation in two dimensions and the shape identification problem by inverse scattering in three dimensions. It is concluded from a few numerical examples that multi-precision arithmetic works well for the resolution of those numerical solutions when combined with the high-order finite difference method for the Cauchy problem and with the eigenfunction expansion method for the inverse scattering problem.
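Python's `decimal` module gives a readily available stand-in for the kind of multi-precision arithmetic described (it is not the system used in the paper). A small demonstration of a cancellation that double precision cannot resolve:

```python
from decimal import Decimal, getcontext

# Double precision: near 1e16 the spacing between adjacent doubles is 2,
# so 1e16 + 1 rounds back to 1e16 and the added unit is lost entirely.
lost = (1e16 + 1) - 1e16

# With 30 significant decimal digits, the unit survives the cancellation.
getcontext().prec = 30
kept = (Decimal(10) ** 16 + 1) - Decimal(10) ** 16
```

In ill-posed problems, such tiny lost contributions are exactly what the solution depends on, which is why raising the working precision helps.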
Arithmetic of five-part of leukocytes based on image process
NASA Astrophysics Data System (ADS)
Li, Yian; Wang, Guoyou; Liu, Jianguo
2007-12-01
This paper applies computer image processing and pattern recognition methods to the problem of automatic classification and counting of leukocytes (white blood cells) in peripheral blood. A new five-part leukocyte differential algorithm based on image processing and pattern recognition is presented, which realizes automatic classification of leukocytes. The first aim is to detect the leukocytes. A major requirement of the whole system is to classify these leukocytes into five classes. The algorithm is based on the saliency mechanism of the eye, processes images sequentially, segments the leukocytes, and extracts features. Using prior knowledge of cell and image shape information, the algorithm first segments probable leukocyte shapes with a new Chamfer-based method and then extracts detailed features. This greatly reduces both the misclassification rate and the amount of computation. The algorithm also has a learning function. The paper additionally presents a new measurement of nucleus shape that provides more accurate information. The algorithm has great application value in clinical blood testing.
Si, Jiwei; Li, Hongxia; Sun, Yan; Xu, Yanli; Sun, Yu
2016-01-01
The present study used the choice/no-choice method to investigate the effect of math anxiety on the strategy used in computational estimation and mental arithmetic tasks and to examine age-related differences in this regard. Fifty-seven fourth graders, 56 sixth graders, and 60 adults were randomly selected to participate in the experiment. Results showed the following: (1) High-anxious individuals were more likely to use a rounding-down strategy in the computational estimation task under the best-choice condition. Additionally, sixth-grade students and adults performed faster than fourth-grade students on the strategy execution parameter. Math anxiety affected response times (RTs) and the accuracy with which strategies were executed. (2) The execution of the partial-decomposition strategy was superior to that of the full-decomposition strategy on the mental arithmetic task. Low-math-anxious persons provided more accurate answers than did high-math-anxious participants under the no-choice condition. This difference was significant for sixth graders. With regard to the strategy selection parameter, the RTs for strategy selection varied with age. PMID:27803685
Van Rinsveld, Amandine; Brunner, Martin; Landerl, Karin; Schiltz, Christine; Ugen, Sonja
2015-01-01
Solving arithmetic problems is a cognitive task that heavily relies on language processing. One might thus wonder whether this language-reliance leads to qualitative differences (e.g., greater difficulties, error types, etc.) in arithmetic for bilingual individuals who frequently have to solve arithmetic problems in more than one language. The present study investigated how proficiency in two languages interacts with arithmetic problem solving throughout language acquisition in adolescents and young adults. Additionally, we examined whether the number word structure that is specific to a given language plays a role in number processing over and above bilingual proficiency. We addressed these issues in a German–French educational bilingual setting, where there is a progressive transition from German to French as teaching language. Importantly, German and French number naming structures differ clearly, as two-digit number names follow a unit-ten order in German, but a ten-unit order in French. We implemented a transversal developmental design in which bilingual pupils from grades 7, 8, 10, 11, and young adults were asked to solve simple and complex additions in both languages. The results confirmed that language proficiency is crucial especially for complex addition computation. Simple additions in contrast can be retrieved equally well in both languages after extended language practice. Additional analyses revealed that over and above language proficiency, language-specific number word structures (e.g., unit-ten vs. ten-unit) also induced significant modulations of bilinguals' arithmetic performances. Taken together, these findings support the view of a strong relation between language and arithmetic in bilinguals. PMID:25821442
NASA Astrophysics Data System (ADS)
Somerville, W. R. C.; Auguié, B.; Le Ru, E. C.
2013-07-01
We propose, describe, and demonstrate a new numerically stable implementation of the extended boundary-condition method (EBCM) to compute the T-matrix for electromagnetic scattering by spheroidal particles. Our approach relies on the fact that for many of the EBCM integrals in the special case of spheroids, a leading part of the integrand integrates exactly to zero, which causes catastrophic loss of precision in numerical computations. This feature was in fact first pointed out by Waterman in the context of acoustic scattering and electromagnetic scattering by infinite cylinders. We have recently studied it in detail in the case of electromagnetic scattering by particles. Based on this study, the principle of our new implementation is therefore to compute all the integrands without the problematic part to avoid the primary cause of loss of precision. Particular attention is also given to choosing the algorithms that minimise loss of precision in every step of the method, without compromising on speed. We show that the resulting implementation can efficiently compute in double precision arithmetic the T-matrix and therefore optical properties of spheroidal particles to a high precision, often down to a remarkable accuracy (10^-10 relative error), over a wide range of parameters that are typically considered problematic. We discuss examples such as high-aspect ratio metallic nanorods and large size parameter (≈35) dielectric particles, which had been previously modelled only using quadruple-precision arithmetic codes.
How to Teach Residue Number System to Computer Scientists and Engineers
ERIC Educational Resources Information Center
Navi, K.; Molahosseini, A. S.; Esmaeildoust, M.
2011-01-01
The residue number system (RNS) has been an important research field in computer arithmetic for many decades, mainly because of its carry-free nature, which can provide high-performance computing architectures with superior delay specifications. Recently, research on RNS has found new directions that have resulted in the introduction of efficient…
Fast associative memory + slow neural circuitry = the computational model of the brain.
NASA Astrophysics Data System (ADS)
Berkovich, Simon; Berkovich, Efraim; Lapir, Gennady
1997-08-01
We propose a computational model of the brain based on a fast associative memory and relatively slow neural processors. In this model, processing time is expensive but memory access is not, so most algorithmic tasks would be accomplished by using large look-up tables rather than by calculating. The essential feature of an associative memory in this context (characteristic of a holographic-type memory) is that it works without an explicit mechanism for resolution of multiple responses. As a result, the slow neuronal processing elements, overwhelmed by the flow of information, operate as a set of templates for ranking the retrieved information. This structure addresses the primary controversy in brain architecture: distributed organization of memory vs. localization of processing centers. The model offers an intriguing explanation of many paradoxical features of the brain architecture, such as integration of sensors (through a DMA mechanism), subliminal perception, universality of software, interrupts, fault tolerance, certain bizarre possibilities for rapid arithmetic, etc. In conventional computer science, this type of computational model has not attracted attention, as it goes against the technological grain by using a working memory faster than the processing elements.
Learner Perceptions of Realism and Magic in Computer Simulations.
ERIC Educational Resources Information Center
Hennessy, Sara; O'Shea, Tim
1993-01-01
Discusses the possible lack of credibility in educational interactive computer simulations. Topics addressed include "Shopping on Mars," a collaborative adventure game for arithmetic calculation that uses direct manipulation in the microworld; the Alternative Reality Kit, a graphical animated environment for creating interactive…
NASA Technical Reports Server (NTRS)
Manos, P.; Turner, L. R.
1972-01-01
Approximations which can be evaluated with precision using floating-point arithmetic are presented. The particular set of approximations developed thus far covers the function TAN and the functions of USASI FORTRAN except SQRT and EXPONENTIATION. These approximations are, furthermore, specialized to particular forms that are especially suited to a computer with a small memory, in that all of the approximations can share one general-purpose subroutine for the evaluation of a polynomial in the square of the working argument.
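The shared subroutine described above can be sketched as follows. This is an illustrative reconstruction in Python, not the original FORTRAN routine: Horner's scheme applied to a polynomial in the square of the working argument, so that odd functions such as TAN can then be approximated as x times such a polynomial.

```python
def poly_in_square(coeffs, x):
    """Evaluate c0 + c1*y + c2*y**2 + ... at y = x*x via Horner's scheme.

    coeffs are given lowest-degree first; every approximation in the set
    can share this one routine because each is a polynomial in x*x.
    """
    y = x * x
    acc = 0.0
    for c in reversed(coeffs):
        acc = acc * y + c
    return acc


# Hypothetical usage: an odd function would be approximated as
# x * poly_in_square(coeffs, x); the coefficient values here are arbitrary.
value = poly_in_square([1.0, 1.0], 2.0)  # 1 + 1*(2*2) = 5.0
```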
A componential model of human interaction with graphs: 1. Linear regression modeling
NASA Technical Reports Server (NTRS)
Gillan, Douglas J.; Lewis, Robert
1994-01-01
Task analyses served as the basis for developing the Mixed Arithmetic-Perceptual (MA-P) model, which proposes (1) that people interacting with common graphs to answer common questions apply a set of component processes (searching for indicators, encoding the values of indicators, performing arithmetic operations on the values, making spatial comparisons among indicators, and responding); and (2) that the type of graph and the user's task determine the combination and order of the components applied (i.e., the processing steps). Two experiments investigated the prediction that response time will be linearly related to the number of processing steps according to the MA-P model. Subjects used line graphs, scatter plots, and stacked bar graphs to answer comparison questions and questions requiring arithmetic calculations. A one-parameter version of the model (with equal weights for all components) and a two-parameter version (with different weights for arithmetic and nonarithmetic processes) accounted for 76%-85% of individual subjects' variance in response time and 61%-68% of the variance taken across all subjects. The discussion addresses possible modifications of the MA-P model, alternative models, and design implications of the MA-P model.
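The model's core prediction, that response time is linear in the number of processing steps, amounts to a simple least-squares line fit. A minimal sketch (the data and variable names below are invented for illustration, not taken from the study):

```python
def fit_line(steps, rts):
    """Ordinary least squares for rt ≈ a + b * steps (one slope, one intercept)."""
    n = len(steps)
    mean_x = sum(steps) / n
    mean_y = sum(rts) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(steps, rts)) \
        / sum((x - mean_x) ** 2 for x in steps)
    a = mean_y - b * mean_x
    return a, b


# Hypothetical data: number of MA-P processing steps vs. response time (s).
a, b = fit_line([1, 2, 3, 4], [3.0, 5.0, 7.0, 9.0])
```

The two-parameter version of the model would instead fit separate slopes for the counts of arithmetic and nonarithmetic steps, a two-predictor regression of the same form.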
NASA Astrophysics Data System (ADS)
Bucha, Blažej; Janák, Juraj
2013-07-01
We present a novel graphical user interface program GrafLab (GRAvity Field LABoratory) for spherical harmonic synthesis (SHS) created in MATLAB®. The program allows one to compute comfortably 38 various functionals of the geopotential up to ultra-high degrees and orders of spherical harmonic expansion. For the most difficult part of the SHS, namely the evaluation of the fully normalized associated Legendre functions (fnALFs), we used three different approaches according to the required maximum degree: (i) the standard forward column method (up to maximum degree 1800, in some cases up to degree 2190); (ii) the modified forward column method combined with Horner's scheme (up to maximum degree 2700); (iii) the extended-range arithmetic (up to an arbitrary maximum degree). For the maximum degree 2190, the SHS with fnALFs evaluated using the extended-range arithmetic approach takes only approximately 2-3 times longer than its standard arithmetic counterpart, i.e. the standard forward column method. In GrafLab, the functionals of the geopotential can be evaluated on a regular grid or point-wise, while the input coordinates can either be read from a data file or entered manually. For the computation on a regular grid we decided to apply the lumped coefficients approach due to the significant time-efficiency of this method. Furthermore, if a full variance-covariance matrix of the spherical harmonic coefficients is available, it is possible to compute the commission errors of the functionals. When computing on a regular grid, the output functionals or their commission errors may be depicted on a map using an automatically selected cartographic projection.
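The extended-range arithmetic mentioned in (iii) works around double-precision underflow in the fnALF recursions by carrying an auxiliary integer exponent alongside each double and renormalizing whenever the fraction leaves a safe band. A sketch of the general idea only (not GrafLab's actual code; the threshold 2^500 is an assumed choice):

```python
BIG = 2.0 ** 500    # renormalization threshold (assumed; any safe power of 2 works)
BIGI = 2.0 ** -500

def xnorm(frac, exp):
    """Renormalize an extended-range number; the true value is frac * 2**(500 * exp).

    Keeping |frac| inside [2**-500, 2**500) prevents the silent underflow to zero
    that plagues high-degree Legendre recursions in plain double precision.
    """
    if frac == 0.0:
        return 0.0, 0
    while abs(frac) >= BIG:
        frac *= BIGI
        exp += 1
    while abs(frac) < BIGI:
        frac *= BIG
        exp -= 1
    return frac, exp
```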
Representation of natural numbers in quantum mechanics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benioff, Paul
2001-03-01
This paper represents one approach to making explicit some of the assumptions and conditions implied in the widespread representation of numbers by composite quantum systems. Any nonempty set and associated operations is a set of natural numbers or a model of arithmetic if the set and operations satisfy the axioms of number theory or arithmetic. This paper is limited to k-ary representations of length L and to the axioms for arithmetic modulo k^L. A model of the axioms is described based on an abstract L-fold tensor product Hilbert space H^arith. Unitary maps of this space onto a physical-parameter-based product space H^phy are then described. Each of these maps makes states in H^phy, and the induced operators, a model of the axioms. Consequences of the existence of many of these maps are discussed along with the dependence of Grover's and Shor's algorithms on these maps. The importance of the main physical requirement, that the basic arithmetic operations are efficiently implementable, is discussed. This condition states that there exist physically realizable Hamiltonians that can implement the basic arithmetic operations and that the space-time and thermodynamic resources required are polynomial in L.
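The classical counterpart of arithmetic modulo k^L on length-L k-ary strings is easy to make concrete: digits (least significant first) are added digitwise with carry, and the carry out of the top digit is discarded. A small sketch of that standard construction (not of the quantum model itself):

```python
def to_digits(n, k, L):
    """k-ary representation of n mod k**L, length L, least significant digit first."""
    return [(n // k ** i) % k for i in range(L)]

def add_mod(a_digits, b_digits, k):
    """Digitwise addition with carry; dropping the final carry gives arithmetic mod k**L."""
    out, carry = [], 0
    for a, b in zip(a_digits, b_digits):
        s = a + b + carry
        out.append(s % k)
        carry = s // k
    return out


# 5 + 6 = 11 ≡ 3 (mod 2**3), i.e. [1, 0, 1] + [0, 1, 1] -> [1, 1, 0]
result = add_mod(to_digits(5, 2, 3), to_digits(6, 2, 3), 2)
```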
How to interpret cognitive training studies: A reply to Lindskog & Winman
Park, Joonkoo; Brannon, Elizabeth M.
2017-01-01
In our previous studies, we demonstrated that repeated training on an approximate arithmetic task selectively improves symbolic arithmetic performance (Park & Brannon, 2013, 2014). We proposed that mental manipulation of quantity is the common cognitive component between approximate arithmetic and symbolic arithmetic, driving the causal relationship between the two. In a commentary to our work, Lindskog and Winman argue that there is no evidence of performance improvement during approximate arithmetic training and that this challenges the proposed causal relationship between approximate arithmetic and symbolic arithmetic. Here, we argue that causality in cognitive training experiments is interpreted from the selectivity of transfer effects and does not hinge upon improved performance in the training task. This is because changes in the unobservable cognitive elements underlying the transfer effect may not be observable from performance measures in the training task. We also question the validity of Lindskog and Winman’s simulation approach for testing for a training effect, given that simulations require a valid and sufficient model of a decision process, which is often difficult to achieve. Finally we provide an empirical approach to testing the training effects in adaptive training. Our analysis reveals new evidence that approximate arithmetic performance improved over the course of training in Park and Brannon (2014). We maintain that our data supports the conclusion that approximate arithmetic training leads to improvement in symbolic arithmetic driven by the common cognitive component of mental quantity manipulation. PMID:26972469
Moll, Kristina; Snowling, Margaret J.; Göbel, Silke M.; Hulme, Charles
2015-01-01
Two important foundations for learning are language and executive skills. Data from a longitudinal study tracking the development of 93 children at family-risk of dyslexia and 76 controls was used to investigate the influence of these skills on the development of arithmetic. A two-group longitudinal path model assessed the relationships between language and executive skills at 3–4 years, verbal number skills (counting and number knowledge) and phonological processing skills at 4–5 years, and written arithmetic in primary school. The same cognitive processes accounted for variability in arithmetic skills in both groups. Early language and executive skills predicted variations in preschool verbal number skills, which in turn, predicted arithmetic skills in school. In contrast, phonological awareness was not a predictor of later arithmetic skills. These results suggest that verbal and executive processes provide the foundation for verbal number skills, which in turn influence the development of formal arithmetic skills. Problems in early language development may explain the comorbidity between reading and mathematics disorder. PMID:26412946
Kalinina, Elizabeth A
2013-08-01
The explicit Euler method is known to be easy and effective to implement for many applications. This article extends results previously obtained for systems of linear differential equations with constant coefficients to arbitrary systems of ordinary differential equations. An optimal step size (providing minimum total error) is calculated at each step of Euler's method. Several examples of solving stiff systems are included. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Crossbar Nanocomputer Development
2012-04-01
their utilization. Areas such as neuromorphic computing, signal processing, arithmetic processing, and crossbar computing are only some of the...due to its intrinsic, network-on- chip flexibility to re-route around defects. Preliminary efforts in crossbar computing have been demonstrated by...they approach their scaling limits [2]. Other applications that memristive devices are suited for include FPGA [3], encryption [4], and neuromorphic
Exact and efficient simulation of concordant computation
NASA Astrophysics Data System (ADS)
Cable, Hugo; Browne, Daniel E.
2015-11-01
Concordant computation is a circuit-based model of quantum computation for mixed states that assumes that all correlations within the register are discord-free (i.e. the correlations are essentially classical) at every step of the computation. The question of whether concordant computation always admits efficient simulation by a classical computer was first considered by Eastin in arXiv:quant-ph/1006.4402v1, where an answer in the affirmative was given for circuits consisting only of one- and two-qubit gates. Building on this work, we develop the theory of classical simulation of concordant computation. We present a new framework for understanding such computations, argue that a larger class of concordant computations admits efficient simulation, and provide alternative proofs for the main results of arXiv:quant-ph/1006.4402v1 with an emphasis on the exactness of simulation, which is crucial for this model. We include a detailed analysis of the arithmetic complexity of solving equations in the simulation, as well as extensions to larger gates and qudits. We explore the limitations of our approach, and discuss the challenges faced in developing efficient classical simulation algorithms for all concordant computations.
Pontius, A A
1993-04-01
Potentially negative long-term consequences are emphasized in four areas, if specific neuromaturational, neurophysiological, and neuropsychological facts within a neurodevelopmental and ecological context are neglected: (1) normal functional levels of child development and maturational lag of the frontal lobe system in "Attention Deficit Disorder"; (2) education (reading/writing and arithmetic); (3) assessment of cognitive functioning in hunter-gatherer populations, specifically modified in the service of their survival; and (4) the construction of computer models of the brain that neglect consciousness and intentionality, as criticized recently by Searle.
Mordell integrals and Giveon-Kutasov duality
NASA Astrophysics Data System (ADS)
Giasemidis, Georgios; Tierz, Miguel
2016-01-01
We solve, for finite N, the matrix model of supersymmetric U(N) Chern-Simons theory coupled to N_f massive hypermultiplets of R-charge 1/2, together with a Fayet-Iliopoulos term. We compute the partition function by identifying it with a determinant of a Hankel matrix, whose entries are parametric derivatives (of order N_f - 1) of Mordell integrals. We obtain finite Gauss sum expressions for the partition functions. We also apply these results to obtain an exhaustive test of Giveon-Kutasov (GK) duality in the N=3 setting, by systematic computation of the matrix models involved. The phase factor that arises in the duality is then obtained explicitly. We give an expression characterized by modular arithmetic (mod 4) behavior that holds for all tested values of the parameters (checked up to N_f = 12 flavours).
The calculating brain: an fMRI study.
Rickard, T C; Romero, S G; Basso, G; Wharton, C; Flitman, S; Grafman, J
2000-01-01
To explore brain areas involved in basic numerical computation, functional magnetic resonance imaging (fMRI) scanning was performed on college students during performance of three tasks: simple arithmetic, numerical magnitude judgment, and a perceptual-motor control task. For the arithmetic task relative to the other tasks, results for all eight subjects revealed bilateral activation in Brodmann's area 44, in dorsolateral prefrontal cortex (areas 9 and 10), in inferior and superior parietal areas, and in lingual and fusiform gyri. Activation was stronger on the left for all subjects, but only at Brodmann's area 44 and the parietal cortices. No activation was observed in the arithmetic task in several other areas previously implicated for arithmetic, including the angular and supramarginal gyri and the basal ganglia. In fact, the angular and supramarginal gyri were significantly deactivated by the verification task relative to both the magnitude judgment and control tasks for every subject. Areas activated by the magnitude task relative to the control were more variable, but in five subjects included bilateral inferior parietal cortex. These results confirm some existing hypotheses regarding the neural basis of numerical processes, invite revision of others, and suggest productive lines for future investigation.
Humphries, Ailsa; Chen, Zhe; Neumann, Ewald
2017-01-01
Previous studies have shown that stimulus repetition can lead to reliable behavioral improvements. Although this repetition priming (RP) effect has been reported in a number of paradigms using a variety of stimuli including words, objects, and faces, only a few studies have investigated mathematical cognition involving arithmetic computation, and no prior research has directly compared RP effects in a linguistic task with an arithmetic task. In two experiments, we used a within-subjects design to investigate and compare the magnitude of RP, and the effects of changing the color or the response hand for repeated, otherwise identical, stimuli in a word and an arithmetic categorization task. The results show that the magnitude of RP was comparable between the two tasks and that changing the color or the response hand had a negligible effect on priming in either task. These results extend previous findings in mathematical cognition. They also indicate that priming does not vary with stimulus domain. The implications of the results are discussed with reference to both facilitation of component processes and episodic memory retrieval of stimulus-response binding.
Deaño, Manuel Deaño; Alfonso, Sonia; Das, Jagannath Prasad
2015-03-01
This study reports the cognitive and arithmetic improvement produced by a training intervention based on the PASS Remedial Program (PREP), which aims to improve the specific cognitive processes underlying academic skills such as arithmetic. For this purpose, a group of 20 students from the last four grades of Primary Education was divided into two groups. One group (n=10) received training in the program and the other served as control. Students were assessed pre- and post-intervention on the PASS cognitive processes (planning, attention, simultaneous and successive processing), general level of intelligence, and arithmetic performance in calculation and problem solving. Performance of children from the experimental group was significantly higher than that of the control group in cognitive processes and arithmetic. This joint enhancement of cognitive and arithmetic processes was a result of the operationalization of training, which promotes the encoding task, attention and planning, and learning by induction, mediation and verbalization. The implications of this are discussed. Copyright © 2014 Elsevier Ltd. All rights reserved.
A Single-Boundary Accumulator Model of Response Times in an Addition Verification Task
Faulkenberry, Thomas J.
2017-01-01
Current theories of mathematical cognition offer competing accounts of the interplay between encoding and calculation in mental arithmetic. Additive models propose that manipulations of problem format do not interact with the cognitive processes used in calculation. Alternatively, interactive models suppose that format manipulations have a direct effect on calculation processes. In the present study, we tested these competing models by fitting participants' RT distributions in an arithmetic verification task with a single-boundary accumulator model (the shifted Wald distribution). We found that in addition to providing a more complete description of RT distributions, the accumulator model afforded a potentially more sensitive test of format effects. Specifically, we found that format affected drift rate, which implies that problem format has a direct impact on calculation processes. These data give further support for an interactive model of mental arithmetic. PMID:28769853
Design of Improved Arithmetic Logic Unit in Quantum-Dot Cellular Automata
NASA Astrophysics Data System (ADS)
Heikalabad, Saeed Rasouli; Gadim, Mahya Rahimpour
2018-06-01
Quantum-dot cellular automata (QCA) have been proposed as a replacement to overcome the limitations of CMOS technology. An arithmetic logic unit (ALU) is a basic structure of any computing device. In this paper, a design of an improved single-bit arithmetic logic unit in quantum-dot cellular automata is presented. The proposed ALU structure has AND, OR, XOR and ADD operations. A unique 2:1 multiplexer, an ultra-efficient two-input XOR and a low-complexity full adder are used in the proposed structure. An extended design of this structure for a two-bit ALU is also provided. The proposed ALU structure is simulated with QCADesigner and the simulation results are evaluated. The evaluation shows that the proposed design has the best performance in terms of area, complexity and delay compared to previous designs.
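The logical behaviour of such a single-bit ALU (operation select over AND, OR, XOR, and ADD with carry) can be sketched in software. This models only the truth table, not the QCA cell layout or the paper's multiplexer and adder designs:

```python
def alu_1bit(op, a, b, cin=0):
    """One-bit ALU: op selects the operation; returns (result_bit, carry_out)."""
    if op == "AND":
        return a & b, 0
    if op == "OR":
        return a | b, 0
    if op == "XOR":
        return a ^ b, 0
    if op == "ADD":
        s = a + b + cin          # full adder: sum bit and carry out
        return s & 1, s >> 1
    raise ValueError("unknown op: " + op)
```

A two-bit ALU, as in the paper's extended design, chains two such cells through the carry line.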
The generative basis of natural number concepts.
Leslie, Alan M; Gelman, Rochel; Gallistel, C R
2008-06-01
Number concepts must support arithmetic inference. Using this principle, it can be argued that the integer concept of exactly ONE is a necessary part of the psychological foundations of number, as is the notion of exact equality, that is, perfect substitutability. The inability to support reasoning involving exact equality is a shortcoming in current theories about the development of numerical reasoning. A simple innate basis for the natural number concepts can be proposed that embodies the arithmetic principle, supports exact equality and also enables computational compatibility with real- or rational-valued mental magnitudes.
Fast Fuzzy Arithmetic Operations
NASA Technical Reports Server (NTRS)
Hampton, Michael; Kosheleva, Olga
1997-01-01
In engineering applications of fuzzy logic, the main goal is not to simulate the way the experts really think, but to come up with a good engineering solution that would (ideally) be better than the expert's control. In such applications, it makes perfect sense to restrict ourselves to simplified approximate expressions for membership functions. If we need to perform arithmetic operations with the resulting fuzzy numbers, then we can use simple and fast algorithms that are known for operations with simple membership functions. In other applications, especially the ones that are related to the humanities, simulating experts is one of the main goals. In such applications, we must use membership functions that capture every nuance of the expert's opinion; these functions are therefore complicated, and fuzzy arithmetic operations with the corresponding fuzzy numbers become a computational problem. In this paper, we design a new algorithm for performing such operations. This algorithm is applicable in the case when the negative logarithms -log(u(x)) of the membership functions u(x) are convex, and reduces computation time from O(n^2) to O(n log(n)) (where n is the number of points x at which we know the membership functions u(x)).
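The "simple and fast algorithms" for simple membership functions are simple indeed. For example, for triangular fuzzy numbers written as (left, peak, right), addition is componentwise. This is a standard textbook fact shown for context, not the paper's new O(n log(n)) algorithm for general convex-log membership functions:

```python
def tri_add(u, v):
    """Sum of triangular fuzzy numbers (left, peak, right): endpoints and peaks add."""
    return (u[0] + v[0], u[1] + v[1], u[2] + v[2])


# "about 1" + "about 2" = "about 3", with the supports adding as well
total = tri_add((0, 1, 2), (1, 2, 3))
```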
A new version of the CADNA library for estimating round-off error propagation in Fortran programs
NASA Astrophysics Data System (ADS)
Jézéquel, Fabienne; Chesneaux, Jean-Marie; Lamotte, Jean-Luc
2010-11-01
The CADNA library enables one to estimate, using a probabilistic approach, round-off error propagation in any simulation program. CADNA provides new numerical types, the so-called stochastic types, on which round-off errors can be estimated. Furthermore CADNA contains the definition of arithmetic and relational operators which are overloaded for stochastic variables and the definition of mathematical functions which can be used with stochastic arguments. On 64-bit processors, depending on the rounding mode chosen, the mathematical library associated with the GNU Fortran compiler may provide incorrect results or generate severe bugs. Therefore the CADNA library has been improved to enable the numerical validation of programs on 64-bit processors. New version program summaryProgram title: CADNA Catalogue identifier: AEAT_v1_1 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEAT_v1_1.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 28 488 No. of bytes in distributed program, including test data, etc.: 463 778 Distribution format: tar.gz Programming language: Fortran NOTE: A C++ version of this program is available in the Library as AEGQ_v1_0 Computer: PC running LINUX with an i686 or an ia64 processor, UNIX workstations including SUN, IBM Operating system: LINUX, UNIX Classification: 6.5 Catalogue identifier of previous version: AEAT_v1_0 Journal reference of previous version: Comput. Phys. Commun. 178 (2008) 933 Does the new version supersede the previous version?: Yes Nature of problem: A simulation program which uses floating-point arithmetic generates round-off errors, due to the rounding performed at each assignment and at each arithmetic operation. Round-off error propagation may invalidate the result of a program. 
The CADNA library enables one to estimate round-off error propagation in any simulation program and to detect all numerical instabilities that may occur at run time. Solution method: The CADNA library [1-3] implements Discrete Stochastic Arithmetic [4,5] which is based on a probabilistic model of round-off errors. The program is run several times with a random rounding mode generating different results each time. From this set of results, CADNA estimates the number of exact significant digits in the result that would have been computed with standard floating-point arithmetic. Reasons for new version: On 64-bit processors, the mathematical library associated with the GNU Fortran compiler may provide incorrect results or generate severe bugs with rounding towards -∞ and +∞, which the random rounding mode is based on. Therefore a particular definition of mathematical functions for stochastic arguments has been included in the CADNA library to enable its use with the GNU Fortran compiler on 64-bit processors. Summary of revisions: If CADNA is used on a 64-bit processor with the GNU Fortran compiler, mathematical functions are computed with rounding to the nearest, otherwise they are computed with the random rounding mode. It must be pointed out that the knowledge of the accuracy of the stochastic argument of a mathematical function is never lost. Restrictions: CADNA requires a Fortran 90 (or newer) compiler. In the program to be linked with the CADNA library, round-off errors on complex variables cannot be estimated. Furthermore array functions such as product or sum must not be used. Only the arithmetic operators and the abs, min, max and sqrt functions can be used for arrays. Additional comments: In the library archive, users are advised to read the INSTALL file first. The doc directory contains a user guide named ug.cadna.pdf which shows how to control the numerical accuracy of a program using CADNA, provides installation instructions and describes test runs. 
The source code, which is located in the src directory, consists of one assembly language file (cadna_rounding.s) and eighteen Fortran language files. cadna_rounding.s is a symbolic link to the assembly file corresponding to the processor and the Fortran compiler used. This assembly file contains routines which are frequently called in the CADNA Fortran files to change the rounding mode. The Fortran language files contain the definition of the stochastic types on which the control of accuracy can be performed, CADNA specific functions (for instance to enable or disable the detection of numerical instabilities), the definition of arithmetic and relational operators which are overloaded for stochastic variables and the definition of mathematical functions which can be used with stochastic arguments. The examples directory contains seven test runs which illustrate the use of the CADNA library and the benefits of Discrete Stochastic Arithmetic. Running time: The version of a code which uses CADNA runs at least three times slower than its floating-point version. This cost depends on the computer architecture and can be higher if the detection of numerical instabilities is enabled. In this case, the cost may be related to the number of instabilities detected.
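The Discrete Stochastic Arithmetic idea behind CADNA can be caricatured in a few lines: run the same computation several times under randomly perturbed rounding, then count how many decimal digits the runs share. This is a toy sketch of the principle only; CADNA itself overloads operators on stochastic types and switches the hardware rounding mode, and the digit estimate below is a crude spread-based proxy:

```python
import math

def common_digits(samples):
    """Estimate the number of decimal digits common to several perturbed runs."""
    mean = sum(samples) / len(samples)
    if mean == 0.0:
        return 0
    spread = max(abs(s - mean) for s in samples)
    if spread == 0.0:
        return 15  # all runs agree to roughly double precision
    return max(0, int(-math.log10(spread / abs(mean))))
```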
Matrix computations in MACSYMA
NASA Technical Reports Server (NTRS)
Wang, P. S.
1977-01-01
Facilities built into MACSYMA for manipulating matrices with numeric or symbolic entries are described. Computations are done exactly, keeping symbols as symbols. Topics discussed include how to form a matrix and create other matrices by transforming existing matrices within MACSYMA; arithmetic and other computation with matrices; and user control of computational processes through the use of optional variables. Two algorithms designed for sparse matrices are given. The computing times of several different ways to compute the determinant of a matrix are compared.
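The exact (non-floating-point) style of matrix computation described above can be imitated for numeric entries with exact rationals. A hedged sketch of a cofactor-expansion determinant, one of the classical methods whose computing times such comparisons typically include (this is an illustration, not MACSYMA's algorithm):

```python
from fractions import Fraction

def det(m):
    """Exact determinant by cofactor expansion along the first row.

    Exponential in the matrix size, so only practical for small matrices,
    but every intermediate value stays an exact rational.
    """
    n = len(m)
    if n == 1:
        return m[0][0]
    total = Fraction(0)
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total
```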
File compression and encryption based on LLS and arithmetic coding
NASA Astrophysics Data System (ADS)
Yu, Changzhi; Li, Hengjian; Wang, Xiyu
2018-03-01
We propose a file compression model based on arithmetic coding. First, the original symbols to be encoded are input to the encoder one by one. We produce a set of chaotic sequences by using the Logistic and sine chaos system (LLS), and the values of these chaotic sequences randomly modify the upper and lower limits of the current symbol's probability. To achieve the purpose of encryption, we modify the upper and lower limits of all character probabilities when encoding each symbol. Experimental results show that the proposed model can achieve the purpose of data encryption while achieving almost the same compression efficiency as arithmetic coding.
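The core that the chaotic key perturbs is the usual interval-narrowing update of arithmetic coding. A plain, unencrypted sketch with fixed symbol probabilities (the LLS-driven modification of the probability bounds is omitted; any value inside the final interval identifies the message):

```python
def encode_interval(symbols, probs):
    """Narrow [low, high) by each symbol's cumulative-probability sub-range."""
    # build cumulative ranges per symbol, e.g. {'a': (0.0, 0.5), 'b': (0.5, 1.0)}
    cum, acc = {}, 0.0
    for s, p in probs.items():
        cum[s] = (acc, acc + p)
        acc += p
    low, high = 0.0, 1.0
    for s in symbols:
        lo, hi = cum[s]
        span = high - low
        low, high = low + span * lo, low + span * hi
    return low, high


interval = encode_interval("ab", {"a": 0.5, "b": 0.5})
```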
Power, Sarah D; Kushki, Azadeh; Chau, Tom
2011-12-01
Near-infrared spectroscopy (NIRS) has recently been investigated as a non-invasive brain-computer interface (BCI) for individuals with severe motor impairments. For the most part, previous research has investigated the development of NIRS-BCIs operating under synchronous control paradigms, which require the user to exert conscious control over their mental activity whenever the system is vigilant. Though functional, this is mentally demanding and an unnatural way to communicate. An attractive alternative to the synchronous control paradigm is system-paced control, in which users are required to consciously modify their brain activity only when they wish to affect the BCI output, and can remain in a more natural, 'no-control' state at all other times. In this study, we investigated the feasibility of a system-paced NIRS-BCI with one intentional control (IC) state corresponding to the performance of either mental arithmetic or mental singing. In particular, this involved determining if these tasks could be distinguished, individually, from the unconstrained 'no-control' state. Deploying a dual-wavelength frequency domain near-infrared spectrometer, we interrogated nine sites around the frontopolar locations (International 10-20 System) while eight able-bodied adults performed mental arithmetic and mental singing to answer multiple-choice questions within a system-paced paradigm. With a linear classifier trained on a six-dimensional feature set, an overall classification accuracy of 71.2% across participants was achieved for the mental arithmetic versus no-control classification problem. While the mental singing versus no-control classification was less successful across participants (62.7% on average), four participants did attain accuracies well in excess of chance, three of which were above 70%. Analyses were performed offline. 
Collectively, these results are encouraging, and demonstrate the potential of a system-paced NIRS-BCI with one IC state corresponding to either mental arithmetic or mental singing.
Teichmann, Marc; Gaura, Véronique; Démonet, Jean-François; Supiot, Frédéric; Delliaux, Marie; Verny, Christophe; Renou, Pierre; Remy, Philippe; Bachoud-Lévi, Anne-Catherine
2008-04-01
The role of sub-cortical structures in language processing, and more specifically of the striatum, remains controversial. In line with psycholinguistic models stating that language processing implies both the recovery of lexical information and the application of combinatorial rules, the striatum has been claimed to be involved either in the former component or in the latter. The present study reconciles these conflicting views by showing the striatum's involvement in both language processes, depending on distinct striatal sub-regions. Using PET scanning in a model of striatal disorders, namely Huntington's disease (HD), we correlated metabolic data of 31 early stage HD patients regarding different striatal sub-regions with behavioural scores on three rule/lexicon tasks drawn from word morphology, syntax and from a non-linguistic domain, namely arithmetic. Behavioural results reflected impairment on both processing aspects, while deficits predominated on rule application. Both correlated with the left striatum but involved distinct striatal sub-regions. We suggest that the left striatum encompasses linguistic and arithmetic circuits, which differ with respect to their anatomical and functional specification, comprising ventrally located regions dedicated to rule computations and more dorsal portions pertaining to lexical devices.
The MONGOOSE Rational Arithmetic Toolbox.
Le, Christopher; Chindelevitch, Leonid
2018-01-01
The modeling of metabolic networks has seen a rapid expansion following the complete sequencing of thousands of genomes. The constraint-based modeling framework has emerged as one of the most popular approaches to reconstructing and analyzing genome-scale metabolic models. Its main assumption is that of a quasi-steady-state, requiring that the production of each internal metabolite be balanced by its consumption. However, due to the multiscale nature of the models, the large number of reactions and metabolites, and the use of floating-point arithmetic for the stoichiometric coefficients, ensuring that this assumption holds can be challenging. The MONGOOSE toolbox addresses this problem by using rational arithmetic, thus ensuring that models are analyzed in a reproducible manner and consistently with modeling assumptions. In this chapter we present a protocol for the complete analysis of a metabolic network model using the MONGOOSE toolbox, via its newly developed GUI, and describe how it can be used as a model-checking platform both during and after the model construction process.
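The round-off issue MONGOOSE guards against can be seen in a few lines; the following sketch (illustrative, not MONGOOSE code) checks a one-metabolite steady-state balance with floats and with Python's exact rationals:

```python
from fractions import Fraction

# Illustration of why exact rational arithmetic matters for the
# quasi-steady-state check (not MONGOOSE's actual implementation):
# production and consumption of a metabolite must cancel exactly.
def is_balanced_float(coeffs, fluxes):
    return sum(c * f for c, f in zip(coeffs, fluxes)) == 0.0

def is_balanced_exact(coeffs, fluxes):
    total = sum(Fraction(c) * Fraction(f) for c, f in zip(coeffs, fluxes))
    return total == 0

# Stoichiometric coefficients 0.1 and -0.3 with fluxes 3 and 1 balance
# exactly (0.1*3 - 0.3*1 = 0), but binary floating point disagrees.
exact = is_balanced_exact(["1/10", "-3/10"], ["3", "1"])      # True
approx = is_balanced_float([0.1, -0.3], [3.0, 1.0])           # False: round-off
```

Because 0.1 and 0.3 have no finite binary representation, the float check reports a tiny nonzero residual; the rational check cancels exactly.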
Connaughton, Veronica M; Amiruddin, Azhani; Clunies-Ross, Karen L; French, Noel; Fox, Allison M
2017-05-01
A major model of the cerebral circuits that underpin arithmetic calculation is the triple-code model of numerical processing. This model proposes that the lateralization of mathematical operations is organized across three circuits: a left-hemispheric dominant verbal code, a bilateral magnitude representation of numbers, and a bilateral Arabic number code. This study simultaneously measured the blood flow of both middle cerebral arteries using functional transcranial Doppler ultrasonography to assess hemispheric specialization during the performance of both language and arithmetic tasks. The propositions of the triple-code model were assessed in a non-clinical adult group by measuring cerebral blood flow during the performance of multiplication and subtraction problems. Participants were 17 adults aged between 18 and 27 years. We obtained laterality indices for each type of mathematical operation and compared these in participants with left-hemispheric language dominance. It was hypothesized that blood flow would lateralize to the left hemisphere during the performance of multiplication operations, but would not lateralize during the performance of subtraction operations. Hemispheric blood flow was significantly left lateralized during the multiplication task, but was not lateralized during the subtraction task. Compared to high spatial resolution neuroimaging techniques previously used to measure cerebral lateralization, functional transcranial Doppler ultrasonography is a cost-effective measure that provides a superior temporal representation of arithmetic cognition. These results provide support for the triple-code model of arithmetic processing and offer complementary evidence that multiplication operations are processed differently in the adult brain compared to subtraction operations. Copyright © 2017 Elsevier B.V. All rights reserved.
More Than the Rules of Precedence
ERIC Educational Resources Information Center
Liang, Yawei
2005-01-01
In a fundamental computer-programming course, such as CSE101, questions about how to evaluate an arithmetic expression are frequently used to check if our students know the rules of precedence. The author uses two of our final examination questions to show that more knowledge of computer science is needed to answer them correctly. Furthermore,…
Algebraic Functions, Computer Programming, and the Challenge of Transfer
ERIC Educational Resources Information Center
Schanzer, Emmanuel Tanenbaum
2015-01-01
Students' struggles with algebra are well documented. Prior to the introduction of functions, mathematics is typically focused on applying a set of arithmetic operations to compute an answer. The introduction of functions, however, marks the point at which mathematics begins to focus on building up abstractions as a way to solve complex problems.…
Business and Technology Concepts--Business Computations. Teacher's Guide.
ERIC Educational Resources Information Center
Illinois State Board of Education, Springfield. Dept. of Adult, Vocational and Technical Education.
This Illinois State Board of Education teacher's guide on business computations is for students enrolled in the 9th or 10th grade. The course provides a foundation in arithmetic skills and their applications to common business problems for the senior high school vocational business courses. The curriculum guide includes teacher and student…
Young Children "Solve for X" Using the Approximate Number System
ERIC Educational Resources Information Center
Kibbe, Melissa M.; Feigenson, Lisa
2015-01-01
The Approximate Number System (ANS) supports basic arithmetic computation in early childhood, but it is unclear whether the ANS also supports the more complex computations introduced later in formal education. "Solving for x" in addend-unknown problems is notoriously difficult for children, who often struggle with these types of problems…
Math anxiety differentially affects WAIS-IV arithmetic performance in undergraduates.
Buelow, Melissa T; Frakey, Laura L
2013-06-01
Previous research has shown that math anxiety can influence the math performance level; however, to date, it is unknown whether math anxiety influences performance on working memory tasks during neuropsychological evaluation. In the present study, 172 undergraduate students completed measures of math achievement (the Math Computation subtest from the Wide Range Achievement Test-IV), math anxiety (the Math Anxiety Rating Scale-Revised), general test anxiety (from the Adult Manifest Anxiety Scale-College version), and the three Working Memory Index tasks from the Wechsler Adult Intelligence Scale-IV Edition (WAIS-IV; Digit Span [DS], Arithmetic, Letter-Number Sequencing [LNS]). Results indicated that math anxiety predicted performance on Arithmetic, but not DS or LNS, above and beyond the effects of gender, general test anxiety, and math performance level. Our findings suggest that math anxiety can negatively influence WAIS-IV working memory subtest scores. Implications for clinical practice include the utilization of LNS in individuals expressing high math anxiety.
Desirable floating-point arithmetic and elementary functions for numerical computation
NASA Technical Reports Server (NTRS)
Hull, T. E.
1978-01-01
The topics considered are: (1) the base of the number system, (2) precision control, (3) number representation, (4) arithmetic operations, (5) other basic operations, (6) elementary functions, and (7) exception handling. The possibility of doing without fixed-point arithmetic is also mentioned. The specifications are intended to be entirely at the level of a programming language such as FORTRAN. The emphasis is on convenience and simplicity from the user's point of view. Conforming to such specifications would have obvious beneficial implications for the portability of numerical software, and for proving programs correct, as well as attempting to provide facilities which are most suitable for the user. The specifications are not complete in every detail, but it is intended that they be complete in spirit - some further details, especially syntactic details, would have to be provided, but the proposals are otherwise relatively complete.
System balance analysis for vector computers
NASA Technical Reports Server (NTRS)
Knight, J. C.; Poole, W. G., Jr.; Voight, R. G.
1975-01-01
The availability of vector processors capable of sustaining computing rates of 10 to the 8th power arithmetic results per second raises the question of whether peripheral storage devices representing current technology can keep such processors supplied with data. By examining the solution of a large banded linear system on these computers, it was found that even under ideal conditions, the processors will frequently be waiting for problem data.
Computation of transform domain covariance matrices
NASA Technical Reports Server (NTRS)
Fino, B. J.; Algazi, V. R.
1975-01-01
It is often of interest in applications to compute the covariance matrix of a random process transformed by a fast unitary transform. Here, the recursive definition of fast unitary transforms is used to derive recursive relations for the covariance matrices of the transformed process. These relations lead to fast methods of computation of covariance matrices and to substantial reductions of the number of arithmetic operations required.
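The underlying identity is that for a fixed transform T, Cov(Tx) = T Cov(x) T^H; the recursive structure of fast transforms then reduces the arithmetic operation count. A minimal pure-Python sketch of the identity itself (illustrative only; the paper's fast recursive relations are not reproduced):

```python
# If y = T x for a fixed transform T, then Cov(y) = T Cov(x) T^H.
# Minimal pure-Python sketch for a real 2x2 transform (illustrative only).
def matmul(a, b):
    n, m, p = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def transpose(a):
    return [list(row) for row in zip(*a)]

def transform_covariance(T, C):
    """Covariance of the transformed process: T C T^T (real case)."""
    return matmul(matmul(T, C), transpose(T))

# 2-point normalized Haar/Hadamard transform applied to a Toeplitz covariance
h = 0.5 ** 0.5
T = [[h, h], [h, -h]]
C = [[2.0, 1.0], [1.0, 2.0]]       # covariance of the original process
Cy = transform_covariance(T, C)    # diagonal for this symmetric example
```

For this 2x2 example the transform decorrelates the process, which is the effect the fast recursive relations exploit at larger sizes.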
Stabilizing canonical-ensemble calculations in the auxiliary-field Monte Carlo method
NASA Astrophysics Data System (ADS)
Gilbreth, C. N.; Alhassid, Y.
2015-03-01
Quantum Monte Carlo methods are powerful techniques for studying strongly interacting Fermi systems. However, implementing these methods on computers with finite-precision arithmetic requires careful attention to numerical stability. In the auxiliary-field Monte Carlo (AFMC) method, low-temperature or large-model-space calculations require numerically stabilized matrix multiplication. When adapting methods used in the grand-canonical ensemble to the canonical ensemble of fixed particle number, the numerical stabilization increases the number of required floating-point operations for computing observables by a factor of the size of the single-particle model space, and thus can greatly limit the systems that can be studied. We describe an improved method for stabilizing canonical-ensemble calculations in AFMC that exhibits better scaling, and present numerical tests that demonstrate the accuracy and improved performance of the method.
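The standard stabilization ingredient for long matrix products is periodic re-orthogonalization; the sketch below (illustrative, not the paper's AFMC code, which uses more elaborate decompositions) shows a Gram-Schmidt QR step that separates orthogonal directions from scale factors:

```python
import math

# Sketch of the usual stabilization idea for long products of matrices:
# periodically factor the accumulated product so exponentially growing and
# decaying scales are kept in a triangular factor. 2x2, Gram-Schmidt QR.
def qr2(a):
    """QR factorization of a 2x2 matrix via Gram-Schmidt."""
    (a11, a12), (a21, a22) = a
    n1 = math.hypot(a11, a21)
    q1 = (a11 / n1, a21 / n1)
    proj = q1[0] * a12 + q1[1] * a22        # q1 component of column 2
    v2 = (a12 - proj * q1[0], a22 - proj * q1[1])
    n2 = math.hypot(*v2)
    q2 = (v2[0] / n2, v2[1] / n2)
    Q = [[q1[0], q2[0]], [q1[1], q2[1]]]
    R = [[n1, proj], [0.0, n2]]
    return Q, R

def matmul2(a, b):
    return [[a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]],
            [a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]]]
```

In an AFMC-style loop one would multiply a batch of matrices, factor the result, carry the well-conditioned Q forward, and keep the scales in R separately.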
Kodiak: An Implementation Framework for Branch and Bound Algorithms
NASA Technical Reports Server (NTRS)
Smith, Andrew P.; Munoz, Cesar A.; Narkawicz, Anthony J.; Markevicius, Mantas
2015-01-01
Recursive branch and bound algorithms are often used to refine and isolate solutions to several classes of global optimization problems. A rigorous computation framework for the solution of systems of equations and inequalities involving nonlinear real arithmetic over hyper-rectangular variable and parameter domains is presented. It is derived from a generic branch and bound algorithm that has been formally verified, and utilizes self-validating enclosure methods, namely interval arithmetic and, for polynomials and rational functions, Bernstein expansion. Since bounds computed by these enclosure methods are sound, this approach may be used reliably in software verification tools. Advantage is taken of the partial derivatives of the constraint functions involved in the system, firstly to reduce the branching factor by the use of bisection heuristics and secondly to permit the computation of bifurcation sets for systems of ordinary differential equations. The associated software development, Kodiak, is presented, along with examples of three different branch and bound problem types it implements.
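The soundness of the enclosure methods rests on interval operations that always contain the true result. A minimal sketch of the idea (not Kodiak's API; outward rounding, which a rigorous implementation must add, is omitted):

```python
# Interval arithmetic sketch: operations on [lo, hi] pairs produce intervals
# guaranteed to contain the true result, which is what makes branch-and-bound
# bounds sound. (A rigorous implementation must also round lo down, hi up.)
def iv_add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def iv_mul(a, b):
    products = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(products), max(products))
```

Branch and bound then bisects a variable's interval whenever the enclosure is too loose to decide an inequality.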
Frontoparietal white matter diffusion properties predict mental arithmetic skills in children
Tsang, Jessica M.; Dougherty, Robert F.; Deutsch, Gayle K.; Wandell, Brian A.; Ben-Shachar, Michal
2009-01-01
Functional MRI studies of mental arithmetic consistently report blood oxygen level–dependent signals in the parietal and frontal regions. We tested whether white matter pathways connecting these regions are related to mental arithmetic ability by using diffusion tensor imaging (DTI) to measure these pathways in 28 children (age 10–15 years, 14 girls) and assessing their mental arithmetic skills. For each child, we identified anatomically the anterior portion of the superior longitudinal fasciculus (aSLF), a pathway connecting parietal and frontal cortex. We measured fractional anisotropy in a core region centered along the length of the aSLF. Fractional anisotropy in the left aSLF positively correlates with arithmetic approximation skill, as measured by a mental addition task with approximate answer choices. The correlation is stable in adjacent core aSLF regions but lower toward the pathway endpoints. The correlation is not explained by shared variance with other cognitive abilities and did not pass significance in the right aSLF. These measurements used DTI, a structural method, to test a specific functional model of mental arithmetic. PMID:19948963
Estimating stand age for Douglas-fir.
Floyd A. Johnson
1954-01-01
Stand age for Douglas-fir has been defined as the average age of dominant and codominant trees. It is commonly estimated by measuring the age of several dominants and codominants and computing their arithmetic average.
Calculating degree-based topological indices of dominating David derived networks
NASA Astrophysics Data System (ADS)
Ahmad, Muhammad Saeed; Nazeer, Waqas; Kang, Shin Min; Imran, Muhammad; Gao, Wei
2017-12-01
Chemical reaction network theory is an important area of applied mathematics. The behavior of real-world problems can be modeled using this theory. Due to its applications in theoretical chemistry and biochemistry, it has attracted researchers since its foundation. It also attracts pure mathematicians because it involves interesting mathematical structures. In this report, we compute newly defined topological indices, namely, the Arithmetic-Geometric index (AG1 index), SK index, SK1 index, and SK2 index of the dominating David derived networks [1, 2, 3, 4, 5].
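Using the standard definitions of these degree-based indices (assumed here; check against the paper's notation), they can be computed directly from an edge list:

```python
import math

# Degree-based indices named in the abstract, from an edge list.
# For an edge uv with endpoint degrees du, dv (standard definitions):
#   AG1 = sum (du + dv) / (2 * sqrt(du * dv))
#   SK  = sum (du + dv) / 2
#   SK1 = sum (du * dv) / 2
#   SK2 = sum ((du + dv) / 2) ** 2
def degree_indices(edges):
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    ag1 = sk = sk1 = sk2 = 0.0
    for u, v in edges:
        du, dv = deg[u], deg[v]
        ag1 += (du + dv) / (2 * math.sqrt(du * dv))
        sk += (du + dv) / 2
        sk1 += (du * dv) / 2
        sk2 += ((du + dv) / 2) ** 2
    return ag1, sk, sk1, sk2
```

For the triangle K3 (every degree 2, three edges) this gives AG1 = 3, SK = 6, SK1 = 6, SK2 = 12.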
NASA Astrophysics Data System (ADS)
Maiti, Anup Kumar; Nath Roy, Jitendra; Mukhopadhyay, Sourangshu
2007-08-01
In the field of optical computing and parallel information processing, several number systems have been used for different arithmetic and algebraic operations. An efficient conversion scheme from one number system to another is therefore very important. The modified trinary number (MTN) system has already taken a significant role in carry- and borrow-free arithmetic operations. In this communication, we propose a tree-net architecture based all-optical scheme for converting a binary number to its MTN form. An optical switch using nonlinear material (NLM) plays an important role in the scheme.
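For flavor, the sketch below converts an integer to balanced ternary, a signed-digit base-3 representation with digits {-1, 0, +1}. This is an illustration of a number-system conversion step only; the paper's MTN representation differs in its details:

```python
# Illustration only: balanced-ternary conversion (digits -1, 0, +1).
# Not the paper's MTN encoding, which is a different trinary variant.
def to_balanced_ternary(n):
    """Return the balanced-ternary digits of n, least significant first."""
    if n == 0:
        return [0]
    digits = []
    while n != 0:
        r = n % 3
        if r == 2:        # digit 2 becomes -1 with a carry into the next place
            digits.append(-1)
            n = n // 3 + 1
        else:
            digits.append(r)
            n //= 3
    return digits

def from_balanced_ternary(digits):
    return sum(d * 3 ** i for i, d in enumerate(digits))
```

Signed-digit systems like this are attractive optically because a digit can map to a physical state such as polarization or intensity level.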
If Gravity is Geometry, is Dark Energy just Arithmetic?
NASA Astrophysics Data System (ADS)
Czachor, Marek
2017-04-01
Arithmetic operations (addition, subtraction, multiplication, division), as well as the calculus they imply, are non-unique. The examples of four-dimensional spaces, R_+^4 and (-L/2, L/2)^4, are considered where different types of arithmetic and calculus coexist simultaneously. In all the examples there exists a non-Diophantine arithmetic that makes the space globally Minkowskian, and thus the laws of physics are formulated in terms of the corresponding calculus. However, when one switches to the `natural' Diophantine arithmetic and calculus, the Minkowskian character of the space is lost and what one effectively obtains is a Lorentzian manifold. I discuss in more detail the problem of electromagnetic fields produced by a pointlike charge. The solution has the standard form when expressed in terms of the non-Diophantine formalism. When the `natural' formalism is used, the same solution looks as if the fields were created by a charge located in an expanding universe, with nontrivially accelerating expansion. The effect is clearly visible also in solutions of the Friedman equation with vanishing cosmological constant. All of this suggests that phenomena attributed to dark energy may be a manifestation of a mismatch between the arithmetic employed in mathematical modeling and the one occurring at the level of natural laws. Arithmetic is as physical as geometry.
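A non-Diophantine arithmetic of this kind is built from a bijection f, with addition defined as x (+) y = f^-1(f(x) + f(y)) and analogous rules for the other operations. A toy sketch (the choice f = ln on the positive reals is illustrative only, not the paper's choice):

```python
import math

# Projective (non-Diophantine) arithmetic sketch: fix a bijection f and
# define x (+) y = f^-1(f(x) + f(y)), x (*) y = f^-1(f(x) * f(y)).
# Here f = ln on R+, so the induced "addition" on R+ is ordinary
# multiplication seen through f.
f = math.log
f_inv = math.exp

def oplus(x, y):
    return f_inv(f(x) + f(y))

def otimes(x, y):
    return f_inv(f(x) * f(y))
```

With this f, `oplus(2, 3)` is 6 and the neutral element of `oplus` is 1 (since f(1) = 0): the positive half-line equipped with (+) behaves exactly like the ordinary real line, which is the mechanism behind the "globally Minkowskian" reformulation.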
Exploiting data representation for fault tolerance
Hoemmen, Mark Frederick; Elliott, J.; Sandia National Lab.; ...
2015-01-06
Incorrect computer hardware behavior may corrupt intermediate computations in numerical algorithms, possibly resulting in incorrect answers. Prior work models misbehaving hardware by randomly flipping bits in memory. We start by accepting this premise, and present an analytic model for the error introduced by a bit flip in an IEEE 754 floating-point number. We then relate this finding to the linear algebra concepts of normalization and matrix equilibration. In particular, we present a case study illustrating that normalizing both vector inputs of a dot product minimizes the probability of a single bit flip causing a large error in the dot product's result. Moreover, the absolute error is either less than one or very large, which allows detection of large errors. Then, we apply this to the GMRES iterative solver. We count all possible errors that can be introduced through faults in arithmetic in the computationally intensive orthogonalization phase of GMRES, and show that when the matrix is equilibrated, the absolute error is bounded above by one.
Mapping Computation with No Memory
NASA Astrophysics Data System (ADS)
Burckel, Serge; Gioan, Emeric; Thomé, Emmanuel
We investigate the computation of mappings from a set S^n to itself with in situ programs, that is, using no extra variables beyond the input and performing modifications of one component at a time. We consider several types of mappings and obtain effective computation and decomposition methods, together with upper bounds on the program length (number of assignments). Our technique is combinatorial and algebraic (graph coloring, partition ordering, modular arithmetic).
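A textbook instance of such an in-situ program is the XOR swap: three assignments, each modifying a single component, with no auxiliary variable (illustrative of the setting, not a result from the paper):

```python
# In-situ computation: swap state[0] and state[1] using assignments to one
# component at a time and no auxiliary variable (XOR trick on integers).
def swap_in_situ(state):
    state[0] = state[0] ^ state[1]
    state[1] = state[0] ^ state[1]   # now holds the original state[0]
    state[2 - 2] = state[0] ^ state[1] if False else state[0]  # no-op guard
    state[0] = state[0] ^ state[1]   # now holds the original state[1]
    return state
```

Each line touches exactly one component, which is the constraint the paper's program-length bounds are about.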
How Much Does the 24 Game Increase the Recall of Arithmetic Facts?
ERIC Educational Resources Information Center
Eley, Jonquille
2009-01-01
Sixth grade students come to MS 331 with strong mathematics backgrounds from elementary school. Nevertheless, students often come with a dearth of skills when performing basic math computations. The focus of this study is to investigate the use of the 24 Game in quickening the ability of sixth graders to perform basic computations. The game…
Chindelevitch, Leonid; Trigg, Jason; Regev, Aviv; Berger, Bonnie
2014-01-01
Constraint-based models are currently the only methodology that allows the study of metabolism at the whole-genome scale. Flux balance analysis is commonly used to analyse constraint-based models. Curiously, the results of this analysis vary with the software being run, a situation that we show can be remedied by using exact rather than floating-point arithmetic. Here we introduce MONGOOSE, a toolbox for analysing the structure of constraint-based metabolic models in exact arithmetic. We apply MONGOOSE to the analysis of 98 existing metabolic network models and find that the biomass reaction is surprisingly blocked (unable to sustain non-zero flux) in nearly half of them. We propose a principled approach for unblocking these reactions and extend it to the problems of identifying essential and synthetic lethal reactions and minimal media. Our structural insights enable a systematic study of constraint-based metabolic models, yielding a deeper understanding of their possibilities and limitations. PMID:25291352
A remote sensing computer-assisted learning tool developed using the unified modeling language
NASA Astrophysics Data System (ADS)
Friedrich, J.; Karslioglu, M. O.
The goal of this work has been to create an easy-to-use and simple-to-make learning tool for remote sensing at an introductory level. Many students struggle to comprehend what seems to be very basic knowledge of digital images, image processing and image arithmetic, for example. Because professional programs are generally too complex and overwhelming for beginners, and often not tailored to the specific needs of a course regarding functionality, a computer-assisted learning (CAL) program was developed based on the unified modeling language (UML), the present standard for object-oriented (OO) system development. A major advantage of this approach is an easier transition from modeling to coding of such an application if modern UML tools are used. After introducing the constructed UML model, its implementation is briefly described, followed by a series of learning exercises. They illustrate how the resulting CAL tool supports students taking an introductory course in remote sensing at the author's institution.
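As a toy example of the image arithmetic such a CAL tool would teach (illustrative only; the actual program is not reproduced here), pixel-wise subtraction with clamping to the 8-bit range:

```python
# Pixel-wise image arithmetic on two single-band images represented as
# nested lists; results are clamped to the valid intensity range.
def image_subtract(a, b, clamp=(0, 255)):
    lo, hi = clamp
    return [[min(max(pa - pb, lo), hi) for pa, pb in zip(ra, rb)]
            for ra, rb in zip(a, b)]
```

Band differencing of this kind (e.g., for change detection) is one of the first operations introductory remote-sensing courses cover.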
DOE Office of Scientific and Technical Information (OSTI.GOV)
McGoldrick, P.R.; Allison, T.G.
The BASIC2 INTERPRETER was developed to provide a high-level easy-to-use language for performing both control and computational functions in the MCS-80. The package is supplied as two alternative implementations, hardware and software. The ''software'' implementation provides the following capabilities: entry and editing of BASIC programs, device-independent I/O, special functions to allow access from BASIC to any I/O port, formatted printing, special INPUT/OUTPUT-and-proceed statements to allow I/O without interrupting BASIC program execution, full arithmetic expressions, limited string manipulation (10 or fewer characters), shorthand forms for common BASIC keywords, immediate mode BASIC statement execution, and capability of running a BASIC program that is stored in PROM. The allowed arithmetic operations are addition, subtraction, multiplication, division, and raising a number to a positive integral power. In the second, or ''hardware'', implementation of BASIC2, requiring an Am9511 Arithmetic Processing Unit (APU) interfaced to the 8080 microprocessor, arithmetic operations are performed by the APU. The following additional built-in functions are available in this implementation: square root, sine, cosine, tangent, arcsine, arccosine, arctangent, exponential, logarithm base e, and logarithm base 10. Requirements: MCS-80, 8080-based microcomputers; 8080 Assembly language; approximately 8K bytes of RAM to store the assembled interpreter, additional user program space, and necessary peripheral devices. The hardware implementation requires an Am9511 Arithmetic Processing Unit and an interface board (reference 2).
Graphics processing unit (GPU)-based computation of heat conduction in thermally anisotropic solids
NASA Astrophysics Data System (ADS)
Nahas, C. A.; Balasubramaniam, Krishnan; Rajagopal, Prabhu
2013-01-01
Numerical modeling of anisotropic media is a computationally intensive task since it brings additional complexity to the field problem in such a way that the physical properties are different in different directions. Largely used in the aerospace industry because of their lightweight nature, composite materials are a very good example of thermally anisotropic media. With advancements in video gaming technology, parallel processors are much cheaper today and accessibility to higher-end graphical processing devices has increased dramatically over the past couple of years. Since these massively parallel GPUs are very good at handling floating point arithmetic, they provide a new platform for engineers and scientists to accelerate their numerical models using commodity hardware. In this paper we implement a parallel finite difference model of thermal diffusion through anisotropic media using NVIDIA CUDA (Compute Unified Device Architecture). We use the NVIDIA GeForce GTX 560 Ti as our primary computing device, which consists of 384 CUDA cores clocked at 1645 MHz, with a standard desktop PC as the host platform. We compare the results with a standard CPU implementation for accuracy and speed and draw implications for simulation using the GPU paradigm.
Identifying Blocks Formed by Curbed Fractures Using Exact Arithmetic
NASA Astrophysics Data System (ADS)
Zheng, Y.; Xia, L.; Yu, Q.; Zhang, X.
2015-12-01
Identifying blocks formed by fractures is important in rock engineering. Most studies assume the fractures to be perfectly planar, whereas curved fractures are rarely considered. However, large fractures observed in the field are often curved. This paper presents a new method for identifying rock blocks formed by both curved and planar fractures based on the element-block-assembling approach. The curved and planar fractures are represented as triangle meshes and planar discs, respectively. At the beginning of the identification method, the intersection segments between different triangle meshes are calculated and the intersected triangles are re-meshed to construct a piecewise linear complex (PLC). Then, the modeling domain is divided into tetrahedral subdomains under the constraint of the PLC and these subdomains are further decomposed into element blocks by extended planar fractures. Finally, the element blocks are combined and the subdomains are assembled to form complex blocks. The combination of two subdomains is skipped if and only if the common facet lies on a curved fracture. In this study, exact arithmetic is used to handle the computational errors, which may threaten the robustness of the block identification program when degenerate cases are encountered. Specifically, a real number is represented as the ratio of two integers, and basic arithmetic such as addition, subtraction, multiplication and division between real numbers can be performed exactly if an arbitrary-precision integer package is used. In this way, the exact construction of blocks can be achieved without introducing computational errors. Several analytical examples are given in this paper and the results show the effectiveness of this method in handling arbitrarily shaped blocks. Moreover, there is no limitation on the number of blocks in a block system.
The results also suggest that degenerate cases can be handled without affecting the robustness of the identification program.
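The benefit of the rational representation for degenerate cases can be illustrated with a standard orientation predicate (a sketch using Python's Fraction type, not the paper's implementation): three exactly collinear points are classified as such, with no spurious sign from round-off:

```python
from fractions import Fraction

# Sign of the signed area of triangle pqr, computed over the rationals.
# With exact ratios of integers, the degenerate (collinear) case returns
# exactly 0 instead of a round-off-dependent sign.
def orient2d_exact(p, q, r):
    px, py = map(Fraction, p)
    qx, qy = map(Fraction, q)
    rx, ry = map(Fraction, r)
    det = (qx - px) * (ry - py) - (qy - py) * (rx - px)
    return (det > 0) - (det < 0)
```

Predicates like this (point on a facet, segment on a fracture plane) are exactly where floating-point block-identification codes lose robustness.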
Vectors: a Fortran 90 module for 3-dimensional vector and dyadic arithmetic
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brock, B.C.
1998-02-01
A major advance contained in the new Fortran 90 language standard is the ability to define new data types and the operators associated with them. Writing computer code to implement computations with real and complex three-dimensional vectors and dyadics is greatly simplified if the equations can be implemented directly, without the need to code the vector arithmetic explicitly. The Fortran 90 module described here defines new data types for real and complex 3-dimensional vectors and dyadics, along with the common operations needed to work with these objects. Routines to allow convenient initialization and output of the new types are also included. In keeping with the philosophy of data abstraction, the details of the implementation of the data types are maintained private, and the functions and operators are made generic to simplify the combining of real, complex, single- and double-precision vectors and dyadics.
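The same data-abstraction idea can be sketched in Python (the original module is Fortran 90; this analog is illustrative only): define the vector type once, overload its operators, and write the equations directly:

```python
from dataclasses import dataclass

# Python analog of the operator-overloading idea: a 3-vector type whose
# arithmetic is defined once, so client code reads like the mathematics.
@dataclass(frozen=True)
class Vec3:
    x: float
    y: float
    z: float

    def __add__(self, o):
        return Vec3(self.x + o.x, self.y + o.y, self.z + o.z)

    def __mul__(self, s):          # scalar multiplication
        return Vec3(self.x * s, self.y * s, self.z * s)

    def dot(self, o):
        return self.x * o.x + self.y * o.y + self.z * o.z

    def cross(self, o):
        return Vec3(self.y * o.z - self.z * o.y,
                    self.z * o.x - self.x * o.z,
                    self.x * o.y - self.y * o.x)
```

As in the Fortran module, the representation stays private to the type while expressions such as `a + b * 2` compose naturally.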
Multinode reconfigurable pipeline computer
NASA Technical Reports Server (NTRS)
Nosenchuck, Daniel M. (Inventor); Littman, Michael G. (Inventor)
1989-01-01
A multinode parallel-processing computer is made up of a plurality of interconnected, large-capacity nodes, each including a reconfigurable pipeline of functional units such as Integer Arithmetic Logic Processors, Floating Point Arithmetic Processors, Special Purpose Processors, etc. The reconfigurable pipeline of each node is connected to a multiplane memory by a Memory-ALU switch NETwork (MASNET). The reconfigurable pipeline includes three (3) basic substructures formed from functional units which have been found to be sufficient to perform the bulk of all calculations. The MASNET controls the flow of signals from the memory planes to the reconfigurable pipeline and vice versa. The nodes are connectable together by an internode data router (hyperspace router) so as to form a hypercube configuration. The capability of the nodes to conditionally configure the pipeline at each tick of the clock, without requiring a pipeline flush, permits many powerful algorithms to be implemented directly.
On the convergence and accuracy of the FDTD method for nanoplasmonics.
Lesina, Antonino Calà; Vaccari, Alessandro; Berini, Pierre; Ramunno, Lora
2015-04-20
Use of the Finite-Difference Time-Domain (FDTD) method to model nanoplasmonic structures continues to rise - more than 2700 papers have been published in 2014 on FDTD simulations of surface plasmons. However, a comprehensive study on the convergence and accuracy of the method for nanoplasmonic structures has yet to be reported. Although the method may be well-established in other areas of electromagnetics, the peculiarities of nanoplasmonic problems are such that a targeted study on convergence and accuracy is required. The availability of a high-performance computing system (a massively parallel IBM Blue Gene/Q) allows us to do this for the first time. We consider gold and silver at optical wavelengths along with three "standard" nanoplasmonic structures: a metal sphere, a metal dipole antenna and a metal bowtie antenna - for the first structure comparisons with the analytical extinction, scattering, and absorption coefficients based on Mie theory are possible. We consider different ways to set up the simulation domain, we vary the mesh size to very small dimensions, we compare the simple Drude model with the Drude model augmented with two critical points correction, we compare single-precision to double-precision arithmetic, and we compare two staircase meshing techniques, per-component and uniform. We find that the Drude model with two critical points correction (at least) must be used in general. Double-precision arithmetic is needed to avoid round-off errors if highly converged results are sought. Per-component meshing increases the accuracy when complex geometries are modeled, but the uniform mesh works better for structures completely fillable by the Yee cell (e.g., rectangular structures). Generally, a mesh size of 0.25 nm is required to achieve convergence of results to ∼ 1%.
We determine how to optimally set up the simulation domain, and in so doing we find that performing scattering calculations within the near-field does not necessarily produce large errors but does reduce the computational resources required.
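The single- versus double-precision finding above can be illustrated with a small, self-contained sketch (not the authors' FDTD code): a long run of repeated field updates accumulates rounding error much faster in single precision. Here single precision is emulated by rounding each intermediate result to IEEE-754 binary32 via `struct`.

```python
import struct

def to_f32(x):
    """Round a Python float to the nearest IEEE-754 single-precision value."""
    return struct.unpack('f', struct.pack('f', x))[0]

# Accumulate the same increment 100,000 times in single and double precision,
# mimicking how per-step round-off compounds over many FDTD time steps.
n, step = 100_000, 0.1
s32 = s64 = 0.0
for _ in range(n):
    s32 = to_f32(s32 + to_f32(step))  # every intermediate rounded to binary32
    s64 = s64 + step                   # ordinary double-precision accumulation

exact = n * step  # 10000.0
err32, err64 = abs(s32 - exact), abs(s64 - exact)
print(err32, err64)  # single-precision error is orders of magnitude larger
```

This is why highly converged results in the study require double precision: the per-step rounding error of binary32 (about 1 part in 10^7) compounds over the simulation, while binary64 round-off stays negligible.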
Implementation of Arithmetic and Nonarithmetic Functions on a Label-free and DNA-based Platform
NASA Astrophysics Data System (ADS)
Wang, Kun; He, Mengqi; Wang, Jin; He, Ronghuan; Wang, Jianhua
2016-10-01
A series of complex logic gates were constructed based on graphene oxide and DNA-templated silver nanoclusters to perform both arithmetic and nonarithmetic functions. To satisfy the requirements of progressive computational complexity and cost-effectiveness, a label-free and universal platform was developed by integrating various functions, including a half adder, half subtractor, multiplexer and demultiplexer. The label-free system avoids laborious modification of biomolecules. The designed DNA-based logic gates can be implemented with a near-infrared fluorescence readout, and show great potential for applications in bioimaging as well as disease diagnosis.
Quantum error correcting codes and 4-dimensional arithmetic hyperbolic manifolds
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guth, Larry, E-mail: lguth@math.mit.edu; Lubotzky, Alexander, E-mail: alex.lubotzky@mail.huji.ac.il
2014-08-15
Using 4-dimensional arithmetic hyperbolic manifolds, we construct some new homological quantum error correcting codes. They are low density parity check codes with linear rate and distance n^ε. Their rate is evaluated via Euler characteristic arguments and their distance using Z_2-systolic geometry. This construction answers a question of Zémor [“On Cayley graphs, surface codes, and the limits of homological coding for quantum error correction,” in Proceedings of Second International Workshop on Coding and Cryptology (IWCC), Lecture Notes in Computer Science Vol. 5557 (2009), pp. 259–273], who asked whether homological codes with such parameters could exist at all.
Dedovic, Katarina; Renwick, Robert; Mahani, Najmeh Khalili; Engert, Veronika; Lupien, Sonia J.; Pruessner, Jens C.
2005-01-01
Objective We developed a protocol for inducing moderate psychologic stress in a functional imaging setting and evaluated the effects of stress on physiology and brain activation. Methods The Montreal Imaging Stress Task (MIST), derived from the Trier Mental Challenge Test, consists of a series of computerized mental arithmetic challenges, along with social evaluative threat components that are built into the program or presented by the investigator. To allow the effects of stress and mental arithmetic to be investigated separately, the MIST has 3 test conditions (rest, control and experimental), which can be presented in either a block or an event-related design, for use with functional magnetic resonance imaging (fMRI) or positron emission tomography (PET). In the rest condition, subjects look at a static computer screen on which no tasks are shown. In the control condition, a series of mental arithmetic tasks are displayed on the computer screen, and subjects submit their answers by means of a response interface. In the experimental condition, the difficulty and time limit of the tasks are manipulated to be just beyond the individual's mental capacity. In addition, in this condition the presentation of the mental arithmetic tasks is supplemented by a display of information on individual and average performance, as well as expected performance. Upon completion of each task, the program presents a performance evaluation to further increase the social evaluative threat of the situation. Results In 2 independent studies using PET and a third independent study using fMRI, with a total of 42 subjects, levels of salivary free cortisol for the whole group were significantly increased under the experimental condition, relative to the control and rest conditions. Performing mental arithmetic was linked to activation of motor and visual association cortices, as well as brain structures involved in the performance of these tasks (e.g., the angular gyrus). 
Conclusions We propose the MIST as a tool for investigating the effects of perceiving and processing psychosocial stress in functional imaging studies. PMID:16151536
ERIC Educational Resources Information Center
Nordman, R.; Parker, J.
This report compares two methods of teaching BASIC programming used to develop computer literacy among children in grades three through seven in British Columbia. Phase one of the project was designed to instruct children in grades five to seven on the arithmetic operations of writing simple BASIC programs. Instructional methods included using job…
Separating OR, SUM, and XOR Circuits.
Find, Magnus; Göös, Mika; Järvisalo, Matti; Kaski, Petteri; Koivisto, Mikko; Korhonen, Janne H
2016-08-01
Given a boolean n × n matrix A, we consider arithmetic circuits for computing the transformation x ↦ Ax over different semirings. Namely, we study three circuit models: monotone OR-circuits, monotone SUM-circuits (addition of non-negative integers), and non-monotone XOR-circuits (addition modulo 2). Our focus is on separating OR-circuits from the two other models in terms of circuit complexity: We show how to obtain matrices that admit OR-circuits of size O(n), but require SUM-circuits of size Ω(n^(3/2)/log^2 n). We consider the task of rewriting a given OR-circuit as an XOR-circuit and prove that any subquadratic-time algorithm for this task violates the strong exponential time hypothesis.
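The three circuit models above all evaluate the same map x ↦ Ax; only the "addition" operation of the semiring differs. A minimal illustrative sketch (not the paper's constructions) makes the distinction concrete: the boolean entry A[i][j] gates which inputs feed output i, and the semiring's add combines them.

```python
from functools import reduce

def mat_vec(A, x, add, zero):
    """y = A x over a semiring: entry A[i][j] = 1 selects x[j], and the
    semiring's 'add' combines the selected inputs (OR, SUM, or XOR)."""
    return [reduce(add, (xj for aij, xj in zip(row, x) if aij), zero)
            for row in A]

A = [[1, 0, 1],
     [0, 1, 1]]

sums = mat_vec(A, [3, 5, 7], lambda a, b: a + b, 0)   # SUM-circuit semantics
ors  = mat_vec(A, [1, 0, 1], lambda a, b: a | b, 0)   # OR-circuit semantics
xors = mat_vec(A, [1, 0, 1], lambda a, b: a ^ b, 0)   # XOR-circuit semantics
print(sums, ors, xors)  # [10, 12] [1, 1] [0, 1]
```

Note how the same matrix row (1, 0, 1) applied to the same bit vector gives 1 under OR but 0 under XOR: OR only records whether any selected input is set, while XOR cancels pairs, which is why rewriting an OR-circuit as an XOR-circuit is nontrivial.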
MGUPGMA: A Fast UPGMA Algorithm With Multiple Graphics Processing Units Using NCCL
Hua, Guan-Jie; Hung, Che-Lun; Lin, Chun-Yuan; Wu, Fu-Che; Chan, Yu-Wei; Tang, Chuan Yi
2017-01-01
A phylogenetic tree is a visual diagram of the relationships among a set of biological species, which scientists use to analyze many characteristics of those species. Distance-matrix methods, such as the Unweighted Pair Group Method with Arithmetic Mean (UPGMA) and Neighbor Joining, construct a phylogenetic tree by calculating pairwise genetic distances between taxa, but they suffer from poor computational performance. Although several new methods based on high-performance hardware and frameworks have been proposed, the issue persists. In this work, a novel parallel UPGMA approach on multiple Graphics Processing Units is proposed to construct a phylogenetic tree from an extremely large set of sequences. The experimental results show that the proposed approach on a DGX-1 server with 8 NVIDIA P100 graphics cards achieves approximately 3-fold to 7-fold speedup over implementations of UPGMA on a modern CPU and a single GPU, respectively. PMID:29051701
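The algorithm being parallelized is easy to state sequentially: repeatedly merge the closest pair of clusters, replacing their distances to every other cluster with a size-weighted (arithmetic-mean) average. A naive O(n^3) sketch of that serial baseline (not the MGUPGMA implementation) is:

```python
def upgma(labels, D):
    """Naive UPGMA: merge the closest pair of clusters at height d/2,
    then average distances to the merged cluster, weighted by cluster size."""
    clusters = {i: (labels[i], 1) for i in range(len(labels))}  # id -> (subtree, size)
    dist = {(i, j): D[i][j] for i in range(len(D)) for j in range(i + 1, len(D))}
    nxt = len(labels)  # id for the next merged cluster
    while len(clusters) > 1:
        i, j = min(dist, key=dist.get)          # closest pair
        (ti, ni), (tj, nj) = clusters.pop(i), clusters.pop(j)
        h = dist.pop((i, j)) / 2                # merge height
        for k in list(clusters):                # size-weighted mean distances
            dik = dist.pop((min(i, k), max(i, k)))
            djk = dist.pop((min(j, k), max(j, k)))
            dist[(k, nxt)] = (ni * dik + nj * djk) / (ni + nj)
        clusters[nxt] = ((ti, tj, h), ni + nj)
        nxt += 1
    return next(iter(clusters.values()))[0]

tree = upgma(['A', 'B', 'C'], [[0, 2, 6], [2, 0, 6], [6, 6, 0]])
print(tree)  # ('C', ('A', 'B', 1.0), 3.0)
```

The inner distance-update loop is the dominant cost for large n, and it is exactly the part that parallelizes well across GPUs, since each remaining cluster's updated distance is independent of the others.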
Neural correlates of mathematical problem solving.
Lin, Chun-Ling; Jung, Melody; Wu, Ying Choon; She, Hsiao-Ching; Jung, Tzyy-Ping
2015-03-01
This study explores electroencephalography (EEG) brain dynamics associated with mathematical problem solving. EEG and solution latencies (SLs) were recorded as 11 neurologically healthy volunteers worked on intellectually challenging math puzzles that involved combining four single-digit numbers through basic arithmetic operators (addition, subtraction, division, multiplication) to create an arithmetic expression equaling 24. Estimates of EEG spectral power were computed in three frequency bands - θ (4-7 Hz), α (8-13 Hz) and β (14-30 Hz) - over a widely distributed montage of scalp electrode sites. The magnitude of power estimates was found to change in a linear fashion with SLs - that is, relative to baseline power, theta power increased with longer SLs, while alpha and beta power tended to decrease. Further, the topographic distribution of spectral fluctuations was characterized by more pronounced asymmetries along the left-right and anterior-posterior axes for solutions that involved a longer search phase. These findings reveal for the first time the topography and dynamics of EEG spectral activities important for sustained solution search during arithmetical problem solving.
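The band-power measure described above (power summed over the DFT bins falling in a frequency band) can be sketched in a few lines. This is an illustrative naive DFT, not the authors' analysis pipeline; the sampling rate and test signal are made up for the example.

```python
import math

def band_power(signal, fs, lo, hi):
    """Sum spectral power over DFT bins whose frequency lies in [lo, hi] Hz.
    Naive O(n^2) DFT, adequate for a short illustrative signal."""
    n = len(signal)
    total = 0.0
    for k in range(1, n // 2):
        f = k * fs / n                      # frequency of bin k
        if lo <= f <= hi:
            re = sum(x * math.cos(2 * math.pi * k * t / n)
                     for t, x in enumerate(signal))
            im = sum(-x * math.sin(2 * math.pi * k * t / n)
                     for t, x in enumerate(signal))
            total += (re * re + im * im) / n
    return total

fs = 128  # hypothetical sampling rate, Hz
sig = [math.sin(2 * math.pi * 10 * t / fs) for t in range(fs)]  # 1 s of 10 Hz sine

theta = band_power(sig, fs, 4, 7)    # θ band: should be near zero here
alpha = band_power(sig, fs, 8, 13)   # α band: captures the 10 Hz component
print(theta, alpha)
```

A pure 10 Hz oscillation lands entirely in the alpha band, so `alpha` dominates `theta`; in real EEG, the relative magnitude of these band powers is what was regressed against solution latency.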
Developmental Dissociation in the Neural Responses to Simple Multiplication and Subtraction Problems
ERIC Educational Resources Information Center
Prado, Jérôme; Mutreja, Rachna; Booth, James R.
2014-01-01
Mastering single-digit arithmetic during school years is commonly thought to depend upon an increasing reliance on verbally memorized facts. An alternative model, however, posits that fluency in single-digit arithmetic might also be achieved via the increasing use of efficient calculation procedures. To test between these hypotheses, we used a…
Measuring Middle Grades Teachers' Understanding of Rational Numbers with the Mixture Rasch Model
ERIC Educational Resources Information Center
Izsak, Andrew; Orrill, Chandra Hawley; Cohen, Allan S.; Brown, Rachael Eriksen
2010-01-01
We report the development of a multiple-choice instrument that measures the mathematical knowledge needed for teaching arithmetic with fractions, decimals, and proportions. In particular, the instrument emphasizes the knowledge needed to reason about such arithmetic when numbers are embedded in problem situations. We administered our instrument to…
Käser, Tanja; Baschera, Gian-Marco; Kohn, Juliane; Kucian, Karin; Richtmann, Verena; Grond, Ursina; Gross, Markus; von Aster, Michael
2013-01-01
This article presents the design and a first pilot evaluation of the computer-based training program Calcularis for children with developmental dyscalculia (DD) or difficulties in learning mathematics. The program has been designed according to insights into the typical and atypical development of mathematical abilities. The learning process is supported through multimodal cues, which encode different properties of numbers. To offer optimal learning conditions, a user model completes the program and allows flexible adaptation to a child's individual learning and knowledge profile. Thirty-two children with difficulties in learning mathematics completed the 6- to 12-week computer training. The children played the game for 20 min per day, 5 days a week. The training effects were evaluated using neuropsychological tests. Generally, children benefited significantly from the training regarding number representation and arithmetic operations. Furthermore, children liked to play with the program and reported that the training improved their mathematical abilities. PMID:23935586
On the structure of arithmetic sums of Cantor sets with constant ratios of dissection
NASA Astrophysics Data System (ADS)
Anisca, Razvan; Chlebovec, Christopher
2009-09-01
We investigate conditions which imply that the topological structure of the arithmetic sum of two Cantor sets with constant ratios of dissection at each step is either a Cantor set, a finite union of closed intervals, or one of three mixed models (L, R and M-Cantorval). We obtain general results that apply in particular to the case of homogeneous Cantor sets, thus generalizing the results of Mendes and Oliveira. The method used here is new in this context. We also produce results regarding the arithmetic sum of two affine Cantor sets of a special kind.
Jenks, Kathleen M; de Moor, Jan; van Lieshout, Ernest C D M; Maathuis, Karel G B; Keus, Inge; Gorter, Jan Willem
2007-01-01
The development of addition and subtraction accuracy was assessed in first graders with cerebral palsy (CP) in both mainstream (16) and special education (41) and a control group of first graders in mainstream education (16). The control group outperformed the CP groups in addition and subtraction accuracy, and this difference could not be fully explained by differences in intelligence. Both CP groups showed evidence of working memory deficits. The three groups exhibited different developmental patterns in the area of early numeracy skills. Children with CP in special education were found to receive less arithmetic instruction, and instruction time was positively related to arithmetic accuracy. Structural equation modeling revealed that the effect of CP on arithmetic accuracy is mediated by intelligence, working memory, early numeracy, and instruction time.
Fatigue damage prognosis using affine arithmetic
NASA Astrophysics Data System (ADS)
Gbaguidi, Audrey; Kim, Daewon
2014-02-01
Among the essential steps to be taken in structural health monitoring systems, damage prognosis is the least investigated, owing to the complexity of the uncertainties involved. This paper presents the possibility of using Affine Arithmetic for uncertainty propagation of crack damage in damage prognosis. The structures examined are thin rectangular plates made of titanium alloys with central mode I cracks, and a composite plate with an internal delamination caused by mixed mode I and II fracture, under a harmonic uniaxial loading condition. Model-based crack growth rates are considered using the Paris-Erdogan law for the isotropic plates and the delamination growth law proposed by Kardomateas for the composite plate. The parameters of both models are treated as uncertain, with each uncertainty defined by an interval rather than a probability distribution. A Monte Carlo method is also applied to check whether Affine Arithmetic (AA) leads to tight bounds on the lifetime of the structure.
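The core idea of interval-valued uncertainty propagation through the Paris-Erdogan law can be sketched as follows. This uses plain interval arithmetic rather than the affine arithmetic of the paper, and all parameter values are hypothetical, chosen only to make the example run.

```python
import math

def paris_rate_bounds(a, dsigma, C, m):
    """Bound da/dN = C * (dK)^m when C = (C_lo, C_hi) and m = (m_lo, m_hi)
    are intervals, with dK = dsigma * sqrt(pi * a) (center crack, geometry
    factor 1). For dK > 1 the rate is monotone in both parameters, so the
    extremes occur at the interval endpoints."""
    dK = dsigma * math.sqrt(math.pi * a)
    rates = [c * dK ** mm for c in C for mm in m]
    return min(rates), max(rates)

# Hypothetical parameter intervals (illustrative only, not the paper's values):
C = (1e-11, 2e-11)   # Paris coefficient interval
m = (3.0, 3.2)       # Paris exponent interval
dsigma = 100.0       # stress range, MPa

a_lo = a_hi = 1e-3   # initial crack-length enclosure, metres
for _ in range(1000):  # per-cycle Euler update of the enclosure
    r_lo, _ = paris_rate_bounds(a_lo, dsigma, C, m)
    _, r_hi = paris_rate_bounds(a_hi, dsigma, C, m)
    a_lo, a_hi = a_lo + r_lo, a_hi + r_hi
print(a_lo, a_hi)    # guaranteed enclosure of crack length after 1000 cycles
```

Plain intervals like these tend to over-widen as the number of cycles grows; affine arithmetic tracks correlations between quantities and is what the paper compares against Monte Carlo bounds for exactly this reason.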
When is working memory important for arithmetic? The impact of strategy and age.
Cragg, Lucy; Richardson, Sophie; Hubber, Paula J; Keeble, Sarah; Gilmore, Camilla
2017-01-01
Our ability to perform arithmetic relies heavily on working memory, the manipulation and maintenance of information in mind. Previous research has found that in adults, procedural strategies, particularly counting, rely on working memory to a greater extent than retrieval strategies. During childhood there are changes in the types of strategies employed, as well as an increase in the accuracy and efficiency of strategy execution. As such, it seems likely that the role of working memory in arithmetic may also change; however, children and adults have never been directly compared. This study used traditional dual-task methodology, with the addition of a control load condition, to investigate the extent to which working memory requirements for different arithmetic strategies change with age between 9-11 years, 12-14 years and young adulthood. We showed that both children and adults employ working memory when solving arithmetic problems, no matter what strategy they choose. This study highlights the importance of considering working memory in understanding the difficulties that some children and adults have with mathematics, as well as the need to include working memory in theoretical models of mathematical cognition.
Solution Strategies and Achievement in Dutch Complex Arithmetic: Latent Variable Modeling of Change
ERIC Educational Resources Information Center
Hickendorff, Marian; Heiser, Willem J.; van Putten, Cornelis M.; Verhelst, Norman D.
2009-01-01
In the Netherlands, national assessments at the end of primary school (Grade 6) show a decline of achievement on problems of complex or written arithmetic over the last two decades. The present study aims at contributing to an explanation of the large achievement decrease on complex division, by investigating the strategies students used in…
A Teachable Agent Game Engaging Primary School Children to Learn Arithmetic Concepts and Reasoning
ERIC Educational Resources Information Center
Pareto, Lena
2014-01-01
In this paper we will describe a learning environment designed to foster conceptual understanding and reasoning in mathematics among younger school children. The learning environment consists of 48 2-player game variants based on a graphical model of arithmetic where the mathematical content is intrinsically interwoven with the game idea. The…
Goode, D.J.; Appel, C.A.
1992-01-01
More accurate alternatives to the widely used harmonic mean interblock transmissivity are proposed for block-centered finite-difference models of ground-water flow in unconfined aquifers and in aquifers having smoothly varying transmissivity. The harmonic mean is the exact interblock transmissivity for steady-state one-dimensional flow with no recharge if the transmissivity is assumed to be spatially uniform over each finite-difference block, changing abruptly at the block interface. However, the harmonic mean may be inferior to other means if transmissivity varies in a continuous or smooth manner between nodes. Alternative interblock transmissivity functions are analytically derived for the case of steady-state one-dimensional flow with no recharge. The second author has previously derived the exact interblock transmissivity, the logarithmic mean, for one-dimensional flow when transmissivity is a linear function of distance in the direction of flow. We show that the logarithmic mean transmissivity is also exact for uniform flow parallel to the direction of changing transmissivity in a two- or three-dimensional model, regardless of grid orientation relative to the flow vector. For the case of horizontal flow in a homogeneous unconfined or water-table aquifer with a horizontal bottom and with areally distributed recharge, the exact interblock transmissivity is the unweighted arithmetic mean of transmissivity at the nodes. This mean also exhibits no grid-orientation effect for unidirectional flow in a two-dimensional model. For horizontal flow in an unconfined aquifer with no recharge where hydraulic conductivity is a linear function of distance in the direction of flow, the exact interblock transmissivity is the product of the arithmetic mean saturated thickness and the logarithmic mean hydraulic conductivity.
For several hypothetical two- and three-dimensional cases with smoothly varying transmissivity or hydraulic conductivity, the harmonic mean is shown to yield the least accurate solution to the flow equation of the alternatives considered. Application of the alternative interblock transmissivities to a regional aquifer system model indicates that the changes in computed heads and fluxes are typically small, relative to model calibration error. For this example, the use of alternative interblock transmissivities resulted in an increase in computational effort of less than 3 percent. Numerical algorithms to compute alternative interblock transmissivity functions in a modular three-dimensional flow model are presented and documented.
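The three interblock means discussed above are simple closed-form expressions of the two nodal transmissivities. A brief sketch (illustrative values, not from the report) shows them side by side, including the standard ordering harmonic ≤ logarithmic ≤ arithmetic that explains why the harmonic mean underestimates interblock transmissivity when the true variation is smooth.

```python
import math

def harmonic_mean(t1, t2):
    """Exact for piecewise-constant transmissivity with a jump at the interface."""
    return 2.0 * t1 * t2 / (t1 + t2)

def logarithmic_mean(t1, t2):
    """Exact when transmissivity varies linearly between the two nodes."""
    return t1 if t1 == t2 else (t1 - t2) / math.log(t1 / t2)

def arithmetic_mean(t1, t2):
    """Exact for a homogeneous unconfined aquifer with distributed recharge."""
    return (t1 + t2) / 2.0

t1, t2 = 1.0, 100.0  # strongly contrasting nodal transmissivities
print(harmonic_mean(t1, t2), logarithmic_mean(t1, t2), arithmetic_mean(t1, t2))
```

For this 100:1 contrast the harmonic mean is near 2 while the logarithmic mean is near 21, illustrating how sensitive computed fluxes can be to the choice of interblock function.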
Using EEG To Detect and Monitor Mental Fatigue
NASA Technical Reports Server (NTRS)
Montgomery, Leslie; Luna, Bernadette; Trejo, Leonard J.; Montgomery, Richard
2001-01-01
This project aims to develop EEG-based methods for detecting and monitoring mental fatigue. Mental fatigue poses a serious risk, even when performance is not apparently degraded. When such fatigue is associated with sustained performance of a single type of cognitive task, it may be related to the metabolic energy required for sustained activation of cortical areas specialized for that task. The objective of this study was to adapt EEG to monitor cortical energy over a long period of performance of a cognitive task. Multielectrode event-related potentials (ERPs) were collected every 15 minutes in nine subjects who performed a mental arithmetic task (algebraic sum of four randomly generated negative or positive digits). A new problem was presented on a computer screen 0.5 seconds after each response; some subjects endured for as long as three hours. ERPs were transformed to a quantitative measure of scalp electrical field energy. The average energy level at electrode P3 (near the left angular gyrus), 100-300 msec latency, was compared over the series of ERPs. For most subjects, scalp energy density at P3 gradually fell over the period of task performance and dramatically increased just before the subject was unable to continue the task. This neural response can be simulated for individual subjects using a differential equation model in which it is assumed that the mental arithmetic task requires a commitment of metabolic energy that would otherwise be used for brain activities that are temporarily neglected. Their cumulative neglect eventually requires a reallocation of energy away from the mental arithmetic task.
Optimization of Particle-in-Cell Codes on RISC Processors
NASA Technical Reports Server (NTRS)
Decyk, Viktor K.; Karmesin, Steve Roy; Boer, Aeint de; Liewer, Paulette C.
1996-01-01
General strategies are developed to optimize particle-in-cell codes written in Fortran for RISC processors, which are commonly used on massively parallel computers. These strategies include data reorganization to improve cache utilization and code reorganization to improve the efficiency of arithmetic pipelines.
QUARTERLY TECHNICAL PROGRESS REPORT, JULY, AUGUST, SEPTEMBER 1966.
Contents: Circuit research program; Hardware systems research; Software systems research program; Numerical methods, computer arithmetic and...artificial languages; Library automation; Illiac II service, use, and program development; IBM service, use, and program development; Problem specifications; Switching theory and logical design; General laboratory information.
Can business and economics students perform elementary arithmetic?
Standing, Lionel G; Sproule, Robert A; Leung, Ambrose
2006-04-01
Business and economics majors (N=146) were tested on the D'Amore Test of Elementary Arithmetic, which employs third-grade test items from 1932. Only 40% of the subjects passed the test by answering 10 out of 10 items correctly. Self-predicted scores were a good predictor of actual scores, but performance was not associated with demographic variables, grades in calculus courses, liking for science or computers, or mathematics anxiety. Scores decreased over the subjects' initial years on campus. The hardest test item, with an error rate of 23%, required the subject to evaluate (36 x 7) + (33 x 7). The results are similar to those of Standing in 2006, despite methodological changes intended to maximize performance.
The cognitive foundations of reading and arithmetic skills in 7- to 10-year-olds.
Durand, Marianne; Hulme, Charles; Larkin, Rebecca; Snowling, Margaret
2005-06-01
A range of possible predictors of arithmetic and reading were assessed in a large sample (N=162) of children between ages 7 years 5 months and 10 years 4 months. A confirmatory factor analysis of the predictors revealed a good fit to a model consisting of four latent variables (verbal ability, nonverbal ability, search speed, and phonological memory) and two manifest variables (digit comparison and phoneme deletion). A path analysis showed that digit comparison and verbal ability were unique predictors of variations in arithmetic skills, whereas phoneme deletion and verbal ability were unique predictors of variations in reading skills. These results confirm earlier findings that phoneme deletion ability appears to be a critical foundation for learning to read (decode). In addition, variations in the speed of accessing numerical quantity information appear to be a critical foundation for the development of arithmetic skills.
Programmable full-adder computations in communicating three-dimensional cell cultures.
Ausländer, David; Ausländer, Simon; Pierrat, Xavier; Hellmann, Leon; Rachid, Leila; Fussenegger, Martin
2018-01-01
Synthetic biologists have advanced the design of trigger-inducible gene switches and their assembly into input-programmable circuits that enable engineered human cells to perform arithmetic calculations reminiscent of electronic circuits. By designing a versatile plug-and-play molecular-computation platform, we have engineered nine different cell populations with genetic programs, each of which encodes a defined computational instruction. When assembled into 3D cultures, these engineered cell consortia execute programmable multicellular full-adder logics in response to three trigger compounds.
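The full-adder logic implemented by the cell consortia is, in electronic terms, the standard one-bit adder: two data inputs plus a carry-in yield a sum bit and a carry-out. A minimal sketch of that logic (the abstract reports three trigger compounds acting as the three inputs; the Boolean formulation here is the textbook definition, not the paper's molecular design):

```python
def full_adder(a, b, cin):
    """One-bit full adder: sum = a XOR b XOR cin,
    carry-out = majority(a, b, cin)."""
    s = a ^ b ^ cin
    cout = (a & b) | (a & cin) | (b & cin)
    return s, cout

# Exhaustive truth-table check: the two output bits encode a + b + cin.
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            s, cout = full_adder(a, b, cin)
            assert s + 2 * cout == a + b + cin

print(full_adder(1, 1, 1))  # (1, 1): sum bit 1, carry-out 1
```

The identity `s + 2*cout == a + b + cin` is what makes full adders chainable into multi-bit adders, and it is the computation the engineered 3D cultures reproduce with trigger compounds as inputs and gene expression as outputs.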
GPU-based acceleration of computations in nonlinear finite element deformation analysis.
Mafi, Ramin; Sirouspour, Shahin
2014-03-01
The physics of deformation for biological soft tissue is best described by nonlinear continuum mechanics-based models, which can then be discretized by the FEM for a numerical solution. However, the computational complexity of such models has limited their use in applications requiring real-time or fast response. In this work, we propose a graphics processing unit-based implementation of the FEM using implicit time integration for dynamic nonlinear deformation analysis. This is the most general formulation of the deformation analysis. It is valid for large deformations and strains and can account for material nonlinearities. The data-parallel nature and the intense arithmetic computations of nonlinear FEM equations make them particularly suitable for implementation on a parallel computing platform such as a graphics processing unit. In this work, we present and compare two different designs based on the matrix-free and conventional preconditioned conjugate gradients algorithms for solving the FEM equations arising in deformation analysis. The speedup achieved with the proposed parallel implementations of the algorithms will be instrumental in the development of advanced surgical simulators and medical image registration methods involving soft-tissue deformation. Copyright © 2013 John Wiley & Sons, Ltd.
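The "matrix-free" design mentioned above refers to conjugate gradients driven by a routine that computes the product A @ v on the fly, so the (large, sparse) stiffness matrix is never assembled. A minimal unpreconditioned sketch of that idea on a toy symmetric positive-definite system (not the paper's GPU implementation):

```python
def conjugate_gradient(apply_A, b, tol=1e-10, max_iter=200):
    """Matrix-free CG for SPD systems: only a routine computing A @ v
    is needed, so A itself is never stored or assembled."""
    n = len(b)
    x = [0.0] * n
    r = list(b)                       # residual r = b - A x, with x = 0
    p = list(r)                       # initial search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:              # converged: residual norm small
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# Toy SPD system standing in for an assembled stiffness matrix.
A = [[4.0, 1.0], [1.0, 3.0]]
apply_A = lambda v: [sum(aij * vj for aij, vj in zip(row, v)) for row in A]
x = conjugate_gradient(apply_A, [1.0, 2.0])
print(x)  # ~ [1/11, 7/11]
```

In the FEM setting, `apply_A` would loop over elements and accumulate element-level products, which is exactly the data-parallel kernel that maps well onto a GPU.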
Specific Learning Disorder: Prevalence and Gender Differences
Moll, Kristina; Kunze, Sarah; Neuhoff, Nina; Bruder, Jennifer; Schulte-Körne, Gerd
2014-01-01
Comprehensive models of learning disorders have to consider both isolated learning disorders that affect one learning domain only, as well as comorbidity between learning disorders. However, empirical evidence on comorbidity rates including all three learning disorders as defined by DSM-5 (deficits in reading, writing, and mathematics) is scarce. The current study assessed prevalence rates and gender ratios for isolated as well as comorbid learning disorders in a representative sample of 1633 German-speaking children in 3rd and 4th Grade. Prevalence rates were analysed for isolated as well as combined learning disorders and for different deficit criteria, including a criterion for normal performance. Comorbid learning disorders occurred as frequently as isolated learning disorders, even when stricter cutoff criteria were applied. The relative proportion of isolated and combined disorders did not change when including a criterion for normal performance. Reading and spelling deficits differed with respect to their association with arithmetic problems: Deficits in arithmetic co-occurred more often with deficits in spelling than with deficits in reading. In addition, comorbidity rates for arithmetic and reading decreased when applying stricter deficit criteria, but stayed high for arithmetic and spelling irrespective of the chosen deficit criterion. These findings suggest that the processes underlying the relationship between arithmetic and reading might differ from those underlying the relationship between arithmetic and spelling. With respect to gender ratios, more boys than girls showed spelling deficits, while more girls were impaired in arithmetic. No gender differences were observed for isolated reading problems and for the combination of all three learning disorders. Implications of these findings for assessment and intervention of learning disorders are discussed. PMID:25072465
ERIC Educational Resources Information Center
Rickard, Timothy C.; Bajic, Daniel
2006-01-01
The applicability of the identical elements (IE) model of arithmetic fact retrieval (T. C. Rickard, A. F. Healy, & L. E. Bourne, 1994) to cued recall from episodic (image and sentence) memory was explored in 3 transfer experiments. In agreement with results from arithmetic, speedup following even minimal practice recalling a missing word from an…
The Cognitive Foundations of Reading and Arithmetic Skills in 7- to 10-Year-Olds
ERIC Educational Resources Information Center
Durand, Marianne; Hulme, Charles; Larkin, Rebecca; Snowling, Margaret
2005-01-01
A range of possible predictors of arithmetic and reading were assessed in a large sample (N=162) of children between ages 7 years 5 months and 10 years 4 months. A confirmatory factor analysis of the predictors revealed a good fit to a model consisting of four latent variables (verbal ability, nonverbal ability, search speed, and phonological…
ERIC Educational Resources Information Center
Wagner, William J.
The application of a linear learning model, which combines learning theory with a structural analysis of the exercises given to students, to an elementary mathematics curriculum is examined. Elementary arithmetic items taken by about 100 second-grade students on 26 weekly tests form the data base. Weekly predictions of group performance on…
ERIC Educational Resources Information Center
Suppes, Patrick; And Others
This report presents a theory of eye movement that accounts for the main features of the stochastic behavior of eye-fixation durations and the direction of saccadic movements in the process of solving addition and subtraction exercises. The best-fitting distribution of fixation durations with a relatively simple theoretical justification…
Pre-Algebra Groups. Concepts & Applications.
ERIC Educational Resources Information Center
Montgomery County Public Schools, Rockville, MD.
Discussion material and exercises related to pre-algebra groups are provided in this five chapter manual. Chapter 1 (mappings) focuses on restricted domains, order of operations (parentheses and exponents), rules of assignment, and computer extensions. Chapter 2 considers finite number systems, including binary operations, clock arithmetic,…
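The clock arithmetic covered in Chapter 2 can be sketched in a few lines. This is an illustrative example only (the modulus 12 and the Python rendering are our choices, not the manual's):

```python
# Clock arithmetic: addition modulo 12, the classic example of a
# finite number system with a binary operation.

def clock_add(a, b, modulus=12):
    """Add two 'clock' values, wrapping around at the modulus."""
    return (a + b) % modulus

# 9 o'clock plus 5 hours wraps around to 2 o'clock.
print(clock_add(9, 5))   # -> 2

# Every element has an additive inverse, so (Z12, +) forms a group.
inverses = {a: (12 - a) % 12 for a in range(12)}
print(inverses[5])       # -> 7
```

The same construction works for any modulus, which is how finite number systems generalize beyond the 12-hour clock.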
Computer Based Screening Dyscalculia: Cognitive and Neuropsychological Correlates
ERIC Educational Resources Information Center
Cangoz, Banu; Altun, Arif; Olkun, Sinan; Kacar, Funda
2013-01-01
Mathematical skills are becoming increasingly critical for achieving academic and professional success. Developmental dyscalculia (DD) is a childhood-onset disorder characterized by the presence of abnormalities in the acquisition of arithmetic skills affecting approximately 5% of school age children. Diagnosing students with possible dyscalculia…
A Cryptological Way of Teaching Mathematics
ERIC Educational Resources Information Center
Caballero-Gil, Pino; Bruno-Castaneda, Carlos
2007-01-01
This work addresses the subject of mathematics education at secondary schools from a current and stimulating point of view intimately related to computational science. Cryptology is a captivating way of introducing into the classroom different mathematical subjects such as functions, matrices, modular arithmetic, combinatorics, equations,…
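As a concrete illustration of how cryptology exercises modular arithmetic in the classroom, a Caesar shift cipher can be written in a few lines. This is a standard introductory example, not necessarily the one used in this work:

```python
# A Caesar shift cipher: encryption is addition modulo 26 on letter
# positions; decryption is the same function with the negated shift.

def caesar(text, shift):
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            # Shift the letter position and wrap around modulo 26.
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return ''.join(out)

msg = caesar("attack at dawn", 3)
print(msg)               # -> "dwwdfn dw gdzq"
print(caesar(msg, -3))   # -> "attack at dawn"
```

The shift of 3 is the traditional choice; any key from 0 to 25 works the same way, which makes the key space small enough for students to break by hand.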
Real time evolution at finite temperatures with operator space matrix product states
NASA Astrophysics Data System (ADS)
Pižorn, Iztok; Eisler, Viktor; Andergassen, Sabine; Troyer, Matthias
2014-07-01
We propose a method to simulate the real time evolution of one-dimensional quantum many-body systems at finite temperature by expressing both the density matrices and the observables as matrix product states. This allows the calculation of expectation values and correlation functions as scalar products in operator space. The simulations of density matrices in inverse temperature and the local operators in the Heisenberg picture are independent and result in a grid of expectation values for all intermediate temperatures and times. Simulations can be performed using real arithmetic with only polynomial growth of computational resources in inverse temperature and time for integrable systems. The method is illustrated for the XXZ model and the single impurity Anderson model.
Separating OR, SUM, and XOR Circuits
Find, Magnus; Göös, Mika; Järvisalo, Matti; Kaski, Petteri; Koivisto, Mikko; Korhonen, Janne H.
2017-01-01
Given a boolean n × n matrix A, we consider arithmetic circuits for computing the transformation x ↦ Ax over different semirings. Namely, we study three circuit models: monotone OR-circuits, monotone SUM-circuits (addition of non-negative integers), and non-monotone XOR-circuits (addition modulo 2). Our focus is on separating OR-circuits from the two other models in terms of circuit complexity: we show how to obtain matrices that admit OR-circuits of size O(n) but require SUM-circuits of size Ω(n^(3/2)/log^2 n). We also consider the task of rewriting a given OR-circuit as an XOR-circuit and prove that any subquadratic-time algorithm for this task violates the strong exponential time hypothesis. PMID:28529379
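The three circuit models correspond to evaluating x ↦ Ax over three different semirings; the distinction can be sketched directly. The matrix and vector below are arbitrary illustrations, not from the paper:

```python
# Evaluating the map x -> Ax over the three semirings the circuit
# models correspond to: OR (boolean), SUM (non-negative integers),
# and XOR (addition mod 2).

def mat_vec(A, x, add, zero):
    """Matrix-vector product with a pluggable semiring addition."""
    result = []
    for row in A:
        acc = zero
        for a_ij, x_j in zip(row, x):
            if a_ij:                 # entries of A are 0/1
                acc = add(acc, x_j)
        result.append(acc)
    return result

A = [[1, 1, 0],
     [0, 1, 1]]
x = [1, 1, 0]

print(mat_vec(A, x, lambda u, v: u or v, 0))        # OR:  [1, 1]
print(mat_vec(A, x, lambda u, v: u + v, 0))         # SUM: [2, 1]
print(mat_vec(A, x, lambda u, v: (u + v) % 2, 0))   # XOR: [0, 1]
```

The same 0/1 matrix gives different outputs under the three additions, which is why circuits optimized for one semiring need not be small for another.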
Subpicosecond Optical Digital Computation Using Conjugate Parametric Generators
1989-03-31
Personal authors: Alfano, Robert R.; Eichmann, George; Dorsinville, Roger; Li, Yao. Cited works include: [1] "Phase-conjugation-based optical residue arithmetic processor," Y. Li, G. Eichmann, R. Dorsinville, and R. R. Alfano, Opt. Lett. 13 (1988); [2] "Parallel ultrafast optical digital and symbolic computation via optical phase conjugation," Y. Li, G. Eichmann, R. Dorsinville, Appl. Opt. 27, 2025 (1988); [3] …
Benavides-Varela, S; Piva, D; Burgio, F; Passarini, L; Rolma, G; Meneghello, F; Semenza, C
2017-03-01
Arithmetical deficits in right-hemisphere damaged patients have traditionally been considered secondary to visuo-spatial impairments, although the exact relationship between the two deficits has rarely been assessed. The present study implemented a voxelwise lesion analysis among 30 right-hemisphere damaged patients and a controlled, matched-sample, cross-sectional analysis with 35 cognitively normal controls, regressing three composite cognitive measures on standardized numerical measures. The results showed that patients and controls significantly differed in Number comprehension, Transcoding, and Written operations, particularly subtractions and multiplications. The percentage of patients performing below the cutoffs ranged between 27% and 47% across these tasks. Spatial errors were associated with extensive lesions in fronto-temporo-parietal regions (which frequently lead to neglect), whereas pure arithmetical errors appeared related to more confined lesions in the right angular gyrus and its proximity. Stepwise regression models consistently revealed that spatial errors were primarily predicted by composite measures of visuo-spatial attention/neglect and representational abilities, whereas specifically arithmetical errors were linked to representational abilities only. Crucially, the proportion of arithmetical errors (ranging from 65% to 100% across tasks) was higher than that of spatial ones. These findings thus suggest that unilateral right-hemisphere lesions can directly affect core numerical/arithmetical processes, and that right-hemisphere acalculia is not ascribable only to visuo-spatial deficits, as traditionally thought. Copyright © 2017 Elsevier Ltd. All rights reserved.
Model-Checking with Edge-Valued Decision Diagrams
NASA Technical Reports Server (NTRS)
Roux, Pierre; Siminiceanu, Radu I.
2010-01-01
We describe an algebra of Edge-Valued Decision Diagrams (EVMDDs) to encode arithmetic functions and its implementation in a model checking library, along with state-of-the-art algorithms for building the transition relation and the state space of discrete state systems. We provide efficient algorithms for manipulating EVMDDs and give upper bounds on the theoretical time complexity of these algorithms for all basic arithmetic and relational operators. We also demonstrate that the time complexity of the generic recursive algorithm for applying a binary operator to EVMDDs is no worse than that for Multi-Terminal Decision Diagrams. We have implemented a new symbolic model checker with the intention of representing in one formalism the best techniques currently available across a spectrum of existing tools: EVMDDs for encoding arithmetic expressions, identity-reduced MDDs for representing the transition relation, and the saturation algorithm for reachability analysis. We compare our new symbolic model checking EVMDD library with the widely used CUDD package and show that, in many cases, our tool is several orders of magnitude faster than CUDD.
NASA Astrophysics Data System (ADS)
Winarti, Yuyun Guna; Noviyanti, Lienda; Setyanto, Gatot R.
2017-03-01
Stock investment is a high-risk investment, and derivative securities exist to reduce these risks; one of them is the Asian option. The most fundamental problem concerning options is option pricing. Factors that determine the option price include the underlying asset price, strike price, maturity date, volatility, risk-free interest rate, and dividends. Option pricing models usually assume that the risk-free interest rate is constant, while in reality this factor is a stochastic process. Because the arithmetic average of asset prices has no closed-form distribution, the arithmetic Asian option is priced using a modified Black-Scholes model; in this research, the modification uses the Curran approximation. This research focuses on arithmetic Asian option pricing without dividends. The data used are the daily closing stock prices of Telkom from January 1, 2016 to June 30, 2016. Finally, the resulting option prices can be used in an option trading strategy.
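For a concrete baseline, an arithmetic-average Asian call can be priced by Monte Carlo simulation under geometric Brownian motion. This sketch is not the Curran approximation used in the study, and all parameter values are illustrative:

```python
# Monte Carlo pricing of an arithmetic-average Asian call under
# geometric Brownian motion. The arithmetic average of lognormal
# prices has no closed-form distribution, so simulation is the
# simplest exact-in-the-limit approach.
import math, random

def asian_call_mc(s0, k, r, sigma, t, n_steps, n_paths, seed=1):
    random.seed(seed)
    dt = t / n_steps
    drift = (r - 0.5 * sigma ** 2) * dt
    vol = sigma * math.sqrt(dt)
    payoff_sum = 0.0
    for _ in range(n_paths):
        s, total = s0, 0.0
        for _ in range(n_steps):
            s *= math.exp(drift + vol * random.gauss(0.0, 1.0))
            total += s
        avg = total / n_steps          # arithmetic average of the path
        payoff_sum += max(avg - k, 0.0)
    return math.exp(-r * t) * payoff_sum / n_paths

price = asian_call_mc(s0=100, k=100, r=0.05, sigma=0.2, t=1.0,
                      n_steps=50, n_paths=20000)
print(round(price, 2))   # roughly 5-6 for these parameters
```

Analytic approximations like Curran's exist precisely because this simulation, while straightforward, is slow to converge for trading use.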
Decidable and undecidable arithmetic functions in actin filament networks
NASA Astrophysics Data System (ADS)
Schumann, Andrew
2018-01-01
The plasmodium of Physarum polycephalum is very sensitive to its environment, and reacts to stimuli with appropriate motions. Both the sensory and motor stages of these reactions are explained by hydrodynamic processes, based on fluid dynamics, with the participation of actin filament networks. This paper is devoted to actin filament networks as a computational medium. The point is that actin filaments, with contributions from many other proteins like myosin, are sensitive to extracellular stimuli (attractants as well as repellents), and appear and disappear at different places in the cell to change aspects of the cell structure—e.g. its shape. By assembling and disassembling actin filaments, some unicellular organisms, like Amoeba proteus, can move in response to various stimuli. As a result, these organisms can be considered a simple reversible logic gate—extracellular signals being its inputs and motions its outputs. In this way, we can implement various logic gates on amoeboid behaviours. These networks can embody arithmetic functions within p-adic valued logic. Furthermore, within these networks we can define the so-called diagonalization for deducing undecidable arithmetic functions.
NASA Astrophysics Data System (ADS)
Saputro, Dewi Retno Sari; Widyaningsih, Purnami
2017-08-01
In general, parameter estimation for the GWOLR model uses the maximum likelihood method, but this produces a system of nonlinear equations whose solution is difficult to find, so an approximate solution is needed. There are two popular numerical methods: Newton's method and the Quasi-Newton (QN) method. Newton's method requires substantial computation time because it involves the Jacobian matrix (derivatives). The QN method overcomes this drawback by replacing derivative computation with direct function evaluations. The QN method uses a Hessian matrix approximation based on the Davidon-Fletcher-Powell (DFP) formula. The Broyden-Fletcher-Goldfarb-Shanno (BFGS) method is categorized as a QN method and shares the DFP formula's property of maintaining a positive definite Hessian approximation. Because the BFGS method requires large memory when executing the program, an algorithm with lower memory usage is needed, namely Limited-Memory BFGS (LBFGS). The purpose of this research is to assess the efficiency of the LBFGS method in the iterative and recursive computation of the Hessian matrix and its inverse for GWOLR parameter estimation. With reference to the research findings, the BFGS and LBFGS methods have arithmetic operation counts of O(n^2) and O(nm), respectively.
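The BFGS inverse-Hessian update behind the O(n^2) operation count can be sketched as follows. This is a generic illustration on a small quadratic, not the GWOLR estimation itself, and the unsafeguarded backtracking line search is a simplification:

```python
# Minimal BFGS: maintain an inverse-Hessian approximation H and update
# it with rank-two corrections; each update costs O(n^2) arithmetic.
import numpy as np

def bfgs_minimize(f, grad, x0, iters=100, tol=1e-10):
    n = len(x0)
    H = np.eye(n)                         # inverse-Hessian approximation
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        p = -H @ g                        # quasi-Newton search direction
        # Simple Armijo backtracking line search.
        t, fx, slope = 1.0, f(x), g @ p
        while f(x + t * p) > fx + 1e-4 * t * slope:
            t *= 0.5
        x_new = x + t * p
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        if sy > 1e-12:                    # curvature condition keeps H positive definite
            rho = 1.0 / sy
            I = np.eye(n)
            # BFGS update: H <- (I - rho s y^T) H (I - rho y s^T) + rho s s^T
            H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
                + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x

# Minimize f(x) = 0.5 x^T A x - b^T x, whose gradient is A x - b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
x_star = bfgs_minimize(f, lambda x: A @ x - b, np.zeros(2))
print(np.allclose(A @ x_star, b, atol=1e-6))
```

LBFGS replaces the dense n × n matrix H with the last m pairs (s, y) and a two-loop recursion, which is where the O(nm) count in the abstract comes from.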
When is working memory important for arithmetic? The impact of strategy and age
Richardson, Sophie; Hubber, Paula J.; Keeble, Sarah; Gilmore, Camilla
2017-01-01
Our ability to perform arithmetic relies heavily on working memory, the manipulation and maintenance of information in mind. Previous research has found that in adults, procedural strategies, particularly counting, rely on working memory to a greater extent than retrieval strategies. During childhood there are changes in the types of strategies employed, as well as an increase in the accuracy and efficiency of strategy execution. As such, it seems likely that the role of working memory in arithmetic may also change; however, children and adults have never been directly compared. This study used traditional dual-task methodology, with the addition of a control load condition, to investigate the extent to which working memory requirements for different arithmetic strategies change with age across 9-11 years, 12-14 years, and young adulthood. We showed that both children and adults employ working memory when solving arithmetic problems, no matter which strategy they choose. This study highlights the importance of considering working memory in understanding the difficulties that some children and adults have with mathematics, as well as the need to include working memory in theoretical models of mathematical cognition. PMID:29228008
An efficient algorithm for generating random number pairs drawn from a bivariate normal distribution
NASA Technical Reports Server (NTRS)
Campbell, C. W.
1983-01-01
An efficient algorithm for generating random number pairs from a bivariate normal distribution was developed. Any desired values of the two means, two standard deviations, and the correlation coefficient can be selected. Theoretically the technique is exact, and in practice its accuracy is limited only by the quality of the uniform random number generator, inaccuracies in computer function evaluation, and finite-precision arithmetic. A FORTRAN routine was written to check the algorithm, and good accuracy was obtained. Some small errors in the correlation coefficient were observed to vary in a surprisingly regular manner. A simple model was developed which explained the qualitative aspects of the errors.
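The standard exact construction for such pairs mixes two independent standard normals so the pair has the requested means, standard deviations, and correlation. A minimal sketch (parameter values are illustrative, and this is the textbook construction, not necessarily the report's exact routine):

```python
# Generate a correlated bivariate normal pair from two independent
# standard normals z1, z2: x2 mixes z1 and z2 so that
# corr(x1, x2) = rho exactly.
import math, random

def bivariate_normal(mu1, mu2, s1, s2, rho, rng=random):
    z1 = rng.gauss(0.0, 1.0)
    z2 = rng.gauss(0.0, 1.0)
    x1 = mu1 + s1 * z1
    x2 = mu2 + s2 * (rho * z1 + math.sqrt(1.0 - rho ** 2) * z2)
    return x1, x2

random.seed(0)
pairs = [bivariate_normal(0.0, 0.0, 1.0, 1.0, 0.8) for _ in range(50000)]
# With zero means and unit standard deviations, the sample mean of
# x1 * x2 approximates the correlation coefficient.
mean_xy = sum(x * y for x, y in pairs) / len(pairs)
print(round(mean_xy, 2))   # close to 0.8
```

The construction is exact in theory; as the abstract notes, in practice accuracy depends only on the underlying uniform generator and floating-point arithmetic.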
Compression of 3D Point Clouds Using a Region-Adaptive Hierarchical Transform.
De Queiroz, Ricardo; Chou, Philip A
2016-06-01
In free-viewpoint video, there is a recent trend to represent scene objects as solids rather than using multiple depth maps. Point clouds have been used in computer graphics for a long time, and with the recent possibility of real-time capturing and rendering they have been favored over meshes in order to save computation. Each point in the cloud is associated with its 3D position and its color. We devise a method to compress the colors in point clouds that is based on a hierarchical transform and arithmetic coding. The transform is a hierarchical sub-band transform that resembles an adaptive variation of a Haar wavelet. The arithmetic encoding of the coefficients assumes Laplace distributions, one per sub-band, and the Laplace parameter for each distribution is transmitted to the decoder using a custom method. The geometry of the point cloud is encoded using the well-established octree scanning. Results show that the proposed solution performs comparably to the current state of the art, on many occasions outperforming it, while being much more computationally efficient. We believe this work represents the state of the art in intra-frame compression of point clouds for real-time 3D video.
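The Haar-style hierarchical decomposition underlying the transform can be illustrated in one dimension. This sketch shows only the plain, unweighted average/difference idea, not the region-adaptive weighted transform of the paper:

```python
# One level of an unweighted 1D Haar transform: each pair of values is
# replaced by its average (low-pass) and half-difference (high-pass).
# Applying this recursively to the averages gives the hierarchy; smooth
# data yields small detail coefficients, which compress well.

def haar_forward(values):
    """One level: pairwise averages first, then detail coefficients."""
    averages = [(a + b) / 2 for a, b in zip(values[::2], values[1::2])]
    details = [(a - b) / 2 for a, b in zip(values[::2], values[1::2])]
    return averages, details

def haar_inverse(averages, details):
    out = []
    for avg, d in zip(averages, details):
        out.extend([avg + d, avg - d])
    return out

colors = [100, 104, 96, 98, 200, 202, 50, 54]   # e.g. one color channel
avgs, dets = haar_forward(colors)
print(avgs)   # -> [102.0, 97.0, 201.0, 52.0]
print(dets)   # -> [-2.0, -1.0, -1.0, -2.0]
print(haar_inverse(avgs, dets) == colors)       # lossless round trip
```

In the paper's method the averages are additionally weighted by how many points each region contains, and the resulting coefficients are entropy-coded with per-sub-band Laplace models.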
The multifacet graphically contracted function method. I. Formulation and implementation
NASA Astrophysics Data System (ADS)
Shepard, Ron; Gidofalvi, Gergely; Brozell, Scott R.
2014-08-01
The basic formulation for the multifacet generalization of the graphically contracted function (MFGCF) electronic structure method is presented. The analysis includes the discussion of linear dependency and redundancy of the arc factor parameters, the computation of reduced density matrices, Hamiltonian matrix construction, spin-density matrix construction, the computation of optimization gradients for single-state and state-averaged calculations, graphical wave function analysis, and the efficient computation of configuration state function and Slater determinant expansion coefficients. Timings are given for Hamiltonian matrix element and analytic optimization gradient computations for a range of model problems for full-CI Shavitt graphs, and it is observed that both the energy and the gradient computation scale as O(N^2 n^4) for N electrons and n orbitals. The important arithmetic operations are within dense matrix-matrix product computational kernels, resulting in a computationally efficient procedure. An initial implementation of the method is used to present applications to several challenging chemical systems, including N2 dissociation, cubic H8 dissociation, the symmetric dissociation of H2O, and the insertion of Be into H2. The results are compared to the exact full-CI values and also to those of the previous single-facet GCF expansion form.
A Placer-Gold Evaluation Exercise.
ERIC Educational Resources Information Center
Tunley, A. Tom
1984-01-01
A laboratory exercise allowing students to use drillhole data to simulate the process of locating a placer gold paystreak is presented. As part of the activity students arithmetically compute the value of their gold, mining costs, and personal profits or losses, and decide on development plans for the claim. (BC)
Using Two Languages when Learning Mathematics
ERIC Educational Resources Information Center
Moschkovich, Judit
2007-01-01
This article reviews two sets of research studies from outside of mathematics education to consider how they may be relevant to the study of bilingual mathematics learners using two languages. The first set of studies consists of psycholinguistics experiments comparing monolinguals and bilinguals using two languages during arithmetic computation (language…
Predicting Arithmetic Abilities: The Role of Preparatory Arithmetic Markers and Intelligence
ERIC Educational Resources Information Center
Stock, Pieter; Desoete, Annemie; Roeyers, Herbert
2009-01-01
Arithmetic abilities acquired in kindergarten are found to be strong predictors for later deficient arithmetic abilities. This longitudinal study (N = 684) was designed to examine if it was possible to predict the level of children's arithmetic abilities in first and second grade from their performance on preparatory arithmetic abilities in…
Code of Federal Regulations, 2012 CFR
2012-07-01
§ 60.3042 How do I convert my 1-hour arithmetic averages into the appropriate averaging times and units? (Protection of Environment; Model Rule-Monitoring, for units that commenced construction on or before December 9, 2004.) (a) Use Equation 1 in § 60.3076 to…
Development of a Cloud Resolving Model for Heterogeneous Supercomputers
NASA Astrophysics Data System (ADS)
Sreepathi, S.; Norman, M. R.; Pal, A.; Hannah, W.; Ponder, C.
2017-12-01
A cloud resolving climate model is needed to reduce major systematic errors in climate simulations that arise from structural uncertainty in numerical treatments of convection, such as convective storm systems. This research describes the porting effort to enable the SAM (System for Atmosphere Modeling) cloud resolving model to run on heterogeneous supercomputers using GPUs (Graphical Processing Units). We have isolated a standalone configuration of SAM that is targeted for integration into the DOE ACME (Accelerated Climate Modeling for Energy) Earth System model. We have identified key computational kernels from the model and offloaded them to a GPU using the OpenACC programming model. Furthermore, we are investigating various optimization strategies intended to enhance GPU utilization, including loop fusion/fission, coalesced data access, and loop refactoring to a higher abstraction level. We will present early performance results and lessons learned, as well as optimization strategies. The computational platform used in this study is the Summitdev system, an early testbed that is one generation removed from Summit, the next leadership-class supercomputer at Oak Ridge National Laboratory. The system contains 54 nodes, each with 2 IBM POWER8 CPUs and 4 NVIDIA Tesla P100 GPUs. This work is part of a larger project, the ACME-MMF component of the U.S. Department of Energy (DOE) Exascale Computing Project. The ACME-MMF approach addresses structural uncertainty in cloud processes by replacing traditional parameterizations with a cloud resolving "superparameterization" within each grid cell of the global climate model. Superparameterization dramatically increases arithmetic intensity, making the MMF approach an ideal strategy for achieving good performance on emerging exascale computing architectures. The goal of the project is to integrate superparameterization into ACME and explore its full potential to scientifically and computationally advance climate simulation and prediction.
Neurocognitive predictors of financial capacity in traumatic brain injury.
Martin, Roy C; Triebel, Kristen; Dreer, Laura E; Novack, Thomas A; Turner, Crystal; Marson, Daniel C
2012-01-01
Objective: To develop cognitive models of financial capacity (FC) in patients with traumatic brain injury (TBI). Design: Longitudinal. Setting: Inpatient brain injury rehabilitation unit. Participants: Twenty healthy controls and 24 adults with moderate-to-severe TBI, assessed at baseline (30 days postinjury) and 6 months postinjury. Measures: The FC instrument (FCI) and a neuropsychological test battery. Univariate correlation and multiple regression procedures were employed to develop cognitive models of FCI performance in the TBI group at baseline and 6-month follow-up. Results: Three cognitive predictor models of FC were developed. At baseline, measures of mental arithmetic/working memory and immediate verbal memory predicted baseline FCI performance (R = 0.72). At 6-month follow-up, measures of executive function and mental arithmetic/working memory predicted 6-month FCI performance (R = 0.79), and a third model found that these 2 measures at baseline predicted 6-month FCI performance (R = 0.71). Conclusions: Multiple cognitive functions are associated with initial impairment and partial recovery of FC in moderate-to-severe TBI patients. In particular, arithmetic, working memory, and executive function skills appear critical to recovery of FC in TBI. The study results represent an initial step toward developing a neurocognitive model of FC in patients with TBI.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCormick, B.H.; Narasimhan, R.
1963-01-01
The overall computer system contains three main parts: an input device, a pattern recognition unit (PRU), and a control computer. The bubble chamber picture is divided into a grid of 1-mm squares on the film. It is then processed in parallel in a two-dimensional array of 1024 identical processing modules (stalactites) of the PRU. The array can function as a two-dimensional shift register in which results of successive shifting operations can be accumulated. The pattern recognition process is generally controlled by a conventional arithmetic computer. (A.G.W.)
Parallel computations and control of adaptive structures
NASA Technical Reports Server (NTRS)
Park, K. C.; Alvin, Kenneth F.; Belvin, W. Keith; Chong, K. P. (Editor); Liu, S. C. (Editor); Li, J. C. (Editor)
1991-01-01
The equations of motion for structures with adaptive elements for vibration control are presented, formulated for parallel computation, to be used as a software package for real-time control of flexible space structures. A brief introduction to state-of-the-art parallel computational capability is also presented. Time-marching strategies are developed for effective use of massive parallel mapping, partitioning, and the necessary arithmetic operations. An example is offered of the simulation of control-structure interaction on a parallel computer, and the impact of the presented approach on applications in disciplines other than the aerospace industry is assessed.
Mathematics for the Middle Grades (5-9). 1982 Yearbook.
ERIC Educational Resources Information Center
Silvey, Linda, Ed.; Smart, James R., Ed.
This yearbook for teachers of mathematics in grades 5-9 contains three sections: (1) critical issues; (2) learning activities; and (3) games, contests, and student presentations. The first section includes articles on sex-related differences, learning disabled students, computer literacy, mental arithmetic, rational numbers, and problem solving.…
BIBLIOGRAPHIES, HIGH SCHOOL MATHEMATICS.
ERIC Educational Resources Information Center
WOODS, PAUL E.
THIS ANNOTATED BIBLIOGRAPHY IS A COMPILATION OF A NUMBER OF HIGHLY REGARDED BOOK LISTS CONSISTING OF LIBRARY BOOKS AND TEXTBOOKS FOR GRADES 7-12. THE BOOKS IN THIS LIST ARE CURRENTLY IN PRINT AND THE CONTENT IS REPRESENTATIVE OF THE FOLLOWING AREAS OF MATHEMATICS--MATHEMATICAL RECREATION, COMPUTERS, ARITHMETIC, ALGEBRA, EUCLIDEAN GEOMETRY,…
Interpreting Bivariate Regression Coefficients: Going beyond the Average
ERIC Educational Resources Information Center
Halcoussis, Dennis; Phillips, G. Michael
2010-01-01
Statistics, econometrics, investment analysis, and data analysis classes often review the calculation of several types of averages, including the arithmetic mean, geometric mean, harmonic mean, and various weighted averages. This note shows how each of these can be computed using a basic regression framework. By recognizing when a regression model…
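The note's regression trick can be sketched directly: each classical average is the intercept of a constant-only regression after a suitable transform of the data. OLS on y gives the arithmetic mean, OLS on log(y) gives the log of the geometric mean, and OLS on 1/y gives the reciprocal of the harmonic mean. The data values below are illustrative:

```python
# Each average as the intercept of a constant-only OLS regression:
# the least-squares intercept of y = b0 + error is simply mean(y),
# so transforming y before the regression recovers the other means.
import math

def ols_intercept(ys):
    """Intercept of the constant-only OLS fit, i.e. mean(ys)."""
    return sum(ys) / len(ys)

y = [2.0, 4.0, 8.0]

arithmetic = ols_intercept(y)
geometric = math.exp(ols_intercept([math.log(v) for v in y]))
harmonic = 1.0 / ols_intercept([1.0 / v for v in y])

print(arithmetic)           # -> 4.666...
print(round(geometric, 6))  # -> 4.0
print(round(harmonic, 6))   # -> 3.428571
```

Adding regressors to the same transformed specifications is what lets the note move "beyond the average" within one regression framework.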
Can Dyscalculics Estimate the Results of Arithmetic Problems?
ERIC Educational Resources Information Center
Ganor-Stern, Dana
2017-01-01
The present study is the first to examine the computation estimation skills of dyscalculics versus controls using the estimation comparison task. In this task, participants judged whether an estimated answer to a multidigit multiplication problem was larger or smaller than a given reference number. While dyscalculics were less accurate than…
An Elementary Algorithm to Evaluate Trigonometric Functions to High Precision
ERIC Educational Resources Information Center
Johansson, B. Tomas
2018-01-01
Evaluation of the cosine function is done via a simple Cordic-like algorithm, together with a package for handling arbitrary-precision arithmetic in the computer program Matlab. Approximations to the cosine function having hundreds of correct decimals are presented with a discussion around errors and implementation.
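A CORDIC-style cosine evaluation can be sketched in plain double-precision Python (giving roughly 15 correct decimals rather than the hundreds obtained with arbitrary-precision arithmetic in the paper); the iteration count is an illustrative choice:

```python
# CORDIC in rotation mode: rotate the unit vector through a fixed
# table of arctangent angles, steering the residual angle z toward 0,
# then multiply by the accumulated gain K at the end.
import math

def cordic_cos(theta, n=50):
    """cos(theta) for theta within CORDIC's convergence range (~1.74 rad)."""
    angles = [math.atan(2.0 ** -i) for i in range(n)]
    # Gain K = prod(cos(atan(2^-i))): every rotation stretches the vector.
    k = 1.0
    for i in range(n):
        k *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = 1.0, 0.0, theta
    for i in range(n):
        d = 1.0 if z >= 0 else -1.0        # rotate toward z = 0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return x * k

print(abs(cordic_cos(1.0) - math.cos(1.0)) < 1e-10)
```

With exact arithmetic each iteration adds roughly one bit of accuracy, which is why pairing the same recurrence with an arbitrary-precision package yields hundreds of correct decimals.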
Objective Criteria for the Selection of Software.
ERIC Educational Resources Information Center
Burk, Laurena
The seven stages in the system development process are discussed in the context of implementing basic arithmetic drill and practice exercises on a computer-based system: (1) feasibility study; (2) requirements definition; (3) alternative specifications; (4) evaluation and selection of an alternative; (5) system design; (6) development and testing;…
NASA Astrophysics Data System (ADS)
Tohir, M.; Abidin, Z.; Dafik; Hobri
2018-04-01
Arithmetic is one of the topics in mathematics that deals with logic and with the detailed process of generalizing formulas. Creativity and flexibility are needed to generalize the formula of an arithmetic series. This research aimed at analyzing students' creative thinking skills in generalizing arithmetic series. The triangulation method and research-based learning were used in this research. The subjects were students of the Master Program of Mathematics Education in the Faculty of Teacher Training and Education at Jember University. The data were collected by giving assignments to the students: first, an open problem-solving task and documentation study in which students arranged a generalization pattern based on a formula dependent on i and on a function dependent on i and j; then a second problem-solving task in which students constructed arithmetic generalization patterns based on a formula dependent on i and i + n and on the sum formula of functions dependent on i and j. The data analysis technique used in this study was the Miles and Huberman analysis model. Based on the data analysis for task 1, the levels of students' creative thinking skills were classified as follows: 22.22% of the students were categorized as “not creative”, 38.89% as “less creative”, 22.22% as “sufficiently creative”, and 16.67% as “creative”. By contrast, the data analysis for task 2 found that 22.22% of the students were categorized as “sufficiently creative”, 44.44% as “creative”, and 33.33% as “very creative”. These results can serve as a basis for teaching references and for actualizing a better teaching model in order to increase students' creative thinking skills.
Raghubar, Kimberly P.; Barnes, Marcia A.; Dennis, Maureen; Cirino, Paul T.; Taylor, Heather; Landry, Susan
2015-01-01
Objective: Math and attention are related in neurobiological and behavioral models of mathematical cognition. This study employed model-driven assessments of attention and math in children with spina bifida myelomeningocele (SBM), who have known math difficulties and specific attentional deficits, to more directly examine putative relations between attention and mathematical processing. The relation of other domain general abilities and math was also investigated. Method: Participants were 9.5-year-old children with SBM (N = 44) and typically developing children (N = 50). Participants were administered experimental exact and approximate arithmetic tasks, and standardized measures of math fluency and calculation. Cognitive measures included the Attention Network Test (ANT), and standardized measures of fine motor skills, verbal working memory (WM), and visual-spatial WM. Results: Children with SBM performed similarly to peers on exact arithmetic but more poorly on approximate and standardized arithmetic measures. On the ANT, children with SBM differed from controls on orienting attention but not alerting and executive attention. Multiple mediation models showed that: fine motor skills and verbal WM mediated the relation of group to approximate arithmetic; fine motor skills and visual-spatial WM mediated the relation of group to math fluency; and verbal and visual-spatial WM mediated the relation of group to math calculation. Attention was not a significant mediator of the effects of group for any aspect of math in this study. Conclusions: Results are discussed with reference to models of attention, WM, and mathematical cognition. PMID:26011113
Raghubar, Kimberly P; Barnes, Marcia A; Dennis, Maureen; Cirino, Paul T; Taylor, Heather; Landry, Susan
2015-11-01
Math and attention are related in neurobiological and behavioral models of mathematical cognition. This study employed model-driven assessments of attention and math in children with spina bifida myelomeningocele (SBM), who have known math difficulties and specific attentional deficits, to more directly examine putative relations between attention and mathematical processing. The relation of other domain-general abilities and math was also investigated. Participants were 9.5-year-old children with SBM (n = 44) and typically developing children (n = 50). Participants were administered experimental exact and approximate arithmetic tasks, and standardized measures of math fluency and calculation. Cognitive measures included the Attention Network Test (ANT), and standardized measures of fine motor skills, verbal working memory (WM), and visual-spatial WM. Children with SBM performed similarly to peers on exact arithmetic, but more poorly on approximate and standardized arithmetic measures. On the ANT, children with SBM differed from controls on orienting attention, but not on alerting and executive attention. Multiple mediation models showed that fine motor skills and verbal WM mediated the relation of group to approximate arithmetic; fine motor skills and visual-spatial WM mediated the relation of group to math fluency; and verbal and visual-spatial WM mediated the relation of group to math calculation. Attention was not a significant mediator of the effects of group for any aspect of math in this study. Results are discussed with reference to models of attention, WM, and mathematical cognition. (c) 2015 APA, all rights reserved.
Chen, Yuqian; Ke, Yufeng; Meng, Guifang; Jiang, Jin; Qi, Hongzhi; Jiao, Xuejun; Xu, Minpeng; Zhou, Peng; He, Feng; Ming, Dong
2017-12-01
As one of the most important brain-computer interface (BCI) paradigms, the P300-Speller was shown to be significantly impaired when applied in practical situations due to the effects of mental workload. This study aims to provide a new method of building training models to enhance the performance of the P300-Speller under mental workload. Three experimental conditions based on the row-column P300-Speller paradigm were performed: speller-only, 3-back-speller, and mental-arithmetic-speller. Data from the dual-task conditions were added to the speller-only data to build new training models. The performance of classifiers with the different models was then compared under the same testing condition. The results showed that when the tasks of the imported training data and the testing data were the same, character recognition accuracies and round accuracies of the P300-Speller with mixed-data training models improved significantly (FDR, p < 0.005). When they differed, performance improved significantly when tested on the mental-arithmetic-speller (FDR, p < 0.05), while the improvement was modest when tested on the n-back-speller (FDR, p < 0.1). Analysis of ERPs revealed that the ERP difference between training data and testing data was significantly diminished when the dual-task data were introduced into the training data (FDR, p < 0.05). The new method of training classifiers on mixed data proved effective in enhancing the performance of the P300-Speller under mental workload, confirming the feasibility of building a universal training model and overcoming the effects of mental workload in practical applications. Copyright © 2017 Elsevier B.V. All rights reserved.
Online EEG-Based Workload Adaptation of an Arithmetic Learning Environment.
Walter, Carina; Rosenstiel, Wolfgang; Bogdan, Martin; Gerjets, Peter; Spüler, Martin
2017-01-01
In this paper, we demonstrate a closed-loop EEG-based learning environment that adapts instructional learning material online to improve learning success in students during arithmetic learning. The amount of cognitive workload during learning is crucial for successful learning and should be held in the optimal range for each learner. Based on EEG data from 10 subjects, we created a prediction model that estimates the learner's workload to obtain an unobtrusive workload measure. Furthermore, we developed an interactive learning environment that uses the prediction model to estimate the learner's workload online based on the EEG data and adapts the difficulty of the learning material to keep the learner's workload in an optimal range. The EEG-based learning environment was used by 13 subjects to learn arithmetic addition in the octal number system, leading to a significant learning effect. The results suggest that it is feasible to use EEG as an unobtrusive measure of cognitive workload to adapt the learning content. Furthermore, they demonstrate that prompt workload prediction is possible using a generalized prediction model, without the need for user-specific calibration.
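The closed-loop adaptation described above can be sketched in a few lines. This is an illustrative stand-in, not the authors' system: the workload values would come from the EEG-based prediction model, and the band limits and step sizes here are hypothetical.

```python
# Illustrative sketch of workload-adaptive difficulty control.
# All thresholds and step sizes are hypothetical, chosen for clarity.

def adapt_difficulty(difficulty, workload, low=0.4, high=0.7,
                     d_min=1, d_max=10):
    """Raise difficulty when estimated workload is below the optimal
    band, lower it when workload is above, otherwise keep it."""
    if workload < low:
        difficulty = min(d_max, difficulty + 1)
    elif workload > high:
        difficulty = max(d_min, difficulty - 1)
    return difficulty

# Simulated session: each value stands in for one online workload
# estimate; the controller walks difficulty toward the optimal band.
difficulty = 5
trace = []
for observed_workload in [0.9, 0.8, 0.75, 0.55, 0.3, 0.35, 0.6]:
    difficulty = adapt_difficulty(difficulty, observed_workload)
    trace.append(difficulty)
```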
Memristive effects in oxygenated amorphous carbon nanodevices
NASA Astrophysics Data System (ADS)
Bachmann, T. A.; Koelmans, W. W.; Jonnalagadda, V. P.; Le Gallo, M.; Santini, C. A.; Sebastian, A.; Eleftheriou, E.; Craciun, M. F.; Wright, C. D.
2018-01-01
Computing with resistive-switching (memristive) memory devices has shown much recent progress and offers an attractive route to circumvent the von Neumann bottleneck, i.e. the separation of processing and memory, which limits the performance of conventional computer architectures. Due to their good scalability and nanosecond switching speeds, carbon-based resistive-switching memory devices could play an important role in this respect. However, devices based on elemental carbon, such as tetrahedral amorphous carbon or ta-C, typically suffer from a low cycling endurance. A material that has proven to be capable of combining the advantages of elemental carbon-based memories with simple fabrication methods and good endurance performance for binary memory applications is oxygenated amorphous carbon, or a-COx. Here, we examine the memristive capabilities of nanoscale a-COx devices, in particular their ability to provide the multilevel and accumulation properties that underpin computing-type applications. We show the successful operation of nanoscale a-COx memory cells for both the storage of multilevel states (here 3-level) and for the provision of an arithmetic accumulator. We implement a base-16, or hexadecimal, accumulator and show how such a device can carry out hexadecimal arithmetic and simultaneously store the computed result in the self-same a-COx cell, all using fast (sub-10 ns) and low-energy (sub-pJ) input pulses.
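The accumulator scheme described above can be mimicked in software. The sketch below is a schematic analogue only (it models the counting behavior, not the device physics): each cell stores one of 16 levels, unit pulses increment the level, and a wrap past 15 emits a carry, which is how digit-by-digit hexadecimal arithmetic proceeds.

```python
class HexAccumulatorCell:
    """Software analogue of a multilevel memory cell operated as a
    base-16 accumulator: pulses increment the stored level, and a
    carry is emitted when the level wraps past 15."""
    def __init__(self):
        self.level = 0

    def pulse(self, n=1):
        """Apply n unit pulses; return the number of carries produced."""
        total = self.level + n
        self.level = total % 16
        return total // 16

def hex_add(a, b):
    """Add two hexadecimal digit strings using a chain of cells."""
    digits_a = [int(c, 16) for c in reversed(a)]
    digits_b = [int(c, 16) for c in reversed(b)]
    width = max(len(digits_a), len(digits_b))
    digits_a += [0] * (width - len(digits_a))
    digits_b += [0] * (width - len(digits_b))
    out, carry = [], 0
    for da, db in zip(digits_a, digits_b):
        cell = HexAccumulatorCell()
        # Pulse in each addend digit and the incoming carry; collect
        # any carries the cell emits along the way.
        carry = cell.pulse(da) + cell.pulse(db) + cell.pulse(carry)
        out.append("0123456789abcdef"[cell.level])
    if carry:
        out.append("1")
    return "".join(reversed(out))
```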
Dynamic response analysis of structure under time-variant interval process model
NASA Astrophysics Data System (ADS)
Xia, Baizhan; Qin, Yuan; Yu, Dejie; Jiang, Chao
2016-10-01
Due to aggressive environmental factors, variation of dynamic loads, degradation of material properties, and wear of machine surfaces, parameters related to a structure are distinctly time-variant. The typical model for time-variant uncertainties is the random process model, which is constructed on the basis of a large number of samples. In this work, we propose a time-variant interval process model which can effectively deal with time-variant uncertainties when only limited information is available. Two methods are then presented for the dynamic response analysis of a structure under the time-variant interval process model. The first is the direct Monte Carlo method (DMCM), whose computational burden is relatively high. The second is the Monte Carlo method based on Chebyshev polynomial expansion (MCM-CPE), whose computational efficiency is high. In MCM-CPE, the dynamic response of the structure is approximated by Chebyshev polynomials, which can be calculated efficiently, and the variational range of the dynamic response is then estimated from the samples yielded by the Monte Carlo method. To address the dependency phenomenon of interval operations, affine arithmetic is integrated into the Chebyshev polynomial expansion. The computational effectiveness and efficiency of MCM-CPE are verified by two numerical examples: a spring-mass-damper system and a shell structure.
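The surrogate idea behind MCM-CPE can be illustrated with a scalar toy problem: fit a Chebyshev expansion to an (expensive) response function once, then sample the cheap surrogate to estimate the response range. This is a minimal sketch, not the paper's method; it omits the structural dynamics, the interval process, and the affine-arithmetic treatment of dependency.

```python
import math
import random

def cheb_fit(f, n):
    """Chebyshev coefficients of degree < n for f on [-1, 1],
    computed at the classical Chebyshev-Gauss nodes."""
    nodes = [math.cos(math.pi * (k + 0.5) / n) for k in range(n)]
    fv = [f(x) for x in nodes]
    coeffs = []
    for j in range(n):
        c = 2.0 / n * sum(fv[k] * math.cos(j * math.pi * (k + 0.5) / n)
                          for k in range(n))
        coeffs.append(c)
    coeffs[0] /= 2.0  # series is c0 + sum_{j>=1} c_j T_j(x)
    return coeffs

def cheb_eval(coeffs, x):
    """Evaluate the Chebyshev series by the Clenshaw recurrence."""
    b1 = b2 = 0.0
    for c in reversed(coeffs[1:]):
        b1, b2 = c + 2.0 * x * b1 - b2, b1
    return coeffs[0] + x * b1 - b2

# Build the surrogate once, then Monte Carlo sample it to estimate
# the variational range of the response over the interval parameter.
random.seed(0)
coeffs = cheb_fit(math.exp, 12)
samples = [cheb_eval(coeffs, random.uniform(-1.0, 1.0))
           for _ in range(5000)]
lo, hi = min(samples), max(samples)
```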
Quality of Arithmetic Education for Children with Cerebral Palsy
ERIC Educational Resources Information Center
Jenks, Kathleen M.; de Moor, Jan; van Lieshout, Ernest C. D. M.; Withagen, Floortje
2010-01-01
The aim of this exploratory study was to investigate the quality of arithmetic education for children with cerebral palsy. The use of individual educational plans, amount of arithmetic instruction time, arithmetic instructional grouping, and type of arithmetic teaching method were explored in three groups: children with cerebral palsy (CP) in…
Hoffman, Joel C; Sierszen, Michael E; Cotter, Anne M
2015-11-15
Normalizing δ13C values of animal tissue for lipid content is necessary to accurately interpret food-web relationships from stable isotope analysis. To reduce the effort and expense associated with chemical extraction of lipids, various studies have tested arithmetic mass balance to mathematically normalize δ13C values for lipid content; however, the approach assumes that lipid content is related to the tissue C:N ratio. We evaluated two commonly used models for estimating tissue lipid content based on the C:N ratio (a mass balance model and a stoichiometric model) by comparing model predictions to the measured lipid content of white muscle tissue. We then determined the effect of lipid model choice on δ13C values normalized using arithmetic mass balance. To do so, we used a collection of fish from Lake Superior spanning a wide range in lipid content (5% to 73% lipid). We found that the lipid content was positively related to the bulk muscle tissue C:N ratio. The two different lipid models produced similar estimates of lipid content based on tissue C:N, within 6% for tissue C:N values <7. Normalizing δ13C values using an arithmetic mass-balance equation based on either model yielded similar results, with a small bias (<1‰) compared with results based on chemical extraction. Among-species consistency in the relationship between fish muscle tissue C:N ratio and lipid content supports the application of arithmetic mass balance to normalize δ13C values for lipid content. The uncertainty associated with both lipid extraction quality and choice of model parameters constrains the achievable precision of normalized δ13C values to about ±1.0‰. Published in 2015. This article is a U.S. Government work and is in the public domain in the U.S.A.
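A generic two-pool arithmetic mass balance of the kind discussed above can be written down directly. In the sketch below, bulk tissue is treated as a mixture of a lipid pool, depleted in 13C by a discrimination Δ, and a lipid-free pool; solving the mixing equation gives δ13C_lipidfree = δ13C_bulk + f_lipid·Δ. The linear C:N-to-lipid model and all parameter values (k, cn_lean, Δ = 6‰) are illustrative assumptions, not the specific models evaluated in the paper.

```python
def lipid_fraction(cn, cn_lean=3.2, k=0.07):
    """Illustrative (hypothetical parameters): lipid mass fraction
    assumed to rise linearly with C:N above a lean-tissue baseline,
    clipped to [0, 1]."""
    return max(0.0, min(1.0, k * (cn - cn_lean)))

def normalize_d13c(d13c_bulk, cn, delta=6.0):
    """Two-pool mass balance. With the lipid pool depleted by `delta`
    permil relative to the lipid-free pool:
        d13c_bulk = f*(d13c_free - delta) + (1 - f)*d13c_free
    which rearranges to d13c_free = d13c_bulk + f*delta."""
    f = lipid_fraction(cn)
    return d13c_bulk + f * delta
```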
CADNA_C: A version of CADNA for use with C or C++ programs
NASA Astrophysics Data System (ADS)
Lamotte, Jean-Luc; Chesneaux, Jean-Marie; Jézéquel, Fabienne
2010-11-01
The CADNA library enables one to estimate round-off error propagation using a probabilistic approach. The CADNA_C version enables this estimation in C or C++ programs, while the previous version had been developed for Fortran programs. The CADNA_C version has the same features as the previous one: with CADNA the numerical quality of any simulation program can be controlled. Furthermore, by detecting all the instabilities which may occur at run time, a numerical debugging of the user code can be performed. CADNA provides new numerical types on which round-off errors can be estimated. Slight modifications are required to control a code with CADNA, mainly changes in variable declarations, input and output. New version program summary. Program title: CADNA_C. Catalogue identifier: AEGQ_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGQ_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 60 075. No. of bytes in distributed program, including test data, etc.: 710 781. Distribution format: tar.gz. Programming language: C++. Computer: PC running LINUX with an i686 or an ia64 processor, UNIX workstations including SUN, IBM. Operating system: LINUX, UNIX. Classification: 6.5. Catalogue identifier of previous version: AEAT_v1_0. Journal reference of previous version: Comput. Phys. Comm. 178 (2008) 933. Does the new version supersede the previous version?: No. Nature of problem: A simulation program which uses floating-point arithmetic generates round-off errors, due to the rounding performed at each assignment and at each arithmetic operation. Round-off error propagation may invalidate the result of a program. The CADNA library enables one to estimate round-off error propagation in any simulation program and to detect all numerical instabilities that may occur at run time.
Solution method: The CADNA library [1-3] implements Discrete Stochastic Arithmetic [4,5] which is based on a probabilistic model of round-off errors. The program is run several times with a random rounding mode generating different results each time. From this set of results, CADNA estimates the number of exact significant digits in the result that would have been computed with standard floating-point arithmetic. Reasons for new version: The previous version (AEAT_v1_0) enables the estimation of round-off error propagation in Fortran programs [2]. The new version has been developed to enable this estimation in C or C++ programs. Summary of revisions: The CADNA_C source code consists of one assembly language file (cadna_rounding.s) and twenty-three C++ language files (including three header files). cadna_rounding.s is a symbolic link to the assembly file corresponding to the processor and the C++ compiler used. This assembly file contains routines which are frequently called in the CADNA_C C++ files to change the rounding mode. The C++ language files contain the definition of the stochastic types on which the control of accuracy can be performed, CADNA_C specific functions (for instance to enable or disable the detection of numerical instabilities), the definition of arithmetic and relational operators which are overloaded for stochastic variables and the definition of mathematical functions which can be used with stochastic arguments. As a remark, on 64-bit processors, the mathematical library associated with the GNU C++ compiler may provide incorrect results or generate severe bugs with rounding towards -∞ and +∞, which the random rounding mode is based on. Therefore, if CADNA_C is used on a 64-bit processor with the GNU C++ compiler, mathematical functions are computed with rounding to the nearest, otherwise they are computed with the random rounding mode. 
It must be pointed out that the knowledge of the accuracy of the argument of a mathematical function is never lost. Additional comments: In the library archive, users are advised to read the INSTALL file first. The doc directory contains a user guide named ug.cadna.pdf and a reference guide named ref_cadna.pdf. The user guide shows how to control the numerical accuracy of a program using CADNA, provides installation instructions and describes test runs. The reference guide briefly describes each function of the library. The source code (which consists of C++ and assembly files) is located in the src directory. The examples directory contains seven test runs which illustrate the use of the CADNA library and the benefits of Discrete Stochastic Arithmetic. Running time: The version of a code which uses CADNA runs at least three times slower than its floating-point version. This cost depends on the computer architecture and can be higher if the detection of numerical instabilities is enabled. In this case, the cost may be related to the number of instabilities detected.
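The round-off phenomena that CADNA is designed to expose are easy to reproduce in any IEEE-754 double-precision environment. The Python sketch below (not CADNA itself, just an illustration of the problem class) shows absorption, order-dependent summation, and a correctly rounded reference via math.fsum.

```python
import math

# Absorption: at magnitude 1e16 the spacing between adjacent doubles
# is 2.0, so adding 1.0 is lost entirely and the subtraction below
# returns 0.0 instead of the mathematically correct 1.0.
naive = (1e16 + 1.0) - 1e16

# Order dependence: the same three numbers summed in a different
# order recover the true result.
reordered = (1e16 - 1e16) + 1.0

# math.fsum tracks partial round-off and returns the correctly
# rounded sum regardless of ordering.
exact = math.fsum([1e16, 1.0, -1e16])
```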
NASA Astrophysics Data System (ADS)
Gbaguidi, Audrey J.-M.
Structural health monitoring (SHM) has become indispensable for reducing maintenance costs and increasing the in-service capacity of a structure. The increased use of lightweight composite materials in aircraft structures drastically increased the effects of fatigue induced damage on their critical structural components and thus the necessity to predict the remaining life of those components. Damage prognosis, one of the least investigated fields in SHM, uses the current damage state of the system to forecast its future performance by estimating the expected loading environments. A successful damage prediction model requires the integration of technologies in areas like measurements, materials science, mechanics of materials, and probability theories, but most importantly the quantification of uncertainty in all these areas. In this study, Affine Arithmetic is used as a method for incorporating the uncertainties due to the material properties into the fatigue life prognosis of composite plates subjected to cyclic compressive loadings. When loadings are compressive in nature, the composite plates undergo repeated buckling-unloading of the delaminated layer which induces mixed modes I and II states of stress at the tip of the delamination in the plates. The Kardomateas model-based prediction law is used to predict the growth of the delamination, while the integration of the effects of the uncertainties for modes I and II coefficients in the fatigue life prediction model is handled using Affine arithmetic. The Mode I and Mode II interlaminar fracture toughness and fatigue characterization of the composite plates are first experimentally studied to obtain the material coefficients and fracture toughness, respectively. Next, these obtained coefficients are used in the Kardomateas law to predict the delamination lengths in the composite plates while using Affine Arithmetic to handle their uncertainties. 
Finally, the fatigue behavior of the composite plates under compressive buckling loading is studied experimentally, and the measured delamination lengths are compared with the predicted values to assess the performance of Affine Arithmetic as an uncertainty-propagation tool.
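The key property of Affine Arithmetic exploited above, tracking correlations between uncertain quantities that plain interval arithmetic loses, can be shown with a toy implementation. This is a minimal sketch covering only linear operations; real affine arithmetic also handles nonlinear operations by introducing new noise symbols.

```python
class Affine:
    """Toy affine form x0 + sum_i xi * eps_i with each eps_i in [-1, 1].
    Only the linear operations needed to illustrate the idea."""
    def __init__(self, center, terms=None):
        self.center = center
        self.terms = dict(terms or {})  # noise symbol -> coefficient

    def __add__(self, other):
        terms = dict(self.terms)
        for k, v in other.terms.items():
            terms[k] = terms.get(k, 0.0) + v
        return Affine(self.center + other.center, terms)

    def __sub__(self, other):
        neg = Affine(-other.center,
                     {k: -v for k, v in other.terms.items()})
        return self + neg

    def scale(self, a):
        return Affine(a * self.center,
                      {k: a * v for k, v in self.terms.items()})

    def interval(self):
        """Enclosing interval: center +/- sum of |coefficients|."""
        r = sum(abs(v) for v in self.terms.values())
        return (self.center - r, self.center + r)

# A quantity known to within +/-1 around 10, with noise symbol "e1".
x = Affine(10.0, {"e1": 1.0})
# x - x: interval arithmetic would give [-2, 2]; affine arithmetic
# recognizes the dependency and yields exactly 0.
d = x - x
# An independent uncertain quantity uses a fresh noise symbol.
y = Affine(5.0, {"e2": 0.5})
s = x + y
```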
Practising Arithmetic Using Educational Video Games with an Interpersonal Computer
ERIC Educational Resources Information Center
Beserra, Vagner; Nussbaum, Miguel; Zeni, Ricardo; Rodriguez, Werner; Wurman, Gabriel
2014-01-01
Studies show the positive effects that video games can have on student performance and attitude towards learning. In the past few years, strategies have been generated to optimize the use of technological resources with the aim of facilitating widespread adoption of technology in the classroom. Given its low acquisition and maintenance costs, the…
ERIC Educational Resources Information Center
Bolyard, Johnna; Moyer-Packenham, Patricia
2012-01-01
This study investigated how the use of virtual manipulatives in integer instruction impacts student achievement for integer addition and subtraction. Of particular interest was the influence of using virtual manipulatives on students' ability to create and translate among representations for integer computation. The research employed a…
Fostering At-Risk Preschoolers' Number Sense
ERIC Educational Resources Information Center
Baroody, Arthur; Eiland, Michael; Thompson, Bradley
2009-01-01
Research Findings: A 9-month study served to evaluate the effectiveness of a pre-kindergarten number sense curriculum. Phase 1 of the intervention involved manipulative-, game-based number sense instruction; Phase 2, computer-aided mental-arithmetic training with the simplest sums. Eighty 4- and 5-year-olds at risk for school failure were randomly…
ERIC Educational Resources Information Center
New York City Board of Education, Brooklyn, NY.
This curriculum bulletin is designed to help teachers meet the diverse needs in mathematics of the children in fifth grade classes. In addition to the emphasis that is placed on arithmetic computational skills, the bulletin shows how to include other areas considered important, such as concepts, skills, and ideas from algebra and geometry. The 80…
Further Studies in Achievement Testing, Hearing Impaired Students. United States: Spring 1971.
ERIC Educational Resources Information Center
Gallaudet Coll., Washington, DC. Office of Demographic Studies.
Reported are four studies resulting from achievement testing activities from 1971 to 1973 with approximately 17,000 hearing impaired students from under 6 to over 21 years of age. The first study reports the relationships between selected achievement test scores (Paragraph Meaning and Arithmetic Computation subtests) and the following variables:…
20 CFR 656.40 - Determination of prevailing wage for labor certification purposes.
Code of Federal Regulations, 2010 CFR
2010-04-01
... Occupational Employment Statistics Survey shall be used to determine the arithmetic mean, unless the employer provides an acceptable survey under paragraph (g) of this section. (3) If the employer provides a survey... education and research entities. In computing the prevailing wage for a job opportunity in an occupational...
ERIC Educational Resources Information Center
Neef, Nancy A.; Marckel, Julie; Ferreri, Summer J.; Bicard, David F.; Endo, Sayaka; Aman, Michael G.; Miller, Kelly M.; Jung, Sunhwa; Nist, Lindsay; Armstrong, Nancy
2005-01-01
We conducted a brief computer-based assessment involving choices of concurrently presented arithmetic problems associated with competing reinforcer dimensions to assess impulsivity (choices controlled primarily by reinforcer immediacy) as well as the relative influence of other dimensions (reinforcer rate, quality, and response effort), with 58…
Mathematics in Baseball. Topical Module for Use in a Mathematics Laboratory Setting.
ERIC Educational Resources Information Center
Stitt, Mary; Ostrom, Nat
The objectives of this module include: (1) improving general arithmetic skills including whole numbers, fractions, and decimal fractions; (2) learning to compute averages; (3) strengthening knowledge of percent; (4) learning to locate needed information or statistical data; (5) reviewing or learning the use of the Pythagorean Theorem; (6)…
Raja, Muhammad Asif Zahoor; Kiani, Adiqa Kausar; Shehzad, Azam; Zameer, Aneela
2016-01-01
In this study, bio-inspired computing is exploited to solve systems of nonlinear equations using variants of genetic algorithms (GAs) as a global search method, hybridized with sequential quadratic programming (SQP) for efficient local search. The fitness function is constructed by defining the error function for the system of nonlinear equations in the mean-square sense. The design parameters of the mathematical models are trained by exploiting the competency of GAs, and refinement is carried out by the viable SQP algorithm. Twelve versions of the memetic GA-SQP approach are designed by taking different sets of reproduction routines in the optimization process. The performance of the proposed variants is evaluated on six numerical problems comprising systems of nonlinear equations arising in the interval arithmetic benchmark model, kinematics, neurophysiology, combustion, and chemical equilibrium. Comparative studies of the proposed results in terms of accuracy, convergence, and complexity are performed with the help of statistical performance indices to establish the worth of the schemes. The accuracy and convergence of the memetic computing GA-SQP are found to be better in each case of the simulation study, and the effectiveness of the scheme is further established through results of statistics based on different performance indices for accuracy and complexity.
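The memetic scheme described above, global GA search polished by a local solver, can be sketched on a toy system of nonlinear equations. This is not the paper's GA-SQP: the GA is a minimal real-coded variant, Newton's method stands in for the SQP refinement step, and the test system and all tuning constants are illustrative.

```python
import random

def residuals(p):
    """Toy nonlinear system with roots (1, 2) and (2, 1):
       x + y - 3 = 0,  x*y - 2 = 0."""
    x, y = p
    return [x + y - 3.0, x * y - 2.0]

def fitness(p):
    """Mean-square residual, mirroring the abstract's error function."""
    r = residuals(p)
    return sum(e * e for e in r) / len(r)

def ga_search(pop_size=60, gens=100, lo=-3.0, hi=3.0):
    """Minimal real-coded GA: elitist truncation selection,
    blend crossover, Gaussian mutation."""
    pop = [[random.uniform(lo, hi), random.uniform(lo, hi)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        elite = pop[:pop_size // 4]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            w = random.random()
            children.append([w * a[i] + (1 - w) * b[i]
                             + random.gauss(0.0, 0.05)
                             for i in range(2)])
        pop = elite + children
    return min(pop, key=fitness)

def newton_refine(p, iters=20):
    """Local refinement by Newton's method (standing in for SQP).
    Jacobian of the toy system is [[1, 1], [y, x]], det = x - y."""
    x, y = p
    for _ in range(iters):
        f1, f2 = x + y - 3.0, x * y - 2.0
        det = x - y
        if abs(det) < 1e-12:
            break
        dx = (x * f1 - f2) / det
        dy = (f2 - y * f1) / det
        x, y = x - dx, y - dy
    return [x, y]

random.seed(1)
root = newton_refine(ga_search())
```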
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gashkov, Sergey B; Sergeev, Igor' S
2012-10-31
This work suggests a method for deriving lower bounds for the complexity of polynomials with positive real coefficients implemented by circuits of functional elements over the monotone arithmetic basis {x + y, x · y} ∪ {a · x | a ∈ R+}. Using this method, several new results are obtained. In particular, we construct examples of polynomials of degree m - 1 in each of the n variables with coefficients 0 and 1 having additive monotone complexity m^((1-o(1))n) and multiplicative monotone complexity m^((1/2-o(1))n) as m^n → ∞. In this form, the lower bounds derived here are sharp. Bibliography: 72 titles.
Implicit Learning of Arithmetic Regularities Is Facilitated by Proximal Contrast
Prather, Richard W.
2012-01-01
Natural number arithmetic is a simple, powerful, and important symbolic system. Despite the intense focus on learning in cognitive development and educational research, many adults have weak knowledge of the system. In the current study, participants learn arithmetic principles via an implicit learning paradigm. Participants learn not by solving arithmetic equations, but through viewing and evaluating example equations, similar to the implicit learning of artificial grammars, which we extend here to the symbolic arithmetic system. Specifically, we find that exposure to principle-inconsistent examples facilitates the acquisition of arithmetic principle knowledge if the equations are presented to the learner in a temporally proximate fashion. The results expand on research on the implicit learning of regularities and suggest that contrasting cases, shown to facilitate explicit arithmetic learning, are also relevant to the implicit learning of arithmetic. PMID:23119101
Bounds for the price of discrete arithmetic Asian options
NASA Astrophysics Data System (ADS)
Vanmaele, M.; Deelstra, G.; Liinev, J.; Dhaene, J.; Goovaerts, M. J.
2006-01-01
In this paper the pricing of European-style discrete arithmetic Asian options with fixed and floating strike is studied by deriving analytical lower and upper bounds. In our approach we use a general technique for deriving upper (and lower) bounds for stop-loss premiums of sums of dependent random variables, as explained in Kaas et al. (Ins. Math. Econom. 27 (2000) 151-168), and additionally, the ideas of Rogers and Shi (J. Appl. Probab. 32 (1995) 1077-1088) and of Nielsen and Sandmann (J. Financial Quant. Anal. 38(2) (2003) 449-473). We are able to create a unifying framework for European-style discrete arithmetic Asian options through these bounds, that generalizes several approaches in the literature as well as improves the existing results. We obtain analytical and easily computable bounds. The aim of the paper is to formulate an advice of the appropriate choice of the bounds given the parameters, investigate the effect of different conditioning variables and compare their efficiency numerically. Several sets of numerical results are included. We also discuss hedging using these bounds. Moreover, our methods are applicable to a wide range of (pricing) problems involving a sum of dependent random variables.
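One classical bound of the kind discussed above follows from the arithmetic-geometric mean inequality: pathwise, the geometric average never exceeds the arithmetic average, so the price of a discrete geometric-average Asian call is a lower bound for its arithmetic counterpart. The Monte Carlo sketch below illustrates this ordering under geometric Brownian motion; it does not implement the paper's comonotonic bounds, and all market parameters are illustrative.

```python
import math
import random

def asian_mc(s0=100.0, k=100.0, r=0.05, sigma=0.2, t=1.0, n=12,
             paths=20000, seed=7):
    """Monte Carlo prices of discrete arithmetic- and geometric-average
    Asian calls under geometric Brownian motion, using the same random
    draws for both so the AM-GM ordering holds path by path."""
    random.seed(seed)
    dt = t / n
    drift = (r - 0.5 * sigma * sigma) * dt
    vol = sigma * math.sqrt(dt)
    disc = math.exp(-r * t)
    sum_arith = sum_geom = 0.0
    for _ in range(paths):
        s, total, log_total = s0, 0.0, 0.0
        for _ in range(n):
            s *= math.exp(drift + vol * random.gauss(0.0, 1.0))
            total += s
            log_total += math.log(s)
        arith_avg = total / n
        geom_avg = math.exp(log_total / n)
        sum_arith += max(arith_avg - k, 0.0)
        sum_geom += max(geom_avg - k, 0.0)
    return disc * sum_arith / paths, disc * sum_geom / paths

arith_price, geom_price = asian_mc()
```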
NASA Astrophysics Data System (ADS)
Zeng, X.
2015-12-01
A large number of model executions are required to obtain alternative conceptual models' predictions and their posterior probabilities in Bayesian model averaging (BMA). The posterior model probability is estimated from each model's marginal likelihood and prior probability. The heavy computational burden hinders the implementation of BMA prediction, especially for elaborate marginal likelihood estimators. To overcome this burden, an adaptive sparse grid (SG) stochastic collocation method is used to build surrogates for alternative conceptual models through a numerical experiment on a synthetic groundwater model. BMA predictions depend on model posterior weights (or marginal likelihoods), and this study therefore also evaluated four marginal likelihood estimators: the arithmetic mean estimator (AME), harmonic mean estimator (HME), stabilized harmonic mean estimator (SHME), and thermodynamic integration estimator (TIE). The results demonstrate that TIE is accurate in estimating conceptual models' marginal likelihoods, and BMA-TIE has better predictive performance than the other BMA predictions. TIE is also highly stable: repeated estimates of a conceptual model's marginal likelihood obtained with TIE show significantly less variability than those obtained with the other estimators. In addition, the SG surrogates efficiently facilitate BMA predictions, especially for BMA-TIE. The number of model executions needed for building surrogates is 4.13%, 6.89%, 3.44%, and 0.43% of the model executions required by BMA-AME, BMA-HME, BMA-SHME, and BMA-TIE, respectively.
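The simplest two of the four estimators compared above, AME and HME, can be demonstrated on a conjugate toy model where the true marginal likelihood is known in closed form. This is an illustrative sketch, not the study's groundwater setting; SHME and TIE are omitted.

```python
import math
import random

def normal_pdf(x, mean, var):
    return (math.exp(-(x - mean) ** 2 / (2.0 * var))
            / math.sqrt(2.0 * math.pi * var))

# Toy model: theta ~ N(0, 1), y | theta ~ N(theta, 1), observed y = 1.
# By conjugacy the true marginal likelihood is N(y; 0, 2) and the
# posterior is N(y/2, 1/2).
y = 1.0
true_ml = normal_pdf(y, 0.0, 2.0)

random.seed(3)
n = 50000

# Arithmetic mean estimator: average the likelihood over prior draws.
ame = sum(normal_pdf(y, random.gauss(0.0, 1.0), 1.0)
          for _ in range(n)) / n

# Harmonic mean estimator: harmonic mean of the likelihood over
# posterior draws (consistent, but notoriously high-variance).
inv = sum(1.0 / normal_pdf(y, random.gauss(y / 2.0, math.sqrt(0.5)), 1.0)
          for _ in range(n))
hme = n / inv
```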
Specificity and Overlap in Skills Underpinning Reading and Arithmetical Fluency
ERIC Educational Resources Information Center
van Daal, Victor; van der Leij, Aryan; Ader, Herman
2013-01-01
The aim of this study was to examine unique and common causes of problems in reading and arithmetic fluency. Students aged 13 to 14 were placed into one of five groups: reading disabled (RD, n = 16), arithmetic disabled (AD, n = 34), reading and arithmetic disabled (RAD, n = 17), reading, arithmetic, and listening comprehension disabled…
Model Checking with Edge-Valued Decision Diagrams
NASA Technical Reports Server (NTRS)
Roux, Pierre; Siminiceanu, Radu I.
2010-01-01
We describe an algebra of Edge-Valued Decision Diagrams (EVMDDs) to encode arithmetic functions and its implementation in a model checking library. We provide efficient algorithms for manipulating EVMDDs and review the theoretical time complexity of these algorithms for all basic arithmetic and relational operators. We also demonstrate that the time complexity of the generic recursive algorithm for applying a binary operator on EVMDDs is no worse than that of Multi-Terminal Decision Diagrams. We have implemented a new symbolic model checker with the intention to represent in one formalism the best techniques available at the moment across a spectrum of existing tools. Compared to the CUDD package, our tool is several orders of magnitude faster.
Peer-to-peer Monte Carlo simulation of photon migration in topical applications of biomedical optics
NASA Astrophysics Data System (ADS)
Doronin, Alexander; Meglinski, Igor
2012-09-01
In the framework of further development of the unified approach to photon migration in complex turbid media, such as biological tissues, we present a peer-to-peer (P2P) Monte Carlo (MC) code. Object-oriented programming is used to generalize the MC model for multipurpose use in various applications of biomedical optics. The online user interface providing multiuser access is developed using modern web technologies, such as Microsoft Silverlight and ASP.NET. The emerging P2P network, utilizing computers with different types of compute-unified-device-architecture-capable graphics processing units (GPUs), is applied for acceleration and to overcome the limitations imposed by multiuser access in the online MC computational tool. The developed P2P MC was validated by comparing the results of simulating diffuse reflectance and fluence rate distribution for a semi-infinite scattering medium with known analytical results, with results of the adding-doubling method, and with other GPU-based MC techniques developed in the past. The best speedup of processing multiuser requests, in a range of 4 to 35 s, was achieved using single-precision computing, while double-precision computing for floating-point arithmetic operations provides higher accuracy.
Synthetic analog computation in living cells.
Daniel, Ramiz; Rubens, Jacob R; Sarpeshkar, Rahul; Lu, Timothy K
2013-05-30
A central goal of synthetic biology is to achieve multi-signal integration and processing in living cells for diagnostic, therapeutic and biotechnology applications. Digital logic has been used to build small-scale circuits, but other frameworks may be needed for efficient computation in the resource-limited environments of cells. Here we demonstrate that synthetic analog gene circuits can be engineered to execute sophisticated computational functions in living cells using just three transcription factors. Such synthetic analog gene circuits exploit feedback to implement logarithmically linear sensing, addition, ratiometric and power-law computations. The circuits exhibit Weber's law behaviour as in natural biological systems, operate over a wide dynamic range of up to four orders of magnitude and can be designed to have tunable transfer functions. Our circuits can be composed to implement higher-order functions that are well described by both intricate biochemical models and simple mathematical functions. By exploiting analog building-block functions that are already naturally present in cells, this approach efficiently implements arithmetic operations and complex functions in the logarithmic domain. Such circuits may lead to new applications for synthetic biology and biotechnology that require complex computations with limited parts, need wide-dynamic-range biosensing or would benefit from the fine control of gene expression.
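The computational payoff of log-linear sensing described above is that multiplication, division, and power laws become addition, subtraction, and scaling in the logarithmic domain. A numerical sketch of that principle (ordinary floating point standing in for the analog gene circuits):

```python
import math

def log_domain_ratio(a, b):
    """Ratiometric computation: a/b becomes a subtraction once inputs are
    represented as logarithms, mirroring how log-linear sensors make a
    divider out of a subtractor."""
    return math.exp(math.log(a) - math.log(b))

def log_domain_power(a, p):
    """Power-law computation: a**p is a scaling by p in log space."""
    return math.exp(p * math.log(a))

# Inputs spanning several orders of magnitude (the abstract cites a
# dynamic range of up to four) stay well conditioned in log form.
print(log_domain_ratio(5000.0, 2.5))
print(log_domain_power(100.0, 0.5))
```

The design choice the abstract highlights is exactly this: exploiting functions the substrate already computes cheaply (logarithms, sums) rather than forcing digital logic onto it.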
Babies and math: A meta-analysis of infants' simple arithmetic competence.
Christodoulou, Joan; Lac, Andrew; Moore, David S
2017-08-01
Wynn's (1992) seminal research reported that infants looked longer at stimuli representing "incorrect" versus "correct" solutions of basic addition and subtraction problems and concluded that infants have innate arithmetical abilities. Since then, infancy researchers have attempted to replicate this effect, yielding mixed findings. The present meta-analysis aimed to systematically compile and synthesize all of the primary replications and extensions of Wynn (1992) that have been conducted to date. The synthesis included 12 studies consisting of 26 independent samples and 550 unique infants. The summary effect, computed using a random-effects model, was statistically significant, d = +0.34, p < .001, suggesting that the phenomenon Wynn originally reported is reliable. Five different tests of publication bias yielded mixed results, suggesting that while a moderate level of publication bias is probable, the summary effect would be positive even after accounting for this issue. Out of the 10 metamoderators tested, none were found to be significant, but most of the moderator subgroups were significantly different from a null effect. Although this meta-analysis provides support for Wynn's original findings, further research is warranted to understand the underlying mechanisms responsible for infants' visual preferences for "mathematically incorrect" test stimuli. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
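The summary effect reported above (d = +0.34 under a random-effects model) can be reproduced in miniature. A sketch of the standard DerSimonian-Laird estimator with made-up study data; the abstract does not publish its per-sample inputs, and the authors' exact estimator choice is not specified here:

```python
def random_effects_summary(effects, variances):
    """DerSimonian-Laird random-effects meta-analysis (generic sketch).

    effects   : per-study standardized mean differences (d values)
    variances : per-study sampling variances
    Returns the weighted summary effect and its standard error."""
    w = [1.0 / v for v in variances]                     # fixed-effect weights
    fixed = sum(wi * di for wi, di in zip(w, effects)) / sum(w)
    q = sum(wi * (di - fixed) ** 2 for wi, di in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                        # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]       # random-effects weights
    summary = sum(wi * di for wi, di in zip(w_star, effects)) / sum(w_star)
    se = (1.0 / sum(w_star)) ** 0.5
    return summary, se

# Hypothetical three-study input, purely for illustration.
d, se = random_effects_summary([0.5, 0.2, 0.4], [0.04, 0.05, 0.03])
print(round(d, 3), round(se, 3))
```

When the between-study variance estimate is zero (as with this toy input), the random-effects summary collapses to the fixed-effect weighted mean.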
The multifacet graphically contracted function method. I. Formulation and implementation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shepard, Ron; Brozell, Scott R.; Gidofalvi, Gergely
2014-08-14
The basic formulation for the multifacet generalization of the graphically contracted function (MFGCF) electronic structure method is presented. The analysis includes the discussion of linear dependency and redundancy of the arc factor parameters, the computation of reduced density matrices, Hamiltonian matrix construction, spin-density matrix construction, the computation of optimization gradients for single-state and state-averaged calculations, graphical wave function analysis, and the efficient computation of configuration state function and Slater determinant expansion coefficients. Timings are given for Hamiltonian matrix element and analytic optimization gradient computations for a range of model problems for full-CI Shavitt graphs, and it is observed that bothmore » the energy and the gradient computation scale as O(N{sup 2}n{sup 4}) for N electrons and n orbitals. The important arithmetic operations are within dense matrix-matrix product computational kernels, resulting in a computationally efficient procedure. An initial implementation of the method is used to present applications to several challenging chemical systems, including N{sub 2} dissociation, cubic H{sub 8} dissociation, the symmetric dissociation of H{sub 2}O, and the insertion of Be into H{sub 2}. The results are compared to the exact full-CI values and also to those of the previous single-facet GCF expansion form.« less
Slimeware: engineering devices with slime mold.
Adamatzky, Andrew
2013-01-01
The plasmodium of the acellular slime mold Physarum polycephalum is a gigantic single cell visible to the unaided eye. The cell shows a rich spectrum of behavioral patterns in response to environmental conditions. In a series of simple experiments we demonstrate how to make computing, sensing, and actuating devices from the slime mold. We show how to program living slime mold machines by configurations of repelling and attracting gradients and demonstrate the workability of the living machines on tasks of computational geometry, logic, and arithmetic.
Novel 3D Compression Methods for Geometry, Connectivity and Texture
NASA Astrophysics Data System (ADS)
Siddeq, M. M.; Rodrigues, M. A.
2016-06-01
A large number of applications in medical visualization, games, engineering design, entertainment, heritage, e-commerce and so on require the transmission of 3D models over the Internet or over local networks. 3D data compression is an important requirement for fast data storage, access and transmission within bandwidth limitations. The Wavefront OBJ (object) file format is commonly used to share models due to its clear, simple design. Normally each OBJ file contains a large amount of data (e.g. vertices and triangulated faces, normals, texture coordinates and other parameters) describing the mesh surface. In this paper we introduce a new method to compress geometry, connectivity and texture coordinates by a novel Geometry Minimization Algorithm (GM-Algorithm) in connection with arithmetic coding. First, each vertex's (x, y, z) coordinates are encoded to a single value by the GM-Algorithm. Second, triangle faces are encoded by computing the differences between two adjacent vertex locations, which are compressed by arithmetic coding together with texture coordinates. We demonstrate the method on large data sets, achieving compression ratios between 87% and 99% without reduction in the number of reconstructed vertices and triangle faces. The decompression step is based on a Parallel Fast Matching Search Algorithm (Parallel-FMS) to recover the structure of the 3D mesh. A comparative analysis of compression ratios is provided with a number of commonly used 3D file formats such as VRML, OpenCTM and STL, highlighting the performance and effectiveness of the proposed method.
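The connectivity step above, taking differences between adjacent vertex indices before entropy coding, can be sketched apart from the arithmetic coder. Hypothetical helper names; the actual GM-Algorithm encoding of vertex coordinates is more involved than this:

```python
def delta_encode(indices):
    """Encode a sequence of vertex indices as differences between adjacent
    values. Mesh traversals visit nearby vertices, so the residuals cluster
    near zero, which is what lets a downstream entropy coder (arithmetic
    coding in the paper) compress them well."""
    deltas = [indices[0]]
    for prev, cur in zip(indices, indices[1:]):
        deltas.append(cur - prev)
    return deltas

def delta_decode(deltas):
    """Exact inverse: a running sum recovers the original indices."""
    out = [deltas[0]]
    for d in deltas[1:]:
        out.append(out[-1] + d)
    return out

faces = [101, 102, 103, 103, 104, 105, 105, 106, 107]
enc = delta_encode(faces)
assert delta_decode(enc) == faces      # lossless round trip
print(enc)                             # residuals cluster at 0 and 1
```

Because the transform is lossless, the "without reduction in the number of reconstructed vertices and triangle faces" claim is a property of this stage by construction; the compression ratio comes from the entropy coder exploiting the skewed residual distribution.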
Harmon, Frederick G; Frank, Andrew A; Joshi, Sanjay S
2005-01-01
A Simulink model, a propulsion energy optimization algorithm, and a CMAC controller were developed for a small parallel hybrid-electric unmanned aerial vehicle (UAV). The hybrid-electric UAV is intended for military, homeland security, and disaster-monitoring missions involving intelligence, surveillance, and reconnaissance (ISR). The Simulink model is a forward-facing simulation program used to test different control strategies. The flexible energy optimization algorithm for the propulsion system allows relative importance to be assigned between the use of gasoline, electricity, and recharging. A cerebellar model arithmetic computer (CMAC) neural network approximates the energy optimization results and is used to control the parallel hybrid-electric propulsion system. The hybrid-electric UAV with the CMAC controller uses 67.3% less energy than a two-stroke gasoline-powered UAV during a 1-h ISR mission and 37.8% less energy during a longer 3-h ISR mission.
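A cerebellar model arithmetic computer approximates a function with several overlapping coarse tilings whose active weights are summed, which is what lets it cheaply memorize the energy-optimization results mentioned above. A generic single-input CMAC sketch with arbitrary parameters, not the UAV controller:

```python
class CMAC:
    """Minimal CMAC: each of n_tilings offset quantizations of the input
    activates one tile; prediction sums the active tiles' weights, and
    training spreads the error across them."""

    def __init__(self, n_tilings=8, tile_width=1.0, lr=0.1):
        self.n_tilings = n_tilings
        self.tile_width = tile_width
        self.lr = lr
        self.weights = {}                    # sparse weight table

    def _tiles(self, x):
        # Each tiling is shifted by a fraction of the tile width.
        return [(t, int((x + t * self.tile_width / self.n_tilings)
                        // self.tile_width))
                for t in range(self.n_tilings)]

    def predict(self, x):
        return sum(self.weights.get(tile, 0.0) for tile in self._tiles(x))

    def train(self, x, target):
        err = target - self.predict(x)
        for tile in self._tiles(x):
            self.weights[tile] = (self.weights.get(tile, 0.0)
                                  + self.lr * err / self.n_tilings)

cmac = CMAC()
for _ in range(200):
    for x in [0.0, 1.0, 2.0, 3.0]:
        cmac.train(x, x * x)        # learn a toy quadratic target
print(round(cmac.predict(2.0), 2))
```

The overlapping tilings give local generalization between trained points, and lookup cost is fixed at n_tilings table accesses, which is why CMACs suit real-time control loops.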
Shelton, Chris
2016-06-01
The safe administration of drugs is a focus of attention in healthcare. It is regarded as acceptable that a formula card or mnemonic can be used to find the correct dose and fill a prescription, even though this removes any requirement for performing the underlying computation. Feedback and discussion in class reveal that confidence in arithmetic skills can be low even when students are able to pass the end-of-semester drug calculation exam. The aim was to see if confidence in the understanding and performance of arithmetic for drug calculations can be increased by emphasising students' innate powers of logical reasoning after reflection. Remedial classes offered for students who have declared a dislike or lack of confidence in arithmetic have been developed from student feedback, adopting a reasoning-by-logical-steps methodology. Students who gave up two hours of their free learning time were observed to engage seriously with the learning methods, focussing on the innate ability to perform the logical reasoning necessary for drug calculation problems. Working in small groups allowed some discussion of the route to the answer, and this was followed by class discussion and reflection. The results were recorded as weekly self-assessment scores for confidence in calculation. A self-selecting group who successfully completed the end-of-semester drug calculation exam reported low to moderate confidence in arithmetic. After four weeks focussing on logical skills, a significant increase in self-belief was measured. This continued to rise in students who remained in the classes. Many students hold a negative belief regarding their own mathematical abilities. This restricts the learning of arithmetic skills, making alternate routes using mnemonics and memorised steps an attractive alternative.
Practising stepwise logical reasoning skills, consolidated by personal reflection, has been effective in developing students' confidence and awareness of their innate powers of deduction, supporting an increase in competence in drug administration. Copyright © 2016 Elsevier Ltd. All rights reserved.
Foley, Alana E; Vasilyeva, Marina; Laski, Elida V
2017-06-01
This study examined the mediating role of children's use of decomposition strategies in the relation between visuospatial memory (VSM) and arithmetic accuracy. Children (N = 78; Age M = 9.36) completed assessments of VSM, arithmetic strategies, and arithmetic accuracy. Consistent with previous findings, VSM predicted arithmetic accuracy in children. Extending previous findings, the current study showed that the relation between VSM and arithmetic performance was mediated by the frequency of children's use of decomposition strategies. Identifying the role of arithmetic strategies in this relation has implications for increasing the math performance of children with lower VSM. Statement of contribution What is already known on this subject? The link between children's visuospatial working memory and arithmetic accuracy is well documented. Frequency of decomposition strategy use is positively related to children's arithmetic accuracy. Children's spatial skill positively predicts the frequency with which they use decomposition. What does this study add? Short-term visuospatial memory (VSM) positively relates to the frequency of children's decomposition use. Decomposition use mediates the relation between short-term VSM and arithmetic accuracy. Children with limited short-term VSM may struggle to use decomposition, decreasing accuracy. © 2016 The British Psychological Society.
Reliable and efficient solution of genome-scale models of Metabolism and macromolecular Expression
Ma, Ding; Yang, Laurence; Fleming, Ronan M. T.; ...
2017-01-18
Currently, Constraint-Based Reconstruction and Analysis (COBRA) is the only methodology that permits integrated modeling of Metabolism and macromolecular Expression (ME) at genome-scale. Linear optimization computes steady-state flux solutions to ME models, but flux values are spread over many orders of magnitude. Data values also have greatly varying magnitudes. Furthermore, standard double-precision solvers may return inaccurate solutions or report that no solution exists. Exact simplex solvers based on rational arithmetic require a near-optimal warm start to be practical on large problems (current ME models have 70,000 constraints and variables and will grow larger). We also developed a quadruple-precision version of our linear and nonlinear optimizer MINOS, and a solution procedure (DQQ) involving Double and Quad MINOS that achieves reliability and efficiency for ME models and other challenging problems tested here. DQQ will enable extensive use of large linear and nonlinear models in systems biology and other applications involving multiscale data.
Reading instead of reasoning? Predictors of arithmetic skills in children with cochlear implants.
Huber, Maria; Kipman, Ulrike; Pletzer, Belinda
2014-07-01
The aim of the present study was to evaluate whether the arithmetic achievement of children with cochlear implants (CI) was lower or comparable to that of their normal hearing peers and to identify predictors of arithmetic achievement in children with CI. In particular we related the arithmetic achievement of children with CI to nonverbal IQ, reading skills and hearing variables. 23 children with CI (onset of hearing loss in the first 24 months, cochlear implantation in the first 60 months of life, at least 3 years of hearing experience with the first CI) and 23 normal hearing peers matched by age, gender, and social background participated in this case-control study. All attended grades two to four in primary schools. To assess their arithmetic achievement, all children completed the "Arithmetic Operations" part of the "Heidelberger Rechentest" (HRT), a German arithmetic test. To assess reading skills and nonverbal intelligence as potential predictors of arithmetic achievement, all children completed the "Salzburger Lesetest" (SLS), a German reading screening, and the Culture Fair Intelligence Test (CFIT), a nonverbal intelligence test. Children with CI did not differ significantly from hearing children in their arithmetic achievement. Correlation and regression analyses revealed that in children with CI, arithmetic achievement was significantly (positively) related to reading skills, but not to nonverbal IQ. Reading skills and nonverbal IQ were not related to each other. In normal hearing children, arithmetic achievement was significantly (positively) related to nonverbal IQ, but not to reading skills. Reading skills and nonverbal IQ were positively correlated. Hearing variables were not related to arithmetic achievement. Children with CI do not show lower performance in non-verbal arithmetic tasks, compared to normal hearing peers. Copyright © 2014. Published by Elsevier Ireland Ltd.
Application of software technology to a future spacecraft computer design
NASA Technical Reports Server (NTRS)
Labaugh, R. J.
1980-01-01
A study was conducted to determine how major improvements in spacecraft computer systems can be obtained from recent advances in hardware and software technology. Investigations into integrated circuit technology indicated that the CMOS/SOS chip set being developed for the Air Force Avionics Laboratory at Wright Patterson had the best potential for improving the performance of spaceborne computer systems. An integral part of the chip set is the bit slice arithmetic and logic unit. The flexibility allowed by microprogramming, combined with the software investigations, led to the specification of a baseline architecture and instruction set.
Navier-Stokes Simulation of Homogeneous Turbulence on the CYBER 205
NASA Technical Reports Server (NTRS)
Wu, C. T.; Ferziger, J. H.; Chapman, D. R.; Rogallo, R. S.
1984-01-01
A computer code which solves the Navier-Stokes equations for three-dimensional, time-dependent, homogeneous turbulence has been written for the CYBER 205. The code has options for both 64-bit and 32-bit arithmetic. With 32-bit computation, mesh sizes up to 64^3 are contained within core of a 2 million 64-bit word memory. Computer speed timing runs were made for various vector lengths up to 6144. With this code, speeds a little over 100 Mflops have been achieved on a 2-pipe CYBER 205. Several problems encountered in the coding are discussed.
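The 64-bit/32-bit option described above trades accuracy for memory (32-bit storage is what lets the 64^3 mesh fit in core). The accumulation-error side of that trade-off can be demonstrated with the struct module emulating 32-bit rounding; a generic sketch, not the CYBER 205 code:

```python
import struct

def to_float32(x):
    """Round a 64-bit Python float to 32-bit precision via pack/unpack."""
    return struct.unpack('f', struct.pack('f', x))[0]

# Accumulate 0.1 one hundred thousand times in both precisions. The
# 32-bit accumulator visibly drifts from the exact value 10000, while
# the 64-bit accumulator stays correct to many more digits.
n = 100_000
acc64 = 0.0
acc32 = 0.0
for _ in range(n):
    acc64 += 0.1
    acc32 = to_float32(acc32 + to_float32(0.1))

print(acc64)
print(acc32)
```

In a spectral turbulence code the same rounding shows up as slow drift in conserved quantities, which is why long runs often keep sensitive reductions in 64-bit even when fields are stored in 32-bit.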
Asquith, William H.; Cleveland, Theodore G.; Roussel, Meghan C.
2011-01-01
Estimates of peak and time of peak streamflow for small watersheds (less than about 640 acres) in a suburban to urban, low-slope setting are needed for drainage design that is cost-effective and risk-mitigated. During 2007-10, the U.S. Geological Survey (USGS), in cooperation with the Harris County Flood Control District and the Texas Department of Transportation, developed a method to estimate peak and time of peak streamflow from excess rainfall for 10- to 640-acre watersheds in the Houston, Texas, metropolitan area. To develop the method, 24 watersheds in the study area with drainage areas less than about 3.5 square miles (2,240 acres) and with concomitant rainfall and runoff data were selected. The method is based on conjunctive analysis of rainfall and runoff data in the context of the unit hydrograph method and the rational method. For the unit hydrograph analysis, a gamma distribution model of unit hydrograph shape (a gamma unit hydrograph) was chosen and parameters estimated through matching of modeled peak and time of peak streamflow to observed values on a storm-by-storm basis. Watershed mean or watershed-specific values of peak and time to peak ("time to peak" is a parameter of the gamma unit hydrograph and is distinct from "time of peak") of the gamma unit hydrograph were computed. Two regression equations to estimate peak and time to peak of the gamma unit hydrograph that are based on watershed characteristics of drainage area and basin-development factor (BDF) were developed. For the rational method analysis, a lag time (time-R), volumetric runoff coefficient, and runoff coefficient were computed on a storm-by-storm basis. Watershed-specific values of these three metrics were computed. A regression equation to estimate time-R based on drainage area and BDF was developed. 
Overall arithmetic means of volumetric runoff coefficient (0.41 dimensionless) and runoff coefficient (0.25 dimensionless) for the 24 watersheds were used to express the rational method in terms of excess rainfall (the excess rational method). Both the unit hydrograph method and excess rational method are shown to provide similar estimates of peak and time of peak streamflow. The results from the two methods can be combined by using arithmetic means. A nomograph is provided that shows the respective relations between the arithmetic-mean peak and time of peak streamflow to drainage areas ranging from 10 to 640 acres. The nomograph also shows the respective relations for selected BDF ranging from undeveloped to fully developed conditions. The nomograph represents the peak streamflow for 1 inch of excess rainfall based on drainage area and BDF; the peak streamflow for design storms from the nomograph can be multiplied by the excess rainfall to estimate peak streamflow. Time of peak streamflow is readily obtained from the nomograph. Therefore, given excess rainfall values derived from watershed-loss models, which are beyond the scope of this report, the nomograph represents a method for estimating peak and time of peak streamflow for applicable watersheds in the Houston metropolitan area. Lastly, analysis of the relative influence of BDF on peak streamflow is provided, and the results indicate a 0.04 log10 cubic feet per second change of peak streamflow per positive unit of change in BDF. This relative change can be used to adjust peak streamflow from the method or other hydrologic methods for a given BDF to other BDF values; example computations are provided.
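The two closing relations above, scaling the 1-inch nomograph peak by the storm's excess rainfall and shifting peaks by 0.04 log10(cubic feet per second) per unit of BDF, are simple enough to state in code. The discharge numbers are hypothetical; the nomograph itself governs the base estimate:

```python
def peak_for_storm(q_unit_cfs, excess_rainfall_in):
    """Nomograph usage: peak streamflow for 1 inch of excess rainfall,
    scaled linearly by the storm's excess rainfall (linearity is the
    unit hydrograph assumption)."""
    return q_unit_cfs * excess_rainfall_in

def adjust_peak_for_bdf(q_peak_cfs, bdf_from, bdf_to):
    """Shift a peak estimate between basin-development factors using the
    report's sensitivity of 0.04 log10(cfs) per unit of BDF."""
    return q_peak_cfs * 10 ** (0.04 * (bdf_to - bdf_from))

# Hypothetical example: 150 cfs nomograph peak, 2 inches of excess rain,
# then adjusted from BDF 6 to BDF 9.
q = peak_for_storm(150.0, 2.0)              # 300.0 cfs
print(round(adjust_peak_for_bdf(q, 6, 9), 1))   # → 395.5
```

Because the BDF sensitivity is expressed in log10 units, the adjustment is multiplicative: each added BDF unit raises the peak by a factor of 10**0.04, roughly 9.6%.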
Sex and Personality Differences in Performance on Number Computation in 11-Year-Old Children.
ERIC Educational Resources Information Center
Riding, R. J.; Borg, M. G.
1987-01-01
Eighty-four 11-year-olds were grouped in three levels of extraversion on the basis of their Junior Eysenck Personality Inventory score. They were then given an arithmetic test consisting of addition, subtraction, division, and multiplication. A significant interaction between extraversion, sex, and type of operation was found. (Author/CH)
Examining Gender DIF on a Multiple-Choice Test of Mathematics: A Confirmatory Approach.
ERIC Educational Resources Information Center
Ryan, Katherine E.; Fan, Meichu
1996-01-01
Results for 3,244 female and 3,033 male junior high school students from the Second International Mathematics Study show that applied items in algebra, geometry, and computation were easier for males but arithmetic items were differentially easier for females. Implications of these findings for assessment and instruction are discussed. (SLD)
UNIX as an environment for producing numerical software
NASA Technical Reports Server (NTRS)
Schryer, N. L.
1978-01-01
The UNIX operating system supports a number of software tools: a mathematical equation-setting language, a phototypesetting language, a FORTRAN preprocessor language, a text editor, and a command interpreter. The design, implementation, documentation, and maintenance of a portable FORTRAN test of the floating-point arithmetic unit of a computer is used to illustrate these tools at work.
ERIC Educational Resources Information Center
Muiznieks, Viktors
This report provides a technical description and operating guidelines for the IMSAI 8080 microcomputer in the Department of Secondary Education at the University of Illinois. An overview of the microcomputer highlights the register array, address logic, arithmetic and logical unit, instruction register and control section, and the data bus buffer.…
The Y2K Problem: Will It Just Be Another New Year's Eve?
ERIC Educational Resources Information Center
Iwanowski, Jay
1998-01-01
Potential problems for college and university computing functions posed by arrival of the year 2000 (Y2K) are discussed, including arithmetic calculations and sorting functions based on two-digit year dates, embedding of two-digit dates in archival data, system coordination for data exchange, unique number generation, and leap year calculations. A…
Floating-point system quantization errors in digital control systems
NASA Technical Reports Server (NTRS)
Phillips, C. L.; Vallely, D. P.
1978-01-01
This paper considers digital controllers (filters) operating in floating-point arithmetic in either open-loop or closed-loop systems. A quantization error analysis technique is developed, and is implemented by a digital computer program that is based on a digital simulation of the system. The program can be integrated into existing digital simulations of a system.
Computational Performance of Group IV Personnel in Vocational Training Programs. Final Report.
ERIC Educational Resources Information Center
Main, Ray E.; Harrigan, Robert J.
The document evaluates Navy Group Four personnel gains in basic arithmetic skills after taking experimental courses in linear measurement and recipe conversion. Categorized as Mental Group Four by receiving scores from the 10th to the 30th percentile of the Armed Forces Qualification Test, trainees received instruction tailored to the level of…
ERIC Educational Resources Information Center
Coddington, Lorelei R.
2014-01-01
In the past decade, mathematics performance by all students, especially minority students in low socioeconomic schools, has shown limited improvement nationwide (NCES, 2011). Traditionally in the United States, mathematics has consisted of arithmetic and computational fluency; however, mathematics researchers widely believe that this method of…
Evidence for Use of Mathematical Inversion by Three-Year-Old Children
ERIC Educational Resources Information Center
Sherman, Jody; Bisanz, Jeffrey
2007-01-01
The principle of inversion--that a + b - b must equal a--requires a sensitivity to the relation between addition and subtraction that is critical for understanding arithmetic. Use of inversion, albeit inconsistent, has been observed in school-age children, but when use of a computational shortcut based on inversion emerges and how awareness of the…
My-Mini-Pet: A Handheld Pet-Nurturing Game to Engage Students in Arithmetic Practices
ERIC Educational Resources Information Center
Liao, C. C. Y.; Chen, Z-H.; Cheng, H. N. H.; Chen, F-C.; Chan, T-W.
2011-01-01
In the last decade, more and more games have been developed for handheld devices. Furthermore, the popularity of handheld devices and increase of wireless computing can be taken advantage of to provide students with more learning opportunities. Games also could bring promising benefits--specifically, motivating students to learn/play, sustaining…
G-cueing microcontroller (a microprocessor application in simulators)
NASA Technical Reports Server (NTRS)
Horattas, C. G.
1980-01-01
A g cueing microcontroller is described which consists of a tandem pair of microprocessors, dedicated to the task of simulating pilot sensed cues caused by gravity effects. This task includes execution of a g cueing model which drives actuators that alter the configuration of the pilot's seat. The g cueing microcontroller receives acceleration commands from the aerodynamics model in the main computer and creates the stimuli that produce physical acceleration effects of the aircraft seat on the pilot's anatomy. One of the two microprocessors is a fixed instruction processor that performs all control and interface functions. The other, a specially designed bipolar bit slice microprocessor, is a microprogrammable processor dedicated to all arithmetic operations. The two processors communicate with each other by a shared memory. The g cueing microcontroller contains its own dedicated I/O conversion modules for interface with the seat actuators and controls, and a DMA controller for interfacing with the simulation computer. Any application which can be microcoded within the available memory, the available real time and the available I/O channels, could be implemented in the same controller.
NASA Astrophysics Data System (ADS)
Mousavi, Seyed Jamshid; Mahdizadeh, Kourosh; Afshar, Abbas
2004-08-01
Application of stochastic dynamic programming (SDP) models to reservoir optimization calls for state variables discretization. Discretization of reservoir storage volume, an important state variable, has a pronounced effect on the computational effort. The error caused by storage volume discretization is examined by considering it as a fuzzy state variable. In this approach, the point-to-point transitions between storage volumes at the beginning and end of each period are replaced by transitions between storage intervals. This is achieved by using fuzzy arithmetic operations with fuzzy numbers. In this approach, instead of aggregating single-valued crisp numbers, the membership functions of fuzzy numbers are combined. Running a simulation model with optimal release policies derived from fuzzy and non-fuzzy SDP models shows that a fuzzy SDP with a coarse discretization scheme performs as well as a classical SDP having a much finer discretized space. It is believed that this advantage of the fuzzy SDP model is due to the smooth transitions between storage intervals, which benefit from soft boundaries.
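For triangular fuzzy numbers, the fuzzy arithmetic step above, combining membership functions instead of crisp storages, reduces to component-wise operations on (lower, modal, upper) triples. A generic fuzzy-arithmetic sketch with made-up storage values, not the paper's SDP formulation:

```python
def add_triangular(a, b):
    """Add two triangular fuzzy numbers (l, m, u): fuzzy addition combines
    the bounds component-wise, so a soft storage interval rather than a
    crisp volume propagates through the recursion."""
    return tuple(x + y for x, y in zip(a, b))

def scale_triangular(a, k):
    """Multiply a triangular fuzzy number by a non-negative crisp scalar."""
    return tuple(k * x for x in a)

storage = (40.0, 50.0, 60.0)    # fuzzy storage state (l, m, u), made up
inflow = (5.0, 10.0, 15.0)      # fuzzy inflow for the period, made up
print(add_triangular(storage, inflow))   # → (45.0, 60.0, 75.0)
print(scale_triangular(inflow, 0.5))     # → (2.5, 5.0, 7.5)
```

The soft boundaries the abstract credits for the method's robustness are visible here: the end-of-period state is an interval whose membership tapers off, rather than a single discretized storage class.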
Non-verbal numerical cognition: from reals to integers.
Gallistel; Gelman
2000-02-01
Data on numerical processing by verbal (human) and non-verbal (animal and human) subjects are integrated by the hypothesis that a non-verbal counting process represents discrete (countable) quantities by means of magnitudes with scalar variability. These appear to be identical to the magnitudes that represent continuous (uncountable) quantities such as duration. The magnitudes representing countable quantity are generated by a discrete incrementing process, which defines next magnitudes and yields a discrete ordering. In the case of continuous quantities, the continuous accumulation process does not define next magnitudes, so the ordering is also continuous ('dense'). The magnitudes representing both countable and uncountable quantity are arithmetically combined in, for example, the computation of the income to be expected from a foraging patch. Thus, on the hypothesis presented here, the primitive machinery for arithmetic processing works with real numbers (magnitudes).
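Scalar variability, the signature property of the hypothesized magnitudes, means the standard deviation of the representation grows in proportion to the represented count, so the coefficient of variation stays constant. A simulation sketch with an arbitrarily chosen coefficient of variation:

```python
import random
import statistics

def noisy_count(n, cv=0.15, trials=2000, seed=1):
    """Simulate a nonverbal accumulator with scalar variability: count n
    is represented by a magnitude whose standard deviation is cv * n.
    The cv value is illustrative, not an empirical estimate."""
    rng = random.Random(seed)
    return [rng.gauss(n, cv * n) for _ in range(trials)]

for n in (4, 8, 16):
    m = noisy_count(n)
    # The spread scales with the mean, so sd/mean stays near cv.
    print(n, round(statistics.stdev(m) / statistics.mean(m), 2))
```

This constant-ratio noise is what makes discriminating 8 from 16 as easy as discriminating 4 from 8 (Weber's law), the behavioral signature that motivates the magnitude hypothesis.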
Formal verification of mathematical software
NASA Technical Reports Server (NTRS)
Sutherland, D.
1984-01-01
Methods are investigated for formally specifying and verifying the correctness of mathematical software (software which uses floating point numbers and arithmetic). Previous work in the field was reviewed. A new model of floating point arithmetic called the asymptotic paradigm was developed and formalized. Two different conceptual approaches to program verification, the classical Verification Condition approach and the more recently developed Programming Logic approach, were adapted to use the asymptotic paradigm. These approaches were then used to verify several programs; the programs chosen were simplified versions of actual mathematical software.
Arnold, Jeffrey
2018-05-14
Floating-point computations are at the heart of much of the computing done in high energy physics. The correctness, speed and accuracy of these computations are of paramount importance. The lack of any of these characteristics can mean the difference between new, exciting physics and an embarrassing correction. This talk will examine practical aspects of IEEE 754-2008 floating-point arithmetic as encountered in HEP applications. After describing the basic features of IEEE floating-point arithmetic, the presentation will cover: common hardware implementations (SSE, x87) techniques for improving the accuracy of summation, multiplication and data interchange compiler options for gcc and icc affecting floating-point operations hazards to be avoided. About the speaker: Jeffrey M Arnold is a Senior Software Engineer in the Intel Compiler and Languages group at Intel Corporation. He has been part of the Digital->Compaq->Intel compiler organization for nearly 20 years; part of that time, he worked on both low- and high-level math libraries. Prior to that, he was in the VMS Engineering organization at Digital Equipment Corporation. In the late 1980s, Jeff spent 2½ years at CERN as part of the CERN/Digital Joint Project. In 2008, he returned to CERN to spent 10 weeks working with CERN/openlab. Since that time, he has returned to CERN multiple times to teach at openlab workshops and consult with various LHC experiments. Jeff received his Ph.D. in physics from Case Western Reserve University.
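One of the summation-accuracy techniques the talk alludes to is compensated (Kahan) summation; a minimal illustration (the data values are illustrative, not from the talk):

```python
def kahan_sum(values):
    """Compensated (Kahan) summation: tracks the rounding error of each
    addition in a separate compensation term."""
    total = 0.0
    comp = 0.0  # running compensation for lost low-order bits
    for v in values:
        y = v - comp
        t = total + y
        comp = (t - total) - y
        total = t
    return total

# 1.0 followed by many tiny values: naive left-to-right summation
# rounds every tiny term away, compensated summation keeps them.
data = [1.0] + [1e-16] * 1000
naive = 0.0
for v in data:
    naive += v
print(naive)             # 1.0
print(kahan_sum(data))   # ≈ 1.0000000000001
```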
CORDIC-based digital signal processing (DSP) element for adaptive signal processing
NASA Astrophysics Data System (ADS)
Bolstad, Gregory D.; Neeld, Kenneth B.
1995-04-01
The High Performance Adaptive Weight Computation (HAWC) processing element is a CORDIC based application specific DSP element that, when connected in a linear array, can perform extremely high throughput (100s of GFLOPS) matrix arithmetic operations on linear systems of equations in real time. In particular, it very efficiently performs the numerically intense computation of optimal least squares solutions for large, over-determined linear systems. Most techniques for computing solutions to these types of problems have used either a hard-wired, non-programmable systolic array approach, or more commonly, programmable DSP or microprocessor approaches. The custom logic methods can be efficient, but are generally inflexible. Approaches using multiple programmable generic DSP devices are very flexible, but suffer from poor efficiency and high computation latencies, primarily due to the large number of DSP devices that must be utilized to achieve the necessary arithmetic throughput. The HAWC processor is implemented as a highly optimized systolic array, yet retains some of the flexibility of a programmable data-flow system, allowing efficient implementation of algorithm variations. This provides flexible matrix processing capabilities that are one to three orders of magnitude less expensive and more dense than the current state of the art, and more importantly, allows a realizable solution to matrix processing problems that were previously considered impractical to physically implement. HAWC has direct applications in RADAR, SONAR, communications, and image processing, as well as in many other types of systems.
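For context, the basic CORDIC iteration such an element builds on can be sketched as follows; this is the textbook rotation-mode algorithm in floating point, not the HAWC fixed-function design:

```python
import math

# Illustrative CORDIC rotation mode: rotates a vector toward a target angle
# using only shifts and adds, converging roughly one bit per iteration.

N = 32
ANGLES = [math.atan(2.0 ** -i) for i in range(N)]
K = 1.0
for i in range(N):
    K /= math.sqrt(1.0 + 2.0 ** (-2 * i))  # reciprocal of the cumulative gain

def cordic_sin_cos(theta):
    """Return (sin(theta), cos(theta)) for |theta| < ~1.74 rad."""
    x, y, z = 1.0, 0.0, theta
    for i in range(N):
        d = 1.0 if z >= 0 else -1.0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * ANGLES[i]
    return y * K, x * K

s, c = cordic_sin_cos(math.pi / 6)
print(round(s, 6), round(c, 6))  # ≈ 0.5 0.866025
```

In hardware the multiplications by 2^-i become wire shifts, which is what makes the operator cheap enough to tile into a systolic array.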
An O(log² N) parallel algorithm for computing the eigenvalues of a symmetric tridiagonal matrix
NASA Technical Reports Server (NTRS)
Swarztrauber, Paul N.
1989-01-01
An O(log² N) parallel algorithm is presented for computing the eigenvalues of a symmetric tridiagonal matrix using a parallel algorithm for computing the zeros of the characteristic polynomial. The method is based on a quadratic recurrence in which the characteristic polynomial is constructed on a binary tree from polynomials whose degree doubles at each level. Intervals that contain exactly one zero are determined by the zeros of polynomials at the previous level, which ensures that different processors compute different zeros. The exact behavior of the polynomials at the interval endpoints is used to eliminate the usual problems induced by finite-precision arithmetic.
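A serial sketch of the underlying idea, locating eigenvalues by counting sign changes in a Sturm-sequence recurrence for the characteristic polynomial and bisecting, may help; the paper's contribution is the O(log² N) tree-parallel construction, which this plain version does not attempt:

```python
# Serial Sturm-sequence bisection for a symmetric tridiagonal matrix with
# diagonal d and off-diagonal e (len(e) == len(d) - 1).

def count_eigs_below(d, e, x):
    """Number of eigenvalues strictly below x (negative count of the
    ratio recurrence of characteristic-polynomial leading minors)."""
    count = 0
    q = 1.0
    for i in range(len(d)):
        q = d[i] - x - (e[i - 1] ** 2 / q if i > 0 else 0.0)
        if q == 0.0:
            q = 1e-300  # avoid division by zero at an exact root
        if q < 0:
            count += 1
    return count

def kth_eigenvalue(d, e, k, lo, hi, tol=1e-12):
    """k-th smallest eigenvalue (k = 1..n) by bisection on [lo, hi]."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if count_eigs_below(d, e, mid) >= k:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# 3x3 example: diag (2,2,2), off-diag (1,1); eigenvalues 2-sqrt(2), 2, 2+sqrt(2)
d, e = [2.0, 2.0, 2.0], [1.0, 1.0]
print(round(kth_eigenvalue(d, e, 1, -10.0, 10.0), 6))  # ≈ 0.585786
```

Because the count function brackets each eigenvalue in its own interval, independent bisections never converge to the same zero, the same disjointness property the parallel algorithm exploits.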
Lee, Jong-Eun Roselyn; Nass, Clifford I; Bailenson, Jeremy N
2014-04-01
Virtual environments employing avatars for self-representation, including the opportunity to represent or misrepresent social categories, raise intriguing questions as to how one's avatar-based social category shapes social identity dynamics, particularly when stereotypes prevalent in the offline world apply to the social categories visually represented by avatars. The present experiment investigated how social category representation via avatars (i.e., graphical representations of people in computer-mediated environments) affects stereotype-relevant task performance. In particular, building on and extending the Proteus effect model, we explored whether and how stereotype lift (i.e., a performance boost caused by the awareness of a domain-specific negative stereotype associated with outgroup members) occurred in virtual group settings in which avatar-based gender representation was arbitrary. Female and male participants (N=120) were randomly assigned either a female avatar or a male avatar through a process masked as a random drawing. They were then placed in a numerical minority with respect to virtual gender, as the only virtual female (male) in a computer-mediated triad with two opposite-gendered avatars, and performed a mental arithmetic task either competitively or cooperatively. The data revealed that participants who were arbitrarily represented by a male avatar and competed against two ostensible female avatars showed the strongest performance of all groups on the arithmetic task. This pattern occurred regardless of participants' actual gender, pointing to a virtual stereotype lift effect. Additional mediation tests showed that task motivation partially mediated the effect. Theoretical and practical implications for social identity dynamics in avatar-based virtual environments are discussed.
Lonnemann, Jan; Li, Su; Zhao, Pei; Li, Peng; Linkersdörfer, Janosch; Lindberg, Sven; Hasselhorn, Marcus; Yan, Song
2017-01-01
Human beings are assumed to possess an approximate number system (ANS) dedicated to extracting and representing approximate numerical magnitude information. The ANS is assumed to be fundamental to arithmetic learning and has been shown to be associated with arithmetic performance. It is, however, still a matter of debate whether better arithmetic skills are reflected in the ANS. To address this issue, Chinese and German adults were compared regarding their performance in simple arithmetic tasks and in a non-symbolic numerical magnitude comparison task. Chinese participants showed a better performance in solving simple arithmetic tasks and faster reaction times in the non-symbolic numerical magnitude comparison task without making more errors than their German peers. These differences in performance could not be ascribed to differences in general cognitive abilities. Better arithmetic skills were thus found to be accompanied by a higher speed of retrieving non-symbolic numerical magnitude knowledge but not by a higher precision of non-symbolic numerical magnitude representations. The group difference in the speed of retrieving non-symbolic numerical magnitude knowledge was fully mediated by the performance in arithmetic tasks, suggesting that arithmetic skills shape non-symbolic numerical magnitude processing skills. PMID:28384191
Lonnemann, Jan; Linkersdörfer, Janosch; Hasselhorn, Marcus; Lindberg, Sven
2016-01-01
Symbolic numerical magnitude processing skills are assumed to be fundamental to arithmetic learning. It is, however, still an open question whether better arithmetic skills are reflected in symbolic numerical magnitude processing skills. To address this issue, Chinese and German third graders were compared regarding their performance in arithmetic tasks and in a symbolic numerical magnitude comparison task. Chinese children performed better in the arithmetic tasks and were faster in deciding which one of two Arabic numbers was numerically larger. The group difference in symbolic numerical magnitude processing was fully mediated by the performance in arithmetic tasks. We assume that a higher degree of familiarity with arithmetic in Chinese compared to German children leads to a higher speed of retrieving symbolic numerical magnitude knowledge. PMID:27630606
Bartelet, Dimona; Vaessen, Anniek; Blomert, Leo; Ansari, Daniel
2014-01-01
Relations between children's mathematics achievement and their basic number processing skills have been reported in both cross-sectional and longitudinal studies. Yet, some key questions are currently unresolved, including which kindergarten skills uniquely predict children's arithmetic fluency during the first year of formal schooling and the degree to which predictors are contingent on children's level of arithmetic proficiency. The current study assessed kindergarteners' non-symbolic and symbolic number processing efficiency. In addition, the contribution of children's underlying magnitude representations to differences in arithmetic achievement was assessed. Subsequently, in January of Grade 1, their arithmetic proficiency was assessed. Hierarchical regression analysis revealed that children's efficiency to compare digits, count, and estimate numerosities uniquely predicted arithmetic differences above and beyond the non-numerical factors included. Moreover, quantile regression analysis indicated that symbolic number processing efficiency was consistently a significant predictor of arithmetic achievement scores regardless of children's level of arithmetic proficiency, whereas their non-symbolic number processing efficiency was not. Finally, none of the task-specific effects indexing children's representational precision was significantly associated with arithmetic fluency. The implications of the results are 2-fold. First, the findings indicate that children's efficiency to process symbols is important for the development of their arithmetic fluency in Grade 1 above and beyond the influence of non-numerical factors. Second, the impact of children's non-symbolic number processing skills does not depend on their arithmetic achievement level given that they are selected from a nonclinical population. Copyright © 2013 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Rhodes, Katherine T.; Branum-Martin, Lee; Washington, Julie A.; Fuchs, Lynn S.
2017-01-01
Using multitrait, multimethod data, and confirmatory factor analysis, the current study examined the effects of arithmetic item formatting and the possibility that across formats, abilities other than arithmetic may contribute to children's answers. Measurement hypotheses were guided by several leading theories of arithmetic cognition. With a…
Personal Experience and Arithmetic Meaning in Semantic Dementia
ERIC Educational Resources Information Center
Julien, Camille L.; Neary, David; Snowden, Julie S.
2010-01-01
Arithmetic skills are generally claimed to be preserved in semantic dementia (SD), suggesting functional independence of arithmetic knowledge from other aspects of semantic memory. However, in a recent case series analysis we showed that arithmetic performance in SD is not entirely normal. The finding of a direct association between severity of…
NASA Astrophysics Data System (ADS)
Jin, Chenxia; Li, Fachao; Tsang, Eric C. C.; Bulysheva, Larissa; Kataev, Mikhail Yu
2017-01-01
In many real industrial applications, the integration of raw data with a methodology can support economically sound decision-making, and most such tasks involve complex optimisation problems in which seeking better solutions is critical. As an intelligent search optimisation algorithm, the genetic algorithm (GA) is an important technique for complex system optimisation, but it has internal drawbacks such as low computational efficiency and premature convergence. Improving the performance of GAs is a vital topic in academic and applied research. In this paper, a new real-coded crossover operator, called the compound arithmetic crossover operator (CAC), is proposed. CAC is used in conjunction with a uniform mutation operator to define a new genetic algorithm, CAC10-GA. This GA is compared with an existing genetic algorithm (AC10-GA) that comprises an arithmetic crossover operator and a uniform mutation operator. To judge the performance of CAC10-GA, two kinds of analysis are performed: first, the convergence of CAC10-GA is analysed using Markov chain theory; second, a pair-wise comparison is carried out between CAC10-GA and AC10-GA on two test problems from the global optimisation literature. The overall comparative study shows that CAC performs quite well and that CAC10-GA outperforms AC10-GA.
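For reference, the standard arithmetic crossover operator that CAC generalizes blends two real-coded parents with a convex combination; a minimal sketch (CAC itself is the paper's compound variant and is not reproduced here):

```python
import random

# Standard arithmetic crossover for real-coded GAs: children are convex
# combinations of the parents, so offspring stay inside the parents' box.

def arithmetic_crossover(p1, p2, alpha=None, rng=random):
    """Blend two real-valued parents: c1 = a*p1 + (1-a)*p2 and vice versa."""
    a = rng.random() if alpha is None else alpha
    c1 = [a * x + (1 - a) * y for x, y in zip(p1, p2)]
    c2 = [(1 - a) * x + a * y for x, y in zip(p1, p2)]
    return c1, c2

c1, c2 = arithmetic_crossover([0.0, 4.0], [2.0, 0.0], alpha=0.25)
print(c1, c2)  # [1.5, 1.0] [0.5, 3.0]
```

Because offspring are confined to the segment between the parents, plain arithmetic crossover cannot explore outside the current population's hull, one motivation for compound variants.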
Carvajal, Gonzalo; Figueroa, Miguel
2014-07-01
Typical image recognition systems operate in two stages: feature extraction to reduce the dimensionality of the input space, and classification based on the extracted features. Analog Very Large Scale Integration (VLSI) is an attractive technology to achieve compact and low-power implementations of these computationally intensive tasks for portable embedded devices. However, device mismatch limits the resolution of the circuits fabricated with this technology. Traditional layout techniques to reduce the mismatch aim to increase the resolution at the transistor level, without considering the intended application. Relating mismatch parameters to specific effects at the application level would allow designers to apply focalized mismatch compensation techniques according to predefined performance/cost tradeoffs. This paper models, analyzes, and evaluates the effects of mismatched analog arithmetic in both feature extraction and classification circuits. For feature extraction, we propose analog adaptive linear combiners with on-chip learning for both the Least Mean Square (LMS) and Generalized Hebbian Algorithm (GHA). Using mathematical abstractions of analog circuits, we identify mismatch parameters that are naturally compensated during the learning process, and propose cost-effective guidelines to reduce the effect of the rest. For classification, we derive analog models for the circuits necessary to implement the Nearest Neighbor (NN) approach and Radial Basis Function (RBF) networks, and use them to emulate analog classifiers with standard databases of faces and handwritten digits. Formal analysis and experiments show how we can exploit adaptive structures and properties of the input space to compensate for the effects of device mismatch at the application level, thus reducing the design overhead of traditional layout techniques. Results are also directly extensible to multiple application domains using linear subspace methods. Copyright © 2014 Elsevier Ltd. All rights reserved.
Real-time mental arithmetic task recognition from EEG signals.
Wang, Qiang; Sourina, Olga
2013-03-01
Monitoring the state of the user's brain function via electroencephalography (EEG) and giving her/him visual/audio/tactile feedback is called the neurofeedback technique, and it can allow the user to train the corresponding brain functions. It can provide an alternative treatment for some psychological disorders such as attention deficit hyperactivity disorder (ADHD), where a concentration deficit exists; autism spectrum disorder (ASD); or dyscalculia, where difficulty in learning and comprehending arithmetic exists. In this paper, a novel method for multifractal analysis of EEG signals, named generalized Higuchi fractal dimension spectrum (GHFDS), was proposed and applied to mental arithmetic task recognition from EEG signals. Other features such as power spectral density (PSD), autoregressive model (AR), and statistical features were analyzed as well. The use of the proposed fractal dimension spectrum of the EEG signal in combination with other features improved mental arithmetic task recognition accuracy in both multi-channel and one-channel subject-dependent algorithms, up to 97.87% and 84.15% respectively. Based on the channel ranking, four channels were chosen which gave an accuracy of up to 97.11%. A reliable real-time neurofeedback system could be implemented based on the algorithms proposed in this paper.
One Mouse per Child: Interpersonal Computer for Individual Arithmetic Practice
ERIC Educational Resources Information Center
Alcoholado, C.; Nussbaum, M.; Tagle, A.; Gomez, F.; Denardin, F.; Susaeta, H.; Villalta, M.; Toyama, K.
2012-01-01
Single Display Groupware (SDG) allows multiple people in the same physical space to interact simultaneously over a single communal display through individual input devices that work on the same machine. The aim of this paper is to show how SDG can be used to improve the way resources are used in schools, allowing students to work simultaneously on…
Interactive Software For Astrodynamical Calculations
NASA Technical Reports Server (NTRS)
Schlaifer, Ronald S.; Skinner, David L.; Roberts, Phillip H.
1995-01-01
The QUICK computer program provides the user with the facilities of a sophisticated desk calculator within a FORTRAN-like software environment: it performs scalar, vector, and matrix arithmetic; propagates conic-section orbits; determines planetary and satellite coordinates; and performs other related astrodynamic calculations. QUICK is an interpreter, so there is no need to use a compiler or linker to run QUICK code. Outputs can be plotted in a variety of formats on a variety of terminals. Written in RATFOR.
NASA Astrophysics Data System (ADS)
Olschanowsky, C.; Flores, A. N.; FitzGerald, K.; Masarik, M. T.; Rudisill, W. J.; Aguayo, M.
2017-12-01
Dynamic models of the spatiotemporal evolution of water, energy, and nutrient cycling are important tools to assess impacts of climate and other environmental changes on ecohydrologic systems. These models require spatiotemporally varying environmental forcings like precipitation, temperature, humidity, windspeed, and solar radiation. These input data originate from a variety of sources, including global and regional weather and climate models, global and regional reanalysis products, and geostatistically interpolated surface observations. Data translation measures, often subsetting in space and/or time and transforming and converting variable units, represent a seemingly mundane, but critical step in the application workflows. Translation steps can introduce errors, misrepresentations of data, slow execution time, and interrupt data provenance. We leverage a workflow that subsets a large regional dataset derived from the Weather Research and Forecasting (WRF) model and prepares inputs to the Parflow integrated hydrologic model to demonstrate the impact translation tool software quality on scientific workflow results and performance. We propose that such workflows will benefit from a community approved collection of data transformation components. The components should be self-contained composable units of code. This design pattern enables automated parallelization and software verification, improving performance and reliability. Ensuring that individual translation components are self-contained and target minute tasks increases reliability. The small code size of each component enables effective unit and regression testing. The components can be automatically composed for efficient execution. An efficient data translation framework should be written to minimize data movement. Composing components within a single streaming process reduces data movement. 
Each component will typically have low arithmetic intensity, meaning that it reads roughly as many bytes as it performs computations. When the execution of several components is coordinated, the overall arithmetic intensity increases, leading to increased efficiency.
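The arithmetic-intensity argument can be made concrete with a back-of-envelope calculation; the grid size and per-cell costs below are illustrative, not taken from the workflow described:

```python
# Arithmetic intensity = floating-point operations per byte of memory traffic.
# A lone unit-conversion pass does 1 flop per cell but moves 16 bytes per
# cell (read + write of a float64); fusing several passes into one streaming
# process multiplies the flops while keeping the traffic fixed.

def arithmetic_intensity(flops, bytes_read, bytes_written):
    return flops / (bytes_read + bytes_written)

n = 1_000_000  # grid cells, float64 (8 bytes each)
one_pass = arithmetic_intensity(flops=n, bytes_read=8 * n, bytes_written=8 * n)
# Fusing 4 translation components in one pass: 4x the flops, same traffic.
fused = arithmetic_intensity(flops=4 * n, bytes_read=8 * n, bytes_written=8 * n)
print(one_pass, fused)  # 0.0625 0.25
```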
Pinel, Philippe; Dehaene, Stanislas
2010-01-01
Language and arithmetic are both lateralized to the left hemisphere in the majority of right-handed adults. Yet, does this similar lateralization reflect a single overall constraint of brain organization, such an overall "dominance" of the left hemisphere for all linguistic and symbolic operations? Is it related to the lateralization of specific cerebral subregions? Or is it merely coincidental? To shed light on this issue, we performed a "colateralization analysis" over 209 healthy subjects: We investigated whether normal variations in the degree of left hemispheric asymmetry in areas involved in sentence listening and reading are mirrored in the asymmetry of areas involved in mental arithmetic. Within the language network, a region-of-interest analysis disclosed partially dissociated patterns of lateralization, inconsistent with an overall "dominance" model. Only two of these areas presented a lateralization during sentence listening and reading which correlated strongly with the lateralization of two regions active during calculation. Specifically, the profile of asymmetry in the posterior superior temporal sulcus during sentence processing covaried with the asymmetry of calculation-induced activation in the intraparietal sulcus, and a similar colateralization linked the middle frontal gyrus with the superior posterior parietal lobule. Given recent neuroimaging results suggesting a late emergence of hemispheric asymmetries for symbolic arithmetic during childhood, we speculate that these colateralizations might constitute developmental traces of how the acquisition of linguistic symbols affects the cerebral organization of the arithmetic network.
A Pacific Ocean general circulation model for satellite data assimilation
NASA Technical Reports Server (NTRS)
Chao, Y.; Halpern, D.; Mechoso, C. R.
1991-01-01
A tropical Pacific Ocean General Circulation Model (OGCM) to be used in satellite data assimilation studies is described. The transfer of the OGCM from a CYBER-205 at NOAA's Geophysical Fluid Dynamics Laboratory to a CRAY-2 at NASA's Ames Research Center is documented. Two 3-year model integrations from identical initial conditions but performed on those two computers are compared. The model simulations are very similar to each other, as expected, but the simulation performed on the higher-precision CRAY-2 is smoother than that on the lower-precision CYBER-205; the CYBER-205 and CRAY-2 use 32- and 64-bit mantissa arithmetic, respectively. The major features of the oceanic circulation in the tropical Pacific, namely the North Equatorial Current, the North Equatorial Countercurrent, the South Equatorial Current, and the Equatorial Undercurrent, are realistically reproduced, and their seasonal cycles are described. The OGCM provides a powerful tool for the study of tropical oceans and for the assimilation of satellite altimetry data.
The biological microprocessor, or how to build a computer with biological parts
Moe-Behrens, Gerd HG
2013-01-01
Systemics, a paradigm shift in scientific thinking with applications in systems biology and synthetic biology, has led to the idea of using silicon computers and their engineering principles as a blueprint for the engineering of a similar machine made from biological parts. Here we describe these building blocks and how they can be assembled into a general-purpose computer system, a biological microprocessor. Such a system consists of biological parts building an input/output device, an arithmetic logic unit, a control unit, memory, and wires (busses) to interconnect these components. A biocomputer can be used to monitor and control a biological system. PMID:24688733
Sparse Matrices in MATLAB: Design and Implementation
NASA Technical Reports Server (NTRS)
Gilbert, John R.; Moler, Cleve; Schreiber, Robert
1992-01-01
The matrix computation language and environment MATLAB is extended to include sparse matrix storage and operations. The only change to the outward appearance of the MATLAB language is a pair of commands to create full or sparse matrices. Nearly all the operations of MATLAB now apply equally to full or sparse matrices, without any explicit action by the user. The sparse data structure represents a matrix in space proportional to the number of nonzero entries, and most of the operations compute sparse results in time proportional to the number of arithmetic operations on nonzeros.
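The space/time proportionality to nonzeros can be illustrated with a compressed sparse row (CSR) layout; note that MATLAB's internal format is compressed sparse column, so this sketch shows the idea rather than the actual implementation:

```python
# Minimal CSR sketch: store only nonzeros so that storage and
# matrix-vector work scale with nnz, not with n^2.

def to_csr(dense):
    """Convert a dense row-major matrix to (data, col_indices, row_ptr)."""
    data, cols, rowptr = [], [], [0]
    for row in dense:
        for j, v in enumerate(row):
            if v != 0:
                data.append(v)
                cols.append(j)
        rowptr.append(len(data))
    return data, cols, rowptr

def csr_matvec(data, cols, rowptr, x):
    """y = A @ x touching only the stored nonzeros."""
    y = []
    for i in range(len(rowptr) - 1):
        y.append(sum(data[k] * x[cols[k]]
                     for k in range(rowptr[i], rowptr[i + 1])))
    return y

A = [[4, 0, 0], [0, 0, 2], [1, 0, 3]]
data, cols, rowptr = to_csr(A)
print(csr_matvec(data, cols, rowptr, [1.0, 1.0, 1.0]))  # [4.0, 2.0, 4.0]
```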
Y-MP floating point and Cholesky factorization
NASA Technical Reports Server (NTRS)
Carter, Russell
1991-01-01
The floating point arithmetics implemented in the Cray 2 and Cray Y-MP computer systems are nearly identical, but large scale computations performed on the two systems have exhibited significant differences in accuracy. The difference in accuracy is analyzed for the Cholesky factorization algorithm, and it is found that the source of the difference is the subtract magnitude operation of the Cray Y-MP. Results from numerical experiments for a range of problem sizes are presented, along with an efficient method for improving the accuracy of the factorization obtained on the Y-MP.
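For reference, the textbook Cholesky factorization whose accuracy is at issue can be sketched as follows; the inner subtraction is where a subtract-magnitude operation like the Y-MP's can lose accuracy:

```python
import math

# Textbook Cholesky factorization A = L L^T for a symmetric
# positive-definite matrix A, returning lower-triangular L.

def cholesky(A):
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                # A[i][i] - s is the subtract-magnitude step analyzed above
                L[i][j] = math.sqrt(A[i][i] - s)
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L

A = [[4.0, 2.0], [2.0, 5.0]]
L = cholesky(A)
print(L)  # [[2.0, 0.0], [1.0, 2.0]]
```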
2000-03-24
34, Proceedings of the 4th Int. Conf. on Computer-Aided Drafting, Design and Manufacturing Technology, Beijing, China, pp. 133-139, Aug 1994. [4] C. J. Hsu...transmission and reflection performance of a magnetized plasma slab...warped gyro-frequency. The prewarping operation conserves the d.c. gain and the...inverse NUDFTs are obtained efficiently by the NUFFT algorithms with O(N log2 N) arithmetic operations. Therefore the CG-NUFFT retains the
Successes and surprises with computer-extended series
NASA Astrophysics Data System (ADS)
van Dyke, M.
A seminumerical technique that shows promise as an alternative to the purely numerical solution of flow problems involves extending a perturbation series to high order by delegating the mounting arithmetic to a computer. It is noted, however, that since the method is still under development, several erroneous conclusions have been published. First, three clear successes of this method are described. It is then shown how a failure to carefully assess results has in two cases led to false conclusions. Finally, two problems are discussed that yield surprising results not yet accepted by all other investigators.
Weber, Gerhard-Wilhelm; Ozöğür-Akyüz, Süreyya; Kropat, Erik
2009-06-01
An emerging research area in computational biology and biotechnology is devoted to mathematical modeling and prediction of gene-expression patterns; it nowadays requires mathematics to deeply understand its foundations. This article surveys data mining and machine learning methods for the analysis of complex systems in computational biology. It mathematically deepens recent advances in modeling and prediction by rigorously introducing the environment and aspects of errors and uncertainty into the genetic context within the framework of matrix and interval arithmetic. Given the data from DNA microarray experiments and environmental measurements, we extract nonlinear ordinary differential equations which contain parameters that are to be determined. This is done by a generalized Chebychev approximation and generalized semi-infinite optimization. Then, time-discretized dynamical systems are studied. By a combinatorial algorithm which constructs and follows polyhedra sequences, the region of parametric stability is detected. In addition, we analyze the topological landscape of gene-environment networks in terms of structural stability. As a second strategy, we review recent model selection and kernel learning methods for binary classification which can be used to classify microarray data for cancerous cells or for discrimination of other kinds of diseases. This review is practically motivated and theoretically elaborated; it is devoted to a contribution to better health care, progress in medicine, better education, and healthier living conditions.
Early but not late blindness leads to enhanced arithmetic and working memory abilities.
Dormal, Valérie; Crollen, Virginie; Baumans, Christine; Lepore, Franco; Collignon, Olivier
2016-10-01
Behavioural and neurophysiological evidence suggest that vision plays an important role in the emergence and development of arithmetic abilities. However, how visual deprivation impacts on the development of arithmetic processing remains poorly understood. We compared the performances of early (EB), late blind (LB) and sighted control (SC) individuals during various arithmetic tasks involving addition, subtraction and multiplication of various complexities. We also assessed working memory (WM) performances to determine if they relate to a blind person's arithmetic capacities. Results showed that EB participants performed better than LB and SC in arithmetic tasks, especially in conditions in which verbal routines and WM abilities are needed. Moreover, EB participants also showed higher WM abilities. Together, our findings demonstrate that the absence of developmental vision does not prevent the development of refined arithmetic skills and can even trigger the refinement of these abilities in specific tasks. Copyright © 2016 Elsevier Ltd. All rights reserved.
Long, Imogen; Malone, Stephanie A; Tolan, Anne; Burgoyne, Kelly; Heron-Delaney, Michelle; Witteveen, Kate; Hulme, Charles
2016-12-01
Following on from ideas developed by Gerstmann, a body of work has suggested that impairments in finger gnosis may be causally related to children's difficulties in learning arithmetic. We report a study with a large sample of typically developing children (N=197) in which we assessed finger gnosis and arithmetic along with a range of other relevant cognitive predictors of arithmetic skills (vocabulary, counting, and symbolic and nonsymbolic magnitude judgments). Contrary to some earlier claims, we found no meaningful association between finger gnosis and arithmetic skills. Counting and symbolic magnitude comparison were, however, powerful predictors of arithmetic skills, replicating a number of earlier findings. Our findings seriously question theories that posit either a simple association or a causal connection between finger gnosis and the development of arithmetic skills. Crown Copyright © 2016. Published by Elsevier Inc. All rights reserved.
[Acquisition of arithmetic knowledge].
Fayol, Michel
2008-01-01
The focus of this paper is contemporary research on the number, counting, and arithmetic competencies that emerge during infancy, the preschool years, and elementary school. I provide a brief overview of the evolution of children's conceptual knowledge of arithmetic, their acquisition and use of counting, and how they solve simple arithmetic problems (e.g. 4 + 3).
The Development of Arithmetic Principle Knowledge: How Do We Know What Learners Know?
ERIC Educational Resources Information Center
Prather, Richard W.; Alibali, Martha W.
2009-01-01
This paper reviews research on learners' knowledge of three arithmetic principles: "Commutativity", "Relation to Operands", and "Inversion." Studies of arithmetic principle knowledge vary along several dimensions, including the age of the participants, the context in which the arithmetic is presented, and most importantly, the type of knowledge…
NASA Astrophysics Data System (ADS)
Jorand, Rachel; Fehr, Annick; Koch, Andreas; Clauser, Christoph
2011-08-01
In this paper, we present a method that allows one to correct thermal conductivity measurements for the effect of water loss when extrapolating laboratory data to in situ conditions. Water loss in shales and unconsolidated rocks is a serious problem that can introduce errors in the characterization of reservoirs. For this study, we measured the thermal conductivity of four sandstones, with and without clay minerals, at different water saturation levels using an optical scanner. Thermal conductivity does not decrease linearly with water saturation: at high saturation and at very low saturation, it decreases more quickly because of spontaneous liquid displacement and capillarity effects; apart from these two effects, it decreases quasi-linearly. We also notice that the samples containing clay minerals are not completely drained, and their thermal conductivity reaches a minimum value. In order to fit the variation of thermal conductivity with water saturation as a whole, we used modified versions of models commonly presented in thermal conductivity studies: harmonic mean, arithmetic mean, and geometric mean models. These models take into account different types of porosity, especially those attributable to the abundance of clay, using measurements obtained from nuclear magnetic resonance (NMR). For argillaceous sandstones, a modified arithmetic-harmonic model fits the data best. For clean quartz sandstones under low water saturation, the closest fit to the data is obtained with the modified arithmetic-harmonic model, while for high water saturation, a modified geometric mean model proves to be the best.
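The three classical mixing laws named above have simple closed forms; a sketch for a two-phase rock (solid matrix plus pore water), with illustrative conductivity values and without the paper's modifications:

```python
# Classical two-phase mixing laws for effective thermal conductivity.
# lam_s, lam_w: conductivities of solid matrix and pore water in W/(m K);
# phi: porosity (water-filled volume fraction). Values below are typical,
# not from the paper.

def arithmetic_mean(lam_s, lam_w, phi):
    return (1 - phi) * lam_s + phi * lam_w          # layers parallel to flow

def harmonic_mean(lam_s, lam_w, phi):
    return 1.0 / ((1 - phi) / lam_s + phi / lam_w)  # layers in series

def geometric_mean(lam_s, lam_w, phi):
    return lam_s ** (1 - phi) * lam_w ** phi        # random mixture

lam_quartz, lam_water, phi = 7.7, 0.6, 0.2
print(round(arithmetic_mean(lam_quartz, lam_water, phi), 2))  # 6.28
print(round(harmonic_mean(lam_quartz, lam_water, phi), 2))    # 2.29
print(round(geometric_mean(lam_quartz, lam_water, phi), 2))   # 4.62
```

The arithmetic and harmonic means bound the effective conductivity from above and below; fitted models typically combine them, as the paper's modified arithmetic-harmonic model does.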
Amisaki, Takashi; Toyoda, Shinjiro; Miyagawa, Hiroh; Kitamura, Kunihiro
2003-04-15
Evaluation of long-range Coulombic interactions still represents a bottleneck in the molecular dynamics (MD) simulations of biological macromolecules. Despite the advent of sophisticated fast algorithms, such as the fast multipole method (FMM), accurate simulations still demand a great amount of computation time due to the accuracy/speed trade-off inherently involved in these algorithms. Unless higher order multipole expansions, which are extremely expensive to evaluate, are employed, a large amount of the execution time is still spent in directly calculating particle-particle interactions within the nearby region of each particle. To reduce this execution time for pair interactions, we developed a computation unit (board), called MD-Engine II, that calculates nonbonded pairwise interactions using a specially designed hardware. Four custom arithmetic-processors and a processor for memory manipulation ("particle processor") are mounted on the computation board. The arithmetic processors are responsible for calculation of the pair interactions. The particle processor plays a central role in realizing efficient cooperation with the FMM. The results of a series of 50-ps MD simulations of a protein-water system (50,764 atoms) indicated that a more stringent setting of accuracy in FMM computation, compared with those previously reported, was required for accurate simulations over long time periods. Such a level of accuracy was efficiently achieved using the cooperative calculations of the FMM and MD-Engine II. On an Alpha 21264 PC, the FMM computation at a moderate but tolerable level of accuracy was accelerated by a factor of 16.0 using three boards. At a high level of accuracy, the cooperative calculation achieved a 22.7-fold acceleration over the corresponding conventional FMM calculation. 
In the cooperative calculations of the FMM and MD-Engine II, it was possible to achieve more accurate computation at a comparable execution time by incorporating larger nearby regions. Copyright 2003 Wiley Periodicals, Inc. J Comput Chem 24: 582-592, 2003
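The direct particle-particle part that MD-Engine II accelerates amounts to an explicit pairwise Coulomb sum restricted to a nearby region, with everything beyond the cutoff left to the FMM. A minimal software sketch of that kernel follows; the cutoff, unit system, and function names are illustrative, not taken from the paper.

```python
# Direct pairwise Coulomb energy over nearby pairs only. Distant pairs are
# handled by the fast multipole method, not shown here.
import math

def direct_coulomb_energy(positions, charges, cutoff, k=332.0636):
    """Coulomb energy (kcal/mol, charges in e, distances in Angstrom)
    summed over pairs closer than `cutoff`."""
    energy = 0.0
    n = len(positions)
    for i in range(n):
        for j in range(i + 1, n):
            dx = positions[i][0] - positions[j][0]
            dy = positions[i][1] - positions[j][1]
            dz = positions[i][2] - positions[j][2]
            r = math.sqrt(dx * dx + dy * dy + dz * dz)
            if r < cutoff:
                energy += k * charges[i] * charges[j] / r
    return energy
```

The abstract's closing point corresponds to enlarging `cutoff`: more pairs are computed exactly by the (hardware-accelerated) loop above, so FMM truncation error shrinks at comparable wall-clock cost.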
The neural correlates of mental arithmetic in adolescents: a longitudinal fNIRS study.
Artemenko, Christina; Soltanlou, Mojtaba; Ehlis, Ann-Christine; Nuerk, Hans-Christoph; Dresler, Thomas
2018-03-10
Arithmetic processing in adults is known to rely on a fronto-parietal network. However, neurocognitive research focusing on the neural and behavioral correlates of arithmetic development has been scarce, even though the acquisition of arithmetic skills is accompanied by changes within the fronto-parietal network of the developing brain. Furthermore, experimental procedures are typically adjusted to the constraints of functional magnetic resonance imaging, which may not reflect the natural settings in which children and adolescents actually perform arithmetic. Therefore, we investigated the longitudinal neurocognitive development of processes involved in performing the four basic arithmetic operations in 19 adolescents. By using functional near-infrared spectroscopy, we were able to use an ecologically valid task, i.e., a written production paradigm. A common pattern of activation in the bilateral fronto-parietal network for arithmetic processing was found for all basic arithmetic operations. Moreover, evidence was obtained for decreasing activation during subtraction over the course of 1 year in middle and inferior frontal gyri, and increased activation during addition and multiplication in angular and middle temporal gyri. In the self-paced block design, parietal activation in multiplication and left angular and temporal activation in addition were observed to be higher for simple than for complex blocks, reflecting an inverse effect of arithmetic complexity. In general, the findings suggest that the brain network for arithmetic processing is already established in 12- to 14-year-old adolescents, but still undergoes developmental changes.
Approximate Arithmetic Training Improves Informal Math Performance in Low Achieving Preschoolers
Szkudlarek, Emily; Brannon, Elizabeth M.
2018-01-01
Recent studies suggest that practice with approximate and non-symbolic arithmetic problems improves the math performance of adults, school aged children, and preschoolers. However, the relative effectiveness of approximate arithmetic training compared to available educational games, and the type of math skills that approximate arithmetic targets are unknown. The present study was designed to (1) compare the effectiveness of approximate arithmetic training to two commercially available numeral and letter identification tablet applications and (2) to examine the specific type of math skills that benefit from approximate arithmetic training. Preschool children (n = 158) were pseudo-randomly assigned to one of three conditions: approximate arithmetic, letter identification, or numeral identification. All children were trained for 10 short sessions and given pre and post tests of informal and formal math, executive function, short term memory, vocabulary, alphabet knowledge, and number word knowledge. We found a significant interaction between initial math performance and training condition, such that children with low pretest math performance benefited from approximate arithmetic training, and children with high pretest math performance benefited from symbol identification training. This effect was restricted to informal, and not formal, math problems. There were also effects of gender, socio-economic status, and age on post-test informal math score after intervention. A median split on pretest math ability indicated that children in the low half of math scores in the approximate arithmetic training condition performed significantly better than children in the letter identification training condition on post-test informal math problems when controlling for pretest, age, gender, and socio-economic status. 
Our results support the conclusion that approximate arithmetic training may be especially effective for children with low math skills, and that approximate arithmetic training improves early informal, but not formal, math skills. PMID:29867624
Power, Sarah D; Kushki, Azadeh; Chau, Tom
2012-01-01
Near-infrared spectroscopy (NIRS) has been recently investigated for use in noninvasive brain-computer interface (BCI) technologies. Previous studies have demonstrated the ability to classify patterns of neural activation associated with different mental tasks (e.g., mental arithmetic) using NIRS signals. Though these studies represent an important step towards the realization of an NIRS-BCI, there is a paucity of literature regarding the consistency of these responses, and the ability to classify them on a single-trial basis, over multiple sessions. This is important when moving out of an experimental context toward a practical system, where performance must be maintained over longer periods. When considering response consistency across sessions, two questions arise: 1) can the hemodynamic response to the activation task be distinguished from a baseline (or other task) condition, consistently across sessions, and if so, 2) are the spatiotemporal characteristics of the response which best distinguish it from the baseline (or other task) condition consistent across sessions. The answers will have implications for the viability of an NIRS-BCI system, and the design strategies (especially in terms of classifier training protocols) adopted. In this study, we investigated the consistency of classification of a mental arithmetic task and a no-control condition over five experimental sessions. Mixed model linear regression on intrasession classification accuracies indicate that the task and baseline states remain differentiable across multiple sessions, with no significant decrease in accuracy (p = 0.67). Intersession analysis, however, revealed inconsistencies in spatiotemporal response characteristics. Based on these results, we investigated several different practical classifier training protocols, including scenarios in which the training and test data come from 1) different sessions, 2) the same session, and 3) a combination of both. 
Results indicate that when selecting optimal classifier training protocols for NIRS-BCI, a compromise between accuracy and convenience (e.g., in terms of duration/frequency of training data collection) must be considered.
NASA Astrophysics Data System (ADS)
Lieu, Richard
2018-01-01
A hierarchy of statistics of increasing sophistication and accuracy is proposed to exploit an interesting and fundamental arithmetic structure in the photon bunching noise of incoherent light of large photon occupation number, with the purpose of suppressing the noise and rendering a more reliable and unbiased measurement of the light intensity. The method does not require any new hardware; rather, it operates at the software level, with the help of high-precision computers, to reprocess the intensity time series of the incident light into a new series with a smaller bunching noise coherence length. The ultimate accuracy improvement of this method of flux measurement is limited by the timing resolution of the detector and the photon occupation number of the beam (the higher the photon number, the better the performance). The principal application is accuracy improvement in the bolometric flux measurement of a radio source.
Errors in Multi-Digit Arithmetic and Behavioral Inattention in Children With Math Difficulties
Raghubar, Kimberly; Cirino, Paul; Barnes, Marcia; Ewing-Cobbs, Linda; Fletcher, Jack; Fuchs, Lynn
2009-01-01
Errors in written multi-digit computation were investigated in children with math difficulties. Third- and fourth-grade children (n = 291) with coexisting math and reading difficulties, math difficulties, reading difficulties, or no learning difficulties were compared. A second analysis compared those with severe math learning difficulties, low average achievement in math, and no learning difficulties. Math fact errors were related to the severity of the math difficulties, not to reading status. Contrary to predictions, children with poorer reading, regardless of math achievement, committed more visually based errors. Operation switch errors were not systematically related to group membership. Teacher ratings of behavioral inattention were related to accuracy, math fact errors, and procedural bugs. The findings are discussed with respect to hypotheses about the cognitive origins of arithmetic errors and in relation to current discussions about how to conceptualize math disabilities. PMID:19380494
Ahmad, Peer Zahoor; Quadri, S M K; Ahmad, Firdous; Bahar, Ali Newaz; Wani, Ghulam Mohammad; Tantary, Shafiq Maqbool
2017-12-01
Quantum-dot cellular automata (QCA) is an extremely small-scale, ultra-low-power nanotechnology and a possible alternative to current CMOS technology. Reversible QCA logic is currently a key approach to reducing power losses. This paper presents a novel reversible logic gate called the F-Gate. It is simple in design and a powerful technique for implementing reversible logic. A systematic approach has been used to implement a novel single-layer reversible Full-Adder, Full-Subtractor, and Full Adder-Subtractor using the F-Gate. The proposed Full Adder-Subtractor achieves significant improvements in overall circuit parameters over the most cost-efficient previous designs that exploit the inevitable nano-level issues to perform arithmetic computing. The proposed designs have been verified and simulated using the QCADesigner tool, ver. 2.0.3.
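The abstract does not give the F-Gate's truth table, so as a stand-in illustration of what "reversible" means here, the sketch below checks the defining property (a bijective truth table) for the well-known Fredkin gate and shows how such a gate computes AND when one input is held at a constant ancilla value.

```python
# Reversible logic sketch using the Fredkin (controlled-swap) gate, a
# standard reversible gate; the paper's F-Gate is different and not
# specified in the abstract.
from itertools import product

def fredkin(c, p, q):
    """Controlled swap: p and q are exchanged when the control c is 1."""
    return (c, q, p) if c else (c, p, q)

# Reversibility means the truth table is a bijection on {0,1}^3:
# all 8 input patterns must map to 8 distinct outputs.
outputs = {fredkin(*bits) for bits in product((0, 1), repeat=2 + 1)}
assert len(outputs) == 8

# With the third input fixed to 0, the third output equals c AND p,
# so ordinary arithmetic logic can be embedded in a reversible circuit.
for c, p in product((0, 1), repeat=2):
    assert fredkin(c, p, 0)[2] == (c & p)
```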
ERIC Educational Resources Information Center
Hitt, Fernando; Saboya, Mireille; Cortés Zavala, Carlos
2016-01-01
This paper presents an experiment that attempts to mobilise an arithmetic-algebraic way of thinking in order to articulate arithmetic thinking with early algebraic thinking, which is considered a prelude to algebraic thinking. In the process of building this latter way of thinking, researchers analysed pupils' spontaneous production…
IQ of four-year-olds who go on to develop dyslexia.
van Bergen, Elsje; de Jong, Peter F; Maassen, Ben; Krikhaar, Evelien; Plakas, Anna; van der Leij, Aryan
2014-01-01
Do children who go on to develop dyslexia show normal verbal and nonverbal development before reading onset? According to the aptitude-achievement discrepancy model, dyslexia is defined as a discrepancy between intelligence and reading achievement. One of the underlying assumptions is that the general cognitive development of children who fail to learn to read has been normal. The current study tests this assumption. In addition, we investigated whether possible IQ deficits are uniquely related to later reading or are also related to arithmetic. Four-year-olds (N = 212) with and without familial risk for dyslexia were assessed on 10 IQ subtests. Reading and arithmetic skills were measured 4 years later, at the end of Grade 2. Relative to the controls, the at-risk group without dyslexia had subtle impairments only in the verbal domain, whereas the at-risk group with dyslexia lagged behind across IQ tasks. Nonverbal IQ was associated with both reading and arithmetic, whereas verbal IQ was uniquely related to later reading. The children who went on to develop dyslexia performed relatively poorly in both verbal and nonverbal abilities at age 4, which challenges the discrepancy model. Furthermore, we discuss possible causal and epiphenomenal models explaining the links between early IQ and later reading. © Hammill Institute on Disabilities 2013.
Numerical proof of stability of roll waves in the small-amplitude limit for inclined thin film flow
NASA Astrophysics Data System (ADS)
Barker, Blake
2014-10-01
We present a rigorous numerical proof based on interval arithmetic computations categorizing the linearized and nonlinear stability of periodic viscous roll waves of the KdV-KS equation modeling weakly unstable flow of a thin fluid film on an incline in the small-amplitude KdV limit. The argument proceeds by verification of a stability condition derived by Bar-Nepomnyashchy and Johnson-Noble-Rodrigues-Zumbrun involving inner products of various elliptic functions arising through the KdV equation. One key point in the analysis is a bootstrap argument balancing the extremely poor sup norm bounds for these functions against the extremely good convergence properties for analytic interpolation in order to obtain a feasible computation time. Another is the way of handling analytic interpolation in several variables by a two-step process carving up the parameter space into manageable pieces for rigorous evaluation. These and other general aspects of the analysis should serve as blueprints for more general analyses of spectral stability.
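A minimal sketch of the interval arithmetic underlying such computer-assisted proofs: every operation returns an interval guaranteed to enclose the exact result, so a verified inequality over intervals holds for all values inside them. A genuinely rigorous implementation would also round interval endpoints outward to absorb floating-point error; that step is omitted here.

```python
# Toy interval arithmetic: closed intervals [lo, hi] with sum and product
# rules that enclose the exact real-number results.
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        # Sum of intervals: endpoints add.
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # Product: extremes occur at endpoint combinations.
        products = (self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi)
        return Interval(min(products), max(products))

    def contains(self, x):
        return self.lo <= x <= self.hi
```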
NASA Astrophysics Data System (ADS)
Kozynchenko, Alexander I.; Kozynchenko, Sergey A.
2017-03-01
In the paper, we consider the problem of improving the efficiency of the particle-particle-particle-mesh (P3M) algorithm in computing inter-particle electrostatic forces. The particle-mesh (PM) part of the algorithm is modified so that the space field equation is solved by direct summation of potentials over the ensemble of particles lying not too close to a reference particle. For this purpose, a specific matrix "pattern" containing pre-calculated potential values is introduced to describe the spatial field distribution of a single point charge. This approach reduces the set of arithmetic operations performed in the innermost nested loop to addition and assignment operators and therefore decreases the running time substantially. A simulation model developed in C++ substantiates this view, showing decent accuracy, acceptable in particle beam calculations, together with improved speed.
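The "pattern" idea can be sketched in a few lines: the potential of a unit charge is tabulated on a mesh once, and each charge then contributes to the field through shifted additions only, with no per-node distance computation in the accumulation loop. The 2D grid, spacing, and names below are illustrative, not taken from the paper.

```python
# Pre-computed potential "pattern" of a unit point charge on a mesh,
# reused by shifted addition for every charge in the ensemble.
import math

N = 64   # mesh nodes per dimension (2D here for brevity)
H = 1.0  # mesh spacing

# Pattern: 1/r potential of a unit charge, tabulated once around the origin.
pattern = [[0.0] * (2 * N) for _ in range(2 * N)]
for i in range(2 * N):
    for j in range(2 * N):
        r = H * math.hypot(i - N, j - N)
        pattern[i][j] = 1.0 / r if r > 0 else 0.0

def add_charge(potential, q, ci, cj):
    """Accumulate charge q at mesh node (ci, cj) using only add/assign
    in the inner loop -- the distance math was paid for once, above."""
    for i in range(N):
        for j in range(N):
            potential[i][j] += q * pattern[N + i - ci][N + j - cj]

phi = [[0.0] * N for _ in range(N)]
add_charge(phi, 2.0, 10, 10)
```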
FPGA implementation of a biological neural network based on the Hodgkin-Huxley neuron model.
Yaghini Bonabi, Safa; Asgharian, Hassan; Safari, Saeed; Nili Ahmadabadi, Majid
2014-01-01
A set of techniques for the efficient implementation of a Hodgkin-Huxley-based (H-H) neural network model on an FPGA (Field-Programmable Gate Array) is presented. The central implementation challenge is the complexity of the H-H model, which puts limits on the network size and on the execution speed. However, the basics of the original model cannot be compromised when the effect of synaptic specifications on network behavior is the subject of study. To solve this problem, we used computational techniques such as the CORDIC (COordinate Rotation DIgital Computer) algorithm and step-by-step integration in the implementation of the arithmetic circuits. In addition, we employed techniques such as resource sharing to preserve the details of the model and to increase the network size while keeping the network execution speed close to real time at high precision. An implementation of a two-mini-column network with 120/30 excitatory/inhibitory neurons is provided to investigate the characteristics of our method in practice. The implementation techniques provide an opportunity to construct large FPGA-based network models to investigate the effect of different neurophysiological mechanisms, like voltage-gated channels and synaptic activities, on the behavior of a neural network in an appropriate execution time. In addition to the inherent properties of FPGAs, like parallelism and reconfigurability, our approach makes the FPGA-based system a suitable candidate for studies of the neural control of cognitive robots and systems.
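As an illustration of why CORDIC suits FPGA arithmetic circuits, the rotation-mode iteration below computes sine and cosine using only shifts, adds, and a small precomputed angle table. Thirty-two iterations and double floats stand in for the fixed-point hardware datapath; this is a generic textbook sketch, not the authors' implementation.

```python
# Rotation-mode CORDIC: rotate (1, 0) toward angle theta by a fixed
# sequence of micro-rotations of angle atan(2^-k); each step needs only
# a shift (multiply by 2^-k), adds, and a table lookup.
import math

ANGLES = [math.atan(2.0 ** -k) for k in range(32)]
GAIN = 1.0
for k in range(32):
    GAIN *= math.cos(ANGLES[k])  # cumulative scaling of all rotations

def cordic_sin_cos(theta):
    """Return (sin, cos) for |theta| <= pi/2 via CORDIC micro-rotations."""
    x, y, z = 1.0, 0.0, theta
    for k in range(32):
        d = 1.0 if z >= 0 else -1.0  # rotate toward the residual angle
        x, y, z = (x - d * y * 2.0 ** -k,
                   y + d * x * 2.0 ** -k,
                   z - d * ANGLES[k])
    return y * GAIN, x * GAIN  # undo the accumulated rotation gain
```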
Failure detection in high-performance clusters and computers using chaotic map computations
Rao, Nageswara S.
2015-09-01
A programmable media includes a processing unit capable of independent operation in a machine that is capable of executing 10^18 floating point operations per second. The processing unit is in communication with a memory element and an interconnect that couples computing nodes. The programmable media includes a logical unit configured to execute arithmetic functions, comparative functions, and/or logical functions. The processing unit is configured to detect computing component failures, memory element failures and/or interconnect failures by executing programming threads that generate one or more chaotic map trajectories. The central processing unit or graphical processing unit is configured to detect a computing component failure, memory element failure and/or an interconnect failure through an automated comparison of signal trajectories generated by the chaotic maps.
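The detection idea can be sketched as follows: healthy components computing the same chaotic map in IEEE-754 arithmetic produce bit-identical trajectories, while chaos amplifies any arithmetic or memory fault into a large divergence within a few iterations. The logistic map, tolerance, and perturbation size below are illustrative choices, not details from the patent.

```python
# Failure detection via chaotic map trajectory comparison (sketch).
def logistic_trajectory(x0, steps, r=3.99):
    """Iterate the logistic map x -> r*x*(1-x) in the chaotic regime."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

def diverged(traj_a, traj_b, tol=1e-6):
    """Flag a suspected component failure if trajectories separate."""
    return any(abs(a - b) > tol for a, b in zip(traj_a, traj_b))

healthy = logistic_trajectory(0.4, 50)
replica = logistic_trajectory(0.4, 50)          # identical arithmetic
faulty = logistic_trajectory(0.4 + 1e-12, 50)   # single-bit-scale fault
```

Because the map's positive Lyapunov exponent multiplies errors by roughly a factor of four per step, even a perturbation near machine epsilon crosses any practical tolerance within a few dozen iterations.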
Human computers: the first pioneers of the information age.
Grier, D A
2001-03-01
Before computers were machines, they were people. They were men and women, young and old, well educated and common. They were the workers who convinced scientists that large-scale calculation had value. Long before Presper Eckert and John Mauchly built the ENIAC at the Moore School of Electrical Engineering in Philadelphia, or Maurice Wilkes designed the EDSAC at Cambridge University, human computers had created the discipline of computation. They developed numerical methodologies and proved them on practical problems. These human computers were not savants or calculating geniuses. Some knew little more than basic arithmetic. A few were near equals of the scientists they served and, in a different time or place, might have become practicing scientists had they not been barred from a scientific career by their class, education, gender or ethnicity.
Fehr, Thorsten; Code, Chris; Herrmann, Manfred
2007-10-03
The issue of how and where arithmetic operations are represented in the brain has been addressed in numerous studies. Lesion studies suggest that a network of different brain areas is involved in mental calculation. Neuroimaging studies have reported inferior parietal and lateral frontal activations during mental arithmetic using tasks of different complexities and using different operators (addition, subtraction, etc.). Indeed, it has been difficult to compare brain activation across studies because of the variety of operators and presentation modalities used. The present experiment examined fMRI-BOLD activity in participants during calculation tasks entailing different arithmetic operations (addition, subtraction, multiplication, and division) of different complexities. Functional imaging data revealed a common activation pattern comprising right precuneus, left and right middle and superior frontal regions during all arithmetic operations. All other regional activations were operation specific and distributed in prominently frontal, parietal, and central regions when contrasting complex and simple calculation tasks. The present results largely confirm former studies suggesting that activation patterns due to mental arithmetic appear to reflect a basic anatomical substrate of working memory, numerical knowledge, and processing based on finger counting, derived from a network originally related to finger movement. We emphasize that in mental arithmetic research, different arithmetic operations should always be examined and discussed independently of each other in order to avoid invalid generalizations about arithmetic and the brain areas involved.
Gomez-Pulido, Juan A; Cerrada-Barrios, Jose L; Trinidad-Amado, Sebastian; Lanza-Gutierrez, Jose M; Fernandez-Diaz, Ramon A; Crawford, Broderick; Soto, Ricardo
2016-08-31
Metaheuristics are widely used to solve large combinatorial optimization problems in bioinformatics because of the huge set of possible solutions. Two representative problems are gene selection for cancer classification and biclustering of gene expression data. In most cases, these metaheuristics, as well as other non-linear techniques, apply a fitness function to each possible solution in a size-limited population, and that step involves higher latencies than other parts of the algorithms, which is why the execution time of the applications mainly depends on the execution time of the fitness function. In addition, fitness functions are usually formulated in floating-point arithmetic. Consequently, a careful parallelization of these functions using reconfigurable hardware technology accelerates the computation, especially if they are applied in parallel to several solutions of the population. A fine-grained parallelization of two floating-point fitness functions of different complexities and features, involved in biclustering of gene expression data and gene selection for cancer classification, yielded higher speedups and power-reduced computation relative to conventional microprocessors. The results show better performance using reconfigurable hardware technology instead of conventional microprocessors, in terms of both computing time and power consumption, not only because of the parallelization of the arithmetic operations, but also thanks to the concurrent fitness evaluation for several individuals of the population in the metaheuristic. This is a good basis for building accelerated and low-energy solutions for intensive computing scenarios.
Cui, Jiaxin; Georgiou, George K; Zhang, Yiyun; Li, Yixun; Shu, Hua; Zhou, Xinlin
2017-02-01
Rapid automatized naming (RAN) has been found to predict mathematics. However, the nature of their relationship remains unclear. Thus, the purpose of this study was twofold: (a) to examine how RAN (numeric and non-numeric) predicts a subdomain of mathematics (arithmetic fluency) and (b) to examine what processing skills may account for the RAN-arithmetic fluency relationship. A total of 160 third-year kindergarten Chinese children (83 boys and 77 girls, mean age = 5.11 years) were assessed on RAN (colors, objects, digits, and dice), nonverbal IQ, visual-verbal paired associate learning, phonological awareness, short-term memory, speed of processing, approximate number system acuity, and arithmetic fluency (addition and subtraction). The results indicated first that RAN was a significant correlate of arithmetic fluency and the correlations did not vary as a function of type of RAN or arithmetic fluency tasks. In addition, RAN continued to predict addition and subtraction fluency even after controlling for all other processing skills. Taken together, these findings challenge the existing theoretical accounts of the RAN-arithmetic fluency relationship and suggest that, similar to reading fluency, multiple processes underlie the RAN-arithmetic fluency relationship. Copyright © 2016 Elsevier Inc. All rights reserved.
Synthesis of geophysical data with space-acquired imagery: a review
Hastings, David A.
1983-01-01
Statistical correlation has been used to determine the applicability of specific data sets to the development of geologic or exploration models. Various arithmetic functions have proven useful in developing models from such data sets.
Math anxiety and its relationship with basic arithmetic skills among primary school children.
Sorvo, Riikka; Koponen, Tuire; Viholainen, Helena; Aro, Tuija; Räikkönen, Eija; Peura, Pilvi; Dowker, Ann; Aro, Mikko
2017-09-01
Children have been found to report and demonstrate math anxiety as early as the first grade. However, previous results concerning the relationship between math anxiety and performance are contradictory, with some studies establishing a correlation between them while others do not. These contradictory results might be related to varying operationalizations of math anxiety. In this study, we aimed to examine the prevalence of math anxiety and its relationship with basic arithmetic skills in primary school children, with explicit focus on two aspects of math anxiety: anxiety about failure in mathematics and anxiety in math-related situations. The participants comprised 1,327 children in grades 2-5. Math anxiety was assessed using six items, and basic arithmetic skills were assessed using three assessment tasks. Around one-third of the participants reported anxiety about being unable to do math, one-fifth about having to answer teachers' questions, and one-tenth about having to do math. Confirmatory factor analysis indicated that anxiety about math-related situations and anxiety about failure in mathematics are separable aspects of math anxiety. Structural equation modelling suggested that anxiety about math-related situations was more strongly associated with arithmetic fluency than anxiety about failure. Anxiety about math-related situations was most common among second graders and least common among fifth graders. As math anxiety, particularly about math-related situations, was related to arithmetic fluency even as early as the second grade, children's negative feelings and math anxiety should be identified and addressed from the early primary school years. © 2017 The British Psychological Society.
Oppel, S.; Federer, R.N.; O'Brien, D. M.; Powell, A.N.; Hollmén, Tuula E.
2010-01-01
Many studies of nutrient allocation to egg production in birds use stable isotope ratios of egg yolk to identify the origin of nutrients. Dry egg yolk contains >50% lipids, which are known to be depleted in 13C. Currently, researchers remove lipids from egg yolk using a chemical lipid-extraction procedure before analyzing the isotopic composition of protein in egg yolk. We examined the effects of chemical lipid extraction on δ13C, δ15N, and δ34S of avian egg yolk and explored the utility of an arithmetic lipid correction model to adjust whole-yolk δ13C for lipid content. We analyzed the dried yolk of 15 captive Spectacled Eider (Somateria fischeri) and 20 wild King Eider (S. spectabilis) eggs, both as whole yolk and after lipid extraction with a 2:1 chloroform:methanol solution. We found that chemical lipid extraction leads to an increase of (mean ± SD) 3.3 ± 1.1‰ in δ13C, 1.1 ± 0.5‰ in δ15N, and 2.3 ± 1.1‰ in δ34S. Arithmetic lipid correction provided accurate values for lipid-extracted δ13C in captive Spectacled Eiders fed on a homogeneous high-quality diet. However, arithmetic lipid correction was unreliable for wild King Eiders, likely because of their differential incorporation of macronutrients from isotopically distinct environments during migration. For that reason, we caution against applying arithmetic lipid correction to the whole-yolk δ13C of migratory birds, because these methods assume that all egg macronutrients are derived from the same dietary sources. © 2010 The American Ornithologists' Union.
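A generic arithmetic lipid correction can be sketched as a simple mass balance: bulk yolk delta-13C is treated as the lipid-fraction-weighted mean of protein and lipid values, with lipid depleted by a fixed discrimination. Both that assumption and the discrimination constant below are illustrative and need not match the specific model evaluated in the paper.

```python
# Mass-balance sketch of an arithmetic lipid correction for delta-13C.
# d_bulk = f * d_lipid + (1 - f) * d_protein, with
# d_lipid = d_protein - discrimination, which rearranges to
# d_protein = d_bulk + f * discrimination.
def lipid_corrected_d13c(d13c_bulk, lipid_fraction, discrimination=6.0):
    """Estimate protein delta-13C (per mil) from whole-yolk delta-13C.

    `discrimination` is the assumed protein-minus-lipid offset; 6.0 per mil
    is an illustrative placeholder, not a value from this study.
    """
    return d13c_bulk + lipid_fraction * discrimination
```

The paper's caveat maps directly onto this formula: it is only valid when all macronutrients share one dietary source, so that a single `discrimination` applies.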
Träff, Ulf
2013-10-01
This study examined the relative contributions of general cognitive abilities and number abilities to word problem solving, calculation, and arithmetic fact retrieval in a sample of 134 children aged 10 to 13 years. The following tasks were administered: listening span, visual matrix span, verbal fluency, color naming, Raven's Progressive Matrices, enumeration, number line estimation, and digit comparison. Hierarchical multiple regressions demonstrated that number abilities provided an independent contribution to fact retrieval and word problem solving. General cognitive abilities contributed to problem solving and calculation. All three number tasks accounted for a similar amount of variance in fact retrieval, whereas only the number line estimation task contributed unique variance in word problem solving. Verbal fluency and Raven's matrices accounted for an equal amount of variance in problem solving and calculation. The current findings demonstrate, in accordance with Fuchs and colleagues' developmental model of mathematical learning (Developmental Psychology, 2010, Vol. 46, pp. 1731-1746), that both number abilities and general cognitive abilities underlie 10- to 13-year-olds' proficiency in problem solving, whereas only number abilities underlie arithmetic fact retrieval. Thus, the amount and type of cognitive contribution to arithmetic proficiency varies between the different aspects of arithmetic. Furthermore, how closely linked a specific aspect of arithmetic is to the whole number representation systems is not the only factor determining the amount and type of cognitive contribution in 10- to 13-year-olds. In addition, the mathematical complexity of the task appears to influence the amount and type of cognitive support. Copyright © 2013 Elsevier Inc. All rights reserved.
Confirmatory factor analysis of the Early Arithmetic, Reading, and Learning Indicators (EARLI)☆
Norwalk, Kate E.; DiPerna, James Clyde; Lei, Pui-Wa
2015-01-01
Despite growing interest in early intervention, there are few measures available to monitor the progress of early academic skills in preschoolers. The Early Arithmetic, Reading, and Learning Indicators (EARLI; DiPerna, Morgan, & Lei, 2007) were developed as brief assessments of critical early literacy and numeracy skills. The purpose of the current study was to examine the factor structure of the EARLI probes via confirmatory factor analysis (CFA) in a sample of Head Start preschoolers (N = 289). A two-factor model with correlated error terms and a bifactor model provided comparable fit to the data, although there were some structural problems with the latter model. The utility of the bifactor model for explaining the structure of early academic skills as well as the utility of the EARLI probes as measures of literacy and numeracy skills in preschool are discussed. PMID:24495496
Students’ Relational Thinking of Impulsive and Reflective in Solving Mathematical Problem
NASA Astrophysics Data System (ADS)
Satriawan, M. A.; Budiarto, M. T.; Siswono, T. Y. E.
2018-01-01
This descriptive, qualitative study investigates the relational thinking of students with impulsive and reflective cognitive styles in solving mathematical problems. Data were collected through a test and interviews and analyzed by data reduction, data presentation, and conclusion drawing. The results show that a reflective cognitive style can help students find the important elements in understanding a problem. Reading the problem more than once helps them identify what is being asked and write down the known information, build relations among the elements, connect information with arithmetic operations, link what is asked with the known information, construct an equation model and find the unknown value by substitution, and build connections through re-checking, re-reading, and re-counting. Students with an impulsive cognitive style also identify important elements in understanding a problem, build relations among the elements, connect information with arithmetic operations, and relate what is asked to the known information, but they find the unknown value using arithmetic operations without constructing an equation model. When re-checking their problem solving, impulsive students only read their work at a glance without re-computing the result.
Li, Yongxin; Hu, Yuzheng; Wang, Yunqi; Weng, Jian; Chen, Feiyan
2013-01-01
Arithmetic skill is of critical importance for academic achievement, professional success and everyday life, and childhood is the key period to acquire this skill. Neuroimaging studies have identified that left parietal regions are a key neural substrate for representing arithmetic skill. Although the relationship between functional brain activity in left parietal regions and arithmetic skill has been studied in detail, it remains unclear about the relationship between arithmetic achievement and structural properties in left inferior parietal area in schoolchildren. The current study employed a combination of voxel-based morphometry (VBM) for high-resolution T1-weighted images and fiber tracking on diffusion tensor imaging (DTI) to examine the relationship between structural properties in the inferior parietal area and arithmetic achievement in 10-year-old schoolchildren. VBM of the T1-weighted images revealed that individual differences in arithmetic scores were significantly and positively correlated with the gray matter (GM) volume in the left intraparietal sulcus (IPS). Fiber tracking analysis revealed that the forceps major, left superior longitudinal fasciculus (SLF), bilateral inferior longitudinal fasciculus (ILF) and inferior fronto-occipital fasciculus (IFOF) were the primary pathways connecting the left IPS with other brain areas. Furthermore, the regression analysis of the probabilistic pathways revealed a significant and positive correlation between the fractional anisotropy (FA) values in the left SLF, ILF and bilateral IFOF and arithmetic scores. The brain structure-behavior correlation analyses indicated that the GM volumes in the left IPS and the FA values in the tract pathways connecting left IPS were both related to children's arithmetic achievement. The present findings provide evidence that individual structural differences in the left IPS are associated with arithmetic scores in schoolchildren. PMID:24367320
Vukovic, Rose K; Lesaux, Nonie K
2013-06-01
This longitudinal study examined how language ability relates to mathematical development in a linguistically and ethnically diverse sample of children from 6 to 9 years of age. Study participants were 75 native English speakers and 92 language minority learners followed from first to fourth grades. Autoregression in a structural equation modeling (SEM) framework was used to evaluate the relation between children's language ability and gains in different domains of mathematical cognition (i.e., arithmetic, data analysis/probability, algebra, and geometry). The results showed that language ability predicts gains in data analysis/probability and geometry, but not in arithmetic or algebra, after controlling for visual-spatial working memory, reading ability, and sex. The effect of language on gains in mathematical cognition did not differ between language minority learners and native English speakers. These findings suggest that language influences how children make meaning of mathematics but is not involved in complex arithmetical procedures whether presented with Arabic symbols as in arithmetic or with abstract symbols as in algebraic reasoning. The findings further indicate that early language experiences are important for later mathematical development regardless of language background, denoting the need for intensive and targeted language opportunities for language minority and native English learners to develop mathematical concepts and representations. Copyright © 2013. Published by Elsevier Inc.
Unpacking symbolic number comparison and its relation with arithmetic in adults.
Sasanguie, Delphine; Lyons, Ian M; De Smedt, Bert; Reynvoet, Bert
2017-08-01
Symbolic number - or digit - comparison has been a central tool in the domain of numerical cognition for decades. More recently, individual differences in performance on this task have been shown to robustly relate to individual differences in more complex math processing - a result that has been replicated across many different age groups. In this study, we 'unpack' the underlying components of digit comparison (i.e. digit identification, digit to number-word matching, digit ordering and general comparison) in a sample of adults. In a first experiment, we showed that digit comparison performance was most strongly related to digit ordering ability - i.e., the ability to judge whether symbolic numbers are in numerical order. Furthermore, path analyses indicated that the relation between digit comparison and arithmetic was partly mediated by digit ordering and fully mediated when non-numerical (letter) ordering was also entered into the model. In a second experiment, we examined whether a general order working memory component could account for the relation between digit comparison and arithmetic. It could not. Instead, results were more consistent with the notion that fluent access and activation of long-term stored associations between numbers explains the relation between arithmetic and both digit comparison and digit ordering tasks. Copyright © 2017 Elsevier B.V. All rights reserved.
Hinault, T; Lemaire, P
2016-01-01
In this review, we provide an overview of how age-related changes in executive control influence aging effects in arithmetic processing. More specifically, we consider the role of executive control in strategic variations with age during arithmetic problem solving. Previous studies found that age-related differences in arithmetic performance are associated with strategic variations. That is, when they accomplish arithmetic problem-solving tasks, older adults use fewer strategies than young adults, use strategies in different proportions, and select and execute strategies less efficiently. Here, we review recent evidence, suggesting that age-related changes in inhibition, cognitive flexibility, and working memory processes underlie age-related changes in strategic variations during arithmetic problem solving. We discuss both behavioral and neural mechanisms underlying age-related changes in these executive control processes. © 2016 Elsevier B.V. All rights reserved.
Reconfigurable data path processor
NASA Technical Reports Server (NTRS)
Donohoe, Gregory (Inventor)
2005-01-01
A reconfigurable data path processor comprises a plurality of independent processing elements, each advantageously having an identical architecture. Each processing element comprises a plurality of data processing means for generating a potential output, and each is also capable of passing an input through as a potential output with little or no processing. Each processing element comprises a conditional multiplexer having a first conditional multiplexer input, a second conditional multiplexer input, and a conditional multiplexer output. A first potential output value is transmitted to the first conditional multiplexer input, and a second potential output value is transmitted to the second conditional multiplexer input. The conditional multiplexer couples either the first conditional multiplexer input or the second conditional multiplexer input to the conditional multiplexer output, according to an output control command. The output control command is generated by processing a set of arithmetic status bits through a logical mask. The conditional multiplexer output is coupled to a first processing element output. A first set of arithmetic status bits is generated by the processing of the first processable value; a second set of arithmetic status bits may be generated by a second processing operation. An arithmetic-status-bit multiplexer selects the desired set of arithmetic status bits from among the first and second sets. The conditional multiplexer evaluates the selected arithmetic status bits according to a logical mask defining an algorithm for evaluating the arithmetic status bits.
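The masked-status-bit selection described above can be modeled in software. This is a hypothetical sketch, not the patented hardware; the "any masked bit set" evaluation rule is an assumption, since the abstract leaves the mask-defined algorithm open.

```python
def conditional_mux(in_a, in_b, status_bits, mask):
    """Select between two potential outputs using masked arithmetic status bits.

    status_bits and mask are small integers treated as bit vectors.
    The output control command is asserted when any masked status bit
    is set (an assumed rule for illustration).
    """
    control = (status_bits & mask) != 0
    return in_b if control else in_a
```

For example, with a mask selecting only a "carry" bit, the element forwards the second potential output exactly when that bit is set, regardless of other status bits.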
Conclusiveness of natural languages and recognition of images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wojcik, Z.M.
1983-01-01
The conclusiveness is investigated using recognition processes and one-one correspondence between expressions of a natural language and graphs representing events. The graphs, as conceived in psycholinguistics, are obtained as a result of perception processes. It is possible to generate and process the graphs automatically, using computers, and then to convert the resulting graphs into expressions of a natural language. Correctness and conclusiveness of the graphs and sentences are investigated using the fundamental condition for events representation processes. Some consequences of the conclusiveness are discussed, e.g. undecidability of arithmetic, human brain asymmetry, correctness of statistical calculations and operations research. It is suggested that group theory should be imposed on mathematical models of any real system. Proof of the fundamental condition is also presented. 14 references.
ERIC Educational Resources Information Center
Berg, Derek H.; Hutchinson, Nancy L.
2010-01-01
This study investigated whether processing speed, short-term memory, and working memory accounted for the differential mental addition fluency between children typically achieving in arithmetic (TA) and children at-risk for failure in arithmetic (AR). Further, we drew attention to fluency differences in simple (e.g., 5 + 3) and complex (e.g., 16 +…
Convergence to equilibrium under a random Hamiltonian.
Brandão, Fernando G S L; Ćwikliński, Piotr; Horodecki, Michał; Horodecki, Paweł; Korbicz, Jarosław K; Mozrzymas, Marek
2012-09-01
We analyze equilibration times of subsystems of a larger system under a random total Hamiltonian, in which the basis of the Hamiltonian is drawn from the Haar measure. We obtain that the time of equilibration is of the order of the inverse of the arithmetic average of the Bohr frequencies. To compute the average over a random basis, we compute the inverse of a matrix of overlaps of operators which permute four systems. We first obtain results on such a matrix for a representation of an arbitrary finite group and then apply it to the particular representation of the permutation group under consideration.
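The order-of-magnitude estimate stated above can be computed directly from a spectrum. The function below is illustrative (units with hbar = 1), not code from the paper.

```python
def equilibration_time_estimate(energies):
    """Estimate equilibration time as the inverse of the arithmetic
    mean of the nonzero Bohr frequencies |E_i - E_j| (hbar = 1)."""
    gaps = [abs(a - b)
            for i, a in enumerate(energies)
            for j, b in enumerate(energies)
            if i < j and a != b]
    mean_gap = sum(gaps) / len(gaps)
    return 1.0 / mean_gap
```

For the spectrum {0, 1, 2} the Bohr frequencies are 1, 2, 1, so the mean is 4/3 and the estimate is 0.75 in these units.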
SPAR improved structural-fluid dynamic analysis capability
NASA Technical Reports Server (NTRS)
Pearson, M. L.
1985-01-01
The results of a study whose objective was to improve the operation of the SPAR computer code by improving efficiency, user features, and documentation are presented. Additional capability was added to the SPAR arithmetic utility system, including trigonometric functions, numerical integration, interpolation, and matrix combinations. Improvements were made in the EIG processor. A processor was created to compute and store principal stresses in table-format data sets. An additional capability was developed and incorporated into the plot processor which permits plotting directly from table-format data sets. Documentation of all these features is provided in the form of updates to the SPAR users manual.
Teachers’ Beliefs and Practices Regarding the Role of Executive Functions in Reading and Arithmetic
Rapoport, Shirley; Rubinsten, Orly; Katzir, Tami
2016-01-01
The current study investigated early elementary school teachers’ beliefs and practices regarding the role of Executive Functions (EFs) in reading and arithmetic. A new research questionnaire was developed and judged by professionals in academia and the field. Responses were obtained from 144 teachers from Israel. Factor analysis divided the questionnaire into three valid and reliable subscales, reflecting (1) beliefs regarding the contribution of EFs to reading and arithmetic, (2) pedagogical practices, and (3) a connection between the cognitive mechanisms of reading and arithmetic. Findings indicate that teachers believe EFs affect students’ performance in reading and arithmetic. These beliefs were also correlated with pedagogical practices. Additionally, special education teachers scored higher on the different subscales compared to general education teachers. These findings shed light on the way teachers perceive the cognitive foundations of reading and arithmetic and indicate to what extent these perceptions guide their teaching practices. PMID:27799917
Very Large Scale Integrated Circuits for Military Systems.
1981-01-01
ABBREVIATIONS: A/D Analog-to-digital; AGC Automatic Gain Control; A/J Anti-jam; ASP Advanced Signal Processor; AU Arithmetic Units; CAD Computer-Aided... electronic support measures (ESM) equipments (Ref. 23); in lieu of an adequate automatic processing capability, the function is now performed manually (Ref. 24), which involves a human operator, displays, etc., and a sacrifice in performance (acquisition speed, saturation signal density). Various automatic processing
ERIC Educational Resources Information Center
Pyke, Aryn A.; LeFevre, Jo-Anne
2011-01-01
Why is subsequent recall sometimes better for self-generated answers than for answers obtained from an external source (e.g., calculator)? In this study, we explore the relative contribution of 2 processes, recall attempts and self-computation, to this "generation effect" (i.e., enhanced answer recall relative to when problems are practiced with a…
The Definition and Implementation of a Computer Programming Language Based on Constraints.
1980-08-01
though not quite reached, is a complete programming system which will implicitly support the constraint paradigm to the same extent that LISP, say... and detecting and resolving conflicts, just as LISP provides certain services such as automatic storage management, which records given data in a... defined: it permits the statement of equalities and some simple arithmetic relationships. An implementation representation is chosen, and LISP code for a
1981-09-30
to perform a variety of local arithmetic operations. Our initial task will be to use it for computing 5x5 convolutions common to many low level... report presents the results of applying our relaxation based scene matching system [1] to a new domain: automatic matching of pairs of images. The task... objects (corners of buildings) within the large image. But we did demonstrate the ability of our system to automatically segment, describe, and match
Improving Soldier Training: An Aptitude-Treatment Interaction Approach.
1979-06-01
magazines. Eighteen percent of American adults lack basic literacy skills to the point where they cannot even fill out basic forms. Dr. Food emphasized... designed to upgrade the literacy and computational skills of Army personnel found deficient. The magnitude of the problem is such, however, that the services... knowledge (WK), arithmetic reasoning (AR), etc.) predict the amount learned or the rate of learning or both. Special abilities such as psychomotor skills
Lithium Niobate Arithmetic Logic Unit
1991-03-01
[Boot51] A.D. Booth, "A Signed Binary Multiplication Technique," Quarterly Journal of Mechanics and Applied Mathematics, Vol. IV Part 2, 1951. [ChWi79... Trans. Computers, Vol. C-26, No. 7, July 1977, pp. 681-687. [Wake81] John F. Wakerly, "Microcomputer Architecture and Programming," John Wiley and... different division methods and discusses their applicability to simple bit serial implementation. Several different designs are then presented and
GENPLOT: A formula-based Pascal program for data manipulation and plotting
NASA Astrophysics Data System (ADS)
Kramer, Matthew J.
Geochemical processes involving alteration, differentiation, fractionation, or migration of elements may be elucidated by a number of discrimination or variation diagrams (e.g., AFM, Harker, Pearce, and many others). The construction of these diagrams involves arithmetic combination of selective elements (involving major, minor, or trace elements). GENPLOT utilizes a formula-based algorithm (an expression parser) which enables the program to manipulate multiparameter databases and plot XY, ternary, tetrahedron, and REE type plots without needing to change either the source code or rearranging databases. Formulae may be any quadratic expression whose variables are the column headings of the data matrix. A full-screen editor with limited equations and arithmetic functions (spreadsheet) has been incorporated into the program to aid data entry and editing. Data are stored as ASCII files to facilitate interchange of data between other programs and computers. GENPLOT was developed in Turbo Pascal for the IBM and compatible computers but also is available in Apple Pascal for the Apple IIe and III. Because the source code is too extensive to list here (about 5200 lines of Pascal code), the expression parsing routine, which is central to GENPLOT's flexibility, is incorporated into a smaller demonstration program named SOLVE. The following paper includes a discussion of how the expression parser works and a detailed description of GENPLOT's capabilities.
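The core idea, formulas whose variables are the column headings of the data matrix, can be sketched briefly. This uses Python's own parser as a stand-in for GENPLOT's Pascal expression parser; the function and column names are illustrative.

```python
def evaluate_formula(formula, table):
    """Evaluate a formula whose variables are column headings.

    table: dict mapping column name -> list of values (one per sample).
    Returns one computed value per sample, without any change to the
    program source or rearrangement of the data.
    """
    n = len(next(iter(table.values())))
    code = compile(formula, "<formula>", "eval")
    # Evaluate row by row, exposing only the columns as variables.
    return [eval(code, {"__builtins__": {}},
                 {name: col[i] for name, col in table.items()})
            for i in range(n)]
```

A plotting axis can then be defined by any such formula, e.g. "SiO2 + 2*MgO", without touching the underlying database.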
Shin, Jaeyoung; Müller, Klaus-R; Hwang, Han-Jeong
2016-01-01
We propose a near-infrared spectroscopy (NIRS)-based brain-computer interface (BCI) that can be operated in the eyes-closed (EC) state. To evaluate the feasibility of NIRS-based EC BCIs, we compared the performance of an eyes-open (EO) BCI paradigm and an EC BCI paradigm with respect to hemodynamic response and classification accuracy. To this end, subjects performed either mental arithmetic or imagined vocalization of the English alphabet as a baseline task with very low cognitive loading. The performances of two linear classifiers were compared, resulting in an advantage for shrinkage linear discriminant analysis (LDA). The classification accuracy of the EC paradigm (75.6 ± 7.3%) was observed to be lower than that of the EO paradigm (77.0 ± 9.2%), but the difference was statistically insignificant (p = 0.5698). Subjects reported that they felt it more comfortable (p = 0.057) and easier (p < 0.05) to perform the EC BCI tasks. The difference in task difficulty may account for the slightly lower classification accuracy of the EC data. From the analysis results, we could confirm the feasibility of NIRS-based EC BCIs, which can be a BCI option that may ultimately be of use for patients who cannot keep their eyes open consistently. PMID:27824089
Arán Filippetti, Vanessa; Richaud, María Cristina
2017-10-01
Though the relationship between executive functions (EFs) and mathematical skills has been well documented, little is known about how both EFs and IQ differentially support diverse math domains in primary students. Inconsistency of results may be due to the statistical techniques employed, specifically, if the analysis is conducted with observed variables, i.e., regression analysis, or at the latent level, i.e., structural equation modeling (SEM). The current study explores the contribution of both EFs and IQ in mathematics through an SEM approach. A total of 118 8- to 12-year-olds were administered measures of EFs, crystallized (Gc) and fluid (Gf) intelligence, and math abilities (i.e., number production, mental calculus and arithmetical problem-solving). Confirmatory factor analysis (CFA) offered support for the three-factor solution of EFs: (1) working memory (WM), (2) shifting, and (3) inhibition. Regarding the relationship among EFs, IQ and math abilities, the results of the SEM analysis showed that (i) WM and age predict number production and mental calculus, and (ii) shifting and sex predict arithmetical problem-solving. In all of the SEM models, EFs partially or totally mediated the relationship between IQ, age and math achievement. These results suggest that EFs differentially supports math abilities in primary-school children and is a more significant predictor of math achievement than IQ level.
Changing computing paradigms towards power efficiency
Klavík, Pavel; Malossi, A. Cristiano I.; Bekas, Costas; Curioni, Alessandro
2014-01-01
Power awareness is fast becoming immensely important in computing, ranging from the traditional high-performance computing applications to the new generation of data centric workloads. In this work, we describe our efforts towards a power-efficient computing paradigm that combines low- and high-precision arithmetic. We showcase our ideas for the widely used kernel of solving systems of linear equations that finds numerous applications in scientific and engineering disciplines as well as in large-scale data analytics, statistics and machine learning. Towards this goal, we developed tools for the seamless power profiling of applications at a fine-grain level. In addition, we verify here previous work on post-FLOPS/W metrics and show that these can shed much more light in the power/energy profile of important applications. PMID:24842033
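The low/high-precision combination showcased above for linear systems is commonly realized as mixed-precision iterative refinement: solve cheaply in low precision, then correct the residual in high precision. The NumPy sketch below illustrates the idea; it is not the paper's implementation or tooling.

```python
import numpy as np

def mixed_precision_solve(A, b, iters=5):
    """Solve Ax = b by iterative refinement: the (expensive) solves run
    in float32, while residuals are computed in float64, recovering
    double-precision accuracy for well-conditioned systems."""
    A32 = A.astype(np.float32)
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(iters):
        r = b - A @ x                                   # residual in float64
        dx = np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
        x += dx
    return x
```

In a production setting the low-precision factorization would be reused across refinement steps, which is where the energy savings come from.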
NASA Astrophysics Data System (ADS)
Kan-On, Yukio
2007-04-01
This paper is concerned with the bifurcation structure of positive stationary solutions for a generalized Lotka-Volterra competition model with diffusion. To establish the structure, the bifurcation theory and the interval arithmetic are employed.
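Interval arithmetic, the second tool mentioned, replaces point values with enclosing intervals so that every computed result is rigorous. Below is a minimal illustrative sketch of two core operations, not the paper's machinery.

```python
class Interval:
    """Minimal interval arithmetic: each operation returns an interval
    guaranteed to enclose all pointwise results, which is what lets
    interval methods certify numerical conclusions rigorously."""

    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # Products of endpoints bound all products of interior points.
        ps = [self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi]
        return Interval(min(ps), max(ps))
```

(A full implementation would also round endpoints outward to absorb floating-point error, which this sketch omits.)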
Hecht, Steven A
2006-01-01
We used the choice/no-choice methodology in two experiments to examine patterns of strategy selection and execution in groups of undergraduates. Comparisons between choice and no-choice trials revealed three groups. Some participants (good retrievers) were consistently able to use retrieval to solve almost all arithmetic problems. Other participants (perfectionists) successfully used retrieval substantially less often in choice-allowed trials than when strategy choices were prohibited. Not-so-good retrievers retrieved correct answers less often than the other participants in both the choice-allowed and no-choice conditions. No group differences emerged with respect to time needed to search and access answers from long-term memory; however, not-so-good retrievers were consistently slower than the other subgroups at executing fact-retrieval processes that are peripheral to memory search and access. Theoretical models of simple arithmetic, such as the Strategy Choice and Discovery Simulation (Shrager & Siegler, 1998), should be updated to include the existence of both perfectionist and not-so-good retriever adults.
Arithmetic learning with the use of graphic organiser
NASA Astrophysics Data System (ADS)
Sai, F. L.; Shahrill, M.; Tan, A.; Han, S. H.
2018-01-01
For this study, Zollman’s four corners-and-a-diamond mathematics graphic organiser embedded with Polya’s Problem Solving Model was used to investigate secondary school students’ performance in arithmetic word problems. This instructional learning tool was used to help students break down the given information into smaller units for better strategic planning. The participants were Year 7 students, comprised of 21 male and 20 female students, aged between 11-13 years old, from a co-ed secondary school in Brunei Darussalam. This study mainly adopted a quantitative approach to investigate the types of differences found in the arithmetic word problem pre- and post-tests results from the use of the learning tool. Although the findings revealed slight improvements in the overall comparisons of the students’ test results, the in-depth analysis of the students’ responses in their activity worksheets shows a different outcome. Some students were able to make good attempts in breaking down the key points into smaller information in order to solve the word problems.
Price, Gavin R; Yeo, Darren J; Wilkey, Eric D; Cutting, Laurie E
2018-04-01
The present study investigates the relation between resting-state functional connectivity (rsFC) of cytoarchitectonically defined subdivisions of the parietal cortex at the end of 1st grade and arithmetic performance at the end of 2nd grade. Results revealed a dissociable pattern of relations between rsFC and arithmetic competence among subdivisions of intraparietal sulcus (IPS) and angular gyrus (AG). rsFC between right hemisphere IPS subdivisions and contralateral IPS subdivisions positively correlated with arithmetic competence. In contrast, rsFC between the left hIP1 and the right medial temporal lobe, and rsFC between the left AG and left superior frontal gyrus, were negatively correlated with arithmetic competence. These results suggest that strong inter-hemispheric IPS connectivity is important for math development, reflecting either neurocognitive mechanisms specific to arithmetic processing, domain-general mechanisms that are particularly relevant to arithmetic competence, or structural 'cortical maturity'. Stronger connectivity between IPS, and AG, subdivisions and frontal and temporal cortices, however, appears to be negatively associated with math development, possibly reflecting the ability to disengage suboptimal problem-solving strategies during mathematical processing, or to flexibly reorient task-based networks. Importantly, the reported results pertain even when controlling for reading, spatial attention, and working memory, suggesting that the observed rsFC-behavior relations are specific to arithmetic competence. Copyright © 2017 The Authors. Published by Elsevier Ltd.. All rights reserved.
Oryspayev, Dossay; Aktulga, Hasan Metin; Sosonkina, Masha; ...
2015-07-14
Sparse matrix vector multiply (SpMVM) is an important kernel that frequently arises in high performance computing applications. Due to its low arithmetic intensity, several approaches have been proposed in the literature to improve its scalability and efficiency in large scale computations. In this paper, our target systems are high end multi-core architectures, and we use a message passing interface + open multiprocessing hybrid programming model for parallelism. We analyze the performance of a recently proposed implementation of distributed symmetric SpMVM, originally developed for large sparse symmetric matrices arising in ab initio nuclear structure calculations. We also study important features of this implementation and compare with previously reported implementations that do not exploit the underlying symmetry. Our SpMVM implementations leverage the hybrid paradigm to efficiently overlap expensive communications with computations. Our main comparison criterion is the "CPU core hours" metric, which is the main measure of resource usage on supercomputers. We analyze the effects of a topology-aware mapping heuristic using a simplified network load model. Furthermore, we have tested the different SpMVM implementations on two large clusters with 3D Torus and Dragonfly topology. Our results show that the distributed SpMVM implementation that exploits matrix symmetry and hides communication yields the best value for the "CPU core hours" metric and significantly reduces data movement overheads.
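Exploiting symmetry in SpMVM means storing only one triangle of the matrix and letting each off-diagonal entry contribute to two output rows, halving storage and memory traffic. The serial sketch below illustrates the kernel; the paper's distributed, communication-hiding version is not reproduced.

```python
def sym_spmv(rows, cols, vals, x):
    """y = A x for symmetric sparse A stored as upper-triangle COO
    triplets (rows[k], cols[k], vals[k]) with rows[k] <= cols[k]."""
    y = [0.0] * len(x)
    for i, j, v in zip(rows, cols, vals):
        y[i] += v * x[j]
        if i != j:              # mirror each off-diagonal entry
            y[j] += v * x[i]
    return y
```

The mirrored update is also what complicates parallelization: both rows i and j are written, so a distributed version must combine partial results across processes, which is the communication the paper overlaps with computation.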
Special relativity from observer's mathematics point of view
NASA Astrophysics Data System (ADS)
Khots, Boris; Khots, Dmitriy
2015-09-01
When we create mathematical models for the quantum theory of light, we assume that the mathematical apparatus used in modeling, at least the simplest mathematical apparatus, is infallible. In particular, this relates to the use of "infinitely small" and "infinitely large" quantities in arithmetic and the use of Newton-Cauchy definitions of the limit and derivative in analysis. We believe this is where the main problem lies in the contemporary study of nature. We have introduced a new concept of Observer's Mathematics (see www.mathrelativity.com). Observer's Mathematics creates a new arithmetic, algebra, geometry, topology, analysis and logic which do not contain the concept of the continuum, but locally coincide with the standard fields. We apply Einstein's special relativity principles and obtain an analogue of the classical Lorentz transformation. This work considers this transformation from the Observer's Mathematics point of view.
NASA Astrophysics Data System (ADS)
Toyokuni, G.; Takenaka, H.
2007-12-01
We propose a method to obtain effective grid parameters for the finite-difference (FD) method with standard Earth models using analytical means. In spite of the broad use of the heterogeneous FD formulation for seismic waveform modeling, accurate treatment of material discontinuities inside the grid cells has been a serious problem for many years. One possible way to solve this problem is to introduce effective grid elastic moduli and densities (effective parameters) calculated by the volume harmonic averaging of elastic moduli and volume arithmetic averaging of density in grid cells. This scheme enables us to put a material discontinuity at an arbitrary position in the spatial grid. Most methods used for synthetic seismogram calculation today rely on standard Earth models, such as PREM, IASP91, SP6, and AK135, represented as functions of normalized radius. For FD computation of seismic waveforms with such models, we first need accurate treatment of material discontinuities in radius. This study provides a numerical scheme for analytical calculation of the effective parameters on arbitrary spatial grids in the radial direction for these four major standard Earth models, making the best use of their functional forms. This scheme can analytically obtain the integral volume averages through partial fraction decompositions (PFDs) and integral formulae. We have developed a FORTRAN subroutine to perform the computations, which is open for use in a large variety of FD schemes ranging from 1-D to 3-D, with conventional and staggered grids. In the presentation, we show some numerical examples displaying the accuracy of the FD synthetics simulated with the analytical effective parameters.
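For a single cell split by one discontinuity, the averaging rules above reduce to two one-line formulas: a harmonic average for the modulus and an arithmetic average for the density. A minimal sketch (the function name and the single-discontinuity simplification are ours; the paper evaluates the corresponding volume integrals analytically via partial fraction decomposition):

```python
def effective_parameters(mu1, rho1, mu2, rho2, f):
    """Effective modulus and density for a grid cell straddling a
    material discontinuity: volume fraction f of material 1 and
    (1 - f) of material 2.
    """
    mu_eff = 1.0 / (f / mu1 + (1.0 - f) / mu2)   # volume harmonic average
    rho_eff = f * rho1 + (1.0 - f) * rho2         # volume arithmetic average
    return mu_eff, rho_eff
```

The harmonic average for moduli (rather than a plain mean) is what keeps the FD stencil accurate when a discontinuity falls between grid points, because stress continuity across the interface is governed by the reciprocal of the modulus.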
Morsanyi, Kinga; O'Mahony, Eileen; McCormack, Teresa
2017-12-01
Recent evidence has highlighted the important role that number-ordering skills play in arithmetic abilities, both in children and adults. In the current study, we demonstrated that number comparison and ordering skills were both significantly related to arithmetic performance in adults, and the effect size was greater in the case of ordering skills. Additionally, we found that the effect of number comparison skills on arithmetic performance was mediated by number-ordering skills. Moreover, performance on comparison and ordering tasks involving the months of the year was also strongly correlated with arithmetic skills, and participants displayed similar (canonical or reverse) distance effects on the comparison and ordering tasks involving months as when the tasks included numbers. This suggests that the processes responsible for the link between comparison and ordering skills and arithmetic performance are not specific to the domain of numbers. Finally, a factor analysis indicated that performance on comparison and ordering tasks loaded on a factor that included performance on a number line task and self-reported spatial thinking styles. These results substantially extend previous research on the role of order processing abilities in mental arithmetic.
Cognitive precursors of arithmetic development in primary school children with cerebral palsy.
Van Rooijen, M; Verhoeven, L; Smits, D W; Dallmeijer, A J; Becher, J G; Steenbergen, B
2014-04-01
The aim of this study was to examine the development of arithmetic performance and its cognitive precursors in children with CP from 7 to 9 years of age. Previous research has shown that children with CP are generally delayed in arithmetic performance compared to their typically developing peers. In children with CP, the developmental trajectory of the ability to solve addition and subtraction tasks has, however, rarely been studied, as have the cognitive factors affecting this trajectory. Sixty children (M=7.2 years, SD=.23 months at study entry) with CP participated in this study. Standardized tests were administered to assess arithmetic performance, word decoding skills, non-verbal intelligence, and working memory. The results showed that the ability to solve addition and subtraction tasks increased over a two-year period. Word decoding skills were positively related to the initial status of arithmetic performance. In addition, non-verbal intelligence and working memory were associated with the initial status and growth rate of arithmetic performance from 7 to 9 years of age. The current study highlights the importance of non-verbal intelligence and working memory to the development of arithmetic performance of children with CP. Copyright © 2014 Elsevier Ltd. All rights reserved.
Separating stages of arithmetic verification: An ERP study with a novel paradigm.
Avancini, Chiara; Soltész, Fruzsina; Szűcs, Dénes
2015-08-01
In studies of arithmetic verification, participants typically encounter two operands and they carry out an operation on these (e.g. adding them). Operands are followed by a proposed answer and participants decide whether this answer is correct or incorrect. However, interpretation of results is difficult because multiple parallel, temporally overlapping numerical and non-numerical processes of the human brain may contribute to task execution. In order to overcome this problem here we used a novel paradigm specifically designed to tease apart the overlapping cognitive processes active during arithmetic verification. Specifically, we aimed to separate effects related to detection of arithmetic correctness, detection of the violation of strategic expectations, detection of physical stimulus properties mismatch and numerical magnitude comparison (numerical distance effects). Arithmetic correctness, physical stimulus properties and magnitude information were not task-relevant properties of the stimuli. We distinguished between a series of temporally highly overlapping cognitive processes which in turn elicited overlapping ERP effects with distinct scalp topographies. We suggest that arithmetic verification relies on two major temporal phases which include parallel running processes. Our paradigm offers a new method for investigating specific arithmetic verification processes in detail. Copyright © 2015 Elsevier Ltd. All rights reserved.
Do Children Understand Fraction Addition?
ERIC Educational Resources Information Center
Braithwaite, David W.; Tian, Jing; Siegler, Robert S.
2017-01-01
Many children fail to master fraction arithmetic even after years of instruction. A recent theory of fraction arithmetic (Braithwaite, Pyke, & Siegler, in press) hypothesized that this poor learning of fraction arithmetic procedures reflects poor conceptual understanding of them. To test this hypothesis, we performed three experiments…
Propagation of Axially Symmetric Detonation Waves
DOE Office of Scientific and Technical Information (OSTI.GOV)
Druce, R L; Roeske, F; Souers, P C
2002-06-26
We have studied the non-ideal propagation of detonation waves in LX-10 and in the insensitive explosive TATB. Explosively-driven, 5.8-mm-diameter, 0.125-mm-thick aluminum flyer plates were used to initiate 38-mm-diameter, hemispherical samples of LX-10 pressed to a density of 1.86 g/cm^3 and of TATB at a density of 1.80 g/cm^3. The TATB powder was a grade called ultrafine (UFTATB), having an arithmetic mean particle diameter of about 8-10 µm and a specific surface area of about 4.5 m^2/g. Using PMMA as a transducer, output pressure was measured at 5 discrete points on the booster using a Fabry-Perot velocimeter. Breakout time was measured on a line across the booster with a streak camera. Each of the experimental geometries was calculated using the Ignition and Growth Reactive Flow Model, the JWL++ Model and the Programmed Burn Model. Boosters at both ambient and cold (-20 C and -54 C) temperatures have been experimentally and computationally studied. A comparison of experimental and modeling results is presented.
Avancini, Chiara; Galfano, Giovanni; Szűcs, Dénes
2014-12-01
Event-related potential (ERP) studies have detected several characteristic consecutive amplitude modulations in both implicit and explicit mental arithmetic tasks. Implicit tasks typically focused on the arithmetic relatedness effect (in which performance is affected by semantic associations between numbers) while explicit tasks focused on the distance effect (in which performance is affected by the numerical difference of to-be-compared numbers). Both task types elicit morphologically similar ERP waves which were explained in functionally similar terms. However, to date, the relationship between these tasks has not been investigated explicitly and systematically. In order to fill this gap, here we examined whether ERP effects and their underlying cognitive processes in implicit and explicit mental arithmetic tasks differ from each other. The same group of participants performed both an implicit number-matching task (in which arithmetic knowledge is task-irrelevant) and an explicit arithmetic-verification task (in which arithmetic knowledge is task-relevant). 129-channel ERP data differed substantially between tasks. In the number-matching task, the arithmetic relatedness effect appeared as a negativity over left-frontal electrodes whereas the distance effect was more prominent over right centro-parietal electrodes. In the verification task, all probe types elicited similar N2b waves over right fronto-central electrodes and typical centro-parietal N400 effects over central electrodes. The distance effect appeared as an early-rising, long-lasting left parietal negativity. We suggest that ERP effects in the implicit task reflect access to semantic memory networks and to magnitude discrimination, respectively. In contrast, effects of expectation violation are more prominent in explicit tasks and may mask more delicate cognitive processes. Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.
Reconfigurable pipelined processor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saccardi, R.J.
1989-09-19
This patent describes a reconfigurable pipelined processor for processing data. It comprises: a plurality of memory devices for storing bits of data; a plurality of arithmetic units for performing arithmetic functions with the data; cross bar means for connecting the memory devices with the arithmetic units for transferring data therebetween; at least one counter connected with the cross bar means for providing a source of addresses to the memory devices; at least one variable tick delay device connected with each of the memory devices and arithmetic units; and means for providing control bits to the variable tick delay device for variably controlling the input and output operations thereof to selectively delay the memory devices and arithmetic units to align the data for processing in a selected sequence.
Single-digit arithmetic processing—anatomical evidence from statistical voxel-based lesion analysis
Mihulowicz, Urszula; Willmes, Klaus; Karnath, Hans-Otto; Klein, Elise
2014-01-01
Different specific mechanisms have been suggested for solving single-digit arithmetic operations. However, the neural correlates underlying basic arithmetic (multiplication, addition, subtraction) are still under debate. In the present study, we systematically assessed single-digit arithmetic in a group of acute stroke patients (n = 45) with circumscribed left- or right-hemispheric brain lesions. Lesion sites significantly related to impaired performance were found only in the left-hemisphere damaged (LHD) group. Deficits in multiplication and addition were related to subcortical/white matter brain regions differing from those for subtraction tasks, corroborating the notion of distinct processing pathways for different arithmetic tasks. Additionally, our results further point to the importance of investigating fiber pathways in numerical cognition. PMID:24847238
FPGA implementation of a biological neural network based on the Hodgkin-Huxley neuron model
Yaghini Bonabi, Safa; Asgharian, Hassan; Safari, Saeed; Nili Ahmadabadi, Majid
2014-01-01
A set of techniques for efficient implementation of Hodgkin-Huxley-based (H-H) model of a neural network on FPGA (Field Programmable Gate Array) is presented. The central implementation challenge is H-H model complexity that puts limits on the network size and on the execution speed. However, basics of the original model cannot be compromised when effect of synaptic specifications on the network behavior is the subject of study. To solve the problem, we used computational techniques such as CORDIC (Coordinate Rotation Digital Computer) algorithm and step-by-step integration in the implementation of arithmetic circuits. In addition, we employed different techniques such as sharing resources to preserve the details of model as well as increasing the network size in addition to keeping the network execution speed close to real time while having high precision. Implementation of a two mini-columns network with 120/30 excitatory/inhibitory neurons is provided to investigate the characteristic of our method in practice. The implementation techniques provide an opportunity to construct large FPGA-based network models to investigate the effect of different neurophysiological mechanisms, like voltage-gated channels and synaptic activities, on the behavior of a neural network in an appropriate execution time. Additional to inherent properties of FPGA, like parallelism and re-configurability, our approach makes the FPGA-based system a proper candidate for study on neural control of cognitive robots and systems as well. PMID:25484854
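CORDIC, one of the computational techniques mentioned above, evaluates trigonometric functions using only additions and scalings by powers of two, which is why it suits FPGA arithmetic circuits. The following is an illustrative floating-point sketch of the circular rotation mode (our own code, not the paper's implementation; a hardware version would replace the float multiplies by fixed-point bit shifts):

```python
import math

def cordic_sin_cos(theta, n=32):
    """Circular CORDIC in rotation mode: returns (sin, cos) of theta for
    |theta| below ~1.74 rad, using only adds and scalings by 2**-i."""
    angles = [math.atan(2.0 ** -i) for i in range(n)]
    # pre-scale by the aggregate CORDIC gain so no final correction is needed
    k = 1.0
    for i in range(n):
        k /= math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = k, 0.0, theta
    for i in range(n):
        d = 1.0 if z >= 0.0 else -1.0   # rotate toward the residual angle
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return y, x  # (sin(theta), cos(theta))
```

Because each micro-rotation needs only shifts, adds, and a small lookup table of arctangents, the same structure maps naturally onto the shared, step-by-step arithmetic circuits the paper uses to keep the H-H network within FPGA resource limits.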
Datta, Asit K; Munshi, Soumika
2002-03-10
Based on the negabinary number representation, parallel one-step arithmetic operations (that is, addition and subtraction), logical operations, and matrix-vector multiplication on data have been optically implemented by use of a two-dimensional spatial-encoding technique. For addition and subtraction, one of the operands in decimal form is converted into the unsigned negabinary form, whereas the other decimal number is represented in the signed negabinary form. The result of the operation is obtained in the mixed negabinary form and is converted back into decimal. Matrix-vector multiplication for unsigned negabinary numbers is achieved through the convolution technique. Both of the operands for logical operation are converted to their signed negabinary forms. All operations are implemented by use of a unique optical architecture. The use of a single liquid-crystal-display panel to spatially encode the input data, operational kernels, and decoding masks has simplified the architecture as well as reduced the cost and complexity.
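Negabinary (base -2) representation, on which these optical operations rest, encodes both positive and negative integers with the digits 0 and 1 and no separate sign. A software sketch of the conversion (our own illustration; the optical system encodes the digits spatially rather than computing them this way):

```python
def to_negabinary(n):
    """Convert an integer to its base(-2) digit string, e.g. 6 -> '11010'."""
    if n == 0:
        return "0"
    digits = []
    while n != 0:
        n, r = divmod(n, -2)
        if r < 0:        # force a remainder of 0 or 1
            n += 1
            r += 2
        digits.append(str(r))
    return "".join(reversed(digits))

def from_negabinary(s):
    """Evaluate a base(-2) digit string back to an integer."""
    return sum(int(d) * (-2) ** i for i, d in enumerate(reversed(s)))
```

For example, '11010' evaluates to 16 - 8 - 2 = 6, and '11' evaluates to -2 + 1 = -1, so negative numbers need no sign bit: this is what makes one-step signed addition and subtraction possible in the optical scheme.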
Coding efficiency of AVS 2.0 for CBAC and CABAC engines
NASA Astrophysics Data System (ADS)
Cui, Jing; Choi, Youngkyu; Chae, Soo-Ik
2015-12-01
In this paper we compare the coding efficiency of AVS 2.0[1] for the engines of the Context-based Binary Arithmetic Coding (CBAC)[2] in AVS 2.0 and the Context-Adaptive Binary Arithmetic Coder (CABAC)[3] in HEVC[4]. For a fair comparison, the CABAC is embedded in the reference code RD10.1, because the CBAC was embedded in the HEVC in our previous work[5]. In the RD code, the rate estimation table is employed only for RDOQ. To reduce the computational complexity of the video encoder, we therefore modified the RD code so that the rate estimation table is employed for all RDO decisions. Furthermore, we also simplified the rate estimation table by reducing the bit depth of its fractional part from 8 to 2. The simulation results show that the CABAC has a BD-rate loss of about 0.7% compared to the CBAC. It seems that the CBAC is slightly more efficient than the CABAC in AVS 2.0.
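The rate-estimation-table simplification described above, reducing the fractional part from 8 bits to 2, can be illustrated by quantizing -log2(p) at two fixed-point precisions. The probability states below are hypothetical placeholders, not the actual CBAC/CABAC state machines:

```python
import math

def rate_table(frac_bits, num_states=64):
    """Fixed-point estimates of the bit cost -log2(p) for a set of
    hypothetical probability states, with the fractional part stored
    in `frac_bits` bits."""
    scale = 1 << frac_bits
    table = []
    for s in range(1, num_states + 1):
        p = s / (num_states + 1.0)                 # placeholder probability
        bits = -math.log2(p)                       # exact bit cost
        table.append(round(bits * scale) / scale)  # quantized estimate
    return table

coarse = rate_table(2)   # 2 fractional bits, as in the modified RD code
fine = rate_table(8)     # 8 fractional bits, as in the original table
```

The coarse table's worst-case quantization error is 1/8 of a bit versus 1/512 for the fine table; the paper's point is that this loss of precision is cheap in hardware yet barely affects the RDO decisions.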
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lieu, Richard
A hierarchy of statistics of increasing sophistication and accuracy is proposed to exploit an interesting and fundamental arithmetic structure in the photon bunching noise of incoherent light of large photon occupation number, with the purpose of suppressing the noise and rendering a more reliable and unbiased measurement of the light intensity. The method does not require any new hardware, rather it operates at the software level with the help of high-precision computers to reprocess the intensity time series of the incident light to create a new series with smaller bunching noise coherence length. The ultimate accuracy improvement of this method of flux measurement is limited by the timing resolution of the detector and the photon occupation number of the beam (the higher the photon number the better the performance). The principal application is accuracy improvement in the signal-limited bolometric flux measurement of a radio source.
Interventions for Primary School Children With Difficulties in Mathematics.
Dowker, Ann
2017-01-01
Difficulty with arithmetic is a common problem for children and adults, and there has been work on the topic for a surprisingly long time. This chapter will review some of the research that has been done over the years on interventions with primary school children. Interventions can be of various levels of intensiveness, ranging from whole-class approaches that take account of individual differences through small-group and limited-time individual interventions to extended-time individual interventions. Interventions discussed here include those involving peer tuition and group collaboration; those involving board and computer games; and those that involve assessing children's strengths and weaknesses in different components of mathematics and targeting remedial activities to the assessed weaknesses. Most of the interventions discussed in this chapter specifically involve mathematics (usually mainly arithmetic), but there is also some discussion of attempts to improve mathematics by training children in domain-general skills, including Piagetian operations, metacognition, and executive functions. © 2017 Elsevier Inc. All rights reserved.
Fuchs, Lynn S.; Compton, Donald L.; Fuchs, Douglas; Powell, Sarah R.; Schumacher, Robin F.; Hamlett, Carol L.; Vernier, Emily; Namkung, Jessica M.; Vukovic, Rose K.
2012-01-01
The purpose of this study was to investigate the contributions of domain-general cognitive resources and different forms of arithmetic development to individual differences in pre-algebraic knowledge. Children (n=279; mean age=7.59 yrs) were assessed on 7 domain-general cognitive resources as well as arithmetic calculations and word problems at start of 2nd grade and on calculations, word problems, and pre-algebraic knowledge at end of 3rd grade. Multilevel path analysis, controlling for instructional effects associated with the sequence of classrooms in which students were nested across grades 2–3, indicated arithmetic calculations and word problems are foundational to pre-algebraic knowledge. Also, results revealed direct contributions of nonverbal reasoning and oral language to pre-algebraic knowledge, beyond indirect effects that are mediated via arithmetic calculations and word problems. By contrast, attentive behavior, phonological processing, and processing speed contributed to pre-algebraic knowledge only indirectly via arithmetic calculations and word problems. PMID:22409764
A natural history of mathematics: George Peacock and the making of English algebra.
Lambert, Kevin
2013-06-01
In a series of papers read to the Cambridge Philosophical Society through the 1820s, the Cambridge mathematician George Peacock laid the foundation for a natural history of arithmetic that would tell a story of human progress from counting to modern arithmetic. The trajectory of that history, Peacock argued, established algebraic analysis as a form of universal reasoning that used empirically warranted operations of mind to think with symbols on paper. The science of counting would suggest arithmetic, arithmetic would suggest arithmetical algebra, and, finally, arithmetical algebra would suggest symbolic algebra. This philosophy of suggestion provided the foundation for Peacock's "principle of equivalent forms," which justified the practice of nineteenth-century English symbolic algebra. Peacock's philosophy of suggestion owed a considerable debt to the early Cambridge Philosophical Society culture of natural history. The aim of this essay is to show how that culture of natural history was constitutively significant to the practice of nineteenth-century English algebra.
Barnes, Marcia A; Stubbs, Allison; Raghubar, Kimberly P; Agostino, Alba; Taylor, Heather; Landry, Susan; Fletcher, Jack M; Smith-Chant, Brenda
2011-05-01
Preschoolers with spina bifida (SB) were compared to typically developing (TD) children on tasks tapping mathematical knowledge at 36 months (n = 102) and 60 months of age (n = 98). The group with SB had difficulty compared to TD peers on all mathematical tasks except for transformation on quantities in the subitizable range. At 36 months, vocabulary knowledge, visual-spatial, and fine motor abilities predicted achievement on a measure of informal math knowledge in both groups. At 60 months of age, phonological awareness, visual-spatial ability, and fine motor skill were uniquely and differentially related to counting knowledge, oral counting, object-based arithmetic skills, and quantitative concepts. Importantly, the patterns of association between these predictors and mathematical performance were similar across the groups. A novel finding is that fine motor skill uniquely predicted object-based arithmetic abilities in both groups, suggesting developmental continuity in the neurocognitive correlates of early object-based and later symbolic arithmetic problem solving. Models combining 36-month mathematical ability and these language-based, visual-spatial, and fine motor abilities at 60 months accounted for considerable variance on 60-month informal mathematical outcomes. Results are discussed with reference to models of mathematical development and early identification of risk in preschoolers with neurodevelopmental disorder.
NASA Astrophysics Data System (ADS)
Adamatzky, Andrew; Armstrong, Rachel; Jones, Jeff; Gunji, Yukio-Pegio
2013-07-01
The slime mould Physarum polycephalum is a large single cell with intriguingly smart behaviour. The slime mould shows outstanding abilities to adapt its protoplasmic network to varying environmental conditions. The slime mould can solve tasks of computational geometry, image processing, logic and arithmetic when data are represented by configurations of attractants and repellents. We attempt to map behavioural patterns of slime onto the cognitive control vs. schizotypy spectrum phase space and thus interpret the slime mould's activity in terms of creativity.
ERIC Educational Resources Information Center
CEMREL, Inc., St. Louis, MO.
This material describes two games, Minicomputer Tug-of-War and Minicomputer Golf. The Papy Minicomputer derives its name from George Papy, who invented and introduced it in the 1950's. The Minicomputer is seen as an abacus with the flavor of a computer in its schematic representation of numbers. Its manner of representation combines decimal…
1985-12-01
Office of Scientific Research, and Air Force Space Division are sponsoring research for the development of a high speed DFT processor. This DFT… …to the arithmetic circuitry through a master/slave… Since the TSP is an NP-complete problem, many mathematicians, operations researchers, computer scientists and the like have proposed heuristic…
Jenks, Kathleen M; de Moor, Jan; van Lieshout, Ernest C D M
2009-07-01
Although it is believed that children with cerebral palsy are at high risk for learning difficulties and arithmetic difficulties in particular, few studies have investigated this issue. Arithmetic ability was longitudinally assessed in children with cerebral palsy in special (n = 41) and mainstream education (n = 16) and controls in mainstream education (n = 16). Second grade executive function and working memory scores were used to predict third grade arithmetic accuracy and response time. Children with cerebral palsy in special education were less accurate and slower than their peers on all arithmetic tests, even after controlling for IQ, whereas children with cerebral palsy in mainstream education performed as well as controls. Although the performance gap became smaller over time, it did not disappear. Children with cerebral palsy in special education showed evidence of executive function and working memory deficits in shifting, updating, visuospatial sketchpad and phonological loop (for digits, not words) whereas children with cerebral palsy in mainstream education only had a deficit in visuospatial sketchpad. Hierarchical regression revealed that, after controlling for intelligence, components of executive function and working memory explained large proportions of unique variance in arithmetic accuracy and response time and these variables were sufficient to explain group differences in simple, but not complex, arithmetic. Children with cerebral palsy are at risk for specific executive function and working memory deficits that, when present, increase the risk for arithmetic difficulties in these children.
Conceptual Knowledge of Fraction Arithmetic
ERIC Educational Resources Information Center
Siegler, Robert S.; Lortie-Forgues, Hugues
2015-01-01
Understanding an arithmetic operation implies, at minimum, knowing the direction of effects that the operation produces. However, many children and adults, even those who execute arithmetic procedures correctly, may lack this knowledge on some operations and types of numbers. To test this hypothesis, we presented preservice teachers (Study 1),…
ERIC Educational Resources Information Center
Rourke, Byron P.; Conway, James A.
1997-01-01
Reviews current research on brain-behavior relationships in disabilities of arithmetic and mathematical reasoning from both a neurological and a neuropsychological perspective. Defines developmental dyscalculia and the developmental importance of right versus left hemisphere integrity for the mediation of arithmetic learning and explores…
A programmable computational image sensor for high-speed vision
NASA Astrophysics Data System (ADS)
Yang, Jie; Shi, Cong; Long, Xitian; Wu, Nanjian
2013-08-01
In this paper we present a programmable computational image sensor for high-speed vision. This computational image sensor contains four main blocks: an image pixel array, a massively parallel processing element (PE) array, a row processor (RP) array and a RISC core. The pixel-parallel PE array is responsible for transferring, storing and processing image raw data in a SIMD fashion with its own programming language. The RPs are a one-dimensional array of simplified RISC cores that can carry out complex arithmetic and logic operations. The PE array and RP array can complete a great amount of computation in few instruction cycles and therefore satisfy low- and mid-level high-speed image processing requirements. The RISC core controls the whole system operation and executes some high-level image processing algorithms. We utilize a simplified AHB bus as the system bus to connect our major components. A programming language and corresponding tool chain for this computational image sensor are also developed.
Graffeo, Michele; Polonio, Luca; Bonini, Nicolao
2015-01-01
In this paper, we investigate whether cognitive reflection and numeracy skills affect the quality of the consumers' decision-making process in a purchase decision context. In a first (field) experiment, an identical product was on sale in two shops with different initial prices and discounts. One of the two deals was better than the other and the consumers were asked to choose the best one and to describe which arithmetic operations they used to solve the problem; then they were asked to complete the numeracy scale (Lipkus et al., 2001). The choice procedures used by the consumers were classified as "complete decision approach" when all the arithmetic operations needed to solve the problem were computed, and as "partial decision approach" when only some operations were computed. A mediation model shows that higher numeracy is associated with use of the complete decision approach. In turn, this approach is positively associated with the quality of the purchase decision. Given that these findings highlight the importance of the decision processes, in a second (laboratory) experiment we used a supplementary method to study the type of information search used by the participants: eye-tracking. In this experiment the participants were presented with decision problems similar to those used in Experiment 1 and they completed the Lipkus numeracy scale and the Cognitive Reflection Test (CRT; Frederick, 2005). Participants with a high CRT score chose the best deal more frequently, and showed a more profound and detailed information search pattern compared to participants with a low CRT score. Overall, results indicate that higher levels of cognitive reflection and numeracy skills predict the use of a more thorough decision process (measured with two different techniques: retrospective verbal reports and eye movements). In both experiments the decision process is a crucial factor which greatly affects the quality of the purchase decision.
Children Learn Spurious Associations in Their Math Textbooks: Examples from Fraction Arithmetic
ERIC Educational Resources Information Center
Braithwaite, David W.; Siegler, Robert S.
2018-01-01
Fraction arithmetic is among the most important and difficult topics children encounter in elementary and middle school mathematics. Braithwaite, Pyke, and Siegler (2017) hypothesized that difficulties learning fraction arithmetic often reflect reliance on associative knowledge--rather than understanding of mathematical concepts and procedures--to…
Individual Differences in Children's Understanding of Inversion and Arithmetical Skill
ERIC Educational Resources Information Center
Gilmore, Camilla K.; Bryant, Peter
2006-01-01
Background and aims: In order to develop arithmetic expertise, children must understand arithmetic principles, such as the inverse relationship between addition and subtraction, in addition to learning calculation skills. We report two experiments that investigate children's understanding of the principle of inversion and the relationship between…
The Practice of Arithmetic in Liberian Schools.
ERIC Educational Resources Information Center
Brenner, Mary E.
1985-01-01
Describes a study of Liberian schools in which students of the Vai tribe are instructed in Western mathematical practices which differ from those of the students' home culture. Reports that the Vai children employed syncretic arithmetic practices, combining two distinct systems of arithmetic in a classroom environment that tacitly facilitated the…
From Arithmetic Sequences to Linear Equations
ERIC Educational Resources Information Center
Matsuura, Ryota; Harless, Patrick
2012-01-01
The first part of the article focuses on deriving the essential properties of arithmetic sequences by appealing to students' sense making and reasoning. The second part describes how to guide students to translate their knowledge of arithmetic sequences into an understanding of linear equations. Ryota Matsuura originally wrote these lessons for…
Baby Arithmetic: One Object Plus One Tone
ERIC Educational Resources Information Center
Kobayashi, Tessei; Hiraki, Kazuo; Mugitani, Ryoko; Hasegawa, Toshikazu
2004-01-01
Recent studies using a violation-of-expectation task suggest that preverbal infants are capable of recognizing basic arithmetical operations involving visual objects. There is still debate, however, over whether their performance is based on any expectation of the arithmetical operations, or on a general perceptual tendency to prefer visually…
Conceptual Knowledge of Decimal Arithmetic
ERIC Educational Resources Information Center
Lortie-Forgues, Hugues; Siegler, Robert S.
2016-01-01
In two studies (N's = 55 and 54), we examined a basic form of conceptual understanding of rational number arithmetic, the direction of effect of decimal arithmetic operations, at a level of detail useful for informing instruction. Middle school students were presented tasks examining knowledge of the direction of effects (e.g., "True or…
Children learn spurious associations in their math textbooks: Examples from fraction arithmetic.
Braithwaite, David W; Siegler, Robert S
2018-04-26
Fraction arithmetic is among the most important and difficult topics children encounter in elementary and middle school mathematics. Braithwaite, Pyke, and Siegler (2017) hypothesized that difficulties learning fraction arithmetic often reflect reliance on associative knowledge, rather than understanding of mathematical concepts and procedures, to guide choices of solution strategies. They further proposed that this associative knowledge reflects distributional characteristics of the fraction arithmetic problems children encounter. To test these hypotheses, we examined textbooks and middle school children in the United States (Experiments 1 and 2) and China (Experiment 3). We asked the children to predict which arithmetic operation would accompany a specified pair of operands, to generate operands to accompany a specified arithmetic operation, and to match operands and operations. In both countries, children's responses indicated that they associated operand pairs having equal denominators with addition and subtraction, and operand pairs having a whole number and a fraction with multiplication and division. The children's associations paralleled the textbook input in both countries, which was consistent with the hypothesis that children learned the associations from the practice problems. Differences in the effects of such associative knowledge on U.S. and Chinese children's fraction arithmetic performance are discussed, as are implications of these differences for educational practice. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Cipora, Krzysztof; Nuerk, Hans-Christoph
2013-01-01
The SNARC (spatial-numerical association of response codes) effect describes the finding that larger numbers are responded to faster with the right hand and smaller numbers with the left hand. It is held in the literature that arithmetically skilled and nonskilled adults differ in the SNARC. However, the respective data are descriptive, and the decisive tests are nonsignificant. Possible reasons for this nonsignificance could be that in previous studies (a) very small samples were used, (b) there were too few repetitions, producing too little power and, consequently, reliabilities too small to reach conventional significance levels for the descriptive skill differences in the SNARC, and (c) general mathematical ability was assessed by students' field of study, while individual arithmetic skills were not examined. We therefore used a much bigger sample, many more repetitions, and direct assessment of arithmetic skills to explore relations between the SNARC effect and arithmetic skills. Nevertheless, a difference in the SNARC effect between arithmetically skilled and nonskilled participants was not obtained. Bayesian analysis showed positive evidence for a true null effect, not just a power problem. Hence we conclude that the idea that arithmetically skilled and nonskilled participants generally differ in the SNARC effect is not warranted by our data.
Lightweight fuzzy processes in clinical computing.
Hurdle, J F
1997-09-01
In spite of advances in computing hardware, many hospitals still have a hard time finding extra capacity in their production clinical information systems to run artificial intelligence (AI) modules, for example: to support real-time drug-drug or drug-lab interactions; to track infection trends; to monitor compliance with case-specific clinical guidelines; or to monitor/control biomedical devices like an intelligent ventilator. Historically, adding AI functionality was not a major design concern when a typical clinical system was originally specified. AI technology is usually retrofitted 'on top of the old system' or 'run off line' in tandem with the old system to ensure that the routine workload still gets done (with as little impact from the AI side as possible). To compound the burden on system performance, most institutions have witnessed a long and increasing trend of intramural and extramural reporting (e.g. the collection of data for a quality-control report in microbiology, or a meta-analysis of a suite of coronary artery bypass graft techniques), and these place an ever-growing burden on the typical computer system's performance. We discuss a promising approach to adding extra AI processing power to a heavily used system, based on the notion of 'lightweight fuzzy processing' (LFP), that is, fuzzy modules designed from the outset to impose a small computational load. A formal model for a useful subclass of fuzzy systems is defined below and is used as a framework for the automated generation of LFPs. By seeking to reduce the arithmetic complexity of the model (a hand-crafted process) and the data complexity of the model (an automated process), we show how LFPs can be generated for three sample datasets of clinical relevance.
Changing computing paradigms towards power efficiency.
Klavík, Pavel; Malossi, A Cristiano I; Bekas, Costas; Curioni, Alessandro
2014-06-28
Power awareness is fast becoming immensely important in computing, ranging from the traditional high-performance computing applications to the new generation of data centric workloads. In this work, we describe our efforts towards a power-efficient computing paradigm that combines low- and high-precision arithmetic. We showcase our ideas for the widely used kernel of solving systems of linear equations that finds numerous applications in scientific and engineering disciplines as well as in large-scale data analytics, statistics and machine learning. Towards this goal, we developed tools for the seamless power profiling of applications at a fine-grain level. In addition, we verify here previous work on post-FLOPS/W metrics and show that these can shed much more light in the power/energy profile of important applications. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
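The low-/high-precision combination described in the abstract above can be illustrated with classical mixed-precision iterative refinement for a linear system: factor and solve cheaply in float32, then correct the answer using float64 residuals. This is a minimal sketch of the general technique, not the authors' implementation; the matrix, seed, and iteration count are made up.

```python
import numpy as np

def mixed_precision_solve(A, b, iters=5):
    # Solve in cheap float32, accumulate corrections in float64.
    A32 = A.astype(np.float32)
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(iters):
        r = b - A @ x                                   # residual in float64
        dx = np.linalg.solve(A32, r.astype(np.float32)) # cheap correction solve
        x += dx.astype(np.float64)                      # high-precision update
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50)) + 50 * np.eye(50)     # well-conditioned system
b = rng.standard_normal(50)
x = mixed_precision_solve(A, b)
print(np.linalg.norm(A @ x - b))                        # residual near float64 round-off
```

The energy argument is that the float32 solves dominate the arithmetic cost while the float64 work is only a matrix-vector product per iteration.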
Optimal space communications techniques. [all digital phase locked loop for FM demodulation
NASA Technical Reports Server (NTRS)
Schilling, D. L.
1973-01-01
The design, development, and analysis of a digital phase-locked loop (DPLL) for FM demodulation and threshold extension are reported. One of the features of the developed DPLL is its synchronous, real-time operation. The sampling frequency is constant and all the required arithmetic and logic operations are performed within one sampling period, generating an output sequence which is converted to analog form and filtered. An equation relating the sampling frequency to the carrier frequency must be satisfied to guarantee proper DPLL operation. The synchronous operation enables time-shared operation of one DPLL to demodulate several FM signals simultaneously. In order to obtain information about the DPLL performance at low input signal-to-noise ratios, a model of an input noise spike was introduced, and the DPLL equation was solved using a digital computer. The spike model was successful in finding a second-order DPLL which yielded a 5 dB threshold extension beyond that of a first-order DPLL.
Doubly stochastic Poisson processes in artificial neural learning.
Card, H C
1998-01-01
This paper investigates neuron activation statistics in artificial neural networks employing stochastic arithmetic. It is shown that a doubly stochastic Poisson process is an appropriate model for the signals in these circuits.
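Stochastic-arithmetic circuits of the kind the abstract above refers to typically encode a value as the mean of a random bitstream, so that a single AND gate multiplies two values. The following is a minimal illustrative sketch of that encoding, not the paper's doubly stochastic Poisson-process model; the stream length and values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # stream length; the estimator's variance shrinks roughly as 1/N

def encode(p, n=N):
    # Bernoulli bitstream whose mean encodes a value p in [0, 1]
    return rng.random(n) < p

a, b = 0.6, 0.5
product_stream = encode(a) & encode(b)  # AND of independent streams multiplies the values
print(product_stream.mean())            # close to a * b = 0.30
```

The Poisson-process analysis in the paper concerns the statistics of such streams after they pass through the network's nonlinear elements.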
Rickard, Timothy C; Bajic, Daniel
2006-07-01
The applicability of the identical elements (IE) model of arithmetic fact retrieval (T. C. Rickard, A. F. Healy, & L. E. Bourne, 1994) to cued recall from episodic (image and sentence) memory was explored in 3 transfer experiments. In agreement with results from arithmetic, speedup following even minimal practice recalling a missing word from an episodically bound word triplet did not transfer positively to other cued recall items involving the same triplet. The shape of the learning curve further supported a shift from episode-based to IE-based recall, extending some models of skill learning to cued recall practice. In contrast with previous findings, these results indicate that a form of representation that is independent of the original episodic memory underlies cued-recall performance following minimal practice. Copyright 2006 APA, all rights reserved.
A structural equation modeling analysis of students' understanding in basic mathematics
NASA Astrophysics Data System (ADS)
Oktavia, Rini; Arif, Salmawaty; Ferdhiana, Ridha; Yuni, Syarifah Meurah; Ihsan, Mahyus
2017-11-01
This research aims, in general, to identify incoming students' understanding and misconceptions of several basic concepts in mathematics. The participants of this study were the 2015 incoming students of the Faculty of Mathematics and Natural Sciences of Syiah Kuala University, Indonesia. Using an instrument that was developed based on anecdotal and empirical evidence of students' misconceptions, a survey involving 325 participants was administered, and several quantitative and qualitative analyses of the survey data were conducted. In this article, we discuss the confirmatory factor analysis, using Structural Equation Modeling (SEM), of factors that determine the new students' overall understanding of basic mathematics. The results showed that students' understanding of algebra, arithmetic, and geometry were significant predictors of their overall understanding of basic mathematics. This result supports the view that arithmetic and algebra are not the only predictors of students' understanding of basic mathematics.
An interval model updating strategy using interval response surface models
NASA Astrophysics Data System (ADS)
Fang, Sheng-En; Zhang, Qiu-Hu; Ren, Wei-Xin
2015-08-01
Stochastic model updating provides an effective way of handling uncertainties existing in real-world structures. In general, probabilistic theories, fuzzy mathematics or interval analyses are involved in the solution of inverse problems. In practice, however, probability distributions or membership functions of structural parameters are often unavailable due to insufficient information about a structure. In this situation an interval model updating procedure shows its superiority in terms of problem simplification, since only the upper and lower bounds of parameters and responses are sought. To this end, this study develops a new concept of interval response surface models for the purpose of efficiently implementing the interval model updating procedure. The frequent interval overestimation due to the use of interval arithmetic can be largely avoided, leading to accurate estimation of parameter intervals. Meanwhile, the establishment of an interval inverse problem is greatly simplified, accompanied by a saving of computational costs. By this means a relatively simple and cost-efficient interval updating process can be achieved. Lastly, the feasibility and reliability of the developed method have been verified against a numerical mass-spring system and also against a set of experimentally tested steel plates.
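The interval overestimation mentioned in the abstract above is the classic dependency problem of naive interval arithmetic: repeated occurrences of the same variable are treated as independent, so computed bounds widen. A toy interval type (an illustration, not the authors' response-surface method) makes the effect visible.

```python
class Interval:
    # Toy closed-interval type, just enough to show the dependency problem.
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)
    def __mul__(self, other):
        ps = [self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi]
        return Interval(min(ps), max(ps))
    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

x = Interval(1, 2)
# Naive interval arithmetic treats the two occurrences of x as independent,
# so x - x widens to [-1, 1] instead of the true range [0, 0].
print(x - x)  # [-1, 1]
```

Response-surface formulations reduce this widening by collapsing each parameter to a single occurrence in the evaluated expression.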
Simplified methods for computing total sediment discharge with the modified Einstein procedure
Colby, Bruce R.; Hubbell, David Wellington
1961-01-01
A procedure was presented in 1950 by H. A. Einstein for computing the total discharge of sediment particles of sizes that are present in appreciable quantities in the stream bed. This procedure was modified by the U.S. Geological Survey and adapted to computing the total sediment discharge of a stream on the basis of samples of bed sediment, depth-integrated samples of suspended sediment, streamflow measurements, and water temperature. This paper gives simplified methods for computing total sediment discharge by the modified Einstein procedure. Each of four nomographs appreciably simplifies a major step in the computations. Within the stated limitations, use of the nomographs introduces much less error than is present in either the basic data or the theories on which the computations of total sediment discharge are based. The results are nearly as accurate mathematically as those that could be obtained from the longer and more complex arithmetic and algebraic computations of the Einstein procedure.
Individual differences in children's understanding of inversion and arithmetical skill.
Gilmore, Camilla K; Bryant, Peter
2006-06-01
Background and aims: In order to develop arithmetic expertise, children must understand arithmetic principles, such as the inverse relationship between addition and subtraction, in addition to learning calculation skills. We report two experiments that investigate children's understanding of the principle of inversion and the relationship between their conceptual understanding and arithmetical skills. A group of 127 children from primary schools took part in the study. The children were from 2 age groups (6-7 and 8-9 years). Children's accuracy on inverse and control problems in a variety of presentation formats and in canonical and non-canonical forms was measured. Tests of general arithmetic ability were also administered. Children consistently performed better on inverse than control problems, which indicates that they could make use of the inverse principle. Presentation format affected performance: picture presentation allowed children to apply their conceptual understanding flexibly regardless of the problem type, while word problems restricted their ability to use their conceptual knowledge. Cluster analyses revealed three subgroups with different profiles of conceptual understanding and arithmetical skill. Children in the 'high ability' and 'low ability' groups showed conceptual understanding that was in line with their arithmetical skill, whilst a third group of children had more advanced conceptual understanding than arithmetical skill. The three subgroups may represent different points along a single developmental path or distinct developmental paths. The discovery of these three groups has important consequences for education: it demonstrates the importance of considering the pattern of individual children's conceptual understanding and problem-solving skills.
A Substituting Meaning for the Equals Sign in Arithmetic Notating Tasks
ERIC Educational Resources Information Center
Jones, Ian; Pratt, Dave
2012-01-01
Three studies explore arithmetic tasks that support both substitutive and basic relational meanings for the equals sign. The duality of meanings enabled children to engage meaningfully and purposefully with the structural properties of arithmetic statements in novel ways. Some, but not all, children were successful at the adapted task and were…
Children's Acquisition of Arithmetic Principles: The Role of Experience
ERIC Educational Resources Information Center
Prather, Richard; Alibali, Martha W.
2011-01-01
The current study investigated how young learners' experiences with arithmetic equations can lead to learning of an arithmetic principle. The focus was elementary school children's acquisition of the Relation to Operands principle for subtraction (i.e., for natural numbers, the difference must be less than the minuend). In Experiment 1, children…
ERIC Educational Resources Information Center
Koontz, Kristine L.; Berch, Daniel B.
1996-01-01
Children with arithmetic learning disabilities (n=16) and normally achieving controls (n=16) in grades 3-5 were administered a battery of computerized tasks. Memory spans for both letters and digits were found to be smaller among the arithmetic learning disabled children. Implications for teaching are discussed. (Author/CMS)
Arithmetic Abilities in Children with Developmental Dyslexia: Performance on French ZAREKI-R Test
ERIC Educational Resources Information Center
De Clercq-Quaegebeur, Maryse; Casalis, Séverine; Vilette, Bruno; Lemaitre, Marie-Pierre; Vallée, Louis
2018-01-01
A high comorbidity between reading and arithmetic disabilities has already been reported. The present study aims at identifying more precisely patterns of arithmetic performance in children with developmental dyslexia, defined with severe and specific criteria. By means of a standardized test of achievement in mathematics ("Calculation and…
How Is Phonological Processing Related to Individual Differences in Children's Arithmetic Skills?
ERIC Educational Resources Information Center
De Smedt, Bert; Taylor, Jessica; Archibald, Lisa; Ansari, Daniel
2010-01-01
While there is evidence for an association between the development of reading and arithmetic, the precise locus of this relationship remains to be determined. Findings from cognitive neuroscience research that point to shared neural correlates for phonological processing and arithmetic as well as recent behavioral evidence led to the present…
Arithmetic Performance of Children with Cerebral Palsy: The Influence of Cognitive and Motor Factors
ERIC Educational Resources Information Center
van Rooijen, Maaike; Verhoeven, Ludo; Smits, Dirk-Wouter; Ketelaar, Marjolijn; Becher, Jules G.; Steenbergen, Bert
2012-01-01
Children diagnosed with cerebral palsy (CP) often show difficulties in arithmetic compared to their typically developing peers. The present study explores whether cognitive and motor variables are related to arithmetic performance of a large group of primary school children with CP. More specifically, the relative influence of non-verbal…
Cognitive Arithmetic: Evidence for the Development of Automaticity.
ERIC Educational Resources Information Center
LeFevre, Jo-Anne; Bisanz, Jeffrey
To determine whether children's knowledge of arithmetic facts becomes increasingly "automatic" with age, 7-year-olds, 11-year-olds, and adults were given a number-matching task for which mental arithmetic should have been irrelevant. Specifically, students were required to verify the presence of a probe number in a previously presented pair (e.g.,…
ERIC Educational Resources Information Center
McNeil, Nicole M.; Rittle-Johnson, Bethany; Hattikudur, Shanta; Petersen, Lori A.
2010-01-01
This study examined if solving arithmetic problems hinders undergraduates' accuracy on algebra problems. The hypothesis was that solving arithmetic problems would hinder accuracy because it activates an operational view of equations, even in educated adults who have years of experience with algebra. In three experiments, undergraduates (N = 184)…
Fostering Formal Commutativity Knowledge with Approximate Arithmetic
Hansen, Sonja Maria; Haider, Hilde; Eichler, Alexandra; Godau, Claudia; Frensch, Peter A.; Gaschler, Robert
2015-01-01
How can we enhance the understanding of abstract mathematical principles in elementary school? Several studies have found that nonsymbolic estimation can foster subsequent exact number processing and simple arithmetic. Taking the commutativity principle as a test case, we investigated whether the approximate calculation of symbolic commutative quantities can also alter access to procedural and conceptual knowledge of a more abstract arithmetic principle. Experiment 1 tested first graders who had not yet been instructed about commutativity in school. Approximate calculation with symbolic quantities positively influenced the use of commutativity-based shortcuts in formal arithmetic. We replicated this finding with older first graders (Experiment 2) and third graders (Experiment 3). Despite the positive effect of approximation on the spontaneous application of commutativity-based shortcuts in arithmetic problems, we found no comparable impact on the application of conceptual knowledge of the commutativity principle. Overall, our results show that the usage of a specific arithmetic principle can benefit from approximation. However, the findings also suggest that the correct use of certain procedures does not always imply conceptual understanding. Rather, conceptual understanding of commutativity seems to lag behind procedural proficiency during elementary school. PMID:26560311
Raman, M R Gauthama; Somu, Nivethitha; Kirthivasan, Kannan; Sriram, V S Shankar
2017-08-01
Over the past few decades, the design of an intelligent Intrusion Detection System (IDS) remains an open challenge to the research community. Continuous efforts by the researchers have resulted in the development of several learning models based on Artificial Neural Network (ANN) to improve the performance of the IDSs. However, there exists a tradeoff with respect to the stability of ANN architecture and the detection rate for less frequent attacks. This paper presents a novel approach based on Helly property of Hypergraph and Arithmetic Residue-based Probabilistic Neural Network (HG AR-PNN) to address the classification problem in IDS. The Helly property of Hypergraph was exploited for the identification of the optimal feature subset and the arithmetic residue of the optimal feature subset was used to train the PNN. The performance of HG AR-PNN was evaluated using KDD CUP 1999 intrusion dataset. Experimental results prove the dominance of HG AR-PNN classifier over the existing classifiers with respect to the stability and improved detection rate for less frequent attacks. Copyright © 2017 Elsevier Ltd. All rights reserved.
Bug Distribution and Statistical Pattern Classification.
ERIC Educational Resources Information Center
Tatsuoka, Kikumi K.; Tatsuoka, Maurice M.
1987-01-01
The rule space model permits measurement of cognitive skill acquisition and error diagnosis. Further discussion introduces Bayesian hypothesis testing and bug distribution. An illustration involves an artificial intelligence approach to testing fractions and arithmetic. (Author/GDC)
ERIC Educational Resources Information Center
Pape, Stephen J.
2004-01-01
Many children read mathematics word problems and directly translate them to arithmetic operations. More sophisticated problem solvers transform word problems into object-based or mental models. Subsequent solutions are often qualitatively different because these models differentially support cognitive processing. Based on a conception of problem…
Chakraverty, S; Sahoo, B K; Rao, T D; Karunakar, P; Sapra, B K
2018-02-01
Modelling radon transport in the earth crust is a useful tool to investigate the changes in the geo-physical processes prior to earthquake event. Radon transport is modeled generally through the deterministic advection-diffusion equation. However, in order to determine the magnitudes of parameters governing these processes from experimental measurements, it is necessary to investigate the role of uncertainties in these parameters. Present paper investigates this aspect by combining the concept of interval uncertainties in transport parameters such as soil diffusivity, advection velocity etc, occurring in the radon transport equation as applied to soil matrix. The predictions made with interval arithmetic have been compared and discussed with the results of classical deterministic model. The practical applicability of the model is demonstrated through a case study involving radon flux measurements at the soil surface with an accumulator deployed in steady-state mode. It is possible to detect the presence of very low levels of advection processes by applying uncertainty bounds on the variations in the observed concentration data in the accumulator. The results are further discussed. Copyright © 2017 Elsevier Ltd. All rights reserved.
Kundeti, Vamsi; Rajasekaran, Sanguthevar
2012-06-01
Efficient tile sets for self assembling rectilinear shapes is of critical importance in algorithmic self assembly. A lower bound on the tile complexity of any deterministic self assembly system for an n × n square is [Formula: see text] (inferred from the Kolmogrov complexity). Deterministic self assembly systems with an optimal tile complexity have been designed for squares and related shapes in the past. However designing [Formula: see text] unique tiles specific to a shape is still an intensive task in the laboratory. On the other hand copies of a tile can be made rapidly using PCR (polymerase chain reaction) experiments. This led to the study of self assembly on tile concentration programming models. We present two major results in this paper on the concentration programming model. First we show how to self assemble rectangles with a fixed aspect ratio ( α:β ), with high probability, using Θ( α + β ) tiles. This result is much stronger than the existing results by Kao et al. (Randomized self-assembly for approximate shapes, LNCS, vol 5125. Springer, Heidelberg, 2008) and Doty (Randomized self-assembly for exact shapes. In: proceedings of the 50th annual IEEE symposium on foundations of computer science (FOCS), IEEE, Atlanta. pp 85-94, 2009)-which can only self assembly squares and rely on tiles which perform binary arithmetic. On the other hand, our result is based on a technique called staircase sampling . This technique eliminates the need for sub-tiles which perform binary arithmetic, reduces the constant in the asymptotic bound, and eliminates the need for approximate frames (Kao et al. Randomized self-assembly for approximate shapes, LNCS, vol 5125. Springer, Heidelberg, 2008). Our second result applies staircase sampling on the equimolar concentration programming model (The tile complexity of linear assemblies. 
In: proceedings of the 36th international colloquium on automata, languages and programming: Part I, ICALP '09, Springer-Verlag, pp 235-253, 2009), to self-assemble rectangles (of fixed aspect ratio) with high probability. The tile complexity of our algorithm is Θ(log(n)) and is optimal on the probabilistic tile assembly model (PTAM), n being an upper bound on the dimensions of the rectangle.
Kos, Bor; Valič, Blaž; Kotnik, Tadej; Gajšek, Peter
2012-10-07
Induction heating equipment is a source of strong and nonhomogeneous magnetic fields, which can exceed occupational reference levels. We investigated a case of an induction tempering tunnel furnace. Measurements of the emitted magnetic flux density (B) were performed during its operation and used to validate a numerical model of the furnace. This model was used to compute the values of B and the induced in situ electric field (E) for 15 different body positions relative to the source. For each body position, the computed B values were used to determine their maximum and average values, using six spatial averaging schemes (9-285 averaging points) and two averaging algorithms (arithmetic mean and quadratic mean). Maximum and average B values were compared to the ICNIRP reference level, and E values to the ICNIRP basic restriction. Our results show that in nonhomogeneous fields, the maximum B is an overly conservative predictor of overexposure, as it yields many false positives. The average B yielded fewer false positives, but as the number of averaging points increased, false negatives emerged. The most reliable averaging schemes were obtained for averaging over the torso with quadratic averaging, with no false negatives even for the maximum number of averaging points investigated.
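The difference between the two averaging algorithms compared above can be sketched numerically. The sample values below are hypothetical, chosen only to mimic a strongly nonhomogeneous field with one hot spot among the averaging points.

```python
import math

def arithmetic_mean(values):
    return sum(values) / len(values)

def quadratic_mean(values):
    # Root-mean-square: weights large local values more heavily
    # than the arithmetic mean, but less than taking the maximum.
    return math.sqrt(sum(v * v for v in values) / len(values))

# Hypothetical B samples (arbitrary units) over torso averaging points:
# one hot spot near the source, many lower readings farther away
b_samples = [120.0, 15.0, 12.0, 10.0, 8.0, 8.0, 7.0, 6.0, 5.0]

print(max(b_samples))             # maximum: most conservative predictor
print(quadratic_mean(b_samples))  # lies between the mean and the maximum
print(arithmetic_mean(b_samples))
```

Because the quadratic mean sits between the arithmetic mean and the spatial maximum, it is a plausible compromise between the false positives of the maximum and the false negatives that emerge as arithmetic averaging dilutes a local hot spot.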
Miller, Justin B; Axelrod, Bradley N; Schutte, Christian
2012-01-01
The recent release of the Wechsler Memory Scale Fourth Edition contains many improvements from a theoretical and administration perspective, including demographic corrections using the Advanced Clinical Solutions. Although the administration time has been reduced from previous versions, a shortened version may be desirable in certain situations given practical time limitations in clinical practice. The current study evaluated two- and three-subtest estimations of demographically corrected Immediate and Delayed Memory index scores using both simple arithmetic prorating and regression models. All estimated values were significantly associated with observed index scores. Use of Lin's Concordance Correlation Coefficient as a measure of agreement showed a high degree of precision and virtually zero bias in the models, although the regression models showed a stronger association than prorated models. Regression-based models proved to be more accurate than prorated estimates with less dispersion around observed values, particularly when using three subtest regression models. Overall, the present research shows strong support for estimating demographically corrected index scores on the WMS-IV in clinical practice with an adequate performance using arithmetically prorated models and a stronger performance using regression models to predict index scores.
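Simple arithmetic prorating, the weaker of the two estimation approaches above, can be sketched generically. This is a hypothetical illustration only: the subtest counts and scores are invented, and the actual WMS-IV norming tables are proprietary and not reproduced here.

```python
def prorate(subtest_scaled_scores, full_subtest_count):
    """Arithmetic prorating: scale the sum of the administered subtests'
    scaled scores up to the full battery length, then round."""
    k = len(subtest_scaled_scores)
    return round(sum(subtest_scaled_scores) * full_subtest_count / k)

# e.g. two administered subtests standing in for a three-subtest sum
print(prorate([10, 12], 3))
```

A regression-based estimate would instead fit coefficients to observed index scores (index ≈ b0 + b1·s1 + b2·s2), which is why it can show less dispersion around the observed values than this fixed proportional scaling.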
VLSI architectures for computing multiplications and inverses in GF(2^m)
NASA Technical Reports Server (NTRS)
Wang, C. C.; Truong, T. K.; Shao, H. M.; Deutsch, L. J.; Omura, J. K.
1985-01-01
Finite field arithmetic logic is central in the implementation of Reed-Solomon coders and in some cryptographic algorithms. There is a need for good multiplication and inversion algorithms that are easily realized on VLSI chips. Massey and Omura recently developed a new multiplication algorithm for Galois fields based on a normal basis representation. A pipeline structure is developed to realize the Massey-Omura multiplier in the finite field GF(2^m). With the simple squaring property of the normal-basis representation used together with this multiplier, a pipeline architecture is also developed for computing inverse elements in GF(2^m). The designs developed for the Massey-Omura multiplier and the computation of inverse elements are regular, simple, expandable and, therefore, naturally suitable for VLSI implementation.
VLSI architectures for computing multiplications and inverses in GF(2^m)
NASA Technical Reports Server (NTRS)
Wang, C. C.; Truong, T. K.; Shao, H. M.; Deutsch, L. J.; Omura, J. K.; Reed, I. S.
1983-01-01
Finite field arithmetic logic is central in the implementation of Reed-Solomon coders and in some cryptographic algorithms. There is a need for good multiplication and inversion algorithms that are easily realized on VLSI chips. Massey and Omura recently developed a new multiplication algorithm for Galois fields based on a normal basis representation. A pipeline structure is developed to realize the Massey-Omura multiplier in the finite field GF(2^m). With the simple squaring property of the normal-basis representation used together with this multiplier, a pipeline architecture is also developed for computing inverse elements in GF(2^m). The designs developed for the Massey-Omura multiplier and the computation of inverse elements are regular, simple, expandable and, therefore, naturally suitable for VLSI implementation.
Computationally efficient method for optical simulation of solar cells and their applications
NASA Astrophysics Data System (ADS)
Semenikhin, I.; Zanuccoli, M.; Fiegna, C.; Vyurkov, V.; Sangiorgi, E.
2013-01-01
This paper presents two novel implementations of the Differential method to solve the Maxwell equations in nanostructured optoelectronic solid state devices. The first proposed implementation is based on an improved and computationally efficient T-matrix formulation that adopts multiple-precision arithmetic to tackle the numerical instability that arises due to evanescent modes. The second implementation adopts an iterative approach that achieves low computational complexity, O(N log N) or better. The proposed algorithms can handle structures with arbitrary spatial variation of the permittivity. The developed two-dimensional numerical simulator is applied to analyze the dependence of the absorption characteristics of a thin silicon slab on the morphology of the front interface and on the angle of incidence of the radiation with respect to the device surface.
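The kind of instability that multiple-precision arithmetic addresses can be shown with a toy example (this is not the paper's T-matrix code). Evanescent modes contribute terms like exp(+kd) and exp(-kd) that differ by many orders of magnitude; combining them in double precision destroys the small term by catastrophic cancellation, while extended precision recovers it.

```python
import math
from decimal import Decimal, getcontext

kd = 40.0  # a large evanescent argument: exp(+kd)/exp(-kd) ~ 10^35

# Double precision: cosh(kd) - sinh(kd) equals exp(-kd) ~ 4.25e-18 exactly,
# but both terms are ~1.2e17, so the true difference lies far below one ulp
naive = math.cosh(kd) - math.sinh(kd)

# The same expression evaluated with 50 significant digits survives
# the cancellation and recovers exp(-kd)
getcontext().prec = 50
e = Decimal(40).exp()
precise = (e + 1 / e) / 2 - (e - 1 / e) / 2

print(naive)    # garbage from catastrophic cancellation
print(precise)  # ~4.248e-18, correct
```

Reformulating the recursion (as improved T-matrix and S-matrix schemes do) avoids forming such differences at all; raising the working precision is the brute-force alternative the first implementation exploits.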
VLSI architectures for computing multiplications and inverses in GF(2^m).
Wang, C C; Truong, T K; Shao, H M; Deutsch, L J; Omura, J K; Reed, I S
1985-08-01
Finite field arithmetic logic is central in the implementation of Reed-Solomon coders and in some cryptographic algorithms. There is a need for good multiplication and inversion algorithms that can be easily realized on VLSI chips. Massey and Omura recently developed a new multiplication algorithm for Galois fields based on a normal basis representation. In this paper, a pipeline structure is developed to realize the Massey-Omura multiplier in the finite field GF(2^m). With the simple squaring property of the normal basis representation used together with this multiplier, a pipeline architecture is developed for computing inverse elements in GF(2^m). The designs developed for the Massey-Omura multiplier and the computation of inverse elements are regular, simple, expandable, and therefore, naturally suitable for VLSI implementation.
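The principle behind these inversion architectures is that an inverse in GF(2^m) needs only squarings and multiplications, via Fermat's identity a^(-1) = a^(2^m - 2). The sketch below illustrates this in a polynomial basis for GF(2^4) with the irreducible polynomial x^4 + x + 1; both choices are illustrative and differ from the paper's normal-basis hardware, where squaring is a mere cyclic shift.

```python
M = 4
POLY = 0b10011  # x^4 + x + 1, irreducible over GF(2)

def gf_mul(a, b):
    """Multiply two field elements: carry-less multiply, reducing mod POLY."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << M):   # degree reached M: reduce
            a ^= POLY
    return r

def gf_inv(a):
    """Inverse via a^(2^M - 2), using only squarings and multiplications
    (square-and-multiply over the fixed exponent 2^M - 2)."""
    result, base, e = 1, a, (1 << M) - 2
    while e:
        if e & 1:
            result = gf_mul(result, base)
        base = gf_mul(base, base)  # squaring step
        e >>= 1
    return result

print(gf_inv(0b0010))  # inverse of the element x in GF(2^4)
```

In a normal basis the squaring steps cost almost nothing in hardware, which is why pairing the squaring property with a Massey-Omura multiplier yields a compact pipelined inverter.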
ERIC Educational Resources Information Center
Berg, Derek H.
2008-01-01
The cognitive underpinnings of arithmetic calculation in children are noted to involve working memory; however, cognitive processes related to arithmetic calculation and working memory suggest that this relationship is more complex than stated previously. The purpose of this investigation was to examine the relative contributions of processing…
Arithmetic Achievement in Children with Cerebral Palsy or Spina Bifida Meningomyelocele
ERIC Educational Resources Information Center
Jenks, Kathleen M.; van Lieshout, Ernest C. D. M.; de Moor, Jan
2009-01-01
The aim of this study was to establish whether children with a physical disability resulting from central nervous system disorders (CNSd) show a level of arithmetic achievement lower than that of non-CNSd children and whether this is related to poor automaticity of number facts or reduced arithmetic instruction time. Twenty-two children with CNSd…
The Association between Arithmetic and Reading Performance in School: A Meta-Analytic Study
ERIC Educational Resources Information Center
Singer, Vivian; Strasser, Kathernie
2017-01-01
Many studies of school achievement find a significant association between reading and arithmetic achievement. The magnitude of the association varies widely across the studies, but the sources of this variation have not been identified. The purpose of this paper is to examine the magnitude and determinants of the relation between arithmetic and…
24 CFR Appendix E to Part 3500 - Arithmetic Steps
Code of Federal Regulations, 2010 CFR
2010-04-01
... 24 Housing and Urban Development 5 2010-04-01 2010-04-01 false Arithmetic Steps E Appendix E to...—Arithmetic Steps I. Example Illustrating Aggregate Analysis: ASSUMPTIONS: Disbursements: $360 for school... Payment: July 1 Step 1—Initial Trial Balance Aggregate pmt disb bal Jun 0 0 0 Jul 130 500 −370 Aug 130 0...
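The trial-balance arithmetic in the (truncated) Appendix E example can be sketched as a running balance: each month the escrow balance gains the payment and loses that month's disbursement. The figures below are the ones visible above ($130 monthly payment, a $500 school-tax disbursement in July); the later rows of the regulation's table are truncated and not reconstructed here.

```python
def trial_balance(start, months):
    """Aggregate-analysis trial balance: (month, payment, disbursement) rows
    produce a running balance bal += pmt - disb."""
    bal = start
    rows = []
    for name, pmt, disb in months:
        bal += pmt - disb
        rows.append((name, pmt, disb, bal))
    return rows

for row in trial_balance(0, [("Jun", 0, 0), ("Jul", 130, 500), ("Aug", 130, 0)]):
    print(row)  # e.g. ("Jul", 130, 500, -370), matching the table above
```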
ERIC Educational Resources Information Center
Jenks, Kathleen M.; de Moor, Jan; van Lieshout, Ernest C. D. M.
2009-01-01
Background: Although it is believed that children with cerebral palsy are at high risk for learning difficulties and arithmetic difficulties in particular, few studies have investigated this issue. Methods: Arithmetic ability was longitudinally assessed in children with cerebral palsy in special (n = 41) and mainstream education (n = 16) and…
ERIC Educational Resources Information Center
Berg, Derek H.
2008-01-01
An age-matched/achievement-matched design was utilized to examine the cognitive functioning of children with severe arithmetic difficulties. A battery of cognitive tasks was administered to three groups of elementary aged children: 20 children with severe arithmetic difficulties (SAD), 20 children matched in age (CAM) to the children with SAD, and…
ERIC Educational Resources Information Center
Yang, Ma Tzu-Lin; Cobb, Paul
1995-01-01
Compares mathematics achievement of children in Taiwan and the United States by analyzing the arithmetical learning contexts of each. Interviews with parents and teachers identify cultural beliefs about learning arithmetic; interviews with students identify level of sophistication of arithmetical concepts. Found greater understanding by Chinese…
The Model Method: Singapore Children's Tool for Representing and Solving Algebraic Word Problems
ERIC Educational Resources Information Center
Ng, Swee Fong; Lee, Kerry
2009-01-01
Solving arithmetic and algebraic word problems is a key component of the Singapore elementary mathematics curriculum. One heuristic taught, the model method, involves drawing a diagram to represent key information in the problem. We describe the model method and a three-phase theoretical framework supporting its use. We conducted 2 studies to…
Changes of brain response induced by simulated weightlessness
NASA Astrophysics Data System (ADS)
Wei, Jinhe; Yan, Gongdong; Guan, Zhiqiang
Changes in the characteristics of brain responses were studied during 15° head-down tilt (HDT) compared with 45° head-up tilt (HUT). The brain responses evaluated included changes in the EEG power spectra at rest and during mental arithmetic, and the event-related potentials (ERPs) of somatosensory, selective attention, and mental arithmetic activities. The prominent feature of the change in brain response during HDT was that brain function was inhibited to some extent. This inhibition included the near disappearance during HDT arithmetic of the significant increment of "40 Hz" activity seen during HUT arithmetic; a positive-potential effect induced by HDT, present in all kinds of ERPs measured; and elongation of the slow negative wave reflecting mental arithmetic and memory processes. These data suggest that brain function is affected profoundly by simulated weightlessness and, therefore, that changes in brain function during space flight should be studied systematically.
Jenks, Kathleen M; van Lieshout, Ernest C D M; de Moor, Jan
2009-05-01
Arithmetic ability was tested in children with cerebral palsy without severe intellectual impairment (verbal IQ ≥ 70) attending special (n = 41) or mainstream education (n = 16) as well as control children in mainstream education (n = 16) throughout first and second grade. Children with cerebral palsy in special education did not appear to have fully automatized arithmetic facts by the end of second grade. Their lower accuracy and consistently slower (verbal) response times raise important concerns for their future arithmetic development. Differences in arithmetic performance between children with cerebral palsy in special or mainstream education were not related to localization of cerebral palsy or to gross motor impairment. Rather, lower accuracy and slower verbal responses were related to differences in nonverbal intelligence and the presence of epilepsy. Left-hand impairment was related to slower verbal responses but not to lower accuracy.
Vanbinst, Kiran; Ansari, Daniel; Ghesquière, Pol; De Smedt, Bert
2016-01-01
In this article, we tested, using a 1-year longitudinal design, whether symbolic numerical magnitude processing (children's numerical representation of Arabic digits) is as important to arithmetic as phonological awareness is to reading. Children completed measures of symbolic comparison, phonological awareness, arithmetic, and reading at the start of third grade, and the latter two were retested at the start of fourth grade. Cross-sectional and longitudinal correlations indicated that symbolic comparison was a powerful domain-specific predictor of arithmetic and that phonological awareness was a unique predictor of reading. Crucially, the strength of these independent associations was not significantly different. This indicates that symbolic numerical magnitude processing is as important to arithmetic development as phonological awareness is to reading and suggests that symbolic numerical magnitude processing is a good candidate for screening children at risk for developing mathematical difficulties. PMID:26942935
Naive Probability: Model-Based Estimates of Unique Events.
Khemlani, Sangeet S; Lotstein, Max; Johnson-Laird, Philip N
2015-08-01
We describe a dual-process theory of how individuals estimate the probabilities of unique events, such as Hillary Clinton becoming U.S. President. It postulates that uncertainty is a guide to improbability. In its computer implementation, an intuitive system 1 simulates evidence in mental models and forms analog non-numerical representations of the magnitude of degrees of belief. This system has minimal computational power and combines evidence using a small repertoire of primitive operations. It resolves the uncertainty of divergent evidence for single events, for conjunctions of events, and for inclusive disjunctions of events, by taking a primitive average of non-numerical probabilities. It computes conditional probabilities in a tractable way, treating the given event as evidence that may be relevant to the probability of the dependent event. A deliberative system 2 maps the resulting representations into numerical probabilities. With access to working memory, it carries out arithmetical operations in combining numerical estimates. Experiments corroborated the theory's predictions. Participants concurred in estimates of real possibilities. They violated the complete joint probability distribution in the predicted ways, when they made estimates about conjunctions: P(A), P(B), P(A and B), disjunctions: P(A), P(B), P(A or B or both), and conditional probabilities P(A), P(B), P(B|A). They were faster to estimate the probabilities of compound propositions when they had already estimated the probabilities of each of their components. We discuss the implications of these results for theories of probabilistic reasoning. © 2014 Cognitive Science Society, Inc.
Lamb, Berton Lee; Burkardt, Nina
2008-01-01
When Linda Pilkey-Jarvis and Orrin Pilkey state in their article, "Useless Arithmetic," that "mathematical models are simplified, generalized representations of a process or system," they probably do not mean to imply that these models are simple. Rather, the models are simpler than nature, and that is the heart of the problem with predictive models. We have had a long professional association with the developers and users of one of these simplifications of nature in the form of a mathematical model known as Physical Habitat Simulation (PHABSIM), which is part of the Instream Flow Incremental Methodology (IFIM). The IFIM is a suite of techniques, including PHABSIM, that allows the analyst to incorporate hydrology, hydraulics, habitat, water quality, stream temperature, and other variables into a tradeoff analysis that decision makers can use to design a flow regime to meet management objectives (Stalnaker et al. 1995). Although we are not the developers of the IFIM, we have worked with those who did design it, and we have tried to understand how the IFIM and PHABSIM are actually used in decision making (King, Burkardt, and Clark 2006; Lamb 1989).
Using a Cray Y-MP as an array processor for a RISC Workstation
NASA Technical Reports Server (NTRS)
Lamaster, Hugh; Rogallo, Sarah J.
1992-01-01
As microprocessors increase in power, the economics of centralized computing has changed dramatically. At the beginning of the 1980's, mainframes and supercomputers were often considered to be cost-effective machines for scalar computing. Today, microprocessor-based RISC (reduced-instruction-set computer) systems have displaced many uses of mainframes and supercomputers. Supercomputers are still cost competitive when processing jobs that require both large memory size and high memory bandwidth. One such application is array processing. Certain numerical operations are appropriate for a Remote Procedure Call (RPC)-based environment. Matrix multiplication is an example of an operation that can have a sufficient number of arithmetic operations to amortize the cost of an RPC call. An experiment is described which demonstrates that matrix multiplication can be executed remotely on a large system faster than it can be executed on the workstation itself.
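The amortization argument can be made quantitative with a back-of-the-envelope model. Dense n × n matrix multiplication performs about 2n³ arithmetic operations but moves only about 3n² matrix words over the network (A and B out, C back), so the flops-per-word ratio grows linearly with n. The throughput and latency numbers below are illustrative assumptions, not measurements from the experiment.

```python
def flops_per_word(n):
    """Arithmetic operations per word of network traffic for an
    n x n dense matrix multiply shipped over RPC."""
    return (2 * n ** 3) / (3 * n ** 2)

def remote_worthwhile(n, remote_flops, local_flops, words_per_sec, latency):
    """Rough cost model: is RPC to the big machine faster than staying local?
    All rates are hypothetical for illustration."""
    remote_time = latency + 3 * n ** 2 / words_per_sec + 2 * n ** 3 / remote_flops
    local_time = 2 * n ** 3 / local_flops
    return remote_time < local_time

print(flops_per_word(10))    # ~6.7 flops/word: RPC overhead dominates
print(flops_per_word(1000))  # ~667 flops/word: transfer cost amortized
```

Under such a model there is a crossover size below which the RPC latency and transfer time swamp the remote machine's arithmetic advantage, and above which offloading wins.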
Darwin v. 2.0: an interpreted computer language for the biosciences.
Gonnet, G H; Hallett, M T; Korostensky, C; Bernardin, L
2000-02-01
We announce the availability of the second release of Darwin v. 2.0, an interpreted computer language especially tailored to researchers in the biosciences. The system is a general tool applicable to a wide range of problems. This second release improves Darwin version 1.6 in several ways: it now contains (1) a larger set of libraries touching most of the classical problems from computational biology (pairwise alignment, all versus all alignments, tree construction, multiple sequence alignment), (2) an expanded set of general purpose algorithms (search algorithms for discrete problems, matrix decomposition routines, complex/long integer arithmetic operations), (3) an improved language with a cleaner syntax, (4) better on-line help, and (5) a number of fixes to user-reported bugs. Darwin is made available for most operating systems free of charge from the Computational Biochemistry Research Group (CBRG), reachable at http://chrg.inf.ethz.ch. darwin@inf.ethz.ch
ERIC Educational Resources Information Center
Education Development Center, Inc., Newton, MA.
This is one of a series of 20 booklets designed for participants in an in-service course for teachers of elementary mathematics. The course, developed by the University of Illinois Arithmetic Project, is designed to be conducted by local school personnel. In addition to these booklets, a course package includes films showing mathematics being…
Sex Differences in Mental Arithmetic, Digit Span, and "g" Defined as Working Memory Capacity
ERIC Educational Resources Information Center
Lynn, Richard; Irwing, Paul
2008-01-01
Meta-analyses are presented of sex differences in (1) the (mental) arithmetic subtest of the Wechsler intelligence tests for children and adolescents (the WISC and WPPSI tests), showing that boys obtained a mean advantage of 0.11d; (2) the (mental) arithmetic subtest of the Wechsler intelligence tests for adults (the WAIS tests) showing a mean…
ERIC Educational Resources Information Center
Barrouillet, Pierre; Poirier, Louise
1997-01-01
Outlines Piaget's late ideas on categories and morphisms and the impact of these ideas on the comprehension of the inclusion relationship and the solution of arithmetic problems. Reports a study in which fourth through sixth graders were given arithmetic problems involving two known quantities associated with changes rather than states. Identified…
ERIC Educational Resources Information Center
Andersson, Ulf
2008-01-01
Background: The study was conducted in an attempt to further our understanding of how working memory contributes to written arithmetical skills in children. Aim: The aim was to pinpoint the contribution of different central executive functions and to examine the contribution of the two subcomponents of children's written arithmetical skills.…
ERIC Educational Resources Information Center
Fuchs, Lynn S.; Compton, Donald L.; Fuchs, Douglas; Powell, Sarah R.; Schumacher, Robin F.; Hamlett, Carol L.; Vernier, Emily; Namkung, Jessica M.; Vukovic, Rose K.
2012-01-01
The purpose of this study was to investigate the contributions of domain-general cognitive resources and different forms of arithmetic development to individual differences in pre-algebraic knowledge. Children (n = 279, mean age = 7.59 years) were assessed on 7 domain-general cognitive resources as well as arithmetic calculations and word problems…
ERIC Educational Resources Information Center
McNeil, Nicole M.
2008-01-01
Do typical arithmetic problems hinder learning of mathematical equivalence? Second and third graders (7-9 years old; N= 80) received lessons on mathematical equivalence either with or without typical arithmetic problems (e.g., 15 + 13 = 28 vs. 28 = 28, respectively). Children then solved math equivalence problems (e.g., 3 + 9 + 5 = 6 + __),…
Patterns of problem-solving in children's literacy and arithmetic.
Farrington-Flint, Lee; Vanuxem-Cotterill, Sophie; Stiller, James
2009-11-01
Patterns of problem-solving among 5- to 7-year-olds were examined on a range of literacy (reading and spelling) and arithmetic-based (addition and subtraction) problem-solving tasks using verbal self-reports to monitor strategy choice. The results showed higher levels of variability in the children's strategy choice across Years 1 and 2 on the arithmetic (addition and subtraction) than literacy-based tasks (reading and spelling). However, across all four tasks, the children showed a tendency to move from less sophisticated procedural-based strategies, which included phonological strategies for reading and spelling and counting-all and finger modelling for addition and subtraction, to more efficient retrieval methods from Years 1 to 2. Distinct patterns in children's problem-solving skill were identified on the literacy and arithmetic tasks using two separate cluster analyses. There was a strong association between these two profiles, showing that those children with more advanced problem-solving skills on the arithmetic tasks also showed more advanced profiles on the literacy tasks. The results highlight how different-aged children show flexibility in their use of problem-solving strategies across literacy and arithmetical contexts and reinforce the importance of studying variations in children's problem-solving skill across different educational contexts.
On the Certain Topological Indices of Titania Nanotube TiO2[m, n]
NASA Astrophysics Data System (ADS)
Javaid, M.; Liu, Jia-Bao; Rehman, M. A.; Wang, Shaohui
2017-07-01
A numeric quantity that characterises the whole structure of a molecular graph is called a topological index; such indices predict the physical features, chemical reactivities, and boiling activities of the chemical compound represented by the molecular graph. In this article, we give new mathematical expressions for the multiple Zagreb indices, the generalised Zagreb index, the fourth version of the atom-bond connectivity (ABC4) index, and the fifth version of the geometric-arithmetic (GA5) index of TiO2[m, n]. In addition, we compute the recently developed topological index called the Sanskruti index. At the end, a comparison is included to estimate the efficiency of the computed indices. Our results extend some known conclusions.
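Degree-based indices of this family are simple sums over a graph's edges, which a short sketch can make concrete. The tiny path graph below is only an illustration (computing the paper's indices for TiO2[m, n] would require its full edge list); note also that GA5 replaces vertex degrees with sums of neighbour degrees, whereas the classic GA index shown here uses the degrees themselves.

```python
import math

def degrees(edges):
    """Vertex degrees of a simple undirected graph given as an edge list."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    return deg

def first_zagreb(edges):
    # M1(G) = sum over vertices of deg(v)^2
    return sum(d * d for d in degrees(edges).values())

def second_zagreb(edges):
    # M2(G) = sum over edges uv of deg(u)*deg(v)
    deg = degrees(edges)
    return sum(deg[u] * deg[v] for u, v in edges)

def geometric_arithmetic(edges):
    # GA(G) = sum over edges uv of 2*sqrt(deg(u)*deg(v)) / (deg(u)+deg(v))
    deg = degrees(edges)
    return sum(2 * math.sqrt(deg[u] * deg[v]) / (deg[u] + deg[v])
               for u, v in edges)

p4 = [(1, 2), (2, 3), (3, 4)]  # the 4-vertex path, degrees 1,2,2,1
print(first_zagreb(p4), second_zagreb(p4), geometric_arithmetic(p4))
```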
FPGA-Based Stochastic Echo State Networks for Time-Series Forecasting.
Alomar, Miquel L; Canals, Vincent; Perez-Mora, Nicolas; Martínez-Moll, Víctor; Rosselló, Josep L
2016-01-01
Hardware implementation of artificial neural networks (ANNs) allows exploiting the inherent parallelism of these systems. Nevertheless, they require a large amount of resources in terms of area and power dissipation. Recently, Reservoir Computing (RC) has arisen as a strategic technique to design recurrent neural networks (RNNs) with simple learning capabilities. In this work, we show a new approach to implement RC systems with digital gates. The proposed method is based on the use of probabilistic computing concepts to reduce the hardware required to implement different arithmetic operations. The result is the development of a highly functional system with low hardware resources. The presented methodology is applied to chaotic time-series forecasting.
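The hardware saving that probabilistic (stochastic) computing offers can be illustrated in software: when two values in [0, 1] are encoded as independent random bitstreams, a single AND gate per bit pair multiplies them, replacing a full binary multiplier. This is a generic stochastic-computing sketch, not the paper's FPGA design.

```python
import random

def to_stream(p, n, rng):
    """Unipolar stochastic encoding: a bitstream whose fraction of 1s is ~p."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

def stochastic_mul(p, q, n=100_000, seed=1):
    """Estimate p*q by ANDing two independent stochastic bitstreams:
    P(a_i = 1 and b_i = 1) = p * q for independent streams."""
    rng = random.Random(seed)
    a = to_stream(p, n, rng)
    b = to_stream(q, n, rng)
    return sum(x & y for x, y in zip(a, b)) / n

print(stochastic_mul(0.5, 0.4))  # close to 0.2
```

The price of the one-gate multiplier is precision: the estimate's standard error shrinks only as 1/sqrt(n), so long bitstreams trade time for area, which suits low-precision neural-network arithmetic well.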
FPGA-Based Stochastic Echo State Networks for Time-Series Forecasting
Alomar, Miquel L.; Canals, Vincent; Perez-Mora, Nicolas; Martínez-Moll, Víctor; Rosselló, Josep L.
2016-01-01
Hardware implementation of artificial neural networks (ANNs) allows exploiting the inherent parallelism of these systems. Nevertheless, they require a large amount of resources in terms of area and power dissipation. Recently, Reservoir Computing (RC) has arisen as a strategic technique to design recurrent neural networks (RNNs) with simple learning capabilities. In this work, we show a new approach to implement RC systems with digital gates. The proposed method is based on the use of probabilistic computing concepts to reduce the hardware required to implement different arithmetic operations. The result is the development of a highly functional system with low hardware resources. The presented methodology is applied to chaotic time-series forecasting. PMID:26880876