NLSE: Parameter-Based Inversion Algorithm
NASA Astrophysics Data System (ADS)
Sabbagh, Harold A.; Murphy, R. Kim; Sabbagh, Elias H.; Aldrin, John C.; Knopp, Jeremy S.
Chapter 11 introduced us to the notion of an inverse problem and gave us some examples of the value of this idea to the solution of realistic industrial problems. The basic inversion algorithm described in Chap. 11 was based upon the Gauss-Newton theory of nonlinear least-squares estimation and is called NLSE in this book. In this chapter we will develop the mathematical background of this theory more fully, because this algorithm will be the foundation of inverse methods and their applications during the remainder of this book. We hope, thereby, to introduce the reader to the application of sophisticated mathematical concepts to engineering practice without introducing excessive mathematical sophistication.
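To make the Gauss-Newton idea concrete, the following is a minimal numerical sketch of a Gauss-Newton iteration for a nonlinear least-squares fit; the exponential model, data, and parameter names are illustrative and are not taken from NLSE itself.

```python
# Minimal Gauss-Newton iteration for nonlinear least squares (illustrative only;
# not the NLSE code). We fit y = a * exp(b * t) to noisy synthetic data.
import numpy as np

def model(p, t):
    a, b = p
    return a * np.exp(b * t)

def jacobian(p, t):
    a, b = p
    # Partial derivatives of the model with respect to a and b.
    return np.column_stack([np.exp(b * t), a * t * np.exp(b * t)])

t = np.linspace(0.0, 1.0, 20)
rng = np.random.default_rng(0)
y = model([2.0, -1.5], t) + 0.01 * rng.standard_normal(t.size)

p = np.array([1.0, -0.5])            # initial guess
for _ in range(20):
    r = y - model(p, t)              # residuals
    J = jacobian(p, t)
    # Gauss-Newton step from the normal equations (J^T J) dp = J^T r
    dp = np.linalg.solve(J.T @ J, J.T @ r)
    p = p + dp
    if np.linalg.norm(dp) < 1e-10:
        break
print("estimated parameters:", p)
```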
NASA Technical Reports Server (NTRS)
Lennington, R. K.; Rassbach, M. E.
1979-01-01
Discussed in this report is the clustering algorithm CLASSY, including detailed descriptions of its general structure and mathematical background and of the various major subroutines. The report provides a development of the logic and equations used with specific reference to program variables. Some comments on timing and proposed optimization techniques are included.
Combinatorial-topological framework for the analysis of global dynamics.
Bush, Justin; Gameiro, Marcio; Harker, Shaun; Kokubu, Hiroshi; Mischaikow, Konstantin; Obayashi, Ippei; Pilarczyk, Paweł
2012-12-01
We discuss an algorithmic framework based on efficient graph algorithms and algebraic-topological computational tools. The framework is aimed at automatic computation of a database of global dynamics of a given m-parameter semidynamical system with discrete time on a bounded subset of the n-dimensional phase space. We introduce the mathematical background, which is based upon Conley's topological approach to dynamics, describe the algorithms for the analysis of the dynamics using rectangular grids both in phase space and parameter space, and show two sample applications.
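A minimal sketch of the combinatorial step behind such a database, under the assumption that grid boxes are mapped forward by sampling (a rigorous version would enclose the image of each box with interval arithmetic): build a directed graph on grid boxes and extract recurrent sets as non-trivial strongly connected components. The logistic map, grid size, and use of networkx are illustrative choices, not the authors' software.

```python
# Combinatorial outer approximation of a map on a 1-D rectangular grid, with
# recurrent "Morse sets" taken as non-trivial strongly connected components.
import networkx as nx
import numpy as np

def logistic(x, r=3.7):
    return r * x * (1.0 - x)

n_boxes = 200
edges = np.linspace(0.0, 1.0, n_boxes + 1)

G = nx.DiGraph()
G.add_nodes_from(range(n_boxes))
for i in range(n_boxes):
    # Sample the map on box i and record which boxes are hit; interval
    # arithmetic would be needed for a mathematically rigorous enclosure.
    samples = np.linspace(edges[i], edges[i + 1], 10)
    images = logistic(samples)
    hit = np.unique(np.clip(np.searchsorted(edges, images) - 1, 0, n_boxes - 1))
    G.add_edges_from((i, int(j)) for j in hit)

# Recurrent sets: SCCs with more than one box, or with a self-loop.
morse_sets = [scc for scc in nx.strongly_connected_components(G)
              if len(scc) > 1 or any(G.has_edge(v, v) for v in scc)]
print(len(morse_sets), "candidate Morse sets")
```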
Segmentation of vessel-like patterns using mathematical morphology and curvature evaluation.
Zana, F; Klein, J C
2001-01-01
This paper presents an algorithm based on mathematical morphology and curvature evaluation for the detection of vessel-like patterns in a noisy environment. Such patterns are very common in medical images. Vessel detection is of interest for the computation of parameters related to blood flow, and the tree-like geometry of vessels makes them a usable feature for registration between images that may be of different natures. In order to define vessel-like patterns, segmentation is performed with respect to a precise model: we define a vessel as a bright, piecewise-connected, locally linear pattern. Mathematical morphology is very well adapted to this description; however, other patterns also fit such a morphological description. In order to differentiate vessels from analogous background patterns, a cross-curvature evaluation is performed: vessels are separated out because they have a specific Gaussian-like profile whose curvature varies smoothly along the vessel. The detection algorithm that derives directly from this modeling is based on four steps: (1) noise reduction; (2) enhancement of linear patterns with a Gaussian-like profile; (3) cross-curvature evaluation; (4) linear filtering. We present its theoretical background, illustrate it on real images of various kinds, and then evaluate its robustness and accuracy with respect to noise.
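The following is a hedged OpenCV sketch of the four-step idea (noise reduction, enhancement of bright linear structures via openings with line-shaped structuring elements, curvature evaluation, and a final filtering/threshold step); it is not the authors' exact filter chain, and the file name, kernel sizes, and thresholds are illustrative.

```python
import cv2
import numpy as np

img = cv2.imread("retina.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)  # assumed input
smoothed = cv2.GaussianBlur(img, (5, 5), 1.0)                  # (1) noise reduction

# (2) enhance bright, locally linear patterns: supremum of openings with
# line-shaped structuring elements at several orientations.
length, c = 15, 7
sup_open = np.zeros_like(smoothed)
for angle in range(0, 180, 15):
    kernel = np.zeros((length, length), np.uint8)
    dx = int(round(np.cos(np.radians(angle)) * c))
    dy = int(round(np.sin(np.radians(angle)) * c))
    cv2.line(kernel, (c - dx, c - dy), (c + dx, c + dy), 1, 1)
    sup_open = np.maximum(sup_open, cv2.morphologyEx(smoothed, cv2.MORPH_OPEN, kernel))

# Linear structures survive the line openings but not an opening with a disk,
# so their difference highlights vessel candidates.
disk = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
candidates = sup_open - cv2.morphologyEx(sup_open, cv2.MORPH_OPEN, disk)

# (3) cross-curvature evaluation via a Laplacian of the Gaussian-smoothed result;
# bright ridges have a negative Laplacian across the vessel.
curvature = cv2.Laplacian(cv2.GaussianBlur(sup_open, (7, 7), 2.0), cv2.CV_32F)

# (4) simple final filtering/threshold combining brightness and curvature cues.
vessels = ((candidates > 5) & (curvature < 0)).astype(np.uint8) * 255
cv2.imwrite("vessels.png", vessels)
```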
Margin based ontology sparse vector learning algorithm and applied in biology science.
Gao, Wei; Qudair Baig, Abdul; Ali, Haidar; Sajjad, Wasim; Reza Farahani, Mohammad
2017-01-01
In the biology field, ontology applications involve a large amount of genetic information and chemical information about molecular structure, so the knowledge attached to ontology concepts conveys a great deal of information. Consequently, the vector that represents an ontology concept in mathematical notation is often of very high dimension, which places greater demands on ontology algorithms. Against this background, we consider the design of an ontology sparse vector algorithm and its application in biology. In this paper, using knowledge of the marginal likelihood and the marginal distribution, an optimization strategy for a margin-based ontology sparse vector learning algorithm is presented. Finally, the new algorithm is applied to the gene ontology and the plant ontology to verify its efficiency.
Distributed Algorithms for Probabilistic Solution of Computational Vision Problems.
1988-03-01
34 targets. Legters and Young (1982) developed an operator-based approach using foreground and background models and solved a least-squares minimization... (1960), "Finite Markov Chains", Van Nostrand, New York. Legters, G.R., and Young, T.Y. (1982), "A Mathematical Model for Computer Image Tracking".
The systems biology simulation core algorithm
2013-01-01
Background With the increasing availability of high dimensional time course data for metabolites, genes, and fluxes, the mathematical description of dynamical systems has become an essential aspect of research in systems biology. Models are often encoded in formats such as SBML, whose structure is very complex and difficult to evaluate due to many special cases. Results This article describes an efficient algorithm to solve SBML models that are interpreted in terms of ordinary differential equations. We begin our consideration with a formal representation of the mathematical form of the models and explain all parts of the algorithm in detail, including several preprocessing steps. We provide a flexible reference implementation as part of the Systems Biology Simulation Core Library, a community-driven project providing a large collection of numerical solvers and a sophisticated interface hierarchy for the definition of custom differential equation systems. To demonstrate the capabilities of the new algorithm, it has been tested with the entire SBML Test Suite and all models of BioModels Database. Conclusions The formal description of the mathematics behind the SBML format facilitates the implementation of the algorithm within specifically tailored programs. The reference implementation can be used as a simulation backend for Java™-based programs. Source code, binaries, and documentation can be freely obtained under the terms of the LGPL version 3 from http://simulation-core.sourceforge.net. Feature requests, bug reports, contributions, or any further discussion can be directed to the mailing list simulation-core-development@lists.sourceforge.net. PMID:23826941
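As an illustration of interpreting a reaction network as ordinary differential equations, here is a tiny, self-contained sketch in Python/SciPy; it is not the Simulation Core Library (a Java project), and the two-species reversible reaction is invented for the example.

```python
# Illustrative only: a minimal reaction network interpreted as ODEs and solved
# numerically, in the spirit of an SBML-to-ODE interpreter.
from scipy.integrate import solve_ivp

# S1 -> S2 with rate k1*S1, S2 -> S1 with rate k2*S2: reversible isomerisation.
k1, k2 = 0.8, 0.3

def rhs(t, y):
    s1, s2 = y
    v1 = k1 * s1
    v2 = k2 * s2
    # Stoichiometry applied to the reaction rates.
    return [-v1 + v2, v1 - v2]

sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0])
print(sol.y[:, -1])   # approaches the steady state (k2/(k1+k2), k1/(k1+k2))
```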
Generalized image contrast enhancement technique based on Heinemann contrast discrimination model
NASA Astrophysics Data System (ADS)
Liu, Hong; Nodine, Calvin F.
1994-03-01
This paper presents a generalized image contrast enhancement technique which equalizes perceived brightness based on the Heinemann contrast discrimination model. This is a modified algorithm which presents an improvement over the previous study by Mokrane in its mathematically proven existence of a unique solution and in its easily tunable parameterization. The model uses a log-log representation of contrast luminosity between targets and the surround in a fixed luminosity background setting. The algorithm consists of two nonlinear gray-scale mapping functions which have seven parameters, two of which are adjustable Heinemann constants. Another parameter is the background gray level. The remaining four parameters are nonlinear functions of gray scale distribution of the image, and can be uniquely determined once the previous three are given. Tests have been carried out to examine the effectiveness of the algorithm for increasing the overall contrast of images. It can be demonstrated that the generalized algorithm provides better contrast enhancement than histogram equalization. In fact, the histogram equalization technique is a special case of the proposed mapping.
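The abstract notes that histogram equalization is a special case of the proposed mapping; for reference, plain histogram equalization can be written in a few lines of NumPy. This sketch is illustrative and does not implement the Heinemann-based mapping itself.

```python
# Plain histogram equalization as a baseline gray-scale mapping (illustrative).
import numpy as np

def equalize(gray):
    # gray: 2-D uint8 array
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())   # normalise to [0, 1]
    lut = np.round(cdf * 255).astype(np.uint8)          # gray-scale mapping function
    return lut[gray]
```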
NASA Astrophysics Data System (ADS)
Ivanova, Violeta M.; Sousa, Rita; Murrihy, Brian; Einstein, Herbert H.
2014-06-01
This paper presents results from research conducted at MIT during 2010-2012 on modeling of natural rock fracture systems with the GEOFRAC three-dimensional stochastic model. Following a background summary of discrete fracture network models and a brief introduction of GEOFRAC, the paper provides a thorough description of the newly developed mathematical and computer algorithms for fracture intensity, aperture, and intersection representation, which have been implemented in MATLAB. The new methods optimize, in particular, the representation of fracture intensity in terms of cumulative fracture area per unit volume, P32, via the Poisson-Voronoi Tessellation of planes into polygonal fracture shapes. In addition, fracture apertures now can be represented probabilistically or deterministically whereas the newly implemented intersection algorithms allow for computing discrete pathways of interconnected fractures. In conclusion, results from a statistical parametric study, which was conducted with the enhanced GEOFRAC model and the new MATLAB-based Monte Carlo simulation program FRACSIM, demonstrate how fracture intensity, size, and orientations influence fracture connectivity.
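A hedged sketch of the P32 bookkeeping described above: sum the areas of the (convex, planar) fracture polygons and divide by the volume of the model region. The polygons and region size below are illustrative inputs, not GEOFRAC output.

```python
import numpy as np

def convex_polygon_area(vertices):
    """Area of a planar convex polygon from ordered 3-D vertices (triangle fan about the centroid)."""
    v = np.asarray(vertices, dtype=float)
    centroid = v.mean(axis=0)
    area = 0.0
    for i in range(len(v)):
        a, b = v[i] - centroid, v[(i + 1) % len(v)] - centroid
        area += 0.5 * np.linalg.norm(np.cross(a, b))
    return area

region_volume = 100.0 * 100.0 * 50.0          # model region in m^3 (illustrative)
fractures = [                                  # polygon vertex lists from the tessellation (illustrative)
    [[0, 0, 10], [20, 0, 10], [20, 15, 18], [0, 15, 18]],
    [[30, 40, 0], [55, 40, 0], [50, 60, 0], [28, 58, 0]],
]
p32 = sum(convex_polygon_area(f) for f in fractures) / region_volume
print("P32 =", p32, "1/m")
```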
NASA Astrophysics Data System (ADS)
Liu, Yun; Zhao, Yuejin; Liu, Ming; Dong, Liquan; Hui, Mei; Liu, Xiaohua; Wu, Yijian
2015-09-01
As an important branch of infrared imaging technology, infrared target tracking and detection has great scientific value and a wide range of applications in both military and civilian areas. For infrared imagery, which is characterized by low SNR and serious disturbance from background noise, an effective target detection algorithm is proposed in this paper, exploiting the frame-to-frame correlation of a moving target and the irrelevance of noise in sequential images, implemented with OpenCV. First, since temporal differencing and background subtraction are highly complementary, we use a combined detection method of frame differencing and background subtraction based on adaptive background updating. Results indicate that this method is simple and extracts the foreground moving target from the video sequence stably. Because the background updating mechanism continuously updates each pixel, the infrared moving target can be detected more accurately; this also paves the way for real-time infrared target detection and tracking once the OpenCV algorithms are ported to a DSP platform. Next, we use optimal thresholding to segment the image, transforming the gray images into binary images in order to provide a better condition for detection in the image sequence. Finally, using the correspondence of moving objects between frames and mathematical morphology processing, we can eliminate noise, reduce spurious area, and smooth region boundaries. Experimental results show that our algorithm achieves rapid detection of small infrared targets.
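A minimal OpenCV sketch of the combined approach described above (frame differencing plus background subtraction with adaptive background updating, Otsu thresholding, and morphological cleanup); the input file name, learning rate, and kernel size are illustrative.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("ir_sequence.avi")          # assumed input file
ok, frame = cap.read()
gray_prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
background = gray_prev.astype(np.float32)
alpha = 0.05                                        # background learning rate

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    frame_diff = cv2.absdiff(gray, gray_prev)                        # temporal differencing
    bg_diff = cv2.absdiff(gray, cv2.convertScaleAbs(background))     # background subtraction
    fused = cv2.bitwise_or(frame_diff, bg_diff)

    # Otsu threshold to binarise, then morphology to remove noise and smooth blobs.
    _, mask = cv2.threshold(fused, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

    # Adaptive background update for every pixel, skipping detected foreground.
    cv2.accumulateWeighted(gray, background, alpha, mask=cv2.bitwise_not(mask))

    gray_prev = gray
```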
Two Meanings of Algorithmic Mathematics.
ERIC Educational Resources Information Center
Maurer, Stephen B.
1984-01-01
Two mathematical topics are interpreted from the viewpoints of traditional (performing algorithms) and contemporary (creating algorithms and thinking in terms of them for solving problems and developing theory) algorithmic mathematics. The two topics are Horner's method for evaluating polynomials and Gauss's method for solving systems of linear…
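Of the two topics, Horner's method is the easier to illustrate in code; the short sketch below is not from the article.

```python
# Horner's method for evaluating p(x) = c[0]*x^n + ... + c[n] (illustrative sketch).
def horner(coeffs, x):
    result = 0.0
    for c in coeffs:          # coefficients from highest to lowest degree
        result = result * x + c
    return result

# p(x) = 2x^3 - 6x^2 + 2x - 1 evaluated at x = 3
print(horner([2, -6, 2, -1], 3))   # -> 5
```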
Automated Design Tools for Integrated Mixed-Signal Microsystems (NeoCAD)
2005-02-01
method, Model Order Reduction (MOR) tools, system-level mixed-signal circuit synthesis and optimization tools, and parasitic extraction tools. A unique... Mission Area: Command and Control. Keywords: mixed-signal circuit simulation, parasitic extraction, time-domain simulation, IC design flow, model order reduction. Contents: Extraction; 1.2 Overall Program Milestones; Chapter 2, Fast Time Domain Mixed-Signal Circuit Simulation; 2.1 HAARSPICE Algorithms; 2.1.1 Mathematical Background.
The Teaching and Learning of Algorithms in School Mathematics. 1998 Yearbook.
ERIC Educational Resources Information Center
Morrow, Lorna J., Ed.; Kenney, Margaret J., Ed.
This 1998 yearbook aims to stimulate and answer questions that all educators of mathematics need to consider to adapt school mathematics for the 21st century. The papers included in this book cover a wide variety of topics, including student-invented algorithms, the assessment of such algorithms, algorithms from history and other cultures, ways…
Results of NASA's First Autonomous Formation Flying Experiment: Earth Observing-1 (EO-1)
NASA Technical Reports Server (NTRS)
Folta, David C.; Hawkins, Albin; Bauer, Frank H. (Technical Monitor)
2001-01-01
NASA's first autonomous formation flying mission completed its primary goal of demonstrating an advanced technology called enhanced formation flying. To enable this technology, the Guidance, Navigation, and Control center at the Goddard Space Flight Center (GSFC) implemented a universal 3-axis formation flying algorithm in an autonomous executive flight code onboard the New Millennium Program's (NMP) Earth Observing-1 (EO-1) spacecraft. This paper describes the mathematical background of the autonomous formation flying algorithm and the onboard flight design and presents the validation results of this unique system. Results from functionality assessment through fully autonomous maneuver control are presented as comparisons between the onboard EO-1 operational autonomous control system called AutoCon(tm), its ground-based predecessor, and a standalone algorithm.
Science modelling in pre-calculus: how to make mathematics problems contextually meaningful
NASA Astrophysics Data System (ADS)
Sokolowski, Andrzej; Yalvac, Bugrahan; Loving, Cathleen
2011-04-01
'Use of mathematical representations to model and interpret physical phenomena and solve problems is one of the major teaching objectives in high school math curriculum' (National Council of Teachers of Mathematics (NCTM), Principles and Standards for School Mathematics, NCTM, Reston, VA, 2000). Commonly used pre-calculus textbooks provide a wide range of application problems. However, these problems focus students' attention on evaluating or solving pre-arranged formulas for given values. The role of scientific content is reduced to provide a background for these problems instead of being sources of data gathering for inducing mathematical tools. Students are neither required to construct mathematical models based on the contexts nor are they asked to validate or discuss the limitations of applied formulas. Using these contexts, the instructor may think that he/she is teaching problem solving, where in reality he/she is teaching algorithms of the mathematical operations (G. Kulm (ed.), New directions for mathematics assessment, in Assessing Higher Order Thinking in Mathematics, Erlbaum, Hillsdale, NJ, 1994, pp. 221-240). Without a thorough representation of the physical phenomena and the mathematical modelling processes undertaken, problem solving unintentionally appears as simple algorithmic operations. In this article, we deconstruct the representations of mathematics problems from selected pre-calculus textbooks and explicate their limitations. We argue that the structure and content of those problems limits students' coherent understanding of mathematical modelling, and this could result in weak student problem-solving skills. Simultaneously, we explore the ways to enhance representations of those mathematical problems, which we have characterized as lacking a meaningful physical context and limiting coherent student understanding. In light of our discussion, we recommend an alternative to strengthen the process of teaching mathematical modelling - utilization of computer-based science simulations. Although there are several exceptional computer-based science simulations designed for mathematics classes (see, e.g. Kinetic Book (http://www.kineticbooks.com/) or Gizmos (http://www.explorelearning.com/)), we concentrate mainly on the PhET Interactive Simulations developed at the University of Colorado at Boulder (http://phet.colorado.edu/) in generating our argument that computer simulations more accurately represent the contextual characteristics of scientific phenomena than their textual descriptions.
Using Mathematical Algorithms to Modify Glomerular Filtration Rate Estimation Equations
Zhu, Bei; Wu, Jianqing; Zhu, Jin; Zhao, Weihong
2013-01-01
Background The equations provide a rapid and low-cost method of evaluating glomerular filtration rate (GFR). Previous studies indicated that the Modification of Diet in Renal Disease (MDRD), Chronic Kidney Disease-Epidemiology (CKD-EPI) and MacIsaac equations need further modification for application in the Chinese population. Thus, this study was designed to modify the three equations and to compare the diagnostic accuracy of the equations before and after modification. Methodology With the use of 99mTc-DTPA renal dynamic imaging as the reference GFR (rGFR), the MDRD, CKD-EPI and MacIsaac equations were modified by two mathematical algorithms: the hill-climbing and the simulated-annealing algorithms. Results A total of 703 Chinese subjects were recruited, with an average rGFR of 77.14±25.93 ml/min. The entire modification process was based on a random sample of 80% of subjects in each GFR level as a training sample set, with the remaining 20% of subjects as a validation sample set. After modification, the three equations showed significant improvement in slope, intercept, correlation coefficient, root mean square error (RMSE), total deviation index (TDI), and the proportion of estimated GFR (eGFR) within 10% and 30% deviation of rGFR (P10 and P30). Of the three modified equations, the modified CKD-EPI equation showed the best accuracy. Conclusions Mathematical algorithms can be a valuable tool for modifying GFR equations. The accuracy of all three modified equations improved significantly, and the modified CKD-EPI equation was the best. PMID:23472113
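A hedged sketch of how simulated annealing can re-fit the coefficients of an MDRD-style equation eGFR = a·Scr^b·age^c against a reference GFR by minimizing RMSE; the data are synthetic and the starting coefficients are only assumed to be MDRD-like, so this is not the study's actual fitting code.

```python
import numpy as np

rng = np.random.default_rng(1)
scr = rng.uniform(0.6, 3.0, 200)            # serum creatinine, mg/dl (synthetic)
age = rng.uniform(20, 80, 200)
rgfr = 170 * scr**-1.1 * age**-0.2 + rng.normal(0, 5, 200)   # synthetic reference GFR

def rmse(p):
    a, b, c = p
    egfr = a * scr**b * age**c
    return np.sqrt(np.mean((egfr - rgfr) ** 2))

p = np.array([186.0, -1.154, -0.203])        # starting point (MDRD-like, assumed)
best, best_cost, temp = p.copy(), rmse(p), 1.0
for step in range(5000):
    cand = p + rng.normal(0, [5.0, 0.02, 0.02])        # random neighbour
    d = rmse(cand) - rmse(p)
    if d < 0 or rng.random() < np.exp(-d / temp):      # accept downhill, sometimes uphill
        p = cand
    if rmse(p) < best_cost:
        best, best_cost = p.copy(), rmse(p)
    temp *= 0.999                                      # cooling schedule
print(best, best_cost)
```

Dropping the acceptance of uphill moves (keeping only d < 0) turns the same loop into the hill-climbing variant also mentioned in the abstract.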
ERIC Educational Resources Information Center
Stanford Univ., CA. School Mathematics Study Group.
This is the second unit of a 15-unit School Mathematics Study Group (SMSG) mathematics text for high school students. Topics presented in the first chapter (Informal Algorithms and Flow Charts) include: changing a flat tire; algorithms, flow charts, and computers; assignment and variables; input and output; using a variable as a counter; decisions…
Sex and mathematical background as predictors of anxiety and self-efficacy in mathematics.
Lussier, G
1996-12-01
Anxiety and self-efficacy in mathematics as a function of sex and mathematical background were investigated. This study employed an ex post facto 2 x 2 factorial design in which sex and mathematical background were classification variables. It was predicted that men would report lower anxiety scores and higher self-efficacy scores than women and that students with a high mathematical background would report lower anxiety scores and higher self-efficacy scores than those with a low background in mathematics. An interaction between sex and mathematical background was also predicted. 51 subjects were given the revised Mathematics Anxiety Scale and the Mathematics Self-efficacy Scale. Results supported the hypotheses with respect to background in mathematics for anxiety in mathematics, and all of the hypotheses were supported for self-efficacy in mathematics.
ERIC Educational Resources Information Center
Nanna, Robert J.
2016-01-01
Algorithms and representations have been an important aspect of the work of mathematics, especially for understanding concepts and communicating ideas about concepts and mathematical relationships. They have played a key role in various mathematics standards documents, including the Common Core State Standards for Mathematics. However, there have…
Verifying a Computer Algorithm Mathematically.
ERIC Educational Resources Information Center
Olson, Alton T.
1986-01-01
Presents an example of mathematics from an algorithmic point of view, with emphasis on the design and verification of this algorithm. The program involves finding roots for algebraic equations using the half-interval search algorithm. The program listing is included. (JN)
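The half-interval (bisection) search mentioned above, as a short Python sketch:

```python
# Half-interval (bisection) search for a root of f on [lo, hi], assuming a sign change.
def bisect(f, lo, hi, tol=1e-10):
    assert f(lo) * f(hi) < 0, "root must be bracketed"
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if f(lo) * f(mid) <= 0:    # root lies in the left half
            hi = mid
        else:                      # root lies in the right half
            lo = mid
    return (lo + hi) / 2.0

print(bisect(lambda x: x**3 - 2, 0.0, 2.0))   # cube root of 2 ≈ 1.259921
```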
Measuring Leaf Area in Soy Plants by HSI Color Model Filtering and Mathematical Morphology
NASA Astrophysics Data System (ADS)
Benalcázar, M.; Padín, J.; Brun, M.; Pastore, J.; Ballarin, V.; Peirone, L.; Pereyra, G.
2011-12-01
There has lately been significant progress in automating tasks for the agricultural sector. One of the advances is the development of robots, based on computer vision, applied to the care and management of soy crops. In this task, digital image processing plays an important role, but it must solve some important problems, such as those associated with variations in lighting conditions during image acquisition. Such variations directly influence the brightness level of the images to be processed. In this paper we propose an algorithm to automatically segment and measure the leaf area of soy plants. This information is used by specialists to evaluate and compare the growth of different soy genotypes. The algorithm, based on color filtering using the HSI model, detects green objects against the image background. The segmentation of leaves (foliage) is performed by applying Mathematical Morphology, and the foliage area is estimated by counting the pixels that belong to the segmented leaves. In several experiments, consisting of applying the algorithm to measure the foliage of about fifty plants of various soy genotypes at different growth stages, we obtained successful results despite the strong brightness variations and shadows in the processed images.
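A hedged OpenCV sketch of the pipeline described above, using the HSV color space as a stand-in for HSI: filter green hues, clean the mask with morphological opening and closing, and convert the pixel count to area with an assumed calibration factor. The file name, thresholds, and mm²-per-pixel value are illustrative.

```python
import cv2
import numpy as np

img = cv2.imread("soy_plant.jpg")                       # assumed input image
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Keep "green" hues; OpenCV hue is in [0, 179].
mask = cv2.inRange(hsv, (35, 40, 40), (85, 255, 255))

kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # remove small noise
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # fill small gaps in leaves

leaf_pixels = int(np.count_nonzero(mask))
mm2_per_pixel = 0.04                                    # from a calibration target (assumed)
print("estimated leaf area:", leaf_pixels * mm2_per_pixel, "mm^2")
```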
Youssef, Joseph El; Engle, Julia M.; Massoud, Ryan G.; Ward, W. Kenneth
2010-01-01
Abstract Background A cause of suboptimal accuracy in amperometric glucose sensors is the presence of a background current (current produced in the absence of glucose) that is not accounted for. We hypothesized that a mathematical correction for the estimated background current of a commercially available sensor would lead to greater accuracy compared to a situation in which we assumed the background current to be zero. We also tested whether increasing the frequency of sensor calibration would improve sensor accuracy. Methods This report includes analysis of 20 sensor datasets from seven human subjects with type 1 diabetes. Data were divided into a training set for algorithm development and a validation set on which the algorithm was tested. A range of potential background currents was tested. Results Use of the background current correction of 4 nA led to a substantial improvement in accuracy (improvement of absolute relative difference or absolute difference of 3.5–5.5 units). An increase in calibration frequency led to a modest accuracy improvement, with an optimum at every 4 h. Conclusions Compared to no correction, a correction for the estimated background current of a commercially available glucose sensor led to greater accuracy and better detection of hypoglycemia and hyperglycemia. The accuracy-optimizing scheme presented here can be implemented in real time. PMID:20879968
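An illustrative calculation (with invented calibration numbers, not the study's data) of how a fixed background-current correction changes the glucose estimate obtained from a one-point calibration:

```python
# Convert sensor current to glucose with and without a background-current correction (i0).
def calibrate_sensitivity(i_cal_nA, glucose_cal_mgdl, i0_nA):
    # Sensitivity in nA per (mg/dl), from a single calibration point.
    return (i_cal_nA - i0_nA) / glucose_cal_mgdl

def estimate_glucose(i_nA, sensitivity, i0_nA):
    return (i_nA - i0_nA) / sensitivity

i0 = 4.0                                              # estimated background current, nA
s_corrected = calibrate_sensitivity(24.0, 100.0, i0)  # calibration: 24 nA at 100 mg/dl (assumed)
s_naive = calibrate_sensitivity(24.0, 100.0, 0.0)     # assuming zero background

for current in (14.0, 44.0):                          # a low and a high sensor current
    print(current,
          round(estimate_glucose(current, s_corrected, i0), 1),
          round(estimate_glucose(current, s_naive, 0.0), 1))
```

The divergence between the two estimates is largest at low currents, which is consistent with the abstract's emphasis on improved hypoglycemia detection.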
Preliminary Results of NASA's First Autonomous Formation Flying Experiment: Earth Observing-1 (EO-1)
NASA Technical Reports Server (NTRS)
Folta, David; Hawkins, Albin
2001-01-01
NASA's first autonomous formation flying mission is completing a primary goal of demonstrating an advanced technology called enhanced formation flying. To enable this technology, the Guidance, Navigation, and Control center at the Goddard Space Flight Center has implemented an autonomous universal three-axis formation flying algorithm in executive flight code onboard the New Millennium Program's (NMP) Earth Observing-1 (EO-1) spacecraft. This paper describes the mathematical background of the autonomous formation flying algorithm and the onboard design and presents the preliminary validation results of this unique system. Results from functionality assessment and autonomous maneuver control are presented as comparisons between the onboard EO-1 operational autonomous control system called AutoCon(tm), its ground-based predecessor, and a stand-alone algorithm.
NASA Astrophysics Data System (ADS)
Liu, Hong; Nodine, Calvin F.
1996-07-01
This paper presents a generalized image contrast enhancement technique, which equalizes the perceived brightness distribution based on the Heinemann contrast discrimination model. It is based on the mathematically proven existence of a unique solution to a nonlinear equation, and is formulated with easily tunable parameters. The model uses a two-step log-log representation of luminance contrast between targets and surround in a luminous background setting. The algorithm consists of two nonlinear gray scale mapping functions that have seven parameters, two of which are adjustable Heinemann constants. Another parameter is the background gray level. The remaining four parameters are nonlinear functions of the gray-level distribution of the given image, and can be uniquely determined once the previous three are set. Tests have been carried out to demonstrate the effectiveness of the algorithm for increasing the overall contrast of radiology images. The traditional histogram equalization can be reinterpreted as an image enhancement technique based on the knowledge of human contrast perception. In fact, it is a special case of the proposed algorithm.
Hyperspectral feature mapping classification based on mathematical morphology
NASA Astrophysics Data System (ADS)
Liu, Chang; Li, Junwei; Wang, Guangping; Wu, Jingli
2016-03-01
This paper proposes a hyperspectral feature mapping classification algorithm based on mathematical morphology. Without prior information such as a spectral library, the spectral and spatial information can be used to realize hyperspectral feature mapping classification. Mathematical morphological erosion and dilation operations are performed to extract endmembers, and a spectral feature mapping algorithm is then used to carry out the hyperspectral image classification. A hyperspectral image collected by AVIRIS is used to evaluate the proposed algorithm, which is compared with the minimum Euclidean distance mapping algorithm, the minimum Mahalanobis distance mapping algorithm, the SAM algorithm and the binary encoding mapping algorithm. The experimental results show that the proposed algorithm performs better than the other algorithms under the same conditions and achieves higher classification accuracy.
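As an example of one of the comparison methods named above, here is a short NumPy sketch of spectral angle mapper (SAM) classification; in the proposed method the endmember spectra would come from the morphological erosion/dilation step, but here they are random placeholders.

```python
import numpy as np

def sam_classify(cube, endmembers):
    # cube: (rows, cols, bands); endmembers: (n_classes, bands)
    rows, cols, bands = cube.shape
    flat = cube.reshape(-1, bands).astype(float)
    e = np.asarray(endmembers, dtype=float)
    # Cosine of the angle between each pixel spectrum and each endmember.
    cos = (flat @ e.T) / (np.linalg.norm(flat, axis=1, keepdims=True)
                          * np.linalg.norm(e, axis=1))
    angles = np.arccos(np.clip(cos, -1.0, 1.0))     # spectral angle per class
    return angles.argmin(axis=1).reshape(rows, cols)

cube = np.random.rand(4, 4, 10)          # toy hyperspectral cube (placeholder)
endmembers = np.random.rand(3, 10)       # placeholder endmember spectra
print(sam_classify(cube, endmembers))
```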
2013-01-01
Background Matching pursuit algorithm (MP), especially with recent multivariate extensions, offers unique advantages in analysis of EEG and MEG. Methods We propose a novel construction of an optimal Gabor dictionary, based upon the metrics introduced in this paper. We implement this construction in freely available software for MP decomposition of multivariate time series, with a user-friendly interface via the Svarog package (Signal Viewer, Analyzer and Recorder On GPL, http://braintech.pl/svarog), and provide a hands-on introduction to its application to EEG. Finally, we describe numerical and mathematical optimizations used in this implementation. Results Optimal Gabor dictionaries, based on the metric introduced in this paper, for the first time allowed for a priori assessment of the maximum one-step error of the MP algorithm. Variants of multivariate MP, implemented in the accompanying software, are organized according to the mathematical properties of the algorithms, relevant in the light of EEG/MEG analysis. Some of these variants have been successfully applied to both multichannel and multitrial EEG and MEG in previous studies, improving preprocessing for EEG/MEG inverse solutions and parameterization of evoked potentials in single trials; we mention also ongoing work and possible novel applications. Conclusions Mathematical results presented in this paper improve our understanding of the basics of the MP algorithm. A simple introduction of its properties and advantages, together with the accompanying stable and user-friendly Open Source software package, paves the way for widespread and reproducible analysis of multivariate EEG and MEG time series and novel applications, while retaining a high degree of compatibility with the traditional, visual analysis of EEG. PMID:24059247
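A minimal single-channel matching pursuit sketch with a small Gabor dictionary (not the Svarog implementation): at each iteration the atom with the largest inner product with the residual is selected and its contribution subtracted. Dictionary parameters and the test signal are illustrative.

```python
import numpy as np

n = 256
t = np.arange(n)

def gabor(center, width, freq, phase):
    # Unit-norm Gabor atom: Gaussian envelope times a cosine.
    g = np.exp(-0.5 * ((t - center) / width) ** 2) * np.cos(2 * np.pi * freq * t + phase)
    return g / np.linalg.norm(g)

# A modest dictionary over a grid of centers, widths, and frequencies.
dictionary = [gabor(c, w, f, 0.0)
              for c in range(0, n, 16)
              for w in (4, 8, 16, 32)
              for f in np.linspace(0.01, 0.25, 8)]
D = np.array(dictionary)                       # (n_atoms, n)

signal = 3.0 * gabor(100, 16, 0.1, 0.0) + 0.1 * np.random.default_rng(0).standard_normal(n)

residual = signal.copy()
decomposition = []
for _ in range(5):                             # 5 MP iterations
    products = D @ residual
    k = int(np.argmax(np.abs(products)))       # best-matching atom
    decomposition.append((k, products[k]))
    residual = residual - products[k] * D[k]   # subtract its contribution
print(decomposition[0], np.linalg.norm(residual))
```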
Szczegielniak, Jan; Łuniewski, Jacek; Stanisławski, Rafał; Bogacz, Katarzyna; Krajczy, Marcin; Rydel, Marek
2018-01-01
Background The six-minute walk test (6MWT) is considered to be a simple and inexpensive tool for assessing functional tolerance of submaximal effort. The aims of this work were 1) to establish the nonlinear nature of the energy-expenditure process due to physical activity, 2) to compare the results/scores of the submaximal treadmill exercise test and those of the 6MWT in pulmonary patients, and 3) to develop nonlinear mathematical models relating the two. Methods The study group included patients with COPD. All patients were subjected to a submaximal exercise test and a 6MWT. To develop an optimal mathematical solution and compare the results of the exercise test and the 6MWT, least-squares and genetic algorithms were employed to estimate the parameters of polynomial expansion and piecewise linear models. Results The mathematical analysis produced nonlinear models for estimating the MET result of the submaximal exercise test from the average walking velocity (or distance) in the 6MWT. Conclusions Submaximal effort tolerance in COPD patients can be effectively estimated from new, rehabilitation-oriented, nonlinear models based on the generalized MET concept and the 6MWT. PMID:29425213
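An illustrative least-squares fit of a polynomial model relating average 6MWT walking velocity to the MET score; the data points below are synthetic stand-ins, not the study's measurements, and the quadratic degree is an assumption.

```python
import numpy as np

velocity = np.array([2.5, 3.0, 3.5, 4.0, 4.5, 5.0, 5.5])   # km/h (synthetic)
mets = np.array([2.1, 2.8, 3.4, 4.3, 5.5, 6.9, 8.6])       # treadmill METs (synthetic)

coeffs = np.polyfit(velocity, mets, deg=2)       # quadratic model MET(v), fit by least squares
model = np.poly1d(coeffs)
print(coeffs, model(4.2))                        # predicted METs at a 4.2 km/h average walk
```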
Ho, Tsung-Jung; Kuo, Ching-Hua; Wang, San-Yuan; Chen, Guan-Yuan; Tseng, Yufeng J
2013-02-01
Liquid chromatography-time-of-flight mass spectrometry has become an important technique for toxicological screening and metabolomics. We describe TIPick, a novel algorithm that accurately and sensitively detects target compounds in biological samples. TIPick comprises two main steps: background subtraction and peak picking. By subtracting a blank chromatogram, TIPick eliminates the chemical signals of blank injections and reduces false positive results. TIPick detects peaks by calculating the S(CC(INI)) values of extracted ion chromatograms (EICs) without considering peak shapes, and it is able to detect tailing and fronting peaks. TIPick also uses duplicate injections to enhance peak signals and thus improve the peak detection power. Commonly seen split peaks, caused either by saturation of the mass spectrometer detector or by the mathematical background subtraction algorithm, can be resolved by adjusting the mass error tolerance of the EICs and by comparing the EICs before and after background subtraction. The performance of TIPick was tested on a data set containing 297 standard mixtures; the recall, precision and F-score were 0.99, 0.97 and 0.98, respectively. TIPick was successfully used to construct and analyze the NTU MetaCore metabolomics chemical standards library, and it was applied to toxicological screening and metabolomics studies. Copyright © 2013 John Wiley & Sons, Ltd.
2011-01-01
Background Network inference methods reconstruct mathematical models of molecular or genetic networks directly from experimental data sets. We have previously reported a mathematical method which is exclusively data-driven, does not involve any heuristic decisions within the reconstruction process, and delivers all possible alternative minimal networks in terms of simple place/transition Petri nets that are consistent with a given discrete time series data set. Results We fundamentally extended the previously published algorithm to consider catalysis and inhibition of the reactions that occur in the underlying network. The results of the reconstruction algorithm are encoded in the form of an extended Petri net involving control arcs. This allows the consideration of processes involving mass flow and/or regulatory interactions. As a non-trivial test case, the phosphate regulatory network of enterobacteria was reconstructed using in silico-generated time-series data sets on wild-type and in silico mutants. Conclusions The new exact algorithm reconstructs extended Petri nets from time series data sets by finding all alternative minimal networks that are consistent with the data. It suggested alternative molecular mechanisms for certain reactions in the network. The algorithm is useful to combine data from wild-type and mutant cells and may potentially integrate physiological, biochemical, pharmacological, and genetic data in the form of a single model. PMID:21762503
NASA Astrophysics Data System (ADS)
Vámos, Tibor
The gist of the paper is the fundamental uncertain nature of all kinds of uncertainties and consequently a critical epistemic review of historical and recent approaches, computational methods, algorithms. The review follows the development of the notion from the beginnings of thinking, via the Aristotelian and Skeptic view, the medieval nominalism and the influential pioneering metaphors of ancient India and Persia to the birth of modern mathematical disciplinary reasoning. Discussing the models of uncertainty, e.g. the statistical, other physical and psychological background we reach a pragmatic model related estimation perspective, a balanced application orientation for different problem areas. Data mining, game theories and recent advances in approximation algorithms are discussed in this spirit of modest reasoning.
A refined methodology for modeling volume quantification performance in CT
NASA Astrophysics Data System (ADS)
Chen, Baiyu; Wilson, Joshua; Samei, Ehsan
2014-03-01
The utility of the CT lung nodule volume quantification technique depends on the precision of the quantification. To enable the evaluation of quantification precision, we previously developed a mathematical model that related precision to image resolution and noise properties in uniform backgrounds in terms of an estimability index (e'). The e' was shown to predict empirical precision across 54 imaging and reconstruction protocols, but with different correlation qualities for FBP and iterative reconstruction (IR) due to the non-linearity of IR as impacted by anatomical structure. To better account for the non-linearity of IR, this study aimed to refine the noise characterization of the model in the presence of textured backgrounds. Repeated scans of an anthropomorphic lung phantom were acquired. Subtracted images were used to measure the image quantum noise, which was then used to adjust the noise component of the e' calculation measured from a uniform region. In addition to the model refinement, the validation of the model was further extended to 2 nodule sizes (5 and 10 mm) and 2 segmentation algorithms. Results showed that the magnitude of IR's quantum noise was significantly higher in structured backgrounds than in uniform backgrounds (ASiR, 30-50%; MBIR, 100-200%). With the refined model, the correlation between e' values and empirical precision no longer depended on the reconstruction algorithm. In conclusion, the model with refined noise characterization reflected the nonlinearity of iterative reconstruction in structured backgrounds, and further showed successful prediction of quantification precision across a variety of nodule sizes, dose levels, slice thicknesses, reconstruction algorithms, and segmentation software.
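A sketch of the repeated-scan noise measurement described above: since the anatomy cancels in the difference of two registered repeat acquisitions, the single-image quantum noise is the standard deviation of the difference divided by √2. The synthetic data below are illustrative.

```python
import numpy as np

def quantum_noise_std(scan_a, scan_b, roi):
    # Variance of the difference of two independent repeats is twice the single-image variance.
    diff = scan_a[roi].astype(np.float64) - scan_b[roi].astype(np.float64)
    return diff.std() / np.sqrt(2.0)

# Example with synthetic data: identical "anatomy" plus independent noise realizations.
rng = np.random.default_rng(0)
anatomy = rng.uniform(-50, 50, (128, 128))
a = anatomy + rng.normal(0, 12.0, anatomy.shape)
b = anatomy + rng.normal(0, 12.0, anatomy.shape)
roi = (slice(32, 96), slice(32, 96))
print(quantum_noise_std(a, b, roi))   # close to 12
```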
Development of a stained cell nuclei counting system
NASA Astrophysics Data System (ADS)
Timilsina, Niranjan; Moffatt, Christopher; Okada, Kazunori
2011-03-01
This paper presents a novel cell counting system which exploits the Fast Radial Symmetry Transformation (FRST) algorithm [1]. The driving force behind our system is research on neurogenesis in the intact nervous system of Manduca sexta, the tobacco hornworm, which was being studied to assess the impact of age, food and environment on neurogenesis. The varying thickness of the intact nervous system in this species often yields images with an inhomogeneous background and inconsistencies such as varying illumination, variable contrast, and irregular cell size. For automated counting, such inhomogeneity and inconsistencies must be addressed, which no existing work has done successfully. Thus, our goal is to devise a new cell counting algorithm for images with a non-uniform background. Our solution adapts FRST, a computer vision algorithm designed to detect points of interest in circular regions such as human eyes. This algorithm enhances the occurrences of the stained-cell nuclei in 2D digital images and counteracts the problems caused by their inhomogeneity. Besides FRST, our algorithm employs standard image processing methods, such as mathematical morphology and connected component analysis. We have evaluated the developed cell counting system on fourteen digital images of the tobacco hornworm's nervous system collected for this study, with ground-truth cell counts provided by biology experts. Experimental results show that our system has a minimum error of 1.41% and a mean error of 16.68%, which is at least forty-four percent better than the algorithm without FRST.
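A hedged OpenCV sketch of the counting stages that follow the FRST enhancement (the FRST itself is not reimplemented here): threshold the enhanced image, clean it with morphology, and count sufficiently large connected components as nuclei. The input file and the minimum-area value are illustrative.

```python
import cv2

enhanced = cv2.imread("frst_enhanced.png", cv2.IMREAD_GRAYSCALE)    # assumed FRST output

_, mask = cv2.threshold(enhanced, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)               # remove small specks

n_labels, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
min_area = 20                                                        # pixels, illustrative
cells = [i for i in range(1, n_labels) if stats[i, cv2.CC_STAT_AREA] >= min_area]
print("cell count:", len(cells))
```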
Quantum algorithm for solving some discrete mathematical problems by probing their energy spectra
NASA Astrophysics Data System (ADS)
Wang, Hefeng; Fan, Heng; Li, Fuli
2014-01-01
When a probe qubit is coupled to a quantum register that represents a physical system, the probe qubit will exhibit a dynamical response only when it is resonant with a transition in the system. Using this principle, we propose a quantum algorithm for solving discrete mathematical problems based on the circuit model. Our algorithm has favorable scaling properties in solving some discrete mathematical problems.
Wilson, Anna J; Dehaene, Stanislas; Pinel, Philippe; Revkin, Susannah K; Cohen, Laurent; Cohen, David
2006-01-01
Background Adaptive game software has been successful in remediation of dyslexia. Here we describe the cognitive and algorithmic principles underlying the development of similar software for dyscalculia. Our software is based on current understanding of the cerebral representation of number and the hypotheses that dyscalculia is due to a "core deficit" in number sense or in the link between number sense and symbolic number representations. Methods "The Number Race" software trains children on an entertaining numerical comparison task, by presenting problems adapted to the performance level of the individual child. We report full mathematical specifications of the algorithm used, which relies on an internal model of the child's knowledge in a multidimensional "learning space" consisting of three difficulty dimensions: numerical distance, response deadline, and conceptual complexity (from non-symbolic numerosity processing to increasingly complex symbolic operations). Results The performance of the software was evaluated both by mathematical simulations and by five weeks of use by nine children with mathematical learning difficulties. The results indicate that the software adapts well to varying levels of initial knowledge and learning speeds. Feedback from children, parents and teachers was positive. A companion article [1] describes the evolution of number sense and arithmetic scores before and after training. Conclusion The software, open-source and freely available online, is designed for learning disabled children aged 5–8, and may also be useful for general instruction of normal preschool children. The learning algorithm reported is highly general, and may be applied in other domains. PMID:16734905
Spinning projectile's attitude measurement with LW infrared radiation under sea-sky background
NASA Astrophysics Data System (ADS)
Xu, Miaomiao; Bu, Xiongzhu; Yu, Jing; He, Zilu
2018-05-01
With the further development of infrared radiation research under sea-sky backgrounds and the requirements of spinning projectile attitude measurement, the sea-sky infrared radiation field is used to determine the spinning projectile's attitude angle in place of inertial sensors. First, the generation mechanism of sea-sky infrared radiation is analysed, and a mathematical model of sea-sky infrared radiation in the LW (long-wave) infrared 8-14 μm band is derived by calculating the sea-surface and sky infrared radiation. Second, according to the movement characteristics of a spinning projectile, the attitude measurement model of the infrared sensors on the projectile's three axes is established, and the feasibility of the model is analysed by simulation. Finally, a projectile attitude calculation algorithm is designed to improve the attitude angle estimation accuracy. The results of semi-physical experiments show that the estimation error of the segmented interactive algorithm for pitch and roll angle is within ±1.5°. The attitude measurement method is effective and feasible, and provides an accurate measurement basis for the guidance of spinning projectiles.
Isaacson, M D; Srinivasan, S; Lloyd, L L
2010-01-01
MathSpeak is a set of rules for the non-ambiguous speaking of mathematical expressions. These rules have been incorporated into a computerised module that translates printed mathematics into the non-ambiguous MathSpeak form for synthetic speech rendering. Differences between individual utterances produced with the translator module are difficult to discern because of insufficient pausing between utterances; hence, the purpose of this study was to develop an algorithm for improving the synthetic speech rendering of MathSpeak. To improve synthetic speech renderings, an algorithm for inserting pauses was developed based upon recordings of middle and high school math teachers speaking mathematical expressions. Efficacy testing of this algorithm was conducted with college students without disabilities and high school/college students with visual impairments. The parameters measured included reception accuracy, short-term memory retention, MathSpeak processing capacity and various rankings concerning the quality of synthetic speech renderings. All parameters measured showed statistically significant improvements when the algorithm was used. The algorithm improves the quality and information processing capacity of synthetic speech renderings of MathSpeak, increasing the capacity of individuals with print disabilities to perform mathematical activities and to successfully fulfill science, technology, engineering and mathematics academic and career objectives.
ERIC Educational Resources Information Center
Raveh, Ira; Koichu, Boris; Peled, Irit; Zaslavsky, Orit
2016-01-01
In this article we present an integrative framework of knowledge for teaching the standard algorithms of the four basic arithmetic operations. The framework is based on a mathematical analysis of the algorithms, a connectionist perspective on teaching mathematics and an analogy with previous frameworks of knowledge for teaching arithmetic…
A problem of optimal control and observation for distributed homogeneous multi-agent system
NASA Astrophysics Data System (ADS)
Kruglikov, Sergey V.
2017-12-01
The paper considers the implementation of an algorithm for controlling a distributed complex of several mobile multi-robots. The concept of a unified information space for the controlling system is applied. The presented information and mathematical models of participants and obstacles (as real agents) and of goals and scenarios (as virtual agents) form the basis of the algorithmic and software background for a computer decision support system. The controlling scheme assumes indirect management of the robotic team on the basis of an optimal control and observation problem that predicts intelligent behavior in a dynamic, hostile environment. A representative problem is the transportation of a compound cargo by a group of participants under a distributed control scheme in terrain with multiple obstacles.
The challenge of computer mathematics.
Barendregt, Henk; Wiedijk, Freek
2005-10-15
Progress in the foundations of mathematics has made it possible to formulate all thinkable mathematical concepts, algorithms and proofs in one language and in an impeccable way. This is not in spite of, but partially based on the famous results of Gödel and Turing. In this way statements are about mathematical objects and algorithms, proofs show the correctness of statements and computations, and computations are dealing with objects and proofs. Interactive computer systems for a full integration of defining, computing and proving are based on this. The human defines concepts, constructs algorithms and provides proofs, while the machine checks that the definitions are well formed and the proofs and computations are correct. Results formalized so far demonstrate the feasibility of this 'computer mathematics'. Also there are very good applications. The challenge is to make the systems more mathematician-friendly, by building libraries and tools. The eventual goal is to help humans to learn, develop, communicate, referee and apply mathematics.
Infrared image enhancement based on the edge detection and mathematical morphology
NASA Astrophysics Data System (ADS)
Zhang, Linlin; Zhao, Yuejin; Dong, Liquan; Liu, Xiaohua; Yu, Xiaomei; Hui, Mei; Chu, Xuhong; Gong, Cheng
2010-11-01
Un-cooled infrared imaging technology was developed out of military necessity; at present it is widely applied in industry, medicine, and scientific and technological research. The infrared radiation temperature distribution of a measured object's surface can be observed visually. The infrared images collected in our laboratory have the following characteristics: strong spatial correlation; low contrast and poor visual effect; gray-scale images without color or shadows and with low resolution; lower definition than visible-light images; and many kinds of noise introduced by random disturbances from the external environment. Digital image processing is widely applied in many areas and has become an important extension of human vision. Traditional methods for image enhancement cannot capture the geometric information of images and tend to amplify noise. In order to remove noise, improve the visual effect, and overcome the above enhancement issues, a mathematical model of the FPA unit was constructed based on matrix transformation theory. According to the characteristics of the FPA, an image enhancement algorithm combining mathematical morphology and edge detection is established. First, the image profile is obtained by using edge detection combined with mathematical morphological operators. Then, by filling the template profile from the original image to obtain an ideal background image, the image noise can be removed. Experiments show that the proposed algorithm enhances image detail and improves the signal-to-noise ratio.
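A hedged OpenCV sketch in the spirit of the described enhancement: obtain an image profile from edge detection plus morphological closing, build a background image by filling the profile regions from a large morphological opening, and add the resulting detail back to the original. Kernel sizes, Canny thresholds, and the detail gain are illustrative, and this is not the authors' exact procedure.

```python
import cv2

ir = cv2.imread("ir_frame.png", cv2.IMREAD_GRAYSCALE)        # assumed input frame

# Image profile from edge detection plus morphological closing.
edges = cv2.Canny(ir, 30, 90)
profile = cv2.morphologyEx(edges, cv2.MORPH_CLOSE,
                           cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)))

# Background estimate: a large opening of the original, used to fill the profile
# regions so that object detail does not leak into the background image.
opened = cv2.morphologyEx(ir, cv2.MORPH_OPEN,
                          cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (25, 25)))
background = ir.copy()
background[profile > 0] = opened[profile > 0]

detail = cv2.subtract(ir, background)                         # detail inside the profile regions
enhanced = cv2.addWeighted(ir, 1.0, detail, 1.5, 0)           # add detail back, boosting edges
cv2.imwrite("ir_enhanced.png", enhanced)
```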
A review on principles, theory and practices of 2D-QSAR.
Roy, Kunal; Das, Rudra Narayan
2014-01-01
The central axiom of science calls for the explanation of every natural phenomenon using all possible logic drawn from pure as well as mixed scientific backgrounds. Quantitative structure-activity relationship (QSAR) analysis is a study correlating the behavioral manifestation of compounds with their structures, employing the interdisciplinary knowledge of chemistry, mathematics, biology and physics. Several studies have attempted to mathematically correlate the chemistry and property (physicochemical/biological/toxicological) of molecules using various computationally or experimentally derived quantitative parameters termed descriptors. The dimensionality of the descriptors depends on the type of algorithm employed and defines the nature of the QSAR analysis. The most interesting feature of predictive QSAR models is that the behavior of any new or even hypothesized molecule can be predicted through the mathematical equations. The phrase "2D-QSAR" signifies the development of QSAR models using 2D-descriptors. Such predictor variables are the most widely used because of their simple and direct mathematical algorithmic nature, involving no time-consuming energy computations, and their reproducible operability. 2D-descriptors have contributed enormously to extracting chemical attributes and are also capable of representing 3D molecular features to some extent, although they should in no case be considered the ultimate answer, since they often suffer from problems of intercorrelation, insufficient chemical information and lack of interpretability. However, by following rational approaches, novel 2D-descriptors may be developed to obviate various existing problems, giving potentially useful 2D-QSAR equations and thereby solving the innumerable chemical mysteries still unexplored.
Grossi, Enzo
2006-01-01
Background In recent years a number of algorithms for cardiovascular risk assessment have been proposed to the medical community. These algorithms consider a number of variables and express their results as the percentage risk of developing a major fatal or non-fatal cardiovascular event in the following 10 to 20 years. Discussion The author has identified three major pitfalls of these algorithms, linked to the limitations of the classical statistical approach in dealing with this kind of nonlinear and complex information. The pitfalls are the inability to capture the disease complexity, the inability to capture process dynamics, and the wide confidence interval of individual risk assessment. Artificial intelligence tools can provide a potential advantage in trying to overcome these limitations. The theoretical background and some application examples related to artificial neural networks and fuzzy logic are reviewed and discussed. Summary The use of predictive algorithms to assess the individual absolute risk of future cardiovascular events is currently hampered by methodological and mathematical flaws. Newer approaches linked to artificial intelligence, such as fuzzy logic and artificial neural networks, seem better able to address the challenge of increasing complexity resulting from the correlation between predisposing factors, data on the occurrence of cardiovascular events, and the prediction of future events at an individual level. PMID:16672045
NASA Astrophysics Data System (ADS)
Diaz, Kristians; Castañeda, Benjamín; Miranda, César; Lavarello, Roberto; Llanos, Alejandro
2010-03-01
We developed a protocol for the acquisition of digital images and an algorithm for a color-based automatic segmentation of cutaneous lesions of Leishmaniasis. The protocol for image acquisition provides control over the working environment to manipulate brightness, lighting and undesirable shadows on the injury using indirect lighting. Also, this protocol was used to accurately calculate the area of the lesion expressed in mm2 even in curved surfaces by combining the information from two consecutive images. Different color spaces were analyzed and compared using ROC curves in order to determine the color layer with the highest contrast between the background and the wound. The proposed algorithm is composed of three stages: (1) Location of the wound determined by threshold and mathematical morphology techniques to the H layer of the HSV color space, (2) Determination of the boundaries of the wound by analyzing the color characteristics in the YIQ space based on masks (for the wound and the background) estimated from the first stage, and (3) Refinement of the calculations obtained on the previous stages by using the discrete dynamic contours algorithm. The segmented regions obtained with the algorithm were compared with manual segmentations made by a medical specialist. Broadly speaking, our results support that color provides useful information during segmentation and measurement of wounds of cutaneous Leishmaniasis. Results from ten images showed 99% specificity, 89% sensitivity, and 98% accuracy.
ERIC Educational Resources Information Center
Jonsson, Bert; Kulaksiz, Yagmur C.; Lithner, Johan
2016-01-01
Two separate studies, Jonsson et al. ("J. Math Behav." 2014;36: 20-32) and Karlsson Wirebring et al. ("Trends Neurosci Educ." 2015;4(1-2):6-14), showed that learning mathematics using creative mathematical reasoning and constructing their own solution methods can be more efficient than if students use algorithmic reasoning and…
A True-Color Sensor and Suitable Evaluation Algorithm for Plant Recognition
Schmittmann, Oliver; Schulze Lammers, Peter
2017-01-01
Plant-specific herbicide application requires sensor systems for plant recognition and differentiation. A literature review reveals a lack of sensor systems capable of recognizing small weeds in early stages of development (in the two- or four-leaf stage) and crop plants, of making spraying decisions in real time, and that are, in addition, inexpensive and ready for practical use in sprayers. The system described in this work is based on freely cascadable and programmable true-color sensors for real-time recognition and identification of individual weed and crop plants. This type of sensor is suitable for municipal areas and for farmland with and without crops, to perform site-specific application of herbicides. Initially, databases with the reflection properties of plants and of natural and artificial backgrounds were created. Crop and weed plants are to be recognized by mathematical algorithms and decision models based on these data. They include the characteristic color spectrum as well as the reflectance characteristics of unvegetated areas and areas with organic material. The CIE-Lab color space was chosen for color matching because it contains information not only about coloration (a- and b-channels) but also about luminance (L-channel), thus increasing accuracy. Four different decision-making algorithms based on different parameters are explained: (i) color similarity (ΔE); (ii) color similarity split into ΔL, Δa and Δb; (iii) a virtual channel 'd'; and (iv) the statistical distribution of the differences between reflection backgrounds and plants. Afterwards, the detection success of the recognition system is described. Furthermore, the minimum weed/plant coverage of the measuring spot was calculated by a mathematical model. Plants with a size of 1-5% of the spot can be recognized, and weeds in the two-leaf stage can be identified with a measuring spot size of 5 cm. By choosing a decision model in advance, the detection quality can be increased; depending on the characteristics of the background, different models are suitable. Finally, the results of field trials on municipal areas (with models of plants), winter wheat fields (with artificial plants) and grassland (with dock) are shown. In each experimental variant, objects and weeds could be recognized. PMID:28786922
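A short sketch of the ΔE color-similarity decision (variant (i) above) in CIE-Lab using OpenCV for the color conversion; the reference leaf color and the acceptance threshold are assumed values, not the paper's calibration.

```python
import cv2
import numpy as np

img = cv2.imread("field_spot.png")                          # assumed sensor image
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB).astype(np.float32)

# OpenCV stores 8-bit Lab with L scaled by 255/100 and a, b offset by 128.
lab[..., 0] *= 100.0 / 255.0
lab[..., 1:] -= 128.0

reference = np.array([45.0, -35.0, 30.0], np.float32)        # typical leaf green (assumed)
delta_e = np.linalg.norm(lab - reference, axis=2)            # ΔE*ab per pixel

plant_mask = delta_e < 25.0                                  # decision threshold (assumed)
coverage = plant_mask.mean() * 100.0
print(f"plant coverage of the spot: {coverage:.1f}%")
```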
NASA Astrophysics Data System (ADS)
Ligomenides, Panos A.
2009-05-01
The power of mathematics is discussed as a way of expressing reasoning, aesthetics and insight in symbolic non-verbal communication. The human culture of discovering mathematical ways of thinking in the enterprise of exploring the understanding of the nature and the evolution of our world through hypotheses, theories and experimental affirmation of the scientific notion of algorithmic and non-algorithmic 'computation' is examined and commented upon.
Comparison of genetic algorithms with conjugate gradient methods
NASA Technical Reports Server (NTRS)
Bosworth, J. L.; Foo, N. Y.; Zeigler, B. P.
1972-01-01
Genetic algorithms for mathematical function optimization are modeled on search strategies employed in natural adaptation. Comparisons of genetic algorithms with conjugate gradient methods, which were made on an IBM 1800 digital computer, show that genetic algorithms display superior performance over gradient methods for functions which are poorly behaved mathematically, for multimodal functions, and for functions obscured by additive random noise. Genetic methods offer performance comparable to gradient methods for many of the standard functions.
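As a rough illustration of the comparison described above (not the original IBM 1800 study), the sketch below pits a small real-coded genetic algorithm against SciPy's conjugate-gradient optimizer on a multimodal test function with additive random noise; the population size, mutation scale and generation count are arbitrary choices.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def noisy_multimodal(x):
    """Rastrigin-like multimodal test function with additive random noise."""
    x = np.asarray(x, float)
    return float(10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))
                 + rng.normal(0.0, 0.1))

def genetic_minimize(f, dim=2, pop=40, gens=200, sigma=0.3, bounds=(-5.12, 5.12)):
    """Very small real-coded GA: tournament selection, blend crossover, Gaussian mutation."""
    lo, hi = bounds
    population = rng.uniform(lo, hi, size=(pop, dim))
    for _ in range(gens):
        fitness = np.array([f(ind) for ind in population])
        # Tournament selection of parents.
        idx = rng.integers(0, pop, size=(pop, 2))
        parents = population[np.where(fitness[idx[:, 0]] < fitness[idx[:, 1]],
                                      idx[:, 0], idx[:, 1])]
        # Blend crossover with a random partner, then Gaussian mutation.
        partners = parents[rng.permutation(pop)]
        alpha = rng.uniform(0, 1, size=(pop, 1))
        children = alpha * parents + (1 - alpha) * partners
        children += rng.normal(0.0, sigma, size=children.shape)
        population = np.clip(children, lo, hi)
    fitness = np.array([f(ind) for ind in population])
    return population[np.argmin(fitness)]

best_ga = genetic_minimize(noisy_multimodal)
best_cg = minimize(noisy_multimodal, x0=rng.uniform(-5, 5, 2), method="CG").x
print("GA:", best_ga, "CG:", best_cg)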
NASA Astrophysics Data System (ADS)
Ham, Woonchul; Song, Chulgyu
2017-05-01
In this paper, we propose a new three-dimensional stereo image reconstruction algorithm for a photoacoustic medical imaging system. We also introduce and discuss a new theoretical algorithm based on the physical concept of the Radon transform. The key concept of the proposed theoretical algorithm is to evaluate the existence possibility of an acoustic source within a searching region by using the geometric distance between each sensor element of the acoustic detector and the corresponding searching region, denoted by a grid. We derive the mathematical equation for the magnitude of the existence possibility, which can be used to implement the proposed algorithm, and we derive the equations of the proposed algorithm for both the one-dimensional and the two-dimensional sensing array cases. k-Wave simulation data are used to compare the image quality of the proposed algorithm with that of the conventional algorithm, in which the FFT must be used. From the k-Wave MATLAB simulation results, we demonstrate the effectiveness of the proposed reconstruction algorithm.
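The abstract describes scoring each grid cell by the geometric distance between the sensor elements and the cell. A hedged stand-in for that idea is the delay-and-sum back-projection sketched below, which sums each sensor's signal at the sample corresponding to the time of flight distance/c; the array geometry, sampling rate and synthetic source are assumptions, and this is not the authors' exact formulation.

import numpy as np

def existence_map(signals, sensor_xy, grid_x, grid_y, c=1500.0, fs=20e6):
    """For every grid cell, sum each sensor's signal sampled at the time of
    flight distance/c (a simple delay-and-sum existence score)."""
    n_sensors, n_samples = signals.shape
    score = np.zeros((grid_y.size, grid_x.size))
    for iy, y in enumerate(grid_y):
        for ix, x in enumerate(grid_x):
            d = np.hypot(sensor_xy[:, 0] - x, sensor_xy[:, 1] - y)   # distances [m]
            k = np.clip((d / c * fs).astype(int), 0, n_samples - 1)  # sample index
            score[iy, ix] = signals[np.arange(n_sensors), k].sum()
    return score

# Toy 1-D line array and a synthetic point source at (0, 5 mm).
fs, c = 20e6, 1500.0
sensors = np.column_stack([np.linspace(-5e-3, 5e-3, 32), np.zeros(32)])
t = np.arange(1024) / fs
signals = np.stack([np.exp(-((t - np.hypot(sx, 5e-3) / c) * fs / 3) ** 2)
                    for sx, _ in sensors])
m = existence_map(signals, sensors, np.linspace(-5e-3, 5e-3, 41), np.linspace(1e-3, 9e-3, 41))
print("peak at grid cell:", np.unravel_index(m.argmax(), m.shape))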
Genetic algorithms using SISAL parallel programming language
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tejada, S.
1994-05-06
Genetic algorithms are a mathematical optimization technique developed by John Holland at the University of Michigan [1]. The SISAL programming language possesses many of the characteristics desired to implement genetic algorithms. SISAL is a deterministic, functional programming language which is inherently parallel. Because SISAL is functional and based on mathematical concepts, genetic algorithms can be efficiently translated into the language. Several of the steps involved in genetic algorithms, such as mutation, crossover, and fitness evaluation, can be parallelized using SISAL. In this paper I will discuss the implementation and performance of parallel genetic algorithms in SISAL.
Model-Based Sensor-Augmented Pump Therapy
Grosman, Benyamin; Voskanyan, Gayane; Loutseiko, Mikhail; Roy, Anirban; Mehta, Aloke; Kurtz, Natalie; Parikh, Neha; Kaufman, Francine R.; Mastrototaro, John J.; Keenan, Barry
2013-01-01
Background In insulin pump therapy, optimization of bolus and basal insulin dose settings is a challenge. We introduce a new algorithm that provides individualized basal rates and new carbohydrate ratio and correction factor recommendations. The algorithm utilizes a mathematical model of blood glucose (BG) as a function of carbohydrate intake and delivered insulin, which includes individualized parameters derived from sensor BG and insulin delivery data downloaded from a patient’s pump. Methods A mathematical model of BG as a function of carbohydrate intake and delivered insulin was developed. The model includes fixed parameters and several individualized parameters derived from the subject’s BG measurements and pump data. Performance of the new algorithm was assessed using n = 4 diabetic canine experiments over a 32 h duration. In addition, 10 in silico adults from the University of Virginia/Padova type 1 diabetes mellitus metabolic simulator were tested. Results The percentage of time in the glucose range 80–180 mg/dl was 86%, 85%, 61%, and 30% using model-based therapy and [78%, 100%] (brackets denote multiple experiments conducted under the same therapy and animal model), [75%, 67%], 47%, and 86% for the control experiments for dogs 1 to 4, respectively. The BG measurements obtained in the simulation using our individualized algorithm were within a 61–231 mg/dl min–max envelope, whereas use of the simulator’s default treatment resulted in a 90–210 mg/dl min–max envelope. Conclusions The study results demonstrate the potential of this method, which could serve as a platform for improving, facilitating, and standardizing insulin pump therapy based on a single download of data. PMID:23567006
An automated approach to detecting signals in electroantennogram data
Slone, D.H.; Sullivan, B.T.
2007-01-01
Coupled gas chromatography/electroantennographic detection (GC-EAD) is a widely used method for identifying insect olfactory stimulants present in mixtures of volatiles, and it can greatly accelerate the identification of insect semiochemicals. In GC-EAD, voltage changes across an insect's antenna are measured while the antenna is exposed to compounds eluting from a gas chromatograph. The antenna thus serves as a selective GC detector whose output can be compared to that of a "general" GC detector, commonly a flame ionization detector. Appropriate interpretation of GC-EAD results requires that olfaction-related voltage changes in the antenna be distinguishable from background noise that arises inevitably from antennal preparations and the GC-EAD-associated hardware. In this paper, we describe and compare mathematical algorithms for discriminating olfaction-generated signals in an EAD trace from background noise. The algorithms amplify signals by recognizing their characteristic shape and wavelength while suppressing unstructured noise. We have found these algorithms to be both powerful and highly discriminatory even when applied to noisy traces where the signals would be difficult to discriminate by eye. This new methodology removes operator bias as a factor in signal identification, can improve realized sensitivity of the EAD system, and reduces the number of runs required to confirm the identity of an olfactory stimulant. ?? 2007 Springer Science+Business Media, LLC.
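A simple way to "amplify signals by recognizing their characteristic shape and wavelength" is a matched-filter-style cross-correlation with a template of the expected response, followed by a robust noise threshold. The sketch below is only a generic stand-in for the algorithms described above; the template shape, width and threshold are assumptions.

import numpy as np
from scipy.signal import fftconvolve

def detect_ead_events(trace, fs, width_s=0.6, z_thresh=5.0):
    """Cross-correlate the EAD trace with a zero-mean template of the expected
    response shape and width, then flag excursions well above the noise level
    (robustly estimated with the median absolute deviation)."""
    n = int(width_s * fs)
    t = np.linspace(-3, 3, n)
    template = np.exp(-t**2)
    template -= template.mean()              # zero mean: suppress slow drift
    template /= np.linalg.norm(template)
    matched = fftconvolve(trace, template[::-1], mode="same")   # cross-correlation
    noise = 1.4826 * np.median(np.abs(matched - np.median(matched)))  # MAD -> sigma
    return np.flatnonzero(matched > z_thresh * noise)

# Synthetic trace: two downward antennal deflections buried in noise.
fs = 100.0
t = np.arange(0, 60, 1 / fs)
trace = 0.05 * np.random.randn(t.size)
for t0 in (15.0, 42.0):
    trace -= 0.25 * np.exp(-((t - t0) / 0.3) ** 2)
hits = detect_ead_events(-trace, fs)         # flip sign so responses are positive
print("responses detected around t =", sorted(set((hits / fs).round())))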
Language and counting: Some recent results
NASA Astrophysics Data System (ADS)
Bell, Garry
1990-02-01
It has long been recognised that the language of mathematics is an important variable in the learning of mathematics, and there has been useful work in isolating and describing the linkage. Steffe and his co-workers at Georgia, for example, (Steffe, von Glasersfeld, Richardson and Cobb, 1983) have suggested that young children may construct verbal countable items to count objects which are hidden from their view. Although there has been a surge of research interest in counting and early childhood mathematics, and in cultural differences in mathematics attainment, there has been little work reported on the linkage between culture as exemplified by language, and initial concepts of numeration. This paper reports on some recent clinical research with kindergarten children of European and Asian background in Australia and America. The research examines the influence that number naming grammar appears to have on young children's understandings of two-digit numbers and place value. It appears that Transparent Standard Number Word Sequences such as Japanese, Chinese and Vietnamese which follow the numerical representation pattern by naming tens and units in order ("two tens three"), may be associated with distinctive place value concepts which may support sophisticated mental algorithms.
An algebra-based method for inferring gene regulatory networks
2014-01-01
Background The inference of gene regulatory networks (GRNs) from experimental observations is at the heart of systems biology. This includes the inference of both the network topology and its dynamics. While there are many algorithms available to infer the network topology from experimental data, less emphasis has been placed on methods that infer network dynamics. Furthermore, since the network inference problem is typically underdetermined, it is essential to have the option of incorporating prior knowledge about the network into the inference process, along with an effective description of the search space of dynamic models. Finally, it is also important to have an understanding of how a given inference method is affected by experimental and other noise in the data used. Results This paper presents a novel inference algorithm using the algebraic framework of Boolean polynomial dynamical systems (BPDS), meeting all these requirements. The algorithm takes as input time series data, including those from network perturbations, such as knock-out mutant strains and RNAi experiments. It allows for the incorporation of prior biological knowledge while being robust to significant levels of noise in the data used for inference. It uses an evolutionary algorithm for local optimization with an encoding of the mathematical models as BPDS. The BPDS framework allows an effective representation of the search space for algebraic dynamic models that improves computational performance. The algorithm is validated with both simulated and experimental microarray expression profile data. Robustness to noise is tested using a published mathematical model of the segment polarity gene network in Drosophila melanogaster. Benchmarking of the algorithm is done by comparison with a spectrum of state-of-the-art network inference methods on data from the synthetic IRMA network to demonstrate that our method has good precision and recall for the network reconstruction task, while also predicting several of the dynamic patterns present in the network. Conclusions Boolean polynomial dynamical systems provide a powerful modeling framework for the reverse engineering of gene regulatory networks that enables a rich mathematical structure on the model search space. A C++ implementation of the method, distributed under the LGPL license, is available, together with the source code, at http://www.paola-vera-licona.net/Software/EARevEng/REACT.html. PMID:24669835
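To make the BPDS idea concrete, the toy below writes a three-node Boolean network as polynomials over F2 (AND as multiplication, XOR as addition mod 2, OR(a,b) = a + b + ab) and enumerates its state transitions; it illustrates only the model representation used as the search space, not the evolutionary inference algorithm of the paper, and the update rules are made up.

from itertools import product

def step(state):
    """One synchronous update of a toy Boolean polynomial dynamical system over F2."""
    x1, x2, x3 = state
    f1 = x2 % 2                          # gene 1 copies gene 2
    f2 = (x1 * x3) % 2                   # gene 2 = x1 AND x3
    f3 = (x1 + x2 + x1 * x2) % 2         # gene 3 = x1 OR x2
    return (f1, f2, f3)

# Enumerate the full state transition graph (2^3 states) and find fixed points.
transitions = {s: step(s) for s in product((0, 1), repeat=3)}
fixed_points = [s for s, t in transitions.items() if s == t]
print("transitions:", transitions)
print("fixed points:", fixed_points)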
Mathematical background and attitudes toward statistics in a sample of Spanish college students.
Carmona, José; Martínez, Rafael J; Sánchez, Manuel
2005-08-01
To examine the relation of mathematical background and initial attitudes toward statistics of Spanish college students in social sciences, the Survey of Attitudes Toward Statistics was given to 827 students. Multivariate analyses tested the effects of two indicators of mathematical background (amount of exposure and achievement in previous courses) on the four subscales. Analysis suggested that grades in previous courses are more related to initial attitudes toward statistics than the number of mathematics courses taken. Mathematical background was related to students' affective responses to statistics but not to their valuing of statistics. Implications for possible research are discussed.
A real-time tracking system of infrared dim and small target based on FPGA and DSP
NASA Astrophysics Data System (ADS)
Rong, Sheng-hui; Zhou, Hui-xin; Qin, Han-lin; Wang, Bing-jian; Qian, Kun
2014-11-01
A core technology in infrared warning systems is the detection and tracking of dim and small targets against complicated backgrounds. Consequently, running the detection algorithm on a hardware platform has high practical value in the military field. In this paper, a real-time detection and tracking system for infrared dim and small targets, with an FPGA (Field Programmable Gate Array) and a DSP (Digital Signal Processor) as its core, was designed, and the corresponding detection and tracking algorithm and signal flow are elaborated. In the first stage, the FPGA obtains the infrared image sequence from the sensor, suppresses background clutter by a mathematical morphology method and enhances the target intensity with a Laplacian of Gaussian operator. In the second stage, the DSP obtains both the original image and the filtered image from the FPGA via the video port. It then segments the target from the filtered image by an adaptive threshold segmentation method and removes false targets with a pipeline filter. Experimental results show that our system can achieve a higher detection rate and a lower false alarm rate.
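A hedged sketch of the processing chain described above (morphological background suppression, Laplacian-of-Gaussian enhancement, adaptive thresholding) is given below in Python/SciPy rather than on FPGA/DSP hardware; the structuring-element size, LoG sigma, threshold factor and synthetic frame are illustrative assumptions.

import numpy as np
from scipy import ndimage

def detect_dim_targets(frame, tophat_size=9, log_sigma=1.5, k=5.0):
    """Morphological white top-hat suppresses slowly varying background clutter,
    a negated Laplacian-of-Gaussian enhances point-like targets, and an adaptive
    mean + k*std threshold segments the candidates, whose centroids are returned."""
    suppressed = ndimage.white_tophat(frame, size=tophat_size)
    enhanced = -ndimage.gaussian_laplace(suppressed, sigma=log_sigma)
    labels, n = ndimage.label(enhanced > enhanced.mean() + k * enhanced.std())
    return ndimage.center_of_mass(enhanced, labels, range(1, n + 1))

# Synthetic 128x128 frame: smooth clutter plus noise and one dim point target.
rng = np.random.default_rng(1)
frame = ndimage.gaussian_filter(rng.normal(100.0, 5.0, (128, 128)), sigma=12)
frame += rng.normal(0.0, 1.0, frame.shape)
frame[64, 40] += 20.0
print("detected target(s) near:", detect_dim_targets(frame))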
A comparison of common programming languages used in bioinformatics
Fourment, Mathieu; Gillings, Michael R
2008-01-01
Background The performance of different programming languages has previously been benchmarked using abstract mathematical algorithms, but not using standard bioinformatics algorithms. We compared the memory usage and speed of execution for three standard bioinformatics methods, implemented in programs using one of six different programming languages. Programs for the Sellers algorithm, the Neighbor-Joining tree construction algorithm and an algorithm for parsing BLAST file outputs were implemented in C, C++, C#, Java, Perl and Python. Results Implementations in C and C++ were fastest and used the least memory. Programs in these languages generally contained more lines of code. Java and C# appeared to be a compromise between the flexibility of Perl and Python and the fast performance of C and C++. The relative performance of the tested languages did not change from Windows to Linux and no clear evidence of a faster operating system was found. Source code and additional information are available from Conclusion This benchmark provides a comparison of six commonly used programming languages under two different operating systems. The overall comparison shows that a developer should choose an appropriate language carefully, taking into account the performance expected and the library availability for each language. PMID:18251993
Combinatorial and Algorithmic Rigidity: Beyond Two Dimensions
2012-12-01
… in 2008, under the DARPA solicitation "Mathematical Challenges, BAA 07-68". It addressed Mathematical Challenge Ten: Algorithmic Origami and … a number of optimal algorithms and provided critical complexity analysis. The topic of algorithmic origami was successfully engaged from the same …
Robust Constrained Blackbox Optimization with Surrogates
2015-05-21
Assessing semantic similarity of texts - Methods and algorithms
NASA Astrophysics Data System (ADS)
Rozeva, Anna; Zerkova, Silvia
2017-12-01
Assessing the semantic similarity of texts is an important part of different text-related applications like educational systems, information retrieval, text summarization, etc. This task is performed by sophisticated analysis, which implements text-mining techniques. Text mining involves several pre-processing steps, which provide for obtaining a structured, representative model of the documents in a corpus by means of extracting and selecting the features characterizing their content. Generally the model is vector-based and enables further analysis with knowledge discovery approaches. Algorithms and measures are used for assessing texts at the syntactic and semantic levels. An important text-mining method and similarity measure is latent semantic analysis (LSA). It provides for reducing the dimensionality of the document vector space and better capturing the text semantics. The mathematical background of LSA for deriving the meaning of the words in a given text by exploring their co-occurrence is examined. The algorithm for obtaining the vector representation of words and their corresponding latent concepts in a reduced multidimensional space, as well as the similarity calculation, is presented.
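The LSA pipeline sketched in the abstract (vector-space model, truncated SVD into a latent concept space, cosine similarity) can be illustrated with scikit-learn as below; the toy corpus and the number of latent components are assumptions.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "The algorithm reduces the dimensionality of the document vector space.",
    "Latent semantic analysis captures word co-occurrence in a corpus.",
    "The recipe calls for two cups of flour and a pinch of salt.",
]

# Vector-space model of the corpus, then a truncated SVD projects the documents
# into a low-dimensional latent concept space (the core of LSA).
tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(docs)
lsa = TruncatedSVD(n_components=2, random_state=0)
Z = lsa.fit_transform(X)

# Semantic similarity is the cosine between documents in the latent space.
print(np.round(cosine_similarity(Z), 2))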
Edge-directed inference for microaneurysms detection in digital fundus images
NASA Astrophysics Data System (ADS)
Huang, Ke; Yan, Michelle; Aviyente, Selin
2007-03-01
Microaneurysm (MA) detection is a critical step in diabetic retinopathy screening, since MAs are the earliest visible warning of potential future problems. A variety of algorithms have been proposed for MA detection in mass screening. The core technology of most existing methods is a directional mathematical morphological operation called the "Top-Hat" filter, which requires multiple filtering operations at each pixel. Background structure, uneven illumination and noise often cause confusion between MAs and some non-MA structures and limit the applicability of the filter. In this paper, a novel detection framework based on edge-directed inference is proposed for MA detection. The candidate MA regions are first delineated from the edge map of a fundus image. Features measuring shape, brightness and contrast are extracted for each candidate MA region to better distinguish true MAs from false detections. Algorithmic analysis and empirical evaluation reveal that the proposed edge-directed inference outperforms the "Top-Hat"-based algorithm in both detection accuracy and computational speed.
Heuristic and algorithmic processing in English, mathematics, and science education.
Sharps, Matthew J; Hess, Adam B; Price-Sharps, Jana L; Teh, Jane
2008-01-01
Many college students experience difficulties in basic academic skills. Recent research suggests that much of this difficulty may lie in heuristic competency--the ability to use and successfully manage general cognitive strategies. In the present study, the authors evaluated this possibility. They compared participants' performance on a practice California Basic Educational Skills Test and on a series of questions in the natural sciences with heuristic and algorithmic performance on a series of mathematics and reading comprehension exercises. Heuristic competency in mathematics was associated with better scores in science and mathematics. Verbal and algorithmic skills were associated with better reading comprehension. These results indicate the importance of including heuristic training in educational contexts and highlight the importance of a relatively domain-specific approach to questions of cognition in higher education.
Doing Mathematics with Purpose: Mathematical Text Types
ERIC Educational Resources Information Center
Dostal, Hannah M.; Robinson, Richard
2018-01-01
Mathematical literacy includes learning to read and write different types of mathematical texts as part of purposeful mathematical meaning making. Thus in this article, we describe how learning to read and write mathematical texts (proof text, algorithmic text, algebraic/symbolic text, and visual text) supports the development of students'…
Not mathematics Education, not Mathematics education but Mathematics Education
ERIC Educational Resources Information Center
Galbraith, P. L.
1977-01-01
Weaknesses in the initial preparation of school mathematics teachers are proposed. Emphasis is on the underdevelopment of global understanding in lieu of the manipulation of symbols or the performing of complex algorithms. (MN)
A pedagogical approach to the Boltzmann factor through experiments and simulations
NASA Astrophysics Data System (ADS)
Battaglia, O. R.; Bonura, A.; Sperandeo-Mineo, R. M.
2009-09-01
The Boltzmann factor is the basis of a huge amount of thermodynamic and statistical physics, both classical and quantum. It governs the behaviour of all systems in nature that are exchanging energy with their environment. Understanding why the expression has this specific form involves a deep mathematical analysis, whose flow of logic is hard to see and is beyond the level of high school or college students' preparation. Here we present some experiments and simulations aimed at directly deriving its mathematical expression and illustrating the fundamental concepts on which it is grounded. The experiments use easily available apparatus, and the simulations are developed in the NetLogo environment which, besides having a user-friendly interface, allows easy interaction with the algorithm. The approach supplies pedagogical support for the introduction of the Boltzmann factor at the undergraduate level to students without a background in statistical mechanics.
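One simulation in this spirit (an assumption, not necessarily the authors' NetLogo model) lets many identical agents repeatedly exchange random amounts of a conserved energy; the resulting energy distribution approaches the exponential Boltzmann form P(E) proportional to exp(-E/<E>).

import numpy as np

rng = np.random.default_rng(0)
n_agents, n_steps, mean_energy = 10_000, 200_000, 10.0
energy = np.full(n_agents, mean_energy)

for _ in range(n_steps):
    i, j = rng.integers(0, n_agents, size=2)
    if i == j:
        continue
    pool = energy[i] + energy[j]
    energy[i] = rng.uniform(0.0, pool)       # random repartition of the pair's energy
    energy[j] = pool - energy[i]             # total energy is conserved

# Compare the simulated histogram with exp(-E/<E>)/<E> in the first few bins.
hist, edges = np.histogram(energy, bins=30, range=(0, 60), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
print(np.round(np.column_stack([centers[:5], hist[:5],
                                np.exp(-centers[:5] / mean_energy) / mean_energy]), 4))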
NASA Astrophysics Data System (ADS)
Ganzert, Steven; Guttmann, Josef; Steinmann, Daniel; Kramer, Stefan
Lung-protective ventilation strategies reduce the risk of ventilator-associated lung injury. To develop such strategies, knowledge about the mechanical properties of the mechanically ventilated human lung is essential. This study was designed to develop an equation discovery system to identify mathematical models of the respiratory system in time-series data obtained from mechanically ventilated patients. Two techniques were combined: (i) the use of declarative bias to reduce search space complexity, which inherently provides for the processing of background knowledge, and (ii) a newly developed heuristic for traversing the hypothesis space with a greedy, randomized strategy analogous to the GSAT algorithm. In 96.8% of all runs the equation discovery system was able to detect the well-established equation-of-motion model of the respiratory system in the provided data. We see the potential of this semi-automatic approach to detect more complex mathematical descriptions of the respiratory system from respiratory data.
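The target model mentioned above, the single-compartment equation of motion P_aw(t) = E*V(t) + R*V'(t) + P0, can be fitted to ventilator time series by ordinary least squares, as in the sketch below; the synthetic breath and parameter values are assumptions, and the equation-discovery search itself (declarative bias plus the GSAT-like heuristic) is not reproduced.

import numpy as np

def fit_equation_of_motion(p_aw, flow, volume):
    """Least-squares fit of P_aw(t) = E*V(t) + R*V'(t) + P0, returning
    elastance E, resistance R and the offset pressure P0."""
    A = np.column_stack([volume, flow, np.ones_like(volume)])
    (E, R, P0), *_ = np.linalg.lstsq(A, p_aw, rcond=None)
    return E, R, P0

# Synthetic breath with known mechanics (E = 25 mbar/L, R = 8 mbar*s/L, P0 = 5 mbar).
t = np.linspace(0, 1.0, 200)
flow = 0.5 * np.sin(np.pi * t)                 # L/s, varies so the fit is well posed
volume = np.cumsum(flow) * (t[1] - t[0])       # L, crude running integral of flow
p_aw = 25 * volume + 8 * flow + 5 + np.random.normal(0, 0.2, t.size)
print(np.round(fit_equation_of_motion(p_aw, flow, volume), 2))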
NASA Astrophysics Data System (ADS)
Jonsson, Bert; Kulaksiz, Yagmur C.; Lithner, Johan
2016-11-01
Two separate studies, Jonsson et al. (J. Math Behav. 2014;36:20-32) and Karlsson Wirebring et al. (Trends Neurosci Educ. 2015;4(1-2):6-14), showed that learning mathematics by using creative mathematical reasoning and constructing one's own solution methods can be more efficient than when students use algorithmic reasoning and are given the solution procedures. It was argued that effortful struggle was the key that explained this difference. It was also argued that the results could not be explained by the effects of transfer-appropriate processing, although this was not empirically investigated. This study evaluated the hypotheses of transfer-appropriate processing and effortful struggle in relation to the specific characteristics associated with algorithmic reasoning tasks and creative mathematical reasoning tasks. In a between-subjects design, upper-secondary students were matched according to their working memory capacity.
Relation between brain architecture and mathematical ability in children: a DBM study.
Han, Zhaoying; Davis, Nicole; Fuchs, Lynn; Anderson, Adam W; Gore, John C; Dawant, Benoit M
2013-12-01
Population-based studies indicate that between 5 and 9 percent of US children exhibit significant deficits in mathematical reasoning, yet little is understood about the brain morphological features related to mathematical performances. In this work, deformation-based morphometry (DBM) analyses have been performed on magnetic resonance images of the brains of 79 third graders to investigate whether there is a correlation between brain morphological features and mathematical proficiency. Group comparison was also performed between Math Difficulties (MD-worst math performers) and Normal Controls (NC), where each subgroup consists of 20 age and gender matched subjects. DBM analysis is based on the analysis of the deformation fields generated by non-rigid registration algorithms, which warp the individual volumes to a common space. To evaluate the effect of registration algorithms on DBM results, five nonrigid registration algorithms have been used: (1) the Adaptive Bases Algorithm (ABA); (2) the Image Registration Toolkit (IRTK); (3) the FSL Nonlinear Image Registration Tool; (4) the Automatic Registration Tool (ART); and (5) the normalization algorithm available in SPM8. The deformation field magnitude (DFM) was used to measure the displacement at each voxel, and the Jacobian determinant (JAC) was used to quantify local volumetric changes. Results show there are no statistically significant volumetric differences between the NC and the MD groups using JAC. However, DBM analysis using DFM found statistically significant anatomical variations between the two groups around the left occipital-temporal cortex, left orbital-frontal cortex, and right insular cortex. Regions of agreement between at least two algorithms based on voxel-wise analysis were used to define Regions of Interest (ROIs) to perform an ROI-based correlation analysis on all 79 volumes. Correlations between average DFM values and standard mathematical scores over these regions were found to be significant. We also found that the choice of registration algorithm has an impact on DBM-based results, so we recommend using more than one algorithm when conducting DBM studies. To the best of our knowledge, this is the first study that uses DBM to investigate brain anatomical features related to mathematical performance in a relatively large population of children. © 2013.
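The two DBM quantities used above, the deformation field magnitude (DFM) and the Jacobian determinant (JAC) of the mapping x -> x + u(x), can be computed from a dense displacement field as sketched below; the toy field is an assumption and the computation is independent of any particular registration algorithm.

import numpy as np

def dfm_and_jacobian(displacement, spacing=(1.0, 1.0, 1.0)):
    """Given a dense 3-D displacement field u with shape (3, nx, ny, nz), return
    the deformation field magnitude |u| and the Jacobian determinant of the
    mapping x -> x + u(x) at every voxel."""
    dfm = np.sqrt(np.sum(displacement**2, axis=0))
    # Jacobian of the transform: identity plus the spatial gradient of u.
    grads = [np.gradient(displacement[i], *spacing) for i in range(3)]  # du_i/dx_j
    J = np.empty(displacement.shape[1:] + (3, 3))
    for i in range(3):
        for j in range(3):
            J[..., i, j] = (i == j) + grads[i][j]
    return dfm, np.linalg.det(J)

# Toy field: a uniform 10% expansion along x should give JAC ~= 1.1 everywhere.
nx = ny = nz = 16
x = np.arange(nx, dtype=float)
u = np.zeros((3, nx, ny, nz))
u[0] = 0.1 * x[:, None, None]
dfm, jac = dfm_and_jacobian(u)
print(round(float(jac.mean()), 3), round(float(dfm.max()), 2))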
Mathematical model and coordination algorithms for ensuring complex security of an organization
NASA Astrophysics Data System (ADS)
Novoseltsev, V. I.; Orlova, D. E.; Dubrovin, A. S.; Irkhin, V. P.
2018-03-01
The mathematical model of coordination for ensuring the complex security of an organization is considered. On the basis of a random search method, three types of effective coordination algorithms, adequate to the level of mismatch concerning security, are developed: a coordination algorithm in which the coordinator's instructions dominate; a coordination algorithm in which the performers' decisions dominate; and a coordination algorithm with parity between the interests of the coordinator and the performers. Convergence of the algorithms was assessed by a computational experiment. The described coordination algorithms have the convergence property in the sense stated above, and the following regularity was revealed: the structurally simpler the algorithm, the fewer iterations are required for its convergence.
Reading Bombelli's x-purgated Algebra.
ERIC Educational Resources Information Center
Arcavi, Abraham; Bruckheimer, Maxim
1991-01-01
Presents the algorithm to approximate square roots as reproduced from the 1579 edition of an algebra book by Rafael Bombelli. The sequence of activities illustrates that the process of understanding an original source of mathematics, first at the algorithmic level and then with respect to its mathematical validity in modern terms, can be an…
Matched field localization based on CS-MUSIC algorithm
NASA Astrophysics Data System (ADS)
Guo, Shuangle; Tang, Ruichun; Peng, Linhui; Ji, Xiaopeng
2016-04-01
The problem caused by too few or too many snapshots and by coherent sources in underwater acoustic positioning is considered. A matched field localization algorithm based on CS-MUSIC (Compressive Sensing Multiple Signal Classification) is proposed, built on the sparse mathematical model of underwater positioning. The signal matrix is calculated through the SVD (Singular Value Decomposition) of the observation matrix. The observation matrix in the sparse mathematical model is replaced by the signal matrix, and a new, concise sparse mathematical model is obtained, which means that not only the scale of the localization problem but also the noise level is reduced; the new sparse mathematical model is then solved by the CS-MUSIC algorithm, which is a combination of the CS (Compressive Sensing) method and the MUSIC (Multiple Signal Classification) method. The algorithm proposed in this paper can effectively overcome the difficulties caused by correlated sources and a shortage of snapshots, and it can also reduce the time complexity and noise level of the localization problem by using the SVD of the observation matrix when the number of snapshots is large, which will be proved in this paper.
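The sketch below illustrates only the SVD/signal-subspace step with plane-wave steering vectors for a uniform linear array, i.e. plain MUSIC rather than the full CS-MUSIC combination; in matched-field processing the steering vectors would be replaced by replica fields from a propagation model, and the array, angles and noise level here are assumptions.

import numpy as np
from scipy.signal import find_peaks

def music_spectrum(X, n_sources, angles_deg, d=0.5):
    """SVD of the observation matrix X (sensors x snapshots): the left singular
    vectors beyond the first n_sources span the noise subspace; the pseudospectrum
    peaks where a steering vector is nearly orthogonal to that subspace."""
    U, _, _ = np.linalg.svd(X)
    En = U[:, n_sources:]
    m = X.shape[0]
    P = []
    for theta in np.deg2rad(angles_deg):
        a = np.exp(2j * np.pi * d * np.arange(m) * np.sin(theta))  # ULA steering vector
        P.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return np.asarray(P)

# Two uncorrelated plane waves at -20 and 35 degrees on a 16-element array.
rng = np.random.default_rng(0)
m, snapshots = 16, 50
A = np.exp(2j * np.pi * 0.5 * np.outer(np.arange(m), np.sin(np.deg2rad([-20.0, 35.0]))))
S = rng.standard_normal((2, snapshots)) + 1j * rng.standard_normal((2, snapshots))
N = 0.1 * (rng.standard_normal((m, snapshots)) + 1j * rng.standard_normal((m, snapshots)))
X = A @ S + N
grid = np.arange(-90.0, 90.5, 0.5)
P = music_spectrum(X, n_sources=2, angles_deg=grid)
peaks, _ = find_peaks(P)
print("estimated angles:", np.sort(grid[peaks[np.argsort(P[peaks])[-2:]]]))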
Research on infrared dim-point target detection and tracking under sea-sky-line complex background
NASA Astrophysics Data System (ADS)
Dong, Yu-xing; Li, Yan; Zhang, Hai-bo
2011-08-01
Target detection and tracking technology in infrared images is an important part of modern military defense systems. Detection and recognition of infrared dim-point targets under complex backgrounds is a difficult, strategically valuable and challenging research topic. The main objects detected by a carrier-borne infrared vigilance system are sea-skimming aircraft and missiles. Because of the wide field of view of the vigilance system, the target usually lies within sea clutter, which makes detection and recognition much more difficult. There are traditional point-target detection algorithms, such as adaptive background prediction. When the background has a dispersion-decreasing structure, these traditional algorithms are useful, but when the background contains large gray-level gradients, such as the sea-sky line or sea waves, a higher false-alarm rate occurs in these local areas and satisfactory results cannot be obtained. Because a dim-point target has no obvious geometric or texture features, from a mathematical perspective the detection of dim-point targets in an image amounts to the analysis of singular functions, and from an image-processing perspective the key problem is the judgment of isolated singularities in the image. In essence, dim-point target detection is the separation of target and background according to their different singularity characteristics. The image from an infrared sensor is usually accompanied by various kinds of noise, caused by the complicated background or by the sensor itself, and this noise can affect target detection and tracking. The purpose of image preprocessing is therefore to reduce the effect of noise, to raise the SNR of the image, and to increase the contrast between target and background. According to the characteristics of low sea-skimming infrared small targets, a median filter is used to eliminate noise and improve the signal-to-noise ratio; a multi-point, multi-storey vertical Sobel algorithm is then used to detect the sea-sky line so that sea and sky can be segmented in the image. Finally, a centroid tracking method is used to capture and trace the target. This method has been successfully used to track targets under complex sea-sky backgrounds.
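A much simplified stand-in for the chain described above (median filtering, a vertical Sobel response to locate the sea-sky line, centroid extraction above the line) is sketched below; the single-Sobel horizon search and the synthetic frame are assumptions, not the paper's multi-point, multi-storey implementation or its pipeline filter.

import numpy as np
from scipy import ndimage

def find_sea_sky_line_and_target(frame):
    """Median filtering for noise, a vertical-gradient (Sobel) response accumulated
    along each row to locate the sea-sky line, then the location of the brightest
    residual anomaly above that line."""
    denoised = ndimage.median_filter(frame, size=3)
    vertical_edges = np.abs(ndimage.sobel(denoised, axis=0))    # row-wise gradient
    horizon_row = int(np.argmax(vertical_edges.sum(axis=1)))    # strongest horizontal edge
    sky = denoised[:horizon_row]                                # search above the line
    residual = sky - ndimage.median_filter(sky, size=9)         # small bright anomalies
    target = ndimage.center_of_mass(residual == residual.max())
    return horizon_row, target

# Synthetic frame: dark sky over brighter sea, plus a dim 2x2 target in the sky.
rng = np.random.default_rng(2)
frame = np.vstack([np.full((60, 128), 40.0), np.full((68, 128), 90.0)])
frame += rng.normal(0, 2, frame.shape)
frame[25:27, 80:82] += 15.0
print(find_sea_sky_line_and_target(frame))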
A computerized compensator design algorithm with launch vehicle applications
NASA Technical Reports Server (NTRS)
Mitchell, J. R.; Mcdaniel, W. L., Jr.
1976-01-01
This short paper presents a computerized algorithm for the design of compensators for large launch vehicles. The algorithm is applicable to the design of compensators for linear, time-invariant, control systems with a plant possessing a single control input and multioutputs. The achievement of frequency response specifications is cast into a strict constraint mathematical programming format. An improved solution algorithm for solving this type of problem is given, along with the mathematical necessities for application to systems of the above type. A computer program, compensator improvement program (CIP), has been developed and applied to a pragmatic space-industry-related example.
Maignen, François; Hauben, Manfred; Dogné, Jean-Michel
2017-01-01
Background: The lower bound of the 95% confidence interval of measures of disproportionality (Lower95CI) is widely used in signal detection. Masking is a statistical issue by which true signals of disproportionate reporting are hidden by the presence of other medicines. The primary objective of our study is to develop and validate a mathematical framework for assessing the masking effect of Lower95CI. Methods: We have developed our new algorithm based on the masking ratio (MR) developed for the measures of disproportionality. A MR for the Lower95CI (MRCI) is proposed. A simulation study to validate this algorithm was also conducted. Results: We have established the existence of a very close mathematical relation between MR and MRCI. For a given drug–event pair, the same product will be responsible for the highest masking effect with the measure of disproportionality and its Lower95CI. The extent of masking is likely to be very similar across the two methods. An important proportion of identical drug–event associations affected by the presence of an important masking effect is revealed by the unmasking exercise, whether the proportional reporting ratio (PRR) or its confidence interval are used. Conclusion: The detection of the masking effect of Lower95CI can be automated. The real benefits of this unmasking in terms of new true-positive signals (rate of true-positive/false-positive) or time gained by the revealing of signals using this method have not been fully assessed. These benefits should be demonstrated in the context of prospective studies. PMID:28845231
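For reference, the Lower95CI statistic that the masking analysis builds on can be computed from a 2x2 reporting table with the standard PRR formulas, as below; the counts are illustrative and the masking-ratio computation itself is not reproduced here.

import math

def prr_and_lower95ci(a, b, c, d):
    """Proportional reporting ratio from a 2x2 contingency table and the lower
    bound of its 95% CI on the log scale:
      a = reports with the drug and the event,   b = same drug, other events
      c = other drugs with the event,            d = other drugs, other events"""
    prr = (a / (a + b)) / (c / (c + d))
    se_log = math.sqrt(1 / a - 1 / (a + b) + 1 / c - 1 / (c + d))
    lower95 = math.exp(math.log(prr) - 1.96 * se_log)
    return prr, lower95

# Example counts (illustrative only).
prr, lower = prr_and_lower95ci(a=30, b=970, c=200, d=98800)
print(f"PRR = {prr:.2f}, Lower95CI = {lower:.2f}")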
An effective automatic procedure for testing parameter identifiability of HIV/AIDS models.
Saccomani, Maria Pia
2011-08-01
Realistic HIV models tend to be rather complex and many recent models proposed in the literature could not yet be analyzed by traditional identifiability testing techniques. In this paper, we check a priori global identifiability of some of these nonlinear HIV models taken from the recent literature, by using a differential algebra algorithm based on previous work of the author. The algorithm is implemented in a software tool, called DAISY (Differential Algebra for Identifiability of SYstems), which has been recently released (DAISY is freely available on the web site http://www.dei.unipd.it/~pia/ ). The software can be used to automatically check global identifiability of (linear and) nonlinear models described by polynomial or rational differential equations, thus providing a general and reliable tool to test global identifiability of several HIV models proposed in the literature. It can be used by researchers with a minimum of mathematical background.
Twelve automated thresholding methods for segmentation of PET images: a phantom study.
Prieto, Elena; Lecumberri, Pablo; Pagola, Miguel; Gómez, Marisol; Bilbao, Izaskun; Ecay, Margarita; Peñuelas, Iván; Martí-Climent, Josep M
2012-06-21
Tumor volume delineation over positron emission tomography (PET) images is of great interest for proper diagnosis and therapy planning. However, standard segmentation techniques (manual or semi-automated) are operator dependent and time consuming while fully automated procedures are cumbersome or require complex mathematical development. The aim of this study was to segment PET images in a fully automated way by implementing a set of 12 automated thresholding algorithms, classical in the fields of optical character recognition, tissue engineering or non-destructive testing images in high-tech structures. Automated thresholding algorithms select a specific threshold for each image without any a priori spatial information of the segmented object or any special calibration of the tomograph, as opposed to usual thresholding methods for PET. Spherical (18)F-filled objects of different volumes were acquired on clinical PET/CT and on a small animal PET scanner, with three different signal-to-background ratios. Images were segmented with 12 automatic thresholding algorithms and results were compared with the standard segmentation reference, a threshold at 42% of the maximum uptake. Ridler and Ramesh thresholding algorithms, based on clustering and histogram-shape information, respectively, provided better results than the classical 42%-based threshold (p < 0.05). We have herein demonstrated that fully automated thresholding algorithms can provide better results than classical PET segmentation tools.
Twelve automated thresholding methods for segmentation of PET images: a phantom study
NASA Astrophysics Data System (ADS)
Prieto, Elena; Lecumberri, Pablo; Pagola, Miguel; Gómez, Marisol; Bilbao, Izaskun; Ecay, Margarita; Peñuelas, Iván; Martí-Climent, Josep M.
2012-06-01
Tumor volume delineation over positron emission tomography (PET) images is of great interest for proper diagnosis and therapy planning. However, standard segmentation techniques (manual or semi-automated) are operator dependent and time consuming while fully automated procedures are cumbersome or require complex mathematical development. The aim of this study was to segment PET images in a fully automated way by implementing a set of 12 automated thresholding algorithms, classical in the fields of optical character recognition, tissue engineering or non-destructive testing images in high-tech structures. Automated thresholding algorithms select a specific threshold for each image without any a priori spatial information of the segmented object or any special calibration of the tomograph, as opposed to usual thresholding methods for PET. Spherical 18F-filled objects of different volumes were acquired on clinical PET/CT and on a small animal PET scanner, with three different signal-to-background ratios. Images were segmented with 12 automatic thresholding algorithms and results were compared with the standard segmentation reference, a threshold at 42% of the maximum uptake. Ridler and Ramesh thresholding algorithms, based on clustering and histogram-shape information, respectively, provided better results than the classical 42%-based threshold (p < 0.05). We have herein demonstrated that fully automated thresholding algorithms can provide better results than classical PET segmentation tools.
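One of the methods named above, the Ridler (isodata) clustering threshold, is simple enough to sketch directly; the synthetic "hot sphere" image and the convergence tolerance are assumptions.

import numpy as np

def ridler_isodata_threshold(image, tol=0.5):
    """Ridler-Calvard (isodata) clustering threshold: iterate the threshold as the
    mean of the average foreground and average background intensities until it
    stabilizes."""
    t = image.mean()
    while True:
        fg = image[image > t]
        bg = image[image <= t]
        t_new = 0.5 * (fg.mean() + bg.mean())
        if abs(t_new - t) < tol:
            return t_new
        t = t_new

# Synthetic PET-like slice: noisy background with a hot sphere in the middle.
rng = np.random.default_rng(3)
img = rng.normal(100, 10, (64, 64))
yy, xx = np.mgrid[:64, :64]
img[(yy - 32) ** 2 + (xx - 32) ** 2 < 8 ** 2] += 400
t = ridler_isodata_threshold(img)
print("threshold:", round(float(t), 1), "segmented voxels:", int((img > t).sum()))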
ERIC Educational Resources Information Center
Ercikan, Kadriye; Chen, Michelle Y.; Lyons-Thomas, Juliette; Goodrich, Shawna; Sandilands, Debra; Roth, Wolff-Michael; Simon, Marielle
2015-01-01
The purpose of this research is to examine the comparability of mathematics and science scores for students from English language backgrounds (ELB) and non-English language backgrounds (NELB). We examine the relationship between English reading proficiency and performance on mathematics and science assessments in Australia, Canada, the United…
The Impact of Critical Thinking and Logico-Mathematical Intelligence on Algorithmic Design Skills
ERIC Educational Resources Information Center
Korkmaz, Ozgen
2012-01-01
The present study aims to reveal the impact of students' critical thinking and logico-mathematical intelligence levels on their algorithm design skills. This research was a descriptive study carried out by survey methods. The sample consisted of 45 first-year educational faculty undergraduate students. The data was collected by…
VLSI implementation of RSA encryption system using ancient Indian Vedic mathematics
NASA Astrophysics Data System (ADS)
Thapliyal, Himanshu; Srinivas, M. B.
2005-06-01
This paper proposes the hardware implementation of RSA encryption/decryption algorithm using the algorithms of Ancient Indian Vedic Mathematics that have been modified to improve performance. The recently proposed hierarchical overlay multiplier architecture is used in the RSA circuitry for multiplication operation. The most significant aspect of the paper is the development of a division architecture based on Straight Division algorithm of Ancient Indian Vedic Mathematics and embedding it in RSA encryption/decryption circuitry for improved efficiency. The coding is done in Verilog HDL and the FPGA synthesis is done using Xilinx Spartan library. The results show that RSA circuitry implemented using Vedic division and multiplication is efficient in terms of area/speed compared to its implementation using conventional multiplication and division architectures.
Technology Focus: Enhancing Conceptual Knowledge of Linear Programming with a Flash Tool
ERIC Educational Resources Information Center
Garofalo, Joe; Cory, Beth
2007-01-01
Mathematical knowledge can be categorized in different ways. One commonly used way is to distinguish between procedural mathematical knowledge and conceptual mathematical knowledge. Procedural knowledge of mathematics refers to formal language, symbols, algorithms, and rules. Conceptual knowledge is essential for meaningful understanding of…
ERIC Educational Resources Information Center
van der Hoff, Quay
2017-01-01
The science of biology has been transforming dramatically and so the need for a stronger mathematical background for biology students has increased. Biological students reaching the senior or post-graduate level often come to realize that their mathematical background is insufficient. Similarly, students in a mathematics programme, interested in…
ERIC Educational Resources Information Center
Stevens, Tara; Aguirre-Munoz, Zenaida; Harris, Gary; Higgins, Raegan; Liu, Xun
2013-01-01
growth of teachers with more and less mathematics background as the teachers participated in professional development across two summers. Professional development activities were associated with increases in teachers' self-efficacy; however, without considering mathematics…
ERIC Educational Resources Information Center
Sayeski, Kristin L.; Paulsen, Kim J.
2010-01-01
In many general education classrooms today, teachers are using "reform" mathematics curricula. These curricula emphasize the application of mathematics in real-life contexts and include such practices as collaborative, group problem solving and student-generated algorithms. Students with learning disabilities in the area of mathematics can…
NASA Astrophysics Data System (ADS)
Demaine, Erik
2012-02-01
Our understanding of the mathematics and algorithms behind paper folding, and geometric folding in general, has increased dramatically over the past several years. These developments have found a surprisingly broad range of applications. In the art of origami, it has helped spur the technical origami revolution. In engineering and science, it has helped solve problems in areas such as manufacturing, robotics, graphics, and protein folding. On the recreational side, it has led to new kinds of folding puzzles and magic. I will give an overview of the mathematics and algorithms of folding, with a focus on new mathematics and sculpture.
NASA Astrophysics Data System (ADS)
Turan, Muhammed K.; Sehirli, Eftal; Elen, Abdullah; Karas, Ismail R.
2015-07-01
Gel electrophoresis (GE) is one of the most widely used methods to separate DNA, RNA and protein molecules according to size, weight and quantity in many areas such as genetics, molecular biology, biochemistry and microbiology. The main way to separate each molecule is to find the borders of each molecule fragment. This paper presents a software application that shows the column edges of DNA fragments in three steps. In the first step, the application obtains lane histograms of agarose gel electrophoresis images by projection onto the x-axis. In the second step, it utilizes the k-means clustering algorithm to classify the point values of the lane histogram into left-side values, right-side values and undesired values. In the third step, the column edges of the DNA fragments are shown by using a mean algorithm and mathematical operations to separate DNA fragments from the background in a fully automated way. In addition, the application presents the locations of DNA fragments and how many DNA fragments exist in images captured by a scientific camera.
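A hedged sketch of the three steps (x-axis projection, k-means clustering of the lane histogram, edge extraction) is given below; for simplicity it clusters the histogram into two groups (lane vs. background) rather than the three point-value classes used by the application, and the synthetic gel image is an assumption.

import numpy as np
from sklearn.cluster import KMeans

def lane_histogram_and_edges(gel_image, n_clusters=2):
    """Step 1: project the gel image onto the x-axis to get the lane histogram.
    Step 2: cluster the histogram values with k-means.
    Step 3: report the edges of runs of lane columns as DNA-fragment column edges."""
    profile = gel_image.sum(axis=0).astype(float)               # projection on x-axis
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(
        profile.reshape(-1, 1))
    lane_label = labels[int(np.argmax(profile))]                # cluster of brightest columns
    is_lane = (labels == lane_label).astype(int)
    changes = np.flatnonzero(np.diff(np.concatenate([[0], is_lane, [0]])))
    return profile, list(zip(changes[::2], changes[1::2] - 1))  # (left, right) edges

# Synthetic gel: dark background with three bright vertical lanes.
rng = np.random.default_rng(4)
gel = rng.normal(20, 3, (100, 120))
for left in (15, 55, 95):
    gel[:, left:left + 12] += 80
_, edges = lane_histogram_and_edges(gel)
print("lane column edges:", edges)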
Vortex methods for separated flows
NASA Technical Reports Server (NTRS)
Spalart, Philippe R.
1988-01-01
The numerical solution of the Euler or Navier-Stokes equations by Lagrangian vortex methods is discussed. The mathematical background is presented and includes the relationship with traditional point-vortex studies, convergence to smooth solutions of the Euler equations, and the essential differences between two and three-dimensional cases. The difficulties in extending the method to viscous or compressible flows are explained. Two-dimensional flows around bluff bodies are emphasized. Robustness of the method and the assessment of accuracy, vortex-core profiles, time-marching schemes, numerical dissipation, and efficient programming are treated. Operation counts for unbounded and periodic flows are given, and two algorithms designed to speed up the calculations are described.
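A minimal example of the Lagrangian point-vortex idea discussed above is the 2-D Biot-Savart induction law for a set of point vortices, marched here with a plain forward-Euler step; the two-vortex configuration and the time step are assumptions, and no viscous or three-dimensional effects are included.

import numpy as np

def point_vortex_velocities(z, gamma):
    """Velocities induced on each 2-D point vortex by all the others, with complex
    positions z and circulations gamma:
        dz_k/dt = conj( (1/(2*pi*i)) * sum_{j != k} gamma_j / (z_k - z_j) )."""
    dz = z[:, None] - z[None, :]                     # pairwise separations
    inv = np.zeros_like(dz)
    off_diag = ~np.eye(z.size, dtype=bool)
    inv[off_diag] = 1.0 / dz[off_diag]               # exclude self-induction
    w = (gamma[None, :] * inv).sum(axis=1) / (2j * np.pi)
    return np.conj(w)

# Two co-rotating vortices of equal strength orbit their midpoint; the separation
# should stay (approximately) constant under the time marching.
z = np.array([1.0 + 0j, -1.0 + 0j])
gamma = np.array([1.0, 1.0])
dt, steps = 0.02, 2000
for _ in range(steps):                               # simple forward-Euler march
    z = z + dt * point_vortex_velocities(z, gamma)
print("separation stays ~2:", round(float(abs(z[0] - z[1])), 3))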
ERIC Educational Resources Information Center
Gultepe, Nejla; Yalcin Celik, Ayse; Kilic, Ziya
2013-01-01
The purpose of the study was to examine the effects of students' conceptual understanding of chemical concepts and mathematical processing skills on algorithmic problem-solving skills. The sample (N = 554) included grades 9, 10, and 11 students in Turkey. Data were collected using the instrument "MPC Test" and with interviews. The MPC…
NASA Astrophysics Data System (ADS)
Ramírez-López, A.; Romero-Romo, M. A.; Muñoz-Negron, D.; López-Ramírez, S.; Escarela-Pérez, R.; Duran-Valencia, C.
2012-10-01
Computational models are developed to create grain structures using mathematical algorithms based on chaos theory, such as cellular automata, geometrical models, fractals and stochastic methods. Because of the chaotic nature of grain structures, some of the most popular routines are based on the Monte Carlo method, statistical distributions and random-walk methods, which can be easily programmed and included in nested loops. Nevertheless, grain structures are often not well defined, as a result of computational errors and numerical inconsistencies in the mathematical methods. Due to the finite representation of numbers and the numerical restrictions during the simulation of solidification, damaged images appear on the screen. These images must be repaired to obtain a good measurement of grain geometrical properties. In the present work, mathematical algorithms were developed to repair, measure and characterize grain structures obtained from cellular automata. An appropriate measurement of grain size and the correct identification of interfaces and lengths are very important topics in materials science because they are the representation and validation of mathematical models with real samples. As a result, the developed algorithms are tested and proved to be appropriate and efficient for eliminating the errors and characterizing the grain structures.
A Mathematical Basis for the Safety Analysis of Conflict Prevention Algorithms
NASA Technical Reports Server (NTRS)
Maddalon, Jeffrey M.; Butler, Ricky W.; Munoz, Cesar A.; Dowek, Gilles
2009-01-01
In air traffic management systems, a conflict prevention system examines the traffic and provides ranges of guidance maneuvers that avoid conflicts. This guidance takes the form of ranges of track angles, vertical speeds, or ground speeds. These ranges may be assembled into prevention bands: maneuvers that should not be taken. Unlike conflict resolution systems, which presume that the aircraft already has a conflict, conflict prevention systems show conflicts for all maneuvers. Without conflict prevention information, a pilot might perform a maneuver that causes a near-term conflict. Because near-term conflicts can lead to safety concerns, strong verification of correct operation is required. This paper presents a mathematical framework to analyze the correctness of algorithms that produce conflict prevention information. This paper examines multiple mathematical approaches: iterative, vector algebraic, and trigonometric. The correctness theories are structured first to analyze conflict prevention information for all aircraft. Next, these theories are augmented to consider aircraft which will create a conflict within a given lookahead time. Certain key functions for a candidate algorithm that satisfy this mathematical basis are presented; however, the proof that a full algorithm using these functions completely satisfies the definition of safety is not provided.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bartlett, Roscoe
2010-03-31
GlobiPack contains a small collection of optimization globalization algorithms. These algorithms are used by optimization and various nonlinear equation solver algorithms, as the line-search procedure with Newton and quasi-Newton optimization and nonlinear equation solver methods. They are standard published 1-D line-search algorithms such as those described in Nocedal and Wright, Numerical Optimization, 2nd edition, 2006. One set of algorithms was copied and refactored from the existing open-source Trilinos package MOOCHO, where the line-search code is used to globalize SQP methods. This software is generic to any mathematical optimization problem where smooth derivatives exist; there is no specific connection or mention whatsoever to any specific application, and one cannot find more general mathematical software.
The effect of explanations on mathematical reasoning tasks
NASA Astrophysics Data System (ADS)
Norqvist, Mathias
2018-01-01
Studies in mathematics education often point to the necessity for students to engage in more cognitively demanding activities than just solving tasks by applying given solution methods. Previous studies have shown that students who engage in creative mathematically founded reasoning to construct a solution method perform significantly better in follow-up tests than students who are given a solution method and engage in algorithmic reasoning. However, teachers and textbooks, at least occasionally, provide explanations together with an algorithmic method, and this could possibly be more efficient than creative reasoning. In this study, three matched groups practiced with either creative, algorithmic, or explained algorithmic tasks. The main finding was that students who practiced with creative tasks outperformed the students who practiced with explained algorithmic tasks in a post-test, despite a much lower practice score. The two groups that were given a solution method performed similarly in both practice and post-test, even though one group received an explanation of the given solution method. Additionally, there were some differences between the groups in which variables predicted the post-test score.
Parametric diagnosis of the adaptive gas path in the automatic control system of the aircraft engine
NASA Astrophysics Data System (ADS)
Kuznetsova, T. A.
2017-01-01
The paper dwells on an adaptive multimode mathematical model of a gas-turbine aircraft engine (GTE) embedded in the automatic control system (ACS). The mathematical model is based on the throttle performances and is characterized by high accuracy of engine parameter identification in stationary and dynamic modes. The proposed on-board engine model is a linearized low-level state-space simulation. Engine health is identified through the matrix of influence coefficients, which is determined by the high-level GTE mathematical model based on measurements of gas-dynamic parameters. In the automatic control algorithm, the sum of squares of the deviations between the parameters of the mathematical model and the real GTE is minimized. The proposed mathematical model is effectively used for detecting gas-path defects in on-line GTE health monitoring. The accuracy of the on-board mathematical model embedded in the ACS determines the quality of adaptive control and the reliability of the engine. To improve the accuracy of the identification solutions and to ensure stability, the Monte Carlo numerical method was used. A parametric diagnostic algorithm based on the LPτ sequence was developed and tested. Analysis of the results suggests that the application of the developed algorithms achieves higher identification accuracy and reliability than similar models used in practice.
Van Houdenhoven, Mark; van Oostrum, Jeroen M; Hans, Erwin W; Wullink, Gerhard; Kazemier, Geert
2007-09-01
An operating room (OR) department has adopted an efficient business model and subsequently investigated how efficiency could be further improved. The aim of this study is to show the efficiency improvement of lowering organizational barriers and applying advanced mathematical techniques. We applied advanced mathematical algorithms in combination with scenarios that model relaxation of various organizational barriers using prospectively collected data. The setting is the main inpatient OR department of a university hospital, which sets its surgical case schedules 2 wk in advance using a block planning method. The main outcome measures are the number of freed OR blocks and OR utilization. Lowering organizational barriers and applying mathematical algorithms can yield a 4.5% point increase in OR utilization (95% confidence interval 4.0%-5.0%). This is obtained by reducing the total required OR time. Efficient OR departments can further improve their efficiency. The paper shows that a radical cultural change that comprises the use of mathematical algorithms and lowering organizational barriers improves OR utilization.
Could Elementary Mathematics Textbooks Help Give Attention to Reasons in the Classroom?
ERIC Educational Resources Information Center
Newton, Douglas P.; Newton, Lynn D.
2007-01-01
Trainee teachers, new and non-specialist teachers of elementary mathematics have a tendency to avoid thought about reasons in mathematics. Instead, they tend to favour the development of computational skill through the rote application of procedures, routines and algorithms. Could elementary mathematics textbooks serve as models of practice and…
Questions To Ask and Issues To Consider While Supervising Elementary Mathematics Student Teachers.
ERIC Educational Resources Information Center
Philip, Randolph A.
2000-01-01
Presents four questions to consider when supervising elementary mathematics teachers, who come with many preconceptions about teaching and learning mathematics: What mathematical concepts, procedures, or algorithms are you teaching? Are the concepts and procedures part of a unit? What types of questions do you pose? and What understanding of…
Saccomani, Maria Pia; Audoly, Stefania; Bellu, Giuseppina; D'Angiò, Leontina
2010-04-01
DAISY (Differential Algebra for Identifiability of SYstems) is a recently developed computer algebra software tool which can be used to automatically check global identifiability of (linear and) nonlinear dynamic models described by differential equations involving polynomial or rational functions. Global identifiability is a fundamental prerequisite for model identification, which is important not only for biological or medical systems but also for many physical and engineering systems derived from first principles. Lack of identifiability implies that the parameter estimation techniques may not fail but any obtained numerical estimates will be meaningless. The software does not require understanding of the underlying mathematical principles and can be used by researchers in applied fields with a minimum of mathematical background. We illustrate the DAISY software by checking the a priori global identifiability of two benchmark nonlinear models taken from the literature. The analysis of these two examples includes comparison with other methods and demonstrates how identifiability analysis is simplified by this tool. We then illustrate the identifiability analysis of two further examples, including discussion of some specific aspects related to the role of observability and knowledge of initial conditions in testing identifiability and to the computational complexity of the software. The main focus of this paper is not on the description of the mathematical background of the algorithm, which has been presented elsewhere, but on illustrating its use and on some of its more interesting features. DAISY is available on the web site http://www.dei.unipd.it/~pia/. 2010 Elsevier Ltd. All rights reserved.
German undergraduate mathematics enrolment numbers: background and change
NASA Astrophysics Data System (ADS)
Ammann, Claudia; Frauendiener, Jörg; Holton, Derek
2010-06-01
Before we consider the German tertiary system, we review the education system and consider other relevant background details. We then concentrate on the tertiary system and observe that the mathematical enrolments are keeping up with the overall student enrolments. At the same time, the first year mathematics enrolments for women are greater than that for men, although more men are still studying mathematics at university. Finally, we note that the German economy seems to play a role in mathematics enrolments though not necessarily to its comparative detriment.
Effects of Background and School Factors on the Mathematics Achievement.
ERIC Educational Resources Information Center
Papanastasiou, Constantinos
2002-01-01
Using a structural equation model, this study investigated the mathematics achievement of eighth graders in Cyprus enrolled in the year 1994-1995. The model considered two exogenous constructs related to student background and five endogenous constructs. Although attitudes, teaching, and beliefs had direct effect on mathematics outcomes, these…
Profiling Student Use of Calculators in the Learning of High School Mathematics
ERIC Educational Resources Information Center
Crowe, Cheryll E.; Ma, Xin
2010-01-01
Using data from the 2005 National Assessment of Educational Progress, students' use of calculators in the learning of high school mathematics was profiled based on their family background, curriculum background, and advanced mathematics coursework. A statistical method new to educational research--classification and regression trees--was applied…
The Relationship among Mathematics Achievement, Affective Variables and Home Background.
ERIC Educational Resources Information Center
Wong, Ngai-ying
1992-01-01
Investigated the relationships among mathematics achievement, affect, and home background for Hong Kong students (n=1766) in grades 7-13. Achievement was most closely related to academic and nonacademic self-concepts and attitudes toward mathematics, and the latter was most influenced by self- and parental expectations. (LDR)
High-Ability Women and Men in Undergraduate Mathematics and Chemistry Courses.
ERIC Educational Resources Information Center
Bali, John; And Others
1985-01-01
Using samples of college students of very high ability and strong academic background, sex differences in performance and perceptions of performance in introductory chemistry and mathematics courses were studied. Considerable differences favoring men were found, and these appeared to be due primarily to differences in mathematics background.…
The Evolution of Random Number Generation in MUVES
2017-01-01
Documents the evolution of random number generation in MUVES, including the mathematical basis and statistical justification for the algorithms used in the code. The working code provided produces results identical to the current … questionable numerical and statistical properties. The development of the modern system is traced through software change requests, resulting in a random number …
NASA Astrophysics Data System (ADS)
Ryzhikov, I. S.; Semenkin, E. S.
2017-02-01
This study is focused on solving an inverse mathematical modelling problem for dynamical systems based on observation data and control inputs. The mathematical model is sought in the form of a linear differential equation, which describes a system with multiple inputs and a single output, together with a vector of initial point coordinates. The problem is complex and multimodal, and for this reason an evolutionary optimization technique oriented toward dynamical system identification was applied. To improve its performance, an algorithm restart operator was implemented.
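The abstract does not specify how the restart operator works, so the following is only a generic sketch of how such an operator is commonly wired into an evolutionary loop: when the best objective value stagnates for a fixed number of generations, the population is re-seeded while keeping the incumbent best. The objective function and all parameters are placeholders.

```python
# Generic sketch of a restart operator inside a simple evolutionary loop;
# the stagnation rule and parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def objective(x):                      # placeholder identification error
    return np.sum((x - 1.5) ** 2)

dim, pop_size, sigma = 4, 20, 0.3
pop = rng.uniform(-5, 5, size=(pop_size, dim))
best_x, best_f = None, np.inf
stall, max_stall = 0, 15               # restart after 15 stagnant generations

for gen in range(300):
    fitness = np.array([objective(x) for x in pop])
    i = np.argmin(fitness)
    if fitness[i] < best_f - 1e-12:
        best_x, best_f, stall = pop[i].copy(), fitness[i], 0
    else:
        stall += 1
    if stall >= max_stall:             # restart operator: re-seed the population
        pop = rng.uniform(-5, 5, size=(pop_size, dim))
        pop[0] = best_x                # keep the incumbent (elitist restart)
        stall = 0
        continue
    parents = pop[np.argsort(fitness)[: pop_size // 2]]
    children = parents + sigma * rng.standard_normal(parents.shape)
    pop = np.vstack([parents, children])

print(best_f)
```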
ERIC Educational Resources Information Center
Rye, James A.
1999-01-01
Presents an activity that integrates mathematics and science and focuses on estimation, percent, proportionality, ratio, interconverting units, deriving algorithms mathematically, energy transformation, interactions of energy and matter, bioavailability, composition, density, inferring, and data gathering through scientific interpretation.…
Designing Fuzzy Algorithms to Develop Healthy Dietary Pattern
Asghari, Golaleh; Ejtahed, Hanieh-Sadat; Sarsharzadeh, Mohammad Mahdi; Nazeri, Pantea; Mirmiran, Parvin
2013-01-01
Background Fuzzy logic, a mathematical approach, defines the percentage of desirability for recommended amount of food groups and describes the range of intakes, from deficiency to excess. Objectives The purpose of this research was to find the best fuzzy dietary pattern that constraints energy and nutrients by the iterative algorithm. Materials and Methods An index is derived that reflects how closely the diet of an individual meets all the nutrient requirements set by the dietary reference intake. Fuzzy pyramid pattern was applied for the energy levels from 1000 to 4000 Kcal which estimated the range of recommended servings for seven food groups including fruits, vegetables, grains, meats, milk, oils, fat and added sugar. Results The optimum (lower attention – upper attention) recommended servings per day for fruits, vegetables, grain, meat, dairy, and oils of the 2000 kcal diet were 4.06 (3.75-4.25), 6.69 (6.25-7.00), 5.69 (5.75-6.25), 4.94 (4.5-5.2), 2.75(2.50-3.00), and 2.56 (2.5-2.75), respectively. The fuzzy pattern met most recommended nutrient intake levels except for potassium and vitamin E, which were estimated at 98% and 69% of the dietary reference intake, respectively. Conclusions Using fuzzy logic provides an elegant mathematical solution for finding the optimum point of food groups in dietary pattern. PMID:24454416
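The desirability ranges described above, running from deficiency through a fully acceptable plateau to excess, are commonly encoded with trapezoidal membership functions. The sketch below is a generic illustration with made-up breakpoints, not the authors' actual fuzzy pyramid parameters.

```python
# Trapezoidal fuzzy desirability for a food-group intake (illustrative numbers).
# Desirability rises from "deficient" to fully acceptable, stays at 1 over the
# recommended plateau, then falls back to 0 toward "excess".
def trapezoid(x, a, b, c, d):
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if a < x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

# Hypothetical vegetable servings/day for a 2000 kcal diet: plateau 6.25-7.00.
for servings in (4.0, 6.0, 6.5, 7.5, 9.0):
    print(servings, round(trapezoid(servings, 5.0, 6.25, 7.00, 8.5), 2))
```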
NASA Astrophysics Data System (ADS)
Knypiński, Łukasz
2017-12-01
In this paper an algorithm for the optimization of the excitation system of line-start permanent magnet synchronous motors is presented. On the basis of this algorithm, software was developed in the Borland Delphi environment. The software consists of two independent modules: an optimization solver, and a module including the mathematical model of a synchronous motor with a self-start ability. The optimization module contains the bat algorithm procedure. The mathematical model of the motor has been developed in an Ansys Maxwell environment. In order to determine the functional parameters of the motor, additional scripts in the Visual Basic language were developed. Selected results of the optimization calculation are presented and compared with results for the particle swarm optimization algorithm.
NASA Technical Reports Server (NTRS)
Wang, Lui; Bayer, Steven E.
1991-01-01
Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithms concepts are introduced, genetic algorithm applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
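A minimal sketch of the concepts introduced above (a population, selection, crossover, and mutation) on the classic one-max toy problem; the operators and parameters are generic choices, not those of the NASA software tool.

```python
# Minimal genetic algorithm on the "one-max" toy problem: maximize the number
# of 1-bits in a string.  Tournament selection is used for brevity.
import random

random.seed(1)
N_BITS, POP, GENS, P_MUT = 30, 40, 60, 0.02

def fitness(ind):
    return sum(ind)

def tournament(pop):
    a, b = random.sample(pop, 2)
    return a if fitness(a) >= fitness(b) else b

pop = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(POP)]
for _ in range(GENS):
    new_pop = []
    while len(new_pop) < POP:
        p1, p2 = tournament(pop), tournament(pop)
        cut = random.randrange(1, N_BITS)            # one-point crossover
        child = p1[:cut] + p2[cut:]
        child = [1 - g if random.random() < P_MUT else g for g in child]
        new_pop.append(child)
    pop = new_pop

print(max(fitness(ind) for ind in pop))              # close to N_BITS
```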
Research on registration algorithm for check seal verification
NASA Astrophysics Data System (ADS)
Wang, Shuang; Liu, Tiegen
2008-03-01
Nowadays seals play an important role in China. With the development of the social economy, the traditional method of manual check-seal identification can no longer meet the needs of banking transactions. This paper focuses on pre-processing and registration algorithms for check seal verification using the theory of image processing and pattern recognition. First, the complex characteristics of check seals are analyzed. To eliminate differences in producing conditions and the disturbance caused by background and writing in the check image, many methods are used in the pre-processing stage of check seal verification, such as color component transformation, linear transformation to a gray-scale image, median filtering, Otsu thresholding, and the closing and labeling operations of mathematical morphology. After these processes, a clean binary seal image is obtained. On the basis of the traditional registration algorithm, a double-level registration method including a rough and a precise registration step is proposed. The deflection angle resolved by the precise registration step is accurate to 0.1°. The paper introduces the concepts of inner difference and outer difference and uses the percentages of inner and outer difference to judge whether a seal is genuine or fake. Experimental results on a large set of check seals are satisfactory. They show that the presented methods and algorithms are robust to noisy sealing conditions and tolerate within-class differences satisfactorily.
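Two of the pre-processing steps listed above, Otsu thresholding and a morphological closing, can be sketched as follows on a synthetic gray-scale image; this is only an illustration of those steps, not the authors' full pipeline or their double-level registration method.

```python
# Sketch of two pre-processing steps: Otsu thresholding of a gray-scale seal
# image followed by a morphological closing.  Synthetic image for illustration.
import numpy as np
from scipy import ndimage

def otsu_threshold(gray):
    """Return the threshold maximizing between-class variance (Otsu)."""
    hist, edges = np.histogram(gray, bins=256, range=(0.0, 1.0))
    hist = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    best_t, best_var = 0.0, -1.0
    for i in range(1, 256):
        w0, w1 = hist[:i].sum(), hist[i:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (hist[:i] * centers[:i]).sum() / w0
        m1 = (hist[i:] * centers[i:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, centers[i]
    return best_t

rng = np.random.default_rng(0)
img = rng.normal(0.2, 0.05, (64, 64))        # background
img[20:44, 20:44] += 0.6                     # bright "seal" region
img = np.clip(img, 0, 1)

binary = img > otsu_threshold(img)
closed = ndimage.binary_closing(binary, structure=np.ones((3, 3)))
print(binary.sum(), closed.sum())
```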
What Have We Achieved in 50 Years of Equity in School Mathematics?
ERIC Educational Resources Information Center
Jorgensen, Robyn; Lowrie, Tom
2015-01-01
This paper explores the relationship between social backgrounds and geographical locations with mathematical achievement. Using the national testing system in Australia, correlations between the variables were explored and it was found that students from rural and low SES backgrounds are still being marginalised in school mathematics--in terms of…
NASA Technical Reports Server (NTRS)
Parrish, R. V.; Dieudonne, J. E.; Filippas, T. A.
1971-01-01
An algorithm employing a modified sequential random perturbation, or creeping random search, was applied to the problem of optimizing the parameters of a high-energy beam transport system. The stochastic solution of the mathematical model for first-order magnetic-field expansion allows the inclusion of state-variable constraints, and the inclusion of parameter constraints allowed by the method of algorithm application eliminates the possibility of infeasible solutions. The mathematical model and the algorithm were programmed for a real-time simulation facility; thus, two important features are provided to the beam designer: (1) a strong degree of man-machine communication (even to the extent of bypassing the algorithm and applying analog-matching techniques), and (2) extensive graphics for displaying information concerning both algorithm operation and transport-system behavior. Chromatic aberration was also included in the mathematical model and in the optimization process. Results presented show this method yielding better solutions (in terms of resolution) to the particular problem than those of a standard analog program, as well as demonstrating the flexibility, in terms of elements, constraints, and chromatic aberration, allowed by user interaction with both the algorithm and the stochastic model. Examples of slit usage and a limited comparison of predicted results with actual results obtained with a 600 MeV cyclotron are given.
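A creeping random search is simple to express: perturb the current parameter vector with a small random step, reject any step that violates the parameter constraints, and keep a feasible step only if it improves the objective. The sketch below uses a toy objective in place of the beam-transport resolution cost.

```python
# Generic "creeping random search": repeatedly perturb the current parameter
# vector with small random steps, keep a step only if it improves the
# objective and stays inside the parameter constraints.  Toy objective.
import numpy as np

rng = np.random.default_rng(0)

def objective(p):                       # stand-in for the beam-resolution cost
    return np.sum((p - 0.3) ** 2)

lower, upper = np.zeros(5), np.ones(5)  # parameter constraints (box bounds)
p = rng.uniform(lower, upper)
f = objective(p)

step = 0.1
for _ in range(5000):
    candidate = p + step * rng.standard_normal(p.shape)
    if np.any(candidate < lower) or np.any(candidate > upper):
        continue                        # infeasible perturbation: reject
    fc = objective(candidate)
    if fc < f:                          # creep: accept only improvements
        p, f = candidate, fc

print(p.round(3), f)
```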
Grossi, Enzo
2006-05-03
In recent years a number of algorithms for cardiovascular risk assessment have been proposed to the medical community. These algorithms consider a number of variables and express their results as the percentage risk of developing a major fatal or non-fatal cardiovascular event in the following 10 to 20 years. The author has identified three major pitfalls of these algorithms, linked to the limitations of the classical statistical approach in dealing with this kind of nonlinear and complex information. The pitfalls are the inability to capture the disease complexity, the inability to capture process dynamics, and the wide confidence interval of individual risk assessment. Artificial intelligence tools can provide a potential advantage in trying to overcome these limitations. The theoretical background and some application examples related to artificial neural networks and fuzzy logic are reviewed and discussed. The use of predictive algorithms to assess the individual absolute risk of future cardiovascular events is currently hampered by methodological and mathematical flaws. The use of newer approaches, such as fuzzy logic and artificial neural networks, linked to artificial intelligence, seems to better address both the challenge of the increasing complexity resulting from the correlation between predisposing factors and data on the occurrence of cardiovascular events, and the prediction of future events on an individual level.
Research on an augmented Lagrangian penalty function algorithm for nonlinear programming
NASA Technical Reports Server (NTRS)
Frair, L.
1978-01-01
The augmented Lagrangian (ALAG) Penalty Function Algorithm for optimizing nonlinear mathematical models is discussed. The mathematical models of interest are deterministic in nature and finite dimensional optimization is assumed. A detailed review of penalty function techniques in general and the ALAG technique in particular is presented. Numerical experiments are conducted utilizing a number of nonlinear optimization problems to identify an efficient ALAG Penalty Function Technique for computer implementation.
Chen, Kun; Zhang, Hongyuan; Wei, Haoyun; Li, Yan
2014-08-20
In this paper, we propose an improved subtraction algorithm for rapid recovery of Raman spectra that can substantially reduce the computation time. This algorithm is based on an improved Savitzky-Golay (SG) iterative smoothing method, which involves two key novel approaches: (a) the use of the Gauss-Seidel method and (b) the introduction of a relaxation factor into the iterative procedure. By applying a novel successive relaxation (SG-SR) iterative method to the relaxation factor, additional improvement in the convergence speed over the standard Savitzky-Golay procedure is realized. The proposed improved algorithm (the RIA-SG-SR algorithm), which uses SG-SR-based iteration instead of Savitzky-Golay iteration, has been optimized and validated with a mathematically simulated Raman spectrum, as well as experimentally measured Raman spectra from non-biological and biological samples. The method results in a significant reduction in computing cost while yielding consistent rejection of fluorescence and noise for spectra with low signal-to-fluorescence ratios and varied baselines. In the simulation, RIA-SG-SR achieved 1 order of magnitude improvement in iteration number and 2 orders of magnitude improvement in computation time compared with the range-independent background-subtraction algorithm (RIA). Furthermore, the processing time for an experimentally measured raw Raman spectrum from skin tissue decreased from 6.72 to 0.094 s. In general, the processing of the SG-SR method can be completed within dozens of milliseconds, which makes a real-time procedure possible in practical situations.
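The exact RIA-SG-SR update is not reproduced in the abstract, so the sketch below shows only the general idea of iterative Savitzky-Golay baseline estimation (smooth, clip the spectrum to the smooth estimate, repeat), with an ad-hoc relaxation factor standing in for the SG-SR step; it uses scipy.signal.savgol_filter and a simulated spectrum.

```python
# Simplified iterative Savitzky-Golay baseline estimation for fluorescence
# background removal.  Generic sketch with an ad-hoc relaxation factor, not
# the authors' RIA-SG-SR algorithm.
import numpy as np
from scipy.signal import savgol_filter

def sg_baseline(spectrum, window=101, poly=3, n_iter=50, relax=1.0):
    baseline = spectrum.copy()
    for _ in range(n_iter):
        smoothed = savgol_filter(baseline, window_length=window, polyorder=poly)
        # Clip to the smoothed curve so sharp Raman peaks are ignored,
        # with a relaxation factor blending old and new estimates.
        clipped = np.minimum(baseline, smoothed)
        baseline = (1 - relax) * baseline + relax * clipped
    return baseline

x = np.linspace(0, 1, 1000)
background = 5 * np.exp(-2 * x)                       # broad fluorescence
peaks = np.exp(-0.5 * ((x - 0.4) / 0.005) ** 2)       # narrow Raman peak
spectrum = background + peaks
corrected = spectrum - sg_baseline(spectrum)
print(corrected.max())   # close to the true peak height of 1
```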
The knowledge instinct, cognitive algorithms, modeling of language and cultural evolution
NASA Astrophysics Data System (ADS)
Perlovsky, Leonid I.
2008-04-01
The talk discusses mechanisms of the mind and their engineering applications. The past attempts at designing "intelligent systems" encountered mathematical difficulties related to algorithmic complexity. The culprit turned out to be logic, which in one way or another was used not only in logic rule systems, but also in statistical, neural, and fuzzy systems. Algorithmic complexity is related to Godel's theory, a most fundamental mathematical result. These difficulties were overcome by replacing logic with a dynamic process "from vague to crisp," dynamic logic. It leads to algorithms overcoming combinatorial complexity, and resulting in orders of magnitude improvement in classical problems of detection, tracking, fusion, and prediction in noise. I present engineering applications to pattern recognition, detection, tracking, fusion, financial predictions, and Internet search engines. Mathematical and engineering efficiency of dynamic logic can also be understood as cognitive algorithm, which describes fundamental property of the mind, the knowledge instinct responsible for all our higher cognitive functions: concepts, perception, cognition, instincts, imaginations, intuitions, emotions, including emotions of the beautiful. I present our latest results in modeling evolution of languages and cultures, their interactions in these processes, and role of music in cultural evolution. Experimental data is presented that support the theory. Future directions are outlined.
Developing a Pedagogically Useful Content Knowledge in Elementary Mathematics.
ERIC Educational Resources Information Center
Peck, Donald M.; Connell, Michael L.
Elementary school teacher candidates typically enter their professional training with deficiencies in their conceptual understanding of the topics of elementary school mathematics and with a reliance upon procedural (algorithmic) approaches to the solutions of mathematical problems. If elementary school teacher candidates are expected to teach…
Modelling and Optimizing Mathematics Learning in Children
ERIC Educational Resources Information Center
Käser, Tanja; Busetto, Alberto Giovanni; Solenthaler, Barbara; Baschera, Gian-Marco; Kohn, Juliane; Kucian, Karin; von Aster, Michael; Gross, Markus
2013-01-01
This study introduces a student model and control algorithm, optimizing mathematics learning in children. The adaptive system is integrated into a computer-based training system for enhancing numerical cognition aimed at children with developmental dyscalculia or difficulties in learning mathematics. The student model consists of a dynamic…
Using Physical Models to Explain a Division Algorithm.
ERIC Educational Resources Information Center
Vest, Floyd
1985-01-01
Develops a division algorithm in terms of familiar manipulations of concrete objects and presents it with a series of questions for diagnosis of students' understanding of the algorithm in terms of the concrete model utilized. Also offers general guidelines for using concrete illustrations to explain algorithms and other mathematical principles.…
Onishi, Hideo; Motomura, Nobutoku; Takahashi, Masaaki; Yanagisawa, Masamichi; Ogawa, Koichi
2010-03-01
Degradation of SPECT images results from various physical factors. The primary aim of this study was the development of a digital phantom for use in the characterization of factors that contribute to image degradation in clinical SPECT studies. A 3-dimensional mathematic cylinder (3D-MAC) phantom was devised and developed. The phantom (200 mm in diameter and 200 mm long) comprised 3 imbedded stacks of five 30-mm-long cylinders (diameters, 4, 10, 20, 40, and 60 mm). In simulations, the 3 stacks and the background were assigned radioisotope concentrations and attenuation coefficients. SPECT projection datasets that included Compton scattering effects, photoelectric effects, and gamma-camera models were generated using the electron gamma-shower Monte Carlo simulation program. Collimator parameters, detector resolution, total photons acquired, number of projections acquired, and radius of rotation were varied in simulations. The projection data were formatted in Digital Imaging and Communications in Medicine (DICOM) and imported to and reconstructed using commercial reconstruction software on clinical SPECT workstations. Using the 3D-MAC phantom, we validated that contrast depended on size of region of interest (ROI) and was overestimated when the ROI was small. The low-energy general-purpose collimator caused a greater partial-volume effect than did the low-energy high-resolution collimator, and contrast in the cold region was higher using the filtered backprojection algorithm than using the ordered-subset expectation maximization algorithm in the SPECT images. We used imported DICOM projection data and reconstructed these data using vendor software; in addition, we validated reconstructed images. The devised and developed 3D-MAC SPECT phantom is useful for the characterization of various physical factors, contrasts, partial-volume effects, reconstruction algorithms, and such, that contribute to image degradation in clinical SPECT studies.
DOT National Transportation Integrated Search
2012-05-01
The purpose of this document is to fully define and describe the logic flow and mathematical equations for a predictive braking enforcement algorithm intended for implementation in a Positive Train Control (PTC) system.
Mathematics Performance and the Role Played by Affective and Background Factors
ERIC Educational Resources Information Center
Grootenboer, Peter; Hemmings, Brian
2007-01-01
In this article, we report on a study examining those factors which contribute to the mathematics performance of a sample of children aged between 8 and 13 years. The study was designed specifically to consider the potency of a number of mathematical affective factors, as well as background characteristics (viz., gender, ethnicity, and…
ERIC Educational Resources Information Center
Ayalon, Hanna
2003-01-01
Using data on applicants to an Israeli university, researchers examined whether high school course-taking patterns affected gender segregation in higher education. Women were underrepresented among applicants to mathematics-related studies. Mathematical background in high school effectively narrowed the gender gap in applying to selective and…
Symmetrical group theory for mathematical complexity reduction of digital holograms
NASA Astrophysics Data System (ADS)
Perez-Ramirez, A.; Guerrero-Juk, J.; Sanchez-Lara, R.; Perez-Ramirez, M.; Rodriguez-Blanco, M. A.; May-Alarcon, M.
2017-10-01
This work presents the use of mathematical group theory through an algorithm to reduce the multiplicative computational complexity in the process of creating digital holograms. An object is considered as a set of point sources using mathematical symmetry properties of both the core in the Fresnel integral and the image, where the image is modeled using group theory. This algorithm has multiplicative complexity equal to zero and an additive complexity (k - 1) × N for the case of sparse matrices and binary images, where k is the number of pixels other than zero and N is the total number of points in the image.
Kindlmann, Gordon; Chiw, Charisee; Seltzer, Nicholas; Samuels, Lamont; Reppy, John
2016-01-01
Many algorithms for scientific visualization and image analysis are rooted in the world of continuous scalar, vector, and tensor fields, but are programmed in low-level languages and libraries that obscure their mathematical foundations. Diderot is a parallel domain-specific language that is designed to bridge this semantic gap by providing the programmer with a high-level, mathematical programming notation that allows direct expression of mathematical concepts in code. Furthermore, Diderot provides parallel performance that takes advantage of modern multicore processors and GPUs. The high-level notation allows a concise and natural expression of the algorithms and the parallelism allows efficient execution on real-world datasets.
Getting With It: Flow Diagrams
ERIC Educational Resources Information Center
Ritchie, W. A.
1975-01-01
The use of flow charts in the teaching of college mathematics enhances students' understanding of mathematical processes. Used appropriately in elementary and secondary schools they could also nurture understanding of algorithms. (SD)
Minor Distortions with Major Consequences: Correcting Distortions in Imaging Spectrographs
Esmonde-White, Francis W. L.; Esmonde-White, Karen A.; Morris, Michael D.
2010-01-01
Projective transformation is a mathematical correction (implemented in software) used in the remote imaging field to produce distortion-free images. We present the application of projective transformation to correct minor alignment and astigmatism distortions that are inherent in dispersive spectrographs. Patterned white-light images and neon emission spectra were used to produce registration points for the transformation. Raman transects collected on microscopy and fiber-optic systems were corrected using established methods and compared with the same transects corrected using the projective transformation. Even minor distortions have a significant effect on reproducibility and apparent fluorescence background complexity. Simulated Raman spectra were used to optimize the projective transformation algorithm. We demonstrate that the projective transformation reduced the apparent fluorescent background complexity and improved reproducibility of measured parameters of Raman spectra. Distortion correction using a projective transformation provides a major advantage in reducing the background fluorescence complexity even in instrumentation where slit-image distortions and camera rotation were minimized using manual or mechanical means. We expect these advantages should be readily applicable to other spectroscopic modalities using dispersive imaging spectrographs. PMID:21211158
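A projective (homography) correction estimated from registration points can be sketched with scikit-image as follows; the point coordinates are invented stand-ins for the white-light and neon registration marks described above.

```python
# Estimating a projective (homography) correction from registration points and
# applying it to an image with scikit-image.  Coordinates are made up; in the
# paper the points come from patterned white-light images and neon lines.
import numpy as np
from skimage import transform

# Distorted positions of four registration points and where they should map to.
src = np.array([[10.0, 12.0], [250.0, 9.0], [252.0, 190.0], [8.0, 195.0]])
dst = np.array([[10.0, 10.0], [250.0, 10.0], [250.0, 190.0], [10.0, 190.0]])

tform = transform.ProjectiveTransform()
tform.estimate(src, dst)

image = np.random.rand(200, 260)                    # stand-in for a detector frame
corrected = transform.warp(image, tform.inverse)    # resample onto the ideal grid
print(tform.params.round(3))
```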
Supporting Mathematical Discussions: The Roles of Comparison and Cognitive Load
ERIC Educational Resources Information Center
Richland, Lindsey E.; Begolli, Kreshnik Nasi; Simms, Nina; Frausel, Rebecca R.; Lyons, Emily A.
2016-01-01
Mathematical discussions in which students compare alternative solutions to a problem can be powerful modes for students to engage and refine their misconceptions into conceptual understanding, as well as to develop understanding of the mathematics underlying common algorithms. At the same time, these discussions are challenging to lead…
Supporting Mathematical Discussions: The Roles of Comparison and Cognitive Load
ERIC Educational Resources Information Center
Richland, Lindsey E.; Begolli, Kreshnik Nasi; Simms, Nina; Frausel, Rebecca R.; Lyons, Emily A.
2017-01-01
Mathematical discussions in which students compare alternative solutions to a problem can be powerful modes for students to engage and refine their misconceptions into conceptual understanding, as well as to develop understanding of the mathematics underlying common algorithms. At the same time, these discussions are challenging to lead…
Feature and contrast enhancement of mammographic image based on multiscale analysis and morphology.
Wu, Shibin; Yu, Shaode; Yang, Yuhan; Xie, Yaoqin
2013-01-01
A new algorithm for feature and contrast enhancement of mammographic images is proposed in this paper. The approach is based on multiscale transforms and mathematical morphology. First of all, the Laplacian Gaussian pyramid operator is applied to transform the mammogram into subband images at different scales. In addition, the detail or high-frequency subimages are equalized by contrast limited adaptive histogram equalization (CLAHE) and the low-pass subimages are processed by mathematical morphology. Finally, the feature- and contrast-enhanced image is reconstructed from the Laplacian Gaussian pyramid coefficients modified at one or more levels by contrast limited adaptive histogram equalization and mathematical morphology, respectively. The enhanced image is then processed by a global nonlinear operator. The experimental results show that the presented algorithm is effective for feature and contrast enhancement of mammograms. The performance of the proposed algorithm is measured by a contrast evaluation criterion for images, the signal-to-noise ratio (SNR), and the contrast improvement index (CII).
Feature and Contrast Enhancement of Mammographic Image Based on Multiscale Analysis and Morphology
Wu, Shibin; Xie, Yaoqin
2013-01-01
A new algorithm for feature and contrast enhancement of mammographic images is proposed in this paper. The approach is based on multiscale transforms and mathematical morphology. First of all, the Laplacian Gaussian pyramid operator is applied to transform the mammogram into subband images at different scales. In addition, the detail or high-frequency subimages are equalized by contrast limited adaptive histogram equalization (CLAHE) and the low-pass subimages are processed by mathematical morphology. Finally, the feature- and contrast-enhanced image is reconstructed from the Laplacian Gaussian pyramid coefficients modified at one or more levels by contrast limited adaptive histogram equalization and mathematical morphology, respectively. The enhanced image is then processed by a global nonlinear operator. The experimental results show that the presented algorithm is effective for feature and contrast enhancement of mammograms. The performance of the proposed algorithm is measured by a contrast evaluation criterion for images, the signal-to-noise ratio (SNR), and the contrast improvement index (CII). PMID:24416072
NASA Technical Reports Server (NTRS)
Eren, K.
1980-01-01
The mathematical background of spectral analysis as applied to geodetic applications is summarized. The resolution (cut-off frequency) of the GEOS 3 altimeter data is examined by determining the shortest wavelength (corresponding to the cut-off frequency) recoverable. The data from some 18 profiles are used. The total power (variance) in the sea surface topography with respect to the reference ellipsoid as well as with respect to the GEM-9 surface is computed. A fast inversion algorithm for simple and block Toeplitz matrices and its application to least squares collocation are explained. This algorithm yields a considerable gain in computer time and storage in comparison with conventional least squares collocation. Frequency domain least squares collocation techniques are also introduced and applied to estimating gravity anomalies from GEOS 3 altimeter data. These techniques substantially reduce the computer time and storage requirements associated with conventional least squares collocation. Numerical examples given demonstrate the efficiency and speed of these techniques.
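The time and storage savings that Toeplitz structure gives can be illustrated with SciPy's Levinson-type solver, which needs only the first column of the matrix; this is a generic illustration, not the report's own simple/block Toeplitz inversion algorithm.

```python
# Solving a symmetric Toeplitz system with a Levinson-type solver versus a
# dense general solver.  Illustrates the storage/time savings that Toeplitz
# structure gives in least squares collocation.
import numpy as np
from scipy.linalg import solve_toeplitz, toeplitz, solve

n = 500
first_col = 0.9 ** np.arange(n)          # covariance-like decaying sequence
b = np.random.default_rng(0).standard_normal(n)

x_fast = solve_toeplitz(first_col, b)    # needs only the first column
x_dense = solve(toeplitz(first_col), b)  # builds the full n-by-n matrix
print(np.allclose(x_fast, x_dense))
```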
Sequence spaces [Formula: see text] and [Formula: see text] with application in clustering.
Khan, Mohd Shoaib; Alamri, Badriah As; Mursaleen, M; Lohani, Qm Danish
2017-01-01
Distance measures play a central role in evolving clustering techniques. Due to the rich mathematical background and natural implementation of [Formula: see text] distance measures, researchers have been motivated to use them in almost every clustering process. Besides [Formula: see text] distance measures, several other distance measures exist. Sargent introduced a special type of distance measures [Formula: see text] and [Formula: see text] which is closely related to [Formula: see text]. In this paper, we generalize the Sargent sequence spaces through the introduction of the [Formula: see text] and [Formula: see text] sequence spaces. Moreover, it is shown that both spaces are BK-spaces, and one is the dual of the other. Further, we have clustered the two-moon dataset by using an induced [Formula: see text]-distance measure (induced by the Sargent sequence space [Formula: see text]) in the k-means clustering algorithm. The clustering result establishes the efficacy of replacing the Euclidean distance measure by the [Formula: see text]-distance measure in the k-means algorithm.
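The induced Sargent-space distance itself is not given in the abstract, but the structural change it relies on, a k-means loop whose distance function is pluggable, can be sketched generically; the Euclidean function below is just the default that such an induced distance would replace.

```python
# Lloyd-style k-means with a pluggable distance measure.  The Euclidean metric
# below can be swapped for any other distance (the paper uses a measure induced
# by a Sargent sequence space); the centroid update is kept as the plain mean.
import numpy as np

def kmeans(X, k, distance, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        d = np.array([[distance(x, c) for c in centers] for x in X])
        labels = d.argmin(axis=1)
        new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers

euclidean = lambda x, c: np.sqrt(np.sum((x - c) ** 2))

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
labels, centers = kmeans(X, k=2, distance=euclidean)
print(centers.round(2))
```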
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fournier, Sean Donovan; Beall, Patrick S; Miller, Mark L
2014-08-01
Through the SNL New Mexico Small Business Assistance (NMSBA) program, several Sandia engineers worked with the Environmental Restoration Group (ERG) Inc. to verify and validate a novel algorithm used to determine the scanning Critical Level (Lc) and Minimum Detectable Concentration (MDC) (or Minimum Detectable Areal Activity) for the 102F scanning system. Through the use of Monte Carlo statistical simulations the algorithm mathematically demonstrates accuracy in determining the Lc and MDC when a nearest-neighbor averaging (NNA) technique was used. To empirically validate this approach, SNL prepared several spiked sources and ran a test with the ERG 102F instrument on a bare concrete floor known to have no radiological contamination other than background naturally occurring radioactive material (NORM). The tests conclude that the NNA technique increases the sensitivity (decreases the Lc and MDC) for high-density data maps that are obtained by scanning radiological survey instruments.
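A rough sketch of the nearest-neighbor averaging idea: each grid cell of a scan-count map is averaged with its neighbors, which reduces the variance and hence the critical level used as a detection threshold. The numbers and the threshold rule below are purely illustrative and are not ERG's validated Lc/MDC calculation.

```python
# Generic nearest-neighbor averaging (NNA) of a gridded count map followed by
# a simple decision threshold.  All values are illustrative only.
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(0)
background_rate = 20.0
counts = rng.poisson(background_rate, size=(60, 60)).astype(float)
counts[30, 30] += 40.0                         # small spiked source

# Average each cell with its 3x3 neighborhood: reduces variance roughly
# ninefold, which is what lowers the critical level for dense scan maps.
nna = uniform_filter(counts, size=3)

critical_level = background_rate + 1.645 * np.sqrt(background_rate / 9.0)
print((nna > critical_level).sum(), nna[30, 30])
```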
Implementation of several mathematical algorithms to breast tissue density classification
NASA Astrophysics Data System (ADS)
Quintana, C.; Redondo, M.; Tirao, G.
2014-02-01
The accuracy of mammographic abnormality detection methods is strongly dependent on breast tissue characteristics, where dense breast tissue can hide lesions, causing cancer to be detected at later stages. In addition, breast tissue density is widely accepted to be an important risk indicator for the development of breast cancer. This paper presents the implementation and performance of different mathematical algorithms designed to standardize the categorization of mammographic images according to the American College of Radiology classifications. These mathematical techniques are based on calculations of intrinsic properties and on comparison with an ideal homogeneous image (joint entropy, mutual information, normalized cross correlation and index Q) as categorization parameters. The evaluation of the algorithms was performed on 100 cases from the mammographic data sets provided by the Ministerio de Salud de la Provincia de Córdoba, Argentina—Programa de Prevención del Cáncer de Mama (Department of Public Health, Córdoba, Argentina, Breast Cancer Prevention Program). The obtained breast classifications were compared with expert medical diagnoses, showing good performance. The implemented algorithms revealed high potential for classifying breasts into tissue density categories.
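Two of the categorization parameters named above, normalized cross correlation and mutual information against an (approximately) homogeneous reference, can be computed as follows; the arrays are synthetic stand-ins, and a little noise is added to the reference so the correlation is well defined.

```python
# Two of the comparison parameters named above, computed between an image and
# an (almost) homogeneous reference: normalized cross correlation and mutual
# information from a joint histogram.  Arrays are synthetic stand-ins.
import numpy as np

def ncc(a, b):
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))

def mutual_information(a, b, bins=32):
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1, keepdims=True), pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
mammogram = rng.normal(0.5, 0.15, (128, 128))
# Tiny noise keeps the reference's variance nonzero, so NCC is defined.
ideal = np.full_like(mammogram, 0.5) + rng.normal(0, 0.01, (128, 128))
print(round(ncc(mammogram, ideal), 3), round(mutual_information(mammogram, ideal), 3))
```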
ERIC Educational Resources Information Center
Wheater, Rebecca; Durbin, Ben; McNamara, Stephen; Classick, Rachel
2016-01-01
The impact of socio-economic background on mathematics performance in England can be seen from the most to the least disadvantaged. As the socio-economic background of pupils increases, so does average mathematics performance; the gap between the most and least disadvantaged is equivalent to over three years of schooling. However, many factors other than…
Calabi-Yau Geometries: Algorithms, Databases and Physics
NASA Astrophysics Data System (ADS)
He, Yang-Hui
2013-08-01
With a bird's-eye view, we survey the landscape of Calabi-Yau threefolds, compact and noncompact, smooth and singular. Emphasis will be placed on the algorithms and databases which have been established over the years, and how they have been useful in the interaction between the physics and the mathematics, especially in string and gauge theories. A skein which runs through this review will be algorithmic and computational algebraic geometry and how, implementing its principles on powerful computers and experimenting with the vast mathematical data, new physics can be learnt. It is hoped that this interdisciplinary glimpse will be of some use to the beginning student.
Subject design and factors affecting achievement in mathematics for biomedical science
NASA Astrophysics Data System (ADS)
Carnie, Steven; Morphett, Anthony
2017-01-01
Reports such as Bio2010 emphasize the importance of integrating mathematical modelling skills into undergraduate biology and life science programmes, to ensure students have the skills and knowledge needed for biological research in the twenty-first century. One way to do this is by developing a dedicated mathematics subject to teach modelling and mathematical concepts in biological contexts. We describe such a subject at a research-intensive Australian university, and discuss the considerations informing its design. We also present an investigation into the effect of mathematical and biological background, prior mathematical achievement, and gender, on student achievement in the subject. The investigation shows that several factors known to predict performance in standard calculus subjects apply also to specialized discipline-specific mathematics subjects, and give some insight into the relative importance of mathematical versus biological background for a biology-focused mathematics subject.
Direct integration of the inverse Radon equation for X-ray computed tomography.
Libin, E E; Chakhlov, S V; Trinca, D
2016-11-22
A new mathematical approach using the inverse Radon equation for the restoration of images in problems of linear two-dimensional x-ray tomography is formulated. In this approach, the Fourier transform is not used, which makes it possible to create practical computing algorithms with a more reliable mathematical substantiation. Results of a software implementation show that, especially for a low number of projections, the described approach performs better than standard X-ray tomographic reconstruction algorithms.
Application of Particle Swarm Optimization Algorithm in the Heating System Planning Problem
Ma, Rong-Jiang; Yu, Nan-Yang; Hu, Jun-Yi
2013-01-01
Based on the life cycle cost (LCC) approach, this paper presents an integral mathematical model and a particle swarm optimization (PSO) algorithm for the heating system planning (HSP) problem. The proposed mathematical model minimizes the cost of the heating system as the objective for a given life cycle time. Because of the particularities of the HSP problem, the general particle swarm optimization algorithm was improved. An actual case study was calculated to check its feasibility in practical use. The results show that the improved particle swarm optimization (IPSO) algorithm solves the HSP problem better than the standard PSO algorithm. Moreover, the results also show the potential to provide useful information when making decisions in the practical planning process. Therefore, it is believed that if this approach is applied correctly and in combination with other elements, it can become a powerful and effective optimization tool for the HSP problem. PMID:23935429
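A minimal global-best PSO loop on a stand-in objective (a sphere function taking the place of the life-cycle cost) looks as follows; this is the standard algorithm, not the improved IPSO variant described in the paper.

```python
# Minimal particle swarm optimization on a stand-in objective (sphere function
# representing a life-cycle cost).  Standard global-best PSO.
import numpy as np

rng = np.random.default_rng(0)
dim, n_particles, iters = 6, 30, 200
w, c1, c2 = 0.72, 1.49, 1.49                       # common PSO coefficients

x = rng.uniform(-10, 10, (n_particles, dim))       # positions
v = np.zeros_like(x)                               # velocities
cost = lambda p: np.sum(p ** 2, axis=-1)           # placeholder LCC objective

pbest, pbest_f = x.copy(), cost(x)
gbest = pbest[np.argmin(pbest_f)].copy()

for _ in range(iters):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    f = cost(x)
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)].copy()

print(np.min(pbest_f))
```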
Teaching Multidigit Multiplication: Combining Multiple Frameworks to Analyse a Class Episode
ERIC Educational Resources Information Center
Clivaz, Stéphane
2017-01-01
This paper provides an analysis of a teaching episode of the multidigit algorithm for multiplication, with a focus on the influence of the teacher's mathematical knowledge on their teaching. The theoretical framework uses Mathematical Knowledge for Teaching, mathematical pertinence of the teacher and structuration of the milieu in a descending and…
Das, Swagatam; Mukhopadhyay, Arpan; Roy, Anwit; Abraham, Ajith; Panigrahi, Bijaya K
2011-02-01
The theoretical analysis of evolutionary algorithms is believed to be very important for understanding their internal search mechanism and thus to develop more efficient algorithms. This paper presents a simple mathematical analysis of the explorative search behavior of a recently developed metaheuristic algorithm called harmony search (HS). HS is a derivative-free real parameter optimization algorithm, and it draws inspiration from the musical improvisation process of searching for a perfect state of harmony. This paper analyzes the evolution of the population-variance over successive generations in HS and thereby draws some important conclusions regarding the explorative power of HS. A simple but very useful modification to the classical HS has been proposed in light of the mathematical analysis undertaken here. A comparison with the most recently published variants of HS and four other state-of-the-art optimization algorithms over 15 unconstrained and five constrained benchmark functions reflects the efficiency of the modified HS in terms of final accuracy, convergence speed, and robustness.
The Mucciardi-Gose Clustering Algorithm and Its Applications in Automatic Pattern Recognition.
A procedure known as the Mucciardi-Gose clustering algorithm, CLUSTR, for determining the geometrical or statistical relationships among groups of N… discussion of clustering algorithms is given; the particular advantages of the Mucciardi-Gose procedure are described. The mathematical basis for, and the…
Development and application of unified algorithms for problems in computational science
NASA Technical Reports Server (NTRS)
Shankar, Vijaya; Chakravarthy, Sukumar
1987-01-01
A framework is presented for developing computationally unified numerical algorithms for solving nonlinear equations that arise in modeling various problems in mathematical physics. The concept of computational unification is an attempt to encompass efficient solution procedures for computing various nonlinear phenomena that may occur in a given problem. For example, in Computational Fluid Dynamics (CFD), a unified algorithm will be one that allows for solutions to subsonic (elliptic), transonic (mixed elliptic-hyperbolic), and supersonic (hyperbolic) flows for both steady and unsteady problems. The objectives are: development of superior unified algorithms emphasizing accuracy and efficiency aspects; development of codes based on selected algorithms leading to validation; application of mature codes to realistic problems; and extension/application of CFD-based algorithms to problems in other areas of mathematical physics. The ultimate objective is to achieve integration of multidisciplinary technologies to enhance synergism in the design process through computational simulation. Specific unified algorithms are presented for a hierarchy of gas dynamics equations, together with their applications to two other areas: electromagnetic scattering, and laser-materials interaction accounting for melting.
A mathematical model for computer image tracking.
Legters, G R; Young, T Y
1982-06-01
A mathematical model using an operator formulation for a moving object in a sequence of images is presented. Time-varying translation and rotation operators are derived to describe the motion. A variational estimation algorithm is developed to track the dynamic parameters of the operators. The occlusion problem is alleviated by using a predictive Kalman filter to keep the tracking on course during severe occlusion. The tracking algorithm (variational estimation in conjunction with Kalman filter) is implemented to track moving objects with occasional occlusion in computer-simulated binary images.
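The predictive filter's role during occlusion can be sketched with a generic constant-velocity Kalman filter: when no measurement is available the update step is skipped and the filter coasts on its predictions. The matrices below are textbook choices, not the paper's operator formulation.

```python
# Constant-velocity Kalman filter for 2-D position tracking.  During occlusion
# the update step is skipped and the filter coasts on predictions, which is the
# role the predictive filter plays in the tracking scheme described above.
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)     # only position is measured
Q = 0.01 * np.eye(4)                                   # process noise
R = 0.25 * np.eye(2)                                   # measurement noise

x = np.zeros(4)                                        # [px, py, vx, vy]
P = np.eye(4)

def step(x, P, z=None):
    x, P = F @ x, F @ P @ F.T + Q                      # predict
    if z is not None:                                  # update (skip if occluded)
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(4) - K @ H) @ P
    return x, P

rng = np.random.default_rng(0)
for t in range(20):
    true_pos = np.array([t * 1.0, t * 0.5])
    z = None if 8 <= t <= 11 else true_pos + rng.normal(0, 0.5, 2)  # occlusion
    x, P = step(x, P, z)
print(x.round(2))
```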
Mathematical filtering minimizes metallic halation of titanium implants in MicroCT images.
Ha, Jee; Osher, Stanley J; Nishimura, Ichiro
2013-01-01
Microcomputed tomography (MicroCT) images containing titanium implants suffer from x-ray scattering artifacts, and the implant surface is critically affected by metallic halation. To reduce the metallic halation artifact, a nonlinear total variation denoising algorithm, the split Bregman algorithm, was applied to the digital data set of MicroCT images. This study demonstrated that the use of such a mathematical filter can successfully reduce metallic halation, facilitating the evaluation of osseointegration at the bone-implant interface in the reconstructed images.
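scikit-image ships a split Bregman total-variation denoiser, so the idea can be sketched on a synthetic noisy slice as below; the weight value is illustrative, since the study's actual parameters are not given in the abstract.

```python
# Total-variation denoising via the split Bregman solver in scikit-image,
# applied to a synthetic noisy slice.  The weight is illustrative only.
import numpy as np
from skimage.restoration import denoise_tv_bregman

rng = np.random.default_rng(0)
clean = np.zeros((128, 128))
clean[40:90, 40:90] = 1.0                    # bright "implant-like" region
noisy = clean + rng.normal(0, 0.2, clean.shape)

denoised = denoise_tv_bregman(noisy, weight=3.0)
print(np.abs(noisy - clean).mean(), np.abs(denoised - clean).mean())
# the second (denoised) error is typically the smaller of the two
```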
The averaging method in applied problems
NASA Astrophysics Data System (ADS)
Grebenikov, E. A.
1986-04-01
The family of methods that allow the study of complicated nonlinear oscillating systems, known in the literature as the averaging method, is presented. The author describes the constructive part of this method, that is, concrete forms and corresponding algorithms, using mathematical models that are sufficiently general but built around concrete problems. The style of the book is such that a reader interested in the techniques and algorithms of the asymptotic theory of ordinary differential equations can solve such problems independently. The book is intended for specialists in the area of applied mathematics and mechanics.
Introduction to Numerical Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schoonover, Joseph A.
2016-06-14
These are slides for a lecture for the Parallel Computing Summer Research Internship at the National Security Education Center. This gives an introduction to numerical methods. Repetitive algorithms are used to obtain approximate solutions to mathematical problems, covering sorting, searching, root finding, optimization, interpolation, extrapolation, least squares regression, eigenvalue problems, ordinary differential equations, and partial differential equations. Many equations are shown. Discretizations allow us to approximate solutions to mathematical models of physical systems using a repetitive algorithm and introduce errors that can lead to numerical instabilities if we are not careful.
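As a small example of one of the listed topics, root finding treated as a repetitive algorithm with a stopping tolerance, here is a sketch of Newton's method applied to x^2 - 2 = 0.

```python
# One of the repetitive algorithms listed above: Newton's method for root
# finding, iterating x_{n+1} = x_n - f(x_n)/f'(x_n) until the update is small.
def newton(f, df, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Approximate sqrt(2) as the positive root of f(x) = x^2 - 2.
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0)
print(root)   # 1.4142135623730951
```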
ERIC Educational Resources Information Center
Clark, Lawrence M.; Badertscher, Eden M.; Napp, Carolina
2013-01-01
Background/Context: Recent research in mathematics education has employed sociocultural and historical lenses to better understand how students experience school mathematics and come to see themselves as capable mathematics learners. This work has identified mathematics classrooms as places where power struggles related to students'…
Spatial predictive mapping using artificial neural networks
NASA Astrophysics Data System (ADS)
Noack, S.; Knobloch, A.; Etzold, S. H.; Barth, A.; Kallmeier, E.
2014-11-01
The modelling or prediction of complex geospatial phenomena (like the formation of geo-hazards) is one of the most important tasks for geoscientists. In practice, however, it faces various difficulties, caused mainly by the complexity of the relationships between the phenomena themselves and the controlling parameters, as well as by limitations of our knowledge about the nature of the physical/mathematical relationships and by restrictions regarding the accuracy and availability of data. In this situation, methods of artificial intelligence, like artificial neural networks (ANN), offer a meaningful alternative modelling approach compared to exact mathematical modelling. In the past, the application of ANN technologies in the geosciences was limited primarily by the difficulty of integrating them into geo-data processing workflows. Against this background, the software advangeo® was developed to provide a normal GIS user with a powerful tool to use ANNs for prediction mapping and data preparation within a standard ESRI ArcGIS environment. In many case studies, such as land use planning, geo-hazards analysis and prevention, mineral potential mapping, and agriculture & forestry, advangeo® has shown its capabilities and strengths. The approach is able to add considerable value to existing data.
A Model for Minimizing Numeric Function Generator Complexity and Delay
2007-12-01
Numeric function generators (NFGs) allow computation of difficult mathematical functions in less time and with less hardware than commonly employed methods. They compute piecewise… Field Programmable Gate Arrays (FPGAs). The algorithms and estimation techniques apply to various NFG architectures and mathematical functions. This thesis compares hardware utilization and propagation delay for various NFG architectures, mathematical functions, word widths, and segmentation methods.
Cognitive Correlates of Performance in Advanced Mathematics
ERIC Educational Resources Information Center
Wei, Wei; Yuan, Hongbo; Chen, Chuansheng; Zhou, Xinlin
2012-01-01
Background: Much research has been devoted to understanding cognitive correlates of elementary mathematics performance, but little such research has been done for advanced mathematics (e.g., modern algebra, statistics, and mathematical logic). Aims: To promote mathematical knowledge among college students, it is necessary to understand what factors…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brickell, E.F.; Davis, J.A.; Simmons, G.J.
A study of the algorithm and the underlying mathematical concepts of A Polynomial Time Algorithm for Breaking Merkle-Hellman Cryptosystems, by Adi Shamir, is presented. Ways of protecting the Merkle-Hellman knapsack from cryptanalysis are given with derivations. (GHT)
The Algorithms of Euclid and Jacobi
ERIC Educational Resources Information Center
Johnson, R. W.; Waterman, M. S.
1976-01-01
In a thesis written for the Doctor of Arts in Mathematics, the connection between Euclid's algorithm and continued fractions is developed and extended to n dimensions. Applications to computer sciences are noted. (SD)
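The connection mentioned above is concrete: the quotients produced by Euclid's algorithm for gcd(a, b) are exactly the partial quotients of the continued fraction expansion of a/b. A short sketch:

```python
# The quotients generated by Euclid's algorithm for gcd(a, b) are exactly the
# partial quotients of the continued fraction expansion of a/b.
def euclid_with_quotients(a, b):
    quotients = []
    while b:
        q, r = divmod(a, b)
        quotients.append(q)
        a, b = b, r
    return a, quotients          # gcd and continued-fraction partial quotients

g, cf = euclid_with_quotients(355, 113)
print(g, cf)                     # 1 [3, 7, 16]  ->  355/113 = 3 + 1/(7 + 1/16)
```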
Reverse engineering a gene network using an asynchronous parallel evolution strategy
2010-01-01
Background The use of reverse engineering methods to infer gene regulatory networks by fitting mathematical models to gene expression data is becoming increasingly popular and successful. However, increasing model complexity means that more powerful global optimisation techniques are required for model fitting. The parallel Lam Simulated Annealing (pLSA) algorithm has been used in such approaches, but recent research has shown that island Evolutionary Strategies can produce faster, more reliable results. However, no parallel island Evolutionary Strategy (piES) has yet been demonstrated to be effective for this task. Results Here, we present synchronous and asynchronous versions of the piES algorithm, and apply them to a real reverse engineering problem: inferring parameters in the gap gene network. We find that the asynchronous piES exhibits very little communication overhead, and shows significant speed-up for up to 50 nodes: the piES running on 50 nodes is nearly 10 times faster than the best serial algorithm. We compare the asynchronous piES to pLSA on the same test problem, measuring the time required to reach particular levels of residual error, and show that it shows much faster convergence than pLSA across all optimisation conditions tested. Conclusions Our results demonstrate that the piES is consistently faster and more reliable than the pLSA algorithm on this problem, and scales better with increasing numbers of nodes. In addition, the piES is especially well suited to further improvements and adaptations: Firstly, the algorithm's fast initial descent speed and high reliability make it a good candidate for being used as part of a global/local search hybrid algorithm. Secondly, it has the potential to be used as part of a hierarchical evolutionary algorithm, which takes advantage of modern multi-core computing architectures. PMID:20196855
Applications of airborne ultrasound in human-computer interaction.
Dahl, Tobias; Ealo, Joao L; Bang, Hans J; Holm, Sverre; Khuri-Yakub, Pierre
2014-09-01
Airborne ultrasound is a rapidly developing subfield within human-computer interaction (HCI). Touchless ultrasonic interfaces and pen tracking systems are part of recent trends in HCI and are gaining industry momentum. This paper aims to provide the background and overview necessary to understand the capabilities of ultrasound and its potential future in human-computer interaction. The latest developments on the ultrasound transducer side are presented, focusing on capacitive micro-machined ultrasonic transducers, or CMUTs. Their introduction is an important step toward providing real, low-cost multi-sensor array and beam-forming options. We also provide a unified mathematical framework for understanding and analyzing algorithms used for ultrasound detection and tracking for some of the most relevant applications. Copyright © 2014. Published by Elsevier B.V.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakhleh, Luay
I proposed to develop computationally efficient tools for accurate detection and reconstruction of microbes' complex evolutionary mechanisms, thus enabling rapid and accurate annotation, analysis and understanding of their genomes. To achieve this goal, I proposed to address three aspects. (1) Mathematical modeling. A major challenge facing the accurate detection of HGT is that of distinguishing between these two events on the one hand and other events that have similar "effects." I proposed to develop a novel mathematical approach for distinguishing among these events. Further, I proposed to develop a set of novel optimization criteria for the evolutionary analysis of microbial genomes in the presence of these complex evolutionary events. (2) Algorithm design. In this aspect of the project, I proposed to develop an array of efficient and accurate algorithms for analyzing microbial genomes based on the formulated optimization criteria. Further, I proposed to test the viability of the criteria and the accuracy of the algorithms in an experimental setting using both synthetic as well as biological data. (3) Software development. I proposed the final outcome to be a suite of software tools which implements the mathematical models as well as the algorithms developed.
Physiological time-series analysis: what does regularity quantify?
NASA Technical Reports Server (NTRS)
Pincus, S. M.; Goldberger, A. L.
1994-01-01
Approximate entropy (ApEn) is a recently developed statistic quantifying regularity and complexity that appears to have potential application to a wide variety of physiological and clinical time-series data. The focus here is to provide a better understanding of ApEn to facilitate its proper utilization, application, and interpretation. After giving the formal mathematical description of ApEn, we provide a multistep description of the algorithm as applied to two contrasting clinical heart rate data sets. We discuss algorithm implementation and interpretation and introduce a general mathematical hypothesis of the dynamics of a wide class of diseases, indicating the utility of ApEn to test this hypothesis. We indicate the relationship of ApEn to variability measures, the Fourier spectrum, and algorithms motivated by study of chaotic dynamics. We discuss further mathematical properties of ApEn, including the choice of input parameters, statistical issues, and modeling considerations, and we conclude with a section on caveats to ensure correct ApEn utilization.
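A compact implementation of ApEn following the standard definition (the difference of the log-average template-match counts at lengths m and m+1, with tolerance r, commonly 0.2 times the series standard deviation) is sketched below; the parameter choices are the usual defaults, not values from the paper.

```python
# Approximate entropy ApEn(m, r): the difference between the log-average counts
# of template matches of length m and m+1, with tolerance r (here 0.2 * SD).
import numpy as np

def apen(series, m=2, r_factor=0.2):
    x = np.asarray(series, float)
    N, r = len(x), r_factor * x.std()

    def phi(m):
        templates = np.array([x[i:i + m] for i in range(N - m + 1)])
        # C_i = fraction of templates within distance r (Chebyshev metric),
        # self-matches included, as in the standard ApEn definition.
        dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        C = (dist <= r).mean(axis=1)
        return np.mean(np.log(C))

    return phi(m) - phi(m + 1)

rng = np.random.default_rng(0)
regular = np.sin(np.linspace(0, 20 * np.pi, 500))
noisy = rng.standard_normal(500)
print(round(apen(regular), 3), round(apen(noisy), 3))   # regular << noisy
```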
NASA Astrophysics Data System (ADS)
Mahalakshmi; Murugesan, R.
2018-04-01
This paper addresses the minimization of the total cost of greenhouse gas (GHG) emissions in an Automated Storage and Retrieval System (AS/RS). A mathematical model is constructed based on the tax cost, penalty cost and discount cost of the GHG emissions of the AS/RS. A two-stage algorithm, namely the positive selection based clonal selection principle (PSBCSP), is used to find the optimal solution of the constructed model. In the first stage, the positive selection principle is used to reduce the search space for the optimal solution by fixing a threshold value. In the second stage, the clonal selection principle is used to generate the best solutions. The obtained results are compared with other existing algorithms in the literature, which shows that the proposed algorithm yields better results than the others.
A Mathematical Model and Algorithm for Routing Air Traffic Under Weather Uncertainty
NASA Technical Reports Server (NTRS)
Sadovsky, Alexander V.
2016-01-01
A central challenge in managing today's commercial en route air traffic is the task of routing the aircraft in the presence of adverse weather. Such weather can make regions of the airspace unusable, so all affected flights must be re-routed. Today this task is carried out by conference and negotiation between human air traffic controllers (ATC) responsible for the involved sectors of the airspace. One can argue that, in so doing, ATC try to solve an optimization problem without giving it a precise quantitative formulation. Such a formulation gives the mathematical machinery for constructing and verifying algorithms that are aimed at solving the problem. This paper contributes one such formulation and a corresponding algorithm. The algorithm addresses weather uncertainty and has closed form, which allows transparent analysis of correctness, realism, and computational costs.
NASA Astrophysics Data System (ADS)
Jafari, Hamed; Salmasi, Nasser
2015-09-01
The nurse scheduling problem (NSP) has received a great amount of attention in recent years. In the NSP, the goal is to assign shifts to the nurses in order to satisfy the hospital's demand during the planning horizon by considering different objective functions. In this research, we focus on maximizing the nurses' preferences for working shifts and weekends off by considering several important factors such as the hospital's policies, labor laws, governmental regulations, and the status of nurses at the end of the previous planning horizon in one of the largest hospitals in Iran, i.e., Milad Hospital. Due to the shortage of available nurses, the minimum total number of required nurses is determined first. Then, a mathematical programming model is proposed to solve the problem optimally. Since the proposed research problem is NP-hard, a meta-heuristic algorithm based on simulated annealing (SA) is applied to heuristically solve the problem in a reasonable time. An initial feasible solution generator and several novel neighborhood structures are applied to enhance the performance of the SA algorithm. Inspired by our observations in Milad Hospital, random test problems are generated to evaluate the performance of the SA algorithm. The results of computational experiments indicate that the applied SA algorithm provides solutions with an average percentage gap of 5.49% compared to the upper bounds obtained from the mathematical model. Moreover, the applied SA algorithm provides significantly better solutions in a reasonable time than the schedules provided by the head nurses.
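A generic simulated annealing skeleton, with a neighborhood move and geometric cooling, is sketched below; the toy permutation objective and swap move merely stand in for the shift-reassignment neighborhoods and preference costs used in the nurse scheduling problem.

```python
# Generic simulated annealing skeleton with geometric cooling.  The toy "swap
# two elements" neighborhood stands in for the shift-reassignment moves used in
# nurse scheduling; the objective here is just a permutation-sorting toy.
import math
import random

random.seed(0)

def cost(schedule):                       # toy stand-in for preference violations
    return sum(abs(v - i) for i, v in enumerate(schedule))

state = random.sample(range(50), 50)
best, best_cost = state[:], cost(state)
T, cooling = 50.0, 0.995

for _ in range(20000):
    i, j = random.sample(range(len(state)), 2)
    neighbor = state[:]
    neighbor[i], neighbor[j] = neighbor[j], neighbor[i]   # neighborhood move
    delta = cost(neighbor) - cost(state)
    if delta < 0 or random.random() < math.exp(-delta / T):
        state = neighbor
        if cost(state) < best_cost:
            best, best_cost = state[:], cost(state)
    T = max(T * cooling, 1e-3)

print(best_cost)
```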
Geomagnetic matching navigation algorithm based on robust estimation
NASA Astrophysics Data System (ADS)
Xie, Weinan; Huang, Liping; Qu, Zhenshen; Wang, Zhenhuan
2017-08-01
Outliers in geomagnetic survey data seriously affect the precision of geomagnetic matching navigation and badly degrade its reliability. A novel algorithm that can eliminate the influence of outliers is investigated in this paper. First, the weight function is designed and its principle of robust estimation is introduced. By combining the relation equation between the matching trajectory and the reference trajectory with the Taylor series expansion for geomagnetic information, a mathematical expression of the longitude, latitude and heading errors is acquired. The robust target function is obtained from the weight function and this mathematical expression. The geomagnetic matching problem is then converted to the solution of nonlinear equations. Finally, Newton iteration is applied to implement the novel algorithm. Simulation results show that the matching error of the novel algorithm is decreased to 7.75% compared to the conventional mean square difference (MSD) algorithm, and to 18.39% compared to the conventional iterative contour matching algorithm, when the outlier is 40 nT. Meanwhile, the position error of the novel algorithm is 0.017° while the other two algorithms fail to match when the outlier is 400 nT.
Mathematics Teachers' Ideas about Mathematical Models: A Diverse Landscape
ERIC Educational Resources Information Center
Bautista, Alfredo; Wilkerson-Jerde, Michelle H.; Tobin, Roger G.; Brizuela, Bárbara M.
2014-01-01
This paper describes the ideas that mathematics teachers (grades 5-9) have regarding mathematical models of real-world phenomena, and explores how teachers' ideas differ depending on their educational background. Participants were 56 United States in-service mathematics teachers. We analyzed teachers' written responses to three open-ended…
Measuring the self-similarity exponent in Lévy stable processes of financial time series
NASA Astrophysics Data System (ADS)
Fernández-Martínez, M.; Sánchez-Granero, M. A.; Trinidad Segovia, J. E.
2013-11-01
Geometric method-based procedures, which will be called GM algorithms herein, were introduced in [M.A. Sánchez Granero, J.E. Trinidad Segovia, J. García Pérez, Some comments on Hurst exponent and the long memory processes on capital markets, Phys. A 387 (2008) 5543-5551], to efficiently calculate the self-similarity exponent of a time series. In that paper, the authors showed empirically that these algorithms, based on a geometrical approach, are more accurate than the classical algorithms, especially for short time series. The authors checked that GM algorithms work well with (fractional) Brownian motions. Moreover, in [J.E. Trinidad Segovia, M. Fernández-Martínez, M.A. Sánchez-Granero, A note on geometric method-based procedures to calculate the Hurst exponent, Phys. A 391 (2012) 2209-2214], a mathematical background for the validity of such procedures to estimate the self-similarity index of any random process with stationary and self-affine increments was provided. In particular, they proved theoretically that GM algorithms are also valid for exploring long memory in (fractional) Lévy stable motions. In this paper, we show empirically by Monte Carlo simulation that GM algorithms are able to accurately calculate the self-similarity index in Lévy stable motions and find empirical evidence that they are more precise than the absolute value exponent (denoted AVE onwards) and the multifractal detrended fluctuation analysis (MF-DFA) algorithms, especially for short time series. We also compare them with the generalized Hurst exponent (GHE) algorithm and conclude that both the GM2 and GHE algorithms are the most accurate for studying financial series. In addition, we provide empirical evidence, based on the accuracy of GM algorithms in estimating the self-similarity index in Lévy motions, that the evolution of the stocks of some international market indices, such as U.S. Small Cap and Nasdaq100, cannot be modelled by means of a Brownian motion.
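Since the GM algorithms are not spelled out in the abstract, the short Python sketch below instead implements the generalized Hurst exponent (GHE) estimator that the paper uses for comparison, estimating H(q) from the scaling of the q-th order moments of increments; the lag range, moment order, and test signal are illustrative choices.

# A minimal sketch of the generalized Hurst exponent (GHE) estimator used as a
# comparison method in the paper; the GM algorithms themselves are not reproduced here.
import numpy as np

def ghe(series, q=1, taus=range(1, 20)):
    """Estimate H(q) from the scaling E|X(t+tau)-X(t)|^q ~ tau^(q*H(q))."""
    x = np.asarray(series, dtype=float)
    kq = [np.mean(np.abs(x[tau:] - x[:-tau]) ** q) for tau in taus]
    slope, _ = np.polyfit(np.log(list(taus)), np.log(kq), 1)
    return slope / q

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    bm = np.cumsum(rng.standard_normal(10000))   # Brownian motion: H should be near 0.5
    print("estimated H:", round(ghe(bm), 3))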
Andrés-Toro, B; Girón-Sierra, J M; Fernández-Blanco, P; López-Orozco, J A; Besada-Portas, E
2004-04-01
This paper describes empirical research on the modelling, optimization, and supervisory control of beer fermentation. Conditions in the laboratory were made as similar as possible to brewery industry conditions. Since mathematical models that consider realistic industrial conditions were not available, a new mathematical model involving industrial conditions was first developed. Batch fermentations are multiobjective dynamic processes that must be guided along optimal paths to obtain good results. The paper describes a direct way to apply a Pareto set approach with multiobjective evolutionary algorithms (MOEAs), and the successful finding of optimal ways to drive these processes is reported. Once obtained, the mathematical fermentation model was used to optimize the fermentation process by means of an intelligent control based on certain rules.
Gas leak detection in infrared video with background modeling
NASA Astrophysics Data System (ADS)
Zeng, Xiaoxia; Huang, Likun
2018-03-01
Background modeling plays an important role in the task of gas detection based on infrared video. The VIBE algorithm has been a widely used background modeling algorithm in recent years. However, its processing speed sometimes cannot meet the requirements of real-time detection applications. Therefore, based on the traditional VIBE algorithm, we propose a fast prospect (foreground) model and optimize the results by combining the connected-domain algorithm and the nine-spaces algorithm in the subsequent processing steps. Experiments show the effectiveness of the proposed method.
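For orientation only, here is a simplified, ViBe-style sample-based background model in Python. The number of samples per pixel, matching radius, and random update policy are generic defaults taken from descriptions of ViBe, not the authors' accelerated variant or their connected-domain post-processing.

# A simplified, ViBe-style per-pixel sample model for background subtraction.
# Parameter values and the update policy are illustrative.
import numpy as np

N_SAMPLES, RADIUS, MIN_MATCHES, SUBSAMPLE = 20, 20, 2, 16

def init_model(first_frame):
    h, w = first_frame.shape
    noise = np.random.randint(-10, 11, size=(N_SAMPLES, h, w))
    return np.clip(first_frame[None, :, :].astype(int) + noise, 0, 255)

def segment_and_update(model, frame):
    diff = np.abs(model - frame[None, :, :].astype(int))
    matches = (diff < RADIUS).sum(axis=0)
    background = matches >= MIN_MATCHES
    # Conservative update: for background pixels, occasionally overwrite one stored sample.
    update = background & (np.random.randint(0, SUBSAMPLE, frame.shape) == 0)
    idx = np.random.randint(0, N_SAMPLES, frame.shape)
    ys, xs = np.nonzero(update)
    model[idx[ys, xs], ys, xs] = frame[ys, xs]
    return ~background                      # True where foreground (e.g. a gas plume) is detected

if __name__ == "__main__":
    frames = np.random.randint(0, 256, size=(5, 120, 160), dtype=np.uint8)
    model = init_model(frames[0])
    for f in frames[1:]:
        mask = segment_and_update(model, f)
        print("foreground pixels:", int(mask.sum()))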
The Laws of Nature and the Effectiveness of Mathematics
NASA Astrophysics Data System (ADS)
Dorato, Mauro
In this paper I try to evaluate what I regard as the main attempts at explaining the effectiveness of mathematics in the natural sciences, namely (1) Antinaturalism, (2) Kantism, (3) Semanticism, (4) Algorithmic Complexity Theory. The first position has been defended by Mark Steiner, who claims that the "user friendliness" of nature for the applied mathematician is the best argument against a naturalistic explanation of the origin of the universe. The second is naturalistic and mixes the Kantian tradition with evolutionary studies about our innate mathematical abilities. The third turns to the Fregean tradition and considers mathematics a particular kind of language, thus treating the effectiveness of mathematics as a particular instance of the effectiveness of natural languages. The fourth hypothesis, building on formal results by Kolmogorov, Solomonoff and Chaitin, claims that mathematics is so useful in describing the natural world because it is the science of the abbreviation of sequences, and mathematically formulated laws of nature enable us to compress the information contained in the sequence of numbers in which we code our observations. In this tradition, laws are equivalent to the shortest algorithms capable of generating the lists of zeros and ones representing the empirical data. Along the way, I present and reject the "deflationary explanation", which claims that in wondering about the applicability of so many mathematical structures to nature, we tend to forget the many cases in which no application is possible.
ERIC Educational Resources Information Center
Gill, Michele Gregoire; Boote, David
2012-01-01
Background/Context: Despite the tremendous amount of effort devoted by many mathematics educators to promote, defend, and implement reform-based mathematics education, procedural mathematics, which locates mathematical correctness in the procedures learned from textbooks and teachers, persists. Many researchers have identified school and classroom…
How Can Steganography Be an Interpretation of the Redundancy in the Pre-mRNA Ribbon?
NASA Astrophysics Data System (ADS)
Regoli, Massimo
2013-01-01
In past years we have developed a new symmetric encryption algorithm based on a new interpretation of the biological phenomenon of redundant sequences inside pre-mRNA (the introns, apparently junk DNA) from a 'science of information' point of view. First, we presented the flow of the algorithm by drawing a parallel between the various biological aspects of the phenomenon of redundancy and the corresponding agents in our encryption algorithm. Then we established a strict mathematical terminology, identifying the spaces and mathematical operators needed for the correct application and interpretation of the algorithm. Finally, last year, we showed that our algorithm has excellent statistical behavior, being able to pass the standard statistical tests. This year we try to add a new operator (agent) that allows the introduction of a mechanism like a steganographic sub-message (a sub-ribbon of mRNA) inside the original message (the mRNA ribbon).
Optimization Techniques for Analysis of Biological and Social Networks
2012-03-28
The project objectives include developing and analyzing a new metaheuristic technique, variable objective search; experimentation and application, implementing, testing, and fine-tuning the proposed algorithms; and alternative mathematical programming formulations, their theoretical analysis, and the development of exact algorithms and heuristics, studied in a systematic fashion under a unifying theoretical and algorithmic framework. Keywords: optimization, complex networks, social network analysis, computational methods.
Mental Computation or Standard Algorithm? Children's Strategy Choices on Multi-Digit Subtractions
ERIC Educational Resources Information Center
Torbeyns, Joke; Verschaffel, Lieven
2016-01-01
This study analyzed children's use of mental computation strategies and the standard algorithm on multi-digit subtractions. Fifty-eight Flemish 4th graders of varying mathematical achievement level were individually offered subtractions that either stimulated the use of mental computation strategies or the standard algorithm in one choice and two…
Using Mathematics to Make Computing on Encrypted Data Secure and Practical
2015-12-01
Keywords: LLL lattice basis reduction algorithm, G-lattice, cryptography, security, Gentry-Szydlo algorithm, Ring-LWE. The report proposes that lattices with symmetry be further developed in order to quantify the security of lattice-based cryptography, including especially the security of homomorphic encryption. The work builds on the Gentry-Szydlo algorithm, and the ideas should be applicable to a range of questions in cryptography. The new algorithm of Lenstra and Silverberg…
Development of PET projection data correction algorithm
NASA Astrophysics Data System (ADS)
Bazhanov, P. V.; Kotina, E. D.
2017-12-01
Positron emission tomography is a modern nuclear medicine method used to examine metabolism and the function of internal organs, and it allows diagnoses to be made at early stages. Mathematical algorithms are widely used not only for image reconstruction but also for PET data correction. In this paper, the implementation of random-coincidence and scatter correction algorithms is considered, as well as an algorithm for modeling PET projection data acquisition used to verify the corrections.
NASA Technical Reports Server (NTRS)
Bakhshiyan, B. T.; Nazirov, R. R.; Elyasberg, P. E.
1980-01-01
The problem of selecting the optimal filtration algorithm and the optimal composition of the measurements is examined, assuming that the precise values of the mathematical expectation and the covariance matrix of the errors are unknown. It is demonstrated that the optimal filtration algorithm may be utilized for making some parameters more precise (for example, the parameters of the gravitational field) after preliminary determination of the elements of the orbit by a simpler method of processing (for example, the method of least squares).
Computations on the massively parallel processor at the Goddard Space Flight Center
NASA Technical Reports Server (NTRS)
Strong, James P.
1991-01-01
Described are four significant algorithms implemented on the massively parallel processor (MPP) at the Goddard Space Flight Center. Two are in the area of image analysis. Of the other two, one is a mathematical simulation experiment and the other deals with the efficient transfer of data between distantly separated processors in the MPP array. The first algorithm presented is the automatic determination of elevations from stereo pairs. The second algorithm solves mathematical logistic equations capable of producing both ordered and chaotic (or random) solutions. This work can potentially lead to the simulation of artificial life processes. The third algorithm is the automatic segmentation of images into reasonable regions based on some similarity criterion, while the fourth is an implementation of a bitonic sort of data which significantly overcomes the nearest neighbor interconnection constraints on the MPP for transferring data between distant processors.
The Goddard Profiling Algorithm (GPROF): Description and Current Applications
NASA Technical Reports Server (NTRS)
Olson, William S.; Yang, Song; Stout, John E.; Grecu, Mircea
2004-01-01
Atmospheric scientists use different methods for interpreting satellite data. In the early days of satellite meteorology, the analysis of cloud pictures from satellites was primarily subjective. As computer technology improved, satellite pictures could be processed digitally, and mathematical algorithms were developed and applied to the digital images in different wavelength bands to extract information about the atmosphere in an objective way. The kind of mathematical algorithm one applies to satellite data may depend on the complexity of the physical processes that lead to the observed image, and how much information is contained in the satellite images both spatially and at different wavelengths. Imagery from satellite-borne passive microwave radiometers has limited horizontal resolution, and the observed microwave radiances are the result of complex physical processes that are not easily modeled. For this reason, a type of algorithm called a Bayesian estimation method is utilized to interpret passive microwave imagery in an objective, yet computationally efficient manner.
Mathematics Achievement Levels of Black and White Youth. Report No. 165.
ERIC Educational Resources Information Center
Jones, Lyle V.; And Others
Based on data provided by the National Assessment of Educational Progress, this study examines mathematics achievement in relation to various background variables, contrasts achievement levels of black and white (females and males) youth, and evaluates group achievement differences in the light of group differences in background variables.…
A novel highly parallel algorithm for linearly unmixing hyperspectral images
NASA Astrophysics Data System (ADS)
Guerra, Raúl; López, Sebastián.; Callico, Gustavo M.; López, Jose F.; Sarmiento, Roberto
2014-10-01
Endmember extraction and abundance calculation represent critical steps within the process of linearly unmixing a given hyperspectral image, for two main reasons. The first is the need to compute a set of accurate endmembers in order to obtain confident abundance maps. The second is the huge number of operations involved in these time-consuming processes. This work proposes an algorithm that estimates the endmembers of a hyperspectral image under analysis and its abundances at the same time. The main advantages of this algorithm are its high degree of parallelization and the mathematical simplicity of the operations implemented. The algorithm estimates the endmembers as virtual pixels. In particular, it performs gradient descent to iteratively refine the endmembers and the abundances, reducing the mean square error according to the linear unmixing model. Some mathematical restrictions must be added so that the method converges to a unique and realistic solution; given the nature of the algorithm, these restrictions can be easily implemented. The results obtained with synthetic images demonstrate the good behavior of the proposed algorithm, and the results obtained with the well-known Cuprite dataset also corroborate the benefits of our proposal.
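A hedged Python sketch of this kind of joint refinement under the linear mixing model X ≈ E·A is given below. The initialization, step sizes, iteration count, and the projections used to enforce nonnegative, sum-to-one abundances are illustrative assumptions rather than the exact scheme of the paper.

# Joint refinement of endmembers E and abundances A by gradient descent under X ≈ E @ A.
# Step sizes, initialization, and constraint handling are illustrative.
import numpy as np

def unmix(X, p, iters=2000, lr=0.05):
    bands, pixels = X.shape
    rng = np.random.default_rng(0)
    E = X[:, rng.choice(pixels, p, replace=False)].astype(float)   # endmembers initialised as "virtual pixels"
    A = np.full((p, pixels), 1.0 / p)
    for _ in range(iters):
        R = E @ A - X                                # residual under the linear mixing model
        E -= lr * (R @ A.T) / pixels                 # gradient step on the endmembers
        A -= lr * (E.T @ R) / bands                  # gradient step on the abundances
        A = np.clip(A, 0.0, None)                    # nonnegativity restriction
        A /= A.sum(axis=0, keepdims=True) + 1e-12    # sum-to-one restriction
    return E, A

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    E_true = rng.random((50, 3))
    A_true = rng.dirichlet(np.ones(3), size=1000).T
    X = E_true @ A_true + 0.01 * rng.standard_normal((50, 1000))   # synthetic hyperspectral pixels
    E, A = unmix(X, p=3)
    print("reconstruction RMSE:", np.sqrt(np.mean((E @ A - X) ** 2)))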
A Spreadsheet in the Mathematics Classroom.
ERIC Educational Resources Information Center
Watkins, Will; Taylor, Monty
1989-01-01
Demonstrates how spreadsheets can be used to implement linear system solving algorithms in college mathematics classes. Lotus 1-2-3 is described, a linear system of equations is illustrated using spreadsheets, and the interplay between applications, computations, and theory is discussed. (four references) (LRW)
Preliminary Investigation of Profiling Tools and Methods
2011-06-01
The Jaccard coefficient is a mathematical way to measure behaviour co-occurrences (a similarity measure). With a few heuristics (which are the basis for the mathematical algorithms used in GP systems), individual analysts perform just as well as the system; if, instead, GP is taken as a holistic method of data interpretation with unsystematic methodologies, practices and varying mathematical principles, then anecdotes…
South Carolina Guide for Mathematics for the Technologies (Applied Vocational Mathematics).
ERIC Educational Resources Information Center
Moore, Charles; And Others
In this instructional guide, a third-level, two-semester mathematics course specifically for the student who plans a career in a vocational field is presented. The course is designed to meet the needs of students with varying mathematical backgrounds and to teach the mathematical skills required by various technical areas. In this practical…
ERIC Educational Resources Information Center
Nunez, Rafael E.
This paper gives a brief introduction to a discipline called the cognitive science of mathematics. The theoretical background of the arguments is based on embodied cognition and findings in cognitive linguistics. It discusses Mathematical Idea Analysis, a set of techniques for studying implicit structures in mathematics. Particular attention is…
ADAM: Analysis of Discrete Models of Biological Systems Using Computer Algebra
2011-01-01
Background Many biological systems are modeled qualitatively with discrete models, such as probabilistic Boolean networks, logical models, Petri nets, and agent-based models, to gain a better understanding of them. The computational complexity to analyze the complete dynamics of these models grows exponentially in the number of variables, which impedes working with complex models. There exist software tools to analyze discrete models, but they either lack the algorithmic functionality to analyze complex models deterministically or they are inaccessible to many users as they require understanding the underlying algorithm and implementation, do not have a graphical user interface, or are hard to install. Efficient analysis methods that are accessible to modelers and easy to use are needed. Results We propose a method for efficiently identifying attractors and introduce the web-based tool Analysis of Dynamic Algebraic Models (ADAM), which provides this and other analysis methods for discrete models. ADAM converts several discrete model types automatically into polynomial dynamical systems and analyzes their dynamics using tools from computer algebra. Specifically, we propose a method to identify attractors of a discrete model that is equivalent to solving a system of polynomial equations, a long-studied problem in computer algebra. Based on extensive experimentation with both discrete models arising in systems biology and randomly generated networks, we found that the algebraic algorithms presented in this manuscript are fast for systems with the structure maintained by most biological systems, namely sparseness and robustness. For a large set of published complex discrete models, ADAM identified the attractors in less than one second. Conclusions Discrete modeling techniques are a useful tool for analyzing complex biological systems and there is a need in the biological community for accessible efficient analysis tools. ADAM provides analysis methods based on mathematical algorithms as a web-based tool for several different input formats, and it makes analysis of complex models accessible to a larger community, as it is platform independent as a web-service and does not require understanding of the underlying mathematics. PMID:21774817
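As a toy illustration of what "identifying attractors is equivalent to solving a system of polynomial equations" means for a discrete model, the Python snippet below enumerates the fixed points of a small, made-up Boolean network by brute force. ADAM's computer-algebra approach avoids this exhaustive enumeration, which is what makes large sparse models tractable; the network here is purely hypothetical.

# Naive fixed-point (steady-state) search for a toy Boolean network by exhaustive
# enumeration; this only illustrates the meaning of solving x = f(x) for a discrete model.
from itertools import product

# Hypothetical 3-gene network: x1' = x1 AND x2, x2' = x1 OR x2, x3' = x1.
def step(state):
    x1, x2, x3 = state
    return (x1 & x2, x1 | x2, x1)

fixed_points = [s for s in product((0, 1), repeat=3) if step(s) == s]
print("fixed points (steady states):", fixed_points)   # [(0, 0, 0), (0, 1, 0), (1, 1, 1)]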
Student Math Skills Reference Manual.
ERIC Educational Resources Information Center
Wilson, Odell; And Others
This mathematics support guide is intended for use by vocational students and instructors as a review of essential mathematics concepts and for problem-solving exercises in the vocations. It is designed to accompany the "Mathematical Skills Inventory," which tests mathematics skills, attitudes, and background. A section entitled Arithmetic Skills…
Modern Versus Traditional Mathematics
ERIC Educational Resources Information Center
Roberts, A. M.
1974-01-01
The effect of different secondary school mathematics syllabi on first-year performance in college-level mathematics was studied in an attempt to evaluate the syllabus change. Students with a modern mathematics background performed significantly better on most first-year units. A topic-by-topic analysis of results is included. (DT)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murav’ev, V. P., E-mail: murval1@mail.ru; Kochetkov, A. V.; Glazova, E. G.
A mathematical model and algorithms are proposed for automatic calculation of the optimum flow rate of cooling water in nuclear and thermal power plants with cooling systems of arbitrary complexity. An unlimited number of configuration and design variants are assumed, with the possibility of obtaining a result for any computational time interval, from monthly to hourly. The structural solutions corresponding to an optimum cooling water flow rate can be used for subsequent engineering-economic evaluation of the best cooling system variant. The computerized mathematical model and algorithms make it possible to determine the availability and degree of structural changes for the cooling system in all stages of the life cycle of a plant.
The analysis of isotherms of radionuclides sorption by inorganic sorbents
NASA Astrophysics Data System (ADS)
Bykova, E. P.; Nedobukh, T. A.
2017-09-01
The isotherm of cesium sorption by an inorganic sorbent based on granulated glauconite, obtained over a wide range of cesium concentrations, was treated mathematically using the Langmuir, Freundlich, and Redlich-Peterson sorption models. The algorithms for the mathematical treatment of the experimental data with these models are described, and the parameters of all isotherms were determined. It is shown that assessing the adequacy of the various sorption models relies not only on the correlation coefficient values but also on the closeness of the calculated and experimental data. The mathematical treatment of the cesium sorption isotherm revealed several types of sorption sites. An algorithm is described, and the isotherm parameters are calculated, under the assumption that sorption occurs simultaneously on all three types of sorption sites in accordance with the Langmuir isotherm.
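For concreteness, a minimal nonlinear least-squares fit of a Langmuir isotherm is sketched below in Python. The data points and fitted parameter values are invented for illustration and are not the glauconite measurements of the paper; the same pattern extends to the Freundlich and Redlich-Peterson models by swapping the model function.

# Fitting a Langmuir isotherm q = q_max*K*C/(1+K*C) to sorption data by nonlinear least squares.
# The data below are made up for illustration only.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(C, q_max, K):
    return q_max * K * C / (1.0 + K * C)

C = np.array([0.01, 0.05, 0.1, 0.5, 1.0, 5.0, 10.0])       # equilibrium concentration (arbitrary units)
q = np.array([0.09, 0.38, 0.65, 1.55, 1.95, 2.40, 2.48])   # sorbed amount (arbitrary units)

(q_max, K), cov = curve_fit(langmuir, C, q, p0=(2.0, 1.0))
residuals = q - langmuir(C, q_max, K)
r2 = 1 - np.sum(residuals ** 2) / np.sum((q - q.mean()) ** 2)
# Report both the correlation-style statistic and the closeness of calculated vs. experimental data.
print(f"q_max = {q_max:.3f}, K = {K:.3f}, R^2 = {r2:.4f}, max residual = {np.abs(residuals).max():.3f}")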
Sensitivity analysis of dynamic biological systems with time-delays.
Wu, Wu Hsiung; Wang, Feng Sheng; Chang, Maw Shang
2010-10-15
Mathematical modeling has been applied to the study and analysis of complex biological systems for a long time. Some processes in biological systems, such as gene expression and feedback control in signal transduction networks, involve a time delay. These systems are represented as delay differential equation (DDE) models. Numerical sensitivity analysis of a DDE model by the direct method requires the solutions of the model and sensitivity equations with time delays. The major effort is the computation of the Jacobian matrix when solving the sensitivity equations. The computation of partial derivatives of complex equations, either by the analytic method or by symbolic manipulation, is time consuming, inconvenient, and prone to human error. To address this problem, an automatic approach to obtain the derivatives of complex functions efficiently and accurately is necessary. We previously proposed an efficient algorithm with adaptive step size control to compute the solution and dynamic sensitivities of biological systems described by ordinary differential equations (ODEs). The adaptive direct-decoupled algorithm is extended here to compute the solution and dynamic sensitivities of time-delay systems described by DDEs. To save human effort and avoid human errors in the computation of partial derivatives, an automatic differentiation technique is embedded in the extended algorithm to evaluate the Jacobian matrix. The extended algorithm is implemented and applied to two realistic models with time delays: the cardiovascular control system and the TNF-α signal transduction network. The results show that the extended algorithm is a good tool for dynamic sensitivity analysis of DDE models with less user intervention. By comparison with direct-coupled methods in theory, the extended algorithm is efficient, accurate, and easy to use for end users without a programming background to perform dynamic sensitivity analysis on complex biological systems with time delays.
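The automatic-differentiation idea mentioned above can be illustrated with a toy forward-mode dual-number class in Python, which propagates derivative values alongside function values so that Jacobian entries are obtained without hand-coded or symbolic partial derivatives. The rate equation in the example is a made-up stand-in; the paper embeds automatic differentiation inside a DDE solver, which is not reproduced here.

# Toy forward-mode automatic differentiation with dual numbers.
import math

class Dual:
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__
    def __sub__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val - o.val, self.der - o.der)

def dexp(x):                                 # chain rule for exp on dual numbers
    return Dual(math.exp(x.val), math.exp(x.val) * x.der)

# Right-hand side of a hypothetical rate equation f(x, p) = p*x - exp(-x).
def f(x, p):
    return p * x - dexp(Dual(-1) * x)

# df/dx at (x=1.5, p=0.8): seed the derivative of x with 1; analytically 0.8 + exp(-1.5).
print(f(Dual(1.5, 1.0), Dual(0.8)).der)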
In-Situ Assays Using a New Advanced Mathematical Algorithm - 12400
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oginni, B.M.; Bronson, F.L.; Field, M.B.
2012-07-01
Current mathematical efficiency modeling software for in-situ counting, such as the commercially available In-Situ Object Calibration Software (ISOCS), typically allows the description of measurement geometries via a list of well-defined templates that describe regular objects such as boxes, cylinders, or spheres. While for many situations these regular objects are sufficient to describe the measurement conditions, there are occasions in which a more detailed model is desired. We have developed a new all-purpose geometry template that extends the flexibility of the current ISOCS templates. This new template still utilizes the same advanced mathematical algorithms as the current templates, but allows the extension to a multitude of shapes and objects that can be placed at any location and even combined. In addition, detectors can be placed anywhere and aimed at any location within the measurement scene. Several applications of this algorithm to in-situ waste assay measurements, as well as validations of the template using Monte Carlo calculations and experimental measurements, are studied. Presented in this paper is this new template of mathematical algorithms for evaluating efficiencies; it combines all the advantages of ISOCS, allows the use of very complex geometries, allows the stacking of geometries on one another in the same measurement scene, and allows the detector to be placed anywhere in the measurement scene, pointing in any direction. We have shown that the template compares well with the previous ISOCS software within the limit of convergence of the code, and also compares well with MCNPX and measured data within the joint uncertainties of the code and the data. The new template agrees with ISOCS to within 1.5% at all energies, with MCNPX to within 10% at all energies (and within 5% for most geometries), and with measured data to within 10%. This mathematical algorithm can now be used for quickly and accurately evaluating efficiencies for a wider range of gamma-ray spectroscopy applications. (authors)
Teaching Computation in Primary School without Traditional Written Algorithms
ERIC Educational Resources Information Center
Hartnett, Judy
2015-01-01
Concerns regarding the dominance of the traditional written algorithms in schools have been raised by many mathematics educators, yet the teaching of these procedures remains a dominant focus in primary schools. This paper reports on a project in one school where the staff agreed to put the teaching of the traditional written algorithm aside,…
Short description of mathematical support programs for space experiments in the Interkosmos program
NASA Technical Reports Server (NTRS)
Elyasberg, P. Y.
1979-01-01
A synopsis of programs of mathematical support designed at the Institute for Cosmic Research of the USSR Academy of Sciences for cosmic experiments being conducted in the Interkosmos Program is presented. A short description of the appropriate algorithm is given.
An Experimental Approach to Mathematical Modeling in Biology
ERIC Educational Resources Information Center
Ledder, Glenn
2008-01-01
The simplest age-structured population models update a population vector via multiplication by a matrix. These linear models offer an opportunity to introduce mathematical modeling to students of limited mathematical sophistication and background. We begin with a detailed discussion of mathematical modeling, particularly in a biological context.…
Student Perceptions about Applied Mathematics.
ERIC Educational Resources Information Center
Keif, Malcolm G.; Stewart, Bob R.
Background information on the history and rationale for Tech Prep introduces the description of a study that examines the perceptions of students enrolled in Applied Mathematics 1 and Applied Mathematics 2 courses which are based on the Center for Occupational Research and Development's (CORD) applied mathematics curriculum. The primary goal is to…
Problem Posing with the Multiplication Table
ERIC Educational Resources Information Center
Dickman, Benjamin
2014-01-01
Mathematical problem posing is an important skill for teachers of mathematics, and relates readily to mathematical creativity. This article gives a bit of background information on mathematical problem posing, lists further references to connect problem posing and creativity, and then provides 20 problems based on the multiplication table to be…
Confidence in Teaching Mathematics among Malaysian Pre-Service Teachers
ERIC Educational Resources Information Center
Yunus, Aida Suraya Md.; Hamzah, Ramlah; Ismail, Habsah; Husain, Sharifah Kartini Said; Ismail, Mat Rofa
2006-01-01
This study focuses on the confidence level of mathematics education students in teaching school mathematics. Respondents were 165 final year students from four Malaysian universities. It was found that the respondents showed a strong foundation in mathematics upon entrance to the university. In spite of their strong background in school…
ERIC Educational Resources Information Center
Stripling, Christopher T.; Roberts, T. Grady
2013-01-01
The purpose of this exploratory study was to investigate the relationships between mathematics ability, personal mathematics efficacy, mathematics teaching efficacy, personal teaching efficacy, and background characteristics of preservice agricultural education teachers. Data were collected for two years at the University of Florida. Fourteen…
Image analysis applied to luminescence microscopy
NASA Astrophysics Data System (ADS)
Maire, Eric; Lelievre-Berna, Eddy; Fafeur, Veronique; Vandenbunder, Bernard
1998-04-01
We have developed a novel approach to study luminescent light emission during the migration of living cells by low-light imaging techniques. The equipment consists of an anti-vibration table with a hole for a direct output under the frame of an inverted microscope. The image is directly captured by an ultra-low-light-level photon-counting camera equipped with an image intensifier coupled by an optical fiber to a CCD sensor. This installation is dedicated to measuring, in a dynamic manner, the effect of SF/HGF (Scatter Factor/Hepatocyte Growth Factor) both on the activation of gene promoter elements and on cell motility. Epithelial cells were stably transfected with promoter elements containing Ets transcription factor-binding sites driving a luciferase reporter gene. Luminescent light emitted by individual cells was measured by image analysis. Images of luminescent spots were acquired with a high-aperture objective and exposure times of 10-30 min in photon-counting mode. The sensitivity of the camera was adjusted to a high value, which required the use of a segmentation algorithm dedicated to eliminating the background noise. Hence, image segmentation and treatments by mathematical morphology were particularly indicated under these experimental conditions. In order to estimate the orientation of cells during their migration, we used a dedicated skeleton algorithm applied to the oblong spots of variable intensity emitted by the cells. Kinetic changes of the luminescent sources, and the distance and speed of migration, were recorded and then correlated with cellular morphological changes for each spot. Our results highlight the usefulness of mathematical morphology for quantifying kinetic changes in luminescence microscopy.
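A hedged Python sketch of this kind of morphological segmentation and skeletonization pipeline, using scikit-image on a synthetic photon-counting image, is given below. The noise model, threshold choice, and structuring-element sizes are illustrative and do not correspond to the actual microscope data.

# Morphological segmentation and skeletonization of a synthetic "luminescent spot" image.
import numpy as np
from skimage import filters, morphology

rng = np.random.default_rng(0)
img = rng.poisson(0.2, size=(128, 128)).astype(float)   # photon-counting background noise
img[40:60, 30:90] += 3.0                                 # an elongated luminescent cell (synthetic)

# 1) suppress isolated photon hits with a grayscale morphological opening
opened = morphology.opening(img, morphology.disk(1))
# 2) segment spots against the background
mask = opened > filters.threshold_otsu(opened)
mask = morphology.remove_small_objects(mask, min_size=20)
# 3) skeletonize each spot to estimate its elongation/orientation
skeleton = morphology.skeletonize(mask)

print("segmented pixels:", int(mask.sum()), "skeleton pixels:", int(skeleton.sum()))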
ERIC Educational Resources Information Center
Hyatt, Sherry
2013-01-01
Research shows that children of different backgrounds and cultures learn and perform differently in mathematics despite similar intelligence levels and mathematics instruction (Alvarez & Bali, 2004). Ethnomathematics strives to explore and explain such phenomena in terms of the complex role culture plays in one's background experiences and…
Fuzzy Performance between Surface Fitting and Energy Distribution in Turbulence Runner
Liang, Zhongwei; Liu, Xiaochu; Ye, Bangyan; Brauwer, Richard Kars
2012-01-01
Because the application of surface-fitting algorithms exerts a considerable fuzzy influence on the mathematical features of the kinetic energy distribution, the mechanism relating them under different external parameter conditions must be quantitatively analyzed. After determining the kinetic energy value at each selected representative position coordinate point by calculating kinetic energy parameters, several typical complicated surface-fitting algorithms are applied to construct micro kinetic energy distribution surface models of the objective turbulence runner from the obtained kinetic energy values. On the basis of the newly proposed mathematical features, we construct a fuzzy evaluation data sequence and present a new three-dimensional fuzzy quantitative evaluation method; the value change tendencies of the kinetic energy distribution surface features can then be clearly quantified, and the fuzzy performance mechanism linking the results of the surface-fitting algorithms, the spatial features of the turbulence kinetic energy distribution surface, and their respective environmental parameter conditions can be quantitatively analyzed in detail. This yields conclusions concerning the inherent turbulence kinetic energy distribution performance mechanism and its mathematical relation, and ensures that further quantitative study of turbulence energy is possible. PMID:23213287
Decoding algorithm for vortex communications receiver
NASA Astrophysics Data System (ADS)
Kupferman, Judy; Arnon, Shlomi
2018-01-01
Vortex light beams can provide a tremendous alphabet for encoding information. We derive a symbol decoding algorithm for a direct detection matrix detector vortex beam receiver using Laguerre Gauss (LG) modes, and develop a mathematical model of symbol error rate (SER) for this receiver. We compare SER as a function of signal to noise ratio (SNR) for our algorithm and for the Pearson correlation algorithm. To our knowledge, this is the first comprehensive treatment of a decoding algorithm of a matrix detector for an LG receiver.
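A minimal Python sketch of template-based symbol decoding on a matrix detector follows: the received intensity pattern is compared against one template per symbol and the highest Pearson correlation wins. The ring-shaped patterns stand in for true Laguerre-Gauss mode intensities, which are not reproduced here, and the noise level is arbitrary.

# Pearson-correlation symbol decoding on a matrix detector with stand-in templates.
import numpy as np

def pearson(a, b):
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

def decode(received, templates):
    return int(np.argmax([pearson(received, t) for t in templates]))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    y, x = np.mgrid[-1:1:64j, -1:1:64j]
    r = np.sqrt(x ** 2 + y ** 2)
    # Ring-shaped stand-in patterns with different radii, one per symbol (not real LG modes).
    templates = [np.exp(-((r - r0) ** 2) / 0.02) for r0 in (0.2, 0.4, 0.6, 0.8)]
    sent = 2
    received = templates[sent] + 0.3 * rng.standard_normal(templates[sent].shape)  # noisy detector image
    print("decoded symbol:", decode(received, templates))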
Koppers, Lars; Wormer, Holger; Ickstadt, Katja
2017-08-01
The quality and authenticity of images are essential for data presentation, especially in the life sciences. Questionable images may often be a first indicator of questionable results, too. Therefore, a tool that uses mathematical methods to detect suspicious images in large image archives can be a helpful instrument for improving quality assurance in publications. As a first step towards a systematic screening tool, especially for journal editors and other staff members who are responsible for quality assurance, such as laboratory supervisors, we propose a basic classification of image manipulation. Based on this classification, we developed and explored some simple algorithms to detect copied areas in images. Using an artificial image and two examples of previously published modified images, we apply quantitative methods such as pixel-wise comparison, a nearest-neighbour algorithm, and a variance algorithm to detect copied-and-pasted areas or duplicated images. We show that our algorithms are able to detect some simple types of image alteration, such as copying and pasting background areas. The variance algorithm detects not only identical but also very similar areas that differ only in brightness. Further types could, in principle, be implemented in a standardized scanning routine. We detected the copied areas in a proven case of image manipulation in Germany and showed the similarity of two images in a retracted paper from the Kato labs, which has been widely discussed on sites such as PubPeer and Retraction Watch.
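As a hedged sketch of the nearest-neighbour idea for copy-move detection, the Python snippet below flags image blocks that occur at two different positions. The block size and the exact-match criterion are deliberate simplifications; the algorithms discussed above also handle near-identical regions differing in brightness, which this sketch does not.

# Detect copied-and-pasted blocks by finding identical blocks at different positions.
import numpy as np

def find_duplicate_blocks(img, block=8):
    seen, duplicates = {}, []
    h, w = img.shape
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            key = img[y:y + block, x:x + block].tobytes()   # exact match; real tools would use a tolerance
            if key in seen:
                duplicates.append((seen[key], (y, x)))
            else:
                seen[key] = (y, x)
    return duplicates

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
    img[32:40, 32:40] = img[0:8, 0:8]                       # simulate a copied background patch
    print("duplicated block pairs:", find_duplicate_blocks(img))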
Open-path FTIR data reduction algorithm with atmospheric absorption corrections: the NONLIN code
NASA Astrophysics Data System (ADS)
Phillips, William; Russwurm, George M.
1999-02-01
This paper describes the progress made to date in developing, testing, and refining a data reduction computer code, NONLIN, that alleviates many of the difficulties experienced in the analysis of open-path FTIR data. Among the problems that currently affect FTIR open-path data quality are the inability to obtain a true I0 (background) spectrum, spectral interferences from atmospheric gases such as water vapor and carbon dioxide, and matching the spectral resolution and shift of the reference spectra to a particular field instrument. The algorithm is based on a non-linear fitting scheme and is therefore not constrained by many of the assumptions required for the application of linear methods such as classical least squares (CLS). As a result, a more realistic mathematical model of the spectral absorption measurement process can be employed in the curve-fitting process. Applications of the algorithm have proven successful in circumventing open-path data reduction problems. However, recent studies by one of the authors of the temperature and pressure effects on atmospheric absorption indicate that there exist temperature and water partial-pressure effects that should be incorporated into the NONLIN algorithm for accurate quantification of gas concentrations. This paper investigates the sources of these phenomena. As a result of this study, a partial-pressure correction has been employed in the NONLIN computer code. Two typical field spectra are examined to determine what effect the partial-pressure correction has on gas quantification.
Parallel algorithm of real-time infrared image restoration based on total variation theory
NASA Astrophysics Data System (ADS)
Zhu, Ran; Li, Miao; Long, Yunli; Zeng, Yaoyuan; An, Wei
2015-10-01
Image restoration is a necessary preprocessing step for infrared remote sensing applications. Traditional methods remove the noise but penalize too heavily the gradients corresponding to edges. Image restoration techniques based on variational approaches can solve this over-smoothing problem thanks to their well-defined mathematical modeling of the restoration procedure. The total variation (TV) of the infrared image is introduced as an L1 regularization term added to the objective energy functional, converting the restoration process into the optimization of a functional involving a fidelity term to the image data plus a regularization term. Infrared image restoration with the TV-L1 model exploits the available remote sensing data fully and preserves edge information caused by clouds. The numerical implementation algorithm is presented in detail, and analysis indicates that its structure can easily be parallelized. A parallel implementation of the TV-L1 filter based on a multicore architecture with shared memory is therefore proposed for real-time infrared remote sensing systems. The massive computation over the image data is performed in parallel by cooperating threads running simultaneously on multiple cores. Several groups of synthetic infrared image data are used to validate the feasibility and effectiveness of the proposed parallel algorithm, and a quantitative analysis of the restored image quality relative to the input image is presented. Experimental results show that the TV-L1 filter can restore images with varying background reasonably well and that its performance can meet the requirements of real-time image processing.
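A serial (non-parallel) Python sketch of the variational idea is shown below: a smoothed TV-L1 energy, consisting of a total-variation regularizer plus an L1 fidelity term, is minimized by gradient descent. The smoothing constant, step size, regularization weight, and test image are illustrative, and the paper's multicore parallel implementation is not reproduced.

# Gradient descent on a smoothed TV-L1 energy: E(u) = sum |grad u|_eps + lam * sum |u - f|_eps.
import numpy as np

def tv_l1_denoise(f, lam=1.0, step=0.1, eps=1e-3, iters=300):
    u = f.copy()
    for _ in range(iters):
        ux = np.roll(u, -1, axis=1) - u
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps)
        px, py = ux / mag, uy / mag
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        fidelity = (u - f) / np.sqrt((u - f) ** 2 + eps)    # smoothed L1 fidelity gradient
        u -= step * (-div + lam * fidelity)                  # descent on regularizer + fidelity
    return u

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = np.zeros((64, 64)); clean[20:44, 20:44] = 1.0    # a bright target on a dark background
    noisy = clean + 0.3 * rng.standard_normal(clean.shape)
    restored = tv_l1_denoise(noisy)
    print("RMSE before:", np.sqrt(np.mean((noisy - clean) ** 2)),
          "after:", np.sqrt(np.mean((restored - clean) ** 2)))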
Logic via Computer Programming.
ERIC Educational Resources Information Center
Wieschenberg, Agnes A.
This paper posed the question "How do we teach logical thinking and sophisticated mathematics to unsophisticated college students?" One answer among many is through the writing of computer programs. The writing of computer algorithms is mathematical problem solving and logic in disguise, and it may attract students who would otherwise stop…
Learning with Calculator Games
ERIC Educational Resources Information Center
Frahm, Bruce
2013-01-01
Educational games provide a fun introduction to new material and a review of mathematical algorithms. Specifically, games can be designed to assist students in developing mathematical skills as an incidental consequence of the game-playing process. The programs presented in this article are adaptations of board games or television shows that…
Techtalk: Mobile Apps and College Mathematics
ERIC Educational Resources Information Center
Hoang, Theresa V.; Caverly, David C.
2013-01-01
In this column, the authors discuss apps useful in developing mathematical reasoning. They place these into a theoretical framework, suggesting how they could be used in an instructional model such as the Algorithmic Instructional Technique (AIT) developed by Vasquez (2003). This model includes four stages: modeling, practice, transition, and…
Generalised Assignment Matrix Methodology in Linear Programming
ERIC Educational Resources Information Center
Jerome, Lawrence
2012-01-01
Discrete Mathematics instructors and students have long been struggling with various labelling and scanning algorithms for solving many important problems. This paper shows how to solve a wide variety of Discrete Mathematics and OR problems using assignment matrices and linear programming, specifically using Excel Solvers although the same…
Satellite orbit computation methods
NASA Technical Reports Server (NTRS)
1977-01-01
Mathematical and algorithmical techniques for solution of problems in satellite dynamics were developed, along with solutions to satellite orbit motion. Dynamical analysis of shuttle on-orbit operations were conducted. Computer software routines for use in shuttle mission planning were developed and analyzed, while mathematical models of atmospheric density were formulated.
ERIC Educational Resources Information Center
Snapp, Robert R.; Neumann, Maureen D.
2015-01-01
The rapid growth of digital technology, including the worldwide adoption of mobile and embedded computers, places new demands on K-grade 12 educators and their students. Young people should have an opportunity to learn the technical knowledge of computer science (e.g., computer programming, mathematical logic, and discrete mathematics) in order to…
Enhanced MHT encryption scheme for chosen plaintext attack
NASA Astrophysics Data System (ADS)
Xie, Dahua; Kuo, C. C. J.
2003-11-01
Efficient multimedia encryption algorithms play a key role in multimedia security protection. One multimedia encryption algorithm known as the MHT (Multiple Huffman Tables) method was recently developed by Wu and Kuo. Even though MHT has many desirable properties, it is vulnerable to the chosen-plaintext attack (CPA). An enhanced MHT algorithm is proposed in this work to overcome this drawback. It is proved mathematically that the proposed algorithm is secure against the chosen plaintext attack.
Quantitative Analysis of the Interdisciplinarity of Applied Mathematics.
Xie, Zheng; Duan, Xiaojun; Ouyang, Zhenzheng; Zhang, Pengyuan
2015-01-01
The increasing use of mathematical techniques in scientific research leads to the interdisciplinarity of applied mathematics. This viewpoint is validated quantitatively here by statistical and network analysis on the corpus PNAS 1999-2013. A network describing the interdisciplinary relationships between disciplines in a panoramic view is built based on the corpus. Specific network indicators show the hub role of applied mathematics in interdisciplinary research. The statistical analysis on the corpus content finds that algorithms, a primary topic of applied mathematics, positively correlates, increasingly co-occurs, and has an equilibrium relationship in the long-run with certain typical research paradigms and methodologies. The finding can be understood as an intrinsic cause of the interdisciplinarity of applied mathematics.
ERIC Educational Resources Information Center
Akyuz, Gozde; Berberoglu, Giray
2010-01-01
Background: Teacher-related factors such as gender, experience, conceptions related to mathematics, instructional practices have effects with various magnitudes on students' mathematics achievement. Classroom related factors such as class size, class climate and limitations to teaching and their relation to mathematics achievement have also been…
Mathematics Learning Styles of Chinese Immigrant Students. Final Research Report.
ERIC Educational Resources Information Center
Tsang, Sau-Lim
Major revisions in the U.S. mathematics curriculum since the 1960s have led to significant differences between the mathematics curriculum of the United States and those of many other countries. This study explored how eight Chinese immigrant students, with different cultural backgrounds, mathematics knowledge, and learning styles, learned in an…
Mathematics Teachers' Support and Retention: Using Maslow's Hierarchy to Understand Teachers' Needs
ERIC Educational Resources Information Center
Fisher, Molly H.; Royster, David
2016-01-01
As part of a larger study, four mathematics teachers from diverse backgrounds and teaching situations report their ideas on teacher stress, mathematics teacher retention, and their feelings about the needs of mathematics teachers, as well as other information crucial to retaining quality teachers. The responses from the participants were used to…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bailey, David
In the January 2002 edition of SIAM News, Nick Trefethen announced the '$100, 100-Digit Challenge'. In this note he presented ten easy-to-state but hard-to-solve problems of numerical analysis, and challenged readers to find each answer to ten-digit accuracy. Trefethen closed with the enticing comment: 'Hint: They're hard! If anyone gets 50 digits in total, I will be impressed.' This challenge obviously struck a chord in hundreds of numerical mathematicians worldwide, as 94 teams from 25 nations later submitted entries. Many of these submissions exceeded the target of 50 correct digits; in fact, 20 teams achieved a perfect score of 100 correct digits. Trefethen had offered $100 for the best submission. Given the overwhelming response, a generous donor (William Browning, founder of Applied Mathematics, Inc.) provided additional funds to provide a $100 award to each of the 20 winning teams. Soon after the results were out, four participants, each from a winning team, got together and agreed to write a book about the problems and their solutions. The team is truly international: Bornemann is from Germany, Laurie is from South Africa, Wagon is from the USA, and Waldvogel is from Switzerland. This book provides some mathematical background for each problem, and then shows in detail how each of them can be solved. In fact, multiple solution techniques are mentioned in each case. The book describes how to extend these solutions to much larger problems and much higher numeric precision (hundreds or thousands of digit accuracy). The authors also show how to compute error bounds for the results, so that one can say with confidence that one's results are accurate to the level stated. Numerous numerical software tools are demonstrated in the process, including the commercial products Mathematica, Maple and Matlab. Computer programs that perform many of the algorithms mentioned in the book are provided, both in an appendix to the book and on a website. In the process, the authors take the reader on a wide-ranging tour of modern numerical mathematics, with enough background material so that even readers with little or no training in numerical analysis can follow. Here is a list of just a few of the topics visited: numerical quadrature (i.e., numerical integration), series summation, sequence extrapolation, contour integration, Fourier integrals, high-precision arithmetic, interval arithmetic, symbolic computing, numerical linear algebra, perturbation theory, Euler-Maclaurin summation, global minimization, eigenvalue methods, evolutionary algorithms, matrix preconditioning, random walks, special functions, elliptic functions, Monte-Carlo methods, and numerical differentiation.
Physical Models for Particle Tracking Simulations in the RF Gap
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shishlo, Andrei P.; Holmes, Jeffrey A.
2015-06-01
This document describes the algorithms that are used in the PyORBIT code to track the particles accelerated in the Radio-Frequency cavities. It gives the mathematical description of the algorithms and the assumptions made in each case. The derived formulas have been implemented in the PyORBIT code. The necessary data for each algorithm are described in detail.
Unsupervised Learning of Overlapping Image Components Using Divisive Input Modulation
Spratling, M. W.; De Meyer, K.; Kompass, R.
2009-01-01
This paper demonstrates that nonnegative matrix factorisation is mathematically related to a class of neural networks that employ negative feedback as a mechanism of competition. This observation inspires a novel learning algorithm which we call Divisive Input Modulation (DIM). The proposed algorithm provides a mathematically simple and computationally efficient method for the unsupervised learning of image components, even in conditions where these elementary features overlap considerably. To test the proposed algorithm, a novel artificial task is introduced which is similar to the frequently-used bars problem but employs squares rather than bars to increase the degree of overlap between components. Using this task, we investigate how the proposed method performs on the parsing of artificial images composed of overlapping features, given the correct representation of the individual components; and secondly, we investigate how well it can learn the elementary components from artificial training images. We compare the performance of the proposed algorithm with its predecessors including variations on these algorithms that have produced state-of-the-art performance on the bars problem. The proposed algorithm is more successful than its predecessors in dealing with overlap and occlusion in the artificial task that has been used to assess performance. PMID:19424442
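For reference, a minimal Python sketch of standard nonnegative matrix factorisation with Lee-Seung multiplicative updates is given below, since this is the factorisation that the paper relates to negative-feedback networks. DIM itself uses different, divisive update rules that are not reproduced here, and the random data matrix is a placeholder for real image patches.

# Standard NMF with Lee-Seung multiplicative updates (Euclidean objective).
import numpy as np

def nmf(V, rank, iters=200, eps=1e-9):
    rng = np.random.default_rng(0)
    W = rng.random((V.shape[0], rank))
    H = rng.random((rank, V.shape[1]))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)     # update component activations
        W *= (V @ H.T) / (W @ H @ H.T + eps)     # update basis (image components)
    return W, H

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    V = rng.random((64, 100))                    # e.g. 100 flattened 8x8 image patches, column-wise
    W, H = nmf(V, rank=10)
    print("relative reconstruction error:", np.linalg.norm(V - W @ H) / np.linalg.norm(V))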
Autumn Algorithm-Computation of Hybridization Networks for Realistic Phylogenetic Trees.
Huson, Daniel H; Linz, Simone
2018-01-01
A minimum hybridization network is a rooted phylogenetic network that displays two given rooted phylogenetic trees using a minimum number of reticulations. Previous mathematical work on their calculation has usually assumed the input trees to be bifurcating, correctly rooted, or that they both contain the same taxa. These assumptions do not hold in biological studies and "realistic" trees have multifurcations, are difficult to root, and rarely contain the same taxa. We present a new algorithm for computing minimum hybridization networks for a given pair of "realistic" rooted phylogenetic trees. We also describe how the algorithm might be used to improve the rooting of the input trees. We introduce the concept of "autumn trees", a nice framework for the formulation of algorithms based on the mathematics of "maximum acyclic agreement forests". While the main computational problem is hard, the run-time depends mainly on how different the given input trees are. In biological studies, where the trees are reasonably similar, our parallel implementation performs well in practice. The algorithm is available in our open source program Dendroscope 3, providing a platform for biologists to explore rooted phylogenetic networks. We demonstrate the utility of the algorithm using several previously studied data sets.
A clustering algorithm for determining community structure in complex networks
NASA Astrophysics Data System (ADS)
Jin, Hong; Yu, Wei; Li, ShiJun
2018-02-01
Clustering algorithms are attractive for the task of community detection in complex networks. DENCLUE is a representative density-based clustering algorithm which has a firm mathematical basis and good clustering properties, allowing for arbitrarily shaped clusters in high-dimensional datasets. However, this method cannot be directly applied to community discovery because it cannot deal with network data; moreover, it requires careful selection of the density parameter and the noise threshold. To solve these issues, a new community detection method is proposed in this paper. First, we use a spectral analysis technique to map the network data into a low-dimensional Euclidean space that preserves node structural characteristics. Then, DENCLUE is applied to detect the communities in the network. A mathematical method named the Sheather-Jones plug-in is chosen to select the density parameter, which can describe the intrinsic clustering structure accurately. Moreover, since every node in the network is meaningful, there are no noise nodes and the noise threshold can be ignored. We test our algorithm on both benchmark and real-life networks, and the results demonstrate the effectiveness of our algorithm over other popular density-based clustering algorithms adapted to community detection.
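A hedged Python sketch of the first stage described above is given below: the network is embedded in a low-dimensional Euclidean space using eigenvectors of the normalized graph Laplacian. For brevity the final clustering step uses k-means instead of DENCLUE with a Sheather-Jones bandwidth, so this is only an approximation of the proposed method, and the two-community test graph is synthetic.

# Spectral embedding of a graph followed by a simple clustering step (k-means stands in for DENCLUE).
import numpy as np
from sklearn.cluster import KMeans

def spectral_embed(A, dim=2):
    d = A.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L = np.eye(len(A)) - d_inv_sqrt @ A @ d_inv_sqrt      # normalized graph Laplacian
    _, vecs = np.linalg.eigh(L)
    return vecs[:, 1:dim + 1]                             # skip the trivial constant eigenvector

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = (rng.random((30, 30)) < 0.05).astype(float)       # sparse background links
    A[:15, :15] = (rng.random((15, 15)) < 0.5)            # planted community 1
    A[15:, 15:] = (rng.random((15, 15)) < 0.5)            # planted community 2
    A = np.triu(A, 1); A = A + A.T                        # symmetric adjacency, no self-loops
    X = spectral_embed(A, dim=2)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    print("community labels:", labels)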
Nonlinear convergence active vibration absorber for single and multiple frequency vibration control
NASA Astrophysics Data System (ADS)
Wang, Xi; Yang, Bintang; Guo, Shufeng; Zhao, Wenqiang
2017-12-01
This paper presents a nonlinear convergence algorithm for an active dynamic undamped vibration absorber (ADUVA). The damping of the absorber is ignored in this algorithm to strengthen the vibration-suppressing effect and, at the same time, simplify the algorithm. The simulation and experimental results indicate that this nonlinear convergence ADUVA can significantly suppress vibration caused by excitation at both single and multiple frequencies. The proposed nonlinear algorithm is composed of equivalent dynamic modeling equations and a frequency estimator. Both the single- and multiple-frequency ADUVA are mathematically modeled by the same mechanical structure, with a mass body and a voice coil motor (VCM). The nonlinear convergence estimator is applied to simultaneously satisfy the requirements of a fast convergence rate and a small steady-state frequency error, which are incompatible for a linear convergence estimator. The convergence of the nonlinear algorithm is mathematically proved, and its non-divergent character is theoretically guaranteed. The vibration-suppressing experiments demonstrate that the nonlinear ADUVA converges faster and achieves greater attenuation of oscillations than the linear ADUVA.
ERIC Educational Resources Information Center
Muijs, Daniel; Reynolds, David
2003-01-01
In this article, we have studied the effect of student social background, classroom social context, classroom organisation, and teacher behaviours on mathematics achievement and attainment in English and Welsh primary schools. Data were collected over 2 years as part of a programme evaluation, for which we observed 138 teachers and tested and…
ERIC Educational Resources Information Center
Nonoyama-Tarumi, Yuko; Hughes, Kathleen; Willms, J. Douglas
2015-01-01
This article compares the effects of family background and school resources on fourth-grade students' math achievement, using data from the 2011 Trends in International Mathematics and Science Study (TIMSS). In order to ameliorate potential floor effects, it uses relative risk and population attributable risk to examine the effects of family…
Reorganizing Freshman Business Mathematics I: Background and Philosophy
ERIC Educational Resources Information Center
Green, Kris; Emerson, Allen
2008-01-01
This article is the first of the two-part discussion of the development of a new Freshman Business Mathematics (FBM) course at our college. Part I of the article describes the background and history behind the course, and provides a theoretical framework for the design of the course. This design involves students in learning and applying…
ERIC Educational Resources Information Center
Muir, Carrie
2012-01-01
The purpose of this study was to compare the performance of first year college students with similar high school mathematics backgrounds in two introductory level college mathematics courses, "Fundamentals and Techniques of College Algebra and Quantitative Reasoning and Mathematical Skills," and to compare the performance of students…
Maire, E; Lelièvre, E; Brau, D; Lyons, A; Woodward, M; Fafeur, V; Vandenbunder, B
2000-04-10
We have developed an approach to study, in single living epithelial cells, both cell migration and transcriptional activation, the latter evidenced by the detection of luminescence emission from cells transfected with luciferase reporter vectors. The image acquisition chain consists of an epifluorescence inverted microscope connected to an ultralow-light-level photon-counting camera and an image-acquisition card associated with specialized image analysis software running on a PC. Using a simple method based on a thin calibrated light source, the image acquisition chain was optimized following comparisons of the performance of microscopy objectives and photon-counting cameras designed to observe luminescence. This setup allows us to measure, by image analysis, the luminescent light emitted by individual cells stably expressing a luciferase reporter vector. The sensitivity of the camera was adjusted to a high value, which required the use of a segmentation algorithm to eliminate the background noise. Following mathematical morphology treatments, kinetic changes of the luminescent sources were analyzed and then correlated with the distance and speed of migration. Our results highlight the usefulness of our image acquisition chain and mathematical morphology software for quantifying the kinetics of luminescence changes in migrating cells.
Using a Card Trick to Teach Discrete Mathematics
ERIC Educational Resources Information Center
Simonson, Shai; Holm, Tara S.
2003-01-01
We present a card trick that can be used to review or teach a variety of topics in discrete mathematics. We address many subjects, including permutations, combinations, functions, graphs, depth first search, the pigeonhole principle, greedy algorithms, and concepts from number theory. Moreover, the trick motivates the use of computers in…
Optimization of a new mathematical model for bacterial growth
USDA-ARS?s Scientific Manuscript database
The objective of this research is to optimize a new mathematical equation as a primary model to describe the growth of bacteria under constant temperature conditions. An optimization algorithm was used in combination with a numerical (Runge-Kutta) method to solve the differential form of the new gr...
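The abstract is truncated, so the exact equation is not shown; as a hedged illustration of coupling an optimizer with a Runge-Kutta solver, the sketch below fits a generic logistic-type growth ODE (an assumption standing in for the new model) to synthetic data.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def growth_rhs(t, y, mu, ymax):
    # Generic logistic-type primary growth model (stand-in for the new equation).
    return mu * y * (1.0 - y / ymax)

def simulate(params, t_obs, y0):
    mu, ymax = params
    sol = solve_ivp(growth_rhs, (t_obs[0], t_obs[-1]), [y0],
                    t_eval=t_obs, args=(mu, ymax), method="RK45")
    return sol.y[0]

def residuals(params, t_obs, y_obs, y0):
    return simulate(params, t_obs, y0) - y_obs

# Synthetic "observed" log-counts under constant temperature.
t_obs = np.linspace(0.0, 24.0, 13)
y_true = simulate([0.4, 9.0], t_obs, 2.0)
y_obs = y_true + np.random.default_rng(1).normal(0.0, 0.05, t_obs.size)

fit = least_squares(residuals, x0=[0.2, 8.0], args=(t_obs, y_obs, 2.0))
print("estimated mu, ymax:", fit.x)
```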
Mathematical Reasoning in Teachers' Presentations
ERIC Educational Resources Information Center
Bergqvist, Tomas; Lithner, Johan
2012-01-01
This paper presents a study of the opportunities presented to students that allow them to learn different types of mathematical reasoning during teachers' ordinary task solving presentations. The characteristics of algorithmic and creative reasoning that are seen in the presentations are analyzed. We find that most task solutions are based on…
An Application of Discrete Mathematics to Coding Theory.
ERIC Educational Resources Information Center
Donohoe, L. Joyce
1992-01-01
Presents a public-key cryptosystem application to introduce students to several topics in discrete mathematics. A computer algorithm using recursive methods is presented to solve a problem in which one person wants to send a coded message to a second person while keeping the message secret from a third person. (MDH)
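The abstract does not specify the cryptosystem; as an illustrative assumption, the sketch below uses a small RSA-style scheme, with recursive modular exponentiation and the recursive extended Euclidean algorithm standing in for the "recursive methods" mentioned.

```python
def power_mod(base, exp, m):
    """Recursive square-and-multiply modular exponentiation."""
    if exp == 0:
        return 1
    half = power_mod(base, exp // 2, m)
    result = (half * half) % m
    return (result * base) % m if exp % 2 else result

def extended_gcd(a, b):
    """Recursively return (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

# Tiny (insecure) RSA-style example: the sender encrypts with the receiver's
# public key (e, n); only the receiver's private exponent d can decrypt.
p, q, e = 61, 53, 17
n, phi = p * q, (p - 1) * (q - 1)
d = extended_gcd(e, phi)[1] % phi        # private exponent: e*d = 1 (mod phi)
message = 1234
cipher = power_mod(message, e, n)        # coded message sent openly
plain = power_mod(cipher, d, n)          # receiver recovers the original
print(cipher, plain)                     # plain == 1234
```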
Supporting Mathematics Instruction through Community
ERIC Educational Resources Information Center
Amidon, Joel C.; Trevathan, Morgan L.
2016-01-01
Raising expectations is nothing new. Every iteration of standards elevates the expectations for what students should know and be able to do. The Common Core State Standards for Mathematics (CCSSM) is no exception, with standards for content and practice that move beyond memorization of traditional algorithms to "make sense of problems and…
Researching Race in Mathematics Education
ERIC Educational Resources Information Center
Martin, Danny Bernard
2009-01-01
Background: Within mathematics education research, policy, and practice, race remains undertheorized in relation to mathematics learning and participation. Although race is characterized in the sociological and critical theory literatures as socially and politically constructed with structural expressions, most studies of differential outcomes in…
Collective Properties of Neural Systems and Their Relation to Other Physical Models
1988-08-05
…been computed explicitly. This has been achieved algorithmically by utilizing methods introduced earlier. It should be emphasized that, in addition to… (Research Institute for Mathematical Sciences, Kyoto University, Kyoto 606, Japan, and E. Barouch, Department of Mathematics and Computer Science, Clarkson University, where this work was collaborated on.) References: Babu, S. V. and Barouch, E., An exact solution for the…
What Diagrams Argue in Late Imperial Chinese Combinatorial Texts.
Bréard, Andrea
2015-01-01
Attitudes towards diagrammatic reasoning and visualization in mathematics were seldom spelled out in texts from pre-modern China, although illustrations have figured prominently in mathematical literature since the eleventh century. Taking the sums of finite series and their combinatorial interpretation as a case study, this article investigates the epistemological function of illustrations from the eleventh to the nineteenth century that encode either the mathematical objects themselves or represent their related algorithms. It particularly focuses on the two illustrations given in Wang Lai's (1768-1813) Mathematical Principles of Sequential Combinations, arguing that they reflect a specific mode of nineteenth-century mathematical argumentative practice and served as a heuristic model for later authors.
Research and application of multi-agent genetic algorithm in tower defense game
NASA Astrophysics Data System (ADS)
Jin, Shaohua
2018-04-01
In this paper, a new multi-agent genetic algorithm based on orthogonal experiments is proposed, combining a multi-agent system, a genetic algorithm, and orthogonal experimental design. The design includes a neighborhood competition operator, an orthogonal crossover operator, and a self-learning operator. The new algorithm is applied to a mobile tower defense game: mathematical models are established according to the characteristics of the game, and the algorithm ultimately increases the value of the game's monsters.
A density based algorithm to detect cavities and holes from planar points
NASA Astrophysics Data System (ADS)
Zhu, Jie; Sun, Yizhong; Pang, Yueyong
2017-12-01
Delaunay-based shape reconstruction algorithms are widely used in approximating the shape of a set of planar points. However, these algorithms cannot ensure the optimality of varied reconstructed cavity boundaries and hole boundaries. This inadequate reconstruction can be primarily attributed to the lack of an efficient mathematical formulation for the two structures (hole and cavity). In this paper, we develop an efficient algorithm for generating cavities and holes from planar points. The algorithm yields the final boundary based on an iterative removal of triangles from the Delaunay triangulation. Our algorithm is mainly divided into two steps, namely, rough and refined shape reconstruction. The rough shape reconstruction performed by the algorithm is controlled by a relative parameter. Based on the rough result, the refined shape reconstruction mainly aims to detect holes and pure cavities. A cavity or hole is conceptualized as a structure in which a low-density region is surrounded by a high-density region. With this structure, cavities and holes are characterized by a mathematical formulation called the compactness of a point, formed from the length variation of the edges incident to the point in the Delaunay triangulation. The boundaries of cavities and holes are then found by locating a gradient change in the compactness of the point set. The experimental comparison with other shape reconstruction approaches shows that the proposed algorithm is able to accurately yield the boundaries of cavities and holes with varying point set densities and distributions.
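A hedged sketch of the "compactness of a point" idea described above (the length variation of the Delaunay edges incident to each point); the thresholding and the iterative boundary-removal step of the full algorithm are deliberately simplified away.

```python
import numpy as np
from scipy.spatial import Delaunay

def point_compactness(points):
    """For each planar point, measure the length variation (coefficient of
    variation) of the Delaunay edges incident to it; points on cavity or hole
    borders show a sharp change in this quantity."""
    tri = Delaunay(points)
    incident = [[] for _ in range(len(points))]
    for simplex in tri.simplices:
        for i in range(3):
            a, b = simplex[i], simplex[(i + 1) % 3]
            length = np.linalg.norm(points[a] - points[b])
            incident[a].append(length)
            incident[b].append(length)
    return np.array([np.std(e) / np.mean(e) for e in incident])

# Example: uniform points with a square hole punched out of the middle.
rng = np.random.default_rng(2)
pts = rng.uniform(0, 10, size=(800, 2))
pts = pts[~((pts[:, 0] > 4) & (pts[:, 0] < 6) & (pts[:, 1] > 4) & (pts[:, 1] < 6))]
c = point_compactness(pts)
print("candidate hole-border points:", int(np.sum(c > c.mean() + 2 * c.std())))
```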
NASA Astrophysics Data System (ADS)
Kurnosov, R. Yu; Chernyshova, T. I.; Chernyshov, V. N.
2018-05-01
The algorithms for improving the metrological reliability of analogue blocks of measuring channels and information-measuring systems are developed. The proposed algorithms ensure the optimum values of their metrological reliability indices for a given analogue circuit block solution.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Atanassov, E.; Dimitrov, D.; Gurov, T.
2015-10-28
The recent developments in the area of high-performance computing are driven not only by the desire for ever higher performance but also by the rising costs of electricity. The use of various types of accelerators like GPUs, Intel Xeon Phi has become mainstream and many algorithms and applications have been ported to make use of them where available. In Financial Mathematics the question of optimal use of computational resources should also take into account the limitations on space, because in many use cases the servers are deployed close to the exchanges. In this work we evaluate various algorithms for option pricing that we have implemented for different target architectures in terms of their energy and space efficiency. Since it has been established that low-discrepancy sequences may be better than pseudorandom numbers for these types of algorithms, we also test the Sobol and Halton sequences. We present the raw results, the computed metrics and conclusions from our tests.
Simulating an underwater vehicle self-correcting guidance system with Simulink
NASA Astrophysics Data System (ADS)
Fan, Hui; Zhang, Yu-Wen; Li, Wen-Zhe
2008-09-01
Underwater vehicles have already adopted self-correcting directional guidance algorithms based on multi-beam self-guidance systems, even though research has not yet determined the most effective algorithms. The main challenges facing research on these guidance systems have been effective modeling of the guidance algorithm and a means to analyze the simulation results. A simulation structure based on Simulink that deals with both issues is proposed. Initially, a mathematical model of the relative motion between the vehicle and the target was developed and then encapsulated as a subsystem. Next, the steps for constructing a model of the self-correcting guidance algorithm based on the Stateflow module were examined in detail. Finally, a 3-D model of the vehicle and target was created in VRML, and by processing the mathematical results, the model was shown moving in a visual environment. This process gives more intuitive results for analyzing the simulation. The results showed that the simulation structure performs well. The simulation program makes heavy use of modularization and encapsulation, and so has broad applicability to simulations of other dynamic systems.
NASA Astrophysics Data System (ADS)
Atanassov, E.; Dimitrov, D.; Gurov, T.
2015-10-01
The recent developments in the area of high-performance computing are driven not only by the desire for ever higher performance but also by the rising costs of electricity. The use of various types of accelerators like GPUs, Intel Xeon Phi has become mainstream and many algorithms and applications have been ported to make use of them where available. In Financial Mathematics the question of optimal use of computational resources should also take into account the limitations on space, because in many use cases the servers are deployed close to the exchanges. In this work we evaluate various algorithms for option pricing that we have implemented for different target architectures in terms of their energy and space efficiency. Since it has been established that low-discrepancy sequences may be better than pseudorandom numbers for these types of algorithms, we also test the Sobol and Halton sequences. We present the raw results, the computed metrics and conclusions from our tests.
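A minimal sketch of quasi-Monte Carlo pricing of a European call with the Sobol sequence versus pseudorandom numbers, under assumed Black-Scholes parameters; it shows the kind of algorithm being benchmarked, not the authors' implementations or their energy/space measurements.

```python
import numpy as np
from scipy.stats import norm, qmc

def european_call_mc(n, s0=100.0, k=100.0, r=0.05, sigma=0.2, t=1.0, sobol=True):
    """Price a European call under Black-Scholes dynamics using either
    Sobol low-discrepancy points or pseudorandom numbers."""
    if sobol:
        u = qmc.Sobol(d=1, scramble=True, seed=0).random(n).ravel()
    else:
        u = np.random.default_rng(0).random(n)
    u = np.clip(u, 1e-12, 1 - 1e-12)                  # guard against endpoint values
    z = norm.ppf(u)                                   # map uniforms to standard normals
    st = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)
    payoff = np.maximum(st - k, 0.0)
    return np.exp(-r * t) * payoff.mean()

# Closed-form reference for these parameters: d1 = 0.35, d2 = 0.15.
exact = 100.0 * norm.cdf(0.35) - 100.0 * np.exp(-0.05) * norm.cdf(0.15)
print("Sobol: ", european_call_mc(2**14, sobol=True))
print("Pseudo:", european_call_mc(2**14, sobol=False))
print("Exact: ", exact)
```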
Language and Thought in Mathematics Staff Development: A Problem Probing Protocol
ERIC Educational Resources Information Center
Kabasakalian, Rita
2007-01-01
Background/Context: The theoretical framework of the paper comes from research on problem solving, considered by many to be the essence of mathematics; research on the importance of oral language in learning mathematics; and on the importance of the teacher as the primary instrument of learning mathematics for most students. As a nation, we are…
ERIC Educational Resources Information Center
Mathematical Association of America, Berkeley, CA. Committee on the Undergraduate Program in Mathematics.
This document presents the latest set of recommendations on the mathematical preparation of elementary and secondary school teachers developed by the Committee on the Undergraduate Program in Mathematics (CUPM) of the Mathematical Association of America (MAA). The introduction notes the background for the recommendations, and states that they are…
2012 National Survey of Science and Mathematics Education: Status of Middle School Mathematics
ERIC Educational Resources Information Center
Fulkerson, William O.
2013-01-01
The 2012 National Survey of Science and Mathematics Education was designed to provide up-to-date information and to identify trends in the areas of teacher background and experience, curriculum and instruction, and the availability and use of instructional resources. A total of 7,752 science and mathematics teachers in schools across the United…
2012 National Survey of Science and Mathematics Education: Status of Elementary School Mathematics
ERIC Educational Resources Information Center
Malzahn, Kristen A.
2013-01-01
The 2012 National Survey of Science and Mathematics Education was designed to provide up-to-date information and to identify trends in the areas of teacher background and experience, curriculum and instruction, and the availability and use of instructional resources. A total of 7,752 science and mathematics teachers in schools across the United…
2012 National Survey of Science and Mathematics Education: Status of High School Mathematics
ERIC Educational Resources Information Center
Smith, Adrienne A.
2013-01-01
The 2012 National Survey of Science and Mathematics Education was designed to provide up-to-date information and to identify trends in the areas of teacher background and experience, curriculum and instruction, and the availability and use of instructional resources. A total of 7,752 science and mathematics teachers in schools across the United…
"Come in with an Open Mind": Changing Attitudes towards Mathematics in Primary Teacher Education
ERIC Educational Resources Information Center
Hourigan, Mairéad; Leavy, Aisling M.; Carroll, Claire
2016-01-01
Background: The relationship between attitudes and behaviour has led to a focus on the role played by attitudes in the teaching and learning of mathematics. Purpose: This paper reports on an investigation into student teachers' self-reported attitudes towards mathematics in the context of a mathematics education programme. The programme had been…
ERIC Educational Resources Information Center
Costa, H. M.; Nicholson, B.; Donlan, C.; Van Herwegen, J.
2018-01-01
Background: Different domain-specific and domain-general cognitive precursors play a key role in the development of mathematical abilities. The contribution of these domains to mathematical ability changes during development. Primary school-aged children who show mathematical difficulties form a heterogeneous group, but it is not clear whether…
NASA Astrophysics Data System (ADS)
Ganimedov, V. L.; Papaeva, E. O.; Maslov, N. A.; Larionov, P. M.
2017-09-01
Development of cell-mediated scaffold technologies for the treatment of critical bone defects is very important for reparative bone regeneration. Today the properties of bioreactors for cell-seeded scaffold cultivation are the subject of intensive research. To develop this new procedure, we used mathematical modeling of a rotational bioreactor and constructed a computational algorithm with the help of the ANSYS software package. The solution obtained with the constructed computational algorithm is in good agreement with Couette's analytical solution for flow between two coaxial cylinders. A series of flow computations for different rotation frequencies (1, 0.75, 0.5, 0.33, 1.125 Hz) was performed in the laminar flow regime approximation with the help of the computational algorithm. It was found that Taylor vortices appear in the annular gap between the cylinders in the simulated bioreactor. It was also found that shear stresses in the range of interest (0.002-0.1 Pa) arise on the outer surface of the inner cylinder when it rotates at frequencies not exceeding 0.8 Hz. Thus, the constructed mathematical model and the computational algorithm for calculating the flow parameters allow the shear stress and pressure values to be predicted as functions of the rotation frequency and geometric parameters, and the operating mode of the bioreactor to be optimized.
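A small worked check of the Couette comparison mentioned above: the analytical azimuthal velocity between coaxial cylinders (inner rotating, outer at rest) and the resulting shear stress on the inner-cylinder surface. The fluid viscosity and radii below are assumed illustrative values, not the bioreactor's geometry.

```python
import numpy as np

def couette_profile(r, omega, r1, r2, mu):
    """Analytical solution for flow between coaxial cylinders: inner cylinder
    (radius r1) rotating at angular velocity omega, outer cylinder (r2) fixed.
    Returns azimuthal velocity and shear stress tau_r_theta at radius r."""
    a = -omega * r1**2 / (r2**2 - r1**2)
    b = omega * r1**2 * r2**2 / (r2**2 - r1**2)
    v_theta = a * r + b / r
    tau = -2.0 * mu * b / r**2        # tau_r_theta = mu * r * d/dr(v_theta / r)
    return v_theta, tau

# Assumed water-like viscosity and centimetre-scale gap (illustrative only).
mu, r1, r2 = 1.0e-3, 0.02, 0.03            # Pa*s, m, m
for f_hz in (0.33, 0.5, 0.75, 0.8, 1.0):
    omega = 2.0 * np.pi * f_hz
    _, tau_inner = couette_profile(np.array([r1]), omega, r1, r2, mu)
    print(f"f = {f_hz:5.2f} Hz -> |tau| at inner wall = {abs(tau_inner[0]):.4f} Pa")
```

For this assumed geometry, frequencies below about 0.8 Hz give wall shear stresses of a few hundredths of a pascal, i.e. the same order as the range of interest quoted in the abstract.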
ERIC Educational Resources Information Center
Kareshki, Hossein; Hajinezhad, Zahra
2014-01-01
The purpose of the present study is investigating the correlation between school quality and family socioeconomic background and students' mathematics achievement in the Middle East. The countries in comparison are UAE, Syria, Qatar, Iran, Saudi Arabia, Oman, Lebanon, Jordan, and Bahrain. The study utilized data from IEA's Trends in International…
ERIC Educational Resources Information Center
Svoboda, Ryan C.; Rozek, Christopher S.; Hyde, Janet S.; Harackiewicz, Judith M.; Destin, Mesmin
2016-01-01
High school students from lower-socioeconomic status (SES) backgrounds are less likely to enroll in advanced mathematics and science courses compared to students from higher-SES backgrounds. The current longitudinal study draws on identity-based and expectancy-value theories of motivation to explain the SES and mathematics and science…
Zamunér, Antonio R.; Catai, Aparecida M.; Martins, Luiz E. B.; Sakabe, Daniel I.; Silva, Ester Da
2013-01-01
Background The second heart rate (HR) turn point has been extensively studied; however, there are few studies determining the first HR turn point. Also, the use of mathematical and statistical models for determining changes in the dynamic characteristics of physiological variables during an incremental cardiopulmonary test has been suggested. Objectives To determine the first turn point by analysis of HR, surface electromyography (sEMG), and carbon dioxide output (VCO2) using two mathematical models and to compare the results to those of the visual method. Method Ten sedentary middle-aged men (53.9±3.2 years old) were submitted to cardiopulmonary exercise testing on an electromagnetic cycle ergometer until exhaustion. Ventilatory variables, HR, and sEMG of the vastus lateralis were obtained in real time. Three methods were used to determine the first turn point: 1) visual analysis based on loss of parallelism between VCO2 and oxygen uptake (VO2); 2) the linear-linear model, based on fitting the curves to the set of VCO2 data (Lin-Lin VCO2); 3) a bi-segmental linear regression by Hinkley's algorithm applied to HR (HMM-HR), VCO2 (HMM-VCO2), and sEMG data (HMM-RMS). Results There were no differences among workload, HR, and ventilatory variable values at the first ventilatory turn point as determined by the five studied parameters (p>0.05). The Bland-Altman plot showed an even distribution of the visual analysis method against Lin-Lin VCO2, HMM-HR, HMM-VCO2, and HMM-RMS. Conclusion The proposed mathematical models were effective in determining the first turn point, since they detected the linear pattern change and the deflection point of VCO2, HR responses, and sEMG. PMID:24346296
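A hedged sketch of a bi-segmental linear regression of the kind attributed to Hinkley's algorithm above: for each candidate break index, fit two straight lines and keep the split that minimizes the total squared residual. The HR-versus-workload data below are synthetic, not the study's measurements.

```python
import numpy as np

def bisegmental_turn_point(x, y, min_pts=5):
    """Return the index of the breakpoint that minimizes the summed squared
    residuals of two independent least-squares line fits."""
    best_idx, best_sse = None, np.inf
    for k in range(min_pts, len(x) - min_pts):
        sse = 0.0
        for xs, ys in ((x[:k], y[:k]), (x[k:], y[k:])):
            coeffs = np.polyfit(xs, ys, 1)
            sse += np.sum((ys - np.polyval(coeffs, xs)) ** 2)
        if sse < best_sse:
            best_idx, best_sse = k, sse
    return best_idx

# Synthetic incremental-test data: HR rises faster after the first turn point.
workload = np.arange(0, 200, 5, dtype=float)
hr = np.where(workload < 100, 70 + 0.3 * workload, 100 + 0.6 * (workload - 100))
hr += np.random.default_rng(3).normal(0.0, 1.0, workload.size)
k = bisegmental_turn_point(workload, hr)
print("estimated first turn point at workload:", workload[k])
```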
2014-01-01
Background A key problem in the analysis of mathematical models of molecular networks is the determination of their steady states. The present paper addresses this problem for Boolean network models, an increasingly popular modeling paradigm for networks lacking detailed kinetic information. For small models, the problem can be solved by exhaustive enumeration of all state transitions. But for larger models this is not feasible, since the size of the phase space grows exponentially with the dimension of the network. The dimension of published models is growing to over 100, so that efficient methods for steady state determination are essential. Several methods have been proposed for large networks, some of them heuristic. While these methods represent a substantial improvement in scalability over exhaustive enumeration, the problem for large networks is still unsolved in general. Results This paper presents an algorithm that consists of two main parts. The first is a graph theoretic reduction of the wiring diagram of the network, while preserving all information about steady states. The second part formulates the determination of all steady states of a Boolean network as a problem of finding all solutions to a system of polynomial equations over the finite number system with two elements. This problem can be solved with existing computer algebra software. This algorithm compares favorably with several existing algorithms for steady state determination. One advantage is that it is not heuristic or reliant on sampling, but rather determines algorithmically and exactly all steady states of a Boolean network. The code for the algorithm, as well as the test suite of benchmark networks, is available upon request from the corresponding author. Conclusions The algorithm presented in this paper reliably determines all steady states of sparse Boolean networks with up to 1000 nodes. The algorithm is effective at analyzing virtually all published models even those of moderate connectivity. The problem for large Boolean networks with high average connectivity remains an open problem. PMID:24965213
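A toy illustration of the steady-state condition x = f(x) for a Boolean network. For clarity it uses exhaustive enumeration on a made-up three-node example; the paper's contribution is precisely to avoid such enumeration by solving the equivalent polynomial system over the two-element field with computer algebra.

```python
from itertools import product

# Update rules of a tiny three-node Boolean network (an illustrative example,
# not one of the published models discussed above).
def f(state):
    x1, x2, x3 = state
    return (x2 and not x3,      # node 1
            x1 or x3,           # node 2
            x1 and x2)          # node 3

# A steady state satisfies x = f(x); over GF(2) this is the polynomial system
# f_i(x) + x_i = 0 for all i, which the paper solves with computer algebra
# instead of the brute-force enumeration used here.
steady_states = [s for s in product((False, True), repeat=3) if f(s) == s]
print(steady_states)   # for these rules, only the all-False state is steady
```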
ERIC Educational Resources Information Center
Shumway, Jessica F.; Kyriopoulos, Joan
2014-01-01
Being able to find the correct answer to a math problem does not always indicate solid mathematics mastery. A student who knows how to apply the basic algorithms can correctly solve problems without understanding the relationships between numbers or why the algorithms work. The Common Core standards require that students actually understand…
Secondary School Mathematics Curriculum Improvement Study Information Bulletin 7.
ERIC Educational Resources Information Center
Secondary School Mathematics Curriculum Improvement Study, New York, NY.
The background, objectives, and design of Secondary School Mathematics Curriculum Improvement Study (SSMCIS) are summarized. Details are given of the content of the text series, "Unified Modern Mathematics," in the areas of algebra, geometry, linear algebra, probability and statistics, analysis (calculus), logic, and computer…
Mathematics at Work in Alberta.
ERIC Educational Resources Information Center
Glanfield, Florence, Ed.; Tilroe, Daryle, Ed.
This document is designed to assist teachers by providing practical examples of real world applications of high school mathematics. Fifteen problems are presented that individuals in industry and business solve using mathematics. Each problem provides the contributor's name, suggested skills required to solve the problem, background information…
Teaching Mathematics Education with Cultural Competency
ERIC Educational Resources Information Center
Dornoo, Michael
2015-01-01
Students learn through connections when understanding is enhanced by a more holistic view of the content. When mathematics is presented from diverse perspectives, students with diverse backgrounds, expectations, histories, and experiences benefit greatly. In this article the author addresses the need to teach mathematics with cultural competency…
Mathematics Equity. A Resource Book.
ERIC Educational Resources Information Center
Tyree, Eddy; And Others
Provided in this document is a brief summary of current research on equity in mathematics, readings on the topic, and lists of selected programs and resource materials. Readings presented include: "Teaching Mathematics in a Multicultural Setting: Some Considerations when Teachers and Students are of Differing Cultural Backgrounds"…
2012-01-01
Background Chaos Game Representation (CGR) is an iterated function that bijectively maps discrete sequences into a continuous domain. As a result, discrete sequences can be object of statistical and topological analyses otherwise reserved to numerical systems. Characteristically, CGR coordinates of substrings sharing an L-long suffix will be located within 2-L distance of each other. In the two decades since its original proposal, CGR has been generalized beyond its original focus on genomic sequences and has been successfully applied to a wide range of problems in bioinformatics. This report explores the possibility that it can be further extended to approach algorithms that rely on discrete, graph-based representations. Results The exploratory analysis described here consisted of selecting foundational string problems and refactoring them using CGR-based algorithms. We found that CGR can take the role of suffix trees and emulate sophisticated string algorithms, efficiently solving exact and approximate string matching problems such as finding all palindromes and tandem repeats, and matching with mismatches. The common feature of these problems is that they use longest common extension (LCE) queries as subtasks of their procedures, which we show to have a constant time solution with CGR. Additionally, we show that CGR can be used as a rolling hash function within the Rabin-Karp algorithm. Conclusions The analysis of biological sequences relies on algorithmic foundations facing mounting challenges, both logistic (performance) and analytical (lack of unifying mathematical framework). CGR is found to provide the latter and to promise the former: graph-based data structures for sequence analysis operations are entailed by numerical-based data structures produced by CGR maps, providing a unifying analytical framework for a diversity of pattern matching problems. PMID:22551152
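A minimal sketch of the Chaos Game Representation map itself (midpoint iteration toward the corner assigned to each symbol), the primitive underlying the suffix-distance property mentioned above; the A/C/G/T corner assignment is the usual convention, stated here as an assumption.

```python
import numpy as np

# Conventional corner assignment for nucleotides in the unit square (assumed).
CORNERS = {"A": (0.0, 0.0), "C": (0.0, 1.0), "G": (1.0, 1.0), "T": (1.0, 0.0)}

def cgr_coordinates(sequence):
    """Iterated-function CGR map: each new point is the midpoint between the
    previous point and the corner of the current symbol. Substrings sharing an
    L-long suffix land within 2**-L of each other."""
    point = np.array([0.5, 0.5])
    coords = []
    for symbol in sequence:
        point = (point + np.asarray(CORNERS[symbol])) / 2.0
        coords.append(point.copy())
    return np.vstack(coords)

coords = cgr_coordinates("ACGTTACGGTA")
print(coords[-1])   # coordinates encoding the whole prefix read so far
```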
The Mathematics of High School Physics: Models, Symbols, Algorithmic Operations and Meaning
ERIC Educational Resources Information Center
Kanderakis, Nikos
2016-01-01
In the seventeenth and eighteenth centuries, mathematicians and physical philosophers managed to study, via mathematics, various physical systems of the sublunar world through idealized and simplified models of these systems, constructed with the help of geometry. By analyzing these models, they were able to formulate new concepts, laws and…
Students' Mathematical Reasoning and Beliefs in Non-Routine Task Solving
ERIC Educational Resources Information Center
Jäder, Jonas; Sidenvall, Johan; Sumpter, Lovisa
2017-01-01
Beliefs and problem solving are connected and have been studied in different contexts. One of the common results of previous research is that students tend to prefer algorithmic approaches to mathematical tasks. This study explores Swedish upper secondary school students' beliefs and reasoning when solving non-routine tasks. The results regarding…
Teaching Proofs and Algorithms in Discrete Mathematics with Online Visual Logic Puzzles
ERIC Educational Resources Information Center
Cigas, John; Hsin, Wen-Jung
2005-01-01
Visual logic puzzles provide a fertile environment for teaching multiple topics in discrete mathematics. Many puzzles can be solved by the repeated application of a small, finite set of strategies. Explicitly reasoning from a strategy to a new puzzle state illustrates theorems, proofs, and logic principles. These provide valuable, concrete…
Calculator Logic Systems and Mathematical Understandings.
ERIC Educational Resources Information Center
Burrows, Enid R.
This monograph is aimed at helping the reader understand the built-in logic of various calculator operating systems. It is an outgrowth of workshop contacts with in-service and pre-service teachers of mathematics and is in response to their request for a book on the subject of calculator logic systems and calculator algorithms. The mathematical…
Fostering Algebraic Understanding through Math
ERIC Educational Resources Information Center
Lim, Kien H.
2016-01-01
Magic captivates humans because of their innate capacity to be intrigued and a desire to resolve their curiosity. In a mathematics classroom, algorithms akin to magic tricks can be an effective tool to engage students in thinking and problem solving. Tricks that rely on the power of mathematics are especially suitable for students to experience an…
Mathematical foundations of the GraphBLAS
Kepner, Jeremy; Aaltonen, Peter; Bader, David; ...
2016-12-01
The GraphBLAS standard (GraphBlas.org) is being developed to bring the potential of matrix-based graph algorithms to the broadest possible audience. Mathematically, the GraphBLAS defines a core set of matrix-based graph operations that can be used to implement a wide class of graph algorithms in a wide range of programming environments. This study provides an introduction to the mathematics of the GraphBLAS. Graphs represent connections between vertices with edges. Matrices can represent a wide range of graphs using adjacency matrices or incidence matrices. Adjacency matrices are often easier to analyze while incidence matrices are often better for representing data. Fortunately, the two are easily connected by matrix multiplication. A key feature of matrix mathematics is that a very small number of matrix operations can be used to manipulate a very wide range of graphs. This composability of a small number of operations is the foundation of the GraphBLAS. A standard such as the GraphBLAS can only be effective if it has low performance overhead. Finally, performance measurements of prototype GraphBLAS implementations indicate that the overhead is low.
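A small sketch of the matrix-graph duality the GraphBLAS builds on: breadth-first search expressed as repeated sparse matrix-vector products over an adjacency matrix. This uses plain SciPy, not a GraphBLAS implementation, and the example graph is arbitrary.

```python
import numpy as np
from scipy.sparse import csr_matrix

# Adjacency matrix of a small directed graph: edge i -> j means A[i, j] = 1.
edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)]
rows, cols = zip(*edges)
A = csr_matrix((np.ones(len(edges)), (rows, cols)), shape=(5, 5))

def bfs_levels(A, source):
    """Return the BFS level of every vertex (-1 if unreachable) by repeatedly
    multiplying the frontier vector with the adjacency matrix."""
    n = A.shape[0]
    levels = -np.ones(n, dtype=int)
    frontier = np.zeros(n)
    frontier[source] = 1.0
    levels[source] = 0
    for level in range(1, n):
        frontier = A.T @ frontier                    # one hop along the edges
        frontier = (frontier > 0) & (levels < 0)     # keep only unvisited vertices
        if not frontier.any():
            break
        levels[frontier] = level
        frontier = frontier.astype(float)
    return levels

print(bfs_levels(A, 0))   # [0 1 1 2 3]
```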
Dinç, Erdal; Ozdemir, Abdil
2005-01-01
A multivariate chromatographic calibration technique was developed for the quantitative analysis of binary mixtures of enalapril maleate (EA) and hydrochlorothiazide (HCT) in tablets in the presence of losartan potassium (LST). The mathematical algorithm of the multivariate chromatographic calibration technique is based on linear regression equations constructed from the relationship between concentration and peak area at a five-wavelength set. The algorithm of this calibration model, which has a simple mathematical content, is briefly described. This approach is a powerful mathematical tool for optimum chromatographic multivariate calibration and for eliminating fluctuations arising from instrumental and experimental conditions. The multivariate chromatographic calibration involves the reduction of the multivariate linear regression functions to a univariate data set. The validation of the model was carried out by analyzing various synthetic binary mixtures and by using the standard addition technique. The developed calibration technique was applied to the analysis of real pharmaceutical tablets containing EA and HCT. The obtained results were compared with those obtained by a classical HPLC method. It was observed that the proposed multivariate chromatographic calibration gives better results than the classical HPLC method.
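A hedged sketch of the calibration idea described above: univariate regressions of peak area on concentration at several wavelengths, combined (here simply by averaging the per-wavelength predictions) to estimate the concentration of an unknown. The wavelength count matches the abstract, but all numbers and the pooling step are synthetic illustrations.

```python
import numpy as np

# Calibration data: peak areas of a standard series measured at five wavelengths.
concentrations = np.array([5.0, 10.0, 15.0, 20.0, 25.0])          # ug/mL standards
rng = np.random.default_rng(5)
sensitivity = np.array([1.2, 1.0, 0.8, 0.6, 0.4])                 # per-wavelength slopes
areas = np.outer(concentrations, sensitivity) + rng.normal(0, 0.2, (5, 5))

# One univariate regression (area vs. concentration) per wavelength.
slopes, intercepts = [], []
for w in range(areas.shape[1]):
    slope, intercept = np.polyfit(concentrations, areas[:, w], 1)
    slopes.append(slope)
    intercepts.append(intercept)
slopes, intercepts = np.array(slopes), np.array(intercepts)

# Unknown sample: invert each wavelength's regression and pool the estimates,
# reducing the multivariate regression functions to a single concentration value.
unknown_areas = 12.0 * sensitivity + rng.normal(0, 0.2, 5)
estimates = (unknown_areas - intercepts) / slopes
print("estimated concentration:", estimates.mean())
```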
Keivanian, Farshid; Mehrshad, Nasser; Bijari, Abolfazl
2016-01-01
A D flip-flop, as a digital circuit, can be used as a timing element in many sophisticated circuits. Optimum performance with the lowest power consumption and an acceptable delay time is therefore a critical issue in electronic circuits. The layout of the newly proposed Dual-Edge Triggered Static D Flip-Flop circuit is formulated as a multi-objective optimization problem. For this, an optimum fuzzy inference system with fuzzy rules is proposed to enhance the performance and convergence of the non-dominated sorting Genetic Algorithm II (NSGA-II) by adaptive control of its exploration and exploitation parameters. Using the proposed fuzzy NSGA-II algorithm, better values for the MOSFET channel widths and the power supply are discovered in the search space than with ordinary NSGA variants. Moreover, the design parameters (the NMOS and PMOS channel widths and the power supply voltage) and the performance parameters (average power consumption and propagation delay time) are linked; the required mathematical background for this is presented in the study. The optimum values for the design parameters, the MOSFET channel widths and the power supply, are found; based on them, the power-delay product (PDP) is 6.32 pJ at a 125 MHz clock frequency, L = 0.18 µm, and T = 27 °C.
Spatiotemporal models for the simulation of infrared backgrounds
NASA Astrophysics Data System (ADS)
Wilkes, Don M.; Cadzow, James A.; Peters, R. Alan, II; Li, Xingkang
1992-09-01
It is highly desirable for designers of automatic target recognizers (ATRs) to be able to test their algorithms on targets superimposed on a wide variety of background imagery. Background imagery in the infrared spectrum is expensive to gather from real sources, consequently, there is a need for accurate models for producing synthetic IR background imagery. We have developed a model for such imagery that will do the following: Given a real, infrared background image, generate another image, distinctly different from the one given, that has the same general visual characteristics as well as the first and second-order statistics of the original image. The proposed model consists of a finite impulse response (FIR) kernel convolved with an excitation function, and histogram modification applied to the final solution. A procedure for deriving the FIR kernel using a signal enhancement algorithm has been developed, and the histogram modification step is a simple memoryless nonlinear mapping that imposes the first order statistics of the original image onto the synthetic one, thus the overall model is a linear system cascaded with a memoryless nonlinearity. It has been found that the excitation function relates to the placement of features in the image, the FIR kernel controls the sharpness of the edges and the global spectrum of the image, and the histogram controls the basic coloration of the image. A drawback to this method of simulating IR backgrounds is that a database of actual background images must be collected in order to produce accurate FIR and histogram models. If this database must include images of all types of backgrounds obtained at all times of the day and all times of the year, the size of the database would be prohibitive. In this paper we propose improvements to the model described above that enable time-dependent modeling of the IR background. This approach can greatly reduce the number of actual IR backgrounds that are required to produce a sufficiently accurate mathematical model for synthesizing a similar IR background for different times of the day. Original and synthetic IR backgrounds will be presented. Previous research in simulating IR backgrounds was performed by Strenzwilk, et al., Botkin, et al., and Rapp. The most recent work of Strenzwilk, et al. was based on the use of one-dimensional ARMA models for synthesizing the images. Their results were able to retain the global statistical and spectral behavior of the original image, but the synthetic image was not visually very similar to the original. The research presented in this paper is the result of an attempt to improve upon their results, and represents a significant improvement in quality over previously obtained results.
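A hedged sketch of the synthesis model described above (an excitation field convolved with a FIR kernel, followed by histogram modification), using an illustrative separable Gaussian kernel and a simple quantile-mapping step rather than the authors' derived kernels and real IR imagery.

```python
import numpy as np
from scipy.signal import fftconvolve

def match_histogram(synthetic, reference):
    """Memoryless nonlinear mapping that imposes the reference image's
    first-order statistics (histogram) onto the synthetic image.
    Assumes both images have the same number of pixels."""
    order = np.argsort(synthetic, axis=None)
    matched = np.empty_like(synthetic).ravel()
    matched[order] = np.sort(reference, axis=None)
    return matched.reshape(synthetic.shape)

def synthesize_background(reference, kernel, seed=0):
    """Linear system (FIR kernel * excitation) cascaded with a memoryless
    nonlinearity, in the spirit of the model described above."""
    rng = np.random.default_rng(seed)
    excitation = rng.normal(size=reference.shape)            # placement of features
    textured = fftconvolve(excitation, kernel, mode="same")  # edge sharpness / spectrum
    return match_histogram(textured, reference)              # coloration / first-order stats

# Illustrative FIR kernel (separable Gaussian) and a fake "measured" background.
x = np.arange(-7, 8)
g = np.exp(-x**2 / 8.0)
kernel = np.outer(g, g) / np.outer(g, g).sum()
reference = np.random.default_rng(1).gamma(2.0, 2.0, size=(128, 128))
synthetic = synthesize_background(reference, kernel)
print(synthetic.mean(), reference.mean())    # first-order statistics now agree
```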
Fast Combinatorial Algorithm for the Solution of Linearly Constrained Least Squares Problems
Van Benthem, Mark H.; Keenan, Michael R.
2008-11-11
A fast combinatorial algorithm can significantly reduce the computational burden when solving general equality and inequality constrained least squares problems with large numbers of observation vectors. The combinatorial algorithm provides a mathematically rigorous solution and operates at great speed by reorganizing the calculations to take advantage of the combinatorial nature of the problems to be solved. The combinatorial algorithm exploits the structure that exists in large-scale problems in order to minimize the number of arithmetic operations required to obtain a solution.
ERIC Educational Resources Information Center
Walkington, Candace; Clinton, Virginia; Shivraj, Pooja
2018-01-01
The link between reading and mathematics achievement is well known, and an important question is whether readability factors in mathematics problems are differentially impacting student groups. Using 20 years of data from the National Assessment of Educational Progress and the Trends in International Mathematics and Science Study, we examine how…
System-Level Evaluation: Language and Other Background Factors Affecting Mathematics Achievement
ERIC Educational Resources Information Center
Howie, Sarah
2005-01-01
The aim of this study is to describe and to explore the main factors affecting the performance of South African pupils in the mathematics test of the Third International Mathematics and Science Study-Repeat (TIMSS-R). The first objective was to describe the performance of the pupils in the mathematics test, the pupils' proficiency in English, as…
Evaluation of Multiclass Model Observers in PET LROC Studies
NASA Astrophysics Data System (ADS)
Gifford, H. C.; Kinahan, P. E.; Lartizien, C.; King, M. A.
2007-02-01
A localization ROC (LROC) study was conducted to evaluate nonprewhitening matched-filter (NPW) and channelized NPW (CNPW) versions of a multiclass model observer as predictors of human tumor-detection performance with PET images. Target localization is explicitly performed by these model observers. Tumors were placed in the liver, lungs, and background soft tissue of a mathematical phantom, and the data simulation modeled a full-3D acquisition mode. Reconstructions were performed with the FORE+AWOSEM algorithm. The LROC study measured observer performance with 2D images consisting of either coronal, sagittal, or transverse views of the same set of cases. Versions of the CNPW observer based on two previously published difference-of-Gaussian channel models demonstrated good quantitative agreement with human observers. One interpretation of these results treats the CNPW observer as a channelized Hotelling observer with implicit internal noise.
An overview of the mathematical and statistical analysis component of RICIS
NASA Technical Reports Server (NTRS)
Hallum, Cecil R.
1987-01-01
Mathematical and statistical analysis components of RICIS (Research Institute for Computing and Information Systems) can be used in the following problem areas: (1) quantification and measurement of software reliability; (2) assessment of changes in software reliability over time (reliability growth); (3) analysis of software-failure data; and (4) decision logic for whether to continue or stop testing software. Other areas of interest to NASA/JSC where mathematical and statistical analysis can be successfully employed include: math modeling of physical systems, simulation, statistical data reduction, evaluation methods, optimization, algorithm development, and mathematical methods in signal processing.
Visuospatial Training Improves Elementary Students' Mathematics Performance
ERIC Educational Resources Information Center
Lowrie, Tom; Logan, Tracy; Ramful, Ajay
2017-01-01
Background: Although spatial ability and mathematics performance are highly correlated, there is scant research on the extent to which spatial ability training can improve mathematics performance. Aims: This study evaluated the efficacy of a visuospatial intervention programme within classrooms to determine the effect on students' (1) spatial…
Incentive Pay for Remotely Piloted Aircraft Career Fields
2012-01-01
Front-matter excerpt: appendix C.1, "Mathematical Symbols for Non-Stochastic Values and Shock Terms," and appendix C.2, "Mathematical Symbols for Taste and Compensation"; fragments of the Summary concerning background, the manning requirement under the current incentive pays and reenlistment bonuses, and the mathematical foundations, data, and estimation methods used in the report.
FORUM: The Algorithmic Way of Life is Best and Responses.
ERIC Educational Resources Information Center
Maurer, Stephen B.; And Others
1985-01-01
The forum is focused on thinking about and with algorithms as a way of unifying all one's mathematical endeavors. The lead article by Maurer presents examples and discussion of this point. Responses, often disagreeing with his views, are by Douglas, Korte, Hilton, Renz, Smorynski, Hammersley, and Halmos. (MNS)
Transactional Algorithm for Subtracting Fractions: Go Shopping
ERIC Educational Resources Information Center
Pinckard, James Seishin
2009-01-01
The purpose of this quasi-experimental research study was to examine the effects of an alternative or transactional algorithm for subtracting mixed numbers within the middle school setting. Initial data were gathered from the student achievement of four mathematics teachers at three different school sites. The results indicated students who…
The Xmath Integration Algorithm
ERIC Educational Resources Information Center
Bringslid, Odd
2009-01-01
The projects Xmath (Bringslid and Canessa, 2002) and dMath (Bringslid, de la Villa and Rodriguez, 2007) were supported by the European Commission in the so called Minerva Action (Xmath) and The Leonardo da Vinci programme (dMath). The Xmath eBook (Bringslid, 2006) includes algorithms into a wide range of undergraduate mathematical issues embedded…
ERIC Educational Resources Information Center
Stuart, Jennifer Lynn
2017-01-01
The purpose of this correlation study was to identify a possible relationship between elementary teacher background in mathematics as measured by completed college math credit hours, district-provided professional development hours of training in Common Core math standards, and years of teaching experience, and teacher efficacy in math as measured…
Flow temporal reconstruction from non-time-resolved data part I: mathematic fundamentals
NASA Astrophysics Data System (ADS)
Legrand, Mathieu; Nogueira, José; Lecuona, Antonio
2011-10-01
At least two circumstances point to the need for postprocessing techniques to recover lost time information from non-time-resolved data: the increasing interest in identifying and tracking coherent structures in flows of industrial interest and the high data throughput of global measuring techniques, such as PIV, for the validation of computational fluid dynamics (CFD) codes. This paper offers the mathematical fundamentals of a space-time reconstruction technique from non-time-resolved, statistically independent data. An algorithm has been developed to identify and track traveling coherent structures in periodic flows. Phase-averaged flow fields are reconstructed with a correlation-based method, which uses information from the Proper Orthogonal Decomposition (POD). The theoretical background shows that the snapshot POD coefficients can be used to recover flow phase information. Once this information is recovered, the real snapshots are used to reconstruct the flow history and characteristics, avoiding the use of POD modes and any associated artifacts. The proposed time reconstruction algorithm is in agreement with the experimental evidence given by the practical implementation proposed in the second part of this work (Legrand et al. in Exp Fluids, 2011), using the coefficients corresponding to the first three POD modes. It also agrees with the results on similar issues by other authors (Ben Chiekh et al. in 9 Congrès Francophone de Vélocimétrie Laser, Bruxelles, Belgium, 2004; Van Oudheusden et al. in Exp Fluids 39-1:86-98, 2005; Meyer et al. in 7th International Symposium on Particle Image Velocimetry, Rome, Italy, 2007a; in J Fluid Mech 583:199-227, 2007b; Perrin et al. in Exp Fluids 43-2:341-355, 2007). Computer time to perform the reconstruction is relatively short, of the order of minutes with current PC technology.
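A minimal sketch of the phase-recovery idea described above: snapshot POD via the SVD, with the phase of each statistically independent snapshot estimated from its first two temporal coefficients and used to re-order the real snapshots. The travelling-wave test flow and the arctangent phase estimate are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

# Synthetic periodic flow sampled at random (non-time-resolved) phases.
rng = np.random.default_rng(4)
x = np.linspace(0.0, 2.0 * np.pi, 200)
true_phases = rng.uniform(0.0, 2.0 * np.pi, 150)
snapshots = np.array([np.sin(x - p) for p in true_phases])   # rows = snapshots

# Snapshot POD: subtract the mean flow and take the SVD; u[:, k] * s[k] are the
# temporal coefficients a_k of each snapshot on the k-th spatial mode.
mean_flow = snapshots.mean(axis=0)
u, s, vt = np.linalg.svd(snapshots - mean_flow, full_matrices=False)
a1, a2 = u[:, 0] * s[0], u[:, 1] * s[1]

# Phase of each snapshot from the first two POD coefficients, then re-order the
# real snapshots by that phase to reconstruct one period of the flow history.
phase = np.arctan2(a2, a1)
order = np.argsort(phase)
reconstructed_cycle = snapshots[order]
print(reconstructed_cycle.shape)   # (150, 200): snapshots sorted into phase order
```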
2011-01-01
Background Design of newly engineered microbial strains for biotechnological purposes would greatly benefit from the development of realistic mathematical models for the processes to be optimized. Such models can then be analyzed and, with the development and application of appropriate optimization techniques, one could identify the modifications that need to be made to the organism in order to achieve the desired biotechnological goal. As appropriate models to perform such an analysis are necessarily non-linear and typically non-convex, finding their global optimum is a challenging task. Canonical modeling techniques, such as Generalized Mass Action (GMA) models based on the power-law formalism, offer a possible solution to this problem because they have a mathematical structure that enables the development of specific algorithms for global optimization. Results Based on the GMA canonical representation, we have developed in previous works a highly efficient optimization algorithm and a set of related strategies for understanding the evolution of adaptive responses in cellular metabolism. Here, we explore the possibility of recasting kinetic non-linear models into an equivalent GMA model, so that global optimization on the recast GMA model can be performed. With this technique, optimization is greatly facilitated and the results are transposable to the original non-linear problem. This procedure is straightforward for a particular class of non-linear models known as Saturable and Cooperative (SC) models that extend the power-law formalism to deal with saturation and cooperativity. Conclusions Our results show that recasting non-linear kinetic models into GMA models is indeed an appropriate strategy that helps overcoming some of the numerical difficulties that arise during the global optimization task. PMID:21867520
NASA Astrophysics Data System (ADS)
van der Hoff, Quay
2017-08-01
The science of biology has been transforming dramatically, and so the need for a stronger mathematical background for biology students has increased. Biology students reaching the senior or post-graduate level often come to realize that their mathematical background is insufficient. Similarly, students in a mathematics programme, interested in biological phenomena, find it difficult to master the complex systems encountered in biology. In short, the biologists do not have enough mathematics and the mathematicians are not being taught enough biology. The need for interdisciplinary curricula that include disciplines such as biology, physical science, and mathematics is widely recognized, but has not been widely implemented. In this paper, it is suggested that students develop a skill set of ecology, mathematics and technology to encourage working across disciplinary boundaries. To illustrate such a skill set, a predator-prey model that contains self-limiting factors for both predator and prey is suggested. The general idea of dynamics is introduced and students are encouraged to discover the applicability of this approach to more complex biological systems. The level of mathematics and technology required is not advanced; therefore, it is ideal for inclusion in a senior-level or introductory graduate-level course for students interested in mathematical biology.
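A short sketch of the kind of self-limited predator-prey model suggested above, integrated numerically; the specific form of the self-limitation terms and the parameter values are assumptions for illustration only.

```python
import numpy as np
from scipy.integrate import solve_ivp

def predator_prey(t, z, r, k1, a, b, m, k2):
    """Lotka-Volterra-type dynamics with self-limitation for both species."""
    prey, pred = z
    dprey = r * prey * (1.0 - prey / k1) - a * prey * pred
    dpred = b * prey * pred - m * pred * (1.0 + pred / k2)
    return [dprey, dpred]

params = (1.0, 50.0, 0.05, 0.02, 0.4, 30.0)   # illustrative values only
sol = solve_ivp(predator_prey, (0.0, 100.0), [10.0, 5.0],
                args=params, max_step=0.1)
prey_end, pred_end = sol.y[:, -1]
print(f"populations after t = 100: prey = {prey_end:.2f}, predators = {pred_end:.2f}")
```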
Shearlet Features for Registration of Remotely Sensed Multitemporal Images
NASA Technical Reports Server (NTRS)
Murphy, James M.; Le Moigne, Jacqueline
2015-01-01
We investigate the role of anisotropic feature extraction methods for automatic image registration of remotely sensed multitemporal images. Building on the classical use of wavelets in image registration, we develop an algorithm based on shearlets, a mathematical generalization of wavelets that offers increased directional sensitivity. Initial experimental results on LANDSAT images are presented, which indicate superior performance of the shearlet algorithm when compared to classical wavelet algorithms.
The instanton method and its numerical implementation in fluid mechanics
NASA Astrophysics Data System (ADS)
Grafke, Tobias; Grauer, Rainer; Schäfer, Tobias
2015-08-01
A precise characterization of structures occurring in turbulent fluid flows at high Reynolds numbers is one of the last open problems of classical physics. In this review we discuss recent developments related to the application of instanton methods to turbulence. Instantons are saddle point configurations of the underlying path integrals. They are equivalent to minimizers of the related Freidlin-Wentzell action and known to be able to characterize rare events in such systems. While there is an impressive body of work concerning their analytical description, this review focuses on the question of how to compute these minimizers numerically. In a short introduction we present the relevant mathematical and physical background before we discuss the stochastic Burgers equation in detail. We present algorithms to compute instantons numerically by an efficient solution of the corresponding Euler-Lagrange equations. A second focus is the discussion of a recently developed numerical filtering technique that allows one to extract instantons from direct numerical simulations. In the following we present modifications of the algorithms to make them efficient when applied to two- or three-dimensional (2D or 3D) fluid dynamical problems. We illustrate these ideas using the 2D Burgers equation and the 3D Navier-Stokes equations.
ERIC Educational Resources Information Center
Pehkonen, Erkki
This report describes the theoretical background of an international comparison project on pupils' mathematical beliefs and outlines its realization. The first chapter briefly discusses problems with the underlying concepts of "belief" and "conception." The central concept, view of mathematics, is introduced in the second…
ERIC Educational Resources Information Center
Pennington, Charlotte R.; Heim, Derek
2016-01-01
Background: Women in mathematical domains may become attuned to situational cues that signal a discredited social identity, contributing to their lower achievement and underrepresentation. Aim: This study examined whether heightened in-group representation alleviates the effects of stereotype threat on women's mathematical performance. It further…
Educational Neuroscience: New Horizons for Research in Mathematics Education
ERIC Educational Resources Information Center
Campbell, Stephen R.
2006-01-01
This paper outlines an initiative in mathematics education research that aims to augment qualitative methods of research into mathematical cognition and learning with quantitative methods of psychometrics and psychophysiology. Background and motivation are provided for this initiative, which is coming to be referred to as educational neuroscience.…
ERIC Educational Resources Information Center
Wilkins, Jesse L. M.
2015-01-01
Background: Prior research has shown that students taught using "Standards"-based mathematics curricula tend to outperform students on measures of mathematics achievement. However, little research has focused particularly on the promotion of student quantitative literacy (QLT). In this study, the potential influence of the…
A Reflection Framework for Teaching Mathematics
ERIC Educational Resources Information Center
Merritt, Eileen G.; Rimm-Kaufman, Sara E.; Berry, Robert Q., III; Walkowiak, Temple A.; McCracken, Erin R.
2010-01-01
Mathematics teachers confront dozens of daily decisions about how to instruct students. It is well established that high-quality instruction provides benefits for students with diverse learning and family backgrounds. However, it is often difficult for teachers to identify the critical aspects of a successful mathematics lesson as they strive to…
Mathematics Interventions for Children and Adolescents with Down Syndrome: A Research Synthesis
ERIC Educational Resources Information Center
Lemons, C. J.; Powell, S. R.; King, S. A.; Davidson, K. A.
2015-01-01
Background: Many children and adolescents with Down syndrome fail to achieve proficiency in mathematics. Researchers have suggested that tailoring interventions based on the behavioural phenotype may enhance efficacy. Method: The research questions that guided this review were (1) what types of mathematics interventions have been empirically…
An Excel-Aided Method for Teaching Calculus-Based Business Mathematics
ERIC Educational Resources Information Center
Liang, Jiajuan; Martin, Linda
2008-01-01
Calculus-based business mathematics is a required quantitative course for undergraduate business students in most AACSB accredited schools or colleges of business. Many business students, however, have relatively weak mathematical background or even display math-phobia when presented with calculus problems. Because of the popularity of Excel, its…
ERIC Educational Resources Information Center
Schiller, Kathryn S.; Hunt, Donald J.
2011-01-01
Schools are institutions in which students' course taking creates series of linked learning opportunities continually shaped by not only curricular structures but demographic and academic backgrounds. In contrast to a seven-step normative course sequence reflecting the conventional hierarchical structure of mathematics, analysis of more than…
Student Achievement in College Calculus, Louisiana State University 1967-1968.
ERIC Educational Resources Information Center
Scannicchio, Thomas Henry
An investigation of freshmen achievement in an introductory calculus course was performed on the basis of high school mathematics background to find predictors of college calculus grades. Overall high school academic achievement, overall high school mathematics achievement, number of high school mathematics units, pattern of college preparatory…
Handbook for Spoken Mathematics: (Larry's Speakeasy).
ERIC Educational Resources Information Center
Chang, Lawrence A.; And Others
This handbook is directed toward those who have to deal with spoken mathematics, yet have insufficient background to know the correct verbal expression for the written symbolic one. It compiles consistent and well-defined ways of uttering mathematical expressions so listeners will receive clear, unambiguous, and well-pronounced representations.…
Who Is Afraid of Math? Two Sources of Genetic Variance for Mathematical Anxiety
ERIC Educational Resources Information Center
Wang, Zhe; Hart, Sara Ann; Kovas, Yulia; Lukowski, Sarah; Soden, Brooke; Thompson, Lee A.; Plomin, Robert; McLoughlin, Grainne; Bartlett, Christopher W.; Lyons, Ian M.; Petrill, Stephen A.
2014-01-01
Background: Emerging work suggests that academic achievement may be influenced by the management of affect as well as through efficient information processing of task demands. In particular, mathematical anxiety has attracted recent attention because of its damaging psychological effects and potential associations with mathematical problem solving…
Preparing Teachers to Lead Mathematics Discussions
ERIC Educational Resources Information Center
Boerst, Timothy A.; Sleep, Laurie; Ball, Deborah Loewenberg; Bass, Hyman
2011-01-01
Background/Context: Discussion is central to mathematics teaching and learning, as well as to mathematics as an academic discipline. Studies have shown that facilitating discussions is complex work that is not easily done or learned. To make such complex aspects of the work of teaching learnable by beginners, recent research has focused on…
ERIC Educational Resources Information Center
Chval, Kathryn; Abell, Sandra; Pareja, Enrique; Musikul, Kusalin; Ritzka, Gerard
2008-01-01
High quality teachers are essential to improving the teaching and learning of mathematics and science, necessitating effective professional development (PD) and learning environments for teachers. However, many PD programs for science and mathematics teachers fall short because they fail to consider teacher background, experience, knowledge,…
Humanities-Oriented Accents in Teaching Mathematics to Prospective Primary School Teachers
ERIC Educational Resources Information Center
Tabov, Jordan; Gortcheva, Iordanka
2016-01-01
Our research includes undergraduate students who major in primary school education. Their academic background is prevailingly in the humanities. This poses specific demands on their mathematics instruction at university. To attract them to their mathematics course and raise its effectiveness, we use a series of activities. Writing assignments…
Exploring Iconic Interpretation and Mathematics Teacher Development through Clinical Simulations
ERIC Educational Resources Information Center
Dotger, Benjamin; Masingila, Joanna; Bearkland, Mary; Dotger, Sharon
2015-01-01
Field placements serve as the traditional "clinical" experience for prospective mathematics teachers to immerse themselves in the mathematical challenges of students. This article reports data from a different type of learning experience, that of a clinical simulation with a standardized individual. We begin with a brief background on…
ERIC Educational Resources Information Center
Arikan, Serkan; van de Vijver, Fons J. R.; Yagmur, Kutlay
2017-01-01
Lower reading and mathematics performance of Turkish immigrant students as compared to mainstream European students could reflect differential learning outcomes, differential socioeconomic backgrounds of the groups, differential mainstream language proficiency, and/or test bias. Using PISA reading and mathematics scores of these groups, we…
Are Mathematics Problems a Problem for Women and Girls?
ERIC Educational Resources Information Center
Schonberger, Ann K.
The primary questions investigated are: Is it true that males excel in mathematical problem solving and, if so, when does this superiority develop? An examination of recent research showed that sex-related differences did exist, although small, even after controlling for mathematics background. Differences appeared in early adolescence and were…
Graphs, matrices, and the GraphBLAS: Seven good reasons
Kepner, Jeremy; Bader, David; Buluç, Aydın; ...
2015-01-01
The analysis of graphs has become increasingly important to a wide range of applications. Graph analysis presents a number of unique challenges in the areas of (1) software complexity, (2) data complexity, (3) security, (4) mathematical complexity, (5) theoretical analysis, (6) serial performance, and (7) parallel performance. Implementing graph algorithms using matrix-based approaches provides a number of promising solutions to these challenges. The GraphBLAS standard (istcbigdata.org/GraphBlas) is being developed to bring the potential of matrix based graph algorithms to the broadest possible audience. The GraphBLAS mathematically defines a core set of matrix-based graph operations that can be used to implement a wide class of graph algorithms in a wide range of programming environments. This paper provides an introduction to the GraphBLAS and describes how the GraphBLAS can be used to address many of the challenges associated with analysis of graphs.
A new mathematical modelling based shape extraction technique for Forensic Odontology.
Jaffino, G.; Banumathi, A.; Gurunathan, Ulaganathan; Vijayakumari, B.; Prabin Jose, J.
2017-04-01
Forensic odontology is a specific means of identifying deceased persons, particularly in mass fatality incidents. An algorithm is proposed to identify a person by comparing postmortem (PM) and antemortem (AM) dental radiographs and photographs. This work aims to introduce a new mathematical algorithm for photographs in addition to radiographs. An isoperimetric graph partitioning method is used to extract the shape of dental images for forensic identification. Shape matching is performed by comparing AM and PM dental images using both similarity and distance measures. Experimental results show that better matching discrimination is obtained with distance metrics than with similarity measures. The results of this algorithm show a high hit rate for distance-based performance measures, making it well suited for forensic odontologists identifying a person. Copyright © 2017 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
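Since the paper's exact similarity and distance measures are not spelled out in the abstract, the following is only a hedged sketch of distance-based shape matching between an antemortem and a postmortem contour, using the symmetric Hausdorff distance as a stand-in metric; the contour arrays are placeholders.

```python
# Illustrative shape comparison between two (N, 2) contour point sets.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def shape_distance(contour_am, contour_pm):
    """Symmetric Hausdorff distance between two contour point sets."""
    d_ab = directed_hausdorff(contour_am, contour_pm)[0]
    d_ba = directed_hausdorff(contour_pm, contour_am)[0]
    return max(d_ab, d_ba)

# A smaller distance to a candidate antemortem record suggests a better match.
am = np.array([[0.0, 0.0], [1.0, 0.2], [2.0, 0.0]])
pm = np.array([[0.1, 0.0], [1.0, 0.25], [2.1, -0.05]])
print(shape_distance(am, pm))
```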
Adaptive spatial filtering improves speech reception in noise while preserving binaural cues.
Bissmeyer, Susan R S; Goldsworthy, Raymond L
2017-09-01
Hearing loss greatly reduces an individual's ability to comprehend speech in the presence of background noise. Over the past decades, numerous signal-processing algorithms have been developed to improve speech reception in these situations for cochlear implant and hearing aid users. One challenge is to reduce background noise while not introducing interaural distortion that would degrade binaural hearing. The present study evaluates a noise reduction algorithm, referred to as binaural Fennec, that was designed to improve speech reception in background noise while preserving binaural cues. Speech reception thresholds were measured for normal-hearing listeners in a simulated environment with target speech generated in front of the listener and background noise originating 90° to the right of the listener. Lateralization thresholds were also measured in the presence of background noise. These measures were conducted in anechoic and reverberant environments. Results indicate that the algorithm improved speech reception thresholds, even in highly reverberant environments. Results indicate that the algorithm also improved lateralization thresholds for the anechoic environment while not affecting lateralization thresholds for the reverberant environments. These results provide clear evidence that this algorithm can improve speech reception in background noise while preserving binaural cues used to lateralize sound.
An advancing front Delaunay triangulation algorithm designed for robustness
NASA Technical Reports Server (NTRS)
Mavriplis, D. J.
1992-01-01
A new algorithm is described for generating an unstructured mesh about an arbitrary two-dimensional configuration. Mesh points are generated automatically by the algorithm in a manner which ensures a smooth variation of elements, and the resulting triangulation constitutes the Delaunay triangulation of these points. The algorithm combines the mathematical elegance and efficiency of Delaunay triangulation algorithms with the desirable point placement features, boundary integrity, and robustness traditionally associated with advancing-front-type mesh generation strategies. The method offers increased robustness over previous algorithms in that it cannot fail regardless of the initial boundary point distribution and the prescribed cell size distribution throughout the flow-field.
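For readers unfamiliar with the underlying construction, a minimal illustration of a Delaunay triangulation (via SciPy's Qhull wrapper) is shown below; this is not the advancing-front algorithm of the paper, only the triangulation property it builds on.

```python
# Minimal Delaunay triangulation of a few 2-D points.
import numpy as np
from scipy.spatial import Delaunay

points = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0],
                   [0.0, 1.0], [0.4, 0.6]])
tri = Delaunay(points)
print(tri.simplices)  # each row lists the vertex indices of one triangle
```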
Algorithms in Modern Mathematics and Computer Science.
1980-01-01
importance, since we will go on doing what we are doing no matter what it is called; after all, other disciplines like Mathematics and Chemistry are no longer related very strongly to the etymology of their names. However, if I had a chance to vote for the name of my own discipline, I would choose to call
ERIC Educational Resources Information Center
Taksa, Isak; Goldberg, Robert
2004-01-01
Traditional peer-to-peer Supplemental Instruction (SI) was introduced into higher education over a quarter of a century ago and promptly became an integral part of the developmental mathematics curricula in many senior and community colleges. Later, some colleges introduced Video-based Supplemental Instruction (VSI) and, in recent years,…
Sheng, Xi
2012-07-01
The thesis studies an automated replenishment algorithm for the hospital medical supply chain. A mathematical model and algorithm for automated medical supplies replenishment are designed with reference to practical hospital data, applying inventory theory, a greedy algorithm and a partition algorithm. The automated replenishment algorithm is shown to calculate medical supplies distribution amounts automatically and to optimize the distribution scheme. It is concluded that the model and algorithm based on inventory theory, if applied in the medical supplies circulation field, can provide theoretical and technological support for automated replenishment in the hospital medical supply chain.
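The thesis's actual formulas are not given in the abstract, so the following is only a hedged sketch of a greedy replenishment step of the kind mentioned: items with the largest shortfall against a target stock level are refilled first within a limited delivery capacity. Item names and quantities are invented for illustration.

```python
# Greedy replenishment sketch: refill largest shortfalls first under a capacity cap.
def greedy_replenish(on_hand, target, capacity):
    """Return a dict item -> quantity to deliver, greedy by shortfall."""
    shortfall = {item: max(target[item] - on_hand[item], 0) for item in target}
    plan = {}
    for item in sorted(shortfall, key=shortfall.get, reverse=True):
        qty = min(shortfall[item], capacity)
        if qty > 0:
            plan[item] = qty
            capacity -= qty
        if capacity == 0:
            break
    return plan

on_hand = {"gloves": 40, "syringes": 10, "saline": 5}
target = {"gloves": 100, "syringes": 80, "saline": 50}
print(greedy_replenish(on_hand, target, capacity=100))
```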
Space shuttle propulsion parameter estimation using optimal estimation techniques
NASA Technical Reports Server (NTRS)
1983-01-01
The first twelve system state variables are presented with the necessary mathematical developments for incorporating them into the filter/smoother algorithm. Other state variables, i.e., aerodynamic coefficients can be easily incorporated into the estimation algorithm, representing uncertain parameters, but for initial checkout purposes are treated as known quantities. An approach for incorporating the NASA propulsion predictive model results into the optimal estimation algorithm was identified. This approach utilizes numerical derivatives and nominal predictions within the algorithm with global iterations of the algorithm. The iterative process is terminated when the quality of the estimates provided no longer significantly improves.
Novel mathematical algorithm for pupillometric data analysis.
Canver, Matthew C; Canver, Adam C; Revere, Karen E; Amado, Defne; Bennett, Jean; Chung, Daniel C
2014-01-01
Pupillometry is used clinically to evaluate retinal and optic nerve function by measuring the pupillary response to light stimuli. We have developed a mathematical algorithm to automate and expedite the analysis of non-filtered, non-calculated pupillometric data obtained from mouse pupillary light reflex recordings, i.e., dynamic pupillary diameter recordings following exposure to varying light intensities. The non-filtered, non-calculated pupillometric data are filtered through a low-pass finite impulse response (FIR) filter. Thresholding is used to remove data caused by eye blinking, loss of pupil tracking, and/or head movement. Twelve physiologically relevant parameters were extracted from the collected data: (1) baseline diameter, (2) minimum diameter, (3) response amplitude, (4) re-dilation amplitude, (5) percent of baseline diameter, (6) response time, (7) re-dilation time, (8) average constriction velocity, (9) average re-dilation velocity, (10) maximum constriction velocity, (11) maximum re-dilation velocity, and (12) onset latency. No significant differences were noted between parameters derived from algorithm-calculated values and manually derived results (p ≥ 0.05). This mathematical algorithm will expedite endpoint data derivation and eliminate human error in the manual calculation of pupillometric parameters from non-filtered, non-calculated pupillometric values. Subsequently, these values can be used as reference metrics for characterizing the natural history of retinal disease. Furthermore, it will be instrumental in the assessment of functional visual recovery in humans and pre-clinical models of retinal degeneration and optic nerve disease following pharmacological or gene-based therapies. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
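A hedged sketch of this kind of pipeline is given below: low-pass FIR filtering of a raw pupil-diameter trace, removal of implausible (blink-like) samples, and derivation of a few of the listed parameters. The cut-off frequency, filter length, blink threshold, and baseline window are assumed values, not those of the paper.

```python
import numpy as np
from scipy.signal import firwin, filtfilt

def analyze_pupil_trace(diam, fs, blink_thresh=0.3):
    # Replace implausible samples (blinks / lost tracking) by interpolation.
    diam = diam.copy()
    bad = diam < blink_thresh * np.nanmedian(diam)
    diam[bad] = np.interp(np.flatnonzero(bad), np.flatnonzero(~bad), diam[~bad])
    # Low-pass FIR filter (applied forward and backward for zero phase shift).
    taps = firwin(numtaps=51, cutoff=4.0, fs=fs)
    smooth = filtfilt(taps, [1.0], diam)
    baseline = smooth[: int(0.5 * fs)].mean()   # pre-stimulus baseline diameter
    minimum = smooth.min()                      # minimum (constricted) diameter
    return {
        "baseline_diameter": baseline,
        "minimum_diameter": minimum,
        "response_amplitude": baseline - minimum,
        "percent_of_baseline": 100.0 * minimum / baseline,
    }

fs = 60.0  # sampling rate in Hz (assumed)
t = np.arange(0, 5, 1 / fs)
trace = 2.0 - 0.6 * np.exp(-((t - 1.5) ** 2) / 0.1) + 0.02 * np.random.randn(t.size)
print(analyze_pupil_trace(trace, fs))
```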
Removing Visual Bias in Filament Identification: A New Goodness-of-fit Measure
NASA Astrophysics Data System (ADS)
Green, C.-E.; Cunningham, M. R.; Dawson, J. R.; Jones, P. A.; Novak, G.; Fissel, L. M.
2017-05-01
Different combinations of input parameters to filament identification algorithms, such as disperse and filfinder, produce numerous different output skeletons. The skeletons are a one-pixel-wide representation of the filamentary structure in the original input image. However, these output skeletons may not necessarily be a good representation of that structure. Furthermore, a given skeleton may not be as good of a representation as another. Previously, there has been no mathematical “goodness-of-fit” measure to compare output skeletons to the input image. Thus far this has been assessed visually, introducing visual bias. We propose the application of the mean structural similarity index (MSSIM) as a mathematical goodness-of-fit measure. We describe the use of the MSSIM to find the output skeletons that are the most mathematically similar to the original input image (the optimum, or “best,” skeletons) for a given algorithm, and independently of the algorithm. This measure makes possible systematic parameter studies, aimed at finding the subset of input parameter values returning optimum skeletons. It can also be applied to the output of non-skeleton-based filament identification algorithms, such as the Hessian matrix method. The MSSIM removes the need to visually examine thousands of output skeletons, and eliminates the visual bias, subjectivity, and limited reproducibility inherent in that process, representing a major improvement upon existing techniques. Importantly, it also allows further automation in the post-processing of output skeletons, which is crucial in this era of “big data.”
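A minimal sketch of the proposed measure, assuming scikit-image's SSIM implementation with its default window settings (which may differ from the paper's configuration): compute the MSSIM between the input image and each candidate skeleton and keep the highest-scoring one.

```python
import numpy as np
from skimage.metrics import structural_similarity

def best_skeleton(image, skeletons):
    """Return the index of the candidate skeleton most similar to `image` (MSSIM)."""
    scores = [
        structural_similarity(image, skel.astype(image.dtype),
                              data_range=float(image.max() - image.min()))
        for skel in skeletons
    ]
    return int(np.argmax(scores))

# Toy example: a faint diagonal "filament" and two candidate one-pixel skeletons.
img = np.zeros((64, 64)); np.fill_diagonal(img, 1.0)
good = np.zeros_like(img); np.fill_diagonal(good, 1.0)
poor = np.zeros_like(img); poor[10, :] = 1.0
print(best_skeleton(img, [poor, good]))  # expected: 1
```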
Interior search algorithm (ISA): a novel approach for global optimization.
Gandomi, Amir H
2014-07-01
This paper presents the interior search algorithm (ISA) as a novel method for solving optimization tasks. The proposed ISA is inspired by interior design and decoration. The algorithm is different from other metaheuristic algorithms and provides new insight for global optimization. The proposed method is verified using some benchmark mathematical and engineering problems commonly used in the area of optimization. ISA results are further compared with well-known optimization algorithms. The results show that the ISA is efficiently capable of solving optimization problems. The proposed algorithm can outperform the other well-known algorithms. Further, the proposed algorithm is very simple and it only has one parameter to tune. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
A Methodology for Projecting U.S.-Flag Commercial Tanker Capacity
1986-03-01
total crude supply for the total US is less than the sum of the total crude supplies of the PADDs. The algorithm generating the output shown in tables...other PADDs. Accordingly, projected receipts for PADD V are zero, and in conjunction with the values for the variables that previously were...SHIPMENTS ALGORITHM: This section presents the mathematics of the algorithm that generates the shipments projections for each PADD. The notation
Inverse problem of radiofrequency sounding of ionosphere
NASA Astrophysics Data System (ADS)
Velichko, E. N.; Grishentsev, A. Yu.; Korobeynikov, A. G.
2016-01-01
An algorithm for the solution of the inverse problem of vertical ionosphere sounding and a mathematical model of noise filtering are presented. An automated system for processing and analysis of spectrograms of vertical ionosphere sounding based on our algorithm is described. It is shown that the algorithm we suggest has a rather high efficiency. This is supported by the data obtained at the ionospheric stations of the so-called “AIS-M” type.
Algorithm for the stabilization of motion a bounding vehicle in the flight phase
NASA Technical Reports Server (NTRS)
Lapshin, V. V.
1980-01-01
The unsupported phase of motion of a multileg bounding vehicle is examined. An algorithm for stabilization of the angular motion of the vehicle housing by change of the motion of the legs during flight is constructed. The results of mathematical modelling of the stabilization process by computer are presented.
A Genetic Algorithm Approach to Nonlinear Least Squares Estimation
ERIC Educational Resources Information Center
Olinsky, Alan D.; Quinn, John T.; Mangiameli, Paul M.; Chen, Shaw K.
2004-01-01
A common type of problem encountered in mathematics is optimizing nonlinear functions. Many popular algorithms that are currently available for finding nonlinear least squares estimators, a special class of nonlinear problems, are sometimes inadequate. They might not converge to an optimal value, or if they do, it could be to a local rather than…
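The abstract is truncated, so the following is only a generic sketch of the idea it points to: an evolutionary (genetic-style) search, with selection and mutation but no crossover, minimizing a sum of squared residuals for a nonlinear model. The model, population size, and mutation scale are illustrative choices, not the authors'.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x, params):
    a, b = params
    return a * np.exp(b * x)                       # a simple nonlinear model

def sse(params, x, y):
    return np.sum((y - model(x, params)) ** 2)     # least-squares objective

def ga_fit(x, y, pop_size=60, generations=200, bounds=(-2.0, 2.0)):
    pop = rng.uniform(*bounds, size=(pop_size, 2))
    for _ in range(generations):
        fitness = np.array([sse(p, x, y) for p in pop])
        parents = pop[np.argsort(fitness)[: pop_size // 2]]       # selection (elitist)
        children = parents + rng.normal(0, 0.05, parents.shape)   # mutation
        pop = np.vstack([parents, children])
    return pop[np.argmin([sse(p, x, y) for p in pop])]

x = np.linspace(0, 1, 40)
y = 1.3 * np.exp(0.8 * x) + 0.05 * rng.normal(size=x.size)
print(ga_fit(x, y))  # should land near (1.3, 0.8)
```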
Teaching Markov Chain Monte Carlo: Revealing the Basic Ideas behind the Algorithm
ERIC Educational Resources Information Center
Stewart, Wayne; Stewart, Sepideh
2014-01-01
For many scientists, researchers and students Markov chain Monte Carlo (MCMC) simulation is an important and necessary tool to perform Bayesian analyses. The simulation is often presented as a mathematical algorithm and then translated into an appropriate computer program. However, this can result in overlooking the fundamental and deeper…
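A minimal Metropolis sampler, one common way the "basic ideas behind the algorithm" are coded: propose a random step and accept it with probability min(1, p(x')/p(x)). The target density and step size here are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def metropolis(log_target, x0, n_samples, step=0.5):
    samples = np.empty(n_samples)
    x = x0
    for i in range(n_samples):
        proposal = x + rng.normal(0, step)
        if np.log(rng.uniform()) < log_target(proposal) - log_target(x):
            x = proposal                 # accept the move
        samples[i] = x                   # otherwise keep the current state
    return samples

log_normal_density = lambda x: -0.5 * x ** 2   # standard normal, up to a constant
draws = metropolis(log_normal_density, x0=0.0, n_samples=5000)
print(draws.mean(), draws.std())               # roughly 0 and 1
```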
Algorithms for Scheduling and Network Problems
1991-09-01
time. We already know, by Lemma 2.2.1, that WOPT = O(log(mpU)), so if we could solve this integer program optimally we would be done. However, the...Folydirat, 15:177-191, 1982. [6] I.S. Belov and Ya. N. Stolin. An algorithm in a single path operations scheduling problem. In Mathematical Economics and
Accurately tracking single-cell movement trajectories in microfluidic cell sorting devices.
Jeong, Jenny; Frohberg, Nicholas J; Zhou, Enlu; Sulchek, Todd; Qiu, Peng
2018-01-01
Microfluidics are routinely used to study cellular properties, including the efficient quantification of single-cell biomechanics and label-free cell sorting based on the biomechanical properties, such as elasticity, viscosity, stiffness, and adhesion. Both quantification and sorting applications require optimal design of the microfluidic devices and mathematical modeling of the interactions between cells, fluid, and the channel of the device. As a first step toward building such a mathematical model, we collected video recordings of cells moving through a ridged microfluidic channel designed to compress and redirect cells according to cell biomechanics. We developed an efficient algorithm that automatically and accurately tracked the cell trajectories in the recordings. We tested the algorithm on recordings of cells with different stiffness, and showed the correlation between cell stiffness and the tracked trajectories. Moreover, the tracking algorithm successfully picked up subtle differences of cell motion when passing through consecutive ridges. The algorithm for accurately tracking cell trajectories paves the way for future efforts of modeling the flow, forces, and dynamics of cell properties in microfluidics applications.
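The paper's tracking algorithm is not specified in the abstract; the sketch below shows only the simplest frame-to-frame linking idea (match each detected centroid to the nearest centroid in the next frame within a maximum displacement), with invented coordinates. Real microfluidic tracking would need additional gating and appearance cues.

```python
import numpy as np
from scipy.spatial.distance import cdist

def link_frames(prev_pts, next_pts, max_disp=15.0):
    """Return list of (i, j) index pairs linking prev_pts[i] to next_pts[j]."""
    dist = cdist(prev_pts, next_pts)
    links, used = [], set()
    for i in np.argsort(dist.min(axis=1)):     # greedily link closest pairs first
        j = int(np.argmin(dist[i]))
        if dist[i, j] <= max_disp and j not in used:
            links.append((int(i), j))
            used.add(j)
    return links

prev_pts = np.array([[10.0, 20.0], [50.0, 60.0]])
next_pts = np.array([[12.0, 21.0], [53.0, 58.0], [200.0, 200.0]])
print(link_frames(prev_pts, next_pts))         # [(0, 0), (1, 1)]
```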
About some types of constraints in problems of routing
NASA Astrophysics Data System (ADS)
Petunin, A. A.; Polishuk, E. G.; Chentsov, A. G.; Chentsov, P. A.; Ukolov, S. S.
2016-12-01
Many routing problems arising in different applications can be interpreted as discrete optimization problems with additional constraints. These include the generalized travelling salesman problem (GTSP), to which the task of tool routing for CNC thermal cutting machines is sometimes reduced. Technological requirements tied to the distribution of thermal fields during the cutting process are of great importance when developing algorithms for this task, and they give rise to specific constraints for the GTSP. This paper provides a mathematical formulation for the problem of calculating thermal fields during metal sheet thermal cutting. A corresponding algorithm and its programmatic implementation are considered. A mathematical model that allows such constraints to be taken into account in other routing problems is also discussed.
NASA Astrophysics Data System (ADS)
Tickle, Andrew J.; Smith, Jeremy S.; Wu, Q. Henry
2008-04-01
Presented in this paper is the design of a skin filter which, unlike many systems already developed, does not use RGB or HSI colour but 8-bit greyscale instead. This is done to make the system more convenient to implement on an FPGA, to increase speed for real-time imaging, and to make it easier to combine with the previously designed binary-based algorithms. The paper discusses the approaches and methods that could be considered, such as Bayes formulations and thresholds, pixel extraction, mathematical morphological strings, edge detection, or a combination of these, and which of them provided the best performance. The research for this skin filter was carried out in two stages, firstly on people whose ethnic origin was White - British, Asian or Asian British, Chinese, or Mixed White and Asian. The second phase, which is not covered here in detail, applies the same principles to the other ethnic backgrounds of Black or Black British - Caribbean or African, other Black backgrounds, and Asian or Asian British - Indian, Pakistani or Bangladeshi. This is because the parameters that govern the detection process must be modified to account for greyscale changes in skin tone, texture and intensity; the same principles, however, still apply for general detection and for integration into the previous algorithm. The latter integration and the benefits it brings are also discussed.
The use of mathematical models in teaching wastewater treatment engineering.
Morgenroth, E; Arvin, E; Vanrolleghem, P
2002-01-01
Mathematical modeling of wastewater treatment processes has become increasingly popular in recent years. To prepare students for their future careers, environmental engineering education should provide students with sufficient background and experiences to understand and apply mathematical models efficiently and responsibly. Approaches for introducing mathematical modeling into courses on wastewater treatment engineering are discussed depending on the learning objectives, level of the course and the time available.
ERIC Educational Resources Information Center
Blotnicky, Karen A.; Franz-Odendaal, Tamara; French, Frederick; Joy, Phillip
2018-01-01
Background: A sample of 1448 students in grades 7 and 9 was drawn from public schools in Atlantic Canada to explore students' knowledge of science and mathematics requirements for science, technology, engineering, and mathematics (STEM) careers. Also explored were their mathematics self-efficacy (MSE), their future career interests, their…
ERIC Educational Resources Information Center
Hole, Arne; Grønmo, Liv Sissel; Onstad, Torgeir
2018-01-01
Background: This paper discusses a framework for analyzing the dependence on mathematical theory in test items, that is, a framework for discussing to what extent knowledge of mathematical theory is helpful for the student in solving the item. The framework can be applied to any test in which some knowledge of mathematical theory may be useful,…
Elements of Mathematics, Book O: Intuitive Background. Chapter 1, Operational Systems.
ERIC Educational Resources Information Center
Exner, Robert; And Others
The sixteen chapters of this book provide the core material for the Elements of Mathematics Program, a secondary sequence developed for highly motivated students with strong verbal abilities. The sequence is based on a functional-relational approach to mathematics teaching, and emphasizes teaching by analysis of real-life situations. This text is…
How Young Children View Mathematical Representations: A Study Using Eye-Tracking Technology
ERIC Educational Resources Information Center
Bolden, David; Barmby, Patrick; Raine, Stephanie; Gardner, Matthew
2015-01-01
Background: It has been shown that mathematical representations can aid children's understanding of mathematical concepts but that children can sometimes have difficulty in interpreting them correctly. New advances in eye-tracking technology can help in this respect because it allows data to be gathered concerning children's focus of attention and…
How Important Is Where You Start? Early Mathematics Knowledge and Later School Success
ERIC Educational Resources Information Center
Claessens, Amy; Engel, Mimi
2013-01-01
Background: Children's early skills are essential for their later success in school. Recent evidence highlights the importance of early mathematics, relative to reading and socioemotional skills, for elementary school achievement. Key advocacy groups for both early childhood and mathematics education have issued position statements on the…
Provoking Contingent Moments: Knowledge for "Powerful Teaching" at the Horizon
ERIC Educational Resources Information Center
Hurst, Chris
2017-01-01
Background: Teacher knowledge continues to be a topic of debate in Australasia and in other parts of the world. There have been many attempts by mathematics educators and researchers to define the knowledge needed by teachers to teach mathematics effectively. A plethora of terms, such as mathematical content knowledge, pedagogical content…
ERIC Educational Resources Information Center
Sabag, Nissim
2017-01-01
Background: The importance of knowledge and skills in mathematics for electrical engineering students is well known. Engineers and engineering educators agree that any engineering curriculum must include plenty of mathematics studies to enrich the engineer's toolbox. Nevertheless, little attention has been given to the possible contribution of…
ERIC Educational Resources Information Center
Young, Adena E.; Worrell, Frank C.; Gabelko, Nina H.
2011-01-01
In this study, we used logistic regression to examine how well student background and prior achievement variables predicted success among students attending accelerated and enrichment mathematics courses at a summer program (N = 459). Socioeconomic status, grade point average (GPA), and mathematics diagnostic test scores significantly predicted…
A Simple Model for a SARS Epidemic
ERIC Educational Resources Information Center
Ang, Keng Cheng
2004-01-01
In this paper, we examine the use of an ordinary differential equation in modelling the SARS outbreak in Singapore. The model provides an excellent example of using mathematics in a real life situation. The mathematical concepts involved are accessible to students with A level Mathematics backgrounds. Data for the SARS epidemic in Singapore are…
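As a hedged illustration of the kind of compartmental ODE model the article refers to, the sketch below integrates a standard SIR system with SciPy; the parameter values are arbitrary and are not the Singapore SARS estimates.

```python
import numpy as np
from scipy.integrate import odeint

def sir(y, t, beta, gamma, n):
    s, i, r = y
    ds = -beta * s * i / n
    di = beta * s * i / n - gamma * i
    dr = gamma * i
    return [ds, di, dr]

n = 4_000_000                    # population size (illustrative)
y0 = [n - 1, 1, 0]               # one initial infective
t = np.linspace(0, 120, 121)     # days
solution = odeint(sir, y0, t, args=(0.35, 0.1, n))
print(solution[-1])              # S, I, R after 120 days
```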
ERIC Educational Resources Information Center
Huang, Qi; Zhang, Xiao; Liu, Yingyi; Yang, Wen; Song, Zhanmei
2017-01-01
Background: A growing body of recent research has shown that parent-child mathematical activities have a strong effect on children's mathematical learning. However, this research was conducted predominantly in Western societies and focused mainly on mothers' involvement in such activities. Aims: This study aimed to examine both mother-child and…
Elements of Mathematics, Book O: Intuitive Background. Chapter 5, Mappings.
ERIC Educational Resources Information Center
Exner, Robert; And Others
The sixteen chapters of this book provide the core material for the Elements of Mathematics Program, a secondary sequence developed for highly motivated students with strong verbal abilities. The sequence is based on a functional-relational approach to mathematics teaching, and emphasizes teaching by analysis of real-life situations. This text is…
Using Sport to Engage and Motivate Students to Learn Mathematics
ERIC Educational Resources Information Center
Robinson, Carol L.
2012-01-01
This article describes how technology has been used to motivate the learning of mathematics for students of Sports Technology at Loughborough University. Sports applications are introduced whenever appropriate and Matlab is taught to enable the students to solve realistic problems. The mathematical background of the students is varied and the…
Role of Mathematics Learning Development Centres in HEIs
ERIC Educational Resources Information Center
Nzekwe-Excel, C.
2010-01-01
Background and Rationale: Student withdrawal and non-completion in institutions have been an issue of considerable concern. The lack of mathematical ability has been identified as a factor resulting to non-completion in higher institutions. Several students in higher education approach mathematics with a lot of anxiety. This has created the need…
ERIC Educational Resources Information Center
Mercado, Janet
2017-01-01
Equity in mathematics teaching has gained increased attention in the last few decades. A growing field of research has provided various definitions of equity, outlined standards, and identified practices that lead to equitable learning opportunities for all students, particularly for students from non-dominant backgrounds. However, few studies…
Elements of Mathematics, Book O: Intuitive Background. Chapter 2, The Integers.
ERIC Educational Resources Information Center
Exner, Robert; And Others
The sixteen chapters of this book provide the core materials for the Elements of Mathematics Program, a secondary sequence developed for highly motivated students with strong verbal abilities. The sequence is based on a functional-relational approach to mathematics teaching, and emphasizes teaching by analysis of real-life situations. This text is…
Self-Concept Mediates the Relation between Achievement and Emotions in Mathematics
ERIC Educational Resources Information Center
Van der Beek, Jojanneke P. J.; Van der Ven, Sanne H. G.; Kroesbergen, Evelyn H.; Leseman, Paul P. M.
2017-01-01
Background: Mathematics achievement is related to positive and negative emotions. Pekrun's control-value theory of achievement emotions suggests that students' self-concept (i.e., self-appraisal of ability) may be an important mediator of the relation between mathematics achievement and emotions. Aims: The aims were (1) to investigate the…
Mathematics Instruction in Tokyo's and Hawaii's Junior High Schools. Final Report.
ERIC Educational Resources Information Center
Hawaii Univ., Honolulu. Coll. of Education.
Mathematics instruction in junior high schools in Tokyo and Hawaii was compared in order to gain knowledge of how mathematics teachers' effectiveness in the classroom may be improved. Because they were likely to influence teachers' behavior, these factors were considered: teachers' background and teaching load, allocation of time, views on…
Adewumi, Aderemi Oluyinka; Chetty, Sivashan
2017-01-01
The Annual Crop Planning (ACP) problem is a recently introduced problem in the literature. This study further expounds on the problem by presenting a new mathematical formulation based on market economic factors. To determine solutions, a new local search metaheuristic called the enhanced Best Performance Algorithm (eBPA) is investigated. The eBPA's results are compared against two well-known local search metaheuristics, Tabu Search and Simulated Annealing. The results show the potential of the eBPA for continuous optimization problems.
NASA Astrophysics Data System (ADS)
Chatterjee, R. S.; Singh, Narendra; Thapa, Shailaja; Sharma, Dravneeta; Kumar, Dheeraj
2017-06-01
The present study proposes land surface temperature (LST) retrieval from satellite-based thermal IR data by a single-channel radiative transfer algorithm, using atmospheric correction parameters derived from satellite-based and in-situ data and land surface emissivity (LSE) derived by a hybrid LSE model. For example, atmospheric transmittance (τ) was derived from Terra MODIS spectral radiance in atmospheric window and absorption bands, whereas the atmospheric path radiance and sky radiance were estimated using satellite- and ground-based in-situ solar radiation, geographic location and observation conditions. The hybrid LSE model, which is coupled with ground-based emissivity measurements, is more versatile than previous LSE models and yields improved emissivity values by a knowledge-based approach. It uses NDVI-based and NDVI Threshold Method (NDVITHM) based algorithms and field-measured emissivity values. The model is applicable to dense vegetation cover, mixed vegetation cover, and bare earth, including coal-mining-related land surface classes. The study was conducted in a coalfield of India badly affected by coal fire for decades. In a coal-fire-affected coalfield, LST provides precise temperature differences between thermally anomalous coal fire pixels and background pixels to facilitate coal fire detection and monitoring. The derived LST products of the present study were compared with radiant temperature images across some of the prominent coal fire locations in the study area, both graphically and by standard mathematical dispersion coefficients (coefficient of variation, coefficient of quartile deviation, coefficient of quartile deviation for the 3rd quartile vs. maximum temperature, and coefficient of mean deviation about the median), indicating a significant increase in the temperature differences among the pixels. The average temperature slope between adjacent pixels, which increases the potential for detecting coal fire pixels against background pixels, is significantly larger in the derived LST products than in the corresponding radiant temperature images.
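A hedged sketch of the NDVI-threshold emissivity step mentioned above is given below: estimate a vegetation fraction from NDVI and blend vegetation and soil emissivities. The thresholds and end-member emissivities are common textbook values, not the field-calibrated values of this study.

```python
import numpy as np

def ndvi_threshold_emissivity(ndvi, ndvi_soil=0.2, ndvi_veg=0.5,
                              eps_soil=0.96, eps_veg=0.99):
    ndvi = np.asarray(ndvi, dtype=float)
    # Fractional vegetation cover from the NDVI threshold method.
    pv = np.clip((ndvi - ndvi_soil) / (ndvi_veg - ndvi_soil), 0.0, 1.0) ** 2
    eps = eps_veg * pv + eps_soil * (1.0 - pv)
    eps[ndvi < ndvi_soil] = eps_soil   # bare soil / mining surfaces
    eps[ndvi > ndvi_veg] = eps_veg     # dense vegetation
    return eps

print(ndvi_threshold_emissivity([0.1, 0.35, 0.7]))
```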
Sundareshan, Malur K; Bhattacharjee, Supratik; Inampudi, Radhika; Pang, Ho-Yuen
2002-12-10
Computational complexity is a major impediment to the real-time implementation of image restoration and superresolution algorithms in many applications. Although powerful restoration algorithms have been developed within the past few years utilizing sophisticated mathematical machinery (based on statistical optimization and convex set theory), these algorithms are typically iterative in nature and require a sufficient number of iterations to be executed to achieve the desired resolution improvement that may be needed to meaningfully perform postprocessing image exploitation tasks in practice. Additionally, recent technological breakthroughs have facilitated novel sensor designs (focal plane arrays, for instance) that make it possible to capture megapixel imagery data at video frame rates. A major challenge in the processing of these large-format images is to complete the execution of the image processing steps within the frame capture times and to keep up with the output rate of the sensor so that all data captured by the sensor can be efficiently utilized. Consequently, development of novel methods that facilitate real-time implementation of image restoration and superresolution algorithms is of significant practical interest and is the primary focus of this study. The key to designing computationally efficient processing schemes lies in strategically introducing appropriate preprocessing steps together with the superresolution iterations to tailor optimized overall processing sequences for imagery data of specific formats. For substantiating this assertion, three distinct methods for tailoring a preprocessing filter and integrating it with the superresolution processing steps are outlined. These methods consist of a region-of-interest extraction scheme, a background-detail separation procedure, and a scene-derived information extraction step for implementing a set-theoretic restoration of the image that is less demanding in computation compared with the superresolution iterations. A quantitative evaluation of the performance of these algorithms for restoring and superresolving various imagery data captured by diffraction-limited sensing operations is also presented.
Electrical Circuits in the Mathematics/Computer Science Classroom.
ERIC Educational Resources Information Center
McMillan, Robert D.
1988-01-01
Shows how, with little or no electrical background, students can apply Boolean algebra concepts to design and build integrated electrical circuits in the classroom that will reinforce important ideas in mathematics. (PK)
System for corrosion monitoring in pipeline applying fuzzy logic mathematics
NASA Astrophysics Data System (ADS)
Kuzyakov, O. N.; Kolosova, A. L.; Andreeva, M. A.
2018-05-01
A list of factors influencing corrosion rate on the external side of underground pipeline is determined. Principles of constructing a corrosion monitoring system are described; the system performance algorithm and program are elaborated. A comparative analysis of methods for calculating corrosion rate is undertaken. Fuzzy logic mathematics is applied to reduce calculations while considering a wider range of corrosion factors.
The Relation between Types of Assessment Tasks and the Mathematical Reasoning Students Use
ERIC Educational Resources Information Center
Boesen, Jesper; Lithner, Johan; Palm, Torulf
2010-01-01
The relation between types of tasks and the mathematical reasoning used by students trying to solve tasks in a national test situation is analyzed. The results show that when confronted with test tasks that share important properties with tasks in the textbook the students solved them by trying to recall facts or algorithms. Such test tasks did…
Formal logic rewrite system bachelor in teaching mathematical informatics
NASA Astrophysics Data System (ADS)
Habiballa, Hashim; Jendryscik, Radek
2017-07-01
The article presents the capabilities of the formal rewrite logic system Bachelor for teaching theoretical computer science (mathematical informatics). The Bachelor system enables a constructivist approach to teaching and may therefore enhance the learning process in essential disciplines of theoretical informatics. It provides not only a detailed description of the formal rewrite process but also demonstrates algorithmic principles for manipulating logic formulae.
Cellular automata-based modelling and simulation of biofilm structure on multi-core computers.
Skoneczny, Szymon
2015-01-01
The article presents a mathematical model of biofilm growth for aerobic biodegradation of a toxic carbonaceous substrate. Modelling of biofilm growth has fundamental significance in numerous processes of biotechnology and mathematical modelling of bioreactors. The process following double-substrate kinetics with substrate inhibition proceeding in a biofilm has not been modelled so far by means of cellular automata. Each process in the model proposed, i.e. diffusion of substrates, uptake of substrates, growth and decay of microorganisms and biofilm detachment, is simulated in a discrete manner. It was shown that for flat biofilm of constant thickness, the results of the presented model agree with those of a continuous model. The primary outcome of the study was to propose a mathematical model of biofilm growth; however a considerable amount of focus was also placed on the development of efficient algorithms for its solution. Two parallel algorithms were created, differing in the way computations are distributed. Computer programs were created using OpenMP Application Programming Interface for C++ programming language. Simulations of biofilm growth were performed on three high-performance computers. Speed-up coefficients of computer programs were compared. Both algorithms enabled a significant reduction of computation time. It is important, inter alia, in modelling and simulation of bioreactor dynamics.
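The full model is not reproduced in the abstract, so the sketch below shows only one ingredient it lists, a discrete substrate-diffusion update on a 2-D grid written with NumPy; grid size, boundary treatment, and the diffusion coefficient are arbitrary.

```python
import numpy as np

def diffusion_step(c, d=0.2):
    """One explicit diffusion update: each cell exchanges substrate with its
    four neighbours (no-flux boundaries via edge padding)."""
    padded = np.pad(c, 1, mode="edge")
    laplacian = (padded[:-2, 1:-1] + padded[2:, 1:-1]
                 + padded[1:-1, :-2] + padded[1:-1, 2:] - 4.0 * c)
    return c + d * laplacian

conc = np.zeros((20, 20))
conc[0, :] = 1.0                 # substrate supplied at the top boundary
for _ in range(100):
    conc = diffusion_step(conc)
print(conc[:3, 10])              # substrate penetrating into the grid
```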
NASA Astrophysics Data System (ADS)
Dimov, I.; Georgieva, R.; Todorov, V.; Ostromsky, Tz.
2017-10-01
Reliability of large-scale mathematical models is an important issue when such models are used to support decision makers. Sensitivity analysis of model outputs to variation or natural uncertainties of model inputs is crucial for improving the reliability of mathematical models. A comprehensive experimental study of Monte Carlo algorithms based on Sobol sequences for multidimensional numerical integration has been done. A comparison with Latin hypercube sampling and a particular quasi-Monte Carlo lattice rule based on generalized Fibonacci numbers has been presented. The algorithms have been successfully applied to compute global Sobol sensitivity measures corresponding to the influence of several input parameters (six chemical reactions rates and four different groups of pollutants) on the concentrations of important air pollutants. The concentration values have been generated by the Unified Danish Eulerian Model. The sensitivity study has been done for the areas of several European cities with different geographical locations. The numerical tests show that the stochastic algorithms under consideration are efficient for multidimensional integration and especially for computing small by value sensitivity indices. It is a crucial element since even small indices may be important to be estimated in order to achieve a more accurate distribution of inputs influence and a more reliable interpretation of the mathematical model results.
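A hedged sketch of quasi-Monte Carlo integration with a Sobol sequence, the kind of algorithm compared in the study, using SciPy's generator rather than the authors' code; the integrand is a simple stand-in, not the Unified Danish Eulerian Model.

```python
import numpy as np
from scipy.stats import qmc

dim = 4
sampler = qmc.Sobol(d=dim, scramble=True, seed=42)
points = sampler.random_base2(m=12)             # 2**12 points in [0, 1)^4

f = lambda x: np.exp(-np.sum(x ** 2, axis=1))   # test integrand
estimate = f(points).mean()
print(estimate)                                  # approximates the 4-D integral
```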
ERIC Educational Resources Information Center
Asirifi, Michael Kwabena; Mensah, Kweku Abeeku; Amoako, Joseph
2015-01-01
The purpose of this research article is to find out an assessment of different educational background of students performance in engineering mathematics and on the class of award obtained at the Higher National Diploma (HND) level at Cape Coast Polytechnic. A descriptive survey was conducted on students of the Electricals/Electronics Department…
Roth, Idit Lachover; Lachover, Boaz; Koren, Guy; Levin, Carina; Zalman, Luci; Koren, Ariel
2018-01-01
Background: β-thalassemia major is a severe disease with high morbidity. The world prevalence of carriers is around 1.5–7%. The present study aimed to find a reliable formula for detecting β-thalassemia carriers using an extensive database of more than 22,000 samples obtained from a homogeneous population of childbearing age women, 3161 (13.6%) of whom were β-thalassemia carriers, and to check previously published formulas. Methods: We applied a mathematical method based on the support vector machine (SVM) algorithm in the search for a reliable formula that can differentiate between thalassemia carriers and non-carriers, including normal counts or counts suspected to belong to iron-deficient women. Results: Shine's formula and our SVM formula showed >98% sensitivity and >99.77% negative predictive value (NPV). All other published formulas gave inferior results. Conclusions: We found a reliable formula that can be incorporated into any automatic blood counter to alert health providers to the possibility of a woman being a β-thalassemia carrier. A further simple hemoglobin characterization by HPLC analysis should be performed to confirm the diagnosis, and subsequent family studies should be carried out. Our SVM formula is currently limited to women of fertility age until further analysis in other groups can be performed. PMID:29326805
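A hedged sketch of the modelling approach described (a support vector machine separating carriers from non-carriers on red-cell indices) is shown below. The feature choice and the synthetic data are placeholders; the published formula was derived from more than 22,000 real blood counts.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(7)
# Columns: MCV (fL), MCH (pg), RBC (10^12/L), three indices often used to screen
# for thalassemia trait; the distributions below are invented for illustration.
non_carriers = rng.normal([88, 29, 4.6], [4, 1.5, 0.3], size=(300, 3))
carriers = rng.normal([65, 20, 5.6], [4, 1.5, 0.3], size=(60, 3))
X = np.vstack([non_carriers, carriers])
y = np.array([0] * 300 + [1] * 60)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear", class_weight="balanced"))
clf.fit(X, y)
print(clf.score(X, y))   # training accuracy on the synthetic data
```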
NASA Astrophysics Data System (ADS)
Nakamura, Yoshimasa; Sekido, Hiroto
2018-04-01
The finite or the semi-infinite discrete-time Toda lattice has many applications to various areas in applied mathematics. The purpose of this paper is to review how the Toda lattice appears in the Lanczos algorithm through the quotient-difference algorithm and its progressive form (pqd). Then a multistep progressive algorithm (MPA) for solving linear systems is presented. The extended Lanczos parameters can be given not by computing inner products of the extended Lanczos vectors but by using the pqd algorithm with highly relative accuracy in a lower cost. The asymptotic behavior of the pqd algorithm brings us some applications of MPA related to eigenvectors.
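As a reminder of the connection, the standard progressive qd (rhombus) recurrences, which are equivalent to the discrete-time Toda lattice equations, can be written as follows; index conventions vary between references and may differ from the paper's:

```latex
\begin{align*}
  q_k^{(n+1)} &= q_k^{(n)} + e_k^{(n)} - e_{k-1}^{(n+1)}, & e_0^{(n)} &= 0,\\
  e_k^{(n+1)} &= e_k^{(n)}\,\frac{q_{k+1}^{(n)}}{q_k^{(n+1)}}.
\end{align*}
```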
FINITE-STATE APPROXIMATIONS TO DENUMERABLE-STATE DYNAMIC PROGRAMS,
Descriptors: Air Force operations; logistics; inventory control; dynamic programming; approximation (mathematics); decision making; stochastic processes; game theory; algorithms; convergence.
Assessment of numeracy in sports and exercise science students at an Australian university
NASA Astrophysics Data System (ADS)
Green, Simon; McGlynn, Susan; Stuart, Deidre; Fahey, Paul; Pettigrew, Jim; Clothier, Peter
2018-05-01
The effect of high school study of mathematics on numeracy performance of sports and exercise science (SES) students is not clear. To investigate this further, we tested the numeracy skills of 401 students enrolled in a Bachelor of Health Sciences degree in SES using a multiple-choice survey consisting of four background questions and 39 numeracy test questions. Background questions (5-point scale) focused on highest level of mathematics studied at high school, self-perception of mathematics proficiency, perceived importance of mathematics to SES and likelihood of seeking help with mathematics. Numeracy questions focused on rational number, ratios and rates, basic algebra and graph interpretation. Numeracy performance was based on answers to these questions (1 mark each) and represented by the total score (maximum = 39). Students from first (n = 212), second (n = 78) and third (n = 111) years of the SES degree completed the test. The distribution of numeracy test scores for the entire cohort was negatively skewed with a median (IQR) score of 27(11). We observed statistically significant associations between test scores and the highest level of mathematics studied (P < 0.05), being lowest in students who studied Year 10 Mathematics (20 (9)), intermediate in students who studied Year 12 General Mathematics (26 (8)) and highest in two groups of students who studied higher-level Year 12 Mathematics (31 (9), 31 (6)). There were statistically significant associations between test scores and level of self-perception of mathematics proficiency and also likelihood of seeking help with mathematics (P < 0.05) but not with perceived importance of mathematics to SES. These findings reveal that the level of mathematics studied in high school is a critical factor determining the level of numeracy performance in SES students.
NASA Astrophysics Data System (ADS)
Sur, Chiranjib; Shukla, Anupam
2018-03-01
The Bacteria Foraging Optimisation Algorithm is a collective-behaviour-based metaheuristic search that depends on the social influence of the bacteria co-agents in the problem's search space. The algorithm faces serious hindrances when applied to discrete and graph-based problems because of its biased mathematical modelling and dynamic structure. This motivated the introduction of a discrete form, the Discrete Bacteria Foraging Optimisation (DBFO) Algorithm, for discrete problems, which in real life outnumber the continuous-domain problems represented by mathematical and numerical equations. In this work, we simulate a graph-based multi-objective road optimisation problem and discuss the prospects of applying the method to other similar optimisation and graph-based problems. The various solution representations that DBFO can handle are also discussed. The implications and dynamics of the parameters used in DBFO are illustrated from the point of view of the problems and combine both exploration and exploitation. The results of DBFO are compared with the Ant Colony Optimisation and Intelligent Water Drops algorithms. An important feature of DBFO is that the bacteria agents do not depend on local heuristic information but estimate new exploration schemes based on previous experience and analysis of the covered path. This makes the algorithm better at generating combinations for graph-based and NP-hard problems.
Design of a Synthetic Aperture Array to Support Experiments in Active Control of Scattering
1990-06-01
becomes necessary to validate the theory and test the control system algorithms. While experiments in open water would be most like the anticipated...mathematical development of the beamforming algorithms used as well as an estimate of their applicability to the specifics of beamforming in a reverberant...Chebyshev array have been proposed. The method used in ARRAY, a nested product algorithm, proposed by Bresler [21], is recommended by Pozar [19] and
Optimization of the two-sample rank Neyman-Pearson detector
NASA Astrophysics Data System (ADS)
Akimov, P. S.; Barashkov, V. M.
1984-10-01
The development of optimal algorithms concerned with rank considerations in the case of finite sample sizes involves considerable mathematical difficulties. The present investigation provides results related to the design and the analysis of an optimal rank detector based on the Neyman-Pearson criterion. The detection of a signal in the presence of background noise is considered, taking into account n observations (readings) x1, x2, ... xn in the experimental communications channel. The rank of an observation is computed on the basis of relations between x and the variable y, representing the interference. Attention is given to conditions in the absence of a signal, the probability of the detection of an arriving signal, details regarding the utilization of the Neyman-Pearson criterion, the scheme of an optimal rank, multichannel, incoherent detector, and an analysis of the detector.
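A hedged sketch of a two-sample rank test in the Neyman-Pearson spirit described above: rank the observations against noise-only reference samples, form the rank-sum statistic, and compare it with a threshold set for a target false-alarm probability. The normal approximation used for the threshold is an assumption of this sketch, not necessarily the detector analysed in the paper.

```python
import numpy as np
from scipy.stats import norm

def rank_sum_detector(x, y, p_fa=0.01):
    """Declare a signal present if the rank-sum of x within (x, y) is large."""
    n, m = len(x), len(y)
    combined = np.concatenate([x, y])
    ranks = np.argsort(np.argsort(combined)) + 1   # ranks 1..n+m
    statistic = ranks[:n].sum()
    mean = n * (n + m + 1) / 2.0                   # null mean of the rank sum
    var = n * m * (n + m + 1) / 12.0               # null variance
    threshold = mean + norm.ppf(1 - p_fa) * np.sqrt(var)
    return statistic > threshold

rng = np.random.default_rng(3)
noise_ref = rng.normal(0, 1, 200)                  # interference-only reference
obs = rng.normal(0.8, 1, 30)                       # observations with a signal
print(rank_sum_detector(obs, noise_ref))           # True in most runs
```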
Giving students the run of sprinting models
NASA Astrophysics Data System (ADS)
Heck, André; Ellermeijer, Ton
2009-11-01
A biomechanical study of sprinting is an interesting task for students who have a background in mechanics and calculus. These students can work with real data and do practical investigations similar to the way sports scientists do research. Student research activities are viable when the students are familiar with tools to collect and work with data from sensors and video recordings and with modeling tools for comparing simulation and experimental results. This article describes a multipurpose system, named COACH, that offers a versatile integrated set of tools for learning, doing, and teaching mathematics and science in a computer-based inquiry approach. Automated tracking of reference points and correction of perspective distortion in videos, state-of-the-art algorithms for data smoothing and numerical differentiation, and graphical system dynamics based modeling are some of the built-in techniques that are suitable for motion analysis. Their implementation and their application in student activities involving models of running are discussed.
Makkai, Géza; Buzády, Andrea; Erostyák, János
2010-01-01
Determination of the concentrations of spectrally overlapping compounds presents special difficulties. Several methods are available to calculate the constituents' concentrations in moderately complex mixtures. A method which can provide information about spectrally hidden components in mixtures is very useful. Two methods powerful in resolving spectral components are compared in this paper. The first method tested is Derivative Matrix Isopotential Synchronous Fluorimetry (DMISF). It is based on derivative analysis of MISF spectra, which are constructed using isopotential trajectories in the Excitation-Emission Matrix (EEM) of the background solution. For the DMISF method, a mathematical routine fitting the 3D data of EEMs was developed. The other method tested uses a classical Least Squares Fitting (LSF) algorithm, wherein Rayleigh and Raman scattering bands may lead to complications. Both methods give excellent sensitivity, and each has advantages over the other. Detection limits of DMISF and LSF were determined at very different concentration and noise levels.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schwartz, J T
1975-06-01
A summary of work during the past several years on SETL, a new programming language drawing its dictions and basic concepts from the mathematical theory of sets, is presented. The work was started with the idea that a programming language modeled after an appropriate version of the formal language of mathematics might allow a programming style with some of the succinctness of mathematics, and that this might ultimately enable one to express and experiment with more complex algorithms than are now within reach. Part I discusses the general approach followed in the work. Part II focuses directly on the details of the SETL language as it is now defined. It describes the facilities of SETL, includes short libraries of miscellaneous and of code optimization algorithms illustrating the use of SETL, and gives a detailed description of the manner in which the set-theoretic primitives provided by SETL are currently implemented. (RWR)
Multiscale Mathematics for Biomass Conversion to Renewable Hydrogen
DOE Office of Scientific and Technical Information (OSTI.GOV)
Plechac, Petr
2016-03-01
The overall objective of this project was to develop multiscale models for understanding and eventually designing complex processes for renewables. To the best of our knowledge, our work is the first attempt at modeling complex reacting systems, whose performance relies on underlying multiscale mathematics and developing rigorous mathematical techniques and computational algorithms to study such models. Our specific application lies at the heart of biofuels initiatives of DOE and entails modeling of catalytic systems, to enable economic, environmentally benign, and efficient conversion of biomass into either hydrogen or valuable chemicals.
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
2013-07-01
The Mathematics and Computation Division of the American Nuclear Society (ANS) and the Idaho Section of the ANS hosted the 2013 International Conference on Mathematics and Computational Methods Applied to Nuclear Science and Engineering (M and C 2013). This proceedings volume contains over 250 full papers on topics including reactor physics; radiation transport; materials science; nuclear fuels; core performance and optimization; reactor systems and safety; fluid dynamics; medical applications; analytical and numerical methods; algorithms for advanced architectures; and validation, verification, and uncertainty quantification.
Basic mathematical function libraries for scientific computation
NASA Technical Reports Server (NTRS)
Galant, David C.
1989-01-01
Ada packages implementing selected mathematical functions for the support of scientific and engineering applications were written. The packages provide the Ada programmer with the mathematical function support found in the languages Pascal and FORTRAN as well as an extended precision arithmetic and a complete complex arithmetic. The algorithms used are fully described and analyzed. Implementation assumes that the Ada type FLOAT objects fully conform to the IEEE 754-1985 standard for single binary floating-point arithmetic, and that INTEGER objects are 32-bit entities. Codes for the Ada packages are included as appendixes.
NASA Astrophysics Data System (ADS)
Ham, Woonchul; Song, Chulgyu; Lee, Kangsan; Roh, Seungkuk
2016-05-01
In this paper, we propose a new image reconstruction algorithm that considers the geometric information of the acoustic sources and the sensor detector, and we review the previously proposed two-step reconstruction algorithm based on the geometrical information of the ROI (region of interest), which accounts for the finite size of the acoustic sensor element. In the new image reconstruction algorithm, not only is the mathematical analysis very simple, but its software implementation is also easy because the FFT is not needed. We verify the effectiveness of the proposed reconstruction algorithm with simulation results obtained using the MATLAB k-Wave toolbox.
Substructure System Identification for Finite Element Model Updating
NASA Technical Reports Server (NTRS)
Craig, Roy R., Jr.; Blades, Eric L.
1997-01-01
This report summarizes research conducted under a NASA grant on the topic 'Substructure System Identification for Finite Element Model Updating.' The research concerns ongoing development of the Substructure System Identification Algorithm (SSID Algorithm), a system identification algorithm that can be used to obtain mathematical models of substructures, like Space Shuttle payloads. In the present study, particular attention was given to the following topics: making the algorithm robust to noisy test data, extending the algorithm to accept experimental FRF data that covers a broad frequency bandwidth, and developing a test analytical model (TAM) for use in relating test data to reduced-order finite element models.
ERIC Educational Resources Information Center
Miller, Jodie; Warren, Elizabeth
2014-01-01
Students living in disadvantaged contexts and whose second language is English (ESL) are at risk of not succeeding in school mathematics. It has been internationally recognised that the link between students' socioeconomic background and their achievement in mathematics is more pronounced for Australian students (Thomson et al. 2011). This gap is even more…
A Mathematical Experience Involving Defining Processes: In-Action Definitions and Zero-Definitions
ERIC Educational Resources Information Center
Ouvrier-Buffet, Cecile
2011-01-01
In this paper, a focus is made on defining processes at stake in an unfamiliar situation coming from discrete mathematics which brings surprising mathematical results. The epistemological framework of Lakatos is questioned and used for the design and the analysis of the situation. The cognitive background of Vergnaud's approach enriches the study…
2012 National Survey of Science and Mathematics Education: Status of High School Biology
ERIC Educational Resources Information Center
Lyons, Kiira M.
2013-01-01
The 2012 National Survey of Science and Mathematics Education was designed to provide up-to-date information and to identify trends in the areas of teacher background and experience, curriculum and instruction, and the availability and use of instructional resources. A total of 7,752 science and mathematics teachers in schools across the United…
2012 National Survey of Science and Mathematics Education: Status of High School Chemistry
ERIC Educational Resources Information Center
Smith, P. Sean
2013-01-01
The 2012 National Survey of Science and Mathematics Education was designed to provide up-to-date information and to identify trends in the areas of teacher background and experience, curriculum and instruction, and the availability and use of instructional resources. A total of 7,752 science and mathematics teachers in schools across the United…
2012 National Survey of Science and Mathematics Education: Status of Elementary School Science
ERIC Educational Resources Information Center
Trygstad, Peggy J.
2013-01-01
The 2012 National Survey of Science and Mathematics Education was designed to provide up-to-date information and to identify trends in the areas of teacher background and experience, curriculum and instruction, and the availability and use of instructional resources. A total of 7,752 science and mathematics teachers in schools across the United…
ERIC Educational Resources Information Center
Pinxten, Maarten; Marsh, Herbert W.; De Fraine, Bieke; Van Den Noortgate, Wim; Van Damme, Jan
2014-01-01
Background: The multidimensionality of the academic self-concept in terms of domain specificity has been well established in previous studies, whereas its multidimensionality in terms of motivational functions (the so-called affect-competence separation) needs further examination. Aim: This study aims at exploring differential effects of enjoyment…
Report of the 2012 National Survey of Science and Mathematics Education
ERIC Educational Resources Information Center
Banilower, Eric R.; Smith, P. Sean; Weiss, Iris R.; Malzahn, Kristen A.; Campbell, Kiira M.; Weis, Aaron M.
2013-01-01
The 2012 National Survey of Science and Mathematics Education was designed to provide up-to-date information and to identify trends in the areas of teacher background and experience, curriculum and instruction, and the availability and use of instructional resources. A total of 7,752 science and mathematics teachers in schools across the United…
2012 National Survey of Science and Mathematics Education: Status of Middle School Science
ERIC Educational Resources Information Center
Weis, Aaron M.
2013-01-01
The 2012 National Survey of Science and Mathematics Education was designed to provide up-to-date information and to identify trends in the areas of teacher background and experience, curriculum and instruction, and the availability and use of instructional resources. A total of 7,752 science and mathematics teachers in schools across the United…
A Case Study of Pedagogy of Mathematics Support Tutors without a Background in Mathematics Education
ERIC Educational Resources Information Center
Walsh, Richard
2017-01-01
This study investigates the pedagogical skills and knowledge of three tertiary-level mathematics support tutors in a large group classroom setting. This is achieved through the use of video analysis and a theoretical framework comprising Rowland's Knowledge Quartet and general pedagogical knowledge. The study reports on the findings in relation to…
Studies in Mathematics, Volume IV. Geometry.
ERIC Educational Resources Information Center
Kutuzov, B. V.
This book is a translation of a Russian text. The translation is exact, and the language used by the author has not been brought up to date. The volume is probably most useful as a source of supplementary materials for high school mathematics. It is also useful for teachers to broaden their mathematical background. Chapters included in the text…
ERIC Educational Resources Information Center
Dierdorp, Adri; Bakker, Arthur; van Maanen, Jan A.; Eijkelhof, Harrie M. C.
2014-01-01
Background: Creating coherence between school subjects mathematics and science and making these school subjects meaningful are still topical challenges. This study investigates how students make meaningful connections between mathematics, statistics, science and applications when they engage in a specially developed unit that is based on…
ERIC Educational Resources Information Center
Exner, Robert; And Others
The sixteen chapters of this book provide the core material for the Elements of Mathematics Program, a secondary sequence developed for highly motivated students with strong verbal abilities. The sequence is based on a functional-relational approach to mathematics teaching, and emphasizes teaching by analysis of real-life situations. This text is…
ERIC Educational Resources Information Center
Jehopio, Peter J.; Wesonga, Ronald
2017-01-01
Background: The main objective of the study was to examine the relevance of engineering mathematics to the emerging industries. The level of abstraction, the standard of rigor, and the depth of theoretical treatment derived from mathematical knowledge are necessary skills expected of a graduate engineering technician. The question of whether…
2012 National Survey of Science and Mathematics Education: Status of High School Physics
ERIC Educational Resources Information Center
Banilower, Eric R.
2013-01-01
The 2012 National Survey of Science and Mathematics Education was designed to provide up-to-date information and to identify trends in the areas of teacher background and experience, curriculum and instruction, and the availability and use of instructional resources. A total of 7,752 science and mathematics teachers in schools across the United…
Judged Similarity of Aptitude and Achievement Tests in Mathematics.
ERIC Educational Resources Information Center
Donlon, Thomas F.
This study attempts to establish the ability of a panel of five judges with varied mathematics backgrounds to distinguish between two types of mathematical tests by separating their component items when they are presented in a mixed pool of aptitude and achievement tests. Typically, the two tests show high correlation. The judges showed about 70%…
ERIC Educational Resources Information Center
Baker, Courtney K.; Galanti, Terrie M.
2017-01-01
Background: This research highlights a school-university collaboration to pilot a professional development framework for integrating STEM in K-6 mathematics classrooms in a mid-Atlantic suburban school division. Because mathematics within STEM integration is often characterized as the calculations or the data representations in science classrooms,…
The Math Wars: Tensions in the Development of School Mathematics Curricula
ERIC Educational Resources Information Center
Wright, Pete
2012-01-01
The Math Wars have been raging since the 1990s in the United States, where the world of mathematics education has become polarised into two camps: the reformers and the traditionalists. In this article I explore the background to the Math Wars, with specific reference to conflicting ideologies of mathematics education. I draw parallels with…
ERIC Educational Resources Information Center
Kajander, Ann; Lovric, Miroslav
2017-01-01
As part of recent scrutiny of teacher capacity, the question of teachers' content knowledge of higher level mathematics emerges as important to the field of mathematics education. Elementary teachers in North America and some other countries tend to be subject generalists, yet it appears that some higher level mathematics background may be…
ERIC Educational Resources Information Center
Herbel-Eisenmann, Beth; Bartell, Tonya Gau; Breyfogle, M. Lynn; Bieda, Kristen; Crespo, Sandra; Dominguez, Higinio; Drake, Corey
2013-01-01
In this essay, the authors provide a rationale for the need to break the silence of privilege and oppression in mathematics education. They begin by providing a brief rationale from their personal and professional perspectives, which includes background about planning and executing the Privilege and Oppression in the Mathematics Preparation of…
Factors Related to White, Black, and Hispanic Women's Mathematics Attainments: A Descriptive Study.
ERIC Educational Resources Information Center
Rothschild, Susan J. S.; Lichtman, Marilyn
Virtually no research conducted on women and mathematics is longitudinal in scope, generalizable in extent, and ethnic-race specific in nature. This descriptive study begins to fill the gap by examining the effects of background, school, and social-psychological factors on Hispanic, black, and white women's mathematics attainments. Data for the…
Does the Acquisition of Spatial Skill Involve a Shift from Algorithm to Memory Retrieval?
ERIC Educational Resources Information Center
Frank, David J.; Macnamara, Brooke N.
2017-01-01
Performance on verbal and mathematical tasks is enhanced when participants shift from using algorithms to retrieving information directly from memory (Siegler, 1988a). However, it is unknown whether a shift to retrieval is involved in dynamic spatial skill acquisition. For example, do athletes mentally extrapolate the trajectory of the ball, or do…
Advanced Physiological Estimation of Cognitive Status (APECS)
2009-09-15
Final report. EEG...fitness and transmit data to command and control systems. Some of the signals that the physiological sensors measure are readily interpreted, such as...electroencephalogram (EEG) and other signals requires a complex series of mathematical transformations or algorithms. Overall, research on algorithms
ERIC Educational Resources Information Center
Karagiannis, P.; Markelis, I.; Paparrizos, K.; Samaras, N.; Sifaleras, A.
2006-01-01
This paper presents new web-based educational software (webNetPro) for "Linear Network Programming." It includes many algorithms for "Network Optimization" problems, such as shortest path problems, minimum spanning tree problems, maximum flow problems and other search algorithms. Therefore, webNetPro can assist the teaching process of courses such…
Software Technology Readiness Assessment. Defense Acquisition Guidance with Space Examples
2010-04-01
are never Software CTE candidates. Algorithm Example: Filters. Definitions (Filters in Signal Processing): a filter is a mathematical algorithm...SOA as a CTE? Google produced 40 million (!) hits in 0.2 sec for "SOA". Even if we discount hits on the Society of Actuaries and
ERIC Educational Resources Information Center
Flores, Raymond; Koontz, Esther; Inan, Fethi A.; Alagic, Mara
2015-01-01
This study examined the impact of the order of two teaching approaches on students' abilities and on-task behaviors while learning how to solve percentage problems. Two treatment groups were compared. The MR-first group received multiple representation instruction followed by traditional algorithmic instruction, and the TA-first group received these teaching…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oler, Kiri J.; Miller, Carl H.
In this paper, we present a methodology for reverse engineering integrated circuits, including a mathematical verification of a scalable algorithm used to generate minimal finite state machine representations of integrated circuits.
WebArray: an online platform for microarray data analysis
Xia, Xiaoqin; McClelland, Michael; Wang, Yipeng
2005-01-01
Background Many cutting-edge microarray analysis tools and algorithms, including the commonly used limma and affy packages in Bioconductor, need sophisticated knowledge of mathematics, statistics and computer skills for implementation. Commercially available software can provide a user-friendly interface at considerable cost. To facilitate the use of these tools for microarray data analysis on an open platform we developed an online microarray data analysis platform, WebArray, for bench biologists to utilize these tools to explore data from single/dual color microarray experiments. Results The currently implemented functions were based on the limma and affy packages from Bioconductor, the spacings LOESS histogram (SPLOSH) method, a PCA-assisted normalization method and a genome mapping method. WebArray incorporates these packages and provides a user-friendly interface for accessing a wide range of key functions of limma and others, such as spot quality weighting, background correction, graphical plotting, normalization, linear modeling, empirical Bayes statistical analysis, false discovery rate (FDR) estimation, and chromosomal mapping for genome comparison. Conclusion WebArray offers a convenient platform for bench biologists to access several cutting-edge microarray data analysis tools. The website is freely available at . It runs on a Linux server with Apache and MySQL. PMID:16371165
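For readers unfamiliar with the false discovery rate estimation mentioned above, a minimal Benjamini-Hochberg sketch in Python is shown below; it only illustrates the general idea, not WebArray's implementation (which relies on the Bioconductor limma machinery), and the p-values are hypothetical.

```python
import numpy as np

def bh_adjust(pvals):
    """Benjamini-Hochberg adjusted p-values (q-values)."""
    p = np.asarray(pvals, dtype=float)
    n = p.size
    order = np.argsort(p)
    scaled = p[order] * n / np.arange(1, n + 1)
    # enforce monotonicity from the largest p-value downwards
    adjusted = np.minimum.accumulate(scaled[::-1])[::-1]
    out = np.empty(n)
    out[order] = np.clip(adjusted, 0.0, 1.0)
    return out

# hypothetical p-values from per-gene linear-model tests
print(bh_adjust([0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.300]))
```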
Underwater terrain-aided navigation system based on combination matching algorithm.
Li, Peijuan; Sheng, Guoliang; Zhang, Xiaofei; Wu, Jingqiu; Xu, Baochun; Liu, Xing; Zhang, Yao
2018-07-01
Considering that the terrain-aided navigation (TAN) system based on the iterated closest contour point (ICCP) algorithm diverges easily when the error of the indicative track of the strapdown inertial navigation system (SINS) is large, a Kalman filter is adopted in the traditional ICCP algorithm: the difference between the matching result and the SINS output is used as the measurement of the Kalman filter, the cumulative error of the SINS is then corrected in time by filter feedback correction, and the indicative track used in ICCP is improved. The mathematical model of the autonomous underwater vehicle (AUV) integrated navigation system and the observation model of TAN are built. A proper number of matching points is designated by comparing the simulation results of matching time and matching precision. Simulation experiments are carried out according to the ICCP algorithm and the mathematical model. It can be concluded from the simulation experiments that the navigation accuracy and stability are improved with the proposed combinational algorithm provided that a proper number of matching points is used. The integrated navigation system is effective in prohibiting the divergence of the indicative track and can meet the underwater, long-term and high-precision requirements of navigation systems for autonomous underwater vehicles. Copyright © 2017. Published by Elsevier Ltd.
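As a rough illustration of the filtering idea described above (the difference between the ICCP matching result and the SINS output serving as the Kalman measurement of the accumulated navigation error), here is a minimal one-dimensional Python sketch; the random-walk error model and the noise levels are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)
x_est, P = 0.0, 1.0        # estimated SINS position error and its variance
Q, R = 0.01, 4.0           # assumed process / measurement noise variances
true_err = 0.0

for k in range(50):
    true_err += rng.normal(0.0, np.sqrt(Q))     # SINS error slowly accumulates
    z = true_err + rng.normal(0.0, np.sqrt(R))  # "ICCP fix minus SINS output"
    P += Q                                      # predict
    K = P / (P + R)                             # Kalman gain
    x_est += K * (z - x_est)                    # update with the matching residual
    P *= (1.0 - K)

print(f"estimated SINS error: {x_est:.2f}, true error: {true_err:.2f}")
```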
NASA Astrophysics Data System (ADS)
Yi, Cancan; Lv, Yong; Xiao, Han; Ke, Ke; Yu, Xun
2017-12-01
For the laser-induced breakdown spectroscopy (LIBS) quantitative analysis technique, baseline correction is an essential part of LIBS data preprocessing. As a widely existing phenomenon, baseline drift is generated by the fluctuation of laser energy, inhomogeneity of sample surfaces and background noise, and it has aroused the interest of many researchers. Most of the prevalent algorithms need to preset some key parameters, such as a suitable spline function and the fitting order, and thus lack adaptability. Based on the characteristics of LIBS, such as the sparsity of spectral peaks and the low-pass filtered feature of the baseline, a novel baseline correction and spectral data denoising method is studied in this paper. The improved technique uses a convex optimization scheme to form a non-parametric baseline correction model. Meanwhile, an asymmetric penalty function is employed to enhance the signal-to-noise ratio (SNR) of the LIBS signal and improve reconstruction precision. Furthermore, an efficient iterative algorithm is applied to the optimization process, so as to ensure the convergence of this algorithm. To validate the proposed method, the concentration analysis of Chromium (Cr), Manganese (Mn) and Nickel (Ni) contained in 23 certified high alloy steel samples is assessed by using quantitative models with Partial Least Squares (PLS) and Support Vector Machine (SVM). Because no prior knowledge of sample composition and no mathematical hypothesis are required, the method proposed in this paper has better accuracy in quantitative analysis than other methods and fully reflects its adaptive ability.
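The baseline model described above is a tailored convex-optimization formulation; as a generic stand-in for that family of methods, the short asymmetric least-squares (Whittaker-smoother) baseline sketch below conveys the idea of an asymmetric penalty pulling the estimated baseline under the spectral peaks. The smoothing and asymmetry parameters, and the synthetic spectrum, are illustrative only.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def als_baseline(y, lam=1e5, p=0.01, n_iter=10):
    """Asymmetric least-squares baseline (generic sketch, not the paper's model)."""
    L = len(y)
    D = sparse.diags([1.0, -2.0, 1.0], [0, -1, -2], shape=(L, L - 2))
    w = np.ones(L)
    for _ in range(n_iter):
        W = sparse.spdiags(w, 0, L, L)
        Z = W + lam * (D @ D.T)        # smoothness penalty on the baseline
        z = spsolve(Z.tocsc(), w * y)
        # asymmetric weights: points above the baseline (peaks) count far less
        w = p * (y > z) + (1.0 - p) * (y < z)
    return z

# illustrative use on synthetic data: a drifting baseline plus two narrow peaks
x = np.linspace(0.0, 1.0, 500)
spectrum = 0.5 * x + np.exp(-((x - 0.3) / 0.01) ** 2) + np.exp(-((x - 0.7) / 0.01) ** 2)
corrected = spectrum - als_baseline(spectrum)
```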
The finite element method in low speed aerodynamics
NASA Technical Reports Server (NTRS)
Baker, A. J.; Manhardt, P. D.
1975-01-01
The finite element procedure is shown to be of significant impact in design of the 'computational wind tunnel' for low speed aerodynamics. The uniformity of the mathematical differential equation description, for viscous and/or inviscid, multi-dimensional subsonic flows about practical aerodynamic system configurations, is utilized to establish the general form of the finite element algorithm. Numerical results for inviscid flow analysis, as well as viscous boundary layer, parabolic, and full Navier Stokes flow descriptions verify the capabilities and overall versatility of the fundamental algorithm for aerodynamics. The proven mathematical basis, coupled with the distinct user-orientation features of the computer program embodiment, indicate near-term evolution of a highly useful analytical design tool to support computational configuration studies in low speed aerodynamics.
Vision, healing brush, and fiber bundles
NASA Astrophysics Data System (ADS)
Georgiev, Todor
2005-03-01
The Healing Brush is a tool introduced for the first time in Adobe Photoshop (2002) that removes defects in images by seamless cloning (gradient domain fusion). The Healing Brush algorithms are built on a new mathematical approach that uses Fibre Bundles and Connections to model the representation of images in the visual system. Our mathematical results are derived from first principles of human vision, related to adaptation transforms of von Kries type and Retinex theory. In this paper we present the new result of Healing in arbitrary color space. In addition to supporting image repair and seamless cloning, our approach also produces the exact solution to the problem of high dynamic range compression [17] and can be applied to other image processing algorithms.
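As a rough sketch of the gradient-domain (seamless cloning) idea that the Healing Brush builds on, the Python fragment below solves a Poisson equation over a masked region by simple Jacobi iteration, so that the result keeps the source gradients while matching the destination image at the mask boundary. This is a generic textbook formulation, not Adobe's fibre-bundle algorithm, and it assumes the mask does not touch the image border.

```python
import numpy as np

def seamless_clone(src, dst, mask, iters=2000):
    """Jacobi solve of laplacian(out) = laplacian(src) inside mask; out = dst outside."""
    out = dst.astype(float).copy()
    lap_src = (np.roll(src, 1, 0) + np.roll(src, -1, 0) +
               np.roll(src, 1, 1) + np.roll(src, -1, 1) - 4.0 * src)
    for _ in range(iters):
        nbrs = (np.roll(out, 1, 0) + np.roll(out, -1, 0) +
                np.roll(out, 1, 1) + np.roll(out, -1, 1))
        out[mask] = ((nbrs - lap_src) / 4.0)[mask]   # update only the masked pixels
    return out
```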
Ammari, Habib; Boulier, Thomas; Garnier, Josselin; Wang, Han
2017-01-31
Understanding active electrolocation in weakly electric fish remains a challenging issue. In this article we propose a mathematical formulation of this problem, in terms of partial differential equations. This allows us to detail two algorithms: one for localizing a target using the multi-frequency aspect of the signal, and another one for identifying the shape of this target. Shape recognition is designed from a machine learning point of view, and takes advantage of both the multi-frequency setup and the movement of the fish around its prey. Numerical simulations are shown for the computation of the electric field emitted and sensed by the fish; they are then used as an input for the two algorithms.
An electromagnetism-like metaheuristic for open-shop problems with no buffer
NASA Astrophysics Data System (ADS)
Naderi, Bahman; Najafi, Esmaeil; Yazdani, Mehdi
2012-12-01
This paper considers open-shop scheduling with no intermediate buffer to minimize total tardiness. This problem occurs in many production settings, in the plastic molding, chemical, and food processing industries. The paper mathematically formulates the problem by a mixed integer linear program. The problem can be optimally solved by the model. The paper also develops a novel metaheuristic based on an electromagnetism algorithm to solve the large-sized problems. The paper conducts two computational experiments. The first includes small-sized instances by which the mathematical model and general performance of the proposed metaheuristic are evaluated. The second evaluates the metaheuristic for its performance to solve some large-sized instances. The results show that the model and algorithm are effective to deal with the problem.
Mathematical Analysis of Algorithms within Mana
2014-06-01
MANA to analyze military operations without necessarily understanding how the results are achieved. The purpose of this thesis is to explore the...mathematical formulas that MANA utilizes in an effort to aid in creating a more informed understanding of results reached by MANA. This work is intended...
Geometric Folding Algorithms: Bridging Theory to Practice
2009-11-03
orthogonal polyhedron can be folded from a single, universal crease pattern (box pleating). II. Origami design: (a) developed mathematical theory for what...happens in paper between creases, in particular for the case of circular creases; (b) circular crease origami on permanent exhibition at MoMA in New...; developing mathematical theory of Robert Lang's TreeMaker framework for efficiently folding tree-shaped origami bases.
Mathematical Fundamentals of Probabilistic Semantics for High-Level Fusion
2013-12-02
understanding of the fundamental aspects of uncertainty representation and reasoning that a theory of hard and soft high-level fusion must encompass. Successful completion requires an unbiased, in-depth...and soft information is the lack of a fundamental HLIF theory, backed by a consistent mathematical framework and supporting algorithms. Although there
Creativity and Technology in Mathematics: From Story Telling to Algorithmic with Op'Art
ERIC Educational Resources Information Center
Mercat, Christian; Filho, Pedro Lealdino; El-Demerdash, Mohamed
2017-01-01
This article describes some of the results of the European project mcSquared (http://mc2-project.eu/) regarding the use of Op'Art and optical illusion pieces as a tool to foster modeling and creative mathematical thinking in students. We briefly present the c-book technology and some results obtained from experimenting with it. The Op'Art movement, with…
ERIC Educational Resources Information Center
Masalski, William J.
This book seeks to develop, enhance, and expand students' understanding of mathematics by using technology. Topics covered include the advantages of spreadsheets along with the opportunity to explore the 'what if?' type of questions encountered in the problem-solving process, enhancing the user's insight into the development and use of algorithms,…
Cooking Potatoes: Experimentation and Mathematical Modeling.
ERIC Educational Resources Information Center
Chen, Xiao Dong
2002-01-01
Describes a laboratory activity involving a mathematical model of cooking potatoes that can be solved analytically. Highlights the microstructure aspects of the experiment. Provides the key aspects of the results, detailed background readings, laboratory procedures and data analyses. (MM)
NASA Astrophysics Data System (ADS)
Huang, Weilin; Wang, Runqiu; Chen, Yangkang
2018-05-01
Microseismic signals are typically weak compared with the strong background noise. In order to effectively detect the weak signal in microseismic data, we propose a mathematical morphology based approach. We decompose the initial data into several morphological multiscale components. For detection of the weak signal, a non-stationary weighting operator is proposed and introduced into the process of reconstruction of the data by morphological multiscale components. The non-stationary weighting operator can be obtained by solving an inversion problem. The regularized non-stationary method can be understood as a non-stationary matching filtering method, where the matching filter has the same size as the data to be filtered. In this paper, we provide detailed algorithmic descriptions and analysis. The detailed algorithm framework, parameter selection and computational issues for the regularized non-stationary morphological reconstruction (RNMR) method are presented. We validate the presented method through a comprehensive analysis of different data examples. We first test the proposed technique using a synthetic data set. Then the proposed technique is applied to a field project, where the signals induced from hydraulic fracturing are recorded by 12 three-component geophones in a monitoring well. The result demonstrates that the RNMR can improve the detectability of the weak microseismic signals. Using the processed data, the short-term-average over long-term-average (STA/LTA) picking algorithm and Geiger's method are applied to obtain new locations of microseismic events. In addition, we show that the proposed RNMR method can be used not only on microseismic data but also on reflection seismic data to detect weak signals. We also discuss the extension of RNMR from 1-D to 2-D or a higher dimensional version.
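For readers unfamiliar with the picking step mentioned above, a minimal short-term-average/long-term-average (STA/LTA) ratio can be sketched as follows; the window lengths, threshold and synthetic trace are illustrative choices, and this is not the paper's RNMR reconstruction itself.

```python
import numpy as np

def sta_lta(trace, nsta, nlta):
    """Classic energy-based STA/LTA ratio of a single trace."""
    energy = np.asarray(trace, dtype=float) ** 2
    csum = np.concatenate(([0.0], np.cumsum(energy)))
    ratio = np.zeros(len(trace))
    for i in range(nlta, len(trace)):
        sta = (csum[i + 1] - csum[i + 1 - nsta]) / nsta
        lta = (csum[i + 1] - csum[i + 1 - nlta]) / nlta
        ratio[i] = sta / (lta + 1e-12)
    return ratio

# a weak synthetic arrival buried in noise, followed by a simple threshold trigger
rng = np.random.default_rng(0)
trace = rng.normal(0.0, 1.0, 2000)
trace[1200:1260] += 4.0 * np.sin(np.linspace(0.0, 12 * np.pi, 60))
triggers = np.flatnonzero(sta_lta(trace, nsta=20, nlta=200) > 3.0)
print(triggers[:5] if triggers.size else "no trigger")
```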
Digital Sound Synthesis Algorithms: a Tutorial Introduction and Comparison of Methods
NASA Astrophysics Data System (ADS)
Lee, J. Robert
The objectives of the dissertation are to provide both a compendium of sound-synthesis methods with detailed descriptions and sound examples, as well as a comparison of the relative merits of each method based on ease of use, observed sound quality, execution time, and data storage requirements. The methods are classified under the general headings of wavetable-lookup synthesis, additive synthesis, subtractive synthesis, nonlinear methods, and physical modelling. The nonlinear methods comprise a large group that ranges from the well-known frequency-modulation synthesis to waveshaping. The final category explores computer modelling of real musical instruments and includes numerical and analytical solutions to the classical wave equation of motion, along with some of the more sophisticated time-domain models that are possible through the prudent combination of simpler synthesis techniques. The dissertation is intended to be understandable by a musician who is mathematically literate but who does not necessarily have a background in digital signal processing. With this limitation in mind, a brief and somewhat intuitive description of digital sampling theory is provided in the introduction. Other topics such as filter theory are discussed as the need arises. By employing each of the synthesis methods to produce the same type of sound, interesting comparisons can be made. For example, a struck string sound, such as that typical of a piano, can be produced by algorithms in each of the synthesis classifications. Many sounds, however, are peculiar to a single algorithm and must be examined independently. Psychoacoustic studies were conducted as an aid in the comparison of the sound quality of several implementations of the synthesis algorithms. Other psychoacoustic experiments were conducted to supplement the established notions of which timbral issues are important in the re-synthesis of the sounds of acoustic musical instruments.
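To make one of the nonlinear methods concrete, the Python fragment below generates a simple two-operator frequency-modulation tone; the carrier/modulator frequencies, modulation index and decay envelope are arbitrary illustrative values, and the dissertation itself covers far more elaborate variants.

```python
import numpy as np

def fm_tone(fc=440.0, fm=220.0, index=3.0, dur=1.0, sr=44100):
    """y(t) = sin(2*pi*fc*t + I*sin(2*pi*fm*t)) -- basic two-operator FM."""
    t = np.arange(int(dur * sr)) / sr
    envelope = np.exp(-3.0 * t)          # crude percussive decay
    return envelope * np.sin(2 * np.pi * fc * t + index * np.sin(2 * np.pi * fm * t))

tone = fm_tone()   # a one-second bell-like tone; write it to a WAV file to listen
```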
Zhu, Haitao; Nie, Binbin; Liu, Hua; Guo, Hua; Demachi, Kazuyuki; Sekino, Masaki; Shan, Baoci
2016-05-01
Phase map cross-correlation detection and quantification may produce a highlighted signal at superparamagnetic iron oxide nanoparticles and distinguish them from other hypointensities. The method may quantify susceptibility change by performing least squares analysis between a theoretically generated magnetic field template and an experimentally scanned phase image. Because characteristic phase recognition requires the removal of phase wrap and phase background, additional steps of phase unwrapping and filtering may increase the chance of computing error and enlarge the inconsistency among algorithms. To solve this problem, a phase gradient cross-correlation and quantification method is developed by recognizing the characteristic phase gradient pattern instead of the phase image, because the phase gradient operation inherently includes unwrapping and filtering functions. However, few studies have mentioned the detectable limit of currently used phase gradient calculation algorithms. The limit may lead to an underestimation of the large magnetic susceptibility change caused by high-concentrated iron accumulation. In this study, a mathematical derivation points out the value of the maximum detectable phase gradient calculated by the differential chain algorithm in both the spatial and Fourier domains. To break through the limit, a modified quantification method is proposed by using unwrapped forward differentiation for phase gradient generation. The method enlarges the detectable range of phase gradient measurement and avoids the underestimation of magnetic susceptibility. Simulation and phantom experiments were used to quantitatively compare different methods. The in vivo application performs MRI scanning on nude mice implanted with iron-labeled human cancer cells. Results validate the limit of the detectable phase gradient and the consequent susceptibility underestimation. Results also demonstrate the advantage of unwrapped forward differentiation compared with differential chain algorithms for susceptibility quantification at high-concentrated iron accumulation. Copyright © 2015 Elsevier Inc. All rights reserved.
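The detectable-limit argument above can be reproduced numerically: re-wrapping the finite difference of a wrapped phase into (-π, π] cannot represent a true per-sample gradient larger than π, which is exactly why very large susceptibility changes are underestimated. The gradient values in this Python sketch are hypothetical.

```python
import numpy as np

def chain_gradient(wrapped_phase):
    """Differential-chain style gradient: finite difference re-wrapped into (-pi, pi]."""
    return np.angle(np.exp(1j * np.diff(wrapped_phase)))

x = np.arange(32)
for g in (0.9 * np.pi, 1.2 * np.pi):            # true per-sample phase gradients
    wrapped = np.angle(np.exp(1j * g * x))      # what the scanner delivers
    est = chain_gradient(wrapped).mean()
    print(f"true gradient {g:+.3f} rad, estimated {est:+.3f} rad")
# the 0.9*pi gradient is recovered, while the 1.2*pi gradient aliases to about -0.8*pi
```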
Research on the control of large space structures
NASA Technical Reports Server (NTRS)
Denman, E. D.
1983-01-01
The research effort on the control of large space structures at the University of Houston has concentrated on the mathematical theory of finite-element models; identification of the mass, damping, and stiffness matrix; assignment of damping to structures; and decoupling of structure dynamics. The objective of the work has been and will continue to be the development of efficient numerical algorithms for analysis, control, and identification of large space structures. The major consideration in the development of the algorithms has been the large number of equations that must be handled by the algorithm as well as sensitivity of the algorithms to numerical errors.
2017-01-01
The Annual Crop Planning (ACP) problem is a recently introduced problem in the literature. This study further expounds on this problem by presenting a new mathematical formulation, which is based on market economic factors. To determine solutions, a new local search metaheuristic algorithm is investigated, called the enhanced Best Performance Algorithm (eBPA). eBPA's results are compared against two well-known local search metaheuristic algorithms; these include Tabu Search and Simulated Annealing. The results show the potential of the eBPA for continuous optimization problems. PMID:28792495
Esfahanian, Mehri; Shokuhi Rad, Ali; Khoshhal, Saeed; Najafpour, Ghasem; Asghari, Behnam
2016-07-01
In this paper, a genetic algorithm was used to investigate mathematical modeling of ethanol fermentation in a continuous conventional bioreactor (CCBR) and a continuous membrane bioreactor (CMBR) with an ethanol-permselective polydimethylsiloxane (PDMS) membrane. A lab-scale CMBR with a medium glucose concentration of 100gL(-1) and the microorganism Saccharomyces cerevisiae was designed and fabricated. At a dilution rate of 0.14h(-1), a maximum specific cell growth rate and productivity of 0.27h(-1) and 6.49gL(-1)h(-1), respectively, were found in the CMBR. However, at very high dilution rates, the performance of the CMBR was quite similar to conventional fermentation on account of insufficient incubation time. In both systems, genetic algorithm modeling of cell growth, ethanol production and glucose concentration was conducted based on the Monod and Moser kinetic models during each retention time at unsteady conditions. The results showed that the Moser kinetic model was more satisfactory and desirable than the Monod model. Copyright © 2016 Elsevier Ltd. All rights reserved.
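As a toy illustration of fitting Monod kinetics with an evolutionary optimizer, the sketch below uses SciPy's differential evolution as a stand-in for the genetic algorithm of the study; the substrate/growth-rate data and parameter bounds are invented for the example.

```python
import numpy as np
from scipy.optimize import differential_evolution

# hypothetical data: substrate concentration S (g/L) and specific growth rate mu (1/h)
S = np.array([2.0, 5.0, 10.0, 20.0, 50.0, 100.0])
mu_obs = np.array([0.08, 0.15, 0.20, 0.24, 0.26, 0.27])

def monod(params, s):
    mu_max, Ks = params
    return mu_max * s / (Ks + s)

def sse(params):
    """Sum of squared errors between the Monod prediction and the observations."""
    return float(np.sum((monod(params, S) - mu_obs) ** 2))

result = differential_evolution(sse, bounds=[(0.01, 1.0), (0.1, 100.0)], seed=0)
mu_max, Ks = result.x
print(f"mu_max ~ {mu_max:.3f} 1/h, Ks ~ {Ks:.1f} g/L")
```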
Mathematics at A-Level. A Discussion Paper on the Applied Content. No. 93.
ERIC Educational Resources Information Center
Mathematical Association, Leicester (England).
In September 1979, the Mathematical Association in England held a weekend seminar on the scope of Applied Mathematics at A-level, and a subcommittee was established to consider the topic at more length. This paper is the first product of the subcommittee's deliberations. Sections 1 and 2 describe the background to current A-level courses: (1) who…
ERIC Educational Resources Information Center
Graf, Edith Aurora
2009-01-01
This report makes recommendations for the development of middle-school assessment in mathematics, based on a synthesis of scientific findings in cognitive psychology and mathematics education. The focus is on background research, rather than test specifications or example tasks. Readers interested in early development and pilot efforts associated…
ERIC Educational Resources Information Center
Berry, Emma; Mac An Bhaird, Ciarán; O'Shea, Ann
2015-01-01
The provision of some level of Mathematics Learning Support is now commonplace in the majority of Higher Education Institutions in the UK and Ireland. Most of these supports were initially established with the aim of trying to address the problem of large numbers of first-year students with weak mathematical backgrounds. The centres provide…
ERIC Educational Resources Information Center
Banse, Holland W.; Curby, Timothy W.; Palacios, Natalia A.; Rimm-Kaufman, Sara E.
2018-01-01
Background: Teaching is comprised of interconnected practices. Some practices are domain neutral (DN), or independent of a content area. Examples of DN practices include emotional and instructional support and classroom organization. Others are domain specific (DS), or content dependent. Within a mathematics context, examples of DS practices…
ERIC Educational Resources Information Center
Guerrero, Lourdes; Rivera, Antonio
Fourteen third graders were given numerical computation and division-with-remainder (DWR) problems both before and after they were taught the division algorithm in classrooms. Their solutions were examined. The results show that students' initial acquisition of the division algorithm did improve their performance in numerical division computations…
Summary of Research 1997, Department of Mathematics.
1999-01-01
problems. This capability is especially important at the present time when technology in general, and information operations in particular, are changing...compression algorithms, especially the Radiant TIN algorithm and its use on tactical imagery. SUMMARY: Several aspects of this problem were...points are not always the same, especially when bifurcation occurs. The equilibrium sets of control systems and their bifurcations are classified based
ERIC Educational Resources Information Center
Wiles, Clyde
Two questions were investigated in this study: (1) How did the computational proficiency of sixth graders who had one year's experience with Developing Mathematical Processes (DMP) materials compare with an equivalent group of students who used the usual textbook program; and (2) What occurs when sixth graders study algorithms as sequences of rule…
NASA Astrophysics Data System (ADS)
Basalto, Nicolas; Bellotti, Roberto; de Carlo, Francesco; Facchi, Paolo; Pantaleo, Ester; Pascazio, Saverio
2008-10-01
A clustering algorithm based on the Hausdorff distance is analyzed and compared to the single, complete, and average linkage algorithms. The four clustering procedures are applied to a toy example and to the time series of financial data. The dendrograms are scrutinized and their features compared. The Hausdorff linkage relies on firm mathematical grounds and turns out to be very effective when one has to discriminate among complex structures.
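A rough sketch of clustering by a pairwise Hausdorff distance is given below; it treats each object as a point set, builds a distance matrix and feeds it to an off-the-shelf hierarchical routine. The paper's Hausdorff linkage criterion operates at the cluster level, so this only illustrates the distance itself, and the point sets are synthetic.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff, squareform
from scipy.cluster.hierarchy import linkage

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two point sets."""
    return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])

rng = np.random.default_rng(0)
centres = [(0.0, 0.0), (0.0, 1.0), (5.0, 5.0), (5.0, 6.0)]
objects = [rng.normal(loc=c, scale=0.3, size=(30, 2)) for c in centres]

n = len(objects)
dmat = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        dmat[i, j] = dmat[j, i] = hausdorff(objects[i], objects[j])

Z = linkage(squareform(dmat), method="average")   # dendrogram from precomputed distances
print(Z)
```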
Chiang, Tzu-An; Che, Z H; Cui, Zhihua
2014-01-01
This study designed a cross-stage reverse logistics course for defective products so that damaged products generated in downstream partners can be directly returned to upstream partners throughout the stages of a supply chain for rework and maintenance. To solve this reverse supply chain design problem, an optimal cross-stage reverse logistics mathematical model was developed. In addition, we developed a genetic algorithm (GA) and three particle swarm optimization (PSO) algorithms: the inertia weight method (PSOA_IWM), V(Max) method (PSOA_VMM), and constriction factor method (PSOA_CFM), which we employed to find solutions to support this mathematical model. Finally, a real case and five simulative cases with different scopes were used to compare the execution times, convergence times, and objective function values of the four algorithms used to validate the model proposed in this study. Regarding system execution time, the GA consumed more time than the other three PSOs did. Regarding objective function value, the GA, PSOA_IWM, and PSOA_CFM could obtain a lower convergence value than PSOA_VMM could. Finally, PSOA_IWM demonstrated a faster convergence speed than PSOA_VMM, PSOA_CFM, and the GA did.
Chiang, Tzu-An; Che, Z. H.
2014-01-01
This study designed a cross-stage reverse logistics course for defective products so that damaged products generated in downstream partners can be directly returned to upstream partners throughout the stages of a supply chain for rework and maintenance. To solve this reverse supply chain design problem, an optimal cross-stage reverse logistics mathematical model was developed. In addition, we developed a genetic algorithm (GA) and three particle swarm optimization (PSO) algorithms: the inertia weight method (PSOA_IWM), V Max method (PSOA_VMM), and constriction factor method (PSOA_CFM), which we employed to find solutions to support this mathematical model. Finally, a real case and five simulative cases with different scopes were used to compare the execution times, convergence times, and objective function values of the four algorithms used to validate the model proposed in this study. Regarding system execution time, the GA consumed more time than the other three PSOs did. Regarding objective function value, the GA, PSOA_IWM, and PSOA_CFM could obtain a lower convergence value than PSOA_VMM could. Finally, PSOA_IWM demonstrated a faster convergence speed than PSOA_VMM, PSOA_CFM, and the GA did. PMID:24772026
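For readers unfamiliar with the three PSO variants named above, the sketch below shows a single velocity/position update with the inertia-weight, velocity-clamping (V_max) and constriction-factor mechanisms; the parameter values are common textbook defaults, not those tuned in the study.

```python
import numpy as np

def pso_step(x, v, pbest, gbest, scheme="inertia",
             w=0.7, c1=2.05, c2=2.05, vmax=None, rng=None):
    """One particle update; scheme selects the inertia-weight or constriction-factor form."""
    rng = np.random.default_rng() if rng is None else rng
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    cognitive = c1 * r1 * (pbest - x)
    social = c2 * r2 * (gbest - x)
    if scheme == "inertia":                       # PSOA_IWM-style update
        v_new = w * v + cognitive + social
    else:                                         # PSOA_CFM-style update (needs c1 + c2 > 4)
        phi = c1 + c2
        chi = 2.0 / abs(2.0 - phi - np.sqrt(phi ** 2 - 4.0 * phi))
        v_new = chi * (v + cognitive + social)
    if vmax is not None:                          # PSOA_VMM-style velocity clamping
        v_new = np.clip(v_new, -vmax, vmax)
    return x + v_new, v_new
```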
Mathematical Foundation for Plane Covering Using Hexagons
NASA Technical Reports Server (NTRS)
Johnson, Gordon G.
1999-01-01
This work is to indicate the development and mathematical underpinnings of the algorithms previously developed for covering the plane and the addressing of the elements of the covering. The algorithms are of interest in that they provides a simple systematic way of increasing or decreasing resolution, in the sense that if we have the covering in place and there is an image superimposed upon the covering, then we may view the image in a rough form or in a very detailed form with minimal effort. Such ability allows for quick searches of crude forms to determine a class in which to make a detailed search. In addition, the addressing algorithms provide an efficient way to process large data sets that have related subsets. The algorithms produced were based in part upon the work of D. Lucas "A Multiplication in N Space" which suggested a set of three vectors, any two of which would serve as a bases for the plane and also that the hexagon is the natural geometric object to be used in a covering with a suggested bases. The second portion is a refinement of the eyeball vision system, the globular viewer.
Dynamic sensitivity analysis of biological systems
Wu, Wu Hsiung; Wang, Feng Sheng; Chang, Maw Shang
2008-01-01
Background A mathematical model to understand, predict, control, or even design a real biological system is a central theme in systems biology. A dynamic biological system is always modeled as a nonlinear ordinary differential equation (ODE) system. How to simulate the dynamic behavior and dynamic parameter sensitivities of systems described by ODEs efficiently and accurately is a critical job. In many practical applications, e.g., the fed-batch fermentation systems, the system admissible input (corresponding to independent variables of the system) can be time-dependent. The main difficulty for investigating the dynamic log gains of these systems is the infinite dimension due to the time-dependent input. The classical dynamic sensitivity analysis does not take into account this case for the dynamic log gains. Results We present an algorithm with an adaptive step size control that can be used for computing the solution and dynamic sensitivities of an autonomous ODE system simultaneously. Although our algorithm is one of the decoupled direct methods in computing dynamic sensitivities of an ODE system, the step size determined by model equations can be used on the computations of the time profile and dynamic sensitivities with moderate accuracy even when sensitivity equations are stiffer than model equations. To show this algorithm can perform the dynamic sensitivity analysis on very stiff ODE systems with moderate accuracy, it is implemented and applied to two sets of chemical reactions: pyrolysis of ethane and oxidation of formaldehyde. The accuracy of this algorithm is demonstrated by comparing the dynamic parameter sensitivities obtained from this new algorithm and from the direct method with Rosenbrock stiff integrator based on the indirect method. The same dynamic sensitivity analysis was performed on an ethanol fed-batch fermentation system with a time-varying feed rate to evaluate the applicability of the algorithm to realistic models with time-dependent admissible input. Conclusion By combining the accuracy we show with the efficiency of being a decoupled direct method, our algorithm is an excellent method for computing dynamic parameter sensitivities in stiff problems. We extend the scope of classical dynamic sensitivity analysis to the investigation of dynamic log gains of models with time-dependent admissible input. PMID:19091016
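To make the idea of dynamic parameter sensitivities concrete, the sketch below integrates a one-parameter model together with its forward sensitivity equation using SciPy; it couples the two systems rather than decoupling them as the paper's method does, and the model dx/dt = -k*x is a textbook example, not one of the paper's case studies.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, k):
    """Model dx/dt = -k*x together with its sensitivity s = dx/dk."""
    x, s = y
    return [-k * x,            # model equation
            -x - k * s]        # ds/dt = df/dk + (df/dx) * s

k = 0.5
sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0], args=(k,), dense_output=True)
t = np.linspace(0.0, 10.0, 6)
x, s = sol.sol(t)
print("dx/dk (numerical):", np.round(s, 4))
print("dx/dk (analytic): ", np.round(-t * np.exp(-k * t), 4))
```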
Comparative study of classification algorithms for immunosignaturing data
2012-01-01
Background High-throughput technologies such as DNA, RNA, protein, antibody and peptide microarrays are often used to examine differences across drug treatments, diseases, transgenic animals, and others. Typically one trains a classification system by gathering large amounts of probe-level data, selecting informative features, and classifies test samples using a small number of features. As new microarrays are invented, classification systems that worked well for other array types may not be ideal. Expression microarrays, arguably one of the most prevalent array types, have been used for years to help develop classification algorithms. Many biological assumptions are built into classifiers that were designed for these types of data. One of the more problematic is the assumption of independence, both at the probe level and again at the biological level. Probes for RNA transcripts are designed to bind single transcripts. At the biological level, many genes have dependencies across transcriptional pathways where co-regulation of transcriptional units may make many genes appear as being completely dependent. Thus, algorithms that perform well for gene expression data may not be suitable when other technologies with different binding characteristics exist. The immunosignaturing microarray is based on complex mixtures of antibodies binding to arrays of random sequence peptides. It relies on many-to-many binding of antibodies to the random sequence peptides. Each peptide can bind multiple antibodies and each antibody can bind multiple peptides. This technology has been shown to be highly reproducible and appears promising for diagnosing a variety of disease states. However, it is not clear what is the optimal classification algorithm for analyzing this new type of data. Results We characterized several classification algorithms to analyze immunosignaturing data. We selected several datasets that range from easy to difficult to classify, from simple monoclonal binding to complex binding patterns in asthma patients. We then classified the biological samples using 17 different classification algorithms. Using a wide variety of assessment criteria, we found ‘Naïve Bayes’ far more useful than other widely used methods due to its simplicity, robustness, speed and accuracy. Conclusions ‘Naïve Bayes’ algorithm appears to accommodate the complex patterns hidden within multilayered immunosignaturing microarray data due to its fundamental mathematical properties. PMID:22720696
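As a minimal, hypothetical illustration of the winning classifier applied to immunosignature-like intensities (synthetic data standing in for peptide-array measurements), a Gaussian naive Bayes run with scikit-learn might look like this:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.lognormal(mean=2.0, sigma=0.5, size=(100, 500))   # 100 samples x 500 peptides
y = np.repeat([0, 1], 50)                                 # two hypothetical classes
X[y == 1, :20] *= 1.8                                     # weak class-specific signal

scores = cross_val_score(GaussianNB(), X, y, cv=5)
print(f"5-fold accuracy: {scores.mean():.2f}")
```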
VNIR hyperspectral background characterization methods in adverse weather conditions
NASA Astrophysics Data System (ADS)
Romano, João M.; Rosario, Dalton; Roth, Luz
2009-05-01
Hyperspectral technology is currently being used by the military to detect regions of interest where potential targets may be located. Weather variability, however, may affect the ability of an algorithm to discriminate possible targets from background clutter. Nonetheless, different background characterization approaches may facilitate the ability of an algorithm to discriminate potential targets over a variety of weather conditions. In a previous paper, we introduced a new autonomous, target-size-invariant background characterization process, the Autonomous Background Characterization (ABC), also known as the Parallel Random Sampling (PRS) method, which features a random sampling stage, a parallel process to mitigate the inclusion by chance of target samples into clutter background classes during random sampling, and a fusion of results at the end. In this paper, we demonstrate how different background characterization approaches are able to improve the performance of algorithms over a variety of challenging weather conditions. Using the Mahalanobis distance as the standard algorithm for this study, we compare the performance of different characterization methods such as global information, two-stage global information, and our proposed method, ABC, using data collected under a variety of adverse weather conditions. For this study, we used ARDEC's Hyperspectral VNIR Adverse Weather data collection comprised of heavy, light, and transitional fog, light and heavy rain, and low light conditions.
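For context, the Mahalanobis-distance detector used as the standard algorithm above scores each pixel by its distance from a background model (a mean vector and covariance supplied by whichever characterization method is being evaluated). A minimal sketch follows, with random data standing in for a VNIR cube; the regularization constant and percentile threshold are illustrative.

```python
import numpy as np

def mahalanobis_scores(pixels, bg_mean, bg_cov):
    """Distance of each pixel spectrum from the background model."""
    inv_cov = np.linalg.inv(bg_cov + 1e-6 * np.eye(bg_cov.shape[0]))  # regularized inverse
    diff = pixels - bg_mean
    return np.sqrt(np.einsum("ij,jk,ik->i", diff, inv_cov, diff))

rng = np.random.default_rng(0)
cube = rng.normal(size=(1000, 50))                 # 1000 pixels x 50 spectral bands
scores = mahalanobis_scores(cube, cube.mean(axis=0), np.cov(cube, rowvar=False))
anomalies = np.flatnonzero(scores > np.percentile(scores, 99))
print(anomalies[:10])
```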
Modelling the spread of innovation in wild birds.
Shultz, Thomas R; Montrey, Marcel; Aplin, Lucy M
2017-06-01
We apply three plausible algorithms in agent-based computer simulations to recent experiments on social learning in wild birds. Although some of the phenomena are simulated by all three learning algorithms, several manifestations of social conformity bias are simulated by only the approximate majority (AM) algorithm, which has roots in chemistry, molecular biology and theoretical computer science. The simulations generate testable predictions and provide several explanatory insights into the diffusion of innovation through a population. The AM algorithm's success raises the possibility of its usefulness in studying group dynamics more generally, in several different scientific domains. Our differential-equation model matches simulation results and provides mathematical insights into the dynamics of these algorithms. © 2017 The Author(s).
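A minimal agent-based sketch of the approximate-majority (AM) dynamics, under the usual three-state formulation (two committed options plus an undecided state), is given below; the population size and interaction count are arbitrary, and this is not the authors' simulation code.

```python
import numpy as np

def approximate_majority(nA=60, nB=40, steps=20000, seed=0):
    """AM dynamics: opposed agents neutralize each other; decided agents recruit undecided ones."""
    rng = np.random.default_rng(seed)
    states = np.array(["A"] * nA + ["B"] * nB)
    for _ in range(steps):
        i, j = rng.choice(len(states), size=2, replace=False)
        a, b = states[i], states[j]
        if {a, b} == {"A", "B"}:          # conflict: one agent becomes undecided
            states[j] = "U"
        elif a in "AB" and b == "U":      # recruitment of an undecided agent
            states[j] = a
        elif b in "AB" and a == "U":
            states[i] = b
    return dict(zip(*np.unique(states, return_counts=True)))

print(approximate_majority())   # the initial majority ("A") usually takes over
```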
A mathematical model of the passage of an asteroid-comet body through the Earth’s atmosphere
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shaydurov, V., E-mail: shaidurov04@mail.ru; Siberian Federal University, 79 Svobodny pr., 660041 Krasnoyarsk; Shchepanovskaya, G.
In the paper, a mathematical model and a numerical algorithm are proposed for modeling the complex of phenomena which accompany the passage of a friable asteroid-comet body through the Earth’s atmosphere: the material ablation, the dissociation of molecules, and the radiation. The proposed model is constructed on the basis of the Navier-Stokes equations for viscous heat-conducting gas with an additional equation for the motion and propagation of a friable lumpy-dust material in air. The energy equation is modified for the relation between its two kinds: the usual energy of the translation of molecules (which defines the temperature and pressure) and the combined energy of their rotation, oscillation, electronic excitation, dissociation, and radiation. For the mathematical model of the atmosphere, the distribution of density, pressure, and temperature with height is taken as for the standard atmosphere. An asteroid-comet body is taken initially as a round body consisting of a friable lumpy-dust material with corresponding density and significant viscosity which far exceed those of the atmospheric gas. A numerical algorithm is proposed for solving the initial-boundary problem for the extended system of Navier-Stokes equations. The algorithm is the combination of the semi-Lagrangian approximation for Lagrange transport derivatives and the conforming finite element method for other terms. The implementation of these approaches is illustrated by a numerical example.
Holistic approach for automated background EEG assessment in asphyxiated full-term infants
NASA Astrophysics Data System (ADS)
Matic, Vladimir; Cherian, Perumpillichira J.; Koolen, Ninah; Naulaers, Gunnar; Swarte, Renate M.; Govaert, Paul; Van Huffel, Sabine; De Vos, Maarten
2014-12-01
Objective. To develop an automated algorithm to quantify background EEG abnormalities in full-term neonates with hypoxic ischemic encephalopathy. Approach. The algorithm classifies 1 h of continuous neonatal EEG (cEEG) into a mild, moderate or severe background abnormality grade. These classes are well established in the literature and a clinical neurophysiologist labeled 272 1 h cEEG epochs selected from 34 neonates. The algorithm is based on adaptive EEG segmentation and mapping of the segments into the so-called segments’ feature space. Three features are suggested and further processing is obtained using a discretized three-dimensional distribution of the segments’ features represented as a 3-way data tensor. Further classification has been achieved using recently developed tensor decomposition/classification methods that reduce the size of the model and extract a significant and discriminative set of features. Main results. Effective parameterization of cEEG data has been achieved resulting in high classification accuracy (89%) to grade background EEG abnormalities. Significance. For the first time, the algorithm for the background EEG assessment has been validated on an extensive dataset which contained major artifacts and epileptic seizures. The demonstrated high robustness, while processing real-case EEGs, suggests that the algorithm can be used as an assistive tool to monitor the severity of hypoxic insults in newborns.
A Feature-Based Approach to Modeling Protein–DNA Interactions
Segal, Eran
2008-01-01
Transcription factor (TF) binding to its DNA target site is a fundamental regulatory interaction. The most common model used to represent TF binding specificities is a position specific scoring matrix (PSSM), which assumes independence between binding positions. However, in many cases, this simplifying assumption does not hold. Here, we present feature motif models (FMMs), a novel probabilistic method for modeling TF–DNA interactions, based on log-linear models. Our approach uses sequence features to represent TF binding specificities, where each feature may span multiple positions. We develop the mathematical formulation of our model and devise an algorithm for learning its structural features from binding site data. We also developed a discriminative motif finder, which discovers de novo FMMs that are enriched in target sets of sequences compared to background sets. We evaluate our approach on synthetic data and on the widely used TF chromatin immunoprecipitation (ChIP) dataset of Harbison et al. We then apply our algorithm to high-throughput TF ChIP data from mouse and human, reveal sequence features that are present in the binding specificities of mouse and human TFs, and show that FMMs explain TF binding significantly better than PSSMs. Our FMM learning and motif finder software are available at http://genie.weizmann.ac.il/. PMID:18725950
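For reference, the baseline PSSM model that FMMs are compared against scores a candidate site by summing independent per-position log-odds; a tiny hypothetical count matrix makes the point. This sketch is not the FMM learning algorithm itself.

```python
import numpy as np

BASES = "ACGT"
counts = np.array([[12, 1, 1, 1],    # position 1 strongly prefers A
                   [1, 1, 12, 1],    # position 2 prefers G
                   [1, 12, 1, 1]])   # position 3 prefers C
background = np.array([0.25, 0.25, 0.25, 0.25])

probs = (counts + 0.5) / (counts + 0.5).sum(axis=1, keepdims=True)   # pseudocounts
pssm = np.log2(probs / background)                                   # per-position log-odds

def score(site):
    """Sum the log-odds of each base, assuming positional independence."""
    return sum(pssm[i, BASES.index(b)] for i, b in enumerate(site))

print(score("AGC"), score("TTT"))   # a matching site scores much higher
```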
Lifting wavelet method of target detection
NASA Astrophysics Data System (ADS)
Han, Jun; Zhang, Chi; Jiang, Xu; Wang, Fang; Zhang, Jin
2009-11-01
Image target recognition plays a very important role in the areas of scientific exploration, aeronautics and space-to-ground observation, photography and topographic mapping. Image noise, blur and various kinds of interference in complex environments have always affected the stability of recognition algorithms. To address the real-time performance, accuracy and anti-interference problems of target detection, this paper uses a lifting-wavelet image target detection method. First, histogram equalization and target differencing are used to obtain the target region, and adaptive thresholding together with mathematical morphology operations is applied to eliminate background errors. Second, a multi-channel wavelet filter is used for wavelet-transform de-noising and enhancement of the original image, which overcomes the noise sensitivity of general algorithms and reduces the misjudgment rate; the multi-resolution characteristics of the wavelet and the lifting framework can be exploited directly in the space-time region for target detection and target feature extraction. The experimental results show that the designed lifting wavelet overcomes the detection difficulties caused by target motion against complex backgrounds, effectively suppresses noise, and improves the efficiency and speed of detection.
NASA Technical Reports Server (NTRS)
1994-01-01
This report summarizes research conducted at the Institute for Computer Applications in Science and Engineering in applied mathematics, fluid mechanics, and computer science during the period October 1, 1993 through March 31, 1994. The major categories of the current ICASE research program are: (1) applied and numerical mathematics, including numerical analysis and algorithm development; (2) theoretical and computational research in fluid mechanics in selected areas of interest to LaRC, including acoustics and combustion; (3) experimental research in transition and turbulence and aerodynamics involving LaRC facilities and scientists; and (4) computer science.
Mathematical model of design loading vessel
NASA Astrophysics Data System (ADS)
Budnik, V. Yu
2017-10-01
Transport by ferry is very important in our time. The paper identifies the factors that affect the operation of the ferry, the constraints of the designed system, and the indicators of quality. Efficient functioning of the Kerch Strait ferry line can be ensured by improving the decision-making process and the choice of optimum loading options. An algorithm and a mathematical model were developed for this purpose.
NASA Astrophysics Data System (ADS)
Onevsky, P. M.; Onevsky, M. P.; Pogonin, V. A.
2018-03-01
The structure and mathematical models of the main subsystems of the control system of the “Artificial Lungs” are presented. This structure implements the process of imitating human external respiration in the system “Artificial lungs - self-contained breathing apparatus”. The presented algorithm for parametric identification of the model is based on spectral operators, which allows it to be used in real time.
NASA Technical Reports Server (NTRS)
Phillips, K.
1976-01-01
A mathematical model for job scheduling in a specified context is presented. The model uses both linear programming and combinatorial methods. While designed with a view toward optimization of scheduling of facility and plant operations at the Deep Space Communications Complex, the context is sufficiently general to be widely applicable. The general scheduling problem including options for scheduling objectives is discussed and fundamental parameters identified. Mathematical algorithms for partitioning problems germane to scheduling are presented.
ERIC Educational Resources Information Center
Fain, Angela Christine
2013-01-01
Students with emotional and behavioral disorders (E/BD) display severe social and academic deficits that can adversely affect their academic performance in mathematics and result in higher rates of failure throughout their schooling compared to other students with disabilities (U.S. Department of Education, 2005; Webber & Plotts, 2008).…
Fluid surface compensation in digital holographic microscopy for topography measurement
NASA Astrophysics Data System (ADS)
Lin, Li-Chien; Tu, Han-Yen; Lai, Xin-Ji; Wang, Sheng-Shiun; Cheng, Chau-Jern
2012-06-01
A novel technique is presented for surface compensation and topography measurement of a specimen in fluid medium by digital holographic microscopy (DHM). In the measurement, the specimen is preserved in a culture dish full of liquid culture medium and an environmental vibration induces a series of ripples to create a non-uniform background on the reconstructed phase image. A background surface compensation algorithm is proposed to account for this problem. First, we distinguish the cell image from the non-uniform background and a morphological image operation is used to reduce the noise effect on the background surface areas. Then, an adaptive sampling from the background surface is employed, taking dense samples from the high-variation area while leaving the smooth region mostly untouched. A surface fitting algorithm based on the optimal bi-cubic functional approximation is used to establish a whole background surface for the phase image. Once the background surface is found, the background compensated phase can be obtained by subtracting the estimated background from the original phase image. From the experimental results, the proposed algorithm performs effectively in removing the non-uniform background of the phase image and has the ability to obtain the specimen topography inside fluid medium under environmental vibrations.
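A simplified version of the compensation step can be sketched as follows: sample the background pixels, fit a smooth polynomial surface to them by least squares, and subtract it from the phase image. The original work uses adaptive sampling and an optimal bi-cubic approximation; an ordinary bivariate cubic polynomial is used here for brevity, and the mask is assumed to be supplied by a prior segmentation.

```python
import numpy as np

def compensate_background(phase, bg_mask, order=3):
    """Fit a bivariate polynomial to the background pixels and subtract it."""
    yy, xx = np.mgrid[0:phase.shape[0], 0:phase.shape[1]]
    x = xx[bg_mask].astype(float)
    y = yy[bg_mask].astype(float)
    powers = [(i, j) for i in range(order + 1) for j in range(order + 1 - i)]
    A = np.column_stack([x ** i * y ** j for i, j in powers])
    coeffs, *_ = np.linalg.lstsq(A, phase[bg_mask].astype(float), rcond=None)
    xf, yf = xx.ravel().astype(float), yy.ravel().astype(float)
    surface = (np.column_stack([xf ** i * yf ** j for i, j in powers]) @ coeffs)
    surface = surface.reshape(phase.shape)
    return phase - surface, surface   # compensated phase and the estimated background
```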
Interactive visualization of Earth and Space Science computations
NASA Technical Reports Server (NTRS)
Hibbard, William L.; Paul, Brian E.; Santek, David A.; Dyer, Charles R.; Battaiola, Andre L.; Voidrot-Martinez, Marie-Francoise
1994-01-01
Computers have become essential tools for scientists simulating and observing nature. Simulations are formulated as mathematical models but are implemented as computer algorithms to simulate complex events. Observations are also analyzed and understood in terms of mathematical models, but the number of these observations usually dictates that we automate analyses with computer algorithms. In spite of their essential role, computers are also barriers to scientific understanding. Unlike hand calculations, automated computations are invisible and, because of the enormous numbers of individual operations in automated computations, the relation between an algorithm's input and output is often not intuitive. This problem is illustrated by the behavior of meteorologists responsible for forecasting weather. Even in this age of computers, many meteorologists manually plot weather observations on maps, then draw isolines of temperature, pressure, and other fields by hand (special pads of maps are printed for just this purpose). Similarly, radiologists use computers to collect medical data but are notoriously reluctant to apply image-processing algorithms to that data. To these scientists with life-and-death responsibilities, computer algorithms are black boxes that increase rather than reduce risk. The barrier between scientists and their computations can be bridged by techniques that make the internal workings of algorithms visible and that allow scientists to experiment with their computations. Here we describe two interactive systems developed at the University of Wisconsin-Madison Space Science and Engineering Center (SSEC) that provide these capabilities to Earth and space scientists.
A Unified Mathematical Approach to Image Analysis.
1987-08-31
describes four instances of the paradigm in detail. Directions for ongoing and future research are also indicated. Keywords: Image processing; Algorithms; Segmentation; Boundary detection; Tomography; Global image analysis.
Analysis of students’ mathematical reasoning
NASA Astrophysics Data System (ADS)
Sukirwan; Darhim; Herman, T.
2018-01-01
Reasoning is one of the mathematical abilities with very complex implications. This complexity makes reasoning one of the abilities that students do not easily attain. Likewise, studies dealing with reasoning are quite diverse, and they are primarily concerned with the quality of mathematical reasoning. The objective of this study was to determine the quality of mathematical reasoning from Lithner's perspective. Lithner examined how the environment affects mathematical reasoning and, in this regard, distinguished two perspectives, namely imitative reasoning and creative reasoning. Imitative reasoning comprises memorized and algorithmic reasoning. The results show that students generally still have problems in reasoning. Students tend to rely on imitative reasoning, which means that they tend to use a routine procedure when dealing with reasoning tasks. The results also show that the traditional approach still dominates students' daily learning.
ERIC Educational Resources Information Center
Chiel, Hillel J.; McManus, Jeffrey M.; Shaw, Kendrick M.
2010-01-01
We describe the development of a course to teach modeling and mathematical analysis skills to students of biology and to teach biology to students with strong backgrounds in mathematics, physics, or engineering. The two groups of students have different ways of learning material and often have strong negative feelings toward the area of knowledge…
Chang, Yue-Yue; Wu, Hai-Long; Fang, Huan; Wang, Tong; Liu, Zhi; Ouyang, Yang-Zi; Ding, Yu-Jie; Yu, Ru-Qin
2018-06-15
In this study, a smart and green analytical method based on a second-order calibration algorithm coupled with excitation-emission matrix (EEM) fluorescence was developed for the determination of rhodamine dyes illegally added into chilli samples. The proposed method not only has the advantage of high sensitivity over the traditional fluorescence method but also fully displays the "second-order advantage". Pure signals of analytes were successfully extracted from severely interfering EEM profiles by using the alternating trilinear decomposition (ATLD) algorithm, even in the presence of common fluorescence problems such as scattering, peak overlaps and unknown interferences. It is worth noting that the unknown interferents can represent different kinds of backgrounds, not only a constant background. In addition, the use of an interpolation method avoided the loss of information on the analytes of interest. The use of a "mathematical separation" strategy instead of a complicated "chemical or physical separation" can be more effective and environmentally friendly. A series of statistical parameters, including figures of merit and RSDs of intra- (≤1.9%) and inter-day (≤6.6%) precision, were calculated to validate the accuracy of the proposed method. Furthermore, the authoritative HPLC-FLD method was adopted to verify the qualitative and quantitative results of the proposed method. The comparison of the two methods also showed that the ATLD-EEM method has the advantages of accuracy, rapidity, simplicity and greenness, and it is expected to be developed as an attractive alternative method for simultaneous and interference-free determination of rhodamine dyes illegally added into complex matrices. Copyright © 2018. Published by Elsevier B.V.
NASA Astrophysics Data System (ADS)
Varsavsky, Cristina
2010-12-01
An increasing number of Australian students elect not to undertake studies in mathematical methods in the final years of their secondary schooling. Some higher education providers now offer pathways for these students to pursue mathematics studies up to a major specialization within the bachelor of science programme. This article analyses the performance in and engagement with mathematics of the students who elect to take up this option. Findings indicate that these are not very different when compared to students who enter university with an intermediate mathematics preparation. The biggest contrast in performance and engagement is with those students who have studied mathematics in senior secondary school to an advanced level.
Yao, Guangle; Lei, Tao; Zhong, Jiandan; Jiang, Ping; Jia, Wenwu
2017-01-01
Background subtraction (BS) is one of the most commonly encountered tasks in video analysis and tracking systems. It distinguishes the foreground (moving objects) from the video sequences captured by static imaging sensors. Background subtraction in remote scene infrared (IR) video is important in many fields. This paper provides a Remote Scene IR Dataset captured by our designed medium-wave infrared (MWIR) sensor. Each video sequence in this dataset is identified with specific BS challenges, and the pixel-wise ground truth of the foreground (FG) for each frame is also provided. A series of experiments were conducted to evaluate BS algorithms on this proposed dataset. The overall performance of the BS algorithms and their processor/memory requirements were compared. Proper evaluation metrics and criteria were employed to assess the capability of each BS algorithm to handle the different kinds of BS challenges represented in this dataset. The results and conclusions in this paper provide useful references for developing new BS algorithms for remote scene IR video sequences, and some of them are not limited to remote scene or IR video but are generic for background subtraction. The Remote Scene IR dataset and the foreground masks detected by each evaluated BS algorithm are available online: https://github.com/JerryYaoGl/BSEvaluationRemoteSceneIR. PMID:28837112
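For orientation, a minimal background-subtraction baseline of the kind such benchmarks compare against is sketched below. It is not one of the evaluated algorithms: it keeps an exponentially weighted running mean and variance per pixel and flags pixels that deviate by more than k standard deviations; alpha and k are illustrative values.

```python
import numpy as np

def running_average_bs(frames, alpha=0.02, k=2.5):
    """Minimal background-subtraction baseline (not one of the evaluated algorithms).
    `frames` is an iterable of 2-D float arrays (e.g. IR intensity images); the
    return value is a list of boolean foreground masks."""
    frames = iter(frames)
    first = next(frames).astype(float)
    mean, var = first.copy(), np.full_like(first, 25.0)   # arbitrary initial variance
    masks = []
    for f in frames:
        f = f.astype(float)
        d = f - mean
        fg = d * d > (k * k) * var                         # foreground mask for this frame
        masks.append(fg)
        # Update the model only where the scene is considered background,
        # so moving objects do not pollute the estimate.
        upd = ~fg
        mean[upd] += alpha * d[upd]
        var[upd] = (1 - alpha) * var[upd] + alpha * d[upd] ** 2
    return masks
```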
QAPgrid: A Two Level QAP-Based Approach for Large-Scale Data Analysis and Visualization
Inostroza-Ponta, Mario; Berretta, Regina; Moscato, Pablo
2011-01-01
Background The visualization of large volumes of data is a computationally challenging task that often promises rewarding new insights. There is great potential in the application of new algorithms and models from combinatorial optimisation. Datasets often contain “hidden regularities” and a combined identification and visualization method should reveal these structures and present them in a way that helps analysis. While several methodologies exist, including those that use non-linear optimization algorithms, severe limitations exist even when working with only a few hundred objects. Methodology/Principal Findings We present a new data visualization approach (QAPgrid) that reveals patterns of similarities and differences in large datasets of objects for which a similarity measure can be computed. Objects are assigned to positions on an underlying square grid in a two-dimensional space. We use the Quadratic Assignment Problem (QAP) as a mathematical model to provide an objective function for the assignment of objects to positions on the grid. We employ a Memetic Algorithm (a powerful metaheuristic) to tackle the large instances of this NP-hard combinatorial optimization problem, and we show its performance on the visualization of real data sets. Conclusions/Significance Overall, the results show that the QAPgrid algorithm is able to produce a layout that represents the relationships between objects in the data set. Furthermore, it also represents the relationships between clusters that are fed into the algorithm. We apply QAPgrid to the 84 Indo-European languages instance, producing a near-optimal layout. Next, we produce a layout of 470 world universities with an observed high degree of correlation with the scores used by the Academic Ranking of World Universities compiled by Shanghai Jiao Tong University, without the need for an ad hoc weighting of attributes. Finally, our Gene Ontology-based study on Saccharomyces cerevisiae fully demonstrates the scalability and precision of our method as a novel alternative tool for functional genomics. PMID:21267077
Mathematical modeling of tomographic scanning of cylindrically shaped test objects
NASA Astrophysics Data System (ADS)
Kapranov, B. I.; Vavilova, G. V.; Volchkova, A. V.; Kuznetsova, I. S.
2018-05-01
The paper formulates mathematical relationships that describe the length of the radiation absorption band in the test object for a first-generation tomographic scanning scheme. A cylindrically shaped test object containing an arbitrary number of standard circular irregularities is used to perform the mathematical modeling. The obtained mathematical relationships are corrected with respect to the chemical composition and density of the test object material. Equations are derived to calculate the resulting attenuation of radiation from a cobalt-60 isotope passing through the test object. An algorithm to calculate the radiation flux intensity is provided. The presented graphs describe the dependence of the γ-quantum flux intensity on the radiation source position and the scanning angle of the test object.
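The kind of relationship involved can be illustrated with a short sketch: for a first-generation (parallel-beam) scan, the absorption length through a circle is a chord length, and the transmitted intensity follows the Beer-Lambert law. The attenuation coefficients and geometry below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def chord_length(radius, offset):
    """Length of a parallel-beam ray through a circle of given radius, at
    perpendicular distance `offset` from its centre (first-generation geometry)."""
    return 2.0 * np.sqrt(max(radius**2 - offset**2, 0.0))

def transmitted_intensity(i0, mu_body, r_body, inclusions, offset):
    """Beer-Lambert attenuation of a ray through a cylindrical object containing
    circular irregularities. `inclusions` is a list of (centre_offset, radius, mu),
    where centre_offset is the inclusion centre's perpendicular offset from the
    body axis. The mu values are illustrative, not the paper's."""
    path = chord_length(r_body, offset)
    exponent = mu_body * path
    for c, r, mu in inclusions:
        l = chord_length(r, offset - c)          # ray offset relative to the inclusion
        exponent += (mu - mu_body) * l           # inclusion replaces body material along l
    return i0 * np.exp(-exponent)

# Example: 50 mm cylinder (mu = 0.02 /mm, roughly Co-60 energies) with one 5 mm air pore.
print(transmitted_intensity(1.0, 0.02, 50.0, [(10.0, 5.0, 0.0)], offset=10.0))
```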
Spencer, Sarah; Clegg, Judy; Stackhouse, Joy; Rush, Robert
2017-03-01
Well-documented associations exist between socio-economic background and language ability in early childhood, and between educational attainment and language ability in children with clinically referred language impairment. However, very little research has looked at the associations between language ability, educational attainment and socio-economic background during adolescence, particularly in populations without language impairment. To investigate: (1) whether adolescents with higher educational outcomes overall had higher language abilities; and (2) associations between adolescent language ability, socio-economic background and educational outcomes, specifically in relation to Mathematics, English Language and English Literature GCSE grade. A total of 151 participants completed five standardized language assessments measuring vocabulary, comprehension of sentences and spoken paragraphs, and narrative skills, and one nonverbal assessment, when between 13 and 14 years old. These data were compared with the participants' educational achievement obtained upon leaving secondary education (16 years old). Univariate logistic regressions were employed to identify those language assessments and demographic factors that were associated with achieving a targeted A*-C grade in English Language, English Literature and Mathematics General Certificate of Secondary Education (GCSE) at 16 years. Further logistic regressions were then conducted to examine further the contribution of socio-economic background and spoken language skills in the multivariate models. Vocabulary, comprehension of sentences and spoken paragraphs, and mean length of utterance in a narrative task, along with socio-economic background, contributed to whether participants achieved an A*-C grade in GCSE Mathematics, English Language and English Literature. Nonverbal ability contributed to English Language and Mathematics. The results of multivariate logistic regressions then found that vocabulary skills were particularly relevant to all three GCSE outcomes. Socio-economic background only remained important for English Language once language assessment scores and demographic information were considered. Language ability, and in particular vocabulary, plays an important role in educational achievement. The results confirm a need for ongoing support for spoken language ability throughout secondary education and a potential role for speech and language therapy provision in the continuing drive to reduce the gap in educational attainment between groups from differing socio-economic backgrounds. © 2016 Royal College of Speech and Language Therapists.
Bayesian parameter estimation for nonlinear modelling of biological pathways.
Ghasemi, Omid; Lindsey, Merry L; Yang, Tianyi; Nguyen, Nguyen; Huang, Yufei; Jin, Yu-Fang
2011-01-01
The availability of temporal measurements on biological experiments has significantly promoted research areas in systems biology. To gain insight into the interaction and regulation of biological systems, mathematical frameworks such as ordinary differential equations have been widely applied to model biological pathways and interpret the temporal data. Hill equations are the preferred formats to represent the reaction rate in differential equation frameworks, due to their simple structures and their capabilities for easy fitting to saturated experimental measurements. However, Hill equations are highly nonlinearly parameterized functions, and parameters in these functions cannot be measured easily. Additionally, because of its high nonlinearity, adaptive parameter estimation algorithms developed for linear parameterized differential equations cannot be applied. Therefore, parameter estimation in nonlinearly parameterized differential equation models for biological pathways is both challenging and rewarding. In this study, we propose a Bayesian parameter estimation algorithm to estimate parameters in nonlinear mathematical models for biological pathways using time series data. We used the Runge-Kutta method to transform differential equations to difference equations assuming a known structure of the differential equations. This transformation allowed us to generate predictions dependent on previous states and to apply a Bayesian approach, namely, the Markov chain Monte Carlo (MCMC) method. We applied this approach to the biological pathways involved in the left ventricle (LV) response to myocardial infarction (MI) and verified our algorithm by estimating two parameters in a Hill equation embedded in the nonlinear model. We further evaluated our estimation performance with different parameter settings and signal to noise ratios. Our results demonstrated the effectiveness of the algorithm for both linearly and nonlinearly parameterized dynamic systems. Our proposed Bayesian algorithm successfully estimated parameters in nonlinear mathematical models for biological pathways. This method can be further extended to high order systems and thus provides a useful tool to analyze biological dynamics and extract information using temporal data.
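A toy version of this Bayesian estimation idea is sketched below: a random-walk Metropolis-Hastings sampler estimates two Hill-equation parameters from synthetic time-series data generated by a discretised ODE. It is not the paper's LV/MI pathway model or MCMC setup; the ODE, the true parameter values, the noise level and the proposal width are all illustrative assumptions, and forward Euler stands in for the Runge-Kutta discretisation for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(K, n, t, dt=0.01, S=2.0, Vmax=1.0, deg=0.3):
    """Forward-Euler discretisation of dx/dt = Vmax*S^n/(K^n + S^n) - deg*x,
    a toy pathway with a Hill-type production term (not the paper's model)."""
    x, out, tt = 0.0, [], 0.0
    for target in t:
        while tt < target:
            x += dt * (Vmax * S**n / (K**n + S**n) - deg * x)
            tt += dt
        out.append(x)
    return np.array(out)

# Synthetic noisy observations with assumed "true" values K = 1.5, n = 2.
t_obs = np.linspace(0.5, 10, 20)
y_obs = simulate(1.5, 2.0, t_obs) + rng.normal(0, 0.02, t_obs.size)

def log_post(theta, sigma=0.02):
    K, n = theta
    if K <= 0 or n <= 0 or n > 6:                # flat prior on a bounded box
        return -np.inf
    resid = y_obs - simulate(K, n, t_obs)
    return -0.5 * np.sum(resid**2) / sigma**2

# Random-walk Metropolis-Hastings over (K, n).
theta = np.array([1.0, 1.0])
lp = log_post(theta)
samples = []
for _ in range(5000):
    prop = theta + rng.normal(0, 0.05, 2)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:      # accept/reject step
        theta, lp = prop, lp_prop
    samples.append(theta.copy())
samples = np.array(samples[1000:])               # drop burn-in
print("posterior mean K, n:", samples.mean(axis=0))
```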
Method for exploiting bias in factor analysis using constrained alternating least squares algorithms
Keenan, Michael R.
2008-12-30
Bias plays an important role in factor analysis and is often implicitly made use of, for example, to constrain solutions to factors that conform to physical reality. However, when components are collinear, a large range of solutions may exist that satisfy the basic constraints and fit the data equally well. In such cases, the introduction of mathematical bias through the application of constraints may select solutions that are less than optimal. The biased alternating least squares algorithm of the present invention can offset mathematical bias introduced by constraints in the standard alternating least squares analysis to achieve factor solutions that are most consistent with physical reality. In addition, these methods can be used to explicitly exploit bias to provide alternative views and provide additional insights into spectral data sets.
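The constrained alternating least squares framework this patent builds on can be sketched in a few lines. The code below is plain non-negativity-constrained ALS (solve each unconstrained least-squares subproblem and project onto the constraint set), not the patented biased-ALS method; the rank, iteration count and synthetic data are assumptions.

```python
import numpy as np

def constrained_als(D, k, iters=200, seed=0):
    """Minimal alternating least squares with non-negativity constraints:
    approximate D (m x n) as C @ S with C >= 0 and S >= 0.
    Illustrates plain constrained ALS, not the patented biased-ALS method."""
    rng = np.random.default_rng(seed)
    m, n = D.shape
    C = rng.random((m, k))
    S = rng.random((k, n))
    for _ in range(iters):
        # Update S for fixed C (unconstrained least squares, then project onto >= 0).
        S = np.clip(np.linalg.lstsq(C, D, rcond=None)[0], 0.0, None)
        # Update C for fixed S, same projection.
        C = np.clip(np.linalg.lstsq(S.T, D.T, rcond=None)[0].T, 0.0, None)
    return C, S

# Example: recover two non-negative components from a synthetic mixed data set.
true_C = np.abs(np.random.default_rng(1).normal(size=(50, 2)))
true_S = np.abs(np.random.default_rng(2).normal(size=(2, 80)))
C, S = constrained_als(true_C @ true_S, k=2)
print("reconstruction error:", np.linalg.norm(true_C @ true_S - C @ S))
```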
Sabry, A H; W Hasan, W Z; Ab Kadir, M Z A; Radzi, M A M; Shafie, S
2018-01-01
The power system always has several variations in its profile due to random load changes or environmental effects such as device switching effects when generating further transients. Thus, an accurate mathematical model is important because most system parameters vary with time. Curve modeling of power generation is a significant tool for evaluating system performance, monitoring and forecasting. Several numerical techniques compete to fit the curves of empirical data such as wind, solar, and demand power rates. This paper proposes a new modified methodology, presented as a parametric technique, to determine the system's modeling equations based on the Bode plot equations and the vector fitting (VF) algorithm by fitting the experimental data points. The modification is derived from the familiar VF algorithm as a robust numerical method. This development increases the application range of the VF algorithm for modeling not only in the frequency domain but also for all power curves. Four case studies are addressed and compared with several common methods. From the minimal RMSE, the results show clear improvements in data fitting over other methods. The most powerful feature of this method is its ability to model irregular or randomly shaped data and to be applied to any algorithm that estimates models using frequency-domain data to provide a state-space or transfer-function representation of the model.
W. Hasan, W. Z.
2018-01-01
The power system always has several variations in its profile due to random load changes or environmental effects such as device switching effects when generating further transients. Thus, an accurate mathematical model is important because most system parameters vary with time. Curve modeling of power generation is a significant tool for evaluating system performance, monitoring and forecasting. Several numerical techniques compete to fit the curves of empirical data such as wind, solar, and demand power rates. This paper proposes a new modified methodology, presented as a parametric technique, to determine the system’s modeling equations based on the Bode plot equations and the vector fitting (VF) algorithm by fitting the experimental data points. The modification is derived from the familiar VF algorithm as a robust numerical method. This development increases the application range of the VF algorithm for modeling not only in the frequency domain but also for all power curves. Four case studies are addressed and compared with several common methods. From the minimal RMSE, the results show clear improvements in data fitting over other methods. The most powerful feature of this method is its ability to model irregular or randomly shaped data and to be applied to any algorithm that estimates models using frequency-domain data to provide a state-space or transfer-function representation of the model. PMID:29351554
NASA Astrophysics Data System (ADS)
Zingoni, Andrea; Diani, Marco; Corsini, Giovanni
2016-10-01
We developed an algorithm for automatically detecting small and poorly contrasted (dim) moving objects in real-time, within video sequences acquired through a steady infrared camera. The algorithm is suitable for different situations since it is independent of the background characteristics and of changes in illumination. Unlike other solutions, small objects of any size (up to single-pixel), either hotter or colder than the background, can be successfully detected. The algorithm is based on accurately estimating the background at the pixel level and then rejecting it. A novel approach permits background estimation to be robust to changes in the scene illumination and to noise, and not to be biased by the transit of moving objects. Care was taken in avoiding computationally costly procedures, in order to ensure the real-time performance even using low-cost hardware. The algorithm was tested on a dataset of 12 video sequences acquired in different conditions, providing promising results in terms of detection rate and false alarm rate, independently of background and objects characteristics. In addition, the detection map was produced frame by frame in real-time, using cheap commercial hardware. The algorithm is particularly suitable for applications in the fields of video-surveillance and computer vision. Its reliability and speed permit it to be used also in critical situations, like in search and rescue, defence and disaster monitoring.
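The pixel-level "estimate the background, then reject it" idea can be illustrated with a short sketch. The estimator below is not the authors': it uses a sliding temporal median (robust to transiting objects) with a MAD-based noise scale, and a two-sided test so that objects either hotter or colder than the background are flagged; the window length and threshold k are assumptions.

```python
import numpy as np

def detect_dim_objects(frames, window=25, k=4.0):
    """Sketch of pixel-level background estimation and rejection (not the authors'
    estimator). `frames` has shape (T, H, W); returns boolean detection masks."""
    frames = np.asarray(frames, dtype=float)
    masks = np.zeros(frames.shape, dtype=bool)
    for t in range(window, frames.shape[0]):
        hist = frames[t - window:t]
        bg = np.median(hist, axis=0)                       # robust background estimate
        mad = np.median(np.abs(hist - bg), axis=0) + 1e-6  # robust noise scale
        sigma = 1.4826 * mad
        masks[t] = np.abs(frames[t] - bg) > k * sigma      # hotter OR colder than background
    return masks
```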
Mathematics Readiness of First-Year University Students
ERIC Educational Resources Information Center
Atuahene, Francis; Russell, Tammy A.
2016-01-01
The majority of high school students, particularly underrepresented minorities (URMs) from low socioeconomic backgrounds are graduating from high school less prepared academically for advanced-level college mathematics. Using 2009 and 2010 course enrollment data, several statistical analyses (multiple linear regression, Cochran Mantel Haenszel…
NASA Astrophysics Data System (ADS)
Zhang, Ling; Cai, Yunlong; Li, Chunguang; de Lamare, Rodrigo C.
2017-12-01
In this work, we present low-complexity variable forgetting factor (VFF) techniques for diffusion recursive least squares (DRLS) algorithms. In particular, we propose low-complexity VFF-DRLS algorithms for distributed parameter and spectrum estimation in sensor networks. The proposed algorithms can adjust the forgetting factor automatically according to the a posteriori error signal. We develop detailed analyses in terms of mean and mean-square performance for the proposed algorithms and derive mathematical expressions for the mean square deviation (MSD) and the excess mean square error (EMSE). The simulation results show that the proposed low-complexity VFF-DRLS algorithms achieve superior performance to the existing DRLS algorithm with a fixed forgetting factor when applied to scenarios of distributed parameter and spectrum estimation. Besides, the simulation results also demonstrate a good match with our proposed analytical expressions.
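A single-node sketch of RLS with a variable forgetting factor is given below, to show where the a posteriori error enters. The adaptation rule (shrink lambda when the a posteriori error grows) is a generic heuristic, not the expression derived in the paper, and the diffusion (neighbour-combination) step of DRLS is omitted; all parameter values are illustrative.

```python
import numpy as np

def vff_rls(X, d, lam_min=0.90, lam_max=0.999, delta=1e2, gamma=0.1):
    """Recursive least squares with a simple variable forgetting factor.
    Not the paper's VFF rule, and without the diffusion combination step."""
    n_samples, n_taps = X.shape
    w = np.zeros(n_taps)
    P = delta * np.eye(n_taps)
    lam = lam_max
    for i in range(n_samples):
        x = X[i]
        e_prior = d[i] - w @ x                       # a priori error
        g = P @ x / (lam + x @ P @ x)                # gain vector
        w = w + g * e_prior
        P = (P - np.outer(g, x @ P)) / lam
        e_post = d[i] - w @ x                        # a posteriori error
        # Larger a posteriori error -> forget faster (smaller lambda).
        lam = np.clip(lam_max - gamma * e_post**2, lam_min, lam_max)
    return w

# Example: identify a 4-tap FIR system from noisy observations.
rng = np.random.default_rng(0)
h_true = np.array([0.5, -0.3, 0.2, 0.1])
X = rng.normal(size=(2000, 4))
d = X @ h_true + 0.01 * rng.normal(size=2000)
print(vff_rls(X, d))
```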
Jiang, Junfeng; Wang, Shaohua; Liu, Tiegen; Liu, Kun; Yin, Jinde; Meng, Xiange; Zhang, Yimo; Wang, Shuang; Qin, Zunqi; Wu, Fan; Li, Dingjie
2012-07-30
A demodulation algorithm based on absolute phase recovery of a selected monochromatic frequency is proposed for an optical fiber Fabry-Perot pressure sensing system. The algorithm uses the Fourier transform to obtain the relative phase, and the intercept of the unwrapped phase-frequency linear fit curve to identify its interference order; these are then used to recover the absolute phase. A simplified mathematical model of the polarized low-coherence interference fringes was established to illustrate the principle of the proposed algorithm. Phase unwrapping and the selection of the monochromatic frequency are discussed in detail. A pressure measurement experiment was carried out to verify the effectiveness of the proposed algorithm. The results showed that the demodulation precision of our algorithm reaches 0.15 kPa, a 13-fold improvement over the phase-slope-based algorithm.
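The unwrap-fit-and-identify-the-order step can be illustrated on synthetic fringes. The sketch below assumes an idealised two-beam phase theta(k) = 2*L*k + phi0 over wavenumber, with L and phi0 as made-up values and phi0 treated as a known reflection phase; it stands in for the relative phase that the paper extracts by Fourier transform and is not the sensor's actual demodulation chain.

```python
import numpy as np

# Synthetic Fabry-Perot fringe phase over wavenumber k (illustrative values only).
L_true, phi0 = 75.0e-6, 0.6               # cavity length [m], known phase offset [rad]
k = np.linspace(5.8e6, 6.2e6, 400)        # wavenumber axis [rad/m]
wrapped = np.angle(np.exp(1j * (2 * L_true * k + phi0)))   # measured (wrapped) phase

# 1. Unwrap the relative phase across frequency and fit a straight line to it.
rel = np.unwrap(wrapped)                  # absolute phase minus an unknown multiple of 2*pi
slope, intercept = np.polyfit(k, rel, 1)
print("L from slope [um]:", 1e6 * slope / 2)

# 2. The physical intercept must equal the known offset phi0, so the fitted intercept
#    identifies the missing interference order N.
N = np.round((phi0 - intercept) / (2 * np.pi))

# 3. Recover the absolute phase at a selected monochromatic wavenumber and the length.
i0 = 200
absolute = rel[i0] + 2 * np.pi * N
print("recovered L [um]:", 1e6 * (absolute - phi0) / (2 * k[i0]))
```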
Advanced Dynamically Adaptive Algorithms for Stochastic Simulations on Extreme Scales
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiu, Dongbin
2017-03-03
The focus of the project is the development of mathematical methods and high-performance computational tools for stochastic simulations, with a particular emphasis on computations on extreme scales. The core of the project revolves around the design of highly efficient and scalable numerical algorithms that can adaptively and accurately, in high dimensional spaces, resolve stochastic problems with limited smoothness, even containing discontinuities.
2016-01-01
Currently, anesthesiologists use clinical parameters to directly measure the depth of anesthesia (DoA). This clinical standard of monitoring is often combined with brain monitoring for better assessment of the hypnotic component of anesthesia. Brain monitoring devices provide indices allowing for an immediate assessment of the impact of anesthetics on consciousness. However, questions remain regarding the mechanisms underpinning these indices of hypnosis. By briefly describing current knowledge of the brain's electrical activity during general anesthesia, as well as the operating principles of DoA monitors, the aim of this work is to simplify our understanding of the mathematical processes that allow for translation of complex patterns of brain electrical activity into dimensionless indices. This is a challenging task because mathematical concepts appear remote from clinical practice. Moreover, most DoA algorithms are proprietary algorithms and the difficulty of exploring the inner workings of mathematical models represents an obstacle to accurate simplification. The limitations of current DoA monitors — and the possibility for improvement — as well as perspectives on brain monitoring derived from recent research on corticocortical connectivity and communication are also discussed. PMID:27066200
An algebra-based method for inferring gene regulatory networks.
Vera-Licona, Paola; Jarrah, Abdul; Garcia-Puente, Luis David; McGee, John; Laubenbacher, Reinhard
2014-03-26
The inference of gene regulatory networks (GRNs) from experimental observations is at the heart of systems biology. This includes the inference of both the network topology and its dynamics. While there are many algorithms available to infer the network topology from experimental data, less emphasis has been placed on methods that infer network dynamics. Furthermore, since the network inference problem is typically underdetermined, it is essential to have the option of incorporating prior knowledge about the network into the inference process, along with an effective description of the search space of dynamic models. Finally, it is also important to have an understanding of how a given inference method is affected by experimental and other noise in the data used. This paper contains a novel inference algorithm using the algebraic framework of Boolean polynomial dynamical systems (BPDS), meeting all these requirements. The algorithm takes as input time series data, including those from network perturbations, such as knock-out mutant strains and RNAi experiments. It allows for the incorporation of prior biological knowledge while being robust to significant levels of noise in the data used for inference. It uses an evolutionary algorithm for local optimization with an encoding of the mathematical models as BPDS. The BPDS framework allows an effective representation of the search space for algebraic dynamic models that improves computational performance. The algorithm is validated with both simulated and experimental microarray expression profile data. Robustness to noise is tested using a published mathematical model of the segment polarity gene network in Drosophila melanogaster. Benchmarking of the algorithm is done by comparison with a spectrum of state-of-the-art network inference methods on data from the synthetic IRMA network to demonstrate that our method has good precision and recall for the network reconstruction task, while also predicting several of the dynamic patterns present in the network. Boolean polynomial dynamical systems provide a powerful modeling framework for the reverse engineering of gene regulatory networks that enables a rich mathematical structure on the model search space. A C++ implementation of the method, distributed under the LGPL license, is available, together with the source code, at http://www.paola-vera-licona.net/Software/EARevEng/REACT.html.
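To make the model class concrete, the sketch below shows a tiny Boolean polynomial dynamical system over F2, where each coordinate update is a polynomial with arithmetic mod 2 (AND is multiplication, XOR is addition, OR(a,b) = a + b + ab). The three-gene network is invented for illustration; this is the model class only, not the paper's inference algorithm.

```python
from itertools import product

# Toy Boolean polynomial dynamical system over F2 with three genes.
def step(x):
    x1, x2, x3 = x
    f1 = (x2 * x3) % 2              # gene 1 needs both 2 and 3 (AND)
    f2 = x1 % 2                     # gene 2 is activated by 1
    f3 = (x1 + x2 + x1 * x2) % 2    # OR(x1, x2) written as a polynomial
    return (f1, f2, f3)

# Enumerate the full state space (2^3 states) and report the fixed points.
for state in product((0, 1), repeat=3):
    if step(state) == state:
        print("fixed point:", state)
```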
A biomimetic algorithm for the improved detection of microarray features
NASA Astrophysics Data System (ADS)
Nicolau, Dan V., Jr.; Nicolau, Dan V.; Maini, Philip K.
2007-02-01
One of the major difficulties of microarray technology relates to the processing of large and, importantly, error-loaded images of the dots on the chip surface. Whatever the source of these errors, those introduced in the first stage of data acquisition, segmentation, are passed down to the subsequent processes, with deleterious results. As it has recently been demonstrated that biological systems have evolved algorithms that are mathematically efficient, this contribution attempts to test an algorithm that mimics a bacterial "patented" strategy for searching available space and nutrients in order to find, zero in on, and eventually delimit the features present on the microarray surface.
Fu, Chi-Yung; Petrich, Loren I.
1997-01-01
An image represented in a first image array of pixels is first decimated in two dimensions before being compressed by a predefined compression algorithm such as JPEG. Another possible predefined compression algorithm can involve a wavelet technique. The compressed, reduced image is then transmitted over the limited bandwidth transmission medium, and the transmitted image is decompressed using an algorithm which is an inverse of the predefined compression algorithm (such as reverse JPEG). The decompressed, reduced image is then interpolated back to its original array size. Edges (contours) in the image are then sharpened to enhance the perceptual quality of the reconstructed image. Specific sharpening techniques are described.
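A minimal sketch of the described pipeline is given below using Pillow, with JPEG as the predefined compression algorithm. The decimation factor, JPEG quality and unsharp-mask parameters are illustrative choices, not values from the patent.

```python
from io import BytesIO
from PIL import Image, ImageFilter

def compress_and_restore(img, factor=2, quality=60):
    """Decimate, compress with JPEG, decompress, interpolate back, sharpen edges.
    A sketch of the described pipeline with assumed parameter values."""
    w, h = img.size

    # 1. Decimate in two dimensions before compression.
    small = img.resize((w // factor, h // factor), Image.LANCZOS)

    # 2-3. Compress with JPEG, "transmit", then decompress (inverse of JPEG).
    buf = BytesIO()
    small.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    received = Image.open(buf)

    # 4. Interpolate back to the original array size.
    restored = received.resize((w, h), Image.BICUBIC)

    # 5. Sharpen edges/contours to improve perceptual quality.
    return restored.filter(ImageFilter.UnsharpMask(radius=2, percent=120, threshold=3))

# Usage: compress_and_restore(Image.open("frame.png").convert("RGB")).save("out.png")
```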
On the efficient and reliable numerical solution of rate-and-state friction problems
NASA Astrophysics Data System (ADS)
Pipping, Elias; Kornhuber, Ralf; Rosenau, Matthias; Oncken, Onno
2016-03-01
We present a mathematically consistent numerical algorithm for the simulation of earthquake rupture with rate-and-state friction. Its main features are adaptive time stepping, a novel algebraic solution algorithm involving nonlinear multigrid and a fixed point iteration for the rate-and-state decoupling. The algorithm is applied to a laboratory scale subduction zone which allows us to compare our simulations with experimental results. Using physical parameters from the experiment, we find a good fit of recurrence time of slip events as well as their rupture width and peak slip. Computations in 3-D confirm efficiency and robustness of our algorithm.
NASA Astrophysics Data System (ADS)
Xu, Quan-Li; Cao, Yu-Wei; Yang, Kun
2018-03-01
Ant Colony Optimization (ACO) is the most widely used artificial intelligence algorithm at present. This study introduced the principle and mathematical model of the ACO algorithm for solving the Vehicle Routing Problem (VRP), designed a vehicle routing optimization model based on ACO, and developed a vehicle routing optimization simulation system in the C++ programming language; sensitivity analyses, estimations and improvements of the three key parameters of ACO were then carried out. The results indicated that the ACO algorithm designed in this paper can efficiently solve the rational planning and optimization of VRP, that different values of the key parameters have a significant influence on the performance and optimization effects of the algorithm, and that the improved algorithm is less prone to premature local convergence and has good robustness.
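The core ACO mechanics (the probabilistic transition rule and the evaporation/deposit pheromone update, governed by the usual alpha, beta and rho parameters) are sketched below for a single-vehicle tour, i.e. a TSP-style simplification of VRP without capacity constraints. It is written in Python rather than the paper's C++ system, and all parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def aco_route(dist, n_ants=20, n_iter=200, alpha=1.0, beta=3.0, rho=0.5, Q=1.0):
    """Compact ACO for a single closed tour (capacity constraints omitted)."""
    n = dist.shape[0]
    tau = np.ones((n, n))                       # pheromone trails
    eta = 1.0 / (dist + np.eye(n))              # heuristic visibility (avoid /0 on diagonal)
    best_len, best_tour = np.inf, None
    for _ in range(n_iter):
        tours = []
        for _ in range(n_ants):
            tour, unvisited = [0], set(range(1, n))
            while unvisited:
                i = tour[-1]
                cand = np.array(sorted(unvisited))
                p = (tau[i, cand] ** alpha) * (eta[i, cand] ** beta)
                nxt = int(rng.choice(cand, p=p / p.sum()))   # probabilistic transition rule
                tour.append(nxt)
                unvisited.remove(nxt)
            length = sum(dist[tour[k], tour[(k + 1) % n]] for k in range(n))
            tours.append((length, tour))
            if length < best_len:
                best_len, best_tour = length, tour
        # Evaporation plus deposit proportional to tour quality.
        tau *= (1.0 - rho)
        for length, tour in tours:
            for k in range(n):
                a, b = tour[k], tour[(k + 1) % n]
                tau[a, b] += Q / length
                tau[b, a] += Q / length
    return best_len, best_tour

# Example on 12 random depot/customer locations in the unit square.
pts = rng.random((12, 2))
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
print(aco_route(D))
```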
Locality-constrained anomaly detection for hyperspectral imagery
NASA Astrophysics Data System (ADS)
Liu, Jiabin; Li, Wei; Du, Qian; Liu, Kui
2015-12-01
Detecting a target with a low occurrence probability against an unknown background in a hyperspectral image, namely anomaly detection, is of practical significance. The Reed-Xiaoli (RX) algorithm is considered a classic anomaly detector; it calculates the Mahalanobis distance between the local background and the pixel under test. Local RX, an adaptive RX detector, employs a dual-window strategy and treats pixels within the frame between the inner and outer windows as the local background. However, the detector is sensitive if this local region contains anomalous pixels (i.e., outliers). In this paper, a locality-constrained anomaly detector is proposed to remove outliers from the local background region before employing the RX algorithm. Specifically, a local linear representation is designed to exploit the internal relationship between linearly correlated pixels in the local background region and the pixel under test and its neighbors. Experimental results demonstrate that the proposed detector improves on the original local RX algorithm.
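For reference, the baseline dual-window local RX detector is sketched below; the locality-constrained outlier-removal step that is the paper's contribution is not included, and the window sizes and regularisation are assumptions.

```python
import numpy as np

def local_rx(cube, inner=3, outer=9, eps=1e-6):
    """Baseline dual-window local RX detector (without the paper's outlier removal).
    `cube` has shape (rows, cols, bands); pixels in the frame between the inner
    and outer windows form the local background."""
    rows, cols, bands = cube.shape
    r_out, r_in = outer // 2, inner // 2
    # Frame mask: True outside the inner window, within the outer window.
    frame = np.ones((outer, outer), dtype=bool)
    frame[r_out - r_in:r_out + r_in + 1, r_out - r_in:r_out + r_in + 1] = False
    scores = np.zeros((rows, cols))
    for y in range(r_out, rows - r_out):
        for x in range(r_out, cols - r_out):
            block = cube[y - r_out:y + r_out + 1, x - r_out:x + r_out + 1].reshape(-1, bands)
            bg = block[frame.ravel()]
            mu = bg.mean(axis=0)
            cov = np.cov(bg, rowvar=False) + eps * np.eye(bands)
            d = cube[y, x] - mu
            scores[y, x] = d @ np.linalg.solve(cov, d)   # Mahalanobis distance
    return scores
```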
An improved algorithm of laser spot center detection in strong noise background
NASA Astrophysics Data System (ADS)
Zhang, Le; Wang, Qianqian; Cui, Xutai; Zhao, Yu; Peng, Zhong
2018-01-01
Laser spot center detection is required in many applications. Common algorithms for laser spot center detection, such as the centroid and Hough transform methods, have poor anti-interference ability and low detection accuracy under strong background noise. In this paper, median filtering was first used to remove noise while preserving the edge details of the image. Secondly, the laser spot image was binarized to extract the target from the background. Then, morphological filtering was performed to eliminate noise points inside and outside the spot. Finally, the edge of the preprocessed spot image was extracted and the laser spot center was obtained using a circle fitting method. Building on the circle fitting algorithm, the improved algorithm adds median filtering, morphological filtering and other processing steps. Theoretical analysis and experimental verification show that this method effectively filters background noise, which enhances the anti-interference ability of laser spot center detection and also improves the detection accuracy.
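A sketch of the same pipeline with scipy.ndimage is shown below. The binarisation rule (a fraction of the peak value), the filter and structuring sizes, and the algebraic (Kasa) circle fit are simplifying assumptions, not the paper's exact choices.

```python
import numpy as np
from scipy import ndimage

def spot_center(img, thresh_frac=0.5):
    """Median filtering, binarisation, morphological clean-up, edge extraction and
    least-squares circle fitting; a simplified sketch of the described pipeline."""
    # 1. Median filtering suppresses impulsive noise while keeping edges.
    f = ndimage.median_filter(img.astype(float), size=5)

    # 2. Binarise to separate the spot from the background (assumed threshold rule).
    bw = f > thresh_frac * f.max()

    # 3. Morphological opening/closing removes noise points inside and outside the spot.
    bw = ndimage.binary_opening(bw, iterations=2)
    bw = ndimage.binary_closing(bw, iterations=2)

    # 4. Edge pixels = mask minus its erosion.
    edge = bw & ~ndimage.binary_erosion(bw)
    ys, xs = np.nonzero(edge)

    # 5. Algebraic (Kasa) circle fit: x^2 + y^2 + D*x + E*y + F = 0.
    A = np.column_stack([xs, ys, np.ones_like(xs)]).astype(float)
    b = -(xs.astype(float) ** 2 + ys.astype(float) ** 2)
    D, E, F = np.linalg.lstsq(A, b, rcond=None)[0]
    cx, cy = -D / 2.0, -E / 2.0
    radius = np.sqrt(cx**2 + cy**2 - F)
    return (cx, cy), radius
```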
Chen, Yung-Yue
2018-05-08
Mobile devices are often used in our daily lives for the purposes of speech and communication. The speech quality of mobile devices is always degraded due to the environmental noises surrounding mobile device users. Regretfully, an effective background noise reduction solution cannot easily be developed for this speech enhancement problem. Due to these depicted reasons, a methodology is systematically proposed to eliminate the effects of background noises for the speech communication of mobile devices. This methodology integrates a dual microphone array with a background noise elimination algorithm. The proposed background noise elimination algorithm includes a whitening process, a speech modelling method and an H ₂ estimator. Due to the adoption of the dual microphone array, a low-cost design can be obtained for the speech enhancement of mobile devices. Practical tests have proven that this proposed method is immune to random background noises, and noiseless speech can be obtained after executing this denoise process.
Fuzzy PID control algorithm based on PSO and application in BLDC motor
NASA Astrophysics Data System (ADS)
Lin, Sen; Wang, Guanglong
2017-06-01
A fuzzy PID control algorithm based on improved particle swarm optimization (PSO) is studied for brushless DC (BLDC) motor control, which offers high accuracy, good anti-jamming capability and steady-state accuracy compared with traditional PID control. The mathematical and simulation model of the BLDC motor is established in Simulink, and the speed loop of the fuzzy PID controller is designed. The simulation results show that the fuzzy PID control algorithm based on PSO has higher stability, high control precision and a faster dynamic response.
NASA Technical Reports Server (NTRS)
Lawson, C. L.
1977-01-01
The problem of mathematically defining a smooth surface passing through a finite set of given points is studied. Literature relating to the problem is briefly reviewed. An algorithm is described that first constructs a triangular grid in the (x,y) domain and estimates first partial derivatives at the nodal points. Interpolation in the triangular cells using a method that gives C^1 continuity overall is then examined. The performance of software implementing the algorithm is discussed. Theoretical results are presented that provide valuable guidance in the development of algorithms for constructing triangular grids.
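SciPy's Clough-Tocher interpolant follows the same outline (triangulate the (x,y) domain, estimate gradients at the nodes, interpolate piecewise-cubically with C^1 continuity overall) and can serve as a modern stand-in; it is not Lawson's original software, and the test surface below is an arbitrary example.

```python
import numpy as np
from scipy.interpolate import CloughTocher2DInterpolator
from scipy.spatial import Delaunay

# Scattered data points (x, y, z) for which a smooth interpolating surface is required.
rng = np.random.default_rng(0)
pts = rng.random((200, 2))
z = np.sin(3 * pts[:, 0]) * np.cos(3 * pts[:, 1])

tri = Delaunay(pts)                           # triangular grid in the (x, y) domain
interp = CloughTocher2DInterpolator(tri, z)   # C^1 piecewise-cubic interpolation

# Evaluate the smooth surface on a regular grid (NaN outside the convex hull).
gx, gy = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
surface = interp(gx, gy)
print(np.nanmax(np.abs(surface - np.sin(3 * gx) * np.cos(3 * gy))))
```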
A High-Level Language for Modeling Algorithms and Their Properties
NASA Astrophysics Data System (ADS)
Akhtar, Sabina; Merz, Stephan; Quinson, Martin
Designers of concurrent and distributed algorithms usually express them using pseudo-code. In contrast, most verification techniques are based on more mathematically oriented formalisms such as state transition systems. This conceptual gap contributes to hindering the use of formal verification techniques. Leslie Lamport introduced PlusCal, a high-level algorithmic language that has the "look and feel" of pseudo-code, but is equipped with a precise semantics and includes a high-level expression language based on set theory. PlusCal models can be compiled to TLA+ and verified using the model checker TLC.
Baseline mathematics and geodetics for tracking operations
NASA Technical Reports Server (NTRS)
James, R.
1981-01-01
Various geodetic and mapping algorithms are analyzed as they apply to radar tracking systems and tested in extended BASIC computer language for real time computer applications. Closed-form approaches to the solution of converting Earth centered coordinates to latitude, longitude, and altitude are compared with classical approximations. A simplified approach to atmospheric refractivity called gradient refraction is compared with conventional ray tracing processes. An extremely detailed set of documentation which provides the theory, derivations, and application of algorithms used in the programs is included. Validation methods are also presented for testing the accuracy of the algorithms.
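One common closed-form approach to the Earth-centered-to-geodetic conversion mentioned above is Bowring's method; the sketch below uses WGS-84 constants and is not necessarily the formulation tested in the report.

```python
import math

def ecef_to_geodetic(x, y, z):
    """Closed-form (Bowring-type) conversion of Earth-centred Cartesian coordinates
    to geodetic latitude, longitude and altitude on the WGS-84 ellipsoid."""
    a = 6378137.0                      # semi-major axis [m]
    f = 1.0 / 298.257223563            # flattening
    b = a * (1.0 - f)                  # semi-minor axis
    e2 = f * (2.0 - f)                 # first eccentricity squared
    ep2 = e2 / (1.0 - e2)              # second eccentricity squared

    lon = math.atan2(y, x)
    p = math.hypot(x, y)
    theta = math.atan2(z * a, p * b)                     # auxiliary (parametric) latitude
    lat = math.atan2(z + ep2 * b * math.sin(theta) ** 3,
                     p - e2 * a * math.cos(theta) ** 3)  # Bowring's formula
    N = a / math.sqrt(1.0 - e2 * math.sin(lat) ** 2)     # prime-vertical radius
    alt = p / math.cos(lat) - N
    return math.degrees(lat), math.degrees(lon), alt

# Round trip: convert a geodetic point (45 N, 7 E, 1000 m) to ECEF and back.
lat0, lon0, h0 = math.radians(45.0), math.radians(7.0), 1000.0
e2 = (1 / 298.257223563) * (2 - 1 / 298.257223563)
N0 = 6378137.0 / math.sqrt(1 - e2 * math.sin(lat0) ** 2)
x0 = (N0 + h0) * math.cos(lat0) * math.cos(lon0)
y0 = (N0 + h0) * math.cos(lat0) * math.sin(lon0)
z0 = (N0 * (1 - e2) + h0) * math.sin(lat0)
print(ecef_to_geodetic(x0, y0, z0))   # ~ (45.0, 7.0, 1000.0)
```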
ERIC Educational Resources Information Center
Puig, Luis, Ed.; Gutierrez, Angel, Ed.
The second volume of this proceedings contains full research articles. Papers include: (1) "Lave and Wenger's social practice theory and teaching and learning school mathematics" (J. Adler); (2) "Being a researcher and being a teacher" (J. Ainley); (3) "Procedural and conceptual aspects of standard algorithms in calculus" (M.B. Ali and D. Tall);…
Implementing the Continued Fraction Algorithm on the Illiac IV.
1980-01-01
Illinois University in 1975. Originally, the program was only capable of factoring numbers up to 30 decimal digits in length, but a number of improvements... (Wunderlich, Mathematical Sciences Dept., Northern Illinois University, DeKalb, IL 60115; contract F49620-79-C-0199.)
ERIC Educational Resources Information Center
Son, Ji-Won; Han, Seong Won; Kang, Chungseo; Kwon, Oh Nam
2016-01-01
The purpose of this study is to compare and contrast student, teacher, and school factors that are associated with student mathematics achievement in South Korea and the United States. Using the data from the Trends in International Mathematics and Science Study (TIMSS) 2011, this study examines factors that are linked to teachers who deliver…
Validation Studies of the Accuracy of Various SO2 Gas Retrievals in the Thermal InfraRed (8-14 μm)
NASA Astrophysics Data System (ADS)
Gabrieli, A.; Wright, R.; Lucey, P. G.; Porter, J. N.; Honniball, C.; Garbeil, H.; Wood, M.
2016-12-01
Quantifying hazardous SO2 in the atmosphere and in volcanic plumes is important for public health and volcanic eruption prediction. Remote sensing measurements of spectral radiance of plumes contain information on the abundance of SO2. However, in order to convert such measurements into SO2 path-concentrations, reliable inversion algorithms are needed. Various techniques can be employed to derive SO2 path-concentrations. The first approach employs a Partial Least Square Regression model trained using MODTRAN5 simulations for a variety of plume and atmospheric conditions. Radiances at many spectral wavelengths (8-14 μm) were used in the algorithm. The second algorithm uses measurements inside and outside the SO2 plume. Measurements in the plume-free region (background sky) make it possible to remove background atmospheric conditions and any instrumental effects. After atmospheric and instrumental effects are removed, MODTRAN5 is used to fit the SO2 spectral feature and obtain SO2 path-concentrations. The two inversion algorithms described above can be compared with the inversion algorithm for SO2 retrievals developed by Prata and Bernardo (2014). Their approach employs three wavelengths to characterize the plume temperature, the atmospheric background, and the SO2 path-concentration. The accuracy of these various techniques requires further investigation in terms of the effects of different atmospheric background conditions. Validating these inversion algorithms is challenging because ground truth measurements are very difficult. However, if the three separate inversion algorithms provide similar SO2 path-concentrations for actual measurements with various background conditions, then this increases confidence in the results. Measurements of sky radiance when looking through SO2 filled gas cells were collected with a Thermal Hyperspectral Imager (THI) under various atmospheric background conditions. These data were processed using the three inversion approaches, which were tested for convergence on the known SO2 gas cell path-concentrations. For this study, the inversion algorithms were modified to account for the gas cell configuration. Results from these studies will be presented, as well as results from SO2 gas plume measurements at Kīlauea volcano, Hawai'i.
Meng, Qing-chun; Rong, Xiao-xia; Zhang, Yi-min; Wan, Xiao-le; Liu, Yuan-yuan; Wang, Yu-zhi
2016-01-01
CO2 emission influences not only global climate change but also international economic and political situations. Thus, reducing the emission of CO2, a major greenhouse gas, has become a major issue in China and around the world as regards preserving the environmental ecology. Energy consumption from coal, oil, and natural gas is primarily responsible for the production of greenhouse gases and air pollutants such as SO2 and NOX, which are the main air pollutants in China. In this study, a mathematical multi-objective optimization method was adopted to analyze the collaborative emission reduction of three kinds of gases on the basis of their common restraints in different ways of energy consumption to develop an economic, clean, and efficient scheme for energy distribution. The first part introduces the background research, the collaborative emission reduction for three kinds of gases, the multi-objective optimization, the main mathematical modeling, and the optimization method. The second part discusses the four mathematical tools utilized in this study, which include the Granger causality test to analyze the causality between air quality and pollutant emission, a function analysis to determine the quantitative relation between energy consumption and pollutant emission, a multi-objective optimization to set up the collaborative optimization model that considers energy consumption, and an optimality condition analysis for the multi-objective optimization model to design the optimal-pole algorithm and obtain an efficient collaborative reduction scheme. In the empirical analysis, the data of pollutant emission and final consumption of energies of Tianjin in 1996-2012 was employed to verify the effectiveness of the model and analyze the efficient solution and the corresponding dominant set. In the last part, several suggestions for collaborative reduction are recommended and the drawn conclusions are stated.
Zhang, Yi-min; Wan, Xiao-le; Liu, Yuan-yuan; Wang, Yu-zhi
2016-01-01
CO2 emission influences not only global climate change but also international economic and political situations. Thus, reducing the emission of CO2, a major greenhouse gas, has become a major issue in China and around the world as regards preserving the environmental ecology. Energy consumption from coal, oil, and natural gas is primarily responsible for the production of greenhouse gases and air pollutants such as SO2 and NOX, which are the main air pollutants in China. In this study, a mathematical multi-objective optimization method was adopted to analyze the collaborative emission reduction of three kinds of gases on the basis of their common restraints in different ways of energy consumption to develop an economic, clean, and efficient scheme for energy distribution. The first part introduces the background research, the collaborative emission reduction for three kinds of gases, the multi-objective optimization, the main mathematical modeling, and the optimization method. The second part discusses the four mathematical tools utilized in this study, which include the Granger causality test to analyze the causality between air quality and pollutant emission, a function analysis to determine the quantitative relation between energy consumption and pollutant emission, a multi-objective optimization to set up the collaborative optimization model that considers energy consumption, and an optimality condition analysis for the multi-objective optimization model to design the optimal-pole algorithm and obtain an efficient collaborative reduction scheme. In the empirical analysis, the data of pollutant emission and final consumption of energies of Tianjin in 1996–2012 was employed to verify the effectiveness of the model and analyze the efficient solution and the corresponding dominant set. In the last part, several suggestions for collaborative reduction are recommended and the drawn conclusions are stated. PMID:27010658
NASA Astrophysics Data System (ADS)
Nichols, Jeri Ann
This study examined the relationship between mathematics background and performance on graph-related problems in physics before and after instruction on the graphical analysis of motion and several microcomputer-based laboratory experiences. Students identified as either having or not having a graphing technology enhanced precalculus mathematics background were further categorized into one of four groups according to mathematics placement at the university. The performances of these groups were compared to identify differences. Pre- and post-test data were collected from 589 students and 312 students during Autumn Quarter 1990 and Winter Quarter 1991, respectively. Background information was collected from each student. Significant differences were found between students with the technology enhanced mathematics background and those without when considering the entire populations in both quarters. Students with the technology background were favored Autumn Quarter, and students without the technology background were favored Winter Quarter. However, the entire population included an underrepresentation of students at the highest and lowest placements; hence, these were eliminated from the analyses. No significant differences were found between the technology/no technology groups after the elimination of the underrepresented groups. All categories of students increased their mean scores from pretest to post-test; the average increase was 8.23 points Autumn Quarter and 11.41 points Winter Quarter. Males consistently outperformed females on both the pretest and the post-test in Autumn 1990. All students found questions involving the concept of acceleration more difficult than questions involving velocity or distance. Questions requiring students to create graphs were more difficult than questions requiring students to interpret graphs. Further research involving a qualitative component is recommended to identify the specific skills students use when solving graph-related physics problems. In addition, it is recommended that a similar study be conducted to include a control group not participating in the microcomputer-based laboratory experiments.
NASA Astrophysics Data System (ADS)
Fasni, N.; Turmudi, T.; Kusnandi, K.
2017-09-01
The background of this research is the importance of students’ problem solving abilities. The purpose of this study is to find out whether there are differences in the ability to solve mathematical problems between students who have learned mathematics using Ang’s Framework for Mathematical Modelling Instruction (AFFMMI) and students who have learned using a scientific approach (SA). The method used in this research is a quasi-experimental method with a pretest-posttest control group design. Mathematical problem solving ability was analyzed using an independent samples test. The results showed that there was a difference in the ability to solve mathematical problems between students who received learning with Ang’s Framework for Mathematical Modelling Instruction and students who received learning with a scientific approach. AFFMMI focuses on mathematical modeling, and this modeling allows students to solve problems. The use of AFFMMI is able to improve problem solving ability.
NASA Astrophysics Data System (ADS)
Jarvis, Jan; Haertelt, Marko; Hugger, Stefan; Butschek, Lorenz; Fuchs, Frank; Ostendorf, Ralf; Wagner, Joachim; Beyerer, Juergen
2017-04-01
In this work we present data analysis algorithms for detection of hazardous substances in hyperspectral observations acquired using active mid-infrared (MIR) backscattering spectroscopy. We present a novel background extraction algorithm based on the adaptive target generation process proposed by Ren and Chang called the adaptive background generation process (ABGP) that generates a robust and physically meaningful set of background spectra for operation of the well-known adaptive matched subspace detection (AMSD) algorithm. It is shown that the resulting AMSD-ABGP detection algorithm competes well with other widely used detection algorithms. The method is demonstrated in measurement data obtained by two fundamentally different active MIR hyperspectral data acquisition devices. A hyperspectral image sensor applicable in static scenes takes a wavelength sequential approach to hyperspectral data acquisition, whereas a rapid wavelength-scanning single-element detector variant of the same principle uses spatial scanning to generate the hyperspectral observation. It is shown that the measurement timescale of the latter is sufficient for the application of the data analysis algorithms even in dynamic scenarios.
Towards a Definition of Basic Numeracy
ERIC Educational Resources Information Center
Girling, Michael
1977-01-01
The author redefines basic numeracy as the ability to use a four-function calculator sensibly. He then defines "sensibly" and considers the place of algorithms in the scheme of mathematical calculations. (MN)
Collaborative and Cooperative Learning in Malaysian Mathematics Education
ERIC Educational Resources Information Center
Hossain, Md. Anowar; Tarmizi, Rohani Ahmad; Ayud, Ahmad Fauzi Mohd
2012-01-01
Collaborative and cooperative learning studies are well recognized in Malaysian mathematics education research. Cooperative learning is used to serve various ability students taking into consideration of their level of understanding, learning styles, sociological backgrounds that develop students' academic achievement and skills, and breeze the…
On Automatic Assessment and Conceptual Understanding
ERIC Educational Resources Information Center
Rasila, Antti; Malinen, Jarmo; Tiitu, Hannu
2015-01-01
We consider two complementary aspects of mathematical skills, i.e. "procedural fluency" and "conceptual understanding," from a point of view that is related to modern e-learning environments and computer-based assessment. Pedagogical background of teaching mathematics is discussed, and it is proposed that the traditional book…
Ideas: NCTM Standards-Based Instruction, Grades K-4.
ERIC Educational Resources Information Center
Hynes, Michael C., Ed.
This document is a collection of activity-based mathematics lessons for grades K-4 from the "Ideas" department in "Arithmetic Teacher: Mathematics Education through the Middle Grades." Each lesson includes background information, objectives, directions, extensions, and student worksheets. A matrix is included which correlates…
Ideas: NCTM Standards-Based Instruction, Grades 5-8.
ERIC Educational Resources Information Center
Hynes, Michael C., Ed.
This document is a collection of activity-based mathematics lessons for grades 5-8 from the "Ideas" department in "Arithmetic Teacher: Mathematics Education through the Middle Grades." Each lesson includes background information, objectives, directions, extensions, and student worksheets. A matrix is included which correlates…
ERIC Educational Resources Information Center
Gur, Hulya
2009-01-01
Background: Trigonometry is an area of mathematics that students believe to be particularly difficult and abstract compared with the other subjects of mathematics. Trigonometry is often introduced early in year 8 with most textbooks traditionally starting with naming sides of right-angled triangles. Students need to see and understand why their…
NASA Astrophysics Data System (ADS)
Bhathal, Ragbir
2016-09-01
The number of students entering engineering schools in Australian universities has increased tremendously over the last few years because of the Australian Federal Government's policy of increasing the participation rates of Higher School Certificate students and students from low social economic status backgrounds in the tertiary sector. They now come with a diverse background of skills, motivations and prior knowledge. It is imperative that new methods of teaching and learning be developed. This paper describes an online tutorial system used in conjunction with contextual physics and mathematics, and the revision of the relevant mathematical knowledge at the appropriate time before a new topic is introduced in the teaching and learning of engineering physics. Taken as a whole, this study shows that students not only improved their final examination results but there was also an increase in the retention rate of first-year engineering students which has financial implications for the university.
Eliciting candidate anatomical routes for protein interactions: a scenario from endocrine physiology
2013-01-01
Background In this paper, we use: i) formalised anatomical knowledge of connectivity between body structures and ii) a formal theory of physiological transport between fluid compartments in order to define and make explicit the routes followed by proteins to a site of interaction. The underlying processes are the objects of mathematical models of physiology and, therefore, the motivation for the approach can be understood as using knowledge representation and reasoning methods to propose concrete candidate routes corresponding to correlations between variables in mathematical models of physiology. In so doing, the approach projects physiology models onto a representation of the anatomical and physiological reality which underpins them. Results The paper presents a method based on knowledge representation and reasoning for eliciting physiological communication routes. In doing so, the paper presents the core knowledge representation and algorithms using it in the application of the method. These are illustrated through the description of a prototype implementation and the treatment of a simple endocrine scenario whereby a candidate route of communication between ANP and its receptors on the external membrane of smooth muscle cells in renal arterioles is elicited. The potential of further development of the approach is illustrated through the informal discussion of a more complex scenario. Conclusions The work presented in this paper supports research in intercellular communication by enabling knowledge‐based inference on physiologically‐related biomedical data and models. PMID:23590598
Special Issue on a Fault Tolerant Network on Chip Architecture
NASA Astrophysics Data System (ADS)
Janidarmian, Majid; Tinati, Melika; Khademzadeh, Ahmad; Ghavibazou, Maryam; Fekr, Atena Roshan
2010-06-01
In this paper, a fast and efficient spare switch selection algorithm is presented for a reliable NoC architecture, called FERNA, based on a specific application mapped onto a mesh topology. Based on the ring concept used in FERNA, this algorithm achieves results equivalent to an exhaustive algorithm with much less run time while improving two parameters. The inputs of the FERNA algorithm for minimizing system response time and extra communication cost are derived from high-level transaction simulation using SystemC TLM and from mathematical formulation, respectively. The results demonstrate that improvements in the above-mentioned parameters lead to improved whole-system reliability, which is calculated analytically. The mapping algorithm has also been investigated as a factor affecting extra bandwidth requirements and system reliability.
A Survey of the Use of Iterative Reconstruction Algorithms in Electron Microscopy
Otón, J.; Vilas, J. L.; Kazemi, M.; Melero, R.; del Caño, L.; Cuenca, J.; Conesa, P.; Gómez-Blanco, J.; Marabini, R.; Carazo, J. M.
2017-01-01
One of the key steps in Electron Microscopy is the tomographic reconstruction of a three-dimensional (3D) map of the specimen being studied from a set of two-dimensional (2D) projections acquired at the microscope. This tomographic reconstruction may be performed with different reconstruction algorithms that can be grouped into several large families: direct Fourier inversion methods, back-projection methods, Radon methods, or iterative algorithms. In this review, we focus on the latter family of algorithms, explaining the mathematical rationale behind the different algorithms in this family as they have been introduced in the field of Electron Microscopy. We cover their use in Single Particle Analysis (SPA) as well as in Electron Tomography (ET). PMID:29312997
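One member of the iterative family reviewed here is SIRT, whose update can be written x <- x + C A^T R (p - A x), with R and C diagonal matrices of inverse row and column sums of the projection matrix A. The sketch below runs SIRT on a toy 4x4 object observed through horizontal and vertical ray sums; it is a generic illustration, not tied to any electron-microscopy package.

```python
import numpy as np

def sirt(A, p, n_iter=200):
    """Minimal SIRT-type iterative reconstruction with a non-negativity constraint."""
    row_sums = A.sum(axis=1)
    col_sums = A.sum(axis=0)
    R = np.where(row_sums > 0, 1.0 / row_sums, 0.0)
    C = np.where(col_sums > 0, 1.0 / col_sums, 0.0)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        residual = p - A @ x
        x = x + C * (A.T @ (R * residual))
        x = np.clip(x, 0.0, None)        # simple non-negativity constraint
    return x

# Toy example: a 4x4 "specimen" observed through row and column sums.
obj = np.array([[0, 1, 1, 0],
                [1, 2, 2, 1],
                [1, 2, 2, 1],
                [0, 1, 1, 0]], dtype=float)
row_op = np.kron(np.eye(4), np.ones((1, 4)))   # sums along each image row
col_op = np.kron(np.ones((1, 4)), np.eye(4))   # sums along each image column
A = np.vstack([row_op, col_op])                # projection matrix (8 x 16)
p = A @ obj.ravel()                            # simulated projections
x = sirt(A, p)
print("max projection residual:", np.abs(A @ x - p).max())
```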
Kim, Ye-seul; Park, Hye-suk; Lee, Haeng-Hwa; Choi, Young-Wook; Choi, Jae-Gu; Kim, Hak Hee; Kim, Hee-Joung
2016-02-01
Digital breast tomosynthesis (DBT) is a recently developed system for three-dimensional imaging that offers the potential to reduce the false positives of mammography by preventing tissue overlap. Many qualitative evaluations of digital breast tomosynthesis have previously been performed using phantoms with unrealistic models and with heterogeneous background and noise that are not representative of real breasts. The purpose of the present work was to compare reconstruction algorithms for DBT by using various breast phantoms; validation was also performed by using patient images. DBT was performed by using a prototype unit that was optimized for very low exposures and rapid readout. Three algorithms were compared: a back-projection (BP) algorithm, a filtered BP (FBP) algorithm, and an iterative expectation maximization (EM) algorithm. To compare the algorithms, three types of breast phantoms (homogeneous background phantom, heterogeneous background phantom, and anthropomorphic breast phantom) were evaluated, and clinical images were also reconstructed by using the different reconstruction algorithms. The in-plane image quality was evaluated based on the line profile and the contrast-to-noise ratio (CNR), and out-of-plane artifacts were evaluated by means of the artifact spread function (ASF). Parenchymal texture features of contrast and homogeneity were computed based on reconstructed images of an anthropomorphic breast phantom. The clinical images were studied to validate the effect of the reconstruction algorithms. The results showed that the CNRs of masses reconstructed by using the EM algorithm were slightly higher than those obtained by using the BP algorithm, whereas the FBP algorithm yielded much lower CNR because of its large fluctuations of background noise. The FBP algorithm provides the best conspicuity for larger calcifications by enhancing their contrast and sharpness more than the other algorithms; however, in the case of small and low-contrast microcalcifications, the FBP algorithm reduced detectability because of its increased noise. The EM algorithm yielded high conspicuity for both microcalcifications and masses and yielded better ASFs in terms of the full width at half maximum. In the texture analysis, the FBP algorithm showed higher contrast and lower homogeneity than the other algorithms. Patient images reconstructed with the EM algorithm showed high visibility of low-contrast masses with clear borders. In this study, we compared three reconstruction algorithms by using various kinds of breast phantoms and patient cases. Future work using these algorithms and considering the type of the breast and the acquisition techniques used (e.g., angular range, dose distribution) should include the use of actual patients or patient-like phantoms to increase the potential for practical applications.
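For reference, the two figures of merit used in the comparison, the in-plane CNR and the out-of-plane ASF, can be sketched as below. The ROI arguments and the normalisation to the in-focus slice are illustrative assumptions; the study's exact ROI definitions are not reproduced here.

```python
# Sketch of the image-quality metrics used in the comparison above. The ROI
# definitions are illustrative assumptions.
import numpy as np

def cnr(image, signal_roi, background_roi):
    """Contrast-to-noise ratio: |mean(signal) - mean(background)| / std(background)."""
    sig = image[signal_roi]
    bkg = image[background_roi]
    return abs(sig.mean() - bkg.mean()) / bkg.std()

def asf(volume, feature_roi, background_roi, in_focus_slice):
    """Artifact spread function: per-slice contrast normalised to the in-focus slice."""
    contrast = np.array([abs(sl[feature_roi].mean() - sl[background_roi].mean())
                         for sl in volume])
    return contrast / contrast[in_focus_slice]
```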
A cooperative strategy for parameter estimation in large scale systems biology models
2012-01-01
Background Mathematical models play a key role in systems biology: they summarize the currently available knowledge in a way that allows experimentally verifiable predictions to be made. Model calibration consists of finding the parameters that give the best fit to a set of experimental data, which entails minimizing a cost function that measures the goodness of this fit. Most mathematical models in systems biology present three characteristics which make this problem very difficult to solve: they are highly non-linear, they have a large number of parameters to be estimated, and the information content of the available experimental data is frequently scarce. Hence, there is a need for global optimization methods capable of solving this problem efficiently. Results A new approach for parameter estimation of large-scale models, called Cooperative Enhanced Scatter Search (CeSS), is presented. Its key feature is the cooperation between different programs (“threads”) that run in parallel on different processors. Each thread implements a state-of-the-art metaheuristic, the enhanced Scatter Search algorithm (eSS). Cooperation, meaning information sharing between threads, modifies the systemic properties of the algorithm and speeds up performance. Two parameter estimation problems involving models related to the central carbon metabolism of E. coli, which include different regulatory levels (metabolic and transcriptional), are used as case studies. The performance and capabilities of the method are also evaluated using benchmark problems of large-scale global optimization, with excellent results. Conclusions The cooperative CeSS strategy is a general-purpose technique that can be applied to any model calibration problem. Its capability has been demonstrated by calibrating two large-scale models of different characteristics, improving the performance of previously existing methods in both cases. The cooperative metaheuristic presented here can be easily extended to incorporate other global and local search solvers and specific structural information for particular classes of problems. PMID:22727112
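The cooperation scheme described above can be illustrated with a deliberately simplified stand-in: several independent stochastic searches that periodically share their best solution. This sketch is not eSS/CeSS; the Rosenbrock cost is a placeholder for a real calibration objective, and the thread count, round length, and perturbation scale are assumptions.

```python
# Toy illustration of cooperation between parallel searches ("threads"):
# each runs a local stochastic search and periodically receives the overall
# best solution found so far. Not the authors' implementation.
import numpy as np

rng = np.random.default_rng(0)

def cost(p):                      # placeholder for a model-calibration cost
    return sum(100 * (p[1:] - p[:-1]**2)**2 + (1 - p[:-1])**2)

def local_step(p, scale=0.1):     # simple random perturbation move
    return p + rng.normal(0, scale, size=p.size)

def cess_like(n_threads=4, dim=10, rounds=50, iters_per_round=200):
    pop = [rng.uniform(-2, 2, dim) for _ in range(n_threads)]
    best = [p.copy() for p in pop]
    for _ in range(rounds):
        for t in range(n_threads):            # each "thread" searches locally
            for _ in range(iters_per_round):
                cand = local_step(pop[t])
                if cost(cand) < cost(pop[t]):
                    pop[t] = cand
                if cost(pop[t]) < cost(best[t]):
                    best[t] = pop[t].copy()
        overall = min(best, key=cost)         # cooperation: broadcast the best
        for t in range(n_threads):            # solution to every thread
            if cost(overall) < cost(pop[t]):
                pop[t] = overall.copy()
    return min(best, key=cost)

p_hat = cess_like()
print(cost(p_hat))
```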
Statistics, Computation, and Modeling in Cosmology
NASA Astrophysics Data System (ADS)
Jewell, Jeff; Guiness, Joe; SAMSI 2016 Working Group in Cosmology
2017-01-01
Current and future ground- and space-based missions are designed not only to detect, but to map out with increasing precision, details of the universe from its infancy to the present day. As a result, we are faced with the challenge of analyzing and interpreting observations from a wide variety of instruments to form a coherent view of the universe. Finding solutions to a broad range of challenging inference problems in cosmology is one of the goals of the “Statistics, Computation, and Modeling in Cosmology” working groups, formed as part of the year-long program on ‘Statistical, Mathematical, and Computational Methods for Astronomy’, hosted by the Statistical and Applied Mathematical Sciences Institute (SAMSI), a National Science Foundation funded institute. Two application areas have emerged for focused development in the cosmology working group: advanced algorithmic implementations of exact Bayesian inference for the Cosmic Microwave Background, and statistical modeling of galaxy formation. The former includes study and development of advanced Markov Chain Monte Carlo algorithms designed to confront challenging inference problems, including inference for spatial Gaussian random fields in the presence of sources of galactic emission (an example of a source separation problem). Extending these methods to future redshift survey data probing the nonlinear regime of large-scale structure formation is also included in the working group activities. In addition, the working group is focused on the study of ‘Galacticus’, a galaxy formation model applied to dark matter-only cosmological N-body simulations operating on time-dependent halo merger trees. The working group is interested in calibrating the Galacticus model to match statistics of galaxy survey observations, specifically stellar mass functions, luminosity functions, and color-color diagrams. The group will use subsampling approaches and fractional factorial designs to explore the Galacticus parameter space in a statistically and computationally efficient way. The group will also use the Galacticus simulations to study the relationship between the topological and physical structure of the halo merger trees and the properties of the resulting galaxies.
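As a point of reference for the MCMC work mentioned above, the sketch below is a plain random-walk Metropolis sampler on a two-dimensional Gaussian target. It illustrates only the basic mechanism; the working group's samplers for Gaussian random fields and source separation are far more elaborate, and the target, step size, and chain length here are assumptions.

```python
# A deliberately simple random-walk Metropolis sampler, illustrating the MCMC
# family the working group builds on. The 2-D Gaussian target stands in for a
# real cosmological posterior.
import numpy as np

rng = np.random.default_rng(1)

def log_target(x):
    cov = np.array([[1.0, 0.8], [0.8, 1.0]])   # assumed correlated Gaussian
    return -0.5 * x @ np.linalg.solve(cov, x)

def metropolis(n_samples=5000, step=0.5):
    x = np.zeros(2)
    chain = np.empty((n_samples, 2))
    for i in range(n_samples):
        prop = x + rng.normal(0, step, size=2)
        if np.log(rng.uniform()) < log_target(prop) - log_target(x):
            x = prop                            # accept the proposal
        chain[i] = x
    return chain

samples = metropolis()
print(samples.mean(axis=0), np.cov(samples.T))
```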
Space moving target detection and tracking method in complex background
NASA Astrophysics Data System (ADS)
Lv, Ping-Yue; Sun, Sheng-Li; Lin, Chang-Qing; Liu, Gao-Rui
2018-06-01
The background seen by space-borne detectors in a real space-based environment is extremely complex and the signal-to-clutter ratio is very low (SCR ≈ 1), which makes detecting space moving targets difficult. To solve this problem, an algorithm is proposed that combines background suppression based on a two-dimensional least-mean-square (TDLMS) filter with target enhancement based on the neighborhood gray-scale difference (GSD). The latter can filter out most of the residual background clutter left by the former, such as cloud edges. Through this procedure, both the global and local SCR are substantially improved, indicating that the target has been greatly enhanced. After the detector's inherent clutter region is removed through connected-domain processing, the image contains only the target point and isolated noise, and the isolated noise can be filtered out effectively through multi-frame association. The proposed algorithm has been compared with several state-of-the-art algorithms on moving-target detection and tracking tasks. The experimental results show that it performs best in terms of SCR gain, background suppression factor (BSF), and detection results.
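A simplified sketch of the target-enhancement stage and the SCR metric referred to above follows. The window size and the exact GSD formulation are assumptions, and the TDLMS background-suppression stage is omitted for brevity.

```python
# Simplified sketch of neighbourhood gray-scale difference (GSD) enhancement
# and the signal-to-clutter ratio (SCR) metric. Window size is an assumption;
# the TDLMS stage described in the abstract is not shown.
import numpy as np
from scipy.ndimage import uniform_filter

def gsd_enhance(image, win=5):
    """Difference between each pixel and its local neighbourhood mean;
    positive residuals highlight small bright targets."""
    local_mean = uniform_filter(image.astype(float), size=win)
    return np.clip(image - local_mean, 0, None)

def scr(image, target_mask):
    """SCR = |mean(target) - mean(clutter)| / std(clutter)."""
    target = image[target_mask]
    clutter = image[~target_mask]
    return abs(target.mean() - clutter.mean()) / clutter.std()
```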
Reconciliation of Gene and Species Trees
Rusin, L. Y.; Lyubetskaya, E. V.; Gorbunov, K. Y.; Lyubetsky, V. A.
2014-01-01
The first part of the paper briefly overviews the problem of gene and species tree reconciliation, with a focus on defining and algorithmically constructing the evolutionary scenario. Basic ideas are discussed for the aspects of mapping definitions, costs of the mapping and evolutionary scenario, imposing time scales on a scenario, incorporating horizontal gene transfers, binarization and reconciliation of polytomous trees, and construction of species trees and scenarios. The review does not intend to cover the vast diversity of literature published on these subjects. Instead, the authors strove to present the evolutionary scenario as a central concept in many areas of evolutionary research. The second part provides detailed mathematical proofs for the solutions of two problems: (i) inferring a gene evolution along a species tree accounting for various types of evolutionary events and (ii) reconciling trees into a single species tree when only gene duplications and losses are allowed. All proposed algorithms have cubic time complexity and are mathematically proved to find exact solutions. The algorithms solving problem (ii) can be naturally extended to incorporate horizontal transfers, other evolutionary events, and time scales on the species tree. PMID:24800245
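For orientation, the classical LCA mapping that underlies duplication-loss reconciliation (the setting of problem (ii)) can be sketched as follows. This simplified version only counts duplications and is not the authors' cubic algorithm; the example trees are made up.

```python
# Sketch of LCA-mapping reconciliation: map each gene-tree node to the lowest
# common ancestor of its children's images in the species tree; a node is a
# duplication if its image equals a child's image. Illustrative only.
def lca(a, b, parent, depth):
    while a != b:
        if depth[a] < depth[b]:
            b = parent[b]
        else:
            a = parent[a]
    return a

def annotate(species_parent, species_depth, gene_tree):
    """Return (mapping, duplication count) for a gene-tree node."""
    if isinstance(gene_tree, str):                 # leaf: maps to its species
        return gene_tree, 0
    (left, l_dup), (right, r_dup) = (annotate(species_parent, species_depth, c)
                                     for c in gene_tree)
    m = lca(left, right, species_parent, species_depth)
    dup = 1 if m in (left, right) else 0           # duplication criterion
    return m, dup + l_dup + r_dup

# Species tree ((A,B),C) given by parent pointers and depths (made-up example).
species_parent = {"A": "AB", "B": "AB", "AB": "ABC", "C": "ABC", "ABC": "ABC"}
species_depth = {"ABC": 0, "AB": 1, "C": 1, "A": 2, "B": 2}
gene_tree = (("A", "B"), ("A", "C"))               # binary gene tree
print(annotate(species_parent, species_depth, gene_tree))   # -> ('ABC', 1)
```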
Genetic Networks and Anticipation of Gene Expression Patterns
NASA Astrophysics Data System (ADS)
Gebert, J.; Lätsch, M.; Pickl, S. W.; Radde, N.; Weber, G.-W.; Wünschiers, R.
2004-08-01
An interesting problem in computational biology is the analysis of time-series expression data. Here, the application of modern methods from dynamical systems, optimization theory, and numerical algorithms, together with the utilization of implicit discrete information, leads to a deeper understanding. In [1], we suggested representing the behavior of time-series gene expression patterns by a system of ordinary differential equations, which we investigated analytically and algorithmically with respect to the parametric aspect of stability or instability. Our algorithm strongly exploited combinatorial information. In this paper, we deepen, extend, and exemplify this study from the viewpoint of the underlying mathematical modelling. This modelling consists of evaluating DNA-microarray measurements as the basis of anticipatory prediction, choosing a smooth model given by differential equations, approximating the right-hand side with parametric matrices, and a discrete approximation that takes the form of a least-squares optimization problem. We give a mathematical and biological discussion and pay attention to the special case of a linear system, where the matrices do not depend on the state of expressions. Here, we present first numerical examples.
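The linear special case mentioned at the end can be sketched directly: with dE/dt = M E, the matrix M follows from finite-difference derivatives and a least-squares fit. The synthetic time series below stands in for microarray data and is an assumption.

```python
# Minimal sketch of the linear special case: expression dynamics dE/dt = M E,
# with M estimated by finite differences plus least squares. Synthetic data
# only; not microarray measurements.
import numpy as np

M_true = np.array([[-0.5, 0.3], [0.0, -0.8]])    # assumed interaction matrix
t = np.linspace(0.0, 5.0, 60)
E = np.empty((t.size, 2))
E[0] = [1.0, 2.0]
for k in range(1, t.size):                       # forward-Euler simulation
    dt = t[k] - t[k - 1]
    E[k] = E[k - 1] + dt * (M_true @ E[k - 1])

# Estimate M: solve the least-squares problem  dE/dt ~= M E  column-wise.
dE = np.diff(E, axis=0) / np.diff(t)[:, None]    # finite-difference derivatives
E_mid = E[:-1]                                   # states at the left endpoints
M_hat, *_ = np.linalg.lstsq(E_mid, dE, rcond=None)
print(M_hat.T)                                   # close to M_true
```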
Theory of Remote Image Formation
NASA Astrophysics Data System (ADS)
Blahut, Richard E.
2004-11-01
In many applications, images, such as ultrasonic or X-ray signals, are recorded and then analyzed with digital or optical processors in order to extract information. Such processing requires the development of algorithms of great precision and sophistication. This book presents a unified treatment of the mathematical methods that underpin the various algorithms used in remote image formation. The author begins with a review of transform and filter theory. He then discusses two- and three-dimensional Fourier transform theory, the ambiguity function, image construction and reconstruction, tomography, baseband surveillance systems, and passive systems (where the signal source might be an earthquake or a galaxy). Information-theoretic methods in image formation are also covered, as are phase errors and phase noise. Throughout the book, practical applications illustrate theoretical concepts, and there are many homework problems. The book is aimed at graduate students of electrical engineering and computer science, and practitioners in industry. Key features: presents a unified treatment of the mathematical methods that underpin the algorithms used in remote image formation; illustrates theoretical concepts with reference to practical applications; provides insights into the design parameters of real systems.
An algorithm-based topographical biomaterials library to instruct cell fate
Unadkat, Hemant V.; Hulsman, Marc; Cornelissen, Kamiel; Papenburg, Bernke J.; Truckenmüller, Roman K.; Carpenter, Anne E.; Wessling, Matthias; Post, Gerhard F.; Uetz, Marc; Reinders, Marcel J. T.; Stamatialis, Dimitrios; van Blitterswijk, Clemens A.; de Boer, Jan
2011-01-01
It is increasingly recognized that material surface topography is able to evoke specific cellular responses, endowing materials with instructive properties that were formerly reserved for growth factors. This opens the window to improve upon, in a cost-effective manner, biological performance of any surface used in the human body. Unfortunately, the interplay between surface topographies and cell behavior is complex and still incompletely understood. Rational approaches to search for bioactive surfaces will therefore omit previously unperceived interactions. Hence, in the present study, we use mathematical algorithms to design nonbiased, random surface features and produce chips of poly(lactic acid) with 2,176 different topographies. With human mesenchymal stromal cells (hMSCs) grown on the chips and using high-content imaging, we reveal unique, formerly unknown, surface topographies that are able to induce MSC proliferation or osteogenic differentiation. Moreover, we correlate parameters of the mathematical algorithms to cellular responses, which yield novel design criteria for these particular parameters. In conclusion, we demonstrate that randomized libraries of surface topographies can be broadly applied to unravel the interplay between cells and surface topography and to find improved material surfaces. PMID:21949368
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keyes, D.; McInnes, L. C.; Woodward, C.
This report is an outcome of the workshop Multiphysics Simulations: Challenges and Opportunities, sponsored by the Institute of Computing in Science (ICiS). Additional information about the workshop, including relevant reading and presentations on multiphysics issues in applications, algorithms, and software, is available via https://sites.google.com/site/icismultiphysics2011/. We consider multiphysics applications from algorithmic and architectural perspectives, where 'algorithmic' includes both mathematical analysis and computational complexity and 'architectural' includes both software and hardware environments. Many diverse multiphysics applications can be reduced, en route to their computational simulation, to a common algebraic coupling paradigm. Mathematical analysis of multiphysics coupling in this form is not always practical for realistic applications, but model problems representative of applications discussed herein can provide insight. A variety of software frameworks for multiphysics applications have been constructed and refined within disciplinary communities and executed on leading-edge computer systems. We examine several of these, expose some commonalities among them, and attempt to extrapolate best practices to future systems. From our study, we summarize challenges and forecast opportunities. We also initiate a modest suite of test problems encompassing features present in many applications.
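The "common algebraic coupling paradigm" can be illustrated with a toy partitioned (block Gauss-Seidel) iteration between two single-field solvers. The linear toy operators and coupling terms below are assumptions chosen for brevity, not a model problem from the report.

```python
# Toy partitioned coupling: two single-field solvers iterated in a block
# Gauss-Seidel fashion until the coupled fixed point is reached. The operators
# and coupling matrices are made-up assumptions.
import numpy as np

A1 = np.array([[4.0, 1.0], [1.0, 3.0]])   # "physics 1" operator
A2 = np.array([[5.0, 2.0], [2.0, 6.0]])   # "physics 2" operator
C12 = 0.5 * np.eye(2)                     # effect of field 2 on field 1
C21 = 0.3 * np.eye(2)                     # effect of field 1 on field 2
b1 = np.array([1.0, 2.0])
b2 = np.array([3.0, 1.0])

u1 = np.zeros(2)
u2 = np.zeros(2)
for it in range(100):                                   # coupling sweeps
    u1_new = np.linalg.solve(A1, b1 - C12 @ u2)         # solve physics 1, u2 frozen
    u2_new = np.linalg.solve(A2, b2 - C21 @ u1_new)     # then physics 2
    converged = max(np.linalg.norm(u1_new - u1),
                    np.linalg.norm(u2_new - u2)) < 1e-10
    u1, u2 = u1_new, u2_new
    if converged:
        break
print(it, u1, u2)
```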
Algorithm for Overcoming the Curse of Dimensionality for Certain Non-convex Hamilton-Jacobi Equations, Projections and Differential Games
2016-05-01
Yat Tin...subproblems. Our approach is expected to have wide applications in continuous dynamic games, control theory problems, and elsewhere. Mathematics...differential dynamic games, control theory problems, and dynamical systems coming from the physical world, e.g. [11]. An important application is to...
Information Dynamics in Networks: Models and Algorithms
2016-09-13
We investigated the appropriateness of existing mathematical models for explaining the structure of retweet cascades on Twitter; we investigated how to detect spam accounts on Facebook and other social networks by graph analytics; and finally we investigated how to design... networks. Related paper: A Note on Modeling Retweet Cascades on Twitter, Workshop on Algorithms and Models for the Web Graph, 09-DEC-15.
Algorithms and Array Design Criteria for Robust Imaging in Interferometry
2016-04-01
Interferometry. 1.1 Chapter Overview. In this Section, we introduce the physics-based principles of optical interferometry, thereby providing a foundation for... particular physical structure (i.e., the existence of a certain type of loop in the interferometric graph), and provide a simple algorithm for identifying... mathematical conditions for wrap invariance to a physical condition on aperture placement is more intuitive when considering the raw phase measurements as...
Detection and Classification of Objects in Synthetic Aperture Radar Imagery
2006-02-01
...a higher False Alarm Rate (FAR). Currently, a standard edge detector is the Canny algorithm, which is available with the mathematics package MATLAB... the algorithm used to calculate the Radon transform. The MATLAB implementation uses the built-in Radon transform procedure, which is extremely... MATLAB code for a faster forward-backwards selection process has also been provided. In both cases, the feature selection was accomplished by using...
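The report relies on MATLAB's built-in Canny and Radon routines; the sketch below is an open-source approximation of the same two steps using scikit-image, offered only as an assumed rough equivalent rather than the report's code, and run on a synthetic image.

```python
# Rough open-source stand-in for the MATLAB Canny + Radon steps mentioned
# above, using scikit-image on a synthetic "imagery" chip (an assumption).
import numpy as np
from skimage.feature import canny
from skimage.transform import radon

rng = np.random.default_rng(3)
image = rng.normal(0.0, 0.2, (128, 128))              # noisy background
image[40:90, 50:80] += 4.0                            # bright rectangular object

edges = canny(image, sigma=2.0)                       # Canny edge detection
theta = np.linspace(0.0, 180.0, 90, endpoint=False)
sinogram = radon(edges.astype(float), theta=theta, circle=False)  # Radon transform
print(edges.sum(), sinogram.shape)
```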
NASA Technical Reports Server (NTRS)
Kincaid, D. R.; Young, D. M.
1984-01-01
Adapting and designing mathematical software to achieve optimum performance on the CYBER 205 is discussed. Comments and observations are made in light of recent work done on modifying the ITPACK software package and on writing new software for vector supercomputers. The goal was to develop very efficient vector algorithms and software for solving large sparse linear systems using iterative methods.
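As a small illustration of the kind of iterative method packaged in ITPACK, the sketch below runs a vectorised Jacobi iteration on a sparse, diagonally dominant system in NumPy/SciPy. It is generic and illustrative, not ITPACK or CYBER 205 code.

```python
# Generic vectorised Jacobi iteration for a sparse linear system, in the spirit
# of the iterative solvers packaged in ITPACK (which also offers Jacobi-CG,
# SOR, and others). Illustrative only; the test matrix is an assumption.
import numpy as np
import scipy.sparse as sp

def jacobi(A, b, tol=1e-8, max_iter=500):
    """Solve A x = b with the Jacobi iteration x <- D^{-1} (b - R x)."""
    d = A.diagonal()
    R = A - sp.diags(d)
    x = np.zeros_like(b, dtype=float)
    for k in range(max_iter):
        x_new = (b - R @ x) / d
        if np.linalg.norm(x_new - x) < tol * np.linalg.norm(b):
            return x_new, k
        x = x_new
    return x, max_iter

# Tridiagonal, strictly diagonally dominant test system.
n = 100
A = sp.diags([-1.0, 2.5, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)
x, iters = jacobi(A, b)
print(iters, np.linalg.norm(A @ x - b))
```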
Factors Affecting Turkish Students' Achievement in Mathematics
ERIC Educational Resources Information Center
Demir, Ibrahim; Kilic, Serpil; Depren, Ozer
2009-01-01
Following past research, student background, learning strategies, self-related cognitions in mathematics, and school climate variables were important for achievement. The purpose of this study was to identify a number of factors that represent the relationship among sets of interrelated variables using principal component factor analysis and…
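As a sketch of the principal component factor analysis named in the abstract: standardise the variables, eigendecompose their correlation matrix, and retain components by the Kaiser (eigenvalue greater than 1) criterion. The random data below merely stand in for the actual student-background and attitude variables, which are not reproduced here.

```python
# Sketch of principal component factor analysis with the Kaiser criterion.
# The random matrix is a placeholder for the study's survey variables.
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(500, 8))                 # placeholder survey responses
X[:, 1] += 0.8 * X[:, 0]                      # induce some correlation
Z = (X - X.mean(axis=0)) / X.std(axis=0)      # standardise
eigval, eigvec = np.linalg.eigh(np.corrcoef(Z, rowvar=False))
order = np.argsort(eigval)[::-1]              # sort components by variance
eigval, eigvec = eigval[order], eigvec[:, order]
keep = eigval > 1.0                           # Kaiser criterion
loadings = eigvec[:, keep] * np.sqrt(eigval[keep])
print(keep.sum(), loadings.shape)
```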
Motivation and Self-Regulated Learning Influences on Middle School Mathematics Achievement
ERIC Educational Resources Information Center
Cleary, Timothy J.; Kitsantas, Anastasia
2017-01-01
The primary purpose of the current study was to use structural equation modeling to examine the relations among background variables (socioeconomic status, prior mathematics achievement), motivation variables (self-efficacy, task interest, school connectedness), self-regulated learning (SRL) behaviors, and performance in middle school mathematics…
The Power of the Raised Eyebrow.
ERIC Educational Resources Information Center
Burton, Grace M.
This paper begins by emphasizing the school counselor's role in insuring equal educational opportunities for all students. The problem of girls' low enrollment in secondary school mathematics classes and the implications of an inadequate mathematics background are discussed. Specific steps to encourage young women to continue their study of…