MATHEMATICAL METHODS IN MEDICAL IMAGE PROCESSING
ANGENENT, SIGURD; PICHON, ERIC; TANNENBAUM, ALLEN
2013-01-01
In this paper, we describe some central mathematical problems in medical imaging. The subject has been undergoing rapid changes driven by better hardware and software. Much of the software is based on novel methods utilizing geometric partial differential equations in conjunction with standard signal/image processing techniques as well as computer graphics facilitating man/machine interactions. As part of this enterprise, researchers have been trying to base biomedical engineering principles on rigorous mathematical foundations for the development of software methods to be integrated into complete therapy delivery systems. These systems support the more effective delivery of many image-guided procedures such as radiation therapy, biopsy, and minimally invasive surgery. We will show how mathematics may impact some of the main problems in this area, including image enhancement, registration, and segmentation. PMID:23645963
Mathematical Morphology Techniques For Image Processing Applications In Biomedical Imaging
NASA Astrophysics Data System (ADS)
Bartoo, Grace T.; Kim, Yongmin; Haralick, Robert M.; Nochlin, David; Sumi, Shuzo M.
1988-06-01
Mathematical morphology operations allow object identification based on shape and are useful for grouping a cluster of small objects into one object. Because of these capabilities, we have implemented and evaluated this technique for our study of Alzheimer's disease. The microscopic hallmark of Alzheimer's disease is the presence of brain lesions known as neurofibrillary tangles and senile plaques. These lesions have distinct shapes compared to normal brain tissue: neurofibrillary tangles appear as flame-shaped structures, whereas senile plaques appear as circular clusters of small objects. In order to quantitatively analyze the distribution of these lesions, we have developed and applied the tools of mathematical morphology on the Pixar Image Computer. As a preliminary test of the accuracy of the automatic detection algorithm, a study comparing computer and human detection of senile plaques was performed by evaluating 50 images from 5 different patients. The results of this comparison demonstrate that the computer counts correlate well with the human counts (correlation coefficient = 0.81). Now that the basic algorithm has been shown to work, the software will be optimized to improve its speed. Future improvements, such as local adaptive thresholding, will also be made to the image analysis routine to further improve the system's accuracy.
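The grouping step described in this abstract can be sketched with standard morphological operators. A minimal sketch, assuming a synthetic binary image and a cross-shaped structuring element (the original work ran on the Pixar Image Computer, not this code):

```python
import numpy as np
from scipy import ndimage

# Synthetic binary image: a "plaque-like" cluster of small objects.
img = np.zeros((64, 64), dtype=bool)
for y, x in [(30, 30), (30, 36), (36, 30), (36, 36), (33, 33)]:
    img[y - 1:y + 2, x - 1:x + 2] = True  # five separate 3x3 blobs

# Without morphology, connected-component labeling sees five objects.
_, n_before = ndimage.label(img)

# Morphological closing (dilation then erosion) with a cross-shaped
# structuring element merges nearby blobs into a single object.
selem = ndimage.generate_binary_structure(2, 1)
closed = ndimage.binary_closing(img, structure=selem, iterations=4)
_, n_after = ndimage.label(closed)

print(n_before, n_after)  # the cluster is grouped into one object
```

The iteration count plays the role of a proximity threshold: blobs closer than roughly twice the accumulated structuring-element radius are merged, which is how a circular cluster of small objects becomes one countable plaque.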
Applying Mathematical Processes (AMP)
ERIC Educational Resources Information Center
Kathotia, Vinay
2011-01-01
This article provides insights into the "Applying Mathematical Processes" resources, developed by the Nuffield Foundation. It features Nuffield AMP activities--and related ones from Bowland Maths--that were designed to support the teaching and assessment of key processes in mathematics--representing a situation mathematically, analysing,…
Processing of microCT implant-bone systems images using Fuzzy Mathematical Morphology
NASA Astrophysics Data System (ADS)
Bouchet, A.; Colabella, L.; Omar, S.; Ballarre, J.; Pastore, J.
2016-04-01
The relationship between a metallic implant and the existing bone in a permanent surgical prosthesis is of great importance, since the fixation and osseointegration of the system determine the failure or success of the surgery. Micro computed tomography (microCT) is a technique that helps to visualize the structure of bone. In this study, microCT is used to analyze implant-bone system images. However, one of the problems presented in the reconstruction of these images is the effect of iron-based implants: a halo or fluorescence scattering distorts the microCT image and leads to poor 3D reconstructions. In this work we introduce an automatic method, based on the application of Compensatory Fuzzy Mathematical Morphology, for eliminating the effect of AISI 316L iron materials in the implant-bone system, to enable future investigation of the structural and mechanical properties of bone and cancellous materials.
Investigating Teachers' Images of Mathematics
ERIC Educational Resources Information Center
Sterenberg, Gladys
2008-01-01
Research suggests that understanding new images of mathematics is very challenging and can contribute to teacher resistance. An explicit exploration of personal views of mathematics may be necessary for pedagogical change. One possible way for exploring these images is through mathematical metaphors. As metaphors focus on similarities, they can be…
Semantic Processing of Mathematical Gestures
ERIC Educational Resources Information Center
Lim, Vanessa K.; Wilson, Anna J.; Hamm, Jeff P.; Phillips, Nicola; Iwabuchi, Sarina J.; Corballis, Michael C.; Arzarello, Ferdinando; Thomas, Michael O. J.
2009-01-01
Objective: To examine whether or not university mathematics students semantically process gestures depicting mathematical functions (mathematical gestures) similarly to the way they process action gestures and sentences. Semantic processing was indexed by the N400 effect. Results: The N400 effect elicited by words primed with mathematical gestures…
Images, Anxieties and Attitudes toward Mathematics
ERIC Educational Resources Information Center
Belbase, Shashidhar
2010-01-01
Images, anxieties, and attitudes towards mathematics are of common interest among mathematics teachers, teacher educators, and researchers. The main purpose of this literature-review-based paper is to discuss and analyze images, anxieties, and attitudes towards mathematics in order to foster meaningful teaching and learning of mathematics. Images of…
Images, Anxieties, and Attitudes toward Mathematics
ERIC Educational Resources Information Center
Belbase, Shashidhar
2013-01-01
The purpose of this paper is to discuss and analyze images, anxieties, and attitudes towards mathematics in order to foster meaningful teaching and learning of mathematics. Images of mathematics seem to be profoundly shaped by epistemological, philosophical, and pedagogical perspectives of one who views mathematics either as a priori or a…
Introduction to computer image processing
NASA Technical Reports Server (NTRS)
Moik, J. G.
1973-01-01
Theoretical backgrounds and digital techniques for a class of image processing problems are presented. Image formation in the context of linear system theory, image evaluation, noise characteristics, and mathematical operations on images and their implementation are discussed. Various techniques for image restoration and image enhancement are presented. Methods for object extraction and the problem of pictorial pattern recognition and classification are discussed.
Exploring Mathematical Definition Construction Processes
ERIC Educational Resources Information Center
Ouvrier-Buffet, Cecile
2006-01-01
The definition of "definition" cannot be taken for granted. The problem has been treated from various angles in different journals. Among other questions raised on the subject we find: the notions of "concept definition" and "concept image", conceptions of mathematical definitions, redefinitions, and from a more axiomatic point of view, how to…
NASA Technical Reports Server (NTRS)
1993-01-01
Electronic Imagery, Inc.'s ImageScale Plus software, developed through a Small Business Innovation Research (SBIR) contract with Kennedy Space Center for use on the Space Shuttle orbiter in 1991, enables astronauts to conduct image processing, prepare electronic still camera images in orbit, display them, and downlink images to ground-based scientists for evaluation. Electronic Imagery, Inc.'s ImageCount, a spin-off product of ImageScale Plus, is used to count trees in Florida orange groves. Other applications include x-ray and MRI imagery, textile designs, and special effects for movies. As of 1/28/98, the company could not be located; therefore contact/product information is no longer valid.
Image processing and reconstruction
Chartrand, Rick
2012-06-15
This talk will examine some mathematical methods for image processing and the solution of underdetermined, linear inverse problems. The talk will have a tutorial flavor, mostly accessible to undergraduates, while still presenting research results. The primary approach is the use of optimization problems. We will find that relaxing the usual assumption of convexity will give us much better results.
Image Processing for Teaching.
ERIC Educational Resources Information Center
Greenberg, R.; And Others
1993-01-01
The Image Processing for Teaching project provides a powerful medium to excite students about science and mathematics, especially children from minority groups and others whose needs have not been met by traditional teaching. Using professional-quality software on microcomputers, students explore a variety of scientific data sets, including…
Workbook, Basic Mathematics and Wastewater Processing Calculations.
ERIC Educational Resources Information Center
New York State Dept. of Environmental Conservation, Albany.
This workbook serves as a self-learning guide to basic mathematics and treatment plant calculations and also as a reference and source book for the mathematics of sewage treatment and processing. In addition to basic mathematics, the workbook discusses processing and process control, laboratory calculations and efficiency calculations necessary in…
Mathematical Modeling: A Structured Process
ERIC Educational Resources Information Center
Anhalt, Cynthia Oropesa; Cortez, Ricardo
2015-01-01
Mathematical modeling, in which students use mathematics to explain or interpret physical, social, or scientific phenomena, is an essential component of the high school curriculum. The Common Core State Standards for Mathematics (CCSSM) classify modeling as a K-12 standard for mathematical practice and as a conceptual category for high school…
Gifted Students' Metaphor Images about Mathematics
ERIC Educational Resources Information Center
Arikan, Elif Esra; Unal, Hasan
2015-01-01
The aim of this study is to investigate the metaphor images of gifted students about mathematics. The sample of the study consists of 82 gifted students in grades 2 through 7 from Istanbul. Data were collected by asking students to complete the sentence: "Mathematics is as …, because…". In the study content analysis was…
ERIC Educational Resources Information Center
Yilmaz, Suha; Tekin-Dede, Ayse
2016-01-01
Mathematization competency is considered in the field as the focus of modelling process. Considering the various definitions, the components of the mathematization competency are determined as identifying assumptions, identifying variables based on the assumptions and constructing mathematical model/s based on the relations among identified…
Self and Peer Assessment of Mathematical Processes
ERIC Educational Resources Information Center
Onion, Alice; Javaheri, Elnaz
2011-01-01
This article explores using Bowland assessment tasks and Nuffield Applying Mathematical Processes (AMP) activities as part of a scheme of work. The Bowland tasks and Nuffield AMP activities are designed to develop students' mathematical thinking; they are focused on key processes. Unfamiliar demands are made on the students and they are challenged…
Mathematics from Still and Moving Images
ERIC Educational Resources Information Center
Pierce, Robyn; Stacey, Kaye; Ball, Lynda
2005-01-01
Digital photos and digital movies offer an excellent way of bringing real world situations into the mathematics classroom. The technologies surveyed here are feasible for everyday classroom use and inexpensive. Examples are drawn from the teaching of Cartesian coordinates, linear functions, ratio and Pythagoras' theorem using still images, and…
Iterative Processes in Mathematics Education
ERIC Educational Resources Information Center
Mudaly, Vimolan
2009-01-01
There are many arguments that reflect on inductive versus deductive methods in mathematics. Claims are often made that teaching from the general to the specific does make understanding better for learners or vice versa. I discuss an intervention conducted with Grade 10 (15-year-old) learners in a small suburb in South Africa. I reflect on the…
An Emergent Framework: Views of Mathematical Processes
ERIC Educational Resources Information Center
Sanchez, Wendy B.; Lischka, Alyson E.; Edenfield, Kelly W.; Gammill, Rebecca
2015-01-01
The findings reported in this paper were generated from a case study of teacher leaders at a state-level mathematics conference. Investigation focused on how participants viewed the mathematical processes of communication, connections, representations, problem solving, and reasoning and proof. Purposeful sampling was employed to select nine…
Lensless ghost imaging based on mathematical simulation and experimental simulation
NASA Astrophysics Data System (ADS)
Liu, Yanyan; Wang, Biyi; Zhao, Yingchao; Dong, Junzhang
2014-02-01
The differences between conventional imaging and correlated imaging are discussed in this paper. The mathematical model of a lensless ghost imaging system is set up, and the image of a double slit is computed by mathematical simulation. The results are also tested by experimental verification. Both the theoretical simulation and the experimental verification show that the mathematical model, based on statistical optics principles, is consistent with the real experimental results.
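The correlated-imaging idea behind this abstract can be sketched numerically: a bucket detector records only the total transmitted intensity, and correlating it with the reference-arm speckle patterns recovers the object. The slit geometry, frame count, and uniform speckle statistics below are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_frames = 200, 20000

# Object: a 1-D double slit (transmission 1 inside the slits, 0 elsewhere).
T = np.zeros(n_pix)
T[80:90] = T[110:120] = 1.0

# Each frame: a random speckle field illuminates both arms.
I = rng.random((n_frames, n_pix))   # reference-arm intensities
B = I @ T                           # bucket (single-pixel) signal

# Second-order correlation: G(x) = <B * I(x)> - <B><I(x)>.
G = (B[:, None] * I).mean(axis=0) - B.mean() * I.mean(axis=0)
G /= G.max()

# The correlation peaks reproduce the double-slit transmission.
recovered = G > 0.5
```

The correlation cancels the mean illumination and leaves only the covariance between the bucket signal and each reference pixel, which is proportional to the object's transmission at that pixel.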
NASA Technical Reports Server (NTRS)
Gunther, F. J.
1986-01-01
Apple Image-Processing Educator (AIPE) explores the ability of microcomputers to provide personalized computer-assisted instruction (CAI) in digital image processing of remotely sensed images. AIPE is a "proof-of-concept" system, not a polished production system. User-friendly prompts provide access to explanations of common features of digital image processing and of sample programs that implement these features.
Multispectral imaging and image processing
NASA Astrophysics Data System (ADS)
Klein, Julie
2014-02-01
The color accuracy of conventional RGB cameras is not sufficient for many color-critical applications. One of these applications, namely the measurement of color defects in yarns, is why Prof. Til Aach and the Institute of Image Processing and Computer Vision (RWTH Aachen University, Germany) started off with multispectral imaging. The first acquisition device was a camera using a monochrome sensor and seven bandpass color filters positioned sequentially in front of it. The camera allowed sampling the visible wavelength range more accurately and reconstructing the spectra for each acquired image position. An overview will be given of several optical and imaging aspects of the multispectral camera that have been investigated. For instance, optical aberrations caused by filters and camera lens deteriorate the quality of captured multispectral images. The different aberrations were analyzed thoroughly and compensated based on models for the optical elements and the imaging chain by utilizing image processing. With this compensation, geometrical distortions disappear and sharpness is enhanced, without reducing the color accuracy of multispectral images. Strong foundations in multispectral imaging were laid and a fruitful cooperation was initiated with Prof. Bernhard Hill. Current research topics like stereo multispectral imaging and goniometric multispectral measurements that are further explored with his expertise will also be presented in this work.
Mathematics of Information Processing and the Internet
ERIC Educational Resources Information Center
Hart, Eric W.
2010-01-01
The mathematics of information processing and the Internet can be organized around four fundamental themes: (1) access (finding information easily); (2) security (keeping information confidential); (3) accuracy (ensuring accurate information); and (4) efficiency (data compression). In this article, the author discusses each theme with reference to…
Huang, H.K.
1981-01-01
Biomedical image processing is a very broad field; it covers everything from biomedical signal gathering, image forming, picture processing, and image display to medical diagnosis based on features extracted from images. This article reviews the topic in both its fundamentals and its applications. Among the fundamentals, basic image processing techniques including outlining, deblurring, noise cleaning, filtering, search, classical analysis, and texture analysis are reviewed together with examples. State-of-the-art image processing systems are introduced and discussed in two categories: general-purpose image processing systems and image analyzers. In order for these systems to be effective for biomedical applications, special biomedical image processing languages have to be developed. The combination of both hardware and software leads to clinical imaging devices. Two different types of clinical imaging devices are discussed. These include the radiological imaging modalities: radiography, thermography, ultrasound, nuclear medicine, and CT. Among these, thermography is the most noninvasive but is limited in application due to the low energy of its source. X-ray CT is excellent for static anatomical images and is moving toward the measurement of dynamic function, whereas nuclear imaging is moving toward organ metabolism and ultrasound toward tissue physical characteristics. Heart imaging is one of the most interesting and challenging research topics in biomedical image processing; current methods, including the invasive technique of cineangiography and the noninvasive ultrasound, nuclear medicine, transmission CT, and emission CT methodologies, are reviewed.
NASA Technical Reports Server (NTRS)
1990-01-01
The Ames digital image velocimetry technology has been incorporated in a commercially available image processing software package that allows motion measurement of images on a PC alone. The software, manufactured by Werner Frei Associates, is IMAGELAB FFT. IMAGELAB FFT is a general purpose image processing system with a variety of other applications, among them image enhancement of fingerprints and use by banks and law enforcement agencies for analysis of videos run during robberies.
Hyperspectral image processing methods
Technology Transfer Automated Retrieval System (TEKTRAN)
Hyperspectral image processing refers to the use of computer algorithms to extract, store and manipulate both spatial and spectral information contained in hyperspectral images across the visible and near-infrared portion of the electromagnetic spectrum. A typical hyperspectral image processing work...
The (Mathematical) Modeling Process in Biosciences
Torres, Nestor V.; Santos, Guido
2015-01-01
In this communication, we introduce a general framework and discussion on the role of models and the modeling process in the field of biosciences. The objective is to sum up the common procedures during the formalization and analysis of a biological problem from the perspective of Systems Biology, which approaches the study of biological systems as a whole. We begin by presenting the definitions of (biological) system and model. Particular attention is given to the meaning of mathematical model within the context of biology. Then, we present the process of modeling and analysis of biological systems. Three stages are described in detail: conceptualization of the biological system into a model, mathematical formalization of the previous conceptual model and optimization and system management derived from the analysis of the mathematical model. All along this work the main features and shortcomings of the process are analyzed and a set of rules that could help in the task of modeling any biological system are presented. Special regard is given to the formative requirements and the interdisciplinary nature of this approach. We conclude with some general considerations on the challenges that modeling is posing to current biology. PMID:26734063
Subroutines For Image Processing
NASA Technical Reports Server (NTRS)
Faulcon, Nettie D.; Monteith, James H.; Miller, Keith W.
1988-01-01
Image Processing Library computer program, IPLIB, is collection of subroutines facilitating use of COMTAL image-processing system driven by HP 1000 computer. Functions include addition or subtraction of two images with or without scaling, display of color or monochrome images, digitization of image from television camera, display of test pattern, manipulation of bits, and clearing of screen. Provides capability to read or write points, lines, and pixels from image; read or write at location of cursor; and read or write array of integers into COMTAL memory. Written in FORTRAN 77.
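The add/subtract-with-scaling operation that IPLIB provides can be sketched in a few lines. This mirrors the idea of those routines, not IPLIB's actual FORTRAN 77 interfaces; the function name and clipping behavior below are assumptions for illustration.

```python
import numpy as np

def add_scaled(a, b, sa=1.0, sb=1.0):
    """Add two images with optional per-image scaling, clipping the
    result to the 8-bit display range (a sketch, not IPLIB's API)."""
    out = sa * a.astype(np.float64) + sb * b.astype(np.float64)
    return np.clip(out, 0, 255).astype(np.uint8)

a = np.full((2, 2), 200, dtype=np.uint8)
b = np.full((2, 2), 100, dtype=np.uint8)
print(add_scaled(a, b, 0.5, 0.5))  # averaged image
print(add_scaled(a, b))            # unscaled sum, clipped to 255
```

Scaling before the addition is what keeps the result inside the displayable range; without it, the sum saturates and detail in bright regions is lost.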
Differential morphology and image processing.
Maragos, P
1996-01-01
Image processing via mathematical morphology has traditionally used geometry to intuitively understand morphological signal operators and set or lattice algebra to analyze them in the space domain. We provide a unified view and analytic tools for morphological image processing that is based on ideas from differential calculus and dynamical systems. This includes ideas on using partial differential or difference equations (PDEs) to model distance propagation or nonlinear multiscale processes in images. We briefly review some nonlinear difference equations that implement discrete distance transforms and relate them to numerical solutions of the eikonal equation of optics. We also review some nonlinear PDEs that model the evolution of multiscale morphological operators and use morphological derivatives. Among the new ideas presented, we develop some general 2-D max/min-sum difference equations that model the space dynamics of 2-D morphological systems (including the distance computations) and some nonlinear signal transforms, called slope transforms, that can analyze these systems in a transform domain in ways conceptually similar to the application of Fourier transforms to linear systems. Thus, distance transforms are shown to be bandpass slope filters. We view the analysis of the multiscale morphological PDEs and of the eikonal PDE solved via weighted distance transforms as a unified area in nonlinear image processing, which we call differential morphology, and briefly discuss its potential applications to image processing and computer vision.
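The min-sum difference equations this abstract mentions have a classic concrete instance: a two-pass sweep that computes the discrete (city-block) distance transform, each pass propagating distances from already-computed neighbors. This is a generic sketch of that idea, not the authors' code.

```python
import numpy as np

def chamfer_distance(binary):
    """City-block distance transform via min-sum difference equations:
    two sequential sweeps propagate distances away from the source set."""
    INF = 10**9
    h, w = binary.shape
    d = np.where(binary, 0, INF).astype(np.int64)
    # Forward sweep: take the minimum over the top and left neighbors.
    for i in range(h):
        for j in range(w):
            if i > 0:
                d[i, j] = min(d[i, j], d[i - 1, j] + 1)
            if j > 0:
                d[i, j] = min(d[i, j], d[i, j - 1] + 1)
    # Backward sweep: take the minimum over the bottom and right neighbors.
    for i in range(h - 1, -1, -1):
        for j in range(w - 1, -1, -1):
            if i < h - 1:
                d[i, j] = min(d[i, j], d[i + 1, j] + 1)
            if j < w - 1:
                d[i, j] = min(d[i, j], d[i, j + 1] + 1)
    return d

src = np.zeros((5, 5), dtype=bool)
src[2, 2] = True  # single source pixel at the center
print(chamfer_distance(src))
```

Each update is a discrete min-sum equation d(i,j) = min over neighbors of d(neighbor) + 1, which is exactly the kind of nonlinear difference equation the paper relates to numerical solutions of the eikonal equation.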
Medical image processing system
NASA Astrophysics Data System (ADS)
Wang, Dezong; Wang, Jinxiang
1994-12-01
In this paper a medical image processing system is described. The system, named the NAI200 Medical Image Processing System, has been appraised by the Chinese government; its principles are explained and cases are provided here. Many kinds of pictures are used in modern medical diagnosis, for example B-ultrasound, X-ray, CT, and MRI. Sometimes the pictures are not good enough for diagnosis: noise obscures the real situation in these pictures, which means image processing is needed. The system has four functions. The first is image processing, involving more than thirty-four programs. The second is calculation: the areas or volumes of single or multiple tissues are computed. Three-dimensional reconstruction is the third: stereo images of organs or tumors are reconstructed from cross-sections. The last is image storage: all pictures can be transformed into digital images and then stored on hard or floppy disk. In this paper all functions of the system are introduced, and the basic principles behind them are explained in detail. The system has been applied in hospitals, and images from hundreds of cases have been processed. We describe the functions in combination with real cases, of which only a few examples are introduced here.
NASA Astrophysics Data System (ADS)
Dallas, William J.; Roehrig, Hans
2001-12-01
This article is divided into two parts: the first is an opinion, the second is a description. The opinion is that diagnostic medical imaging is not a detection problem. The description is of a specific medical image-processing program. Why the opinion? If medical imaging were a detection problem, then image processing would be unimportant. However, image processing is crucial. We illustrate this fact using three examples: ultrasound, magnetic resonance imaging and, most poignantly, computed radiography. Although the examples are anecdotal, they are illustrative. The description is of the image-processing program ImprocRAD, written by one of the authors (Dallas). First we discuss the motivation for creating yet another image processing program, including system characterization, which is an area of expertise of one of the authors (Roehrig). We then look at the structure of the program and finally, to the point, the specific application: mammographic diagnostic reading. We mention rapid display of mammogram image sets and then discuss processing. In that context, we describe a real-time image-processing tool we term the MammoGlass.
NASA Technical Reports Server (NTRS)
Zolotukhin, V. G.; Kolosov, B. I.; Usikov, D. A.; Borisenko, V. I.; Mosin, S. T.; Gorokhov, V. N.
1980-01-01
A batch of programs for the YeS-1040 computer, combined into an automated system for processing photographic (and video) images of the Earth's surface taken from spacecraft, is described. Individual programs are presented with detailed discussion of the algorithmic and programmatic facilities needed by the user. The basic principles for assembling the system and the control programs are included. The exchange format, within which any programs recommended for the processing system will be cataloged in the future, is also described.
Mathematical model on a desalination process
Al-Samawi, A.A.
1994-05-01
Mathematical models of the desalination of brackish water using the EDR (electrodialysis reversal) process are formulated. The desalinated-water production rate is hypothesized as being dependent upon the following independent variables: total dissolved solids of the feed water, total dissolved solids of the product water, the rate of feed water, the temperature of feed water, the number of membrane stages, and the energy consumption. The final model, selected on a statistical basis, is considered appropriate both for prediction purposes and for quantifying the separate effect of each significant variable upon the rate of production of desalted water. Results of the analysis are reported herein. 6 refs., 4 figs., 5 tabs.
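A model of the form described here can be sketched as an ordinary least-squares fit of the production rate on the six predictors. The predictor names follow the abstract, but every number and coefficient below is a synthetic illustration, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200

# Hypothetical predictors (names follow the abstract; values synthetic):
# feed-water TDS, product-water TDS, feed rate, feed temperature,
# number of membrane stages, energy consumption.
X = rng.random((n, 6))
true_beta = np.array([-1.5, 0.8, 2.0, 0.5, 1.2, 0.9])
y = X @ true_beta + 3.0 + 0.01 * rng.standard_normal(n)  # product rate

# Fit y = b0 + X b by ordinary least squares.
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.round(coef, 2))  # intercept followed by the six effects
```

The fitted coefficients are exactly the "separate effects of each significant variable" the abstract refers to: each one estimates the change in production rate per unit change in its predictor, holding the others fixed.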
Apple Image Processing Educator
NASA Technical Reports Server (NTRS)
Gunther, F. J.
1981-01-01
A software system design is proposed and demonstrated with pilot-project software. The system permits the Apple II microcomputer to be used for personalized computer-assisted instruction in the digital image processing of LANDSAT images. The programs provide data input, menu selection, graphic and hard-copy displays, and both general and detailed instructions. The pilot-project results are considered to be successful indicators of the capabilities and limits of microcomputers for digital image processing education.
NASA Technical Reports Server (NTRS)
1992-01-01
To convert raw data into environmental products, the National Weather Service and other organizations use the Global 9000 image processing system marketed by Global Imaging, Inc. The company's GAE software package is an enhanced version of the TAE, developed by Goddard Space Flight Center to support remote sensing and image processing applications. The system can be operated in three modes and is combined with HP Apollo workstation hardware.
NASA Technical Reports Server (NTRS)
Matthews, Christine G.; Posenau, Mary-Anne; Leonard, Desiree M.; Avis, Elizabeth L.; Debure, Kelly R.; Stacy, Kathryn; Vonofenheim, Bill
1992-01-01
The intent is to provide an introduction to the image processing capabilities available at the Langley Research Center (LaRC) Central Scientific Computing Complex (CSCC). Various image processing software components are described. Information is given concerning the use of these components in the Data Visualization and Animation Laboratory at LaRC.
Mathematical modeling of the coating process.
Toschkoff, Gregor; Khinast, Johannes G
2013-12-01
Coating of tablets is a common unit operation in the pharmaceutical industry. In most cases, the final product must meet strict quality requirements; to meet them, a detailed understanding of the coating process is required. To this end, numerous experimental studies have been performed. However, to acquire a mechanistic understanding, experimental data must be interpreted in the light of mathematical models. In recent years, a combination of analytical modeling and computational simulations enabled deeper insights into the nature of the coating process. This paper presents an overview of modeling and simulation approaches of the coating process, covering various relevant aspects from scale-up considerations to coating mass uniformity investigations and models for drop atomization. The most important analytical and computational concepts are presented and the findings are compared.
NASA Technical Reports Server (NTRS)
1986-01-01
Mallinckrodt Institute of Radiology (MIR) is using a digital image processing system which employs NASA-developed technology. MIR's computer system is the largest radiology system in the world. It is used in diagnostic imaging. Blood vessels are injected with x-ray dye, and the images which are produced indicate whether arteries are hardened or blocked. A computer program developed by Jet Propulsion Laboratory known as Mini-VICAR/IBIS was supplied to MIR by COSMIC. The program provides the basis for developing the computer imaging routines for data processing, contrast enhancement and picture display.
Mathematical Analysis and Optimization of Infiltration Processes
NASA Technical Reports Server (NTRS)
Chang, H.-C.; Gottlieb, D.; Marion, M.; Sheldon, B. W.
1997-01-01
A variety of infiltration techniques can be used to fabricate solid materials, particularly composites. In general these processes can be described with at least one time dependent partial differential equation describing the evolution of the solid phase, coupled to one or more partial differential equations describing mass transport through a porous structure. This paper presents a detailed mathematical analysis of a relatively simple set of equations which is used to describe chemical vapor infiltration. The results demonstrate that the process is controlled by only two parameters, alpha and beta. The optimization problem associated with minimizing the infiltration time is also considered. Allowing alpha and beta to vary with time leads to significant reductions in the infiltration time, compared with the conventional case where alpha and beta are treated as constants.
Collective Mathematical Understanding as an Improvisational Process
ERIC Educational Resources Information Center
Martin, Lyndon C.; Towers, Jo
2003-01-01
This paper explores the phenomenon of mathematical understanding, and offers a response to the question raised by Martin (2001) at PME-NA about the possibility for and nature of collective mathematical understanding. In referring to collective mathematical understanding we point to the kinds of learning and understanding we may see occurring when…
Mathematical modeling of biomass fuels formation process.
Gaska, Krzysztof; Wandrasz, Andrzej J
2008-01-01
The increasing demand for thermal and electric energy in many branches of industry and municipal management accounts for a drastic diminishing of natural resources (fossil fuels). Meanwhile, numerous technical processes produce a huge mass of waste. A segregated and converted combustible fraction of the waste, with relatively high calorific value, may be used as a component of formed fuels. The utilization of formed-fuel components from segregated groups of waste in associated processes of co-combustion with conventional fuels yields significant savings, resulting from the partial replacement of fossil fuels, and reduces environmental pollution by limiting waste migration to the environment (soil, atmospheric air, surface and underground water). The realization of technological processes utilizing formed fuel in associated thermal systems should be qualified by technical criteria, meaning that neither the elementary processes nor the factors of sustainable development, from a global viewpoint, may be disturbed. The utilization of post-process waste should be preceded by detailed technical, ecological, and economic analyses. In order to optimize the mixing process of fuel components, a mathematical model of the forming process was created. The model is defined as a group of data structures that uniquely identify a real process, together with algorithms that operate on these data, formulated as a linear programming problem. The paper also presents the optimization of parameters in the fuel-forming process using a modified simplex algorithm with polynomial running time. This model is a starting point for the numerical modeling of real processes, allowing a precise determination of the optimal elementary composition of formed-fuel components, with assumed constraints and decision variables of the task.
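The linear-programming formulation can be sketched as a small blending problem: choose mass fractions of waste-derived components that minimize cost while meeting a calorific-value constraint. The component costs, calorific values, and bounds below are invented for illustration, and the sketch uses `scipy.optimize.linprog` rather than the authors' modified simplex implementation.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical combustible fractions (illustrative numbers only):
# cost per kg and calorific value (MJ/kg) of three components.
cost = np.array([0.05, 0.12, 0.08])
calorific = np.array([14.0, 22.0, 18.0])

# Decision variables: mass fractions x_i of each component.
# Minimize cost subject to: fractions sum to 1, blend calorific value
# of at least 17 MJ/kg, and each fraction capped at 0.6.
res = linprog(
    c=cost,
    A_ub=[-calorific],     # -sum(cal_i * x_i) <= -17
    b_ub=[-17.0],
    A_eq=[np.ones(3)],
    b_eq=[1.0],
    bounds=[(0.0, 0.6)] * 3,
)
assert res.success
print(np.round(res.x, 3), round(res.fun, 5))
```

The constraints here stand in for the "assumed constraints and decision variables" of the abstract; a real formulation would add elemental-composition and emissions limits as further rows of the constraint matrix.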
BAOlab: Image processing program
NASA Astrophysics Data System (ADS)
Larsen, Søren S.
2014-03-01
BAOlab is an image processing package written in C that should run on nearly any UNIX system with just the standard C libraries. It reads and writes images in standard FITS format; 16- and 32-bit integer as well as 32-bit floating-point formats are supported. Multi-extension FITS files are currently not supported. Among its tools are ishape for size measurements of compact sources, mksynth for generating synthetic images consisting of a background signal including Poisson noise and a number of pointlike sources, imconvol for convolving two images (a “source” and a “kernel”) with each other using fast Fourier transforms (FFTs) and storing the output as a new image, and kfit2d for fitting a two-dimensional King model to an image.
Methods in Astronomical Image Processing
NASA Astrophysics Data System (ADS)
Jörsäter, S.
A Brief Introductory Note; History of Astronomical Imaging; Astronomical Image Data; Images in Various Formats; Digitized Image Data; Digital Image Data; Philosophy of Astronomical Image Processing; Properties of Digital Astronomical Images; Human Image Processing; Astronomical vs. Computer Science Image Processing; Basic Tools of Astronomical Image Processing; Display Applications; Calibration of Intensity Scales; Calibration of Length Scales; Image Re-shaping; Feature Enhancement; Noise Suppression; Noise and Error Analysis; Image Processing Packages: Design of AIPS and MIDAS; AIPS; MIDAS; Reduction of CCD Data; Bias Subtraction; Clipping; Preflash Subtraction; Dark Subtraction; Flat Fielding; Sky Subtraction; Extinction Correction; Deconvolution Methods; Rebinning/Combining; Summary and Prospects for the Future
Subdivision processes in mathematics and science
NASA Astrophysics Data System (ADS)
Stavy, Ruth; Tirosh, Dina
In the course of a research project now in progress, three successive division problems were presented to students in Grades 7-12. The first problem concerned a geometrical line segment, while the other two dealt with material substances (copper wire and water). All three problems involved the same process: successive division. Two of the problems (line segment and copper wire) were also figurally similar. Our data indicate that the similarity in the process had a profound effect on students' responses. The effect of the similarity in process suggests that the repeated process of division has a coercive effect, imposing itself on students' responses and encouraging them to view successive division processes as finite or infinite regardless of the content of the problem.

It is possible to trace out, step by step, a more or less parallel process of development for the ideas of points and continuity and those dealing with atoms and physical objects in the child's conception of the ideal world. The only difference between these two processes is that to the child's way of thinking physical points or atoms still possess surface and volume, whereas mathematical points tend to lose all extension (though during the stages of development which concern us here, this remains only a tendency). (Piaget & Inhelder, 1948, p. 126)

Our first naive impression of nature and matter is that of continuity. Be it a piece of matter or a volume of liquid, we invariably conceive it as divisible into infinity, and even so small a part of it appears to us to possess the same properties as the whole. (Hilbert, 1925, p. 162)
The Image of Mathematics Held by Irish Post-Primary Students
ERIC Educational Resources Information Center
Lane, Ciara; Stynes, Martin; O'Donoghue, John
2014-01-01
The image of mathematics held by Irish post-primary students was examined and a model for the image found was constructed. Initially, a definition for "image of mathematics" was adopted with image of mathematics hypothesized as comprising attitudes, beliefs, self-concept, motivation, emotions and past experiences of mathematics. Research…
Students' Images of Mathematics
ERIC Educational Resources Information Center
Martin, Lee; Gourley-Delaney, Pamela
2014-01-01
Students' judgments about "what counts" as mathematics in and out of school have important consequences for problem solving and transfer, yet our understanding of the source and nature of these judgments remains incomplete. Thirty-five sixth grade students participated in a study focused on what activities students judge as…
Stochastic processes, estimation theory and image enhancement
NASA Technical Reports Server (NTRS)
Assefi, T.
1978-01-01
An introductory account of stochastic processes, estimation theory, and image enhancement is presented. The book is primarily intended for first-year graduate students and practicing engineers and scientists whose work requires an acquaintance with the theory. Fundamental concepts of probability that are required to support the main topics are reviewed, and the appendices discuss the remaining mathematical background.
Image processing occupancy sensor
Brackney, Larry J.
2016-09-27
A system and method for detecting occupants in a building automation system environment using image-based occupancy detection and position determination. In one example, the system includes an image processing occupancy sensor that detects the number and position of occupants within a space that has controllable building elements such as lighting and ventilation diffusers. Based on the position and location of the occupants, the system can finely control these elements to optimize conditions for the occupants and optimize energy usage, among other advantages.
Processes and Priorities in Planning Mathematics Teaching
ERIC Educational Resources Information Center
Sullivan, Peter; Clarke, David J.; Clarke, Doug M.; Farrell, Lesley; Gerrard, Jessica
2013-01-01
Insights into teachers' planning of mathematics reported here were gathered as part of a broader project examining aspects of the implementation of the Australian curriculum in mathematics (and English). In particular, the responses of primary and secondary teachers to a survey of various aspects of decisions that inform their use of…
Mathematical Problem Solving through Sequential Process Analysis
ERIC Educational Resources Information Center
Codina, A.; Cañadas, M. C.; Castro, E.
2015-01-01
Introduction: The macroscopic perspective is one of the frameworks for research on problem solving in mathematics education. Coming from this perspective, our study addresses the stages of thought in mathematical problem solving, offering an innovative approach because we apply sequential relations and global interrelations between the different…
Cognition in Children's Mathematical Processing: Bringing Psychology to the Classroom
ERIC Educational Resources Information Center
Witt, Marcus
2010-01-01
Introduction: The cognitive processes that underpin successful mathematical processing in children have been well researched by experimental psychologists, but are not widely understood among teachers of primary mathematics. This is a shame, as an understanding of these cognitive processes could be highly useful to practitioners. This paper…
Programmable Image Processing Element
NASA Astrophysics Data System (ADS)
Eversole, W. L.; Salzman, J. F.; Taylor, F. V.; Harland, W. L.
1982-07-01
The algorithmic solution to many image-processing problems frequently uses sums of products, where each multiplicand is an input sample (pixel) and each multiplier is a stored coefficient. This paper presents a large-scale integrated circuit (LSIC) implementation that provides accumulation of nine products and discusses its evolution from design through application. A read-only memory (ROM) accumulate algorithm is used to perform the multiplications and is the key to one-chip implementation. The ROM function is actually implemented with erasable programmable ROM (EPROM) to allow reprogramming of the circuit for a variety of different functions. A real-time brassboard is being constructed to demonstrate four different image-processing operations on TV images.
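In software terms, the sum of nine products that the LSIC accumulates is a 3x3 convolution. A plain-Python sketch of the same arithmetic (valid region only, purely illustrative):

```python
def convolve3x3(image, kernel):
    """Each output pixel is the sum of nine products of pixel values and
    stored coefficients (valid region only, no border handling)."""
    h, w = len(image), len(image[0])
    out = [[0] * (w - 2) for _ in range(h - 2)]
    for i in range(h - 2):
        for j in range(w - 2):
            out[i][j] = sum(image[i + di][j + dj] * kernel[di][dj]
                            for di in range(3) for dj in range(3))
    return out

# A Laplacian-style kernel responds strongly at the edges of a bright patch.
img = [[0, 0, 0, 0],
       [0, 9, 9, 0],
       [0, 9, 9, 0],
       [0, 0, 0, 0]]
k = [[0, 1, 0],
     [1, -4, 1],
     [0, 1, 0]]
res = convolve3x3(img, k)
```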
Meaning and Process in Mathematics and Programming.
ERIC Educational Resources Information Center
Grogono, Peter
1989-01-01
Trends in computer programming language design are described and children's difficulties in learning to write programs for mathematics problems are considered. Languages are compared under the headings of imperative programming, functional programming, logic programming, and pictures. (DC)
NASA Technical Reports Server (NTRS)
Roth, D. J.; Hull, D. R.
1994-01-01
IMAGEP manipulates digital image data to effect various processing, analysis, and enhancement functions. It is a keyboard-driven program organized into nine subroutines; within the subroutines are sub-subroutines, also selected via the keyboard. The algorithm has possible scientific, industrial, and biomedical applications in the study of flows in materials, the analysis of steels and ores, and pathology, respectively.
Lo, Winnie Y; Puchalski, Sarah M
2008-01-01
Image processing or digital image manipulation is one of the greatest advantages of digital radiography (DR). Preprocessing depends on the modality and corrects for system irregularities such as differential light detection efficiency, dead pixels, or dark noise. Processing is manipulation of the raw data just after acquisition. It is generally proprietary and specific to the DR vendor but encompasses manipulations such as unsharp mask filtering within two or more spatial frequency bands, histogram sliding and stretching, and gray scale rendition or lookup table application. These processing steps have a profound effect on the final appearance of the radiograph, but they can also lead to artifacts unique to digital systems. Postprocessing refers to manipulation of the final appearance of the radiograph by the end-user and does not involve alteration of the raw data.
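A minimal sketch of the unsharp-mask idea mentioned above, shown in one dimension for clarity (vendor implementations operate on 2-D images, often across several spatial frequency bands): add back a scaled copy of the high-frequency residual, which exaggerates edges.

```python
def box_blur(signal, radius=1):
    """Moving-average blur with edge clamping."""
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def unsharp_mask(signal, radius=1, amount=1.0):
    """Sharpen by adding back the high-frequency residual:
    s + amount * (s - blur(s))."""
    blurred = box_blur(signal, radius)
    return [s + amount * (s - b) for s, b in zip(signal, blurred)]

# A step edge gains overshoot/undershoot around the transition.
edge = [0, 0, 0, 10, 10, 10]
sharpened = unsharp_mask(edge)
```

The overshoot this produces is exactly the kind of edge enhancement, and potential digital-only artifact, the abstract refers to.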
Enhancing the Teaching and Learning of Mathematical Visual Images
ERIC Educational Resources Information Center
Quinnell, Lorna
2014-01-01
The importance of mathematical visual images is indicated by the introductory paragraph in the Statistics and Probability content strand of the Australian Curriculum, which draws attention to the importance of learners developing skills to analyse and draw inferences from data and "represent, summarise and interpret data and undertake…
First Year Mathematics Undergraduates' Settled Images of Tangent Line
ERIC Educational Resources Information Center
Biza, Irene; Zachariades, Theodossios
2010-01-01
This study concerns 182 first year mathematics undergraduates' perspectives on the tangent line of function graph in the light of a previous study on Year 12 pupils' perspectives. The aim was the investigation of tangency images that settle after undergraduates' distancing from the notion for a few months and after their participation in…
Van Eeckhout, E.; Pope, P.; Balick, L.
1996-07-01
This is the final report of a two-year, Laboratory-Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The primary objective of this project was to advance image processing and visualization technologies for environmental characterization. This was effected by developing and implementing analyses of remote sensing data from satellite and airborne platforms, and demonstrating their effectiveness in visualization of environmental problems. Many sources of information were integrated as appropriate using geographic information systems.
Mechanical-mathematical modeling for landslide process
NASA Astrophysics Data System (ADS)
Svalova, V.
2009-04-01
500 m and displacement of a landslide in the plan over 1 m. The last serious activation of the landslide took place in 2002, with a motion of 53 cm. Catastrophic activation of the deep blockglide landslide in the area of Khoroshevo in Moscow took place in 2006-2007. A crack 330 m long appeared in the old sliding circus, along which a new 220 m long creeping block separated from the plateau and began sinking, with the displaced surface of the plateau reaching 12 m. Such activation of the landslide process had not been observed in Moscow since the mid-19th century. The sliding area of Khoroshevo had been stable for a long time without manifestations of activity. Revealing the reasons for the deformation and developing means of protection against deep landslide motions is an extremely urgent and difficult problem whose solution is necessary for the preservation of valuable historical monuments and modern city constructions. The reasons for the activation and protective measures are discussed. The structure of a monitoring system for urban territories is elaborated. A mechanical-mathematical model of a highly viscous fluid was used for modeling the behavior of matter on landslide slopes. The equation of continuity and an approximated Navier-Stokes equation for slow motions in a thin layer were used. The results of the modeling make it possible to locate the point of highest velocity on the landslide surface, which could be the best position for a monitoring post. The model can be used for calibration of monitoring equipment and makes it possible to investigate some fundamental aspects of the movement of matter on a landslide slope.
Characterising the Cognitive Processes in Mathematical Investigation
ERIC Educational Resources Information Center
Yeo, Joseph B. W.; Yeap, Ban Har
2010-01-01
Many educators believe that mathematical investigation involves both problem posing and problem solving, but some teachers have taught their students to investigate during problem solving. The confusion about the relationship between investigation and problem solving may affect how teachers teach their students and how researchers conduct their…
Distance University Students' Processing of Mathematics Exercises.
ERIC Educational Resources Information Center
Svenson, I. F.; And Others
1983-01-01
The development and use of a taxonomy for describing the way distance students plan work and use course materials to complete mathematics exercises is detailed. The taxonomy was developed from an exploratory study but the initial results are viewed as having implications for educational use in many situations. (Author/MP)
Visual Processing in Generally Gifted and Mathematically Excelling Adolescents
ERIC Educational Resources Information Center
Paz-Baruch, Nurit; Leikin, Roza; Leikin, Mark
2016-01-01
Little empirical data are available concerning the cognitive abilities of gifted individuals in general and especially those who excel in mathematics. We examined visual processing abilities distinguishing between general giftedness (G) and excellence in mathematics (EM). The research population consisted of 190 students from four groups of 10th-…
Hsu, Chun-Wei; Goh, Joshua O. S.
2016-01-01
When comparing between the values of different choices, human beings can rely on either more cognitive processes, such as using mathematical computation, or more affective processes, such as using emotion. However, the neural correlates of how these two types of processes operate during value-based decision-making remain unclear. In this study, we investigated the extent to which neural regions engaged during value-based decision-making overlap with those engaged during mathematical and emotional processing in a within-subject manner. In a functional magnetic resonance imaging experiment, participants viewed stimuli that always consisted of numbers and emotional faces that depicted two choices. Across tasks, participants decided between the two choices based on the expected value of the numbers, a mathematical result of the numbers, or the emotional face stimuli. We found that all three tasks commonly involved various cortical areas including frontal, parietal, motor, somatosensory, and visual regions. Critically, the mathematical task shared common areas with the value but not emotion task in bilateral striatum. Although the emotion task overlapped with the value task in parietal, motor, and sensory areas, the mathematical task also evoked responses in other areas within these same cortical structures. Minimal areas were uniquely engaged for the value task apart from the other two tasks. The emotion task elicited a more expansive area of neural activity whereas value and mathematical task responses were in more focal regions. Whole-brain spatial correlation analysis showed that valuative processing engaged functional brain responses more similarly to mathematical processing than emotional processing. While decisions on expected value entail both mathematical and emotional processing regions, mathematical processes have a more prominent contribution particularly in subcortical processes. PMID:27375466
scikit-image: image processing in Python
van der Walt, Stéfan; Schönberger, Johannes L.; Nunez-Iglesias, Juan; Boulogne, François; Warner, Joshua D.; Yager, Neil; Gouillart, Emmanuelle; Yu, Tony
2014-01-01
scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image. More information can be found on the project homepage, http://scikit-image.org. PMID:25024921
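As one example of the kind of classical algorithm such a library packages (scikit-image exposes it as skimage.filters.threshold_otsu), Otsu's thresholding can be written in a few lines of plain Python; this sketch is illustrative, not the library's implementation.

```python
def otsu_threshold(pixels, levels=256):
    """Otsu's method: choose the gray level that maximizes the
    between-class variance of the histogram (the algorithm behind
    skimage.filters.threshold_otsu)."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = sum0 = 0
    for t in range(levels):
        w0 += hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0, mu1 = sum0 / w0, (total_sum - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# A bimodal toy "image": dark background near 20, bright objects near 200.
pixels = [18, 20, 22, 25, 19, 21, 198, 200, 202, 205]
t = otsu_threshold(pixels)  # lands between the two modes
```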
Image Processing Diagnostics: Emphysema
NASA Astrophysics Data System (ADS)
McKenzie, Alex
2009-10-01
Currently the computerized tomography (CT) scan can detect emphysema sooner than traditional x-rays, but other tests are required to measure more accurately the amount of affected lung. CT scan images show clearly whether a patient has emphysema, but visual inspection alone cannot quantify the degree of the disease, which appears merely as subtle, barely distinct dark spots on the lung. Our goal is to create a software plug-in that interfaces with existing open-source medical imaging software to automate the process of accurately diagnosing emphysema and determining its severity in patients. This will be accomplished by performing a number of statistical calculations on data taken from CT scan images of several patients representing a wide range of disease severity. These analyses include an examination of the deviation from a normal distribution curve to determine skewness, a commonly used statistical parameter. Our preliminary results show that this method of assessment appears to be more accurate and robust than currently used methods, which involve looking at percentages of radiodensities in the air passages of the lung.
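The skewness statistic mentioned in the abstract is straightforward to compute. A sketch with toy Hounsfield-unit samples (hypothetical numbers, not patient data): a long low-density tail, as in emphysematous lung, drives the skewness negative.

```python
import math

def skewness(values):
    """Third standardized moment, E[((x - mu)/sigma)^3], computed with
    population (1/n) normalization."""
    n = len(values)
    mu = sum(values) / n
    sigma = math.sqrt(sum((v - mu) ** 2 for v in values) / n)
    return sum(((v - mu) / sigma) ** 3 for v in values) / n

# Toy radiodensity samples in Hounsfield units (hypothetical values).
healthy = [-850, -840, -860, -845, -855, -850]   # roughly symmetric
diseased = [-850, -845, -855, -850, -990, -848]  # long low-density tail
# The tail of abnormally low densities makes the distribution negatively skewed.
```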
Analysis of physical processes via imaging vectors
NASA Astrophysics Data System (ADS)
Volovodenko, V.; Efremova, N.; Efremov, V.
2016-06-01
Practically all modeled processes are random in one way or another. The foremost theoretical foundation formulated for them embraces Markov processes, represented in different forms. A Markov process is a random process that undergoes transitions from one state to another on a state space, where the probability distribution of the next state depends only on the current state and not on the sequence of events that preceded it. In a Markov process, the model of the future therefore does not change when additional information about preceding times becomes available. Modeling physical fields generally involves processes changing in time, i.e. non-stationary processes. In this case, applying the Laplace transformation leads to unjustified complications in the description, while a transition to other representations yields explicit simplification. The method of imaging vectors provides constructive mathematical models and the necessary transitions in the modeling process and the analysis itself. The flexibility of a model built on a polynomial basis allows rapid switching of the mathematical model and accelerates further analysis. It should be noted that the mathematical description permits an operator representation; conversely, operator representation of the structures, algorithms and data-processing procedures significantly improves the flexibility of the modeling process.
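The defining Markov property, that the next state depends only on the current one, is easy to demonstrate with a small simulated chain (a generic illustration, not the authors' imaging-vector method):

```python
import random

def markov_step(transition, state, rng):
    """Draw the next state: the distribution depends only on the current
    state, not on how the chain got there (the Markov property)."""
    r = rng.random()
    acc = 0.0
    for nxt, p in enumerate(transition[state]):
        acc += p
        if r < acc:
            return nxt
    return len(transition[state]) - 1  # guard against round-off

# Two-state weather chain: 0 = clear, 1 = cloudy.
P = [[0.9, 0.1],
     [0.5, 0.5]]
rng = random.Random(0)
counts = [0, 0]
for _ in range(10000):
    state = 0
    for _ in range(50):
        state = markov_step(P, state, rng)
    counts[state] += 1
# Long-run occupancy approaches the stationary distribution (5/6, 1/6).
```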
Computer image processing and recognition
NASA Technical Reports Server (NTRS)
Hall, E. L.
1979-01-01
A systematic introduction to the concepts and techniques of computer image processing and recognition is presented. Consideration is given to such topics as image formation and perception; computer representation of images; image enhancement and restoration; reconstruction from projections; digital television, encoding, and data compression; scene understanding; scene matching and recognition; and processing techniques for linear systems.
Image processing and recognition for biological images
Uchida, Seiichi
2013-01-01
This paper reviews image processing and pattern recognition techniques that are useful for analyzing bioimages. Although the paper does not provide their technical details, it makes it possible to grasp their main tasks and the typical tools used to handle them. Image processing is a large research area aimed at improving the visibility of an input image and acquiring valuable information from it. As its main tasks, this paper introduces gray-level transformation, binarization, image filtering, image segmentation, visual object tracking, optical flow and image registration. Image pattern recognition, the technique of classifying an input image into one of a set of predefined classes, is also a large research area. This paper overviews its two main modules, that is, the feature extraction module and the classification module. Throughout the paper, it is emphasized that bioimages are a very difficult target even for state-of-the-art image processing and pattern recognition techniques due to noise, deformation, etc. This paper is expected to serve as a tutorial guide bridging biology and image processing researchers in their further collaboration to tackle such a difficult target. PMID:23560739
Smart Image Enhancement Process
NASA Technical Reports Server (NTRS)
Jobson, Daniel J. (Inventor); Rahman, Zia-ur (Inventor); Woodell, Glenn A. (Inventor)
2012-01-01
Contrast and lightness measures are used first to classify the image as either non-turbid or turbid. If turbid, the original image is enhanced to generate a first enhanced image. If non-turbid, the original image is classified in terms of a merged contrast/lightness score based on the contrast and lightness measures. The non-turbid image is enhanced to generate a second enhanced image when a poor contrast/lightness score is associated with it. When the second enhanced image also has a poor contrast/lightness score, it is enhanced to generate a third enhanced image. A sharpness measure is then computed for one image selected from (i) the non-turbid image, (ii) the first enhanced image, (iii) the second enhanced image when a good contrast/lightness score is associated with it, and (iv) the third enhanced image. If the selected image is not sharp, it is sharpened to generate a sharpened image. The final image is selected from the selected image and the sharpened image.
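The decision cascade described above can be sketched as ordinary control flow. The classifiers and enhancers below are hypothetical stand-ins (here a toy quality score plays the role of an image), intended only to show the branching logic:

```python
def smart_enhance(image, classify, enhance, sharpness_ok, sharpen):
    """Decision cascade: enhance turbid images once; enhance poor
    non-turbid images up to twice; finally sharpen if needed."""
    label = classify(image)
    if label == 'turbid':
        image = enhance(image)            # first enhanced image
    elif label == 'poor':
        image = enhance(image)            # second enhanced image
        if classify(image) == 'poor':
            image = enhance(image)        # third enhanced image
    if not sharpness_ok(image):
        image = sharpen(image)
    return image

# Hypothetical stand-ins: a single quality score plays the role of an image.
classify = lambda im: 'turbid' if im < 0 else ('poor' if im < 5 else 'good')
enhance = lambda im: im + 2
sharpness_ok = lambda im: im >= 6
sharpen = lambda im: im + 1
result = smart_enhance(2, classify, enhance, sharpness_ok, sharpen)
```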
ASPIC: STARLINK image processing package
NASA Astrophysics Data System (ADS)
Davenhall, A. C.; Hartley, Ken F.; Penny, Alan J.; Kelly, B. D.; King, Dave J.; Lupton, W. F.; Tudhope, D.; Pike, C. D.; Cooke, J. A.; Pence, W. D.; Wallace, Patrick T.; Brownrigg, D. R. K.; Baines, Dave W. T.; Warren-Smith, Rodney F.; McNally, B. V.; Bell, L. L.; Jones, T. A.; Terrett, Dave L.; Pearce, D. J.; Carey, J. V.; Currie, Malcolm J.; Benn, Chris; Beard, S. M.; Giddings, Jack R.; Balona, Luis A.; Harrison, B.; Wood, Roger; Sparkes, Bill; Allan, Peter M.; Berry, David S.; Shirt, J. V.
2015-10-01
ASPIC handled basic astronomical image processing. Early releases concentrated on image arithmetic, standard filters, expansion/contraction/selection/combination of images, and displaying and manipulating images on the ARGS and other devices. Later releases added new astronomy-specific applications to this sound framework. The ASPIC collection of about 400 image-processing programs was written using the Starlink "interim" environment in the 1980s; the software is now obsolete.
Litke, Alan
2006-03-27
The back of the eye is lined by an extraordinary biological pixel detector, the retina. This neural network is able to extract vital information about the external visual world, and transmit this information in a timely manner to the brain. In this talk, Professor Litke will describe a system that has been implemented to study how the retina processes and encodes dynamic visual images. Based on techniques and expertise acquired in the development of silicon microstrip detectors for high energy physics experiments, this system can simultaneously record the extracellular electrical activity of hundreds of retinal output neurons. After presenting first results obtained with this system, Professor Litke will describe additional applications of this incredible technology.
Filter for biomedical imaging and image processing
NASA Astrophysics Data System (ADS)
Mondal, Partha P.; Rajan, K.; Ahmad, Imteyaz
2006-07-01
Image filtering techniques have numerous potential applications in biomedical imaging and image processing. The design of filters largely depends on a priori knowledge about the type of noise corrupting the image, which makes the standard filters application-specific. Widely used filters such as average, Gaussian, and Wiener reduce noisy artifacts by smoothing; however, this operation normally smooths the edges as well. On the other hand, sharpening filters enhance the high-frequency details, making the image nonsmooth. An integrated general approach to designing a finite impulse response filter based on Hebbian learning is proposed for optimal image filtering. The algorithm exploits interpixel correlation by updating the filter coefficients using Hebbian learning, and is made iterative to achieve efficient learning from the neighborhood pixels. It performs optimal smoothing of the noisy image while preserving high-frequency as well as low-frequency features. Evaluation results show that the proposed finite impulse response filter is robust under various noise distributions such as Gaussian noise, salt-and-pepper noise, and speckle noise. Furthermore, the proposed approach does not require any a priori knowledge about the type of noise. The number of unknown parameters is small, and most of them are adaptively obtained from the processed image. The proposed filter is successfully applied to image reconstruction in a positron emission tomography (PET) imaging modality, and the images reconstructed by the proposed algorithm are found to be superior in quality to those reconstructed by existing PET image reconstruction methodologies.
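The flavor of Hebbian coefficient adaptation can be sketched with Oja's normalized Hebbian rule, which keeps the weights bounded while they converge toward the principal component of the input neighborhoods. This is a generic illustration of Hebbian learning, not the authors' filter-update algorithm:

```python
import math

def oja_update(w, x, eta=0.05):
    """One Hebbian step with Oja's normalization, w += eta*y*(x - y*w)
    where y = w.x; the correction term -y*w keeps the weight vector from
    growing without bound."""
    y = sum(wi * xi for wi, xi in zip(w, x))
    return [wi + eta * y * (xi - y * wi) for wi, xi in zip(w, x)]

# Strongly correlated 2-pixel neighborhoods: the principal component lies
# near the diagonal (1, 1)/sqrt(2).
data = [(1.0, 0.9), (0.8, 1.0), (-1.0, -1.1), (0.9, 1.1), (-0.9, -0.8)] * 200
w = [1.0, 0.0]
for x in data:
    w = oja_update(w, x)
norm = math.sqrt(w[0] ** 2 + w[1] ** 2)
# w ends up approximately unit length, pointing along the diagonal.
```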
Image processing in digital radiography.
Freedman, M T; Artz, D S
1997-01-01
Image processing is a critical part of obtaining high-quality digital radiographs. Fortunately, the user of these systems does not need to understand image processing in detail, because the manufacturers provide good starting values. Because radiologists may have different preferences in image appearance, it is helpful to know that many aspects of image appearance can be changed by image processing, and a new preferred setting can be loaded into the computer and saved so that it can become the new standard processing method. Image processing allows one to change the overall optical density of an image and to change its contrast. Spatial frequency processing allows an image to be sharpened, improving its appearance. It also allows noise to be blurred so that it is less visible. Care is necessary to avoid the introduction of artifacts or the hiding of mediastinal tubes.
A Review on Mathematical Modeling for Textile Processes
NASA Astrophysics Data System (ADS)
Chattopadhyay, R.
2015-10-01
A mathematical model is a powerful engineering tool for studying a variety of problems related to the design and development of products and processes, the optimization of manufacturing processes, understanding a phenomenon, and predicting a product's behaviour in actual use. Insight into the process and the use of appropriate mathematical tools are necessary for developing models. In the present paper, the types of models, the procedures followed in developing them, and their limitations are reviewed. Modeling techniques used in a few textile processes reported in the literature are cited as examples.
Mathematical modelling in the computer-aided process planning
NASA Astrophysics Data System (ADS)
Mitin, S.; Bochkarev, P.
2016-04-01
This paper presents new approaches to the organization of manufacturing preparation and mathematical models related to the development of the computer-aided multi-product process planning (CAMPP) system. The CAMPP system has some peculiarities compared to existing computer-aided process planning (CAPP) systems: fully formalized development of machining operations; the capacity to create and formalize the interrelationships among design, process planning and process implementation; and procedures for consideration of real manufacturing conditions. The paper describes the structure of the CAMPP system and shows the mathematical models and methods used to formalize the design procedures.
Mathematical Modeling of the Induced Mutation Process in Bacterial Cells
NASA Astrophysics Data System (ADS)
Belov, Oleg V.; Krasavin, Evgeny A.; Parkhomenko, Alexander Yu.
2010-01-01
A mathematical model of the ultraviolet (UV) irradiation-induced mutation process in bacterial cells of Escherichia coli is developed. Using mathematical approaches, the whole chain of events is tracked from the cell's exposure to the damaging factor to mutation formation in the DNA chain. Accounting for the key features of the regulation of this genetic network makes it possible to predict the effects induced by exposing cells to a given UV energy fluence.
FORTRAN Algorithm for Image Processing
NASA Technical Reports Server (NTRS)
Roth, Don J.; Hull, David R.
1987-01-01
FORTRAN computer algorithm containing various image-processing analysis and enhancement functions developed. Algorithm developed specifically to process images of developmental heat-engine materials obtained with sophisticated nondestructive evaluation instruments. Applications of program include scientific, industrial, and biomedical imaging for studies of flaws in materials, analyses of steel and ores, and pathology.
Multiscale Image Processing of Solar Image Data
NASA Astrophysics Data System (ADS)
Young, C.; Myers, D. C.
2001-12-01
It is often said that the blessing and curse of solar physics is too much data. Solar missions such as Yohkoh, SOHO and TRACE have shown us the Sun with amazing clarity but have also increased the amount of highly complex data. We have improved our view of the Sun, yet we have not improved our analysis techniques. The standard techniques used for the analysis of solar images generally consist of observing the evolution of features in a sequence of byte-scaled images or byte-scaled difference images. The determination of features and structures in the images is done qualitatively by the observer; little quantitative and objective analysis is done with these images. Many advances in image processing techniques have occurred in the past decade, and many of these methods are well suited to solar image analysis. Multiscale/multiresolution methods are perhaps the most promising. These methods have been used to formulate the human ability to view and comprehend phenomena on different scales, so these techniques could be used to quantify the image processing done by the observer's eyes and brain. In this work we present several applications of multiscale techniques applied to solar image data. Specifically, we discuss uses of the wavelet, curvelet, and related transforms to define a multiresolution support for EIT, LASCO and TRACE images.
The APL image processing laboratory
NASA Technical Reports Server (NTRS)
Jenkins, J. O.; Randolph, J. P.; Tilley, D. G.; Waters, C. A.
1984-01-01
The present and proposed capabilities of the Central Image Processing Laboratory, which provides a powerful resource for the advancement of programs in missile technology, space science, oceanography, and biomedical image analysis, are discussed. The use of image digitizing, digital image processing, and digital image output permits a variety of functional capabilities, including: enhancement, pseudocolor, convolution, computer output microfilm, presentation graphics, animations, transforms, geometric corrections, and feature extractions. The hardware and software of the Image Processing Laboratory, consisting of digitizing and processing equipment, software packages, and display equipment, is described. Attention is given to applications for imaging systems, map geometric correction, raster movie display of Seasat ocean data, Seasat and Skylab scenes of Nantucket Island, Space Shuttle imaging radar, differential radiography, and a computerized tomographic scan of the brain.
ERIC Educational Resources Information Center
Martin, Lyndon C.; Towers, Jo
2015-01-01
In the research reported in this paper, we develop a theoretical perspective to describe and account for the growth of collective mathematical understanding. We discuss collective processes in mathematics, drawing in particular on theoretical work in the domains of improvisational jazz and theatre. Using examples of data from a study of elementary…
Mathematical abilities in dyslexic children: a diffusion tensor imaging study.
Koerte, Inga K; Willems, Anna; Muehlmann, Marc; Moll, Kristina; Cornell, Sonia; Pixner, Silvia; Steffinger, Denise; Keeser, Daniel; Heinen, Florian; Kubicki, Marek; Shenton, Martha E; Ertl-Wagner, Birgit; Schulte-Körne, Gerd
2016-09-01
Dyslexia is characterized by a deficit in language processing which mainly affects word decoding and spelling skills. In addition, children with dyslexia also show problems in mathematics. However, for the latter, the underlying structural correlates have not been investigated. Sixteen children with dyslexia (mean age 9.8 years [0.39]) and 24 typically developing children (mean age 9.9 years [0.29]) group matched for age, gender, IQ, and handedness underwent 3 T MR diffusion tensor imaging as well as cognitive testing. Tract-Based Spatial Statistics were performed to correlate behavioral data with diffusion data. Children with dyslexia performed worse than controls in standardized verbal number tasks, such as arithmetic efficiency tests (addition, subtraction, multiplication, division). In contrast, the two groups did not differ in the nonverbal number line task. Arithmetic efficiency, representing the total score of the four arithmetic tasks, multiplication, and division, correlated with diffusion measures in widespread areas of the white matter, including bilateral superior and inferior longitudinal fasciculi in children with dyslexia compared to controls. Children with dyslexia demonstrated lower performance in verbal number tasks but performed similarly to controls in a nonverbal number task. Further, an association between verbal arithmetic efficiency and diffusion measures was demonstrated in widespread areas of the white matter suggesting compensatory mechanisms in children with dyslexia compared to controls. Taken together, poor fact retrieval in children with dyslexia is likely a consequence of deficits in the language system, which not only affects literacy skills but also impacts on arithmetic skills. PMID:26286825
NASA Astrophysics Data System (ADS)
Lane, Ciara; Stynes, Martin; O'Donoghue, John
2016-10-01
A questionnaire survey was carried out as part of a PhD research study to investigate the image of mathematics held by post-primary students in Ireland. The study focused on students in fifth year of post-primary education studying ordinary level mathematics for the Irish Leaving Certificate examination - the final examination for students in second-level or post-primary education. At the time this study was conducted, ordinary level mathematics students constituted approximately 72% of Leaving Certificate students. Students were aged between 15 and 18 years. A definition for 'image of mathematics' was adapted from Lim and Wilson, with image of mathematics hypothesized as comprising attitudes, beliefs, self-concept, motivation, emotions and past experiences of mathematics. A questionnaire was composed incorporating 84 fixed-response items chosen from eight pre-established scales by Aiken, Fennema and Sherman, Gourgey and Schoenfeld. This paper focuses on the findings from the questionnaire survey. Students' images of mathematics are compared with regard to gender, type of post-primary school attended and prior mathematical achievement.
Adequate mathematical modelling of environmental processes
NASA Astrophysics Data System (ADS)
Chashechkin, Yu. D.
2012-04-01
In environmental observations and laboratory visualization, both large-scale flow components such as currents, jets, vortices and waves, and a fine structure are registered (different examples are given). Conventional mathematical modeling, both analytical and numerical, is directed mostly at the description of energetically important flow components; the role of fine structure still remains obscure. The variety of existing models makes it difficult to choose the most adequate one and to assess their mutual degree of correspondence. The goal of the talk is to give a careful analysis of the kinematics and dynamics of flows. A difference is underlined between the concept of "motion", as a transformation of a vector space into itself with distance conservation, and the concept of "flow", as displacement and rotation of deformable "fluid particles". The basic physical quantities of the flow, namely density, momentum, energy (entropy) and admixture concentration, are selected as the physical parameters defined by the fundamental set, which includes the differential d'Alembert, Navier-Stokes, Fourier and/or Fick equations and a closing equation of state. All of them are observable and independent. Calculations of continuous Lie groups show that only the fundamental set is characterized by the ten-parameter Galilean group reflecting the basic principles of mechanics. The presented analysis demonstrates that conventionally used approximations dramatically change the symmetries of the governing equation sets, which leads to their incompatibility or even degeneration. The fundamental set is analyzed taking into account the condition of compatibility. The high order of the set indicates the complex structure of complete solutions corresponding to the physical structure of real flows. Analytical solutions of a number of problems, including flows induced by diffusion on topography and the generation of periodic internal waves by compact sources in weakly dissipative media, as well as numerical solutions of the same
Basic research planning in mathematical pattern recognition and image analysis
NASA Technical Reports Server (NTRS)
Bryant, J.; Guseman, L. F., Jr.
1981-01-01
Fundamental problems encountered while attempting to develop automated techniques for applications of remote sensing are discussed under the following categories: (1) geometric and radiometric preprocessing; (2) spatial, spectral, temporal, syntactic, and ancillary digital image representation; (3) image partitioning, proportion estimation, and error models in object scene interference; (4) parallel processing and image data structures; and (5) continuing studies in polarization; computer architectures and parallel processing; and the applicability of "expert systems" to interactive analysis.
NASA Astrophysics Data System (ADS)
Castellano, M.; Ottaviani, D.; Fontana, A.; Merlin, E.; Pilo, S.; Falcone, M.
2015-09-01
In the past years, modern mathematical methods for image analysis have led to a revolution in many fields, from computer vision to scientific imaging. However, some recently developed image processing techniques successfully exploited in other sectors have rarely, if ever, been tried on astronomical observations. We present here tests of two classes of variational image enhancement techniques, "structure-texture decomposition" and "super-resolution", showing that they are effective in improving the quality of observations. Structure-texture decomposition makes it possible to recover faint sources previously hidden by the background noise, effectively increasing the depth of available observations. Super-resolution yields a higher-resolution, better-sampled image out of a set of low-resolution frames, thus mitigating problems in data analysis arising from differences in resolution/sampling between instruments, as in the case of the EUCLID VIS and NIR imagers.
Cooperative processes in image segmentation
NASA Technical Reports Server (NTRS)
Davis, L. S.
1982-01-01
Research into the role of cooperative, or relaxation, processes in image segmentation is surveyed. Cooperative processes can be employed at several levels of the segmentation process as a preprocessing enhancement step, during supervised or unsupervised pixel classification and, finally, for the interpretation of image segments based on segment properties and relations.
Mathematical Modelling of Continuous Biotechnological Processes
ERIC Educational Resources Information Center
Pencheva, T.; Hristozov, I.; Shannon, A. G.
2003-01-01
Biotechnological processes (BTP) are characterized by a complicated structure of organization and interdependent characteristics. Partial differential equations or systems of partial differential equations are used for their behavioural description as objects with distributed parameters. Modelling of substrate without regard to dispersion…
Voyager image processing at the Image Processing Laboratory
NASA Technical Reports Server (NTRS)
Jepsen, P. L.; Mosher, J. A.; Yagi, G. M.; Avis, C. C.; Lorre, J. J.; Garneau, G. W.
1980-01-01
This paper discusses new digital processing techniques as applied to the Voyager Imaging Subsystem and devised to explore atmospheric dynamics, spectral variations, and the morphology of Jupiter, Saturn and their satellites. Radiometric and geometric decalibration processes, the modulation transfer function, and processes to determine and remove photometric properties of the atmosphere and surface of Jupiter and its satellites are examined. It is exhibited that selected images can be processed into 'approach at constant longitude' time lapse movies which are useful in observing atmospheric changes of Jupiter. Photographs are included to illustrate various image processing techniques.
Examining prospective mathematics teachers' proof processes for algebraic concepts
NASA Astrophysics Data System (ADS)
Güler, Gürsel; Dikici, Ramazan
2014-05-01
The aim of this study was to examine prospective mathematics teachers' proof processes for algebraic concepts. The study was conducted with 10 prospective teachers who were studying at the department of secondary mathematics teaching and who volunteered to participate in the study. The data were obtained via task-based clinical interviews that were conducted with the prospective teachers. The data obtained were analysed in accordance with the content analysis by focusing on the difficulties highlighted in the literature. It was observed that the difficulties prospective teachers experience in proof processes were in parallel with the difficulties highlighted in the literature.
Parallel asynchronous systems and image processing algorithms
NASA Technical Reports Server (NTRS)
Coon, D. D.; Perera, A. G. U.
1989-01-01
A new hardware approach to the implementation of image processing algorithms is described. The approach is based on silicon devices which would permit an independent analog processing channel to be dedicated to every pixel. A laminar architecture consisting of a stack of planar arrays of the device would form a two-dimensional array processor with a 2-D array of inputs located directly behind a focal plane detector array. A 2-D image data stream would propagate in neuronlike asynchronous pulse-coded form through the laminar processor. Such systems would integrate image acquisition and image processing. Acquisition and processing would be performed concurrently, as in natural vision systems. The research is aimed at the implementation of algorithms, such as the intensity-dependent summation algorithm and pyramid processing structures, which are motivated by the operation of natural vision systems. Implementation of natural vision algorithms would benefit from the use of neuronlike information coding and the laminar, 2-D parallel, vision-system-type architecture. Besides providing a neural network framework for the implementation of natural vision algorithms, a 2-D parallel approach could eliminate the serial bottleneck of conventional processing systems. Conversion to serial format would occur only after the raw intensity data had been substantially processed. An interesting challenge arises from the fact that the mathematical formulation of natural vision algorithms does not specify the means of implementation, so hardware implementation poses intriguing questions involving vision science.
Bhattacharjee, Supratik; Sundareshan, Malur K
2003-08-01
Several powerful iterative algorithms are being developed for the restoration and superresolution of diffraction-limited imagery data by use of diverse mathematical techniques. Notwithstanding the mathematical sophistication of the approaches used in their development and the potential for resolution enhancement possible with their implementation, the spectrum extrapolation that is central to superresolution comes in these algorithms only as a by-product and needs to be checked only after the completion of the processing steps to ensure that an expansion of the image bandwidth has indeed occurred. To overcome this limitation, a new approach of mathematically extrapolating the image spectrum and employing it to design constraint sets for implementing set-theoretic estimation procedures is described. Performance evaluation of a specific projection-onto-convex-sets algorithm by using this approach for the restoration and superresolution of degraded images is outlined. The primary goal of the method presented is to expand the power spectrum of the input image beyond the range of the sensor that captured the image.
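The spectrum extrapolation central to such methods can be illustrated with a classical Gerchberg-Papoulis iteration, a simple instance of projection-onto-convex-sets (POCS). The 1-D sketch below is generic and assumes idealized constraints (a known passband and known spatial support); it is not the authors' constraint-set design.

```python
import numpy as np

def gerchberg_extrapolate(measured, support, band, n_iter=300):
    """Gerchberg-Papoulis style spectrum extrapolation (1-D sketch).

    Alternately enforces (i) the measured low-frequency spectrum inside
    the known passband and (ii) the known spatial support of the object.
    Illustrative only: a classical POCS scheme, not the paper's algorithm.
    """
    known_spec = np.fft.fft(measured)
    x = measured.copy()
    for _ in range(n_iter):
        spec = np.fft.fft(x)
        spec[band] = known_spec[band]   # data-consistency projection
        x = np.fft.ifft(spec).real
        x[~support] = 0.0               # support-constraint projection
    return x

n = 64
true = np.zeros(n)
true[30], true[33] = 1.0, 1.0           # two spikes on a known support
support = np.zeros(n, dtype=bool)
support[28:37] = True

band = np.zeros(n, dtype=bool)          # keep only low frequencies |k| <= 8
band[:9] = True
band[-8:] = True

measured = np.fft.ifft(np.fft.fft(true) * band).real  # band-limited data
restored = gerchberg_extrapolate(measured, support, band)
```

After the iterations, the restored signal contains out-of-band spectral content consistent with the support constraint, i.e., the bandwidth has been expanded rather than merely checked after the fact.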
ERIC Educational Resources Information Center
Sagirli, Meryem Özturan
2016-01-01
The aim of the present study is to investigate pre-service secondary mathematics teachers' cognitive-metacognitive behaviours during the mathematical problem-solving process considering class level. The study, in which the case study methodology was employed, was carried out with eight pre-service mathematics teachers, enrolled at a university in…
SWNT Imaging Using Multispectral Image Processing
NASA Astrophysics Data System (ADS)
Blades, Michael; Pirbhai, Massooma; Rotkin, Slava V.
2012-02-01
A flexible optical system was developed to image carbon single-wall nanotube (SWNT) photoluminescence using the multispectral capabilities of a typical CCD camcorder. The built-in Bayer filter of the CCD camera was utilized, using OpenCV C++ libraries for image processing, to decompose the image generated in a high-magnification epifluorescence microscope setup into three pseudo-color channels. By carefully calibrating the filter beforehand, it was possible to extract spectral data from these channels and effectively isolate the SWNT signals from the background.
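The channel-splitting step can be sketched as follows. This is a minimal NumPy illustration that assumes an RGGB mosaic layout; the camcorder's actual filter geometry, the OpenCV C++ pipeline, and the careful per-channel calibration the authors describe are not reproduced here.

```python
import numpy as np

def split_bayer_rggb(raw):
    """Split a raw Bayer mosaic into three pseudo-color channels.

    Assumes an RGGB pattern (an assumption, not the paper's documented
    layout); the two green sites per 2x2 cell are simply averaged.
    """
    r = raw[0::2, 0::2].astype(float)
    g = (raw[0::2, 1::2] + raw[1::2, 0::2]).astype(float) / 2.0
    b = raw[1::2, 1::2].astype(float)
    return r, g, b

# toy 4x4 mosaic with values 0..15
raw = np.arange(16).reshape(4, 4)
r, g, b = split_bayer_rggb(raw)
```

Each output channel has half the linear resolution of the mosaic; calibration would then map the three channel intensities to spectral estimates.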
An image processing algorithm for PPCR imaging
NASA Astrophysics Data System (ADS)
Cowen, Arnold R.; Giles, Anthony; Davies, Andrew G.; Workman, A.
1993-09-01
During 1990 the UK Department of Health installed two Photostimulable Phosphor Computed Radiography (PPCR) systems in the General Infirmary at Leeds with a view to evaluating the clinical and physical performance of the technology prior to its introduction into the NHS. An issue that came to light from the outset of the project was the radiologists' reservations about the influence of the standard PPCR computerized image processing on image quality and diagnostic performance. An investigation was set up by FAXIL to develop an algorithm to produce single-format, high-quality PPCR images that would be easy to implement and would allay the concerns of radiologists.
Mathematical modeling of the neuron morphology using two dimensional images.
Rajković, Katarina; Marić, Dušica L; Milošević, Nebojša T; Jeremic, Sanja; Arsenijević, Valentina Arsić; Rajković, Nemanja
2016-02-01
In this study mathematical analyses such as the analysis of area and length, fractal analysis and modified Sholl analysis were applied to two-dimensional (2D) images of neurons from the adult human dentate nucleus (DN). Using these analyses, the main morphological properties were obtained, including the size of the neuron and soma, the length of all dendrites, the density of the dendritic arborization, the position of maximum density, and the irregularity of the dendrites. Response surface methodology (RSM) was used for modeling the size of neurons and the length of all dendrites. However, the RSM model based on a second-order polynomial equation could only be applied to correlate changes in the size of the neuron with the other properties of its morphology. The modeling provided evidence that the size of DN neurons depends statistically on the size of the soma, the density of the dendritic arborization, and the irregularity of the dendrites. The low mean relative percent deviation (MRPD) between the experimental data and the neuron size predicted by the RSM model showed that the model was suitable for modeling the size of DN neurons. Therefore, RSM can be used generally for modeling neuron size from 2D images.
The Development from Effortful to Automatic Processing in Mathematical Cognition.
ERIC Educational Resources Information Center
Kaye, Daniel B.; And Others
This investigation capitalizes upon the information processing models that depend upon measurement of latency of response to a mathematical problem and the decomposition of reaction time (RT). Simple two term addition problems were presented with possible solutions for true-false verification, and accuracy and RT to response were recorded. Total…
Theoretical foundations of spatially-variant mathematical morphology part ii: gray-level images.
Bouaynaya, Nidhal; Schonfeld, Dan
2008-05-01
In this paper, we develop a spatially-variant (SV) mathematical morphology theory for gray-level signals and images in the Euclidean space. The proposed theory preserves the geometrical concept of the structuring function, which provides the foundation of classical morphology and is essential in signal and image processing applications. We define the basic SV gray-level morphological operators (i.e., SV gray-level erosion, dilation, opening, and closing) and investigate their properties. We demonstrate the ubiquity of SV gray-level morphological systems by deriving a kernel representation for a large class of systems, called V-systems, in terms of the basic SV gray-level morphological operators. A V-system is defined to be a gray-level operator, which is invariant under gray-level (vertical) translations. Particular attention is focused on the class of SV flat gray-level operators. The kernel representation for increasing V-systems is a generalization of Maragos' kernel representation for increasing and translation-invariant function-processing systems. A representation of V-systems in terms of their kernel elements is established for increasing and upper-semi-continuous V-systems. This representation unifies a large class of spatially-variant linear and non-linear systems under the same mathematical framework. Finally, simulation results show the potential power of the general theory of gray-level spatially-variant mathematical morphology in several image analysis and computer vision applications. PMID:18369253
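A toy illustration of an SV flat operator may help fix ideas: a 1-D flat erosion whose structuring window half-width varies with position. This is a hypothetical minimal sketch of the spatially-variant idea, not the paper's formal framework or kernel representation.

```python
import numpy as np

def sv_flat_erosion(f, radius):
    """Spatially-variant flat gray-level erosion of a 1-D signal.

    The structuring window at position i has half-width radius[i],
    so the operator is not translation-invariant: the window shape
    depends on where it is applied (a toy stand-in for the paper's
    structuring function).
    """
    n = len(f)
    out = np.empty(n)
    for i in range(n):
        lo = max(0, i - radius[i])
        hi = min(n, i + radius[i] + 1)
        out[i] = f[lo:hi].min()     # flat erosion = local minimum
    return out

f = np.array([3, 5, 2, 8, 6], dtype=float)
radius = np.array([0, 1, 1, 2, 0])  # per-position window half-widths
out = sv_flat_erosion(f, radius)
```

With radius[4] = 0 the last sample is untouched, while radius[3] = 2 lets the deep minimum at index 2 propagate to index 3: the same operator behaves differently at different positions.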
Subband/Transform MATLAB Functions For Processing Images
NASA Technical Reports Server (NTRS)
Glover, D.
1995-01-01
SUBTRANS software is package of routines implementing image-data-processing functions for use with MATLAB (TM) software. Provides capability to transform image data with block transforms and to produce spatial-frequency subbands of transformed data. Functions cascaded to provide further decomposition into more subbands. Also used in image-data-compression systems. For example, transforms used to prepare data for lossy compression. Written for use in MATLAB mathematical-analysis environment.
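Subband decomposition of this kind can be sketched with a one-level 2-D Haar transform, which splits an image into four spatial-frequency subbands; cascading the transform on the low-pass band yields further decomposition. The example below is a NumPy illustration and does not reproduce SUBTRANS's actual MATLAB block transforms.

```python
import numpy as np

def haar_subbands(img):
    """One level of a 2-D Haar transform: returns the four
    spatial-frequency subbands (LL, LH, HL, HH)."""
    a = img[0::2, 0::2].astype(float)
    b = img[0::2, 1::2].astype(float)
    c = img[1::2, 0::2].astype(float)
    d = img[1::2, 1::2].astype(float)
    ll = (a + b + c + d) / 4.0   # low-pass approximation
    lh = (a + b - c - d) / 4.0   # horizontal detail
    hl = (a - b + c - d) / 4.0   # vertical detail
    hh = (a - b - c + d) / 4.0   # diagonal detail
    return ll, lh, hl, hh

img = np.ones((4, 4))            # a constant image has no detail
ll, lh, hl, hh = haar_subbands(img)
```

For lossy compression one would quantize the detail subbands more aggressively than the LL band, since most image energy concentrates there.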
Mathematical modeling of DNA's transcription process for the cancer study
NASA Astrophysics Data System (ADS)
Morales-Peñaloza, A.; Meza-López, C. D.; Godina-Nava, J. J.
2012-10-01
Cancer is a phenomenon caused by an anomaly in DNA's transcription process; therefore it is necessary to know how such an anomaly is generated in order to implement alternative therapies to combat it. We propose to use mathematical modeling to treat the problem. A simulation of the transcription process is implemented, and the transport properties are studied in the heterogeneous case using nonlinear dynamics.
Astronomical Image Processing with Hadoop
NASA Astrophysics Data System (ADS)
Wiley, K.; Connolly, A.; Krughoff, S.; Gardner, J.; Balazinska, M.; Howe, B.; Kwon, Y.; Bu, Y.
2011-07-01
In the coming decade astronomical surveys of the sky will generate tens of terabytes of images and detect hundreds of millions of sources every night. With a requirement that these images be analyzed in real time to identify moving sources such as potentially hazardous asteroids or transient objects such as supernovae, these data streams present many computational challenges. In the commercial world, new techniques that utilize cloud computing have been developed to handle massive data streams. In this paper we describe how cloud computing, and in particular the map-reduce paradigm, can be used in astronomical data processing. We will focus on our experience implementing a scalable image-processing pipeline for the SDSS database using Hadoop (http://hadoop.apache.org). This multi-terabyte imaging dataset approximates future surveys such as those which will be conducted with the LSST. Our pipeline performs image coaddition in which multiple partially overlapping images are registered, integrated and stitched into a single overarching image. We will first present our initial implementation, then describe several critical optimizations that have enabled us to achieve high performance, and finally describe how we are incorporating a large in-house existing image processing library into our Hadoop system. The optimizations involve prefiltering of the input to remove irrelevant images from consideration, grouping individual FITS files into larger, more efficient indexed files, and a hybrid system in which a relational database is used to determine the input images relevant to the task. The incorporation of an existing image processing library, written in C++, presented difficult challenges since Hadoop is programmed primarily in Java. We will describe how we achieved this integration and the sophisticated image processing routines that were made feasible as a result. We will end by briefly describing the longer term goals of our work, namely detection and classification
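Stripped of registration, FITS handling, and the map-reduce machinery, the coaddition step amounts to stacking aligned frames. The sketch below is a minimal stand-in under those simplifying assumptions (pre-registered frames, straight averaging), not the Hadoop pipeline itself.

```python
import numpy as np

def coadd(frames):
    """Coadd already-registered frames by straight averaging.

    Averaging N frames with uncorrelated noise reduces the noise
    standard deviation by a factor of sqrt(N), which is why coadds
    reach fainter sources than any single exposure.
    """
    stack = np.stack([f.astype(float) for f in frames])
    return stack.mean(axis=0)

rng = np.random.default_rng(0)
truth = np.zeros((8, 8))                 # a blank patch of sky
frames = [truth + rng.normal(0.0, 1.0, truth.shape) for _ in range(100)]
result = coadd(frames)
```

In the map-reduce formulation, the per-frame work (reading, filtering, resampling) is the map stage and this pixelwise accumulation is the reduce stage.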
Acoustic image-processing software
NASA Astrophysics Data System (ADS)
Several algorithms that display, enhance and analyze side-scan sonar images of the seafloor have been developed by the University of Washington, Seattle, as part of an Office of Naval Research funded program in acoustic image analysis. One of these programs, PORTAL, is a small (less than 100K) image display and enhancement program that can run on MS-DOS computers with VGA boards. This program is now available in the public domain for general use in acoustic image processing. PORTAL is designed to display side-scan sonar data stored in most standard formats, including SeaMARC I, II, 150 and GLORIA data. (See image.) In addition to the “standard” formats, PORTAL has a modular “front end” that allows the user to modify the program to accept other image formats. In addition to side-scan sonar data, the program can also display digital optical images from scanners and “framegrabbers,” gridded bathymetry data from Sea Beam and other sources, and potential-field (magnetics/gravity) data. While limited in image analysis capability, the program allows image enhancement by histogram manipulation and basic filtering operations, including multistage filtering. PORTAL can print reasonably high-quality images on PostScript laser printers and lower-quality images on non-PostScript printers with HP LaserJet emulation. Images suitable only for index sheets are also possible on dot matrix printers.
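Histogram manipulation of the kind mentioned above can be sketched as standard histogram equalization, which remaps gray levels through the cumulative histogram to spread image contrast. The snippet below is a generic illustration, not PORTAL's documented algorithm.

```python
import numpy as np

def equalize(img, levels=256):
    """Histogram-equalize an 8-bit image via its cumulative histogram.

    Builds a lookup table from the normalized CDF of the gray levels,
    so frequently occurring intensities are spread over a wider range.
    """
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = hist.cumsum()
    cdf = cdf / cdf[-1]                          # normalize to [0, 1]
    lut = np.round(cdf * (levels - 1)).astype(np.uint8)
    return lut[img]                              # remap every pixel

img = np.array([[0, 0, 1], [1, 2, 3]], dtype=np.uint8)
out = equalize(img)
```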
Generalized Mathematical Model Predicting the Mechanical Processing Topography
NASA Astrophysics Data System (ADS)
Leonov, S. L.; Markov, A. M.; Belov, A. B.; Sczygol, N.
2016-04-01
We propose a unified approach to the construction of mathematical models for the formation of surface topography and the calculation of its roughness parameters for different methods of machining. The approach is based on the geometric copying of the tool into the material, on which plastic-elastic deformation, oscillations during processing, and random components of the profile are superimposed. The unified approach makes it possible to reduce the time needed to create a stochastic simulation model for a specific type of processing and to guarantee the accuracy of the calculated geometric parameters of the surface. We give an application example of the generalized model: the calculation of the distribution density of the roughness Ra in external sharpening.
On the mathematical modeling of wound healing angiogenesis in skin as a reaction-transport process
Flegg, Jennifer A.; Menon, Shakti N.; Maini, Philip K.; McElwain, D. L. Sean
2015-01-01
Over the last 30 years, numerous research groups have attempted to provide mathematical descriptions of the skin wound healing process. The development of theoretical models of the interlinked processes that underlie the healing mechanism has yielded considerable insight into aspects of this critical phenomenon that remain difficult to investigate empirically. In particular, the mathematical modeling of angiogenesis, i.e., capillary sprout growth, has offered new paradigms for the understanding of this highly complex and crucial step in the healing pathway. With the recent advances in imaging and cell tracking, the time is now ripe for an appraisal of the utility and importance of mathematical modeling in wound healing angiogenesis research. The purpose of this review is to pedagogically elucidate the conceptual principles that have underpinned the development of mathematical descriptions of wound healing angiogenesis, specifically those that have utilized a continuum reaction-transport framework, and highlight the contribution that such models have made toward the advancement of research in this field. We aim to draw attention to the common assumptions made when developing models of this nature, thereby bringing into focus the advantages and limitations of this approach. A deeper integration of mathematical modeling techniques into the practice of wound healing angiogenesis research promises new perspectives for advancing our knowledge in this area. To this end we detail several open problems related to the understanding of wound healing angiogenesis, and outline how these issues could be addressed through closer cross-disciplinary collaboration. PMID:26483695
Application of mathematical modelling methods for acoustic images reconstruction
NASA Astrophysics Data System (ADS)
Bolotina, I.; Kazazaeva, A.; Kvasnikov, K.; Kazazaev, A.
2016-04-01
The article considers the reconstruction of images by the Synthetic Aperture Focusing Technique (SAFT). The work compares additive and multiplicative methods for processing the signals received from an antenna array. We have shown that the multiplicative method gives a better resolution. The study includes the estimation of beam trajectories for antenna arrays using analytical and numerical methods. We have shown that the analytical estimation method makes it possible to decrease the image reconstruction time in the case of a linear antenna array.
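The additive/multiplicative distinction can be illustrated on already delay-aligned samples focused at one image point. The multiplicative (geometric-mean) combination below is a hypothetical formulation chosen only to show why products of element signals sharpen the focal response; the paper's exact SAFT formulation is not reproduced.

```python
import numpy as np

def combine(signals, mode="additive"):
    """Combine per-element, already delay-aligned samples at one point.

    Additive: ordinary delay-and-sum (arithmetic mean). Multiplicative:
    geometric mean of magnitudes, which strongly penalizes points where
    any element disagrees and so narrows the focal response.
    Hypothetical formulation for illustration only.
    """
    s = np.asarray(signals, dtype=float)
    if mode == "additive":
        return s.mean()
    return np.prod(np.abs(s)) ** (1.0 / len(s))

focus = [1.0, 1.0, 1.0, 1.0]    # true focal point: all elements agree
off = [1.0, 1.0, 1.0, 0.01]     # off-focus point: one element sees nothing

add_ratio = combine(off, "additive") / combine(focus, "additive")
mul_ratio = combine(off, "multiplicative") / combine(focus, "multiplicative")
```

The off-focus response falls much faster under the multiplicative rule, which is consistent with the better resolution reported for that method.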
Conceptions and Images of Mathematics Professors on Teaching Mathematics in School.
ERIC Educational Resources Information Center
Pehkonen, Erkki
1999-01-01
Clarifies what kind of mathematical beliefs are conveyed to student teachers during their studies. Interviews mathematics professors (n=7) from five Finnish universities who were responsible for mathematics teacher education. Professors estimated that teachers' basic knowledge was poor and old-fashioned, requiring improvement, and they emphasized…
Fuzzy image processing in sun sensor
NASA Technical Reports Server (NTRS)
Mobasser, S.; Liebe, C. C.; Howard, A.
2003-01-01
This paper describes how fuzzy image processing is implemented in the instrument. A comparison of the fuzzy image processing with a more conventional image processing algorithm is provided and shows that the fuzzy image processing yields better accuracy than conventional image processing.
The Mathematics of Medical Imaging in the Classroom.
ERIC Educational Resources Information Center
Funkhouser, Charles P.; Jafari, Farhad; Eubank, William B.
2002-01-01
Presents an integrated exposition of aspects of secondary school mathematics and a medical science specialty. Reviews clinical medical practice and theoretical and empirical literature in mathematics education and radiology to develop and pilot model integrative classroom topics and activities. Suggests mathematical applications in numeration and…
Signal and Image Processing Operations
1995-05-10
VIEW is a software system for processing arbitrary multidimensional signals. It provides facilities for numerical operations, signal displays, and signal databasing. The major emphasis of the system is on the processing of time sequences and multidimensional images. The system is designed to be both portable and extensible. It currently runs on UNIX systems, primarily Sun workstations.
Preventing clonal evolutionary processes in cancer: Insights from mathematical models.
Rodriguez-Brenes, Ignacio A; Wodarz, Dominik
2015-07-21
Clonal evolutionary processes can drive pathogenesis in human diseases, with cancer being a prominent example. To prevent or treat cancer, mechanisms that can potentially interfere with clonal evolutionary processes need to be understood better. Mathematical modeling is an important research tool that plays an ever-increasing role in cancer research. This paper discusses how mathematical models can be useful to gain insights into mechanisms that can prevent disease initiation, help analyze treatment responses, and aid in the design of treatment strategies to combat the emergence of drug-resistant cells. The discussion will be done in the context of specific examples. Among defense mechanisms, we explore how replicative limits and cellular senescence induced by telomere shortening can influence the emergence and evolution of tumors. Among treatment approaches, we consider the targeted treatment of chronic lymphocytic leukemia (CLL) with tyrosine kinase inhibitors. We illustrate how basic evolutionary mathematical models have the potential to make patient-specific predictions about disease and treatment outcome, and argue that evolutionary models could become important clinical tools in the field of personalized medicine.
Mathematical Modelling of Bacterial Populations in Bio-remediation Processes
NASA Astrophysics Data System (ADS)
Vasiliadou, Ioanna A.; Vayenas, Dimitris V.; Chrysikopoulos, Constantinos V.
2011-09-01
An understanding of bacterial behaviour concerns many field applications, such as the enhancement of water, wastewater and subsurface bio-remediation, the prevention of environmental pollution and the protection of human health. Numerous microorganisms have been identified as able to degrade chemical pollutants; thus, a variety of bacteria are known that can be used in bio-remediation processes. In this study, the development of mathematical models capable of describing bacterial behaviour relevant to bio-augmentation plans, such as bacterial growth, consumption of nutrients, removal of pollutants, and bacterial transport and attachment in porous media, is presented. The mathematical models may be used as a guide in designing and assessing the conditions under which areas contaminated with pollutants can be better remediated.
Associative architecture for image processing
NASA Astrophysics Data System (ADS)
Adar, Rutie; Akerib, Avidan
1997-09-01
This article presents a new generation of parallel processing architecture for real-time image processing. The approach is implemented in a real-time image processor chip, called the Xium™-2, based on combining a fully associative array, which provides the parallel engine, with a serial RISC core on the same die. The architecture is fully programmable and can implement a wide range of color image processing, computer vision and media processing functions in real time. The associative part of the chip is based on the patent-pending methodology of Associative Computing Ltd. (ACL), which condenses 2048 associative processors, each of 128 'intelligent' bits. Each bit can be a processing bit or a memory bit. At only 33 MHz, in a 0.6-micron manufacturing process, the chip has a computational power of 3 billion ALU operations per second and 66 billion string search operations per second. The fully programmable nature of the Xium™-2 chip enables developers to use ACL tools to write their own proprietary algorithms combined with existing image processing and analysis functions from ACL's extended set of libraries.
Digital processing of radiographic images
NASA Technical Reports Server (NTRS)
Bond, A. D.; Ramapriyan, H. K.
1973-01-01
Some techniques, and the accompanying software documentation, are presented for the digital enhancement of radiographs. Both image handling and image processing operations are considered. The image handling operations dealt with are: (1) conversion of the data format from packed to unpacked and vice versa; (2) automatic extraction of image data arrays; (3) transposition and 90 deg rotations of large data arrays; (4) translation of data arrays for registration; and (5) reduction of the dimensions of data arrays by integral factors. Both frequency-domain and spatial-domain approaches are presented for the design and implementation of the image processing operations. It is shown that spatial-domain recursive implementation of filters is much faster than nonrecursive implementations using fast Fourier transforms (FFT) for the cases of interest in this work. The recursive implementation of a class of matched filters for enhancing the image signal-to-noise ratio is described. Test patterns are used to illustrate the filtering operations. The application of the techniques to radiographic images of metallic structures is demonstrated through several examples.
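The speed argument rests on the fact that a recursive (IIR) filter costs a fixed amount per sample regardless of the effective kernel length, whereas a nonrecursive (FIR or FFT-based) realization pays for the full kernel. A one-dimensional sketch of the equivalence, assuming a simple first-order smoother rather than the paper's matched filters:

```python
import numpy as np

def ema_recursive(x, a):
    # First-order recursive (IIR) smoother: y[t] = a*x[t] + (1-a)*y[t-1].
    # O(N) total work, independent of the effective kernel length.
    y = np.empty(len(x), dtype=float)
    acc = 0.0
    for t, v in enumerate(x):
        acc = a * v + (1 - a) * acc
        y[t] = acc
    return y

def ema_nonrecursive(x, a, taps=64):
    # Equivalent FIR form: convolution with the truncated impulse
    # response a*(1-a)^k. The cost grows with the number of taps.
    k = np.arange(taps)
    h = a * (1 - a) ** k
    return np.convolve(x, h)[: len(x)]

x = np.zeros(128)
x[0] = 1.0                    # unit impulse
r = ema_recursive(x, 0.25)
f = ema_nonrecursive(x, 0.25)
print(np.allclose(r[:64], f[:64]))
```

Both realizations produce the same impulse response over the FIR's support; the recursive form simply never truncates it.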
FITS Liberator: Image processing software
NASA Astrophysics Data System (ADS)
Lindberg Christensen, Lars; Nielsen, Lars Holm; Nielsen, Kaspar K.; Johansen, Teis; Hurt, Robert; de Martin, David
2012-06-01
The ESA/ESO/NASA FITS Liberator makes it possible to process and edit astronomical science data in the FITS format to produce stunning images of the universe. Formerly a plugin for Adobe Photoshop, the current version of FITS Liberator is a stand-alone application and no longer requires Photoshop. This image processing software makes it possible to create color images using raw observations from a range of telescopes; the FITS Liberator continues to support the FITS and PDS formats, preferred by astronomers and planetary scientists respectively, which enables data to be processed from a wide range of telescopes and planetary probes, including ESO's Very Large Telescope, the NASA/ESA Hubble Space Telescope, NASA's Spitzer Space Telescope, ESA's XMM-Newton Telescope and Cassini-Huygens or Mars Reconnaissance Orbiter.
Seismic Imaging Processing and Migration
2000-06-26
Salvo is a 3D, finite difference, prestack, depth migration code for parallel computers. It is also capable of processing 2D and poststack data. The code requires as input a seismic dataset, a velocity model and a file of parameters that allows the user to select various options. The code uses this information to produce a seismic image. Some of the options available to the user include the application of various filters and imaging conditions. The code also incorporates phase encoding (patent applied for) to process multiple shots simultaneously.
Fingerprint recognition using image processing
NASA Astrophysics Data System (ADS)
Dholay, Surekha; Mishra, Akassh A.
2011-06-01
Fingerprint recognition is concerned with the difficult task of efficiently matching the image of a person's fingerprint against the fingerprints stored in a database. It is used in forensic science to help identify criminals and also to authenticate a particular person, since a fingerprint is unique to each individual. The present paper describes fingerprint recognition methods using various edge detection techniques, and shows how to recognize a fingerprint correctly from camera images. The method does not require a special device; a simple camera can be used, so the technique can also be applied on a camera-equipped mobile phone. The factors affecting the process include poor illumination, noise, viewpoint dependence, climate factors, and imaging conditions. These factors must be taken into account, so various image enhancement techniques are applied to increase the quality of the image and remove noise. The present paper describes applying contour tracking to the fingerprint image, then edge detection on the contour, and finally matching the edges inside the contour.
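Edge detection of the kind such a pipeline relies on can be sketched with a plain Sobel operator (a generic example, not necessarily the detector used in the paper):

```python
import numpy as np

def sobel_edges(img):
    # Sobel gradient magnitude: convolve with horizontal and vertical
    # derivative kernels, then combine. Strong responses mark edges
    # such as fingerprint ridge boundaries.
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(img.astype(float), 1, mode="edge")
    gx = np.zeros(img.shape, dtype=float)
    gy = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = pad[i:i + 3, j:j + 3]
            gx[i, j] = (win * kx).sum()
            gy[i, j] = (win * ky).sum()
    return np.hypot(gx, gy)

img = np.zeros((6, 6))
img[:, 3:] = 255              # a vertical step edge, standing in for a ridge
edges = sobel_edges(img)
print(edges[:, 2].max() > 0, edges[:, 0].max() == 0)
```

A real fingerprint system would follow this with the contour tracking and edge matching steps the abstract describes, plus the enhancement pass needed for noisy camera images.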
Image processing on MPP-like arrays
Coletti, N.B.
1983-01-01
The desirability and suitability of using very large arrays of processors such as the Massively Parallel Processor (MPP) for processing remotely sensed images is investigated. The dissertation can be broken into two areas. The first area is the mathematical analysis of emulating the bitonic sorting network on an array of processors. This sort is useful in histogramming images that have a very large number of pixel values (or gray levels). The optimal number of routing steps required to emulate an N = 2^k x 2^k element network on a 2^n x 2^n array (k <= n <= 7), provided each processor contains one element before and after every merge sequence, is proved to be 14√N - 4log₂N - 14. Several already existing emulations achieve this lower bound. The number of elements sorted dictates a particular sorting network, and hence the number of routing steps. It is established that the cardinality N = (3/4) x 2^(2n) elements uses the absolute minimum number of routing steps, 8√3·√N - 4log₂N - (20 - 4log₂3). An algorithm achieving this bound is presented. The second area covers the implementations of the image processing tasks. In particular, the histogramming of large numbers of gray levels, geometric distortion determination and its efficient correction, fast Fourier transforms, and statistical clustering are investigated.
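The quoted routing-step bound can be tabulated directly. A small sketch, assuming the N = 2^k x 2^k case, so that √N = 2^k and log₂N = 2k are exact integers:

```python
import math

def routing_steps(k):
    # Lower bound from the dissertation for emulating a bitonic sorting
    # network of N = 2^k x 2^k elements on a 2^n x 2^n processor array
    # (k <= n <= 7): 14*sqrt(N) - 4*log2(N) - 14 routing steps.
    N = 2 ** (2 * k)              # N = 2^k * 2^k, so sqrt(N) = 2^k
    return 14 * math.isqrt(N) - 4 * (2 * k) - 14

for k in range(1, 8):
    print(k, routing_steps(k))    # k=1 -> 6, ..., k=7 -> 1722
```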
Linear Algebra and Image Processing
ERIC Educational Resources Information Center
Allali, Mohamed
2010-01-01
We use the computing technology of digital image processing (DIP) to enhance the teaching of linear algebra and to make the course more visual and interesting. This visual approach, using technology to link linear algebra to DIP, is interesting and unexpected to students as well as many faculty. (Contains 2 tables and 11 figures.)
Linear algebra and image processing
NASA Astrophysics Data System (ADS)
Allali, Mohamed
2010-09-01
We use the computing technology of digital image processing (DIP) to enhance the teaching of linear algebra and to make the course more visual and interesting. This visual approach, using technology to link linear algebra to DIP, is interesting and unexpected to students as well as many faculty.
Concept Learning through Image Processing.
ERIC Educational Resources Information Center
Cifuentes, Lauren; Yi-Chuan, Jane Hsieh
This study explored computer-based image processing as a study strategy for middle school students' science concept learning. Specifically, the research examined the effects of computer graphics generation on science concept learning and the impact of using computer graphics to show interrelationships among concepts during study time. The 87…
Mathematical Formulation Requirements and Specifications for the Process Models
Steefel, C.; Moulton, D.; Pau, G.; Lipnikov, K.; Meza, J.; Lichtner, P.; Wolery, T.; Bacon, D.; Spycher, N.; Bell, J.; Moridis, G.; Yabusaki, S.; Sonnenthal, E.; Zyvoloski, G.; Andre, B.; Zheng, L.; Davis, J.
2010-11-01
The Advanced Simulation Capability for Environmental Management (ASCEM) is intended to be a state-of-the-art scientific tool and approach for understanding and predicting contaminant fate and transport in natural and engineered systems. The ASCEM program is aimed at addressing critical EM program needs to better understand and quantify flow and contaminant transport behavior in complex geological systems. It will also address the long-term performance of engineered components including cementitious materials in nuclear waste disposal facilities, in order to reduce uncertainties and risks associated with DOE EM's environmental cleanup and closure activities. Building upon national capabilities developed from decades of Research and Development in subsurface geosciences, computational and computer science, modeling and applied mathematics, and environmental remediation, the ASCEM initiative will develop an integrated, open-source, high-performance computer modeling system for multiphase, multicomponent, multiscale subsurface flow and contaminant transport. This integrated modeling system will incorporate capabilities for predicting releases from various waste forms, identifying exposure pathways and performing dose calculations, and conducting systematic uncertainty quantification. The ASCEM approach will be demonstrated on selected sites, and then applied to support the next generation of performance assessments of nuclear waste disposal and facility decommissioning across the EM complex. The Multi-Process High Performance Computing (HPC) Simulator is one of three thrust areas in ASCEM. The other two are the Platform and Integrated Toolsets (dubbed the Platform) and Site Applications. The primary objective of the HPC Simulator is to provide a flexible and extensible computational engine to simulate the coupled processes and flow scenarios described by the conceptual models developed using the ASCEM Platform. The graded and iterative approach to assessments naturally
Image-based quantification and mathematical modeling of spatial heterogeneity in ESC colonies.
Herberg, Maria; Zerjatke, Thomas; de Back, Walter; Glauche, Ingmar; Roeder, Ingo
2015-06-01
Pluripotent embryonic stem cells (ESCs) have the potential to differentiate into cells of all three germ layers. This unique property has been extensively studied on the intracellular, transcriptional level. However, ESCs typically form clusters of cells with distinct size and shape, and establish spatial structures that are vital for the maintenance of pluripotency. Even though it is recognized that the cells' arrangement and local interactions play a role in fate decision processes, the relations between transcriptional and spatial patterns have not yet been studied. We present a systems biology approach which combines live-cell imaging, quantitative image analysis, and multiscale, mathematical modeling of ESC growth. In particular, we develop quantitative measures of the morphology and of the spatial clustering of ESCs with different expression levels and apply them to images of both in vitro and in silico cultures. Using the same measures, we are able to compare model scenarios with different assumptions on cell-cell adhesions and intercellular feedback mechanisms directly with experimental data. Applying our methodology to microscopy images of cultured ESCs, we demonstrate that the emerging colonies are highly variable regarding both morphological and spatial fluorescence patterns. Moreover, we can show that most ESC colonies contain only one cluster of cells with high self-renewing capacity. These cells are preferentially located in the interior of a colony structure. The integrated approach combining image analysis with mathematical modeling allows us to reveal potential transcription factor related cellular and intercellular mechanisms behind the emergence of observed patterns that cannot be derived from images directly. PMID:25605123
Image processing applications in NDE
Morris, R.A.
1980-01-01
Nondestructive examination (NDE) can be defined as a technique or collection of techniques that permits one to determine some property of a material or object without damaging the object. There are a large number of such techniques and most of them use visual imaging in one form or another. They vary from holographic interferometry, where displacements under stress are measured, to the visual inspection of an object's surface to detect cracks after penetrant has been applied. The use of image processing techniques on the images produced by NDE is relatively new and can be divided into three general categories: classical image enhancement; mensuration techniques; and quantitative sensitometry. An example is discussed of how image processing techniques are used to nondestructively and destructively test the product throughout its life cycle. The product that will be followed is the microballoon target used in the laser fusion program. The laser target is a small (50- to 100-μm dia) glass sphere with a typical wall thickness of 0.5 to 6 μm. The sphere may be used as is or may be given a number of coatings of any number of materials. The beads are mass produced by the millions and the first nondestructive test is to separate the obviously bad beads (broken or incomplete) from the good ones. After this has been done, the good beads must be inspected for sphericity and wall thickness uniformity. The microradiography of the glass, uncoated bead is performed on a specially designed low-energy x-ray machine. The beads are mounted in a special jig and placed on a Kodak high resolution plate in a vacuum chamber that contains the x-ray source. The x-ray image is made with an energy of less than 2 keV and the resulting images are then inspected at a magnification of 500 to 1000X. Some typical results are presented.
The Mathematics of Medical Imaging in the Classroom
ERIC Educational Resources Information Center
Funkhouser, Charles P.; Jafari, Farhad; Eubank, William B.
2002-01-01
The article presents an integrated exposition of aspects of secondary school mathematics and a medical science specialty together with related classroom activities. Clinical medical practice and theoretical and empirical literature in mathematics education and radiology were reviewed to develop and pilot model integrative classroom topics and…
The image of mathematics held by Irish post-primary students
NASA Astrophysics Data System (ADS)
Lane, Ciara; Stynes, Martin; O'Donoghue, John
2014-08-01
The image of mathematics held by Irish post-primary students was examined and a model for the image found was constructed. Initially, a definition for 'image of mathematics' was adopted with image of mathematics hypothesized as comprising attitudes, beliefs, self-concept, motivation, emotions and past experiences of mathematics. Research focused on students studying ordinary level mathematics for the Irish Leaving Certificate examination - the final examination for students in second-level or post-primary education. Students were aged between 15 and 18 years. A questionnaire was constructed with both quantitative and qualitative aspects. The questionnaire survey was completed by 356 post-primary students. Responses were analysed quantitatively using Statistical Package for the Social Sciences (SPSS) and qualitatively using the constant comparative method of analysis and by reviewing individual responses. Findings provide an insight into Irish post-primary students' images of mathematics and offer a means for constructing a theoretical model of image of mathematics which could be beneficial for future research.
Investigation of Prospective Primary Mathematics Teachers' Perceptions and Images for Quadrilaterals
ERIC Educational Resources Information Center
Turnuklu, Elif; Gundogdu Alayli, Funda; Akkas, Elif Nur
2013-01-01
The object of this study was to show how prospective elementary mathematics teachers define and classify the quadrilaterals and to find out their images. This research was a qualitative study. It was conducted with 36 prospective elementary mathematics teachers studying at 3rd and 4th years in an educational faculty. The data were collected by…
Investigation of Primary Mathematics Student Teachers' Concept Images: Cylinder and Cone
ERIC Educational Resources Information Center
Ertekin, Erhan; Yazici, Ersen; Delice, Ali
2014-01-01
The aim of the present study is to determine the influence of concept definitions of cylinder and cone on primary mathematics student teachers' construction of relevant concept images. The study had a relational survey design and the participants were 238 primary mathematics student teachers. Statistical analyses implied the following:…
Analysis of electronic autoradiographs by mathematical post-processing
NASA Astrophysics Data System (ADS)
Ghosh, S.; Baier, M.; Schütz, J.; Schneider, F.; Scherer, U. W.
2016-02-01
Autoradiography is a well-established method of nuclear imaging. When different radionuclides are present simultaneously, additional processing is needed to distinguish distributions of radionuclides. In this work, a method is presented where aluminium absorbers of different thickness are used to produce images with different cut-off energies. By subtracting images pixel-by-pixel one can generate images representing certain ranges of β-particle energies. The method is applied to the measurement of irradiated reactor graphite samples containing several radionuclides to determine the spatial distribution of these radionuclides within pre-defined energy windows. The process was repeated under fixed parameters after thermal treatment of the samples. The greyscale images of the distribution after treatment were subtracted from the corresponding pre-treatment images. Significant changes in the intensity and distribution of radionuclides could be observed in some samples. Due to the thermal treatment parameters the most significant differences were observed in the 3H and 14C inventory and distribution.
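The absorber-subtraction scheme reduces to a pixel-by-pixel difference of two greyscale images; a minimal sketch with toy 2x2 images (not the authors' data or calibration):

```python
import numpy as np

def energy_window_image(img_thin, img_thick):
    # img_thin:  greyscale autoradiograph behind a thin Al absorber
    #            (passes beta particles above a low cut-off energy)
    # img_thick: same scene behind a thicker absorber (higher cut-off)
    # Their pixel-by-pixel difference keeps only the activity whose
    # beta energy lies between the two cut-offs.
    diff = img_thin.astype(np.int32) - img_thick.astype(np.int32)
    return np.clip(diff, 0, 255).astype(np.uint8)  # clamp noise-induced negatives

thin = np.array([[200, 50], [120, 10]], dtype=np.uint8)
thick = np.array([[80, 60], [40, 5]], dtype=np.uint8)
window = energy_window_image(thin, thick)
print(window)   # [[120 0] [80 5]]
```

The same subtraction, applied to pre- and post-treatment images under fixed acquisition parameters, yields the change maps described in the abstract.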
Proceedings of the NASA Symposium on Mathematical Pattern Recognition and Image Analysis
NASA Technical Reports Server (NTRS)
Guseman, L. F., Jr.
1983-01-01
The application of mathematical and statistical analyses techniques to imagery obtained by remote sensors is described by Principal Investigators. Scene-to-map registration, geometric rectification, and image matching are among the pattern recognition aspects discussed.
Medical Image Segmentation using the HSI color space and Fuzzy Mathematical Morphology
NASA Astrophysics Data System (ADS)
Gasparri, J. P.; Bouchet, A.; Abras, G.; Ballarin, V.; Pastore, J. I.
2011-12-01
Diabetic retinopathy is the most common cause of blindness among the active population in developed countries. An early ophthalmologic examination followed by proper treatment can prevent blindness. The purpose of this work is to develop an automated method for segmenting the vasculature in retinal images, in order to assist the expert in evaluating a specific treatment or in diagnosing a potential pathology. Since the HSI space separates intensity from the intrinsic color information, its use is recommended for the digital processing of images affected by lighting changes, a characteristic of the images under study. Applying color filters artificially changes the tone of the blood vessels so that they are better distinguished from the background. This technique, combined with fuzzy mathematical morphology tools such as the Top-Hat transformation, creates images of the retina in which vascular branches are markedly enhanced over the original. These images aid the specialist in the visualization of blood vessels.
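The enhancement step can be sketched with a crisp (non-fuzzy) grayscale black top-hat, closing(I) - I, which responds to thin dark structures such as vessels; the fuzzy variant the authors use replaces min/max with fuzzy operators, which this sketch omits:

```python
import numpy as np

def grey_dilate(img, size):
    # grayscale dilation: moving-window maximum with a size x size square
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + size, j:j + size].max()
    return out

def grey_erode(img, size):
    # grayscale erosion: moving-window minimum
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + size, j:j + size].min()
    return out

def black_tophat(img, size=5):
    # closing(I) - I: highlights dark features narrower than the
    # structuring element, suppressing the slowly varying background
    closing = grey_erode(grey_dilate(img, size), size)
    return closing - img

# toy "retina": bright background with one thin dark vessel column
img = np.full((9, 9), 200.0)
img[:, 4] = 80.0
out = black_tophat(img, size=5)
print(out[:, 4].max(), out[:, 0].max())   # vessel responds, background is 0
```

In the described pipeline, this enhancement runs on the intensity channel of the HSI representation after the color filtering step.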
Applications in Digital Image Processing
ERIC Educational Resources Information Center
Silverman, Jason; Rosen, Gail L.; Essinger, Steve
2013-01-01
Students are immersed in a mathematically intensive, technological world. They engage daily with iPods, HDTVs, and smartphones--technological devices that rely on sophisticated but accessible mathematical ideas. In this article, the authors provide an overview of four lab-type activities that have been used successfully in high school mathematics…
Modelling Of Flotation Processes By Classical Mathematical Methods - A Review
NASA Astrophysics Data System (ADS)
Jovanović, Ivana; Miljanović, Igor
2015-12-01
Flotation process modelling is not a simple task, mostly because of the process complexity, i.e. the presence of a large number of variables that (to a lesser or a greater extent) affect the final outcome of the separation of mineral particles based on the differences in their surface properties. The attempts toward the development of a quantitative predictive model that would fully describe the operation of an industrial flotation plant started in the middle of the past century and continue to this day. This paper gives a review of published research activities directed toward the development of flotation models based on classical mathematical rules. The description and systematization of classical flotation models were performed according to the available references, with emphasis given exclusively to flotation process modelling, regardless of the model's application in a certain control system. In accordance with contemporary considerations, the models were classified as empirical, probabilistic, kinetic and population-balance types. Each model type is presented through the aspects of flotation modelling at the macro and micro process levels.
ERIC Educational Resources Information Center
Kosko, Karl Wesley; Norton, Anderson
2012-01-01
The current body of literature suggests an interactive relationship between several of the process standards advocated by National Council of Teachers of Mathematics. Verbal and written mathematical communication has often been described as an alternative to typical mathematical representations (e.g., charts and graphs). Therefore, the…
ERIC Educational Resources Information Center
De Smedt, Bert; Gilmore, Camilla K.
2011-01-01
This study examined numerical magnitude processing in first graders with severe and mild forms of mathematical difficulties, children with mathematics learning disabilities (MLD) and children with low achievement (LA) in mathematics, respectively. In total, 20 children with MLD, 21 children with LA, and 41 regular achievers completed a numerical…
Networks for image acquisition, processing and display
NASA Technical Reports Server (NTRS)
Ahumada, Albert J., Jr.
1990-01-01
The human visual system comprises layers of networks which sample, process, and code images. Understanding these networks is a valuable means of understanding human vision and of designing autonomous vision systems based on network processing. Ames Research Center has an ongoing program to develop computational models of such networks. The models predict human performance in detection of targets and in discrimination of displayed information. In addition, the models are artificial vision systems sharing properties with biological vision that has been tuned by evolution for high performance. Properties include variable density sampling, noise immunity, multi-resolution coding, and fault-tolerance. The research stresses analysis of noise in visual networks, including sampling, photon, and processing unit noises. Specific accomplishments include: models of sampling array growth with variable density and irregularity comparable to that of the retinal cone mosaic; noise models of networks with signal-dependent and independent noise; models of network connection development for preserving spatial registration and interpolation; multi-resolution encoding models based on hexagonal arrays (HOP transform); and mathematical procedures for simplifying analysis of large networks.
Multispectral Image Processing for Plants
NASA Technical Reports Server (NTRS)
Miles, Gaines E.
1991-01-01
The development of a machine vision system to monitor plant growth and health is one of three essential steps towards establishing an intelligent system capable of accurately assessing the state of a controlled ecological life support system for long-term space travel. Besides a network of sensors, simulators are needed to predict plant features, and artificial intelligence algorithms are needed to determine the state of a plant based life support system. Multispectral machine vision and image processing can be used to sense plant features, including health and nutritional status.
NASA Astrophysics Data System (ADS)
Frollo, Ivan; Krafčík, Andrej; Andris, Peter; Přibil, Jiří; Dermek, Tomáš
2015-12-01
Circular samples are frequent objects of "in-vitro" investigation using imaging methods based on magnetic resonance principles. The goal of our investigation is the imaging of thin planar layers without using the slice-selection procedure, i.e. pure 2D imaging, or the imaging of selected layers of samples in circular vessels or Eppendorf tubes, which necessarily uses slice selection. Although standard imaging methods were used, some specific issues arise when mathematical modeling of these procedures is introduced. In the paper, several mathematical models are presented and compared with real experimental results. Circular magnetic samples were placed in the homogeneous magnetic field of a low-field imager based on nuclear magnetic resonance. An MRI 0.178 Tesla ESAOTE Opera imager was used for experimental verification.
Concurrent Image Processing Executive (CIPE)
NASA Technical Reports Server (NTRS)
Lee, Meemong; Cooper, Gregory T.; Groom, Steven L.; Mazer, Alan S.; Williams, Winifred I.
1988-01-01
The design and implementation of a Concurrent Image Processing Executive (CIPE), which is intended to become the support system software for a prototype high performance science analysis workstation, are discussed. The target machine for this software is a JPL/Caltech Mark IIIfp Hypercube hosted by either a MASSCOMP 5600 or a Sun-3 or Sun-4 workstation; however, the design will accommodate other concurrent machines of similar architecture, i.e., local memory, multiple-instruction-multiple-data (MIMD) machines. The CIPE system provides both a multimode user interface and an applications programmer interface, and has been designed around four loosely coupled modules: (1) user interface, (2) host-resident executive, (3) hypercube-resident executive, and (4) application functions. The loose coupling between modules allows modification of a particular module without significantly affecting the other modules in the system. In order to enhance hypercube memory utilization and to allow expansion of image processing capabilities, a specialized program management method, incremental loading, was devised. To minimize data transfer between host and hypercube, a data management method which distributes, redistributes, and tracks data set information was implemented.
Identifying Cognitive Processes Important to Mathematics Learning but Often Overlooked
ERIC Educational Resources Information Center
Turner, Ross
2011-01-01
In August 2010, ACER held its annual conference in Melbourne. The theme of the 2010 conference--"Teaching Mathematics? Make It Count"--was chosen to highlight that mathematics education is an area of high priority in Australia. In the author's own presentation to the conference, he outlined research into an area that he believes is very important…
Investigating Preservice Mathematics Teachers' Manipulative Material Design Processes
ERIC Educational Resources Information Center
Sandir, Hakan
2016-01-01
Students use concrete manipulatives to form an imperative affiliation between conceptual and procedural knowledge (Balka, 1993). Hence, it is necessary to design specific mathematics manipulatives that focus on different mathematical concepts. Preservice teachers need to know how to make and use manipulatives that stimulate students' thinking as…
The Importance of Dialogic Processes to Conceptual Development in Mathematics
ERIC Educational Resources Information Center
Kazak, Sibel; Wegerif, Rupert; Fujita, Taro
2015-01-01
We argue that dialogic theory, inspired by the Russian scholar Mikhail Bakhtin, has a distinct contribution to the analysis of the genesis of understanding in the mathematics classroom. We begin by contrasting dialogic theory to other leading theoretical approaches to understanding conceptual development in mathematics influenced by Jean Piaget…
Using n-Dimensional Volumes for Mathematical Applications in Spectral Image Analysis
NASA Astrophysics Data System (ADS)
Ziemann, Amanda K.
The ability to detect an object or activity -- such as a military vehicle, construction area, campsite, or vehicle tracks -- is highly important to both military and civilian applications. Sensors that process multi- and hyperspectral images provide a medium for performing such tasks. Hyperspectral imaging is a technique for collecting and processing imagery at a large number of visible and non-visible wavelengths. Different materials exhibit different trends in their spectra, which can be used to analyze the image. For an image collected at n different wavelengths, the spectrum of each pixel can be mathematically represented as an n-element vector. The algorithm established in this work, the Simplex Volume Estimation algorithm (SVE), focuses specifically on change detection and large area search. In hyperspectral image analysis, a set of pixels constitutes a data cloud, with each pixel corresponding to a vector endpoint in Euclidean space. The SVE algorithm takes a geometrical approach to image analysis based on the linear mixture model, which describes each pixel in an image collected at n spectral bands as a linear combination of n+1 pure-material component spectra (known as endmembers). Iterative endmember identification is used to construct a volume function, where the Gram matrix is used to calculate the hypervolume of the data at each iteration as the endmembers are considered in Euclidean spaces of increasing dimensionality. Linear algebraic theory substantiates that the volume function accurately characterizes the inherent dimensionality of a set of data, and supports that the volume function provides a tool for identifying the subspace in which the magnitude of the spread of the data is the greatest. A metric is extracted from the volume function and is used to quantify the relative complexity within a single image or the change in complexity across multiple images. The SVE algorithm was applied to hyperspectral images for the tasks of change detection
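The Gram-matrix hypervolume computation that drives the volume function can be sketched generically as follows. This uses the standard simplex-volume formula (V = sqrt(det G) / k!), not the authors' SVE implementation; the function name is illustrative.

```python
import numpy as np
from math import factorial

def simplex_volume(vertices):
    """k-dimensional volume of a simplex with k+1 vertices in R^n.

    Edge vectors are taken from the first vertex; the volume is
    sqrt(det(G)) / k!, where G is the Gram matrix of the edges.
    """
    v = np.asarray(vertices, dtype=float)
    edges = v[1:] - v[0]                  # k edge vectors in R^n
    gram = edges @ edges.T                # k x k Gram matrix
    k = edges.shape[0]
    det = max(np.linalg.det(gram), 0.0)   # clamp tiny negative round-off
    return np.sqrt(det) / factorial(k)
```

Evaluating this volume as candidate endmembers are added, in spaces of increasing dimensionality, traces out the kind of volume function described above; a degenerate (lower-dimensional) data cloud yields zero volume, signaling the inherent dimensionality.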
ERIC Educational Resources Information Center
Greenberg, Richard
1998-01-01
Describes the Image Processing for Teaching (IPT) project, which provides digital image processing to excite students about science and mathematics as they use research-quality software on microcomputers. Provides information on IPT, a dissemination project whose components have included widespread teacher education and curriculum-based materials…
Image enhancement based on gamma map processing
NASA Astrophysics Data System (ADS)
Tseng, Chen-Yu; Wang, Sheng-Jyh; Chen, Yi-An
2010-05-01
This paper proposes a novel image enhancement technique based on Gamma Map Processing (GMP). In this approach, a base gamma map is generated directly from the intensity image. A sequence of gamma map processing operations is then performed to generate a channel-wise gamma map. By mapping each pixel through its estimated gamma, the detail, colorfulness, and sharpness of the original image are automatically improved. In addition, the dynamic range of the image can be virtually expanded.
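The per-pixel mapping step can be sketched as below. The rule used here to derive a base gamma map from intensity is a hypothetical placeholder (the paper generates its map by its own method); only the mechanics of mapping each pixel through its own gamma are illustrated.

```python
import numpy as np

def base_gamma_map(intensity, g_dark=0.6, g_bright=1.4):
    # Hypothetical rule: darker pixels get a smaller gamma (brightening),
    # brighter pixels a larger one. The paper's derivation differs.
    i = np.clip(intensity, 0.0, 1.0)
    return g_dark + (g_bright - g_dark) * i

def apply_gamma_map(channel, gamma_map):
    # Per-pixel power-law mapping: each pixel is raised to its own exponent.
    return np.clip(channel, 0.0, 1.0) ** gamma_map
```

A channel-wise variant would compute one such map per color channel before applying it, as in the GMP pipeline described above.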
NASA Astrophysics Data System (ADS)
Galligan, Linda
2001-09-01
When comparing Chinese and English language, large differences in orthography, syntax, semantics, and phonetics are found. These differences may have consequences in the processing of mathematical text, yet little consideration is given to them when the mathematical abilities of students from these different cultures are compared. This paper reviews the differences between English and Mandarin Chinese language, evaluates current research, discusses the possible consequences for processing mathematical text in both languages, and outlines future research possibilities.
Visualization of children's mathematics solving process using near infrared spectroscopic approach
NASA Astrophysics Data System (ADS)
Kuroda, Yasufumi; Okamoto, Naoko; Chance, Britton; Nioka, Shoko; Eda, Hideo; Maesako, Takanori
2009-02-01
Over the past decade, the application of results from brain science research to education research has been a controversial topic. A NIRS imaging system shows images of Hb parameters in the brain. Measurements using NIRS are safe, easy and the equipment is portable, allowing subjects to tolerate longer research periods. The purpose of this research is to examine the characteristics of Hb using NIRS at the moment of understanding. We measured Hb in the prefrontal cortex of children while they were solving mathematical problems (tangram puzzles). As a result of the experiment, we were able to classify the children into three groups based on their solution methods. Hb continually increased in a group which could not develop a problem solving strategy for the tangram puzzles. Hb declined steadily for a group which was able to develop a strategy for the tangram puzzles. Hb was steady for a certain group that had already developed a strategy before solving the problems. Our experiments showed that the brain data from NIRS enables the visualization of children's mathematical solution processes.
ERIC Educational Resources Information Center
Delice, Ali; Kertil, Mahmut
2015-01-01
This article reports the results of a study that investigated pre-service mathematics teachers' modelling processes in terms of representational fluency in a modelling activity related to a cassette player. A qualitative approach was used in the data collection process. Students' individual and group written responses to the mathematical modelling…
Prospective Elementary Mathematics Teachers' Thought Processes on a Model Eliciting Activity
ERIC Educational Resources Information Center
Eraslan, Ali
2012-01-01
Mathematical model and modeling are one of the topics that have been intensively discussed in recent years. The purpose of this study is to examine prospective elementary mathematics teachers' thought processes on a model eliciting activity and reveal difficulties or blockages in the processes. The study includes forty-five seniors taking the…
Analysis of Mathematics Teachers' Self-Efficacy Levels Concerning the Teaching Process
ERIC Educational Resources Information Center
Ünsal, Serkan; Korkmaz, Fahrettin; Perçin, Safiye
2016-01-01
The purpose of this study is to identify mathematics teachers' opinions on the teaching process self-efficacy levels; and to examine mathematics teachers' teaching process self-efficacy beliefs with regards to specific variables. The study was conducted in Turkey during the second term of the 2015-2016 academic year. The study sample consisted of…
Precision processing of earth image data
NASA Technical Reports Server (NTRS)
Bernstein, R.; Stierhoff, G. C.
1976-01-01
Precise corrections of Landsat data are useful for generating land-use maps, detecting various crops and determining their acreage, and detecting changes. The paper discusses computer processing and visualization techniques for Landsat data so that users can get more information from the imagery. The elementary unit of data in each band of each scene is the integrated value of intensity of reflected light detected in the field of view by each sensor. To develop the basic mathematical approach for precision correction of the data, differences between positions of ground control points on the reference map and the observed control points in the scene are used to evaluate the coefficients of cubic time functions of roll, pitch, and yaw, and a linear time function of altitude deviation from normal height above local earth's surface. The resultant equation, termed a mapping function, corrects the warped data image into one that approximates the reference map. Applications are discussed relative to shade prints, extraction of road features, and atlas of cities.
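The least-squares fit of time polynomials to control-point differences might be sketched as follows; the attribution of residuals to roll, pitch, and yaw and the function names are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def fit_attitude_polynomials(t, residuals, degree=3):
    # t: acquisition times of the ground control points
    # residuals: per-axis position errors, e.g. {"roll": ..., "pitch": ..., "yaw": ...}
    # Returns one cubic (by default) coefficient vector per axis.
    return {axis: np.polyfit(t, r, degree) for axis, r in residuals.items()}

def evaluate_attitude(coeffs, t):
    # Evaluate the fitted correction polynomials at arbitrary times.
    return {axis: np.polyval(c, t) for axis, c in coeffs.items()}
```

The altitude deviation mentioned above would be fitted the same way with degree=1, matching its linear time function; together these define the mapping function that unwarps the image toward the reference map.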
Combining image-processing and image compression schemes
NASA Technical Reports Server (NTRS)
Greenspan, H.; Lee, M.-C.
1995-01-01
An investigation into combining image-processing schemes, specifically an image enhancement scheme, with existing compression schemes is discussed. Results are presented for the pyramid coding scheme, the subband coding scheme, and progressive transmission. Encouraging results are demonstrated for the combination of image enhancement and pyramid image coding, especially at low bit rates. Adding the enhancement scheme to progressive image transmission allows enhanced visual perception at low resolutions. In addition, further processing of the transmitted images, such as edge detection, can gain from the added image resolution provided by the enhancement.
Applications Of Image Processing In Criminalistics
NASA Astrophysics Data System (ADS)
Krile, Thomas F.; Walkup, John F.; Barsallo, Adonis; Olimb, Hal; Tarng, Jaw-Horng
1987-01-01
A review of some basic image processing techniques for enhancement and restoration of images is given. Both digital and optical approaches are discussed. Fingerprint images are used as examples to illustrate the various processing techniques and their potential applications in criminalistics.
Heuristic and algorithmic processing in English, mathematics, and science education.
Sharps, Matthew J; Hess, Adam B; Price-Sharps, Jana L; Teh, Jane
2008-01-01
Many college students experience difficulties in basic academic skills. Recent research suggests that much of this difficulty may lie in heuristic competency--the ability to use and successfully manage general cognitive strategies. In the present study, the authors evaluated this possibility. They compared participants' performance on a practice California Basic Educational Skills Test and on a series of questions in the natural sciences with heuristic and algorithmic performance on a series of mathematics and reading comprehension exercises. Heuristic competency in mathematics was associated with better scores in science and mathematics. Verbal and algorithmic skills were associated with better reading comprehension. These results indicate the importance of including heuristic training in educational contexts and highlight the importance of a relatively domain-specific approach to questions of cognition in higher education.
Fractal-based image processing for mine detection
NASA Astrophysics Data System (ADS)
Nelson, Susan R.; Tuovila, Susan M.
1995-06-01
A fractal-based analysis algorithm has been developed to perform automated recognition of minelike targets in side-scan sonar images. Because naturally occurring surfaces, such as the sea bottom, are characterized by irregular textures, they are well suited to modeling as fractal surfaces. Manmade structures, including mines, are composed of Euclidean shapes, which makes fractal-based analysis highly appropriate for discriminating mines from a natural background. To that end, a set of fractal features, including fractal dimension, was developed to classify image areas as minelike targets, nonmine areas, or clutter. Four different methods of fractal dimension calculation were compared, and the Weierstrass function was used to study the effect of various signal processing procedures on the fractal qualities of an image. The difference in fractal dimension between images depends not only on the physical features present in the images but also on the underlying statistical characteristics of the processing procedures applied to the images and the mathematical assumptions of the fractal dimension calculation methods. For the image set studied, fractal-based analysis achieved a classification rate similar to that of human operators, and was very successful in identifying areas of clutter. The analysis technique presented here is applicable to any type of signal that can be configured as an image, making it suitable for multisensor systems.
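The abstract does not name the four dimension estimators it compares; as a generic stand-in, the familiar box-counting estimator on a binary image looks like this:

```python
import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    # mask: 2-D boolean array marking the structure of interest.
    # Count boxes of each size that contain at least one set pixel,
    # then estimate the dimension as the slope of log(count) vs log(1/size).
    counts = []
    for s in sizes:
        h = mask.shape[0] // s * s
        w = mask.shape[1] // s * s
        blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(np.count_nonzero(blocks.any(axis=(1, 3))))
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope
```

A smooth filled region estimates near 2 and a sparse irregular texture falls between 1 and 2, which is the kind of contrast exploited above to separate Euclidean (manmade) shapes from the fractal sea bottom.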
Programmable remapper for image processing
NASA Technical Reports Server (NTRS)
Juday, Richard D. (Inventor); Sampsell, Jeffrey B. (Inventor)
1991-01-01
A video-rate coordinate remapper includes a memory for storing a plurality of transformations on look-up tables for remapping input images from one coordinate system to another. Such transformations are operator selectable. The remapper includes a collective processor by which certain input pixels of an input image are transformed to a portion of the output image in a many-to-one relationship. The remapper includes an interpolative processor by which the remaining input pixels of the input image are transformed to another portion of the output image in a one-to-many relationship. The invention includes certain specific transforms for creating output images useful for certain defects of visually impaired people. The invention also includes means for shifting input pixels and means for scrolling the output matrix.
Handbook on COMTAL's Image Processing System
NASA Technical Reports Server (NTRS)
Faulcon, N. D.
1983-01-01
An image processing system is the combination of an image processor with other control and display devices plus the necessary software needed to produce an interactive capability to analyze and enhance image data. Such an image processing system installed at NASA Langley Research Center, Instrument Research Division, Acoustics and Vibration Instrumentation Section (AVIS) is described. Although much of the information contained herein can be found in the other references, it is hoped that this single handbook will give the user better access, in concise form, to pertinent information and usage of the image processing system.
New method of contour image processing based on the formalism of spiral light beams
Volostnikov, Vladimir G; Kishkin, S A; Kotova, S P
2013-07-31
The possibility of applying the mathematical formalism of spiral light beams to the problems of contour image recognition is theoretically studied. The advantages and disadvantages of the proposed approach are evaluated; the results of numerical modelling are presented. (optical image processing)
NASA Astrophysics Data System (ADS)
Prosvirnikov, D. B.; Ziatdinova, D. F.; Timerbaev, N. F.; Saldaev, V. A.; Gilfanov, K. H.
2016-04-01
The article analyses the physical picture of the steam-explosion treatment of pre-impregnated lignocellulosic material and, on that basis, develops a mathematical model of the process, represented as differential equations with boundary conditions. The resulting mathematical description makes it possible to identify the degree of influence of various factors on the kinetics of the process and to select rational operating parameters for the processes considered, in terms of the set of application tasks.
Harmony Theory: A Mathematical Framework for Stochastic Parallel Processing.
ERIC Educational Resources Information Center
Smolensky, Paul
This paper presents preliminary results of research founded on the hypothesis that in real environments there exist regularities that can be idealized as mathematical structures that are simple enough to be analyzed. The author considered three steps in analyzing the encoding of modularity of the environment. First, a general information…
Thinking Process of Pseudo Construction in Mathematics Concepts
ERIC Educational Resources Information Center
Subanji; Nusantara, Toto
2016-01-01
This article aims at studying pseudo construction of student thinking in mathematical concepts, integer number operation, algebraic forms, area concepts, and triangle concepts. 391 junior high school students from four districts of East Java Province Indonesia were taken as the subjects. Data were collected by means of distributing the main…
Learning Elementary School Mathematics as a Culturally Conditioned Process.
ERIC Educational Resources Information Center
Vasco, Carlos E.
Mathematics is thought to be the most culturally independent of all academic subjects. "New Math" textbooks printed in the United States or Belgium were translated into Spanish and Portuguese with only minor variations in the story problems and are now taught in most Latin-American countries. Looking backwards, it was not different in past years…
SHETTY, ANIL N.; CHIANG, SHARON; MALETIC-SAVATIC, MIRJANA; KASPRIAN, GREGOR; VANNUCCI, MARINA; LEE, WESLEY
2016-01-01
In this article, we discuss the theoretical background for diffusion weighted imaging and diffusion tensor imaging. Molecular diffusion is a random process involving thermal Brownian motion. In biological tissues, the underlying microstructures restrict the diffusion of water molecules, making diffusion directionally dependent. Water diffusion in tissue is mathematically characterized by the diffusion tensor, the elements of which contain information about the magnitude and direction of diffusion and which is a function of the coordinate system. Thus, it is possible to generate contrast in tissue based primarily on diffusion effects. Expressing diffusion in terms of the measured diffusion coefficient (eigenvalue) in any one direction can lead to errors. Nowhere is this more evident than in white matter, due to the preferential orientation of myelin fibers. The directional dependency is removed by diagonalization of the diffusion tensor, which then yields a set of three eigenvalues and eigenvectors, representing the magnitude and direction of the three orthogonal axes of the diffusion ellipsoid, respectively. For example, the eigenvalue corresponding to the eigenvector along the long axis of the fiber corresponds qualitatively to diffusion with least restriction. Determination of the principal values of the diffusion tensor and various anisotropic indices provides structural information. We review the use of diffusion measurements using the modified Stejskal–Tanner diffusion equation. The anisotropy is analyzed by decomposing the diffusion tensor based on symmetry properties describing the geometry of the diffusion tensor. We further describe diffusion tensor properties in visualizing fiber tract organization of the human brain. PMID:27441031
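The diagonalization step can be sketched with a standard eigendecomposition; fractional anisotropy is shown as one representative anisotropy index (the article's specific indices may differ).

```python
import numpy as np

def tensor_eigen(D):
    # Diagonalize a symmetric 3x3 diffusion tensor; the eigenvalues are the
    # principal diffusivities, the eigenvectors the axes of the diffusion ellipsoid.
    evals, evecs = np.linalg.eigh(D)
    order = np.argsort(evals)[::-1]      # descending: lambda1 >= lambda2 >= lambda3
    return evals[order], evecs[:, order]

def fractional_anisotropy(evals):
    # FA = sqrt(3/2) * ||lambda - mean(lambda)|| / ||lambda||
    l = np.asarray(evals, dtype=float)
    md = l.mean()
    den = np.sqrt((l ** 2).sum())
    return 0.0 if den == 0 else np.sqrt(1.5) * np.sqrt(((l - md) ** 2).sum()) / den
```

Isotropic diffusion gives FA = 0, while a purely linear tensor gives FA = 1; the eigenvector paired with the largest eigenvalue points along the least-restricted direction, i.e., the fiber axis described above.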
Computers in Public Schools: Changing the Image with Image Processing.
ERIC Educational Resources Information Center
Raphael, Jacqueline; Greenberg, Richard
1995-01-01
The kinds of educational technologies selected can make the difference between uninspired, rote computer use and challenging learning experiences. University of Arizona's Image Processing for Teaching Project has worked with over 1,000 teachers to develop image-processing techniques that provide students with exciting, open-ended opportunities for…
Images of the Limit of Function Formed in the Course of Mathematical Studies at the University
ERIC Educational Resources Information Center
Przenioslo, Malgorzata
2004-01-01
The paper is based on extensive research carried out on students of mathematics who had completed a university course of calculus. The basic purpose of the research was to determine the students' images of the concept of limit, that is to find out their associations, conceptions and intuitions connected with limits and to determine the degree of…
Pina, Violeta; Castillo, Alejandro; Cohen Kadosh, Roi; Fuentes, Luis J.
2015-01-01
Previous studies have suggested that numerical processing relates to mathematical performance, but such a relationship seems more evident for intentional than for automatic numerical processing. In the present study we assessed the relationship between the two types of numerical processing and specific mathematical abilities in a sample of 109 children in grades 1–6. Participants were tested on an ample range of mathematical tests and also performed both a numerical and a size comparison task. The results showed that numerical processing related to mathematical performance only when inhibitory control was involved in the comparison tasks. Concretely, we found that intentional numerical processing, as indexed by the numerical distance effect in the numerical comparison task, was related to mathematical reasoning skills only when the task-irrelevant dimension (physical size) was incongruent, whereas automatic numerical processing, indexed by the congruency effect in the size comparison task, was related to mathematical calculation skills only when digits were separated by a small distance. The observed double dissociation highlights the relevance of both intentional and automatic numerical processing to mathematical skills when inhibitory control is also involved. PMID:25873909
Examining Prospective Mathematics Teachers' Proof Processes for Algebraic Concepts
ERIC Educational Resources Information Center
Güler, Gürsel; Dikici, Ramazan
2014-01-01
The aim of this study was to examine prospective mathematics teachers' proof processes for algebraic concepts. The study was conducted with 10 prospective teachers who were studying at the department of secondary mathematics teaching and who volunteered to participate in the study. The data were obtained via task-based clinical interviews…
ERIC Educational Resources Information Center
Palla, Marina; Potari, Despina; Spyrou, Panagiotis
2012-01-01
In this study, we investigate the meaning students attribute to the structure of mathematical induction (MI) and the process of proof construction using mathematical induction in the context of a geometric recursion problem. Two hundred and thirteen 17-year-old students of an upper secondary school in Greece participated in the study. Students'…
Speed of Information Processing in Generally Gifted and Excelling-in-Mathematics Adolescents
ERIC Educational Resources Information Center
Paz-Baruch, N.; Leikin, M.; Aharon-Peretz, J.; Leikin, R.
2014-01-01
A considerable amount of recent evidence suggests that speed of information processing (SIP) may be related to general giftedness as well as contributing to higher mathematical ability. To date, no study has examined SIP associated with both general giftedness (G) and excellence in mathematics (EM). This paper presents a part of more extensive…
A Mathematical Experience Involving Defining Processes: In-Action Definitions and Zero-Definitions
ERIC Educational Resources Information Center
Ouvrier-Buffet, Cecile
2011-01-01
In this paper, a focus is made on defining processes at stake in an unfamiliar situation coming from discrete mathematics which brings surprising mathematical results. The epistemological framework of Lakatos is questioned and used for the design and the analysis of the situation. The cognitive background of Vergnaud's approach enriches the study…
Image processing applied to laser cladding process
Meriaudeau, F.; Truchetet, F.
1996-12-31
The laser cladding process, which consists of adding a melted powder to a substrate in order to improve or change the behavior of the material against corrosion, fatigue, and so on, involves many parameters. To produce good tracks, some parameters must be controlled during the process. The authors present a low-cost, high-performance system using two CCD matrix cameras. One camera provides surface temperature measurements, while the other gives information on the powder distribution or the geometric characteristics of the tracks. The surface temperature (via the Beer-Lambert law) enables detection of variations in the mass feed rate: with this system the authors are able to detect fluctuations of 2 to 3 g/min in the mass flow rate. The other camera gives information related to the powder distribution; a simple algorithm applied to the data acquired from the CCD matrix camera reveals very weak fluctuations in both gas fluxes (carrier or shielding gas). During the process, this camera is also used to perform geometric measurements. The height and width of the track are obtained in real time, enabling the operator to relate them to process parameters such as the processing speed and the mass flow rate. The authors present the results provided by their system in order to demonstrate its contribution to the efficiency of the laser cladding process. The conclusion is dedicated to a summary of the presented work and expectations for the future.
Matching rendered and real world images by digital image processing
NASA Astrophysics Data System (ADS)
Mitjà, Carles; Bover, Toni; Bigas, Miquel; Escofet, Jaume
2010-05-01
Recent advances in computer-generated imagery (CGI) have been used in commercial and industrial photography, providing broad scope in product advertising. Mixing real-world images with those rendered by virtual-space software shows a more or less visible mismatch between the corresponding image quality of the two. Rendered images are produced by software whose quality is limited only by the output resolution. Real-world images are taken with cameras subject to several image degradation factors, such as residual lens aberrations, diffraction, sensor low-pass anti-aliasing filters, color-pattern demosaicing, etc. The effect of all these degradation factors can be characterized by the system Point Spread Function (PSF). Because the image is the convolution of the object with the system PSF, its characterization shows the amount of degradation added to any picture taken. This work explores the use of image processing to degrade the rendered images following the parameters indicated by the real system PSF, attempting to match virtual and real-world image quality. The system MTF is determined by the slanted-edge method, both in laboratory conditions and in the real picture environment, in order to compare the influence of working conditions on device performance; an approximation to the system PSF is derived from the two measurements. The rendered images are filtered with a Gaussian filter obtained from the taking system's PSF. Results with and without filtering are shown and compared by measuring the contrast achieved in different regions of the final image.
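The Gaussian filtering step might be sketched as a separable convolution, with `sigma` assumed (for illustration) to come from an isotropic Gaussian fit to the measured system PSF:

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    # Normalized 1-D Gaussian; a separable pair of these approximates
    # an isotropic 2-D Gaussian PSF.
    if radius is None:
        radius = int(3 * sigma + 0.5)
    x = np.arange(-radius, radius + 1)
    g = np.exp(-(x ** 2) / (2.0 * sigma ** 2))
    return g / g.sum()

def blur_like_camera(image, sigma):
    # Degrade a rendered image by convolving rows, then columns,
    # with the Gaussian approximation of the camera system's PSF.
    k = gaussian_kernel(sigma)
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, image)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, rows)
```

A real PSF is generally not isotropic or Gaussian, so this is only the matching idea in miniature; the paper derives its filter from slanted-edge MTF measurements.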
Image Processing in Intravascular OCT
NASA Astrophysics Data System (ADS)
Wang, Zhao; Wilson, David L.; Bezerra, Hiram G.; Rollins, Andrew M.
Coronary artery disease is the leading cause of death in the world. Intravascular optical coherence tomography (IVOCT) is rapidly becoming a promising imaging modality for characterization of atherosclerotic plaques and evaluation of coronary stenting. OCT has several unique advantages over alternative technologies, such as intravascular ultrasound (IVUS), due to its better resolution and contrast. For example, OCT is currently the only imaging modality that can measure the thickness of the fibrous cap of an atherosclerotic plaque in vivo. OCT also has the ability to accurately assess the coverage of individual stent struts by neointimal tissue over time. However, it is extremely time-consuming to analyze IVOCT images manually to derive quantitative diagnostic metrics. In this chapter, we introduce some computer-aided methods to automate the common IVOCT image analysis tasks.
Programmable Iterative Optical Image And Data Processing
NASA Technical Reports Server (NTRS)
Jackson, Deborah J.
1995-01-01
Proposed method of iterative optical image and data processing overcomes limitations imposed by loss of optical power after repeated passes through many optical elements - especially, beam splitters. Involves selective, timed combination of optical wavefront phase conjugation and amplification to regenerate images in real time to compensate for losses in optical iteration loops; timing such that amplification turned on to regenerate desired image, then turned off so as not to regenerate other, undesired images or spurious light propagating through loops from unwanted reflections.
Non-linear Post Processing Image Enhancement
NASA Technical Reports Server (NTRS)
Hunt, Shawn; Lopez, Alex; Torres, Angel
1997-01-01
A non-linear filter for image post-processing based on the feedforward neural network topology is presented. This study was undertaken to investigate the usefulness of "smart" filters in image post-processing. The filter has been shown to be useful in recovering high frequencies, such as those lost during the JPEG compression-decompression process. The filtered images have a higher signal-to-noise ratio and a higher perceived image quality. Simulation studies comparing the proposed filter with the optimum mean-square non-linear filter, examples of the high-frequency recovery, and the statistical properties of the filter are given.
A mathematical model of neuro-fuzzy approximation in image classification
NASA Astrophysics Data System (ADS)
Gopalan, Sasi; Pinto, Linu; Sheela, C.; Arun Kumar M., N.
2016-06-01
Image digitization and the explosion of the World Wide Web have made traditional search an inefficient method for retrieving required grassland image data from a large database. For a given input query image, a Content-Based Image Retrieval (CBIR) system retrieves similar images from a large database. Advances in technology have increased the use of grassland image data in diverse areas such as agriculture, art galleries, education, and industry. In all of these areas it is necessary to retrieve grassland image data efficiently from a large database in order to perform an assigned task and make a suitable decision. This paper proposes a CBIR system based on grassland image properties that uses a feed-forward back-propagation neural network for effective image retrieval. Fuzzy memberships play an important role in the input space of the proposed system, leading to a combined neuro-fuzzy approximation in image classification. The mathematical model in the proposed CBIR system clarifies the fuzzy-neuro approximation and the convergence of the image features in a grassland image.
Quantitative image processing in fluid mechanics
NASA Technical Reports Server (NTRS)
Hesselink, Lambertus; Helman, James; Ning, Paul
1992-01-01
The current status of digital image processing in fluid flow research is reviewed. In particular, attention is given to a comprehensive approach to the extraction of quantitative data from multivariate databases and examples of recent developments. The discussion covers numerical simulations and experiments, data processing, generation and dissemination of knowledge, traditional image processing, hybrid processing, fluid flow vector field topology, and isosurface analysis using Marching Cubes.
Water surface capturing by image processing
Technology Transfer Automated Retrieval System (TEKTRAN)
An alternative means of measuring the water surface interface during laboratory experiments is processing a series of sequentially captured images. Image processing can provide a continuous, non-intrusive record of the water surface profile whose accuracy is not dependent on water depth. More trad...
Eclipse: ESO C Library for an Image Processing Software Environment
NASA Astrophysics Data System (ADS)
Devillard, Nicolas
2011-12-01
Written in ANSI C, eclipse is a library offering numerous services related to astronomical image processing: FITS data access, various image and cube loading methods, binary image handling and filtering (including convolution and morphological filters), 2-D cross-correlation, connected components, cube and image arithmetic, dead pixel detection and correction, object detection, data extraction, flat-fielding with robust fit, image generation, statistics, photometry, image-space resampling, image combination, and cube stacking. It also contains support for mathematical tools like random number generation, FFT, curve fitting, matrices, fast median computation, and point-pattern matching. The main feature of this library is its ability to handle large amounts of input data (up to 2GB in the current version) regardless of the amount of memory and swap available on the local machine. Another feature is the very high speed allowed by optimized C, making it an ideal base tool for programming efficient number-crunching applications, e.g., on parallel (Beowulf) systems.
Parallel-Processing Software for Creating Mosaic Images
NASA Technical Reports Server (NTRS)
Klimeck, Gerhard; Deen, Robert; McCauley, Michael; DeJong, Eric
2008-01-01
A computer program implements parallel processing for nearly real-time creation of panoramic mosaics of images of terrain acquired by video cameras on an exploratory robotic vehicle (e.g., a Mars rover). Because the original images are typically acquired at various camera positions and orientations, it is necessary to warp the images into the reference frame of the mosaic before stitching them together to create the mosaic. [Also see "Parallel-Processing Software for Correlating Stereo Images," Software Supplement to NASA Tech Briefs, Vol. 31, No. 9 (September 2007) page 26.] The warping algorithm in this computer program reflects the considerations that (1) for every pixel in the desired final mosaic, a good corresponding point must be found in one or more of the original images and (2) for this purpose, one needs a good mathematical model of the cameras and a good correlation of individual pixels with respect to their positions in three dimensions. The desired mosaic is divided into slices, each of which is assigned to one of a number of central processing units (CPUs) operating simultaneously. The results from the CPUs are gathered and placed into the final mosaic. The time taken to create the mosaic depends upon the number of CPUs, the speed of each CPU, and whether a local or a remote data-staging mechanism is used.
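The slice-per-CPU scheme described above can be sketched as follows (threads stand in for the CPUs, and a trivial fill stands in for the actual warping of source images):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def render_slice(args):
    """Stand-in for warping source images into one horizontal slice of
    the mosaic; here each slice is just filled with its own index."""
    idx, rows, width = args
    return idx, np.full((rows, width), float(idx))

def build_mosaic(height, width, n_cpus):
    rows = height // n_cpus                     # one slice per "CPU"
    tasks = [(i, rows, width) for i in range(n_cpus)]
    mosaic = np.empty((height, width))
    with ThreadPoolExecutor(max_workers=n_cpus) as pool:
        for idx, block in pool.map(render_slice, tasks):
            mosaic[idx * rows:(idx + 1) * rows] = block  # gather results
    return mosaic

mosaic = build_mosaic(8, 8, 4)
```

Because the slices are independent, the gather step is a simple placement, matching the abstract's note that total time scales with the number and speed of CPUs.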
[Development of a Text-Data Based Learning Tool That Integrates Image Processing and Displaying].
Shinohara, Hiroyuki; Hashimoto, Takeyuki
2015-01-01
We developed a text-data based learning tool that integrates image processing and display in Excel. The knowledge required to program with this tool is limited to absolute, relative, and composite cell references and roughly 20 of Excel's mathematical functions. The tool supports resolution translation, geometric transformation, spatial-filter processing, the Radon transform, the Fourier transform, convolution, correlation, deconvolution, the wavelet transform, mutual information, and simulation of proton density-, T1-, and T2-weighted MR images. Processed images of 128 x 128 or 256 x 256 pixels are viewed directly within Excel worksheets without any dedicated image display software. Results of image processing with this tool were compared with those obtained in the C language, and the tool was judged sufficiently accurate to be practically useful. Images displayed on Excel worksheets were also compared with images shown by binary-data display software and were nearly equal in visual impression. Because processing operates on text data, each step is visible, which makes it easy to relate the displayed contrast to the mathematical equations in the program. We conclude that the newly developed tool is adequate as a computer-assisted learning tool for medical image processing.
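A spatial filter of the kind the tool teaches is just a formula over relative cell references; a minimal sketch of a 3x3 mean filter in that spirit (Python arrays standing in for the Excel worksheet):

```python
import numpy as np

def mean_filter_3x3(img):
    """Each output 'cell' averages a 3x3 block of relative references,
    mirroring an Excel formula like =AVERAGE(A1:C3) filled across the
    worksheet (borders are left unfiltered for simplicity)."""
    out = np.zeros_like(img, dtype=float)
    for y in range(1, img.shape[0] - 1):
        for x in range(1, img.shape[1] - 1):
            out[y, x] = img[y - 1:y + 2, x - 1:x + 2].mean()
    return out

img = np.zeros((5, 5))
img[2, 2] = 9.0                 # a single bright pixel
smooth = mean_filter_3x3(img)   # spreads it over its 3x3 neighbourhood
```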
Image processing for drawing recognition
NASA Astrophysics Data System (ADS)
Feyzkhanov, Rustem; Zhelavskaya, Irina
2014-03-01
The task of recognizing the edges of rectangular structures is well studied, but almost all existing methods work on static images and place no limit on processing time. We propose applying homography estimation to a video stream obtained from a webcam, together with an algorithm suited to this setting. One of the main use cases of such an application is recognizing a drawing made by a person on a piece of paper held in front of the webcam.
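Applying a homography to image points, the per-frame core operation in such a system, can be sketched as follows (the translation matrix is an illustrative assumption, not the authors' estimated transform):

```python
import numpy as np

def apply_homography(H, pts):
    """Map 2-D points through a 3x3 homography via homogeneous coords."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]       # de-homogenize

# A pure translation by (2, 3), written as a homography (illustrative).
H = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 3.0],
              [0.0, 0.0, 1.0]])
corners = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
mapped = apply_homography(H, corners)
```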
CT Image Processing Using Public Digital Networks
Rhodes, Michael L.; Azzawi, Yu-Ming; Quinn, John F.; Glenn, William V.; Rothman, Stephen L.G.
1984-01-01
Nationwide commercial computer communication is now commonplace for those applications where digital dialogues are generally short and widely distributed, and where bandwidth does not exceed that of dial-up telephone lines. Image processing using such networks is prohibitive because of the large volume of data inherent to digital pictures. With a blend of increasing bandwidth and distributed processing, network image processing becomes possible. This paper examines characteristics of a digital image processing service for a nationwide network of CT scanner installations. Issues of image transmission, data compression, distributed processing, software maintenance, and interfacility communication are also discussed. Included are results that show the volume and type of processing experienced by a network of over 50 CT scanners for the last 32 months.
Parallel digital signal processing architectures for image processing
NASA Astrophysics Data System (ADS)
Kshirsagar, Shirish P.; Hartley, David A.; Harvey, David M.; Hobson, Clifford A.
1994-10-01
This paper describes research into a high-speed image processing system that uses parallel digital signal processors to process electro-optic images. The objective of the system is to reduce the processing time of non-contact inspection problems, including industrial and medical applications. Because a single processor cannot deliver the processing power these applications require, a MIMD system was designed and constructed to enable fast processing of electro-optic images. The Texas Instruments TMS320C40 digital signal processor is used for its high-speed floating-point CPU and its support for parallel processing. A custom-designed VISION bus transfers images between processors. The system is being applied to solder-joint inspection of high-technology printed circuit boards.
ERIC Educational Resources Information Center
Davis, C. E.; Osler, James E.
2013-01-01
This paper details the outcomes of a qualitative in-depth investigation into teacher education mathematics preparation. This research is grounded in the notion that mathematics teacher education students (as "degree seeking candidates") need to develop strong foundations of mathematical practice as defined by the Common Core State…
A mathematical study of a random process proposed as an atmospheric turbulence model
NASA Technical Reports Server (NTRS)
Sidwell, K.
1977-01-01
A random process is formed by the product of a local Gaussian process and a random amplitude process, and the sum of that product with an independent mean value process. The mathematical properties of the resulting process are developed, including the first and second order properties and the characteristic function of general order. An approximate method for the analysis of the response of linear dynamic systems to the process is developed. The transition properties of the process are also examined.
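The construction of the process, the product of a local Gaussian process and a random amplitude process plus an independent mean value process, can be sketched numerically (i.i.d. samples are an illustrative simplification of the paper's correlated processes):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_process(n, sigma_a=1.0, sigma_m=0.5):
    """x(t) = a(t) * g(t) + m(t): local Gaussian g(t), random amplitude
    a(t), and an independent mean value process m(t). I.i.d. samples
    are used here in place of the paper's correlated processes."""
    g = rng.standard_normal(n)                    # local Gaussian
    a = sigma_a * np.abs(rng.standard_normal(n))  # amplitude process
    m = sigma_m * rng.standard_normal(n)          # mean value process
    return a * g + m

x = sample_process(10_000)
```

The product a(t)g(t) gives the heavier-than-Gaussian tails used to model turbulence intermittency, while m(t) carries the slowly varying mean.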
Interactive image processing in swallowing research
NASA Astrophysics Data System (ADS)
Dengel, Gail A.; Robbins, JoAnne; Rosenbek, John C.
1991-06-01
Dynamic radiographic imaging of the mouth, larynx, pharynx, and esophagus during swallowing is used commonly in clinical diagnosis, treatment and research. Images are recorded on videotape and interpreted conventionally by visual perceptual methods, limited to specific measures in the time domain and binary decisions about the presence or absence of events. An image processing system using personal computer hardware and original software has been developed to facilitate measurement of temporal, spatial and temporospatial parameters. Digitized image sequences derived from videotape are manipulated and analyzed interactively. Animation is used to preserve context and increase efficiency of measurement. Filtering and enhancement functions heighten image clarity and contrast, improving visibility of details which are not apparent on videotape. Distortion effects and extraneous head and body motions are removed prior to analysis, and spatial scales are controlled to permit comparison among subjects. Effects of image processing on intra- and interjudge reliability and research applications are discussed.
Image Algebra Matlab language version 2.3 for image processing and compression research
NASA Astrophysics Data System (ADS)
Schmalz, Mark S.; Ritter, Gerhard X.; Hayden, Eric
2010-08-01
Image algebra is a rigorous, concise notation that unifies linear and nonlinear mathematics in the image domain. Image algebra was developed under DARPA and US Air Force sponsorship at University of Florida for over 15 years beginning in 1984. Image algebra has been implemented in a variety of programming languages designed specifically to support the development of image processing and computer vision algorithms and software. The University of Florida has been associated with development of the languages FORTRAN, Ada, Lisp, and C++. The latter implementation involved a class library, iac++, that supported image algebra programming in C++. Since image processing and computer vision are generally performed with operands that are array-based, the Matlab™ programming language is ideal for implementing the common subset of image algebra. Objects include sets and set operations, images and operations on images, as well as templates and image-template convolution operations. This implementation, called Image Algebra Matlab (IAM), has been found to be useful for research in data, image, and video compression, as described herein. Due to the widespread acceptance of the Matlab programming language in the computing community, IAM offers exciting possibilities for supporting a large group of users. The control over an object's computational resources provided to the algorithm designer by Matlab means that IAM programs can employ versatile representations for the operands and operations of the algebra, which are supported by the underlying libraries written in Matlab. In a previous publication, we showed how the functionality of IAC++ could be carried forth into a Matlab implementation, and provided practical details of a prototype implementation called IAM Version 1. In this paper, we further elaborate the purpose and structure of image algebra, then present a maturing implementation of Image Algebra Matlab called IAM Version 2.3, which extends the previous implementation
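The image-template convolution operation central to image algebra can be sketched generically (a Python sketch of the concept, not IAM's Matlab syntax):

```python
import numpy as np

def template_convolve(image, template):
    """Image-template convolution in the image-algebra sense: each
    output pixel combines a neighbourhood of the image with a small
    template of weights. Generic sketch, not IAM's actual operators."""
    th, tw = template.shape
    out = np.zeros((image.shape[0] - th + 1, image.shape[1] - tw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + th, x:x + tw] * template)
    return out

image = np.arange(16.0).reshape(4, 4)
template = np.ones((2, 2)) / 4.0          # 2x2 averaging template
avg = template_convolve(image, template)
```

In the algebra, swapping the sum/product pair for max/plus or other operations yields morphological and nonlinear filters from the same template machinery.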
NASA Astrophysics Data System (ADS)
Zhao, Hui; Wei, Jingxuan
2014-09-01
The key to the concept of tunable wavefront coding lies in detachable phase masks. Ojeda-Castaneda et al. (Progress In Electromagnetics Research Symposium Proceedings, Cambridge, USA, July 5-8, 2010) described a typical design in which two components with cosinusoidal phase variation operate together to make defocus sensitivity tunable. The present study proposes an improved design and makes three contributions: (1) a mathematical derivation based on the stationary phase method explains why the detachable phase mask of Ojeda-Castaneda et al. tunes the defocus sensitivity; (2) the derivations show that the effective bandwidth of the wavefront-coded imaging system is also tunable by moving each component of the detachable phase mask asymmetrically, and an improved Fisher-information-based optimization procedure was designed to ascertain the optimal mask parameters for a specific bandwidth; (3) possible applications of the tunable bandwidth are demonstrated by simulated imaging.
Earth Observation Services (Image Processing Software)
NASA Technical Reports Server (NTRS)
1992-01-01
San Diego State University and Environmental Systems Research Institute, with other agencies, have applied satellite imaging and image processing techniques to geographic information systems (GIS) updating. The resulting images display land use and are used by a regional planning agency for applications like mapping vegetation distribution and preserving wildlife habitats. The EOCAP program provides government co-funding to encourage private investment in, and to broaden the use of NASA-developed technology for analyzing information about Earth and ocean resources.
Nonlinear Optical Image Processing with Bacteriorhodopsin Films
NASA Technical Reports Server (NTRS)
Downie, John D.; Deiss, Ron (Technical Monitor)
1994-01-01
The transmission properties of some bacteriorhodopsin film spatial light modulators are uniquely suited to allow nonlinear optical image processing operations to be applied to images with multiplicative noise characteristics. A logarithmic amplitude transmission feature of the film permits the conversion of multiplicative noise to additive noise, which may then be linearly filtered out in the Fourier plane of the transformed image. The bacteriorhodopsin film displays the logarithmic amplitude response for write beam intensities spanning a dynamic range greater than 2.0 orders of magnitude. We present experimental results demonstrating the principle and capability for several different image and noise situations, including deterministic noise and speckle. Using the bacteriorhodopsin film, we successfully filter out image noise from the transformed image that cannot be removed from the original image.
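The principle, a logarithm turning multiplicative noise into additive noise that a linear filter can then remove, can be sketched numerically (a 1-D moving average stands in for the optical Fourier-plane filter):

```python
import numpy as np

rng = np.random.default_rng(1)

# Speckle-like multiplicative noise on a flat 1-D "image".
signal = np.full(4096, 100.0)
noise = np.exp(0.3 * rng.standard_normal(4096))   # multiplicative
noisy = signal * noise

# The logarithm (the film's amplitude response) makes the noise
# additive, so a linear low-pass filter can suppress it; exponentiate
# to return to the intensity domain.
log_img = np.log(noisy)
filtered = np.convolve(log_img, np.ones(33) / 33, mode="same")
restored = np.exp(filtered)
```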
Image-plane processing of visual information
NASA Technical Reports Server (NTRS)
Huck, F. O.; Fales, C. L.; Park, S. K.; Samms, R. W.
1984-01-01
Shannon's theory of information is used to optimize the optical design of sensor-array imaging systems which use neighborhood image-plane signal processing for enhancing edges and compressing dynamic range during image formation. The resultant edge-enhancement, or band-pass-filter, response is found to be very similar to that of human vision. Comparisons of traits in human vision with results from information theory suggest that: (1) Image-plane processing, like preprocessing in human vision, can improve visual information acquisition for pattern recognition when resolving power, sensitivity, and dynamic range are constrained. Improvements include reduced sensitivity to changes in light levels, reduced signal dynamic range, reduced data transmission and processing, and reduced aliasing and photosensor noise degradation. (2) Information content can be an appropriate figure of merit for optimizing the optical design of imaging systems when visual information is acquired for pattern recognition. The design trade-offs involve spatial response, sensitivity, and sampling interval.
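A band-pass edge-enhancement response similar to the centre-surround preprocessing described above can be sketched with a difference of Gaussians (kernel size and scales are illustrative assumptions):

```python
import numpy as np

def dog_kernel(n, s_center, s_surround):
    """1-D difference-of-Gaussians: a band-pass (edge-enhancing)
    response similar to centre-surround preprocessing in vision."""
    x = np.arange(n) - n // 2
    g1 = np.exp(-x**2 / (2 * s_center**2))
    g2 = np.exp(-x**2 / (2 * s_surround**2))
    g1 /= g1.sum()
    g2 /= g2.sum()
    return g1 - g2          # weights sum to zero: flat regions give 0

edge = np.r_[np.zeros(32), np.ones(32)]        # a step edge
response = np.convolve(edge, dog_kernel(15, 1.0, 3.0), mode="same")
```

The zero-sum kernel suppresses uniform regions (compressing dynamic range) while responding strongly at the step, the band-pass behaviour the abstract compares to human vision.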
A mathematical approach to image reconstruction on dual-energy computed tomography
NASA Astrophysics Data System (ADS)
Kim, Sungwhan; Ahn, Chi Young; Kang, Sung-Ho; Ha, Taeyoung; Jeon, Kiwan
2015-03-01
In this paper, we provide a mathematical approach to reconstructing the Compton scattering and photoelectric coefficients using a dual-energy CT system. The proposed imaging method is based on the mean value theorem to handle the non-linear integration arising from the polychromatic-energy CT scan system. We present a numerical simulation result validating the proposed algorithm.
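A heavily simplified, linearized sketch of the two-coefficient decomposition (the paper's contribution is handling the non-linear polychromatic integration, which is not reproduced here; the weight matrix and data are assumed examples):

```python
import numpy as np

# After linearization, the attenuation measured at two tube energies is
# modeled as a linear mix of a Compton coefficient c and a
# photoelectric coefficient p; two measurements determine both.
A = np.array([[1.0, 2.0],     # assumed energy weights at low kVp
              [0.8, 0.5]])    # assumed energy weights at high kVp
measured = A @ np.array([3.0, 1.0])   # synthesized data with c=3, p=1
c, p = np.linalg.solve(A, measured)   # recover the two coefficients
```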
Digital Image Processing in Private Industry.
ERIC Educational Resources Information Center
Moore, Connie
1986-01-01
Examines various types of private industry optical disk installations in terms of business requirements for digital image systems in five areas: records management; transaction processing; engineering/manufacturing; information distribution; and office automation. Approaches for implementing image systems are addressed as well as key success…
Command Line Image Processing System (CLIPS)
NASA Astrophysics Data System (ADS)
Fleagle, S. R.; Meyers, G. L.; Kulinski, R. G.
1985-06-01
An interactive image processing language (CLIPS) has been developed for use in an image processing environment. CLIPS uses a simple syntax with extensive on-line help to allow even the most naive user to perform complex image processing tasks. In addition, CLIPS functions as an interpretive language complete with data structures and program control statements. CLIPS statements fall into one of three categories: command, control, and utility statements. Command statements are expressions comprised of intrinsic functions and/or arithmetic operators that act directly on image or user-defined data; examples of CLIPS intrinsic functions are ROTATE, FILTER, and EXPONENT. Control statements allow a structured programming style through statements such as DO WHILE and IF-THEN-ELSE. Utility statements such as DEFINE, READ, and WRITE support I/O and user-defined data structures. Since CLIPS uses a table-driven parser, it is easily adapted to any environment. New commands may be added to CLIPS by writing the procedure in a high-level language such as Pascal or FORTRAN and inserting the syntax for that command into the table. However, CLIPS was designed by incorporating most imaging operations into the language as intrinsic functions. CLIPS allows the user to generate new procedures easily with these powerful functions, either interactively or off line using a text editor. The fact that CLIPS can be used to generate complex procedures quickly or to perform basic image processing functions interactively makes it a valuable tool in any image processing environment.
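The table-driven design can be sketched as follows; the handler names echo the intrinsic functions quoted in the abstract, but their bodies are placeholders:

```python
# Toy table-driven interpreter: the parser dispatches through a table,
# so adding a command means adding one table entry and one handler.
def rotate(img, degrees):
    return ("rotate", img, float(degrees))

def exponent(img, power):
    return ("exponent", img, float(power))

COMMAND_TABLE = {"ROTATE": rotate, "EXPONENT": exponent}

def execute(line, img="input-image"):
    """Parse 'NAME arg...' and dispatch through the command table."""
    name, *args = line.split()
    return COMMAND_TABLE[name](img, *args)

result = execute("ROTATE 90")
```

Extending the interpreter never touches the parser itself, which is what makes a table-driven design easy to port to a new environment.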
Fu, C.Y.; Petrich, L.I.
1997-12-30
An image represented in a first image array of pixels is first decimated in two dimensions before being compressed by a predefined compression algorithm such as JPEG. Another possible predefined compression algorithm can involve a wavelet technique. The compressed, reduced image is then transmitted over the limited bandwidth transmission medium, and the transmitted image is decompressed using an algorithm which is an inverse of the predefined compression algorithm (such as reverse JPEG). The decompressed, reduced image is then interpolated back to its original array size. Edges (contours) in the image are then sharpened to enhance the perceptual quality of the reconstructed image. Specific sharpening techniques are described. 22 figs.
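The decimate / compress / decompress / interpolate / sharpen pipeline can be sketched with the compression stage elided (nearest-neighbour interpolation and a simple unsharp mask stand in for the patent's specific techniques):

```python
import numpy as np

def decimate(img, f):
    """Keep every f-th sample in both dimensions (pre-compression)."""
    return img[::f, ::f]

def interpolate(img, f):
    """Nearest-neighbour upsampling back to the original array size."""
    return np.repeat(np.repeat(img, f, axis=0), f, axis=1)

def sharpen(img, amount=0.5):
    """Simple unsharp mask to restore edge contrast after upsampling."""
    blur = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
            np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 4.0
    return img + amount * (img - blur)

img = np.zeros((8, 8))
img[:, 4:] = 10.0               # a vertical edge
small = decimate(img, 2)        # the JPEG/wavelet stage would compress this
restored = sharpen(interpolate(small, 2))
```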
Image processing technique based on image understanding architecture
NASA Astrophysics Data System (ADS)
Kuvychko, Igor
2000-12-01
The effectiveness of image applications depends directly on their ability to resolve ambiguity and uncertainty in real images. That requires tight integration of low-level image processing with high-level knowledge-based reasoning, which is the image understanding problem. This article presents a generic computational framework for its solution: the Spatial Turing Machine. Instead of a tape of symbols, it works with hierarchical networks represented dually as discrete and continuous structures. The dual representation naturally transforms continuous image information into discrete structures, making it available for analysis. Such structures are data and algorithms at the same time, able to perform the graph and diagrammatic operations that are the basis of intelligence. They can create derivative structures that play the role of context, or 'measurement device', giving the ability to analyze and to run top-down algorithms. Symbols emerge there naturally, and symbolic operations work in combination with new, simplified methods of computational intelligence. This makes images and scenes self-describing and provides flexible ways of resolving uncertainty. Classification of images that is truly invariant to any transformation can be done by matching their derivative structures. The proposed architecture does not require supercomputers, opening the way to new image technologies.
Mathematical modeling of a single stage ultrasonically assisted distillation process.
Mahdi, Taha; Ahmad, Arshad; Ripin, Adnan; Abdullah, Tuan Amran Tuan; Nasef, Mohamed M; Ali, Mohamad W
2015-05-01
The ability of sonication phenomena to facilitate the separation of azeotropic mixtures presents a promising approach to developing distillation systems more intensified and efficient than conventional ones. To expedite this much-needed development, a mathematical model of the system based on conservation principles, vapor-liquid equilibrium, and sonochemistry was developed in this study. The model, founded on a single-stage vapor-liquid equilibrium system enhanced with ultrasonic waves, was coded in MATLAB and validated against experimental data for an ethanol-ethyl acetate mixture. The effects of both ultrasonic frequency and intensity on the relative volatility and the azeotropic point were examined, and the optimal conditions were obtained using a genetic algorithm. The experimental data validated the model with reasonable accuracy. The results of this study reveal that the azeotropic point of the mixture can be eliminated entirely with the right combination of sonication parameters, which can be exploited in designing a workable ultrasonically intensified distillation system.
Fingerprint image enhancement by differential hysteresis processing.
Blotta, Eduardo; Moler, Emilce
2004-05-10
A new method to enhance defective fingerprint images with digital image processing tools is presented in this work. When fingerprints have been taken carelessly, blurred and in some cases mostly illegible, as in the case presented here, their classification and comparison become nearly impossible. A combination of spatial-domain filters, including a technique called differential hysteresis processing (DHP), is applied to improve such images. This set of filtering methods proved satisfactory in a wide range of cases, uncovering hidden details that helped to identify persons. Dactyloscopy experts from the Policia Federal Argentina and the EAAF have validated these results. PMID:15062948
Image-processing with augmented reality (AR)
NASA Astrophysics Data System (ADS)
Babaei, Hossein R.; Mohurutshe, Pagiel L.; Habibi Lashkari, Arash
2013-03-01
The aim of this project is to create an image-based Android application built on real-time image detection and processing, a convenient way for users to obtain information about imagery on the spot. Past attempts at image-based applications have gone only as far as image finders that work with images already stored in some form of database. The Android platform is spreading rapidly around the world and provides by far the most interactive and technical platform for smartphones, which is why the study and research were based on it. Augmented reality allows the user to manipulate the data and to add enhanced features (video, GPS tags) to the captured image.
Corn tassel detection based on image processing
NASA Astrophysics Data System (ADS)
Tang, Wenbing; Zhang, Yane; Zhang, Dongxing; Yang, Wei; Li, Minzan
2012-01-01
Machine vision is widely applied in facility agriculture and plays an important role in obtaining environmental information. This paper studies the application of image processing to recognize and locate corn tassels for a corn detasseling machine, providing automated guidance information for actual detasseling operations. Based on the color characteristics of corn tassels, the images were processed in HSI color space and segmented to extract the tassel regions, whose features were then analyzed and extracted. First, a series of preprocessing procedures was applied. Then, a segmentation algorithm based on HSI color space was developed to separate corn tassels from the background, and a region-growing method was proposed to recognize them. The results show that the method effectively extracts tassel regions from the collected images and can supply tassel location information, providing a theoretical basis for an intelligent corn detasseling machine.
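Hue-band segmentation of the kind described can be sketched as follows (HSV from the standard library stands in for HSI, and the hue band and test colours are illustrative assumptions, not the paper's calibrated values):

```python
import colorsys
import numpy as np

def tassel_mask(rgb, hue_band=(0.10, 0.20), min_sat=0.2):
    """Mark pixels whose hue falls in a yellowish band typical of corn
    tassels. HSV (standard library) stands in for the HSI space here."""
    h, w, _ = rgb.shape
    mask = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            hue, sat, _val = colorsys.rgb_to_hsv(*rgb[y, x])
            mask[y, x] = hue_band[0] <= hue <= hue_band[1] and sat >= min_sat
    return mask

img = np.zeros((2, 2, 3))
img[0, 0] = [1.0, 0.8, 0.1]     # tassel-like yellow
img[1, 1] = [0.1, 0.6, 0.1]     # leaf green
mask = tassel_mask(img)
```

Region growing would then merge adjacent masked pixels into connected tassel regions whose centroids give the location information.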
Huang, Jian; Du, Feng-lei; Yao, Yuan; Wan, Qun; Wang, Xiao-Song; Chen, Fei-Yan
2015-08-01
Distance effect has been regarded as the best established marker of basic numerical magnitude processes and is related to individual mathematical abilities. A larger behavioral distance effect is suggested to be concomitant with lower mathematical achievement in children. However, the relationship between distance effect and superior mathematical abilities is unclear. One could get superior mathematical abilities by acquiring the skill of abacus-based mental calculation (AMC), which can be used to solve calculation problems with exceptional speed and high accuracy. In the current study, we explore the relationship between distance effect and superior mathematical abilities by examining whether and how the AMC training modifies numerical magnitude processing. Thus, mathematical competencies were tested in 18 abacus-trained children (who accepted the AMC training) and 18 non-trained children. Electroencephalography (EEG) waveforms were recorded when these children executed numerical comparison tasks in both Arabic digit and dot array forms. We found that: (a) the abacus-trained group had superior mathematical abilities than their peers; (b) distance effects were found both in behavioral results and on EEG waveforms; (c) the distance effect size of the average amplitude on the late negative-going component was different between groups in the digit task, with a larger effect size for abacus-trained children; (d) both the behavioral and EEG distance effects were modulated by the notation. These results revealed that the neural substrates of magnitude processing were modified by AMC training, and suggested that the mechanism of the representation of numerical magnitude for children with superior mathematical abilities was different from their peers. In addition, the results provide evidence for a view of non-abstract numerical representation.
Mathematical simulation of hemodynamical processes and medical technologies
NASA Astrophysics Data System (ADS)
Tsitsyura, Nadiya; Novyc'kyy, Victor V.; Lushchyk, Ulyana B.
2001-06-01
Vascular pathologies constitute a significant part of human disease, and their rate tends to increase. Numerous investigations of brain blood flow in normal and pathological conditions have created a new branch of modern medicine, angioneurology, which combines information on brain angioarchitecture and on blood supply in normal and pathological conditions. Investigation of a disease's development constitutes an important problem of modern medicine. Cerebral blood supply is regulated by arterial inflow and venous outflow, but in the available literature the arterial and venous beds are considered separately. This causes a one-sided interpretation of atherosclerotic and discirculatory encephalopathies. As arterial inflow and venous outflow are interrelated, it seems expedient to perform a complex estimation of arteriovenous interactions, establish the correlation between the beds, and express the dependence as a mathematical function; the results can then be read clearly from graphs. A total of 139 patients aged from 2 to 70 were examined in the 'Istyna' Scientific Medical Ultrasound Center by means of a Logidop 2 apparatus (Kranzbuhler, Germany) using a technique of cerebral artery and vein ultrasound location (invented and patented by Ulyana Lushchyk, State Patent of Ukraine N10262 of 19/07/1995), and a clinical interpretation of the results was performed. With this technique and ultrasound Dopplerography, blood flow in the major head and cervical arteries was investigated. In the visual graphic analysis we paid attention to changes in the hemodynamical parameters of the carotid artery (CA), internal jugular vein (IJV) and supratrochlear artery (STA). Generally accepted blood flow parameters were measured: FS, the maximal systolic frequency, and FD, the minimal diastolic frequency. The correlation between different combinations of parameters in the vessels mentioned
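From the two measured parameters FS and FD one can derive standard Doppler indices; as an illustration (the resistance index shown is a standard derived quantity in Doppler sonography, not necessarily the functional dependence the authors fit):

```python
def resistance_index(fs, fd):
    """Pourcelot resistance index from the maximal systolic (FS) and
    minimal diastolic (FD) Doppler frequencies: RI = (FS - FD) / FS."""
    return (fs - fd) / fs

# Illustrative values in kHz, not patient data from the study.
ri = resistance_index(4.0, 1.0)
```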
Overview on METEOSAT geometrical image data processing
NASA Technical Reports Server (NTRS)
Diekmann, Frank J.
1994-01-01
Digital images acquired from the geostationary METEOSAT satellites are processed and disseminated at ESA's European Space Operations Centre (ESOC) in Darmstadt, Germany. Their scientific value depends mainly on their radiometric quality and geometric stability. This paper gives an overview of the image processing activities performed at ESOC, concentrating on geometrical restoration and quality evaluation. The performance of the rectification process for the various satellites over the past years is presented, and the impacts of external events, such as the Pinatubo eruption in 1991, are explained. Special developments in both hardware and software, necessary to cope with demanding tasks such as new image resampling or correcting for spacecraft anomalies, are presented as well. The rotating lens of MET-5, which causes severe geometrical image distortions, is an example of the latter.
Cognitive components of a mathematical processing network in 9-year-old children.
Szűcs, Dénes; Devine, Amy; Soltesz, Fruzsina; Nobes, Alison; Gabriel, Florence
2014-07-01
We determined how various cognitive abilities, including several measures of a proposed domain-specific number sense, relate to mathematical competence in nearly 100 9-year-old children with normal reading skill. Results are consistent with an extended number processing network and suggest that important processing nodes of this network are phonological processing, verbal knowledge, visuo-spatial short-term and working memory, spatial ability and general executive functioning. The model was highly specific to predicting arithmetic performance. There were no strong relations between mathematical achievement and verbal short-term and working memory, sustained attention, response inhibition, finger knowledge and symbolic number comparison performance. Non-verbal intelligence measures were also non-significant predictors when added to our model. Number sense variables were non-significant predictors in the model and they were also non-significant predictors when entered into regression analysis with only a single visuo-spatial WM measure. Number sense variables were predicted by sustained attention. Results support a network theory of mathematical competence in primary school children and falsify the importance of a proposed modular 'number sense'. We suggest an 'executive memory function centric' model of mathematical processing. Mapping a complex processing network requires that studies consider the complex predictor space of mathematics rather than just focusing on a single or a few explanatory factors. PMID:25089322
Real-time optical image processing techniques
NASA Technical Reports Server (NTRS)
Liu, Hua-Kuang
1988-01-01
Nonlinear real-time optical processing based on spatial pulse frequency modulation has been pursued through the analysis, design, and fabrication of pulse-frequency-modulated halftone screens and the modification of microchannel spatial light modulators (MSLMs). The MSLMs are modified via the Fabry-Perot method to achieve the high gamma required for nonlinear operation. Real-time nonlinear processing was performed using the halftone screen and MSLM; the experiments showed the effectiveness of thresholding and also the need for a higher space-bandwidth product (SBP) for image processing. The Hughes liquid-crystal light valve (LCLV) has been characterized and found to yield high gamma (about 1.7) when operated in low-frequency, low-bias mode. Cascading two LCLVs should also provide enough gamma for nonlinear processing; in this case the SBP of the LCLV is sufficient, but its uniformity needs improvement. Further applications include image correlation, computer generation of holograms, pseudo-color image encoding for image enhancement, and associative retrieval in neural processing. The discovery of the only known optical method for real-time dynamic range compression of an input image using GaAs photorefractive crystals is reported. Finally, a new architecture for nonlinear multiple-sensory neural processing has been suggested.
Davis, Nicole; Cannistraci, Christopher J.; Rogers, Baxter P.; Gatenby, J. Christopher; Fuchs, Lynn S.; Anderson, Adam W.; Gore, John C.
2009-01-01
We used functional magnetic resonance imaging (fMRI) to explore the patterns of brain activation associated with different levels of performance in exact and approximate calculation tasks in well-defined cohorts of children with mathematical calculation difficulties (MD) and typically developing (TD) controls. Both groups of children activated the same network of brain regions; however, children in the MD group had significantly increased activation in parietal, frontal, and cingulate cortices during both calculation tasks. A majority of the differences occurred in anatomical brain regions associated with cognitive resources such as executive functioning and working memory that are known to support higher level arithmetic skill but are not specific to mathematical processing. We propose that these findings are evidence that children with MD use the same types of problem solving strategies as TD children, but their weak mathematical processing system causes them to employ a more developmentally immature and less efficient form of the strategies. PMID:19410589
NASA Astrophysics Data System (ADS)
Putri, Arrival Rince; Nova, Tertia Delia; Watanabe, M.
2016-02-01
Bird flu infection processes within a poultry farm are formulated mathematically. A spatial effect is taken into account for the virus concentration through a diffusive term. The infection process is represented in terms of traveling wave solutions. For a small removal rate, a singular perturbation analysis leads to the existence of traveling wave solutions that correspond to progressive infection in one direction.
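A minimal reaction-diffusion system of the kind described above (an illustrative sketch, not the authors' exact model; all symbols and terms here are assumptions) might read:

```latex
\begin{aligned}
\frac{\partial S}{\partial t} &= -\beta S V, \\
\frac{\partial I}{\partial t} &= \beta S V - \mu I, \\
\frac{\partial V}{\partial t} &= D\,\frac{\partial^2 V}{\partial x^2} + \alpha I - \gamma V,
\end{aligned}
```

where S and I are the densities of susceptible and infected birds, V is the virus concentration, and D is the diffusion coefficient. A traveling wave solution takes the form V(x,t) = v(x - ct) for a wave speed c, and a small removal rate corresponds to the singular limit in which the perturbation analysis applies.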
Mathematical Thinking Process of Autistic Students in Terms of Representational Gesture
ERIC Educational Resources Information Center
Mustafa, Sriyanti; Nusantara, Toto; Subanji; Irawati, Santi
2016-01-01
The aim of this study is to describe the mathematical thinking process of autistic students in terms of gesture, using a qualitative approach. Data collecting is conducted by using 3 (three) audio-visual cameras. During the learning process, both teacher and students' activity are recorded using handy cam and digital camera (full HD capacity).…
PASS Processes and Early Mathematics Skills in Dutch and Italian Kindergarteners
ERIC Educational Resources Information Center
Kroesbergen, Evelyn H.; Van Luit, Johannes E. H.; Naglieri, Jack A.; Taddei, Stefano; Franchi, Elena
2010-01-01
The purpose of this study was to investigate the relation between early mathematical skills and cognitive processing abilities for two samples of children in Italy (N = 40) and the Netherlands (N = 59) who completed both a cognitive test that measures Planning, Attention, Simultaneous, and Successive (PASS) processing and an early mathematical…
Cognitive Components of a Mathematical Processing Network in 9-Year-Old Children
ERIC Educational Resources Information Center
Szucs, Dénes; Devine, Amy; Soltesz, Fruzsina; Nobes, Alison; Gabriel, Florence
2014-01-01
We determined how various cognitive abilities, including several measures of a proposed domain-specific number sense, relate to mathematical competence in nearly 100 9-year-old children with normal reading skill. Results are consistent with an extended number processing network and suggest that important processing nodes of this network are…
ERIC Educational Resources Information Center
Hunsader, Patricia D.; Thompson, Denisse R.; Zorin, Barbara
2013-01-01
In this paper, we present a framework used to analyze the extent to which assessments (i.e., chapter tests) accompanying three published elementary grades 3-5 curricula in the United States provide students with opportunities to engage with key mathematical processes. The framework uses indicators for five criteria to assess the processes of…
Bistatic SAR: Signal Processing and Image Formation.
Wahl, Daniel E.; Yocky, David A.
2014-10-01
This report describes the significant processing steps that were used to take the raw recorded digitized signals from the bistatic synthetic aperture RADAR (SAR) hardware built for the NCNS Bistatic SAR project to a final bistatic SAR image. In general, the processing steps herein are applicable to bistatic SAR signals that include the direct-path signal and the reflected signal. The steps include preprocessing, data extraction to form a phase history, and, finally, image formation. Various plots and values are shown at most steps to illustrate the processing for a bistatic COSMO-SkyMed collection gathered on June 10, 2013 on Kirtland Air Force Base, New Mexico.
Palm print image processing with PCNN
NASA Astrophysics Data System (ADS)
Yang, Jun; Zhao, Xianhong
2010-08-01
Pulse coupled neural networks (PCNNs) are based on Eckhorn's model of the cat visual cortex and imitate mammalian visual processing; palm prints, in turn, have long been used as a personal biometric feature. This inspired us to combine the two: a novel method for palm print processing is proposed that performs pre-processing and feature extraction of the palm print image using a PCNN; the extracted feature is then used for identification. Our experiments show that a verification rate of 87.5% can be achieved under ideal conditions. We also find that the verification rate decreases due to rotation or shift of the palm.
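As a sketch of the idea, a minimal per-pixel PCNN iteration can be written in a few lines (pure Python; the parameter names and values below are illustrative assumptions, not those of the paper, and the feature actually used for identification is not reproduced):

```python
import math

def neighbors_sum(Y, i, j):
    # linking input: sum of last-step pulses in the 8-neighbourhood
    h, w = len(Y), len(Y[0])
    s = 0.0
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == dj == 0:
                continue
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w:
                s += Y[ni][nj]
    return s

def pcnn(S, steps=10, aF=0.1, aL=0.3, aT=0.2, beta=0.2, VF=0.5, VL=0.2, VT=20.0):
    # S: 2-D list of pixel intensities in [0, 1]; returns first-firing times
    h, w = len(S), len(S[0])
    F = [[0.0] * w for _ in range(h)]      # feeding field
    L = [[0.0] * w for _ in range(h)]      # linking field
    Y = [[0.0] * w for _ in range(h)]      # pulse output
    theta = [[1.0] * w for _ in range(h)]  # dynamic threshold
    fire_time = [[None] * w for _ in range(h)]
    for t in range(steps):
        newY = [[0.0] * w for _ in range(h)]
        for i in range(h):
            for j in range(w):
                link = neighbors_sum(Y, i, j)
                F[i][j] = math.exp(-aF) * F[i][j] + VF * link + S[i][j]
                L[i][j] = math.exp(-aL) * L[i][j] + VL * link
                U = F[i][j] * (1.0 + beta * L[i][j])   # internal activity
                if U > theta[i][j]:
                    newY[i][j] = 1.0
                    if fire_time[i][j] is None:
                        fire_time[i][j] = t
                # threshold decays, then jumps after a pulse
                theta[i][j] = math.exp(-aT) * theta[i][j] + VT * newY[i][j]
        Y = newY
    return fire_time
```

Brighter pixels fire earlier and pulses propagate to linked neighbours, so the map of first-firing times acts as a simple texture/shape signature of the input.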
Twofold processing for denoising ultrasound medical images.
Kishore, P V V; Kumar, K V V; Kumar, D Anil; Prasad, M V D; Goutham, E N D; Rahul, R; Krishna, C B S Vamsi; Sandeep, Y
2015-01-01
Ultrasound (US) medical imaging non-invasively pictures the inside of the human body for disease diagnostics. Speckle noise attacks ultrasound images, degrading their visual quality. A twofold processing algorithm is proposed in this work to reduce this multiplicative speckle noise. The first fold applies block-based thresholding, both hard (BHT) and soft (BST), to pixels in the wavelet domain with 8, 16, 32 and 64 non-overlapping block sizes. This first fold is an effective denoising step for reducing speckle, but it also blurs the object of interest. The second fold therefore restores object boundaries and texture with adaptive wavelet fusion: the degraded object in the block-thresholded US image is restored through wavelet coefficient fusion of the object in the original US image and the block-thresholded US image. Fusion rules and wavelet decomposition levels are made adaptive for each block using gradient histograms with normalized differential mean (NDF) to introduce the highest level of contrast between the denoised pixels and the object pixels in the resultant image. The proposed twofold methods are thus named adaptive NDF block fusion with hard and soft thresholding (ANBF-HT and ANBF-ST). The results indicate visual quality improvement to an interesting level with the proposed twofold processing, where the first fold removes noise and the second restores object properties. Peak signal-to-noise ratio (PSNR), normalized cross-correlation coefficient (NCC), edge strength (ES), image quality index (IQI) and structural similarity index (SSIM) measure the quantitative quality of the twofold processing technique. The proposed method is validated by comparison with anisotropic diffusion (AD), total variational filtering (TVF) and empirical mode decomposition (EMD) for enhancement of US images. The US images were provided by the AMMA hospital radiology labs at Vijayawada, India. PMID:26697285
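The hard/soft thresholding at the core of the first fold can be sketched as follows (operating on a flat list of wavelet coefficients; the block partitioning, wavelet transform, and adaptive fusion of the paper are omitted):

```python
def hard_threshold(coeffs, t):
    # keep coefficients whose magnitude exceeds t, zero the rest
    return [c if abs(c) > t else 0.0 for c in coeffs]

def soft_threshold(coeffs, t):
    # shrink surviving magnitudes by t; zero anything below the threshold
    out = []
    for c in coeffs:
        if abs(c) > t:
            out.append((abs(c) - t) * (1.0 if c > 0 else -1.0))
        else:
            out.append(0.0)
    return out
```

Hard thresholding preserves the amplitude of strong (edge-like) coefficients, while soft thresholding also shrinks them, trading some edge strength for fewer artifacts; this is the BHT/BST distinction.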
A low-cost vector processor boosting compute-intensive image processing operations
NASA Technical Reports Server (NTRS)
Adorf, Hans-Martin
1992-01-01
Low-cost vector processing (VP) is within reach of everyone seriously engaged in scientific computing. The advent of affordable add-on VP boards for standard workstations, complemented by mathematical/statistical libraries, is beginning to impact compute-intensive tasks such as image processing. A case in point is the restoration of distorted images from the Hubble Space Telescope. A low-cost implementation of the standard Tarasko-Richardson-Lucy restoration algorithm is presented on an Intel i860-based VP board which is seamlessly interfaced to a commercial, interactive image processing system. First experience is reported (including some benchmarks for standalone FFTs) and some conclusions are drawn.
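For reference, the multiplicative Richardson-Lucy update that such a board accelerates can be sketched in scalar code (1-D, pure Python; the Tarasko variant and the i860 vectorization are not reproduced here):

```python
def convolve(signal, kernel):
    # same-size convolution with zero padding; kernel assumed normalized
    n, k = len(signal), len(kernel)
    half = k // 2
    out = []
    for i in range(n):
        s = 0.0
        for j in range(k):
            idx = i + j - half
            if 0 <= idx < n:
                s += signal[idx] * kernel[j]
        out.append(s)
    return out

def richardson_lucy(observed, psf, iterations=50):
    # iteratively multiply the estimate by the back-projected ratio of
    # the observed data to the re-blurred current estimate
    psf_mirror = psf[::-1]
    estimate = [1.0] * len(observed)   # flat initial guess
    eps = 1e-12                        # guard against division by zero
    for _ in range(iterations):
        blurred = convolve(estimate, psf)
        ratio = [o / (b + eps) for o, b in zip(observed, blurred)]
        correction = convolve(ratio, psf_mirror)
        estimate = [e * c for e, c in zip(estimate, correction)]
    return estimate
```

Blurring a point source with a known point-spread function and running a few hundred iterations recovers a sharply peaked estimate; the per-pixel multiplies and convolutions are exactly the operations a vector board pipelines well.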
Transaction recording in medical image processing
NASA Astrophysics Data System (ADS)
Riedel, Christian H.; Ploeger, Andreas; Onnasch, Dietrich G. W.; Mehdorn, Hubertus M.
1999-07-01
In medical image processing, original image data on archive servers must never be modified directly. On the other hand, images from read-only devices like CD-ROM cannot be changed and saved on the same storage medium. In both cases the modified data have to be stored as a second version, and large amounts of storage volume are needed. We avoid these problems by using a program which records only the transactions applied to images. Each transaction is stored and used for further utilization and for renewed submission of the modified data. Conventionally, every time an image is viewed or printed, the modified version has to be saved in addition to the recorded data, either automatically or by the user. Compared to these approaches, which not only squander storage space but are also time-consuming, our program has the following advantages. First, the original image data, which may not be modified, are protected against manipulation. Second, only small amounts of storage volume and network bandwidth are needed. Third, approved image operations can be automated by macros derived from transaction recordings. Finally, operations on the original data can always be controlled and traced back. As the handling of images becomes easier with this concept, the security of the original image data is guaranteed.
NASA Astrophysics Data System (ADS)
Pesenson, M.; Roby, W.; Helou, G.; McCollum, B.; Ly, L.; Wu, X.; Laine, S.; Hartley, B.
2008-08-01
A new application framework for advanced image processing for astronomy is presented. It implements standard two-dimensional operators, and recent developments in the field of non-astronomical image processing (IP), as well as original algorithms based on nonlinear partial differential equations (PDE). These algorithms are especially well suited for multi-scale astronomical images since they increase signal to noise ratio without smearing localized and diffuse objects. The visualization component is based on the extensive tools that we developed for Spitzer Space Telescope's observation planning tool Spot and archive retrieval tool Leopard. It contains many common features, combines images in new and unique ways and interfaces with many astronomy data archives. Both interactive and batch mode processing are incorporated. In the interactive mode, the user can set up simple processing pipelines, and monitor and visualize the resulting images from each step of the processing stream. The system is platform-independent and has an open architecture that allows extensibility by addition of plug-ins. This presentation addresses astronomical applications of traditional topics of IP (image enhancement, image segmentation) as well as emerging new topics like automated image quality assessment (QA) and feature extraction, which have potential for shaping future developments in the field. Our application framework embodies a novel synergistic approach based on integration of image processing, image visualization and image QA (iQA).
Digital-image processing and image analysis of glacier ice
Fitzpatrick, Joan J.
2013-01-01
This document provides a methodology for extracting grain statistics from 8-bit color and grayscale images of thin sections of glacier ice—a subset of the physical properties measurements typically performed on ice cores. This type of analysis is most commonly used to characterize the evolution of ice-crystal size, shape, and intercrystalline spatial relations within a large body of ice sampled by deep ice-coring projects from which paleoclimate records will be developed. However, such information is equally useful for investigating the stress state and physical responses of ice to stresses within a glacier. The methods of analysis presented here go hand-in-hand with the analysis of ice fabrics (aggregate crystal orientations) and, when combined with fabric analysis, provide a powerful method for investigating the dynamic recrystallization and deformation behaviors of bodies of ice in motion. The procedures described in this document constitute a step-by-step handbook for a specific image acquisition and data reduction system built in support of U.S. Geological Survey ice analysis projects, but the general methodology can be used with any combination of image processing and analysis software. The specific approaches in this document use the FoveaPro 4 plug-in toolset to Adobe Photoshop CS5 Extended, but the work can be carried out equally well, though somewhat less conveniently, with software such as the image processing toolbox in MATLAB, Image-Pro Plus, or ImageJ.
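As an illustration of the kind of grain statistics involved, a minimal connected-component labeling pass over a binarized thin-section image (a generic sketch, not the FoveaPro/Photoshop workflow the handbook describes) might look like:

```python
def label_grains(binary):
    # binary: 2-D list of 0/1 pixels; returns a label image plus a
    # {label: pixel_count} map, a simple grain-size statistic
    h, w = len(binary), len(binary[0])
    labels = [[0] * w for _ in range(h)]
    sizes = {}
    next_label = 0
    for i in range(h):
        for j in range(w):
            if binary[i][j] and not labels[i][j]:
                next_label += 1
                stack = [(i, j)]
                labels[i][j] = next_label
                count = 0
                while stack:                      # flood fill one grain
                    y, x = stack.pop()
                    count += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny][nx] and not labels[ny][nx]):
                            labels[ny][nx] = next_label
                            stack.append((ny, nx))
                sizes[next_label] = count
    return labels, sizes
```

From the per-grain pixel counts one can derive size distributions and, with a little more bookkeeping, shape and spacing measures of the kind the handbook extracts.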
Park, Joonkoo; Li, Rosa; Brannon, Elizabeth M
2014-03-01
In early childhood, humans learn culturally specific symbols for number that allow them entry into the world of complex numerical thinking. Yet little is known about how the brain supports the development of the uniquely human symbolic number system. Here, we use functional magnetic resonance imaging along with an effective connectivity analysis to investigate the neural substrates for symbolic number processing in young children. We hypothesized that, as children solidify the mapping between symbols and underlying magnitudes, important developmental changes occur in the neural communication between the right parietal region, important for the representation of non-symbolic numerical magnitudes, and other brain regions known to be critical for processing numerical symbols. To test this hypothesis, we scanned children between 4 and 6 years of age while they performed a magnitude comparison task with Arabic numerals (numerical, symbolic), dot arrays (numerical, non-symbolic), and lines (non-numerical). We then identified the right parietal seed region that showed greater blood-oxygen-level-dependent signal in the numerical versus the non-numerical conditions. A psychophysiological interaction method was used to find patterns of effective connectivity arising from this parietal seed region specific to symbolic compared to non-symbolic number processing. Two brain regions, the left supramarginal gyrus and the right precentral gyrus, showed significant effective connectivity from the right parietal cortex. Moreover, the degree of this effective connectivity to the left supramarginal gyrus was correlated with age, and the degree of the connectivity to the right precentral gyrus predicted performance on a standardized symbolic math test. These findings suggest that effective connectivity underlying symbolic number processing may be critical as children master the associations between numerical symbols and magnitudes, and that these connectivity patterns may serve as an…
A novel mathematical setup for fault tolerant control systems with state-dependent failure process
NASA Astrophysics Data System (ADS)
Chitraganti, S.; Aberkane, S.; Aubrun, C.
2014-12-01
In this paper, we consider a fault tolerant control system (FTCS) with state-dependent failures and provide a tractable mathematical model to handle them. By assuming abrupt changes in system parameters, we use a jump process to model the failure process and the fault detection and isolation (FDI) process. In particular, we assume that the failure rates of the failure process vary according to which set the state of the system belongs to.
Mathematical simulation of the process of condensing natural gas
NASA Astrophysics Data System (ADS)
Tastandieva, G. M.
2015-01-01
A two-dimensional unsteady model of heat transfer during condensation of natural gas at low temperatures is presented. Calculations of the heat and mass transfer of liquefied natural gas (LNG) in storage tanks of cylindrical shape were performed, and the influence of model parameters on the nature of the heat transfer was examined. Temperature regimes that eliminate evaporation by cooling the liquefied natural gas are defined, and the dependence of the mass flow rate of vapor condensation on gas temperature is obtained. The possibility of regulating the "cooling down" of liquefied natural gas under partial evaporation at low energy cost is identified.
Fundamental Concepts of Digital Image Processing
DOE R&D Accomplishments Database
Twogood, R. E.
1983-03-01
The field of digital image processing has experienced dramatic growth and increasingly widespread applicability in recent years. Fortunately, advances in computer technology have kept pace with the rapid growth in volume of image data in these and other applications. Digital image processing has become economical in many fields of research and in industrial and military applications. While each application has requirements distinct from the others, all are concerned with faster, cheaper, more accurate, and more extensive computation. The trend is toward real-time and interactive operations, where the user of the system obtains preliminary results within a short enough time that the next decision can be made by the human processor without loss of concentration on the task at hand. An example of this is the obtaining of two-dimensional (2-D) computer-aided tomography (CAT) images. A medical decision might be made while the patient is still under observation rather than days later.
A Pipeline Tool for CCD Image Processing
NASA Astrophysics Data System (ADS)
Bell, Jon F.; Young, Peter J.; Roberts, William H.; Sebo, Kim M.
MSSSO is part of a collaboration developing a wide field imaging CCD mosaic (WFI). As part of this project, we have developed a GUI-based pipeline tool that is an integrated part of MSSSO's CICADA data acquisition environment and processes CCD FITS images as they are acquired. The tool is also designed to run as a stand-alone program to process previously acquired data. IRAF tasks are used as the central engine, including the new NOAO mscred package for processing multi-extension FITS files. The STScI OPUS pipeline environment may be used to manage data and process scheduling. The Motif GUI was developed using SUN Visual Workshop. C++ classes were written to facilitate launching of IRAF and OPUS tasks. While this first version implements calibration processing up to and including flat field corrections, there is scope to extend it to other processing.
Image processing of angiograms: A pilot study
NASA Technical Reports Server (NTRS)
Larsen, L. E.; Evans, R. A.; Roehm, J. O., Jr.
1974-01-01
The technology transfer application this report describes is the result of a pilot study of image-processing methods applied to the image enhancement, coding, and analysis of arteriograms. Angiography is a subspecialty of radiology that employs the introduction of media with high X-ray absorption into arteries in order to study vessel pathology as well as to infer disease of the organs supplied by the vessel in question.
Image gathering and processing - Information and fidelity
NASA Technical Reports Server (NTRS)
Huck, F. O.; Fales, C. L.; Halyo, N.; Samms, R. W.; Stacy, K.
1985-01-01
In this paper we formulate and use information and fidelity criteria to assess image gathering and processing, combining optical design with image-forming and edge-detection algorithms. The optical design of the image-gathering system revolves around the relationship among sampling passband, spatial response, and signal-to-noise ratio (SNR). Our formulations of information, fidelity, and optimal (Wiener) restoration account for the insufficient sampling (i.e., aliasing) common in image gathering as well as for the blurring and noise that conventional formulations account for. Performance analyses and simulations for ordinary optical-design constraints and random scenes indicate that (1) different image-forming algorithms prefer different optical designs; (2) informationally optimized designs maximize the robustness of optimal image restorations and lead to the highest-spatial-frequency channel (relative to the sampling passband) for which edge detection is reliable (if the SNR is sufficiently high); and (3) combining the informationally optimized design with a 3 by 3 lateral-inhibitory image-plane-processing algorithm leads to a spatial-response shape that approximates the optimal edge-detection response of (Marr's model of) human vision and thus reduces the data preprocessing and transmission required for machine vision.
Image processing for the Arcetri Solar Archive
NASA Astrophysics Data System (ADS)
Centrone, M.; Ermolli, I.; Giorgi, F.
The modelling recently developed to "reconstruct" with high accuracy the measured Total Solar Irradiance (TSI) variations, based on semi-empirical atmosphere models and the observed distribution of solar magnetic regions, can be applied to "construct" the TSI variations back in time, making use of observations stored in several historic photographic archives. However, the analysis of images from these archives is not a straightforward task, because the images suffer from several defects originating from the acquisition techniques and the data storage. In this paper we summarize the processing applied to identify solar features in the images obtained by digitization of the Arcetri solar archive.
CCD architecture for spacecraft SAR image processing
NASA Technical Reports Server (NTRS)
Arens, W. E.
1977-01-01
A real-time synthetic aperture radar (SAR) image processing architecture amenable to future on-board spacecraft applications is currently under development. Using state-of-the-art charge-coupled device (CCD) technology, low cost and power are inherent features. Other characteristics include the ability to reprogram correlation reference functions, correct for range migration, and compensate for antenna beam pointing errors on the spacecraft in real time. The first spaceborne demonstration is scheduled to be flown as an experiment on a 1982 Shuttle imaging radar mission (SIR-B). This paper describes the architecture and implementation characteristics of this initial spaceborne CCD SAR image processor.
Mathematical analysis of the navigational process in homing pigeons.
Schiffner, Ingo; Baumeister, Johann; Wiltschko, Roswitha
2011-12-21
In a novel approach based on the principles of dynamic systems theory, we analyzed the tracks of pigeons recorded with the help of miniaturized GPS recorders. Using the method of time lag embedding, we calculated the largest Lyapunov exponent to determine the system's predictability and the correlation dimension to estimate the number of factors involved. A low Lyapunov exponent around 0.02, which proved to be rather constant over all calculations, indicates that the navigational process is almost deterministic. In the distribution of the correlation dimension estimates we found three distinctive peaks, at 3.3, 3.7 and 4.2, indicating that avian navigation is a complex multi-dimensional process, involving at least four or five independent factors. Additional factors, as indicated by an increase in the correlation dimension, seem to be included as the pigeons approach their home loft. This increase in correlation dimension and its fractal nature suggest that the various navigational factors can be included as required and weighted independently. Neither the correlation dimension nor the Lyapunov exponent is affected by increasing familiarity of the pigeons with the terrain. This suggests that the navigational strategy is stable with the same process controlling the flight across familiar as well as unfamiliar terrain.
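The first two steps of such an analysis, time-lag embedding of a scalar track coordinate and the correlation sum whose log-log slope estimates the correlation dimension, can be sketched as follows (illustrative only; in practice the lag and embedding dimension are chosen by standard criteria, and the Lyapunov exponent is computed from the divergence of nearby embedded trajectories):

```python
import math

def delay_embed(series, dim, lag):
    # each row is the state vector (x[t], x[t+lag], ..., x[t+(dim-1)*lag])
    n = len(series) - (dim - 1) * lag
    return [[series[t + k * lag] for k in range(dim)] for t in range(n)]

def correlation_sum(points, r):
    # fraction of point pairs closer than r (Euclidean); the slope of
    # log C(r) versus log r over small r estimates the correlation dimension
    n = len(points)
    close = 0
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(points[i], points[j]) < r:
                close += 1
    return close / (n * (n - 1) / 2)
```

Applied to a GPS track, one embeds each coordinate, evaluates C(r) over a range of radii, and reads the dimension off the scaling region, which is how estimates such as the 3.3-4.2 peaks above are obtained.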
Industrial Holography Combined With Image Processing
NASA Astrophysics Data System (ADS)
Schorner, J.; Rottenkolber, H.; Roid, W.; Hinsch, K.
1988-01-01
Holographic test methods have become a valuable tool for the engineer in research and development, and in the field of non-destructive quality control holographic test equipment is now accepted for tests within the production line. Producers of aircraft tyres, for example, use holographic tests to back the guarantee of their tyres; together with image processing, the whole test cycle is automated, and defects within the tyre are found automatically and listed in a printout. The power engine industry uses holographic vibration tests to optimize its designs. In the plastics industry, tanks, wheels, seats and fans are tested holographically to find the optimum shape. The automotive industry makes holography a tool for noise reduction. Instant holography and image processing techniques for quantitative analysis have led to economic application of holographic test methods. New developments of holographic units in combination with image processing are presented.
Support Routines for In Situ Image Processing
NASA Technical Reports Server (NTRS)
Deen, Robert G.; Pariser, Oleg; Yeates, Matthew C.; Lee, Hyun H.; Lorre, Jean
2013-01-01
This software consists of a set of application programs that support ground-based image processing for in situ missions. These programs represent a collection of utility routines that perform miscellaneous functions in the context of the ground data system. Each one fulfills some specific need as determined via operational experience. The most unique aspect of these programs is that they are integrated into the large in situ image processing system via the PIG (Planetary Image Geometry) library. They work directly with in situ data, understanding the appropriate image meta-data fields and updating them properly. The programs themselves are completely multimission; all mission dependencies are handled by PIG. This suite of programs consists of: (1) marscahv: Generates a linearized, epipolar-aligned image given a stereo pair of images. These images are optimized for 1-D stereo correlations. (2) marscheckcm: Compares the camera model in an image label with one derived via kinematics modeling on the ground. (3) marschkovl: Checks the overlaps between a list of images in order to determine which might be stereo pairs. This is useful for non-traditional stereo images like long-baseline or those from an articulating arm camera. (4) marscoordtrans: Translates mosaic coordinates from one form into another. (5) marsdispcompare: Checks a left-to-right stereo disparity image against a right-to-left disparity image to ensure they are consistent with each other. (6) marsdispwarp: Takes one image of a stereo pair and warps it through a disparity map to create a synthetic opposite-eye image. For example, a right eye image could be transformed to look like it was taken from the left eye via this program. (7) marsfidfinder: Finds fiducial markers in an image by projecting their approximate location and then using correlation to locate the markers to subpixel accuracy. These fiducial markers are small targets attached to the spacecraft surface. This helps verify, or improve, the…
Processing infrared images of aircraft lapjoints
NASA Technical Reports Server (NTRS)
Syed, Hazari; Winfree, William P.; Cramer, K. E.
1992-01-01
Techniques for processing IR images of aging-aircraft lapjoint data are discussed. Attention is given to a technique for detecting disbonds in aircraft lapjoints which clearly delineates the disbonded region from the bonded regions. The technique is weak on unpainted aircraft skin surfaces, but this can be overcome by using a self-adhering contact sheet. Neural network analysis of raw temperature data has been shown to be an effective tool for visualization of images. Numerical simulation results show the above processing technique to be effective in delineating the disbonds.
ERIC Educational Resources Information Center
Illinois State Board of Education, 2004
2004-01-01
The Illinois State Board of Education (ISBE) administered Illinois Measure of Annual Growth in English (IMAGE) tests in Spring 2004. IMAGE tests are administered to Limited English Proficient (LEP) students who have been in either a Transitional Bilingual Education (TBE) or Transitional Program of Instruction (TPI) program since September 30 of…
FLIPS: Friendly Lisp Image Processing System
NASA Astrophysics Data System (ADS)
Gee, Shirley J.
1991-08-01
The Friendly Lisp Image Processing System (FLIPS) is the interface to Advanced Target Detection (ATD), a multi-resolutional image analysis system developed by Hughes in conjunction with the Hughes Research Laboratories. Both menu- and graphics-driven, FLIPS enhances system usability by supporting the interactive nature of research and development. Although much progress has been made, fully automated image understanding technology that is both robust and reliable is not a reality. In situations where highly accurate results are required, skilled human analysts must still verify the findings of these systems. Furthermore, the systems often require processing times several orders of magnitude greater than that needed by veteran personnel to analyze the same image. The purpose of FLIPS is to facilitate the ability of an image analyst to take statistical measurements on digital imagery in a timely fashion, a capability critical in research environments where a large percentage of time is expended in algorithm development. In many cases, this entails minor modifications or code tinkering. Without a well-developed man-machine interface, throughput is unduly constricted. FLIPS provides mechanisms which support rapid prototyping for ATD. This paper examines the ATD/FLIPS system. The philosophy of ATD in addressing image understanding problems is described, and the capabilities of FLIPS are discussed, along with a description of the interaction between ATD and FLIPS. Finally, an overview of current plans for the system is outlined.
Onboard Image Processing System for Hyperspectral Sensor.
Hihara, Hiroki; Moritani, Kotaro; Inoue, Masao; Hoshi, Yoshihiro; Iwasaki, Akira; Takada, Jun; Inada, Hitomi; Suzuki, Makoto; Seki, Taeko; Ichikawa, Satoshi; Tanii, Jun
2015-09-25
Onboard image processing systems for a hyperspectral sensor have been developed in order to maximize image data transmission efficiency for large-volume and high-speed data downlink capacity. Since more than 100 channels are required for hyperspectral sensors on Earth observation satellites, fast and small-footprint lossless image compression capability is essential for reducing the size and weight of a sensor system. A fast lossless image compression algorithm has been developed and implemented in the onboard circuitry that corrects the sensitivity and linearity of Complementary Metal Oxide Semiconductor (CMOS) sensors, in order to maximize the compression ratio. The employed image compression method is based on the Fast, Efficient, Lossless Image compression System (FELICS), a hierarchical predictive coding method with resolution scaling. To improve FELICS's image decorrelation and entropy coding performance, we apply two-dimensional interpolation prediction and adaptive Golomb-Rice coding. The method supports progressive decompression using resolution scaling while maintaining superior performance in speed and complexity. Coding efficiency and compression speed enlarge the effective capacity of signal transmission channels, which leads to reduced onboard hardware by multiplexing sensor signals into a smaller number of compression circuits. The circuitry is embedded into the data formatter of the sensor system without adding size, weight, power consumption, or fabrication cost. PMID:26404281
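The Golomb-Rice stage described in this abstract can be sketched as follows. This is a minimal illustration of Rice coding of mapped prediction residuals, not the flight implementation; function names and the adaptive choice of the parameter k are assumptions.

```python
def zigzag(e):
    """Map a signed prediction residual to a non-negative integer."""
    return (e << 1) if e >= 0 else -(e << 1) - 1

def rice_encode(n, k):
    """Rice code (k >= 1): unary quotient, '0' terminator, then k remainder bits."""
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + format(r, "0{}b".format(k))

def rice_decode(bits, k):
    """Decode one Rice-coded value; return (value, remaining bits)."""
    q = 0
    while bits[q] == "1":
        q += 1
    r = int(bits[q + 1: q + 1 + k], 2)
    return (q << k) | r, bits[q + 1 + k:]
```

In an adaptive coder, k would be updated from a running mean of recent residual magnitudes so that short codes track the local statistics.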
Processing Images of Craters for Spacecraft Navigation
NASA Technical Reports Server (NTRS)
Cheng, Yang; Johnson, Andrew E.; Matthies, Larry H.
2009-01-01
A crater-detection algorithm has been conceived to enable automation of what, heretofore, have been manual processes for utilizing images of craters on a celestial body as landmarks for navigating a spacecraft flying near or landing on that body. The images are acquired by an electronic camera aboard the spacecraft, then digitized, then processed by the algorithm, which consists mainly of the following steps: 1. Edges in an image are detected and placed in a database. 2. Crater rim edges are selected from the edge database. 3. Edges that belong to the same crater are grouped together. 4. An ellipse is fitted to each group of crater edges. 5. Ellipses are refined directly in the image domain to reduce errors introduced in the detection of edges and fitting of ellipses. 6. The quality of each detected crater is evaluated. It is planned to utilize this algorithm as the basis of a computer program for automated, real-time, onboard processing of crater-image data. Experimental studies have led to the conclusion that this algorithm is capable of a detection rate >93 percent, a false-alarm rate <5 percent, a geometric error <0.5 pixel, and a position error <0.3 pixel.
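Step 4 above can be illustrated with a least-squares fit to grouped rim points. As a simplified, hypothetical stand-in for the full ellipse case, the sketch below fits a circle by solving a linear system; the paper's actual fitting method is not shown.

```python
import numpy as np

def fit_circle(pts):
    """Least-squares circle fit to rim points (N x 2 array).
    Solves 2*a*x + 2*b*y + c = x^2 + y^2 for the centre (a, b),
    where c = r^2 - a^2 - b^2 recovers the radius."""
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx ** 2 + cy ** 2)
    return cx, cy, r
```

A true ellipse fit (e.g. direct algebraic fitting) follows the same pattern with a richer design matrix; the circle version keeps the linear-algebra idea visible.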
Enhanced neutron imaging detector using optical processing
Hutchinson, D.P.; McElhaney, S.A.
1992-08-01
Existing neutron imaging detectors have limited count rates due to inherent property and electronic limitations. The popular multiwire proportional counter is limited by gas recombination to a count rate of less than 10^5 n/s over the entire array, and the neutron Anger camera, even though improved with new fiber optic encoding methods, can only achieve 10^6 cps over a limited array. We present a preliminary design for a new type of neutron imaging detector with a resolution of 2-5 mm and a count rate capability of 10^6 cps per pixel element. We propose to combine optical and electronic processing to economically increase the throughput of advanced detector systems while simplifying computing requirements. By placing a scintillator screen ahead of an optical image processor followed by a detector array, a high throughput imaging detector may be constructed.
Simplified labeling process for medical image segmentation.
Gao, Mingchen; Huang, Junzhou; Huang, Xiaolei; Zhang, Shaoting; Metaxas, Dimitris N
2012-01-01
Image segmentation plays a crucial role in many medical imaging applications by automatically locating the regions of interest. Typically, supervised learning based segmentation methods require a large set of accurately labeled training data. However, the labeling process is tedious, time consuming, and sometimes not necessary. We propose a robust logistic regression algorithm that handles label outliers, so that doctors do not need to spend time precisely labeling images for the training set. To validate its effectiveness and efficiency, we conduct carefully designed experiments on cervigram image segmentation in the presence of label outliers. Experimental results show that the proposed robust logistic regression algorithms achieve superior performance compared to previous methods, which validates the benefits of the proposed algorithms. PMID:23286072
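The idea of logistic regression that tolerates label outliers can be sketched with a simple trimmed estimator: each epoch, the samples with the highest loss (the likely mislabeled ones) are dropped from the gradient step. This is a generic robust surrogate, not the authors' exact algorithm.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def trimmed_logreg(X, y, trim=0.1, lr=0.1, epochs=500):
    """Logistic regression by gradient descent that, each epoch, discards
    the `trim` fraction of samples with the largest loss -- a simple
    stand-in for label-outlier robustness (illustrative only)."""
    n, d = X.shape
    w = np.zeros(d)
    keep = int(round((1.0 - trim) * n))
    for _ in range(epochs):
        p = sigmoid(X @ w)
        loss = -(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
        idx = np.argsort(loss)[:keep]          # retain the best-fitting samples
        grad = X[idx].T @ (p[idx] - y[idx]) / keep
        w -= lr * grad
    return w
```

With a modest trim fraction, a handful of flipped labels no longer dominates the fit, which is the behaviour the abstract's robust estimator is designed to achieve.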
Feedback regulation of microscopes by image processing.
Tsukada, Yuki; Hashimoto, Koichi
2013-05-01
Computational microscope systems are becoming a major part of imaging biological phenomena, and the development of such systems requires the design of automated regulation of microscopes. An important aspect of automated regulation is feedback regulation, which is the focus of this review. As modern microscope systems become more complex, often with many independent components that must work together, computer control is inevitable, since the exact orchestration of parameters and timings for these multiple components is critical to acquiring proper images. A number of techniques have been developed for biological imaging to accomplish this. Here, we summarize the basics of computational microscopy for the purpose of building automatically regulated microscopes, focusing on feedback regulation by image processing. These techniques allow high-throughput data acquisition while monitoring both short- and long-term dynamic phenomena, which cannot be achieved without an automated system.
Web-based document image processing
NASA Astrophysics Data System (ADS)
Walker, Frank L.; Thoma, George R.
1999-12-01
Increasing numbers of research libraries are turning to the Internet for electronic interlibrary loan and for document delivery to patrons. This has been made possible through the widespread adoption of software such as Ariel and DocView. Ariel, a product of the Research Libraries Group, converts paper-based documents to monochrome bitmapped images and delivers them over the Internet. The National Library of Medicine's DocView is primarily designed for library patrons. Although libraries and their patrons are beginning to reap the benefits of this new technology, barriers exist, e.g., differences in image file format, that lead to difficulties in the use of library document information. To research how to overcome such barriers, the Communications Engineering Branch of the Lister Hill National Center for Biomedical Communications, an R and D division of NLM, has developed a web site called the DocMorph Server. This is part of an ongoing intramural R and D program in document imaging that has spanned many aspects of electronic document conversion and preservation, Internet document transmission and document usage. The DocMorph Server web site is designed to fill two roles. First, in a role that will benefit both libraries and their patrons, it allows Internet users to upload scanned image files for conversion to alternative formats, thereby enabling wider delivery and easier usage of library document information. Second, the DocMorph Server provides the design team an active test bed for evaluating the effectiveness and utility of new document image processing algorithms and functions, so that they may be evaluated for possible inclusion in other image processing software products being developed at NLM or elsewhere. This paper describes the design of the prototype DocMorph Server and the image processing functions being implemented on it.
Mariner 9-Image processing and products
Levinthal, E.C.; Green, W.B.; Cutts, J.A.; Jahelka, E.D.; Johansen, R.A.; Sander, M.J.; Seidman, J.B.; Young, A.T.; Soderblom, L.A.
1973-01-01
The purpose of this paper is to describe the system for the display, processing, and production of image-data products created to support the Mariner 9 Television Experiment. Of necessity, the system was large in order to respond to the needs of a large team of scientists with a broad scope of experimental objectives. The desire to generate processed data products as rapidly as possible, to take advantage of adaptive planning during the mission, coupled with the complexities introduced by the nature of the vidicon camera, greatly increased the scale of the ground-image processing effort. This paper describes the systems that carried out the processes and delivered the products necessary for real-time and near-real-time analyses. References are made to the computer algorithms used for the different levels of decalibration and analysis. © 1973.
Improving Synthetic Aperture Image by Image Compounding in Beamforming Process
NASA Astrophysics Data System (ADS)
Martínez-Graullera, Oscar; Higuti, Ricardo T.; Martín, Carlos J.; Ullate, Luis. G.; Romero, David; Parrilla, Montserrat
2011-06-01
In this work, signal processing techniques are used to improve the quality of images based on multi-element synthetic aperture techniques. Using several apodization functions to obtain different side-lobe distributions, a polarity function and a threshold criterion are used to develop an image compounding technique. The spatial diversity is increased using an additional array, which generates complementary information about the defects, improving the results of the proposed algorithm and producing high-resolution, high-contrast images. The inspection of isotropic plate-like structures using linear arrays and Lamb waves is presented. Experimental results are shown for a 1-mm-thick isotropic aluminum plate with artificial defects, using linear arrays formed by 30 piezoelectric elements with the low-dispersion symmetric mode S0 at a frequency of 330 kHz.
Digital image processing of vascular angiograms
NASA Technical Reports Server (NTRS)
Selzer, R. H.; Beckenbach, E. S.; Blankenhorn, D. H.; Crawford, D. W.; Brooks, S. H.
1975-01-01
The paper discusses the estimation of the degree of atherosclerosis in the human femoral artery through the use of a digital image processing system for vascular angiograms. The film digitizer uses an electronic image dissector camera to scan the angiogram and convert the recorded optical density information into a numerical format. Another processing step involves locating the vessel edges from the digital image. The computer has been programmed to estimate vessel abnormality through a series of measurements, some derived primarily from the vessel edge information and others from optical density variations within the lumen shadow. These measurements are combined into an atherosclerosis index, which is found in a post-mortem study to correlate well with both visual and chemical estimates of atherosclerotic disease.
Mathematical modeling of physical processes in inorganic chemistry
Chiu, H.L.
1988-01-01
The first part deals with the rapid calculation of steady-state concentration profiles in contactors using the Purex process. Most of the computer codes simulating the reprocessing of spent nuclear fuel generate the steady-state properties by calculating the transient behavior of the contactors. In this study, the author simulates the steady-state concentration profiles directly, without first generating the transient behavior. Two computer codes are developed, PUMA (Plutonium-Uranium-Matrix-Algorithm) and PUNE (Plutonium-Uranium-Non-Equilibrium). The first simulates the steady-state concentration profiles under conditions of equilibrium mass transfer; the second accounts for deviations from mass transfer equilibrium. The second part of this dissertation shows how to use the classical trajectory method to study the equilibrium and saddle-point geometries of MX_n (n = 2-7) molecules. Two nuclear potential functions that are invariant to the operations of the permutation group of nuclei in molecules of the general formula MX_n are described. Such potential functions allow equivalent isomers to have equal energies, so that various statistical mechanical properties can be determined simply. The first function contains two-center interactions between pairs of peripheral atoms and is defined by V(r) = (1/2) Σ_α k (Δr_{αμ})² + Σ_{α<β} Q R_{αβ}^{-n} (n = 1, 2, ...). The second function contains two- and three-center interactions and is defined by V(Θ) = (1/2) Σ_α K (Δ_{αμ})² + (1/2) Σ_{α<β} Q r₀² (Θ_{αμβ} − π)².
NASA Technical Reports Server (NTRS)
Heydorn, R. D.
1984-01-01
The Mathematical Pattern Recognition and Image Analysis (MPRIA) Project is concerned with basic research problems related to the study of the Earth from remotely sensed measurements of its surface characteristics. The program goal is to better understand how to analyze the digital image that represents the spatial, spectral, and temporal arrangement of these measurements for the purpose of making selected inferences about the Earth.
ERIC Educational Resources Information Center
Baltaci, Serdal
2016-01-01
It is a widely known fact that gifted students have different skills compared to their peers. However, to what extent gifted students use mathematical thinking skills during probability problem solving process emerges as a significant question. Thence, the main aim of the present study is to examine 8th grade gifted students' probability…
ERIC Educational Resources Information Center
Wilson, P. Holt; Lee, Hollylynne Stohl; Hollebrands, Karen F.
2011-01-01
This study investigated the processes used by prospective mathematics teachers as they examined middle-school students' work solving statistical problems using a computer software program. Ways in which the model may be used by other researchers and implications for the design of pedagogical tasks for prospective teachers are discussed. (Contains…
The Development and Validation of Scores on the Mathematics Information Processing Scale (MIPS).
ERIC Educational Resources Information Center
Bessant, Kenneth C.
1997-01-01
This study reports on the development and psychometric properties of a new 87-item Mathematics Information Processing Scale that explores learning strategies, metacognitive problem-solving skills, and attentional deployment. Results with 340 college students support the use of the instrument, for which factor analysis identified five theoretically…
ERIC Educational Resources Information Center
Iiskala, Tuike; Vauras, Marja; Lehtinen, Erno; Salonen, Pekka
2011-01-01
This study investigated how metacognition appears as a socially shared phenomenon within collaborative mathematical word-problem solving processes of dyads of high-achieving pupils. Four dyads solved problems of different difficulty levels. The pupils were 10 years old. The problem-solving activities were videotaped and transcribed in terms of…
ERIC Educational Resources Information Center
Incikabi, Lutfi; Sancar Tokmak, Hatice
2012-01-01
This case study examined the educational software evaluation processes of pre-service teachers who attended either expertise-based training (XBT) or traditional training in conjunction with a Software-Evaluation checklist. Forty-three mathematics teacher candidates and three experts participated in the study. All participants evaluated educational…
Reflective Processes in a Mathematics Classroom with a Rich Learning Environment.
ERIC Educational Resources Information Center
Hershkowitz, Rina; Schwarz, Baruch B.
1999-01-01
Examined grade nine student reflection during mathematics problem-solving situations in which students worked individually, collaborated in small groups, subsequently wrote group reports, then engaged in a teacher-led discussion in which students reported on the process. Found that reporting was a social practice through which private artifacts…
ERIC Educational Resources Information Center
Andersson, Ulf; Ostergren, Rickard
2012-01-01
The study sought to extend our knowledge regarding the origin of mathematical learning disabilities (MLD) in children by testing different hypotheses in the same samples of children. Different aspects of cognitive function and number processing were assessed in fifth- and sixth-graders (11-13 years old) with MLD and compared to controls. The…
ERIC Educational Resources Information Center
Nakamura, Yasuyuki; Nishi, Shinnosuke; Muramatsu, Yuta; Yasutake, Koichi; Yamakawa, Osamu; Tagawa, Takahiro
2014-01-01
In this paper, we introduce a mathematical model for collaborative learning and the answering process for multiple-choice questions. The collaborative learning model is inspired by the Ising spin model and the model for answering multiple-choice questions is based on their difficulty level. An intensive simulation study predicts the possibility of…
Limiting liability via high resolution image processing
Greenwade, L.E.; Overlin, T.K.
1996-12-31
The utilization of high resolution image processing allows forensic analysts and visualization scientists to assist detectives by enhancing field photographs, and by providing the tools and training to increase the quality and usability of field photos. Through the use of digitized photographs and computerized enhancement software, field evidence can be obtained and processed as 'evidence ready', even in poor lighting and shadowed conditions or darkened rooms. These images, which are most often unusable when taken with standard camera equipment, can be shot in the worst of photographic conditions and be processed as usable evidence. Visualization scientists have taken the use of digital photographic image processing and moved the processing of crime scene photos into the technology age. The use of high resolution technology will assist law enforcement in making better use of crime scene photography and positive identification of prints. Valuable courtroom and investigation time can be saved and better served by this accurate, performance-based process. Inconclusive evidence does not lead to convictions. Enhanced photographic capability helps solve a major problem with crime scene photos: images that, if taken with standard equipment and without the benefit of enhancement software, would be inconclusive, allowing guilty parties to go free for lack of evidence.
The ‘hit’ phenomenon: a mathematical model of human dynamics interactions as a stochastic process
NASA Astrophysics Data System (ADS)
Ishii, Akira; Arakaki, Hisashi; Matsuda, Naoya; Umemura, Sanae; Urushidani, Tamiko; Yamagata, Naoya; Yoshida, Narihiko
2012-06-01
A mathematical model for the ‘hit’ phenomenon in entertainment within a society is presented as a stochastic process of human dynamics interactions. The model uses only the advertisement budget time distribution as an input, and word-of-mouth (WOM), represented by posts on social network systems, is used as data to make a comparison with the calculated results. The unit of time is days. The WOM distribution in time is found to be very close to the revenue distribution in time. Calculations for the Japanese motion picture market based on the mathematical model agree well with the actual revenue distribution in time.
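The structure of a model of this kind can be sketched as a discrete-time simulation in which daily purchase intention is driven by the advertising input plus direct and indirect word-of-mouth terms. The update rule and coefficient values below are illustrative assumptions, not the paper's exact equations.

```python
def simulate_intention(adverts, d=0.01, p=1e-4):
    """Daily purchase-intention I(t) driven by an advertising budget
    series A(t), a direct-WOM term d*I and an indirect-WOM term p*I^2.
    A hypothetical discretization for illustration only."""
    I, out = 0.0, []
    for a in adverts:
        I += a + d * I + p * I * I
        out.append(I)
    return out
```

With positive WOM coefficients, intention keeps growing for a while even after the advertising input stops, which is the qualitative behaviour the model is meant to capture.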
Visual parameter optimisation for biomedical image processing
2015-01-01
Background Biomedical image processing methods require users to optimise input parameters to ensure high-quality output. This presents two challenges. First, it is difficult to optimise multiple input parameters for multiple input images. Second, it is difficult to achieve an understanding of underlying algorithms, in particular, relationships between input and output. Results We present a visualisation method that transforms users' ability to understand algorithm behaviour by integrating input and output, and by supporting exploration of their relationships. We discuss its application to a colour deconvolution technique for stained histology images and show how it enabled a domain expert to identify suitable parameter values for the deconvolution of two types of images, and metrics to quantify deconvolution performance. It also enabled a breakthrough in understanding by invalidating an underlying assumption about the algorithm. Conclusions The visualisation method presented here provides analysis capability for multiple inputs and outputs in biomedical image processing that is not supported by previous analysis software. The analysis supported by our method is not feasible with conventional trial-and-error approaches. PMID:26329538
Soleimani, Effat; Mokhtari-Dizaji, Manijhe; Saberi, Hajir; Sharif-Kashani, Shervin
2016-08-01
Clarifying the complex interaction between mechanical and biological processes in healthy and diseased conditions requires constitutive models for arterial walls. In this study, a mathematical model for the displacement of the carotid artery wall in the longitudinal direction is defined, providing a satisfactory representation of the axial stress applied to the arterial wall. The proposed model was applied to the carotid artery wall motion estimated from ultrasound image sequences of 10 healthy adults, and the axial stress waveform exerted on the artery wall was extracted. Consecutive ultrasonic images (30 frames per second) of the common carotid artery of 10 healthy subjects (age 44 ± 4 years) were recorded and transferred to a personal computer. Longitudinal displacement and acceleration were extracted from ultrasonic image processing using a block-matching algorithm. Furthermore, images were examined using a maximum gradient algorithm, and time rate changes of the internal diameter and intima-media thickness were extracted. Finally, axial stress was estimated using an appropriate constitutive equation for thin-walled tubes. Performance of the proposed model was evaluated using goodness of fit between approximated and measured longitudinal displacements. Values of the goodness-of-fit statistics indicated a high quality of fit for all investigated subjects, with a mean adjusted R-square of 0.86 ± 0.08 and a root mean squared error of 0.08 ± 0.04 mm. According to the results of the present study, the maximum and minimum axial stresses exerted on the arterial wall are 1.7 ± 0.6 and -1.5 ± 0.5 kPa, respectively. These results reveal the potential of this technique to provide a new method to assess arterial stress from ultrasound images, overcoming the limitations of the finite element and other simulation techniques.
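The two goodness-of-fit statistics quoted above (adjusted R-square and RMSE) are standard and can be computed directly; the helper below is a generic illustration, not the authors' code.

```python
import numpy as np

def fit_quality(measured, fitted, n_params):
    """Adjusted R-squared and RMSE between measured and model-fitted series.
    n_params is the number of free parameters in the fitted model."""
    measured = np.asarray(measured, dtype=float)
    fitted = np.asarray(fitted, dtype=float)
    resid = measured - fitted
    n = measured.size
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((measured - measured.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    adj_r2 = 1.0 - (1.0 - r2) * (n - 1) / (n - n_params - 1)
    rmse = np.sqrt(ss_res / n)
    return adj_r2, rmse
```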
Subband/transform functions for image processing
NASA Technical Reports Server (NTRS)
Glover, Daniel
1993-01-01
Functions for image data processing written for use with the MATLAB(TM) software package are presented. These functions provide the capability to transform image data with block transformations (such as the Walsh Hadamard) and to produce spatial frequency subbands of the transformed data. Block transforms are equivalent to simple subband systems. The transform coefficients are reordered using a simple permutation to give subbands. The low frequency subband is a low resolution version of the original image, while the higher frequency subbands contain edge information. The transform functions can be cascaded to provide further decomposition into more subbands. If the cascade is applied to all four of the first stage subbands (in the case of a four band decomposition), then a uniform structure of sixteen bands is obtained. If the cascade is applied only to the low frequency subband, an octave structure of seven bands results. Functions for the inverse transforms are also given. These functions can be used for image data compression systems. The transforms do not in themselves produce data compression, but prepare the data for quantization and compression. Sample quantization functions for subbands are also given. A typical compression approach is to subband the image data, quantize it, then use statistical coding (e.g., run-length coding followed by Huffman coding) for compression. Contour plots of image data and subbanded data are shown.
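The block-transform-to-subband equivalence described above can be sketched in a few lines. The sketch below uses a Walsh-Hadamard block transform and the coefficient permutation that gathers same-index coefficients into subbands; function names are illustrative, since the original MATLAB routines are not reproduced here.

```python
import numpy as np

def hadamard(n):
    """Sylvester-construction Hadamard matrix; n must be a power of two."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def block_transform(img, n=4):
    """Apply an n x n Walsh-Hadamard transform to each n x n block, then
    permute so that coefficient (u, v) from every block is gathered into
    spatial subband (u, v)."""
    Hn = hadamard(n) / np.sqrt(n)          # orthonormal, so Hn.T inverts it
    h, w = img.shape
    coeff = np.zeros((h, w))
    for i in range(0, h, n):
        for j in range(0, w, n):
            coeff[i:i+n, j:j+n] = Hn @ img[i:i+n, j:j+n] @ Hn.T
    bh, bw = h // n, w // n
    sub = np.zeros((h, w))
    for u in range(n):
        for v in range(n):
            sub[u*bh:(u+1)*bh, v*bw:(v+1)*bw] = coeff[u::n, v::n]
    return coeff, sub
```

The (0, 0) subband is then a low-resolution version of the image, and cascading the same routine on that subband yields the octave decomposition the abstract describes.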
Color Imaging management in film processing
NASA Astrophysics Data System (ADS)
Tremeau, Alain; Konik, Hubert; Colantoni, Philippe
2003-12-01
The latest research projects in the LIGIV laboratory concern the capture, processing, archiving, and display of color images, considering the trichromatic nature of the Human Visual System (HVS). Among these projects, one addresses digital cinematographic film sequences of high resolution and dynamic range. This project aims to optimize the use of content for post-production operators and for the end user. The studies presented in this paper address the use of metadata to optimise the consumption of video content on a device of the user's choice, independent of the nature of the equipment that captured the content. Optimising consumption includes enhancing the quality of image reconstruction on a display. Another part of this project addresses the content-based adaptation of image display. The main focus is on Regions of Interest (ROI) operations, based on the ROI concepts of MPEG-7. The aim of this second part is to characterize and ensure the conditions of display even if the display device or display media changes. This requires, firstly, the definition of a reference color space and of bi-directional color transformations for each peripheral device (camera, display, film recorder, etc.). The complicating factor is that different devices have different color gamuts, depending on the chromaticity of their primaries and the ambient illumination under which they are viewed. To match the displayed image to the intended appearance, all kinds of production metadata (camera specification, camera colour primaries, lighting conditions) should be associated with the film material. Metadata and content together build rich content. The author is assumed to specify conditions as known from the digital graphic arts. To control image pre-processing and image post-processing, these specifications should be contained in the film's metadata. The specifications are related to ICC profiles but must additionally consider mesopic viewing conditions.
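The bi-directional transform between a device color space and a reference space reduces, in the linear case, to a 3x3 matrix and its inverse. The sketch below uses the well-known linear sRGB to CIE XYZ (D65) matrix purely as an example; an actual device profile would supply its own primaries.

```python
import numpy as np

# Linear sRGB -> CIE XYZ (D65). Example reference transform only;
# a real workflow would derive this matrix from the device's profile.
RGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                       [0.2126, 0.7152, 0.0722],
                       [0.0193, 0.1192, 0.9505]])
XYZ_TO_RGB = np.linalg.inv(RGB_TO_XYZ)

def to_reference(rgb):
    """Forward transform: linear device RGB into the reference space."""
    return RGB_TO_XYZ @ np.asarray(rgb, dtype=float)

def from_reference(xyz):
    """Backward transform: reference values back to device RGB."""
    return XYZ_TO_RGB @ np.asarray(xyz, dtype=float)
```

Gamut differences between devices show up as reference-space values that the backward transform maps outside [0, 1], which is where gamut-mapping metadata becomes necessary.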
Bitplane Image Coding With Parallel Coefficient Processing.
Auli-Llinas, Francesc; Enfedaque, Pablo; Moure, Juan C; Sanchez, Victor
2016-01-01
Image coding systems have traditionally been tailored to multiple instruction, multiple data (MIMD) computing. In general, they partition the (transformed) image into codeblocks that can be coded in the cores of MIMD-based processors. Each core executes a sequential flow of instructions to process the coefficients in its codeblock, independently and asynchronously from the other cores. Bitplane coding is a common strategy to code such data; most of its mechanisms require sequential processing of the coefficients. Recent years have seen the rise of processing accelerators with enhanced computational performance and power efficiency whose architecture is based mainly on the single instruction, multiple data (SIMD) principle. SIMD computing refers to the execution of the same instruction on multiple data in a lockstep, synchronous way. Unfortunately, current bitplane coding strategies cannot fully profit from such processors due to their inherently sequential coding tasks. This paper presents bitplane image coding with parallel coefficient processing (BPC-PaCo), a coding method that can process many coefficients within a codeblock in parallel and synchronously. To this end, the scanning order, the context formation, the probability model, and the arithmetic coder of the coding engine have been reformulated. The experimental results suggest that the penalty in coding performance of BPC-PaCo with respect to traditional strategies is almost negligible.
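The lockstep-parallel idea behind BPC-PaCo can be illustrated with a toy bitplane decomposition, in which every coefficient of a codeblock is shifted and masked by the same operation at once, SIMD-style. This is only an illustrative sketch (function name, block contents and bit depth are our own); the actual coder also reformulates scanning, context formation and arithmetic coding:

```python
import numpy as np

def bitplanes(block, nbits):
    """Split a codeblock of non-negative coefficients into bitplanes,
    most significant plane first. Each list entry applies one shift/mask
    instruction to every coefficient simultaneously."""
    block = np.asarray(block)
    return [(block >> b) & 1 for b in range(nbits - 1, -1, -1)]

block = np.array([[5, 3], [12, 7]])
planes = bitplanes(block, nbits=4)
# Recombining the planes must reproduce the codeblock exactly.
recon = sum(p << (3 - i) for i, p in enumerate(planes))
```

Because the per-plane operation is identical for all coefficients, it maps naturally onto SIMD lanes; the sequential part that BPC-PaCo removes lies in the coding of each plane, not in this decomposition.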
[Digital thoracic radiology: devices, image processing, limits].
Frija, J; de Géry, S; Lallouet, F; Guermazi, A; Zagdanski, A M; De Kerviler, E
2001-09-01
In the first part, the different techniques of digital thoracic radiography are described. Computed radiography with phosphor plates, being the most widely commercialized, is emphasized, but the other detectors are also described: the selenium-coated drum, direct digital radiography with selenium detectors, indirect flat-panel detectors, and a system with four high-resolution CCD cameras. In the second part, the most important image processing techniques are discussed: gradation curves, unsharp-mask processing, the MUSICA system, dynamic range compression or reduction, and dual-energy subtraction. In the last part, the advantages and drawbacks of computed thoracic radiography are summarized. The most important are the almost consistently good image quality and the possibilities of image processing. PMID:11567193
NASA Astrophysics Data System (ADS)
Truchetet, F.; Léni, P. E.; Fougerolle, Y.
2013-05-01
Mastering the sorting of data in nD signals can lead to multiple applications, such as new compression, transmission, watermarking and encryption methods, and even new image processing methods. Some authors in past decades have proposed to use these approaches for image compression, indexing, median filtering, mathematical morphology and encryption. A mathematically rigorous way of conducting such a study was introduced by Andrei Nikolaevich Kolmogorov (1903-1987) in 1957, and recent results have provided constructive ways and practical algorithms for implementing the Kolmogorov superposition theorem. We propose in this paper to present those algorithms and some preliminary results obtained by our team by applying them to image processing problems such as compression, progressive transmission and watermarking.
EOS image data processing system definition study
NASA Technical Reports Server (NTRS)
Gilbert, J.; Honikman, T.; Mcmahon, E.; Miller, E.; Pietrzak, L.; Yorsz, W.
1973-01-01
The Image Processing System (IPS) requirements and configuration are defined for the NASA-sponsored advanced technology Earth Observatory System (EOS). The scope included investigation and definition of IPS operational, functional, and product requirements, considering overall system constraints and interfaces (sensor, etc.). The scope also included investigation of the technical feasibility and definition of a point design reflecting the system requirements. The design phase required a survey of present and projected technology related to general- and special-purpose processors, high-density digital tape recorders, and image recorders.
Molina, Manuel; Mota, Manuel; Ramos, Alfonso
2015-01-01
This work deals with mathematical modeling through branching processes. We consider sexually reproducing animal populations where, in each generation, the number of progenitor couples is determined in a non-predictable environment. By using a class of two-sex branching processes, we describe their demographic dynamics and provide several probabilistic and inferential contributions. They include results about the extinction of the population and the estimation of the offspring distribution and its main moments. We also present an application to salmonid populations.
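A minimal simulation in the spirit of a two-sex branching process can make the demographic dynamics concrete. All choices below are illustrative assumptions, not the authors' model: the environment is held fixed rather than non-predictable, offspring counts are Poisson, sexes are equiprobable, and couples form by perfect monogamy, i.e. min(#females, #males):

```python
import numpy as np

def simulate_two_sex(n0, generations, mean_offspring=2.4, seed=0):
    """Track the number of progenitor couples per generation.
    Each couple has a Poisson number of offspring; each offspring is
    female with probability 1/2; next-generation couples are formed
    as min(#females, #males)."""
    rng = np.random.default_rng(seed)
    couples = n0
    history = [couples]
    for _ in range(generations):
        if couples == 0:
            history.append(0)          # extinction is absorbing
            continue
        total = int(rng.poisson(mean_offspring, size=couples).sum())
        females = int(rng.binomial(total, 0.5))
        couples = min(females, total - females)
        history.append(couples)
    return history

hist = simulate_two_sex(10, 20)
```

Repeating such runs over many seeds gives a Monte Carlo estimate of the extinction probability, one of the quantities the paper studies analytically.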
Iglesias-Sarmiento, Valentín; Deaño, Manuel
2011-01-01
This investigation analyzed the relation between cognitive functioning and mathematical achievement in 114 students in fourth, fifth, and sixth grades. Differences in cognitive performance were studied concurrently in three selected achievement groups: mathematical learning disability group (MLD), low achieving group (LA), and typically achieving group (TA). For this purpose, performance in verbal memory and in the PASS cognitive processes of planning, attention, and simultaneous and successive processing was assessed at the end of the academic course. Correlational analyses showed that phonological loop and successive and simultaneous processing were related to mathematical achievement at all three grades. Regression analysis revealed simultaneous processing as a cognitive predictor of mathematical performance, although phonological loop was also associated with higher achievement. Simultaneous and successive processing were the elements that differentiated the MLD group from the LA group. These results show that, of all the variables analyzed in this study, simultaneous processing was the best predictor of mathematical performance. PMID:21444928
Translational motion compensation in ISAR image processing.
Wu, H; Grenier, D; Delisle, G Y; Fang, D G
1995-01-01
In inverse synthetic aperture radar (ISAR) imaging, the target rotational motion with respect to the radar line of sight contributes to the imaging ability, whereas the translational motion must be compensated out. This paper presents a novel two-step approach to translational motion compensation using an adaptive range tracking method for range bin alignment and a recursive multiple-scatterer algorithm (RMSA) for signal phase compensation. The initial step of RMSA is equivalent to the dominant-scatterer algorithm (DSA). An error-compensating point source is then recursively synthesized from the selected range bins, where each contains a prominent scatterer. Since the clutter-induced phase errors are reduced by phase averaging, the image speckle noise can be reduced significantly. Experimental data processing for a commercial aircraft and computer simulations confirm the validity of the approach.
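The dominant-scatterer idea that initializes RMSA can be sketched on synthetic data: pick the range bin with the most prominent scatterer and remove its pulse-to-pulse phase history from every bin. This toy single-scatterer compensation is only the starting point, not the paper's full recursive multiple-scatterer algorithm or its range-tracking step:

```python
import numpy as np

def dsa_compensate(data):
    """Dominant-scatterer phase compensation. data is a complex array
    of shape [range_bins, pulses]; the bin with the highest mean power
    supplies the translational phase error to be removed."""
    power = np.mean(np.abs(data) ** 2, axis=1)
    dominant = int(np.argmax(power))
    phase = np.angle(data[dominant])       # phase history of that bin
    return data * np.exp(-1j * phase)[None, :]

# Synthetic check: two range bins sharing a quadratic phase error.
pulses = 64
error = np.exp(1j * 0.01 * np.arange(pulses) ** 2)
data = np.vstack([3.0 * error, 0.5 * error])
compensated = dsa_compensate(data)
```

After compensation the shared phase error is gone while the magnitudes are untouched, which is exactly what cross-range Fourier processing requires.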
ERIC Educational Resources Information Center
Gullick, Margaret M.; Sprute, Lisa A.; Temple, Elise
2011-01-01
Individual differences in mathematics performance may stem from domain-general factors like working memory and intelligence. Parietal and frontal brain areas have been implicated in number processing, but the influence of such cognitive factors on brain activity during mathematics processing is not known. The relationship between brain mechanisms…
ERIC Educational Resources Information Center
Hidiroglu, Çaglar Naci; Bukova Güzel, Esra
2013-01-01
The aim of the present study is to conceptualize the approaches displayed for validation of model and thought processes provided in mathematical modeling process performed in technology-aided learning environment. The participants of this grounded theory study were nineteen secondary school mathematics student teachers. The data gathered from the…
Architecture for web-based image processing
NASA Astrophysics Data System (ADS)
Srini, Vason P.; Pini, David; Armstrong, Matt D.; Alalusi, Sayf H.; Thendean, John; Ueng, Sain-Zee; Bushong, David P.; Borowski, Erek S.; Chao, Elaine; Rabaey, Jan M.
1997-09-01
A computer systems architecture for processing medical images and other data coming over the Web is proposed. The architecture comprises a Java engine for communicating images over the Internet, storing data in local memory, and doing floating-point calculations, and a coprocessor MIMD parallel DSP for doing the fine-grained operations found in video, graphics, and image processing applications. The local memory is shared between the Java engine and the parallel DSP. Data coming from the Web is stored in the local memory. This approach avoids the frequent movement of image data between a host processor's memory and an image processor's memory, found in many image processing systems. A low-power, high-performance parallel DSP architecture containing many processors interconnected by a segmented hierarchical network has been developed. The instruction set of the 16-bit processor supports video, graphics, and image processing calculations. Two's complement arithmetic, saturation arithmetic, and packed instructions are supported. Higher data precision, such as 32-bit and 64-bit, can be achieved by cascading processors. A VLSI chip implementation of the architecture containing 64 processors organized in 16 clusters and interconnected by a statically programmable hierarchical bus is in progress. The buses are segmentable by programming switches on the bus. The instruction memory of each processor has sixteen 40-bit words. Data streaming through the processor is manipulated by the instructions. Multiple operations can be performed in a single cycle in a processor. A low-power handshake protocol is used for synchronization between the sender and the receiver of data. Temporary storage for data and filter coefficients is provided in each chip. A 256-by-16 memory unit is included in each of the 16 clusters. The memory unit can be used as a delay line, FIFO, lookup table or random access memory. The architecture is scalable with technology. Portable multimedia terminals like U
Computer image processing in marine resource exploration
NASA Technical Reports Server (NTRS)
Paluzzi, P. R.; Normark, W. R.; Hess, G. R.; Hess, H. D.; Cruickshank, M. J.
1976-01-01
Pictographic data or imagery is commonly used in marine exploration. Pre-existing image processing techniques (software) similar to those used on imagery obtained from unmanned planetary exploration were used to improve marine photography and side-scan sonar imagery. Features and details not visible by conventional photo processing methods were enhanced by filtering and noise removal on selected deep-sea photographs. Information gained near the periphery of photographs allows improved interpretation and facilitates construction of bottom mosaics where overlapping frames are available. Similar processing techniques were applied to side-scan sonar imagery, including corrections for slant range distortion, and along-track scale changes. The use of digital data processing and storage techniques greatly extends the quantity of information that can be handled, stored, and processed.
Mathematical models in simulation process in rehabilitation of persons with disabilities
NASA Astrophysics Data System (ADS)
Gorie, Nina; Dolga, Valer; Mondoc, Alina
2012-11-01
The problems of people with disabilities are varied. A disability may be physical, cognitive, mental, sensory, emotional, developmental, or some combination of these. The major disabilities that can appear in people's lives are blindness, deafness, limb-girdle muscular dystrophy, orthopedic impairment, and visual impairment. Disability is an umbrella term covering impairments, activity limitations and participation restrictions; a disability may occur during a person's lifetime or may be present from birth. The authors note that some of these disabilities can be rehabilitated. Starting from this state of affairs, the authors briefly present the possibility of using certain mechatronic systems for the rehabilitation of persons with different disabilities, focusing on the alternative of employing a Stewart platform to achieve the proposed goal. The authors present a systems-theory mathematical model of the parallel structure and describe its contents, and they analyze a mathematical model describing the rehabilitation process. Starting from the biomechanics of the affected function and taking medical recommendations into account, the authors illustrate the mathematical models of the rehabilitation work. They assemble a complete mathematical model of the parallel structure together with the rehabilitation process, perform simulations, and highlight the estimated results. The paper closes with the envisaged results of the analysis, conclusions, and steps for a future work program.
Saitou, Takashi; Imamura, Takeshi
2016-01-01
Cell cycle progression is strictly coordinated to ensure proper tissue growth, development, and regeneration of multicellular organisms. Spatiotemporal visualization of cell cycle phases directly helps us to obtain a deeper understanding of controlled, multicellular, cell cycle progression. The fluorescent ubiquitination-based cell cycle indicator (Fucci) system allows us to monitor, in living cells, the G1 and the S/G2/M phases of the cell cycle in red and green fluorescent colors, respectively. Since the discovery of Fucci technology, it has found numerous applications in the characterization of the timing of cell cycle phase transitions under diverse conditions and various biological processes. However, due to the complexity of cell cycle dynamics, understanding of specific patterns of cell cycle progression is still far from complete. In order to tackle this issue, quantitative approaches combined with mathematical modeling seem to be essential. Here, we review several studies that attempted to integrate Fucci technology and mathematical models to obtain quantitative information regarding cell cycle regulatory patterns. Focusing on the technological development of utilizing mathematics to retrieve meaningful information from the Fucci producing data, we discuss how the combined methods advance a quantitative understanding of cell cycle regulation.
Digital image processing of vascular angiograms
NASA Technical Reports Server (NTRS)
Selzer, R. H.; Blankenhorn, D. H.; Beckenbach, E. S.; Crawford, D. W.; Brooks, S. H.
1975-01-01
A computer image processing technique was developed to estimate the degree of atherosclerosis in the human femoral artery. With an angiographic film of the vessel as input, the computer was programmed to estimate vessel abnormality through a series of measurements, some derived primarily from the vessel edge information and others from optical density variations within the lumen shadow. These measurements were combined into an atherosclerosis index, which was found to correlate well with both visual and chemical estimates of atherosclerotic disease.
Novel image processing approach to detect malaria
NASA Astrophysics Data System (ADS)
Mas, David; Ferrer, Belen; Cojoc, Dan; Finaurini, Sara; Mico, Vicente; Garcia, Javier; Zalevsky, Zeev
2015-09-01
In this paper we present a novel image processing algorithm providing good preliminary capabilities for in vitro detection of malaria. The proposed concept is based upon analysis of the temporal variation of each pixel. Changes in dark pixels indicate that intracellular activity has occurred, signaling the presence of the malaria parasite inside the cell. Preliminary experimental results, involving analysis of red blood cells that were either healthy or infected with malaria parasites, validated the potential benefit of the proposed numerical approach.
IPLIB (Image processing library) user's manual
NASA Technical Reports Server (NTRS)
Faulcon, N. D.; Monteith, J. H.; Miller, K.
1985-01-01
IPLIB is a collection of HP FORTRAN 77 subroutines and functions that facilitate the use of a COMTAL image processing system driven by an HP-1000 computer. It is intended for programmers who want to use the HP 1000 to drive the COMTAL Vision One/20 system. It is assumed that the programmer knows HP 1000 FORTRAN 77 or at least one FORTRAN dialect. It is also assumed that the programmer has some familiarity with the COMTAL Vision One/20 system.
Sorting Olive Batches for the Milling Process Using Image Processing
Puerto, Daniel Aguilera; Martínez Gila, Diego Manuel; Gámez García, Javier; Gómez Ortega, Juan
2015-01-01
The quality of virgin olive oil obtained in the milling process is directly bound to the characteristics of the olives. Hence, the correct classification of the different incoming olive batches is crucial to reach the maximum quality of the oil. The aim of this work is to provide an automatic inspection system, based on computer vision, to classify automatically the different batches of olives entering the milling process. The classification is based on the differentiation between ground and tree olives. For this purpose, three different varieties have been studied (Picudo, Picual and Hojiblanco). The samples have been obtained by picking the olives directly from the tree or from the ground. The feature vector of the samples has been obtained on the basis of the olive image histograms. Moreover, different image preprocessing techniques have been employed, and two classification techniques have been used: discriminant analysis and neural networks. The proposed methodology has been validated successfully, obtaining good classification results. PMID:26147729
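The histogram-based feature extraction described above can be sketched as follows. For brevity a nearest-centroid rule stands in for the discriminant analysis and neural networks the authors actually used, and the bin count and synthetic brightness difference between classes are assumptions:

```python
import numpy as np

def histogram_features(image, bins=8):
    """Normalized intensity histogram as the feature vector."""
    hist, _ = np.histogram(image, bins=bins, range=(0.0, 1.0))
    return hist / hist.sum()

def fit_centroids(X, y):
    """One centroid per class in feature space."""
    y = np.asarray(y)
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, x):
    """Assign the class whose centroid is nearest."""
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

rng = np.random.default_rng(0)
# Synthetic data: "tree" olives rendered brighter than "ground" olives.
tree = [rng.uniform(0.6, 1.0, (16, 16)) for _ in range(5)]
ground = [rng.uniform(0.0, 0.4, (16, 16)) for _ in range(5)]
X = np.array([histogram_features(im) for im in tree + ground])
y = ["tree"] * 5 + ["ground"] * 5
centroids = fit_centroids(X, y)
pred = predict(centroids, histogram_features(rng.uniform(0.6, 1.0, (16, 16))))
```

Any classifier operating on the same normalized-histogram features (linear discriminant analysis, a small neural network) slots in at the `fit`/`predict` stage without changing the feature pipeline.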
Color Image Processing and Object Tracking System
NASA Technical Reports Server (NTRS)
Klimek, Robert B.; Wright, Ted W.; Sielken, Robert S.
1996-01-01
This report describes a personal-computer-based system for automatic and semiautomatic tracking of objects on film or video tape, developed to meet the needs of the Microgravity Combustion and Fluids Science Research Programs at the NASA Lewis Research Center. The system consists of individual hardware components working under computer control to achieve a high degree of automation. The most important hardware components include 16-mm and 35-mm film transports, a high-resolution digital camera mounted on an x-y-z micro-positioning stage, an S-VHS tapedeck, a Hi8 tapedeck, a video laserdisk, and a framegrabber. All of the image input devices are remotely controlled by a computer. Software was developed to integrate the overall operation of the system, including device frame incrementation, grabbing of image frames, image processing of the object's neighborhood, locating the position of the object being tracked, and storing the coordinates in a file. This process is performed repeatedly until the last frame is reached. Several different tracking methods are supported. To illustrate the process, two representative applications of the system are described. These applications represent typical uses of the system and include tracking the propagation of a flame front and tracking the movement of a liquid-gas interface with extremely poor visibility.
Automated synthesis of image processing procedures using AI planning techniques
NASA Technical Reports Server (NTRS)
Chien, Steve; Mortensen, Helen
1994-01-01
This paper describes the Multimission VICAR (Video Image Communication and Retrieval) Planner (MVP) (Chien 1994) system, which uses artificial intelligence planning techniques (Iwasaki & Friedland, 1985, Pemberthy & Weld, 1992, Stefik, 1981) to automatically construct executable complex image processing procedures (using models of the smaller constituent image processing subprograms) in response to image processing requests made to the JPL Multimission Image Processing Laboratory (MIPL). The MVP system allows the user to specify the image processing requirements in terms of the various types of correction required. Given this information, MVP derives unspecified required processing steps and determines appropriate image processing programs and parameters to achieve the specified image processing goals. This information is output as an executable image processing program which can then be executed to fill the processing request.
Mathematical modelling of the incomplete transformations in pseudoelastic processes in binary alloys
NASA Astrophysics Data System (ADS)
Vokoun, David; Kafka, Vratislav
1996-04-01
In his two papers, Kafka (1994, 1994a) presented a new approach to the explanation and mathematical modelling of the shape memory effect and of pseudoelasticity. This approach was based on his general concept of modelling inelastic processes in heterogeneous media (Kafka 1987), and it was shown that this concept can successfully be applied even in the case where the heterogeneity under study is on the atomic scale, i.e. in the case of binary alloys and their shape memory behaviour. In the second quoted paper (Kafka 1994a), quantitative comparisons with experimental data obtained with samples of NiTi alloy were shown, and it was demonstrated that the unified mathematical model is able to quantitatively describe the shape memory effect as well as pseudoelastic processes at different temperatures.
FITSH- a software package for image processing
NASA Astrophysics Data System (ADS)
Pál, András.
2012-04-01
In this paper we describe the main features of the software package named FITSH, intended to provide a standalone environment for analysis of data acquired by imaging astronomical detectors. The package both provides utilities for the full pipeline of subsequent related data-processing steps (including image calibration, astrometry, source identification, photometry, differential analysis, low-level arithmetic operations, multiple-image combinations, spatial transformations and interpolations) and aids the interpretation of the (mainly photometric and/or astrometric) results. The package also features a consistent implementation of photometry based on image subtraction, point spread function fitting and aperture photometry and provides easy-to-use interfaces for comparisons and for picking the most suitable method for a particular problem. The set of utilities found in this package is built on top of the commonly used UNIX/POSIX shells (hence the name of the package); therefore, both frequently used and well-documented tools for such environments can be exploited and managing a massive amount of data is rather convenient.
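Among the photometry methods FITSH implements, aperture photometry is the simplest to sketch: sum the flux inside a circular aperture and subtract a sky level estimated in a surrounding annulus. The function below is an illustrative minimal version, not FITSH's actual interface or implementation:

```python
import numpy as np

def aperture_photometry(image, x0, y0, r_ap, r_in, r_out):
    """Background-subtracted flux of a source at (x0, y0):
    sum of pixels with r <= r_ap, minus the median sky level
    measured in the annulus r_in <= r <= r_out."""
    yy, xx = np.indices(image.shape)
    r = np.hypot(xx - x0, yy - y0)
    aperture = r <= r_ap
    annulus = (r >= r_in) & (r <= r_out)
    sky = np.median(image[annulus])
    return image[aperture].sum() - sky * aperture.sum()

img = np.full((31, 31), 5.0)     # flat sky level of 5 counts/pixel
img[15, 15] += 100.0             # a point source of total flux 100
flux = aperture_photometry(img, 15, 15, r_ap=4, r_in=8, r_out=12)
```

The median sky estimate makes the measurement robust to a few contaminating sources in the annulus, which is why annulus-median backgrounds are a common default in photometry pipelines.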
Vector processing enhancements for real-time image analysis.
Shoaf, S.; APS Engineering Support Division
2008-01-01
A real-time image analysis system was developed for beam imaging diagnostics. An Apple Power Mac G5 with an Active Silicon LFG frame grabber was used to capture video images that were processed and analyzed. Software routines were created to utilize vector-processing hardware to reduce the time to process images as compared to conventional methods. These improvements allow for more advanced image processing diagnostics to be performed in real time.
Portable EDITOR (PEDITOR): A portable image processing system. [satellite images
NASA Technical Reports Server (NTRS)
Angelici, G.; Slye, R.; Ozga, M.; Ritter, P.
1986-01-01
The PEDITOR image processing system was created to be readily transferable from one type of computer system to another. While nearly identical in function and operation to its predecessor, EDITOR, PEDITOR employs additional techniques which greatly enhance its portability. These cover system structure and processing. In order to confirm the portability of the software system, two different types of computer systems running greatly differing operating systems were used as target machines. A DEC-20 computer running the TOPS-20 operating system and using a Pascal Compiler was utilized for initial code development. The remaining programmers used a Motorola Corporation 68000-based Forward Technology FT-3000 supermicrocomputer running the UNIX-based XENIX operating system and using the Silicon Valley Software Pascal compiler and the XENIX C compiler for their initial code development.
The Airborne Ocean Color Imager - System description and image processing
NASA Technical Reports Server (NTRS)
Wrigley, Robert C.; Slye, Robert E.; Klooster, Steven A.; Freedman, Richard S.; Carle, Mark; Mcgregor, Lloyd F.
1992-01-01
The Airborne Ocean Color Imager was developed as an aircraft instrument to simulate the spectral and radiometric characteristics of the next generation of satellite ocean color instrumentation. Data processing programs have been developed as extensions of the Coastal Zone Color Scanner algorithms for atmospheric correction and bio-optical output products. The latter include several bio-optical algorithms for estimating phytoplankton pigment concentration, as well as one for the diffuse attenuation coefficient of the water. Additional programs have been developed to geolocate these products and remap them into a georeferenced data base, using data from the aircraft's inertial navigation system. Examples illustrate the sequential data products generated by the processing system, using data from flightlines near the mouth of the Mississippi River: from raw data to atmospherically corrected data, to bio-optical data, to geolocated data, and, finally, to georeferenced data.
NASA Astrophysics Data System (ADS)
Lu, Lee-Jane W.; Nishino, Thomas K.; Johnson, Raleigh F.; Nayeem, Fatima; Brunder, Donald G.; Ju, Hyunsu; Leonard, Morton H., Jr.; Grady, James J.; Khamapirad, Tuenchit
2012-11-01
Women with mostly mammographically dense fibroglandular tissue (breast density, BD) have a four- to six-fold increased risk for breast cancer compared to women with little BD. BD is most frequently estimated from two-dimensional (2D) views of mammograms by a histogram segmentation approach (HSM) and, more recently, by a mathematical algorithm consisting of mammographic imaging parameters (MATH). Two non-invasive clinical magnetic resonance imaging (MRI) protocols, 3D gradient-echo (3DGRE) and short tau inversion recovery (STIR), were modified for 3D volumetric reconstruction of the breast for measuring fatty and fibroglandular tissue volumes by a Gaussian-distribution curve-fitting algorithm. Replicate breast exams (N = 2 to 7 replicates in six women) by 3DGRE and STIR were highly reproducible for all tissue-volume estimates (coefficients of variation <5%). Reliability studies compared measurements from the four methods, 3DGRE, STIR, HSM, and MATH (N = 95 women), by linear regression and intra-class correlation (ICC) analyses. R-squared, regression slopes, and ICC, respectively, were (1) 0.76-0.86, 0.8-1.1, and 0.87-0.92 for %-gland tissue, (2) 0.72-0.82, 0.64-0.96, and 0.77-0.91 for glandular volume, (3) 0.87-0.98, 0.94-1.07, and 0.89-0.99 for fat volume, and (4) 0.89-0.98, 0.94-1.00, and 0.89-0.98 for total breast volume. For all values estimated, the correlation was stronger between the two MRI protocols than between each MRI protocol and mammography, and between each MRI protocol and MATH than between each MRI protocol and HSM. All ICC values were >0.75, indicating that all four methods were reliable for measuring BD and that the mathematical algorithm and the two complementary non-invasive MRI protocols can objectively and reliably estimate different types of breast tissue.
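Separating fatty from fibroglandular voxels by fitting Gaussian distributions to an intensity histogram can be approximated with a tiny 1-D two-component EM fit. The mixture model, its initialization, the iteration count and the synthetic data are our assumptions for illustration, not the authors' curve-fitting algorithm:

```python
import numpy as np

def fit_two_gaussians(x, iters=50):
    """Fit a two-component 1-D Gaussian mixture by EM.
    Returns (weights, means, standard deviations)."""
    x = np.asarray(x, dtype=float)
    mu = np.percentile(x, [25, 75])      # crude initial means
    sd = np.array([x.std(), x.std()])
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibilities (the common 1/sqrt(2*pi) cancels).
        pdf = w * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / sd
        r = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, standard deviations.
        n = r.sum(axis=0)
        w = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n)
    return w, mu, sd

rng = np.random.default_rng(1)
# Synthetic bimodal intensities: a fat-like mode near 0.2 (70% of
# voxels) and a gland-like mode near 0.8 (30%); values are made up.
x = np.concatenate([rng.normal(0.2, 0.03, 700), rng.normal(0.8, 0.05, 300)])
w, mu, sd = fit_two_gaussians(x)
```

The fitted component weights play the role of tissue fractions: multiplying them by the total breast volume gives fat and gland volume estimates in the same spirit as the paper's approach.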
Are poor mathematics skills associated with visual deficits in temporal processing?
Sigmundsson, H; Anholt, S K; Talcott, J B
2010-01-22
Developmental learning disabilities such as dyslexia and dyscalculia have a high rate of co-occurrence in pediatric populations, suggesting that they share underlying cognitive and neurophysiological mechanisms. Dyslexia and other developmental disorders with a strong heritable component have been associated with reduced sensitivity to coherent motion stimuli, an index of visual temporal processing on a millisecond time-scale. Here we examined whether deficits in sensitivity to visual motion are evident in children who have poor mathematics skills relative to other children of the same age. We obtained psychophysical thresholds for visual coherent motion and a control task from two groups of children who differed in their performance on a test of mathematics achievement. Children with math skills in the lowest 10% in their cohort were less sensitive than age-matched controls to coherent motion, but they had statistically equivalent thresholds to controls on a coherent form control measure. Children with mathematics difficulties therefore tend to present a similar pattern of visual processing deficit to those that have been reported previously in other developmental disorders. We speculate that reduced sensitivity to temporally defined stimuli such as coherent motion represents a common processing deficit apparent across a range of commonly co-occurring developmental disorders. PMID:19995594
Moll, Kristina; Göbel, Silke M; Snowling, Margaret J
2015-01-01
As well as being the hallmark of mathematics disorders, deficits in number processing have also been reported for individuals with reading disorders. The aim of the present study was to investigate separately the components of numerical processing affected in reading and mathematical disorders within the framework of the Triple Code Model. Children with reading disorders (RD), mathematics disorders (MD), comorbid deficits (RD + MD), and typically developing children (TD) were tested on verbal, visual-verbal, and nonverbal number tasks. As expected, children with MD were impaired across a broad range of numerical tasks. In contrast, children with RD were impaired in (visual-)verbal number tasks but showed age-appropriate performance in nonverbal number skills, suggesting their impairments were domain specific and related to their reading difficulties. The comorbid group showed an additive profile of the impairments of the two single-deficit groups. Performance in speeded verbal number tasks was related to rapid automatized naming, a measure of visual-verbal access in the RD but not in the MD group. The results indicate that deficits in number skills are due to different underlying cognitive deficits in children with RD compared to children with MD: a phonological deficit in RD and a deficit in processing numerosities in MD.
NASA Astrophysics Data System (ADS)
Statella, Thiago; Pina, Pedro; da Silva, Erivaldo Antônio
2012-09-01
This paper presents a method for automatic identification of dust devil tracks in MOC NA and HiRISE images of Mars. The method is based on Mathematical Morphology and successfully processes those images despite differences in spatial resolution and scene size. A dataset of 200 images of the Martian surface, representative of the diversity of these track features, was used to develop, test and evaluate our method, comparing its outputs with manually produced reference images. The analysis showed a mean accuracy of about 92%. We also give some examples of how to use the results to obtain information about dust devils, namely mean track width, main direction of movement and coverage per scene.
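The abstract does not reproduce the paper's morphological pipeline, but the core Mathematical Morphology operations it builds on can be sketched generically. The following minimal pure-NumPy implementation of binary erosion, dilation, and opening is illustrative only (function names and the structuring element are assumptions, not the authors' code):

```python
import numpy as np

def erode(img, se):
    """Binary erosion: a pixel survives only if the structuring element
    fits entirely inside the foreground at that position."""
    H, W = img.shape
    h, w = se.shape
    padded = np.pad(img, ((h // 2,) * 2, (w // 2,) * 2), constant_values=0)
    out = np.zeros_like(img)
    for i in range(H):
        for j in range(W):
            window = padded[i:i + h, j:j + w]
            out[i, j] = np.all(window[se == 1] == 1)
    return out

def dilate(img, se):
    """Binary dilation: a pixel is set if the structuring element
    hits the foreground anywhere at that position."""
    H, W = img.shape
    h, w = se.shape
    padded = np.pad(img, ((h // 2,) * 2, (w // 2,) * 2), constant_values=0)
    out = np.zeros_like(img)
    for i in range(H):
        for j in range(W):
            window = padded[i:i + h, j:j + w]
            out[i, j] = np.any(window[se == 1] == 1)
    return out

def opening(img, se):
    """Opening = erosion then dilation; removes objects smaller than se
    (e.g. noise specks) while preserving larger track-like structures."""
    return dilate(erode(img, se), se)
```

Applied with a 3x3 structuring element, an opening removes isolated pixels while leaving a 3x3 foreground block intact, which is the basic mechanism a track-detection pipeline would exploit.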
Development of the SOFIA Image Processing Tool
NASA Technical Reports Server (NTRS)
Adams, Alexander N.
2011-01-01
The Stratospheric Observatory for Infrared Astronomy (SOFIA) is a Boeing 747SP carrying a 2.5 meter infrared telescope capable of operating at altitudes between twelve and fourteen kilometers, above more than 99 percent of the water vapor in the atmosphere. The ability to observe above most atmospheric water vapor, coupled with the ability to make observations from anywhere at any time, makes SOFIA one of the world's premier infrared observatories. SOFIA uses three visible-light CCD imagers to assist in pointing the telescope. The data from these imagers are stored in archive files, as is housekeeping data containing information such as boresight and area-of-interest locations. A tool was developed that can both extract and process data from these archive files.
Image processing and the Arithmetic Fourier Transform
Tufts, D.W.; Fan, Z.; Cao, Z.
1989-01-01
A new Fourier technique, the Arithmetic Fourier Transform (AFT), was recently developed for signal processing. The approach is based on the number-theoretic method of Möbius inversion. The AFT needs only additions, apart from a small number of multiplications by prescribed scale factors, and the algorithm is well suited to parallel processing. Moreover, there is no accumulation of rounding errors in the AFT algorithm. In this reprint, the AFT is used to compute the discrete cosine transform and is also extended to 2-D cases for image processing. A 2-D Möbius inversion formula is proved and then applied to the computation of the Fourier coefficients of a periodic 2-D function. It is shown that the output of an array of delay-line (or transversal) filters is the Möbius transform of the input harmonic terms; the 2-D Fourier coefficients can therefore be obtained through Möbius inversion of the filter-array output.
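The AFT itself is not given in the abstract, but the Möbius inversion it rests on is standard number theory and can be demonstrated directly. The sketch below (illustrative, not the paper's algorithm) computes the Möbius function and uses the dual inversion formula: if g(n) is the sum of f(m) over all multiples m of n up to N, then f(n) is recovered as the Möbius-weighted sum of g(kn):

```python
def mobius(n):
    """Möbius function mu(n) via trial factorization:
    0 if n has a squared prime factor, else (-1)^(number of prime factors)."""
    if n == 1:
        return 1
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:      # squared prime factor -> mu = 0
                return 0
            result = -result
        p += 1
    if n > 1:                   # one leftover prime factor
        result = -result
    return result

def invert(g, N):
    """Recover f from g(n) = sum_{n | m, m <= N} f(m)
    via Möbius inversion: f(n) = sum_{k : k*n <= N} mu(k) * g(k*n)."""
    return {n: sum(mobius(k) * g[k * n] for k in range(1, N // n + 1))
            for n in range(1, N + 1)}
```

Numerically, building g from any test function f and inverting it returns f exactly, which is the mechanism the AFT uses to extract Fourier coefficients from filter-array outputs using only additions and sign flips.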
Students, Computers and Mathematics the Golden Trilogy in the Teaching-Learning Process
ERIC Educational Resources Information Center
García-Santillán, Arturo; Escalera-Chávez, Milka Elena; López-Morales, José Satsumi; Córdova Rangel, Arturo
2014-01-01
In this paper we examine the relationships between students' attitudes towards mathematics and technology, therefore, we take a Galbraith and Hines' scale (1998, 2000) about mathematics confidence, computer confidence, computer and mathematics interaction, mathematics motivation, computer motivation, and mathematics engagement. 164…
HYMOSS signal processing for pushbroom spectral imaging
NASA Technical Reports Server (NTRS)
Ludwig, David E.
1991-01-01
The objective of the Pushbroom Spectral Imaging Program was to develop on-focal plane electronics which compensate for detector array non-uniformities. The approach taken was to implement a simple two point calibration algorithm on focal plane which allows for offset and linear gain correction. The key on focal plane features which made this technique feasible was the use of a high quality transimpedance amplifier (TIA) and an analog-to-digital converter for each detector channel. Gain compensation is accomplished by varying the feedback capacitance of the integrate and dump TIA. Offset correction is performed by storing offsets in a special on focal plane offset register and digitally subtracting the offsets from the readout data during the multiplexing operation. A custom integrated circuit was designed, fabricated, and tested on this program which proved that nonuniformity compensated, analog-to-digital converting circuits may be used to read out infrared detectors. Irvine Sensors Corporation (ISC) successfully demonstrated the following innovative on-focal-plane functions that allow for correction of detector non-uniformities. Most of the circuit functions demonstrated on this program are finding their way onto future IC's because of their impact on reduced downstream processing, increased focal plane performance, simplified focal plane control, reduced number of dewar connections, as well as the noise immunity of a digital interface dewar. The potential commercial applications for this integrated circuit are primarily in imaging systems. These imaging systems may be used for: security monitoring systems, manufacturing process monitoring, robotics, and for spectral imaging when used in analytical instrumentation.
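The two-point (offset plus linear gain) correction described above is a standard nonuniformity-compensation scheme. A minimal sketch, assuming each detector channel has been read out against two uniform reference sources at known levels (the function name and calling convention are assumptions, not ISC's on-focal-plane implementation):

```python
import numpy as np

def two_point_calibrate(raw, cold_ref, hot_ref, cold_level, hot_level):
    """Per-detector two-point nonuniformity correction.

    cold_ref, hot_ref : per-channel readouts observed at two known uniform
                        reference levels (cold_level, hot_level).
    Maps each channel's response linearly so that cold_ref -> cold_level
    and hot_ref -> hot_level, correcting both offset and gain."""
    gain = (hot_level - cold_level) / (hot_ref - cold_ref)
    return (raw - cold_ref) * gain + cold_level
```

Because the correction is linear per channel, any channel whose true response is itself linear (scene * gain + offset) is mapped back exactly to the scene value, regardless of its individual gain and offset.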
[Glossary of terms used by radiologists in image processing].
Rolland, Y; Collorec, R; Bruno, A; Ramée, A; Morcet, N; Haigron, P
1995-01-01
We give the definition of 166 words used in image processing. Adaptivity, aliasing, analog-digital converter, analysis, approximation, arc, artifact, artificial intelligence, attribute, autocorrelation, bandwidth, boundary, brightness, calibration, class, classification, classify, centre, cluster, coding, color, compression, contrast, connectivity, convolution, correlation, data base, decision, decomposition, deconvolution, deduction, descriptor, detection, digitization, dilation, discontinuity, discretization, discrimination, disparity, display, distance, distortion, distribution, dynamic, edge, energy, enhancement, entropy, erosion, estimation, event, extrapolation, feature, file, filter, filter floaters, fitting, Fourier transform, frequency, fusion, fuzzy, Gaussian, gradient, graph, gray level, group, growing, histogram, Hough transform, Hounsfield, image, impulse response, inertia, intensity, interpolation, interpretation, invariance, isotropy, iterative, JPEG, knowledge base, label, Laplacian, learning, least squares, likelihood, matching, Markov field, mask, mathematical morphology, merge (to), MIP, median, minimization, model, moiré, moment, MPEG, neural network, neuron, node, noise, norm, normal, operator, optical system, optimization, orthogonal, parametric, pattern recognition, periodicity, photometry, pixel, polygon, polynomial, prediction, pulsation, pyramidal, quantization, raster, reconstruction, recursive, region, rendering, representation space, resolution, restoration, robustness, ROC, thinning, transform, sampling, saturation, scene analysis, segmentation, separable function, sequential, smoothing, spline, split (to), shape, threshold, tree, signal, speckle, spectrum, stationarity, statistical, stochastic, structuring element, support, syntactic, synthesis, texture, truncation, variance, vision, voxel, windowing. PMID:8762273
NASA Astrophysics Data System (ADS)
Roman, Evelyn
1990-02-01
In this paper we discuss the work being done at Itek combining parallel, symbolic, and neural methodologies at different stages of processing for imagery exploitation. We describe a prototype system we have been implementing combining real-time parallel image processing on an 8-stage parallel image-processing engine (PIPE) computer with expert system software such as our Multi-Sensor Exploitation Assistant system on the Symbolics LISP machine and with neural computations on the PIPE and on its host IBM AT for target recognition and change detection applications. We also provide a summary of basic neural concepts, and show the commonality between neural nets and related mathematics, artificial intelligence, and traditional image processing concepts. This provides us with numerous choices for the implementation of constraint satisfaction, transformational invariance, inference and representational mechanisms, and software lifecycle engineering methodologies in the different computational layers. Our future work may include optical processing as well, for a real-time capability complementing the PIPE's.
A New Image Processing and GIS Package
NASA Technical Reports Server (NTRS)
Rickman, D.; Luvall, J. C.; Cheng, T.
1998-01-01
The image processing and GIS package ELAS was developed during the 1980s by NASA. It proved to be a popular, influential and powerful package for the manipulation of digital imagery. Before the advent of PCs it was used by hundreds of institutions, mostly schools. It is the unquestioned, direct progenitor of two commercial GIS remote sensing packages, ERDAS and MapX, and influenced others, such as PCI. Its power was demonstrated by its use far beyond its original purpose: it has been applied to several different types of medical imagery, photomicrographs of rock, images of turtle flippers and numerous other esoteric imagery. Although development largely stopped in the early 1990s, the package still offers as much or more power and flexibility than any other roughly comparable package, public or commercial. It is a huge body of code, representing more than a decade of work by full-time, professional programmers. The current versions all have several deficiencies compared to current software standards and usage, notably the strictly command-line interface. In order to support their research needs, the authors are in the process of fundamentally changing ELAS, and in the process greatly increasing its power, utility, and ease of use. The new software is called ELAS II. This paper discusses the design of ELAS II.
ERIC Educational Resources Information Center
Canturk-Gunhan, Berna; Bukova-Guzel, Esra; Ozgur, Zekiye
2012-01-01
The purpose of this study is to determine prospective mathematics teachers' views about using problem-based learning (PBL) in statistics teaching and to examine their thought processes. It is a qualitative study conducted with 15 prospective mathematics teachers from a state university in Turkey. The data were collected via participant observation…
ERIC Educational Resources Information Center
Scherer, Petra; Steinbring, Heinz
2006-01-01
One could focus on many different aspects of improving the quality of mathematics teaching. For a better understanding of children's mathematical learning processes or teaching and learning in general, reflection on and analysis of concrete classroom situations are of major importance. On the basis of experiences gained in a collaborative research…
ERIC Educational Resources Information Center
Klein, M.
2002-01-01
Undertakes, from a poststructuralist perspective, a meta-analysis of two short episodes from a paper by Manouchehri and Goodman (2000). Explores how mathematical knowledge and identities are produced in teaching/learning interactions in the classroom and the wider practical implications of this productive power of process for mathematics education…
Stent segmentation in IOCT-TD images using gradient combination and mathematical morphology
NASA Astrophysics Data System (ADS)
Cardona Cardenas, Diego A.; Cardoso Moraes, Matheus; Furuie, Sérgio S.
2015-01-01
In 2010, cardiovascular disease (CVD) caused 33% of all deaths in Brazil. Modalities such as Intravascular Optical Coherence Tomography (IOCT) provide in vivo coronary imaging for detecting and monitoring the progression of CVDs. This modality is widely used, in particular, to investigate neo-intimal post-stent re-stenosis. Computational methods applied to IOCT images can render objective structural information, such as areas and perimeters, allowing more accurate diagnostics. However, the variety of methods in the literature applied to IOCT is still small compared to other related modalities. We therefore propose a stent segmentation approach based on features extracted by gradient operations and Mathematical Morphology. The methodology can be summarized as follows: first, the lumen is segmented and contrast stretching is applied, both to be used as auxiliary information. Second, the edges of objects are obtained by gradient computation. Next, a stent extractor finds and selects relevant stent information. Finally, an interpolation procedure followed by morphological operations completes the segmentation. To evaluate the method, 160 images of pig coronaries, acquired 30, 90 and 180 days after stent implantation, were segmented and compared to their gold standards. The proposed approach presents good accuracy: True Positive (TP(%)) = 96.51±5.10, False Positive (FP(%)) = 6.09±5.32, False Negative (FN(%)) = 3.49±5.10. In conclusion, the good results and low complexity encourage the use and continued evolution of the current approach. However, only images from IOCT-TD technology were evaluated; further investigations should adapt this approach to work with IOCT-FD technology as well.
Some notes on the application of discrete wavelet transform in image processing
Caria, Egydio C. S.; Costa A, Trajano A. de; Rebello, Joao Marcos A.
2011-06-23
Mathematical transforms are used in signal processing to extract what is known as 'hidden' information. One of these mathematical tools is the Discrete Wavelet Transform (DWT), which has been increasingly employed in non-destructive testing and, more specifically, in image processing. The main concern of the present work is to employ the DWT to suppress noise without losing relevant image features. However, some aspects must be taken into consideration when applying the DWT in image processing, especially to weld radiographs, in order to achieve consistent results. Three topics were selected as representative of these difficulties: 1) How can the image matrix be padded to fit the requirement of 2^n lines and 2^n rows? 2) How can the most suitable decomposition level of the DWT and the correct choice of coefficient suppression be selected? 3) Does the scanning direction relative to the weld radiograph image, e.g., longitudinal or transversal, influence the final processed image? It is known that some artifacts may be present in weld radiograph images. Indeed, the weld surface is frequently rough and rippled, which appears as gray-level variation on the radiograph and is sometimes mistaken for defective areas. Depending on the position of these artifacts, longitudinal or transversal to the weld bead, they may influence the image processing procedure differently. This influence is clearly seen in the distribution of the DWT coefficients. In the present work, two weld radiographs of quite different image quality are given as examples.
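The abstract does not specify the wavelet family; as a self-contained illustration of DWT denoising, here is a one-level 2-D Haar transform (the simplest wavelet, equivalent to db1) with soft thresholding of the detail subbands. It assumes even image dimensions, which is exactly the 2^n-padding issue the paper raises:

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar DWT; returns (LL, LH, HL, HH) subbands."""
    a = (img[::2] + img[1::2]) / 2.0       # row averages
    d = (img[::2] - img[1::2]) / 2.0       # row details
    LL = (a[:, ::2] + a[:, 1::2]) / 2.0
    LH = (a[:, ::2] - a[:, 1::2]) / 2.0
    HL = (d[:, ::2] + d[:, 1::2]) / 2.0
    HH = (d[:, ::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    """Exact inverse of haar2d."""
    H, W = LL.shape
    a = np.zeros((H, 2 * W)); d = np.zeros((H, 2 * W))
    a[:, ::2] = LL + LH; a[:, 1::2] = LL - LH
    d[:, ::2] = HL + HH; d[:, 1::2] = HL - HH
    img = np.zeros((2 * H, 2 * W))
    img[::2] = a + d; img[1::2] = a - d
    return img

def soft(x, t):
    """Soft thresholding: shrink coefficients toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def denoise(img, t):
    """Suppress noise by thresholding detail subbands only,
    leaving the LL approximation (the relevant image features) intact."""
    LL, LH, HL, HH = haar2d(img)
    return ihaar2d(LL, soft(LH, t), soft(HL, t), soft(HH, t))
```

With threshold t = 0 the round trip is exact, confirming the transform pair; increasing t trades noise suppression against loss of fine detail, the trade-off the paper investigates.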
A fuzzy mathematics model for radioactive waste characterization by process knowledge
Smith, M.; Stevens, S.; Elam, K.; Vrba, J.
1994-12-31
Fuzzy mathematics and fuzzy logic are means for making decisions that can integrate complicated combinations of hard and soft factors and produce mathematically validated results that can be independently verified. In this particular application, several sources of information regarding the waste stream have been compiled, including facility operating records, other waste generated by the facility in the past, laboratory analysis results, and interviews with facility personnel. A fuzzy mathematics model is used to interrelate these various sources of information and arrive at a defensible estimate of the contaminant concentration in the final waste product. The model accounts for the separate process-knowledge-based contaminant concentrations by providing a weighted averaging technique to incorporate information from the various sources. Reliability estimates are provided for each component piece of information and combined, using the model, into an estimate that provides a near-probabilistic value for contaminant concentration. The spreadsheet accounts for the estimated uncertainty in the concentration on the basis of "reliability curves," which are derived from personal process knowledge as well as limited independent measurements.
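The model itself is not published in the abstract; the weighted-averaging idea it describes can be sketched as follows. Everything here is an illustrative assumption (in particular the crude combined-reliability figure, a stand-in for the paper's reliability curves):

```python
def combine_estimates(estimates):
    """Reliability-weighted combination of contaminant-concentration estimates.

    estimates : list of (concentration, reliability) pairs, with
                reliability in (0, 1] reflecting confidence in the source
                (operating records, lab results, interviews, ...).
    Returns the reliability-weighted mean concentration and a simple
    combined-reliability figure (here: the best single source)."""
    total_w = sum(r for _, r in estimates)
    mean = sum(c * r for c, r in estimates) / total_w
    combined_r = max(r for _, r in estimates)
    return mean, combined_r
```

For example, a highly reliable lab result of 10 ppm (reliability 1.0) combined with a low-reliability interview estimate of 40 ppm (reliability 0.2) yields a weighted mean of 15 ppm, pulled only modestly toward the weaker source.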
Using Image Processing to Determine Emphysema Severity
NASA Astrophysics Data System (ADS)
McKenzie, Alexander; Sadun, Alberto
2010-10-01
Currently, X-rays and computerized tomography (CT) scans are used to detect emphysema, but other tests are required to accurately quantify the amount of lung affected by the disease. These images clearly show whether a patient has emphysema, but visual inspection alone cannot quantify the degree of the disease, which presents as subtle dark spots on the lung. Our goal is to use these CT scans to accurately diagnose and determine emphysema severity levels in patients. This will be accomplished by performing several different analyses of CT scan images from patients representing a wide range of disease severity. In addition to analyzing the original CT data, the process converts the data to one- and two-bit images and then examines the deviation from a normal distribution curve to determine skewness. Our preliminary results show that this method of assessment appears to be more accurate and robust than currently used methods, which involve looking at percentages of radiodensities in the air passages of the lung.
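The deviation-from-normality measure mentioned above is the third standardized moment. A minimal sketch (the statistic is standard; applying it to CT attenuation values is the abstract's idea, the code is not the authors'):

```python
import numpy as np

def skewness(values):
    """Sample skewness (population form): third standardized moment.
    Zero for a symmetric distribution; positive when the tail extends
    toward high values."""
    v = np.asarray(values, dtype=float)
    m = v.mean()
    s = v.std()
    return ((v - m) ** 3).mean() / s ** 3
```

A perfectly symmetric sample gives zero skewness; a sample dominated by low values with a few high outliers (analogous to mostly-normal lung tissue with sparse dark emphysematous regions, but in mirrored sign) departs markedly from zero, which is the signal the proposed assessment exploits.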
Image processing to optimize wave energy converters
NASA Astrophysics Data System (ADS)
Bailey, Kyle Marc-Anthony
The world is turning to renewable energies as a means of ensuring the planet's future and well-being. There have been a few attempts in the past to utilize wave power to generate electricity through Wave Energy Converters (WECs), but only recently have they become a focal point in the renewable energy field, and over the past few years there has been a global drive to advance their efficiency. Wave power is produced by placing a device, either onshore or offshore, that captures the energy of ocean surface waves and uses it to drive a generator. This paper provides a novel and innovative way to estimate ocean wave frequency through image processing, achieved by applying a complex modulated lapped orthogonal transform filter bank to satellite images of ocean waves. This filter bank provides an equal-subband decomposition of the Nyquist-bounded discrete-time Fourier transform spectrum. The maximum-energy subband of the 2D complex modulated lapped transform is used to determine the horizontal and vertical frequencies, from which the wave frequency in the direction of the WEC follows by a simple trigonometric scaling. The robustness of the proposed method is demonstrated by application to simulated and real satellite images where the frequency is known.
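The thesis uses a complex modulated lapped orthogonal transform; as a simpler stand-in that shows the same maximum-energy idea, the sketch below locates the dominant 2-D frequency bin of a wave field with an ordinary FFT. The FFT substitution and the function name are assumptions, not the author's filter bank:

```python
import numpy as np

def dominant_wave_frequency(img, dx=1.0):
    """Locate the dominant spatial frequency of a wave image via 2-D FFT.

    Returns (fx, fy): horizontal and vertical spatial frequencies
    (cycles per unit length) of the strongest non-DC component.
    The component along a heading theta is then
    fx*cos(theta) + fy*sin(theta) -- the trigonometric scaling
    toward the WEC direction mentioned in the text."""
    F = np.fft.fft2(img)
    F[0, 0] = 0                          # suppress the DC (mean-level) term
    mag = np.abs(F)
    i, j = np.unravel_index(np.argmax(mag), mag.shape)
    fy = np.fft.fftfreq(img.shape[0], d=dx)[i]
    fx = np.fft.fftfreq(img.shape[1], d=dx)[j]
    return fx, fy
```

On a synthetic plane wave of known wavelength the peak bin recovers the injected frequency (up to the conjugate-symmetric sign), which mirrors the thesis's validation on simulated images where the frequency is known.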
Mathematical simulation of direct reduction process in zinc-bearing pellets
NASA Astrophysics Data System (ADS)
Liu, Ying; Su, Fu-yong; Wen, Zhi; Li, Zhi; Yong, Hai-quan; Feng, Xiao-hong
2013-11-01
A one-dimensional unsteady mathematical model was established to describe direct reduction in a composite pellet made of metallurgical dust. The model considered heat transfer, mass transfer, and chemical reactions including iron oxide reductions, zinc oxide reduction and carbon gasification, and it was numerically solved by the tridiagonal matrix algorithm (TDMA). In order to verify the model, an experiment was performed, in which the profiles of temperature and zinc removal rate were measured during the reduction process. Results calculated by the mathematical model were in fairly good agreement with experimental data. Finally, the effects of furnace temperature, pellet size, and carbon content were investigated by model calculations. It is found that the pellet temperature curve can be divided into four parts according to heating rate. Also, the zinc removal rate increases with the increase of furnace temperature and the decrease of pellet size, and carbon content in the pellet has little influence on the zinc removal rate.
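The TDMA mentioned above is the standard Thomas algorithm for tridiagonal systems, which arise when one-dimensional heat- and mass-transfer equations are discretized. A generic sketch (the solver is textbook material, not the authors' specific discretization):

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system with the Thomas algorithm (TDMA).

    a : sub-diagonal   (length n; a[0] unused)
    b : main diagonal  (length n)
    c : super-diagonal (length n; c[-1] unused)
    d : right-hand side (length n)
    Assumes the system is diagonally dominant, as discretized
    diffusion equations typically are."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                       # forward elimination
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

The algorithm runs in O(n) time and memory, which is why it is the default choice for one-dimensional unsteady models like the pellet-reduction model described here.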
Mathematical modelling of some chemical and physical processes in underground coal gasification
Creighton, J. R.
1981-08-01
Underground coal gasification normally involves two vertical wells which must be linked by a channel having low resistance to gas flow. There are several ways of establishing such linkage, but all leave a relatively open horizontal hole with a diameter on the order of a meter. To increase our understanding of the chemical and physical processes governing underground coal gasification, LLNL has been conducting laboratory-scale experiments accompanied by mathematical modelling. Blocks of selected coal types are cut to fit 55-gallon oil drums and sealed in place with plaster. A 1 cm diameter hole is drilled the length of the block and plumbing is attached to provide a flow of air or an oxygen/steam mixture. After an instrumented burn, the block is sawed open to examine the cavity. Mathematical modelling has been directed towards predicting the cavity shape. This paper describes some sub-models and examines their impact on predicted cavity shapes.
Cárdenas Sandoval, Rosy Paola; Garzón-Alvarado, Diego Alexander; Ramírez Martínez, Angélica Maria
2012-06-01
This article proposes a mathematical model that predicts the wound healing process of the ligament after a sprain, grade II. The model describes the swelling, expression of the platelet-derived growth factor (PDGF), formation and migration of fibroblasts into the injury area and the expression of collagen fibers. Additionally, the model can predict the effect of ice treatment in reducing inflammation and the action of mechanical stress in the process of remodeling of collagen fibers. The results obtained from computer simulation show a high concordance with the clinical data previously reported by other authors.
Molina, Manuel; Mota, Manuel; Ramos, Alfonso
2015-01-01
This work deals with mathematical modeling through branching processes. We consider sexually reproducing animal populations where, in each generation, the number of progenitor couples is determined in a non-predictable environment. By using a class of two-sex branching processes, we describe their demographic dynamics and provide several probabilistic and inferential contributions. They include results about the extinction of the population and the estimation of the offspring distribution and its main moments. We also present an application to salmonid populations. PMID:24526259
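The abstract names the model class (two-sex branching processes with couple formation) but not its parameters. The following simulation is a generic illustration under assumed choices: Poisson offspring per couple, a fixed female probability, and perfect monogamous mating L(f, m) = min(f, m). None of these specifics are the paper's salmonid fit:

```python
import math
import random

def poisson(rng, lam):
    """Knuth's Poisson sampler (adequate for small means)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def simulate_two_sex(n0, mean_offspring, p_female, generations, seed=0):
    """Simulate a simple two-sex (bisexual) branching process.

    Each generation, every progenitor couple produces a Poisson number of
    offspring; each offspring is female with probability p_female; the
    next generation's couples are formed by monogamous mating,
    min(females, males). Returns the couple counts per generation."""
    rng = random.Random(seed)
    couples = n0
    history = [couples]
    for _ in range(generations):
        females = males = 0
        for _ in range(couples):
            for _ in range(poisson(rng, mean_offspring)):
                if rng.random() < p_female:
                    females += 1
                else:
                    males += 1
        couples = min(females, males)   # monogamous mating function
        history.append(couples)
        if couples == 0:                # extinction is absorbing
            break
    return history
```

Repeating such runs over many seeds gives a Monte Carlo estimate of the extinction probability, one of the quantities the paper treats analytically.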
ERIC Educational Resources Information Center
Ozdemir, S.; Reis, Z. Ayvaz
2013-01-01
Mathematics is an important discipline, providing crucial tools, such as problem solving, to improve our cognitive abilities. In order to solve a problem, it is better to envision and represent through multiple means. Multiple representations can help a person to redefine a problem with his/her own words in that envisioning process. Dynamic and…
Platform for distributed image processing and image retrieval
NASA Astrophysics Data System (ADS)
Gueld, Mark O.; Thies, Christian J.; Fischer, Benedikt; Keysers, Daniel; Wein, Berthold B.; Lehmann, Thomas M.
2003-06-01
We describe a platform for the implementation of a system for content-based image retrieval in medical applications (IRMA). To cope with the constantly evolving medical knowledge, the platform offers a flexible feature model to store and uniformly access all feature types required within a multi-step retrieval approach. A structured generation history for each feature allows the automatic identification and re-use of already computed features. The platform uses directed acyclic graphs composed of processing steps and control elements to model arbitrary retrieval algorithms. This visually intuitive, data-flow oriented representation vastly improves the interdisciplinary communication between computer scientists and physicians during the development of new retrieval algorithms. The execution of the graphs is fully automated within the platform. Each processing step is modeled as a feature transformation. Due to a high degree of system transparency, both the implementation and the evaluation of retrieval algorithms are accelerated significantly. The platform uses a client-server architecture consisting of a central database, a central job scheduler, instances of a daemon service, and clients which embed user-implemented feature transformations. Automatically distributed batch processing and distributed feature storage enable the cost-efficient use of an existing workstation cluster.
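The DAG execution with feature re-use described above can be sketched in a few lines: each step is a feature transformation keyed by its dependencies, and a cache ensures already computed features are identified and reused. This is an illustrative toy, not the IRMA platform's API:

```python
def run_dag(steps, deps):
    """Execute a DAG of processing steps in dependency order, caching
    each feature so shared prefixes are computed exactly once.

    steps : name -> function(dict of dependency outputs) -> feature value
    deps  : name -> list of dependency names (must be acyclic)"""
    cache = {}
    def compute(name):
        if name not in cache:
            inputs = {d: compute(d) for d in deps[name]}
            cache[name] = steps[name](inputs)
        return cache[name]
    for name in steps:
        compute(name)
    return cache
```

If two retrieval algorithms share a preprocessing step (say, a common 'load' stage feeding both a 'blur' and a 'hist' feature), the shared step runs once and both consumers read its cached output, which is the cost saving the generation history provides.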
NASA Astrophysics Data System (ADS)
Massar, Melody L.; Bhagavatula, Ramamurthy; Ozolek, John A.; Castro, Carlos A.; Fickus, Matthew; Kovacevic, Jelena
2011-09-01
We present the current state of our work on a mathematical framework for identification and delineation in histopathology images: local histograms and occlusion models. Local histograms are histograms computed over defined spatial neighborhoods whose purpose is to characterize an image locally. This unit of description is augmented by our occlusion models, which describe a methodology for image formation. In the context of this image formation model, the power of local histograms with respect to appropriate families of images is shown through various proven statements about expected performance. We conclude by presenting a preliminary study demonstrating the power of the framework in the context of histopathology image classification tasks that, while differing greatly in application, both originate from what is considered an appropriate class of images for this framework.
Imaging fault zones using 3D seismic image processing techniques
NASA Astrophysics Data System (ADS)
Iacopini, David; Butler, Rob; Purves, Steve
2013-04-01
Significant advances in the structural analysis of deep-water structures, salt tectonics, and extensional rift basins come from descriptions of fault-system geometries imaged in 3D seismic data. However, even where seismic data are excellent, in most cases the trajectory of thrust faults is highly conjectural, and significant uncertainty remains as to the patterns of deformation that develop between the main fault segments, and even as to the fault architectures themselves. Moreover, structural interpretations that conventionally define faults by breaks and apparent offsets of seismic reflectors are commonly conditioned by a narrow range of theoretical models of fault behavior. For example, almost all interpretations of thrust geometries on seismic data rely on theoretical "end-member" behaviors, while concepts such as strain localization or multilayer mechanics are simply avoided. Yet analogue outcrop studies confirm that such descriptions are commonly unsatisfactory and incomplete. In order to fill these gaps and improve the 3D visualization of deformation in the subsurface, seismic attribute methods are developed here in conjunction with conventional mapping of reflector amplitudes (Marfurt & Chopra, 2007). These signal processing techniques, recently developed and applied especially by the oil industry, use variations in the amplitude and phase of the seismic wavelet. These seismic attributes improve signal interpretation and are calculated and applied to the entire 3D seismic dataset. In this contribution we show 3D seismic examples of fault structures from gravity-driven deep-water thrust structures and extensional basin systems to indicate how 3D seismic image processing methods can not only build better geometrical interpretations of the faults but also begin to map both strain and damage through the amplitude/phase properties of the seismic signal. This is done by quantifying and delineating short-range anomalies in the intensity of reflector amplitudes.
Multispectral image processing: the nature factor
NASA Astrophysics Data System (ADS)
Watkins, Wendell R.
1998-09-01
The images processed by our brain represent our window into the world. For some animals this window is derived from a single eye; for others, including humans, two eyes provide stereo imagery; for others, like the black widow spider, several eyes are used (8 eyes); and some insects, like the common housefly, utilize thousands of eyes (ommatidia). Still other animals, like the bat and dolphin, have eyes for regular vision but employ acoustic sonar for seeing where their regular eyes don't work, such as in pitch-black caves or turbid water. Of course, other animals have adapted to dark environments by bringing along their own lighting, such as the firefly and several creatures from the depths of the ocean floor. Animal vision is truly varied and has developed over millennia in many remarkable ways. We have learned a lot about vision processes by studying these animal systems and can still learn even more.
Bone feature analysis using image processing techniques.
Liu, Z Q; Austin, T; Thomas, C D; Clement, J G
1996-01-01
In order to establish the correlation between bone structure and age, and to obtain information about age-related bone changes, it is necessary to study microstructural features of human bone. Traditionally, in bone biology and forensic science, the analysis of bone cross-sections has been carried out manually. Such a process is known to be slow, inefficient, and prone to human error; consequently, the results obtained so far have been unreliable. In this paper we present a new approach to quantitative analysis of cross-sections of human bones using digital image processing techniques. We demonstrate that such a system is able to extract various bone features consistently and is capable of providing more reliable data and statistics for bones. Consequently, we will be able to correlate features of bone microstructure with age and possibly also with age-related bone diseases such as osteoporosis. The development of knowledge-based computer vision systems for automated bone image analysis can now be considered feasible.
Signal processing for imaging and mapping ladar
NASA Astrophysics Data System (ADS)
Grönwall, Christina; Tolt, Gustav
2011-11-01
The new generation of laser-based FLASH 3D imaging sensors enables data collection at video rate. This opens up real-time data analysis but also sets demands on the signal processing. In this paper the possibilities and challenges of this new data type are discussed. The commonly used focal plane array based detectors produce range estimates that vary with the target's surface reflectance and target range, and our experience is that the built-in signal processing may not compensate fully for that. We propose a simple adjustment that can be used even if some sensor parameters are not known. The cost of the instantaneous image collection, compared to scanning laser radar systems, is lower range accuracy. By gathering range information from several frames, the geometrical information of the target can be obtained. We also present an approach in which range data are used to remove foreground clutter in front of a target. Further, we illustrate how range data enable target classification in near real-time and show that the results can be improved if several frames are co-registered. Examples using data from forest and maritime scenes are shown.
Boix, Macarena; Cantó, Begoña
2013-04-01
Accurate image segmentation is used in medical diagnosis since this technique is a noninvasive pre-processing step for biomedical treatment. In this work we present an efficient segmentation method for medical image analysis. In particular, blood cells can be segmented with this method. To that end, we combine the wavelet transform with morphological operations. Moreover, wavelet thresholding is used to eliminate noise and prepare the image for suitable segmentation. In wavelet denoising we determine the best wavelet, namely the one that yields a segmentation with the largest area in the cell. We study different wavelet families and conclude that the wavelet db1 is the best; it can serve for future work on blood pathologies. The proposed method generates good results when applied to several images. Finally, the proposed algorithm, implemented in the MATLAB environment, is verified on selected blood cell images.
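The denoise-then-segment idea described above can be sketched in a few lines. Since db1 is the Haar wavelet, a single-level 2D transform is easy to write directly; the following is an illustrative numpy stand-in for the authors' MATLAB implementation, and the soft-threshold and segmentation levels are assumed values, not those of the paper.

```python
import numpy as np

def haar2d(img):
    """One-level 2D Haar (db1) decomposition of an even-sized image."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row details
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0      # approximation
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0      # horizontal detail
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0      # vertical detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0      # diagonal detail
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    a[:, 0::2] = ll + lh
    a[:, 1::2] = ll - lh
    d = np.empty_like(a)
    d[:, 0::2] = hl + hh
    d[:, 1::2] = hl - hh
    img = np.empty((a.shape[0] * 2, a.shape[1]))
    img[0::2, :] = a + d
    img[1::2, :] = a - d
    return img

def soft(x, t):
    """Soft thresholding of wavelet detail coefficients."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def denoise_and_segment(img, thresh=0.1, level=0.5):
    """Wavelet-threshold denoising followed by a simple binary cell mask."""
    ll, lh, hl, hh = haar2d(img)
    den = ihaar2d(ll, soft(lh, thresh), soft(hl, thresh), soft(hh, thresh))
    return den, den > level
```

In a fuller pipeline, the binary mask would then be cleaned with morphological opening/closing before measuring cell areas.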
MISR Browse Images: Cold Land Processes Experiment (CLPX)
Atmospheric Science Data Center
2013-04-02
MISR Browse Images: Cold Land Processes Experiment (CLPX) These MISR Browse ... series of images over the region observed during the NASA Cold Land Processes Experiment (CLPX). CLPX involved ground, airborne, and ...
Vanbinst, K; De Smedt, B
2016-01-01
This contribution reviewed the available evidence on the domain-specific and domain-general neurocognitive determinants of children's arithmetic development, other than nonsymbolic numerical magnitude processing, which might have been overemphasized as a core factor of individual differences in mathematics and dyscalculia. We focused on symbolic numerical magnitude processing, working memory, and phonological processing, as these determinants have been most researched and their roles in arithmetic can be predicted against the background of brain imaging data. Our review indicates that symbolic numerical magnitude processing is a major determinant of individual differences in arithmetic. Working memory, particularly the central executive, also plays a role in learning arithmetic, but its influence appears to be dependent on the learning stage and experience of children. The available evidence on phonological processing suggests that it plays a more subtle role in children's acquisition of arithmetic facts. Future longitudinal studies should investigate these factors in concert to understand their relative contribution as well as their mediating and moderating roles in children's arithmetic development.
Sasanguie, Delphine; Göbel, Silke M; Moll, Kristina; Smets, Karolien; Reynvoet, Bert
2013-03-01
In this study, the performance of typically developing 6- to 8-year-old children on an approximate number discrimination task, a symbolic comparison task, and a symbolic and nonsymbolic number line estimation task was examined. For the first time, children's performances on these basic cognitive number processing tasks were explicitly contrasted to investigate which of them is the best predictor of their future mathematical abilities. Math achievement was measured with a timed arithmetic test and with a general curriculum-based math test to address the additional question of whether the predictive association between the basic numerical abilities and mathematics achievement is dependent on which math test is used. Results revealed that performance on both mathematics achievement tests was best predicted by how well children compared digits. In addition, an association between performance on the symbolic number line estimation task and math achievement scores for the general curriculum-based math test measuring a broader spectrum of skills was found. Together, these results emphasize the importance of learning experiences with symbols for later math abilities.
Research on pavement crack recognition methods based on image processing
NASA Astrophysics Data System (ADS)
Cai, Yingchun; Zhang, Yamin
2011-06-01
In order to briefly review and analyze pavement crack recognition methods and identify the existing problems in pavement crack image processing, popular crack image processing methods such as the neural network method, the morphology method, the fuzzy logic method and traditional image processing are discussed, and some effective solutions to those problems are presented.
ATM experiment S-056 image processing requirements definition
NASA Technical Reports Server (NTRS)
1972-01-01
A plan is presented for satisfying the image data processing needs of the S-056 Apollo Telescope Mount experiment. The report is based on information gathered from related technical publications, consultation with numerous image processing experts, and experience gained in working on related image processing tasks over a two-year period.
Mathematical modelling to predict the roughness average in micro milling process
NASA Astrophysics Data System (ADS)
Burlacu, C.; Iordan, O.
2016-08-01
Surface roughness plays a very important role in the micro milling process, and in any machining process, because it indicates the state of the machined surface. Many surface roughness parameters can be used to analyse a surface, but the most common is the average roughness (Ra). This paper presents the experimental results obtained in micro milling of C45W steel and the ways to determine the Ra parameter with respect to the working conditions. The chemical characteristics of the material were determined from a spectral analysis; the chemical composition was measured at one point and at two points, and reported graphically and in tabular form. A Surtronic 3+ profilometer was used to examine the surface roughness profiles; the effect of the independent parameters can be investigated to obtain a proper relationship between the Ra parameter and the process variables. The mathematical model was developed using the multiple regression method with four independent variables D, v, ap, fz; the analysis was done using the statistical software SPSS. ANOVA analysis of variance and the F-test were used to justify the accuracy of the mathematical model. The multiple regression method was used to determine the correlation between a criterion variable and the predictor variables. The prediction model can be used for micro milling process optimization.
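The multiple-regression step above, fitting Ra against the four independent variables D, v, ap and fz, can be sketched with ordinary least squares. The data and coefficients below are synthetic stand-ins, not the paper's experimental values, and a linear model in the raw variables is an illustrative assumption.

```python
import numpy as np

# Hedged sketch: fit Ra = b0 + b1*D + b2*v + b3*ap + b4*fz by ordinary
# least squares on synthetic data with known "true" coefficients.
rng = np.random.default_rng(1)
n = 50
D, v, ap, fz = (rng.uniform(1.0, 10.0, n) for _ in range(4))

true_b = np.array([0.5, 0.2, -0.1, 0.3, 0.8])       # assumed, for the demo
X = np.column_stack([np.ones(n), D, v, ap, fz])      # design matrix
Ra = X @ true_b + 0.01 * rng.standard_normal(n)      # "measured" roughness

b, *_ = np.linalg.lstsq(X, Ra, rcond=None)           # estimated coefficients
```

An ANOVA F-test on the fitted model, as in the paper, would then compare the regression sum of squares against the residual sum of squares.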
NASA Astrophysics Data System (ADS)
Evtushenko, V. F.; Myshlyaev, L. P.; Makarov, G. V.; Ivushkin, K. A.; Burkova, E. V.
2016-10-01
The structure of multi-variant physical and mathematical models of control system is offered as well as its application for adjustment of automatic control system (ACS) of production facilities on the example of coal processing plant.
Effects of image processing on the detective quantum efficiency
NASA Astrophysics Data System (ADS)
Park, Hye-Suk; Kim, Hee-Joung; Cho, Hyo-Min; Lee, Chang-Lae; Lee, Seung-Wan; Choi, Yu-Na
2010-04-01
Digital radiography has gained popularity in many areas of clinical practice. This transition brings interest in advancing the methodologies for image quality characterization. However, as the methodologies for such characterizations have not been standardized, the results of these studies cannot be directly compared. The primary objective of this study was to standardize methodologies for image quality characterization. The secondary objective was to evaluate factors affecting the modulation transfer function (MTF), noise power spectrum (NPS), and detective quantum efficiency (DQE) according to the image processing algorithm. Image performance parameters such as MTF, NPS, and DQE were evaluated using the International Electrotechnical Commission (IEC 62220-1)-defined RQA5 radiographic techniques. Computed radiography (CR) images of a hand posterior-anterior (PA) view for measuring the signal-to-noise ratio (SNR), a slit image for measuring the MTF, and a white image for measuring the NPS were obtained, and various Multi-Scale Image Contrast Amplification (MUSICA) parameters were applied to each of the acquired images. The results show that all of the modified images considerably influenced the evaluation of SNR, MTF, NPS, and DQE. Images modified by the post-processing had higher DQE than the MUSICA=0 image. This suggests that MUSICA values, as a post-processing step, affect the image when it is evaluated for image quality. In conclusion, the control parameters of image processing should be accounted for when characterizing image quality in a consistent way. The results of this study could serve as a baseline to evaluate imaging systems and their imaging characteristics by measuring MTF, NPS, and DQE.
Quantification technology study on flaws in steam-filled pipelines based on image processing
NASA Astrophysics Data System (ADS)
Sun, Lina; Yuan, Peixin
2009-07-01
Starting from the development of an applied detection system for gas transmission pipelines, a set of X-ray image processing methods and quantitative pipeline flaw evaluation methods is proposed. Defective and non-defective strings and rows in the gray image were extracted and an oscillogram was obtained; defects can be distinguished by dividing the two gray images for contrast. From the gray values of defects with different thicknesses, a gray-level versus depth curve is established. Exponential and polynomial fitting are used to obtain a mathematical model of the attenuation of the beam as it penetrates the pipeline, from which the flaw depth dimension is obtained. Tests were performed on PPR pipe with simulated hole and crack flaws, using a 135 kV X-ray source. Test results show that the X-ray image processing method, which meets the needs of efficient flaw detection and provides a quality safeguard for thick oil recovery, can be used successfully in detecting corrosion of insulated pipe.
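The exponential-fitting step above follows the Beer-Lambert form I = I0 * exp(-mu * t), which becomes linear after taking logarithms. The sketch below recovers the attenuation coefficient from gray levels at known thicknesses and then inverts the model to estimate a depth; the numeric values (I0, mu, the target gray level) are assumptions for illustration, not the paper's calibration data.

```python
import numpy as np

# Illustrative attenuation fit: ln(I) = ln(I0) - mu * t is linear in t,
# so a degree-1 polyfit recovers both parameters.
t = np.linspace(1.0, 10.0, 20)       # wall thickness, mm (assumed grid)
I0, mu = 200.0, 0.35                 # assumed source gray level and 1/mm
I = I0 * np.exp(-mu * t)             # noiseless "measured" gray levels

slope, intercept = np.polyfit(t, np.log(I), 1)
mu_fit, I0_fit = -slope, np.exp(intercept)

# Invert the model: thickness that would produce a gray level of 150.
depth = np.log(I0_fit / 150.0) / mu_fit
```

With real, noisy gray levels the same log-linear fit (or the paper's polynomial variant) would be applied to averaged profiles rather than single pixels.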
A process-based mathematical model on methane production with emission indices for control.
Chakraborty, A; Bhattacharaya, D K
2006-08-01
In this paper, a process-based mathematical model is developed for the production of methane through biodegradation. It is a three-dimensional model given by ordinary differential equations. The results of the analysis of the model are interpreted through three emission indices, which are introduced for the first time. The estimation of either one or all of them can indicate the feasibility of the equilibrium and the long-term emission tendency of methane. The vulnerability of the methane production process with respect to soil temperature effects in the methanogenic phase has been discussed, and a feasible condition within a specified temperature range has been defined for the nonvulnerability of the methane production process; it has also been shown that under the same condition, the zero-emission process of methane will be nonvulnerable with respect to the soil temperature effects in the methanogenic phase. Lastly, a condition for zero emission of methane is also obtained and interpreted through the emission indices.
ERIC Educational Resources Information Center
Klein, Pnina S.; Adi-Japha, Esther; Hakak-Benizri, Simcha
2010-01-01
The objective of this study was to examine gender differences in the relations between verbal, spatial, mathematics, and teacher-child mathematics interaction variables. Kindergarten children (N = 80) were videotaped playing games that require mathematical reasoning in the presence of their teachers. The children's mathematics, spatial, and verbal…
Methods for processing and imaging marsh foraminifera
Dreher, Chandra A.; Flocks, James G.
2011-01-01
This study is part of a larger U.S. Geological Survey (USGS) project to characterize the physical conditions of wetlands in southwestern Louisiana. Within these wetlands, groups of benthic foraminifera-shelled amoeboid protists living near or on the sea floor-can be used as agents to measure land subsidence, relative sea-level rise, and storm impact. In the Mississippi River Delta region, intertidal-marsh foraminiferal assemblages and biofacies were established in studies that pre-date the 1970s, with a very limited number of more recent studies. This fact sheet outlines this project's improved methods, handling, and modified preparations for the use of Scanning Electron Microscope (SEM) imaging of these foraminifera. The objective is to identify marsh foraminifera to the taxonomic species level by using improved processing methods and SEM imaging for morphological characterization in order to evaluate changes in distribution and frequency relative to other environmental variables. The majority of benthic marsh foraminifera consists of agglutinated forms, which can be more delicate than porcelaneous forms. Agglutinated tests (shells) are made of particles such as sand grains or silt and clay material, whereas porcelaneous tests consist of calcite.
Intelligent elevator management system using image processing
NASA Astrophysics Data System (ADS)
Narayanan, H. Sai; Karunamurthy, Vignesh; Kumar, R. Barath
2015-03-01
In the modern era, the increase in the number of shopping malls and industrial buildings has led to an exponential increase in the usage of elevator systems. Thus there is an increased need for an effective control system to manage the elevator system. This paper is aimed at introducing an effective method to control the movement of the elevators by considering various cases wherein the location of each person is found and the elevators are controlled based on conditions like load, proximity, etc. This method continuously monitors the weight limit of each elevator while also making use of image processing to determine the number of persons waiting for an elevator on the respective floors. The Canny edge detection technique is used to find the number of persons waiting for an elevator. Hence the algorithm takes many cases into account and locates the correct elevator to serve the respective persons waiting on different floors.
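The person-counting step can be sketched as thresholding a floor-camera frame and counting connected foreground regions. This is a simplified stand-in: the paper uses Canny edge detection, whereas the sketch below substitutes a plain intensity threshold and a stack-based flood fill, and the synthetic "frame" is an assumption.

```python
import numpy as np

def count_blobs(mask):
    """Count 4-connected foreground regions with an explicit-stack flood fill."""
    mask = mask.copy()
    h, w = mask.shape
    count = 0
    for i in range(h):
        for j in range(w):
            if mask[i, j]:
                count += 1
                stack = [(i, j)]
                while stack:          # erase the whole region
                    y, x = stack.pop()
                    if 0 <= y < h and 0 <= x < w and mask[y, x]:
                        mask[y, x] = False
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return count

# Two "persons" as bright blobs on a dark synthetic floor image.
frame = np.zeros((10, 10))
frame[1:4, 1:4] = 1.0
frame[6:9, 5:9] = 1.0
waiting = count_blobs(frame > 0.5)
```

In the real system the blob count per floor, together with each car's load reading, would drive the dispatch decision.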
Zhang, Yudong; Peng, Bo; Wang, Shuihua; Liang, Yu-Xiang; Yang, Jiquan; So, Kwok-Fai; Yuan, Ti-Fei
2016-01-01
Microglia are the mononuclear phagocytes with various functions in the central nervous system, and the morphologies of microglia imply their different stages and functions. In the optic nerve transection model of the retina, the retrograde degeneration of retinal ganglion cells induces microglial activation into a unique morphology termed rod microglia. A few studies have described rod microglia in the cortex and retina; however, the spatial characteristics of rod microglia are not fully understood. In this study, we built a mathematical model to characterize the spatial traits of rod microglia. In addition, we developed a Matlab-based image processing pipeline that consists of log enhancement, image segmentation, mathematical-morphology-based cell detection, area calculation and angle analysis. This computer program provides researchers a powerful tool to quickly analyze the spatial traits of rod microglia. PMID:26888347
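Two stages of the pipeline above, log enhancement and the area/angle analysis, can be sketched compactly. The moment-based orientation estimate is a common way to get a rod's principal-axis angle, offered here as an assumption about the analysis rather than the authors' exact Matlab code.

```python
import numpy as np

def log_enhance(img):
    """Log enhancement: compress bright regions, lift dim structure, scale to [0, 1]."""
    return np.log1p(img) / np.log1p(img.max())

def blob_area_and_angle(mask):
    """Area (pixel count) and principal-axis angle (radians) of one binary blob,
    from central second-order image moments."""
    ys, xs = np.nonzero(mask)
    area = ys.size
    yc, xc = ys.mean(), xs.mean()
    mu20 = ((xs - xc) ** 2).mean()
    mu02 = ((ys - yc) ** 2).mean()
    mu11 = ((xs - xc) * (ys - yc)).mean()
    angle = 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)
    return area, angle
```

A horizontal rod gives an angle near 0, a vertical rod near pi/2, which is the kind of measurement the angle-analysis stage needs to characterize rod microglia alignment.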
Hahn, Y.B. ); Sohn, H.Y. )
1990-12-01
This paper reports on a mathematical model developed to describe the rate processes in an axisymmetric copper flash smelting furnace shaft. A particular feature of the model is the incorporation of the four-flux model to describe the radiative heat transfer by combining the absorbing, emitting, and anisotropic scattering phenomena. The importance of various subprocesses of the radiative heat transfer in a flash smelting furnace has been studied. Model predictions showed that the radiation from the furnace walls and between the particles and the surrounding is the dominant mode of heat transfer in a flash smelting furnace.
The mathematical modeling of rapid solidification processing. Ph.D. Thesis. Final Report
NASA Technical Reports Server (NTRS)
Gutierrez-Miravete, E.
1986-01-01
The detailed formulation of and the results obtained from a continuum mechanics-based mathematical model of the planar flow melt spinning (PFMS) rapid solidification system are presented and discussed. The numerical algorithm proposed is capable of computing the cooling and freezing rates as well as the fluid flow and capillary phenomena which take place inside the molten puddle formed in the PFMS process. The FORTRAN listings of some of the most useful computer programs and a collection of appendices describing the basic equations used for the modeling are included.
Image processing and products for the Magellan mission to Venus
NASA Technical Reports Server (NTRS)
Clark, Jerry; Alexander, Doug; Andres, Paul; Lewicki, Scott; Mcauley, Myche
1992-01-01
The Magellan mission to Venus is providing planetary scientists with massive amounts of new data about the surface geology of Venus. Digital image processing is an integral part of the ground data system that provides data products to the investigators. The mosaicking of synthetic aperture radar (SAR) image data from the spacecraft is being performed at JPL's Multimission Image Processing Laboratory (MIPL). MIPL hosts and supports the Image Data Processing Subsystem (IDPS), which was developed in a VAXcluster environment of hardware and software that includes optical disk jukeboxes and the TAE-VICAR (Transportable Applications Executive-Video Image Communication and Retrieval) system. The IDPS is being used by processing analysts of the Image Data Processing Team to produce the Magellan image data products. Various aspects of the image processing procedure are discussed.
ERIC Educational Resources Information Center
Barak, Moshe; Asad, Khaled
2012-01-01
Background: This research focused on the development, implementation and evaluation of a course on image-processing principles aimed at middle-school students. Purpose: The overarching purpose of the study was that of integrating the learning of subjects in science, technology, engineering and mathematics (STEM), and linking the learning of these…
Trayanova, Natalia A
2014-01-01
Atrial fibrillation (AF) is the most common sustained arrhythmia in humans. The mechanisms that govern AF initiation and persistence are highly complex, of dynamic nature, and involve interactions across multiple temporal and spatial scales in the atria. This article aims to review the mathematical modeling and computer simulation approaches to understanding AF mechanisms and aiding in its management. Various atrial modeling approaches are presented, with descriptions of the methodological basis and advancements in both lower-dimensional and realistic geometry models. A review of the most significant mechanistic insights made by atrial simulations is provided. The article showcases the contributions that atrial modeling and simulation have made not only to our understanding of the pathophysiology of atrial arrhythmias, but also to the development of AF management approaches. A summary of the future developments envisioned for the field of atrial simulation and modeling is also presented. The review contends that computational models of the atria assembled with data from clinical imaging modalities that incorporate electrophysiological and structural remodeling could become a first line of screening for new AF therapies and approaches, new diagnostic developments, and new methods for arrhythmia prevention. PMID:24763468
Spot restoration for GPR image post-processing
Paglieroni, David W; Beer, N. Reginald
2014-05-20
A method and system for detecting the presence of subsurface objects within a medium is provided. In some embodiments, the imaging and detection system operates in a multistatic mode to collect radar return signals generated by an array of transceiver antenna pairs that is positioned across the surface and that travels down the surface. The imaging and detection system pre-processes the return signal to suppress certain undesirable effects. The imaging and detection system then generates synthetic aperture radar images from real aperture radar images generated from the pre-processed return signal. The imaging and detection system then post-processes the synthetic aperture radar images to improve detection of subsurface objects. The imaging and detection system identifies peaks in the energy levels of the post-processed image frame, which indicates the presence of a subsurface object.
Image and Signal Processing LISP Environment (ISLE)
Azevedo, S.G.; Fitch, J.P.; Johnson, R.R.; Lager, D.L.; Searfus, R.M.
1987-10-02
We have developed a multidimensional signal processing software system called the Image and Signal LISP Environment (ISLE). It is a hybrid software system, in that it consists of a LISP interpreter (used as the command processor) combined with FORTRAN, C, or LISP functions (used as the processing and display routines). Learning the syntax for ISLE is relatively simple and has the additional benefit of introducing a subset of commands from the general-purpose programming language, Common LISP. Because Common LISP is a well-documented and complete language, users do not need to depend exclusively on system developers for a description of the features of the command language, nor do the developers need to generate a command parser that exhaustively satisfies all the user requirements. Perhaps the major reason for selecting the LISP environment is that user-written code can be added to the environment through a ''foreign function'' interface without recompiling the entire system. The ability to perform fast prototyping of new algorithms is an important feature of this environment. As currently implemented, ISLE requires a Sun color or monochrome workstation and a license to run Franz Extended Common LISP. 16 refs., 4 figs.
Image processing techniques for laser propagation through atmospheric turbulence
NASA Astrophysics Data System (ADS)
Belichki, Sara B.; Splitter, Landon J.; Andrews, Larry C.; Phillips, Ronald L.; Coffaro, Joseph T.; Panich, Michael G.
2014-06-01
In order to better understand laser beam propagation through the analysis of fluctuations in scintillation data, images from a 30 frame per second monochrome camera are utilized. Scintillation is an effect of atmospheric turbulence that is known to disrupt and alter the intensity and formation of a laser signal as it propagates through the atmosphere. To model and understand this phenomenon, recorded video of a laser upon a target screen is inspected to determine how much the atmospheric turbulence has disrupted the laser signal as it propagated over a set distance. The techniques of data processing outlined in this paper move toward a software-based approach to determining the effects of propagation and detection of a laser based on the visual fluctuations caused by the scintillation effect. With the aid of such visual models, this paper examines the idea of implementing mathematical models via software that is then validated by the video data gathered at Kennedy Space Center.
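A basic quantity extracted from such frame sequences is the scintillation index, sigma_I^2 = <I^2>/<I>^2 - 1. The sketch below applies the standard definition per pixel over time and then averages spatially; that aggregation choice is an illustrative assumption, not necessarily the authors' exact procedure.

```python
import numpy as np

def scintillation_index(frames):
    """Temporal scintillation index sigma_I^2 = <I^2>/<I>^2 - 1 per pixel,
    averaged over the image; frames has shape (n_frames, height, width)."""
    I = np.asarray(frames, dtype=float)
    mean_I = I.mean(axis=0)
    return ((I ** 2).mean(axis=0) / mean_I ** 2 - 1.0).mean()
```

A perfectly steady beam gives an index of zero; stronger turbulence-induced intensity fluctuations raise it, which is what the video analysis is meant to quantify.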
Image Processing of Vega-Tv Observations
NASA Astrophysics Data System (ADS)
Möhlmann, D.; Danz, M.; Elter, G.; Mangold, T.; Rubbert, B.; Weidlich, U.; Lorenz, H.; Richter, G.
1986-12-01
Different algorithms used to identify real structures in the near-nucleus TV images of the VEGA spacecraft are described. They relate mainly to image restoration, noise reduction and different methods of texture analysis. The resulting images, showing first indications of the structure of the surface of P/Halley, are discussed briefly.
Gomez, Alice; Piazza, Manuela; Jobert, Antoinette; Dehaene-Lambertz, Ghislaine; Dehaene, Stanislas; Huron, Caroline
2015-01-01
At school, children with Developmental Coordination Disorder (DCD) struggle with mathematics. However, little attention has been paid to their numerical cognition abilities. The goal of this study was to better understand the cognitive basis for mathematical difficulties in children with DCD. Twenty 7- to 10-year-old children with DCD were compared to twenty age-matched typically developing children using dot and digit comparison tasks to assess symbolic and nonsymbolic number processing, and in a task of single-digit additions. Results showed that children with DCD had lower performance in nonsymbolic and symbolic number comparison tasks than typically developing children. They were also slower to solve simple addition problems. Moreover, correlational analyses showed that children with DCD who experienced greater impairments in the nonsymbolic task also performed more poorly in the symbolic tasks. These findings suggest that DCD impairs both nonsymbolic and symbolic number processing. A systematic assessment of numerical cognition in children with DCD could provide a more comprehensive picture of their deficits and help in proposing specific remediation. PMID:26188690
NASA Astrophysics Data System (ADS)
Canelas, Ricardo; Heleno, Sandra; Pestana, Rita; Ferreira, Rui M. L.
2014-05-01
The objective of the present work is to devise a methodology to validate 2DH shallow-water models suitable for simulating flow hydrodynamics and channel morphology. For this purpose, a 2DH mathematical model, assembled at CEHIDRO, IST, is employed to model Tagus river floods over a 70 km reach, and Synthetic Aperture Radar (SAR) images are collected to retrieve planar inundation extents. The model is suited for highly unsteady discontinuous flows over complex, time-evolving geometries, employing a finite-volume discretization scheme based on a flux-splitting technique incorporating a revised version of the Roe Riemann solver. Novel closure terms for the non-equilibrium sediment transport model are included. New boundary conditions are employed, based on the Riemann variables associated with the outgoing characteristic fields, coping with the provided hydrographs in a mathematically coherent manner. A high-resolution Digital Elevation Model (DEM) is used and levee structures are considered as fully erodible elements. Spatially heterogeneous roughness characteristics are derived from land-use databases such as CORINE Land Cover 2006. SAR satellite imagery of the floods is available and is used to validate the simulation results, with particular emphasis on the 2000/2001 flood. The delimited areas from the satellite and simulations are superimposed. The quality of the adjustment depends on the calibration of roughness coefficients and on the spatial discretization of small structures with lengths on the order of the grid spacing. Flow depths and registered discharges are recovered from the simulation and compared with data from a measuring station in the domain, with the comparison revealing remarkably high accuracy, both in terms of amplitude and phase. Further inclusion of topographical detail should improve the comparison of flood extents with satellite data. The validated model was then employed to simulate 100-year floods in the same reach.
Post-digital image processing based on microlens array
NASA Astrophysics Data System (ADS)
Shi, Chaiyuan; Xu, Feng
2014-10-01
Benefiting from attractive features such as compact volume and thin, lightweight construction, imaging systems based on microlens arrays have become an active area of research. However, current imaging systems based on microlens arrays have insufficient imaging quality, so they cannot meet the practical requirements of most applications. As a result, post-digital image processing for image reconstruction from the low-resolution sub-image sequence becomes particularly important. In general, post-digital image processing mainly includes two parts: accurate estimation of the motion parameters between the sub-images in the sequence, and reconstruction of the high-resolution image. In this paper, given the fact that preprocessing of the unit images can make the edges of the reconstructed high-resolution image clearer, the low-resolution images are preprocessed before the post-digital image processing. Then, after processing with the pixel rearrange method, a high-resolution image is obtained. From the result, we find that the edges of the reconstructed high-resolution image are clearer than without preprocessing.
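The pixel rearrange method can be sketched for the ideal case of four sub-images offset by half the high-resolution pixel pitch: each low-resolution frame supplies one phase of a 2x2 interlace. This assumes the motion (shift) estimation step has already been done and the shifts are exact integer offsets on the high-resolution grid, which real microlens data will only approximate.

```python
import numpy as np

def pixel_rearrange(frames):
    """Interleave four 2x2-shifted low-res frames into one high-res image.

    frames[(dy, dx)] is the low-res image whose sampling grid is offset by
    (dy, dx) pixels on the high-res grid, for dy, dx in {0, 1}.
    """
    h, w = frames[(0, 0)].shape
    hi = np.empty((2 * h, 2 * w))
    for (dy, dx), f in frames.items():
        hi[dy::2, dx::2] = f
    return hi
```

With exact half-pixel shifts the rearrangement is lossless, which is why the method's quality in practice hinges on the accuracy of the estimated motion parameters.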
NASA Astrophysics Data System (ADS)
Sadrtdinov, Almaz R.; Esmagilova, Liliya M.; Saldaev, Vladimir A.; Sattarova, Zulfiya G.; Mokhovikov, Alexey A.
2016-08-01
The paper describes the process of thermochemical processing of wood waste into dimethyl ether. The physical picture of the waste wood recycling process was compiled and studied, and a mathematical model in the form of differential and algebraic equations with initial and boundary conditions was developed on its basis. The mathematical model makes it possible to determine the optimum operating parameters of the synthesis gas production process, suitable for the catalytic synthesis of dimethyl ether, and to calculate the basic design parameters of the equipment flowsheet.
Mathematical modeling of the process of filling a mold during injection molding of ceramic products
NASA Astrophysics Data System (ADS)
Kulkov, S. N.; Korobenkov, M. V.; Bragin, N. A.
2015-10-01
Prediction of the filling of a mold in injection molding of ceramic products is of great importance, because the strength of the final product is directly related to the presence of voids in the molding; early prediction makes it possible to identify inaccuracies in the mold prior to manufacturing. The calculations were performed with the software package Fluent, in a mathematical-modeling formulation of the turbulent hydrodynamic process of filling a predetermined volume with a viscous liquid. The model used to simulate mold filling evaluated the influence of the density and viscosity of the feedstock, and of the injection pressure, on the mold filling process in order to predict the formation of voids in areas caused by defects in the mold geometry.
NASA Astrophysics Data System (ADS)
Wang, Jiaoyang; Wang, Lin; Yang, Ying; Gong, Rui; Shao, Xiaopeng; Liang, Chao; Xu, Jun
2016-05-01
In this paper, an integral design that combines the optical system with image processing is introduced to obtain high-resolution images, and its performance is evaluated and demonstrated. Traditional imaging methods often separate the two technical procedures of optical system design and image processing, resulting in failures of efficient cooperation between the optical and digital elements. Therefore, an innovative approach is presented that combines the merit function during optical design with the constraint conditions of the image processing algorithms. Specifically, an optical imaging system with low resolution is designed to collect the image signals that are indispensable for image processing, while the ultimate goal is to obtain high-resolution images from the final system. In order to optimize the global performance, the optimization function of the ZEMAX software is utilized and the number of optimization cycles is controlled. Then the Wiener filter algorithm is adopted to process the simulated images, and the mean squared error (MSE) is taken as the evaluation criterion. The results show that, although the optical figures of merit for the optical imaging system are not the best, it can provide image signals that are more suitable for image processing. In conclusion, the integral design of the optical system and image processing can find the overall optimal solution that is missed by traditional design methods. Especially when designing a complex optical system, this integral design strategy has obvious advantages in simplifying structure and reducing cost, as well as in obtaining high-resolution images, and it has a promising perspective for industrial application.
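The Wiener restoration step with an MSE criterion can be sketched in the frequency domain: given the system's point-spread function, the filter H*/( |H|^2 + K ) is applied to the degraded image. The box PSF, image and regularization constant K below are assumed demo values, not the paper's designed optics.

```python
import numpy as np

def wiener_restore(blurred, psf_full, k=1e-3):
    """Frequency-domain Wiener filter F_hat = H* G / (|H|^2 + k).
    psf_full is a centered, image-sized PSF array."""
    H = np.fft.fft2(np.fft.ifftshift(psf_full))
    G = np.fft.fft2(blurred)
    return np.real(np.fft.ifft2(np.conj(H) * G / (np.abs(H) ** 2 + k)))

# Demo: blur a synthetic scene with a centered 3x3 box PSF, then restore.
img = np.zeros((16, 16))
img[4:12, 4:12] = 1.0
psf_full = np.zeros((16, 16))
psf_full[7:10, 7:10] = 1.0 / 9.0                     # 3x3 box, centered
H = np.fft.fft2(np.fft.ifftshift(psf_full))
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * H))
restored = wiener_restore(blurred, psf_full)
```

Comparing the MSE of `restored` against `blurred` relative to the original scene is exactly the kind of criterion the integral design loop would feed back into the optical merit function.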
VIP: Vortex Image Processing pipeline for high-contrast direct imaging of exoplanets
NASA Astrophysics Data System (ADS)
Gomez Gonzalez, Carlos Alberto; Wertz, Olivier; Christiaens, Valentin; Absil, Olivier; Mawet, Dimitri
2016-03-01
VIP (Vortex Image Processing pipeline) provides pre- and post-processing algorithms for high-contrast direct imaging of exoplanets. Written in Python, VIP provides a very flexible framework for data exploration and image processing and supports high-contrast imaging observational techniques, including angular, reference-star and multi-spectral differential imaging. Several post-processing algorithms for PSF subtraction based on principal component analysis are available as well as the LLSG (Local Low-rank plus Sparse plus Gaussian-noise decomposition) algorithm for angular differential imaging. VIP also implements the negative fake companion technique coupled with MCMC sampling for rigorous estimation of the flux and position of potential companions.
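A minimal sketch of the kind of PCA-based PSF subtraction VIP implements might look as follows; the function name and the toy data cube are assumptions for illustration, not VIP's actual API.

```python
import numpy as np

def pca_psf_subtract(cube, ncomp):
    """Subtract a PCA model of the stellar PSF from each frame.

    cube  : (nframes, ny, nx) stack of images
    ncomp : number of principal components used for the PSF model
    """
    nfr, ny, nx = cube.shape
    X = cube.reshape(nfr, ny * nx)
    X = X - X.mean(axis=0)              # remove the mean frame
    # SVD of the frame matrix; rows of V are the principal components
    U, s, V = np.linalg.svd(X, full_matrices=False)
    basis = V[:ncomp]                   # (ncomp, npix) PSF basis
    coeffs = X @ basis.T                # projection of each frame
    residuals = X - coeffs @ basis      # PSF-subtracted frames
    return residuals.reshape(nfr, ny, nx)

rng = np.random.default_rng(0)
cube = rng.normal(size=(10, 8, 8)) + 5.0  # fake stack with a common offset
res = pca_psf_subtract(cube, ncomp=3)
```

In angular differential imaging the residual frames would then be derotated and combined so that the quasi-static PSF averages out while a real companion adds up.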
Human skin surface evaluation by image processing
NASA Astrophysics Data System (ADS)
Zhu, Liangen; Zhan, Xuemin; Xie, Fengying
2003-12-01
Human skin gradually loses its tension and becomes very dry over time. The use of cosmetics is effective in preventing skin aging, and there are now many cosmetic products to choose from. To demonstrate their effects, it is desirable to develop a way to evaluate skin surface condition quantitatively. In this paper, an automatic skin evaluation method is proposed. The skin surface has a pattern called grid texture, composed of valleys that spread vertically, horizontally, and obliquely, and of the hills separated by them. Changes in the grid are closely linked to the condition of the skin surface and can serve as a good indicator of it. By measuring the skin grid using digital image processing technologies, we can evaluate the aging, health, and nutritional status of the skin surface. In this method, the skin grid is first detected to form a closed net. Some skin parameters, such as roughness, tension, scale, and gloss, are then calculated from statistical measurements of the net. By analyzing these parameters, the condition of the skin can be monitored.
Image-processing pipelines: applications in magnetic resonance histology
NASA Astrophysics Data System (ADS)
Johnson, G. Allan; Anderson, Robert J.; Cook, James J.; Long, Christopher; Badea, Alexandra
2016-03-01
Image processing has become ubiquitous in imaging research—so ubiquitous that it is easy to lose track of how diverse this processing has become. The Duke Center for In Vivo Microscopy has pioneered the development of Magnetic Resonance Histology (MRH), which generates large multidimensional data sets that can easily reach into the tens of gigabytes. A series of dedicated image-processing workstations and associated software have been assembled to optimize each step of acquisition, reconstruction, post-processing, registration, visualization, and dissemination. This talk will describe the image-processing pipelines from acquisition to dissemination that have become critical to our everyday work.
NASA Astrophysics Data System (ADS)
Rauh, Cornelia; Delgado, Antonio
2011-03-01
High pressures up to several hundreds of MPa are utilised in a wide range of applications in chemical engineering, bioengineering, and food engineering, aiming at selective control of (bio-)chemical reactions. Non-uniformity of process conditions may threaten the safety and quality of the resulting products as the process conditions such as pressure, temperature, and treatment history are crucial for the course of (bio-)chemical reactions. Therefore, thermofluid dynamical phenomena during the high-pressure process have to be examined, and tools to predict process uniformity and to optimise the processes have to be developed. Recently, mathematical models and numerical simulations of laboratory and industrial scale high-pressure processes have been set up and validated by experimental results. This contribution deals with the assumption of the modelling that relevant (bio-)chemical compounds are ideally dissolved or diluted particles in a continuum flow. By considering the definition of the continuum hypothesis regarding the minimum particle population in a distinct volume, limitations of this modelling and simulation are addressed.
DTV color and image processing: past, present, and future
NASA Astrophysics Data System (ADS)
Kim, Chang-Yeong; Lee, SeongDeok; Park, Du-Sik; Kwak, Youngshin
2006-01-01
The image processor in digital TV has started to play an important role due to customers' growing desire for higher image quality. Customers want more vivid and natural images without any visual artifacts, and image processing techniques are designed to meet these needs in spite of the physical limitations of the panel. In this paper, developments in image processing techniques for DTV, in conjunction with developments in display technologies at Samsung R&D, are reviewed. The introduced algorithms cover techniques required to solve problems caused by the characteristics of the panel itself, as well as techniques for enhancing the image quality of input signals, optimized for the panel and for human visual characteristics.
Image interpolation and denoising for division of focal plane sensors using Gaussian processes.
Gilboa, Elad; Cunningham, John P; Nehorai, Arye; Gruev, Viktor
2014-06-16
Image interpolation and denoising are important techniques in image processing. These methods are inherent to digital image acquisition, as most digital cameras are composed of a 2D grid of heterogeneous imaging sensors. Current polarization imaging sensors employ four different pixelated polarization filters, commonly referred to as division of focal plane polarization sensors. The sensors capture only partial information about the true scene, leading to a loss of spatial resolution as well as inaccuracy in the captured polarization information. Interpolation is a standard technique to recover the missing information and increase the accuracy of the captured polarization information. Here we focus specifically on Gaussian process regression as a way to perform statistical image interpolation, where estimates of sensor noise are used to improve the accuracy of the estimated pixel information. We further exploit the inherent grid structure of this data to create a fast exact algorithm that operates in O(N^{3/2}) (vs. the naive O(N^3)), thus making the Gaussian process method computationally tractable for image data. This modeling advance and the enabling computational advance combine to produce significant improvements over previously published interpolation methods for polarimeters, which is most pronounced in cases of low signal-to-noise ratio (SNR). We provide the comprehensive mathematical model as well as experimental results of the GP interpolation performance for division of focal plane polarimeters. PMID:24977618
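As a rough illustration of GP-based interpolation (here in 1D, with a naive O(N^3) solve rather than the paper's fast exact grid-structured method), one might write:

```python
import numpy as np

def gp_interpolate(x_train, y_train, x_test, length=1.0, noise=0.1):
    """Gaussian-process interpolation with an RBF kernel.

    The noise term models sensor noise variance, which regularizes the fit,
    echoing the paper's use of noise estimates to improve pixel estimates.
    """
    def rbf(a, b):
        d = a[:, None] - b[None, :]
        return np.exp(-0.5 * (d / length) ** 2)

    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_test, x_train)
    alpha = np.linalg.solve(K, y_train)   # O(N^3) dense solve
    return Ks @ alpha                     # GP posterior mean at x_test

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.sin(x)
xs = np.array([0.5, 1.5, 2.5])
pred = gp_interpolate(x, y, xs, length=1.0, noise=1e-4)
```

The hyperparameters here are assumptions; the paper's contribution is replacing the dense solve with a Kronecker-structured one exploiting the 2D pixel grid.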
Cardiovascular Imaging and Image Processing: Theory and Practice - 1975
NASA Technical Reports Server (NTRS)
Harrison, Donald C. (Editor); Sandler, Harold (Editor); Miller, Harry A. (Editor); Hood, Manley J. (Editor); Purser, Paul E. (Editor); Schmidt, Gene (Editor)
1975-01-01
Ultrasonography was examined with regard to the developmental highlights and present applications of cardiac ultrasound. Doppler ultrasonic techniques and the technology of miniature acoustic element arrays were reported. X-ray angiography was discussed, with special consideration of quantitative three-dimensional dynamic imaging of the structure and function of the cardiopulmonary and circulatory systems in all regions of the body. Nuclear cardiography and scintigraphy, three-dimensional imaging of the myocardium with isotopes, and the commercialization of the echocardioscope were studied.
ERIC Educational Resources Information Center
Barlow, Angela T.; Huang, Rongjin; Law, Huk-Yuen; Chan, Yip Cheung; Zhang, Qiaoping; Baxter, Wesley A.; Gaddy, Angeline K.
2016-01-01
Mathematical disagreements occur when students challenge each other's ideas related to a mathematical concept. In this research, we examined Hong Kong and U.S. elementary teachers' perceptions of mathematical disagreements and their resolutions using a video-stimulated survey. Participants were directed to give particular attention to the…
ERIC Educational Resources Information Center
Seltman, Muriel; Seltman, P. E. J.
1978-01-01
The authors stress the importance of bringing together the causal logic of history and the formal logic of mathematics in order to humanize mathematics and make it more accessible. An example of such treatment is given in a discussion of the centrality of Euclid and the Euclidean system to mathematics development. (MN)
Improving Primary School Prospective Teachers' Understanding of the Mathematics Modeling Process
ERIC Educational Resources Information Center
Bal, Aytgen Pinar; Doganay, Ahmet
2014-01-01
The development of mathematical thinking plays an important role on the solution of problems faced in daily life. Determining the relevant variables and necessary procedural steps in order to solve problems constitutes the essence of mathematical thinking. Mathematical modeling provides an opportunity for explaining thoughts in real life by making…
ERIC Educational Resources Information Center
Rader, Laura
2009-01-01
The reality is that approximately 5-8% of school-age students have memory or other cognitive deficits that interfere with their ability to acquire, master, and apply mathematical concepts and skills (Geary, 2004). These students with Mathematical Learning Disabilities (MLD) are at risk for failure in middle school mathematics because they…
Image processing techniques for digital orthophotoquad production
Hood, Joy J.; Ladner, L. J.; Champion, Richard A.
1989-01-01
Orthophotographs have long been recognized for their value as supplements or alternatives to standard maps. Recent trends towards digital cartography have resulted in efforts by the US Geological Survey to develop a digital orthophotoquad production system. Digital image files were created by scanning color infrared photographs on a microdensitometer. Rectification techniques were applied to remove tilt and relief displacement, thereby creating digital orthophotos. Image mosaicking software was then used to join the rectified images, producing digital orthophotos in quadrangle format.
NASA Technical Reports Server (NTRS)
Heydorn, R. P.
1984-01-01
The Mathematical Pattern Recognition and Image Analysis (MPRIA) Project is concerned with basic research problems related to the study of the Earth from remotely sensed measurements of its surface characteristics. The program goal is to better understand how to analyze the digital image that represents the spatial, spectral, and temporal arrangement of these measurements for the purpose of making selected inferences about the Earth. This report summarizes the progress that has been made toward this program goal by each of the principal investigators in the MPRIA Program.
An Image Processing Algorithm Based On FMAT
NASA Technical Reports Server (NTRS)
Wang, Lui; Pal, Sankar K.
1995-01-01
Information deleted in ways minimizing adverse effects on reconstructed images. New grey-scale generalization of medial axis transformation (MAT), called FMAT (short for Fuzzy MAT) proposed. Formulated by making natural extension to fuzzy-set theory of all definitions and conditions (e.g., characteristic function of disk, subset condition of disk, and redundancy checking) used in defining MAT of crisp set. Does not need image to have any kind of a priori segmentation, and allows medial axis (and skeleton) to be fuzzy subset of input image. Resulting FMAT (consisting of maximal fuzzy disks) capable of reconstructing exactly original image.
A mathematical process model for cadmium precipitation by sulfate-reducing bacterial biofilms.
White, Christopher; Dennis, John S; Gadd, Geoffrey M
2003-04-01
Sulfate-reducing bacterial (SRB) biofilms were grown in a flowcell in which the biofilm was grown on a fixed area of support which was supplied with recirculating medium of defined composition, volume and circulation rate. Utilization rates for substrates, production rates for products and material mass-balances for substrates and Cd were determined and a mathematical model constructed based on theoretical considerations and experimental data. The rate of sulfate reduction was zero-order with respect to sulfate concentration and unaffected by the presence of 250 microM Cd. However, Cd reacted with the sulfide produced by the SRB to produce solid CdS, removing sulfide from solution. A significant fraction of colloidal CdS was formed which flocculated relatively slowly, limiting the overall rate of Cd bioprecipitation. Experiments using chemically-synthesised colloidal CdS indicated that the biofilm did not influence colloidal Cd flocculation but stimulated sedimentation of the CdS precipitate once flocculated. A mathematical model of bioprecipitation was developed in which the CdS formation rate was determined by two steps: sulfide production by the biofilm and colloidal CdS flocculation. This model accurately predicted the behaviour of further experimental runs which indicated the adequacy of the overall process description. The model also indicated that the rate of sulfate reduction and the rate of flocculation were the key variables in optimising the biofilm system for metal removal.
Viking image processing. [digital stereo imagery and computer mosaicking
NASA Technical Reports Server (NTRS)
Green, W. B.
1977-01-01
The paper discusses the camera systems capable of recording black and white and color imagery developed for the Viking Lander imaging experiment. Each Viking Lander image consisted of a matrix of numbers with 512 rows and an arbitrary number of columns up to a maximum of about 9,000. Various techniques were used in the processing of the Viking Lander images, including: (1) digital geometric transformation, (2) the processing of stereo imagery to produce three-dimensional terrain maps, and (3) computer mosaicking of distinct processed images. A series of Viking Lander images is included.
Survey on Neural Networks Used for Medical Image Processing
Shi, Zhenghao; He, Lifeng; Suzuki, Kenji; Nakamura, Tsuyoshi; Itoh, Hidenori
2010-01-01
This paper aims to present a review of neural networks used in medical image processing. We classify neural networks by their processing goals and the nature of the medical images. The main contributions, advantages, and drawbacks of the methods are mentioned in the paper. Problematic issues of applying neural networks to medical image processing and an outlook for future research are also discussed. Through this survey, we try to answer the following two important questions: (1) What are the major applications of neural networks in medical image processing now and in the near future? (2) What are the major strengths and weaknesses of applying neural networks to medical image processing tasks? We believe this will be very helpful to researchers who are involved in medical image processing with neural network techniques. PMID:26740861
Medical image processing on the GPU - past, present and future.
Eklund, Anders; Dufort, Paul; Forsberg, Daniel; LaConte, Stephen M
2013-12-01
Graphics processing units (GPUs) are used today in a wide range of applications, mainly because they can dramatically accelerate parallel computing, are affordable and energy efficient. In the field of medical imaging, GPUs are in some cases crucial for enabling practical use of computationally demanding algorithms. This review presents the past and present work on GPU accelerated medical image processing, and is meant to serve as an overview and introduction to existing GPU implementations. The review covers GPU acceleration of basic image processing operations (filtering, interpolation, histogram estimation and distance transforms), the most commonly used algorithms in medical imaging (image registration, image segmentation and image denoising) and algorithms that are specific to individual modalities (CT, PET, SPECT, MRI, fMRI, DTI, ultrasound, optical imaging and microscopy). The review ends by highlighting some future possibilities and challenges.
Image processing methods for visual prostheses based on DSP
NASA Astrophysics Data System (ADS)
Liu, Huwei; Zhao, Ying; Tian, Yukun; Ren, Qiushi; Chai, Xinyu
2008-12-01
Visual prostheses for extreme vision impairment have come closer to reality in recent years. The task of this research has been to design external devices and to study image processing algorithms and methods for images of different complexity. We have developed a real-time system, based on a DSP (digital signal processor), capable of image capture and of processing that extracts the most useful and important image features for recognition and for simulation experiments. Beyond developing the hardware system, we introduce algorithms such as resolution reduction, information extraction, dilation and erosion, square (circular) pixelization, and Gaussian pixelization. We also classify images into stages according to their complexity: simple images, moderately complex images, and complex images. As a result, this work provides the signals needed for transmission to the electrode array, as well as images for the simulation experiment.
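The dilation and erosion operations mentioned above can be sketched for binary images in plain NumPy; the 3x3 square structuring element and the duality-based erosion are standard textbook choices, not details taken from this system.

```python
import numpy as np

def dilate(img, k=3):
    """Binary dilation with a k x k square structuring element (zero padding)."""
    pad = k // 2
    p = np.pad(img, pad)
    out = np.zeros_like(img)
    # OR together all k*k shifted copies of the image
    for dy in range(k):
        for dx in range(k):
            out |= p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def erode(img, k=3):
    """Binary erosion via the complement-dilate-complement duality."""
    return 1 - dilate(1 - img, k)

img = np.zeros((7, 7), dtype=int)
img[3, 3] = 1
d = dilate(img)   # the single pixel grows into a 3x3 square
e = erode(d)      # erosion shrinks the square back to one pixel
```

Opening (erode then dilate) and closing (dilate then erode) follow directly from these two primitives.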
Design of a distributed CORBA based image processing server.
Giess, C; Evers, H; Heid, V; Meinzer, H P
2000-01-01
This paper presents the design and implementation of a distributed image processing server based on CORBA. Existing image processing tools were encapsulated in a common way within this server. Data exchange and conversion are done automatically inside the server, hiding these tasks from the user. The different image processing tools are visible as one large collection of algorithms and, thanks to the use of CORBA, are accessible via intranet/internet.
Image-Processing Software For A Hypercube Computer
NASA Technical Reports Server (NTRS)
Lee, Meemong; Mazer, Alan S.; Groom, Steven L.; Williams, Winifred I.
1992-01-01
Concurrent Image Processing Executive (CIPE) is software system intended to develop and use image-processing application programs on concurrent computing environment. Designed to shield programmer from complexities of concurrent-system architecture, it provides interactive image-processing environment for end user. CIPE utilizes architectural characteristics of particular concurrent system to maximize efficiency while preserving architectural independence from user and programmer. CIPE runs on Mark-IIIfp 8-node hypercube computer and associated SUN-4 host computer.
Automated Image Processing : An Efficient Pipeline Data-Flow Architecture
NASA Astrophysics Data System (ADS)
Barreault, G.; Rivoire, A.; Jourlin, M.; Laboure, M. J.; Ramon, S.; Zeboudj, R.; Pinoli, J. C.
1987-10-01
In the context of expert systems there is a pressing need for efficient image processing algorithms to fit the various applications. This paper presents a new electronic card that performs image acquisition, processing, and display, with an IBM-PC/XT or AT as the host computer. The card features a pipeline data-flow architecture, an efficient and cost-effective solution to most image processing problems.
Optimizing signal and image processing applications using Intel libraries
NASA Astrophysics Data System (ADS)
Landré, Jérôme; Truchetet, Frédéric
2007-01-01
This paper presents optimized signal and image processing libraries from Intel Corporation. Intel Performance Primitives (IPP) is a low-level signal and image processing library developed by Intel Corporation to optimize code on Intel processors. Open Computer Vision library (OpenCV) is a high-level library dedicated to computer vision tasks. This article describes the use of both libraries to build flexible and efficient signal and image processing applications.
NASA Astrophysics Data System (ADS)
Mekkaoui, Imen; Moulin, Kevin; Croisille, Pierre; Pousin, Jerome; Viallon, Magalie
2016-08-01
Cardiac motion presents a major challenge in diffusion-weighted MRI, often leading to large signal losses that necessitate repeated measurements. The diffusion process in the myocardium is difficult to investigate because of the sensitivity of diffusion measurements to cardiac motion. A rigorous mathematical formalism is introduced to quantify the effect of tissue motion in diffusion imaging. The presented mathematical model, based on the Bloch-Torrey equations, takes deformations into account according to the laws of continuum mechanics. By approximating this mathematical model with the finite element method, numerical simulations can predict the sensitivity of the diffusion signal to cardiac motion. Different diffusion encoding schemes are considered, and the diffusion-weighted MR signals, computed numerically, are compared to results available in the literature. Our numerical model identifies two time points in the cardiac cycle at which diffusion is unaffected by myocardial strain and cardiac motion; these time points depend on the type of diffusion encoding scheme. Our numerical results also show that the motion sensitivity of the diffusion sequence can be reduced by using either a spin echo technique with acceleration-motion-compensated diffusion gradients or a stimulated echo acquisition mode with unipolar and bipolar diffusion gradients.
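For reference, the static-tissue Bloch-Torrey equation on which such models are built is commonly written as below; the notation is generic (the paper's deformation-dependent formulation extends it via continuum-mechanics coordinate changes), so symbols may differ from the authors' own.

```latex
\frac{\partial M}{\partial t}
  = -\,i\,\gamma\,\bigl(\mathbf{g}(t)\cdot\mathbf{x}\bigr)\,M
    \;+\; \nabla\cdot\bigl(\mathbf{D}\,\nabla M\bigr),
```

where M(x, t) is the complex transverse magnetization, γ the gyromagnetic ratio, g(t) the applied diffusion-encoding gradient waveform, and D the diffusion tensor.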
PMID:27385441
Kim, Boklye; Yeo, Desmond T B; Bhagalia, Roshni
2008-02-01
There has been vast interest in determining the feasibility of functional magnetic resonance imaging (fMRI) as an accurate method of imaging brain function for patient evaluations. The assessment of fMRI as an accurate tool for activation localization largely depends on the software used to process the time series data. The performance evaluation of different analysis tools is not reliable unless the true motion and activation are known, and the lack of valid truths has been the limiting factor for comparisons of different algorithms. Currently available phantom data do not include comprehensive accounts of head motion. While most fMRI studies assume no interslice motion during the time series acquisition, in fMRI data acquired with a multislice, single-shot echo-planar imaging sequence each slice is subject to a different set of motion parameters. In this study, in addition to known three-dimensional motion parameters applied to each slice, the time series computation includes geometric distortion from field inhomogeneity and the spin saturation effect resulting from out-of-plane head motion. We investigated the effect of these head motion-related artifacts and present a validation of the mapping slice-to-volume (MSV) algorithm for motion correction and activation detection against the known truths. MSV was evaluated and showed better performance than other widely used fMRI data processing software, which corrects for head motion with a volume-to-volume realignment method. Furthermore, improvement in signal detection was observed with the implementation of the geometric distortion correction and spin saturation compensation features in MSV. PMID:17662548
NASA Astrophysics Data System (ADS)
Charafi, My. M.; Sadok, A.; Kamal, A.; Menai, A.
A quasi-three-dimensional mathematical model has been developed to study morphological processes based on an equilibrium sediment transport method. The flow velocities are computed by a two-dimensional horizontal depth-averaged flow model (H2D) in combination with logarithmic velocity profiles. The transport of sediment particles by flowing water is considered in the form of bed load and suspended load. The bed load transport rate, defined as the transport of particles rolling and saltating along the bed surface, is given by the Van Rijn relationship (1987). The equilibrium suspended load transport is described in terms of an equilibrium sediment concentration profile (ce) and a logarithmic velocity profile (u). Based on the equilibrium transport, the bed change rate is obtained by integrating the sediment mass-balance equation. The model results have been compared with Van Rijn's results (equilibrium approach), and good agreement has been found.
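The bed change rate obtained by integrating the sediment mass balance is commonly expressed by the Exner equation; the symbols below are the standard generic ones and may differ from the abstract's notation.

```latex
(1 - p)\,\frac{\partial z_b}{\partial t}
  \;+\; \frac{\partial q_{s,x}}{\partial x}
  \;+\; \frac{\partial q_{s,y}}{\partial y} \;=\; 0,
```

where z_b is the bed elevation, p the bed porosity, and (q_{s,x}, q_{s,y}) the components of the total (bed plus suspended) sediment transport rate per unit width: a divergence of transport lowers or raises the bed accordingly.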
Cosmelli, Diego; Palacios, Adrián G
2007-01-01
Convergence of clinical, empirical, methodological and theoretical approaches aimed at understanding the relation between brain function and cognition, is by now standard in most if not all academic programs in the area of Cognitive Science. This confederation of disciplines is one of the liveliest domains of inquiry and discussion into some of the most fundamental--and historically resilient--questions human beings have posed themselves. The contributions gathered in this special issue of Biological Research, directly inspired by the ongoing work at the Instituto de Sistemas Complejos de Valparaiso and the December 2006 CONICYT-INSERM-SFI workshop "Networks in Cognitive Systems/Trends and Challenge in Biomedicine: From Cerebral Process to Mathematical Tools Design", Chile, represent an explicit invitation to the reader to dive deeper into this fascinating terrain.
Mathematic modeling of the Earth's surface and the process of remote sensing
NASA Technical Reports Server (NTRS)
Balter, B. M.
1979-01-01
It is shown that real data from remote sensing of the Earth from outer space are not best suited to the search for optimal procedures with which to process such data. To work out the procedures, it was proposed that data synthesized with the help of mathematical modeling be used. A criterion for similarity to reality was formulated. The basic principles for constructing methods for modeling the data from remote sensing are recommended. A concrete method is formulated for modeling a complete cycle of radiation transformations in remote sensing. A computer program is described which realizes the proposed method. Some results from calculations are presented which show that the method satisfies the requirements imposed on it.
Research on non-destructive testing method of silkworm cocoons based on image processing technology
NASA Astrophysics Data System (ADS)
Gan, Yong; Kong, Qing-hua; Wei, Li-fu
2008-03-01
The major subject of this dissertation is a non-destructive method for testing silkworm cocoon quality, based on digital image processing and photoelectric technology. Through image collection and the analysis, processing, and calculation of data from the tested cocoons, the system automatically computes all of the classification indexes and finally derives the classification result and the purchase price of the silkworm cocoons. Following the domestic classification standard for silkworm cocoons, the author surveys the various cocoon testing methods currently in use or under exploration, and devises a non-destructive testing scheme based on digital image processing and photoelectric technology. The experimental design of the project is discussed, and the precision of all instruments is demonstrated. Several mathematical models for estimating the weight of the dried cocoon shells are established, compared with each other, and analyzed for precision using database techniques in order to select the best model. Classification methods for all of the complementary indexes are designed accurately. The proposed testing method has a small error and reaches an advanced level among present domestic non-destructive testing technologies for silkworm cocoons.
ERIC Educational Resources Information Center
Bacdayan, Andrew W.
1997-01-01
Describes in economic and mathematical terms the learning production process at the individual level. Presents a model to determine which factors influence this process. Tests the model, using data from two widely divergent learning situations, ranging from lecture-oriented college economics courses to programmed instruction to learn eighth-grade…
ERIC Educational Resources Information Center
Bugden, Stephanie; Ansari, Daniel
2011-01-01
In recent years, there has been an increasing focus on the role played by basic numerical magnitude processing in the typical and atypical development of mathematical skills. In this context, tasks measuring both the intentional and automatic processing of numerical magnitude have been employed to characterize how children's representation and…
Multispectral image restoration of historical documents based on LAAMs and mathematical morphology
NASA Astrophysics Data System (ADS)
Lechuga-S., Edwin; Valdiviezo-N., Juan C.; Urcid, Gonzalo
2014-09-01
This research introduces an automatic technique designed for the digital restoration of damaged parts of historical documents. For this purpose, an imaging spectrometer is used to acquire a set of images in the wavelength interval from 400 to 1000 nm. Assuming the presence of linearly mixed spectral pixels in the multispectral image, our technique uses two lattice autoassociative memories to extract the set of pure pigments composing a given document. Through a spectral unmixing analysis, our method produces fractional abundance maps indicating the distribution of each pigment in the scene. These maps are then used to locate cracks and holes in the document under study. The restoration is performed by applying a region-filling algorithm based on morphological dilation, followed by a color interpolation that restores the original appearance of the filled areas. This procedure has been successfully applied to the analysis and restoration of three multispectral data sets: two corresponding to artificially superimposed scripts and one acquired from a Mexican pre-Hispanic codex, whose restoration results are presented.
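A minimal sketch of region filling by morphological dilation (the standard iterative construction X_k = dilate(X_{k-1}) ∩ A^c, not the authors' exact algorithm) might look like this; the seed point and test pattern are assumptions.

```python
import numpy as np

def dilate4(img):
    """Binary dilation with a 4-connected (cross) structuring element.

    4-connectivity prevents the fill from leaking diagonally through
    an 8-connected boundary.
    """
    p = np.pad(img, 1)
    h, w = img.shape
    out = img.copy()
    out |= p[0:h, 1:w + 1]      # neighbor above
    out |= p[2:h + 2, 1:w + 1]  # neighbor below
    out |= p[1:h + 1, 0:w]      # neighbor left
    out |= p[1:h + 1, 2:w + 2]  # neighbor right
    return out

def fill_region(boundary, seed):
    """Iterate X_k = dilate(X_{k-1}) & ~boundary until convergence."""
    comp = 1 - boundary
    x = np.zeros_like(boundary)
    x[seed] = 1
    while True:
        nxt = dilate4(x) & comp
        if np.array_equal(nxt, x):
            return boundary | x   # union of boundary and filled interior
        x = nxt

# A 5x5 square outline with a hole inside, filled from an interior seed.
a = np.zeros((7, 7), dtype=int)
a[1, 1:6] = a[5, 1:6] = a[1:6, 1] = a[1:6, 5] = 1
filled = fill_region(a, (3, 3))
```

In the restoration context the "boundary" would come from the crack/hole masks derived from the abundance maps, with the color interpolation applied afterwards.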
A color image processing pipeline for digital microscope
NASA Astrophysics Data System (ADS)
Liu, Yan; Liu, Peng; Zhuang, Zhefeng; Chen, Enguo; Yu, Feihong
2012-10-01
The digital microscope has found wide application in biology, medicine and related fields. A digital microscope differs from a traditional optical microscope in that there is no need to observe the sample through an eyepiece directly, because the optical image is projected directly onto the CCD/CMOS camera. However, because of the imaging differences between the human eye and a sensor, a color image processing pipeline is needed for the digital microscope electronic eyepiece to obtain a fine image. The color image pipeline for a digital microscope, comprising the procedures that convert the RAW image data captured by the sensor into a true-color image, is of great concern to the quality of the microscopic image. The pipeline for a digital microscope differs from those of digital still cameras and video cameras because of the specific requirements of microscopic images, which should have a high dynamic range, preserve the color of the objects observed, and support a variety of post-processing operations. In this paper, a new color image processing pipeline is proposed to satisfy the requirements of digital microscope imaging. The algorithm for each step in the pipeline is designed and optimized with the purpose of producing high-quality images and accommodating diverse user preferences. With the proposed pipeline implemented on the digital microscope platform, the output color images meet the various image analysis requirements of the medical and biological fields very well. The major steps of the proposed color imaging pipeline are: black level adjustment, defective pixel removal, noise reduction, linearization, white balance, RGB color correction, tone scale correction and gamma correction.
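A minimal sketch of three of the listed stages (black-level adjustment, white balance, gamma correction) is shown below. All constants are invented for illustration; the paper's actual algorithms and parameter values are not reproduced here.

```python
import numpy as np

def simple_pipeline(raw, black=16.0, white=255.0,
                    wb_gains=(1.8, 1.0, 1.5), gamma=2.2):
    """Toy color pipeline stages on a float RGB array:
    black-level subtraction and normalization, per-channel white
    balance, and gamma correction for display."""
    img = np.clip((raw.astype(float) - black) / (white - black), 0.0, 1.0)
    img = np.clip(img * np.asarray(wb_gains), 0.0, 1.0)  # white balance
    return img ** (1.0 / gamma)                          # gamma correction
```

In a real pipeline each stage would be preceded by defective-pixel removal, noise reduction and linearization, and followed by an RGB color-correction matrix and tone-scale mapping, as the abstract's step list indicates.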
Experiments with recursive estimation in astronomical image processing
NASA Technical Reports Server (NTRS)
Busko, I.
1992-01-01
Recursive estimation concepts have been applied to image enhancement problems since the 1970s. However, few applications in the particular area of astronomical image processing are known. These concepts were derived, for two-dimensional images, from the well-known theory of one-dimensional Kalman filtering. The historical reasons for applying these techniques to digital images are related to the images' scanned nature: the temporal output of a scanner device can be processed on-line by techniques borrowed directly from one-dimensional recursive signal analysis. However, recursive estimation has properties that keep it attractive even today, when large computer memories make the full scanned image available to the processor at any time. One particularly important aspect is the ability of recursive techniques to deal with non-stationary phenomena, that is, phenomena whose statistical properties vary in time (or with position in a 2-D image). Many image processing methods make underlying stationarity assumptions, either for the stochastic field being imaged, for the imaging system properties, or both; they underperform, or even fail, when applied to images that deviate significantly from stationarity. Recursive methods, on the contrary, make adaptive processing feasible, that is, processing the image with a processor whose properties are tuned to the image's local statistics. Recursive estimation can be used to build estimates of images degraded by phenomena such as noise and blur. We show examples of recursive adaptive processing of astronomical images, using several local statistical properties, such as average signal intensity, signal-to-noise ratio and the autocorrelation function, to drive the adaptive processor. The software was developed under IRAF, and as such will be made available to interested users.
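The recursive flavor of such processing can be illustrated by a scalar Kalman-style estimator run along a scanline of noisy samples. The random-walk signal model and the variance parameters below are assumptions for illustration, not the paper's estimator.

```python
import numpy as np

def recursive_mean(samples, meas_var, process_var=0.0):
    """Scalar Kalman-style recursive estimate of a (slowly varying)
    signal level from a sequence of noisy scanline samples."""
    est, p = samples[0], meas_var      # initial state and uncertainty
    out = [est]
    for z in samples[1:]:
        p += process_var               # predict (random-walk model)
        k = p / (p + meas_var)         # Kalman gain
        est += k * (z - est)           # update with the innovation
        p *= (1.0 - k)
        out.append(est)
    return np.array(out)
```

With `process_var = 0` the estimate reduces to the running mean; a nonzero `process_var` makes the filter "forget" old samples, which is what allows it to track locally varying (non-stationary) statistics.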
Experiences with digital processing of images at INPE
NASA Technical Reports Server (NTRS)
Mascarenhas, N. D. A. (Principal Investigator)
1984-01-01
Four different research experiments with digital image processing at INPE will be described: (1) edge detection by hypothesis testing; (2) image interpolation by finite impulse response filters; (3) spatial feature extraction methods in multispectral classification; and (4) translational image registration by sequential tests of hypotheses.
NASA Technical Reports Server (NTRS)
Masuoka, E.; Rose, J.; Quattromani, M.
1981-01-01
Recent developments related to microprocessor-based personal computers have made low-cost digital image processing systems a reality. Image analysis systems built around these microcomputers provide color image displays for images as large as 256 by 240 pixels in sixteen colors. Descriptive statistics can be computed for portions of an image, and supervised image classification can be performed. The systems support Basic, Fortran, Pascal, and assembly language. A description is provided of a system which is representative of the new microprocessor-based image processing systems currently on the market. While small systems may never be truly independent of larger mainframes, because they lack 9-track tape drives, the independent processing power of the microcomputers will help alleviate some of the turn-around time problems associated with image analysis and display on the larger multiuser systems.
Feasibility studies of optical processing of image bandwidth compression schemes
NASA Astrophysics Data System (ADS)
Hunt, B. R.; Strickland, R. N.; Schowengerdt, R. A.
1983-05-01
This research focuses on three areas: (1) formulation of alternative architectural concepts for image bandwidth compression, i.e., the formulation of components and schematic diagrams which differ from conventional digital bandwidth compression schemes by being implemented by various optical computation methods; (2) simulation of optical processing concepts for image bandwidth compression, so as to gain insight into typical performance parameters and elements of system performance sensitivity; and (3) maturation of optical processing for image bandwidth compression until the overall state of optical methods in image compression becomes equal to that of digital image compression.
Breast image pre-processing for mammographic tissue segmentation.
He, Wenda; Hogg, Peter; Juette, Arne; Denton, Erika R E; Zwiggelaar, Reyer
2015-12-01
During mammographic image acquisition, a compression paddle is used to even the breast thickness in order to obtain optimal image quality. Clinical observation has indicated that some mammograms may exhibit abrupt intensity change and low visibility of tissue structures in the breast peripheral areas. Such appearance discrepancies can affect image interpretation and may not be desirable for computer aided mammography, leading to incorrect diagnosis and/or detection which can have a negative impact on sensitivity and specificity of screening mammography. This paper describes a novel mammographic image pre-processing method to improve image quality for analysis. An image selection process is incorporated to better target problematic images. The processed images show improved mammographic appearances not only in the breast periphery but also across the mammograms. Mammographic segmentation and risk/density classification were performed to facilitate a quantitative and qualitative evaluation. When using the processed images, the results indicated more anatomically correct segmentation in tissue specific areas, and subsequently better classification accuracies were achieved. Visual assessments were conducted in a clinical environment to determine the quality of the processed images and the resultant segmentation. The developed method has shown promising results. It is expected to be useful in early breast cancer detection, risk-stratified screening, and aiding radiologists in the process of decision making prior to surgery and/or treatment.
Using quantum filters to process images of diffuse axonal injury
NASA Astrophysics Data System (ADS)
Pineda Osorio, Mateo
2014-06-01
Images corresponding to diffuse axonal injury (DAI) are processed using several quantum filters such as Hermite, Weibull and Morse. Diffuse axonal injury is a particular, common and severe case of traumatic brain injury (TBI). DAI involves global damage to brain tissue on a microscopic scale and causes serious neurologic abnormalities. New imaging techniques provide excellent images showing cellular damage related to DAI. Such images can be processed with quantum filters, which achieve high resolution of dendritic and axonal structures in both normal and pathological states. Using the Laplacian operators derived from the new quantum filters, excellent edge detectors for neurofiber resolution are obtained. Quantum processing of DAI images is performed using computer algebra, specifically Maple. The construction of quantum filter plugins, which could be incorporated into the ImageJ software package to make the method simpler for medical personnel to use, is proposed as a future line of research.
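The paper's quantum-filter Laplacians are not reproduced here, but the general idea of Laplacian-based edge detection can be sketched with the standard 4-neighbor discrete Laplacian kernel; the threshold is an illustrative assumption.

```python
import numpy as np

# Standard 4-neighbor discrete Laplacian kernel
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)

def convolve2d(img, kernel):
    """Naive 'same'-size 2-D convolution with zero padding."""
    kh, kw = kernel.shape
    pad = np.pad(img, ((kh // 2,), (kw // 2,)), mode="constant")
    out = np.zeros_like(img, dtype=float)
    flipped = kernel[::-1, ::-1]       # flip for true convolution
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(pad[i:i + kh, j:j + kw] * flipped)
    return out

def laplacian_edges(img, thresh=0.5):
    """Flag edges where the Laplacian magnitude exceeds a threshold."""
    return np.abs(convolve2d(img, LAPLACIAN)) > thresh
```

The Laplacian responds strongly on both sides of an intensity step, which is why it serves as an edge detector for thin structures such as neurofibers.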
The Development of Sun-Tracking System Using Image Processing
Lee, Cheng-Dar; Huang, Hong-Cheng; Yeh, Hong-Yih
2013-01-01
This article presents the development of an image-based Sun position sensor and an algorithm for aiming at the Sun precisely using image processing. Four-quadrant light sensors and bar-shadow photo sensors have been used to detect the Sun's position in past years. Nevertheless, neither can maintain high accuracy under low-irradiation conditions. An image-based Sun position sensor with image processing can address this drawback. To verify the performance of the Sun-tracking system, comprising an image-based Sun position sensor and a tracking controller with an embedded image processing algorithm, we established a Sun image tracking platform and performed tests in the laboratory; the results show that the proposed Sun-tracking system was able to overcome the problem of unstable tracking in cloudy weather and achieved a tracking accuracy of 0.04°. PMID:23615582
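As an illustration of how an image-based sensor can locate the Sun in a camera frame, a minimal intensity-weighted centroid over thresholded pixels is sketched below. The threshold and test image are hypothetical; the abstract does not specify the sensor's actual algorithm at this level of detail.

```python
import numpy as np

def sun_centroid(img, thresh):
    """Intensity-weighted centroid (row, col) of pixels brighter than
    a threshold: a minimal stand-in for an image-based Sun sensor."""
    ys, xs = np.nonzero(img > thresh)
    w = img[ys, xs].astype(float)
    return (ys * w).sum() / w.sum(), (xs * w).sum() / w.sum()
```

A tracking controller would compare this centroid against the optical center of the frame and drive the mount to null the offset.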
Airy-Kaup-Kupershmidt filters applied to digital image processing
NASA Astrophysics Data System (ADS)
Hoyos Yepes, Laura Cristina
2015-09-01
The Kaup-Kupershmidt operator is applied to the two-dimensional solution of the Airy-diffusion equation, and the resulting filter is applied via convolution to image processing. The full procedure is implemented using Maple code with the ImageTools package. Experiments were performed using a wide range of images, including biomedical images generated by magnetic resonance, computerized axial tomography, positron emission tomography, infrared imaging and photon diffusion. The Airy-Kaup-Kupershmidt filter can be used as a powerful edge detector and as a powerful enhancement tool in image processing. It is expected that the Airy-Kaup-Kupershmidt filter could be incorporated into standard image processing programs such as ImageJ.
Stochastic Process Underlying Emergent Recognition of Visual Objects Hidden in Degraded Images
Murata, Tsutomu; Hamada, Takashi; Shimokawa, Tetsuya; Tanifuji, Manabu; Yanagida, Toshio
2014-01-01
When a degraded two-tone image such as a "Mooney" image is seen for the first time, it is unrecognizable for the initial seconds. Recognition of such an image is facilitated by giving prior information about the object, a phenomenon known as top-down facilitation that has been intensively studied. Even in the absence of any prior information, however, we experience the sudden emergence of a salient object after continued observation of the image, a process that remains poorly understood. This emergent recognition is characterized by a comparatively long reaction time, ranging from seconds to tens of seconds. In this study, to explore this time-consuming process, we investigated the properties of the reaction times for recognition of degraded images of various objects. The results show that the time-consuming component of the reaction times follows a specific exponential function of the level of image degradation and the subject's capability. Because an exponential time is generally required for multiple stochastic events to co-occur, we constructed a descriptive mathematical model inspired by the neurophysiological idea of combination coding of visual objects. Our model assumes that the coincidence of stochastic events complements the information loss of a degraded image, leading to recognition of its hidden object, and it successfully explains the experimental results. Furthermore, to see whether the present results are specific to the task of emergent recognition, we also conducted a comparison experiment with a perceptual decision-making task on degraded images, which is well known to be modeled by the stochastic diffusion process. The results indicate that the exponential dependence on the level of image degradation is specific to emergent recognition. The present study suggests that emergent recognition is caused by an underlying stochastic process based on the coincidence of multiple stochastic events
Image data processing of earth resources management. [technology transfer
NASA Technical Reports Server (NTRS)
Desio, A. W.
1974-01-01
Various image processing and information extraction systems are described, along with the design and operation of an interactive multispectral information system, IMAGE 100. Analyses of ERTS data over a number of U.S. sites, using IMAGE 100, are presented, including: (1) investigations of crop inventory and management using remote sensing; and (2) land cover classification for environmental impact assessments. Results show that useful information is provided by IMAGE 100 analyses of ERTS data in digital form.
Mathematical modeling and analysis of EDM process parameters based on Taguchi design of experiments
NASA Astrophysics Data System (ADS)
Laxman, J.; Raj, K. Guru
2015-12-01
Electro-discharge machining (EDM) is a process used for machining very hard metals and deep, complex shapes by metal erosion in all types of electro-conductive materials. The metal is removed through the action of an electric discharge of short duration and high current density between the tool and the work piece. The eroded metal on the surfaces of both the work piece and the tool is flushed away by the dielectric fluid. The objective of this work is to develop a mathematical model for the EDM process that provides the equations needed to predict the metal removal rate, electrode wear rate and surface roughness. Regression analysis is used to investigate the relationship between the various process parameters. The input parameters are peak current, pulse-on time, pulse-off time and tool lift time; the metal removal rate, electrode wear rate and surface roughness are the responses. Experiments were conducted on a titanium superalloy using a Taguchi L27 orthogonal array design.
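The regression step can be illustrated with an ordinary least-squares fit on a synthetic L27-sized data set. The factor ranges, coefficients and linear response model below are invented for illustration and do not come from the paper.

```python
import numpy as np

# Hypothetical L27-style factor settings (e.g. peak current, pulse-on
# time, pulse-off time) and a synthetic noise-free linear response;
# none of these numbers are taken from the study.
rng = np.random.default_rng(0)
X = rng.uniform(1.0, 10.0, size=(27, 3))
true_coef = np.array([2.0, 0.5, -0.3])
y = X @ true_coef + 1.0                    # response with intercept 1.0

A = np.column_stack([np.ones(len(X)), X])  # design matrix with intercept
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
# coef[0] recovers the intercept, coef[1:] the factor coefficients
```

With real experimental data the residuals would be nonzero, and the fitted coefficients would indicate how strongly each machining parameter drives the metal removal rate or surface roughness.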
Suárez-Pellicioni, M; Núñez-Peña, M I; Colomé, A
2013-12-01
This study uses event-related brain potentials to investigate the difficulties that high math-anxious individuals face when processing dramatically incorrect solutions to simple arithmetical problems. To this end, thirteen high math-anxious (HMA) and thirteen low math-anxious (LMA) individuals were presented with simple addition problems in a verification task. The proposed solution could be correct, incorrect but very close to the correct one (small-split), or dramatically incorrect (large-split). The two groups did not differ in mathematical ability or trait anxiety. We reproduced previous results for flawed scores suggesting HMA difficulties in processing large-split solutions. Moreover, large-split solutions elicited a late positive component (P600/P3b) which was more enhanced and delayed in the HMA group. Our study proposes that the pattern of flawed scores found in previous studies (and replicated here) has to do with HMA individuals' difficulties in inhibiting an extended processing of irrelevant information (large-split solutions).
Issakhov, Alibek
2014-01-01
This paper presents a mathematical model of the thermal process from a thermal power plant to the aquatic environment of its reservoir-cooler, located in the Pavlodar region, 17 km to the north-east of the town of Ekibastuz. The thermal process in the reservoir-cooler under different hydrometeorological conditions is considered; it is solved using the three-dimensional Navier-Stokes equations and the temperature equation for an incompressible flow in a stratified medium. A numerical method based on the projection method divides the problem into three stages. At the first stage, it is assumed that the transfer of momentum occurs only by convection and diffusion, and the intermediate velocity field is solved by the fractional steps method. At the second stage, a three-dimensional Poisson equation is solved by the Fourier method in combination with the tridiagonal matrix method (Thomas algorithm). Finally, at the third stage, the transfer is assumed to be due only to the pressure gradient. The numerical method captures the basic laws of the hydrothermal processes, which are approximated qualitatively and quantitatively under different hydrometeorological conditions. PMID:24991644
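The tridiagonal matrix (Thomas) algorithm used at the second stage can be sketched as follows; this is the generic textbook forward-elimination/back-substitution scheme, not the authors' code.

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, main diagonal b,
    super-diagonal c and right-hand side d (a[0] and c[-1] are unused).
    Forward elimination followed by back substitution, O(n) total."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

In the Fourier/Thomas combination described in the abstract, the Poisson equation is transformed along one or two directions and each resulting one-dimensional system is tridiagonal, so this solver is applied line by line.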
A mathematical feasibility argument for the use of aptamers in chemotherapy and imaging ☆
Boushaba, Khalid; Levine, Howard; Hamilton, Marit Nilsen
2009-01-01
A challenge for drug design is to create molecules with optimal functions that also partition efficiently into the appropriate in vivo compartment(s). This is particularly true in cancer treatments, because cancer cells upregulate their expression of multidrug resistance transporters, which necessitates a higher concentration of extracellular drug to promote sufficiently high intracellular concentrations for cell killing. Pharmacokinetics can be improved by ancillary molecules, such as cyclodextrins, that increase the effective concentrations of hydrophobic drugs in the blood by providing hydrophobic binding pockets. However, the extent to which the extracellular concentration of drug can be increased is limited. A second approach, different from the "push" mechanism just discussed, is a "pull" mechanism by which the effective intracellular concentration of a drug is increased by a molecule with an affinity for the drug that is located inside the cell. Here we propose, and give a proof in principle, that intracellular RNA aptamers might perform this function. The mathematical model considers the following: Suppose I denotes a drug (inhibitor) that must be distributed spatially throughout a cell, but that tends to remain outside the cell due to the transport properties of the cell membrane. Suppose that E, a deleterious enzyme that binds to I, is expressed by the cell and remains in the cell. It may be that the equilibrium E + I ⇌ P (with forward rate constant k1 and reverse rate constant k-1) does not lie far enough to the right to drive enough free inhibitor into the cell to completely inhibit the enzyme. Here we evaluate the use of an intracellular aptamer with affinity for the inhibitor I to increase the efficiency of inhibitor transport across the cell membrane and thus drive the above equilibrium further to the right than would ordinarily be the case. We show that this outcome will occur if (1) the aptamer binds neither too tightly nor too weakly to the inhibitor relative to the enzyme and (2) the aptamer is
High resolution image processing on low-cost microcomputers
NASA Technical Reports Server (NTRS)
Miller, R. L.
1993-01-01
Recent advances in microcomputer technology have resulted in systems that rival the speed, storage, and display capabilities of traditionally larger machines. Low-cost microcomputers can provide a powerful environment for image processing. A new software program which offers sophisticated image display and analysis on IBM-based systems is presented. Designed specifically for a microcomputer, this program provides a wide range of functions normally found only on dedicated graphics systems, and therefore can provide most students, universities and research groups with an affordable computer platform for processing digital images. The processing of AVHRR images within this environment is presented as an example.
NASA Technical Reports Server (NTRS)
Guseman, L. F., Jr. (Principal Investigator)
1984-01-01
Several papers addressing image analysis and pattern recognition techniques for satellite imagery are presented. Texture classification, image rectification and registration, spatial parameter estimation, and surface fitting are discussed.
Protocols for Image Processing based Underwater Inspection of Infrastructure Elements
NASA Astrophysics Data System (ADS)
O'Byrne, Michael; Ghosh, Bidisha; Schoefs, Franck; Pakrashi, Vikram
2015-07-01
Image processing can be an important tool for inspecting underwater infrastructure elements like bridge piers and pile wharves. Underwater inspection often relies on the visual descriptions of divers who are not necessarily trained in the specifics of structural degradation, and the information may be vague, prone to error or open to significant variation of interpretation. Underwater vehicles, on the other hand, can be quite expensive to deploy for such inspections. Additionally, there is now significant encouragement globally towards the deployment of more offshore renewable wind turbines and wave devices, and the requirement for underwater inspection can be expected to increase significantly in the coming years. While the merit of image processing based assessment of the condition of underwater structures is understood to a certain degree, there is no existing protocol for such image based methods. This paper discusses and describes an image processing protocol for the underwater inspection of structures. A stereo imaging method is considered in this regard, and protocols are suggested for image storage, imaging, diving, and inspection. A combined underwater imaging protocol is finally presented which can be used in a variety of situations within a range of image scenes and environmental conditions affecting the imaging conditions. An example of detecting marine growth on a structure in Cork Harbour, Ireland is presented.
Image processing of globular clusters - Simulation for deconvolution tests (GlencoeSim)
NASA Astrophysics Data System (ADS)
Blazek, Martin; Pata, Petr
2016-10-01
This paper presents an algorithmic approach for efficiency tests of deconvolution algorithms in astronomical image processing. Due to the existence of noise in astronomical data, there is no certainty that a mathematically exact result of stellar deconvolution exists, so iterative methods, or other methods such as aperture or PSF-fitting photometry, are commonly used. Iterative methods are important particularly in the case of crowded fields (e.g., globular clusters). To test the efficiency of these iterative methods on various stellar fields, information about the real fluxes of the sources is essential. For this purpose, a simulator of artificial images with crowded stellar fields provides initial information on source fluxes for a robust statistical comparison of various deconvolution methods. The "GlencoeSim" simulator and the algorithms presented in this paper consider various settings of point-spread functions, noise types and spatial distributions, with the aim of producing as realistic an astronomical optical stellar image as possible.
Monitoring Car Drivers' Condition Using Image Processing
NASA Astrophysics Data System (ADS)
Adachi, Kazumasa; Yamamto, Nozomi; Yamamoto, Osami; Nakano, Tomoaki; Yamamoto, Shin
We have developed a car driver monitoring system for measuring drivers' consciousness, with which we aim to reduce car accidents caused by driver drowsiness. The system consists of three subsystems: an image capturing system with a pulsed infrared CCD camera; a system for detecting the blinking waveform from the images, using a neural network with which we can extract images of the face and eye areas; and a system for measuring drivers' consciousness by analyzing the waveform with a fuzzy inference technique and other methods. The third subsystem first extracts three factors from the waveform and analyzes them with a statistical method, whereas our previous system used only one factor. Our experiments showed that the three-factor method used here was more effective for measuring drivers' consciousness than the one-factor method described in the previous paper. Moreover, the method is more suitable for fitting the parameters of the system to each individual driver.
Future trends in image processing software and hardware
NASA Technical Reports Server (NTRS)
Green, W. B.
1979-01-01
JPL image processing applications are examined, considering future trends in fields such as planetary exploration, electronics, astronomy, computers, and Landsat. Attention is given to adaptive search and interrogation of large image data bases, the display of multispectral imagery recorded in many spectral channels, merging data acquired by a variety of sensors, and developing custom large scale integrated chips for high speed intelligent image processing user stations and future pipeline production processors.
Nevo, Uri; Özarslan, Evren; Komlosh, Michal E.; Koay, Cheng Guan; Sarlls, Joelle E.; Basser, Peter J.
2014-01-01
The pulsed-field gradient (PFG) MR experiment enables one to measure particle displacements, velocities, and even higher moments of complex fluid motions. In diffusion-weighted MRI (DWI) of living tissue, where the PFG MRI experiment is used to measure diffusion, Brownian motion is assumed to dominate the displacements causing the observed signal loss. However, motions of water molecules caused by various active biological processes occurring at different length and time scales may also cause additional dephasing of magnetization and signal loss. To help understand their relative effects on the DWI signal attenuation, we used an integrated experimental and theoretical framework: a Rheo-NMR, which served as an experimental model system to precisely prescribe a microscopic velocity distribution; and a mathematical model that relates the DW signal intensity in the Rheo-NMR to the experimental parameters that characterize the impressed velocity field. A technical innovation reported here is our use of 'natural' (in this case, polar) coordinates, both to simplify the description of the fluid motion within the Couette cell of the Rheo-NMR and to acquire and reconstruct magnitude and phase MR images obtained within it. We use this integrated model system to demonstrate how shear flow appears as pseudo-diffusion in magnitude DW MR signals obtained using PFG spin-echo (PGSE) NMR and MRI sequences. Our results lead us to reinterpret the possible causes of signal loss in DWI in vivo, and in particular to revise and generalize the previous notion of intra-voxel incoherent motion (IVIM) to describe activity-driven flows that appear as pseudo-diffusion over multiple length and time scales in living tissues. PMID:20886564
Image Processing In Laser-Beam-Steering Subsystem
NASA Technical Reports Server (NTRS)
Lesh, James R.; Ansari, Homayoon; Chen, Chien-Chung; Russell, Donald W.
1996-01-01
Conceptual design of image-processing circuitry developed for proposed tracking apparatus described in "Beam-Steering Subsystem For Laser Communication" (NPO-19069). In proposed system, desired frame rate achieved by "windowed" readout scheme in which only pixels containing and surrounding two spots read out and others skipped without being read. Image data processed rapidly and efficiently to achieve high frequency response.
Mimos: a description framework for exchanging medical image processing results.
Aubry, F; Todd-Pokropek, A
2001-01-01
Image processing plays an increasingly important role in the use of medical images, both for routine and for research purposes, due to the growing interest in functional studies (PET, MR, etc.). Unfortunately, there exist nearly as many formats for data and results coding as there are image processing procedures. While DICOM presently supports a kind of structured reporting of image studies, it does not take into account the semantics of the image handling domain, which can impede the exchange and interpretation of processing results. In order to facilitate the use of image processing results, we have designed a framework for representing them. This framework, based on what the literature calls an "ontology", extends the formalism we used in our previous work on image databases. It permits a systematic representation of the entities and information involved in the processing: not only input data, command parameters and output data, but also software and hardware descriptions, and the relationships between these different parameters. Consequently, the framework allows the building of standardized documents that can be exchanged among various users. As the framework is based on a formal grammar, documents can be encoded using XML; they are thus compatible with Internet/intranet technology. In this paper, the main characteristics of the framework are presented and illustrated. We also discuss implementation issues in integrating documents and correlated images, handling these with a classical Web browser.
Assessment of vessel diameters for MR brain angiography processed images
NASA Astrophysics Data System (ADS)
Moraru, Luminita; Obreja, Cristian-Dragos; Moldovanu, Simona
2015-12-01
The motivation was to develop an assessment method for measuring (in)visible differences between original and processed images in MR brain angiography, as a way of evaluating the status of vessel segments (i.e. the existence of occlusions or intracerebral vessel damage such as aneurysms). Generally, the image quality is limited, so we improve the performance of the evaluation through digital image processing. The goal is to determine the processing method that allows the most accurate assessment of patients with cerebrovascular diseases. A total of 10 MR brain angiography images were processed with the following techniques: histogram equalization, Wiener filtering, linear contrast adjustment, contrast-limited adaptive histogram equalization, bias correction and the Marr-Hildreth filter. Each original image and its processed versions were analyzed in a stacking procedure so that the same vessel and its corresponding diameter were measured. Original and processed images were evaluated by measuring the vessel diameter (in pixels) along an established direction and at a precise anatomic location. The vessel diameter was calculated using an ImageJ plugin. Mean diameter measurements differ significantly across the same segment and across processing techniques. The best results are provided by the Wiener filter and linear contrast adjustment, and the worst by the Marr-Hildreth filter.
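Of the techniques listed, global histogram equalization is simple enough to sketch; the version below is the generic textbook formulation for 8-bit grayscale images, not the exact procedure used in the study.

```python
import numpy as np

def hist_equalize(img, levels=256):
    """Global histogram equalization for an 8-bit grayscale image:
    map each gray level through the normalized cumulative histogram."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = hist.cumsum().astype(float)
    cdf_min = cdf[cdf > 0].min()               # first occupied level
    denom = max(cdf[-1] - cdf_min, 1.0)        # guard for flat images
    norm = np.clip((cdf - cdf_min) / denom, 0.0, 1.0)
    lut = np.round(norm * (levels - 1)).astype(np.uint8)
    return lut[img]                            # apply the lookup table
```

Equalization stretches the occupied gray levels over the full output range, which is why it can make faint vessel boundaries easier to measure, at the cost of also amplifying background noise.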
Graphical user interface for image acquisition and processing
Goldberg, Kenneth A.
2002-01-01
We describe an event-driven, GUI-based image acquisition interface for the IDL programming environment, designed for CCD camera control and image acquisition directly into IDL, where image manipulation and data analysis can be performed, together with a toolbox of real-time analysis applications. Running the acquisition hardware directly from IDL removes the need to first save images in one program and then import the data into IDL for analysis in a second step; bringing the data directly into IDL makes it possible to apply IDL image processing and display functions in real time. The program allows control over the available charge-coupled device (CCD) detector parameters, data acquisition, file saving and loading, and image manipulation and processing, all from within IDL. It is built with IDL's widget libraries, which control the on-screen display and user interface.
Optical Processing of Speckle Images with Bacteriorhodopsin for Pattern Recognition
NASA Technical Reports Server (NTRS)
Downie, John D.; Tucker, Deanne (Technical Monitor)
1994-01-01
Logarithmic processing of images with multiplicative noise characteristics can be utilized to transform the image into one with an additive noise distribution. This simplifies subsequent image processing steps for applications such as image restoration or correlation for pattern recognition. One particularly common form of multiplicative noise is speckle, for which the logarithmic operation not only produces additive noise, but also makes it of constant variance (signal-independent). We examine the optical transmission properties of some bacteriorhodopsin films here and find them well suited to implement such a pointwise logarithmic transformation optically in a parallel fashion. We present experimental results of the optical conversion of speckle images into transformed images with additive, signal-independent noise statistics using the real-time photochromic properties of bacteriorhodopsin. We provide an example of improved correlation performance in terms of correlation peak signal-to-noise for such a transformed speckle image.
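The premise of the abstract above can be checked numerically in a few lines: with a mean-one multiplicative speckle model (a generic gamma model of my choosing, not the authors' bacteriorhodopsin film experiment), the logarithm turns the corruption into an additive term whose spread does not depend on the signal level.

```python
import numpy as np

rng = np.random.default_rng(1)
signal = np.full(100_000, 5.0)                # constant underlying intensity
speckle = rng.gamma(shape=4.0, scale=0.25, size=signal.size)  # mean-1 multiplicative noise
noisy = signal * speckle                      # speckled observation

log_img = np.log(noisy)                       # pointwise logarithmic transformation
residual = log_img - np.log(signal)           # = log(speckle): additive, signal-independent
print(round(residual.std(), 3))               # same spread whatever `signal` is
```

Repeating the experiment with a different `signal` value leaves `residual.std()` unchanged, which is exactly the constant-variance property the optical transformation exploits.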
Metal artifact reduction in dental CT images using polar mathematical morphology.
Naranjo, Valery; Lloréns, Roberto; Alcañiz, Mariano; López-Mir, Fernando
2011-04-01
Most dental implant planning systems use a 3D representation of the patient's CT scan, as it provides a more intuitive view of the human jaw. The presence of metallic objects in the jaw, such as amalgam or gold fillings, provokes artifacts like streaking and beam hardening, which make the reconstruction process difficult. To reduce these artifacts, several methods have been proposed that use, in different ways, the raw data obtained directly from the tomograph. In DICOM-based applications, however, this information is not available, hence the need for a method that handles this task in the DICOM domain. The presented method performs morphological filtering in the polar domain, yielding output images less affected by artifacts (even in cases of multiple metallic objects) without significant smoothing of the anatomic structures, which greatly improves the 3D reconstruction. The algorithm has been automated and compared with other image denoising methods, with successful results. PMID:21227532
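A minimal sketch of the core idea: resample a slice into polar coordinates and apply a grey-level morphological opening along the angular axis. The function choices, the structuring element, and the use of a random array in place of a CT slice are all my assumptions, not the authors' actual pipeline.

```python
import numpy as np
from skimage.transform import warp_polar
from skimage.morphology import opening

rng = np.random.default_rng(2)
ct_slice = rng.random((128, 128))        # stand-in for an axial dental CT slice

polar = warp_polar(ct_slice, radius=64)  # rows = angles, columns = radii
# a thin streak radiating from the centre becomes a narrow feature spanning
# few angles in polar coordinates, so an opening across the angular axis
# suppresses it while leaving broad anatomic structures intact
filtered = opening(polar, np.ones((7, 1)))
```

Because grey-level opening is anti-extensive, the filtered polar image is everywhere at or below the original, which is the mechanism that removes bright streaks without adding new structure.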
Mathematical simulation of thermal decomposition processes in coking polymers during intense heating
Shlenskii, O.F.; Polyakov, A.A.
1994-12-01
Description of nonstationary heat transfer in heat-shielding materials based on cross-linked polymers, mathematical simulation of the chemical engineering processes of treating coking and fiery coals, and design calculations all require taking thermal-destruction kinetics into account. The kinetics of the chemical transformations determines how the density of the substance changes with temperature, time, the heat-release function, and other material properties. The traditionally accepted description of the thermal-destruction kinetics of coking materials is based on a set of kinetic equations in which only chemical transformations are taken into account. Such an approach, however, does not necessarily agree with experimental data obtained in the case of intense heating. The authors propose including in the set of kinetic equations parameters characterizing the decrease of intermolecular interaction within a comparatively narrow temperature interval (20-40 K). In the neighborhood of a certain temperature T{sub 1}, called the limiting temperature of thermal decomposition, the decrease in intermolecular interaction causes an increase in the rates of chemical and phase transformations. This enhancement of the destruction processes has been found experimentally by the contact thermal analysis method.
Mathematical model of solid food pasteurization by ohmic heating: influence of process parameters.
Marra, Francesco
2014-01-01
Pasteurization of a solid food undergoing ohmic heating has been analysed by means of a mathematical model involving the simultaneous solution of Laplace's equation, which describes the distribution of electrical potential within the food; the heat transfer equation, with a source term derived from the electrical potential; and the inactivation kinetics of microorganisms likely to contaminate the product. The model uses temperature-dependent thermophysical and electrical properties. Previous work has shown that heat is lost from the product to the external environment during ohmic heating. The current model predicts that, when temperature gradients are established near the outer ohmic cell surface, the coldest areas appear at the junctions of the electrodes with the lateral sample surface. For this reason, the colder external shell, rather than internal points (typically the geometric centre, as in classical purely conductive heat transfer), is the critical region to monitor. An analysis is carried out to understand the influence of the pasteurisation process parameters on this temperature distribution. A successful model improves understanding of these processing phenomena, which in turn helps reduce the temperature differential within the product and ultimately yields a more uniformly pasteurized product.
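The coupled problem described above can be summarized as follows; the symbols are generic choices of mine ($V$ electrical potential, $\sigma$ electrical conductivity, $k_d$ the thermal inactivation rate), not necessarily the author's notation:

```latex
\nabla \cdot \bigl( \sigma(T)\,\nabla V \bigr) = 0
  \quad \text{(electrical potential, Laplace-type)}

\rho\, c_p \,\frac{\partial T}{\partial t}
  = \nabla \cdot \bigl( k(T)\,\nabla T \bigr) + \sigma(T)\,\lvert \nabla V \rvert^{2}
  \quad \text{(heat transfer with ohmic source term)}

\frac{dN}{dt} = -\,k_d(T)\,N
  \quad \text{(first-order microbial inactivation)}
```

The coupling runs in one direction at each step: the potential field sets the Joule source $\sigma(T)\lvert\nabla V\rvert^{2}$, the resulting temperature field drives the inactivation rate, and the temperature dependence of $\sigma$ feeds back into the potential equation.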
NASA Astrophysics Data System (ADS)
Ataei, Sh; Mahmud, Z.; Khalid, M. N.
2014-04-01
Student learning outcomes clarify what students should know and be able to demonstrate after completing a course, so one central issue in teaching and learning is how to assess students' learning. This paper describes an application of the dichotomous Rasch measurement model to measuring the cognitive processes involved in engineering students' learning of mathematics. The study examines the cognitive ability of 54 engineering students learning Calculus III, using 31 items based on Bloom's Taxonomy. The results indicate that some of the examination questions are either too difficult or too easy for the majority of the students. The analysis yields fit statistics that can identify departures of the data from the theoretical Rasch model. The study identified potentially misfitting items on the basis of ZSTD and removed items whose outfit MNSQ was above 1.3 or below 0.7. It is therefore recommended that these items be reviewed or revised to better match the range of students' ability in the course.
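The misfit rule quoted above (flag items with outfit MNSQ above 1.3 or below 0.7) is easy to state in code; the item names and MNSQ values below are invented for illustration, and only the cut-offs come from the text.

```python
# hypothetical outfit mean-square (MNSQ) values per examination item
items = {"Q1": 0.95, "Q2": 1.42, "Q3": 0.65, "Q4": 1.10}

# an item fits the Rasch model if its outfit MNSQ lies within [0.7, 1.3]
misfit = sorted(k for k, v in items.items() if not 0.7 <= v <= 1.3)
print(misfit)  # → ['Q2', 'Q3']: candidates for review or revision
```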
ERIC Educational Resources Information Center
Scheiner, Thorsten
2016-01-01
The initial assumption of this article is that there is an overemphasis on abstraction-from-actions theoretical approaches in research on knowing and learning mathematics. This article uses a critical reflection on research on students' ways of constructing mathematical concepts to distinguish between abstraction-from-actions theoretical…
ERIC Educational Resources Information Center
Stylianou, Despina A.
2013-01-01
Representation and justification are two central "mathematical practices". In the past, each has been examined to gain insights in the functions that they have in students' mathematical problem solving. Here, we examine the ways that representation and justification interact and influence the development of one another. We focus on the…
Using Mental Imagery Processes for Teaching and Research in Mathematics and Computer Science
ERIC Educational Resources Information Center
Arnoux, Pierre; Finkel, Alain
2010-01-01
The role of mental representations in mathematics and computer science (for teaching or research) is often downplayed or even completely ignored. Using an ongoing work on the subject, we argue for a more systematic study and use of mental representations, to get an intuition of mathematical concepts, and also to understand and build proofs. We…
ERIC Educational Resources Information Center
Schuchardt, Anita M.; Schunn, Christian D.
2016-01-01
Amid calls for integrating science, technology, engineering, and mathematics (iSTEM) in K-12 education, there is a pressing need to uncover productive methods of integration. Prior research has shown that increasing contextual linkages between science and mathematics is associated with student problem solving and conceptual understanding. However,…
Partial difference operators on weighted graphs for image processing on surfaces and point clouds.
Lozes, Francois; Elmoataz, Abderrahim; Lezoray, Olivier
2014-09-01
Partial difference equations (PdEs) and variational methods for image processing on Euclidean domains are very well established, because they permit a large range of real computer vision problems to be solved. With the recent advent of many 3D sensors, there is growing interest in transposing and solving PDEs on surfaces and point clouds. In this paper, we propose a simple method for solving such PDEs using the framework of PdEs on graphs. This approach enables us to transcribe, for surfaces and point clouds, many models and algorithms designed for image processing. To illustrate our proposal, three problems are considered: (1) p-Laplacian restoration and inpainting; (2) PDE-based mathematical morphology; and (3) active-contour segmentation.
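A toy version of the graph framework can be sketched as follows: build a weighted k-nearest-neighbour graph over a point cloud and run explicit iterations of weighted-average smoothing, which corresponds to the p = 2 special case of the paper's p-Laplacian. The random cloud, the Gaussian weight scale, and the step size are all illustrative choices of mine.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(3)
pts = rng.random((200, 3))                       # toy 3D point cloud
f = pts[:, 0] + 0.1 * rng.standard_normal(200)   # noisy signal on the vertices

tree = cKDTree(pts)
dist, idx = tree.query(pts, k=6)                 # each vertex + its 5 nearest neighbours
w = np.exp(-dist[:, 1:] ** 2 / 0.05)             # Gaussian edge weights

lam = 0.5                                        # step size of the explicit scheme
for _ in range(20):                              # graph diffusion (p = 2 smoothing)
    neigh_avg = (w * f[idx[:, 1:]]).sum(axis=1) / w.sum(axis=1)
    f = (1 - lam) * f + lam * neigh_avg
```

The same loop structure carries over to a surface mesh: only the neighbourhood construction changes, which is the sense in which the graph framework transcribes image-processing algorithms to surfaces and point clouds.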
IR camera system with an advanced image processing technologies
NASA Astrophysics Data System (ADS)
Ohkubo, Syuichi; Tamura, Tetsuo
2016-05-01
We have developed image processing technologies that resolve issues caused by the inherent characteristics of UFPA (uncooled focal plane array) sensors, in order to broaden their applications. For example, the large time constant of an uncooled IR (infrared) sensor limits its field of application, because motion blur arises when monitoring objects moving at high speed. The developed processing can eliminate this blur and retrieve nearly the image that would be observed with a stationary object. It is based on the idea that the IR sensor's output is the convolution of the radiated IR energy from the object with the impulse response of the sensor; with knowledge of the impulse response and the object's speed, the IR energy can be deconvolved from the observed images. We have successfully retrieved a blur-free image with an IR sensor of 15 ms time constant, under conditions in which the object moves at a speed of about 10 pixels per frame at 60 Hz. Image processing for reducing FPN (fixed-pattern noise) has also been developed. A UFPA responsive only in a narrow wavelength region, e.g., around 8 μm, is appropriate for measuring the surface of glass, but it suffers from severe FPN because of its lower sensitivity compared with a broadband 8-13 μm sensor. The developed processing exploits images of the shutter itself and can reduce FPN significantly.
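The deconvolution idea can be sketched in one dimension: blur a trace with a known first-order impulse response (the 15 ms time constant from the text), then invert the blur in the frequency domain with a small regularization term. The sample spacing, regularization value, and test pulse are mine, not the camera's actual processing.

```python
import numpy as np

tau = 15.0                                   # sensor time constant [ms], as in the text
t = np.arange(200.0)                         # 1 ms samples
h = np.exp(-t / tau)
h /= h.sum()                                 # normalized sensor impulse response

truth = np.zeros_like(t)
truth[60:90] = 1.0                           # hot object passing through one pixel
blurred = np.convolve(truth, h)[: t.size]    # what the slow sensor reports

H = np.fft.rfft(h, t.size)
eps = 1e-3                                   # regularization: avoids dividing by ~0
restored = np.fft.irfft(np.fft.rfft(blurred) * H.conj() / (np.abs(H) ** 2 + eps),
                        t.size)
```

The restored trace recovers the sharp pulse edges that the sensor's exponential lag smears out; without `eps`, frequencies where `H` is near zero would blow up the noise instead.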
Image processing system to analyze droplet distributions in sprays
NASA Technical Reports Server (NTRS)
Bertollini, Gary P.; Oberdier, Larry M.; Lee, Yong H.
1987-01-01
An image processing system was developed that automatically analyzes the size distributions in fuel spray video images. Images are generated by using pulsed laser light to freeze droplet motion in the spray sample volume under study. This coherent illumination source produces images containing droplet diffraction patterns that represent the droplets' degree of focus. The analysis is performed by extracting feature data describing the droplet diffraction patterns, which allows the system to distinguish droplets from image anomalies and to measure only those droplets considered in focus. Unique features of the system are the fully automated analysis and droplet feature measurement from the grayscale image. The feature extraction and image restoration algorithms used in the system are described. Preliminary performance data are also given for two experiments: one compares a synthesized distribution measured manually and automatically; the second compares a real spray distribution measured with current methods against the automatic system.
Processing of polarametric SAR images. Final report
Warrick, A.L.; Delaney, P.A.
1995-09-01
The objective of this work was to develop a systematic method of combining multifrequency polarized SAR images. It is shown that the traditional methods of correlation, hard targets, and template matching fail to produce acceptable results, so a new algorithm was developed and tested. The new approach combines the three traditional methods with an interpolation method. An example demonstrating the new algorithm's performance is shown. The results are summarized, and suggestions for future research are presented.
Processing ISS Images of Titan's Surface
NASA Technical Reports Server (NTRS)
Perry, Jason; McEwen, Alfred; Fussner, Stephanie; Turtle, Elizabeth; West, Robert; Porco, Carolyn; Knowles, Ben; Dawson, Doug
2005-01-01
One of the primary goals of the Cassini-Huygens mission, in orbit around Saturn since July 2004, is to understand the surface and atmosphere of Titan. Surface investigations are primarily accomplished with RADAR, the Visual and Infrared Mapping Spectrometer (VIMS), and the Imaging Science Subsystem (ISS) [1]. The latter two use methane "windows", regions in Titan's reflectance spectrum where its atmosphere is most transparent, to observe the surface. For VIMS, this produces clear views of the surface near 2 and 5 microns [2]. ISS uses a narrow continuum-band filter (CB3) at 938 nanometers. While these methane windows provide our best views of the surface, the images produced are not as crisp as ISS images of satellites like Dione and Iapetus [3]. Given a reasonable estimate of contrast (approx. 30%), the apparent resolution of features is approximately 5 pixels, owing to the effects of the atmosphere and the Modulation Transfer Function of the camera [1,4]. The atmospheric haze also reduces contrast, especially at increasing emission angles [5].
Image processing of underwater multispectral imagery
Zawada, D.G.
2003-01-01
Capturing in situ fluorescence images of marine organisms presents many technical challenges. The effects of the medium, as well as the particles and organisms within it, are intermixed with the desired signal. Methods for extracting and preparing the imagery for analysis are discussed in reference to a novel underwater imaging system called the low-light-level underwater multispectral imaging system (LUMIS). The instrument supports both uni- and multispectral collections, each of which is discussed in the context of an experimental application. In unispectral mode, LUMIS was used to investigate the spatial distribution of phytoplankton. A thin sheet of laser light (532 nm) induced chlorophyll fluorescence in the phytoplankton, which was recorded by LUMIS. Inhomogeneities in the light sheet led to the development of a beam-pattern-correction algorithm. Separating individual phytoplankton cells from a weak background fluorescence field required a two-step procedure consisting of edge detection followed by a series of binary morphological operations. In multispectral mode, LUMIS was used to investigate the bio-assay potential of fluorescent pigments in corals. Problems with the commercial optical-splitting device produced nonlinear distortions in the imagery. A tessellation algorithm, including an automated tie-point-selection procedure, was developed to correct the distortions. Only pixels corresponding to coral polyps were of interest for further analysis. Extraction of these pixels was performed by a dynamic global-thresholding algorithm.
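The two-step extraction described above (a global threshold followed by binary morphological clean-up) can be sketched with synthetic data; Otsu's method stands in for the paper's dynamic global-thresholding algorithm, which is a substitution on my part, and the synthetic "cell" is purely illustrative.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import binary_opening, disk

rng = np.random.default_rng(4)
img = rng.normal(0.2, 0.05, (128, 128))   # weak background fluorescence field
img[30:60, 30:60] += 0.6                  # one bright fluorescing region

mask = img > threshold_otsu(img)          # global threshold isolates bright pixels
clean = binary_opening(mask, disk(2))     # opening removes speckle-sized false hits
```

The opening step matters because a threshold alone passes isolated noise pixels; erosion followed by dilation keeps only connected regions at least as large as the structuring element.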
Need for image processing in infrared camera design
NASA Astrophysics Data System (ADS)
Allred, Lloyd G.; Jones, Martin H.
2000-03-01
While the value of image processing has long been recognized, it is usually applied during post-processing. For scientific applications, the presence of large noise errors, data dropouts, and dead sensors would invalidate any conclusion drawn from the data until noise removal and sensor calibration have been accomplished. With the growing need for ruggedized, real-time image acquisition systems, including automotive and aerospace applications, post-processing may not be an option; with post-processing, the operator never has the opportunity to view the cleaned-up image. Focal plane arrays are plagued by bad sensors, high manufacturing costs, and low yields, often forcing a six-figure price tag. Perhaps infrared camera design is too serious an issue to leave to the camera manufacturers. Alternative camera designs using a single spinning mirror can yield clean infrared images at rates up to 12,000 frames per second using a fraction of the hardware in current focal-plane arrays. With a 768 x 5 sensor array, redundant 2048 x 768 images are produced by each row of the array. Arrays with flawed sensors would no longer need to be discarded, because data from dead sensors can simply be dropped, increasing manufacturing yields and reducing costs. Furthermore, very fast image processing chips are available, allowing real-time morphological image processing (including real-time sensor calibration); this significantly increases thermal precision and makes thermal imaging amenable to a wider variety of applications.
Optical Signal Processing: Poisson Image Restoration and Shearing Interferometry
NASA Technical Reports Server (NTRS)
Hong, Yie-Ming
1973-01-01
Optical signal processing can be performed in either digital or analog systems. Digital computers and coherent optical systems are discussed as they are used in optical signal processing. Topics include: image restoration; phase-object visualization; image contrast reversal; optical computation; image multiplexing; and fabrication of spatial filters. Digital optical data processing deals with restoration of images degraded by signal-dependent noise. When the input data of an image restoration system are the numbers of photoelectrons received from various areas of a photosensitive surface, the data are Poisson distributed with mean values proportional to the illuminance of the incoherently radiating object and background light. Optical signal processing using coherent optical systems is also discussed. Following a brief review of the pertinent details of Ronchi's diffraction grating interferometer, moire effect, carrier-frequency photography, and achromatic holography, two new shearing interferometers based on them are presented. Both interferometers can produce variable shear.
Omega: An Object-Oriented Image/Symbol Processing Environment
NASA Astrophysics Data System (ADS)
Carlotto, Mark J.; Fong, Jennifer B.
1989-01-01
A Common Lisp software system to support integrated image and symbolic processing applications is described. The system, termed Omega is implemented on a Symbolics Lisp Machine and is organized into modules to facilitate the development of user applications and for software transportability. An object-oriented programming language similar to Symbolics Zetalisp/Flavors is implemented in Common Lisp and is used for creating symbolic objects known as tokens. Tokens are used to represent images, significant areas in images, and regions that define the spatial extent of the significant areas. The extent of point, line, and areal features is represented by polygons, label maps, boundary points, row- and column-oriented run-length encoded rasters, and bounding rectangles. Macros provide a common means for image processing functions and spatial operators to access spatial representations. The implementation of image processing, segmentation, and symbolic processing functions within Omega are described.
Data management in pattern recognition and image processing systems
NASA Technical Reports Server (NTRS)
Zobrist, A. L.; Bryant, N. A.
1976-01-01
Data management considerations are important to any system that handles large volumes of data or in which the manipulation of data is technically sophisticated. A particular problem is the introduction of image-formatted files into the mainstream of data processing applications. This report describes a comprehensive system for the manipulation of image, tabular, and graphical data sets, involving conversions between the various data types. A key characteristic is the use of image processing technology to accomplish data management tasks; because of this, the term 'image-based information system' has been adopted.
Theoretical Analysis of Radiographic Images by Nonstationary Poisson Processes
NASA Astrophysics Data System (ADS)
Tanaka, Kazuo; Yamada, Isao; Uchida, Suguru
1980-12-01
This paper deals with the noise analysis of radiographic images obtained in the usual fluorescent screen-film system. The theory of nonstationary Poisson processes is applied to the analysis of the radiographic images containing the object information. The ensemble averages, the autocorrelation functions, and the Wiener spectrum densities of the light-energy distribution at the fluorescent screen and of the film optical-density distribution are obtained. The detection characteristics of the system are evaluated theoretically. Numerical examples of the one-dimensional image are shown and the results are compared with those obtained under the assumption that the object image is related to the background noise by the additive process.
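The defining property of the model above, that the count variance equals the local mean so the noise is nonstationary wherever the illuminance varies, is easy to verify by simulation (pure NumPy; the illuminance values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(5)
illuminance = np.array([10.0, 50.0, 200.0, 1000.0])   # local mean photon counts
samples = rng.poisson(illuminance, size=(20_000, 4))  # nonstationary Poisson draws

ratio = samples.var(axis=0) / illuminance             # variance / mean per location
# for a Poisson process this ratio is ~1 everywhere: the noise power tracks
# the local illuminance rather than forming an additive, stationary background
```

This is the contrast the paper draws with the additive-process assumption, under which the background noise would have the same variance at every point of the image.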
ERIC Educational Resources Information Center
Muis, Krista R.; Psaradellis, Cynthia; Chevrier, Marianne; Di Leo, Ivana; Lajoie, Susanne P.
2016-01-01
We developed an intervention based on the learning by teaching paradigm to foster self-regulatory processes and better learning outcomes during complex mathematics problem solving in a technology-rich learning environment. Seventy-eight elementary students were randomly assigned to 1 of 2 conditions: learning by preparing to teach, or learning for…
ERIC Educational Resources Information Center
Paddack, Megan
2009-01-01
The purpose of this study was to investigate and describe how middle school mathematics teachers "make meaning" of proofs and the process of proving in the context of their classroom practices. A framework of "making meaning," created by the researcher, guided the data collection and analysis phases of the study. This framework describes the five…
ERIC Educational Resources Information Center
Samaniego, Kimberly Anne OBrien
2013-01-01
Efforts to improve student performance on high-stakes assessments in mathematics place teachers in the epicenter of multiple reform expectations. While studies have documented how teachers implement single reforms, very little is known about teachers' implementation processes when multiple expectations are imposed. With a focus on how…
ERIC Educational Resources Information Center
Kaosa-ard, Chanapat; Erawan, Waraporn; Damrongpanit, Suntonrapot; Suksawang, Poonpong
2015-01-01
The researcher applied latent profile analysis to study the difference of the students' mathematical process skill. These skills are problem solving skills, reasoning skills, communication and presentation skills, connection knowledge skills, and creativity skills. Samples were 2,485 seventh-grade students obtained from Multi-stage Random…
ERIC Educational Resources Information Center
Martin, Jason
2013-01-01
Taylor series convergence is a complicated mathematical structure which incorporates multiple concepts. Therefore, it can be very difficult for students to initially comprehend. How might students make sense of this structure? How might experts make sense of this structure? To answer these questions, an exploratory study was conducted using…
A Micro-analysis of Video Images from a Mathematics Lesson.
ERIC Educational Resources Information Center
Chaussecourte, Philippe
2001-01-01
Based on the analysis of a video recording of a French Grade 7 mathematics lesson, this study bears on two specific episodes in which two pupils, a boy and a girl, successively go to the board and correct an exercise. The methodology used is a psychoanalytically oriented clinical approach enriched with a micro-analytical description of the behavior of…
STEM Images Revealing STEM Conceptions of Pre-Service Chemistry and Mathematics Teachers
ERIC Educational Resources Information Center
Akaygun, Sevil; Aslan-Tutak, Fatma
2016-01-01
Science, technology, engineering, and mathematics (STEM) education has been an integral part of many countries' educational policies. In the last decade, various practices have been implemented to make the STEM areas valuable for the 21st-century generation. These actions require reconsideration of both pre- and in-service teacher education, because those who…
Image Harvest: an open-source platform for high-throughput plant image processing and analysis
Knecht, Avi C.; Campbell, Malachy T.; Caprez, Adam; Swanson, David R.; Walia, Harkamal
2016-01-01
High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable for processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is an open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. PMID:27141917
NASA Technical Reports Server (NTRS)
Harrison, D. C.; Sandler, H.; Miller, H. A.
1975-01-01
The present collection of papers outlines advances in ultrasonography, scintigraphy, and commercialization of medical technology as applied to cardiovascular diagnosis in research and clinical practice. Particular attention is given to instrumentation, image processing and display. As necessary concomitants to mathematical analysis, recently improved magnetic recording methods using tape or disks and high-speed computers of large capacity are coming into use. Major topics include Doppler ultrasonic techniques, high-speed cineradiography, three-dimensional imaging of the myocardium with isotopes, sector-scanning echocardiography, and commercialization of the echocardioscope. Individual items are announced in this issue.
Processing, analysis, recognition, and automatic understanding of medical images
NASA Astrophysics Data System (ADS)
Tadeusiewicz, Ryszard; Ogiela, Marek R.
2004-07-01
This paper presents some new ideas for the automatic understanding of the semantic content of medical images. The idea under consideration can be seen as the next step along a path that starts with capturing images in digital form as two-dimensional data structures, continues through image processing as a tool for enhancing image visibility and readability, applies image-analysis algorithms to extract selected features of images (or parts of images, e.g., objects), and ends with algorithms devoted to image classification and recognition. In the paper we try to explain why all of the procedures mentioned above cannot give full satisfaction in many important medical problems, where we need to understand the semantic sense of an image, not merely describe it in terms of selected features and/or classes. The general idea of automatic image understanding is presented, along with some remarks on the successful application of such ideas for increasing the potential and performance of computer vision systems dedicated to advanced medical image analysis. This is achieved by applying a linguistic description of the picture's merit content. We then try to use new AI methods to undertake the task of automatically understanding image semantics in intelligent medical information systems. Successfully obtaining the crucial semantic content of a medical image may contribute considerably to the creation of new intelligent multimedia cognitive medical systems. Thanks to the new idea of cognitive resonance between the stream of data extracted from the image using linguistic methods and expectations taken from the representation of medical knowledge, it is possible to understand the merit content of the image even if the form of the image is very different from any known pattern.
An Image Processing Approach to Linguistic Translation
NASA Astrophysics Data System (ADS)
Kubatur, Shruthi; Sreehari, Suhas; Hegde, Rajeshwari
2011-12-01
The art of translation is as old as written literature. Developments since the Industrial Revolution have influenced the practice of translation, nurturing schools, professional associations, and standards. In this paper, we propose a method for translating typed Kannada text (taken as an image) into its equivalent English text. The National Instruments (NI) Vision Assistant (version 8.5) has been used for Optical Character Recognition (OCR). We developed a new way of transliteration (which we call NIV transliteration) to simplify the training of characters. We also built a special type of dictionary for the purpose of translation.
Detecting jaundice by using digital image processing
NASA Astrophysics Data System (ADS)
Castro-Ramos, J.; Toxqui-Quitl, C.; Villa Manriquez, F.; Orozco-Guillen, E.; Padilla-Vivanco, A.; Sánchez-Escobar, JJ.
2014-03-01
When strong jaundice presents, newborns or adults must undergo clinical tests such as serum bilirubin measurement, which can be traumatic for patients. Jaundice often accompanies liver diseases such as hepatitis or liver cancer. To avoid additional trauma, we propose detecting jaundice (icterus) in newborns or adults with a painless, non-invasive method. By acquiring digital color images of the palms, soles, and forehead, we analyze RGB attributes and diffuse reflectance spectra as parameters to characterize patients with or without jaundice, and we correlate those parameters with bilirubin levels. By applying a support vector machine we distinguish between healthy and sick patients.
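As a rough sketch of the classification step, the snippet below trains a minimal linear soft-margin SVM (sub-gradient descent on the hinge loss) on synthetic RGB triplets. The two color centroids are hypothetical, chosen only so that "jaundiced" samples have a depressed blue channel; they are not values from the study:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200, seed=0):
    """Minimal linear soft-margin SVM via sub-gradient descent on the
    regularized hinge loss; labels y must be +1/-1."""
    rng = np.random.default_rng(seed)
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            if y[i] * (X[i] @ w + b) < 1:   # margin violated: push
                w = (1 - lr * lam) * w + lr * y[i] * X[i]
                b += lr * y[i]
            else:                           # margin satisfied: only shrink
                w = (1 - lr * lam) * w
    return w, b

rng = np.random.default_rng(1)
# Hypothetical RGB centroids; jaundiced skin modeled with a lower blue channel.
healthy = rng.normal([0.80, 0.60, 0.55], 0.05, size=(50, 3))
jaundiced = rng.normal([0.85, 0.70, 0.30], 0.05, size=(50, 3))
X = np.vstack([healthy, jaundiced])
y = np.array([1] * 50 + [-1] * 50)
w, b = train_linear_svm(X, y)
accuracy = np.mean(np.sign(X @ w + b) == y)
```

A production classifier would of course be trained on measured reflectance features and validated against measured bilirubin, as the study describes.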
High performance image processing of SPRINT
DeGroot, T.
1994-11-15
This talk will describe computed tomography (CT) reconstruction using filtered back-projection on SPRINT parallel computers. CT is a computationally intensive task, typically requiring several minutes to reconstruct a 512x512 image. SPRINT and other parallel computers can be applied to CT reconstruction to reduce computation time from minutes to seconds. SPRINT is a family of massively parallel computers developed at LLNL. SPRINT-2.5 is a 128-node multiprocessor whose performance can exceed twice that of a Cray-Y/MP; SPRINT-3 will be 10 times faster. The parallel algorithms for filtered back-projection and their execution on SPRINT parallel computers will be described.
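The filtered back-projection step itself is compact. The sketch below reconstructs a centered uniform disc from its analytic parallel projections (ramp filter in the Fourier domain, then back-projection); it illustrates the algorithm only, not the SPRINT implementation:

```python
import numpy as np

def fbp_disc(n=64, radius=20.0, n_angles=90):
    """Filtered back-projection of a centered uniform disc (density 1)
    from its analytic parallel projections p(s) = 2*sqrt(r^2 - s^2)."""
    s = np.arange(n) - n / 2                                   # detector coords
    p = 2.0 * np.sqrt(np.maximum(radius**2 - s**2, 0.0))
    # Ramp filter in the Fourier domain; zero-pad to 2n so the circular
    # convolution approximates the linear one without wrap-around.
    m = 2 * n
    q = np.real(np.fft.ifft(np.fft.fft(p, m) * np.abs(np.fft.fftfreq(m))))[:n]
    # Back-project: the disc is rotationally symmetric, so the same
    # filtered projection serves every angle.
    x = np.arange(n) - n / 2
    X, Y = np.meshgrid(x, x)
    recon = np.zeros((n, n))
    for theta in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        recon += np.interp(X * np.cos(theta) + Y * np.sin(theta), s, q)
    return recon * np.pi / n_angles

recon = fbp_disc()  # values near 1 inside the disc, near 0 outside
```

A real scanner would filter each angle's projection separately; the back-projection loop is what parallel machines like SPRINT distribute across nodes.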
Evaluation of clinical image processing algorithms used in digital mammography.
Zanca, Federica; Jacobs, Jurgen; Van Ongeval, Chantal; Claus, Filip; Celis, Valerie; Geniets, Catherine; Provost, Veerle; Pauwels, Herman; Marchal, Guy; Bosmans, Hilde
2009-03-01
Screening is the only proven approach to reduce the mortality of breast cancer, but significant numbers of breast cancers remain undetected even when all quality assurance guidelines are implemented. With the increasing adoption of digital mammography systems, image processing may be a key factor in the imaging chain. Although to our knowledge statistically significant effects of manufacturer-recommended image processing algorithms have not been previously demonstrated, the subjective experience of our radiologists, that the apparent image quality can vary considerably between different algorithms, motivated this study. This article addresses the impact of five such algorithms on the detection of clusters of microcalcifications. A database of unprocessed (raw) images of 200 normal digital mammograms, acquired with the Siemens Novation DR, was collected retrospectively. Realistic simulated microcalcification clusters were inserted in half of the unprocessed images. All unprocessed images were subsequently processed with five manufacturer-recommended image processing algorithms (Agfa Musica 1, IMS Raffaello Mammo 1.2, Sectra Mamea AB Sigmoid, Siemens OPVIEW v2, and Siemens OPVIEW v1). Four breast imaging radiologists were asked to locate and score the clusters in each image on a five-point rating scale. The free-response data were analyzed by the jackknife free-response receiver operating characteristic (JAFROC) method and, for comparison, also with the receiver operating characteristic (ROC) method. JAFROC analysis revealed highly significant differences between the processing algorithms (F = 8.51, p < 0.0001), suggesting that image processing strongly impacts the detectability of clusters. Siemens OPVIEW2 and Siemens OPVIEW1 yielded the highest and lowest performances, respectively. ROC analysis of the data also revealed significant differences between the processing algorithms, but at lower significance (F = 3.47, p = 0.0305) than JAFROC. Both statistical analysis methods revealed that the
Teixeira, Ana P; Carinhas, Nuno; Dias, João M L; Cruz, Pedro; Alves, Paula M; Carrondo, Manuel J T; Oliveira, Rui
2007-12-01
Systems biology is an integrative science that aims at the global characterization of biological systems. Huge amounts of data regarding gene expression, protein activity, and metabolite concentrations are collected by designing systematic genetic or environmental perturbations. The challenge is then to integrate such data into a global model in order to provide a global picture of the cell. The analysis of these data is largely dominated by nonparametric modelling tools. In contrast, classical bioprocess engineering has been primarily founded on first-principles models, but it has systematically overlooked the details of the embedded biological system. The full complexity of biological systems is currently addressed by systems biology, and this knowledge can now be taken up by engineers to decide how to optimally design and operate their processes. This paper discusses possible methodologies for the integration of systems biology and bioprocess engineering, with emphasis on applications involving animal cell cultures. At the mathematical systems level, the discussion is focused on hybrid semi-parametric systems as a way to bridge systems biology and bioprocess engineering.
Mathematical modelling of flow and transport processes in tissue engineering bioreactors
NASA Astrophysics Data System (ADS)
Waters, Sarah; Pearson, Natalie; Oliver, James; Shipley, Rebecca
2014-11-01
To artificially engineer tissues numerous biophysical and biochemical processes must be integrated to produce tissues with the desired in vivo properties. Tissue engineering bioreactors are cell culture systems which aim to mimic the in vivo environment. We consider a hollow fibre membrane bioreactor (HFMB), which utilises fluid flow to enhance the delivery of growth factors and nutrients to, and metabolite removal from, the cells, as well as provide appropriate mechanical stimuli to the cells. Biological tissues comprise a wide variety of interacting components, and multiphase models provide a natural framework to investigate such interactions. We present a suite of mathematical models (capturing different experimental setups) which consider the fluid flow, solute transport, and cell yield and distribution within a HFMB. The governing equations are simplified by exploiting the slender geometry of the bioreactor system, so that, e.g., lubrication theory may be used to describe flow in the lumen. We interrogate the models to illustrate typical behaviours of each setup in turn, and highlight the dependence of results on key experimentally controllable parameter values. Once validated, such models can be used to inform and direct future experiments.
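For readers unfamiliar with the slender-geometry reduction mentioned above, a standard lubrication-theory sketch for pressure-driven axial flow in a lumen of radius $a(z)$, with aspect ratio $\epsilon = a/L \ll 1$, runs as follows (a textbook result, not the authors' full multiphase model):

```latex
% Leading-order axial momentum balance and its Poiseuille solution
\[
  \mu \frac{1}{r}\frac{\partial}{\partial r}\!\left(r\,\frac{\partial w}{\partial r}\right)
  = \frac{\mathrm{d}p}{\mathrm{d}z},
  \qquad
  w(r,z) = \frac{1}{4\mu}\frac{\mathrm{d}p}{\mathrm{d}z}\left(r^{2}-a^{2}\right),
\]
% Integrating the profile gives the flux-pressure relation
\[
  Q = \int_{0}^{a} 2\pi r\, w \,\mathrm{d}r
    = -\frac{\pi a^{4}}{8\mu}\frac{\mathrm{d}p}{\mathrm{d}z}.
\]
```

Coupling relations of this type across lumen, membrane, and extra-capillary space is what reduces the full bioreactor flow problem to tractable one-dimensional equations.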
Mathematical Modeling of Three-Dimensional Delamination Processes of Laminated Composites
NASA Astrophysics Data System (ADS)
Gasser, Thomas C.; Holzapfel, Gerhard A.
The mathematical modeling of 3D delamination failure of laminated composites is discussed. Strong discontinuities are considered within a kinematical framework, which provides the basis for the embedded representation of discontinuities in finite elements. Suitable expressions for a transversely isotropic traction law in the form of a displacement-energy function are derived to describe the constitutive response of the interface of laminated composites. Softening phenomena of interfaces are modeled by an isotropic damage law, while the continuous bulk material is modeled as an elastic fiber-reinforced composite. The variational formulation is based on a three-field Hu-Washizu functional combined with the enhanced assumed strain method. Three different finite element formulations are delineated. A biomechanical example investigates the dissection of the middle layer of a healthy artery and compares the numerical results of the different finite element formulations obtained from regular and distorted meshes. Soft tissue dissection occurs, for example, during balloon angioplasty, a mechanical procedure frequently performed to reduce the severity of atherosclerotic stenoses. Physical and numerical analyses of delamination processes are of pressing scientific and clinical need.
Mathematical model for strip surface roughness of stainless steel in cold rolling process
NASA Astrophysics Data System (ADS)
Chen, Jinshan; Li, Changsheng; Zhu, Tao; Han, Wenlong; Cao, Yong
2013-05-01
Surface roughness control is one of the most important subjects in the production of stainless steel strip. In this paper, by introducing the concepts of transferring ratio and genetic factor, and through further theoretical analysis, a set of theoretical models of strip surface roughness was put forward for stainless steel cold tandem rolling. Meanwhile, a lubrication experiment on the cold rolling of SUS430 stainless steel strip was carried out to study surface roughness comprehensively. The effect of the main factors on the transferring ratio and genetic factor, such as reduction, initial thickness, deformation resistance, and emulsion parameters, was analyzed quantitatively. Attenuation function equations describing roll surface roughness were set up, and the strip surface roughness at the entry of the last mill was solved approximately. Ultimately, a mathematical model of strip surface roughness for cold tandem rolling of stainless steel was built and applied in production. A large number of statistical results show that the experimental data are in excellent agreement with the given regression equations; the relative deviation between calculated and measured roughness is less than 6.34%.
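The abstract does not reproduce the model itself. Purely as a hypothetical illustration of how a "transferring ratio" (roll roughness imprinted on the strip) and a "genetic factor" (incoming roughness inherited through a stand) could combine per stand, one might write:

```python
def strip_roughness(ra_in, stands):
    """Hypothetical per-stand recursion, NOT the paper's actual model:
    eta     - transferring ratio (fraction of roll roughness imprinted)
    g       - genetic factor (fraction of incoming roughness inherited)
    ra_roll - roll surface roughness at that stand
    All quantities are illustrative placeholders."""
    ra = ra_in
    for eta, g, ra_roll in stands:
        ra = eta * ra_roll + g * ra
    return ra

# Two illustrative stands: roughness decays toward the roll imprint.
ra_exit = strip_roughness(1.2, [(0.6, 0.3, 0.8), (0.5, 0.25, 0.5)])
```

Any real model would make eta and g functions of reduction, deformation resistance, and emulsion parameters, as the abstract indicates.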
Real-time image processing architecture for robot vision
NASA Astrophysics Data System (ADS)
Persa, Stelian; Jonker, Pieter P.
2000-10-01
This paper presents a study of the impact of MMX technology and PIII Streaming SIMD Extensions (Single Instruction stream, Multiple Data stream) on image processing and machine vision applications, which, because of their hard real-time constraints, is an undoubtedly challenging task. A comparison with traditional scalar code and with another parallel SIMD architecture (the IMAP-VISION board) is discussed, with emphasis on the particular programming strategies for speed optimization. More precisely, we discuss the low-level and intermediate-level image processing algorithms that are best suited to parallel SIMD implementation. High-level image processing algorithms are more suitable for parallel implementation on MIMD architectures. While the IMAP-VISION system performs better because of its large number of processing elements, the MMX processor and the PIII (with Streaming SIMD Extensions) remain good candidates for low-level image processing.
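The data-parallel structure that makes low-level operators SIMD-friendly can be sketched with whole-array arithmetic, NumPy standing in for the MMX/SSE intrinsics; each shifted add below corresponds to one vectorized pass over the image:

```python
import numpy as np

def box3x3(img):
    """3x3 mean filter written as nine shifted whole-array adds: the
    same data-parallel shape MMX/SSE exploits lane-by-lane."""
    padded = np.pad(img.astype(np.float64), 1, mode="edge")
    acc = np.zeros(img.shape, dtype=np.float64)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            acc += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return acc / 9.0

smoothed = box3x3(np.arange(25, dtype=np.float64).reshape(5, 5))
```

Intermediate-level operators (histograms, labeling) vectorize less cleanly, which is the suitability boundary the paper discusses.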
Proceedings of the Third Annual Symposium on Mathematical Pattern Recognition and Image Analysis
NASA Technical Reports Server (NTRS)
Guseman, L. F., Jr.
1985-01-01
Topics addressed include: multivariate spline method; normal mixture analysis applied to remote sensing; image data analysis; classifications in spatially correlated environments; probability density functions; graphical nonparametric methods; subpixel registration analysis; hypothesis integration in image understanding systems; rectification of satellite scanner imagery; spatial variation in remotely sensed images; smooth multidimensional interpolation; and optimal frequency domain textural edge detection filters.
Land image data processing requirements for the EOS era
NASA Technical Reports Server (NTRS)
Wharton, Stephen W.; Newcomer, Jeffrey A.
1989-01-01
Requirements are proposed for a hybrid approach to image analysis that combines the functionality of a general-purpose image processing system with the knowledge representation and manipulation capabilities associated with expert systems to improve the productivity of scientists in extracting information from remotely sensed image data. The overall functional objectives of the proposed system are to: (1) reduce the level of human interaction required on a scene-by-scene basis to perform repetitive image processing tasks; (2) allow the user to experiment with ad hoc rules and procedures for the extraction, description, and identification of the features of interest; and (3) facilitate the derivation, application, and dissemination of expert knowledge for target recognition whose scope of application is not necessarily limited to the image(s) from which it was derived.
Ground control requirements for precision processing of ERTS images
Burger, Thomas C.
1972-01-01
When the first Earth Resources Technology Satellite (ERTS-A) flies in 1972, NASA expects to receive and bulk-process 9,000 images a week. From this deluge of images, a few will be selected for precision processing; that is, about 5 percent will be further treated to improve the geometry of the scene, both in the relative and absolute sense. Control points are required for this processing. This paper describes the control requirements for relating ERTS images to a reference surface of the earth. Enough background on the ERTS-A satellite is included to make the requirements meaningful to the user.
Hyperspectral imaging in medicine: image pre-processing problems and solutions in Matlab.
Koprowski, Robert
2015-11-01
The paper presents problems and solutions related to hyperspectral image pre-processing. New methods of preliminary image analysis are proposed. The paper shows problems that occur in Matlab when analysing this type of image. Moreover, new methods are discussed, with Matlab source code that can be used in practice without any licensing restrictions. The proposed application is presented together with sample results of hyperspectral image analysis. PMID:25676816
Image pre-processing for optimizing automated photogrammetry performances
NASA Astrophysics Data System (ADS)
Guidi, G.; Gonizzi, S.; Micoli, L. L.
2014-05-01
The purpose of this paper is to analyze how optical pre-processing with polarizing filters and digital pre-processing with HDR imaging may improve the automated 3D modeling pipeline based on SFM and image matching, with special emphasis on optically non-cooperative surfaces of shiny or dark materials. Because homologous points are detected automatically, the presence of highlights due to shiny materials, or of nearly uniform dark patches produced by low-reflectance materials, may produce erroneous matching involving wrong 3D point estimations, and consequently holes and topological errors in the mesh originating from the associated dense 3D cloud. This is due to the limited dynamic range of the 8-bit digital images that are matched to each other to generate 3D data. The same 256 levels can be employed more usefully if the actual dynamic range is compressed, avoiding luminance clipping in the darker and lighter image areas. This approach is considered here using both optical filtering and HDR processing with tone mapping, with experimental evaluation on different Cultural Heritage objects characterized by non-cooperative optical behavior. Three test images of each object were captured from different positions, changing the shooting conditions (filter/no filter) and the image processing (no processing/HDR processing), in order to have the same three camera orientations with different optical and digital pre-processing, and applying the same automated process to each photo set.
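A minimal sketch of the digital side, assuming a Reinhard-style global operator (the abstract does not name the tone-mapping curve actually used): normalize scene luminance by its log-average, then compress with L/(1+L) so neither shadows nor highlights clip:

```python
import numpy as np

def tonemap_reinhard(luminance, key=0.18, eps=1e-6):
    """Global Reinhard-style tone mapping: scale by the log-average
    luminance, then compress with L/(1+L) into [0, 1)."""
    log_avg = np.exp(np.mean(np.log(luminance + eps)))
    scaled = key * luminance / log_avg
    return scaled / (1.0 + scaled)

# Five orders of magnitude of scene luminance fit into [0, 1)
# while brightness ordering is preserved for the matcher.
L = np.array([0.01, 0.1, 1.0, 10.0, 1000.0])
out = tonemap_reinhard(L)
```

Preserving local contrast ordering is the property that matters here: feature detectors then find homologous points in regions that would otherwise be clipped to pure black or white.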
Predictive images of postoperative levator resection outcome using image processing software
Mawatari, Yuki; Fukushima, Mikiko
2016-01-01
Purpose: This study aims to evaluate the efficacy of processed images to predict postoperative appearance following levator resection. Methods: Analysis involved 109 eyes from 65 patients with blepharoptosis who underwent advancement of the levator aponeurosis and Müller's muscle complex (levator resection). Predictive images were prepared from preoperative photographs using image processing software (Adobe Photoshop®). Images of selected eyes were digitally enlarged in an appropriate manner and shown to patients prior to surgery. Results: Approximately 1 month postoperatively, we surveyed our patients using questionnaires. Fifty-six patients (89.2%) were satisfied with their postoperative appearances, and 55 patients (84.8%) responded positively regarding the usefulness of processed images to predict postoperative appearance. Conclusion: Showing processed images that predict postoperative appearance to patients prior to blepharoptosis surgery can be useful for those patients concerned with their postoperative appearance. This approach may serve as a useful tool to simulate blepharoptosis surgery. PMID:27757008
Quantum imaging as an ancilla-assisted process tomography
NASA Astrophysics Data System (ADS)
Ghalaii, M.; Afsary, M.; Alipour, S.; Rezakhani, A. T.
2016-10-01
We show how a recent experiment on quantum imaging with undetected photons can essentially be described as a (partial) ancilla-assisted process tomography in which the object is described by an amplitude-damping quantum channel. We propose a simplified quantum-circuit version of this scenario, which also enables one to recast quantum imaging in the language of quantum computation. Our analogy and analysis may help us to better understand the role of classical and/or quantum correlations in imaging experiments.
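The amplitude-damping channel invoked above has a compact Kraus-operator form; a minimal numerical sketch:

```python
import numpy as np

def amplitude_damping(rho, gamma):
    """Apply the amplitude-damping channel (Kraus form) to a qubit
    density matrix rho; gamma is the probability of decay |1> -> |0>."""
    K0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1.0 - gamma)]])
    K1 = np.array([[0.0, np.sqrt(gamma)], [0.0, 0.0]])
    return K0 @ rho @ K0.conj().T + K1 @ rho @ K1.conj().T

excited = np.array([[0.0, 0.0], [0.0, 1.0]])  # |1><1|
rho_out = amplitude_damping(excited, 0.3)     # -> diag(0.3, 0.7)
```

In the imaging analogy, the object's absorption plays the role of gamma; process tomography amounts to estimating it from the output states.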
A novel data processing technique for image reconstruction of penumbral imaging
NASA Astrophysics Data System (ADS)
Xie, Hongwei; Li, Hongyun; Xu, Zeping; Song, Guzhou; Zhang, Faqiang; Zhou, Lin
2011-06-01
CT image reconstruction techniques were applied to the data processing of penumbral imaging. Compared with traditional processing techniques for penumbral coded-pinhole images, such as Wiener, Lucy-Richardson, and blind deconvolution, this approach is brand new. In this method, the coded-aperture processing method was used for the first time independently of the point spread function of the imaging diagnostic system. In this way, the technical obstacle in traditional coded-pinhole image processing caused by the uncertainty of the point spread function of the imaging diagnostic system was overcome. Based on this theoretical study, a simulation of penumbral imaging and image reconstruction was carried out, providing fairly good results. In the visible-light experiment, a point source of light was used to irradiate a 5 mm × 5 mm object after diffuse scattering and volume scattering, and penumbral imaging was performed with an aperture size of ~20 mm. Finally, the CT image reconstruction technique was used for image reconstruction, providing a fairly good reconstruction result.
NASA Astrophysics Data System (ADS)
Zielinski, Jerzy S.
The dramatic increase in the number and volume of digital images produced in medical diagnostics, and the escalating demand for rapid access to these relevant medical data, along with the need for interpretation and retrieval, has become of paramount importance to a modern healthcare system. Therefore, there is an ever-growing need for processed, interpreted, and saved images of various types. Due to the high cost and unreliability of human-dependent image analysis, it is necessary to develop an automated method for feature extraction, using sophisticated mathematical algorithms and reasoning. This work is focused on digital image signal processing of biological and biomedical data in one-, two-, and three-dimensional space. The methods and algorithms presented in this work were used to acquire data from genomic sequences, breast cancer images, and biofilm images. One-dimensional analysis was applied to DNA sequences, which were represented as non-stationary sequences and modeled by a time-dependent autoregressive moving average (TD-ARMA) model. Two-dimensional analyses used a 2D-ARMA model and applied it to detect breast cancer from X-ray mammograms or ultrasound images. Three-dimensional detection and classification techniques were applied to biofilm images acquired using confocal laser scanning microscopy. Modern medical images are geometrically arranged arrays of data. The broadening scope of imaging as a way to organize our observations of the biophysical world has led to a dramatic increase in our ability to apply new processing techniques and to combine multiple channels of data into sophisticated and complex mathematical models of physiological function and dysfunction. With the explosion of the amount of data produced in the field of biomedicine, it is crucial to be able to construct accurate mathematical models of the data at hand. The two main purposes of signal modeling are data size conservation and parameter extraction. Specifically, in biomedical imaging we have four key problems
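As a small illustration of the parameter-extraction idea behind the (TD-)ARMA modelling, the sketch below fits a plain AR(2) model by least squares, recovering the coefficients of a synthetic sequence. This is a simplification for illustration, not the time-dependent model used in the work:

```python
import numpy as np

def fit_ar(x, order):
    """Least-squares fit of an AR(p) model: x[t] ~ sum_k a_k * x[t-k]."""
    A = np.asarray([x[t - order:t][::-1] for t in range(order, len(x))])
    coeffs, *_ = np.linalg.lstsq(A, x[order:], rcond=None)
    return coeffs

# Synthetic AR(2) sequence with known coefficients (0.5, -0.3).
rng = np.random.default_rng(0)
x = np.zeros(500)
for t in range(2, 500):
    x[t] = 0.5 * x[t - 1] - 0.3 * x[t - 2] + 0.1 * rng.standard_normal()
a = fit_ar(x, 2)  # close to [0.5, -0.3]
```

The fitted coefficients are the "extracted parameters": a compact model of the sequence that can stand in for the raw data, which is exactly the size-conservation argument made above.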
Recent advances in imaging subcellular processes
Myers, Kenneth A.; Janetopoulos, Christopher
2016-01-01
Cell biology came about with the ability to first visualize cells. As microscopy techniques advanced, the early microscopists became the first cell biologists to observe the inner workings and subcellular structures that control life. This ability to see organelles within a cell provided scientists with the first understanding of how cells function. The visualization of the dynamic architecture of subcellular structures now often drives questions as researchers seek to understand the intricacies of the cell. With the advent of fluorescent labeling techniques, better and new optical techniques, and more sensitive and faster cameras, a whole array of questions can now be asked. There has been an explosion of new light microscopic techniques, and the race is on to build better and more powerful imaging systems so that we can further our understanding of the spatial and temporal mechanisms controlling molecular cell biology. PMID:27408708
Computer tomography imaging of fast plasmachemical processes
Denisova, N. V.; Katsnelson, S. S.; Pozdnyakov, G. A.
2007-11-15
Results are presented from experimental studies of the interaction of a high-enthalpy methane plasma bunch with gaseous methane in a plasmachemical reactor. The interaction of the plasma flow with the rest gas was visualized by using streak imaging and computer tomography. Tomography was applied for the first time to reconstruct the spatial structure and dynamics of the reagent zones in the microsecond range by the maximum entropy method. The reagent zones were identified from the emission of atomic hydrogen (the Hα line) and molecular carbon (the Swan bands). The spatiotemporal behavior of the reagent zones was determined, and their relation to the shock-wave structure of the plasma flow was examined.
An Image Database on a Parallel Processing Network.
ERIC Educational Resources Information Center
Philip, G.; And Others
1991-01-01
Describes the design and development of an image database for photographs in the Ulster Museum (Northern Ireland) that used parallelism from a transputer network. Topics addressed include image processing techniques; documentation needed for the photographs, including indexing, classifying, and cataloging; problems; hardware and software aspects;…
Mathematical modeling and simulation of the space shuttle imaging radar antennas
NASA Technical Reports Server (NTRS)
Campbell, R. W.; Melick, K. E.; Coffey, E. L., III
1978-01-01
Simulations of space shuttle synthetic aperture radar antennas under the influence of space environmental conditions were carried out at L, C, and X-band. Mathematical difficulties in modeling large, non-planar array antennas are discussed, and an approximate modeling technique is presented. Results for several antenna error conditions are illustrated in far-field profile patterns, earth surface footprint contours, and summary graphs.
Fingerprint pattern restoration by digital image processing techniques.
Wen, Che-Yen; Yu, Chiu-Chung
2003-09-01
Fingerprint evidence plays an important role in solving criminal problems. However, defective (lacking information needed for completeness) or contaminated (undesirable information included) fingerprint patterns make identifying and recognizing processes difficult. Unfortunately, this is the usual case. In the recognizing process (enhancement of patterns, or elimination of "false alarms", so that a fingerprint pattern can be searched in the Automated Fingerprint Identification System (AFIS)), chemical and physical techniques have been proposed to improve pattern legibility. In the identifying process, a fingerprint examiner can enhance contaminated (but not defective) fingerprint patterns under guidelines provided by the Scientific Working Group on Friction Ridge Analysis, Study and Technology (SWGFAST), the Scientific Working Group on Imaging Technology (SWGIT), and an AFIS working group within the National Institute of Justice. Recently, image processing techniques have been successfully applied in forensic science. For example, we have applied image enhancement methods to improve the legibility of digital images such as fingerprints and vehicle plate numbers. In this paper, we propose a novel digital image restoration technique based on the AM (amplitude modulation)-FM (frequency modulation) reaction-diffusion method to restore defective or contaminated fingerprint patterns. This method shows its potential application to fingerprint pattern enhancement in the recognizing process (but not in the identifying process). Synthetic and real images are used to show the capability of the proposed method. The results of enhancing fingerprint patterns by the manual process and by our method are evaluated and compared. PMID:14535661
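The AM-FM reaction-diffusion method itself is beyond an abstract-length sketch, but the family it belongs to can be illustrated with classic Perona-Malik anisotropic diffusion, which smooths noise while an edge-stopping function preserves ridge structure (illustrative only, not the authors' restoration method):

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.1, dt=0.2):
    """Classic Perona-Malik anisotropic diffusion: diffuse where the
    local gradient is small, stop where it is large (g = exp(-(d/k)^2)).
    Borders are treated as periodic via np.roll, fine for a sketch."""
    u = img.astype(np.float64).copy()
    for _ in range(n_iter):
        dn = np.roll(u, 1, axis=0) - u   # north neighbor difference
        ds = np.roll(u, -1, axis=0) - u  # south
        de = np.roll(u, -1, axis=1) - u  # east
        dw = np.roll(u, 1, axis=1) - u   # west
        u = u + dt * sum(np.exp(-(d / kappa) ** 2) * d
                         for d in (dn, ds, de, dw))
    return u

rng = np.random.default_rng(0)
noisy = 1.0 + 0.1 * rng.standard_normal((32, 32))
smooth = perona_malik(noisy)
```

On a fingerprint, kappa would be tuned so inter-ridge contrast exceeds the stopping threshold while sensor noise falls below it.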
New Windows based Color Morphological Operators for Biomedical Image Processing
NASA Astrophysics Data System (ADS)
Pastore, Juan; Bouchet, Agustina; Brun, Marcel; Ballarin, Virginia
2016-04-01
Morphological image processing is well known as an efficient methodology for image processing and computer vision. With the wide use of color in many areas, interest in color perception and processing has been growing rapidly. Many models have been proposed to extend morphological operators to the field of color images, dealing with new problems not present in the binary and gray-level contexts. These solutions usually deal with the lattice structure of the color space, or provide it with total orders, in order to define basic operators with the required properties. In this work we propose a new locally defined ordering, in the context of window-based morphological operators, for the definition of erosion-like and dilation-like operators, which provides the desired properties expected of color morphology while avoiding some of the drawbacks of prior approaches. Experimental results show that the proposed color operators can be used efficiently for color image processing.
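As a minimal sketch of window-based color morphology, the erosion below ranks the pixels of each window by a total (lexicographic) order on (luminance, G, B) and keeps the smallest. The ordering is an illustrative assumption, not the locally defined order the paper proposes.

```python
import numpy as np

def color_erosion(img, win=3):
    """Window-based color erosion: within each window, pixels are ranked
    by a lexicographic order (luminance first, then G, then B) and the
    minimum is kept. Illustrative ordering only."""
    h, w, _ = img.shape
    r = win // 2
    pad = np.pad(img, ((r, r), (r, r), (0, 0)), mode="edge")
    out = np.empty_like(img)
    for y in range(h):
        for x in range(w):
            window = pad[y:y + win, x:x + win].reshape(-1, 3)
            lum = window @ np.array([0.299, 0.587, 0.114])
            # np.lexsort sorts by the LAST key first: luminance, then G, then B
            order = np.lexsort((window[:, 2], window[:, 1], lum))
            out[y, x] = window[order[0]]
    return out
```

Because the order is total, the erosion returns an actual pixel of the window (no false colors), which is one of the properties expected of color morphology.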
Image processing for flight crew enhanced situation awareness
NASA Technical Reports Server (NTRS)
Roberts, Barry
1993-01-01
This presentation describes the image processing work that is being performed for the Enhanced Situational Awareness System (ESAS) application. Specifically, the presented work supports the Enhanced Vision System (EVS) component of ESAS.
ERIC Educational Resources Information Center
Sworder, Steven
In 1989, a study was conducted at Saddleback College (SC) to analyze the success rates of students who delayed enrollment into a mathematics class after completing the assessment process. The study population consisted of 1,027 students who participated in the mathematics assessment process between July and August 1988 and who enrolled in at least…
ERIC Educational Resources Information Center
Santagata, Rossella; Bray, Wendy
2016-01-01
This study examined processes at the core of teacher professional development (PD) experiences that might positively impact teacher learning and more specifically teacher change. Four processes were considered in the context of a PD program focused on student mathematical errors: analysis of students' mathematical misconceptions as a lever for…
Image processing with the radial Hilbert transform of photo-thermal imaging for carious detection
NASA Astrophysics Data System (ADS)
El-Sharkawy, Yasser H.
2014-03-01
Knowledge of heat transfer in biological bodies has many diagnostic and therapeutic applications involving either raising or lowering of temperature, and often requires precise monitoring of the spatial distribution of thermal histories produced during a treatment protocol. The present paper therefore aims at the design and implementation of a laser therapeutic and imaging system for caries tracking and drilling, developing a mathematical algorithm that uses the Hilbert transform for edge detection in photo-thermal imaging. Photo-thermal imaging can penetrate and yield information about an opaque medium well beyond the range of conventional optical imaging. Owing to this ability, a Q-switched Nd:YAG laser at a wavelength of 1064 nm has been extensively used on human teeth to study the sub-surface deposition of laser radiation. The higher absorption coefficient of the carious region relative to normal tissue raises its temperature, generating IR thermal radiation that is captured by a high-resolution thermal camera. Changing the pulse repetition frequency of the laser affects the penetration depth, which can provide three-dimensional (3D) images in arbitrary planes and allow imaging deep within solid tissue.
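A generic radial (vortex) Hilbert filter can be sketched as follows: the 2-D spectrum is multiplied by exp(i*theta), where theta is the spectral polar angle, and the magnitude of the result peaks isotropically at edges. This is the textbook vortex filter, offered as an illustration of the transform named in the abstract rather than the authors' implementation.

```python
import numpy as np

def radial_hilbert_edges(img):
    """Radial Hilbert (vortex) filtering for isotropic edge detection:
    multiply the FFT of the image by exp(i * polar angle) and take the
    magnitude of the inverse transform."""
    h, w = img.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    theta = np.arctan2(fy, fx)          # spectral polar angle
    spiral = np.exp(1j * theta)
    spiral[0, 0] = 0.0                  # suppress the DC term
    filtered = np.fft.ifft2(np.fft.fft2(img) * spiral)
    return np.abs(filtered)
```

On a step image the response is largest at the step, so thresholding the magnitude gives an edge map of, for example, a hot carious region against cooler surroundings.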
NASA Astrophysics Data System (ADS)
Andersen, Anders H.; Rayens, William S.; Li, Ren-Cang; Blonder, Lee X.
2000-10-01
In this paper we describe the enormous potential that multilinear models hold for the analysis of data from neuroimaging experiments that rely on functional magnetic resonance imaging (fMRI) or other imaging modalities. A case is made for why one might fully expect that the successful introduction of these models to the neuroscience community could define the next generation of structure-seeking paradigms in the area. In spite of the potential for immediate application, there is much to do from the perspective of statistical science. That is, although multilinear models have already been particularly successful in chemistry and psychology, relatively little is known about their statistical properties. To that end, our research group at the University of Kentucky has made significant progress. In particular, we are in the process of developing formal influence measures for multilinear methods as well as associated classification models and effective implementations. We believe that these problems will be among the most important and useful to the scientific community. Details are presented herein and an application is given in the context of facial emotion processing experiments.
Digital image processing for the earth resources technology satellite data.
NASA Technical Reports Server (NTRS)
Will, P. M.; Bakis, R.; Wesley, M. A.
1972-01-01
This paper discusses the problems of digital processing of the large volumes of multispectral image data that are expected to be received from the ERTS program. Correction of geometric and radiometric distortions is discussed, and a byte-oriented implementation is proposed. CPU timing estimates are given for a System/360 Model 67 and show that a processing throughput of 1000 image sets per week is feasible.
ELAS: A powerful, general purpose image processing package
NASA Technical Reports Server (NTRS)
Walters, David; Rickman, Douglas
1991-01-01
ELAS is a software package which has been utilized as an image processing tool for more than a decade and has been the source of several commercial packages. Now available on UNIX workstations, it is a very powerful, flexible set of software. Applications at Stennis Space Center have covered a wide range of areas, including medicine, forestry, geology, ecological modeling, and sonar imagery. It remains one of the most powerful image processing packages available, either commercially or in the public domain.
The design of a distributed image processing and dissemination system
Rafferty, P.; Hower, L.
1990-01-01
The design and implementation of a distributed image processing and dissemination system was undertaken and accomplished as part of a prototype communication and intelligence (CI) system, the contingency support system (CSS), which is intended to support contingency operations of the Tactical Air Command. The system consists of six (6) Sun 3/180C workstations with integrated ITEX image processors and three (3) 3/50 diskless workstations located at four (4) system nodes (INEL, base, and mobiles). All 3/180C workstations are capable of image system server functions, whereas the 3/50s are image system clients only. Distribution is accomplished via both local and wide area networks using standard Defense Data Network (DDN) protocols (i.e., TCP/IP, et al.) and Defense Satellite Communication Systems (DSCS) compatible SHF Transportable Satellite Earth Terminals (TSET). Image applications utilize Sun's Remote Procedure Call (RPC) to facilitate the image system client and server relationships. The system provides functions to acquire, display, annotate, process, transfer, and manage images via an icon, panel, and menu oriented SunView based user interface. Image spatial resolution is 512 × 480 with 8 bits/pixel black and white and 12/24 bits/pixel color, depending on system configuration. Compression is used during various image display and transmission functions to reduce the dynamic range of image data to 12/6/3/2 bits/pixel, depending on the application. Image acquisition is accomplished in real time or near real time by special-purpose ITEX image hardware. As a result, all image displays are highly interactive, with attention given to subsecond response time. 3 refs., 7 figs.
Image-Processing Techniques for the Creation of Presentation-Quality Astronomical Images
NASA Astrophysics Data System (ADS)
Rector, Travis A.; Levay, Zoltan G.; Frattare, Lisa M.; English, Jayanne; Pu'uohau-Pummill, Kirk
2007-02-01
The quality of modern astronomical data and the agility of current image-processing software enable the visualization of data in a way that exceeds the traditional definition of an astronomical image. Two developments in particular have led to a fundamental change in how astronomical images can be assembled. First, the availability of high-quality multiwavelength and narrowband data allows for images that do not correspond to the wavelength sensitivity of the human eye, thereby introducing ambiguity in the usage and interpretation of color. Second, many image-processing software packages now use a layering metaphor that allows for any number of astronomical data sets to be combined into a color image. With this technique, images with as many as eight data sets have been produced. Each data set is intensity-scaled and colorized independently, creating an immense parameter space that can be used to assemble the image. Since such images are intended for data visualization, scaling and color schemes must be chosen that best illustrate the science. A practical guide is presented on how to use the layering metaphor to generate publication-ready astronomical images from as many data sets as desired. A methodology is also given on how to use intensity scaling, color, and composition to create contrasts in an image that highlight the scientific detail. Examples of image creation are discussed.
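The layering metaphor can be sketched in a few lines: each data set is intensity-scaled to [0, 1] independently, tinted with its own RGB color, and the layers are summed. Real pipelines add per-layer nonlinear stretches and screen blending; the linear sum here is a simplification.

```python
import numpy as np

def layer_combine(datasets, colors):
    """Combine independently scaled, independently colorized data sets
    into one RGB image (simplified layering: linear stretch + sum)."""
    rgb = np.zeros(datasets[0].shape + (3,))
    for data, color in zip(datasets, colors):
        lo, hi = data.min(), data.max()
        scaled = (data - lo) / (hi - lo) if hi > lo else np.zeros_like(data)
        rgb += scaled[..., None] * np.asarray(color, dtype=float)
    return np.clip(rgb, 0.0, 1.0)
```

With, say, an H-alpha layer tinted red and an [O III] layer tinted teal, the per-layer scaling is exactly the independent parameter space the text describes.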
Digital interactive image analysis by array processing
NASA Technical Reports Server (NTRS)
Sabels, B. E.; Jennings, J. D.
1973-01-01
An attempt is made to draw a parallel between the existing geophysical data processing service industries and the emerging earth resources data support requirements. The relationship of seismic data analysis to ERTS data analysis is natural because in either case data is digitally recorded in the same format, resulting from remotely sensed energy which has been reflected, attenuated, shifted and degraded on its path from the source to the receiver. In the seismic case the energy is acoustic, ranging in frequencies from 10 to 75 cps, for which the lithosphere appears semi-transparent. In earth survey remote sensing through the atmosphere, visible and infrared frequency bands are being used. Yet the hardware and software required to process the magnetically recorded data from the two realms of inquiry are identical and similar, respectively. The resulting data products are similar.
Multimission image processing and science data visualization
NASA Technical Reports Server (NTRS)
Green, William B.
1993-01-01
The Operational Science Analysis (OSA) functional area supports science instrument data display, analysis, visualization, and photo processing in support of flight operations of planetary spacecraft managed by the Jet Propulsion Laboratory (JPL). This paper describes the data products generated by the OSA functional area and the current computer system used to generate them. The objectives of a system upgrade now in progress are described. The design approach to development of the new system is reviewed, including the use of the Unix operating system and X Window display standards to provide platform independence, portability, and modularity. The new system should provide a modular and scalable capability supporting a variety of future missions at JPL.
High Dynamic Range Processing for Magnetic Resonance Imaging
Sukerkar, Preeti A.; Meade, Thomas J.
2013-01-01
Purpose: To minimize feature loss in T1- and T2-weighted MRI by merging multiple MR images acquired at different TR and TE to generate an image with increased dynamic range. Materials and Methods: High Dynamic Range (HDR) processing techniques from the field of photography were applied to a series of acquired MR images. Specifically, a method to parameterize the algorithm for MRI data was developed and tested. T1- and T2-weighted images of a number of contrast agent phantoms and a live mouse were acquired with varying TR and TE parameters. The images were computationally merged to produce HDR-MR images. All acquisitions were performed on a 7.05 T Bruker PharmaScan with a multi-echo spin echo pulse sequence. Results: HDR-MRI delineated bright and dark features that were either saturated or indistinguishable from background in standard T1- and T2-weighted MRI. The increased dynamic range preserved intensity gradation over a larger range of T1 and T2 in phantoms and revealed more anatomical features in vivo. Conclusions: We have developed and tested a method to apply HDR processing to MR images. The increased dynamic range of HDR-MR images as compared to standard T1- and T2-weighted images minimizes feature loss caused by magnetization recovery or low SNR. PMID:24250788
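A generic exposure-fusion-style merge illustrates the idea: each acquisition is weighted per pixel by how far it sits from the saturated and background extremes (a hat-shaped well-exposedness weight), then the weighted images are averaged. This is a standard HDR fusion sketch, not the authors' MRI-specific parameterization.

```python
import numpy as np

def hdr_merge(images, eps=1e-6):
    """Merge a stack of acquisitions of the same scene: pixels near
    mid-intensity get high weight; saturated or near-background pixels
    get almost none, so each region is taken from the acquisition that
    rendered it best."""
    stack = np.stack([im.astype(float) for im in images])
    stack /= stack.max()                       # normalize stack to [0, 1]
    weights = 1.0 - 2.0 * np.abs(stack - 0.5)  # hat weight, peak at mid-gray
    weights = np.clip(weights, eps, None)
    return (weights * stack).sum(axis=0) / weights.sum(axis=0)
```

In the test below, each pixel is saturated (or zero) in one image and well exposed in the other; the merge recovers the well-exposed value at both locations.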
Image processing in an enhanced and synthetic vision system
NASA Astrophysics Data System (ADS)
Mueller, Rupert M.; Palubinskas, Gintautas; Gemperlein, Hans
2002-07-01
'Synthetic Vision' and 'Sensor Vision' complement each other in an ideal system for the pilot's situation awareness. To fuse these two data sets, the sensor images are first segmented by a k-means algorithm and features are then extracted by blob analysis. These image features are compared with the features of the projected airport data using fuzzy logic in order to identify the runway in the sensor image and to improve the aircraft navigation data. This process is necessary due to inaccurate input data, i.e., the position and attitude of the aircraft. After identifying the runway, obstacles can be detected using the sensor image. The extracted information is presented to the pilot's display system and combined with the appropriate information from the MMW radar sensor in a subsequent fusion processor. A real-time image processing procedure is discussed and demonstrated with IR measurements from a FLIR system during landing approaches.
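The segment-then-extract stage can be sketched with an intensity k-means (a stand-in for the paper's segmentation) followed by 4-connected blob analysis returning per-blob area and centroid; the fuzzy feature matching is omitted.

```python
import numpy as np
from collections import deque

def kmeans_segment(img, k=2, iters=20):
    """1-D k-means on pixel intensities; centers initialized at evenly
    spaced percentiles so the result is deterministic."""
    flat = img.ravel().astype(float)
    centers = np.percentile(flat, np.linspace(0, 100, k))
    for _ in range(iters):
        labels = np.argmin(np.abs(flat[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = flat[labels == j].mean()
    return labels.reshape(img.shape), centers

def blob_features(mask):
    """4-connected blob analysis: (area, centroid) for each blob."""
    seen = np.zeros(mask.shape, dtype=bool)
    blobs = []
    for y, x in zip(*np.nonzero(mask)):
        if seen[y, x]:
            continue
        queue, pixels = deque([(y, x)]), []
        seen[y, x] = True
        while queue:                      # breadth-first flood fill
            cy, cx = queue.popleft()
            pixels.append((cy, cx))
            for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not seen[ny, nx]):
                    seen[ny, nx] = True
                    queue.append((ny, nx))
        ys, xs = zip(*pixels)
        blobs.append((len(pixels), (sum(ys) / len(ys), sum(xs) / len(xs))))
    return blobs
```

Blob area and centroid are exactly the kind of features that can then be matched against the projected runway geometry.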
Processing infrared images for target detection: A literature study
NASA Astrophysics Data System (ADS)
Alblas, B. P.
1988-07-01
Methods of image processing applied to IR images to obtain better detection and/or recognition of military targets, particularly vehicles, are reviewed. The following subjects are dealt with: histogram specification, scanline degradation, correlation, clutter and noise. Only a few studies deal with the effects of image processing on human performance. Most of the literature concerns computer vision. Local adaptive and image dependent techniques appear to be the most promising methods of obtaining higher observation performance. In particular the size-contrast box filter and histogram specification methods seem to be suitable. There is a need for a generally applicable definition of image quality and clutter level to evaluate the utility of a specified algorithm. Proposals for further research are given.
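Histogram specification, one of the methods the review singles out, can be illustrated with the classical CDF-matching recipe: each gray level of the input is mapped to the reference level at the same cumulative frequency.

```python
import numpy as np

def histogram_specification(img, reference):
    """Map each gray level of `img` to the gray level of `reference`
    that sits at the same quantile (classical CDF matching)."""
    src_vals, src_counts = np.unique(img.ravel(), return_counts=True)
    ref_vals, ref_counts = np.unique(reference.ravel(), return_counts=True)
    src_cdf = np.cumsum(src_counts) / img.size
    ref_cdf = np.cumsum(ref_counts) / reference.size
    # for each source quantile, find the reference value at that quantile
    mapped = np.interp(src_cdf, ref_cdf, ref_vals)
    lookup = dict(zip(src_vals, mapped))
    return np.vectorize(lookup.get)(img)
```

Specifying a fixed target histogram is one way to normalize IR imagery so that a detection threshold behaves consistently across scenes.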
Particle sizing in rocket motor studies utilizing hologram image processing
NASA Technical Reports Server (NTRS)
Netzer, David; Powers, John
1987-01-01
A technique of obtaining particle size information from holograms of combustion products is described. The holograms are obtained with a pulsed ruby laser through windows in a combustion chamber. The reconstruction is done with a krypton laser with the real image being viewed through a microscope. The particle size information is measured with a Quantimet 720 image processing system which can discriminate various features and perform measurements of the portions of interest in the image. Various problems that arise in the technique are discussed, especially those that are a consequence of the speckle due to the diffuse illumination used in the recording process.
Using image processing techniques on proximity probe signals in rotordynamics
NASA Astrophysics Data System (ADS)
Diamond, Dawie; Heyns, Stephan; Oberholster, Abrie
2016-06-01
This paper proposes a new approach to processing proximity probe signals in rotordynamic applications. It is argued that the signal can be interpreted as a one-dimensional image, so that existing image processing techniques can be used to gain information about the object being measured. Some results from one application are presented: rotor blade tip deflections can be calculated by localizing phase information in this one-dimensional image. It is experimentally shown that the newly proposed method performs more accurately than standard techniques, especially where the sampling rate of the data acquisition system is inadequate by conventional standards.
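The phase-localization idea can be illustrated with a generic analytic-signal computation: for narrowband probe signals, the sub-sample delay between two signals follows from the mean phase difference of their analytic signals. The FFT construction is the standard one (equivalent to scipy.signal.hilbert); a known carrier frequency f0 is an assumption of this sketch, not of the paper.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT (standard Hilbert-transform route)."""
    n = len(x)
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0   # double positive frequencies
    if n % 2 == 0:
        h[n // 2] = 1.0       # Nyquist bin kept once
    return np.fft.ifft(spec * h)

def phase_delay(a, b, f0):
    """Delay of b relative to a (seconds) for narrowband signals of
    known frequency f0 (Hz), from the mean analytic-signal phase
    difference; resolution is far below one sample."""
    dphi = np.angle(np.exp(1j * (np.angle(analytic_signal(b))
                                 - np.angle(analytic_signal(a))))).mean()
    return -dphi / (2.0 * np.pi * f0)
```

The wrap-safe difference via exp(1j*(...)) is what lets the estimate stay accurate even when the raw phases jump across the branch cut.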
Quantum processing of images by continuous wave optical parametric amplification.
Lopez, L; Treps, N; Chalopin, B; Fabre, C; Maître, A
2008-01-11
We have experimentally shown that a degenerate optical parametric oscillator pumped by a cw laser, inserted in a cavity having degenerate transverse modes such as a hemiconfocal or confocal cavity, and operating below the oscillation threshold in the regime of phase sensitive amplification, is able to process input images of various shapes in the quantum regime. More precisely, when deamplified, the image is amplitude squeezed; when amplified, its two polarization components are intensity correlated at the quantum level. In addition, the amplification process of the images is shown to take place in the noiseless regime.
Digital image processing of bone - Problems and potentials
NASA Technical Reports Server (NTRS)
Morey, E. R.; Wronski, T. J.
1980-01-01
The development of a digital image processing system for bone histomorphometry and fluorescent marker monitoring is discussed. The system in question is capable of making measurements of UV or light microscope features on a video screen with either video or computer-generated images, and comprises a microscope, low-light-level video camera, video digitizer and display terminal, color monitor, and PDP 11/34 computer. Capabilities demonstrated in the analysis of an undecalcified rat tibia include the measurement of perimeter and total bone area, and the generation of microscope images, false color images, digitized images and contoured images for further analysis. Software development will be based on an existing software library, specifically the mini-VICAR system developed at JPL. It is noted that the potentials of the system in terms of speed and reliability far exceed any problems associated with hardware and software development.
NASA Astrophysics Data System (ADS)
Worman, A.; Kjellin, J. P.; Lindahl, A.; Johansson, H.
2005-05-01
To throw light on coupled hydrological, chemical and microbiological processes in treatment wetlands, this study uses both radioactively labelled water and reactive tracers. A tracer mixture consisting of tritiated water, P-32 in the form of PO4- and N-15 in the form of N2O was injected into the 2.6-hectare Ekeby wetland, Sweden. From the breakthrough curves of tritium, the mean residence time of water in pond 1 is estimated to be about 3 to 3.5 days. The total injected activity of phosphorus was 17.98 GBq, of which about 13.73 GBq was recovered at the outlet during the investigation period ending 10 days and 16 hours after the start of the injection. This implies that 24% of the phosphate solution was removed during the November-December period in which the experiment was performed. The total injected amount of N-15 was 42.1 grams, of which 29.6 grams was recovered at the effluent. This means that 30% of the nitrogen was either retained in the wetland or removed by denitrification. An analysis of regular monitoring data shows that the annual removal rate in the entire wetland (each flow line passes two ponds in series) is about 50% for total phosphorus and 25% for total nitrogen. Probably the most important mechanism for this removal is adsorption onto particulate matter followed by deposition. Analyses of vegetation material indicate that a certain (minor) fraction was adsorbed to submersed and emergent macrophytes such as Elodea canadensis, Typha sp. (cattail) and Glyceria sp. (manna grass). A 2D mathematical model for both water flow and solute transport could explain the N-transport through the wetland. The model accounts for rate-limited exchange with the bed sediments and for denitrification in the water and bed sediment. Independent batch tests indicate a particularly high microbiological activity in the bed sediments. The rate-limited exchange with the bed also limits the denitrification capacity of the wetland.
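The removal percentages follow directly from the injected and recovered amounts, since removal = 1 - recovered/injected:

```python
# Recovery arithmetic from the abstract (P-32 in GBq, N-15 in grams).
injected_P, recovered_P = 17.98, 13.73
injected_N, recovered_N = 42.1, 29.6

removal_P = 1 - recovered_P / injected_P   # ≈ 0.24
removal_N = 1 - recovered_N / injected_N   # ≈ 0.30
print(f"P removal: {removal_P:.0%}, N removal: {removal_N:.0%}")
```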
Anomalous diffusion process applied to magnetic resonance image enhancement.
Senra Filho, A C da S; Salmon, C E Garrido; Murta Junior, L O
2015-03-21
Diffusion process is widely applied to digital image enhancement both directly introducing diffusion equation as in anisotropic diffusion (AD) filter, and indirectly by convolution as in Gaussian filter. Anomalous diffusion process (ADP), given by a nonlinear relationship in diffusion equation and characterized by an anomalous parameters q, is supposed to be consistent with inhomogeneous media. Although classic diffusion process is widely studied and effective in various image settings, the effectiveness of ADP as an image enhancement is still unknown. In this paper we proposed the anomalous diffusion filters in both isotropic (IAD) and anisotropic (AAD) forms for magnetic resonance imaging (MRI) enhancement. Filters based on discrete implementation of anomalous diffusion were applied to noisy MRI T2w images (brain, chest and abdominal) in order to quantify SNR gains estimating the performance for the proposed anomalous filter when realistic noise is added to those images. Results show that for images containing complex structures, e.g. brain structures, anomalous diffusion presents the highest enhancements when compared to classical diffusion approach. Furthermore, ADP presented a more effective enhancement for images containing Rayleigh and Gaussian noise. Anomalous filters showed an ability to preserve anatomic edges and a SNR improvement of 26% for brain images, compared to classical filter. In addition, AAD and IAD filters showed optimum results for noise distributions that appear on extreme situations on MRI, i.e. in low SNR images with approximate Rayleigh noise distribution, and for high SNR images with Gaussian or non central χ noise distributions. AAD and IAD filter showed the best results for the parametric range 1.2 < q < 1.6, suggesting that the anomalous diffusion regime is more suitable for MRI. This study indicates the proposed anomalous filters as promising approaches in qualitative and quantitative MRI enhancement.
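An isotropic anomalous diffusion step can be sketched as an explicit finite-difference solve of the porous-media equation u_t = laplacian(u^(2-q)); q = 1 recovers classical linear diffusion, and 1.2 < q < 1.6 is the range the study reports as best for MRI. The numerical scheme below (periodic boundaries, stabilizing offset) is an assumption of this sketch, not the authors' implementation.

```python
import numpy as np

def anomalous_isotropic_diffusion(img, q=1.4, dt=0.1, steps=20):
    """Explicit scheme for u_t = laplacian(u**(2-q)) on a normalized
    image. Periodic boundaries via np.roll; the small offset added to u
    keeps the nonlinear diffusivity bounded (and the scheme stable)
    where u is near zero."""
    u = img.astype(float)
    u = (u - u.min()) / max(u.max() - u.min(), 1e-12)  # normalize to [0, 1]
    for _ in range(steps):
        v = (u + 0.05) ** (2.0 - q)
        lap = (np.roll(v, 1, 0) + np.roll(v, -1, 0)
               + np.roll(v, 1, 1) + np.roll(v, -1, 1) - 4.0 * v)
        u = np.clip(u + dt * lap, 0.0, 1.0)
    return u
```

As with any diffusion filter, the iteration reduces the variance of a noisy image while the nonlinearity makes the smoothing intensity dependent.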
Dielectric barrier discharge image processing by Photoshop
NASA Astrophysics Data System (ADS)
Dong, Lifang; Li, Xuechen; Yin, Zengqian; Zhang, Qingli
2001-09-01
In this paper, the filamentary pattern of a dielectric barrier discharge has been processed using Photoshop, and the coordinates of each filament can be obtained. Two different Photoshop-based methods were used to analyze the spatial order of pattern formation in the dielectric barrier discharge. The results show that the distance between neighboring filaments at U = 14 kV and d = 0.9 mm is about 1.8 mm. Within experimental error, the results from the two methods agree.
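Once the filament coordinates are extracted, the spacing statistic reported above is a mean nearest-neighbour distance; a minimal numpy version:

```python
import numpy as np

def mean_nn_distance(points):
    """Mean nearest-neighbour distance of a set of 2-D coordinates
    (brute-force pairwise distances; fine for a few hundred filaments)."""
    pts = np.asarray(points, dtype=float)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)   # exclude each point's distance to itself
    return d.min(axis=1).mean()
```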
Image processing for improved eye-tracking accuracy
NASA Technical Reports Server (NTRS)
Mulligan, J. B.; Watson, A. B. (Principal Investigator)
1997-01-01
Video cameras provide a simple, noninvasive method for monitoring a subject's eye movements. An important concept is that of the resolution of the system, which is the smallest eye movement that can be reliably detected. While hardware systems are available that estimate direction of gaze in real-time from a video image of the pupil, such systems must limit image processing to attain real-time performance and are limited to a resolution of about 10 arc minutes. Two ways to improve resolution are discussed. The first is to improve the image processing algorithms that are used to derive an estimate. Off-line analysis of the data can improve resolution by at least one order of magnitude for images of the pupil. A second avenue by which to improve resolution is to increase the optical gain of the imaging setup (i.e., the amount of image motion produced by a given eye rotation). Ophthalmoscopic imaging of retinal blood vessels provides increased optical gain and improved immunity to small head movements but requires a highly sensitive camera. The large number of images involved in a typical experiment imposes great demands on the storage, handling, and processing of data. A major bottleneck had been the real-time digitization and storage of large amounts of video imagery, but recent developments in video compression hardware have made this problem tractable at a reasonable cost. Images of both the retina and the pupil can be analyzed successfully using a basic toolbox of image-processing routines (filtering, correlation, thresholding, etc.), which are, for the most part, well suited to implementation on vectorizing supercomputers.
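The off-line pupil analysis that buys the extra resolution can be sketched with a threshold-plus-weighted-centroid step: pixels darker than the threshold are weighted by their darkness, and the weighted centroid lands between pixel centers, giving sub-pixel localization that a hard pixel count alone would not.

```python
import numpy as np

def pupil_center(img, thresh):
    """Sub-pixel localization of a dark pupil: threshold, then take the
    darkness-weighted centroid of the below-threshold pixels."""
    weight = np.where(img < thresh, thresh - img.astype(float), 0.0)
    total = weight.sum()
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    return (ys * weight).sum() / total, (xs * weight).sum() / total
```

This is the kind of basic toolbox routine (thresholding plus moments) the abstract refers to, not the specific algorithm used in the study.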
Optimizing the processing and presentation of PPCR imaging
NASA Astrophysics Data System (ADS)
Davies, Andrew G.; Cowen, Arnold R.; Parkin, Geoff J. S.; Bury, Robert F.
1996-03-01
Photostimulable phosphor computed radiography (CR) is becoming an increasingly popular image acquisition technology. The acceptability of this technique, diagnostically, ergonomically and economically, is highly influenced by the method by which the image data are presented to the user. Traditional CR systems utilize an 11 × 14 inch film hardcopy format and can place two images per exposure onto this film, which does not correspond to the sizes and presentations provided by conventional techniques. It is also the authors' experience that the image enhancement algorithms provided by traditional CR systems do not provide optimal image presentation. An alternative image enhancement algorithm was therefore developed, along with a number of hardcopy formats designed to match the requirements of the image reporting process. The new image enhancement algorithm, called dynamic range reduction (DRR), is designed to provide a single presentation per exposure, maintaining the appearance of a conventional radiograph while optimizing the rendition of diagnostically relevant features within the image. The algorithm was developed on a Sun SPARCstation but later ported to a Philips EasyVisionRAD workstation. Print formats were developed on the EasyVision to improve the acceptability of the CR hardcopy. For example, for mammographic examinations, four mammograms (a cranio-caudal and a medio-lateral view of each breast) are taken for each patient, with all images placed onto a single sheet of 14 × 17 inch film. The new composite format provides a more suitable image presentation for reporting and is more economical to produce. It is the use of enhanced image processing and presentation that has enabled all mammography undertaken within the general infirmary to be performed using the CR/EasyVisionRAD DRR/3M 969 combination, without recourse to conventional film/screen mammography.
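A toy dynamic-range-reduction scheme, assuming the classic split into low-frequency and detail layers (the published DRR algorithm is more elaborate): a local mean is compressed by a factor alpha while the detail layer passes through unchanged, so diagnostically relevant small structures survive the range compression.

```python
import numpy as np

def box_blur(img, win):
    """Separable box blur (zero padding at the borders)."""
    k = np.ones(win) / win
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, tmp)

def dynamic_range_reduction(img, alpha=0.5, win=3):
    """Compress the low-frequency layer by alpha, keep detail intact."""
    low = box_blur(img.astype(float), win)
    detail = img - low
    return alpha * low + detail
```

With alpha = 1 the image is unchanged; smaller alpha squeezes the large-scale brightness range so the whole exposure fits the film's latitude.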
Parallel processing of ADS40 images on PC network
NASA Astrophysics Data System (ADS)
Qiu, Feng; Duan, Yansong; Zhang, Jianqing
2009-10-01
In this paper, we design a parallel processing system based on an economical hardware environment to optimize the photogrammetric processing of Leica ADS40 images, drawing on the ideas and methods of parallel computing. We adopt the PCAM principle of parallel computing to design and implement a test system for parallel processing of ADS40 images. The test system consists of common personal computers and a local gigabit network. It can make full use of network computing and storage resources, at an economical and practical cost, to process ADS40 images. Experiments show that it achieves a significant improvement in processing efficiency. Furthermore, the robustness and compatibility of this system are much higher than those of a stand-alone computer, owing to the redundancy the network provides. In conclusion, a parallel processing system based on a PC network offers a much more efficient solution for ADS40 photogrammetric production.
Negative tone imaging process and materials for EUV lithography
NASA Astrophysics Data System (ADS)
Tarutani, Shinji; Nihashi, Wataru; Hirano, Shuuji; Yokokawa, Natsumi; Takizawa, Hiroo
2013-03-01
The advantages of the NTI process in EUV are demonstrated by an optical simulation method for 0.25NA and 0.33NA illumination systems, from the viewpoint of optical aerial image quality and photon density. The extendibility of NTI to higher-NA systems is considered for further tight-pitch and small contact-hole imaging capability. Process and material design strategies for NTI are discussed in comparison with the ArF NTI process and materials, and challenges in EUV materials dedicated to the NTI process are discussed as well. A new polymer was designed for the EUV-NTD process, and resists formulated with this polymer demonstrated a clear advantage in resolution and sensitivity for isolated trench imaging, as well as 24 nm half-pitch resolution at dense contact holes, on a 0.3NA MET tool.