Science.gov

Sample records for teaching image processing

  1. Image Processing for Teaching.

    ERIC Educational Resources Information Center

    Greenberg, R.; And Others

    1993-01-01

    The Image Processing for Teaching project provides a powerful medium to excite students about science and mathematics, especially children from minority groups and others whose needs have not been met by traditional teaching. Using professional-quality software on microcomputers, students explore a variety of scientific data sets, including…

  2. Enhancing the Teaching of Digital Processing of Remote Sensing Image Course through Geospatial Web Processing Services

    NASA Astrophysics Data System (ADS)

    di, L.; Deng, M.

    2010-12-01

Remote sensing (RS) is an essential method of collecting data for Earth science research. Huge amounts of remote sensing data, most of it in image form, have been acquired. Almost all geography departments in the world offer courses in digital processing of remote sensing images. Such courses place emphasis on how to digitally process large amounts of multi-source images to solve real-world problems. However, due to the diversity and complexity of RS images and the shortcomings of current data and processing infrastructure, obstacles to effectively teaching such courses remain. The major obstacles include 1) difficulties in finding, accessing, integrating and using massive RS images by students and educators, and 2) inadequate processing functions and computing facilities for students to freely explore the massive data. Recent developments in geospatial Web processing service systems, which make massive data, computing power, and processing capabilities available to average Internet users anywhere in the world, promise the removal of these obstacles. The GeoBrain system developed by CSISS is an example of such a system. All functions available in the GRASS open-source GIS have been implemented as Web services in GeoBrain. Petabytes of remote sensing images in NASA data centers, the USGS Landsat data archive, and NOAA CLASS are transparently accessible and processable through GeoBrain. The GeoBrain system runs on a high-performance cluster server with large disk storage and a fast Internet connection. All GeoBrain capabilities can be accessed from any Internet-connected Web browser. Dozens of universities have used GeoBrain as an ideal platform to support data-intensive remote sensing education. This presentation gives a specific example of using GeoBrain geoprocessing services to enhance the teaching of GGS 588, Digital Remote Sensing, taught at the Department of Geography and Geoinformation Science, George Mason University. 
The course uses the textbook "Introductory

  3. Teaching Effectively with Visual Effect in an Image-Processing Class.

    ERIC Educational Resources Information Center

    Ng, G. S.

    1997-01-01

    Describes a course teaching the use of computers in emulating human visual capability and image processing and proposes an interactive presentation using multimedia technology to capture and sustain student attention. Describes the three phase presentation: introduction of image processing equipment, presentation of lecture material, and…

  4. Image Processing for Teaching: Transforming a Scientific Research Tool into an Educational Technology.

    ERIC Educational Resources Information Center

    Greenberg, Richard

    1998-01-01

    Describes the Image Processing for Teaching (IPT) project, which provides digital image processing to excite students about science and mathematics as they use research-quality software on microcomputers. Provides information on IPT, a dissemination project whose components have included widespread teacher education, curriculum-based materials…

  5. The teaching of computer programming and digital image processing in radiography.

    PubMed

    Allan, G L; Zylinski, J

    1998-06-01

    The increased use of digital processing techniques in Medical Radiations imaging modalities, along with the rapid advance of information technology, has resulted in a significant change in the delivery of radiographic teaching programs. This paper details a methodology used to concurrently educate radiographers in both computer programming and image processing. The students learn to program in Visual Basic for Applications (VBA), and the programming skills are contextualised by requiring the students to write a digital subtraction angiography (DSA) package. Program code generation and the image presentation interface are handled within the Microsoft Excel spreadsheet. The user-friendly nature of this common interface enables all students to readily begin program creation. The teaching of programming and image processing skills by this method may be readily generalised to other vocational fields where digital image manipulation is a professional requirement. PMID:9726504
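
    The heart of the DSA exercise described above is pixel-wise subtraction of a pre-contrast mask frame from a contrast frame, leaving only the dye-filled vessels. The course implements this in VBA inside Excel; purely as an illustration, a minimal pure-Python sketch of the same operation (function name and image values are hypothetical):

```python
def dsa_subtract(contrast, mask, offset=128):
    """Pixel-wise digital subtraction: contrast - mask, re-centred so a
    zero difference maps to mid-grey (offset), clamped to 8-bit range."""
    return [[max(0, min(255, c - m + offset))
             for c, m in zip(row_c, row_m)]
            for row_c, row_m in zip(contrast, mask)]

# Toy 2x3 frames: only the vessel pixels change once dye is injected.
mask     = [[100, 100, 100], [100, 100, 100]]
contrast = [[100,  40, 100], [100,  35, 100]]
print(dsa_subtract(contrast, mask))  # vessels stand out against flat mid-grey
```

    Static background subtracts to the flat offset value, so any remaining contrast is attributable to the injected dye.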

  6. Teaching image processing and pattern recognition with the Intel OpenCV library

    NASA Astrophysics Data System (ADS)

    Kozłowski, Adam; Królak, Aleksandra

    2009-06-01

    In this paper we present an approach to teaching image processing and pattern recognition with the use of the OpenCV library. Image processing, pattern recognition and computer vision are important branches of science and apply to tasks ranging from critical ones, such as medical diagnostics, to everyday ones in art and entertainment. It is therefore crucial to provide students of image processing and pattern recognition with the most up-to-date solutions available. In the Institute of Electronics at the Technical University of Lodz we facilitate the teaching process in this subject with the OpenCV library, an open-source set of classes, functions and procedures that can be used to program efficient and innovative algorithms for various purposes. The topics of student projects completed with the help of the OpenCV library range from automatic correction of image quality parameters and creation of panoramic images from video, to pedestrian tracking in surveillance camera video sequences and head-movement-based mouse cursor control for the motorically impaired.

  7. Teaching High School Science Using Image Processing: A Case Study of Implementation of Computer Technology.

    ERIC Educational Resources Information Center

    Greenberg, Richard; Raphael, Jacqueline; Keller, Jill L.; Tobias, Sheila

    1998-01-01

    Outlines an in-depth case study of teachers' use of image processing in biology, earth science, and physics classes in one high school science department. Explores issues surrounding technology implementation. Contains 21 references. (DDR)

  8. A self-teaching image processing and voice-recognition-based, intelligent and interactive system to educate visually impaired children

    NASA Astrophysics Data System (ADS)

    Iqbal, Asim; Farooq, Umar; Mahmood, Hassan; Asad, Muhammad Usman; Khan, Akrama; Atiq, Hafiz Muhammad

    2010-02-01

    A self-teaching, image-processing and voice-recognition-based system is developed to educate visually impaired children, chiefly in their primary education. The system comprises a computer, a vision camera, an ear speaker and a microphone. The camera, attached to the computer, is mounted on the ceiling at the required angle opposite the desk on which the book is placed. Sample images and voices, in the form of instructions and commands for English and Urdu alphabets, numeric digits, operators and shapes, are stored in a database. A blind child first reads an embossed character (object) with his fingers, then speaks the answer (the name of the character, shape, etc.) into the microphone. On receiving the child's voice command through the microphone, an image is captured by the camera and processed by a MATLAB® program, developed with the Image Acquisition and Image Processing toolboxes, which generates a response or the required set of instructions for the child via the ear speaker, resulting in self-education of a visually impaired child. A speech recognition program, also developed in MATLAB® with the Data Acquisition and Signal Processing toolboxes, records and processes the commands of the blind child.
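
    The recognition step — matching the captured image of an embossed character against the stored samples — is not detailed in the abstract (the system uses MATLAB toolboxes). As a toy stand-in, nearest-neighbour template matching by sum of absolute differences (SAD) illustrates the general idea; all names and data below are hypothetical:

```python
def sad(a, b):
    """Sum of absolute differences between two equal-sized binary images."""
    return sum(abs(pa - pb) for ra, rb in zip(a, b) for pa, pb in zip(ra, rb))

def classify(image, templates):
    """Return the label of the stored template closest to the captured image."""
    return min(templates, key=lambda label: sad(image, templates[label]))

# Tiny 3x3 "templates" for two shapes (purely illustrative).
templates = {
    "O": [[0, 1, 0], [1, 0, 1], [0, 1, 0]],
    "I": [[0, 1, 0], [0, 1, 0], [0, 1, 0]],
}
captured = [[0, 1, 0], [1, 0, 1], [0, 1, 1]]  # noisy "O": one wrong pixel
print(classify(captured, templates))
```

    A real system would normalise the captured image for position, scale and lighting before comparison, and would speak the matched label rather than print it.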

  9. SSMiles: Using Models to Teach about Remote Sensing and Image Processing.

    ERIC Educational Resources Information Center

    Tracy, Dyanne M., Ed.

    1994-01-01

    Presents an introductory lesson on remote sensing and image processing to be used in cooperative groups. Students are asked to solve a problem by gathering information, making inferences, transforming data into other forms, and making and testing hypotheses. Includes four expansions of the lesson and a reproducible student worksheet. (MKR)

  10. Image Processing

    NASA Technical Reports Server (NTRS)

    1993-01-01

    Electronic Imagery, Inc.'s ImageScale Plus software, developed through a Small Business Innovation Research (SBIR) contract with Kennedy Space Center for use on the Space Shuttle orbiter in 1991, enables astronauts to conduct image processing, prepare electronic still camera images in orbit, display them, and downlink images to ground-based scientists for evaluation. Electronic Imagery, Inc.'s ImageCount, a spin-off product of ImageScale Plus, is used to count trees in Florida orange groves. Other applications include x-ray and MRI imagery, textile designs, and special effects for movies. As of 1/28/98, the company could not be located; therefore, contact/product information is no longer valid.

  11. Images of Teaching.

    ERIC Educational Resources Information Center

    Hargrove, Kathy

    2003-01-01

    This article explores different teaching styles, including instructional managers (who focus on orchestrating sets of activities for groups and individuals), caring persons (who are more deeply concerned about how the work of the classroom contributes to the students' growth as individuals), or generous experts (who act as mentors). (Contains 1…

  12. Teaching Reflection Seismic Processing

    NASA Astrophysics Data System (ADS)

    Forel, D.; Benz, T.; Pennington, W. D.

    2004-12-01

    Without pictures, it is difficult to give students a feeling for wave propagation, transmission, and reflection. Even with pictures, wave propagation is still static to many. However, when students use and modify scripts that generate wavefronts and rays through a geologic model that they have modified themselves, we find that students gain a real feeling for wave propagation. To facilitate teaching 2-D seismic reflection data processing (from acquisition through migration) to our undergraduate and graduate Reflection Seismology students, we use Seismic Un*x (SU) software. SU is maintained and distributed by Colorado School of Mines, and it is freely available (at www.cwp.mines.edu/cwpcodes). Our approach includes use of synthetic and real seismic data, processing scripts, and detailed explanation of the scripts. Our real data were provided by Gregory F. Moore of the University of Hawaii. This approach can be used by any school at virtually no expense for either software or data, and can provide students with a sound introduction to techniques used in processing of reflection seismic data. The same software can be used for other purposes, such as research, with no additional expense. Students who have completed a course using SU are well equipped to begin using it for research, as well. Scripts for each processing step are supplied and explained to the students. Our detailed description of the scripts means students do not have to know anything about SU to start. Experience with the Unix operating system is preferable but not necessary -- our notes include Computer Hints to help the beginner work with the Unix operating system. We include several examples of synthetic model building, acquiring shot gathers through synthetic models, sorting shot gathers to CMP gathers, gain, 1-D frequency filtering, f-k filtering, deconvolution, semblance displays and velocity analysis, flattening data (NMO), stacking the CMPs, and migration. We use two real (marine) data sets. 
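
    The normal moveout (NMO) step in the processing sequence above rests on one formula: for a flat reflector, the reflection arrival time at source-receiver offset x is t(x) = sqrt(t0² + x²/v²), and flattening a CMP gather means shifting each trace back to its zero-offset time t0. A small sketch (velocity and times are illustrative):

```python
import math

def nmo_time(t0, offset, velocity):
    """Hyperbolic travel time on a CMP gather for a flat reflector:
    t(x) = sqrt(t0^2 + x^2 / v^2)  (t in s, offset in m, velocity in m/s)."""
    return math.sqrt(t0 ** 2 + (offset / velocity) ** 2)

def nmo_shift(t0, offset, velocity):
    """Moveout correction applied to flatten the event: t(x) - t0."""
    return nmo_time(t0, offset, velocity) - t0

# Zero-offset time 1.0 s, NMO velocity 2000 m/s:
for x in (0.0, 1000.0, 2000.0):
    print(x, round(nmo_time(1.0, x, 2000.0), 4))
```

    SU's `sunmo` performs this correction trace by trace using a picked velocity function; the sketch above only shows the underlying arithmetic for a single constant velocity.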
One

  13. Image Processing

    NASA Technical Reports Server (NTRS)

    1987-01-01

    A new spinoff product was derived from Geospectra Corporation's expertise in processing LANDSAT data in a software package. Called ATOM (for Automatic Topographic Mapping), it's capable of digitally extracting elevation information from stereo photos taken by spaceborne cameras. ATOM offers a new dimension of realism in applications involving terrain simulations, producing extremely precise maps of an area's elevations at a lower cost than traditional methods. ATOM has a number of applications involving defense training simulations and offers utility in architecture, urban planning, forestry, petroleum and mineral exploration.

  14. Digital image processing.

    PubMed

    Seeram, Euclid

    2004-01-01

    Digital image processing is now commonplace in radiology, nuclear medicine and sonography. This article outlines underlying principles and concepts of digital image processing. After completing this article, readers should be able to: List the limitations of film-based imaging. Identify major components of a digital imaging system. Describe the history and application areas of digital image processing. Discuss image representation and the fundamentals of digital image processing. Outline digital image processing techniques and processing operations used in selected imaging modalities. Explain the basic concepts and visualization tools used in 3-D and virtual reality imaging. Recognize medical imaging informatics as a new area of specialization for radiologic technologists. PMID:15352557
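
    One of the elementary processing operations covered by material like this is windowing (window/level), which maps a subrange of raw detector values onto the 8-bit display grey scale. A hedged sketch, with illustrative (not clinical) window values:

```python
def window_level(pixel, window, level):
    """Map a raw pixel value to 0-255 display grey using a window of
    the given width centred on the given level; values outside the
    window clip to black or white."""
    lo = level - window / 2.0
    if pixel <= lo:
        return 0
    if pixel >= lo + window:
        return 255
    return int(round((pixel - lo) / window * 255))

# Example window: width 400, level 40 (illustrative numbers only).
print([window_level(p, 400, 40) for p in (-200, 40, 240)])  # [0, 128, 255]
```

    Narrowing the window increases displayed contrast within the chosen value range at the cost of clipping everything outside it.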

  15. Teaching: A Reflective Process

    ERIC Educational Resources Information Center

    German, Susan; O'Day, Elizabeth

    2009-01-01

    In this article, the authors describe how they used formative assessments to ferret out possible misconceptions among middle-school students in a unit about weather-related concepts. Because they teach fifth- and eighth-grade science, this assessment also gives them a chance to see how student understanding develops over the years. This year they…

  16. Linear Algebra and Image Processing

    ERIC Educational Resources Information Center

    Allali, Mohamed

    2010-01-01

    We use digital image processing (DIP) technology to enhance the teaching of linear algebra so as to make the course more visual and interesting. Certainly, this visual approach of using technology to link linear algebra to DIP is interesting and unexpected to students as well as many faculty. (Contains 2 tables and 11 figures.)

  17. Teaching Image-Processing Concepts in Junior High School: Boys' and Girls' Achievements and Attitudes towards Technology

    ERIC Educational Resources Information Center

    Barak, Moshe; Asad, Khaled

    2012-01-01

    Background: This research focused on the development, implementation and evaluation of a course on image-processing principles aimed at middle-school students. Purpose: The overarching purpose of the study was that of integrating the learning of subjects in science, technology, engineering and mathematics (STEM), and linking the learning of these…

  18. Image-Processing Educator

    NASA Technical Reports Server (NTRS)

    Gunther, F. J.

    1986-01-01

    Apple Image-Processing Educator (AIPE) explores ability of microcomputers to provide personalized computer-assisted instruction (CAI) in digital image processing of remotely sensed images. AIPE is "proof-of-concept" system, not polished production system. User-friendly prompts provide access to explanations of common features of digital image processing and of sample programs that implement these features.

  19. Multispectral imaging and image processing

    NASA Astrophysics Data System (ADS)

    Klein, Julie

    2014-02-01

    The color accuracy of conventional RGB cameras is not sufficient for many color-critical applications. One of these applications, namely the measurement of color defects in yarns, is why Prof. Til Aach and the Institute of Image Processing and Computer Vision (RWTH Aachen University, Germany) started off with multispectral imaging. The first acquisition device was a camera using a monochrome sensor and seven bandpass color filters positioned sequentially in front of it. The camera allowed sampling the visible wavelength range more accurately and reconstructing the spectra for each acquired image position. An overview will be given of several optical and imaging aspects of the multispectral camera that have been investigated. For instance, optical aberrations caused by the filters and camera lens deteriorate the quality of captured multispectral images. The different aberrations were analyzed thoroughly and compensated, based on models of the optical elements and the imaging chain, by utilizing image processing. With this compensation, geometrical distortions disappear and sharpness is enhanced, without reducing the color accuracy of the multispectral images. Strong foundations in multispectral imaging were laid and a fruitful cooperation was initiated with Prof. Bernhard Hill. Current research topics, like stereo multispectral imaging and goniometric multispectral measurements, that are further explored with his expertise will also be presented in this work.

  20. Computers in Public Schools: Changing the Image with Image Processing.

    ERIC Educational Resources Information Center

    Raphael, Jacqueline; Greenberg, Richard

    1995-01-01

    The kinds of educational technologies selected can make the difference between uninspired, rote computer use and challenging learning experiences. University of Arizona's Image Processing for Teaching Project has worked with over 1,000 teachers to develop image-processing techniques that provide students with exciting, open-ended opportunities for…

  1. Hyperspectral image processing methods

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Hyperspectral image processing refers to the use of computer algorithms to extract, store and manipulate both spatial and spectral information contained in hyperspectral images across the visible and near-infrared portion of the electromagnetic spectrum. A typical hyperspectral image processing work...

  2. Hyperspectral image processing

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Hyperspectral image processing refers to the use of computer algorithms to extract, store and manipulate both spatial and spectral information contained in hyperspectral images across the visible and near-infrared portion of the electromagnetic spectrum. A typical hyperspectral image processing work...

  3. Hybrid image processing

    NASA Technical Reports Server (NTRS)

    Juday, Richard D.

    1990-01-01

    Partly-digital, partly-optical 'hybrid' image processing attempts to use the properties of each domain to synergistic advantage: while Fourier optics furnishes speed, digital processing allows the use of much greater algorithmic complexity. The video-rate image-coordinate transformation used is a critical technology for real-time hybrid image-pattern recognition. Attention is given to the separation of pose variables, image registration, and both single- and multiple-frame registration.

  4. Subroutines For Image Processing

    NASA Technical Reports Server (NTRS)

    Faulcon, Nettie D.; Monteith, James H.; Miller, Keith W.

    1988-01-01

    Image Processing Library computer program, IPLIB, is collection of subroutines facilitating use of COMTAL image-processing system driven by HP 1000 computer. Functions include addition or subtraction of two images with or without scaling, display of color or monochrome images, digitization of image from television camera, display of test pattern, manipulation of bits, and clearing of screen. Provides capability to read or write points, lines, and pixels from image; read or write at location of cursor; and read or write array of integers into COMTAL memory. Written in FORTRAN 77.
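
    IPLIB itself is FORTRAN 77 driving a COMTAL display on an HP 1000, so the following is purely an illustrative Python analogue of one listed function, "addition of two images with scaling" (clamped to the 8-bit display range; the function name is hypothetical):

```python
def add_images(a, b, scale=1.0):
    """Add two equal-sized 8-bit images element-wise, apply an optional
    scale factor, and clamp the result to the 0-255 display range."""
    return [[min(255, max(0, int((pa + pb) * scale)))
             for pa, pb in zip(ra, rb)]
            for ra, rb in zip(a, b)]

print(add_images([[100, 200]], [[100, 200]], scale=0.5))  # [[100, 200]]
print(add_images([[200]], [[200]]))                       # [[255]] (clamped)
```

    Scaling by 0.5 when adding two frames averages them, a common way to keep the sum inside the display range.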

  5. Teaching image-processing concepts in junior high school: boys' and girls' achievements and attitudes towards technology

    NASA Astrophysics Data System (ADS)

    Barak, Moshe; Asad, Khaled

    2012-04-01

    Background: This research focused on the development, implementation and evaluation of a course on image-processing principles aimed at middle-school students. Purpose: The overarching purpose of the study was that of integrating the learning of subjects in science, technology, engineering and mathematics (STEM), and linking the learning of these subjects to the children's world and to the digital culture characterizing society today. Sample: The participants were 60 junior high-school students (9th grade). Design and method: Data collection included observations in the classes, administering an attitude questionnaire before and after the course, giving an achievement exam and analyzing the students' final projects. Results and conclusions: The findings indicated that boys' and girls' achievements were similar throughout the course, and all managed to handle the mathematical knowledge without any particular difficulties. Learners' motivation to engage in the subject was high in the project-based learning part of the course in which they dealt, for instance, with editing their own pictures and experimenting with a facial recognition method. However, the students were less interested in learning the theory at the beginning of the course. The course increased the girls', more than the boys', interest in learning scientific-technological subjects in school, and the gender gap in this regard was bridged.

  6. Apple Image Processing Educator

    NASA Technical Reports Server (NTRS)

    Gunther, F. J.

    1981-01-01

    A software system design is proposed and demonstrated with pilot-project software. The system permits the Apple II microcomputer to be used for personalized computer-assisted instruction in the digital image processing of LANDSAT images. The programs provide data input, menu selection, graphic and hard-copy displays, and both general and detailed instructions. The pilot-project results are considered to be successful indicators of the capabilities and limits of microcomputers for digital image processing education.

  7. Image Processing Software

    NASA Technical Reports Server (NTRS)

    1992-01-01

    To convert raw data into environmental products, the National Weather Service and other organizations use the Global 9000 image processing system marketed by Global Imaging, Inc. The company's GAE software package is an enhanced version of the TAE, developed by Goddard Space Flight Center to support remote sensing and image processing applications. The system can be operated in three modes and is combined with HP Apollo workstation hardware.

  8. Image processing mini manual

    NASA Technical Reports Server (NTRS)

    Matthews, Christine G.; Posenau, Mary-Anne; Leonard, Desiree M.; Avis, Elizabeth L.; Debure, Kelly R.; Stacy, Kathryn; Vonofenheim, Bill

    1992-01-01

    The intent is to provide an introduction to the image processing capabilities available at the Langley Research Center (LaRC) Central Scientific Computing Complex (CSCC). Various image processing software components are described. Information is given concerning the use of these components in the Data Visualization and Animation Laboratory at LaRC.

  9. Concerns about Teaching Process: Student Teachers' Perspective

    ERIC Educational Resources Information Center

    Cakmak, Melek

    2008-01-01

    The aim of this study is to determine the student teachers' concerns about the teaching process including the teaching profession, teaching methods, planning, instruction, evaluation and classroom management. A total of 156 student teachers from five departments in the Gazi faculty of education participated in the study. A questionnaire including…

  10. Image Processing System

    NASA Technical Reports Server (NTRS)

    1986-01-01

    Mallinckrodt Institute of Radiology (MIR) is using a digital image processing system which employs NASA-developed technology. MIR's computer system is the largest radiology system in the world. It is used in diagnostic imaging. Blood vessels are injected with x-ray dye, and the images which are produced indicate whether arteries are hardened or blocked. A computer program developed by Jet Propulsion Laboratory known as Mini-VICAR/IBIS was supplied to MIR by COSMIC. The program provides the basis for developing the computer imaging routines for data processing, contrast enhancement and picture display.

  11. Visual color image processing

    NASA Astrophysics Data System (ADS)

    Qiu, Guoping; Schaefer, Gerald

    1999-12-01

    In this paper, we propose a color image processing method that combines modern signal processing techniques with knowledge of the properties of the human color vision system. Color signals are processed differently according to their visual importance. The emphasis of the technique is on preserving the total visual quality of the image while simultaneously taking computational efficiency into account. A specific color image enhancement technique, termed Hybrid Vector Median Filtering, is presented. Computer simulations have been performed to demonstrate that the new approach is technically sound, with results comparable to or better than those of traditional methods.
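
    The plain (non-hybrid) vector median filter that underlies such techniques replaces each pixel with the window pixel whose summed distance to all other pixels in the window is smallest, which rejects impulsive color noise without mixing channels. A minimal sketch (the paper's hybrid variant adds further logic not shown here):

```python
def vector_median(pixels):
    """Vector median of a window of RGB tuples: the member pixel that
    minimises the sum of Euclidean distances to all window pixels."""
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
    return min(pixels, key=lambda p: sum(dist(p, q) for q in pixels))

# A window with one impulsive (noisy) red outlier:
window = [(250, 10, 10), (12, 10, 9), (10, 11, 10), (9, 9, 10)]
print(vector_median(window))  # (12, 10, 9): the outlier is rejected
```

    Because the output is always one of the input pixels, no new (possibly unnatural) colors are introduced, unlike channel-wise scalar median filtering.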

  12. Meteorological image processing applications

    NASA Technical Reports Server (NTRS)

    Bracken, P. A.; Dalton, J. T.; Hasler, A. F.; Adler, R. F.

    1979-01-01

    Meteorologists at NASA's Goddard Space Flight Center are conducting an extensive program of research in weather and climate related phenomena. This paper focuses on meteorological image processing applications directed toward gaining a detailed understanding of severe weather phenomena. In addition, the paper discusses the ground data handling and image processing systems used at the Goddard Space Flight Center to support severe weather research activities and describes three specific meteorological studies which utilized these facilities.

  13. Methods in Astronomical Image Processing

    NASA Astrophysics Data System (ADS)

    Jörsäter, S.

    Contents: A Brief Introductory Note; History of Astronomical Imaging; Astronomical Image Data; Images in Various Formats; Digitized Image Data; Digital Image Data; Philosophy of Astronomical Image Processing; Properties of Digital Astronomical Images; Human Image Processing; Astronomical vs. Computer Science Image Processing; Basic Tools of Astronomical Image Processing; Display Applications; Calibration of Intensity Scales; Calibration of Length Scales; Image Re-shaping; Feature Enhancement; Noise Suppression; Noise and Error Analysis; Image Processing Packages: Design of AIPS and MIDAS; AIPS; MIDAS; Reduction of CCD Data; Bias Subtraction; Clipping; Preflash Subtraction; Dark Subtraction; Flat Fielding; Sky Subtraction; Extinction Correction; Deconvolution Methods; Rebinning/Combining; Summary and Prospects for the Future
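
    The core CCD reduction steps listed (bias subtraction, dark subtraction, flat fielding) amount, per pixel, to (raw − bias − dark) / flat, with the flat field normalised to unit mean. A toy sketch with illustrative frame values:

```python
def reduce_ccd(raw, bias, dark, flat):
    """Basic per-pixel CCD reduction: subtract the bias and dark frames,
    then divide by the normalised flat field to remove sensitivity
    variations across the detector."""
    return [[(r - b - d) / f
             for r, b, d, f in zip(rr, rb, rd, rf)]
            for rr, rb, rd, rf in zip(raw, bias, dark, flat)]

# One-row toy frame: the second pixel is half as sensitive (flat = 0.5),
# so flat fielding restores both pixels to the same true signal.
print(reduce_ccd([[1100.0, 600.0]], [[100.0, 100.0]],
                 [[0.0, 0.0]], [[1.0, 0.5]]))  # [[1000.0, 1000.0]]
```

    Real pipelines build the bias, dark, and flat frames by combining many calibration exposures (with clipping) before applying this arithmetic.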

  14. Onboard image processing

    NASA Technical Reports Server (NTRS)

    Martin, D. R.; Samulon, A. S.

    1979-01-01

    The possibility of onboard geometric correction of Thematic Mapper type imagery to make possible image registration is considered. Typically, image registration is performed by processing raw image data on the ground. The geometric distortion (e.g., due to variation in spacecraft location and viewing angle) is estimated by using a Kalman filter updated by correlating the received data with a small reference subimage, which has known location. Onboard image processing dictates minimizing the complexity of the distortion estimation while offering the advantages of a real time environment. In keeping with this, the distortion estimation can be replaced by information obtained from the Global Positioning System and from advanced star trackers. Although not as accurate as the conventional ground control point technique, this approach is capable of achieving subpixel registration. Appropriate attitude commands can be used in conjunction with image processing to achieve exact overlap of image frames. The magnitude of the various distortion contributions, the accuracy with which they can be measured in real time, and approaches to onboard correction are investigated.
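
    The ground technique being replaced — correlating received data with a small reference subimage of known location — can be illustrated in one dimension with integer shifts only (real systems correlate 2-D subimages to subpixel precision; names and data here are hypothetical):

```python
def best_shift(reference, window, max_shift=3):
    """Estimate the offset that best aligns `window` with `reference`
    by maximising the cross-correlation over integer shifts."""
    def score(shift):
        pairs = [(reference[i + shift], window[i])
                 for i in range(len(window))
                 if 0 <= i + shift < len(reference)]
        return sum(a * b for a, b in pairs)
    return max(range(-max_shift, max_shift + 1), key=score)

ref = [0, 0, 1, 5, 1, 0, 0, 0]
win = [1, 5, 1, 0, 0, 0, 0, 0]   # same profile, shifted by two samples
print(best_shift(ref, win))      # 2
```

    The estimated shift then feeds the geometric correction, exactly the quantity that GPS and star-tracker data would supply directly in the onboard approach.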

  15. Delivering labeled teaching images over the Web.

    PubMed Central

    Lehmann, H. P.; Nguyen, B.; Freedman, J.

    1998-01-01

    The Web provides educators with the best opportunity to date for distributing teaching images across the educational enterprise and within the clinical environment. Experience in the pre-Web era showed that labels and information linked to parts of the image are crucial to student learning. Standard Web technology does not enable the delivery of labeled images. We have developed an environment called OverLayer that supports the authoring and delivery of such images in a variety of formats. OverLayer has a number of functional specifications, based on the literature and on our experience, among them the following: users should be able to find components by name or by image, and to receive feedback about their choices so they can test themselves; the image should be of arbitrary size, reusable, linked to further information, and a stand-alone file; the labels should not obscure the image and should be linked to further information; images should be stand-alone files that can be transferred among faculty members. Implemented in Java, OverLayer (http://omie.med.jhmi.edu/overlayer) has at its heart a set of object classes that have been reused in a number of applets for different teaching purposes, together with a file format for creating OverLayer images. We have created a 350-image histology library and a 500-image pathology library, and are working on a 400-image GI endoscopy library. We hope that the OverLayer suite of classes and implementations will help to further the gains made by previous image-based hyperlinked technologies. PMID:9929253

  16. Image sets for satellite image processing systems

    NASA Astrophysics Data System (ADS)

    Peterson, Michael R.; Horner, Toby; Temple, Asael

    2011-06-01

    The development of novel image processing algorithms requires a diverse and relevant set of training images to ensure the general applicability of such algorithms for their required tasks. Images must be appropriately chosen for the algorithm's intended applications. Image processing algorithms often employ the discrete wavelet transform (DWT) algorithm to provide efficient compression and near-perfect reconstruction of image data. Defense applications often require the transmission of images and video across noisy or low-bandwidth channels. Unfortunately, the DWT algorithm's performance deteriorates in the presence of noise. Evolutionary algorithms are often able to train image filters that outperform DWT filters in noisy environments. Here, we present and evaluate two image sets suitable for the training of such filters for satellite and unmanned aerial vehicle imagery applications. We demonstrate the use of the first image set as a training platform for evolutionary algorithms that optimize DWT-based image transform filters for satellite image compression. We evaluate the suitability of each image as a training image during optimization. Each image is ranked according to its suitability as a training image and its difficulty as a test image. The second image set provides a test-bed for holdout validation of trained image filters. These images are used to independently verify that trained filters will provide strong performance on unseen satellite images. Collectively, these image sets are suitable for the development of image processing algorithms for satellite and reconnaissance imagery applications.
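
    The DWT referred to above splits a signal into a coarse approximation and detail coefficients, which is what makes compression and near-perfect reconstruction possible. The simplest instance, one level of the Haar wavelet on a 1-D signal, can be sketched as:

```python
def haar_1d(signal):
    """One level of the Haar DWT: pairwise averages (approximation)
    and pairwise half-differences (detail)."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

def haar_inverse(approx, detail):
    """Exactly invert haar_1d: each (average, difference) pair
    reconstructs the original two samples."""
    out = []
    for a, d in zip(approx, detail):
        out += [a + d, a - d]
    return out

a, d = haar_1d([9, 7, 3, 5])
print(a, d)                # [8.0, 4.0] [1.0, -1.0]
print(haar_inverse(a, d))  # [9.0, 7.0, 3.0, 5.0]
```

    Compression comes from quantising or discarding small detail coefficients; the evolutionary approach in the paper searches for filter coefficients other than Haar's that degrade more gracefully under channel noise.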

  17. Students' Perceptions of the Teaching Evaluation Process

    ERIC Educational Resources Information Center

    Kite, Mary E.; Subedi, Prabin Chandra; Bryant-Lees, Kinsey B.

    2015-01-01

    We explored how students view the teaching evaluation process and assessed their self-reported behaviors when completing student evaluations of teaching (SETs). We administered a 28-item survey assessing these views to students from a cross section of majors across 20 institutions (N = 597). Responses to this measure were analyzed using…

  18. Image-Processing Program

    NASA Technical Reports Server (NTRS)

    Roth, D. J.; Hull, D. R.

    1994-01-01

    IMAGEP manipulates digital image data to effect various processing, analysis, and enhancement functions. It is a keyboard-driven program organized into nine subroutines; within these subroutines are sub-subroutines, also selected via the keyboard. The algorithm has possible scientific, industrial, and biomedical applications in the study of flows in materials, the analysis of steels and ores, and pathology, respectively.

  19. Image processing and reconstruction

    SciTech Connect

    Chartrand, Rick

    2012-06-15

    This talk will examine some mathematical methods for image processing and the solution of underdetermined, linear inverse problems. The talk will have a tutorial flavor, mostly accessible to undergraduates, while still presenting research results. The primary approach is the use of optimization problems. We will find that relaxing the usual assumption of convexity will give us much better results.
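
    As a concrete illustration of the optimization approach to underdetermined linear inverse problems, here is a minimal sketch using the convex l1 relaxation solved by iterative soft thresholding (ISTA); the talk's nonconvex (p < 1) relaxations go further, and the problem sizes and values here are invented for illustration.

```python
import numpy as np

# Toy underdetermined inverse problem: 25 noise-free measurements of a
# sparse 50-dimensional signal (3 nonzero entries).
rng = np.random.default_rng(1)
A = rng.standard_normal((25, 50)) / np.sqrt(25)
x_true = np.zeros(50)
x_true[[4, 17, 33]] = [2.0, -1.5, 1.0]
b = A @ x_true

# ISTA: proximal gradient descent on 0.5*||Ax - b||^2 + lam*||x||_1.
lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant
x = np.zeros(50)
for _ in range(3000):
    g = x - step * A.T @ (A @ x - b)                          # gradient step
    x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # soft threshold
```

    Despite having half as many equations as unknowns, the sparsity prior recovers the support of the signal; replacing the l1 penalty with a nonconvex one can succeed with even fewer measurements.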

  20. Retinomorphic image processing.

    PubMed

    Ghosh, Kuntal; Bhaumik, Kamales; Sarkar, Sandip

    2008-01-01

    The present work is aimed at understanding and explaining some aspects of visual signal processing at the retinal level, while exploiting the same towards the development of some simple techniques in the domain of digital image processing. Classical studies on retinal physiology revealed the nature of contrast sensitivity of the receptive field of bipolar or ganglion cells, which lie in the outer and inner plexiform layers of the retina. To explain these observations, a difference of Gaussian (DOG) filter was suggested, which was subsequently modified to a Laplacian of Gaussian (LOG) filter for computational ease in handling two-dimensional retinal inputs. To date, almost all image processing algorithms used in various branches of science and engineering have followed the LOG filter or one of its variants. Recent observations in retinal physiology, however, indicate that the retinal ganglion cells receive input from a larger area than the classical receptive fields. We have proposed an isotropic model for the non-classical receptive field of the retinal ganglion cells, corroborated by these recent observations, by introducing higher-order derivatives of Gaussian expressed as linear combinations of Gaussians only. In digital image processing, this provides a new mechanism of edge detection on one hand and image half-toning on the other. It has also been found that living systems may sometimes prefer to "perceive" the external scenario by adding noise to the received signals at the pre-processing level, for arriving at better information on light and shade in the edge map. The proposed model also provides an explanation for many brightness-contrast illusions hitherto unexplained, not only by the classical isotropic model but also by some other Gestalt and Constructivist models and by non-isotropic multi-scale models. The proposed model is easy to implement in both the analog and digital domains. A scheme for implementation in the analog domain generates a new silicon retina
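
    The classical DOG receptive-field model mentioned above is easy to sketch. The following Python code is a generic illustration (not the authors' non-classical model): it builds a center-minus-surround difference of Gaussians and shows its response peaking at a step edge, the behavior that makes it an edge detector. The sigma values are arbitrary.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur using 1-D convolutions (kernel truncated
    at 3 sigma)."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    k /= k.sum()
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, rows)

def dog(img, sigma_center=1.0, sigma_surround=2.0):
    """Difference of Gaussians: narrow excitatory center minus wide
    inhibitory surround, the classical receptive-field model."""
    return gaussian_blur(img, sigma_center) - gaussian_blur(img, sigma_surround)

# A vertical step edge: the DOG response is zero in uniform regions and
# peaks near the discontinuity, mimicking retinal contrast sensitivity.
img = np.zeros((32, 32))
img[:, 16:] = 1.0
response = dog(img)
```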

  1. Teaching Geoscience with Visualizations: Using Images, Animations and Models Effectively

    NASA Astrophysics Data System (ADS)

    Manduca, C. A.; Hall-Wallace, M.; Mogk, D.; Tversky, B.; Slotta, J.; Crabaugh, J.

    2004-05-01

    Visualizing the Earth, its processes, and its evolution through time is a fundamental aspect of geoscience. Geoscientists use a wide variety of tools to assist them in creating their own mental images. For example, we now use multilayered visualizations of geographically referenced data to analyze the relationships between different variables, and we create animations to look at changes in data or model output through time. An NAGT On the Cutting Edge emerging theme workshop focused on the use of visualization tools in teaching geoscience by addressing the question "How do we teach geoscience with visualizations effectively?" The workshop, held February 26-29 at Carleton College, brought together geoscientists who are leaders in using visualizations in their teaching, learning scientists who study how we perceive and learn from visualizations, and creators of visualizations and visualization tools. Participants considered what we know about using visualizations effectively to teach geoscience, what important questions need to be answered to improve our ability to teach effectively, and what resources are needed to increase the capability of teaching with visualizations in the geosciences. Discussion focused on how we use visualizations in our teaching to describe and explain geoscience concepts and to explore and understand data. In addition, a section of the workshop focused on powerful emerging tools and technologies for visualization and their use in geoscience education. Workshop leaders and participants have created a website that includes visualizations useful in teaching, an annotated bibliography of research about teaching and learning with visualizations, essays by workshop participants about their work with visualizations, and information for visualization creators. Further information can be found at serc.carleton.edu/NAGTWorkshops/visualize04.

  2. Image processing technology

    SciTech Connect

    Van Eeckhout, E.; Pope, P.; Balick, L.

    1996-07-01

    This is the final report of a two-year, Laboratory-Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The primary objective of this project was to advance image processing and visualization technologies for environmental characterization. This was effected by developing and implementing analyses of remote sensing data from satellite and airborne platforms, and demonstrating their effectiveness in visualization of environmental problems. Many sources of information were integrated as appropriate using geographic information systems.

  3. Time to Teach: Teaching-Learning Processes in Primary Schools.

    ERIC Educational Resources Information Center

    Bennett, Neville

    A model of the teaching-learning process identifies and describes varied behavioral dimensions of the classroom and how they relate to pupil achievement. The model is based on the assumption that the total amount of engaged time on a particular topic is the most important determinant of achievement and has the components of: (1) quantity of…

  4. Introduction to computer image processing

    NASA Technical Reports Server (NTRS)

    Moik, J. G.

    1973-01-01

    Theoretical backgrounds and digital techniques for a class of image processing problems are presented. Image formation in the context of linear system theory, image evaluation, noise characteristics, mathematical operations on image and their implementation are discussed. Various techniques for image restoration and image enhancement are presented. Methods for object extraction and the problem of pictorial pattern recognition and classification are discussed.

  5. Teaching Writing Skills: Focus on the Process.

    ERIC Educational Resources Information Center

    Fraser, Carol

    Current views of the writing process are explored, and implications are drawn from them for the teaching of writing skills in the second language class. Certain psychological processes seem to be common to most writing tasks, namely: (1) the conception stage; (2) the incubation stage, in which two mental processes are at work getting the facts and…

  6. scikit-image: image processing in Python

    PubMed Central

    Schönberger, Johannes L.; Nunez-Iglesias, Juan; Boulogne, François; Warner, Joshua D.; Yager, Neil; Gouillart, Emmanuelle; Yu, Tony

    2014-01-01

    scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image. More information can be found on the project homepage, http://scikit-image.org. PMID:25024921

  7. scikit-image: image processing in Python.

    PubMed

    van der Walt, Stéfan; Schönberger, Johannes L; Nunez-Iglesias, Juan; Boulogne, François; Warner, Joshua D; Yager, Neil; Gouillart, Emmanuelle; Yu, Tony

    2014-01-01

    scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image. More information can be found on the project homepage, http://scikit-image.org. PMID:25024921
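
    A minimal example of the library's API, assuming scikit-image is installed; data.camera() is one of its bundled test images, so nothing is downloaded.

```python
from skimage import data, filters

# Two lines from raw pixels to an edge map -- the kind of concise,
# well-documented API the paper highlights.
image = data.camera()          # bundled grayscale test image
edges = filters.sobel(image)   # Sobel gradient-magnitude edge map
```

    The same pattern (load, apply a documented filter, inspect the array) extends across the library's segmentation, restoration, and measurement modules, which is what makes it convenient for education.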

  8. Image Processing Diagnostics: Emphysema

    NASA Astrophysics Data System (ADS)

    McKenzie, Alex

    2009-10-01

    Currently the computerized tomography (CT) scan can detect emphysema sooner than traditional x-rays, but other tests are required to measure more accurately the amount of affected lung. CT scan images show clearly whether a patient has emphysema, but visual inspection alone cannot quantify the degree of the disease, which appears merely as subtle, barely distinct dark spots on the lung. Our goal is to create a software plug-in that interfaces with existing open-source medical imaging software to automate the process of accurately diagnosing and determining emphysema severity levels in patients. This will be accomplished by performing a number of statistical calculations using data taken from CT scan images of several patients representing a wide range of severity of the disease. These analyses include an examination of the deviation from a normal distribution curve to determine skewness, a commonly used statistical parameter. Our preliminary results show that this method of assessment appears to be more accurate and robust than currently utilized methods, which involve looking at percentages of radiodensities in air passages of the lung.
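
    The skewness statistic at the heart of this approach is a one-liner. The sketch below uses synthetic stand-in values, not real CT data: emphysematous tissue adds abnormally dark (very negative Hounsfield-unit) voxels, pulling the histogram away from a normal distribution, and the mixture parameters here are invented for illustration.

```python
import numpy as np

def skewness(values):
    """Third standardized moment: the deviation-from-normal measure
    the abstract proposes for grading emphysema severity."""
    v = np.asarray(values, dtype=float)
    mu, sigma = v.mean(), v.std()
    return ((v - mu) ** 3).mean() / sigma ** 3

# Synthetic attenuation values: healthy lung clusters near one mode,
# while emphysema adds a second, much darker population of voxels.
rng = np.random.default_rng(0)
healthy = rng.normal(-700, 50, 10_000)
emphysema = np.concatenate([healthy, rng.normal(-950, 20, 3_000)])
```

    On the healthy sample the skewness is near zero; the extra dark voxels drive it strongly negative, which is the signal a severity classifier can threshold.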

  9. Computer image processing and recognition

    NASA Technical Reports Server (NTRS)

    Hall, E. L.

    1979-01-01

    A systematic introduction to the concepts and techniques of computer image processing and recognition is presented. Consideration is given to such topics as image formation and perception; computer representation of images; image enhancement and restoration; reconstruction from projections; digital television, encoding, and data compression; scene understanding; scene matching and recognition; and processing techniques for linear systems.

  10. Smart Image Enhancement Process

    NASA Technical Reports Server (NTRS)

    Jobson, Daniel J. (Inventor); Rahman, Zia-ur (Inventor); Woodell, Glenn A. (Inventor)

    2012-01-01

    Contrast and lightness measures are used to first classify the image as either non-turbid or turbid. If turbid, the original image is enhanced to generate a first enhanced image. If non-turbid, the original image is classified in terms of a merged contrast/lightness score based on the contrast and lightness measures. The non-turbid image is enhanced to generate a second enhanced image when a poor contrast/lightness score is associated therewith. When the second enhanced image has a poor contrast/lightness score associated therewith, this image is enhanced to generate a third enhanced image. A sharpness measure is computed for one image that is selected from (i) the non-turbid image, (ii) the first enhanced image, (iii) the second enhanced image when a good contrast/lightness score is associated therewith, and (iv) the third enhanced image. If the selected image is not sharp, it is sharpened to generate a sharpened image. The final image is selected from the selected image and the sharpened image.
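
    The decision sequence above can be sketched as control flow. Everything below is a hypothetical skeleton: the measurement and enhancement routines are caller-supplied placeholders, not the patented algorithms, and the toy numeric "images" exist only to exercise the branches.

```python
def enhancement_cascade(image, is_turbid, score, enhance, is_sharp, sharpen):
    """Follow the abstract's cascade: turbid images get one enhancement;
    non-turbid images are enhanced up to twice while their merged
    contrast/lightness score stays 'poor'; the selected image is then
    sharpened if it is not already sharp."""
    if is_turbid(image):
        selected = enhance(image)                 # first enhanced image
    else:
        selected = image
        if score(selected) == "poor":
            selected = enhance(selected)          # second enhanced image
            if score(selected) == "poor":
                selected = enhance(selected)      # third enhanced image
    return selected if is_sharp(selected) else sharpen(selected)

# Toy stand-ins: an "image" is just a number, enhancement adds 1,
# sharpening adds 10 (placeholders for real image operations).
is_turbid = lambda x: x < 0
score = lambda x: "poor" if x < 2 else "good"
enhance = lambda x: x + 1
is_sharp = lambda x: x >= 3
sharpen = lambda x: x + 10
final = enhancement_cascade(0.0, is_turbid, score, enhance, is_sharp, sharpen)
```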

  11. Complex Dynamics in Academics' Developmental Processes in Teaching

    ERIC Educational Resources Information Center

    Trautwein, Caroline; Nückles, Matthias; Merkt, Marianne

    2015-01-01

    Improving teaching in higher education is a concern for universities worldwide. This study explored academics' developmental processes in teaching using episodic interviews and teaching portfolios. Eight academics in the context of teaching development reported changes in their teaching and change triggers. Thematic analyses revealed seven areas…

  12. IMAGES: An interactive image processing system

    NASA Technical Reports Server (NTRS)

    Jensen, J. R.

    1981-01-01

    The IMAGES interactive image processing system was created specifically for undergraduate remote sensing education in geography. The system is interactive, relatively inexpensive to operate, almost hardware independent, and responsive to numerous users at one time in a time-sharing mode. Most important, it provides a medium whereby theoretical remote sensing principles discussed in lecture may be reinforced in laboratory as students perform computer-assisted image processing. In addition to its use in academic and short course environments, the system has also been used extensively to conduct basic image processing research. The flow of information through the system is discussed including an overview of the programs.

  13. ASPIC: STARLINK image processing package

    NASA Astrophysics Data System (ADS)

    Davenhall, A. C.; Hartley, Ken F.; Penny, Alan J.; Kelly, B. D.; King, Dave J.; Lupton, W. F.; Tudhope, D.; Pike, C. D.; Cooke, J. A.; Pence, W. D.; Wallace, Patrick T.; Brownrigg, D. R. K.; Baines, Dave W. T.; Warren-Smith, Rodney F.; McNally, B. V.; Bell, L. L.; Jones, T. A.; Terrett, Dave L.; Pearce, D. J.; Carey, J. V.; Currie, Malcolm J.; Benn, Chris; Beard, S. M.; Giddings, Jack R.; Balona, Luis A.; Harrison, B.; Wood, Roger; Sparkes, Bill; Allan, Peter M.; Berry, David S.; Shirt, J. V.

    2015-10-01

    ASPIC handled basic astronomical image processing. Early releases concentrated on image arithmetic, standard filters, expansion/contraction/selection/combination of images, and displaying and manipulating images on the ARGS and other devices. Later releases added new astronomy-specific applications to this sound framework. The ASPIC collection of about 400 image-processing programs was written using the Starlink "interim" environment in the 1980s; the software is now obsolete.

  14. Processing Visual Images

    SciTech Connect

    Litke, Alan

    2006-03-27

    The back of the eye is lined by an extraordinary biological pixel detector, the retina. This neural network is able to extract vital information about the external visual world, and transmit this information in a timely manner to the brain. In this talk, Professor Litke will describe a system that has been implemented to study how the retina processes and encodes dynamic visual images. Based on techniques and expertise acquired in the development of silicon microstrip detectors for high energy physics experiments, this system can simultaneously record the extracellular electrical activity of hundreds of retinal output neurons. After presenting first results obtained with this system, Professor Litke will describe additional applications of this incredible technology.

  15. The Process of Teaching Therapeutic Horseback Riding.

    ERIC Educational Resources Information Center

    Renker, Lorraine

    Therapeutic horseback riding for persons with disabilities provides physical, mental, social, and emotional benefits. Most research in this area has focused on the product or benefits of therapeutic riding, while the process of teaching horseback riding has received little attention. Research from the fields of regular education, special…

  16. Teaching, Communication, and Book Choice Processes

    ERIC Educational Resources Information Center

    Ryan, Dana Marie

    2013-01-01

    Allowing students to select their own books for independent reading has been linked to increased reading engagement, heightened motivation to read, and greater independence and efficacy in reading. However, there has been little exploration of the processes surrounding book choice in elementary classrooms, particularly teaching practices that…

  17. Teaching Process Writing in an Online Environment

    ERIC Educational Resources Information Center

    Carolan, Fergal; Kyppö, Anna

    2015-01-01

    This reflective practice paper offers some insights into teaching an interdisciplinary academic writing course aimed at promoting process writing. The study reflects on students' acquisition of writing skills and the teacher's support practices in a digital writing environment. It presents writers' experiences related to various stages of process…

  18. Teaching and Learning: A Collaborative Process.

    ERIC Educational Resources Information Center

    Goldberg, Merryl R.

    1990-01-01

    Explains the teaching-research method of instruction that employs the teacher and students as collaborative partners in the learning process. States that students attain knowledge through assimilating experiences in ways that are most meaningful for them. Case studies are included. (GG)

  19. Teaching Psychological Report Writing: Content and Process

    ERIC Educational Resources Information Center

    Wiener, Judith; Costaris, Laurie

    2012-01-01

    The purpose of this article is to discuss the process of teaching graduate students in school psychology to write psychological reports that teachers and parents find readable and that guide intervention. The consensus from studies across four decades of research is that effective psychological reports connect to the client's context; have clear…

  20. Teaching English as a Cultural Process.

    ERIC Educational Resources Information Center

    Bailey, Wilfrid C.

    Teaching of English is involved in the transmission of culture in two ways: (1) it is part of the complex process through which culture is transmitted; and (2) it can be a vehicle for the transmission of culture. The English teacher is faced with a combination of the two tasks of enculturation and acculturation. The effective teacher must clearly…

  1. FORTRAN Algorithm for Image Processing

    NASA Technical Reports Server (NTRS)

    Roth, Don J.; Hull, David R.

    1987-01-01

    FORTRAN computer algorithm containing various image-processing analysis and enhancement functions developed. Algorithm developed specifically to process images of developmental heat-engine materials obtained with sophisticated nondestructive evaluation instruments. Applications of program include scientific, industrial, and biomedical imaging for studies of flaws in materials, analyses of steel and ores, and pathology.

  2. Teaching Process Design through Integrated Process Synthesis

    ERIC Educational Resources Information Center

    Metzger, Matthew J.; Glasser, Benjamin J.; Patel, Bilal; Hildebrandt, Diane; Glasser, David

    2012-01-01

    The design course is an integral part of chemical engineering education. A novel approach to the design course was recently introduced at the University of the Witwatersrand, Johannesburg, South Africa. The course aimed to introduce students to systematic tools and techniques for setting and evaluating performance targets for processes, as well as…

  3. The APL image processing laboratory

    NASA Technical Reports Server (NTRS)

    Jenkins, J. O.; Randolph, J. P.; Tilley, D. G.; Waters, C. A.

    1984-01-01

    The present and proposed capabilities of the Central Image Processing Laboratory, which provides a powerful resource for the advancement of programs in missile technology, space science, oceanography, and biomedical image analysis, are discussed. The use of image digitizing, digital image processing, and digital image output permits a variety of functional capabilities, including: enhancement, pseudocolor, convolution, computer output microfilm, presentation graphics, animations, transforms, geometric corrections, and feature extractions. The hardware and software of the Image Processing Laboratory, consisting of digitizing and processing equipment, software packages, and display equipment, is described. Attention is given to applications for imaging systems, map geometric correction, raster movie display of Seasat ocean data, Seasat and Skylab scenes of Nantucket Island, Space Shuttle imaging radar, differential radiography, and a computerized tomographic scan of the brain.

  4. Multiscale Image Processing of Solar Image Data

    NASA Astrophysics Data System (ADS)

    Young, C.; Myers, D. C.

    2001-12-01

    It is often said that the blessing and curse of solar physics is too much data. Solar missions such as Yohkoh, SOHO and TRACE have shown us the Sun with amazing clarity but have also increased the amount of highly complex data. We have improved our view of the Sun, yet we have not improved our analysis techniques. The standard techniques used for analysis of solar images generally consist of observing the evolution of features in a sequence of byte-scaled images or a sequence of byte-scaled difference images. The determination of features and structures in the images is done qualitatively by the observer; there is little quantitative and objective analysis done with these images. Many advances in image processing techniques have occurred in the past decade, and many of these methods are possibly suited for solar image analysis. Multiscale/multiresolution methods are perhaps the most promising. These methods have been used to formulate the human ability to view and comprehend phenomena on different scales, so these techniques could be used to quantify the image processing done by the observer's eyes and brain. In this work we present several applications of multiscale techniques applied to solar image data. Specifically, we discuss uses of the wavelet, curvelet, and related transforms to define a multiresolution support for EIT, LASCO and TRACE images.

  5. Cooperative processes in image segmentation

    NASA Technical Reports Server (NTRS)

    Davis, L. S.

    1982-01-01

    Research into the role of cooperative, or relaxation, processes in image segmentation is surveyed. Cooperative processes can be employed at several levels of the segmentation process as a preprocessing enhancement step, during supervised or unsupervised pixel classification and, finally, for the interpretation of image segments based on segment properties and relations.
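
    A minimal sketch of the relaxation idea, assuming a simple compatibility rule invented for illustration: each pixel holds a probability vector over labels, and neighboring pixels iteratively reinforce locally consistent labelings. Real relaxation-labeling schemes use richer compatibility coefficients.

```python
import numpy as np

def relax(prob, iterations=10):
    """Cooperative relaxation sketch: per-pixel label probabilities are
    reweighted by the average of their 4-neighbors' probabilities
    (periodic boundaries via np.roll), then renormalized.
    prob has shape (H, W, n_labels)."""
    p = prob.copy()
    for _ in range(iterations):
        support = (np.roll(p, 1, 0) + np.roll(p, -1, 0) +
                   np.roll(p, 1, 1) + np.roll(p, -1, 1)) / 4.0
        p = p * support                      # neighbor compatibility
        p /= p.sum(axis=2, keepdims=True)    # keep probabilities normalized
    return p

# Two regions with soft, noisy label evidence and one contrarian pixel.
prob = np.zeros((8, 8, 2))
prob[:, :4] = [0.7, 0.3]   # left region leans toward label 0
prob[:, 4:] = [0.3, 0.7]   # right region leans toward label 1
prob[2, 2] = [0.3, 0.7]    # noisy pixel inside the left region
smoothed = relax(prob)
```

    After a few iterations the contrarian pixel is pulled to agree with its region, which is exactly the cooperative smoothing effect used during pixel classification.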

  6. Role of Clinical Images Based Teaching as a Supplement to Conventional Clinical Teaching in Dermatology

    PubMed Central

    Kumar, Gurumoorthy Rajesh; Madhavi, Sankar; Karthikeyan, Kaliaperumal; Thirunavakarasu, MR

    2015-01-01

    Introduction: Clinical Dermatology is a visually oriented specialty, where visually oriented teaching is more important than it is in any other specialty. It is essential that students have repeated exposure to common dermatological disorders in the limited hours of Dermatology clinical teaching. Aim: This study was conducted to assess the effect of clinical-image-based teaching as a supplement to patient-based clinical teaching in Dermatology, among final-year MBBS students. Methods: A clinical batch comprising 19 students was chosen for the study. Apart from the routine clinical teaching sessions, clinical-image-based teaching was conducted. This teaching method was evaluated using a retrospective pre-post questionnaire. Students' performance was assessed using a Photo Quiz and an Objective Structured Clinical Examination (OSCE). Feedback about the addition of the image-based class was collected from students. Results: A significant improvement was observed in the self-assessment scores following image-based teaching. The mean OSCE score was 6.26/10, and that of the Photo Quiz was 13.6/20. Conclusion: This image-based Dermatology teaching has proven to be an excellent supplement to routine clinical-case-based teaching. PMID:26677267

  7. Voyager image processing at the Image Processing Laboratory

    NASA Technical Reports Server (NTRS)

    Jepsen, P. L.; Mosher, J. A.; Yagi, G. M.; Avis, C. C.; Lorre, J. J.; Garneau, G. W.

    1980-01-01

    This paper discusses new digital processing techniques as applied to the Voyager Imaging Subsystem and devised to explore atmospheric dynamics, spectral variations, and the morphology of Jupiter, Saturn and their satellites. Radiometric and geometric decalibration processes, the modulation transfer function, and processes to determine and remove photometric properties of the atmosphere and surface of Jupiter and its satellites are examined. It is shown that selected images can be processed into 'approach at constant longitude' time-lapse movies which are useful in observing atmospheric changes of Jupiter. Photographs are included to illustrate various image processing techniques.

  8. Image Defects Useful In Teaching Students

    NASA Astrophysics Data System (ADS)

    Johnson, R. Barry

    1987-06-01

    In the early stages of teaching students the subject of lens design, the author has found it useful to present the concept of image defects in a form which relates the aberration polynomial terms to specific ray intercept data. The normal development of exact ray intercept errors (ɛx,ɛy) as a power series in terms of the canonical coordinates (ρ,θ,H,ɛ) following Buchdahl is in general observed to be theoretically pleasing. Students often find it difficult, however, to quickly grasp the physical interpretation of the numerous aberration coefficients. On the other hand, they seem to readily visualize the concept of real ray intercept deviations from the ideal image point. By expressing the actual ray intercept error at the image plane as additive contributions, it has been found illustrative to show how ray intercept data from a few selected trigonometrically-traced rays can be utilized to compute these contributions including explaining how to identify the oblique spherical and higher-order coma portions of the astigmatic and comatic error contributions. The necessity for including ray-trace data not in either the tangential or sagittal planes [e.g., (ρ,θ,H,ɛ)=(ρ,45°,H,ɛ)] to assure that all aberration coefficients are accounted for is also discussed.
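
    The link between the aberration polynomial and traced-ray data can be made concrete with a toy meridional fan: sample the transverse intercept error at several pupil heights and recover the series coefficients by least squares. The coefficient values and the three-term (defocus, third- and fifth-order spherical) model below are invented for illustration, not taken from the paper.

```python
import numpy as np

# Normalized pupil coordinates of an axial meridional ray fan.
rho = np.linspace(-1.0, 1.0, 11)

# Hypothetical aberration coefficients generating the "traced" intercept
# errors; for an axial fan only odd powers of rho survive by symmetry.
defocus, sph3, sph5 = 0.02, -0.05, 0.01
eps_y = defocus * rho + sph3 * rho**3 + sph5 * rho**5

# Least-squares fit of the power-series terms to the ray intercept data.
A = np.column_stack([rho, rho**3, rho**5])
coeffs, *_ = np.linalg.lstsq(A, eps_y, rcond=None)
```

    With noise-free data the fit returns the generating coefficients exactly, which is the pedagogical point: each traced ray's intercept error decomposes into additive, physically interpretable contributions.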

  9. Ultrasound imaging in teaching cardiac physiology.

    PubMed

    Johnson, Christopher D; Montgomery, Laura E A; Quinn, Joe G; Roe, Sean M; Stewart, Michael T; Tansey, Etain A

    2016-09-01

    This laboratory session provides hands-on experience for students to visualize the beating human heart with ultrasound imaging. Simple views are obtained from which students can directly measure important cardiac dimensions in systole and diastole. This allows students to derive, from first principles, important measures of cardiac function, such as stroke volume, ejection fraction, and cardiac output. By repeating the measurements on a subject after a brief exercise period, increases in stroke volume and ejection fraction are easily demonstrable, with or without an increase in left ventricular end-diastolic volume (which indicates preload). Thus, factors that affect cardiac performance can readily be discussed. This activity may be performed as a practical demonstration and visualized using an overhead projector or networked computers, concentrating on using the ultrasound images to teach basic physiological principles. This has proved to be highly popular with students, who reported a significant improvement in their understanding of Frank-Starling's law of the heart with ultrasound imaging. PMID:27445285
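
    The first-principles derivation the students perform reduces to three textbook formulas. The sketch below uses illustrative resting values, not measurements from the paper: stroke volume SV = EDV - ESV, ejection fraction EF = SV / EDV, and cardiac output CO = SV x heart rate.

```python
def cardiac_indices(edv_ml, esv_ml, heart_rate_bpm):
    """Derive stroke volume (mL), ejection fraction (fraction), and
    cardiac output (L/min) from imaged ventricular volumes."""
    sv = edv_ml - esv_ml                 # stroke volume
    ef = sv / edv_ml                     # ejection fraction
    co = sv * heart_rate_bpm / 1000.0    # cardiac output, mL/min -> L/min
    return sv, ef, co

# Typical resting values chosen for illustration.
sv, ef, co = cardiac_indices(edv_ml=120, esv_ml=50, heart_rate_bpm=70)
```

    Repeating the calculation with post-exercise volumes and heart rate makes the Frank-Starling discussion quantitative rather than qualitative.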

  10. A Systems Approach to the Teaching-Learning Process.

    ERIC Educational Resources Information Center

    Belgard, Maria R.

    This paper introduces the concept of educational systems analysis, shows how it can be applied to the teaching-learning process, and indicates how the teaching-learning process, as a system, can be optimized by using operations research techniques. The teaching-learning process is viewed as a highly complex learning control system with the purpose…

  11. Industrial Applications of Image Processing

    NASA Astrophysics Data System (ADS)

    Ciora, Radu Adrian; Simion, Carmen Mihaela

    2014-11-01

    The recent advances in sensor quality and processing power provide us with excellent tools for designing more complex image processing and pattern recognition tasks. In this paper we review the existing applications of image processing and pattern recognition in industrial engineering. First we define the role of vision in an industrial setting. Then an overview of some image processing techniques, feature extraction, object recognition and industrial robotic guidance is presented. Moreover, examples of implementations of such techniques in industry are presented. Such implementations include automated visual inspection, process control, part identification, and robot control. Finally, we present some conclusions regarding the investigated topics and directions for future investigation.

  12. SWNT Imaging Using Multispectral Image Processing

    NASA Astrophysics Data System (ADS)

    Blades, Michael; Pirbhai, Massooma; Rotkin, Slava V.

    2012-02-01

    A flexible optical system was developed to image carbon single-wall nanotube (SWNT) photoluminescence using the multispectral capabilities of a typical CCD camcorder. The built-in Bayer filter of the CCD camera was utilized, using OpenCV C++ libraries for image processing, to decompose the image generated in a high-magnification epifluorescence microscope setup into three pseudo-color channels. By carefully calibrating the filter beforehand, it was possible to extract spectral data from these channels, and effectively isolate the SWNT signals from the background.
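
    The channel decomposition step can be sketched with plain array slicing. The RGGB mosaic layout and the 12-bit synthetic sensor values below are assumptions for illustration; the paper does not specify its sensor's pattern, and the authors used OpenCV C++ rather than NumPy.

```python
import numpy as np

def split_bayer_rggb(raw):
    """Split a raw Bayer mosaic (RGGB layout assumed) into three
    pseudo-color channels; each 2x2 cell has one R, two G, one B site."""
    r = raw[0::2, 0::2]
    g = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2.0  # average the two greens
    b = raw[1::2, 1::2]
    return r, g, b

# Synthetic stand-in for raw 12-bit sensor data.
rng = np.random.default_rng(0)
raw = rng.integers(0, 4096, size=(8, 8)).astype(float)
r, g, b = split_bayer_rggb(raw)
```

    Calibrating each channel's spectral response, as the abstract describes, then lets the three half-resolution planes serve as coarse spectral bands rather than display colors.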

  13. An image processing algorithm for PPCR imaging

    NASA Astrophysics Data System (ADS)

    Cowen, Arnold R.; Giles, Anthony; Davies, Andrew G.; Workman, A.

    1993-09-01

    During 1990 the UK Department of Health installed two Photostimulable Phosphor Computed Radiography (PPCR) systems in the General Infirmary at Leeds with a view to evaluating the clinical and physical performance of the technology prior to its introduction into the NHS. An issue that came to light from the outset of the project was the radiologists' reservations about the influence of the standard PPCR computerized image processing on image quality and diagnostic performance. An investigation was set up by FAXIL to develop an algorithm to produce single-format high-quality PPCR images that would be easy to implement and would allay the concerns of radiologists.

  14. Design of smart imagers with image processing

    NASA Astrophysics Data System (ADS)

    Serova, Evgeniya N.; Shiryaev, Yury A.; Udovichenko, Anton O.

    2005-06-01

    This paper is devoted to the creation of novel CMOS APS imagers with focal-plane parallel image preprocessing for smart technical vision and electro-optical systems based on neural implementation. Using an analysis of the main features of biological vision, the desired characteristics of artificial vision are defined, and the image processing tasks that can be implemented by smart focal-plane preprocessing CMOS imagers with neural networks are determined. The eventual results are important for medicine and for aerospace and ecological monitoring, and suggest ways of implementing neural networks in CMOS APS imagers. To reduce real image preprocessing time, special methods based on edge detection and neighboring-frame subtraction will be considered and simulated. To select optimal methods and mathematical operators for edge detection, various medical, technical and aerospace images will be tested. An important research direction will be devoted to the analog implementation of the main preprocessing operations (addition, subtraction, neighboring-frame subtraction, modulus, and edge detection of pixel signals) in the focal plane of CMOS APS imagers. We present the following results: an algorithm for edge detection suited to analog realization, and patented focal-plane circuits for analog image preprocessing (edge detection and motion detection).

  15. Teaching Evolutionary Processes to Skeptical Students

    NASA Astrophysics Data System (ADS)

    Bobrowsky, M.

    2000-12-01

    Astronomy instructors teach about phenomena having very long time scales, and they are often challenged by skeptical students. This is particularly true when teaching a "Life in the Universe" unit or course, which includes some potentially controversial topics concerning biological evolution. Yet, the evidence is overwhelming that evolutionary processes have indeed taken place over long time scales. Whether the topic is the age of the earth, long-term astrophysical phenomena, or biological evolution, instructors should be aware of the supporting evidence. Presentation of the evidence, along with the methods of science that provide high levels of confidence in our current understanding, will help the instructor to respond to students' questions. This information will also allow the instructor to present the scientific content with confidence and not be deterred by special interest groups who, for religious or other reasons, do not want to provide students with the best scientific information that currently exists.

  16. An interactive image processing system.

    PubMed

    Troxel, D E

    1981-01-01

    A multiuser multiprocessing image processing system has been developed. It is an interactive picture manipulation and enhancement facility which is capable of executing a variety of image processing operations while simultaneously controlling real-time input and output of pictures. It was designed to provide a reliable picture processing system which would be cost-effective in the commercial production environment. Additional goals met by the system include flexibility and ease of operation and modification. PMID:21868923

  17. Image Processing: Some Challenging Problems

    NASA Astrophysics Data System (ADS)

    Huang, T. S.; Aizawa, K.

    1993-11-01

    Image processing can be broadly defined as the manipulation of signals which are inherently multidimensional. The most common such signals are photographs and video sequences. The goals of processing or manipulation can be (i) compression for storage or transmission; (ii) enhancement or restoration; (iii) analysis, recognition, and understanding; or (iv) visualization for human observers. The use of image processing techniques has become almost ubiquitous; they find applications in such diverse areas as astronomy, archaeology, medicine, video communication, and electronic games. Nonetheless, many important problems in image processing remain unsolved. It is the goal of this paper to discuss some of these challenging problems. In Section I, we mention a number of outstanding problems. Then, in the remainder of this paper, we concentrate on one of them: very-low-bit-rate video compression. This is chosen because it involves almost all aspects of image processing.

  18. Image processing of aerodynamic data

    NASA Technical Reports Server (NTRS)

    Faulcon, N. D.

    1985-01-01

    The use of digital image processing techniques in analyzing and evaluating aerodynamic data is discussed. An image processing system that converts images derived from digital data or from transparent film into black and white, full color, or false color pictures is described. Applications to black and white images of a model wing with a NACA 64-210 section in simulated rain and to computed flow properties for transonic flow past a NACA 0012 airfoil are presented. Image processing techniques are used to visualize the variations of water film thickness on the wing model and to illustrate the contours of computed Mach numbers for the flow past the NACA 0012 airfoil. Since the computed data for the NACA 0012 airfoil are available only at discrete spatial locations, an interpolation method is used to provide values of the Mach number over the entire field.
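
    The final step, interpolating Mach numbers known only at discrete locations onto the whole field, can be illustrated with inverse-distance weighting. The paper does not state which interpolation method was used, so this is a generic sketch:

```python
import numpy as np

def idw_interpolate(points, values, queries, power=2.0):
    """Inverse-distance-weighted interpolation of scattered samples
    (e.g. Mach numbers at discrete mesh nodes) onto arbitrary points."""
    points = np.asarray(points, dtype=float)    # (n, 2) sample locations
    values = np.asarray(values, dtype=float)    # (n,)  sampled values
    queries = np.asarray(queries, dtype=float)  # (m, 2) query locations
    out = np.empty(len(queries))
    for i, q in enumerate(queries):
        d = np.linalg.norm(points - q, axis=1)
        if np.any(d < 1e-12):            # query coincides with a sample
            out[i] = values[np.argmin(d)]
        else:
            w = 1.0 / d ** power
            out[i] = np.sum(w * values) / np.sum(w)
    return out
```

    A query midway between two samples returns their average; a query on top of a sample returns that sample's value exactly.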

  19. Student Teaching: Developing Images of a Profession.

    ERIC Educational Resources Information Center

    Costa, Arthur L.; Garmston, Robert J.

    1987-01-01

    Examines the tremendous impact that professionals have on neophytes in student teaching and suggests three major thrusts that need to be addressed during the student-teaching phase: (1) providing a model of what it means to be a professional educator; (2) passing along some of the tools of the trade; and (3) developing intellectual processes…

  20. Teaching People and Machines to Enhance Images

    NASA Astrophysics Data System (ADS)

    Berthouzoz, Floraine Sara Martianne

    Procedural tasks such as following a recipe or editing an image are very common. They require a person to execute a sequence of operations (e.g. chop onions, or sharpen the image) in order to achieve the goal of the task. People commonly use step-by-step tutorials to learn these tasks. We focus on software tutorials, more specifically photo manipulation tutorials, and present a set of tools and techniques to help people learn, compare and automate photo manipulation procedures. We describe three different systems that are each designed to help with a different stage in acquiring procedural knowledge. Today, people primarily rely on hand-crafted tutorials in books and on websites to learn photo manipulation procedures. However, putting together a high quality step-by-step tutorial is a time-consuming process. As a consequence, many online tutorials are poorly designed which can lead to confusion and slow down the learning process. We present a demonstration-based system for automatically generating succinct step-by-step visual tutorials of photo manipulations. An author first demonstrates the manipulation using an instrumented version of GIMP (GNU Image Manipulation Program) that records all changes in interface and application state. From the example recording, our system automatically generates tutorials that illustrate the manipulation using images, text, and annotations. It leverages automated image labeling (recognition of facial features and outdoor scene structures in our implementation) to generate more precise text descriptions of many of the steps in the tutorials. A user study finds that our tutorials are effective for learning the steps of a procedure; users are 20-44% faster and make 60-95% fewer errors when using our tutorials than when using screencapture video tutorials or hand-designed tutorials. We also demonstrate a new interface that allows learners to navigate, explore and compare large collections (i.e. thousands) of photo manipulation

  1. Fuzzy image processing in sun sensor

    NASA Technical Reports Server (NTRS)

    Mobasser, S.; Liebe, C. C.; Howard, A.

    2003-01-01

    This paper describes how fuzzy image processing is implemented in the instrument. A comparison between the fuzzy image processing and a more conventional image processing algorithm is provided and shows that the fuzzy approach yields better accuracy than the conventional one.
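
    The abstract gives no algorithmic detail, but a common way fuzzy logic enters sun-sensor processing is to replace a hard intensity threshold with a fuzzy membership ramp when locating the sun spot. The membership function below is an assumption for illustration:

```python
import numpy as np

def fuzzy_centroid(img, low, high):
    """Sub-pixel spot location via a fuzzy-weighted centroid.
    Pixel membership ramps linearly from 0 (<= low) to 1 (>= high)
    instead of being hard-thresholded."""
    mu = np.clip((img.astype(float) - low) / (high - low), 0.0, 1.0)
    total = mu.sum()
    ys, xs = np.indices(img.shape)
    return (np.sum(ys * mu) / total, np.sum(xs * mu) / total)
```

    Because dim pixels contribute partial weight rather than being cut off, the centroid degrades gracefully as the spot brightness varies.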

  2. Signal and Image Processing Operations

    Energy Science and Technology Software Center (ESTSC)

    1995-05-10

    VIEW is a software system for processing arbitrary multidimensional signals. It provides facilities for numerical operations, signal displays, and signal databasing. The major emphasis of the system is on the processing of time-sequences and multidimensional images. The system is designed to be both portable and extensible. It runs currently on UNIX systems, primarily SUN workstations.

  4. Differential morphology and image processing.

    PubMed

    Maragos, P

    1996-01-01

    Image processing via mathematical morphology has traditionally used geometry to intuitively understand morphological signal operators and set or lattice algebra to analyze them in the space domain. We provide a unified view and analytic tools for morphological image processing that is based on ideas from differential calculus and dynamical systems. This includes ideas on using partial differential or difference equations (PDEs) to model distance propagation or nonlinear multiscale processes in images. We briefly review some nonlinear difference equations that implement discrete distance transforms and relate them to numerical solutions of the eikonal equation of optics. We also review some nonlinear PDEs that model the evolution of multiscale morphological operators and use morphological derivatives. Among the new ideas presented, we develop some general 2-D max/min-sum difference equations that model the space dynamics of 2-D morphological systems (including the distance computations) and some nonlinear signal transforms, called slope transforms, that can analyze these systems in a transform domain in ways conceptually similar to the application of Fourier transforms to linear systems. Thus, distance transforms are shown to be bandpass slope filters. We view the analysis of the multiscale morphological PDEs and of the eikonal PDE solved via weighted distance transforms as a unified area in nonlinear image processing, which we call differential morphology, and briefly discuss its potential applications to image processing and computer vision. PMID:18285181
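
    The discrete distance transforms mentioned above are classically computed by a two-pass min-sum recurrence (Rosenfeld and Pfaltz), which is exactly the kind of 2-D min-sum difference equation the paper analyzes. A city-block sketch:

```python
import numpy as np

def cityblock_distance_transform(binary):
    """Two-pass sequential distance transform: each pass is a min-sum
    recurrence over the image, forward then backward."""
    INF = 10**9
    h, w = binary.shape
    d = np.where(binary, 0, INF).astype(np.int64)
    for y in range(h):                       # forward (causal) pass
        for x in range(w):
            if y > 0:
                d[y, x] = min(d[y, x], d[y - 1, x] + 1)
            if x > 0:
                d[y, x] = min(d[y, x], d[y, x - 1] + 1)
    for y in range(h - 1, -1, -1):           # backward (anti-causal) pass
        for x in range(w - 1, -1, -1):
            if y < h - 1:
                d[y, x] = min(d[y, x], d[y + 1, x] + 1)
            if x < w - 1:
                d[y, x] = min(d[y, x], d[y, x + 1] + 1)
    return d
```

    Two raster scans suffice because the city-block metric propagates monotonically; weighted variants of the same recurrence give the weighted distance transforms used to solve the eikonal equation.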

  5. Associative architecture for image processing

    NASA Astrophysics Data System (ADS)

    Adar, Rutie; Akerib, Avidan

    1997-09-01

    This article presents a new generation of parallel processing architecture for real-time image processing. The approach is implemented in a real-time image processor chip, called the Xium™-2, based on combining a fully associative array, which provides the parallel engine, with a serial RISC core on the same die. The architecture is fully programmable and can implement a wide range of color image processing, computer vision, and media processing functions in real time. The associative part of the chip is based on the patent-pending methodology of Associative Computing Ltd. (ACL), which condenses 2048 associative processors, each of 128 'intelligent' bits; each bit can be a processing bit or a memory bit. At only 33 MHz, and in a 0.6-micron manufacturing process, the chip has a computational power of 3 billion ALU operations per second and 66 billion string search operations per second. The fully programmable nature of the Xium™-2 chip enables developers to use ACL tools to write their own proprietary algorithms combined with existing image processing and analysis functions from ACL's extended set of libraries.

  6. Peer Observation of Teaching: A Decoupled Process

    ERIC Educational Resources Information Center

    Chamberlain, John Martyn; D'Artrey, Meriel; Rowe, Deborah-Anne

    2011-01-01

    This article details the findings of research into the academic teaching staff experience of peer observation of their teaching practice. Peer observation is commonly used as a tool to enhance a teacher's continuing professional development. Research participants acknowledged its ability to help develop their teaching practice, but they also…

  7. Digital processing of radiographic images

    NASA Technical Reports Server (NTRS)

    Bond, A. D.; Ramapriyan, H. K.

    1973-01-01

    Techniques and software documentation for the digital enhancement of radiographs are presented. Both image handling and image processing operations are considered. The image handling operations dealt with are: (1) conversion of data format from packed to unpacked and vice versa; (2) automatic extraction of image data arrays; (3) transposition and 90 deg rotations of large data arrays; (4) translation of data arrays for registration; and (5) reduction of the dimensions of data arrays by integral factors. Both the frequency and the spatial domain approaches are presented for the design and implementation of the image processing operations. It is shown that spatial domain recursive implementation of filters is much faster than nonrecursive implementations using fast Fourier transforms (FFTs) for the cases of interest in this work. The recursive implementation of a class of matched filters for enhancing image signal-to-noise ratio is described. Test patterns are used to illustrate the filtering operations. The application of the techniques to radiographic images of metallic structures is demonstrated through several examples.
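
    The speed argument for recursive filtering can be seen in a sketch: a first-order recursive (IIR) smoother costs a fixed few operations per pixel however wide its effective kernel, whereas a nonrecursive implementation pays a per-kernel or FFT cost. The filter below is a generic smoother, not the paper's matched filter:

```python
import numpy as np

def recursive_smooth_rows(img, alpha=0.5):
    """First-order recursive low-pass filter run forward then backward
    along each row. Cost is O(1) per pixel regardless of the effective
    smoothing length, which is why recursive implementations can beat
    FFT-based nonrecursive ones."""
    out = img.astype(float).copy()
    h, w = out.shape
    for y in range(h):
        for x in range(1, w):                # causal pass
            out[y, x] = alpha * out[y, x] + (1 - alpha) * out[y, x - 1]
        for x in range(w - 2, -1, -1):       # anti-causal pass
            out[y, x] = alpha * out[y, x] + (1 - alpha) * out[y, x + 1]
    return out
```

    Running the same recursion down the columns extends this to a full 2-D smoother.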

  8. Amateur Image Pipeline Processing using Python plus PyRAF

    NASA Astrophysics Data System (ADS)

    Green, Wayne

    2012-05-01

    A template pipeline spanning observation planning to publishing is offered as a basis for establishing a long-term observing program. The data reduction pipeline encapsulates all policy, procedures, and processing decisions within an auditable framework, providing accountability for data analysis and a teaching framework for IRAF. This paper introduces the technical details of a complete pipeline processing environment using Python, PyRAF, and a few other languages. The framework quickly handles the heavy lifting of image processing and also serves as an excellent teaching environment for astronomical data management and IRAF reduction decisions.

  9. Seismic Imaging Processing and Migration

    Energy Science and Technology Software Center (ESTSC)

    2000-06-26

    Salvo is a 3D, finite difference, prestack, depth migration code for parallel computers. It is also capable of processing 2D and poststack data. The code requires as input a seismic dataset, a velocity model, and a file of parameters that allows the user to select various options. The code uses this information to produce a seismic image. Some of the options available to the user include the application of various filters and imaging conditions. The code also incorporates phase encoding (patent applied for) to process multiple shots simultaneously.

  10. Fingerprint recognition using image processing

    NASA Astrophysics Data System (ADS)

    Dholay, Surekha; Mishra, Akassh A.

    2011-06-01

    Fingerprint recognition is concerned with the difficult task of efficiently matching the image of a person's fingerprint against the fingerprints stored in a database. Fingerprint recognition is used in forensic science, where it helps identify criminals, and in authenticating a particular person, since a fingerprint is unique to each individual. The present paper describes fingerprint recognition methods using various edge detection techniques and shows how to recognize a fingerprint correctly from camera images. The method does not require a special device; a simple camera can be used, so the technique can also be applied in a camera-equipped mobile phone. Factors affecting the process include poor illumination, noise, viewpoint dependence, climate factors, and imaging conditions. These factors have to be considered, so various image enhancement techniques are performed to increase image quality and remove noise. The paper describes a technique of contour tracking on the fingerprint image, followed by edge detection on the contour and matching of the edges inside the contour.
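
    As one example of the enhancement step needed against poor illumination, global histogram equalization can be sketched as follows. This is a generic technique; the paper's exact enhancement chain is not specified in the abstract:

```python
import numpy as np

def equalize_histogram(img):
    """Global histogram equalization for an 8-bit grayscale image:
    remap intensities through the cumulative distribution so the
    output uses the full 0-255 range."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[np.nonzero(cdf)][0]
    n = img.size
    lut = np.round((cdf - cdf_min) / max(n - cdf_min, 1) * 255)
    lut = np.clip(lut, 0, 255).astype(np.uint8)
    return lut[img]
```

    After equalization, a low-contrast fingerprint image spans the full intensity range, which makes subsequent edge detection and contour tracking more reliable.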

  11. Processes and priorities in planning mathematics teaching

    NASA Astrophysics Data System (ADS)

    Sullivan, Peter; Clarke, David J.; Clarke, Doug M.; Farrell, Lesley; Gerrard, Jessica

    2013-12-01

    Insights into teachers' planning of mathematics reported here were gathered as part of a broader project examining aspects of the implementation of the Australian curriculum in mathematics (and English). In particular, the responses of primary and secondary teachers to a survey of various aspects of decisions that inform their use of curriculum documents and assessment processes to plan their teaching are discussed. Teachers appear to have a clear idea of the overall topic as the focus of their planning, but they are less clear when asked to articulate the important ideas in that topic. While there is considerable diversity in the processes that teachers use for planning and in the ways that assessment information informs that planning, a consistent theme was that teachers make active decisions at all stages in the planning process. Teachers use a variety of assessment data in various ways, but these are not typically data extracted from external assessments. This research has important implications for those responsible for supporting teachers in the transition to the Australian Curriculum: Mathematics.

  12. Computer image processing: Geologic applications

    NASA Technical Reports Server (NTRS)

    Abrams, M. J.

    1978-01-01

    Computer image processing of digital data was performed to support several geological studies. The specific goals were to: (1) relate the mineral content to the spectral reflectance of certain geologic materials, (2) determine the influence of environmental factors, such as atmosphere and vegetation, and (3) improve image processing techniques. For detection of spectral differences related to mineralogy, the technique of band ratioing was found to be the most useful. The influence of atmospheric scattering and methods to correct for the scattering were also studied. Two techniques were used to correct for atmospheric effects: (1) dark object subtraction, and (2) normalization using ground spectral measurements. Of the two, the first technique proved to be the most successful for removing the effects of atmospheric scattering. A digital mosaic was produced from two side-lapping LANDSAT frames. The advantages were that the same enhancement algorithm could be applied to both frames, and there is no seam where the two images are joined.
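
    The two corrections named above, dark object subtraction and band ratioing, are simple pixel-wise operations. A sketch, with a small epsilon added as an assumption to avoid division by zero:

```python
import numpy as np

def dark_object_subtraction(band):
    """Subtract the scene's darkest value: a first-order haze correction
    that assumes the darkest object should have near-zero reflectance."""
    band = band.astype(float)
    return np.clip(band - band.min(), 0, None)

def band_ratio(band_a, band_b, eps=1e-6):
    """Ratio two haze-corrected bands. Ratios suppress illumination and
    topographic shading, emphasizing spectral (mineralogical) differences."""
    a = dark_object_subtraction(band_a)
    b = dark_object_subtraction(band_b)
    return a / (b + eps)
```

    Ratioing only works well after the additive haze term is removed, which is why dark object subtraction precedes it here.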

  13. Concept Learning through Image Processing.

    ERIC Educational Resources Information Center

    Cifuentes, Lauren; Yi-Chuan, Jane Hsieh

    This study explored computer-based image processing as a study strategy for middle school students' science concept learning. Specifically, the research examined the effects of computer graphics generation on science concept learning and the impact of using computer graphics to show interrelationships among concepts during study time. The 87…

  14. Using Mathematica to Teach Process Units: A Distillation Case Study

    ERIC Educational Resources Information Center

    Rasteiro, Maria G.; Bernardo, Fernando P.; Saraiva, Pedro M.

    2005-01-01

    The question addressed here is how to integrate computational tools, namely interactive general-purpose platforms, in the teaching of process units. Mathematica has been selected as a complementary tool to teach distillation processes, with the main objective of leading students to achieve a better understanding of the physical phenomena involved…

  15. Enhancing the Teaching-Learning Process: A Knowledge Management Approach

    ERIC Educational Resources Information Center

    Bhusry, Mamta; Ranjan, Jayanthi

    2012-01-01

    Purpose: The purpose of this paper is to emphasize the need for knowledge management (KM) in the teaching-learning process in technical educational institutions (TEIs) in India, and to assert the impact of information technology (IT) based KM intervention in the teaching-learning process. Design/methodology/approach: The approach of the paper is…

  16. ImageJ: Image processing and analysis in Java

    NASA Astrophysics Data System (ADS)

    Rasband, W. S.

    2012-06-01

    ImageJ is a public domain Java image processing program inspired by NIH Image. It can display, edit, analyze, process, save and print 8-bit, 16-bit and 32-bit images. It can read many image formats including TIFF, GIF, JPEG, BMP, DICOM, FITS and "raw". It supports "stacks", a series of images that share a single window. It is multithreaded, so time-consuming operations such as image file reading can be performed in parallel with other operations.

  17. Humanistic Teaching: A Process for Training Teachers.

    ERIC Educational Resources Information Center

    London, Robert

    This paper outlines an approach to training teachers in a repertoire of behaviors thought essential to humanistic teaching. Exercises necessary to give students a reasonable mastery of desired teaching behaviors are provided. The outline contains the following tasks: community-building exercises; definitions of nonjudgemental, acceptant,…

  18. Chemistry Graduate Teaching Assistants' Experiences in Academic Laboratories and Development of a Teaching Self-image

    NASA Astrophysics Data System (ADS)

    Gatlin, Todd Adam

    Graduate teaching assistants (GTAs) play a prominent role in chemistry laboratory instruction at research-based universities. They teach almost all undergraduate chemistry laboratory courses. However, their role in laboratory instruction has often been overlooked in educational research. Research interest in chemistry GTAs has centered on their training and perceived expectations, but less attention has been paid to their experiences or the potential benefits they gain from teaching. This work was designed to investigate GTAs' experiences in and benefits from laboratory instructional environments. This dissertation includes three related studies on GTAs' experiences teaching in general chemistry laboratories. Qualitative methods were used for each study. First, phenomenological analysis was used to explore GTAs' experiences in an expository laboratory program. Post-teaching interviews were the primary data source. GTAs' experiences were described in three dimensions: doing, knowing, and transferring. Gains available to GTAs revolved around general teaching skills; however, no gains specifically related to scientific development were found in this laboratory format. Case-study methods were used to explore and illustrate ways GTAs develop a GTA self-image, that is, the way they see themselves as instructors. Two general chemistry laboratory programs that represent two very different instructional frameworks were chosen as the context of this study. The first program used a cooperative project-based approach. The second program used weekly, verification-type activities. End-of-semester interviews were collected and served as the primary data source. A follow-up case study of a new cohort of GTAs in the cooperative problem-based laboratory was undertaken to investigate changes in GTAs' self-images over the course of one semester. Pre-semester and post-semester interviews served as the primary data source.
Findings suggest that GTAs' construction of their self-image is shaped through the

  19. History Making and the Plains Indians. Teaching with Images.

    ERIC Educational Resources Information Center

    Rothwell, Jennifer Truran

    1997-01-01

    Considers the use of cultural images and symbols of Native Americans to reflect, interpret, and justify the westward expansion of the United States. Seldom overtly racist, paintings and lithographs of the time often presented a benign and romantic vision of the West. Includes suggested teaching ideas. (MJP)

  20. Vehicle positioning using image processing

    NASA Astrophysics Data System (ADS)

    Kaur, Amardeep; Watkins, Steve E.; Swift, Theresa M.

    2009-03-01

    An image-processing approach is described that detects the position of a vehicle on a bridge. A load-bearing vehicle must be carefully positioned on a bridge for quantitative bridge monitoring. The personnel required for setup and testing and the time required for bridge closure or traffic control are important management and cost considerations. Consequently, bridge monitoring and inspections are good candidates for smart embedded systems. The objectives of this work are to reduce the need for personnel time and to minimize the time for bridge closure. An approach is proposed that uses a passive target on the bridge and camera instrumentation on the load vehicle. The orientation of the vehicle-mounted camera and the target determines the position. The experiment used pre-defined concentric circles as the target, a FireWire camera for image capture, and MATLAB for computer processing. Various image-processing techniques are compared for determining the orientation of the target circles with respect to speed and accuracy in the positioning application. The techniques for determining the target orientation use algorithms based on the centroid feature, template matching, color features, and Hough transforms. Timing parameters are determined for each algorithm to assess its feasibility for real-time use in a position-triggering system. Also, the effect of variations in the size and color of the circles is examined. The development can be combined with embedded sensors and sensor nodes for a complete automated procedure. As the load vehicle moves into the proper position, the image-based system can trigger an embedded measurement, which is then transmitted back to the vehicle control computer through a wireless link.
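
    The simplest of the compared techniques, the centroid feature, can be sketched as follows; the fixed threshold and the extent-ratio orientation cue are illustrative assumptions, not the study's calibrated procedure:

```python
import numpy as np

def target_center(img, threshold=128):
    """Locate a dark target on a light background by the centroid of
    below-threshold pixels. Returns (row, col) or None if no target."""
    ys, xs = np.nonzero(img < threshold)
    if len(xs) == 0:
        return None
    return float(ys.mean()), float(xs.mean())

def extent_ratio(img, threshold=128):
    """Ratio of the dark blob's vertical to horizontal extent. For a
    circular target, deviation from 1.0 indicates an off-axis view,
    a simple cue toward the camera/target orientation."""
    ys, xs = np.nonzero(img < threshold)
    return (ys.max() - ys.min() + 1) / (xs.max() - xs.min() + 1)
```

    Centroid-based detection is the fastest of the candidates, which matters for the real-time triggering comparison the study describes.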

  1. Magnetic resonance imaging simulator: a teaching tool for radiology.

    PubMed

    Rundle, D; Kishore, S; Seshadri, S; Wehrli, F

    1990-11-01

    The increasing use of magnetic resonance imaging (MRI) as a clinical modality has put an enormous burden on medical institutions to cost-effectively teach MRI scanning techniques to technologists and physicians. Since MRI scanner time is a scarce resource, it would be ideal if the teaching could be performed effectively off-line. To meet this goal, the Radiology Department at the University of Pennsylvania has designed and developed a Magnetic Resonance Imaging Simulator. The simulator in its current implementation mimics the General Electric Signa (General Electric Magnetic Resonance Imaging System, Milwaukee, WI) scanner's user interface for image acquisition, but the design is general enough to be applied to other MRI scanners. One unique feature of the simulator is its incorporation of an image-synthesis module that permits the user to derive images for any arbitrary combination of pulsing parameters for spin-echo, gradient-echo, and inversion recovery pulse sequences. These images are computed in 5 seconds. The development platform chosen is a standard Apple Macintosh II (Apple Computer, Inc, Cupertino, CA) computer with no specialized hardware peripherals. The user interface is implemented in HyperCard (Apple Computer Inc, Cupertino, CA). All other software development, including synthesis and display functions, is implemented under the Macintosh Programmer's Workshop 'C' environment. The scan parameters, demographics, and images are tracked using an Oracle (Oracle Corp, Redwood Shores, CA) database. Images are currently stored on magnetic disk but could be stored on optical media with minimal effort. PMID:2085559
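
    The image-synthesis idea can be illustrated with the standard spin-echo signal equation, S = PD * (1 - exp(-TR/T1)) * exp(-TE/T2); this is a textbook sketch, not the simulator's actual synthesis code:

```python
import numpy as np

def synthesize_spin_echo(pd, t1, t2, tr, te):
    """Synthesize a spin-echo image from tissue parameter maps (proton
    density PD, relaxation times T1 and T2) for arbitrary pulsing
    parameters TR (repetition time) and TE (echo time), all in ms."""
    pd = np.asarray(pd, dtype=float)
    t1 = np.asarray(t1, dtype=float)
    t2 = np.asarray(t2, dtype=float)
    return pd * (1.0 - np.exp(-tr / t1)) * np.exp(-te / t2)
```

    With a long TR and short TE the result approaches the proton-density map, matching the usual PD-weighting rule of thumb; lengthening TE brings out T2 contrast.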

  2. The Self-Teaching Process in Higher Education.

    ERIC Educational Resources Information Center

    Hills, P. J.

    The traditional methods of university education and some alternative approaches are considered in light of the course needs of both students and teacher. The self-teaching system is examined in the overall context of the learning process. This is followed by three case studies of the development and use of self-teaching systems, one for chemical…

  3. Factors Causing Demotivation in EFL Teaching Process: A Case Study

    ERIC Educational Resources Information Center

    Aydin, Selami

    2012-01-01

    Studies have mainly focused on strategies to motivate teachers or the student-teacher motivation relationships rather than teacher demotivation in the English as a foreign language (EFL) teaching process, whereas no data have been found on the factors that cause teacher demotivation in the Turkish EFL teaching contexts at the elementary education…

  4. Teaching Methods Influencing the Sustainability of the Teaching Process in Technology Education in General Education Schools

    ERIC Educational Resources Information Center

    Soobik, Mart

    2014-01-01

    The sustainability of technology education is related to a traditional understanding of craft and the methods used to teach it; however, the methods used in the teaching process have been influenced by the innovative changes accompanying the development of technology. In respect to social and economic development, it is important to prepare young…

  5. Image processing photosensor for robots

    NASA Astrophysics Data System (ADS)

    Vinogradov, Sergey L.; Shubin, Vitaly E.

    1995-01-01

    Some aspects of the possible applications of a new, nontraditional generation of advanced photosensors with inherent internal image processing for multifunctional optoelectronic systems, such as machine vision systems (MVS), are discussed. The optical information in these solid-state photosensors, so-called photoelectric structures with memory (PESM), is registered and stored in the form of 2D charge and potential patterns in the plane of the layers; it may then be transferred and transformed in the normal direction due to the interaction of these patterns. PESM ensure a high potential for massively parallel processing, with an effective rate of up to 10^14 operations/bit/s in such integral operations as addition, subtraction, contouring, correlation of images, and so on. Most diverse devices and apparatus may be developed on this basis, ranging from automatic rangefinders to MVS for furnishing robotized industries. Principal features, the physical background of the main primary operations, and complex functional algorithms for object selection, tracking, and guidance are briefly described. Examples of the possible application of the PESM as an intellectual 'supervideosensor', combining a high-quality imager, memory media, and a high-capacity special-purpose processor, are presented.

  6. Image processing software for imaging spectrometry

    NASA Technical Reports Server (NTRS)

    Mazer, Alan S.; Martin, Miki; Lee, Meemong; Solomon, Jerry E.

    1988-01-01

    The paper presents a software system, Spectral Analysis Manager (SPAM), which has been specifically designed and implemented to provide the exploratory analysis tools necessary for imaging spectrometer data, using only modest computational resources. The basic design objectives are described as well as the major algorithms designed or adapted for high-dimensional images. Included in a discussion of system implementation are interactive data display, statistical analysis, image segmentation and spectral matching, and mixture analysis.
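The spectral matching mentioned above compares each pixel's spectrum against reference spectra. The abstract does not specify SPAM's exact metric; one common, brightness-insensitive choice for this kind of matching is the spectral angle, sketched here in Python (the function name and parameters are illustrative, not SPAM's API):

```python
import math

def spectral_angle(pixel, reference):
    """Angle in radians between two spectra; 0 means identical shape.

    Because the angle ignores overall magnitude, it is insensitive to
    illumination scaling, which is why it is a popular matching metric
    for imaging spectrometer data."""
    dot = sum(p * r for p, r in zip(pixel, reference))
    norm_p = math.sqrt(sum(p * p for p in pixel))
    norm_r = math.sqrt(sum(r * r for r in reference))
    # Clamp to [-1, 1] to guard against floating-point drift before acos.
    return math.acos(max(-1.0, min(1.0, dot / (norm_p * norm_r))))
```

A pixel that is simply a brighter version of the reference (e.g. scaled by 2) yields an angle near zero, while orthogonal spectra yield pi/2.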

  7. Teaching about the Physics of Medical Imaging

    NASA Astrophysics Data System (ADS)

    Zollman, Dean; McBride, Dyan; Murphy, Sytil; Aryal, Bijaya; Kalita, Spartak; Wirjawan, Johannes v. d.

    2010-07-01

    Even before the discovery of X-rays, attempts at non-invasive medical imaging required an understanding of fundamental principles of physics. Students frequently do not see these connections because they are not taught in beginning physics courses. To help students understand that physics and medical imaging are closely connected, we have developed a series of active learning units. For each unit we begin by studying how students transfer their knowledge from traditional physics classes and everyday experiences to medical applications. Then, we build instructional materials to take advantage of the students' ability to use their existing learning and knowledge resources. Each of the learning units involves a combination of hands-on activities, which present analogies, and interactive computer simulations. Our learning units introduce students to the contemporary imaging techniques of CT scans, magnetic resonance imaging (MRI), positron emission tomography (PET), and wavefront aberrometry. The project's web site is http://web.phys.ksu.edu/mmmm/.

  8. Image Processing: A State-of-the-Art Way to Learn Science.

    ERIC Educational Resources Information Center

    Raphael, Jacqueline; Greenberg, Richard

    1995-01-01

Teachers participating in the Image Processing for Teaching project, begun at the University of Arizona's Lunar and Planetary Laboratory in 1989, find this technology ideal for encouraging student discovery, promoting constructivist science or math experiences, and adapting to classrooms. Because image processing is not a computerized text, it…

  9. Multispectral Image Processing for Plants

    NASA Technical Reports Server (NTRS)

    Miles, Gaines E.

    1991-01-01

    The development of a machine vision system to monitor plant growth and health is one of three essential steps towards establishing an intelligent system capable of accurately assessing the state of a controlled ecological life support system for long-term space travel. Besides a network of sensors, simulators are needed to predict plant features, and artificial intelligence algorithms are needed to determine the state of a plant based life support system. Multispectral machine vision and image processing can be used to sense plant features, including health and nutritional status.

  10. Image processing technique for arbitrary image positioning in holographic stereogram

    NASA Astrophysics Data System (ADS)

    Kang, Der-Kuan; Yamaguchi, Masahiro; Honda, Toshio; Ohyama, Nagaaki

    1990-12-01

In a one-step holographic stereogram, if the series of original images is used just as taken from perspective views, the three-dimensional images are usually reconstructed behind the hologram plane. In order to enhance the sense of perspective of the reconstructed images and minimize blur in the portions of interest, we introduce an image processing technique for making a one-step flat-format holographic stereogram in which three-dimensional images can be observed at an arbitrary specified position. Experimental results show the effect of the image processing. Further, we show results of a medical application using this image processing.

  11. Using Goldenrod Galls to Teach Science Process Skills.

    ERIC Educational Resources Information Center

    Peard, Terry L.

    1994-01-01

Emphasizes the importance of using examples from the student's environment to aid in teaching science process skills. The author uses diagrams to aid in discussing the various uses of goldenrod (Solidago sp.) galls in the classroom. (ZWH)

  12. Teaching the Inquiry Process to 21st Century Learners

    ERIC Educational Resources Information Center

    Carnesi, Sabrina; DiGiorgio, Karen

    2009-01-01

    Unlike the static, set-in-stone research project, the inquiry process is an interactive cycle used to teach research in any content area. The inquiry process engages students in a way that promotes critical thinking, higher-level processing, and the use of more varied and appropriate resources. This article introduces the inquiry process and…

  13. Using Classic and Contemporary Visual Images in Clinical Teaching.

    ERIC Educational Resources Information Center

    Edwards, Janine C.

    1990-01-01

The patient's body is an image that medical students and residents use to process information. The classic use of patient images is qualitative and personal; the contemporary use is quantitative and impersonal. Contemporary imaging includes radiographic, nuclear, scintigraphic, and nuclear magnetic resonance…

  14. Concurrent Image Processing Executive (CIPE)

    NASA Technical Reports Server (NTRS)

    Lee, Meemong; Cooper, Gregory T.; Groom, Steven L.; Mazer, Alan S.; Williams, Winifred I.

    1988-01-01

The design and implementation of a Concurrent Image Processing Executive (CIPE), which is intended to become the support system software for a prototype high performance science analysis workstation, are discussed. The target machine for this software is a JPL/Caltech Mark IIIfp Hypercube hosted by either a MASSCOMP 5600 or a Sun-3 or Sun-4 workstation; however, the design will accommodate other concurrent machines of similar architecture, i.e., local-memory, multiple-instruction-multiple-data (MIMD) machines. The CIPE system provides both a multimode user interface and an applications programmer interface, and has been designed around four loosely coupled modules: (1) user interface, (2) host-resident executive, (3) hypercube-resident executive, and (4) application functions. The loose coupling between modules allows modification of a particular module without significantly affecting the other modules in the system. In order to enhance hypercube memory utilization and to allow expansion of image processing capabilities, a specialized program management method, incremental loading, was devised. To minimize data transfer between host and hypercube, a data management method which distributes, redistributes, and tracks data set information was implemented.

  15. The Tao of Teaching: Romance and Process.

    ERIC Educational Resources Information Center

    Schindler, Stefan

    1991-01-01

    Because college teaching aims to elevate, not entertain, it must be nourished and appreciated as a pedagogical alchemy mixing facts and feelings, ideas and skills, history and mystery. The current debate on educational reform should focus more on quality of learning experience, and on how to create and sustain it. (MSE)

  16. Teaching Science: A Picture Perfect Process.

    ERIC Educational Resources Information Center

    Leyden, Michael B.

    1994-01-01

    Explains how teachers can use graphs and graphing concepts when teaching art, language arts, history, social studies, and science. Students can graph the lifespans of the Ninja Turtles' Renaissance namesakes (Donatello, Michelangelo, Raphael, and Leonardo da Vinci) or world population growth. (MDM)

  17. Image enhancement based on gamma map processing

    NASA Astrophysics Data System (ADS)

    Tseng, Chen-Yu; Wang, Sheng-Jyh; Chen, Yi-An

    2010-05-01

This paper proposes a novel image enhancement technique based on Gamma Map Processing (GMP). In this approach, a base gamma map is generated directly from the intensity image. A sequence of gamma map processing operations is then performed to generate a channel-wise gamma map. By mapping each pixel through its estimated gamma, the detail, colorfulness, and sharpness of the original image are automatically improved. In addition, the dynamic range of the image can be virtually expanded.
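The per-pixel gamma idea can be illustrated with a much simplified sketch (not the authors' GMP algorithm): derive a gamma for each pixel from its own intensity, so that dark regions are lifted more strongly than bright ones. All names and the interpolation rule below are illustrative assumptions:

```python
def gamma_map_enhance(intensity, g_dark=0.6, g_bright=1.0):
    """intensity: 2D list of floats in [0, 1]; returns the enhanced image.

    Each pixel receives its own gamma, interpolated between g_dark and
    g_bright according to its intensity, so shadows get a stronger lift
    (gamma < 1 brightens) while highlights are left nearly unchanged."""
    out = []
    for row in intensity:
        out.append([v ** (g_dark + (g_bright - g_dark) * v) for v in row])
    return out
```

For example, a pixel at 0.25 is lifted to about 0.38, while pixels at 0 and 1 map to themselves.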

  18. What Should Schools Teach? Issues of Process and Content.

    ERIC Educational Resources Information Center

    Perrone, Vito

    1988-01-01

    When discussing what schools should teach, questions of both content and process must be addressed. Although many observers believe that a fixed content should be learned, it is impossible to separate content and process. In the process of education, experiences build on each other. This fact should cause educators to question the continuities…

  19. Law and Pop Culture: Teaching and Learning about Law Using Images from Popular Culture.

    ERIC Educational Resources Information Center

    Joseph, Paul R.

    2000-01-01

    Believes that using popular culture images of law, lawyers, and the legal system is an effective way for teaching about real law. Offers examples of incorporating popular culture images when teaching about law. Includes suggestions for teaching activities, a mock trial based on Dr. Seuss's book "Yertle the Turtle," and additional resources. (CMK)

  20. Cluster-based parallel image processing toolkit

    NASA Astrophysics Data System (ADS)

    Squyres, Jeffery M.; Lumsdaine, Andrew; Stevenson, Robert L.

    1995-03-01

Many image processing tasks exhibit a high degree of data locality and parallelism and map quite readily to specialized massively parallel computing hardware. However, as network technologies continue to mature, workstation clusters are becoming a viable and economical parallel computing resource, so it is important to understand how to use these environments for parallel image processing as well. In this paper we discuss our implementation of a parallel image processing software library (the Parallel Image Processing Toolkit). The Toolkit uses a message-passing model of parallelism designed around the Message Passing Interface (MPI) standard. Experimental results are presented to demonstrate the parallel speedup obtained with the Parallel Image Processing Toolkit in a typical workstation cluster over a wide variety of image processing tasks. We also discuss load balancing and the potential for parallelizing portions of image processing tasks that seem to be inherently sequential, such as visualization and data I/O.
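The data decomposition that such a toolkit relies on can be illustrated without MPI: split the image into contiguous row blocks, apply the operation to each block independently, and gather the results. In a real message-passing library each block would be shipped to a separate MPI rank; here the blocks are processed sequentially for clarity, and all names are illustrative rather than the Toolkit's actual API:

```python
def split_rows(image, n_blocks):
    """Partition an image (a list of rows) into contiguous row blocks."""
    size = (len(image) + n_blocks - 1) // n_blocks
    return [image[i:i + size] for i in range(0, len(image), size)]

def invert_block(block):
    """Example per-block pointwise operation: photometric negative."""
    return [[255 - p for p in row] for row in block]

def parallel_map(image, n_blocks, fn=invert_block):
    blocks = split_rows(image, n_blocks)
    processed = [fn(b) for b in blocks]            # in MPI: one block per rank
    return [row for b in processed for row in b]   # gather step
```

Pointwise operations partition cleanly like this; neighborhood operations (convolutions) additionally require each block to carry a halo of boundary rows, which is part of what such toolkits automate.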

  1. Combining image-processing and image compression schemes

    NASA Technical Reports Server (NTRS)

    Greenspan, H.; Lee, M.-C.

    1995-01-01

An investigation into the combining of image-processing schemes, specifically an image enhancement scheme, with existing compression schemes is discussed. Results are presented on the pyramid coding scheme, the subband coding scheme, and progressive transmission. Encouraging results are demonstrated for the combination of image enhancement and pyramid image coding schemes, especially at low bit rates. Adding the enhancement scheme to progressive image transmission allows enhanced visual perception at low resolutions. In addition, further processing of the transmitted images, such as edge detection, can benefit from the added image resolution provided by the enhancement.

  2. Applications Of Image Processing In Criminalistics

    NASA Astrophysics Data System (ADS)

    Krile, Thomas F.; Walkup, John F.; Barsallo, Adonis; Olimb, Hal; Tarng, Jaw-Horng

    1987-01-01

    A review of some basic image processing techniques for enhancement and restoration of images is given. Both digital and optical approaches are discussed. Fingerprint images are used as examples to illustrate the various processing techniques and their potential applications in criminalistics.

  3. Using Paper Helicopters to Teach Statistical Process Control

    ERIC Educational Resources Information Center

    Johnson, Danny J.

    2011-01-01

    This hands-on project uses a paper helicopter to teach students how to distinguish between common and special causes of variability when developing and using statistical process control charts. It allows the student to experience a process that is out-of-control due to imprecise or incomplete product design specifications and to discover how the…
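The distinction between common and special causes of variability can be made concrete with a simple individuals control chart: estimate the center line and 3-sigma limits from an in-control baseline of helicopter flight times, then flag new observations that fall outside the limits. This is only a sketch; practical individuals charts usually estimate sigma from moving ranges rather than the sample standard deviation used here:

```python
from statistics import mean, stdev

def control_limits(baseline):
    """Center line and 3-sigma control limits from an in-control baseline."""
    m, s = mean(baseline), stdev(baseline)
    return m - 3 * s, m, m + 3 * s

def special_causes(baseline, new_points):
    """Return observations outside the limits, i.e. likely special causes."""
    lcl, _, ucl = control_limits(baseline)
    return [x for x in new_points if x < lcl or x > ucl]
```

A helicopter built from imprecise design specifications (say, a mis-cut rotor) would produce flight times well outside the limits and be flagged, while ordinary run-to-run variation would not.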

  4. How Teachers Teach the Writing Process. Final Report.

    ERIC Educational Resources Information Center

    Perl, Sondra; And Others

    Presented in this report are the results of a three-year case study designed (1) to document what happened in the classrooms of 10 teachers who were trained in a process approach to the teaching of writing, and (2) to provide those teachers with occasions to deepen their understanding of the process approach, by collaborating with them in the…

  5. Programmable remapper for image processing

    NASA Technical Reports Server (NTRS)

    Juday, Richard D. (Inventor); Sampsell, Jeffrey B. (Inventor)

    1991-01-01

A video-rate coordinate remapper includes a memory for storing a plurality of transformations on look-up tables for remapping input images from one coordinate system to another. Such transformations are operator selectable. The remapper includes a collective processor by which certain input pixels of an input image are transformed to a portion of the output image in a many-to-one relationship. The remapper includes an interpolative processor by which the remaining input pixels of the input image are transformed to another portion of the output image in a one-to-many relationship. The invention includes certain specific transforms for creating output images useful for people with certain visual impairments. The invention also includes means for shifting input pixels and means for scrolling the output matrix.

  6. Handbook on COMTAL's Image Processing System

    NASA Technical Reports Server (NTRS)

    Faulcon, N. D.

    1983-01-01

    An image processing system is the combination of an image processor with other control and display devices plus the necessary software needed to produce an interactive capability to analyze and enhance image data. Such an image processing system installed at NASA Langley Research Center, Instrument Research Division, Acoustics and Vibration Instrumentation Section (AVIS) is described. Although much of the information contained herein can be found in the other references, it is hoped that this single handbook will give the user better access, in concise form, to pertinent information and usage of the image processing system.

  7. Sequential Processes In Image Generation.

    ERIC Educational Resources Information Center

    Kosslyn, Stephen M.; And Others

    1988-01-01

    Results of three experiments are reported, which indicate that images of simple two-dimensional patterns are formed sequentially. The subjects included 48 undergraduates and 16 members of the Harvard University (Cambridge, Mass.) community. A new objective methodology indicates that images of complex letters require more time to generate. (TJH)

  8. Image processing on the IBM personal computer

    NASA Technical Reports Server (NTRS)

    Myers, H. J.; Bernstein, R.

    1985-01-01

    An experimental, personal computer image processing system has been developed which provides a variety of processing functions in an environment that connects programs by means of a 'menu' for both casual and experienced users. The system is implemented by a compiled BASIC program that is coupled to assembly language subroutines. Image processing functions encompass subimage extraction, image coloring, area classification, histogramming, contrast enhancement, filtering, and pixel extraction.
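Among the functions listed, contrast enhancement is commonly implemented as a linear stretch of the pixel range. The sketch below shows the general technique in Python, not the system's actual BASIC-and-assembly implementation:

```python
def contrast_stretch(image, lo=0, hi=255):
    """Linearly rescale a grayscale image's pixel range to [lo, hi]."""
    pixels = [p for row in image for p in row]
    pmin, pmax = min(pixels), max(pixels)
    if pmax == pmin:                      # flat image: nothing to stretch
        return [[lo for _ in row] for row in image]
    scale = (hi - lo) / (pmax - pmin)
    return [[round(lo + (p - pmin) * scale) for p in row] for row in image]
```

An image whose pixels span only 50 to 200 is stretched so that its darkest pixel becomes 0 and its brightest 255, making full use of the display range.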

  9. Semi-automated Image Processing for Preclinical Bioluminescent Imaging

    PubMed Central

    Slavine, Nikolai V; McColl, Roderick W

    2015-01-01

Objective: Bioluminescent imaging is a valuable noninvasive technique for investigating tumor dynamics and specific biological molecular events in living animals, to better understand the effects of human disease in animal models. The purpose of this study was to develop and test a strategy for automated bioluminescence image processing from data acquisition to 3D images. Methods: To optimize this procedure, a semi-automated image processing approach with a multi-modality image handling environment was developed. To identify the location and strength of a bioluminescent source, we used the light flux detected on the surface of the imaged object by CCD cameras. For phantom calibration tests and object surface reconstruction we used the MLEM algorithm. For internal bioluminescent sources we used the diffusion approximation, balancing the internal and external intensities on the boundary of the medium; after determining an initial approximation for the photon fluence, we applied a novel iterative deconvolution method to obtain the final reconstruction. Results: We find that the reconstruction techniques successfully used the depth-dependent light transport approach and semi-automated image processing to provide a realistic 3D model of the lung tumor. Our image processing software can optimize and decrease the time required for volumetric imaging and quantitative assessment. Conclusion: The data obtained from light phantom and mouse lung tumor images demonstrate the utility of the image reconstruction algorithms and the semi-automated approach to bioluminescent image processing. We suggest that the developed approach can be applied to preclinical imaging studies: characterizing tumor growth, identifying metastases, and potentially determining the effectiveness of cancer treatment. PMID:26618187
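The MLEM algorithm mentioned for phantom calibration has a compact multiplicative update: forward-project the current estimate, compare to the measured counts, and back-project the ratio. The toy sketch below illustrates that update for a small linear system; it is an illustration of the generic algorithm, not the authors' implementation:

```python
def mlem(A, y, iters=50):
    """Toy MLEM for y ≈ A x with nonnegative entries.

    A: list of detector rows (each a list of voxel weights); y: counts.
    Update rule: x <- x * backproject(y / forward(x)) / sensitivity,
    where sensitivity = A^T 1 normalizes each voxel's total weight."""
    m, n = len(A), len(A[0])
    x = [1.0] * n                                            # flat start
    sens = [sum(A[i][j] for i in range(m)) for j in range(n)]
    for _ in range(iters):
        proj = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        ratio = [y[i] / proj[i] if proj[i] > 0 else 0.0 for i in range(m)]
        back = [sum(A[i][j] * ratio[i] for i in range(m)) for j in range(n)]
        x = [x[j] * back[j] / sens[j] if sens[j] > 0 else 0.0
             for j in range(n)]
    return x
```

With an identity system matrix the estimate converges to the measured counts in a single iteration; real tomographic matrices converge gradually and preserve nonnegativity, which is why MLEM suits photon-count data.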

  10. Image processing applied to laser cladding process

    SciTech Connect

    Meriaudeau, F.; Truchetet, F.

    1996-12-31

The laser cladding process, which consists of adding a melted powder to a substrate in order to improve or change the behavior of the material against corrosion, fatigue, and so on, involves many parameters. In order to produce good tracks, some parameters need to be controlled during the process. The authors present here a low-cost monitoring system using two CCD matrix cameras. One camera provides surface temperature measurements, while the other gives information on the powder distribution and the geometric characteristics of the tracks. The surface temperature (via the Beer-Lambert law) enables the detection of variations in the mass feed rate; using such a system, the authors are able to detect fluctuations of 2 to 3 g/min in the mass flow rate. A simple algorithm applied to the data acquired from the second camera allows them to see very weak fluctuations in both gas flows (carrier and shielding gas). During the process, this camera is also used to perform geometric measurements: the height and the width of the track are obtained in real time and give the operator information related to process parameters such as the processing speed and the mass flow rate. The authors present the results provided by their system in order to enhance the efficiency of the laser cladding process. The conclusion is dedicated to a summary of the presented work and expectations for the future.

  11. Distributing an electronic thoracic imaging teaching file using the Internet, Mosaic, and personal computers.

    PubMed

    Galvin, J R; D'Alessandro, M P; Kurihara, Y; Erkonen, W E; Knutson, T A; Lacey, D L

    1995-02-01

A high quality film-based teaching file requires effort and expense to create and maintain. The effort is worthwhile because film collections are important vehicles for increasing a radiologist's personal data base of clinical experience. Expert clinical reasoning is to a large extent the process of comparing a current case to a data base of individual cases available in memory. A teaching file would be most helpful if it were available at the view box, where it could be used to extend a radiologist's clinical experience. Unfortunately, a film-based file is confined to one area, usually remote from the view box. In addition, searching through a film file is difficult, the films wear out over time, and films are easily lost or stolen. Our goal is the creation of a thoracic imaging teaching file that solves these problems by providing a digital collection of images, videos, and text that can be used in the work place by many users simultaneously. The first part of this teaching file is now continuously available locally within our department and globally to users of the Internet. PMID:7839992

  12. Image Processing in Intravascular OCT

    NASA Astrophysics Data System (ADS)

    Wang, Zhao; Wilson, David L.; Bezerra, Hiram G.; Rollins, Andrew M.

    Coronary artery disease is the leading cause of death in the world. Intravascular optical coherence tomography (IVOCT) is rapidly becoming a promising imaging modality for characterization of atherosclerotic plaques and evaluation of coronary stenting. OCT has several unique advantages over alternative technologies, such as intravascular ultrasound (IVUS), due to its better resolution and contrast. For example, OCT is currently the only imaging modality that can measure the thickness of the fibrous cap of an atherosclerotic plaque in vivo. OCT also has the ability to accurately assess the coverage of individual stent struts by neointimal tissue over time. However, it is extremely time-consuming to analyze IVOCT images manually to derive quantitative diagnostic metrics. In this chapter, we introduce some computer-aided methods to automate the common IVOCT image analysis tasks.

  13. Yes! The Business Department Teaches Data Processing

    ERIC Educational Resources Information Center

    Nord, Daryl; Seymour, Tom

    1978-01-01

    After a brief discussion of the history and current status of business data processing versus computer science, this article focuses on the characteristics of a business data processing curriculum as compared to a computer science curriculum, including distinctions between the FORTRAN and COBOL programming languages. (SH)

  14. Teaching Information Systems Development via Process Variants

    ERIC Educational Resources Information Center

    Tan, Wee-Kek; Tan, Chuan-Hoo

    2010-01-01

    Acquiring the knowledge to assemble an integrated Information System (IS) development process that is tailored to the specific needs of a project has become increasingly important. It is therefore necessary for educators to impart to students this crucial skill. However, Situational Method Engineering (SME) is an inherently complex process that…

  15. Combining advanced imaging processing and low cost remote imaging capabilities

    NASA Astrophysics Data System (ADS)

    Rohrer, Matthew J.; McQuiddy, Brian

    2008-04-01

Target images are very important for evaluating the situation when Unattended Ground Sensors (UGS) are deployed. These images add a significant amount of information for determining the difference between hostile and non-hostile activities, the number of targets in an area, the difference between animals and people, the movement dynamics of targets, and when specific activities of interest are taking place. The imaging capabilities of UGS systems should provide images only when target activity is present, not images without targets in the field of view. Current UGS remote imaging systems are neither optimized for target processing nor low cost. In this paper, McQ describes an architectural and technological approach for significantly improving the processing of images to provide target information while reducing the cost of the intelligent remote imaging capability.

  16. Matching rendered and real world images by digital image processing

    NASA Astrophysics Data System (ADS)

    Mitjà, Carles; Bover, Toni; Bigas, Miquel; Escofet, Jaume

    2010-05-01

Recent advances in computer-generated imagery (CGI) have been used in commercial and industrial photography, providing broad scope in product advertising. Mixing real world images with those rendered by virtual space software shows a more or less visible mismatch in image quality between the two. Rendered images are produced by software whose quality is limited only by the output resolution. Real world images are taken with cameras subject to a number of image degradation factors, such as residual lens aberrations, diffraction, the sensor's low-pass anti-aliasing filter, color-pattern demosaicing, etc. The combined effect of these degradation factors can be characterized by the system Point Spread Function (PSF). Because the image is the convolution of the object with the system PSF, the PSF characterizes the amount of degradation added to any picture taken. This work explores the use of image processing to degrade the rendered images according to the measured system PSF, attempting to match the virtual and real world image qualities. The system MTF is determined by the slanted-edge method both under laboratory conditions and in the real picture environment in order to compare the influence of working conditions on device performance; an approximation to the system PSF is derived from the two measurements. The rendered images are filtered with a Gaussian filter obtained from the taking system's PSF. Results with and without filtering are shown and compared by measuring the contrast achieved in different regions of the final image.
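Filtering a rendered image with a Gaussian approximation of the taking system's PSF amounts to a separable convolution: blur each row with a 1D Gaussian kernel, then each column. The sketch below is a simplified illustration with an assumed sigma; the paper derives its filter from measured MTF data rather than choosing sigma directly:

```python
import math

def gaussian_kernel(sigma, radius=None):
    """Normalized 1D Gaussian kernel (truncated at ~3 sigma by default)."""
    radius = radius if radius is not None else max(1, int(3 * sigma))
    k = [math.exp(-(x * x) / (2 * sigma * sigma))
         for x in range(-radius, radius + 1)]
    total = sum(k)
    return [v / total for v in k]

def blur_1d(row, kernel):
    """Convolve one row with the kernel, clamping indices at the edges."""
    r = len(kernel) // 2
    out = []
    for i in range(len(row)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - r, 0), len(row) - 1)
            acc += w * row[idx]
        out.append(acc)
    return out

def blur_2d(image, sigma):
    """Separable Gaussian blur: rows first, then columns."""
    k = gaussian_kernel(sigma)
    rows = [blur_1d(row, k) for row in image]
    cols = [blur_1d(list(c), k) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]
```

Applied to a hard step edge, the blur spreads the transition over several pixels, mimicking the softening a real lens-and-sensor chain would add to a rendered image.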

  17. Programmable Iterative Optical Image And Data Processing

    NASA Technical Reports Server (NTRS)

    Jackson, Deborah J.

    1995-01-01

    Proposed method of iterative optical image and data processing overcomes limitations imposed by loss of optical power after repeated passes through many optical elements - especially, beam splitters. Involves selective, timed combination of optical wavefront phase conjugation and amplification to regenerate images in real time to compensate for losses in optical iteration loops; timing such that amplification turned on to regenerate desired image, then turned off so as not to regenerate other, undesired images or spurious light propagating through loops from unwanted reflections.

  18. Utilizing image processing techniques to compute herbivory.

    PubMed

    Olson, T E; Barlow, V M

    2001-01-01

Leafy spurge (Euphorbia esula L. sensu lato) is a perennial weed species common to the north-central United States and southern Canada. The plant is a foreign species toxic to cattle; spurge infestation can reduce cattle carrying capacity by 50 to 75 percent [1]. University of Wyoming Entomology doctoral candidate Vonny Barlow is conducting research in the area of biological control of leafy spurge via the Aphthona nigriscutis Foudras flea beetle, addressing the question of variability within leafy spurge and its potential impact on flea beetle herbivory. One component of Barlow's research consists of measuring the herbivory of leafy spurge plant specimens after introducing adult beetles. Herbivory, the degree of consumption of the plant's leaves, was measured in two different manners. First, Barlow assigned each consumed plant specimen a visual rank from 1 to 5. Second, image processing techniques were applied to "before" and "after" images of each plant specimen in an attempt to quantify herbivory more accurately. Standardized techniques were used to acquire images before and after beetles were allowed to feed on the plants for a period of 12 days. Matlab was used as the image processing tool. The image processing algorithm allowed the user to crop the portion of the "before" image containing only plant foliage; Matlab then cropped the "after" image with the same dimensions and converted both images from RGB to grayscale. The grayscale images were converted to binary based on a user-defined threshold value. Finally, herbivory was computed from the number of black pixels in the "before" and "after" images. The image processing results were mixed. Although this technique depends on user input and non-ideal images, the data are useful to Barlow's research and offer insight into better imaging systems and processing algorithms. PMID:11347423
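The threshold-and-count step described above is only a few lines of code. The original work used Matlab; this Python version with illustrative names follows the same logic of binarizing on a user-defined threshold and comparing dark-pixel counts:

```python
def to_binary(gray, threshold):
    """Mark foliage pixels: 1 where the pixel is darker than the threshold."""
    return [[1 if p < threshold else 0 for p in row] for row in gray]

def herbivory_percent(before, after, threshold=128):
    """Percent of foliage pixels lost between 'before' and 'after' images."""
    def foliage(img):
        return sum(sum(row) for row in to_binary(img, threshold))
    b, a = foliage(before), foliage(after)
    return 0.0 if b == 0 else 100.0 * (b - a) / b
```

If the "before" crop contains four foliage pixels and only one survives in the "after" image, the computed herbivory is 75 percent. The sensitivity to the user-chosen threshold is exactly the dependence on user input the abstract notes.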

  19. Chemical Process Design: An Integrated Teaching Approach.

    ERIC Educational Resources Information Center

    Debelak, Kenneth A.; Roth, John A.

    1982-01-01

    Reviews a one-semester senior plant design/laboratory course, focusing on course structure, student projects, laboratory assignments, and course evaluation. Includes discussion of laboratory exercises related to process waste water and sludge. (SK)

  20. Teaching the Dance Class: Strategies to Enhance Skill Acquisition, Mastery and Positive Self-Image

    ERIC Educational Resources Information Center

    Mainwaring, Lynda M.; Krasnow, Donna H.

    2010-01-01

    Effective teaching of dance skills is informed by a variety of theoretical frameworks and individual teaching and learning styles. The purpose of this paper is to present practical teaching strategies that enhance the mastery of skills and promote self-esteem, self-efficacy, and positive self-image. The predominant thinking and primary research…

  1. How Digital Image Processing Became Really Easy

    NASA Astrophysics Data System (ADS)

    Cannon, Michael

    1988-02-01

In the early and mid-1970s, digital image processing was the subject of intense university and corporate research. The research lay along two lines: (1) developing mathematical techniques for improving the appearance of, or analyzing the contents of, images represented in digital form, and (2) creating cost-effective hardware to carry out these techniques. The research has been very effective, as evidenced by the continued decline of image processing as a research topic and the rapid growth of commercial companies marketing digital image processing software and hardware.

  2. Non-linear Post Processing Image Enhancement

    NASA Technical Reports Server (NTRS)

    Hunt, Shawn; Lopez, Alex; Torres, Angel

    1997-01-01

A non-linear filter for image post-processing based on the feedforward neural network topology is presented. This study was undertaken to investigate the usefulness of "smart" filters in image post-processing. The filter has been shown to be useful in recovering high frequencies, such as those lost during the JPEG compression-decompression process. The filtered images have a higher signal-to-noise ratio and a higher perceived image quality. Simulation studies comparing the proposed filter with the optimum mean-square non-linear filter are given, showing examples of high-frequency recovery and the statistical properties of the filter.

  3. Student Evaluation of Teaching: An Instrument and a Development Process

    ERIC Educational Resources Information Center

    Alok, Kumar

    2011-01-01

    This article describes the process of faculty-led development of a student evaluation of teaching instrument at Centurion School of Rural Enterprise Management, a management institute in India. The instrument was to focus on teacher behaviors that students get an opportunity to observe. Teachers and students jointly contributed a number of…

  4. The Teaching of L2 Pronunciation through Processing Instruction

    ERIC Educational Resources Information Center

    Gonzales-Bueno, Manuela; Quintana-Lara, Marcela

    2011-01-01

    The goal of this study is to pilot test whether the instructional approach known as Processing Instruction could be adapted to the teaching of second language (L2) pronunciation. The target sounds selected were the Spanish tap and trill. Three groups of high school students of Spanish as a foreign language participated in the study. One group…

  5. Developing Evaluative Tool for Online Learning and Teaching Process

    ERIC Educational Resources Information Center

    Aksal, Fahriye A.

    2011-01-01

    The research study aims to underline the development of a new scale on online learning and teaching process based on factor analysis. Further to this, the research study resulted in acceptable scale which embraces social interaction role, interaction behaviour, barriers, capacity for interaction, group interaction as sub-categories to evaluate…

  6. A Plan for Teaching Data Processing to Library Science Students.

    ERIC Educational Resources Information Center

    Losee, Robert M., Jr.

    An outline is proposed for a library school course in data processing for libraries that differs from other such courses in emphasizing the operations of the computer itself over the study of library computer systems. The course begins with a study of computer hardware, then moves to the teaching of assembly language using the MIX…

  7. Process versus Product Task Interpretation and Parental Teaching Practice.

    ERIC Educational Resources Information Center

    Renshaw, Peter D.; Gardner, Ruth

    1990-01-01

    Reports on research on parental teaching strategies with children aged three and four years. Findings support Dweck and Elliott's view that adults who are process oriented rather than product oriented act more as resources than as judges; focus children on learning rather than outcome; and respond to errors as natural and useful rather than as…

  8. Information Technologies and Globalization: New Perspectives of Teaching Learning Process

    ERIC Educational Resources Information Center

    Hussain, Irshad

    2008-01-01

    This article discusses how information technologies and globalization have opened new avenues and horizons for educators and learners. It discusses different experiences of using information and communication technologies (ICTs) in teaching learning process the world over in the age of globalization. It focuses on the ways these new trends have…

  9. RDI Advising Model for Improving the Teaching-Learning Process

    ERIC Educational Resources Information Center

    de la Fuente, Jesus; Lopez-Medialdea, Ana Maria

    2007-01-01

    Introduction: Advising in Educational Psychology from the perspective of RDI takes on a stronger investigative, innovative nature. The model proposed by De la Fuente et al (2006, 2007) and Education & Psychology (2007) was applied to the field of improving teaching-learning processes at a school. Hypotheses were as follows: (1) interdependence…

  10. Quantitative image processing in fluid mechanics

    NASA Technical Reports Server (NTRS)

    Hesselink, Lambertus; Helman, James; Ning, Paul

    1992-01-01

    The current status of digital image processing in fluid flow research is reviewed. In particular, attention is given to a comprehensive approach to the extraction of quantitative data from multivariate databases and examples of recent developments. The discussion covers numerical simulations and experiments, data processing, generation and dissemination of knowledge, traditional image processing, hybrid processing, fluid flow vector field topology, and isosurface analysis using Marching Cubes.

  11. Anthropological methods of optical image processing

    NASA Astrophysics Data System (ADS)

    Ginzburg, V. M.

    1981-12-01

    Some applications of a new method for optical image processing, based on prior separation of informative elements (IE) by means of a defocusing equal to the average defocusing of the eye, considered in a previous paper, are described. A diagram of a "drawing" robot that uses defocusing and other mechanisms of the human visual system (VS) is given. Methods for narrowing the TV channel bandwidth and for eliminating noise in computer image processing by prior image defocusing are described.

  12. Water surface capturing by image processing

    Technology Transfer Automated Retrieval System (TEKTRAN)

    An alternative means of measuring the water surface interface during laboratory experiments is processing a series of sequentially captured images. Image processing can provide a continuous, non-intrusive record of the water surface profile whose accuracy is not dependent on water depth. More trad...

  13. Automatic processing, analysis, and recognition of images

    NASA Astrophysics Data System (ADS)

    Abrukov, Victor S.; Smirnov, Evgeniy V.; Ivanov, Dmitriy G.

    2004-11-01

    New approaches and computer codes (A&CC) for the automatic processing, analysis, and recognition of images are offered. The A&CC are based on representing an object image as a collection of pixels of various colours and on consecutive automatic painting of the distinguishable parts of the image. The technical objectives of the A&CC centre on 1) image processing, 2) image feature extraction, and 3) image analysis, among others, in any sequence and combination. The A&CC make it possible to obtain various geometrical and statistical parameters of an object image and its parts. Further possibilities arise from combining the A&CC with artificial neural network technologies. We believe the A&CC can be used in building testing and control systems in various fields of industry and in military applications (airborne imaging systems, tracking of moving objects), in medical diagnostics, in new software for CCDs, and in industrial vision and decision-making systems. The capabilities of the A&CC have been tested on image analysis of model fires and plumes of sprayed fluid and of particle ensembles, on decoding interferometric images, on digitizing paper charts of electrical signals, on text recognition, on image denoising and filtering, on analysis of astronomical images and aerial photography, and on object detection.
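
    The "consecutive automatic painting of distinguishable parts" described above reads like connected-component labelling, which can be sketched as a flood fill over a binary mask. This is an illustrative reconstruction under that assumption, not the authors' code.

```python
from collections import deque
import numpy as np

def label_regions(mask):
    """'Paint' each 4-connected foreground region of a boolean mask with its
    own integer label via breadth-first flood fill."""
    labels = np.zeros(mask.shape, dtype=int)
    next_label = 0
    for sy in range(mask.shape[0]):
        for sx in range(mask.shape[1]):
            if mask[sy, sx] and labels[sy, sx] == 0:
                next_label += 1            # start painting a new part
                labels[sy, sx] = next_label
                q = deque([(sy, sx)])
                while q:
                    y, x = q.popleft()
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                                and mask[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = next_label
                            q.append((ny, nx))
    return labels, next_label
```

    Once parts are labelled, per-part geometrical and statistical parameters (area, centroid, bounding box) follow by aggregating over each label.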

  14. SUPRIM: easily modified image processing software.

    PubMed

    Schroeter, J P; Bretaudiere, J P

    1996-01-01

    A flexible, modular software package intended for the processing of electron microscopy images is presented. The system consists of a set of image processing tools or filters, written in the C programming language, and a command line style user interface based on the UNIX shell. The pipe and filter structure of UNIX and the availability of command files in the form of shell scripts eases the construction of complex image processing procedures from the simpler tools. Implementation of a new image processing algorithm in SUPRIM may often be performed by construction of a new shell script, using already existing tools. Currently, the package has been used for two- and three-dimensional image processing and reconstruction of macromolecules and other structures of biological interest. PMID:8742734

  15. Teaching Word Processing in the Library.

    ERIC Educational Resources Information Center

    Teo, Elizabeth A.; Jenkins, Sylvia M.

    A description is provided of a program developed at Moraine Valley Community College (MVCC), in Illinois, for providing word processing instruction in the library, including recommendations for program development based on MVCC experience and results from a survey of program participants. The first part of the paper discusses a model development…

  16. Image processing for cameras with fiber bundle image relay.

    PubMed

    Olivas, Stephen J; Arianpour, Ashkan; Stamenov, Igor; Morrison, Rick; Stack, Ron A; Johnson, Adam R; Agurok, Ilya P; Ford, Joseph E

    2015-02-10

    Some high-performance imaging systems generate a curved focal surface and so are incompatible with focal plane arrays fabricated by conventional silicon processing. One example is a monocentric lens, which forms a wide field-of-view high-resolution spherical image with a radius equal to the focal length. Optical fiber bundles have been used to couple between this focal surface and planar image sensors. However, such fiber-coupled imaging systems suffer from artifacts due to image sampling and incoherent light transfer by the fiber bundle as well as resampling by the focal plane, resulting in a fixed obscuration pattern. Here, we describe digital image processing techniques to improve image quality in a compact 126° field-of-view, 30 megapixel panoramic imager, where a 12 mm focal length F/1.35 lens made of concentric glass surfaces forms a spherical image surface, which is fiber-coupled to six discrete CMOS focal planes. We characterize the locally space-variant system impulse response at various stages: monocentric lens image formation onto the 2.5 μm pitch fiber bundle, image transfer by the fiber bundle, and sensing by a 1.75 μm pitch backside illuminated color focal plane. We demonstrate methods to mitigate moiré artifacts and local obscuration, correct for sphere to plane mapping distortion and vignetting, and stitch together the image data from discrete sensors into a single panorama. We compare processed images from the prototype to those taken with a 10× larger commercial camera with comparable field-of-view and light collection. PMID:25968031

  17. Using the Results of Teaching Evaluations to Improve Teaching: A Case Study of a New Systematic Process

    ERIC Educational Resources Information Center

    Malouff, John M.; Reid, Jackie; Wilkes, Janelle; Emmerton, Ashley J.

    2015-01-01

    This article describes a new 14-step process for using student evaluations of teaching to improve teaching. The new process includes examination of student evaluations in the context of instructor goals, student evaluations of the same course completed in prior terms, and evaluations of similar courses taught by other instructors. The process has…

  18. CT Image Processing Using Public Digital Networks

    PubMed Central

    Rhodes, Michael L.; Azzawi, Yu-Ming; Quinn, John F.; Glenn, William V.; Rothman, Stephen L.G.

    1984-01-01

    Nationwide commercial computer communication is now commonplace for those applications where digital dialogues are generally short and widely distributed, and where bandwidth does not exceed that of dial-up telephone lines. Image processing using such networks is prohibitive because of the large volume of data inherent to digital pictures. With a blend of increasing bandwidth and distributed processing, network image processing becomes possible. This paper examines characteristics of a digital image processing service for a nationwide network of CT scanner installations. Issues of image transmission, data compression, distributed processing, software maintenance, and interfacility communication are also discussed. Included are results that show the volume and type of processing experienced by a network of over 50 CT scanners for the last 32 months.

  19. Image processing for drawing recognition

    NASA Astrophysics Data System (ADS)

    Feyzkhanov, Rustem; Zhelavskaya, Irina

    2014-03-01

    The task of recognizing the edges of rectangular structures is well known. Still, almost all existing methods work with static images and place no limit on processing time. We propose applying homography estimation to a video stream obtained from a webcam, and present an algorithm suitable for this kind of application. One of its main use cases is recognizing drawings made by a person on a piece of paper in front of the webcam.
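
    The core operation the abstract relies on, mapping image points through a planar homography (e.g., from the webcam view onto the rectified paper plane), is a small piece of projective geometry. A minimal sketch with an assumed 3x3 matrix H:

```python
import numpy as np

def apply_homography(H, pts):
    """Map Nx2 points through a 3x3 homography in homogeneous coordinates:
    lift (x, y) to (x, y, 1), multiply by H, divide by the third component."""
    pts = np.asarray(pts, dtype=float)
    ones = np.ones((pts.shape[0], 1))
    homog = np.hstack([pts, ones]) @ H.T
    return homog[:, :2] / homog[:, 2:3]
```

    In a video pipeline, H would be re-estimated per frame (e.g., from the detected paper corners) and applied to rectify each frame before edge analysis.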

  20. Parallel digital signal processing architectures for image processing

    NASA Astrophysics Data System (ADS)

    Kshirsagar, Shirish P.; Hartley, David A.; Harvey, David M.; Hobson, Clifford A.

    1994-10-01

    This paper describes research into a high-speed image processing system that uses parallel digital signal processors to process electro-optic images. The objective of the system is to reduce the processing time of non-contact inspection problems, including industrial and medical applications. A single processor cannot deliver the processing power these applications require; hence, a MIMD system is designed and constructed to enable fast processing of electro-optic images. The Texas Instruments TMS320C40 digital signal processor is used because of its high-speed floating-point CPU and its support for parallel processing. A custom-designed VISION bus transfers images between processors. The system is being applied to solder-joint inspection of high-technology printed circuit boards.
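
    The MIMD decomposition described above, each processor working independently on its own part of the image, can be mimicked in miniature by splitting an image into strips and handing each strip to a worker. A sketch (the per-strip inversion filter is a placeholder for any inspection operation):

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def invert_strip(strip):
    """Per-strip workload; stands in for any per-pixel inspection filter."""
    return 255 - strip

def process_parallel(img, n_workers=4):
    """Split the image into horizontal strips and process them concurrently,
    mimicking the one-strip-per-processor decomposition of a MIMD system."""
    strips = np.array_split(img, n_workers, axis=0)
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return np.vstack(list(pool.map(invert_strip, strips)))
```

    `pool.map` preserves strip order, so the result stitches back together exactly; on real DSP hardware the stitching corresponds to the inter-processor image transfers over the VISION bus.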

  1. Stable image acquisition for mobile image processing applications

    NASA Astrophysics Data System (ADS)

    Henning, Kai-Fabian; Fritze, Alexander; Gillich, Eugen; Mönks, Uwe; Lohweg, Volker

    2015-02-01

    Today, mobile devices (smartphones, tablets, etc.) are widespread and of high importance to their users. Their performance as well as versatility increases over time. This creates the opportunity to use such devices for more specific tasks, like image processing in an industrial context. For image analysis, requirements such as image quality (blur, illumination, etc.) and a defined relative position of the object to be inspected are crucial. Since mobile devices are handheld and used in constantly changing environments, the challenge is to fulfill these requirements. We present an approach to overcome these obstacles and stabilize the image capturing process such that image analysis becomes significantly improved on mobile devices. To this end, image processing methods are combined with sensor fusion concepts. The approach consists of three main parts. First, pose estimation methods are used to guide a user moving the device to a defined position. Second, the sensor data and the pose information are combined for relative motion estimation. Finally, the image capturing process is automated: capture is triggered depending on the alignment of the device and the object, as well as on the image quality that can be achieved under consideration of motion and environmental effects.
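
    The final, automated-trigger step can be sketched as a simple gate combining a sharpness measure with a pose check. The variance-of-Laplacian sharpness metric and the thresholds here are common illustrative choices, not the paper's actual criteria.

```python
import numpy as np

def blur_metric(gray):
    """Variance of a 4-neighbour Laplacian over the interior; higher = sharper."""
    lap = (-4 * gray[1:-1, 1:-1] + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return float(lap.var())

def should_capture(gray, pose_error_deg, sharpness_thresh=10.0, pose_thresh_deg=2.0):
    """Trigger the shutter only when the device is aligned with the object
    (small pose error) and the preview frame is sharp enough."""
    return pose_error_deg <= pose_thresh_deg and blur_metric(gray) >= sharpness_thresh
```

    In the full system, `pose_error_deg` would come from the fused inertial and visual pose estimate rather than being passed in directly.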

  2. Teaching sustainable design: A collaborative process

    SciTech Connect

    Theis, C.C.

    1997-12-31

    This paper describes a collaborative educational experience in the Schools of Architecture and Landscape Architecture at Louisiana State University. During the Fall Semester of 1996 an upper-level architectural design studio worked with a peer group of landscape architecture students on the design of a master plan for an environmentally sensitive residential development on Cat Island, a barrier island located approximately eight miles south of Gulfport, Mississippi. This paper presents the methodology and results of the project, describes the collaborative process, and assesses both the viability of the design solutions and the value of the educational experience.

  3. Using Image Modelling to Teach Newton's Laws with the Ollie Trick

    ERIC Educational Resources Information Center

    Dias, Marco Adriano; Carvalho, Paulo Simeão; Vianna, Deise Miranda

    2016-01-01

    Image modelling is a video-based teaching tool that combines strobe images and video analysis. This tool enables both a qualitative and a quantitative approach to the teaching of physics, in a much more engaging and appealing way than traditional expository practice. In a specific scenario shown in this paper, the Ollie trick, we…

  4. Characteristics of Mindless Teaching Evaluations and the Moderating Effects of Image Compatibility.

    ERIC Educational Resources Information Center

    Dunegan, Kenneth J.; Hrivnak, Mary W.

    2003-01-01

    At 3 times, 164 management students completed student evaluations of teaching (SET), 150 completed an image compatibility questionnaire, and 155 evaluated instructors' overall performance. SET scores and overall evaluations were significantly correlated only when actual and ideal images of instructors were incompatible. When teaching was…

  5. Process Development in the Teaching Laboratory

    NASA Astrophysics Data System (ADS)

    Klein, Leonard C.; Dana, Susanne M.

    1998-06-01

    Many experiences in high school and undergraduate laboratories are well-tested cookbook recipes that have already been designed to yield optimal results; the well-known synthesis of aspirin is such an example. In this project for advanced placement or second-year high school chemistry students, students mimic process development in industrial laboratories by investigating the effect of varying conditions in the synthesis of aspirin. The class decides on criteria that should be explored (quantity of catalyst, temperature of reaction, etc.). The class is then divided into several teams, with each team assigned a variable to study. Each team must submit a proposal describing how it will explore the variable before starting its study. After data on yield and purity have been gathered and evaluated, students discuss which method is most desirable, based on their agreed-upon criteria. This exercise provides an opportunity for students to review many topics from the course (rate of reaction, limiting reagents, Beer's Law) while participating in a cooperative exercise designed to imitate industrial process development.

  6. Applications of Digital Image Processing XI

    NASA Technical Reports Server (NTRS)

    Cho, Y. -C.

    1988-01-01

    A new technique, digital image velocimetry, is proposed for the measurement of instantaneous velocity fields of time-dependent flows. A time sequence of single-exposure images of seed particles is captured with a high-speed camera, and a finite number of the single-exposure images are sampled within a prescribed period in time. The sampled images are then digitized on an image processor, enhanced, and superimposed to construct an image which is equivalent to a multiple-exposure image used in both laser speckle velocimetry and particle image velocimetry. The superimposed image and a single-exposure image are digitally Fourier transformed for extraction of information on the velocity field. A great enhancement of the dynamic range of the velocity measurement is accomplished through the new technique by manipulating the Fourier transforms of both the single-exposure image and the superimposed image. Also, the direction of the velocity vector is unequivocally determined. With the use of a high-speed video camera, the whole process from image acquisition to velocity determination can be carried out electronically; thus this technique can be developed into a real-time capability.
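
    The Fourier-domain displacement extraction at the heart of velocimetry techniques like this can be sketched with a circular cross-correlation of two exposures: the correlation peak sits at the particle displacement. A minimal integer-shift version (the real method also resolves sub-pixel shifts and directional ambiguity):

```python
import numpy as np

def estimate_shift(frame_a, frame_b):
    """Estimate the integer displacement between two exposures via the
    cross-correlation theorem: corr = IFFT(FFT(a) * conj(FFT(b)))."""
    A = np.fft.fft2(frame_a)
    B = np.fft.fft2(frame_b)
    corr = np.fft.ifft2(A * np.conj(B)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Interpret peak positions past the midpoint as negative shifts.
    shift = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return tuple(shift)
```

    Dividing the displacement by the inter-exposure time gives the local velocity vector.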

  7. Process perspective on image quality evaluation

    NASA Astrophysics Data System (ADS)

    Leisti, Tuomas; Halonen, Raisa; Kokkonen, Anna; Weckman, Hanna; Mettänen, Marja; Lensu, Lasse; Ritala, Risto; Oittinen, Pirkko; Nyman, Göte

    2008-01-01

    The psychological complexity of multivariate image quality evaluation makes it difficult to develop general image quality metrics. Quality evaluation includes several mental processes, and ignoring these processes and using only a few test images can lead to biased results. Using a qualitative/quantitative (Interpretation Based Quality, IBQ) methodology, we examined the process of pair-wise comparison in a setting where the quality of images printed by a laser printer on different paper grades was evaluated. The test image consisted of a picture of a table covered with several objects. Three other images were also used: photographs of a woman, a cityscape, and a countryside. In addition to the pair-wise comparisons, observers (N=10) were interviewed about the subjective quality attributes they used in making their quality decisions. An examination of the individual pair-wise comparisons revealed serious inconsistencies in observers' evaluations of the test image content, but not of the other contents. The qualitative analysis showed that this inconsistency was due to the observers' focus of attention. The lack of easily recognizable context in the test image may have contributed to this inconsistency. To obtain reliable knowledge of the effect of image context or attention on subjective image quality, a qualitative methodology is needed.

  8. Interactive image processing in swallowing research

    NASA Astrophysics Data System (ADS)

    Dengel, Gail A.; Robbins, JoAnne; Rosenbek, John C.

    1991-06-01

    Dynamic radiographic imaging of the mouth, larynx, pharynx, and esophagus during swallowing is used commonly in clinical diagnosis, treatment and research. Images are recorded on videotape and interpreted conventionally by visual perceptual methods, limited to specific measures in the time domain and binary decisions about the presence or absence of events. An image processing system using personal computer hardware and original software has been developed to facilitate measurement of temporal, spatial and temporospatial parameters. Digitized image sequences derived from videotape are manipulated and analyzed interactively. Animation is used to preserve context and increase efficiency of measurement. Filtering and enhancement functions heighten image clarity and contrast, improving visibility of details which are not apparent on videotape. Distortion effects and extraneous head and body motions are removed prior to analysis, and spatial scales are controlled to permit comparison among subjects. Effects of image processing on intra- and interjudge reliability and research applications are discussed.

  9. Image-plane processing of visual information

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Fales, C. L.; Park, S. K.; Samms, R. W.

    1984-01-01

    Shannon's theory of information is used to optimize the optical design of sensor-array imaging systems which use neighborhood image-plane signal processing for enhancing edges and compressing dynamic range during image formation. The resultant edge-enhancement, or band-pass-filter, response is found to be very similar to that of human vision. Comparisons of traits in human vision with results from information theory suggest that: (1) Image-plane processing, like preprocessing in human vision, can improve visual information acquisition for pattern recognition when resolving power, sensitivity, and dynamic range are constrained. Improvements include reduced sensitivity to changes in light levels, reduced signal dynamic range, reduced data transmission and processing, and reduced aliasing and photosensor noise degradation. (2) Information content can be an appropriate figure of merit for optimizing the optical design of imaging systems when visual information is acquired for pattern recognition. The design trade-offs involve spatial response, sensitivity, and sampling interval.
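
    The band-pass, edge-enhancing response the abstract compares to human vision is classically modelled as a difference of Gaussians (centre minus surround). A 1-D sketch for intuition; the sigmas are illustrative, not the paper's optimized values:

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """Truncated, normalized 1-D Gaussian kernel."""
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x**2 / (2 * sigma**2))
    return g / g.sum()

def dog_bandpass(signal, sigma_center=1.0, sigma_surround=3.0, radius=9):
    """Difference-of-Gaussians filter: a zero-mean, band-pass (edge-enhancing)
    response similar in shape to centre-surround processing in early vision."""
    k = gaussian_kernel(sigma_center, radius) - gaussian_kernel(sigma_surround, radius)
    return np.convolve(signal, k, mode="same")
```

    Because the kernel sums to zero, uniform regions map to zero output (compressing dynamic range) while intensity edges produce a strong response.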

  10. Earth Observation Services (Image Processing Software)

    NASA Technical Reports Server (NTRS)

    1992-01-01

    San Diego State University and Environmental Systems Research Institute, with other agencies, have applied satellite imaging and image processing techniques to geographic information systems (GIS) updating. The resulting images display land use and are used by a regional planning agency for applications like mapping vegetation distribution and preserving wildlife habitats. The EOCAP program provides government co-funding to encourage private investment in, and to broaden the use of NASA-developed technology for analyzing information about Earth and ocean resources.

  11. Nonlinear Optical Image Processing with Bacteriorhodopsin Films

    NASA Technical Reports Server (NTRS)

    Downie, John D.; Deiss, Ron (Technical Monitor)

    1994-01-01

    The transmission properties of some bacteriorhodopsin film spatial light modulators are uniquely suited to allow nonlinear optical image processing operations to be applied to images with multiplicative noise characteristics. A logarithmic amplitude transmission feature of the film permits the conversion of multiplicative noise to additive noise, which may then be linearly filtered out in the Fourier plane of the transformed image. The bacteriorhodopsin film displays the logarithmic amplitude response for write beam intensities spanning a dynamic range greater than 2.0 orders of magnitude. We present experimental results demonstrating the principle and capability for several different image and noise situations, including deterministic noise and speckle. Using the bacteriorhodopsin film, we successfully filter out image noise from the transformed image that cannot be removed from the original image.
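
    The key trick above, using a logarithmic response to turn multiplicative noise into additive noise that linear filtering can remove, is the principle of homomorphic filtering. A minimal numerical sketch; unlike the optical experiments, the additive term is assumed known here rather than suppressed by a Fourier-plane filter:

```python
import numpy as np

def remove_multiplicative_noise(img, noise_profile):
    """Homomorphic filtering in miniature: the log transform makes the
    multiplicative distortion additive, a linear subtraction removes it,
    and exponentiation restores the image. Inputs must be positive."""
    log_img = np.log(img)                     # multiplicative -> additive
    filtered = log_img - np.log(noise_profile)
    return np.exp(filtered)
```

    With an unknown noise field, the subtraction step is replaced by a linear filter in the Fourier plane of the log image, exactly the role the bacteriorhodopsin film's logarithmic amplitude response enables optically.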

  12. Accelerated image processing on FPGAs.

    PubMed

    Draper, Bruce A; Beveridge, J Ross; Böhm, A P Willem; Ross, Charles; Chawathe, Monica

    2003-01-01

    The Cameron project has developed a language called single assignment C (SA-C), and a compiler for mapping image-based applications written in SA-C to field programmable gate arrays (FPGAs). The paper tests this technology by implementing several applications in SA-C and compiling them to an Annapolis Microsystems (AMS) WildStar board with a Xilinx XV2000E FPGA. The performance of these applications on the FPGA is compared to the performance of the same applications written in assembly code or C for an 800 MHz Pentium III. (Although no comparison across processors is perfect, these chips were the first of their respective classes fabricated at 0.18 microns, and are therefore of comparable ages.) We find that applications written in SA-C and compiled to FPGAs are between 8 and 800 times faster than the equivalent program run on the Pentium III. PMID:18244709

  13. Digital Image Processing in Private Industry.

    ERIC Educational Resources Information Center

    Moore, Connie

    1986-01-01

    Examines various types of private industry optical disk installations in terms of business requirements for digital image systems in five areas: records management; transaction processing; engineering/manufacturing; information distribution; and office automation. Approaches for implementing image systems are addressed as well as key success…

  14. Checking Fits With Digital Image Processing

    NASA Technical Reports Server (NTRS)

    Davis, R. M.; Geaslen, W. D.

    1988-01-01

    Computer-aided video inspection of mechanical and electrical connectors is feasible. The report discusses work done on digital image processing for computer-aided interface verification (CAIV). Two kinds of components were examined: a mechanical mating flange and an electrical plug.

  15. Recent developments in digital image processing at the Image Processing Laboratory of JPL.

    NASA Technical Reports Server (NTRS)

    O'Handley, D. A.

    1973-01-01

    Review of some of the computer-aided digital image processing techniques recently developed. Special attention is given to mapping and mosaicking techniques and to preliminary developments in range determination from stereo image pairs. The discussed image processing utilization areas include space, biomedical, and robotic applications.

  16. Command Line Image Processing System (CLIPS)

    NASA Astrophysics Data System (ADS)

    Fleagle, S. R.; Meyers, G. L.; Kulinski, R. G.

    1985-06-01

    An interactive image processing language (CLIPS) has been developed for use in an image processing environment. CLIPS uses a simple syntax with extensive on-line help to allow even the most naive user to perform complex image processing tasks. In addition, CLIPS functions as an interpretive language complete with data structures and program control statements. CLIPS statements fall into one of three categories: command, control, and utility statements. Command statements are expressions comprised of intrinsic functions and/or arithmetic operators which act directly on image or user-defined data. Some examples of CLIPS intrinsic functions are ROTATE, FILTER, and EXPONENT. Control statements allow a structured programming style through the use of statements such as DO WHILE and IF-THEN-ELSE. Utility statements such as DEFINE, READ, and WRITE support I/O and user-defined data structures. Since CLIPS uses a table-driven parser, it is easily adapted to any environment. New commands may be added to CLIPS by writing the procedure in a high-level language such as Pascal or FORTRAN and inserting the syntax for that command into the table. However, CLIPS was designed by incorporating most imaging operations into the language as intrinsic functions. CLIPS allows the user to generate new procedures easily with these powerful functions, either interactively or off line using a text editor. The fact that CLIPS can be used to generate complex procedures quickly or perform basic image processing functions interactively makes it a valuable tool in any image processing environment.
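
    The table-driven design described above, where adding a command means adding one table entry, can be sketched with a tiny dispatcher. The command names and semantics below are invented for illustration and are not CLIPS's actual syntax.

```python
def rotate(img, turns="1"):
    """Rotate a row-major list-of-lists image 90 degrees per turn (hypothetical
    stand-in for an intrinsic like ROTATE)."""
    for _ in range(int(turns) % 4):
        img = [list(row) for row in zip(*img[::-1])]
    return img

def negate(img):
    """Invert 8-bit pixel values (hypothetical intrinsic)."""
    return [[255 - p for p in row] for row in img]

# The parser is table driven: registering a new command is one table entry.
COMMANDS = {"ROTATE": rotate, "NEGATE": negate}

def run(line, img):
    """Execute one command line such as 'ROTATE 2' against an image."""
    name, *args = line.split()
    return COMMANDS[name.upper()](img, *args)
```

    A new intrinsic written elsewhere only needs its name and callable inserted into `COMMANDS`, mirroring how CLIPS adds commands by extending the parser table.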

  17. CAD/CAM-coupled image processing systems

    NASA Astrophysics Data System (ADS)

    Ahlers, Rolf-Juergen; Rauh, W.

    1990-08-01

    Image processing systems have found wide application in industry. For most computer-integrated manufacturing facilities it is necessary to adapt these systems so that they can automate the interaction with, and the integration of, CAD and CAM systems. In this paper new approaches are described that make use of the coupling of CAD and image processing, as well as the automatic generation of programmes for the machining of products.

  18. Color image processing for date quality evaluation

    NASA Astrophysics Data System (ADS)

    Lee, Dah Jye; Archibald, James K.

    2010-01-01

    Many agricultural non-contact visual inspection applications use color image processing techniques because color is often a good indicator of product quality. Color evaluation is an essential step in the processing and inventory control of fruits and vegetables that directly affects profitability. Most color spaces, such as RGB and HSV, represent colors with three-dimensional data, which makes color image processing a challenging task. Since most agricultural applications only require analysis on a predefined set or range of colors, mapping these relevant colors to a small number of indexes allows simple and efficient color image processing for quality evaluation. This paper presents a simple but efficient color mapping and image processing technique designed specifically for real-time quality evaluation of Medjool dates. In contrast with more complex color image processing techniques, the proposed color mapping method makes it easy for a human operator to specify and adjust color-preference settings for different color groups representing distinct quality levels. Using this color mapping technique, the color image is first converted to a color map in which a single color index represents the color value of each pixel. Fruit maturity level is evaluated based on these color indices. A skin lamination threshold is then determined based on the fruit surface characteristics. This adaptive threshold is used to detect delaminated fruit skin and hence determine the fruit quality. This robust color grading technique has been used for real-time Medjool date grading.
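
    The central idea, collapsing three-dimensional color data to a single index per pixel, can be sketched as nearest-reference-color assignment. The reference colors and the maturity statistic below are hypothetical placeholders, not the paper's calibrated settings.

```python
import numpy as np

# Hypothetical reference colors for three maturity classes (RGB).
REFERENCE_COLORS = np.array([
    [200, 170, 90],   # index 0: light amber
    [150, 90, 40],    # index 1: medium brown
    [70, 40, 20],     # index 2: dark brown
], dtype=float)

def color_map(image):
    """Map each RGB pixel of an HxWx3 image to the index of its nearest
    reference color, giving one small integer index per pixel."""
    pixels = image.reshape(-1, 3).astype(float)
    dists = np.linalg.norm(pixels[:, None, :] - REFERENCE_COLORS[None, :, :], axis=2)
    return dists.argmin(axis=1).reshape(image.shape[:2])

def maturity_score(image):
    """Fraction of pixels in the darkest class; a stand-in quality statistic."""
    idx = color_map(image)
    return float((idx == 2).mean())
```

    Because the mapping is just a lookup against a short reference list, an operator can adjust quality levels by editing the reference colors rather than tuning a full 3-D color model.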

  19. Image processing technique based on image understanding architecture

    NASA Astrophysics Data System (ADS)

    Kuvychko, Igor

    2000-12-01

    The effectiveness of image applications rests directly on their ability to resolve ambiguity and uncertainty in real images. That requires tight integration of low-level image processing with high-level knowledge-based reasoning, which is the essence of the image understanding problem. This article presents a generic computational framework for solving the image understanding problem: the Spatial Turing Machine. Instead of a tape of symbols, it works with hierarchical networks dually represented as discrete and continuous structures. The dual representation provides a natural transformation of continuous image information into discrete structures, making it available for analysis. Such structures are data and algorithms at the same time, and they can perform the graph and diagrammatic operations that are the basis of intelligence. They can create derivative structures that play the role of context, or 'measurement device,' giving the ability to analyze and to run top-down algorithms. Symbols emerge naturally, and symbolic operations work in combination with new simplified methods of computational intelligence. This makes images and scenes self-describing and provides flexible ways of resolving uncertainty. Classification of images truly invariant to any transformation can be done by matching their derivative structures. The proposed architecture does not require supercomputers, opening the way to new image technologies.

  20. Nanosecond image processing using stimulated photon echoes.

    PubMed

    Xu, E Y; Kröll, S; Huestis, D L; Kachru, R; Kim, M K

    1990-05-15

    Processing of two-dimensional images on a nanosecond time scale is demonstrated using the stimulated photon echoes in a rare-earth-doped crystal (0.1 at. % Pr(3+):LaF(3)). Two spatially encoded laser pulses (pictures) resonant with the (3)P(0)-(3)H(4) transition of Pr(3+) were stored by focusing the image pulses sequentially into the Pr(3+):LaF(3) crystal. The stored information is retrieved and processed by a third read pulse, generating the echo that is the spatial convolution or correlation of the input images. Application of this scheme to high-speed pattern recognition is discussed. PMID:19768008

  1. New approach for underwater imaging and processing

    NASA Astrophysics Data System (ADS)

    Wen, Yanan; Tian, Weijian; Zheng, Bing; Zhou, Guozun; Dong, Hui; Wu, Qiong

    2014-05-01

    Due to the absorptive and scattering nature of water, the characteristics of underwater images differ from those of images taken in air. Underwater images are characterized by poor visibility and noise. Obtaining a clear original image and processing it effectively are the two key problems in underwater vision. In this paper a new approach is presented to solve these problems. First, an inhomogeneous illumination method is developed to obtain a clear original image. A normal illumination imaging system and an inhomogeneous illumination imaging system are used to capture images at the same distance; the results show that the contrast and definition of the processed image are greatly improved by the inhomogeneous illumination method. Second, based on the theory of photon transport in water and the particularities of underwater target detection, the characteristics of laser scattering on underwater target surfaces and the spatial and temporal characteristics of the oceanic optical channel have been studied. Using Monte Carlo simulation, we studied how water-quality and other system parameters affect light transmission through water in the spatial and temporal regions, providing theoretical support for enhancing the SNR and operational distance.
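
    A minimal Monte Carlo sketch of photon transport through a water slab, in the spirit of the simulation described. The attenuation coefficients and the crude isotropic-rescattering assumption are illustrative, not the authors' model:

```python
import numpy as np

rng = np.random.default_rng(0)

def transmit_fraction(n_photons, depth, c, albedo):
    """Fraction of photons crossing a water slab of the given depth [m].
    c: beam attenuation coefficient [1/m]; albedo: scattering fraction b/c.
    Rescattering is isotropic (a simplification; real seawater scatters
    strongly forward)."""
    alive = 0
    for _ in range(n_photons):
        z, mu = 0.0, 1.0                       # depth, direction cosine
        while True:
            step = -np.log(rng.random()) / c   # exponential free path
            z += mu * step
            if z >= depth:
                alive += 1                     # photon exits the far side
                break
            if z < 0 or rng.random() > albedo:
                break                          # backscattered out or absorbed
            mu = rng.uniform(-1.0, 1.0)        # isotropic rescatter (crude)
    return alive / n_photons

# Clear vs turbid water: higher attenuation -> fewer photons survive.
clear = transmit_fraction(2000, 5.0, c=0.15, albedo=0.8)
turbid = transmit_fraction(2000, 5.0, c=1.0, albedo=0.8)
```

    Sweeping `c` and `albedo` in such a loop is one way to see how water quality limits SNR and operational distance.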

  2. Using NASA Space Imaging to Teach Earth and Sun Topics in Professional Development Courses for In-Service Teachers

    NASA Astrophysics Data System (ADS)

    Verner, E.; Bruhweiler, F. C.; Long, T.; Edwards, S.; Ofman, L.; Brosius, J. W.; Holman, G.; St Cyr, O. C.; Krotkov, N. A.; Fatoyinbo Agueh, T.

    2012-12-01

    We have developed several PD courses using NASA imaging technology, covering various ways to study selected topics in physics and astronomy. We use NASA images to develop lesson plans and EPO materials for grades PreK-8. The topics are space based and vary from measurement and magnetism on Earth to the same phenomena on our Sun; in addition, we cover ecosystem structure, biomass, and water on Earth. Hands-on experiments, computer simulations, analysis of real-time NASA data, and vigorous seminar discussions are blended in an inquiry-driven curriculum to instill a confident understanding of basic physical science and modern, effective methods for teaching it. The course also demonstrates how scientific thinking and hands-on activities can be implemented in the classroom, and is designed to give the non-science student a confident understanding of basic physical science and effective teaching methods. Most topics were selected using the National Science Standards and National Mathematics Standards for grades PreK-8. The course focuses on helping teachers in several areas: 1) building knowledge of scientific concepts and processes; 2) understanding the measurable attributes of objects and the units and methods of measurement; 3) conducting data analysis (collecting, organizing, and presenting scientific data, and predicting results); 4) using hands-on approaches to teach science; and 5) becoming familiar with Internet science teaching resources. Here we share our experiences and the challenges we faced teaching this course.

  3. Image-processing with augmented reality (AR)

    NASA Astrophysics Data System (ADS)

    Babaei, Hossein R.; Mohurutshe, Pagiel L.; Habibi Lashkari, Arash

    2013-03-01

    In this project, the aim is to discuss and articulate the intent to create an image-based Android application. The basis of this study is real-time image detection and processing: a convenient new measure that allows users to gain information on imagery right on the spot. Past studies have revealed attempts to create image-based applications, but these have only gone as far as image finders that work with images already stored in some form of database. The Android platform is rapidly spreading around the world and provides by far the most interactive and technical platform for smartphones, which is why it was important to base the study and research on it. Augmented reality allows the user to manipulate the data and to add enhanced features (video, GPS tags) to the image taken.

  4. Image processing via ultrasonics - Status and promise

    NASA Technical Reports Server (NTRS)

    Kornreich, P. G.; Kowel, S. T.; Mahapatra, A.; Nouhi, A.

    1979-01-01

    Acousto-electric devices for electronic imaging of light are discussed. These devices are more versatile than the line-scan imaging devices in current use, since they can present the image information in a variety of modes. The image can be read out in the conventional line-scan mode, or in the form of its Fourier, Hadamard, or other transform. One can also take the transform along one direction of the image and line-scan in the other direction, or perform other combinations of image processing functions, simply by applying the appropriate electrical input signals to the device. Since the electrical output signal of these devices can be detected in a synchronous mode, substantial noise reduction is possible.
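
    The mixed-mode readout described above, transforming along one image direction while line-scanning the other, can be sketched numerically with a Hadamard matrix (a toy image stands in for the optical input):

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of the n x n Hadamard matrix (n a power of 2)."""
    h = np.array([[1.0]])
    while h.shape[0] < n:
        h = np.block([[h, h], [h, -h]])
    return h

# Transform along columns, plain line scan along rows (toy 8x8 image).
img = np.arange(64, dtype=float).reshape(8, 8)
H = hadamard(8)
coeffs = H @ img                   # Hadamard transform of each column
recovered = (H.T @ coeffs) / 8.0   # H H^T = n I, so dividing by n inverts
```

    The first coefficient row is just the column sums (the all-ones Hadamard row), mirroring how the device's transform mode aggregates charge along one axis.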

  5. Overview on METEOSAT geometrical image data processing

    NASA Technical Reports Server (NTRS)

    Diekmann, Frank J.

    1994-01-01

    Digital images acquired from the geostationary METEOSAT satellites are processed and disseminated at ESA's European Space Operations Centre (ESOC) in Darmstadt, Germany. Their scientific value depends mainly on their radiometric quality and geometric stability. This paper gives an overview of the image processing activities performed at ESOC, concentrating on geometrical restoration and quality evaluation. The performance of the rectification process for the various satellites over the past years is presented, and the impacts of external events, for instance the Pinatubo eruption in 1991, are explained. Special developments in both hardware and software, necessary to cope with demanding tasks such as new image resampling or correcting for spacecraft anomalies, are presented as well. The rotating lens of MET-5, which caused severe geometrical image distortions, is an example of the latter.

  6. Data Processing: Teaching Data Processing with a Microcomputer.

    ERIC Educational Resources Information Center

    Tesch, Robert C., Sr.

    1980-01-01

    The microcomputer, an inexpensive computer unit, is now available for business education teachers to use in data processing courses. Elements of a computer course design are discussed: selecting equipment, selecting students, and course design. Explanations of the terms "hardware" and "software" are included. (CT)

  7. Theoretical description of teaching-learning processes: a multidisciplinary approach.

    PubMed

    Bordogna, C M; Albano, E V

    2001-09-10

    A multidisciplinary approach based on concepts from sociology, educational psychology, statistical physics, and computational science is developed for the theoretical description of teaching-learning processes that take place in the classroom. The emerging model is consistent with well-established empirical results, such as the higher achievements reached working in collaborative groups and the influence of the structure of the group on the achievements of the individuals. Furthermore, another social learning process that takes place in massive interactions among individuals via the Internet is also investigated. PMID:11531550

  8. Teaching the NIATx Model of Process Improvement as an Evidence-Based Process

    ERIC Educational Resources Information Center

    Evans, Alyson C.; Rieckmann, Traci; Fitzgerald, Maureen M.; Gustafson, David H.

    2007-01-01

    Process Improvement (PI) is an approach for helping organizations to identify and resolve inefficient and ineffective processes through problem solving and pilot testing change. Use of PI in improving client access, retention and outcomes in addiction treatment is on the rise through the teaching of the Network for the Improvement of Addiction…

  9. Real-time optical image processing techniques

    NASA Technical Reports Server (NTRS)

    Liu, Hua-Kuang

    1988-01-01

    Nonlinear real-time optical processing based on spatial pulse frequency modulation has been pursued through the analysis, design, and fabrication of pulse-frequency-modulated halftone screens and the modification of microchannel spatial light modulators (MSLMs). The MSLMs are modified via the Fabry-Perot method to achieve the high gamma required for nonlinear operation. Real-time nonlinear processing was performed using the halftone screen and MSLM; the experiments showed the effectiveness of thresholding and also the need for a higher space-bandwidth product (SBP) for image processing. The Hughes LCLV has been characterized and found to yield high gamma (about 1.7) when operated in low-frequency, low-bias mode. Cascading two LCLVs should also provide enough gamma for nonlinear processing; in this case the SBP of the LCLV is sufficient, but its uniformity needs improvement. Applications investigated include image correlation, computer generation of holograms, pseudo-color image encoding for image enhancement, and associative retrieval in neural processing. The discovery of the only known optical method for real-time dynamic range compression of an input image, using GaAs photorefractive crystals, is reported. Finally, a new architecture for nonlinear, multiple-sensory neural processing is suggested.

  10. Visualisation of Ecohydrological Processes and Relationships for Teaching Using Advanced Techniques

    NASA Astrophysics Data System (ADS)

    Guan, H.; Wang, H.; Gutierrez-Jurado, H. A.; Yang, Y.; Deng, Z.

    2014-12-01

    Ecohydrology is an emerging discipline with rapid research growth, which calls for enhancing ecohydrology education at both the undergraduate and postgraduate levels. In other hydrology disciplines, hydrological processes are commonly observed in the environment (e.g., streamflow, infiltration) or easily demonstrated in labs (e.g., Darcy's column). It is relatively difficult to demonstrate ecohydrological concepts and processes (e.g., the soil-vegetation water relationship) in teaching. In this presentation, we report examples of using advanced techniques to illustrate ecohydrological concepts, relationships, and processes, with measurements from a native vegetation catchment in South Australia. They include LIDAR images showing the relationship between topography-controlled hydroclimatic conditions and vegetation distribution, electrical resistivity tomography images showing stem structures, continuous stem water potential monitoring showing diurnal variations in plant water status, root-zone moisture depletion during dry spells and responses to precipitation inputs, and sapflow measurements demonstrating environmental stress on plant stomatal behaviour.

  11. Bistatic SAR: Signal Processing and Image Formation.

    SciTech Connect

    Wahl, Daniel E.; Yocky, David A.

    2014-10-01

    This report describes the significant processing steps used to take the raw digitized signals recorded by the bistatic synthetic aperture radar (SAR) hardware built for the NCNS Bistatic SAR project to a final bistatic SAR image. In general, the process steps herein are applicable to bistatic SAR signals that include the direct-path signal and the reflected signal. The steps include preprocessing, data extraction to form a phase history, and finally image formation. Various plots and values are shown at most steps to illustrate the processing for a bistatic COSMO-SkyMed collection gathered on June 10, 2013 at Kirtland Air Force Base, New Mexico.

  12. Teaching Anatomy and Physiology Using Computer-Based, Stereoscopic Images

    ERIC Educational Resources Information Center

    Perry, Jamie; Kuehn, David; Langlois, Rick

    2007-01-01

    Learning real three-dimensional (3D) anatomy for the first time can be challenging. Two-dimensional drawings and plastic models tend to over-simplify the complexity of anatomy. The approach described uses stereoscopy to create 3D images of the process of cadaver dissection and to demonstrate the underlying anatomy related to the speech mechanisms.…

  13. Twofold processing for denoising ultrasound medical images.

    PubMed

    Kishore, P V V; Kumar, K V V; Kumar, D Anil; Prasad, M V D; Goutham, E N D; Rahul, R; Krishna, C B S Vamsi; Sandeep, Y

    2015-01-01

    Ultrasound (US) medical imaging non-invasively pictures the inside of the human body for disease diagnostics. Speckle noise attacks ultrasound images, degrading their visual quality. A twofold processing algorithm is proposed in this work to reduce this multiplicative speckle noise. The first fold applies block-based thresholding, both hard (BHT) and soft (BST), to pixels in the wavelet domain with 8, 16, 32 and 64 non-overlapping block sizes. This first fold reduces speckle effectively but also blurs the object of interest. The second fold restores object boundaries and texture with adaptive wavelet fusion: the degraded object in the block-thresholded US image is restored through wavelet-coefficient fusion of the object in the original US image and the block-thresholded US image. Fusion rules and wavelet decomposition levels are made adaptive for each block using gradient histograms with normalized differential mean (NDF), to introduce the highest level of contrast between the denoised pixels and the object pixels in the resultant image. The proposed twofold methods are thus named adaptive NDF block fusion with hard and soft thresholding (ANBF-HT and ANBF-ST). The results indicate appreciable visual quality improvement with the proposed twofold processing, where the first fold removes noise and the second fold restores object properties. Peak signal-to-noise ratio (PSNR), normalized cross-correlation coefficient (NCC), edge strength (ES), image quality index (IQI) and structural similarity index (SSIM) measure the quantitative quality of the twofold processing technique. The proposed method is validated by comparison with anisotropic diffusion (AD), total variational filtering (TVF) and empirical mode decomposition (EMD) for enhancement of US images. The US images were provided by the AMMA hospital radiology labs at Vijayawada, India. PMID:26697285
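
    The first fold can be illustrated with a single-level Haar transform and per-block soft thresholding. This is a simplified stand-in for the paper's multi-size block scheme; the block size and median-based threshold rule here are illustrative assumptions:

```python
import numpy as np

def haar2(x):
    """One-level 2-D Haar transform (image sides must be even)."""
    a = (x[0::2] + x[1::2]) / 2.0            # rows: average
    d = (x[0::2] - x[1::2]) / 2.0            # rows: detail
    ll, lh = (a[:, 0::2] + a[:, 1::2]) / 2.0, (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl, hh = (d[:, 0::2] + d[:, 1::2]) / 2.0, (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    """Exact inverse of haar2."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d = np.empty_like(a)
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2], x[1::2] = a + d, a - d
    return x

def block_soft_threshold(c, block=8):
    """Soft-threshold each block x block tile with its own noise estimate."""
    out = c.copy()
    for i in range(0, c.shape[0], block):
        for j in range(0, c.shape[1], block):
            tile = out[i:i+block, j:j+block]
            t = np.median(np.abs(tile)) / 0.6745 * 0.5   # crude per-block level
            out[i:i+block, j:j+block] = np.sign(tile) * np.maximum(np.abs(tile) - t, 0)
    return out

rng = np.random.default_rng(1)
clean = np.zeros((64, 64)); clean[16:48, 16:48] = 100.0
noisy = clean * np.exp(rng.normal(0, 0.2, clean.shape))   # multiplicative speckle
ll, lh, hl, hh = haar2(noisy)
den = ihaar2(ll, block_soft_threshold(lh), block_soft_threshold(hl),
             block_soft_threshold(hh))
```

    The paper's second fold would then fuse `den` with `noisy` per block; that adaptive fusion step is not reproduced here.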

  14. Investigating the Learning to Teach Process: Pedagogy, Innovation Adoption, Expertise Development, and Technology Integration

    ERIC Educational Resources Information Center

    Sun, Yan

    2013-01-01

    This dissertation reported three studies whose overarching purpose is to enhance our understanding about how teachers learn to teach by revealing the learning to teach process. Each of three studies revealed the learning to teach process from different perspectives. Guided by the Pedagogical Content Knowledge (PCK) framework, the first study…

  15. Image Processing Application for Cognition (IPAC) - Traditional and Emerging Topics in Image Processing in Astronomy (Invited)

    NASA Astrophysics Data System (ADS)

    Pesenson, M.; Roby, W.; Helou, G.; McCollum, B.; Ly, L.; Wu, X.; Laine, S.; Hartley, B.

    2008-08-01

    A new application framework for advanced image processing for astronomy is presented. It implements standard two-dimensional operators, and recent developments in the field of non-astronomical image processing (IP), as well as original algorithms based on nonlinear partial differential equations (PDE). These algorithms are especially well suited for multi-scale astronomical images since they increase signal to noise ratio without smearing localized and diffuse objects. The visualization component is based on the extensive tools that we developed for Spitzer Space Telescope's observation planning tool Spot and archive retrieval tool Leopard. It contains many common features, combines images in new and unique ways and interfaces with many astronomy data archives. Both interactive and batch mode processing are incorporated. In the interactive mode, the user can set up simple processing pipelines, and monitor and visualize the resulting images from each step of the processing stream. The system is platform-independent and has an open architecture that allows extensibility by addition of plug-ins. This presentation addresses astronomical applications of traditional topics of IP (image enhancement, image segmentation) as well as emerging new topics like automated image quality assessment (QA) and feature extraction, which have potential for shaping future developments in the field. Our application framework embodies a novel synergistic approach based on integration of image processing, image visualization and image QA (iQA).
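
    PDE-based denoising of the kind the abstract mentions, increasing SNR without smearing localized objects, can be illustrated with the classic Perona-Malik scheme (a standard example, not necessarily the authors' algorithm):

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=30.0, dt=0.2):
    """Perona-Malik diffusion: smooths noise while preserving edges.
    Conduction falls off where the local gradient is large (edge-stopping)."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)     # edge-stopping function
    for _ in range(n_iter):
        # Differences to the four neighbors, with zero-flux borders.
        dn = np.roll(u, 1, axis=0) - u;  dn[0] = 0
        ds = np.roll(u, -1, axis=0) - u; ds[-1] = 0
        de = np.roll(u, -1, axis=1) - u; de[:, -1] = 0
        dw = np.roll(u, 1, axis=1) - u;  dw[:, 0] = 0
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

rng = np.random.default_rng(2)
clean = np.zeros((64, 64)); clean[:, 32:] = 100.0     # sharp vertical edge
noisy = clean + rng.normal(0, 5.0, clean.shape)
smooth = anisotropic_diffusion(noisy)
```

    With `kappa` well below the edge contrast, diffusion is nearly linear in flat regions (noise is averaged away) but effectively stops across the edge, which is the property that suits multi-scale astronomical images.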

  16. 3D seismic image processing for interpretation

    NASA Astrophysics Data System (ADS)

    Wu, Xinming

    Extracting fault, unconformity, and horizon surfaces from a seismic image is useful for interpretation of geologic structures and stratigraphic features. Although interpretation of these surfaces has been automated to some extent by others, significant manual effort is still required for extracting each type of these geologic surfaces. I propose methods to automatically extract all the fault, unconformity, and horizon surfaces from a 3D seismic image. To a large degree, these methods involve only image or array processing, achieved by efficiently solving partial differential equations. For fault interpretation, I propose a linked data structure, simpler than triangle or quad meshes, to represent a fault surface. In this simple data structure, each sample of a fault corresponds to exactly one image sample. Using this linked data structure, I extract complete and intersecting fault surfaces without holes from 3D seismic images, and I use the same structure in subsequent processing to estimate fault slip vectors. I further propose two methods, using precomputed fault surfaces and slips, to undo faulting in seismic images by simultaneously moving fault blocks and the faults themselves. For unconformity interpretation, I first propose a new method to compute an unconformity likelihood image that highlights both the termination areas and the corresponding parallel unconformities and correlative conformities. I then extract unconformity surfaces from the likelihood image and use these surfaces as constraints to more accurately estimate seismic normal vectors that are discontinuous near the unconformities. Finally, I use the estimated normal vectors, with the unconformities as constraints, to compute a flattened image in which seismic reflectors are all flat and vertical gaps correspond to the unconformities. Horizon extraction is straightforward after computing a map of image flattening; we can first extract horizontal slices in the flattened space

  17. A Pipeline Tool for CCD Image Processing

    NASA Astrophysics Data System (ADS)

    Bell, Jon F.; Young, Peter J.; Roberts, William H.; Sebo, Kim M.

    MSSSO is part of a collaboration developing a wide field imaging CCD mosaic (WFI). As part of this project, we have developed a GUI based pipeline tool that is an integrated part of MSSSO's CICADA data acquisition environment and processes CCD FITS images as they are acquired. The tool is also designed to run as a stand alone program to process previously acquired data. IRAF tasks are used as the central engine, including the new NOAO mscred package for processing multi-extension FITS files. The STScI OPUS pipeline environment may be used to manage data and process scheduling. The Motif GUI was developed using SUN Visual Workshop. C++ classes were written to facilitate launching of IRAF and OPUS tasks. While this first version implements calibration processing up to and including flat field corrections, there is scope to extend it to other processing.
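
    The flat-field step such a pipeline performs (via IRAF's mscred in this case) amounts to subtracting the bias and dividing by a unit-mean flat. The frames below are synthetic, for illustration only:

```python
import numpy as np

def calibrate(raw, bias, flat):
    """Standard CCD calibration: subtract bias, divide by the normalized flat.
    A sketch of the flat-field correction the pipeline applies per frame."""
    flat_corr = flat - bias
    flat_norm = flat_corr / flat_corr.mean()   # unit-mean flat field
    return (raw - bias) / flat_norm

# Toy frames: a uniform sky seen through a vignetting pattern.
bias = np.full((8, 8), 100.0)
vignette = np.linspace(0.8, 1.2, 8)[None, :] * np.ones((8, 8))
flat = bias + 5000.0 * vignette
raw = bias + 200.0 * vignette          # true sky level: 200 counts
sky = calibrate(raw, bias, flat)       # vignetting divides out
```

    Real pipelines also handle overscan, dark current, and the multi-extension FITS geometry that mscred manages; those steps are omitted here.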

  18. Thermal Imaging Processes of Polymer Nanocomposite Coatings

    NASA Astrophysics Data System (ADS)

    Meth, Jeffrey

    2015-03-01

    Laser induced thermal imaging (LITI) is a process whereby infrared radiation impinging on a coating on a donor film transfers that coating to a receiving film to produce a pattern. This talk describes how LITI patterning can print color filters for liquid crystal displays, and details the physical processes that are responsible for transferring the nanocomposite coating in a coherent manner that does not degrade its optical properties. Unique features of this process involve heating rates of 10^7 K/s and cooling rates of 10^4 K/s, which implies that not all of the relaxation modes of the polymer are accessed during the imaging process. On the microsecond time scale, the polymer flow is forced by devolatilization of solvents, followed by deformation akin to the constrained blister test, and then fracture caused by differential thermal expansion. The unique combination of disparate physical processes demonstrates the gamut of physics that contribute to advanced material processing in an industrial setting.

  19. Digital-image processing and image analysis of glacier ice

    USGS Publications Warehouse

    Fitzpatrick, Joan J.

    2013-01-01

    This document provides a methodology for extracting grain statistics from 8-bit color and grayscale images of thin sections of glacier ice—a subset of physical properties measurements typically performed on ice cores. This type of analysis is most commonly used to characterize the evolution of ice-crystal size, shape, and intercrystalline spatial relations within a large body of ice sampled by deep ice-coring projects from which paleoclimate records will be developed. However, such information is equally useful for investigating the stress state and physical responses of ice to stresses within a glacier. The methods of analysis presented here go hand-in-hand with the analysis of ice fabrics (aggregate crystal orientations) and, when combined with fabric analysis, provide a powerful method for investigating the dynamic recrystallization and deformation behaviors of bodies of ice in motion. The procedures described in this document compose a step-by-step handbook for a specific image acquisition and data reduction system built in support of U.S. Geological Survey ice analysis projects, but the general methodology can be used with any combination of image processing and analysis software. The specific approaches in this document use the FoveaPro 4 plug-in toolset to Adobe Photoshop CS5 Extended but it can be carried out equally well, though somewhat less conveniently, with software such as the image processing toolbox in MATLAB, Image-Pro Plus, or ImageJ.

  20. Fundamental Concepts of Digital Image Processing

    DOE R&D Accomplishments Database

    Twogood, R. E.

    1983-03-01

    The field of digital image processing has experienced dramatic growth and increasingly widespread applicability in recent years. Fortunately, advances in computer technology have kept pace with the rapid growth in the volume of image data in these and other applications. Digital image processing has become economical in many fields of research and in industrial and military applications. While each application has unique requirements, all are concerned with faster, cheaper, more accurate, and more extensive computation. The trend is toward real-time and interactive operations, where the user of the system obtains preliminary results within a short enough time that the next decision can be made without loss of concentration on the task at hand. An example is the acquisition of two-dimensional (2-D) computer-aided tomography (CAT) images, where a medical decision might be made while the patient is still under observation rather than days later.

  1. Teaching Comprehension Processes Using Magazines, Paperback Novels, and Content Area Textbooks.

    ERIC Educational Resources Information Center

    Nist, Sherrie L.; And Others

    1983-01-01

    Argues that teaching students the process of comprehension and ways to improve their own comprehension helps to develop skills in reluctant or poor readers. Offers teaching ideas that involve a variety of reading materials. (FL)

  2. Image processing of angiograms: A pilot study

    NASA Technical Reports Server (NTRS)

    Larsen, L. E.; Evans, R. A.; Roehm, J. O., Jr.

    1974-01-01

    The technology transfer application this report describes is the result of a pilot study of image-processing methods applied to the image enhancement, coding, and analysis of arteriograms. Angiography is a subspecialty of radiology that employs the introduction of media with high X-ray absorption into arteries in order to study vessel pathology as well as to infer disease of the organs supplied by the vessel in question.

  3. Future projects in pulse image processing

    NASA Astrophysics Data System (ADS)

    Kinser, Jason M.

    1999-03-01

    Pulse-Coupled Neural Networks (PCNNs) have generated considerable interest as image processing tools. Past applications include image segmentation, edge extraction, texture extraction, de-noising, object isolation, foveation and fusion; these do not comprise a complete list of useful applications of the PCNN. Future avenues of research will include level-set analysis, binary (optical) correlators, artificial life simulations, maze running and filter-jet analysis. This presentation will explore these future avenues of PCNN research.
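
    A minimal unit-linking PCNN sketch, simplified from the full Eckhorn-style model, shows the segmentation behavior the abstract refers to. The linking weights, decay constant, and threshold jump below are illustrative choices:

```python
import numpy as np

def pcnn_segment(img, n_iter=10, beta=0.2, alpha_t=0.3, v_t=20.0):
    """Simplified PCNN: neurons fire when internal activity exceeds a
    decaying threshold; linking lets firing spread through similar-intensity
    neighbors. Returns the mask of neurons that fired at least once."""
    f = img.astype(float) / img.max()            # feeding = normalized input
    theta = np.ones_like(f)                      # dynamic threshold
    y = np.zeros_like(f)                         # pulse output
    fired = np.zeros(f.shape, dtype=bool)
    k = np.array([[0.5, 1, 0.5], [1, 0, 1], [0.5, 1, 0.5]])
    for _ in range(n_iter):
        # Linking field: weighted sum of neighboring pulses.
        l = np.zeros_like(f)
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                if di or dj:
                    l += k[di + 1, dj + 1] * np.roll(np.roll(y, di, 0), dj, 1)
        u = f * (1 + beta * l)                   # modulated internal activity
        y = (u > theta).astype(float)
        fired |= y.astype(bool)
        theta = theta * np.exp(-alpha_t) + v_t * y   # decay, jump after firing
    return fired

img = np.zeros((32, 32)); img[8:24, 8:24] = 200.0; img += 5.0
mask = pcnn_segment(img)                         # bright square fires first
```

    Pixels of similar intensity pulse in the same iteration, so firing epochs act as segmentation labels; here the bright square fires within the first few iterations while the dim background does not.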

  4. CCD architecture for spacecraft SAR image processing

    NASA Technical Reports Server (NTRS)

    Arens, W. E.

    1977-01-01

    A real-time synthetic aperture radar (SAR) image processing architecture amenable to future on-board spacecraft applications is currently under development. Using state-of-the-art charge-coupled device (CCD) technology, low cost and power are inherent features. Other characteristics include the ability to reprogram correlation reference functions, correct for range migration, and compensate for antenna beam pointing errors on the spacecraft in real time. The first spaceborne demonstration is scheduled to be flown as an experiment on a 1982 Shuttle imaging radar mission (SIR-B). This paper describes the architecture and implementation characteristics of this initial spaceborne CCD SAR image processor.

  5. Infrared image processing and data analysis

    NASA Astrophysics Data System (ADS)

    Ibarra-Castanedo, C.; González, D.; Klein, M.; Pilla, M.; Vallerand, S.; Maldague, X.

    2004-12-01

    Infrared thermography in nondestructive testing provides images (thermograms) in which zones of interest (defects) sometimes appear only as subtle signatures. In this context, raw images are often not adequate, since most defects will be missed. In other cases a quantitative analysis is needed, for instance for defect detection and characterization. This paper presents various methods of data analysis required for preprocessing and/or processing of images. References from the literature are provided for briefly discussed known methods, while novelties are elaborated in more detail in the text, which also includes experimental results.

  6. Industrial Holography Combined With Image Processing

    NASA Astrophysics Data System (ADS)

    Schorner, J.; Rottenkolber, H.; Roid, W.; Hinsch, K.

    1988-01-01

    Holographic test methods have become a valuable tool for the engineer in research and development. In the field of non-destructive quality control, holographic test equipment is now also accepted for tests within the production line. Producers of aircraft tyres, for example, use holographic tests to back the guarantee on their tyres. Together with image processing, the whole test cycle is automated: defects within the tyre are found automatically and listed on a printout. The power-engine industry uses holographic vibration tests for the optimization of its designs. In the plastics industry, tanks, wheels, seats and fans are tested holographically to find the optimum shape. The automotive industry makes holography a tool for noise reduction. Instant holography and image processing techniques for quantitative analysis have led to economic application of holographic test methods. New developments of holographic units in combination with image processing are presented.

  7. DSP based image processing for retinal prosthesis.

    PubMed

    Parikh, Neha J; Weiland, James D; Humayun, Mark S; Shah, Saloni S; Mohile, Gaurav S

    2004-01-01

    The real-time image processing in a retinal prosthesis consists of the implementation of various image processing algorithms such as edge detection, edge enhancement, and decimation. These computations can have a high level of complexity in real time, and hence the use of digital signal processors (DSPs) for the implementation of such algorithms is proposed here. This application requires DSPs that are highly computationally efficient while working at low power. DSPs offer computational capabilities of hundreds of millions of instructions per second (MIPS) or millions of floating-point operations per second (MFLOPS), with certain processor configurations having low power consumption. The various image processing algorithms, the DSP requirements, and the capabilities of different platforms are discussed in this paper. PMID:17271974
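
    The edge-detection stage named above is typically a small-kernel convolution, well suited to a DSP's multiply-accumulate units. A Sobel sketch (illustrative; the paper does not specify its exact kernel or threshold):

```python
import numpy as np

def sobel_edges(img, thresh=50.0):
    """Sobel gradient magnitude plus a threshold: the kind of per-frame
    edge-detection kernel a retinal-prosthesis DSP would run."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    gx = np.zeros((h, w)); gy = np.zeros((h, w))
    for i in range(3):                 # unrolled 3x3 convolution (no FFT),
        for j in range(3):             # mirroring a DSP's MAC loop structure
            win = pad[i:i+h, j:j+w]
            gx += kx[i, j] * win
            gy += ky[i, j] * win
    mag = np.hypot(gx, gy)
    return mag > thresh

img = np.zeros((16, 16)); img[:, 8:] = 255.0   # toy frame with one edge
edges = sobel_edges(img)
```

    On a fixed-point DSP the same loop would run with integer MACs and a squared-magnitude compare to avoid the square root.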

  8. Three-dimensional image signals: processing methods

    NASA Astrophysics Data System (ADS)

    Schiopu, Paul; Manea, Adrian; Craciun, Anca-Ileana; Craciun, Alexandru

    2010-11-01

    Over the years, extensive studies have been carried out to apply coherent optics methods to real-time processing, communications, and image transmission, especially when a large amount of information needs to be processed, e.g., in high-resolution imaging. Recent progress in data-processing networks and communication systems has considerably increased the capacity of information exchange. We describe the results of a literature survey of processing methods for three-dimensional image signals. All commercially available 3D technologies today are based on stereoscopic viewing. 3D technology was once the exclusive domain of skilled computer-graphics developers with high-end machines and software. Images captured with an advanced 3D digital camera can be displayed on the screen of a 3D digital viewer with or without special glasses; this requires considerable processing power and memory to create and render the complex mix of colors, textures, and virtual lighting and perspective necessary to make figures appear three-dimensional. Also, using a standard digital camera and a technique called phase-shift interferometry, we can capture "digital holograms": holograms that can be stored on a computer and transmitted over conventional networks. We present some methods for processing digital holograms for Internet transmission, together with results.
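
    The phase-shift interferometry step can be sketched with the standard four-step recovery formula; the fringe pattern and parameters below are synthetic stand-ins for real camera frames:

```python
import numpy as np

# Four frames I_k = A + B*cos(phi + k*pi/2), k = 0..3, as captured by the
# camera while the reference beam is stepped in quarter-wave increments.
x = np.linspace(0, 1, 128)
phi_true = 2 * np.pi * x**2            # unknown object phase (synthetic)
A, B = 10.0, 4.0                       # background level and fringe modulation
i0, i1, i2, i3 = (A + B * np.cos(phi_true + k * np.pi / 2) for k in range(4))

# Standard four-step recovery: I3-I1 = 2B sin(phi), I0-I2 = 2B cos(phi),
# so atan2 of the two differences returns the (wrapped) phase.
phi = np.arctan2(i3 - i1, i0 - i2)
```

    `phi` is wrapped to (-pi, pi]; reconstructing the hologram for transmission would add phase unwrapping and numerical propagation, which are omitted here.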

  9. Support Routines for In Situ Image Processing

    NASA Technical Reports Server (NTRS)

    Deen, Robert G.; Pariser, Oleg; Yeates, Matthew C.; Lee, Hyun H.; Lorre, Jean

    2013-01-01

    This software consists of a set of application programs that support ground-based image processing for in situ missions. These programs represent a collection of utility routines that perform miscellaneous functions in the context of the ground data system. Each one fulfills some specific need as determined via operational experience. The most unique aspect to these programs is that they are integrated into the large, in situ image processing system via the PIG (Planetary Image Geometry) library. They work directly with space in situ data, understanding the appropriate image meta-data fields and updating them properly. The programs themselves are completely multimission; all mission dependencies are handled by PIG. This suite of programs consists of: (1) marscahv: Generates a linearized, epipolar-aligned image given a stereo pair of images. These images are optimized for 1-D stereo correlations, (2) marscheckcm: Compares the camera model in an image label with one derived via kinematics modeling on the ground, (3) marschkovl: Checks the overlaps between a list of images in order to determine which might be stereo pairs. This is useful for non-traditional stereo images like long-baseline or those from an articulating arm camera, (4) marscoordtrans: Translates mosaic coordinates from one form into another, (5) marsdispcompare: Checks a Left-Right stereo disparity image against a Right-Left disparity image to ensure they are consistent with each other, (6) marsdispwarp: Takes one image of a stereo pair and warps it through a disparity map to create a synthetic opposite-eye image. For example, a right eye image could be transformed to look like it was taken from the left eye via this program, (7) marsfidfinder: Finds fiducial markers in an image by projecting their approximate location and then using correlation to locate the markers to subpixel accuracy. These fiducial markers are small targets attached to the spacecraft surface. This helps verify, or improve, the

  10. Use of Low-cost 3-D Images in Teaching Gross Anatomy.

    ERIC Educational Resources Information Center

    Richards, Boyd F.; And Others

    1987-01-01

    With advances in computer technology, it has become possible to create three-dimensional (3-D) images of anatomical structures for use in teaching gross anatomy. Reported is a survey of attitudes of 91 first-year medical students toward the use of 3-D images in their anatomy course. Reactions to the 3-D images and suggestions for improvement are…

  11. Processing infrared images of aircraft lapjoints

    NASA Technical Reports Server (NTRS)

    Syed, Hazari; Winfree, William P.; Cramer, K. E.

    1992-01-01

    Techniques for processing IR images of aging aircraft lapjoint data are discussed. Attention is given to a technique for detecting disbonds in aircraft lapjoints which clearly delineates the disbonded region from the bonded regions. The technique is weak on unpainted aircraft skin surfaces, but this can be overcome by using a self-adhering contact sheet. Neural network analysis on raw temperature data has been shown to be an effective tool for visualization of images. Numerical simulation results show the above processing technique to be an effective tool in delineating the disbonds.

  12. Results of precision processing (scene correction) of ERTS-1 images using digital image processing techniques

    NASA Technical Reports Server (NTRS)

    Bernstein, R.

    1973-01-01

    ERTS-1 MSS and RBV data recorded on computer compatible tapes have been analyzed and processed, and preliminary results have been obtained. No degradation of intensity (radiance) information occurred in implementing the geometric correction. The quality and resolution of the digitally processed images are very good, due primarily to the fact that the number of film generations and conversions is reduced to a minimum. Processing times of digitally processed images are about equivalent to those of the NDPF electro-optical processor.

  13. FLIPS: Friendly Lisp Image Processing System

    NASA Astrophysics Data System (ADS)

    Gee, Shirley J.

    1991-08-01

    The Friendly Lisp Image Processing System (FLIPS) is the interface to Advanced Target Detection (ATD), a multi-resolutional image analysis system developed by Hughes in conjunction with the Hughes Research Laboratories. Both menu- and graphics-driven, FLIPS enhances system usability by supporting the interactive nature of research and development. Although much progress has been made, fully automated image understanding technology that is both robust and reliable is not a reality. In situations where highly accurate results are required, skilled human analysts must still verify the findings of these systems. Furthermore, the systems often require processing times several orders of magnitude greater than that needed by veteran personnel to analyze the same image. The purpose of FLIPS is to facilitate the ability of an image analyst to take statistical measurements on digital imagery in a timely fashion, a capability critical in research environments where a large percentage of time is expended in algorithm development. In many cases, this entails minor modifications or code tinkering. Without a well-developed man-machine interface, throughput is unduly constricted. FLIPS provides mechanisms which support rapid prototyping for ATD. This paper examines the ATD/FLIPS system. The philosophy of ATD in addressing image understanding problems is described, and the capabilities of FLIPS are discussed, along with a description of the interaction between ATD and FLIPS. Finally, an overview of current plans for the system is outlined.

  14. Product review: lucis image processing software.

    PubMed

    Johnson, J E

    1999-04-01

    Lucis is a software program that allows the manipulation of images through the process of selective contrast pattern emphasis. Using an image-processing algorithm called Differential Hysteresis Processing (DHP), Lucis extracts and highlights patterns based on variations in image intensity (luminance). The result is that details can be seen that would otherwise be hidden in deep shadow or excessive brightness. The software is contained on a single floppy disk, is easy to install on a PC, simple to use, and runs on Windows 95, Windows 98, and Windows NT operating systems. The cost is $8,500 for a license, but is estimated to save a great deal of money in photographic materials, time, and labor that would have otherwise been spent in the darkroom. Superb images are easily obtained from unstained (no lead or uranium) sections, and stored image files sent to laser printers are of publication quality. The software can be used not only for all types of microscopy, including color fluorescence light microscopy, biological and materials science electron microscopy (TEM and SEM), but will be beneficial in medicine, such as X-ray films (pending approval by the FDA), and in the arts. PMID:10206154

  15. Processing Images of Craters for Spacecraft Navigation

    NASA Technical Reports Server (NTRS)

    Cheng, Yang; Johnson, Andrew E.; Matthies, Larry H.

    2009-01-01

    A crater-detection algorithm has been conceived to enable automation of what, heretofore, have been manual processes for utilizing images of craters on a celestial body as landmarks for navigating a spacecraft flying near or landing on that body. The images are acquired by an electronic camera aboard the spacecraft, then digitized, then processed by the algorithm, which consists mainly of the following steps: 1. Edges in an image are detected and placed in a database. 2. Crater rim edges are selected from the edge database. 3. Edges that belong to the same crater are grouped together. 4. An ellipse is fitted to each group of crater edges. 5. Ellipses are refined directly in the image domain to reduce errors introduced in the detection of edges and fitting of ellipses. 6. The quality of each detected crater is evaluated. It is planned to utilize this algorithm as the basis of a computer program for automated, real-time, onboard processing of crater-image data. Experimental studies have led to the conclusion that this algorithm is capable of a detection rate >93 percent, a false-alarm rate <5 percent, a geometric error <0.5 pixel, and a position error <0.3 pixel.
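Step 4 above, fitting an ellipse to a group of crater-edge points, can be illustrated with a generic direct least-squares conic fit (an illustrative sketch, not necessarily the refinement method used by the algorithm; the synthetic "crater rim" data is invented for the example):

```python
import numpy as np

def fit_conic(xs, ys):
    """Least-squares fit of a conic a*x^2 + b*x*y + c*y^2 + d*x + e*y = 1
    to a group of edge points (a simplified stand-in for step 4)."""
    A = np.column_stack([xs**2, xs * ys, ys**2, xs, ys])
    coeffs, *_ = np.linalg.lstsq(A, np.ones_like(xs), rcond=None)
    return coeffs  # (a, b, c, d, e)

# Synthetic "crater rim": points on a circle of radius 10 centred at (3, -2).
t = np.linspace(0, 2 * np.pi, 50, endpoint=False)
xs, ys = 3 + 10 * np.cos(t), -2 + 10 * np.sin(t)
a, b, c, d, e = fit_conic(xs, ys)

# The conic's centre is where its gradient vanishes: solves to (3, -2) here.
cx, cy = np.linalg.solve([[2 * a, b], [b, 2 * c]], [-d, -e])
print(round(cx, 3), round(cy, 3))
```

Refining the fitted ellipse in the image domain (step 5) would then adjust these parameters against the pixel data directly.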

  16. Onboard Image Processing System for Hyperspectral Sensor

    PubMed Central

    Hihara, Hiroki; Moritani, Kotaro; Inoue, Masao; Hoshi, Yoshihiro; Iwasaki, Akira; Takada, Jun; Inada, Hitomi; Suzuki, Makoto; Seki, Taeko; Ichikawa, Satoshi; Tanii, Jun

    2015-01-01

    Onboard image processing systems for a hyperspectral sensor have been developed in order to maximize image data transmission efficiency for large volume and high speed data downlink capacity. Since more than 100 channels are required for hyperspectral sensors on Earth observation satellites, fast and small-footprint lossless image compression capability is essential for reducing the size and weight of a sensor system. A fast lossless image compression algorithm has been developed, and is implemented in the onboard correction circuitry of sensitivity and linearity of Complementary Metal Oxide Semiconductor (CMOS) sensors in order to maximize the compression ratio. The employed image compression method is based on Fast, Efficient, Lossless Image compression System (FELICS), which is a hierarchical predictive coding method with resolution scaling. To improve FELICS’s performance of image decorrelation and entropy coding, we apply a two-dimensional interpolation prediction and adaptive Golomb-Rice coding. It supports progressive decompression using resolution scaling while still maintaining superior performance measured as speed and complexity. Coding efficiency and compression speed enlarge the effective capacity of signal transmission channels, which leads to reduced onboard hardware by multiplexing sensor signals into a reduced number of compression circuits. The circuitry is embedded into the data formatter of the sensor system without adding size, weight, power consumption, or fabrication cost. PMID:26404281
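The prediction-plus-Golomb-Rice pipeline described above can be shown in miniature (a toy sketch: the previous-sample predictor, the zigzag residual mapping, and the fixed Rice parameter are illustrative choices, not the flight implementation, which uses two-dimensional interpolation prediction and adaptive parameter selection):

```python
def rice_encode(value, k):
    """Golomb-Rice code of a non-negative integer:
    unary-coded quotient, then k-bit binary remainder."""
    q, r = value >> k, value & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b")

def zigzag(residual):
    """Map a signed prediction residual to a non-negative integer."""
    return (residual << 1) if residual >= 0 else ((-residual << 1) - 1)

samples = [100, 102, 101, 105, 104]
# Simple previous-sample prediction (a stand-in for the 2-D predictor).
residuals = [samples[i] - samples[i - 1] for i in range(1, len(samples))]
bits = "".join(rice_encode(zigzag(r), k=2) for r in residuals)
print(bits)
```

Small residuals yield short codes, which is why a good decorrelating predictor directly improves the compression ratio.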

  17. Onboard Image Processing System for Hyperspectral Sensor.

    PubMed

    Hihara, Hiroki; Moritani, Kotaro; Inoue, Masao; Hoshi, Yoshihiro; Iwasaki, Akira; Takada, Jun; Inada, Hitomi; Suzuki, Makoto; Seki, Taeko; Ichikawa, Satoshi; Tanii, Jun

    2015-01-01

    Onboard image processing systems for a hyperspectral sensor have been developed in order to maximize image data transmission efficiency for large volume and high speed data downlink capacity. Since more than 100 channels are required for hyperspectral sensors on Earth observation satellites, fast and small-footprint lossless image compression capability is essential for reducing the size and weight of a sensor system. A fast lossless image compression algorithm has been developed, and is implemented in the onboard correction circuitry of sensitivity and linearity of Complementary Metal Oxide Semiconductor (CMOS) sensors in order to maximize the compression ratio. The employed image compression method is based on Fast, Efficient, Lossless Image compression System (FELICS), which is a hierarchical predictive coding method with resolution scaling. To improve FELICS's performance of image decorrelation and entropy coding, we apply a two-dimensional interpolation prediction and adaptive Golomb-Rice coding. It supports progressive decompression using resolution scaling while still maintaining superior performance measured as speed and complexity. Coding efficiency and compression speed enlarge the effective capacity of signal transmission channels, which leads to reduced onboard hardware by multiplexing sensor signals into a reduced number of compression circuits. The circuitry is embedded into the data formatter of the sensor system without adding size, weight, power consumption, or fabrication cost. PMID:26404281

  18. Enhanced neutron imaging detector using optical processing

    SciTech Connect

    Hutchinson, D.P.; McElhaney, S.A.

    1992-08-01

    Existing neutron imaging detectors have limited count rates due to inherent property and electronic limitations. The popular multiwire proportional counter is limited by gas recombination to a count rate of less than 10^5 n/s over the entire array, and the neutron Anger camera, even though improved with new fiber optic encoding methods, can only achieve 10^6 cps over a limited array. We present a preliminary design for a new type of neutron imaging detector with a resolution of 2--5 mm and a count rate capability of 10^6 cps per pixel element. We propose to combine optical and electronic processing to economically increase the throughput of advanced detector systems while simplifying computing requirements. By placing a scintillator screen ahead of an optical image processor followed by a detector array, a high throughput imaging detector may be constructed.

  19. Feedback regulation of microscopes by image processing.

    PubMed

    Tsukada, Yuki; Hashimoto, Koichi

    2013-05-01

    Computational microscope systems are becoming a major part of imaging biological phenomena, and the development of such systems requires the design of automated regulation of microscopes. An important aspect of automated regulation is feedback regulation, which is the focus of this review. As modern microscope systems become more complex, often with many independent components that must work together, computer control is inevitable since the exact orchestration of parameters and timings for these multiple components is critical to acquire proper images. A number of techniques have been developed for biological imaging to accomplish this. Here, we summarize the basics of computational microscopy for the purpose of building automatically regulated microscopes, focusing on feedback regulation by image processing. These techniques allow high throughput data acquisition while monitoring both short- and long-term dynamic phenomena, which cannot be achieved without an automated system. PMID:23594233
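A minimal example of feedback regulation by image processing is closed-loop autofocus: compute a sharpness metric from each acquired frame and move the focus axis to climb it. The sketch below is illustrative only; the step-edge target, the toy defocus model in `capture`, and the hill-climbing loop are assumptions for the example, not a method from the review:

```python
import numpy as np

def capture(z, z_best=5.0):
    """Stand-in for the camera: a 1-D step-edge target, box-blurred by an
    amount that grows with defocus |z - z_best| (toy optics model)."""
    edge = np.repeat([0.0, 1.0], 64)
    w = 2 * int(abs(z - z_best)) + 1      # odd blur width
    return np.convolve(edge, np.ones(w) / w, mode="same")

def sharpness(img):
    """Focus metric: energy of the first-difference (gradient) signal."""
    return float((np.diff(img) ** 2).sum())

z, step = 0.0, 1.0
for _ in range(20):                        # closed loop: measure, then move
    if sharpness(capture(z + step)) > sharpness(capture(z)):
        z += step                          # metric improved: keep climbing
    else:
        step = -step / 2                   # overshoot: reverse and shrink
print(z)
```

The loop converges on the best-focus position (`z = 5.0` in this model), which is the essential pattern behind more elaborate feedback schemes: image in, actuator command out.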

  20. FITSH: Software Package for Image Processing

    NASA Astrophysics Data System (ADS)

    Pál, András

    2011-11-01

    FITSH provides a standalone environment for analysis of data acquired by imaging astronomical detectors. The package provides utilities both for the full pipeline of subsequent related data processing steps (including image calibration, astrometry, source identification, photometry, differential analysis, low-level arithmetic operations, multiple image combinations, spatial transformations and interpolations, etc.) and for aiding the interpretation of the (mainly photometric and/or astrometric) results. The package also features a consistent implementation of photometry based on image subtraction, point spread function fitting and aperture photometry and provides easy-to-use interfaces for comparisons and for picking the most suitable method for a particular problem. The utilities in the package are built on top of the commonly used UNIX/POSIX shells (hence the name of the package), therefore both frequently used and well-documented tools for such environments can be exploited and managing massive amounts of data is rather convenient.
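Of the photometric methods listed, aperture photometry is the simplest to sketch: sum the flux inside a circular aperture and subtract a background level estimated from a surrounding annulus. This is an illustrative NumPy sketch of the technique itself, not FITSH's implementation, and the synthetic frame is invented for the example:

```python
import numpy as np

def aperture_photometry(img, cx, cy, r_ap, r_in, r_out):
    """Background-subtracted flux in a circular aperture of radius r_ap,
    with the sky level taken as the median of an annulus [r_in, r_out)."""
    yy, xx = np.indices(img.shape)
    d = np.hypot(xx - cx, yy - cy)
    sky = np.median(img[(d >= r_in) & (d < r_out)])
    aperture = d < r_ap
    return float((img[aperture] - sky).sum())

# Synthetic frame: flat sky of 10 counts plus a 100-count point source.
img = np.full((64, 64), 10.0)
img[32, 32] += 100.0
flux = aperture_photometry(img, cx=32, cy=32, r_ap=3, r_in=6, r_out=12)
print(flux)
```

Image-subtraction and PSF-fitting photometry replace the plain sum with more elaborate models, but the background-estimation pattern is the same.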

  1. Simplified labeling process for medical image segmentation.

    PubMed

    Gao, Mingchen; Huang, Junzhou; Huang, Xiaolei; Zhang, Shaoting; Metaxas, Dimitris N

    2012-01-01

    Image segmentation plays a crucial role in many medical imaging applications by automatically locating the regions of interest. Typically, supervised learning based segmentation methods require a large set of accurately labeled training data. However, the labeling process is tedious, time-consuming and sometimes not necessary. We propose a robust logistic regression algorithm to handle label outliers such that doctors do not need to waste time on precisely labeling images for the training set. To validate its effectiveness and efficiency, we conduct carefully designed experiments on cervigram image segmentation in the presence of label outliers. Experimental results show that the proposed robust logistic regression algorithms achieve superior performance compared to previous methods, which validates the benefits of the proposed algorithms. PMID:23286072
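The idea of tolerating label outliers can be sketched with a logistic regression whose per-sample gradient contribution is capped, so that mislabeled points cannot dominate training. This is an illustrative toy, not the authors' algorithm; the clipping rule, the 2-D synthetic data, and the 10% flipped labels are assumptions for the example:

```python
import numpy as np

def train(X, y, lr=0.1, epochs=200, cap=0.7):
    """Logistic regression by gradient descent, with each sample's
    residual clipped so label outliers have bounded influence."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        resid = np.clip(p - y, -cap, cap)   # cap outlier influence
        w -= lr * X.T @ resid / len(y)
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # true boundary: x0 + x1 = 0
y[:20] = 1 - y[:20]                         # 10% label outliers
w = train(np.hstack([X, np.ones((200, 1))]), y)

pred = X @ w[:2] + w[2] > 0
clean = X[:, 0] + X[:, 1] > 0
acc = (pred == clean).mean()                # accuracy against clean labels
print(acc)
```

Despite the corrupted training labels, the learned boundary still agrees closely with the clean labels, which is the practical payoff claimed for robust formulations.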

  2. MATHEMATICAL METHODS IN MEDICAL IMAGE PROCESSING

    PubMed Central

    ANGENENT, SIGURD; PICHON, ERIC; TANNENBAUM, ALLEN

    2013-01-01

    In this paper, we describe some central mathematical problems in medical imaging. The subject has been undergoing rapid changes driven by better hardware and software. Much of the software is based on novel methods utilizing geometric partial differential equations in conjunction with standard signal/image processing techniques as well as computer graphics facilitating man/machine interactions. As part of this enterprise, researchers have been trying to base biomedical engineering principles on rigorous mathematical foundations for the development of software methods to be integrated into complete therapy delivery systems. These systems support the more effective delivery of many image-guided procedures such as radiation therapy, biopsy, and minimally invasive surgery. We will show how mathematics may impact some of the main problems in this area, including image enhancement, registration, and segmentation. PMID:23645963

  3. Mariner 9 - Image processing and products.

    NASA Technical Reports Server (NTRS)

    Levinthal, E. C.; Green, W. B.; Cutts, J. A.; Jahelka, E. D.; Johansen, R. A.; Sander, M. J.; Seidman, J. B.; Young, A. T.; Soderblom, L. A.

    1972-01-01

    The purpose of this paper is to describe the system for the display, processing, and production of image data products created to support the Mariner 9 Television Experiment. Of necessity, the system was large in order to respond to the needs of a large team of scientists with a broad scope of experimental objectives. The desire to generate processed data products as rapidly as possible to take advantage of adaptive planning during the mission, coupled with the complexities introduced by the nature of the vidicon camera, greatly increased the scale of the ground image processing effort. This paper describes the systems that carried out the processes and delivered the products necessary for real-time and near-real-time analyses. References are made to the computer algorithms used for the different levels of decalibration and analysis.

  4. Mariner 9 - Image processing and products.

    NASA Technical Reports Server (NTRS)

    Levinthal, E. C.; Green, W. B.; Cutts, J. A.; Jahelka, E. D.; Johansen, R. A.; Sander, M. J.; Seidman, J. B.; Young, A. T.; Soderblom, L. A.

    1973-01-01

    The purpose of this paper is to describe the system for the display, processing, and production of image-data products created to support the Mariner 9 Television Experiment. Of necessity, the system was large in order to respond to the needs of a large team of scientists with a broad scope of experimental objectives. The desire to generate processed data products as rapidly as possible, coupled with the complexities introduced by the nature of the vidicon camera, greatly increased the scale of the ground-image processing effort. This paper describes the systems that carried out the processes and delivered the products necessary for real-time and near-real-time analyses. References are made to the computer algorithms used for the different levels of decalibration and analysis.

  5. Mariner 9-Image processing and products

    USGS Publications Warehouse

    Levinthal, E.C.; Green, W.B.; Cutts, J.A.; Jahelka, E.D.; Johansen, R.A.; Sander, M.J.; Seidman, J.B.; Young, A.T.; Soderblom, L.A.

    1973-01-01

    The purpose of this paper is to describe the system for the display, processing, and production of image-data products created to support the Mariner 9 Television Experiment. Of necessity, the system was large in order to respond to the needs of a large team of scientists with a broad scope of experimental objectives. The desire to generate processed data products as rapidly as possible to take advantage of adaptive planning during the mission, coupled with the complexities introduced by the nature of the vidicon camera, greatly increased the scale of the ground-image processing effort. This paper describes the systems that carried out the processes and delivered the products necessary for real-time and near-real-time analyses. References are made to the computer algorithms used for the different levels of decalibration and analysis. © 1973.

  6. Web-based document image processing

    NASA Astrophysics Data System (ADS)

    Walker, Frank L.; Thoma, George R.

    1999-12-01

    Increasing numbers of research libraries are turning to the Internet for electronic interlibrary loan and for document delivery to patrons. This has been made possible through the widespread adoption of software such as Ariel and DocView. Ariel, a product of the Research Libraries Group, converts paper-based documents to monochrome bitmapped images and delivers them over the Internet. The National Library of Medicine's DocView is primarily designed for library patrons. Although libraries and their patrons are beginning to reap the benefits of this new technology, barriers exist, e.g., differences in image file format, that lead to difficulties in the use of library document information. To research how to overcome such barriers, the Communications Engineering Branch of the Lister Hill National Center for Biomedical Communications, an R and D division of NLM, has developed a web site called the DocMorph Server. This is part of an ongoing intramural R and D program in document imaging that has spanned many aspects of electronic document conversion and preservation, Internet document transmission and document usage. The DocMorph Server Web site is designed to fill two roles. First, in a role that will benefit both libraries and their patrons, it allows Internet users to upload scanned image files for conversion to alternative formats, thereby enabling wider delivery and easier usage of library document information. Second, the DocMorph Server provides the design team an active test bed for evaluating the effectiveness and utility of new document image processing algorithms and functions, so that they may be evaluated for possible inclusion in other image processing software products being developed at NLM or elsewhere. This paper describes the design of the prototype DocMorph Server and the image processing functions being implemented on it.

  7. Digital image processing of vascular angiograms

    NASA Technical Reports Server (NTRS)

    Selzer, R. H.; Beckenbach, E. S.; Blankenhorn, D. H.; Crawford, D. W.; Brooks, S. H.

    1975-01-01

    The paper discusses the estimation of the degree of atherosclerosis in the human femoral artery through the use of a digital image processing system for vascular angiograms. The film digitizer uses an electronic image dissector camera to scan the angiogram and convert the recorded optical density information into a numerical format. Another processing step involves locating the vessel edges from the digital image. The computer has been programmed to estimate vessel abnormality through a series of measurements, some derived primarily from the vessel edge information and others from optical density variations within the lumen shadow. These measurements are combined into an atherosclerosis index, which is found in a post-mortem study to correlate well with both visual and chemical estimates of atherosclerotic disease.

  8. Progressive band processing for hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Schultz, Robert C.

    Hyperspectral imaging has emerged as an image processing technique in many applications. The data is called hyperspectral mainly because of the massive amount of information provided by the hundreds of spectral bands that can be used for data analysis. However, due to very high band-to-band correlation, much information may also be redundant. Consequently, how to effectively and best utilize such rich spectral information becomes very challenging. One general approach is data dimensionality reduction, which can be performed by data compression techniques, such as data transforms, and data reduction techniques, such as band selection. This dissertation presents a new area in hyperspectral imaging, to be called progressive hyperspectral imaging, which has not been explored in the past. Specifically, it derives a new theory, called Progressive Band Processing (PBP) of hyperspectral data, that can significantly reduce computing time and can also be realized in real-time. It is particularly suited for application areas such as hyperspectral data communications and transmission, where data can be communicated and transmitted progressively through spectral or satellite channels with limited data storage. Most importantly, PBP allows users to screen preliminary results before deciding to continue with processing the complete data set. These advantages benefit users of hyperspectral data by reducing processing time and increasing the timeliness of crucial decisions made based on the data, such as identifying key intelligence information when a required response time is short.
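The core idea of PBP, updating an analysis band by band so that preliminary results are available before all bands arrive, can be sketched with a running per-pixel anomaly score. This is an illustrative recursion, not the dissertation's derivation; the diagonal (per-band) normalization and the synthetic cube are assumptions for the example:

```python
import numpy as np

rng = np.random.default_rng(2)
n_pixels, n_bands = 100, 30
cube = rng.normal(0.0, 1.0, size=(n_bands, n_pixels))
cube[:, 7] += 3.0                 # pixel 7 is anomalous in every band

score = np.zeros(n_pixels)
for b in range(n_bands):          # bands arrive one at a time
    band = cube[b]
    mu, sigma = band.mean(), band.std()
    score += ((band - mu) / sigma) ** 2   # accumulate per-band deviation
    if b == 9:                    # preliminary screening after 10 bands
        prelim = int(score.argmax())

print(prelim, int(score.argmax()))
```

The preliminary answer after 10 bands already matches the full-cube answer, which is exactly the early-screening benefit the abstract describes: a user could stop (or act) without waiting for the remaining bands.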

  9. Stochastic processes, estimation theory and image enhancement

    NASA Technical Reports Server (NTRS)

    Assefi, T.

    1978-01-01

    An introductory account of stochastic processes, estimation theory, and image enhancement is presented. The book is primarily intended for first-year graduate students and practicing engineers and scientists whose work requires an acquaintance with the theory. Fundamental concepts of probability were reviewed that are required to support the main topics. The appendices discuss the remaining mathematical background.

  10. Improving Synthetic Aperture Image by Image Compounding in Beamforming Process

    NASA Astrophysics Data System (ADS)

    Martínez-Graullera, Oscar; Higuti, Ricardo T.; Martín, Carlos J.; Ullate, Luis. G.; Romero, David; Parrilla, Montserrat

    2011-06-01

    In this work, signal processing techniques are used to improve the quality of images based on multi-element synthetic aperture techniques. Using several apodization functions to obtain different side lobe distributions, a polarity function and a threshold criterion are used to develop an image compounding technique. The spatial diversity is increased using an additional array, which generates complementary information about the defects, improving the results of the proposed algorithm and producing high resolution and contrast images. The inspection of isotropic plate-like structures using linear arrays and Lamb waves is presented. Experimental results are shown for a 1-mm-thick isotropic aluminum plate with artificial defects using linear arrays formed by 30 piezoelectric elements, with the low dispersion symmetric mode S0 at the frequency of 330 kHz.
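The compounding principle, combining images formed with different apodizations so that side lobes (which move with the apodization) are suppressed while the main lobe (which does not) survives, can be sketched as a per-pixel minimum across the set. This is a simplified stand-in for the paper's polarity-and-threshold rule, and the two synthetic envelope images are invented for the example:

```python
import numpy as np

# Two synthetic envelope-detected images of the same reflector: the main
# lobe (index 50) is common to both, the side lobes move with apodization.
x = np.arange(100)
main_lobe = np.exp(-0.5 * ((x - 50) / 2.0) ** 2)
img_a = main_lobe + 0.3 * (x == 40)   # side lobe at 40 with apodization A
img_b = main_lobe + 0.2 * (x == 60)   # side lobe at 60 with apodization B

# Keep only features present in every image of the set.
compound = np.minimum(img_a, img_b)
print(compound[50], compound[40], compound[60])
```

The main lobe survives at full amplitude while both side lobes are driven to the noise floor, which is the contrast improvement the compounding step is after.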

  11. Limiting liability via high resolution image processing

    SciTech Connect

    Greenwade, L.E.; Overlin, T.K.

    1996-12-31

    The utilization of high resolution image processing allows forensic analysts and visualization scientists to assist detectives by enhancing field photographs, and by providing the tools and training to increase the quality and usability of field photos. Through the use of digitized photographs and computerized enhancement software, field evidence can be obtained and processed as "evidence ready", even in poor lighting and shadowed conditions or darkened rooms. These images, which are most often unusable when taken with standard camera equipment, can be shot in the worst of photographic conditions and be processed as usable evidence. Visualization scientists have taken the use of digital photographic image processing and moved the process of crime scene photos into the technology age. The use of high resolution technology will assist law enforcement in making better use of crime scene photography and positive identification of prints. Valuable court room and investigation time can be saved and better served by this accurate, performance based process. Inconclusive evidence does not lead to convictions. Enhancement of the photographic capability helps solve a major problem with crime scene photos that, if taken with standard equipment and without the benefit of enhancement software, would be inconclusive, allowing guilty parties to go free for lack of evidence.

  12. Visual parameter optimisation for biomedical image processing

    PubMed Central

    2015-01-01

    Background Biomedical image processing methods require users to optimise input parameters to ensure high-quality output. This presents two challenges. First, it is difficult to optimise multiple input parameters for multiple input images. Second, it is difficult to achieve an understanding of underlying algorithms, in particular, relationships between input and output. Results We present a visualisation method that transforms users' ability to understand algorithm behaviour by integrating input and output, and by supporting exploration of their relationships. We discuss its application to a colour deconvolution technique for stained histology images and show how it enabled a domain expert to identify suitable parameter values for the deconvolution of two types of images, and metrics to quantify deconvolution performance. It also enabled a breakthrough in understanding by invalidating an underlying assumption about the algorithm. Conclusions The visualisation method presented here provides analysis capability for multiple inputs and outputs in biomedical image processing that is not supported by previous analysis software. The analysis supported by our method is not feasible with conventional trial-and-error approaches. PMID:26329538

  13. Processing Infrared Images For Fire Management Applications

    NASA Astrophysics Data System (ADS)

    Warren, John R.; Pratt, William K.

    1981-12-01

    The USDA Forest Service has used airborne infrared systems for forest fire detection and mapping for many years. The transfer of the images from plane to ground and the transposition of fire spots and perimeters to maps has been performed manually. A new system has been developed which uses digital image processing, transmission, and storage. Interactive graphics, high resolution color display, calculations, and computer model compatibility are featured in the system. Images are acquired by an IR line scanner and converted to 1024 x 1024 x 8 bit frames for transmission to the ground at a 1.544 Mbit/s rate over a 14.7 GHz carrier. Individual frames are received and stored, then transferred to a solid state memory to refresh the display at a conventional 30 frames per second rate. Line length and area calculations, false color assignment, X-Y scaling, and image enhancement are available. Fire spread can be calculated for display and fire perimeters plotted on maps. The performance requirements, basic system, and image processing will be described.
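The false-color assignment mentioned above is, at its simplest, a per-pixel lookup from scalar IR intensity to a display color. The thresholds and palette below are invented for illustration, not the 1981 system's actual tables:

```python
import numpy as np

def false_color(frame, thresholds=(0.3, 0.6, 0.9)):
    """Map a normalized IR frame to RGB via threshold bins:
    cool=blue, warm=green, hot=yellow, fire=red (illustrative palette)."""
    palette = np.array([[0, 0, 255],
                        [0, 255, 0],
                        [255, 255, 0],
                        [255, 0, 0]], dtype=np.uint8)
    idx = np.digitize(frame, thresholds)   # bin index per pixel
    return palette[idx]

frame = np.array([[0.10, 0.50],
                  [0.70, 0.95]])
rgb = false_color(frame)
print(rgb[1, 1])   # hottest pixel maps to the last palette entry
```

A real system would derive the thresholds from calibrated temperature, but the lookup structure is the same.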

  14. Subband/transform functions for image processing

    NASA Technical Reports Server (NTRS)

    Glover, Daniel

    1993-01-01

    Functions for image data processing written for use with the MATLAB(TM) software package are presented. These functions provide the capability to transform image data with block transformations (such as the Walsh Hadamard) and to produce spatial frequency subbands of the transformed data. Block transforms are equivalent to simple subband systems. The transform coefficients are reordered using a simple permutation to give subbands. The low frequency subband is a low resolution version of the original image, while the higher frequency subbands contain edge information. The transform functions can be cascaded to provide further decomposition into more subbands. If the cascade is applied to all four of the first stage subbands (in the case of a four band decomposition), then a uniform structure of sixteen bands is obtained. If the cascade is applied only to the low frequency subband, an octave structure of seven bands results. Functions for the inverse transforms are also given. These functions can be used for image data compression systems. The transforms do not in themselves produce data compression, but prepare the data for quantization and compression. Sample quantization functions for subbands are also given. A typical compression approach is to subband the image data, quantize it, then use statistical coding (e.g., run-length coding followed by Huffman coding) for compression. Contour plots of image data and subbanded data are shown.
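The single-stage decomposition described, a 2x2 block transform whose reordered coefficients form four subbands, can be sketched with the 2x2 Walsh-Hadamard transform. This is an illustrative NumPy port of the idea, not the MATLAB functions themselves:

```python
import numpy as np

def wh_subbands(img):
    """One-stage 2x2 Walsh-Hadamard subband split.
    Returns (LL, LH, HL, HH): low-pass plus three detail subbands."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]   # 2x2 block corners
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4   # low-pass: half-resolution version of img
    lh = (a - b + c - d) / 4   # horizontal detail (column differences)
    hl = (a + b - c - d) / 4   # vertical detail (row differences)
    hh = (a - b - c + d) / 4   # diagonal detail
    return ll, lh, hl, hh

img = np.arange(16, dtype=float).reshape(4, 4)
ll, lh, hl, hh = wh_subbands(img)
print(ll)
```

Cascading `wh_subbands` on `ll` alone gives the octave structure described in the text; applying it to all four outputs gives the uniform sixteen-band structure.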

  15. Remote online processing of multispectral image data

    NASA Astrophysics Data System (ADS)

    Groh, Christine; Rothe, Hendrik

    2005-10-01

    Within the scope of this paper, a compact and economical data acquisition system for multispectral images is described. It consists of a CCD camera and a liquid crystal tunable filter, together with an associated concept for data processing. Despite their limited functionality (e.g. regarding calibration) in comparison with commercial systems such as AVIRIS, these upcoming compact multispectral camera systems can be advantageous in many applications. Additional benefit can be derived by adding online data processing. In order to maintain the system's low weight and price, this work proposes to separate the data acquisition and processing modules and to transmit pre-processed camera data online to a stationary high-performance computer for further processing. The inevitable data transmission has to be optimised because of bandwidth limitations. All of these considerations hold especially for applications involving mini-unmanned-aerial-vehicles (mini-UAVs): due to their limited payload, the use of a lightweight, compact camera system is of particular importance. This work emphasises the optimal software interface between pre-processed data (from the camera system), transmitted data (regarding small bandwidth), and post-processed data (on the high-performance computer). Discussed parameters are pre-processing algorithms, channel bandwidth, and the resulting accuracy in the classification of multispectral image data. The benchmarked pre-processing algorithms include diagnostic statistics, tests of internal determination coefficients, as well as loss-free and lossy data compression methods. The resulting classification precision is computed in comparison to a classification performed on the original image dataset.
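The bandwidth trade-off driving the design above can be made concrete with a small estimate. All parameters here (31 bands, 12-bit depth, a 1 Mbit/s downlink, a 10:1 lossy compression ratio) are hypothetical, chosen only to illustrate why onboard pre-processing pays off:

```python
# Illustrative transmission-time estimate for one multispectral cube.
# Every parameter below is a hypothetical example, not from the paper.
bands, height, width, bits = 31, 512, 512, 12
raw_bits = bands * height * width * bits   # full, uncompressed cube
link = 1.0e6                               # downlink, bits per second
t_raw = raw_bits / link                    # send everything raw
t_comp = raw_bits / 10 / link              # after 10:1 onboard compression
print(round(t_raw, 1), round(t_comp, 1))   # ~minutes vs ~seconds per cube
```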

  16. Color Imaging management in film processing

    NASA Astrophysics Data System (ADS)

    Tremeau, Alain; Konik, Hubert; Colantoni, Philippe

    2003-12-01

    The latest research projects in the LIGIV laboratory concern the capture, processing, archiving and display of color images, taking into account the trichromatic nature of the Human Visual System (HVS). One of these projects addresses digital cinematographic film sequences of high resolution and dynamic range. This project aims to optimize the use of content for post-production operators and for the end user. The studies presented in this paper address the use of metadata to optimise the consumption of video content on a device of the user's choice, independent of the nature of the equipment that captured the content. Optimising consumption includes enhancing the quality of image reconstruction on a display. Another part of this project addresses the content-based adaptation of image display. The main focus is on Region of Interest (ROI) operations, based on the ROI concepts of MPEG-7. The aim of this second part is to characterize and ensure the conditions of display even if the display device or display medium changes. This requires, firstly, the definition of a reference color space and of bi-directional color transformations for each peripheral device (camera, display, film recorder, etc.). The complicating factor is that different devices have different color gamuts, depending on the chromaticity of their primaries and the ambient illumination under which they are viewed. To match the displayed image to the intended appearance, all kinds of production metadata (camera specification, camera colour primaries, lighting conditions) should be associated with the film material. Metadata and content together build rich content. The author is assumed to specify conditions as known from the digital graphic arts. To control image pre-processing and post-processing, these specifications should be contained in the film's metadata. The specifications are related to ICC profiles but must additionally consider mesopic viewing conditions.
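A reference color space with bi-directional device transforms is the core of the pipeline described above. As a minimal sketch of one such transform (not the paper's own pipeline), the standard sRGB-to-CIE-XYZ conversion applies a per-channel linearization followed by a fixed 3x3 primary matrix:

```python
import numpy as np

# Standard sRGB (D65) to CIE XYZ conversion: undo the display gamma,
# then apply the 3x3 matrix defined by the sRGB primaries.
M = np.array([[0.4124, 0.3576, 0.1805],
              [0.2126, 0.7152, 0.0722],
              [0.0193, 0.1192, 0.9505]])

def srgb_to_xyz(rgb):
    """Convert one sRGB triple (channels in [0, 1]) to CIE XYZ."""
    rgb = np.asarray(rgb, dtype=float)
    # piecewise sRGB electro-optical transfer function
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    return M @ lin

x, y, z = srgb_to_xyz([1.0, 1.0, 1.0])  # white: Y (luminance) is 1.0
```

The inverse direction uses the matrix inverse and the inverse transfer function; handling gamut mismatch and the mesopic viewing conditions mentioned above requires appearance modeling beyond this single matrix.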

  17. Bitplane Image Coding With Parallel Coefficient Processing.

    PubMed

    Auli-Llinas, Francesc; Enfedaque, Pablo; Moure, Juan C; Sanchez, Victor

    2016-01-01

    Image coding systems have traditionally been tailored for multiple instruction, multiple data (MIMD) computing. In general, they partition the (transformed) image into codeblocks that can be coded in the cores of MIMD-based processors. Each core executes a sequential flow of instructions to process the coefficients in the codeblock, independently and asynchronously from the other cores. Bitplane coding is a common strategy to code such data, and most of its mechanisms require sequential processing of the coefficients. Recent years have seen the rise of processing accelerators with enhanced computational performance and power efficiency whose architecture is mainly based on the single instruction, multiple data (SIMD) principle. SIMD computing refers to the execution of the same instruction on multiple data in a lockstep, synchronous way. Unfortunately, current bitplane coding strategies cannot fully profit from such processors due to the inherently sequential nature of the coding task. This paper presents bitplane image coding with parallel coefficient (BPC-PaCo) processing, a coding method that can process many coefficients within a codeblock in parallel and synchronously. To this end, the scanning order, the context formation, the probability model, and the arithmetic coder of the coding engine have been reformulated. The experimental results suggest that the penalty in coding performance of BPC-PaCo with respect to the traditional strategies is almost negligible. PMID:26441420
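BPC-PaCo itself re-formulates the scanning order, context formation, probability model, and arithmetic coder; those are not reproduced here. As a much simpler sketch of the data layout it operates on, the bitplane decomposition of a codeblock can be computed for all coefficients in lockstep (SIMD-style), here vectorized with NumPy:

```python
import numpy as np

def to_bitplanes(coeffs, nbits=8):
    """Split a codeblock of integer coefficients into a sign plane plus
    magnitude bitplanes, most significant plane first. Every coefficient
    is processed in lockstep, not one at a time."""
    signs = (coeffs < 0).astype(np.uint8)
    mags = np.abs(coeffs).astype(np.uint32)
    planes = [((mags >> p) & 1).astype(np.uint8)
              for p in range(nbits - 1, -1, -1)]
    return signs, planes

def from_bitplanes(signs, planes):
    """Rebuild the coefficients from the sign and magnitude planes."""
    mags = np.zeros_like(planes[0], dtype=np.int64)
    for plane in planes:           # MSB first: shift in one bit per plane
        mags = (mags << 1) | plane
    return np.where(signs == 1, -mags, mags)
```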

  18. Image processing via VLSI: A concept paper

    NASA Technical Reports Server (NTRS)

    Nathan, R.

    1982-01-01

    Implementing specific image processing algorithms via very large scale integrated (VLSI) systems offers a potent solution to the problem of handling high data rates. Two algorithms stand out as being particularly critical -- geometric map transformation and filtering or correlation. These two functions form the basis for data calibration, registration and mosaicking. VLSI presents itself as an inexpensive ancillary function to be added to almost any general-purpose computer, and if the geometry and filter algorithms are implemented in VLSI, the processing-rate bottleneck would be significantly relieved. A development effort is described that identifies the image processing functions limiting present systems with respect to future throughput needs, translates these functions into algorithms, implements them via VLSI technology, and interfaces the hardware to a general-purpose digital computer.
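Filtering or correlation, one of the two critical functions named above, reduces to a simple inner loop that a dedicated VLSI stage would hard-wire. A minimal software sketch of direct 2-D correlation (valid region only, no padding) makes the operation concrete:

```python
import numpy as np

def correlate2d(img, kernel):
    """Direct 2-D correlation over the valid region -- the multiply-
    accumulate loop a dedicated VLSI filter stage would implement in
    hardware, written out in software for clarity."""
    ih, iw = img.shape
    kh, kw = kernel.shape
    oh, ow = ih - kh + 1, iw - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# 3x3 box filter as a smoothing example
box = np.ones((3, 3)) / 9.0
```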

  19. [Digital thoracic radiology: devices, image processing, limits].

    PubMed

    Frija, J; de Géry, S; Lallouet, F; Guermazi, A; Zagdanski, A M; De Kerviler, E

    2001-09-01

    In the first part, the different techniques of digital thoracic radiography are described. Computed radiography with phosphor plates, being the most widely commercialized, receives the greatest emphasis, but the other detectors are also described: the selenium-coated drum, direct digital radiography with selenium detectors, indirect flat-panel detectors, and a system with four high-resolution CCD cameras. In the second part, the most important image processing techniques are discussed: gradation curves, unsharp mask processing, the MUSICA system, dynamic range compression or reduction, and dual-energy subtraction. In the last part, the advantages and drawbacks of computed thoracic radiography are emphasized. The most important are the consistently good quality of the images and the possibilities of image processing. PMID:11567193
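Unsharp mask processing, one of the techniques listed, has a simple generic form: add back a scaled high-pass component, output = input + gain x (input - blurred). The sketch below uses a 3x3 box blur purely for illustration; clinical systems use carefully tuned low-pass kernels and gains:

```python
import numpy as np

def unsharp_mask(img, gain=1.0):
    """Generic unsharp masking: boost edges by adding a scaled copy of
    the high-pass component (original minus low-pass). A 3x3 box blur
    stands in here for the low-pass filter."""
    img = img.astype(float)
    padded = np.pad(img, 1, mode='edge')        # replicate border pixels
    blurred = sum(padded[i:i + img.shape[0], j:j + img.shape[1]]
                  for i in range(3) for j in range(3)) / 9.0
    return img + gain * (img - blurred)
```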

  20. EOS image data processing system definition study

    NASA Technical Reports Server (NTRS)

    Gilbert, J.; Honikman, T.; Mcmahon, E.; Miller, E.; Pietrzak, L.; Yorsz, W.

    1973-01-01

    The Image Processing System (IPS) requirements and configuration are defined for the NASA-sponsored advanced technology Earth Observatory System (EOS). The scope included investigation and definition of IPS operational, functional, and product requirements, considering overall system constraints and interfaces (sensor, etc.). The scope also included investigation of the technical feasibility and definition of a point design reflecting system requirements. The design phase required a survey of present and projected technology related to general- and special-purpose processors, high-density digital tape recorders, and image recorders.

  1. Medical imaging education in biomedical engineering curriculum: courseware development and application through a hybrid teaching model.

    PubMed

    Zhao, Weizhao; Li, Xiping; Chen, Hairong; Manns, Fabrice

    2012-01-01

    Medical imaging is a key training component in Biomedical Engineering programs. Medical imaging education is interdisciplinary, involving physics, mathematics, chemistry, electrical engineering, computer engineering, and applications in biology and medicine. Seeking an efficient teaching method for instructors and an effective learning environment for students has long been a goal of medical imaging education. With the support of NSF grants, we developed the medical imaging teaching software (MITS) and the associated dynamic assessment tracking system (DATS). The MITS/DATS system has been applied to junior and senior medical imaging classes through a hybrid teaching model. The results show that students' learning gains improved, particularly in concept understanding and simulation project completion. The results also indicate disparities in subjective perception between junior and senior classes. Three institutions are collaborating to expand the courseware system and plan to apply it to different class settings. PMID:23367069

  2. Using Photographic Images as an Interactive Online Teaching Strategy

    ERIC Educational Resources Information Center

    Perry, Beth

    2006-01-01

    Teaching via distance requires inventive instructional strategies to facilitate an optimum learning experience. This qualitative research study evaluated the effect of one unique online teaching strategy called "photovoice" [Wang, C., & Burris, M. (1997). "Photovoice: Concept, methodology, and use for participatory needs assessment." "Health…

  3. The Graphic Novel Classroom: POWerful Teaching and Learning with Images

    ERIC Educational Resources Information Center

    Bakis, Maureen

    2011-01-01

    Could you use a superhero to teach reading, writing, critical thinking, and problem solving? While seeking the answer, secondary language arts teacher Maureen Bakis discovered a powerful pedagogy that teaches those skills and more. The amazingly successful results prompted her to write this practical guide that shows middle and high school…

  4. Electronics Signal Processing for Medical Imaging

    NASA Astrophysics Data System (ADS)

    Turchetta, Renato

    This paper describes the way the signal coming from a radiation detector is conditioned and processed to produce images useful for medical applications. First, the small signal produced by the radiation is processed by analogue electronics specifically designed to yield a good signal-to-noise ratio. The optimised analogue signal produced at this stage can then be processed and transformed into digital information that is eventually stored in a computer, where it can be further processed as required. After an introduction to the general requirements of the processing electronics, we review the basic building blocks that process the 'tiny' analogue signal coming from a radiation detector. In particular, we analyse how the signal-to-noise ratio of the electronics can be optimised. Some exercises, developed in the tutorial, will help in understanding this fundamental part. The blocks needed to process the analogue signal and transform it into a digital code are then described. A description of electronics systems used for medical imaging concludes the lecture.
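One concrete piece of the analogue-to-digital chain described above is the resolution/noise trade-off of the quantizer itself: for a full-scale sine wave, an ideal N-bit converter gives a signal-to-noise ratio of about 6.02N + 1.76 dB. This rule (standard ADC theory, not specific to this lecture) can be checked empirically:

```python
import numpy as np

# Empirical check of the ideal ADC rule SNR ~= 6.02*N + 1.76 dB for a
# full-scale sine wave quantized to N bits over the range [-1, 1].
N = 8
t = np.arange(100000)
x = np.sin(2 * np.pi * 0.01234567 * t)  # incommensurate test frequency
q = 2.0 / (2 ** N)                      # quantization step size
xq = np.round(x / q) * q                # uniform mid-tread quantizer
noise = xq - x                          # quantization error
snr_db = 10 * np.log10(np.mean(x ** 2) / np.mean(noise ** 2))
# snr_db lands close to 6.02*8 + 1.76 = 49.9 dB
```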

  5. Comparative positioning of ships on the basis of neural processing of digital images

    NASA Astrophysics Data System (ADS)

    Stateczny, A.

    2003-04-01

    Satellite and radar systems have been the main information sources in marine navigation in recent years. Apart from its commonly known anti-collision functions, the marine navigational radar constitutes the basis for a future comparative system of ship positioning. The sonar is an additional source of image information in the system. In this way, the data cover the ship's entire measuring area. The system of comparative navigation is an attractive alternative to satellite navigation due to its autonomy and independence from external appliances. The methods of analytic comparison of digitally recorded images applied so far are based on complex and time-consuming calculation algorithms. A new approach in comparative navigation is the application of artificial neural networks for plotting the ship's position. In the positioning process, previously registered images can be used, together with their positions plotted, for instance, by means of the GPS system or by geodetic methods. The teaching sequence consists of the registered images correlated with positions; it is prepared in advance and can span any length of time. After the process of teaching the network is completed, the dynamically registered images are fed to the network input as they arrive, and a position is interpolated based on the images recognized as closest to the image analyzed. A merit of this method is that the network is taught with real images, including their disturbances and distortions, so the teaching sequence contains images analogous to those that will be used in practice. While the system is working, the response of the network (plotting the ship's position) is almost immediate. A basic problem of this method is the need for prior registration of numerous real images in various hydrometeorological conditions. The registered images should be subjected to digital processing, to compression in particular.
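The core comparative step can be sketched without the neural network: score the current image against the registered database and interpolate a position from the closest matches. This toy version uses mean squared error where the paper uses a trained network, and all names are illustrative:

```python
import numpy as np

def interpolate_position(query, db_images, db_positions, k=2):
    """Toy comparative positioning: score the query image against every
    registered image (MSE here; the paper uses a neural network) and
    inverse-error-interpolate the position from the k closest matches."""
    errs = np.array([np.mean((query - im) ** 2) for im in db_images])
    nearest = np.argsort(errs)[:k]
    weights = 1.0 / (errs[nearest] + 1e-12)  # guard against exact matches
    weights /= weights.sum()
    return weights @ np.asarray(db_positions, dtype=float)[nearest]
```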

  6. Computer image processing in marine resource exploration

    NASA Technical Reports Server (NTRS)

    Paluzzi, P. R.; Normark, W. R.; Hess, G. R.; Hess, H. D.; Cruickshank, M. J.

    1976-01-01

    Pictographic data or imagery is commonly used in marine exploration. Pre-existing image processing techniques (software), similar to those used on imagery obtained from unmanned planetary exploration, were used to improve marine photography and side-scan sonar imagery. Features and details not visible by conventional photo processing methods were enhanced by filtering and noise removal on selected deep-sea photographs. Information gained near the periphery of photographs allows improved interpretation and facilitates construction of bottom mosaics where overlapping frames are available. Similar processing techniques were applied to side-scan sonar imagery, including corrections for slant-range distortion and along-track scale changes. The use of digital data processing and storage techniques greatly extends the quantity of information that can be handled, stored, and processed.
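The slant-range correction mentioned above has a simple geometry under the usual flat-seafloor assumption: the recorded slant range is the hypotenuse of a right triangle whose vertical leg is the towfish altitude, so the true across-track ground range is recovered with Pythagoras. A minimal sketch (flat-bottom assumption stated in the code):

```python
import math

def ground_range(slant_range, fish_altitude):
    """Slant-range to ground-range correction for side-scan sonar,
    assuming a flat seafloor. Echoes arriving before the nadir return
    (slant range shorter than the altitude) map to zero ground range."""
    if slant_range < fish_altitude:
        return 0.0
    return math.sqrt(slant_range ** 2 - fish_altitude ** 2)

print(ground_range(5.0, 3.0))  # → 4.0
```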

  7. IMAGE 100: The interactive multispectral image processing system

    NASA Technical Reports Server (NTRS)

    Schaller, E. S.; Towles, R. W.

    1975-01-01

    The need for rapid, cost-effective extraction of useful information from vast quantities of multispectral imagery available from aircraft or spacecraft has resulted in the design, implementation and application of a state-of-the-art processing system known as IMAGE 100. Operating on the general principle that all objects or materials possess unique spectral characteristics or signatures, the system uses this signature uniqueness to identify similar features in an image by simultaneously analyzing signatures in multiple frequency bands. Pseudo-colors, or themes, are assigned to features having identical spectral characteristics. These themes are displayed on a color CRT, and may be recorded on tape, film, or other media. The system was designed to incorporate key features such as interactive operation, user-oriented displays and controls, and rapid-response machine processing. Owing to these features, the user can readily control and/or modify the analysis process based on his knowledge of the input imagery. Effective use can be made of conventional photographic interpretation skills and state-of-the-art machine analysis techniques in the extraction of useful information from multispectral imagery. This approach results in highly accurate multitheme classification of imagery in seconds or minutes rather than the hours often involved in processing using other means.
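The per-pixel signature matching that IMAGE 100 performs across multiple bands can be illustrated with a simple parallelepiped classifier: a pixel belongs to a theme if its value falls inside that theme's min/max interval in every band. This is a generic sketch of the principle, not the system's actual algorithm, and all names are illustrative:

```python
import numpy as np

def classify_themes(cube, signatures):
    """Assign each pixel to the first theme whose per-band min/max box
    contains the pixel's spectrum; -1 means unclassified. `cube` is
    shaped (bands, rows, cols); `signatures` is a list of (lo, hi)
    arrays of length `bands`, one interval per spectral band."""
    _, rows, cols = cube.shape
    themes = np.full((rows, cols), -1, dtype=int)
    for t, (lo, hi) in enumerate(signatures):
        inside = np.all((cube >= lo[:, None, None]) &
                        (cube <= hi[:, None, None]), axis=0)
        themes[(themes == -1) & inside] = t  # keep earlier assignments
    return themes
```

Each theme index would then be mapped to a pseudo-color for display, as the abstract describes.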

  8. Analysis of physical processes via imaging vectors

    NASA Astrophysics Data System (ADS)

    Volovodenko, V.; Efremova, N.; Efremov, V.

    2016-06-01

    Practically all modeled processes are random in one way or another. The most fully formulated theoretical foundation is that of Markov processes, which can be represented in different forms. A Markov process is a random process that undergoes transitions from one state to another on a state space, where the probability distribution of the next state depends only on the current state and not on the sequence of events that preceded it. In a Markov process, the model of the future does not change when additional information about earlier times becomes available. Modeling physical fields generally involves processes changing in time, i.e. non-stationary processes. In this case, the application of the Laplace transformation introduces unjustified complications into the description, whereas a transition to other representations results in explicit simplification. The method of imaging vectors yields constructive mathematical models and the necessary transitions in the modeling process and the analysis itself. The flexibility of a model built on a polynomial basis allows rapid transformation of the mathematical model and accelerates further analysis. It should be noted that the mathematical description permits an operator representation; conversely, an operator representation of the structures, algorithms and data processing procedures significantly improves the flexibility of the modeling process.
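The Markov property stated above can be made concrete with the smallest possible example: a two-state chain whose next-state distribution depends only on the current state. Iterating the distribution converges to the stationary distribution. The transition matrix here is an arbitrary illustration, not from the paper:

```python
import numpy as np

# Two-state Markov chain: row i holds the transition probabilities out
# of state i. The next-state distribution depends only on the current
# state, never on the earlier history.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

pi = np.array([1.0, 0.0])   # start in state 0 with certainty
for _ in range(200):        # iterate pi <- pi P until it stops changing
    pi = pi @ P
# pi converges to the stationary distribution (5/6, 1/6)
```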

  9. Improvement of hospital processes through business process management in Qaem Teaching Hospital: A work in progress

    PubMed Central

    Yarmohammadian, Mohammad H.; Ebrahimipour, Hossein; Doosty, Farzaneh

    2014-01-01

    In a world of continuously changing business environments, organizations have no option but to deal with a high level of transformation in order to meet the consequent demands. Therefore, many companies need to continually improve and review their processes to maintain their competitive advantages in an uncertain environment. Meeting these challenges requires implementing the most efficient possible business processes, geared to the needs of the industry and market segments that the organization serves globally. In the last 10 years, total quality management, business process reengineering, and business process management (BPM) have been some of the management tools applied by organizations to increase business competitiveness. This paper is an original article that presents the implementation of the BPM approach in the healthcare domain, which allows an organization to improve and review its critical business processes. This project was performed in Qaem Teaching Hospital in Mashhad, Iran and consists of four distinct steps: (1) identify business processes, (2) document the process, (3) analyze and measure the process, and (4) improve the process. Implementing BPM in Qaem Teaching Hospital changed the nature of management by allowing the organization to avoid the complexity of disparate, siloed systems. BPM instead enabled the organization to focus on business processes at a higher level. PMID:25540784

  10. Preservice Teachers' Views of Inclusive Science Teaching as Shaped by Images of Teaching, Learning, and Knowing.

    ERIC Educational Resources Information Center

    Southerland, Sherry A.; Gess-Newsome, Julie

    1999-01-01

    Interpretive analysis of preservice teachers' writings and discussions during an elementary-science methods course identified the teachers' positivist views of knowledge, learning, and teaching as prominent tools for guiding understanding of and reaction to ideas of teaching science to diverse student populations. Discusses the impact on teachers'…