Science.gov

Sample records for teaching image processing

  1. Image Processing for Teaching.

    ERIC Educational Resources Information Center

    Greenberg, R.; And Others

    1993-01-01

    The Image Processing for Teaching project provides a powerful medium to excite students about science and mathematics, especially children from minority groups and others whose needs have not been met by traditional teaching. Using professional-quality software on microcomputers, students explore a variety of scientific data sets, including…

  2. Teaching digital image processing and computer vision in a quantitative imaging electronic classroom

    NASA Astrophysics Data System (ADS)

    Sonka, Milan

    1998-06-01

    In 1996, the University of Iowa launched a multiphase project for the development of a well-structured interdisciplinary image systems engineering curriculum with both depth and breadth in its offerings. This project has been supported by equipment grants from the Hewlett Packard Company. The new teaching approach that we are currently developing is very different from the one we used in previous years. Lectures consist of the presentation of concepts, immediately followed by examples and practical exploratory problems. Six image processing classes have been offered in the new collaborative learning environment during the first two academic years. This paper outlines the educational approach we are taking and summarizes our early experience.

  3. Enhancing the Teaching of Digital Processing of Remote Sensing Image Course through Geospatial Web Processing Services

    NASA Astrophysics Data System (ADS)

    di, L.; Deng, M.

    2010-12-01

    Remote sensing (RS) is an essential method to collect data for Earth science research. Huge amounts of remote sensing data, most of them in image form, have been acquired. Almost all geography departments in the world offer courses in digital processing of remote sensing images. Such courses place emphasis on how to digitally process large amounts of multi-source images to solve real-world problems. However, due to the diversity and complexity of RS images and the shortcomings of current data and processing infrastructure, obstacles to effectively teaching such courses remain. The major obstacles include 1) difficulties for students and educators in finding, accessing, integrating, and using massive RS images, and 2) inadequate processing functions and computing facilities for students to freely explore the massive data. Recent developments in geospatial Web processing service systems, which make massive data, computing power, and processing capabilities available to average Internet users anywhere in the world, promise the removal of these obstacles. The GeoBrain system developed by CSISS is an example of such systems. All functions available in the GRASS open-source GIS have been implemented as Web services in GeoBrain. Petabytes of remote sensing images in NASA data centers, the USGS Landsat data archive, and NOAA CLASS are transparently accessible and processable through GeoBrain. The GeoBrain system is operated on a high-performance cluster server with large disk storage and a fast Internet connection. All GeoBrain capabilities can be accessed by any Internet-connected Web browser. Dozens of universities have used GeoBrain as an ideal platform to support data-intensive remote sensing education. This presentation gives a specific example of using GeoBrain geoprocessing services to enhance the teaching of GGS 588, Digital Remote Sensing, taught at the Department of Geography and Geoinformation Science, George Mason University.
The course uses the textbook "Introductory Digital Image Processing, A Remote Sensing Perspective" authored by John Jensen. The textbook is widely adopted in geography departments around the world for training students in the digital processing of remote sensing images. In the traditional teaching setting for the course, the instructor prepares a set of sample remote sensing images to be used for the course. Commercial desktop remote sensing software, such as ERDAS, is used by students to do the lab exercises. The students have to do the exercises in the lab and can only use the sample images. For this specific course at GMU, we developed GeoBrain-based lab exercises for the course. With GeoBrain, students can now explore petabytes of remote sensing images in the NASA, NOAA, and USGS data archives instead of dealing only with sample images. Students have a much more powerful computing facility available for their lab exercises. They can explore the data and do the exercises at any time and in any place, as long as they can access the Internet through a Web browser. Feedback from students about the learning experience in digital image processing with the help of GeoBrain Web processing services has been very positive. The teaching/lab materials and GeoBrain services are freely available to anyone at http://www.laits.gmu.edu.

  4. Teaching Effectively with Visual Effect in an Image-Processing Class.

    ERIC Educational Resources Information Center

    Ng, G. S.

    1997-01-01

    Describes a course teaching the use of computers in emulating human visual capability and image processing and proposes an interactive presentation using multimedia technology to capture and sustain student attention. Describes the three-phase presentation: introduction of image processing equipment, presentation of lecture material, and…

  5. Image Processing for Teaching: Transforming a Scientific Research Tool into an Educational Technology.

    ERIC Educational Resources Information Center

    Greenberg, Richard

    1998-01-01

    Describes the Image Processing for Teaching (IPT) project which provides digital image processing to excite students about science and mathematics as they use research-quality software on microcomputers. Provides information on IPT, a dissemination project whose components have been widespread teacher education, curriculum-based materials…

  7. Applying a visual language for image processing as a graphical teaching tool in medical imaging

    NASA Astrophysics Data System (ADS)

    Birchman, James J.; Tanimoto, Steven L.; Rowberg, Alan H.; Choi, Hyung-Sik; Kim, Yongmin

    1992-05-01

    Typical user interaction in image processing is with command line entries, pull-down menus, or text menu selections from a list, and as such is not generally graphical in nature. Although applying these interactive methods to construct more sophisticated algorithms from a series of simple image processing steps may be clear to engineers and programmers, it may not be clear to clinicians. A solution to this problem is to implement a visual programming language using visual representations to express image processing algorithms. Visual representations promote a more natural and rapid understanding of image processing algorithms by providing more visual insight into what the algorithms do than the interactive methods mentioned above can provide. Individuals accustomed to dealing with images will be more likely to understand an algorithm that is represented visually. This is especially true of referring physicians, such as surgeons in an intensive care unit. With the increasing acceptance of picture archiving and communications system (PACS) workstations and the trend toward increasing clinical use of image processing, referring physicians will need to learn more sophisticated concepts than simply image access and display. If the procedures that they perform commonly, such as window width and window level adjustment and image enhancement using unsharp masking, are depicted visually in an interactive environment, it will be easier for them to learn and apply these concepts. The software described in this paper is a visual programming language for image processing which has been implemented on the NeXT computer using NeXTstep user interface development tools and other tools in an object-oriented environment. The concept is based upon the description of a visual language titled `Visualization of Vision Algorithms' (VIVA). 
Iconic representations of simple image processing steps are placed into a workbench screen and connected together into a dataflow path by the user. As the user creates and edits a dataflow path, more complex algorithms can be built on the screen. Once the algorithm is built, it can be executed, its results can be reviewed, and operator parameters can be interactively adjusted until an optimized output is produced. The optimized algorithm can then be saved and added to the system as a new operator. This system has been evaluated as a graphical teaching tool for window width and window level adjustment, image enhancement using unsharp masking, and other techniques.
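
    The two operations the abstract singles out as teaching examples, window width/level adjustment and unsharp masking, can be sketched in a few lines of NumPy (a sketch only; the 3x3 box blur and the test values are illustrative assumptions, not details of the VIVA system):

```python
import numpy as np

def window_level(img, width, level):
    """Map raw intensities to the display range [0, 255]: values inside the
    window (centered at `level`, `width` wide) are stretched linearly,
    values outside it are clipped."""
    lo, hi = level - width / 2, level + width / 2
    out = np.clip(img, lo, hi)
    return ((out - lo) / (hi - lo) * 255).astype(np.uint8)

def unsharp_mask(img, amount=1.0):
    """Sharpen by adding back the difference between the image and a
    blurred copy (here a simple 3x3 box blur as the low-pass stage)."""
    pad = np.pad(img.astype(float), 1, mode="edge")
    blur = sum(pad[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0
    return img + amount * (img - blur)

img = np.array([[100, 100, 100], [100, 200, 100], [100, 100, 100]], dtype=float)
print(window_level(img, width=100, level=150))  # center pixel maps to 255
print(unsharp_mask(img)[1, 1] > img[1, 1])      # True: sharpening boosts the peak
```

    Interactively adjusting `width`, `level`, and `amount` and re-running is exactly the parameter-tweaking loop the dataflow workbench makes visual.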

  8. Teaching image processing and pattern recognition with the Intel OpenCV library

    NASA Astrophysics Data System (ADS)

    Kozłowski, Adam; Królak, Aleksandra

    2009-06-01

    In this paper we present an approach to teaching image processing and pattern recognition with the use of the OpenCV library. Image processing, pattern recognition and computer vision are important branches of science and apply to tasks ranging from critical ones, such as medical diagnostics, to everyday ones, such as art and entertainment. It is therefore crucial to provide students of image processing and pattern recognition with the most up-to-date solutions available. In the Institute of Electronics at the Technical University of Lodz we facilitate the teaching process in this subject with the OpenCV library, which is an open-source set of classes, functions and procedures that can be used in programming efficient and innovative algorithms for various purposes. The topics of student projects completed with the help of the OpenCV library range from automatic correction of image quality parameters or creation of panoramic images from video to pedestrian tracking in surveillance camera video sequences or head-movement-based mouse cursor control for the motorically impaired.
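
    As a flavor of the student projects listed, automatic correction of image quality parameters often begins with simple percentile-based contrast stretching. A minimal NumPy sketch (the percentile cutoffs and test data are illustrative assumptions; in practice OpenCV provides equivalent built-in normalization routines):

```python
import numpy as np

def stretch_contrast(img, lo_pct=2, hi_pct=98):
    """Linearly rescale intensities so the given percentiles map to 0 and 255.
    Clipping the extremes makes the stretch robust to outlier pixels."""
    lo, hi = np.percentile(img, [lo_pct, hi_pct])
    out = np.clip((img - lo) / (hi - lo), 0, 1)
    return (out * 255).astype(np.uint8)

dim = np.full((4, 4), 120, dtype=float)  # a flat, low-contrast patch...
dim[1:3, 1:3] = 140                      # ...with a slightly brighter center
print(stretch_contrast(dim).min(), stretch_contrast(dim).max())  # 0 255
```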

  9. Teaching High School Science Using Image Processing: A Case Study of Implementation of Computer Technology.

    ERIC Educational Resources Information Center

    Greenberg, Richard; Raphael, Jacqueline; Keller, Jill L.; Tobias, Sheila

    1998-01-01

    Outlines an in-depth case study of teachers' use of image processing in biology, earth science, and physics classes in one high school science department. Explores issues surrounding technology implementation. Contains 21 references. (DDR)

  10. A self-teaching image processing and voice-recognition-based, intelligent and interactive system to educate visually impaired children

    NASA Astrophysics Data System (ADS)

    Iqbal, Asim; Farooq, Umar; Mahmood, Hassan; Asad, Muhammad Usman; Khan, Akrama; Atiq, Hafiz Muhammad

    2010-02-01

    A self-teaching image processing and voice recognition based system is developed to educate visually impaired children, chiefly in their primary education. The system comprises a computer, a vision camera, an ear speaker and a microphone. The camera, attached to the computer system, is mounted on the ceiling opposite (at the required angle) the desk on which the book is placed. Sample images and voices, in the form of instructions and commands for English and Urdu alphabets, numeric digits, operators and shapes, are already stored in the database. A blind child first reads the embossed character (object) with the help of his fingers, then speaks the answer, the name of the character, its shape, etc. into the microphone. On the voice command of the blind child received by the microphone, an image is taken by the camera and processed by a MATLAB® program, developed with the help of the Image Acquisition and Image Processing toolboxes, which generates a response or the required set of instructions for the child via the ear speaker, resulting in the self-education of a visually impaired child. A speech recognition program is also developed in MATLAB® with the help of the Data Acquisition and Signal Processing toolboxes, which records and processes the commands of the blind child.

  11. SSMiles: Using Models to Teach about Remote Sensing and Image Processing.

    ERIC Educational Resources Information Center

    Tracy, Dyanne M., Ed.

    1994-01-01

    Presents an introductory lesson on remote sensing and image processing to be used in cooperative groups. Students are asked to solve a problem by gathering information, making inferences, transforming data into other forms, and making and testing hypotheses. Includes four expansions of the lesson and a reproducible student worksheet. (MKR)

  12. Image Processing

    NASA Technical Reports Server (NTRS)

    1993-01-01

    Electronic Imagery, Inc.'s ImageScale Plus software, developed through a Small Business Innovation Research (SBIR) contract with Kennedy Space Center for use on the space shuttle orbiter in 1991, enables astronauts to conduct image processing, prepare electronic still camera images in orbit, display them and downlink images to ground-based scientists for evaluation. Electronic Imagery, Inc.'s ImageCount, a spin-off product of ImageScale Plus, is used to count trees in Florida orange groves. Other applications include x-ray and MRI imagery, textile designs and special effects for movies. As of 1/28/98, the company could not be located, therefore contact/product information is no longer valid.

  13. Image Processing

    NASA Technical Reports Server (NTRS)

    1982-01-01

    Images are prepared from data acquired by the multispectral scanner aboard Landsat, which views Earth in four ranges of the electromagnetic spectrum, two visible bands and two infrared. The scanner picks up radiation from ground objects and converts the radiation signatures to digital signals, which are relayed to Earth and recorded on tape. Each tape contains "pixels" or picture elements covering a ground area; computerized equipment processes the tapes and plots each pixel, line by line, to produce the basic image. The image can be further processed to correct sensor errors, to heighten contrast for feature emphasis or to enhance the end product in other ways. A key factor in the conversion of digital data to visual form is the precision of the processing equipment. Jet Propulsion Laboratory prepared a digital mosaic that was plotted and enhanced by Optronics International, Inc. using the company's C-4300 Colorwrite, a high-precision, high-speed system which manipulates and analyzes digital data and presents it in visual form on film. Optronics manufactures a complete family of image enhancement processing systems to meet all users' needs. Enhanced imagery is useful to geologists, hydrologists, land use planners, agricultural specialists, geographers and others.
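
    The "feature emphasis" possible with visible and infrared band pairs can be illustrated with a normalized difference index, a standard manipulation of Landsat-type multispectral data (the band values below are hypothetical digital counts, not actual Landsat data):

```python
import numpy as np

def ndvi(red, nir):
    """Normalized difference of near-infrared and red bands: vegetation
    reflects strongly in NIR and absorbs red, so the index highlights
    plant cover; the small epsilon guards against division by zero."""
    red, nir = red.astype(float), nir.astype(float)
    return (nir - red) / (nir + red + 1e-9)

red = np.array([[50.0, 200.0], [60.0, 180.0]])   # hypothetical digital counts
nir = np.array([[200.0, 60.0], [220.0, 50.0]])
print(np.round(ndvi(red, nir), 2))  # vegetated pixels positive, bare ground negative
```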

  14. Images of Teaching.

    ERIC Educational Resources Information Center

    Hargrove, Kathy

    2003-01-01

    This article explores different teaching styles, including instructional managers (who focus on orchestrating sets of activities for groups and individuals), caring persons (who are more deeply concerned about how the work of the classroom contributes to the students' growth as individuals), or generous experts (who act as mentors). (Contains 1…

  15. Image Processing

    NASA Technical Reports Server (NTRS)

    1987-01-01

    A new spinoff product was derived from Geospectra Corporation's expertise in processing LANDSAT data in a software package. Called ATOM (for Automatic Topographic Mapping), it's capable of digitally extracting elevation information from stereo photos taken by spaceborne cameras. ATOM offers a new dimension of realism in applications involving terrain simulations, producing extremely precise maps of an area's elevations at a lower cost than traditional methods. ATOM has a number of applications involving defense training simulations and offers utility in architecture, urban planning, forestry, petroleum and mineral exploration.

  16. Teaching: A Reflective Process

    ERIC Educational Resources Information Center

    German, Susan; O'Day, Elizabeth

    2009-01-01

    In this article, the authors describe how they used formative assessments to ferret out possible misconceptions among middle-school students in a unit about weather-related concepts. Because they teach fifth- and eighth-grade science, this assessment also gives them a chance to see how student understanding develops over the years. This year they…

  17. Linear Algebra and Image Processing

    ERIC Educational Resources Information Center

    Allali, Mohamed

    2010-01-01

    We use digital image processing (DIP) technology to enhance the teaching of linear algebra so as to make the course more visual and interesting. Certainly, this visual approach of using technology to link linear algebra to DIP is interesting and unexpected to students as well as many faculty. (Contains 2 tables and 11 figures.)
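
    The classic demonstration linking linear algebra to DIP is low-rank image approximation via the singular value decomposition; a minimal sketch (the tiny rank-1 test image is an illustrative assumption, since the article does not specify its examples):

```python
import numpy as np

def lowrank(img, k):
    """Keep only the k largest singular values: a rank-k approximation of
    the image matrix, the standard linear-algebra demo of image compression."""
    U, s, Vt = np.linalg.svd(img.astype(float), full_matrices=False)
    return U[:, :k] * s[:k] @ Vt[:k, :]

# a rank-1 "image": the outer product of two vectors
img = np.outer(np.arange(1, 5), np.arange(1, 5)).astype(float)
print(np.allclose(lowrank(img, 1), img))  # True: one singular value suffices
```

    On a real photograph the same call with small `k` produces a recognizably blurred image, making the meaning of rank and singular values immediately visible to students.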

  18. Teaching Image-Processing Concepts in Junior High School: Boys' and Girls' Achievements and Attitudes towards Technology

    ERIC Educational Resources Information Center

    Barak, Moshe; Asad, Khaled

    2012-01-01

    Background: This research focused on the development, implementation and evaluation of a course on image-processing principles aimed at middle-school students. Purpose: The overarching purpose of the study was that of integrating the learning of subjects in science, technology, engineering and mathematics (STEM), and linking the learning of these…

  20. Computers in Public Schools: Changing the Image with Image Processing.

    ERIC Educational Resources Information Center

    Raphael, Jacqueline; Greenberg, Richard

    1995-01-01

    The kinds of educational technologies selected can make the difference between uninspired, rote computer use and challenging learning experiences. University of Arizona's Image Processing for Teaching Project has worked with over 1,000 teachers to develop image-processing techniques that provide students with exciting, open-ended opportunities for…

  1. Learning to Use Geographic Information Systems and Image Processing and Analysis to Teach Ocean Science to Middle School Students

    NASA Astrophysics Data System (ADS)

    Moore, S. D.; Martin, J.; Kinzel, M.

    2004-12-01

    This presentation will provide a middle school teacher's perspective on Ocean Explorers, a three-year project directed at teachers and schools in California. Funded by the Information Technology Experiences for Students and Teachers (ITEST) program at the National Science Foundation, Ocean Explorers is giving support to teams of teachers that will serve as local user groups for the exploration of geographic information systems (GIS) and image processing and analysis (IPA) as educational technologies for studying ocean science. Conducted as a collaboration between the nonprofit Center for Image Processing in Education and the Channel Islands National Marine Sanctuary, the project is providing mentoring, software, equipment, funding, and training on how to design inquiry-based activities that support achievement of California's standards for science, technology, mathematics, and reading education. During year two of Ocean Explorers, the teams of teachers will begin to use GIS and IPA as tools for involving their students in original research on issues of interest to their home communities. With assistance from the Ocean Explorers project, the teachers will create inquiry-based activities for their students that will help their school achieve targeted standards. This presentation will focus on plans by one teacher for involving students from St. Mary's Middle School, Fullerton, California, in tracking of ocean pollution and beach closures along the Southern California coast.

  2. Image Processing Software

    NASA Technical Reports Server (NTRS)

    1990-01-01

    The Ames digital image velocimetry technology has been incorporated in a commercially available image processing software package that allows motion measurement of images on a PC alone. The software, manufactured by Werner Frei Associates, is IMAGELAB FFT. IMAGELAB FFT is a general purpose image processing system with a variety of other applications, among them image enhancement of fingerprints and use by banks and law enforcement agencies for analysis of videos run during robberies.

  3. Hyperspectral image processing

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Hyperspectral image processing refers to the use of computer algorithms to extract, store and manipulate both spatial and spectral information contained in hyperspectral images across the visible and near-infrared portion of the electromagnetic spectrum. A typical hyperspectral image processing work...
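
    A hyperspectral cube is conventionally handled as a three-dimensional array (rows × columns × bands), so the most basic spatial-plus-spectral extractions the entry describes reduce to array slicing; a sketch with synthetic data (the cube dimensions are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
cube = rng.random((64, 64, 100))     # synthetic cube: 64x64 pixels, 100 bands

pixel_spectrum = cube[10, 20, :]     # the full spectrum of a single pixel
# the average spectrum of a spatial region (e.g. a region of interest)
mean_spectrum = cube[:32, :32, :].mean(axis=(0, 1))
print(pixel_spectrum.shape, mean_spectrum.shape)  # (100,) (100,)
```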

  4. Hybrid image processing

    NASA Technical Reports Server (NTRS)

    Juday, Richard D.

    1990-01-01

    Partly-digital, partly-optical 'hybrid' image processing attempts to use the properties of each domain to synergistic advantage: while Fourier optics furnishes speed, digital processing allows the use of much greater algorithmic complexity. The video-rate image-coordinate transformation used is a critical technology for real-time hybrid image-pattern recognition. Attention is given to the separation of pose variables, image registration, and both single- and multiple-frame registration.

  5. Subroutines For Image Processing

    NASA Technical Reports Server (NTRS)

    Faulcon, Nettie D.; Monteith, James H.; Miller, Keith W.

    1988-01-01

    Image Processing Library computer program, IPLIB, is collection of subroutines facilitating use of COMTAL image-processing system driven by HP 1000 computer. Functions include addition or subtraction of two images with or without scaling, display of color or monochrome images, digitization of image from television camera, display of test pattern, manipulation of bits, and clearing of screen. Provides capability to read or write points, lines, and pixels from image; read or write at location of cursor; and read or write array of integers into COMTAL memory. Written in FORTRAN 77.
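
    IPLIB itself is FORTRAN 77 tied to COMTAL hardware, but its first listed function, addition or subtraction of two images with or without scaling, reduces to clamped elementwise arithmetic; a NumPy equivalent (the function name and test values are illustrative, not taken from IPLIB):

```python
import numpy as np

def combine(a, b, scale=1.0, subtract=False):
    """Add or subtract two images, scale the result, and clamp it back
    into the 8-bit display range."""
    out = (a.astype(float) - b if subtract else a.astype(float) + b) * scale
    return np.clip(out, 0, 255).astype(np.uint8)

a = np.full((2, 2), 150, dtype=np.uint8)
b = np.full((2, 2), 130, dtype=np.uint8)
print(combine(a, b)[0, 0])                 # 255: the sum 280 clamps to 8-bit max
print(combine(a, b, subtract=True)[0, 0])  # 20
```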

  6. Image Processing Software

    NASA Technical Reports Server (NTRS)

    1992-01-01

    To convert raw data into environmental products, the National Weather Service and other organizations use the Global 9000 image processing system marketed by Global Imaging, Inc. The company's GAE software package is an enhanced version of the TAE, developed by Goddard Space Flight Center to support remote sensing and image processing applications. The system can be operated in three modes and is combined with HP Apollo workstation hardware.

  7. Apple Image Processing Educator

    NASA Technical Reports Server (NTRS)

    Gunther, F. J.

    1981-01-01

    A software system design is proposed and demonstrated with pilot-project software. The system permits the Apple II microcomputer to be used for personalized computer-assisted instruction in the digital image processing of LANDSAT images. The programs provide data input, menu selection, graphic and hard-copy displays, and both general and detailed instructions. The pilot-project results are considered to be successful indicators of the capabilities and limits of microcomputers for digital image processing education.

  8. Image processing mini manual

    NASA Technical Reports Server (NTRS)

    Matthews, Christine G.; Posenau, Mary-Anne; Leonard, Desiree M.; Avis, Elizabeth L.; Debure, Kelly R.; Stacy, Kathryn; Vonofenheim, Bill

    1992-01-01

    The intent is to provide an introduction to the image processing capabilities available at the Langley Research Center (LaRC) Central Scientific Computing Complex (CSCC). Various image processing software components are described. Information is given concerning the use of these components in the Data Visualization and Animation Laboratory at LaRC.

  9. Image Processing System

    NASA Technical Reports Server (NTRS)

    1986-01-01

    Mallinckrodt Institute of Radiology (MIR) is using a digital image processing system which employs NASA-developed technology. MIR's computer system is the largest radiology system in the world. It is used in diagnostic imaging. Blood vessels are injected with x-ray dye, and the images which are produced indicate whether arteries are hardened or blocked. A computer program developed by Jet Propulsion Laboratory known as Mini-VICAR/IBIS was supplied to MIR by COSMIC. The program provides the basis for developing the computer imaging routines for data processing, contrast enhancement and picture display.

  10. Image sets for satellite image processing systems

    NASA Astrophysics Data System (ADS)

    Peterson, Michael R.; Horner, Toby; Temple, Asael

    2011-06-01

    The development of novel image processing algorithms requires a diverse and relevant set of training images to ensure the general applicability of such algorithms for their required tasks. Images must be appropriately chosen for the algorithm's intended applications. Image processing algorithms often employ the discrete wavelet transform (DWT) algorithm to provide efficient compression and near-perfect reconstruction of image data. Defense applications often require the transmission of images and video across noisy or low-bandwidth channels. Unfortunately, the DWT algorithm's performance deteriorates in the presence of noise. Evolutionary algorithms are often able to train image filters that outperform DWT filters in noisy environments. Here, we present and evaluate two image sets suitable for the training of such filters for satellite and unmanned aerial vehicle imagery applications. We demonstrate the use of the first image set as a training platform for evolutionary algorithms that optimize DWT-based image transform filters for satellite image compression. We evaluate the suitability of each image as a training image during optimization. Each image is ranked according to its suitability as a training image and its difficulty as a test image. The second image set provides a test-bed for holdout validation of trained image filters. These images are used to independently verify that trained filters will provide strong performance on unseen satellite images. Collectively, these image sets are suitable for the development of image processing algorithms for satellite and reconnaissance imagery applications.
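
    A single level of the Haar transform, the simplest member of the DWT filter family such training optimizes, splits a signal into pairwise averages and differences and reconstructs it exactly; a one-dimensional sketch (2-D image transforms apply the same step along rows and then columns):

```python
import numpy as np

def haar_1d(x):
    """One level of the Haar wavelet transform: pairwise averages (low-pass)
    and differences (high-pass), with orthonormal sqrt(2) scaling."""
    x = x.astype(float)
    avg = (x[0::2] + x[1::2]) / np.sqrt(2)
    diff = (x[0::2] - x[1::2]) / np.sqrt(2)
    return avg, diff

def haar_1d_inverse(avg, diff):
    """Invert the transform: interleave the reconstructed even/odd samples."""
    x = np.empty(avg.size * 2)
    x[0::2] = (avg + diff) / np.sqrt(2)
    x[1::2] = (avg - diff) / np.sqrt(2)
    return x

sig = np.array([4.0, 2.0, 5.0, 5.0])
avg, diff = haar_1d(sig)
print(np.allclose(haar_1d_inverse(avg, diff), sig))  # True: perfect reconstruction
```

    Compression comes from quantizing or discarding small `diff` coefficients; noise robustness depends on how the filter coefficients are chosen, which is exactly what the evolutionary search varies.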

  11. Image-Processing Program

    NASA Technical Reports Server (NTRS)

    Roth, D. J.; Hull, D. R.

    1994-01-01

    IMAGEP manipulates digital image data to effect various processing, analysis, and enhancement functions. It is keyboard-driven program organized into nine subroutines. Within subroutines are sub-subroutines also selected via keyboard. Algorithm has possible scientific, industrial, and biomedical applications in study of flows in materials, analysis of steels and ores, and pathology, respectively.

  12. Image processing and reconstruction

    SciTech Connect

    Chartrand, Rick

    2012-06-15

    This talk will examine some mathematical methods for image processing and the solution of underdetermined, linear inverse problems. The talk will have a tutorial flavor, mostly accessible to undergraduates, while still presenting research results. The primary approach is the use of optimization problems. We will find that relaxing the usual assumption of convexity will give us much better results.

  13. Teaching Geoscience with Visualizations: Using Images, Animations and Models Effectively

    NASA Astrophysics Data System (ADS)

    Manduca, C. A.; Hall-Wallace, M.; Mogk, D.; Tversky, B.; Slotta, J.; Crabaugh, J.

    2004-05-01

    Visualizing the Earth, its processes, and its evolution through time is a fundamental aspect of geoscience. Geoscientists use a wide variety of tools to assist them in creating their own mental images. For example, we now use multilayered visualizations of geographically referenced data to analyze the relationships between different variables and we create animations to look at changes in data or model output through time. An NAGT On the Cutting Edge emerging theme workshop focused on the use of visualization tools in teaching geoscience by addressing the question "How do we teach geoscience with visualizations effectively?" The workshop held February 26-29 at Carleton College brought together geoscientists who are leaders in using visualizations in their teaching, learning scientists who study how we perceive and learn from visualizations, and creators of visualizations and visualization tools. Participants considered what we know about using visualizations effectively to teach geoscience, what important questions need to be answered to improve our ability to teach effectively, and what resources are needed to increase the capability of teaching with visualizations in the geosciences. Discussion focused on how we use visualizations in our teaching to describe and explain geoscience concepts and to explore and understand data. In addition, a section of the workshop focused on powerful emerging tools and technologies for visualization and their use in geoscience education. Workshop leaders and participants have created a web-site that includes visualizations useful in teaching, an annotated bibliography of research about teaching and learning with visualizations, essays by workshop participants about their work with visualizations, and information for visualization creators. Further information can be found at serc.carleton.edu/NAGTWorkshops/visualize04.

  14. Image processing technology

    SciTech Connect

    Van Eeckhout, E.; Pope, P.; Balick, L.

    1996-07-01

    This is the final report of a two-year, Laboratory-Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The primary objective of this project was to advance image processing and visualization technologies for environmental characterization. This was effected by developing and implementing analyses of remote sensing data from satellite and airborne platforms, and demonstrating their effectiveness in visualization of environmental problems. Many sources of information were integrated as appropriate using geographic information systems.

  15. Introduction to computer image processing

    NASA Technical Reports Server (NTRS)

    Moik, J. G.

    1973-01-01

    Theoretical backgrounds and digital techniques for a class of image processing problems are presented. Image formation in the context of linear system theory, image evaluation, noise characteristics, mathematical operations on image and their implementation are discussed. Various techniques for image restoration and image enhancement are presented. Methods for object extraction and the problem of pictorial pattern recognition and classification are discussed.

  16. scikit-image: image processing in Python

    PubMed Central

    Schönberger, Johannes L.; Nunez-Iglesias, Juan; Boulogne, François; Warner, Joshua D.; Yager, Neil; Gouillart, Emmanuelle; Yu, Tony

    2014-01-01

    scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image. More information can be found on the project homepage, http://scikit-image.org. PMID:25024921
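As one illustration of the kind of algorithm the library provides, the sketch below reimplements Otsu thresholding in plain NumPy. scikit-image ships this as `skimage.filters.threshold_otsu`; the reimplementation here is only to show the idea without assuming the package is installed.

```python
import numpy as np

def otsu_threshold(image, nbins=256):
    """Return the gray level that maximizes between-class variance."""
    hist, bin_edges = np.histogram(image, bins=nbins)
    bin_centers = (bin_edges[:-1] + bin_edges[1:]) / 2
    hist = hist.astype(float)
    # Cumulative class sizes and class means for every candidate split.
    w0 = np.cumsum(hist)
    w1 = np.cumsum(hist[::-1])[::-1]
    m0 = np.cumsum(hist * bin_centers) / np.maximum(w0, 1e-12)
    m1 = np.cumsum((hist * bin_centers)[::-1])[::-1] / np.maximum(w1, 1e-12)
    # Between-class variance for a split after bin i (unnormalized).
    var_between = w0[:-1] * w1[1:] * (m0[:-1] - m1[1:]) ** 2
    return bin_centers[:-1][np.argmax(var_between)]

# Bimodal test image: dark background with a bright square.
img = np.zeros((64, 64))
img[16:48, 16:48] = 200.0
t = otsu_threshold(img)
mask = img > t
```

With a cleanly bimodal image like this one, the resulting mask recovers exactly the bright square.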

  17. Image Processing Diagnostics: Emphysema

    NASA Astrophysics Data System (ADS)

    McKenzie, Alex

    2009-10-01

    Currently the computerized tomography (CT) scan can detect emphysema sooner than traditional x-rays, but other tests are required to measure more accurately the amount of affected lung. CT scan images clearly show whether a patient has emphysema, but visual inspection alone cannot quantify the degree of the disease, which appears merely as subtle, barely distinct dark spots on the lung. Our goal is to create a software plug-in to interface with existing open source medical imaging software, to automate the process of accurately diagnosing and determining emphysema severity levels in patients. This will be accomplished by performing a number of statistical calculations using data taken from CT scan images of several patients representing a wide range of severity of the disease. These analyses include an examination of the deviation from a normal distribution curve to determine skewness, a commonly used statistical parameter. Our preliminary results show that this method of assessment appears to be more accurate and robust than currently utilized methods, which involve looking at percentages of radiodensities in air passages of the lung.
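The skewness statistic the abstract refers to can be computed directly from the distribution of CT attenuation values. The sketch below uses synthetic Hounsfield-unit samples; the numbers and the extra low-attenuation "diseased" cluster are illustrative assumptions, not data from the study.

```python
import numpy as np

def fisher_skewness(values):
    """Third standardized moment of the sample: the deviation-from-normal
    measure the abstract describes."""
    v = np.asarray(values, dtype=float)
    m, s = v.mean(), v.std()
    return ((v - m) ** 3).mean() / s ** 3

# Synthetic attenuation samples (illustrative only, not patient data).
rng = np.random.default_rng(0)
healthy = rng.normal(-750.0, 50.0, 10_000)   # roughly symmetric distribution
diseased = np.concatenate(
    [healthy, rng.normal(-980.0, 15.0, 3_000)])  # extra low-attenuation mass

skew_h = fisher_skewness(healthy)   # near zero for a symmetric sample
skew_d = fisher_skewness(diseased)  # clearly negative: a long dark tail
```

The extra mass of very dark voxels drags the third moment negative, which is the kind of departure from a normal curve the method looks for.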

  18. Image processing and recognition for biological images.

    PubMed

    Uchida, Seiichi

    2013-05-01

    This paper reviews image processing and pattern recognition techniques, which will be useful to analyze bioimages. Although this paper does not provide their technical details, it will be possible to grasp their main tasks and typical tools to handle the tasks. Image processing is a large research area to improve the visibility of an input image and acquire some valuable information from it. As the main tasks of image processing, this paper introduces gray-level transformation, binarization, image filtering, image segmentation, visual object tracking, optical flow and image registration. Image pattern recognition is the technique to classify an input image into one of the predefined classes and also has a large research area. This paper overviews its two main modules, that is, feature extraction module and classification module. Throughout the paper, it will be emphasized that bioimage is a very difficult target for even state-of-the-art image processing and pattern recognition techniques due to noises, deformations, etc. This paper is expected to be one tutorial guide to bridge biology and image processing researchers for their further collaboration to tackle such a difficult target. PMID:23560739
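Two of the tasks this review lists, gray-level transformation and binarization, can be sketched in a few lines of NumPy. This is a generic illustration of those two operations, not code from the paper.

```python
import numpy as np

def gamma_correct(img, gamma=0.5):
    """Gray-level transformation: rescale to [0, 1] and apply a gamma
    curve; gamma < 1 brightens dim structures."""
    img = np.asarray(img, dtype=float)
    norm = (img - img.min()) / (np.ptp(img) + 1e-12)
    return norm ** gamma

def binarize(img, threshold=0.5):
    """Binarization: foreground mask via a fixed global threshold."""
    return img > threshold

cells = np.array([[0.0, 10.0],
                  [20.0, 40.0]])
mask = binarize(gamma_correct(cells))
```

In a real bioimage pipeline the fixed threshold would usually be replaced by an adaptive one, precisely because of the noise and deformations the review warns about.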

  19. Image processing and recognition for biological images

    PubMed Central

    Uchida, Seiichi

    2013-01-01

    This paper reviews image processing and pattern recognition techniques, which will be useful to analyze bioimages. Although this paper does not provide their technical details, it will be possible to grasp their main tasks and typical tools to handle the tasks. Image processing is a large research area to improve the visibility of an input image and acquire some valuable information from it. As the main tasks of image processing, this paper introduces gray-level transformation, binarization, image filtering, image segmentation, visual object tracking, optical flow and image registration. Image pattern recognition is the technique to classify an input image into one of the predefined classes and also has a large research area. This paper overviews its two main modules, that is, feature extraction module and classification module. Throughout the paper, it will be emphasized that bioimage is a very difficult target for even state-of-the-art image processing and pattern recognition techniques due to noises, deformations, etc. This paper is expected to be one tutorial guide to bridge biology and image processing researchers for their further collaboration to tackle such a difficult target. PMID:23560739

  20. Smart Image Enhancement Process

    NASA Technical Reports Server (NTRS)

    Jobson, Daniel J. (Inventor); Rahman, Zia-ur (Inventor); Woodell, Glenn A. (Inventor)

    2012-01-01

    Contrast and lightness measures are used to first classify the image as being one of non-turbid and turbid. If turbid, the original image is enhanced to generate a first enhanced image. If non-turbid, the original image is classified in terms of a merged contrast/lightness score based on the contrast and lightness measures. The non-turbid image is enhanced to generate a second enhanced image when a poor contrast/lightness score is associated therewith. When the second enhanced image has a poor contrast/lightness score associated therewith, this image is enhanced to generate a third enhanced image. A sharpness measure is computed for one image that is selected from (i) the non-turbid image, (ii) the first enhanced image, (iii) the second enhanced image when a good contrast/lightness score is associated therewith, and (iv) the third enhanced image. If the selected image is not sharp, it is sharpened to generate a sharpened image. The final image is selected from the selected image and the sharpened image.

  1. Complex Dynamics in Academics' Developmental Processes in Teaching

    ERIC Educational Resources Information Center

    Trautwein, Caroline; Nückles, Matthias; Merkt, Marianne

    2015-01-01

    Improving teaching in higher education is a concern for universities worldwide. This study explored academics' developmental processes in teaching using episodic interviews and teaching portfolios. Eight academics in the context of teaching development reported changes in their teaching and change triggers. Thematic analyses revealed seven areas…

  2. Processing Visual Images

    SciTech Connect

    Litke, Alan

    2006-03-27

    The back of the eye is lined by an extraordinary biological pixel detector, the retina. This neural network is able to extract vital information about the external visual world, and transmit this information in a timely manner to the brain. In this talk, Professor Litke will describe a system that has been implemented to study how the retina processes and encodes dynamic visual images. Based on techniques and expertise acquired in the development of silicon microstrip detectors for high energy physics experiments, this system can simultaneously record the extracellular electrical activity of hundreds of retinal output neurons. After presenting first results obtained with this system, Professor Litke will describe additional applications of this incredible technology.

  3. Teaching English as a Cultural Process.

    ERIC Educational Resources Information Center

    Bailey, Wilfrid C.

    Teaching of English is involved in the transmission of culture in two ways: (1) it is part of the complex process through which culture is transmitted; and (2) it can be a vehicle for the transmission of culture. The English teacher is faced with a combination of the two tasks of enculturation and acculturation. The effective teacher must clearly…

  4. Computer Aided Teaching of Digital Signal Processing.

    ERIC Educational Resources Information Center

    Castro, Ian P.

    1990-01-01

    Describes a microcomputer-based software package developed at the University of Surrey for teaching digital signal processing to undergraduate science and engineering students. Menu-driven software capabilities are explained, including demonstration of qualitative concepts and experimentation with quantitative data, and examples are given of…

  5. Teaching and Learning: A Collaborative Process.

    ERIC Educational Resources Information Center

    Goldberg, Merryl R.

    1990-01-01

    Explains the teaching-research method of instruction that employs the teacher and students as collaborative partners in the learning process. States that students attain knowledge through assimilating experiences in ways that are most meaningful for them. Case studies are included. (GG)

  6. Teaching Psychological Report Writing: Content and Process

    ERIC Educational Resources Information Center

    Wiener, Judith; Costaris, Laurie

    2012-01-01

    The purpose of this article is to discuss the process of teaching graduate students in school psychology to write psychological reports that teachers and parents find readable and that guide intervention. The consensus from studies across four decades of research is that effective psychological reports connect to the client's context; have clear…

  7. Teaching, Communication, and Book Choice Processes

    ERIC Educational Resources Information Center

    Ryan, Dana Marie

    2013-01-01

    Allowing students to select their own books for independent reading has been linked to increased reading engagement, heightened motivation to read, and greater independence and efficacy in reading. However, there has been little exploration of the processes surrounding book choice in elementary classrooms, particularly teaching practices that…

  8. Teaching Psychological Report Writing: Content and Process

    ERIC Educational Resources Information Center

    Wiener, Judith; Costaris, Laurie

    2012-01-01

    The purpose of this article is to discuss the process of teaching graduate students in school psychology to write psychological reports that teachers and parents find readable and that guide intervention. The consensus from studies across four decades of research is that effective psychological reports connect to the client's context; have clear…

  9. Teaching, Communication, and Book Choice Processes

    ERIC Educational Resources Information Center

    Ryan, Dana Marie

    2013-01-01

    Allowing students to select their own books for independent reading has been linked to increased reading engagement, heightened motivation to read, and greater independence and efficacy in reading. However, there has been little exploration of the processes surrounding book choice in elementary classrooms, particularly teaching practices that…

  10. FORTRAN Algorithm for Image Processing

    NASA Technical Reports Server (NTRS)

    Roth, Don J.; Hull, David R.

    1987-01-01

    FORTRAN computer algorithm containing various image-processing analysis and enhancement functions developed. Algorithm developed specifically to process images of developmental heat-engine materials obtained with sophisticated nondestructive evaluation instruments. Applications of program include scientific, industrial, and biomedical imaging for studies of flaws in materials, analyses of steel and ores, and pathology.

  11. Teaching Process Design through Integrated Process Synthesis

    ERIC Educational Resources Information Center

    Metzger, Matthew J.; Glasser, Benjamin J.; Patel, Bilal; Hildebrandt, Diane; Glasser, David

    2012-01-01

    The design course is an integral part of chemical engineering education. A novel approach to the design course was recently introduced at the University of the Witwatersrand, Johannesburg, South Africa. The course aimed to introduce students to systematic tools and techniques for setting and evaluating performance targets for processes, as well as…

  12. Teaching Process Design through Integrated Process Synthesis

    ERIC Educational Resources Information Center

    Metzger, Matthew J.; Glasser, Benjamin J.; Patel, Bilal; Hildebrandt, Diane; Glasser, David

    2012-01-01

    The design course is an integral part of chemical engineering education. A novel approach to the design course was recently introduced at the University of the Witwatersrand, Johannesburg, South Africa. The course aimed to introduce students to systematic tools and techniques for setting and evaluating performance targets for processes, as well as…

  13. Multiscale Image Processing of Solar Image Data

    NASA Astrophysics Data System (ADS)

    Young, C.; Myers, D. C.

    2001-12-01

    It is often said that the blessing and curse of solar physics is too much data. Solar missions such as Yohkoh, SOHO and TRACE have shown us the Sun with amazing clarity but have also increased the amount of highly complex data. We have improved our view of the Sun, yet we have not improved our analysis techniques. The standard techniques used for analysis of solar images generally consist of observing the evolution of features in a sequence of byte-scaled images or a sequence of byte-scaled difference images. The determination of features and structures in the images is done qualitatively by the observer. There is little quantitative and objective analysis done with these images. Many advances in image processing techniques have occurred in the past decade, and many of these methods are possibly suited for solar image analysis. Multiscale/multiresolution methods are perhaps the most promising. These methods have been used to formulate the human ability to view and comprehend phenomena on different scales, so these techniques could be used to quantify the image processing done by the observer's eyes and brain. In this work we present several applications of multiscale techniques applied to solar image data. Specifically, we discuss uses of the wavelet, curvelet, and related transforms to define a multiresolution support for EIT, LASCO and TRACE images.
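As a concrete illustration of the multiresolution idea (not the authors' actual pipeline), one level of the 2-D Haar wavelet transform splits an image into a coarse approximation and three detail bands; recursing on the approximation band builds up the multiresolution support the abstract mentions.

```python
import numpy as np

def haar2d(img):
    """One level of the 2-D Haar transform. Returns the coarse
    approximation plus three detail bands (edge-like structure
    at the current scale). Assumes even dimensions."""
    a = img[0::2, 0::2]
    b = img[0::2, 1::2]
    c = img[1::2, 0::2]
    d = img[1::2, 1::2]
    LL = (a + b + c + d) / 4   # approximation: recurse on this for coarser scales
    LH = (a - b + c - d) / 4   # detail band
    HL = (a + b - c - d) / 4   # detail band
    HH = (a - b - c + d) / 4   # detail band
    return LL, LH, HL, HH

# A flat region carries no detail: all structure lands in LL.
flat = np.full((8, 8), 5.0)
LL, LH, HL, HH = haar2d(flat)
```

A quantitative detection scheme can then threshold the detail coefficients scale by scale instead of relying on the observer's eye.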

  14. The APL image processing laboratory

    NASA Technical Reports Server (NTRS)

    Jenkins, J. O.; Randolph, J. P.; Tilley, D. G.; Waters, C. A.

    1984-01-01

    The present and proposed capabilities of the Central Image Processing Laboratory, which provides a powerful resource for the advancement of programs in missile technology, space science, oceanography, and biomedical image analysis, are discussed. The use of image digitizing, digital image processing, and digital image output permits a variety of functional capabilities, including: enhancement, pseudocolor, convolution, computer output microfilm, presentation graphics, animations, transforms, geometric corrections, and feature extractions. The hardware and software of the Image Processing Laboratory, consisting of digitizing and processing equipment, software packages, and display equipment, is described. Attention is given to applications for imaging systems, map geometric correction, raster movie display of Seasat ocean data, Seasat and Skylab scenes of Nantucket Island, Space Shuttle imaging radar, differential radiography, and a computerized tomographic scan of the brain.

  15. Role of Clinical Images Based Teaching as a Supplement to Conventional Clinical Teaching in Dermatology

    PubMed Central

    Kumar, Gurumoorthy Rajesh; Madhavi, Sankar; Karthikeyan, Kaliaperumal; Thirunavakarasu, MR

    2015-01-01

    Introduction: Clinical Dermatology is a visually oriented specialty, where visually oriented teaching is more important than it is in any other specialty. It is essential that students have repeated exposure to common dermatological disorders in the limited hours of Dermatology clinical teaching. Aim: This study was conducted to assess the effect of clinical images based teaching as a supplement to the patient based clinical teaching in Dermatology, among final year MBBS students. Methods: A clinical batch comprising 19 students was chosen for the study. Apart from the routine clinical teaching sessions, clinical images based teaching was conducted. This teaching method was evaluated using a retrospective pre-post questionnaire. Students’ performance was assessed using a Photo Quiz and an Objective Structured Clinical Examination (OSCE). Feedback about the addition of the images based class was collected from students. Results: A significant improvement was observed in the self-assessment scores following images based teaching. The mean OSCE score was 6.26/10, and that of the Photo Quiz was 13.6/20. Conclusion: This images based Dermatology teaching has proven to be an excellent supplement to routine clinical cases based teaching. PMID:26677267

  16. Cooperative processes in image segmentation

    NASA Technical Reports Server (NTRS)

    Davis, L. S.

    1982-01-01

    Research into the role of cooperative, or relaxation, processes in image segmentation is surveyed. Cooperative processes can be employed at several levels of the segmentation process as a preprocessing enhancement step, during supervised or unsupervised pixel classification and, finally, for the interpretation of image segments based on segment properties and relations.

  17. Voyager image processing at the Image Processing Laboratory

    NASA Technical Reports Server (NTRS)

    Jepsen, P. L.; Mosher, J. A.; Yagi, G. M.; Avis, C. C.; Lorre, J. J.; Garneau, G. W.

    1980-01-01

    This paper discusses new digital processing techniques as applied to the Voyager Imaging Subsystem and devised to explore atmospheric dynamics, spectral variations, and the morphology of Jupiter, Saturn and their satellites. Radiometric and geometric decalibration processes, the modulation transfer function, and processes to determine and remove photometric properties of the atmosphere and surface of Jupiter and its satellites are examined. It is exhibited that selected images can be processed into 'approach at constant longitude' time lapse movies which are useful in observing atmospheric changes of Jupiter. Photographs are included to illustrate various image processing techniques.

  18. Industrial Applications of Image Processing

    NASA Astrophysics Data System (ADS)

    Ciora, Radu Adrian; Simion, Carmen Mihaela

    2014-11-01

    The recent advances in sensor quality and processing power provide us with excellent tools for designing more complex image processing and pattern recognition tasks. In this paper we review the existing applications of image processing and pattern recognition in industrial engineering. First we define the role of vision in an industrial setting. Then an overview of some image processing techniques is presented: feature extraction, object recognition, and industrial robotic guidance. Moreover, examples of implementations of such techniques in industry are presented. Such implementations include automated visual inspection, process control, part identification, and robot control. Finally, we present some conclusions regarding the investigated topics and directions for future investigation.

  19. Image Processing and Geographic Information

    NASA Astrophysics Data System (ADS)

    McLeod, Ronald G.; Daily, Julie; Kiss, Kenneth

    1985-12-01

    A Geographic Information System, which is a product of System Development Corporation's Image Processing System and a commercially available Data Base Management System, is described. The architecture of the system allows raster (image) data type, graphics data type, and tabular data type input and provides for the convenient analysis and display of spatial information. A variety of functions are supported through the Geographic Information System including ingestion of foreign data formats, image polygon encoding, image overlay, image tabulation, costatistical modelling of image and tabular information, and tabular to image conversion. The report generator in the DBMS is utilized to prepare quantitative tabular output extracted from spatially referenced images. An application of the Geographic Information System to a variety of data sources and types is highlighted. The application utilizes sensor image data, graphically encoded map information available from government sources, and statistical tables.

  20. SWNT Imaging Using Multispectral Image Processing

    NASA Astrophysics Data System (ADS)

    Blades, Michael; Pirbhai, Massooma; Rotkin, Slava V.

    2012-02-01

    A flexible optical system was developed to image carbon single-wall nanotube (SWNT) photoluminescence using the multispectral capabilities of a typical CCD camcorder. The built in Bayer filter of the CCD camera was utilized, using OpenCV C++ libraries for image processing, to decompose the image generated in a high magnification epifluorescence microscope setup into three pseudo-color channels. By carefully calibrating the filter beforehand, it was possible to extract spectral data from these channels, and effectively isolate the SWNT signals from the background.
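The channel decomposition step can be sketched as follows, assuming an RGGB Bayer layout (the camcorder in the paper may use a different pattern, and the spectral calibration step is omitted).

```python
import numpy as np

def bayer_channels(raw):
    """Split a raw sensor mosaic into three half-resolution pseudo-color
    channels, assuming an RGGB Bayer pattern:
        R G R G ...
        G B G B ...
    The two green sites per 2x2 cell are averaged."""
    r = raw[0::2, 0::2]
    g = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2.0
    b = raw[1::2, 1::2]
    return r, g, b

# Synthetic mosaic: R sites = 3, G sites = 1, B sites = 2.
raw = np.zeros((4, 4))
raw[0::2, 0::2] = 3.0
raw[0::2, 1::2] = 1.0
raw[1::2, 0::2] = 1.0
raw[1::2, 1::2] = 2.0
r, g, b = bayer_channels(raw)
```

With a per-channel spectral calibration, ratios between these three planes can then be used to separate a narrow-band emitter such as SWNT photoluminescence from broadband background.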

  1. Trends In Microcomputer Image Processing

    NASA Astrophysics Data System (ADS)

    Strum, William E.

    1988-05-01

    We have seen, in the last four years, the microcomputer become the platform of choice for many image processing applications. By 1991, Frost and Sullivan forecasts that 75% of all image processing will be carried out on microcomputers. Many factors have contributed to this trend and are discussed in this paper.

  2. Design of smart imagers with image processing

    NASA Astrophysics Data System (ADS)

    Serova, Evgeniya N.; Shiryaev, Yury A.; Udovichenko, Anton O.

    2005-06-01

    This paper is devoted to the creation of novel CMOS APS imagers with focal-plane parallel image preprocessing for smart technical vision and electro-optical systems based on neural implementation. From an analysis of the main features of biological vision, the desired characteristics of artificial vision are defined, and the image processing tasks that can be implemented by smart focal-plane preprocessing CMOS imagers with neural networks are determined. The results are important for medicine and for aerospace ecological monitoring, and inform ways of implementing neural nets in CMOS APS devices. To reduce real image preprocessing time, special methods based on edge detection and neighboring frame subtraction are considered and simulated. To select optimal methods and mathematical operators for edge detection, various medical, technical and aerospace images are tested. An important research direction is the analogue implementation of the main preprocessing operations (addition, subtraction, neighboring frame subtraction, modulus, and edge detection of pixel signals) in the focal plane of CMOS APS imagers. We present the following results: an edge detection algorithm suited to analogue realization, and patented focal-plane circuits for analogue image preprocessing (edge detection and motion detection).

  3. Image processing with LERBS

    NASA Astrophysics Data System (ADS)

    Dalmo, Rune; Bratlie, Jostein; Zanaty, Peter

    2014-12-01

    We investigate the performance of image compression using a custom transform, related to the discrete cosine transform, where the shape of the waveform basis function can be adjusted via setting a shape parameter. A strategy for generating quantization tables for various shapes of the basis function, including the cosine function, is proposed.

  4. Industrial applications of process imaging and image processing

    NASA Astrophysics Data System (ADS)

    Scott, David M.; Sunshine, Gregg; Rosen, Lou; Jochen, Ed

    2001-02-01

    Process imaging is the art of visualizing events inside closed industrial processes. Image processing is the art of mathematically manipulating digitized images to extract quantitative information about such processes. Ongoing advances in camera and computer technology have made it feasible to apply these abilities to measurement needs in the chemical industry. To illustrate the point, this paper describes several applications developed at DuPont, where a variety of measurements are based on in-line, at-line, and off-line imaging. Application areas include compounding, melt extrusion, crystallization, granulation, media milling, and particle characterization. Polymer compounded with glass fiber is evaluated by a patented radioscopic (real-time X-ray imaging) technique to measure concentration and dispersion uniformity of the glass. Contamination detection in molten polymer (important for extruder operations) is provided by both proprietary and commercial on-line systems. Crystallization in production reactors is monitored using in-line probes and flow cells. Granulation is controlled by at-line measurements of granule size obtained from image processing. Tomographic imaging provides feedback for improved operation of media mills. Finally, particle characterization is provided by a robotic system that measures individual size and shape for thousands of particles without human supervision. Most of these measurements could not be accomplished with other (non-imaging) techniques.

  5. Toolkit for parallel image processing

    NASA Astrophysics Data System (ADS)

    Squyres, Jeffery M.; Lumsdaine, Andrew; Stevenson, Robert L.

    1998-09-01

    In this paper, we present the design and implementation of a parallel image processing software library (the Parallel Image Processing Toolkit). The Toolkit not only supplies a rich set of image processing routines, it is designed principally as an extensible framework containing generalized parallel computational kernels to support image processing. Users can easily add their own image processing routines without knowledge or explicit use of the underlying data distribution mechanisms or parallel computing model. Shared memory and multi-level memory hierarchies are exploited to achieve high performance on each node, thereby minimizing overall parallel execution time. Multiple load balancing schemes have been implemented within the parallel framework that transparently distribute the computational load evenly on a distributed memory computing environment. Inside the Toolkit, a message-passing model of parallelism is designed around the Message Passing Interface standard. Experimental results are presented to demonstrate the parallel speedup obtained with the Parallel Image Processing Toolkit in a typical workstation cluster with some common image processing tasks.
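The Toolkit's actual API is not reproduced here, but the underlying data-decomposition idea (split the image into strips, process the strips concurrently, reassemble the result) can be sketched with the standard library and NumPy.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def smooth_rows(block):
    """3x1 horizontal mean filter applied to one horizontal strip."""
    padded = np.pad(block, ((0, 0), (1, 1)), mode="edge")
    return (padded[:, :-2] + padded[:, 1:-1] + padded[:, 2:]) / 3.0

def parallel_filter(img, workers=4):
    """Split the image into row strips, filter each strip concurrently,
    and stack the results back together. Because the filter only spans
    columns, no halo exchange between strips is needed."""
    strips = np.array_split(img, workers, axis=0)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return np.vstack(list(pool.map(smooth_rows, strips)))

rng = np.random.default_rng(1)
img = rng.random((32, 16))
out = parallel_filter(img)
```

A filter with vertical extent would need each strip padded with rows from its neighbors, which is exactly the kind of data-distribution detail such a toolkit hides from the user.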

  6. Astronomical Image Processing with Hadoop

    NASA Astrophysics Data System (ADS)

    Wiley, K.; Connolly, A.; Krughoff, S.; Gardner, J.; Balazinska, M.; Howe, B.; Kwon, Y.; Bu, Y.

    2011-07-01

    In the coming decade astronomical surveys of the sky will generate tens of terabytes of images and detect hundreds of millions of sources every night. With a requirement that these images be analyzed in real time to identify moving sources such as potentially hazardous asteroids or transient objects such as supernovae, these data streams present many computational challenges. In the commercial world, new techniques that utilize cloud computing have been developed to handle massive data streams. In this paper we describe how cloud computing, and in particular the map-reduce paradigm, can be used in astronomical data processing. We will focus on our experience implementing a scalable image-processing pipeline for the SDSS database using Hadoop (http://hadoop.apache.org). This multi-terabyte imaging dataset approximates future surveys such as those which will be conducted with the LSST. Our pipeline performs image coaddition in which multiple partially overlapping images are registered, integrated and stitched into a single overarching image. We will first present our initial implementation, then describe several critical optimizations that have enabled us to achieve high performance, and finally describe how we are incorporating a large in-house existing image processing library into our Hadoop system. The optimizations involve prefiltering of the input to remove irrelevant images from consideration, grouping individual FITS files into larger, more efficient indexed files, and a hybrid system in which a relational database is used to determine the input images relevant to the task. The incorporation of an existing image processing library, written in C++, presented difficult challenges since Hadoop is programmed primarily in Java. We will describe how we achieved this integration and the sophisticated image processing routines that were made feasible as a result. 
We will end by briefly describing the longer term goals of our work, namely detection and classification of transient objects and automated object classification.
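The coaddition step maps naturally onto map-reduce. The sketch below shows the idea in plain Python and NumPy; Hadoop, registration and FITS handling are all omitted, and using NaN as the no-coverage marker is an assumption for illustration.

```python
import numpy as np
from functools import reduce

def coadd(images):
    """Map-reduce style coaddition of registered, partially overlapping
    exposures on a common pixel grid: accumulate per-pixel value sums and
    coverage counts, then normalize. NaN marks pixels an exposure
    does not cover."""
    def to_pair(img):                 # "map": (value sum, coverage count)
        cov = ~np.isnan(img)
        return np.where(cov, img, 0.0), cov.astype(float)

    def merge(p, q):                  # "reduce": combine partial sums
        return p[0] + q[0], p[1] + q[1]

    total, count = reduce(merge, (to_pair(i) for i in images))
    with np.errstate(invalid="ignore"):
        return total / count          # NaN where no exposure had coverage

a = np.array([[1.0, 1.0, np.nan]])   # two toy "exposures" overlapping
b = np.array([[np.nan, 3.0, 5.0]])   # in the middle pixel only
stacked = coadd([a, b])
```

Because the merge step is associative, the partial sums can be combined in any order across reducers, which is what makes the operation a good fit for Hadoop.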

  7. Image Processing: Some Challenging Problems

    NASA Astrophysics Data System (ADS)

    Huang, T. S.; Aizawa, K.

    1993-11-01

    Image processing can be broadly defined as the manipulation of signals which are inherently multidimensional. The most common such signals are photographs and video sequences. The goals of processing or manipulation can be (i) compression for storage or transmission; (ii) enhancement or restoration; (iii) analysis, recognition, and understanding; or (iv) visualization for human observers. The use of image processing techniques has become almost ubiquitous; they find applications in such diverse areas as astronomy, archaeology, medicine, video communication, and electronic games. Nonetheless, many important problems in image processing remain unsolved. It is the goal of this paper to discuss some of these challenging problems. In Section I, we mention a number of outstanding problems. Then, in the remainder of this paper, we concentrate on one of them: very-low-bit-rate video compression. This is chosen because it involves almost all aspects of image processing.

  8. Morphological image processing techniques in thermographic imaging.

    PubMed

    Schulze, M A; Pearce, J A

    1993-01-01

    Mathematical morphology is a set algebra that defines some important new techniques in image processing. Morphological filters are closely related to order statistic and other nonlinear filters, but they are uniquely sensitive to shape. A morphological filter will preserve shapes similar to its structuring element shape while modifying dissimilar shapes. Most morphological filters are effective at removing both linear and nonlinear noise processes. However, the standard morphological operators introduce a statistical and deterministic bias to images. Fortunately, these operators exist in complementary pairs that are equally and oppositely biased. One way to alleviate the bias is to average the two complementary operators. The filters formed by such averages are the midrange filter (basic operators), the pseudomedian filter (singly compound operators) and the LOCO filter (doubly compound operators). In thermographic imaging, one often wishes to find exact temperatures or accurate isothermal contours. Therefore, techniques used to remove sensor noise and scanning artifact should not introduce bias. The LOCO filter that we have devised provides the shape control and noise suppression of morphological techniques without biasing the image. We will demonstrate the effects of different structuring element shapes on thermographic images of tissue heated by laser irradiation and electrosurgery. PMID:8329594
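The midrange filter described in the abstract, the average of the two complementary basic operators, can be sketched with NumPy as follows; the flat 3x3 structuring element is an assumption for illustration.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def midrange_filter(img, size=3):
    """Average of grayscale erosion (local minimum) and dilation (local
    maximum) over a flat square structuring element. The equal and
    opposite biases of the two operators cancel, which is the paper's
    argument for midrange-type filters."""
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    win = sliding_window_view(padded, (size, size))
    ero = win.min(axis=(-2, -1))   # erosion
    dil = win.max(axis=(-2, -1))   # dilation
    return (ero + dil) / 2.0

# A flat field passes through unchanged (no bias), while an isolated
# spike is halved rather than shifted in one direction.
flat = np.full((5, 5), 7.0)
spike = np.zeros((5, 5))
spike[2, 2] = 10.0
```

The pseudomedian and LOCO filters from the abstract follow the same pattern, but average compound openings/closings instead of the basic operators.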

  9. Sgraffito simulation through image processing

    NASA Astrophysics Data System (ADS)

    Guerrero, Roberto A.; Serón Arbeloa, Francisco J.

    2011-10-01

    This paper presents a tool for simulating the traditional Sgraffito technique through digital image processing. The tool is based on a digital image pile and a set of attributes recovered from the image at the bottom of the pile using the Streit and Buchanan multiresolution image pyramid. This technique tries to preserve the principles of artistic composition by means of the attributes of color, luminance and shape recovered from the foundation image. A couple of simulated scratching objects will establish how the recovered attributes have to be painted. Different attributes can be painted by using different scratching primitives. The resulting image will be a colorimetric composition reached from the image on the top of the pile, the color of the images revealed by scratching and the inner characteristics of each scratching primitive. The technique combines elements of image processing, art and computer graphics allowing users to make their own free compositions and providing a means for the development of visual communication skills within the user-observer relationship. The technique enables the application of the given concepts in non artistic fields with specific subject tools.

  10. Acousto-optic image processing.

    PubMed

    Balakshy, Vladimir I; Kostyuk, Dmitry E

    2009-03-01

    Acousto-optic processing of images is based on the angular selectivity of acousto-optic interaction resulting in spatial filtration of the image spectrum. We present recent theoretical and experimental investigations carried out in this field. Much attention is given to the analysis of the acousto-optic cell transfer function form depending on the crystal cut, the geometry of acousto-optic interaction, and the ultrasound frequency. Computer simulation results of the two-dimensional acousto-optic spatial filtration of some elementary images are presented. A new method of phase object visualization is suggested and examined that makes it possible to separate amplitude and phase information contained in an optical image. The potentialities of the acousto-optic image processing are experimentally demonstrated by examples of edge enhancement and optical wavefront visualization effects. PMID:19252612
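Spatial filtration of an image spectrum, which the acousto-optic cell performs optically, can be mimicked digitally by masking the 2-D Fourier transform. A hedged NumPy sketch of the edge-enhancement case (a radial high-pass mask; the cutoff value and function name are illustrative):

```python
import numpy as np

def highpass_edge_enhance(image, cutoff=0.1):
    """Edge enhancement by spatial-frequency filtering: suppress the
    low-frequency centre of the image spectrum and invert the FFT.
    This is a digital analogue of the spatial filtration an
    acousto-optic cell performs optically."""
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    rows, cols = image.shape
    y, x = np.ogrid[:rows, :cols]
    r = np.hypot(y - rows / 2, x - cols / 2)
    mask = r > cutoff * min(rows, cols)  # keep only high frequencies
    return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * mask)))
```

A uniform image comes out as zeros (the DC term is removed), while intensity edges produce a strong response.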

  11. Toward a Student-Centred Process of Teaching Arithmetic

    ERIC Educational Resources Information Center

    Eriksson, Gota

    2011-01-01

    This article describes a way toward a student-centred process of teaching arithmetic, where the content is harmonized with the students' conceptual levels. At school start, one classroom teacher is guided in recurrent teaching development meetings in order to develop teaching based on the students' prerequisites and to successively learn the…

  12. Fuzzy image processing in sun sensor

    NASA Technical Reports Server (NTRS)

    Mobasser, S.; Liebe, C. C.; Howard, A.

    2003-01-01

This paper describes how fuzzy image processing is implemented in the instrument. A comparison of the fuzzy image processing algorithm with a more conventional image processing algorithm is provided and shows that fuzzy image processing yields better accuracy than conventional image processing.

  13. Signal and Image Processing Operations

    Energy Science and Technology Software Center (ESTSC)

    1995-05-10

    VIEW is a software system for processing arbitrary multidimensional signals. It provides facilities for numerical operations, signal displays, and signal databasing. The major emphasis of the system is on the processing of time-sequences and multidimensional images. The system is designed to be both portable and extensible. It runs currently on UNIX systems, primarily SUN workstations.

  14. Teaching People and Machines to Enhance Images

    NASA Astrophysics Data System (ADS)

    Berthouzoz, Floraine Sara Martianne

    Procedural tasks such as following a recipe or editing an image are very common. They require a person to execute a sequence of operations (e.g. chop onions, or sharpen the image) in order to achieve the goal of the task. People commonly use step-by-step tutorials to learn these tasks. We focus on software tutorials, more specifically photo manipulation tutorials, and present a set of tools and techniques to help people learn, compare and automate photo manipulation procedures. We describe three different systems that are each designed to help with a different stage in acquiring procedural knowledge. Today, people primarily rely on hand-crafted tutorials in books and on websites to learn photo manipulation procedures. However, putting together a high quality step-by-step tutorial is a time-consuming process. As a consequence, many online tutorials are poorly designed which can lead to confusion and slow down the learning process. We present a demonstration-based system for automatically generating succinct step-by-step visual tutorials of photo manipulations. An author first demonstrates the manipulation using an instrumented version of GIMP (GNU Image Manipulation Program) that records all changes in interface and application state. From the example recording, our system automatically generates tutorials that illustrate the manipulation using images, text, and annotations. It leverages automated image labeling (recognition of facial features and outdoor scene structures in our implementation) to generate more precise text descriptions of many of the steps in the tutorials. A user study finds that our tutorials are effective for learning the steps of a procedure; users are 20-44% faster and make 60-95% fewer errors when using our tutorials than when using screencapture video tutorials or hand-designed tutorials. We also demonstrate a new interface that allows learners to navigate, explore and compare large collections (i.e. 
thousands) of photo manipulation tutorials based on their command-level structure. Sites such as tutorialized.com or good-tutorials.com collect tens of thousands of photo manipulation tutorials. These collections typically contain many different tutorials for the same task. For example, there are many different tutorials that describe how to recolor the hair of a person in an image. Learners often want to compare these tutorials to understand the different ways a task can be done. They may also want to identify common strategies that are used across tutorials for a variety of tasks. However, the large number of tutorials in these collections and their inconsistent formats can make it difficult for users to systematically explore and compare them. Current tutorial collections do not exploit the underlying command-level structure of tutorials, and to explore the collection users have to either page through long lists of tutorial titles or perform keyword searches on the natural language tutorial text. We present a new browsing interface to help learners navigate, explore and compare collections of photo manipulation tutorials based on their command-level structure. Our browser indexes tutorials by their commands, identifies common strategies within the tutorial collection, and highlights the similarities and differences between sets of tutorials that execute the same task. User feedback suggests that our interface is easy to understand and use, and that users find command-level browsing to be useful for exploring large tutorial collections. They strongly preferred to explore tutorial collections with our browser over keyword search. Finally, we present a framework for generating content-adaptive macros (programs) that can transfer complex photo manipulation procedures to new target images. After learners master a photo manipulation procedure, they often repeatedly apply it to multiple images. 
For example, they might routinely apply the same vignetting effect to all their photographs. This process can be very tedious, especially for procedures that involve many steps. While image manipulation programs provide basic macro authoring tools that allow users to record and then replay a sequence of operations, these macros are very brittle and cannot adapt to new images. We present a more comprehensive approach for generating content-adaptive macros that can automatically transfer operations to new target images. To create these macros, we make use of multiple training demonstrations. Specifically, we use automated image labeling and machine learning techniques to adapt the parameters of each operation to the new image content. We show that our framework is able to learn a large class of the most commonly-used manipulations using as few as 20 training demonstrations. Our content-adaptive macros allow users to transfer photo manipulation procedures with a single button click and thereby significantly simplify repetitive procedures.

  15. Peer Observation of Teaching: A Decoupled Process

    ERIC Educational Resources Information Center

    Chamberlain, John Martyn; D'Artrey, Meriel; Rowe, Deborah-Anne

    2011-01-01

    This article details the findings of research into the academic teaching staff experience of peer observation of their teaching practice. Peer observation is commonly used as a tool to enhance a teacher's continuing professional development. Research participants acknowledged its ability to help develop their teaching practice, but they also…

  16. Differential morphology and image processing.

    PubMed

    Maragos, P

    1996-01-01

Image processing via mathematical morphology has traditionally used geometry to intuitively understand morphological signal operators and set or lattice algebra to analyze them in the space domain. We provide a unified view and analytic tools for morphological image processing that are based on ideas from differential calculus and dynamical systems. This includes ideas on using partial differential or difference equations (PDEs) to model distance propagation or nonlinear multiscale processes in images. We briefly review some nonlinear difference equations that implement discrete distance transforms and relate them to numerical solutions of the eikonal equation of optics. We also review some nonlinear PDEs that model the evolution of multiscale morphological operators and use morphological derivatives. Among the new ideas presented, we develop some general 2-D max/min-sum difference equations that model the space dynamics of 2-D morphological systems (including the distance computations) and some nonlinear signal transforms, called slope transforms, that can analyze these systems in a transform domain in ways conceptually similar to the application of Fourier transforms to linear systems. Thus, distance transforms are shown to be bandpass slope filters. We view the analysis of the multiscale morphological PDEs and of the eikonal PDE solved via weighted distance transforms as a unified area in nonlinear image processing, which we call differential morphology, and briefly discuss its potential applications to image processing and computer vision. PMID:18285181
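The min-sum difference equations mentioned above can be illustrated with the classic two-pass discrete distance transform, in which each pixel takes the minimum of its neighbours' values plus one. A small NumPy sketch using the city-block metric (illustrative, not the paper's exact formulation):

```python
import numpy as np

def cityblock_distance_transform(binary):
    """Two-pass min-sum recursion computing, for every pixel, the
    city-block distance to the nearest foreground (True) pixel --
    the kind of discrete distance transform that min-sum difference
    equations model."""
    INF = binary.size  # safe upper bound on any distance
    d = np.where(binary, 0, INF).astype(float)
    rows, cols = d.shape
    # forward pass: propagate distances from the top-left
    for i in range(rows):
        for j in range(cols):
            if i > 0:
                d[i, j] = min(d[i, j], d[i - 1, j] + 1)
            if j > 0:
                d[i, j] = min(d[i, j], d[i, j - 1] + 1)
    # backward pass: propagate distances from the bottom-right
    for i in range(rows - 1, -1, -1):
        for j in range(cols - 1, -1, -1):
            if i < rows - 1:
                d[i, j] = min(d[i, j], d[i + 1, j] + 1)
            if j < cols - 1:
                d[i, j] = min(d[i, j], d[i, j + 1] + 1)
    return d
```

Two sweeps suffice because the city-block distance can always be decomposed into a monotone path in each sweep direction.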

  17. Associative architecture for image processing

    NASA Astrophysics Data System (ADS)

    Adar, Rutie; Akerib, Avidan

    1997-09-01

    This article presents a new generation in parallel processing architecture for real-time image processing. The approach is implemented in a real time image processor chip, called the XiumTM-2, based on combining a fully associative array which provides the parallel engine with a serial RISC core on the same die. The architecture is fully programmable and can be programmed to implement a wide range of color image processing, computer vision and media processing functions in real time. The associative part of the chip is based on patented pending methodology of Associative Computing Ltd. (ACL), which condenses 2048 associative processors, each of 128 'intelligent' bits. Each bit can be a processing bit or a memory bit. At only 33 MHz and 0.6 micron manufacturing technology process, the chip has a computational power of 3 billion ALU operations per second and 66 billion string search operations per second. The fully programmable nature of the XiumTM-2 chip enables developers to use ACL tools to write their own proprietary algorithms combined with existing image processing and analysis functions from ACL's extended set of libraries.

  18. Computer processing of radiographic images

    NASA Technical Reports Server (NTRS)

    Selzer, R. H.

    1984-01-01

In the past 20 years, a substantial amount of effort has been expended on the development of computer techniques for enhancement of X-ray images and for automated extraction of quantitative diagnostic information. The historical development of these methods is described. Illustrative examples are presented and factors influencing the relative success or failure of various techniques are discussed. Some examples of current research in radiographic image processing are described.

  19. Digital processing of radiographic images

    NASA Technical Reports Server (NTRS)

    Bond, A. D.; Ramapriyan, H. K.

    1973-01-01

Some techniques are presented and the software documentation for the digital enhancement of radiographs. Both image handling and image processing operations are considered. The image handling operations dealt with are: (1) conversion of format of data from packed to unpacked and vice versa; (2) automatic extraction of image data arrays; (3) transposition and 90 deg rotations of large data arrays; (4) translation of data arrays for registration; and (5) reduction of the dimensions of data arrays by integral factors. Both the frequency and the spatial domain approaches are presented for the design and implementation of the image processing operation. It is shown that spatial domain recursive implementation of filters is much faster than nonrecursive implementations using fast Fourier transforms (FFT) for the cases of interest in this work. The recursive implementation of a class of matched filters for enhancing image signal-to-noise ratio is described. Test patterns are used to illustrate the filtering operations. The application of the techniques to radiographic images of metallic structures is demonstrated through several examples.

  20. Amateur Image Pipeline Processing using Python plus PyRAF

    NASA Astrophysics Data System (ADS)

    Green, Wayne

    2012-05-01

    A template pipeline spanning observing planning to publishing is offered as a basis for establishing a long term observing program. The data reduction pipeline encapsulates all policy and procedures, providing an accountable framework for data analysis and a teaching framework for IRAF. This paper introduces the technical details of a complete pipeline processing environment using Python, PyRAF and a few other languages. The pipeline encapsulates all processing decisions within an auditable framework. The framework quickly handles the heavy lifting of image processing. It also serves as an excellent teaching environment for astronomical data management and IRAF reduction decisions.
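The calibration core of such a pipeline can be sketched in a few lines. The example below shows only the generic bias-subtract/flat-divide step in NumPy; the actual pipeline wraps IRAF tasks through PyRAF, and the function name here is illustrative:

```python
import numpy as np

def calibrate_frame(raw, bias, flat):
    """One generic step of an astronomical reduction pipeline:
    subtract the master bias, then divide by the normalised master
    flat field. A real pipeline would also handle dark current,
    cosmic rays, and bookkeeping via IRAF/PyRAF tasks."""
    flat_norm = flat / np.median(flat)  # normalise flat to unity
    return (raw - bias) / flat_norm
```

Encapsulating the step as a pure function is what makes the pipeline auditable: the same policy is applied to every frame and can be replayed on demand.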

  1. FITS Liberator: Image processing software

    NASA Astrophysics Data System (ADS)

    Lindberg Christensen, Lars; Nielsen, Lars Holm; Nielsen, Kaspar K.; Johansen, Teis; Hurt, Robert; de Martin, David

    2012-06-01

    The ESA/ESO/NASA FITS Liberator makes it possible to process and edit astronomical science data in the FITS format to produce stunning images of the universe. Formerly a plugin for Adobe Photoshop, the current version of FITS Liberator is a stand-alone application and no longer requires Photoshop. This image processing software makes it possible to create color images using raw observations from a range of telescopes; the FITS Liberator continues to support the FITS and PDS formats, preferred by astronomers and planetary scientists respectively, which enables data to be processed from a wide range of telescopes and planetary probes, including ESO's Very Large Telescope, the NASA/ESA Hubble Space Telescope, NASA's Spitzer Space Telescope, ESA's XMM-Newton Telescope and Cassini-Huygens or Mars Reconnaissance Orbiter.
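A key step a tool like FITS Liberator performs is mapping high-dynamic-range FITS data into a displayable range with a stretch function. The sketch below shows a generic arcsinh stretch in NumPy; the parameterisation is illustrative and not FITS Liberator's exact implementation:

```python
import numpy as np

def asinh_stretch(data, beta=1.0):
    """Map raw linear pixel values into [0, 1] with an arcsinh
    stretch, which compresses bright cores while preserving faint
    detail -- one of the stretch families offered by astronomical
    display tools. `beta` controls how aggressive the stretch is."""
    span = float(data.max() - data.min()) or 1.0
    scaled = (data - data.min()) / span
    return np.arcsinh(scaled / beta) / np.arcsinh(1.0 / beta)
```

The stretch is monotone, so pixel ordering (and hence morphology) is preserved while the dynamic range is compressed.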

  2. Phase in Optical Image Processing

    NASA Astrophysics Data System (ADS)

    Naughton, Thomas J.

    2010-04-01

The use of phase has a long standing history in optical image processing, with early milestones being in the field of pattern recognition, such as VanderLugt's practical construction technique for matched filters, and (implicitly) Goodman's joint Fourier transform correlator. In recent years, the flexibility afforded by phase-only spatial light modulators and digital holography, for example, has enabled many processing techniques based on the explicit encoding and decoding of phase. One application area concerns efficient numerical computations. Pushing phase measurement to its physical limits, designs employing the physical properties of phase have ranged from the sensible to the wonderful, in some cases making computationally easy problems easier to solve and in other cases addressing mathematics' most challenging computationally hard problems. Another application area is optical image encryption, in which, typically, a phase mask modulates the fractional Fourier transformed coefficients of a perturbed input image, and the phase of the inverse transform is then sensed as the encrypted image. The inherent linearity that makes the system so elegant militates against its use as an effective encryption technique, but we show how a combination of optical and digital techniques can restore confidence in that security. We conclude with the concept of digital hologram image processing, and applications of same that are uniquely suited to optical implementation, where the processing, recognition, or encryption step operates on full field information, such as that emanating from a coherently illuminated real-world three-dimensional object.
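The phase-mask encryption scheme described above is commonly digitised as double random phase encoding: a random phase mask in the input plane, a transform, a second mask in the transform plane, and an inverse transform. A sketch using the ordinary Fourier transform (the abstract's fractional transform and exact plane conventions are not reproduced here):

```python
import numpy as np

def drpe_encrypt(image, phase1, phase2):
    """Double random phase encoding, digital form: multiply by a
    random phase mask in the input plane, Fourier transform,
    multiply by a second mask in the Fourier plane, transform back.
    The result is a complex, noise-like field."""
    field = np.fft.fft2(image * np.exp(1j * phase1))
    return np.fft.ifft2(field * np.exp(1j * phase2))

def drpe_decrypt(cipher, phase1, phase2):
    """Invert the transforms with the conjugate phase masks."""
    field = np.fft.fft2(cipher) * np.exp(-1j * phase2)
    return np.fft.ifft2(field) * np.exp(-1j * phase1)
```

Because every step is linear, a chosen-plaintext attacker can recover the masks; this linearity is exactly the weakness the abstract refers to.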

  3. Seismic Imaging Processing and Migration

    Energy Science and Technology Software Center (ESTSC)

    2000-06-26

Salvo is a 3D, finite difference, prestack, depth migration code for parallel computers. It is also capable of processing 2D and poststack data. The code requires as input a seismic dataset, a velocity model and a file of parameters that allows the user to select various options. The code uses this information to produce a seismic image. Some of the options available to the user include the application of various filters and imaging conditions. The code also incorporates phase encoding (patent applied for) to process multiple shots simultaneously.

  4. Fingerprint recognition using image processing

    NASA Astrophysics Data System (ADS)

    Dholay, Surekha; Mishra, Akassh A.

    2011-06-01

Fingerprint recognition addresses the difficult task of efficiently matching an image of a person's fingerprint against the fingerprints stored in a database. It is used in forensic science to identify criminals and in the authentication of individuals, since a fingerprint is unique to each person. The present paper describes fingerprint recognition methods using various edge detection techniques and shows how to recognise fingerprints from camera images. The method requires no special device: a simple camera suffices, so the described technique can also be used with a basic camera phone. Factors affecting the process include poor illumination, noise, viewpoint dependence, climate, and imaging conditions; because of these factors, various image enhancement techniques are applied to improve image quality and remove noise before matching. The described technique applies contour tracking to the fingerprint image, performs edge detection on the contour, and then matches the edges inside the contour.
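The edge-detection stage such a pipeline depends on can be sketched with the standard Sobel kernels; this is a generic gradient-magnitude implementation in NumPy, not the authors' code:

```python
import numpy as np

def sobel_edges(image):
    """Gradient-magnitude edge map via 3x3 Sobel kernels, the kind
    of edge-detection step a fingerprint pipeline applies before
    contour matching. Edges are found wherever intensity changes
    sharply (e.g. ridge/valley boundaries)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    padded = np.pad(image.astype(float), 1, mode="edge")
    win = np.lib.stride_tricks.sliding_window_view(padded, (3, 3))
    gx = (win * kx).sum(axis=(-2, -1))  # horizontal gradient
    gy = (win * ky).sum(axis=(-2, -1))  # vertical gradient
    return np.hypot(gx, gy)
```

Flat regions yield zero response; a vertical intensity step produces a strong response along the step column.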

  5. Computer image processing: Geologic applications

    NASA Technical Reports Server (NTRS)

    Abrams, M. J.

    1978-01-01

    Computer image processing of digital data was performed to support several geological studies. The specific goals were to: (1) relate the mineral content to the spectral reflectance of certain geologic materials, (2) determine the influence of environmental factors, such as atmosphere and vegetation, and (3) improve image processing techniques. For detection of spectral differences related to mineralogy, the technique of band ratioing was found to be the most useful. The influence of atmospheric scattering and methods to correct for the scattering were also studied. Two techniques were used to correct for atmospheric effects: (1) dark object subtraction, (2) normalization of use of ground spectral measurements. Of the two, the first technique proved to be the most successful for removing the effects of atmospheric scattering. A digital mosaic was produced from two side-lapping LANDSAT frames. The advantages were that the same enhancement algorithm can be applied to both frames, and there is no seam where the two images are joined.
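The two techniques the abstract singles out, dark object subtraction and band ratioing, are both essentially one-liners. A hedged NumPy sketch (function names are illustrative):

```python
import numpy as np

def dark_object_subtract(band):
    """Subtract the scene's darkest pixel value, treating it as pure
    atmospheric path radiance -- the dark-object correction the
    study found most successful."""
    return band - band.min()

def band_ratio(band_a, band_b, eps=1e-6):
    """Pixel-wise ratio of two atmosphere-corrected bands. Ratios
    cancel multiplicative illumination effects (topographic shading)
    and highlight spectral differences related to mineralogy."""
    return dark_object_subtract(band_a) / (dark_object_subtract(band_b) + eps)
```

Correcting for the additive haze term before ratioing matters: without it, the additive offset differs between bands and the ratio no longer cancels illumination.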

  6. Process Approach to Teaching Writing Applied in Different Teaching Models

    ERIC Educational Resources Information Center

    Sun, Chunling; Feng, Guoping

    2009-01-01

English writing, as a basic language skill for second language learners, is being paid close attention to. How to achieve better results in English teaching and how to develop students' writing competence remain an arduous task for English teachers. Based on the review of the concerning literature from other researchers as well as a summary of the…

  7. Concept Learning through Image Processing.

    ERIC Educational Resources Information Center

    Cifuentes, Lauren; Yi-Chuan, Jane Hsieh

    This study explored computer-based image processing as a study strategy for middle school students' science concept learning. Specifically, the research examined the effects of computer graphics generation on science concept learning and the impact of using computer graphics to show interrelationships among concepts during study time. The 87…

  8. SAR Image Processing using GPU

    NASA Astrophysics Data System (ADS)

    Shanmugha Sundaram, GA; Sujith Maddikonda, Syam

Synthetic Aperture Radar (SAR) has been extensively used for space-borne Earth observation in recent times. In conventional SAR systems, analog beam-steering techniques can implement multiple operational modes, such as Stripmap, ScanSAR, and Spotlight, to fulfill different requirements in terms of spatial resolution and coverage. Future radar satellites need to resolve complex issues such as wide-area coverage and resolution, and digital beamforming (DBF) is a promising technique for overcoming these problems; in communication satellites the DBF technique is already implemented. This paper discusses the relevance of DBF in space-borne radar satellites for enhancing image quality. To implement DBF in SAR, processing of the SAR data is an important step, and this work focuses on processing Level 1.1 and 1.5 SAR image data. SAR raw data is computationally intensive to process, so high-performance computing (HPC) is necessary. The relevance of HPC for SAR data processing using an off-the-shelf graphics processing unit (GPU) rather than the CPU is discussed, and quantitative estimates of SAR image-processing performance on both CPU and GPU are provided as validation of the results.

  9. Enhancing the Teaching-Learning Process: A Knowledge Management Approach

    ERIC Educational Resources Information Center

    Bhusry, Mamta; Ranjan, Jayanthi

    2012-01-01

Purpose: The purpose of this paper is to emphasize the need for knowledge management (KM) in the teaching-learning process in technical educational institutions (TEIs) in India, and to assert the impact of information technology (IT) based KM intervention in the teaching-learning process. Design/methodology/approach: The approach of the paper is…

  10. Using Mathematica to Teach Process Units: A Distillation Case Study

    ERIC Educational Resources Information Center

    Rasteiro, Maria G.; Bernardo, Fernando P.; Saraiva, Pedro M.

    2005-01-01

    The question addressed here is how to integrate computational tools, namely interactive general-purpose platforms, in the teaching of process units. Mathematica has been selected as a complementary tool to teach distillation processes, with the main objective of leading students to achieve a better understanding of the physical phenomena involved…

  12. Image processing applications in NDE

    SciTech Connect

    Morris, R.A.

    1980-01-01

Nondestructive examination (NDE) can be defined as a technique or collection of techniques that permits one to determine some property of a material or object without damaging the object. There are a large number of such techniques and most of them use visual imaging in one form or another. They vary from holographic interferometry, where displacements under stress are measured, to the visual inspection of an object's surface to detect cracks after penetrant has been applied. The use of image processing techniques on the images produced by NDE is relatively new and can be divided into three general categories: classical image enhancement, mensuration techniques, and quantitative sensitometry. An example is discussed of how image processing techniques are used to nondestructively and destructively test a product throughout its life cycle. The product that will be followed is the microballoon target used in the laser fusion program. The laser target is a small (50 to 100 µm dia) glass sphere with a typical wall thickness of 0.5 to 6 µm. The sphere may be used as is or may be given a number of coatings of any number of materials. The beads are mass produced by the millions, and the first nondestructive test is to separate the obviously bad beads (broken or incomplete) from the good ones. After this has been done, the good beads must be inspected for sphericity and wall thickness uniformity. The microradiography of the uncoated glass bead is performed on a specially designed low-energy x-ray machine. The beads are mounted in a special jig and placed on a Kodak high-resolution plate in a vacuum chamber that contains the x-ray source. The x-ray image is made with an energy of less than 2 keV and the resulting images are then inspected at a magnification of 500 to 1000X. Some typical results are presented.

  13. ImageJ: Image processing and analysis in Java

    NASA Astrophysics Data System (ADS)

    Rasband, W. S.

    2012-06-01

    ImageJ is a public domain Java image processing program inspired by NIH Image. It can display, edit, analyze, process, save and print 8-bit, 16-bit and 32-bit images. It can read many image formats including TIFF, GIF, JPEG, BMP, DICOM, FITS and "raw". It supports "stacks", a series of images that share a single window. It is multithreaded, so time-consuming operations such as image file reading can be performed in parallel with other operations.

  14. Chemistry Graduate Teaching Assistants' Experiences in Academic Laboratories and Development of a Teaching Self-image

    NASA Astrophysics Data System (ADS)

    Gatlin, Todd Adam

    Graduate teaching assistants (GTAs) play a prominent role in chemistry laboratory instruction at research based universities. They teach almost all undergraduate chemistry laboratory courses. However, their role in laboratory instruction has often been overlooked in educational research. Interest in chemistry GTAs has been placed on training and their perceived expectations, but less attention has been paid to their experiences or their potential benefits from teaching. This work was designed to investigate GTAs' experiences in and benefits from laboratory instructional environments. This dissertation includes three related studies on GTAs' experiences teaching in general chemistry laboratories. Qualitative methods were used for each study. First, phenomenological analysis was used to explore GTAs' experiences in an expository laboratory program. Post-teaching interviews were the primary data source. GTAs experiences were described in three dimensions: doing, knowing, and transferring. Gains available to GTAs revolved around general teaching skills. However, no gains specifically related to scientific development were found in this laboratory format. Case-study methods were used to explore and illustrate ways GTAs develop a GTA self-image---the way they see themselves as instructors. Two general chemistry laboratory programs that represent two very different instructional frameworks were chosen for the context of this study. The first program used a cooperative project-based approach. The second program used weekly, verification-type activities. End of the semester interviews were collected and served as the primary data source. A follow-up case study of a new cohort of GTAs in the cooperative problem-based laboratory was undertaken to investigate changes in GTAs' self-images over the course of one semester. Pre-semester and post-semester interviews served as the primary data source. 
Findings suggest that GTAs' construction of their self-image is shaped through the interaction of 1) prior experiences, 2) training, 3) beliefs about the nature of knowledge, 4) beliefs about the nature of laboratory work, and 5) involvement in the laboratory setting. Further, GTAs' self-images are malleable and susceptible to change through their laboratory teaching experiences. Overall, this dissertation contributes to chemistry education by providing a model useful for exploring GTAs' development of a self-image in laboratory teaching. This work may assist laboratory instructors and coordinators in reconsidering, when applicable, GTA training and support. It also holds considerable implications for how teaching experiences are conceptualized as part of the chemistry graduate education experience. Findings suggest that appropriate teaching experiences may contribute toward better preparing graduate students for their journey in becoming scientists.

  15. Image Processing and Data Analysis

    NASA Astrophysics Data System (ADS)

    Starck, Jean-Luc; Murtagh, Fionn D.; Bijaoui, Albert

    1998-07-01

    Powerful techniques have been developed in recent years for the analysis of digital data, especially the manipulation of images. This book provides an in-depth introduction to a range of these innovative, avant-garde data-processing techniques. It develops the reader's understanding of each technique and then shows with practical examples how they can be applied to improve the skills of graduate students and researchers in astronomy, electrical engineering, physics, geophysics and medical imaging. What sets this book apart from others on the subject is the complementary blend of theory and practical application. Throughout, it is copiously illustrated with real-world examples from astronomy, electrical engineering, remote sensing and medicine. It also shows how many, more traditional, methods can be enhanced by incorporating the new wavelet and multiscale methods into the processing. For graduate students and researchers already experienced in image processing and data analysis, this book provides an indispensable guide to a wide range of exciting and original data-analysis techniques.

  16. Magnetic resonance imaging simulator: a teaching tool for radiology.

    PubMed

    Rundle, D; Kishore, S; Seshadri, S; Wehrli, F

    1990-11-01

The increasing use of magnetic resonance imaging (MRI) as a clinical modality has put an enormous burden on medical institutions to cost-effectively teach MRI scanning techniques to technologists and physicians. Since MRI scanner time is a scarce resource, it would be ideal if the teaching could be performed effectively off-line. To meet this goal, the Radiology Department at the University of Pennsylvania has designed and developed a magnetic resonance imaging simulator. The simulator in its current implementation mimics the user interface for image acquisition of the General Electric Signa scanner (General Electric Magnetic Resonance Imaging System, Milwaukee, WI); the design is general enough to be applied to other MRI scanners. One unique feature of the simulator is its incorporation of an image-synthesis module that permits the user to derive images for any arbitrary combination of pulsing parameters for spin-echo, gradient-echo, and inversion recovery pulse sequences. These images are computed in 5 seconds. The development platform chosen is a standard Apple Macintosh II (Apple Computer, Inc, Cupertino, CA) computer with no specialized hardware peripherals. The user interface is implemented in HyperCard (Apple Computer Inc, Cupertino, CA); all other software development, including the synthesis and display functions, is implemented under the Macintosh Programmer's Workshop 'C' environment. The scan parameters, demographics, and images are tracked using an Oracle (Oracle Corp, Redwood Shores, CA) database. Images are currently stored on magnetic disk but could be stored on optical media with minimal effort. PMID:2085559
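An image-synthesis module of this kind rests on the textbook spin-echo signal equation, S = PD · (1 − e^(−TR/T1)) · e^(−TE/T2), which lets an image be computed for any TR/TE combination from tissue-parameter maps. A sketch (the textbook approximation, not the simulator's exact code):

```python
import numpy as np

def spin_echo_signal(pd, t1, t2, tr, te):
    """Standard spin-echo signal equation used by MRI simulators to
    synthesise an image for arbitrary TR/TE from tissue parameters:
    proton density (pd) and relaxation times T1, T2. Works
    element-wise on whole parameter maps, so a full image is one
    vectorised call."""
    return pd * (1.0 - np.exp(-tr / t1)) * np.exp(-te / t2)
```

With TR much longer than T1 and TE near zero, the signal approaches pure proton density; shortening TR introduces T1 weighting, and lengthening TE introduces T2 weighting.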

  17. Factors Causing Demotivation in EFL Teaching Process: A Case Study

    ERIC Educational Resources Information Center

    Aydin, Selami

    2012-01-01

    Studies have mainly focused on strategies to motivate teachers or the student-teacher motivation relationships rather than teacher demotivation in the English as a foreign language (EFL) teaching process, whereas no data have been found on the factors that cause teacher demotivation in the Turkish EFL teaching contexts at the elementary education…

  18. Fractional Modeling Method of Cognition Process in Teaching Evaluation

    NASA Astrophysics Data System (ADS)

    Zhao, Chunna; Wu, Minhua; Zhao, Yu; Luo, Liming; Li, Yingshun

    In some assessment and decision systems, the cognition process is translated into other quantitative indicators. This paper proposes a fractional model of the cognition process for teaching evaluation. The model is built on fractional calculus theory combined with features of classroom teaching. The fractional coefficient is determined by the actual course information, and a student self-parameter is set from the actual potential of each individual student. A detailed description is given through a block diagram. The fractional cognition process model provides an objective quantitative description, and teaching quality assessments based on it will be more objective and accurate.
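
    The abstract does not give the model's equations; as a generic illustration of the kind of discrete fractional-order operator such fractional-calculus models are built on, here is a Grünwald-Letnikov fractional difference (function names and values are illustrative, not from the paper).

```python
def gl_weights(alpha, n):
    """First n Grunwald-Letnikov weights (-1)^k * C(alpha, k), via the
    recurrence w_k = w_{k-1} * (1 - (alpha + 1) / k)."""
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / k))
    return w

def fractional_difference(x, alpha, h=1.0):
    """Discrete fractional derivative of order alpha of the sequence x,
    with step size h."""
    w = gl_weights(alpha, len(x))
    return [sum(w[k] * x[n - k] for k in range(n + 1)) / h ** alpha
            for n in range(len(x))]
```

    For alpha = 1 this reduces to the ordinary first difference, and for alpha = 0 to the identity; non-integer orders interpolate between the two, which is what gives fractional models their extra degree of freedom.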

  19. Teaching about the Physics of Medical Imaging

    NASA Astrophysics Data System (ADS)

    Zollman, Dean; McBride, Dyan; Murphy, Sytil; Aryal, Bijaya; Kalita, Spartak; Wirjawan, Johannes v. d.

    2010-07-01

    Even before the discovery of X-rays, attempts at non-invasive medical imaging required an understanding of fundamental principles of physics. Students frequently do not see these connections because they are not taught in beginning physics courses. To help students understand that physics and medical imaging are closely connected, we have developed a series of active learning units. For each unit we begin by studying how students transfer their knowledge from traditional physics classes and everyday experiences to medical applications. Then, we build instructional materials to take advantage of the students' ability to use their existing learning and knowledge resources. Each of the learning units involves a combination of hands-on activities, which present analogies, and interactive computer simulations. Our learning units introduce students to the contemporary imaging techniques of CT scans, magnetic resonance imaging (MRI), positron emission tomography (PET), and wavefront aberrometry. The project's web site is http://web.phys.ksu.edu/mmmm/.

  20. Image processing software for imaging spectrometry

    NASA Technical Reports Server (NTRS)

    Mazer, Alan S.; Martin, Miki; Lee, Meemong; Solomon, Jerry E.

    1988-01-01

    The paper presents a software system, Spectral Analysis Manager (SPAM), which has been specifically designed and implemented to provide the exploratory analysis tools necessary for imaging spectrometer data, using only modest computational resources. The basic design objectives are described as well as the major algorithms designed or adapted for high-dimensional images. Included in a discussion of system implementation are interactive data display, statistical analysis, image segmentation and spectral matching, and mixture analysis.
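
    The abstract lists spectral matching among SPAM's tools; one standard approach for high-dimensional spectra (not necessarily SPAM's own algorithm) is spectral-angle matching, sketched here with illustrative names and data.

```python
import numpy as np

def spectral_angle(a, b):
    """Angle in radians between two spectra; 0 means identical spectral shape
    (the measure is insensitive to overall brightness)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def best_match(pixel, library):
    """Index of the library spectrum closest in angle to the pixel spectrum."""
    return min(range(len(library)), key=lambda i: spectral_angle(pixel, library[i]))

# A pixel resembling a flat spectrum matches the flat library entry.
library = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [1.0, 1.0, 1.0]]
match = best_match([2.0, 2.1, 1.9], library)
```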

  1. Images on the Web for Astronomy Teaching: Image Repositories

    NASA Astrophysics Data System (ADS)

    Fraknoi, Andrew

    This guide lists and reviews 61 Web sites with catalogs of astronomical images that are useful for both formal and informal education. Some are general sites (including images covering many topics), whereas others are particular to one topic or one instrument. We briefly discuss getting started in using images, and copyright and fair use issues.

  2. Multicomputer processing for medical imaging

    NASA Astrophysics Data System (ADS)

    Goddard, Iain; Greene, Jonathon; Bouzas, Brian

    1995-04-01

    Medical imaging applications have growing processing requirements, and scalable multicomputers are needed to support these applications. Scalability -- performance speedup equal to the increased number of processors -- is necessary for a cost-effective multicomputer. We performed tests of performance and scalability on one through 16 processors of a RACE multicomputer using Parallel Application System (PAS) software. Data transfer and synchronization mechanisms introduced minimal overhead to the multicomputer's performance. We implemented magnetic resonance (MR) image reconstruction and multiplanar reformatting (MPR) algorithms and demonstrated high scalability: the 16-processor configuration was 80% to 90% efficient, and the smaller configurations had higher efficiencies. Our experience is that PAS is a robust and high-productivity tool for developing scalable multicomputer applications.
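
    The efficiency figure quoted is the standard speedup-per-processor ratio; a two-line helper (illustrative, not from the paper) makes the definition concrete and reproduces the 80% case.

```python
def parallel_efficiency(t_serial, t_parallel, n_procs):
    """Parallel efficiency: speedup (serial time / parallel time) divided by
    the number of processors; 1.0 is perfect linear scaling."""
    return (t_serial / t_parallel) / n_procs

# A hypothetical 16 s serial job finishing in 1.25 s on 16 processors
# corresponds to the 80% efficiency quoted for the 16-processor case.
eff = parallel_efficiency(16.0, 1.25, 16)
```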

  3. Image Processing: A State-of-the-Art Way to Learn Science.

    ERIC Educational Resources Information Center

    Raphael, Jacqueline; Greenberg, Richard

    1995-01-01

    Teachers participating in the Image Processing for Teaching project, begun at the University of Arizona's Lunar and Planetary Laboratory in 1989, find this technology ideal for encouraging student discovery, promoting constructivist science or math experiences, and adapting in classrooms. Because image processing is not a computerized text, it…

  4. Multispectral Image Processing for Plants

    NASA Technical Reports Server (NTRS)

    Miles, Gaines E.

    1991-01-01

    The development of a machine vision system to monitor plant growth and health is one of three essential steps towards establishing an intelligent system capable of accurately assessing the state of a controlled ecological life support system for long-term space travel. Besides a network of sensors, simulators are needed to predict plant features, and artificial intelligence algorithms are needed to determine the state of a plant based life support system. Multispectral machine vision and image processing can be used to sense plant features, including health and nutritional status.

  5. Using Classic and Contemporary Visual Images in Clinical Teaching.

    ERIC Educational Resources Information Center

    Edwards, Janine C.

    1990-01-01

    The patient's body is an image that medical students and residents use to process information. The classic use of images using the patient is qualitative and personal. The contemporary use of images is quantitative and impersonal. The contemporary use of imaging includes radiographic, nuclear, scintigraphic, and nuclear magnetic resonance…

  7. Teaching with "Voix et Images de France"

    ERIC Educational Resources Information Center

    Marrow, G. D.

    1970-01-01

    A report on the classroom use of "Voix et Images de France," the French text prepared by the Centre de Recherche et d'Etude pour la Diffusion du Francais (CREDIF) at the Ecole Normale Superieure de Saint-Cloud in France. (FB)

  8. Processing of medical images using Maple

    NASA Astrophysics Data System (ADS)

    Toro Betancur, V.

    2013-05-01

    Maple's Image Tools package was used to process medical images. The results were clearer images, together with records of their intensities and entropy. The medical images of a rhinocerebral mucormycosis patient, who was not diagnosed early, were processed and analyzed using Maple's tools, which showed more clearly the affected parts in the paranasal cavities.
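
    The entropy record mentioned can be illustrated with a histogram-based Shannon entropy, shown here as a generic sketch in Python rather than Maple's Image Tools (the function name is an assumption).

```python
import numpy as np

def image_entropy(img, bins=256):
    """Shannon entropy (in bits) of an image's gray-level histogram.
    Higher entropy means intensities are spread over more levels."""
    hist, _ = np.histogram(np.asarray(img).ravel(), bins=bins, range=(0, bins))
    p = hist / hist.sum()
    p = p[p > 0]                      # 0 * log(0) is taken as 0
    return float(-(p * np.log2(p)).sum())
```

    A flat image has entropy 0, and an image split evenly between two gray levels has entropy 1 bit, so the measure tracks how much intensity detail a processing step reveals.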

  9. Concurrent Image Processing Executive (CIPE)

    NASA Technical Reports Server (NTRS)

    Lee, Meemong; Cooper, Gregory T.; Groom, Steven L.; Mazer, Alan S.; Williams, Winifred I.

    1988-01-01

    The design and implementation of a Concurrent Image Processing Executive (CIPE), which is intended to become the support system software for a prototype high performance science analysis workstation, are discussed. The target machine for this software is a JPL/Caltech Mark IIIfp Hypercube hosted by either a MASSCOMP 5600 or a Sun-3 or Sun-4 workstation; however, the design will accommodate other concurrent machines of similar architecture, i.e., local-memory, multiple-instruction-multiple-data (MIMD) machines. The CIPE system provides both a multimode user interface and an applications programmer interface, and has been designed around four loosely coupled modules: (1) user interface, (2) host-resident executive, (3) hypercube-resident executive, and (4) application functions. The loose coupling between modules allows modification of a particular module without significantly affecting the other modules in the system. To enhance hypercube memory utilization and to allow expansion of image processing capabilities, a specialized program management method, incremental loading, was devised. To minimize data transfer between host and hypercube, a data management method which distributes, redistributes, and tracks data set information was implemented.

  10. Teaching Science: A Picture Perfect Process.

    ERIC Educational Resources Information Center

    Leyden, Michael B.

    1994-01-01

    Explains how teachers can use graphs and graphing concepts when teaching art, language arts, history, social studies, and science. Students can graph the lifespans of the Ninja Turtles' Renaissance namesakes (Donatello, Michelangelo, Raphael, and Leonardo da Vinci) or world population growth. (MDM)

  11. A Triangular Teaching Process in Mass Communication.

    ERIC Educational Resources Information Center

    Allen, Ed L.

    From August 1974 to June 1975, three schools in the Owensboro, Kentucky area cooperated in an effort to provide an instructional program in mass communications to high school students. A four-year comprehensive high school, a vocational school, and Kentucky Wesleyan College pooled teaching staff, equipment, and facility resources in a course…

  12. The Tao of Teaching: Romance and Process.

    ERIC Educational Resources Information Center

    Schindler, Stefan

    1991-01-01

    Because college teaching aims to elevate, not entertain, it must be nourished and appreciated as a pedagogical alchemy mixing facts and feelings, ideas and skills, history and mystery. The current debate on educational reform should focus more on quality of learning experience, and on how to create and sustain it. (MSE)

  13. Law and Pop Culture: Teaching and Learning about Law Using Images from Popular Culture.

    ERIC Educational Resources Information Center

    Joseph, Paul R.

    2000-01-01

    Believes that using popular culture images of law, lawyers, and the legal system is an effective way for teaching about real law. Offers examples of incorporating popular culture images when teaching about law. Includes suggestions for teaching activities, a mock trial based on Dr. Seuss's book "Yertle the Turtle," and additional resources. (CMK)

  14. Using a Cognitive-Process Approach To Teach Social Skills.

    ERIC Educational Resources Information Center

    Collet-Klingenberg, Lana; Chadsey-Rusch, Janis

    This study evaluated a cognitive-process approach used to train three secondary-aged students with moderate mental retardation on a social skill involving response to criticism. The cognitive-process approach teaches a generative process of social behavior rather than specific component behaviors; relies on receptive and expressive language…

  15. What Should Schools Teach? Issues of Process and Content.

    ERIC Educational Resources Information Center

    Perrone, Vito

    1988-01-01

    When discussing what schools should teach, questions of both content and process must be addressed. Although many observers believe that a fixed content should be learned, it is impossible to separate content and process. In the process of education, experiences build on each other. This fact should cause educators to question the continuities…

  16. Image enhancement based on gamma map processing

    NASA Astrophysics Data System (ADS)

    Tseng, Chen-Yu; Wang, Sheng-Jyh; Chen, Yi-An

    2010-05-01

    This paper proposes a novel image enhancement technique based on Gamma Map Processing (GMP). In this approach, a base gamma map is directly generated according to the intensity image. After that, a sequence of gamma map processing steps is performed to generate a channel-wise gamma map. By mapping through the estimated gamma, the detail, colorfulness, and sharpness of the original image are automatically improved. In addition, the dynamic range of the images can be virtually expanded.
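
    The gamma-map estimation step is not described in the abstract; the sketch below shows only the final mapping stage, applying a per-pixel (or per-channel) gamma map to an 8-bit image. The function name and scaling conventions are assumptions.

```python
import numpy as np

def apply_gamma_map(img, gamma_map):
    """Apply a per-pixel gamma map to an 8-bit image: out = in ** gamma,
    computed on intensities normalized to [0, 1].
    gamma < 1 brightens a pixel, gamma > 1 darkens it."""
    x = np.clip(np.asarray(img, float) / 255.0, 0.0, 1.0)
    out = x ** np.asarray(gamma_map, float)
    return (out * 255.0).round().astype(np.uint8)
```

    Estimating a different gamma per pixel and per channel, rather than one global value, is what lets the method adapt enhancement locally.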

  17. Ethical implications of digital images for teaching and learning purposes: an integrative review

    PubMed Central

    Kornhaber, Rachel; Betihavas, Vasiliki; Baber, Rodney J

    2015-01-01

    Background Digital photography has simplified the process of capturing and utilizing medical images. The process of taking high-quality digital photographs has been recognized as efficient, timely, and cost-effective. In particular, the evolution of smartphones and comparable technologies has become a vital component in the teaching and learning of health care professionals. However, ethical standards in relation to digital photography for teaching and learning have not always been of the highest standard. The inappropriate utilization of digital images within the health care setting has the capacity to compromise patient confidentiality and increase the risk of litigation. Therefore, the aim of this review was to investigate the literature concerning the ethical implications for health professionals utilizing digital photography for teaching and learning. Methods A literature search was conducted utilizing five electronic databases, PubMed, Embase (Excerpta Medica Database), Cumulative Index to Nursing and Allied Health Literature, Educational Resources Information Center, and Scopus, limited to the English language. Studies that endeavored to evaluate the ethical implications of digital photography for teaching and learning purposes in the health care setting were included. Results The search strategy identified 514 papers, of which nine were retrieved for full review. Four papers were excluded based on the inclusion criteria, leaving five papers for final analysis. Three key themes were developed: knowledge deficit, consent and beyond, and standards driving scope of practice. Conclusion The assimilation of evidence in this review suggests that there is value in health professionals utilizing digital photography for teaching purposes in health education. However, there is limited understanding of the processes of obtaining, storing, and using such media for teaching purposes. The review also highlighted disparities in the identification and development of policies and guidelines in clinical practice. Therefore, the implementation of policy to guide practice requires further research. PMID:26089681

  18. Combining image-processing and image compression schemes

    NASA Technical Reports Server (NTRS)

    Greenspan, H.; Lee, M.-C.

    1995-01-01

    An investigation into combining image-processing schemes, specifically an image enhancement scheme, with existing compression schemes is discussed. Results are presented for the pyramid coding scheme, the subband coding scheme, and progressive transmission. Encouraging results are demonstrated for the combination of image enhancement and pyramid image coding schemes, especially at low bit rates. Adding the enhancement scheme to progressive image transmission allows enhanced visual perception at low resolutions. In addition, further processing of the transmitted images, such as edge detection, can gain from the added image resolution via the enhancement.

  19. Applications Of Image Processing In Criminalistics

    NASA Astrophysics Data System (ADS)

    Krile, Thomas F.; Walkup, John F.; Barsallo, Adonis; Olimb, Hal; Tarng, Jaw-Horng

    1987-01-01

    A review of some basic image processing techniques for enhancement and restoration of images is given. Both digital and optical approaches are discussed. Fingerprint images are used as examples to illustrate the various processing techniques and their potential applications in criminalistics.

  20. Using Paper Helicopters to Teach Statistical Process Control

    ERIC Educational Resources Information Center

    Johnson, Danny J.

    2011-01-01

    This hands-on project uses a paper helicopter to teach students how to distinguish between common and special causes of variability when developing and using statistical process control charts. It allows the student to experience a process that is out-of-control due to imprecise or incomplete product design specifications and to discover how the…

  1. How Teachers Teach the Writing Process. Final Report.

    ERIC Educational Resources Information Center

    Perl, Sondra; And Others

    Presented in this report are the results of a three-year case study designed (1) to document what happened in the classrooms of 10 teachers who were trained in a process approach to the teaching of writing, and (2) to provide those teachers with occasions to deepen their understanding of the process approach, by collaborating with them in the…

  3. Unified Digital Image Display And Processing System

    NASA Astrophysics Data System (ADS)

    Horii, Steven C.; Maguire, Gerald Q.; Noz, Marilyn E.; Schimpf, James H.

    1981-11-01

    Our institution, like many others, is faced with a proliferation of medical imaging techniques. Many of these methods give rise to digital images (e.g., digital radiography, computerized tomography (CT), nuclear medicine, and ultrasound). We feel that a unified, digital-system approach to image management (storage, transmission, and retrieval), image processing, and image display will help in integrating these new modalities into the present diagnostic radiology operations. Future techniques are likely to employ digital images, so such a system could readily be expanded to include other image sources. We presently have the core of such a system. We can both view and process digital nuclear medicine (conventional gamma camera) images, positron emission tomography (PET), and CT images on a single system. Images from our recently installed digital radiographic unit can be added. Our paper describes our present system, explains the rationale for its configuration, and describes the directions in which it will expand.

  4. Amplitude image processing by diffractive optics.

    PubMed

    Cagigal, Manuel P; Valle, Pedro J; Canales, V F

    2016-02-22

    In contrast to standard digital image processing, which operates on the detected image intensity, we propose to perform amplitude image processing. Amplitude processing, such as low-pass or high-pass filtering, is carried out using diffractive optical elements (DOE), since they allow one to operate on the complex field amplitude before it has been detected. We show the procedure for designing the DOE that corresponds to each operation. Furthermore, we analyze the performance of amplitude image processing. In particular, a DOE Laplacian filter is applied to simulated astronomical images to detect two stars one Airy ring apart. We also check by numerical simulations that the use of a Laplacian amplitude filter produces less noisy images than standard digital image processing. PMID:26906990

  5. Programmable remapper for image processing

    NASA Technical Reports Server (NTRS)

    Juday, Richard D. (Inventor); Sampsell, Jeffrey B. (Inventor)

    1991-01-01

    A video-rate coordinate remapper includes a memory for storing a plurality of transformations on look-up tables for remapping input images from one coordinate system to another. Such transformations are operator selectable. The remapper includes a collective processor by which certain input pixels of an input image are transformed to a portion of the output image in a many-to-one relationship. The remapper includes an interpolative processor by which the remaining input pixels of the input image are transformed to another portion of the output image in a one-to-many relationship. The invention includes certain specific transforms for creating output images useful for certain defects of visually impaired people. The invention also includes means for shifting input pixels and means for scrolling the output matrix.
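
    A software sketch of the look-up-table idea (the patented remapper itself performs this in hardware at video rate): each output pixel fetches an input pixel through operator-selectable coordinate tables. All names and the example transform are illustrative.

```python
import numpy as np

def remap(img, map_y, map_x):
    """Look-up-table remap: output pixel (i, j) is taken from input pixel
    (map_y[i, j], map_x[i, j]). Swapping the tables selects a different
    coordinate transformation without touching the image itself."""
    return img[map_y, map_x]

# One transform expressed as coordinate tables: a horizontal flip.
img = np.arange(6).reshape(2, 3)
yy, xx = np.indices(img.shape)
flipped = remap(img, yy, img.shape[1] - 1 - xx)
```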

  6. NASA Regional Planetary Image Facility image retrieval and processing system

    NASA Technical Reports Server (NTRS)

    Slavney, Susan

    1986-01-01

    The general design and analysis functions of the NASA Regional Planetary Image Facility (RPIF) image workstation prototype are described. The main functions of the MicroVAX II based workstation will be database searching, digital image retrieval, and image processing and display. The uses of the Transportable Applications Executive (TAE) in the system are described. File access and image processing programs use TAE tutor screens to receive parameters from the user and TAE subroutines are used to pass parameters to applications programs. Interface menus are also provided by TAE.

  7. Coordination in serial-parallel image processing

    NASA Astrophysics Data System (ADS)

    Wójcik, Waldemar; Dubovoi, Vladymyr M.; Duda, Marina E.; Romaniuk, Ryszard S.; Yesmakhanova, Laura; Kozbakova, Ainur

    2015-12-01

    Serial-parallel systems are used to transform images, and controlling their operation requires solving a coordination problem. The paper summarizes a model for coordinating resource allocation with respect to the task of synchronizing parallel processes; a genetic coordination algorithm is developed, and its adequacy is verified on a parallel image-processing task.

  8. Teaching the Process...with Calvin & Hobbes.

    ERIC Educational Resources Information Center

    Big6 Newsletter, 1998

    1998-01-01

    Discusses the use of Calvin & Hobbes comic strips to point out steps in the Big6 information problem-solving process. Students can see what Calvin is doing wrong and then explain how to improve the process. (LRW)

  9. Sequential Processes In Image Generation.

    ERIC Educational Resources Information Center

    Kosslyn, Stephen M.; And Others

    1988-01-01

    Results of three experiments are reported, which indicate that images of simple two-dimensional patterns are formed sequentially. The subjects included 48 undergraduates and 16 members of the Harvard University (Cambridge, Mass.) community. A new objective methodology indicates that images of complex letters require more time to generate. (TJH)

  10. Semi-automated Image Processing for Preclinical Bioluminescent Imaging

    PubMed Central

    Slavine, Nikolai V; McColl, Roderick W

    2015-01-01

    Objective Bioluminescent imaging is a valuable noninvasive technique for investigating tumor dynamics and specific biological molecular events in living animals to better understand the effects of human disease in animal models. The purpose of this study was to develop and test a strategy for automated methods of bioluminescence image processing, from data acquisition to obtaining 3D images. Methods In order to optimize this procedure, a semi-automated image processing approach with a multi-modality image handling environment was developed. To identify a bioluminescent source location and strength, we used the light flux detected on the surface of the imaged object by CCD cameras. For phantom calibration tests and object surface reconstruction, we used the MLEM algorithm. For internal bioluminescent sources, we used the diffusion approximation, balancing the internal and external intensities on the boundary of the media to determine an initial approximation for the photon fluence; we subsequently applied a novel iterative deconvolution method to obtain the final reconstruction result. Results We find that the reconstruction techniques successfully used the depth-dependent light transport approach and semi-automated image processing to provide a realistic 3D model of the lung tumor. Our image processing software can optimize and decrease the time required for volumetric imaging and quantitative assessment. Conclusion The data obtained from light phantom and lung mouse tumor images demonstrate the utility of the image reconstruction algorithms and the semi-automated approach to the bioluminescent image processing procedure. We suggest that the developed image processing approach can be applied to preclinical imaging studies: characterizing tumor growth, identifying metastases, and potentially determining the effectiveness of cancer treatment. PMID:26618187

  11. Image processing on the IBM personal computer

    NASA Technical Reports Server (NTRS)

    Myers, H. J.; Bernstein, R.

    1985-01-01

    An experimental, personal computer image processing system has been developed which provides a variety of processing functions in an environment that connects programs by means of a 'menu' for both casual and experienced users. The system is implemented by a compiled BASIC program that is coupled to assembly language subroutines. Image processing functions encompass subimage extraction, image coloring, area classification, histogramming, contrast enhancement, filtering, and pixel extraction.
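
    Of the functions listed, contrast enhancement is easy to make concrete; a minimal linear contrast stretch (a modern sketch, not the original BASIC/assembly implementation) looks like this.

```python
import numpy as np

def contrast_stretch(img, lo=0.0, hi=255.0):
    """Linear contrast enhancement: map the image's intensity range
    onto [lo, hi]."""
    x = np.asarray(img, float)
    mn, mx = x.min(), x.max()
    if mx == mn:                      # flat image: nothing to stretch
        return np.full_like(x, lo)
    return (x - mn) / (mx - mn) * (hi - lo) + lo
```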

  12. Signal processing of infrared imaging system

    NASA Astrophysics Data System (ADS)

    Li, Layuan

    1986-01-01

    The signal processing techniques of an infrared imaging system are discussed. The performance of the PEV in chopping mode and some basic design principles of the system are described. The main methods for processing the signal of an infrared imaging system are suggested. Emphasis is placed on the multiple-field accumulation and image-difference processing techniques. After the main principle of the method is described, a concrete design is put forward. Some test results are also given.
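
    The two emphasized techniques can be sketched in a few lines (a schematic illustration, not the paper's implementation): field accumulation averages frames to suppress uncorrelated noise, and image differencing removes a fixed-pattern background.

```python
import numpy as np

def accumulate_fields(frames):
    """Multiple-field accumulation: averaging N frames suppresses
    uncorrelated noise (its standard deviation drops roughly as sqrt(N))."""
    return np.mean(np.asarray(frames, float), axis=0)

def image_difference(frame, background):
    """Image-difference processing: remove a fixed-pattern background
    estimate from a frame."""
    return np.asarray(frame, float) - np.asarray(background, float)
```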

  13. Image information compression in the image recognition process: medical example

    NASA Astrophysics Data System (ADS)

    Galas, Jacek

    1995-08-01

    The effectiveness of image recognition methods is strongly dependent on the process of intelligent image information compression. This paper presents the Radon-Fourier transformation as a tool for this task. It is shown that only two slices of Radon-Fourier transformation are sufficient to classify the set of medical images. Some classification results for one transformation slice are presented. These results are compared with full 2D Fourier transformation processing.
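
    The reason a few Radon-Fourier slices can carry so much classification information is the projection-slice theorem: the 1D Fourier transform of a projection equals a central slice of the image's 2D Fourier transform. A short numerical check (illustrative, not from the paper):

```python
import numpy as np

img = np.random.default_rng(0).random((8, 8))
projection = img.sum(axis=0)            # projection along the vertical axis
slice_2d = np.fft.fft2(img)[0, :]       # matching slice of the 2D spectrum
# np.fft.fft(projection) and slice_2d agree to machine precision.
```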

  14. Image processing applied to laser cladding process

    SciTech Connect

    Meriaudeau, F.; Truchetet, F.

    1996-12-31

    The laser cladding process, which consists of adding a melt powder to a substrate in order to improve or change the behavior of the material against corrosion, fatigue and so on, involves a lot of parameters. In order to perform good tracks some parameters need to be controlled during the process. The authors present here a low cost performance system using two CCD matrix cameras. One camera provides surface temperature measurements while the other gives information relative to the powder distribution or geometric characteristics of the tracks. The surface temperature (thanks to Beer Lambert`s law) enables one to detect variations in the mass feed rate. Using such a system the authors are able to detect fluctuation of 2 to 3g/min in the mass flow rate. The other camera gives them information related to the powder distribution, a simple algorithm applied to the data acquired from the CCD matrix camera allows them to see very weak fluctuations within both gaz flux (carriage or protection gaz). During the process, this camera is also used to perform geometric measurements. The height and the width of the track are obtained in real time and enable the operator to find information related to the process parameters such as the speed processing, the mass flow rate. The authors display the result provided by their system in order to enhance the efficiency of the laser cladding process. The conclusion is dedicated to a summary of the presented works and the expectations for the future.

  15. Videos and images from 25 years of teaching compressible flow

    NASA Astrophysics Data System (ADS)

    Settles, Gary

    2008-11-01

    Compressible flow is a very visual topic due to refractive optical flow visualization and the public fascination with high-speed flight. Films, video clips, and many images are available to convey this in the classroom. An overview of this material is given and selected examples are shown, drawn from educational films, the movies, television, etc., and accumulated over 25 years of teaching basic and advanced compressible-flow courses. The impact of copyright protection and the doctrine of fair use is also discussed.

  16. Why Do We Need Image Processing?

    PubMed Central

    MacDonald, Glen

    2013-01-01

    Image processing is often viewed as arbitrarily manipulating an image to achieve an aesthetic standard or to support a preferred reality. However, image processing is more accurately defined as a means of translation between the human visual system and digital imaging devices. The human visual system does not perceive the world in the same manner as digital detectors, with display devices imposing additional noise and bandwidth restrictions. Salient differences between the human and digital detectors will be shown, along with some basic processing steps for achieving translation. Image processing must be approached in a manner consistent with the scientific method so that others may reproduce, and validate, one's results. This includes recording and reporting processing actions, and applying similar treatments to adequate control images.

  17. Image Processing in Intravascular OCT

    NASA Astrophysics Data System (ADS)

    Wang, Zhao; Wilson, David L.; Bezerra, Hiram G.; Rollins, Andrew M.

    Coronary artery disease is the leading cause of death in the world. Intravascular optical coherence tomography (IVOCT) is rapidly becoming a promising imaging modality for characterization of atherosclerotic plaques and evaluation of coronary stenting. OCT has several unique advantages over alternative technologies, such as intravascular ultrasound (IVUS), due to its better resolution and contrast. For example, OCT is currently the only imaging modality that can measure the thickness of the fibrous cap of an atherosclerotic plaque in vivo. OCT also has the ability to accurately assess the coverage of individual stent struts by neointimal tissue over time. However, it is extremely time-consuming to analyze IVOCT images manually to derive quantitative diagnostic metrics. In this chapter, we introduce some computer-aided methods to automate the common IVOCT image analysis tasks.

  18. Digital image processing of metric camera imagery

    NASA Astrophysics Data System (ADS)

    Lohmann, P.

    1985-04-01

    The use of digitized Spacelab metric camera imagery for map updating is demonstrated for an area of Germany featuring agricultural and industrial areas, and a region of the White Nile. LANDSAT and Spacelab images were combined, and digital image processing techniques used for image enhancement. Updating was achieved by semiautomatic techniques, but for many applications manual editing may be feasible.

  19. Image restoration and diffusion processes

    NASA Astrophysics Data System (ADS)

    Carasso, Alfred S.

    1993-06-01

    A new supplementary a priori constraint, the slow-evolution-from-the-boundary (SEB) constraint, sharply reduces noise contamination in a large class of space-invariant image deblurring problems that occur in medical, industrial, surveillance, environmental, and astronomical applications. The noise-suppressing properties of SEB restoration can be proved mathematically, on the basis of rigorous error bounds for the reconstruction as a function of the noise level in the blurred image data. This analysis proceeds by reformulating the image deblurring problem as an equivalent ill-posed problem for a time-reversed diffusion equation. The SEB constraint does not require smoothness of the image. An effective, fast, non-iterative procedure based on FFT algorithms may be used to compute SEB restorations. For a 512 × 512 image, the procedure requires about 45 seconds of CPU time on a Sun/sparc2. A documented deblurring experiment on an image with significant high-frequency content illustrates the computational significance of the SEB constraint by comparing SEB and Tikhonov-Miller reconstructions using optimal values of the regularization parameters.
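
    The Tikhonov-Miller-style FFT deconvolution that the abstract uses as its baseline can be sketched as follows. This is a generic regularized inverse filter, not the SEB method itself (whose boundary-evolution constraint is not reproduced here); the test image, PSF, and regularization weight `alpha` are illustrative choices.

    ```python
    import numpy as np

    def tikhonov_deblur(blurred, psf, alpha=1e-3):
        """Tikhonov-regularized deconvolution in the Fourier domain.

        Restores f from g = h * f + noise via
        F = conj(H) G / (|H|^2 + alpha), computed with FFTs.
        """
        # ifftshift moves the PSF center to the (0, 0) corner for the FFT
        H = np.fft.fft2(np.fft.ifftshift(psf), s=blurred.shape)
        G = np.fft.fft2(blurred)
        F = np.conj(H) * G / (np.abs(H) ** 2 + alpha)
        return np.real(np.fft.ifft2(F))

    # demo: blur a bright square with a small Gaussian PSF, then restore it
    x, y = np.meshgrid(np.arange(64), np.arange(64))
    img = ((x > 20) & (x < 44) & (y > 20) & (y < 44)).astype(float)
    px, py = np.meshgrid(np.arange(64) - 32, np.arange(64) - 32)
    psf = np.exp(-(px**2 + py**2) / (2 * 2.0**2))
    psf /= psf.sum()
    blurred = np.real(np.fft.ifft2(np.fft.fft2(img)
                                   * np.fft.fft2(np.fft.ifftshift(psf))))
    restored = tikhonov_deblur(blurred, psf, alpha=1e-4)
    ```

    The regularization term `alpha` suppresses noise amplification at frequencies where the blur transfer function is nearly zero, which is the ill-posedness the abstract's diffusion-equation analysis addresses.
    
    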

  20. Teaching Information Systems Development via Process Variants

    ERIC Educational Resources Information Center

    Tan, Wee-Kek; Tan, Chuan-Hoo

    2010-01-01

    Acquiring the knowledge to assemble an integrated Information System (IS) development process that is tailored to the specific needs of a project has become increasingly important. It is therefore necessary for educators to impart to students this crucial skill. However, Situational Method Engineering (SME) is an inherently complex process that…

  1. Yes! The Business Department Teaches Data Processing

    ERIC Educational Resources Information Center

    Nord, Daryl; Seymour, Tom

    1978-01-01

    After a brief discussion of the history and current status of business data processing versus computer science, this article focuses on the characteristics of a business data processing curriculum as compared to a computer science curriculum, including distinctions between the FORTRAN and COBOL programming languages. (SH)

  2. Combining advanced imaging processing and low cost remote imaging capabilities

    NASA Astrophysics Data System (ADS)

    Rohrer, Matthew J.; McQuiddy, Brian

    2008-04-01

    Target images are very important for evaluating the situation when Unattended Ground Sensors (UGS) are deployed. These images add a significant amount of information for determining the difference between hostile and non-hostile activities, the number of targets in an area, the difference between animals and people, the movement dynamics of targets, and when specific activities of interest are taking place. The imaging capabilities of UGS systems need to capture only target activity, not images without targets in the field of view. Current UGS remote imaging systems are neither optimized for target processing nor low cost. McQ describes in this paper an architectural and technological approach for significantly improving the processing of images to provide target information while reducing the cost of the intelligent remote imaging capability.

  3. Matching rendered and real world images by digital image processing

    NASA Astrophysics Data System (ADS)

    Mitjà, Carles; Bover, Toni; Bigas, Miquel; Escofet, Jaume

    2010-05-01

    Recent advances in computer-generated images (CGI) have been used in commercial and industrial photography, providing a broad scope in product advertising. Mixing real world images with those rendered from virtual space software shows a more or less visible mismatch between the corresponding image quality performances. Rendered images are produced by software whose quality is limited only by the output resolution. Real world images are taken with cameras subject to image degradation factors such as residual lens aberrations, diffraction, the sensor's low-pass anti-aliasing filter, color pattern demosaicing, etc. The combined effect of these degradation factors can be characterized by the system Point Spread Function (PSF). Because the captured image is the convolution of the object with the system PSF, characterizing the PSF quantifies the degradation added to any picture. This work explores the use of image processing to degrade the rendered images according to the measured real system PSF, attempting to match virtual and real world image qualities. The system MTF is determined by the slanted edge method both in laboratory conditions and in the real picture environment in order to compare the influence of working conditions on device performance; an approximation to the system PSF is derived from the two measurements. The rendered images are filtered through a Gaussian filter obtained from the taking system's PSF. Results with and without filtering are shown and compared by measuring the contrast achieved in different final image regions.
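
    The Gaussian-filter matching step can be sketched as below. The blur width `sigma_px` stands in for a value derived from a slanted-edge MTF measurement (how the authors convert MTF to sigma is not detailed in the abstract), and the synthetic edge image is illustrative.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def match_camera_blur(rendered, sigma_px):
        """Degrade a rendered image with a Gaussian approximation of the
        camera system PSF so it better matches real captures."""
        return gaussian_filter(rendered.astype(float), sigma=sigma_px)

    # synthetic vertical edge, the same target used in slanted-edge MTF tests
    rendered = np.zeros((32, 32))
    rendered[:, 16:] = 1.0
    # hypothetical sigma (in pixels) estimated from the measured system MTF
    matched = match_camera_blur(rendered, sigma_px=1.2)
    ```

    Filtering the perfect rendered edge produces the gradual transition a real lens-plus-sensor chain would record, which is exactly the mismatch the paper sets out to remove.
    
    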

  4. Image processing utilizing an APL interface

    NASA Astrophysics Data System (ADS)

    Zmola, Carl; Kapp, Oscar H.

    1991-03-01

    The past few years have seen the growing use of digital techniques in the analysis of electron microscope image data. This trend is driven by the need to maximize the information extracted from the electron micrograph by submitting its digital representation to the broad spectrum of analytical techniques made available by the digital computer. We are developing an image processing system for the analysis of digital images obtained with a scanning transmission electron microscope (STEM) and a scanning electron microscope (SEM). This system, run on an IBM PS/2 model 70/A21, uses menu-based image processing and an interactive APL interface which permits the direct manipulation of image data.

  5. Teaching the Dance Class: Strategies to Enhance Skill Acquisition, Mastery and Positive Self-Image

    ERIC Educational Resources Information Center

    Mainwaring, Lynda M.; Krasnow, Donna H.

    2010-01-01

    Effective teaching of dance skills is informed by a variety of theoretical frameworks and individual teaching and learning styles. The purpose of this paper is to present practical teaching strategies that enhance the mastery of skills and promote self-esteem, self-efficacy, and positive self-image. The predominant thinking and primary research…

  6. Programmable Iterative Optical Image And Data Processing

    NASA Technical Reports Server (NTRS)

    Jackson, Deborah J.

    1995-01-01

    Proposed method of iterative optical image and data processing overcomes limitations imposed by loss of optical power after repeated passes through many optical elements - especially, beam splitters. Involves selective, timed combination of optical wavefront phase conjugation and amplification to regenerate images in real time to compensate for losses in optical iteration loops; timing such that amplification turned on to regenerate desired image, then turned off so as not to regenerate other, undesired images or spurious light propagating through loops from unwanted reflections.

  7. Chemical Process Design: An Integrated Teaching Approach.

    ERIC Educational Resources Information Center

    Debelak, Kenneth A.; Roth, John A.

    1982-01-01

    Reviews a one-semester senior plant design/laboratory course, focusing on course structure, student projects, laboratory assignments, and course evaluation. Includes discussion of laboratory exercises related to process waste water and sludge. (SK)

  8. A Process-Oriented Framework for Acquiring Online Teaching Competencies

    ERIC Educational Resources Information Center

    Abdous, M'hammed

    2011-01-01

    As a multidimensional construct which requires multiple competencies, online teaching is forcing universities to rethink traditional faculty roles and competencies. With this consideration in mind, this paper presents a process-oriented framework structured around three sequential non-linear phases: (1) "before": preparing, planning, and…

  9. RDI Advising Model for Improving the Teaching-Learning Process

    ERIC Educational Resources Information Center

    de la Fuente, Jesus; Lopez-Medialdea, Ana Maria

    2007-01-01

    Introduction: Advising in Educational Psychology from the perspective of RDI takes on a stronger investigative, innovative nature. The model proposed by De la Fuente et al (2006, 2007) and Education & Psychology (2007) was applied to the field of improving teaching-learning processes at a school. Hypotheses were as follows: (1) interdependence…

  10. A Case Study of How Teaching Practice Process Takes Place

    ERIC Educational Resources Information Center

    Yalin Ucar, Meltem

    2012-01-01

    The process of "learning" carries an important role in the teaching practice which provides teacher candidates with professional development. Being responsible for the learning experiences in that level, cooperating teacher, teacher candidate, mentor and practice school are the important variables which determine the quality of the teaching…

  11. Toward a Generative Model of the Teaching-Learning Process.

    ERIC Educational Resources Information Center

    McMullen, David W.

    Until the rise of cognitive psychology, models of the teaching-learning process (TLP) stressed external rather than internal variables. Models remained general descriptions until control theory introduced explicit system analyses. Cybernetic models emphasize feedback and adaptivity but give little attention to creativity. Research on artificial…

  12. Direct Influence of English Teachers in the Teaching Learning Process

    ERIC Educational Resources Information Center

    Inamullah, Hafiz Muhammad; Hussain, Ishtiaq; Ud Din, M. Naseer

    2008-01-01

    Teachers play a vital role in the classroom environment. Interaction between teacher and students is an essential part of the teaching/learning process. An educator, Flanders originally developed an instrument called Flanders Interaction Analysis (FIA). The FIA system was designed to categorize the types, quantity of verbal interaction and direct…

  14. A Plan for Teaching Data Processing to Library Science Students.

    ERIC Educational Resources Information Center

    Losee, Robert M., Jr.

    An outline is proposed for a library school course in data processing for libraries that is different from other such courses in that it emphasizes the operations of the computer itself over the study of library computer systems. The course begins with a study of computer hardware then moves to the teaching of assembly language using the MIX…

  15. Developing Evaluative Tool for Online Learning and Teaching Process

    ERIC Educational Resources Information Center

    Aksal, Fahriye A.

    2011-01-01

    The research study aims to underline the development of a new scale on online learning and teaching process based on factor analysis. Further to this, the research study resulted in acceptable scale which embraces social interaction role, interaction behaviour, barriers, capacity for interaction, group interaction as sub-categories to evaluate…

  16. Student Evaluation of Teaching: An Instrument and a Development Process

    ERIC Educational Resources Information Center

    Alok, Kumar

    2011-01-01

    This article describes the process of faculty-led development of a student evaluation of teaching instrument at Centurion School of Rural Enterprise Management, a management institute in India. The instrument was to focus on teacher behaviors that students get an opportunity to observe. Teachers and students jointly contributed a number of…

  17. Non-linear Post Processing Image Enhancement

    NASA Technical Reports Server (NTRS)

    Hunt, Shawn; Lopez, Alex; Torres, Angel

    1997-01-01

    A non-linear filter for image post processing based on the feedforward Neural Network topology is presented. This study was undertaken to investigate the usefulness of "smart" filters in image post processing. The filter has been shown to be useful in recovering high frequencies, such as those lost during the JPEG compression-decompression process. The filtered images have a higher signal-to-noise ratio and a higher perceived image quality. Simulation studies comparing the proposed filter with the optimum mean square non-linear filter, showing examples of the high frequency recovery, and the statistical properties of the filter are given.
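
    The structure of such a patch-based feedforward filter can be sketched as below. The layer sizes, 3×3 patch window, and random (untrained) weights are assumptions for illustration; in practice the weights would be fit to pairs of compressed and original images, which is the training step the abstract implies but does not detail.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def nn_postfilter(img, W1, b1, W2, b2):
        """Apply a small feedforward network to every 3x3 patch,
        producing one filtered output pixel per patch center."""
        h, w = img.shape
        out = np.zeros((h - 2, w - 2))
        for i in range(h - 2):
            for j in range(w - 2):
                patch = img[i:i + 3, j:j + 3].ravel()   # 9 inputs
                hidden = np.tanh(W1 @ patch + b1)       # non-linear hidden layer
                out[i, j] = W2 @ hidden + b2            # linear output unit
        return out

    # untrained (random) weights, for shape/structure only
    W1, b1 = rng.normal(size=(4, 9)) * 0.1, np.zeros(4)
    W2, b2 = rng.normal(size=4) * 0.1, 0.0
    filtered = nn_postfilter(rng.random((16, 16)), W1, b1, W2, b2)
    ```

    The `tanh` hidden layer is what makes the filter non-linear; a purely linear network would collapse to an ordinary convolution kernel and could not synthesize the lost high frequencies.
    
    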

  18. How Digital Image Processing Became Really Easy

    NASA Astrophysics Data System (ADS)

    Cannon, Michael

    1988-02-01

    In the early and mid-1970s, digital image processing was the subject of intense university and corporate research. The research lay along two lines: (1) developing mathematical techniques for improving the appearance of or analyzing the contents of images represented in digital form, and (2) creating cost-effective hardware to carry out these techniques. The research has been very effective, as evidenced by the continued decline of image processing as a research topic and the rapid growth of commercial companies marketing digital image processing software and hardware.

  19. Quantitative image processing in fluid mechanics

    NASA Technical Reports Server (NTRS)

    Hesselink, Lambertus; Helman, James; Ning, Paul

    1992-01-01

    The current status of digital image processing in fluid flow research is reviewed. In particular, attention is given to a comprehensive approach to the extraction of quantitative data from multivariate databases and examples of recent developments. The discussion covers numerical simulations and experiments, data processing, generation and dissemination of knowledge, traditional image processing, hybrid processing, fluid flow vector field topology, and isosurface analysis using Marching Cubes.

  20. Water surface capturing by image processing

    Technology Transfer Automated Retrieval System (TEKTRAN)

    An alternative means of measuring the water surface interface during laboratory experiments is processing a series of sequentially captured images. Image processing can provide a continuous, non-intrusive record of the water surface profile whose accuracy is not dependent on water depth. More trad...

  1. Automatic processing, analysis, and recognition of images

    NASA Astrophysics Data System (ADS)

    Abrukov, Victor S.; Smirnov, Evgeniy V.; Ivanov, Dmitriy G.

    2004-11-01

    New approaches and computer codes (A&CC) for automatic processing, analysis, and recognition of images are offered. The A&CC are based on representing an object image as a collection of pixels of various colours and on consecutive automatic painting of the distinct parts of the image. The technical objectives of the A&CC centre on such directions as: 1) image processing, 2) image feature extraction, 3) image analysis, and others, in any sequence and combination. The A&CC allow various geometrical and statistical parameters of an object image and its parts to be obtained. Additional possibilities arise from the use of artificial neural network technologies. We believe that the A&CC can be used in creating systems of testing and control in various fields of industry and military applications (airborne imaging systems, tracking of moving objects), in medical diagnostics, in new software for CCDs, in industrial vision and decision-making systems, etc. The capabilities of the A&CC have been tested on image analysis of model fires and plumes of sprayed fluid, on ensembles of particles, on decoding of interferometric images, on digitization of paper diagrams of electrical signals, on text recognition, on noise elimination and image filtering, on analysis of astronomical images and aerial photography, and on object detection.
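
    The "consecutive automatic painting" of same-colour regions is essentially connected-component labelling, which can be sketched as below. The use of `scipy.ndimage.label` and the exact colour-matching rule are assumptions; the paper's own codes are not reproduced in the abstract.

    ```python
    import numpy as np
    from scipy import ndimage

    def paint_regions(img, color):
        """Give every connected region of pixels matching `color` its own
        label, mimicking the consecutive automatic painting step."""
        mask = np.all(img == color, axis=-1)      # pixels of the target colour
        labels, n = ndimage.label(mask)           # 4-connected components
        return labels, n

    # two separate red blobs in a 3-channel image
    img = np.zeros((10, 10, 3), dtype=int)
    img[1:3, 1:3] = [255, 0, 0]
    img[6:8, 6:9] = [255, 0, 0]
    labels, n = paint_regions(img, [255, 0, 0])
    ```

    Once each part of the image carries a distinct label, the geometrical and statistical parameters the abstract mentions (area, centroid, intensity statistics) can be computed per label.
    
    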

  2. Community Assessment in Teaching the Research Process

    ERIC Educational Resources Information Center

    Craddock, IdaMae

    2013-01-01

    Community assessment is the concept of using wider professional communities to provide authentic assessment to students. It means using the knowledge available in one's immediate surroundings and through Web 2.0 tools to enrich instructional processes. It means using retirees, experts, and volunteers from professional organizations and…

  3. Teaching Word Processing in the Library.

    ERIC Educational Resources Information Center

    Teo, Elizabeth A.; Jenkins, Sylvia M.

    A description is provided of a program developed at Moraine Valley Community College (MVCC), in Illinois, for providing word processing instruction in the library, including recommendations for program development based on MVCC experience and results from a survey of program participants. The first part of the paper discusses a model development…

  4. From image to data using common image-processing techniques.

    PubMed

    Sysko, Laura R; Davis, Michael A

    2010-10-01

    A digital microscopy image is an array of number values, which with adequate contrast can be interpreted as spatial information. Through processing and analysis by mathematical means, using computer-assisted imaging software programs, raw image data contrast can be enhanced to improve the extraction of image features for measurement and analysis. This mathematical feature extraction (referred to as segmentation) provides the basis for general image processing. The methods discussed in this unit address common image analysis challenges such as object counting with touching objects, objects within other objects, and object identification in a field with uneven illumination or uneven brightness, along with step-by-step procedures for achieving these results. PMID:20938916
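
    The uneven-illumination case mentioned above is commonly handled by estimating a smooth background and subtracting it before segmentation; a minimal sketch follows. The wide-Gaussian background model, threshold, and synthetic ramp image are assumptions, not the unit's exact procedure.

    ```python
    import numpy as np
    from scipy import ndimage

    def flatten_illumination(img, sigma=8):
        """Estimate a smooth background with a wide Gaussian and subtract
        it, so a single threshold works under uneven illumination."""
        background = ndimage.gaussian_filter(img.astype(float), sigma)
        return img - background

    # synthetic field: two bright objects on a left-to-right illumination ramp
    h = w = 64
    ramp = np.linspace(0, 50, w)[None, :].repeat(h, axis=0)
    img = ramp.copy()
    img[10:14, 10:14] += 100    # object in the dim half
    img[10:14, 50:54] += 100    # object in the bright half
    flat = flatten_illumination(img)
    labels, n = ndimage.label(flat > 40)   # both objects now segment cleanly
    ```

    Without the flattening step, a single global threshold would either miss the object in the dim half or merge the bright half's background into the foreground.
    
    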

  5. Image processing for cameras with fiber bundle image relay.

    PubMed

    Olivas, Stephen J; Arianpour, Ashkan; Stamenov, Igor; Morrison, Rick; Stack, Ron A; Johnson, Adam R; Agurok, Ilya P; Ford, Joseph E

    2015-02-10

    Some high-performance imaging systems generate a curved focal surface and so are incompatible with focal plane arrays fabricated by conventional silicon processing. One example is a monocentric lens, which forms a wide field-of-view high-resolution spherical image with a radius equal to the focal length. Optical fiber bundles have been used to couple between this focal surface and planar image sensors. However, such fiber-coupled imaging systems suffer from artifacts due to image sampling and incoherent light transfer by the fiber bundle as well as resampling by the focal plane, resulting in a fixed obscuration pattern. Here, we describe digital image processing techniques to improve image quality in a compact 126° field-of-view, 30 megapixel panoramic imager, where a 12 mm focal length F/1.35 lens made of concentric glass surfaces forms a spherical image surface, which is fiber-coupled to six discrete CMOS focal planes. We characterize the locally space-variant system impulse response at various stages: monocentric lens image formation onto the 2.5 μm pitch fiber bundle, image transfer by the fiber bundle, and sensing by a 1.75 μm pitch backside illuminated color focal plane. We demonstrate methods to mitigate moiré artifacts and local obscuration, correct for sphere to plane mapping distortion and vignetting, and stitch together the image data from discrete sensors into a single panorama. We compare processed images from the prototype to those taken with a 10× larger commercial camera with comparable field-of-view and light collection. PMID:25968031

  6. Using the Results of Teaching Evaluations to Improve Teaching: A Case Study of a New Systematic Process

    ERIC Educational Resources Information Center

    Malouff, John M.; Reid, Jackie; Wilkes, Janelle; Emmerton, Ashley J.

    2015-01-01

    This article describes a new 14-step process for using student evaluations of teaching to improve teaching. The new process includes examination of student evaluations in the context of instructor goals, student evaluations of the same course completed in prior terms, and evaluations of similar courses taught by other instructors. The process has…

  7. Characteristics of Mindless Teaching Evaluations and the Moderating Effects of Image Compatibility.

    ERIC Educational Resources Information Center

    Dunegan, Kenneth J.; Hrivnak, Mary W.

    2003-01-01

    At 3 times, 164 management students completed student evaluations of teaching (SET), 150 completed an image compatibility questionnaire, and 155 evaluated instructors' overall performance. SET scores and overall evaluations were significantly correlated only when actual and ideal images of instructors were incompatible. When teaching was…

  8. The Chromostereoscopic Process: A Novel Single Image Stereoscopic Process

    NASA Astrophysics Data System (ADS)

    Steenblik, Richard A.

    1987-06-01

    A novel stereoscopic depth encoding/decoding process has been developed which considerably simplifies the creation and presentation of stereoscopic images in a wide range of display media. The patented chromostereoscopic process is unique because the encoding of depth information is accomplished in a single image. The depth encoded image can be viewed with the unaided eye as a normal two dimensional image. The image attains the appearance of depth, however, when viewed by means of the inexpensive and compact depth decoding passive optical system. The process is compatible with photographic, printed, video, slide projected, computer graphic, and laser generated color images. The range of perceived depth in a given image can be selected by the viewer through the use of "tunable depth" decoding optics, allowing infinite and smooth tuning from exaggerated normal depth through zero depth to exaggerated inverse depth. The process is insensitive to the head position of the viewer. Depth encoding is accomplished by mapping the desired perceived depth of an image component into spectral color. Depth decoding is performed by an optical system which shifts the spatial positions of the colors in the image to create left and right views. The process is particularly well suited to the creation of stereoscopic laser shows. Other applications are also being pursued.
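
    The depth-encoding step, mapping perceived depth into spectral colour, can be illustrated with a simple hue ramp. This is only a schematic stand-in: the patented process's actual mapping and decoding optics are not described in the abstract, and the red-to-blue hue range is an assumption.

    ```python
    import colorsys
    import numpy as np

    def encode_depth_to_hue(depth):
        """Map normalized depth (0 = near ... 1 = far) onto spectral hue,
        red for near through blue for far, so a single image carries
        both the picture and its depth."""
        d = (depth - depth.min()) / (np.ptp(depth) or 1.0)
        rgb = np.zeros(depth.shape + (3,))
        for idx in np.ndindex(depth.shape):
            # hue 0.0 (red) -> 0.66 (blue) across the depth range
            rgb[idx] = colorsys.hsv_to_rgb(0.66 * d[idx], 1.0, 1.0)
        return rgb

    # demo: a left-to-right depth ramp becomes a red-to-blue colour ramp
    depth = np.linspace(0.0, 1.0, 8).reshape(1, 8)
    encoded = encode_depth_to_hue(depth)
    ```

    Viewed without optics this is an ordinary colour image; the decoding element shifts each colour laterally by a different amount, turning the hue back into binocular disparity.
    
    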

  9. CT Image Processing Using Public Digital Networks

    PubMed Central

    Rhodes, Michael L.; Azzawi, Yu-Ming; Quinn, John F.; Glenn, William V.; Rothman, Stephen L.G.

    1984-01-01

    Nationwide commercial computer communication is now commonplace for those applications where digital dialogues are generally short and widely distributed, and where bandwidth does not exceed that of dial-up telephone lines. Image processing using such networks is prohibitive because of the large volume of data inherent to digital pictures. With a blend of increasing bandwidth and distributed processing, network image processing becomes possible. This paper examines characteristics of a digital image processing service for a nationwide network of CT scanner installations. Issues of image transmission, data compression, distributed processing, software maintenance, and interfacility communication are also discussed. Included are results that show the volume and type of processing experienced by a network of over 50 CT scanners for the last 32 months.

  10. Stable image acquisition for mobile image processing applications

    NASA Astrophysics Data System (ADS)

    Henning, Kai-Fabian; Fritze, Alexander; Gillich, Eugen; Mönks, Uwe; Lohweg, Volker

    2015-02-01

    Today, mobile devices (smartphones, tablets, etc.) are widespread and of high importance to their users. Their performance as well as their versatility increases over time. This creates the opportunity to use such devices for more specific tasks, like image processing in an industrial context. For image analysis, requirements such as image quality (blur, illumination, etc.) and a defined position of the device relative to the object under inspection are crucial. Since mobile devices are handheld and used in constantly changing environments, the challenge is to fulfill these requirements. We present an approach to overcome these obstacles and stabilize the image capturing process so that image analysis becomes significantly improved on mobile devices. To this end, image processing methods are combined with sensor fusion concepts. The approach consists of three main parts. First, pose estimation methods are used to guide the user in moving the device to a defined position. Second, the sensor data and the pose information are combined for relative motion estimation. Finally, the image capturing process is automated: it is triggered depending on the alignment of the device with the object and on the image quality achievable under the prevailing motion and environmental effects.
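
    The automated trigger in the final step can be sketched with a simple sharpness gate. The variance-of-Laplacian sharpness score and the threshold value are assumptions standing in for the paper's (unspecified) image-quality measure; the alignment flag stands in for the pose-estimation result.

    ```python
    import numpy as np

    def laplacian_variance(img):
        """Sharpness score: variance of a 4-neighbour Laplacian.
        Higher means more high-frequency detail, i.e. a sharper frame."""
        lap = (-4 * img[1:-1, 1:-1]
               + img[:-2, 1:-1] + img[2:, 1:-1]
               + img[1:-1, :-2] + img[1:-1, 2:])
        return lap.var()

    def should_capture(img, aligned, threshold=0.01):
        """Trigger the shutter only when the device is aligned with the
        target and the preview frame is sharp enough."""
        return bool(aligned) and laplacian_variance(img) > threshold

    rng = np.random.default_rng(1)
    sharp = rng.random((32, 32))        # frame with high-frequency content
    blurry = np.full((32, 32), 0.5)     # flat frame, no detail at all
    ```

    Gating on both conditions means a well-aimed but motion-blurred frame is rejected just like a sharp but misaimed one, which is the point of fusing pose and image-quality information.
    
    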

  11. Image processing moves into the manufacturing mainstream

    NASA Astrophysics Data System (ADS)

    Mackie, Bruce R.

    1993-10-01

    Manufacturing efficiency improvements have traditionally been driven by the requirements of innovative manufacturers and their ability to modify, adapt, and adopt technologies and techniques not intuitively related to their current manufacturing processes. The application of image processing to manufacturers' requirements has thus far been limited to off-line verification tasks, where the benefits are notable but the impact on manufacturing flexibility, efficiency, and productivity is negligible. It is now possible to incorporate image processing within the process control inner loop, transparently, to realize a degree of product quality and process productivity only speculated upon in the past.

  12. Teaching sustainable design: A collaborative process

    SciTech Connect

    Theis, C.C.

    1997-12-31

    This paper describes a collaborative educational experience in the Schools of Architecture and Landscape Architecture at Louisiana State University. During the Fall Semester of 1996 an upper-level architectural design studio worked with a peer group of landscape architecture students on the design of a master plan for an environmentally sensitive residential development on Cat Island, a barrier island located approximately eight miles south of Gulfport, Mississippi. This paper presents the methodology and results of the project, describes the collaborative process, and assesses both the viability of the design solutions and the value of the educational experience.

  13. Process Development in the Teaching Laboratory

    NASA Astrophysics Data System (ADS)

    Klein, Leonard C.; Dana, Susanne M.

    1998-06-01

    Many experiments in high school and undergraduate laboratories are well-tested cookbook recipes that have already been designed to yield optimal results; the well-known synthesis of aspirin is such an example. In this project for advanced placement or second-year high school chemistry students, students mimic process development in industrial laboratories by investigating the effect of varying conditions in the synthesis of aspirin. The class decides on criteria that should be explored (quantity of catalyst, temperature of reaction, etc.). The class is then divided into several teams, with each team assigned a variable to study. Each team must submit a proposal describing how it will explore the variable before starting the study. After data on yield and purity have been gathered and evaluated, students discuss which method is most desirable, based on their agreed-upon criteria. This exercise provides an opportunity for students to review many topics from the course (rate of reaction, limiting reagents, Beer's Law) while participating in a cooperative exercise designed to imitate industrial process development.

  14. Process perspective on image quality evaluation

    NASA Astrophysics Data System (ADS)

    Leisti, Tuomas; Halonen, Raisa; Kokkonen, Anna; Weckman, Hanna; Mettänen, Marja; Lensu, Lasse; Ritala, Risto; Oittinen, Pirkko; Nyman, Göte

    2008-01-01

    The psychological complexity of multivariate image quality evaluation makes it difficult to develop general image quality metrics. Quality evaluation includes several mental processes, and ignoring these processes and using only a few test images can lead to biased results. Using a qualitative/quantitative (Interpretation Based Quality, IBQ) methodology, we examined the process of pair-wise comparison in a setting where the quality of images printed by a laser printer on different paper grades was evaluated. The test image was a picture of a table covered with several objects. Three other images were also used: photographs of a woman, a cityscape, and a countryside. In addition to the pair-wise comparisons, observers (N=10) were interviewed about the subjective quality attributes they used in making their quality decisions. An examination of the individual pair-wise comparisons revealed serious inconsistencies in observers' evaluations of the test image, but not of the other contents. The qualitative analysis showed that this inconsistency was due to the observers' focus of attention. The lack of an easily recognizable context in the test image may have contributed to this inconsistency. To obtain reliable knowledge of the effect of image context or attention on subjective image quality, a qualitative methodology is needed.

  15. Image processing technology for enhanced situational awareness

    NASA Astrophysics Data System (ADS)

    Page, S. F.; Smith, M. I.; Hickman, D.

    2009-09-01

    This paper discusses the integration of a number of advanced image and data processing technologies in support of the development of next-generation Situational Awareness systems for counter-terrorism and crime fighting applications. In particular, the paper discusses the European Union Framework 7 'SAMURAI' project, which is investigating novel approaches to interactive Situational Awareness using cooperative networks of heterogeneous imaging sensors. Specific focus is given to novel Data Fusion aspects of the research which aim to improve system performance through intelligently fusing both image data and non image data sources, resolving human-machine conflicts, and refining the Situational Awareness picture. In addition, the paper highlights some recent advances in supporting image processing technologies. Finally, future trends in image-based Situational Awareness are identified, such as Post-Event Analysis (also known as 'Back-Tracking'), and the associated technical challenges are discussed.

  16. Interactive image processing in swallowing research

    NASA Astrophysics Data System (ADS)

    Dengel, Gail A.; Robbins, JoAnne; Rosenbek, John C.

    1991-06-01

    Dynamic radiographic imaging of the mouth, larynx, pharynx, and esophagus during swallowing is used commonly in clinical diagnosis, treatment and research. Images are recorded on videotape and interpreted conventionally by visual perceptual methods, limited to specific measures in the time domain and binary decisions about the presence or absence of events. An image processing system using personal computer hardware and original software has been developed to facilitate measurement of temporal, spatial and temporospatial parameters. Digitized image sequences derived from videotape are manipulated and analyzed interactively. Animation is used to preserve context and increase efficiency of measurement. Filtering and enhancement functions heighten image clarity and contrast, improving visibility of details which are not apparent on videotape. Distortion effects and extraneous head and body motions are removed prior to analysis, and spatial scales are controlled to permit comparison among subjects. Effects of image processing on intra- and interjudge reliability and research applications are discussed.

  17. Earth Observation Services (Image Processing Software)

    NASA Technical Reports Server (NTRS)

    1992-01-01

    San Diego State University and Environmental Systems Research Institute, with other agencies, have applied satellite imaging and image processing techniques to geographic information systems (GIS) updating. The resulting images display land use and are used by a regional planning agency for applications like mapping vegetation distribution and preserving wildlife habitats. The EOCAP program provides government co-funding to encourage private investment in, and to broaden the use of NASA-developed technology for analyzing information about Earth and ocean resources.

  18. Image-plane processing of visual information

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Fales, C. L.; Park, S. K.; Samms, R. W.

    1984-01-01

    Shannon's theory of information is used to optimize the optical design of sensor-array imaging systems which use neighborhood image-plane signal processing for enhancing edges and compressing dynamic range during image formation. The resultant edge-enhancement, or band-pass-filter, response is found to be very similar to that of human vision. Comparisons of traits in human vision with results from information theory suggest that: (1) Image-plane processing, like preprocessing in human vision, can improve visual information acquisition for pattern recognition when resolving power, sensitivity, and dynamic range are constrained. Improvements include reduced sensitivity to changes in light levels, reduced signal dynamic range, reduced data transmission and processing, and reduced aliasing and photosensor noise degradation. (2) Information content can be an appropriate figure of merit for optimizing the optical design of imaging systems when visual information is acquired for pattern recognition. The design trade-offs involve spatial response, sensitivity, and sampling interval.
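    The band-pass edge-enhancement response described above, with its center-surround character reminiscent of early human vision, can be approximated digitally by a difference-of-Gaussians filter. The following is a minimal illustrative sketch, not the authors' actual optical design; the scale parameters are assumed values:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_bandpass(image, sigma_center=1.0, sigma_surround=3.0):
    """Difference-of-Gaussians band-pass filter: a center-surround
    response qualitatively similar to early human vision."""
    center = gaussian_filter(image.astype(float), sigma_center)
    surround = gaussian_filter(image.astype(float), sigma_surround)
    return center - surround  # enhances edges, suppresses low frequencies

# A vertical step edge: the DoG response is strongest near the transition
# and near zero in the flat regions on either side.
img = np.zeros((32, 32))
img[:, 16:] = 1.0
response = dog_bandpass(img)
```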

  19. Nonlinear Optical Image Processing with Bacteriorhodopsin Films

    NASA Technical Reports Server (NTRS)

    Downie, John D.; Deiss, Ron (Technical Monitor)

    1994-01-01

    The transmission properties of some bacteriorhodopsin film spatial light modulators are uniquely suited to allow nonlinear optical image processing operations to be applied to images with multiplicative noise characteristics. A logarithmic amplitude transmission feature of the film permits the conversion of multiplicative noise to additive noise, which may then be linearly filtered out in the Fourier plane of the transformed image. The bacteriorhodopsin film displays the logarithmic amplitude response for write beam intensities spanning a dynamic range greater than 2.0 orders of magnitude. We present experimental results demonstrating the principle and capability for several different image and noise situations, including deterministic noise and speckle. Using the bacteriorhodopsin film, we successfully filter out image noise from the transformed image that cannot be removed from the original image.
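    The log-transform trick described above has a direct digital analogue, homomorphic filtering: taking the logarithm converts multiplicative noise to additive noise, which can then be linearly filtered in the Fourier plane. A hedged numerical sketch (assuming a smooth, positive signal and broadband multiplicative noise; the cutoff is an illustrative value):

```python
import numpy as np
from numpy.fft import fft2, ifft2, fftfreq

def log_domain_lowpass(noisy, cutoff=0.1):
    """Convert multiplicative noise to additive via log, low-pass filter
    in the Fourier plane, then exponentiate back (homomorphic filtering)."""
    log_img = np.log(noisy)                    # multiplicative -> additive
    F = fft2(log_img)
    fy = fftfreq(noisy.shape[0])[:, None]
    fx = fftfreq(noisy.shape[1])[None, :]
    mask = (fx**2 + fy**2) <= cutoff**2        # ideal low-pass mask
    return np.exp(np.real(ifft2(F * mask)))    # back to intensity domain

rng = np.random.default_rng(0)
# Smooth positive test signal and broadband multiplicative noise:
clean = 1.0 + 0.5 * np.cos(2 * np.pi * np.arange(64) / 64)[None, :] * np.ones((64, 1))
noise = np.abs(1.0 + 0.2 * rng.standard_normal((64, 64)))
noisy = clean * noise
restored = log_domain_lowpass(noisy)
```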

  20. Accelerated image processing on FPGAs.

    PubMed

    Draper, Bruce A; Beveridge, J Ross; Böhm, A P Willem; Ross, Charles; Chawathe, Monica

    2003-01-01

    The Cameron project has developed a language called single assignment C (SA-C), and a compiler for mapping image-based applications written in SA-C to field programmable gate arrays (FPGAs). The paper tests this technology by implementing several applications in SA-C and compiling them to an Annapolis Microsystems (AMS) WildStar board with a Xilinx XV2000E FPGA. The performance of these applications on the FPGA is compared to the performance of the same applications written in assembly code or C for an 800 MHz Pentium III. (Although no comparison across processors is perfect, these chips were the first of their respective classes fabricated at 0.18 microns, and are therefore of comparable ages.) We find that applications written in SA-C and compiled to FPGAs are between 8 and 800 times faster than the equivalent program run on the Pentium III. PMID:18244709

  1. Digital Image Processing in Private Industry.

    ERIC Educational Resources Information Center

    Moore, Connie

    1986-01-01

    Examines various types of private industry optical disk installations in terms of business requirements for digital image systems in five areas: records management; transaction processing; engineering/manufacturing; information distribution; and office automation. Approaches for implementing image systems are addressed as well as key success…

  2. HYPERSPECTRAL IMAGING FOR FOOD PROCESSING AUTOMATION

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A hyperspectral imaging system could be used effectively for detecting feces (from duodenum, ceca, and colon) and ingesta on the surface of poultry carcasses, and potential application for real-time, on-line processing of poultry for automatic safety inspection. The hyperspectral imaging system inc...

  3. Recent developments in digital image processing at the Image Processing Laboratory of JPL.

    NASA Technical Reports Server (NTRS)

    O'Handley, D. A.

    1973-01-01

    Review of some of the computer-aided digital image processing techniques recently developed. Special attention is given to mapping and mosaicking techniques and to preliminary developments in range determination from stereo image pairs. The discussed image processing utilization areas include space, biomedical, and robotic applications.

  4. Command Line Image Processing System (CLIPS)

    NASA Astrophysics Data System (ADS)

    Fleagle, S. R.; Meyers, G. L.; Kulinski, R. G.

    1985-06-01

    An interactive image processing language (CLIPS) has been developed for use in an image processing environment. CLIPS uses a simple syntax with extensive on-line help to allow even the most naive user to perform complex image processing tasks. In addition, CLIPS functions as an interpretive language complete with data structures and program control statements. CLIPS statements fall into one of three categories: command, control, and utility statements. Command statements are expressions comprised of intrinsic functions and/or arithmetic operators which act directly on image or user-defined data. Some examples of CLIPS intrinsic functions are ROTATE, FILTER, and EXPONENT. Control statements allow a structured programming style through the use of statements such as DO WHILE and IF-THEN-ELSE. Utility statements such as DEFINE, READ, and WRITE support I/O and user-defined data structures. Since CLIPS uses a table-driven parser, it is easily adapted to any environment. New commands may be added to CLIPS by writing the procedure in a high-level language such as Pascal or FORTRAN and inserting the syntax for that command into the table. However, CLIPS was designed by incorporating most imaging operations into the language as intrinsic functions. CLIPS allows the user to generate new procedures easily with these powerful functions, either interactively or off line using a text editor. The fact that CLIPS can be used to generate complex procedures quickly or to perform basic image processing functions interactively makes it a valuable tool in any image processing environment.
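    The table-driven design described above, where a new command is added by registering a procedure and its syntax in a table rather than by changing the parser, can be sketched as follows. This is an illustrative Python analogue, not CLIPS's actual implementation; the command procedures are placeholders:

```python
# Minimal table-driven command dispatcher in the spirit of CLIPS:
# new commands are added by inserting an entry into the table,
# never by modifying the parser itself.
def rotate(image, degrees):
    return f"{image} rotated {degrees} deg"    # placeholder procedure

def exponent(image, power):
    return f"{image} raised to {power}"        # placeholder procedure

COMMAND_TABLE = {
    # name: (procedure, argument converters)
    "ROTATE":   (rotate,   (str, float)),
    "EXPONENT": (exponent, (str, float)),
}

def execute(statement):
    """Parse 'NAME arg1 arg2 ...' by looking the name up in the table."""
    name, *raw_args = statement.split()
    proc, converters = COMMAND_TABLE[name.upper()]
    args = [conv(a) for conv, a in zip(converters, raw_args)]
    return proc(*args)

result = execute("ROTATE img1 90")
```

    Adding a command then amounts to one new table entry, which is what makes the approach easy to port to a new environment.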

  5. Color image processing for date quality evaluation

    NASA Astrophysics Data System (ADS)

    Lee, Dah Jye; Archibald, James K.

    2010-01-01

    Many agricultural non-contact visual inspection applications use color image processing techniques because color is often a good indicator of product quality. Color evaluation is an essential step in the processing and inventory control of fruits and vegetables that directly affects profitability. Most color spaces such as RGB and HSV represent colors with three-dimensional data, which makes color image processing a challenging task. Since most agricultural applications only require analysis on a predefined set or range of colors, mapping these relevant colors to a small number of indexes allows simple and efficient color image processing for quality evaluation. This paper presents a simple but efficient color mapping and image processing technique designed specifically for real-time quality evaluation of Medjool dates. In contrast with more complex color image processing techniques, the proposed color mapping method makes it easy for a human operator to specify and adjust color-preference settings for different color groups representing distinct quality levels. Using this color mapping technique, the color image is first converted to a color map in which one color index represents the color value of each pixel. Fruit maturity level is evaluated based on these color indices. A skin lamination threshold is then determined based on the fruit surface characteristics. This adaptive threshold is used to detect delaminated fruit skin and hence determine the fruit quality. This robust color grading technique has been applied to real-time Medjool date grading.
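    The color-mapping step, in which each pixel is replaced by the index of the nearest operator-defined reference color, can be sketched as below. The reference colors are hypothetical stand-ins for maturity classes, not values from the paper:

```python
import numpy as np

def map_to_color_indices(image, reference_colors):
    """Assign each RGB pixel the index of the nearest reference color
    (squared Euclidean distance in RGB space)."""
    pixels = image.reshape(-1, 3).astype(float)          # (N, 3)
    refs = np.asarray(reference_colors, dtype=float)     # (K, 3)
    d2 = ((pixels[:, None, :] - refs[None, :, :]) ** 2).sum(axis=2)  # (N, K)
    return d2.argmin(axis=1).reshape(image.shape[:2])

# Hypothetical maturity classes for date fruit: yellow, amber, brown.
refs = [(200, 180, 60), (160, 110, 40), (80, 50, 30)]
img = np.array([[[198, 182, 58], [82, 49, 28]]], dtype=np.uint8)
indices = map_to_color_indices(img, refs)
```

    Once the image is reduced to a small index map, maturity evaluation becomes simple counting and thresholding over the indices rather than full three-dimensional color analysis.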

  6. Using NASA Space Imaging to Teach Earth and Sun Topics in Professional Development Courses for In-Service Teachers

    NASA Astrophysics Data System (ADS)

    Verner, E.; Bruhweiler, F. C.; Long, T.; Edwards, S.; Ofman, L.; Brosius, J. W.; Holman, G.; St Cyr, O. C.; Krotkov, N. A.; Fatoyinbo Agueh, T.

    2012-12-01

    We describe several PD courses using NASA imaging technology, including various ways to study selected topics in physics and astronomy. We use NASA images to develop lesson plans and EPO materials for grades PreK-8. Topics are space based and vary from measurement and magnetism on Earth to those of our Sun. In addition, we cover topics on ecosystem structure, biomass, and water on Earth. Hands-on experiments, computer simulations, analysis of real-time NASA data, and vigorous seminar discussions are blended in an inquiry-driven curriculum to instill a confident understanding of basic physical science and modern, effective methods for teaching it. The course also demonstrates how scientific thinking and hands-on activities can be implemented in the classroom, and is designed to give the non-science student a confident grasp of both the content and the pedagogy. Most topics were selected using the National Science Standards and National Mathematics Standards for grades PreK-8. The course focuses on helping teachers in several areas: 1) building knowledge of scientific concepts and processes; 2) understanding the measurable attributes of objects and the units and methods of measurement; 3) conducting data analysis (collecting, organizing, and presenting scientific data, and predicting results); 4) using hands-on approaches to teach science; and 5) becoming familiar with Internet science teaching resources. Here we share our experiences and the challenges we faced in teaching this course.

  7. Rotation Covariant Image Processing for Biomedical Applications

    PubMed Central

    Reisert, Marco

    2013-01-01

    With the advent of novel biomedical 3D image acquisition techniques, the efficient and reliable analysis of volumetric images has become more and more important. The amount of data is enormous and demands automated processing. The applications are manifold, ranging from image enhancement, image reconstruction, and image description to object/feature detection and high-level contextual feature extraction. In most scenarios, it is expected that geometric transformations alter the output in a mathematically well-defined manner. In this paper we focus on 3D translations and rotations. Many algorithms rely on intensity or low-order tensorial-like descriptions to fulfill this demand. This paper proposes a general mathematical framework based on concepts and theories transferred from mathematical physics and harmonic analysis into the domain of image analysis and pattern recognition. Based on two basic operations, spherical tensor differentiation and spherical tensor multiplication, we show how to design a variety of 3D image processing methods in an efficient way. The framework has already been applied to several biomedical applications ranging from feature and object detection tasks to image enhancement and image restoration techniques. In this paper, the proposed methods are applied to a variety of different 3D data modalities stemming from the medical and biological sciences. PMID:23710255

  8. New approach for underwater imaging and processing

    NASA Astrophysics Data System (ADS)

    Wen, Yanan; Tian, Weijian; Zheng, Bing; Zhou, Guozun; Dong, Hui; Wu, Qiong

    2014-05-01

    Because of the absorptive and scattering nature of water, the characteristics of underwater images differ from those of images taken in air. Underwater images are characterized by poor visibility and noise. Obtaining a clear original image and processing that image are two key problems to be solved in underwater vision. In this paper a new approach is presented to solve these problems. First, an inhomogeneous illumination method is developed to obtain a clear original image. A normal illumination imaging system and an inhomogeneous illumination imaging system are used to capture images at the same distance. The results show that the contrast and definition of the processed image are greatly improved by the inhomogeneous illumination method. Second, based on the theory of photon transport in water and the particular requirements of underwater target detection, the characteristics of laser scattering on underwater target surfaces and the spatial and temporal characteristics of the oceanic optical channel were studied. Using Monte Carlo simulation, we studied how water-quality and other system parameters affect light transmission through water in the spatial and temporal domains, providing theoretical support for enhancing the SNR and operational distance.
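    The Monte Carlo treatment of photon transport through water can be illustrated in miniature: photons travel exponentially distributed free paths before interacting, so the fraction surviving a given distance should match the Beer-Lambert law. This is a generic sketch, not the paper's simulation; the attenuation coefficient is an illustrative value:

```python
import numpy as np

def surviving_fraction(attenuation_coeff, distance, n_photons=100_000, seed=1):
    """Monte Carlo estimate of the fraction of photons traversing
    `distance` metres of water without interaction. Free paths are
    exponentially distributed with mean 1 / attenuation_coeff, so the
    expected fraction is exp(-attenuation_coeff * distance)."""
    rng = np.random.default_rng(seed)
    free_paths = rng.exponential(1.0 / attenuation_coeff, n_photons)
    return float(np.mean(free_paths > distance))

# Illustrative clear-water attenuation of 0.05 per metre over 10 m:
frac = surviving_fraction(0.05, 10.0)   # theory: exp(-0.5)
```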

  9. Fingerprint image enhancement by differential hysteresis processing.

    PubMed

    Blotta, Eduardo; Moler, Emilce

    2004-05-10

    A new method to enhance defective fingerprint images through digital image processing tools is presented in this work. When fingerprints have been taken without care, blurred and in some cases mostly illegible, as in the case presented here, their classification and comparison become nearly impossible. A combination of spatial domain filters, including a technique called differential hysteresis processing (DHP), is applied to improve these kinds of images. This set of filtering methods proved satisfactory in a wide range of cases by uncovering hidden details that helped to identify persons. Dactyloscopy experts from the Policia Federal Argentina and the EAAF have validated these results. PMID:15062948

  10. Efficient Image Enhancement Algorithm Using Multi-Rate Image Processing

    NASA Astrophysics Data System (ADS)

    Okuno, Takeshi; Nishitani, Takao

    This paper describes an efficient image enhancement method based on the Multi-Scale Retinex (MSR) approach for pre-processing in video applications. The processing amount is drastically reduced, to four orders of magnitude less than that of the original MSR and one order less than the latest fast MSR method. For efficient processing, our proposed method employs multi-stage, multi-rate filter processing constructed with an x-y separable and polyphase structure. In addition, the MSR association is implemented efficiently within this multi-stage processing. The method also modifies the weighting function for enhancement to improve the color rendition of bright areas in an image. A variety of evaluation results show that the performance of our simplified method is similar to that of the original MSR in terms of visual perception, contrast enhancement effects, and hue changes. Moreover, experimental results show that pre-processing with the proposed method contributes to clear foreground object separation.
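    For reference, the baseline that the paper accelerates can be written compactly: textbook MSR is a weighted sum, over several surround scales, of the log image minus the log of its Gaussian blur. This sketch is the standard formulation (with the commonly used scales 15, 80, 250), not the paper's multi-rate variant:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_retinex(image, sigmas=(15, 80, 250), weights=None):
    """Basic Multi-Scale Retinex: average over surround scales of
    log(image) - log(Gaussian-blurred image)."""
    img = image.astype(float) + 1.0            # avoid log(0)
    weights = weights or [1.0 / len(sigmas)] * len(sigmas)
    out = np.zeros_like(img)
    for w, s in zip(weights, sigmas):
        out += w * (np.log(img) - np.log(gaussian_filter(img, s)))
    return out

img = np.tile(np.linspace(10, 200, 64), (64, 1))  # horizontal intensity ramp
enhanced = multiscale_retinex(img)
```

    The cost of the large Gaussian surrounds at every scale is exactly what the paper's multi-rate, polyphase filter structure is designed to avoid.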

  11. Image-processing with augmented reality (AR)

    NASA Astrophysics Data System (ADS)

    Babaei, Hossein R.; Mohurutshe, Pagiel L.; Habibi Lashkari, Arash

    2013-03-01

    In this project, the aim is to discuss and articulate the intent to create an image-based Android application. The basis of this study is real-time image detection and processing: a convenient means that allows users to gain information on imagery right on the spot. Past studies have revealed attempts to create image-based applications, but these have gone only as far as creating image finders that work with images already stored in some form of database. The Android platform is rapidly spreading around the world and provides by far the most interactive and technical platform for smart-phones, which is why it was important to base the study and research on it. Augmented reality allows the user to manipulate the data and add enhanced features (video, GPS tags) to the image taken.

  12. Inline coherent imaging of laser processing

    NASA Astrophysics Data System (ADS)

    Fraser, James M.

    2011-03-01

    In applications ranging from noncontact microsurgery to semiconductor blind-hole drilling, precise depth control of laser processing is essential. Even a priori characterization cannot compensate for material heterogeneity and the stochasticity inherent to the material modification process. We image along the machining beam axis at high speeds (up to 312 kHz) to provide real-time feedback, even in high-aspect-ratio holes. The in situ metrology is based on broadband coherent imaging (similar to the medical imaging modality optical coherence tomography) and is practical for a wide range of light sources and machining processes (e.g., thermal cutting using a quasi-continuous-wave fiber laser, or nonlinear ablation achieved with ultrafast pulses). Coherent imaging has high dynamic range (>60 dB) and strongly rejects incoherent signals, allowing weak features to be observed in the presence of intense machining light and bright plasmas. High axial resolution (~5 μm) is achieved with broadband imaging light, but the center wavelength can be chosen as appropriate to the application. Infrared imaging light (wavelength: 1320 ± 35 nm) allows simultaneous monitoring of both surface and subsurface interfaces in nonabsorbing materials like tissue and semiconductors. Silicon-based detector technology can be used with near-infrared imaging light (804 ± 30 nm), enabling high-speed acquisition (>300 kHz) or low-cost implementation (total imaging system <$10k). Machining with an appropriate broadband ultrafast laser allows machining and imaging to be done with the same light source. Ultrafast technology also enables nonlinear optical processing of the imaging light, opening the door to improved imaging modalities.

  13. Overview on METEOSAT geometrical image data processing

    NASA Technical Reports Server (NTRS)

    Diekmann, Frank J.

    1994-01-01

    Digital images acquired from the geostationary METEOSAT satellites are processed and disseminated at ESA's European Space Operations Centre (ESOC) in Darmstadt, Germany. Their scientific value depends mainly on their radiometric quality and geometric stability. This paper gives an overview of the image processing activities performed at ESOC, concentrating on geometrical restoration and quality evaluation. The performance of the rectification process for the various satellites over the past years is presented, and the impacts of external events, such as the Pinatubo eruption in 1991, are explained. Special developments in both hardware and software, necessary to cope with demanding tasks such as new image resampling or correcting for spacecraft anomalies, are presented as well. The rotating lens of MET-5, which causes severe geometrical image distortions, is an example of the latter.

  14. Teaching Anatomy and Physiology Using Computer-Based, Stereoscopic Images

    ERIC Educational Resources Information Center

    Perry, Jamie; Kuehn, David; Langlois, Rick

    2007-01-01

    Learning real three-dimensional (3D) anatomy for the first time can be challenging. Two-dimensional drawings and plastic models tend to over-simplify the complexity of anatomy. The approach described uses stereoscopy to create 3D images of the process of cadaver dissection and to demonstrate the underlying anatomy related to the speech mechanisms.

  16. Visualisation of Ecohydrological Processes and Relationships for Teaching Using Advanced Techniques

    NASA Astrophysics Data System (ADS)

    Guan, H.; Wang, H.; Gutierrez-Jurado, H. A.; Yang, Y.; Deng, Z.

    2014-12-01

    Ecohydrology is an emerging discipline with rapid research growth, which calls for enhancing ecohydrology education at both undergraduate and postgraduate levels. In other hydrology disciplines, hydrological processes are commonly observed in the environment (e.g., streamflow, infiltration) or easily demonstrated in labs (e.g., Darcy's column). It is relatively difficult to demonstrate ecohydrological concepts and processes (e.g., the soil-vegetation water relationship) in teaching. In this presentation, we report examples of using advanced techniques to illustrate ecohydrological concepts, relationships, and processes, with measurements from a native vegetation catchment in South Australia. They include LIDAR images showing the relationship between topography-controlled hydroclimatic conditions and vegetation distribution; images derived from electrical resistivity tomography showing stem structures; continuous stem water potential monitoring showing diurnal variations in plant water status, root-zone moisture depletion during dry spells, and responses to precipitation inputs; and sapflow measurements demonstrating environmental stress on plant stomatal behaviour.

  17. Teaching the NIATx Model of Process Improvement as an Evidence-Based Process

    ERIC Educational Resources Information Center

    Evans, Alyson C.; Rieckmann, Traci; Fitzgerald, Maureen M.; Gustafson, David H.

    2007-01-01

    Process Improvement (PI) is an approach for helping organizations to identify and resolve inefficient and ineffective processes through problem solving and pilot testing change. Use of PI in improving client access, retention and outcomes in addiction treatment is on the rise through the teaching of the Network for the Improvement of Addiction…

  18. Real-time optical image processing techniques

    NASA Technical Reports Server (NTRS)

    Liu, Hua-Kuang

    1988-01-01

    Nonlinear real-time optical processing based on spatial pulse frequency modulation has been pursued through the analysis, design, and fabrication of pulse-frequency-modulated halftone screens and the modification of micro-channel spatial light modulators (MSLMs). Micro-channel spatial light modulators are modified via the Fabry-Perot method to achieve the high gamma operation required for nonlinear operation. Real-time nonlinear processing was performed using the halftone screen and MSLM. The experiments showed the effectiveness of the thresholding and also the need for a higher space-bandwidth product (SBP) for image processing. The Hughes LCLV has been characterized and found to yield high gamma (about 1.7) when operated in low-frequency, low-bias mode. Cascading two LCLVs should also provide enough gamma for nonlinear processing; in this case the SBP of the LCLV is sufficient, but the uniformity of the LCLV needs improvement. Applications explored include image correlation, computer generation of holograms, pseudo-color image encoding for image enhancement, and associative retrieval in neural processing. The discovery of the only known optical method for real-time dynamic range compression of an input image, using GaAs photorefractive crystals, is reported. Finally, a new architecture for nonlinear multiple-sensory neural processing is suggested.

  19. Image save and carry system-based teaching-file library

    NASA Astrophysics Data System (ADS)

    Morimoto, Kouji; Kimura, Michio; Fujii, Toshiyuki

    1994-05-01

    Digital imaging technology has introduced new possibilities for forming teaching files without films. The IS&C (Image Save & Carry) system, which is based on magneto-optical disc, is a good medium for this purpose because of its large capacity, prompt access time, and unified format independent of operating systems. The authors have constructed a teaching-file library to which users can add and edit images. CD-ROM and IS&C satisfy most of the basic criteria for a teaching-file construction platform: CD-ROM is the best medium for circulating large numbers of identical copies, while IS&C is advantageous for personal addition to and editing of the library.

  20. Bistatic SAR: Signal Processing and Image Formation.

    SciTech Connect

    Wahl, Daniel E.; Yocky, David A.

    2014-10-01

    This report describes the significant processing steps used to take the raw recorded digitized signals from the bistatic synthetic aperture radar (SAR) hardware built for the NCNS Bistatic SAR project to a final bistatic SAR image. In general, the process steps herein are applicable to bistatic SAR signals that include the direct-path signal and the reflected signal. The steps include preprocessing, data extraction to form a phase history, and finally image formation. Plots and values are shown at most steps to illustrate the processing for a bistatic COSMO-SkyMed collection gathered on June 10, 2013 at Kirtland Air Force Base, New Mexico.

  1. Palm print image processing with PCNN

    NASA Astrophysics Data System (ADS)

    Yang, Jun; Zhao, Xianhong

    2010-08-01

    Pulse coupled neural networks (PCNN) are based on Eckhorn's model of the cat visual cortex and imitate mammalian visual processing, while the palm print has long been used as a personal biometric feature. This inspired us to combine the two: a novel method for palm print processing is proposed, which includes pre-processing and feature extraction of the palm print image using PCNN; the extracted feature is then used for identification. Our experiments show that a verification rate of 87.5% can be achieved under ideal conditions. We also find that the verification rate decreases due to rotation or shift of the palm.
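    The PCNN dynamics referred to above can be sketched in a textbook simplification: each neuron is fed by its pixel intensity, linked to its neighbours' pulses, and fires when its internal activity exceeds a decaying threshold that jumps after each firing. This is a minimal illustrative model, not Eckhorn's full formulation or the paper's feature extractor; all parameters are assumed values:

```python
import numpy as np
from scipy.ndimage import convolve

def pcnn(stimulus, steps=10, beta=0.2, alpha_theta=0.2, v_theta=20.0):
    """Minimal pulse-coupled neural network. Returns, per pixel, the
    iteration at which the neuron first fired (0 = never fired), a
    simple PCNN-style feature."""
    kernel = np.array([[0.5, 1.0, 0.5],
                       [1.0, 0.0, 1.0],
                       [0.5, 1.0, 0.5]])
    theta = np.full(stimulus.shape, 1.0)    # dynamic threshold
    Y = np.zeros(stimulus.shape)            # pulse output
    fire_time = np.zeros(stimulus.shape)
    for n in range(1, steps + 1):
        L = convolve(Y, kernel, mode='constant')         # linking from neighbour pulses
        U = stimulus * (1.0 + beta * L)                  # internal activity
        Y = (U > theta).astype(float)                    # pulse generation
        theta = theta * np.exp(-alpha_theta) + v_theta * Y  # decay, jump on firing
        fire_time[(fire_time == 0) & (Y == 1)] = n
    return fire_time

img = np.array([[0.9, 0.9, 0.1],
                [0.9, 0.9, 0.1],
                [0.1, 0.1, 0.1]])
times = pcnn(img)   # the bright block pulses within a few iterations
```

    The firing-time map groups pixels of similar intensity into synchronized pulses, which is what makes PCNN output usable as a texture-like feature for matching.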

  2. Cosmic movement detection using image processing

    NASA Astrophysics Data System (ADS)

    Dhage, Sudhir N.; Mishra, Akassh A.; Patil, Rajesh

    2011-10-01

    Cosmic movement detection is concerned with the difficult task of taking images of the sky through a high-powered telescope, or satellite-transmitted images, and performing image processing in order to discover new galaxies, stars, and other cosmic objects, or to describe already known ones. The description covers the type of cosmic object under consideration, whether it has been recognized previously, and whether it is moving toward or away from Earth. This has several applications in astrophysics and astronomy. Automating this process requires the use of various image processing techniques. The method the present paper describes is based on the Doppler effect of light, i.e., the red- and blue-shifting property. Several factors, such as poor illumination, noise, viewpoint dependence, climate, and transmission and imaging conditions, can affect the algorithm. This paper reports an algorithm for cosmic movement detection using image processing and the Doppler effect of light: cosmic movement can be detected by checking the color shift of a galaxy, which helps determine whether it is moving toward or away from Earth.
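    The Doppler reasoning above reduces, in the non-relativistic case, to a one-line formula: the fractional wavelength shift times the speed of light gives the radial velocity, with redshift (positive) meaning recession and blueshift (negative) meaning approach. A minimal sketch with an illustrative H-alpha measurement, not values from the paper:

```python
def radial_velocity_from_shift(observed_nm, rest_nm, c_km_s=299_792.458):
    """Non-relativistic Doppler estimate in km/s:
    v = c * (lambda_obs - lambda_rest) / lambda_rest.
    Positive v (redshift) = receding; negative (blueshift) = approaching."""
    z = (observed_nm - rest_nm) / rest_nm
    return z * c_km_s

# H-alpha rest wavelength 656.28 nm observed at 660.00 nm: redshifted,
# so the source is receding.
v = radial_velocity_from_shift(660.00, 656.28)
```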

  3. Large scale parallel document image processing

    NASA Astrophysics Data System (ADS)

    van der Zant, Tijn; Schomaker, Lambert; Valentijn, Edwin

    2008-01-01

    Building a system that allows searching a very large database of document images requires professionalization of hardware and software, e-science, and web access. In astrophysics there is ample experience dealing with large data sets, due to an increasing number of measurement instruments. The digitization of historical documents of the Dutch cultural heritage poses a similar problem. This paper discusses the use of a system developed at the Kapteyn Institute of Astrophysics for processing large data sets, applied to the problem of creating a very large searchable archive of connected cursive handwritten texts. The system is adapted to the specific needs of processing document images. It shows that interdisciplinary collaboration can be beneficial in the context of machine learning, data processing, and the professionalization of image processing and retrieval systems.

  4. Image Processing Application for Cognition (IPAC) - Traditional and Emerging Topics in Image Processing in Astronomy (Invited)

    NASA Astrophysics Data System (ADS)

    Pesenson, M.; Roby, W.; Helou, G.; McCollum, B.; Ly, L.; Wu, X.; Laine, S.; Hartley, B.

    2008-08-01

    A new application framework for advanced image processing for astronomy is presented. It implements standard two-dimensional operators, and recent developments in the field of non-astronomical image processing (IP), as well as original algorithms based on nonlinear partial differential equations (PDE). These algorithms are especially well suited for multi-scale astronomical images since they increase signal to noise ratio without smearing localized and diffuse objects. The visualization component is based on the extensive tools that we developed for Spitzer Space Telescope's observation planning tool Spot and archive retrieval tool Leopard. It contains many common features, combines images in new and unique ways and interfaces with many astronomy data archives. Both interactive and batch mode processing are incorporated. In the interactive mode, the user can set up simple processing pipelines, and monitor and visualize the resulting images from each step of the processing stream. The system is platform-independent and has an open architecture that allows extensibility by addition of plug-ins. This presentation addresses astronomical applications of traditional topics of IP (image enhancement, image segmentation) as well as emerging new topics like automated image quality assessment (QA) and feature extraction, which have potential for shaping future developments in the field. Our application framework embodies a novel synergistic approach based on integration of image processing, image visualization and image QA (iQA).

  5. Digital-image processing and image analysis of glacier ice

    USGS Publications Warehouse

    Fitzpatrick, Joan J.

    2013-01-01

    This document provides a methodology for extracting grain statistics from 8-bit color and grayscale images of thin sections of glacier ice—a subset of physical properties measurements typically performed on ice cores. This type of analysis is most commonly used to characterize the evolution of ice-crystal size, shape, and intercrystalline spatial relations within a large body of ice sampled by deep ice-coring projects from which paleoclimate records will be developed. However, such information is equally useful for investigating the stress state and physical responses of ice to stresses within a glacier. The methods of analysis presented here go hand-in-hand with the analysis of ice fabrics (aggregate crystal orientations) and, when combined with fabric analysis, provide a powerful method for investigating the dynamic recrystallization and deformation behaviors of bodies of ice in motion. The procedures described in this document constitute a step-by-step handbook for a specific image acquisition and data reduction system built in support of U.S. Geological Survey ice analysis projects, but the general methodology can be used with any combination of image processing and analysis software. The specific approaches in this document use the FoveaPro 4 plug-in toolset to Adobe Photoshop CS5 Extended, but the analysis can be carried out equally well, though somewhat less conveniently, with software such as the image processing toolbox in MATLAB, Image-Pro Plus, or ImageJ.
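
A typical first step in extracting grain statistics, whatever software toolbox is used, is labeling the connected grain regions in a segmented binary image and measuring their areas. The following is a minimal pure-Python sketch of that step (an illustration only; it is not the USGS workflow described above):

```python
def label_grains(mask):
    """4-connected component labeling of a binary grain mask via flood
    fill. Returns a label image and a dict mapping label -> area (pixels)."""
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    areas = {}
    next_label = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not labels[y][x]:
                next_label += 1
                stack = [(y, x)]
                labels[y][x] = next_label
                area = 0
                while stack:
                    cy, cx = stack.pop()
                    area += 1
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny][nx] and not labels[ny][nx]):
                            labels[ny][nx] = next_label
                            stack.append((ny, nx))
                areas[next_label] = area
    return labels, areas

# Two separate "grains" of three pixels each.
mask = [[1, 1, 0, 0],
        [1, 0, 0, 1],
        [0, 0, 1, 1]]
labels, areas = label_grains(mask)
```

From the per-grain areas, size statistics (mean grain area, size distribution) follow directly.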

  6. Thermal Imaging Processes of Polymer Nanocomposite Coatings

    NASA Astrophysics Data System (ADS)

    Meth, Jeffrey

    2015-03-01

    Laser induced thermal imaging (LITI) is a process whereby infrared radiation impinging on a coating on a donor film transfers that coating to a receiving film to produce a pattern. This talk describes how LITI patterning can print color filters for liquid crystal displays, and details the physical processes that are responsible for transferring the nanocomposite coating in a coherent manner that does not degrade its optical properties. Unique features of this process involve heating rates of 10^7 K/s and cooling rates of 10^4 K/s, which implies that not all of the relaxation modes of the polymer are accessed during the imaging process. On the microsecond time scale, the polymer flow is forced by devolatilization of solvents, followed by deformation akin to the constrained blister test, and then fracture caused by differential thermal expansion. The unique combination of disparate physical processes demonstrates the gamut of physics that contribute to advanced material processing in an industrial setting.

  7. Fundamental Concepts of Digital Image Processing

    DOE R&D Accomplishments Database

    Twogood, R. E.

    1983-03-01

    The field of digital image processing has experienced dramatic growth and increasingly widespread applicability in recent years. Fortunately, advances in computer technology have kept pace with the rapid growth in the volume of image data in these and other applications. Digital image processing has become economical in many fields of research and in industrial and military applications. While each application has requirements unique from the others, all are concerned with faster, cheaper, more accurate, and more extensive computation. The trend is toward real-time and interactive operations, where the user of the system obtains preliminary results within a short enough time that the next decision can be made by the human processor without loss of concentration on the task at hand. An example of this is the acquisition of two-dimensional (2-D) computer-aided tomography (CAT) images, where a medical decision might be made while the patient is still under observation rather than days later.

  8. Parallel asynchronous systems and image processing algorithms

    NASA Technical Reports Server (NTRS)

    Coon, D. D.; Perera, A. G. U.

    1989-01-01

    A new hardware approach to implementation of image processing algorithms is described. The approach is based on silicon devices which would permit an independent analog processing channel to be dedicated to every pixel. A laminar architecture consisting of a stack of planar arrays of the device would form a two-dimensional array processor with a 2-D array of inputs located directly behind a focal plane detector array. A 2-D image data stream would propagate in neuronlike asynchronous pulse coded form through the laminar processor. Such systems would integrate image acquisition and image processing. Acquisition and processing would be performed concurrently as in natural vision systems. The research is aimed at implementation of algorithms, such as the intensity dependent summation algorithm and pyramid processing structures, which are motivated by the operation of natural vision systems. Implementation of natural vision algorithms would benefit from the use of neuronlike information coding and the laminar, 2-D parallel, vision system type architecture. Besides providing a neural network framework for implementation of natural vision algorithms, a 2-D parallel approach could eliminate the serial bottleneck of conventional processing systems. Conversion to serial format would occur only after raw intensity data has been substantially processed. An interesting challenge arises from the fact that the mathematical formulation of natural vision algorithms does not specify the means of implementation, so that hardware implementation poses intriguing questions involving vision science.
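
In software terms, the pyramid processing structures mentioned above amount to repeated smoothing and subsampling. A minimal sketch (plain Python; the record itself concerns analog hardware, so this only illustrates the algorithmic idea) builds a pyramid by 2x2 block averaging:

```python
def downsample(img):
    """Halve image resolution by averaging non-overlapping 2x2 blocks."""
    h, w = len(img), len(img[0])
    return [[(img[y][x] + img[y][x + 1] + img[y + 1][x] + img[y + 1][x + 1]) / 4.0
             for x in range(0, w - 1, 2)]
            for y in range(0, h - 1, 2)]

def pyramid(img, levels):
    """Return a list of progressively coarser copies of img."""
    out = [img]
    for _ in range(levels - 1):
        out.append(downsample(out[-1]))
    return out

# A 4x4 ramp image collapses to its overall average at the top level.
base = [[float(x + y) for x in range(4)] for y in range(4)]
levels = pyramid(base, 3)
```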

  9. Image processing of angiograms: A pilot study

    NASA Technical Reports Server (NTRS)

    Larsen, L. E.; Evans, R. A.; Roehm, J. O., Jr.

    1974-01-01

    The technology transfer application this report describes is the result of a pilot study of image-processing methods applied to the image enhancement, coding, and analysis of arteriograms. Angiography is a subspecialty of radiology that employs the introduction of media with high X-ray absorption into arteries in order to study vessel pathology as well as to infer disease of the organs supplied by the vessel in question.

  10. Use of Low-cost 3-D Images in Teaching Gross Anatomy.

    ERIC Educational Resources Information Center

    Richards, Boyd F.; And Others

    1987-01-01

    With advances in computer technology, it has become possible to create three-dimensional (3-D) images of anatomical structures for use in teaching gross anatomy. Reported is a survey of attitudes of 91 first-year medical students toward the use of 3-D images in their anatomy course. Reactions to the 3-D images and suggestions for improvement are…

  11. DSP based image processing for retinal prosthesis.

    PubMed

    Parikh, Neha J; Weiland, James D; Humayun, Mark S; Shah, Saloni S; Mohile, Gaurav S

    2004-01-01

    Real-time image processing in a retinal prosthesis consists of the implementation of various image processing algorithms such as edge detection, edge enhancement, and decimation. The algorithmic computations in real time may have a high level of computational complexity, and hence the use of digital signal processors (DSPs) for the implementation of such algorithms is proposed here. This application requires that the DSPs be highly computationally efficient while operating at low power. DSPs have computational capabilities of hundreds of millions of instructions per second (MIPS) or millions of floating-point operations per second (MFLOPS), with certain processor configurations having low power consumption. The various image processing algorithms, the DSP requirements, and the capabilities of different platforms are discussed in this paper. PMID:17271974
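
As an illustration of the kind of algorithm listed above, here is a minimal pure-Python Sobel edge detector; the prosthesis work itself would run an optimized DSP implementation, so this sketch only shows the arithmetic involved:

```python
def sobel_magnitude(img):
    """Approximate gradient magnitude with the 3x3 Sobel kernels.
    Border pixels are left at zero."""
    gx_k = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
    gy_k = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(gx_k[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(gy_k[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# Vertical step edge: left half dark, right half bright.
img = [[0, 0, 10, 10] for _ in range(4)]
mag = sobel_magnitude(img)
```

The response peaks on the two columns straddling the step and is zero in flat regions, which is exactly the behavior an edge-enhancement stage exploits.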

  12. Three-dimensional image signals: processing methods

    NASA Astrophysics Data System (ADS)

    Schiopu, Paul; Manea, Adrian; Craciun, Anca-Ileana; Craciun, Alexandru

    2010-11-01

    Over the years, extensive studies have been carried out to apply coherent-optics methods to real-time processing, communications, and image transmission. This is especially true when a large amount of information needs to be processed, e.g., in high-resolution imaging. The recent progress in data-processing networks and communication systems has considerably increased the capacity of information exchange. We describe the results of a literature survey of processing methods for three-dimensional image signals. All commercially available 3D technologies today are based on stereoscopic viewing. 3D technology was once the exclusive domain of skilled computer-graphics developers with high-end machines and software. Images captured with an advanced 3D digital camera can be displayed on the screen of a 3D digital viewer with or without special glasses; this requires considerable processing power and memory to create and render the complex mix of colors, textures, virtual lighting, and perspective necessary to make figures appear three-dimensional. Also, using a standard digital camera and a technique called phase-shift interferometry, we can capture "digital holograms": holograms that can be stored on a computer and transmitted over conventional networks. We present some methods for processing digital holograms for Internet transmission, together with results.
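
Phase-shift interferometry, mentioned above for capturing digital holograms, is commonly implemented with the standard four-step algorithm (an assumption here; the paper may use a different variant). With frames I_k = A + B*cos(phi + k*pi/2), the phase is recovered per pixel as:

```python
import math

def four_step_phase(i0, i1, i2, i3):
    """Recover the phase phi (radians) from four interferograms shifted
    by 0, pi/2, pi and 3*pi/2: I_k = A + B*cos(phi + k*pi/2).
    Then i3 - i1 = 2B*sin(phi) and i0 - i2 = 2B*cos(phi)."""
    return math.atan2(i3 - i1, i0 - i2)

# Synthetic check: build the four intensities for a known phase.
A, B, phi = 5.0, 2.0, 0.7
frames = [A + B * math.cos(phi + k * math.pi / 2) for k in range(4)]
recovered = four_step_phase(*frames)
```

Applied per pixel over four camera frames, this yields the phase map that, together with the amplitude, constitutes the digital hologram.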

  13. Support Routines for In Situ Image Processing

    NASA Technical Reports Server (NTRS)

    Deen, Robert G.; Pariser, Oleg; Yeates, Matthew C.; Lee, Hyun H.; Lorre, Jean

    2013-01-01

    This software consists of a set of application programs that support ground-based image processing for in situ missions. These programs represent a collection of utility routines that perform miscellaneous functions in the context of the ground data system. Each one fulfills some specific need as determined via operational experience. The most unique aspect of these programs is that they are integrated into the large in situ image processing system via the PIG (Planetary Image Geometry) library. They work directly with in situ data, understanding the appropriate image meta-data fields and updating them properly. The programs themselves are completely multimission; all mission dependencies are handled by PIG. This suite of programs consists of: (1) marscahv: generates a linearized, epipolar-aligned image from a stereo pair of images; these images are optimized for 1-D stereo correlations. (2) marscheckcm: compares the camera model in an image label with one derived via kinematics modeling on the ground. (3) marschkovl: checks the overlaps between a list of images in order to determine which might be stereo pairs; this is useful for non-traditional stereo images such as long-baseline pairs or those from an articulating arm camera. (4) marscoordtrans: translates mosaic coordinates from one form into another. (5) marsdispcompare: checks a left-to-right stereo disparity image against a right-to-left disparity image to ensure they are consistent with each other. (6) marsdispwarp: takes one image of a stereo pair and warps it through a disparity map to create a synthetic opposite-eye image; for example, a right-eye image could be transformed to look as if it were taken from the left eye. (7) marsfidfinder: finds fiducial markers in an image by projecting their approximate locations and then using correlation to locate the markers to subpixel accuracy; these fiducial markers are small targets attached to the spacecraft surface, and locating them helps verify, or improve, the pointing of in situ cameras. (8) marsinvrange: the inverse of marsrange; given a range file, re-computes an XYZ file that closely matches the original. (9) marsproj: projects an XYZ coordinate through the camera model and reports the line/sample coordinates of the point in the image. (10) marsprojfid: given the output of marsfidfinder, projects the XYZ locations and compares them to the found locations, creating a report showing the fiducial errors in each image. (11) marsrad: radiometrically corrects an image. (12) marsrelabel: updates coordinate-system or camera-model labels in an image. (13) marstiexyz: given a stereo pair, allows the user to interactively pick a point in each image and reports the XYZ value corresponding to that pair of locations. (14) marsunmosaic: extracts a single frame from a mosaic, created such that it could have been an input to the original mosaic; useful for creating simulated input frames using camera models different from those the original mosaic used. (15) merinverter: uses an inverse lookup table to convert 8-bit telemetered data to its 12-bit original form; can be used in other missions despite the name.

  14. Processing infrared images of aircraft lapjoints

    NASA Technical Reports Server (NTRS)

    Syed, Hazari; Winfree, William P.; Cramer, K. E.

    1992-01-01

    Techniques for processing IR images of aging-aircraft lapjoint data are discussed. Attention is given to a technique for detecting disbonds in aircraft lapjoints which clearly delineates the disbonded region from the bonded regions. The technique is weak on unpainted aircraft skin surfaces, but this can be overcome by using a self-adhering contact sheet. Neural-network analysis of raw temperature data has been shown to be an effective tool for visualization of images. Numerical simulation results show the above processing technique to be effective in delineating disbonds.

  15. The Penn State astronomical image processing system

    NASA Astrophysics Data System (ADS)

    Truax, Ryland J.; Nousek, John A.; Feigelson, Eric D.; Lonsdale, Colin J.

    1987-06-01

    The needs of modern astronomy for image processing set demanding standards in simultaneously requiring fast computation speed, high-quality graphic display, large data storage, and interactive response. An innovative image processing system was designed, integrated, and used; it is based on a supermicro architecture which is tailored specifically for astronomy, which provides a highly cost-effective alternative to the traditional minicomputer installation. The paper describes the design rationale, equipment selection, and software developed to allow other astronomers with similar needs to benefit from the present experience.

  16. The Penn State astronomical image processing system

    NASA Technical Reports Server (NTRS)

    Truax, Ryland J.; Nousek, John A.; Feigelson, Eric D.; Lonsdale, Colin J.

    1987-01-01

    The needs of modern astronomy for image processing set demanding standards in simultaneously requiring fast computation speed, high-quality graphic display, large data storage, and interactive response. An innovative image processing system was designed, integrated, and used; it is based on a supermicro architecture which is tailored specifically for astronomy, which provides a highly cost-effective alternative to the traditional minicomputer installation. The paper describes the design rationale, equipment selection, and software developed to allow other astronomers with similar needs to benefit from the present experience.

  17. Image processing applications for geologic mapping

    SciTech Connect

    Abrams, M.; Blusson, A.; Carrere, V.; Nguyen, T.; Rabu, Y.

    1985-03-01

    The use of satellite data, particularly Landsat images, for geologic mapping provides the geologist with a powerful tool. The digital format of these data permits applications of image processing to extract or enhance information useful for mapping purposes. Examples are presented of lithologic classification using texture measures, automatic lineament detection and structural analysis, and use of registered multisource satellite data. In each case, the additional mapping information provided relative to the particular treatment is evaluated. The goal is to provide the geologist with a range of processing techniques adapted to specific mapping problems.

  18. Logarithmic spiral grids for image processing

    NASA Technical Reports Server (NTRS)

    Weiman, C. F. R.; Chaikin, G. M.

    1979-01-01

    A picture digitization grid based on logarithmic spirals rather than Cartesian coordinates is presented. Expressing this curvilinear grid as a conformal exponential mapping reveals useful image processing properties. The mapping induces a computational simplification that suggests parallel architectures in which most geometric transformations are effected by data shifting in memory rather than arithmetic on coordinates. These include fast, parallel noise-free rotation, scaling, and some projective transformations of pixel defined images. Conformality of the mapping preserves local picture-processing operations such as edge detection.
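
The key property of the conformal log mapping, that rotation and scaling of the image reduce to coordinate shifts, can be checked directly. A small sketch (pure Python, per point rather than per pixel):

```python
import math

def to_log_polar(x, y):
    """Map Cartesian (x, y) to (log r, theta) -- the conformal log mapping."""
    r = math.hypot(x, y)
    return math.log(r), math.atan2(y, x)

x, y = 3.0, 4.0
u0, v0 = to_log_polar(x, y)

# Rotating the point by alpha shifts only the angular coordinate...
alpha = 0.5
xr = x * math.cos(alpha) - y * math.sin(alpha)
yr = x * math.sin(alpha) + y * math.cos(alpha)
u1, v1 = to_log_polar(xr, yr)

# ...and scaling by s shifts only the radial (log r) coordinate.
s = 2.0
u2, v2 = to_log_polar(s * x, s * y)
```

On a log-spiral sampling grid this is why rotation and scaling become pure data shifts in memory, with no arithmetic on pixel values.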

  19. Results of precision processing (scene correction) of ERTS-1 images using digital image processing techniques

    NASA Technical Reports Server (NTRS)

    Bernstein, R.

    1973-01-01

    ERTS-1 MSS and RBV data recorded on computer compatible tapes have been analyzed and processed, and preliminary results have been obtained. No degradation of intensity (radiance) information occurred in implementing the geometric correction. The quality and resolution of the digitally processed images are very good, due primarily to the fact that the number of film generations and conversions is reduced to a minimum. Processing times of digitally processed images are about equivalent to the NDPF electro-optical processor.

  20. Hardware implementation of machine vision systems: image and video processing

    NASA Astrophysics Data System (ADS)

    Botella, Guillermo; García, Carlos; Meyer-Bäse, Uwe

    2013-12-01

    This contribution surveys the topics covered by the special issue titled "Hardware Implementation of Machine Vision Systems", including FPGAs, GPUs, embedded systems, and multicore implementations for image analysis tasks such as edge detection, segmentation, pattern recognition and object recognition/interpretation, image enhancement/restoration, image/video compression, image similarity and retrieval, satellite image processing, medical image processing, motion estimation, neuromorphic and bio-inspired vision systems, video processing, image formation and physics-based vision, 3D processing/coding, scene understanding, and multimedia.

  1. Morphological Image Processing Applied in Biomedicine

    NASA Astrophysics Data System (ADS)

    Lotufo, Roberto A.; Rittner, Leticia; Audigier, Romaric; Machado, Rubens C.; Saúde, André V.

    This chapter presents the main concepts of morphological image processing. Mathematical morphology has application in diverse areas of image processing such as filtering, segmentation and pattern recognition, applied both to binary and gray-scale images. Section 4.2 addresses the basic binary morphological operations: erosion, dilation, opening and closing. We also present applications of the primary operators, paying particular attention to morphological reconstruction because of its importance and since it is still not widely known. In Sect. 4.3, the same concepts are extended to gray-scale images. Section 4.4 is devoted to watershed-based segmentation. There are many variants of the watershed transform. We introduce the watershed principles with real-world applications. The key to successful segmentation is the design of the marker to eliminate the over-segmentation problem. Finally, Sect. 4.5 presents the multi-scale watershed to segment brain structures from diffusion tensor imaging, a relatively recent imaging modality that is based on magnetic resonance.
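
The basic binary operators of Sect. 4.2 can be sketched in a few lines. This pure-Python version uses a fixed 3x3 square structuring element (a simplification; the chapter treats general structuring elements):

```python
def erode(img):
    """Binary erosion by a 3x3 square structuring element."""
    h, w = len(img), len(img[0])
    return [[1 if 0 < y < h - 1 and 0 < x < w - 1
             and all(img[y + dy][x + dx]
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1))
             else 0
             for x in range(w)]
            for y in range(h)]

def dilate(img):
    """Binary dilation by a 3x3 square structuring element."""
    h, w = len(img), len(img[0])
    return [[1 if any(img[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                      if 0 <= y + dy < h and 0 <= x + dx < w)
             else 0
             for x in range(w)]
            for y in range(h)]

def opening(img):
    """Opening = erosion followed by dilation; removes small specks
    while preserving larger shapes."""
    return dilate(erode(img))

# A solid 3x3 square plus an isolated single-pixel speck at the right.
img = [[0, 0, 0, 0, 0, 0, 0],
       [0, 1, 1, 1, 0, 0, 0],
       [0, 1, 1, 1, 0, 0, 1],
       [0, 1, 1, 1, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0]]
opened = opening(img)
```

The opening removes the speck but returns the 3x3 square intact, which is the classic noise-filtering use of the operator.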

  2. Processing Images of Craters for Spacecraft Navigation

    NASA Technical Reports Server (NTRS)

    Cheng, Yang; Johnson, Andrew E.; Matthies, Larry H.

    2009-01-01

    A crater-detection algorithm has been conceived to enable automation of what, heretofore, have been manual processes for utilizing images of craters on a celestial body as landmarks for navigating a spacecraft flying near or landing on that body. The images are acquired by an electronic camera aboard the spacecraft, then digitized, then processed by the algorithm, which consists mainly of the following steps: 1. Edges in an image are detected and placed in a database. 2. Crater rim edges are selected from the edge database. 3. Edges that belong to the same crater are grouped together. 4. An ellipse is fitted to each group of crater edges. 5. Ellipses are refined directly in the image domain to reduce errors introduced in the detection of edges and fitting of ellipses. 6. The quality of each detected crater is evaluated. It is planned to utilize this algorithm as the basis of a computer program for automated, real-time, onboard processing of crater-image data. Experimental studies have led to the conclusion that this algorithm is capable of a detection rate >93 percent, a false-alarm rate <5 percent, a geometric error <0.5 pixel, and a position error <0.3 pixel.
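
Step 4 above fits an ellipse to each group of crater edge points. As a simplified sketch, the following fits a circle, the special case of an ellipse for a crater seen face-on, by the least-squares Kasa method (the actual algorithm fits general ellipses, since obliquely viewed craters project as ellipses):

```python
import math

def solve3(a, b):
    """Gauss-Jordan elimination with partial pivoting for a 3x3 system."""
    m = [row[:] + [bi] for row, bi in zip(a, b)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(m[r][c]))
        m[c], m[p] = m[p], m[c]
        for r in range(3):
            if r != c:
                f = m[r][c] / m[c][c]
                m[r] = [v - f * u for v, u in zip(m[r], m[c])]
    return [m[i][3] / m[i][i] for i in range(3)]

def fit_circle(pts):
    """Kasa least-squares circle fit: x^2 + y^2 + D*x + E*y + F = 0.
    The parameters D, E, F are linear, so the fit is one 3x3 solve.
    Returns (cx, cy, radius)."""
    ata = [[0.0] * 3 for _ in range(3)]
    atb = [0.0] * 3
    for x, y in pts:
        row = (x, y, 1.0)
        rhs = -(x * x + y * y)
        for i in range(3):
            for j in range(3):
                ata[i][j] += row[i] * row[j]
            atb[i] += row[i] * rhs
    d, e, f = solve3(ata, atb)
    cx, cy = -d / 2.0, -e / 2.0
    return cx, cy, math.sqrt(cx * cx + cy * cy - f)

# Noise-free rim points on an arc of a circle of radius 2 centered at (5, -1).
pts = [(5 + 2 * math.cos(t), -1 + 2 * math.sin(t))
       for t in [0.1 * k for k in range(20)]]
cx, cy, r = fit_circle(pts)
```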

  3. Product review: lucis image processing software.

    PubMed

    Johnson, J E

    1999-04-01

    Lucis is a software program that allows the manipulation of images through the process of selective contrast pattern emphasis. Using an image-processing algorithm called Differential Hysteresis Processing (DHP), Lucis extracts and highlights patterns based on variations in image intensity (luminance). The result is that details can be seen that would otherwise be hidden in deep shadow or excessive brightness. The software is contained on a single floppy disk, is easy to install on a PC, simple to use, and runs on Windows 95, Windows 98, and Windows NT operating systems. The cost is $8,500 for a license, but is estimated to save a great deal of money in photographic materials, time, and labor that would have otherwise been spent in the darkroom. Superb images are easily obtained from unstained (no lead or uranium) sections, and stored image files sent to laser printers are of publication quality. The software can be used not only for all types of microscopy, including color fluorescence light microscopy, biological and materials science electron microscopy (TEM and SEM), but will be beneficial in medicine, such as X-ray films (pending approval by the FDA), and in the arts. PMID:10206154

  4. Onboard Image Processing System for Hyperspectral Sensor

    PubMed Central

    Hihara, Hiroki; Moritani, Kotaro; Inoue, Masao; Hoshi, Yoshihiro; Iwasaki, Akira; Takada, Jun; Inada, Hitomi; Suzuki, Makoto; Seki, Taeko; Ichikawa, Satoshi; Tanii, Jun

    2015-01-01

    Onboard image processing systems for a hyperspectral sensor have been developed in order to maximize image data transmission efficiency for large volume and high speed data downlink capacity. Since more than 100 channels are required for hyperspectral sensors on Earth observation satellites, fast and small-footprint lossless image compression capability is essential for reducing the size and weight of a sensor system. A fast lossless image compression algorithm has been developed, and is implemented in the onboard correction circuitry of sensitivity and linearity of Complementary Metal Oxide Semiconductor (CMOS) sensors in order to maximize the compression ratio. The employed image compression method is based on Fast, Efficient, Lossless Image compression System (FELICS), which is a hierarchical predictive coding method with resolution scaling. To improve FELICS’s performance of image decorrelation and entropy coding, we apply a two-dimensional interpolation prediction and adaptive Golomb-Rice coding. It supports progressive decompression using resolution scaling while still maintaining superior performance measured as speed and complexity. Coding efficiency and compression speed enlarge the effective capacity of signal transmission channels, which leads to a reduction in onboard hardware by multiplexing sensor signals into a reduced number of compression circuits. The circuitry is embedded into the data formatter of the sensor system without adding size, weight, power consumption, and fabrication cost. PMID:26404281

  5. Onboard Image Processing System for Hyperspectral Sensor.

    PubMed

    Hihara, Hiroki; Moritani, Kotaro; Inoue, Masao; Hoshi, Yoshihiro; Iwasaki, Akira; Takada, Jun; Inada, Hitomi; Suzuki, Makoto; Seki, Taeko; Ichikawa, Satoshi; Tanii, Jun

    2015-01-01

    Onboard image processing systems for a hyperspectral sensor have been developed in order to maximize image data transmission efficiency for large volume and high speed data downlink capacity. Since more than 100 channels are required for hyperspectral sensors on Earth observation satellites, fast and small-footprint lossless image compression capability is essential for reducing the size and weight of a sensor system. A fast lossless image compression algorithm has been developed, and is implemented in the onboard correction circuitry of sensitivity and linearity of Complementary Metal Oxide Semiconductor (CMOS) sensors in order to maximize the compression ratio. The employed image compression method is based on Fast, Efficient, Lossless Image compression System (FELICS), which is a hierarchical predictive coding method with resolution scaling. To improve FELICS's performance of image decorrelation and entropy coding, we apply a two-dimensional interpolation prediction and adaptive Golomb-Rice coding. It supports progressive decompression using resolution scaling while still maintaining superior performance measured as speed and complexity. Coding efficiency and compression speed enlarge the effective capacity of signal transmission channels, which leads to a reduction in onboard hardware by multiplexing sensor signals into a reduced number of compression circuits. The circuitry is embedded into the data formatter of the sensor system without adding size, weight, power consumption, and fabrication cost. PMID:26404281
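
The Golomb-Rice entropy coding used to improve FELICS can be illustrated with a fixed-parameter sketch (the real coder adapts the parameter k to local statistics, which this omits):

```python
def rice_encode(values, k):
    """Golomb-Rice code: for each value v >= 0, emit v >> k in unary
    (that many 1 bits, then a terminating 0), followed by the low k
    bits of v. Small values -- typical prediction residuals -- get
    short codes."""
    bits = []
    for v in values:
        bits.extend([1] * (v >> k))
        bits.append(0)
        for i in range(k - 1, -1, -1):
            bits.append((v >> i) & 1)
    return bits

def rice_decode(bits, k, count):
    """Inverse of rice_encode for `count` values."""
    out, pos = [], 0
    for _ in range(count):
        q = 0
        while bits[pos] == 1:
            q += 1
            pos += 1
        pos += 1  # skip the terminating 0
        r = 0
        for _ in range(k):
            r = (r << 1) | bits[pos]
            pos += 1
        out.append((q << k) | r)
    return out

# Mostly-small residuals compress well with a small k.
data = [0, 3, 7, 2, 12]
enc = rice_encode(data, k=2)
```

Here the five values occupy 19 bits instead of 5 bytes; choosing k per context is what makes the scheme adaptive.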

  6. MATHEMATICAL METHODS IN MEDICAL IMAGE PROCESSING

    PubMed Central

    ANGENENT, SIGURD; PICHON, ERIC; TANNENBAUM, ALLEN

    2013-01-01

    In this paper, we describe some central mathematical problems in medical imaging. The subject has been undergoing rapid changes driven by better hardware and software. Much of the software is based on novel methods utilizing geometric partial differential equations in conjunction with standard signal/image processing techniques as well as computer graphics facilitating man/machine interactions. As part of this enterprise, researchers have been trying to base biomedical engineering principles on rigorous mathematical foundations for the development of software methods to be integrated into complete therapy delivery systems. These systems support the more effective delivery of many image-guided procedures such as radiation therapy, biopsy, and minimally invasive surgery. We will show how mathematics may impact some of the main problems in this area, including image enhancement, registration, and segmentation. PMID:23645963

  7. Feedback regulation of microscopes by image processing.

    PubMed

    Tsukada, Yuki; Hashimoto, Koichi

    2013-05-01

    Computational microscope systems are becoming a major part of imaging biological phenomena, and the development of such systems requires the design of automated regulation of microscopes. An important aspect of automated regulation is feedback regulation, which is the focus of this review. As modern microscope systems become more complex, often with many independent components that must work together, computer control is inevitable, since the exact orchestration of parameters and timings for these multiple components is critical to acquire proper images. A number of techniques have been developed for biological imaging to accomplish this. Here, we summarize the basics of computational microscopy for the purpose of building automatically regulated microscopes, focusing on feedback regulation by image processing. These techniques allow high-throughput data acquisition while monitoring both short- and long-term dynamic phenomena, which cannot be achieved without an automated system. PMID:23594233

  8. Mariner 9-Image processing and products

    USGS Publications Warehouse

    Levinthal, E.C.; Green, W.B.; Cutts, J.A.; Jahelka, E.D.; Johansen, R.A.; Sander, M.J.; Seidman, J.B.; Young, A.T.; Soderblom, L.A.

    1973-01-01

    The purpose of this paper is to describe the system for the display, processing, and production of image-data products created to support the Mariner 9 Television Experiment. Of necessity, the system was large in order to respond to the needs of a large team of scientists with a broad scope of experimental objectives. The desire to generate processed data products as rapidly as possible to take advantage of adaptive planning during the mission, coupled with the complexities introduced by the nature of the vidicon camera, greatly increased the scale of the ground-image processing effort. This paper describes the systems that carried out the processes and delivered the products necessary for real-time and near-real-time analyses. References are made to the computer algorithms used for the different levels of decalibration and analysis. © 1973.

  9. Web-based document image processing

    NASA Astrophysics Data System (ADS)

    Walker, Frank L.; Thoma, George R.

    1999-12-01

    Increasing numbers of research libraries are turning to the Internet for electronic interlibrary loan and for document delivery to patrons. This has been made possible through the widespread adoption of software such as Ariel and DocView. Ariel, a product of the Research Libraries Group, converts paper-based documents to monochrome bitmapped images and delivers them over the Internet; the National Library of Medicine's DocView is primarily designed for library patrons. While libraries and their patrons are beginning to reap the benefits of this new technology, barriers exist, e.g., differences in image file format, that lead to difficulties in the use of library document information. To research how to overcome such barriers, the Communications Engineering Branch of the Lister Hill National Center for Biomedical Communications, an R&D division of NLM, has developed a web site called the DocMorph Server. This is part of an ongoing intramural R&D program in document imaging that has spanned many aspects of electronic document conversion and preservation, Internet document transmission, and document usage. The DocMorph Server web site is designed to fill two roles. First, in a role that will benefit both libraries and their patrons, it allows Internet users to upload scanned image files for conversion to alternative formats, thereby enabling wider delivery and easier usage of library document information. Second, the DocMorph Server provides the design team an active test bed for evaluating the effectiveness and utility of new document image processing algorithms and functions, so that they may be evaluated for possible inclusion in other image processing software products being developed at NLM or elsewhere. This paper describes the design of the prototype DocMorph Server and the image processing functions being implemented on it.

  10. Digital image processing of vascular angiograms

    NASA Technical Reports Server (NTRS)

    Selzer, R. H.; Beckenbach, E. S.; Blankenhorn, D. H.; Crawford, D. W.; Brooks, S. H.

    1975-01-01

    The paper discusses the estimation of the degree of atherosclerosis in the human femoral artery through the use of a digital image processing system for vascular angiograms. The film digitizer uses an electronic image dissector camera to scan the angiogram and convert the recorded optical density information into a numerical format. Another processing step involves locating the vessel edges from the digital image. The computer has been programmed to estimate vessel abnormality through a series of measurements, some derived primarily from the vessel edge information and others from optical density variations within the lumen shadow. These measurements are combined into an atherosclerosis index, which is found in a post-mortem study to correlate well with both visual and chemical estimates of atherosclerotic disease.
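
One ingredient of such vessel measurements, estimating local vessel width from an optical-density profile taken across the lumen shadow, can be sketched as follows. This is a hypothetical simplification for illustration; the paper combines many edge and density measurements into its atherosclerosis index:

```python
def width_at_half_max(profile):
    """Estimate vessel width from a 1-D density profile as the distance
    between the half-maximum crossings (linear interpolation).
    Assumes the profile starts and ends below half-maximum."""
    half = max(profile) / 2.0
    i = next(i for i, v in enumerate(profile) if v >= half)
    j = next(j for j in range(len(profile) - 1, -1, -1) if profile[j] >= half)
    left = i - 1 + (half - profile[i - 1]) / (profile[i] - profile[i - 1])
    right = j + (profile[j] - half) / (profile[j] - profile[j + 1])
    return right - left

def percent_stenosis(min_width, ref_width):
    """Diameter reduction of the narrowest site relative to a healthy
    reference segment, in percent."""
    return 100.0 * (1.0 - min_width / ref_width)

# Triangular test profile peaking at 8: half-max crossings at 2.0 and 6.0.
profile = [0, 2, 4, 6, 8, 6, 4, 2, 0]
w = width_at_half_max(profile)
```

Repeating the width estimate along the vessel and comparing the minimum to a reference diameter gives a simple stenosis measure.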

  11. Improving Synthetic Aperture Image by Image Compounding in Beamforming Process

    NASA Astrophysics Data System (ADS)

    Martínez-Graullera, Oscar; Higuti, Ricardo T.; Martín, Carlos J.; Ullate, Luis. G.; Romero, David; Parrilla, Montserrat

    2011-06-01

    In this work, signal processing techniques are used to improve the quality of images based on multi-element synthetic aperture techniques. Using several apodization functions to obtain different side-lobe distributions, a polarity function and a threshold criterion are used to develop an image compounding technique. The spatial diversity is increased using an additional array, which generates complementary information about the defects, improving the results of the proposed algorithm and producing images of high resolution and contrast. The inspection of isotropic plate-like structures using linear arrays and Lamb waves is presented. Experimental results are shown for a 1-mm-thick isotropic aluminum plate with artificial defects, using linear arrays formed by 30 piezoelectric elements and the low-dispersion symmetric mode S0 at a frequency of 330 kHz.
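
    The core idea of compounding differently apodized images can be sketched very simply. The snippet below is a simplified stand-in for the paper's polarity/threshold criterion, assuming envelope-detected images as input: the main lobe is common to all apodizations while side lobes fall in different places, so a pixelwise minimum keeps targets and suppresses side lobes.

```python
import numpy as np

def compound_min(images):
    """Pixelwise-minimum compounding of envelope images.

    Each image is formed with a different apodization, so the main lobe is
    (nearly) common while side lobes move; the minimum keeps the target and
    suppresses apodization-dependent side lobes.
    """
    stack = np.stack([np.asarray(im, float) for im in images])
    return stack.min(axis=0)

# Two toy envelope profiles: same main lobe at index 2, side lobes elsewhere.
a = np.array([0.0, 0.3, 1.0, 0.0, 0.2])
b = np.array([0.2, 0.0, 1.0, 0.3, 0.0])
c = compound_min([a, b])
```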

  12. Stochastic processes, estimation theory and image enhancement

    NASA Technical Reports Server (NTRS)

    Assefi, T.

    1978-01-01

    An introductory account of stochastic processes, estimation theory, and image enhancement is presented. The book is primarily intended for first-year graduate students and practicing engineers and scientists whose work requires an acquaintance with the theory. Fundamental concepts of probability are reviewed as required to support the main topics. The appendices discuss the remaining mathematical background.

  13. Progressive band processing for hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Schultz, Robert C.

    Hyperspectral imaging has emerged as an important image processing technique in many applications. Hyperspectral data is so called mainly because of the massive amount of information provided by the hundreds of spectral bands available for data analysis. However, due to very high band-to-band correlation, much of this information may also be redundant. Consequently, how to effectively and best utilize such rich spectral information becomes very challenging. One general approach is data dimensionality reduction, which can be performed by data compression techniques, such as data transforms, and by data reduction techniques, such as band selection. This dissertation presents a new area in hyperspectral imaging, called progressive hyperspectral imaging, which has not been explored in the past. Specifically, it derives a new theory, called Progressive Band Processing (PBP), for hyperspectral data that can significantly reduce computing time and can also be realized in real time. It is particularly suited for application areas such as hyperspectral data communications and transmission, where data can be communicated and transmitted progressively through spectral or satellite channels with limited data storage. Most importantly, PBP allows users to screen preliminary results before deciding to continue with processing the complete data set. These advantages benefit users of hyperspectral data by reducing processing time and increasing the timeliness of crucial decisions made based on the data, such as identifying key intelligence information when the required response time is short.
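
    The band-by-band update idea behind PBP can be illustrated with a deliberately simple detector (the dissertation's actual PBP theory covers far more elaborate algorithms). The squared Euclidean distance of a pixel to a target signature obeys the recursion d_k^2 = d_{k-1}^2 + (x_k - t_k)^2, so a preliminary result is available after every received band, with no need to wait for the full cube.

```python
def progressive_distance(pixel, target):
    """Progressive Band Processing sketch: update a detector band by band.

    Returns the running squared distance to the target signature after each
    band, so users can screen preliminary results as bands arrive.
    """
    d2 = 0.0
    partial = []
    for xk, tk in zip(pixel, target):
        d2 += (xk - tk) ** 2
        partial.append(d2)   # intermediate result after band k
    return partial

partials = progressive_distance([1.0, 2.0, 3.0], [1.0, 0.0, 0.0])
# partials[-1] equals the full-band squared distance.
```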

  14. Process for making lyophilized radiographic imaging kit

    SciTech Connect

    Grogg, T.W.; Bates, P.E.; Bugaj, J.E.

    1985-04-09

    A process for making a lyophilized composition useful for skeletal imaging whereby an aqueous solution containing an ascorbate, gentisate, or reductate stabilizer is contacted with tin metal or an alloy containing tin and, thereafter, lyophilized. Preferably, such compositions also comprise a tissue-specific carrier and a stannous compound. It is particularly preferred to incorporate stannous oxide as a coating on the tin metal.

  15. Limiting liability via high resolution image processing

    SciTech Connect

    Greenwade, L.E.; Overlin, T.K.

    1996-12-31

    The utilization of high resolution image processing allows forensic analysts and visualization scientists to assist detectives by enhancing field photographs, and by providing the tools and training to increase the quality and usability of field photos. Through the use of digitized photographs and computerized enhancement software, field evidence can be obtained and processed as `evidence ready`, even in poor lighting, shadowed conditions, or darkened rooms. These images, which are most often unusable when taken with standard camera equipment, can be shot in the worst of photographic conditions and processed into usable evidence. Visualization scientists have taken digital photographic image processing and moved crime scene photography into the technology age. The use of high resolution technology will assist law enforcement in making better use of crime scene photography and in the positive identification of prints. Valuable courtroom and investigation time can be saved by this accurate, performance-based process. Inconclusive evidence does not lead to convictions. Enhancement of photographic capability helps solve a major problem with crime scene photos: images that, taken with standard equipment and without the benefit of enhancement software, would be inconclusive, allowing guilty parties to go free for lack of evidence.

  16. Visual parameter optimisation for biomedical image processing

    PubMed Central

    2015-01-01

    Background Biomedical image processing methods require users to optimise input parameters to ensure high-quality output. This presents two challenges. First, it is difficult to optimise multiple input parameters for multiple input images. Second, it is difficult to achieve an understanding of underlying algorithms, in particular, relationships between input and output. Results We present a visualisation method that transforms users' ability to understand algorithm behaviour by integrating input and output, and by supporting exploration of their relationships. We discuss its application to a colour deconvolution technique for stained histology images and show how it enabled a domain expert to identify suitable parameter values for the deconvolution of two types of images, and metrics to quantify deconvolution performance. It also enabled a breakthrough in understanding by invalidating an underlying assumption about the algorithm. Conclusions The visualisation method presented here provides analysis capability for multiple inputs and outputs in biomedical image processing that is not supported by previous analysis software. The analysis supported by our method is not feasible with conventional trial-and-error approaches. PMID:26329538
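
    The colour deconvolution technique the paper visualises can be sketched in a few lines. This follows the standard Ruifrok-and-Johnston-style formulation, not necessarily the exact variant studied; the stain matrix below is illustrative, not a calibrated value. Transmitted intensities are converted to optical density, which is linear in stain concentration (Beer-Lambert), then unmixed with the pseudo-inverse of the stain matrix.

```python
import numpy as np

def colour_deconvolve(rgb, stain_matrix, i0=255.0):
    """Unmix per-pixel stain concentrations from an RGB value.

    stain_matrix columns are the optical-density vectors of the stains.
    """
    rgb = np.clip(np.asarray(rgb, float), 1.0, i0)
    od = -np.log10(rgb / i0)                  # optical density per channel
    return np.linalg.pinv(stain_matrix) @ od  # stain concentrations

# Hypothetical two-stain matrix: columns are unit-norm stain OD vectors.
M = np.array([[0.65, 0.07],
              [0.70, 0.99],
              [0.29, 0.11]])
M = M / np.linalg.norm(M, axis=0)
```

    Because the forward model is linear in optical density, a pixel synthesised from known concentrations is recovered exactly, which is also a convenient sanity check when exploring input parameters.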

  17. The Graphic Novel Classroom: POWerful Teaching and Learning with Images

    ERIC Educational Resources Information Center

    Bakis, Maureen

    2011-01-01

    Could you use a superhero to teach reading, writing, critical thinking, and problem solving? While seeking the answer, secondary language arts teacher Maureen Bakis discovered a powerful pedagogy that teaches those skills and more. The amazingly successful results prompted her to write this practical guide that shows middle and high school…

  18. Using Photographic Images as an Interactive Online Teaching Strategy

    ERIC Educational Resources Information Center

    Perry, Beth

    2006-01-01

    Teaching via distance requires inventive instructional strategies to facilitate an optimum learning experience. This qualitative research study evaluated the effect of one unique online teaching strategy called "photovoice" [Wang, C., & Burris, M. (1997). "Photovoice: Concept, methodology, and use for participatory needs assessment." "Health…

  19. Medical imaging education in biomedical engineering curriculum: courseware development and application through a hybrid teaching model.

    PubMed

    Zhao, Weizhao; Li, Xiping; Chen, Hairong; Manns, Fabrice

    2012-01-01

    Medical imaging is a key training component in Biomedical Engineering programs. Medical imaging education is interdisciplinary training, involving physics, mathematics, chemistry, electrical engineering, computer engineering, and applications in biology and medicine. Finding an efficient teaching method for instructors and an effective learning environment for students has long been a goal of medical imaging education. With the support of NSF grants, we developed the medical imaging teaching software (MITS) and an associated dynamic assessment tracking system (DATS). The MITS/DATS system has been applied to junior and senior medical imaging classes through a hybrid teaching model. The results show that students' learning gains improved, particularly in concept understanding and simulation project completion. The results also indicate disparities in subjective perception between the junior and senior classes. Three institutions are collaborating to expand the courseware system and plan to apply it to different class settings. PMID:23367069

  20. Subband/transform functions for image processing

    NASA Technical Reports Server (NTRS)

    Glover, Daniel

    1993-01-01

    Functions for image data processing written for use with the MATLAB(TM) software package are presented. These functions provide the capability to transform image data with block transformations (such as the Walsh Hadamard) and to produce spatial frequency subbands of the transformed data. Block transforms are equivalent to simple subband systems. The transform coefficients are reordered using a simple permutation to give subbands. The low frequency subband is a low resolution version of the original image, while the higher frequency subbands contain edge information. The transform functions can be cascaded to provide further decomposition into more subbands. If the cascade is applied to all four of the first stage subbands (in the case of a four band decomposition), then a uniform structure of sixteen bands is obtained. If the cascade is applied only to the low frequency subband, an octave structure of seven bands results. Functions for the inverse transforms are also given. These functions can be used for image data compression systems. The transforms do not in themselves produce data compression, but prepare the data for quantization and compression. Sample quantization functions for subbands are also given. A typical compression approach is to subband the image data, quantize it, then use statistical coding (e.g., run-length coding followed by Huffman coding) for compression. Contour plots of image data and subbanded data are shown.
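
    A single stage of the block-transform subbanding described here can be sketched in Python (the original functions were MATLAB). The 2x2 Walsh-Hadamard block transform, equivalent to an unnormalised Haar stage, is applied and its coefficients are permuted into four subbands: LL is a low-resolution copy of the image, while LH/HL/HH carry edge detail.

```python
import numpy as np

def subband2x2(img):
    """One-stage 2x2 Walsh-Hadamard subband decomposition.

    Each 2x2 block's transform coefficients are reordered ("permuted")
    into four subbands, as the abstract describes.
    """
    img = np.asarray(img, float)
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0   # low-frequency subband (local averages)
    lh = (a - b + c - d) / 4.0   # horizontal detail
    hl = (a + b - c - d) / 4.0   # vertical detail
    hh = (a - b - c + d) / 4.0   # diagonal detail
    return ll, lh, hl, hh

# On a flat image all detail subbands vanish and LL reproduces the image.
flat = np.full((4, 4), 7.0)
ll, lh, hl, hh = subband2x2(flat)
```

    Cascading `subband2x2` on `ll` alone yields the octave structure mentioned above; cascading it on all four subbands yields the uniform sixteen-band structure.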

  1. Remote online processing of multispectral image data

    NASA Astrophysics Data System (ADS)

    Groh, Christine; Rothe, Hendrik

    2005-10-01

    Within the scope of this paper a compact and economical data acquisition system for multispectral images is described. It consists of a CCD camera and a liquid crystal tunable filter, together with an associated concept for data processing. Despite their limited functionality (e.g. regarding calibration) in comparison with commercial systems such as AVIRIS, the use of these upcoming compact multispectral camera systems can be advantageous in many applications. Additional benefit can be derived by adding online data processing. In order to maintain the system's low weight and price, this work proposes to separate the data acquisition and processing modules and to transmit pre-processed camera data online to a stationary high-performance computer for further processing. The inevitable data transmission has to be optimised because of bandwidth limitations. All of these considerations hold especially for applications involving mini-unmanned aerial vehicles (mini-UAVs): due to their limited internal payload, the use of a lightweight, compact camera system is of particular importance. This work focuses on the optimal software interface between pre-processed data (from the camera system), transmitted data (regarding the small bandwidth), and post-processed data (on the high-performance computer). Discussed parameters are pre-processing algorithms, channel bandwidth, and the resulting accuracy in the classification of multispectral image data. The benchmarked pre-processing algorithms include diagnostic statistics, tests of internal determination coefficients, and both lossless and lossy data compression methods. The resulting classification precision is computed in comparison to a classification performed with the original image dataset.

  2. Color Imaging management in film processing

    NASA Astrophysics Data System (ADS)

    Tremeau, Alain; Konik, Hubert; Colantoni, Philippe

    2003-12-01

    The latest research projects in the LIGIV laboratory concern the capture, processing, archiving and display of color images considering the trichromatic nature of the Human Visual System (HVS). One of these projects addresses digital cinematographic film sequences of high resolution and dynamic range, aiming to optimize the use of content for post-production operators and for the end user. The studies presented in this paper address the use of metadata to optimise the consumption of video content on a device of the user's choice, independent of the nature of the equipment that captured the content. Optimising consumption includes enhancing the quality of image reconstruction on a display. Another part of this project addresses the content-based adaptation of image display. The main focus is on Region of Interest (ROI) operations, based on the ROI concepts of MPEG-7. The aim of this second part is to characterize and ensure the conditions of display even if the display device or display medium changes. This requires firstly the definition of a reference color space and of bi-directional color transformations for each peripheral device (camera, display, film recorder, etc.). The complicating factor is that different devices have different color gamuts, depending on the chromaticity of their primaries and the ambient illumination under which they are viewed. To match the displayed image to the intended appearance, all kinds of production metadata (camera specification, camera colour primaries, lighting conditions) should be associated with the film material. Metadata and content together build rich content. The author is assumed to specify conditions as known from the digital graphic arts. To control image pre-processing and post-processing, these specifications should be contained in the film's metadata. The specifications are related to ICC profiles but must additionally consider mesopic viewing conditions.
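
    The "reference color space with bi-directional transforms per device" idea is the backbone of ICC-style color management and can be sketched concretely. The example below uses the standard linear-sRGB-to-CIE-XYZ (D65) matrix as the device transform; it is an illustration of the round-trip structure, not the paper's actual device characterisation.

```python
import numpy as np

# Forward transform: linear sRGB (device) -> CIE XYZ (reference, D65 white).
RGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                       [0.2126, 0.7152, 0.0722],
                       [0.0193, 0.1192, 0.9505]])
# Bi-directional: the inverse maps reference values back to the device.
XYZ_TO_RGB = np.linalg.inv(RGB_TO_XYZ)

def to_reference(rgb_linear):
    """Device RGB -> reference XYZ."""
    return RGB_TO_XYZ @ np.asarray(rgb_linear, float)

def to_device(xyz):
    """Reference XYZ -> device RGB; values outside [0, 1] are out of gamut."""
    return XYZ_TO_RGB @ np.asarray(xyz, float)

white_xyz = to_reference([1.0, 1.0, 1.0])   # device white in the reference space
```

    An out-of-gamut reference colour shows up as `to_device` output outside [0, 1], which is exactly the gamut-mismatch problem the abstract describes.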

  3. Bitplane Image Coding With Parallel Coefficient Processing.

    PubMed

    Auli-Llinas, Francesc; Enfedaque, Pablo; Moure, Juan C; Sanchez, Victor

    2016-01-01

    Image coding systems have traditionally been tailored for multiple instruction, multiple data (MIMD) computing. In general, they partition the (transformed) image into codeblocks that can be coded in the cores of MIMD-based processors. Each core executes a sequential flow of instructions to process the coefficients in its codeblock, independently and asynchronously from the other cores. Bitplane coding is a common strategy to code such data, and most of its mechanisms require sequential processing of the coefficients. Recent years have seen the rise of processing accelerators with enhanced computational performance and power efficiency whose architecture is mainly based on the single instruction, multiple data (SIMD) principle. SIMD computing refers to the execution of the same instruction on multiple data in a lockstep, synchronous way. Unfortunately, current bitplane coding strategies cannot fully profit from such processors due to their inherently sequential coding tasks. This paper presents bitplane image coding with parallel coefficient (BPC-PaCo) processing, a coding method that can process many coefficients within a codeblock in parallel and synchronously. To this end, the scanning order, the context formation, the probability model, and the arithmetic coder of the coding engine have been reformulated. The experimental results suggest that the penalty in coding performance of BPC-PaCo with respect to the traditional strategies is almost negligible. PMID:26441420
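
    The data-parallel pattern BPC-PaCo exploits can be illustrated with bitplane extraction itself. The vectorised snippet below applies the same shift-and-mask instruction to every coefficient of a block at once, SIMD-style; it is illustrative only, since the paper's actual contribution is reformulating the scanning order, context modelling, and arithmetic coder, which this sketch omits.

```python
import numpy as np

def bitplanes(block, nbits=8):
    """Extract all bitplanes of a coefficient block at once.

    Returns an array of shape (nbits, rows, cols); plane 0 holds the most
    significant bits. Every coefficient is processed by the same
    instruction stream (the data-parallel pattern SIMD hardware favours).
    """
    block = np.asarray(block, dtype=np.uint8)
    shifts = np.arange(nbits - 1, -1, -1)            # MSB first
    return (block[None, ...] >> shifts[:, None, None]) & 1

planes = bitplanes([[128, 1], [255, 0]])
```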

  4. [Digital thoracic radiology: devices, image processing, limits].

    PubMed

    Frija, J; de Géry, S; Lallouet, F; Guermazi, A; Zagdanski, A M; De Kerviler, E

    2001-09-01

    In the first part, the different techniques of digital thoracic radiography are described. Computed radiography with phosphor plates, being the most widely commercialized, receives the most emphasis, but other detectors are also described: the selenium-coated drum, direct digital radiography with selenium detectors, indirect flat-panel detectors, and a system based on four high-resolution CCD cameras. In the second part the most important image processing techniques are discussed: gradation curves, unsharp mask processing, the MUSICA system, dynamic range compression or reduction, and dual-energy subtraction. The last part weighs the advantages and drawbacks of computed thoracic radiography, the most important being the consistently good quality of the images and the possibilities for image processing. PMID:11567193
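
    Unsharp masking, one of the processing steps listed above, is simple enough to sketch directly: subtract a blurred copy from the image to isolate detail, then add the detail back scaled. A 3x3 box blur stands in here for the larger smoothing kernels real radiography systems use.

```python
import numpy as np

def unsharp_mask(img, amount=1.0):
    """Unsharp masking: original + amount * (original - blurred)."""
    img = np.asarray(img, float)
    padded = np.pad(img, 1, mode="edge")
    # 3x3 box blur as the low-pass "mask"
    blur = sum(padded[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0
    return img + amount * (img - blur)

# An intensity step gains overshoot on both sides, which reads as sharper edges.
step = np.array([[0.0, 0.0, 1.0, 1.0]] * 4)
sharp = unsharp_mask(step)
```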

  5. EOS image data processing system definition study

    NASA Technical Reports Server (NTRS)

    Gilbert, J.; Honikman, T.; Mcmahon, E.; Miller, E.; Pietrzak, L.; Yorsz, W.

    1973-01-01

    The Image Processing System (IPS) requirements and configuration are defined for the NASA-sponsored advanced technology Earth Observatory System (EOS). The scope included investigation and definition of IPS operational, functional, and product requirements, considering overall system constraints and interfaces (sensor, etc.). The scope also included investigation of the technical feasibility and definition of a point design reflecting system requirements. The design phase required a survey of present and projected technology related to general- and special-purpose processors, high-density digital tape recorders, and image recorders.

  6. Comparative positioning of ships on the basis of neural processing of digital images

    NASA Astrophysics Data System (ADS)

    Stateczny, A.

    2003-04-01

    Satellite and radar systems have been the main information sources in marine navigation in recent years. Apart from its commonly known anti-collision functions, the marine navigational radar constitutes the basis for a future comparative system of ship positioning. The sonar is an additional source of image information in the system; in this way, the data cover the ship's entire measuring area. The system of comparative navigation is an attractive alternative to satellite navigation owing to its autonomy and independence from external appliances. The methods of analytic comparison of digitally recorded images applied so far are based on complex and time-consuming calculation algorithms. A new approach in comparative navigation is the application of artificial neural networks for plotting the ship's position. In the positioning process, previously registered images can be used, along with their positions plotted, for instance, by means of the GPS system or by geodetic methods. The teaching sequence consists of the registered images correlated with positions; it is performed in advance and can last any length of time. After the process of teaching the network is completed, the dynamically registered images are put on the network input as they arrive, and a position interpolation is performed based on the images recognized as closest to the image analyzed. A merit of this method is teaching the network with real images, along with their disturbances and distortions: the teaching sequence includes images analogous to those that will be used in practice. While the system is working, the response of the network (plotting the ship's position) is almost immediate. A basic problem of this method is the need for prior registration of numerous real images in various hydrometeorological conditions. The registered images should be subjected to digital processing, in particular to compression.
One of the processing methods is encoding the image by means of a Kohonen network and then feeding the encoded vector to the input of a GRNN (General Regression Neural Network). The article presents a method of processing the image through encoding by a Kohonen network and interpolating the ship's position by a GRNN network.
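
    The GRNN interpolation step can be sketched as kernel-weighted regression (a GRNN is essentially a Nadaraya-Watson estimator). The feature vectors below stand in for Kohonen-encoded images, and the positions are hypothetical fixes; the estimate is a weighted average favouring the stored images closest to the query, which is exactly the interpolation the article describes.

```python
import numpy as np

def grnn_position(features, train_feats, train_pos, sigma=1.0):
    """GRNN position interpolation from encoded image features.

    Each stored (encoded) image has a known position; the output is a
    Gaussian-kernel-weighted average of those positions.
    """
    train_feats = np.asarray(train_feats, float)
    train_pos = np.asarray(train_pos, float)
    d2 = ((train_feats - np.asarray(features, float)) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w[:, None] * train_pos).sum(axis=0) / w.sum()

# Two reference images with known positions; a query feature halfway
# between them interpolates to the midpoint.
feats = [[0.0, 0.0], [1.0, 0.0]]
pos = [[54.0, 18.0], [54.0, 19.0]]   # hypothetical lat/lon fixes
est = grnn_position([0.5, 0.0], feats, pos)
```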

  7. Using a creative teaching process with adult patients.

    PubMed

    Duffy, B

    1997-02-01

    Because the patient or caregiver must manage healthcare needs after the nurse has left the home, patient education is an important component of home health nursing (Rice, 1996). Fortunately, home healthcare is an ideal setting for patient teaching. To make the most of the home learning environment with adult patients, the nurse must assess, design, develop, implement, and evaluate an individualized patient teaching plan. Throughout the ADDIE process, the nurse manipulates and integrates the home environment to maximize the possibility that the patient will accept, remember, and apply the information presented. Taking into account how adults learn, the nurse provides relevant problems and situations for the patient to practice newly acquired knowledge and skills. The instruction presents learners with alternatives to their current ways of thinking, behaving, and living (Brookfield, 1986). Given the information and the tools needed to regain a sense of control and to experience life safely within the limits of their medical illness or injury, informed adult patients are likely to experience fewer complications and enhanced self-esteem. For pertinent, timely, and personal healthcare instruction, there is no place quite like home. PMID:9146150

  8. Translational motion compensation in ISAR image processing.

    PubMed

    Wu, H; Grenier, D; Delisle, G Y; Fang, D G

    1995-01-01

    In inverse synthetic aperture radar (ISAR) imaging, the target rotational motion with respect to the radar line of sight contributes to the imaging ability, whereas the translational motion must be compensated out. This paper presents a novel two-step approach to translational motion compensation using an adaptive range tracking method for range bin alignment and a recursive multiple-scatterer algorithm (RMSA) for signal phase compensation. The initial step of RMSA is equivalent to the dominant-scatterer algorithm (DSA). An error-compensating point source is then recursively synthesized from the selected range bins, where each contains a prominent scatterer. Since the clutter-induced phase errors are reduced by phase averaging, the image speckle noise can be reduced significantly. Experimental data processing for a commercial aircraft and computer simulations confirm the validity of the approach. PMID:18291988
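
    The first step, range bin alignment, can be sketched with a correlation-based tracker. This is a simplified stand-in for the paper's adaptive range tracking: each pulse's magnitude range profile is circularly shifted to maximise its correlation with the first profile, removing bulk translational range walk before phase compensation.

```python
import numpy as np

def align_range_bins(profiles):
    """Align magnitude range profiles to the first pulse by circular shift."""
    profiles = np.asarray(profiles, float)
    ref = profiles[0]
    aligned = [ref]
    for p in profiles[1:]:
        # brute-force circular cross-correlation (profiles here are short)
        scores = [np.dot(ref, np.roll(p, s)) for s in range(len(p))]
        aligned.append(np.roll(p, int(np.argmax(scores))))
    return np.array(aligned)

pulses = np.array([[0, 0, 5, 1, 0],
                   [0, 5, 1, 0, 0],    # target drifted one bin early
                   [0, 0, 0, 5, 1]])   # ...and one bin late
aligned = align_range_bins(pulses)
```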

  9. Quantum information processing in optical images

    NASA Astrophysics Data System (ADS)

    Fabre, C.; Andersen, U.; Bachor, H.; Buchler, B.; Gigan, S.; Lam, P. K.; Maître, A.; Treps, N.

    2002-12-01

    Optical images can be used to transport, store and process information in a parallel way. We discuss different results obtained in the domain of 'quantum imaging', aiming at exploiting at the same time the quantum properties of optical images and their intrinsic parallelism. We define the notion of standard quantum limit (SQL) in optical resolution, set by the quantum noise of usual coherent light, and show that it can be much lower than the diffraction limit. We also prove that this limit can be circumvented by especially designed nonclassical and multimode light. We present an experiment showing that OPOs oscillating inside an exactly confocal cavity actually produce such transverse multimode nonclassical light. We finally describe another experiment which has surpassed the SQL in the case of beam positioning, both in the 1D and 2D cases.

  10. Computer image processing in marine resource exploration

    NASA Technical Reports Server (NTRS)

    Paluzzi, P. R.; Normark, W. R.; Hess, G. R.; Hess, H. D.; Cruickshank, M. J.

    1976-01-01

    Pictographic data or imagery is commonly used in marine exploration. Pre-existing image processing techniques (software) similar to those used on imagery obtained from unmanned planetary exploration were used to improve marine photography and side-scan sonar imagery. Features and details not visible by conventional photo processing methods were enhanced by filtering and noise removal on selected deep-sea photographs. Information gained near the periphery of photographs allows improved interpretation and facilitates construction of bottom mosaics where overlapping frames are available. Similar processing techniques were applied to side-scan sonar imagery, including corrections for slant range distortion, and along-track scale changes. The use of digital data processing and storage techniques greatly extends the quantity of information that can be handled, stored, and processed.
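
    The slant-range correction mentioned above is pure geometry and worth making explicit: the sonar records time of flight, i.e. slant range from the towfish, while a map needs ground range, which (for a flat bottom) follows from Pythagoras.

```python
import math

def ground_range(slant_range, altitude):
    """Slant-range correction for side-scan sonar (flat-bottom assumption).

    Returns 0 for returns arriving before the first bottom echo
    (slant range shorter than the towfish altitude).
    """
    if slant_range <= altitude:
        return 0.0
    return math.sqrt(slant_range ** 2 - altitude ** 2)

gr = ground_range(50.0, 30.0)   # a 3-4-5 triangle scaled by 10
```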

  11. IMAGE 100: The interactive multispectral image processing system

    NASA Technical Reports Server (NTRS)

    Schaller, E. S.; Towles, R. W.

    1975-01-01

    The need for rapid, cost-effective extraction of useful information from vast quantities of multispectral imagery available from aircraft or spacecraft has resulted in the design, implementation and application of a state-of-the-art processing system known as IMAGE 100. Operating on the general principle that all objects or materials possess unique spectral characteristics or signatures, the system uses this signature uniqueness to identify similar features in an image by simultaneously analyzing signatures in multiple frequency bands. Pseudo-colors, or themes, are assigned to features having identical spectral characteristics. These themes are displayed on a color CRT, and may be recorded on tape, film, or other media. The system was designed to incorporate key features such as interactive operation, user-oriented displays and controls, and rapid-response machine processing. Owing to these features, the user can readily control and/or modify the analysis process based on his knowledge of the input imagery. Effective use can be made of conventional photographic interpretation skills and state-of-the-art machine analysis techniques in the extraction of useful information from multispectral imagery. This approach results in highly accurate multitheme classification of imagery in seconds or minutes rather than the hours often involved in processing using other means.
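
    The signature-matching principle can be sketched with a minimal nearest-signature classifier, assuming training signatures have already been extracted; the signatures and theme names below are hypothetical, and IMAGE 100's actual classifier is more elaborate. Each pixel is assigned to the theme whose signature is nearest across all bands simultaneously.

```python
import numpy as np

def classify_themes(cube, signatures):
    """Assign each pixel the index of the nearest spectral signature.

    cube: (rows, cols, bands); signatures: (themes, bands).
    Distances over all bands are computed simultaneously, echoing the
    multi-band signature analysis the abstract describes.
    """
    cube = np.asarray(cube, float)
    sigs = np.asarray(signatures, float)
    d2 = ((cube[..., None, :] - sigs) ** 2).sum(axis=-1)  # (rows, cols, themes)
    return d2.argmin(axis=-1)

# Two hypothetical 3-band signatures: theme 0 "water", theme 1 "vegetation".
sigs = [[0.1, 0.1, 0.0], [0.2, 0.8, 0.4]]
cube = [[[0.1, 0.1, 0.1], [0.2, 0.7, 0.5]]]   # a 1x2-pixel scene
themes = classify_themes(cube, sigs)
```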

  12. Improvement of hospital processes through business process management in Qaem Teaching Hospital: A work in progress.

    PubMed

    Yarmohammadian, Mohammad H; Ebrahimipour, Hossein; Doosty, Farzaneh

    2014-01-01

    In a world of continuously changing business environments, organizations have no option but to deal with a substantial degree of transformation in order to adjust to the consequent demands. Therefore, many companies need to continually improve and review their processes to maintain their competitive advantages in an uncertain environment. Meeting these challenges requires implementing the most efficient possible business processes, geared to the needs of the industry and market segments that the organization serves globally. In the last 10 years, total quality management, business process reengineering, and business process management (BPM) have been among the management tools applied by organizations to increase business competitiveness. This paper is an original article that presents the implementation of the BPM approach in the healthcare domain, allowing an organization to improve and review its critical business processes. The project was performed in Qaem Teaching Hospital in Mashhad, Iran and consists of four distinct steps: (1) identify business processes, (2) document the process, (3) analyze and measure the process, and (4) improve the process. Implementing BPM in Qaem Teaching Hospital changed the nature of management by allowing the organization to avoid the complexity of disparate, siloed systems. BPM instead enabled the organization to focus on business processes at a higher level. PMID:25540784

  14. Intelligent image processing for machine safety

    NASA Astrophysics Data System (ADS)

    Harvey, Dennis N.

    1994-10-01

    This paper describes the use of intelligent image processing as a machine guarding technology. One or more color, linear array cameras are positioned to view the critical region(s) around a machine tool or other piece of manufacturing equipment. The image data is processed to provide indicators of conditions dangerous to the equipment via color content, shape content, and motion content. The data from these analyses is then sent to a threat evaluator. The purpose of the evaluator is to determine if a potentially machine-damaging condition exists based on the analyses of color, shape, and motion, and on `knowledge' of the specific environment of the machine. The threat evaluator employs fuzzy logic as a means of dealing with uncertainty in the vision data.
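
    The fuzzy-logic threat evaluator can be illustrated with a toy rule base. The two rules below are hypothetical, not the paper's: each analysis channel (color, shape, motion) reports a membership degree in [0, 1] for "dangerous"; a rule fires at the strength of its weakest antecedent (fuzzy AND = min), and the rules are combined with fuzzy OR = max.

```python
def threat_level(color_score, shape_score, motion_score):
    """Tiny fuzzy evaluator combining the three vision-analysis channels."""
    # Rule 1: dangerous colour AND dangerous shape
    r1 = min(color_score, shape_score)
    # Rule 2: dangerous motion alone is enough
    r2 = motion_score
    # Fuzzy OR over the rule activations
    return max(r1, r2)

level = threat_level(0.9, 0.4, 0.2)
```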

  15. Intelligent image processing for machine operator safety

    NASA Astrophysics Data System (ADS)

    Harvey, Dennis N.

    1994-09-01

    This paper describes the use of intelligent image processing as a machine tool operator safety technology. One or more color, linear array cameras are positioned to view the critical region(s) around a machine tool. The image data is processed to provide indicators of an operator danger condition via color content, shape content, and motion content. The data from these analyses is then sent to a threat evaluator. The purpose of the evaluator is to determine if a danger condition exists based on the analyses of color, shape, and motion, and on `knowledge' of the specific environment of the machine tool. The threat evaluator employs fuzzy logic as a means of dealing with uncertainty in the vision data.

  16. Enriching Student Concept Images: Teaching and Learning Fractions through a Multiple-Embodiment Approach

    ERIC Educational Resources Information Center

    Zhang, Xiaofen; Clements, M. A.; Ellerton, Nerida F.

    2015-01-01

    This study investigated how fifth-grade children's concept images of the unit fractions represented by the symbols 1/2, 1/3, and 1/4 changed as a result of their participation in an instructional intervention based on multiple embodiments of fraction concepts. The participants' concept images were examined through pre- and post-teaching written…

  18. Digital image processing of vascular angiograms

    NASA Technical Reports Server (NTRS)

    Selzer, R. H.; Blankenhorn, D. H.; Beckenbach, E. S.; Crawford, D. W.; Brooks, S. H.

    1975-01-01

    A computer image processing technique was developed to estimate the degree of atherosclerosis in the human femoral artery. With an angiographic film of the vessel as input, the computer was programmed to estimate vessel abnormality through a series of measurements, some derived primarily from the vessel edge information and others from optical density variations within the lumen shadow. These measurements were combined into an atherosclerosis index, which was found to correlate well with both visual and chemical estimates of atherosclerotic disease.

  19. Novel image processing approach to detect malaria

    NASA Astrophysics Data System (ADS)

    Mas, David; Ferrer, Belen; Cojoc, Dan; Finaurini, Sara; Mico, Vicente; Garcia, Javier; Zalevsky, Zeev

    2015-09-01

    In this paper we present a novel image processing algorithm providing good preliminary capabilities for in vitro detection of malaria. The proposed concept is based upon analysis of the temporal variation of each pixel. Changes in dark pixels indicate intracellular activity and thus the presence of the malaria parasite inside the cell. Preliminary experimental results, involving analysis of red blood cells that were either healthy or infected with malaria parasites, validated the potential benefit of the proposed numerical approach.
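    The per-pixel temporal analysis described above can be sketched as follows. The dark-level and variability thresholds are hypothetical placeholders, not values from the paper:

    ```python
    from statistics import mean, pstdev

    def active_dark_pixels(frames, dark_thresh=80.0, std_thresh=5.0):
        """Flag pixels that are dark on average yet fluctuate strongly in time.
        frames: list of equally sized 2-D grayscale lists (one per time step).
        Thresholds are illustrative assumptions."""
        rows, cols = len(frames[0]), len(frames[0][0])
        flagged = []
        for r in range(rows):
            for c in range(cols):
                series = [frame[r][c] for frame in frames]
                # dark pixel + high temporal spread -> candidate parasite activity
                if mean(series) < dark_thresh and pstdev(series) > std_thresh:
                    flagged.append((r, c))
        return flagged
    ```

    A static dark pixel (constant background) is rejected by the spread test, while a dark pixel whose value oscillates frame to frame is flagged, which matches the cue the abstract associates with intracellular activity.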

  20. IPLIB (Image processing library) user's manual

    NASA Technical Reports Server (NTRS)

    Faulcon, N. D.; Monteith, J. H.; Miller, K.

    1985-01-01

    IPLIB is a collection of HP FORTRAN 77 subroutines and functions that facilitate the use of a COMTAL image processing system driven by an HP-1000 computer. It is intended for programmers who want to use the HP 1000 to drive the COMTAL Vision One/20 system. It is assumed that the programmer knows HP 1000 FORTRAN 77 or at least one FORTRAN dialect. It is also assumed that the programmer has some familiarity with the COMTAL Vision One/20 system.

  1. Sorting Olive Batches for the Milling Process Using Image Processing.

    PubMed

    Aguilera Puerto, Daniel; Martínez Gila, Diego Manuel; Gámez García, Javier; Gómez Ortega, Juan

    2015-01-01

    The quality of virgin olive oil obtained in the milling process is directly bound to the characteristics of the olives. Hence, the correct classification of the different incoming olive batches is crucial to reach the maximum quality of the oil. The aim of this work is to provide an automatic inspection system, based on computer vision, to classify automatically different batches of olives entering the milling process. The classification is based on the differentiation between ground and tree olives. For this purpose, three different species have been studied (Picudo, Picual and Hojiblanco). The samples have been obtained by picking the olives directly from the tree or from the ground. The feature vector of the samples has been obtained on the basis of the olive image histograms. Moreover, different image preprocessing methods have been employed, and two classification techniques have been used: discriminant analysis and neural networks. The proposed methodology has been validated successfully, obtaining good classification results. PMID:26147729
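    A minimal sketch of the histogram-feature pipeline described above. The bin count, the class centroids, and the nearest-centroid rule (a simple stand-in for the discriminant analysis used in the paper) are all assumptions for illustration:

    ```python
    def gray_histogram(img, bins=4, max_val=256):
        """Normalized gray-level histogram of a 2-D image: the kind of
        feature vector the authors build from olive images."""
        hist = [0] * bins
        pixels = [p for row in img for p in row]
        for p in pixels:
            hist[p * bins // max_val] += 1
        return [count / len(pixels) for count in hist]

    def nearest_centroid(feature, centroids):
        """Assign the feature vector to the class whose mean is closest
        in squared Euclidean distance."""
        def dist2(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        return min(centroids, key=lambda label: dist2(feature, centroids[label]))
    ```

    With hypothetical class means, e.g. ground olives assumed darker because of adhering soil, a mostly dark sample image is assigned to the "ground" class.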

  2. Sorting Olive Batches for the Milling Process Using Image Processing

    PubMed Central

    Puerto, Daniel Aguilera; Martínez Gila, Diego Manuel; Gámez García, Javier; Gómez Ortega, Juan

    2015-01-01

    The quality of virgin olive oil obtained in the milling process is directly bound to the characteristics of the olives. Hence, the correct classification of the different incoming olive batches is crucial to reach the maximum quality of the oil. The aim of this work is to provide an automatic inspection system, based on computer vision, to classify automatically different batches of olives entering the milling process. The classification is based on the differentiation between ground and tree olives. For this purpose, three different species have been studied (Picudo, Picual and Hojiblanco). The samples have been obtained by picking the olives directly from the tree or from the ground. The feature vector of the samples has been obtained on the basis of the olive image histograms. Moreover, different image preprocessing methods have been employed, and two classification techniques have been used: discriminant analysis and neural networks. The proposed methodology has been validated successfully, obtaining good classification results. PMID:26147729

  3. Pharmacy Students' Perceptions of a Teaching Evaluation Process

    PubMed Central

    Desselle, Shane P.

    2007-01-01

    Objective To assess PharmD students' perceptions of the usefulness of Duquesne University's Teaching Effectiveness Questionnaire (TEQ), the instrument currently employed for student evaluation of teaching. Methods Opinions of PharmD students regarding the TEQ were measured using a survey instrument comprised of Likert-type scales eliciting perceptions, behaviors, and self-reported biases. Results PharmD students viewed student evaluation of teaching as appropriate and necessary, but conceded that the faculty members receiving the best evaluations were not always the most effective teachers. Most students indicated a willingness to complete the TEQ when given the opportunity but expressed frustration that their feedback did not appear to improve subsequent teaching efforts. Conclusion The current TEQ mechanism for student evaluation of teaching is clearly useful but nevertheless imperfect with respect to its ability to improve teaching. Future research may examine other aspects of pharmacy students' roles as evaluators of teaching. PMID:17429506

  4. Enriching student concept images: Teaching and learning fractions through a multiple-embodiment approach

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaofen; Clements, M. A. (Ken); Ellerton, Nerida F.

    2015-06-01

    This study investigated how fifth-grade children's concept images of the unit fractions represented by the symbols 1/2, 1/3, and 1/4 changed as a result of their participation in an instructional intervention based on multiple embodiments of fraction concepts. The participants' concept images were examined through pre- and post-teaching written questions and pre- and post-teaching one-to-one verbal interview questions. Results showed that at the pre-teaching stage, the student concept images of unit fractions were very narrow and mainly linked to area models. However, after the instructional intervention, the fifth graders were able to select and apply a variety of models in response to unit fraction tasks, and their concept images of unit fractions were enriched and linked to capacity, perimeter, linear and discrete models, as well as to area models. Their performances on tests had improved, and their conceptual understandings of unit fractions had developed.

  5. Color Image Processing and Object Tracking System

    NASA Technical Reports Server (NTRS)

    Klimek, Robert B.; Wright, Ted W.; Sielken, Robert S.

    1996-01-01

    This report describes a personal computer based system for automatic and semiautomatic tracking of objects on film or video tape, developed to meet the needs of the Microgravity Combustion and Fluids Science Research Programs at the NASA Lewis Research Center. The system consists of individual hardware components working under computer control to achieve a high degree of automation. The most important hardware components include 16-mm and 35-mm film transports, a high resolution digital camera mounted on an x-y-z micro-positioning stage, an S-VHS tapedeck, a Hi8 tapedeck, a video laserdisk, and a framegrabber. All of the image input devices are remotely controlled by a computer. Software was developed to integrate the overall operation of the system, including device frame incrementation, grabbing of image frames, image processing of the object's neighborhood, locating the position of the object being tracked, and storing the coordinates in a file. This process is performed repeatedly until the last frame is reached. Several different tracking methods are supported. To illustrate the process, two representative applications of the system are described. These applications represent typical uses of the system and include tracking the propagation of a flame front and tracking the movement of a liquid-gas interface with extremely poor visibility.

  6. Phase Superposition Processing for Ultrasonic Imaging

    NASA Astrophysics Data System (ADS)

    Tao, L.; Ma, X. R.; Tian, H.; Guo, Z. X.

    1996-06-01

    In order to improve the resolution of defect reconstruction for non-destructive evaluation, a new phase superposition processing (PSP) method has been developed on the basis of the synthetic aperture focusing technique (SAFT). The proposed method synthesizes the magnitudes of phase-superposed delayed signal groups. A satisfactory image can be obtained by a simple algorithm that processes time-domain radio-frequency signals directly. In this paper, the theory of PSP is introduced, and some simulation and experimental results illustrating the advantage of PSP are given.
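    The underlying SAFT step that PSP builds on can be sketched as a delay-and-sum reconstruction of one image point from pulse-echo A-scans. This sketch sums raw delayed samples; the paper's PSP variant, which synthesizes magnitudes of phase-superposed delayed signal groups, is not reproduced here:

    ```python
    import math

    def saft_pixel(ascans, elem_x, dt, c, px, pz):
        """Delay-and-sum value of an image point (px, pz).
        ascans[i][k]: sample of the element at (elem_x[i], 0) at time k*dt;
        c: wave speed in the medium."""
        total = 0.0
        for x, trace in zip(elem_x, ascans):
            t = 2.0 * math.hypot(px - x, pz) / c  # round-trip travel time
            k = round(t / dt)                     # nearest stored sample
            if 0 <= k < len(trace):
                total += trace[k]
        return total
    ```

    Echoes from a true scatterer line up after delay compensation and add coherently; at other image points the delayed samples fall on different parts of each trace and largely cancel, which is the aperture-focusing effect.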

  7. Optical processing of imaging spectrometer data

    NASA Technical Reports Server (NTRS)

    Liu, Shiaw-Dong; Casasent, David

    1988-01-01

    The data-processing problems associated with imaging spectrometer data are reviewed; new algorithms and optical processing solutions are advanced for this computationally intensive application. Optical decision net, directed graph, and neural net solutions are considered. Decision nets and mineral element determination of nonmixture data are emphasized here. A new Fisher/minimum-variance clustering algorithm is advanced; initialization using minimum-variance clustering is found to be preferred and fast. Tests on a 500-class problem show the excellent performance of this algorithm.
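    The minimum-variance clustering idea can be illustrated with a plain Lloyd iteration on scalar data; the paper's Fisher/minimum-variance algorithm itself is richer and is not reproduced here:

    ```python
    def min_variance_clusters(points, centers, iters=20):
        """Lloyd-style minimum-variance clustering of scalar data:
        assign each point to its nearest center, then move each center
        to the mean of its members."""
        for _ in range(iters):
            groups = [[] for _ in centers]
            for p in points:
                i = min(range(len(centers)), key=lambda j: (p - centers[j]) ** 2)
                groups[i].append(p)
            # empty clusters keep their previous center
            centers = [sum(g) / len(g) if g else centers[i]
                       for i, g in enumerate(groups)]
        return centers, groups
    ```

    Each iteration can only lower the within-cluster variance, so the centers settle quickly; this rapid convergence is consistent with the abstract's note that minimum-variance initialization is preferred and fast.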

  8. Automated synthesis of image processing procedures using AI planning techniques

    NASA Technical Reports Server (NTRS)

    Chien, Steve; Mortensen, Helen

    1994-01-01

    This paper describes the Multimission VICAR (Video Image Communication and Retrieval) Planner (MVP) (Chien 1994) system, which uses artificial intelligence planning techniques (Iwasaki & Friedland, 1985, Pemberthy & Weld, 1992, Stefik, 1981) to automatically construct executable complex image processing procedures (using models of the smaller constituent image processing subprograms) in response to image processing requests made to the JPL Multimission Image Processing Laboratory (MIPL). The MVP system allows the user to specify the image processing requirements in terms of the various types of correction required. Given this information, MVP derives unspecified required processing steps and determines appropriate image processing programs and parameters to achieve the specified image processing goals. This information is output as an executable image processing program which can then be executed to fill the processing request.

  9. High-speed imaging and image processing in voice disorders

    NASA Astrophysics Data System (ADS)

    Tigges, Monika; Wittenberg, Thomas; Rosanowski, Frank; Eysholdt, Ulrich

    1996-12-01

    A digital high-speed camera system for the endoscopic examination of the larynx delivers recording speeds of up to 10,000 frames/s. Recordings of up to 1 s duration can be stored and used for further evaluation. Maximum resolution is 128 × 128 pixels. The acoustic and electroglottographic signals are recorded simultaneously. An image processing program especially developed for this purpose renders time-displacement waveforms (high-speed glottograms) at several locations on the vocal cords. From the graphs, all of the known objective parameters of the voice can be derived. Results of examinations in normal subjects and patients are presented.

  10. Automatic processing method for astronomical CCD images

    NASA Astrophysics Data System (ADS)

    Li, Binhua; Yang, Lei; Mao, Wei

    2002-12-01

    Since several hundred CCD images are obtained with the CCD camera of the Lower Latitude Meridian Circle (LLMC) every observational night, it is essential to adopt an automatic processing method to find the initial position of each object in these images, to center each detected object, and to calculate its magnitude. In this paper, several existing algorithms for automatically searching for objects in astronomical CCD images are reviewed. Our automatic search algorithm is described, which includes five steps: background calculation, filtering, object detection and identification, and defect elimination. Then several existing two-dimensional centering algorithms are also reviewed, and our modified two-dimensional moment algorithm and an empirical formula for the centering threshold are presented. An algorithm for determining the magnitudes of objects is also presented in the paper. All these algorithms are programmed in the VC++ programming language. Finally, our method is tested with CCD images from the 1m RCC telescope at Yunnan Observatory, and some preliminary results are given.
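    The classical two-dimensional moment centering step mentioned above can be sketched as an intensity-weighted first moment of a background-subtracted image patch. This is a minimal sketch: the paper's modified moment algorithm and its empirical threshold formula are not reproduced:

    ```python
    def moment_centroid(img, background=0.0):
        """Intensity-weighted centroid (row, col) of a 2-D patch after
        subtracting a constant background estimate."""
        m00 = m10 = m01 = 0.0
        for r, row in enumerate(img):
            for c, v in enumerate(row):
                w = max(v - background, 0.0)  # clip below-background pixels
                m00 += w
                m10 += w * r
                m01 += w * c
        if m00 == 0.0:
            return None  # nothing above background
        return (m10 / m00, m01 / m00)
    ```

    In practice the background estimate comes from the background-calculation step of the pipeline, and the clipping keeps noise in empty sky from dragging the centroid.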

  11. Vector processing enhancements for real-time image analysis.

    SciTech Connect

    Shoaf, S.; APS Engineering Support Division

    2008-01-01

    A real-time image analysis system was developed for beam imaging diagnostics. An Apple Power Mac G5 with an Active Silicon LFG frame grabber was used to capture video images that were processed and analyzed. Software routines were created to utilize vector-processing hardware to reduce the time to process images as compared to conventional methods. These improvements allow for more advanced image processing diagnostics to be performed in real time.

  12. Nonlinear Processing Of Quantitative Thermographic Images

    NASA Astrophysics Data System (ADS)

    Pearce, John A.; Ryu, Zee M.

    1984-03-01

    The minimum resolvable temperature difference in quantitative thermographic imaging is essentially determined by photo-detector noise processes. In fast scan devices the overall detector noise envelope may be +/-0.5°C. When imaging quasistatic thermal scenes, integration of successive frames (signal averaging) can be used to reduce the effect of detector noise. However, many interesting thermal scenes are rapidly changing, such as the thermal pattern resulting from a laser pulse or from radio frequency energy applied to tissues, and frame integration cannot be applied. In the transient temperature measurement case, some smoothing operator must be used to improve the minimum resolvable temperature difference. Linear smoothing operators, such as averaging and gaussian filters, all have the undesirable side effect of smearing the few edges and other small details in the thermal image. Median filters offer the potential of filtering detector noise while preserving edges and other details. Because the median operator is inherently nonlinear, its effect on the accuracy of measured temperatures cannot be predicted a priori. For the photodetector studied, photoconductive mercury-cadmium-telluride, the detector noise process was determined to have a symmetrical probability density function. Thus the median filter yielded an excellent estimate of the uncorrupted signal with no degradation of edges or other details. The median filter is compared to a triangular window function.
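    The edge-preserving behavior claimed above is easy to demonstrate with a one-dimensional sliding-window median. This is a minimal sketch; filtering of thermographic frames is two-dimensional, and the window width here is an arbitrary choice:

    ```python
    def median_filter_1d(signal, width=3):
        """Sliding-window median with border replication. Removes impulsive
        detector noise while keeping step edges that a mean filter of the
        same width would smear."""
        half = width // 2
        n = len(signal)
        out = []
        for i in range(n):
            window = sorted(signal[min(max(j, 0), n - 1)]
                            for j in range(i - half, i + half + 1))
            out.append(window[len(window) // 2])
        return out
    ```

    A lone noise spike never occupies the majority of any window, so it is discarded, while a genuine step edge survives exactly; this is the nonlinearity that makes the filter's effect on measured temperatures hard to predict a priori.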

  13. The Airborne Ocean Color Imager - System description and image processing

    NASA Technical Reports Server (NTRS)

    Wrigley, Robert C.; Slye, Robert E.; Klooster, Steven A.; Freedman, Richard S.; Carle, Mark; Mcgregor, Lloyd F.

    1992-01-01

    The Airborne Ocean Color Imager was developed as an aircraft instrument to simulate the spectral and radiometric characteristics of the next generation of satellite ocean color instrumentation. Data processing programs have been developed as extensions of the Coastal Zone Color Scanner algorithms for atmospheric correction and bio-optical output products. The latter include several bio-optical algorithms for estimating phytoplankton pigment concentration, as well as one for the diffuse attenuation coefficient of the water. Additional programs have been developed to geolocate these products and remap them into a georeferenced data base, using data from the aircraft's inertial navigation system. Examples illustrate the sequential data products generated by the processing system, using data from flightlines near the mouth of the Mississippi River: from raw data to atmospherically corrected data, to bio-optical data, to geolocated data, and, finally, to georeferenced data.

  14. Processing Neutron Imaging Data - Quo Vadis?

    NASA Astrophysics Data System (ADS)

    Kaestner, A. P.; Schulz, M.

    Once an experiment has ended at a neutron imaging instrument, users often ask themselves how to proceed with the collected data. Large amounts of data have been obtained, but first-time users often have no plan or experience for evaluating the information obtained. The users then depend on support from the local contact, who unfortunately does not have the time to perform in-depth studies for every user. By instructing the users and providing evaluation tools, either on-site or as free software, this situation can be improved. With the continuous development of new instrument features that require increasingly complex analysis methods, the development of tools that bring the new features to the user community lags behind. We propose to start a common platform for open-source development of analysis tools dedicated to processing neutron imaging data.

  15. Development of the SOFIA Image Processing Tool

    NASA Technical Reports Server (NTRS)

    Adams, Alexander N.

    2011-01-01

    The Stratospheric Observatory for Infrared Astronomy (SOFIA) is a Boeing 747SP carrying a 2.5 meter infrared telescope capable of operating at altitudes between twelve and fourteen kilometers, which is above more than 99 percent of the water vapor in the atmosphere. The ability to make observations above most water vapor, coupled with the ability to make observations from anywhere at any time, makes SOFIA one of the world's premier infrared observatories. SOFIA uses three visible-light CCD imagers to assist in pointing the telescope. The data from these imagers are stored in archive files, as is housekeeping data containing information such as boresight and area-of-interest locations. A tool that could both extract and process data from the archive files was developed.

  16. HYMOSS signal processing for pushbroom spectral imaging

    NASA Technical Reports Server (NTRS)

    Ludwig, David E.

    1991-01-01

    The objective of the Pushbroom Spectral Imaging Program was to develop on-focal-plane electronics which compensate for detector array non-uniformities. The approach taken was to implement a simple two-point calibration algorithm on the focal plane which allows for offset and linear gain correction. The key on-focal-plane features which made this technique feasible were the use of a high-quality transimpedance amplifier (TIA) and an analog-to-digital converter for each detector channel. Gain compensation is accomplished by varying the feedback capacitance of the integrate-and-dump TIA. Offset correction is performed by storing offsets in a special on-focal-plane offset register and digitally subtracting the offsets from the readout data during the multiplexing operation. A custom integrated circuit was designed, fabricated, and tested on this program which proved that nonuniformity-compensated, analog-to-digital converting circuits may be used to read out infrared detectors. Irvine Sensors Corporation (ISC) successfully demonstrated the following innovative on-focal-plane functions that allow for correction of detector non-uniformities. Most of the circuit functions demonstrated on this program are finding their way onto future ICs because of their impact on reduced downstream processing, increased focal plane performance, simplified focal plane control, and a reduced number of dewar connections, as well as the noise immunity of a digital interface dewar. The potential commercial applications for this integrated circuit are primarily in imaging systems. These imaging systems may be used for security monitoring, manufacturing process monitoring, robotics, and spectral imaging in analytical instrumentation.
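    The two-point offset and gain correction can be sketched in software as follows. The abstract's system implements this in hardware (an offset register and variable feedback capacitance); the reference exposures and values here are hypothetical:

    ```python
    def two_point_correction(raw_cold, raw_hot, true_cold, true_hot):
        """Per-detector two-point non-uniformity correction: derive a gain
        and offset for each channel from two uniform reference exposures,
        then return a function that corrects a raw readout line."""
        gains, offsets = [], []
        for rc, rh in zip(raw_cold, raw_hot):
            g = (true_hot - true_cold) / (rh - rc)  # channel gain
            gains.append(g)
            offsets.append(true_cold - g * rc)      # channel offset
        def correct(raw_line):
            return [g * v + o for g, o, v in zip(gains, offsets, raw_line)]
        return correct
    ```

    Two uniform scenes pin down the linear response of each channel, so after correction every detector maps the reference levels to the same values; residual errors come only from nonlinearity and drift.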

  17. Emerging Model of Questioning through the Process of Teaching and Learning Electrochemistry

    ERIC Educational Resources Information Center

    Iksan, Zanaton Haji; Daniel, Esther

    2015-01-01

    Verbal questioning is a technique used by teachers in the teaching and learning process. Research in Malaysia related to teachers' questioning in the chemistry teaching and learning process is more focused on the level of the questions asked rather than the content to ensure that students understand. Thus, the research discussed in this paper is…

  18. Effects of Using Online Tools in Improving Regulation of the Teaching-Learning Process

    ERIC Educational Resources Information Center

    de la Fuente, Jesus; Cano, Francisco; Justicia, Fernando; Pichardo, Maria del Carmen; Garcia-Berben, Ana Belen; Martinez-Vicente, Jose Manuel; Sander, Paul

    2007-01-01

    Introduction: The current panorama of Higher Education reveals a need to improve teaching and learning processes taking place there. The rise of the information society transforms how we organize learning and transmit knowledge. On this account, teaching-learning processes must be enhanced, the role of teachers and students must be evaluated, and…

  19. A New Image Processing and GIS Package

    NASA Technical Reports Server (NTRS)

    Rickman, D.; Luvall, J. C.; Cheng, T.

    1998-01-01

    The image processing and GIS package ELAS was developed during the 1980s by NASA. It proved to be a popular, influential, and powerful tool for the manipulation of digital imagery. Before the advent of PCs it was used by hundreds of institutions, mostly schools. It is the unquestioned, direct progenitor of two commercial GIS remote sensing packages, ERDAS and MapX, and influenced others, such as PCI. Its power was demonstrated by its use for work far beyond its original purpose; it has been applied to several different types of medical imagery, photomicrographs of rock, images of turtle flippers, and numerous other esoteric imagery. Although development largely stopped in the early 1990s, the package still offers as much or more power and flexibility than any other roughly comparable package, public or commercial. It is a huge body of code, representing more than a decade of work by full-time, professional programmers. The current versions all have several deficiencies compared to current software standards and usage, notably the strictly command-line interface. In order to support their research needs, the authors are in the process of fundamentally changing ELAS, and in the process greatly increasing its power, utility, and ease of use. The new software is called ELAS II. This paper discusses the design of ELAS II.

  20. Improving Teaching through a Peer Support "Teacher Consultation Process."

    ERIC Educational Resources Information Center

    Kerwin, Mike; Rhoads, Judith

    The Teaching Consultation Program (TCP) is one of the most popular faculty development programs offered in the University of Kentucky Community Colleges (UKCC's), having trained over 500 faculty since its implementation in 1977. The TCP is a confidential, peer consulting program available to faculty who wish to analyze their teaching behaviors and…

  1. Using Image Processing to Determine Emphysema Severity

    NASA Astrophysics Data System (ADS)

    McKenzie, Alexander; Sadun, Alberto

    2010-10-01

    Currently, X-rays and computerized tomography (CT) scans are used to detect emphysema, but other tests are required to accurately quantify the amount of lung that has been affected by the disease. These images clearly show whether a patient has emphysema, but visual inspection alone cannot quantify the degree of the disease, as it presents as subtle, dark spots on the lung. Our goal is to use these CT scans to accurately diagnose and determine emphysema severity levels in patients. This will be accomplished by performing several different analyses of CT scan images from several patients representing a wide range of disease severity. In addition to analyzing the original CT data, this process will convert the data to one- and two-bit images and will then examine the deviation from a normal distribution curve to determine skewness. Our preliminary results show that this method of assessment appears to be more accurate and robust than the currently utilized methods, which involve looking at percentages of radiodensities in the air passages of the lung.
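    The skewness measure mentioned above is the third standardized moment of the intensity distribution. A minimal sketch over a sample of gray-level values (the abstract's specific preprocessing and bit-depth reduction are not reproduced):

    ```python
    from statistics import mean, pstdev

    def skewness(values):
        """Population skewness (third standardized moment) of a sample
        of gray-level values."""
        m, s = mean(values), pstdev(values)
        if s == 0:
            return 0.0  # constant data: no asymmetry to measure
        return sum(((v - m) / s) ** 3 for v in values) / len(values)
    ```

    A symmetric distribution scores near zero; a tail of extreme values pulls the score away from zero, which is the kind of departure from a normal curve the study uses as a severity cue.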

  2. Snapping Sharks, Maddening Mindreaders, and Interactive Images: Teaching Correlation.

    ERIC Educational Resources Information Center

    Mitchell, Mark L.

    Understanding correlation coefficients is difficult for students. A free computer program that helps introductory psychology students distinguish between positive and negative correlation, and which also teaches them to understand the differences between correlation coefficients of different size is described in this paper. The program is…

  3. Mobile Phone Images and Video in Science Teaching and Learning

    ERIC Educational Resources Information Center

    Ekanayake, Sakunthala Yatigammana; Wishart, Jocelyn

    2014-01-01

    This article reports a study into how mobile phones could be used to enhance teaching and learning in secondary school science. It describes four lessons devised by groups of Sri Lankan teachers all of which centred on the use of the mobile phone cameras rather than their communication functions. A qualitative methodological approach was used to…

  5. Images of Struggle: Teaching Human Rights with Graphic Novels

    ERIC Educational Resources Information Center

    Carano, Kenneth T.; Clabough, Jeremiah

    2016-01-01

    The authors explore how graphic novels can be used in the middle and high school social studies classroom to teach human rights. The article begins with a rationale on the benefits of using graphic novels. It next focuses on four graphic novels related to human rights issues: "Maus I: A Survivor's Tale: My Father Bleeds" (Speigelman…

  7. Multispectral image processing: the nature factor

    NASA Astrophysics Data System (ADS)

    Watkins, Wendell R.

    1998-09-01

    The images processed by our brain represent our window into the world. For some animals this window is derived from a single eye; for others, including humans, two eyes provide stereo imagery; for others like the black widow spider several eyes are used (8 eyes); and some insects like the common housefly utilize thousands of eyes (ommatidia). Still other animals like the bat and dolphin have eyes for regular vision but employ acoustic sonar vision for seeing where their regular eyes don't work, such as in pitch-black caves or turbid water. Of course, other animals have adapted to dark environments by bringing along their own lighting, such as the firefly and several creatures from the depths of the ocean floor. Animal vision is truly varied and has developed over millennia in many remarkable ways. We have learned a lot about vision processes by studying these animal systems and can still learn even more.

  8. Low level image processing techniques using the pipeline image processing engine in the flight telerobotic servicer

    NASA Technical Reports Server (NTRS)

    Nashman, Marilyn; Chaconas, Karen J.

    1988-01-01

    The sensory processing system for the NASA/NBS Standard Reference Model (NASREM) for telerobotic control is described. This control system architecture was adopted by NASA for the Flight Telerobotic Servicer. The control system is hierarchically designed and consists of three parallel systems: task decomposition, world modeling, and sensory processing. The sensory processing system is examined; in particular, the image processing hardware and software used to extract features at low levels of sensory processing are described for tasks representative of those envisioned for the Space Station, such as assembly and maintenance.

  9. Use of Customized Digital Images for Teaching Human Anatomy.

    ERIC Educational Resources Information Center

    Harris, David E.

    1997-01-01

    Describes a technique for downloading digital images of a fresh human cadaver from a commercially available CD-ROM and from the Internet. The images can be annotated to illustrate specific anatomic features and display groups of images in a format that is easy to use during lectures and accessible to undergraduate students for study during and…

  11. Imaging fault zones using 3D seismic image processing techniques

    NASA Astrophysics Data System (ADS)

    Iacopini, David; Butler, Rob; Purves, Steve

    2013-04-01

    Significant advances in the structural analysis of deep-water structures, salt tectonics, and extensional rift basins come from descriptions of fault-system geometries imaged in 3D seismic data. However, even where seismic data are excellent, in most cases the trajectory of thrust faults is highly conjectural, and significant uncertainty remains as to the patterns of deformation that develop between the main fault segments, and even as to the fault architectures themselves. Moreover, structural interpretations that conventionally define faults by breaks and apparent offsets of seismic reflectors are commonly conditioned by a narrow range of theoretical models of fault behavior. For example, almost all interpretations of thrust geometries on seismic data rely on theoretical "end-member" behaviors, in which concepts such as strain localization or multilayer mechanics are simply avoided. Yet analogue outcrop studies confirm that such descriptions are commonly unsatisfactory and incomplete. To fill these gaps and improve the 3D visualization of deformation in the subsurface, seismic attribute methods are developed here in conjunction with conventional mapping of reflector amplitudes (Marfurt & Chopra, 2007). These signal processing techniques, recently developed and applied especially by the oil industry, use variations in the amplitude and phase of the seismic wavelet. The attributes improve signal interpretation and are calculated over the entire 3D seismic dataset. In this contribution we show 3D seismic examples of fault structures from gravity-driven deep-water thrust structures and extensional basin systems to indicate how 3D seismic image processing methods can not only improve geometrical interpretations of the faults but also begin to map both strain and damage through amplitude/phase properties of the seismic signal.
This is done by quantifying and delineating short-range anomalies in the intensity of reflector amplitudes and collecting these into "disturbance geobodies". These seismic image processing methods represent a first efficient step toward a robust technique for investigating sub-seismic strain and for mapping noisy deformed zones and displacement within subsurface geology (Dutzer et al., 2011; Iacopini et al., 2012). In all these cases, accurate fault interpretation is critical in applied geology for building a robust and reliable reservoir model, and is essential for further study of fault-seal behavior and reservoir compartmentalization. It is also fundamental for understanding how deformation localizes within sedimentary basins, including the processes associated with active seismogenic faults and mega-thrust systems in subduction zones. References: Dutzer, J.F., Basford, H., Purves, S., 2010, Investigating fault sealing potential through fault relative seismic volume analysis, Petroleum Geology Conference Series, 7:509-515, doi:10.1144/0070509; Marfurt, K.J., Chopra, S., 2007, Seismic Attributes for Prospect Identification and Reservoir Characterization, SEG Geophysical Developments; Iacopini, D., Butler, R.W.H., Purves, S., 2012, Seismic imaging of thrust faults and structural damage: a visualization workflow for deepwater thrust belts, First Break, vol. 30, no. 5, pp. 39-46.
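    The amplitude and phase attributes used in this kind of workflow are commonly derived from the complex (analytic) trace. As a minimal sketch, assuming numpy, the instantaneous-amplitude (envelope) attribute of a single trace can be computed with an FFT-based Hilbert transform; the function names are illustrative and not taken from the cited workflow:

```python
import numpy as np

def analytic_signal(trace):
    """Return the complex analytic signal of a real 1-D trace via the FFT.

    Zeroing negative frequencies and doubling positive ones is the
    standard discrete Hilbert-transform construction."""
    n = len(trace)
    spectrum = np.fft.fft(trace)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(spectrum * h)

def envelope(trace):
    """Instantaneous amplitude attribute: magnitude of the analytic signal."""
    return np.abs(analytic_signal(trace))

# A constant-amplitude cosine has an envelope of ~1 everywhere.
t = np.linspace(0, 1, 512, endpoint=False)
env = envelope(np.cos(2 * np.pi * 32 * t))
```

    In practice the same operation is applied trace by trace across the whole 3D volume, so that the envelope highlights amplitude anomalies independently of the local phase of the wavelet.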

  12. MISR Browse Images: Cold Land Processes Experiment (CLPX)

    Atmospheric Science Data Center

    2013-04-02

    MISR Browse Images: Cold Land Processes Experiment (CLPX) These MISR Browse ... series of images over the region observed during the NASA Cold Land Processes Experiment (CLPX). CLPX involved ground, airborne, and ...

  13. Ambassadors of the Swedish Nation: National Images in the Teaching of the Swedish Lecturers in Germany 1918-1945

    ERIC Educational Resources Information Center

    Åkerlund, Andreas

    2015-01-01

    This article analyses the teaching of Swedish language lecturers active in Germany during the first half of the twentieth century. It shows the centrality of literature and literary constructions and analyses images of Swedishness and the Swedish nation present in the teaching material of that time in relation to the national image present in…

  15. Evaluating and Grading Students in Large-Scale Image Processing Courses.

    PubMed

    Artner, Nicole M; Janusch, Ines; Kropatsch, Walter G

    2015-01-01

    In undergraduate practical courses, it is common to work with groups of 100 or more students. These large-scale courses bring their own challenges. For example, course problems are too small and lack "the big picture"; grading becomes burdensome and repetitive for the teaching staff; and it is difficult to detect cheating. Based on their experience with a traditional large-scale practical course in image processing, the authors developed a novel course approach to teaching "Introduction to Digital Image Processing" (or EDBV, from the German course title Einführung in die Digitale Bild-Verarbeitung) for all undergraduate students of media informatics and visual computing, and of medical informatics, at the TU Wien. PMID:26416367

  16. DKIST visible broadband imager data processing pipeline

    NASA Astrophysics Data System (ADS)

    Beard, Andrew; Cowan, Bruce; Ferayorni, Andrew

    2014-07-01

    The Daniel K. Inouye Solar Telescope (DKIST) Data Handling System (DHS) provides the technical framework and building blocks for developing on-summit instrument quality assurance and data reduction pipelines. The DKIST Visible Broadband Imager (VBI) is a first-light instrument that alone will create two data streams with a bandwidth of 960 MB/s each. The high data rate and data volume of the VBI require near-real-time processing capability for quality assurance and data reduction, which will be performed on-summit using Graphics Processing Unit (GPU) technology. The VBI data processing pipeline (DPP) is the first designed and developed using the DKIST DHS components, and therefore provides insight into the strengths and weaknesses of the framework. In this paper we lay out the design of the VBI DPP, examine how the underlying DKIST DHS components are utilized, and discuss how integration of the DHS framework with GPUs was accomplished. We present our results of the VBI DPP alpha release implementation of the calibration, frame selection reduction, and quality assurance display processing nodes.

  17. ATM experiment S-056 image processing requirements definition

    NASA Technical Reports Server (NTRS)

    1972-01-01

    A plan is presented for satisfying the image data processing needs of the S-056 Apollo Telescope Mount experiment. The report is based on information gathered from related technical publications, consultation with numerous image processing experts, and on the experience gained in working on related image processing tasks over a two-year period.

  18. A Computer-Controlled Digital Image Teaching Atlas

    PubMed Central

    Helm, Carl E.; Mezrich, Reuben S.

    1988-01-01

    A computer-controlled learning environment is being developed using digital images downloaded from a clinical MRI system. These images are annotated, cross-referenced and otherwise amplified and detailed. They are collected into instructional segments dealing with specific topics and augmented with assessment and feedback components. We discuss the cognitive and psychometric foundations which are essential to the design of such systems.

  19. Effects of image processing on the detective quantum efficiency

    NASA Astrophysics Data System (ADS)

    Park, Hye-Suk; Kim, Hee-Joung; Cho, Hyo-Min; Lee, Chang-Lae; Lee, Seung-Wan; Choi, Yu-Na

    2010-04-01

    Digital radiography has gained popularity in many areas of clinical practice. This transition brings interest in advancing the methodologies for image quality characterization. However, as the methodologies for such characterizations have not been standardized, the results of these studies cannot be directly compared. The primary objective of this study was to standardize methodologies for image quality characterization. The secondary objective was to evaluate the factors affecting the modulation transfer function (MTF), noise power spectrum (NPS), and detective quantum efficiency (DQE) according to the image processing algorithm. Image performance parameters such as MTF, NPS, and DQE were evaluated using the International Electrotechnical Commission (IEC 62220-1)-defined RQA5 radiographic techniques. Computed radiography (CR) images of a hand posterior-anterior (PA) view for measuring the signal-to-noise ratio (SNR), a slit image for measuring the MTF, and a white image for measuring the NPS were obtained, and various Multi-Scale Image Contrast Amplification (MUSICA) parameters were applied to each of the acquired images. In the results, all of the modifications considerably influenced the evaluation of SNR, MTF, NPS, and DQE. Images modified by post-processing had a higher DQE than the MUSICA=0 image. This suggests that MUSICA values, as a post-processing step, have an effect on the image when it is evaluated for image quality. In conclusion, the control parameters of image processing should be accounted for when characterizing image quality in a consistent way. The results of this study could serve as a baseline for evaluating imaging systems and their imaging characteristics by measuring MTF, NPS, and DQE.
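    The slit-based MTF measurement described above reduces to Fourier-transforming the line spread function (LSF) extracted from the slit image. A hedged numpy sketch with a synthetic Gaussian LSF follows; a real IEC 62220-1 measurement adds geometry corrections and sub-pixel binning that are omitted here:

```python
import numpy as np

def mtf_from_lsf(lsf, pixel_pitch_mm):
    """Presampled MTF as the normalized magnitude of the LSF's DFT.

    Returns spatial frequencies (cycles/mm) and MTF values; the LSF is
    area-normalized so that MTF(0) = 1."""
    lsf = np.asarray(lsf, dtype=float)
    lsf = lsf / lsf.sum()
    mtf = np.abs(np.fft.rfft(lsf))
    freqs = np.fft.rfftfreq(len(lsf), d=pixel_pitch_mm)
    return freqs, mtf

# Synthetic Gaussian LSF: its MTF is also Gaussian and falls off smoothly.
x = np.arange(-64, 64)
lsf = np.exp(-0.5 * (x / 3.0) ** 2)
freqs, mtf = mtf_from_lsf(lsf, pixel_pitch_mm=0.1)
```

    A narrower LSF (sharper system) would give an MTF that stays high out to larger spatial frequencies, which is the behavior the MUSICA comparison in the abstract probes.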

  20. SU-E-E-06: Teaching About the Gamma Camera and Ultrasound Imaging

    SciTech Connect

    Lowe, M; Spiro, A; Vogel, R; Donaldson, N; Gosselin, C

    2015-06-15

    Purpose: Instructional modules on applications of physics in medicine are being developed. The target audience consists of students who have had an introductory undergraduate physics course. This presentation will concentrate on an active-learning approach to teaching the principles of the gamma camera. There will also be a description of an apparatus for teaching ultrasound imaging. Methods: Since a real gamma camera is not feasible in the undergraduate classroom, we have developed two types of optical apparatus that teach the main principles. To understand the collimator, LEDs mimic gamma emitters in the body, and the photons pass through an array of tubes. The distance, spacing, diameter, and length of the tubes can be varied to understand the effect upon the resolution of the image. To determine the positions of the gamma emitters, a second apparatus uses a movable green laser, fluorescent plastic in lieu of the scintillation crystal, acrylic rods that mimic the PMTs, and a photodetector to measure the intensity. The position of the laser is calculated with a centroid algorithm. To teach the principles of ultrasound imaging, we are using the sound head and pulser box of an educational product, a variable-gain amplifier, a rotation table, a digital oscilloscope, Matlab software, and phantoms. Results: Gamma camera curriculum materials were implemented in the classroom at Loyola in 2014 and 2015. Written work shows good knowledge retention and a more complete understanding of the material. Preliminary ultrasound imaging materials were run in 2015. Conclusion: Active learning methods add another dimension to descriptions in textbooks and are effective in keeping students engaged during class time. The teaching apparatus for the gamma camera and ultrasound imaging can be expanded to include more cases, and could potentially improve students' understanding of artifacts and distortions in the images.
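    The centroid algorithm mentioned for the laser-position apparatus is the intensity-weighted mean of the detector positions, the same logic an Anger gamma camera uses. A minimal sketch, assuming numpy; the detector positions and counts below are illustrative:

```python
import numpy as np

def centroid_position(positions, intensities):
    """Estimate the source position as the intensity-weighted mean of
    the detector positions (the Anger-camera centroid logic)."""
    positions = np.asarray(positions, dtype=float)
    intensities = np.asarray(intensities, dtype=float)
    return float(np.sum(positions * intensities) / np.sum(intensities))

# Four photodetectors at 0, 1, 2, 3 cm; a source near 1 cm lights the
# nearby detectors most strongly.
pos = [0.0, 1.0, 2.0, 3.0]
counts = [30.0, 50.0, 15.0, 5.0]
x_hat = centroid_position(pos, counts)  # -> 0.95
```

    The same formula applied along both axes of a 2-D detector array yields the (x, y) event position.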

  1. Networks for image acquisition, processing and display

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.

    1990-01-01

    The human visual system comprises layers of networks which sample, process, and code images. Understanding these networks is a valuable means of understanding human vision and of designing autonomous vision systems based on network processing. Ames Research Center has an ongoing program to develop computational models of such networks. The models predict human performance in detection of targets and in discrimination of displayed information. In addition, the models are artificial vision systems sharing properties with biological vision that has been tuned by evolution for high performance. Properties include variable density sampling, noise immunity, multi-resolution coding, and fault-tolerance. The research stresses analysis of noise in visual networks, including sampling, photon, and processing unit noises. Specific accomplishments include: models of sampling array growth with variable density and irregularity comparable to that of the retinal cone mosaic; noise models of networks with signal-dependent and independent noise; models of network connection development for preserving spatial registration and interpolation; multi-resolution encoding models based on hexagonal arrays (HOP transform); and mathematical procedures for simplifying analysis of large networks.

  2. Methods for processing and imaging marsh foraminifera

    USGS Publications Warehouse

    Dreher, Chandra A.; Flocks, James G.

    2011-01-01

    This study is part of a larger U.S. Geological Survey (USGS) project to characterize the physical conditions of wetlands in southwestern Louisiana. Within these wetlands, groups of benthic foraminifera (shelled amoeboid protists living near or on the sea floor) can be used as agents to measure land subsidence, relative sea-level rise, and storm impact. In the Mississippi River Delta region, intertidal-marsh foraminiferal assemblages and biofacies were established in studies that pre-date the 1970s, with a very limited number of more recent studies. This fact sheet outlines the project's improved methods, handling, and modified preparations for the use of Scanning Electron Microscope (SEM) imaging of these foraminifera. The objective is to identify marsh foraminifera to the taxonomic species level by using improved processing methods and SEM imaging for morphological characterization, in order to evaluate changes in distribution and frequency relative to other environmental variables. The majority of benthic marsh foraminifera consists of agglutinated forms, which can be more delicate than porcelaneous forms. Agglutinated tests (shells) are made of particles such as sand grains or silt and clay material, whereas porcelaneous tests consist of calcite.

  3. HABE real-time image processing

    NASA Astrophysics Data System (ADS)

    Krainak, Joseph C.

    1999-07-01

    The HABE system performs real-time autonomous acquisition, pointing and tracking (ATP). The goal of the experiment, sponsored by the Ballistic Missile Defense Organization and administered by the US Air Force Research Laboratory, Kirtland AFB, Albuquerque, NM, is to demonstrate the acquisition, tracking and pointing technologies needed for an effective space-based missile defense system. The three-sensor tracking system includes two IR cameras for passive tracking of a missile plume and an intensified visible camera used to capture the return of a high-energy laser pulse reflected by the missile's nose. The HABE real-time image processor uses the images captured by each sensor to find a track point. The VME-based hardware includes four Compaq Computer Corporation Alpha processors and seven Texas Instruments TMS320C4x processors. The C4x communication ports and the VME bus provide the pathways needed for inter-processor communications. The software design implements a list-processing approach to command and control, which provides for flexible task redefinition, addition, and deletion while minimizing the need for code changes. The design is implemented in C. Several system performance metrics are described and tabulated.

  4. Process Evaluation of a Teaching and Learning Centre at a Research University

    ERIC Educational Resources Information Center

    Smith, Deborah B.; Gadbury-Amyot, Cynthia C.

    2014-01-01

    This paper describes the evaluation of a teaching and learning centre (TLC) five years after its inception at a mid-sized, midwestern state university. The mixed-methods process evaluation gathered data from 209 attendees and non-attendees of the TLC from the full-time, benefit-eligible teaching faculty. Focus groups noted feelings of…

  5. Teaching and Learning Processes in Physical Activity: The Central Problem of Sport Pedagogy.

    ERIC Educational Resources Information Center

    Locke, Lawrence F.

    The use of an established research method, Research on Teacher Effectiveness (RTE), is considered as it may be applied to examine successful teaching methods in physical activities and sports. The basic process of RTE is observing teachers at work with students in the classroom or gymnasium. Research on the relationship between teaching behaviors…

  6. The Effect of Activity Based Lexis Teaching on Vocabulary Development Process

    ERIC Educational Resources Information Center

    Mert, Esra Lule

    2013-01-01

    "Teaching word" as a complimentary process of teaching Turkish is a crucial field of study. However, studies on this area are insufficient. The only aim of the designed activities that get under way with the constructivist approach on which new education programs are based is to provide students with vocabulary elements of Turkish. In…

  7. Using Process and Inquiry to Teach Content: Projectile Motion and Graphing

    ERIC Educational Resources Information Center

    Rhea, Marilyn; Lucido, Patricia; Gregerson-Malm, Cheryl

    2005-01-01

    This series of lessons uses the process of student inquiry to teach the concepts of force and motion identified in the National Science Education Standards for grades 5-8. The lesson plan also uses technology as a teaching tool through the use of interactive Web sites. The lessons are built on the 5-E format and feature embedded assessments.

  8. Evaluating Students' Learning and Communication Processes: Handbook 3--Diagnostic Teaching Units: Social Studies.

    ERIC Educational Resources Information Center

    Alberta Dept. of Education, Edmonton. Student Evaluation Branch.

    Presenting the diagnostic teaching units for grades 7, 8, and 9 social studies, this handbook is intended to be used along with the companion handbook 1, "Evaluating Students' Learning and Communication Processes: Integrating Diagnostic Evaluation and Instruction." The student activities of the diagnostic teaching units in the handbook have been…

  9. The Process of Physics Teaching Assistants' Pedagogical Content Knowledge Development

    ERIC Educational Resources Information Center

    Seung, Eulsun

    2013-01-01

    This study explored the process of physics teaching assistants' (TAs) PCK development in the context of teaching "Matter and Interactions" (M&I), a newly adopted undergraduate introductory physics course that focuses on the application of a small number of fundamental physical…

  11. High Thinking Processes (HTP): Elements of Curricula and Teaching Able-Learners.

    ERIC Educational Resources Information Center

    Kaniel, Shlomo

    2002-01-01

    This article discusses preparing able learners for the technologically dynamic future by teaching High Thinking Processes (HTP). It describes components of HTP and four main elements for developing HTP: well organized and justified curricula with appropriate tasks; metacognitive teaching; learning communities and challenging environments; and…

  12. Twitter for Teaching: Can Social Media Be Used to Enhance the Process of Learning?

    ERIC Educational Resources Information Center

    Evans, Chris

    2014-01-01

    Can social media be used to enhance the process of learning by students in higher education? Social media have become widely adopted by students in their personal lives. However, the application of social media to teaching and learning remains to be fully explored. In this study, the use of the social media tool Twitter for teaching was…

  14. Scaling-up Process-Oriented Guided Inquiry Learning Techniques for Teaching Large Information Systems Courses

    ERIC Educational Resources Information Center

    Trevathan, Jarrod; Myers, Trina; Gray, Heather

    2014-01-01

    Promoting engagement during lectures becomes significantly more challenging as class sizes increase. Therefore, lecturers need to experiment with new teaching methodologies to encourage deep learning outcomes and to develop interpersonal skills amongst students. Process Oriented Guided Inquiry Learning is a teaching approach that uses highly…

  15. Corn plant locating by image processing

    NASA Astrophysics Data System (ADS)

    Jia, Jiancheng; Krutz, Gary W.; Gibson, Harry W.

    1991-02-01

    The feasibility of using machine vision technology to locate corn plants is an important issue for field production automation in the agricultural industry. This paper presents an approach developed to locate the center of a corn plant using image processing techniques. Corn plants were first identified using a main-vein detection algorithm, which detects a local feature of corn leaves (the leaf main veins) based on the spectral difference between main veins and leaf blades; the center of the plant could then be located using a center-locating algorithm that traces and extends each detected vein line and estimates the center of the plant from the intersection points of those lines. The experimental results show the usefulness of the algorithm for machine vision applications related to corn plant identification. Such a technique can be used for precise spraying of pesticides or biotech chemicals.
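    The center-locating step, tracing and extending vein lines and estimating the plant center from their intersections, can be sketched as a least-squares intersection of 2-D lines. This is an illustrative reconstruction assuming numpy, not the authors' code:

```python
import numpy as np

def lines_intersection(points, directions):
    """Least-squares point closest to a set of 2-D lines, each given by a
    point p_i and a direction d_i.

    Minimizes the sum of squared perpendicular distances by solving
    (sum_i (I - d_i d_i^T)) x = sum_i (I - d_i d_i^T) p_i."""
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for p, d in zip(points, directions):
        d = np.asarray(d, dtype=float)
        d = d / np.linalg.norm(d)
        proj = np.eye(2) - np.outer(d, d)   # projector perpendicular to d
        A += proj
        b += proj @ np.asarray(p, dtype=float)
    return np.linalg.solve(A, b)

# Three "vein lines" that all pass through (2, 3) with different directions.
pts  = [(0.0, 1.0), (2.0, 0.0), (0.0, 3.0)]
dirs = [(1.0, 1.0), (0.0, 1.0), (1.0, 0.0)]
center = lines_intersection(pts, dirs)  # -> approximately [2., 3.]
```

    With noisy vein detections the lines no longer meet at a point, and the least-squares solution is exactly the robust "evaluate the center from intersection points" step the abstract describes.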

  16. Intelligent elevator management system using image processing

    NASA Astrophysics Data System (ADS)

    Narayanan, H. Sai; Karunamurthy, Vignesh; Kumar, R. Barath

    2015-03-01

    In the modern era, the increase in the number of shopping malls and industrial buildings has led to an exponential increase in the usage of elevator systems. Thus there is an increased need for an effective control system to manage elevators. This paper introduces an effective method to control the movement of elevators by considering various cases in which the location of each person is found and the elevators are controlled based on conditions such as load, proximity, etc. The method continuously monitors the weight limit of each elevator while also making use of image processing to determine the number of persons waiting for an elevator on each floor. The Canny edge detection technique is used to find the number of persons waiting for an elevator. Hence the algorithm takes many cases into account and dispatches the correct elevator to serve the persons waiting on different floors.
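    The paper names Canny edge detection for the person count. As a hedged, self-contained stand-in, the sketch below implements only the Sobel gradient-magnitude stage (omitting Canny's non-maximum suppression and hysteresis thresholding) and produces a binary edge map whose pixel count could feed a per-floor occupancy estimate; all names and thresholds are illustrative:

```python
import numpy as np

def sobel_edge_map(image, threshold=0.25):
    """Binary edge map from normalized Sobel gradient magnitude.

    A simplified stand-in for Canny: no non-maximum suppression or
    hysteresis thresholding is performed."""
    img = np.asarray(image, dtype=float)
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    mag = np.hypot(gx, gy)
    if mag.max() > 0:
        mag = mag / mag.max()
    return mag > threshold

# A blank frame yields no edges; a frame containing a bright block yields
# some, which a full system would turn into a per-floor person estimate.
blank = np.zeros((20, 20))
scene = np.zeros((20, 20))
scene[5:15, 5:15] = 1.0
edges_blank = sobel_edge_map(blank)
edges_scene = sobel_edge_map(scene)
```

    A production system would use a tuned Canny implementation and contour analysis rather than a raw edge-pixel count, but the pipeline shape is the same.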

  17. Teaching Practice Trends Regarding the Teaching of the Design Process within a South African Context: A Situation Analysis

    ERIC Educational Resources Information Center

    Potgieter, Calvyn

    2013-01-01

    In this article an analysis is made of the responses of 95 technology education teachers, 14 technology education lecturers and 25 design practitioners to questionnaires regarding the teaching and the application of the design process. The main purpose of the questionnaires is to determine whether there are any trends regarding the strategies and…

  19. Image processing: digital versus polarization-based enhancement/encoding techniques

    NASA Astrophysics Data System (ADS)

    El-Saba, Aed; Alsharif, Salim

    2012-06-01

    Image processing is a field of great interest for many applications; nowadays it is hard to name an application in which image processing is not involved. Digital techniques remain the dominant approach to image processing, with significant automation built into image display, as in most digital cameras and digital TVs, to name a few. Depending on the application, digital image processing techniques produce satisfactorily accurate results. However, digital enhancement techniques suffer from one main constraint: slow processing speed, an inherent problem of any digital image processing technique. On the other hand, optical image enhancement techniques, such as polarization-based ones, produce satisfactorily accurate results while overcoming the processing-time constraint of their digital counterparts. This paper presents a comparison between digital and polarization-based enhancement/encoding techniques with respect to their accuracy, security, and processing time in automated pattern recognition applications.

  20. Student Evaluation of Teaching Effectiveness of a Nationwide Innovative Education Program on Image Display Technology

    ERIC Educational Resources Information Center

    Yueh, Hsiu-Ping; Chen, Tzy-Ling; Chiu, Li-An; Lee, San-Liang; Wang, An-Bang

    2012-01-01

    The study presented here explored a student evaluation of the teaching effectiveness of a nationwide innovative education program on image display technology in Taiwan. Using survey data collected through an online questionnaire system, covering 165 classes across 30 colleges and universities in Taiwan, the study aimed to understand the teaching…

  1. An Emphasis on Perception: Teaching Image Formation Using a Mechanistic Model of Vision.

    ERIC Educational Resources Information Center

    Allen, Sue; And Others

    An effective way to teach the concept of image is to give students a model of human vision which incorporates a simple mechanism of depth perception. In this study two almost identical versions of a curriculum in geometrical optics were created. One used a mechanistic, interpretive eye model, and in the other the eye was modeled as a passive,…

  2. Preservice Chemistry Teachers' Images about Science Teaching in Their Future Classrooms

    ERIC Educational Resources Information Center

    Elmas, Ridvan; Demirdogen, Betul; Geban, Omer

    2011-01-01

    The purpose of this study is to explore pre-service chemistry teachers' images of science teaching in their future classrooms. Also, association between instructional style, gender, and desire to be a teacher was explored. Sixty six pre-service chemistry teachers from three public universities participated in the data collection for this study. A…

  3. Image processing and products for the Magellan mission to Venus

    NASA Technical Reports Server (NTRS)

    Clark, Jerry; Alexander, Doug; Andres, Paul; Lewicki, Scott; Mcauley, Myche

    1992-01-01

    The Magellan mission to Venus is providing planetary scientists with massive amounts of new data about the surface geology of Venus. Digital image processing is an integral part of the ground data system that provides data products to the investigators. The mosaicking of synthetic aperture radar (SAR) image data from the spacecraft is being performed at JPL's Multimission Image Processing Laboratory (MIPL). MIPL hosts and supports the Image Data Processing Subsystem (IDPS), which was developed in a VAXcluster environment of hardware and software that includes optical disk jukeboxes and the TAE-VICAR (Transportable Applications Executive-Video Image Communication and Retrieval) system. The IDPS is being used by processing analysts of the Image Data Processing Team to produce the Magellan image data products. Various aspects of the image processing procedure are discussed.

  4. Teaching the Process of Science: Faculty Perceptions and an Effective Methodology

    PubMed Central

    Coil, David; Wenderoth, Mary Pat; Cunningham, Matthew

    2010-01-01

    Most scientific endeavors require science process skills such as data interpretation, problem solving, experimental design, scientific writing, oral communication, collaborative work, and critical analysis of primary literature. These are the fundamental skills upon which the conceptual framework of scientific expertise is built. Unfortunately, most college science departments lack a formalized curriculum for teaching undergraduates science process skills. However, evidence strongly suggests that explicitly teaching undergraduates these skills early in their education may enhance their understanding of science content. Our research reveals that faculty overwhelmingly support teaching undergraduates science process skills but typically do not spend enough time teaching skills due to the perceived need to cover content. To encourage faculty to address this issue, we provide our pedagogical philosophies, methods, and materials for teaching science process skills to freshmen pursuing life science majors. We build upon previous work, showing student learning gains in both reading primary literature and scientific writing, and share student perspectives about a course where teaching the process of science, not content, was the focus. We recommend a wider implementation of courses that teach undergraduates science process skills early in their studies with the goals of improving student success and retention in the sciences and enhancing general science literacy. PMID:21123699

  5. Spot restoration for GPR image post-processing

    SciTech Connect

    Paglieroni, David W; Beer, N. Reginald

    2014-05-20

    A method and system for detecting the presence of subsurface objects within a medium is provided. In some embodiments, the imaging and detection system operates in a multistatic mode to collect radar return signals generated by an array of transceiver antenna pairs that is positioned across the surface and travels along the surface. The imaging and detection system pre-processes the return signal to suppress certain undesirable effects. It then generates synthetic aperture radar images from real aperture radar images produced from the pre-processed return signal, and post-processes the synthetic aperture radar images to improve detection of subsurface objects. The imaging and detection system identifies peaks in the energy levels of the post-processed image frame, which indicate the presence of a subsurface object.
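    The final step, identifying peaks in the energy levels of the post-processed frame, can be sketched as a thresholded local-maximum search. This is an illustrative reconstruction assuming numpy, not the patented pipeline:

```python
import numpy as np

def find_energy_peaks(frame, threshold):
    """Return (row, col) indices of pixels that exceed `threshold` and are
    strict local maxima over their 8-neighborhood."""
    f = np.asarray(frame, dtype=float)
    h, w = f.shape
    peaks = []
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            v = f[i, j]
            if v <= threshold:
                continue
            neigh = f[i - 1:i + 2, j - 1:j + 2].copy()
            neigh[1, 1] = -np.inf        # exclude the pixel itself
            if v > neigh.max():
                peaks.append((i, j))
    return peaks

# One buried "object" in an otherwise quiet frame shows up as one peak.
frame = np.zeros((16, 16))
frame[6, 9] = 5.0
peaks = find_energy_peaks(frame, threshold=1.0)  # -> [(6, 9)]
```

    In a real GPR system the threshold would be set from the estimated noise floor, and peaks would be tracked across successive frames before an object detection is declared.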

  6. Teaching Is . . .

    ERIC Educational Resources Information Center

    Harmin, Merrill; Gregory, Tom

    This book is an orientation to teaching designed to help establish a comfortable self-image and to develop appropriate skills. Readings and experiences are presented in the five-part text to aid in this orientation process. Part 1 discusses forming a group that functions as a support system, a human experience resource, and a source of feedback…

  7. Vision-sensing image analysis for GTAW process control

    SciTech Connect

    Long, D.D.

    1994-11-01

    Image analysis of a gas tungsten arc welding (GTAW) process was completed using video images from a charge coupled device (CCD) camera inside a specially designed coaxial (GTAW) electrode holder. Video data was obtained from filtered and unfiltered images, with and without the GTAW arc present, showing weld joint features and locations. Data Translation image processing boards, installed in an IBM PC AT 386 compatible computer, and Media Cybernetics image processing software were used to investigate edge flange weld joint geometry for image analysis.

  8. Image processing and recognition using diffractive and digital techniques

    NASA Astrophysics Data System (ADS)

    Galas, Jacek

    1994-10-01

    Image processing and recognition methods are useful in many fields, and different techniques are applied depending on the situation. For many years, methods based on optical Fourier transformation were very popular, and image recognition was generally performed using optical correlators. Correlation techniques were developed intensively, especially for military applications, but in many cases (industrial, biological, and biomedical applications) they suffer from a number of limitations. For these reasons, methods based on the extraction and statistical processing of image features are more useful. A set of features can be extracted directly from an image (features based on image morphology, image moments, etc.) or from image transforms (Fourier, Radon, Hough, sine, cosine, etc.). The Fourier transform is one of the most important in image processing. It can be performed simply with an optical diffractometer, and it allows the construction of image descriptors that are independent of image translation and, after further processing, independent of image rotation. Diffractometers are very convenient in industrial and medical applications. Digital image processing and recognition were developed largely on powerful workstations; however, these procedures can also be implemented on PCs with DSP microprocessor cards, or wherever the digital transforms used for image processing can be implemented simply and do not consume much time. An example of biomedical image recognition performed optically, using a diffractometer, and digitally, with a CCD camera, is described here.
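
    The translation-invariant Fourier descriptor mentioned above can be illustrated with a minimal sketch: the magnitude of the 2-D Fourier spectrum is unchanged by (circular) translation of the image, because a shift alters only the phase. This is a generic digital illustration, not the optical diffractometer system described in the abstract.

```python
import numpy as np

def fourier_descriptor(image):
    """Magnitude of the 2-D Fourier spectrum: unchanged by
    (circular) translation of the input image."""
    return np.abs(np.fft.fft2(image))

rng = np.random.default_rng(0)
img = rng.random((16, 16))
shifted = np.roll(img, shift=(3, 5), axis=(0, 1))   # translated copy

d1 = fourier_descriptor(img)
d2 = fourier_descriptor(shifted)
print(np.allclose(d1, d2))   # True: the descriptor ignores translation
```

    Rotation invariance, as the abstract notes, requires further processing, for example resampling the spectrum on polar coordinates before comparison.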

  9. Prospective faculty developing understanding of teaching and learning processes in science

    NASA Astrophysics Data System (ADS)

    Pareja, Jose I.

    Historically, teaching has been considered a burden by many academics at institutions of higher education, particularly research scientists. Furthermore, university faculty and prospective faculty often have limited exposure to issues associated with effective teaching and learning. As a result, a series of ineffective teaching and learning strategies are pervasive in university classrooms. This exploratory case study focuses on four biology graduate teaching fellows (BGF) who participated in a National Science Foundation (NSF) GK-12 Program. Such programs were introduced by NSF to enhance the preparation of prospective faculty for their future professional responsibilities. In this particular program, BGF were paired with high school biology teachers (pedagogical mentors) for at least one year. During this yearlong partnership, BGF were involved in a series of activities related to teaching and learning, ranging from classroom teaching, tutoring, lesson planning, and grading to participating in professional development conferences and reflecting upon their practices. The purpose of this study was to examine the changes in BGF understanding of teaching and learning processes in science as a function of their pedagogical content knowledge (PCK). In addition, the potential transfer of this knowledge between high school and higher education contexts was investigated. The findings of this study suggest that the BGF's understanding of teaching and learning processes in science changed. Specific aspects of the BGF involvement in the program (such as classroom observations, practice teaching, communicating with mentors, and reflecting upon one's practice) contributed to PCK development. In fact, there is evidence to suggest that constant reflection is critical in the process of change. Concurrently, the BGF's enhanced understanding of science teaching and learning processes may be transferable from the high school context to the university context. Future research studies should be designed to explicitly explore this transfer phenomenon.

  10. The Effects of Image-Based Concept Mapping on the Learning Outcomes and Cognitive Processes of Mobile Learners

    ERIC Educational Resources Information Center

    Yen, Jung-Chuan; Lee, Chun-Yi; Chen, I-Jung

    2012-01-01

    The purpose of this study was to investigate the effects of different teaching strategies (text-based concept mapping vs. image-based concept mapping) on the learning outcomes and cognitive processes of mobile learners. Eighty-six college freshmen enrolled in the "Local Area Network Planning and Implementation" course taught by the first author…

  11. Viewpoints on Medical Image Processing: From Science to Application

    PubMed Central

    Deserno (né Lehmann), Thomas M.; Handels, Heinz; Maier-Hein (né Fritzsche), Klaus H.; Mersmann, Sven; Palm, Christoph; Tolxdorff, Thomas; Wagenknecht, Gudrun; Wittenberg, Thomas

    2013-01-01

    Medical image processing provides core innovation for medical imaging. This paper is focused on recent developments from science to applications analyzing the past fifteen years of history of the proceedings of the German annual meeting on medical image processing (BVM). Furthermore, some members of the program committee present their personal points of views: (i) multi-modality for imaging and diagnosis, (ii) analysis of diffusion-weighted imaging, (iii) model-based image analysis, (iv) registration of section images, (v) from images to information in digital endoscopy, and (vi) virtual reality and robotics. Medical imaging and medical image computing is seen as field of rapid development with clear trends to integrated applications in diagnostics, treatment planning and treatment. PMID:24078804

  12. Teaching Language Processing in a Group Setting Using the Formula Phonics Reading, Spelling, and Learning Formulas. An Instructional Guide.

    ERIC Educational Resources Information Center

    Vail, Colleen; Vail, Edward

    To teach language processing is to teach students how to clarify information in material they are reading or discussing so that they are able to understand, evaluate, learn, store, recall, and use it. The first section of this guide, which is based on the use of the Formula Phonics method, discusses the teaching of language processing in terms of…

  13. Computer Use by School Teachers in Teaching-Learning Process

    ERIC Educational Resources Information Center

    Bhalla, Jyoti

    2013-01-01

    Developing countries have a responsibility not merely to provide computers for schools, but also to foster a habit of infusing a variety of ways in which computers can be integrated in teaching-learning amongst the end users of these tools. Earlier research lacked a systematic study of the manner and the extent of computer use by teachers. The…

  14. Teaching Ethics in the Community College Data Processing Curriculum.

    ERIC Educational Resources Information Center

    Gottleber, T. T.

    1988-01-01

    Explores computer-related crime, possible causes for the dramatic increase in criminal activity in the last 20 years, and the moral standards of the computer criminal. Discusses the responsibility of community colleges to teach individuals the moral implications inherent in the use of computers. Suggests objectives for a course dealing with these…

  15. Architecture as a Quality in the Learning and Teaching Process.

    ERIC Educational Resources Information Center

    Cold, Birgit

    Using an outline format accompanied by numerous photographs and sketches, this brochure explores the relationship of "school" to people's conceptions, actions, and physical surroundings, highlighting changes over the past 20 years in Scandinavian school design. Two major conceptual changes are decentralized administration and teaching and learning…

  16. How Teaching Writing Can Affect Our Own Writing Process.

    ERIC Educational Resources Information Center

    Miller, Toni

    One teacher's experience with changes in writing skills and attitudes while teaching writing led to studies of the experiences of three female graduate student writing tutors with widely varying backgrounds working in a university tutorial service. One was a student from a blue collar family who had entered college as a mature student; one had…

  17. The Teaching Evaluation Process: Segmentation of Marketing Students.

    ERIC Educational Resources Information Center

    Yau, Oliver H. M.; Kwan, Wayne

    1993-01-01

    A study applied the concept of market segmentation to student evaluation of college teaching, by assessing whether there exist several segments of students and how this relates to their evaluation of faculty. Subjects were 156 Australian undergraduate business administration students. Results suggest segments do exist, with different expectations…

  18. The Principal in the Teaching and Learning Process

    ERIC Educational Resources Information Center

    Ediger, Marlow

    2009-01-01

    Today's school principal has a plethora of duties and responsibilities. Among many others, he/she is expected to supervise and monitor teacher progress in the classroom. Too frequently in the past, principals performed largely management duties in schools, but now each principal must also assist in teaching and learning situations. How might the…

  19. Student Satisfaction and Its Implications in the Process of Teaching

    ERIC Educational Resources Information Center

    Ciobanu, Alina; Ostafe, Livia

    2014-01-01

    Student satisfaction is widely recognized as an indicator of the quality of students' learning and teaching experience. This study aims to highlight how satisfied students (from the primary and preschool pedagogy specialization within the Faculty of Psychology and Educational Sciences, who are studying to become future kindergarten and primary…

  20. Teaching Freshmen To Understand Research as a Process of Inquiry.

    ERIC Educational Resources Information Center

    Tracey, Karen

    Freshmen often approach research papers by selecting a "giant topic" and going to the library to confront swamps and mountains of resources. A different approach to teaching research is designed to help students begin to shift the often counter-productive paradigm under which they operate. The classroom strategy proposed is 3-fold. Rather than…

  1. Human skin surface evaluation by image processing

    NASA Astrophysics Data System (ADS)

    Zhu, Liangen; Zhan, Xuemin; Xie, Fengying

    2003-12-01

    Human skin gradually loses its tension and becomes very dry as time passes. The use of cosmetics is effective in preventing skin aging, and many cosmetic products are now available. To demonstrate their effects, it is desirable to develop a way to evaluate skin surface condition quantitatively. In this paper, an automatic skin evaluation method is proposed. The skin surface has a pattern called grid texture, composed of valleys that spread vertically, horizontally, and obliquely, and of the hills separated by them. Changes in the grid are closely linked to the condition of the skin surface and can serve as a good indicator of skin condition. By measuring the skin grid using digital image processing technologies, we can evaluate the aging, health, and alimentary status of the skin surface. In this method, the skin grid is first detected to form a closed net. Then, skin parameters such as roughness, tension, scale, and gloss are calculated from statistical measurements of the net. By analyzing these parameters, the condition of the skin can be monitored.
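
    As a rough illustration of deriving a texture parameter from gray-level statistics, the sketch below defines a hypothetical roughness index as the mean absolute deviation of the gray-level surface from a box-smoothed version of itself. The paper's actual parameters are computed from the detected grid net, which is not reproduced here; the window size `k` is an assumption.

```python
import numpy as np

def roughness(gray, k=5):
    """Hypothetical roughness index: mean absolute deviation of the
    gray-level surface from its k x k box-smoothed version."""
    pad = k // 2
    padded = np.pad(gray.astype(float), pad, mode="edge")
    # Box filter computed with an integral image (summed-area table).
    c = np.cumsum(np.cumsum(padded, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))
    h, w = gray.shape
    window_sums = (c[k:k + h, k:k + w] - c[:h, k:k + w]
                   - c[k:k + h, :w] + c[:h, :w])
    smooth = window_sums / (k * k)
    return float(np.mean(np.abs(gray.astype(float) - smooth)))

rng = np.random.default_rng(2)
flat = np.full((12, 12), 100.0)
noisy = flat + rng.normal(0.0, 5.0, flat.shape)
print(roughness(flat))         # 0.0 for a perfectly flat patch
print(roughness(noisy) > 1.0)  # True: texture raises the index
```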

  2. Image processing software for imaging spectrometry data analysis

    NASA Technical Reports Server (NTRS)

    Mazer, Alan; Martin, Miki; Lee, Meemong; Solomon, Jerry E.

    1988-01-01

    Imaging spectrometers simultaneously collect image data in hundreds of spectral channels, from the near-UV to the IR, and can thereby provide direct surface materials identification by means resembling laboratory reflectance spectroscopy. Attention is presently given to a software system, the Spectral Analysis Manager (SPAM) for the analysis of imaging spectrometer data. SPAM requires only modest computational resources and is composed of one main routine and a set of subroutine libraries. Additions and modifications are relatively easy, and special-purpose algorithms have been incorporated that are tailored to geological applications.

  3. VIP: Vortex Image Processing pipeline for high-contrast direct imaging of exoplanets

    NASA Astrophysics Data System (ADS)

    Gomez Gonzalez, Carlos Alberto; Wertz, Olivier; Christiaens, Valentin; Absil, Olivier; Mawet, Dimitri

    2016-03-01

    VIP (Vortex Image Processing pipeline) provides pre- and post-processing algorithms for high-contrast direct imaging of exoplanets. Written in Python, VIP provides a very flexible framework for data exploration and image processing and supports high-contrast imaging observational techniques, including angular, reference-star and multi-spectral differential imaging. Several post-processing algorithms for PSF subtraction based on principal component analysis are available as well as the LLSG (Local Low-rank plus Sparse plus Gaussian-noise decomposition) algorithm for angular differential imaging. VIP also implements the negative fake companion technique coupled with MCMC sampling for rigorous estimation of the flux and position of potential companions.
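
    The PCA-based PSF subtraction mentioned above can be sketched with a minimal classical-PCA implementation: build a low-rank basis of the frame stack via SVD, project each frame onto that basis, and subtract the projection. This is a generic illustration of the principle, not VIP's actual API; function name and parameters are invented for the sketch.

```python
import numpy as np

def pca_psf_subtract(cube, ncomp):
    """Subtract a low-rank PSF model from a stack of frames.
    cube: (nframes, ny, nx); returns a residual cube of the same shape."""
    n, ny, nx = cube.shape
    X = cube.reshape(n, ny * nx)
    mean = X.mean(axis=0)
    Xc = X - mean
    # Principal components of the frame stack via SVD.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    basis = Vt[:ncomp]                   # (ncomp, ny*nx)
    model = Xc @ basis.T @ basis + mean  # projection onto the PSF subspace
    return (X - model).reshape(n, ny, nx)

rng = np.random.default_rng(1)
psf = rng.random((8, 8))
cube = np.stack([psf * (1.0 + 0.1 * i) for i in range(10)])  # PSF of varying brightness
residual = pca_psf_subtract(cube, ncomp=2)
print(np.max(np.abs(residual)) < 1e-8)   # True: the pure-PSF cube is fully modeled
```

    In angular differential imaging the residual frames would then be derotated and combined so that a faint companion, which moves relative to the quasi-static PSF, adds up coherently.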

  4. CT imaging of wet specimens from a pathology museum: How to build a "virtual museum" for radiopathological correlation teaching.

    PubMed

    Chhem, R K; Woo, J K H; Pakkiri, P; Stewart, E; Romagnoli, C; Garcia, B

    2006-01-01

    X-rays and CT have been used to examine specimens such as human remains, mummies and formalin-fixed specimens. However, CT has not been used to study formalin-fixed wet specimens within their containers. The purpose of our study is firstly to demonstrate the role of CT as a non-destructive imaging method for the study of wet pathological specimens and secondly to use the CT data as a method for teaching pathological and radiological correlation. CT scanning of 31 musculoskeletal specimens from a pathology museum was carried out. Images were reconstructed using both soft-tissue and bone algorithms. Further processing of the data produced coronal and sagittal reformats of each specimen. The container and storage solution were manually removed using Volume Viewer Voxtool software to produce a 3D reconstruction of each specimen. Photographs of each specimen (container and close-up) were displayed alongside selected coronal, sagittal, 3D reconstructions and cine sequences in a specially designed computer program. CT is a non-destructive imaging modality for building didactic materials from wet specimens in a Pathology Museum, for teaching radiological and pathological correlation. PMID:16814293

  5. Bessel filters applied in biomedical image processing

    NASA Astrophysics Data System (ADS)

    Mesa Lopez, Juan Pablo; Castañeda Saldarriaga, Diego Leon

    2014-06-01

    Magnetic resonance imaging uses magnets and radio waves to create images of the body; however, in some images it is difficult to recognize organs or foreign agents present in the body. The objective of these Bessel filters is to significantly increase the resolution of magnetic resonance images, making them much clearer in order to detect anomalies and diagnose illness. Bessel functions arise in solving the Schrödinger equation for a particle enclosed in a cylinder, and the filters act on the image by reshaping its colors and contours; therein lies their effectiveness, since a clearer outline makes abnormalities inside the body more defined and easier to recognize.

  6. Two satellite image sets for the training and validation of image processing systems for defense applications

    NASA Astrophysics Data System (ADS)

    Peterson, Michael R.; Aldridge, Shawn; Herzog, Britny; Moore, Frank

    2010-04-01

    Many image processing algorithms utilize the discrete wavelet transform (DWT) to provide efficient compression and near-perfect reconstruction of image data. Defense applications often require the transmission of data at high levels of compression over noisy channels. In recent years, evolutionary algorithms (EAs) have been utilized to optimize image transform filters that outperform standard wavelets for bandwidth-constrained compression of satellite images. The optimization of these filters requires the use of training images appropriately chosen for the image processing system's intended applications. This paper presents two robust sets of fifty images each intended for the training and validation of satellite and unmanned aerial vehicle (UAV) reconnaissance image processing algorithms. Each set consists of a diverse range of subjects consisting of cities, airports, military bases, and landmarks representative of the types of images that may be captured during reconnaissance missions. Optimized algorithms may be "overtrained" for a specific problem instance and thus exhibit poor performance over a general set of data. To reduce the risk of overtraining an image filter, we evaluate the suitability of each image as a training image. After evolving filters using each image, we assess the average compression performance of each filter across the entire set of images. We thus identify a small subset of images from each set that provide strong performance as training images for the image transform optimization problem. These images will also provide a suitable platform for the development of other algorithms for defense applications. The images are available upon request from the contact author.
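
    The DWT underlying such compression schemes can be illustrated with a one-level 2-D Haar transform, the simplest orthogonal wavelet; the evolved filters discussed in the abstract replace these fixed Haar coefficients with optimized ones, but the subband decomposition works the same way. A minimal sketch, assuming even image dimensions:

```python
import numpy as np

def haar2d(img):
    """One level of the 2-D Haar DWT: returns (LL, LH, HL, HH) subbands.
    Image sides must be even."""
    def step(x):  # pairwise averages and differences along the last axis
        a = (x[..., 0::2] + x[..., 1::2]) / np.sqrt(2)
        d = (x[..., 0::2] - x[..., 1::2]) / np.sqrt(2)
        return a, d
    La, Ld = step(img)                # filter along rows
    LL, LH = step(La.swapaxes(0, 1))  # then along columns of the lowpass part
    HL, HH = step(Ld.swapaxes(0, 1))  # and of the highpass part
    return (LL.swapaxes(0, 1), LH.swapaxes(0, 1),
            HL.swapaxes(0, 1), HH.swapaxes(0, 1))

img = np.arange(16.0).reshape(4, 4)
LL, LH, HL, HH = haar2d(img)
# A smooth ramp concentrates most of its energy in the LL subband,
# which is what makes coarse quantization of the other subbands cheap:
print(np.sum(LL**2) / np.sum(img**2))
```

    Compression then follows from quantizing or discarding the small detail coefficients; the transform is orthogonal, so total energy is preserved across the four subbands.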

  7. DTV color and image processing: past, present, and future

    NASA Astrophysics Data System (ADS)

    Kim, Chang-Yeong; Lee, SeongDeok; Park, Du-Sik; Kwak, Youngshin

    2006-01-01

    The image processor in digital TV has come to play an important role due to customers' growing desire for higher image quality. Customers want more vivid and natural images without visual artifacts, and image processing techniques aim to meet these needs despite the physical limitations of the panel. In this paper, developments in image processing techniques for DTV, in conjunction with developments in display technologies at Samsung R and D, are reviewed. The algorithms introduced cover techniques required to solve problems caused by the characteristics of the panel itself, as well as techniques for enhancing the image quality of input signals, optimized for the panel and for human visual characteristics.

  8. Image processing techniques for digital orthophotoquad production

    USGS Publications Warehouse

    Hood, Joy J.; Ladner, L. J.; Champion, Richard A.

    1989-01-01

    Orthophotographs have long been recognized for their value as supplements or alternatives to standard maps. Recent trends towards digital cartography have resulted in efforts by the US Geological Survey to develop a digital orthophotoquad production system. Digital image files were created by scanning color infrared photographs on a microdensitometer. Rectification techniques were applied to remove tile and relief displacement, thereby creating digital orthophotos. Image mosaicking software was then used to join the rectified images, producing digital orthophotos in quadrangle format.

  9. Enhancing the Teaching and Learning of Mathematical Visual Images

    ERIC Educational Resources Information Center

    Quinnell, Lorna

    2014-01-01

    The importance of mathematical visual images is indicated by the introductory paragraph in the Statistics and Probability content strand of the Australian Curriculum, which draws attention to the importance of learners developing skills to analyse and draw inferences from data and "represent, summarise and interpret data and undertake…

  10. Cardiovascular Imaging and Image Processing: Theory and Practice - 1975

    NASA Technical Reports Server (NTRS)

    Harrison, Donald C. (Editor); Sandler, Harold (Editor); Miller, Harry A. (Editor); Hood, Manley J. (Editor); Purser, Paul E. (Editor); Schmidt, Gene (Editor)

    1975-01-01

    Ultrasonography was examined in regard to the developmental highlights and present applications of cardiac ultrasound. Doppler ultrasonic techniques and the technology of miniature acoustic element arrays were reported. X-ray angiography was discussed with special considerations on quantitative three-dimensional dynamic imaging of structure and function of the cardiopulmonary and circulatory systems in all regions of the body. Nuclear cardiography and scintigraphy, three-dimensional imaging of the myocardium with isotopes, and the commercialization of the echocardioscope were studied.

  11. An Image Processing Algorithm Based On FMAT

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Pal, Sankar K.

    1995-01-01

    Information deleted in ways minimizing adverse effects on reconstructed images. New grey-scale generalization of medial axis transformation (MAT), called FMAT (short for Fuzzy MAT) proposed. Formulated by making natural extension to fuzzy-set theory of all definitions and conditions (e.g., characteristic function of disk, subset condition of disk, and redundancy checking) used in defining MAT of crisp set. Does not need image to have any kind of a priori segmentation, and allows medial axis (and skeleton) to be fuzzy subset of input image. Resulting FMAT (consisting of maximal fuzzy disks) capable of reconstructing exactly original image.

  12. An invertebrate embryologist's guide to routine processing of confocal images.

    PubMed

    von Dassow, George

    2014-01-01

    It is almost impossible to use a confocal microscope without encountering the need to transform the raw data through image processing. Adherence to a set of straightforward guidelines will help ensure that image manipulations are both credible and repeatable. Meanwhile, attention to optimal data collection parameters will greatly simplify image processing, not only for convenience but for quality and credibility as well. Here I describe how to conduct routine confocal image processing tasks, including creating 3D animations or stereo images, false coloring or merging channels, background suppression, and compressing movie files for display. PMID:24567209
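
    Channel merging with simple background suppression, one of the routine tasks listed above, can be sketched as follows. The uniform `background` level and the per-channel peak normalization are simplifying assumptions for illustration; real confocal workflows estimate background more carefully and should document every such manipulation.

```python
import numpy as np

def merge_channels(red=None, green=None, blue=None, background=0.0):
    """False-color merge of up to three single-channel images into an
    RGB array in [0, 1], subtracting a uniform background level first."""
    channels = [red, green, blue]
    shape = next(c.shape for c in channels if c is not None)
    rgb = np.zeros(shape + (3,))
    for i, ch in enumerate(channels):
        if ch is None:
            continue
        ch = np.clip(ch.astype(float) - background, 0.0, None)
        peak = ch.max()
        rgb[..., i] = ch / peak if peak > 0 else ch
    return rgb

nuclei = np.zeros((4, 4)); nuclei[1, 1] = 200.0   # e.g. a nuclear-stain channel
actin  = np.zeros((4, 4)); actin[2, 3] = 120.0    # e.g. a cytoskeleton channel
rgb = merge_channels(red=actin, blue=nuclei, background=10.0)
print(rgb.shape, rgb[1, 1, 2], rgb[2, 3, 0])   # (4, 4, 3) 1.0 1.0
```

    Identical transformations applied identically to all images being compared is the key to keeping such manipulations credible and repeatable.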

  13. Viking image processing. [digital stereo imagery and computer mosaicking

    NASA Technical Reports Server (NTRS)

    Green, W. B.

    1977-01-01

    The paper discusses the camera systems capable of recording black and white and color imagery developed for the Viking Lander imaging experiment. Each Viking Lander image consisted of a matrix of numbers with 512 rows and an arbitrary number of columns up to a maximum of about 9,000. Various techniques were used in the processing of the Viking Lander images, including: (1) digital geometric transformation, (2) the processing of stereo imagery to produce three-dimensional terrain maps, and (3) computer mosaicking of distinct processed images. A series of Viking Lander images is included.

  14. Primary School Teachers' Understanding of Science Process Skills in Relation to Their Teaching Qualifications and Teaching Experience

    NASA Astrophysics Data System (ADS)

    Shahali, Edy H. M.; Halim, Lilia; Treagust, David F.; Won, Mihye; Chandrasegaran, A. L.

    2015-11-01

    This study investigated the understanding of science process skills (SPS) of 329 science teachers from 52 primary schools selected by random sampling. The understanding of SPS was measured in terms of conceptual and operational aspects of SPS using an instrument called the Science Process Skills Questionnaire (SPSQ) with a Cronbach's alpha reliability of 0.88. The findings showed that the teachers' conceptual understanding of SPS was much weaker than their practical application of SPS. The teachers' understanding of SPS differed by their teaching qualifications but not so much by their teaching experience. Emphasis needs to be given to both conceptual and operational understanding of SPS during pre-service and in-service teacher education to enable science teachers to use the skills and implement inquiry-based lessons in schools.

  15. Legal and ethical issues in the use of anonymous images in pathology teaching and research.

    PubMed

    Tranberg, H A; Rous, B A; Rashbass, J

    2003-02-01

    The privacy of patients' health information is of paramount importance. However, it is equally important that medical staff and students have access to photographs and video recordings of real patients for training purposes. Where the patient can be identified from such images, his or her consent is clearly required to both obtain the image and to use it in this way. However, the need for consent, both legally and ethically, is much less convincing where the patient cannot, by the very nature of the image, be identified from it. This is the case for many images used in the teaching of clinical medicine, such as videos taken of laparoscopies, images of internal organs and unlabelled X-rays. PMID:12558741

  16. The constructive use of images in medical teaching: a literature review

    PubMed Central

    Norris, Elizabeth M

    2012-01-01

    This literature review illustrates the various ways images are used in teaching, the evidence appertaining to that use, and advice regarding permissions. Four databases were searched, and 23 papers were retained from the 135 abstracts found for the study. Images are frequently used to motivate an audience to listen to a lecture or to note key medical findings. Images can promote observation skills when linked with learning outcomes, but their timing and relevance are important: they must be congruent with the dialogue. Student reflection can be encouraged by asking students to draw their own impressions of a course as an integral part of course feedback. Careful, structured use of images improves attention, cognition, reflection, and possibly memory retention. PMID:22666530

  17. Image processing methods for visual prostheses based on DSP

    NASA Astrophysics Data System (ADS)

    Liu, Huwei; Zhao, Ying; Tian, Yukun; Ren, Qiushi; Chai, Xinyu

    2008-12-01

    Visual prostheses for extreme vision impairment have come closer to reality in recent years. The task of this research has been to design exoteric devices and to study image processing algorithms and methods for images of differing complexity. We have developed a real-time system, based on a DSP (digital signal processor), capable of image capture and processing to obtain the most available and important image features for recognition and simulation experiments. Beyond developing the hardware system, we introduce algorithms such as resolution reduction, information extraction, dilation and erosion, square (circular) pixelization, and Gaussian pixelization, and we classify images into stages of increasing complexity: simple images, moderately complex images, and complex images. As a result, this work produces the signals needed for transmission to the electrode array and images for the simulation experiment.
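
    The square pixelization step mentioned above reduces an input image to a coarse grid of phosphene-like blocks. A minimal block-averaging sketch, with an assumed `block` size parameter, is:

```python
import numpy as np

def square_pixelize(image, block):
    """Reduce resolution by averaging over block x block squares,
    mimicking a square-pixelized phosphene pattern."""
    h, w = image.shape
    h2, w2 = h - h % block, w - w % block   # crop to a multiple of block
    blocks = image[:h2, :w2].reshape(h2 // block, block, w2 // block, block)
    return blocks.mean(axis=(1, 3))

img = np.arange(36.0).reshape(6, 6)
low = square_pixelize(img, 3)
print(low)   # a 2 x 2 grid of block means
```

    Circular or Gaussian pixelization would replace the uniform block average with a circular mask or a Gaussian weighting over each block.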

  18. Image-Processing Software For A Hypercube Computer

    NASA Technical Reports Server (NTRS)

    Lee, Meemong; Mazer, Alan S.; Groom, Steven L.; Williams, Winifred I.

    1992-01-01

    Concurrent Image Processing Executive (CIPE) is software system intended to develop and use image-processing application programs on concurrent computing environment. Designed to shield programmer from complexities of concurrent-system architecture, it provides interactive image-processing environment for end user. CIPE utilizes architectural characteristics of particular concurrent system to maximize efficiency while preserving architectural independence from user and programmer. CIPE runs on Mark-IIIfp 8-node hypercube computer and associated SUN-4 host computer.

  19. Optimizing signal and image processing applications using Intel libraries

    NASA Astrophysics Data System (ADS)

    Landré, Jérôme; Truchetet, Frédéric

    2007-01-01

    This paper presents optimized signal and image processing libraries from Intel Corporation. Intel Performance Primitives (IPP) is a low-level signal and image processing library developed by Intel Corporation to optimize code on Intel processors. Open Computer Vision library (OpenCV) is a high-level library dedicated to computer vision tasks. This article describes the use of both libraries to build flexible and efficient signal and image processing applications.

  20. Principles of cryo-EM single-particle image processing.

    PubMed

    Sigworth, Fred J

    2016-02-01

    Single-particle reconstruction is the process by which 3D density maps are obtained from a set of low-dose cryo-EM images of individual macromolecules. This review considers the fundamental principles of this process and the steps in the overall workflow for single-particle image processing. Also considered are the limits that image signal-to-noise ratio places on resolution and the distinguishing of heterogeneous particle populations. PMID:26705325

  1. AUDIOVISUAL RESOURCES ON THE TEACHING PROCESS IN SURGICAL TECHNIQUE

    PubMed Central

    PUPULIM, Guilherme Luiz Lenzi; IORIS, Rafael Augusto; GAMA, Ricardo Ribeiro; RIBAS, Carmen Australia Paredes Marcondes; MALAFAIA, Osvaldo; GAMA, Mirnaluci

    2015-01-01

    Background: The development of didactic means that create opportunities for complete and repeated viewing of surgical procedures is of great importance nowadays, owing to the increasing difficulty of in vivo training. Audiovisual resources thus maximize the living resources used in education and minimize the problems that arise from verbalism alone. Aim: To evaluate the use of digital video as a pedagogical strategy in the teaching of surgical technique in medical education. Methods: Cross-sectional study with 48 third-year medical students enrolled in the surgical technique discipline. They were divided into two groups of 12 pairs each, both subject to the conventional method of teaching, with one group also exposed to an alternative method (video) showing the technical details. All students performed phlebotomy in the experimental laboratory, with evaluation and assistance from the teacher/monitor during the procedure. Finally, they answered a self-administered questionnaire on the teaching method after performing the operation. Results: Most of those who did not watch the video took longer to execute the procedure, asked more questions, and needed more faculty assistance. All of those exposed to the video followed the chronology of implementation and approved of the new method; 95.83% felt able to repeat the procedure by themselves, while 62.5% of the students who had only the conventional method reported a merely regular capacity for assimilating the technique. Both groups mentioned regular difficulty, but those who had not seen the video had more difficulty in performing the technique. Conclusion: The traditional method of teaching combined with the video favored understanding and conveyed confidence, particularly because the activity requires technical skill. The technique with video visualization motivated students and aroused interest, facilitated the understanding and memorization of the steps of the procedure, and benefited the students' performance. PMID:26734790

  2. Photogrammetric processing of large images on a PC

    NASA Astrophysics Data System (ADS)

    Skryabin, Sergei V.; Zheltov, Sergey Y.; Visilter, Yury V.

    1995-12-01

    A PC-based digital stereophotogrammetric system is being developed. The system uses a standard IBM PC-AT/386/486 as the processing unit and can perform stereo model building and terrain reconstruction from aerial and space photographs. The distinguishing feature of this system is its ability to process large digital images that exceed disk capacity. The initial images must first be decomposed into fragments on a workstation computer; the fragments are accompanied by some extra information. The survey image is then created using pyramid image processing.
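
    The fragment-plus-pyramid idea above can be sketched as follows; this is a hypothetical NumPy illustration of a mean pyramid (the function name and 2x2 halving scheme are my own, not the authors'):

```python
import numpy as np

def build_pyramid(img, levels):
    """Build a mean pyramid: each level halves the resolution by
    averaging 2x2 blocks, so coarse levels fit in small memory."""
    pyramid = [img.astype(float)]
    for _ in range(levels - 1):
        a = pyramid[-1]
        h, w = a.shape[0] // 2 * 2, a.shape[1] // 2 * 2  # crop to even size
        a = a[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        pyramid.append(a)
    return pyramid
```

    Coarse levels can be kept in memory for overview display while full-resolution fragments stay on disk until needed.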

  3. Measuring multivariate subjective image quality for still and video cameras and image processing system components

    NASA Astrophysics Data System (ADS)

    Nyman, Göte; Leisti, Tuomas; Lindroos, Paul; Radun, Jenni; Suomi, Sini; Virtanen, Toni; Olives, Jean-Luc; Oja, Joni; Vuori, Tero

    2008-01-01

    The subjective quality of an image is a non-linear product of several simultaneously contributing subjective factors, such as experienced naturalness, colorfulness, lightness, and clarity. We have studied subjective image quality using a hybrid qualitative/quantitative method in order to disclose the attributes relevant to experienced image quality. We describe our approach to mapping the image quality attribute space in three cases: a still studio image, video clips of a talking head and moving objects, and the use of image processing pipes for 15 still image contents. Naive observers participated in three image quality research contexts in which they were asked to freely and spontaneously describe the quality of the presented test images. Standard viewing conditions were used. The data show which attributes are most relevant for each test context and how they differentiate between the selected image contents and processing systems. The role of non-HVS-based image quality analysis is discussed.

  4. Interactive Computer Assisted Instruction in Teaching of Process Analysis and Simulation.

    ERIC Educational Resources Information Center

    Nuttall, Herbert E., Jr.; Himmelblau, David M.

    To improve the instructional process, time shared computer-assisted instructional methods were developed to teach upper division undergraduate chemical engineering students the concepts of process simulation and analysis. The interactive computer simulation aimed at enabling the student to learn the difficult concepts of process dynamics by…

  5. The Comparison of Teaching Process of First Reading in USA and Turkey

    ERIC Educational Resources Information Center

    Bay, Yalçin

    2014-01-01

    The aim of this study is to compare the teaching process of early reading in the US with that in Turkey. The study observes students' developing early reading and their reading miscues, and compares the early reading process of students in the US with that of students in Turkey. This study includes the following research question: What are…

  6. An image-processing software package: UU and Fig for optical metrology applications

    NASA Astrophysics Data System (ADS)

    Chen, Lujie

    2013-06-01

    Modern optical metrology applications are largely supported by computational methods, such as phase shifting [1], Fourier transform [2], digital image correlation [3], and camera calibration [4], in which image processing is a critical and indispensable component. While it is not difficult to obtain a wide variety of image-processing programs from the internet, few cater to the relatively specialized area of optical metrology. This paper introduces an image-processing software package, UU (data processing) and Fig (data rendering), that incorporates many useful functions for processing optical metrological data. The cross-platform programs UU and Fig are developed on top of wxWidgets and, at the time of writing, have been tested on Windows, Linux, and Mac OS. The user interface is designed to offer precise, scientific control of the underlying processing procedures. The data input/output mechanism is designed to accommodate diverse file formats and to facilitate interaction with other, independent programs. In terms of robustness, although the software was initially developed for personal use, it is comparable in stability and accuracy to most commercial software of a similar nature. In addition to functions for optical metrology, the software package has a rich collection of useful tools in the following areas: real-time image streaming from USB and GigE cameras, computational geometry, computer vision, data fitting, 3D image processing, vector image processing, precision device control (rotary stages, PZT stages, etc.), point cloud to surface reconstruction, volume rendering, and batch processing. The software package is currently used in a number of universities for teaching and research.
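
    As an illustration of the phase-shifting methods such packages support, here is a minimal sketch of the standard four-step algorithm (the function is my own illustration, not part of UU/Fig):

```python
import numpy as np

def four_step_phase(i1, i2, i3, i4):
    """Recover the wrapped phase from four frames with pi/2 phase
    shifts: I_k = A + B*cos(phi + k*pi/2), k = 0..3.
    Then I4 - I2 = 2B*sin(phi) and I1 - I3 = 2B*cos(phi)."""
    return np.arctan2(i4 - i2, i1 - i3)
```

    `np.arctan2` resolves the correct quadrant, so the result is the wrapped phase in (-pi, pi]; unwrapping is a separate step.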

  7. Image processing in a maritime environment

    NASA Astrophysics Data System (ADS)

    Pietrzak, Kenneth A.; Alberg, Matthew T.

    2015-05-01

    The performance of mast-mounted imaging sensors operating near the marine boundary layer can be severely degraded by environmental conditions. Haze, atmospheric turbulence, and rough seas can all impact imaging system performance; examples of these impacts are provided in this paper. In addition, sensor artifacts such as deinterlace artifacts can also degrade imaging performance. Deinterlace artifacts caused by a rotating mast are often too severe for an operator to use the imagery for detection of contacts. An artifact edge minimization approach is presented that eliminates these global-motion-based deinterlace artifacts.
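
    The paper's artifact edge minimization approach is not reproduced here; as background, a minimal sketch of a generic "bob" deinterlace, which rebuilds each field as a full frame and thereby avoids combing between two fields captured at different mast positions (function name is my own):

```python
import numpy as np

def bob_deinterlace(frame):
    """Split an interlaced frame into its two fields and rebuild each
    as a full-height frame by line repetition."""
    even = frame[0::2].repeat(2, axis=0)[:frame.shape[0]]
    odd = frame[1::2].repeat(2, axis=0)[:frame.shape[0]]
    return even, odd
```

    Line repetition halves vertical resolution; real deinterlacers interpolate between lines or between fields instead.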

  8. Quantum Image Morphology Processing Based on Quantum Set Operation

    NASA Astrophysics Data System (ADS)

    Zhou, Ri-Gui; Chang, Zhi-bo; Fan, Ping; Li, Wei; Huan, Tian-tian

    2015-06-01

    Set operation is the essential operation of mathematical morphology, but it is difficult to perform set operations quickly on an electronic computer, so the efficiency of traditional morphology processing is very low. In this paper, by combining quantum computation with image processing, through multiple quantum logic gates and by combining quantum image storage, a quantum loading scheme, and the Boyer search algorithm, a novel quantum image processing method is proposed: morphological image processing based on quantum set operations. The basic operations, such as erosion and dilation, are carried out on images using the quantum erosion and quantum dilation algorithms. Because the parallelism of quantum computation greatly speeds up the set operations, the image processing achieves higher efficiency. The runtime of our quantum algorithm is . As a result, this method can produce better results.
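
    For reference, the classical counterparts of these operations can be sketched in a few lines of NumPy (a hypothetical illustration, not the quantum algorithm; boundary handling via wrap-around is simplified):

```python
import numpy as np

def binary_dilate(img, se):
    """Classical dilation: OR of the image shifted by each active
    offset of the structuring element (origin at its center).
    Note: np.roll wraps at the borders, a simplification."""
    out = np.zeros_like(img, dtype=bool)
    cy, cx = se.shape[0] // 2, se.shape[1] // 2
    for dy, dx in zip(*np.nonzero(se)):
        out |= np.roll(np.roll(img, dy - cy, axis=0), dx - cx, axis=1)
    return out

def binary_erode(img, se):
    """Erosion by duality: complement of the dilation of the
    complement with the reflected structuring element."""
    return ~binary_dilate(~img.astype(bool), se[::-1, ::-1])
```

    The cost of the classical version grows with both image size and structuring-element size, which is the bottleneck the quantum approach targets.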

  9. Sub-image data processing in Astro-WISE

    NASA Astrophysics Data System (ADS)

    Mwebaze, Johnson; Boxhoorn, Danny; McFarland, John; Valentijn, Edwin A.

    2013-01-01

    Most often, astronomers are interested in a source (e.g., moving, variable, or extreme in some colour index) that lies on a few pixels of an image. However, the classical approach in astronomical data processing is the processing of the entire image or set of images even when the sole source of interest may exist on only a few pixels of one or a few images. This is because pipelines have been written and designed for instruments with fixed detector properties (e.g., image size, calibration frames, overscan regions, etc.). Furthermore, all metadata and processing parameters are based on an instrument or a detector. Accordingly, out of many thousands of images for a survey, this can lead to unnecessary processing of data that is both time-consuming and wasteful. We describe the architecture and an implementation of sub-image processing in Astro-WISE. The architecture enables a user to select, retrieve and process only the relevant pixels in an image where the source exists. We show that lineage data collected during the processing and analysis of datasets can be reused to perform selective reprocessing (at sub-image level) on datasets while the remainder of the dataset is untouched, a difficult process to automate without lineage.
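
    The core idea of retrieving only the pixels where a source exists can be sketched as a simple cutout (a hypothetical illustration, not the Astro-WISE implementation):

```python
import numpy as np

def cutout(image, y, x, half):
    """Extract a (2*half+1)-square window around (y, x), clipped to
    the image bounds, so only the pixels of interest are processed."""
    y0, y1 = max(0, y - half), min(image.shape[0], y + half + 1)
    x0, x1 = max(0, x - half), min(image.shape[1], x + half + 1)
    return image[y0:y1, x0:x1]
```

    In a lineage-aware system, the calibration frames applied to this window would be looked up from the stored processing history rather than recomputed for the whole image.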

  10. Experiments with recursive estimation in astronomical image processing

    NASA Technical Reports Server (NTRS)

    Busko, I.

    1992-01-01

    Recursive estimation concepts have been applied to image enhancement problems since the 1970s; however, very few applications in the particular area of astronomical image processing are known. These concepts were derived, for 2-dimensional images, from the well-known theory of Kalman filtering in one dimension. Historically, these techniques were applied to digital images because of the images' scanned nature: the temporal output of a scanner device can be processed on-line by techniques borrowed directly from 1-dimensional recursive signal analysis. However, recursive estimation has properties that keep it attractive even today, when large computer memories make the full scanned image available to the processor at any time. One particularly important property is the ability of recursive techniques to deal with non-stationary phenomena, that is, phenomena whose statistical properties vary in time (or with position in a 2-D image). Many image processing methods make underlying stationarity assumptions, either for the stochastic field being imaged, for the imaging system properties, or both; they will underperform, or even fail, when applied to images that deviate significantly from stationarity. Recursive methods, on the contrary, make it feasible to perform adaptive processing, that is, to process the image with a processor whose properties are tuned to the image's local statistical properties. Recursive estimation can be used to build estimates of images degraded by phenomena such as noise and blur. We show examples of recursive adaptive processing of astronomical images, using several local statistical properties to drive the adaptive processor, such as average signal intensity, signal-to-noise ratio, and the autocorrelation function. The software was developed under IRAF and as such will be made available to interested users.
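
    A one-dimensional analogue of the recursive estimators discussed above is the scalar Kalman filter; a minimal sketch under a random-walk signal model (my own illustration, not the paper's 2-D processor):

```python
def kalman_1d(zs, q, r, x0=0.0, p0=1.0):
    """Scalar Kalman filter for noisy measurements zs of a
    random-walk signal: process variance q, measurement variance r."""
    x, p = x0, p0
    out = []
    for z in zs:
        p += q                # predict: uncertainty grows by q
        k = p / (p + r)       # Kalman gain
        x += k * (z - x)      # update toward the measurement
        p *= (1 - k)          # uncertainty shrinks after the update
        out.append(x)
    return out
```

    The adaptivity the abstract describes corresponds to letting q and r vary with the local statistics (e.g., local signal-to-noise) instead of keeping them fixed.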

  11. A color image processing pipeline for digital microscope

    NASA Astrophysics Data System (ADS)

    Liu, Yan; Liu, Peng; Zhuang, Zhefeng; Chen, Enguo; Yu, Feihong

    2012-10-01

    Digital microscopes have found wide application in biology, medicine, and other fields. A digital microscope differs from a traditional optical microscope in that there is no need to observe the sample directly through an eyepiece, because the optical image is projected directly onto the CCD/CMOS camera. However, because of the imaging differences between the human eye and a sensor, a color image processing pipeline is needed for the digital microscope's electronic eyepiece to obtain a fine image. The color image pipeline for a digital microscope, comprising the procedures that convert the RAW image data captured by the sensor into a real color image, largely determines the quality of the microscopic image. This pipeline differs from those of digital still and video cameras because of the specific requirements of microscopic images, which should have high dynamic range, preserve the colors of the observed objects, and support a variety of post-processing. In this paper, a new color image processing pipeline is proposed to satisfy the requirements of digital microscope images. The algorithm for each step in the pipeline is designed and optimized to produce high-quality images and accommodate diverse user preferences. With the proposed pipeline implemented on the digital microscope platform, the output color images meet the varied analysis requirements of images in medicine and biology very well. The major steps of the proposed pipeline are: black level adjustment, defective pixel removal, noise reduction, linearization, white balance, RGB color correction, tone scale correction, and gamma correction.
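
    A drastically simplified sketch of such a pipeline, covering only a few of the steps listed above (all names and parameters are hypothetical; demosaicing, defect removal, and noise reduction are omitted):

```python
import numpy as np

def simple_pipeline(raw, black, wb_gains, ccm, gamma=2.2):
    """Toy pipeline: black level -> white balance -> 3x3 color
    correction matrix -> normalize -> gamma encode."""
    rgb = np.clip(raw.astype(float) - black, 0, None)  # black level
    rgb = rgb * wb_gains                               # per-channel gains
    rgb = rgb @ ccm.T                                  # color correction
    rgb = np.clip(rgb / max(rgb.max(), 1e-12), 0, 1)   # normalize
    return rgb ** (1 / gamma)                          # gamma encode
```

    Each real pipeline stage has tunable parameters (e.g., the CCM is calibrated per sensor), which is where the paper's per-step optimization applies.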

  12. Experiences with digital processing of images at INPE

    NASA Technical Reports Server (NTRS)

    Mascarenhas, N. D. A. (Principal Investigator)

    1984-01-01

    Four different research experiments with digital image processing at INPE will be described: (1) edge detection by hypothesis testing; (2) image interpolation by finite impulse response filters; (3) spatial feature extraction methods in multispectral classification; and (4) translational image registration by sequential tests of hypotheses.

  13. APPLEPIPS /Apple Personal Image Processing System/ - An interactive digital image processing system for the Apple II microcomputer

    NASA Technical Reports Server (NTRS)

    Masuoka, E.; Rose, J.; Quattromani, M.

    1981-01-01

    Recent developments related to microprocessor-based personal computers have made low-cost digital image processing systems a reality. Image analysis systems built around these microcomputers provide color image displays for images as large as 256 by 240 pixels in sixteen colors. Descriptive statistics can be computed for portions of an image, and supervised image classification can be obtained. The systems support Basic, Fortran, Pascal, and assembler language. A description is provided of a system which is representative of the new microprocessor-based image processing systems currently on the market. While small systems may never be truly independent of larger mainframes, because they lack 9-track tape drives, the independent processing power of the microcomputers will help alleviate some of the turn-around time problems associated with image analysis and display on the larger multiuser systems.

  14. A pathologist-designed imaging system for anatomic pathology signout, teaching, and research.

    PubMed

    Schubert, E; Gross, W; Siderits, R H; Deckenbaugh, L; He, F; Becich, M J

    1994-11-01

    Pathology images are derived from gross surgical specimens, light microscopy, immunofluorescence, electron microscopy, molecular diagnostic gels, flow cytometry, image analysis data, and clinical laboratory data in graphic form. We have implemented a network of desktop personal computers (PCs) that allow us to easily capture, store, and retrieve gross and microscopic, anatomic, and research pathology images. System architecture involves multiple image acquisition and retrieval sites and a central file server for storage. The digitized images are conveyed via a local area network to and from image capture or display stations. Acquisition sites consist of a high-resolution camera connected to a frame grabber card in a 486-type personal computer, equipped with 16 MB (Table 1) RAM, a 1.05-gigabyte hard drive, and a 32-bit ethernet card for access to our anatomic pathology reporting system. We have designed a push-button workstation for acquiring and indexing images that does not significantly interfere with surgical pathology sign-out. 
Advantages of the system include the following: (1) Improving patient care: the availability of gross images at time of microscopic sign-out, verification of recurrence of malignancy from archived images, monitoring of bone marrow engraftment and immunosuppressive intervention after bone marrow/solid organ transplantation on repeat biopsies, and ability to seek instantaneous consultation with any pathologist on the network; (2) enhancing the teaching environment: building a digital surgical pathology atlas, improving the availability of images for conference support, and sharing cases across the network; (3) enhancing research: case study compilation, metastudy analysis, and availability of digitized images for quantitative analysis and permanent/reusable image records for archival study; and (4) other practical and economic considerations: storing case requisition images and hand-drawn diagrams deters the spread of gross room contaminants and results in considerable cost savings in photographic media for conferences, improved quality assurance by porting control stains across the network, and a multiplicity of other advantages that enhance image and information management in pathology. PMID:7878302

  15. Agglomerates processing on in-flight images of granular products

    NASA Astrophysics Data System (ADS)

    Ros, Frederic; Guillaume, S.; Sevila, Francis

    1993-11-01

    Image analysis can be used to characterize granular populations in many processes in the food industry and in agricultural engineering. Either global or individual parameters can be extracted from the image. However, granular products may appear agglomerated in the image, biasing the individual parameters. Combining statistical and neural network techniques enables building a system that can recognize whether or not products are agglomerated. To process images after agglomerate detection, two approaches have been studied: the first is based on erosion followed by conditional dilation with the original image; the second takes advantage of the graph properties of the agglomerate's skeleton.
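
    The first approach, erosion followed by conditional dilation (also known as reconstruction by dilation), can be sketched for binary images as follows (a hypothetical 4-connected illustration with simplified wrap-around boundaries, not the authors' code):

```python
import numpy as np

def conditional_dilate(marker, mask):
    """Reconstruction by dilation: grow the marker (e.g., an eroded
    image) inside the original mask until stable, recovering the
    full shape of every object the marker touches."""
    prev = np.zeros_like(marker, dtype=bool)
    cur = marker.astype(bool)
    while not np.array_equal(prev, cur):
        prev = cur
        grown = cur.copy()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            grown |= np.roll(np.roll(cur, dy, 0), dx, 1)
        cur = grown & mask.astype(bool)   # never grow outside the mask
    return cur
```

    Starting from an erosion that has split an agglomerate into separate markers, reconstructing each marker separately yields the individual grains without the size bias that plain erosion introduces.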

  16. Breast image pre-processing for mammographic tissue segmentation.

    PubMed

    He, Wenda; Hogg, Peter; Juette, Arne; Denton, Erika R E; Zwiggelaar, Reyer

    2015-12-01

    During mammographic image acquisition, a compression paddle is used to even the breast thickness in order to obtain optimal image quality. Clinical observation has indicated that some mammograms may exhibit abrupt intensity change and low visibility of tissue structures in the breast peripheral areas. Such appearance discrepancies can affect image interpretation and may not be desirable for computer aided mammography, leading to incorrect diagnosis and/or detection which can have a negative impact on sensitivity and specificity of screening mammography. This paper describes a novel mammographic image pre-processing method to improve image quality for analysis. An image selection process is incorporated to better target problematic images. The processed images show improved mammographic appearances not only in the breast periphery but also across the mammograms. Mammographic segmentation and risk/density classification were performed to facilitate a quantitative and qualitative evaluation. When using the processed images, the results indicated more anatomically correct segmentation in tissue specific areas, and subsequently better classification accuracies were achieved. Visual assessments were conducted in a clinical environment to determine the quality of the processed images and the resultant segmentation. The developed method has shown promising results. It is expected to be useful in early breast cancer detection, risk-stratified screening, and aiding radiologists in the process of decision making prior to surgery and/or treatment. PMID:26498046

  17. The Development of Sun-Tracking System Using Image Processing

    PubMed Central

    Lee, Cheng-Dar; Huang, Hong-Cheng; Yeh, Hong-Yih

    2013-01-01

    This article presents the development of an image-based Sun position sensor and an algorithm for aiming at the Sun precisely using image processing. Four-quadrant light sensors and bar-shadow photo sensors have been used to detect the Sun's position in past years; however, neither can maintain high accuracy under low-irradiation conditions. An image-based Sun position sensor with image processing can address this drawback. To verify the performance of the Sun-tracking system, comprising the image-based Sun position sensor and a tracking controller with an embedded image processing algorithm, we established a Sun image tracking platform and performed testing in the laboratory. The results show that the proposed Sun-tracking system can overcome unstable tracking in cloudy weather and achieve a tracking accuracy of 0.04°. PMID:23615582
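
    One common way to locate the solar disc in a sensor image is an intensity-weighted centroid; a minimal sketch (a generic illustration; the paper does not state that this is its exact algorithm):

```python
import numpy as np

def sun_centroid(img, thresh):
    """Intensity-weighted centroid of pixels above a threshold,
    giving a sub-pixel estimate of the solar disc position."""
    m = np.where(img > thresh, img, 0).astype(float)
    total = m.sum()
    if total == 0:
        return None   # Sun not visible (e.g., heavy cloud)
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    return (ys * m).sum() / total, (xs * m).sum() / total
```

    The sub-pixel centroid, converted through the camera calibration into an angular offset, is what a tracking controller would drive to zero.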

  18. Using quantum filters to process images of diffuse axonal injury

    NASA Astrophysics Data System (ADS)

    Pineda Osorio, Mateo

    2014-06-01

    Some images corresponding to diffuse axonal injury (DAI) are processed using several quantum filters, such as Hermite, Weibull, and Morse. Diffuse axonal injury is a particular, common, and severe form of traumatic brain injury (TBI). DAI involves global damage to brain tissue at the microscopic scale and causes serious neurologic abnormalities. New imaging techniques provide excellent images showing the cellular damage related to DAI. These images can be processed with quantum filters, which achieve high resolution of dendritic and axonal structures in both normal and pathological states. Using the Laplacian operators from the new quantum filters, excellent edge detectors for neurofiber resolution are obtained. Quantum processing of the DAI images is performed using computer algebra, specifically Maple. The construction of quantum filter plugins, which could be incorporated into the ImageJ software package to make their use simpler for medical personnel, is proposed as a future research line.
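
    As a classical point of comparison for the Laplacian-based edge detection mentioned above, here is a minimal discrete 5-point Laplacian in NumPy (a generic sketch, not the quantum filters of the paper; borders wrap around for simplicity):

```python
import numpy as np

def laplacian_edges(img):
    """Discrete 5-point Laplacian: strong response at intensity
    steps, near zero in flat regions (a basic edge detector)."""
    img = img.astype(float)
    return (np.roll(img, 1, 0) + np.roll(img, -1, 0)
            + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
```

    Thresholding the magnitude of the response (or finding its zero crossings) yields an edge map.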

  19. Airy-Kaup-Kupershmidt filters applied to digital image processing

    NASA Astrophysics Data System (ADS)

    Hoyos Yepes, Laura Cristina

    2015-09-01

    The Kaup-Kupershmidt operator is applied to the two-dimensional solution of the Airy-diffusion equation, and the resulting filter is applied via convolution to image processing. The full procedure is implemented in Maple code with the ImageTools package. Experiments were performed on a wide range of images, including biomedical images generated by magnetic resonance, computerized axial tomography, positron emission tomography, infrared imaging, and photon diffusion. The Airy-Kaup-Kupershmidt filter can be used as a powerful edge detector and as a powerful enhancement tool in image processing. It is expected that the Airy-Kaup-Kupershmidt filter could be incorporated into standard image processing programs such as ImageJ.

  20. Study on the improvement of overall optical image quality via digital image processing

    NASA Astrophysics Data System (ADS)

    Tsai, Cheng-Mu; Fang, Yi Chin; Lin, Yu Chin

    2008-12-01

    This paper studies the improvement of overall optical image quality via digital image processing (DIP) and compares the processed optical image with the unprocessed one. From the standpoint of the optical system, image quality is strongly influenced by chromatic and monochromatic aberrations. However, complete image capture systems, such as cell phones and digital cameras, include not only the basic optical system but also many other components, such as the electronic circuitry and the transducer system, whose quality directly affects the quality of the final picture. Therefore, in this thesis digital image processing technology is utilized to improve the overall image. Experiments show that the system modulation transfer function (MTF) obtained by applying the proposed DIP technology to a comparatively poor optical system can be comparable to, and possibly even superior to, the system MTF of a good optical system.

  1. Whole slide imaging: uses and limitations for surgical pathology and teaching.

    PubMed

    Boyce, B F

    2015-07-01

    Advances in computer and software technology, improvements in the quality of images produced by digital cameras, and the development of robotic devices that can take glass histology slides from a cassette holding many slides and place them in a conventional microscope for electronic scanning have facilitated the development of whole slide imaging (WSI) systems during the past decade. Anatomic pathologists now have opportunities to test the utility of WSI systems for diagnostic, teaching, and research purposes and to determine their limitations. Uses include rendering primary diagnoses from scanned hematoxylin and eosin stained tissues on slides and reviewing frozen section or routine slides from remote locations for interpretation or consultation. WSI can also replace physical storage of glass slides with digital images, store images of slides from outside institutions, support the presentation of slides at clinical or research conferences, aid the teaching of residents and medical students, and store fluorescence images without fading or quenching of the fluorescence signal. Limitations include the high costs of the scanners, maintenance contracts, and IT support; the storage of digital files; and pathologists' lack of familiarity with the technology. Costs are falling as more devices and systems are sold and cloud storage costs drop. Pathologist familiarity with the technology will grow as more institutions purchase WSI systems. The technology holds great promise for the future of anatomic pathology. PMID:25901738

  2. High resolution image processing on low-cost microcomputers

    NASA Technical Reports Server (NTRS)

    Miller, R. L.

    1993-01-01

    Recent advances in microcomputer technology have resulted in systems that rival the speed, storage, and display capabilities of traditionally larger machines. Low-cost microcomputers can provide a powerful environment for image processing. A new software program that offers sophisticated image display and analysis on IBM-based systems is presented. Designed specifically for a microcomputer, this program provides a wide range of functions normally found only on dedicated graphics systems and can therefore provide most students, universities, and research groups with an affordable computer platform for processing digital images. The processing of AVHRR images within this environment is presented as an example.

  3. IPL processing of the Viking orbiter images of Mars

    NASA Technical Reports Server (NTRS)

    Ruiz, R. M.; Elliott, D. A.; Yagi, G. M.; Pomphrey, R. B.; Power, M. A.; Farrell, W., Jr.; Lorre, J. J.; Benton, W. D.; Dewar, R. E.; Cullen, L. E.

    1977-01-01

    The Viking orbiter cameras returned over 9000 images of Mars during the 6-month nominal mission. Digital image processing was required to produce products suitable for quantitative and qualitative scientific interpretation. Processing included the production of surface elevation data using computer stereophotogrammetric techniques, crater classification based on geomorphological characteristics, and the generation of color products using multiple black-and-white images recorded through spectral filters. The Image Processing Laboratory of the Jet Propulsion Laboratory was responsible for the design, development, and application of the software required to produce these 'second-order' products.

  4. Protocols for Image Processing based Underwater Inspection of Infrastructure Elements

    NASA Astrophysics Data System (ADS)

    O'Byrne, Michael; Ghosh, Bidisha; Schoefs, Franck; Pakrashi, Vikram

    2015-07-01

    Image processing can be an important tool for inspecting underwater infrastructure elements such as bridge piers and pile wharves. Underwater inspection often relies on the visual descriptions of divers, who are not necessarily trained in the specifics of structural degradation, so the information may be vague, prone to error, or open to significant variation in interpretation. Underwater vehicles, on the other hand, can be quite expensive for such inspections. Additionally, with significant global encouragement toward the deployment of offshore wind turbines and wave energy devices, the demand for underwater inspection can be expected to increase significantly in the coming years. While the merit of image-processing-based assessment of the condition of underwater structures is understood to a certain degree, no protocol exists for such image-based methods. This paper discusses and describes an image processing protocol for underwater inspection of structures. A stereo imaging method is considered, and protocols are suggested for image storage, imaging, diving, and inspection. A combined underwater imaging protocol is finally presented that can be used in a variety of situations, within a range of image scenes and environmental conditions affecting imaging. An example of detecting marine growth on a structure in Cork Harbour, Ireland, is presented.

  5. Monitoring Car Drivers' Condition Using Image Processing

    NASA Astrophysics Data System (ADS)

    Adachi, Kazumasa; Yamamto, Nozomi; Yamamoto, Osami; Nakano, Tomoaki; Yamamoto, Shin

    We have developed a car driver monitoring system for measuring drivers' consciousness, with the aim of reducing car accidents caused by driver drowsiness. The system consists of three subsystems: an image capturing system with a pulsed infrared CCD camera; a system that detects the blinking waveform from the images using a neural network to extract the face and eye areas; and a system that measures the driver's consciousness by analyzing the waveform with fuzzy inference and other techniques. The third subsystem first extracts three factors from the waveform and analyzes them with a statistical method, whereas our previous system used only one factor. Our experiments showed that the three-factor method is more effective for measuring drivers' consciousness than the one-factor method described in our previous paper. Moreover, the method is better suited to fitting the system's parameters to each individual driver.

  6. Teaching Fraunhofer diffraction via experimental and simulated images in the laboratory

    NASA Astrophysics Data System (ADS)

    Peinado, Alba; Vidal, Josep; Escalera, Juan Carlos; Lizana, Angel; Campos, Juan; Yzuel, Maria

    2012-10-01

    Diffraction is an important phenomenon introduced to Physics university students in a Fundamentals of Optics course. In addition, the Physics Degree syllabus of the Universitat Autònoma de Barcelona includes an elective subject in Applied Optics, in which diverse diffraction concepts are discussed in depth from different points of view: theory, laboratory experiments, and computing exercises. In this work, we have focused on the process of teaching Fraunhofer diffraction through laboratory training. Our approach involves students working in small groups. They visualize and acquire some important diffraction patterns with a CCD camera, such as those produced by a slit, a circular aperture, or a grating. First, each group calibrates the CCD camera, that is, they obtain the relation between distances in the diffraction plane in millimeters and on the computer screen in pixels. Afterwards, they measure the significant distances in the diffraction patterns and, using the appropriate diffraction formalism, calculate the size of the analyzed apertures. Concomitantly, students grasp the convolution theorem in the Fourier domain by analyzing the diffraction of 2-D gratings of elemental apertures. Finally, the learners use specific software to simulate diffraction patterns of different apertures. They can control several parameters: the shape, size, and number of apertures; 1-D or 2-D gratings; wavelength; focal length; and pixel size. Therefore, the program allows them to reproduce the images obtained experimentally and to generate others by changing certain parameters. This software has been created in our research group and is freely distributed to the students to help their learning of diffraction. We have observed that these hands-on experiments help students consolidate their theoretical knowledge of diffraction in a pedagogical and stimulating learning process.
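
    The slit-width calculation the students perform follows from the standard Fraunhofer result that adjacent single-slit minima on the focal plane are spaced Δy = λf/a; a minimal sketch (the function name is my own):

```python
def slit_width(wavelength_m, focal_m, minima_spacing_m):
    """Single-slit Fraunhofer diffraction: adjacent minima are
    spaced dy = lambda * f / a, so the slit width is
    a = lambda * f / dy."""
    return wavelength_m * focal_m / minima_spacing_m
```

    With the CCD calibration, the spacing is measured in pixels and converted to meters before applying the formula.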

  7. Image Processing In Laser-Beam-Steering Subsystem

    NASA Technical Reports Server (NTRS)

    Lesh, James R.; Ansari, Homayoon; Chen, Chien-Chung; Russell, Donald W.

    1996-01-01

    Conceptual design of image-processing circuitry developed for proposed tracking apparatus described in "Beam-Steering Subsystem For Laser Communication" (NPO-19069). In proposed system, desired frame rate achieved by "windowed" readout scheme in which only pixels containing and surrounding two spots read out and others skipped without being read. Image data processed rapidly and efficiently to achieve high frequency response.

  8. Resolution modification and context based image processing for retinal prosthesis

    NASA Astrophysics Data System (ADS)

    Naghdy, Golshah; Beston, Chris; Seo, Jong-Mo; Chung, Hum

    2006-08-01

    This paper focuses on simulating image processing algorithms and exploring issues related to reducing high resolution images to 25 x 25 pixels suitable for the retinal implant. Field of view (FoV) is explored, and a novel method of virtual eye movement discussed. Several issues beyond the normal model of human vision are addressed through context based processing.
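The basic resolution-reduction step described above can be sketched as a block average, assuming the sensor resolution divides evenly into the 25 x 25 implant grid (the paper's context-based processing and virtual eye movement are considerably more sophisticated):

```python
import numpy as np

# Block-averaging reduction of a high-resolution frame to a 25 x 25 grid --
# a minimal sketch, not the authors' algorithm.
def downsample(frame: np.ndarray, out_size: int = 25) -> np.ndarray:
    h, w = frame.shape
    assert h % out_size == 0 and w % out_size == 0, "sketch assumes divisible sizes"
    bh, bw = h // out_size, w // out_size
    return frame.reshape(out_size, bh, out_size, bw).mean(axis=(1, 3))

frame = np.arange(500 * 500, dtype=float).reshape(500, 500)
low = downsample(frame)
print(low.shape)  # (25, 25)
```

Block averaging preserves the mean brightness of each region, which is one reasonable starting point before context-based weighting of salient features.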

  9. Teaching with Pensive Images: Rethinking Curiosity in Paulo Freire's "Pedagogy of the Oppressed"

    ERIC Educational Resources Information Center

    Lewis, Tyson E.

    2012-01-01

    Often when the author is teaching philosophy of education, his students begin the process of inquiry by prefacing their questions with something along the lines of "I'm just curious, but ...." Why do teachers and students feel compelled to express their curiosity as "just" curiosity? Perhaps there is a slight embarrassment in proclaiming their…

  11. High-performance image processing on the desktop

    NASA Astrophysics Data System (ADS)

    Jordan, Stephen D.

    1996-04-01

    The suitability of computers to the task of medical image visualization for the purposes of primary diagnosis and treatment planning depends on three factors: speed, image quality, and price. To be widely accepted the technology must increase the efficiency of the diagnostic and planning processes. This requires processing and displaying medical images of various modalities in real-time, with accuracy and clarity, on an affordable system. Our approach to meeting this challenge began with market research to understand customer image processing needs. These needs were translated into system-level requirements, which in turn were used to determine which image processing functions should be implemented in hardware. The result is a computer architecture for 2D image processing that is both high-speed and cost-effective. The architectural solution is based on the high-performance PA-RISC workstation with an HCRX graphics accelerator. The image processing enhancements are incorporated into the image visualization accelerator (IVX) which attaches to the HCRX graphics subsystem. The IVX includes a custom VLSI chip which has a programmable convolver, a window/level mapper, and an interpolator supporting nearest-neighbor, bi-linear, and bi-cubic modes. This combination of features can be used to enable simultaneous convolution, pan, zoom, rotate, and window/level control into 1 k by 1 k by 16-bit medical images at 40 frames/second.
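The window/level stage of such a pipeline is easy to illustrate in software. This is a generic sketch of the standard operation on 16-bit data, with assumed parameter names; it is not HP's hardware implementation:

```python
import numpy as np

# Window/level mapping of a 16-bit image to 8-bit display values -- a sketch
# of one function the IVX implements in hardware (parameter names assumed).
def window_level(img16: np.ndarray, level: int, window: int) -> np.ndarray:
    lo = level - window / 2.0
    out = (img16.astype(float) - lo) / window * 255.0
    return np.clip(out, 0, 255).astype(np.uint8)

img = np.array([[0, 1000], [2000, 4000]], dtype=np.uint16)
out = window_level(img, level=2000, window=2000)
print(out.tolist())  # [[0, 0], [127, 255]]
```

In hardware this is a lookup-table pass per pixel, which is why it can run concurrently with convolution and interpolation at 40 frames/second.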

  12. Mimos: a description framework for exchanging medical image processing results.

    PubMed

    Aubry, F; Todd-Pokropek, A

    2001-01-01

    Image processing plays an increasingly important role in the use of medical images, both for routine and for research purposes, due to the growing interest in functional studies (PET, MR, etc.). Unfortunately, there exist nearly as many formats for data and results coding as there are image processing procedures. While DICOM presently supports a kind of structured reporting of image studies, it does not take into account the semantics of the image handling domain. This can impede the exchange and the interpretation of processing results. In order to facilitate the use of image processing results, we have designed a framework for representing them. This framework, whose principle is called an "ontology" in the literature, extends the formalism which we used in our previous work on image databases. It permits a systematic representation of the entities and information involved in the processing: not only input data, command parameters and output data, but also software and hardware descriptions, and the relationships between these different parameters. Consequently, this framework allows the building of standardized documents which can be exchanged amongst various users. As the framework is based on a formal grammar, documents can be encoded using XML. They are thus compatible with Internet/intranet technology. In this paper, the main characteristics of the framework are presented and illustrated. We also discuss implementation issues relating to the integration of documents and correlated images, handling these with a standard Web browser. PMID:11604861
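A document produced under such a framework might look like the following sketch, built here with Python's xml.etree. All element and attribute names are illustrative assumptions, not the actual Mimos grammar, which is defined in the paper:

```python
import xml.etree.ElementTree as ET

# A hypothetical processing-result document: input data, command parameters,
# output data, and a software description, encoded as XML. The schema here is
# invented for illustration only.
result = ET.Element("processing_result")
ET.SubElement(result, "input", {"modality": "PET", "uid": "1.2.3"})
ET.SubElement(result, "command", {"name": "smooth", "kernel_mm": "6"})
ET.SubElement(result, "output", {"uid": "1.2.4"})
ET.SubElement(result, "software", {"name": "example-tool", "version": "0.1"})

doc = ET.tostring(result, encoding="unicode")
print(doc.startswith("<processing_result>"))  # True
```

Because the document is plain XML, it can be validated against the framework's grammar and rendered in any Web browser, which is the interoperability point the abstract makes.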

  13. Implementing full backtracking facilities for Prolog-based image processing

    NASA Astrophysics Data System (ADS)

    Jones, Andrew C.; Batchelor, Bruce G.

    1995-10-01

    PIP (Prolog image processing) is a system currently under development at UWCC, designed to support interactive image processing using the Prolog programming language. In this paper we discuss Prolog-based image processing paradigms and present a meta-interpreter developed by the first author, designed to support an approach to image processing in PIP that is more in the spirit of Prolog than was previously possible. This meta-interpreter allows backtracking over image processing operations in a manner transparent to the programmer. Currently, for space-efficiency, the programmer needs to indicate the operations over which the system may backtrack in a program; however, a number of extensions to the present work are mentioned at the end of this paper, including a more intelligent approach intended to obviate this need, which the present meta-interpreter will provide a basis for investigating in the future.

  14. Advanced technology development for image gathering, coding, and processing

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.

    1990-01-01

    Three overlapping areas of research activities are presented: (1) Information theory and optimal filtering are extended to visual information acquisition and processing. The goal is to provide a comprehensive methodology for quantitatively assessing the end-to-end performance of image gathering, coding, and processing. (2) Focal-plane processing techniques and technology are developed to combine effectively image gathering with coding. The emphasis is on low-level vision processing akin to the retinal processing in human vision. (3) A breadboard adaptive image-coding system is being assembled. This system will be used to develop and evaluate a number of advanced image-coding technologies and techniques as well as research the concept of adaptive image coding.

  15. Review of biomedical signal and image processing

    PubMed Central

    2013-01-01

    This article is a review of the book “Biomedical Signal and Image Processing” by Kayvan Najarian and Robert Splinter, which is published by CRC Press, Taylor & Francis Group. It will evaluate the contents of the book and discuss its suitability as a textbook, while mentioning highlights of the book, and providing comparison with other textbooks.

  16. Assessment of vessel diameters for MR brain angiography processed images

    NASA Astrophysics Data System (ADS)

    Moraru, Luminita; Obreja, Cristian-Dragos; Moldovanu, Simona

    2015-12-01

    The motivation was to develop an assessment method to measure (in)visible differences between the original and the processed images in MR brain angiography, as a means of evaluating the status of vessel segments (i.e. the existence of occlusions or intracerebral vessels damaged by aneurysms). Generally, the image quality is limited, so we improve the performance of the evaluation through digital image processing. The goal is to determine the best processing method that allows an accurate assessment of patients with cerebrovascular diseases. A total of 10 MR brain angiography images were processed by the following techniques: histogram equalization, Wiener filter, linear contrast adjustment, contrast-limited adaptive histogram equalization, bias correction and Marr-Hildreth filter. Each original image and its processed images were analyzed using a stacking procedure so that the same vessel and its corresponding diameter were measured. Original and processed images were evaluated by measuring the vessel diameter (in pixels) along an established direction and at a precise anatomic location. The vessel diameter was calculated using an ImageJ plugin. Mean diameter measurements differ significantly across the same segment and for different processing techniques. The best results are provided by the Wiener filter and linear contrast adjustment methods and the worst by the Marr-Hildreth filter.
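Of the techniques compared, global histogram equalization is the simplest to sketch. The following is a generic numpy implementation of the standard algorithm, not the authors' pipeline (which also includes stacking and diameter measurement):

```python
import numpy as np

# Global histogram equalization of an 8-bit image -- a generic sketch.
# Note: assumes the image is not perfectly uniform (non-zero contrast).
def equalize(img: np.ndarray) -> np.ndarray:
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(float)
    cdf_min = cdf[cdf > 0].min()                      # first non-empty bin
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255)
    return np.clip(lut, 0, 255).astype(np.uint8)[img]

img = np.array([[50, 50], [51, 200]], dtype=np.uint8)
out = equalize(img)
print(out.tolist())  # [[0, 0], [128, 255]] -- full dynamic range
```

Stretching low-contrast vessels over the full dynamic range is what makes diameter measurement on the processed image more repeatable.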

  17. Development of the Instructional Model by Integrating Information Literacy in the Class Learning and Teaching Processes

    ERIC Educational Resources Information Center

    Maitaouthong, Therdsak; Tuamsuk, Kulthida; Techamanee, Yupin

    2011-01-01

    This study was aimed at developing an instructional model by integrating information literacy in the instructional process of general education courses at an undergraduate level. The research query, "What is the teaching methodology that integrates information literacy in the instructional process of general education courses at an undergraduate…

  18. The Reflections of Layered Curriculum to Learning-Teaching Process in Social Studies Course

    ERIC Educational Resources Information Center

    Gun, Emine Seda

    2013-01-01

    The purpose of this research is to set the effect of Layered Curriculum on learning-teaching processes. The research was conducted on 2011-2012 educational year. The implementation process, which lasted for 4 weeks, was carried out with the theme named "The World of All of Us" in Social Studies lesson at 5th grade. Observation and interview

  19. The Arrangement of Students' Extracurricular Piano Practice Process with the Asynchronous Distance Piano Teaching Method

    ERIC Educational Resources Information Center

    Karahan, Ahmet Suat

    2015-01-01

    That the students do their extracurricular piano practices in the direction of the teacher's warnings is a key factor in achieving success in the teaching-learning process. However, the teachers cannot adequately control the students' extracurricular practices in the process of traditional piano education. Under the influence of this lack of…

  20. [Present-day organization of the teaching process at the Department of Propedeutics of Internal Diseases].

    PubMed

    El'garov, A A; Kalmykova, M A; El'garov, M A; Kardangusheva, A M

    2014-01-01

    The experience with organization of the teaching process at the Department of Propedeutics of Internal Diseases is reported. The role of the rating system for knowledge and skill control is discussed along with the possibilities of its application for the improvement of training students in compliance with the Bologna process principles and their integration into the education system. PMID:25790701

  1. The Design Studio as Teaching/Learning Medium--A Process-Based Approach

    ERIC Educational Resources Information Center

    Ozturk, Maya N.; Turkkan, Elif E.

    2006-01-01

    This article discusses a design studio teaching experience exploring the design process itself as a methodological tool. We consider the structure of important phases of the process that contain different levels of design thinking: conception, function and practical knowledge as well as the transitions from inception to construction. We show how…

  2. The Practice of Information Processing Model in the Teaching of Cognitive Strategies

    ERIC Educational Resources Information Center

    Ozel, Ali

    2009-01-01

    In this research, the aim is to find out how the teaching of learning strategies differs depending on the time that first-grade primary school teachers spend forming an information-processing skeleton in the student. This process, comprising the efforts of 260 teachers in this direction, consists of whether the adequate…

  3. The Guidance Role of the Instructor in the Teaching and Learning Process

    ERIC Educational Resources Information Center

    Alutu, Azuka N. G.

    2006-01-01

    This study examines the guidance role of the instructor in the teaching and learning process. The paper examines the need for learners to be consciously guided by their teachers, as this facilitates and complements the learning process. Gagne's theory of conditions of learning, phases of learning and model for design of instruction was adopted to…

  5. Applying Experiential Learning in College Teaching and Assessment: A Process Model.

    ERIC Educational Resources Information Center

    Jackson, Lewis, Ed.; And Others

    This manual presents a process model in which university teaching and assessment processes are embedded within a broader view of the human learning experience and the outcomes that are required for professional student growth. The model conceptualizes the university's role in the lives of life-long learners and provides a framework for rethinking…

  7. Optical Processing of Speckle Images with Bacteriorhodopsin for Pattern Recognition

    NASA Technical Reports Server (NTRS)

    Downie, John D.; Tucker, Deanne (Technical Monitor)

    1994-01-01

    Logarithmic processing of images with multiplicative noise characteristics can be utilized to transform the image into one with an additive noise distribution. This simplifies subsequent image processing steps for applications such as image restoration or correlation for pattern recognition. One particularly common form of multiplicative noise is speckle, for which the logarithmic operation not only produces additive noise, but also makes it of constant variance (signal-independent). We examine the optical transmission properties of some bacteriorhodopsin films here and find them well suited to implement such a pointwise logarithmic transformation optically in a parallel fashion. We present experimental results of the optical conversion of speckle images into transformed images with additive, signal-independent noise statistics using the real-time photochromic properties of bacteriorhodopsin. We provide an example of improved correlation performance in terms of correlation peak signal-to-noise for such a transformed speckle image.
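The variance-stabilizing effect of the logarithm on speckle can be checked numerically. The sketch below models fully developed speckle as exponentially distributed multiplicative noise, a common assumption in the speckle literature; the paper implements the pointwise logarithm optically with bacteriorhodopsin rather than in software:

```python
import numpy as np

# Log-transforming multiplicative speckle into additive, signal-independent
# noise -- a numerical sketch of the principle the film implements optically.
rng = np.random.default_rng(0)

signal_lo = np.full(100_000, 10.0)       # dark region
signal_hi = np.full(100_000, 100.0)      # bright region
speckle = rng.exponential(1.0, 100_000)  # fully developed speckle model (assumed)

# Multiplicative noise: variance scales with the square of the signal.
var_lo = np.var(signal_lo * speckle)
var_hi = np.var(signal_hi * speckle)

# After the log, the noise variance is the same in both regions.
lvar_lo = np.var(np.log(signal_lo * speckle))
lvar_hi = np.var(np.log(signal_hi * speckle))

print(var_hi / var_lo)    # ~100x before the transform
print(lvar_hi / lvar_lo)  # ~1 after the transform
```

Constant-variance additive noise is exactly what linear restoration filters and correlators are designed for, which is why the transform improves correlation peak signal-to-noise.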

  8. Graphical user interface for image acquisition and processing

    DOEpatents

    Goldberg, Kenneth A.

    2002-01-01

    An event-driven GUI-based image acquisition interface for the IDL programming environment, designed for CCD camera control and image acquisition directly into the IDL environment, where image manipulation and data analysis can be performed, together with a toolbox of real-time analysis applications. Running the image acquisition hardware directly from IDL removes the necessity of first saving images in one program and then importing the data into IDL for analysis in a second step. Bringing the data directly into IDL creates an opportunity for the implementation of IDL image processing and display functions in real-time. The program allows control over the available charge-coupled device (CCD) detector parameters, data acquisition, file saving and loading, and image manipulation and processing, all from within IDL. The program is built using IDL's widget libraries to control the on-screen display and user interface.

  9. Image Processing Requirements and Distributed Networks in a Digital Imaging Environment

    PubMed Central

    Maguire, G.Q.; Zeleznik, M.P.; Horii, S.C.; Schimpf, J.H.; Hitchner, L.E.; Noz, M.E.; Baxter, B.S.

    1982-01-01

    This paper will discuss a unified digital image distribution and processing system linking various digital image sources through a broadband local area network and a common image format. Ultimately, the system allows for viewing and processing of all images produced within the complex, and for viewing stations at any number of convenient locations. The physical handling of storage media at image sources can be totally eliminated. Complete archiving, file maintenance and large scale processing capabilities are provided by a central file server. This paper presents a concrete proposal for an initial system which has a central archiving facility for permanently storing and selectively viewing computed tomography (CT), nuclear medicine (NM) and ultrasound (US) images. The system proposed can then be slowly expanded to include all the digital images produced by the radiology department, and ultimately to include all the images by digitizing those produced in an analog fashion.

  10. The Khoros software development environment for image and signal processing.

    PubMed

    Konstantinides, K; Rasure, J R

    1994-01-01

    Data flow visual language systems allow users to graphically create a block diagram of their applications and interactively control input, output, and system variables. Khoros is an integrated software development environment for information processing and visualization. It is particularly attractive for image processing because of its rich collection of tools for image and digital signal processing. This paper presents a general overview of Khoros with emphasis on its image processing and DSP tools. Various examples are presented and the future direction of Khoros is discussed. PMID:18291923

  11. Digital processing of stereoscopic image pairs.

    NASA Technical Reports Server (NTRS)

    Levine, M. D.

    1973-01-01

    The problem under consideration is concerned with scene analysis during robot navigation on the surface of Mars. In this mode, the world model of the robot must be continuously updated to include sightings of new obstacles and scientific samples. In order to describe the content of a particular scene, it is first necessary to segment it into known objects. One technique for accomplishing this segmentation is by analyzing the pair of images produced by the stereoscopic cameras mounted on the robot. A heuristic method is presented for determining the range for each point in the two-dimensional scene under consideration. The method is conceptually based on a comparison of corresponding points in the left and right images of the stereo pair. However, various heuristics which are adaptive in nature are used to make the algorithm both efficient and accurate. Examples are given of the use of this so-called range picture for the purpose of scene segmentation.
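The core idea, comparing corresponding points in the left and right images, can be sketched as a brute-force SSD block matcher along a single scanline. The paper's adaptive heuristics for efficiency and accuracy are omitted; this is only the naive baseline:

```python
import numpy as np

# Sum-of-squared-differences block matching on one scanline -- a minimal
# sketch of stereo correspondence, not the paper's heuristic algorithm.
def disparity(left: np.ndarray, right: np.ndarray, x: int, win: int = 3,
              max_d: int = 10) -> int:
    """Best horizontal shift of a window starting at column x."""
    patch = left[x:x + win]
    costs = [np.sum((patch - right[x - d:x - d + win]) ** 2)
             for d in range(min(max_d, x) + 1)]
    return int(np.argmin(costs))

scene = np.array([0, 0, 9, 5, 7, 0, 0, 0, 0, 0, 0, 0], dtype=float)
left = scene
right = np.roll(scene, -2)   # the feature sits 2 pixels to the left in the right view
print(disparity(left, right, x=2))  # 2
```

Disparity is inversely proportional to range, so a disparity map over the whole image is the "range picture" the abstract uses for scene segmentation.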

  12. Image processing system to analyze droplet distributions in sprays

    NASA Technical Reports Server (NTRS)

    Bertollini, Gary P.; Oberdier, Larry M.; Lee, Yong H.

    1987-01-01

    An image processing system was developed which automatically analyzes the size distributions in fuel spray video images. Images are generated by using pulsed laser light to freeze droplet motion in the spray sample volume under study. This coherent illumination source produces images which contain droplet diffraction patterns representing the droplets degree of focus. The analysis is performed by extracting feature data describing droplet diffraction patterns in the images. This allows the system to select droplets from image anomalies and measure only those droplets considered in focus. Unique features of the system are the totally automated analysis and droplet feature measurement from the grayscale image. The feature extraction and image restoration algorithms used in the system are described. Preliminary performance data is also given for two experiments. One experiment gives a comparison between a synthesized distribution measured manually and automatically. The second experiment compares a real spray distribution measured using current methods against the automatic system.

  13. Processing of polarametric SAR images. Final report

    SciTech Connect

    Warrick, A.L.; Delaney, P.A.

    1995-09-01

    The objective of this work was to develop a systematic method of combining multifrequency polarized SAR images. It is shown that the traditional methods of correlation, hard targets, and template matching fail to produce acceptable results. Hence, a new algorithm was developed and tested. The new approach combines the three traditional methods and an interpolation method. An example is shown that demonstrates the new algorithm's performance. The results are summarized and suggestions for future research are presented.

  14. Processing ISS Images of Titan's Surface

    NASA Technical Reports Server (NTRS)

    Perry, Jason; McEwen, Alfred; Fussner, Stephanie; Turtle, Elizabeth; West, Robert; Porco, Carolyn; Knowles, Ben; Dawson, Doug

    2005-01-01

    One of the primary goals of the Cassini-Huygens mission, in orbit around Saturn since July 2004, is to understand the surface and atmosphere of Titan. Surface investigations are primarily accomplished with RADAR, the Visual and Infrared Mapping Spectrometer (VIMS), and the Imaging Science Subsystem (ISS) [1]. The latter two use methane "windows", regions in Titan's reflectance spectrum where its atmosphere is most transparent, to observe the surface. For VIMS, this produces clear views of the surface near 2 and 5 microns [2]. ISS uses a narrow continuum band filter (CB3) at 938 nanometers. While these methane windows provide our best views of the surface, the images produced are not as crisp as ISS images of satellites like Dione and Iapetus [3] due to the atmosphere. Given a reasonable estimate of contrast (approx. 30%), the apparent resolution of features is approximately 5 pixels due to the effects of the atmosphere and the Modulation Transfer Function of the camera [1,4]. The atmospheric haze also reduces contrast, especially with increasing emission angles [5].

  15. Image processing of underwater multispectral imagery

    USGS Publications Warehouse

    Zawada, D.G.

    2003-01-01

    Capturing in situ fluorescence images of marine organisms presents many technical challenges. The effects of the medium, as well as the particles and organisms within it, are intermixed with the desired signal. Methods for extracting and preparing the imagery for analysis are discussed in reference to a novel underwater imaging system called the low-light-level underwater multispectral imaging system (LUMIS). The instrument supports both uni- and multispectral collections, each of which is discussed in the context of an experimental application. In unispectral mode, LUMIS was used to investigate the spatial distribution of phytoplankton. A thin sheet of laser light (532 nm) induced chlorophyll fluorescence in the phytoplankton, which was recorded by LUMIS. Inhomogeneities in the light sheet led to the development of a beam-pattern-correction algorithm. Separating individual phytoplankton cells from a weak background fluorescence field required a two-step procedure consisting of edge detection followed by a series of binary morphological operations. In multispectral mode, LUMIS was used to investigate the bio-assay potential of fluorescent pigments in corals. Problems with the commercial optical-splitting device produced nonlinear distortions in the imagery. A tessellation algorithm, including an automated tie-point-selection procedure, was developed to correct the distortions. Only pixels corresponding to coral polyps were of interest for further analysis. Extraction of these pixels was performed by a dynamic global-thresholding algorithm.

  16. Diagnosis of skin cancer using image processing

    NASA Astrophysics Data System (ADS)

    Guerra-Rosas, Esperanza; Álvarez-Borrego, Josué; Coronel-Beltrán, Ángel

    2014-10-01

    In this paper a methodology for classifying skin cancer in images of dermatologic spots based on spectral analysis using the K-law Fourier non-linear technique is presented. The image is segmented and binarized to build the function that contains the interest area. The image is divided into its respective RGB channels to obtain the spectral properties of each channel. The green channel contains more information and therefore this channel is always chosen. This information is multiplied point by point by a binary mask, and to this result a Fourier transform written in nonlinear form is applied. If the real part of this spectrum is positive, the spectral density takes unit values; otherwise it is zero. Finally, the ratio of the sum of the unit values of the spectral density to the sum of the values of the binary mask is calculated. This ratio is called the spectral index. When the calculated value is in the spectral index range, three types of cancer can be detected. Values found outside this range indicate benign lesions.
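The spectral index can be sketched directly from the recipe in the abstract. The K-law non-linearity and the clinically calibrated index ranges are simplified away here, and the synthetic image is purely illustrative:

```python
import numpy as np

# Spectral index: FFT of the masked green channel, unit value wherever the
# real part of the spectrum is positive, divided by the mask area.
# A structural sketch only -- thresholds and the K-law step are omitted.
def spectral_index(green: np.ndarray, mask: np.ndarray) -> float:
    spectrum = np.fft.fft2(green * mask)
    density = (spectrum.real > 0).astype(float)   # unit values where Re > 0
    return density.sum() / mask.sum()

green = np.zeros((64, 64))
green[20:40, 20:40] = 0.5       # synthetic "spot" in the green channel
mask = (green > 0).astype(float)
idx = spectral_index(green, mask)
print(idx)  # a dimensionless ratio; diagnostic ranges come from the study
```

The index is a single scalar per lesion, which is what makes the classification step a simple range test.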

  17. Optical Signal Processing: Poisson Image Restoration and Shearing Interferometry

    NASA Technical Reports Server (NTRS)

    Hong, Yie-Ming

    1973-01-01

    Optical signal processing can be performed in either digital or analog systems. Digital computers and coherent optical systems are discussed as they are used in optical signal processing. Topics include: image restoration; phase-object visualization; image contrast reversal; optical computation; image multiplexing; and fabrication of spatial filters. Digital optical data processing deals with restoration of images degraded by signal-dependent noise. When the input data of an image restoration system are the numbers of photoelectrons received from various areas of a photosensitive surface, the data are Poisson distributed with mean values proportional to the illuminance of the incoherently radiating object and background light. Optical signal processing using coherent optical systems is also discussed. Following a brief review of the pertinent details of Ronchi's diffraction grating interferometer, moire effect, carrier-frequency photography, and achromatic holography, two new shearing interferometers based on them are presented. Both interferometers can produce variable shear.

  18. [Image processing method based on prime number factor layer].

    PubMed

    Fan, Yifang; Yuan, Zhirun

    2004-10-01

    In sport games, since human body movement data are mainly drawn from the sports field, with the hues and even interruptions of the commercial environment, some difficulties must be surmounted in order to analyze the images. It is obviously not enough just to use the method of grey-image treatment. We have applied the characteristics of the prime number function to human body movement images and thus introduce a new method of image processing in this article. When dealing with certain moving images, we can get a better result. PMID:15553856

  19. Data management in pattern recognition and image processing systems

    NASA Technical Reports Server (NTRS)

    Zobrist, A. L.; Bryant, N. A.

    1976-01-01

    Data management considerations are important to any system which handles large volumes of data or where the manipulation of data is technically sophisticated. A particular problem is the introduction of image-formatted files into the mainstream of data processing application. This report describes a comprehensive system for the manipulation of image, tabular, and graphical data sets which involve conversions between the various data types. A key characteristic is the use of image processing technology to accomplish data management tasks. Because of this, the term 'image-based information system' has been adopted.

  20. Adaptable adaptive optics and image processing at Fraunhofer IOSB

    NASA Astrophysics Data System (ADS)

    Gladysz, S.; Marin Palomo, P.; Zepp, A.; Stein, K.

    2015-04-01

    Research activities in the Adaptive Optics Group at the Fraunhofer Institute of Optronics, System Technologies and Image Exploitation (IOSB) in Ettlingen, Germany, revolve around imaging and laser propagation through strong turbulence, especially along horizontal paths. We are developing simulations, theoretical models, image processing software and adaptive optics systems. This paper gives an overview of two application areas bound by the same deficiency: both image processing and laser correction systems often require information about average turbulence strength at the time of their operation in order to function properly or to maximize their effectiveness.

  1. Teaching-Learning Processes in Elementary School: A Synoptic View. Studies of Educative Processes, Report No. 9.

    ERIC Educational Resources Information Center

    Harnischfeger, Annegret; Wiley, David E.

    This approach to the study of classroom teaching-learning processes concentrates on pupil time and the various ways in which it is used. The conceptual framework contrasts with most earlier studies that report teacher behavior as the most direct influence on pupil achievement. Two premises form the basis of the framework: (1) The total amount of…

  2. Undergraduate Teaching in Solids Processing and Particle Technology.

    ERIC Educational Resources Information Center

    Chase, George G.; Jacob, Karl

    1998-01-01

    Argues that newly-graduated chemical engineers frequently encounter projects that involve solids processing and find their knowledge of particle technology to be inadequate. Describes a senior undergraduate course on solids processing. (DDR)

  3. Spectral reproduction from scene to hardcopy: II. Image processing

    NASA Astrophysics Data System (ADS)

    Rosen, Mitchell; Imai, Francisco H.; Jiang, Xiaoyun; Ohta, Noboru

    2000-12-01

    Traditional image processing techniques used for 3- and 4-band images are not suited to the many-band character of spectral images. A sparse multi-dimensional lookup table with inter-node interpolation is a typical image processing technique used for applying either a known model or an empirically derived mapping to an image. Such an approach becomes problematic for spectral images because the input dimensionality of the lookup table is proportional to the number of source image bands, and the size of the lookup table is exponentially related to the number of input dimensions. While an RGB or CMY source image would require a 3D lookup table, a 31-band spectral image would need a 31-dimensional lookup table, which would be absurdly large. A novel approach to spectral image processing is explored. This approach combines a low-cost spectral analysis followed by the application of one of a set of low-dimensional lookup tables. The method is computationally feasible and does not make excessive demands on disk space or run-time memory.
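The dimensionality argument is easy to make concrete. Assuming a typical grid of 17 nodes per input axis (a common density for colour LUTs; the paper does not specify one), the node count grows as 17^n:

```python
# Why a 31-band lookup table is infeasible: LUT size is exponential in the
# number of input bands. 17 nodes per axis is an assumed, typical density.
def lut_entries(input_bands: int, nodes_per_axis: int = 17) -> int:
    return nodes_per_axis ** input_bands

print(lut_entries(3))    # RGB: 4913 nodes, easily stored
print(lut_entries(31))   # 31-band spectral: ~1.4e38 nodes, absurdly large
```

Replacing one 31-D table with a cheap spectral classifier that selects among several low-dimensional tables is precisely the trade the paper proposes.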

  4. [Studies on digital watermark embedding intensity against image processing and image deterioration].

    PubMed

    Nishio, Masato; Ando, Yutaka; Tsukamoto, Nobuhiro; Kawashima, Hironao

    2004-04-01

    In order to apply digital watermarking to medical imaging, it is necessary to find a trade-off between the strength of watermark embedding and the deterioration of image quality. In this study, watermarks were embedded in 4 types of modality images to determine the correlation among watermarking strength, robustness against image processing, and image deterioration due to embedding. The results demonstrated that watermarks embedded by the least significant bit insertion method could no longer be detected or recognized after image processing, even when the watermarks were embedded with such strength that they caused image deterioration. On the other hand, watermarks embedded by the Discrete Cosine Transform were clearly detected and recognized even after image processing, regardless of the embedding strength. The maximum level of embedding strength that will not affect diagnosis differed depending on the type of modality. It is expected that embedding the patient information together with the facility information as watermarks will help maintain the patient information, prevent mix-ups of the images, and identify the facility that performed the test. The concurrent use of watermarking less resistant to image processing makes it possible to detect whether any image processing has been performed. PMID:15159668
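    A minimal sketch of the fragile least-significant-bit method discussed here, showing why it breaks under processing (the pixel values and one-bit-per-pixel layout are illustrative assumptions, not the authors' implementation):

```python
# Least-significant-bit (LSB) watermark insertion on a row of 8-bit pixels.
def embed_lsb(pixels, bits):
    """Replace the LSB of each pixel with one watermark bit."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def extract_lsb(pixels):
    return [p & 1 for p in pixels]

pixels = [200, 201, 202, 203]
mark = [1, 0, 1, 1]
stego = embed_lsb(pixels, mark)
print(extract_lsb(stego) == mark)       # True: mark survives untouched image

# Any processing that alters low-order bits (here a trivial brightness
# shift) destroys the mark -- the fragility the study observed:
processed = [p + 1 for p in stego]
print(extract_lsb(processed) == mark)   # False
```

    DCT-domain embedding survives such processing because the mark is spread over frequency coefficients rather than stored in individual low-order bits.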

  5. Image Harvest: an open-source platform for high-throughput plant image processing and analysis.

    PubMed

    Knecht, Avi C; Campbell, Malachy T; Caprez, Adam; Swanson, David R; Walia, Harkamal

    2016-05-01

    High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable to processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is an open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. PMID:27141917

  6. An Image Processing Approach to Linguistic Translation

    NASA Astrophysics Data System (ADS)

    Kubatur, Shruthi; Sreehari, Suhas; Hegde, Rajeshwari

    2011-12-01

    The art of translation is as old as written literature. Developments since the Industrial Revolution have influenced the practice of translation, nurturing schools, professional associations, and standards. In this paper, we propose a method of translating typed Kannada text (taken as an image) into its equivalent English text. The National Instruments (NI) Vision Assistant (version 8.5) has been used for Optical Character Recognition (OCR). We developed a new way of transliteration (which we call NIV transliteration) to simplify the training of characters. We also built a special type of dictionary for the purpose of translation.

  7. Detecting jaundice by using digital image processing

    NASA Astrophysics Data System (ADS)

    Castro-Ramos, J.; Toxqui-Quitl, C.; Villa Manriquez, F.; Orozco-Guillen, E.; Padilla-Vivanco, A.; Sánchez-Escobar, JJ.

    2014-03-01

    When strong jaundice is present, babies or adults must undergo clinical exams such as the "serum bilirubin" test, which can be traumatic for patients. Jaundice often accompanies liver diseases such as hepatitis or liver cancer. In order to avoid additional trauma, we propose detecting jaundice (icterus) in newborns or adults by a painless method. By acquiring digital color images of the palms, soles, and forehead, we analyze RGB attributes and diffuse reflectance spectra as the parameters to characterize patients with or without jaundice, and we correlate those parameters with the bilirubin level. By applying a support vector machine, we distinguish between healthy and sick patients.
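    A hedged sketch of the kind of RGB feature such a method might use: yellowish (icteric) skin raises red and green relative to blue. The feature, the threshold, and the synthetic pixel values below are illustrative assumptions; the paper trains a support vector machine on RGB and spectral data rather than using a fixed rule.

```python
# Toy RGB "yellowness" feature over a skin patch (list of (R, G, B) tuples).
def yellowness(rgb_pixels):
    """Mean of (R+G)/2 - B; higher values suggest a more yellow patch."""
    return sum((r + g) / 2 - b for r, g, b in rgb_pixels) / len(rgb_pixels)

def flag_jaundice(rgb_pixels, threshold=40.0):
    """Hypothetical decision rule standing in for the paper's trained SVM."""
    return yellowness(rgb_pixels) > threshold

healthy_patch = [(220, 180, 160)] * 4   # pinkish skin (synthetic values)
icteric_patch = [(230, 200, 120)] * 4   # yellowish skin (synthetic values)
print(flag_jaundice(healthy_patch))     # False: (200 - 160) = 40, not > 40
print(flag_jaundice(icteric_patch))     # True: (215 - 120) = 95
```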

  8. Processing, analysis, recognition, and automatic understanding of medical images

    NASA Astrophysics Data System (ADS)

    Tadeusiewicz, Ryszard; Ogiela, Marek R.

    2004-07-01

    This paper presents some new ideas introducing automatic understanding of the semantic content of medical images. The idea under consideration can be seen as the next step on a path that starts with capturing images in digital form as two-dimensional data structures, continues through image processing as a tool for enhancing image visibility and readability, applies image analysis algorithms for extracting selected features of images (or parts of images, e.g. objects), and ends with algorithms devoted to image classification and recognition. We try to explain why all the procedures mentioned above cannot give full satisfaction in many important medical problems, where we need to understand the semantic sense of an image, not merely describe it in terms of selected features and/or classes. The general idea of automatic image understanding is presented, along with remarks on successful applications of such ideas for increasing the potential and performance of computer vision systems dedicated to advanced medical image analysis. This is achieved by applying a linguistic description of the picture's merit content. We then use new AI methods to undertake the automatic understanding of image semantics in intelligent medical information systems. Successfully obtaining the crucial semantic content of a medical image may contribute considerably to the creation of new intelligent multimedia cognitive medical systems. Thanks to the new idea of cognitive resonance between the stream of data extracted from the image using linguistic methods and the expectations taken from the representation of medical knowledge, it is possible to understand the merit content of an image even if its form is very different from any known pattern.

  9. Knowledge-based aerial image understanding systems and expert systems for image processing

    SciTech Connect

    Matsuyama, T.

    1987-05-01

    This paper discusses the roles of artificial intelligence in the automatic interpretation of remotely sensed imagery. The authors first discuss several image understanding systems for analyzing complex aerial photographs. The discussion is mainly concerned with knowledge representation and control structure in the aerial image understanding systems: a blackboard model for integrating diverse object detection modules, a symbolic model representation for three-dimensional object recognition, and the integration of bottom-up and top-down analyses. Then, a model of expert systems for image processing is introduced, which discusses which combinations of image processing operators are effective for analyzing an image.

  10. Evaluation of clinical image processing algorithms used in digital mammography.

    PubMed

    Zanca, Federica; Jacobs, Jurgen; Van Ongeval, Chantal; Claus, Filip; Celis, Valerie; Geniets, Catherine; Provost, Veerle; Pauwels, Herman; Marchal, Guy; Bosmans, Hilde

    2009-03-01

    Screening is the only proven approach to reduce the mortality of breast cancer, but significant numbers of breast cancers remain undetected even when all quality assurance guidelines are implemented. With the increasing adoption of digital mammography systems, image processing may be a key factor in the imaging chain. Although to our knowledge statistically significant effects of manufacturer-recommended image processing algorithms have not previously been demonstrated, the subjective experience of our radiologists, that the apparent image quality can vary considerably between different algorithms, motivated this study. This article addresses the impact of five such algorithms on the detection of clusters of microcalcifications. A database of unprocessed (raw) images of 200 normal digital mammograms, acquired with the Siemens Novation DR, was collected retrospectively. Realistic simulated microcalcification clusters were inserted in half of the unprocessed images. All unprocessed images were subsequently processed with five manufacturer-recommended image processing algorithms (Agfa Musica 1, IMS Raffaello Mammo 1.2, Sectra Mamea AB Sigmoid, Siemens OPVIEW v2, and Siemens OPVIEW v1). Four breast imaging radiologists were asked to locate and score the clusters in each image on a five-point rating scale. The free-response data were analyzed by the jackknife free-response receiver operating characteristic (JAFROC) method and, for comparison, also with the receiver operating characteristic (ROC) method. JAFROC analysis revealed highly significant differences between the image processing algorithms (F = 8.51, p < 0.0001), suggesting that image processing strongly impacts the detectability of clusters. Siemens OPVIEW2 and Siemens OPVIEW1 yielded the highest and lowest performances, respectively. ROC analysis of the data also revealed significant differences between the algorithms, but at lower significance (F = 3.47, p = 0.0305) than JAFROC. Both statistical analysis methods revealed that the same six pairs of modalities were significantly different, but the JAFROC confidence intervals were about 32% smaller than the ROC confidence intervals. This study shows that image processing has a significant impact on the detection of microcalcifications in digital mammograms. Objective measurements, such as those described here, should be used by manufacturers to select the optimal image processing algorithm. PMID:19378737

  11. Ground control requirements for precision processing of ERTS images

    USGS Publications Warehouse

    Burger, Thomas C.

    1972-01-01

    When the first Earth Resources Technology Satellite (ERTS-A) flies in 1972, NASA expects to receive and bulk-process 9,000 images a week. From this deluge of images, a few will be selected for precision processing; that is, about 5 percent will be further treated to improve the geometry of the scene, both in the relative and absolute sense. Control points are required for this processing. This paper describes the control requirements for relating ERTS images to a reference surface of the earth. Enough background on the ERTS-A satellite is included to make the requirements meaningful to the user.

  12. Land image data processing requirements for the EOS era

    NASA Technical Reports Server (NTRS)

    Wharton, Stephen W.; Newcomer, Jeffrey A.

    1989-01-01

    Requirements are proposed for a hybrid approach to image analysis that combines the functionality of a general-purpose image processing system with the knowledge representation and manipulation capabilities associated with expert systems to improve the productivity of scientists in extracting information from remotely sensed image data. The overall functional objectives of the proposed system are to: (1) reduce the level of human interaction required on a scene-by-scene basis to perform repetitive image processing tasks; (2) allow the user to experiment with ad hoc rules and procedures for the extraction, description, and identification of the features of interest; and (3) facilitate the derivation, application, and dissemination of expert knowledge for target recognition whose scope of application is not necessarily limited to the image(s) from which it was derived.

  13. Image pre-processing for optimizing automated photogrammetry performances

    NASA Astrophysics Data System (ADS)

    Guidi, G.; Gonizzi, S.; Micoli, L. L.

    2014-05-01

    The purpose of this paper is to analyze how optical pre-processing with polarizing filters and digital pre-processing with HDR imaging may improve the automated 3D modeling pipeline based on SFM and image matching, with special emphasis on optically non-cooperative surfaces of shiny or dark materials. Because of the automatic detection of homologous points, the presence of highlights due to shiny materials, or of nearly uniform dark patches produced by low-reflectance materials, may produce erroneous matching involving wrong 3D point estimations, and consequently holes and topological errors in the mesh originated by the associated dense 3D cloud. This is due to the limited dynamic range of the 8-bit digital images that are matched with each other for generating 3D data. The same 256 levels can be more usefully employed if the actual dynamic range is compressed, avoiding luminance clipping in the darker and lighter image areas. Such an approach is considered here using both optical filtering and HDR processing with tone mapping, with experimental evaluation on different Cultural Heritage objects characterized by non-cooperative optical behavior. Three test images of each object were captured from different positions, changing the shooting conditions (filter/no-filter) and the image processing (no processing/HDR processing), in order to have the same three camera orientations with different optical and digital pre-processing, and applying the same automated process to each photo set.
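    The dynamic-range compression idea can be sketched with a Reinhard-style global tone-mapping operator L/(1+L), which squeezes HDR luminance into [0, 1) so that highlights and dark patches both keep usable detail in 8 bits. The operator choice is an assumption; the paper uses HDR tone mapping without specifying a particular curve.

```python
# Global tone mapping: compress [0, inf) luminance into [0, 1).
def tone_map(luminances):
    """Reinhard global operator L / (1 + L)."""
    return [L / (1.0 + L) for L in luminances]

def to_8bit(values):
    return [round(255 * v) for v in values]

hdr = [0.05, 0.5, 5.0, 50.0]          # four orders of magnitude of luminance
print(to_8bit(tone_map(hdr)))          # all four values remain distinguishable
# Naive linear scaling by the maximum clips the darkest value to 0:
print(to_8bit([L / max(hdr) for L in hdr]))
```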

  14. Using NASA Space Imaging Technology to Teach Earth and Sun Topics

    NASA Astrophysics Data System (ADS)

    Verner, E.; Bruhweiler, F. C.; Long, T.

    2011-12-01

    We teach an experimental college-level course, directed toward elementary education majors, emphasizing "hands-on" activities that can be easily applied in the elementary classroom. This course, Physics 240: "The Sun-Earth Connection," includes various ways to study selected topics in physics, earth science, and basic astronomy. Our lesson plans and EPO materials make extensive use of NASA imagery and cover topics on magnetism; the solar photospheric, chromospheric, and coronal spectra; earth science; and climate. In addition, we are developing and will cover topics on ecosystem structure, biomass, and water on Earth. We strive to free the non-science undergraduate from the "fear of science" and replace it with the excitement of science, so that these future teachers will carry this excitement to their future students. Hands-on experiments, computer simulations, analysis of real NASA data, and vigorous seminar discussions are blended in an inquiry-driven curriculum to instill a confident understanding of basic physical science and modern, effective methods for teaching it. The course also demonstrates how scientific thinking and hands-on activities can be implemented in the classroom. Most of the topics were selected using the National Science Standards and National Mathematics Standards addressed in grades K-8. The course focuses on helping education majors: 1) build knowledge of scientific concepts and processes; 2) understand the measurable attributes of objects and the units and methods of measurement; 3) conduct data analysis (collecting, organizing, and presenting scientific data, and predicting results); 4) use hands-on approaches to teach science; and 5) become familiar with Internet science teaching resources. Here we share our experiences and the challenges we face while teaching this course.

  15. Hyperspectral imaging in medicine: image pre-processing problems and solutions in Matlab.

    PubMed

    Koprowski, Robert

    2015-11-01

    The paper presents problems and solutions related to hyperspectral image pre-processing. New methods of preliminary image analysis are proposed. The paper shows problems occurring in Matlab when trying to analyse this type of image. Moreover, new methods are discussed which provide Matlab source code that can be used in practice without any licensing restrictions. A proposed application and a sample result of hyperspectral image analysis are also presented. PMID:25676816
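    As a minimal illustration of the kind of pre-processing step such work addresses, the sketch below rescales each spectral band independently so that later analysis is not dominated by bands with a larger raw dynamic range. This is an assumed, generic example in Python, not the paper's Matlab source code.

```python
# Per-band min-max normalisation of a hyperspectral cube
# (a list of 2D bands, each a list of rows of raw values).
def normalize_bands(cube):
    """Rescale each band independently to [0, 1]."""
    out = []
    for band in cube:
        flat = [v for row in band for v in row]
        lo, hi = min(flat), max(flat)
        scale = (hi - lo) or 1.0          # guard against a constant band
        out.append([[(v - lo) / scale for v in row] for row in band])
    return out

cube = [[[0, 50], [100, 200]],            # band 1: wide raw range
        [[10, 11], [12, 13]]]             # band 2: narrow raw range
norm = normalize_bands(cube)
print(norm[0][1][1], norm[1][1][1])       # 1.0 1.0: both bands now span [0, 1]
```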

  16. Subband/Transform MATLAB Functions For Processing Images

    NASA Technical Reports Server (NTRS)

    Glover, D.

    1995-01-01

    SUBTRANS software is a package of routines implementing image-data-processing functions for use with MATLAB(TM) software. Provides capability to transform image data with block transforms and to produce spatial-frequency subbands of transformed data. Functions can be cascaded to provide further decomposition into more subbands. Also used in image-data-compression systems; for example, transforms used to prepare data for lossy compression. Written for use in MATLAB mathematical-analysis environment.
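    The block-transform subband idea can be illustrated with a one-level 2D Haar transform, which splits a 2x2 block into one low-frequency average and three detail subbands. This is a minimal stand-in sketch, not the SUBTRANS MATLAB routines themselves.

```python
# One-level 2D Haar transform of a 2x2 block [[a, b], [c, d]].
def haar_2x2(a, b, c, d):
    """Return (LL, LH, HL, HH) subband coefficients."""
    ll = (a + b + c + d) / 4        # average: low-pass in both directions
    lh = (a - b + c - d) / 4        # horizontal detail
    hl = (a + b - c - d) / 4        # vertical detail
    hh = (a - b - c + d) / 4        # diagonal detail
    return ll, lh, hl, hh

print(haar_2x2(10, 10, 10, 10))    # flat block: all detail subbands are 0
print(haar_2x2(20, 0, 20, 0))      # vertical edge: horizontal detail only
```

    Cascading the transform on the LL subband yields the further decomposition into more subbands that the abstract mentions.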

  17. Systematic processing of Mars Express HRSC image mosaic quadrangles

    NASA Astrophysics Data System (ADS)

    Michael, Greg; Walter, Sebastian; McGuire, Patrick; Kneissl, Thomas; van Gasselt, Stephan; Gross, Christoph; Schreiner, Bjoern; Zuschneid, Wilhelm

    2015-04-01

    Mars Express HRSC image strips show varying brightnesses caused by differing illumination and atmospheric conditions. Lambert correction improves the situation, but not sufficiently for a visually consistent mosaic. In the absence of a systematically applicable correction for atmospheric effects, we demonstrate a technique to provide image equalisation using a brightness reference map, and show the first processed MC-30 quadrangles from a programme to produce a global HRSC image mosaic.

  18. Onboard processing for future space-borne imaging systems

    NASA Technical Reports Server (NTRS)

    Wellman, J. B.; Norris, D. D.

    1978-01-01

    There is a strong rationale for increasing the rate of information return from imaging class experiments aboard both terrestrial and planetary spacecraft. Future imaging systems will be designed with increased spatial resolution, broader spectral range and more spectral channels (or higher spectral resolution). The data rate implied by these improved performance characteristics can be expected to grow more rapidly than the projected telecommunications capability. One solution to this dilemma is the use of improved onboard data processing. The use of onboard classification processing in a multispectral imager can result in orders of magnitude increase in information transfer for very specific types of imaging tasks. Several of these processing functions are included in the conceptual design of an Infrared Multispectral Imager which would map the spatial distribution of characteristic geologic features associated with deposits of economic minerals.

  19. Digital Image Processing Overview For Helmet Mounted Displays

    NASA Astrophysics Data System (ADS)

    Parise, Michael J.

    1989-09-01

    Digital image processing provides a means to manipulate an image and presents a user with a variety of display formats that are not available in the analog image processing environment. When performed in real time and presented on a Helmet Mounted Display, system capability and flexibility are greatly enhanced. The information content of a display can be increased by the addition of real time insets and static windows from secondary sensor sources, near real time 3-D imaging from a single sensor can be achieved, graphical information can be added, and enhancement techniques can be employed. Such increased functionality is generating a considerable amount of interest in the military and commercial markets. This paper discusses some of these image processing techniques and their applications.

  20. Application of image processing for terahertz time domain spectroscopy imaging quantitative detection

    NASA Astrophysics Data System (ADS)

    Li, Li-juan; Wang, Sheng; Ren, Jiao-jiao; Zhou, Ming-xing; Zhao, Duo

    2015-03-01

    Based on the nondestructive testing principle of terahertz time-domain spectroscopy imaging, digital image processing techniques are applied to the images and two-dimensional data collected by a terahertz time-domain spectroscopy system. A range of processing methods is used, including selection of regions of interest, contrast enhancement, and edge detection, so that defects can be detected. In this paper, Matlab programming is used for terahertz defect recognition, counting pixels to determine the defect area, border length, roundness, and diameter. Through experiments combining qualitative analysis with quantitative Matlab image processing, this method of measuring the geometric dimensions of sample defects obtains good results.
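    The geometric measurements described can be sketched by counting pixels in a binary defect mask: area from defect pixels, border length from boundary pixels, and a roundness figure 4*pi*area/perimeter**2 (1.0 for a perfect circle). The mask, the 4-neighbour border test, and the roundness formula are illustrative assumptions, not the paper's Matlab code.

```python
import math

def defect_metrics(mask):
    """mask: 2D list of 0/1; returns (area, border_pixels, roundness)."""
    h, w = len(mask), len(mask[0])
    area, perim = 0, 0
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                continue
            area += 1
            # a defect pixel is on the border if any 4-neighbour is background
            nbrs = [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
            if any(ny < 0 or ny >= h or nx < 0 or nx >= w or not mask[ny][nx]
                   for ny, nx in nbrs):
                perim += 1
    roundness = 4 * math.pi * area / perim ** 2 if perim else 0.0
    return area, perim, roundness

square = [[0, 0, 0, 0],
          [0, 1, 1, 0],
          [0, 1, 1, 0],
          [0, 0, 0, 0]]
print(defect_metrics(square))   # (4, 4, ...): all 4 defect pixels lie on the border
```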

  1. Color sensitivity of the multi-exposure HDR imaging process

    NASA Astrophysics Data System (ADS)

    Lenseigne, Boris; Jacobs, Valéry Ann; Withouck, Martijn; Hanselaer, Peter; Jonker, Pieter P.

    2013-04-01

    Multi-exposure high dynamic range (HDR) imaging builds HDR radiance maps by stitching together different views of the same scene captured with varying exposures. Practically, this process involves converting raw sensor data into low dynamic range (LDR) images, estimating the camera response curves, and using them to recover the irradiance for every pixel. During the export, white balance settings and image stitching are applied, both of which influence the color balance of the final image. In this paper, we use a calibrated quasi-monochromatic light source, an integrating sphere, and a spectrograph to evaluate and compare the average spectral response of the image sensor. We draw conclusions about the color consistency of HDR imaging and the additional steps necessary to use multi-exposure HDR imaging as a tool to measure physical quantities such as radiance and luminance.
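    The multi-exposure merge at the heart of this process can be sketched as follows: each LDR pixel value divided by its exposure time gives an irradiance estimate, and the estimates are averaged with a hat-shaped weight that distrusts under- and over-exposed values. The linear-response and weighting assumptions below are ours, not the paper's calibrated camera curves.

```python
# Merge one scene point seen in several exposures into one irradiance value.
def merge_exposures(pixel_values, exposure_times, max_val=255):
    def weight(z):                       # hat function: peaks at mid-range
        return min(z, max_val - z)
    num = den = 0.0
    for z, t in zip(pixel_values, exposure_times):
        w = weight(z)
        num += w * (z / t)               # irradiance estimate from this shot
        den += w
    return num / den if den else 0.0

# same scene point captured at 1/100 s, 1/10 s and 1 s:
print(merge_exposures([2, 20, 200], [0.01, 0.1, 1.0]))   # 200.0: consistent
```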

  2. Digital image processing and analysis for activated sludge wastewater treatment.

    PubMed

    Khan, Muhammad Burhan; Lee, Xue Yong; Nisar, Humaira; Ng, Choon Aun; Yeap, Kim Ho; Malik, Aamir Saeed

    2015-01-01

    Activated sludge systems are generally used in wastewater treatment plants for processing domestic influent. Conventionally, activated sludge wastewater treatment is monitored by measuring physico-chemical parameters such as total suspended solids (TSSol), sludge volume index (SVI), and chemical oxygen demand (COD). These tests are conducted in the laboratory and take many hours to yield a final measurement. Digital image processing and analysis offers a better alternative, not only to monitor and characterize the current state of activated sludge but also to predict its future state. The characterization is done by correlating the time evolution of parameters extracted by image analysis of flocs and filaments with the physico-chemical parameters. This chapter briefly reviews activated sludge wastewater treatment and the procedures of image acquisition, preprocessing, segmentation, and analysis in this specific context. In the latter part, additional procedures such as z-stacking and image stitching are introduced for wastewater image preprocessing, which have not previously been used in the context of activated sludge. Different preprocessing and segmentation techniques are proposed, along with a survey of imaging procedures reported in the literature. Finally, image-analysis-based morphological parameters and their correlation with the monitoring and prediction of activated sludge are discussed. It is observed that image analysis can play a very useful role in the monitoring of activated sludge wastewater treatment plants. PMID:25381111

  3. Latent Image Processing Can Bolster the Value of Quizzes.

    ERIC Educational Resources Information Center

    Singer, David

    1985-01-01

    Latent image processing is a method which reveals hidden ink when marked with a special pen. Using multiple-choice items with commercially available latent image transfers can provide immediate feedback on take-home quizzes. Students benefitted from formative evaluation and were challenged to search for alternative solutions and explain unexpected…

  4. Digital images in the map revision process

    NASA Astrophysics Data System (ADS)

    Newby, P. R. T.

    Progress towards the adoption of digital (or softcopy) photogrammetric techniques for database and map revision is reviewed. Particular attention is given to the Ordnance Survey of Great Britain, the author's former employer, where digital processes are under investigation but have not yet been introduced for routine production. Developments which may lead to increasing automation of database update processes appear promising, but because of the cost and practical problems associated with managing as well as updating large digital databases, caution is advised when considering the transition to softcopy photogrammetry for revision tasks.

  5. VICAR-DIGITAL image processing system

    NASA Technical Reports Server (NTRS)

    Billingsley, F.; Bressler, S.; Friden, H.; Morecroft, J.; Nathan, R.; Rindfleisch, T.; Selzer, R.

    1969-01-01

    Computer program corrects various photometric, geometric, and frequency response distortions in pictures. The program converts pictures to a number of elements, with each element's optical density quantized to a numerical value. The translated picture is recorded on magnetic tape in digital form for subsequent processing and enhancement by computer.
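    The digitisation step the abstract describes can be sketched as mapping each picture element's optical density onto one of 2^n integer levels. The density range and the 6-bit depth below are illustrative assumptions, not VICAR-DIGITAL's actual parameters.

```python
# Quantize an optical density to an integer level for digital recording.
def quantize(density, d_min=0.0, d_max=3.0, bits=6):
    """Map a density in [d_min, d_max] to one of 2**bits levels."""
    levels = 2 ** bits
    frac = (density - d_min) / (d_max - d_min)
    return min(levels - 1, int(frac * levels))   # clamp the top of the range

print(quantize(0.0))    # 0: minimum density maps to the lowest level
print(quantize(1.5))    # 32: mid-density maps to the middle level
print(quantize(3.0))    # 63: maximum density clamps to the top level
```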

  6. Imaging Implicit Morphological Processing: Evidence from Hebrew

    ERIC Educational Resources Information Center

    Bick, Atira S.; Frost, Ram; Goelman, Gadi

    2010-01-01

    Is morphology a discrete and independent element of lexical structure or does it simply reflect a fine-tuning of the system to the statistical correlation that exists among orthographic and semantic properties of words? Hebrew provides a unique opportunity to examine morphological processing in the brain because of its rich morphological system.…

  7. Application of digital image processing techniques to astronomical imagery 1977

    NASA Technical Reports Server (NTRS)

    Lorre, J. J.; Lynn, D. J.

    1978-01-01

    Nine specific techniques or combinations of techniques developed for applying digital image processing technology to existing astronomical imagery are described. Photoproducts are included to illustrate the results of each of these investigations.

  8. Image processing for flight crew enhanced situation awareness

    NASA Technical Reports Server (NTRS)

    Roberts, Barry

    1993-01-01

    This presentation describes the image processing work that is being performed for the Enhanced Situational Awareness System (ESAS) application. Specifically, the presented work supports the Enhanced Vision System (EVS) component of ESAS.

  9. Halftoning processing on a JPEG-compressed image

    NASA Astrophysics Data System (ADS)

    Sibade, Cedric; Barizien, Stephane; Akil, Mohamed; Perroton, Laurent

    2003-12-01

    Digital image processing algorithms are usually designed for the raw format, that is, for an uncompressed representation of the image. Therefore, prior to transforming or processing a compressed format, decompression is applied; the result of the processing is then re-compressed for further transfer or storage. This change of data representation is resource-consuming in terms of computation, time, and memory usage. In the wide-format printing industry, this becomes an important issue: e.g., a 1 m² input color image scanned at 600 dpi exceeds 1.6 GB in its raw representation. However, some image processing algorithms can be performed in the compressed domain, by applying an equivalent operation on the compressed format. This paper presents an innovative application of the halftoning operation by screening, applied to JPEG-compressed images. This compressed-domain transform is performed by computing the threshold operation of the screening algorithm in the DCT domain. The algorithm is illustrated by examples for different halftone masks. A pre-sharpening operation, applied to a JPEG-compressed low-quality image, is also described; it de-noises the image and enhances its contours.
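    The screening operation itself can be sketched in the spatial domain: each pixel is compared against a tiled threshold mask, producing a binary dot pattern. The 2x2 ordered-dither mask below is a standard illustrative choice; the paper's contribution is performing this same threshold test in the DCT domain of the JPEG stream rather than on raw pixels.

```python
# Halftoning by screening: threshold each pixel against a tiled 2x2 mask.
MASK = [[64, 160],       # ordered-dither thresholds on the 0..255 scale
        [224, 96]]

def screen(image):
    """image: 2D list of grey levels 0..255 -> 2D list of 0/1 dots."""
    return [[1 if image[y][x] > MASK[y % 2][x % 2] else 0
             for x in range(len(image[0]))]
            for y in range(len(image))]

flat_grey = [[128] * 4 for _ in range(4)]
for row in screen(flat_grey):
    print(row)   # mid-grey renders as a 50% checkerboard-like dot pattern
```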

  10. Design and implementation of non-linear image processing functions for CMOS image sensor

    NASA Astrophysics Data System (ADS)

    Musa, Purnawarman; Sudiro, Sunny A.; Wibowo, Eri P.; Harmanto, Suryadi; Paindavoine, Michel

    2012-11-01

    Today, solid state image sensors are used in many applications, such as mobile phones, video surveillance systems, embedded medical imaging, and industrial vision systems. These image sensors require the integration, in or near the focal plane, of complex image processing algorithms. Such devices must meet constraints related to the quality of acquired images, the speed and performance of embedded processing, and low power consumption. To achieve these objectives, low-level analog processing allows the useful information in the scene to be extracted directly. For example, an edge detection step followed by local maxima extraction facilitates high-level processing such as object pattern recognition in a visual scene. Our goal was to design an intelligent image sensor prototype achieving high-speed image acquisition and non-linear image processing (such as local minima and maxima calculations). For this purpose, we present in this article the design and test of a 64×64 pixel image sensor built in a standard 0.35 μm CMOS technology and including non-linear image processing. The architecture of our sensor, named nLiRIC (non-Linear Rapid Image Capture), is based on the implementation of an analog Minima/Maxima Unit (MMU). This MMU calculates the minimum and maximum values (non-linear functions) in real time in a 2×2 pixel neighbourhood. Each MMU needs 52 transistors, and the pitch of one pixel is 40×40 μm. The total area of the 64×64 pixel array is 12.5 mm². Our tests have shown the validity of the main functions of our new image sensor, such as fast image acquisition (10K frames per second) and minima/maxima calculations in less than one ms.
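    A digital model of what the Minima/Maxima Unit computes is sketched below; the sensor does this in analog hardware, and the non-overlapping 2x2 blocks and the 4x4 frame are illustrative assumptions.

```python
# Compute per-block minima and maxima maps over non-overlapping 2x2 blocks.
def mmu(frame):
    """frame: 2D list with even dimensions -> (minima, maxima) maps."""
    minima, maxima = [], []
    for y in range(0, len(frame), 2):
        min_row, max_row = [], []
        for x in range(0, len(frame[0]), 2):
            block = [frame[y][x], frame[y][x + 1],
                     frame[y + 1][x], frame[y + 1][x + 1]]
            min_row.append(min(block))
            max_row.append(max(block))
        minima.append(min_row)
        maxima.append(max_row)
    return minima, maxima

frame = [[10, 12, 200, 201],
         [11, 13, 199, 202],
         [50, 50, 0, 255],
         [50, 50, 128, 64]]
lo, hi = mmu(frame)
print(lo)   # [[10, 199], [50, 0]]
print(hi)   # [[13, 202], [50, 255]]
```

    A large max-min spread within a block flags a local intensity transition, which is why such a unit supports edge detection before any digital processing.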

  11. Digital image processing for the earth resources technology satellite data.

    NASA Technical Reports Server (NTRS)

    Will, P. M.; Bakis, R.; Wesley, M. A.

    1972-01-01

    This paper discusses the problems of digital processing of the large volumes of multispectral image data that are expected to be received from the ERTS program. Correction of geometric and radiometric distortions are discussed and a byte oriented implementation is proposed. CPU timing estimates are given for a System/360 Model 67, and show that a processing throughput of 1000 image sets per week is feasible.

  12. ELAS: A powerful, general purpose image processing package

    NASA Technical Reports Server (NTRS)

    Walters, David; Rickman, Douglas

    1991-01-01

    ELAS is a software package which has been utilized as an image processing tool for more than a decade. It has been the source of several commercial packages. Now available on UNIX workstations it is a very powerful, flexible set of software. Applications at Stennis Space Center have included a very wide range of areas including medicine, forestry, geology, ecological modeling, and sonar imagery. It remains one of the most powerful image processing packages available, either commercially or in the public domain.

  13. Multimission image processing and science data visualization

    NASA Technical Reports Server (NTRS)

    Green, William B.

    1993-01-01

    The Operational Science Analysis (OSA) functional area supports science instrument data display, analysis, visualization and photo processing in support of flight operations of planetary spacecraft managed by the Jet Propulsion Laboratory (JPL). This paper describes the data products generated by the OSA functional area and the current computer system used to generate these data products. The objectives of a system upgrade now in progress are described. The design approach to development of the new system, including use of the Unix operating system and X-Window display standards to provide platform independence, portability, and modularity within the new system, is reviewed. The new system should provide a modular and scalable capability supporting a variety of future missions at JPL.

  14. Digital interactive image analysis by array processing

    NASA Technical Reports Server (NTRS)

    Sabels, B. E.; Jennings, J. D.

    1973-01-01

    An attempt is made to draw a parallel between the existing geophysical data processing service industries and the emerging earth resources data support requirements. The relationship of seismic data analysis to ERTS data analysis is natural because in either case data is digitally recorded in the same format, resulting from remotely sensed energy which has been reflected, attenuated, shifted and degraded on its path from the source to the receiver. In the seismic case the energy is acoustic, ranging in frequencies from 10 to 75 cps, for which the lithosphere appears semi-transparent. In earth survey remote sensing through the atmosphere, visible and infrared frequency bands are being used. Yet the hardware and software required to process the magnetically recorded data from the two realms of inquiry are identical and similar, respectively. The resulting data products are similar.

  15. All digital precision processing of ERTS images

    NASA Technical Reports Server (NTRS)

    Bernstein, R. (Principal Investigator)

    1973-01-01

    The author has identified the following significant results. Experimentation was conducted to evaluate the performance of the Sequential Similarity Detection Algorithm (SSDA) in detecting and locating ground control points (GCPs) automatically using MSS data. Recent experiments with ERTS data having a temporal separation of 17 to 72 days between the search area and the GCP have shown that the algorithm can find the GCPs with an overall probability of 88%. Band 5 appears to give the best results. A modified reseau detection algorithm has been applied to 2 RBV scenes separated by a 12 day period. The algorithm correctly located all 486 reseaus. No false reseaus were located in a companion experiment. Changes in apparent reseau position, due to camera characteristics, were never greater than 3 picture elements in either axis. The positional error of a geometrically corrected image has been predicted by the use of an APL program. The maximum deviation of the GCPs from true UTM coordinate position was computed to be 190 meters. The RMS positional error of all GCPs was 106 meters. Further refinement of the algorithm is expected to reduce the errors.

  16. Understanding Reactions to Workplace Injustice through Process Theories of Motivation: A Teaching Module and Simulation

    ERIC Educational Resources Information Center

    Stecher, Mary D.; Rosse, Joseph G.

    2007-01-01

    Management and organizational behavior students are often overwhelmed by the plethora of motivation theories they must master at the undergraduate level. This article offers a teaching module geared toward helping students understand how two major process theories of motivation, equity and expectancy theories and theories of organizational…

  17. Using a Laboratory Simulator in the Teaching and Study of Chemical Processes in Estuarine Systems

    ERIC Educational Resources Information Center

    Garcia-Luque, E.; Ortega, T.; Forja, J. M.; Gomez-Parra, A.

    2004-01-01

    The teaching of Chemical Oceanography in the Faculty of Marine and Environmental Sciences of the University of Cadiz (Spain) has been improved since 1994 by the employment of a device for the laboratory simulation of estuarine mixing processes and the characterisation of the chemical behaviour of many substances that pass through an estuary. The…

  18. ICCE/ICCAI 2000 Full & Short Papers (Teaching and Learning Processes).

    ERIC Educational Resources Information Center

    2000

    This document contains the full and short papers on teaching and learning processes from ICCE/ICCAI 2000 (International Conference on Computers in Education/International Conference on Computer-Assisted Instruction) covering the following topics: a code restructuring tool to help scaffold novice programmers; efficient study of Kanji using…

  19. Applying the Decoding the Disciplines Process to Teaching Structural Mechanics: An Autoethnographic Case Study

    ERIC Educational Resources Information Center

    Tingerthal, John Steven

    2013-01-01

    Using case study methodology and autoethnographic methods, this study examines a process of curricular development known as "Decoding the Disciplines" (Decoding) by documenting the experience of its application in a construction engineering mechanics course. Motivated by the call to integrate what is known about teaching and learning…

  20. Optimizing the Teaching-Learning Process Through a Linear Programming Model--Stage Increment Model.

    ERIC Educational Resources Information Center

    Belgard, Maria R.; Min, Leo Yoon-Gee

    An operations research method to optimize the teaching-learning process is introduced in this paper. In particular, a linear programming model is proposed which, unlike dynamic or control theory models, allows the computer to react to the responses of a learner in seconds or less. To satisfy the assumptions of linearity, the seemingly complicated…

  1. A Problem-Based Learning Model for Teaching the Instructional Design Business Acquisition Process.

    ERIC Educational Resources Information Center

    Kapp, Karl M.; Phillips, Timothy L.; Wanner, Janice H.

    2002-01-01

    Outlines a conceptual framework for using a problem-based learning model for teaching the Instructional Design Business Acquisition Process. Discusses writing a response to a request for proposal, developing a working prototype, orally presenting the solution, and the impact of problem-based learning on students' perception of their confidence in…

  2. A Development of Environmental Education Teaching Process by Using Ethics Infusion for Undergraduate Students

    ERIC Educational Resources Information Center

    Wongchantra, Prayoon; Boujai, Pairoj; Sata, Winyoo; Nuangchalerm, Prasart

    2008-01-01

    Environmental problems were made by human beings because they lack environmental ethics. The sustainable solving of environmental problems must rely on a teaching process using an environmental ethics infusion method. The purposes of this research were to study knowledge of environment and environmental ethics through an environmental education…

  3. Using the Internet and Computer Technologies in Learning/Teaching Process

    ERIC Educational Resources Information Center

    Geladze, Darejan

    2015-01-01

    According to the new national curriculum, the successful introduction of innovations depends on many factors; the most important are the learning environment, which includes suitable equipment, place and space utilization, and the selection of learning resources to support the teaching and learning problem-solving process, by creating the…

  4. Assessing Teachers' Perception on Integrating ICT in Teaching-Learning Process: The Case of Adwa College

    ERIC Educational Resources Information Center

    Gebremedhin, Mewcha Amha; Fenta, Ayele Almaw

    2015-01-01

    Rapid growth and improvement in ICT have led to the diffusion of technology in education. The purpose of this study is to investigate teachers' perception on integrating ICT in teaching-learning process. The research questions sought to measure teachers' software usage as well as other instructional tools and materials, preferences for…

  5. Incentives and Motivation in the Teaching-Learning Process: The Role of Teacher Intentions.

    ERIC Educational Resources Information Center

    Menges, Robert J.

    The theory of "reasoned action" is applied to the teaching-learning process. This theory asserts that people use the information available to them in a reasonable manner to arrive at their decisions and that a person's behavior follows logically and systematically from whatever information he has available. To illustrate application of the theory…

  6. The Emergence of the Teaching/Learning Process in Preschoolers: Theory of Mind and Age Effect

    ERIC Educational Resources Information Center

    Bensalah, Leila

    2011-01-01

    This study analysed the gradual emergence of the teaching/learning process by examining theory of mind (ToM) acquisition and age effects in the preschool period. We observed five dyads performing a jigsaw task drawn from a previous study. Three stages were identified. In the first one, the teacher focuses on the execution of her/his own task…

  7. Teaching the Dialectic Process to Preservice Teachers in an Educational Psychology Class.

    ERIC Educational Resources Information Center

    Allen, James D.; Jeffers, Glenn A.

    This paper describes an educational psychology class that helped develop self-reflection in preservice teachers by teaching them the dialectic process of analysis. Students explored the attitudes, beliefs, values, assumptions, and biases they as future teachers would bring with them to classroom interactions with students. As an ongoing…

  8. Exploring the Process of Integrating the Internet into English Language Teaching

    ERIC Educational Resources Information Center

    Abdallah, Mahmoud Mohammad Sayed

    2007-01-01

    The present paper explores the process of integrating the Internet into the field of English language teaching in the light of the following points: the general importance of the Internet in our everyday lives shedding some light on the increasing importance of the Internet as a new innovation in our modern life; benefits of using the Internet in…

  9. Understanding Reactions to Workplace Injustice through Process Theories of Motivation: A Teaching Module and Simulation

    ERIC Educational Resources Information Center

    Stecher, Mary D.; Rosse, Joseph G.

    2007-01-01

    Management and organizational behavior students are often overwhelmed by the plethora of motivation theories they must master at the undergraduate level. This article offers a teaching module geared toward helping students understand how two major process theories of motivation, equity and expectancy theories and theories of organizational…

  10. A National Research Survey of Technology Use in the BSW Teaching and Learning Process

    ERIC Educational Resources Information Center

    Buquoi, Brittany; McClure, Carli; Kotrlik, Joseph W.; Machtmes, Krisanna; Bunch, J. C.

    2013-01-01

    The purpose of this descriptive-correlational research study was to assess the overall use of technology in the teaching and learning process (TLP) by BSW educators. The accessible and target population included all full-time, professorial-rank, BSW faculty in Council on Social Work Education--accredited BSW programs at land grant universities.…

  11. Learning and Teaching about the Nature of Science through Process Skills

    ERIC Educational Resources Information Center

    Mulvey, Bridget K.

    2012-01-01

    This dissertation, a three-paper set, explored whether the process skills-based approach to nature of science instruction improves teachers' understandings, intentions to teach, and instructional practice related to the nature of science. The first paper examined the nature of science views of 53 preservice science teachers before and after a…

  12. The Process of Teaching and Learning about Reflection: Research Insights from Professional Nurse Education

    ERIC Educational Resources Information Center

    Bulman, Chris; Lathlean, Judith; Gobbi, Mary

    2014-01-01

    The study aimed to investigate the process of reflection in professional nurse education and the part it played in a teaching and learning context. The research focused on the social construction of reflection within a post-registration, palliative care programme, accessed by nurses, in the United Kingdom (UK). Through an interpretive ethnographic…

  13. Theory and Practice in the Teaching of Composition: Processing, Distancing, and Modeling.

    ERIC Educational Resources Information Center

    Myers, Miles, Ed.; Gray, James, Ed.

    Intended to show teachers how their approaches to the teaching of writing reflect a particular area of research and to show researchers how the intuitions of teachers reflect research findings, the articles in this book are classified according to three approaches to writing: processing, distancing, and modeling. After an introductory essay that…

  14. A Review of the Effects of Stress on the Teaching-Learning Process.

    ERIC Educational Resources Information Center

    Bossing, Lewis; Ruoff, Nancy

    Literature on the impact of stress on various elements in the teaching-learning process in the school environment is reviewed. Writings and research findings on stress are discussed in relation to: (1) children and adolescents; (2) gifted students; (3) mentally handicapped student; (4) teachers; (5) teacher stress as it affects learning; and (6)…

  15. Metaphors in Mathematics Classrooms: Analyzing the Dynamic Process of Teaching and Learning of Graph Functions

    ERIC Educational Resources Information Center

    Font, Vicenc; Bolite, Janete; Acevedo, Jorge

    2010-01-01

    This article presents an analysis of a phenomenon that was observed within the dynamic processes of teaching and learning to read and elaborate Cartesian graphs for functions at high-school level. Two questions were considered during this investigation: What types of metaphors does the teacher use to explain the graphic representation of functions…

  16. Student-Centered Transformative Learning in Leadership Education: An Examination of the Teaching and Learning Process

    ERIC Educational Resources Information Center

    Haber-Curran, Paige; Tillapaugh, Daniel W.

    2015-01-01

    Innovative and learner-centered approaches to teaching and learning are vital for the applied field of leadership education, yet little research exists on such pedagogical approaches within the field. Using a phenomenological approach in analyzing 26 students' reflective narratives, the authors explore students' experiences of and process of…

  17. Learning and Teaching about the Nature of Science through Process Skills

    ERIC Educational Resources Information Center

    Mulvey, Bridget K.

    2012-01-01

    This dissertation, a three-paper set, explored whether the process skills-based approach to nature of science instruction improves teachers' understandings, intentions to teach, and instructional practice related to the nature of science. The first paper examined the nature of science views of 53 preservice science teachers before and after a

  18. Internet Access, Use and Sharing Levels among Students during the Teaching-Learning Process

    ERIC Educational Resources Information Center

    Tutkun, Omer F.

    2011-01-01

    The purpose of this study was to determine the awareness among students and levels regarding student access, use, and knowledge sharing during the teaching-learning process. The triangulation method was utilized in this study. The population of the research universe was 21,747. The student sample population was 1,292. Two different data collection…

  19. A Performer's Creative Processes: Implications for Teaching and Learning Musical Interpretation

    ERIC Educational Resources Information Center

    Silverman, Marissa

    2008-01-01

    The purpose of this study is to investigate aspects of musical interpretation and suggest guidelines for developing performance students' interpretative processes. Since musical interpretation involves basic issues concerning the nature of music, and competing concepts of "interpretation" and its teaching, an overview of these issues is given.…

  20. Using Process Observation to Teach Alternative Dispute Resolution: Alternatives to Simulation.

    ERIC Educational Resources Information Center

    Bush, Robert A. Barush

    1987-01-01

    A method of teaching alternative dispute resolution (ADR) involves sending students to observe actual ADR sessions, by agreement with the agencies conducting them, and then analyzing the students' observations in focused discussions to improve student insight and understanding of the processes involved. (MSE)

  1. Personalized Instruction, Group Process and the Teaching of Psychological Theories of Learning.

    ERIC Educational Resources Information Center

    DiScipio, William J.; Crohn, Joan

    An innovative approach to teaching learning theory to undergraduates was tested by comparing a modified Personalized System of Instruction (PSI) group process class (n=19) to a traditional teacher-centered control class (n=32). Predictions were that academic performance and motivation would be improved by the PSI method, and student satisfaction…

  2. Study of gray image pseudo-color processing algorithms

    NASA Astrophysics Data System (ADS)

    Hu, Jinlong; Peng, Xianrong; Xu, Zhiyong

    Gray images contain abundant information, but if the intensity differences between adjacent pixels are small, the required information cannot be extracted by humans, since humans are more sensitive to color images than to gray images. If gray images are transformed into pseudo-color images, the details of the images become more explicit and the target is recognized more easily. There are two classes of methods (in the frequency domain and in the spatial domain) to realize pseudo-color enhancement of gray images. The first is mainly filtering in the frequency domain; the second comprises the equal-density pseudo-color coding methods, which mainly include density segmentation coding, function transformation and complementary pseudo-color coding. Moreover, there are many other methods to realize pseudo-color enhancement, such as pixel self-transformation based on the RGB tri-primaries, pseudo-color coding of phase-modulated images based on the RGB color model, pseudo-color coding of high gray-resolution images, etc. However, the above methods are each tailored to a particular situation, and their transformations are based on the RGB color space. In order to improve the visual effect, the method based on the RGB color space and pixel self-transformation is improved in this paper by operating in the HSI color space instead. Compared with other methods, gray images in ordinary formats can be processed, and many gray images can be transformed into 24-bit pseudo-color images. The experiment shows that the processed image has abundant levels, which is consistent with human perception.
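
    As a rough illustration of intensity-to-color mapping, the sketch below assigns each gray level a hue and converts back to RGB. It uses HSV (via Python's stdlib colorsys) as a convenient stand-in for the HSI space discussed in the abstract; the function name and the blue-to-red hue ramp are illustrative choices, not the paper's algorithm.

```python
import numpy as np
import colorsys

def gray_to_pseudocolor(gray):
    """Map 8-bit gray levels to 24-bit pseudo-color by treating the
    gray level as a hue angle (an HSV stand-in for an HSI transform;
    names and the hue ramp are illustrative)."""
    out = np.zeros(gray.shape + (3,), dtype=np.uint8)
    for level in np.unique(gray):
        # darker pixels -> blue end, brighter -> red end of the hue circle
        hue = (1.0 - level / 255.0) * (2.0 / 3.0)
        r, g, b = colorsys.hsv_to_rgb(hue, 1.0, 1.0)
        out[gray == level] = [int(r * 255), int(g * 255), int(b * 255)]
    return out
```

    Because the mapping is per gray level rather than per pixel, it behaves like a lookup table, which is how such codings are usually applied in practice.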

  3. The design of a distributed image processing and dissemination system

    SciTech Connect

    Rafferty, P.; Hower, L.

    1990-01-01

    The design and implementation of a distributed image processing and dissemination system was undertaken and accomplished as part of a prototype communication and intelligence (CI) system, the contingency support system (CSS), which is intended to support contingency operations of the Tactical Air Command. The system consists of six (6) Sun 3/180C workstations with integrated ITEX image processors and three (3) 3/50 diskless workstations located at four (4) system nodes (INEL, base, and mobiles). All 3/180C workstations are capable of image system server functions, whereas the 3/50s are image system clients only. Distribution is accomplished via both local and wide area networks using standard Defense Data Network (DDN) protocols (i.e., TCP/IP, et al.) and Defense Satellite Communication Systems (DSCS) compatible SHF Transportable Satellite Earth Terminals (TSET). Image applications utilize Sun's Remote Procedure Call (RPC) to facilitate the image system client and server relationships. The system provides functions to acquire, display, annotate, process, transfer, and manage images via an icon, panel, and menu oriented SunView (trademark) based user interface. Image spatial resolution is 512 × 480 with 8 bits/pixel black and white and 12/24 bits/pixel color, depending on system configuration. Compression is used during various image display and transmission functions to reduce the dynamic range of image data to 12/6/3/2 bits/pixel, depending on the application. Image acquisition is accomplished in real time or near real time by special purpose ITEX image hardware. As a result, all image displays are highly interactive, with attention given to subsecond response time. 3 refs., 7 figs.

  4. The research on image processing technology of the star tracker

    NASA Astrophysics Data System (ADS)

    Li, Yu-ming; Li, Chun-jiang; Zheng, Ran; Li, Xiao; Yang, Jun

    2014-11-01

    As the core of vision-based sensing, image processing technology for the star tracker is mainly characterized by such items as image exposure, optimal storage, background estimation, feature correction, target extraction, and iteration compensation. This paper first summarizes recent research on these items at home and abroad; then, taking into account the star tracker's practical engineering constraints, in-orbit environment, and lifetime information, it presents an architecture for rapid fusion of multiple image frames, which can be used to restrain oversaturation of the effective pixels, making the star tracker more precise, more robust and more stable.

  5. A quantum mechanics-based framework for image processing and its application to image segmentation

    NASA Astrophysics Data System (ADS)

    Youssry, Akram; El-Rafei, Ahmed; Elramly, Salwa

    2015-10-01

    Quantum mechanics provides the physical laws governing microscopic systems. A novel and generic framework based on quantum mechanics for image processing is proposed in this paper. The basic idea is to map each image element to a quantum system. This enables the utilization of the powerful theory of quantum mechanics in solving image processing problems. The initial states of the image elements are evolved to the final states, controlled by an external force derived from the image features. The final states can be designed to correspond to the class of the element, providing solutions to image segmentation, object recognition, and image classification problems. In this work, the formulation of the framework for a single-object segmentation problem is developed. The proposed algorithm based on this framework consists of four major steps. The first step is designing and estimating the operator that controls the evolution process from image features. The states associated with the pixels of the image are initialized in the second step. In the third step, the system is evolved. Finally, a measurement is performed to determine the output. The presented algorithm is tested on noiseless and noisy synthetic images as well as natural images. The average of the obtained results is 98.5% for sensitivity and 99.7% for specificity. A comparison with other segmentation algorithms is performed, showing the superior performance of the proposed method. The application of the introduced quantum-based framework to image segmentation demonstrates high efficiency in handling different types of images. Moreover, it can be extended to multi-object segmentation and utilized in other applications in the fields of signal and image processing.
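
    The four-step flow (operator design, state initialization, evolution, measurement) can be caricatured with a one-qubit-per-pixel toy model. The sketch below is only a schematic illustration of that flow; the intensity-proportional rotation operator and the fixed threshold are assumptions for the sketch, not the authors' feature-derived operator.

```python
import numpy as np

def quantum_style_segment(img, steps=10, threshold=0.5):
    """Toy sketch of the four-step framework: each pixel is mapped to a
    two-level quantum state, evolved by a rotation operator derived from
    the pixel intensity, then 'measured' and thresholded. Illustrative
    only; the operator choice is not the paper's."""
    # Step 1: operator design -- per-step rotation angle from the feature
    theta = (img.astype(float) / img.max()) * (np.pi / 2) / steps
    # Step 2: initialise every pixel state to |0> = (1, 0)
    a = np.ones_like(theta)   # amplitude of |0>
    b = np.zeros_like(theta)  # amplitude of |1>
    # Step 3: evolve -- apply the rotation `steps` times
    for _ in range(steps):
        a, b = (np.cos(theta) * a - np.sin(theta) * b,
                np.sin(theta) * a + np.cos(theta) * b)
    # Step 4: measure -- P(|1>) = |b|^2, threshold into a binary mask
    return (b ** 2 > threshold).astype(np.uint8)
```

    With this operator, the total rotation is proportional to intensity, so bright pixels end near |1> and are labelled foreground.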

  6. Computational techniques in zebrafish image processing and analysis.

    PubMed

    Xia, Shunren; Zhu, Yongxu; Xu, Xiaoyin; Xia, Weiming

    2013-02-15

    The zebrafish (Danio rerio) has been widely used as a vertebrate animal model in neurobiological research. The zebrafish has several unique advantages that make it well suited for live microscopic imaging, including its fast development, large transparent embryos that develop outside the mother, and the availability of a large selection of mutant strains. As the genome of zebrafish has been fully sequenced, it is comparatively easier to carry out large-scale forward genetic screening in zebrafish to investigate relevant human diseases, from neurological disorders like epilepsy, Alzheimer's disease, and Parkinson's disease to other conditions, such as polycystic kidney disease and cancer. All of these factors contribute to an increasing number of microscopic images of zebrafish that require advanced image processing methods to objectively, quantitatively, and quickly analyze the image dataset. In this review, we discuss the development of image analysis and quantification techniques as applied to zebrafish images, with the emphasis on phenotype evaluation, neuronal structure quantification, vascular structure reconstruction, and behavioral monitoring. Zebrafish image analysis is continually developing, and new types of images generated from a wide variety of biological experiments provide the dataset and foundation for the future development of image processing algorithms. PMID:23219894

  7. Shocks and other nonlinear filtering applied to image processing

    NASA Astrophysics Data System (ADS)

    Osher, Stanley; Rudin, Leonid I.

    1991-12-01

    Two new filters for image enhancement are developed, extending the early work of the authors. One filter uses a new nonlinear time-dependent partial differential equation and its discretization; the second uses a discretization which constrains the backward heat equation and keeps its variation bounded. The evolution of the initial image through U(x,y,t) as t increases is the filtering process. The processed image is piecewise smooth, nonoscillatory and apparently an accurate reconstruction. The algorithms are fast and easy to program.
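
    A shock filter of the kind described evolves the image by u_t = -sign(Δu)|∇u|, steepening smeared edges toward piecewise-constant regions. The sketch below is a minimal NumPy discretization with standard upwind differencing and periodic borders for brevity; the step size, iteration count and function name are illustrative, not taken from the paper.

```python
import numpy as np

def shock_filter(img, iters=20, dt=0.25):
    """Minimal sketch of a shock filter, u_t = -sign(lap u) * |grad u|,
    with Rouy-Tourin-style upwind differencing (periodic borders via
    np.roll, for brevity). Parameters are illustrative."""
    u = img.astype(float)
    for _ in range(iters):
        ux_f = np.roll(u, -1, 1) - u   # forward  d/dx
        ux_b = u - np.roll(u, 1, 1)    # backward d/dx
        uy_f = np.roll(u, -1, 0) - u
        uy_b = u - np.roll(u, 1, 0)
        lap = ux_f - ux_b + uy_f - uy_b   # discrete Laplacian
        s = np.sign(lap)
        # upwind selection of one-sided differences per sign of lap
        gx = np.where(s > 0,
                      np.minimum(ux_f, 0)**2 + np.maximum(ux_b, 0)**2,
                      np.maximum(ux_f, 0)**2 + np.minimum(ux_b, 0)**2)
        gy = np.where(s > 0,
                      np.minimum(uy_f, 0)**2 + np.maximum(uy_b, 0)**2,
                      np.maximum(uy_f, 0)**2 + np.minimum(uy_b, 0)**2)
        u = u - dt * s * np.sqrt(gx + gy)
    return u
```

    Running it on a smeared step edge pushes values below the edge midpoint down and values above it up, which is the sharpening behaviour the abstract describes.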

  8. Particle sizing in rocket motor studies utilizing hologram image processing

    NASA Technical Reports Server (NTRS)

    Netzer, David; Powers, John

    1987-01-01

    A technique of obtaining particle size information from holograms of combustion products is described. The holograms are obtained with a pulsed ruby laser through windows in a combustion chamber. The reconstruction is done with a krypton laser with the real image being viewed through a microscope. The particle size information is measured with a Quantimet 720 image processing system which can discriminate various features and perform measurements of the portions of interest in the image. Various problems that arise in the technique are discussed, especially those that are a consequence of the speckle due to the diffuse illumination used in the recording process.

  9. Dielectric barrier discharge image processing by Photoshop

    NASA Astrophysics Data System (ADS)

    Dong, Lifang; Li, Xuechen; Yin, Zengqian; Zhang, Qingli

    2001-09-01

    In this paper, the filamentary pattern of a dielectric barrier discharge has been processed by using Photoshop, from which the coordinates of each filament can also be obtained. Using Photoshop, two different ways have been used to analyze the spatial order of the pattern formation in the dielectric barrier discharge. The results show that the distance between neighboring filaments at U = 14 kV and d = 0.9 mm is about 1.8 mm. Within the experimental error, the results from the two different methods are similar.

  10. Multispectral image processing for environmental monitoring

    NASA Astrophysics Data System (ADS)

    Carlotto, Mark J.; Lazaroff, Mark B.; Brennan, Mark W.

    1993-03-01

    New techniques are described for detecting environmental anomalies and changes using multispectral imagery. Environmental anomalies are areas that do not exhibit normal signatures due to man-made activities and include phenomena such as effluent discharges, smoke plumes, stressed vegetation, and deforestation. A new region-based processing technique is described for detecting these phenomena using Landsat TM imagery. Another algorithm that can detect the appearance or disappearance of environmental phenomena is also described and an example illustrating its use in detecting urban changes using SPOT imagery is presented.

  11. Image processing as a tool in flight testing evaluation

    NASA Astrophysics Data System (ADS)

    Kaelldahl, Anders

    1991-05-01

    An advanced system for digitizing and automatically analyzing film and video images and its adaptation for specific purposes are described. A charge-coupled device (CCD) video camera was installed in a Viggen test aircraft and flight tests of sensitivity at different wavelengths and of measurable resolution were carried out. A video tape containing interesting runs was input into an image processing system in order to show that the video based system meets the specification for Head Up Display (HUD) evaluation. A market investigation was undertaken for an image processing system for HUD evaluation, with particular requirements on a high performance time base corrector in the video input unit, video disks, image processor, and display unit. A digital video disk system and a tracking algorithm based on correlation methods were chosen; the image processing system was completed with a film scanner for converting cinema films into a digital format. In addition to the HUD analysis technique, a photogrammetric system was used for testing microwave landing systems, radar altimeters, and category III landing systems. It is concluded that this image processing system reduces evaluation time and provides a higher likelihood of correct evaluation and higher accuracy, since more points from each image are used.

  12. Digital image processing of bone - Problems and potentials

    NASA Technical Reports Server (NTRS)

    Morey, E. R.; Wronski, T. J.

    1980-01-01

    The development of a digital image processing system for bone histomorphometry and fluorescent marker monitoring is discussed. The system in question is capable of making measurements of UV or light microscope features on a video screen with either video or computer-generated images, and comprises a microscope, low-light-level video camera, video digitizer and display terminal, color monitor, and PDP 11/34 computer. Capabilities demonstrated in the analysis of an undecalcified rat tibia include the measurement of perimeter and total bone area, and the generation of microscope images, false color images, digitized images and contoured images for further analysis. Software development will be based on an existing software library, specifically the mini-VICAR system developed at JPL. It is noted that the potentials of the system in terms of speed and reliability far exceed any problems associated with hardware and software development.

  13. Digital processing of side-scan sonar data with the Woods Hole image processing system software

    USGS Publications Warehouse

    Paskevich, Valerie F.

    1992-01-01

    Since 1985, the Branch of Atlantic Marine Geology has been involved in collecting, processing and digitally mosaicking high- and low-resolution side-scan sonar data. Recent development of a UNIX-based image-processing software system includes a series of task-specific programs for processing side-scan sonar data. This report describes the steps required to process the collected data and to produce an image that has equal along- and across-track resolution.

  14. Color image processing and object tracking workstation

    NASA Technical Reports Server (NTRS)

    Klimek, Robert B.; Paulick, Michael J.

    1992-01-01

    A system is described for automatic and semiautomatic tracking of objects on film or video tape, developed to meet the needs of the microgravity combustion and fluid science experiments at NASA Lewis. The system consists of individual hardware parts working under computer control to achieve a high degree of automation. The most important hardware parts include a 16 mm film projector, a lens system, a video camera, an S-VHS tapedeck, a frame grabber, and some storage and output devices. Both the projector and tapedeck have a computer interface enabling remote control. Tracking software was developed to control the overall operation. In the automatic mode, the main tracking program controls the projector or tapedeck frame incrementation, grabs a frame, processes it, locates the edge of the objects being tracked, and stores the coordinates in a file. This process is performed repeatedly until the last frame is reached. Three representative applications are described. These applications represent typical uses and include tracking the propagation of a flame front, tracking the movement of a liquid-gas interface with extremely poor visibility, and characterizing a diffusion flame according to color and shape.
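
    The per-frame loop described above (grab a frame, process it, locate the edge, store the coordinate) can be sketched with a synthetic flame-front example, taking the front to be the right-most bright column in each frame. This is a simplified stand-in for the workstation's tracking program, not its actual code:

```python
import numpy as np

def track_front(frames, threshold):
    """For each frame, threshold the image and record the column of the
    leading (right-most) bright pixel, mimicking the grab/process/locate/
    store cycle. Returns one coordinate per frame (-1 if no front)."""
    coords = []
    for frame in frames:
        mask = frame >= threshold
        cols = np.where(mask.any(axis=0))[0]
        coords.append(int(cols.max()) if cols.size else -1)
    return coords
```

    In the real system the coordinates would be appended to a file as each film or tape frame is incremented under computer control.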

  15. Anomalous diffusion process applied to magnetic resonance image enhancement

    NASA Astrophysics Data System (ADS)

    Senra Filho, A. C. da S.; Garrido Salmon, C. E.; Murta Junior, L. O.

    2015-03-01

    The diffusion process is widely applied in digital image enhancement, both directly, by introducing the diffusion equation as in the anisotropic diffusion (AD) filter, and indirectly, by convolution as in the Gaussian filter. The anomalous diffusion process (ADP), given by a nonlinear relationship in the diffusion equation and characterized by an anomalous parameter q, is supposed to be consistent with inhomogeneous media. Although the classic diffusion process is widely studied and effective in various image settings, the effectiveness of ADP for image enhancement is still unknown. In this paper we propose anomalous diffusion filters in both isotropic (IAD) and anisotropic (AAD) forms for magnetic resonance imaging (MRI) enhancement. Filters based on a discrete implementation of anomalous diffusion were applied to noisy T2-weighted MR images (brain, chest and abdominal) in order to quantify SNR gains, estimating the performance of the proposed anomalous filters when realistic noise is added to those images. Results show that for images containing complex structures, e.g. brain structures, anomalous diffusion yields the highest enhancement compared to the classical diffusion approach. Furthermore, ADP provided more effective enhancement for images containing Rayleigh and Gaussian noise. The anomalous filters preserved anatomic edges and achieved an SNR improvement of 26% for brain images compared to the classical filter. In addition, the AAD and IAD filters showed optimum results for noise distributions that appear in extreme situations in MRI, i.e. in low-SNR images with an approximately Rayleigh noise distribution and in high-SNR images with Gaussian or noncentral χ noise distributions. The AAD and IAD filters showed the best results for the parametric range 1.2 < q < 1.6, suggesting that the anomalous diffusion regime is more suitable for MRI. This study indicates the proposed anomalous filters as promising approaches to qualitative and quantitative MRI enhancement.
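
    An isotropic anomalous-diffusion step can be sketched in the porous-medium form ∂u/∂t = ∇²(u^(2−q)), which reduces to classical linear diffusion at q = 1. This is an illustrative discretisation under that assumed form, not the authors' exact filter:

```python
import numpy as np

def iad_step(u, q=1.4, dt=0.1):
    """One explicit Euler step of isotropic anomalous diffusion,
    du/dt = laplacian(u**(2 - q)); q = 1 recovers linear diffusion.
    q = 1.4 lies in the 1.2 < q < 1.6 range reported as optimal."""
    v = np.clip(u, 1e-6, None) ** (2.0 - q)  # avoid negative bases
    p = np.pad(v, 1, mode='edge')            # reflecting-ish boundary
    lap = (p[:-2, 1:-1] + p[2:, 1:-1]
           + p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * v)
    return u + dt * lap
```

    Iterating a few such steps smooths noise; the anisotropic (AAD) variant would additionally modulate the flux by an edge-stopping function of the local gradient.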

  16. Image processing for improved eye-tracking accuracy

    NASA Technical Reports Server (NTRS)

    Mulligan, J. B.; Watson, A. B. (Principal Investigator)

    1997-01-01

    Video cameras provide a simple, noninvasive method for monitoring a subject's eye movements. An important concept is that of the resolution of the system, which is the smallest eye movement that can be reliably detected. While hardware systems are available that estimate direction of gaze in real-time from a video image of the pupil, such systems must limit image processing to attain real-time performance and are limited to a resolution of about 10 arc minutes. Two ways to improve resolution are discussed. The first is to improve the image processing algorithms that are used to derive an estimate. Off-line analysis of the data can improve resolution by at least one order of magnitude for images of the pupil. A second avenue by which to improve resolution is to increase the optical gain of the imaging setup (i.e., the amount of image motion produced by a given eye rotation). Ophthalmoscopic imaging of retinal blood vessels provides increased optical gain and improved immunity to small head movements but requires a highly sensitive camera. The large number of images involved in a typical experiment imposes great demands on the storage, handling, and processing of data. A major bottleneck had been the real-time digitization and storage of large amounts of video imagery, but recent developments in video compression hardware have made this problem tractable at a reasonable cost. Images of both the retina and the pupil can be analyzed successfully using a basic toolbox of image-processing routines (filtering, correlation, thresholding, etc.), which are, for the most part, well suited to implementation on vectorizing supercomputers.
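
    One way off-line analysis beats integer-pixel hardware resolution, as claimed above, is sub-pixel centroid estimation after thresholding. A minimal dark-pupil sketch (function name and the intensity weighting are assumptions for illustration):

```python
import numpy as np

def pupil_center(image, threshold):
    """Estimate the pupil centre to sub-pixel accuracy: threshold the
    dark pupil, then take the intensity-weighted centroid of the mask.
    Returns (row, col) as floats, or None if nothing is below threshold."""
    mask = image < threshold
    w = (threshold - image) * mask  # darker pixels weigh more
    total = w.sum()
    if total == 0:
        return None
    rows, cols = np.indices(image.shape)
    return (float((rows * w).sum() / total),
            float((cols * w).sum() / total))
```

    Averaging over the many pixels of the pupil boundary is what pushes the effective resolution well below the 10-arc-minute figure quoted for real-time hardware.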

  17. Negative tone imaging process and materials for EUV lithography

    NASA Astrophysics Data System (ADS)

    Tarutani, Shinji; Nihashi, Wataru; Hirano, Shuuji; Yokokawa, Natsumi; Takizawa, Hiroo

    2013-03-01

    The advantages of the negative tone imaging (NTI) process in EUV lithography are demonstrated by optical simulation for 0.25 NA and 0.33 NA illumination systems, from the viewpoint of aerial image quality and photon density. The extendibility of NTI to higher-NA systems is considered for tighter-pitch and smaller contact-hole imaging. Process and material design strategies for NTI are discussed in comparison with the ArF NTI process and materials, and challenges in EUV materials dedicated to the NTI process are discussed as well. A new polymer was designed for the EUV-NTD process, and resists formulated with it demonstrated clear advantages in resolution and sensitivity for isolated trench imaging, as well as 24 nm half-pitch resolution for dense contact holes on a 0.3 NA MET tool.

  18. Digital image processing: a primer for JVIR authors and readers: part 2: digital image acquisition.

    PubMed

    LaBerge, Jeanne M; Andriole, Katherine P

    2003-11-01

    This is the second installment of a three-part series on digital image processing intended to prepare authors for online submission of manuscripts. In the first article of the series, we reviewed the fundamentals of digital image architecture. In this article, we describe the ways that an author can import digital images to the computer desktop. We explore the modern imaging network and explain how to import picture archiving and communications systems (PACS) images to the desktop. Options and techniques for producing digital hard copy film are also presented. PMID:14605101

  19. Ubiquitous image processing: a novel image-enhancement facility for consumers

    NASA Astrophysics Data System (ADS)

    Shaw, Rodney; Johnson, Paul

    2006-01-01

    Image-enhancement technology has been developed from first principles whereby an unskilled user may enhance and optimize the image quality of any digital photograph to personal preference within a matter of seconds. By virtue of its simple user interface, real-time computation, and lack of any appreciable learning curve, the methodology lends itself naturally to many practical imaging applications beyond stand-alone use, including digital cameras, printers and photo kiosks, or provision as an image-processing web service. The basic imaging philosophy and principles leading to the development of this enhancement technology are discussed.

  20. Reducing the absorbed dose in analogue radiography of infant chest images by improving the image quality, using image processing techniques.

    TOXLINE Toxicology Bibliographic Information

    Karimian A; Yazdani S; Askari MA

    2011-09-01

    Radiographic inspection is one of the most widely employed medical testing techniques. Because of the poor contrast and high unsharpness of film radiographs, converting them to a digital format and applying digital image processing is the best way to enhance image quality and assist the interpreter in their evaluation. In this research work, radiographic films of 70 infant chest images with different sizes of defects were selected. The chest images were digitised and processed with two classes of algorithms: (i) spatial-domain and (ii) frequency-domain techniques, implemented in the MATLAB environment. Our results showed that by using these two techniques, defects with small dimensions are detectable. These techniques may therefore help medical specialists to diagnose defects at the primary stages and help to prevent repeat X-ray examinations of paediatric patients.
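
    A representative spatial-domain step of the kind applied to digitised radiographs is global histogram equalisation, which stretches the poor contrast of film images. A minimal sketch (the original work used MATLAB; this Python version is illustrative only):

```python
import numpy as np

def equalize(image, levels=256):
    """Global histogram equalisation for an integer-valued image:
    remap grey levels through the normalised cumulative histogram so
    the output uses the full intensity range."""
    flat = image.ravel()
    hist = np.bincount(flat, minlength=levels)
    cdf = hist.cumsum()
    cdf_min = cdf[np.nonzero(cdf)[0][0]]  # first occupied grey level
    lut = np.clip(
        np.round((cdf - cdf_min) / (flat.size - cdf_min) * (levels - 1)),
        0, levels - 1).astype(np.uint8)
    return lut[image]
```

    The frequency-domain counterpart would instead attenuate low frequencies (e.g. homomorphic or high-boost filtering of the image's Fourier transform) to sharpen small defects.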