Science.gov

Sample records for teaching image processing

  1. Image Processing for Teaching.

    ERIC Educational Resources Information Center

    Greenberg, R.; And Others

    1993-01-01

    The Image Processing for Teaching project provides a powerful medium to excite students about science and mathematics, especially children from minority groups and others whose needs have not been met by traditional teaching. Using professional-quality software on microcomputers, students explore a variety of scientific data sets, including…

  2. Enhancing the Teaching of Digital Processing of Remote Sensing Image Course through Geospatial Web Processing Services

    NASA Astrophysics Data System (ADS)

    di, L.; Deng, M.

    2010-12-01

    Remote sensing (RS) is an essential method of collecting data for Earth science research. Huge amounts of remote sensing data, most of them in image form, have been acquired. Almost all geography departments in the world offer courses in digital processing of remote sensing images. Such courses place emphasis on how to digitally process large amounts of multi-source images to solve real-world problems. However, due to the diversity and complexity of RS images and the shortcomings of current data and processing infrastructure, obstacles to effectively teaching such courses remain. The major obstacles include 1) difficulties for students and educators in finding, accessing, integrating, and using massive RS images, and 2) inadequate processing functions and computing facilities for students to freely explore the massive data. Recent developments in geospatial Web processing service systems, which make massive data, computing power, and processing capabilities available to average Internet users anywhere in the world, promise the removal of these obstacles. The GeoBrain system developed by CSISS is an example of such a system. All functions available in the GRASS open-source GIS have been implemented as Web services in GeoBrain. Petabytes of remote sensing images in NASA data centers, the USGS Landsat data archive, and NOAA CLASS are transparently accessible and processable through GeoBrain. The GeoBrain system is operated on a high-performance cluster server with large disk storage and a fast Internet connection. All GeoBrain capabilities can be accessed from any Internet-connected Web browser. Dozens of universities have used GeoBrain as an ideal platform to support data-intensive remote sensing education. This presentation gives a specific example of using GeoBrain geoprocessing services to enhance the teaching of GGS 588, Digital Remote Sensing, taught at the Department of Geography and Geoinformation Science, George Mason University.
The course uses the textbook "Introductory Digital Image Processing: A Remote Sensing Perspective" authored by John Jensen. The textbook is widely adopted in geography departments around the world for training students in digital processing of remote sensing images. In the traditional setting for the course, the instructor prepares a set of sample remote sensing images to be used in the course. Commercial desktop remote sensing software, such as ERDAS, is used by students for the lab exercises. The students have to do the exercises in the lab and can only use the sample images. For this specific course at GMU, we developed GeoBrain-based lab exercises. With GeoBrain, students can now explore petabytes of remote sensing images in the NASA, NOAA, and USGS data archives instead of dealing only with sample images. Students have a much more powerful computing facility available for their lab exercises. They can explore the data and do the exercises at any time and place they want, as long as they can access the Internet through a Web browser. The feedback from students about the learning experience in digital image processing with the help of GeoBrain web processing services has been very positive. The teaching/lab materials and GeoBrain services are freely available to anyone at http://www.laits.gmu.edu.

  3. Applying a visual language for image processing as a graphical teaching tool in medical imaging

    NASA Astrophysics Data System (ADS)

    Birchman, James J.; Tanimoto, Steven L.; Rowberg, Alan H.; Choi, Hyung-Sik; Kim, Yongmin

    1992-05-01

    Typical user interaction in image processing is with command line entries, pull-down menus, or text menu selections from a list, and as such is not generally graphical in nature. Although applying these interactive methods to construct more sophisticated algorithms from a series of simple image processing steps may be clear to engineers and programmers, it may not be clear to clinicians. A solution to this problem is to implement a visual programming language using visual representations to express image processing algorithms. Visual representations promote a more natural and rapid understanding of image processing algorithms by providing more visual insight into what the algorithms do than the interactive methods mentioned above can provide. Individuals accustomed to dealing with images will be more likely to understand an algorithm that is represented visually. This is especially true of referring physicians, such as surgeons in an intensive care unit. With the increasing acceptance of picture archiving and communications system (PACS) workstations and the trend toward increasing clinical use of image processing, referring physicians will need to learn more sophisticated concepts than simply image access and display. If the procedures that they perform commonly, such as window width and window level adjustment and image enhancement using unsharp masking, are depicted visually in an interactive environment, it will be easier for them to learn and apply these concepts. The software described in this paper is a visual programming language for image processing which has been implemented on the NeXT computer using NeXTstep user interface development tools and other tools in an object-oriented environment. The concept is based upon the description of a visual language titled `Visualization of Vision Algorithms' (VIVA).
Iconic representations of simple image processing steps are placed into a workbench screen and connected together into a dataflow path by the user. As the user creates and edits a dataflow path, more complex algorithms can be built on the screen. Once the algorithm is built, it can be executed, its results can be reviewed, and operator parameters can be interactively adjusted until an optimized output is produced. The optimized algorithm can then be saved and added to the system as a new operator. This system has been evaluated as a graphical teaching tool for window width and window level adjustment, image enhancement using unsharp masking, and other techniques.
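
    The dataflow idea described above, operators chained so each one's output feeds the next, can be sketched in a few lines of Python. This is an illustrative sketch, not the VIVA system itself; the operator names and the tiny 2x2 "image" are invented for the example.

```python
# Sketch of a dataflow path: simple image-processing operators chained
# together, as the iconic workbench in the abstract does graphically.

def invert(img, maxval=255):
    """Invert pixel intensities."""
    return [[maxval - p for p in row] for row in img]

def threshold(img, t=128):
    """Binarize: 255 where pixel >= t, else 0."""
    return [[255 if p >= t else 0 for p in row] for row in img]

def run_pipeline(img, steps):
    """Execute the dataflow path: feed each operator's output to the next."""
    for step in steps:
        img = step(img)
    return img

image = [[10, 200], [130, 90]]
result = run_pipeline(image, [invert, threshold])
```

    Editing the `steps` list corresponds to rearranging icons on the workbench; a tuned pipeline could likewise be saved and reused as a new operator.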

  4. The teaching of computer programming and digital image processing in radiography.

    PubMed

    Allan, G L; Zylinski, J

    1998-06-01

    The increased use of digital processing techniques in Medical Radiations imaging modalities, along with the rapid advance of information technology, has resulted in a significant change in the delivery of radiographic teaching programs. This paper details a methodology used to concurrently educate radiographers in both computer programming and image processing. The students learn to program in Visual Basic for Applications (VBA), and the programming skills are contextualised by requiring the students to write a digital subtraction angiography (DSA) package. Program code generation and the image presentation interface are handled within the Microsoft Excel spreadsheet. The user-friendly nature of this common interface enables all students to readily begin program creation. The teaching of programming and image processing skills by this method may be readily generalised to other vocational fields where digital image manipulation is a professional requirement. PMID:9726504
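
    The core of the DSA package the students write is a pixel-wise subtraction: a pre-contrast "mask" image is subtracted from a post-contrast image so only the dye-filled vessels remain. The article's students implement this in VBA/Excel; below is a hedged Python sketch of the same operation (the offset and the toy 2x2 images are assumptions for illustration).

```python
# Digital subtraction angiography, minimal form: contrast minus mask,
# shifted by an offset so zero-difference pixels land mid-gray, and
# clamped to the 8-bit display range.

def dsa_subtract(contrast, mask, offset=128):
    """Pixel-wise subtraction with an offset, clamped to 0-255."""
    clamp = lambda v: max(0, min(255, v))
    return [[clamp(c - m + offset) for c, m in zip(crow, mrow)]
            for crow, mrow in zip(contrast, mask)]

mask     = [[100, 100], [100, 100]]   # anatomy only
contrast = [[100, 180], [100, 100]]   # anatomy + dye in one vessel pixel
diff = dsa_subtract(contrast, mask)   # unchanged pixels -> 128; vessel stands out
```
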

  5. Teaching image processing and pattern recognition with the Intel OpenCV library

    NASA Astrophysics Data System (ADS)

    Kozłowski, Adam; Królak, Aleksandra

    2009-06-01

    In this paper we present an approach to teaching image processing and pattern recognition with the use of the OpenCV library. Image processing, pattern recognition and computer vision are important branches of science and apply to tasks ranging from critical ones, such as medical diagnostics, to everyday ones, including art and entertainment. It is therefore crucial to provide students of image processing and pattern recognition with the most up-to-date solutions available. In the Institute of Electronics at the Technical University of Lodz we facilitate the teaching process in this subject with the OpenCV library, which is an open-source set of classes, functions and procedures that can be used in programming efficient and innovative algorithms for various purposes. The topics of student projects completed with the help of the OpenCV library range from automatic correction of image quality parameters or creation of panoramic images from video to pedestrian tracking in surveillance camera video sequences or head-movement-based mouse cursor control for the motorically impaired.
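
    To give a flavor of the primitives such student projects build on: OpenCV provides optimized routines like `cv2.blur(img, (3, 3))` for box filtering. A dependency-free Python sketch of that same 3x3 mean filter (border handling simplified to copying, which is an assumption; OpenCV offers several border modes) looks like this:

```python
def mean_filter3(img):
    """3x3 mean (box) filter on interior pixels; border pixels are copied
    unchanged. cv2.blur performs the same averaging in optimized C."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]          # start from a copy for the borders
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            s = sum(img[y + dy][x + dx]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = s // 9             # integer mean of the 9 neighbors
    return out

img = [[0, 0, 0], [0, 90, 0], [0, 0, 0]]   # one bright pixel
smoothed = mean_filter3(img)               # the spike is spread out
```
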

  6. Teaching High School Science Using Image Processing: A Case Study of Implementation of Computer Technology.

    ERIC Educational Resources Information Center

    Greenberg, Richard; Raphael, Jacqueline; Keller, Jill L.; Tobias, Sheila

    1998-01-01

    Outlines an in-depth case study of teachers' use of image processing in biology, earth science, and physics classes in one high school science department. Explores issues surrounding technology implementation. Contains 21 references. (DDR)

  7. A self-teaching image processing and voice-recognition-based, intelligent and interactive system to educate visually impaired children

    NASA Astrophysics Data System (ADS)

    Iqbal, Asim; Farooq, Umar; Mahmood, Hassan; Asad, Muhammad Usman; Khan, Akrama; Atiq, Hafiz Muhammad

    2010-02-01

    A self-teaching image processing and voice recognition based system has been developed to educate visually impaired children, chiefly in their primary education. The system comprises a computer, a vision camera, an ear speaker and a microphone. The camera, attached to the computer, is mounted on the ceiling opposite (at the required angle) the desk on which the book is placed. Sample images and voices, in the form of instructions and commands for English and Urdu alphabets, numeric digits, operators and shapes, are stored in the database. A blind child first reads an embossed character (object) with the fingers, then speaks the answer (the name of the character, shape, etc.) into the microphone. When the child's voice command is received by the microphone, an image is taken by the camera and processed by a MATLAB® program developed with the Image Acquisition and Image Processing toolboxes, which generates a response or the required set of instructions for the child via the ear speaker, resulting in the self-education of a visually impaired child. A speech recognition program, which records and processes the child's commands, is also developed in MATLAB® with the Data Acquisition and Signal Processing toolboxes.

  8. SSMiles: Using Models to Teach about Remote Sensing and Image Processing.

    ERIC Educational Resources Information Center

    Tracy, Dyanne M., Ed.

    1994-01-01

    Presents an introductory lesson on remote sensing and image processing to be used in cooperative groups. Students are asked to solve a problem by gathering information, making inferences, transforming data into other forms, and making and testing hypotheses. Includes four expansions of the lesson and a reproducible student worksheet. (MKR)

  9. Image Processing

    NASA Technical Reports Server (NTRS)

    1993-01-01

    Electronic Imagery, Inc.'s ImageScale Plus software, developed through a Small Business Innovation Research (SBIR) contract with Kennedy Space Center for use on space shuttle Orbiter in 1991, enables astronauts to conduct image processing, prepare electronic still camera images in orbit, display them and downlink images to ground based scientists for evaluation. Electronic Imagery, Inc.'s ImageCount, a spin-off product of ImageScale Plus, is used to count trees in Florida orange groves. Other applications include x-ray and MRI imagery, textile designs and special effects for movies. As of 1/28/98, company could not be located, therefore contact/product information is no longer valid.

  10. Teaching: A Reflective Process

    ERIC Educational Resources Information Center

    German, Susan; O'Day, Elizabeth

    2009-01-01

    In this article, the authors describe how they used formative assessments to ferret out possible misconceptions among middle-school students in a unit about weather-related concepts. Because they teach fifth- and eighth-grade science, this assessment also gives them a chance to see how student understanding develops over the years. This year they…

  11. Image Visualization Medical Image Processing

    E-print Network

    Wu, Xiaolin

    Lecture slides for ENG4BF3 Medical Image Processing, covering image visualization: visualization methods for medical images (used to determine quantitative information about the properties of anatomic tissue types) and two-dimensional image generation and visualization, where the utility of 2D images depends on the viewing approach.

  12. Linear Algebra and Image Processing

    ERIC Educational Resources Information Center

    Allali, Mohamed

    2010-01-01

    We use digital image processing (DIP) technology to enhance the teaching of linear algebra and make the course more visual and interesting. This visual approach, using technology to link linear algebra to DIP, is interesting and unexpected to students as well as many faculty. (Contains 2 tables and 11 figures.)
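
    The link the article exploits is that a grayscale image is simply a matrix, so linear-algebra operations act on it directly. A hedged sketch (the tiny 2x2 "image" and the choice of operations are illustrative assumptions, not the article's figures):

```python
# An image as a matrix: multiplying by a reversal (permutation) matrix
# flips the image vertically, and scalar multiplication changes brightness.

def matmul(A, B):
    """Plain matrix product of two lists-of-lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

image = [[1, 2],
         [3, 4]]

flip = [[0, 1],
        [1, 0]]                    # permutation matrix reversing row order

upside_down = matmul(flip, image)  # rows swapped
brighter = [[2 * p for p in row] for row in image]  # scalar multiple
```
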

  13. Image Processing

    NASA Technical Reports Server (NTRS)

    1987-01-01

    A new spinoff product was derived from Geospectra Corporation's expertise in processing LANDSAT data in a software package. Called ATOM (for Automatic Topographic Mapping), it's capable of digitally extracting elevation information from stereo photos taken by spaceborne cameras. ATOM offers a new dimension of realism in applications involving terrain simulations, producing extremely precise maps of an area's elevations at a lower cost than traditional methods. ATOM has a number of applications involving defense training simulations and offers utility in architecture, urban planning, forestry, petroleum and mineral exploration.

  14. Hyperspectral image processing

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Hyperspectral image processing refers to the use of computer algorithms to extract, store and manipulate both spatial and spectral information contained in hyperspectral images across the visible and near-infrared portion of the electromagnetic spectrum. A typical hyperspectral image processing work...

  15. Subroutines For Image Processing

    NASA Technical Reports Server (NTRS)

    Faulcon, Nettie D.; Monteith, James H.; Miller, Keith W.

    1988-01-01

    Image Processing Library computer program, IPLIB, is collection of subroutines facilitating use of COMTAL image-processing system driven by HP 1000 computer. Functions include addition or subtraction of two images with or without scaling, display of color or monochrome images, digitization of image from television camera, display of test pattern, manipulation of bits, and clearing of screen. Provides capability to read or write points, lines, and pixels from image; read or write at location of cursor; and read or write array of integers into COMTAL memory. Written in FORTRAN 77.
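
    The first function listed, addition of two images with scaling, can be sketched in a few lines. This is a hypothetical Python restating of the idea (IPLIB itself is FORTRAN 77 driving COMTAL display hardware); the scale factor and toy pixel values are assumptions.

```python
# IPLIB-style image addition: pixel-wise sum of two images, scaled so the
# result stays within the 0-255 display range, then clamped.

def add_images(a, b, scale=0.5):
    """Scaled, clamped pixel-wise sum of two equally sized images."""
    clamp = lambda v: max(0, min(255, int(v)))
    return [[clamp(scale * (pa + pb)) for pa, pb in zip(ra, rb)]
            for ra, rb in zip(a, b)]

a = [[100, 200]]
b = [[100, 100]]
summed = add_images(a, b)   # averaging the two inputs
```
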

  16. Personal Teaching: Puzzles, Images, and Stories for Professional Reform.

    ERIC Educational Resources Information Center

    Mostert, Mark P.

    1992-01-01

    This article recommends that teachers' images of teaching, their metaphorical language, and their responses to teaching problems be examined to provide a tool for personal teaching reform. Excerpts from an interview with a preservice teacher illustrate his personal perceptions of teaching as "engagement" and of understanding and ability as a…

  17. Image processing mini manual

    NASA Technical Reports Server (NTRS)

    Matthews, Christine G.; Posenau, Mary-Anne; Leonard, Desiree M.; Avis, Elizabeth L.; Debure, Kelly R.; Stacy, Kathryn; Vonofenheim, Bill

    1992-01-01

    The intent is to provide an introduction to the image processing capabilities available at the Langley Research Center (LaRC) Central Scientific Computing Complex (CSCC). Various image processing software components are described. Information is given concerning the use of these components in the Data Visualization and Animation Laboratory at LaRC.

  18. Apple Image Processing Educator

    NASA Technical Reports Server (NTRS)

    Gunther, F. J.

    1981-01-01

    A software system design is proposed and demonstrated with pilot-project software. The system permits the Apple II microcomputer to be used for personalized computer-assisted instruction in the digital image processing of LANDSAT images. The programs provide data input, menu selection, graphic and hard-copy displays, and both general and detailed instructions. The pilot-project results are considered to be successful indicators of the capabilities and limits of microcomputers for digital image processing education.

  19. Image Processing System

    NASA Technical Reports Server (NTRS)

    1986-01-01

    Mallinckrodt Institute of Radiology (MIR) is using a digital image processing system which employs NASA-developed technology. MIR's computer system is the largest radiology system in the world. It is used in diagnostic imaging. Blood vessels are injected with x-ray dye, and the images which are produced indicate whether arteries are hardened or blocked. A computer program developed by Jet Propulsion Laboratory known as Mini-VICAR/IBIS was supplied to MIR by COSMIC. The program provides the basis for developing the computer imaging routines for data processing, contrast enhancement and picture display.

  20. Multiscale Solar Image Processing

    NASA Astrophysics Data System (ADS)

    Young, C. B.; Byrne, J.; Ireland, J.; Gallagher, P. T.; McAteer, R. J.

    2006-12-01

    Wavelets have been very successfully used as a tool for noise reduction and general processing of images. Despite this, wavelets have inherent limitations with 2-D data. Wavelets are well suited for describing point singularities but much of the interesting information in images is described by edges, lines or curves. Newly developed multiscale transforms address some of these issues. The ridgelet transform takes the multiscale concept of wavelets but applies it to 1-D objects (lines) instead of 0-D objects (points). The curvelet transform likewise applies to multiscale curves. We present a preliminary study of the use of these new multiscale transforms with solar image data. These data include TRACE EUV images and LASCO coronagraph images.
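
    The point-versus-line distinction above is easiest to see from the wavelet transform itself. One level of the (unnormalized) Haar analysis step splits a 1-D signal into pairwise averages (the smooth part) and pairwise differences (the detail); ridgelets apply this machinery along lines rather than at points. A minimal sketch, with an invented 4-sample signal:

```python
# One Haar wavelet analysis step: adjacent samples are replaced by their
# average (low-pass) and half-difference (high-pass).

def haar_step(signal):
    """Return (averages, differences) of adjacent sample pairs."""
    avgs = [(signal[i] + signal[i + 1]) / 2
            for i in range(0, len(signal), 2)]
    diffs = [(signal[i] - signal[i + 1]) / 2
             for i in range(0, len(signal), 2)]
    return avgs, diffs

avgs, diffs = haar_step([9, 7, 3, 5])
```

    Repeating the step on the averages builds the full multiscale decomposition; small differences can be zeroed for noise reduction, which is the use case the abstract describes.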

  1. Role of Clinical Images Based Teaching as a Supplement to Conventional Clinical Teaching in Dermatology

    PubMed Central

    Kumar, Gurumoorthy Rajesh; Madhavi, Sankar; Karthikeyan, Kaliaperumal; Thirunavakarasu, MR

    2015-01-01

    Introduction: Clinical Dermatology is a visually oriented specialty, where visually oriented teaching is more important than in any other specialty. It is essential that students have repeated exposure to common dermatological disorders in the limited hours of Dermatology clinical teaching. Aim: This study was conducted to assess the effect of clinical-image-based teaching as a supplement to patient-based clinical teaching in Dermatology among final year MBBS students. Methods: A clinical batch comprising 19 students was chosen for the study. Apart from the routine clinical teaching sessions, clinical-image-based teaching was conducted. This teaching method was evaluated using a retrospective pre-post questionnaire. Students' performance was assessed using a Photo Quiz and an Objective Structured Clinical Examination (OSCE). Feedback about the addition of the image-based class was collected from students. Results: A significant improvement was observed in the self-assessment scores following image-based teaching. The mean OSCE score was 6.26/10, and that of the Photo Quiz was 13.6/20. Conclusion: Image-based Dermatology teaching has proven to be an excellent supplement to routine clinical case-based teaching.

  2. Quantum image processing?

    E-print Network

    Mario Mastriani

    2015-12-03

    This paper presents a number of problems concerning the practical (real) implementation of the techniques known as Quantum Image Processing. The most serious problem is the recovery of the outcomes after the quantum measurement, which, as demonstrated in this work, is equivalent to a noise measurement and is not considered in the literature on the subject. This is due to several factors: 1) a classical algorithm that uses Dirac's notation and is then coded in MATLAB does not constitute a quantum algorithm; 2) the literature emphasizes the internal representation of the image but says nothing about the classical-to-quantum and quantum-to-classical interfaces and how these are affected by decoherence; 3) the literature does not mention how to implement these proposed internal representations in a practical way (in the laboratory); 4) given that Quantum Image Processing works with generic qubits, it requires measurements along all axes of the Bloch sphere; and 5) others. In return, the technique known as Quantum Boolean Image Processing is mentioned, which works exclusively with computational basis states (CBS). This methodology avoids the problem of quantum measurement, which alters the measured results except in the case of CBS. What has been said so far extends to quantum algorithms outside image processing as well.

  3. Teaching Psychological Report Writing: Content and Process

    ERIC Educational Resources Information Center

    Wiener, Judith; Costaris, Laurie

    2012-01-01

    The purpose of this article is to discuss the process of teaching graduate students in school psychology to write psychological reports that teachers and parents find readable and that guide intervention. The consensus from studies across four decades of research is that effective psychological reports connect to the client's context; have clear…

  4. Computer Aided Teaching of Digital Signal Processing.

    ERIC Educational Resources Information Center

    Castro, Ian P.

    1990-01-01

    Describes a microcomputer-based software package developed at the University of Surrey for teaching digital signal processing to undergraduate science and engineering students. Menu-driven software capabilities are explained, including demonstration of qualitative concepts and experimentation with quantitative data, and examples are given of…

  5. Processing Of Binary Images

    NASA Astrophysics Data System (ADS)

    Hou, H. S.

    1985-07-01

    An overview of the recent progress in the area of digital processing of binary images in the context of document processing is presented here. The topics covered include input scan, adaptive thresholding, halftoning, scaling and resolution conversion, data compression, character recognition, electronic mail, digital typography, and output scan. Emphasis has been placed on illustrating the basic principles rather than descriptions of a particular system. Recent technology advances and research in this field are also mentioned.
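
    One of the techniques surveyed, adaptive thresholding, binarizes each pixel against the mean of its local window rather than a single global value, which copes with unevenly illuminated documents. A hedged, dependency-free sketch (window radius, bias term, and the toy 2x2 "page" are assumptions for illustration):

```python
# Adaptive (local-mean) thresholding for document binarization.

def adaptive_threshold(img, radius=1, bias=0):
    """Set each output pixel to 1 if it exceeds its local-window mean
    (plus an optional bias), else 0. Windows are clipped at the borders."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[yy][xx]
                    for yy in range(max(0, y - radius), min(h, y + radius + 1))
                    for xx in range(max(0, x - radius), min(w, x + radius + 1))]
            local_mean = sum(vals) / len(vals)
            out[y][x] = 1 if img[y][x] > local_mean + bias else 0
    return out

page = [[50, 60], [70, 200]]        # dim paper with one bright spot
binary = adaptive_threshold(page)   # only the outlier crosses its local mean
```
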

  6. Image-Processing Program

    NASA Technical Reports Server (NTRS)

    Roth, D. J.; Hull, D. R.

    1994-01-01

    IMAGEP manipulates digital image data to effect various processing, analysis, and enhancement functions. It is keyboard-driven program organized into nine subroutines. Within subroutines are sub-subroutines also selected via keyboard. Algorithm has possible scientific, industrial, and biomedical applications in study of flows in materials, analysis of steels and ores, and pathology, respectively.

  7. Image processing and reconstruction

    SciTech Connect

    Chartrand, Rick

    2012-06-15

    This talk will examine some mathematical methods for image processing and the solution of underdetermined, linear inverse problems. The talk will have a tutorial flavor, mostly accessible to undergraduates, while still presenting research results. The primary approach is the use of optimization problems. We will find that relaxing the usual assumption of convexity will give us much better results.

  8. Introduction to computer image processing

    NASA Technical Reports Server (NTRS)

    Moik, J. G.

    1973-01-01

    Theoretical backgrounds and digital techniques for a class of image processing problems are presented. Image formation in the context of linear system theory, image evaluation, noise characteristics, mathematical operations on image and their implementation are discussed. Various techniques for image restoration and image enhancement are presented. Methods for object extraction and the problem of pictorial pattern recognition and classification are discussed.

  9. Teaching Process Design through Integrated Process Synthesis

    ERIC Educational Resources Information Center

    Metzger, Matthew J.; Glasser, Benjamin J.; Patel, Bilal; Hildebrandt, Diane; Glasser, David

    2012-01-01

    The design course is an integral part of chemical engineering education. A novel approach to the design course was recently introduced at the University of the Witwatersrand, Johannesburg, South Africa. The course aimed to introduce students to systematic tools and techniques for setting and evaluating performance targets for processes, as well as…

  10. Image processing technology

    SciTech Connect

    Van Eeckhout, E.; Pope, P.; Balick, L.

    1996-07-01

    This is the final report of a two-year, Laboratory-Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The primary objective of this project was to advance image processing and visualization technologies for environmental characterization. This was effected by developing and implementing analyses of remote sensing data from satellite and airborne platforms, and demonstrating their effectiveness in visualization of environmental problems. Many sources of information were integrated as appropriate using geographic information systems.

  11. scikit-image: image processing in Python

    PubMed Central

    Schönberger, Johannes L.; Nunez-Iglesias, Juan; Boulogne, François; Warner, Joshua D.; Yager, Neil; Gouillart, Emmanuelle; Yu, Tony

    2014-01-01

    scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image. More information can be found on the project homepage, http://scikit-image.org. PMID:25024921

  12. Fundamentals of! Image Processing!

    E-print Network

    Erdem, Erkut

    "! · f: image ! image! · Uses: ! ­ Enhance images! · Noise reduction, smooth, resize, increase contrast reduction" · Assume image is degraded with an additive model.! · Then,! !Observation ! = True signal + noise! !Observed image = Actual image + noise! low-pass" filters! high-pass" filters! smooth the image! Common

  14. Image processing of thermal infrared images

    SciTech Connect

    Schott, J. R.

    1989-09-01

    Techniques for digital processing of thermal infrared images are addressed. In particular, techniques that are uniquely required for thermal imagery are emphasized. This includes a treatment of how to implement absolute temperature calibration algorithms, methods for registering and combining multiple thermal infrared images, and methods for combining thermal infrared, reflected visible, and near-infrared data. In addition, the characteristics and methods for analysis of apparent thermal inertia images and thermal infrared multispectral images are treated. 25 refs.
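
    To make "absolute temperature calibration" concrete: a common minimal form fits a linear map from sensor digital counts to two known blackbody reference temperatures, then applies it to scene pixels. The two-point scheme and all numbers below are assumptions for illustration, not Schott's actual algorithm.

```python
# Two-point radiometric calibration: a linear fit through cold and hot
# blackbody reference measurements maps raw counts to temperature.

def two_point_calibration(count_cold, temp_cold, count_hot, temp_hot):
    """Return a function mapping digital counts to temperature (Kelvin),
    from a linear fit through two blackbody reference points."""
    gain = (temp_hot - temp_cold) / (count_hot - count_cold)
    offset = temp_cold - gain * count_cold
    return lambda count: gain * count + offset

# Hypothetical references: 1000 counts at 280 K, 3000 counts at 320 K.
to_kelvin = two_point_calibration(1000, 280.0, 3000, 320.0)
scene_temp = to_kelvin(2000)   # a scene pixel halfway between the references
```
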

  15. Image Processing Diagnostics: Emphysema

    NASA Astrophysics Data System (ADS)

    McKenzie, Alex

    2009-10-01

    Currently the computerized tomography (CT) scan can detect emphysema sooner than traditional x-rays, but other tests are required to measure more accurately the amount of affected lung. CT scan images show clearly whether a patient has emphysema but cannot, by visual inspection alone, quantify the degree of the disease, as it appears merely as subtle, barely distinct dark spots on the lung. Our goal is to create a software plug-in to interface with existing open source medical imaging software, to automate the process of accurately diagnosing and determining emphysema severity levels in patients. This will be accomplished by performing a number of statistical calculations using data taken from CT scan images of several patients representing a wide range of severity of the disease. These analyses include an examination of the deviation from a normal distribution curve to determine skewness, a commonly used statistical parameter. Our preliminary results show that this method of assessment appears to be more accurate and robust than currently utilized methods which involve looking at percentages of radiodensities in air passages of the lung.
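
    The skewness statistic the abstract relies on can be computed directly from a sample of CT radiodensity values. A minimal sketch (the study's actual pipeline runs inside a medical-imaging plug-in; the sample lists here are invented):

```python
import math

def skewness(samples):
    """Fisher-Pearson skewness: zero for a symmetric distribution,
    nonzero when one tail of the density histogram is stretched."""
    n = len(samples)
    mean = sum(samples) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in samples) / n)
    return sum(((x - mean) / sd) ** 3 for x in samples) / n

sym = skewness([1, 2, 3, 4, 5])        # symmetric sample -> skewness 0
tailed = skewness([1, 1, 1, 10])       # long right tail -> positive skewness
```
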

  16. ASPIC: STARLINK image processing package

    NASA Astrophysics Data System (ADS)

    Davenhall, A. C.; Hartley, Ken F.; Penny, Alan J.; Kelly, B. D.; King, Dave J.; Lupton, W. F.; Tudhope, D.; Pike, C. D.; Cooke, J. A.; Pence, W. D.; Wallace, Patrick T.; Brownrigg, D. R. K.; Baines, Dave W. T.; Warren-Smith, Rodney F.; McNally, B. V.; Bell, L. L.; Jones, T. A.; Terrett, Dave L.; Pearce, D. J.; Carey, J. V.; Currie, Malcolm J.; Benn, Chris; Beard, S. M.; Giddings, Jack R.; Balona, Luis A.; Harrison, B.; Wood, Roger; Sparkes, Bill; Allan, Peter M.; Berry, David S.; Shirt, J. V.

    2015-10-01

    ASPIC handled basic astronomical image processing. Early releases concentrated on image arithmetic, standard filters, expansion/contraction/selection/combination of images, and displaying and manipulating images on the ARGS and other devices. Later releases added new astronomy-specific applications to this sound framework. The ASPIC collection of about 400 image-processing programs was written using the Starlink "interim" environment in the 1980s; the software is now obsolete.

  17. Smart Image Enhancement Process

    NASA Technical Reports Server (NTRS)

    Jobson, Daniel J. (Inventor); Rahman, Zia-ur (Inventor); Woodell, Glenn A. (Inventor)

    2012-01-01

    Contrast and lightness measures are used to first classify the image as being one of non-turbid and turbid. If turbid, the original image is enhanced to generate a first enhanced image. If non-turbid, the original image is classified in terms of a merged contrast/lightness score based on the contrast and lightness measures. The non-turbid image is enhanced to generate a second enhanced image when a poor contrast/lightness score is associated therewith. When the second enhanced image has a poor contrast/lightness score associated therewith, this image is enhanced to generate a third enhanced image. A sharpness measure is computed for one image that is selected from (i) the non-turbid image, (ii) the first enhanced image, (iii) the second enhanced image when a good contrast/lightness score is associated therewith, and (iv) the third enhanced image. If the selected image is not sharp, it is sharpened to generate a sharpened image. The final image is selected from the selected image and the sharpened image.
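
    The selection logic in the abstract reads as a decision cascade. The following is a hypothetical Python restating of that control flow only; the scoring functions, enhancers, and toy integer "image" are stand-ins, not the patented measures.

```python
# Decision cascade from the abstract: classify turbidity, enhance up to
# three times based on a contrast/lightness score, then sharpen if needed.

def smart_enhance(image, is_turbid, score_fn, enhance, sharpen, is_sharp):
    if is_turbid(image):
        selected = enhance(image)                 # first enhanced image
    else:
        selected = image                          # non-turbid image
        if score_fn(selected) == "poor":
            selected = enhance(selected)          # second enhanced image
            if score_fn(selected) == "poor":
                selected = enhance(selected)      # third enhanced image
    if not is_sharp(selected):
        selected = sharpen(selected)              # sharpened image
    return selected

# Toy stand-ins: the "image" is an int, enhancement adds 1, sharpening
# multiplies by 10.
result = smart_enhance(
    0,
    is_turbid=lambda x: False,
    score_fn=lambda x: "poor" if x < 2 else "good",
    enhance=lambda x: x + 1,
    sharpen=lambda x: x * 10,
    is_sharp=lambda x: x >= 5,
)
```
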

  18. Toward a Student-Centred Process of Teaching Arithmetic

    ERIC Educational Resources Information Center

    Eriksson, Gota

    2011-01-01

    This article describes a way toward a student-centred process of teaching arithmetic, where the content is harmonized with the students' conceptual levels. At school start, one classroom teacher is guided in recurrent teaching development meetings in order to develop teaching based on the students' prerequisites and to successively learn the…

  19. Session 0575 Suggestions for Teaching the Engineering Research Process

    E-print Network

    Duan, Zhenhai

    Engineering degree programs should make a concerted effort to teach students how to become good researchers; the paper offers several specific techniques to help teach the basic skills necessary for performing good engineering research. (Session 0575: Suggestions for Teaching the Engineering Research Process, David J. Lilja)

  20. Teaching People and Machines to Enhance Images

    NASA Astrophysics Data System (ADS)

    Berthouzoz, Floraine Sara Martianne

    Procedural tasks such as following a recipe or editing an image are very common. They require a person to execute a sequence of operations (e.g. chop onions, or sharpen the image) in order to achieve the goal of the task. People commonly use step-by-step tutorials to learn these tasks. We focus on software tutorials, more specifically photo manipulation tutorials, and present a set of tools and techniques to help people learn, compare and automate photo manipulation procedures. We describe three different systems that are each designed to help with a different stage in acquiring procedural knowledge. Today, people primarily rely on hand-crafted tutorials in books and on websites to learn photo manipulation procedures. However, putting together a high quality step-by-step tutorial is a time-consuming process. As a consequence, many online tutorials are poorly designed which can lead to confusion and slow down the learning process. We present a demonstration-based system for automatically generating succinct step-by-step visual tutorials of photo manipulations. An author first demonstrates the manipulation using an instrumented version of GIMP (GNU Image Manipulation Program) that records all changes in interface and application state. From the example recording, our system automatically generates tutorials that illustrate the manipulation using images, text, and annotations. It leverages automated image labeling (recognition of facial features and outdoor scene structures in our implementation) to generate more precise text descriptions of many of the steps in the tutorials. A user study finds that our tutorials are effective for learning the steps of a procedure; users are 20-44% faster and make 60-95% fewer errors when using our tutorials than when using screencapture video tutorials or hand-designed tutorials. We also demonstrate a new interface that allows learners to navigate, explore and compare large collections (i.e. 
thousands) of photo manipulation tutorials based on their command-level structure. Sites such as tutorialized.com or good-tutorials.com collect tens of thousands of photo manipulation tutorials. These collections typically contain many different tutorials for the same task. For example, there are many different tutorials that describe how to recolor the hair of a person in an image. Learners often want to compare these tutorials to understand the different ways a task can be done. They may also want to identify common strategies that are used across tutorials for a variety of tasks. However, the large number of tutorials in these collections and their inconsistent formats can make it difficult for users to systematically explore and compare them. Current tutorial collections do not exploit the underlying command-level structure of tutorials, and to explore the collection users have to either page through long lists of tutorial titles or perform keyword searches on the natural language tutorial text. We present a new browsing interface to help learners navigate, explore and compare collections of photo manipulation tutorials based on their command-level structure. Our browser indexes tutorials by their commands, identifies common strategies within the tutorial collection, and highlights the similarities and differences between sets of tutorials that execute the same task. User feedback suggests that our interface is easy to understand and use, and that users find command-level browsing to be useful for exploring large tutorial collections. They strongly preferred to explore tutorial collections with our browser over keyword search. Finally, we present a framework for generating content-adaptive macros (programs) that can transfer complex photo manipulation procedures to new target images. After learners master a photo manipulation procedure, they often repeatedly apply it to multiple images. 
For example, they might routinely apply the same vignetting effect to all their photographs. This process can be very tedious especially for procedures that involve many steps. While image manipulation programs pro

  1. Fundamentals of Image Processing

    E-print Network

    Erdem, Erkut

    Motivation: noise reduction (lecture-slide excerpt). Assume the image is degraded with an additive model: Observed image = Actual image + noise. Low-pass filters smooth the image, while high-pass filters respond at the edges and at noise points. Common types of noise include salt-and-pepper noise (random impulses).
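    The additive model and the salt-and-pepper noise mentioned in the excerpt above pair naturally with a median filter, which removes impulse noise that a linear low-pass filter would only smear. A minimal 1-D sketch (the window size and edge handling are illustrative choices):

```python
def median_filter_1d(signal, k=3):
    """Remove impulse (salt-and-pepper) noise with a sliding-window median."""
    half = k // 2
    out = []
    for i in range(len(signal)):
        # Window is clipped at the signal boundaries.
        window = signal[max(0, i - half): i + half + 1]
        out.append(sorted(window)[len(window) // 2])
    return out
```

    Applied to [1, 1, 9, 1, 1], the single impulse (9) is replaced by the local median, giving [1, 1, 1, 1, 1].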

  2. The APL image processing laboratory

    NASA Technical Reports Server (NTRS)

    Jenkins, J. O.; Randolph, J. P.; Tilley, D. G.; Waters, C. A.

    1984-01-01

    The present and proposed capabilities of the Central Image Processing Laboratory, which provides a powerful resource for the advancement of programs in missile technology, space science, oceanography, and biomedical image analysis, are discussed. The use of image digitizing, digital image processing, and digital image output permits a variety of functional capabilities, including: enhancement, pseudocolor, convolution, computer output microfilm, presentation graphics, animations, transforms, geometric corrections, and feature extractions. The hardware and software of the Image Processing Laboratory, consisting of digitizing and processing equipment, software packages, and display equipment, is described. Attention is given to applications for imaging systems, map geometric correction, raster movie display of Seasat ocean data, Seasat and Skylab scenes of Nantucket Island, Space Shuttle imaging radar, differential radiography, and a computerized tomographic scan of the brain.

  3. The Teacher as Bag Lady: Images and Metaphors of Teaching.

    ERIC Educational Resources Information Center

    Parks, John G.

    1996-01-01

    Literature is examined for its metaphors for teaching and teachers, including the teacher as custodian and steward of knowledge, as sower of knowledge, and as "trickster," a picaresque, mythical figure who offers solutions, often inadvertently. The roles of kindness and cruelty in the learning/teaching process are also considered. (MSE)

  4. Images of Teaching with Computer Technology: A Metaphorical Perspective. Research Paper: Moving Beyond the Crossroads: Teachers as Agents for Change.

    ERIC Educational Resources Information Center

    Subramaniam, Karthigeyan

    This paper focuses on teachers' images of their own computer-integrated teaching: the way the teaching process is structured by teachers' relationships with computer technology, and how this relationship defines their work, thoughts, voices, and experiences. It also examines how teachers themselves comprehend and convey their roles and thus…

  5. Peer Observation of Teaching: A Decoupled Process

    ERIC Educational Resources Information Center

    Chamberlain, John Martyn; D'Artrey, Meriel; Rowe, Deborah-Anne

    2011-01-01

    This article details the findings of research into the academic teaching staff experience of peer observation of their teaching practice. Peer observation is commonly used as a tool to enhance a teacher's continuing professional development. Research participants acknowledged its ability to help develop their teaching practice, but they also…

  6. Cooperative processes in image segmentation

    NASA Technical Reports Server (NTRS)

    Davis, L. S.

    1982-01-01

    Research into the role of cooperative, or relaxation, processes in image segmentation is surveyed. Cooperative processes can be employed at several levels of the segmentation process as a preprocessing enhancement step, during supervised or unsupervised pixel classification and, finally, for the interpretation of image segments based on segment properties and relations.
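    A cooperative (relaxation) process of the kind surveyed above can be sketched as an iterative probability update in which each site's label probability is reinforced by compatible neighbours. This toy 1-D version, with a single "object" label and a hypothetical compatibility coefficient, is only meant to show the shape of such updates:

```python
def relax(probs, compat=0.5, iters=10):
    """One-label relaxation sketch: each site's probability of label 'object'
    is nudged toward the average of its neighbours (positive compatibility)."""
    p = list(probs)
    for _ in range(iters):
        new = []
        for i, pi in enumerate(p):
            left = p[i - 1] if i > 0 else pi
            right = p[i + 1] if i < len(p) - 1 else pi
            support = (left + right) / 2           # neighbour support for 'object'
            q = pi + compat * pi * (support - pi)  # multiplicative-style update
            new.append(min(1.0, max(0.0, q)))      # keep probabilities in [0, 1]
        p = new
    return p
```

    On a sequence like [0.9, 0.1, 0.9], the isolated low probability in the middle is pulled up by its confident neighbours, which is the smoothing effect relaxation contributes during pixel classification.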

  7. The Kwasan Image Processing System.

    NASA Astrophysics Data System (ADS)

    Nakai, Y.; Kitai, R.; Asada, T.; Iwasaki, K.

    The Kwasan Image Processing System is a general purpose interactive image processing and analyzing system designed to process a large amount of photographic and photoelectric data. The hardware of the system mainly consists of a PDS MICRO-10 microdensitometer, a VAX-11/750 minicomputer, a 456-Mbyte Winchester disk and a VS11 color-graphic terminal. The application programs "PDS, KIPS, STII" enable users to analyze spectrographic plates and two-dimensional images without site-specific knowledge of programming.

  8. Voyager image processing at the Image Processing Laboratory

    NASA Technical Reports Server (NTRS)

    Jepsen, P. L.; Mosher, J. A.; Yagi, G. M.; Avis, C. C.; Lorre, J. J.; Garneau, G. W.

    1980-01-01

    This paper discusses new digital processing techniques as applied to the Voyager Imaging Subsystem and devised to explore atmospheric dynamics, spectral variations, and the morphology of Jupiter, Saturn and their satellites. Radiometric and geometric decalibration processes, the modulation transfer function, and processes to determine and remove photometric properties of the atmosphere and surface of Jupiter and its satellites are examined. It is exhibited that selected images can be processed into 'approach at constant longitude' time lapse movies which are useful in observing atmospheric changes of Jupiter. Photographs are included to illustrate various image processing techniques.

  9. Digital image processing and analysis

    E-print Network

    van Vliet, Lucas J.

    Digital image processing and analysis, et 8005 (et 4085), Lucas J. van Vliet, TU Delft, Faculty of Applied Sciences, Imaging Science and Technology, Pattern Recognition (lecture-slide excerpt): properties of CCD cameras (linearity), image intensifiers, and color (RGB) acquisition via mosaic filters, photon sorting, and 3CCD designs.

  10. Enhancing the Teaching-Learning Process: A Knowledge Management Approach

    ERIC Educational Resources Information Center

    Bhusry, Mamta; Ranjan, Jayanthi

    2012-01-01

    Purpose: The purpose of this paper is to emphasize the need for knowledge management (KM) in the teaching-learning process in technical educational institutions (TEIs) in India, and to assert the impact of information technology (IT) based KM intervention in the teaching-learning process. Design/methodology/approach: The approach of the paper is…

  11. Industrial Applications of Image Processing

    NASA Astrophysics Data System (ADS)

    Ciora, Radu Adrian; Simion, Carmen Mihaela

    2014-11-01

    The recent advances in sensor quality and processing power provide us with excellent tools for designing more complex image processing and pattern recognition tasks. In this paper we review existing applications of image processing and pattern recognition in industrial engineering. First we define the role of vision in an industrial setting. Then an overview of image processing techniques, feature extraction, object recognition and industrial robotic guidance is presented, along with examples of implementations of such techniques in industry, including automated visual inspection, process control, part identification, and robot control. Finally, we present some conclusions regarding the investigated topics and directions for future investigation.

  12. Fundamentals of Image Processing

    E-print Network

    Erdem, Erkut

    Color (Hacettepe University; lecture-slide excerpt): why does a visual system need color? (an incomplete list: to tell what food is edible, to distinguish material changes); review of image formation (what determines the brightness of an image); outline: color and light, color spaces.

  13. Image processing in forensic pathology.

    PubMed

    Oliver, W R

    1998-03-01

    Image processing applications in forensic pathology are becoming increasingly important. This article introduces basic concepts in image processing as applied to problems in forensic pathology in a non-mathematical context. Discussions of contrast enhancement, digital encoding, compression, deblurring, and other topics are presented. PMID:9523070

  14. An image processing algorithm for PPCR imaging

    NASA Astrophysics Data System (ADS)

    Cowen, Arnold R.; Giles, Anthony; Davies, Andrew G.; Workman, A.

    1993-09-01

    During 1990 The UK Department of Health installed two Photostimulable Phosphor Computed Radiography (PPCR) systems in the General Infirmary at Leeds with a view to evaluating the clinical and physical performance of the technology prior to its introduction into the NHS. An issue that came to light from the outset of the projects was the radiologists' reservations about the influence of the standard PPCR computerized image processing on image quality and diagnostic performance. An investigation was set up by FAXIL to develop an algorithm to produce single format high quality PPCR images that would be easy to implement and allay the concerns of radiologists.

  15. SWNT Imaging Using Multispectral Image Processing

    NASA Astrophysics Data System (ADS)

    Blades, Michael; Pirbhai, Massooma; Rotkin, Slava V.

    2012-02-01

    A flexible optical system was developed to image carbon single-wall nanotube (SWNT) photoluminescence using the multispectral capabilities of a typical CCD camcorder. The built-in Bayer filter of the CCD camera was utilized, using OpenCV C++ libraries for image processing, to decompose the image generated in a high magnification epifluorescence microscope setup into three pseudo-color channels. By carefully calibrating the filter beforehand, it was possible to extract spectral data from these channels, and effectively isolate the SWNT signals from the background.
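    The channel decomposition step described above (done in the original work with OpenCV on a Bayer-filtered CCD) amounts to splitting an image into per-color planes. A library-free Python sketch of that splitting, on an image represented as nested lists of (R, G, B) tuples:

```python
def split_channels(rgb_image):
    """Decompose an H x W image of (R, G, B) tuples into three color planes."""
    channels = ([], [], [])          # one plane each for R, G, B
    for row in rgb_image:
        for c in range(3):
            channels[c].append([pixel[c] for pixel in row])
    return channels
```

    Once separated, each plane can be calibrated and analyzed independently, which is how the pseudo-color channels were used to isolate the SWNT signal from the background.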

  16. Teaching Image Computation: From Computer Graphics to Computer Vision

    E-print Network

    Draper, Bruce A.

    Teaching Image Computation: From Computer Graphics to Computer Vision Bruce A. Draper and J. Ross Beveridge Department of Computer Science Colorado State University Fort Collins, CO 80523 draper@cs.colostate.edu ross@cs.colostate.edu Keywords: Computer Vision, Computer Graphics, Education, Course Design

  17. Astronomical Image Processing with Hadoop

    NASA Astrophysics Data System (ADS)

    Wiley, K.; Connolly, A.; Krughoff, S.; Gardner, J.; Balazinska, M.; Howe, B.; Kwon, Y.; Bu, Y.

    2011-07-01

    In the coming decade astronomical surveys of the sky will generate tens of terabytes of images and detect hundreds of millions of sources every night. With a requirement that these images be analyzed in real time to identify moving sources such as potentially hazardous asteroids or transient objects such as supernovae, these data streams present many computational challenges. In the commercial world, new techniques that utilize cloud computing have been developed to handle massive data streams. In this paper we describe how cloud computing, and in particular the map-reduce paradigm, can be used in astronomical data processing. We will focus on our experience implementing a scalable image-processing pipeline for the SDSS database using Hadoop (http://hadoop.apache.org). This multi-terabyte imaging dataset approximates future surveys such as those which will be conducted with the LSST. Our pipeline performs image coaddition in which multiple partially overlapping images are registered, integrated and stitched into a single overarching image. We will first present our initial implementation, then describe several critical optimizations that have enabled us to achieve high performance, and finally describe how we are incorporating a large in-house existing image processing library into our Hadoop system. The optimizations involve prefiltering of the input to remove irrelevant images from consideration, grouping individual FITS files into larger, more efficient indexed files, and a hybrid system in which a relational database is used to determine the input images relevant to the task. The incorporation of an existing image processing library, written in C++, presented difficult challenges since Hadoop is programmed primarily in Java. We will describe how we achieved this integration and the sophisticated image processing routines that were made feasible as a result. 
We will end by briefly describing the longer term goals of our work, namely detection and classification of transient objects and automated object classification.
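    The map-reduce coaddition pipeline described above can be sketched in miniature: the map phase emits (sky-cell, pixel) pairs and the reduce phase averages everything that lands on the same cell. The dictionary-based image representation here is a stand-in for registered FITS pixels, not the actual Hadoop pipeline:

```python
from collections import defaultdict

def map_phase(images):
    """Map: emit a (sky-cell, pixel-value) pair for every pixel of every image.
    Each image is modeled as a dict mapping sky-cell id -> pixel value."""
    for image in images:
        for cell, value in image.items():
            yield cell, value

def reduce_phase(pairs):
    """Reduce: average all values that landed on the same sky cell (coaddition)."""
    groups = defaultdict(list)
    for cell, value in pairs:
        groups[cell].append(value)
    return {cell: sum(v) / len(v) for cell, v in groups.items()}
```

    Two partially overlapping "images" {a: 2.0, b: 4.0} and {b: 6.0} coadd to {a: 2.0, b: 5.0}: cells covered once pass through, cells covered twice are averaged, which is the registration-and-integration step at the heart of the pipeline.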

  18. Image Processing: Some Challenging Problems

    NASA Astrophysics Data System (ADS)

    Huang, T. S.; Aizawa, K.

    1993-11-01

    Image processing can be broadly defined as the manipulation of signals which are inherently multidimensional. The most common such signals are photographs and video sequences. The goals of processing or manipulation can be (i) compression for storage or transmission; (ii) enhancement or restoration; (iii) analysis, recognition, and understanding; or (iv) visualization for human observers. The use of image processing techniques has become almost ubiquitous; they find applications in such diverse areas as astronomy, archaeology, medicine, video communication, and electronic games. Nonetheless, many important problems in image processing remain unsolved. It is the goal of this paper to discuss some of these challenging problems. In Section I, we mention a number of outstanding problems. Then, in the remainder of this paper, we concentrate on one of them: very-low-bit-rate video compression. This is chosen because it involves almost all aspects of image processing.

  19. Fundamentals of Image Processing

    E-print Network

    Erdem, Erkut

    How to interpret a 2-D Fourier spectrum (lecture-slide excerpt; image courtesy of Technology Review): the forward transform (a double sum over k = 0..M-1 and l = 0..N-1) expresses an image in terms of sines and cosines. For every w from 0 to infinity, F(w) holds the amplitude of the corresponding sinusoid, and the inverse Fourier transform recovers f(x).

  20. Fundamentals of Image Processing

    E-print Network

    Erdem, Erkut

    Color Perception and Color Spaces (lecture-slide excerpt; slide credits: W. Freeman, M. J. Black): why does a visual system need color? (an incomplete list: to tell what food is edible, …); review of image formation (what determines brightness); outline: perception of color and light, color spaces.

  1. Fundamentals of Image Processing

    E-print Network

    Erdem, Erkut

    Color Perception and Color Spaces (lecture-slide excerpt; slide credit: M. J. Black): why does a visual system need color? (an incomplete list: to tell what food is edible, …); review of image formation (what determines the brightness); outline: perception of color and light, color spaces.

  2. Fractional Modeling Method of Cognition Process in Teaching Evaluation

    NASA Astrophysics Data System (ADS)

    Zhao, Chunna; Wu, Minhua; Zhao, Yu; Luo, Liming; Li, Yingshun

    In some assessment and decision systems, the cognition process is translated into other quantitative indicators. For a teaching evaluation system, a fractional model of the cognition process is proposed in this paper. The model is built on fractional calculus theory combined with the features of classroom teaching. The fractional coefficient is determined from actual course information, and a self-parameter for each student is decided by that student's actual situation and potential. The model is described in detail through block diagrams. Because the fractional cognition process model gives an objective quantitative description, teaching quality assessments based on it can be more objective and accurate.
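    The paper's fractional model is not reproduced here, but the fractional-calculus machinery it builds on can be illustrated with the standard Grünwald-Letnikov difference, whose binomial weights follow a simple recurrence. For alpha = 1 the weights reduce to the ordinary first difference:

```python
def gl_weights(alpha, n):
    """Grünwald-Letnikov binomial weights w_k = (-1)^k * C(alpha, k),
    computed by the recurrence w_k = w_{k-1} * (1 - (alpha + 1) / k)."""
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (1 - (alpha + 1) / k))
    return w

def fractional_diff(x, alpha):
    """Apply the fractional difference of order alpha to a sequence x."""
    w = gl_weights(alpha, len(x))
    return [sum(w[k] * x[i - k] for k in range(i + 1)) for i in range(len(x))]
```

    With alpha = 1 the weights are [1, -1, 0, ...] and the operator is the ordinary backward difference; fractional alpha values between 0 and 1 give the long-memory weighting that fractional models of gradual processes (such as learning) exploit.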

  3. Image Processing Mar. 12, 2013

    E-print Network

    Erdem, Erkut

    BBM 663 Image Processing, Mar. 12, 2013, Erkut Erdem — Color (lecture-slide excerpt; slide credits: W. Freeman, M. J. Black): why does a visual system need color? (an incomplete list: to tell what food is edible, …); review of image formation; outline: color and light, color spaces.

  4. Fuzzy image processing in sun sensor

    NASA Technical Reports Server (NTRS)

    Mobasser, S.; Liebe, C. C.; Howard, A.

    2003-01-01

    This paper describes how fuzzy image processing is implemented in the instrument. A comparison of the fuzzy image processing with a more conventional image processing algorithm is provided and shows that the fuzzy approach yields better accuracy than conventional image processing.

  5. Acousto-optic image processing.

    PubMed

    Balakshy, Vladimir I; Kostyuk, Dmitry E

    2009-03-01

    Acousto-optic processing of images is based on the angular selectivity of acousto-optic interaction, resulting in spatial filtration of the image spectrum. We present recent theoretical and experimental investigations carried out in this field. Much attention is given to analyzing how the form of the acousto-optic cell's transfer function depends on the crystal cut, the geometry of acousto-optic interaction, and the ultrasound frequency. Computer simulation results of the two-dimensional acousto-optic spatial filtration of some elementary images are presented. A new method of phase object visualization is suggested and examined that makes it possible to separate amplitude and phase information contained in an optical image. The potentialities of acousto-optic image processing are experimentally demonstrated by examples of edge enhancement and optical wavefront visualization effects. PMID:19252612
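    Spatial filtration of an image spectrum, of the kind the acousto-optic cell performs optically, can be mimicked digitally: transform, suppress a band of spatial frequencies, transform back. A pure-Python 1-D sketch in which removing the lowest frequency (the DC term) gives a simple edge-enhancing high-pass (the cutoff choice is illustrative):

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform (adequate for a short demo signal)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse DFT, returning the real part of the reconstruction."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

def high_pass(x, cutoff=1):
    """Zero the lowest spatial frequencies; cutoff=1 removes only the DC term."""
    X = dft(x)
    N = len(X)
    for k in range(N):
        if min(k, N - k) < cutoff:   # symmetric frequency index below cutoff
            X[k] = 0
    return idft(X)
```

    On a flat signal with one bright sample, [1, 1, 3, 1], suppressing the DC term yields [-0.5, -0.5, 1.5, -0.5]: the uniform background is removed and the discontinuity stands out, which is the essence of the edge-enhancement effect demonstrated optically in the paper.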

  6. Using Classic and Contemporary Visual Images in Clinical Teaching.

    ERIC Educational Resources Information Center

    Edwards, Janine C.

    1990-01-01

    The patient's body is an image that medical students and residents use to process information. The classic use of images of the patient is qualitative and personal; the contemporary use is quantitative and impersonal. Contemporary imaging includes radiographic, nuclear, scintigraphic, and nuclear magnetic resonance…

  7. Images on the Web for Astronomy Teaching: Image Repositories

    NASA Astrophysics Data System (ADS)

    Fraknoi, Andrew

    This guide lists and reviews 61 Web sites with catalogs of astronomical images that are useful for both formal and informal education. Some are general sites (including images covering many topics), whereas others are particular to one topic or one instrument. We briefly discuss getting started in using images, and copyright and fair use issues.

  8. Image Processing Erkut Erdem

    E-print Network

    Erdem, Erkut

    Scene Analysis and Eye Movements (lecture-slide excerpt; adapted from T. Judd, M. S. Lewicki): our visual system processes an enormous amount of data coming from the retina, roughly 10^9 bits/sec (Itti), attending to what is most important.

  9. Law and Pop Culture: Teaching and Learning about Law Using Images from Popular Culture.

    ERIC Educational Resources Information Center

    Joseph, Paul R.

    2000-01-01

    Believes that using popular culture images of law, lawyers, and the legal system is an effective way for teaching about real law. Offers examples of incorporating popular culture images when teaching about law. Includes suggestions for teaching activities, a mock trial based on Dr. Seuss's book "Yertle the Turtle," and additional resources. (CMK)

  10. Integrating image processing in PACS.

    PubMed

    Faggioni, Lorenzo; Neri, Emanuele; Cerri, Francesca; Turini, Francesca; Bartolozzi, Carlo

    2011-05-01

    Integration of RIS and PACS services into a single solution has become a widespread reality in daily radiological practice, allowing substantial acceleration of workflow with greater ease of work compared with older generation film-based radiological activity. In particular, the fast and spectacular recent evolution of digital radiology (with special reference to cross-sectional imaging modalities, such as CT and MRI) has been paralleled by the development of integrated RIS-PACS systems with advanced image processing tools (two- and/or three-dimensional) that were an exclusive task of costly dedicated workstations until a few years ago. This new scenario is likely to further improve productivity in the radiology department with reduction of the time needed for image interpretation and reporting, as well as to cut costs for the purchase of dedicated standalone image processing workstations. In this paper, a general description of typical integrated RIS-PACS architecture with image processing capabilities will be provided, and the main available image processing tools will be illustrated. PMID:19619971

  11. [Significance of teaching the Nursing Process for the faculty].

    PubMed

    Franco Corona, M Brenda Eugenia; Carvalho, Emilia Campos de

    2005-01-01

    This paper aimed to analyze the significance of teaching the Nursing Process (NP) for faculty, through the Representational Theory of Significance. Ten professors from Guanajuato University participated in the investigation. Data were collected through semi-structured interview and questionnaire. The analysis was done from the perspective of the mentioned theory. All faculty members know, teach and apply the NP. They also refer to the importance of its teaching and practice by professionals. In conclusion, regarding the significance of the nursing process, faculty members tend to recognize it as a scientific method and an important instrument in professional nursing activities; as for the meaning of teaching the nursing process, it is considered as a well-founded, indispensable and modern instruction for the discipline, as long as it covers the five stages it consists of. PMID:16444396

  12. Ethical implications of digital images for teaching and learning purposes: an integrative review

    PubMed Central

    Kornhaber, Rachel; Betihavas, Vasiliki; Baber, Rodney J

    2015-01-01

    Background Digital photography has simplified the process of capturing and utilizing medical images. The process of taking high-quality digital photographs has been recognized as efficient, timely, and cost-effective. In particular, the evolution of smartphone and comparable technologies has become a vital component in teaching and learning of health care professionals. However, ethical standards in relation to digital photography for teaching and learning have not always been of the highest standard. The inappropriate utilization of digital images within the health care setting has the capacity to compromise patient confidentiality and increase the risk of litigation. Therefore, the aim of this review was to investigate the literature concerning the ethical implications for health professionals utilizing digital photography for teaching and learning. Methods A literature search was conducted utilizing five electronic databases, PubMed, Embase (Excerpta Medica Database), Cumulative Index to Nursing and Allied Health Literature, Educational Resources Information Center, and Scopus, limited to English language. Studies that endeavored to evaluate the ethical implications of digital photography for teaching and learning purposes in the health care setting were included. Results The search strategy identified 514 papers of which nine were retrieved for full review. Four papers were excluded based on the inclusion criteria, leaving five papers for final analysis. Three key themes were developed: knowledge deficit, consent and beyond, and standards driving scope of practice. Conclusion The assimilation of evidence in this review suggests that there is value for health professionals utilizing digital photography for teaching purposes in health education. However, there is limited understanding of the processes of obtaining, storing, and using such media for teaching purposes. 
Disparity was also highlighted related to policy and guideline identification and development in clinical practice. Therefore, the implementation of policy to guide practice requires further research. PMID:26089681

  13. Teaching Science: A Picture Perfect Process.

    ERIC Educational Resources Information Center

    Leyden, Michael B.

    1994-01-01

    Explains how teachers can use graphs and graphing concepts when teaching art, language arts, history, social studies, and science. Students can graph the lifespans of the Ninja Turtles' Renaissance namesakes (Donatello, Michelangelo, Raphael, and Leonardo da Vinci) or world population growth. (MDM)

  14. The Tao of Teaching: Romance and Process.

    ERIC Educational Resources Information Center

    Schindler, Stefan

    1991-01-01

    Because college teaching aims to elevate, not entertain, it must be nourished and appreciated as a pedagogical alchemy mixing facts and feelings, ideas and skills, history and mystery. The current debate on educational reform should focus more on quality of learning experience, and on how to create and sustain it. (MSE)

  15. Computer processing of radiographic images

    NASA Technical Reports Server (NTRS)

    Selzer, R. H.

    1984-01-01

    In the past 20 years, a substantial amount of effort has been expended on the development of computer techniques for enhancement of X-ray images and for automated extraction of quantitative diagnostic information. The historical development of these methods is described. Illustrative examples are presented and factors influencing the relative success or failure of various techniques are discussed. Some examples of current research in radiographic image processing are described.

  16. Digital processing of radiographic images

    NASA Technical Reports Server (NTRS)

    Bond, A. D.; Ramapriyan, H. K.

    1973-01-01

    Some techniques are presented and the software documentation for the digital enhancement of radiographs. Both image handling and image processing operations are considered. The image handling operations dealt with are: (1) conversion of format of data from packed to unpacked and vice versa; (2) automatic extraction of image data arrays; (3) transposition and 90 deg rotations of large data arrays; (4) translation of data arrays for registration; and (5) reduction of the dimensions of data arrays by integral factors. Both the frequency and the spatial domain approaches are presented for the design and implementation of the image processing operation. It is shown that spatial domain recursive implementation of filters is much faster than nonrecursive implementations using fast Fourier transforms (FFT) for the cases of interest in this work. The recursive implementation of a class of matched filters for enhancing image signal-to-noise ratio is described. Test patterns are used to illustrate the filtering operations. The application of the techniques to radiographic images of metallic structures is demonstrated through several examples.
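    The speed argument for recursive filtering rests on its per-sample cost: a first-order recursive low-pass needs one multiply-add per pixel regardless of the effective filter length, whereas an FFT-based implementation costs O(N log N). A minimal sketch (the smoothing coefficient is an arbitrary illustrative value):

```python
def recursive_smooth(x, a=0.5):
    """First-order recursive (IIR) low-pass: y[n] = (1 - a) * x[n] + a * y[n-1].
    One multiply-add per sample, independent of the effective filter length."""
    y = []
    prev = x[0]                      # initialize state with the first sample
    for v in x:
        prev = (1 - a) * v + a * prev
        y.append(prev)
    return y
```

    An impulse in [0, 0, 4, 0] is smoothed into an exponentially decaying tail [0.0, 0.0, 2.0, 1.0], the infinite-impulse-response behavior that a short recursion buys cheaply in the spatial domain.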

  17. FITS Liberator: Image processing software

    NASA Astrophysics Data System (ADS)

    Lindberg Christensen, Lars; Nielsen, Lars Holm; Nielsen, Kaspar K.; Johansen, Teis; Hurt, Robert; de Martin, David

    2012-06-01

The ESA/ESO/NASA FITS Liberator makes it possible to process and edit astronomical science data in the FITS format to produce stunning images of the universe. Formerly a plugin for Adobe Photoshop, the current version of FITS Liberator is a stand-alone application and no longer requires Photoshop. This image processing software makes it possible to create color images using raw observations from a range of telescopes; the FITS Liberator continues to support the FITS and PDS formats, preferred by astronomers and planetary scientists respectively, which enables data to be processed from a wide range of telescopes and planetary probes, including ESO's Very Large Telescope, the NASA/ESA Hubble Space Telescope, NASA's Spitzer Space Telescope, ESA's XMM-Newton Telescope, and the Cassini-Huygens and Mars Reconnaissance Orbiter probes.

  18. Image processing of galaxy photographs

    NASA Technical Reports Server (NTRS)

    Arp, H.; Lorre, J.

    1976-01-01

New computer techniques for analyzing and processing photographic images of galaxies are presented, along with scientific findings gleaned from the processed photographic data. Discovery and enhancement of very faint and low-contrast nebulous features, improved resolution of near-limit detail in nebulous and stellar images, and relative colors of a group of nebulosities in the field are obtained by these methods. Digital algorithms, nonlinear pattern-recognition filters, linear convolution filters, plate averaging and contrast enhancement techniques, and an atmospheric deconvolution technique are described. New detail is revealed in images of NGC 7331, Stephan's Quintet, Seyfert's Sextet, and the jet in M87, via processes of addition of plates, star removal, contrast enhancement, standard deviation filtering, and computer ratioing to bring out qualitative color differences.
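Two of the operations named above, plate addition and computer ratioing, can be sketched on synthetic data; the array sizes, noise level, and "color" factor below are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
truth = np.ones((64, 64))
truth[20:40, 20:40] = 3.0            # faint "nebulosity"

# Plate averaging: the mean of many noisy exposures lowers the noise
# standard deviation by roughly sqrt(n_plates).
plates = [truth + rng.normal(0, 1.0, truth.shape) for _ in range(16)]
stacked = np.mean(plates, axis=0)

# Computer ratioing: dividing one band by another cancels structure
# common to both and brings out relative color differences.
blue = truth * 1.0
red = truth.copy()
red[20:40, 20:40] *= 1.5             # this region is "redder"
ratio = red / blue
```

The ratio image is flat everywhere except where the two bands genuinely differ, which is what makes the color anomaly visible.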

  19. Seismic Imaging Processing and Migration

    Energy Science and Technology Software Center (ESTSC)

    2000-06-26

Salvo is a 3D, finite difference, prestack, depth migration code for parallel computers. It is also capable of processing 2D and poststack data. The code requires as input a seismic dataset, a velocity model and a file of parameters that allows the user to select various options. The code uses this information to produce a seismic image. Some of the options available to the user include the application of various filters and imaging conditions. The code also incorporates phase encoding (patent applied for) to process multiple shots simultaneously.

  20. Fingerprint recognition using image processing

    NASA Astrophysics Data System (ADS)

    Dholay, Surekha; Mishra, Akassh A.

    2011-06-01

Fingerprint recognition is concerned with the difficult task of efficiently matching the image of a person's fingerprint against the fingerprints held in a database. It is used in forensic science to help identify criminals, and also in the authentication of individuals, since a fingerprint is unique to each person. The present paper describes fingerprint recognition methods using various edge detection techniques, and shows how a fingerprint can be correctly detected from camera images. The method does not require a special device; a simple camera suffices, so the technique can also be applied on a camera-equipped mobile phone. Factors affecting the process include poor illumination, noise, viewpoint dependence, climate factors, and imaging conditions. Because these factors must be accounted for, various image enhancement techniques are applied to increase image quality and remove noise. The paper describes a technique of contour tracking on the fingerprint image, followed by edge detection on the contour, and finally matching of the edges inside the contour.
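As a sketch of the edge-detection stage described above, here is a standard 3x3 Sobel gradient-magnitude operator in NumPy, run on a synthetic ridge pattern standing in for a fingerprint image; the paper's contour-tracking and matching stages are not modeled:

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude via the standard 3x3 Sobel kernels."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(3):                     # correlate with each kernel
        for j in range(3):
            patch = img[i:i + h - 2, j:j + w - 2]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return np.hypot(gx, gy)

# Synthetic sinusoidal ridges, roughly fingerprint-like.
x = np.arange(64)
img = np.tile(0.5 + 0.5 * np.sin(x / 3.0), (64, 1))
edges = sobel_magnitude(img)
```

The edge map is strongest where intensity changes fastest, i.e. on the flanks of each ridge, which is the input the contour-tracking stage would consume.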

  1. Computer image processing: Geologic applications

    NASA Technical Reports Server (NTRS)

    Abrams, M. J.

    1978-01-01

Computer image processing of digital data was performed to support several geological studies. The specific goals were to: (1) relate the mineral content to the spectral reflectance of certain geologic materials, (2) determine the influence of environmental factors, such as atmosphere and vegetation, and (3) improve image processing techniques. For detection of spectral differences related to mineralogy, the technique of band ratioing was found to be the most useful. The influence of atmospheric scattering and methods to correct for the scattering were also studied. Two techniques were used to correct for atmospheric effects: (1) dark object subtraction, and (2) normalization using ground spectral measurements. Of the two, the first technique proved the more successful at removing the effects of atmospheric scattering. A digital mosaic was produced from two side-lapping LANDSAT frames. The advantages were that the same enhancement algorithm could be applied to both frames, and there is no seam where the two images are joined.
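The dark-object-subtraction correction and band ratioing named above can be sketched on simulated data; the band labels, haze offsets, and reflectance ranges below are hypothetical, not from the study:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated surface reflectance in two bands with a fixed true ratio
# of 2, plus one fully shadowed (zero-reflectance) pixel.
reflect_a = rng.uniform(0.2, 0.4, (32, 32))
reflect_a[0, 0] = 0.0                      # the "dark object"
reflect_b = 2.0 * reflect_a

# Additive atmospheric path radiance (haze) differs per band.
band_a = reflect_a + 0.30
band_b = reflect_b + 0.10

# Dark-object subtraction: the darkest pixel is assumed to be a true
# zero, so its observed value estimates the additive haze per band.
corr_a = band_a - band_a.min()
corr_b = band_b - band_b.min()

valid = corr_a > 0                         # avoid dividing by zero
raw_ratio = band_b[valid] / band_a[valid]    # biased by the haze
corr_ratio = corr_b[valid] / corr_a[valid]   # recovers the true ratio
```

Because the haze is additive while mineralogical differences show up multiplicatively, ratioing only works well after the additive offset has been removed, which is exactly why the two corrections were paired in the study.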

  2. ASPIC -- Image Process programs (1)

    NASA Astrophysics Data System (ADS)

    Lawden, M. D.; Berry, D. S.; King, Dave; Hartley, Ken

    ASPIC is a collection of image processing programs. It is not a monolithic package of the usual kind, nor even a tight group of interacting programs like SPICA. The common link is that they are all written using the first version of the Starlink Primitive Subroutine Interface (SUN/4) and are held in a single directory.

  3. SAR Image Processing using GPU

    NASA Astrophysics Data System (ADS)

    Shanmugha Sundaram, GA; Sujith Maddikonda, Syam

Synthetic Aperture Radar (SAR) has been extensively used for space-borne Earth observation in recent times. In conventional SAR systems, analog beam-steering techniques are capable of implementing multiple operational modes, such as Stripmap, ScanSAR, and Spotlight, to fulfill different requirements in terms of spatial resolution and coverage. Future radar satellites need to resolve complex issues such as wide-area coverage and resolution, and digital beamforming (DBF) is a promising technique for overcoming these problems; in communication satellites the DBF technique is already implemented. This paper discusses the relevance of DBF in space-borne radar satellites for enhanced-quality imaging. To implement DBF in SAR, processing of the SAR data is an important step, and this work focuses on processing Level 1.1 and 1.5 SAR image data. Because SAR raw data is computationally intensive to process, high-performance computing (HPC) is necessary. The relevance of HPC for SAR data processing using an off-the-shelf graphics processing unit (GPU) rather than a CPU is discussed, and quantitative estimates of SAR image processing performance on both CPU and GPU are provided as validation of the results.
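A core step of the SAR raw-data processing that such GPU work accelerates is range compression: a matched filter, usually applied as fast correlation via FFTs. A CPU-side NumPy sketch with illustrative (non-mission) chirp parameters; on a GPU the same FFTs would typically map onto a library such as cuFFT:

```python
import numpy as np

# Transmitted linear FM chirp (parameters are purely illustrative).
n_chirp, n_range = 128, 1024
t = np.arange(n_chirp)
chirp = np.exp(1j * np.pi * 0.01 * t ** 2)

# Received range line: the chirp echoed by a point target at bin 300.
delay = 300
rx = np.zeros(n_range, dtype=complex)
rx[delay:delay + n_chirp] = chirp

# Matched filtering as fast (circular) correlation in the frequency
# domain: multiply by the conjugate of the reference spectrum.
ref = np.zeros(n_range, dtype=complex)
ref[:n_chirp] = chirp
compressed = np.fft.ifft(np.fft.fft(rx) * np.conj(np.fft.fft(ref)))
peak = int(np.argmax(np.abs(compressed)))   # target's range bin
```

The energy spread over the 128-sample chirp collapses into a single sharp peak at the target's range bin, and every range line can be compressed independently, which is what makes the workload so amenable to GPUs.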

  4. Pedagogical issues for effective teaching of biosignal processing and analysis.

    PubMed

    Sandham, William A; Hamilton, David J

    2010-01-01

Biosignal processing and analysis is generally perceived by many students as a challenging topic to understand, and one in which it is difficult to become adept with the necessary analytical skills. This is a direct consequence of the high mathematical content involved and the many abstract features of the topic. The MATLAB and Mathcad software packages offer an excellent algorithm development environment for teaching biosignal processing and analysis modules, and can also be used effectively in many biosignal, and indeed bioengineering, research areas. In this paper, traditional introductory and advanced biosignal processing (and analysis) syllabi are reviewed, and the use of MATLAB and Mathcad for teaching and research is illustrated with a number of examples. PMID:21095992

  5. Digital processing of radiographic images from PACS to publishing.

    PubMed

    Christian, M E; Davidson, H C; Wiggins, R H; Berges, G; Cannon, G; Jackson, G; Chapman, B; Harnsberger, H R

    2001-03-01

Several studies have addressed the implications of filmless radiologic imaging for telemedicine, diagnostic ability, and electronic teaching files. However, many publishers still require authors to submit hard-copy images for publication of articles and textbooks. This study compares the quality of digital images directly exported from picture archive and communication systems (PACS) with that of images digitized from radiographic film. The authors evaluated the quality of publication-grade glossy photographs produced from digital radiographic images using 3 different methods: (1) film images digitized using a desktop scanner and then printed, (2) digital images obtained directly from PACS and then printed, and (3) digital images obtained from PACS and processed to improve sharpness prior to printing. Twenty images were printed using each of the 3 methods and rated for quality by 7 radiologists. The results were analyzed for statistically significant differences among the image sets. Subjective evaluations of the filmless images found them to be of equal or better quality than the digitized images. Direct electronic transfer of PACS images reduces the number of steps involved in creating publication-quality images, as well as providing the means to produce high-quality radiographic images in a digital environment. PMID:11310910

  6. Cognitive Learning Processes and Research in Shorthand Teaching

    ERIC Educational Resources Information Center

    Hillestad, Mildred

    1977-01-01

    In connection with the teaching-learning process in shorthand, the author discusses principles and procedures in such aspects of cognitive learning as brain function in symbol processing, short-term memory, stimulus-response modes in decoding, attribute identification in concept learning, contextual cues, and letter-sound associations. (MF)

  7. Image processing applications in NDE

    SciTech Connect

    Morris, R.A.

    1980-01-01

Nondestructive examination (NDE) can be defined as a technique or collection of techniques that permits one to determine some property of a material or object without damaging the object. There are a large number of such techniques and most of them use visual imaging in one form or another. They vary from holographic interferometry, where displacements under stress are measured, to the visual inspection of an object's surface to detect cracks after penetrant has been applied. The use of image processing techniques on the images produced by NDE is relatively new and can be divided into three general categories: classical image enhancement; mensuration techniques; and quantitative sensitometry. An example is discussed of how image processing techniques are used to nondestructively and destructively test a product throughout its life cycle. The product followed is the microballoon target used in the laser fusion program. The laser target is a small (50 to 100 µm diameter) glass sphere with a typical wall thickness of 0.5 to 6 µm. The sphere may be used as is or may be given a number of coatings of any number of materials. The beads are mass produced by the millions, and the first nondestructive test is to separate the obviously bad beads (broken or incomplete) from the good ones. After this has been done, the good beads must be inspected for sphericity and wall thickness uniformity. The microradiography of the uncoated glass bead is performed on a specially designed low-energy x-ray machine. The beads are mounted in a special jig and placed on a Kodak high-resolution plate in a vacuum chamber that contains the x-ray source. The x-ray image is made with an energy less than 2 keV and the resulting images are then inspected at a magnification of 500 to 1000X. Some typical results are presented.
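The first screening step described above, separating obviously bad beads from good ones, can be sketched as a simple area-mensuration test on binary silhouettes; the synthetic images and the 80% threshold are hypothetical, not taken from the paper:

```python
import numpy as np

def bead_area_fraction(img, threshold=0.5):
    """Fraction of pixels above threshold: a crude size measure."""
    return float((img > threshold).mean())

def disk_image(size=64, radius=20, missing_wedge=False):
    """Synthetic bead silhouette, optionally with a broken-off wedge."""
    y, x = np.mgrid[:size, :size] - size // 2
    img = ((x ** 2 + y ** 2) <= radius ** 2).astype(float)
    if missing_wedge:
        img[(x > 0) & (y > 0)] = 0.0       # a quarter broken off
    return img

good = disk_image()
broken = disk_image(missing_wedge=True)

# Screen: flag beads whose silhouette area falls well below nominal.
nominal = bead_area_fraction(good)
is_bad = bead_area_fraction(broken) < 0.8 * nominal
```

A broken or incomplete bead loses silhouette area, so a single threshold on the area fraction already rejects the grossly defective ones; the finer sphericity and wall-thickness checks need the microradiography described above.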

  8. Teaching the Dance Class: Strategies to Enhance Skill Acquisition, Mastery and Positive Self-Image

    ERIC Educational Resources Information Center

    Mainwaring, Lynda M.; Krasnow, Donna H.

    2010-01-01

    Effective teaching of dance skills is informed by a variety of theoretical frameworks and individual teaching and learning styles. The purpose of this paper is to present practical teaching strategies that enhance the mastery of skills and promote self-esteem, self-efficacy, and positive self-image. The predominant thinking and primary research…

  9. Teaching the Process...with Calvin & Hobbes.

    ERIC Educational Resources Information Center

    Big6 Newsletter, 1998

    1998-01-01

    Discusses the use of Calvin & Hobbes comic strips to point out steps in the Big6 information problem-solving process. Students can see what Calvin is doing wrong and then explain how to improve the process. (LRW)

  10. Using the Results of Teaching Evaluations to Improve Teaching: A Case Study of a New Systematic Process

    ERIC Educational Resources Information Center

    Malouff, John M.; Reid, Jackie; Wilkes, Janelle; Emmerton, Ashley J.

    2015-01-01

    This article describes a new 14-step process for using student evaluations of teaching to improve teaching. The new process includes examination of student evaluations in the context of instructor goals, student evaluations of the same course completed in prior terms, and evaluations of similar courses taught by other instructors. The process has…

  11. Teaching Strategies for the Process of Planned Change.

    ERIC Educational Resources Information Center

    Green, Clarissa P.

    1983-01-01

    Explores ways of presenting content and of fostering the grounding of the planned change process within the nurse's previous experience, value system, and personal characteristics. States that teaching strategies that combine experiential exercises with theory can make planned change meaningful and valuable to nurses. (NRJ)

  12. Interactive Virtual Client for Teaching Occupational Therapy Evaluative Processes

    E-print Network

    Stansfield, Sharon

Interactive Virtual Client for Teaching Occupational Therapy Evaluative Processes. Sharon Stansfield. An educational tool for Occupational Therapy students learning client evaluation techniques. The software is dialog-based and allows the student to interact with a virtual client. Students carry out an evaluation…

  13. Developing Evaluative Tool for Online Learning and Teaching Process

    ERIC Educational Resources Information Center

    Aksal, Fahriye A.

    2011-01-01

    The research study aims to underline the development of a new scale on online learning and teaching process based on factor analysis. Further to this, the research study resulted in acceptable scale which embraces social interaction role, interaction behaviour, barriers, capacity for interaction, group interaction as sub-categories to evaluate…

  14. The Teaching of L2 Pronunciation through Processing Instruction

    ERIC Educational Resources Information Center

    Gonzales-Bueno, Manuela; Quintana-Lara, Marcela

    2011-01-01

    The goal of this study is to pilot test whether the instructional approach known as Processing Instruction could be adapted to the teaching of second language (L2) pronunciation. The target sounds selected were the Spanish tap and trill. Three groups of high school students of Spanish as a foreign language participated in the study. One group…

  15. Process versus Product Task Interpretation and Parental Teaching Practice.

    ERIC Educational Resources Information Center

    Renshaw, Peter D.; Gardner, Ruth

    1990-01-01

    Reports on research on parental teaching strategies with children aged three and four years. Findings support Dweck and Elliott's view that adults who are process oriented rather than product oriented act more as resources than as judges; focus children on learning rather than outcome; and respond to errors as natural and useful rather than as…

  16. Student Evaluation of Teaching: An Instrument and a Development Process

    ERIC Educational Resources Information Center

    Alok, Kumar

    2011-01-01

    This article describes the process of faculty-led development of a student evaluation of teaching instrument at Centurion School of Rural Enterprise Management, a management institute in India. The instrument was to focus on teacher behaviors that students get an opportunity to observe. Teachers and students jointly contributed a number of…

  17. RDI Advising Model for Improving the Teaching-Learning Process

    ERIC Educational Resources Information Center

    de la Fuente, Jesus; Lopez-Medialdea, Ana Maria

    2007-01-01

    Introduction: Advising in Educational Psychology from the perspective of RDI takes on a stronger investigative, innovative nature. The model proposed by De la Fuente et al (2006, 2007) and Education & Psychology (2007) was applied to the field of improving teaching-learning processes at a school. Hypotheses were as follows: (1) interdependence…

  18. Teaching About Occupations and Quality Control in Meat Processing

    ERIC Educational Resources Information Center

    McCreight, Donald E.

    1970-01-01

This article is based on Dr. McCreight's Ph.D. dissertation, "A Vocational Teaching Experiment on Occupations and Quality Control in the Processing of Meat," which was completed at The Pennsylvania State University in 1969. Glenn Z. Stevens, Professor of Agricultural Education at The Pennsylvania State University, was Dr. McCreight's major…

  19. Direct Influence of English Teachers in the Teaching Learning Process

    ERIC Educational Resources Information Center

    Inamullah, Hafiz Muhammad; Hussain, Ishtiaq; Ud Din, M. Naseer

    2008-01-01

Teachers play a vital role in the classroom environment. Interaction between teacher and students is an essential part of the teaching/learning process. An educator, Flanders, originally developed an instrument called Flanders Interaction Analysis (FIA). The FIA system was designed to categorize the types and quantity of verbal interaction and direct…

  20. Teaching Information Systems Development via Process Variants

    ERIC Educational Resources Information Center

    Tan, Wee-Kek; Tan, Chuan-Hoo

    2010-01-01

    Acquiring the knowledge to assemble an integrated Information System (IS) development process that is tailored to the specific needs of a project has become increasingly important. It is therefore necessary for educators to impart to students this crucial skill. However, Situational Method Engineering (SME) is an inherently complex process that…

  1. Teaching as a Two-Way Street: Discontinuities Among Metaphors, Images, and Classroom Realities.

    ERIC Educational Resources Information Center

    Dooley, Cindy

    1998-01-01

    One preservice teacher's images and metaphors about teaching and learning were used to investigate his difficulties in the classroom. Data from journal entries, observations, and interviews indicated that examination of metaphors and images encouraged him to reflect on his beliefs, assumptions, and approaches to teaching, providing a language that…

  2. Evaluation of Pre-Service Teachers' Images of Science Teaching in Turkey

    ERIC Educational Resources Information Center

    Yilmaz, Hulya; Turkmen, Hakan; Pedersen, Jon E.; Huyuguzel Cavas, Pinar

    2007-01-01

    The purpose of this study is to investigate elementary pre-service teachers' image of science teaching, analyze the gender differences in image of science teaching, and evaluate restructured 2004 education reform by using a Draw-A-Science-Teacher-Test Checklist (DASTT-C). Two hundred thirteen (213) pre-service elementary teachers from three…

  3. Chemical Process Design: An Integrated Teaching Approach.

    ERIC Educational Resources Information Center

    Debelak, Kenneth A.; Roth, John A.

    1982-01-01

    Reviews a one-semester senior plant design/laboratory course, focusing on course structure, student projects, laboratory assignments, and course evaluation. Includes discussion of laboratory exercises related to process waste water and sludge. (SK)

  4. Multispectral Image Processing for Plants

    NASA Technical Reports Server (NTRS)

    Miles, Gaines E.

    1991-01-01

The development of a machine vision system to monitor plant growth and health is one of three essential steps towards establishing an intelligent system capable of accurately assessing the state of a controlled ecological life support system for long-term space travel. Besides a network of sensors, simulators are needed to predict plant features, and artificial intelligence algorithms are needed to determine the state of a plant-based life support system. Multispectral machine vision and image processing can be used to sense plant features, including health and nutritional status.
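One standard way multispectral data is turned into a plant-health feature is a spectral index such as NDVI; this is a common technique, not necessarily the one used in this work, and the reflectance values below are hypothetical:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index: healthy leaves reflect
    strongly in near-infrared and weakly in red, so NDVI rises with
    plant vigor. eps guards against division by zero."""
    return (nir - red) / (nir + red + eps)

# Hypothetical reflectances: a healthy plant pixel vs. a stressed one.
healthy = ndvi(np.array([0.50]), np.array([0.08]))
stressed = ndvi(np.array([0.30]), np.array([0.20]))
```

Per-pixel indices like this are cheap enough to run over whole multispectral frames, giving the AI layer a compact health map rather than raw bands.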

  5. Image Processing and Analysis Image Restoration -correcting errors and distortion.

    E-print Network

    Small, Christopher

Image Processing and Analysis: three stages, beginning with Image Restoration (correcting errors and distortion). What is an image? An image is an array of numbers produced by a sensor (e.g. AVHRR). Topics include correcting for instrument noise, applying corrections to compensate for atmospheric absorption and radiance, image classification, and temporal variation.

  7. Community Assessment in Teaching the Research Process

    ERIC Educational Resources Information Center

    Craddock, IdaMae

    2013-01-01

    Community assessment is the concept of using wider professional communities to provide authentic assessment to students. It means using the knowledge available in one's immediate surroundings and through Web 2.0 tools to enrich instructional processes. It means using retirees, experts, and volunteers from professional organizations and…

  8. Teaching Word Processing in the Library.

    ERIC Educational Resources Information Center

    Teo, Elizabeth A.; Jenkins, Sylvia M.

    A description is provided of a program developed at Moraine Valley Community College (MVCC), in Illinois, for providing word processing instruction in the library, including recommendations for program development based on MVCC experience and results from a survey of program participants. The first part of the paper discusses a model development…

  9. Video and Image Processing in Multimedia Systems (Video Processing)

    E-print Network

    Furht, Borko

COT 6930 Video and Image Processing in Multimedia Systems (Video Processing). Instructor: Borko Furht. Topics: content-based image and video indexing and retrieval; video processing using compressed data; compression concepts and structures; classification of compression techniques; image and video compression.

  10. Concurrent Image Processing Executive (CIPE)

    NASA Technical Reports Server (NTRS)

    Lee, Meemong; Cooper, Gregory T.; Groom, Steven L.; Mazer, Alan S.; Williams, Winifred I.

    1988-01-01

The design and implementation of a Concurrent Image Processing Executive (CIPE), which is intended to become the support system software for a prototype high-performance science analysis workstation, are discussed. The target machine for this software is a JPL/Caltech Mark IIIfp Hypercube hosted by either a MASSCOMP 5600 or a Sun-3 or Sun-4 workstation; however, the design will accommodate other concurrent machines of similar architecture, i.e., local-memory, multiple-instruction-multiple-data (MIMD) machines. The CIPE system provides both a multimode user interface and an applications programmer interface, and has been designed around four loosely coupled modules: (1) user interface, (2) host-resident executive, (3) hypercube-resident executive, and (4) application functions. The loose coupling between modules allows modification of a particular module without significantly affecting the other modules in the system. In order to enhance hypercube memory utilization and to allow expansion of image processing capabilities, a specialized program management method, incremental loading, was devised. To minimize data transfer between host and hypercube, a data management method which distributes, redistributes, and tracks data set information was implemented.

  11. Digital Image Processing Instructor: Namrata Vaswani

    E-print Network

    Vaswani, Namrata

Digital Image Processing. Instructor: Namrata Vaswani. http://www.ece.iastate.edu/~namrata. Notes: standard signal models don't always model image noise well; no standard statistical models to categorize images; estimation/detection (inference) from images; Pattern Recognition covers only detection/classification problems; Computer Vision addresses 2D…

  12. Eliminating "Hotspots" in Digital Image Processing

    NASA Technical Reports Server (NTRS)

    Salomon, P. M.

    1984-01-01

Signals from defective picture elements are rejected. An image processing program for use with a charge-coupled device (CCD) or other mosaic imager is augmented with an algorithm that compensates for a common type of electronic defect. The algorithm prevents false interpretation of "hotspots". Used for robotics, image enhancement, image analysis, and digital television.
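A common form of such a compensation algorithm, offered here as a sketch under assumptions rather than the reported implementation, replaces each flagged pixel with the median of its valid 3x3 neighbors:

```python
import numpy as np

def remove_hotspots(img, defect_mask):
    """Replace each flagged pixel with the median of its 3x3
    neighborhood, excluding other defective pixels from the median."""
    out = img.astype(float).copy()
    h, w = img.shape
    for r, c in zip(*np.nonzero(defect_mask)):
        r0, r1 = max(r - 1, 0), min(r + 2, h)   # clip at the borders
        c0, c1 = max(c - 1, 0), min(c + 2, w)
        patch = img[r0:r1, c0:c1]
        good = patch[~defect_mask[r0:r1, c0:c1]]
        if good.size:
            out[r, c] = np.median(good)
    return out

img = np.full((8, 8), 10.0)
img[3, 4] = 255.0                     # stuck-high "hotspot"
mask = np.zeros(img.shape, dtype=bool)
mask[3, 4] = True
clean = remove_hotspots(img, mask)
```

Using the median rather than the mean keeps a single extreme neighbor (e.g. an adjacent defect) from biasing the repaired value.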

  13. Introduction to Image Processing Prof. George Wolberg

    E-print Network

    Wolberg, George

Introduction to Image Processing. Prof. George Wolberg, Dept. of Computer Science, City College of New York. Texts: …, and Machine Vision, Cengage Learning, 2014; George Wolberg, Digital Image Warping, IEEE Computer Society Press. Image processing is concerned specifically with pictures; it aims to improve image quality for human perception…

  14. Gradient-Domain Processing of Large Images

    E-print Network

    Kazhdan, Michael

Gradient-Domain Processing of Large Images. Motivation: we tend to think of an array of intensities as the way to represent images, but that is not how our eyes process visual information. Observation: since our eye is sensitive to image boundaries, we would like to construct a single low-dynamic-range image.

  15. The Quantitative Methods Boot Camp: Teaching Quantitative Thinking and

    E-print Network

    Born, Richard

The Quantitative Methods Boot Camp: Teaching Quantitative Thinking and Computing Skills. The boot camp teaches basic programming for biological systems, using biological examples from statistics, image processing, and data analysis; it is an integrative approach to teaching programming and quantitative reasoning.

  16. Visualisation of Ecohydrological Processes and Relationships for Teaching Using Advanced Techniques

    NASA Astrophysics Data System (ADS)

    Guan, H.; Wang, H.; Gutierrez-Jurado, H. A.; Yang, Y.; Deng, Z.

    2014-12-01

Ecohydrology is an emerging discipline with rapid research growth. This calls for enhancing ecohydrology education at both undergraduate and postgraduate levels. In other hydrology disciplines, hydrological processes are commonly observed in the environment (e.g. streamflow, infiltration) or easily demonstrated in labs (e.g. Darcy's column). It is relatively difficult to demonstrate ecohydrological concepts and processes (e.g. the soil-vegetation water relationship) in teaching. In this presentation, we report examples of using advanced techniques to illustrate ecohydrological concepts, relationships, and processes, with measurements based on a native vegetation catchment in South Australia. They include LIDAR images showing the relationship between topography-controlled hydroclimatic conditions and vegetation distribution, electrical resistivity tomography-derived images showing stem structures, continuous stem water potential monitoring showing diurnal variations of plant water status, root-zone moisture depletion during dry spells, and responses to precipitation inputs, and sapflow measurements incorporated to demonstrate environmental stress on plant stomatal behaviour.

  17. The Teaching Process & Arts and Aesthetics. Third Yearbook on Research in Arts and Aesthetic Education.

    ERIC Educational Resources Information Center

    Knieter, Gerald L., Ed.; Stallings, Jane, Ed.

    Nine essays collected from a conference titled "The Teaching Process and the Arts and Aesthetics" examine the relationship between research and teaching in the arts and in aesthetic education. Issues which guided presentations and discussions were: the relationship between general research in education and the teaching process in the arts and…

  18. Teaching Anatomy and Physiology Using Computer-Based, Stereoscopic Images

    ERIC Educational Resources Information Center

    Perry, Jamie; Kuehn, David; Langlois, Rick

    2007-01-01

    Learning real three-dimensional (3D) anatomy for the first time can be challenging. Two-dimensional drawings and plastic models tend to over-simplify the complexity of anatomy. The approach described uses stereoscopy to create 3D images of the process of cadaver dissection and to demonstrate the underlying anatomy related to the speech mechanisms.…

  19. Process for Award of Graduate Doctoral Teaching Fellowships and Graduate Master's Teaching Fellowships AY 2010-11

    E-print Network

    Huang, Haiying

Process for Award of Graduate Doctoral Teaching Fellowships and Graduate Master's Teaching Fellowships, AY 2010-11 (2/11/10). The new enhanced stipend and full tuition coverage support packages will be available … semesters to qualify for and retain their fellowships. 2. Exception Requests: Exceptions may be requested…

  20. NASA Regional Planetary Image Facility image retrieval and processing system

    NASA Technical Reports Server (NTRS)

    Slavney, Susan

    1986-01-01

    The general design and analysis functions of the NASA Regional Planetary Image Facility (RPIF) image workstation prototype are described. The main functions of the MicroVAX II based workstation will be database searching, digital image retrieval, and image processing and display. The uses of the Transportable Applications Executive (TAE) in the system are described. File access and image processing programs use TAE tutor screens to receive parameters from the user and TAE subroutines are used to pass parameters to applications programs. Interface menus are also provided by TAE.

  1. Image processing on the IBM personal computer

    NASA Technical Reports Server (NTRS)

    Myers, H. J.; Bernstein, R.

    1985-01-01

    An experimental, personal computer image processing system has been developed which provides a variety of processing functions in an environment that connects programs by means of a 'menu' for both casual and experienced users. The system is implemented by a compiled BASIC program that is coupled to assembly language subroutines. Image processing functions encompass subimage extraction, image coloring, area classification, histogramming, contrast enhancement, filtering, and pixel extraction.
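Of the functions listed above, contrast enhancement is often implemented as a percentile-based linear stretch; a NumPy sketch of that common technique (the percentile choices are illustrative, and this is not the BASIC/assembly implementation described in the record):

```python
import numpy as np

def contrast_stretch(img, low_pct=2, high_pct=98):
    """Linear stretch: map the low/high percentiles to 0 and 255,
    clipping the tails, a simple classic contrast enhancement."""
    lo, hi = np.percentile(img, [low_pct, high_pct])
    out = (img.astype(float) - lo) / (hi - lo)
    return np.clip(out * 255.0, 0, 255).astype(np.uint8)

rng = np.random.default_rng(3)
dull = rng.uniform(100, 140, (32, 32))   # low-contrast input image
bright = contrast_stretch(dull)
```

Clipping a small percentage of the tails lets a few outlier pixels be sacrificed so the bulk of the histogram fills the full 0-255 display range.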

  2. Deep Semantic Learning: Teach machines to understand text, image, and knowledge graph

    E-print Network

    Borgs, Christian

Deep Semantic Learning: Teach machines to understand text, image, and knowledge graph. Xiaodong He, DLTC, Microsoft Research, Redmond, WA, USA. Covers the Deep Structured Semantic Model (DSSM) [Huang, He, Gao, Deng, Acero, Heck, "Learning deep structured semantic models for web search…"].

  3. Teaching while selecting images for satellite-based forest mapping Froduald Kabanza and Kami Rousseau

    E-print Network

    Kabanza, Froduald

    Teaching while selecting images for satellite-based forest mapping. Froduald Kabanza and Kami Rousseau, {…kabanza, kami.rousseau}@usherbrooke.ca. Abstract: Satellite images are increasingly being used to monitor environmental temporal changes. The general approach is to compare old images to recent ones acquired from…

  4. Multimodality radiological image processing system (MRIPS)/MEDx: a system for medical image processing

    NASA Astrophysics Data System (ADS)

    Levin, Ronald L.; Douglas, Margaret A.

    1995-01-01

    A new image processing system, MRIPS/MEDx, is being deployed at NIH to facilitate the visualization and analysis of multidimensional images and spectra obtained from different radiological imaging modalities.

  5. Image Processing Chapter 6: Color Image

    E-print Network

    Wu, Xiaolin

    · In 1666 Newton discovered that a beam of sunlight passed through a prism will break into a continuous spectrum of colors. · The color perceived from an object is determined by the nature of light reflected from that object. · An object reflecting light that is balanced in all visible light appears white. · An object that favors reflectance…

  6. Programmable remapper for image processing

    NASA Technical Reports Server (NTRS)

    Juday, Richard D. (inventor); Sampsell, Jeffrey B. (inventor)

    1991-01-01

    A video-rate coordinate remapper includes a memory for storing a plurality of transformations on look-up tables for remapping input images from one coordinate system to another. Such transformations are operator selectable. The remapper includes a collective processor by which certain input pixels of an input image are transformed to a portion of the output image in a many-to-one relationship. The remapper includes an interpolative processor by which the remaining input pixels of the input image are transformed to another portion of the output image in a one-to-many relationship. The invention includes certain specific transforms for creating output images useful for certain defects of visually impaired people. The invention also includes means for shifting input pixels and means for scrolling the output matrix.
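    The lookup-table remapping at the core of this invention can be sketched in a few lines. This is a NumPy illustration of the general idea only, not the patented video-rate hardware or its collective/interpolative processors; the function and table names are hypothetical:

```python
import numpy as np

def remap_with_lut(img, map_rows, map_cols):
    """Produce output pixel (r, c) from input pixel
    (map_rows[r, c], map_cols[r, c]): the coordinate transformation is
    stored entirely in the two lookup tables, so it can be swapped out
    (operator-selectable) without changing the processing code."""
    return img[map_rows, map_cols]

# Example stored transform: horizontal flip of a 2x3 image.
img = np.arange(6).reshape(2, 3)
rows, cols = np.indices(img.shape)
flipped = remap_with_lut(img, rows, cols[:, ::-1])
```

    Any warp expressible as "output pixel comes from this input coordinate" fits the same two-table form, which is what makes the transformations operator-selectable.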

  7. Neural net computing for biomedical image processing

    NASA Astrophysics Data System (ADS)

    Meyer-Baese, Anke

    1999-03-01

    In this paper we describe some of the most important types of neural networks applied in biomedical image processing. The networks described are variations of well-known architectures but are including image-relevant features in their structure. Convolutional neural networks, modified Hopfield networks, regularization networks and nonlinear principal component analysis neural networks are successfully applied in biomedical image classification, restoration and compression.

  8. Quantitative image processing in fluid mechanics

    NASA Technical Reports Server (NTRS)

    Hesselink, Lambertus; Helman, James; Ning, Paul

    1992-01-01

    The current status of digital image processing in fluid flow research is reviewed. In particular, attention is given to a comprehensive approach to the extraction of quantitative data from multivariate databases and examples of recent developments. The discussion covers numerical simulations and experiments, data processing, generation and dissemination of knowledge, traditional image processing, hybrid processing, fluid flow vector field topology, and isosurface analysis using Marching Cubes.

  9. Water surface capturing by image processing

    Technology Transfer Automated Retrieval System (TEKTRAN)

    An alternative means of measuring the water surface interface during laboratory experiments is processing a series of sequentially captured images. Image processing can provide a continuous, non-intrusive record of the water surface profile whose accuracy is not dependent on water depth. More trad...
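    One common way such a non-intrusive surface record is extracted — a hedged sketch of the general idea, not necessarily the method behind this record — is to take, in each image column, the row with the strongest vertical intensity change as the air-water interface:

```python
import numpy as np

def surface_profile(img):
    """For each pixel column, return the row index of the largest
    vertical intensity jump, taken as the water-surface location."""
    grad = np.diff(img.astype(float), axis=0)   # vertical gradient
    return np.argmax(np.abs(grad), axis=0)

# Synthetic frame: bright air above row 12, darker water below it.
img = np.full((32, 16), 200, dtype=np.uint8)
img[12:, :] = 50
profile = surface_profile(img)
```

    Applied to a sequence of frames, the per-column profile becomes a continuous record of the water-surface elevation over time.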

  10. Topics in genomic image processing 

    E-print Network

    Hua, Jianping

    2006-04-12

    …M-FISH images using EMIC with different integer wavelet filters and decomposition levels. The shown compression ratios are in bits/pixel/channel and are averaged over the eight test image sets. III. Lossless compression results of the foreground objects. The bit-rates shown are in bits/pixel/channel and are averaged over 88 M-FISH image sets. The (6,2) wavelet filters are used with three levels of decomposition in EMIC and EWV. IV. PSNR (in dB) of each channel of M…

  11. Accessing first-grade teachers' images and beliefs about teaching, learning, and students: The use of abstract symbolic drawing

    NASA Astrophysics Data System (ADS)

    Droy, Karen A.

    The purpose of this study was to explore teacher beliefs and images of students, learning, and teaching. The study was designed to elicit images and beliefs with the use of teachers' symbolic drawing and subsequent interpretation of their drawings. Twelve first grade teachers with teaching experience ranging from 1½ to 25 years, and from a variety of educational settings (i.e., urban, suburban, traditional public schools, non-traditional public or private schools) participated. Data collection utilized two primary methods of qualitative inquiry: teacher-created abstract symbolic drawings and interviewing. The combination of symbolic drawings and interviewing provided an effective means for teachers to access, reflect upon, and express their tacit images and beliefs in a cohesive and holistic manner. The twelve teachers in this study appeared on the surface to have similar images of learning and teaching. Teachers talked about learning as a process that involved images of filtering, connecting, becoming stuck, and disconnecting. One major difference emerged that separated teachers into two distinct groups. The majority of teachers, ten out of twelve, viewed learning as a fact-based associative categorization where students either made connections through associations or replaced old information with new information. Only two teachers talked about learning as theory-based, describing learning as making connections through an assimilatory categorization process or making revisions to personal theories. Teachers who viewed learning as fact-based also viewed teaching as fact-based. In general, these teachers used discussion, teacher questions, and a large variety of activities to help students collect new facts and make associative connections. Teachers who viewed learning as theory-based used activities, discussion, and teacher questions to promote conversation and thinking. They expected students to use new facts to build and revise theories with the use of logical reasoning.

  12. Image processing: mathematics, engineering, or art

    SciTech Connect

    Hanson, K.M.

    1985-01-01

    From the strict mathematical viewpoint, it is impossible to fully achieve the goal of digital image processing, which is to determine an unknown function of two dimensions from a finite number of discrete measurements linearly related to it. However, the necessity to display image data in a form that is visually useful to an observer supersedes such mathematically correct admonitions. Engineering defines the technological limits of what kind of image processing can be done and how the resulting image can be displayed. The appeal and usefulness of the final image to the human eye pertains to aesthetics. Effective image processing necessitates unification of mathematical theory, practical implementation, and artistic display. 59 references, 6 figures.
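    The ill-posedness Hanson describes — finitely many discrete linear measurements of an unknown function — is conventionally tamed by regularization, which selects one well-behaved image among the infinitely many consistent with the data. A minimal Tikhonov sketch (the toy sizes and variable names are illustrative assumptions, not from the paper):

```python
import numpy as np

# Toy ill-posed problem: 5 unknowns, only 3 linear measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))          # linear measurement operator
x_true = np.array([1.0, 0.0, 2.0, 0.0, -1.0])
y = A @ x_true                           # finite discrete measurements

# Tikhonov regularization: argmin_x ||A x - y||^2 + lam * ||x||^2.
# The penalty term picks a unique, stable solution.
lam = 1e-6
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(5), A.T @ y)

# The regularized estimate reproduces the measurements closely, even
# though three measurements cannot determine x uniquely.
residual = float(np.linalg.norm(A @ x_hat - y))
```

    The choice of penalty is where the "art" enters: it encodes which of the mathematically indistinguishable images is displayed to the observer.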

  13. Stable image acquisition for mobile image processing applications

    NASA Astrophysics Data System (ADS)

    Henning, Kai-Fabian; Fritze, Alexander; Gillich, Eugen; Mönks, Uwe; Lohweg, Volker

    2015-02-01

    Today, mobile devices (smartphones, tablets, etc.) are widespread and of high importance for their users. Their performance as well as versatility increases over time. This leads to the opportunity to use such devices for more specific tasks like image processing in an industrial context. For the analysis of images requirements like image quality (blur, illumination, etc.) as well as a defined relative position of the object to be inspected are crucial. Since mobile devices are handheld and used in constantly changing environments the challenge is to fulfill these requirements. We present an approach to overcome the obstacles and stabilize the image capturing process such that image analysis becomes significantly improved on mobile devices. Therefore, image processing methods are combined with sensor fusion concepts. The approach consists of three main parts. First, pose estimation methods are used to guide a user moving the device to a defined position. Second, the sensors data and the pose information are combined for relative motion estimation. Finally, the image capturing process is automated. It is triggered depending on the alignment of the device and the object as well as the image quality that can be achieved under consideration of motion and environmental effects.

  14. Image Resolution Enhancement and its applications to Medical Image Processing

    E-print Network

    Ferguson, Thomas S.

    in the form of still images, a sequence of image frames devoid of inter-frame motion, a single video … artifacts, blurring and noise. This is mathematically modeled as a nonlinear process consisting of a convo…

  15. Applications of Digital Image Processing 11

    NASA Technical Reports Server (NTRS)

    Cho, Y. -C.

    1988-01-01

    A new technique, digital image velocimetry, is proposed for the measurement of instantaneous velocity fields of time dependent flows. A time sequence of single-exposure images of seed particles is captured with a high-speed camera, and a finite number of the single-exposure images are sampled within a prescribed period in time. The sampled images are then digitized on an image processor, enhanced, and superimposed to construct an image which is equivalent to a multiple exposure image used in both laser speckle velocimetry and particle image velocimetry. The superimposed image and a single-exposure image are digitally Fourier transformed for extraction of information on the velocity field. A great enhancement of the dynamic range of the velocity measurement is accomplished through the new technique by manipulating the Fourier transform of both the single-exposure image and the superimposed image. Also the direction of the velocity vector is unequivocally determined. With the use of a high-speed video camera, the whole process from image acquisition to velocity determination can be carried out electronically; thus this technique can be developed into a real-time capability.
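    The Fourier-transform step that extracts displacement information can be illustrated with a standard FFT-based cross-correlation between two exposures. This is a simplified sketch of the general velocimetry principle, not the paper's exact dynamic-range-enhancement procedure:

```python
import numpy as np

def shift_by_correlation(frame1, frame2):
    """Estimate the integer (dy, dx) displacement of frame2 relative to
    frame1 via FFT-based circular cross-correlation, the Fourier-domain
    core of particle-image velocimetry."""
    F1 = np.fft.fft2(frame1)
    F2 = np.fft.fft2(frame2)
    corr = np.fft.ifft2(np.conj(F1) * F2).real   # cross-correlation surface
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = frame1.shape
    if dy > h // 2:                              # wrap to signed shifts
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

# Synthetic seed-particle pattern displaced by (3, -2) between exposures.
rng = np.random.default_rng(1)
frame1 = rng.random((32, 32))
frame2 = np.roll(frame1, shift=(3, -2), axis=(0, 1))
```

    The correlation peak location gives the displacement with an unambiguous sign, which is the directional-ambiguity advantage the abstract mentions over a single multiple-exposure image.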

  16. Image processing and analysis of Saturn's rings

    NASA Technical Reports Server (NTRS)

    Yagi, G. M.; Jepsen, P. L.; Garneau, G. W.; Mosher, J. A.; Doyle, L. R.; Lorre, J. J.; Avis, C. C.; Korsmo, E. P.

    1981-01-01

    Processing of Voyager image data of Saturn's rings at JPL's Image Processing Laboratory is described. A software system to navigate the flight images, facilitate feature tracking, and to project the rings has been developed. This system has been used to make measurements of ring radii and to measure the velocities of the spoke features in the B-Ring. A projected ring movie to study the development of these spoke features has been generated. Finally, processing to facilitate comparison of the photometric properties of Saturn's rings at various phase angles is described.

  17. The Positioning of Students in Newly Qualified Secondary Teachers' Images of Their "Best Teaching"

    ERIC Educational Resources Information Center

    Haigh, Mavis; Kane, Ruth; Sandretto, Susan

    2012-01-01

    Asking newly qualified teachers (NQTs) to provide images of their "best teaching", and encouraging subsequent reflection on these images, has the potential to enhance their understanding of themselves as teachers as they explore their often unconsciously held assumptions about students and classrooms. This paper reports aspects of a study of 100…

  18. Enriching Student Concept Images: Teaching and Learning Fractions through a Multiple-Embodiment Approach

    ERIC Educational Resources Information Center

    Zhang, Xiaofen; Clements, M. A.; Ellerton, Nerida F.

    2015-01-01

    This study investigated how fifth-grade children's concept images of the unit fractions represented by the symbols 1/2, 1/3, and 1/4 changed as a result of their participation in an instructional intervention based on multiple embodiments of fraction concepts. The participants' concept images were examined through pre- and post-teaching written…

  19. Interactive image processing in swallowing research

    NASA Astrophysics Data System (ADS)

    Dengel, Gail A.; Robbins, JoAnne; Rosenbek, John C.

    1991-06-01

    Dynamic radiographic imaging of the mouth, larynx, pharynx, and esophagus during swallowing is used commonly in clinical diagnosis, treatment and research. Images are recorded on videotape and interpreted conventionally by visual perceptual methods, limited to specific measures in the time domain and binary decisions about the presence or absence of events. An image processing system using personal computer hardware and original software has been developed to facilitate measurement of temporal, spatial and temporospatial parameters. Digitized image sequences derived from videotape are manipulated and analyzed interactively. Animation is used to preserve context and increase efficiency of measurement. Filtering and enhancement functions heighten image clarity and contrast, improving visibility of details which are not apparent on videotape. Distortion effects and extraneous head and body motions are removed prior to analysis, and spatial scales are controlled to permit comparison among subjects. Effects of image processing on intra- and interjudge reliability and research applications are discussed.

  20. Enriching student concept images: Teaching and learning fractions through a multiple-embodiment approach

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaofen; Clements, M. A. (Ken); Ellerton, Nerida F.

    2015-06-01

    This study investigated how fifth-grade children's concept images of the unit fractions represented by the symbols 1/2, 1/3, and 1/4 changed as a result of their participation in an instructional intervention based on multiple embodiments of fraction concepts. The participants' concept images were examined through pre- and post-teaching written questions and pre- and post-teaching one-to-one verbal interview questions. Results showed that at the pre-teaching stage, the student concept images of unit fractions were very narrow and mainly linked to area models. However, after the instructional intervention, the fifth graders were able to select and apply a variety of models in response to unit fraction tasks, and their concept images of unit fractions were enriched and linked to capacity, perimeter, linear and discrete models, as well as to area models. Their performances on tests had improved, and their conceptual understandings of unit fractions had developed.

  1. Image-plane processing of visual information

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Fales, C. L.; Park, S. K.; Samms, R. W.

    1984-01-01

    Shannon's theory of information is used to optimize the optical design of sensor-array imaging systems which use neighborhood image-plane signal processing for enhancing edges and compressing dynamic range during image formation. The resultant edge-enhancement, or band-pass-filter, response is found to be very similar to that of human vision. Comparisons of traits in human vision with results from information theory suggest that: (1) Image-plane processing, like preprocessing in human vision, can improve visual information acquisition for pattern recognition when resolving power, sensitivity, and dynamic range are constrained. Improvements include reduced sensitivity to changes in light levels, reduced signal dynamic range, reduced data transmission and processing, and reduced aliasing and photosensor noise degradation. (2) Information content can be an appropriate figure of merit for optimizing the optical design of imaging systems when visual information is acquired for pattern recognition. The design trade-offs involve spatial response, sensitivity, and sampling interval.
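    A band-pass, center-surround response of the kind described is commonly modeled as a difference of Gaussians. A one-dimensional sketch (the sigma and radius values are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """1-D Gaussian weights, normalized to unit sum."""
    x = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-x**2 / (2.0 * sigma**2))
    return g / g.sum()

def dog_filter(signal, sigma_center=1.0, sigma_surround=3.0, radius=9):
    """Difference of Gaussians: a band-pass, center-surround response
    (narrow excitatory center minus broad inhibitory surround)."""
    center = np.convolve(signal, gaussian_kernel(sigma_center, radius), mode="same")
    surround = np.convolve(signal, gaussian_kernel(sigma_surround, radius), mode="same")
    return center - surround

# On a step edge, the response is zero in flat regions and peaks near
# the edge: edge enhancement plus dynamic-range compression.
step = np.concatenate([np.zeros(32), np.ones(32)])
response = dog_filter(step)
```

    Zero response in uniform regions is exactly the "reduced signal dynamic range" effect: only the informative transitions survive the filter.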

  2. Earth Observation Services (Image Processing Software)

    NASA Technical Reports Server (NTRS)

    1992-01-01

    San Diego State University and Environmental Systems Research Institute, with other agencies, have applied satellite imaging and image processing techniques to geographic information systems (GIS) updating. The resulting images display land use and are used by a regional planning agency for applications like mapping vegetation distribution and preserving wildlife habitats. The EOCAP program provides government co-funding to encourage private investment in, and to broaden the use of NASA-developed technology for analyzing information about Earth and ocean resources.

  3. Nonlinear Optical Image Processing with Bacteriorhodopsin Films

    NASA Technical Reports Server (NTRS)

    Downie, John D.; Deiss, Ron (Technical Monitor)

    1994-01-01

    The transmission properties of some bacteriorhodopsin film spatial light modulators are uniquely suited to allow nonlinear optical image processing operations to be applied to images with multiplicative noise characteristics. A logarithmic amplitude transmission feature of the film permits the conversion of multiplicative noise to additive noise, which may then be linearly filtered out in the Fourier plane of the transformed image. The bacteriorhodopsin film displays the logarithmic amplitude response for write beam intensities spanning a dynamic range greater than 2.0 orders of magnitude. We present experimental results demonstrating the principle and capability for several different image and noise situations, including deterministic noise and speckle. Using the bacteriorhodopsin film, we successfully filter out image noise from the transformed image that cannot be removed from the original image.
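    The film's logarithmic amplitude response is the optical analogue of digital homomorphic filtering: the logarithm converts multiplicative noise into additive noise, which a linear Fourier-plane filter can then suppress. A 1-D digital sketch of that principle (the signal model, noise level, and cutoff are assumptions, not the paper's experimental values):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 256

# Smooth, positive signal corrupted by broadband multiplicative noise
# (a stand-in for one row of a speckled image).
clean = 1.0 + 0.5 * np.sin(2.0 * np.pi * np.arange(n) / n)
noise = np.clip(1.0 + 0.2 * rng.standard_normal(n), 0.1, None)
noisy = clean * noise

# Logarithm: the multiplicative model becomes additive ...
log_img = np.log(noisy)

# ... so a linear low-pass filter in the Fourier domain removes most of
# the noise (clean signal is low frequency, noise is broadband).
F = np.fft.fft(log_img)
cutoff = 4
F[cutoff + 1 : n - cutoff] = 0.0
restored = np.exp(np.fft.ifft(F).real)

err_noisy = float(np.abs(noisy - clean).mean())
err_restored = float(np.abs(restored - clean).mean())
```

    The final exponential undoes the logarithm, mirroring the readout of the transformed image from the film.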

  4. Image Processing Apr. 16, 2012

    E-print Network

    Erdem, Erkut

    Forward transform: F[m,n] = Σ_{k=0}^{M−1} Σ_{l=0}^{N−1} f[k,l] e^{−i2π(km/M + ln/N)}. How to interpret a 2-D Fourier transform: it decomposes an image into sines and cosines. Image courtesy of Technology Review. Review - Fourier transform: F(w) is the forward Fourier transform of f(x), and f(x) is recovered by the inverse Fourier transform of F(w); for every w from 0 to inf, F(w) holds the amplitude and phase of the corresponding sinusoid.
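    The 2-D discrete Fourier transform this excerpt refers to can be written out directly from the summation and checked against a library FFT; a small sketch:

```python
import numpy as np

def dft2(f):
    """Direct 2-D DFT: F[m,n] = sum_{k,l} f[k,l] exp(-2j*pi*(m*k/M + n*l/N))."""
    M, N = f.shape
    k = np.arange(M)
    l = np.arange(N)
    Wm = np.exp(-2j * np.pi * np.outer(k, k) / M)   # row-wise transform matrix
    Wn = np.exp(-2j * np.pi * np.outer(l, l) / N)   # column-wise transform matrix
    return Wm @ f @ Wn                              # separable: rows, then columns

rng = np.random.default_rng(3)
f = rng.random((4, 6))
F = dft2(f)
```

    The matrix form makes the separability explicit: a 2-D DFT is a 1-D DFT over rows followed by one over columns.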

  5. HYPERSPECTRAL IMAGING FOR FOOD PROCESSING AUTOMATION

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A hyperspectral imaging system could be used effectively for detecting feces (from duodenum, ceca, and colon) and ingesta on the surface of poultry carcasses, with potential application to real-time, on-line processing of poultry for automatic safety inspection. The hyperspectral imaging system inc...

  6. Image Processing with Nonparametric Neighborhood Statistics

    E-print Network

    Image Processing with Nonparametric Neighborhood Statistics. Ross T. Whitaker, Scientific Computing and … References include: …and Ross T. Whitaker, "Feature-Preserving MRI Denoising using a Nonparametric, Empirical-Bayes Approach", IEEE Trans. Medical Imaging; …and Ross T. Whitaker, "Higher-Order Image Statistics for Unsupervised, Information-Theoretic, Adaptive…"

  7. Imaging partons in exclusive scattering processes

    E-print Network

    Markus Diehl

    2012-09-20

    The spatial distribution of partons in the proton can be probed in suitable exclusive scattering processes. I report on recent performance estimates for parton imaging at a proposed Electron-Ion Collider.

  8. Image Processing (E735001) Valid in the academic year 2015-2016

    E-print Network

    Walraevens, Joris

    Available via an exam contract. Study time: 80 h. Credits: 3.0. Teaching languages: Dutch. Keywords: image processing, computer vision, OpenCV. The course focuses on a number of algorithms. The programming environment used is OpenCV, but at the start of the course it is sufficient … in OpenCV.

  9. Pharmacy Students' Perceptions of a Teaching Evaluation Process

    PubMed Central

    Desselle, Shane P.

    2007-01-01

    Objective To assess PharmD students' perceptions of the usefulness of Duquesne University's Teaching Effectiveness Questionnaire (TEQ), the instrument currently employed for student evaluation of teaching. Methods Opinions of PharmD students regarding the TEQ were measured using a survey instrument comprised of Likert-type scales eliciting perceptions, behaviors, and self-reported biases. Results PharmD students viewed student evaluation of teaching as appropriate and necessary, but conceded that the faculty members receiving the best evaluations were not always the most effective teachers. Most students indicated a willingness to complete the TEQ when given the opportunity but expressed frustration that their feedback did not appear to improve subsequent teaching efforts. Conclusion The current TEQ mechanism for student evaluation of teaching is clearly useful but nevertheless imperfect with respect to its ability to improve teaching. Future research may examine other aspects of pharmacy students' roles as evaluators of teaching. PMID:17429506

  10. Augmented Reality for Teaching Endotracheal Intubation: MR Imaging to Create Anatomically Correct Models

    PubMed Central

    Kerner, Karen F.; Imielinska, Celina; Rolland, Jannick; Tang, Haiying

    2003-01-01

    Clinical procedures have traditionally been taught at the bedside, in the morgue and in the animal lab. Augmented Reality (AR) technology (the merging of virtual reality and real objects or patients) provides a new method for teaching clinical and surgical procedures. Improved patient safety is a major advantage. We describe a system which employs AR technology to teach endotracheal intubation, using the Visible Human datasets, as well as MR images from live patient volunteers. PMID:14728393

  11. Improvement of hospital processes through business process management in Qaem Teaching Hospital: A work in progress.

    PubMed

    Yarmohammadian, Mohammad H; Ebrahimipour, Hossein; Doosty, Farzaneh

    2014-01-01

    In a world of continuously changing business environments, organizations have no option but to deal with such a large degree of transformation in order to meet the consequent demands. Therefore, many companies need to continually improve and review their processes to maintain their competitive advantages in an uncertain environment. Meeting these challenges requires implementing the most efficient possible business processes, geared to the needs of the industry and market segments that the organization serves globally. In the last 10 years, total quality management, business process reengineering, and business process management (BPM) have been some of the management tools applied by organizations to increase business competitiveness. This paper is an original article that presents implementation of the "BPM" approach in the healthcare domain that allows an organization to improve and review its critical business processes. This project was performed in "Qaem Teaching Hospital" in Mashhad city, Iran and consists of four distinct steps: (1) identify business processes, (2) document the process, (3) analyze and measure the process, and (4) improve the process. Implementing BPM in Qaem Teaching Hospital changed the nature of management by allowing the organization to avoid the complexity of disparate, siloed systems. BPM instead enabled the organization to focus on business processes at a higher level. PMID:25540784

  12. Improvement of hospital processes through business process management in Qaem Teaching Hospital: A work in progress

    PubMed Central

    Yarmohammadian, Mohammad H.; Ebrahimipour, Hossein; Doosty, Farzaneh

    2014-01-01

    In a world of continuously changing business environments, organizations have no option but to deal with such a large degree of transformation in order to meet the consequent demands. Therefore, many companies need to continually improve and review their processes to maintain their competitive advantages in an uncertain environment. Meeting these challenges requires implementing the most efficient possible business processes, geared to the needs of the industry and market segments that the organization serves globally. In the last 10 years, total quality management, business process reengineering, and business process management (BPM) have been some of the management tools applied by organizations to increase business competitiveness. This paper is an original article that presents implementation of the “BPM” approach in the healthcare domain that allows an organization to improve and review its critical business processes. This project was performed in “Qaem Teaching Hospital” in Mashhad city, Iran and consists of four distinct steps: (1) identify business processes, (2) document the process, (3) analyze and measure the process, and (4) improve the process. Implementing BPM in Qaem Teaching Hospital changed the nature of management by allowing the organization to avoid the complexity of disparate, siloed systems. BPM instead enabled the organization to focus on business processes at a higher level. PMID:25540784

  13. The geometric processing of remote sensing images

    NASA Astrophysics Data System (ADS)

    de Masson Dautume, G.

    The paper presents a process, based on generalized rays, which is suitable for the production of orthophotos and stereomates or the analytical plotting, starting from any kind of image or equivalent numerical data. The basic tool of the process is the Deformation Model (DM) which gives for each node of a lattice in the object-space the image coordinates and the direction parameters of the incident ray. It is demonstrated that image coordinates of an object-point are obtained by simple interpolation and that second order epipolar lines exist. Possible applications to conventional or CCD cameras and SLAR are outlined.

  14. Rotation Covariant Image Processing for Biomedical Applications

    PubMed Central

    Reisert, Marco

    2013-01-01

    With the advent of novel biomedical 3D image acquisition techniques, the efficient and reliable analysis of volumetric images has become more and more important. The amount of data is enormous and demands automated processing. The applications are manifold, ranging from image enhancement, image reconstruction, and image description to object/feature detection and high-level contextual feature extraction. In most scenarios, it is expected that geometric transformations alter the output in a mathematically well-defined manner. In this paper we focus on 3D translations and rotations. Many algorithms rely on intensity or low-order tensorial-like descriptions to fulfill this demand. This paper proposes a general mathematical framework based on mathematical concepts and theories transferred from mathematical physics and harmonic analysis into the domain of image analysis and pattern recognition. Based on two basic operations, spherical tensor differentiation and spherical tensor multiplication, we show how to design a variety of 3D image processing methods in an efficient way. The framework has already been applied to several biomedical applications ranging from feature and object detection tasks to image enhancement and image restoration techniques. In this paper, the proposed methods are applied on a variety of different 3D data modalities stemming from medical and biological sciences. PMID:23710255

  15. Image processing via ultrasonics - Status and promise

    NASA Technical Reports Server (NTRS)

    Kornreich, P. G.; Kowel, S. T.; Mahapatra, A.; Nouhi, A.

    1979-01-01

    Acousto-electric devices for electronic imaging of light are discussed. These devices are more versatile than line scan imaging devices in current use. They have the capability of presenting the image information in a variety of modes. The image can be read out in the conventional line scan mode. It can be read out in the form of the Fourier, Hadamard, or other transform. One can take the transform along one direction of the image and line scan in the other direction, or perform other combinations of image processing functions. This is accomplished by applying the appropriate electrical input signals to the device. Since the electrical output signal of these devices can be detected in a synchronous mode, substantial noise reduction is possible.

  16. Emerging Model of Questioning through the Process of Teaching and Learning Electrochemistry

    ERIC Educational Resources Information Center

    Iksan, Zanaton Haji; Daniel, Esther

    2015-01-01

    Verbal questioning is a technique used by teachers in the teaching and learning process. Research in Malaysia related to teachers' questioning in the chemistry teaching and learning process is more focused on the level of the questions asked rather than the content to ensure that students understand. Thus, the research discussed in this paper is…

  17. Effects of Using Online Tools in Improving Regulation of the Teaching-Learning Process

    ERIC Educational Resources Information Center

    de la Fuente, Jesus; Cano, Francisco; Justicia, Fernando; Pichardo, Maria del Carmen; Garcia-Berben, Ana Belen; Martinez-Vicente, Jose Manuel; Sander, Paul

    2007-01-01

    Introduction: The current panorama of Higher Education reveals a need to improve teaching and learning processes taking place there. The rise of the information society transforms how we organize learning and transmit knowledge. On this account, teaching-learning processes must be enhanced, the role of teachers and students must be evaluated, and…

  18. Overview on METEOSAT geometrical image data processing

    NASA Technical Reports Server (NTRS)

    Diekmann, Frank J.

    1994-01-01

    Digital Images acquired from the geostationary METEOSAT satellites are processed and disseminated at ESA's European Space Operations Centre in Darmstadt, Germany. Their scientific value is mainly dependent on their radiometric quality and geometric stability. This paper will give an overview on the image processing activities performed at ESOC, concentrating on the geometrical restoration and quality evaluation. The performance of the rectification process for the various satellites over the past years will be presented and the impacts of external events as for instance the Pinatubo eruption in 1991 will be explained. Special developments both in hard and software, necessary to cope with demanding tasks as new image resampling or to correct for spacecraft anomalies, are presented as well. The rotating lens of MET-5 causing severe geometrical image distortions is an example for the latter.

  19. Real-time optical image processing techniques

    NASA Technical Reports Server (NTRS)

    Liu, Hua-Kuang

    1988-01-01

    Nonlinear real-time optical processing based on spatial pulse frequency modulation has been pursued through the analysis, design, and fabrication of pulse frequency modulated halftone screens and the modification of micro-channel spatial light modulators (MSLMs). Micro-channel spatial light modulators are modified via the Fabry-Perot method to achieve the high gamma operation required for non-linear operation. Real-time nonlinear processing was performed using the halftone screen and MSLM. The experiments showed the effectiveness of the thresholding and also showed the need for higher SBP for image processing. The Hughes LCLV has been characterized and found to yield high gamma (about 1.7) when operated in low frequency and low bias mode. Cascading of two LCLVs should also provide enough gamma for nonlinear processing. In this case, the SBP of the LCLV is sufficient but the uniformity of the LCLV needs improvement. Demonstrated applications include image correlation, computer generation of holograms, pseudo-color image encoding for image enhancement, and associative retrieval in neural processing. The discovery of the only known optical method for dynamic range compression of an input image in real-time by using GaAs photorefractive crystals is reported. Finally, a new architecture for non-linear multiple sensory, neural processing has been suggested.

  20. Bistatic SAR: Signal Processing and Image Formation.

    SciTech Connect

    Wahl, Daniel E.; Yocky, David A.

    2014-10-01

    This report describes the significant processing steps used to take the raw recorded digitized signals from the bistatic synthetic aperture radar (SAR) hardware built for the NCNS Bistatic SAR project to a final bistatic SAR image. In general, the processing steps herein are applicable to bistatic SAR signals that include the direct-path signal and the reflected signal. The steps include preprocessing, data extraction to form a phase history, and finally, image formation. Various plots and values are shown at most steps to illustrate the processing for a bistatic COSMO-SkyMed collection gathered on June 10, 2013 at Kirtland Air Force Base, New Mexico.
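One core step between phase-history formation and image formation is pulse (range) compression: matched filtering each recorded echo against the transmitted chirp. A hedged sketch with synthetic data (the chirp parameters are arbitrary, not those of the NCNS hardware):

```python
import numpy as np

def range_compress(received, chirp):
    """Matched-filter pulse compression: correlate the echo with the chirp via FFT."""
    n = len(received) + len(chirp) - 1
    spectrum = np.fft.fft(received, n) * np.conj(np.fft.fft(chirp, n))
    return np.fft.ifft(spectrum)

# Synthetic example: one point target delays the chirp by 40 samples.
t = np.linspace(0, 1, 64)
chirp = np.exp(1j * np.pi * 20 * t**2)      # linear-FM pulse
echo = np.zeros(200, dtype=complex)
echo[40:40 + len(chirp)] = chirp
compressed = range_compress(echo, chirp)
peak = int(np.argmax(np.abs(compressed)))   # peak location marks the target delay
```

The correlation peak collapses the long chirp into a sharp response at the target's delay, which is what makes fine range resolution possible from a long pulse.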

  1. JIP: Java image processing on the Internet

    NASA Astrophysics Data System (ADS)

    Wang, Dongyan; Lin, Bo; Zhang, Jun

    1998-12-01

    In this paper, we present JIP - Java Image Processing on the Internet, a new Internet-based application for remote education and software presentation. JIP offers an integrated learning environment on the Internet where remote users not only can share static HTML documents and lecture notes, but also can run and reuse dynamic distributed software components, without having the source code or any extra work of software compilation, installation, and configuration. By implementing a platform-independent distributed computational model, local computational resources are consumed instead of the resources of a central server. As an extended Java applet, JIP allows users to select local image files on their computers or to specify any image on the Internet by its URL as input. Multimedia lectures such as streaming video/audio and digital images are integrated into JIP and intelligently associated with specific image processing functions. Watching demonstrations and practicing the functions with user-selected input data dramatically encourages learning interest while promoting the understanding of image processing theory. The JIP framework can easily be applied to other subjects in education or software presentation, such as digital signal processing, business, mathematics, and physics, or to other areas such as employee training and paid software use.

  2. Mobile Phone Images and Video in Science Teaching and Learning

    ERIC Educational Resources Information Center

    Ekanayake, Sakunthala Yatigammana; Wishart, Jocelyn

    2014-01-01

    This article reports a study into how mobile phones could be used to enhance teaching and learning in secondary school science. It describes four lessons devised by groups of Sri Lankan teachers all of which centred on the use of the mobile phone cameras rather than their communication functions. A qualitative methodological approach was used to…

  3. Snapping Sharks, Maddening Mindreaders, and Interactive Images: Teaching Correlation.

    ERIC Educational Resources Information Center

    Mitchell, Mark L.

    Understanding correlation coefficients is difficult for students. This paper describes a free computer program that helps introductory psychology students distinguish between positive and negative correlations and understand the differences between correlation coefficients of different sizes. The program is…
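The statistic underlying such a program is the Pearson correlation coefficient; a minimal illustration (the sample data are invented for the example):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between paired samples."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xd, yd = x - x.mean(), y - y.mean()
    return float((xd * yd).sum() / np.sqrt((xd**2).sum() * (yd**2).sum()))

# A perfect positive and a perfect negative linear relationship:
hours = [1, 2, 3, 4, 5]
score_up = [10, 20, 30, 40, 50]     # rises with hours: r = +1
score_down = [50, 40, 30, 20, 10]   # falls with hours: r = -1
```

Intermediate values of r between -1 and +1 correspond to noisier scatter around the line, which is exactly the distinction the described program trains students to see.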

  4. A Pipeline Tool for CCD Image Processing

    NASA Astrophysics Data System (ADS)

    Bell, Jon F.; Young, Peter J.; Roberts, William H.; Sebo, Kim M.

    MSSSO is part of a collaboration developing a wide field imaging CCD mosaic (WFI). As part of this project, we have developed a GUI-based pipeline tool that is an integrated part of MSSSO's CICADA data acquisition environment and processes CCD FITS images as they are acquired. The tool is also designed to run as a stand-alone program to process previously acquired data. IRAF tasks are used as the central engine, including the new NOAO mscred package for processing multi-extension FITS files. The STScI OPUS pipeline environment may be used to manage data and process scheduling. The Motif GUI was developed using SUN Visual Workshop. C++ classes were written to facilitate launching of IRAF and OPUS tasks. While this first version implements calibration processing up to and including flat-field corrections, there is scope to extend it to other processing.
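The flat-field stage of such a calibration pipeline can be sketched as follows (a simplified stand-in for the IRAF mscred tasks, with invented values; real pipelines also handle overscan, dark current, and bad pixels):

```python
import numpy as np

def calibrate(raw, bias, flat):
    """Basic CCD calibration: subtract bias, divide by the normalized flat field."""
    flat_norm = (flat - bias) / np.mean(flat - bias)
    return (raw - bias) / flat_norm

# Synthetic frames: uniform bias, a patterned flat, and a uniformly lit sky.
bias = np.full((4, 4), 100.0)
pattern = np.array([[1, 2], [2, 1]]).repeat(2, 0).repeat(2, 1)
flat = bias + pattern * 50.0
sky = 200.0
raw = bias + sky * (flat - bias) / np.mean(flat - bias)
science = calibrate(raw, bias, flat)   # pixel-to-pixel sensitivity removed
```

After calibration the uniformly illuminated frame is flat again, which is the criterion such pipelines are checked against.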

  5. Thermal Imaging Processes of Polymer Nanocomposite Coatings

    NASA Astrophysics Data System (ADS)

    Meth, Jeffrey

    2015-03-01

    Laser induced thermal imaging (LITI) is a process whereby infrared radiation impinging on a coating on a donor film transfers that coating to a receiving film to produce a pattern. This talk describes how LITI patterning can print color filters for liquid crystal displays, and details the physical processes that are responsible for transferring the nanocomposite coating in a coherent manner that does not degrade its optical properties. Unique features of this process involve heating rates of 10^7 K/s and cooling rates of 10^4 K/s, which implies that not all of the relaxation modes of the polymer are accessed during the imaging process. On the microsecond time scale, the polymer flow is forced by devolatilization of solvents, followed by deformation akin to the constrained blister test, and then fracture caused by differential thermal expansion. The unique combination of disparate physical processes demonstrates the gamut of physics that contribute to advanced material processing in an industrial setting.

  6. Digital-image processing and image analysis of glacier ice

    USGS Publications Warehouse

    Fitzpatrick, Joan J.

    2013-01-01

    This document provides a methodology for extracting grain statistics from 8-bit color and grayscale images of thin sections of glacier ice—a subset of physical properties measurements typically performed on ice cores. This type of analysis is most commonly used to characterize the evolution of ice-crystal size, shape, and intercrystalline spatial relations within a large body of ice sampled by deep ice-coring projects from which paleoclimate records will be developed. However, such information is equally useful for investigating the stress state and physical responses of ice to stresses within a glacier. The methods of analysis presented here go hand-in-hand with the analysis of ice fabrics (aggregate crystal orientations) and, when combined with fabric analysis, provide a powerful method for investigating the dynamic recrystallization and deformation behaviors of bodies of ice in motion. The procedures described in this document constitute a step-by-step handbook for a specific image acquisition and data reduction system built in support of U.S. Geological Survey ice analysis projects, but the general methodology can be used with any combination of image processing and analysis software. The specific approaches in this document use the FoveaPro 4 plug-in toolset for Adobe Photoshop CS5 Extended, but the analysis can be carried out equally well, though somewhat less conveniently, with software such as the image processing toolbox in MATLAB, Image-Pro Plus, or ImageJ.
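The core of grain-statistics extraction is connected-component labeling of a thresholded (binary) grain mask followed by per-grain measurements. An illustrative toolbox-independent sketch (FoveaPro, MATLAB, and ImageJ all provide this step internally):

```python
import numpy as np

def label_grains(binary):
    """4-connected component labeling of a binary grain mask via flood fill."""
    labels = np.zeros(binary.shape, dtype=int)
    count = 0
    for sy in range(binary.shape[0]):
        for sx in range(binary.shape[1]):
            if binary[sy, sx] and labels[sy, sx] == 0:
                count += 1
                stack = [(sy, sx)]
                while stack:
                    y, x = stack.pop()
                    if (0 <= y < binary.shape[0] and 0 <= x < binary.shape[1]
                            and binary[y, x] and labels[y, x] == 0):
                        labels[y, x] = count
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return labels, count

def grain_areas(labels, n):
    """Pixel area of each labeled grain, the simplest grain statistic."""
    return [int((labels == k).sum()) for k in range(1, n + 1)]

# Toy mask with two separate grains:
mask = np.array([[1, 1, 0, 0],
                 [1, 0, 0, 1],
                 [0, 0, 1, 1]], dtype=bool)
labels, n = label_grains(mask)
```

Size and shape statistics (mean grain area, elongation, and so on) are then summaries over the labeled regions.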

  7. Ambassadors of the Swedish Nation: National Images in the Teaching of the Swedish Lecturers in Germany 1918-1945

    ERIC Educational Resources Information Center

    Åkerlund, Andreas

    2015-01-01

    This article analyses the teaching of Swedish language lecturers active in Germany during the first half of the twentieth century. It shows the centrality of literature and literary constructions and analyses images of Swedishness and the Swedish nation present in the teaching material of that time in relation to the national image present in…

  8. ECE 468 Digital Image Processing Catalog Description: Introduction to digital image processing including fundamental concepts

    E-print Network

    Topics include filters; noise characterization (periodic and random noise); image restoration; and 2-D image processing (filtering, enhancement, de-noising). Measurable student learning outcomes cover restoration, filtering, and de-noising (ABET outcomes a, c, e, k, l, m, n). Learning resources: Digital Image
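As an example of the de-noising topic listed above, a 3x3 median filter, a standard non-linear remedy for impulse ("salt-and-pepper") noise (an illustrative implementation, not actual course material):

```python
import numpy as np

def median_filter(img, size=3):
    """Median filter with edge-replicated padding: each output pixel is the
    median of its size-by-size neighborhood."""
    pad = size // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.empty_like(img, dtype=float)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = np.median(padded[y:y + size, x:x + size])
    return out

noisy = np.full((5, 5), 10.0)
noisy[2, 2] = 255.0               # a single "salt" pixel
clean = median_filter(noisy)      # the outlier is rejected entirely
```

Unlike a linear average, the median discards the outlier instead of smearing it into its neighbors, which is why it is the introductory example for non-linear filtering.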

  9. Parallel asynchronous systems and image processing algorithms

    NASA Technical Reports Server (NTRS)

    Coon, D. D.; Perera, A. G. U.

    1989-01-01

    A new hardware approach to implementation of image processing algorithms is described. The approach is based on silicon devices which would permit an independent analog processing channel to be dedicated to every pixel. A laminar architecture consisting of a stack of planar arrays of the device would form a two-dimensional array processor with a 2-D array of inputs located directly behind a focal plane detector array. A 2-D image data stream would propagate in neuronlike asynchronous pulse-coded form through the laminar processor. Such systems would integrate image acquisition and image processing. Acquisition and processing would be performed concurrently as in natural vision systems. The research is aimed at implementation of algorithms, such as the intensity dependent summation algorithm and pyramid processing structures, which are motivated by the operation of natural vision systems. Implementation of natural vision algorithms would benefit from the use of neuronlike information coding and the laminar, 2-D parallel, vision-system-type architecture. Besides providing a neural network framework for implementation of natural vision algorithms, a 2-D parallel approach could eliminate the serial bottleneck of conventional processing systems. Conversion to serial format would occur only after raw intensity data has been substantially processed. An interesting challenge arises from the fact that the mathematical formulation of natural vision algorithms does not specify the means of implementation, so that hardware implementation poses intriguing questions involving vision science.
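The pyramid processing structures mentioned above reduce an image to successively coarser levels. A minimal software sketch using 2x2 block averaging (the hardware described in the abstract would realize this with analog, asynchronous circuitry rather than code):

```python
import numpy as np

def pyramid(img, depth):
    """Build an image pyramid: each level halves resolution by 2x2 averaging."""
    out = [np.asarray(img, float)]
    for _ in range(depth - 1):
        a = out[-1]
        h, w = a.shape[0] // 2 * 2, a.shape[1] // 2 * 2   # crop to even size
        a = a[:h, :w]
        out.append((a[0::2, 0::2] + a[1::2, 0::2]
                    + a[0::2, 1::2] + a[1::2, 1::2]) / 4.0)
    return out

levels = pyramid(np.arange(16, dtype=float).reshape(4, 4), 3)
```

Each level summarizes the one below it, so coarse structure can be processed cheaply before fine detail, mirroring the multi-scale organization of natural vision.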

  10. Image processing of angiograms: A pilot study

    NASA Technical Reports Server (NTRS)

    Larsen, L. E.; Evans, R. A.; Roehm, J. O., Jr.

    1974-01-01

    This report describes a technology transfer application resulting from a pilot study of image-processing methods applied to the enhancement, coding, and analysis of arteriograms. Angiography is a subspecialty of radiology that employs the introduction of media with high X-ray absorption into arteries in order to study vessel pathology as well as to infer disease of the organs supplied by the vessel in question.

  11. Infrared image processing and data analysis

    NASA Astrophysics Data System (ADS)

    Ibarra-Castanedo, C.; González, D.; Klein, M.; Pilla, M.; Vallerand, S.; Maldague, X.

    2004-12-01

    Infrared thermography in nondestructive testing provides images (thermograms) in which zones of interest (defects) sometimes appear only as subtle signatures. In this context, raw images are often not adequate, since most defects will be missed. In other cases, a quantitative analysis is needed, for instance for defect detection and characterization. This paper presents various methods of data analysis required at the preprocessing and/or processing stages. References from the literature are provided for known methods, which are discussed briefly, while novelties are elaborated in more detail within the text, together with experimental results.

  12. Reconnaissance system performance, prediction through image processing

    NASA Astrophysics Data System (ADS)

    Conrad, Gary

    1990-11-01

    After establishing a working-base image of given pixel resolution and digitization, in conjunction with a calibration transform, the present method for image-processing-based reconnaissance system performance evaluation gives attention to scaling, atmospheric, and MTF factors. The 'kernel spread-point definition', which is a weighted map of the redistribution of energy from a point object, is used to emulate lens-diffraction limitations, linear image motion, and sinusoidal or random vibration. Recognition and identification criteria can then be evaluated by simply looking at targets.
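Linear image motion, one of the degradations emulated by the weighted kernel described above, can be illustrated by convolving a point source with a uniform motion-blur kernel (a simplified software analogue, not the author's implementation):

```python
import numpy as np

def motion_blur_kernel(length):
    """Horizontal linear-motion PSF: energy spread evenly along the track."""
    k = np.zeros((1, length))
    k[0, :] = 1.0 / length
    return k

def convolve2d(img, kernel):
    """Direct 2-D convolution with zero padding ('same' output size)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img, dtype=float)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = np.sum(padded[y:y + kh, x:x + kw] * kernel[::-1, ::-1])
    return out

point = np.zeros((3, 5))
point[1, 2] = 1.0                  # a point source
blurred = convolve2d(point, motion_blur_kernel(3))
```

The point's energy is redistributed along the motion track while total energy is conserved, which is exactly what the "weighted map of the redistribution of energy" expresses.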

  13. Three-dimensional image signals: processing methods

    NASA Astrophysics Data System (ADS)

    Schiopu, Paul; Manea, Adrian; Craciun, Anca-Ileana; Craciun, Alexandru

    2010-11-01

    Over the years, extensive studies have been carried out to apply coherent-optics methods to real-time processing, communications, and image transmission. This is especially true when a large amount of information needs to be processed, e.g., in high-resolution imaging. The recent progress in data-processing networks and communication systems has considerably increased the capacity of information exchange. We describe the results of a literature survey of processing methods for three-dimensional image signals. All commercially available 3D technologies today are based on stereoscopic viewing. 3D technology was once the exclusive domain of skilled computer-graphics developers with high-end machines and software. Images captured with an advanced 3D digital camera can be displayed on the screen of a 3D digital viewer with or without special glasses. This requires considerable processing power and memory to create and render the complex mix of colors, textures, and virtual lighting and perspective necessary to make figures appear three-dimensional. Also, using a standard digital camera and a technique called phase-shift interferometry, we can capture "digital holograms": holograms that can be stored on a computer and transmitted over conventional networks. We present research methods for processing "digital holograms" for Internet transmission, together with results.
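The phase-shift interferometry mentioned above recovers phase from several interferograms taken with known phase offsets. A sketch of the standard four-step variant with synthetic data (the paper does not specify which variant it uses):

```python
import numpy as np

def four_step_phase(i1, i2, i3, i4):
    """Recover wrapped phase from four interferograms shifted by pi/2 each:
    I_k = A + B*cos(phi + k*pi/2), so phi = atan2(I4 - I2, I1 - I3)."""
    return np.arctan2(i4 - i2, i1 - i3)

# Synthetic fringes with a known phase ramp:
phase = np.linspace(-1.0, 1.0, 50)          # true phase, within (-pi, pi)
a, b = 5.0, 2.0                             # background and modulation
frames = [a + b * np.cos(phase + k * np.pi / 2) for k in range(4)]
recovered = four_step_phase(*frames)
```

The differences cancel both the background A and the modulation B, leaving only the phase, which is what makes the method usable with an ordinary digital camera.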

  14. Support Routines for In Situ Image Processing

    NASA Technical Reports Server (NTRS)

    Deen, Robert G.; Pariser, Oleg; Yeates, Matthew C.; Lee, Hyun H.; Lorre, Jean

    2013-01-01

    This software consists of a set of application programs that support ground-based image processing for in situ missions. These programs represent a collection of utility routines that perform miscellaneous functions in the context of the ground data system. Each one fulfills some specific need as determined via operational experience. The most unique aspect to these programs is that they are integrated into the large, in situ image processing system via the PIG (Planetary Image Geometry) library. They work directly with space in situ data, understanding the appropriate image meta-data fields and updating them properly. The programs themselves are completely multimission; all mission dependencies are handled by PIG. This suite of programs consists of: (1)marscahv: Generates a linearized, epi-polar aligned image given a stereo pair of images. These images are optimized for 1-D stereo correlations, (2) marscheckcm: Compares the camera model in an image label with one derived via kinematics modeling on the ground, (3) marschkovl: Checks the overlaps between a list of images in order to determine which might be stereo pairs. This is useful for non-traditional stereo images like long-baseline or those from an articulating arm camera, (4) marscoordtrans: Translates mosaic coordinates from one form into another, (5) marsdispcompare: Checks a Left Right stereo disparity image against a Right Left disparity image to ensure they are consistent with each other, (6) marsdispwarp: Takes one image of a stereo pair and warps it through a disparity map to create a synthetic opposite- eye image. For example, a right eye image could be transformed to look like it was taken from the left eye via this program, (7) marsfidfinder: Finds fiducial markers in an image by projecting their approximate location and then using correlation to locate the markers to subpixel accuracy. These fiducial markets are small targets attached to the spacecraft surface. 
This helps verify, or improve, the pointing of in situ cameras, (8) marsinvrange: Inverse of marsrange; given a range file, re-computes an XYZ file that closely matches the original, (9) marsproj: Projects an XYZ coordinate through the camera model and reports the line/sample coordinates of the point in the image, (10) marsprojfid: Given the output of marsfidfinder, projects the XYZ locations and compares them to the found locations, creating a report showing the fiducial errors in each image, (11) marsrad: Radiometrically corrects an image, (12) marsrelabel: Updates coordinate system or camera model labels in an image, (13) marstiexyz: Given a stereo pair, allows the user to interactively pick a point in each image and reports the XYZ value corresponding to that pair of locations, (14) marsunmosaic: Extracts a single frame from a mosaic, created such that it could have been an input to the original mosaic; useful for creating simulated input frames using different camera models than the original mosaic used, and (15) merinverter: Uses an inverse lookup table to convert 8-bit telemetered data to its 12-bit original form; can be used in other missions despite the name.
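The projection performed by marsproj can be illustrated with the CAHV camera model commonly used for planetary in situ cameras, where sample = ((P-C)·H)/((P-C)·A) and line = ((P-C)·V)/((P-C)·A). A hedged sketch with an invented toy camera (flight software also handles lens-distortion extensions such as CAHVOR):

```python
import numpy as np

def cahv_project(p, c, a, h, v):
    """Project 3-D point p to (sample, line) using a CAHV camera model.
    c: camera center, a: pointing axis, h/v: horizontal/vertical info vectors."""
    d = np.asarray(p, float) - np.asarray(c, float)
    denom = d @ a
    return (d @ h) / denom, (d @ v) / denom

# Toy model: camera at the origin looking down +Z, 100-pixel focal length,
# optical centre at pixel (320, 240).
c = np.zeros(3)
a = np.array([0.0, 0.0, 1.0])
h = 100.0 * np.array([1.0, 0.0, 0.0]) + 320.0 * a
v = 100.0 * np.array([0.0, 1.0, 0.0]) + 240.0 * a
sample, line = cahv_project([1.0, 2.0, 10.0], c, a, h, v)
```

For this toy camera the point at (1, 2, 10) lands 10 pixels right of and 20 pixels below the optical centre, as the pinhole geometry predicts.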

  15. Teaching Practice Trends Regarding the Teaching of the Design Process within a South African Context: A Situation Analysis

    ERIC Educational Resources Information Center

    Potgieter, Calvyn

    2013-01-01

    In this article an analysis is made of the responses of 95 technology education teachers, 14 technology education lecturers and 25 design practitioners to questionnaires regarding the teaching and the application of the design process. The main purpose of the questionnaires is to determine whether there are any trends regarding the strategies and…

  16. Processing infrared images of aircraft lapjoints

    NASA Technical Reports Server (NTRS)

    Syed, Hazari; Winfree, William P.; Cramer, K. E.

    1992-01-01

    Techniques for processing IR images of aging-aircraft lapjoint data are discussed. Attention is given to a technique for detecting disbonds in aircraft lapjoints which clearly delineates the disbonded region from the bonded regions. The technique is weak on unpainted aircraft skin surfaces, but this can be overcome by using a self-adhering contact sheet. Neural network analysis of raw temperature data has been shown to be an effective tool for visualization of images. Numerical simulation results show the above processing technique to be effective in delineating the disbonds.

  17. Contour Stencils and Variational Image Processing

    NASA Astrophysics Data System (ADS)

    Getreuer, Pascal Tom

    The first part of this thesis is on contour stencils, a new method for edge adaptive image processing. We focus particularly on image zooming, which is the problem of increasing the resolution of a given image. An important aspect of zooming is accurate estimation of edge orientations. Contour stencils are a new method for estimating image contours based on total variation along curves. Contour stencils are applied in designing several edge-adaptive color zooming methods. These zooming methods fall at different points in the balance between speed and quality. One of these zooming methods, contour stencil windowed zooming, is particularly successful. Although most zooming methods require either solving a large linear system or running many iterations, this method has linear complexity in the number of pixels and can be computed in a single pass through the image. The zoomed image is constructed as a function that may be sampled anywhere, enabling arbitrary resampling operations. Comparisons show that contour stencil zooming methods are competitive with existing methods. Applications of contour stencils to corner detection and image enhancement are also illustrated. The second part of this thesis is on topics in variational image processing. First, we apply variational techniques to formulate a total variation optimal prediction in Harten multiresolution schemes. We show that this prediction is well-defined, construct a Harten multiresolution using this prediction, and show that a modified encoding strategy is possible for approximation using the scheme. We also investigate the efficient numerical solution of the prediction and compare several different algorithms. Examples show that image approximation with this scheme is competitive with the CDF 9/7 wavelet. Next, we investigate nonconvex potentials in variational image problems.
For the approximate solution of these nonconvex problems, we develop a particle swarm optimization like algorithm that avoids becoming trapped in shallow local minima. Examples in denoising and image zooming show that the method can outperform gradient descent. Finally, the last topic is on image restoration with Rician noise. Total variation regularization is usually applied with L² data fidelity assuming an additive white Gaussian noise model. However, better results are possible when the noise model accurately describes the noise in the given image. Total variation denoising has already been developed with the Laplace noise model (L¹ data fidelity) and the Poisson noise model. A challenge with Rician noise is that the resulting objective function is nonconvex. We develop a convex variational problem that closely approximates the Rician noise model. The problem is efficiently solved using the split Bregman method.
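The split Bregman method mentioned above owes its efficiency to the fact that its inner l1 subproblem has a closed-form solution: the soft-shrinkage operator. A minimal sketch of that operator (illustrative only, not the thesis code):

```python
import numpy as np

def shrink(x, gamma):
    """Soft-thresholding (shrinkage): the elementwise closed-form minimizer of
    |d| + (1/(2*gamma)) * (d - x)**2, applied in each split Bregman iteration."""
    return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)
```

Applied to image gradients, shrinkage zeroes out small gradients (promoting flat regions, the hallmark of total variation regularization) while shifting large ones toward zero by a fixed amount, preserving edges.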

  18. Scaling-up Process-Oriented Guided Inquiry Learning Techniques for Teaching Large Information Systems Courses

    ERIC Educational Resources Information Center

    Trevathan, Jarrod; Myers, Trina; Gray, Heather

    2014-01-01

    Promoting engagement during lectures becomes significantly more challenging as class sizes increase. Therefore, lecturers need to experiment with new teaching methodologies to embolden deep learning outcomes and to develop interpersonal skills amongst students. Process Oriented Guided Inquiry Learning is a teaching approach that uses highly…

  19. Twitter for Teaching: Can Social Media Be Used to Enhance the Process of Learning?

    ERIC Educational Resources Information Center

    Evans, Chris

    2014-01-01

    Can social media be used to enhance the process of learning by students in higher education? Social media have become widely adopted by students in their personal lives. However, the application of social media to teaching and learning remains to be fully explored. In this study, the use of the social media tool Twitter for teaching was…

  20. The Effect of Activity Based Lexis Teaching on Vocabulary Development Process

    ERIC Educational Resources Information Center

    Mert, Esra Lule

    2013-01-01

    "Teaching word" as a complimentary process of teaching Turkish is a crucial field of study. However, studies on this area are insufficient. The only aim of the designed activities that get under way with the constructivist approach on which new education programs are based is to provide students with vocabulary elements of Turkish. In…

  1. Process Evaluation of a Teaching and Learning Centre at a Research University

    ERIC Educational Resources Information Center

    Smith, Deborah B.; Gadbury-Amyot, Cynthia C.

    2014-01-01

    This paper describes the evaluation of a teaching and learning centre (TLC) five years after its inception at a mid-sized, midwestern state university. The mixed methods process evaluation gathered data from 209 attendees and non-attendees of the TLC from the full-time, benefit-eligible teaching faculty. Focus groups noted feelings of…

  2. Preservice Chemistry Teachers' Images about Science Teaching in Their Future Classrooms

    ERIC Educational Resources Information Center

    Elmas, Ridvan; Demirdogen, Betul; Geban, Omer

    2011-01-01

    The purpose of this study is to explore pre-service chemistry teachers' images of science teaching in their future classrooms. The association between instructional style, gender, and desire to be a teacher was also explored. Sixty-six pre-service chemistry teachers from three public universities participated in the data collection for this study. A…

  3. An Emphasis on Perception: Teaching Image Formation Using a Mechanistic Model of Vision.

    ERIC Educational Resources Information Center

    Allen, Sue; And Others

    An effective way to teach the concept of image is to give students a model of human vision which incorporates a simple mechanism of depth perception. In this study two almost identical versions of a curriculum in geometrical optics were created. One used a mechanistic, interpretive eye model, and in the other the eye was modeled as a passive,…

  4. Processing Images of Craters for Spacecraft Navigation

    NASA Technical Reports Server (NTRS)

    Cheng, Yang; Johnson, Andrew E.; Matthies, Larry H.

    2009-01-01

    A crater-detection algorithm has been conceived to enable automation of what, heretofore, have been manual processes for utilizing images of craters on a celestial body as landmarks for navigating a spacecraft flying near or landing on that body. The images are acquired by an electronic camera aboard the spacecraft, then digitized, then processed by the algorithm, which consists mainly of the following steps: 1. Edges in an image are detected and placed in a database. 2. Crater rim edges are selected from the edge database. 3. Edges that belong to the same crater are grouped together. 4. An ellipse is fitted to each group of crater edges. 5. Ellipses are refined directly in the image domain to reduce errors introduced in the detection of edges and fitting of ellipses. 6. The quality of each detected crater is evaluated. It is planned to utilize this algorithm as the basis of a computer program for automated, real-time, onboard processing of crater-image data. Experimental studies have led to the conclusion that this algorithm is capable of a detection rate >93 percent, a false-alarm rate <5 percent, a geometric error <0.5 pixel, and a position error <0.3 pixel.
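Step 4 fits an ellipse to each group of crater edge points. As a simplified, verifiable stand-in, here is the analogous algebraic least-squares circle fit (the Kasa method; the actual algorithm fits general ellipses to accommodate viewing geometry):

```python
import numpy as np

def fit_circle(xs, ys):
    """Least-squares (Kasa) circle fit: solve x^2 + y^2 + a*x + b*y + c = 0
    for (a, b, c), then recover centre and radius."""
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    rhs = -(xs**2 + ys**2)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    cx, cy = -a / 2.0, -b / 2.0
    r = np.sqrt(cx**2 + cy**2 - c)
    return cx, cy, r

# Edge points sampled from a known rim: centre (2, 3), radius 5.
theta = np.linspace(0, 2 * np.pi, 20, endpoint=False)
xs = 2.0 + 5.0 * np.cos(theta)
ys = 3.0 + 5.0 * np.sin(theta)
cx, cy, r = fit_circle(xs, ys)
```

Because the model is linear in (a, b, c), the fit is a single least-squares solve, which is why conic fitting is fast enough for onboard, real-time use.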

  5. Onboard Image Processing System for Hyperspectral Sensor.

    PubMed

    Hihara, Hiroki; Moritani, Kotaro; Inoue, Masao; Hoshi, Yoshihiro; Iwasaki, Akira; Takada, Jun; Inada, Hitomi; Suzuki, Makoto; Seki, Taeko; Ichikawa, Satoshi; Tanii, Jun

    2015-01-01

    Onboard image processing systems for a hyperspectral sensor have been developed in order to maximize image data transmission efficiency for large volume and high speed data downlink capacity. Since more than 100 channels are required for hyperspectral sensors on Earth observation satellites, fast and small-footprint lossless image compression capability is essential for reducing the size and weight of a sensor system. A fast lossless image compression algorithm has been developed, and is implemented in the onboard correction circuitry of sensitivity and linearity of Complementary Metal Oxide Semiconductor (CMOS) sensors in order to maximize the compression ratio. The employed image compression method is based on Fast, Efficient, Lossless Image compression System (FELICS), which is a hierarchical predictive coding method with resolution scaling. To improve FELICS's performance of image decorrelation and entropy coding, we apply a two-dimensional interpolation prediction and adaptive Golomb-Rice coding. It supports progressive decompression using resolution scaling while still maintaining superior performance in terms of speed and complexity. Coding efficiency and compression speed enlarge the effective capacity of signal transmission channels, which leads to reduced onboard hardware by multiplexing sensor signals into a reduced number of compression circuits. The circuitry is embedded into the data formatter of the sensor system without adding size, weight, power consumption, and fabrication cost. PMID:26404281
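The adaptive Golomb-Rice stage mentioned above encodes prediction residuals with a parameter k matched to their magnitude. An illustrative sketch of a single Rice codeword plus the usual signed-to-unsigned residual mapping (not the flight implementation):

```python
def rice_encode(n, k):
    """Rice code of nonnegative n: unary quotient ('1'*q + '0'), then k remainder bits."""
    q, r = n >> k, n & ((1 << k) - 1)
    rem = format(r, 'b').zfill(k) if k > 0 else ''
    return '1' * q + '0' + rem

def rice_decode(bits, k):
    """Decode a single codeword produced by rice_encode."""
    q = bits.index('0')
    r = int(bits[q + 1:q + 1 + k], 2) if k > 0 else 0
    return (q << k) | r

def zigzag(e):
    """Map signed prediction residuals 0, -1, 1, -2, ... to 0, 1, 2, 3, ..."""
    return (e << 1) if e >= 0 else (-e << 1) - 1
```

Small residuals, which dominate after good prediction, get short codes; adapting k per region is what makes the scheme effective for decorrelated sensor data.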

  7. Teaching the Process of Science: Faculty Perceptions and an Effective Methodology

    PubMed Central

    Coil, David; Wenderoth, Mary Pat; Cunningham, Matthew

    2010-01-01

    Most scientific endeavors require science process skills such as data interpretation, problem solving, experimental design, scientific writing, oral communication, collaborative work, and critical analysis of primary literature. These are the fundamental skills upon which the conceptual framework of scientific expertise is built. Unfortunately, most college science departments lack a formalized curriculum for teaching undergraduates science process skills. However, evidence strongly suggests that explicitly teaching undergraduates skills early in their education may enhance their understanding of science content. Our research reveals that faculty overwhelmingly support teaching undergraduates science process skills but typically do not spend enough time teaching skills due to the perceived need to cover content. To encourage faculty to address this issue, we provide our pedagogical philosophies, methods, and materials for teaching science process skills to freshmen pursuing life science majors. We build upon previous work, showing student learning gains in both reading primary literature and scientific writing, and share student perspectives about a course where teaching the process of science, not content, was the focus. We recommend a wider implementation of courses that teach undergraduates science process skills early in their studies with the goals of improving student success and retention in the sciences and enhancing general science literacy. PMID:21123699

  8. Digital image processing of vascular angiograms

    NASA Technical Reports Server (NTRS)

    Selzer, R. H.; Beckenbach, E. S.; Blankenhorn, D. H.; Crawford, D. W.; Brooks, S. H.

    1975-01-01

    The paper discusses the estimation of the degree of atherosclerosis in the human femoral artery through the use of a digital image processing system for vascular angiograms. The film digitizer uses an electronic image dissector camera to scan the angiogram and convert the recorded optical density information into a numerical format. Another processing step involves locating the vessel edges from the digital image. The computer has been programmed to estimate vessel abnormality through a series of measurements, some derived primarily from the vessel edge information and others from optical density variations within the lumen shadow. These measurements are combined into an atherosclerosis index, which is found in a post-mortem study to correlate well with both visual and chemical estimates of atherosclerotic disease.
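Edge location from a density profile, the basis of the vessel-edge step above, can be illustrated in one dimension by taking the positions of the extreme gradients across a vessel cross-section (a deliberately simplified stand-in for the actual edge tracker):

```python
import numpy as np

def edge_positions(profile):
    """Locate left/right vessel edges in a 1-D density profile as the
    positions of maximum rising and maximum falling gradient."""
    g = np.gradient(np.asarray(profile, float))
    return int(np.argmax(g)), int(np.argmin(g))

# Synthetic cross-section: background density 0, vessel lumen density 10.
profile = [0, 0, 0, 10, 10, 10, 10, 0, 0, 0]
left, right = edge_positions(profile)
width = right - left   # a crude lumen-width estimate in pixels
```

Measurements such as width variation along the vessel, derived from edge positions like these, are the kind of features combined into the paper's atherosclerosis index.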

  9. ENVIRONMENTALLY-ORIENTED PROCESSING OF MULTI-SPECTRAL SATELLITE IMAGES

    E-print Network

    Kreinovich, Vladik

    ENVIRONMENTALLY-ORIENTED PROCESSING OF MULTI-SPECTRAL SATELLITE IMAGES: NEW CHALLENGES FOR BAYESIAN METHODS. S.A. Starks and V. Kreinovich, NASA Pan American Center for Earth and Environmental Studies. Satellites present an opportunity for scientists to investigate problems in environmental and earth science.

  10. Variational PDE Models in Image Processing

    E-print Network

    Vese, Luminita A.

    Applications include human and machine vision, telecommunication, autopiloting, surveillance video, and biometric security, spanning two branches of computer science: computer vision and computer graphics. Vision (whether machine or human) tries to reconstruct the scene from images; in vision and cognitive science, image processing is a basic tool used to reconstruct the relative order of objects.

  11. CMSC 426: Image Processing (Computer Vision)

    E-print Network

    Jacobs, David

    CMSC 426: Image Processing (Computer Vision), David Jacobs. Today's class: what is vision; what is computer vision; layout of the class. Vision: "to know what is where, by looking" (Marr). Why is vision interesting? Psychology: roughly 50% of the cerebral cortex is devoted to vision.

  12. Neuroprocessing: Image Compression & Audio Processing

    E-print Network

    Dony, Bob

    Neuroprocessing: Applications in Image Compression and Audio Processing. Topics include principal components via Hebbian learning neural networks and the APEX network.

  13. Review of biomedical signal and image processing

    PubMed Central

    2013-01-01

    This article is a review of the book “Biomedical Signal and Image Processing” by Kayvan Najarian and Robert Splinter, which is published by CRC Press, Taylor & Francis Group. It will evaluate the contents of the book and discuss its suitability as a textbook, while mentioning highlights of the book, and providing comparison with other textbooks.

  14. Limiting liability via high resolution image processing

    SciTech Connect

    Greenwade, L.E.; Overlin, T.K.

    1996-12-31

    The utilization of high resolution image processing allows forensic analysts and visualization scientists to assist detectives by enhancing field photographs, and by providing the tools and training to increase the quality and usability of field photos. Through the use of digitized photographs and computerized enhancement software, field evidence can be obtained and processed as `evidence ready`, even in poor lighting and shadowed conditions or darkened rooms. These images, which are most often unusable when taken with standard camera equipment, can be shot in the worst of photographic conditions and still be processed as usable evidence. Visualization scientists have moved the processing of crime scene photos into the digital age. The use of high resolution technology will assist law enforcement in making better use of crime scene photography and positive identification of prints. Valuable courtroom and investigation time can be saved by this accurate, performance-based process. Inconclusive evidence does not lead to convictions. Enhancement addresses one major problem with crime scene photos: images that, if taken with standard equipment and without the benefit of enhancement software, would be inconclusive, allowing guilty parties to go free for lack of evidence.

  15. Visual parameter optimisation for biomedical image processing

    PubMed Central

    2015-01-01

    Background Biomedical image processing methods require users to optimise input parameters to ensure high-quality output. This presents two challenges. First, it is difficult to optimise multiple input parameters for multiple input images. Second, it is difficult to achieve an understanding of underlying algorithms, in particular, relationships between input and output. Results We present a visualisation method that transforms users' ability to understand algorithm behaviour by integrating input and output, and by supporting exploration of their relationships. We discuss its application to a colour deconvolution technique for stained histology images and show how it enabled a domain expert to identify suitable parameter values for the deconvolution of two types of images, and metrics to quantify deconvolution performance. It also enabled a breakthrough in understanding by invalidating an underlying assumption about the algorithm. Conclusions The visualisation method presented here provides analysis capability for multiple inputs and outputs in biomedical image processing that is not supported by previous analysis software. The analysis supported by our method is not feasible with conventional trial-and-error approaches. PMID:26329538

  16. [Digital thoracic radiology: devices, image processing, limits].

    PubMed

    Frija, J; de Géry, S; Lallouet, F; Guermazi, A; Zagdanski, A M; De Kerviler, E

    2001-09-01

    In the first part, the different techniques of digital thoracic radiography are described. Computed radiography with phosphor plates, being the most widely commercialized, receives the most emphasis, but the other detectors are also described: the selenium-coated drum, direct digital radiography with selenium detectors, indirect flat-panel detectors, and a system with four high-resolution CCD cameras. In the second part, the most important image processing methods are discussed: gradation curves, unsharp mask processing, the MUSICA system, dynamic range compression or reduction, and dual-energy subtraction. In the last part, the advantages and drawbacks of computed thoracic radiography are discussed; the most important advantages are the consistently good image quality and the possibilities for image processing. PMID:11567193
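    Among the processing steps listed above, unsharp masking has a particularly compact formulation: a blurred copy is subtracted from the original and the difference is added back, scaled by an amount factor. A minimal NumPy sketch of the generic textbook version (not the MUSICA or any vendor implementation; the box blur and parameter names are illustrative):

```python
import numpy as np

def box_blur(img, k=3):
    """Separable k x k box blur, 'same' size, zero padding at the borders."""
    kernel = np.ones(k) / k
    blurred = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, blurred)
    return blurred

def unsharp_mask(img, amount=1.0, k=3):
    """Sharpened = original + amount * (original - blurred)."""
    return img + amount * (img - box_blur(img, k))
```

Flat regions are left unchanged (away from the padded border), while intensity steps are overshot on both sides, which is what makes edges look sharper.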

  17. Photogrammetrie GZ Exercise 3: Image Processing with Matlab

    E-print Network

    Giger, Christine

    Photogrammetrie GZ Exercise 3: Image Processing with Matlab (Institut für Geodäsie und …). In this exercise we shall investigate how the matrix capabilities of Matlab allow us to investigate and process digital images. Exercises: 1. Pick (a) a greyscale image and (b) a color image. Using …

  18. EOS image data processing system definition study

    NASA Technical Reports Server (NTRS)

    Gilbert, J.; Honikman, T.; Mcmahon, E.; Miller, E.; Pietrzak, L.; Yorsz, W.

    1973-01-01

    The Image Processing System (IPS) requirements and configuration are defined for NASA-sponsored advanced technology Earth Observatory System (EOS). The scope included investigation and definition of IPS operational, functional, and product requirements considering overall system constraints and interfaces (sensor, etc.) The scope also included investigation of the technical feasibility and definition of a point design reflecting system requirements. The design phase required a survey of present and projected technology related to general and special-purpose processors, high-density digital tape recorders, and image recorders.

  19. Teaching the Writing Process through Digital Storytelling in Pre-service Education 

    E-print Network

    Green, Martha Robison

    2012-07-16

    This study used a mixed-methods design to determine instructional strategies that best enhance pre-service teachers’ valuing of digital storytelling as a method to teach the narrative writing process; to consider how digital storytelling increases...

  20. Automated synthesis of image processing procedures using AI planning techniques

    NASA Technical Reports Server (NTRS)

    Chien, Steve; Mortensen, Helen

    1994-01-01

    This paper describes the Multimission VICAR (Video Image Communication and Retrieval) Planner (MVP) (Chien 1994) system, which uses artificial intelligence planning techniques (Iwasaki & Friedland, 1985, Pemberthy & Weld, 1992, Stefik, 1981) to automatically construct executable complex image processing procedures (using models of the smaller constituent image processing subprograms) in response to image processing requests made to the JPL Multimission Image Processing Laboratory (MIPL). The MVP system allows the user to specify the image processing requirements in terms of the various types of correction required. Given this information, MVP derives unspecified required processing steps and determines appropriate image processing programs and parameters to achieve the specified image processing goals. This information is output as an executable image processing program which can then be executed to fill the processing request.

  1. Sonar Image Segmentation Using an Unsupervised Hierarchical MRF Model (Transactions on Image Processing, July)

    E-print Network

    Mignotte, Max

    This work concerns hierarchical Markov random field (MRF) models and their application to sonar image segmentation. We present an original hierarchical segmentation procedure devoted to images given by high-resolution sonar.

  2. Using the medical image processing package, ImageJ, for astronomy

    E-print Network

    Jennifer L. West; Ian D. Cameron

    2006-11-21

    At the most fundamental level, all digital images are just large arrays of numbers that can easily be manipulated by computer software. Specialized digital imaging software packages often use routines common to many different applications and fields of study. The freely available, platform independent, image-processing package ImageJ has many such functions. We highlight ImageJ's capabilities by presenting methods of processing sequences of images to produce a star trail image and a single high quality planetary image.
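    The two sequence-processing methods the abstract mentions reduce to simple per-pixel stack operations: a maximum projection leaves star trails, while averaging suppresses noise for a higher-quality planetary image. A NumPy sketch of the idea (ImageJ exposes the same operations as Z-project "Max Intensity" and "Average Intensity"):

```python
import numpy as np

def star_trail(frames):
    """Per-pixel maximum across a stack of frames: moving stars sweep out trails."""
    return np.max(np.stack(frames), axis=0)

def stack_average(frames):
    """Per-pixel mean across a stack: averages away noise in a planetary image."""
    return np.mean(np.stack(frames), axis=0)
```

Feeding in a sequence where a bright point drifts across the field yields every visited position lit in the max projection, and the same positions at reduced intensity in the average.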

  3. Vector processing enhancements for real-time image analysis.

    SciTech Connect

    Shoaf, S.; APS Engineering Support Division

    2008-01-01

    A real-time image analysis system was developed for beam imaging diagnostics. An Apple Power Mac G5 with an Active Silicon LFG frame grabber was used to capture video images that were processed and analyzed. Software routines were created to utilize vector-processing hardware to reduce the time to process images as compared to conventional methods. These improvements allow for more advanced image processing diagnostics to be performed in real time.
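    The speedup described comes from replacing per-pixel loops with hardware vector operations. The same principle can be illustrated in NumPy terms (an analogy only; the original system used the Power Mac G5's vector hardware, not NumPy): both routines below compute the centroid of above-threshold pixels, one with explicit loops and one vectorised, and produce the same answer.

```python
import numpy as np

def centroid_loop(img, thresh):
    """Scalar version: visit every pixel in Python loops."""
    sx = sy = n = 0
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            if img[y, x] > thresh:
                sx += x
                sy += y
                n += 1
    return (sy / n, sx / n)

def centroid_vector(img, thresh):
    """Vectorised version: one masked reduction over the whole array."""
    ys, xs = np.nonzero(img > thresh)
    return (ys.mean(), xs.mean())
```

The vectorised form does the same arithmetic in bulk array operations, which is what maps onto SIMD hardware.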

  4. IPLIB (Image processing library) user's manual

    NASA Technical Reports Server (NTRS)

    Faulcon, N. D.; Monteith, J. H.; Miller, K.

    1985-01-01

    IPLIB is a collection of HP FORTRAN 77 subroutines and functions that facilitate the use of a COMTAL image processing system driven by an HP-1000 computer. It is intended for programmers who want to use the HP 1000 to drive the COMTAL Vision One/20 system. It is assumed that the programmer knows HP 1000 FORTRAN 77 or at least one FORTRAN dialect. It is also assumed that the programmer has some familiarity with the COMTAL Vision One/20 system.

  5. Digital image processing of vascular angiograms

    NASA Technical Reports Server (NTRS)

    Selzer, R. H.; Blankenhorn, D. H.; Beckenbach, E. S.; Crawford, D. W.; Brooks, S. H.

    1975-01-01

    A computer image processing technique was developed to estimate the degree of atherosclerosis in the human femoral artery. With an angiographic film of the vessel as input, the computer was programmed to estimate vessel abnormality through a series of measurements, some derived primarily from the vessel edge information and others from optical density variations within the lumen shadow. These measurements were combined into an atherosclerosis index, which was found to correlate well with both visual and chemical estimates of atherosclerotic disease.

  6. Novel image processing approach to detect malaria

    NASA Astrophysics Data System (ADS)

    Mas, David; Ferrer, Belen; Cojoc, Dan; Finaurini, Sara; Mico, Vicente; Garcia, Javier; Zalevsky, Zeev

    2015-09-01

    In this paper we present a novel image processing algorithm providing good preliminary capabilities for in vitro detection of malaria. The proposed concept is based upon analysis of the temporal variation of each pixel. Changes in dark pixels mean that inter cellular activity happened, indicating the presence of the malaria parasite inside the cell. Preliminary experimental results involving analysis of red blood cells being either healthy or infected with malaria parasites, validated the potential benefit of the proposed numerical approach.
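    The abstract's core idea, analysing the temporal variation of each dark pixel, can be sketched as a per-pixel standard deviation over the frame stack, masked to pixels that are dark on average. This is an illustrative guess at the flavour of the approach, not the authors' algorithm; the threshold value and function name are invented:

```python
import numpy as np

def temporal_activity_map(frames, dark_threshold=0.2):
    """Per-pixel temporal std-dev, restricted to dark (cell) pixels.

    Pixels that are dark on average but fluctuate strongly over time are
    candidates for intracellular (parasite) activity; bright background
    pixels are zeroed out.
    """
    stack = np.stack(frames).astype(float)
    mean = stack.mean(axis=0)
    std = stack.std(axis=0)
    return np.where(mean < dark_threshold, std, 0.0)
```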

  7. A VIRTUAL REALITY ELECTROCARDIOGRAPHY TEACHING TOOL (Image Synthesis Group)

    E-print Network

    O'Sullivan, Carol

    C. Bell, Physiology Dept., Trinity College Dublin, Dublin 2, Ireland (cbell@tcd.ie); Robert Mooney, Image Synthesis Group. This arrangement was first used by the inventor of the ECG, Einthoven, and it is still referred to as Einthoven's Triangle (Figure 2: Einthoven's Triangle).

  8. Troubling Images of Teaching in No Child Left Behind

    ERIC Educational Resources Information Center

    Cochran-Smith, Marilyn; Lytle, Susan

    2006-01-01

    This article offers a critique of No Child Left Behind (NCLB) related to the implications for teachers in educational improvement. Through an analysis of the NCLB legislation and accompanying policy tools that support it, the authors explore three images or central common conceptions symbolic of basic attitudes and orientations about teachers and…

  9. Images and the History Lecture: Teaching the History Channel Generation

    ERIC Educational Resources Information Center

    Coohill, Joseph

    2006-01-01

    No sensible historian would argue that using images in history lectures is a pedagogical waste of time. All people seem to accept the idea that visual elements (paintings, photographs, films, maps, charts, etc.) enhance the retention of historical information and add greatly to student enjoyment of the subject. However, there seems to be very…

  10. Color Image Processing and Object Tracking System

    NASA Technical Reports Server (NTRS)

    Klimek, Robert B.; Wright, Ted W.; Sielken, Robert S.

    1996-01-01

    This report describes a personal computer based system for automatic and semiautomatic tracking of objects on film or video tape, developed to meet the needs of the Microgravity Combustion and Fluids Science Research Programs at the NASA Lewis Research Center. The system consists of individual hardware components working under computer control to achieve a high degree of automation. The most important hardware components include 16-mm and 35-mm film transports, a high resolution digital camera mounted on an x-y-z micro-positioning stage, an S-VHS tapedeck, a Hi8 tapedeck, a video laserdisk, and a framegrabber. All of the image input devices are remotely controlled by a computer. Software was developed to integrate the overall operation of the system including device frame incrementation, grabbing of image frames, image processing of the object's neighborhood, locating the position of the object being tracked, and storing the coordinates in a file. This process is performed repeatedly until the last frame is reached. Several different tracking methods are supported. To illustrate the process, two representative applications of the system are described. These applications represent typical uses of the system and include tracking the propagation of a flame front and tracking the movement of a liquid-gas interface with extremely poor visibility.

  11. Digital image processing for information extraction.

    NASA Technical Reports Server (NTRS)

    Billingsley, F. C.

    1973-01-01

    The modern digital computer has made practical image processing techniques for handling nonlinear operations in both the geometrical and the intensity domains, various types of nonuniform noise cleanup, and the numerical analysis of pictures. An initial requirement is that a number of anomalies caused by the camera (e.g., geometric distortion, MTF roll-off, vignetting, and nonuniform intensity response) must be taken into account or removed to avoid their interference with the information extraction process. Examples illustrating these operations are discussed along with computer techniques used to emphasize details, perform analyses, classify materials by multivariate analysis, detect temporal differences, and aid in human interpretation of photos.
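    Of the operations listed, temporal difference detection is the simplest to sketch: subtract two registered frames and threshold the absolute difference. This is a generic illustration of the technique, not the paper's implementation; the threshold and function name are invented:

```python
import numpy as np

def change_mask(img_a, img_b, threshold=0.1):
    """Flag pixels whose intensity changed by more than `threshold`.

    Assumes the two frames are already geometrically registered and
    radiometrically corrected, as the abstract's preprocessing requires.
    """
    return np.abs(img_b.astype(float) - img_a.astype(float)) > threshold
```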

  12. Optical processing of imaging spectrometer data

    NASA Technical Reports Server (NTRS)

    Liu, Shiaw-Dong; Casasent, David

    1988-01-01

    The data-processing problems associated with imaging spectrometer data are reviewed; new algorithms and optical processing solutions are advanced for this computationally intensive application. Optical decision net, directed graph, and neural net solutions are considered. Decision nets and mineral element determination of nonmixture data are emphasized here. A new Fisher/minimum-variance clustering algorithm is advanced, initialization using minimum-variance clustering is found to be preferred and fast. Tests on a 500-class problem show the excellent performance of this algorithm.

  13. Sorting Olive Batches for the Milling Process Using Image Processing

    PubMed Central

    Puerto, Daniel Aguilera; Martínez Gila, Diego Manuel; Gámez García, Javier; Gómez Ortega, Juan

    2015-01-01

    The quality of virgin olive oil obtained in the milling process is directly bound to the characteristics of the olives. Hence, the correct classification of the different incoming olive batches is crucial to reach the maximum quality of the oil. The aim of this work is to provide an automatic inspection system, based on computer vision, and to classify automatically different batches of olives entering the milling process. The classification is based on the differentiation between ground and tree olives. For this purpose, three different species have been studied (Picudo, Picual and Hojiblanco). The samples have been obtained by picking the olives directly from the tree or from the ground. The feature vector of the samples has been obtained on the basis of the olive image histograms. Moreover, different image preprocessing has been employed, and two classification techniques have been used: these are discriminant analysis and neural networks. The proposed methodology has been validated successfully, obtaining good classification results. PMID:26147729
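    The pipeline described above, histogram-based feature vectors followed by discriminant analysis or a neural network, can be caricatured with a nearest-centroid classifier standing in for the discriminant step. An illustrative sketch only, not the authors' implementation; the bin count, labels, and functions are invented:

```python
import numpy as np

def histogram_features(img, bins=16):
    """Normalised grey-level histogram used as the feature vector."""
    h, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    return h / h.sum()

def nearest_centroid(train_feats, train_labels, feat):
    """Assign the label of the closest class-mean feature vector."""
    labels = sorted(set(train_labels))
    centroids = {
        l: np.mean([f for f, y in zip(train_feats, train_labels) if y == l], axis=0)
        for l in labels
    }
    return min(labels, key=lambda l: np.linalg.norm(feat - centroids[l]))
```

With ground olives imaged darker than tree olives, the histograms separate cleanly and the centroid rule recovers the batch type.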

  14. Sorting Olive Batches for the Milling Process Using Image Processing.

    PubMed

    Aguilera Puerto, Daniel; Martínez Gila, Diego Manuel; Gámez García, Javier; Gómez Ortega, Juan

    2015-01-01

    The quality of virgin olive oil obtained in the milling process is directly bound to the characteristics of the olives. Hence, the correct classification of the different incoming olive batches is crucial to reach the maximum quality of the oil. The aim of this work is to provide an automatic inspection system, based on computer vision, and to classify automatically different batches of olives entering the milling process. The classification is based on the differentiation between ground and tree olives. For this purpose, three different species have been studied (Picudo, Picual and Hojiblanco). The samples have been obtained by picking the olives directly from the tree or from the ground. The feature vector of the samples has been obtained on the basis of the olive image histograms. Moreover, different image preprocessing has been employed, and two classification techniques have been used: these are discriminant analysis and neural networks. The proposed methodology has been validated successfully, obtaining good classification results. PMID:26147729

  15. Context Effects in the Teaching-Learning Process.

    ERIC Educational Resources Information Center

    Soar, Robert S.; Soar, Ruth M.

    A review of research findings on context variables in the classroom and their effects on student achievement provided a framework upon which to base conclusions on the following factors pertaining to effective teaching: (1) teacher control of the classroom; (2) adjustment of teacher control of learning activities to student abilities; (3)…

  16. Student Satisfaction and Its Implications in the Process of Teaching

    ERIC Educational Resources Information Center

    Ciobanu, Alina; Ostafe, Livia

    2014-01-01

    Student satisfaction is widely recognized as an indicator of the quality of students' learning and teaching experience. This study aims to highlight how satisfied students (from the primary and preschool pedagogy specialization within the Faculty of Psychology and Educational Sciences, who are studying to become future kindergarten and primary…

  17. Teaching Transferable Compensatory Skills and Processes to Visually Impaired Adults.

    ERIC Educational Resources Information Center

    Roberts, Alvin

    2001-01-01

    This article presents the eight laws of association theory and applies four of them to strategies for teaching transferable skills to individuals with visual impairments. Strategies described include situation forecasting, generalization, sense shifting, performing skills repetitively to facilitate the transfer habit, and assigning an intensity…

  18. Teaching Choreography in Higher Education: A Process Continuum Model

    ERIC Educational Resources Information Center

    Butterworth, Jo

    2004-01-01

    This study proposes a new paradigm for the learning and teaching of choreography in the tertiary sector. It is based on the rationale that the choreography curriculum for the twenty-first century should be broad and balanced, and that tertiary students will benefit from a range of skills, knowledge and understanding germane to possible future…

  19. Computer Use by School Teachers in Teaching-Learning Process

    ERIC Educational Resources Information Center

    Bhalla, Jyoti

    2013-01-01

    Developing countries have a responsibility not merely to provide computers for schools, but also to foster a habit of infusing a variety of ways in which computers can be integrated in teaching-learning amongst the end users of these tools. Earlier researches lacked a systematic study of the manner and the extent of computer-use by teachers. The…

  20. The constructive use of images in medical teaching: a literature review

    PubMed Central

    Norris, Elizabeth M

    2012-01-01

    This literature review illustrates the various ways images are used in teaching, the evidence appertaining to their use, and advice regarding permissions. Four databases were searched, and 23 papers were retained out of 135 abstracts found for the study. Images are frequently used to motivate an audience to listen to a lecture or to note key medical findings. Images can promote observation skills when linked with learning outcomes, but the timing and relevance of the images is important: it appears they must be congruent with the dialogue. Student reflection can be encouraged by asking students to draw their own impressions of a course as an integral part of course feedback. Careful, structured use of images improves attention, cognition, reflection, and possibly memory retention. PMID:22666530

  1. High-speed imaging and image processing in voice disorders

    NASA Astrophysics Data System (ADS)

    Tigges, Monika; Wittenberg, Thomas; Rosanowski, Frank; Eysholdt, Ulrich

    1996-12-01

    A digital high-speed camera system for the endoscopic examination of the larynx delivers recording speeds of up to 10,000 frames/s. Recordings of up to 1 s duration can be stored and used for further evaluation. Maximum resolution is 128 × 128 pixels. The acoustic and electroglottographic signals are recorded simultaneously. An image processing program especially developed for this purpose renders time-way waveforms (high-speed glottograms) of several locations on the vocal cords. From these graphs all of the known objective voice parameters can be derived. Results of examinations of normal subjects and patients are presented.

  2. The Comparison of Teaching Process of First Reading in USA and Turkey

    ERIC Educational Resources Information Center

    Bay, Yalçin

    2014-01-01

    The aim of this study is to compare the teaching process of early reading in the US to in Turkey. This study observes developing early reading of students, their reading miscues, and compares early reading process of students in the US and to early reading process of students in Turkey. This study includes the following research question: What are…

  3. Development of signal processing methods for imaging buried pipes

    SciTech Connect

    Michiguchi, Y.; Hiramoto, K.; Nishi, M.; Takahashi, F.; Ohtaka, T.; Okada, M.

    1987-01-01

    A new imaging technique for subsurface radars is described for reconstructing clear images of buried pipes in soil. The method developed has two signal processing stages: preprocessing and aperture synthesis. The preprocessing extracts signals scattered from the pipes by reducing clutter noise. The synthetic-aperture processing analyzes only the scattered signals derived by the first stage and reconstructs high-quality images in a short processing time. The imaging technique developed was successfully applied to the imaging of actual buried metallic pipes. It was experimentally confirmed that the new imaging method was capable of reconstructing clear images in a short time without losing image quality.
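    The preprocessing stage, reducing clutter so that mainly pipe echoes remain, is commonly done in subsurface radar by subtracting the average trace: flat-lying returns (ground surface, antenna ringing) are nearly identical across traces, while the hyperbolic responses of buried pipes are not. A sketch of that common technique (the paper's actual preprocessing may differ, and the aperture-synthesis stage is beyond this sketch):

```python
import numpy as np

def remove_clutter(bscan):
    """Subtract the average trace from every trace of a B-scan.

    bscan has shape (time_samples, traces). Returns that are constant
    across traces cancel out; localized scatterers survive.
    """
    return bscan - bscan.mean(axis=1, keepdims=True)
```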

  4. Portable EDITOR (PEDITOR): A portable image processing system. [satellite images

    NASA Technical Reports Server (NTRS)

    Angelici, G.; Slye, R.; Ozga, M.; Ritter, P.

    1986-01-01

    The PEDITOR image processing system was created to be readily transferable from one type of computer system to another. While nearly identical in function and operation to its predecessor, EDITOR, PEDITOR employs additional techniques which greatly enhance its portability. These cover system structure and processing. In order to confirm the portability of the software system, two different types of computer systems running greatly differing operating systems were used as target machines. A DEC-20 computer running the TOPS-20 operating system and using a Pascal Compiler was utilized for initial code development. The remaining programmers used a Motorola Corporation 68000-based Forward Technology FT-3000 supermicrocomputer running the UNIX-based XENIX operating system and using the Silicon Valley Software Pascal compiler and the XENIX C compiler for their initial code development.

  5. Development of the SOFIA Image Processing Tool

    NASA Technical Reports Server (NTRS)

    Adams, Alexander N.

    2011-01-01

    The Stratospheric Observatory for Infrared Astronomy (SOFIA) is a Boeing 747SP carrying a 2.5 meter infrared telescope capable of operating at altitudes between twelve and fourteen kilometers, above more than 99 percent of the water vapor in the atmosphere. The ability to make observations above most water vapor, coupled with the ability to make observations from anywhere, anytime, makes SOFIA one of the world's premier infrared observatories. SOFIA uses three visible light CCD imagers to assist in pointing the telescope. The data from these imagers are stored in archive files, as is housekeeping data containing information such as boresight and area-of-interest locations. A tool that could both extract and process data from the archive files was developed.

  6. Imaging ultrafast processes in nanometer sized clusters

    NASA Astrophysics Data System (ADS)

    Bostedt, Christoph

    2015-05-01

    Free-electron lasers deliver extremely intense, coherent x-ray flashes with femtosecond pulse length. With the intense x-ray pulses, single nanoscale objects can be imaged with single shots, opening the door for spatially and temporally resolved investigations of transient states and dynamic processes. Imaging of individual He droplets allows the unambiguous identification of quantum vortices. Ultrafast scattering from small, highly excited nanoplasmas carries information about their transient electronic states. With pump-probe techniques, the electronic and structural evolution of highly excited clusters and nanoplasmas far from equilibrium can be investigated with femtosecond time and nanometer spatial resolution. These examples showcase exciting new opportunities for atomic, molecular, and cluster physics using ultrafast and ultraintense x-ray pulses.

  7. Processing Neutron Imaging Data - Quo Vadis?

    NASA Astrophysics Data System (ADS)

    Kaestner, A. P.; Schulz, M.

    Once an experiment has ended at a neutron imaging instrument, users often ask themselves how to proceed with the collected data. Large amounts of data have been obtained, but first-time users often have no plan or experience for evaluating the information. The users then depend on support from the local contact, who unfortunately does not have the time to perform in-depth studies for every user. By instructing the users and providing evaluation tools, either on-site or as free software, this situation can be improved. With the continuous development of new instrument features that require increasingly complex analysis methods, there is a deficit in developing tools that bring the new features to the user community. We propose to start a common platform for open-source development of analysis tools dedicated to processing neutron imaging data.

  8. Curve evolution and estimation-theoretic techniques for image processing

    E-print Network

    Tsai, Andy, 1969-

    2001-01-01

    The broad objective of this thesis is the development of statistically robust, computationally efficient, and global image processing algorithms. Such image processing algorithms are not only useful, but in high demand ...

  9. Image Processing Onboard Spacecraft for Autonomous Plume Detection

    E-print Network

    Schaffer, Steven

    Image Processing Onboard Spacecraft for Autonomous Plume Detection. David R. Thompson; Melissa … The spacecraft's limited cache and bandwidth precludes sustained surveys of plume activity. Onboard processing could analyze image sequences to identify plumes, with events triggering preferential storage, prioritized …

  10. HYMOSS signal processing for pushbroom spectral imaging

    NASA Technical Reports Server (NTRS)

    Ludwig, David E.

    1991-01-01

    The objective of the Pushbroom Spectral Imaging Program was to develop on-focal-plane electronics which compensate for detector array non-uniformities. The approach taken was to implement a simple two point calibration algorithm on the focal plane which allows for offset and linear gain correction. The key on-focal-plane features which made this technique feasible were the use of a high quality transimpedance amplifier (TIA) and an analog-to-digital converter for each detector channel. Gain compensation is accomplished by varying the feedback capacitance of the integrate and dump TIA. Offset correction is performed by storing offsets in a special on-focal-plane offset register and digitally subtracting the offsets from the readout data during the multiplexing operation. A custom integrated circuit was designed, fabricated, and tested on this program which proved that nonuniformity-compensated, analog-to-digital converting circuits may be used to read out infrared detectors. Irvine Sensors Corporation (ISC) successfully demonstrated the following innovative on-focal-plane functions that allow for correction of detector non-uniformities. Most of the circuit functions demonstrated on this program are finding their way onto future ICs because of their impact on reduced downstream processing, increased focal plane performance, simplified focal plane control, and reduced number of dewar connections, as well as the noise immunity of a digital interface dewar. The potential commercial applications for this integrated circuit are primarily in imaging systems. These imaging systems may be used for security monitoring, manufacturing process monitoring, robotics, and spectral imaging in analytical instrumentation.
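    The two-point calibration the abstract describes maps each channel's dark response to zero and its uniform-illumination response to a common target, i.e. corrected = gain × (raw − offset). A minimal numerical sketch (array names are illustrative; on the actual focal plane the gain was set via the TIA feedback capacitance and the offset via an on-chip register, not in software):

```python
import numpy as np

def two_point_calibration(raw, dark, flat, target=1.0):
    """Per-channel offset and linear gain correction.

    dark: each channel's response with no signal (offset)
    flat: each channel's response to a uniform reference source
    Maps dark -> 0 and flat -> target for every channel, removing
    fixed-pattern non-uniformity.
    """
    gain = target / (flat - dark)
    return gain * (raw - dark)
```

Applying the correction to the dark frame itself returns zeros, and to the flat frame returns the target level on every channel, which is the defining property of the two-point scheme.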

  11. Teaching with Pensive Images: Rethinking Curiosity in Paulo Freire's "Pedagogy of the Oppressed"

    ERIC Educational Resources Information Center

    Lewis, Tyson E.

    2012-01-01

    Often when the author is teaching philosophy of education, his students begin the process of inquiry by prefacing their questions with something along the lines of "I'm just curious, but ...." Why do teachers and students feel compelled to express their curiosity as "just" curiosity? Perhaps there is a slight embarrassment in proclaiming their…

  12. A New Image Processing and GIS Package

    NASA Technical Reports Server (NTRS)

    Rickman, D.; Luvall, J. C.; Cheng, T.

    1998-01-01

    The image processing and GIS package ELAS was developed during the 1980s by NASA. It proved to be a popular, influential, and powerful tool for the manipulation of digital imagery. Before the advent of PCs it was used by hundreds of institutions, mostly schools. It is the unquestioned, direct progenitor of two commercial GIS remote sensing packages, ERDAS and MapX, and influenced others, such as PCI. Its power was demonstrated by its use for work far beyond its original purpose: it has been applied to several different types of medical imagery, photomicrographs of rock, images of turtle flippers, and numerous other esoteric imagery. Although development largely stopped in the early 1990s, the package still offers as much or more power and flexibility than any other roughly comparable package, public or commercial. It is a huge body of code, representing more than a decade of work by full-time, professional programmers. The current versions all have several deficiencies compared to current software standards and usage, notably the strictly command-line interface. In order to support their research needs, the authors are fundamentally changing ELAS, and in the process greatly increasing its power, utility, and ease of use. The new software is called ELAS II. This paper discusses the design of ELAS II.

  13. Educational Use of Toshiba TDF-500 Medical Image Filing System for Teaching File Archiving and Viewing

    NASA Astrophysics Data System (ADS)

    Kimura, Michio; Yashiro, Naobumi; Kita, Koichi; Tani, Yuichiro; IIO, Masahiro

    1989-05-01

    The authors have been using the medical image filing system TOSHIBA TDIS-FILE for teaching file archiving and viewing at the University of Tokyo Hospital, Department of Radiology. Image display on a CRT proved sufficient for the education of small groups of students, as well as residents. However, retrieval time for archived images, the man-machine interface, and cost are not yet at a satisfactory level. The authors also implemented a flexible retrieval scheme for diagnostic codes, which has proven sophisticated. Such software utilities, as well as hardware evolution, are essential if this kind of instrument is to serve as a component of a PACS. In our department, a PACS project is being carried out; in the system, a TOSHIBA AS3160 workstation (= SUN 3/160) handles all user interfaces, including control of medical image displays, examination databases, and the interface with the HIS.

  14. The Effects of Image-Based Concept Mapping on the Learning Outcomes and Cognitive Processes of Mobile Learners

    ERIC Educational Resources Information Center

    Yen, Jung-Chuan; Lee, Chun-Yi; Chen, I-Jung

    2012-01-01

    The purpose of this study was to investigate the effects of different teaching strategies (text-based concept mapping vs. image-based concept mapping) on the learning outcomes and cognitive processes of mobile learners. Eighty-six college freshmen enrolled in the "Local Area Network Planning and Implementation" course taught by the first author…

  15. Proceedings of the Irish Machine Vision and Image Processing Conference

    E-print Network

    Whelan, Paul F.

    IMVIP '99: Proceedings of the Irish Machine Vision and Image Processing Conference, Dublin City University. Paul F. Whelan (Ed.). Papers may be reproduced provided that a full reference to the source is given.

  16. ATM experiment S-056 image processing requirements definition

    NASA Technical Reports Server (NTRS)

    1972-01-01

    A plan is presented for satisfying the image data processing needs of the S-056 Apollo Telescope Mount experiment. The report is based on information gathered from related technical publications, consultation with numerous image processing experts, and experience gained in working on related image processing tasks over a two-year period.

  17. Using Image Processing to Determine Emphysema Severity

    NASA Astrophysics Data System (ADS)

    McKenzie, Alexander; Sadun, Alberto

    2010-10-01

    Currently, X-rays and computerized tomography (CT) scans are used to detect emphysema, but other tests are required to accurately quantify the amount of lung that has been affected by the disease. These images clearly show whether a patient has emphysema, but visual inspection alone cannot quantify the degree of the disease, which presents as subtle dark spots on the lung. Our goal is to use these CT scans to accurately diagnose and determine emphysema severity levels in patients. This will be accomplished by performing several different analyses of CT scan images from patients representing a wide range of disease severity. In addition to analyzing the original CT data, this process will convert the data to one- and two-bit images and then examine the deviation from a normal distribution curve to determine skewness. Our preliminary results show that this method of assessment appears to be more accurate and robust than the currently utilized methods, which involve looking at percentages of radiodensities in the air passages of the lung.
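
    The histogram-skewness idea can be illustrated with a small sketch. The radiodensity values below are invented for illustration and are not the authors' data; emphysematous tissue adds a tail of dark (low-attenuation) voxels, pulling the skewness negative:

```python
import math

def skewness(values):
    # Population skewness: E[(x - mean)^3] / std^3.
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    std = math.sqrt(var)
    return sum((v - mean) ** 3 for v in values) / (n * std ** 3)

healthy = [-700, -710, -690, -705, -695]            # symmetric histogram
emphysema = [-700, -705, -695, -710, -690, -950]    # dark-voxel tail
print(skewness(healthy), skewness(emphysema))
```

    The symmetric sample scores zero while the sample with a dark tail scores strongly negative, which is the kind of deviation from a normal distribution the study measures.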

  18. Multispectral image processing: the nature factor

    NASA Astrophysics Data System (ADS)

    Watkins, Wendell R.

    1998-09-01

    The images processed by our brain represent our window into the world. For some animals this window is derived from a single eye; for others, including humans, two eyes provide stereo imagery; for others, like the black widow spider, several eyes are used (8 eyes); and some insects, like the common housefly, utilize thousands of eyes (ommatidia). Still other animals, like the bat and dolphin, have eyes for regular vision but employ acoustic sonar for seeing where their regular eyes don't work, such as in pitch-black caves or turbid water. Of course, other animals have adapted to dark environments by bringing along their own lighting, such as the firefly and several creatures from the depths of the ocean floor. Animal vision is truly varied and has developed over millennia in many remarkable ways. We have learned a lot about vision processes by studying these animal systems and can still learn even more.

  19. Effects of image processing on the detective quantum efficiency

    NASA Astrophysics Data System (ADS)

    Park, Hye-Suk; Kim, Hee-Joung; Cho, Hyo-Min; Lee, Chang-Lae; Lee, Seung-Wan; Choi, Yu-Na

    2010-04-01

    Digital radiography has gained popularity in many areas of clinical practice. This transition brings interest in advancing the methodologies for image quality characterization. However, as these methodologies have not been standardized, the results of such studies cannot be directly compared. The primary objective of this study was to standardize methodologies for image quality characterization. The secondary objective was to evaluate how image processing algorithms affect the modulation transfer function (MTF), noise power spectrum (NPS), and detective quantum efficiency (DQE). Image performance parameters such as MTF, NPS, and DQE were evaluated using the International Electrotechnical Commission (IEC 62220-1)-defined RQA5 radiographic techniques. Computed radiography (CR) images of a hand in posterior-anterior (PA) projection for measuring the signal-to-noise ratio (SNR), a slit image for measuring MTF, and a uniform (white) image for measuring NPS were obtained, and various Multi-Scale Image Contrast Amplification (MUSICA) parameters were applied to each of the acquired images. All of the modified images considerably influenced the evaluation of SNR, MTF, NPS, and DQE. Images modified by the post-processing had higher DQE than the MUSICA=0 image. This suggests that MUSICA values, as a post-processing step, affect the image when it is evaluated for image quality. In conclusion, the control parameters of image processing should be accounted for when characterizing image quality. The results of this study could serve as a baseline for evaluating imaging systems and their imaging characteristics by measuring MTF, NPS, and DQE.
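
    For context, the three quantities are linked by the standard textbook relation (not quoted from the paper), where $S$ is the large-area signal and $q$ the incident photon fluence per unit area:

```latex
\mathrm{DQE}(u) = \frac{S^{2}\,\mathrm{MTF}^{2}(u)}{q\,\mathrm{NPS}(u)}
```

    Because nonlinear post-processing such as MUSICA affects MTF and NPS differently, the measured DQE depends on the processing parameters, which is consistent with what the study observes.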

  20. The Practice of Information Processing Model in the Teaching of Cognitive Strategies

    ERIC Educational Resources Information Center

    Ozel, Ali

    2009-01-01

    This research attempts to determine how the teaching of learning strategies varies with the time that first-grade primary school teachers devote to forming an information-processing framework in students. This process, covering the efforts of 260 teachers in this direction, consists of whether the adequate…

  1. The Arrangement of Students' Extracurricular Piano Practice Process with the Asynchronous Distance Piano Teaching Method

    ERIC Educational Resources Information Center

    Karahan, Ahmet Suat

    2015-01-01

    That the students do their extracurricular piano practices in the direction of the teacher's warnings is a key factor in achieving success in the teaching-learning process. However, the teachers cannot adequately control the students' extracurricular practices in the process of traditional piano education. Under the influence of this lack of…

  2. Development of the Instructional Model by Integrating Information Literacy in the Class Learning and Teaching Processes

    ERIC Educational Resources Information Center

    Maitaouthong, Therdsak; Tuamsuk, Kulthida; Techamanee, Yupin

    2011-01-01

    This study was aimed at developing an instructional model by integrating information literacy in the instructional process of general education courses at an undergraduate level. The research query, "What is the teaching methodology that integrates information literacy in the instructional process of general education courses at an undergraduate…

  3. The Guidance Role of the Instructor in the Teaching and Learning Process

    ERIC Educational Resources Information Center

    Alutu, Azuka N. G.

    2006-01-01

    This study examines the guidance role of the instructor in the teaching and learning process. The paper highlights the need for learners to be consciously guided by their teachers, as this facilitates and complements the learning process. Gagne's theory of the conditions of learning, phases of learning, and model for the design of instruction was adopted to…

  4. The Design Studio as Teaching/Learning Medium--A Process-Based Approach

    ERIC Educational Resources Information Center

    Ozturk, Maya N.; Turkkan, Elif E.

    2006-01-01

    This article discusses a design studio teaching experience exploring the design process itself as a methodological tool. We consider the structure of important phases of the process that contain different levels of design thinking: conception, function and practical knowledge as well as the transitions from inception to construction. We show how…

  5. Applying Experiential Learning in College Teaching and Assessment: A Process Model.

    ERIC Educational Resources Information Center

    Jackson, Lewis, Ed.; And Others

    This manual presents a process model in which university teaching and assessment processes are embedded within a broader view of the human learning experience and the outcomes that are required for professional student growth. The model conceptualizes the university's role in the lives of life-long learners and provides a framework for rethinking…

  6. Images of a 'good nurse' presented by teaching staff.

    PubMed

    de Araujo Sartorio, Natalia; Pavone Zoboli, Elma Lourdes Campos

    2010-11-01

    Nursing is at the same time a vocation, a profession and a job. By nature, nursing is a moral endeavor, and being a 'good nurse' is an issue and an aspiration for professionals. The aim of our qualitative research project carried out with 18 nurse teachers at a university nursing school in Brazil was to identify the ethical image of nursing. In semistructured interviews the participants were asked to choose one of several pictures, to justify their choice and explain what they meant by an ethical nurse. Five different perspectives were revealed: good nurses fulfill their duties correctly; they are proactive patient advocates; they are prepared and available to welcome others as persons; they are talented, competent, and carry out professional duties excellently; and they combine authority with power sharing in patient care. The results point to a transition phase from a historical introjection of religious values of obedience and service to a new sense of a secular, proactive, scientific and professional identity. PMID:21097967

  7. IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 8, NO. 9, SEPTEMBER 1999 1243 Image Segmentation and Labeling

    E-print Network

    Alajaji, Fady

    In other words, an image is a corrupted version of an underlying scene, degraded by noise, which may be additive or multiplicative, or by blurring. ... of an infection, to yield a segmentation of the image into homogeneous regions. This process is implemented using ...

  8. IEEE TRANSACTIONS ON IMAGE PROCESSING 1 Image Sensor Noise Parameter Estimation by

    E-print Network

    Hesser, Jürgen

    Denoising requires taking into account the dependence of the noise distribution on the original image ... into an image with signal-independent noise. Principal component analysis of blocks of the transformed image ...

  9. Spot restoration for GPR image post-processing

    DOEpatents

    Paglieroni, David W; Beer, N. Reginald

    2014-05-20

    A method and system for detecting the presence of subsurface objects within a medium is provided. In some embodiments, the imaging and detection system operates in a multistatic mode to collect radar return signals generated by an array of transceiver antenna pairs that is positioned across the surface and that travels down the surface. The imaging and detection system pre-processes the return signal to suppress certain undesirable effects. The imaging and detection system then generates synthetic aperture radar images from real aperture radar images generated from the pre-processed return signal. The imaging and detection system then post-processes the synthetic aperture radar images to improve detection of subsurface objects. The imaging and detection system identifies peaks in the energy levels of the post-processed image frame, which indicates the presence of a subsurface object.
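
    The final peak-identification step can be sketched as a local-maximum search over an energy map. This is an illustrative stand-in in plain Python with made-up values, not the patented implementation:

```python
def find_peaks(energy, threshold):
    # Flag grid cells whose energy exceeds a detection threshold and all
    # eight neighbors -- peaks in the post-processed frame indicate
    # candidate subsurface objects.
    rows, cols = len(energy), len(energy[0])
    peaks = []
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            v = energy[r][c]
            neighbors = [energy[r + dr][c + dc]
                         for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                         if (dr, dc) != (0, 0)]
            if v > threshold and all(v > n for n in neighbors):
                peaks.append((r, c))
    return peaks

frame = [[0, 1, 0, 0],
         [1, 9, 1, 0],
         [0, 1, 0, 0],
         [0, 0, 0, 0]]
print(find_peaks(frame, threshold=5))  # -> [(1, 1)]
```

    A real detector would also suppress peaks from surface clutter, which is what the pre-processing step in the patent is for.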

  10. Using NASA Space Imaging Technology to Teach Earth and Sun Topics

    NASA Astrophysics Data System (ADS)

    Verner, E.; Bruhweiler, F. C.; Long, T.

    2011-12-01

    We teach an experimental college-level course, directed toward elementary education majors, emphasizing "hands-on" activities that can be easily applied in the elementary classroom. This course, Physics 240: "The Sun-Earth Connection," includes various ways to study selected topics in physics, earth science, and basic astronomy. Our lesson plans and EPO materials make extensive use of NASA imagery and cover topics on magnetism; the solar photospheric, chromospheric, and coronal spectra; and earth science and climate. In addition, we are developing and will cover topics on ecosystem structure, biomass, and water on Earth. We strive to free the non-science undergraduate from the "fear of science" and replace it with the excitement of science, so that these future teachers will carry this excitement to their own students. Hands-on experiments, computer simulations, analysis of real NASA data, and vigorous seminar discussions are blended in an inquiry-driven curriculum to instill a confident understanding of basic physical science and modern, effective methods for teaching it. The course also demonstrates ways in which scientific thinking and hands-on activities can be implemented in the classroom. Most topics were selected using the National Science Standards and National Mathematics Standards addressed in grades K-8. The course focuses on helping education majors: 1) build knowledge of scientific concepts and processes; 2) understand the measurable attributes of objects and the units and methods of measurement; 3) conduct data analysis (collecting, organizing, and presenting scientific data, and predicting results); 4) use hands-on approaches to teach science; and 5) become familiar with Internet science teaching resources. Here we share our experiences and the challenges we face while teaching this course.

  11. Signal and Image Processing with Sinlets

    E-print Network

    Alexander Y. Davydov

    2012-09-17

    This paper presents a new family of localized orthonormal bases - sinlets - which are well suited for both signal and image processing and analysis. One-dimensional sinlets are related to specific solutions of the time-dependent harmonic oscillator equation. By construction, each sinlet is infinitely differentiable and has a well-defined and smooth instantaneous frequency known in analytical form. For square-integrable transient signals with infinite support, the one-dimensional sinlet basis provides an advantageous alternative to the Fourier transform by rendering accurate signal representation via a countable set of real-valued coefficients. The properties of sinlets make them suitable for analyzing many real-world signals whose frequency content changes with time, including radar and sonar waveforms, music, speech, biological echolocation sounds, biomedical signals, seismic acoustic waves, and signals employed in wireless communication systems. One-dimensional sinlet bases can be used to construct two- and higher-dimensional bases with a variety of potential applications, including image analysis and representation.

  12. Methods for processing and imaging marsh foraminifera

    USGS Publications Warehouse

    Dreher, Chandra A.; Flocks, James G.

    2011-01-01

    This study is part of a larger U.S. Geological Survey (USGS) project to characterize the physical conditions of wetlands in southwestern Louisiana. Within these wetlands, groups of benthic foraminifera (shelled amoeboid protists living near or on the sea floor) can be used as agents to measure land subsidence, relative sea-level rise, and storm impact. In the Mississippi River Delta region, intertidal-marsh foraminiferal assemblages and biofacies were established in studies that pre-date the 1970s, with a very limited number of more recent studies. This fact sheet outlines the project's improved processing, handling, and modified preparation methods for Scanning Electron Microscope (SEM) imaging of these foraminifera. The objective is to identify marsh foraminifera to the taxonomic species level by using improved processing methods and SEM imaging for morphological characterization, in order to evaluate changes in distribution and frequency relative to other environmental variables. The majority of benthic marsh foraminifera consists of agglutinated forms, which can be more delicate than porcelaneous forms. Agglutinated tests (shells) are made of particles such as sand grains or silt and clay material, whereas porcelaneous tests consist of calcite.

  13. Intelligent elevator management system using image processing

    NASA Astrophysics Data System (ADS)

    Narayanan, H. Sai; Karunamurthy, Vignesh; Kumar, R. Barath

    2015-03-01

    In the modern era, the increase in the number of shopping malls and industrial buildings has led to an exponential increase in the usage of elevator systems, and thus an increased need for an effective control system to manage them. This paper introduces an effective method to control the movement of elevators by locating waiting passengers and dispatching elevators based on conditions such as load and proximity. The method continuously monitors the weight limit of each elevator while also making use of image processing to determine the number of persons waiting for an elevator on each floor. The Canny edge detection technique is used to count the persons waiting for an elevator. The algorithm thus takes many cases into account and selects the correct elevator to serve the persons waiting on different floors.
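
    A minimal sketch of the counting idea: once an edge detector such as Canny yields a binary mask, clusters of foreground pixels can be counted as waiting persons. The connected-component count below is an illustrative stand-in in plain Python; the paper's actual per-floor logic is not shown:

```python
def count_blobs(mask):
    # Count 4-connected foreground blobs in a binary mask via flood fill;
    # each blob stands in for one detected person.
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    blobs = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                blobs += 1
                stack = [(r, c)]
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols and mask[y][x] and not seen[y][x]:
                        seen[y][x] = True
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return blobs

waiting = [[1, 1, 0, 0],
           [1, 0, 0, 1],
           [0, 0, 0, 1]]
print(count_blobs(waiting))  # -> 2 persons on this floor
```

    In practice a size filter would reject small edge clusters caused by noise before counting.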

  14. Hardware-based image processing library for Virtex FPGA

    NASA Astrophysics Data System (ADS)

    Gorgon, Marek; Tadeusiewicz, Ryszard

    2000-10-01

    The paper considers hardware-based realization of image processing algorithms. The use of a single FPGA device, the Virtex, as a processing element capable of carrying out image processing in real time is thoroughly discussed. For implementation of the algorithms in hardware, specialized IP core architectures have been designed and tested. An image-processing library consisting of individual cores that can be linked together at the software level and implemented in high-capacity FPGA devices is proposed.

  15. An image-processing software package: UU and Fig for optical metrology applications

    NASA Astrophysics Data System (ADS)

    Chen, Lujie

    2013-06-01

    Modern optical metrology applications are largely supported by computational methods, such as phase shifting [1], the Fourier transform [2], digital image correlation [3], camera calibration [4], etc., in which image processing is a critical and indispensable component. While it is not too difficult to obtain a wide variety of image-processing programs from the internet, few cater to the relatively specialized area of optical metrology. This paper introduces an image-processing software package, UU (data processing) and Fig (data rendering), that incorporates many useful functions for processing optical metrological data. The cross-platform programs UU and Fig are developed based on wxWidgets; at the time of writing, they have been tested on Windows, Linux, and Mac OS. The user interface is designed to offer precise control of the underlying processing procedures in a scientific manner. The data input/output mechanism is designed to accommodate diverse file formats and to facilitate interaction with other independent programs. In terms of robustness, although the software was initially developed for personal use, it is comparable in stability and accuracy to most commercial software of a similar nature. In addition to functions for optical metrology, the software package has a rich collection of useful tools in the following areas: real-time image streaming from USB and GigE cameras, computational geometry, computer vision, data fitting, 3D image processing, vector image processing, precision device control (rotary stages, PZT stages, etc.), point-cloud-to-surface reconstruction, volume rendering, and batch processing. The software package is currently used in a number of universities for teaching and research.

  16. Viewpoints on Medical Image Processing: From Science to Application

    PubMed Central

    Deserno (né Lehmann), Thomas M.; Handels, Heinz; Maier-Hein (né Fritzsche), Klaus H.; Mersmann, Sven; Palm, Christoph; Tolxdorff, Thomas; Wagenknecht, Gudrun; Wittenberg, Thomas

    2013-01-01

    Medical image processing provides core innovation for medical imaging. This paper focuses on recent developments from science to application, analyzing the past fifteen years of the proceedings of the German annual meeting on medical image processing (BVM). Furthermore, some members of the program committee present their personal points of view: (i) multi-modality for imaging and diagnosis, (ii) analysis of diffusion-weighted imaging, (iii) model-based image analysis, (iv) registration of section images, (v) from images to information in digital endoscopy, and (vi) virtual reality and robotics. Medical imaging and medical image computing is seen as a field of rapid development, with clear trends toward integrated applications in diagnostics, treatment planning, and treatment. PMID:24078804

  17. Image and Signal Processing LISP Environment (ISLE)

    SciTech Connect

    Azevedo, S.G.; Fitch, J.P.; Johnson, R.R.; Lager, D.L.; Searfus, R.M.

    1987-10-02

    We have developed a multidimensional signal processing software system called the Image and Signal LISP Environment (ISLE). It is a hybrid software system, in that it consists of a LISP interpreter (used as the command processor) combined with FORTRAN, C, or LISP functions (used as the processing and display routines). Learning the syntax for ISLE is relatively simple and has the additional benefit of introducing a subset of commands from the general-purpose programming language, Common LISP. Because Common LISP is a well-documented and complete language, users do not need to depend exclusively on system developers for a description of the features of the command language, nor do the developers need to generate a command parser that exhaustively satisfies all the user requirements. Perhaps the major reason for selecting the LISP environment is that user-written code can be added to the environment through a ''foreign function'' interface without recompiling the entire system. The ability to perform fast prototyping of new algorithms is an important feature of this environment. As currently implemented, ISLE requires a Sun color or monochrome workstation and a license to run Franz Extended Common LISP. 16 refs., 4 figs.

  18. Improving night sky star image processing algorithm for star sensors.

    PubMed

    Arbabmir, Mohammad Vali; Mohammadi, Seyyed Mohammad; Salahshour, Sadegh; Somayehee, Farshad

    2014-04-01

    In this paper, the night sky star image processing algorithm, consisting of image preprocessing, star pattern recognition, and centroiding steps, is improved. It is shown that the proposed noise reduction approach preserves more necessary information than other frequently used approaches. It is also shown that the proposed thresholding method, unlike commonly used techniques, can properly perform image binarization, especially in images with uneven illumination. Moreover, a higher performance rate and a lower average centroiding estimation error of about 0.045 for 400 simulated images, compared to other algorithms, show the high capability of the proposed night sky star image processing algorithm. PMID:24695142
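
    The centroiding step common to such star-tracker algorithms can be sketched as an intensity-weighted center of mass over thresholded pixels. This is a generic sketch with an invented star blob, not the paper's improved method:

```python
def centroid(image, threshold):
    # Intensity-weighted centroid of above-threshold pixels; the weighting
    # is what yields subpixel accuracy for a star's position.
    total = sx = sy = 0.0
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            if v > threshold:
                total += v
                sx += x * v
                sy += y * v
    return (sx / total, sy / total)

star = [[0, 0, 0, 0],
        [0, 2, 4, 0],
        [0, 4, 6, 0],
        [0, 0, 0, 0]]
print(centroid(star, threshold=1))  # -> (1.625, 1.625)
```

    The thresholding that feeds this step is exactly where the paper's contribution lies: a poor threshold under uneven illumination admits noise pixels and biases the weighted mean.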

  19. Visual image processing by neural networks with nonlocal coupling

    NASA Astrophysics Data System (ADS)

    Belliustin, N. S.; Khobotov, A. G.

    1994-09-01

    The possibilities of using homogeneous artificial neural networks both for encoding and back reconstruction of visual images are investigated. A conjugate reverse neural network with an opposite time direction is used for the image reconstruction. Information-entropic characteristics of the image and its different-level histograms are used for quality monitoring of the dynamic image processing. Experimental data on the dynamic behavior of these values in computer simulation for pure images are reported. Comparison with the thermodynamic entropy, subject to a monotonic change, is made. Special cases of nonmonotonic entropy change are considered and examples of the dynamic processing of images with code redundancy are demonstrated.
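
    An example of the information-entropic characteristics mentioned: the Shannon entropy of an image's gray-level histogram. This is a generic sketch in plain Python, not the authors' network code:

```python
import math

def image_entropy(pixels, levels=256):
    # Shannon entropy (in bits) of the gray-level histogram: a flat image
    # carries no information; a spread of levels carries more.
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    n = len(pixels)
    return sum(-(h / n) * math.log2(h / n) for h in hist if h)

print(image_entropy([0, 0, 0, 0]))       # -> 0.0 (uniform image)
print(image_entropy([0, 64, 128, 255]))  # -> 2.0 (four equiprobable levels)
```

    Monitoring how this quantity evolves during encoding and reconstruction is one way to track the quality of the dynamic processing the abstract describes.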

  20. Understanding Reactions to Workplace Injustice through Process Theories of Motivation: A Teaching Module and Simulation

    ERIC Educational Resources Information Center

    Stecher, Mary D.; Rosse, Joseph G.

    2007-01-01

    Management and organizational behavior students are often overwhelmed by the plethora of motivation theories they must master at the undergraduate level. This article offers a teaching module geared toward helping students understand how two major process theories of motivation, equity and expectancy theories and theories of organizational…

  1. Internet Access, Use and Sharing Levels among Students during the Teaching-Learning Process

    ERIC Educational Resources Information Center

    Tutkun, Omer F.

    2011-01-01

    The purpose of this study was to determine students' levels of awareness regarding access, use, and knowledge sharing during the teaching-learning process. The triangulation method was utilized in this study. The research population was 21,747; the student sample was 1,292. Two different data collection…

  2. Learning and Teaching about the Nature of Science through Process Skills

    ERIC Educational Resources Information Center

    Mulvey, Bridget K.

    2012-01-01

    This dissertation, a three-paper set, explored whether the process skills-based approach to nature of science instruction improves teachers' understandings, intentions to teach, and instructional practice related to the nature of science. The first paper examined the nature of science views of 53 preservice science teachers before and after a…

  3. A Performer's Creative Processes: Implications for Teaching and Learning Musical Interpretation

    ERIC Educational Resources Information Center

    Silverman, Marissa

    2008-01-01

    The purpose of this study is to investigate aspects of musical interpretation and suggest guidelines for developing performance students' interpretative processes. Since musical interpretation involves basic issues concerning the nature of music, and competing concepts of "interpretation" and its teaching, an overview of these issues is given.…

  4. ICCE/ICCAI 2000 Full & Short Papers (Teaching and Learning Processes).

    ERIC Educational Resources Information Center

    2000

    This document contains the full and short papers on teaching and learning processes from ICCE/ICCAI 2000 (International Conference on Computers in Education/International Conference on Computer-Assisted Instruction) covering the following topics: a code restructuring tool to help scaffold novice programmers; efficient study of Kanji using…

  5. Exploring the Process of Integrating the Internet into English Language Teaching

    ERIC Educational Resources Information Center

    Abdallah, Mahmoud Mohammad Sayed

    2007-01-01

    The present paper explores the process of integrating the Internet into the field of English language teaching in the light of the following points: the general importance of the Internet in our everyday lives shedding some light on the increasing importance of the Internet as a new innovation in our modern life; benefits of using the Internet in…

  6. Using a Laboratory Simulator in the Teaching and Study of Chemical Processes in Estuarine Systems

    ERIC Educational Resources Information Center

    Garcia-Luque, E.; Ortega, T.; Forja, J. M.; Gomez-Parra, A.

    2004-01-01

    The teaching of Chemical Oceanography in the Faculty of Marine and Environmental Sciences of the University of Cadiz (Spain) has been improved since 1994 by the employment of a device for the laboratory simulation of estuarine mixing processes and the characterisation of the chemical behaviour of many substances that pass through an estuary. The…

  7. The Emergence of the Teaching/Learning Process in Preschoolers: Theory of Mind and Age Effect

    ERIC Educational Resources Information Center

    Bensalah, Leila

    2011-01-01

    This study analysed the gradual emergence of the teaching/learning process by examining theory of mind (ToM) acquisition and age effects in the preschool period. We observed five dyads performing a jigsaw task drawn from a previous study. Three stages were identified. In the first one, the teacher focuses on the execution of her/his own task…

  8. A Development of Environmental Education Teaching Process by Using Ethics Infusion for Undergraduate Students

    ERIC Educational Resources Information Center

    Wongchantra, Prayoon; Boujai, Pairoj; Sata, Winyoo; Nuangchalerm, Prasart

    2008-01-01

    Environmental problems were made by human beings because they lack environmental ethics. The sustainable solving of environmental problems must rely on a teaching process using an environmental ethics infusion method. The purposes of this research were to study knowledge of environment and environmental ethics through an environmental education…

  9. Metaphors in Mathematics Classrooms: Analyzing the Dynamic Process of Teaching and Learning of Graph Functions

    ERIC Educational Resources Information Center

    Font, Vicenc; Bolite, Janete; Acevedo, Jorge

    2010-01-01

    This article presents an analysis of a phenomenon that was observed within the dynamic processes of teaching and learning to read and elaborate Cartesian graphs for functions at high-school level. Two questions were considered during this investigation: What types of metaphors does the teacher use to explain the graphic representation of functions…

  10. Optimizing the Teaching-Learning Process Through a Linear Programming Model--Stage Increment Model.

    ERIC Educational Resources Information Center

    Belgard, Maria R.; Min, Leo Yoon-Gee

    An operations research method to optimize the teaching-learning process is introduced in this paper. In particular, a linear programing model is proposed which, unlike dynamic or control theory models, allows the computer to react to the responses of a learner in seconds or less. To satisfy the assumptions of linearity, the seemingly complicated…

  11. Teaching Sign Language to Hearing Parents of Deaf Children: An Action Research Process

    ERIC Educational Resources Information Center

    Napier, Jemina; Leigh, Greg; Nann, Sharon

    2007-01-01

    This paper provides an overview of the challenges in learning a signed language as a second language, in particular for hearing parents with deaf children, and details an action research process that led to the design of a new curriculum for teaching Australian Sign Language (Auslan) to the families of deaf children. The curriculum was developed…

  12. History and Ethno-Mathematics in the interpretation of the process of learning/teaching

    E-print Network

    Spagnolo, Filippo

    Discusses the relationship between Epistemology, History and the communication of mathematics, and the interpretation of the phenomena of mathematics as (1) the history of the syntax of mathematical languages and (2) the history of the semantics of mathematical languages…

  13. The Process of Teaching and Learning about Reflection: Research Insights from Professional Nurse Education

    ERIC Educational Resources Information Center

    Bulman, Chris; Lathlean, Judith; Gobbi, Mary

    2014-01-01

    The study aimed to investigate the process of reflection in professional nurse education and the part it played in a teaching and learning context. The research focused on the social construction of reflection within a post-registration, palliative care programme, accessed by nurses, in the United Kingdom (UK). Through an interpretive ethnographic…

  14. A National Research Survey of Technology Use in the BSW Teaching and Learning Process

    ERIC Educational Resources Information Center

    Buquoi, Brittany; McClure, Carli; Kotrlik, Joseph W.; Machtmes, Krisanna; Bunch, J. C.

    2013-01-01

    The purpose of this descriptive-correlational research study was to assess the overall use of technology in the teaching and learning process (TLP) by BSW educators. The accessible and target population included all full-time, professorial-rank, BSW faculty in Council on Social Work Education--accredited BSW programs at land grant universities.…

  15. High Speed Terahertz Pulse Imaging in the Reflection Geometry and Image Quality Enhancement by Digital Image Processing

    NASA Astrophysics Data System (ADS)

    Shon, Chae-Hwa; Chong, Won-Yong; Jeon, Seok-Gy; Kim, Geun-Ju; Kim, Jung-Il; Jin, Yun-Sik

    2008-01-01

    We describe the formation and enhancement of two dimensional pulsed terahertz (THz) images obtained in the reflection geometry with a high-speed optical delay line. Two test objects are imaged and analyzed with respect to material information and concealed structure. Clear THz images were obtained with various imaging modes and were compared with the X-ray images. The THz image of a sample revealed material features that the X-ray image cannot distinguish. We could enhance the THz image quality using various image processing techniques, such as edge detection, de-noising, high-pass filtering, and wavelet filtering.
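
The enhancement techniques this abstract lists (de-noising, high-pass filtering, edge detection) are standard operations; the NumPy-only sketch below is a generic illustration with made-up function names, not the authors' implementation:

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    # 1-D Gaussian kernel, normalised to sum to 1
    x = np.arange(size) - size // 2
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def convolve2d(img, row_kernel, col_kernel):
    # Separable filtering: convolve rows, then columns
    out = np.apply_along_axis(lambda r: np.convolve(r, row_kernel, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, col_kernel, mode="same"), 0, out)

def denoise(img, sigma=1.0):
    # Gaussian low-pass filtering suppresses pixel noise
    k = gaussian_kernel(sigma=sigma)
    return convolve2d(img, k, k)

def highpass(img, sigma=2.0):
    # High-pass = original minus its low-pass version
    return img - denoise(img, sigma)

def sobel_edges(img):
    # Gradient magnitude from Sobel-style derivative/smoothing kernels
    smooth = np.array([1.0, 2.0, 1.0])
    diff = np.array([1.0, 0.0, -1.0])
    gx = convolve2d(img, diff, smooth)   # horizontal derivative
    gy = convolve2d(img, smooth, diff)   # vertical derivative
    return np.hypot(gx, gy)
```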

  16. An invertebrate embryologist's guide to routine processing of confocal images.

    PubMed

    von Dassow, George

    2014-01-01

    It is almost impossible to use a confocal microscope without encountering the need to transform the raw data through image processing. Adherence to a set of straightforward guidelines will help ensure that image manipulations are both credible and repeatable. Meanwhile, attention to optimal data collection parameters will greatly simplify image processing, not only for convenience but for quality and credibility as well. Here I describe how to conduct routine confocal image processing tasks, including creating 3D animations or stereo images, false coloring or merging channels, background suppression, and compressing movie files for display. PMID:24567209
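
Two of the routine tasks mentioned (false-colour channel merging and background suppression) can be sketched generically; the percentile-based background estimate and the function names below are illustrative assumptions, not the article's exact procedure:

```python
import numpy as np

def suppress_background(channel, percentile=10.0):
    # Estimate the background as a low percentile of the channel and
    # subtract it, clipping at zero so dim pixels do not go negative.
    bg = np.percentile(channel, percentile)
    return np.clip(channel - bg, 0.0, None)

def merge_channels(red=None, green=None, blue=None):
    # False-colour merge: normalise each channel to [0, 1] and place it
    # in one slot of an RGB image; missing channels stay black.
    channels = [red, green, blue]
    shape = next(c.shape for c in channels if c is not None)
    rgb = np.zeros(shape + (3,))
    for i, c in enumerate(channels):
        if c is not None and c.max() > 0:
            rgb[..., i] = c / c.max()
    return rgb
```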

  17. Survey on Neural Networks Used for Medical Image Processing

    PubMed Central

    Shi, Zhenghao; He, Lifeng; Suzuki, Kenji; Nakamura, Tsuyoshi; Itoh, Hidenori

    2010-01-01

    This paper presents a review of neural networks used in medical image processing. We classify neural networks by their processing goals and the nature of the medical images, and note the main contributions, advantages, and drawbacks of each method. Problematic issues of neural network application to medical image processing and an outlook on future research are also discussed. Through this survey, we try to answer two important questions: (1) What are the major applications of neural networks in medical image processing now and in the near future? (2) What are the major strengths and weaknesses of applying neural networks to medical image processing tasks? We believe this will be very helpful to researchers who are involved in medical image processing with neural network techniques.

  18. Processing JPEG-Compressed… (IEEE Transactions on Image Processing, Vol. 7, No. 12, December 1998, p. 1661)

    E-print Network

    de Queiroz, Ricardo L.

    Since the Joint Photographic Experts Group (JPEG) format has become an international standard for image compression, we present techniques that allow the processing of an image in the "JPEG-compressed" domain. The goal is to reduce memory…

  19. Optimizing signal and image processing applications using Intel libraries

    NASA Astrophysics Data System (ADS)

    Landré, Jérôme; Truchetet, Frédéric

    2007-01-01

    This paper presents optimized signal and image processing libraries from Intel Corporation. Intel Performance Primitives (IPP) is a low-level signal and image processing library developed by Intel Corporation to optimize code on Intel processors. Open Computer Vision library (OpenCV) is a high-level library dedicated to computer vision tasks. This article describes the use of both libraries to build flexible and efficient signal and image processing applications.
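
The pipeline style the article attributes to IPP and OpenCV (small optimized primitives composed into applications) can be illustrated with plain-NumPy stand-ins; none of the functions below are the actual IPP or OpenCV API:

```python
import numpy as np

# Plain-NumPy stand-ins for the kind of primitive IPP/OpenCV provide:
# each stage is small and reusable, and an application chains them.
# Function names are illustrative, not the real API.

def to_float(img):
    # Scale 8-bit pixel values into [0, 1]
    return img.astype(np.float64) / 255.0

def stretch_contrast(img):
    # Linearly map the image range onto [0, 1]
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo) if hi > lo else img

def threshold(img, t=0.5):
    # Binarise: 1 where the pixel exceeds t, else 0
    return (img > t).astype(np.uint8)

def pipeline(img, stages):
    # Chain the stages: the output of one feeds the next
    for stage in stages:
        img = stage(img)
    return img
```

For example, pipeline(raw, [to_float, stretch_contrast, threshold]) runs a tiny enhancement chain; swapping a stage changes the application without touching the others.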

  20. A Reflective Teaching-Learning Process to Enhance Personal Knowing.

    ERIC Educational Resources Information Center

    Smith, Mary Jane

    2000-01-01

    Presents a process in which nursing graduate students describe their personal stories of time pressures, identify the common elements in their stories, and reflect on their inner voice, leading to recognition of personal knowing. (SK)

  1. Abstract --Image segmentation plays an important role in medical image processing. The aim of conventional hard

    E-print Network

    Image segmentation plays an important role in medical image processing. Due to the limited spatial resolution of medical imaging equipment and the complex anatomic structure of soft tissues, a single voxel in a medical image may be composed of several tissue types, which is called the partial volume…

  2. A Wavelet-Laplace Variational Technique for Image Deconvolution and Inpainting (submitted to IEEE Transactions on Image Processing)

    E-print Network

    Bertozzi, Andrea L.

    A Wavelet-Laplace Variational Technique for Image Deconvolution and Inpainting, by Julia A. Dobrosotskaya and Andrea L. Bertozzi. …and Perona-Malik [18] for image denoising led to a vast body of work on variational techniques for image…

  3. Image Noise Level Estimation by Principal Component Analysis (IEEE Transactions on Image Processing)

    E-print Network

    Hesser, Jürgen

    In this article, we propose a new noise level estimation method based on principal component analysis of image blocks. We show that the noise variance can be estimated as the smallest eigenvalue of the image block…
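
The estimator this abstract describes (the smallest eigenvalue of the patch covariance as the noise-variance estimate) can be sketched directly in NumPy. The block size, patch stride, and function name below are illustrative assumptions, not the paper's exact parameters:

```python
import numpy as np

def estimate_noise_variance(img, block=8):
    # Gather all overlapping block x block patches as row vectors
    h, w = img.shape
    patches = np.array([
        img[i:i + block, j:j + block].ravel()
        for i in range(h - block + 1)
        for j in range(w - block + 1)
    ])
    # Covariance of the patch ensemble: if the clean patches lie in a
    # low-dimensional subspace, the smallest eigenvalue approximates
    # the variance of the additive noise.
    cov = np.cov(patches, rowvar=False)
    return float(np.linalg.eigvalsh(cov).min())
```

On a noise-free image with simple structure the estimate is near zero; adding white noise raises it toward the true noise variance.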

  4. A fusion method for visible and infrared images based on contrast pyramid with teaching learning based optimization

    NASA Astrophysics Data System (ADS)

    Jin, Haiyan; Wang, Yanyan

    2014-05-01

    This paper proposes a novel image fusion scheme based on the contrast pyramid (CP) with teaching-learning-based optimization (TLBO) for visible and infrared images of complicated scenes under different spectra. First, CP decomposition is applied at every level of each original image. Then, TLBO is introduced to optimize the fusion coefficients, which are updated during the teacher and learner phases of TLBO, so that the weighted coefficients are automatically adjusted according to a fitness function, namely the evaluation standards of image quality. Finally, the fusion results are obtained by the inverse CP transformation. Experimental results show that, compared with existing methods, our method is effective and the fused images are more suitable for further human visual or machine perception.
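
The optimization loop described here (choosing fusion weights to maximise an image-quality fitness function) can be illustrated with a much simpler stand-in: a grid search over a single global weight, with image variance as the fitness. This replaces both TLBO and the contrast pyramid, so it is only a sketch of the idea, with illustrative names:

```python
import numpy as np

def fuse(visible, infrared, w):
    # Pixel-wise weighted fusion; w in [0, 1] balances the two sources
    return w * visible + (1.0 - w) * infrared

def fitness(img):
    # Stand-in image-quality score: variance (higher = more contrast)
    return img.var()

def optimise_weight(visible, infrared, steps=101):
    # Grid search in place of TLBO: pick the weight maximising fitness
    weights = np.linspace(0.0, 1.0, steps)
    scores = [fitness(fuse(visible, infrared, w)) for w in weights]
    return float(weights[int(np.argmax(scores))])
```

A population-based optimizer such as TLBO would explore per-coefficient weights at each pyramid level instead of a single scalar.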

  5. Quantum Image Morphology Processing Based on Quantum Set Operation

    NASA Astrophysics Data System (ADS)

    Zhou, Ri-Gui; Chang, Zhi-bo; Fan, Ping; Li, Wei; Huan, Tian-tian

    2015-06-01

    Set operations are the essential operations of mathematical morphology, but they are difficult to perform quickly on an electronic computer, so the efficiency of traditional morphology processing is very low. In this paper, by combining quantum computation with image processing, through multiple quantum logic gates, and by combining quantum image storage, a quantum loading scheme and the Boyer search algorithm, a novel quantum image processing method is proposed: morphological image processing based on quantum set operations. Basic operations such as erosion and dilation are carried out on images using the quantum erosion and quantum dilation algorithms. Because the parallelism of quantum computation can greatly speed up the set operations, the image processing achieves higher efficiency. The runtime of our quantum algorithm is . As a result, this method can produce better results.

  6. Cardiovascular Imaging and Image Processing: Theory and Practice - 1975

    NASA Technical Reports Server (NTRS)

    Harrison, Donald C. (Editor); Sandler, Harold (Editor); Miller, Harry A. (Editor); Hood, Manley J. (Editor); Purser, Paul E. (Editor); Schmidt, Gene (Editor)

    1975-01-01

    Ultrasonography was examined in regard to the developmental highlights and present applicatons of cardiac ultrasound. Doppler ultrasonic techniques and the technology of miniature acoustic element arrays were reported. X-ray angiography was discussed with special considerations on quantitative three dimensional dynamic imaging of structure and function of the cardiopulmonary and circulatory systems in all regions of the body. Nuclear cardiography and scintigraphy, three--dimensional imaging of the myocardium with isotopes, and the commercialization of the echocardioscope were studied.

  7. APPLEPIPS /Apple Personal Image Processing System/ - An interactive digital image processing system for the Apple II microcomputer

    NASA Technical Reports Server (NTRS)

    Masuoka, E.; Rose, J.; Quattromani, M.

    1981-01-01

    Recent developments related to microprocessor-based personal computers have made low-cost digital image processing systems a reality. Image analysis systems built around these microcomputers provide color image displays for images as large as 256 by 240 pixels in sixteen colors. Descriptive statistics can be computed for portions of an image, and supervised image classification can be obtained. The systems support Basic, Fortran, Pascal, and assembler language. A description is provided of a system which is representative of the new microprocessor-based image processing systems currently on the market. While small systems may never be truly independent of larger mainframes, because they lack 9-track tape drives, the independent processing power of the microcomputers will help alleviate some of the turn-around time problems associated with image analysis and display on the larger multiuser systems.
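
Computing descriptive statistics for a portion of an image, one of the APPLEPIPS features mentioned, is straightforward to sketch; the function name and rectangular-region interface below are illustrative, not the original system's commands:

```python
import numpy as np

def region_stats(img, row0, row1, col0, col1):
    # Descriptive statistics for a rectangular portion of an image
    region = img[row0:row1, col0:col1].astype(np.float64)
    return {
        "min": float(region.min()),
        "max": float(region.max()),
        "mean": float(region.mean()),
        "std": float(region.std()),
    }
```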

  8. Exact Feature Extraction using Finite Rate of Innovation (IEEE Transactions on Image Processing)

    E-print Network

    Dragotti, Pier Luigi

    Popular registration methods are often based on features extracted from the acquired images. However, in low-resolution images, only a few features can be extracted, and often with poor precision…

  9. Experiences with digital processing of images at INPE

    NASA Technical Reports Server (NTRS)

    Mascarenhas, N. D. A. (principal investigator)

    1984-01-01

    Four different research experiments with digital image processing at INPE will be described: (1) edge detection by hypothesis testing; (2) image interpolation by finite impulse response filters; (3) spatial feature extraction methods in multispectral classification; and (4) translational image registration by sequential tests of hypotheses.
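
Translational image registration (experiment 4) is commonly done by locating the peak of a cross-correlation; the FFT-based NumPy sketch below illustrates that standard approach, not INPE's sequential-tests-of-hypotheses method:

```python
import numpy as np

def register_translation(ref, shifted):
    # Estimate the integer (row, col) shift between two images from the
    # peak of their FFT-based circular cross-correlation.
    f = np.fft.fft2(shifted) * np.conj(np.fft.fft2(ref))
    corr = np.fft.ifft2(f).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    # Peaks past the midpoint correspond to negative shifts
    dy = int(dy) - h if dy > h // 2 else int(dy)
    dx = int(dx) - w if dx > w // 2 else int(dx)
    return dy, dx
```

Because the correlation is circular, only shifts up to half the image size are recovered unambiguously.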

  10. MIPL : AN IMAGE PROCESSING LIBRARY FOR MEDICAL APPLICATIONS

    E-print Network

    MIPL: An Image Processing Library for Medical Applications. C. Bouras, T. Georgantas, K. … The display and manipulation of medical images is becoming very important. The access and manipulation of medical images in digital form require different functionalities. The most…

  11. Image Processing and Analysis at IPAG I. INTRODUCTION

    E-print Network

    Duncan, James S.

    Medical image analysis has grown to include a significant body of work addressing such issues as fully three-dimensional data and nonrigid… However, the methodologies developed encompass an array of techniques that have advanced image…

  12. High Speed Imaging Technology for the Microgravity Containerless Processing Facility

    E-print Network

    Fossum, Eric R.

    Study on High Speed Imaging Technology for the Microgravity Containerless Processing Facility, September 15, 1992. This report summarizes… infrared (1-3 microns)…

  13. On the reduction of impulsive noise in multichannel image processing

    E-print Network

    Plataniotis, Konstantinos N.

    B. Smolka, A. Chydzinski, K. W… A new approach to the problem of impulsive-noise reduction for color images is introduced. The major advantage of the technique is that it filters out the noise component while adapting itself to the local image structures…

  14. Kagan Structures, Processing, and Excellence in College Teaching

    ERIC Educational Resources Information Center

    Kagan, Spencer

    2014-01-01

    Frequent student processing of lecture content (1) clears working memory, (2) increases long-term memory storage, (3) produces retrograde memory enhancement, (4) creates episodic memories, (5) increases alertness, and (6) activates many brain structures. These outcomes increase comprehension of and memory for content. Many professors now…

  15. Creative Process Mentoring: Teaching the "Making" in Dance-Making

    ERIC Educational Resources Information Center

    Lavender, Larry

    2006-01-01

    Within the Western fine arts tradition of concert and theatrical dance, new dances may be created in any number of ways. No matter how dance making begins, however, unless the work is to be improvised afresh each time it is performed, a process of developing, revising, and "setting" the work needs to take place. To move confidently and…

  16. Teaching Information Literacy and Scientific Process Skills: An Integrated Approach.

    ERIC Educational Resources Information Center

    Souchek, Russell; Meier, Marjorie

    1997-01-01

    Describes an online searching and scientific process component taught as part of the laboratory for a general zoology course. The activities were designed to be gradually more challenging, culminating in a student-developed final research project. Student evaluations were positive, and faculty indicated that student research skills transferred to…

  17. Teaching the Writing Process in Primary Grades: One Teacher's Approach

    ERIC Educational Resources Information Center

    Martin, Linda E.; Thacker, Shirley

    2009-01-01

    This article describes how one such teacher, Shirley Thacker, developed and implemented a successful writing program in her first grade classroom, which is known as Thackerville. Shirley describes how she motivated a classroom of first-graders to use the writing process in a workshop format and how this approach affected the children's perceptions…

  18. Preparing Teachers to Teach Science: Learning Science as a Process.

    ERIC Educational Resources Information Center

    Cornell, Elizabeth A.

    1985-01-01

    Cites the lack of students' understanding and practicing of science processes as evidenced in science fair projects. Major contributors to the decline in science achievement are discussed. Author suggests teachers need experience with "sciencing" in the form of original investigative projects. Coursework designed to meet this goal is described.…

  19. Breast image pre-processing for mammographic tissue segmentation.

    PubMed

    He, Wenda; Hogg, Peter; Juette, Arne; Denton, Erika R E; Zwiggelaar, Reyer

    2015-12-01

    During mammographic image acquisition, a compression paddle is used to even the breast thickness in order to obtain optimal image quality. Clinical observation has indicated that some mammograms may exhibit abrupt intensity change and low visibility of tissue structures in the breast peripheral areas. Such appearance discrepancies can affect image interpretation and may not be desirable for computer aided mammography, leading to incorrect diagnosis and/or detection which can have a negative impact on sensitivity and specificity of screening mammography. This paper describes a novel mammographic image pre-processing method to improve image quality for analysis. An image selection process is incorporated to better target problematic images. The processed images show improved mammographic appearances not only in the breast periphery but also across the mammograms. Mammographic segmentation and risk/density classification were performed to facilitate a quantitative and qualitative evaluation. When using the processed images, the results indicated more anatomically correct segmentation in tissue specific areas, and subsequently better classification accuracies were achieved. Visual assessments were conducted in a clinical environment to determine the quality of the processed images and the resultant segmentation. The developed method has shown promising results. It is expected to be useful in early breast cancer detection, risk-stratified screening, and aiding radiologists in the process of decision making prior to surgery and/or treatment. PMID:26498046

  20. The processing loads of young children's and teachers' representations of place value and implications for teaching

    NASA Astrophysics Data System (ADS)

    Boulton-Lewis, Gillian; Halford, Graeme

    1992-02-01

    This is a description of one aspect of research carried out to assess and compare the processing loads of typical mathematical representations and strategies used by teachers and young children. The sample consisted of 29 children, aged between 5 and 8 years, and their teachers in a suburban school in a low to medium socioeconomic area of Brisbane, Australia. Children were interviewed and videotaped individually before and after the teaching of a specified concept, in this case place value. Teachers were interviewed and asked to describe their teaching strategies. The results show that some representations and strategies that either teachers or children choose impose an unnecessary processing load, which can interfere with conceptual learning. If teachers are aware of this load, representations can be used in such a way that processing loads are minimized.

  1. Using quantum filters to process images of diffuse axonal injury

    NASA Astrophysics Data System (ADS)

    Pineda Osorio, Mateo

    2014-06-01

    Some images corresponding to diffuse axonal injury (DAI) are processed using several quantum filters, such as Hermite, Weibull and Morse. Diffuse axonal injury is a particular, common and severe case of traumatic brain injury (TBI). DAI involves global damage to brain tissue on a microscopic scale and causes serious neurologic abnormalities. New imaging techniques provide excellent images showing cellular damage related to DAI. Such images can be processed with quantum filters, which achieve high resolution of dendritic and axonal structures in both normal and pathological states. Using the Laplacian operators from the new quantum filters, excellent edge detectors for neurofiber resolution are obtained. Quantum image processing of DAI images is performed using computer algebra, specifically Maple. Construction of quantum filter plugins, which could be incorporated into the ImageJ software package to make its use simpler for medical personnel, is proposed as a future research line.

  2. Application of near-infrared image processing in agricultural engineering

    NASA Astrophysics Data System (ADS)

    Chen, Ming-hong; Zhang, Guo-ping; Xia, Hongxing

    2009-07-01

    Recently, with the development of computer technology, the field of application of near-infrared (NIR) image processing has become much wider. This paper introduces the technical characteristics and development of modern NIR imaging and NIR spectroscopy analysis, and summarizes the application and study of NIR image processing techniques in agricultural engineering in recent years, based on the application principles and development characteristics of near-infrared imaging. NIR imaging is very useful for nondestructive inspection of the external and internal quality of agricultural products, and near-infrared spectroscopy is important for detecting stored-grain insects. Computer vision detection based on NIR imaging can help manage food logistics, and the application of NIR imaging has promoted the quality management of agricultural products. Finally, advice and prospects for further research on NIR imaging applications in agricultural engineering are put forward.

  3. Morphological image sequence processing Karol Mikula

    E-print Network

    Rumpf, Martin

    …spatio-temporal level-set evolution, where the additional artificial time variable serves as the multi-scale parameter. The image sequence is considered as initial data to some suitable evolution problem, and the artificial time parameter acts… Ultrasound (US), magnetic resonance imaging (MRI) and computed tomography imaging (CT) enable…

  4. High resolution image processing on low-cost microcomputers

    NASA Technical Reports Server (NTRS)

    Miller, R. L.

    1993-01-01

    Recent advances in microcomputer technology have resulted in systems that rival the speed, storage, and display capabilities of traditionally larger machines. Low-cost microcomputers can provide a powerful environment for image processing. A new software program which offers sophisticated image display and analysis on IBM-based systems is presented. Designed specifically for a microcomputer, this program provides a wide-range of functions normally found only on dedicated graphics systems, and therefore can provide most students, universities and research groups with an affordable computer platform for processing digital images. The processing of AVHRR images within this environment is presented as an example.

  5. Protocols for Image Processing based Underwater Inspection of Infrastructure Elements

    NASA Astrophysics Data System (ADS)

    O'Byrne, Michael; Ghosh, Bidisha; Schoefs, Franck; Pakrashi, Vikram

    2015-07-01

    Image processing can be an important tool for inspecting underwater infrastructure elements like bridge piers and pile wharves. Underwater inspection often relies on the visual descriptions of divers who are not necessarily trained in the specifics of structural degradation, and the information may be vague, prone to error or open to significant variation in interpretation. Underwater vehicles, on the other hand, can be quite expensive to deploy for such inspections. Additionally, there is now significant encouragement globally towards the deployment of more offshore renewable wind turbines and wave devices, and the requirement for underwater inspection can be expected to increase significantly in the coming years. While the merit of image-processing-based assessment of the condition of underwater structures is understood to a certain degree, there is no existing protocol for such image-based methods. This paper discusses and describes an image processing protocol for the underwater inspection of structures. A stereo-imaging method is considered in this regard, and protocols are suggested for image storage, imaging, diving, and inspection. A combined underwater imaging protocol is finally presented which can be used for a variety of situations within a range of image scenes and environmental conditions affecting the imaging. An example of detecting marine growth on a structure in Cork Harbour, Ireland is presented.

  6. Internship proposal "Image processing applied to flow characterisation"

    E-print Network

    Condat, Laurent

    Internship proposal: "Image processing applied to flow characterisation". Introduction: Many… Objectives: The aim of this internship is to apply image processing techniques to online flow characterisation… In addition, an experimental case is provided. Place of internship: Gipsa-lab, Campus Grenoble.

  7. Application Of Digital Image Processing In Computer Aided Art.

    NASA Astrophysics Data System (ADS)

    Groen, Frans C. A.; le Gue, Raymond P.; Smeulders, Arnold W. M.

    1983-10-01

    Digital image processing offers the possibility of realizing novel special effects in computer-aided art. In this paper, image processing methods are described to alter static scenes or to realize image transitions in animation. Transformations include conversion to dot patterns, line patterns, and various enhancement and resolution effects. Examples are the production of a standard Dutch postage stamp, contributions to an experimental television program presented by the Dutch broadcasting corporation, and some poster designs.

  8. The Use of Formal Program Design Methods in the Teaching of Computer Science and Data Processing Students.

    ERIC Educational Resources Information Center

    Smith, Peter; Thompson, Barrie

    1988-01-01

    Describes an instructional methodology developed and implemented at Sunderland Polytechnic that uses formal computer program design methods to teach computer programing in computer science and data processing courses. Jackson Structured Programming is explained, and the results of an evaluation by both teaching staff and students are presented.…

  9. Co-Teaching through Modeling Processes: Professional Development of Students and Instructors in a Teacher Training Program

    ERIC Educational Resources Information Center

    Bashan, Bilha; Holsblat, Rachel

    2012-01-01

    In this article, a unique model of instruction based on co-teaching carried out in the framework of the practice teaching program intended for third year college students is presented. The program incorporated principles of modeling based on processes of mentoring, instruction, and discussion, showing the students the pedagogical importance of…

  10. Developing a Constructivist Proposal for Primary Teachers to Teach Science Process Skills: "Extended" Simple Science Experiments (ESSE)

    ERIC Educational Resources Information Center

    Hirça, Necati

    2015-01-01

    Although science experiments are the basis of teaching science process skills (SPS), it has been observed that a large number of prospective primary teachers (PPTs), by virtue of their background, feel anxious about doing science experiments. To overcome this problem, a proposal was suggested for primary school teachers (PSTs) to teach science and…

  11. An Eight Week Summer Institute Training Program to Retrain Office Education Teachers for Teaching Business Electronic Data Processing.

    ERIC Educational Resources Information Center

    Breese, William E.

    A 16-week, two-summer institute was held to assist in developing the knowledge and skill essential for teaching specialized courses in a 2-year curriculum in business electronic data processing. The report describes the institute's enrollment, environment (area and school), teaching staff, text material, and course outlines. Evaluations by both the…

  12. The Evolution of English Language Teaching during Societal Transition in Finland--A Mutual Relationship or a Distinctive Process?

    ERIC Educational Resources Information Center

    Jaatinen, Riitta; Saarivirta, Toni

    2014-01-01

    This study describes the evolution of English language teaching in Finland and looks into the connections of the societal and educational changes in the country as explanatory factors in the process. The results of the study show that the language teaching methodology and the status of foreign languages in Finland are clearly connected to the…

  13. Investigating Factors Affecting Science Teachers' Performance and Satisfaction toward Their Teaching Process at Najran University for Girls' Science Colleges

    ERIC Educational Resources Information Center

    Alshehry, Amel Thafer

    2014-01-01

    In the Saudi educational system, many factors have created varied needs for teaching qualifications in higher educational institutions. One main aim of this study was to determine how college teachers think the effectiveness of the teaching process should be assessed and what most students consider when evaluating their teachers. Further, it…

  14. Multiscale Astronomical Image Processing Based on Nonlinear Partial Differential Equations

    NASA Astrophysics Data System (ADS)

    Pesenson, Meyer; Roby, William; McCollum, Bruce

    2008-08-01

    Astronomical applications of recent advances in the field of nonastronomical image processing are presented. These innovative methods, applied to multiscale astronomical images, increase signal-to-noise ratio, do not smear point sources or extended diffuse structures, and are thus a highly useful preliminary step for detection of different features including point sources, smoothing of clumpy data, and removal of contaminants from background maps. We show how the new methods, combined with other algorithms of image processing, unveil fine diffuse structures while at the same time enhance detection of localized objects, thus facilitating interactive morphology studies and paving the way for the automated recognition and classification of different features. We have also developed a new application framework for astronomical image processing that implements some recent advances made in computer vision and modern image processing, along with original algorithms based on nonlinear partial differential equations. The framework enables the user to easily set up and customize an image-processing pipeline interactively; it has various common and new visualization features and provides access to many astronomy data archives. Altogether, the results presented here demonstrate the first implementation of a novel synergistic approach based on integration of image processing, image visualization, and image quality assessment.
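
    The abstract's central idea — smoothing that does not smear point sources or edges — can be illustrated with classic Perona-Malik anisotropic diffusion, one well-known nonlinear-PDE scheme. This is a generic NumPy toy, not the authors' framework, and the parameter values are arbitrary choices for the sketch:

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.1, dt=0.2):
    """Edge-preserving denoising by nonlinear (anisotropic) diffusion:
    the conduction coefficient g() vanishes across strong gradients, so
    edges and point sources are kept while flat regions are smoothed."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)  # conduction coefficient
    for _ in range(n_iter):
        # Differences toward the four neighbours (periodic borders).
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u = u + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

# Noisy step edge: diffusion removes the noise but keeps the edge sharp.
rng = np.random.default_rng(2)
step = np.zeros((32, 32))
step[:, 16:] = 1.0
noisy = step + rng.normal(0.0, 0.05, step.shape)
smoothed = perona_malik(noisy)
```

    The explicit scheme is stable for dt ≤ 0.25 with four neighbours; across the unit step the conduction coefficient is essentially zero, so the edge survives intact.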

  15. Advanced technology development for image gathering, coding, and processing

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.

    1990-01-01

    Three overlapping areas of research activities are presented: (1) Information theory and optimal filtering are extended to visual information acquisition and processing. The goal is to provide a comprehensive methodology for quantitatively assessing the end-to-end performance of image gathering, coding, and processing. (2) Focal-plane processing techniques and technology are developed to combine effectively image gathering with coding. The emphasis is on low-level vision processing akin to the retinal processing in human vision. (3) A breadboard adaptive image-coding system is being assembled. This system will be used to develop and evaluate a number of advanced image-coding technologies and techniques as well as research the concept of adaptive image coding.

  16. Graphical user interface for image acquisition and processing

    DOEpatents

    Goldberg, Kenneth A. (Berkeley, CA)

    2002-01-01

    An event-driven, GUI-based image acquisition interface for the IDL programming environment, designed for CCD camera control and image acquisition directly into the IDL environment where image manipulation and data analysis can be performed, together with a toolbox of real-time analysis applications. Running the image acquisition hardware directly from IDL removes the necessity of first saving images in one program and then importing the data into IDL for analysis in a second step. Bringing the data directly into IDL creates an opportunity for the implementation of IDL image processing and display functions in real time. The program allows control over the available charge-coupled device (CCD) detector parameters, data acquisition, file saving and loading, and image manipulation and processing, all from within IDL. The program is built using IDL's widget libraries to control the on-screen display and user interface.

  17. Optical Processing of Speckle Images with Bacteriorhodopsin for Pattern Recognition

    NASA Technical Reports Server (NTRS)

    Downie, John D.; Tucker, Deanne (Technical Monitor)

    1994-01-01

    Logarithmic processing of images with multiplicative noise characteristics can be utilized to transform the image into one with an additive noise distribution. This simplifies subsequent image processing steps for applications such as image restoration or correlation for pattern recognition. One particularly common form of multiplicative noise is speckle, for which the logarithmic operation not only produces additive noise, but also makes it of constant variance (signal-independent). We examine the optical transmission properties of some bacteriorhodopsin films here and find them well suited to implement such a pointwise logarithmic transformation optically in a parallel fashion. We present experimental results of the optical conversion of speckle images into transformed images with additive, signal-independent noise statistics using the real-time photochromic properties of bacteriorhodopsin. We provide an example of improved correlation performance in terms of correlation peak signal-to-noise for such a transformed speckle image.
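
    The pointwise logarithm described above is easy to verify numerically. The following NumPy sketch (an illustration of the mathematical property, not of the optical bacteriorhodopsin implementation) shows that the log maps multiplicative, signal-dependent speckle into additive noise whose variance no longer depends on the signal level:

```python
import numpy as np

def log_transform(speckled, eps=1e-12):
    """Pointwise log maps multiplicative speckle I = S * n into
    log I = log S + log n, i.e. additive noise."""
    return np.log(speckled + eps)

rng = np.random.default_rng(0)
signal = np.full((64, 64), 100.0)
# Fully developed speckle: unit-mean exponential multiplicative noise.
speckle = rng.exponential(1.0, size=signal.shape)
noisy = signal * speckle

# After the log, the noise variance is the same at any signal level,
# because log(c * s) = log c + log s only shifts the distribution.
v_low = np.var(log_transform(10.0 * speckle))
v_high = np.var(log_transform(1000.0 * speckle))
```

    This variance stabilization is exactly what makes subsequent restoration or correlation filters simpler to design.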

  18. Approach to retina optical coherence tomography image processing

    NASA Astrophysics Data System (ADS)

    Yuan, Jiali; Liu, Ruihua; Xuan, Gao; Yang, Jun; Yuan, Libo

    2007-03-01

    Optical coherence tomography (OCT) is a recently developed imaging technology. Using a Zeiss STRATUS OCT system, clear tomographic images of the retina and macula lutea can be obtained. The clinical use of image processing requires both medical knowledge and expertise in image processing techniques. This paper focuses on processing retinal OCT images to design an automatic retinal OCT image identification system that can help evaluate the retina and support examination and clinical diagnosis of fundus diseases. The motivation of our work is to extract contours and highlight the feature area of the lesion clearly and exactly, which generally involves image segmentation, enhancement, and binarization. We reduce image noise and connect the salient areas through color segmentation, low-pass filtering, and mathematical morphology, and finally compare the common and distinct properties of the post-processed images with the original OCT images. Experiments were done on cystoid macular edema, macular hole, and normal retinal OCT images. The results show that the proposed approach is feasible and suitable for further image identification and classification according to ophthalmological criteria.
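
    The low-pass-filter-then-binarize chain mentioned in the abstract can be sketched in a few lines. This is a toy NumPy illustration on a synthetic slice; the filter size and threshold are arbitrary choices for the example, not values from the paper:

```python
import numpy as np

def mean_filter(img, k=3):
    """Simple low-pass: k x k box average (edge-replicated borders)."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def binarize(img, thresh):
    """Threshold to a binary mask of the feature area."""
    return (img >= thresh).astype(np.uint8)

# Toy "OCT" slice: a bright retinal band over a dark, noisy background.
rng = np.random.default_rng(1)
img = np.zeros((40, 40))
img[15:20, :] = 1.0                   # layer of interest
img += rng.normal(0, 0.2, img.shape)  # speckle-like noise
mask = binarize(mean_filter(img, 3), 0.5)
```

    Smoothing before thresholding is what keeps the band connected despite the noise; morphology (not shown) would then clean up residual holes.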

  19. Data processing of vibrational chemical imaging for pharmaceutical applications.

    PubMed

    Sacré, P-Y; De Bleye, C; Chavez, P-F; Netchacovitch, L; Hubert, Ph; Ziemons, E

    2014-12-01

    Vibrational spectroscopy (MIR, NIR and Raman) based hyperspectral imaging is one of the most powerful tools for analyzing pharmaceutical preparations. It combines the advantages of vibrational spectroscopy with those of imaging techniques, and therefore allows the visualization of compound distributions or crystallization processes. However, these techniques provide a huge amount of data that must be processed to extract the relevant information. This review presents the fundamental concepts of hyperspectral imaging and the basic theory of the most widely used chemometric tools for pre-processing, processing and post-processing the generated data. The last part of the paper focuses on pharmaceutical applications of hyperspectral imaging and highlights data processing approaches, to enable the reader to make the best choice among the different tools available. PMID:24809748
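
    As one concrete example of the pre-processing step such reviews cover, the standard normal variate (SNV) transform is a common chemometric pre-treatment for NIR/Raman spectra. The NumPy sketch below is generic, not taken from this review; it shows SNV removing per-pixel baseline offset and multiplicative scatter:

```python
import numpy as np

def snv(spectra):
    """Standard normal variate: centre and scale each spectrum (last
    axis), removing additive offset and multiplicative scatter."""
    mean = spectra.mean(axis=-1, keepdims=True)
    std = spectra.std(axis=-1, keepdims=True)
    return (spectra - mean) / std

# Two pixels with identical chemistry but different scatter (gain/offset):
base = np.sin(np.linspace(0, 3, 50))
pixel_a = 2.0 * base + 0.5
pixel_b = 0.7 * base - 0.1
corrected = snv(np.stack([pixel_a, pixel_b]))
```

    After SNV the two spectra coincide, since the transform is invariant to any positive affine distortion of a spectrum.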

  20. Image reversal trilayer process using standard positive photoresist

    NASA Astrophysics Data System (ADS)

    Abdallah, David J.; Sagan, John; Kurosawa, Kazunori; Li, Jin; Takano, Yusuke; Shimizu, Yasuo; Shinde, Ninad; Nagahara, Tatsuro; Ishikawa, Tomonori; Dammel, Ralph R.

    2009-03-01

    Conventional trilayer schemes alleviate the decreasing photoresist budgets as well as satisfy the antireflection issues associated with high NA imaging. However, a number of challenges still exist with standard trilayer processing, most notable among which is the lack of broad resist compatibility and trade-offs associated with improving Si content, such as stability and lithography performance. One way to circumvent these issues is to use a silicon hard mask coated over a photoresist image of reverse tone to the desired pattern. Feasibility of this image reversal trilayer process was demonstrated by patterning of trenches and contact holes in a carbon hard mask from line and pillar photoresist images, respectively. This paper describes the lithography, pattern transfer process and materials developed for the image reversal trilayer processing.

  1. Optical Signal Processing: Poisson Image Restoration and Shearing Interferometry

    NASA Technical Reports Server (NTRS)

    Hong, Yie-Ming

    1973-01-01

    Optical signal processing can be performed in either digital or analog systems. Digital computers and coherent optical systems are discussed as they are used in optical signal processing. Topics include: image restoration; phase-object visualization; image contrast reversal; optical computation; image multiplexing; and fabrication of spatial filters. Digital optical data processing deals with restoration of images degraded by signal-dependent noise. When the input data of an image restoration system are the numbers of photoelectrons received from various areas of a photosensitive surface, the data are Poisson distributed with mean values proportional to the illuminance of the incoherently radiating object and background light. Optical signal processing using coherent optical systems is also discussed. Following a brief review of the pertinent details of Ronchi's diffraction grating interferometer, moire effect, carrier-frequency photography, and achromatic holography, two new shearing interferometers based on them are presented. Both interferometers can produce variable shear.

  2. Subway tunnel crack identification algorithm research based on image processing

    NASA Astrophysics Data System (ADS)

    Bai, Biao; Zhu, Liqiang; Wang, Yaodong

    2014-04-01

    The detection of cracks in tunnels has profound impact on the tunnel's safety. It's common for low contrast, uneven illumination and severe noise pollution in tunnel surface images. As traditional image processing algorithms are not suitable for detecting tunnel cracks, a new image processing method for detecting cracks in surface images of subway tunnels is presented in this paper. This algorithm includes two steps. The first step is a preprocessing which uses global and local methods simultaneously. The second step is the elimination of different types of noises based on the connected components. The experimental results show that the proposed algorithm is effective for detecting tunnel surface cracks.
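
    The second step — eliminating noise via connected components — can be sketched as follows. This is an illustrative pure-Python/NumPy implementation; the 4-connectivity and size threshold are assumptions made for the example:

```python
import numpy as np
from collections import deque

def remove_small_components(mask, min_size):
    """Keep only connected components (4-connectivity) of at least
    min_size pixels; smaller blobs are treated as noise."""
    h, w = mask.shape
    seen = np.zeros_like(mask, dtype=bool)
    out = np.zeros_like(mask)
    for y in range(h):
        for x in range(w):
            if mask[y, x] and not seen[y, x]:
                comp, q = [], deque([(y, x)])
                seen[y, x] = True
                while q:                      # BFS over the component
                    cy, cx = q.popleft()
                    comp.append((cy, cx))
                    for ny, nx in ((cy + 1, cx), (cy - 1, cx),
                                   (cy, cx + 1), (cy, cx - 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            q.append((ny, nx))
                if len(comp) >= min_size:
                    for cy, cx in comp:
                        out[cy, cx] = 1
    return out

# A long thin "crack" plus isolated noise pixels.
mask = np.zeros((20, 20), dtype=np.uint8)
mask[5, 2:18] = 1                  # crack: 16 connected pixels
mask[10, 3] = mask[15, 15] = 1     # salt noise
cleaned = remove_small_components(mask, min_size=5)
```

    Cracks are elongated and connected, so a size filter on components discards isolated noise responses while keeping the crack intact.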

  3. Computer vision applications for coronagraphic optical alignment and image processing

    E-print Network

    Savransky, Dmitry; Poyneer, Lisa A; Macintosh, Bruce A; 10.1364/AO.52.003394

    2013-01-01

    Modern coronagraphic systems require very precise alignment between optical components and can benefit greatly from automated image processing. We discuss three techniques commonly employed in the fields of computer vision and image analysis as applied to the Gemini Planet Imager, a new facility instrument for the Gemini South Observatory. We describe how feature extraction and clustering methods can be used to aid in automated system alignment tasks, and also present a search algorithm for finding regular features in science images used for calibration and data processing. Along with discussions of each technique, we present our specific implementation and show results of each one in operation.

  4. An overview of the JPEG 2000 still image compression standard (Signal Processing: Image Communication 17 (2002) 3-48)

    E-print Network

    Guerrini, Carla

    2002-01-01

    In 1996, the JPEG committee began to investigate possibilities for a new still image compression standard to serve current and future applications. This initiative, which was named JPEG 2000, has resulted…

  5. Hardcopy Image Barcodes via Block Error Diffusion (IEEE Trans. on Image Processing, revised)

    E-print Network

    Evans, Brian L.

    Hardcopy image barcodes are produced via a block error diffusion process that is explicitly modeled. We refer to the encoded printed version as an image barcode due to its high information capacity. Keywords: security, barcodes. Contact: Prof. Brian L. Evans, 1 University Station C0803, The University of Texas

  6. [Image processing method based on prime number factor layer].

    PubMed

    Fan, Yifang; Yuan, Zhirun

    2004-10-01

    In sports, human body movement data are mainly captured on the playing field, amid the hues and interruptions of a commercial environment, so some difficulties must be surmounted in order to analyze the images. It is clearly not enough just to use grey-image processing. We have applied the characteristics of the prime number function to human body movement images and thus introduce a new method of image processing in this article. When dealing with certain moving images, it yields a better result. PMID:15553856

  7. Embedded processor extensions for image processing

    NASA Astrophysics Data System (ADS)

    Thevenin, Mathieu; Paindavoine, Michel; Letellier, Laurent; Heyrman, Barthélémy

    2008-04-01

    The advent of camera phones marks a new phase in embedded camera sales. By late 2009, the total number of camera phones will exceed that of both conventional and digital cameras shipped since the invention of photography. The use in mobile phones of applications like visiophony, matrix code readers and biometrics requires a high degree of component flexibility that image processors (IPs) have not, to date, been able to provide. For all these reasons, programmable processor solutions have become essential. This paper presents several techniques geared to speeding up image processors. It demonstrates that a twofold gain is possible for the complete image acquisition chain and the enhancement pipeline downstream of the video sensor. Such results confirm the potential of these computing systems for supporting future applications.

  8. Teaching Image Formation by Extended Light Sources: The Use of a Model Derived from the History of Science

    NASA Astrophysics Data System (ADS)

    Dedes, Christos; Ravanis, Konstantinos

    2009-01-01

    This research, carried out in Greece on pupils aged 12-16, focuses on the transformation of their representations concerning light emission and image formation by extended light sources. The instructive process was carried out in two stages, each one having a different, distinct target set. During the first stage, the appropriate conflict conditions were created by contrasting the subjects’ predictions with the results of experimental situations inspired by the History of Science, with a view to destabilizing the pupils’ alternative representations. During the second stage, the experimental teaching intervention was carried out; it was based on the geometrical optics model and its parameters were derived from Kepler’s relevant historic experiment. For the duration of this process and within the framework of didactical interactions, an effort was made to reorganize initial limited representations and restructure them at the level of the accepted scientific model. The effectiveness of the intervention was evaluated two weeks later, using experimental tasks which had the same cognitive yet different empirical content with respect to the tasks conducted during the intervention. The results of the study showed that the majority of the subjects accepted the model of geometrical optics, that is, the pupils were able to correctly predict and adequately justify the experimental results based on the principle of punctiform light emission. Educational and research implications are discussed.

  9. Determination of Porosity Content in Composites by Micrograph Image Processing

    NASA Astrophysics Data System (ADS)

    Kite, A. H.; Hsu, D. K.; Barnard, D. J.

    2008-02-01

    This paper describes a method to determine the porosity content of a composite lay-up by processing micrograph images of the laminate. The porosity content of a composite structure is critical to the overall strength and performance of the structure. The determination of the porosity content is often done by the acid digestion method. The acid digestion method requires the use of chemicals and costly equipment that may not be available. The image processing method developed utilizes a free software package to process micrograph images of the test sample. The process can be automated with simple scripts within the free software. The results from the image processing method are shown to correlate well with the acid digestion results.
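
    At its core, determining porosity from a micrograph reduces to thresholding and counting: voids image darker than the surrounding matrix, so the porosity is the dark-pixel area fraction. A minimal NumPy sketch (the threshold value and synthetic image are assumptions for illustration):

```python
import numpy as np

def porosity_fraction(gray, pore_thresh):
    """Porosity estimate: fraction of pixels darker than pore_thresh
    (voids appear darker than resin/fiber in a polished micrograph)."""
    return (gray < pore_thresh).mean()

# Synthetic micrograph: bright matrix (200) with two dark 100-px voids.
img = np.full((100, 100), 200.0)
img[10:20, 10:20] = 30.0
img[50:60, 70:80] = 30.0
frac = porosity_fraction(img, pore_thresh=100)
```

    Here 200 of 10,000 pixels are pores, so the estimate is 2% porosity; on real micrographs the threshold would be chosen from the grey-level histogram.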

  10. Hyperspectral imaging in medicine: image pre-processing problems and solutions in Matlab.

    PubMed

    Koprowski, Robert

    2015-11-01

    The paper presents problems and solutions related to hyperspectral image pre-processing. New methods of preliminary image analysis are proposed. The paper shows problems that occur in Matlab when trying to analyse this type of image. Moreover, new methods are discussed which provide source code in Matlab that can be used in practice without any licensing restrictions. The proposed application and sample results of hyperspectral image analysis are also presented. PMID:25676816

  11. Application of image processing for terahertz time domain spectroscopy imaging quantitative detection

    NASA Astrophysics Data System (ADS)

    Li, Li-juan; Wang, Sheng; Ren, Jiao-jiao; Zhou, Ming-xing; Zhao, Duo

    2015-03-01

    Following the nondestructive testing principle of terahertz time-domain spectroscopy imaging, digital image processing techniques are applied to the images and two-dimensional data collected by a terahertz time-domain spectroscopy system. A range of processing methods is used, including selection of regions of interest, contrast enhancement, and edge detection, so that defects can be detected. In this paper, Matlab programming is used for terahertz defect recognition: pixels are counted to determine the defect area, border length, roundness, and diameter. Qualitative analysis and quantitative calculation with Matlab image processing show that this method of measuring the geometric dimensions of defects in a sample gives good results.
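
    The pixel-counting measurements mentioned in the abstract (defect area, equivalent diameter, roundness) can be sketched as below. This is a generic NumPy illustration, not the paper's Matlab code; the roundness definition and pixel size are assumptions for the example:

```python
import numpy as np

def defect_metrics(mask, pixel_size_mm):
    """Area, equivalent diameter and a simple roundness estimate from
    a binary defect mask, by pixel counting."""
    area_px = int(mask.sum())
    area = area_px * pixel_size_mm ** 2
    eq_diameter = 2.0 * np.sqrt(area / np.pi)
    # Roundness: defect area over the area of its bounding circle.
    ys, xs = np.nonzero(mask)
    extent = max(np.ptp(ys), np.ptp(xs)) + 1
    roundness = area_px / (np.pi * (extent / 2.0) ** 2)
    return area, eq_diameter, roundness

# Synthetic circular defect of radius 10 px on a 0.1 mm pixel grid.
yy, xx = np.mgrid[0:64, 0:64]
mask = (yy - 32) ** 2 + (xx - 32) ** 2 <= 10 ** 2
area, diam, roundness = defect_metrics(mask, pixel_size_mm=0.1)
```

    For the synthetic disk the area comes out near pi * (1 mm)^2 and the roundness near 1, as expected; irregular defects would score lower roundness.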

  12. Image pre-processing for optimizing automated photogrammetry performances

    NASA Astrophysics Data System (ADS)

    Guidi, G.; Gonizzi, S.; Micoli, L. L.

    2014-05-01

    The purpose of this paper is to analyze how optical pre-processing with polarizing filters and digital pre-processing with HDR imaging may improve the automated 3D modeling pipeline based on SFM and Image Matching, with special emphasis on optically non-cooperative surfaces of shiny or dark materials. Because of the automatic detection of homologous points, the presence of highlights due to shiny materials, or of nearly uniform dark patches produced by low-reflectance materials, may produce erroneous matching involving wrong 3D point estimations, and consequently holes and topological errors on the mesh generated from the associated dense 3D cloud. This is due to the limited dynamic range of the 8-bit digital images that are matched with one another to generate 3D data. The same 256 levels can be employed more usefully if the actual dynamic range is compressed, avoiding luminance clipping in the darker and lighter image areas. Such an approach is considered here using both optical filtering and HDR processing with tone mapping, with experimental evaluation on different Cultural Heritage objects characterized by non-cooperative optical behavior. Three test images of each object have been captured from different positions, changing the shooting conditions (filter/no-filter) and the image processing (no processing/HDR processing), in order to have the same 3 camera orientations with different optical and digital pre-processing, and applying the same automated process to each photo set.
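
    The dynamic-range-compression idea can be illustrated with a minimal global tone-mapping operator. This logarithmic mapping is a generic NumPy sketch, not the specific HDR/tone-mapping pipeline used by the authors:

```python
import numpy as np

def tonemap_log(hdr, out_levels=256):
    """Global logarithmic tone mapping: compress a high-dynamic-range
    radiance map into out_levels display values without clipping."""
    hdr = np.asarray(hdr, dtype=float)
    compressed = np.log1p(hdr - hdr.min())
    compressed /= compressed.max()
    return np.round(compressed * (out_levels - 1)).astype(np.uint8)

# Radiances spanning four orders of magnitude still fit in 8 bits,
# with no value clipped at either end of the scale.
hdr = np.array([0.01, 0.1, 1.0, 10.0, 100.0])
ldr = tonemap_log(hdr)
```

    The log keeps the ordering of luminances while spreading them over all 256 levels, which is exactly what lets the feature detector see texture in both shadows and highlights.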

  13. Utilization Of Spatial Self-Similarity In Medical Image Processing

    NASA Astrophysics Data System (ADS)

    Kuklinski, Walter S.

    1987-01-01

    Many current medical image processing algorithms utilize Fourier Transform techniques that represent images as sums of translationally invariant complex exponential basis functions. Selective removal or enhancement of these translationally invariant components can be used to effect a number of image processing operations such as edge enhancement or noise attenuation. An important characteristic of many natural phenomena, including the structures of interest in medical imaging is spatial self-similarity. In this work a filtering technique that represents images as sums of scale invariant self-similar basis functions will be presented. The decomposition of a signal or image into scale invariant components can be accomplished using the Mellin Transform, which diagonalizes changes of scale in a manner analogous to the way the Fourier Transform diagonalizes translation.
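
    The scale-invariance property can be demonstrated with the standard log-resampling construction of the Mellin transform (a 1-D NumPy sketch, not the paper's filtering technique): on a logarithmic axis a change of scale becomes a translation, which the FFT magnitude — the discrete analogue of the Mellin modulus — ignores.

```python
import numpy as np

def mellin_magnitude(samples):
    """Magnitude spectrum of log-resampled data: the discrete analogue
    of |Mellin transform|, invariant under changes of scale (which are
    translations on the log axis)."""
    return np.abs(np.fft.fft(samples))

# Signal sampled on a logarithmic axis u = log x.
u = np.linspace(0, 4, 256, endpoint=False)
f = np.exp(-((u - 1.0) ** 2) / 0.1)  # a feature located at u = 1
f_scaled = np.roll(f, 32)            # a change of scale: shift in log x

m1 = mellin_magnitude(f)
m2 = mellin_magnitude(f_scaled)
```

    In this discrete setting the scale change is a circular shift of an integer number of samples, so the two magnitude spectra agree exactly.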

  14. Land image data processing requirements for the EOS era

    NASA Technical Reports Server (NTRS)

    Wharton, Stephen W.; Newcomer, Jeffrey A.

    1989-01-01

    Requirements are proposed for a hybrid approach to image analysis that combines the functionality of a general-purpose image processing system with the knowledge representation and manipulation capabilities associated with expert systems to improve the productivity of scientists in extracting information from remotely sensed image data. The overall functional objectives of the proposed system are to: (1) reduce the level of human interaction required on a scene-by-scene basis to perform repetitive image processing tasks; (2) allow the user to experiment with ad hoc rules and procedures for the extraction, description, and identification of the features of interest; and (3) facilitate the derivation, application, and dissemination of expert knowledge for target recognition whose scope of application is not necessarily limited to the image(s) from which it was derived.

  15. Dynamic range selection in image processing hardware to maximize SNR while avoiding image saturation

    NASA Astrophysics Data System (ADS)

    Boucher, William B.; Michnovicz, Michael R.

    1999-07-01

    An image processing system has been designed and implemented which can accommodate a wide range of image brightness, ranging in intensity from a dim star to the very bright exhaust plume of a boosting missile. The system seeks to maintain the maximum allowable gain in the image processor while not saturating the image. The implemented system uses two stages. The first is an image processing stage which selects the proper 8 bits out of a 12-bit image using a programmable lookup table; an 8-bit path is an artifact of the chosen image processing hardware. The second is a filter wheel, which attenuates the image intensity to prevent saturation in the image processing system for brighter images. Both the programmable lookup table and the filter wheel can be set in response to the statistics collected for a given target image frame. This paper describes the algorithm used to achieve the required dynamic range compression. In operation, the dynamic range compression algorithm is used in a 'set and forget' fashion: the algorithm is run for the first few frames of target imagery during a mission, and once it has settled, it is switched out and the resultant settings are employed for the remainder of the engagement. This research is funded by the Ballistic Missile Defense Organization and conducted by the Air Force Research Laboratory.
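
    The "select 8 of 12 bits" stage can be modeled as choosing a right-shift from frame statistics: shift as little as possible (maximum gain) while the frame peak still fits in 8 bits. This is an illustrative NumPy sketch of the idea; the actual system implements it with a programmable lookup table and per-mission tuning:

```python
import numpy as np

def select_window(img12, out_bits=8):
    """Pick the smallest right-shift mapping a 12-bit frame onto
    out_bits bits without saturating, then apply it."""
    peak = int(img12.max())
    shift = 0
    while (peak >> shift) > (1 << out_bits) - 1:
        shift += 1
    return shift, (img12 >> shift).astype(np.uint8)

# Dim frame: full gain (no shift). Bright frame: shifted to avoid clipping.
dim = np.array([[10, 200], [3, 180]], dtype=np.uint16)
bright = np.array([[4000, 100], [2500, 900]], dtype=np.uint16)
s_dim, _ = select_window(dim)
s_bright, out_bright = select_window(bright)
```

    The dim frame keeps all 8 low-order bits (shift 0), while the bright frame is shifted until its peak of 4000 fits below 255.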

  16. Onboard processing for future space-borne imaging systems

    NASA Technical Reports Server (NTRS)

    Wellman, J. B.; Norris, D. D.

    1978-01-01

    There is a strong rationale for increasing the rate of information return from imaging class experiments aboard both terrestrial and planetary spacecraft. Future imaging systems will be designed with increased spatial resolution, broader spectral range and more spectral channels (or higher spectral resolution). The data rate implied by these improved performance characteristics can be expected to grow more rapidly than the projected telecommunications capability. One solution to this dilemma is the use of improved onboard data processing. The use of onboard classification processing in a multispectral imager can result in orders of magnitude increase in information transfer for very specific types of imaging tasks. Several of these processing functions are included in the conceptual design of an Infrared Multispectral Imager which would map the spatial distribution of characteristic geologic features associated with deposits of economic minerals.

  17. Subband/Transform MATLAB Functions For Processing Images

    NASA Technical Reports Server (NTRS)

    Glover, D.

    1995-01-01

    SUBTRANS software is a package of routines implementing image-data-processing functions for use with MATLAB(TM) software. It provides the capability to transform image data with block transforms and to produce spatial-frequency subbands of the transformed data. Functions can be cascaded to provide further decomposition into more subbands, and can also be used in image-data-compression systems; for example, transforms are used to prepare data for lossy compression. Written for use in the MATLAB mathematical-analysis environment.
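
    A one-level 2-D Haar block transform is the simplest instance of the subband decomposition described here. The NumPy sketch below is a generic illustration, not the SUBTRANS code:

```python
import numpy as np

def haar_subbands(img):
    """One level of a 2-D Haar block transform: split an image into an
    LL (approximation) subband and LH/HL/HH (detail) subbands."""
    a = img[0::2, 0::2].astype(float)  # top-left of each 2x2 block
    b = img[0::2, 1::2].astype(float)  # top-right
    c = img[1::2, 0::2].astype(float)  # bottom-left
    d = img[1::2, 1::2].astype(float)  # bottom-right
    ll = (a + b + c + d) / 4.0
    lh = (a + b - c - d) / 4.0
    hl = (a - b + c - d) / 4.0
    hh = (a - b - c + d) / 4.0
    return ll, lh, hl, hh

# A constant image puts all its energy in LL; the detail bands vanish,
# which is why such decompositions compress smooth images well.
flat = np.full((8, 8), 10.0)
ll, lh, hl, hh = haar_subbands(flat)
```

    Cascading the same transform on the LL band yields the deeper decompositions the abstract mentions.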

  18. MR-tutor: a program for teaching the interdependence of factors which influence signal intensity in magnetic resonance imaging.

    PubMed

    Posteraro, R H; Blinder, R A; Herfkens, R J

    1989-01-01

    Magnetic resonance (MR) imaging is a valuable diagnostic radiologic procedure. The appearance of tissues on an MR scan depends on a complex relationship among a number of variables. We have written a computer program which teaches students of magnetic resonance imaging the interdependence of these variables and how they affect the appearance of tissues on MR images. The program is written in BASIC for IBM and compatible computers. A listing of the program appears in the Appendix of this article. PMID:2680054

  19. Teaching Cost-Conscious Medicine: Impact of a Simple Educational Intervention on Appropriate Abdominal Imaging at a Community-Based Teaching Hospital

    PubMed Central

    Covington, Matthew F.; Agan, Donna L.; Liu, Yang; Johnson, John O.; Shaw, David J.

    2013-01-01

    Background Rising costs pose a major threat to US health care. Residency programs are being asked to teach residents how to provide cost-conscious medical care. Methods An educational intervention incorporating the American College of Radiology appropriateness criteria with lectures on cost-consciousness and on the actual hospital charges for abdominal imaging was implemented for residents at Scripps Mercy Hospital in San Diego, CA. We hypothesized that residents would order fewer abdominal imaging examinations for patients with complaints of abdominal pain after the intervention. We analyzed the type and number of abdominal imaging studies completed for patients admitted to the inpatient teaching service with primary abdominal complaints for 18 months before (738 patients) and 12 months following the intervention (632 patients). Results There was a significant reduction in mean abdominal computed tomography (CT) scans per patient (from 1.7 to 1.4 studies per patient, P < .001) and total abdominal radiology studies per patient (from 3.1 to 2.7 studies per patient, P = .02) following the intervention. The avoidance of charges solely due to the reduction in abdominal CT scans following the intervention was $129 per patient or $81,528 in total. Conclusions A simple educational intervention appeared to change the radiologic test-ordering behavior of internal medicine residents. Widespread adoption of similar interventions by residency programs could result in significant savings for the health care system. PMID:24404274

  20. Design and implementation of non-linear image processing functions for CMOS image sensor

    NASA Astrophysics Data System (ADS)

    Musa, Purnawarman; Sudiro, Sunny A.; Wibowo, Eri P.; Harmanto, Suryadi; Paindavoine, Michel

    2012-11-01

    Today, solid state image sensors are used in many applications like mobile phones, video surveillance systems, embedded medical imaging and industrial vision systems. These image sensors require the integration in the focal plane (or near the focal plane) of complex image processing algorithms. Such devices must meet the constraints related to the quality of acquired images, speed and performance of embedded processing, as well as low power consumption. To achieve these objectives, low-level analog processing allows extracting the useful information in the scene directly. For example, an edge detection step followed by a local maxima extraction will facilitate high-level processing like object pattern recognition in a visual scene. Our goal was to design an intelligent image sensor prototype achieving high-speed image acquisition and non-linear image processing (like local minima and maxima calculations). For this purpose, we present in this article the design and test of a 64×64 pixel image sensor built in a standard 0.35 µm CMOS technology including non-linear image processing. The architecture of our sensor, named nLiRIC (non-Linear Rapid Image Capture), is based on the implementation of an analog Minima/Maxima Unit. This MMU calculates the minimum and maximum values (non-linear functions), in real time, in a 2×2 pixel neighbourhood. Each MMU needs 52 transistors and the pitch of one pixel is 40×40 µm. The total area of the 64×64 pixel array is 12.5 mm². Our tests have shown the validity of the main functions of our new image sensor, like fast image acquisition (10K frames per second) and minima/maxima calculations in less than one ms.
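
    A software model of the Minima/Maxima Unit's function can be written in a few lines. The sketch below takes the 2×2 neighbourhoods to be non-overlapping blocks — an assumption made for the example, since the abstract does not say whether the hardware windows overlap:

```python
import numpy as np

def minmax_2x2(img):
    """Minimum and maximum over non-overlapping 2x2 neighbourhoods,
    modelling the non-linear function the analog MMU computes."""
    h, w = img.shape
    blocks = img.reshape(h // 2, 2, w // 2, 2)
    return blocks.min(axis=(1, 3)), blocks.max(axis=(1, 3))

frame = np.array([[1, 2, 9, 4],
                  [5, 0, 3, 7],
                  [8, 8, 2, 2],
                  [6, 1, 2, 5]])
mins, maxs = minmax_2x2(frame)
```

    Local maxima feed directly into the edge/feature extraction the abstract describes, while the min/max pair per block is the basis of simple morphological erosion/dilation.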

  1. Digital processing of stereoscopic image pairs.

    NASA Technical Reports Server (NTRS)

    Levine, M. D.

    1973-01-01

    The problem under consideration is concerned with scene analysis during robot navigation on the surface of Mars. In this mode, the world model of the robot must be continuously updated to include sightings of new obstacles and scientific samples. In order to describe the content of a particular scene, it is first necessary to segment it into known objects. One technique for accomplishing this segmentation is by analyzing the pair of images produced by the stereoscopic cameras mounted on the robot. A heuristic method is presented for determining the range for each point in the two-dimensional scene under consideration. The method is conceptually based on a comparison of corresponding points in the left and right images of the stereo pair. However, various heuristics which are adaptive in nature are used to make the algorithm both efficient and accurate. Examples are given of the use of this so-called range picture for the purpose of scene segmentation.
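
    The correspondence search at the heart of this approach can be reduced to its textbook core: block matching by sum of absolute differences (SAD) along a horizontal scanline. This NumPy sketch illustrates the principle only, not the paper's adaptive heuristics:

```python
import numpy as np

def disparity_sad(left, right, y, x, patch=3, max_d=8):
    """Disparity at (y, x): compare a patch around (y, x) in the left
    image against horizontally shifted patches in the right image and
    keep the shift with the smallest sum of absolute differences."""
    h = patch // 2
    ref = left[y - h:y + h + 1, x - h:x + h + 1]
    best_sad, best_d = None, 0
    for d in range(max_d + 1):
        if x - d - h < 0:
            break
        cand = right[y - h:y + h + 1, x - d - h:x - d + h + 1]
        sad = np.abs(ref - cand).sum()
        if best_sad is None or sad < best_sad:
            best_sad, best_d = sad, d
    return best_d

# Synthetic stereo pair: the right view is the left view shifted 3 px,
# so the matcher should report disparity 3.
rng = np.random.default_rng(3)
left = rng.random((20, 20))
right = np.roll(left, -3, axis=1)
d = disparity_sad(left, right, y=10, x=10)
```

    With known camera geometry, the recovered disparity converts directly into range, producing the "range picture" used for scene segmentation.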

  2. Processing of polarametric SAR images. Final report

    SciTech Connect

    Warrick, A.L.; Delaney, P.A.

    1995-09-01

    The objective of this work was to develop a systematic method of combining multifrequency polarized SAR images. It is shown that the traditional methods of correlation, hard targets, and template matching fail to produce acceptable results. Hence, a new algorithm was developed and tested. The new approach combines the three traditional methods with an interpolation method. An example is shown that demonstrates the new algorithm's performance. The results are summarized, and suggestions for future research are presented.

  3. Quantum Noise in Multipixel Image Processing

    E-print Network

    Nicolas Treps; Vincent Delaubert; Agnes Maitre; Jean-Michel Courty; Claude Fabre

    2004-07-29

    We consider the general problem of the quantum noise in a multipixel measurement of an optical image. We first give a precise criterion to characterize intrinsically single-mode and multimode light. Then, using a transverse mode decomposition, for each type of possible linear combination of the pixels' outputs we give the exact expression of the detection mode, i.e. the mode carrying the noise. We also give the only way to reduce the noise in one or several simultaneous measurements.

  4. Fingerprint pattern restoration by digital image processing techniques.

    PubMed

    Wen, Che-Yen; Yu, Chiu-Chung

    2003-09-01

    Fingerprint evidence plays an important role in solving criminal problems. However, defective (lacking information needed for completeness) or contaminated (undesirable information included) fingerprint patterns make identifying and recognizing processes difficult. Unfortunately, this is the usual case. In the recognizing process (enhancement of patterns, or elimination of "false alarms" so that a fingerprint pattern can be searched in the Automated Fingerprint Identification System (AFIS)), chemical and physical techniques have been proposed to improve pattern legibility. In the identifying process, a fingerprint examiner can enhance contaminated (but not defective) fingerprint patterns under guidelines provided by the Scientific Working Group on Friction Ridge Analysis, Study and Technology (SWGFAST), the Scientific Working Group on Imaging Technology (SWGIT), and an AFIS working group within the National Institute of Justice. Recently, image processing techniques have been successfully applied in forensic science. For example, we have applied image enhancement methods to improve the legibility of digital images such as fingerprints and vehicle plate numbers. In this paper, we propose a novel digital image restoration technique based on the AM (amplitude modulation)-FM (frequency modulation) reaction-diffusion method to restore defective or contaminated fingerprint patterns. This method shows its potential application to fingerprint pattern enhancement in the recognizing process (but not for the identifying process). Synthetic and real images are used to show the capability of the proposed method. The results of enhancing fingerprint patterns by the manual process and our method are evaluated and compared. PMID:14535661

  5. Batch image processing in synaptic membrane biology Donal Stewart1

    E-print Network

    Millar, Andrew J.

    Batch image processing in synaptic membrane biology Donal Stewart1 donal.stewart@ed.ac.uk Stephen. The analysis of the raw time series image data requires significant work to obtain these final synapse profiles (bottom). The intensity profile shown is the normalised mean intensity over all ROIs in a single

  6. PROCESSING JPEG-COMPRESSED IMAGES Ricardo L. de Queiroz

    E-print Network

    de Queiroz, Ricardo L.

    PROCESSING JPEG-COMPRESSED IMAGES Ricardo L. de Queiroz Xerox Corporation 800 Phillips Rd, M S 128 of an image in the JPEG-compressed domain. The goal is to reduce memory requirements while increasing speed of JPEG basic operations. Techniques are presented for scaling, previewing, rotating, mirroring

  7. SAMPLING THEORY IN SIGNAL AND IMAGE PROCESSING # 2003 SAMPLING PUBLISHING

    E-print Network

    Teschke, Gerd

    SAMPLING THEORY IN SIGNAL AND IMAGE PROCESSING © 2003 SAMPLING PUBLISHING Vol. 3, No. 2, May 2004, Sobolev embedding operator, Tikhonov regularization 2000 AMS Mathematics Subject Classification --- 65J the observation is a noisy and blurred version of the true signal or image. In order to extract the underlying

  8. Automatic Image Capturing and Processing for PetrolWatch

    E-print Network

    New South Wales, University of

    Automatic Image Capturing and Processing for PetrolWatch Yi Fei Dong1 Salil Kanhere1 Chun Tung Chou SOUTH WALES School of Computer Science and Engineering The University of New South Wales Sydney 2052 called PetrolWatch to collect fuel prices from camera images of road-side price board (billboard

  9. Processing ISS Images of Titan's Surface

    NASA Technical Reports Server (NTRS)

    Perry, Jason; McEwen, Alfred; Fussner, Stephanie; Turtle, Elizabeth; West, Robert; Porco, Carolyn; Knowles, Ben; Dawson, Doug

    2005-01-01

    One of the primary goals of the Cassini-Huygens mission, in orbit around Saturn since July 2004, is to understand the surface and atmosphere of Titan. Surface investigations are primarily accomplished with RADAR, the Visual and Infrared Mapping Spectrometer (VIMS), and the Imaging Science Subsystem (ISS) [1]. The latter two use methane "windows", regions in Titan's reflectance spectrum where its atmosphere is most transparent, to observe the surface. For VIMS, this produces clear views of the surface near 2 and 5 microns [2]. ISS uses a narrow continuum band filter (CB3) at 938 nanometers. While these methane windows provide our best views of the surface, the images produced are not as crisp as ISS images of satellites like Dione and Iapetus [3] due to the atmosphere. Given a reasonable estimate of contrast (approx. 30%), the apparent resolution of features is approximately 5 pixels due to the effects of the atmosphere and the Modulation Transfer Function of the camera [1,4]. The atmospheric haze also reduces contrast, especially with increasing emission angles [5].

  10. Image processing of underwater multispectral imagery

    USGS Publications Warehouse

    Zawada, D.G.

    2003-01-01

    Capturing in situ fluorescence images of marine organisms presents many technical challenges. The effects of the medium, as well as the particles and organisms within it, are intermixed with the desired signal. Methods for extracting and preparing the imagery for analysis are discussed in reference to a novel underwater imaging system called the low-light-level underwater multispectral imaging system (LUMIS). The instrument supports both uni- and multispectral collections, each of which is discussed in the context of an experimental application. In unispectral mode, LUMIS was used to investigate the spatial distribution of phytoplankton. A thin sheet of laser light (532 nm) induced chlorophyll fluorescence in the phytoplankton, which was recorded by LUMIS. Inhomogeneities in the light sheet led to the development of a beam-pattern-correction algorithm. Separating individual phytoplankton cells from a weak background fluorescence field required a two-step procedure consisting of edge detection followed by a series of binary morphological operations. In multispectral mode, LUMIS was used to investigate the bio-assay potential of fluorescent pigments in corals. Problems with the commercial optical-splitting device produced nonlinear distortions in the imagery. A tessellation algorithm, including an automated tie-point-selection procedure, was developed to correct the distortions. Only pixels corresponding to coral polyps were of interest for further analysis. Extraction of these pixels was performed by a dynamic global-thresholding algorithm.
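    The two-step cell-extraction procedure (edge detection followed by binary morphological operations) can be sketched in a few lines of NumPy. This is a generic illustration of the approach, not the LUMIS pipeline itself; the threshold and the synthetic image are placeholders.

```python
import numpy as np

def dilate(mask):
    """Binary dilation with a 3x3 structuring element (pure NumPy;
    np.roll wraps at the borders, harmless here because the features
    sit away from the image edge)."""
    out = mask.copy()
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
    return out

def erode(mask):
    """Binary erosion, by duality: complement of the dilation of the
    complement."""
    return ~dilate(~mask)

def segment_cells(img, edge_thresh, rounds=2):
    """Step 1: edge detection via gradient magnitude. Step 2: a binary
    morphological closing (dilations then erosions) that fills the
    detected cell outlines into solid regions."""
    gy, gx = np.gradient(img.astype(float))
    mask = np.hypot(gx, gy) > edge_thresh
    for _ in range(rounds):
        mask = dilate(mask)
    for _ in range(rounds):
        mask = erode(mask)
    return mask

# One bright synthetic "cell" on a dark background field.
img = np.zeros((16, 16))
img[5:10, 5:10] = 100.0
mask = segment_cells(img, edge_thresh=20)
# mask is solid over the cell: True at its centre, False in the background
```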

  11. Computer Vision, Graphics, and Image Processing 40 (1987) 250-266

    E-print Network

    Murray, David

    1987-01-01


  12. IEEE TRANSACTIONS ON IMAGE PROCESSING 1 Multidimensional Orthogonal Filter Bank

    E-print Network

    Do, Minh N.

    IEEE TRANSACTIONS ON IMAGE PROCESSING 1 Multidimensional Orthogonal Filter Bank Characterization, IEEE Abstract-- We present a complete characterization and design of orthogonal IIR and FIR filter orthogonal filter banks cannot be extended to higher dimensions directly due to the lack

  13. Application of digital image processing techniques to astronomical imagery 1977

    NASA Technical Reports Server (NTRS)

    Lorre, J. J.; Lynn, D. J.

    1978-01-01

    Nine specific techniques, or combinations of techniques, developed for applying digital image processing technology to existing astronomical imagery are described. Photoproducts are included to illustrate the results of each of these investigations.

  14. IEEE TRANSACTIONS ON IMAGE PROCESSING 1 Multidimensional Multichannel FIR Deconvolution

    E-print Network

    Do, Minh N.

    IEEE TRANSACTIONS ON IMAGE PROCESSING 1 Multidimensional Multichannel FIR Deconvolution Using Gröbner Bases for general multidimensional multichannel deconvolution with finite impulse response (FIR) convolution and deconvolution filters using Gröbner bases. Previous work formulates the problem of multichannel FIR

  15. An image processing of a Raphael's portrait of Leonardo

    E-print Network

    Sparavigna, Amelia Carolina

    2011-01-01

    In one of his paintings, the School of Athens, Raphael depicted Leonardo da Vinci as the philosopher Plato. Some image processing tools can help us compare this portrait with two portraits of Leonardo that are considered to be self-portraits.

  16. Image-Processing Techniques for the Creation of Presentation-Quality Astronomical Images

    E-print Network

    Rector, T A; Frattare, L M; English, J; Puuohau-Pummill, K

    2004-01-01

    The quality of modern astronomical data, the power of modern computers and the agility of current image-processing software enable the creation of high-quality images in a purely digital form. The combination of these technological advancements has created a new ability to make color astronomical images. And in many ways it has led to a new philosophy towards how to create them. A practical guide is presented on how to generate astronomical images from research data with powerful image-processing programs. These programs use a layering metaphor that allows for an unlimited number of astronomical datasets to be combined in any desired color scheme, creating an immense parameter space to be explored using an iterative approach. Several examples of image creation are presented. A philosophy is also presented on how to use color and composition to create images that simultaneously highlight scientific detail and are aesthetically appealing. This philosophy is necessary because most datasets do not correspond to t...

  17. [From brain imaging to good teaching? Implications from neuroscience for research on learning and instruction].

    PubMed

    Stubenrauch, Christa; Krinzinger, Helga; Konrad, Kerstin

    2014-07-01

    Psychiatric disorders in childhood and adolescence, in particular attention deficit disorder and specific learning disorders like developmental dyslexia and developmental dyscalculia, affect academic performance and learning at school. Recent advances in neuroscientific research have incited an intensive debate, both among the general public and in the field of educational and instructional science, as to whether and to what extent these new findings in the field of neuroscience might be of importance for school-related learning and instruction. In this review, we first summarize neuroscientific findings related to the development of attention, working memory and executive functions in typically developing children and then evaluate their relevance for school-related learning. We present an overview of neuroimaging studies of specific learning disabilities such as developmental dyslexia and developmental dyscalculia, and critically discuss their practical implications for educational and teaching practice, teacher training, early diagnosis as well as prevention and disorder-specific therapy. We conclude that the new interdisciplinary field of neuroeducation cannot be expected to provide direct innovative educational applications (e.g., teaching methods). Rather, the future potential of neuroscience lies in creating a deeper understanding of the underlying cognitive mechanisms and pathomechanisms of learning processes and learning disorders. PMID:25005903

  18. ELAS: A powerful, general purpose image processing package

    NASA Technical Reports Server (NTRS)

    Walters, David; Rickman, Douglas

    1991-01-01

    ELAS is a software package which has been utilized as an image processing tool for more than a decade. It has been the source of several commercial packages. Now available on UNIX workstations, it is a very powerful, flexible set of software. Applications at Stennis Space Center have included a very wide range of areas, including medicine, forestry, geology, ecological modeling, and sonar imagery. It remains one of the most powerful image processing packages available, either commercially or in the public domain.

  19. Digital image processing for the earth resources technology satellite data.

    NASA Technical Reports Server (NTRS)

    Will, P. M.; Bakis, R.; Wesley, M. A.

    1972-01-01

    This paper discusses the problems of digital processing of the large volumes of multispectral image data that are expected to be received from the ERTS program. The correction of geometric and radiometric distortions is discussed, and a byte-oriented implementation is proposed. CPU timing estimates are given for a System/360 Model 67 and show that a processing throughput of 1000 image sets per week is feasible.
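    At its core, radiometric correction of line-scanned multispectral data is a per-detector gain and offset applied byte-by-byte. A hedged sketch of the idea (the detector-to-line assignment and the calibration values below are invented for illustration; ERTS processing used its own calibration tables):

```python
import numpy as np

def radiometric_correct(img, gains, offsets):
    """Per-detector radiometric correction for a line-scanned image:
    detector d records line r when r % len(gains) == d, and its raw
    byte value v is corrected to gain*v + offset, clipped back to the
    0-255 byte range (a byte-oriented scheme)."""
    out = img.astype(float)
    n = len(gains)
    for d in range(n):
        out[d::n] = gains[d] * out[d::n] + offsets[d]
    return np.clip(np.rint(out), 0, 255).astype(np.uint8)

raw = np.full((6, 4), 100, dtype=np.uint8)
corrected = radiometric_correct(raw, gains=[1.0, 1.1, 0.9], offsets=[0, -5, 5])
# rows cycle through the 3 detectors: 100, 105, 95, 100, 105, 95
```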

  20. A quantum mechanics-based framework for image processing and its application to image segmentation

    NASA Astrophysics Data System (ADS)

    Youssry, Akram; El-Rafei, Ahmed; Elramly, Salwa

    2015-10-01

    Quantum mechanics provides the physical laws governing microscopic systems. A novel and generic framework based on quantum mechanics for image processing is proposed in this paper. The basic idea is to map each image element to a quantum system. This enables the utilization of the quantum mechanics powerful theory in solving image processing problems. The initial states of the image elements are evolved to the final states, controlled by an external force derived from the image features. The final states can be designed to correspond to the class of the element providing solutions to image segmentation, object recognition, and image classification problems. In this work, the formulation of the framework for a single-object segmentation problem is developed. The proposed algorithm based on this framework consists of four major steps. The first step is designing and estimating the operator that controls the evolution process from image features. The states associated with the pixels of the image are initialized in the second step. In the third step, the system is evolved. Finally, a measurement is performed to determine the output. The presented algorithm is tested on noiseless and noisy synthetic images as well as natural images. The average of the obtained results is 98.5 % for sensitivity and 99.7 % for specificity. A comparison with other segmentation algorithms is performed showing the superior performance of the proposed method. The application of the introduced quantum-based framework to image segmentation demonstrates high efficiency in handling different types of images. Moreover, it can be extended to multi-object segmentation and utilized in other applications in the fields of signal and image processing.
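    The four steps can be illustrated with a deliberately simplified, quantum-inspired sketch in which each pixel maps to a single qubit. The feature-to-angle operator below is our own illustrative choice, not the operator-estimation procedure of the paper.

```python
import numpy as np

def quantum_segment(img, scale=np.pi):
    """Minimal sketch of the four steps: (1) design the operator by
    mapping each pixel feature (here, normalised intensity) to a
    rotation angle theta; (2) initialise each pixel's qubit in |0>;
    (3) evolve with R_y(theta), so the amplitude of |1> becomes
    sin(theta/2); (4) "measure": the probability of reading |1> is
    sin^2(theta/2), thresholded at 0.5 to form the object mask."""
    theta = scale * (img - img.min()) / (img.max() - img.min() + 1e-12)
    amp1 = np.sin(theta / 2.0)      # amplitude of |1> after evolution
    prob1 = amp1 ** 2               # measurement probability of |1>
    return prob1 > 0.5

img = np.zeros((8, 8))
img[2:6, 2:6] = 200.0               # one bright object
mask = quantum_segment(img)
# the bright object maps to theta = pi, P(|1>) = 1, so it lands in the mask
```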

  1. The design of a distributed image processing and dissemination system

    SciTech Connect

    Rafferty, P.; Hower, L.

    1990-01-01

    The design and implementation of a distributed image processing and dissemination system was undertaken and accomplished as part of a prototype communication and intelligence (CI) system, the contingency support system (CSS), which is intended to support contingency operations of the Tactical Air Command. The system consists of six (6) Sun 3/180C workstations with integrated ITEX image processors and three (3) 3/50 diskless workstations located at four (4) system nodes (INEL, base, and mobiles). All 3/180C workstations are capable of image system server functions, whereas the 3/50s are image system clients only. Distribution is accomplished via both local and wide area networks using standard Defense Data Network (DDN) protocols (i.e., TCP/IP, et al.) and Defense Satellite Communication Systems (DSCS) compatible SHF Transportable Satellite Earth Terminals (TSET). Image applications utilize Sun's Remote Procedure Call (RPC) to facilitate the image system client and server relationships. The system provides functions to acquire, display, annotate, process, transfer, and manage images via an icon, panel, and menu oriented SunView(TM)-based user interface. Image spatial resolution is 512 × 480 with 8 bits/pixel black and white and 12/24 bits/pixel color depending on system configuration. Compression is used during various image display and transmission functions to reduce the dynamic range of image data to 12/6/3/2 bits/pixel depending on the application. Image acquisition is accomplished in real-time or near-real-time by special purpose ITEX image hardware. As a result, all image displays are highly interactive with attention given to subsecond response time. 3 refs., 7 figs.

  2. Detecting jaundice by using digital image processing

    NASA Astrophysics Data System (ADS)

    Castro-Ramos, J.; Toxqui-Quitl, C.; Villa Manriquez, F.; Orozco-Guillen, E.; Padilla-Vivanco, A.; Sánchez-Escobar, JJ.

    2014-03-01

    When strong jaundice presents, babies or adults must undergo clinical exams such as the "serum bilirubin" test, which can be traumatic for patients. Jaundice often presents in liver diseases such as hepatitis or liver cancer. In order to avoid additional trauma, we propose to detect jaundice (icterus) in newborns or adults by using a painless method. By acquiring digital colour images of the palms, soles and forehead, we analyze RGB attributes and diffuse reflectance spectra as parameters to characterize patients with or without jaundice, and we correlate those parameters with the level of bilirubin. By applying a support vector machine we distinguish between healthy and sick patients.
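    The classification step can be sketched with a linear decision rule over mean-RGB skin features. Below, a simple perceptron stands in for the paper's support vector machine, and the feature values are synthetic placeholders, not clinical measurements.

```python
import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=1000):
    """Learn a linear decision rule on mean-RGB skin features
    (a perceptron stand-in for the SVM used in the paper)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        errors = 0
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:   # misclassified sample
                w += lr * yi * xi
                b += lr * yi
                errors += 1
        if errors == 0:                  # training set linearly separated
            break
    return w, b

def classify(w, b, X):
    """+1 = jaundiced, -1 = healthy."""
    return np.where(X @ w + b > 0, 1, -1)

# Mean RGB (0-1) of skin patches: jaundiced skin modelled as yellower,
# i.e. relatively high R and G but low B (synthetic values).
X = np.array([[0.80, 0.60, 0.55],   # healthy
              [0.75, 0.58, 0.50],   # healthy
              [0.85, 0.75, 0.30],   # jaundiced
              [0.88, 0.78, 0.25]])  # jaundiced
y = np.array([-1, -1, 1, 1])
w, b = train_perceptron(X, y)
# classify(w, b, X) reproduces y on this toy training set
```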

  3. An Image Processing Approach to Linguistic Translation

    NASA Astrophysics Data System (ADS)

    Kubatur, Shruthi; Sreehari, Suhas; Hegde, Rajeshwari

    2011-12-01

    The art of translation is as old as written literature. Developments since the Industrial Revolution have influenced the practice of translation, nurturing schools, professional associations, and standards. In this paper, we propose a method of translating typed Kannada text (taken as an image) into its equivalent English text. The National Instruments (NI) Vision Assistant (version 8.5) has been used for Optical Character Recognition (OCR). We developed a new way of transliteration (which we call NIV transliteration) to simplify the training of characters. Also, we built a special type of dictionary for the purpose of translation.

  4. Reducing the absorbed dose in analogue radiography of infant chest images by improving the image quality, using image processing techniques.

    TOXLINE Toxicology Bibliographic Information

    Karimian A; Yazdani S; Askari MA

    2011-09-01

    Radiographic inspection is one of the most widely employed medical testing methods. Because of the poor contrast and high unsharpness of radiographic film images, converting radiographs to a digital format and applying digital image processing is the best way to enhance image quality and assist the interpreter in evaluation. In this research work, radiographic films of 70 infant chest images with different sizes of defects were selected. To digitise the chest images and process them, two classes of techniques were used: (i) spatial-domain and (ii) frequency-domain methods. The MATLAB environment was selected for processing in the digital format. Our results showed that by using these two techniques, defects with small dimensions become detectable. Therefore, these suggested techniques may help medical specialists to diagnose defects in the primary stages and help to prevent repeat X-ray examinations of paediatric patients.
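    The two classes of techniques can be illustrated with one representative of each: histogram equalization in the spatial domain and high-frequency emphasis in the frequency domain. This is a generic sketch, not the study's actual parameterisation; the cutoff, boost and synthetic image are placeholders.

```python
import numpy as np

def histogram_equalize(img):
    """Spatial-domain enhancement: spread the grey-level histogram of a
    low-contrast 8-bit radiograph over the full 0-255 range."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())   # normalise to [0, 1]
    return (cdf[img] * 255).astype(np.uint8)

def highpass_sharpen(img, cutoff=0.1, boost=1.0):
    """Frequency-domain enhancement: add back a high-pass filtered
    version of the image (high-frequency emphasis) to sharpen edges."""
    F = np.fft.fftshift(np.fft.fft2(img.astype(float)))
    h, w = img.shape
    yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    radius = np.hypot(yy / (h / 2), xx / (w / 2))       # normalised frequency
    high = np.real(np.fft.ifft2(np.fft.ifftshift(F * (radius > cutoff))))
    return np.clip(img + boost * high, 0, 255).astype(np.uint8)

# Low-contrast synthetic "radiograph": grey levels squeezed into 100-140.
img = (100 + 40 * np.random.default_rng(1).random((64, 64))).astype(np.uint8)
eq = histogram_equalize(img)
sharp = highpass_sharpen(img)
# eq spans (nearly) the full 0-255 range, unlike img
```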

  5. Digital processing of side-scan sonar data with the Woods Hole image processing system software

    USGS Publications Warehouse

    Paskevich, Valerie F.

    1992-01-01

    Since 1985, the Branch of Atlantic Marine Geology has been involved in collecting, processing, and digitally mosaicking high- and low-resolution side-scan sonar data. Recent development of a UNIX-based image-processing software system includes a series of task-specific programs for processing side-scan sonar data. This report describes the steps required to process the collected data and to produce an image that has equal along- and across-track resol

  6. Optimal filtering of solar images using soft morphological processing techniques

    NASA Astrophysics Data System (ADS)

    Marshall, S.; Fletcher, L.; Hough, K.

    2006-10-01

    Context: CCD images obtained by space-based astronomy and solar physics are frequently spoiled by galactic and solar cosmic rays, and by particles in the Earth's radiation belt, which produce an overlaid, often saturated, speckle. Aims: We describe the development and application of a new image-processing technique for the removal of this noise source, and apply it to SOHO/LASCO coronagraph images. Methods: We employ soft morphological filters, a branch of non-linear image processing originating from the field of mathematical morphology, which are particularly effective for noise removal. Results: The soft morphological filters result in a significant improvement in image quality and perform significantly better than other currently existing methods based on frame comparison, thresholding, or simple morphologies. Conclusions: This is a promising and adaptable technique that should be extendable to other space-based solar and astronomy datasets.
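    The flavour of the method can be conveyed with a rank-order filter, the simplest member of the soft-morphology family: instead of the hard minimum of classical erosion, take the rank-th smallest value in the neighbourhood. This is an illustrative simplification (full soft morphology also weights the structuring-element core), not the filter tuned in the paper.

```python
import numpy as np

def soft_erode(img, rank=2):
    """Rank-order ("soft" morphological) filter over 3x3 neighbourhoods:
    take the rank-th smallest value instead of the hard minimum. This
    removes saturated cosmic-ray speckle while being gentler on real
    structure than a hard erosion."""
    layers = []
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            layers.append(np.roll(np.roll(img, dy, axis=0), dx, axis=1))
    ordered = np.sort(np.stack(layers), axis=0)   # per-pixel sorted neighbourhood
    return ordered[rank]

# Smooth synthetic corona with one saturated cosmic-ray pixel.
img = np.full((9, 9), 50.0)
img[4, 4] = 255.0
clean = soft_erode(img)
# the speckle at (4, 4) is replaced by the background value 50
```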

  7. High Dynamic Range Processing for Magnetic Resonance Imaging

    PubMed Central

    Sukerkar, Preeti A.; Meade, Thomas J.

    2013-01-01

    Purpose To minimize feature loss in T1- and T2-weighted MRI by merging multiple MR images acquired at different TR and TE to generate an image with increased dynamic range. Materials and Methods High Dynamic Range (HDR) processing techniques from the field of photography were applied to a series of acquired MR images. Specifically, a method to parameterize the algorithm for MRI data was developed and tested. T1- and T2-weighted images of a number of contrast agent phantoms and a live mouse were acquired with varying TR and TE parameters. The images were computationally merged to produce HDR-MR images. All acquisitions were performed on a 7.05 T Bruker PharmaScan with a multi-echo spin echo pulse sequence. Results HDR-MRI delineated bright and dark features that were either saturated or indistinguishable from background in standard T1- and T2-weighted MRI. The increased dynamic range preserved intensity gradation over a larger range of T1 and T2 in phantoms and revealed more anatomical features in vivo. Conclusions We have developed and tested a method to apply HDR processing to MR images. The increased dynamic range of HDR-MR images as compared to standard T1- and T2-weighted images minimizes feature loss caused by magnetization recovery or low SNR. PMID:24250788
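    The merging step can be sketched as a weighted average over the acquisition series, with a hat-shaped weight that de-emphasises pixel values near the saturated top or the noise floor, as in photographic HDR. This conveys the general idea only, not the MRI-specific parameterisation developed in the paper; images are assumed normalised to [0, 1].

```python
import numpy as np

def hdr_merge(images, lo=0.05, hi=0.95):
    """Merge several differently weighted acquisitions into one
    high-dynamic-range image: per-pixel weighted average, where the
    hat-shaped weight is 1 at mid-scale and falls toward 0 at the
    extremes, so each pixel is dominated by its best-exposed frame."""
    images = np.stack(images)
    w = np.clip(np.minimum(images - lo, hi - images) / (hi - lo), 1e-6, None)
    return (w * images).sum(axis=0) / w.sum(axis=0)

a = np.array([[0.98, 0.50]])   # bright feature saturated in this frame
b = np.array([[0.60, 0.05]])   # same feature well-exposed in this one
merged = hdr_merge([a, b])
# merged takes ~0.60 from b where a is saturated, ~0.50 from a where b is dark
```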

  8. 2340 IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 18, NO. 10, OCTOBER 2009 Image Registration Using Adaptive Polar Transform

    E-print Network

    Zheng, Yuan F.

    the registered images is recovered with the new search scheme using Gabor feature extraction to accelerate. Ewing, Senior Member, IEEE. Abstract: Image registration is an essential step in many image processing

  9. University Faculty Describe Their Use of Moving Images in Teaching and Learning and Their Perceptions of the Library's Role in That Use

    ERIC Educational Resources Information Center

    Otto, Jane Johnson

    2014-01-01

    The moving image plays a significant role in teaching and learning; faculty in a variety of disciplines consider it a crucial component of their coursework. Yet little has been written about how faculty identify, obtain, and use these resources and what role the library plays. This study, which engaged teaching faculty in a dialogue with library…

  10. Digital image processing of bone - Problems and potentials

    NASA Technical Reports Server (NTRS)

    Morey, E. R.; Wronski, T. J.

    1980-01-01

    The development of a digital image processing system for bone histomorphometry and fluorescent marker monitoring is discussed. The system in question is capable of making measurements of UV or light microscope features on a video screen with either video or computer-generated images, and comprises a microscope, low-light-level video camera, video digitizer and display terminal, color monitor, and PDP 11/34 computer. Capabilities demonstrated in the analysis of an undecalcified rat tibia include the measurement of perimeter and total bone area, and the generation of microscope images, false color images, digitized images and contoured images for further analysis. Software development will be based on an existing software library, specifically the mini-VICAR system developed at JPL. It is noted that the potentials of the system in terms of speed and reliability far exceed any problems associated with hardware and software development.
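    The core histomorphometric measurements, area and perimeter of a segmented bone region, reduce to simple pixel counting on a binary mask. A minimal sketch (counts are in pixels; a calibration factor, not shown, would convert them to physical units):

```python
import numpy as np

def area_and_perimeter(mask):
    """Histomorphometric measurements on a binary bone mask:
    area = pixel count; perimeter = count of boundary pixels (bone
    pixels with at least one 4-connected non-bone neighbour)."""
    padded = np.pad(mask, 1)
    core = padded[1:-1, 1:-1]
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    boundary = core & ~interior
    return int(core.sum()), int(boundary.sum())

bone = np.zeros((8, 8), dtype=bool)
bone[2:6, 2:6] = True                     # a 4x4 block of "bone"
area, perim = area_and_perimeter(bone)
# area = 16; perim = 12 (all but the 4 interior pixels)
```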

  11. Image processing for improved eye-tracking accuracy

    NASA Technical Reports Server (NTRS)

    Mulligan, J. B.; Watson, A. B. (Principal Investigator)

    1997-01-01

    Video cameras provide a simple, noninvasive method for monitoring a subject's eye movements. An important concept is that of the resolution of the system, which is the smallest eye movement that can be reliably detected. While hardware systems are available that estimate direction of gaze in real-time from a video image of the pupil, such systems must limit image processing to attain real-time performance and are limited to a resolution of about 10 arc minutes. Two ways to improve resolution are discussed. The first is to improve the image processing algorithms that are used to derive an estimate. Off-line analysis of the data can improve resolution by at least one order of magnitude for images of the pupil. A second avenue by which to improve resolution is to increase the optical gain of the imaging setup (i.e., the amount of image motion produced by a given eye rotation). Ophthalmoscopic imaging of retinal blood vessels provides increased optical gain and improved immunity to small head movements but requires a highly sensitive camera. The large number of images involved in a typical experiment imposes great demands on the storage, handling, and processing of data. A major bottleneck had been the real-time digitization and storage of large amounts of video imagery, but recent developments in video compression hardware have made this problem tractable at a reasonable cost. Images of both the retina and the pupil can be analyzed successfully using a basic toolbox of image-processing routines (filtering, correlation, thresholding, etc.), which are, for the most part, well suited to implementation on vectorizing supercomputers.
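    A core step from the toolbox above, thresholding the dark pupil and taking the centroid of the resulting mask, already illustrates why off-line processing gains resolution: the centroid averages over many pixels and so can resolve motion far smaller than one pixel. A toy sketch (threshold and synthetic frame are placeholders):

```python
import numpy as np

def pupil_center(img, thresh):
    """Estimate pupil position in a video frame: threshold the dark
    pupil, then take the centroid of the resulting pixel mask, which
    gives a sub-pixel position estimate."""
    ys, xs = np.nonzero(img < thresh)
    return ys.mean(), xs.mean()

# Dark synthetic pupil on a bright background.
img = np.full((20, 20), 200.0)
img[8:13, 5:10] = 20.0            # 5x5 pupil centred at row 10, col 7
cy, cx = pupil_center(img, thresh=100)
# (cy, cx) == (10.0, 7.0)
```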

  12. Computer tomography imaging of fast plasmachemical processes

    SciTech Connect

    Denisova, N. V.; Katsnelson, S. S.; Pozdnyakov, G. A.

    2007-11-15

    Results are presented from experimental studies of the interaction of a high-enthalpy methane plasma bunch with gaseous methane in a plasmachemical reactor. The interaction of the plasma flow with the rest gas was visualized by using streak imaging and computer tomography. Tomography was applied for the first time to reconstruct the spatial structure and dynamics of the reagent zones in the microsecond range by the maximum entropy method. The reagent zones were identified from the emission of atomic hydrogen (the H{sub {alpha}} line) and molecular carbon (the Swan bands). The spatiotemporal behavior of the reagent zones was determined, and their relation to the shock-wave structure of the plasma flow was examined.

  13. Image Processing System To Analyze Droplet Distributions In Sprays

    NASA Astrophysics Data System (ADS)

    Bertollini, Gary P.; Oberdier, Larry M.; Lee, Yong H.

    1985-06-01

    The General Motors Research Laboratories has developed an image processing system that automatically analyzes the size distributions in fuel spray video images. Images are generated by using pulsed laser light to freeze droplet motion in the spray sample volume under study. This coherent illumination source produces images that contain droplet diffraction patterns representing the droplet's degree of focus. Thousands of images are recorded per sample volume to get an ensemble average of the distribution at that spray location. After image acquisition the recorded video frames are replayed and analyzed under computer control. The analysis is performed by extracting feature data describing droplet diffraction patterns in the images. This allows the system to select droplets from image anomalies and measure only those droplets considered in focus. The system was designed to analyze sprays from a variety of environments. Currently these are an ambient spray chamber, a high pressure, high temperature spray facility, and a running engine. Unique features of the system are the totally automated analysis and droplet feature measurement from the gray scale image. Also, it can distinguish nonspherical anomalies from droplets, which allows sizing of droplets near the spray nozzle. This paper describes the feature extraction and image restoration algorithms used in the system. Preliminary performance data are also given for two experiments. One experiment gives a comparison between manual and automatic measurements of a synthesized distribution. The second experiment compares measurements of a real spray distribution using current methods and using the automatic system.
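    The in-focus selection step can be sketched with a simple sharpness feature: an in-focus droplet has steep edges (high gradient energy), while the diffraction pattern of a defocused droplet is smeared. Thresholding such a score selects measurable droplets. This is an illustrative criterion, not GM's actual feature set.

```python
import numpy as np

def focus_measure(patch):
    """Gradient-energy sharpness score for a droplet image patch:
    higher values mean steeper edges, i.e. a better-focused droplet."""
    gy, gx = np.gradient(patch.astype(float))
    return float(np.mean(gx ** 2 + gy ** 2))

sharp = np.zeros((11, 11))
sharp[3:8, 3:8] = 100.0                 # crisp, in-focus droplet
blurred = np.full((11, 11), 20.0)
blurred[2:9, 2:9] = 40.0                # smeared, out-of-focus rings
blurred[3:8, 3:8] = 60.0
# focus_measure(sharp) is much larger than focus_measure(blurred)
```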

  14. A comparison of image processing techniques for bird recognition.

    PubMed

    Nadimpalli, Uma D; Price, Randy R; Hall, Steven G; Bomma, Pallavi

    2006-01-01

    Bird predation is one of the major concerns for fish culture in open ponds. A novel method for dispersing birds is the use of autonomous vehicles. Image recognition software can improve their efficiency. Several image processing techniques for recognition of birds have been tested. A series of morphological operations were implemented. We divided images into three types, Type 1, Type 2, and Type 3, based on the level of difficulty of recognizing birds. Type 1 images were clear, Type 2 images were moderately clear, and Type 3 images were unclear. Local thresholding was implemented using the HSV (Hue, Saturation, and Value), GRAY, and RGB (Red, Green, and Blue) color models on all three types of images, and the results were tabulated. Template matching using normalized correlation and artificial neural networks (ANN) are the other methods developed in this study in addition to image morphology. Template matching produced satisfactory results irrespective of the difficulty level of the images, but artificial neural networks produced accuracies of 100, 60, and 50% on Type 1, Type 2, and Type 3 images, respectively. The correct classification rate can be increased by further training. Future research will focus on testing the recognition algorithms in natural or aquacultural settings on autonomous boats. Applications of such techniques to industrial, agricultural, or related areas are additional future possibilities. PMID:16454486
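    Template matching by normalized correlation, the method that held up across all three image types, slides the template over the image and scores each position with a correlation coefficient that is insensitive to local brightness and contrast. A from-scratch sketch (the synthetic scene and template stand in for real bird imagery):

```python
import numpy as np

def normalized_correlation(image, template):
    """Map of normalised cross-correlation coefficients between the
    template and every same-sized window of the image; the peak marks
    the best match regardless of local brightness or contrast."""
    th, tw = template.shape
    t = template - template.mean()
    tnorm = np.sqrt((t ** 2).sum())
    h, w = image.shape
    out = np.zeros((h - th + 1, w - tw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            win = image[y:y + th, x:x + tw]
            wz = win - win.mean()
            denom = np.sqrt((wz ** 2).sum()) * tnorm
            out[y, x] = (wz * t).sum() / denom if denom > 0 else 0.0
    return out

rng = np.random.default_rng(2)
scene = rng.random((30, 30))
template = scene[12:18, 7:13].copy()     # a "bird" cut from the scene
score = normalized_correlation(scene, template)
peak = np.unravel_index(score.argmax(), score.shape)
# peak == (12, 7), where the template was taken from; score there is 1.0
```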

  15. Image Reconstruction, Recognition, Using Image Processing, Pattern Recognition and the Hough Transform.

    NASA Astrophysics Data System (ADS)

    Seshadri, M. D.

    1992-01-01

    In this dissertation research, we have demonstrated the need for integration of various imaging methodologies, such as image reconstruction from projections, image processing, and pattern and feature recognition using chain codes and the Hough transform. Further, an integration of these image processing techniques has been brought about for medical imaging systems. An example of this is the classification and identification of brain scans into normal, haemorrhaged, and lacunar infarcted brain scans. Low level processing was performed using LOG and a variation of LOG. Intermediate level processing used contour completion and chain encoding. The Hough transform was used to detect analytic shapes in the edge images. All this information was used by the data abstraction routine, which also extracted information from the user in the form of a general query. These were input into a backpropagation network, a very popular supervised neural network. During the learning process an output vector was supplied to the neural network by the expert. In operation, the neural network compared the input and, with the help of the weight matrix, computed the output. This output was compared with the expert's opinion and a percentage deviation was calculated. In the case of brain scans this value was about 95% when the test input vector did not vary by more than two pixels from the training input vector. A good classification of the brain scans was achieved using the integrated imaging system. Identification of various organs in the abdominal region was also successful, with a recognition rate of about 90%, depending on the noise in the image.
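The Hough transform named above, used to detect analytic shapes in edge images, can be sketched for the simplest case of straight lines. This is a minimal textbook voting scheme, not the dissertation's implementation; the accumulator resolution and the toy edge set are assumptions.

```python
import numpy as np

def hough_lines(edge_points, shape, n_theta=180):
    """Vote in (rho, theta) space: each edge pixel (y, x) votes for every
    line rho = x*cos(theta) + y*sin(theta) passing through it."""
    h, w = shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rhos = np.arange(-diag, diag + 1)
    acc = np.zeros((len(rhos), n_theta), dtype=int)
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    for y, x in edge_points:
        r = np.round(x * cos_t + y * sin_t).astype(int) + diag
        acc[r, np.arange(n_theta)] += 1
    return acc, rhos, thetas

# Edge pixels along the horizontal line y = 5 all vote for the same bin
# (rho = 5, theta = 90 degrees), producing a sharp accumulator peak.
points = [(5, x) for x in range(20)]
acc, rhos, thetas = hough_lines(points, (20, 20))
```

Peaks in the accumulator correspond to lines supported by many edge pixels, which is what makes the transform robust to gaps and noise in the edge map.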

  16. Image data processing system requirements study. Volume 1: Analysis. [for Earth Resources Survey Program

    NASA Technical Reports Server (NTRS)

    Honikman, T.; Mcmahon, E.; Miller, E.; Pietrzak, L.; Yorsz, W.

    1973-01-01

    Digital image processing, image recorders, high-density digital data recorders, and data system element processing for use in an Earth Resources Survey image data processing system are studied. Loading to various ERS systems is also estimated by simulation.

  17. [Information needs of liver transplant candidates: the first step of the teaching-learning process].

    PubMed

    Mendes, Karina Dal Sasso; Rossin, Fabiana Murad; Ziviani, Luciana da Costa; de Castro-E-Silva, Orlando; Galvão, Cristina Maria

    2012-12-01

    'Information need' is defined as a deficiency of information or skill related to a domain of life that is relevant to the patient. This study's objective was to identify the information needs of candidates on the waiting list for a liver transplant. This is a descriptive study and was conducted at a transplant center in the State of São Paulo, Brazil. The sample consisted of 55 patients and data were collected from March to June 2009. The results showed higher average scores for information needs concerning the preoperative period. Identifying the information needs of liver transplant candidates is important to planning the teaching-learning process. PMID:23596922

  18. Perceptual image quality in normalized LOG domain for Adaptive Optics image post-processing

    NASA Astrophysics Data System (ADS)

    Guo, Shiping; Zhang, Rongzhi; Li, Jisheng; Zou, Jianhua; Liu, Changhai; Gao, Weizhe

    2015-08-01

    Adaptive Optics together with subsequent post-processing techniques obviously improve the resolution of turbulence-degraded images in ground-based space object detection and identification. The most common method for frame selection and stopping iteration in post-processing has always been subjective viewing of the images, due to a lack of a widely agreed-upon objective quality metric. Full-reference metrics are not applicable for assessing field data, and no-reference metrics tend to show poor sensitivity for Adaptive Optics images. In the present work, based on the Laplacian of Gaussian (LOG) local contrast feature, a nonlinear normalization is applied to transform the input image into a normalized LOG domain; a quantitative index is then extracted in this domain to assess the perceptual image quality. Experiments show this no-reference quality index is highly consistent with the subjective evaluation of input images for different blur degrees and different iteration numbers.
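The idea of a LoG-based no-reference index can be illustrated with a minimal sketch. The paper's exact nonlinear normalization is not reproduced here; as a hypothetical stand-in, this version simply measures the mean magnitude of the LoG local-contrast response, which rises with sharpness.

```python
import numpy as np

def log_kernel(sigma=1.0, size=7):
    """Laplacian-of-Gaussian kernel, zero-mean so flat regions respond 0."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx ** 2 + yy ** 2
    k = (r2 - 2 * sigma ** 2) / sigma ** 4 * np.exp(-r2 / (2 * sigma ** 2))
    return k - k.mean()

def conv2_valid(img, k):
    """Naive 'valid' 2-D correlation (adequate for a symmetric kernel)."""
    kh, kw = k.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = (img[y:y + kh, x:x + kw] * k).sum()
    return out

def sharpness_index(img, sigma=1.0):
    """No-reference index: mean magnitude of the LoG local-contrast
    response (a simplified stand-in for the paper's metric)."""
    return np.abs(conv2_valid(img, log_kernel(sigma))).mean()

# A crisp checkerboard versus a box-blurred copy of it.
sharp = (np.indices((16, 16)).sum(axis=0) % 2).astype(float)
blurred = conv2_valid(sharp, np.full((3, 3), 1.0 / 9.0))
```

Ranking frames by such an index gives an objective criterion for frame selection and for stopping iterative deconvolution, replacing subjective viewing.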

  19. Teaching groundwater flow processes: connecting lecture to practical and field classes

    NASA Astrophysics Data System (ADS)

    Hakoun, V.; Mazzilli, N.; Pistre, S.; Jourde, H.

    2013-05-01

    Preparing future hydrogeologists to assess local and regional hydrogeological changes and issues related to water supply is a challenging task that creates a need for effective teaching frameworks. The educational literature suggests that hydrogeology courses should consistently integrate lecture class instruction with practical and field classes. However, most teaching examples still separate these three class components. This paper presents an introductory course on groundwater flow processes taught at Université Montpellier 2, France. The adopted pedagogical scheme and the proposed activities are described in detail. The key points of the proposed scheme for the course are: (i) iteration across the three class components to address groundwater flow processes topics, (ii) a course that is structured around a main thread (well testing) present in each class component, and (iii) a pedagogical approach that promotes active learning strategies, in particular using original practical classes and field experiments. The experience indicates that the proposed scheme improves the learning process, as compared to a classical, teacher-centered approach.

  20. Patient education process in teaching hospitals of Tehran University of Medical Sciences

    PubMed Central

    Seyedin, Hesam; Goharinezhad, Salime; Vatankhah, Soodabeh; Azmal, Mohammad

    2015-01-01

    Background: Patient education is widely recognized as a core component of nursing. Patient education can lead to quality outcomes including adherence, quality of life, and patients' knowledge of their illness and self-management. This study aimed to clarify the patient education process in teaching hospitals affiliated to Tehran University of Medical Sciences (TUMS) in Iran. Methods: This cross-sectional study was conducted in 2013. In this descriptive quantitative study, the sample covered 187 head nurses selected from ten teaching hospitals through convenience sampling. Data were collected with a questionnaire developed specifically for this study. The questionnaire measured the patient education process in four dimensions: need assessment, planning, implementing and evaluating. Results: The overall mean score of patient education was 3.326±0.0524. Among the four dimensions of the patient education process, planning was at the highest level (3.570±0.0591) and the lowest score belonged to the evaluation of patient education (2.840±0.0628). Conclusion: Clarifying patient education steps, developing a standardized framework and providing an easily understandable tool-kit for the patient education program will improve the ability of nurses in delivering effective patient education in general and specialized hospitals. PMID:26478878

  1. Automating the Photogrammetric Bridging Based on MMS Image Sequence Processing

    NASA Astrophysics Data System (ADS)

    Silva, J. F. C.; Lemes Neto, M. C.; Blasechi, V.

    2014-11-01

    The photogrammetric bridging or traverse is a special bundle block adjustment (BBA) for connecting a sequence of stereo-pairs and determining the exterior orientation parameters (EOP). An object point must be imaged in more than one stereo-pair. In each stereo-pair the distance ratio between an object and its corresponding image point varies significantly. We propose to automate the photogrammetric bridging based on a fully automatic extraction of homologous points in stereo-pairs and on an arbitrary Cartesian datum to which the EOP and tie points are referred. The technique uses the SIFT algorithm, and keypoint matching is based on the similarity descriptors of each keypoint and the smallest distance. All the matched points are used as tie points. The technique was applied initially to two pairs. The block formed by four images was treated by BBA. The process follows up to the end of the sequence and is semiautomatic, because each block is processed independently and the transition from one block to the next depends on the operator. Besides four-image blocks (two pairs), we experimented with other arrangements, with block sizes of six, eight, and up to twenty images (respectively, three, four, five and up to ten bases). After the whole sequence of image pairs had been sequentially adjusted in each experiment, a simultaneous BBA was run so as to estimate the EOP set of each image. The results for classical ("normal case") pairs were analyzed based on standard statistics regularly applied to phototriangulation, and the figures validate the process.
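The keypoint-matching step, accepting the smallest descriptor distance, is often paired with Lowe's ratio test to reject ambiguous matches. The sketch below is illustrative, not the authors' code; the 128-dimensional random descriptors are synthetic stand-ins for real SIFT output, and the 0.8 ratio is an assumption.

```python
import numpy as np

def match_descriptors(d1, d2, ratio=0.8):
    """For each descriptor in d1, find its nearest neighbour in d2 and
    accept the match only if it is clearly better than the second best
    (Lowe's ratio test)."""
    matches = []
    for i, desc in enumerate(d1):
        dists = np.linalg.norm(d2 - desc, axis=1)
        order = np.argsort(dists)
        if dists[order[0]] < ratio * dists[order[1]]:
            matches.append((i, int(order[0])))
    return matches

# Synthetic check: d2 is a shuffled, slightly noisy copy of d1, so
# every keypoint should be recovered despite the permutation.
rng = np.random.default_rng(0)
d1 = rng.normal(size=(10, 128))
perm = rng.permutation(10)
d2 = d1[perm] + 0.01 * rng.normal(size=(10, 128))
matches = match_descriptors(d1, d2)
```

The accepted pairs then serve as tie points feeding the bundle block adjustment.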

  2. Image processing system performance prediction and product quality evaluation

    NASA Technical Reports Server (NTRS)

    Stein, E. K.; Hammill, H. B. (principal investigators)

    1976-01-01

    The author has identified the following significant results. A new technique for image processing system performance prediction and product quality evaluation was developed. It was entirely objective, quantitative, and general, and should prove useful in system design and quality control. The technique and its application to determination of quality control procedures for the Earth Resources Technology Satellite NASA Data Processing Facility are described.

  3. Enhancement of structure images of interstellar diamond microcrystals by image processing

    NASA Technical Reports Server (NTRS)

    O'Keefe, Michael A.; Hetherington, Crispin; Turner, John; Blake, David; Freund, Friedemann

    1988-01-01

    Image-processed high-resolution TEM images of diamond crystals found in oxidized acid residues of carbonaceous chondrites are presented. Two models of the origin of the diamonds are discussed. The model proposed by Lewis et al. (1987) supposes that the diamonds formed under low pressure conditions, whereas that of Blake et al. (1988) suggests that the diamonds formed due to particle-particle collisions behind supernova shock waves. The TEM images of the diamonds presented support the high pressure model.

  4. Standardizing PhenoCam Image Processing and Data Products

    NASA Astrophysics Data System (ADS)

    Milliman, T. E.; Richardson, A. D.; Klosterman, S.; Gray, J. M.; Hufkens, K.; Aubrecht, D.; Chen, M.; Friedl, M. A.

    2014-12-01

    The PhenoCam Network (http://phenocam.unh.edu) contains an archive of imagery from digital webcams to be used for scientific studies of phenological processes of vegetation. The image archive continues to grow and currently has over 4.8 million images representing 850 site-years of data. Time series of broadband reflectance (e.g., red, green, blue, infrared bands) and derivative vegetation indices (e.g. green chromatic coordinate or GCC) are calculated for regions of interest (ROI) within each image series. These time series form the basis for subsequent analysis, such as spring and autumn transition date extraction (using curvature analysis techniques) and modeling the climate-phenology relationship. Processing is relatively straightforward but time consuming, with some sites having more than 100,000 images available. While the PhenoCam Network distributes the original image data, it is our goal to provide higher-level vegetation phenology products, generated in a standardized way, to encourage use of the data without the need to download and analyze individual images. We describe here the details of the standard image processing procedures, and also provide a description of the products that will be available for download. Products currently in development include an "all-image" file, which contains a statistical summary of the red, green and blue bands over the pixels in predefined ROIs for each image from a site. This product is used to generate 1-day and 3-day temporal aggregates with 90th percentile values of GCC for the specified time period, with standard image selection/filtering criteria applied. Sample software (in python, R, MATLAB) that can be used to read in and plot these products will also be described.
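The GCC index and the 3-day 90th-percentile aggregate described above can be sketched as follows. This is an illustrative computation on per-ROI mean digital numbers; the non-overlapping window handling is an assumption about the aggregation details.

```python
import numpy as np

def gcc(red, green, blue):
    """Green chromatic coordinate from per-ROI mean digital numbers."""
    return green / (red + green + blue)

def percentile90_aggregate(daily_gcc, window=3):
    """Collapse a daily GCC series into non-overlapping `window`-day
    blocks, keeping the 90th percentile of each block (robust against
    occasional dark or foggy frames that drag GCC down)."""
    vals = np.asarray(daily_gcc, dtype=float)
    n = (len(vals) // window) * window
    blocks = vals[:n].reshape(-1, window)
    return np.percentile(blocks, 90, axis=1)

# Toy series: six daily GCC values -> two 3-day aggregates.
daily = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6]
agg = percentile90_aggregate(daily)
```

Taking a high percentile rather than the mean is a common choice in phenology time series because most contamination (shadows, fog, poor exposure) biases GCC low.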

  5. Digital interactive image analysis by array processing

    NASA Technical Reports Server (NTRS)

    Sabels, B. E.; Jennings, J. D.

    1973-01-01

    An attempt is made to draw a parallel between the existing geophysical data processing service industries and the emerging earth resources data support requirements. The relationship of seismic data analysis to ERTS data analysis is natural because in either case data is digitally recorded in the same format, resulting from remotely sensed energy which has been reflected, attenuated, shifted and degraded on its path from the source to the receiver. In the seismic case the energy is acoustic, ranging in frequencies from 10 to 75 cps, for which the lithosphere appears semi-transparent. In earth survey remote sensing through the atmosphere, visible and infrared frequency bands are being used. Yet the hardware and software required to process the magnetically recorded data from the two realms of inquiry are identical and similar, respectively. The resulting data products are similar.

  6. Creating & using specimen images for collection documentation, research, teaching and outreach

    NASA Astrophysics Data System (ADS)

    Demouthe, J. F.

    2012-12-01

    In this age of digital media, there are many opportunities for use of good images of specimens. On-line resources such as institutional web sites and global sites such as PaleoNet and the Paleobiology Database provide venues for collection information and images. Pictures can also be made available to the general public through popular media sites such as Flickr and Facebook, where they can be retrieved and used by teachers, students, and the general public. The number of requests for specimen loans can be drastically reduced by offering the scientific community access to data and specimen images using the internet. This is an important consideration in these days of limited support budgets, since it reduces the amount of staff time necessary for giving researchers and educators access to collections. It also saves wear and tear on the specimens themselves. Many institutions now limit or refuse to send specimens out of their own countries because of the risks involved in going through security and customs. The internet can bridge political boundaries, allowing everyone equal access to collections. In order to develop photographic documentation of a collection, thoughtful preparation will make the process easier and more efficient. Acquire the necessary equipment, establish standards for images, and develop a simple workflow design. Manage images in the camera, and produce the best possible results, rather than relying on time-consuming editing after the fact. It is extremely important that the images of each specimen be of the highest quality and resolution. Poor quality, low resolution photos are not good for anything, and will often have to be retaken when another need arises. Repeating the photography process involves more handling of specimens and more staff time. Once good photos exist, smaller versions can be created for use on the web. The originals can be archived and used for publication and other purposes.

  7. SAR image statistics and adaptive signal processing for change detection

    NASA Astrophysics Data System (ADS)

    Vu, Viet T.; Machado, Renato; Pettersson, Mats I.; Dammert, Patrik; Hellsten, Hans

    2015-05-01

    This paper presents investigations of SAR image statistics and adaptive signal processing for change detection. The investigations show that the amplitude distributions of SAR images with possibly detected changes, retrieved with a linear subtraction operator, can be approximately represented by the probability density function of the Gaussian (normal) distribution. This suggests using available adaptive signal processing techniques for change detection. The experiments indicate promising change detection results obtained with an adaptive line enhancer, one such adaptive signal processing technique. The experiments are conducted on data collected by CARABAS, a UWB low-frequency SAR system.
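The adaptive line enhancer mentioned above is classically built from an LMS adaptive predictor. The sketch below is a minimal textbook version, not the paper's implementation; the tap count, delay, and step size are assumptions, and the toy input is a clean sinusoid rather than SAR data.

```python
import numpy as np

def adaptive_line_enhancer(x, n_taps=16, delay=1, mu=0.01):
    """LMS adaptive line enhancer: predict x[n] from delayed samples.
    The prediction y enhances correlated (narrowband) content, while
    the residual x - y retains the broadband part."""
    w = np.zeros(n_taps)
    y = np.zeros(len(x))
    for n in range(n_taps + delay, len(x)):
        u = x[n - delay - n_taps:n - delay][::-1]  # most recent first
        y[n] = w @ u
        e = x[n] - y[n]
        w += 2.0 * mu * e * u  # LMS weight update
    return y

# A sinusoid is perfectly predictable from its past, so after
# convergence the enhancer output tracks it closely.
n = np.arange(2000)
x = np.sin(2 * np.pi * 0.05 * n)
y = adaptive_line_enhancer(x)
```

In the change-detection setting the correlated component corresponds to structure shared between images, so large residuals flag candidate changes.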

  8. Models of formation and some algorithms of hyperspectral image processing

    NASA Astrophysics Data System (ADS)

    Achmetov, R. N.; Stratilatov, N. R.; Yudakov, A. A.; Vezenov, V. I.; Eremeev, V. V.

    2014-12-01

    Algorithms and information technologies for processing Earth hyperspectral imagery are presented. Several new approaches are discussed. Peculiar properties of processing the hyperspectral imagery, such as multifold signal-to-noise reduction, atmospheric distortions, access to spectral characteristics of every image point, and high dimensionality of data, were studied. Different measures of similarity between individual hyperspectral image points and the effect of additive uncorrelated noise on these measures were analyzed. It was shown that these measures are substantially affected by noise, and a new measure free of this disadvantage was proposed. The problem of detecting the observed scene object boundaries, based on comparing the spectral characteristics of image points, is considered. It was shown that contours are processed much better when spectral characteristics are used instead of energy brightness. A statistical approach to the correction of atmospheric distortions, which makes it possible to solve the stated problem based on analysis of a distorted image in contrast to analytical multiparametric models, was proposed. Several algorithms used to integrate spectral zonal images with data from other survey systems, which make it possible to image observed scene objects with a higher quality, are considered. Quality characteristics of hyperspectral data processing were proposed and studied.
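The paper proposes its own noise-robust similarity measure, which is not reproduced here. As an illustration of the kind of per-pixel spectral comparison it improves upon, the classic spectral angle is sketched below; it is invariant to overall brightness scaling.

```python
import numpy as np

def spectral_angle(s1, s2):
    """Angle (radians) between two pixel spectra. Scaling a spectrum by
    a constant (an illumination change) leaves the angle unchanged,
    which is why it is a popular hyperspectral similarity measure."""
    s1 = np.asarray(s1, dtype=float)
    s2 = np.asarray(s2, dtype=float)
    cos = s1 @ s2 / (np.linalg.norm(s1) * np.linalg.norm(s2))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))
```

The same material viewed under doubled brightness gives an angle of zero, while unrelated (orthogonal) spectra give the maximum angle of pi/2; additive uncorrelated noise, as the paper notes, perturbs such measures and motivates their noise-corrected alternative.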

  9. A comparison of polarization image processing across different platforms

    NASA Astrophysics Data System (ADS)

    York, Timothy; Powell, Samuel; Gruev, Viktor

    2011-10-01

    Division-of-focal-plane (DoFP) polarimeters for the visible spectrum hold the promise of being able to capture both the angle and degree of linear polarization in real-time and at high spatial resolution. These sensors are realized by monolithic integration of CCD imaging elements with metallic nanowire polarization filter arrays at the focal plane of the sensor. These sensors capture large amounts of raw polarization data and present unique computational challenges as they aim to provide polarimetric information at high spatial and temporal resolutions. The image processing pipeline in a typical DoFP polarimeter is: per-pixel calibration, interpolation of the four sub-sampled polarization pixels, Stokes parameter estimation, angle and degree of linear polarization estimation, and conversion from polarization domain to color space for display purposes. The entire image processing pipeline must operate at the same frame rate as the CCD polarization imaging sensor (40 frames per second) or higher in order to enable real-time extraction of the polarization properties from the imaged environment. To achieve the necessary frame rate, we have implemented and evaluated the image processing pipeline on three different platforms: general purpose CPU, graphics processing unit (GPU), and an embedded FPGA. The computational throughput, power consumption, precision and physical limitations of the implementations on each platform are described in detail and experimental data is provided.
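The Stokes-estimation stage of the pipeline can be sketched for one 2x2 superpixel with 0/45/90/135-degree filters. This is the standard textbook formulation, not the authors' exact per-platform code; calibration and interpolation are omitted.

```python
import numpy as np

def stokes_from_superpixel(i0, i45, i90, i135):
    """Linear Stokes parameters from the four polarizer orientations of
    a division-of-focal-plane 2x2 superpixel, plus the degree (DoLP)
    and angle (AoLP) of linear polarization."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90                        # horizontal vs vertical
    s2 = i45 - i135                      # +45 vs -45 degrees
    dolp = np.hypot(s1, s2) / s0
    aolp = 0.5 * np.arctan2(s2, s1)
    return s0, s1, s2, dolp, aolp

# Fully linearly polarized light at 0 degrees: all energy passes the
# 0-degree filter, none the 90-degree one, half the diagonal filters.
s0, s1, s2, dolp, aolp = stokes_from_superpixel(1.0, 0.5, 0.0, 0.5)
```

Each of the CPU, GPU, and FPGA pipelines evaluated in the paper computes essentially this arithmetic per superpixel, which is why the comparison reduces to throughput, power, and numeric precision.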

  10. On-demand server-side image processing for web-based DICOM image display

    NASA Astrophysics Data System (ADS)

    Sakusabe, Takaya; Kimura, Michio; Onogi, Yuzo

    2000-04-01

    Low-cost image delivery is needed in modern networked hospitals. If a hospital has hundreds of clients, the cost of client systems is a big problem. Naturally, a Web-based system is the most effective solution, but a Web browser cannot display medical images that require certain image processing, such as a lookup table transformation. We developed a Web-based medical image display system using a Web browser and on-demand server-side image processing. All images displayed on a Web page are generated from DICOM files on a server and delivered on demand. User interaction on the Web page is handled by a client-side scripting technology such as JavaScript. This combination gives the look-and-feel of an imaging workstation, not only in functionality but also in speed. Real-time update of images while tracing mouse motion is achieved in the Web browser without any client-side image processing, which would otherwise require client-side plug-in technology such as Java Applets or ActiveX. We tested the performance of the system in three cases: a single client, a small number of clients on a fast network, and a large number of clients on a normal-speed network. The results show that the communication overhead is very slight and the system is very scalable in the number of clients.

  11. IEEE Proc. on Image Processing, Nov., 1994 1 STORM TRACKING IN DOPPLER RADAR IMAGES

    E-print Network

    Barron, John

    IEEE Proc. on Image Processing, Nov. 1994: STORM TRACKING IN DOPPLER RADAR IMAGES. D. Krezeski, University of Western Ontario, London, Ontario, N6A 5B7 (email: {mercer,barron}@csd.uwo.ca); King City Radar Station. ABSTRACT: An automated tracking algorithm for Doppler radar storms is presented. Potential storms in Doppler

  12. An ImageJ plugin for ion beam imaging and data processing at AIFIRA facility

    NASA Astrophysics Data System (ADS)

    Devès, G.; Daudin, L.; Bessy, A.; Buga, F.; Ghanty, J.; Naar, A.; Sommar, V.; Michelet, C.; Seznec, H.; Barberet, P.

    2015-04-01

    Quantification and imaging of chemical elements at the cellular level require the use of a combination of techniques such as micro-PIXE, micro-RBS, STIM, and secondary electron imaging, associated with optical and fluorescence microscopy techniques employed prior to irradiation. Such a broad set of methods generates a large amount of data per experiment. Typically, for each acquisition the following data have to be processed: a chemical map for each element present at a concentration above the detection limit, density and backscattered maps, and mean and local spectra corresponding to relevant regions of interest such as the whole cell, intracellular compartments, or nanoparticles. These operations are time consuming, repetitive, and as such could be a source of errors in data manipulation. In order to optimize data processing, we have developed a new tool for batch data processing and imaging. This tool has been developed as a plugin for ImageJ, a versatile image processing program that is suitable for basic IBA data operations. Because ImageJ is written in Java, the plugin can be used under Linux, Mac OS X and Windows in both 32-bit and 64-bit modes, which may interest developers working on open-access ion beam facilities like AIFIRA. The main features of this plugin are presented here: listfile processing, spectroscopic imaging, local information extraction, quantitative density maps and database management using OMERO.

  13. AR/D image processing system

    NASA Astrophysics Data System (ADS)

    Wookey, Cathy; Nicholson, Bruce

    General Dynamics has developed advanced hardware, software, and algorithms for use with the Tomahawk cruise missile and other unmanned vehicles. We have applied this technology to the problem of locating and determining the orientation of the docking port of a target vehicle with respect to an approaching spacecraft. The system described in this presentation utilizes a multi-processor based computer to digitize and process television imagery and extract parameters such as range to the target vehicle, approach, velocity, and pitch and yaw angles. The processor is based on the Inmos T-800 Transputer and is configured as a loosely coupled array. Each processor operates asynchronously and has its own local memory. This allows additional processors to be easily added if additional processing power is required for more complex tasks. Total system throughput is approximately 100 MIPS (scalar) and 60 MFLOPS and can be expanded as desired. The algorithm implemented on the system uses a unique adaptive thresholding technique to locate the target vehicle and determine the approximate position of the docking port. A target pattern surrounding the port is then analyzed in the imagery to determine the range and orientation of the target. This information is passed to an autopilot which uses it to perform course and speed corrections. Future upgrades to the processor are described which will enhance its capabilities for a variety of missions.

  14. Color image processing and object tracking workstation

    NASA Technical Reports Server (NTRS)

    Klimek, Robert B.; Paulick, Michael J.

    1992-01-01

    A system is described for automatic and semiautomatic tracking of objects on film or video tape which was developed to meet the needs of the microgravity combustion and fluid science experiments at NASA Lewis. The system consists of individual hardware parts working under computer control to achieve a high degree of automation. The most important hardware parts include a 16 mm film projector, a lens system, a video camera, an S-VHS tapedeck, a frame grabber, and some storage and output devices. Both the projector and tapedeck have a computer interface enabling remote control. Tracking software was developed to control the overall operation. In the automatic mode, the main tracking program controls the projector or the tapedeck frame incrementation, grabs a frame, processes it, locates the edge of the objects being tracked, and stores the coordinates in a file. This process is performed repeatedly until the last frame is reached. Three representative applications are described. These applications represent typical uses and include tracking the propagation of a flame front, tracking the movement of a liquid-gas interface with extremely poor visibility, and characterizing a diffusion flame according to color and shape.
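The per-frame "locate the object, store its coordinates" loop at the heart of such trackers can be sketched generically by thresholding and centroiding. This is an illustrative core, not the NASA Lewis software; the fixed threshold and synthetic frames are assumptions.

```python
import numpy as np

def track_centroids(frames, threshold=0.5):
    """For each frame, segment pixels above `threshold` and return the
    centroid (row, col) of the segmented object, or None if nothing is
    found. Chaining the centroids gives the object's trajectory."""
    coords = []
    for frame in frames:
        ys, xs = np.nonzero(frame > threshold)
        coords.append((ys.mean(), xs.mean()) if ys.size else None)
    return coords

# A bright 2x2 blob moving two pixels to the right between frames.
f1 = np.zeros((8, 8)); f1[2:4, 2:4] = 1.0
f2 = np.zeros((8, 8)); f2[2:4, 4:6] = 1.0
track = track_centroids([f1, f2])
```

Differencing successive centroids yields per-frame displacement, from which front propagation speed can be estimated once the frame rate and pixel scale are known.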

  15. AR/D image processing system

    NASA Technical Reports Server (NTRS)

    Wookey, Cathy; Nicholson, Bruce

    1991-01-01

    General Dynamics has developed advanced hardware, software, and algorithms for use with the Tomahawk cruise missile and other unmanned vehicles. We have applied this technology to the problem of locating and determining the orientation of the docking port of a target vehicle with respect to an approaching spacecraft. The system described in this presentation utilizes a multi-processor based computer to digitize and process television imagery and extract parameters such as range to the target vehicle, approach, velocity, and pitch and yaw angles. The processor is based on the Inmos T-800 Transputer and is configured as a loosely coupled array. Each processor operates asynchronously and has its own local memory. This allows additional processors to be easily added if additional processing power is required for more complex tasks. Total system throughput is approximately 100 MIPS (scalar) and 60 MFLOPS and can be expanded as desired. The algorithm implemented on the system uses a unique adaptive thresholding technique to locate the target vehicle and determine the approximate position of the docking port. A target pattern surrounding the port is then analyzed in the imagery to determine the range and orientation of the target. This information is passed to an autopilot which uses it to perform course and speed corrections. Future upgrades to the processor are described which will enhance its capabilities for a variety of missions.

  16. Dehydration process of fish analyzed by neutron beam imaging

    NASA Astrophysics Data System (ADS)

    Tanoi, K.; Hamada, Y.; Seyama, S.; Saito, T.; Iikura, H.; Nakanishi, T. M.

    2009-06-01

    Since regulation of the water content of dried fish is an important factor for the quality of the fish, the water-loss process during drying (of squid and Japanese horse mackerel) was analyzed through neutron beam imaging. The neutron images showed that around the shoulder of the mackerel there was a region where the water content tended to remain high during drying. To analyze the water-loss process in more detail, spatial images were produced. From the images, it was clearly indicated that the decrease in water content was limited around the shoulder. It was suggested that preventing deterioration around the shoulder of the dried fish is an important factor in maintaining the quality of the dried fish during storage.

  17. Application of Digital Image Processing Methods for Portal Image Quality Improvement

    SciTech Connect

    Gorlachev, G. E.; Kosyrev, D. S.

    2007-11-26

    The different processing methods that could increase contrast (unsharp mask, histogram equalization, and deconvolution) and reduce noise (median filter) were analysed. An application that allows the importation of BeamView files (ACR-NEMA 2.0 format) and the application of the above-mentioned methods was developed. The main objective was to obtain the most accurate comparison of BeamView images with Digitally Received Radiograms. The preliminary results of the image processing methods are presented.
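One of the contrast methods listed, the unsharp mask, can be sketched with a naive box blur. This is a generic illustration, not the authors' application; the kernel size and sharpening amount are assumptions.

```python
import numpy as np

def box_blur(img, k=3):
    """Naive k x k box blur with edge replication at the borders."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = p[y:y + k, x:x + k].mean()
    return out

def unsharp_mask(img, amount=1.0):
    """Add back the difference between the image and its blurred copy,
    boosting local contrast at edges (overshoot on the bright side,
    undershoot on the dark side of a step)."""
    return img + amount * (img - box_blur(img))

# A vertical step edge sharpens into an overshoot/undershoot pair.
step = np.zeros((8, 8)); step[:, 4:] = 1.0
sharpened = unsharp_mask(step)
```

The overshoot is exactly what makes low-contrast anatomy boundaries in portal images easier to compare against reference radiographs.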

  18. New Zealand involvement in Radio Astronomical VLBI Image Processing

    E-print Network

    Weston, Stuart; Gulyaev, Sergei

    2011-01-01

    With the establishment of the AUT University 12m radio telescope at Warkworth, New Zealand has now become a part of the international Very Long Baseline Interferometry (VLBI) community. A major product of VLBI observations are images in the radio domain of astronomical objects such as Active Galactic Nuclei (AGN). Using large geographical separations between radio antennas, very high angular resolution can be achieved. Detailed images can be created using the technique of VLBI Earth Rotation Aperture Synthesis. We review the current process of VLBI radio imaging. In addition we model VLBI configurations using the Warkworth telescope, AuScope (a new array of three 12m antennas in Australia) and the Australian Square Kilometre Array Pathfinder (ASKAP) array currently under construction in Western Australia, and discuss how the configuration of these arrays affects the quality of images. Recent imaging results that demonstrate the modeled improvements from inclusion of the AUT and first ASKAP telescope in the Au...

  19. Learning to Teach: Enhancing Pre-Service Teachers' Awareness of the Complexity of Teaching-Learning Processes

    ERIC Educational Resources Information Center

    Eilam, Billie; Poyas, Yael

    2009-01-01

    Why is it so challenging to provide pre-service teachers with adequate competence to cope with the complexity of the classroom context? Three key difficulties are frequently reported as reducing the effectiveness of teacher education programs: the construction of an integrated body of knowledge about teaching, the application of theories to…

  20. The Development of a Prospective Data Collection Process in a Traditional Chinese Medicine Teaching Clinic

    PubMed Central

    McKenzie, Eileen; Evans, Roni; McKenzie, Mark

    2009-01-01

    Abstract Objective There is a growing need for students and practitioners of Traditional Chinese Medicine to gain experience with standardized data collection, patient outcomes measurement, and practice-based research. The purpose of this paper is to describe the development of a process for standardized data collection that could eventually be adopted for clinical, research, and quality assurance purposes. Settings/location The setting for this study was an acupuncture and Oriental medicine teaching clinic in Bloomington, Minnesota. Methods Four (4) aspects of data collection were assessed and improved, including intake and post-treatment questionnaires, follow-up with patients, integration of data collection into clinic flow, and commitment of resources to the project. Outcome measures The outcomes measures were data collection and missing data rates, burden on patients and clinic staff, and efficiency of data entry. Results Revision to the data collection process resulted in decreased burden to patients and staff, more detailed and aggressive follow-up protocols, enhanced training for clinic staff, and increased personnel and data-related resources. Conclusions The systematic collection of descriptive and clinical characteristics can be accomplished in a teaching clinic with thoughtful attention paid to data collection protocols, dedicated resources, and the involvement of all relevant personnel. PMID:19292655

  1. Video image processing for nuclear safeguards

    SciTech Connect

    Rodriguez, C.A.; Howell, J.A.; Menlove, H.O.; Brislawn, C.M.; Bradley, J.N.; Chare, P.; Gorten, J.

    1995-09-01

The field of nuclear safeguards has received increasing amounts of public attention since the events of the Iraq-UN conflict over Kuwait, the dismantlement of the former Soviet Union, and more recently, the North Korean resistance to nuclear facility inspections by the International Atomic Energy Agency (IAEA). The role of nuclear safeguards in these and other events relating to the world's nuclear material inventory is to assure the safekeeping of these materials and to verify the inventory and use of nuclear materials as reported by states throughout the world that have signed the nuclear Nonproliferation Treaty. Nuclear safeguards are measures prescribed by domestic and international regulatory bodies such as DOE, NRC, IAEA, and EURATOM and implemented by the nuclear facility or the regulatory body. These measures include destructive and nondestructive analysis of product materials and process by-products for materials control and accountancy purposes, physical protection for domestic safeguards, and containment and surveillance for international safeguards.

  2. Digital image processing: a primer for JVIR authors and readers: Part 3: Digital image editing.

    PubMed

    LaBerge, Jeanne M; Andriole, Katherine P

    2003-12-01

    This is the final installment of a three-part series on digital image processing intended to prepare authors for online submission of manuscripts. In the first two articles of the series, the fundamentals of digital image architecture were reviewed and methods of importing images to the computer desktop were described. In this article, techniques are presented for editing images in preparation for online submission. A step-by-step guide to basic editing with use of Adobe Photoshop is provided and the ethical implications of this activity are explored. PMID:14654480

  3. Massive parallel processing of image reconstruction from bispectrum through turbulence.

    PubMed

    Hajmohammadi, Solmaz; Nooshabadi, Saeid; Bos, Jeremy P

    2015-11-10

    This paper presents a massively parallel method for the phase reconstruction of an object from its bispectrum phase. Our aim is to recover an enhanced version of a turbulence-corrupted image by developing an efficient and fast parallel image-restoration algorithm. The proposed massively parallel bispectrum algorithm relies on multiple block parallelization. Further, in each block, we employ wavefront processing through strength reduction to parallelize an iterative algorithm. Results are presented and compared with the existing iterative bispectrum method. We report a speed-up factor of 85.94 with respect to sequential implementation of the same algorithm for an image size of 1024×1024. PMID:26560760

  4. High Performance Image Processing And Laser Beam Recording System

    NASA Astrophysics Data System (ADS)

    Fanelli, Anthony R.

    1980-09-01

This article provides the digital image recording community with an overview of digital image processing and recording. The Digital Interactive Image Processing System (DIIPS) was assembled by ESL for Air Force Systems Command under the guidance of the Rome Air Development Center. The system provides the capability of mensuration and exploitation of digital imagery with both mono and stereo digital images as inputs. This development provided the system design, basic hardware, software and operational procedures to enable Air Force Systems Command photo analysts to perform digital mensuration and exploitation of stereo digital images. The engineering model was based on state-of-the-art technology and, to the extent possible, off-the-shelf hardware and software. A laser recorder was also developed for the DIIPS system and is known as the Ultra High Resolution Image Recorder (UHRIR). The UHRIR is a prototype model that will enable the Air Force Systems Command to record computer-enhanced digital image data on photographic film at high resolution with geometric and radiometric distortion minimized.

  5. Object silhouettes and surface directions through stereo matching image processing

    NASA Astrophysics Data System (ADS)

    Akiyama, Akira; Kumagai, Hideo

    2015-09-01

We have studied object silhouettes and surface directions through stereo matching image processing to recognize the position, size and surface direction of an object. For this study we construct the pixel number change distribution of the HSI color component level, the binary component level image by the standard deviation threshold, the 4-directional pixel connectivity filter, the surface element correspondence by stereo matching, and the projection rule relation. We note that the HSI color component levels of the object image are more stable near the focus position than over the unfocused range, so we use the HSI color component level images near the fine focus position to extract the object silhouette, and the silhouette is extracted properly. We find the surface direction of the object from the pixel counts of the corresponding surface areas and the projection cosine rule, after stereo matching image processing by the characteristic areas and the synthesized colors. Epipolar geometry is used in this study because the pair of imagers is arranged on the same epipolar plane. The surface direction detection results in the proper angle calculation. The construction of object silhouettes and the detection of the surface direction of the object are thereby realized.
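The projection cosine rule used in this abstract can be sketched in a few lines: a planar patch tilted by angle θ relative to the image plane projects with its pixel count reduced by cos θ, so the ratio of projected to frontal pixel counts yields the tilt. The function below is a hypothetical illustration of that geometric idea, not the authors' code.

```python
import math

def surface_tilt_deg(pixels_projected, pixels_frontal):
    """Estimate surface tilt from the projection cosine rule: a planar
    patch tilted by theta projects onto the image plane with its
    apparent area (pixel count) reduced by cos(theta)."""
    ratio = pixels_projected / pixels_frontal
    ratio = max(0.0, min(1.0, ratio))  # clamp against pixel-count noise
    return math.degrees(math.acos(ratio))

# A patch covering 500 px viewed head-on but only 250 px when tilted
# is inclined by about 60 degrees (cos 60° = 0.5).
print(round(surface_tilt_deg(250, 500)))  # 60
```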

  6. SENTINEL-2 Level 1 Products and Image Processing Performances

    NASA Astrophysics Data System (ADS)

    Baillarin, S. J.; Meygret, A.; Dechoz, C.; Petrucci, B.; Lacherade, S.; Tremas, T.; Isola, C.; Martimort, P.; Spoto, F.

    2012-07-01

    In partnership with the European Commission and in the frame of the Global Monitoring for Environment and Security (GMES) program, the European Space Agency (ESA) is developing the Sentinel-2 optical imaging mission devoted to the operational monitoring of land and coastal areas. The Sentinel-2 mission is based on a satellites constellation deployed in polar sun-synchronous orbit. While ensuring data continuity of former SPOT and LANDSAT multi-spectral missions, Sentinel-2 will also offer wide improvements such as a unique combination of global coverage with a wide field of view (290 km), a high revisit (5 days with two satellites), a high resolution (10 m, 20 m and 60 m) and multi-spectral imagery (13 spectral bands in visible and shortwave infra-red domains). In this context, the Centre National d'Etudes Spatiales (CNES) supports ESA to define the system image products and to prototype the relevant image processing techniques. This paper offers, first, an overview of the Sentinel-2 system and then, introduces the image products delivered by the ground processing: the Level-0 and Level-1A are system products which correspond to respectively raw compressed and uncompressed data (limited to internal calibration purposes), the Level-1B is the first public product: it comprises radiometric corrections (dark signal, pixels response non uniformity, crosstalk, defective pixels, restoration, and binning for 60 m bands); and an enhanced physical geometric model appended to the product but not applied, the Level-1C provides ortho-rectified top of atmosphere reflectance with a sub-pixel multi-spectral and multi-date registration; a cloud and land/water mask is associated to the product. Note that the cloud mask also provides an indication about cirrus. The ground sampling distance of Level-1C product will be 10 m, 20 m or 60 m according to the band. The final Level-1C product is tiled following a pre-defined grid of 100x100 km2, based on UTM/WGS84 reference frame. 
The stringent image quality requirements are also described, in particular the geo-location accuracy for both the absolute (better than 12.5 m) and multi-temporal (better than 0.3 pixels) cases. Then, the prototyped image processing techniques (both radiometric and geometric) will be addressed. The radiometric corrections will be introduced first. They consist mainly of dark signal and detector relative sensitivity correction, crosstalk correction and MTF restoration. A special focus will then be placed on the geometric corrections. In particular, the innovative method of automatic enhancement of the geometric physical model will be detailed. This method takes advantage of a Global Reference Image database, perfectly geo-referenced, to correct the physical geometric model of each image taken. The processing is based on an automatic image matching process which provides accurate ground control points between a given band of the image to refine and a reference image, making it possible to dynamically calibrate the viewing model. The generation of the Global Reference Image database, made of Sentinel-2 pre-calibrated mono-spectral images, will also be addressed. In order to perform independent validation of the prototyping activity, an image simulator dedicated to Sentinel-2 has been set up. Thanks to this, a set of images has been simulated from various source images, combining different acquisition conditions and landscapes (mountains, deserts, cities …). Given disturbances have also been simulated so as to estimate the end-to-end performance of the processing chain. Finally, the radiometric and geometric performances obtained by the prototype will be presented, in particular the geo-location performance of the Level-1C products, which comfortably fulfils the image quality requirements.
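The image-matching step that refines the viewing model can be illustrated with a toy normalized cross-correlation search for the best integer pixel offset between an acquired band and a reference chip. The real Sentinel-2 processing matches at sub-pixel accuracy against the Global Reference Image database; the function name and integer-only search here are illustrative assumptions.

```python
import numpy as np

def estimate_shift(image, reference, max_shift=3):
    """Find the integer (row, col) offset that best aligns `image` to
    `reference` by maximizing normalized cross-correlation over a small
    search window -- a toy stand-in for the matching that yields ground
    control points for viewing-model refinement."""
    best, best_score = (0, 0), -np.inf
    h, w = reference.shape
    m = max_shift
    core = reference[m:h - m, m:w - m]
    core_n = (core - core.mean()) / core.std()
    for dr in range(-m, m + 1):
        for dc in range(-m, m + 1):
            patch = image[m + dr:h - m + dr, m + dc:w - m + dc]
            patch_n = (patch - patch.mean()) / patch.std()
            score = (core_n * patch_n).mean()
            if score > best_score:
                best_score, best = score, (dr, dc)
    return best

rng = np.random.default_rng(0)
ref = rng.random((32, 32))
shifted = np.roll(ref, (2, -1), axis=(0, 1))  # displace by (2, -1)
print(estimate_shift(shifted, ref))  # (2, -1)
```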

  7. Detection of ash fusion temperatures based on the image processing

    NASA Astrophysics Data System (ADS)

    Li, Peisheng; Yue, Yanan; Hu, Yi; Li, Jie; Yu, Wan; Yang, Jun; Hu, Niansu; Yang, Guolu

    2007-11-01

The detection of ash fusion temperatures is important in the research of coal characteristics. The prevalent method is to build an ash cone of given dimensions and detect the characteristic temperatures according to its morphological change. However, conventional detection is not accurate and is labor-intensive, since it relies on direct visual, real-time observation. To address the shortcomings of the conventional method, a new method to determine ash fusion temperatures with image processing techniques is introduced in this paper. Seven techniques (image cutting, image sharpening, edge picking, open operation, dilate operation, close operation, geometrical property extraction) are used in the image processing program. The processing results show that image sharpening can intensify the outline of the ash cone; the Prewitt operator extracts the edge best among the many operators tried; mathematical morphology can filter noise effectively while filling the cracks introduced by filtration, which aids further processing; and the characteristic ash fusion temperatures can be measured from the depth-to-width ratio. Ash fusion temperatures derived from this method match standard values well, which proves that the method is feasible for the detection of ash fusion temperatures.
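The depth-to-width ratio used to flag the characteristic temperatures is easy to compute once a binary silhouette of the ash cone has been extracted: as the cone softens and slumps with rising temperature, the ratio drops. The helper below is a hypothetical illustration of that final measurement, not the paper's program.

```python
import numpy as np

def height_to_width_ratio(mask):
    """Height-to-width ratio of an ash-cone silhouette given as a
    boolean image mask (True = cone pixels). The ratio of the bounding
    box is the geometric cue used to detect fusion temperatures."""
    rows = np.flatnonzero(mask.any(axis=1))
    cols = np.flatnonzero(mask.any(axis=0))
    height = rows[-1] - rows[0] + 1
    width = cols[-1] - cols[0] + 1
    return height / width

# A 6-pixel-tall, 3-pixel-wide synthetic "cone":
cone = np.zeros((10, 10), dtype=bool)
cone[2:8, 4:7] = True
print(height_to_width_ratio(cone))  # 2.0
```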

  8. High Throughput Multispectral Image Processing with Applications in Food Science

    PubMed Central

    Tsakanikas, Panagiotis; Pavlidis, Dimitris; Nychas, George-John

    2015-01-01

Recently, machine vision has been gaining attention in food science as well as in the food industry for food quality assessment and monitoring. Within the framework of implementing Process Analytical Technology (PAT) in the food industry, image processing can be used not only in the estimation and even prediction of food quality but also in the detection of adulteration. Toward these applications in food science, we present here a novel methodology for automated image analysis of several kinds of food products, e.g. meat, vanilla crème and table olives, so as to increase objectivity and data reproducibility, lower the cost of information extraction, and speed quality assessment, without human intervention. The outcome of image processing is propagated to the downstream analysis. The developed multispectral image processing method is based on an unsupervised machine learning approach (Gaussian Mixture Models) and a novel unsupervised scheme of spectral band selection for segmentation process optimization. Through the evaluation we prove its efficiency and robustness against the currently available semi-manual software, showing that the developed method is a high-throughput approach appropriate for massive data extraction from food samples. PMID:26466349
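The unsupervised segmentation core can be sketched with a minimal one-dimensional, two-component EM fit. The paper's method uses full multivariate GMMs over multispectral pixels plus a band-selection scheme, so this is only a toy stand-in for the clustering idea.

```python
import numpy as np

def gmm2_em(x, iters=50):
    """Fit a two-component 1-D Gaussian mixture with EM -- a minimal
    stand-in for the unsupervised pixel-clustering step."""
    x = np.asarray(x, dtype=float)
    mu = np.array([x.min(), x.max()])          # spread-out initialization
    var = np.array([x.var(), x.var()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each pixel
        d = x[:, None] - mu[None, :]
        p = pi * np.exp(-0.5 * d**2 / var) / np.sqrt(2 * np.pi * var)
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, variances
        n = r.sum(axis=0)
        pi = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        d = x[:, None] - mu[None, :]
        var = (r * d**2).sum(axis=0) / n + 1e-6
    return pi, mu, var

rng = np.random.default_rng(1)
pixels = np.concatenate([rng.normal(0.2, 0.02, 500),   # "background" band values
                         rng.normal(0.8, 0.05, 500)])  # "foreground" band values
_, means, _ = gmm2_em(pixels)
print(np.round(np.sort(means), 2))  # approximately [0.2, 0.8]
```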

  9. The Multimission Image Processing Laboratory's virtual frame buffer interface

    NASA Technical Reports Server (NTRS)

    Wolfe, T.

    1984-01-01

Large image processing systems use multiple frame buffers with differing architectures and vendor-supplied interfaces. This variety of architectures and interfaces creates software development, maintenance and portability problems for application programs. Several machine-independent graphics standards such as ANSI Core and GKS are available, but none of them are adequate for image processing. Therefore, the Multimission Image Processing Laboratory project has implemented a programmer-level virtual frame buffer interface. This interface makes all frame buffers appear as a generic frame buffer with a specified set of characteristics. This document defines the virtual frame buffer interface and provides information such as FORTRAN subroutine definitions, frame buffer characteristics, sample programs, etc. It is intended to be used by application programmers and system programmers who are adding new frame buffers to a system.
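The idea of a generic frame buffer with a specified set of characteristics maps naturally onto an abstract interface plus one concrete driver per device. The original interface was a set of FORTRAN subroutines; the Python sketch below only illustrates the design pattern, with invented method names.

```python
from abc import ABC, abstractmethod

class VirtualFrameBuffer(ABC):
    """Generic frame-buffer interface in the spirit of the MIPL design:
    applications program against this one abstraction, and each vendor
    device is supported by writing a single concrete driver class."""

    @abstractmethod
    def write_pixel(self, x, y, value): ...

    @abstractmethod
    def read_pixel(self, x, y): ...

class InMemoryFrameBuffer(VirtualFrameBuffer):
    """A trivial 'device' backed by a dict, standing in for real
    vendor hardware with its own native interface."""
    def __init__(self, width, height):
        self.width, self.height = width, height
        self._pixels = {}

    def write_pixel(self, x, y, value):
        self._pixels[(x, y)] = value

    def read_pixel(self, x, y):
        return self._pixels.get((x, y), 0)

fb = InMemoryFrameBuffer(512, 512)
fb.write_pixel(10, 20, 255)
print(fb.read_pixel(10, 20))  # 255
```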

  10. A spatial planetary image database in the context of processing

    NASA Astrophysics Data System (ADS)

    Willner, K.; Tasdelen, E.

    2015-10-01

Planetary image data is collected and archived by e.g. the European Planetary Science Archive (PSA) or its US counterpart, the Planetary Data System (PDS). These archives usually organize the data according to missions and their respective instruments. Search queries can be posted to retrieve data of interest for a specific instrument data set. In the context of processing data from a number of sensors and missions this is not practical. In the scope of the EU FP7 project PRoViDE, meta-data from imaging sensors were collected from the PSA as well as the PDS and were rearranged and restructured according to the processing needs. Exemplary image data gathered from rover and lander missions operated on the Martian surface was organized into a new unique database. The database is a core component of the PRoViDE processing and visualization system, as it enables multi-mission and multi-sensor searches to fully exploit the collected data.
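The restructuring can be illustrated with a toy relational table in which mission and sensor are ordinary columns, so a single query spans both. The schema and rows below are invented for illustration and are not PRoViDE's actual data model.

```python
import sqlite3

# A minimal sketch of a multi-mission image metadata table: the point
# of restructuring is that one query can span missions and sensors,
# which mission-centric archive layouts do not directly support.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE images (
    product_id TEXT PRIMARY KEY,
    mission    TEXT,
    sensor     TEXT,
    site       TEXT,
    sol        INTEGER)""")
db.executemany("INSERT INTO images VALUES (?, ?, ?, ?, ?)", [
    ("A1", "MER-B", "Pancam",  "Victoria", 952),
    ("B7", "MSL",   "Mastcam", "Gale",     100),
    ("C3", "MER-B", "Navcam",  "Victoria", 953),
])
# One query across missions and sensors for a single site:
rows = db.execute(
    "SELECT mission, sensor FROM images WHERE site = ? ORDER BY sol",
    ("Victoria",)).fetchall()
print(rows)  # [('MER-B', 'Pancam'), ('MER-B', 'Navcam')]
```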

  11. Survey Framework and Process Background In a Report to the Learning and Teaching Board (LTB) in September 2009, it was

    E-print Network

    Greenaway, Alan

Survey Framework and Process: Overview. Background: In a Report to the Learning and Teaching Board (LTB) in September 2009, recommendations were made with respect to management and annual scheduling of all institution-wide student surveys. The Report also noted a need to improve the co-ordination of the survey process. It was recommended that the University should…

  12. A Case Study Analysing the Process of Analogy-Based Learning in a Teaching Unit about Simple Electric Circuits

    ERIC Educational Resources Information Center

    Paatz, Roland; Ryder, James; Schwedes, Hannelore; Scott, Philip

    2004-01-01

    The purpose of this case study is to analyse the learning processes of a 16-year-old student as she learns about simple electric circuits in response to an analogy-based teaching sequence. Analogical thinking processes are modelled by a sequence of four steps according to Gentner's structure mapping theory (activate base domain, postulate local…

  13. Image Processing for Educators in Global Hands-On Universe

    NASA Astrophysics Data System (ADS)

    Miller, J. P.; Pennypacker, C. R.; White, G. L.

    2006-08-01

    A method of image processing to find time-varying objects is being developed for the National Virtual Observatory as part of Global Hands-On Universe(tm) (Lawrence Hall of Science; University of California, Berkeley). Objects that vary in space or time are of prime importance in modern astronomy and astrophysics. Such objects include active galactic nuclei, variable stars, supernovae, or moving objects across a field of view such as an asteroid, comet, or extrasolar planet transiting its parent star. The search for these objects is undertaken by acquiring an image of the region of the sky where they occur followed by a second image taken at a later time. Ideally, both images are taken with the same telescope using the same filter and charge-coupled device. The two images are aligned and subtracted with the subtracted image revealing any changes in light during the time period between the two images. We have used a method of Christophe Alard using the image processing software IDL Version 6.2 (Research Systems, Inc.) with the exception of the background correction, which is done on the two images prior to the subtraction. Testing has been extensive, using images provided by a number of National Virtual Observatory and collaborating projects. They include the Supernovae Trace Cosmic Expansion (Cerro Tololo Inter-American Observatory), Supernovae/ Acceleration Program (Lawrence Berkeley National Laboratory), Lowell Observatory Near-Earth Object Search (Lowell Observatory), and the Centre National de la Recherche Scientifique (Paris, France). Further testing has been done with students, including a May 2006 two week program at the Lawrence Berkeley National Laboratory. Students from Hardin-Simmons University (Abilene, TX) and Jackson State University (Jackson, MS) used the subtraction method to analyze images from the Cerro Tololo Inter-American Observatory (CTIO) searching for new asteroids and Kuiper Belt objects. In October 2006 students from five U.S. 
high schools will use the subtraction method in an asteroid search campaign using CTIO images with 7-day follow-up images to be provided by the Las Cumbres Observatory (Santa Barbara, CA). During the Spring 2006 semester, students from Cape Fear High School used the method to search for near-Earth objects and supernovae. Using images from the Astronomical Research Institute (Charleston, IL) the method contributed to the original discovery of two supernovae, SN 2006al and SN 2006bi.
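The align-correct-subtract-threshold idea behind the search can be sketched as follows. The production method uses Alard's PSF-matching approach in IDL; this numpy toy with synthetic data only illustrates the principle of difference imaging for transient detection.

```python
import numpy as np

# Toy difference imaging: two aligned epochs of the same field, a
# background (sky level) correction on each, then subtraction and
# thresholding to flag pixels that changed between epochs.
rng = np.random.default_rng(2)
epoch1 = rng.normal(100.0, 1.0, (64, 64))   # sky + noise
epoch2 = epoch1 + 5.0                        # later epoch, brighter sky
epoch2[30, 40] += 50.0                       # a "new" transient source

# Background correction: remove each image's median sky level,
# done on both images prior to the subtraction (as in the abstract).
diff = (epoch2 - np.median(epoch2)) - (epoch1 - np.median(epoch1))
detections = np.argwhere(diff > 10.0)        # 10-count change threshold
print(detections)  # [[30 40]]
```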

  14. Automated Processing of Zebrafish Imaging Data: A Survey

    PubMed Central

    Dickmeis, Thomas; Driever, Wolfgang; Geurts, Pierre; Hamprecht, Fred A.; Kausler, Bernhard X.; Ledesma-Carbayo, María J.; Marée, Raphaël; Mikula, Karol; Pantazis, Periklis; Ronneberger, Olaf; Santos, Andres; Stotzka, Rainer; Strähle, Uwe; Peyriéras, Nadine

    2013-01-01

    Abstract Due to the relative transparency of its embryos and larvae, the zebrafish is an ideal model organism for bioimaging approaches in vertebrates. Novel microscope technologies allow the imaging of developmental processes in unprecedented detail, and they enable the use of complex image-based read-outs for high-throughput/high-content screening. Such applications can easily generate Terabytes of image data, the handling and analysis of which becomes a major bottleneck in extracting the targeted information. Here, we describe the current state of the art in computational image analysis in the zebrafish system. We discuss the challenges encountered when handling high-content image data, especially with regard to data quality, annotation, and storage. We survey methods for preprocessing image data for further analysis, and describe selected examples of automated image analysis, including the tracking of cells during embryogenesis, heartbeat detection, identification of dead embryos, recognition of tissues and anatomical landmarks, and quantification of behavioral patterns of adult fish. We review recent examples for applications using such methods, such as the comprehensive analysis of cell lineages during early development, the generation of a three-dimensional brain atlas of zebrafish larvae, and high-throughput drug screens based on movement patterns. Finally, we identify future challenges for the zebrafish image analysis community, notably those concerning the compatibility of algorithms and data formats for the assembly of modular analysis pipelines. PMID:23758125

  15. An Automated Image Processing System for Concrete Evaluation

    SciTech Connect

    Baumgart, C.W.; Cave, S.P.; Linder, K.E.

    1998-11-23

    AlliedSignal Federal Manufacturing & Technologies (FM&T) was asked to perform a proof-of-concept study for the Missouri Highway and Transportation Department (MHTD), Research Division, in June 1997. The goal of this proof-of-concept study was to ascertain if automated scanning and imaging techniques might be applied effectively to the problem of concrete evaluation. In the current evaluation process, a concrete sample core is manually scanned under a microscope. Voids (or air spaces) within the concrete are then detected visually by a human operator by incrementing the sample under the cross-hairs of a microscope and by counting the number of "pixels" which fall within a void. Automation of the scanning and image analysis processes is desired to improve the speed of the scanning process, to improve evaluation consistency, and to reduce operator fatigue. An initial, proof-of-concept image analysis approach was successfully developed and demonstrated using acquired black and white imagery of concrete samples. In this paper, the automated scanning and image capture system currently under development will be described and the image processing approach developed for the proof-of-concept study will be demonstrated. A development update and plans for future enhancements are also presented.
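The void-counting step being automated can be sketched as thresholding followed by connected-component labeling, replacing the manual "count pixels under the cross-hairs" procedure. The flood-fill helper below is a hypothetical illustration, not FM&T's system.

```python
from collections import deque

def count_voids(image, threshold=128):
    """Count connected dark regions (air voids) in a grayscale image
    given as a list of row lists, via 4-connected flood fill."""
    h, w = len(image), len(image[0])
    seen = [[False] * w for _ in range(h)]
    voids = 0
    for sy in range(h):
        for sx in range(w):
            if image[sy][sx] < threshold and not seen[sy][sx]:
                voids += 1                      # new void found
                queue = deque([(sy, sx)])
                seen[sy][sx] = True
                while queue:                    # flood-fill the region
                    y, x = queue.popleft()
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and image[ny][nx] < threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
    return voids

sample = [
    [200, 200,  10, 200],
    [200, 200,  10, 200],
    [ 10, 200, 200, 200],
    [ 10,  10, 200,  30],
]
print(count_voids(sample))  # 3 distinct voids
```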

  16. Grid Computing Application for Brain Magnetic Resonance Image Processing

    NASA Astrophysics Data System (ADS)

    Valdivia, F.; Crépeault, B.; Duchesne, S.

    2012-02-01

This work emphasizes the use of grid computing and web technology for automatic post-processing of brain magnetic resonance images (MRI) in the context of neuropsychiatric (Alzheimer's disease) research. Post-acquisition image processing is achieved through the interconnection of several individual processes into pipelines. Each process has input and output data ports, options and execution parameters, and performs single tasks such as: a) extracting individual image attributes (e.g. dimensions, orientation, center of mass), b) performing image transformations (e.g. scaling, rotation, skewing, intensity standardization, linear and non-linear registration), c) performing image statistical analyses, and d) producing the necessary quality control images and/or files for user review. The pipelines are built to perform specific sequences of tasks on the alphanumeric data and MRIs contained in our database. The web application is coded in PHP and allows the creation of scripts to create, store and execute pipelines and their instances either on our local cluster or on high-performance computing platforms. To run an instance on an external cluster, the web application opens a communication tunnel through which it copies the necessary files, submits the execution commands and collects the results. We present results from system tests for the processing of a set of 821 brain MRIs from the Alzheimer's Disease Neuroimaging Initiative study via a nonlinear registration pipeline composed of 10 processes. Our results show successful execution on both local and external clusters, and a 4-fold increase in performance when using the external cluster. However, the latter's performance does not scale linearly, as queue waiting times and execution overhead increase with the number of tasks to be executed.

  17. A synoptic description of coal basins via image processing

    NASA Technical Reports Server (NTRS)

    Farrell, K. W., Jr.; Wherry, D. B.

    1978-01-01

    An existing image processing system is adapted to describe the geologic attributes of a regional coal basin. This scheme handles a map as if it were a matrix, in contrast to more conventional approaches which represent map information in terms of linked polygons. The utility of the image processing approach is demonstrated by a multiattribute analysis of the Herrin No. 6 coal seam in Illinois. Findings include the location of a resource and estimation of tonnage corresponding to constraints on seam thickness, overburden, and Btu value, which are illustrative of the need for new mining technology.
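Treating each map attribute as a co-registered matrix reduces a multiattribute resource query to a few array operations, in contrast to linked-polygon representations. The numbers below are invented for illustration and are not Herrin No. 6 data.

```python
import numpy as np

# Each map layer is a matrix over the same grid of basin cells.
thickness = np.array([[1.2, 0.5], [2.0, 1.8]])    # seam thickness, m
overburden = np.array([[40., 90.], [55., 20.]])   # depth of cover, m
btu = np.array([[11000, 11500], [10200, 12800]])  # heating value, Btu/lb

# Resource cells: thick enough, shallow enough, high enough Btu value.
minable = (thickness >= 1.0) & (overburden <= 60.0) & (btu >= 10500)

cell_area_m2 = 1.0e6          # each cell covers 1 km^2 (assumed)
density_t_per_m3 = 1.3        # in-place coal density, t/m^3 (assumed)
tonnage = (thickness * cell_area_m2 * density_t_per_m3 * minable).sum()
print(minable.sum(), "cells,", tonnage / 1e6, "Mt")  # 2 cells, ~3.9 Mt
```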

  18. Instructional image processing on a university mainframe: The Kansas system

    NASA Technical Reports Server (NTRS)

    Williams, T. H. L.; Siebert, J.; Gunn, C.

    1981-01-01

An interactive digital image processing program package was developed that runs on the University of Kansas central computer, a Honeywell Level 66 multi-processor system. The modular form of the package allows easy and rapid upgrades and extensions of the system; it is used in remote sensing courses in the Department of Geography, in regional five-day short courses for academics and professionals, and in remote sensing projects and research. The package comprises three self-contained modules of processing functions: subimage extraction and rectification; image enhancement, preprocessing and data reduction; and classification. Its use in a typical course setting is described. Availability and costs are considered.

  19. A new programming metaphor for image processing procedures

    NASA Technical Reports Server (NTRS)

    Smirnov, O. M.; Piskunov, N. E.

    1992-01-01

Most image processing systems, besides an Application Program Interface (API) which lets users write their own image processing programs, also feature a higher level of programmability. Traditionally, this is a command or macro language, which can be used to build large procedures (scripts) out of simple programs or commands. This approach, a legacy of the teletypewriter, has serious drawbacks. A command language is clumsy when (and if!) it attempts to utilize the capabilities of a multitasking or multiprocessor environment, it is barely adequate for real-time data acquisition and processing, it has a fairly steep learning curve, and its user interface is very inefficient, especially when compared to the graphical user interface (GUI) that systems running under X11 or Windows should otherwise be able to provide. All these difficulties stem from one basic problem: a command language is not a natural metaphor for an image processing procedure. A more natural metaphor, an image processing factory, is described in detail. A factory is a set of programs (applications) that execute separate operations on images, connected by pipes that carry data (images and parameters) between them. The programs function concurrently, processing images as they arrive along pipes, and querying the user for whatever other input they need. From the user's point of view, programming (constructing) factories is a lot like playing with LEGO blocks: much more intuitive than writing scripts. Focus is on some of the difficulties of implementing factory support, most notably the design of an appropriate API. It is also shown that factories retain all the functionality of a command language (including loops and conditional branches), while suffering from none of the drawbacks outlined above.
Other benefits of factory programming include self-tuning factories and the process of encapsulation, which lets a factory take the shape of a standard application both from the system and the user's point of view, and thus be used as a component of other factories. A bare-bones prototype of factory programming was implemented under the PcIPS image processing system, and a complete version (on a multitasking platform) is under development.
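The factory metaphor can be sketched with generators standing in for concurrently executing programs and pipes: each stage consumes images from its input pipe and emits results downstream, and "wiring up" a factory is just composing stages. This is an illustration of the idea, not the PcIPS implementation.

```python
# Each function below is a 'program' in the factory; the generator it
# returns is the pipe carrying images (or derived data) downstream.
def source(images):
    for im in images:
        yield im

def threshold(pipe, level):
    # Binarize each image arriving on the input pipe.
    for im in pipe:
        yield [[1 if px > level else 0 for px in row] for row in im]

def count_on(pipe):
    # Count the set pixels of each binarized image.
    for im in pipe:
        yield sum(sum(row) for row in im)

# 'Wiring up' the factory is just composing the stages:
images = [[[5, 200], [90, 250]], [[0, 0], [130, 7]]]
factory = count_on(threshold(source(images), level=100))
print(list(factory))  # [2, 1]
```

Because each stage pulls from the previous one lazily, images flow through the factory one at a time, loosely mirroring the concurrent, pipe-connected processes the abstract describes.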

  20. Image-Processing Techniques for the Creation of Presentation-Quality Astronomical Images

    E-print Network

    T. A. Rector; Z. G. Levay; L. M. Frattare; J. English; K. Pu'uohau-Pummill

    2004-12-06

    The quality of modern astronomical data, the power of modern computers and the agility of current image-processing software enable the creation of high-quality images in a purely digital form. The combination of these technological advancements has created a new ability to make color astronomical images. And in many ways it has led to a new philosophy towards how to create them. A practical guide is presented on how to generate astronomical images from research data with powerful image-processing programs. These programs use a layering metaphor that allows for an unlimited number of astronomical datasets to be combined in any desired color scheme, creating an immense parameter space to be explored using an iterative approach. Several examples of image creation are presented. A philosophy is also presented on how to use color and composition to create images that simultaneously highlight scientific detail and are aesthetically appealing. This philosophy is necessary because most datasets do not correspond to the wavelength range of sensitivity of the human eye. The use of visual grammar, defined as the elements which affect the interpretation of an image, can maximize the richness and detail in an image while maintaining scientific accuracy. By properly using visual grammar, one can imply qualities that a two-dimensional image intrinsically cannot show, such as depth, motion and energy. In addition, composition can be used to engage viewers and keep them interested for a longer period of time. The use of these techniques can result in a striking image that will effectively convey the science within the image, to scientists and to the public.
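The layering metaphor can be sketched as tinting each normalized dataset with an assigned color and summing the layers, so any number of datasets can be combined in any color scheme. The function below is a hypothetical illustration of the approach, not the software discussed.

```python
import numpy as np

def composite(datasets, colors):
    """Layer any number of co-registered grayscale datasets into one
    RGB image: each dataset is scaled to [0, 1], tinted with its
    assigned (r, g, b) color, and the tinted layers are summed and
    clipped."""
    out = np.zeros(datasets[0].shape + (3,))
    for data, color in zip(datasets, colors):
        lo, hi = data.min(), data.max()
        norm = (data - lo) / (hi - lo) if hi > lo else data * 0.0
        out += norm[..., None] * np.asarray(color, dtype=float)
    return out.clip(0.0, 1.0)

halpha = np.array([[0., 50.], [100., 200.]])   # e.g. an H-alpha exposure
oiii = np.array([[300., 0.], [150., 75.]])     # e.g. an [O III] exposure
rgb = composite([halpha, oiii], colors=[(1, 0, 0), (0, 0.5, 1)])
print(rgb.shape)  # (2, 2, 3)
```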

  1. A new infrared image processing method based on compressed sensing

    NASA Astrophysics Data System (ADS)

    Mu, Chenhao; Qiu, Yuehong; Chen, Zhi

    2013-09-01

In order to improve the efficiency of processing the large amount of data in infrared images, in this paper we develop and simulate a new infrared image processing method based on compressed sensing (CS). The basic idea behind CS is that a signal or image, unknown but supposed to be compressible by a known transform (e.g. wavelet), can be subjected to fewer measurements than the nominal number of pixels and yet be accurately reconstructed. According to the properties of wavelet transform sub-bands, we first apply a wavelet transform, converting the infrared image into a matrix of wavelet coefficients. From the features of infrared images, the character of the wavelet coefficients can be deduced: on close analysis of the coefficient data, the high-pass wavelet coefficients of the image are sparse enough to measure, while the low-pass wavelet coefficients are not appropriate to measure. So, in the second step, only the high-pass wavelet coefficients of the image are measured, while the low-pass wavelet coefficients are preserved. For reconstruction, the third step, the high-pass wavelet coefficients are recovered from the measurements using the orthogonal matching pursuit (OMP) algorithm. Finally the image is reconstructed by the inverse wavelet transform. The simulation proves that applying CS theory to infrared imagery decreases the amount of data that must be collected. Besides, compared with the original compressed sensing algorithm, simulation results demonstrate that the proposed algorithm significantly improves the quality of the recovered image.
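The OMP recovery step can be sketched on a synthetic sparse vector; here a random Gaussian matrix stands in for the measurement of the sparse high-pass wavelet coefficients, and all names and sizes are illustrative.

```python
import numpy as np

def omp(A, y, sparsity):
    """Orthogonal matching pursuit: greedily pick the column of A most
    correlated with the residual, then re-fit y on all picked columns
    by least squares. Here A plays the role of the measurement matrix
    applied to the sparse high-pass wavelet coefficients."""
    residual, support = y.copy(), []
    x = np.zeros(A.shape[1])
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

rng = np.random.default_rng(3)
n, m, k = 64, 48, 3                   # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[[5, 17, 40]] = [1.5, -2.0, 0.7]
A = rng.normal(size=(m, n)) / np.sqrt(m)   # random Gaussian sensing matrix
y = A @ x_true                              # m < n compressed measurements
x_hat = omp(A, y, k)
print(sorted(np.flatnonzero(x_hat).tolist()))  # [5, 17, 40] if support is recovered
```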

  2. Learning and teaching about the nature of science through process skills

    NASA Astrophysics Data System (ADS)

    Mulvey, Bridget K.

    This dissertation, a three-paper set, explored whether the process skills-based approach to nature of science instruction improves teachers' understandings, intentions to teach, and instructional practice related to the nature of science. The first paper examined the nature of science views of 53 preservice science teachers before and after a year of secondary science methods instruction that incorporated the process skills-based approach. Data consisted of each participant's written and interview responses to the Views of the Nature of Science (VNOS) questionnaire. Systematic data analysis led to the conclusion that participants exhibited statistically significant and practically meaningful improvements in their nature of science views and viewed teaching the nature of science as essential to their future instruction. The second and third papers assessed the outcomes of the process skills-based approach with 25 inservice middle school science teachers. For the second paper, the author collected and analyzed participants' VNOS and interview responses before, after, and 10 months after a 6-day summer professional development course. Long-term retention of more aligned nature of science views underpins teachers' ability to teach aligned conceptions to their students, yet it is rarely examined. Participants substantially improved their nature of science views after the professional development, retained those views over 10 months, and attributed their more aligned understandings to the course. The third paper addressed these participants' instructional practices based on participant-created video reflections of their nature of science and inquiry instruction. Two participant interviews and class notes also were analyzed via a constant comparative approach to ascertain if, how, and why the teachers explicitly integrated the nature of science into their instruction.
The participants recognized the process skills-based approach as instrumental in the facilitation of their improved views. Additionally, the participants saw the nature of science as an important way to help students to access core science content such as the theory of evolution by natural selection. Most impressively, participants taught the nature of science explicitly and regularly. This instruction was student-centered, involving high levels of student engagement in ways that represented applying, adapting, and innovating on what they learned in the summer professional development.

  3. Entry pupil processing approaches for exo-planet imaging

    NASA Astrophysics Data System (ADS)

    Hyland, David C.

    2005-08-01

    In contrast to standard Michelson interferometry, the idea of entry pupil processing is to convert the light gathered at each telescope (of a multi-spacecraft array) into data, then process the data from several telescopes to compute the mutual coherence values needed for image reconstruction. Some advantages are that weak beams of collected light do not have to be propagated to combiners, extreme precision in relative path length control among widely separated spacecraft is unnecessary, losses from beam splitting are eliminated, etc. This paper reports our study of several entry pupil processing approaches, including direct electric field reconstruction, optical heterodyne systems, and intensity correlation interferometry using the Hanbury Brown-Twiss effect. For all these cases and for amplitude interferometry, we present image-plane signal-to-noise ratio (SNR) results for exo-planet imaging, both for planet emissions and for imaging the limb of planets executing a transit across their stars. We particularly consider terrestrial-class planets at a range of 15 pc or less. Using the SNR and related models, we assess the relative advantages and drawbacks of all methods with respect to necessary aperture sizes, imager sensitivity, performance trends with increasing number of measurement baselines, relative performance in the visible and in the IR, relative positioning and path length control requirements, and metrology requirements. The resulting comparisons present a picture of the performance and complexity tradeoffs among several imaging system architectures. The positive conclusion of this work is that, thanks to advances in optoelectronics and signal processing, there exist a number of promising system design alternatives for exo-planet imaging.

  4. 1216 IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 9, NO. 7, JULY 2000 Sonar Image Segmentation Using an Unsupervised

    E-print Network

    Mignotte, Max

    We present an original hierarchical segmentation procedure devoted to images given by a high-resolution sonar. The sonar image is segmented into two kinds of regions: shadow

  5. High Throughput Multispectral Image Processing with Applications in Food Science.

    PubMed

    Tsakanikas, Panagiotis; Pavlidis, Dimitris; Nychas, George-John

    2015-01-01

    Machine vision has recently been gaining attention in food science and in the food industry for food quality assessment and monitoring. Within the framework of implementing Process Analytical Technology (PAT) in the food industry, image processing can be used not only to estimate and even predict food quality but also to detect adulteration. Toward these food science applications, we present a novel methodology for automated image analysis of several kinds of food products, e.g., meat, vanilla crème, and table olives, so as to increase objectivity and data reproducibility, lower the cost of information extraction, and speed up quality assessment, without human intervention. The outcome of the image processing is propagated to the downstream analysis. The developed multispectral image processing method is based on an unsupervised machine learning approach (Gaussian mixture models) and a novel unsupervised scheme of spectral band selection for optimizing the segmentation process. Through the evaluation we demonstrate its efficiency and robustness against currently available semi-manual software, showing that the developed method is a high-throughput approach appropriate for massive data extraction from food samples. PMID:26466349
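
    The GMM segmentation step can be sketched in isolation. Below is a minimal numpy-only EM fit of a two-component 1-D Gaussian mixture to synthetic pixel intensities; the paper's method is multispectral and adds unsupervised band selection, which this sketch omits.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic single-band "pixel intensities": dark background vs. bright sample.
pixels = np.concatenate([rng.normal(0.2, 0.05, 3000),   # background
                         rng.normal(0.7, 0.05, 1000)])  # foreground

def fit_gmm_1d(x, n_iter=50):
    # Two-component 1-D Gaussian mixture fitted by expectation-maximization.
    mu = np.array([x.min(), x.max()])
    sigma = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component per pixel.
        dens = (pi / (sigma * np.sqrt(2 * np.pi)) *
                np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and standard deviations.
        nk = resp.sum(axis=0)
        pi = nk / x.size
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return pi, mu, sigma, resp

pi, mu, sigma, resp = fit_gmm_1d(pixels)
labels = resp.argmax(axis=1)           # hard segmentation into two classes
print(np.round(np.sort(mu), 2))
```

    Each pixel is assigned to the component with the highest responsibility, which is the hard segmentation used downstream.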

  6. DSP filters in FPGAs for image processing applications

    NASA Astrophysics Data System (ADS)

    Taylor, Brad

    1996-10-01

    Real-time video-rate image processing requires orders of magnitude more performance than general-purpose computers can provide. ASICs deliver the required performance, but they have the drawback of fixed functionality. Field-programmable gate arrays (FPGAs) are reprogrammable SRAM-based ICs capable of real-time image processing; they combine hardware execution speeds with software programmability. An FPGA program creates a custom data processor that executes the equivalent of hundreds to thousands of lines of C code on the same clock tick. FPGAs emulate circuits that would normally be built as ASICs. Multiple real-time video streams can be processed in Giga Operations' Spectrum Reconfigurable Computing (RC) Platform™. The Virtual Bus Architecture™ enables the same hardware to be configured into many image processing architectures, including 32-bit pipelines, global busses, rings, and systolic arrays. This allows an efficient mapping of data flows and memory accesses for many image processing applications and the implementation of many real-time DSP filters, including convolutions, morphological operators, and recoloring and resampling algorithms. FPGAs provide significant price/performance benefits over ASICs where time to market, cost to market, and technical risk are issues, and FPGA designs migrate efficiently and easily into ASICs for downstream cost reduction.

  7. Automatic DNA Diagnosis for 1D Gel Electrophoresis Images using Bio-image Processing Technique

    PubMed Central

    2015-01-01

    Background DNA gel electrophoresis is a molecular biology technique for separating different sizes of DNA fragments. Applications of DNA gel electrophoresis include DNA fingerprinting (genetic diagnosis), size estimation of DNA, and DNA separation for Southern blotting. Accurate interpretation of DNA banding patterns from electrophoretic images can be laborious and error-prone when a large number of bands are interrogated manually. Although many bio-imaging techniques have been proposed, none of them can fully automate the typing of DNA owing to the complexity of the migration patterns typically obtained. Results We developed an image-processing tool that automatically calls genotypes from DNA gel electrophoresis images. The image processing workflow comprises three main steps: 1) lane segmentation, 2) extraction of DNA bands, and 3) band genotyping classification. The tool was originally intended to facilitate large-scale genotyping analysis of sugarcane cultivars. We tested the proposed tool on 10 gel images (433 cultivars) obtained from polyacrylamide gel electrophoresis (PAGE) of PCR amplicons for detecting intron length polymorphisms (ILP) at one locus of sugarcane. These gel images presented many challenges for automated lane/band segmentation, including lane distortion, band deformity, a high degree of background noise, and bands that are very close together (doublets). Using the proposed bio-imaging workflow, lanes and the DNA bands contained within them are properly segmented, even for adjacent bands with aberrant migration that cannot be separated by conventional techniques. The software, called GELect, automatically performs genotype calling on each lane by comparing it with an all-banding reference, which was created by clustering the existing bands into a non-redundant set of reference bands. The automated genotype calling results were verified by independent manual typing by molecular biologists.
Conclusions This work presents an automated genotyping tool for DNA gel electrophoresis images, called GELect, which was written in Java and made available through the ImageJ framework. With a novel automated image processing workflow, the tool can accurately segment lanes from a gel matrix and intelligently extract distorted and even doublet bands that are difficult to identify with existing image processing tools. Consequently, genotyping from DNA gel electrophoresis can be performed automatically, allowing users to efficiently conduct large-scale DNA fingerprinting via DNA gel electrophoresis. The software is freely available from http://www.biotec.or.th/gi/tools/gelect. PMID:26681167

  8. Study of optical techniques for the Ames unitary wind tunnel: Digital image processing, part 6

    NASA Technical Reports Server (NTRS)

    Lee, George

    1993-01-01

    A survey of digital image processing techniques and processing systems for aerodynamic images has been conducted. These images covered many types of flows and were generated by many types of flow diagnostics. These include laser vapor screens, infrared cameras, laser holographic interferometry, Schlieren, and luminescent paints. Some general digital image processing systems, imaging networks, optical sensors, and image computing chips were briefly reviewed. Possible digital imaging network systems for the Ames Unitary Wind Tunnel were explored.

  9. Optical image classification using optical/digital hybrid image-processing systems

    SciTech Connect

    Li Xiaoyang.

    1990-01-01

    Offering parallel and real-time operation, optical image classification is becoming a general technique for solving real-life image classification problems. This thesis investigates several algorithms for optical realization. Compared to other statistical pattern recognition algorithms, the Kittler-Young transform can provide more discriminative feature spaces for image classification. The author applies the Kittler-Young transform to image classification and implements it on optical systems. A feature selection criterion is designed for the application of the Kittler-Young transform to image classification. Realizations of the Kittler-Young transform on both a joint transform correlator and a matrix multiplier are conducted in succession, and experiments applying this technique to two-category and three-category problems are demonstrated. To combine the advantages of statistical pattern recognition algorithms and neural network models, processes using the two methods are studied. The Karhunen-Loeve Hopfield model is developed for image classification; it significantly improves system capacity and the ability to use image structures for more discriminative classification. As another such hybrid process, the author proposes the feature extraction perceptron, in which the application of feature extraction techniques shortens the perceptron's learning time.

  10. Content and Process in a Teaching Workshop for Faculty and Doctoral Students

    ERIC Educational Resources Information Center

    Rinfrette, Elaine S.; Maccio, Elaine M.; Coyle, James P.; Jackson, Kelly F.; Hartinger-Saunders, Robin M.; Rine, Christine M.; Shulman, Lawrence

    2015-01-01

    Teaching in higher education is often not addressed in doctoral education, even though many doctoral graduates will eventually teach. This article describes a biweekly teaching workshop, presents pitfalls and challenges that beginning instructors face, and advocates pedagogical training for doctoral students. Led by a well-known social work…

  11. Image processing of metal surface with structured light

    NASA Astrophysics Data System (ADS)

    Luo, Cong; Feng, Chang; Wang, Congzheng

    2014-09-01

    In a structured light vision measurement system, the ideal image of a structured light stripe contains, apart from the black background, only the gray-level information at the position of the stripe. The actual image, however, contains image noise, a complex background, and other content that does not belong to the stripe and interferes with the useful information. To extract the stripe center on a metal surface accurately, a new processing method is presented. Adaptive median filtering first removes most of the noise, and the noise introduced by the CCD camera and the measurement environment is further removed with a difference-image method. To highlight fine details and enhance the blurred regions between the stripe and the noise, a sharpening algorithm is used that combines the best features of the Laplacian and Sobel operators. Morphological opening and closing operations compensate for the loss of information. Experimental results show that this method is effective: it not only suppresses noise but also heightens contrast, which benefits subsequent processing.
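
    A minimal sketch of the denoise-then-sharpen stages is given below; it assumes a plain 3x3 median filter and Laplacian sharpening, while the paper additionally uses adaptive windows, difference images, Sobel blending, and morphology.

```python
import numpy as np

def median3(img):
    # 3x3 median filter (edge-padded) to suppress impulse noise.
    p = np.pad(img, 1, mode="edge")
    stack = [p[i:i + img.shape[0], j:j + img.shape[1]]
             for i in range(3) for j in range(3)]
    return np.median(np.stack(stack), axis=0)

def laplacian_sharpen(img, amount=1.0):
    # Subtract the Laplacian to boost fine detail (unsharp-style enhancement).
    p = np.pad(img, 1, mode="edge")
    lap = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
           - 4 * p[1:-1, 1:-1])
    return img - amount * lap

# Synthetic light-stripe image with two salt-noise pixels.
img = np.zeros((32, 32))
img[:, 14:18] = 1.0                    # bright vertical stripe
noisy = img.copy()
noisy[5, 5] = 1.0                      # isolated noise
noisy[20, 25] = 1.0

denoised = median3(noisy)
sharpened = laplacian_sharpen(denoised)
print(denoised[5, 5], denoised[20, 25])   # isolated spikes removed
```

    The median pass removes the isolated spikes while leaving the stripe interior intact, after which sharpening steepens the stripe borders.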

  12. Ice images processing interface for automatic features extraction

    NASA Astrophysics Data System (ADS)

    Tardif, Pierre M.

    2001-02-01

    The Canadian Coast Guard has the mandate to maintain the navigability of the St. Lawrence Seaway and must prevent ice jam formation. Radar, sonar sensors, and cameras are used to verify ice movement and keep a record of pertinent data. The cameras are placed at strategic locations along the seaway, and their images are processed and saved for future reference. The Ice Images Processing Interface (IIPI) is an integral part of the Ices Integrated System (IIS). This software processes images to extract the ice speed, concentration, roughness, and rate of flow. Ice concentration is computed from image segmentation using color models and a priori information. Speed is obtained from a region-matching algorithm. Both concentration and speed calculations are complex, since they require a calibration step involving on-site measurements. Color texture features provide an ice roughness estimate. Rate of flow uses ice thickness, which is estimated from sonar sensors on the river floor. Our paper presents how we modeled and designed the IIPI, the issues involved, and its future. For more reliable results, we suggest that meteorological data be provided, changes in camera orientation be accounted for, sun reflections be anticipated, and more a priori information, such as the radar images available at some sites, be included.

  13. Parallel-Processing Software for Creating Mosaic Images

    NASA Technical Reports Server (NTRS)

    Klimeck, Gerhard; Deen, Robert; McCauley, Michael; DeJong, Eric

    2008-01-01

    A computer program implements parallel processing for nearly real-time creation of panoramic mosaics of images of terrain acquired by video cameras on an exploratory robotic vehicle (e.g., a Mars rover). Because the original images are typically acquired at various camera positions and orientations, it is necessary to warp the images into the reference frame of the mosaic before stitching them together to create the mosaic. [Also see "Parallel-Processing Software for Correlating Stereo Images," Software Supplement to NASA Tech Briefs, Vol. 31, No. 9 (September 2007) page 26.] The warping algorithm in this computer program reflects the considerations that (1) for every pixel in the desired final mosaic, a good corresponding point must be found in one or more of the original images and (2) for this purpose, one needs a good mathematical model of the cameras and a good correlation of individual pixels with respect to their positions in three dimensions. The desired mosaic is divided into slices, each of which is assigned to one of a number of central processing units (CPUs) operating simultaneously. The results from the CPUs are gathered and placed into the final mosaic. The time taken to create the mosaic depends upon the number of CPUs, the speed of each CPU, and whether a local or a remote data-staging mechanism is used.
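
    The slice-per-worker divide-and-gather pattern described above can be illustrated with a toy example. This sketch uses Python threads and an identity "warp" purely to show the pattern; the actual program performs camera-model-based warping on multiple CPUs.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def warp_slice(args):
    # Placeholder "warp": each worker fills its slice of the mosaic by
    # sampling the source image at precomputed coordinates (here: identity).
    src, rows = args
    return rows, src[rows[0]:rows[1], :] * 1.0

src = np.arange(64 * 64, dtype=float).reshape(64, 64)
mosaic = np.zeros_like(src)

# Divide the mosaic into horizontal slices, one per worker.
n_workers = 4
bounds = np.linspace(0, src.shape[0], n_workers + 1, dtype=int)
tasks = [(src, (bounds[i], bounds[i + 1])) for i in range(n_workers)]

with ThreadPoolExecutor(max_workers=n_workers) as pool:
    for rows, tile in pool.map(warp_slice, tasks):
        mosaic[rows[0]:rows[1], :] = tile    # gather results into final mosaic

print(np.array_equal(mosaic, src))
```

    As in the article, total wall time scales with the number of workers, the per-slice cost, and the cost of staging the source data to each worker.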

  14. Woods Hole Image Processing System Software implementation; using NetCDF as a software interface for image processing

    USGS Publications Warehouse

    Paskevich, Valerie F.

    1992-01-01

    The Branch of Atlantic Marine Geology has been involved in the collection, processing, and digital mosaicking of high-, medium-, and low-resolution side-scan sonar data during the past 6 years. In the past, processing and digital mosaicking were accomplished with a dedicated, shore-based computer system. With the need to process side-scan data in the field, and with the increased power and reduced cost of major workstations, a need was identified for an image processing package on a UNIX-based computer system that could be used in the field as well as be more generally available to Branch personnel. This report describes the initial development of that package, referred to as the Woods Hole Image Processing System (WHIPS). The software was developed using the Unidata NetCDF software interface to allow data to be more readily portable between different computer operating systems.

  15. Self-Evaluating in Teaching Process as a Factor of Improved Engagement and Success of the Pupils

    NASA Astrophysics Data System (ADS)

    Misic, T.; Nesic, Lj.; Djordjevic, G.

    2007-04-01

    Psychological and motivational factors are very important for work in general, and also for engagement in the teaching process. The goal of introducing the self-evaluating list into schools is to motivate as many pupils as possible to be active in class. The whole concept of the list lies in its application: it emphasizes the role of the pupils and their own evaluation of their work and engagement during the teaching process. In this article we present the main results of an investigation concerning the application of the SEL (self-evaluating list) in some primary schools in Niš.

  16. An image-processing program for automated counting

    USGS Publications Warehouse

    Cunningham, D.J.; Anderson, W.H.; Anthony, R.M.

    1996-01-01

    An image-processing program developed by the National Institutes of Health, IMAGE, was modified in a cooperative project between remote sensing specialists at the Ohio State University Center for Mapping and scientists at the Alaska Science Center to facilitate estimating numbers of black brant (Branta bernicla nigricans) in flocks at Izembek National Wildlife Refuge. The modified program, DUCK HUNT, runs on Apple computers. Modifications provide users with a pull-down menu that optimizes image quality; identifies objects of interest (e.g., brant) by spectral, morphometric, and spatial parameters defined interactively by users; counts and labels objects of interest; and produces summary tables. Images from digitized photography, videography, and high-resolution digital photography have been used with this program to count various species of waterfowl.
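
    Counting thresholded objects of this kind reduces to connected-component labeling. A minimal flood-fill counter is sketched below; it is illustrative only, not the DUCK HUNT code, and omits the spectral and morphometric screening the program applies.

```python
import numpy as np
from collections import deque

def count_objects(mask):
    # Count 4-connected foreground blobs via breadth-first flood fill.
    seen = np.zeros_like(mask, dtype=bool)
    count = 0
    for i, j in zip(*np.nonzero(mask)):
        if seen[i, j]:
            continue
        count += 1
        queue = deque([(i, j)])
        seen[i, j] = True
        while queue:
            y, x = queue.popleft()
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not seen[ny, nx]):
                    seen[ny, nx] = True
                    queue.append((ny, nx))
    return count

# Thresholded aerial frame with three separated "birds".
img = np.zeros((20, 20))
img[2:4, 2:4] = 1
img[10:13, 5:7] = 1
img[15:18, 14:16] = 1
print(count_objects(img > 0.5))   # -> 3
```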

  17. Image-plane processing for improved computer vision

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Fales, C. L.; Park, S. K.; Samms, R. W.

    1984-01-01

    The proper combination of optical design with image-plane processing, as in the mechanism of human vision, was examined as a way to improve the performance of sensor-array imaging systems for edge detection and location. Two-dimensional bandpass filtering during image formation optimizes edge enhancement and minimizes data transmission. It also permits control of the spatial imaging system response to trade off edge enhancement against sensitivity at low light levels. It is shown that most of the information, up to about 94%, is contained in the signal intensity transitions, from which the locations of edges are determined for raw primal sketches. Shading the lens transmittance to increase depth of field, and using a hexagonal instead of a square sensor-array lattice to decrease sensitivity to edge orientation, improves edge information by about 10%.

  18. Infective endocarditis detection through SPECT/CT images digital processing

    NASA Astrophysics Data System (ADS)

    Moreno, Albino; Valdés, Raquel; Jiménez, Luis; Vallejo, Enrique; Hernández, Salvador; Soto, Gabriel

    2014-03-01

    Infective endocarditis (IE) is a difficult-to-diagnose pathology, since its manifestation in patients is highly variable. In this work, a semiautomatic algorithm based on digital processing of SPECT images was proposed for the detection of IE, using a CT image volume as a spatial reference. The heart/lung ratio was calculated from the SPECT image information. There were no statistically significant differences between the heart/lung ratios of a group of patients diagnosed with IE (2.62+/-0.47) and a group of healthy control subjects (2.84+/-0.68). However, it is necessary to increase the study sample of both the individuals diagnosed with IE and the control subjects, as well as to improve the image quality.
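
    The heart/lung ratio reduces to a ratio of mean counts over two regions of interest. A toy sketch with synthetic data follows; the mask shapes and uptake values are invented for illustration and do not come from the study.

```python
import numpy as np

def region_ratio(volume, heart_mask, lung_mask):
    # Ratio of mean counts inside the heart ROI to the lung ROI.
    return volume[heart_mask].mean() / volume[lung_mask].mean()

# Toy SPECT slice: uniform lung background with a hotter cardiac region.
slice_ = np.full((64, 64), 10.0)
heart = np.zeros((64, 64), dtype=bool)
heart[24:40, 24:40] = True
slice_[heart] = 28.0                   # elevated uptake in the heart ROI
lung = ~heart

ratio = region_ratio(slice_, heart, lung)
print(ratio)                           # -> 2.8
```

    In the study the ROIs would be drawn on the CT reference and transferred to the co-registered SPECT volume, rather than defined synthetically as here.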

  19. Collection and processing data for high quality CCD images.

    SciTech Connect

    Doerry, Armin Walter

    2007-03-01

    Coherent Change Detection (CCD) with Synthetic Aperture Radar (SAR) images is a technique whereby very subtle temporal changes can be discerned in a target scene. However, optimal performance requires carefully matching data collection geometries and adjusting the processing to compensate for imprecision in the collection geometries. Tolerances in the precision of the data collection are discussed, and anecdotal advice is presented for optimum CCD performance. Processing considerations are also discussed.
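
    CCD rests on the sample coherence between two co-registered complex SAR images, estimated over small windows. A windowed-coherence sketch over synthetic data follows; the window size and scene are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def coherence(f, g, w=5):
    # Sample coherence |<f g*>| / sqrt(<|f|^2><|g|^2>) over w x w windows,
    # computed with an integral-image (summed-area table) trick.
    def boxsum(a):
        c = np.cumsum(np.cumsum(a, 0), 1)
        c = np.pad(c, ((1, 0), (1, 0)))
        return c[w:, w:] - c[:-w, w:] - c[w:, :-w] + c[:-w, :-w]
    num = np.abs(boxsum(f * np.conj(g)))
    den = np.sqrt(boxsum(np.abs(f) ** 2) * boxsum(np.abs(g) ** 2))
    return num / den

# Two complex SAR "images": identical scene, small change in one corner.
scene = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
before, after = scene, scene.copy()
after[40:56, 40:56] = (rng.standard_normal((16, 16))
                       + 1j * rng.standard_normal((16, 16)))  # changed patch

gamma = coherence(before, after)
print(gamma[10, 10] > 0.99, gamma[45, 45] < 0.7)
```

    Unchanged areas give coherence near 1, while the disturbed patch decorrelates; mismatched collection geometries degrade coherence everywhere, which is why the tolerances discussed in the report matter.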

  20. Processing techniques for digital sonar images from GLORIA.

    USGS Publications Warehouse

    Chavez, P.S., Jr.

    1986-01-01

    Image processing techniques have been developed to handle data from one of the newest members of the remote sensing family of digital imaging systems. This paper discusses software to process data collected by the GLORIA (Geological Long Range Inclined Asdic) sonar imaging system, designed and built by the Institute of Oceanographic Sciences (IOS) in England, to correct for both geometric and radiometric distortions that exist in the original 'raw' data. Preprocessing algorithms that are GLORIA-specific include corrections for slant-range geometry, water column offset, aspect ratio distortion, changes in the ship's velocity, speckle noise, and shading problems caused by the power drop-off which occurs as a function of range.-from Author
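
    The slant-range correction maps each sample to ground range via the water-column geometry, ground = sqrt(slant^2 - depth^2), followed by resampling onto a uniform grid. A hedged numpy sketch (the depth, ranges, and backscatter values are invented, and GLORIA's full chain adds the other corrections listed above):

```python
import numpy as np

def slant_to_ground(samples, slant_ranges, depth):
    # Re-map slant-range samples to a uniform ground-range grid:
    # ground = sqrt(slant^2 - depth^2) for slant ranges beyond nadir.
    ground = np.sqrt(slant_ranges**2 - depth**2)
    grid = np.linspace(ground[0], ground[-1], samples.size)
    return grid, np.interp(grid, ground, samples)

depth = 100.0                           # sonar height above seafloor (m)
slant = np.linspace(120.0, 800.0, 256)  # slant ranges of raw pixels (m)
raw = np.sin(slant / 40.0)              # stand-in backscatter samples

grid, corrected = slant_to_ground(raw, slant, depth)
print(grid[0] < slant[0])               # near-range pixels pulled inward
```

    The compression is strongest near nadir, which is why uncorrected sonar imagery looks squeezed toward the track line.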

  1. Towards a Platform for Image Acquisition and Processing on RASTA

    NASA Astrophysics Data System (ADS)

    Furano, Gianluca; Guettache, Farid; Magistrati, Giorgio; Tiotto, Gabriele

    2013-08-01

    This paper presents the architecture of a platform for image acquisition and processing based on commercial hardware and space qualified hardware. The aim is to extend the Reference Architecture Test-bed for Avionics (RASTA) system in order to obtain a Test-bed that allows testing different hardware and software solutions in the field of image acquisition and processing. The platform will allow the integration of space qualified hardware and Commercial Off The Shelf (COTS) hardware in order to test different architectural configurations. The first implementation is being performed on a low cost commercial board and on the GR712RC board based on the Dual Core Leon3 fault tolerant processor. The platform will include an actuation module with the aim of implementing a complete pipeline from image acquisition to actuation, making possible the simulation of a real case scenario involving acquisition and actuation.

  2. Personal Computer (PC) based image processing applied to fluid mechanics

    NASA Technical Reports Server (NTRS)

    Cho, Y.-C.; Mclachlan, B. G.

    1987-01-01

    A PC based image processing system was employed to determine the instantaneous velocity field of a two-dimensional unsteady flow. The flow was visualized using a suspension of seeding particles in water, and a laser sheet for illumination. With a finite time exposure, the particle motion was captured on a photograph as a pattern of streaks. The streak pattern was digitized and processed using various imaging operations, including contrast manipulation, noise cleaning, filtering, statistical differencing, and thresholding. Information concerning the velocity was extracted from the enhanced image by measuring the length and orientation of the individual streaks. The fluid velocities deduced from the randomly distributed particle streaks were interpolated to obtain velocities at uniform grid points. For the interpolation a simple convolution technique with an adaptive Gaussian window was used. The results are compared with a numerical prediction by a Navier-Stokes computation.
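
    The final interpolation step, Gaussian-weighted gridding of the scattered streak velocities, can be sketched as follows; a fixed-width window stands in for the paper's adaptive one, and the velocity field is synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)

def gaussian_interp(points, values, grid_x, grid_y, sigma=0.1):
    # Gaussian-weighted average of scattered samples at each grid node.
    gx, gy = np.meshgrid(grid_x, grid_y)
    d2 = ((gx[..., None] - points[:, 0]) ** 2
          + (gy[..., None] - points[:, 1]) ** 2)
    w = np.exp(-d2 / (2 * sigma**2))
    return (w * values).sum(axis=-1) / w.sum(axis=-1)

# Scattered streak velocities sampled from the smooth field u(x, y) = x + y.
pts = rng.random((500, 2))
u = pts[:, 0] + pts[:, 1]

axis = np.linspace(0.3, 0.7, 5)        # interior grid, away from the edges
u_grid = gaussian_interp(pts, u, axis, axis)
err = np.max(np.abs(u_grid - (axis[None, :] + axis[:, None])))
print(err < 0.1)
```

    An adaptive window, as in the paper, would widen sigma where streaks are sparse and narrow it where they are dense.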

  3. Spatial processing for coherent noise reduction in ultrasonic imaging

    E-print Network

    Saniie, Jafar

    Spatial processing for coherent noise reduction in ultrasonic imaging. Nihat M. Bilgutay. PACS numbers: 43.60.Gk, 43.35.2c. INTRODUCTION: In recent years, ultrasonic nondestructive testing (NDT) … because of its feasibility, versatility, and effectiveness. However, the quality of ultrasonic images is often

  4. Human visual processing oscillates: Evidence from a classification image technique

    E-print Network

    Gosselin, Frédéric

    Human visual processing oscillates: Evidence from a classification image technique. Caroline Blais … with the environment--still need to be clarified. We systematically modulated the signal-to-noise ratio of faces … by systematically modulating the signal-to-noise ratio of stimuli through time and by examining how it impacts

  5. SAMPLING THEORY IN SIGNAL AND IMAGE PROCESSING # 2004 SAMPLING PUBLISHING

    E-print Network

    Teschke, Gerd

    SAMPLING THEORY IN SIGNAL AND IMAGE PROCESSING, © 2004 SAMPLING PUBLISHING, Vol. 3, No. 3, Sept. 2004. Key words and phrases: Parameter rule, Sobolev embedding operator, Tikhonov regularization, statistical noise model. 2000 AMS Mathematics Subject Classification: 65C20, 65J10, 65J20, 65J22, 94A12, 94A20. 1 Introduction

  6. Engineering graphics and image processing at Langley Research Center

    NASA Technical Reports Server (NTRS)

    Voigt, Susan J.

    1985-01-01

    The objective of making raster graphics and image processing techniques readily available for the analysis and display of engineering and scientific data is stated. The approach is to develop and acquire tools and skills which are applied to support research activities in such disciplines as aeronautics and structures. A listing of grants and key personnel are given.

  7. Parallel asynchronous hardware implementation of image processing algorithms

    NASA Technical Reports Server (NTRS)

    Coon, Darryl D.; Perera, A. G. U.

    1990-01-01

    Research is being carried out on hardware for a new approach to focal plane processing. The hardware involves silicon injection mode devices. These devices provide a natural basis for parallel asynchronous focal plane image preprocessing. The simplicity and novel properties of the devices would permit an independent analog processing channel to be dedicated to every pixel. A laminar architecture built from arrays of the devices would form a two-dimensional (2-D) array processor with a 2-D array of inputs located directly behind a focal plane detector array. A 2-D image data stream would propagate in neuron-like asynchronous pulse-coded form through the laminar processor. No multiplexing, digitization, or serial processing would occur in the preprocessing stage. High performance is expected, based on pulse coding of input currents down to one picoampere with noise referred to input of about 10 femtoamperes. Linear pulse coding has been observed for input currents ranging up to seven orders of magnitude. Low power requirements suggest utility in space and in conjunction with very large arrays. Very low dark current and multispectral capability are possible because of hardware compatibility with the cryogenic environment of high performance detector arrays. The aforementioned hardware development effort is aimed at systems which would integrate image acquisition and image processing.

  8. IP core design of template matching algorithm in image processing

    NASA Astrophysics Data System (ADS)

    Zhu, Quanqing; Zou, Xuecheng; Dong, Zhenzhong; Huang, Feng; Shen, Xubang

    2001-09-01

    This paper presents the design and implementation of template matching IP cores for image processing. An enhanced moment-preserving pattern matching (MPPM) template matching algorithm was adopted for efficient hardware implementation. The cores were coded in Verilog HDL for modularity and portability, and were validated on an XC4052XL FPGA and a XESS XS40 prototyping board.
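
    MPPM details aside, the sliding-window search that such a core implements in hardware looks like this in software form. The sketch below uses a plain sum-of-squared-differences match, not the enhanced MPPM algorithm, and the image data is synthetic.

```python
import numpy as np

def match_template(img, tpl):
    # Exhaustive sum-of-squared-differences template search; MPPM replaces
    # the per-window metric with moment-preserving features, but the
    # sliding-window scan an IP core pipelines is the same.
    H, W = img.shape
    h, w = tpl.shape
    best, pos = np.inf, (0, 0)
    for i in range(H - h + 1):
        for j in range(W - w + 1):
            ssd = np.sum((img[i:i + h, j:j + w] - tpl) ** 2)
            if ssd < best:
                best, pos = ssd, (i, j)
    return pos

rng = np.random.default_rng(4)
img = rng.random((32, 32))
tpl = img[12:20, 7:15].copy()          # template cut from a known position
print(match_template(img, tpl))        # -> (12, 7)
```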

  9. DIAGNOSTIC OF MELANOMAS VIA IMAGE PROCESSING 0. Hochmuth, Beate Meffert

    E-print Network

    Freytag, Johann-Christoph

    DIAGNOSTIC OF MELANOMAS VIA IMAGE PROCESSING. O. Hochmuth, Beate Meffert, Department of Electrical … in dermatology as a diagnostic method for malignant melanomas and other skin diseases [1,2,3]. The method helps … of the melanoma is shown in Fig. 2. The approximation is possible by different procedures, e.g. by a harmonic

  10. Application of image processing to randomly rough surfaces

    NASA Astrophysics Data System (ADS)

    Zhang, Xiangdong; Wu, Chengke

    1997-10-01

    Ocean surfaces should be considered randomly rough surfaces at the IR and optical bands. A geometric model of the sea surface has been obtained by using image processing. Based on electromagnetic scattering theory, the laser radar cross section of the sea surface is analyzed in the IR band, and the scattering properties of the sea surface have been obtained.

  11. Duality Principles in Image Processing and Analysis Luc Florack

    E-print Network

    Utrecht, Universiteit

    Duality Principles in Image Processing and Analysis. Luc Florack, Department of Computer Science. Duality is a well-established concept in quantum physics. It formalises the fact that what one observes … Since their physical representations are by definition identical, it is impossible to segregate metamers

  12. Image processing for a high-resolution optoelectronic retinal prosthesis.

    PubMed

    Asher, Alon; Segal, William A; Baccus, Stephen A; Yaroslavsky, Leonid P; Palanker, Daniel V

    2007-06-01

    In an effort to restore visual perception in retinal diseases such as age-related macular degeneration or retinitis pigmentosa, a design was recently presented for a high-resolution optoelectronic retinal prosthesis having thousands of electrodes. This system requires real-time image processing fast enough to convert a video stream of images into electrical stimulus patterns that can be properly interpreted by the brain. Here, we present image-processing and tracking algorithms for a subretinal implant designed to stimulate the second neuron in the visual pathway, bypassing the degenerated first synaptic layer. For this task, we have developed and implemented: 1) a tracking algorithm that determines the implant's position in each frame; 2) image cropping outside of the implant boundaries; 3) a geometrical transformation that distorts the image appropriately for the geometry of the fovea; 4) spatio-temporal image filtering to reproduce the visual processing normally occurring in photoreceptors and at the photoreceptor-bipolar cell synapse; and 5) conversion of the filtered visual information into a pattern of electrical current. Methods to accelerate real-time transformations include the exploitation of data redundancy in the time domain, and the use of precomputed lookup tables that are adjustable to retinal physiology and allow flexible control of stimulation parameters. A software implementation of these algorithms processes natural visual scenes with sufficient speed for real-time operation. This computationally efficient algorithm resembles, in some aspects, biological strategies of efficient coding in the retina and could provide a refresh rate higher than fifty frames per second on our system. PMID:17554819
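
The precomputed-lookup-table step in such a pipeline can be sketched as follows. The sigmoid transfer function, its parameters, and the function names are invented for illustration; they are not the authors' values.

```python
# Hypothetical sketch of a lookup-table (LUT) stage: filtered 8-bit pixel
# values are mapped to stimulation currents through a table computed once,
# so the per-frame cost is a single indexed read per electrode.
import math

def build_current_lut(levels=256, i_max_ua=50.0, midpoint=128.0, slope=0.05):
    """Precompute current (microamps) for every possible pixel value."""
    return [i_max_ua / (1.0 + math.exp(-slope * (v - midpoint)))
            for v in range(levels)]

def frame_to_currents(frame, lut):
    """Convert a filtered 8-bit frame into a per-electrode current pattern."""
    return [[lut[v] for v in row] for row in frame]

lut = build_current_lut()
pattern = frame_to_currents([[0, 128, 255]], lut)
```

Because the table is adjustable offline, stimulation parameters can be retuned without touching the per-frame code path, which matches the flexibility the abstract describes.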

  13. Limiting liability via high-resolution image processing

    NASA Astrophysics Data System (ADS)

    Greenwade, L. E.; Overlin, Trudy K.

    1997-01-01

    The utilization of high-resolution image processing allows forensic analysts and visualization scientists to assist detectives by enhancing field photographs, and by providing the tools and training to increase the quality and usability of field photos. Through the use of digitized photographs and computerized enhancement software, field evidence can be obtained and processed as 'evidence ready,' even in poor lighting and shadowed conditions or darkened rooms. These images, which are most often unusable when taken with standard camera equipment, can be shot in the worst photographic conditions and still be processed into usable evidence. Visualization scientists have taken digital photographic image processing and moved crime scene photography into the technology age. The use of high-resolution technology will assist law enforcement in making better use of crime scene photography and in the positive identification of prints. Valuable courtroom and investigation time can be saved and better served by this accurate, performance-based process. Inconclusive evidence does not lead to convictions. Enhancement addresses a major problem with crime scene photos: images that, if taken with standard equipment and without the benefit of enhancement software, would be inconclusive, allowing guilty parties to go free for lack of evidence.

  14. Recent developments at JPL in the application of digital image processing techniques to astronomical images

    NASA Technical Reports Server (NTRS)

    Lorre, J. J.; Lynn, D. J.; Benton, W. D.

    1976-01-01

    Several techniques of a digital image-processing nature are illustrated which have proved useful in visual analysis of astronomical pictorial data. Processed digital scans of photographic plates of Stephan's Quintet and NGC 4151 are used as examples to show how faint nebulosity is enhanced by high-pass filtering, how foreground stars are suppressed by linear interpolation, and how relative color differences between two images recorded on plates with different spectral sensitivities can be revealed by generating ratio images. Analyses are outlined which are intended to compensate partially for the blurring effects of the atmosphere on images of Stephan's Quintet and to obtain more detailed information about Saturn's ring structure from low- and high-resolution scans of the planet and its ring system. The employment of a correlation picture to determine the tilt angle of an average spectral line in a low-quality spectrum is demonstrated for a section of the spectrum of Uranus.
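
Two of the operations named above, high-pass filtering and ratio imaging, are simple enough to illustrate directly. This is a toy sketch under simplified assumptions (3x3 local mean, edge clamping), not the JPL processing chain.

```python
# High-pass filtering (pixel minus local mean) enhances faint structure;
# a ratio image of two registered exposures reveals relative differences.
def local_mean(img, r, c):
    """3x3 neighborhood mean with edge clamping."""
    rows, cols = len(img), len(img[0])
    vals = [img[min(max(r + dr, 0), rows - 1)][min(max(c + dc, 0), cols - 1)]
            for dr in (-1, 0, 1) for dc in (-1, 0, 1)]
    return sum(vals) / 9.0

def high_pass(img):
    return [[img[r][c] - local_mean(img, r, c)
             for c in range(len(img[0]))] for r in range(len(img))]

def ratio_image(img_a, img_b, eps=1e-6):
    """Pixel-wise ratio of two registered images; eps avoids division by 0."""
    return [[a / (b + eps) for a, b in zip(ra, rb)]
            for ra, rb in zip(img_a, img_b)]
```

A flat field passes through the high-pass filter as zeros, while any faint deviation from the local background survives, which is why the technique pulls nebulosity out of a bright sky level.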

  15. Instant super-resolution imaging in live cells and embryos via analog image processing

    PubMed Central

    York, Andrew G.; Chandris, Panagiotis; Nogare, Damian Dalle; Head, Jeffrey; Wawrzusin, Peter; Fischer, Robert S.; Chitnis, Ajay; Shroff, Hari

    2013-01-01

    Existing super-resolution fluorescence microscopes compromise acquisition speed to provide subdiffractive sample information. We report an analog implementation of structured illumination microscopy that enables 3D super-resolution imaging with 145 nm lateral and 350 nm axial resolution, at acquisition speeds up to 100 Hz. By performing image processing operations optically instead of digitally, we removed the need to capture, store, and combine multiple camera exposures, increasing data acquisition rates 10–100x over other super-resolution microscopes and acquiring and displaying super-resolution images in real-time. Low excitation intensities allow imaging over hundreds of 2D sections, and combined physical and computational sectioning allow similar depth penetration to confocal microscopy. We demonstrate the capability of our system by imaging fine, rapidly moving structures including motor-driven organelles in human lung fibroblasts and the cytoskeleton of flowing blood cells within developing zebrafish embryos. PMID:24097271

  16. Integrated Optics for Planar imaging and Optical Signal Processing

    NASA Astrophysics Data System (ADS)

    Song, Qi

    Silicon photonics is a subject of growing interest, with the potential of delivering planar electro-optical devices with chip-scale integration. Silicon-on-insulator (SOI) technology has provided a marvelous platform for the photonics industry because of its integration capability with CMOS circuits and its countless nonlinear applications in optical signal processing. This thesis focuses on the investigation of planar imaging techniques on the SOI platform and their potential applications in ultra-fast optical signal processing. In the first part, a general review and background introduction to integrated photonic circuits and planar imaging techniques are provided. In chapter 2, a planar imaging platform is realized with a silicon photodiode on an SOI chip. A silicon photodiode on a waveguide provides a high numerical aperture for an imaging transceiver pixel. An erbium-doped Y2O3 particle is excited by a 1550 nm laser and the fluorescent image is obtained with the assistance of a scanning system; the fluorescence image is then reconstructed using image-deconvolution techniques. Under photovoltaic mode, an on-chip photodiode and an external PIN photodiode achieve a similar resolution of about 5 μm. In chapter 3, a time-stretching technique is extended to the spatial domain to realize a 2D imaging system as an ultrafast imaging tool. The system is evaluated through theoretical calculation, and experimental results verify its capability to image a micron-sized particle or a fingerprint; dynamic information about a moving object is also obtained with a correlation algorithm. In chapter 4, an optical leaky-wave antenna based on an SOI waveguide is utilized for imaging applications; extensive numerical studies have been conducted, and the theoretical explanation is supported by leaky-wave theory. Highly directive radiation is obtained from the broadside, with 15.7 dB directivity and a 3 dB beam width of approximately 1.65° in a free-space environment for propagation constants of 2.409 × 10^5 m^-1 and 4.576 × 10^3 m^-1. Finally, the electronic beam-steering principle is studied and a comprehensive model is built to explain carrier-transport behavior in a PIN junction treated as an individual silicon perturbation. Results show that a carrier density of 10^19 cm^-3 can be obtained with an electron-injection mechanism. Although radiation modulation based on carrier injection of 10^19 cm^-3 gives only 0.5 dB of variation, a resonant structure such as a Fabry-Perot cavity can be integrated with the leaky-wave antenna to enhance the modulation effect.

  17. A simple microscopy assay to teach the processes of phagocytosis and exocytosis.

    PubMed

    Gray, Ross; Gray, Andrew; Fite, Jessica L; Jordan, Renée; Stark, Sarah; Naylor, Kari

    2012-01-01

    Phagocytosis and exocytosis are two cellular processes involving membrane dynamics. While it is easy to understand the purpose of these processes, it can be extremely difficult for students to comprehend the actual mechanisms. As membrane dynamics play a significant role in many cellular processes ranging from cell signaling to cell division to organelle renewal and maintenance, we felt that we needed to do a better job of teaching these types of processes. Thus, we developed a classroom-based protocol to simultaneously study phagocytosis and exocytosis in Tetrahymena pyriformis. In this paper, we present our results demonstrating that our undergraduate classroom experiment delivers results comparable with those acquired in a professional research laboratory. In addition, students performing the experiment do learn the mechanisms of phagocytosis and exocytosis. Finally, we demonstrate a mathematical exercise to help the students apply their data to the cell. Ultimately, this assay sets the stage for future inquiry-based experiments, in which the students develop their own experimental questions and delve deeper into the mechanisms of phagocytosis and exocytosis. PMID:22665590

  19. Adaptive image-processing technique and effective visualization of confocal microscopy images.

    PubMed

    Sun, Yinlong; Rajwa, Bartek; Robinson, J Paul

    2004-06-01

    A common observation about confocal microscopy images is that slices lower in an image stack have lower voxel intensities and are usually more blurred than the upper ones. The key reasons are light absorption and scattering by the objects and particles in the volume through which light passes. This report proposes a new technique to reduce such noise effects, in the form of an adaptive intensity-compensation and structural-sharpening algorithm. With these image-processing procedures, effective 3D rendering techniques can be applied to faithfully visualize confocal microscopy data. PMID:15352087
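
The intensity-compensation idea can be illustrated with a minimal sketch that is not the authors' algorithm: each z-slice is rescaled so that its mean intensity matches the top slice, counteracting the dimming caused by absorption with depth.

```python
# Naive depth-dependent intensity compensation for a confocal z-stack,
# stack[z][y][x]: rescale each slice so its mean matches slice 0.
# Real adaptive methods estimate attenuation locally; this global gain
# is only an illustration of the principle.
def compensate_stack(stack):
    ref = sum(sum(row) for row in stack[0]) / (len(stack[0]) * len(stack[0][0]))
    out = []
    for sl in stack:
        m = sum(sum(row) for row in sl) / (len(sl) * len(sl[0]))
        gain = ref / m if m > 0 else 1.0  # avoid dividing by an empty slice
        out.append([[v * gain for v in row] for row in sl])
    return out
```

A deeper slice whose signal has dropped to half the reference mean receives a gain of 2, restoring comparable brightness before 3D rendering.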

  20. RegiStax: Alignment, stacking and processing of images

    NASA Astrophysics Data System (ADS)

    Berrevoets, Cor; DeClerq, Bart; George, Tony; Makolkin, Dmitry; Maxson, Paul; Pilz, Bob; Presnyakov, Pavel; Roel, Eric; Weiller, Sylvain

    2012-06-01

    RegiStax is software for the alignment, stacking and processing of images; it was released over 10 years ago and continues to be developed and improved. The current version is RegiStax 6, which supports the following formats: AVI, SER, RFL (RegiStax Framelist), BMP, JPG, TIF, and FIT. This version has a shorter and simpler processing sequence than its predecessor, and a separate optimization step is no longer necessary because the new image-alignment method optimizes directly. The interface of RegiStax 6 has been simplified to look more uniform in appearance and functionality, and RegiStax 6 now uses multi-core processing, allowing multiple cores (a maximum of four is recommended) to work simultaneously during alignment and stacking.

  1. Liquid crystal thermography and true-colour digital image processing

    NASA Astrophysics Data System (ADS)

    Stasiek, J.; Stasiek, A.; Jewartowski, M.; Collins, M. W.

    2006-06-01

    In the last decade, thermochromic liquid crystals (TLC) and true-colour digital image processing have been successfully used in non-intrusive technical, industrial and biomedical studies and applications. Thin coatings of TLCs at surfaces are utilized to obtain detailed temperature distributions and heat transfer rates for steady or transient processes. Liquid crystals can also be used to make visible the temperature and velocity fields in liquids by the simple expedient of directly mixing the liquid crystal material into the liquid (water, glycerol, glycol, or silicone oils) in very small quantities, where the crystals act as thermal and hydrodynamic tracers. In biomedical applications (e.g. skin diseases, breast cancer, and blood circulation), TLCs and image processing are successfully used as an additional non-invasive diagnostic method, especially useful for screening large groups of potential patients. The history of this technique is reviewed, principal methods and tools are described and some examples are also presented.
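
The calibration step behind true-colour TLC thermography can be sketched briefly: within a crystal's active band, hue varies monotonically with temperature, so a table of (hue, temperature) pairs recorded at known temperatures converts measured hues by interpolation. The calibration values below are invented for illustration.

```python
# Piecewise-linear conversion of measured hue (degrees) to temperature (degC)
# using a hypothetical TLC calibration table. Real calibrations are obtained
# by imaging the coating at a series of known, uniform temperatures.
def hue_to_temperature(hue, calibration):
    pts = sorted(calibration)
    if hue <= pts[0][0]:
        return pts[0][1]      # clamp below the active band
    if hue >= pts[-1][0]:
        return pts[-1][1]     # clamp above the active band
    for (h0, t0), (h1, t1) in zip(pts, pts[1:]):
        if h0 <= hue <= h1:
            return t0 + (t1 - t0) * (hue - h0) / (h1 - h0)

calib = [(30.0, 30.0), (90.0, 32.0), (150.0, 35.0)]  # hypothetical pairs
```

Applied pixel-by-pixel to the hue channel of a true-colour image, this yields the surface temperature map the abstract describes.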

  2. IMAGEP - A FORTRAN ALGORITHM FOR DIGITAL IMAGE PROCESSING

    NASA Technical Reports Server (NTRS)

    Roth, D. J.

    1994-01-01

    IMAGEP is a FORTRAN computer program containing various image processing, analysis, and enhancement functions. It is a keyboard-driven program organized into nine subroutines; within these are further routines, also selected via the keyboard. Some of the functions performed by IMAGEP include digitization, storage and retrieval of images; image enhancement by contrast expansion, addition and subtraction, magnification, inversion, and bit shifting; display and movement of a cursor; display of the grey-level histogram of an image; and display of the variation of grey-level intensity as a function of image position. This program has possible scientific, industrial, and biomedical applications in material flaw studies, steel and ore analysis, and pathology, respectively. IMAGEP is written in VAX FORTRAN for DEC VAX series computers running VMS. The program requires the use of a Grinnell 274 image processor, which can be obtained from Mark McCloud Associates, Campbell, CA. An object library of the required GMR series software is included on the distribution media. IMAGEP requires 1 Mb of RAM for execution. The standard distribution medium for this program is a 1600 BPI 9-track magnetic tape in VAX FILES-11 format. It is also available on a TK50 tape cartridge in VAX FILES-11 format. This program was developed in 1991. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation.
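
IMAGEP itself is VAX FORTRAN tied to a Grinnell image processor, so as a modern-language illustration of just one of its listed functions, the sketch below performs contrast expansion by linearly stretching the observed grey-level range onto the full 0-255 scale. This is a generic technique, not IMAGEP's actual code.

```python
# Contrast expansion (linear grey-level stretch): map the darkest observed
# pixel to 0 and the brightest to out_max, scaling everything in between.
def contrast_expand(img, out_max=255):
    flat = [v for row in img for v in row]
    lo, hi = min(flat), max(flat)
    if hi == lo:
        return [[0 for _ in row] for row in img]  # flat image: nothing to stretch
    scale = out_max / (hi - lo)
    return [[round((v - lo) * scale) for v in row] for row in img]
```

A low-contrast image occupying grey levels 0-2 expands to span the full 0-255 range, making faint differences visible on a display.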

  3. Combined optimization of image-gathering and image-processing systems for scene feature detection

    NASA Technical Reports Server (NTRS)

    Halyo, Nesim; Arduini, Robert F.; Samms, Richard W.

    1987-01-01

    The relationship between the image-gathering and image-processing systems for minimum mean-squared-error estimation of scene characteristics is investigated. A stochastic optimization problem is formulated where the objective is to determine a spatial characteristic of the scene rather than a feature of the already blurred, sampled and noisy image data. An analytical solution for the optimal characteristic image processor is developed. The Wiener filter for the sampled-image case is obtained as a special case, where the desired characteristic is scene restoration. Optimal edge detection is investigated using the Laplacian-of-Gaussian operator ∇²G as the desired characteristic, where G is a two-dimensional Gaussian distribution function. It is shown that the optimal edge detector compensates for the blurring introduced by the image-gathering optics and, notably, that it is not circularly symmetric. The lack of circular symmetry is largely due to the geometric effects of the sampling lattice used in image acquisition. The optimal image-gathering optical transfer function is also investigated and the results of a sensitivity analysis are shown.
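
The Laplacian-of-Gaussian characteristic discussed above can be illustrated numerically: build a discrete, circularly symmetric LoG kernel and convolve it with the image; edges appear as zero crossings of the response. Kernel size and sigma here are illustrative, and the kernel is unnormalized.

```python
# Discrete Laplacian-of-Gaussian (LoG) edge operator: the kernel samples
# (r^2 - 2*sigma^2) / sigma^4 * exp(-r^2 / (2*sigma^2)), an unnormalized LoG.
import math

def log_kernel(size=5, sigma=1.0):
    half = size // 2
    k = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            r2 = x * x + y * y
            row.append((r2 - 2 * sigma ** 2) / sigma ** 4
                       * math.exp(-r2 / (2 * sigma ** 2)))
        k.append(row)
    return k

def convolve(img, k):
    """Correlate `img` with kernel `k`, clamping at the borders."""
    half = len(k) // 2
    rows, cols = len(img), len(img[0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            acc = 0.0
            for i, krow in enumerate(k):
                for j, kv in enumerate(krow):
                    rr = min(max(r + i - half, 0), rows - 1)
                    cc = min(max(c + j - half, 0), cols - 1)
                    acc += kv * img[rr][cc]
            out[r][c] = acc
    return out
```

Note the sampled kernel is circularly symmetric; the paper's point is that the *optimal* detector deviates from this symmetry once the sampling lattice of the image-gathering system is accounted for.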

  4. Lunar Crescent Detection Based on Image Processing Algorithms

    NASA Astrophysics Data System (ADS)

    Fakhar, Mostafa; Moalem, Peyman; Badri, Mohamad Ali

    2014-11-01

    For many years, lunar crescent visibility has been studied by astronomers, and different criteria have been used to predict and evaluate the visibility status of new Moon crescents. Powerful equipment such as telescopes and binoculars has expanded observational capability, and most conventional statistical criteria made wrong predictions when new observations based on modern equipment were reported. In order to verify such reports and modify the criteria, not only should the previous statistical parameters be considered but also new and effective parameters such as high magnification, the contour effect, low signal-to-noise ratio, eyestrain and weather conditions. In this paper, a new method is presented for lunar crescent detection based on the processing of lunar crescent images. The method includes two main steps: first, an image-processing algorithm that improves the signal-to-noise ratio and detects lunar crescents based on the circular Hough transform (CHT); second, an algorithm based on image-histogram processing that detects the crescent visually. The final decision is made by comparing the results of the visual and CHT algorithms. To evaluate the proposed method, a database of 31 images was tested. The method can distinguish and extract crescents that even the eye cannot recognize. It significantly reduces artifacts, increases SNR, and can be used easily both by astronomers and by those who want to develop a new criterion, as a reliable way to verify empirical observations.
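
The CHT voting step at the heart of the detection can be sketched for a single known radius: each edge pixel votes for every centre that could have produced it, and the accumulator maximum gives the detected centre. This is a toy illustration, not the paper's full multi-radius pipeline.

```python
# Circular Hough transform (single radius): edge pixels cast votes for
# candidate circle centres; the accumulator peak is the detected centre.
import math

def hough_circle_center(edge_pixels, radius, width, height):
    acc = [[0] * width for _ in range(height)]
    for (x, y) in edge_pixels:
        for deg in range(360):
            t = math.radians(deg)
            cx = int(round(x - radius * math.cos(t)))
            cy = int(round(y - radius * math.sin(t)))
            if 0 <= cx < width and 0 <= cy < height:
                acc[cy][cx] += 1
    best = max((acc[y][x], x, y) for y in range(height) for x in range(width))
    return best[1], best[2]
```

Because every true edge pixel's vote circle passes (up to rounding) through the real centre, votes pile up there even when the crescent is faint and fragmentary, which is why the CHT tolerates low signal-to-noise data.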

  5. Design of multichannel image processing on the Space Solar Telescope

    NASA Astrophysics Data System (ADS)

    Zhang, Bin

    2000-07-01

    The multi-channel image processing system on the Space Solar Telescope (SST) is described in this paper. This system is the main part of the science data unit (SDU), which is designed to handle the science data from every payload on the SST. First, every payload on the SST and its scientific objective are introduced: the main optical telescope, four soft X-ray telescopes, an H-alpha and white-light (full disc) telescope, a coronagraph, a wide-band X-ray and gamma-ray spectrometer, and a solar and interplanetary radio spectrometer. Then the structure of the SDU is presented; in this part, we discuss the hardware and software structure of the SDU, which is designed for multiple payloads, and summarize the science data stream of every payload. Solar magnetic and velocity field processing, which occupies more than 90% of the SDU's data processing, is discussed; it includes the polarizing unit, the image receiver and the image-adding unit. Finally, the plan for image-data compression and the mass memory designed for science-data storage are presented.

  6. Image processing for safety assessment in civil engineering.

    PubMed

    Ferrer, Belen; Pomares, Juan C; Irles, Ramon; Espinosa, Julian; Mas, David

    2013-06-20

    Behavior analysis of construction safety systems is of fundamental importance to avoid accidental injuries. Traditionally, measurements of dynamic actions in civil engineering have been made with accelerometers, but high-speed cameras and image-processing techniques can play an important role in this area. Here, we propose using morphological image filtering and the Hough transform on high-speed video sequences as tools for dynamic measurements in this field. The presented method is applied to obtain the trajectory and acceleration of a cylindrical ballast falling from a building and trapped by a thread net. Results show that safety recommendations given in construction codes can be potentially dangerous for workers. PMID:23842183
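
Once the ballast's centre has been located in each frame, acceleration follows from finite differences at the known frame rate. The sketch below assumes the per-frame positions are already extracted (here generated synthetically as free fall); the frame rate and values are invented for illustration.

```python
# Acceleration from a tracked trajectory by second central differences:
# a[i] = (p[i+1] - 2*p[i] + p[i-1]) / dt^2, with dt = 1/fps.
def central_acceleration(positions, fps):
    dt2 = (1.0 / fps) ** 2
    return [(positions[i + 1] - 2 * positions[i] + positions[i - 1]) / dt2
            for i in range(1, len(positions) - 1)]

# Synthetic free fall sampled at 100 fps: y(t) = 0.5 * g * t^2, g = 9.81 m/s^2.
ys = [0.5 * 9.81 * (i / 100.0) ** 2 for i in range(6)]
acc = central_acceleration(ys, 100.0)
```

For a quadratic trajectory the second central difference recovers the acceleration exactly, so each entry of `acc` equals g up to floating-point error; with real tracked positions, noise makes some smoothing of the trajectory advisable first.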

  7. Application of digital image processing techniques to astronomical imagery, 1979

    NASA Technical Reports Server (NTRS)

    Lorre, J. J.

    1979-01-01

    Several areas of applications of image processing to astronomy were identified and discussed. These areas include: (1) deconvolution for atmospheric seeing compensation; a comparison between maximum entropy and conventional Wiener algorithms; (2) polarization in galaxies from photographic plates; (3) time changes in M87 and methods of displaying these changes; (4) comparing emission line images in planetary nebulae; and (5) log intensity, hue saturation intensity, and principal component color enhancements of M82. Examples are presented of these techniques applied to a variety of objects.

  8. Intensifying Brillouin distributed fibre sensors using image processing

    NASA Astrophysics Data System (ADS)

    Soto, Marcelo A.; Ramírez, Jaime A.; Thévenaz, Luc

    2015-09-01

    Image processing is proposed to enhance the performance of Brillouin distributed fibre sensors. The technique exploits the two-dimensional nature of the measurements, so that each frequency-position pair is treated as a pixel of a noisy image. Based on the level of redundancy in the two-dimensional information, the method offers unmatched denoising capabilities compared with classic one-dimensional denoising methods, even when these are applied consecutively in the distance and frequency domains. With no modification of the basic configuration, up to ~14 dB of SNR improvement is experimentally demonstrated with no observable loss of spatial resolution. A figure of merit of 115,000 is verified.
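
The core idea can be illustrated with a much simpler filter than the paper uses: treat the distance-frequency measurement matrix as an image and denoise it in both dimensions at once. A 2D box (mean) filter keeps the sketch short; the paper's denoisers are considerably more capable.

```python
# 2D mean filter over a distance-frequency measurement matrix, exploiting
# redundancy along both axes simultaneously (border values are clamped).
def mean_filter_2d(data, half=1):
    rows, cols = len(data), len(data[0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            vals = [data[min(max(r + dr, 0), rows - 1)]
                        [min(max(c + dc, 0), cols - 1)]
                    for dr in range(-half, half + 1)
                    for dc in range(-half, half + 1)]
            out[r][c] = sum(vals) / len(vals)
    return out
```

Averaging over an n x n window reduces uncorrelated noise by roughly a factor of n in amplitude in each smooth region, which is the redundancy argument behind treating the sensor data as an image.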

  9. Lunar and Planetary Science XXXV: Image Processing and Earth Observations

    NASA Technical Reports Server (NTRS)

    2004-01-01

    The titles in this section include: 1) Expansion in Geographic Information Services for PIGWAD; 2) Modernization of the Integrated Software for Imagers and Spectrometers; 3) Science-based Region-of-Interest Image Compression; 4) Topographic Analysis with a Stereo Matching Tool Kit; 5) Central Avra Valley Storage and Recovery Project (CAVSARP) Site, Tucson, Arizona: Floodwater and Soil Moisture Investigations with Extraterrestrial Applications; 6) ASE Floodwater Classifier Development for EO-1 HYPERION Imagery; 7) Autonomous Sciencecraft Experiment (ASE) Operations on EO-1 in 2004; 8) Autonomous Vegetation Cover Scene Classification of EO-1 Hyperion Hyperspectral Data; 9) Long-Term Continental Areal Reduction Produced by Tectonic Processes.

  10. A Comparative Study of the Quality of Teaching Learning Process at Post Graduate Level in the Faculty of Science and Social Science

    ERIC Educational Resources Information Center

    Shahzadi, Uzma; Shaheen, Gulnaz; Shah, Ashfaque Ahmed

    2012-01-01

    The study was intended to compare the quality of teaching learning process in the faculty of social science and science at University of Sargodha. This study was descriptive and quantitative in nature. The objectives of the study were to compare the quality of teaching learning process in the faculty of social science and science at University of…

  11. Teaching Reading

    ERIC Educational Resources Information Center

    Day, Richard R.

    2013-01-01

    "Teaching Reading" uncovers the interactive processes that happen when people learn to read and translates them into a comprehensive easy-to-follow guide on how to teach reading. Richard Day's revelations on the nature of reading, reading strategies, reading fluency, reading comprehension, and reading objectives make fascinating…

  12. Teaching Mathematics in/for New Times: A Poststructuralist Analysis of the Productive Quality of the Pedagogic "Process."

    ERIC Educational Resources Information Center

    Klein, M.

    2002-01-01

    Undertakes, from a poststructuralist perspective, a meta-analysis of two short episodes from a paper by Manouchehri and Goodman (2000). Explores how mathematical knowledge and identities are produced in teaching/learning interactions in the classroom and the wider practical implications of this productive power of process for mathematics education…

  13. Fremdsprachenunterricht als Kommunikationsprozess (Foreign Language Teaching as a Communicative Process). Language Centre News, No. 1. Focus on Spoken Language.

    ERIC Educational Resources Information Center

    Butzkamm, Wolfgang

    Teaching, as a communicative process, ranges between purely message-oriented communication (the goal) and purely language-oriented communication (a means). Classroom discourse ("Close the window", etc.) is useful as a drill but is also message-oriented. Skill in message-oriented communication is acquired only through practice in this kind of…

  14. Validation Study of the Scale for "Assessment of the Teaching-Learning Process", Student Version (ATLP-S)

    ERIC Educational Resources Information Center

    de la Fuente, Jesus; Sander, Paul; Justicia, Fernando; Pichardo, M. Carmen; Garcia-Berben, Ana B.

    2010-01-01

    Introduction: The main goal of this study is to evaluate the psychometric and assessment features of the Scale for the "Assessment of the Teaching-Learning Process, Student Version" (ATLP-S), for both practical and theoretical reasons. From an applied point of view, this self-report measurement instrument has been designed to encourage student…

  15. Administrators in Action--Managing Public Monies and Processing Emotion in School Activities: A Teaching Case Study

    ERIC Educational Resources Information Center

    Tenuto, Penny L.; Gardiner, Mary E.; Yamamoto, Julie K.

    2015-01-01

    This teaching case describes school administrators in action performing day-to-day leadership tasks, managing public funds in school activities, and interacting with others appropriately. The case focuses on administrative challenges in handling and managing school activity funds. A method for processing emotion is discussed to assist…

  16. Validating the ACE Model for Evaluating Student Performance Using a Teaching-Learning Process Based on Computational Modeling Systems

    ERIC Educational Resources Information Center

    Louzada, Alexandre Neves; Elia, Marcos da Fonseca; Sampaio, Fábio Ferrentini; Vidal, Andre Luiz Pestana

    2014-01-01

    The aim of this work is to adapt and test, in a Brazilian public school, the ACE model proposed by Borkulo for evaluating student performance as a teaching-learning process based on computational modeling systems. The ACE model is based on different types of reasoning involving three dimensions. In addition to adapting the model and introducing…

  17. Health-Related Intensity Profiles of Physical Education Classes at Different Phases of the Teaching/Learning Process

    ERIC Educational Resources Information Center

    Bronikowski, Michal; Bronikowska, Malgorzata; Kantanista, Adam; Ciekot, Monika; Laudanska-Krzeminska, Ida; Szwed, Szymon

    2009-01-01

    Study aim: To assess the intensities of three types of physical education (PE) classes corresponding to the phases of the teaching/learning process: Type 1--acquiring and developing skills, Type 2--selecting and applying skills, tactics and compositional principles and Type 3--evaluating and improving performance skills. Material and methods: A…

  18. The Role of Information and Communication Technologies in Improving Teaching and Learning Processes in Primary and Secondary Schools

    ERIC Educational Resources Information Center

    Sangra, Albert; Gonzalez-Sanmamed, Mercedes

    2010-01-01

    The purpose of this study is to analyse what is happening at schools regarding the integration and use of information and communication technologies (ICT) and to examine teachers' perceptions about what teaching and learning processes can be improved through the use of ICT. A multiple-case-study research methodology was applied. From a previous…

  19. Mystery Montage: A Holistic, Visual, and Kinesthetic Process for Expanding Horizons and Revealing the Core of a Teaching Philosophy

    ERIC Educational Resources Information Center

    Ennis, Kim; Priebe, Carly; Sharipova, Mayya; West, Kim

    2012-01-01

    Revealing the core of a teaching philosophy is the key to a concise and meaningful philosophy statement, but it can be an elusive goal. This paper offers a visual, kinesthetic, and holistic process for expanding the horizons of self-reflection, self-analysis, and self-knowledge. Mystery montage, a variation of visual mapping, storyboarding, and…

  20. Publishing Musical Compositions in the Classroom: The Writer's Process at Work in a Teacher/Teaching Artist Collaboration

    ERIC Educational Resources Information Center

    Easton, Hilary; Witek, Tanya; Cione, Danielle

    2005-01-01

    During Winter/Spring of 2004, Danielle Cione, a 5th grade classroom teacher at New York City's PS 199, and Tanya Witek, a Teaching Artist for the New York Philharmonic School Partnership Program, planned and implemented a series of lessons exploring the relationship of the Writer's Workshop as taught in the schools and the process used by…

  1. Evaluation of the quality of the teaching-learning process in undergraduate courses in Nursing 1

    PubMed Central

    González-Chordá, Víctor Manuel; Maciá-Soler, María Loreto

    2015-01-01

    Abstract Objective: to identify aspects for improving the quality of the teaching-learning process through the analysis of tools that evaluated the acquisition of skills by undergraduate students of Nursing. Method: prospective longitudinal study conducted in a population of 60 second-year Nursing students based on registration data, from which quality indicators that evaluate the acquisition of skills were obtained, with descriptive and inferential analysis. Results: nine items were identified, and nine learning activities included in the assessment tools did not reach the established quality indicators (p<0.05). There are statistically significant differences depending on the hospital and clinical practices unit (p<0.05). Conclusion: the analysis of the evaluation tools used in the subject "Nursing Care in Welfare Processes" of the analyzed university undergraduate course enabled the detection of the areas for improvement in the teaching-learning process. The challenge of education in nursing is to reach the best clinical research and educational results, in order to provide improvements to the quality of education and health care. PMID:26444173

  2. Digital image manipulation, analysis and processing systems (DIMAPS) - A research-oriented, experimental image-processing system

    NASA Astrophysics Data System (ADS)

    Dave, J. V.

    1985-12-01

    Digital Image Manipulation, Analysis and Processing Systems (DIMAPS) are FORTRAN-based, dialog-driven, fully interactive programs for the IBM 4341 (or equivalent) computer running under VM/CMS or MVS/TSO. The workstation consists of three screens (alphanumeric, high-resolution vector graphics, and high-resolution color display), together with a digitizing graphics tablet, cursor controllers, keyboards, and hard copy devices. The DIMAPS software is 98-percent FORTRAN, thus facilitating maintenance, system growth, and transportability. The original DIMAPS and its modified versions contain functions for the generation, display and comparison of multiband images, and for the quantitative as well as graphic display of data in a selected section of the image under study. Several functions for performing data modification and/or analysis tasks are also included. Some high-level image processing and analysis functions, such as the generation of shaded-relief images, unsupervised multispectral classification, scene-to-scene or map-to-scene registration of multiband digital data, extraction of texture information using a two-dimensional Fourier transform of the band data, and reduction of random noise from multiband data using phase agreement among their Fourier coefficients, were developed as adjuncts to DIMAPS.
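One of the high-level functions the abstract names, shaded-relief generation, follows a standard pattern that can be sketched briefly. The following is a textbook Lambertian hillshading sketch in Python/NumPy, not DIMAPS's FORTRAN implementation; the function and parameter names are illustrative.

```python
import numpy as np

def shaded_relief(dem, azimuth_deg=315.0, altitude_deg=45.0):
    """Render a shaded-relief image from an elevation grid by lighting
    the local surface normal from a given sun azimuth and altitude."""
    az = np.deg2rad(azimuth_deg)
    alt = np.deg2rad(altitude_deg)
    # Per-pixel elevation gradients (row direction first, then column).
    dy, dx = np.gradient(dem.astype(float))
    slope = np.arctan(np.hypot(dx, dy))
    aspect = np.arctan2(-dx, dy)
    # Lambertian illumination: cosine of the angle between sun and normal.
    shade = (np.sin(alt) * np.cos(slope)
             + np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shade, 0.0, 1.0)

# A flat surface shades uniformly to sin(altitude).
print(shaded_relief(np.zeros((8, 8)))[0, 0])
```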

  3. Digital Image Manipulation, Analysis and Processing Systems (DIMAPS) A research-oriented, experimental image-processing system

    NASA Astrophysics Data System (ADS)

    Dave, J. V.

    1985-04-01

    The acronym DIMAPS stands for the group of experimental Digital Image Manipulation, Analysis and Processing Systems developed at the IBM Scientific Center in Palo Alto, California. These are FORTRAN-based, dialog-driven, fully interactive programs for the IBM 4341 (or equivalent) computer running under VM/CMS or MVS/TSO. The work station consists of three screens (alphanumeric, high-resolution vector graphics, and high-resolution color display), plus a digitizing graphics tablet, cursor controllers, keyboards, and hard copy devices. The DIMAPS software is 98% FORTRAN, thus facilitating maintenance, system growth, and transportability. The original DIMAPS and its modified versions contain functions for the generation, display and comparison of multiband images, and for the quantitative as well as graphic display of data in a selected section of the image under study. Several functions for performing data modification and/or analysis tasks are also included. Some high-level image processing and analysis functions such as the generation of shaded-relief images, unsupervised multispectral classification, scene-to-scene or map-to-scene registration of multiband digital data, extraction of texture information using a two-dimensional Fourier transform of the band data, and reduction of random noise from multiband data using phase agreement among their Fourier coefficients, were developed as adjuncts to DIMAPS.

  4. Constraint processing in our extensible language for cooperative imaging system

    NASA Astrophysics Data System (ADS)

    Aoki, Minoru; Murao, Yo; Enomoto, Hajime

    1996-02-01

    The extensible WELL (Window-based Elaboration Language) has been developed using the concept of a common platform, where client and server communicate with each other with support from a communication manager. This extensible language is based on an object-oriented design that introduces constraint processing. Every service in the extensible language, including imaging, is controlled by constraints. Interactive functions between client and server are extended by introducing agent functions, including a request-respond relation. Necessary service integrations are satisfied by cooperative processes using constraints. Constraints are treated similarly to data, because the system should be flexible in executing many kinds of services. The corresponding control process is defined using intensional logic. There are two kinds of constraints: temporal and modal. Regarding the constraints, the predicate format as a relation between attribute values warrants the validity of entities as data. As an imaging example, a processing procedure for interaction between multiple objects is shown as an image application for the extensible system. This paper describes how the procedure proceeds in the system and how the constraints work for generating moving pictures.

  5. Digital Image Processing Applied To Quality Assurance In Mineral Industry

    NASA Astrophysics Data System (ADS)

    Hamrouni, Zouheir; Ayache, Alain; Krey, Charlie J.

    1989-03-01

    In this paper, we present an application of vision to quality assurance in the talc mineral industry. Using image processing and computer vision means, the proposed real-time whiteness sensor system is intended: - to inspect the whiteness of the ground product, - to manage the mixing of primary talcs before grinding, in order to obtain a final product with a predetermined whiteness. The system uses the robotic CCD microcamera MICAM (designed by our laboratory and presently manufactured), a microcomputer system based on the Motorola 68020, and real-time image processing boards. It has the following industrial specifications: - High reliability - Whiteness is determined with 0.3% precision on a scale of 25 levels. Because of the expected precision, we had to study carefully the lighting system and the type of image sensor and associated electronics. The first software developed processes the whiteness of talcum powder; we then conceived original algorithms to measure the whiteness of rough talc, taking texture and shadows into account. The processing times of these algorithms are fully compatible with industrial rates. This system can be applied to other domains where a high-precision reflectance sensor is needed: the paper industry, paints, ...

  6. Fast Implementation of Matched Filter Based Automatic Alignment Image Processing

    SciTech Connect

    Awwal, A S; Rice, K; Taha, T

    2008-04-02

    Video images of laser beams imprinted with distinguishable features are used for alignment of 192 laser beams at the National Ignition Facility (NIF). Algorithms designed to determine the position of these beams enable the control system to perform the task of alignment. Centroiding is a common approach used for determining the position of beams. However, real-world beam images suffer from intensity fluctuation or other distortions which make such an approach susceptible to higher position measurement variability. Matched filtering used for identifying the beam position results in greater stability of position measurement compared to that obtained using the centroiding technique. However, this gain is achieved at the expense of extra processing time required for each beam image. In this work we explore the possibility of using a field-programmable gate array (FPGA) to speed up these computations. The results indicate a performance improvement of a factor of 20 for the FPGA relative to a 3 GHz Pentium 4 processor.
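The matched-filtering step described above amounts to locating the peak of a cross-correlation between the beam image and a feature template. The following is a minimal FFT-based sketch in Python/NumPy, not NIF's production algorithm; the synthetic beam image and all names are hypothetical.

```python
import numpy as np

def matched_filter_position(image, template):
    """Estimate an object's position as the (row, col) offset maximizing
    the circular cross-correlation of image and template (matched filter)."""
    shape = image.shape
    # Correlation theorem: IFFT(FFT(image) * conj(FFT(template))).
    corr = np.fft.ifft2(np.fft.fft2(image)
                        * np.conj(np.fft.fft2(template, s=shape))).real
    return tuple(int(v) for v in np.unravel_index(np.argmax(corr), shape))

# Synthetic beam image: a bright 8x8 feature at a known offset, plus noise.
rng = np.random.default_rng(0)
img = 0.05 * rng.standard_normal((64, 64))
img[20:28, 33:41] += 1.0
tmpl = np.ones((8, 8))
print(matched_filter_position(img, tmpl))  # (20, 33)
```

Unlike centroiding, the correlation peak integrates evidence over the whole template footprint, which is why it is less sensitive to intensity fluctuations.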

  7. Image processing and analysis using neural networks for optometry area

    NASA Astrophysics Data System (ADS)

    Netto, Antonio V.; Ferreira de Oliveira, Maria C.

    2002-11-01

    In this work we describe the framework of a functional system for processing and analyzing images of the human eye acquired by the Hartmann-Shack technique (HS), in order to extract information to formulate a diagnosis of eye refractive errors (astigmatism, hypermetropia and myopia). The analysis is to be carried out using an Artificial Intelligence system based on Neural Nets, Fuzzy Logic and Classifier Combination. The major goal is to establish the basis of a new technology to effectively measure ocular refractive errors that is based on methods alternative to those adopted in current patented systems. Moreover, analysis of images acquired with the Hartmann-Shack technique may enable the extraction of additional information on the health of an eye under exam from the same image used to detect refraction errors.

  8. An Image Processing System for Cardiovascular Wall Motion Studies*

    PubMed Central

    Covvey, H.D.; McLaughlin, P.; Tsotsos, J.K.; Ridsdale, G.; Wigle, E.D.

    1980-01-01

    Obtaining data from ventricular and coronary angiograms has largely been limited in the past to visualizing and manually outlining film frames. Methods of manipulating the picture data itself and interacting with the picture in digital (numeric) form have been unavailable, except in the case of nuclear (gamma) imaging systems. We have developed, in cooperation with a local company, a computer-based graphics system to obtain quantitative data directly from cardiovascular images. This modular system extends our ability to process picture data to all types of cardiovascular images. The video display system currently displays 6 bits of gray scale (8 bits maximum) and permits color mapping of gray levels. A variety of graphics functions such as zoom and pan are implemented in the hardware. The hardware system is now operational and program development is underway. The first data being analyzed are those from tantalum intramyocardial markers and contrast angiograms.

  9. Analysis and processing of pixel binning for color image sensor

    NASA Astrophysics Data System (ADS)

    Jin, Xiaodan; Hirakawa, Keigo

    2012-12-01

    Pixel binning refers to the concept of combining the electrical charges of neighboring pixels together to form a superpixel. The main benefit of this technique is that the combined charges overcome the read noise, at the sacrifice of spatial resolution. Binning in color image sensors results in superpixel Bayer pattern data, and subsequent demosaicking yields the final, lower resolution, less noisy image. It is common knowledge among practitioners and camera manufacturers, however, that binning introduces severe artifacts. The in-depth analysis in this article proves that these artifacts are far worse than the ones stemming from loss of resolution or demosaicking, and therefore they cannot be eliminated simply by increasing the sensor resolution. By accurately characterizing the sensor data that has been binned, we propose a post-capture binning data processing solution that succeeds in suppressing noise and preserving image details. We verify experimentally that the proposed method outperforms the existing alternatives by a substantial margin.
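The binning operation itself is easy to sketch in software. The following assumes 2x2 binning of like-colored samples in a Bayer mosaic, producing a half-resolution Bayer mosaic (real sensors sum charge in hardware before readout); the function and variable names are illustrative, and this is not the paper's processing pipeline.

```python
import numpy as np

def bin_bayer(raw, factor=2):
    """Sum factor x factor like-colored samples of a Bayer mosaic,
    yielding a lower-resolution Bayer mosaic of superpixels."""
    H, W = raw.shape
    # Split the mosaic into its four CFA phases (R, Gr, Gb, B planes).
    planes = {(i, j): raw[i::2, j::2] for i in range(2) for j in range(2)}
    out = np.zeros((H // factor, W // factor), dtype=raw.dtype)
    for (i, j), p in planes.items():
        # Sum factor x factor blocks within each same-color plane.
        h, w = p.shape[0] // factor, p.shape[1] // factor
        binned = (p[:h * factor, :w * factor]
                  .reshape(h, factor, w, factor).sum(axis=(1, 3)))
        out[i::2, j::2] = binned  # reassemble into a smaller Bayer mosaic
    return out

# Each output sample is the sum of four like-colored input samples.
print(bin_bayer(np.ones((8, 8))).shape)  # (4, 4)
```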

  10. Design of interchannel MRF model for probabilistic multichannel image processing.

    PubMed

    Koo, Hyung Il; Cho, Nam Ik

    2011-03-01

    In this paper, we present a novel framework that exploits an informative reference channel in the processing of another channel. We formulate the problem as a maximum a posteriori estimation problem considering a reference channel and develop a probabilistic model encoding the interchannel correlations based on Markov random fields. Interestingly, the proposed formulation results in an image-specific and region-specific linear filter for each site. The strength of filter response can also be controlled in order to transfer the structural information of a channel to the others. Experimental results on satellite image fusion and chrominance image interpolation with denoising show that our method provides improved subjective and objective performance compared with conventional approaches. PMID:20875973

  11. Tracker: Image-Processing and Object-Tracking System Developed

    NASA Technical Reports Server (NTRS)

    Klimek, Robert B.; Wright, Theodore W.

    1999-01-01

    Tracker is an object-tracking and image-processing program designed and developed at the NASA Lewis Research Center to help with the analysis of images generated by microgravity combustion and fluid physics experiments. Experiments are often recorded on film or videotape for analysis later. Tracker automates the process of examining each frame of the recorded experiment, performing image-processing operations to bring out the desired detail, and recording the positions of the objects of interest. It can load sequences of images from disk files or acquire images (via a frame grabber) from film transports, videotape, laser disks, or a live camera. Tracker controls the image source to automatically advance to the next frame. It can employ a large array of image-processing operations to enhance the detail of the acquired images and can analyze an arbitrarily large number of objects simultaneously. Several different tracking algorithms are available, including conventional threshold and correlation-based techniques, and more esoteric procedures such as "snake" tracking and automated recognition of character data in the image. The Tracker software was written to be operated by researchers, thus every attempt was made to make the software as user friendly and self-explanatory as possible. Tracker is used by most of the microgravity combustion and fluid physics experiments performed by Lewis, and by visiting researchers. This includes experiments performed on the space shuttles, Mir, sounding rockets, zero-g research airplanes, drop towers, and ground-based laboratories. This software automates the analysis of the flame's or liquid's physical parameters such as position, velocity, acceleration, size, shape, intensity characteristics, color, and centroid, as well as a number of other measurements. It can perform these operations on multiple objects simultaneously. Another key feature of Tracker is that it performs optical character recognition (OCR).
This feature is useful in extracting numerical instrumentation data that are embedded in images. All the results are saved in files for further data reduction and graphing. There are currently three Tracking Systems (workstations) operating near the laboratories and offices of Lewis Microgravity Science Division researchers. These systems are used independently by students, scientists, and university-based principal investigators. The researchers bring their tapes or films to the workstation and perform the tracking analysis. The resultant data files generated by the tracking process can then be analyzed on the spot, although most of the time researchers prefer to transfer them via the network to their offices for further analysis or plotting. In addition, many researchers have installed Tracker on computers in their offices for desktop analysis of digital image sequences, which can be digitized by the Tracking System or some other means. Tracker has not only provided a capability to efficiently and automatically analyze large volumes of data, saving many hours of tedious work, but has also provided new capabilities to extract valuable information and phenomena that were heretofore undetected and unexploited.
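The conventional threshold tracking the abstract mentions reduces to segmenting the bright object and taking its intensity-weighted centroid in each frame. This is a generic sketch of that technique, not Tracker's actual code; the synthetic "flame" frame and all names are hypothetical.

```python
import numpy as np

def track_centroid(frame, threshold):
    """Threshold tracking: segment pixels above `threshold` and return
    the object's intensity-weighted centroid as (row, col), or None."""
    mask = frame > threshold
    if not mask.any():
        return None  # object lost in this frame
    w = np.where(mask, frame, 0.0)           # keep only object intensities
    rows, cols = np.indices(frame.shape)
    total = w.sum()
    return (float((rows * w).sum() / total),
            float((cols * w).sum() / total))

# A synthetic "flame" blob of uniform intensity at rows 10-13, cols 5-8.
frame = np.zeros((32, 32))
frame[10:14, 5:9] = 2.0
print(track_centroid(frame, 1.0))  # (11.5, 6.5)
```

Repeating this per frame and differencing successive centroids gives the position, velocity, and acceleration measurements the abstract describes.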

  12. Internet (WWW) based system of ultrasonic image processing tools for remote image analysis.

    PubMed

    Zeng, Hong; Fei, Ding-Yu; Fu, Cai-Ting; Kraft, Kenneth A

    2003-07-01

    Ultrasonic Doppler color imaging can provide anatomic information and simultaneously render flow information within blood vessels for diagnostic purposes. Many researchers are currently developing ultrasound image processing algorithms in order to provide physicians with accurate clinical parameters from the images. Because researchers use a variety of computer languages and work on different computer platforms to implement their algorithms, it is difficult for other researchers and physicians to access those programs. A system has been developed using World Wide Web (WWW) technologies and HTTP communication protocols to publish our ultrasonic Angle Independent Doppler Color Image (AIDCI) processing algorithm and several general measurement tools on the Internet, where authorized researchers and physicians can easily access the program using web browsers to carry out remote analysis of their local ultrasonic images or images provided from the database. In order to overcome potential incompatibility between programs and users' computer platforms, ActiveX technology was used in this project. The technique developed may also be used in other research fields. PMID:12799056

  13. CALL FOR PAPERS Journal of Real-Time Image Processing (JRTIP)

    E-print Network

    Baudoin, Geneviève

    Topics include: industrial visual inspection; medical imaging; interactive equipment and embedded vision sensors; virtual and augmented reality; high-speed image processing applications; and 2D/3D measurement systems.

  14. Seam tracking with texture based image processing for laser materials processing

    NASA Astrophysics Data System (ADS)

    Krämer, S.; Fiedler, W.; Drenker, A.; Abels, P.

    2014-02-01

    This presentation deals with a camera-based seam tracking system for laser materials processing. A digital high-speed camera records the interaction point and the illuminated work piece surface. The camera system is coaxially integrated into the laser beam path. The aim is to observe the interaction point and the joint gap in one image for closed-loop control of the welding process. For the joint gap observation in particular, a new image processing method is used. The basic idea is to detect a difference between the textures of the surfaces of the two work pieces to be welded together, instead of looking for the nearly invisible narrow line imaged by the joint gap. The texture-based analysis of the work piece surface is more robust and less affected by varying illumination conditions than conventional grey-scale image processing. In some cases this image processing technique enables real zero-gap seam tracking. In summary, the economic benefit is simultaneous laser processing and seam tracking for self-calibrating laser welding applications, without special seam pre-preparation for tracking.
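The texture-difference idea can be illustrated with a simple per-column statistic. The paper does not specify its texture measure, so the sketch below uses column-wise intensity standard deviation as a stand-in and finds the seam as the column where the left/right texture statistics differ most; all names are illustrative.

```python
import numpy as np

def find_texture_seam(img):
    """Locate a vertical seam between two work pieces from the change in
    surface texture (per-column intensity std), not from a visible gap line."""
    tex = img.std(axis=0)  # crude per-column texture statistic
    # Step detection: column where mean texture differs most left vs. right.
    best_c, best_d = 1, -1.0
    for c in range(1, img.shape[1]):
        d = abs(tex[:c].mean() - tex[c:].mean())
        if d > best_d:
            best_c, best_d = c, d
    return best_c

# Two surfaces with equal mean brightness but different roughness,
# joined with zero visible gap at column 40.
rng = np.random.default_rng(1)
img = np.hstack([0.5 + 0.01 * rng.standard_normal((64, 40)),   # smooth piece
                 0.5 + 0.20 * rng.standard_normal((64, 40))])  # rough piece
print(find_texture_seam(img))  # close to 40, the true seam column
```

Because the statistic depends on roughness rather than brightness, a grey-scale edge detector would find nothing here, which is the point the abstract makes about zero-gap tracking.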

  15. Image Enhancement Using the Hypothesis Selection Filter (IEEE Transactions on Image Processing, Vol. 22, No. 3, March 2013, p. 898)

    E-print Network

    It is difficult for a single filter to achieve good quality at all locations in a complex image. For example, a filter designed to remove Gaussian noise from… We propose the hypothesis selection filter (HSF) as a new approach for image quality enhancement. We assume that a set of filters has…

  16. Concurrent Image Processing Executive (CIPE). Volume 1: Design overview

    NASA Technical Reports Server (NTRS)

    Lee, Meemong; Groom, Steven L.; Mazer, Alan S.; Williams, Winifred I.

    1990-01-01

    The design and implementation of a Concurrent Image Processing Executive (CIPE), which is intended to become the support system software for a prototype high performance science analysis workstation, are described. The target machine for this software is a JPL/Caltech Mark 3fp Hypercube hosted by either a MASSCOMP 5600 or a Sun-3 or Sun-4 workstation; however, the design will accommodate other concurrent machines of similar architecture, i.e., local memory, multiple-instruction-multiple-data (MIMD) machines. The CIPE system provides both a multimode user interface and an applications programmer interface, and has been designed around four loosely coupled modules: user interface, host-resident executive, hypercube-resident executive, and application functions. The loose coupling between modules allows modification of a particular module without significantly affecting the other modules in the system. In order to enhance hypercube memory utilization and to allow expansion of image processing capabilities, a specialized program management method, incremental loading, was devised. To minimize data transfer between host and hypercube, a data management method which distributes, redistributes, and tracks data set information was implemented. The data management also allows data sharing among application programs. The CIPE software architecture provides a flexible environment for scientific analysis of complex remote sensing image data, such as planetary data and imaging spectrometry, utilizing state-of-the-art concurrent computation capabilities.

  17. Application of ultrasound processed images in space: assessing diffuse affectations

    NASA Astrophysics Data System (ADS)

    Pérez-Poch, A.; Bru, C.; Nicolau, C.

    The purpose of this study was to evaluate diffuse affectations in the liver using texture image processing techniques. Ultrasound diagnostic equipment is the modality of choice for use in space environments, as it is free from hazardous effects on health. However, because highly trained radiologists are needed to assess the images, this imaging method is mainly applied to focal lesions rather than non-focal ones. We conducted a clinical study on 72 patients with different degrees of chronic hepatopathies and a control group of 18 individuals. All subjects' clinical reports and biopsy results were compared to the degree of affectation calculated by our computer system, thus validating the method. Full statistical results are given in the present paper, showing a good correlation (r=0.61) between the pathologist's report and the analysis of the heterogeneity of the processed liver images. This computer system for analyzing diffuse affectations may be used in situ or via a telemedicine link to the ground.

  18. Real-time multispectral processing of biological objects images

    NASA Astrophysics Data System (ADS)

    Shapovalov, V. V.; Gurevich, B. S.; Andreyev, S. V.; Belyaev, A. V.; Chelak, V. N.

    2011-07-01

    The spectral information, besides the spatial one, is very important in biology and medicine, as well as in many other areas. However, the simultaneous analysis of the spatial and spectral information components involves certain difficulties, caused primarily by the deficiencies of devices that provide spectral analysis (by light wavelength) of images containing high spatial frequencies. We propose a method for multispectral processing of biological object images with rather high throughput. The device that implements this method includes a newly developed polychromatic light source with real-time controlled spectral composition for coarse switching of narrow spectral ranges, and an acousto-optic tunable filter (AOTF) with a wide angular aperture for fine tuning of the selected sub-image wavelengths. The method and the practical configuration of the device are considered and discussed, and some features required of the AOTF in the presented device are analyzed. The possible trade-off between spectral and spatial information is also considered, as well as the limits of spectral and spatial resolving power. Experimental results on real-time multispectral processing of tomographic images are presented and discussed, along with possible applications of the method in biology, medicine, and environmental protection.

  19. Crystallographic Image Processing for Atomic Force and Scanning Tunneling Microscopists

    NASA Astrophysics Data System (ADS)

    Moon, Bill; Plachinda, Pavel; Straton, Jack; Moeck, Peter

    2010-03-01

    Crystallographic image processing of atomic force and scanning tunneling microscopy [1] images from 2D periodic and preferentially highly symmetric calibration samples is demonstrated and leads to estimates of the prevailing point spread function of the microscopes. Such a point spread function is valid for one scanning probe tip at a time and the corresponding set of experimental conditions. It can subsequently be utilized to correct for all kinds of geometric distortions including the effects of a blunt scanning probe tip, image bow, and image tilt. The image to be corrected does not even need to possess 2D periodicity. The only condition is that it needs to be recorded with the same microscope under essentially the same experimental conditions and with the same scanning probe tip. [1] P. Moeck, B. Moon Jr., M. Abdel-Hafiez, and M. Hietschold, Proc. NSTI 2009, Houston, May 3-7, 2009, Vol. I (2009) 314-317 (ISBN: 978-1-4398-1782-7).

  20. Image Processing System For Enhancement And Deblurring Of Photographs

    NASA Astrophysics Data System (ADS)

    Lehar, A. F.; Stevens, R. J.

    1984-06-01

    This paper describes an image processing system that is in operational use for the extraction of latent information from degraded photographs arising in routine police work. A wide range of spatial domain and Fourier domain techniques to both diagnose and correct for various picture problems are described. The enhancement methods discussed include contrast enhancement, noise filtering, digital color filtering, perspective correction, adaptive Fourier filtering to remove background patterns, and image deblurring. Color images are treated by operating on the separated red, green, and blue (R, G, B) components, and a novel encoding scheme is used to enable the display of color pictures on a standard 8-bit frame store. These techniques have been developed for, and applied to, operational rather than laboratory-generated images. A brief description is given of the hardware that is used, which incorporates an array processor to enhance computational speed, a high quality microdensitometer for digitizing images, and a digital frame store for final display. The system has been configured to give an operator as much interactive control as possible.
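Fourier-domain deblurring of the kind described is classically done with Wiener deconvolution. The sketch below is a textbook Wiener filter in Python/NumPy, not the paper's system; the horizontal motion-blur PSF and the regularization constant k are illustrative assumptions.

```python
import numpy as np

def wiener_deblur(blurred, psf, k=0.01):
    """Wiener deconvolution in the Fourier domain: divide out the blur
    transfer function H, regularized by k where |H| is near zero."""
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    # conj(H) / (|H|^2 + k) approximates 1/H but suppresses noise blow-up.
    F = np.conj(H) / (np.abs(H) ** 2 + k) * G
    return np.fft.ifft2(F).real

# Blur a test image with a 5-pixel horizontal motion PSF, then restore it.
img = np.zeros((32, 32))
img[12:20, 12:20] = 1.0
psf = np.zeros((32, 32))
psf[0, :5] = 1.0 / 5.0
blurred = np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf)).real
restored = wiener_deblur(blurred, psf, k=1e-3)
```

The choice of k trades sharpness against noise amplification: with a noisy photograph a larger k (or a frequency-dependent noise-to-signal estimate) would be used.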