These are representative sample records from Science.gov related to your search topic.
For comprehensive and current results, perform a real-time search at Science.gov.
1

Image Processing for Teaching.  

ERIC Educational Resources Information Center

The Image Processing for Teaching project provides a powerful medium to excite students about science and mathematics, especially children from minority groups and others whose needs have not been met by traditional teaching. Using professional-quality software on microcomputers, students explore a variety of scientific data sets, including…

Greenberg, R.; And Others

1993-01-01

2

Image Processing for Teaching: Transforming a Scientific Research Tool into an Educational Technology.  

ERIC Educational Resources Information Center

Describes the Image Processing for Teaching (IPT) project, which provides digital image processing to excite students about science and mathematics as they use research-quality software on microcomputers. Provides information on the IPT dissemination project, whose components have been widespread teacher education, curriculum-based materials…

Greenberg, Richard

1998-01-01

3

Teaching Effectively with Visual Effect in an Image-Processing Class.  

ERIC Educational Resources Information Center

Describes a course teaching the use of computers in emulating human visual capability and image processing and proposes an interactive presentation using multimedia technology to capture and sustain student attention. Describes the three-phase presentation: introduction of image processing equipment, presentation of lecture material, and…

Ng, G. S.

1997-01-01

4

Applying a visual language for image processing as a graphical teaching tool in medical imaging  

NASA Astrophysics Data System (ADS)

Typical user interaction in image processing is with command line entries, pull-down menus, or text menu selections from a list, and as such is not generally graphical in nature. Although applying these interactive methods to construct more sophisticated algorithms from a series of simple image processing steps may be clear to engineers and programmers, it may not be clear to clinicians. A solution to this problem is to implement a visual programming language using visual representations to express image processing algorithms. Visual representations promote a more natural and rapid understanding of image processing algorithms by providing more visual insight into what the algorithms do than the interactive methods mentioned above can provide. Individuals accustomed to dealing with images will be more likely to understand an algorithm that is represented visually. This is especially true of referring physicians, such as surgeons in an intensive care unit. With the increasing acceptance of picture archiving and communications system (PACS) workstations and the trend toward increasing clinical use of image processing, referring physicians will need to learn more sophisticated concepts than simply image access and display. If the procedures that they perform commonly, such as window width and window level adjustment and image enhancement using unsharp masking, are depicted visually in an interactive environment, it will be easier for them to learn and apply these concepts. The software described in this paper is a visual programming language for image processing which has been implemented on the NeXT computer using NeXTstep user interface development tools and other tools in an object-oriented environment. The concept is based upon the description of a visual language titled "Visualization of Vision Algorithms" (VIVA). Iconic representations of simple image processing steps are placed into a workbench screen and connected together into a dataflow path by the user. As the user creates and edits a dataflow path, more complex algorithms can be built on the screen. Once the algorithm is built, it can be executed, its results can be reviewed, and operator parameters can be interactively adjusted until an optimized output is produced. The optimized algorithm can then be saved and added to the system as a new operator. This system has been evaluated as a graphical teaching tool for window width and window level adjustment, image enhancement using unsharp masking, and other techniques.

Birchman, James J.; Tanimoto, Steven L.; Rowberg, Alan H.; Choi, Hyung-Sik; Kim, Yongmin

1992-05-01
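
For readers unfamiliar with the two clinical operations named in the record above, the sketch below shows window width/window level adjustment and unsharp masking in Python with NumPy and scikit-image. It is a generic illustration under assumed parameter values, not the VIVA-based NeXTstep system the paper describes.

```python
import numpy as np
from skimage import filters  # used only for the Gaussian blur

def window_level(image, width, level):
    """Map the intensity range [level - width/2, level + width/2] onto [0, 1]."""
    lo, hi = level - width / 2.0, level + width / 2.0
    return np.clip((image - lo) / (hi - lo), 0.0, 1.0)

def unsharp_mask(image, sigma=2.0, amount=1.0):
    """Sharpen by adding back the difference between the image and a blurred copy."""
    blurred = filters.gaussian(image, sigma=sigma)
    return np.clip(image + amount * (image - blurred), 0.0, 1.0)

# A synthetic 12-bit image stands in for a CT slice; window/level first, then sharpen.
raw = np.random.default_rng(0).integers(0, 4096, size=(256, 256)).astype(float)
displayed = unsharp_mask(window_level(raw, width=400, level=1050))
```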

5

Enhancing the Teaching of Digital Processing of Remote Sensing Image Course through Geospatial Web Processing Services  

Microsoft Academic Search

Remote sensing (RS) is an essential method to collect data for Earth science research. Huge amounts of remote sensing data, most of them in image form, have been acquired. Almost all geography departments in the world offer courses in digital processing of remote sensing images. Such courses place emphasis on how to digitally process large amounts of multi-source images

L. Di; M. Deng

2010-01-01

6

Teaching High School Science Using Image Processing: A Case Study of Implementation of Computer Technology.  

ERIC Educational Resources Information Center

Outlines an in-depth case study of teachers' use of image processing in biology, earth science, and physics classes in one high school science department. Explores issues surrounding technology implementation. Contains 21 references. (DDR)

Greenberg, Richard; Raphael, Jacqueline; Keller, Jill L.; Tobias, Sheila

1998-01-01

7

A self-teaching image processing and voice-recognition-based, intelligent and interactive system to educate visually impaired children  

NASA Astrophysics Data System (ADS)

A self-teaching image processing and voice recognition based system is developed to educate visually impaired children, chiefly in their primary education. The system comprises a computer, a vision camera, an ear speaker and a microphone. The camera, attached to the computer system, is mounted on the ceiling opposite (at the required angle) the desk on which the book is placed. Sample images and voices in the form of instructions and commands for English and Urdu alphabets, numeric digits, operators and shapes are already stored in the database. A blind child first reads the embossed character (object) with the help of fingers, then speaks the answer, the name of the character, shape etc. into the microphone. On the voice command of the blind child received by the microphone, an image is taken by the camera and processed by a MATLAB® program developed with the help of the Image Acquisition and Image Processing toolboxes, which generates a response or the required set of instructions for the child via the ear speaker, resulting in self education of the visually impaired child. The speech recognition program is also developed in MATLAB® with the help of the Data Acquisition and Signal Processing toolboxes, which records and processes the commands of the blind child.

Iqbal, Asim; Farooq, Umar; Mahmood, Hassan; Asad, Muhammad Usman; Khan, Akrama; Atiq, Hafiz Muhammad

2010-02-01

8

SSMiles: Using Models to Teach about Remote Sensing and Image Processing.  

ERIC Educational Resources Information Center

Presents an introductory lesson on remote sensing and image processing to be used in cooperative groups. Students are asked to solve a problem by gathering information, making inferences, transforming data into other forms, and making and testing hypotheses. Includes four expansions of the lesson and a reproducible student worksheet. (MKR)

Tracy, Dyanne M., Ed.

1994-01-01

9

Image Processing  

Microsoft Academic Search

The field of image processing addresses handling and analysis of images for many purposes using a large number of techniques and methods. The applications of image processing range from enhancement of the visibility of certain organs in medical images to object recognition for handling by industrial robots and face recognition for identification at airports, but also searching for images in

Ferdi van der Heijden; Luuk Spreeuwers; H. M. Blanken; A. P. de Vries; H. E. Blok; L. Feng

2007-01-01

10

Image Processing  

NASA Technical Reports Server (NTRS)

Electronic Imagery, Inc.'s ImageScale Plus software, developed through a Small Business Innovation Research (SBIR) contract with Kennedy Space Center for use on the space shuttle orbiter in 1991, enables astronauts to conduct image processing, prepare electronic still camera images in orbit, display them and downlink images to ground-based scientists for evaluation. Electronic Imagery, Inc.'s ImageCount, a spin-off product of ImageScale Plus, is used to count trees in Florida orange groves. Other applications include x-ray and MRI imagery, textile designs and special effects for movies. As of 1/28/98, company could not be located, therefore contact/product information is no longer valid.

1993-01-01

11

Image Processing  

NASA Technical Reports Server (NTRS)

Images are prepared from data acquired by the multispectral scanner aboard Landsat, which views Earth in four ranges of the electromagnetic spectrum, two visible bands and two infrared. Scanner picks up radiation from ground objects and converts the radiation signatures to digital signals, which are relayed to Earth and recorded on tape. Each tape contains "pixels" or picture elements covering a ground area; computerized equipment processes the tapes and plots each pixel, line by line, to produce the basic image. Image can be further processed to correct sensor errors, to heighten contrast for feature emphasis or to enhance the end product in other ways. Key factor in conversion of digital data to visual form is precision of processing equipment. Jet Propulsion Laboratory prepared a digital mosaic that was plotted and enhanced by Optronics International, Inc. by use of the company's C-4300 Colorwrite, a high precision, high speed system which manipulates and analyzes digital data and presents it in visual form on film. Optronics manufactures a complete family of image enhancement processing systems to meet all users' needs. Enhanced imagery is useful to geologists, hydrologists, land use planners, agricultural specialists, geographers and others.

1982-01-01
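
The record above mentions heightening contrast for feature emphasis when converting Landsat pixel data to imagery. Below is a minimal sketch of a linear percentile contrast stretch; the 2nd/98th percentile cut-offs are a common convention assumed here, not the procedure used by JPL or Optronics.

```python
import numpy as np

def contrast_stretch(band, low_pct=2, high_pct=98):
    """Rescale a single band so the chosen percentiles map to 0 and 255."""
    lo, hi = np.percentile(band, [low_pct, high_pct])
    stretched = (band.astype(float) - lo) / max(hi - lo, 1e-9)
    return np.clip(stretched * 255, 0, 255).astype(np.uint8)

# A dull synthetic band (values 30-120) stretched to the full 8-bit display range.
band = np.random.default_rng(1).integers(30, 120, size=(512, 512)).astype(np.uint8)
enhanced = contrast_stretch(band)
```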

12

Teaching Image-Processing Concepts in Junior High School: Boys' and Girls' Achievements and Attitudes towards Technology  

ERIC Educational Resources Information Center

Background: This research focused on the development, implementation and evaluation of a course on image-processing principles aimed at middle-school students. Purpose: The overarching purpose of the study was that of integrating the learning of subjects in science, technology, engineering and mathematics (STEM), and linking the learning of these…

Barak, Moshe; Asad, Khaled

2012-01-01

13

Teaching image-processing concepts in junior high school: boys’ and girls’ achievements and attitudes towards technology  

Microsoft Academic Search

Background: This research focused on the development, implementation and evaluation of a course on image-processing principles aimed at middle-school students. Purpose: The overarching purpose of the study was that of integrating the learning of subjects in science, technology, engineering and mathematics (STEM), and linking the learning of these subjects to the children’s world and to the digital culture characterizing society today. Sample:

Moshe Barak; Khaled Asad

2012-01-01

14

Using Process Visuals to Teach Art  

ERIC Educational Resources Information Center

Meeting the diverse needs of Pamela Malkin's students forced her to really look at what she teaches and how she teaches it. While she was teaching her sixth-grade students to create a mask from clay, she found that many were having a hard time remembering all the steps of her demonstration. Since most of the process was taught through…

Malkin, Pamela

2005-01-01

15

Concerns about Teaching Process: Student Teachers' Perspective  

ERIC Educational Resources Information Center

The aim of this study is to determine the student teachers' concerns about the teaching process including the teaching profession, teaching methods, planning, instruction, evaluation and classroom management. A total of 156 student teachers from five departments in the Gazi faculty of education participated in the study. A questionnaire including…

Cakmak, Melek

2008-01-01

16

Linear Algebra and Image Processing  

ERIC Educational Resources Information Center

We use the computing technology digital image processing (DIP) to enhance the teaching of linear algebra so as to make the course more visual and interesting. Certainly, this visual approach by using technology to link linear algebra to DIP is interesting and unexpected to both students as well as many faculty. (Contains 2 tables and 11 figures.)

Allali, Mohamed

2010-01-01
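
One standard way to connect linear algebra to digital image processing in the classroom is low-rank image approximation via the singular value decomposition. The sketch below is a generic illustration of that idea and is not necessarily the example used in the cited article.

```python
import numpy as np

def low_rank_approximation(image, rank):
    """Best rank-k approximation of a 2-D image in the Frobenius norm (Eckart-Young)."""
    U, s, Vt = np.linalg.svd(image, full_matrices=False)
    return U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank, :]

# Compress a random 128x128 "image" to rank 20 and measure the relative error.
image = np.random.default_rng(2).random((128, 128))
compressed = low_rank_approximation(image, rank=20)
error = np.linalg.norm(image - compressed) / np.linalg.norm(image)
```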

17

Image Processing  

NASA Technical Reports Server (NTRS)

The Computer Graphics Center of North Carolina State University uses LAS, a COSMIC program, to analyze and manipulate data from Landsat and SPOT providing information for government and commercial land resource application projects. LAS is used to interpret aircraft/satellite data and enables researchers to improve image-based classification accuracies. The system is easy to use and has proven to be a valuable remote sensing training tool.

1991-01-01

18

Computers in Public Schools: Changing the Image with Image Processing.  

ERIC Educational Resources Information Center

The kinds of educational technologies selected can make the difference between uninspired, rote computer use and challenging learning experiences. University of Arizona's Image Processing for Teaching Project has worked with over 1,000 teachers to develop image-processing techniques that provide students with exciting, open-ended opportunities for…

Raphael, Jacqueline; Greenberg, Richard

1995-01-01

19

Teaching the Process Approach in Poland.  

ERIC Educational Resources Information Center

Members of the Polish faculty at the English Institute (Poland) primarily use English as a second language (ESL) techniques to teach writing, with grammar and idiom drills, and little writing beyond the sentence level. An American professor, on the other hand, used a process approach to teach writing by providing specific instructions about…

Carter, Ronnie D.

20

Digital image processing  

NASA Technical Reports Server (NTRS)

The Federal Systems Division of IBM has developed an image processing facility to experimentally process, view, and record digital image data. This facility has been used to support LANDSAT digital image processing investigations and advanced image processing research and development. A brief description of the facility is presented, some techniques that have been developed to correct the image data are discussed, and some results obtained by users of the facility are described.

Bernstein, R.; Ferneyhough, D. G., Jr.

1975-01-01

21

TEACHING AND ASSESSING THE WRITING PROCESS  

Microsoft Academic Search

Biros, Gail H. Teaching and Assessing the Writing Process, 2000. Thesis Advisor: Dr. Stanley Urban, Learning Disabilities Graduate Program. The purpose of this study was to provide valuable information regarding the

Gail H. Biros

22

Digital Imaging and Image Processing  

NSDL National Science Digital Library

The first site is an excellent introduction to digital imaging from the Eastman Kodak Company (1). There are five lessons with review questions and competency exams, covering fundamentals, image capture, and processing. A more technical introduction is found at the Digital Imaging Glossary (2). This educational resource has several short articles about compression algorithms and specific imaging techniques. The Hypermedia Image Processing Reference (3) goes into the theory of image processing. It describes operations involving image arithmetic, blending multiple images, and feature detectors, to name a few; and several of the sections have illustrative Java applets. The Center for Imaging Science at Johns Hopkins University (4) offers two chapters from a book on "metric pattern theory." A brief overview of the material is provided on the main page, and the chapters can be viewed on or offline with special plug-ins given on the Web site. The Journal of Electronic Imaging (5) is a quarterly publication with many papers on current research. The final issue of 2002 has a special section on Internet imaging that is quite interesting. A research project at the University of Washington (6) focuses on the role of mathematics in image processing. Besides a thorough description of the project, there is free software and documentation given on the Web site. Philips Research (7) is working on a product that seems like something from a science fiction movie. Three dimensional television and the technologies that make it possible are described on the site. Related to this is a November 2002 news article discussing holograms and 3-D video displays (8). The devices are being studied by the Spatial Imaging Group at the Massachusetts Institute of Technology Media Lab.

Leske, Cavin.

2002-01-01

23

Biomedical image processing  

SciTech Connect

Biomedical image processing is a very broad field; it covers biomedical signal gathering, image forming, picture processing, and image display to medical diagnosis based on features extracted from images. This article reviews this topic in both its fundamentals and applications. In its fundamentals, some basic image processing techniques including outlining, deblurring, noise cleaning, filtering, search, classical analysis and texture analysis have been reviewed together with examples. The state-of-the-art image processing systems have been introduced and discussed in two categories: general purpose image processing systems and image analyzers. In order for these systems to be effective for biomedical applications, special biomedical image processing languages have to be developed. The combination of both hardware and software leads to clinical imaging devices. Two different types of clinical imaging devices have been discussed. There are radiological imagings which include radiography, thermography, ultrasound, nuclear medicine and CT. Among these, thermography is the most noninvasive but is limited in application due to the low energy of its source. X-ray CT is excellent for static anatomical images and is moving toward the measurement of dynamic function, whereas nuclear imaging is moving toward organ metabolism and ultrasound is toward tissue physical characteristics. Heart imaging is one of the most interesting and challenging research topics in biomedical image processing; current methods including the invasive-technique cineangiography, and noninvasive ultrasound, nuclear medicine, transmission, and emission CT methodologies have been reviewed.

Huang, H.K.

1981-01-01

24

Image Processing Software  

NASA Technical Reports Server (NTRS)

The Ames digital image velocimetry technology has been incorporated in a commercially available image processing software package that allows motion measurement of images on a PC alone. The software, manufactured by Werner Frei Associates, is IMAGELAB FFT. IMAGELAB FFT is a general purpose image processing system with a variety of other applications, among them image enhancement of fingerprints and use by banks and law enforcement agencies for analysis of videos run during robberies.

1990-01-01

25

Digital image processing  

Microsoft Academic Search

The field of digital image processing is reviewed with reference to its origins, progress, current status, and prospects for the future. Consideration is given to the evolution of image processor display devices, developments in the functional components of an image processor display system (e.g. memory, data bus, and pipeline central processing unit), and developments in the software. The major future

B. R. Hunt

1981-01-01

26

Visualization Image Processing  

E-print Network

Keywords: Visualization, Image Processing. Prof. Dr. Gerik Scheuermann. Visualization transforms ... in biotechnology and biomedicine, visualization receives increasing interest. Image processing transforms images with the goal of supporting human vision or automatic visual analysis. The research group does basic re

Schüler, Axel

27

Hyperspectral image processing  

Technology Transfer Automated Retrieval System (TEKTRAN)

Hyperspectral image processing refers to the use of computer algorithms to extract, store and manipulate both spatial and spectral information contained in hyperspectral images across the visible and near-infrared portion of the electromagnetic spectrum. A typical hyperspectral image processing work...

28

Hybrid image processing  

NASA Technical Reports Server (NTRS)

Partly-digital, partly-optical 'hybrid' image processing attempts to use the properties of each domain to synergistic advantage: while Fourier optics furnishes speed, digital processing allows the use of much greater algorithmic complexity. The video-rate image-coordinate transformation used is a critical technology for real-time hybrid image-pattern recognition. Attention is given to the separation of pose variables, image registration, and both single- and multiple-frame registration.

Juday, Richard D.

1990-01-01

29

Subroutines For Image Processing  

NASA Technical Reports Server (NTRS)

Image Processing Library computer program, IPLIB, is collection of subroutines facilitating use of COMTAL image-processing system driven by HP 1000 computer. Functions include addition or subtraction of two images with or without scaling, display of color or monochrome images, digitization of image from television camera, display of test pattern, manipulation of bits, and clearing of screen. Provides capability to read or write points, lines, and pixels from image; read or write at location of cursor; and read or write array of integers into COMTAL memory. Written in FORTRAN 77.

Faulcon, Nettie D.; Monteith, James H.; Miller, Keith W.

1988-01-01
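
The record above lists addition or subtraction of two images with or without scaling among IPLIB's functions. A minimal NumPy sketch of that operation follows; it mimics the idea only and is not the FORTRAN 77 IPLIB/COMTAL interface.

```python
import numpy as np

def combine(a, b, subtract=False, scale=1.0, offset=0.0):
    """Return scale * (a - b) + offset or scale * (a + b) + offset, clipped to 8 bits."""
    result = a.astype(float) - b if subtract else a.astype(float) + b
    return np.clip(scale * result + offset, 0, 255).astype(np.uint8)

# Scaled difference of two flat test frames.
frame_a = np.full((64, 64), 180, dtype=np.uint8)
frame_b = np.full((64, 64), 60, dtype=np.uint8)
difference = combine(frame_a, frame_b, subtract=True, scale=0.5, offset=10.0)
```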

30

Computer Aided Teaching of Digital Signal Processing.  

ERIC Educational Resources Information Center

Describes a microcomputer-based software package developed at the University of Surrey for teaching digital signal processing to undergraduate science and engineering students. Menu-driven software capabilities are explained, including demonstration of qualitative concepts and experimentation with quantitative data, and examples are given of…

Castro, Ian P.

1990-01-01

31

Teaching Psychological Report Writing: Content and Process  

ERIC Educational Resources Information Center

The purpose of this article is to discuss the process of teaching graduate students in school psychology to write psychological reports that teachers and parents find readable and that guide intervention. The consensus from studies across four decades of research is that effective psychological reports connect to the client's context; have clear…

Wiener, Judith; Costaris, Laurie

2012-01-01

32

TEACHING PEER REVIEW AND THE PROCESS OF SCIENTIFIC WRITING  

E-print Network

This method has been implemented in the course Cell and Molecular Biology for Engineers. The result is a review article in the format for submission to a major scientific journal.

Guilford, William

33

Image processing mini manual  

NASA Technical Reports Server (NTRS)

The intent is to provide an introduction to the image processing capabilities available at the Langley Research Center (LaRC) Central Scientific Computing Complex (CSCC). Various image processing software components are described. Information is given concerning the use of these components in the Data Visualization and Animation Laboratory at LaRC.

Matthews, Christine G.; Posenau, Mary-Anne; Leonard, Desiree M.; Avis, Elizabeth L.; Debure, Kelly R.; Stacy, Kathryn; Vonofenheim, Bill

1992-01-01

34

Image Processing Software  

NASA Technical Reports Server (NTRS)

To convert raw data into environmental products, the National Weather Service and other organizations use the Global 9000 image processing system marketed by Global Imaging, Inc. The company's GAE software package is an enhanced version of the TAE, developed by Goddard Space Flight Center to support remote sensing and image processing applications. The system can be operated in three modes and is combined with HP Apollo workstation hardware.

1992-01-01

35

Image Processing Learning Resources  

NSDL National Science Digital Library

The Hypermedia Image Processing Reference (HIPR) offers a wealth of resources for users of image processing and an introduction to hypermedia (through use with Web browsers). HIPR was developed at the Department of Artificial Intelligence in the University of Edinburgh as computer-based tutorial materials for use in courses on image processing and machine vision. The material is available as a package that can easily be shared on a local area network and then made available at any suitably equipped computer connected to that network. The materials cover a wide range of image processing operations and are complemented by an extensive collection of actual digitized images, all organized for easy cross-referencing. Some features include a reference section with information on some of the most common classes of image-processing operations currently used, a section describing how each operation works, and various other instructional tools, such as Java demonstrations; interactive tableau where multiple operators can demonstrate sequences of operations; suggestions for appropriate use of operations; example input and output images for each operation; suggested student exercises; an encyclopedic glossary of common image processing concepts and terms; and other reference information. From the index, visitors can search on a particular topic covered in this website.

36

Teaching Process Design through Integrated Process Synthesis  

ERIC Educational Resources Information Center

The design course is an integral part of chemical engineering education. A novel approach to the design course was recently introduced at the University of the Witwatersrand, Johannesburg, South Africa. The course aimed to introduce students to systematic tools and techniques for setting and evaluating performance targets for processes, as well as…

Metzger, Matthew J.; Glasser, Benjamin J.; Patel, Bilal; Hildebrandt, Diane; Glasser, David

2012-01-01

37

BAOlab: Image processing program  

NASA Astrophysics Data System (ADS)

BAOlab is an image processing package written in C that should run on nearly any UNIX system with just the standard C libraries. It reads and writes images in standard FITS format; 16- and 32-bit integer as well as 32-bit floating-point formats are supported. Multi-extension FITS files are currently not supported. Among its tools are ishape for size measurements of compact sources, mksynth for generating synthetic images consisting of a background signal including Poisson noise and a number of pointlike sources, imconvol for convolving two images (a “source” and a “kernel”) with each other using fast Fourier transforms (FFTs) and storing the output as a new image, and kfit2d for fitting a two-dimensional King model to an image.

Larsen, Søren S.

2014-03-01
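
The abstract above says BAOlab's imconvol convolves a "source" image with a "kernel" using FFTs. The sketch below shows FFT-based convolution in plain NumPy; it is not BAOlab's C implementation and it omits FITS I/O.

```python
import numpy as np

def fft_convolve(source, kernel):
    """Circular convolution via the convolution theorem: multiply in Fourier space."""
    padded = np.zeros_like(source, dtype=float)
    kh, kw = kernel.shape
    padded[:kh, :kw] = kernel
    # Shift the kernel so its centre sits at the array origin before transforming.
    padded = np.roll(padded, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(source) * np.fft.fft2(padded)))

# A noisy synthetic "source" smoothed with a normalised, PSF-like kernel.
source = np.random.default_rng(3).poisson(50, size=(256, 256)).astype(float)
window = np.hanning(15)
kernel = np.outer(window, window)
kernel /= kernel.sum()
smoothed = fft_convolve(source, kernel)
```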

38

Video image processing  

NASA Technical Reports Server (NTRS)

Current technology projections indicate a lack of availability of special purpose computing for Space Station applications. Potential functions for video image special purpose processing are being investigated, such as smoothing, enhancement, restoration and filtering, data compression, feature extraction, object detection and identification, pixel interpolation/extrapolation, spectral estimation and factorization, and vision synthesis. Also, architectural approaches are being identified and a conceptual design generated. Computationally simple algorithms will be researched and their image/vision effectiveness determined. Suitable algorithms will be implemented into an overall architectural approach that will provide image/vision processing at video rates that are flexible, selectable, and programmable. Information is given in the form of charts, diagrams and outlines.

Murray, N. D.

1985-01-01

39

Image-Processing Program  

NASA Technical Reports Server (NTRS)

IMAGEP manipulates digital image data to effect various processing, analysis, and enhancement functions. It is keyboard-driven program organized into nine subroutines. Within subroutines are sub-subroutines also selected via keyboard. Algorithm has possible scientific, industrial, and biomedical applications in study of flows in materials, analysis of steels and ores, and pathology, respectively.

Roth, D. J.; Hull, D. R.

1994-01-01

40

Image processing and reconstruction  

SciTech Connect

This talk will examine some mathematical methods for image processing and the solution of underdetermined, linear inverse problems. The talk will have a tutorial flavor, mostly accessible to undergraduates, while still presenting research results. The primary approach is the use of optimization problems. We will find that relaxing the usual assumption of convexity will give us much better results.

Chartrand, Rick [Los Alamos National Laboratory]

2012-06-15

41

Group Processes Online: Teaching collaboration through collaborative processes  

Microsoft Academic Search

While many subject areas lend themselves well to off-campus distance education delivery, there have always been some which do not necessarily adapt well to non face-to-face provision, in particular, those subjects where interpersonal interaction is integral. This paper discusses a course teaching group processes which had been previously offered on- and off-campus, and its subsequent redesign for online delivery. The

Kath Fisher; Renata Phelps; Allan Ellis

2000-01-01

42

MEDICAL IMAGE PROCESSING USING MATLAB  

Microsoft Academic Search

MATLAB and the Image Processing Toolbox provide a wide range of advanced image processing functions and interactive tools for enhancing and analyzing digital images. The interactive tools allowed us to perform spatial image transformations, morphological operations such as edge detection and noise removal, region-of-interest processing, filtering, basic statistics, curve fitting, FFT, DCT and Radon Transform. Making graphics objects semitransparent is

Emilia Dana SELEŢCHI

2008-01-01
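
The record above describes operations performed with MATLAB's Image Processing Toolbox. For comparison, the sketch below performs two of the listed operations, noise removal and edge detection, with scikit-image; it is an analogy with assumed parameters, not the MATLAB toolbox itself.

```python
import numpy as np
from skimage import filters, util

# A synthetic binary test pattern stands in for the medical images in the record.
image = (np.random.default_rng(4).random((128, 128)) > 0.5).astype(float)
noisy = util.random_noise(image, mode="s&p", amount=0.05)  # corrupt with salt & pepper
denoised = filters.median(noisy)                           # noise removal with a small median filter
edges = filters.sobel(denoised)                            # edge detection
```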

43

Medical Image Processing using MATLAB  

Microsoft Academic Search

MATLAB and the Image Processing Toolbox provide a wide range of advanced image processing functions and interactive tools for enhancing and analyzing digital images. The interactive tools allowed us to perform spatial image transformations, morphological operations such as edge detection and noise removal, region-of-interest processing, filtering, basic statistics, curve fitting, FFT, DCT and Radon Transform. Making graphics objects semitransparent is

Emilia Dana

44

Morphological image sequence processing  

Microsoft Academic Search

We present a morphological multi-scale method for image sequence processing, which results in a truly coupled spatio-temporal anisotropic diffusion. The aim of the method is not to smooth the level-sets of single frames but to denoise the whole sequence while retaining geometric features such as spatial edges and highly accelerated motions. This is obtained by an anisotropic spatio-temporal level-set evolution,

Karol Mikula; Tobias Preusser; Martin Rumpf

2004-01-01

45

Teaching People and Machines to Enhance Images  

NASA Astrophysics Data System (ADS)

Procedural tasks such as following a recipe or editing an image are very common. They require a person to execute a sequence of operations (e.g. chop onions, or sharpen the image) in order to achieve the goal of the task. People commonly use step-by-step tutorials to learn these tasks. We focus on software tutorials, more specifically photo manipulation tutorials, and present a set of tools and techniques to help people learn, compare and automate photo manipulation procedures. We describe three different systems that are each designed to help with a different stage in acquiring procedural knowledge. Today, people primarily rely on hand-crafted tutorials in books and on websites to learn photo manipulation procedures. However, putting together a high quality step-by-step tutorial is a time-consuming process. As a consequence, many online tutorials are poorly designed which can lead to confusion and slow down the learning process. We present a demonstration-based system for automatically generating succinct step-by-step visual tutorials of photo manipulations. An author first demonstrates the manipulation using an instrumented version of GIMP (GNU Image Manipulation Program) that records all changes in interface and application state. From the example recording, our system automatically generates tutorials that illustrate the manipulation using images, text, and annotations. It leverages automated image labeling (recognition of facial features and outdoor scene structures in our implementation) to generate more precise text descriptions of many of the steps in the tutorials. A user study finds that our tutorials are effective for learning the steps of a procedure; users are 20-44% faster and make 60-95% fewer errors when using our tutorials than when using screencapture video tutorials or hand-designed tutorials. We also demonstrate a new interface that allows learners to navigate, explore and compare large collections (i.e. thousands) of photo manipulation tutorials based on their command-level structure. Sites such as tutorialized.com or good-tutorials.com collect tens of thousands of photo manipulation tutorials. These collections typically contain many different tutorials for the same task. For example, there are many different tutorials that describe how to recolor the hair of a person in an image. Learners often want to compare these tutorials to understand the different ways a task can be done. They may also want to identify common strategies that are used across tutorials for a variety of tasks. However, the large number of tutorials in these collections and their inconsistent formats can make it difficult for users to systematically explore and compare them. Current tutorial collections do not exploit the underlying command-level structure of tutorials, and to explore the collection users have to either page through long lists of tutorial titles or perform keyword searches on the natural language tutorial text. We present a new browsing interface to help learners navigate, explore and compare collections of photo manipulation tutorials based on their command-level structure. Our browser indexes tutorials by their commands, identifies common strategies within the tutorial collection, and highlights the similarities and differences between sets of tutorials that execute the same task. User feedback suggests that our interface is easy to understand and use, and that users find command-level browsing to be useful for exploring large tutorial collections. 
They strongly preferred to explore tutorial collections with our browser over keyword search. Finally, we present a framework for generating content-adaptive macros (programs) that can transfer complex photo manipulation procedures to new target images. After learners master a photo manipulation procedure, they often repeatedly apply it to multiple images. For example, they might routinely apply the same vignetting effect to all their photographs. This process can be very tedious especially for procedures that involve many steps. While image manipulation programs pro

Berthouzoz, Floraine Sara Martianne

46

scikit-image: image processing in Python.  

PubMed

scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image. More information can be found on the project homepage, http://scikit-image.org. PMID:25024921

van der Walt, Stéfan; Schönberger, Johannes L; Nunez-Iglesias, Juan; Boulogne, François; Warner, Joshua D; Yager, Neil; Gouillart, Emmanuelle; Yu, Tony

2014-01-01
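
The record above describes the scikit-image library itself, so a minimal usage example follows. The functions shown (data.coins, filters.threshold_otsu, measure.label) are part of the documented scikit-image API; the specific pipeline is our own illustration, not one from the paper.

```python
from skimage import data, filters, measure

coins = data.coins()                       # a bundled grayscale sample image
threshold = filters.threshold_otsu(coins)  # global Otsu threshold
binary = coins > threshold
labels = measure.label(binary)             # connected-component labelling
print(f"found {labels.max()} connected regions")
```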

47

scikit-image: image processing in Python  

PubMed Central

scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image. More information can be found on the project homepage, http://scikit-image.org. PMID:25024921

Schönberger, Johannes L.; Nunez-Iglesias, Juan; Boulogne, François; Warner, Joshua D.; Yager, Neil; Gouillart, Emmanuelle; Yu, Tony

2014-01-01

48

Computer image processing and recognition  

NASA Technical Reports Server (NTRS)

A systematic introduction to the concepts and techniques of computer image processing and recognition is presented. Consideration is given to such topics as image formation and perception; computer representation of images; image enhancement and restoration; reconstruction from projections; digital television, encoding, and data compression; scene understanding; scene matching and recognition; and processing techniques for linear systems.

Hall, E. L.

1979-01-01

49

Image processing in precision agriculture  

Microsoft Academic Search

A brief review of our signal and image processing application in precision agriculture is presented. A method for determining sampling frequency for agriculture data is proposed, and some initial results based on data simulation and image processing are reported

Dragoljub Pokrajac; A. Lazarevic; S. Vucetic; T. Fiez; Z. Obradovic

1999-01-01

50

Image processing and recognition for biological images  

PubMed Central

This paper reviews image processing and pattern recognition techniques, which will be useful to analyze bioimages. Although this paper does not provide their technical details, it will be possible to grasp their main tasks and typical tools to handle the tasks. Image processing is a large research area to improve the visibility of an input image and acquire some valuable information from it. As the main tasks of image processing, this paper introduces gray-level transformation, binarization, image filtering, image segmentation, visual object tracking, optical flow and image registration. Image pattern recognition is the technique to classify an input image into one of the predefined classes and also has a large research area. This paper overviews its two main modules, that is, feature extraction module and classification module. Throughout the paper, it will be emphasized that bioimage is a very difficult target for even state-of-the-art image processing and pattern recognition techniques due to noises, deformations, etc. This paper is expected to be one tutorial guide to bridge biology and image processing researchers for their further collaboration to tackle such a difficult target. PMID:23560739

Uchida, Seiichi

2013-01-01
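
The review above enumerates gray-level transformation, image filtering, binarization and segmentation among the main image processing tasks. The sketch below strings three of them together with scikit-image on a synthetic image; nothing in it is taken from the paper.

```python
import numpy as np
from skimage import exposure, filters, measure

rng = np.random.default_rng(5)
blobs = filters.gaussian(rng.random((256, 256)), sigma=4)  # synthetic stand-in bioimage
adjusted = exposure.adjust_gamma(blobs, gamma=0.8)         # gray-level transformation
smoothed = filters.gaussian(adjusted, sigma=1)             # image filtering
binary = smoothed > filters.threshold_otsu(smoothed)       # binarization
regions = measure.regionprops(measure.label(binary))       # segmented objects
print(len(regions), "segmented objects")
```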

51

Fundamentals of Biomedical Image Processing  

PubMed Central

Automatic biomedical image processing has enjoyed increased popularity of late, primarily because it can be used to enhance images to measure and count accurately and quickly in various types of applications. Preliminary background and basic terminology commonly used in biomedical image processing will be reviewed. Among these are sources and forms of biomedical images, image enhancement, searching and analysis modes in biomedical image processing, and possible output formats.

Huang, H.K.

1978-01-01

52

IMAGES: An interactive image processing system  

NASA Technical Reports Server (NTRS)

The IMAGES interactive image processing system was created specifically for undergraduate remote sensing education in geography. The system is interactive, relatively inexpensive to operate, almost hardware independent, and responsive to numerous users at one time in a time-sharing mode. Most important, it provides a medium whereby theoretical remote sensing principles discussed in lecture may be reinforced in laboratory as students perform computer-assisted image processing. In addition to its use in academic and short course environments, the system has also been used extensively to conduct basic image processing research. The flow of information through the system is discussed including an overview of the programs.

Jensen, J. R.

1981-01-01

53

Smart Image Enhancement Process  

NASA Technical Reports Server (NTRS)

Contrast and lightness measures are used to first classify the image as being one of non-turbid and turbid. If turbid, the original image is enhanced to generate a first enhanced image. If non-turbid, the original image is classified in terms of a merged contrast/lightness score based on the contrast and lightness measures. The non-turbid image is enhanced to generate a second enhanced image when a poor contrast/lightness score is associated therewith. When the second enhanced image has a poor contrast/lightness score associated therewith, this image is enhanced to generate a third enhanced image. A sharpness measure is computed for one image that is selected from (i) the non-turbid image, (ii) the first enhanced image, (iii) the second enhanced image when a good contrast/lightness score is associated therewith, and (iv) the third enhanced image. If the selected image is not-sharp, it is sharpened to generate a sharpened image. The final image is selected from the selected image and the sharpened image.

Jobson, Daniel J. (Inventor); Rahman, Zia-ur (Inventor); Woodell, Glenn A. (Inventor)

2012-01-01
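
The patent abstract above describes a branching enhancement pipeline. The sketch below captures that control flow only; the contrast/lightness score, sharpness test, and enhancement and sharpening steps are hypothetical placeholders supplied by the caller, and the final selection step is simplified.

```python
# Control-flow sketch of the enhancement pipeline in the abstract above.
# All helpers (is_turbid, score, is_sharp, enhance, sharpen) are hypothetical
# placeholders; the patent does not define them here, and the final selection
# is simplified to returning the last image produced.
def smart_enhance(image, is_turbid, score, is_sharp, enhance, sharpen):
    """Apply the turbid / contrast-lightness / sharpness decision chain."""
    if is_turbid(image):
        candidate = enhance(image)              # "first enhanced image"
    else:
        candidate = image
        if score(candidate) == "poor":
            candidate = enhance(candidate)      # "second enhanced image"
            if score(candidate) == "poor":
                candidate = enhance(candidate)  # "third enhanced image"
    if not is_sharp(candidate):
        candidate = sharpen(candidate)          # "sharpened image"
    return candidate
```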

54

The Order of Image Processing  

Microsoft Academic Search

The first step in processing your exposures should be the dark frame subtraction and flat-field correction. Some programs, like CCDSoft, call this process image reduction, whereas other programs, like Maxim DL, call this image calibration. Most image processing programs have a routine to perform this efficiently on all of your images. Make sure that the dark frames match the temperature,

Ruben Kier

55

Factors Causing Demotivation in EFL Teaching Process: A Case Study  

ERIC Educational Resources Information Center

Studies have mainly focused on strategies to motivate teachers or the student-teacher motivation relationships rather than teacher demotivation in the English as a foreign language (EFL) teaching process, whereas no data have been found on the factors that cause teacher demotivation in the Turkish EFL teaching contexts at the elementary education…

Aydin, Selami

2012-01-01

56

Processing Visual Images  

SciTech Connect

The back of the eye is lined by an extraordinary biological pixel detector, the retina. This neural network is able to extract vital information about the external visual world, and transmit this information in a timely manner to the brain. In this talk, Professor Litke will describe a system that has been implemented to study how the retina processes and encodes dynamic visual images. Based on techniques and expertise acquired in the development of silicon microstrip detectors for high energy physics experiments, this system can simultaneously record the extracellular electrical activity of hundreds of retinal output neurons. After presenting first results obtained with this system, Professor Litke will describe additional applications of this incredible technology.

Litke, Alan (UC Santa Cruz)

2006-03-27

57

BITS based imaging process  

Microsoft Academic Search

Many universities use Symantec Ghost to image PCs in their campus computer labs. However, issues related to network traffic, multicasting, and file size create numerous difficulties when transferring image files. Tim Leamy at the University of California, Davis (UC Davis) created a system using Microsoft's Background Intelligent Transfer Service (BITS) to transfer Ghost images to PCs. Allan Chen and Rob Smith have

Tim Leamy; Rob Smith; Allan Chen

2006-01-01

58

Optimization of the Teaching Process in the Soviet Union  

ERIC Educational Resources Information Center

Comprising a translation of the foreword and Chapter 3 of the manual Optimization of the Teaching Process by Iu. K. Babansky, this issue treats the reasons for and the prevention of failing performance among school children. (Author)

Babansky, Iu. K.

1973-01-01

59

Cooperative processes in image segmentation  

NASA Technical Reports Server (NTRS)

Research into the role of cooperative, or relaxation, processes in image segmentation is surveyed. Cooperative processes can be employed at several levels of the segmentation process as a preprocessing enhancement step, during supervised or unsupervised pixel classification and, finally, for the interpretation of image segments based on segment properties and relations.

Davis, L. S.

1982-01-01

60

Fractional Modeling Method of Cognition Process in Teaching Evaluation  

Microsoft Academic Search

Cognition process has been translated into other quantitative indicators in some assessment decision systems. In a teaching evaluation system, a fractional cognition process model is proposed in this paper. The fractional model is built on fractional calculus theory combined with classroom teaching features. The fractional coefficient is determined by the actual course information. Student self-parameter is decided by the actual situation

Chunna Zhao; Minhua Wu; Yu Zhao; Liming Luo; Yingshun Li

2011-01-01

61

Teaching Science: A Picture Perfect Process.  

ERIC Educational Resources Information Center

Explains how teachers can use graphs and graphing concepts when teaching art, language arts, history, social studies, and science. Students can graph the lifespans of the Ninja Turtles' Renaissance namesakes (Donatello, Michelangelo, Raphael, and Leonardo da Vinci) or world population growth. (MDM)

Leyden, Michael B.

1994-01-01

62

The Tao of Teaching: Romance and Process.  

ERIC Educational Resources Information Center

Because college teaching aims to elevate, not entertain, it must be nourished and appreciated as a pedagogical alchemy mixing facts and feelings, ideas and skills, history and mystery. The current debate on educational reform should focus more on quality of learning experience, and on how to create and sustain it. (MSE)

Schindler, Stefan

1991-01-01

63

Teaching as an Interpretive Inquiry Process.  

ERIC Educational Resources Information Center

An ethnographic approach to teacher action research is presented in this paper. The paper argues that an ethnographic, rather than a qualitative approach, should be considered by teachers wishing to engage in interpretive inquiry (action) research because of the focus on classroom and larger cultural contexts on the teaching-learning situation.…

Brown, Mary Jo McGee

64

Teaching about the Physics of Medical Imaging  

NASA Astrophysics Data System (ADS)

Even before the discovery of X-rays, attempts at non-invasive medical imaging required an understanding of fundamental principles of physics. Students frequently do not see these connections because they are not taught in beginning physics courses. To help students understand that physics and medical imaging are closely connected, we have developed a series of active learning units. For each unit we begin by studying how students transfer their knowledge from traditional physics classes and everyday experiences to medical applications. Then, we build instructional materials to take advantage of the students' ability to use their existing learning and knowledge resources. Each of the learning units involves a combination of hands-on activities, which present analogies, and interactive computer simulations. Our learning units introduce students to the contemporary imaging techniques of CT scans, magnetic resonance imaging (MRI), positron emission tomography (PET), and wavefront aberrometry. The project's web site is http://web.phys.ksu.edu/mmmm/.

Zollman, Dean; McBride, Dyan; Murphy, Sytil; Aryal, Bijaya; Kalita, Spartak; Wirjawan, Johannes v. d.

2010-07-01

65

Industrial applications of process imaging and image processing  

NASA Astrophysics Data System (ADS)

Process imaging is the art of visualizing events inside closed industrial processes. Image processing is the art of mathematically manipulating digitized images to extract quantitative information about such processes. Ongoing advances in camera and computer technology have made it feasible to apply these abilities to measurement needs in the chemical industry. To illustrate the point, this paper describes several applications developed at DuPont, where a variety of measurements are based on in-line, at-line, and off-line imaging. Application areas include compounding, melt extrusion, crystallization, granulation, media milling, and particle characterization. Polymer compounded with glass fiber is evaluated by a patented radioscopic (real-time X-ray imaging) technique to measure concentration and dispersion uniformity of the glass. Contamination detection in molten polymer (important for extruder operations) is provided by both proprietary and commercial on-line systems. Crystallization in production reactors is monitored using in-line probes and flow cells. Granulation is controlled by at-line measurements of granule size obtained from image processing. Tomographic imaging provides feedback for improved operation of media mills. Finally, particle characterization is provided by a robotic system that measures individual size and shape for thousands of particles without human supervision. Most of these measurements could not be accomplished with other (non-imaging) techniques.

Scott, David M.; Sunshine, Gregg; Rosen, Lou; Jochen, Ed

2001-02-01

66

Using Classic and Contemporary Visual Images in Clinical Teaching.  

ERIC Educational Resources Information Center

The patient's body is an image that medical students and residents use to process information. The classic use of images using the patient is qualitative and personal. The contemporary use of images is quantitative and impersonal. The contemporary use of imaging includes radiographic, nuclear, scintigraphic, and nuclear magnetic resonance…

Edwards, Janine C.

1990-01-01

67

Astronomical Image Processing with Hadoop  

NASA Astrophysics Data System (ADS)

In the coming decade astronomical surveys of the sky will generate tens of terabytes of images and detect hundreds of millions of sources every night. With a requirement that these images be analyzed in real time to identify moving sources such as potentially hazardous asteroids or transient objects such as supernovae, these data streams present many computational challenges. In the commercial world, new techniques that utilize cloud computing have been developed to handle massive data streams. In this paper we describe how cloud computing, and in particular the map-reduce paradigm, can be used in astronomical data processing. We will focus on our experience implementing a scalable image-processing pipeline for the SDSS database using Hadoop (http://hadoop.apache.org). This multi-terabyte imaging dataset approximates future surveys such as those which will be conducted with the LSST. Our pipeline performs image coaddition in which multiple partially overlapping images are registered, integrated and stitched into a single overarching image. We will first present our initial implementation, then describe several critical optimizations that have enabled us to achieve high performance, and finally describe how we are incorporating a large in-house existing image processing library into our Hadoop system. The optimizations involve prefiltering of the input to remove irrelevant images from consideration, grouping individual FITS files into larger, more efficient indexed files, and a hybrid system in which a relational database is used to determine the input images relevant to the task. The incorporation of an existing image processing library, written in C++, presented difficult challenges since Hadoop is programmed primarily in Java. We will describe how we achieved this integration and the sophisticated image processing routines that were made feasible as a result. We will end by briefly describing the longer term goals of our work, namely detection and classification of transient objects and automated object classification.

Wiley, K.; Connolly, A.; Krughoff, S.; Gardner, J.; Balazinska, M.; Howe, B.; Kwon, Y.; Bu, Y.

2011-07-01
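
The record above frames image coaddition as a map-reduce computation run on Hadoop. The sketch below illustrates the same idea in plain Python over in-memory arrays; it omits registration, FITS handling and the Hadoop machinery, and the single-block keying is an assumption made for brevity.

```python
from collections import defaultdict
import numpy as np

def map_phase(images):
    """Emit (key, partial sum, weight) records, one per registered exposure."""
    for img in images:
        yield "block-0", img.astype(float), np.ones_like(img, dtype=float)

def reduce_phase(records):
    """Sum the partial contributions per key and normalise to form the coadd."""
    sums, weights = defaultdict(float), defaultdict(float)
    for key, partial, weight in records:
        sums[key] = sums[key] + partial
        weights[key] = weights[key] + weight
    return {key: sums[key] / weights[key] for key in sums}

# Four synthetic, already-registered exposures of the same field.
exposures = [np.random.default_rng(i).normal(100.0, 5.0, (64, 64)) for i in range(4)]
coadd = reduce_phase(map_phase(exposures))["block-0"]
```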

68

Teaching Tools: Physics Downloads, Movies, and Images  

NSDL National Science Digital Library

The University of California Berkeley Physics Lecture Demonstrations Web site contains a page entitled Things of Interest: Downloads, Movies, and Images. The highlight of the site is the downloadable movies of physics experiments that should be very helpful for time and/or money constrained educators. The ten experiments include movies of a chladni disk, Jacob's ladder, dippy bird, a person rotating in a chair while holding dumbbells, a person in a chair with a rotating bicycle wheel, gyroscopic precession, a superconductor, a levitator, jumping rings, and a Tesla coil.

2002-01-01

69

Fuzzy image processing in sun sensor  

NASA Technical Reports Server (NTRS)

This paper will describe how the fuzzy image processing is implemented in the instrument. A comparison of the fuzzy image processing and a more conventional image processing algorithm is provided and shows that the fuzzy image processing yields better accuracy than conventional image processing.

Mobasser, S.; Liebe, C. C.; Howard, A.

2003-01-01

70

FILTER: Focusing Images for Learning and Teaching--An Enriched Resource.  

ERIC Educational Resources Information Center

Digital images have a rich potential as learning and teaching resources and are currently under-utilized in the support of pedagogical activities. The FILTER (Focusing Images for Learning and Teaching--an Enriched Resource) project is addressing this under-use and encouraging uptake of visual resources by mapping different types of images and…

Evans, Jill; Conole, Grainne; Youngs, Karla

71

Teaching Peer Review and the Process of Scientific Writing  

NSDL National Science Digital Library

Many undergraduate and graduate students understand neither the process of scientific writing nor the significance of peer review. In response, some instructors have created writing assignments that teach or mimic parts of the scientific publishing process. However, none fully reproduced peer review and revision of papers together with the writing and publishing process from research to final, accepted draft. In addition, most have been instituted at the graduate rather than undergraduate level. We present a detailed method for teaching undergraduate students the full scientific publishing process, including anonymous peer review, during the process of writing a "term paper." The result is a review article in the format for submission to a major scientific journal. This method has been implemented in the course Cell and Molecular Biology for Engineers at the University of Virginia. Use of this method resulted in improved grades, much higher quality in the final manuscript, greater objectivity in grading, and improved understanding of the importance of peer review.

Dr. William H. Guilford (University of Virginia Department of Biomedical Engineering)

2002-01-25

72

Using the Results of Teaching Evaluations to Improve Teaching: A Case Study of a New Systematic Process  

ERIC Educational Resources Information Center

This article describes a new 14-step process for using student evaluations of teaching to improve teaching. The new process includes examination of student evaluations in the context of instructor goals, student evaluations of the same course completed in prior terms, and evaluations of similar courses taught by other instructors. The process has…

Malouff, John M.; Reid, Jackie; Wilkes, Janelle; Emmerton, Ashley J.

2015-01-01

73

Image processing using reconfigurable FPGAs  

NASA Astrophysics Data System (ADS)

The use of reconfigurable field-programmable gate arrays (FPGAs) for imaging applications shows considerable promise to fill the gap that often occurs when digital signal processor chips fail to meet performance specifications. Single-chip DSPs do not have the overall performance to meet the needs of many imaging applications, particularly in real-time designs. Using multiple DSPs to boost performance often presents major design challenges in maintaining data alignment and process synchronization. These challenges can impose serious cost, power consumption and board space penalties. Image processing requires manipulating massive amounts of data at high speed. Although DSP chips can process data at high speed, their architectures can inhibit overall system performance in real-time imaging. The rate of operations can be increased when they are performed in dedicated hardware, such as special-purpose imaging devices and FPGAs, which provides the horsepower necessary to implement real-time image processing products successfully and cost-effectively. For many fixed applications, non-SRAM-based (antifuse or flash-based) FPGAs provide the raw speed to accomplish standard high-speed functions. However, in applications where algorithms are continuously changing and compute operations must be modified, only SRAM-based FPGAs give enough flexibility. The addition of reconfigurable FPGAs as a flexible hardware facility enables DSP chips to perform optimally. The benefits primarily stem from optimizing the hardware for the algorithms or the use of reconfigurable hardware to enhance the product architecture. And with SRAM-based FPGAs that are capable of partial dynamic reconfiguration, such as the Cache-Logic FPGAs from Atmel, continuous modification of data and logic is not only possible, it is practical as well. First we review the particular demands of image processing. Then we present various applications and discuss strategies for exploiting the capabilities of reconfigurable FPGAs along with DSPs. We describe the benefits of a compute-oriented FPGA architecture and how partial dynamic reconfiguration delivers unprecedented capabilities for imaging systems and products.

Ferguson, Lee

1996-10-01

74

Teaching Strategies for the Process of Planned Change.  

ERIC Educational Resources Information Center

Explores ways of presenting content and of fostering the grounding of the planned change process within the nurse's previous experience, value system, and personal characteristics. States that teaching strategies that combine experiential exercises with theory can make planned change meaningful and valuable to nurses. (NRJ)

Green, Clarissa P.

1983-01-01

75

The Teaching of L2 Pronunciation through Processing Instruction  

ERIC Educational Resources Information Center

The goal of this study is to pilot test whether the instructional approach known as Processing Instruction could be adapted to the teaching of second language (L2) pronunciation. The target sounds selected were the Spanish tap and trill. Three groups of high school students of Spanish as a foreign language participated in the study. One group…

Gonzales-Bueno, Manuela; Quintana-Lara, Marcela

2011-01-01

76

RDI Advising Model for Improving the Teaching-Learning Process  

ERIC Educational Resources Information Center

Introduction: Advising in Educational Psychology from the perspective of RDI takes on a stronger investigative, innovative nature. The model proposed by De la Fuente et al (2006, 2007) and Education & Psychology (2007) was applied to the field of improving teaching-learning processes at a school. Hypotheses were as follows: (1) interdependence…

de la Fuente, Jesus; Lopez-Medialdea, Ana Maria

2007-01-01

77

Toward a Generative Model of the Teaching-Learning Process.  

ERIC Educational Resources Information Center

Until the rise of cognitive psychology, models of the teaching-learning process (TLP) stressed external rather than internal variables. Models remained general descriptions until control theory introduced explicit system analyses. Cybernetic models emphasize feedback and adaptivity but give little attention to creativity. Research on artificial…

McMullen, David W.

78

Student Evaluation of Teaching: An Instrument and a Development Process  

ERIC Educational Resources Information Center

This article describes the process of faculty-led development of a student evaluation of teaching instrument at Centurion School of Rural Enterprise Management, a management institute in India. The instrument was to focus on teacher behaviors that students get an opportunity to observe. Teachers and students jointly contributed a number of…

Alok, Kumar

2011-01-01

79

Teaching Information Systems Development via Process Variants  

ERIC Educational Resources Information Center

Acquiring the knowledge to assemble an integrated Information System (IS) development process that is tailored to the specific needs of a project has become increasingly important. It is therefore necessary for educators to impart to students this crucial skill. However, Situational Method Engineering (SME) is an inherently complex process that…

Tan, Wee-Kek; Tan, Chuan-Hoo

2010-01-01

80

Associative architecture for image processing  

NASA Astrophysics Data System (ADS)

This article presents a new generation in parallel processing architecture for real-time image processing. The approach is implemented in a real-time image processor chip, called the Xium™-2, based on combining a fully associative array, which provides the parallel engine, with a serial RISC core on the same die. The architecture is fully programmable and can be programmed to implement a wide range of color image processing, computer vision and media processing functions in real time. The associative part of the chip is based on the patent-pending methodology of Associative Computing Ltd. (ACL), which condenses 2048 associative processors, each of 128 'intelligent' bits. Each bit can be a processing bit or a memory bit. Running at only 33 MHz in a 0.6-micron manufacturing process, the chip has a computational power of 3 billion ALU operations per second and 66 billion string search operations per second. The fully programmable nature of the Xium™-2 chip enables developers to use ACL tools to write their own proprietary algorithms combined with existing image processing and analysis functions from ACL's extended set of libraries.

Adar, Rutie; Akerib, Avidan

1997-09-01

81

Image Processing: A State-of-the-Art Way to Learn Science.  

ERIC Educational Resources Information Center

Teachers participating in the Image Processing for Teaching project, begun at the University of Arizona's Lunar and Planetary Laboratory in 1989, find this technology ideal for encouraging student discovery, promoting constructivist science or math experiences, and adapting to classrooms. Because image processing is not a computerized text, it…

Raphael, Jacqueline; Greenberg, Richard

1995-01-01

82

Polar Caps: Image Processing Tutorial  

NSDL National Science Digital Library

This lesson plan is part of the Center for Educational Resources (CERES), a series of web-based astronomy lessons created by a team of master teachers, university faculty, and NASA researchers. In this tutorial, students learn to use computer image processing techniques to measure the size of Earth's polar ice caps and analyze various phenomena visible on planetary images. This lesson contains expected outcomes for students, materials, background information, follow-up questions, and assessment strategies.

Tuthill, George; Obbink, Kim

83

Computer processing of radiographic images  

NASA Technical Reports Server (NTRS)

In the past 20 years, a substantial amount of effort has been expended on the development of computer techniques for enhancement of X-ray images and for automated extraction of quantitative diagnostic information. The historical development of these methods is described. Illustrative examples are presented and factors influencing the relative success or failure of various techniques are discussed. Some examples of current research in radiographic image processing are also described.

Selzer, R. H.

1984-01-01

84

Teaching the Dance Class: Strategies to Enhance Skill Acquisition, Mastery and Positive Self-Image  

ERIC Educational Resources Information Center

Effective teaching of dance skills is informed by a variety of theoretical frameworks and individual teaching and learning styles. The purpose of this paper is to present practical teaching strategies that enhance the mastery of skills and promote self-esteem, self-efficacy, and positive self-image. The predominant thinking and primary research…

Mainwaring, Lynda M.; Krasnow, Donna H.

2010-01-01

85

Digital processing of radiographic images  

NASA Technical Reports Server (NTRS)

Some techniques, together with the accompanying software documentation, are presented for the digital enhancement of radiographs. Both image handling and image processing operations are considered. The image handling operations dealt with are: (1) conversion of the data format from packed to unpacked and vice versa; (2) automatic extraction of image data arrays; (3) transposition and 90 deg rotations of large data arrays; (4) translation of data arrays for registration; and (5) reduction of the dimensions of data arrays by integral factors. Both the frequency and the spatial domain approaches are presented for the design and implementation of the image processing operations. It is shown that spatial domain recursive implementation of filters is much faster than nonrecursive implementations using fast Fourier transforms (FFT) for the cases of interest in this work. The recursive implementation of a class of matched filters for enhancing image signal-to-noise ratio is described. Test patterns are used to illustrate the filtering operations. The application of the techniques to radiographic images of metallic structures is demonstrated through several examples.

Bond, A. D.; Ramapriyan, H. K.

1973-01-01
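
The report above notes that spatial-domain recursive filtering was faster than FFT-based implementations for its cases of interest. The sketch below illustrates the general idea with a first-order recursive (IIR) exponential smoother run forward and backward along each axis; it is not the report's matched filter, and the smoothing constant is an arbitrary assumption.

```python
# Minimal sketch: first-order recursive (IIR) smoothing run forward and backward
# along each image axis. Illustrates spatial-domain recursive filtering in general,
# not the specific matched filters of the report above.
import numpy as np

def recursive_smooth_1d(x, alpha):
    """y[n] = alpha*x[n] + (1-alpha)*y[n-1], applied in both directions for zero phase."""
    y = np.empty_like(x, dtype=float)
    y[0] = x[0]
    for n in range(1, len(x)):
        y[n] = alpha * x[n] + (1 - alpha) * y[n - 1]
    z = np.empty_like(y)
    z[-1] = y[-1]
    for n in range(len(y) - 2, -1, -1):
        z[n] = alpha * y[n] + (1 - alpha) * z[n + 1]
    return z

def recursive_smooth_2d(image, alpha=0.3):
    out = np.apply_along_axis(recursive_smooth_1d, 0, image.astype(float), alpha)
    return np.apply_along_axis(recursive_smooth_1d, 1, out, alpha)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    noisy = 100 + rng.normal(0, 20, (64, 64))
    print(noisy.std(), recursive_smooth_2d(noisy).std())  # noise is reduced
```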

86

PIPE (pipelined image-processing engine)  

Microsoft Academic Search

One way to represent an image to be processed involves the representation of one or more intrinsic properties of the image in an ordered array. Such a map is often called an iconic image. Parallel processors are ideally suited to the early stages of image processing. Unfortunately, true multistage parallel processing is prohibitively expensive for images of useful size. In

E. W. Kent; M. O. Shneier; R. Lumia

1985-01-01

87

Fingerprint recognition using image processing  

NASA Astrophysics Data System (ADS)

Fingerprint recognition is concerned with the difficult task of efficiently matching an image of a person's fingerprint against the fingerprints stored in a database. Fingerprint recognition is used in forensic science to help identify criminals, and also in authenticating a particular person, since a fingerprint is unique to each individual. This paper describes fingerprint recognition methods using various edge detection techniques and shows how a correct fingerprint can be detected from camera images. The described method does not require a special device; a simple camera can be used, so the technique can also be applied with a simple camera mobile phone. Factors affecting the process include poor illumination, noise, viewpoint dependence, climate factors, and imaging conditions. Because these factors must be considered, various image enhancement techniques are performed to increase image quality and remove noise. The paper describes a technique of contour tracking on the fingerprint image, followed by edge detection on the contour and matching of the edges inside the contour.

Dholay, Surekha; Mishra, Akassh A.

2011-06-01
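
As a rough illustration of the edge-detection stage mentioned in the record above, the sketch below applies a generic Sobel gradient and threshold; it is not the paper's contour-tracking method, and the threshold value is an assumption.

```python
# Generic Sobel edge detection of the kind used in the pre-processing stage of many
# fingerprint pipelines; an illustration only, not the paper's exact method.
import numpy as np
from scipy.ndimage import convolve

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def sobel_edges(gray, threshold=0.2):
    """Return a binary edge map from a grayscale image scaled to [0, 1]."""
    gx = convolve(gray, SOBEL_X, mode="nearest")
    gy = convolve(gray, SOBEL_Y, mode="nearest")
    magnitude = np.hypot(gx, gy)
    magnitude /= magnitude.max() + 1e-12
    return magnitude > threshold  # threshold value is an arbitrary assumption

if __name__ == "__main__":
    # Synthetic "ridge" pattern standing in for a camera-captured fingerprint.
    y, x = np.mgrid[0:128, 0:128]
    ridges = 0.5 + 0.5 * np.sin(x / 4.0)
    print(sobel_edges(ridges).sum(), "edge pixels detected")
```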

88

Computer image processing: Geologic applications  

NASA Technical Reports Server (NTRS)

Computer image processing of digital data was performed to support several geological studies. The specific goals were to: (1) relate the mineral content to the spectral reflectance of certain geologic materials, (2) determine the influence of environmental factors, such as atmosphere and vegetation, and (3) improve image processing techniques. For detection of spectral differences related to mineralogy, the technique of band ratioing was found to be the most useful. The influence of atmospheric scattering and methods to correct for the scattering were also studied. Two techniques were used to correct for atmospheric effects: (1) dark object subtraction, and (2) normalization using ground spectral measurements. Of the two, the first technique proved to be the more successful for removing the effects of atmospheric scattering. A digital mosaic was produced from two side-lapping LANDSAT frames. The advantages were that the same enhancement algorithm could be applied to both frames, and there is no seam where the two images are joined.

Abrams, M. J.

1978-01-01
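
The two techniques the study above found most useful, dark-object subtraction and band ratioing, reduce to a few array operations. The sketch below is illustrative only; the band values, haze offset, and choice of bands are invented.

```python
# Dark-object subtraction followed by band ratioing, the two techniques the
# abstract above found most useful. Band values and dark-object levels are invented.
import numpy as np

def dark_object_subtract(band):
    """Subtract the scene's darkest value, a simple proxy for atmospheric path radiance."""
    return band - band.min()

def band_ratio(band_a, band_b):
    """Ratio image that suppresses topographic shading and highlights spectral differences."""
    return dark_object_subtract(band_a) / (dark_object_subtract(band_b) + 1e-6)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    nir = rng.uniform(40, 200, (100, 100)) + 15.0   # fake near-infrared band with haze offset
    red = rng.uniform(30, 150, (100, 100)) + 15.0   # fake red band with the same offset
    print(band_ratio(nir, red).mean())
```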

89

Image processing verification tool-IPVT  

Microsoft Academic Search

Validation is an important part of image processing. Automated extraction of structures from medical images such as MRI or CT is becoming a routine process for diagnosis; therefore, accurate region detection is necessary. The proposed Image Processing Verification Tool (IPVT) is a tool for statistical validation of image processing results. The IPVT tool statistically evaluates the segmentation ability of

Dusan Heric; Bozidar Potocnik

2004-01-01

90

Concept Learning through Image Processing.  

ERIC Educational Resources Information Center

This study explored computer-based image processing as a study strategy for middle school students' science concept learning. Specifically, the research examined the effects of computer graphics generation on science concept learning and the impact of using computer graphics to show interrelationships among concepts during study time. The 87…

Cifuentes, Lauren; Yi-Chuan, Jane Hsieh

91

ImageJ: Image processing and analysis in Java  

NASA Astrophysics Data System (ADS)

ImageJ is a public domain Java image processing program inspired by NIH Image. It can display, edit, analyze, process, save and print 8-bit, 16-bit and 32-bit images. It can read many image formats including TIFF, GIF, JPEG, BMP, DICOM, FITS and "raw". It supports "stacks", a series of images that share a single window. It is multithreaded, so time-consuming operations such as image file reading can be performed in parallel with other operations.

Rasband, W. S.

2012-06-01

92

Using Interactive Digital Images of Products to Teach Pharmaceutics  

PubMed Central

Objective To implement interactive digital images of drug products and online quizzes in a pharmaceutics course to teach students where to look on product labels for information and how to evaluate ingredients of various dosage forms, and to reinforce pharmaceutical calculations with practical problems. Design Interactive digital images of drug products and a database of quiz questions pertaining to the products were created and an interactive online platform was designed. The interactive digital images were incorporated in pharmaceutics lectures as examples of dosage forms studied and calculations taught. The online quizzes were administered to first-professional year pharmacy students in fall 2004 and fall 2005. Assessment The competency outcome data illustrates that the product-based online quizzes aided students in meeting the desired learning objectives. Modifications to increase ease of use resulted in higher student success rates in the second year of implementation. Student and faculty evaluations of the application were largely positive. Conclusion The development of interactive digital images and product-based online quizzes successfully adapted a traditional learning aid into a viable electronic resource for pharmacy education. PMID:17619660

Pham, Khang H.; Dollar, Michael

2007-01-01

93

Image Processing with Manifold Models  

E-print Network

Slide excerpt, "Toward Adaptive Image Priors": contrasts a uniformly smooth image with a discontinuous image of bounded variation.

Milanfar, Peyman

94

Digital Image Processing: A 1996 Review  

Microsoft Academic Search

Digital image processing is a fast developing field rapidly gaining importance through spreading of picture-oriented electronic media and the merging of computer and television technology. Digital image processing is used in image analysis, image transmission and image storage and retrieval. Typical application areas are telecommunication, medicine, remote sensing, and the natural sciences and agriculture.

Jens Damgaard Andersen

1996-01-01

95

Image processing with ImageJ  

Microsoft Academic Search

Wayne Rasband of NIH has created ImageJ, an open source Java-written program that is now at version 1.31 and is used for many imaging applications, including those that span the gamut from skin analysis to neuroscience. ImageJ is in the public domain and runs on any operating system (OS). ImageJ is easy to use and can do many imaging

M. D. Abramoff; Paulo J. Magalhães; Sunanda J. Ram

2004-01-01

96

Using NASA Space Imaging to Teach Earth and Sun Topics in Professional Development Courses for In-Service Teachers  

NASA Astrophysics Data System (ADS)

We describe several professional development (PD) courses using NASA imaging technology, covering various ways to study selected topics in physics and astronomy. We use NASA images to develop lesson plans and EPO materials for grades PreK-8. The topics are space based and range from measurement and magnetism on Earth to the corresponding phenomena on our Sun; in addition, we cover ecosystem structure, biomass, and water on Earth. Hands-on experiments, computer simulations, analysis of real-time NASA data, and vigorous seminar discussions are blended in an inquiry-driven curriculum to instill a confident understanding of basic physical science and modern, effective methods for teaching it. The course also demonstrates how scientific thinking and hands-on activities can be implemented in the classroom, and is designed to give the non-science student the same confident grounding. Most topics were selected using the National Science Standards and National Mathematics Standards for grades PreK-8. The course focuses on helping in several areas of teaching: (1) building knowledge of scientific concepts and processes; (2) understanding the measurable attributes of objects and the units and methods of measurement; (3) conducting data analysis (collecting, organizing, and presenting scientific data, and predicting results); (4) using hands-on approaches to teach science; and (5) becoming familiar with Internet science teaching resources. Here we share our experiences and the challenges we faced teaching this course.

Verner, E.; Bruhweiler, F. C.; Long, T.; Edwards, S.; Ofman, L.; Brosius, J. W.; Holman, G.; St Cyr, O. C.; Krotkov, N. A.; Fatoyinbo Agueh, T.

2012-12-01

97

Parallel Implementation of Hyperspectral Image Processing Algorithms  

E-print Network

Excerpt: Parallel Implementation of Hyperspectral Image Processing Algorithms (Antonio Plaza, David Valencia). Discusses hyperspectral imaging applications, including automatic target recognition for homeland defense and security, and notes that despite the growing interest in hyperspectral imaging research, only a few efforts have been devoted to designing…

Plaza, Antonio J.

98

Multicomputer processing for medical imaging  

NASA Astrophysics Data System (ADS)

Medical imaging applications have growing processing requirements, and scalable multicomputers are needed to support these applications. Scalability -- performance speedup equal to the increased number of processors -- is necessary for a cost-effective multicomputer. We performed tests of performance and scalability on one through 16 processors on a RACE multicomputer using Parallel Application system (PAS) software. Data transfer and synchronization mechanisms introduced a minimum of overhead to the multicomputer's performance. We implemented magnetic resonance (MR) image reconstruction and multiplanar reformatting (MPR) algorithms, and demonstrated high scalability; the 16- processor configuration was 80% to 90% efficient, and the smaller configurations had higher efficiencies. Our experience is that PAS is a robust and high-productivity tool for developing scalable multicomputer applications.

Goddard, Iain; Greene, Jonathon; Bouzas, Brian

1995-04-01
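
The scalability figures quoted above follow from the usual definitions of speedup and parallel efficiency; a small worked example (the runtimes are invented placeholders, chosen to land in the reported 80-90% range):

```python
# Speedup and efficiency as used in the multicomputer scaling results above.
# The runtimes below are invented placeholders, not measured values.
def speedup(t1, tn):
    return t1 / tn

def efficiency(t1, tn, n):
    return speedup(t1, tn) / n

if __name__ == "__main__":
    t1 = 16.0                       # hypothetical runtime on one processor (seconds)
    tn = 1.18                       # hypothetical runtime on 16 processors
    print(efficiency(t1, tn, 16))   # ~0.85, i.e. in the 80-90% range reported
```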

99

Teaching the NIATx Model of Process Improvement as an Evidence-Based Process  

ERIC Educational Resources Information Center

Process Improvement (PI) is an approach for helping organizations to identify and resolve inefficient and ineffective processes through problem solving and pilot testing change. Use of PI in improving client access, retention and outcomes in addiction treatment is on the rise through the teaching of the Network for the Improvement of Addiction…

Evans, Alyson C.; Rieckmann, Traci; Fitzgerald, Maureen M.; Gustafson, David H.

2007-01-01

100

RNS Application for Digital Image Processing  

Microsoft Academic Search

In this paper, we carry out a study on the RNS (residue number system) application in digital image processing and propose a RNS image coding scheme that offers high-speed and low-power VLSI implementation for secure image processing. The proposed scheme is more efficient than the RNS image coding scheme of Ammar et al. (2001) in that the proposed method encrypts

Wei Wang; M. N. S. Swamy; M. Omair Ahmad

2004-01-01

101

Image Processing 1: Smoothing Filters  

E-print Network

a grayscale image of a snowflake and a zoom into the image that shows individual pixels. The image was scanned from Bentley and Humphreys' classic book of snowflake images [1]. We have recently used images from that book as figures in papers on snowflake growth [2,3]. The zoom into the image makes dust and undesirable

Reiter, Clifford A.

102

Image save and carry system-based teaching-file library  

NASA Astrophysics Data System (ADS)

Digital imaging technology has introduced new possibilities for forming teaching files without films. The IS&C (Image Save & Carry) system, which is based on a magneto-optical disc, is a good medium for this purpose because of its large capacity, prompt access time, and unified format independent of operating systems. The author has constructed a teaching file library to which users can add and edit images. CD-ROM and IS&C satisfy most of the basic criteria for a teaching file construction platform: CD-ROM is the best medium for circulating large numbers of identical copies, while IS&C is advantageous for personal addition and editing of the library.

Morimoto, Kouji; Kimura, Michio; Fujii, Toshiyuki

1994-05-01

103

Image processing software for imaging spectrometry  

NASA Technical Reports Server (NTRS)

The paper presents a software system, Spectral Analysis Manager (SPAM), which has been specifically designed and implemented to provide the exploratory analysis tools necessary for imaging spectrometer data, using only modest computational resources. The basic design objectives are described as well as the major algorithms designed or adapted for high-dimensional images. Included in a discussion of system implementation are interactive data display, statistical analysis, image segmentation and spectral matching, and mixture analysis.

Mazer, Alan S.; Martin, Miki; Lee, Meemong; Solomon, Jerry E.

1988-01-01

104

Teaching the Dance Class: Strategies to Enhance Skill Acquisition, Mastery and Positive Self-Image  

Microsoft Academic Search

Effective teaching of dance skills is informed by a variety of theoretical frameworks and individual teaching and learning styles. The purpose of this paper is to present practical teaching strategies that enhance the mastery of skills and promote self-esteem, self-efficacy, and positive self-image. The predominant thinking and primary research findings from dance pedagogy, education, physical education and sport pedagogy, and

Lynda M. Mainwaring; Donna H. Krasnow

2010-01-01

105

Multispectral Image Processing for Plants  

NASA Technical Reports Server (NTRS)

The development of a machine vision system to monitor plant growth and health is one of three essential steps towards establishing an intelligent system capable of accurately assessing the state of a controlled ecological life support system for long-term space travel. Besides a network of sensors, simulators are needed to predict plant features, and artificial intelligence algorithms are needed to determine the state of a plant based life support system. Multispectral machine vision and image processing can be used to sense plant features, including health and nutritional status.

Miles, Gaines E.

1991-01-01

106

GEOMBINATORIC ASPECTS OF PROCESSING LARGE IMAGES AND  

E-print Network

GEOMBINATORIC ASPECTS OF PROCESSING LARGE IMAGES AND LARGE SPATIAL DATABASES, by Jan Beck and Vladik Kreinovich (vladik@geo.utep.edu, vladik@cs.utep.edu). Abstract: Computer processing can drastically improve the quality of an image … memory, so we process it by downloading pieces of the image. Each downloading takes a lot of time, so

Kreinovich, Vladik

107

Process perspective on image quality evaluation  

Microsoft Academic Search

The psychological complexity of multivariate image quality evaluation makes it difficult to develop general image quality metrics. Quality evaluation includes several mental processes, and ignoring these processes and the use of a few test images can lead to biased results. By using a qualitative/quantitative (Interpretation Based Quality, IBQ) methodology, we examined the process of pair-wise comparison in a setting, where

Tuomas Leisti; Raisa Halonen; Anna Kokkonen; Hanna Weckman; Marja Mettänen; Lasse Lensu; Risto Ritala; Pirkko Oittinen; Göte Nyman

2008-01-01

108

Video and Image Processing in Multimedia Systems (Video Processing)  

E-print Network

COT 6930 Video and Image Processing in Multimedia Systems (Video Processing). Instructor: Borko Furht. Topics include content-based image and video indexing and retrieval; video processing using compressed data; course concepts and structures; classification of compression techniques; and image and video compression

Furht, Borko

109

Teaching Anatomy and Physiology Using Computer-Based, Stereoscopic Images  

ERIC Educational Resources Information Center

Learning real three-dimensional (3D) anatomy for the first time can be challenging. Two-dimensional drawings and plastic models tend to over-simplify the complexity of anatomy. The approach described uses stereoscopy to create 3D images of the process of cadaver dissection and to demonstrate the underlying anatomy related to the speech mechanisms.…

Perry, Jamie; Kuehn, David; Langlois, Rick

2007-01-01

110

Teaching Anatomy and Physiology Using Computer-Based, Stereoscopic Images  

NSDL National Science Digital Library

Learning real three-dimensional (3D) anatomy for the first time can be challenging. Two dimensional drawings and plastic models tend to over-simplify the complexity of anatomy. The approach described uses stereoscopy to create 3D images of the process of cadaver dissection and to demonstrate the underlying anatomy related to the speech mechanisms.

David Kuehn

2007-01-01

111

Inquiring Into the Teaching Process: Towards Self-Evaluation and Professional Development.  

ERIC Educational Resources Information Center

This book is designed to be a stimulant to action and reflection for teachers who would like to make a study of their own teaching. The first part explores how teachers can make a personal appraisal of their teaching. An analytic procedure is used which involves gathering information about four major aspects of the teaching process: (1) the…

Haysom, John

112

Student Evaluation of Teaching Effectiveness of a Nationwide Innovative Education Program on Image Display Technology  

ERIC Educational Resources Information Center

The study presented here explored a student evaluation of the teaching effectiveness of a nationwide innovative education program on image display technology in Taiwan. Using survey data collected through an online questionnaire system, covering 165 classes across 30 colleges and universities in Taiwan, the study aimed to understand the teaching

Yueh, Hsiu-Ping; Chen, Tzy-Ling; Chiu, Li-An; Lee, San-Liang; Wang, An-Bang

2012-01-01

113

The hybrid image processing\\/expert image analysis system  

Microsoft Academic Search

The main purpose of this work is to analyze and interpret X-ray images by using a knowledge base that relates objects within a scene to one another and to the scene background. The Knowledge-based Image Analysis System (KIAS) is a prototype environment for image analysis and interpretation; KIAS consists of an image processing system and an expert system. It oversees

Jung H. Kim; E. H. Park; C. Ntuen; K. H. Sohn; W. Alexander

1990-01-01

114

Signal processing in medical imaging and image-guided intervention  

Microsoft Academic Search

This is an introduction to a special session of ICASSP devoted to signal processing techniques in medical imaging and image analysis that consists of this introduction and 5 research presentations, each addressing one aspect of the medical imaging field in which signal processing plays an irreplaceable role. The topics cover a broad spectrum of medical imaging problems, from image acquisition to image analysis to population-based anatomical modeling. The focus is

Milan Sonka

2011-01-01

115

Image enhancement based on gamma map processing  

NASA Astrophysics Data System (ADS)

This paper proposes a novel image enhancement technique based on Gamma Map Processing (GMP). In this approach, a base gamma map is first generated directly from the intensity image. A sequence of gamma map processing steps is then performed to generate a channel-wise gamma map. By mapping through the estimated gamma, the image details, colorfulness, and sharpness of the original image are automatically improved. In addition, the dynamic range of the images can be virtually expanded.

Tseng, Chen-Yu; Wang, Sheng-Jyh; Chen, Yi-An

2010-05-01
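
A minimal sketch of per-channel gamma mapping in the spirit of the GMP approach above. The paper estimates a spatially varying gamma map from the intensity image; this sketch applies a single, invented gamma per channel purely to show the mapping step.

```python
# Per-channel gamma mapping, illustrating the final mapping step of the GMP idea above.
# The per-channel gamma values are invented; the paper derives a spatially varying map.
import numpy as np

def apply_channel_gamma(rgb, gammas=(0.8, 0.9, 1.1)):
    """rgb: float array in [0, 1] with shape (H, W, 3); gammas: one exponent per channel."""
    out = np.empty_like(rgb)
    for c, g in enumerate(gammas):
        out[..., c] = np.clip(rgb[..., c], 0.0, 1.0) ** g
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    img = rng.uniform(0, 1, (4, 4, 3))
    print(apply_channel_gamma(img)[0, 0])
```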

116

A Reconfigurable Image Capture and Image Processing System  

E-print Network

Excerpt: A Reconfigurable Image Capture and Image Processing System for Autonomous Robots --- A Proposal. The system aims to provide a common framework for image processing development on robots; the system's modular, dynamically … Until very recently, much research on the topic of autonomous robotics has been concerned with very

Clausen, Michael

117

A prospective randomized trial of content expertise versus process expertise in small group teaching  

Microsoft Academic Search

BACKGROUND: Effective teaching requires an understanding of both what (content knowledge) and how (process knowledge) to teach. While previous studies involving medical students have compared preceptors with greater or lesser content knowledge, it is unclear whether process expertise can compensate for deficient content expertise. Therefore, the objective of our study was to compare the effect of preceptors with process expertise

Adam D Peets; Lara Cooke; Bruce Wright; Sylvain Coderre; Kevin McLaughlin

2010-01-01

118

Combining image-processing and image compression schemes  

NASA Technical Reports Server (NTRS)

An investigation into the combining of image-processing schemes, specifically an image enhancement scheme, with existing compression schemes is discussed. Results are presented on the pyramid coding scheme, the subband coding scheme, and progressive transmission. Encouraging results are demonstrated for the combination of image enhancement and pyramid image coding schemes, especially at low bit rates. Adding the enhancement scheme to progressive image transmission allows enhanced visual perception at low resolutions. In addition, further processing of the transmitted images, such as edge detection, can gain from the added image resolution provided by the enhancement.

Greenspan, H.; Lee, M.-C.

1995-01-01

119

Using Photographic Images as an Interactive Online Teaching Strategy  

ERIC Educational Resources Information Center

Teaching via distance requires inventive instructional strategies to facilitate an optimum learning experience. This qualitative research study evaluated the effect of one unique online teaching strategy called "photovoice" [Wang, C., & Burris, M. (1997). "Photovoice: Concept, methodology, and use for participatory needs assessment." "Health…

Perry, Beth

2006-01-01

120

Image processing for medical diagnosis using CNN  

NASA Astrophysics Data System (ADS)

Medical diagnosis is one of the most important areas in which image processing procedures are usefully applied. Image processing is an important phase in improving the accuracy of both the diagnosis procedure and the surgical operation. One of these fields is tumor/cancer detection using microarray analysis. The research studies in the Cancer Genetics Branch are mainly involved in a range of experiments including the identification of inherited mutations predisposing family members to malignant melanoma, prostate and breast cancer. In the biomedical field, real-time processing is very important, but image processing is often a time-consuming phase. Therefore, techniques able to speed up the processing play an important role. From this point of view, a novel approach to image processing has been developed in this work. The new idea is to use Cellular Neural Networks to investigate diagnostic images such as magnetic resonance imaging, computed tomography, and fluorescent cDNA microarray images.

Arena, Paolo; Basile, Adriano; Bucolo, Maide; Fortuna, Luigi

2003-01-01

121

Pyramid Methods in Image Processing  

Microsoft Academic Search

The data structure used to represent image information can be critical to the successful completion of an image processing task. One structure that has attracted considerable attention is the image pyramid. This consists of a set of lowpass or bandpass copies of an image, each representing pattern information of a different scale. Here we describe a variety of pyramid methods that we have developed for image data compression, enhancement, analysis

E. H. Adelson; C. H. Anderson; J. R. Bergen; P. J. Burt; J. M. Ogden

1984-01-01
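
A compact sketch of the lowpass (Gaussian) and bandpass (Laplacian) pyramid construction described in the record above. The 5-tap binomial kernel and bilinear upsampling are common textbook choices, assumed here rather than taken from the paper.

```python
# Gaussian and Laplacian pyramids: each level is a lowpass (or bandpass) copy of the
# image at a coarser scale. The 5-tap binomial kernel is a common choice, assumed here.
import numpy as np
from scipy.ndimage import convolve1d, zoom

KERNEL = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0

def blur(img):
    return convolve1d(convolve1d(img, KERNEL, axis=0, mode="reflect"),
                      KERNEL, axis=1, mode="reflect")

def gaussian_pyramid(img, levels=4):
    pyr = [img.astype(float)]
    for _ in range(levels - 1):
        pyr.append(blur(pyr[-1])[::2, ::2])   # blur, then downsample by 2
    return pyr

def laplacian_pyramid(img, levels=4):
    g = gaussian_pyramid(img, levels)
    lap = [g[i] - zoom(g[i + 1], 2, order=1)[:g[i].shape[0], :g[i].shape[1]]
           for i in range(levels - 1)]
    return lap + [g[-1]]                      # coarsest Gaussian level closes the pyramid

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    image = rng.uniform(0, 255, (64, 64))
    print([lvl.shape for lvl in laplacian_pyramid(image)])
```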

122

Color image processing for date quality evaluation  

Microsoft Academic Search

Many agricultural non-contact visual inspection applications use color image processing techniques because color is often a good indicator of product quality. Color evaluation is an essential step in the processing and inventory control of fruits and vegetables that directly affects profitability. Most color spaces such as RGB and HSV represent colors with three-dimensional data, which makes using color image processing

Dah Jye Lee; James K. Archibald

2010-01-01

123

Handbook on COMTAL's Image Processing System  

NASA Technical Reports Server (NTRS)

An image processing system is the combination of an image processor with other control and display devices plus the necessary software needed to produce an interactive capability to analyze and enhance image data. Such an image processing system installed at NASA Langley Research Center, Instrument Research Division, Acoustics and Vibration Instrumentation Section (AVIS) is described. Although much of the information contained herein can be found in the other references, it is hoped that this single handbook will give the user better access, in concise form, to pertinent information and usage of the image processing system.

Faulcon, N. D.

1983-01-01

124

NASA Regional Planetary Image Facility image retrieval and processing system  

NASA Technical Reports Server (NTRS)

The general design and analysis functions of the NASA Regional Planetary Image Facility (RPIF) image workstation prototype are described. The main functions of the MicroVAX II based workstation will be database searching, digital image retrieval, and image processing and display. The uses of the Transportable Applications Executive (TAE) in the system are described. File access and image processing programs use TAE tutor screens to receive parameters from the user and TAE subroutines are used to pass parameters to applications programs. Interface menus are also provided by TAE.

Slavney, Susan

1986-01-01

125

Improvement of hospital processes through business process management in Qaem Teaching Hospital: A work in progress  

PubMed Central

In a world of continuously changing business environments, organizations have no option but to deal with this level of transformation in order to meet the consequent demands. Therefore, many companies need to continually improve and review their processes to maintain their competitive advantages in an uncertain environment. Meeting these challenges requires implementing the most efficient possible business processes, geared to the needs of the industry and market segments that the organization serves globally. In the last 10 years, total quality management, business process reengineering, and business process management (BPM) have been some of the management tools applied by organizations to increase business competitiveness. This paper is an original article that presents the implementation of the BPM approach in the healthcare domain, which allows an organization to improve and review its critical business processes. The project was performed in Qaem Teaching Hospital in Mashhad, Iran and consists of four distinct steps: (1) identify business processes, (2) document the process, (3) analyze and measure the process, and (4) improve the process. Implementing BPM in Qaem Teaching Hospital changed the nature of management by allowing the organization to avoid the complexity of disparate, siloed systems. BPM instead enabled the organization to focus on business processes at a higher level.

Yarmohammadian, Mohammad H.; Ebrahimipour, Hossein; Doosty, Farzaneh

2014-01-01

126

Effects of Using Online Tools in Improving Regulation of the Teaching-Learning Process  

ERIC Educational Resources Information Center

Introduction: The current panorama of Higher Education reveals a need to improve teaching and learning processes taking place there. The rise of the information society transforms how we organize learning and transmit knowledge. On this account, teaching-learning processes must be enhanced, the role of teachers and students must be evaluated, and…

de la Fuente, Jesus; Cano, Francisco; Justicia, Fernando; Pichardo, Maria del Carmen; Garcia-Berben, Ana Belen; Martinez-Vicente, Jose Manuel; Sander, Paul

2007-01-01

127

Using the Process Approach to Teach Writing in 6 Hong Kong Primary Classrooms  

ERIC Educational Resources Information Center

Background: In most primary schools in Hong Kong, a product-oriented approach is used in teaching writing. The process approach to writing has been seen as an improvement over the traditional methods of writing instruction in recent years. However, the effectiveness of using the process approach to teach writing is still inconclusive. It is…

Ho, Belinda

2006-01-01

128

Programmable remapper for image processing  

NASA Technical Reports Server (NTRS)

A video-rate coordinate remapper includes a memory for storing a plurality of transformations on look-up tables for remapping input images from one coordinate system to another. Such transformations are operator selectable. The remapper includes a collective processor by which certain input pixels of an input image are transformed to a portion of the output image in a many-to-one relationship. The remapper includes an interpolative processor by which the remaining input pixels of the input image are transformed to another portion of the output image in a one-to-many relationship. The invention includes certain specific transforms for creating output images useful for certain defects of visually impaired people. The invention also includes means for shifting input pixels and means for scrolling the output matrix.

Juday, Richard D. (inventor); Sampsell, Jeffrey B. (inventor)

1991-01-01

129

Good Talk about Good Teaching: Improving Teaching through Conversation and Community.  

ERIC Educational Resources Information Center

If college faculty were encouraged by leadership to share their teaching experiences and insights, good teaching could flourish. Useful conversation topics include critical moments in teaching and learning; the human condition of teachers and learners; metaphors and images of the teaching process; and autobiographical reflection on great teachers.…

Palmer, Parker J.

1993-01-01

130

Survey: Interpolation Methods in Medical Image Processing  

Microsoft Academic Search

Image interpolation techniques are often required in medical imaging for image generation (e.g., discrete back projection for inverse Radon transform) and for processing such as compression or resampling. Since the ideal interpolation function is spatially unlimited, several interpolation kernels of finite size have been introduced. This paper compares 1) truncated and windowed sinc; 2) nearest neighbor; 3) linear; 4) quadratic;

Thomas Martin Lehmann; Claudia Gönner; Klaus Spitzer

1999-01-01
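
To make the comparison in the survey above concrete, the sketch below implements two of the listed kernels, nearest-neighbour and bilinear interpolation, at a single non-integer sample location; it is illustrative only.

```python
# Nearest-neighbour vs. bilinear interpolation at a non-integer sample location,
# two of the kernels compared in the survey above. Purely illustrative.
import numpy as np

def nearest(img, y, x):
    return img[int(round(y)), int(round(x))]

def bilinear(img, y, x):
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    dy, dx = y - y0, x - x0
    return ((1 - dy) * (1 - dx) * img[y0, x0] + (1 - dy) * dx * img[y0, x0 + 1]
            + dy * (1 - dx) * img[y0 + 1, x0] + dy * dx * img[y0 + 1, x0 + 1])

if __name__ == "__main__":
    img = np.array([[0.0, 10.0], [20.0, 30.0]])
    print(nearest(img, 0.6, 0.6), bilinear(img, 0.6, 0.6))  # 30.0 and 18.0
```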

131

Image resolution improvement: A signal processing approach  

Microsoft Academic Search

High-resolution (HR) images are in demand because they not only give the viewer a pleasing view of the scene but also offer additional details that are important for analysis. Signal processing approach can be used to improve the resolution from several sub pixel shifted low resolution images. The image to be super-resolved is estimated from a set of low resolution

P. S. Shilpashree; K. V. Suresh

2010-01-01

132

Digital image processing of metric camera imagery  

Microsoft Academic Search

The use of digitized Spacelab metric camera imagery for map updating is demonstrated for an area of Germany featuring agricultural and industrial areas, and a region of the White Nile. LANDSAT and Spacelab images were combined, and digital image processing techniques used for image enhancement. Updating was achieved by semiautomatic techniques, but for many applications manual editing may be feasible.

P. Lohmann

1985-01-01

133

Digital Image Processing: Gizmo in CIN Diagnosis  

Microsoft Academic Search

Cervical intra-epithelial neoplasia (CIN) is a disorder that indicates the possibility of a developing cervical cancer, and it is therefore important to diagnose and treat it as early as possible. Image Processing Techniques such as image acquisition, segmentation, compression and registration are used in medical field for the diagnosis of CIN. This work focuses on image analysis technique that can

Sangeeta Mangesh

134

Image Processing Software Package in Medical Imaging: A review  

E-print Network

Abstract — MATLAB is at present among the best available tools for image processing. Medical images, once digitized and processed, can help reduce the number of false positives and assist medical officers in deciding between follow-up and biopsy. This paper gives a survey of image processing algorithms that have been developed for the detection of masses, together with segmentation techniques. 35 students from the university campus participated in the Development of Biomedical Image Processing Software Package for New Learners survey, which investigated the use of a software package for processing and editing images. Composed of 19 questions, the survey built a comprehensive picture of the software package, programming language and workflow of the tool, and captured the attitudes of the respondents. The results of this study show that MATLAB is among the most popular software packages, and the result is expected to be beneficial and able to assist users with effective image processing and analysis in a newly developed software package. Keywords: MATLAB; image processing; image editing; software package.

Nasrul Humaimi Mahmood; Ching Yee Yong; Kim Mey Chew; Ismail Ariffin

135

Combining advanced imaging processing and low cost remote imaging capabilities  

NASA Astrophysics Data System (ADS)

Target images are very important for evaluating the situation when Unattended Ground Sensors (UGS) are deployed. These images add a significant amount of information to determine the difference between hostile and non-hostile activities, the number of targets in an area, the difference between animals and people, the movement dynamics of targets, and when specific activities of interest are taking place. The imaging capabilities of UGS systems need to provide only target activity and not images without targets in the field of view. The current UGS remote imaging systems are not optimized for target processing and are not low cost. McQ describes in this paper an architectural and technologic approach for significantly improving the processing of images to provide target information while reducing the cost of the intelligent remote imaging capability.

Rohrer, Matthew J.; McQuiddy, Brian

2008-04-01

136

Quantitative image processing in fluid mechanics  

NASA Technical Reports Server (NTRS)

The current status of digital image processing in fluid flow research is reviewed. In particular, attention is given to a comprehensive approach to the extraction of quantitative data from multivariate databases and examples of recent developments. The discussion covers numerical simulations and experiments, data processing, generation and dissemination of knowledge, traditional image processing, hybrid processing, fluid flow vector field topology, and isosurface analysis using Marching Cubes.

Hesselink, Lambertus; Helman, James; Ning, Paul

1992-01-01

137

Non-linear Post Processing Image Enhancement  

NASA Technical Reports Server (NTRS)

A non-linear filter for image post processing based on the feedforward neural network topology is presented. This study was undertaken to investigate the usefulness of "smart" filters in image post processing. The filter has been shown to be useful in recovering high frequencies, such as those lost during the JPEG compression-decompression process. The filtered images have a higher signal to noise ratio and a higher perceived image quality. Simulation studies comparing the proposed filter with the optimum mean square non-linear filter, showing examples of the high frequency recovery, and the statistical properties of the filter are given.

Hunt, Shawn; Lopez, Alex; Torres, Angel

1997-01-01

138

Matching rendered and real world images by digital image processing  

NASA Astrophysics Data System (ADS)

Recent advances in computer-generated images (CGI) have been used in commercial and industrial photography, providing a broad scope in product advertising. Mixing real world images with those rendered from virtual space software shows a more or less visible mismatch between the corresponding image quality performances. Rendered images are produced by software whose quality is limited only by the output resolution. Real world images are taken with cameras that introduce some amount of image degradation from factors such as residual lens aberrations, diffraction, sensor low pass anti-aliasing filters, color pattern demosaicing, etc. The effect of all those image quality degradation factors can be characterized by the system Point Spread Function (PSF). Because the image is the convolution of the object by the system PSF, its characterization shows the amount of image degradation added to any taken picture. This work explores the use of image processing to degrade the rendered images following the parameters indicated by the real system PSF, attempting to match both virtual and real world image qualities. The system MTF is determined by the slanted edge method both in laboratory conditions and in the real picture environment in order to compare the influence of the working conditions on the device performance; an approximation to the system PSF is derived from the two measurements. The rendered images are filtered through a Gaussian filter obtained from the taking system PSF. Results with and without filtering are shown and compared by measuring the contrast achieved in different final image regions.

Mitjà, Carles; Bover, Toni; Bigas, Miquel; Escofet, Jaume

2010-05-01
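
A small sketch of the final step described above: degrading a rendered image with a Gaussian approximation of the camera PSF so it better matches real captures. The sigma value is an invented placeholder; in the paper it is derived from slanted-edge MTF measurements of the actual taking system.

```python
# Degrading a rendered (synthetic) image with a Gaussian approximation of the camera
# PSF so that it visually matches real captures, as in the workflow above.
# The sigma value is an invented placeholder, not a measured one.
import numpy as np
from scipy.ndimage import gaussian_filter

def match_to_camera(rendered, sigma_px=1.4):
    """Apply a Gaussian blur approximating the measured system PSF."""
    return gaussian_filter(rendered.astype(float), sigma=sigma_px)

if __name__ == "__main__":
    # A hard synthetic edge: the rendered image is perfectly sharp, the filtered one is not.
    rendered = np.zeros((64, 64))
    rendered[:, 32:] = 255.0
    blurred = match_to_camera(rendered)
    print(rendered[32, 30:35], blurred[32, 30:35].round(1))
```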

139

Programmable Iterative Optical Image And Data Processing  

NASA Technical Reports Server (NTRS)

Proposed method of iterative optical image and data processing overcomes limitations imposed by loss of optical power after repeated passes through many optical elements - especially, beam splitters. Involves selective, timed combination of optical wavefront phase conjugation and amplification to regenerate images in real time to compensate for losses in optical iteration loops; timing such that amplification turned on to regenerate desired image, then turned off so as not to regenerate other, undesired images or spurious light propagating through loops from unwanted reflections.

Jackson, Deborah J.

1995-01-01

140

Water surface capturing by image processing  

Technology Transfer Automated Retrieval System (TEKTRAN)

An alternative means of measuring the water surface interface during laboratory experiments is processing a series of sequentially captured images. Image processing can provide a continuous, non-intrusive record of the water surface profile whose accuracy is not dependent on water depth. More trad...

141

Grain Counting Method Based On Image Processing  

Microsoft Academic Search

Grain counting is very important for breeding and quality inspection. In order to improve the efficiency and precision of grain counting, a novel method based on MATLAB image processing technology and mechanical vibration technology was proposed. It can effectively resolve the overlaps and conglutinations among grains by mechanical vibration and image erosion processing respectively. Experiment results show that this method

Zhao Ping; Li Yongkui

2009-01-01
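
A minimal sketch of the image-side idea in the record above: erosion to break thin connections between touching grains, followed by connected-component counting. scipy stands in for the MATLAB toolbox mentioned in the abstract, and the synthetic image, structuring element, and iteration count are assumptions.

```python
# Erosion followed by connected-component labelling to count touching grains,
# in the spirit of the method above (scipy used in place of MATLAB).
import numpy as np
from scipy.ndimage import binary_erosion, label

def count_grains(binary_image, erosion_iters=1):
    separated = (binary_erosion(binary_image, iterations=erosion_iters)
                 if erosion_iters else binary_image)
    _, num_grains = label(separated)
    return num_grains

if __name__ == "__main__":
    img = np.zeros((30, 35), dtype=bool)
    img[10:19, 5:14] = True           # grain 1
    img[10:19, 20:29] = True          # grain 2
    img[14, 14:20] = True             # one-pixel bridge joining the two grains
    print(count_grains(img, erosion_iters=0),   # 1: the bridge merges the grains
          count_grains(img, erosion_iters=1))   # 2: erosion removes the bridge
```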

142

Automatic processing, analysis, and recognition of images  

NASA Astrophysics Data System (ADS)

New approaches and computer codes (A&CC) for automatic processing, analysis and recognition of images are offered. The A&CC are based on representing an object image as a collection of pixels of various colours and on consecutive automatic painting of the distinguishable parts of the image. The A&CC have technical objectives centred on such directions as: 1) image processing, 2) image feature extraction, 3) image analysis, and others, in any sequence and combination. The A&CC allow various geometrical and statistical parameters of an object image and its parts to be obtained. Additional possibilities of A&CC usage involve artificial neural network technologies. We believe that the A&CC can be used in creating systems for testing and control in various fields of industry and military applications (airborne imaging systems, tracking of moving objects), in medical diagnostics, in creating new software for CCDs, in industrial vision and decision-making systems, etc. The capabilities of the A&CC are tested on image analysis of model fires and plumes of sprayed fluid, ensembles of particles, decoding of interferometric images, digitization of paper diagrams of electrical signals, text recognition, noise elimination, image filtering, analysis of astronomical images and aerial photography, and detection of objects.

Abrukov, Victor S.; Smirnov, Evgeniy V.; Ivanov, Dmitriy G.

2004-11-01

143

Image processing: mathematics, engineering, or art  

SciTech Connect

From the strict mathematical viewpoint, it is impossible to fully achieve the goal of digital image processing, which is to determine an unknown function of two dimensions from a finite number of discrete measurements linearly related to it. However, the necessity to display image data in a form that is visually useful to an observer supersedes such mathematically correct admonitions. Engineering defines the technological limits of what kind of image processing can be done and how the resulting image can be displayed. The appeal and usefulness of the final image to the human eye pertains to aesthetics. Effective image processing necessitates unification of mathematical theory, practical implementation, and artistic display. 59 references, 6 figures.

Hanson, K.M.

1985-01-01

144

On some applications of diffusion processes for image processing  

NASA Astrophysics Data System (ADS)

We propose a new algorithm inspired by the properties of diffusion processes for image filtering. We show that a purely nonlinear diffusion process ruled by the Fisher equation allows contrast enhancement and noise filtering, but produces a blurred image. By contrast, anisotropic diffusion, described by the Perona and Malik algorithm, allows noise filtering and preserves the edges. We show that combining the properties of anisotropic diffusion with those of nonlinear diffusion provides a better processing tool which enables noise filtering, contrast enhancement and edge preservation.

Morfu, S.

2009-06-01
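
A compact sketch of the anisotropic (Perona-Malik) diffusion referred to above. The edge-stopping conductance, time step, and kappa are standard textbook choices rather than the paper's parameters, and periodic boundaries are used purely for brevity.

```python
# Perona-Malik anisotropic diffusion: smooths within regions while preserving edges.
# Conductance g, time step and kappa are standard textbook choices, assumed here.
import numpy as np

def perona_malik(img, n_iter=20, kappa=20.0, dt=0.2):
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # Finite differences toward the four neighbours (periodic boundaries via np.roll).
        dn = np.roll(u, 1, axis=0) - u
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        g = lambda d: np.exp(-(d / kappa) ** 2)   # edge-stopping conductance
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    step = np.zeros((64, 64)); step[:, 32:] = 100.0
    noisy = step + rng.normal(0, 10, step.shape)
    out = perona_malik(noisy)
    print(noisy.std(axis=0).mean().round(1), out.std(axis=0).mean().round(1))  # noise drops
```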

145

Mobile Phone Images and Video in Science Teaching and Learning  

ERIC Educational Resources Information Center

This article reports a study into how mobile phones could be used to enhance teaching and learning in secondary school science. It describes four lessons devised by groups of Sri Lankan teachers all of which centred on the use of the mobile phone cameras rather than their communication functions. A qualitative methodological approach was used to…

Ekanayake, Sakunthala Yatigammana; Wishart, Jocelyn

2014-01-01

146

Image processing for diffusion tensor magnetic resonance imaging  

Microsoft Academic Search

This paper describes image processing techniques for Diffusion Tensor Magnetic Resonance Imaging. In Diffusion Tensor MRI, a tensor describing local water diffusion is acquired for each voxel. The geometric nature of the diffusion tensors can quantitatively characterize the local structure in tissues such as bone, muscles, and white matter of the brain. The close relationship between local image structure and

C. F Westin; S. Peled; H. Gubjartsson; R. Kikinis; F. Jolesz

1997-01-01

147

Technology Integration into the Teaching-Learning Process by Business Education Teachers  

ERIC Educational Resources Information Center

This study addressed the factors that explain the integration of technology into the teaching-learning process in Louisiana's secondary business education programs. Four variables explain some of the variance in teachers' integration of technology in instruction. These variables are perceived teaching effectiveness, perceived barriers to…

Redmann, Donna H.; Kotrlik, Joe W.

2004-01-01

148

The Process of Physics Teaching Assistants' Pedagogical Content Knowledge Development  

ERIC Educational Resources Information Center

This study explored the process of physics teaching assistants' (TAs) PCK development in the context of teaching a new undergraduate introductory physics course. "Matter and Interactions" (M&I) has recently adopted a new introductory physics course that focuses on the application of a small number of fundamental physical…

Seung, Eulsun

2013-01-01

149

Process Evaluation of a Teaching and Learning Centre at a Research University  

ERIC Educational Resources Information Center

This paper describes the evaluation of a teaching and learning centre (TLC) five years after its inception at a mid-sized, midwestern state university. The mixed methods process evaluation gathered data from 209 attendees and non-attendees of the TLC from the full-time, benefit-eligible teaching faculty. Focus groups noted feelings of…

Smith, Deborah B.; Gadbury-Amyot, Cynthia C.

2014-01-01

150

Evaluating Students' Learning and Communication Processes: Handbook 2--Diagnostic Teaching Unit: Language Arts.  

ERIC Educational Resources Information Center

Presenting a diagnostic teaching unit for grade 7 language arts, this handbook is intended to be used along with the companion handbook 1, "Evaluating Students' Learning and Communication Processes: Integrating Diagnostic Evaluation and Instruction." The student activities of the diagnostic teaching units in the handbook have been designed to…

Alberta Dept. of Education, Edmonton. Student Evaluation Branch.

151

Evaluating Students' Learning and Communication Processes: Handbook 3--Diagnostic Teaching Units: Social Studies.  

ERIC Educational Resources Information Center

Presenting the diagnostic teaching units for grades 7, 8, and 9 social studies, this handbook is intended to be used along with the companion handbook 1, "Evaluating Students' Learning and Communication Processes: Integrating Diagnostic Evaluation and Instruction." The student activities of the diagnostic teaching units in the handbook have been…

Alberta Dept. of Education, Edmonton. Student Evaluation Branch.

152

Process perspective on image quality evaluation  

NASA Astrophysics Data System (ADS)

The psychological complexity of multivariate image quality evaluation makes it difficult to develop general image quality metrics. Quality evaluation includes several mental processes; ignoring these processes and using only a few test images can lead to biased results. By using a qualitative/quantitative (Interpretation Based Quality, IBQ) methodology, we examined the process of pair-wise comparison in a setting where the quality of the images printed by a laser printer on different paper grades was evaluated. The test image consisted of a picture of a table covered with several objects. Three other images were also used: photographs of a woman, a cityscape and a countryside. In addition to the pair-wise comparisons, observers (N=10) were interviewed about the subjective quality attributes they used in making their quality decisions. An examination of the individual pair-wise comparisons revealed serious inconsistencies in observers' evaluations of the test image content, but not of the other contents. The qualitative analysis showed that this inconsistency was due to the observers' focus of attention. The lack of easily recognizable context in the test image may have contributed to this inconsistency. To obtain reliable knowledge of the effect of image context or attention on subjective image quality, a qualitative methodology is needed.

Leisti, Tuomas; Halonen, Raisa; Kokkonen, Anna; Weckman, Hanna; Mettänen, Marja; Lensu, Lasse; Ritala, Risto; Oittinen, Pirkko; Nyman, Göte

2008-01-01

153

Topics in genomic image processing  

E-print Network

Table-of-contents excerpt: coding performance for the foreground objects (EMIC results with different wavelet filters and decomposition levels; comparison with other lossless coding techniques); lossy-to-lossless coding performance for the background objects (EMIC results with different wavelet filters and decomposition levels; comparison with JPEG-2000); microarray image...

Hua, Jianping

2006-04-12

154

Astronomical Image and Signal Processing  

E-print Network

Shannon [38], in the framework of communication theory, suggested a measure of the amount of information in the image. Shannon entropy does not take into account the correlation between pixels. The wavelet transform (WT) is considered one of the best tools to do this job. It allows us
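
For reference, the histogram-based Shannon entropy the excerpt mentions can be computed as below; as the excerpt notes, it ignores correlation between pixels. This is a generic sketch, not code from the text.

    import numpy as np

    def shannon_entropy(image, bins=256):
        # Entropy (in bits) of the grey-level histogram; inter-pixel correlation is ignored.
        hist, _ = np.histogram(image, bins=bins)
        p = hist[hist > 0] / hist.sum()
        return float(-(p * np.log2(p)).sum())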

Starck, Jean-Luc

155

The humanbecoming connection: nursing students find meaning in the teaching-learning processes.  

PubMed

Students are building a foundation to help them move fluidly and naturally while making human to human connections. Parse's theory of humanbecoming and the associated teaching-learning processes were used as underpinnings to guide students as they come to know more about themselves and others. The teaching-learning processes help nursing students understand that being present and bearing witness are an important part of honoring quality of life and respecting value choices. The purpose of this column is to highlight the humanbecoming teaching-learning processes as students explore their experiences of learning to be with others in true presence. PMID:23575487

De Natale, Mary Lou; Klevay, Anne M

2013-04-01

156

Teaching while selecting images for satellite-based forest mapping Froduald Kabanza and Kami Rousseau  

E-print Network

An important step in this overall process is the acquisition of the image data that will best allow assessing... Keywords: ...-system, image processing, image acquisition. Introduction: The identification of zones that changed over time is a component of a more complex system for image processing, called SITI (for « Système Intelligent de

Kabanza, Froduald

157

Earth Observation Services (Image Processing Software)  

NASA Technical Reports Server (NTRS)

San Diego State University and Environmental Systems Research Institute, with other agencies, have applied satellite imaging and image processing techniques to geographic information systems (GIS) updating. The resulting images display land use and are used by a regional planning agency for applications like mapping vegetation distribution and preserving wildlife habitats. The EOCAP program provides government co-funding to encourage private investment in, and to broaden the use of NASA-developed technology for analyzing information about Earth and ocean resources.

1992-01-01

158

Image processing for drawing recognition  

NASA Astrophysics Data System (ADS)

The task of recognizing the edges of rectangular structures is well known. Still, almost all existing methods work with static images and place no limit on processing time. We propose applying homography estimation to a video stream obtained from a webcam, and we present an algorithm that can be used successfully for this kind of application. One of the main use cases of such an application is the recognition of drawings made by a person on a piece of paper held in front of the webcam.
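
A minimal OpenCV sketch of homography estimation between a reference view of the sheet of paper and a webcam frame is shown below; the feature detector, matcher and thresholds are assumptions, not the authors' implementation.

    import cv2
    import numpy as np

    orb = cv2.ORB_create(1000)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    def estimate_homography(reference_gray, frame_gray):
        # Match ORB features between the reference image and the current frame,
        # then fit a homography with RANSAC.
        k1, d1 = orb.detectAndCompute(reference_gray, None)
        k2, d2 = orb.detectAndCompute(frame_gray, None)
        if d1 is None or d2 is None:
            return None
        matches = matcher.match(d1, d2)
        if len(matches) < 4:
            return None
        src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        return H   # maps reference coordinates into the current frame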

Feyzkhanov, Rustem; Zhelavskaya, Irina

2014-03-01

159

2004 International Conference on Image Processing (ICIP) DOCUMENT IMAGE SECRET SHARING USING BIT-LEVEL PROCESSING  

E-print Network

2004 International Conference on Image Processing (ICIP). Document Image Secret Sharing Using Bit-Level Processing. Rastislav Lukac and Konstantinos N. Plataniotis, Bell Canada Multimedia Laboratory, The Edward S. ... for encryption of private financial and pharmaceutical digital documents and digital signature images

Plataniotis, Konstantinos N.

160

Nonlinear Optical Image Processing with Bacteriorhodopsin Films  

NASA Technical Reports Server (NTRS)

The transmission properties of some bacteriorhodopsin film spatial light modulators are uniquely suited to allow nonlinear optical image processing operations to be applied to images with multiplicative noise characteristics. A logarithmic amplitude transmission feature of the film permits the conversion of multiplicative noise to additive noise, which may then be linearly filtered out in the Fourier plane of the transformed image. The bacteriorhodopsin film displays the logarithmic amplitude response for write beam intensities spanning a dynamic range greater than 2.0 orders of magnitude. We present experimental results demonstrating the principle and capability for several different image and noise situations, including deterministic noise and speckle. Using the bacteriorhodopsin film, we successfully filter out image noise from the transformed image that cannot be removed from the original image.
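
The principle described, a logarithm turning multiplicative noise into additive noise that can then be filtered linearly in the Fourier plane, can be sketched digitally as follows. This is an illustrative homomorphic-style filter, not a model of the optical bacteriorhodopsin system, and the cutoff is an assumed parameter.

    import numpy as np

    def log_domain_lowpass(image, cutoff=0.1):
        # Multiplicative noise -> additive via log, crude low-pass in the Fourier plane,
        # then exponentiate to return to the intensity domain.
        logged = np.log1p(np.asarray(image, dtype=float))
        F = np.fft.fftshift(np.fft.fft2(logged))
        r, c = logged.shape
        y, x = np.ogrid[-(r // 2):r - r // 2, -(c // 2):c - c // 2]
        mask = (x / (cutoff * c)) ** 2 + (y / (cutoff * r)) ** 2 <= 1.0
        filtered = np.fft.ifft2(np.fft.ifftshift(F * mask)).real
        return np.expm1(filtered)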

Downie, John D.; Deiss, Ron (Technical Monitor)

1994-01-01

161

Axioms and fundamental equations of image processing  

Microsoft Academic Search

Image-processing transforms must satisfy a list of formal requirements. We discuss these requirements and classify them into three categories: “architectural requirements” like locality, recursivity and causality in the scale space, “stability requirements” like the comparison principle and “morphological requirements”, which correspond to shape-preserving properties (rotation invariance, scale invariance, etc.). A complete classification is given of all image multiscale transforms satisfying

Luis Alvarez; Frédéric Guichard; Pierre-Louis Lions; Jean-Michel Morel

1993-01-01

162

Image Processing Applications for Geologic Mapping  

Microsoft Academic Search

The use of satellite data, particularly Landsat images, for geologic mapping provides the geologist with a powerful tool. The digital format of these data permits applications of image processing to extract or enhance information useful for mapping purposes. Examples are presented of lithologic classification using texture measures, automatic lineament detection and structural analysis, and use of registered multisource satellite data.

Michael Abrams; Annick Blusson; Veronique Carrere; Phu Thien Nguyen; Yves Rabu

1985-01-01

163

Medical image processing with optical Fourier techniques  

Microsoft Academic Search

Medical image processing is demonstrated by using Fourier techniques. Two optical Fourier systems are designed: the first one is a real-time optical processor with spatial filters and the second one is a self-adaptive optical processor with nonlinear optical films of the biomaterial Bacteriorhodopsin. Medical images including mammograms and Pap smears are investigated by using our optical systems. The desired components

Pengfei Wu

2003-01-01

164

Checking Fits With Digital Image Processing  

NASA Technical Reports Server (NTRS)

Computer-aided video inspection of mechanical and electrical connectors feasible. Report discusses work done on digital image processing for computer-aided interface verification (CAIV). Two kinds of components examined: mechanical mating flange and electrical plug.

Davis, R. M.; Geaslen, W. D.

1988-01-01

165

Prospective faculty developing understanding of teaching and learning processes in science  

NASA Astrophysics Data System (ADS)

Historically, teaching has been considered a burden by many academics at institutions of higher education, particularly research scientists. Furthermore, university faculty and prospective faculty often have limited exposure to issues associated with effective teaching and learning. As a result, a series of ineffective teaching and learning strategies are pervasive in university classrooms. This exploratory case study focuses on four biology graduate teaching fellows (BGF) who participated in a National Science Foundation (NSF) GK-12 Program. Such programs were introduced by NSF to enhance the preparation of prospective faculty for their future professional responsibilities. In this particular program, BGF were paired with high school biology teachers (pedagogical mentors) for at least one year. During this yearlong partnership, BGF were involved in a series of activities related to teaching and learning, ranging from classroom teaching, tutoring, lesson planning, and grading to participating in professional development conferences and reflecting upon their practices. The purpose of this study was to examine the changes in BGF understanding of teaching and learning processes in science as a function of their pedagogical content knowledge (PCK). In addition, the potential transfer of this knowledge between high school and higher education contexts was investigated. The findings of this study suggest that the BGF's understanding of teaching and learning processes in science changed. Specific aspects of the BGF involvement in the program (such as classroom observations, practice teaching, communicating with mentors, and reflecting upon one's practice) contributed to PCK development. In fact, there is evidence to suggest that constant reflection is critical in the process of change. Concurrently, the BGF's enhanced understanding of science teaching and learning processes may be transferable from the high school context to the university context. Future research studies should be designed to explicitly explore this transfer phenomenon.

Pareja, Jose I.

166

Color image processing for date quality evaluation  

NASA Astrophysics Data System (ADS)

Many agricultural non-contact visual inspection applications use color image processing techniques because color is often a good indicator of product quality. Color evaluation is an essential step in the processing and inventory control of fruits and vegetables that directly affects profitability. Most color spaces such as RGB and HSV represent colors with three-dimensional data, which makes using color image processing a challenging task. Since most agricultural applications only require analysis on a predefined set or range of colors, mapping these relevant colors to a small number of indexes allows simple and efficient color image processing for quality evaluation. This paper presents a simple but efficient color mapping and image processing technique that is designed specifically for real-time quality evaluation of Medjool dates. In contrast with more complex color image processing techniques, the proposed color mapping method makes it easy for a human operator to specify and adjust color-preference settings for different color groups representing distinct quality levels. Using this color mapping technique, the color image is first converted to a color map in which each pixel is represented by a single color index. Fruit maturity level is evaluated based on these color indices. A skin lamination threshold is then determined based on the fruit surface characteristics. This adaptive threshold is used to detect delaminated fruit skin and hence determine the fruit quality. This robust color grading technique has been used for real-time Medjool date grading.
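
A minimal sketch of the kind of color-to-index mapping described, assuming an HSV input and a small set of operator-adjustable hue ranges; the class labels and ranges below are hypothetical, not the paper's settings.

    import numpy as np

    # Hypothetical maturity classes mapped to hue intervals (OpenCV-style hue, 0-179).
    HUE_RANGES = {0: (0, 15), 1: (15, 25), 2: (25, 40)}

    def color_index_map(hsv_image):
        # Convert a 3-channel HSV image into a single-index-per-pixel color map.
        hue = hsv_image[..., 0].astype(int)
        index = np.full(hue.shape, -1, dtype=int)      # -1 = unclassified pixel
        for label, (lo, hi) in HUE_RANGES.items():
            index[(hue >= lo) & (hue < hi)] = label
        return index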

Lee, Dah Jye; Archibald, James K.

2010-01-01

167

Overview on METEOSAT geometrical image data processing  

NASA Technical Reports Server (NTRS)

Digital images acquired from the geostationary METEOSAT satellites are processed and disseminated at ESA's European Space Operations Centre in Darmstadt, Germany. Their scientific value is mainly dependent on their radiometric quality and geometric stability. This paper will give an overview of the image processing activities performed at ESOC, concentrating on the geometrical restoration and quality evaluation. The performance of the rectification process for the various satellites over the past years will be presented, and the impacts of external events, such as the Pinatubo eruption in 1991, will be explained. Special developments in both hardware and software, necessary to cope with demanding tasks such as new image resampling or correction for spacecraft anomalies, are presented as well. The rotating lens of MET-5, which causes severe geometrical image distortions, is an example of the latter.

Diekmann, Frank J.

1994-01-01

168

Image processing instrumentation for speckle interferometry and imaging  

NASA Astrophysics Data System (ADS)

This final technical report describes the rationale for the acquisition of computer hardware for assembling an interactive image processing facility to be used for on-going programs of high angular resolution speckle interferometry of astronomical objects. Speckle interferometry is a technique for obtaining diffraction limited spatial information of distant objects through the turbulent atmosphere while simultaneously permitting differential positional measurements of very high accuracy. This method is being applied at Georgia State University in a program whose scientific goal is the detection of planetary mass companions in binary star systems. Funding from the DoD-University Research Instrumentation Program has permitted the acquisition of a dedicated digital image processing system to optimize the reduction and analysis of these data. The system components include a VAX 11/750 computer serving as host machine to an International Imaging Systems Model 70/F image processor with a variety of peripherals. The system is fully operational and extensive software development is underway.

McAlister, H. A.

1984-11-01

169

Image Grammar: Using Grammatical Structures To Teach Writing.  

ERIC Educational Resources Information Center

This book is based on the premise that a writer is much like an artist who paints images, only using grammatical structures as tools. In conjunction with this approach, each chapter is divided into concepts and strategies: concepts illustrate how professional writers have applied image grammar to develop their art, and strategies provide…

Noden, Harry R.

170

Perceptions and Images of North Africa: What American Schools Teach.  

ERIC Educational Resources Information Center

Examined descriptions of North Africa (particularly Tunisia) found in U.S. high school social studies textbooks, noting the resulting perceptions and images these descriptions created in the minds of teachers and students. Data from examination of textbooks and interviews with teachers indicated that few high school students were exposed to images

Robinson, Victoria

2002-01-01

171

A VIRTUAL REALITY ELECTROCARDIOGRAPHY TEACHING TOOL Image Synthesis Group  

E-print Network

Author-affiliation excerpt: John T. Ryan (Image Synthesis Group, Computer Science Dept., Trinity College Dublin, Dublin 2, Ireland, john.t.ryan@cs.tcd.ie); Carol O'Sullivan (Image Synthesis Group, Computer Science Dept., Trinity College Dublin, osullica@tcd.ie); Christopher Bell (Physiology Dept., Trinity College Dublin, cbell@tcd.ie); Robert Mooney (Image...

O'Sullivan, Carol

172

ATFTools: Image Analysis Software Developed for Undergraduate Teaching and Research  

NASA Astrophysics Data System (ADS)

ATFtools is a suite of programs designed to provide a simple but powerful set of tools for image analysis and photometry of FITS format images. The programs include tools for displaying and editing FITS headers, calibrating position by pattern matching to the GSC using WCS keywords, cropping and registering multiple images, differential and absolute photometry, automated supernova searches, variable star monitoring, and photometric calibration using images of standard fields. The program interface was designed to be straightforward and easily learned by introductory undergraduate students. The programs have been ported to several UNIX O/S (Solaris, Unixware, Linux) and to MS-Windows 95. In addition, the UNIX port has a powerful image display program available with interactive photometry and WCS calibration. All programs are freely available by anonymous ftp.

Mutel, R.; Downey, E.

1996-05-01

173

Teaching the Writing Process through Full Dyadic Writing.  

ERIC Educational Resources Information Center

A study investigated the effectiveness of full dyadic writing as a technique for teaching writing to students of English as a Second Language (ESL). Subjects were 31 college students of diverse cultural backgrounds enrolled in ESL sections of freshman English. Each chose a partner with a different native language with whom to write two essays, the…

Aghbar, Ali-Asghar; Alam, Mohammed

174

EFL Teaching in Brazil: Emphasizing Conditions, Processes, or Product?  

ERIC Educational Resources Information Center

To investigate teaching practices for English as a foreign/second language (EFL) at the university level in Brazil, three researchers analyzed the research presented at the 1990 Brazilian National Conference of EFL University Teachers (ENPULI Conference). Focus of the analysis was on the main objective of the study and the emphasis placed on…

Motta-Roth, Desiree; And Others

175

The Uncertainty Principle in Image Processing  

Microsoft Academic Search

The uncertainty principle is recognized as one of the fundamental results in signal processing. Its role in inference is, however, less well known outside of quantum mechanics. It is the aim of this paper to provide a unified approach to the problem of uncertainty in image processing. It is shown that uncertainty can be derived from the fundamental constraints on

Roland Wilson; Goesta H. Granlund

1984-01-01

176

Automated fruit grading system using image processing  

Microsoft Academic Search

This paper describes the operations and performance of an automated quality verification system for agricultural products and its main features. The system utilizes improved engineering designs and image-processing techniques to convey and grade products. Basically two inspection stages of the system can be identified: external fruit inspection and internal fruit inspection. Surface inspection is accomplished through processing of color CCD

John B. Njoroge; Kazunori Ninomiya; Naoshi Kondo; H. Toita

2002-01-01

177

Image Processing on Encoded Video Sequences  

Microsoft Academic Search

This paper presents a novel approach to processing encoded video sequences prior to complete decoding. Scene changes are easily detected using DCT coefficients in JPEG and MPEG encoded video sequences. In addition, by analyzing the DCT coefficients, regions of interest may be isolated prior to decompression, increasing efficiency of any subsequent image processing steps, such as edge detection. The
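
One simple way to exploit DCT coefficients for scene-change detection, in the spirit of the abstract, is to compare histograms of per-block DC terms between consecutive frames; the sketch below is illustrative and not the authors' exact measure.

    import numpy as np

    def scene_change_scores(dc_frames, bins=32, value_range=(0, 2048)):
        # dc_frames: sequence of 2-D arrays of per-block DC coefficients, one per frame.
        # Large histogram differences between consecutive frames suggest scene changes.
        scores, prev = [], None
        for dc in dc_frames:
            hist, _ = np.histogram(dc, bins=bins, range=value_range, density=True)
            if prev is not None:
                scores.append(float(np.abs(hist - prev).sum()))
            prev = hist
        return scores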

Farshid Arman; Arding Hsu; Ming-yee Chiu

1994-01-01

178

Image Processing Application for Cognition (IPAC) - Traditional and Emerging Topics in Image Processing in Astronomy (Invited)  

NASA Astrophysics Data System (ADS)

A new application framework for advanced image processing for astronomy is presented. It implements standard two-dimensional operators, and recent developments in the field of non-astronomical image processing (IP), as well as original algorithms based on nonlinear partial differential equations (PDE). These algorithms are especially well suited for multi-scale astronomical images since they increase signal to noise ratio without smearing localized and diffuse objects. The visualization component is based on the extensive tools that we developed for Spitzer Space Telescope's observation planning tool Spot and archive retrieval tool Leopard. It contains many common features, combines images in new and unique ways and interfaces with many astronomy data archives. Both interactive and batch mode processing are incorporated. In the interactive mode, the user can set up simple processing pipelines, and monitor and visualize the resulting images from each step of the processing stream. The system is platform-independent and has an open architecture that allows extensibility by addition of plug-ins. This presentation addresses astronomical applications of traditional topics of IP (image enhancement, image segmentation) as well as emerging new topics like automated image quality assessment (QA) and feature extraction, which have potential for shaping future developments in the field. Our application framework embodies a novel synergistic approach based on integration of image processing, image visualization and image QA (iQA).

Pesenson, M.; Roby, W.; Helou, G.; McCollum, B.; Ly, L.; Wu, X.; Laine, S.; Hartley, B.

2008-08-01

179

Bistatic SAR: Signal Processing and Image Formation.  

SciTech Connect

This report describes the significant processing steps that were used to take the raw recorded digitized signals from the bistatic synthetic aperture RADAR (SAR) hardware built for the NCNS Bistatic SAR project to a final bistatic SAR image. In general, the process steps herein are applicable to bistatic SAR signals that include the direct-path signal and the reflected signal. The steps include preprocessing, data extraction to form a phase history, and finally, image formation. Various plots and values will be shown at most steps to illustrate the processing for a bistatic COSMO SkyMed collection gathered on June 10, 2013 at Kirtland Air Force Base, New Mexico.

Wahl, Daniel E.; Yocky, David A.

2014-10-01

180

Report on using TIPS (Teaching Information Processing System) in teaching physics and astronomy  

NSDL National Science Digital Library

A computer-managed instruction system, TIPS, has been used for over a decade in the teaching of diverse disciplines. This paper describes the recent use of TIPS in physics and astronomy courses at Kansas State University, Memphis State University, University of New Mexico, and University of Wisconsin-Green Bay. Student reactions to TIPS were largely positive, but the degree of success in improving student performance reported in many articles has not been observed.

Folland, Nathan; Marchini, Robert R.; Rhyner, Charles R.; Zeilik, Michael

2005-10-21

181

Parallel asynchronous systems and image processing algorithms  

NASA Technical Reports Server (NTRS)

A new hardware approach to implementation of image processing algorithms is described. The approach is based on silicon devices which would permit an independent analog processing channel to be dedicated to every pixel. A laminar architecture consisting of a stack of planar arrays of the device would form a two-dimensional array processor with a 2-D array of inputs located directly behind a focal plane detector array. A 2-D image data stream would propagate in neuronlike asynchronous pulse coded form through the laminar processor. Such systems would integrate image acquisition and image processing. Acquisition and processing would be performed concurrently as in natural vision systems. The research is aimed at implementation of algorithms, such as the intensity dependent summation algorithm and pyramid processing structures, which are motivated by the operation of natural vision systems. Implementation of natural vision algorithms would benefit from the use of neuronlike information coding and the laminar, 2-D parallel, vision system type architecture. Besides providing a neural network framework for implementation of natural vision algorithms, a 2-D parallel approach could eliminate the serial bottleneck of conventional processing systems. Conversion to serial format would occur only after raw intensity data has been substantially processed. An interesting challenge arises from the fact that the mathematical formulation of natural vision algorithms does not specify the means of implementation, so that hardware implementation poses intriguing questions involving vision science.

Coon, D. D.; Perera, A. G. U.

1989-01-01

182

Multivariate Image Analysis in Mineral Processing  

Microsoft Academic Search

In several process industries including mineral processing, where the materials are solids or slurries, some important measurements cannot be obtained using standard instrumentation (e.g., flow, temperature, pressure, pH, power draw, etc.), but can be visually appraised, and could be automatically quantified using machine vision techniques. In general, the information to extract from process images is not well defined and is

Carl Duchesne

183

A 3D image processing method for manufacturing process automation  

Microsoft Academic Search

Three-dimensional (3D) image processing provides a useful tool for machine vision applications. Typically a 3D vision system is divided into data acquisition, low-level processing, object representation and matching. In this paper, a 3D object pose estimation method is developed for an automated manufacturing assembly process. The experimental results show that the 3D pose estimation method produces accurate geometrical information for

Dongming Zhao; Songtao Li

2005-01-01

184

Fundamental concepts of digital image processing  

SciTech Connect

The field of digital image processing has experienced dramatic growth and increasingly widespread applicability in recent years. Fortunately, advances in computer technology have kept pace with the rapid growth in volume of image data in these and other applications. Digital image processing has become economical in many fields of research and in industrial and military applications. While each application has requirements unique from the others, all are concerned with faster, cheaper, more accurate, and more extensive computation. The trend is toward real-time and interactive operations, where the user of the system obtains preliminary results within a short enough time that the next decision can be made by the human processor without loss of concentration on the task at hand. An example of this is the obtaining of two-dimensional (2-D) computer-aided tomography (CAT) images. A medical decision might be made while the patient is still under observation rather than days later.

Twogood, R.E.

1983-03-01

185

Interactive Computer Assisted Instruction in Teaching of Process Analysis and Simulation.  

ERIC Educational Resources Information Center

To improve the instructional process, time shared computer-assisted instructional methods were developed to teach upper division undergraduate chemical engineering students the concepts of process simulation and analysis. The interactive computer simulation aimed at enabling the student to learn the difficult concepts of process dynamics by…

Nuttall, Herbert E., Jr.; Himmelblau, David M.

186

Image Processing and the Performance Gap  

Microsoft Academic Search

Automated image processing and analysis methods have brought new dimensions, literally and figuratively, to medical imaging. A large array of tools for visualization, quantization, classification, and decision-making are available to aid clinicians at all junctures: in real-time diagnosis and therapy, in planning, and in retrospective meta-analyses. Many of those tools, however, are not in regular use by radiologists. This chapter

Steven C. Horii; Murray H. Loew

187

Image processing of angiograms: A pilot study  

NASA Technical Reports Server (NTRS)

The technology transfer application this report describes is the result of a pilot study of image-processing methods applied to the image enhancement, coding, and analysis of arteriograms. Angiography is a subspecialty of radiology that employs the introduction of media with high X-ray absorption into arteries in order to study vessel pathology as well as to infer disease of the organs supplied by the vessel in question.

Larsen, L. E.; Evans, R. A.; Roehm, J. O., Jr.

1974-01-01

188

Parallel Implementation of Hyperspectral Image Processing Algorithms  

Microsoft Academic Search

High computing performance of algorithm analysis is essential in many hyperspectral imaging applications, including automatic target recognition for homeland defense and security, risk/hazard prevention and monitoring, wild-land fire tracking and biological threat detection. Despite the growing interest in hyperspectral imaging research, only a few efforts devoted to designing and implementing well-conformed parallel processing solutions currently exist in the open literature.

Antonio Plaza; David Valencia; Javier Plaza; Juan Sánchez-Testal; Sergio Mu; Soraya Bl

2006-01-01

189

Digital-image processing and image analysis of glacier ice  

USGS Publications Warehouse

This document provides a methodology for extracting grain statistics from 8-bit color and grayscale images of thin sections of glacier ice—a subset of physical properties measurements typically performed on ice cores. This type of analysis is most commonly used to characterize the evolution of ice-crystal size, shape, and intercrystalline spatial relations within a large body of ice sampled by deep ice-coring projects from which paleoclimate records will be developed. However, such information is equally useful for investigating the stress state and physical responses of ice to stresses within a glacier. The methods of analysis presented here go hand-in-hand with the analysis of ice fabrics (aggregate crystal orientations) and, when combined with fabric analysis, provide a powerful method for investigating the dynamic recrystallization and deformation behaviors of bodies of ice in motion. The procedures described in this document compose a step-by-step handbook for a specific image acquisition and data reduction system built in support of U.S. Geological Survey ice analysis projects, but the general methodology can be used with any combination of image processing and analysis software. The specific approaches in this document use the FoveaPro 4 plug-in toolset to Adobe Photoshop CS5 Extended but it can be carried out equally well, though somewhat less conveniently, with software such as the image processing toolbox in MATLAB, Image-Pro Plus, or ImageJ.

Fitzpatrick, Joan J.

2013-01-01

190

Image Processing for Diffusion Tensor Magnetic Resonance Imaging  

Microsoft Academic Search

This paper describes image processing techniques for Diffusion Tensor Magnetic Resonance. In Diffusion Tensor MRI, a tensor describing local water diffusion is acquired for each voxel. The geometric nature of the diffusion tensors can quantitatively characterize the local structure in tissues such as bone, muscles, and white matter of the brain. The close relationship between local image

Carl-fredrik Westin; S. E. Maier; B. Khidhir; Peter Everett; Ferenc A. Jolesz; Ron Kikinis

1999-01-01

191

Image processing instrumentation for speckle interferometry and imaging  

Microsoft Academic Search

This final technical report describes the rationale for the acquisition of computer hardware for assembling an interactive image processing facility to be used for on-going programs of high angular resolution speckle interferometry of astronomical objects. Speckle interferometry is a technique for obtaining diffraction limited spatial information of distant objects through the turbulent atmosphere while simultaneously permitting differential positional measurements of

H. A. McAlister

1984-01-01

192

Image processing applications for geologic mapping  

SciTech Connect

The use of satellite data, particularly Landsat images, for geologic mapping provides the geologist with a powerful tool. The digital format of these data permits applications of image processing to extract or enhance information useful for mapping purposes. Examples are presented of lithologic classification using texture measures, automatic lineament detection and structural analysis, and use of registered multisource satellite data. In each case, the additional mapping information provided relative to the particular treatment is evaluated. The goal is to provide the geologist with a range of processing techniques adapted to specific mapping problems.

Abrams, M.; Blusson, A.; Carrere, V.; Nguyen, T.; Rabu, Y.

1985-03-01

193

Processing infrared images of aircraft lapjoints  

NASA Technical Reports Server (NTRS)

Techniques for processing IR images of aging aircraft lapjoint data are discussed. Attention is given to a technique for detecting disbonds in aircraft lapjoints which clearly delineates the disbonded region from the bonded regions. The technique is weak on unpainted aircraft skin surfaces, but this limitation can be overcome by using a self-adhering contact sheet. Neural network analysis on raw temperature data has been shown to be an effective tool for visualization of images. Numerical simulation results show the above processing technique to be an effective tool in delineating the disbonds.

Syed, Hazari; Winfree, William P.; Cramer, K. E.

1992-01-01

194

Support Routines for In Situ Image Processing  

NASA Technical Reports Server (NTRS)

This software consists of a set of application programs that support ground-based image processing for in situ missions. These programs represent a collection of utility routines that perform miscellaneous functions in the context of the ground data system. Each one fulfills some specific need as determined via operational experience. The most unique aspect of these programs is that they are integrated into the large, in situ image processing system via the PIG (Planetary Image Geometry) library. They work directly with in situ data, understanding the appropriate image meta-data fields and updating them properly. The programs themselves are completely multimission; all mission dependencies are handled by PIG. This suite of programs consists of:
(1) marscahv: generates a linearized, epi-polar aligned image given a stereo pair of images. These images are optimized for 1-D stereo correlations.
(2) marscheckcm: compares the camera model in an image label with one derived via kinematics modeling on the ground.
(3) marschkovl: checks the overlaps between a list of images in order to determine which might be stereo pairs. This is useful for non-traditional stereo images like long-baseline or those from an articulating arm camera.
(4) marscoordtrans: translates mosaic coordinates from one form into another.
(5) marsdispcompare: checks a left-to-right stereo disparity image against a right-to-left disparity image to ensure they are consistent with each other.
(6) marsdispwarp: takes one image of a stereo pair and warps it through a disparity map to create a synthetic opposite-eye image. For example, a right-eye image could be transformed to look like it was taken from the left eye via this program.
(7) marsfidfinder: finds fiducial markers in an image by projecting their approximate location and then using correlation to locate the markers to subpixel accuracy. These fiducial markers are small targets attached to the spacecraft surface. This helps verify, or improve, the pointing of in situ cameras.
(8) marsinvrange: inverse of marsrange; given a range file, re-computes an XYZ file that closely matches the original.
(9) marsproj: projects an XYZ coordinate through the camera model, and reports the line/sample coordinates of the point in the image.
(10) marsprojfid: given the output of marsfidfinder, projects the XYZ locations and compares them to the found locations, creating a report showing the fiducial errors in each image.
(11) marsrad: radiometrically corrects an image.
(12) marsrelabel: updates coordinate system or camera model labels in an image.
(13) marstiexyz: given a stereo pair, allows the user to interactively pick a point in each image and reports the XYZ value corresponding to that pair of locations.
(14) marsunmosaic: extracts a single frame from a mosaic, which will be created such that it could have been an input to the original mosaic. Useful for creating simulated input frames using different camera models than the original mosaic used.
(15) merinverter: uses an inverse lookup table to convert 8-bit telemetered data to its 12-bit original form. Can be used in other missions despite the name.

Deen, Robert G.; Pariser, Oleg; Yeates, Matthew C.; Lee, Hyun H.; Lorre, Jean

2013-01-01

195

Processing Images of Craters for Spacecraft Navigation  

NASA Technical Reports Server (NTRS)

A crater-detection algorithm has been conceived to enable automation of what, heretofore, have been manual processes for utilizing images of craters on a celestial body as landmarks for navigating a spacecraft flying near or landing on that body. The images are acquired by an electronic camera aboard the spacecraft, then digitized, then processed by the algorithm, which consists mainly of the following steps:
1. Edges in the image are detected and placed in a database.
2. Crater rim edges are selected from the edge database.
3. Edges that belong to the same crater are grouped together.
4. An ellipse is fitted to each group of crater edges.
5. Ellipses are refined directly in the image domain to reduce errors introduced in the detection of edges and fitting of ellipses.
6. The quality of each detected crater is evaluated.
It is planned to utilize this algorithm as the basis of a computer program for automated, real-time, onboard processing of crater-image data. Experimental studies have led to the conclusion that this algorithm is capable of a detection rate >93 percent, a false-alarm rate <5 percent, a geometric error <0.5 pixel, and a position error <0.3 pixel.
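
Steps 1 through 4 can be approximated with off-the-shelf tools; the OpenCV sketch below (edge detection, contour grouping, ellipse fitting) is only a rough analogue of the algorithm, with assumed thresholds, and omits the refinement and quality-evaluation steps.

    import cv2

    def detect_candidate_ellipses(gray, min_points=20):
        # Step 1: edge detection; steps 3-4: group edge chains and fit an ellipse to each.
        edges = cv2.Canny(gray, 50, 150)
        # OpenCV 4 returns (contours, hierarchy)
        contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
        ellipses = []
        for contour in contours:
            if len(contour) >= min_points:            # fitEllipse needs at least 5 points
                ellipses.append(cv2.fitEllipse(contour))
        return ellipses                               # (center, axes, angle) per candidate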

Cheng, Yang; Johnson, Andrew E.; Matthies, Larry H.

2009-01-01

196

Product review: lucis image processing software.  

PubMed

Lucis is a software program that allows the manipulation of images through the process of selective contrast pattern emphasis. Using an image-processing algorithm called Differential Hysteresis Processing (DHP), Lucis extracts and highlights patterns based on variations in image intensity (luminance). The result is that details can be seen that would otherwise be hidden in deep shadow or excessive brightness. The software is contained on a single floppy disk, is easy to install on a PC, simple to use, and runs on Windows 95, Windows 98, and Windows NT operating systems. The cost is $8,500 for a license, but is estimated to save a great deal of money in photographic materials, time, and labor that would have otherwise been spent in the darkroom. Superb images are easily obtained from unstained (no lead or uranium) sections, and stored image files sent to laser printers are of publication quality. The software can be used not only for all types of microscopy, including color fluorescence light microscopy, biological and materials science electron microscopy (TEM and SEM), but will be beneficial in medicine, such as X-ray films (pending approval by the FDA), and in the arts. PMID:10206154

Johnson, J E

1999-04-01

197

Mariner 9-Image processing and products  

USGS Publications Warehouse

The purpose of this paper is to describe the system for the display, processing, and production of image-data products created to support the Mariner 9 Television Experiment. Of necessity, the system was large in order to respond to the needs of a large team of scientists with a broad scope of experimental objectives. The desire to generate processed data products as rapidly as possible to take advantage of adaptive planning during the mission, coupled with the complexities introduced by the nature of the vidicon camera, greatly increased the scale of the ground-image processing effort. This paper describes the systems that carried out the processes and delivered the products necessary for real-time and near-real-time analyses. References are made to the computer algorithms used for the different levels of decalibration and analysis. © 1973.

Levinthal, E.C.; Green, W.B.; Cutts, J.A.; Jahelka, E.D.; Johansen, R.A.; Sander, M.J.; Seidman, J.B.; Young, A.T.; Soderblom, L.A.

1973-01-01

198

FITSH: Software Package for Image Processing  

NASA Astrophysics Data System (ADS)

FITSH provides a standalone environment for analysis of data acquired by imaging astronomical detectors. The package provides utilities both for the full pipeline of subsequent related data processing steps (including image calibration, astrometry, source identification, photometry, differential analysis, low-level arithmetic operations, multiple image combinations, spatial transformations and interpolations, etc.) and for aiding the interpretation of the (mainly photometric and/or astrometric) results. The package also features a consistent implementation of photometry based on image subtraction, point spread function fitting and aperture photometry, and provides easy-to-use interfaces for comparisons and for picking the most suitable method for a particular problem. The utilities in the package are built on top of the commonly used UNIX/POSIX shells (hence the name of the package), therefore both frequently used and well-documented tools for such environments can be exploited and managing massive amounts of data is rather convenient.

Pál, András

2011-11-01

199

Digital image processing of vascular angiograms  

NASA Technical Reports Server (NTRS)

The paper discusses the estimation of the degree of atherosclerosis in the human femoral artery through the use of a digital image processing system for vascular angiograms. The film digitizer uses an electronic image dissector camera to scan the angiogram and convert the recorded optical density information into a numerical format. Another processing step involves locating the vessel edges from the digital image. The computer has been programmed to estimate vessel abnormality through a series of measurements, some derived primarily from the vessel edge information and others from optical density variations within the lumen shadow. These measurements are combined into an atherosclerosis index, which is found in a post-mortem study to correlate well with both visual and chemical estimates of atherosclerotic disease.

Selzer, R. H.; Beckenbach, E. S.; Blankenhorn, D. H.; Crawford, D. W.; Brooks, S. H.

1975-01-01

200

Review of biomedical signal and image processing  

PubMed Central

This article is a review of the book “Biomedical Signal and Image Processing” by Kayvan Najarian and Robert Splinter, which is published by CRC Press, Taylor & Francis Group. It will evaluate the contents of the book and discuss its suitability as a textbook, while mentioning highlights of the book, and providing comparison with other textbooks.

2013-01-01

201

CMSC 426: Image Processing (Computer Vision)  

E-print Network

Lecture-slide excerpt: CMSC 426: Image Processing (Computer Vision), David Jacobs. Today's class: what is vision; what is computer vision; layout of the class. Vision: ``to know what is where, by looking'' (Marr): where, and what. Why is vision interesting? Psychology: roughly 50% of cerebral cortex is for vision. Vision is how

Jacobs, David

202

Cubic convolution interpolation for digital image processing  

Microsoft Academic Search

Cubic convolution interpolation is a new technique for resampling discrete data. It has a number of desirable features which make it useful for image processing. The technique can be performed efficiently on a digital computer. The cubic convolution interpolation function converges uniformly to the function being interpolated as the sampling increment approaches zero. With the appropriate boundary conditions and constraints
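
For reference, Keys' cubic convolution kernel (with the usual parameter a = -1/2) and a 1-D interpolation using it can be written as below; this follows the standard published formulation rather than any code from the paper.

    import math

    def keys_kernel(x, a=-0.5):
        # Piecewise-cubic convolution kernel; zero outside |x| < 2.
        x = abs(x)
        if x <= 1.0:
            return (a + 2.0) * x**3 - (a + 3.0) * x**2 + 1.0
        if x < 2.0:
            return a * x**3 - 5.0 * a * x**2 + 8.0 * a * x - 4.0 * a
        return 0.0

    def interpolate(samples, t):
        # Cubic convolution interpolation of a uniformly sampled 1-D signal at position t.
        i = math.floor(t)
        total = 0.0
        for k in range(-1, 3):
            idx = min(max(i + k, 0), len(samples) - 1)   # clamp at the boundaries
            total += samples[idx] * keys_kernel(t - (i + k))
        return total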

ROBERT G. KEYS

1981-01-01

203

Spectral image processing in real-time  

Microsoft Academic Search

The fields of classical image processing and optical spectroscopy have developed independently for a long time. While the first subject deals with pictorial information, describing materials by their surfaces in terms of brightness, texture and color depending on the illumination in the two-dimensional field of view of the optics, the second one usually classifies material properties

Matthias F. Carlsohn

2006-01-01

204

Stochastic processes, estimation theory and image enhancement  

NASA Technical Reports Server (NTRS)

An introductory account of stochastic processes, estimation theory, and image enhancement is presented. The book is primarily intended for first-year graduate students and practicing engineers and scientists whose work requires an acquaintance with the theory. Fundamental concepts of probability that are required to support the main topics are reviewed. The appendices discuss the remaining mathematical background.

Assefi, T.

1978-01-01

205

The Guidance Role of the Instructor in the Teaching and Learning Process  

ERIC Educational Resources Information Center

This study examines the guidance role of the instructor in the teaching and learning process. The paper highlights the need for learners to be consciously guided by their teachers, as this facilitates and complements the learning process. Gagne's theory of conditions of learning, phases of learning and model for design of instruction was adopted to…

Alutu, Azuka N. G.

2006-01-01

206

Development of the Instructional Model by Integrating Information Literacy in the Class Learning and Teaching Processes  

ERIC Educational Resources Information Center

This study was aimed at developing an instructional model by integrating information literacy in the instructional process of general education courses at an undergraduate level. The research query, "What is the teaching methodology that integrates information literacy in the instructional process of general education courses at an undergraduate…

Maitaouthong, Therdsak; Tuamsuk, Kulthida; Techamanee, Yupin

2011-01-01

207

A Four Stage Process of Cooperative Teaching for Beginning University Teachers Meeting First Year University Students  

Microsoft Academic Search

A four stage process of co-operative teaching for beginning university teachers meeting first year groups of students provides a structured way in which classes can be developed. Through a process of induction, barriers to classroom participation are minimised, roles are clarified, academic goals are considered and an active knowledge concept is introduced. Students are encouraged to critically assess their own

JOANNA KIDMAN; KEN STEVENS

208

Comparative Study of Automatic Acquisition Methods of Image Processing Procedures  

Microsoft Academic Search

Two image processing expert systems, IMPRESS[1] and IMPRESS-Pro[2], have already been developed by our research group. These systems automatically generate image processing procedures from sample pairs of an image and a sketch representing an object to be extracted from it. Automatic acquisition of the image processing procedure is a kind of knowledge discovery process, because an image processing procedure which

Toshihiro Hamada; Akinobu Shimizu; Toyofumi Saito; Jun-ichi Hasegawa; Jun-ichiro Toriwaki

2000-01-01

209

Color Imaging management in film processing  

NASA Astrophysics Data System (ADS)

The latest research projects in the laboratory LIGIV concern capture, processing, archiving and display of color images considering the trichromatic nature of the Human Vision System (HVS). Among these projects one addresses digital cinematographic film sequences of high resolution and dynamic range. This project aims to optimize the use of content for the post-production operators and for the end user. The studies presented in this paper address the use of metadata to optimise the consumption of video content on a device of the user's choice, independent of the nature of the equipment that captured the content. Optimising consumption includes enhancing the quality of image reconstruction on a display. Another part of this project addresses the content-based adaptation of image display. The main focus is on Regions of Interest (ROI) operations, based on the ROI concepts of MPEG-7. The aim of this second part is to characterize and ensure the conditions of display even if the display device or display media changes. This requires firstly the definition of a reference color space and the definition of bi-directional color transformations for each peripheral device (camera, display, film recorder, etc.). The complicating factor is that different devices have different color gamuts, depending on the chromaticity of their primaries and the ambient illumination under which they are viewed. To match the displayed image to the intended appearance, all kinds of production metadata (camera specification, camera colour primaries, lighting conditions) should be associated with the film material. Metadata and content build together rich content. The author is assumed to specify conditions as known from digital graphics arts. To control image pre-processing and image post-processing, these specifications should be contained in the film's metadata. The specifications are related to the ICC profiles but additionally need to consider mesopic viewing conditions.

Tremeau, Alain; Konik, Hubert; Colantoni, Philippe

2003-12-01

210

Subband/transform functions for image processing  

NASA Technical Reports Server (NTRS)

Functions for image data processing written for use with the MATLAB(TM) software package are presented. These functions provide the capability to transform image data with block transformations (such as the Walsh Hadamard) and to produce spatial frequency subbands of the transformed data. Block transforms are equivalent to simple subband systems. The transform coefficients are reordered using a simple permutation to give subbands. The low frequency subband is a low resolution version of the original image, while the higher frequency subbands contain edge information. The transform functions can be cascaded to provide further decomposition into more subbands. If the cascade is applied to all four of the first stage subbands (in the case of a four band decomposition), then a uniform structure of sixteen bands is obtained. If the cascade is applied only to the low frequency subband, an octave structure of seven bands results. Functions for the inverse transforms are also given. These functions can be used for image data compression systems. The transforms do not in themselves produce data compression, but prepare the data for quantization and compression. Sample quantization functions for subbands are also given. A typical compression approach is to subband the image data, quantize it, then use statistical coding (e.g., run-length coding followed by Huffman coding) for compression. Contour plots of image data and subbanded data are shown.
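
A minimal sketch of the idea, assuming a 2x2 Walsh-Hadamard block transform whose coefficients are regrouped into four subbands; this illustrates the concept and is not the MATLAB code the abstract describes.

    import numpy as np

    def hadamard2x2_subbands(img):
        # Apply a 2x2 Hadamard transform to non-overlapping blocks and regroup the
        # coefficients: LL is a half-resolution version, LH/HL/HH carry edge detail.
        a = img[0::2, 0::2].astype(float)
        b = img[0::2, 1::2].astype(float)
        c = img[1::2, 0::2].astype(float)
        d = img[1::2, 1::2].astype(float)
        ll = (a + b + c + d) / 2.0
        lh = (a - b + c - d) / 2.0
        hl = (a + b - c - d) / 2.0
        hh = (a - b - c + d) / 2.0
        return ll, lh, hl, hh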

Glover, Daniel

1993-01-01

211

Advanced communications technologies for image processing  

NASA Technical Reports Server (NTRS)

It is essential for image analysts to have the capability to link to remote facilities as a means of accessing both data bases and high-speed processors. This can increase productivity through enhanced data access and minimization of delays. New technology is emerging to provide the high communication data rates needed in image processing. These developments include multi-user sharing of high bandwidth (60 megabits per second) Time Division Multiple Access (TDMA) satellite links, low-cost satellite ground stations, and high speed adaptive quadrature modems that allow 9600 bit per second communications over voice-grade telephone lines.

Likens, W. C.; Jones, H. W.; Shameson, L.

1984-01-01

212

Ethical and legal aspects on the use of images and photographs in medical teaching and publication.  

PubMed

The aim of the study was to investigate the legal and ethical concerns raised by the use of photographs and images in medical publication. A search of the pertinent literature was performed. It is of paramount importance that the patient's autonomy, privacy and confidentiality are respected. In all cases in which photographs and images contain identifiable information, the patient's consent for any potential use of this material is mandatory. Patients should be aware that with the evolution of electronic publication, once an image is published, there is no efficient control of its future misuse. Physicians and hospitals have a duty to treat with confidentiality any material kept in the patient's medical records. Efforts should be made to anonymise images and photographs used in teaching and publication so that such information does not raise ethical and legal concerns. The procedures for using photographs and images in medical publication and teaching should respect the ethical principles and contain only anonymous information to avoid legal consequences. Continuous scrutiny and reform are required in order to adapt to the changing social and scientific environment. PMID:20671657

Mavroforou, A; Antoniou, G; Giannoukas, A D

2010-08-01

213

Computer image processing in marine resource exploration  

NASA Technical Reports Server (NTRS)

Pictographic data or imagery is commonly used in marine exploration. Pre-existing image processing techniques (software) similar to those used on imagery obtained from unmanned planetary exploration were used to improve marine photography and side-scan sonar imagery. Features and details not visible by conventional photo processing methods were enhanced by filtering and noise removal on selected deep-sea photographs. Information gained near the periphery of photographs allows improved interpretation and facilitates construction of bottom mosaics where overlapping frames are available. Similar processing techniques were applied to side-scan sonar imagery, including corrections for slant range distortion, and along-track scale changes. The use of digital data processing and storage techniques greatly extends the quantity of information that can be handled, stored, and processed.
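
As a small illustration of the slant-range correction mentioned for side-scan sonar, the sketch below converts per-sample slant ranges to ground ranges assuming a flat seafloor and a known towfish altitude; this is the standard geometric relation, not the study's own processing code.

    import numpy as np

    def slant_to_ground_range(slant_ranges, altitude):
        # Flat-seafloor geometry: ground = sqrt(slant^2 - altitude^2).
        # Samples closer than the altitude have no seafloor return; clamp them to zero.
        s = np.asarray(slant_ranges, dtype=float)
        return np.sqrt(np.maximum(s ** 2 - altitude ** 2, 0.0))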

Paluzzi, P. R.; Normark, W. R.; Hess, G. R.; Hess, H. D.; Cruickshank, M. J.

1976-01-01

214

Automated synthesis of image processing procedures using AI planning techniques  

NASA Technical Reports Server (NTRS)

This paper describes the Multimission VICAR (Video Image Communication and Retrieval) Planner (MVP) (Chien 1994) system, which uses artificial intelligence planning techniques (Iwasaki & Friedland, 1985, Pemberthy & Weld, 1992, Stefik, 1981) to automatically construct executable complex image processing procedures (using models of the smaller constituent image processing subprograms) in response to image processing requests made to the JPL Multimission Image Processing Laboratory (MIPL). The MVP system allows the user to specify the image processing requirements in terms of the various types of correction required. Given this information, MVP derives unspecified required processing steps and determines appropriate image processing programs and parameters to achieve the specified image processing goals. This information is output as an executable image processing program which can then be executed to fill the processing request.

Chien, Steve; Mortensen, Helen

1994-01-01

215

A Robust Document Image Binarization Technique  

E-print Network

Presents a robust binarization technique for degraded document images. Index Terms: document image processing, document analysis, pixel classification, degraded documents.

Tan, Chew Lim

216

Architecture for web-based image processing  

NASA Astrophysics Data System (ADS)

A computer systems architecture for processing medical images and other data coming over the Web is proposed. The architecture comprises a Java engine for communicating images over the Internet, storing data in local memory, doing floating point calculations, and a coprocessor MIMD parallel DSP for doing fine-grained operations found in video, graphics, and image processing applications. The local memory is shared between the Java engine and the parallel DSP. Data coming from the Web is stored in the local memory. This approach avoids the frequent movement of image data between a host processor's memory and an image processor's memory, found in many image processing systems. A low-power and high performance parallel DSP architecture containing lots of processors interconnected by a segmented hierarchical network has been developed. The instruction set of the 16-bit processor supports video, graphics, and image processing calculations. Two's complement arithmetic, saturation arithmetic, and packed instructions are supported. Higher data precision such as 32-bit and 64-bit can be achieved by cascading processors. A VLSI chip implementation of the architecture containing 64 processors organized in 16 clusters and interconnected by a statically programmable hierarchical bus is in progress. The buses are segmentable by programming switches on the bus. The instruction memory of each processor has sixteen 40-bit words. Data streaming through the processor is manipulated by the instructions. Multiple operations can be performed in a single cycle in a processor. A low-power handshake protocol is used for synchronization between the sender and the receiver of data. Temporary storage for data and filter coefficients is provided in each chip. A 256 by 16 memory unit is included in each of the 16 clusters. The memory unit can be used as a delay line, FIFO, lookup table or random access memory. The architecture is scalable with technology. Portable multimedia terminals like U.C. Berkeley's InfoPad can be developed using the proposed parallel DSP architecture, color display, pen interface, and wireless network communication for use in clinics, hospitals, homes, offices, and factories.

Srini, Vason P.; Pini, David; Armstrong, Matt D.; Alalusi, Sayf H.; Thendean, John; Ueng, Sain-Zee; Bushong, David P.; Borowski, Erek S.; Chao, Elaine; Rabaey, Jan M.

1997-09-01

217

IMAGE 100: The interactive multispectral image processing system  

NASA Technical Reports Server (NTRS)

The need for rapid, cost-effective extraction of useful information from vast quantities of multispectral imagery available from aircraft or spacecraft has resulted in the design, implementation and application of a state-of-the-art processing system known as IMAGE 100. Operating on the general principle that all objects or materials possess unique spectral characteristics or signatures, the system uses this signature uniqueness to identify similar features in an image by simultaneously analyzing signatures in multiple frequency bands. Pseudo-colors, or themes, are assigned to features having identical spectral characteristics. These themes are displayed on a color CRT, and may be recorded on tape, film, or other media. The system was designed to incorporate key features such as interactive operation, user-oriented displays and controls, and rapid-response machine processing. Owing to these features, the user can readily control and/or modify the analysis process based on his knowledge of the input imagery. Effective use can be made of conventional photographic interpretation skills and state-of-the-art machine analysis techniques in the extraction of useful information from multispectral imagery. This approach results in highly accurate multitheme classification of imagery in seconds or minutes rather than the hours often involved in processing using other means.
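
As a rough, generic illustration of theme assignment from spectral signatures (not the IMAGE 100 implementation), a minimum-distance classifier that labels each pixel with the nearest training signature:

```python
import numpy as np

def classify_by_signature(cube, signatures):
    """cube: (rows, cols, bands) multispectral image.
    signatures: (n_themes, bands) mean spectra from training areas.
    Returns an integer theme map of shape (rows, cols)."""
    r, c, b = cube.shape
    pixels = cube.reshape(-1, b).astype(float)
    # Squared Euclidean distance from every pixel to every signature.
    d = ((pixels[:, None, :] - signatures[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1).reshape(r, c)
```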

Schaller, E. S.; Towles, R. W.

1975-01-01

218

Position invariant linear operations in image processing  

E-print Network

A thesis on position invariant linear operations in image processing, submitted for the degree of Master of Science at Texas A&M University, May 1972. The recoverable excerpt notes that in many instances the modulation or contrast of an object may be reduced by atmospheric haze, with reference to a figure relating signal modulation to the minimum modulation required for detection (adapted from [28]).

Dillon, Terrence Lee

1972-01-01

219

Digital image processing of vascular angiograms  

NASA Technical Reports Server (NTRS)

A computer image processing technique was developed to estimate the degree of atherosclerosis in the human femoral artery. With an angiographic film of the vessel as input, the computer was programmed to estimate vessel abnormality through a series of measurements, some derived primarily from the vessel edge information and others from optical density variations within the lumen shadow. These measurements were combined into an atherosclerosis index, which was found to correlate well with both visual and chemical estimates of atherosclerotic disease.

Selzer, R. H.; Blankenhorn, D. H.; Beckenbach, E. S.; Crawford, D. W.; Brooks, S. H.

1975-01-01

220

Color Image Processing and Object Tracking System  

NASA Technical Reports Server (NTRS)

This report describes a personal-computer-based system for automatic and semiautomatic tracking of objects on film or video tape, developed to meet the needs of the Microgravity Combustion and Fluids Science Research Programs at the NASA Lewis Research Center. The system consists of individual hardware components working under computer control to achieve a high degree of automation. The most important hardware components include 16-mm and 35-mm film transports, a high-resolution digital camera mounted on an x-y-z micro-positioning stage, an S-VHS tape deck, a Hi8 tape deck, a video laserdisk, and a framegrabber. All of the image input devices are remotely controlled by a computer. Software was developed to integrate the overall operation of the system, including device frame incrementation, grabbing of image frames, image processing of the object's neighborhood, locating the position of the object being tracked, and storing the coordinates in a file. This process is performed repeatedly until the last frame is reached. Several different tracking methods are supported. To illustrate the process, two representative applications of the system are described. These applications represent typical uses of the system and include tracking the propagation of a flame front and tracking the movement of a liquid-gas interface with extremely poor visibility.

Klimek, Robert B.; Wright, Ted W.; Sielken, Robert S.

1996-01-01

221

Using NASA Space Imaging Technology to Teach Earth and Sun Topics  

NASA Astrophysics Data System (ADS)

We teach an experimental college-level course, directed toward elementary education majors, emphasizing "hands-on" activities that can be easily applied to the elementary classroom. This course, Physics 240: "The Sun-Earth Connection," includes various ways to study selected topics in physics, earth science, and basic astronomy. Our lesson plans and EPO materials make extensive use of NASA imagery and cover topics on magnetism; the solar photospheric, chromospheric, and coronal spectra; and earth science and climate. In addition, we are developing and will cover topics on ecosystem structure, biomass, and water on Earth. We strive to free the non-science undergraduate from the "fear of science" and replace it with the excitement of science, so that these future teachers will carry this excitement to their future students. Hands-on experiments, computer simulations, analysis of real NASA data, and vigorous seminar discussions are blended in an inquiry-driven curriculum to instill a confident understanding of basic physical science and modern, effective methods for teaching it. The course also demonstrates how scientific thinking and hands-on activities can be implemented in the classroom. Most of the topics were selected using the National Science Standards and National Mathematics Standards that are addressed in grades K-8. The course focuses on helping education majors: 1) build knowledge of scientific concepts and processes; 2) understand the measurable attributes of objects and the units and methods of measurement; 3) conduct data analysis (collecting, organizing, and presenting scientific data, and predicting results); 4) use hands-on approaches to teach science; and 5) become familiar with Internet science teaching resources. Here we share our experiences and the challenges we face while teaching this course.

Verner, E.; Bruhweiler, F. C.; Long, T.

2011-12-01

222

The Process of Teaching and Learning about Reflection: Research Insights from Professional Nurse Education  

ERIC Educational Resources Information Center

The study aimed to investigate the process of reflection in professional nurse education and the part it played in a teaching and learning context. The research focused on the social construction of reflection within a post-registration, palliative care programme, accessed by nurses, in the United Kingdom (UK). Through an interpretive ethnographic…

Bulman, Chris; Lathlean, Judith; Gobbi, Mary

2014-01-01

223

Using a Laboratory Simulator in the Teaching and Study of Chemical Processes in Estuarine Systems  

ERIC Educational Resources Information Center

The teaching of Chemical Oceanography in the Faculty of Marine and Environmental Sciences of the University of Cadiz (Spain) has been improved since 1994 by the employment of a device for the laboratory simulation of estuarine mixing processes and the characterisation of the chemical behaviour of many substances that pass through an estuary. The…

Garcia-Luque, E.; Ortega, T.; Forja, J. M.; Gomez-Parra, A.

2004-01-01

224

Personalized Instruction, Group Process and the Teaching of Psychological Theories of Learning.  

ERIC Educational Resources Information Center

An innovative approach to teaching learning theory to undergraduates was tested by comparing a modified Personalized System of Instruction (PSI) group process class (n=19) to a traditional teacher-centered control class (n=32). Predictions were that academic performance and motivation would be improved by the PSI method, and student satisfaction…

DiScipio, William J.; Crohn, Joan

225

Learning and Teaching about the Nature of Science through Process Skills  

ERIC Educational Resources Information Center

This dissertation, a three-paper set, explored whether the process skills-based approach to nature of science instruction improves teachers' understandings, intentions to teach, and instructional practice related to the nature of science. The first paper examined the nature of science views of 53 preservice science teachers before and after a…

Mulvey, Bridget K.

2012-01-01

226

A National Research Survey of Technology Use in the BSW Teaching and Learning Process  

ERIC Educational Resources Information Center

The purpose of this descriptive-correlational research study was to assess the overall use of technology in the teaching and learning process (TLP) by BSW educators. The accessible and target population included all full-time, professorial-rank, BSW faculty in Council on Social Work Education--accredited BSW programs at land grant universities.…

Buquoi, Brittany; McClure, Carli; Kotrlik, Joseph W.; Machtmes, Krisanna; Bunch, J. C.

2013-01-01

227

Mathematics Teaching and Learning as a Mediating Process: The Case of Tape Diagrams  

ERIC Educational Resources Information Center

This article examines how visual representations may mediate the teaching and learning of mathematics over time in Japanese elementary classrooms. Using the Zone of Proximal Development Mathematical Learning Model (Murata & Fuson, 2006; Fuson & Murata, 2007), the process of mediation is explicated. The tape diagram, a central visual representation…

Murata, Aki

2008-01-01

228

Social interest and the development of cortical face specialization: what autism teaches us about face processing.  

E-print Network

Discusses how findings from autism spectrum disorder (ASD) inform theories of the development of "normal" face processing; the argument is that individuals who lack social interest may fail to develop cortical face specialization. Face specialization may develop

Gauthier, Isabel

229

ICCE/ICCAI 2000 Full & Short Papers (Teaching and Learning Processes).  

ERIC Educational Resources Information Center

This document contains the full and short papers on teaching and learning processes from ICCE/ICCAI 2000 (International Conference on Computers in Education/International Conference on Computer-Assisted Instruction) covering the following topics: a code restructuring tool to help scaffold novice programmers; efficient study of Kanji using…

2000

230

Teaching Heritage  

NSDL National Science Digital Library

Subtitled "a professional development Website for teachers," Teaching Heritage is an impressive collection of information and resources for teaching Australian history and culture. There are eight main sections to the site: four offer teaching resources and four provide teaching units. The resource sections include an examination of different ways of defining heritage, an Australian heritage timeline, discussions of different approaches to teaching heritage through media, and outcomes-based approaches in teaching and assessing heritage coursework. The teaching units deal in depth with issues of citizenship, nationalism, Australian identities, and new cultural values. A Heritage Gallery features images of various culturally significant or representative places in Australia, such as New Italy, the Dundullimal Homestead, Australian Hall, Kelly's Bush, and many more. Obviously, teachers of Civics on the southern continent will find this site extremely useful, but the teaching units -- rich with texts and images -- also offer fascinating introductions for anyone interested in the issues of Australian nation-making.

231

HYPERSPECTRAL IMAGING: SIGNAL PROCESSING ALGORITHM DESIGN AND ANALYSIS  

E-print Network

Hyperspectral Imaging: Signal Processing Algorithm Design and Analysis, by Chein-I Chang. Recoverable contents include a chapter on independent component analysis-based abundance quantification and Part V on hyperspectral image compression (Chapter 19: Spectral/Spatial Hyperspectral Image Compression; Chapter 20: Hyperspectral Information

Chang, Chein-I

232

Parallel Spatial-Spectral Processing of Hyperspectral Images  

E-print Network

Discusses parallel spatial-spectral processing of hyperspectral images, covering unsupervised and supervised classification, spectral unmixing, mixture analysis, and compression of hyperspectral image data [3].

Plaza, Antonio J.

233

Digital imaging processing for biophysical applications  

NASA Astrophysics Data System (ADS)

Many biological and biophysical experimental setups rely on digital image processing. The introduction of a new generation of digital cameras enables new experiments in which time-dependent processes can be detected with high time resolution and a high signal-to-noise ratio. However, there are no software tools available with which the full potential of these digital cameras can be explored. Although the data streams of up to 24 MB/s are readily processed by the available hardware, they present an immense challenge to current software packages. We present a software concept based on the object-oriented paradigm, with which digital cameras can be controlled and full images at full rate are captured, processed, and displayed simultaneously over extended time periods, limited only by the available hard disk space. By implementing wavelet-based compression algorithms, the obstacle of archiving the immense amount of data is overcome. We present examples in which original data files are compressed to 10% of their original size without loss of information. The modular character of the object-based program enables the implementation of a wide range of different applications into the program.
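
A minimal sketch of the kind of wavelet thresholding that enables such roughly 10:1 compression, assuming the PyWavelets library; the wavelet, decomposition level, and threshold rule are illustrative choices, not those of the authors:

```python
import numpy as np
import pywt

def compress_frame(frame, wavelet="db4", level=3, keep=0.10):
    """Zero all but the largest `keep` fraction of wavelet coefficients."""
    coeffs = pywt.wavedec2(frame.astype(float), wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    thresh = np.quantile(np.abs(arr), 1.0 - keep)
    arr[np.abs(arr) < thresh] = 0.0          # the sparse array is what would be stored
    coeffs = pywt.array_to_coeffs(arr, slices, output_format="wavedec2")
    return pywt.waverec2(coeffs, wavelet)    # reconstruction, for visual inspection
```

In practice the sparse coefficient array would be entropy-coded before archiving; the sketch only shows where the 10:1 reduction in retained coefficients comes from.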

Schilling, Jörg; Sackmann, Erich; Bausch, Andreas R.

2004-09-01

234

FITSH- a software package for image processing  

NASA Astrophysics Data System (ADS)

In this paper we describe the main features of the software package named FITSH, intended to provide a standalone environment for analysis of data acquired by imaging astronomical detectors. The package both provides utilities for the full pipeline of subsequent related data-processing steps (including image calibration, astrometry, source identification, photometry, differential analysis, low-level arithmetic operations, multiple-image combinations, spatial transformations and interpolations) and aids the interpretation of the (mainly photometric and/or astrometric) results. The package also features a consistent implementation of photometry based on image subtraction, point spread function fitting and aperture photometry and provides easy-to-use interfaces for comparisons and for picking the most suitable method for a particular problem. The set of utilities found in this package is built on top of the commonly used UNIX/POSIX shells (hence the name of the package); therefore, both frequently used and well-documented tools for such environments can be exploited and managing a massive amount of data is rather convenient.

Pál, András.

2012-04-01

235

The Airborne Ocean Color Imager - System description and image processing  

NASA Technical Reports Server (NTRS)

The Airborne Ocean Color Imager was developed as an aircraft instrument to simulate the spectral and radiometric characteristics of the next generation of satellite ocean color instrumentation. Data processing programs have been developed as extensions of the Coastal Zone Color Scanner algorithms for atmospheric correction and bio-optical output products. The latter include several bio-optical algorithms for estimating phytoplankton pigment concentration, as well as one for the diffuse attenuation coefficient of the water. Additional programs have been developed to geolocate these products and remap them into a georeferenced data base, using data from the aircraft's inertial navigation system. Examples illustrate the sequential data products generated by the processing system, using data from flightlines near the mouth of the Mississippi River: from raw data to atmospherically corrected data, to bio-optical data, to geolocated data, and, finally, to georeferenced data.

Wrigley, Robert C.; Slye, Robert E.; Klooster, Steven A.; Freedman, Richard S.; Carle, Mark; Mcgregor, Lloyd F.

1992-01-01

236

Portable EDITOR (PEDITOR): A portable image processing system. [satellite images  

NASA Technical Reports Server (NTRS)

The PEDITOR image processing system was created to be readily transferable from one type of computer system to another. While nearly identical in function and operation to its predecessor, EDITOR, PEDITOR employs additional techniques which greatly enhance its portability. These cover system structure and processing. In order to confirm the portability of the software system, two different types of computer systems running greatly differing operating systems were used as target machines. A DEC-20 computer running the TOPS-20 operating system and using a Pascal Compiler was utilized for initial code development. The remaining programmers used a Motorola Corporation 68000-based Forward Technology FT-3000 supermicrocomputer running the UNIX-based XENIX operating system and using the Silicon Valley Software Pascal compiler and the XENIX C compiler for their initial code development.

Angelici, G.; Slye, R.; Ozga, M.; Ritter, P.

1986-01-01

237

Development of the SOFIA Image Processing Tool  

NASA Technical Reports Server (NTRS)

The Stratospheric Observatory for Infrared Astronomy (SOFIA) is a Boeing 747SP carrying a 2.5 meter infrared telescope capable of operating at altitudes of between twelve and fourteen kilometers, which is above more than 99 percent of the water vapor in the atmosphere. The ability to make observations above most of the water vapor, coupled with the ability to make observations from anywhere, anytime, makes SOFIA one of the world's premier infrared observatories. SOFIA uses three visible-light CCD imagers to assist in pointing the telescope. The data from these imagers are stored in archive files, as is housekeeping data, which contains information such as boresight and area-of-interest locations. A tool that can both extract and process data from the archive files was developed.

Adams, Alexander N.

2011-01-01

238

Gasoline-ethanol (Gasohol) fuel blend spray characterization using digital imaging and image processing  

Microsoft Academic Search

This paper characterizes the spray of gasoline-ethanol blends using a schlieren imaging technique and image processing. Five gasoline-ethanol fuel blends with different blend rates by volume are prepared. The image of the spray is captured using the schlieren imaging technique, and image processing is employed to extract macro spray characteristics: spray tip penetration and spray cone angle. Based on the extracted tip
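
A hedged sketch of how spray tip penetration might be extracted from one binarized schlieren frame; the threshold, nozzle location, and spatial calibration are placeholders rather than values from the paper:

```python
import numpy as np

def tip_penetration(frame, nozzle_rc, mm_per_pixel, threshold=0.5):
    """Distance from the nozzle to the farthest spray pixel in one frame.

    frame        : 2-D grayscale schlieren image (spray assumed brighter than background
                   after preprocessing)
    nozzle_rc    : (row, col) of the nozzle exit in pixel coordinates
    mm_per_pixel : spatial calibration factor
    """
    spray = frame > threshold                     # crude segmentation
    rows, cols = np.nonzero(spray)
    if rows.size == 0:
        return 0.0
    d = np.hypot(rows - nozzle_rc[0], cols - nozzle_rc[1])
    return d.max() * mm_per_pixel
```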

Emishaw D. Iffa; A. K. Amirruddin; S. Shaharin Anwar; A. R. A. Aziz

2009-01-01

239

High speed processing of biomedical images using programmable gpu  

Microsoft Academic Search

In this paper, we report our research results on the high-speed processing of large biomedical images. Biomedical images usually contain bioorganisms of various shapes. To accurately quantify these objects, shape-independent image processing techniques are needed. One such technique is the level set (LS) method. However, its application to large images is constrained by the extremely high computational

Jin Young Hong; May D. Wang

2004-01-01

240

Teaching the Writing Process through Digital Storytelling in Pre-service Education  

E-print Network

includes self-assessment and pushes the writer to deeper self-knowledge (Davis & Waggert 2006). Lehr (1995) says that the heart of the writing process is reflection and revision. Through the process of writing, reflecting, and revising, the writer... with idealized stories about teaching that do not match the realities that they encounter in the classroom. Digital storytelling provided an effective means to capture real classroom experiences that led student teachers to reflect upon experience and revise...

Green, Martha Robison

2012-07-16

241

A New Image Processing and GIS Package  

NASA Technical Reports Server (NTRS)

The image processing and GIS package ELAS was developed during the 1980s by NASA. It proved to be a popular, influential, and powerful tool for the manipulation of digital imagery. Before the advent of PCs it was used by hundreds of institutions, mostly schools. It is the unquestioned, direct progenitor of two commercial GIS remote sensing packages, ERDAS and MapX, and influenced others, such as PCI. Its power was demonstrated by its use for work far beyond its original purpose, having been applied to several different types of medical imagery, photomicrographs of rock, images of turtle flippers, and numerous other esoteric imagery. Although development largely stopped in the early 1990s, the package still offers as much or more power and flexibility than any other roughly comparable package, public or commercial. It is a huge body of code, representing more than a decade of work by full-time, professional programmers. The current versions all have several deficiencies compared to current software standards and usage, notably a strictly command line interface. In order to support their research needs, the authors are in the process of fundamentally changing ELAS, and in the process greatly increasing its power, utility, and ease of use. The new software is called ELAS II. This paper discusses the design of ELAS II.

Rickman, D.; Luvall, J. C.; Cheng, T.

1998-01-01

242

Quality-Aware Images (IEEE Transactions on Image Processing, vol. 15, no. 6, June 2006)  

E-print Network

Introduces the concept of the quality-aware image, in which certain extracted features of the original (high-quality) image are embedded into the image data as invisible hidden messages. When a distorted version of such an image

Simoncelli, Eero

243

Quality-Aware Images (IEEE Transactions on Image Processing, preprint)  

E-print Network

Introduces the concept of the quality-aware image, in which certain extracted features of the original (high-quality) image are embedded into the image data as invisible hidden messages. When a distorted version of such an image is received, users

Wang, Zhou

244

IMAGESERVER: A SYSTEM FOR A DISTRIBUTED IMAGE PROCESSING APPLICATION BASED ON JAVA ADVANCED IMAGING  

Microsoft Academic Search

Recently, imaging has become an increasingly important subject in many fields, as well as an important area of network computing since the release of Java Advanced Imaging (JAI) by Sun Microsystems. An ImageServer was developed using Java RMI and JAI technology. This ImageServer provides common functions for image processing which are very convenient for distributed imaging. In addition to the

Xinwen YU; Seishi Ninomiya; Matthew Laurenson; Zuorui Shen

2003-01-01

245

Creative Process Mentoring: Teaching the "Making" in Dance-Making  

ERIC Educational Resources Information Center

Within the Western fine arts tradition of concert and theatrical dance, new dances may be created in any number of ways. No matter how dance making begins, however, unless the work is to be improvised afresh each time it is performed, a process of developing, revising, and "setting" the work needs to take place. To move confidently and…

Lavender, Larry

2006-01-01

246

Teaching Information Literacy and Scientific Process Skills: An Integrated Approach.  

ERIC Educational Resources Information Center

Describes an online searching and scientific process component taught as part of the laboratory for a general zoology course. The activities were designed to be gradually more challenging, culminating in a student-developed final research project. Student evaluations were positive, and faculty indicated that student research skills transferred to…

Souchek, Russell; Meier, Marjorie

1997-01-01

247

Kagan Structures, Processing, and Excellence in College Teaching  

ERIC Educational Resources Information Center

Frequent student processing of lecture content (1) clears working memory, (2) increases long-term memory storage, (3) produces retrograde memory enhancement, (4) creates episodic memories, (5) increases alertness, and (6) activates many brain structures. These outcomes increase comprehension of and memory for content. Many professors now…

Kagan, Spencer

2014-01-01

248

Teaching Introductory Geology by a Paradigm, Process and Product Approach  

Microsoft Academic Search

Students in introductory geology courses can easily become lost in the minutiae of terms and seemingly random ideas and theories. One way to avoid this and provide a holistic picture of each major subject area in a beginning course is to introduce, at the start of each section, the ruling paradigm, the processes, and resultant products. By use of these

M. Reams

2008-01-01

249

Preparing Teachers to Teach Science: Learning Science as a Process.  

ERIC Educational Resources Information Center

Cites the lack of students' understanding and practicing of science processes as evidenced in science fair projects. Major contributors to the decline in science achievement are discussed. Author suggests teachers need experience with "sciencing" in the form of original investigative projects. Coursework designed to meet this goal is described.…

Cornell, Elizabeth A.

1985-01-01

250

The color image processing technology of the milk somatic cells  

Microsoft Academic Search

Image processing is widely applied in the fields of biology and medicine; however, the study of milk somatic cell image processing technology is currently a new topic. This paper first carries out pre-processing of the milk somatic cell images. Then, on the foundation of the selection of the color space and the analysis based on the

Yuedong Wang; Heru Xue

2010-01-01

251

Image processing coprocessor implementation for Xilinx XC6000 series FPGAs  

Microsoft Academic Search

This paper presents an Image Processing Coprocessor implementation for XC6000 series FPGAs. The FPGA acts as a semi-autonomous abstract coprocessor carrying out image processing operations independently. This paper outlines the main structure of the image processing coprocessor in addition to its high-level programming environment. The environment provides a library of very high level, parameterized architecture descriptions which are scalable

Khaled Benkrid; K. Alotaibi; Danny Crookes; A. Bouridane; A. Benkrid

1999-01-01

252

Multispectral image processing: the nature factor  

NASA Astrophysics Data System (ADS)

The images processed by our brain represent our window into the world. For some animals this window is derived from a single eye; for others, including humans, two eyes provide stereo imagery; for others, like the black widow spider, several eyes are used (8 eyes); and some insects, like the common housefly, utilize thousands of eyes (ommatidia). Still other animals, like the bat and dolphin, have eyes for regular vision but employ acoustic sonar vision for seeing where their regular eyes don't work, such as in pitch-black caves or turbid water. Of course, other animals have adapted to dark environments by bringing along their own lighting, such as the firefly and several creatures from the depths of the ocean. Animal vision is truly varied and has developed over millennia in many remarkable ways. We have learned a lot about vision processes by studying these animal systems and can still learn even more.

Watkins, Wendell R.

1998-09-01

253

Imaging fault zones using 3D seismic image processing techniques  

NASA Astrophysics Data System (ADS)

Significant advances in the structural analysis of deep-water structures, salt tectonics and extensional rift basins come from descriptions of fault system geometries imaged in 3D seismic data. However, even where seismic data are excellent, in most cases the trajectory of thrust faults is highly conjectural, and significant uncertainty still exists as to the patterns of deformation that develop between the main fault segments, and even as to the fault architectures themselves. Moreover, structural interpretations that conventionally define faults by breaks and apparent offsets of seismic reflectors are commonly conditioned by a narrow range of theoretical models of fault behavior. For example, almost all interpretations of thrust geometries on seismic data rely on theoretical "end-member" behaviors in which concepts such as strain localization or multilayer mechanics are simply avoided. Yet analogue outcrop studies confirm that such descriptions are commonly unsatisfactory and incomplete. In order to fill these gaps and improve the 3D visualization of deformation in the subsurface, seismic attribute methods are developed here in conjunction with conventional mapping of reflector amplitudes (Marfurt & Chopra, 2007). These signal processing techniques, recently developed and applied especially by the oil industry, use variations in the amplitude and phase of the seismic wavelet. These seismic attributes improve the signal interpretation and are calculated and applied to the entire 3D seismic dataset. In this contribution we will show 3D seismic examples of fault structures from gravity-driven deep-water thrust structures and extensional basin systems to indicate how 3D seismic image processing methods can not only improve the geometrical interpretation of the faults but also begin to map both strain and damage through the amplitude/phase properties of the seismic signal. This is done by quantifying and delineating the short-range anomalies in the intensity of reflector amplitudes and collecting these into "disturbance geobodies". These seismic image processing methods represent a first efficient step toward the construction of a robust technique to investigate sub-seismic strain, mapping noisy deformed zones and displacement within subsurface geology (Dutzer et al., 2011; Iacopini et al., 2012). In all these cases, accurate fault interpretation is critical in applied geology to building a robust and reliable reservoir model, and is essential for further study of fault seal behavior and reservoir compartmentalization. They are also fundamental for understanding how deformation localizes within sedimentary basins, including the processes associated with active seismogenic faults and mega-thrust systems in subduction zones. Dutzer, J.F., Basford, H., Purves, S., 2009. Investigating fault sealing potential through fault relative seismic volume analysis. Petroleum Geology Conference Series 2010, 7, 509-515; doi:10.1144/0070509. Marfurt, K.J., Chopra, S., 2007. Seismic Attributes for Prospect Identification and Reservoir Characterization. SEG Geophysical Developments. Iacopini, D., Butler, R.W.H., Purves, S., 2012. Seismic imaging of thrust faults and structural damage: a visualization workflow for deepwater thrust belts. First Break, 30(5), 39-46.
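
For a flavor of what an amplitude/phase attribute is, the sketch below computes the instantaneous amplitude (envelope) of each trace from the analytic signal; this is a generic attribute, not the specific workflow described by the authors:

```python
import numpy as np
from scipy.signal import hilbert

def envelope_attribute(volume, time_axis=-1):
    """Instantaneous amplitude of a 3-D seismic volume (inline, xline, time)."""
    analytic = hilbert(volume, axis=time_axis)   # trace-wise analytic signal
    return np.abs(analytic)                      # reflection strength / envelope
```

The instantaneous phase, the other half of the amplitude/phase pair, would simply be np.angle(analytic).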

Iacopini, David; Butler, Rob; Purves, Steve

2013-04-01

254

Teaching undergraduates the process of peer review: learning by doing  

NSDL National Science Digital Library

An active approach allowed undergraduates in Health Sciences to learn the dynamics of peer review at first hand. A four-stage process was used. In stage 1, students formed self-selected groups to explore specific issues. In stage 2, each group posted their interim reports online on a specific date. Each student read all the other reports and prepared detailed critiques. In stage 3, each report was discussed at sessions where the lead discussant was selected at random. All students participated in the peer review process. The written critiques were collated and returned to each group, who were asked to resubmit their revised reports within 2 wk. In stage 4, final submissions accompanied by rebuttals were graded. Student responses to a questionnaire were highly positive. They recognized the individual steps in the standard peer review, appreciated the complexities involved, and got a first-hand experience of some of the inherent variabilities involved. The absence of formal presentations and the opportunity to read each other's reports permitted them to study issues in greater depth.

2010-09-01

255

DKIST visible broadband imager data processing pipeline  

NASA Astrophysics Data System (ADS)

The Daniel K. Inouye Solar Telescope (DKIST) Data Handling System (DHS) provides the technical framework and building blocks for developing on-summit instrument quality assurance and data reduction pipelines. The DKIST Visible Broadband Imager (VBI) is a first light instrument that alone will create two data streams with a bandwidth of 960 MB/s each. The high data rate and data volume of the VBI require near-real time processing capability for quality assurance and data reduction, and will be performed on-summit using Graphics Processing Unit (GPU) technology. The VBI data processing pipeline (DPP) is the first designed and developed using the DKIST DHS components, and therefore provides insight into the strengths and weaknesses of the framework. In this paper we lay out the design of the VBI DPP, examine how the underlying DKIST DHS components are utilized, and discuss how integration of the DHS framework with GPUs was accomplished. We present our results of the VBI DPP alpha release implementation of the calibration, frame selection reduction, and quality assurance display processing nodes.

Beard, Andrew; Cowan, Bruce; Ferayorni, Andrew

2014-07-01

256

Teaching Introductory Geology by a Paradigm, Process and Product Approach  

NASA Astrophysics Data System (ADS)

Students in introductory geology courses can easily become lost in the minutiae of terms and seemingly random ideas and theories. One way to avoid this and provide a holistic picture of each major subject area in a beginning course is to introduce, at the start of each section, the ruling paradigm, the processes, and resultant products. By use of these three Ps: paradigm, processes, and products, students have a reasonably complete picture of the subject area. If they knew nothing more than this simple construct, they would have an excellent perspective of the subject area. This provides a jumping off point for the instructor to develop the details. The three Ps can make course construction much more straightforward and complete. Students benefit since they have a clearer idea of what the subject is about and its importance. Retention may be improved and carryover to advanced courses may be aided. For faculty, the use of these three P's makes organizing a course more straightforward. Additionally, the instructor benefits include: 1. The main points are clearly stated, thus avoiding the problem of not covering the essential concepts. 2. The course topics hold together, pedagogically. There is significant opportunity for continuity of thought. 3. An outline is developed that is easily analyzed for holes or omissions. 4. A course emerges with a balance of topics, permitting appropriate time to be devoted to significant subject matter. 5. If a course is shared between faculty or passes from one faculty to another by semester or quarter, there is greater assurance that topics and concepts everyone agrees on can be adequately covered. 6. There is less guesswork involved in planning a course. New faculty have an approach that will make sense and allow them to feel less awash and more focused. In summary, taking time to construct a course utilizing the important paradigms, processes, and products can provide significant benefits to the instructor and the student. Material can be presented in a more coherent manner and allow students the opportunity to grasp essential concepts from the very beginning. There are fewer potential surprises and greater likelihood that key ideas can be retained, as opposed to retaining isolated fragments of information. Illustrations from over a decade of use in an introductory Physical and Historical Geology course will be presented.

Reams, M.

2008-12-01

257

Image processing and products for the Magellan mission to Venus  

NASA Technical Reports Server (NTRS)

The Magellan mission to Venus is providing planetary scientists with massive amounts of new data about the surface geology of Venus. Digital image processing is an integral part of the ground data system that provides data products to the investigators. The mosaicking of synthetic aperture radar (SAR) image data from the spacecraft is being performed at JPL's Multimission Image Processing Laboratory (MIPL). MIPL hosts and supports the Image Data Processing Subsystem (IDPS), which was developed in a VAXcluster environment of hardware and software that includes optical disk jukeboxes and the TAE-VICAR (Transportable Applications Executive-Video Image Communication and Retrieval) system. The IDPS is being used by processing analysts of the Image Data Processing Team to produce the Magellan image data products. Various aspects of the image processing procedure are discussed.

Clark, Jerry; Alexander, Doug; Andres, Paul; Lewicki, Scott; Mcauley, Myche

1992-01-01

258

The Evolution of English Language Teaching during Societal Transition in Finland--A Mutual Relationship or a Distinctive Process?  

ERIC Educational Resources Information Center

This study describes the evolution of English language teaching in Finland and looks into the connections of the societal and educational changes in the country as explanatory factors in the process. The results of the study show that the language teaching methodology and the status of foreign languages in Finland are clearly connected to the…

Jaatinen, Riitta; Saarivirta, Toni

2014-01-01

259

Millikan Lecture 1994: Understanding and teaching important scientific thought processes  

NSDL National Science Digital Library

Physics is an intellectually demanding discipline, and many students have difficulties learning to deal with it. Further, our instruction is often far less effective than we realize. Indeed, recent investigations have revealed that many students, even when getting good grades, emerge from their basic physics courses with significant scientific misconceptions, with prescientific notions, with poor problem-solving skills, and with an inability to apply what they ostensibly learned. In short, students' acquired physics knowledge is often largely nominal rather than functional. This situation leads one to ask: Why is this so, and what can be done about it? More specifically, it has led me to address the following two basic questions: (a) Can one understand better the underlying thought processes required to deal with a science like physics? (b) How can such an understanding be used to design more effective instruction? These are the questions which have been the focus of my work during the last several years and which I want to discuss in this article.

Reif, Frederick

2011-07-28

260

A fusion method for visible and infrared images based on contrast pyramid with teaching learning based optimization  

NASA Astrophysics Data System (ADS)

This paper proposes a novel image fusion scheme based on the contrast pyramid (CP) with teaching-learning-based optimization (TLBO) for visible and infrared images of complicated scenes captured in different spectral bands. First, CP decomposition is applied to every level of each original image. Then, TLBO is introduced to optimize the fusion coefficients, which are updated during the teaching phase and learner phase of TLBO, so that the weighting coefficients are automatically adjusted according to a fitness function, namely an image quality evaluation measure. Finally, the fusion result is obtained by the inverse CP transformation. Compared with existing methods, experimental results show that our method is effective and that the fused images are more suitable for further human visual or machine perception.
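
A compact, generic TLBO loop with a teaching phase and a learner phase; the fitness function standing in for the paper's image-quality measure, and all parameter choices, are placeholders:

```python
import numpy as np

def tlbo(fitness, bounds, pop_size=20, iters=100, rng=None):
    """Minimize `fitness` over a box; one teaching phase and one learner phase per iteration."""
    rng = np.random.default_rng(rng)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    pop = rng.uniform(lo, hi, size=(pop_size, lo.size))
    cost = np.array([fitness(x) for x in pop])
    for _ in range(iters):
        # Teaching phase: pull the class toward the teacher (current best learner).
        teacher = pop[cost.argmin()]
        tf = rng.integers(1, 3)                       # teaching factor, 1 or 2
        trial = np.clip(pop + rng.random(pop.shape) * (teacher - tf * pop.mean(axis=0)), lo, hi)
        tcost = np.array([fitness(x) for x in trial])
        better = tcost < cost
        pop[better], cost[better] = trial[better], tcost[better]
        # Learner phase: each learner moves relative to a randomly chosen classmate.
        partners = rng.permutation(pop_size)
        direction = np.where((cost < cost[partners])[:, None],
                             pop - pop[partners], pop[partners] - pop)
        trial = np.clip(pop + rng.random(pop.shape) * direction, lo, hi)
        tcost = np.array([fitness(x) for x in trial])
        better = tcost < cost
        pop[better], cost[better] = trial[better], tcost[better]
    return pop[cost.argmin()], cost.min()
```

In a fusion setting, the decision vector would hold the per-level weighting coefficients and the fitness would be the chosen image-quality metric evaluated on the fused result.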

Jin, Haiyan; Wang, Yanyan

2014-05-01

261

Spot restoration for GPR image post-processing  

DOEpatents

A method and system for detecting the presence of subsurface objects within a medium is provided. In some embodiments, the imaging and detection system operates in a multistatic mode to collect radar return signals generated by an array of transceiver antenna pairs that is positioned across the surface and that travels down the surface. The imaging and detection system pre-processes the return signal to suppress certain undesirable effects. The imaging and detection system then generates synthetic aperture radar images from real aperture radar images generated from the pre-processed return signal. The imaging and detection system then post-processes the synthetic aperture radar images to improve detection of subsurface objects. The imaging and detection system identifies peaks in the energy levels of the post-processed image frame, which indicates the presence of a subsurface object.
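
A small sketch of the final peak-identification step on a post-processed energy image, using a local-maximum filter; the neighborhood size and detection threshold are illustrative assumptions, not values from the patent:

```python
import numpy as np
from scipy.ndimage import maximum_filter

def find_energy_peaks(energy, neighborhood=9, min_energy=None):
    """Return (row, col) coordinates of local maxima in a post-processed energy image."""
    if min_energy is None:
        min_energy = energy.mean() + 3 * energy.std()   # crude detection threshold
    local_max = energy == maximum_filter(energy, size=neighborhood)
    peaks = local_max & (energy > min_energy)
    return np.argwhere(peaks)
```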

Paglieroni, David W; Beer, N. Reginald

2014-05-20

262

Networks for image acquisition, processing and display  

NASA Technical Reports Server (NTRS)

The human visual system comprises layers of networks which sample, process, and code images. Understanding these networks is a valuable means of understanding human vision and of designing autonomous vision systems based on network processing. Ames Research Center has an ongoing program to develop computational models of such networks. The models predict human performance in detection of targets and in discrimination of displayed information. In addition, the models are artificial vision systems sharing properties with biological vision that has been tuned by evolution for high performance. Properties include variable density sampling, noise immunity, multi-resolution coding, and fault-tolerance. The research stresses analysis of noise in visual networks, including sampling, photon, and processing unit noises. Specific accomplishments include: models of sampling array growth with variable density and irregularity comparable to that of the retinal cone mosaic; noise models of networks with signal-dependent and independent noise; models of network connection development for preserving spatial registration and interpolation; multi-resolution encoding models based on hexagonal arrays (HOP transform); and mathematical procedures for simplifying analysis of large networks.

Ahumada, Albert J., Jr.

1990-01-01

263

On Anisotropic Diffusion in 3D image processing and image sequence analysis  

Microsoft Academic Search

A morphological multiscale method for 3D image and 3D image sequence processing is discussed which identifies edges on level sets and the motion of features in time. Based on these indicator evaluations, the image data are processed by applying nonlinear diffusion and the theory of geometric evolution problems. The aim is to smooth level sets of a 3D image while simultaneously
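
The paper's morphological multiscale scheme is not reproduced here; as a 2-D baseline for nonlinear diffusion, a classical Perona-Malik step with the textbook edge-stopping function (the parameters are illustrative):

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=20.0, dt=0.2):
    """Edge-preserving nonlinear diffusion (Perona & Malik, 1990), 2-D version."""
    def g(d):
        # Conductance: close to 1 in flat regions, small across strong edges.
        return np.exp(-(d / kappa) ** 2)
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # Differences to the four nearest neighbours (periodic wrap at the border, for brevity).
        dn = np.roll(u, 1, axis=0) - u
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```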

Karol Mikula; Martin Rumpf; Fiorella Sgallari

264

Automated construction of three dimensional image processing procedures by pictorial example with application to medical images  

Microsoft Academic Search

This paper presents an expert system for three-dimensional image processing. In analyzing three-dimensional gray-scale images, e.g. CT images, a process for extracting regions of interest from each image is frequently required. However, it is difficult to construct an extraction procedure with parameters optimized for each purpose. The proposed system can automatically construct a three-dimensional object extraction procedure based on

A. Shimizu; Xiang-Rong Zhou; J. Hasegawa; J. Toriwaki

1996-01-01

265

Image Segmentation and Labeling (IEEE Transactions on Image Processing, vol. 8, no. 9, September 1999)  

E-print Network

Describes a process that yields a segmentation of the image into homogeneous regions, implemented using models. From the introduction: image segmentation is a fundamental problem in computer vision which has been

Alajaji, Fady

266

A proposal for an educational system service to support teaching\\/learning process for logic programming  

Microsoft Academic Search

Information and Communication Technology (ICT) has been successfully used to transform both partially or totally face-to-face and distance learning education [1]. Educational software systems aid the teaching/learning process, promoting the development of many Virtual Learning Environments (VLEs) and improving the assimilation of the content presented in class. Using mechanisms such as Hardware as a Service (HaaS) and

Eric R. G. Dantas; Ryan R. de Azevedo; Cleyton M. O. Rodrigues; Silas C. Almeida; Fred Freitas; Vinicius C. Garcia

2011-01-01

267

Methods for processing and imaging marsh foraminifera  

USGS Publications Warehouse

This study is part of a larger U.S. Geological Survey (USGS) project to characterize the physical conditions of wetlands in southwestern Louisiana. Within these wetlands, groups of benthic foraminifera (shelled amoeboid protists living near or on the sea floor) can be used as agents to measure land subsidence, relative sea-level rise, and storm impact. In the Mississippi River Delta region, intertidal-marsh foraminiferal assemblages and biofacies were established in studies that pre-date the 1970s, with a very limited number of more recent studies. This fact sheet outlines this project's improved methods, handling, and modified preparations for the use of Scanning Electron Microscope (SEM) imaging of these foraminifera. The objective is to identify marsh foraminifera to the taxonomic species level by using improved processing methods and SEM imaging for morphological characterization in order to evaluate changes in distribution and frequency relative to other environmental variables. The majority of benthic marsh foraminifera consists of agglutinated forms, which can be more delicate than porcelaneous forms. Agglutinated tests (shells) are made of particles such as sand grains or silt and clay material, whereas porcelaneous tests consist of calcite.

Dreher, Chandra A.; Flocks, James G.

2011-01-01

268

Image processing and recognition using diffractive and digital techniques  

NASA Astrophysics Data System (ADS)

Image processing and recognition methods are useful in many fields. Depending on the situation, different techniques are used. For many years, methods based on the optical Fourier transformation were very popular, and image recognition was generally performed using optical correlators. Correlation techniques were developed extensively, especially for military applications, but in many cases (industrial, biological and biomedical applications) these techniques suffer from a number of limitations. For these reasons, methods based on the extraction and statistical processing of image features are more useful. A set of features can be extracted directly from an image (features based on image morphology, image moments, etc.) or from image transforms (Fourier, Radon, Hough, Sine, Cosine, etc.). The Fourier transformation is one of the most important in image processing. It can be performed simply by using an optical diffractometer, and it allows one to build image descriptors that are invariant to image translation and, after further processing, to image rotation. Diffractometers are very convenient in industrial and medical applications. Digital image processing and recognition were developed largely on powerful workstations; however, these procedures can also be implemented on PCs with DSP microprocessor cards, or in situations where the digital transforms used for image processing can be implemented simply and do not consume a lot of time. An example of biomedical image recognition performed optically, using a diffractometer, and in a digital system with a CCD camera is described here.
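
A minimal digital counterpart of the diffractometer output: the Fourier magnitude spectrum is invariant to image translation, and a radial average of it gives a rotation-insensitive feature vector (the ring binning is an illustrative choice, not the descriptor used in the paper):

```python
import numpy as np

def fourier_descriptor(img, n_rings=32):
    """Translation-invariant descriptor: radially averaged Fourier magnitude spectrum."""
    mag = np.abs(np.fft.fftshift(np.fft.fft2(img)))     # phase (and hence translation) discarded
    rows, cols = np.indices(mag.shape)
    cy, cx = (np.array(mag.shape) - 1) / 2.0
    radius = np.hypot(rows - cy, cols - cx)
    rings = np.linspace(0, radius.max(), n_rings + 1)
    idx = np.clip(np.digitize(radius.ravel(), rings) - 1, 0, n_rings - 1)
    # Mean magnitude within each ring -> rotation-insensitive feature vector.
    sums = np.bincount(idx, weights=mag.ravel(), minlength=n_rings)
    counts = np.maximum(np.bincount(idx, minlength=n_rings), 1)
    return sums / counts
```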

Galas, Jacek

1994-10-01

269

Simplified image processing system for softcopy presentation  

E-print Network

The recoverable excerpt consists only of figure and table residue from the scanned thesis (a suggested filter realization diagram, a list of figure numbers, and a processed plant image figure).

Corleto-Mena, Jose Gilberto

2012-06-07

270

Corn plant locating by image processing  

NASA Astrophysics Data System (ADS)

Investigating the feasibility of using machine vision technology to locate corn plants is an important issue for field production automation in the agricultural industry. This paper presents an approach developed to locate the center of a corn plant using image processing techniques. Corn plants were first identified using a main-vein detection algorithm, which detects a local feature of corn leaves (the leaf main veins) based on the spectral difference between the main veins and the leaves. The center of the plant could then be located using a center-locating algorithm, which traces and extends each detected vein line and estimates the center of the plant from the intersection points of those lines. The experimental results show the usefulness of the algorithm for machine vision applications related to corn plant identification. Such a technique can be used for precise spraying of pesticides or biotech chemicals.
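
A toy sketch of the center-locating idea: each detected vein is represented as a line, and the plant center is estimated from the pairwise intersection points (the line representation and the simple averaging are assumptions for illustration):

```python
import numpy as np
from itertools import combinations

def plant_center(lines):
    """Estimate a plant center from vein lines given as (a, b, c) with a*x + b*y = c."""
    points = []
    for (a1, b1, c1), (a2, b2, c2) in combinations(lines, 2):
        A = np.array([[a1, b1], [a2, b2]], float)
        if abs(np.linalg.det(A)) < 1e-9:          # skip (near-)parallel vein pairs
            continue
        points.append(np.linalg.solve(A, [c1, c2]))
    return np.mean(points, axis=0) if points else None
```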

Jia, Jiancheng; Krutz, Gary W.; Gibson, Harry W.

1991-02-01

271

Image processing methods for identifying species of plants  

Microsoft Academic Search

More selective methods for applying agricultural herbicides to fields can result in substantial cost savings. Three image processing methods were tested for their ability to identify four different images of plant species. The first two images were distinct, and the other two were similar. The images are preprocessed by segmentation and spatial filtering using the Color Chromaticity Chart. The test results

Shulin Dave; Ken Runtz

1995-01-01

272

DSP real-time operating system coordinates image processing  

Microsoft Academic Search

Because of their high-speed arithmetic, interrupt processing, and I/O facilities, DSPs are a natural fit for applications that require image capture, control, and processing. In some cases, DSPs are used as controllers in conjunction with specialized imaging ASICs. In other applications, one or more DSPs are used to execute both control and core imaging functions like FFTs, filters, and correlations. The

M. Grosen

1996-01-01

273

Application of near-infrared image processing in agricultural engineering  

Microsoft Academic Search

Recently, with the development of computer technology, the application field of near-infrared image processing has become much wider. In this paper, the technical characteristics and development of modern NIR imaging and NIR spectroscopy analysis are introduced, and the application and study of NIR image processing techniques in agricultural engineering in recent years are summarized, based on the application principles and developing

Ming-Hong Chen; Guo-Ping Zhang; Hongxing Xia

2009-01-01

274

75 FR 38118 - In the Matter of Certain Electronic Devices With Image Processing Systems, Components Thereof...  

Federal Register 2010, 2011, 2012, 2013, 2014

...Certain Electronic Devices With Image Processing Systems, Components Thereof...certain electronic devices with image processing systems, components thereof...certain electronic devices with image processing systems, components...

2010-07-01

275

Image processing for indexing of cineangiograms  

NASA Astrophysics Data System (ADS)

Video is the primary medium used for cardiology diagnosis. Video data is currently stored on either cinefilm or videotape and provides no means of integration with other information such as the patient's history, x-ray images, and test results. Maintaining and searching this data is also difficult. A multimedia database system would significantly enhance the diagnosis process by facilitating integration, maintenance, and searching. One of the challenges in designing such a database is indexing the sequential video streams. The required storage space and the retrieval time for one specific video frame are two crucial issues in the design of a video database system. This paper presents two novel and fast image processing techniques for use in the selection of video frames. These selected frames can then be stored in the database. The first type of analysis is based on spatial transitions and the second is based on the spectral components in the discrete cosine domain. The proposed techniques extract the total length of the edges in every frame of the cineangiogram. The extracted parameter is directly related to the amount of radio-opaque dye injected into the coronary arteries. Those frames in the cineangiogram with a significant level of dye are then identified and indexed for evaluation. The extracted parameter index is used to select these frames for physician review. The same techniques are applied to the centered window of every frame to extract the cardiac cycles in the cineangiogram. This information can be used to index the stored cineangiogram for fast retrieval. It can also be used to synchronize two cineangiograms playing at the same time for clinical comparison and evaluation. Both techniques have been applied to twenty-one sets of cineangiograms. The results are very consistent with physician observation. Since both techniques produce comparable results, the selection between them depends on the format of the video. The second technique is effective when the compression standard JPEG or MPEG is used to encode the cineangiogram, while the first one can be used for uncompressed or losslessly compressed cineangiograms.
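
A minimal sketch of the first technique's frame index, measuring total edge content per frame with a gradient operator; the Sobel operator and the fixed threshold are stand-ins for the paper's specific spatial-transition measure:

```python
import numpy as np
from scipy.ndimage import sobel

def edge_index(frames, threshold=50.0):
    """frames: (n_frames, rows, cols) grayscale cine sequence.
    Returns one edge-content value per frame; peaks mark well-opacified frames."""
    index = np.empty(len(frames))
    for i, f in enumerate(frames):
        f = f.astype(float)
        g = np.hypot(sobel(f, axis=0), sobel(f, axis=1))   # gradient magnitude
        index[i] = np.count_nonzero(g > threshold)          # total length of strong edges
    return index
```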

Liu, Hain-Ching H.; Zick, Gregory L.; Sheehan, Florence H.

1995-05-01
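The record above describes indexing cineangiogram frames by the total edge content per frame, which rises with the amount of radio-opaque dye. A minimal Python sketch of that first (spatial-transition) analysis follows; the function name, the Sobel-based edge measure, and the 0.2 threshold are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
from scipy import ndimage

def edge_length_index(frames, threshold=0.2):
    """Per-frame edge content for a cineangiogram sequence.

    frames: iterable of 2-D grayscale arrays (0-255). The Sobel gradient
    magnitude is thresholded and summed, giving a value roughly
    proportional to the total edge length in each frame.
    """
    index = []
    for frame in frames:
        f = frame.astype(float) / 255.0
        gx = ndimage.sobel(f, axis=1)
        gy = ndimage.sobel(f, axis=0)
        mag = np.hypot(gx, gy)
        index.append(float((mag > threshold).sum()))
    return np.asarray(index)

# Frames whose edge content is, say, above 80% of the maximum would be
# the candidates containing a significant level of dye:
# selected = np.flatnonzero(idx > 0.8 * idx.max())
```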

276

Viewpoints on Medical Image Processing: From Science to Application  

PubMed Central

Medical image processing provides core innovation for medical imaging. This paper focuses on recent developments from science to application, analyzing the past fifteen years of the proceedings of the German annual meeting on medical image processing (BVM). Furthermore, some members of the program committee present their personal points of view: (i) multi-modality for imaging and diagnosis, (ii) analysis of diffusion-weighted imaging, (iii) model-based image analysis, (iv) registration of section images, (v) from images to information in digital endoscopy, and (vi) virtual reality and robotics. Medical imaging and medical image computing are seen as a field of rapid development with clear trends toward integrated applications in diagnostics, treatment planning and treatment. PMID:24078804

Deserno (né Lehmann), Thomas M.; Handels, Heinz; Maier-Hein (né Fritzsche), Klaus H.; Mersmann, Sven; Palm, Christoph; Tolxdorff, Thomas; Wagenknecht, Gudrun; Wittenberg, Thomas

2013-01-01

277

Diagnostic image processing of remote operating seals for aerospace application  

NASA Astrophysics Data System (ADS)

This paper discusses the use of image processing techniques for monitoring the performance of a sealing ring for an aircraft actuator. Methods of processing the images of a deforming seal ring and the mathematical relations between the computer acquired Cartesian images to the axisymmetric ring geometry are presented. Some of the features which allow the accurate interpretation of the images are discussed. The results obtained show the deformation of a seal ring under a hydrodynamic environment that exists in the actuator.

Nwagboso, C. O.

1991-05-01

278

DTV color and image processing: past, present, and future  

NASA Astrophysics Data System (ADS)

The image processor in digital TV has started to play an important role due to the customers' growing desire for higher image quality. The customers want more vivid and natural images without any visual artifacts. Image processing techniques aim to meet customers' needs in spite of the physical limitations of the panel. In this paper, developments in image processing techniques for DTV in conjunction with developments in display technologies at Samsung R&D are reviewed. The introduced algorithms cover techniques required to solve the problems caused by the characteristics of the panel itself and techniques for enhancing the image quality of input signals optimized for the panel and human visual characteristics.

Kim, Chang-Yeong; Lee, SeongDeok; Park, Du-Sik; Kwak, Youngshin

2006-01-01

279

Thermal-imaging of foods in heating process  

Microsoft Academic Search

The thermal imaging technique is useful for detecting the 2-dimensional spatial distribution and temporal variation of temperature. Thermal images of foods during the cooking process and/or pulse heating process are observed for evaluating the quality of foods in the volume fabrications. An inhomogeneous temperature distribution of a pancake can be observed during the baking process. The water concentration of flour can be measured from…

Chieko Nakayama; Yu Ikegaya; Toru Katsumata; Hiroaki Aizawa; Mitsuo Honda; Masayuki Shibasaki; Koichi Otsubo; Shuji Komuro

2008-01-01

280

Spatial versus temporal stability issues in image processing neuro chips  

Microsoft Academic Search

A typical image processing neuro chip consists of a regular array of very simple cell circuits. When it is implemented by a CMOS process, two stability issues naturally arise. First, parasitic capacitors of MOS transistors induce temporal dynamics. Since a processed image is given as the stable limit point of the temporal dynamics, a temporally unstable chip is unusable. Second, …

Takashi Matsumoto; Haruo Kobayashi; Yoshio Togawa

1992-01-01

281

The Study of Image Processing Method for AIDS PA Test  

NASA Astrophysics Data System (ADS)

At present, the main AIDS test technique in China is the PA test. Because the judgment of PA test images still depends on the operator, the error ratio is high. To resolve this problem, we present a new image processing technique, which first processes many samples to obtain data including the coordinates of the centers and the ranges of the image classes; the image is then segmented with these data; finally, the result is exported after the data are judged. This technique is simple and accurate, and it also turns out to be suitable for the processing and analysis of PA test images of other infectious diseases.

Zhang, H. J.; Wang, Q. G.

2006-10-01

282

Image-Processing Software For A Hypercube Computer  

NASA Technical Reports Server (NTRS)

Concurrent Image Processing Executive (CIPE) is software system intended to develop and use image-processing application programs on concurrent computing environment. Designed to shield programmer from complexities of concurrent-system architecture, it provides interactive image-processing environment for end user. CIPE utilizes architectural characteristics of particular concurrent system to maximize efficiency while preserving architectural independence from user and programmer. CIPE runs on Mark-IIIfp 8-node hypercube computer and associated SUN-4 host computer.

Lee, Meemong; Mazer, Alan S.; Groom, Steven L.; Williams, Winifred I.

1992-01-01

283

Fractal Image Process Based Image Comparison Search Engine  

Microsoft Academic Search

A search engine allows users to quickly obtain information from networks. Traditional search engines can only search the data of modal characters. However, when users obtain an image from a newspaper or magazine, they want to retrieve related information about this type of image through a search engine, but traditional search engines are incapable of meeting their needs.

Kuang-fu Li; Tung-shou Chen; Kuei-hao Chen

2003-01-01

284

Medical image processing on the GPU - past, present and future.  

PubMed

Graphics processing units (GPUs) are used today in a wide range of applications, mainly because they can dramatically accelerate parallel computing, are affordable and energy efficient. In the field of medical imaging, GPUs are in some cases crucial for enabling practical use of computationally demanding algorithms. This review presents the past and present work on GPU accelerated medical image processing, and is meant to serve as an overview and introduction to existing GPU implementations. The review covers GPU acceleration of basic image processing operations (filtering, interpolation, histogram estimation and distance transforms), the most commonly used algorithms in medical imaging (image registration, image segmentation and image denoising) and algorithms that are specific to individual modalities (CT, PET, SPECT, MRI, fMRI, DTI, ultrasound, optical imaging and microscopy). The review ends by highlighting some future possibilities and challenges. PMID:23906631

Eklund, Anders; Dufort, Paul; Forsberg, Daniel; LaConte, Stephen M

2013-12-01

285

High Speed Terahertz Pulse Imaging in the Reflection Geometry and Image Quality Enhancement by Digital Image Processing  

NASA Astrophysics Data System (ADS)

We describe the formation and enhancement of two dimensional pulsed terahertz (THz) images obtained in the reflection geometry with a high-speed optical delay line. Two test objects are imaged and analyzed with respect to material information and concealed structure. Clear THz images were obtained with various imaging modes and were compared with the X-ray images. The THz image of a sample revealed material features that the X-ray image cannot distinguish. We could enhance the THz image quality using various image processing techniques, such as edge detection, de-noising, high-pass filtering, and wavelet filtering.

Shon, Chae-Hwa; Chong, Won-Yong; Jeon, Seok-Gy; Kim, Geun-Ju; Kim, Jung-Il; Jin, Yun-Sik

2008-01-01

286

Quantum Image Morphology Processing Based on Quantum Set Operation  

NASA Astrophysics Data System (ADS)

Set operation is the essential operation of mathematical morphology, but it is difficult to complete the set operation quickly on an electronic computer. Therefore, the efficiency of traditional morphology processing is very low. In this paper, by combining quantum computation with image processing through multiple quantum logic gates, and by combining quantum image storage, a quantum loading scheme and the Boyer search algorithm, a novel quantum image processing method is proposed: morphological image processing based on quantum set operations. The basic operations, such as erosion and dilation, are carried out for the images by using the quantum erosion algorithm and the quantum dilation algorithm. Because the parallel capability of quantum computation greatly improves the speed of the set operation, the image processing achieves higher efficiency. The runtime of our quantum algorithm is O(√(MN)). As a result, this method can produce better results.

Zhou, Ri-Gui; Chang, Zhi-bo; Fan, Ping; Li, Wei; Huan, Tian-tian

2014-11-01
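For reference, the classical set operations that the record above seeks to accelerate are binary erosion and dilation. The short Python sketch below shows only their conventional (non-quantum) form, with a toy image and a 3x3 structuring element chosen arbitrarily.

```python
import numpy as np
from scipy import ndimage

# Classical binary erosion and dilation -- the morphological set
# operations whose quantum counterparts the paper proposes. There is
# no quantum speed-up here; this is the conventional baseline.
image = np.zeros((9, 9), dtype=bool)
image[3:6, 2:7] = True                      # a small rectangular object
structure = np.ones((3, 3), dtype=bool)     # structuring element

eroded = ndimage.binary_erosion(image, structure)    # shrinks the object
dilated = ndimage.binary_dilation(image, structure)  # grows the object
```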

287

Abstract --Image segmentation plays an important role in medical image processing. The aim of conventional hard  

E-print Network

Image segmentation plays an important role in medical image processing. The tissue composition within each voxel, which we call a mixture, was considered in establishing an image segmentation method; the EM mixture segmentation methodology was tested on digital phantom MR and patient CT images with the partial volume (PV) effect.

288

Quantifying preferential flows in porous soils: An original imaging and image processing procedure  

Microsoft Academic Search

With the increasing accessibility of image acquisition techniques, image processing has become an essential component of soil science. Here, image processing is specifically used for the accurate prediction of solute transport through soils. To do so, dye tracers are leached into soil cores to visualize active flow paths in cross sections of soil. We have derived a cheap alternative, using…

P. Delmas; C. Duwig; J. Marquez; B. Prado

2010-01-01

289

An Image Processing Algorithm Based On FMAT  

NASA Technical Reports Server (NTRS)

Information deleted in ways minimizing adverse effects on reconstructed images. New grey-scale generalization of medial axis transformation (MAT), called FMAT (short for Fuzzy MAT) proposed. Formulated by making natural extension to fuzzy-set theory of all definitions and conditions (e.g., characteristic function of disk, subset condition of disk, and redundancy checking) used in defining MAT of crisp set. Does not need image to have any kind of priori segmentation, and allows medial axis (and skeleton) to be fuzzy subset of input image. Resulting FMAT (consisting of maximal fuzzy disks) capable of reconstructing exactly original image.

Wang, Lui; Pal, Sankar K.

1995-01-01

290

Coherent image processing using quaternion wavelets  

NASA Astrophysics Data System (ADS)

We develop a quaternion wavelet transform (QWT) as a new multiscale analysis tool for geometric image features. The QWT is a near shift-invariant, tight frame representation whose coefficients sport a magnitude and three phase values, two of which are directly proportional to local image shifts. The QWT can be efficiently computed using a dual-tree filter bank and is based on a 2-D Hilbert transform. We demonstrate how the QWT's magnitude and phase can be used to accurately analyze local geometric structure in images. We also develop a multiscale flow/motion estimation algorithm that computes a disparity flow map between two images with respect to local object motion.

Chan, Wai L.; Choi, Hyeokho; Baraniuk, Richard G.

2005-08-01

291

Experiments with recursive estimation in astronomical image processing  

NASA Technical Reports Server (NTRS)

Recursive estimation concepts have been applied to image enhancement problems since the 1970s. However, very few applications in the particular area of astronomical image processing are known. These concepts were derived, for 2-dimensional images, from the well-known theory of Kalman filtering in one dimension. The historic reasons for application of these techniques to digital images are related to the images' scanned nature, in which the temporal output of a scanner device can be processed on-line by techniques borrowed directly from 1-dimensional recursive signal analysis. However, recursive estimation has particular properties that make it attractive even in modern days, when big computer memories make the full scanned image available to the processor at any given time. One particularly important aspect is the ability of recursive techniques to deal with non-stationary phenomena, that is, phenomena whose statistical properties vary in time (or position in a 2-D image). Many image processing methods make underlying stationarity assumptions either for the stochastic field being imaged, for the imaging system properties, or both. They will underperform, or even fail, when applied to images that deviate significantly from stationarity. Recursive methods, on the contrary, make it feasible to perform adaptive processing, that is, to process the image by a processor with properties tuned to the image's local statistical properties. Recursive estimation can be used to build estimates of images degraded by such phenomena as noise and blur. We show examples of recursive adaptive processing of astronomical images, using several local statistical properties to drive the adaptive processor, such as average signal intensity, signal-to-noise ratio and the autocorrelation function. Software was developed under IRAF, and as such will be made available to interested users.

Busko, I.

1992-01-01
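As one concrete illustration of the "adaptive processing driven by local statistics" mentioned in the record above, the sketch below applies a Wiener/Lee-style locally adaptive smoother in Python. It is not the recursive (Kalman-type) estimator of the paper; the function name, window size, and noise-variance parameter are assumptions.

```python
import numpy as np
from scipy import ndimage

def adaptive_local_filter(img, noise_var, size=7):
    """Locally adaptive smoothing driven by local mean and variance.

    Pixels in flat regions (local variance close to the noise variance)
    are smoothed strongly; pixels in detailed regions are left mostly
    untouched. This only illustrates statistics-driven adaptivity,
    not the recursive estimator used in the paper.
    """
    img = img.astype(float)
    local_mean = ndimage.uniform_filter(img, size)
    local_sqr_mean = ndimage.uniform_filter(img * img, size)
    local_var = np.maximum(local_sqr_mean - local_mean ** 2, 1e-12)
    gain = np.clip((local_var - noise_var) / local_var, 0.0, 1.0)
    return local_mean + gain * (img - local_mean)
```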

292

SUSAN - A New Approach to Low Level Image Processing  

Microsoft Academic Search

This paper describes a new approach to low level image processing; in particular, edge and corner detection and structure preserving noise reduction. Non-linear filtering is used to define which parts of the image are closely related to each individual pixel; each pixel has associated with it a local image region which is of similar brightness to that pixel. The new…

Stephen M. Smith; J. Michael Brady; Stephen M. Smith

1997-01-01

293

Optimisation of natural images processing using different evolutionary algorithms  

Microsoft Academic Search

The development of image processing methods to discriminate between weed, crop and soil is an important step for precision agriculture, the main goal of which is the site-specific management of crops. The main challenge in terms of image analysis is to achieve an appropriate discrimination in outdoor field images under varying conditions of lighting, soil background texture and crop damage.

Xavier P. Burgos-artizzu; Angela Ribeiro; Alberto Tellaeche; Gonzalo Pajares

2008-01-01

294

Experiences with digital processing of images at INPE  

NASA Technical Reports Server (NTRS)

Four different research experiments with digital image processing at INPE will be described: (1) edge detection by hypothesis testing; (2) image interpolation by finite impulse response filters; (3) spatial feature extraction methods in multispectral classification; and (4) translational image registration by sequential tests of hypotheses.

Mascarenhas, N. D. A. (principal investigator)

1984-01-01

295

Stagewise processing in error-correcting codes and image restoration  

E-print Network

Stagewise processing is studied in error-correcting codes and image restoration, extracting information from the former stage and using it selectively to improve the performance of the latter. In error-correcting codes [1] and image restoration [2], the choice of the so-called hyperparameters…

Wong, Michael K Y

296

PHOTOGRAMMETRIC PROCESSING OF LOW ALTITUDE IMAGE SEQUENCES BY UNMANNED AIRSHIP  

Microsoft Academic Search

Low altitude aerial image sequences have the advantages of high overlap, multiple viewing angles and very high ground resolution. These kinds of images can be used in various applications that need high precision or fine texture. This paper mainly focuses on the photogrammetric processing of low altitude image sequences acquired by an unmanned airship, which automatically flies according to the predefined flight…

Yongjun Zhang

297

A differential geometry approach for biomedical image processing  

Microsoft Academic Search

We show in this paper how simple considerations about bio-array images lead to a peak segmentation allowing gene activity analysis. Bio-array images have a particular structure, and the aim of the paper is to present a mathematical method allowing their automatic processing. The differential geometry approach used here can also be employed for other types of images presenting grey…

Jacques Demongeot; Jean-Pierre Françoise; Mathieu Richard; Franck Senegas; Thierry-Pascal Baum

2002-01-01

298

Cardiovascular Imaging and Image Processing: Theory and Practice - 1975  

NASA Technical Reports Server (NTRS)

Ultrasonography was examined in regard to the developmental highlights and present applications of cardiac ultrasound. Doppler ultrasonic techniques and the technology of miniature acoustic element arrays were reported. X-ray angiography was discussed with special considerations on quantitative three-dimensional dynamic imaging of structure and function of the cardiopulmonary and circulatory systems in all regions of the body. Nuclear cardiography and scintigraphy, three-dimensional imaging of the myocardium with isotopes, and the commercialization of the echocardioscope were studied.

Harrison, Donald C. (editor); Sandler, Harold (editor); Miller, Harry A. (editor); Hood, Manley J. (Editor); Purser, Paul E. (Editor); Schmidt, Gene (Editor)

1975-01-01

299

A color image processing pipeline for digital microscope  

NASA Astrophysics Data System (ADS)

Digital microscopes have found wide application in fields such as biology and medicine. A digital microscope differs from a traditional optical microscope in that there is no need to observe the sample through an eyepiece directly, because the optical image is projected directly onto the CCD/CMOS camera. However, because of the imaging differences between the human eye and the sensor, a color image processing pipeline is needed for the digital microscope electronic eyepiece to obtain a fine image. The color image pipeline for the digital microscope, including the procedures that convert the RAW image data captured by the sensor into a real color image, is of great concern to the quality of the microscopic image. The color pipeline for the digital microscope is different from that of digital still cameras and video cameras because of the specific requirements of microscopic images, which should have high dynamic range, keep the same color as the objects observed, and support a variety of image post-processing. In this paper, a new color image processing pipeline is proposed to satisfy the requirements of digital microscope images. The algorithm of each step in the color image processing pipeline is designed and optimized with the purpose of getting high quality images and accommodating diverse user preferences. With the proposed pipeline implemented on the digital microscope platform, the output color images meet the various analysis requirements of images in the medicine and biology fields very well. The major steps of the proposed color imaging pipeline include: black level adjustment, defect pixel removal, noise reduction, linearization, white balance, RGB color correction, tone scale correction and gamma correction.

Liu, Yan; Liu, Peng; Zhuang, Zhefeng; Chen, Enguo; Yu, Feihong

2012-10-01
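A toy Python version of the kind of pipeline the record above enumerates (black-level adjustment, white balance, RGB color correction, gamma/tone encoding) is sketched below. Every numeric value, the matrix, and the function name are illustrative placeholders rather than calibrated parameters from the paper, and steps such as defect-pixel removal and noise reduction are omitted.

```python
import numpy as np

def simple_color_pipeline(raw_rgb, black_level=64.0, wb_gains=(1.9, 1.0, 1.6),
                          ccm=np.array([[ 1.6, -0.4, -0.2],
                                        [-0.3,  1.5, -0.2],
                                        [-0.1, -0.5,  1.6]]),
                          gamma=2.2):
    """Toy RGB pipeline: black-level subtraction, white balance,
    3x3 color correction, and gamma encoding.

    raw_rgb is an HxWx3 float array of demosaicked sensor data.
    """
    img = np.clip(raw_rgb - black_level, 0.0, None)   # black level
    img = img / max(img.max(), 1e-12)                 # normalize to [0, 1]
    img = img * np.asarray(wb_gains)                  # per-channel white balance
    img = np.clip(img @ ccm.T, 0.0, 1.0)              # color correction matrix
    return img ** (1.0 / gamma)                       # gamma (tone) encoding
```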

300

Process-Oriented Inquiry--A Constructivist Approach to Early Childhood Science Education: Teaching Teachers to Do Science  

ERIC Educational Resources Information Center

Process-oriented inquiry can help preservice and inservice early childhood teachers implement constructivist science education in their own classrooms. In this article, we discuss the basic elements of process-oriented inquiry applied to early childhood science education, show how we foster the development of process-oriented inquiry teaching

Martin, David Jerner; Jean-Sigur, Raynice; Schmidt, Emily

2005-01-01

301

Solar flare tracking using image processing techniques  

Microsoft Academic Search

Automatic property measurement of solar flares through their complete cyclic development is valuable in the studies of solar flares. From the analysis of solar Hα images, we are able to use the Support Vector Machine (SVM) to automatically detect flares, and apply image segmentation techniques to compute the properties of solar flares. We also present our solution for automatically tracking the…

Ming Qu; Frank Y. Shih; Ju Jing; Haimin Wang

2004-01-01

302

The Development of Sun-Tracking System Using Image Processing  

PubMed Central

This article presents the development of an image-based sun position sensor and the algorithm for how to aim at the Sun precisely by using image processing. Four-quadrant light sensors and bar-shadow photo sensors were used to detect the Sun's position in the past years. Nevertheless, neither of them can maintain high accuracy under low irradiation conditions. Using the image-based Sun position sensor with image processing can address this drawback. To verify the performance of the Sun-tracking system including an image-based Sun position sensor and a tracking controller with embedded image processing algorithm, we established a Sun image tracking platform and did the performance testing in the laboratory; the results show that the proposed Sun tracking system had the capability to overcome the problem of unstable tracking in cloudy weather and achieve a tracking accuracy of 0.04°. PMID:23615582

Lee, Cheng-Dar; Huang, Hong-Cheng; Yeh, Hong-Yih

2013-01-01
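The image-based Sun position sensor described above ultimately reduces to locating the Sun in a camera frame. A minimal sketch of one common way to do that, an intensity-weighted centroid of the brightest pixels, follows; the 0.9 threshold and the function name are assumptions, and the mapping from pixel offsets to tracker angles (which depends on the optics) is not shown.

```python
import numpy as np

def sun_centroid(gray):
    """Estimate the Sun's position as the intensity-weighted centroid
    of the brightest pixels in a grayscale camera frame.

    Returns (row, col) in pixel coordinates, or None if the frame
    contains no bright blob (e.g. heavy cloud cover).
    """
    gray = gray.astype(float)
    mask = gray > 0.9 * gray.max()          # keep only the brightest blob
    weights = gray * mask
    total = weights.sum()
    if total == 0:
        return None
    rows, cols = np.indices(gray.shape)
    return (rows * weights).sum() / total, (cols * weights).sum() / total
```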

303

Application of near-infrared image processing in agricultural engineering  

NASA Astrophysics Data System (ADS)

Recently, with the development of computer technology, the application field of near-infrared image processing has become much wider. In this paper the technical characteristics and development of modern NIR imaging and NIR spectroscopy analysis are introduced. The application and study of NIR image processing techniques in agricultural engineering in recent years are summarized, based on the application principles and development characteristics of near-infrared imaging. NIR imaging is very useful for nondestructive external and internal quality inspection of agricultural products. The application of near-infrared spectroscopy is important for detecting stored-grain insects. Computer vision detection based on NIR imaging can help to manage food logistics. The application of NIR imaging promotes quality management of agricultural products. Finally, some advice and prospects for further application research of NIR imaging in agricultural engineering are put forward.

Chen, Ming-hong; Zhang, Guo-ping; Xia, Hongxing

2009-07-01

304

Image process technique used in a large FOV compound eye imaging system  

NASA Astrophysics Data System (ADS)

Biological inspiration has produced some successful solutions for different imaging systems. Inspired by the compound eye of insects, this paper presents image processing techniques used in a spherical compound eye imaging system. By analyzing the relationship between a system with a large field of view (FOV) and each lens, an imaging system based on compound eyes has been designed, in which 37 lenses pointing in different directions are arranged on a spherical substrate. By studying the relationship between the lens positions and the corresponding image geometry to realize large-FOV detection, an image processing technique is proposed. To verify the technique, experiments are carried out on the designed compound eye imaging system. The results show that an image with a FOV over 166° can be acquired while keeping excellent image quality.

Cao, Axiu; Shi, Lifang; Shi, Ruiying; Deng, Qiling; Du, Chunlei

2012-11-01

305

Study on the improvement of overall optical image quality via digital image processing  

NASA Astrophysics Data System (ADS)

This paper studies the effects of improving overall optical image quality via Digital Image Processing (DIP) and compares the processed optical image with the non-processed optical image. From the standpoint of the optical system, image quality is strongly affected by chromatic and monochromatic aberrations. However, overall image capture systems - such as cellphones and digital cameras - include not only the basic optical system but also many other factors, such as the electronic circuit system, transducer system, and so forth, whose quality can directly affect the image quality of the whole picture. Therefore, in this work Digital Image Processing technology is utilized to improve the overall image. It is shown via experiments that the system modulation transfer function (MTF) obtained when the proposed DIP technology is applied to a comparatively poor optical system can be comparable to, and possibly even superior to, the system MTF derived from a good optical system.

Tsai, Cheng-Mu; Fang, Yi Chin; Lin, Yu Chin

2008-12-01

306

Image processing applications for customized mining and ore classification  

Microsoft Academic Search

During the mining operation, ore sorting and directing different grade ores to different processing circuits is a manual task in most working mines, but this work takes a step forward toward automation of this process. The radical development in the area of image and data processing allows speedy processing of full color digital images for the preferred investigations.

Vasudev Singh; Trilok Nath Singh; Veerendra Singh

307

IPL processing of the Viking orbiter images of Mars  

NASA Technical Reports Server (NTRS)

The Viking orbiter cameras returned over 9000 images of Mars during the 6-month nominal mission. Digital image processing was required to produce products suitable for quantitative and qualitative scientific interpretation. Processing included the production of surface elevation data using computer stereophotogrammetric techniques, crater classification based on geomorphological characteristics, and the generation of color products using multiple black-and-white images recorded through spectral filters. The Image Processing Laboratory of the Jet Propulsion Laboratory was responsible for the design, development, and application of the software required to produce these 'second-order' products.

Ruiz, R. M.; Elliott, D. A.; Yagi, G. M.; Pomphrey, R. B.; Power, M. A.; Farrell, W., Jr.; Lorre, J. J.; Benton, W. D.; Dewar, R. E.; Cullen, L. E.

1977-01-01

308

High resolution image processing on low-cost microcomputers  

NASA Technical Reports Server (NTRS)

Recent advances in microcomputer technology have resulted in systems that rival the speed, storage, and display capabilities of traditionally larger machines. Low-cost microcomputers can provide a powerful environment for image processing. A new software program which offers sophisticated image display and analysis on IBM-based systems is presented. Designed specifically for a microcomputer, this program provides a wide-range of functions normally found only on dedicated graphics systems, and therefore can provide most students, universities and research groups with an affordable computer platform for processing digital images. The processing of AVHRR images within this environment is presented as an example.

Miller, R. L.

1993-01-01

309

Image data processing of earth resources management. [technology transfer  

NASA Technical Reports Server (NTRS)

Various image processing and information extraction systems are described along with the design and operation of an interactive multispectral information system, IMAGE 100. Analyses of ERTS data, using IMAGE 100, over a number of U.S. sites are presented. The following analyses are included: (1) investigations of crop inventory and management using remote sensing; and (2) land cover classification for environmental impact assessments. Results show that useful information is provided by IMAGE 100 analyses of ERTS data in digital form.

Desio, A. W.

1974-01-01

310

Inverse Problems and Parameter Identification in Image Processing  

Microsoft Academic Search

Many problems in imaging are actually inverse problems. One reason for this is that conditions and parameters of the physical processes underlying the actual image acquisition are usually not known. Examples for this are the inhomogeneities of the magnetic field in magnetic resonance imaging (MRI) leading to nonlinear deformations of the anatomic structures in the recorded images, material parameters in…

Jens F. Acker; Benjamin Berkels; Kristian Bredies; Mamadou S. Diallo; Marc Droske; Christoph S. Garbe; Matthias Holschneider; Jaroslav Hron; Claudia Kondermann; Michail Kulesh; Peter Maass; Nadine Olischläger; Heinz-Otto Peitgen; Tobias Preusser; Martin Rumpf; Karl Schaller; Frank Scherbaum; Stefan Turek

311

Image Processing In Laser-Beam-Steering Subsystem  

NASA Technical Reports Server (NTRS)

Conceptual design of image-processing circuitry developed for proposed tracking apparatus described in "Beam-Steering Subsystem For Laser Communication" (NPO-19069). In proposed system, desired frame rate achieved by "windowed" readout scheme in which only pixels containing and surrounding two spots read out and others skipped without being read. Image data processed rapidly and efficiently to achieve high frequency response.

Lesh, James R.; Ansari, Homayoon; Chen, Chien-Chung; Russell, Donald W.

1996-01-01

312

Study of wood defects recognition based on Image Processing  

Microsoft Academic Search

Image processing of wood defects is important for wood defect recognition. Wood defects influence wood production quality. An X-ray testing system was adopted to detect wood defects, because it can detect not only apparent defects but also inner defects. The collected defect images were processed with median filtering and edge detection so that the position, size…

Hongbo Mu; Dawei Qi; Mingming Zhang

2010-01-01

313

The research of image processing algorithms in AGVS  

Microsoft Academic Search

Because of its flexibility and reliability, computer vision guidance is applied to automated guided vehicle systems on express highways. Taking landmarks as the guide-line symbol, and based on road video image acquisition and processing, the guide line can be identified and the input information for guidance control can be obtained. Since common image processing technology complicates actual application, algorithms are…

Jun-You Zhang; Shu-Feng Wang

2007-01-01

314

Fire detection in tunnels using an image processing method  

Microsoft Academic Search

Fires in road tunnels are potentially hazardous. In addition to the conventional fire detection methods, new detection methods using image processing technology with infrared or color cameras have been reported. This article points out the problems associated with conventional fire detectors, and reviews some of the new detection methods. In particular, a new image processing method which compares two histograms…

S. Noda; K. Ueda

1994-01-01

315

Encouraging undergraduate research: a digital image processing approach  

Microsoft Academic Search

In recent years, undergraduate research has gained significant attention in many computer science departments. There have been efforts to identify courses that could support undergraduate research activities. Some courses, such as image processing, provide excellent research opportunities to undergraduate students. However, in small and comprehensive schools there is little or no image processing research to support the course. This paper…

Rahman Tashakkori

2005-01-01

316

Application of electrical capacitance tomography for imaging industrial processes  

Microsoft Academic Search

Electrical tomography is, in certain cases, the most attractive method for real imaging of industrial processes, because of its inherent simplicity, rugged construction of the tomographer and high-speed capability. This paper presents examples illustrating applications of electrical tomography for imaging fluidized beds, bubble columns and pneumatic conveyors. Electrical tomography opens up new ways for processing, imaging and modelling multi-phase flows…

Dyakowski Tom

2005-01-01

317

Vector-valued image processing by parallel level sets.  

PubMed

Vector-valued images such as RGB color images or multimodal medical images show a strong interchannel correlation, which is not exploited by most image processing tools. We propose a new notion of treating vector-valued images which is based on the angle between the spatial gradients of their channels. Through minimizing a cost functional that penalizes large angles, images with parallel level sets can be obtained. After formally introducing this idea and the corresponding cost functionals, we discuss their Gâteaux derivatives that lead to a diffusion-like gradient descent scheme. We illustrate the properties of this cost functional by several examples in denoising and demosaicking of RGB color images. They show that parallel level sets are a suitable concept for color image enhancement. Demosaicking with parallel level sets gives visually perfect results for low noise levels. Furthermore, the proposed functional yields sharper images than the other approaches in comparison. PMID:23955746

Ehrhardt, Matthias Joachim; Arridge, Simon R

2014-01-01
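To make the "angle between the spatial gradients of the channels" idea in the record above concrete, the sketch below computes a simple discrete parallelism penalty between two channels in Python. It is only an illustration of the underlying quantity, not the paper's exact cost functional or its gradient-descent scheme.

```python
import numpy as np

def parallelism_penalty(u, v, eps=1e-8):
    """How far the level sets of two channels are from being parallel.

    Uses 1 - cos^2(theta) between the spatial gradients of channels u
    and v, which is zero when the gradients are parallel or
    anti-parallel and largest when they are orthogonal.
    """
    uy, ux = np.gradient(u.astype(float))
    vy, vx = np.gradient(v.astype(float))
    dot = ux * vx + uy * vy
    nu = ux ** 2 + uy ** 2
    nv = vx ** 2 + vy ** 2
    cos2 = dot ** 2 / (nu * nv + eps)
    return float(np.mean(1.0 - cos2))
```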

318

Learning to Teach: Enhancing Pre-Service Teachers' Awareness of the Complexity of Teaching-Learning Processes  

ERIC Educational Resources Information Center

Why is it so challenging to provide pre-service teachers with adequate competence to cope with the complexity of the classroom context? Three key difficulties are frequently reported as reducing the effectiveness of teacher education programs: the construction of an integrated body of knowledge about teaching, the application of theories to…

Eilam, Billie; Poyas, Yael

2009-01-01

319

Event-driven workflow management for medical image processing and analysis in a large image database  

Microsoft Academic Search

A workflow management system using a centralized database and distributed computing for medical image processing and analysis is presented. Such management systems are becoming increasingly relevant to assure processing quality as datasets grow larger and image processing pipelines become more complex. The implementation includes a web-form based Oracle database application for information management and event dispatching, and different modules for…

Lifeng Liu; Dominik Meier; Mariann Polgar-Turcsanyi; Pawel Karkocha; Rohit Bakshi; Charles R. G. Guttmann

320

A residual hybrid encoder for image processing  

E-print Network

In transform domain systems, the object is to map the image to another domain where the coefficients are less correlated; for efficient coding the transformation must be invertible… The remainder of this record is a table-of-contents fragment covering image compression techniques (spatial domain, transform domain, hybrid coders) and adaptation strategies (forward versus backward adaptation, quantizer adaptation).

Beck, John Andrew

1980-01-01

321

Optical Processing of Speckle Images with Bacteriorhodopsin for Pattern Recognition  

NASA Technical Reports Server (NTRS)

Logarithmic processing of images with multiplicative noise characteristics can be utilized to transform the image into one with an additive noise distribution. This simplifies subsequent image processing steps for applications such as image restoration or correlation for pattern recognition. One particularly common form of multiplicative noise is speckle, for which the logarithmic operation not only produces additive noise, but also makes it of constant variance (signal-independent). We examine the optical transmission properties of some bacteriorhodopsin films here and find them well suited to implement such a pointwise logarithmic transformation optically in a parallel fashion. We present experimental results of the optical conversion of speckle images into transformed images with additive, signal-independent noise statistics using the real-time photochromic properties of bacteriorhodopsin. We provide an example of improved correlation performance in terms of correlation peak signal-to-noise for such a transformed speckle image.

Downie, John D.; Tucker, Deanne (Technical Monitor)

1994-01-01
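The key point of the record above, that a logarithm turns multiplicative speckle into additive, approximately signal-independent noise, can be checked numerically. The snippet below simulates fully developed speckle with a unit-mean gamma distribution; the shape parameter and image values are arbitrary choices for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Multiplicative speckle model: observed = clean * noise, with noise mean 1.
clean = np.full((256, 256), 100.0)
speckle = rng.gamma(shape=4.0, scale=1.0 / 4.0, size=clean.shape)
observed = clean * speckle

# After the log transform the noise term becomes additive:
#   log(observed) = log(clean) + log(speckle)
# and its variance no longer depends on the clean signal level.
log_obs = np.log(observed)
print(np.var(np.log(speckle)))   # additive-noise variance, independent of `clean`
```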

322

Graphical user interface for image acquisition and processing  

DOEpatents

An event-driven GUI-based image acquisition interface for the IDL programming environment, designed for CCD camera control and image acquisition directly into the IDL environment where image manipulation and data analysis can be performed, together with a toolbox of real-time analysis applications. Running the image acquisition hardware directly from IDL removes the necessity of first saving images in one program and then importing the data into IDL for analysis in a second step. Bringing the data directly into IDL creates an opportunity for the implementation of IDL image processing and display functions in real-time. The program allows control over the available charge coupled device (CCD) detector parameters, data acquisition, file saving and loading, and image manipulation and processing, all from within IDL. The program is built using IDL's widget libraries to control the on-screen display and user interface.

Goldberg, Kenneth A. (Berkeley, CA)

2002-01-01

323

Approach to retina optical coherence tomography image processing  

NASA Astrophysics Data System (ADS)

Optical coherence tomography (OCT) is a recently developed imaging technology. By using the Zeiss STRATUS OCT one can obtain clear tomography pictures of the retina and macula lutea. The clinical use of image processing requires both medical knowledge and expertise in image processing techniques. This paper focuses on the processing of retina OCT images to design an automatic retina OCT image identification system that could help us evaluate the retina and examine and clinically diagnose fundus diseases. The motivation of our work is to extract the contour and highlight the feature area of the focus clearly and exactly. Generally this is related to image segmentation, enhancement, binarization, etc. In the paper we try to reduce the image noise and make the symbolic area connected through color segmentation, low-pass filtering, mathematical morphology algorithms, etc., and finally discern some common and different properties of the post-processed image compared with the real OCT image. Experiments were done on cystoid macular edema, macular hole and normal retina OCT images. The results show that the proposed approach is feasible and suitable for further image identification and classification according to ophthalmology criteria.

Yuan, Jiali; Liu, Ruihua; Xuan, Gao; Yang, Jun; Yuan, Libo

2007-03-01

324

Development of multispectral image processing algorithms for identification of wholesome, septicemic, and inflammatory process chickens  

Microsoft Academic Search

A multispectral imaging system and image processing algorithms for food safety inspection of poultry carcasses were demonstrated. Three key wavelengths of 460, 540, and 700 nm, previously identified using a visible/near-infrared spectrophotometer, were implemented in a common-aperture multispectral imaging system, and images were collected for 174 wholesome, 75 inflammatory process, and 170 septicemic chickens. Principal component analysis was used to develop…

Chun-Chieh Yang; Kuanglin Chao; Yud-Ren Chen

2005-01-01

325

Data processing of vibrational chemical imaging for pharmaceutical applications.  

PubMed

Vibrational spectroscopy (MIR, NIR and Raman) based hyperspectral imaging is one of the most powerful tools to analyze pharmaceutical preparations. Indeed, it combines the advantages of vibrational spectroscopy with imaging techniques and therefore allows the visualization of the distribution of compounds or of crystallization processes. However, these techniques provide a huge amount of data that must be processed to extract the relevant information. This review presents fundamental concepts of hyperspectral imaging and the basic theory of the most commonly used chemometric tools for pre-processing, processing and post-processing the generated data. The last part of the present paper focuses on pharmaceutical applications of hyperspectral imaging and highlights the data processing approaches, to enable the reader to make the best choice among the different tools available. PMID:24809748

Sacré, P-Y; De Bleye, C; Chavez, P-F; Netchacovitch, L; Hubert, Ph; Ziemons, E

2014-12-01

326

The real-time image processing technique based on DSP  

Microsoft Academic Search

This paper proposes a novel real-time image processing technique based on a digital signal processor (DSP). For the wavelet transform (WT) algorithm, the technique uses the second-generation wavelet transform (lifting-scheme WT), which has low computational complexity for 2-D image data processing. Since the processing effect of the lifting-scheme WT for 1-D data is better than the…

Qi Chang; Chen Yue-hua; Huang Tian-shu

2005-01-01

327

A model for simulation and processing of radar images  

NASA Technical Reports Server (NTRS)

A model for recording, processing, presentation, and analysis of radar images in digital form is presented. The observed image is represented as having two random components, one which models the variation due to the coherent addition of electromagnetic energy scattered from different objects in the illuminated areas. This component is referred to as fading. The other component is a representation of the terrain variation which can be described as the actual signal which the radar is attempting to measure. The combination of these two components provides a description of radar images as being the output of a linear space-variant filter operating on the product of the fading and terrain random processes. In addition, the model is applied to a digital image processing problem using the design and implementation of enhancement scene. Finally, parallel approaches are being employed as possible means of solving other processing problems such as SAR image map-matching, data compression, and pattern recognition.

Stiles, J. A.; Frost, V. S.; Shanmugam, K. S.; Holtzman, J. C.

1981-01-01

328

Three-dimensional interactive processor which processes 64 directional images  

NASA Astrophysics Data System (ADS)

We are developing a next-generation 3D display which offers natural 3D images by projecting a number of directional images. We have already demonstrated a prototype 3D display which generates 64 directional images simultaneously by using 64 LCD panels. In this study we developed a 3D interactive processor which can process 64 directional images simultaneously by using a PC cluster. The PC cluster consisted of 64 small PCs. An image synchronization mechanism was implemented to update the 64 directional images at the same time. We created the commands to control the system and the API used for programming. The processor can display moving 3D images at 30 fps. The 3D images can be interactively updated according to the 3D mouse. We also developed interpreter programs for DXF and VRML data so that the 3D interactive processor can handle widely used 3D data in the DXF and VRML formats.

Kudo, Takaaki; Takaki, Yasuhiro

2004-10-01

329

Embedded processor extensions for image processing  

NASA Astrophysics Data System (ADS)

The advent of camera phones marks a new phase in embedded camera sales. By late 2009, the total number of camera phones will exceed that of both conventional and digital cameras shipped since the invention of photography. Use in mobile phones of applications like visiophony, matrix code readers and biometrics requires a high degree of component flexibility that image processors (IPs) have not, to date, been able to provide. For all these reasons, programmable processor solutions have become essential. This paper presents several techniques geared to speeding up image processors. It demonstrates that a twofold gain is possible for the complete image acquisition chain and the enhancement pipeline downstream of the video sensor. Such results confirm the potential of these computing systems for supporting future applications.

Thevenin, Mathieu; Paindavoine, Michel; Letellier, Laurent; Heyrman, Barthélémy

2008-04-01

330

A European de facto standard for image folders applied to telepathology and teaching.  

PubMed

Since 1980, French pathologists at ADICAP (Association pour le Développement de l'Informatique en Cytologie et en Anatomie Pathologique) have created a common language code allowing the use of computers for routine applications. This code permitted the production of an associated exhaustive image bank of approximately 30,000 images. This task involved many specialists, necessitating the definition of specific processes for security and simplicity of data handling. In particular, it has been necessary to develop image communication. To achieve that goal, it was necessary to define a folder associating textual information with images. That was done through the contribution of several industrial software providers. Consequently, this folder, using a common packaging standard, allowed any pathologist access to images, codified data and clinical information. Accessing folders has been made easy by launching a Web server at CRIHAN under the supervision of ADICAP. ADICAP software users may not only browse through the folders but may also import them into their own systems and produce new folders. Today more than a hundred users in France and in foreign countries are able to provide diagnostic advice and also reference products useful for further education and quality control. The next challenge is the development of this preliminary de facto approach toward an internationally accepted standard suited for morphological image exchange. PMID:9600422

Klossa, J; Cordier, J C; Flandrin, G; Got, C; Hémet, J

1998-02-01

331

Computer vision applications for coronagraphic optical alignment and image processing  

E-print Network

Modern coronagraphic systems require very precise alignment between optical components and can benefit greatly from automated image processing. We discuss three techniques commonly employed in the fields of computer vision and image analysis as applied to the Gemini Planet Imager, a new facility instrument for the Gemini South Observatory. We describe how feature extraction and clustering methods can be used to aid in automated system alignment tasks, and also present a search algorithm for finding regular features in science images used for calibration and data processing. Along with discussions of each technique, we present our specific implementation and show results of each one in operation.

Savransky, Dmitry; Poyneer, Lisa A; Macintosh, Bruce A; 10.1364/AO.52.003394

2013-01-01

332

Data management in pattern recognition and image processing systems  

NASA Technical Reports Server (NTRS)

Data management considerations are important to any system which handles large volumes of data or where the manipulation of data is technically sophisticated. A particular problem is the introduction of image-formatted files into the mainstream of data processing application. This report describes a comprehensive system for the manipulation of image, tabular, and graphical data sets which involve conversions between the various data types. A key characteristic is the use of image processing technology to accomplish data management tasks. Because of this, the term 'image-based information system' has been adopted.

Zobrist, A. L.; Bryant, N. A.

1976-01-01

333

A Case Study Analysing the Process of Analogy-Based Learning in a Teaching Unit about Simple Electric Circuits  

ERIC Educational Resources Information Center

The purpose of this case study is to analyse the learning processes of a 16-year-old student as she learns about simple electric circuits in response to an analogy-based teaching sequence. Analogical thinking processes are modelled by a sequence of four steps according to Gentner's structure mapping theory (activate base domain, postulate local…

Paatz, Roland; Ryder, James; Schwedes, Hannelore; Scott, Philip

2004-01-01

334

Lossless Watermarking for Image Authentication

E-print Network

Index Terms--Forgery detection, invertible authentication, lossless compression, reversible data embedding. The paper presents lossless (invertible) authentication watermarking, which enables zero-distortion reconstruction of the un-watermarked images upon…

Sharma, Gaurav

335

Embedded Foveation Image Coding

E-print Network

The spatial resolution of the HVS is highest around the point of fixation (foveation point) and decreases rapidly with increasing eccentricity. In this paper, we propose an embedded foveation image coding (EFIC) algorithm, which orders the encoded…

Wang, Zhou

336

Learning and teaching about the nature of science through process skills  

NASA Astrophysics Data System (ADS)

This dissertation, a three-paper set, explored whether the process skills-based approach to nature of science instruction improves teachers' understandings, intentions to teach, and instructional practice related to the nature of science. The first paper examined the nature of science views of 53 preservice science teachers before and after a year of secondary science methods instruction that incorporated the process skills-based approach. Data consisted of each participant's written and interview responses to the Views of the Nature of Science (VNOS) questionnaire. Systematic data analysis led to the conclusion that participants exhibited statistically significant and practically meaningful improvements in their nature of science views and viewed teaching the nature of science as essential to their future instruction. The second and third papers assessed the outcomes of the process skills-based approach with 25 inservice middle school science teachers. For the second paper, she collected and analyzed participants' VNOS and interview responses before, after, and 10 months after a 6-day summer professional development. Long-term retention of more aligned nature of science views underpins teachers' ability to teach aligned conceptions to their students yet it is rarely examined. Participants substantially improved their nature of science views after the professional development, retained those views over 10 months, and attributed their more aligned understandings to the course. The third paper addressed these participants' instructional practices based on participant-created video reflections of their nature of science and inquiry instruction. Two participant interviews and class notes also were analyzed via a constant comparative approach to ascertain if, how, and why the teachers explicitly integrated the nature of science into their instruction. The participants recognized the process skills-based approach as instrumental in the facilitation of their improved views. Additionally, the participants saw the nature of science as an important way to help students to access core science content such as the theory of evolution by natural selection. Most impressively, participants taught the nature of science explicitly and regularly. This instruction was student-centered, involving high levels of student engagement in ways that represented applying, adapting, and innovating on what they learned in the summer professional development.

Mulvey, Bridget K.

337

University Image: An Information Processing Perspective.  

ERIC Educational Resources Information Center

A study investigated the perceptions of human resource managers (n=243) concerning graduates from 9 area colleges, using 40 criteria to differentiate potential employees. The results support use of the methodology by colleges and universities to investigate factors contributing to institutional image within the business community. (MSE)

Parameswaran, Ravi; Glowacka, Aleksandra E.

1995-01-01

338

SVD spectral feature of image processing  

Microsoft Academic Search

Since Golub and Reinsch proposed the singular value decomposition (SVD) algorithm in 1970, SVD first became an effective method for least squares problems. Recently, SVD has been successfully applied in many fields, such as image data compression, feature extraction and so on. This paper discusses the singular value spectral sequence (SVSS) and gives the connotation and application of SVSS…

Deshen Xia; Hua Li; Yong Qiu

1996-01-01

339

Genetic-based fuzzy image filter and its application to image processing.  

PubMed

In this paper, we propose a Genetic-based Fuzzy Image Filter (GFIF) to remove additive identical independent distribution (i.i.d.) impulse noise from highly corrupted images. The proposed filter consists of a fuzzy number construction process, a fuzzy filtering process, a genetic learning process, and an image knowledge base. First, the fuzzy number construction process receives sample images or the noise-free image and then constructs an image knowledge base for the fuzzy filtering process. Second, the fuzzy filtering process contains a parallel fuzzy inference mechanism, a fuzzy mean process, and a fuzzy decision process to perform the task of noise removal. Finally, based on the genetic algorithm, the genetic learning process adjusts the parameters of the image knowledge base. According to the experimental results, GFIF achieves a better performance than the state-of-the-art filters based on the criteria of Peak-Signal-to-Noise-Ratio (PSNR), Mean-Square-Error (MSE), and Mean-Absolute-Error (MAE). On the subjective evaluation of those filtered images, GFIF also results in a higher quality of global restoration. PMID:16128454

Lee, Chang-Shing; Guo, Shu-Mei; Hsu, Chin-Yuan

2005-08-01

340

Ground control requirements for precision processing of ERTS images  

USGS Publications Warehouse

When the first Earth Resources Technology Satellite (ERTS-A) flies in 1972, NASA expects to receive and bulk-process 9,000 images a week. From this deluge of images, a few will be selected for precision processing; that is, about 5 percent will be further treated to improve the geometry of the scene, both in the relative and absolute sense. Control points are required for this processing. This paper describes the control requirements for relating ERTS images to a reference surface of the earth. Enough background on the ERTS-A satellite is included to make the requirements meaningful to the user.

Burger, Thomas C.

1972-01-01

341

Real-Time Digital Processing Of Color Bronchoscopic Images  

NASA Astrophysics Data System (ADS)

In this paper we present possibilities of improving conventional and fluorescence bronchoscopic diagnosis by means of digital image processing methods. After a description of the particularities of the bronchoscopic imaging system on one hand and the state-of-the-art in real-time digital image processing on the other hand, we present a series of methods which improve bronchoscopic diagnosis. These methods are based on real-time processing of the bronchoscopic images in video form. Among them, spatial and temporal filtering have been implemented with good results. For further improvements of the bronchoscopy in general and the fluorescence bronchoscopy in particular, we propose two other methods which are histogram equalization and the generation of an intrinsic fluorescence image.

Hugli, Heinz; Frei, Werner

1981-12-01
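One of the enhancement steps proposed in the record above is histogram equalization. A compact Python version for 8-bit grayscale frames is sketched below as a generic illustration, not as the authors' specific real-time video implementation.

```python
import numpy as np

def histogram_equalize(gray):
    """Global histogram equalization of an 8-bit grayscale frame.

    gray: uint8 array. Builds the cumulative histogram, normalizes it
    to [0, 255], and uses it as a lookup table for remapping.
    """
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(float)
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1.0)
    lut = np.round(255.0 * cdf).astype(np.uint8)   # intensity mapping table
    return lut[gray]
```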

342

Applications of nuclear magnetic resonance imaging in process engineering  

Microsoft Academic Search

During the past decade, the application of nuclear magnetic resonance (NMR) imaging techniques to problems of relevance to the process industries has been identified. The particular strengths of NMR techniques are their ability to distinguish between different chemical species and to yield information simultaneously on the structure, concentration distribution and flow processes occurring within a given process unit. In this…

Lynn F. Gladden; Paul Alexander

1996-01-01

343

Subband/Transform MATLAB Functions For Processing Images  

NASA Technical Reports Server (NTRS)

SUBTRANS software is package of routines implementing image-data-processing functions for use with MATLAB (TM) software. Provides capability to transform image data with block transforms and to produce spatial-frequency subbands of transformed data. Functions cascaded to provide further decomposition into more subbands. Also used in image-data-compression systems. For example, transforms used to prepare data for lossy compression. Written for use in MATLAB mathematical-analysis environment.

Glover, D.

1995-01-01
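
A minimal sketch of the kind of operation such a package provides: a one-level Haar-style split of an image into four spatial-frequency subbands (written in Python rather than MATLAB for consistency with the other sketches here; even image dimensions are assumed):

    import numpy as np

    def haar_subbands(img):
        """One-level split into LL, LH, HL, HH subbands via averages and differences."""
        a = img[0::2, :].astype(float)
        b = img[1::2, :].astype(float)
        lo, hi = (a + b) / 2.0, (a - b) / 2.0            # vertical low/high split
        def split_cols(x):
            c, d = x[:, 0::2], x[:, 1::2]
            return (c + d) / 2.0, (c - d) / 2.0          # horizontal low/high split
        LL, LH = split_cols(lo)
        HL, HH = split_cols(hi)
        return LL, LH, HL, HH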

344

Processing ISS Images of Titan's Surface  

NASA Technical Reports Server (NTRS)

One of the primary goals of the Cassini-Huygens mission, in orbit around Saturn since July 2004, is to understand the surface and atmosphere of Titan. Surface investigations are primarily accomplished with RADAR, the Visual and Infrared Mapping Spectrometer (VIMS), and the Imaging Science Subsystem (ISS) [1]. The latter two use methane "windows", regions in Titan's reflectance spectrum where its atmosphere is most transparent, to observe the surface. For VIMS, this produces clear views of the surface near 2 and 5 microns [2]. ISS uses a narrow continuum band filter (CB3) at 938 nanometers. While these methane windows provide our best views of the surface, the images produced are not as crisp as ISS images of satellites like Dione and Iapetus [3] due to the atmosphere. Given a reasonable estimate of contrast (approx.30%), the apparent resolution of features is approximately 5 pixels due to the effects of the atmosphere and the Modulation Transfer Function of the camera [1,4]. The atmospheric haze also reduces contrast, especially with increasing emission angles [5].

Perry, Jason; McEwen, Alfred; Fussner, Stephanie; Turtle, Elizabeth; West, Robert; Porco, Carolyn; Knowles, Ben; Dawson, Doug

2005-01-01

345

Image processing of underwater multispectral imagery  

USGS Publications Warehouse

Capturing in situ fluorescence images of marine organisms presents many technical challenges. The effects of the medium, as well as the particles and organisms within it, are intermixed with the desired signal. Methods for extracting and preparing the imagery for analysis are discussed in reference to a novel underwater imaging system called the low-light-level underwater multispectral imaging system (LUMIS). The instrument supports both uni- and multispectral collections, each of which is discussed in the context of an experimental application. In unispectral mode, LUMIS was used to investigate the spatial distribution of phytoplankton. A thin sheet of laser light (532 nm) induced chlorophyll fluorescence in the phytoplankton, which was recorded by LUMIS. Inhomogeneities in the light sheet led to the development of a beam-pattern-correction algorithm. Separating individual phytoplankton cells from a weak background fluorescence field required a two-step procedure consisting of edge detection followed by a series of binary morphological operations. In multispectral mode, LUMIS was used to investigate the bio-assay potential of fluorescent pigments in corals. Problems with the commercial optical-splitting device produced nonlinear distortions in the imagery. A tessellation algorithm, including an automated tie-point-selection procedure, was developed to correct the distortions. Only pixels corresponding to coral polyps were of interest for further analysis. Extraction of these pixels was performed by a dynamic global-thresholding algorithm.

Zawada, D.G.

2003-01-01
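
A minimal sketch of the two-step segmentation described above for the unispectral case, edge detection followed by binary morphological operations (the Sobel gradient and the specific threshold are assumptions; the record does not state which edge detector was used):

    import numpy as np
    from scipy import ndimage

    def cell_mask(img, edge_thresh=50.0):
        """Segment bright cells: gradient magnitude, threshold, then morphological cleanup."""
        gx = ndimage.sobel(img.astype(float), axis=0)
        gy = ndimage.sobel(img.astype(float), axis=1)
        edges = np.hypot(gx, gy) > edge_thresh
        closed = ndimage.binary_closing(edges)                  # join broken contours
        filled = ndimage.binary_fill_holes(closed)              # solid cell bodies
        return ndimage.binary_opening(filled, iterations=2)     # drop small speckle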

346

Color sensitivity of the multi-exposure HDR imaging process  

NASA Astrophysics Data System (ADS)

Multi-exposure high dynamic range (HDR) imaging builds HDR radiance maps by stitching together different views of the same scene taken with varying exposures. Practically, this process involves converting raw sensor data into low dynamic range (LDR) images, estimating the camera response curves, and using them to recover the irradiance at every pixel. During this export, white balance settings and image stitching are applied, both of which influence the color balance of the final image. In this paper, we use a calibrated quasi-monochromatic light source, an integrating sphere, and a spectrograph to evaluate and compare the average spectral response of the image sensor. We finally draw some conclusions about the color consistency of HDR imaging and the additional steps necessary to use multi-exposure HDR imaging as a tool to measure physical quantities such as radiance and luminance.

Lenseigne, Boris; Jacobs, Valéry Ann; Withouck, Martijn; Hanselaer, Peter; Jonker, Pieter P.

2013-04-01
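
A minimal sketch of the radiance-map recovery step discussed above, assuming the camera response has already been calibrated so that pixel values are linear in exposure; the hat-shaped weighting that favours mid-range pixels is a common choice, not necessarily the one used in the paper:

    import numpy as np

    def merge_hdr(ldr_images, exposure_times):
        """Weighted per-pixel irradiance estimate from a stack of linear LDR exposures."""
        num = np.zeros(ldr_images[0].shape, dtype=float)
        den = np.zeros_like(num)
        for img, t in zip(ldr_images, exposure_times):
            z = img.astype(float) / 255.0
            w = 1.0 - np.abs(2.0 * z - 1.0)      # down-weight near-saturated and near-black pixels
            num += w * z / t                     # irradiance estimate = value / exposure time
            den += w
        return num / np.maximum(den, 1e-6)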

347

Digital image processing and analysis for activated sludge wastewater treatment.  

PubMed

The activated sludge system is generally used in wastewater treatment plants for processing domestic influent. Conventionally, activated sludge wastewater treatment is monitored by measuring physico-chemical parameters such as total suspended solids (TSSol), sludge volume index (SVI), and chemical oxygen demand (COD). These tests are conducted in the laboratory and take many hours to yield a final measurement. Digital image processing and analysis offers a better alternative, not only to monitor and characterize the current state of activated sludge but also to predict its future state. The characterization by image processing and analysis is done by correlating the time evolution of parameters extracted by image analysis of flocs and filaments with the physico-chemical parameters. This chapter briefly reviews activated sludge wastewater treatment and the procedures of image acquisition, preprocessing, segmentation, and analysis in that specific context. In the latter part, additional preprocessing procedures such as z-stacking and image stitching, not previously used in the context of activated sludge, are introduced. Different preprocessing and segmentation techniques are proposed, along with a survey of imaging procedures reported in the literature. Finally, the image-analysis-based morphological parameters and their correlation with the monitoring and prediction of activated sludge are discussed. It is observed that image analysis can play a very useful role in the monitoring of activated sludge wastewater treatment plants. PMID:25381111

Khan, Muhammad Burhan; Lee, Xue Yong; Nisar, Humaira; Ng, Choon Aun; Yeap, Kim Ho; Malik, Aamir Saeed

2015-01-01

348

Image processing in automated measurements of raindrop size and distribution  

Microsoft Academic Search

A rapid method for evaluating raindrop size and size distribution has been developed. It is based on image processing with correlation analysis in the frequency domain. This technique has the advantage of being a direct measurement method that automatically identifies and counts raindrops. Calibration was carried out using a standard image with known raindrop sizes. Drop sizes, ranging from less

Paulo Estevão Cruvinel; Sidney Rosa Vieira; Silvio Crestana; Edson Roberto Minatel; Marcos Luiz Mucheroni; André Torre Neto

1999-01-01

349

Language Determination: Natural Language Processing from Scanned Document Images  

Microsoft Academic Search

Many documents are available to a computer only as images from paper. However, most natural language processing systems expect their input as character-coded text, which may be difficult or expensive to extract accurately from the page. We describe a method for converting a document image into character shape codes and word shape tokens. We believe that this representation,

Penelope Sibun; A. Lawrence Spitz

1994-01-01

350

Position-Dependent Defocus Processing for Acoustic Holography Images  

E-print Network

Acoustic holography is a transmission-based ultrasound imaging method that uses optical reconstruction. Enhancement of images formed by acoustic holography requires position-dependent filtering for the enhancement step. It is found

Flynn, Patrick J.

351

WAVELETS IN TEMPORAL AND SPATIAL PROCESSING OF BIOMEDICAL IMAGES  

Microsoft Academic Search

We review some of the most recent advances in the area of wavelet applications in medical imaging. We first review key concepts in the processing of medical images with wavelet transforms and multiscale analysis, including time-frequency tiling, overcomplete representations, higher dimensional bases, symmetry, boundary effects, translational invariance, orientation selectivity, and best-basis selection. We next describe some

Andrew F. Laine

2000-01-01

352

Comparative analysis of NDE techniques with image processing  

Microsoft Academic Search

The paper reports comparative results of nondestructive testing (NDT) based experimentation done on created flaws in the casting at the Central Foundry Forge Plant (CFFP) of Bharat Heavy Electrical Ltd. India (BHEL). The present experimental study is aimed at comparing the evaluation of image processing methods applied to the radiographic images of welding defects such as slag inclusion, porosity, lack-of-root

Vijay R. Rathod; R. S. Anand; Alaknanda Ashok

2012-01-01

353

Image processing for flight crew enhanced situation awareness  

NASA Technical Reports Server (NTRS)

This presentation describes the image processing work that is being performed for the Enhanced Situational Awareness System (ESAS) application. Specifically, the presented work supports the Enhanced Vision System (EVS) component of ESAS.

Roberts, Barry

1993-01-01

354

Application of digital image processing techniques to astronomical imagery 1977  

NASA Technical Reports Server (NTRS)

Nine specific techniques or combinations of techniques developed for applying digital image processing technology to existing astronomical imagery are described. Photoproducts are included to illustrate the results of each of these investigations.

Lorre, J. J.; Lynn, D. J.

1978-01-01

355

Nonlinear optical Fourier filtering technique for medical image processing  

E-print Network

... spatial frequencies corresponding to soft dense breast tissue and displaying only high spatial frequencies ... Using spatial light modulators (SLM), mirrors, and a diode laser, portable systems can be fabricated for medical image processing

Rao, D.V.G.L.N.
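
A minimal digital analogue of the optical Fourier filtering described in the record above: suppress the low spatial frequencies of an image and keep only the high ones (done here numerically with an FFT and a hard circular cutoff, which is an assumption; the paper performs the filtering optically and nonlinearly):

    import numpy as np

    def fourier_highpass(img, cutoff=10):
        """Zero spatial frequencies within `cutoff` of DC, then invert the FFT."""
        F = np.fft.fftshift(np.fft.fft2(img.astype(float)))
        rows, cols = img.shape
        y, x = np.ogrid[:rows, :cols]
        dist2 = (y - rows // 2) ** 2 + (x - cols // 2) ** 2
        F[dist2 < cutoff ** 2] = 0.0                     # remove low-frequency content
        return np.real(np.fft.ifft2(np.fft.ifftshift(F)))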

356

IN VIVO OPTICAL MOLECULAR IMAGING: PRINCIPLES AND SIGNAL PROCESSING ISSUES  

E-print Network

expressions, nuclear trafficking, etc. One of the main applications of cell-level molecular imaging is high content screening (HCS) to accelerate the drug discovery process. For the radiologist, the term "molec

357

An image processing of a Raphael's portrait of Leonardo  

E-print Network

In one of his paintings, the School of Athens, Raphael is depicting Leonardo da Vinci as the philosopher Plato. Some image processing tools can help us in comparing this portrait with two Leonardo's portraits, considered as self-portraits.

Sparavigna, Amelia Carolina

2011-01-01

358

Graphical Technique to Support the Teaching/Learning Process of Software Process Reference Models  

NASA Astrophysics Data System (ADS)

In this paper, we propose a set of diagrams to visualize software process reference models (PRM). The diagrams, called dimods, are the combination of some visual and process modeling techniques such as rich pictures, mind maps, IDEF and RAD diagrams. We show the use of this technique by designing a set of dimods for the Mexican Software Industry Process Model (MoProSoft). Additionally, we perform an evaluation of the usefulness of dimods. The result of the evaluation shows that dimods may be a support tool that facilitates the understanding, memorization, and learning of software PRMs in both software development organizations and universities. The results also show that dimods may have advantages over the traditional description methods for these types of models.

Espinosa-Curiel, Ismael Edrein; Rodríguez-Jacobo, Josefina; Fernández-Zepeda, José Alberto

359

Design and implementation of non-linear image processing functions for CMOS image sensor  

NASA Astrophysics Data System (ADS)

Today, solid state image sensors are used in many applications, such as mobile phones, video surveillance systems, embedded medical imaging and industrial vision systems. These image sensors require the integration in the focal plane (or near the focal plane) of complex image processing algorithms. Such devices must meet the constraints related to the quality of acquired images, speed and performance of embedded processing, as well as low power consumption. To achieve these objectives, low-level analog processing allows the useful information in the scene to be extracted directly. For example, an edge detection step followed by local maxima extraction facilitates high-level processing such as object pattern recognition in a visual scene. Our goal was to design an intelligent image sensor prototype achieving high-speed image acquisition and non-linear image processing (such as local minima and maxima calculations). For this purpose, we present in this article the design and test of a 64×64 pixel image sensor built in a standard 0.35 µm CMOS technology including non-linear image processing. The architecture of our sensor, named nLiRIC (non-Linear Rapid Image Capture), is based on the implementation of an analog Minima/Maxima Unit (MMU). This MMU calculates the minimum and maximum values (non-linear functions), in real time, in a 2×2 pixel neighbourhood. Each MMU needs 52 transistors and the pitch of one pixel is 40×40 µm. The total area of the 64×64 pixel array is 12.5 mm2. Our tests have shown the validity of the main functions of our new image sensor, such as fast image acquisition (10K frames per second) and minima/maxima calculation in less than one ms.

Musa, Purnawarman; Sudiro, Sunny A.; Wibowo, Eri P.; Harmanto, Suryadi; Paindavoine, Michel

2012-11-01
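
A minimal software model of the non-linear operation computed on-chip by the MMU described above: the minimum and maximum over non-overlapping 2×2 pixel neighbourhoods (even image dimensions are assumed; the real unit works in the analog domain on the sensor itself):

    import numpy as np

    def minmax_2x2(img):
        """Min and max over each non-overlapping 2x2 neighbourhood."""
        blocks = np.stack([img[0::2, 0::2], img[0::2, 1::2],
                           img[1::2, 0::2], img[1::2, 1::2]])
        return blocks.min(axis=0), blocks.max(axis=0)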

360

ELAS: A powerful, general purpose image processing package  

NASA Technical Reports Server (NTRS)

ELAS is a software package which has been utilized as an image processing tool for more than a decade. It has been the source of several commercial packages. Now available on UNIX workstations it is a very powerful, flexible set of software. Applications at Stennis Space Center have included a very wide range of areas including medicine, forestry, geology, ecological modeling, and sonar imagery. It remains one of the most powerful image processing packages available, either commercially or in the public domain.

Walters, David; Rickman, Douglas

1991-01-01

361

Photographic Images as an Interactive Online Teaching Technology: Creating Online Communities  

ERIC Educational Resources Information Center

Creating a sense of community in the online classroom is a challenge for educators who teach via the Internet. There is a growing body of literature supporting the importance of the community construct in online courses (Liu, Magjuka, Curtis, & Lee, 2007). Thus, educators are challenged to develop and implement innovative teaching technologies…

Perry, Beth; Dalton, Janice; Edwards, Margaret

2009-01-01

362

An Image Processing Approach to Linguistic Translation  

NASA Astrophysics Data System (ADS)

The art of translation is as old as written literature. Developments since the Industrial Revolution have influenced the practice of translation, nurturing schools, professional associations, and standards. In this paper, we propose a method for translating typed Kannada text (taken as an image) into its equivalent English text. The National Instruments (NI) Vision Assistant (version 8.5) has been used for optical character recognition (OCR). We developed a new way of transliteration (which we call NIV transliteration) to simplify the training of characters. Also, we built a special type of dictionary for the purpose of translation.

Kubatur, Shruthi; Sreehari, Suhas; Hegde, Rajeshwari

2011-12-01

363

Digital processing of side-scan sonar data with the Woods Hole image processing system software  

USGS Publications Warehouse

Since 1985, the Branch of Atlantic Marine Geology has been involved in collecting, processing and digitally mosaicking high- and low-resolution side-scan sonar data. Recent development of a UNIX-based image-processing software system includes a series of task-specific programs for processing side-scan sonar data. This report describes the steps required to process the collected data and to produce an image that has equal along- and across-track resolution.

Paskevich, Valerie F.

1992-01-01

364

Intelligent control method of rotary kiln process based on image processing technology: A survey  

Microsoft Academic Search

The rotary kiln industrial production system is a typical complex nonlinear multivariable process with strong coupling and large time delays. This paper surveys recent research results on optimized operation and intelligent control of the rotary kiln process based on image processing technology. It mainly includes flame image processing technology, soft-sensor modeling, state recognition, fault diagnosis, and hybrid intelligent control strategy. In

Wang Jie-Sheng; Zhang Li; Gao Xian-Wen; Sun Shi-Feng

2010-01-01

365

Teaching Reading  

ERIC Educational Resources Information Center

"Teaching Reading" uncovers the interactive processes that happen when people learn to read and translates them into a comprehensive easy-to-follow guide on how to teach reading. Richard Day's revelations on the nature of reading, reading strategies, reading fluency, reading comprehension, and reading objectives make fascinating…

Day, Richard R.

2013-01-01

366

The research on image processing technology of the star tracker  

NASA Astrophysics Data System (ADS)

As the core of visual sensitivity via imaging, image processing technology, especially for the star tracker, is mainly characterized by such items as image exposure, optimal storage, background estimation, feature correction, target extraction, and iteration compensation. This paper first summarizes new research on those items at home and abroad; then, taking into account the star tracker's practical engineering constraints, in-orbit environment and lifetime information, it presents an architecture for rapid fusion of multiple image frames. This fusion suppresses oversaturation of the effective pixels, so the star tracker can be made more precise, more robust and more stable.

Li, Yu-ming; Li, Chun-jiang; Zheng, Ran; Li, Xiao; Yang, Jun

2014-11-01

367

Process-oriented inquiry—A constructivist approach to early childhood science education: Teaching teachers to do science  

Microsoft Academic Search

Process-oriented inquiry can help preservice and inservice early childhood teachers implement constructivist science education in their own classrooms. In this article, we discuss the basic elements of process-oriented inquiry applied to early childhood science education, show how we foster the development of process-oriented inquiry teaching skills with our preservice early childhood education students, and argue that the validity of children's

David Jerner Martin; Raynice Jean-Sigur; Emily Schmidt

2005-01-01

368

The design of a distributed image processing and dissemination system  

SciTech Connect

The design and implementation of a distributed image processing and dissemination system was undertaken and accomplished as part of a prototype communication and intelligence (CI) system, the contingency support system (CSS), which is intended to support contingency operations of the Tactical Air Command. The system consists of six (6) Sun 3/180C workstations with integrated ITEX image processors and three (3) 3/50 diskless workstations located at four (4) system nodes (INEL, base, and mobiles). All 3/180C workstations are capable of image system server functions, whereas the 3/50s are image system clients only. Distribution is accomplished via both local and wide area networks using standard Defense Data Network (DDN) protocols (i.e., TCP/IP, et al.) and Defense Satellite Communication Systems (DSCS) compatible SHF Transportable Satellite Earth Terminals (TSET). Image applications utilize Sun's Remote Procedure Call (RPC) to facilitate the image system client and server relationships. The system provides functions to acquire, display, annotate, process, transfer, and manage images via an icon, panel, and menu oriented SunView (trademark) based user interface. Image spatial resolution is 512 × 480 with 8 bits/pixel black and white and 12/24 bits/pixel color depending on system configuration. Compression is used during various image display and transmission functions to reduce the dynamic range of image data to 12/6/3/2 bits/pixel depending on the application. Image acquisition is accomplished in real-time or near-real-time by special purpose ITEX image hardware. As a result, all image displays are highly interactive, with attention given to subsecond response time. 3 refs., 7 figs.

Rafferty, P.; Hower, L.

1990-01-01

369

Imaging Implicit Morphological Processing: Evidence from Hebrew  

PubMed Central

Is morphology a discrete and independent element of lexical structure or does it simply reflect a fine-tuning of the system to the statistical correlation that exists among orthographic and semantic properties of words? Hebrew provides a unique opportunity to examine morphological processing in the brain because of its rich morphological system. In an fMRI masked priming experiment we investigated the neural networks involved in implicit morphological processing in Hebrew. In the lMFG and lIFG, activation was found to be significantly reduced when the primes were morphologically related to the targets. This effect was not influenced by the semantic transparency of the morphological prime, and was not found in the semantic or orthographic condition. Additional morphologically related decrease in activation was found in the lIPL although there, activation was significantly modulated by semantic transparency. Our findings regarding implicit morphological processing suggest that morphology is an automatic and distinct aspect of visually processing words. These results also coincide with the behavioral data previously obtained demonstrating the central role of morphological processing in reading Hebrew. PMID:19803693

Bick, Atira S; Frost, Ram; Goelman, Gadi

2013-01-01

370

Computer-assisted image processing for lung cancer localization  

NASA Astrophysics Data System (ADS)

Autofluorescence of the tumor and surrounding tissue is the largest background source in fluorescence diagnosis. A new system which applies computer image processing techniques to lung cancer localization by laser fluorescence bronchoscopy has been developed to subtract the autofluorescence background. The results of our trial tests in tissue-simulating phantom and porcine thigh muscle models are satisfactory. There is great hope that this computer-assisted image processing system will significantly enhance the contrast of the fluorescence image and reduce false results in early human lung cancer examination.

Xie, Shusen; Zheng, Wei; Li, Yongsen; Lai, Kezhong; Lin, Qirong

1996-09-01

371

High Dynamic Range Processing for Magnetic Resonance Imaging  

PubMed Central

Purpose To minimize feature loss in T1- and T2-weighted MRI by merging multiple MR images acquired at different TR and TE to generate an image with increased dynamic range. Materials and Methods High Dynamic Range (HDR) processing techniques from the field of photography were applied to a series of acquired MR images. Specifically, a method to parameterize the algorithm for MRI data was developed and tested. T1- and T2-weighted images of a number of contrast agent phantoms and a live mouse were acquired with varying TR and TE parameters. The images were computationally merged to produce HDR-MR images. All acquisitions were performed on a 7.05 T Bruker PharmaScan with a multi-echo spin echo pulse sequence. Results HDR-MRI delineated bright and dark features that were either saturated or indistinguishable from background in standard T1- and T2-weighted MRI. The increased dynamic range preserved intensity gradation over a larger range of T1 and T2 in phantoms and revealed more anatomical features in vivo. Conclusions We have developed and tested a method to apply HDR processing to MR images. The increased dynamic range of HDR-MR images as compared to standard T1- and T2-weighted images minimizes feature loss caused by magnetization recovery or low SNR. PMID:24250788

Sukerkar, Preeti A.; Meade, Thomas J.

2013-01-01

372

Image processing for improved eye-tracking accuracy  

NASA Technical Reports Server (NTRS)

Video cameras provide a simple, noninvasive method for monitoring a subject's eye movements. An important concept is that of the resolution of the system, which is the smallest eye movement that can be reliably detected. While hardware systems are available that estimate direction of gaze in real-time from a video image of the pupil, such systems must limit image processing to attain real-time performance and are limited to a resolution of about 10 arc minutes. Two ways to improve resolution are discussed. The first is to improve the image processing algorithms that are used to derive an estimate. Off-line analysis of the data can improve resolution by at least one order of magnitude for images of the pupil. A second avenue by which to improve resolution is to increase the optical gain of the imaging setup (i.e., the amount of image motion produced by a given eye rotation). Ophthalmoscopic imaging of retinal blood vessels provides increased optical gain and improved immunity to small head movements but requires a highly sensitive camera. The large number of images involved in a typical experiment imposes great demands on the storage, handling, and processing of data. A major bottleneck had been the real-time digitization and storage of large amounts of video imagery, but recent developments in video compression hardware have made this problem tractable at a reasonable cost. Images of both the retina and the pupil can be analyzed successfully using a basic toolbox of image-processing routines (filtering, correlation, thresholding, etc.), which are, for the most part, well suited to implementation on vectorizing supercomputers.

Mulligan, J. B.; Watson, A. B. (Principal Investigator)

1997-01-01
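
A minimal sketch of the kind of off-line pupil-image analysis the record above alludes to: threshold the dark pupil region and take its centroid, which averages over many boundary pixels and can therefore resolve gaze changes finer than one pixel (the threshold value and the dark-pupil assumption are illustrative, not taken from the paper):

    import numpy as np

    def pupil_center(frame, thresh=40):
        """Sub-pixel pupil centre as the centroid of dark pixels in a grayscale frame."""
        ys, xs = np.nonzero(frame < thresh)      # pupil assumed to be the darkest region
        if xs.size == 0:
            return None                          # no pupil found at this threshold
        return xs.mean(), ys.mean()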

373

Digital image processing of bone - Problems and potentials  

NASA Technical Reports Server (NTRS)

The development of a digital image processing system for bone histomorphometry and fluorescent marker monitoring is discussed. The system in question is capable of making measurements of UV or light microscope features on a video screen with either video or computer-generated images, and comprises a microscope, low-light-level video camera, video digitizer and display terminal, color monitor, and PDP 11/34 computer. Capabilities demonstrated in the analysis of an undecalcified rat tibia include the measurement of perimeter and total bone area, and the generation of microscope images, false color images, digitized images and contoured images for further analysis. Software development will be based on an existing software library, specifically the mini-VICAR system developed at JPL. It is noted that the potentials of the system in terms of speed and reliability far exceed any problems associated with hardware and software development.

Morey, E. R.; Wronski, T. J.

1980-01-01

374

Image Processing Using Smooth Ordering of its Patches  

NASA Astrophysics Data System (ADS)

We propose an image processing scheme based on reordering of its patches. For a given corrupted image, we extract all patches with overlaps, treat them as points in a high-dimensional space, and order them so that they are chained along the "shortest possible path", essentially solving the traveling salesman problem. The obtained ordering, applied to the corrupted image, implies a permutation of the image pixels into what should be a regular signal. This enables us to obtain a good recovery of the clean image by applying relatively simple 1D smoothing operations (such as filtering or interpolation) to the reordered set of pixels. We explore the use of the proposed approach for image denoising and inpainting, and show promising results in both cases.

Ram, Idan; Elad, Michael; Cohen, Israel

2013-07-01
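
A minimal sketch of the core idea in the record above, using a greedy nearest-neighbour chain of overlapping patches as a cheap stand-in for the shortest-path (TSP-style) ordering, followed by simple 1D smoothing of the reordered pixel values; the patch size and smoothing length are arbitrary assumptions:

    import numpy as np

    def reorder_and_smooth(img, patch=5, win=5):
        """Greedy patch ordering, then 1D moving-average smoothing of centre pixels."""
        h, w = img.shape
        coords = [(i, j) for i in range(h - patch + 1) for j in range(w - patch + 1)]
        feats = np.array([img[i:i + patch, j:j + patch].ravel() for i, j in coords], float)
        order, used = [0], np.zeros(len(coords), bool)
        used[0] = True
        for _ in range(len(coords) - 1):          # chain each patch to its nearest unused neighbour
            d = np.linalg.norm(feats - feats[order[-1]], axis=1)
            d[used] = np.inf
            nxt = int(np.argmin(d))
            order.append(nxt)
            used[nxt] = True
        centres = np.array([img[i + patch // 2, j + patch // 2] for i, j in coords], float)
        smoothed = np.convolve(centres[order], np.ones(win) / win, mode='same')
        return order, smoothed                    # the inverse permutation maps values back to pixels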

375

Imaging Implicit Morphological Processing: Evidence from Hebrew  

ERIC Educational Resources Information Center

Is morphology a discrete and independent element of lexical structure or does it simply reflect a fine-tuning of the system to the statistical correlation that exists among orthographic and semantic properties of words? Hebrew provides a unique opportunity to examine morphological processing in the brain because of its rich morphological system.…

Bick, Atira S.; Frost, Ram; Goelman, Gadi

2010-01-01

376

Reducing the absorbed dose in analogue radiography of infant chest images by improving the image quality, using image processing techniques.  

PubMed

Radiographic inspection is one of the most widely employed techniques among medical testing methods. Because of the poor contrast and high unsharpness of radiographic films, converting radiographs to a digital format and applying further digital image processing is the best method of enhancing the image quality and assisting the interpreter in their evaluation. In this research work, radiographic films of 70 infant chest images with different sizes of defects were selected. The chest images were digitised and processed with two classes of techniques: (i) spatial-domain and (ii) frequency-domain algorithms. The MATLAB environment was selected for processing in the digital format. Our results showed that by using these two techniques, defects with small dimensions are detectable. Therefore, these suggested techniques may help medical specialists to diagnose defects at the primary stages and help to prevent repeat X-ray examinations of paediatric patients. PMID:21743073

Karimian, A; Yazdani, S; Askari, M A

2011-09-01

377

A Software Package For Biomedical Image Processing And Analysis  

NASA Astrophysics Data System (ADS)

The decreasing cost of computing power and the introduction of low cost imaging boards justify the increasing number of applications of digital image processing techniques in the area of biomedicine. There is however a large software gap to be filled between the application and the equipment. The requirements to bridge this gap are twofold: good knowledge of the hardware provided and its interface to the host computer, and expertise in digital image processing and analysis techniques. A software package incorporating these two requirements was developed using the C programming language, in order to create a user-friendly image processing programming environment. The software package can be considered in two different ways: as a data structure adapted to image processing and analysis, which acts as the backbone and the standard of communication for all the software; and as a set of routines implementing the basic algorithms used in image processing and analysis. Hardware dependency is restricted to a single module upon which all hardware calls are based. The data structure that was built has four main features: it is hierarchical, open, object oriented, and has object-dependent dimensions. Considering the vast amount of memory needed by imaging applications and the memory available in small imaging systems, an effective image memory management scheme was implemented. This software package has been used for more than one and a half years by users with different applications. It proved to be an efficient tool for helping people get adapted to the system, and for standardizing and exchanging software, yet preserving flexibility and allowing for users' specific implementations. The philosophy of the software package is discussed and the data structure that was built is described in detail.

Goncalves, Joao G. M.; Mealha, Oscar

1988-06-01

378

Teaching themes Entry of viruses into cells: receptor binding, structural basis of the entry process and uncoating.  

E-print Network

Teaching themes Entry of viruses into cells: receptor binding, structural basis of the entry process and uncoating. Virus replication : function of virus proteins, functions of virus RNA secondary structures, interactions between virus and host-cell proteins, how viruses can establish persistent

Brierley, Andrew

379

Health-Related Intensity Profiles of Physical Education Classes at Different Phases of the Teaching/Learning Process  

ERIC Educational Resources Information Center

Study aim: To assess the intensities of three types of physical education (PE) classes corresponding to the phases of the teaching/learning process: Type 1--acquiring and developing skills, Type 2--selecting and applying skills, tactics and compositional principles and Type 3--evaluating and improving performance skills. Material and methods: A…

Bronikowski, Michal; Bronikowska, Malgorzata; Kantanista, Adam; Ciekot, Monika; Laudanska-Krzeminska, Ida; Szwed, Szymon

2009-01-01

380

The Perceptions of Student Teachers about the Effects of Class Size with Regard to Effective Teaching Process  

ERIC Educational Resources Information Center

The main purpose of this study was to determine student teachers' perceptions concerning the effects of class size with regard to the teaching process. A total of 41 fourth-year student teachers participated in the study. A questionnaire including open-ended items was used for data collection. The study revealed that there is a direct relationship…

Cakmak, Melek

2009-01-01

381

Fremdsprachenunterricht als Kommunikationsprozess (Foreign Language Teaching as a Communicative Process). Language Centre News, No. 1. Focus on Spoken Language.  

ERIC Educational Resources Information Center

Teaching, as a communicative process, ranges between purely message-oriented communication (the goal) and purely language-oriented communication (a means). Classroom discourse ("Close the window", etc.) is useful as a drill but is also message-oriented. Skill in message-oriented communication is acquired only through practice in this kind of…

Butzkamm, Wolfgang

382

Validation Study of the Scale for "Assessment of the Teaching-Learning Process", Student Version (ATLP-S)  

ERIC Educational Resources Information Center

Introduction: The main goal of this study is to evaluate the psychometric and assessment features of the Scale for the "Assessment of the Teaching-Learning Process, Student Version" (ATLP-S), for both practical and theoretical reasons. From an applied point of view, this self-report measurement instrument has been designed to encourage student…

de la Fuente, Jesus; Sander, Paul; Justicia, Fernando; Pichardo, M. Carmen; Garcia-Berben, Ana B.

2010-01-01

383

Image data processing system requirements study. Volume 1: Analysis. [for Earth Resources Survey Program  

NASA Technical Reports Server (NTRS)

Digital image processing, image recorders, high-density digital data recorders, and data system element processing for use in an Earth Resources Survey image data processing system are studied. Loading to various ERS systems is also estimated by simulation.

Honikman, T.; Mcmahon, E.; Miller, E.; Pietrzak, L.; Yorsz, W.

1973-01-01

384

Multimission image processing and science data visualization  

NASA Technical Reports Server (NTRS)

The Operational Science Analysis (OSA) functional area supports science instrument data display, analysis, visualization and photo processing in support of flight operations of planetary spacecraft managed by the Jet Propulsion Laboratory (JPL). This paper describes the data products generated by the OSA functional area and the current computer system used to generate these data products. The objectives of a system upgrade now in progress are described. The design approach to development of the new system is reviewed, including use of the Unix operating system and X-Window display standards to provide platform independence, portability, and modularity within the new system. The new system should provide a modular and scalable capability supporting a variety of future missions at JPL.

Green, William B.

1993-01-01

385

Digital interactive image analysis by array processing  

NASA Technical Reports Server (NTRS)

An attempt is made to draw a parallel between the existing geophysical data processing service industries and the emerging earth resources data support requirements. The relationship of seismic data analysis to ERTS data analysis is natural because in either case data is digitally recorded in the same format, resulting from remotely sensed energy which has been reflected, attenuated, shifted and degraded on its path from the source to the receiver. In the seismic case the energy is acoustic, ranging in frequencies from 10 to 75 cps, for which the lithosphere appears semi-transparent. In earth survey remote sensing through the atmosphere, visible and infrared frequency bands are being used. Yet the hardware and software required to process the magnetically recorded data from the two realms of inquiry are identical and similar, respectively. The resulting data products are similar.

Sabels, B. E.; Jennings, J. D.

1973-01-01

386

Digital image processing of coal stream combustion  

E-print Network

spacing. Thus interactions exist amongst the particles for dense clouds. While the earlier literature dealt with combustion processes of isolated particles, the recent research focuses upon interactive combustion. The interactive combustion studies include arrays consisting of a finite number of particles, and streams and clouds of a large number of particles. In particular, stream combustion models assume cylindrical geometry and predict the ignition and combustion characteristics. The models show...

Gopalakrishnan, Chengappalli Periyasamy

1994-01-01

387

Octahedral transforms for 3-D image processing.  

PubMed

The octahedral group is one of the finite subgroups of the rotation group in 3-D Euclidean space and a symmetry group of the cubic grid. Compression and filtering of 3-D volumes are given as application examples of its representation theory. We give an overview over the finite subgroups of the 3-D rotation group and their classification. We summarize properties of the octahedral group and basic results from its representation theory. Wide-sense stationary processes are processes with group theoretical symmetries whose principal components are closely related to the representation theory of their symmetry group. Linear filter systems are defined as projection operators and symmetry-based filter systems are generalizations of the Fourier transforms. The algorithms are implemented in Maple/Matlab functions and worksheets. In the experimental part, we use two publicly available MRI volumes. It is shown that the assumption of wide-sense stationarity is realistic and the true principal components of the correlation matrix are very well approximated by the group theoretically predicted structure. We illustrate the nature of the different types of filter systems, their invariance and transformation properties. Finally, we show how thresholding in the transform domain can be used in 3-D signal processing. PMID:19674954

Lenz, Reiner; Latorre Carmona, Pedro

2009-12-01
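
A small worked check of the group-theoretic facts the record above relies on (standard representation theory, not specific to this paper): the rotation group O of the cube/octahedron has order 24, is isomorphic to the symmetric group S4, and its irreducible representations have dimensions 1, 1, 2, 3 and 3, consistent with the sum-of-squares identity

    \[
      |O| = 24, \qquad O \cong S_4, \qquad
      \sum_i d_i^{2} = 1^{2} + 1^{2} + 2^{2} + 3^{2} + 3^{2} = 24 .
    \]

Roughly speaking, a symmetry-adapted (generalized Fourier) transform block-diagonalizes the correlation matrix of a wide-sense stationary process on the cubic grid into blocks indexed by these irreducible representations.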

388

Implementation of a radiology electronic imaging network: the community teaching hospital experience.  

PubMed

Because of their typically small in-house computer and network staff, non-university hospitals often hesitate to consider picture archiving and communication system (PACS) as a solution to the very demanding financial, clinical, and technological needs of today's Radiology Department. This article presents the experiences of the 3-year process for the design and implementation of the Radiology Electronic Imaging Network (REIN) in the Department of Radiology at The Western Pennsylvania Hospital (WPH). WPH embarked on this project in late 1994 to find a solution to the very pressing demands to reduce operating costs and improve service to primary care clinicians, both on-site and at WPH-affiliated clinics. A five-member committee consisting of in-house medical, administrative, information services, and medical physics staff was formed to design a network that would satisfy specific needs of WPH by using a phased mini-PACS approach and to select the various vendors to implement it. Suppliers for individual mini-PACS were selected to provide modality-specific solutions. For the backbone network, vendors were evaluated based on their technological progress, competence and resources, the commitment of the company to the imaging network business, and their willingness to embark on a mid-sized PACS project such as this. Based on patient volume, workflow patterns, and image quality requirements, the committee produced proposals detailing number and location of workstations, short- and long-term memory requirements, and so on. Computed tomography/magnetic resonance imaging, computer radiography, ultrasound, nuclear medicine, digital fluoroscopy, and angiography mini-PACS have been implemented over the past 2 years, and most of these are already integrated into the main REIN. This article presents detailed information concerning the design, selection and implementation processes, including storage requirement calculations. This indicates that PACS implementation is achievable for community hospitals with small computer, networking, and physics departments. Also presented are recommendations concerning design and vendor selection, that may be helpful for similar institutions. PMID:9268864

Arreola, M; Neiman, H L; Sugarman, A; Laurenti, L; Forys, R

1997-08-01

389

Image processing system performance prediction and product quality evaluation  

NASA Technical Reports Server (NTRS)

The author has identified the following significant results. A new technique for image processing system performance prediction and product quality evaluation was developed. It was entirely objective, quantitative, and general, and should prove useful in system design and quality control. The technique and its application to determination of quality control procedures for the Earth Resources Technology Satellite NASA Data Processing Facility are described.

Stein, E. K.; Hammill, H. B. (principal investigators)

1976-01-01

390

Computing with Curvelets: From Image Processing to Turbulent Flows  

Microsoft Academic Search

The curvelet transform is a multiscale and multidirectional transform which allows an almost optimal non-adaptive sparse representation for curve-like features and edges. Applications of curvelets have attracted increasing interest in the communities of applied mathematics, signal processing and seismic geology over the past years. In this paper, we describe some recent applications involving image processing, seismic data exploration, turbulent

Jianwei Ma; Gerlind Plonka

2009-01-01

391

Image processing on compressed data for large video databases  

Microsoft Academic Search

This paper presents a novel approach to processing encoded video sequences prior to decoding. Scene changes may be easily detected using DCT coefficients in JPEG and MPEG encoded video sequences. In addition, by analyzing the DCT coefficients, regions of interest may be isolated prior to decompression, increasing efficiency of any subsequent image processing steps, such as edge detection. The results

Farshid Arman; Arding Hsu; Ming-Yee Chiu

1993-01-01
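
A minimal sketch in the spirit of the record above: scene-change scoring from the DC terms of the 8×8 block DCT. For a block, the DC term is proportional to the block mean, so block means computed from decoded frames serve here as a stand-in for coefficients read directly from the JPEG/MPEG bitstream:

    import numpy as np

    def scene_change_scores(frames, block=8):
        """Mean absolute difference of 8x8 block means between consecutive frames."""
        def dc_map(f):
            h = (f.shape[0] // block) * block
            w = (f.shape[1] // block) * block
            g = f[:h, :w].astype(float)
            return g.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
        dcs = [dc_map(f) for f in frames]
        return [np.abs(a - b).mean() for a, b in zip(dcs[:-1], dcs[1:])]

A scene cut would show up as a score well above the running average of neighbouring scores.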

392

Image processing and analysis techniques for reading kinetheodolite film scales  

Microsoft Academic Search

This report describes a series of techniques for processing and analyzing images. Although developed for a specific purpose, namely the automatic reading of angular information on Askania kinetheodolite film, most of the techniques are quite general, and potentially applicable to a wide variety of problems. The processes described include real time binarisation of a television signal, production and analysis of

A. M. Bagot

1982-01-01

393

Data Processing of MIPSGAL 70 Micron Images  

NASA Astrophysics Data System (ADS)

We describe the modifications and enhancements that we have made to the standard SSC pipeline-produced 70 micron basic calibrated data (BCDs) for the MIPSGAL survey. The high background levels and large number of saturating sources in the observed portion of the Galactic plane create large variations in the responsivity of the Ge:Ga detectors used in the MIPS instrument. We detail how we reprocess the stimulator solutions using the GeRT software provided by the MIPS instrument team. A delta flat field is then applied to the stim-corrected data to correct for short-term responsivity variations. We explore several methods of destriping the data on a scan-to-scan level. A globally derived gain correction appears to be the best solution, although a wavelet-based destriper also retrieves good results. The photometry is checked by comparing the resulting mosaic images to IRIS reprocessed 60 micron data (Miville-Deschenes & Lagache 2005; ApJSS, 157, 302). Over most of the dynamic range of the data, the color-corrected MIPSGAL data agree to 10% with IRIS. For the brightest regions, the values can be discrepant by 30%, with a substantial fraction of the discrepancy due to the uncertainty in the color correction applied. This work is based on observations made with the Spitzer Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology under a contract with NASA. Support for this work was provided by NASA through an award issued by JPL/Caltech.

Paladini, Roberta; Frayer, D.; Noriega-Crespo, A.; Carey, S.; Mizuno, D.; Shenoy, S.; Kramer, K.; Kuchar, T.; Marleau, F.; Price, S.; Padgett, D.; Ingalls, J.

2006-12-01

394

Small Interactive Image Processing System (SMIPS) system description  

NASA Technical Reports Server (NTRS)

The Small Interactive Image Processing System (SMIPS) operates under control of the IBM-OS/MVT operating system and uses an IBM-2250 model 1 display unit as interactive graphic device. The input language in the form of character strings or attentions from keys and light pen is interpreted and causes processing of built-in image processing functions as well as execution of a variable number of application programs kept on a private disk file. A description of design considerations is given and characteristics, structure and logic flow of SMIPS are summarized. Data management and graphic programming techniques used for the interactive manipulation and display of digital pictures are also discussed.

Moik, J. G.

1973-01-01

395

Automating the Photogrammetric Bridging Based on MMS Image Sequence Processing  

NASA Astrophysics Data System (ADS)

The photogrammetric bridging or traverse is a special bundle block adjustment (BBA) for connecting a sequence of stereo-pairs and determining the exterior orientation parameters (EOP). An object point must be imaged in more than one stereo-pair. In each stereo-pair the distance ratio between an object and its corresponding image point varies significantly. We propose to automate the photogrammetric bridging based on fully automatic extraction of homologous points in stereo-pairs and on an arbitrary Cartesian datum to which the EOP and tie points are referred. The technique uses the SIFT algorithm, and keypoints are matched by comparing their descriptors and selecting the smallest distance. All the matched points are used as tie points. The technique was applied initially to two pairs. The block formed by four images was treated by BBA. The process continues to the end of the sequence and is semiautomatic, because each block is processed independently and the transition from one block to the next depends on the operator. Besides four-image blocks (two pairs), we experimented with other arrangements, with block sizes of six, eight, and up to twenty images (respectively three, four, five and up to ten bases). After the whole image-pair sequence had been sequentially adjusted in each experiment, a simultaneous BBA was run so as to estimate the EOP set of each image. The results for classical ("normal case") pairs were analyzed based on standard statistics regularly applied to phototriangulation, and the figures validate the process.

Silva, J. F. C.; Lemes Neto, M. C.; Blasechi, V.

2014-11-01
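
A minimal sketch of the keypoint extraction and matching step described above, using OpenCV's SIFT with Lowe's ratio test (availability of SIFT_create depends on the OpenCV build, and the ratio threshold is a common default rather than a value taken from the paper). The matched pairs would then feed the bundle block adjustment as tie points:

    import cv2

    def match_tie_points(img1, img2, ratio=0.75):
        """SIFT keypoints matched with a brute-force matcher and Lowe's ratio test."""
        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(img1, None)
        kp2, des2 = sift.detectAndCompute(img2, None)
        matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
        pairs = [p for p in matches if len(p) == 2]      # guard against missing neighbours
        good = [m for m, n in pairs if m.distance < ratio * n.distance]
        return [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in good]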

396

Image Quality Assessment: From Error Visibility to Structural Similarity  

E-print Network

... image processing applications. First, it can be used to dynamically monitor and adjust image quality. For example ... concealment and post-filtering algorithms at the decoder. Third, it can be used to benchmark image processing

Simoncelli, Eero

397

Image Quality Assessment: From Error Visibility to Structural Similarity  

E-print Network

... a variety of roles in image processing applications. First, it can be used to dynamically monitor ... to optimize algorithms and parameter settings of image processing systems. For instance, in a visual

Wang, Zhou
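
The record above refers to the structural similarity (SSIM) framework. A minimal single-window sketch of the SSIM index computed over two whole grayscale images is given below; the published index is the mean of this quantity over local (typically Gaussian-weighted) windows, and the constants follow the commonly used C1 = (0.01 L)^2, C2 = (0.03 L)^2 with L = 255:

    import numpy as np

    def ssim_global(x, y, L=255.0):
        """Single-window SSIM between two grayscale images of equal size."""
        C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2
        x, y = x.astype(float), y.astype(float)
        mx, my = x.mean(), y.mean()
        vx, vy = x.var(), y.var()
        cov = ((x - mx) * (y - my)).mean()
        return ((2 * mx * my + C1) * (2 * cov + C2)) / \
               ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2))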

398

A comparison of polarization image processing across different platforms  

NASA Astrophysics Data System (ADS)

Division-of-focal-plane (DoFP) polarimeters for the visible spectrum hold the promise of being able to capture both the angle and degree of linear polarization in real-time and at high spatial resolution. These sensors are realized by monolithic integration of CCD imaging elements with metallic nanowire polarization filter arrays at the focal plane of the sensor. These sensors capture large amounts of raw polarization data and present unique computational challenges as they aim to provide polarimetric information at high spatial and temporal resolutions. The image processing pipeline in a typical DoFP polarimeter is: per-pixel calibration, interpolation of the four sub-sampled polarization pixels, Stokes parameter estimation, angle and degree of linear polarization estimation, and conversion from polarization domain to color space for display purposes. The entire image processing pipeline must operate at the same frame rate as the CCD polarization imaging sensor (40 frames per second) or higher in order to enable real-time extraction of the polarization properties from the imaged environment. To achieve the necessary frame rate, we have implemented and evaluated the image processing pipeline on three different platforms: general purpose CPU, graphics processing unit (GPU), and an embedded FPGA. The computational throughput, power consumption, precision and physical limitations of the implementations on each platform are described in detail and experimental data is provided.

York, Timothy; Powell, Samuel; Gruev, Viktor

2011-10-01
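
A minimal sketch of the Stokes-estimation stage of the pipeline described above, starting from the four interpolated polarizer-orientation sub-images (0°, 45°, 90°, 135°) of a DoFP frame; per-pixel calibration and interpolation are assumed to have been done already:

    import numpy as np

    def linear_stokes(i0, i45, i90, i135):
        """Linear Stokes parameters plus degree and angle of linear polarization."""
        i0, i45 = i0.astype(float), i45.astype(float)
        i90, i135 = i90.astype(float), i135.astype(float)
        s0 = 0.5 * (i0 + i45 + i90 + i135)       # total intensity (averaged estimate)
        s1 = i0 - i90
        s2 = i45 - i135
        dolp = np.sqrt(s1 ** 2 + s2 ** 2) / np.maximum(s0, 1e-6)
        aolp = 0.5 * np.arctan2(s2, s1)
        return s0, s1, s2, dolp, aolp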

399

Models of formation and some algorithms of hyperspectral image processing  

NASA Astrophysics Data System (ADS)

Algorithms and information technologies for processing Earth hyperspectral imagery are presented. Several new approaches are discussed. Peculiar properties of processing the hyperspectral imagery, such as multifold signal-to-noise reduction, atmospheric distortions, access to spectral characteristics of every image point, and high dimensionality of data, were studied. Different measures of similarity between individual hyperspectral image points and the effect of additive uncorrelated noise on these measures were analyzed. It was shown that these measures are substantially affected by noise, and a new measure free of this disadvantage was proposed. The problem of detecting the observed scene object boundaries, based on comparing the spectral characteristics of image points, is considered. It was shown that contours are processed much better when spectral characteristics are used instead of energy brightness. A statistical approach to the correction of atmospheric distortions, which makes it possible to solve the stated problem based on analysis of a distorted image in contrast to analytical multiparametric models, was proposed. Several algorithms used to integrate spectral zonal images with data from other survey systems, which make it possible to image observed scene objects with a higher quality, are considered. Quality characteristics of hyperspectral data processing were proposed and studied.

Achmetov, R. N.; Stratilatov, N. R.; Yudakov, A. A.; Vezenov, V. I.; Eremeev, V. V.

2014-12-01
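
As an example of the per-pixel similarity measures discussed in the record above, here is the standard spectral angle between two spectral signatures (a classical measure in hyperspectral processing; the record does not state whether it is among the measures the paper analyzes, and it is not the new noise-robust measure the paper proposes):

    import numpy as np

    def spectral_angle(x, y):
        """Angle (radians) between two spectral vectors; 0 means identical spectral shape."""
        cos = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
        return np.arccos(np.clip(cos, -1.0, 1.0))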

400

Management Of Airborne Reconnaissance Images Through Real-Time Processing  

NASA Astrophysics Data System (ADS)

Digital reconnaissance images gathered by low-altitude over-flights with resolutions on the order of a few feet and fields of view up to 120 degrees can generate millions of pixels per second. Storing this data in-flight, transmitting it to the ground, and analyzing it presents significant problems to the tactical community. One potential solution is in-flight preview and pruning of the data, where an operator keeps or transmits only those image segments which on first view contain potential intelligence data. To do this, the images must be presented to the operator in a geometrically correct form. Wide-angle distortion, distortions induced by yaw, pitch, roll and altitude variations, and distortions due to non-ideal alignment of the focal plane array must be removed so the operator can quickly assess the scene content and make decisions on which image segments to keep. When multiple sensors are used with a common field of view, they must be mutually coregistered to permit multispectral or multimode processing to exploit these rich data dimensions. In addition, the operator should be able to alter the apparent point of view of the image, i.e., be able to zoom in and out, rotate, and roam through the displayed field of view while maintaining geometric and radiometric precision. These disparate requirements have a common feature in the ability to perform real-time image geometry manipulation. The role of image geometry manipulation, or image warping, is reviewed and a "strawman" system discussed which incorporates the Pipelined Resampling Processor (PRP). The PRP is a real-time image warping processor discussed at this conference in previous years [2,3]. Actual results from the PRP prototype are presented. In addition, other image processing aids such as image enhancement and object classification are discussed as they apply to reconnaissance applications.

Endsley, Neil H.

1985-12-01

401

Color image processing and object tracking workstation  

NASA Technical Reports Server (NTRS)

A system is described for automatic and semiautomatic tracking of objects on film or video tape which was developed to meet the needs of the microgravity combustion and fluid science experiments at NASA Lewis. The system consists of individual hardware parts working under computer control to achieve a high degree of automation. The most important hardware parts include 16 mm film projector, a lens system, a video camera, an S-VHS tapedeck, a frame grabber, and some storage and output devices. Both the projector and tapedeck have a computer interface enabling remote control. Tracking software was developed to control the overall operation. In the automatic mode, the main tracking program controls the projector or the tapedeck frame incrementation, grabs a frame, processes it, locates the edge of the objects being tracked, and stores the coordinates in a file. This process is performed repeatedly until the last frame is reached. Three representative applications are described. These applications represent typical uses and include tracking the propagation of a flame front, tracking the movement of a liquid-gas interface with extremely poor visibility, and characterizing a diffusion flame according to color and shape.

Klimek, Robert B.; Paulick, Michael J.

1992-01-01

402

AR/D image processing system  

NASA Technical Reports Server (NTRS)

General Dynamics has developed advanced hardware, software, and algorithms for use with the Tomahawk cruise missile and other unmanned vehicles. We have applied this technology to the problem of locating and determining the orientation of the docking port of a target vehicle with respect to an approaching spacecraft. The system described in this presentation utilizes a multi-processor based computer to digitize and process television imagery and extract parameters such as range to the target vehicle, approach, velocity, and pitch and yaw angles. The processor is based on the Inmos T-800 Transputer and is configured as a loosely coupled array. Each processor operates asynchronously and has its own local memory. This allows additional processors to be easily added if additional processing power is required for more complex tasks. Total system throughput is approximately 100 MIPS (scalar) and 60 MFLOPS and can be expanded as desired. The algorithm implemented on the system uses a unique adaptive thresholding technique to locate the target vehicle and determine the approximate position of the docking port. A target pattern surrounding the port is than analyzed in the imagery to determine the range and orientation of the target. This information is passed to an autopilot which uses it to perform course and speed corrections. Future upgrades to the processor are described which will enhance its capabilities for a variety of missions.

Wookey, Cathy; Nicholson, Bruce

1991-01-01

403

Efficiency of image processing architectures near the focal plane array  

NASA Astrophysics Data System (ADS)

We report on the capabilities and efficiencies made possible by placing image processing functions near or on the Focal Plane Array (FPA). Recent work in advanced near-FPA signal processing has shown that it is possible to migrate many of the heretofore off-focal-plane image processing tasks onto the Readout Integrated Circuit (ROIC). The goals of this work are to describe and demonstrate the feasibility of "Activity Sensing" and the computational efficiency associated with this type of on-FPA processing. Bottleneck reduction, intelligent information processing, and adaptive bandwidth compression are also key challenges of the next generation FPA architectures with on-FPA processing. We report on the development and performance benefits expected from an Activity Sensing algorithm using recorded infrared (IR) data from a large format 1024 × 1024 variable acuity indium antimonide FPA sensor.

Caulfield, J. T.; McCarley, P. L.; Curzan, J. P.; Massie, M. A.

2006-05-01

404

Teaching dual-process diagnostic reasoning to doctor of nursing practice students: problem-based learning and the illness script.  

PubMed

Accelerating the development of diagnostic reasoning skills for nurse practitioner students is high on the wish list of many faculty. The purpose of this article is to describe how the teaching strategy of problem-based learning (PBL) that drills the hypothetico-deductive or analytic reasoning process when combined with an assignment that fosters pattern recognition (a nonanalytic process) teaches and reinforces the dual process of diagnostic reasoning. In an online Doctor of Nursing Practice program, four PBL cases that start with the same symptom unfold over 2 weeks. These four cases follow different paths as they unfold leading to different diagnoses. Culminating each PBL case, a unique assignment called an illness script was developed to foster the development of pattern recognition. When combined with hypothetico-deductive reasoning drilled during the PBL case, students experience the dual process approach to diagnostic reasoning used by clinicians. PMID:25350904

Durham, Catherine O; Fowler, Terri; Kennedy, Sally

2014-11-01

405

On-demand server-side image processing for web-based DICOM image display  

NASA Astrophysics Data System (ADS)

Low-cost image delivery is needed in modern networked hospitals. If a hospital has hundreds of clients, the cost of client systems is a big problem. Naturally, a Web-based system is the most effective solution, but a Web browser alone cannot display medical images with certain image processing applied, such as a lookup table transformation. We developed a Web-based medical image display system using a Web browser and on-demand server-side image processing. All images displayed on a Web page are generated from DICOM files on a server and delivered on demand. User interaction on the Web page is handled by a client-side scripting technology such as JavaScript. This combination gives the look and feel of an imaging workstation, both in functionality and in speed. Real-time update of images while tracing mouse motion is achieved in the Web browser without any client-side image processing, which would otherwise require client-side plug-in technology such as Java Applets or ActiveX. We tested the performance of the system in three cases: a single client, a small number of clients on a fast network, and a large number of clients on a normal-speed network. The results show that the communication overhead is very slight and that the system is very scalable in the number of clients.
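
A minimal sketch of the on-demand server-side idea follows: the browser requests an image with window width/level parameters and receives a PNG computed on the server from the stored DICOM file. Flask, pydicom and Pillow, as well as the file path, are assumptions used here for illustration and are not the system described in the abstract.

    import io
    import numpy as np
    import pydicom
    from flask import Flask, request, send_file
    from PIL import Image

    app = Flask(__name__)
    DICOM_PATH = "study/slice001.dcm"          # hypothetical file location

    @app.route("/image")
    def render():
        ww = float(request.args.get("ww", 400))
        wl = float(request.args.get("wl", 40))
        pixels = pydicom.dcmread(DICOM_PATH).pixel_array.astype(np.float32)
        lo, hi = wl - ww / 2.0, wl + ww / 2.0
        lut = np.clip((pixels - lo) / (hi - lo), 0.0, 1.0) * 255.0   # window/level LUT
        png = io.BytesIO()
        Image.fromarray(lut.astype(np.uint8)).save(png, format="PNG")
        png.seek(0)
        return send_file(png, mimetype="image/png")

    if __name__ == "__main__":
        app.run()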

Sakusabe, Takaya; Kimura, Michio; Onogi, Yuzo

2000-04-01

406

Wavelets in temporal and spatial processing of biomedical images.  

PubMed

We review some of the most recent advances in the area of wavelet applications in medical imaging. We first review key concepts in the processing of medical images with wavelet transforms and multiscale analysis, including time-frequency tiling, overcomplete representations, higher dimensional bases, symmetry, boundary effects, translational invariance, orientation selectivity, and best-basis selection. We next describe some applications in magnetic resonance imaging, including activation detection and denoising of functional magnetic resonance imaging and encoding schemes. We then present an overview in the area of ultrasound, including computational anatomy with three-dimensional cardiac ultrasound. Next, wavelets in tomography are reviewed, including their relationship to the Radon transform and applications in positron emission tomography imaging. Finally, wavelet applications in digital mammography are reviewed, including computer-assisted diagnostic systems that support the detection and classification of small masses and methods of contrast enhancement. PMID:11701522
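
As one concrete example of the class of techniques reviewed, the sketch below applies soft-threshold wavelet denoising to a 2-D image with PyWavelets; the wavelet, decomposition level and universal threshold are illustrative choices, not those of the cited studies.

    import numpy as np
    import pywt

    def wavelet_denoise(img, wavelet="db4", level=3):
        coeffs = pywt.wavedec2(img, wavelet, level=level)
        # Estimate noise sigma from the finest diagonal detail band.
        sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
        thresh = sigma * np.sqrt(2.0 * np.log(img.size))     # universal threshold
        new_coeffs = [coeffs[0]]
        for detail in coeffs[1:]:
            new_coeffs.append(tuple(pywt.threshold(d, thresh, mode="soft")
                                    for d in detail))
        return pywt.waverec2(new_coeffs, wavelet)

    if __name__ == "__main__":
        noisy = np.random.randn(128, 128)
        clean = wavelet_denoise(noisy)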

Laine, A F

2000-01-01

407

Cloud based toolbox for image analysis, processing and reconstruction tasks.  

PubMed

This chapter describes a novel way of carrying out image analysis, reconstruction and processing tasks using a cloud-based service provided on the Australian National eResearch Collaboration Tools and Resources (NeCTAR) infrastructure. The toolbox allows users free access to a wide range of useful blocks of functionality (imaging functions) that can be connected together in workflows, allowing the creation of even more complex algorithms that can be re-run on different data sets, shared with others or further adjusted. The functions provided are in the areas of cellular imaging, advanced X-ray image analysis, computed tomography and 3D medical imaging and visualisation. The service is currently available on the website www.cloudimaging.net.au . PMID:25381109

Bednarz, Tomasz; Wang, Dadong; Arzhaeva, Yulia; Lagerstrom, Ryan; Vallotton, Pascal; Burdett, Neil; Khassapov, Alex; Szul, Piotr; Chen, Shiping; Sun, Changming; Domanski, Luke; Thompson, Darren; Gureyev, Timur; Taylor, John A

2015-01-01

408

Digital camera zooming based on unified CFA image processing steps  

Microsoft Academic Search

A unified camera image processing system that performs zooming and full color image reconstruction for single-sensor digital cameras is introduced. Compact and low-cost single-sensor solutions often lack optical zooming capabilities and thus depend on digital techniques. However, the computational power required for high-quality output using traditional techniques is generally too prohibitive to implement in such devices. The proposed scheme employs

R. Lukac; K. Martin; K. N. Plataniotis

2004-01-01

409

Metrology of Image Processing in Spectral Reflectance Measurement by Uav  

NASA Astrophysics Data System (ADS)

Remote sensing based on unmanned airborne vehicles (UAVs) is a rapidly developing field of technology. For many potential UAV remote sensing applications, accurate reflectance measurements are required. The overall objective of our investigation is to develop an SI-traceable procedure for reflectance measurement using spectrometric image data collected by a UAV. In this article, our objective is to investigate the uncertainty propagation of image data post-processing. We also present the first results of three traceable UAV remote sensing campaigns.

Honkavaara, E.; Hakala, T.; Markelin, L.; Peltoniemi, J.

2014-03-01

410

A study of correlation technique on pyramid processed images  

Microsoft Academic Search

The pyramid algorithm is potentially a powerful tool for advanced television image processing and for pattern recognition. An attempt is made to design and develop both hardware and software for a system which performs decomposition and reconstruction of digitized images by implementing the Burt pyramid algorithm. In this work, an attempt is also made to study correlation performance on reconstructed

M. Sankar Kishore; K. Veerabhadra Rao

2000-01-01

411

Surface Distresses Detection of Pavement Based on Digital Image Processing  

Microsoft Academic Search

Pavement cracking is the main form of early pavement distress. The use of digital photography to record pavement images and subsequent crack detection and classification has undergone continuous improvement over the past decade. Digital image processing has been applied to detect pavement cracks for its advantages of large information content and automatic detection. The applications of digital

Aiguo Ouyang; Chagen Luo; Chao Zhou

2010-01-01

412

Eye-Gaze Tracking Research Based on Image Processing  

Microsoft Academic Search

This paper presents an eye-gaze tracking system based on image processing. All the computations are performed in software and the system just needs a PC camera attached to the user's computer. We first extract the facial regions from the images using the skin-color model and connected-component analysis. Then the eye regions are detected by employing the rules and area

Tao Liu; Changle Pang

2008-01-01

413

Experiential Learning Process: Exploring Teaching and Learning of Strategic Management Framework through the Winter Survival Exercise  

ERIC Educational Resources Information Center

This article examines an attempt to introduce experiential learning methods in a business strategy course. In organizational behavior and industrial/organizational psychology, experiential teaching methods have been so widely adopted that some authors have suggested dropping the distinction between experiential and traditional teaching. Although…

Joshi, Maheshkumar P.; Davis, Elizabeth B.; Kathuria, Ravi; Weidner, C. Ken, II

2005-01-01

414

The Process of Developing a Partnership between Teaching Artists and Teachers  

ERIC Educational Resources Information Center

The author is a teaching artist. Most of her previous experiences as a teaching artist have afforded her the opportunity to create and facilitate a lesson or an arc of lessons through a residency in a school. She was the "expert" artist in the partnership. The author longs to have a deeper, more fruitful impact on classroom learning. What might…

Lee, Bridget

2013-01-01

415

Developing the skills and techniques for online language teaching: a focus on the process  

Microsoft Academic Search

This paper aims to describe the experience of two online tutors as they learn to teach the language to learners at a distance. The two tutors formed part of a cohort of eight participants who attended a four-week training course (Stage 1) followed by an eight-week online teaching practice (Stage 2) from November 2006 to February 2007 at Griffith University,

Mike Levy; Yuping Wang; Nian-Shing Chen

2009-01-01

416

Self-Assessment of Self-Assessment in a Process of Co-Teaching  

ERIC Educational Resources Information Center

The present paper presents qualitative research on the self-assessment of two lecturers and their students within the framework of a mathematics teaching seminar course (a course during which students submit a final research work) at a teachers' training college in Israel. Two lecturers co-teach in the course - one of them in the discipline of…

Wolffensperger, Yochie; Patkin, Dorit

2013-01-01

417

A Pilot-Scale Heat Recovery System for Computer Process Control Teaching and Research.  

ERIC Educational Resources Information Center

Describes the experimental system and equipment including an interface box for displaying variables. Discusses features which make the circuit suitable for teaching and research in computing. Feedforward, decoupling, and adaptive control, examination of digital filtering, and a cascade loop are teaching experiments utilizing this rig. Diagrams and…

Callaghan, P. J.; And Others

1988-01-01

418

Capstone Teaching Models: Combining Simulation, Analytical Intuitive Learning Processes, History and Effectiveness  

ERIC Educational Resources Information Center

For the past decade teaching models have been changing, reflecting the dynamics, complexities, and uncertainties of today's organizations. The traditional and the more current active models of learning have disadvantages. Simulation provides a platform to combine the best aspects of both types of teaching practices. This research explores the…

Reid, Maurice; Brown, Steve; Tabibzadeh, Kambiz

2012-01-01

419

Student teachers' thinking processes and ICT integration: Predictors of prospective teaching behaviors with educational technology  

Microsoft Academic Search

Student teachers should be prepared to integrate information and communication technology (ICT) into their future teaching and learning practices. Despite the increased availability and support for ICT integration, relatively few teachers intend to integrate ICT into their teaching activities (e.g., Ertmer, 2005). The available research has thus far mainly focused on isolated teacher related variables to explain the weak

Guoyuan Sang; Martin Valcke; Johan van Braak; Jo Tondeur

2010-01-01

420

Application of Digital Image Processing Methods for Portal Image Quality Improvement  

SciTech Connect

The different processing methods which could increase contrast (unsharp mask, histogram equalization, and deconvolution) and reduce noise (median filter) were analysed. An application which allows the importation of BeamView files (ACR-NEMA 2.0 format) and the application of the above-mentioned methods was developed. The main objective was to obtain the most accurate comparison of BeamView images with Digitally Received Radiograms. The preliminary results of the image processing methods are presented.
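
Two of the methods named above, unsharp masking and median filtering, are easy to sketch; the SciPy-based example below uses illustrative parameter values (blur sigma, gain, kernel size) rather than those of the developed application.

    import numpy as np
    from scipy import ndimage

    def unsharp_mask(img, sigma=3.0, gain=1.5):
        """Boost high-frequency detail by adding back the residual of a Gaussian blur."""
        blurred = ndimage.gaussian_filter(img.astype(float), sigma)
        return img + gain * (img - blurred)

    def denoise_median(img, size=3):
        """Suppress impulsive noise with a small median filter."""
        return ndimage.median_filter(img, size=size)

    if __name__ == "__main__":
        portal = np.random.rand(256, 256)
        enhanced = unsharp_mask(denoise_median(portal))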

Gorlachev, G. E. [Radiology Department, N.N. Burdenko Neurosurgical Institute, Moscow (Russian Federation); Kosyrev, D. S. [Radiology Department, N.N. Burdenko Neurosurgical Institute, Moscow (Russian Federation); Medical Physics Department, Moscow Engineering Physical Institute (Russian Federation)

2007-11-26

421

Image processing for new optical pattern recognition encoders  

NASA Astrophysics Data System (ADS)

An all new type of absolute, optical encoder with ultra-high sensitivity has been developed at NASA's Goddard Space Flight Center. These position measuring encoders are unconventional in that they rely on computational pattern recognition of high speed, electronic images, made of a moving, backlit scale which carries absolute position information of either linear or rotary format. The pattern recognition algorithms combine edge detection, threshold level sensing, spatial compression, and centroiding along with fault recovery through scale image defect detection. Details of the encoder scale patterns and their design rules and the image processing algorithm which gives these encoders their unique and unparalleled performance characteristics are discussed.

Leviton, Douglas B.

2000-11-01

422

Toolbox for advanced x-ray image processing  

NASA Astrophysics Data System (ADS)

A software system has been developed for high-performance Computed Tomography (CT) reconstruction, simulation and other X-ray image processing tasks utilizing remote computer clusters optionally equipped with multiple Graphics Processing Units (GPUs). The system has a streamlined Graphical User Interface for interaction with the cluster. Apart from extensive functionality related to X-ray CT in plane-wave and cone-beam forms, the software includes multiple functions for X-ray phase retrieval and simulation of phase-contrast imaging (propagation-based, analyzer crystal based and Talbot interferometry). Other features include several methods for image deconvolution, simulation of various phase-contrast microscopy modes (Zernike, Schlieren, Nomarski, dark-field, interferometry, etc.) and a large number of conventional image processing operations (such as FFT, algebraic and geometrical transformations, pixel value manipulations, simulated image noise, various filters, etc.). The architectural design of the system is described, as well as the two-level parallelization of the most computationally-intensive modules utilizing both the multiple CPU cores and multiple GPUs available in a local PC or a remote computer cluster. Finally, some results about the current system performance are presented. This system can potentially serve as a basis for a flexible toolbox for X-ray image analysis and simulation, that can efficiently utilize modern multi-processor hardware for advanced scientific computations.

Gureyev, Timur E.; Nesterets, Yakov; Ternovski, Dimitri; Thompson, Darren; Wilkins, Stephen W.; Stevenson, Andrew W.; Sakellariou, Arthur; Taylor, John A.

2011-09-01

423

Latency and bandwidth considerations in parallel robotics image processing  

SciTech Connect

Parallel image processing for robotics applications differs in a fundamental way from parallel scientific computing applications: the problem size is fixed, and latency requirements are tight. This brings Amdahl's law into effect with full force, so that message-passing latency and bandwidth severely restrict performance. In this paper the authors examine an application from this domain, stereo image processing, which has been implemented in Adapt, a niche language for parallel image processing implemented on the Carnegie Mellon-Intel Corporation iWarp. High performance has been achieved for this application. They show how an I/O building block approach on iWarp achieved this, and then examine the implications of this performance for more traditional machines that do not have iWarp's rich I/O primitive set.
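
The Amdahl's-law point can be made concrete with a few lines of arithmetic: with a fixed problem size, even a small serial or communication fraction caps the achievable speedup.

    def amdahl_speedup(parallel_fraction, n_processors):
        """Speedup predicted by Amdahl's law for a fixed problem size."""
        serial = 1.0 - parallel_fraction
        return 1.0 / (serial + parallel_fraction / n_processors)

    if __name__ == "__main__":
        # Even 1% serial work limits 64 processors to roughly 39x speedup.
        for p in (0.90, 0.99):
            print(p, [round(amdahl_speedup(p, n), 1) for n in (4, 16, 64)])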

Webb, J.A. [Carnegie Mellon Univ., Pittsburgh, PA (United States). School of Computer Science

1993-12-31

424

The Multimission Image Processing Laboratory's virtual frame buffer interface  

NASA Technical Reports Server (NTRS)

Large image processing systems use multiple frame buffers with differing architectures and vendor supplied interfaces. This variety of architectures and interfaces creates software development, maintenance and portability problems for application programs. Several machine-dependent graphics standards such as ANSI Core and GKS are available, but none of them are adequate for image processing. Therefore, the Multimission Image Processing Laboratory project has implemented a programmer-level virtual frame buffer interface. This interface makes all frame buffers appear as a generic frame buffer with a specified set of characteristics. This document defines the virtual frame buffer interface and provides information such as FORTRAN subroutine definitions, frame buffer characteristics, sample programs, etc. It is intended to be used by application programmers and system programmers who are adding new frame buffers to a system.
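
The idea can be schematized as follows; this Python sketch is only an analogue of the FORTRAN-level interface described, and the class and method names are invented for illustration.

    from abc import ABC, abstractmethod

    class VirtualFrameBuffer(ABC):
        """Generic frame buffer: applications code against this one interface."""
        @abstractmethod
        def open(self, device_id: str) -> None: ...
        @abstractmethod
        def write_image(self, plane: int, pixels: bytes) -> None: ...
        @abstractmethod
        def set_lut(self, plane: int, lut: list[int]) -> None: ...
        @abstractmethod
        def close(self) -> None: ...

    class VendorXFrameBuffer(VirtualFrameBuffer):
        """Driver translating generic calls into one vendor's hardware interface."""
        def open(self, device_id): print(f"open {device_id}")
        def write_image(self, plane, pixels): print(f"write {len(pixels)} bytes to plane {plane}")
        def set_lut(self, plane, lut): print(f"load {len(lut)}-entry LUT on plane {plane}")
        def close(self): print("close")

    if __name__ == "__main__":
        fb = VendorXFrameBuffer()
        fb.open("/dev/fb0")
        fb.write_image(0, bytes(512 * 512))
        fb.close()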

Wolfe, T.

1984-01-01

425

An Automated Image Processing System for Concrete Evaluation  

SciTech Connect

AlliedSignal Federal Manufacturing & Technologies (FM&T) was asked to perform a proof-of-concept study for the Missouri Highway and Transportation Department (MHTD), Research Division, in June 1997. The goal of this proof-of-concept study was to ascertain if automated scanning and imaging techniques might be applied effectively to the problem of concrete evaluation. In the current evaluation process, a concrete sample core is manually scanned under a microscope. Voids (or air spaces) within the concrete are then detected visually by a human operator by incrementing the sample under the cross-hairs of a microscope and by counting the number of "pixels" which fall within a void. Automation of the scanning and image analysis processes is desired to improve the speed of the scanning process, to improve evaluation consistency, and to reduce operator fatigue. An initial, proof-of-concept image analysis approach was successfully developed and demonstrated using acquired black and white imagery of concrete samples. In this paper, the automated scanning and image capture system currently under development will be described and the image processing approach developed for the proof-of-concept study will be demonstrated. A development update and plans for future enhancements are also presented.
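
A minimal sketch of the image-analysis step, thresholding dark voids and counting them with connected-component labelling, is given below using SciPy; the threshold value is an illustrative assumption.

    import numpy as np
    from scipy import ndimage

    def count_voids(gray, void_thresh=60):
        """Label dark regions (voids) and return their count and pixel areas."""
        voids = gray < void_thresh
        labels, n = ndimage.label(voids)
        areas = np.asarray(ndimage.sum(voids, labels, range(1, n + 1)))
        return n, areas

    if __name__ == "__main__":
        sample = (np.random.rand(200, 200) * 255).astype(np.uint8)
        n, areas = count_voids(sample)
        print(f"{n} voids, mean area {areas.mean():.1f} px")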

Baumgart, C.W.; Cave, S.P.; Linder, K.E.

1998-11-23

426

Grid Computing Application for Brain Magnetic Resonance Image Processing  

NASA Astrophysics Data System (ADS)

This work emphasizes the use of grid computing and web technology for automatic post-processing of brain magnetic resonance images (MRI) in the context of neuropsychiatric (Alzheimer's disease) research. Post-acquisition image processing is achieved through the interconnection of several individual processes into pipelines. Each process has input and output data ports, options and execution parameters, and performs single tasks such as: a) extracting individual image attributes (e.g. dimensions, orientation, center of mass), b) performing image transformations (e.g. scaling, rotation, skewing, intensity standardization, linear and non-linear registration), c) performing image statistical analyses, and d) producing the necessary quality control images and/or files for user review. The pipelines are built to perform specific sequences of tasks on the alphanumeric data and MRIs contained in our database. The web application is coded in PHP and allows the creation of scripts to create, store and execute pipelines and their instances either on our local cluster or on high-performance computing platforms. To run an instance on an external cluster, the web application opens a communication tunnel through which it copies the necessary files, submits the execution commands and collects the results. We present results of system tests for the processing of a set of 821 brain MRIs from the Alzheimer's Disease Neuroimaging Initiative study via a nonlinear registration pipeline composed of 10 processes. Our results show successful execution on both local and external clusters, and a 4-fold increase in performance if using the external cluster. However, the latter's performance does not scale linearly as queue waiting times and execution overhead increase with the number of tasks to be executed.
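
The pipeline notion, individual processes with inputs and outputs chained into a fixed sequence, can be sketched as below; the two example steps are simplified stand-ins, not the actual registration pipeline.

    from typing import Callable, List
    import numpy as np

    Step = Callable[[np.ndarray], np.ndarray]

    def run_pipeline(volume: np.ndarray, steps: List[Step]) -> np.ndarray:
        """Apply each single-task process to the output of the previous one."""
        for step in steps:
            volume = step(volume)
        return volume

    def standardize_intensity(v):  return (v - v.mean()) / (v.std() + 1e-9)
    def scale_half(v):             return v[::2, ::2, ::2]

    if __name__ == "__main__":
        mri = np.random.rand(64, 64, 64)
        out = run_pipeline(mri, [standardize_intensity, scale_half])
        print(out.shape)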

Valdivia, F.; Crépeault, B.; Duchesne, S.

2012-02-01

427

Automated Processing of Zebrafish Imaging Data: A Survey  

PubMed Central

Abstract Due to the relative transparency of its embryos and larvae, the zebrafish is an ideal model organism for bioimaging approaches in vertebrates. Novel microscope technologies allow the imaging of developmental processes in unprecedented detail, and they enable the use of complex image-based read-outs for high-throughput/high-content screening. Such applications can easily generate Terabytes of image data, the handling and analysis of which becomes a major bottleneck in extracting the targeted information. Here, we describe the current state of the art in computational image analysis in the zebrafish system. We discuss the challenges encountered when handling high-content image data, especially with regard to data quality, annotation, and storage. We survey methods for preprocessing image data for further analysis, and describe selected examples of automated image analysis, including the tracking of cells during embryogenesis, heartbeat detection, identification of dead embryos, recognition of tissues and anatomical landmarks, and quantification of behavioral patterns of adult fish. We review recent examples for applications using such methods, such as the comprehensive analysis of cell lineages during early development, the generation of a three-dimensional brain atlas of zebrafish larvae, and high-throughput drug screens based on movement patterns. Finally, we identify future challenges for the zebrafish image analysis community, notably those concerning the compatibility of algorithms and data formats for the assembly of modular analysis pipelines. PMID:23758125

Dickmeis, Thomas; Driever, Wolfgang; Geurts, Pierre; Hamprecht, Fred A.; Kausler, Bernhard X.; Ledesma-Carbayo, María J.; Marée, Raphaël; Mikula, Karol; Pantazis, Periklis; Ronneberger, Olaf; Santos, Andres; Stotzka, Rainer; Strähle, Uwe; Peyriéras, Nadine

2013-01-01

428

Image Algebra Matlab language version 2.3 for image processing and compression research  

NASA Astrophysics Data System (ADS)

Image algebra is a rigorous, concise notation that unifies linear and nonlinear mathematics in the image domain. Image algebra was developed under DARPA and US Air Force sponsorship at University of Florida for over 15 years beginning in 1984. Image algebra has been implemented in a variety of programming languages designed specifically to support the development of image processing and computer vision algorithms and software. The University of Florida has been associated with development of the languages FORTRAN, Ada, Lisp, and C++. The latter implementation involved a class library, iac++, that supported image algebra programming in C++. Since image processing and computer vision are generally performed with operands that are array-based, the Matlab™ programming language is ideal for implementing the common subset of image algebra. Objects include sets and set operations, images and operations on images, as well as templates and image-template convolution operations. This implementation, called Image Algebra Matlab (IAM), has been found to be useful for research in data, image, and video compression, as described herein. Due to the widespread acceptance of the Matlab programming language in the computing community, IAM offers exciting possibilities for supporting a large group of users. The control over an object's computational resources provided to the algorithm designer by Matlab means that IAM programs can employ versatile representations for the operands and operations of the algebra, which are supported by the underlying libraries written in Matlab. In a previous publication, we showed how the functionality of IAC++ could be carried forth into a Matlab implementation, and provided practical details of a prototype implementation called IAM Version 1. In this paper, we further elaborate the purpose and structure of image algebra, then present a maturing implementation of Image Algebra Matlab called IAM Version 2.3, which extends the previous implementation of IAM to include polymorphic operations over different point sets, as well as recursive convolution operations and functional composition. We also show how image algebra and IAM can be employed in image processing and compression research, as well as algorithm development and analysis.

Schmalz, Mark S.; Ritter, Gerhard X.; Hayden, Eric

2010-08-01

429

A new programming metaphor for image processing procedures  

NASA Technical Reports Server (NTRS)

Most image processing systems, besides an Application Program Interface (API) which lets users write their own image processing programs, also feature a higher level of programmability. Traditionally, this is a command or macro language, which can be used to build large procedures (scripts) out of simple programs or commands. This approach, a legacy of the teletypewriter, has serious drawbacks. A command language is clumsy when (and if) it attempts to utilize the capabilities of a multitasking or multiprocessor environment, it is barely adequate for real-time data acquisition and processing, it has a fairly steep learning curve, and the user interface is very inefficient, especially when compared to a graphical user interface (GUI) that systems running under X11 or Windows should otherwise be able to provide. All these difficulties stem from one basic problem: a command language is not a natural metaphor for an image processing procedure. A more natural metaphor, an image processing factory, is described in detail. A factory is a set of programs (applications) that execute separate operations on images, connected by pipes that carry data (images and parameters) between them. The programs function concurrently, processing images as they arrive along pipes, and querying the user for whatever other input they need. From the user's point of view, programming (constructing) factories is a lot like playing with LEGO blocks, much more intuitive than writing scripts. Focus is on some of the difficulties of implementing factory support, most notably the design of an appropriate API. It also shows that factories retain all the functionality of a command language (including loops and conditional branches), while suffering from none of the drawbacks outlined above. Other benefits of factory programming include self-tuning factories and the process of encapsulation, which lets a factory take the shape of a standard application both from the system's and the user's point of view, and thus be used as a component of other factories. A bare-bones prototype of factory programming was implemented under the PcIPS image processing system, and a complete version (on a multitasking platform) is under development.
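
The factory metaphor, concurrent programs connected by pipes that carry images between them, can be illustrated with a toy multiprocessing example; this is only a sketch of the idea, not the PcIPS implementation.

    import multiprocessing as mp
    import numpy as np

    def invert_stage(inp, outp):
        """First stage: invert each image as it arrives and pass it downstream."""
        while True:
            img = inp.recv()
            if img is None:                 # poison pill shuts the factory down
                outp.send(None)
                break
            outp.send(1.0 - img)

    def threshold_stage(inp, results):
        """Second stage: count bright pixels in each processed image."""
        while True:
            img = inp.recv()
            if img is None:
                break
            results.put(int((img > 0.5).sum()))

    if __name__ == "__main__":
        a_out, a_in = mp.Pipe()
        b_out, b_in = mp.Pipe()
        results = mp.Queue()
        mp.Process(target=invert_stage, args=(a_out, b_in)).start()
        mp.Process(target=threshold_stage, args=(b_out, results)).start()
        for _ in range(3):
            a_in.send(np.random.rand(64, 64))
        a_in.send(None)
        print([results.get() for _ in range(3)])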

Smirnov, O. M.; Piskunov, N. E.

1992-01-01

430

A synoptic description of coal basins via image processing  

NASA Technical Reports Server (NTRS)

An existing image processing system is adapted to describe the geologic attributes of a regional coal basin. This scheme handles a map as if it were a matrix, in contrast to more conventional approaches which represent map information in terms of linked polygons. The utility of the image processing approach is demonstrated by a multiattribute analysis of the Herrin No. 6 coal seam in Illinois. Findings include the location of a resource and estimation of tonnage corresponding to constraints on seam thickness, overburden, and Btu value, which are illustrative of the need for new mining technology.

Farrell, K. W., Jr.; Wherry, D. B.

1978-01-01

431

High-resolution imaging of the supercritical antisolvent process  

NASA Astrophysics Data System (ADS)

A high-magnification and high-resolution imaging technique was developed for the supercritical fluid antisolvent (SAS) precipitation process. Visualizations of the jet injection, flow patterns, droplets, and particles were obtained in a high-pressure vessel for polylactic acid and budesonide precipitation in supercritical CO2. The results show two regimes for particle production: one where turbulent mixing occurs in gas-like plumes, and another where distinct droplets were observed in the injection. Images are presented to demonstrate the capabilities of the method for examining particle formation theories and for understanding the underlying fluid mechanics, thermodynamics, and mass transport in the SAS process.

Bell, Philip W.; Stephens, Amendi P.; Roberts, Christopher B.; Duke, Steve R.

2005-06-01

432

Mobile medical image retrieval  

Microsoft Academic Search

Images are an integral part of medical practice for diagnosis, treatment planning and teaching. Image retrieval has gained in importance mainly as a research domain over the past 20 years. Both textual and visual retrieval of images are essential. As mobile devices have become reliable and reached a functionality equaling that of former desktop clients, mobile computing has

Samuel Duc; Adrien Depeursinge; Ivan Eggel; Henning Müller

2011-01-01

433

DSP filters in FPGAs for image processing applications  

NASA Astrophysics Data System (ADS)

Real-time video-rate image processing requires orders of magnitude more performance than general purpose computers can provide. ASICs deliver the required performance; however, they have the drawback of fixed functionality. Field programmable gate arrays (FPGAs) are reprogrammable SRAM based ICs capable of real-time image processing. FPGAs deliver the benefits of hardware execution speeds and software programmability. An FPGA program creates a custom data processor, which executes the equivalent of hundreds to thousands of lines of C code on the same clock tick. FPGAs emulate circuits which are normally built as ASICs. Multiple real-time video streams can be processed in Giga Operations' Spectrum Reconfigurable Computing (RC) Platform™. The Virtual Bus Architecture™ enables the same hardware to be configured into many image processing architectures, including 32-bit pipelines, global busses, rings, and systolic arrays. This allows an efficient mapping of data flows and memory access for many image processing applications and the implementation of many real-time DSP filters, including convolutions, morphological operators, and recoloring and resampling algorithms. FPGAs provide significant price/performance benefits versus ASICs where time to market, cost to market, and technical risk are issues, and FPGA descriptions migrate efficiently and easily into ASICs for downstream cost reduction.

Taylor, Brad

1996-10-01

434

Teaching ethical analysis in environmental management decisions: a process-oriented approach.  

PubMed

The general public and environmental policy makers often perceive management actions of environmental managers as "science," when such actions are, in fact, value judgments about when to intervene in natural processes. The choice of action requires ethical as well as scientific analysis because managers must choose a normative outcome to direct their intervention. I examine a management case study involving prescribed burning of sagebrush (Artemisia tridentata) communities in south-central Montana (USA) to illustrate how to teach students to ethically evaluate a management action by precisely identifying: 1) the proposed management action, 2) the deficiency of the system to be remedied by the action, 3) the stakeholders affected by the action, and 4) the category and type of values affirmed in the management action. Through such analysis, students are taught to recognize implicit and explicit value judgments associated with management actions, identify stakeholders to whom managers have legitimate ethical obligations, and practice a general method of ethical analysis applicable to many forms of environmental management. PMID:16279763

Dyke, Fred Van

2005-10-01

435

Woods Hole Image Processing System Software implementation; using NetCDF as a software interface for image processing  

USGS Publications Warehouse

The Branch of Atlantic Marine Geology has been involved in the collection, processing and digital mosaicking of high, medium and low-resolution side-scan sonar data during the past 6 years. In the past, processing and digital mosaicking has been accomplished with a dedicated, shore-based computer system. With the need to process sidescan data in the field with increased power and reduced cost of major workstations, a need to have an image processing package on a UNIX based computer system which could be utilized in the field as well as be more generally available to Branch personnel was identified. This report describes the initial development of that package referred to as the Woods Hole Image Processing System (WHIPS). The software was developed using the Unidata NetCDF software interface to allow data to be more readily portable between different computer operating systems.

Paskevich, Valerie F.

1992-01-01

436

Parallel-Processing Software for Creating Mosaic Images  

NASA Technical Reports Server (NTRS)

A computer program implements parallel processing for nearly real-time creation of panoramic mosaics of images of terrain acquired by video cameras on an exploratory robotic vehicle (e.g., a Mars rover). Because the original images are typically acquired at various camera positions and orientations, it is necessary to warp the images into the reference frame of the mosaic before stitching them together to create the mosaic. [Also see "Parallel-Processing Software for Correlating Stereo Images," Software Supplement to NASA Tech Briefs, Vol. 31, No. 9 (September 2007) page 26.] The warping algorithm in this computer program reflects the considerations that (1) for every pixel in the desired final mosaic, a good corresponding point must be found in one or more of the original images and (2) for this purpose, one needs a good mathematical model of the cameras and a good correlation of individual pixels with respect to their positions in three dimensions. The desired mosaic is divided into slices, each of which is assigned to one of a number of central processing units (CPUs) operating simultaneously. The results from the CPUs are gathered and placed into the final mosaic. The time taken to create the mosaic depends upon the number of CPUs, the speed of each CPU, and whether a local or a remote data-staging mechanism is used.
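
A minimal sketch of the slice-parallel scheme follows: the output mosaic is divided into row bands, each band is filled by a separate worker, and the results are stitched together; the band-filling function is a placeholder for the real camera-model warp.

    import numpy as np
    from multiprocessing import Pool

    def fill_band(args):
        """Placeholder worker: a real worker would resample source frames into this band."""
        band_index, height, width = args
        return band_index, np.full((height, width), band_index, dtype=np.float32)

    def build_mosaic(height=1024, width=4096, n_workers=4):
        band_h = height // n_workers
        jobs = [(i, band_h, width) for i in range(n_workers)]
        with Pool(n_workers) as pool:
            bands = dict(pool.map(fill_band, jobs))      # one band per CPU
        return np.vstack([bands[i] for i in range(n_workers)])

    if __name__ == "__main__":
        print(build_mosaic().shape)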

Klimeck, Gerhard; Deen, Robert; McCauley, Michael; DeJong, Eric

2008-01-01

437

Ice images processing interface for automatic features extraction  

NASA Astrophysics Data System (ADS)

The Canadian Coast Guard has the mandate to maintain the navigability of the St. Lawrence Seaway. It must prevent ice jam formation. Radar, sonar sensors and cameras are used to verify ice movement and keep a record of pertinent data. The cameras are placed along the seaway at strategic locations. Images are processed and saved for future reference. The Ice Images Processing Interface (IIPI) is an integral part of the Ices Integrated System (IIS). This software processes images to extract the ice speed, concentration, roughness, and rate of flow. Ice concentration is computed from image segmentation using color models and a priori information. Speed is obtained from a region-matching algorithm. Both concentration and speed calculations are complex, since they require a calibration step involving on-site measurements. Color texture features provide ice roughness estimation. Rate of flow uses ice thickness, which is estimated from sonar sensors on the river floor. Our paper will present how we modeled and designed the IIPI, the issues involved and its future. For more reliable results, we suggest that meteorological data be provided, that changes in camera orientation be accounted for, that sun reflections be anticipated, and that more a priori information, such as radar images available at some sites, be included.
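
The region-matching step used for ice speed can be sketched with normalized template matching, as below; the ground-sample distance and frame interval used to convert pixel displacement to speed are illustrative assumptions.

    import cv2
    import numpy as np

    def ice_speed(frame0, frame1, box, metres_per_px=0.5, dt_s=60.0):
        """Match a block from frame0 in frame1 and convert the shift to speed (m/s)."""
        x, y, w, h = box
        template = frame0[y:y + h, x:x + w]
        score = cv2.matchTemplate(frame1, template, cv2.TM_CCOEFF_NORMED)
        _, _, _, (bx, by) = cv2.minMaxLoc(score)          # best-match location
        dx, dy = bx - x, by - y
        return np.hypot(dx, dy) * metres_per_px / dt_s

    if __name__ == "__main__":
        f0 = (np.random.rand(240, 320) * 255).astype(np.uint8)
        f1 = np.roll(f0, 4, axis=1)                       # synthetic drift of 4 pixels
        print(ice_speed(f0, f1, (100, 100, 40, 40)))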

Tardif, Pierre M.

2001-02-01

438

Image processing of metal surface with structured light  

NASA Astrophysics Data System (ADS)

In a structured light vision measurement system, the ideal image of a structured light stripe contains, apart from the black background, only the gray-level information describing the position of the stripe. However, the actual image contains image noise, complex background, and other content which does not belong to the stripe and which interferes with the useful information. To extract the stripe center on a metal surface accurately, a new processing method was presented. Adaptive median filtering removes the noise preliminarily, and the noise introduced by the CCD camera and the measurement environment is further removed with a difference-image method. To highlight fine details and enhance the blurred regions between the stripe and the noise, a sharpening algorithm is used which combines the best features of the Laplacian and Sobel operators. Morphological opening and closing operations are used to compensate for the loss of information. Experimental results show that this method is effective in the image processing, not only suppressing unwanted information but also heightening contrast, which benefits the subsequent processing.
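
A rough sketch of this pre-processing chain (median filter, difference image, Laplacian/Sobel sharpening, morphological open/close) is given below with OpenCV; kernel sizes and blend weights are illustrative, and the adaptive median step is simplified to a fixed-size median filter.

    import cv2
    import numpy as np

    def preprocess_stripe(img, background):
        f = cv2.medianBlur(img, 5)                         # adaptive step simplified
        diff = cv2.subtract(f, background)                 # remove static background
        lap = cv2.Laplacian(diff, cv2.CV_32F, ksize=3)
        sob = cv2.magnitude(cv2.Sobel(diff, cv2.CV_32F, 1, 0),
                            cv2.Sobel(diff, cv2.CV_32F, 0, 1))
        # Blend Laplacian detail and Sobel edge strength back into the image.
        sharp = cv2.convertScaleAbs(diff.astype(np.float32) + 0.5 * lap + 0.3 * sob)
        kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
        opened = cv2.morphologyEx(sharp, cv2.MORPH_OPEN, kernel)
        return cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)

    if __name__ == "__main__":
        img = (np.random.rand(240, 320) * 255).astype(np.uint8)
        bg = np.zeros_like(img)
        out = preprocess_stripe(img, bg)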

Luo, Cong; Feng, Chang; Wang, Congzheng

2014-09-01

439

Line scan CCD image processing for biomedical application  

NASA Astrophysics Data System (ADS)

Blood samples are frequently analyzed for blood disorders or other diseases in research and clinical applications. Most of the analyses are related to blood cell counts and blood cell sizes. In this paper, a line scan CCD imaging system is developed, based on Texas Instruments' TMS320C6416T (DSP6416), a high-performance digital signal processor, and Altera's Field Programmable Gate Array (FPGA) EP3C25F324. It is used to acquire and process images of blood cells for counting the number of cells, sizing them, and positioning them. The cell image is captured by a line scan CCD sensor, and the digital image data converted by the Analogue Front-End (AFE) are transferred into the FPGA; after pre-processing, they are transferred into DSP6416 through the First In First Out (FIFO) interface in the FPGA and the External Memory Interfaces (EMIF) of DSP6416. The image data are then processed in DSP6416. Experimental results show that this system is useful and efficient.

Lee, Choon-Young; Yan, Lei; Lee, Sang-Ryong

2010-02-01

440

Spatial versus temporal stability issues in image processing neuro chips.  

PubMed

A typical image processing neuro chip consists of a regular array of very simple cell circuits. When it is implemented by a CMOS process, two stability issues naturally arise. First, parasitic capacitors of MOS transistors induce temporal dynamics. Since a processed image is given as the stable limit point of the temporal dynamics, a temporally unstable chip is unusable. Second, because of the array structure, the node voltage distribution induces spatial dynamics, and it could behave in a wild manner, e.g. oscillatory. The main contributions are: (i) a clarification of the spatial stability issue; (ii) explicit if and only if conditions for the temporal and the spatial stability in terms of circuit parameters; (iii) a rigorous explanation of the fact that even though the spatial stability is stronger than the temporal stability, the set of parameter values for which the two stability issues disagree is of (Lebesgue) measure zero; and (iv) theoretical estimates of the processing speed. PMID:18276456

Matsumoto, T; Kobayashi, H; Togawa, Y

1992-01-01

441

Development Process of a Praxeology for Supporting the Teaching of Proofs in a CAS Environment Based on Teachers' Experience in a Professional Development Course  

ERIC Educational Resources Information Center

This paper presents the development process of a "praxeology" (theory-of-practice) for supporting the teaching of proofs in a CAS environment. The characteristics of the praxeology were elaborated within the frame of a professional development course for teaching analytic geometry with CAS. The theoretical framework draws on Chevallard's…

Zehavi, Nurit; Mann, Giora

2011-01-01

442

Study of optical techniques for the Ames unitary wind tunnel: Digital image processing, part 6  

NASA Technical Reports Server (NTRS)

A survey of digital image processing techniques and processing systems for aerodynamic images has been conducted. These images covered many types of flows and were generated by many types of flow diagnostics. These include laser vapor screens, infrared cameras, laser holographic interferometry, Schlieren, and luminescent paints. Some general digital image processing systems, imaging networks, optical sensors, and image computing chips were briefly reviewed. Possible digital imaging network systems for the Ames Unitary Wind Tunnel were explored.

Lee, George

1993-01-01

443

An image-processing program for automated counting  

USGS Publications Warehouse

An image-processing program developed by the National Institutes of Health, IMAGE, was modified in a cooperative project between remote sensing specialists at the Ohio State University Center for Mapping and scientists at the Alaska Science Center to facilitate estimating numbers of black brant (Branta bernicla nigricans) in flocks at Izembek National Wildlife Refuge. The modified program, DUCK HUNT, runs on Apple computers. Modifications provide users with a pull down menu that optimizes image quality; identifies objects of interest (e.g., brant) by spectral, morphometric, and spatial parameters defined interactively by users; counts and labels objects of interest; and produces summary tables. Images from digitized photography, videography, and high-resolution digital photography have been used with this program to count various species of waterfowl.

Cunningham, D.J.; Anderson, W.H.; Anthony, R.M.

1996-01-01

444

Infective endocarditis detection through SPECT/CT images digital processing  

NASA Astrophysics Data System (ADS)

Infective endocarditis (IE) is a difficult-to-diagnose pathology, since its manifestation in patients is highly variable. In this work, a semiautomatic algorithm based on digital processing of SPECT images was proposed for the detection of IE, using a CT image volume as a spatial reference. The heart/lung ratio was calculated using the SPECT image information. There were no statistically significant differences between the heart/lung ratio values of a group of patients diagnosed with IE (2.62+/-0.47) and a group of healthy or control subjects (2.84+/-0.68). However, it is necessary to increase the study sample of both the individuals diagnosed with IE and the control group subjects, as well as to improve the image quality.
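
The heart/lung ratio itself reduces to mean counts in a heart region of interest divided by mean counts in a lung region, as in the sketch below; the ROI masks, which in the described method come from the CT-referenced localization, are assumed given here.

    import numpy as np

    def heart_lung_ratio(spect_volume, heart_mask, lung_mask):
        """Ratio of mean SPECT counts inside the heart ROI to the lung ROI."""
        heart_mean = spect_volume[heart_mask].mean()
        lung_mean = spect_volume[lung_mask].mean()
        return heart_mean / lung_mean

    if __name__ == "__main__":
        vol = np.random.rand(64, 64, 64)
        heart = np.zeros_like(vol, bool); heart[28:36, 28:36, 28:36] = True
        lung = np.zeros_like(vol, bool);  lung[10:20, 10:20, 10:20] = True
        print(heart_lung_ratio(vol, heart, lung))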

Moreno, Albino; Valdés, Raquel; Jiménez, Luis; Vallejo, Enrique; Hernández, Salvador; Soto, Gabriel

2014-03-01

445

Image-plane processing for improved computer vision  

NASA Technical Reports Server (NTRS)

The proper combination of optical design with image-plane processing, as in the mechanism of human vision, was examined as a way to improve the performance of sensor-array imaging systems for edge detection and location. Two-dimensional bandpass filtering during image formation optimizes edge enhancement and minimizes data transmission. It permits control of the spatial imaging system response to trade off edge enhancement against sensitivity at low light levels. It is shown that most of the information, up to about 94%, is contained in the signal intensity transitions, from which the location of edges is determined for raw primal sketches. Shading the lens transmittance to increase depth of field and using a hexagonal instead of a square sensor-array lattice to decrease sensitivity to edge orientation improve edge information by about 10%.
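
The two-dimensional bandpass idea can be approximated by a difference-of-Gaussians filter, as in the sketch below (applied after image formation here, purely for illustration); the sigma values are assumptions.

    import numpy as np
    from scipy import ndimage

    def bandpass_dog(img, sigma_center=1.0, sigma_surround=3.0):
        """Difference of Gaussians: passes mid frequencies, suppresses shading and noise."""
        return (ndimage.gaussian_filter(img, sigma_center)
                - ndimage.gaussian_filter(img, sigma_surround))

    if __name__ == "__main__":
        step = np.zeros((64, 64)); step[:, 32:] = 1.0       # a simple vertical edge
        response = bandpass_dog(step)
        print(abs(response[:, 31:33]).max())                # strongest response at the edge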

Huck, F. O.; Fales, C. L.; Park, S. K.; Samms, R. W.

1984-01-01

446

[VISIOPATH: system of telediagnosis and digital image processing].  

PubMed

VISIOPATH is a system that exploits the most recent techniques for the digital processing of images. With high-performance tools, it can acquire and visualise images in 16 million colors on a high-resolution screen. VISIOPATH has telediagnosis functions: it allows an exchange of medical images between doctors, in real time or recorded. The Integrated Services Digital Network (ISDN) from France Telecom brings a flexible and efficient way of communicating and allows the exchange of images, sound and data with high quality and security. It gives specialists the means for a stricter interpretation, that is to say a quicker and safer diagnosis. The system can be adapted to other medical specialties. PMID:8526573

Joyez, J C

1995-01-01

447

Collection and processing data for high quality CCD images.  

SciTech Connect

Coherent Change Detection (CCD) with Synthetic Aperture Radar (SAR) images is a technique whereby very subtle temporal changes can be discerned in a target scene. However, optimal performance requires carefully matching data collection geometries and adjusting the processing to compensate for imprecision in the collection geometries. Tolerances in the precision of the data collection are discussed, and anecdotal advice is presented for optimum CCD performance. Processing considerations are also discussed.
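
For orientation, the coherence estimate that underlies CCD can be sketched as the locally normalized complex cross-correlation of two co-registered SAR images; the boxcar window size below is an illustrative assumption.

    import numpy as np
    from scipy.signal import fftconvolve

    def coherence(f, g, window=5):
        """Local coherence |sum(f g*)| / sqrt(sum|f|^2 sum|g|^2) over a boxcar window."""
        box = np.ones((window, window)) / window**2
        num = fftconvolve(f * np.conj(g), box, mode="same")
        den = np.sqrt(np.maximum(fftconvolve(np.abs(f)**2, box, mode="same"), 0)
                      * np.maximum(fftconvolve(np.abs(g)**2, box, mode="same"), 0))
        return np.abs(num) / np.maximum(den, 1e-12)

    if __name__ == "__main__":
        f = np.random.randn(128, 128) + 1j * np.random.randn(128, 128)
        g = f.copy()
        g[40:60, 40:60] += np.random.randn(20, 20)         # a changed patch
        print(coherence(f, g).min())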

Doerry, Armin Walter

2007-03-01

448

Parallel-Processing Software for Correlating Stereo Images  

NASA Technical Reports Server (NTRS)

A computer program implements parallel-processing algorithms for correlating images of terrain acquired by stereoscopic pairs of digital stereo cameras on an exploratory robotic vehicle (e.g., a Mars rover). Such correlations are used to create three-dimensional computational models of the terrain for navigation. In this program, the scene viewed by the cameras is segmented into subimages. Each subimage is assigned to one of a number of central processing units (CPUs) operating simultaneously.

Klimeck, Gerhard; Deen, Robert; Mcauley, Michael; DeJong, Eric

2007-01-01

449

Eclipse: ESO C Library for an Image Processing Software Environment  

NASA Astrophysics Data System (ADS)

Written in ANSI C, eclipse is a library offering numerous services related to astronomical image processing: FITS data access, various image and cube loading methods, binary image handling and filtering (including convolution and morphological filters), 2-D cross-correlation, connected components, cube and image arithmetic, dead pixel detection and correction, object detection, data extraction, flat-fielding with robust fit, image generation, statistics, photometry, image-space resampling, image combination, and cube stacking. It also contains support for mathematical tools like random number generation, FFT, curve fitting, matrices, fast median computation, and point-pattern matching. The main feature of this library is its ability to handle large amounts of input data (up to 2GB in the current version) regardless of the amount of memory and swap available on the local machine. Another feature is the very high speed allowed by optimized C, making it an ideal base tool for programming efficient number-crunching applications, e.g., on parallel (Beowulf) systems.

Devillard, Nicolas

2011-12-01

450

Solar physics applications of computer graphics and image processing  

NASA Technical Reports Server (NTRS)

Computer graphics devices coupled with computers and carefully developed software provide new opportunities to achieve insight into the geometry and time evolution of scalar, vector, and tensor fields and to extract more information quickly and cheaply from the same image data. Two or more different fields which overlay in space can be calculated from the data (and the physics), then displayed from any perspective, and compared visually. The maximum regions of one field can be compared with the gradients of another. Time changing fields can also be compared. Images can be added, subtracted, transformed, noise filtered, frequency filtered, contrast enhanced, color coded, enlarged, compressed, parameterized, and histogrammed, in whole or section by section. Today it is possible to process multiple digital images to reveal spatial and temporal correlations and cross correlations. Data from different observatories taken at different times can be processed, interpolated, and transformed to a common coordinate system.

Altschuler, M. D.

1985-01-01

451

Processing techniques for digital sonar images from GLORIA.  

USGS Publications Warehouse

Image processing techniques have been developed to handle data from one of the newest members of the remote sensing family of digital imaging systems. This paper discusses software to process data collected by the GLORIA (Geological Long Range Inclined Asdic) sonar imaging system, designed and built by the Institute of Oceanographic Sciences (IOS) in England, to correct for both geometric and radiometric distortions that exist in the original 'raw' data. Preprocessing algorithms that are GLORIA-specific include corrections for slant-range geometry, water column offset, aspect ratio distortion, changes in the ship's velocity, speckle noise, and shading problems caused by the power drop-off which occurs as a function of range.-from Author

Chavez, P.S., Jr.

1986-01-01

452

Color separation in forensic image processing using interactive differential evolution.  

PubMed

Color separation is an image processing technique that has often been used in forensic applications to differentiate among variant colors and to remove unwanted image interference. This process can reveal important information such as covered text or fingerprints in forensic investigation procedures. However, several limitations prevent users from selecting the appropriate parameters pertaining to the desired and undesired colors. This study proposes the hybridization of an interactive differential evolution (IDE) and a color separation technique that no longer requires users to guess required control parameters. The IDE algorithm optimizes these parameters in an interactive manner by utilizing human visual judgment to uncover desired objects. A comprehensive experimental verification has been conducted on various sample test images, including heavily obscured texts, texts with subtle color variations, and fingerprint smudges. The advantage of IDE is apparent as it effectively optimizes the color separation parameters at a level indiscernible to the naked eyes. PMID:25400037

Mushtaq, Harris; Rahnamayan, Shahryar; Siddiqi, Areeb

2015-01-01

453

Personal Computer (PC) based image processing applied to fluid mechanics  

NASA Technical Reports Server (NTRS)

A PC based image processing system was employed to determine the instantaneous velocity field of a two-dimensional unsteady flow. The flow was visualized using a suspension of seeding particles in water, and a laser sheet for illumination. With a finite time exposure, the particle motion was captured on a photograph as a pattern of streaks. The streak pattern was digitized and processed using various imaging operations, including contrast manipulation, noise cleaning, filtering, statistical differencing, and thresholding. Information concerning the velocity was extracted from the enhanced image by measuring the length and orientation of the individual streaks. The fluid velocities deduced from the randomly distributed particle streaks were interpolated to obtain velocities at uniform grid points. For the interpolation a simple convolution technique with an adaptive Gaussian window was used. The results are compared with a numerical prediction by a Navier-Stokes computation.
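
The interpolation step, spreading scattered streak velocities onto a regular grid with a Gaussian weighting window, can be sketched as below; a fixed window width is used here, whereas the described method adapts it to the local data density.

    import numpy as np

    def gaussian_window_interp(points, values, grid_x, grid_y, sigma=0.05):
        """Weight each scattered sample by a Gaussian of its distance to each grid node."""
        gx, gy = np.meshgrid(grid_x, grid_y)
        out = np.zeros_like(gx)
        wsum = np.zeros_like(gx)
        for (px, py), v in zip(points, values):
            w = np.exp(-((gx - px)**2 + (gy - py)**2) / (2 * sigma**2))
            out += w * v
            wsum += w
        return out / np.maximum(wsum, 1e-12)

    if __name__ == "__main__":
        pts = np.random.rand(200, 2)
        vel = np.sin(2 * np.pi * pts[:, 0])                 # synthetic u-velocity samples
        grid = np.linspace(0, 1, 32)
        u = gaussian_window_interp(pts, vel, grid, grid)
        print(u.shape)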

Cho, Y.-C.; Mclachlan, B. G.

1987-01-01

454

Remote sensing study based on IRSA Remote Sensing Image Processing System  

Microsoft Academic Search

The IRSA Remote Sensing Image Processing System is multi-functional software used for satellite image processing. It consists of over ten of the routinely and typically used modules in a remote sensing image processing project, such as viewer & file import/export, basic processing, and image restoration. As indigenously developed software, IRSA combines the advantages and kernels of many well-known imported systems,

Ling Peng; Zhongming Zhao; Linli Cui; Lu Wang

2004-01-01

455

Remotely sensed image distributed processing system design with web services technology  

Microsoft Academic Search

With the development of remote sensing and digital image processing technology, it has become very important and pressing for remotely sensed images to be processed in a distributed environment. This paper addresses the implementation of distributed processing of remotely sensed images. It first analyzes the current implementation methods and techniques of remotely sensed image distributed processing, then points out the problems it

Zhanfeng Shen; Dongping Ming; Junli Li

2005-01-01

456

Smartphones as image processing systems for prosthetic vision.  

PubMed

The feasibility of implants for prosthetic vision has been demonstrated by research and commercial organizations. In most devices, an essential forerunner to the internal stimulation circuit is an external electronics solution for capturing, processing and relaying image information as well as extracting useful features from the scene surrounding the patient. The capabilities and the multitude of image processing algorithms that can be performed by the device in real time play a major part in the final quality of the prosthetic vision. It is therefore optimal to use powerful hardware yet avoid bulky, straining solutions. Recent publications have reported on portable single-board computers fast enough for computationally intensive image processing. Following the rapid evolution of commercial, ultra-portable ARM (Advanced RISC machine) mobile devices, the authors investigated the feasibility of modern smartphones running complex face detection as external processing devices for vision implants. The role of dedicated graphics processors in speeding up computation was evaluated while performing a demanding noise reduction algorithm (image denoising). The time required for face detection was found to decrease by 95% from 2.5-year-old to recent devices. In denoising, graphics acceleration played a major role, speeding up denoising by a factor of 18. These results demonstrate that the technology has matured sufficiently to be considered as a valid external electronics platform for visual prosthetic research. PMID:24110531

Zapf, Marc P; Matteucci, Paul B; Lovell, Nigel H; Suaning, Gregg J

2013-01-01

457

SHORT COMMUNICATION: An image processing approach to calibration of hydrometers  

Microsoft Academic Search

The usual method adopted for multipoint calibration of glass hydrometers is based on the measurement of the buoyancy by hydrostatic weighing when the hydrometer is plunged in a reference liquid up to the scale mark to be calibrated. An image processing approach is proposed by the authors to align the relevant scale mark with the reference liquid surface level. The

S. Lorefice; A. Malengo

2004-01-01

458

Automatic image stabilizing system by full-digital signal processing  

Microsoft Academic Search

An automatic image-stabilizing system for camcorders and VCRs utilizing only digital signal processing has been developed. The new technologies in this system are (1) the BERP (band extract representative point) matching technique with a small-scale circuit, (2) an adaptive system control algorithm to discriminate moving objects, and (3) suppression of motion vectors due to noise. Calculations show that the motion vector
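To make the matching idea concrete, here is a minimal sketch of classic representative-point matching for global motion estimation between two grayscale frames. It omits the band-extraction filtering, moving-object discrimination, and noise suppression described in the abstract; all names and parameters are illustrative assumptions.

```python
import numpy as np

def representative_point_motion(prev, curr, step=32, search=8):
    """Estimate a global motion vector by representative-point matching:
    the absolute difference accumulated over a sparse grid of points is
    minimized over candidate displacements (no band extraction here)."""
    h, w = prev.shape
    ys = np.arange(search + step, h - search - step, step)
    xs = np.arange(search + step, w - search - step, step)
    pts = [(y, x) for y in ys for x in xs]
    best, best_err = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            err = sum(abs(float(curr[y + dy, x + dx]) - float(prev[y, x]))
                      for y, x in pts)
            if err < best_err:
                best_err, best = err, (dy, dx)
    return best  # (dy, dx) to be compensated when stabilizing the current frame
```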

K. Uomori; A. Morimura; H. Ishii; T. Sakaguchi; Y. Kitamura

1990-01-01

459

Image Processing using Java and C#: A Comparison Approach  

Microsoft Academic Search

This paper presents the results of a study comparing the Java and C# programming languages in terms of portability, functional programming features, and execution time. The comparison makes it possible to evaluate both languages and determine which one performs better in the image processing area.

María Isabel; Díaz Figueroa

460

Parallel asynchronous hardware implementation of image processing algorithms  

NASA Technical Reports Server (NTRS)

Research is being carried out on hardware for a new approach to focal plane processing. The hardware involves silicon injection mode devices. These devices provide a natural basis for parallel asynchronous focal plane image preprocessing. The simplicity and novel properties of the devices would permit an independent analog processing channel to be dedicated to every pixel. A laminar architecture built from arrays of the devices would form a two-dimensional (2-D) array processor with a 2-D array of inputs located directly behind a focal plane detector array. A 2-D image data stream would propagate in neuron-like asynchronous pulse-coded form through the laminar processor. No multiplexing, digitization, or serial processing would occur in the preprocessing stage. High performance is expected, based on pulse coding of input currents down to one picoampere with noise referred to input of about 10 femtoamperes. Linear pulse coding has been observed for input currents ranging over seven orders of magnitude. Low power requirements suggest utility in space and in conjunction with very large arrays. Very low dark current and multispectral capability are possible because of hardware compatibility with the cryogenic environment of high performance detector arrays. The aforementioned hardware development effort is aimed at systems which would integrate image acquisition and image processing.

Coon, Darryl D.; Perera, A. G. U.

1990-01-01

461

OPTICAL FLANK WEAR MONITORING OF CUTTING TOOLS BY IMAGE PROCESSING  

E-print Network

The measurement of tool flank wear can be performed directly with sensors such as a television camera or a pneumatic probe [2,3], or inferred indirectly from signals such as acoustic emission, cutting temperature and surface roughness [4-8]. However, few reliable indirect methods are available, so a flank wear monitoring approach based on image processing has been developed and tested in this work.

Kim, Yong Jung

462

A new approach for image processing in foreign fiber detection  

Microsoft Academic Search

In the textile industry, different types of foreign fibers may be mixed in cotton that need to be sorted out to ensure the quality of the final cotton textile products. Automated visual inspection (AVI) system is a popular tool at present for real time foreign fibers detection in lint. The image processing is one of the key techniques in the

Wenzhu Yang; Daoliang Li; Liang Zhu; Yuguo Kang; Futang Li

2009-01-01

463

Metamaterials for threat reduction applications: imaging, signal processing, and cloaking  

E-print Network

Structured materials, termed metamaterials (MM), have dramatically expanded our view of electromagnetic … with metamaterials provides a promising approach--from a device perspective--towards filling this gap

464

Spatial processing for coherent noise reduction in ultrasonic imaging  

E-print Network

In recent years, ultrasonic nondestructive testing (NDT) … because of its feasibility, versatility, and effectiveness. However, the quality of ultrasonic images is often

Saniie, Jafar

465

Digital Image Processing of Earth Observation Sensor Data  

Microsoft Academic Search

This paper describes digital image processing techniques that were developed to precisely correct Landsat multispectral Earth observation data and gives illustrations of the results achieved, e.g., geometric corrections with an error of less than one picture element, a relative error of one-fourth picture element, and no radiometric error effect. Techniques for enhancing the sensor data, digitally mosaicking multiple scenes, and

Ralph Bernstein

1976-01-01

466

Image of Europe from Abroad: The Pleasures and Pitfalls of Teaching German Cinema in America  

ERIC Educational Resources Information Center

This article describes strategies specific to teaching German film courses at American universities, particularly how to capture the interest of students who have not studied film previously and have little understanding of German culture, history, or the language. I suggest starting with discussions on the interrelatedness of "foreign film" and…

William, Jennifer Marston

2006-01-01

467

Application Of Image Processing To Human Motion Analysis  

NASA Astrophysics Data System (ADS)

A novel method is presented for the determination of position and orientation of interconnected human body segments relative to a spatial coordinate system. The development of this new method was prompted by the inadequacy of the techniques currently in use for recorded images. In these techniques, markers are fixed to certain points on the skin of the subject. However, due to skin movement relative to the skeleton and various other factors, the configurational coordinates derived from digitized marker positions may be grossly erroneous, with disastrous consequences for the subsequent motion analysis. The new method is based on body-segment shape recognition in the video-image domain. During the recording session, the subject wears special, tight-fitting clothing which permits the unambiguous recognition of segmental shapes and boundaries from the recorded video images. The recognition is performed by means of an edge detection algorithm followed by the computation of the positions and orientations, relative to the spatial axes system, of all segments of the body model. The new method is implemented on an advanced, special-purpose high-speed graphics system (Impuls, System 2400) based on transputer chips. The parallel processing capability of this system permits the simultaneous computation of the configurational characteristics of all segments visible in the image. After processing one complete image frame, the video digitizer is instructed to automatically proceed to the next frame, thereby enabling the user to automatically evaluate large numbers of successive frames.

Baca, Arnold

1989-10-01

468

Compact hybrid optoelectrical unit for image processing and recognition  

NASA Astrophysics Data System (ADS)

In this paper a compact hybrid opto-electrical unit (CHOEU) for digital image processing and recognition is proposed. The central part of CHOEU is an incoherent optical correlator, realized with a SHARP QA-1200 8.4 inch active-matrix TFT liquid crystal display panel used as two real-time spatial light modulators, one for the input image and one for the reference template. CHOEU performs two main processing tasks: digital filtering and object matching. Using CHOEU, an edge-detection operator is realized to extract the edges from the input images. The preprocessed images are then sent to the object recognition unit to identify the important targets. A novel template-matching method is proposed for gray-tone image recognition. A positive and negative cycle-encoding method is introduced to realize the absolute-difference pixel-matching measurement simply on a correlator structure. The system has good fault tolerance against rotation distortion, Gaussian noise disturbance and information loss. Experiments are given at the end of this paper.
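For reference, a purely digital counterpart of the absolute-difference matching performed optically in this unit is a sum-of-absolute-differences (SAD) template search; the brute-force sketch below (illustrative names, no cycle encoding) shows the underlying measurement.

```python
import numpy as np

def sad_match(image, template):
    """Locate a gray-tone template in an image by minimizing the sum of
    absolute differences (SAD) over all candidate positions."""
    ih, iw = image.shape
    th, tw = template.shape
    best, best_err = (0, 0), np.inf
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            err = np.abs(image[y:y+th, x:x+tw].astype(float)
                         - template.astype(float)).sum()
            if err < best_err:
                best_err, best = err, (y, x)
    return best, best_err  # top-left corner of the best match and its score
```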

Cheng, Gang; Jin, Guofan; Wu, Minxian; Liu, Haisong; He, Qingsheng; Yuan, ShiFu

1998-07-01

469

Instant super-resolution imaging in live cells and embryos via analog image processing  

PubMed Central

Existing super-resolution fluorescence microscopes compromise acquisition speed to provide subdiffractive sample information. We report an analog implementation of structured illumination microscopy that enables 3D super-resolution imaging with 145 nm lateral and 350 nm axial resolution, at acquisition speeds up to 100 Hz. By performing image processing operations optically instead of digitally, we removed the need to capture, store, and combine multiple camera exposures, increasing data acquisition rates 10–100x over other super-resolution microscopes and acquiring and displaying super-resolution images in real-time. Low excitation intensities allow imaging over hundreds of 2D sections, and combined physical and computational sectioning allow similar depth penetration to confocal microscopy. We demonstrate the capability of our system by imaging fine, rapidly moving structures including motor-driven organelles in human lung fibroblasts and the cytoskeleton of flowing blood cells within developing zebrafish embryos. PMID:24097271

York, Andrew G.; Chandris, Panagiotis; Nogare, Damian Dalle; Head, Jeffrey; Wawrzusin, Peter; Fischer, Robert S.; Chitnis, Ajay; Shroff, Hari

2013-01-01

470

RegiStax: Alignment, stacking and processing of images  

NASA Astrophysics Data System (ADS)

RegiStax is software for the alignment, stacking and processing of images; it was released over 10 years ago and continues to be developed and improved. The current version is RegiStax 6, which supports the following formats: AVI, SER, RFL (RegiStax Framelist), BMP, JPG, TIF, and FIT. This version has a shorter and simpler processing sequence than its predecessor, and a separate optimization step is no longer necessary because the new image alignment method optimizes directly. The interface of RegiStax 6 has been simplified to be more uniform in appearance and functionality, and RegiStax 6 now uses multi-core processing, allowing the user to have multiple cores (a maximum of 4 is recommended) working simultaneously during alignment and stacking.
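The align-and-stack idea that RegiStax automates can be sketched in a few lines: estimate each frame's shift against a reference by FFT cross-correlation and average the shifted frames. This is a bare-bones illustration under the assumption of integer-pixel, translation-only motion, not a description of RegiStax's actual algorithm.

```python
import numpy as np

def align_and_stack(frames):
    """Align equally sized grayscale frames to the first one by FFT
    cross-correlation (integer-pixel shifts only) and average them."""
    ref = frames[0].astype(float)
    F_ref = np.fft.fft2(ref)
    stack = np.zeros_like(ref)
    for f in frames:
        f = f.astype(float)
        corr = np.fft.ifft2(F_ref * np.conj(np.fft.fft2(f))).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        # interpret large positive peak positions as negative shifts
        if dy > ref.shape[0] // 2:
            dy -= ref.shape[0]
        if dx > ref.shape[1] // 2:
            dx -= ref.shape[1]
        stack += np.roll(f, (dy, dx), axis=(0, 1))
    return stack / len(frames)
```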

Berrevoets, Cor; DeClerq, Bart; George, Tony; Makolkin, Dmitry; Maxson, Paul; Pilz, Bob; Presnyakov, Pavel; Roel, Eric; Weiller, Sylvain

2012-06-01

471

Liquid crystal thermography and true-colour digital image processing  

NASA Astrophysics Data System (ADS)

In the last decade thermochromic liquid crystals (TLCs) and true-colour digital image processing have been successfully used in non-intrusive technical, industrial and biomedical studies and applications. Thin coatings of TLCs on surfaces are used to obtain detailed temperature distributions and heat transfer rates for steady or transient processes. Liquid crystals can also be used to visualize the temperature and velocity fields in liquids by the simple expedient of directly mixing the liquid crystal material into the liquid (water, glycerol, glycol, or silicone oils) in very small quantities for use as thermal and hydrodynamic tracers. In biomedical situations, e.g. skin diseases, breast cancer, blood circulation and other medical applications, TLCs and image processing are successfully used as an additional non-invasive diagnostic method, especially useful for screening large groups of potential patients. The history of this technique is reviewed, the principal methods and tools are described, and some examples are presented.
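The core of true-colour TLC thermography is mapping pixel hue to temperature. The sketch below assumes a simple linear hue-temperature calibration between two anchor points; real calibrations use a measured, usually nonlinear, curve, so the constants and names here are purely illustrative.

```python
import colorsys
import numpy as np

def tlc_temperature_map(rgb, hue_red=0.0, hue_blue=0.66, t_red=30.0, t_blue=35.0):
    """Map a true-colour image of a TLC-coated surface to temperature by
    converting each pixel's hue linearly between two calibration points
    (hue_red at t_red, hue_blue at t_blue, in degrees Celsius)."""
    rgb = np.asarray(rgb, dtype=float) / 255.0
    h = np.empty(rgb.shape[:2])
    for i in range(rgb.shape[0]):
        for j in range(rgb.shape[1]):
            h[i, j], _, _ = colorsys.rgb_to_hsv(*rgb[i, j])
    frac = np.clip((h - hue_red) / (hue_blue - hue_red), 0.0, 1.0)
    return t_red + frac * (t_blue - t_red)
```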

Stasiek, J.; Stasiek, A.; Jewartowski, M.; Collins, M. W.

2006-06-01

472

Thermographic Imaging And Computer Image Processing Of Defects In Building Materials  

NASA Astrophysics Data System (ADS)

An image processing system has been coupled to both a thermographic and a video camera to quantify defects from images of building materials. Several applications to building materials are presented including the detection of delaminations in single-ply roofing membrane seams, the characterization of the extent of corrosion under pigmented organic coatings on metallic substrates, the determination of the roughness of a sand-blasted metallic substrate, and the determination of the porosity in hardened cement paste.

Martin, Jonathan W.; McKnight, Mary E.; Bentz, Dale P.

1986-03-01

473

Deal with three-dimension image in laser processing system  

NASA Astrophysics Data System (ADS)

Based on an analysis of the DXF file format for three-dimensional images, a scheme using spatial and triangular transformation techniques is formulated, and the algorithm for this scheme is given in this paper. Control software implementing the algorithm has been used in a CO2 three-dimensional laser processing system. Compared with other schemes, the advantages of using this scheme in a laser processing system are also pointed out. The scheme can be extended to other laser processing systems.
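The paper's exact transformation scheme is not given in the abstract; as a hedged illustration of the spatial-transformation step, the sketch below applies a rotation (Rodrigues formula) plus a translation to triangular facets assumed to have already been extracted from the DXF 3DFACE entities. Names and parameter values are assumptions for illustration only.

```python
import numpy as np

def transform_facets(facets, axis=(0.0, 0.0, 1.0), angle_deg=30.0,
                     offset=(0.0, 0.0, 5.0)):
    """Rotate and translate triangular facets. facets has shape (n, 3, 3):
    n triangles, 3 vertices, xyz coordinates."""
    k = np.asarray(axis, float)
    k /= np.linalg.norm(k)
    a = np.radians(angle_deg)
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    R = np.eye(3) + np.sin(a) * K + (1 - np.cos(a)) * (K @ K)  # Rodrigues
    return facets @ R.T + np.asarray(offset, float)
```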

Lu, Yu; Chen, Guannan; Chen, Hongmin

2005-01-01

474

Image processing as a tool to improve machine performance and process control  

Microsoft Academic Search

Image processing has been used as a tool to determine the trash content in cotton webs. The experiments showed that in particular seed coat particles are difficult to separate from the cotton during cleaning processes whereas neps, leaf and wooden fragments can easily be removed. A correlation between the number of seed coat particles and USTER-imperfections in the yarn has
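A minimal version of such trash-content measurement is thresholding followed by connected-component counting; the sketch below (SciPy, with a threshold and minimum particle area that are illustrative assumptions, not values from the paper) counts dark particles in a grayscale image of a cotton web.

```python
import numpy as np
from scipy import ndimage

def count_trash_particles(gray_web, threshold=90, min_area=5):
    """Count dark trash particles (e.g. seed coat fragments) in a grayscale
    cotton-web image by thresholding and connected-component labelling."""
    mask = gray_web < threshold              # trash is darker than the web
    labels, n = ndimage.label(mask)
    areas = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    keep = areas >= min_area                 # reject single-pixel noise
    return int(keep.sum()), areas[keep]
```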

D. Veit; I. Hormes; J. Bergmann; B. Wulfhorst

1996-01-01

475

Quantitative evaluation of phase processing approaches in susceptibility weighted imaging  

NASA Astrophysics Data System (ADS)

Susceptibility weighted imaging (SWI) takes advantage of the local variation in susceptibility between different tissues to enable highly detailed visualization of the cerebral venous system and sensitive detection of intracranial hemorrhages. Thus, it has been increasingly used in magnetic resonance imaging studies of traumatic brain injury as well as other intracranial pathologies. In SWI, magnitude information is combined with phase information to enhance the susceptibility-induced image contrast. Because of global susceptibility variations, the rate of phase accumulation varies widely across the image, resulting in phase wrapping artifacts that interfere with the local assessment of phase variation. Homodyne filtering is a common approach to eliminate this global phase variation. However, the filter size requires careful selection in order to preserve image contrast and avoid errors resulting from residual phase wraps. An alternative approach is to apply phase unwrapping prior to high-pass filtering. A suitable phase unwrapping algorithm guarantees no residual phase wraps, but additional computational steps are required. In this work, we quantitatively evaluate these two phase processing approaches on both simulated and real data using different filters and cutoff frequencies. Our analysis leads to an improved understanding of the relationship between phase wraps, susceptibility effects, and acquisition parameters. Although homodyne filtering approaches are faster and more straightforward, phase unwrapping approaches perform more accurately in a wider variety of acquisition scenarios.
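The homodyne-filtering route can be sketched compactly: divide the complex image by a low-pass-filtered copy of itself and keep the residual phase. The version below uses a plain rectangular central k-space window and an illustrative window fraction, whereas the paper evaluates several filters and cutoff frequencies.

```python
import numpy as np

def homodyne_highpass_phase(complex_img, k=0.125):
    """High-pass filter the phase of a complex MR image by homodyne
    filtering. k is the fractional half-width of the central k-space
    window used for the low-pass reference (illustrative value)."""
    ny, nx = complex_img.shape
    kspace = np.fft.fftshift(np.fft.fft2(complex_img))
    wy, wx = int(ny * k), int(nx * k)
    cy, cx = ny // 2, nx // 2
    window = np.zeros_like(kspace)
    window[cy - wy:cy + wy, cx - wx:cx + wx] = 1.0      # central low-pass window
    lowpass = np.fft.ifft2(np.fft.ifftshift(kspace * window))
    return np.angle(complex_img * np.conj(lowpass))     # residual (high-pass) phase
```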

Li, Ningzhi; Wang, Wen-Tung; Sati, Pascal; Pham, Dzung L.; Butman, John A.

2012-03-01

476

Using Imaging Spectroscopy to Study Ecosystem Processes and Properties  

NSDL National Science Digital Library

This peer-reviewed resource from Bioscience is about recent advances in imaging spectroscopy to study ecosystems. Remote sensing data provide essential input for today's climate and ecosystem models. It is generally agreed that many model processes are not accurately depicted by current remotely sensed indices of vegetation and that new observational capabilities are needed at different spatial and spectral scales to reduce uncertainty. Recent advances in materials and optics have allowed the development of smaller, more stable, accurately calibrated imaging spectrometers that can quantify biophysical properties on the basis of the spectral absorbing and scattering characteristics of the land surface. Airborne and spaceborne imaging spectrometers, which measure large numbers (hundreds) of narrow spectral bands, are becoming more widely available from government and commercial sources; thus, it is increasingly feasible to use data from imaging spectroscopy for environmental research. In contrast to multispectral sensors, imaging spectroscopy produces quantitative estimates of biophysical absorptions, which can be used to improve scientific understanding of ecosystem functioning and properties. We present the recent advances in imaging spectroscopy and new capabilities for using it to quantify a range of ecological variables.

Ustin, Susan L.; Roberts, Dar A.; Gamon, John A.; Asner, Gregory P.; Green, Robert O.

2004-06-01

477

Computed tomography perfusion imaging denoising using Gaussian process regression  

NASA Astrophysics Data System (ADS)

Brain perfusion weighted images acquired using dynamic contrast studies have an important clinical role in acute stroke diagnosis and treatment decisions. However, computed tomography (CT) images suffer from low contrast-to-noise ratios (CNR) as a consequence of limiting the patient's exposure to radiation. Consequently, methods for improving the CNR are valuable. The majority of existing approaches for denoising CT images are optimized for 3D (spatial) information, including spatial decimation (spatially weighted mean filters) and techniques based on wavelet and curvelet transforms. However, perfusion imaging data are 4D, as they also contain temporal information. Our approach uses Gaussian process regression (GPR), which takes advantage of the temporal information, to reduce the noise level. Over the entire image, GPR achieves a 99% CNR improvement over the raw images and also improves the quality of haemodynamic maps, allowing better identification of edges and detailed information. At the level of individual voxels, GPR provides a stable baseline, helps identify key parameters from tissue time-concentration curves and reduces oscillations in the curve. GPR is superior to the comparable techniques used in this study.
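At the single-voxel level, the temporal denoising can be sketched with an off-the-shelf GP regressor: a smooth kernel models the time-concentration curve and a white-noise kernel absorbs the noise. The kernels and initial hyperparameters below are illustrative assumptions, not the configuration used in the paper.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def gpr_denoise_curve(t, c):
    """Denoise one voxel's time-concentration curve c(t) with Gaussian
    process regression; hyperparameters are refined by maximum likelihood."""
    kernel = 1.0 * RBF(length_scale=5.0) + WhiteKernel(noise_level=1.0)
    gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gpr.fit(t.reshape(-1, 1), c)
    c_smooth, c_std = gpr.predict(t.reshape(-1, 1), return_std=True)
    return c_smooth, c_std  # denoised baseline and its uncertainty
```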

Zhu, Fan; Carpenter, Trevor; Rodriguez Gonzalez, David; Atkinson, Malcolm; Wardlaw, Joanna

2012-06-01

478

EMAN2: an extensible image processing suite for electron microscopy.  

PubMed

EMAN is a scientific image processing package with a particular focus on single particle reconstruction from transmission electron microscopy (TEM) images. It was first released in 1999, and new versions have been released typically 2-3 times each year since that time. EMAN2 has been under development for the last two years, with a completely refactored image processing library, and a wide range of features to make it much more flexible and extensible than EMAN1. The user-level programs are better documented, more straightforward to use, and written in the Python scripting language, so advanced users can modify the programs' behavior without any recompilation. A completely rewritten 3D transformation class simplifies translation between Euler angle standards and symmetry conventions. The core C++ library has over 500 functions for image processing and associated tasks, and it is modular with introspection capabilities, so programmers can add new algorithms with minimal effort and programs can incorporate new capabilities automatically. Finally, a flexible new parallelism system has been designed to address the shortcomings in the rigid system in EMAN1. PMID:16859925

Tang, Guang; Peng, Liwei; Baldwin, Philip R; Mann, Deepinder S; Jiang, Wen; Rees, Ian; Ludtke, Steven J

2007-01-01

479

Linear pixel shuffling for image processing: an introduction  

NASA Astrophysics Data System (ADS)

We investigate a method of ordering pixels (the elements of a rectangular matrix) based on an arithmetic progression with wrap-around (modular arithmetic). For appropriate choices of the progression's parameters, based on a generalization of Fibonacci numbers and the golden mean, we find equidistributed collections of pixels formed by subintervals of the pixel progression, or 'shuffle'. We illustrate this equidistribution with a novel approach to progressive rendering of a synthetic image, and we suggest several opportunities for its application to other areas of image processing.
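A minimal sketch of the idea, assuming a simple golden-ratio-derived step rather than the paper's exact parameter choice: generate a modular arithmetic progression over the pixel indices whose step is coprime with the pixel count, so every pixel is visited once and early subsequences are roughly equidistributed over the image (useful for progressive rendering).

```python
import math
import numpy as np

def linear_pixel_shuffle(rows, cols):
    """Return a (rows*cols, 2) array of pixel coordinates in a shuffled
    visiting order produced by an arithmetic progression with wrap-around."""
    total = rows * cols
    step = round(total * (math.sqrt(5) - 1) / 2)     # golden-ratio fraction
    while math.gcd(step, total) != 1:                # ensure a full cycle
        step += 1
    order = (np.arange(total) * step) % total        # shuffled linear indices
    return np.stack(np.unravel_index(order, (rows, cols)), axis=1)

coords = linear_pixel_shuffle(480, 640)
```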

Anderson, Peter G.

1993-04-01

480

Lunar and Planetary Science XXXV: Image Processing and Earth Observations  

NASA Technical Reports Server (NTRS)

The titles in this section include: 1) Expansion in Geographic Information Services for PIGWAD; 2) Modernization of the Integrated Software for Imagers and Spectrometers; 3) Science-based Region-of-Interest Image Compression; 4) Topographic Analysis with a Stereo Matching Tool Kit; 5) Central Avra Valley Storage and Recovery Project (CAVSARP) Site, Tucson, Arizona: Floodwater and Soil Moisture Investigations with Extraterrestrial Applications; 6) ASE Floodwater Classifier Development for EO-1 HYPERION Imagery; 7) Autonomous Sciencecraft Experiment (ASE) Operations on EO-1 in 2004; 8) Autonomous Vegetation Cover Scene Classification of EO-1 Hyperion Hyperspectral Data; 9) Long-Term Continental Areal Reduction Produced by Tectonic Processes.

2004-01-01

481

Application of digital image processing techniques to astronomical imagery, 1979  

NASA Technical Reports Server (NTRS)

Several areas of applications of image processing to astronomy were identified and discussed. These areas include: (1) deconvolution for atmospheric seeing compensation; a comparison between maximum entropy and conventional Wiener algorithms; (2) polarization in galaxies from photographic plates; (3) time changes in M87 and methods of displaying these changes; (4) comparing emission line images in planetary nebulae; and (5) log intensity, hue saturation intensity, and principal component color enhancements of M82. Examples are presented of these techniques applied to a variety of objects.
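For the seeing-compensation item, the conventional Wiener approach mentioned in the abstract can be written as a one-step frequency-domain filter; the sketch below assumes a point-spread function of the same size as the image, centered, and an assumed constant noise-to-signal ratio (the maximum-entropy alternative is iterative and not shown).

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=0.01):
    """Wiener deconvolution: G = conj(H) / (|H|^2 + NSR), where H is the
    optical transfer function of the point-spread function."""
    H = np.fft.fft2(np.fft.ifftshift(psf))           # psf same size, centered
    G = np.conj(H) / (np.abs(H)**2 + nsr)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * G))
```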

Lorre, J. J.

1979-01-01

482

Using Satellite Imagery and Google Tools to Teach Planetary Surface Forming Processes. (Invited)  

NASA Astrophysics Data System (ADS)

Space and planetary science topics are included as an important part of an introductory physical geology course to give students the ability to understand and interpret satellite imagery and have a broad, synergistic understanding of surface processes