These are representative sample records from Science.gov related to your search topic.
For comprehensive and current results, perform a real-time search at Science.gov.
1

Applying a visual language for image processing as a graphical teaching tool in medical imaging  

NASA Astrophysics Data System (ADS)

Typical user interaction in image processing is with command line entries, pull-down menus, or text menu selections from a list, and as such is not generally graphical in nature. Although applying these interactive methods to construct more sophisticated algorithms from a series of simple image processing steps may be clear to engineers and programmers, it may not be clear to clinicians. A solution to this problem is to implement a visual programming language that uses visual representations to express image processing algorithms. Visual representations promote a more natural and rapid understanding of image processing algorithms by providing more visual insight into what the algorithms do than the interactive methods mentioned above can provide. Individuals accustomed to dealing with images will be more likely to understand an algorithm that is represented visually. This is especially true of referring physicians, such as surgeons in an intensive care unit. With the increasing acceptance of picture archiving and communication system (PACS) workstations and the trend toward increasing clinical use of image processing, referring physicians will need to learn more sophisticated concepts than simply image access and display. If the procedures that they perform commonly, such as window width and window level adjustment and image enhancement using unsharp masking, are depicted visually in an interactive environment, it will be easier for them to learn and apply these concepts. The software described in this paper is a visual programming language for image processing, implemented on the NeXT computer using NeXTstep user interface development tools and other tools in an object-oriented environment. The concept is based upon the description of a visual language titled "Visualization of Vision Algorithms" (VIVA). Iconic representations of simple image processing steps are placed onto a workbench screen and connected into a dataflow path by the user.
As the user creates and edits a dataflow path, more complex algorithms can be built on the screen. Once the algorithm is built, it can be executed, its results can be reviewed, and operator parameters can be interactively adjusted until an optimized output is produced. The optimized algorithm can then be saved and added to the system as a new operator. This system has been evaluated as a graphical teaching tool for window width and window level adjustment, image enhancement using unsharp masking, and other techniques.

Birchman, James J.; Tanimoto, Steven L.; Rowberg, Alan H.; Choi, Hyung-Sik; Kim, Yongmin

1992-05-01

2

A self-teaching image processing and voice-recognition-based, intelligent and interactive system to educate visually impaired children  

NASA Astrophysics Data System (ADS)

A self-teaching image processing and voice-recognition-based system is developed to educate visually impaired children, chiefly in their primary education. The system comprises a computer, a vision camera, an ear speaker, and a microphone. The camera, attached to the computer system, is mounted on the ceiling opposite (at the required angle) the desk on which the book is placed. Sample images and voices, in the form of instructions and commands for English and Urdu alphabets, numeric digits, operators, and shapes, are stored in a database. A blind child first reads an embossed character (object) with the fingers, then speaks the answer, the name of the character or shape, etc., into the microphone. When the child's voice command is received by the microphone, an image is taken by the camera and processed by a MATLAB® program, developed with the Image Acquisition and Image Processing toolboxes, which generates a response or the required set of instructions to the child via the ear speaker, resulting in the self-education of a visually impaired child. A speech recognition program, also developed in MATLAB® with the Data Acquisition and Signal Processing toolboxes, records and processes the child's commands.

Iqbal, Asim; Farooq, Umar; Mahmood, Hassan; Asad, Muhammad Usman; Khan, Akrama; Atiq, Hafiz Muhammad

2010-02-01

3

Image Processing  

NASA Technical Reports Server (NTRS)

Electronic Imagery, Inc.'s ImageScale Plus software, developed through a Small Business Innovation Research (SBIR) contract with Kennedy Space Center for use on the Space Shuttle orbiter in 1991, enables astronauts to conduct image processing, prepare electronic still camera images in orbit, display them, and downlink images to ground-based scientists for evaluation. Electronic Imagery, Inc.'s ImageCount, a spin-off product of ImageScale Plus, is used to count trees in Florida orange groves. Other applications include x-ray and MRI imagery, textile designs, and special effects for movies. As of 1/28/98, the company could not be located; therefore, contact/product information is no longer valid.

1993-01-01

4

Teaching: A Reflective Process  

ERIC Educational Resources Information Center

In this article, the authors describe how they used formative assessments to ferret out possible misconceptions among middle-school students in a unit about weather-related concepts. Because they teach fifth- and eighth-grade science, this assessment also gives them a chance to see how student understanding develops over the years. This year they…

German, Susan; O'Day, Elizabeth

2009-01-01

5

Linear Algebra and Image Processing  

ERIC Educational Resources Information Center

We use the computing technology digital image processing (DIP) to enhance the teaching of linear algebra so as to make the course more visual and interesting. This visual approach of using technology to link linear algebra to DIP proved interesting and unexpected to students and faculty alike. (Contains 2 tables and 11 figures.)
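The link between linear algebra and DIP can be illustrated with a tiny NumPy sketch (not the authors' course materials; a generic example of the approach):

```python
import numpy as np

# An image is just a matrix, so classroom linear-algebra operations
# become visible image manipulations.
img = np.arange(12, dtype=float).reshape(3, 4)

# Left-multiplying by a reversed identity (a permutation matrix)
# flips the image vertically.
P = np.eye(3)[::-1]
flipped = P @ img

# Scalar multiplication rescales intensity (a simple contrast change).
brighter = 1.5 * img
```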

Allali, Mohamed

2010-01-01

6

Digital Image Processing  

Microsoft Academic Search

This paper describes the basic technological aspects of Digital Image Processing with special reference to satellite image processing. Basically, all satellite image-processing operations can be grouped into three categories: Image Rectification and Restoration, Enhancement and Information Extraction. The former deals with initial processing of raw image data to correct for geometric distortion, to calibrate the data radiometrically and to eliminate

Minakshi Kumar

1981-01-01

7

Learning to Use Geographic Information Systems and Image Processing and Analysis to Teach Ocean Science to Middle School Students  

NASA Astrophysics Data System (ADS)

This presentation will provide a middle school teacher's perspective on Ocean Explorers, a three-year project directed at teachers and schools in California. Funded by the Information Technology Experiences for Students and Teachers (ITEST) program at the National Science Foundation, Ocean Explorers is giving support to teams of teachers that will serve as local user groups for the exploration of geographic information systems (GIS) and image processing and analysis (IPA) as educational technologies for studying ocean science. Conducted as a collaboration between the nonprofit Center for Image Processing in Education and the Channel Islands National Marine Sanctuary, the project is providing mentoring, software, equipment, funding, and training on how to design inquiry-based activities that support achievement of California's standards for science, technology, mathematics, and reading education. During year two of Ocean Explorers, the teams of teachers will begin to use GIS and IPA as tools for involving their students in original research on issues of interest to their home communities. With assistance from the Ocean Explorers project, the teachers will create inquiry-based activities for their students that will help their school achieve targeted standards. This presentation will focus on plans by one teacher for involving students from St. Mary's Middle School, Fullerton, California, in tracking of ocean pollution and beach closures along the Southern California coast.

Moore, S. D.; Martin, J.; Kinzel, M.

2004-12-01

8

Multispectral imaging and image processing  

NASA Astrophysics Data System (ADS)

The color accuracy of conventional RGB cameras is not sufficient for many color-critical applications. One of these applications, the measurement of color defects in yarns, is why Prof. Til Aach and the Institute of Image Processing and Computer Vision (RWTH Aachen University, Germany) started off with multispectral imaging. The first acquisition device was a camera using a monochrome sensor and seven bandpass color filters positioned sequentially in front of it. The camera allowed sampling the visible wavelength range more accurately and reconstructing the spectra for each acquired image position. An overview will be given of several optical and imaging aspects of the multispectral camera that have been investigated. For instance, optical aberrations caused by the filters and camera lens deteriorate the quality of captured multispectral images. The different aberrations were analyzed thoroughly and compensated based on models for the optical elements and the imaging chain by utilizing image processing. With this compensation, geometrical distortions disappear and sharpness is enhanced, without reducing the color accuracy of multispectral images. Strong foundations in multispectral imaging were laid and a fruitful cooperation was initiated with Prof. Bernhard Hill. Current research topics, such as stereo multispectral imaging and goniometric multispectral measurements, that are further explored with his expertise will also be presented in this work.
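Reconstructing a spectrum from a handful of bandpass-filter measurements is, at heart, a linear inverse problem. A minimal least-squares sketch (a generic model with made-up numbers, not the institute's actual calibration or algorithm):

```python
import numpy as np

# Toy linear model of multispectral acquisition: measurement = S @ spectrum,
# where each row of S is one bandpass filter's spectral sensitivity.
rng = np.random.default_rng(0)
wavelengths = 31                       # e.g. 400..700 nm in 10 nm steps
S = rng.random((7, wavelengths))       # 7 filters, as in the camera described
true_spectrum = rng.random(wavelengths)
measurement = S @ true_spectrum

# Least-squares (pseudo-inverse) estimate of the spectrum.
estimate, *_ = np.linalg.lstsq(S, measurement, rcond=None)
```

With only 7 measurements for 31 unknowns the problem is underdetermined, so practical systems add smoothness priors; here the estimate at least reproduces the measurements exactly.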

Klein, Julie

2014-02-01

9

Digital image processing  

Microsoft Academic Search

The field of digital image processing is reviewed with reference to its origins, progress, current status, and prospects for the future. Consideration is given to the evolution of image processor display devices, developments in the functional components of an image processor display system (e.g. memory, data bus, and pipeline central processing unit), and developments in the software. The major future

B. R. Hunt

1981-01-01

10

Retinex Image Processing  

NSDL National Science Digital Library

Retinex Image Processing technology, developed by NASA, is used to compensate for the effect of poor lighting in recorded images. Shadows, changes in the color of illumination, and several other factors can cause image quality to be highly variable. Using an advanced system that sharpens images and efficiently renders colors, a much more constant image quality can be achieved regardless of the lighting. Retinex technology is described in several online publications that can be downloaded from this Web site. Additionally, some example pictures of scenes taken with and without the image processing are shown.
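The core retinex idea, comparing each pixel to a blurred estimate of its surround, can be sketched in a few lines (a simplified single-scale version; NASA's published method is multi-scale with color restoration):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(image, sigma=20.0):
    """Single-scale retinex: log of the image minus log of its blurred
    surround, which suppresses slowly varying illumination."""
    image = image.astype(float) + 1.0          # avoid log(0)
    surround = gaussian_filter(image, sigma)
    return np.log(image) - np.log(surround)

# A uniformly lit flat image has no local contrast, so the output is
# (near) zero everywhere; shadows and lighting gradients are removed
# the same way.
flat = np.full((32, 32), 100.0)
out = single_scale_retinex(flat)
```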

11

Art Images for College Teaching (AICT)  

NSDL National Science Digital Library

Allan T. Kohl of the Minneapolis College of Art and Design presents Art Images for College Teaching (AICT), "a royalty-free image exchange resource for the educational community." AICT images may be downloaded, and making derivative copies is permitted, as long as the images will be used for educational or personal purposes, not commercial. AICT on the Web consists of selections from a more extensive image collection on CD, arranged in five broad chronological sections: Ancient, Medieval Era, Renaissance & Baroque, 18th - 20th Century, and Non-Western cultures. AICT is still in development; for example, the 18th - 20th Century section currently contains several messages informing users "there is nothing here." Educational institutions and individuals who find this too limiting can rent entire image CDs. Already, AICT includes a great many of the images necessary for teaching art history courses. One helpful feature is a concordance to about a dozen standard art history textbooks, allowing users to cross-reference AICT images to these books.

12

Image Processing System  

NASA Technical Reports Server (NTRS)

Mallinckrodt Institute of Radiology (MIR) is using a digital image processing system which employs NASA-developed technology. MIR's computer system is the largest radiology system in the world. It is used in diagnostic imaging. Blood vessels are injected with x-ray dye, and the images which are produced indicate whether arteries are hardened or blocked. A computer program developed by Jet Propulsion Laboratory known as Mini-VICAR/IBIS was supplied to MIR by COSMIC. The program provides the basis for developing the computer imaging routines for data processing, contrast enhancement and picture display.

1986-01-01

13

Image Processing Software  

NASA Astrophysics Data System (ADS)

ABSTRACT: A brief description of astronomical image processing software is presented. The software was developed on a Digital MicroVAX II computer system. (Abstract also given in Spanish.) Keywords: DATA ANALYSIS; IMAGE PROCESSING

Bosio, M. A.

1990-11-01

14

Meteorological image processing applications  

NASA Technical Reports Server (NTRS)

Meteorologists at NASA's Goddard Space Flight Center are conducting an extensive program of research in weather and climate related phenomena. This paper focuses on meteorological image processing applications directed toward gaining a detailed understanding of severe weather phenomena. In addition, the paper discusses the ground data handling and image processing systems used at the Goddard Space Flight Center to support severe weather research activities and describes three specific meteorological studies which utilized these facilities.

Bracken, P. A.; Dalton, J. T.; Hasler, A. F.; Adler, R. F.

1979-01-01

15

Image sets for satellite image processing systems  

Microsoft Academic Search

The development of novel image processing algorithms requires a diverse and relevant set of training images to ensure the general applicability of such algorithms for their required tasks. Images must be appropriately chosen for the algorithm's intended applications. Image processing algorithms often employ the discrete wavelet transform (DWT) algorithm to provide efficient compression and near-perfect reconstruction of image data. Defense

Michael R. Peterson; Toby Horner; Asael Temple

2011-01-01

16

BAOlab: Image processing program  

NASA Astrophysics Data System (ADS)

BAOlab is an image processing package written in C that should run on nearly any UNIX system with just the standard C libraries. It reads and writes images in standard FITS format; 16- and 32-bit integer as well as 32-bit floating-point formats are supported. Multi-extension FITS files are currently not supported. Among its tools are ishape for size measurements of compact sources, mksynth for generating synthetic images consisting of a background signal including Poisson noise and a number of pointlike sources, imconvol for convolving two images (a “source” and a “kernel”) with each other using fast Fourier transforms (FFTs) and storing the output as a new image, and kfit2d for fitting a two-dimensional King model to an image.
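The FFT-convolution technique used by imconvol rests on the convolution theorem: multiplication in the Fourier domain is circular convolution in the image domain. A NumPy sketch (not BAOlab's C implementation):

```python
import numpy as np

def fft_convolve(source, kernel):
    """Circular convolution of two equal-size images via FFTs:
    multiply their transforms, then invert."""
    return np.real(np.fft.ifft2(np.fft.fft2(source) * np.fft.fft2(kernel)))

# Sanity check: convolving with a delta-function kernel (all power at
# the origin) returns the source image unchanged.
src = np.random.default_rng(1).random((8, 8))
delta = np.zeros((8, 8))
delta[0, 0] = 1.0
out = fft_convolve(src, delta)
```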

Larsen, Søren S.

2014-03-01

17

Image processing techniques for digital breast imaging  

NASA Astrophysics Data System (ADS)

Digital imaging offers major advantages over conventional film radiology, especially with respect to image quality, the speed with which the images can be viewed, the ability to perform image processing, and the potential for computer-aided diagnosis. A typical mammographic image requires 10 million pixels of data, assuming 50-micrometer square pixels. Currently, no single sensor can satisfy these specifications. One approach to acquiring full-breast digital images utilizes multiple sub-images from two 1024 by 1024 pixel charge-coupled devices. This paper describes how the full-breast image is obtained by translating the sensor apparatus and 'stitching' the sub-images together. Radiologists desire seamless full-breast images, so a 'blending' process was developed to prevent visible seams in the full-breast image. Also, flaws in the detection system are removed by image processing techniques. Finally, the process of enhancing an image for film printing is described.
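One standard way to blend overlapping sub-images without visible seams is feathering: in the overlap region, weights ramp linearly from one image to the other. This sketch shows the general technique, not the paper's specific blending process:

```python
import numpy as np

def feather_blend(left, right, overlap):
    """Stitch two sub-images that share `overlap` columns, cross-fading
    linearly across the overlap so no hard seam appears."""
    w = np.linspace(1.0, 0.0, overlap)             # weight of the left image
    blended = left[:, -overlap:] * w + right[:, :overlap] * (1 - w)
    return np.hstack([left[:, :-overlap], blended, right[:, overlap:]])

# Two flat strips of different brightness: the blend ramps smoothly
# from 10 to 20 across the 3-column overlap instead of jumping.
left = np.full((4, 6), 10.0)
right = np.full((4, 6), 20.0)
out = feather_blend(left, right, overlap=3)
```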

Otto, Gregory P.; Palmer, Douglas A.; Tran, Jean-Marie; Spivey, Brett A.; Clark, Stuart E.

1996-11-01

18

Image processing techniques for digital breast imaging  

Microsoft Academic Search

Digital imaging offers major advantages over conventional film radiology, especially with respect to image quality, the speed with which the images can be viewed, the ability to perform image processing, and the potential for computer-aided diagnosis. A typical mammographic image requires 10 million pixels of data, assuming 50-micrometer square pixels. Currently, no single sensor can

Gregory P. Otto; Douglas A. Palmer; Jean-Marie Tran; Brett A. Spivey; Stuart E. Clark

1996-01-01

19

Image processing and reconstruction  

SciTech Connect

This talk will examine some mathematical methods for image processing and the solution of underdetermined, linear inverse problems. The talk will have a tutorial flavor, mostly accessible to undergraduates, while still presenting research results. The primary approach is the use of optimization problems. We will find that relaxing the usual assumption of convexity will give us much better results.
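The underdetermined setting the talk describes can be shown in a few lines: with fewer equations than unknowns there are infinitely many solutions, and the choice of objective (minimum norm here; sparsity-promoting or nonconvex penalties in the talk) determines which one you get. A generic NumPy sketch, not the speaker's method:

```python
import numpy as np

# Underdetermined linear inverse problem: 5 measurements, 12 unknowns.
rng = np.random.default_rng(2)
A = rng.standard_normal((5, 12))
b = rng.standard_normal(5)

# np.linalg.lstsq returns the minimum-norm solution among the infinitely
# many x with A @ x = b; regularized formulations select differently.
x, *_ = np.linalg.lstsq(A, b, rcond=None)
```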

Chartrand, Rick [Los Alamos National Laboratory]

2012-06-15

20

Image-Processing Program  

NASA Technical Reports Server (NTRS)

IMAGEP manipulates digital image data to effect various processing, analysis, and enhancement functions. It is a keyboard-driven program organized into nine subroutines; within these are sub-subroutines, also selected via the keyboard. The algorithm has possible scientific, industrial, and biomedical applications in the study of flows in materials, the analysis of steels and ores, and pathology, respectively.

Roth, D. J.; Hull, D. R.

1994-01-01

21

A Process Approach to Teaching Thematic Instruction.  

ERIC Educational Resources Information Center

This paper presents information on using a process approach to teaching thematic instruction in preservice teacher education. Section 1 offers a junior block thematic mini-unit designed to give students intensive practice in a specific content area by designing a thematic topic in the content area and by developing lessons on three or four related…

Murray, Ann

22

Image Processing Chapter 6: Color Image  

E-print Network

The use of color in image processing is motivated by: (1) color is important in object recognition; (2) the human eye sees all colors as variable combinations of the three primary colors red, green, and blue. The three primary colors are added and received by the eye as a full-color image.

Wu, Xiaolin

23

Image processing technology  

SciTech Connect

This is the final report of a two-year, Laboratory-Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The primary objective of this project was to advance image processing and visualization technologies for environmental characterization. This was effected by developing and implementing analyses of remote sensing data from satellite and airborne platforms, and demonstrating their effectiveness in visualization of environmental problems. Many sources of information were integrated as appropriate using geographic information systems.

Van Eeckhout, E.; Pope, P.; Balick, L. [and others]

1996-07-01

24

scikit-image: image processing in Python  

PubMed Central

scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image. More information can be found on the project homepage, http://scikit-image.org. PMID:25024921
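A minimal usage example (API as documented at scikit-image.org): Sobel edge detection on a synthetic image, where a bright square on a dark background yields strong responses only along its border.

```python
import numpy as np
from skimage import filters

# A 32x32 dark image containing one bright square.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0

# skimage.filters.sobel returns the Sobel edge magnitude.
edges = filters.sobel(img)
```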

Schönberger, Johannes L.; Nunez-Iglesias, Juan; Boulogne, François; Warner, Joshua D.; Yager, Neil; Gouillart, Emmanuelle; Yu, Tony

2014-01-01

25

scikit-image: image processing in Python.  

PubMed

scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image. More information can be found on the project homepage, http://scikit-image.org. PMID:25024921

van der Walt, Stéfan; Schönberger, Johannes L; Nunez-Iglesias, Juan; Boulogne, François; Warner, Joshua D; Yager, Neil; Gouillart, Emmanuelle; Yu, Tony

2014-01-01

26

IMAGES: An interactive image processing system  

NASA Technical Reports Server (NTRS)

The IMAGES interactive image processing system was created specifically for undergraduate remote sensing education in geography. The system is interactive, relatively inexpensive to operate, almost hardware independent, and responsive to numerous users at one time in a time-sharing mode. Most important, it provides a medium whereby theoretical remote sensing principles discussed in lecture may be reinforced in laboratory as students perform computer-assisted image processing. In addition to its use in academic and short course environments, the system has also been used extensively to conduct basic image processing research. The flow of information through the system is discussed including an overview of the programs.

Jensen, J. R.

1981-01-01

27

Smart Image Enhancement Process  

NASA Technical Reports Server (NTRS)

Contrast and lightness measures are used to first classify the image as either non-turbid or turbid. If turbid, the original image is enhanced to generate a first enhanced image. If non-turbid, the original image is classified in terms of a merged contrast/lightness score based on the contrast and lightness measures. The non-turbid image is enhanced to generate a second enhanced image when a poor contrast/lightness score is associated therewith. When the second enhanced image has a poor contrast/lightness score associated therewith, this image is enhanced to generate a third enhanced image. A sharpness measure is computed for one image that is selected from (i) the non-turbid image, (ii) the first enhanced image, (iii) the second enhanced image when a good contrast/lightness score is associated therewith, and (iv) the third enhanced image. If the selected image is not sharp, it is sharpened to generate a sharpened image. The final image is selected from the selected image and the sharpened image.
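The branching logic of the abstract can be sketched as plain control flow. The score functions, thresholds, and enhancement operators below are all made up; only the branching structure follows the abstract, and the final selection step is simplified:

```python
# Cascade sketch: classify, enhance up to three times, then sharpen
# if needed. Integers stand in for images in this toy example.

def smart_enhance(image, is_turbid, score, enhance_fn, sharpen_fn, is_sharp):
    if is_turbid(image):
        selected = enhance_fn(image)              # first enhanced image
    else:
        selected = image
        if score(selected) == "poor":
            selected = enhance_fn(selected)       # second enhanced image
            if score(selected) == "poor":
                selected = enhance_fn(selected)   # third enhanced image
    if not is_sharp(selected):
        selected = sharpen_fn(selected)           # sharpened image
    return selected

result = smart_enhance(
    0,
    is_turbid=lambda im: False,
    score=lambda im: "poor" if im < 2 else "good",
    enhance_fn=lambda im: im + 1,                 # each pass "improves" by 1
    sharpen_fn=lambda im: im + 10,
    is_sharp=lambda im: True,
)
```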

Jobson, Daniel J. (Inventor); Rahman, Zia-ur (Inventor); Woodell, Glenn A. (Inventor)

2012-01-01

28

Topics in genomic image processing  

E-print Network

M-FISH and cDNA microarray images. In particular, we focus on three important areas: M-FISH image compression, microarray image processing, and expression-based classification. Two schemes, embedded M-FISH image coding (EMIC) and Microarray BASICA: Background...

Hua, Jianping

2006-04-12

29

FORTRAN Algorithm for Image Processing  

NASA Technical Reports Server (NTRS)

FORTRAN computer algorithm containing various image-processing analysis and enhancement functions developed. Algorithm developed specifically to process images of developmental heat-engine materials obtained with sophisticated nondestructive evaluation instruments. Applications of program include scientific, industrial, and biomedical imaging for studies of flaws in materials, analyses of steel and ores, and pathology.

Roth, Don J.; Hull, David R.

1987-01-01

30

The Astronomical Image Processing System  

E-print Network

The Astronomical Image Processing System. Eric W. Greisen, September 12, 1988. Introduction: The Astronomical Image Processing System, AIPS for short, is the National Radio Astronomy Observatory's… …and what they will find unnatural. And I cannot conceive some of the algorithms which will be invented

Groppi, Christopher

31

Absolute limits on image processing  

Microsoft Academic Search

The metrology of image processing for medical diagnostic imaging systems can be developed from two different approaches. One method involves the comparison of actual clinical efficacy of one processing scheme versus another. A paradigm for this approach is the ROC analysis technique outlined by Metz and by Swets. The second method, taking as its starting point axioms of information theory

David G. Brown; Robert F. Wagner; Mary Pastel Anderson

1980-01-01

32

Image Processing 2: Color Spaces  

E-print Network

With J: Image Processing 2: Color Spaces. Cliff Reiter, Mathematics Department, Lafayette College. …for many types of image analysis and processing. We will discuss conversion between RGB and YUV, YIQ, and HSV… …red, green, and blue. Figure 1 shows RGB color space envisioned as a cube. Grays appear along
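Color-space conversion of the kind this article covers is available in Python's standard library (the article itself uses the J language; this is just an equivalent illustration):

```python
import colorsys

# Pure red in RGB maps to hue 0, full saturation, full value in HSV.
h, s, v = colorsys.rgb_to_hsv(1.0, 0.0, 0.0)

# The conversion round-trips: converting back recovers the RGB triple.
r, g, b = colorsys.hsv_to_rgb(h, s, v)
```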

Reiter, Clifford A.

33

Fundamentals of Image Processing  

E-print Network

…at the edges or noise points. Motivation: noise reduction. Assume the image is degraded by noise: salt-and-pepper noise (random occurrences of black and white pixels), impulse noise (random occurrences of white pixels), or Gaussian noise (intensity variations with some sigma). One remedy: make multiple observations of the same static scene and average them.
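The averaging idea from these lecture notes is easy to demonstrate: averaging N observations of a static scene reduces zero-mean Gaussian noise by a factor of about sqrt(N). A NumPy sketch with made-up numbers:

```python
import numpy as np

# One static scene, observed 100 times with additive Gaussian noise.
rng = np.random.default_rng(3)
scene = np.full((64, 64), 50.0)
N = 100
observations = scene + rng.normal(0.0, 4.0, size=(N, 64, 64))

# Averaging the observations suppresses the noise by ~sqrt(N).
average = observations.mean(axis=0)

noise_before = (observations[0] - scene).std()   # ~4
noise_after = (average - scene).std()            # ~4 / sqrt(100) = ~0.4
```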

Erdem, Erkut

34

Enhancing the Teaching-Learning Process: A Knowledge Management Approach  

ERIC Educational Resources Information Center

Purpose: The purpose of this paper is to emphasize the need for knowledge management (KM) in the teaching-learning process in technical educational institutions (TEIs) in India, and to assert the impact of information technology (IT) based KM intervention in the teaching-learning process. Design/methodology/approach: The approach of the paper is…

Bhusry, Mamta; Ranjan, Jayanthi

2012-01-01

35

What Does Teaching Writing as a Process Really Mean?  

Microsoft Academic Search

For most teachers, teaching writing as a process consists of having students write a draft and requiring them to re-write it after peer editing. This write-rewrite process may benefit students who are already familiar with the rhetorical structure and linguistic characteristics of the expected text. But for students who are not, more systematic teaching of specific composing skills seems to

Antonia Chandrasegaran

36

Astronomical Image Processing with Hadoop  

NASA Astrophysics Data System (ADS)

In the coming decade astronomical surveys of the sky will generate tens of terabytes of images and detect hundreds of millions of sources every night. With a requirement that these images be analyzed in real time to identify moving sources such as potentially hazardous asteroids or transient objects such as supernovae, these data streams present many computational challenges. In the commercial world, new techniques that utilize cloud computing have been developed to handle massive data streams. In this paper we describe how cloud computing, and in particular the map-reduce paradigm, can be used in astronomical data processing. We will focus on our experience implementing a scalable image-processing pipeline for the SDSS database using Hadoop (http://hadoop.apache.org). This multi-terabyte imaging dataset approximates future surveys such as those which will be conducted with the LSST. Our pipeline performs image coaddition in which multiple partially overlapping images are registered, integrated and stitched into a single overarching image. We will first present our initial implementation, then describe several critical optimizations that have enabled us to achieve high performance, and finally describe how we are incorporating a large in-house existing image processing library into our Hadoop system. The optimizations involve prefiltering of the input to remove irrelevant images from consideration, grouping individual FITS files into larger, more efficient indexed files, and a hybrid system in which a relational database is used to determine the input images relevant to the task. The incorporation of an existing image processing library, written in C++, presented difficult challenges since Hadoop is programmed primarily in Java. We will describe how we achieved this integration and the sophisticated image processing routines that were made feasible as a result. 
We will end by briefly describing the longer term goals of our work, namely detection and classification of transient objects and automated object classification.

Wiley, K.; Connolly, A.; Krughoff, S.; Gardner, J.; Balazinska, M.; Howe, B.; Kwon, Y.; Bu, Y.

2011-07-01

37

Image Processing: Some Challenging Problems  

NASA Astrophysics Data System (ADS)

Image processing can be broadly defined as the manipulation of signals which are inherently multidimensional. The most common such signals are photographs and video sequences. The goals of processing or manipulation can be (i) compression for storage or transmission; (ii) enhancement or restoration; (iii) analysis, recognition, and understanding; or (iv) visualization for human observers. The use of image processing techniques has become almost ubiquitous; they find applications in such diverse areas as astronomy, archaeology, medicine, video communication, and electronic games. Nonetheless, many important problems in image processing remain unsolved. It is the goal of this paper to discuss some of these challenging problems. In Section I, we mention a number of outstanding problems. Then, in the remainder of this paper, we concentrate on one of them: very-low-bit-rate video compression. This is chosen because it involves almost all aspects of image processing.

Huang, T. S.; Aizawa, K.

1993-11-01

38

Hilbert transform in image processing  

Microsoft Academic Search

Generally, the Hilbert transform plays an important role in dealing with analytic functions. Its main contribution to signal processing is converting band-pass electrical signals into low-pass form. Image signals are commonly considered to be of an electrical nature and are thus amenable to this concept. The Fourier spectrum of images can be contributed to the Hilbert
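In SciPy, the Hilbert transform is exposed through the analytic signal (scipy.signal.hilbert): the real part of the output is the input, the imaginary part its Hilbert transform. A 1-D illustration, unrelated to the paper's specific image formulation:

```python
import numpy as np
from scipy.signal import hilbert

# A cosine spanning exactly 5 periods over the window.
t = np.linspace(0.0, 1.0, 500, endpoint=False)
x = np.cos(2 * np.pi * 5 * t)

# Analytic signal: x + i * HilbertTransform(x).
analytic = hilbert(x)

# For a cosine, the Hilbert transform is a sine, so the envelope
# |analytic| is constant at 1.
envelope = np.abs(analytic)
```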

A. O. A. Salam

1999-01-01

39

Changes in the images of teaching, teachers, and children expressed by student teachers before and after student teaching.  

PubMed

The purpose of this study was to investigate how education majors' images of teaching, teachers, and children change before and after student teaching, with special attention to the grade level (Grades 1-2, 3-4, 5-6) taught by the student teachers at primary school in Japan. A total of 126 student teachers from an education faculty (49 men, 77 women) participated in this study using metaphor-questionnaires before and after student teaching. For images of teaching, responses to the factors Dull Event and Live Event changed, suggesting that students started to develop more positive, active, and clear images of teaching. For images of teachers, responses on the factor Performer changed, suggesting that students started to develop more active images of teachers. For images of children, responses on the factors Critic and Pure-minded Person changed, suggesting that student teachers started to develop more realistic images of children. However, grade level taught had no significant effect. PMID:20712166

Mishima, Tomotaka; Horimoto, Akihiro; Mori, Toshiaki

2010-06-01

40

Teaching Image Computation: From Computer Graphics to Computer Vision  

E-print Network

Teaching Image Computation: From Computer Graphics to Computer Vision Bruce A. Draper and J. Ross Beveridge Department of Computer Science Colorado State University Fort Collins, CO 80523 draper@cs.colostate.edu ross@cs.colostate.edu Keywords: Computer Vision, Computer Graphics, Education, Course Design

Draper, Bruce A.

41

Transmitting Musical Images: Using Music to Teach Public Speaking  

ERIC Educational Resources Information Center

Oral communication courses traditionally help students identify the proper arrangement of words to convey particular images. Although instructors emphasize the importance of speaking powerfully, they often struggle to find effective ways to teach their students "how" to deliver a message that resonates with the audience. Given the clear importance…

Cohen, Steven D.; Wei, Thomas E.

2010-01-01

42

History Making and the Plains Indians. Teaching with Images.  

ERIC Educational Resources Information Center

Considers the use of cultural images and symbols of Native Americans to reflect, interpret, and justify the westward expansion of the United States. Seldom overtly racist, paintings and lithographs of the time often presented a benign and romantic vision of the West. Includes suggested teaching ideas. (MJP)

Rothwell, Jennifer Truran

1997-01-01

43

Magnetic resonance imaging simulator: a teaching tool for radiology.  

PubMed

The increasing use of magnetic resonance imaging (MRI) as a clinical modality has put an enormous burden on medical institutions to cost-effectively teach MRI scanning techniques to technologists and physicians. Since MRI scanner time is a scarce resource, it would be ideal if the teaching could be performed effectively off-line. To meet this goal, the Radiology Department at the University of Pennsylvania has designed and developed a Magnetic Resonance Imaging Simulator. The simulator in its current implementation mimics the General Electric Signa (General Electric Magnetic Resonance Imaging System, Milwaukee, WI) scanner's user interface for image acquisition. The design is general enough to be applied to other MRI scanners. One unique feature of the simulator is its incorporation of an image-synthesis module that permits the user to derive images for any arbitrary combination of pulsing parameters for spin-echo, gradient-echo, and inversion-recovery pulse sequences. These images are computed in 5 seconds. The development platform chosen is a standard Apple Macintosh II (Apple Computer, Inc, Cupertino, CA) computer with no specialized hardware peripherals. The user interface is implemented in HyperCard (Apple Computer Inc, Cupertino, CA). All other software, including the synthesis and display functions, is implemented under the Macintosh Programmer's Workshop 'C' environment. The scan parameters, demographics, and images are tracked using an Oracle (Oracle Corp, Redwood Shores, CA) database. Images are currently stored on magnetic disk but could be stored on optical media with minimal effort. PMID:2085559
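The image-synthesis idea, deriving an image for arbitrary pulsing parameters from tissue maps, can be illustrated with the textbook spin-echo signal equation (a simplification of what such a simulator computes; the tissue values below are illustrative assumptions, not the simulator's):

```python
import numpy as np

def spin_echo_signal(pd, t1, t2, tr, te):
    """Classic spin-echo approximation: S = PD * (1 - exp(-TR/T1)) * exp(-TE/T2).
    PD is proton density; TR, TE, T1, T2 are in milliseconds."""
    return pd * (1.0 - np.exp(-tr / t1)) * np.exp(-te / t2)

# T2-weighted settings (long TR, long TE): long-T2 tissue appears brighter.
csf = spin_echo_signal(pd=1.0, t1=4000.0, t2=2000.0, tr=2500.0, te=100.0)
wm = spin_echo_signal(pd=0.7, t1=800.0, t2=80.0, tr=2500.0, te=100.0)
```

Evaluating this per pixel over proton-density, T1, and T2 maps yields a synthetic image for any TR/TE the user dials in, which is what makes off-line "scanning" practice possible.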

Rundle, D; Kishore, S; Seshadri, S; Wehrli, F

1990-11-01

44

Factors Causing Demotivation in EFL Teaching Process: A Case Study  

ERIC Educational Resources Information Center

Studies have mainly focused on strategies to motivate teachers or the student-teacher motivation relationships rather than teacher demotivation in the English as a foreign language (EFL) teaching process, whereas no data have been found on the factors that cause teacher demotivation in the Turkish EFL teaching contexts at the elementary education…

Aydin, Selami

2012-01-01

45

Amateur Image Pipeline Processing using Python plus PyRAF  

NASA Astrophysics Data System (ADS)

A template pipeline spanning observing planning to publishing is offered as a basis for establishing a long term observing program. The data reduction pipeline encapsulates all policy and procedures, providing an accountable framework for data analysis and a teaching framework for IRAF. This paper introduces the technical details of a complete pipeline processing environment using Python, PyRAF and a few other languages. The pipeline encapsulates all processing decisions within an auditable framework. The framework quickly handles the heavy lifting of image processing. It also serves as an excellent teaching environment for astronomical data management and IRAF reduction decisions.

Green, Wayne

2012-05-01

46

Teaching electron diffraction and imaging of macromolecules.  

PubMed Central

Electron microscopic analysis can be used to determine the three-dimensional structures of macromolecules at resolutions ranging between 3 and 30 Å. It differs from nuclear magnetic resonance spectroscopy or x-ray crystallography in that it allows an object's Coulomb potential functions to be determined directly from images and can be used to study relatively complex macromolecular assemblies in a crystalline or noncrystalline state. Electron imaging already has provided valuable structural information about various biological systems, including membrane proteins, protein-nucleic acid complexes, contractile and motile protein assemblies, viruses, and transport complexes for ions or macromolecules. This article, organized as a series of lectures, presents the biophysical principles of three-dimensional analysis of objects possessing different symmetries. PMID:8324196

Chiu, W; Schmid, M F; Prasad, B V

1993-01-01

47

Associative architecture for image processing  

NASA Astrophysics Data System (ADS)

This article presents a new generation of parallel processing architecture for real-time image processing. The approach is implemented in a real-time image processor chip, called the Xium™-2, based on combining a fully associative array, which provides the parallel engine, with a serial RISC core on the same die. The architecture is fully programmable and can implement a wide range of color image processing, computer vision, and media processing functions in real time. The associative part of the chip is based on the patent-pending methodology of Associative Computing Ltd. (ACL), which condenses 2048 associative processors, each of 128 'intelligent' bits. Each bit can be a processing bit or a memory bit. At only 33 MHz, in a 0.6-micron manufacturing process, the chip has a computational power of 3 billion ALU operations per second and 66 billion string-search operations per second. The fully programmable nature of the Xium™-2 chip enables developers to use ACL tools to write their own proprietary algorithms combined with existing image processing and analysis functions from ACL's extended set of libraries.

Adar, Rutie; Akerib, Avidan

1997-09-01

48

Teaching with "Voix et Images de France"  

ERIC Educational Resources Information Center

A report on the classroom use of "Voix et Images de France," the French text prepared by the Centre de Recherche et d'Etude pour la Diffusion du Francais (CREDIF) at the Ecole Normale Superieure de Saint-Cloud in France. (FB)

Marrow, G. D.

1970-01-01

49

Computer processing of radiographic images  

NASA Technical Reports Server (NTRS)

In the past 20 years, a substantial amount of effort has been expended on the development of computer techniques for enhancement of X-ray images and for automated extraction of quantitative diagnostic information. The historical development of these methods is described. Illustrative examples are presented and factors influencing the relative success or failure of various techniques are discussed. Some examples of current research in radiographic image processing are described.

Selzer, R. H.

1984-01-01

50

Digital processing of radiographic images  

NASA Technical Reports Server (NTRS)

Some techniques, along with the corresponding software documentation, are presented for the digital enhancement of radiographs. Both image handling and image processing operations are considered. The image handling operations dealt with are: (1) conversion of the data format from packed to unpacked and vice versa; (2) automatic extraction of image data arrays; (3) transposition and 90 deg rotations of large data arrays; (4) translation of data arrays for registration; and (5) reduction of the dimensions of data arrays by integral factors. Both the frequency and the spatial domain approaches are presented for the design and implementation of the image processing operations. It is shown that spatial domain recursive implementation of filters is much faster than nonrecursive implementations using fast Fourier transforms (FFT) for the cases of interest in this work. The recursive implementation of a class of matched filters for enhancing image signal-to-noise ratio is described. Test patterns are used to illustrate the filtering operations. The application of the techniques to radiographic images of metallic structures is demonstrated through several examples.
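The speed argument for recursive (IIR) spatial filtering can be illustrated with a first-order smoother that touches each pixel exactly once (a toy sketch; the paper's matched-filter designs are more elaborate, and the coefficients here are illustrative):

```python
import numpy as np

def recursive_smooth(img, a=0.5):
    """First-order recursive (IIR) low-pass along rows:
    y[n] = (1 - a) * x[n] + a * y[n - 1].
    Each pixel is visited once in a single causal pass."""
    out = np.array(img, dtype=float)
    for row in out:
        for n in range(1, len(row)):
            row[n] = (1 - a) * row[n] + a * row[n - 1]
    return out

step = recursive_smooth(np.array([[0.0, 0.0, 1.0, 1.0]]))  # edge gets smoothed
```

The recursive pass costs a constant amount of work per pixel, whereas an FFT-based pipeline costs O(N log N) per row plus forward and inverse transforms, which is the speed difference the abstract reports for its cases of interest.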

Bond, A. D.; Ramapriyan, H. K.

1973-01-01

51

Using Goldenrod Galls to Teach Science Process Skills.  

ERIC Educational Resources Information Center

Emphasizes the importance of using examples from the student's environment to aid in teaching science process skills. The author uses diagrams to aid in discussing the various uses of goldenrod (Solidago sp) galls in the classroom. (ZWH)

Peard, Terry L.

1994-01-01

52

Using Classic and Contemporary Visual Images in Clinical Teaching.  

ERIC Educational Resources Information Center

The patient's body is an image that medical students and residents use to process information. The classic use of images using the patient is qualitative and personal. The contemporary use of images is quantitative and impersonal. The contemporary use of imaging includes radiographic, nuclear, scintigraphic, and nuclear magnetic resonance…

Edwards, Janine C.

1990-01-01

53

FITS Liberator: Image processing software  

NASA Astrophysics Data System (ADS)

The ESA/ESO/NASA FITS Liberator makes it possible to process and edit astronomical science data in the FITS format to produce stunning images of the universe. Formerly a plugin for Adobe Photoshop, the current version of FITS Liberator is a stand-alone application and no longer requires Photoshop. This image processing software makes it possible to create color images using raw observations from a range of telescopes; the FITS Liberator continues to support the FITS and PDS formats, preferred by astronomers and planetary scientists respectively, which enables data to be processed from a wide range of telescopes and planetary probes, including ESO's Very Large Telescope, the NASA/ESA Hubble Space Telescope, NASA's Spitzer Space Telescope, ESA's XMM-Newton Telescope and Cassini-Huygens or Mars Reconnaissance Orbiter.
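The core of turning raw FITS counts into a displayable image is a clipped stretch. Below is a numpy sketch of a percentile-clipped asinh stretch, one of the stretch families tools like FITS Liberator offer (parameter choices and names are ours, not the application's defaults):

```python
import numpy as np

def asinh_stretch(data, lo=0.25, hi=99.75):
    """Clip raw counts to a percentile range, rescale to [0, 1], then apply an
    asinh stretch to lift faint structure without saturating bright cores."""
    lo_v, hi_v = np.percentile(data, [lo, hi])
    x = np.clip((data - lo_v) / (hi_v - lo_v + 1e-12), 0.0, 1.0)
    return np.arcsinh(10.0 * x) / np.arcsinh(10.0)

counts = np.random.default_rng(0).normal(100.0, 15.0, (64, 64))  # fake exposure
display = asinh_stretch(counts)                                  # values in [0, 1]
```

Applying such a stretch independently to exposures taken through different filters, then assigning each to a color channel, is the usual route from raw observations to a color composite.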

Lindberg Christensen, Lars; Nielsen, Lars Holm; Nielsen, Kaspar K.; Johansen, Teis; Hurt, Robert; de Martin, David

2012-06-01

54

Teaching the Inquiry Process to 21st Century Learners  

ERIC Educational Resources Information Center

Unlike the static, set-in-stone research project, the inquiry process is an interactive cycle used to teach research in any content area. The inquiry process engages students in a way that promotes critical thinking, higher-level processing, and the use of more varied and appropriate resources. This article introduces the inquiry process and…

Carnesi, Sabrina; DiGiorgio, Karen

2009-01-01

55

Teaching Science: A Picture Perfect Process.  

ERIC Educational Resources Information Center

Explains how teachers can use graphs and graphing concepts when teaching art, language arts, history, social studies, and science. Students can graph the lifespans of the Ninja Turtles' Renaissance namesakes (Donatello, Michelangelo, Raphael, and Leonardo da Vinci) or world population growth. (MDM)

Leyden, Michael B.

1994-01-01

56

The Change Process and Interdisciplinary Teaching.  

ERIC Educational Resources Information Center

Assesses the concerns raised by a group of middle-level teachers when developing and implementing interdisciplinary curriculum, teaming, and teaching. Participants progressed through various stages, including information, personal, management, and collaboration. Concerns at each level had to be resolved before teachers could reach the final…

Whinery, Barbara L.; Faircloth, C. Victoria

1994-01-01

57

Fingerprint recognition using image processing  

NASA Astrophysics Data System (ADS)

Fingerprint recognition is concerned with the difficult task of efficiently matching the image of a person's fingerprint against the fingerprints stored in a database. Fingerprint recognition is used in forensic science, where it helps identify criminals, and in the authentication of individuals, since a fingerprint is unique to each person. The present paper describes fingerprint recognition methods using various edge detection techniques, and shows how to detect a correct fingerprint using camera images. The described method does not require a special device; a simple camera can be used, so the technique can also be applied on a camera-equipped mobile phone. The factors affecting the process are poor illumination, noise disturbance, viewpoint dependence, climate factors, and imaging conditions, so various image enhancement techniques are performed to increase image quality and remove noise. The present paper describes the technique of using contour tracking on the fingerprint image, then applying edge detection on the contour, and finally matching the edges inside the contour.
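One of the edge-detection steps the paper compares can be sketched with a plain Sobel gradient (a numpy-only sketch; a real pipeline would precede this with the enhancement steps the abstract lists):

```python
import numpy as np

def sobel_edges(img):
    """Gradient magnitude from the 3x3 Sobel kernels; borders are left at zero."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx, gy = np.zeros((h, w)), np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = np.sum(kx * patch)
            gy[i, j] = np.sum(ky * patch)
    return np.hypot(gx, gy)

img = np.zeros((5, 6))
img[:, 3:] = 1.0                 # vertical step edge between columns 2 and 3
edges = sobel_edges(img)         # strong response only along the step
```

The resulting edge map is the kind of input the contour-matching stage described above would consume.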

Dholay, Surekha; Mishra, Akassh A.

2011-06-01

58

Teaching Tools: Physics Downloads, Movies, and Images  

NSDL National Science Digital Library

The University of California Berkeley Physics Lecture Demonstrations Web site contains a page entitled Things of Interest: Downloads, Movies, and Images. The highlight of the site is the downloadable movies of physics experiments that should be very helpful for time- and/or money-constrained educators. The ten experiments include movies of a Chladni disk, Jacob's ladder, a dippy bird, a person rotating in a chair while holding dumbbells, a person in a chair with a rotating bicycle wheel, gyroscopic precession, a superconductor, a levitator, jumping rings, and a Tesla coil.

2002-01-01

59

Computer image processing: Geologic applications  

NASA Technical Reports Server (NTRS)

Computer image processing of digital data was performed to support several geological studies. The specific goals were to: (1) relate the mineral content to the spectral reflectance of certain geologic materials, (2) determine the influence of environmental factors, such as atmosphere and vegetation, and (3) improve image processing techniques. For detection of spectral differences related to mineralogy, the technique of band ratioing was found to be the most useful. The influence of atmospheric scattering and methods to correct for the scattering were also studied. Two techniques were used to correct for atmospheric effects: (1) dark object subtraction, and (2) normalization using ground spectral measurements. Of the two, the first technique proved the more successful for removing the effects of atmospheric scattering. A digital mosaic was produced from two side-lapping LANDSAT frames. The advantages were that the same enhancement algorithm could be applied to both frames, and there is no seam where the two images are joined.
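The two named corrections and the ratioing step combine naturally; a minimal numpy sketch with synthetic additive-haze offsets (values illustrative, not LANDSAT calibrations):

```python
import numpy as np

def dark_object_subtract(band):
    """Remove additive atmospheric scattering by subtracting the darkest
    scene value, assuming some pixel has near-zero true radiance."""
    return band - band.min()

def band_ratio(b1, b2, eps=1e-12):
    """Ratio of haze-corrected bands; multiplicative illumination common to
    both bands (topographic shading) cancels out in the ratio."""
    return dark_object_subtract(b1) / (dark_object_subtract(b2) + eps)

b1 = np.array([[0.0, 2.0], [4.0, 6.0]]) + 10.0  # band 1 with haze offset 10
b2 = np.array([[0.0, 1.0], [2.0, 3.0]]) + 5.0   # band 2 with haze offset 5
r = band_ratio(b1, b2)                          # recovers the 2:1 reflectance ratio
```

Without the dark-object step, the additive haze terms would survive into the ratio and mask the mineralogical signal, which is why the subtraction precedes the ratioing.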

Abrams, M. J.

1978-01-01

60

Guest Editorial Color image processing  

E-print Network

… virtual restoration of artworks, multimedia information mining, and color management for peripheral … expanding area. However, it can provide a good starting point for researchers and practicing engineers … image processing and analysis. 2. Quick facts about the special issue: The guest editors suggested

Plataniotis, Konstantinos N.

61

SAR Image Processing using GPU  

NASA Astrophysics Data System (ADS)

Synthetic Aperture Radar (SAR) has been extensively used for space-borne Earth observations in recent times. In conventional SAR systems, analog beam-steering techniques are capable of implementing multiple operational modes, such as Stripmap, ScanSAR, and Spotlight, to fulfill different requirements in terms of spatial resolution and coverage. Future RADAR satellites need to resolve complex issues such as wide-area coverage and resolution. Digital beamforming (DBF) is a promising technique to overcome the problems mentioned above; in communication satellites the DBF technique is already implemented. This paper discusses the relevance of DBF in space-borne RADAR satellites for enhancing imaging quality. To implement DBF in SAR, processing of SAR data is an important step. This work focused on processing of Level 1.1 and 1.5 SAR image data. The SAR raw data is computationally intensive to process; to resolve the computation problem, high-performance computing (HPC) is necessary. The relevance of HPC for SAR data processing using an off-the-shelf graphics processing unit (GPU) over a CPU is discussed in this paper. Quantitative estimates of SAR image processing performance comparisons using both CPU and GPU are also provided as validation for the results.

Shanmugha Sundaram, GA; Sujith Maddikonda, Syam

62

Experience Report: Teaching and Using the Personal Software Process PSP  

E-print Network

Experience Report: Teaching and Using the Personal Software Process (PSP). Lutz Prechelt … in using a personal software process are keeping enough self-discipline and finding proper tool support. 1 The Personal Software Process (PSP): The Personal Software Process (PSP) framework is an approach suggested

Prechelt, Lutz

63

FIPE: a forensic image-processing environment  

Microsoft Academic Search

The aim of the `Image and Signal Processing' section at the INCC/NICC is to develop its own software tools in image processing. This paper gives an overview of the image processing environment that has been developed for about two years in the section. Above all, the main idea of FIPE (Forensic Image Processing Environment) is to build a user-friendly

Thai Quan Huynh-Thu; Yves Lardinois

1999-01-01

64

SDMS-An image processing application  

Microsoft Academic Search

Lights! Camera! Action! constitute the basic fundamentals of Digital image processing. Digital image processing is a rapidly evolving field with growing applications in science and engineering. Image processing holds the possibility of developing the ultimate machine that could perform the visual functions of all living beings. Digital image processing has a broad spectrum of applications such as remote sensing via

Bhushan V. Patil; Kalpesh V. Joshi; Kiran H. Sonawane

2009-01-01

65

Image processing applications in NDE  

SciTech Connect

Nondestructive examination (NDE) can be defined as a technique or collection of techniques that permits one to determine some property of a material or object without damaging the object. There are a large number of such techniques and most of them use visual imaging in one form or another. They vary from holographic interferometry, where displacements under stress are measured, to the visual inspection of an object's surface to detect cracks after penetrant has been applied. The use of image processing techniques on the images produced by NDE is relatively new and can be divided into three general categories: classical image enhancement; mensuration techniques; and quantitative sensitometry. An example is discussed of how image processing techniques are used to nondestructively and destructively test the product throughout its life cycle. The product that will be followed is the microballoon target used in the laser fusion program. The laser target is a small (50 to 100 μm dia) glass sphere with typical wall thickness of 0.5 to 6 μm. The sphere may be used as is or may be given a number of coatings of any number of materials. The beads are mass produced by the millions and the first nondestructive test is to separate the obviously bad beads (broken or incomplete) from the good ones. After this has been done, the good beads must be inspected for sphericity and wall-thickness uniformity. The microradiography of the glass, uncoated bead is performed on a specially designed low-energy x-ray machine. The beads are mounted in a special jig and placed on a Kodak high-resolution plate in a vacuum chamber that contains the x-ray source. The x-ray image is made with an energy less than 2 keV and the resulting images are then inspected at a magnification of 500 to 1000X. Some typical results are presented.

Morris, R.A.

1980-01-01

66

Image processing with ImageJ  

Microsoft Academic Search

Wayne Rasband of NIH has created ImageJ, an open source Java-written program that is now at version 1.31 and is used for many imaging applications, including those that span the gamut from skin analysis to neuroscience. ImageJ is in the public domain and runs on any operating system (OS). ImageJ is easy to use and can do many imaging

M. D. Abramoff; Paulo J. Magalhães; Sunanda J. Ram

2004-01-01

67

Teaching quality in higher education: An introductory review on a process-oriented teaching-quality model  

Microsoft Academic Search

For improving the quality of teaching in higher education, besides the organisational focus or the result-oriented and momentary evaluations, attention can be paid to teachers and their ‘teaching processes’. With this in mind, the information systems quality (ISQ) laboratory developed a process-oriented model that offered teachers a systematic and incremental way for superior teaching excellence. The model, which is termed

Chung-Yang Chen; Pei-Chi Chen; Pei-Ying Chen

2012-01-01

68

The Neon Image Processing Language  

Microsoft Academic Search

Neon is a high-level domain-specific programming language for writing efficient image processing programs which can run on either the CPU or the GPU. End users write Neon programs in a C# programming environment. When the Neon program is executed, our optimizing code generator outputs human-readable source files for either the CPU or GPU. These source files are then added to

Brian Guenter; Diego Nehab

2010-01-01

69

Online Radiology Teaching Files and Medical Image Atlas and Database  

NSDL National Science Digital Library

Radiologists, students of radiology, and those who are interested in medical images in general will be delighted to hear about this website. Created by MedPix, the teaching files and medical images offered here can be browsed by organ system, and after selecting a particular system, visitors can learn more about each slide in great detail. Another feature of the site allows visitors to offer their own diagnosis before learning the particulars of each discrete medical case. Visitors can also consider the "Case of the Week" area, which features a case that has been selected by the peer review board that vets all of the images that find their way into their archive. Additionally, the site also features an advanced search engine for those users who know exactly what they want.

70

Image processing software for imaging spectrometry  

NASA Technical Reports Server (NTRS)

The paper presents a software system, Spectral Analysis Manager (SPAM), which has been specifically designed and implemented to provide the exploratory analysis tools necessary for imaging spectrometer data, using only modest computational resources. The basic design objectives are described as well as the major algorithms designed or adapted for high-dimensional images. Included in a discussion of system implementation are interactive data display, statistical analysis, image segmentation and spectral matching, and mixture analysis.
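Spectral matching of the kind SPAM provides is commonly implemented with a spectral-angle metric; below is a generic sketch (not necessarily SPAM's own algorithm; names are ours):

```python
import numpy as np

def spectral_angle(pixel, reference):
    """Angle in radians between two spectra; a small angle means a good match.
    Scaling a spectrum (e.g. by illumination) leaves the angle unchanged."""
    cos = np.dot(pixel, reference) / (np.linalg.norm(pixel) * np.linalg.norm(reference))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

lab = np.array([0.1, 0.4, 0.7, 0.9])        # hypothetical library spectrum
observed = 0.6 * lab                        # same material, dimmer illumination
angle = spectral_angle(observed, lab)       # ~0: a match despite the scaling
```

Computing this angle against each library spectrum and taking the minimum is a simple way to label pixels in a high-dimensional imaging-spectrometer cube.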

Mazer, Alan S.; Martin, Miki; Lee, Meemong; Solomon, Jerry E.

1988-01-01

71

Agent communication based SAR image parallel processing  

Microsoft Academic Search

Airborne SAR remote sensing image has the characteristic of large data volume and computation burden, so the processing needs very large computer memory and stronger computation ability. Based on the introduction of the SAR image processing procedure, we study the SAR image processing using computer parallel computation technology. The parallel processing mechanism is based on the parallel computer cluster operation

Tianhe Chi; Xin Zhang; Hongqiao Wu; Jinyun Fang

2003-01-01

72

TEACHING PEER REVIEW AND THE PROCESS OF  

E-print Network

guidelines, and grading criteria (8). It is also possible that students do not recognize a practical need … and graduate students understand neither the process of scientific writing nor the significance of peer review … undergraduate students the full scientific publishing process, including anonymous peer review, during

Guilford, William

73

Morphological image sequence processing Karol Mikula1  

E-print Network

such as ultrasound (US), magnetic resonance imaging (MRI) and computed tomography imaging (CT) enable … called optical flow, is desired. Moreover the method takes into account the velocity in whose direction … background work on image processing, image sequence processing and the optical flow problem. In Section 3

Preusser, Tobias

74

Morphological image sequence processing Karol Mikula  

E-print Network

(US), magnetic resonance imaging (MRI) and computed tomography imaging (CT) enable … called optical flow, is desired. Moreover the method takes into account the velocity in whose direction … background work on image processing, image sequence processing and the optical flow problem. In Section 3

Rumpf, Martin

75

Radiology image orientation processing for workstation display  

Microsoft Academic Search

Radiology images are acquired electronically using phosphor plates that are read in Computed Radiography (CR) readers. An automated radiology image orientation processor (RIOP) for determining the orientation of chest images and of abdomen images has been devised. In addition, the chest images are differentiated as front (AP or PA) or side (Lateral). Using the processing scheme outlined, hospitals will improve

Chung-Fu Chang; Kermit Hu; Dennis L. Wilson

1998-01-01

76

Teaching the Dance Class: Strategies to Enhance Skill Acquisition, Mastery and Positive Self-Image  

ERIC Educational Resources Information Center

Effective teaching of dance skills is informed by a variety of theoretical frameworks and individual teaching and learning styles. The purpose of this paper is to present practical teaching strategies that enhance the mastery of skills and promote self-esteem, self-efficacy, and positive self-image. The predominant thinking and primary research…

Mainwaring, Lynda M.; Krasnow, Donna H.

2010-01-01

77

SIP : a smart digital image processing library  

E-print Network

The Smart Image Processing (SIP) library was developed to provide automated real-time digital image processing functions on camera phones with integer microprocessors. Many of the functions are not available on commercial ...

Zhou, Mengyao

2005-01-01

78

Research on Case Teaching Method of Photoshop Course  

Microsoft Academic Search

Considering the application fields of Photoshop and vocational education ideas, the teaching content of the Photoshop course has been organized into cases, based on the teaching characteristics of the Computer Image Processing course in the graphic design module. This paper gives the teaching flowchart of Photoshop image-processing case teaching. According to the teachers and the students, the case teaching is

Xie Nan; Sun Xinxin; Zhang Haibo; Zhu Yijia

2010-01-01

79

Interactive Virtual Client for Teaching Occupational Therapy Evaluative Processes  

E-print Network

Interactive Virtual Client for Teaching Occupational Therapy Evaluative Processes Sharon Stansfield-274-3630 sstansfield@ithaca.edu Marilyn Kane Occupational Therapy Department Ithaca College Ithaca, NY USA +1 607-based educational tool for Occupational Therapy students learning client evaluation techniques. The software

Stansfield, Sharon

80

Student Evaluation of Teaching: An Instrument and a Development Process  

ERIC Educational Resources Information Center

This article describes the process of faculty-led development of a student evaluation of teaching instrument at Centurion School of Rural Enterprise Management, a management institute in India. The instrument was to focus on teacher behaviors that students get an opportunity to observe. Teachers and students jointly contributed a number of…

Alok, Kumar

2011-01-01

81

A Plan for Teaching Data Processing to Library Science Students.  

ERIC Educational Resources Information Center

An outline is proposed for a library school course in data processing for libraries that is different from other such courses in that it emphasizes the operations of the computer itself over the study of library computer systems. The course begins with a study of computer hardware then moves to the teaching of assembly language using the MIX…

Losee, Robert M., Jr.

82

Modeling the Research Process: Alternative Approaches to Teaching Undergraduates  

ERIC Educational Resources Information Center

An Introduction to Research course was modified to better teach the process of scientific inquiry to students who were not engaged in research projects. Students completed several tasks involved in research projects, including making presentations in a journal club format, writing mock grant proposals, and working as teams to evaluate grant…

Felzien, Lisa; Cooper, Janet

2005-01-01

83

A Process-Oriented Framework for Acquiring Online Teaching Competencies  

ERIC Educational Resources Information Center

As a multidimensional construct which requires multiple competencies, online teaching is forcing universities to rethink traditional faculty roles and competencies. With this consideration in mind, this paper presents a process-oriented framework structured around three sequential non-linear phases: (1) "before": preparing, planning, and…

Abdous, M'hammed

2011-01-01

84

Direct Influence of English Teachers in the Teaching Learning Process  

ERIC Educational Resources Information Center

Teachers play a vital role in the classroom environment. Interaction between teacher and students is an essential part of the teaching/learning process. An educator, Flanders originally developed an instrument called Flanders Interaction Analysis (FIA). The FIA system was designed to categorize the types, quantity of verbal interaction and direct…

Inamullah, Hafiz Muhammad; Hussain, Ishtiaq; Ud Din, M. Naseer

2008-01-01

85

Modeling the Research Process: Alternative Approaches to Teaching Undergraduates  

NSDL National Science Digital Library

An Introduction to Research course was modified to better teach the process of scientific inquiry to students who were not engaged in research projects. Students completed several tasks involved in research projects, including making presentations in a journal club format, writing mock grant proposals, and working as teams to evaluate grant proposals.

Cooper, Janet; Felzien, Lisa

2005-05-01

86

A Method for Teaching a Software Process based on the Personal Software Process  

Microsoft Academic Search

The paper presents a method in teaching software process at under-graduate level, based on the Personal Software Process (PSP). The goal is to inform students about the process and to allow practical experience in the implementation of the defined and measured personal process. During the course, personal baseline and planning processes are implemented while completing programming and report exercises. In

Zeljka Car

2003-01-01

87

Video and Image Processing in Multimedia Systems (Video Processing)  

E-print Network

COT 6930 Video and Image Processing in Multimedia Systems (Video Processing). Instructor: Borko Furht. Topics include content-based image and video indexing and retrieval, video processing using compressed data, course concepts and structures, classification of compression techniques, and image and video compression

Furht, Borko

88

Teaching Software Process Modeling Marco Kuhrmann  

E-print Network

The reasons that large-scale development programs have failed have often not been technical. Furthermore, the usual student programming tasks are of a size that either one student or a small group can complete, offering little exposure to concerns such as reflecting the organization structure, coordinating teams, or interfaces to business processes

89

Combining Image-Processing and Image Compression Schemes.  

National Technical Information Service (NTIS)

An investigation into the combining of image-processing schemes, specifically an image enhancement scheme, with existing compression schemes is discussed. Results are presented on the pyramid coding scheme, the subband coding scheme, and progressive trans...

H. Greenspan, M. Lee

1995-01-01

90

Efficient Processing of Laser Speckle Contrast Images  

Microsoft Academic Search

Though laser speckle contrast imaging enables the measurement of scattering particle dynamics with high temporal resolution, the subsequent processing has previously been much slower. In prior studies, generating a laser speckle contrast image required about 1 s to process a raw image potentially collected in 10 ms or less. In this paper, novel algorithms are described which are demonstrated to
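
The quantity being computed here is the local speckle contrast, conventionally K = sigma/mu over a small sliding window of the raw image. The sketch below is a generic vectorized formulation in Python/NumPy using an integral image, not the authors' algorithms; the window size and the synthetic exponential test frame are illustrative assumptions.

```python
import numpy as np

def box_mean(img, w):
    """Mean over every w x w window via a 2-D cumulative sum (integral image)."""
    pad = np.pad(img, ((1, 0), (1, 0)))          # zero row/col for the integral image
    ii = pad.cumsum(0).cumsum(1)
    s = ii[w:, w:] - ii[:-w, w:] - ii[w:, :-w] + ii[:-w, :-w]
    return s / (w * w)

def speckle_contrast(raw, w=7):
    """K = sigma / mu over each w x w neighborhood of the raw speckle image."""
    raw = raw.astype(np.float64)
    mu = box_mean(raw, w)
    var = box_mean(raw * raw, w) - mu * mu
    return np.sqrt(np.clip(var, 0.0, None)) / (mu + 1e-12)

rng = np.random.default_rng(0)
frame = rng.exponential(scale=100.0, size=(64, 64))  # fully developed speckle is ~exponential
K = speckle_contrast(frame, w=7)
```

Because both moments come from two passes over one integral image, the cost is independent of the window size, which is the kind of restructuring that turns per-pixel loops into millisecond-scale processing.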

W. James Tom; Adrien Ponticorvo; Andrew K. Dunn

2008-01-01

91

Validation in Medical Image Processing 1 Introduction  

E-print Network

1 Introduction. The increasingly important role of image processing in medicine: medical imaging is one of our most powerful tools for gaining insight into normal… outcomes of disease or treatment and intervention strategies. Sources of errors or uncertainties…

Boyer, Edmond

92

Pyramid Methods in Image Processing  

Microsoft Academic Search

The data structure used to represent image information can be critical to the successful completion of an image processing task. One structure that has attracted considerable attention is the image pyramid. This consists of a set of lowpass or bandpass copies of an image, each representing pattern information of a different scale. Here we describe a variety of pyramid methods that we have developed for image data compression, enhancement, and analysis
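
The lowpass (Gaussian) and bandpass (Laplacian) pyramids described above can be sketched in a few lines. This is a generic Burt-Adelson-style construction in Python/NumPy, not the authors' code; the 1-4-6-4-1 kernel and nearest-neighbor upsampling are common illustrative choices.

```python
import numpy as np

def blur(img):
    """Separable 1-4-6-4-1 binomial blur, the classic pyramid-generating kernel."""
    k = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    img = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, img)

def gaussian_pyramid(img, levels):
    """Lowpass copies: blur, then drop every other row and column."""
    pyr = [img.astype(np.float64)]
    for _ in range(levels - 1):
        pyr.append(blur(pyr[-1])[::2, ::2])
    return pyr

def laplacian_pyramid(img, levels):
    """Bandpass copies: each level minus a smoothed upsampling of the next."""
    g = gaussian_pyramid(img, levels)
    lap = []
    for i in range(levels - 1):
        up = np.repeat(np.repeat(g[i + 1], 2, axis=0), 2, axis=1)
        lap.append(g[i] - blur(up[: g[i].shape[0], : g[i].shape[1]]))
    lap.append(g[-1])                      # coarsest lowpass residual
    return lap

img = np.linspace(0.0, 1.0, 64 * 64).reshape(64, 64)
gauss = gaussian_pyramid(img, 3)
lap = laplacian_pyramid(img, 3)
```

Each Laplacian level isolates pattern information at one scale, which is what makes the structure useful for compression (small residuals) as well as enhancement and analysis (per-scale manipulation).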

E. H. Adelson; C. H. Anderson; J. R. Bergen; P. J. Burt; J. M. Ogden

1984-01-01

93

Introduction to Image Processing Prof. George Wolberg  

E-print Network

Typical operations include contrast enhancement, removing blur from an image, and smoothing. Course topics cover geometric transforms (warping, morphing, and visual effects), filtering for resampling, spatial transformations and texture mapping, and separable warping algorithms

Wolberg, George

94

Image processing for medical diagnosis using CNN  

NASA Astrophysics Data System (ADS)

Medical diagnosis is one of the most important areas in which image processing procedures are usefully applied. Image processing is an important phase in improving accuracy both for the diagnosis procedure and for surgical operation. One of these fields is tumor/cancer detection using microarray analysis. The research studies in the Cancer Genetics Branch are mainly involved in a range of experiments, including the identification of inherited mutations predisposing family members to malignant melanoma, prostate, and breast cancer. In the bio-medical field real-time processing is very important, but image processing is often a quite time-consuming phase. Therefore, techniques able to speed up the elaboration play an important role. From this point of view, a novel approach to image processing has been developed in this work. The new idea is to use Cellular Neural Networks to investigate diagnostic images such as magnetic resonance imaging, computed tomography, and fluorescent cDNA microarray images.
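
A Cellular Neural Network updates each cell from its immediate neighborhood via a feedback template A, a control template B, and a bias z, which is why it maps naturally onto fast, parallel image processing. The sketch below is a generic Euler-integrated discrete simulation with textbook-style edge-extraction templates; it is not the authors' network, and the template values and the +1/-1 square test input are illustrative assumptions.

```python
import numpy as np

def conv2(img, kern):
    """'same' 2-D correlation with zero padding (3x3 kernels only)."""
    out = np.zeros_like(img)
    p = np.pad(img, 1)
    for i in range(3):
        for j in range(3):
            out += kern[i, j] * p[i : i + img.shape[0], j : j + img.shape[1]]
    return out

def cnn_run(u, A, B, z, steps=50, dt=0.1):
    """Euler-integrate the standard CNN state equation
       x' = -x + A*y + B*u + z,   y = 0.5*(|x+1| - |x-1|)."""
    x = np.zeros_like(u)
    for _ in range(steps):
        y = 0.5 * (np.abs(x + 1) - np.abs(x - 1))
        x = x + dt * (-x + conv2(y, A) + conv2(u, B) + z)
    return 0.5 * (np.abs(x + 1) - np.abs(x - 1))

# Illustrative edge-extraction templates (published template values vary)
A = np.zeros((3, 3)); A[1, 1] = 2.0
B = np.array([[-1.0, -1.0, -1.0], [-1.0, 8.0, -1.0], [-1.0, -1.0, -1.0]])
z = -1.0
u = -np.ones((16, 16)); u[4:12, 4:12] = 1.0   # +1 square on a -1 background
y = cnn_run(u, A, B, z)
```

Cells on the square's boundary see a mixed neighborhood, so B drives them to +1, while uniform regions settle at -1: the output saturates to an edge map.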

Arena, Paolo; Basile, Adriano; Bucolo, Maide; Fortuna, Luigi

2003-01-01

95

Teaching sensor technology and MEMS processing using an in-house educational MPW process  

Microsoft Academic Search

This paper presents a new innovative way of teaching modern sensor technology and practical MEMS-processing by using an in-house Multi Project Wafer (MPW) process especially developed for education purposes. This process has been used as the base for a project course called Sensor Technology, which includes a substantial laboratory component. In this project, the student actively follows the complete sensor

Thorbjoern Ebefors; Edvard Kaelvesten

1999-01-01

96

Using NASA Space Imaging to Teach Earth and Sun Topics in Professional Development Courses for In-Service Teachers  

NASA Astrophysics Data System (ADS)

…several PD courses using NASA imaging technology, including various ways to study selected topics in physics and astronomy. We use NASA images to develop lesson plans and EPO materials for grades PreK-8. Topics are space-based and vary from measurement and magnetism on Earth to the same phenomena on our Sun. In addition, we cover ecosystem structure, biomass, and water on Earth. Hands-on experiments, computer simulations, analysis of real-time NASA data, and vigorous seminar discussions are blended in an inquiry-driven curriculum designed to provide the non-science student a confident understanding of basic physical science and modern, effective methods for teaching it. The course also demonstrates how scientific thinking and hands-on activities can be implemented in the classroom. Most topics were selected using the National Science Standards and National Mathematics Standards to be addressed in grades PreK-8. The course focuses on helping in several areas of teaching: 1) building knowledge of scientific concepts and processes; 2) understanding the measurable attributes of objects and the units and methods of measurement; 3) conducting data analysis (collecting, organizing, and presenting scientific data, and predicting results); 4) using hands-on approaches to teach science; and 5) becoming familiar with Internet science teaching resources. Here we share our experiences and the challenges we faced teaching this course.

Verner, E.; Bruhweiler, F. C.; Long, T.; Edwards, S.; Ofman, L.; Brosius, J. W.; Holman, G.; St Cyr, O. C.; Krotkov, N. A.; Fatoyinbo Agueh, T.

2012-12-01

97

MPP: a supersystem for satellite image processing  

Microsoft Academic Search

In 1971 NASA Goddard Space Flight Center initiated a program to develop high-speed image processing systems. These systems use thousands of processing elements (PE's) operating simultaneously to achieve their speed (massive parallelism). A typical satellite image contains millions of picture elements (pixels) that can generally be processed in parallel. In 1979 a contract was awarded to construct a massively parallel

Kenneth E. Batcher

1982-01-01

98

Diagrammatic Description of Satellite Image Processing Workflow  

Microsoft Academic Search

Multiple experiments on grid based satellite imagery classification require flexible descriptions of the processing workflow. The user develops the processing workflow pattern by visual tools in terms of satellite image multi-band spectral data. Each pattern may be stored into a repository and instantiated later for a particular area and time of the satellite image. The specific processing can be scheduled

Anca Radu; Victor Bacu; Dorian Gorgan

2007-01-01

99

Programmable remapper for image processing  

NASA Technical Reports Server (NTRS)

A video-rate coordinate remapper includes a memory for storing a plurality of transformations on look-up tables for remapping input images from one coordinate system to another. Such transformations are operator selectable. The remapper includes a collective processor by which certain input pixels of an input image are transformed to a portion of the output image in a many-to-one relationship. The remapper includes an interpolative processor by which the remaining input pixels of the input image are transformed to another portion of the output image in a one-to-many relationship. The invention includes certain specific transforms for creating output images useful for certain defects of visually impaired people. The invention also includes means for shifting input pixels and means for scrolling the output matrix.
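
The heart of the remapper is a stored look-up table that tells each output pixel where to fetch its value in the input coordinate system. A minimal software sketch of that gather step is below; it is a nearest-neighbor, single-table illustration in Python/NumPy, not the patented hardware design (which splits work between collective many-to-one and interpolative one-to-many processors), and the 90-degree-rotation LUT stands in for the operator-selectable transformations.

```python
import numpy as np

def remap_with_lut(src, lut_rows, lut_cols):
    """Build the output image by pulling each output pixel from the source
    coordinates stored in two lookup tables (nearest-neighbor gather)."""
    return src[lut_rows, lut_cols]

# Hypothetical transform: a 90-degree rotation expressed purely as a LUT.
h, w = 8, 8
rr, cc = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
lut_rows, lut_cols = cc, h - 1 - rr        # output (r, c) <- source (c, H-1-r)
src = np.arange(h * w).reshape(h, w)
dst = remap_with_lut(src, lut_rows, lut_cols)
```

Because the transform lives entirely in the tables, swapping transformations (e.g. the vision-aid warps mentioned above) means loading different tables, not changing the datapath.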

Juday, Richard D. (inventor); Sampsell, Jeffrey B. (inventor)

1991-01-01

100

Teaching Anatomy and Physiology Using Computer-Based, Stereoscopic Images  

NSDL National Science Digital Library

Learning real three-dimensional (3D) anatomy for the first time can be challenging. Two dimensional drawings and plastic models tend to over-simplify the complexity of anatomy. The approach described uses stereoscopy to create 3D images of the process of cadaver dissection and to demonstrate the underlying anatomy related to the speech mechanisms.

Kuehn, David; Perry, Jamie; Langlois, Rick

2007-01-01

101

Survey: Interpolation Methods in Medical Image Processing  

Microsoft Academic Search

Image interpolation techniques often are required in medical imaging for image generation (e.g., discrete back projection for inverse Radon transform) and processing such as compression or resampling. Since the ideal interpolation function spatially is unlimited, several interpolation kernels of finite size have been introduced. This paper compares 1) truncated and windowed sinc; 2) nearest neighbor; 3) linear; 4) quadratic;
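
The kernels being compared share one evaluation scheme: the interpolated value is a kernel-weighted sum of the nearest samples within the kernel's finite support. A 1-D sketch in Python/NumPy follows; the Lanczos window (a=3) stands in for "windowed sinc" as one common finite-size choice, and the test signal is illustrative.

```python
import numpy as np

def interp1(samples, x, kernel, support):
    """Resample `samples` (defined at integer positions) at real positions x
    by summing kernel-weighted neighbors within the kernel's support."""
    x = np.atleast_1d(np.asarray(x, dtype=np.float64))
    out = np.zeros_like(x)
    for k in range(-support, support + 1):
        idx = np.floor(x).astype(int) + k
        w = kernel(x - idx)
        valid = (idx >= 0) & (idx < len(samples))
        out[valid] += w[valid] * samples[idx[valid]]
    return out

nearest = lambda t: ((t >= -0.5) & (t < 0.5)).astype(float)   # half-open box
linear = lambda t: np.clip(1.0 - np.abs(t), 0.0, None)        # triangle

def lanczos3(t):
    """Windowed sinc (Lanczos, a=3): sinc(t) * sinc(t/3) on |t| < 3."""
    t = np.asarray(t, dtype=np.float64)
    return np.where(np.abs(t) < 3.0, np.sinc(t) * np.sinc(t / 3.0), 0.0)

samples = np.sin(0.3 * np.arange(32))
x = np.array([5.0, 5.5, 10.25])
y_lin = interp1(samples, x, linear, 1)
y_sinc = interp1(samples, x, lanczos3, 3)
```

All three kernels reproduce the samples exactly at integer positions; they differ in how they trade smoothness against support size (and hence cost) between samples, which is the axis on which the survey compares them.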

Thomas Martin Lehmann; Claudia Gönner; Klaus Spitzer

1999-01-01

102

Neural net computing for biomedical image processing  

NASA Astrophysics Data System (ADS)

In this paper we describe some of the most important types of neural networks applied in biomedical image processing. The networks described are variations of well-known architectures but include image-relevant features in their structure. Convolutional neural networks, modified Hopfield networks, regularization networks, and nonlinear principal component analysis neural networks have been successfully applied in biomedical image classification, restoration, and compression.

Meyer-Baese, Anke

1999-03-01

103

Combining advanced imaging processing and low cost remote imaging capabilities  

NASA Astrophysics Data System (ADS)

Target images are very important for evaluating the situation when Unattended Ground Sensors (UGS) are deployed. These images add a significant amount of information to determine the difference between hostile and non-hostile activities, the number of targets in an area, the difference between animals and people, the movement dynamics of targets, and when specific activities of interest are taking place. The imaging capabilities of UGS systems need to provide only target activity and not images without targets in the field of view. The current UGS remote imaging systems are not optimized for target processing and are not low cost. McQ describes in this paper an architectural and technologic approach for significantly improving the processing of images to provide target information while reducing the cost of the intelligent remote imaging capability.

Rohrer, Matthew J.; McQuiddy, Brian

2008-04-01

104

Teaching Processes To Improve Both Higher As Well As Lower Mental Process Achievement.  

ERIC Educational Resources Information Center

A major purpose of this research was to measure the effect of four different teaching processes on lower and higher mental process achievement. Two separate studies, one in science and one in mathematics, involved approximately 100 seventh grade students in four classrooms in a public junior high school in a middle-income neighborhood, and 85…

Soled, Suzanne Wegener

105

Utilizing image processing techniques to compute herbivory.  

PubMed

Leafy spurge (Euphorbia esula L. sensu lato) is a perennial weed species common to the north-central United States and southern Canada. The plant is a foreign species toxic to cattle. Spurge infestation can reduce cattle carrying capacity by 50 to 75 percent [1]. University of Wyoming Entomology doctoral candidate Vonny Barlow is conducting research in the area of biological control of leafy spurge via the Aphthona nigriscutis Foudras flea beetle. He is addressing the question of variability within leafy spurge and its potential impact on flea beetle herbivory. One component of Barlow's research consists of measuring the herbivory of leafy spurge plant specimens after introducing adult beetles. Herbivory is the degree of consumption of the plant's leaves and was measured in two different manners. First, Barlow assigned each consumed plant specimen a visual rank from 1 to 5. Second, image processing techniques were applied to "before" and "after" images of each plant specimen in an attempt to more accurately quantify herbivory. Standardized techniques were used to acquire images before and after beetles were allowed to feed on plants for a period of 12 days. Matlab was used as the image processing tool. The image processing algorithm allowed the user to crop the portion of the "before" image containing only plant foliage. Then Matlab cropped the "after" image with the same dimensions and converted the images from RGB to grayscale. The grayscale image was converted to binary based on a user-defined threshold value. Finally, herbivory was computed based on the number of black pixels in the "before" and "after" images. The image processing results were mixed. Although this image processing technique depends on user input and non-ideal images, the data are useful to Barlow's research and offer insight into better imaging systems and processing algorithms. PMID:11347423
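
The crop / grayscale / threshold / count pipeline described above is straightforward to express outside Matlab. The sketch below is a Python/NumPy analogue, not Barlow's code: the crop box, threshold, grayscale weights, and synthetic "plant" images are all illustrative assumptions.

```python
import numpy as np

def herbivory(before_rgb, after_rgb, crop, threshold):
    """Approximate leaf loss: crop both images identically, convert RGB to
    grayscale, binarize at a user-chosen threshold, and compare the counts
    of dark (foliage) pixels before and after feeding."""
    r0, r1, c0, c1 = crop

    def dark_pixels(img):
        gray = img[r0:r1, c0:c1].astype(np.float64) @ np.array([0.299, 0.587, 0.114])
        return int((gray < threshold).sum())

    b, a = dark_pixels(before_rgb), dark_pixels(after_rgb)
    return 1.0 - a / b if b else 0.0

# Synthetic stand-in images: a dark "plant" blob that shrinks after feeding
before = np.full((40, 40, 3), 255, dtype=np.uint8)
before[10:30, 10:30] = 20
after = np.full((40, 40, 3), 255, dtype=np.uint8)
after[10:20, 10:30] = 20
loss = herbivory(before, after, crop=(5, 35, 5, 35), threshold=128)
```

The sensitivity of the result to the user-chosen crop and threshold is exactly the dependence on "user input and non-ideal images" that the abstract flags.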

Olson, T E; Barlow, V M

2001-01-01

106

Non-linear Post Processing Image Enhancement  

NASA Technical Reports Server (NTRS)

A non-linear filter for image post processing based on the feedforward neural network topology is presented. This study was undertaken to investigate the usefulness of "smart" filters in image post processing. The filter has been shown to be useful in recovering high frequencies, such as those lost during the JPEG compression-decompression process. The filtered images have a higher signal-to-noise ratio and a higher perceived image quality. Simulation studies comparing the proposed filter with the optimum mean square non-linear filter, showing examples of the high frequency recovery, and the statistical properties of the filter are given.

Hunt, Shawn; Lopez, Alex; Torres, Angel

1997-01-01

107

Quantitative image processing in fluid mechanics  

NASA Technical Reports Server (NTRS)

The current status of digital image processing in fluid flow research is reviewed. In particular, attention is given to a comprehensive approach to the extraction of quantitative data from multivariate databases and examples of recent developments. The discussion covers numerical simulations and experiments, data processing, generation and dissemination of knowledge, traditional image processing, hybrid processing, fluid flow vector field topology, and isosurface analysis using Marching Cubes.

Hesselink, Lambertus; Helman, James; Ning, Paul

1992-01-01

108

ENVIRONMENTALLY-ORIENTED PROCESSING OF MULTI-SPECTRAL SATELLITE IMAGES  

E-print Network

ENVIRONMENTALLY-ORIENTED PROCESSING OF MULTI-SPECTRAL SATELLITE IMAGES: NEW CHALLENGES FOR BAYESIAN… Key words: satellite imaging, multi-spectral satellite data, environmental applications, Bayes risk. …from the processing of data obtained from sensors mounted on satellites with the capability of taking…

Kreinovich, Vladik

109

Image processing with mini and micro computers  

Microsoft Academic Search

Processing of pictorial images by digital computers is currently attracting much interest. This paper describes a new facility evolving at the University of Wisconsin-Milwaukee and some of the projects being planned. Among the projects are the study of picture transmission links between two minicomputers, the use of microprocessors to operate image displays and hybrid computer processing of signals.

George R. Steber; Richard A. Northouse

1974-01-01

110

Web interface for image processing algorithms  

NASA Astrophysics Data System (ADS)

In this contribution we present an interface for image processing algorithms that has been made recently available on the Internet (http://nibbler.uni-koblenz.de). First, we show its usefulness compared to some other existing products. After a description of its architecture, its main features are then presented: the particularity of the user management, its image database, its interface, and its original quarantine system. We finally present the result of an evaluation performed by students in image processing.

Chastel, Serge; Schwab, Guido; Paulus, Dietrich

2003-12-01

111

DISTRIBUTED PROCESSING OF HYPERSPECTRAL IMAGES  

Microsoft Academic Search

This paper examines several hyperspectral data processing algorithms designed for a distributed computing environment. Due to the large size, hyperspectral data requires long computational times to process. In a distributed environment, the processing can be split into several components, many of them being executed simultaneously, thus leading to increased time efficiency. The algorithms are derived from previously introduced feature extraction

Stefan Robila

112

Adaptive processing of catadioptric images using polarization imaging  

E-print Network

…consisting of a convex mirror and a conventional camera [charge-coupled device (CCD) or complementary metal-oxide semiconductor (CMOS)], and recently used in the navigation of mobile robots. Images produced by catadioptric sensors contain… of the shape of the catadioptric sensor's mirror (including noncentral configurations), image processing…

Boyer, Edmond

113

Ultrasound 3-dimensional image processing using power Doppler image  

Microsoft Academic Search

A new ultrasound three-dimensional (3D) reconstruction method is presented. It can show blood flow geometry (one promising application of 3D techniques). For this purpose, the authors adopt power Doppler ultrasound images of vascular flow because power Doppler can detect slow flow and has much less angle dependence. Here, the authors introduce this 3D processing method for ultrasonic diagnostic images. Furthermore,

H. Hashimoto; Y. Shen; Y. Takeuchi; E. Yoshitome

1995-01-01

114

Applications of Digital Image Processing 11  

NASA Technical Reports Server (NTRS)

A new technique, digital image velocimetry, is proposed for the measurement of instantaneous velocity fields of time-dependent flows. A time sequence of single-exposure images of seed particles is captured with a high-speed camera, and a finite number of the single-exposure images are sampled within a prescribed period in time. The sampled images are then digitized on an image processor, enhanced, and superimposed to construct an image which is equivalent to a multiple-exposure image used in both laser speckle velocimetry and particle image velocimetry. The superimposed image and a single-exposure image are digitally Fourier transformed for extraction of information on the velocity field. A great enhancement of the dynamic range of the velocity measurement is accomplished through the new technique by manipulating the Fourier transform of both the single-exposure image and the superimposed image. Also, the direction of the velocity vector is unequivocally determined. With the use of a high-speed video camera, the whole process from image acquisition to velocity determination can be carried out electronically; thus this technique can be developed into a real-time capability.
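
The core of extracting displacement (and hence velocity) in the Fourier domain can be illustrated with a simple two-frame cross-correlation. This Python/NumPy sketch is a simplified integer-pixel stand-in, not the paper's single-exposure/superimposed-image method; the synthetic shifted frames are illustrative.

```python
import numpy as np

def shift_by_correlation(img_a, img_b):
    """Estimate the integer-pixel displacement of img_b relative to img_a
    from the peak of their FFT-based circular cross-correlation.
    Velocity then follows as displacement / inter-frame time."""
    Fa = np.fft.fft2(img_a)
    Fb = np.fft.fft2(img_b)
    corr = np.fft.ifft2(np.conj(Fa) * Fb).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrap-around peak locations to signed shifts
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(1)
a = rng.random((64, 64))
b = np.roll(a, shift=(3, -5), axis=(0, 1))   # "particles" moved 3 rows down, 5 columns left
dy, dx = shift_by_correlation(a, b)
```

The two FFTs and one inverse FFT replace an O(N^2) spatial correlation, which is what makes an electronic, near-real-time pipeline from acquisition to velocity plausible.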

Cho, Y. -C.

1988-01-01

115

Inquiring Into the Teaching Process: Towards Self-Evaluation and Professional Development.  

ERIC Educational Resources Information Center

This book is designed to be a stimulant to action and reflection for teachers who would like to make a study of their own teaching. The first part explores how teachers can make a personal appraisal of their teaching. An analytic procedure is used which involves gathering information about four major aspects of the teaching process: (1) the…

Haysom, John

116

Point process models for weather radar images  

Microsoft Academic Search

A framework for analysing weather radar (DBz) images as spatial point processes is presented. Weather radar images are modelled for the purpose of predicting their evolution in time and thereby providing a basis for short-period precipitation forecasts. An observed image sequence is modelled as a set of individual rain cells that are the outcome of a marked 2+1D spatial point

Morten Larsen; Dina KVL

1994-01-01

117

Processing Welding Images For Robot Control  

NASA Technical Reports Server (NTRS)

Image data from two distinct windows used to locate weld features. Analyzer part of vision system described in companion article, "Image Control in Automatic Welding Vision System" (MFS-26035). Horizontal video lines define windows for viewing unwelded joint and weld pool. Data from picture elements outside windows not processed. Widely-separated local features carry no significance, but closely spaced features indicate welding feature. Image processor assigns confidence level to group of local features according to spacing and pattern.

Richardson, Richard W.

1988-01-01

118

Wolf population counting by spectrogram image processing  

Microsoft Academic Search

We investigate the use of image processing techniques based on partial differential equations applied to the image produced by time-frequency representations of one-dimensional signals, such as the spectrogram. Specifically, we use the PDE model introduced by Alvarez, Lions and Morel for noise smoothing and edge enhancement, which we show to be stable under signal and window

B. Dugnol; C. Fernández; G. Galiano

2007-01-01

119

Axioms and fundamental equations of image processing  

Microsoft Academic Search

Image-processing transforms must satisfy a list of formal requirements. We discuss these requirements and classify them into three categories: “architectural requirements” like locality, recursivity and causality in the scale space, “stability requirements” like the comparison principle and “morphological requirements”, which correspond to shape-preserving properties (rotation invariance, scale invariance, etc.). A complete classification is given of all image multiscale transforms satisfying

Luis Alvarez; Frédéric Guichard; Pierre-Louis Lions; Jean-Michel Morel

1993-01-01

120

Stereo Image Processing Procedure for Vision Rehabilitation  

Microsoft Academic Search

This article presents a review of vision-aided systems and proposes an approach to visual rehabilitation using stereo vision technology. The proposed system utilizes stereo vision, image processing methodology, and a sonification procedure to support blind mobility. The developed system includes a wearable computer, stereo cameras as the vision sensor, and stereo earphones, all molded into a helmet. The image of the scene

G. Balakrishnan; G. Sainarayanan

2008-01-01

121

COMPUTER ENG 4TN4 Image Processing  

E-print Network

Objectives: prepare students for developing applications and for research in the field of image processing, and provide training for the design… Topics include spatial filtering, frequency-domain methods, and image restoration (degradation models, inverse filtering). The lab component of the course consists of programming assignments and a small project. Assessment

Haykin, Simon

122

Command Line Image Processing System (CLIPS)  

NASA Astrophysics Data System (ADS)

An interactive image processing language (CLIPS) has been developed for use in an image processing environment. CLIPS uses a simple syntax with extensive on-line help to allow even the most naive user to perform complex image processing tasks. In addition, CLIPS functions as an interpretive language complete with data structures and program control statements. CLIPS statements fall into one of three categories: command, control, and utility statements. Command statements are expressions comprised of intrinsic functions and/or arithmetic operators which act directly on image or user-defined data. Some examples of CLIPS intrinsic functions are ROTATE, FILTER, and EXPONENT. Control statements allow a structured programming style through the use of statements such as DO WHILE and IF-THEN-ELSE. Utility statements such as DEFINE, READ, and WRITE support I/O and user-defined data structures. Since CLIPS uses a table-driven parser, it is easily adapted to any environment. New commands may be added to CLIPS by writing the procedure in a high-level language such as Pascal or FORTRAN and inserting the syntax for that command into the table. However, CLIPS was designed by incorporating most imaging operations into the language as intrinsic functions. CLIPS allows the user to generate new procedures easily with these powerful functions in an interactive or offline fashion using a text editor. The fact that CLIPS can be used to generate complex procedures quickly or perform basic image processing functions interactively makes it a valuable tool in any image processing environment.
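
The table-driven design is the key extensibility idea here: adding a command means adding a table row, not touching the parser. The toy below sketches that pattern in Python (the actual CLIPS hosted handlers written in Pascal or FORTRAN); ROTATE and EXPONENT are borrowed from the abstract's examples, but their argument conventions are assumptions.

```python
import numpy as np

def rotate(img, quarter_turns):
    """Rotate the image by 90-degree increments."""
    return np.rot90(img, int(quarter_turns))

def exponent(img, power):
    """Raise each pixel to the given power."""
    return img.astype(np.float64) ** float(power)

# The "table": command name -> (handler, expected argument count).
COMMAND_TABLE = {
    "ROTATE": (rotate, 1),
    "EXPONENT": (exponent, 1),
}

def run(statement, img):
    """Parse 'NAME arg1 arg2 ...' against the table and apply it to img."""
    name, *args = statement.split()
    handler, argc = COMMAND_TABLE[name.upper()]
    if len(args) != argc:
        raise ValueError(f"{name} expects {argc} argument(s)")
    return handler(img, *args)

img = np.arange(6).reshape(2, 3)
out = run("ROTATE 1", img)
squared = run("EXPONENT 2", img)
```

Registering a new intrinsic is one more `COMMAND_TABLE` entry, mirroring how CLIPS grows by inserting new syntax into its parser table.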

Fleagle, S. R.; Meyers, G. L.; Kulinski, R. G.

1985-06-01

123

Image Processing on a Custom Computing Platform  

Microsoft Academic Search

Custom computing platforms are emerging as a class of computing engine that not only can provide near application-specific computational performance, but also can be configured to accommodate a wide variety of tasks. Due to vast computational needs, image processing computing platforms are traditionally constructed either by using costly application-specific hardware to support real-time image processing, or by sacrificing real-time performance

Peter M. Athanas; A. Lynn Abbott

1994-01-01

124

Color image processing for date quality evaluation  

NASA Astrophysics Data System (ADS)

Many agricultural non-contact visual inspection applications use color image processing techniques because color is often a good indicator of product quality. Color evaluation is an essential step in the processing and inventory control of fruits and vegetables that directly affects profitability. Most color spaces such as RGB and HSV represent colors with three-dimensional data, which makes using color image processing a challenging task. Since most agricultural applications only require analysis on a predefined set or range of colors, mapping these relevant colors to a small number of indexes allows simple and efficient color image processing for quality evaluation. This paper presents a simple but efficient color mapping and image processing technique that is designed specifically for real-time quality evaluation of Medjool dates. In contrast with more complex color image processing techniques, the proposed color mapping method makes it easy for a human operator to specify and adjust color-preference settings for different color groups representing distinct quality levels. Using this color mapping technique, the color image is first converted to a color map in which a single color index represents the color value of each pixel. Fruit maturity level is evaluated based on these color indices. A skin lamination threshold is then determined based on the fruit surface characteristics. This adaptive threshold is used to detect delaminated fruit skin and hence determine the fruit quality. This robust color grading technique has been used for real-time Medjool date grading.
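
The reduction from three-dimensional color data to a single index per pixel can be sketched as nearest-reference-color assignment. This Python/NumPy version is a generic illustration, not the paper's method: the reference colors, the RGB Euclidean distance, and the tiny test image are all assumed values (the actual operator-tuned settings are not published in the abstract).

```python
import numpy as np

def build_color_map(img_rgb, reference_colors):
    """Assign every pixel the index of its nearest reference color
    (Euclidean distance in RGB), yielding a one-index-per-pixel color map."""
    flat = img_rgb.reshape(-1, 3).astype(np.float64)
    refs = np.asarray(reference_colors, dtype=np.float64)
    d = ((flat[:, None, :] - refs[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1).reshape(img_rgb.shape[:2])

# Hypothetical quality classes, dark to light
REFS = [(90, 50, 20), (140, 90, 40), (200, 170, 120)]
img = np.zeros((4, 4, 3), dtype=np.uint8)
img[:2] = (92, 52, 22)      # close to class 0
img[2:] = (198, 168, 118)   # close to class 2
cmap = build_color_map(img, REFS)
counts = np.bincount(cmap.ravel(), minlength=len(REFS))
```

A maturity estimate then reduces to a histogram over the index image (`counts` above), which is the kind of per-pixel-cheap operation that makes real-time grading feasible.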

Lee, Dah Jye; Archibald, James K.

2010-01-01

125

Corn tassel detection based on image processing  

NASA Astrophysics Data System (ADS)

Machine vision has been widely applied in facility agriculture and plays an important role in obtaining environmental information. This paper studies the application of image processing to recognize and locate corn tassels for a corn detasseling machine, providing automated guidance information for actual corn emasculation operations. According to the color characteristics of corn tassels, image processing techniques were applied to identify tassels in images under the HSI color space, image segmentation was applied to extract the tassel regions, and the tassel features were analyzed and extracted. First, a series of preprocessing procedures were performed. Then, an image segmentation algorithm based on the HSI color space was developed to extract corn tassels from the background, and a region-growing method was proposed to recognize the tassels. The results show that this method is effective for extracting corn tassel parts from the collected pictures and can provide tassel location information; this result provides a theoretical basis for an intelligent corn detasseling machine.
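
The color-space segmentation step can be sketched as "convert to a hue-based space, then keep pixels in a tassel-like hue band". The code below is an illustrative Python/NumPy stand-in, not the paper's algorithm: it uses HSV as a close proxy for HSI (the two differ in the intensity/value channel), and the hue band and saturation floor are assumed, tunable settings.

```python
import numpy as np

def rgb_to_hsv(img):
    """Vectorized RGB -> HSV with H in degrees."""
    rgb = img.astype(np.float64) / 255.0
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    mx, mn = rgb.max(-1), rgb.min(-1)
    c = mx - mn
    h = np.zeros_like(mx)
    m = c > 0
    idx = m & (mx == r); h[idx] = ((g - b)[idx] / c[idx]) % 6
    idx = m & (mx == g) & (mx != r); h[idx] = (b - r)[idx] / c[idx] + 2
    idx = m & (mx == b) & (mx != r) & (mx != g); h[idx] = (r - g)[idx] / c[idx] + 4
    s = np.where(mx > 0, c / np.where(mx > 0, mx, 1.0), 0.0)
    return h * 60.0, s, mx

def tassel_mask(img, h_lo=40.0, h_hi=70.0, s_min=0.25):
    """Keep sufficiently saturated pixels in a yellowish hue band
    (tassels are typically yellow against green leaves)."""
    h, s, v = rgb_to_hsv(img)
    return (h >= h_lo) & (h <= h_hi) & (s >= s_min)

img = np.zeros((4, 4, 3), dtype=np.uint8)
img[:, :2] = (210, 200, 60)   # yellowish "tassel" pixels
img[:, 2:] = (40, 120, 50)    # green "leaf" pixels
mask = tassel_mask(img)
```

The binary mask would then seed the region-growing step that groups tassel pixels into located objects.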

Tang, Wenbing; Zhang, Yane; Zhang, Dongxing; Yang, Wei; Li, Minzan

2012-01-01

126

Image-processing with augmented reality (AR)  

NASA Astrophysics Data System (ADS)

The aim of this project is to discuss and articulate the intent to create an image-based Android application. The basis of this study is real-time image detection and processing, a convenient new measure that allows users to gain information on imagery right on the spot. Past studies have revealed attempts to create image-based applications but have only gone as far as creating image finders that work only with images already stored within some form of database. The Android platform is rapidly spreading around the world and provides by far the most interactive and technical platform for smart-phones, which is why it was important to base the study and research on it. Augmented reality allows the user to manipulate the data and add enhanced features (video, GPS tags) to the image taken.

Babaei, Hossein R.; Mohurutshe, Pagiel L.; Habibi Lashkari, Arash

2013-03-01

127

The Positioning of Students in Newly Qualified Secondary Teachers' Images of Their "Best Teaching"  

ERIC Educational Resources Information Center

Asking newly qualified teachers (NQTs) to provide images of their "best teaching", and encouraging subsequent reflection on these images, has the potential to enhance their understanding of themselves as teachers as they explore their often unconsciously held assumptions about students and classrooms. This paper reports aspects of a study of 100…

Haigh, Mavis; Kane, Ruth; Sandretto, Susan

2012-01-01

128

A Grid Environment Based Satellite Images Processing  

Microsoft Academic Search

With the availability of massive remotely sensed data, one of the biggest challenges is how to process and analyze these data as soon as possible. Because the Grid unifies heterogeneous computing resources, a Grid environment is built for the processing of remotely sensed images. In this study, CSF4 is taken as the meta-scheduler in the collective layer in such a

X. Zhang; S. Chen; J. Fan; X. Wei

2009-01-01

129

Digital-image processing and image analysis of glacier ice  

USGS Publications Warehouse

This document provides a methodology for extracting grain statistics from 8-bit color and grayscale images of thin sections of glacier ice—a subset of physical properties measurements typically performed on ice cores. This type of analysis is most commonly used to characterize the evolution of ice-crystal size, shape, and intercrystalline spatial relations within a large body of ice sampled by deep ice-coring projects from which paleoclimate records will be developed. However, such information is equally useful for investigating the stress state and physical responses of ice to stresses within a glacier. The methods of analysis presented here go hand-in-hand with the analysis of ice fabrics (aggregate crystal orientations) and, when combined with fabric analysis, provide a powerful method for investigating the dynamic recrystallization and deformation behaviors of bodies of ice in motion. The procedures described in this document compose a step-by-step handbook for a specific image acquisition and data reduction system built in support of U.S. Geological Survey ice analysis projects, but the general methodology can be used with any combination of image processing and analysis software. The specific approaches in this document use the FoveaPro 4 plug-in toolset to Adobe Photoshop CS5 Extended but it can be carried out equally well, though somewhat less conveniently, with software such as the image processing toolbox in MATLAB, Image-Pro Plus, or ImageJ.
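
Grain statistics of the kind described rest on one computational core: labeling connected regions of a binarized thin-section image, then measuring each region. The sketch below is a minimal, generic stand-in in Python (breadth-first flood fill), not the FoveaPro/Photoshop workflow the handbook documents; the 4-connectivity choice and the synthetic two-grain image are illustrative assumptions.

```python
import numpy as np
from collections import deque

def label_grains(binary):
    """4-connected component labeling by breadth-first flood fill."""
    labels = np.zeros(binary.shape, dtype=int)
    current = 0
    for seed in zip(*np.nonzero(binary)):
        if labels[seed]:
            continue                      # already swept into an earlier grain
        current += 1
        labels[seed] = current
        queue = deque([seed])
        while queue:
            r, c = queue.popleft()
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if (0 <= nr < binary.shape[0] and 0 <= nc < binary.shape[1]
                        and binary[nr, nc] and not labels[nr, nc]):
                    labels[nr, nc] = current
                    queue.append((nr, nc))
    return labels, current

binary = np.zeros((8, 8), dtype=bool)
binary[1:3, 1:4] = True    # grain 1: area 6
binary[5:7, 5:7] = True    # grain 2: area 4
labels, n = label_grains(binary)
areas = np.bincount(labels.ravel())[1:]    # per-grain pixel counts
```

Per-grain areas (and, with a little more bookkeeping, perimeters and axis ratios) then feed the size, shape, and spacing statistics the methodology targets, in any toolchain from MATLAB to ImageJ.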

Fitzpatrick, Joan J.

2013-01-01
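The grain-statistics extraction described above reduces, at its core, to thresholding followed by connected-component labeling and per-grain measurements. The sketch below is a minimal pure-NumPy illustration of that generic idea, not the USGS/FoveaPro workflow; the toy image, threshold, and 4-connectivity are assumptions made for the example.

```python
import numpy as np
from collections import deque

def label_grains(binary):
    """Label 4-connected foreground regions; return (label image, areas)."""
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=int)
    areas = []
    for i in range(h):
        for j in range(w):
            if binary[i, j] and labels[i, j] == 0:
                areas.append(0)
                lab = len(areas)
                labels[i, j] = lab
                q = deque([(i, j)])
                while q:  # breadth-first flood fill of one grain
                    y, x = q.popleft()
                    areas[lab - 1] += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = lab
                            q.append((ny, nx))
    return labels, areas

# Toy "thin section": two bright grains on a dark background.
img = np.zeros((8, 8))
img[1:3, 1:3] = 200.0                   # grain of 4 pixels
img[5:8, 5:7] = 180.0                   # grain of 6 pixels
labels, areas = label_grains(img > 128)
diameters = [2.0 * np.sqrt(a / np.pi) for a in areas]  # equivalent-circle diameters
```

From the labeled grains, size and shape statistics (areas, equivalent diameters, and so on) follow directly, which is the kind of per-crystal measurement the document's workflow automates.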

130

Homeland Security – Image Processing for Intelligent Cameras  

Microsoft Academic Search

Demand for image surveillance systems is a challenging, fast-growing domain for TOSA [Thales Optronique Société Anonyme]. To address this domain, and image processing architecture in general, TOSA bets on a strategy of using reconfigurable multi-purpose architecture to improve performance, re-use, productivity and reactivity. The MORPHEUS project makes it possible to realize this concept and demonstrate its capabilities. Among the two phases that compose

Cyrille Batariere

131

Smart vision system for applied image processing  

NASA Astrophysics Data System (ADS)

Smart vision systems for high performance machine vision applications are presented. The smart vision systems are based on smart vision sensors. These are programmable circuits consisting of image sensors, AD-converters, and RISC-processors integrated on the same silicon chip. The processor, being a line parallel bit-serial SIMD machine, handles both binary and grey scale information efficiently. The instruction set is specially designed for image processing tasks.

Goekstorp, Mats

1997-08-01

132

Effects of Using Online Tools in Improving Regulation of the Teaching-Learning Process  

ERIC Educational Resources Information Center

Introduction: The current panorama of Higher Education reveals a need to improve teaching and learning processes taking place there. The rise of the information society transforms how we organize learning and transmit knowledge. On this account, teaching-learning processes must be enhanced, the role of teachers and students must be evaluated, and…

de la Fuente, Jesus; Cano, Francisco; Justicia, Fernando; Pichardo, Maria del Carmen; Garcia-Berben, Ana Belen; Martinez-Vicente, Jose Manuel; Sander, Paul

2007-01-01

133

Support Routines for In Situ Image Processing  

NASA Technical Reports Server (NTRS)

This software consists of a set of application programs that support ground-based image processing for in situ missions. These programs represent a collection of utility routines that perform miscellaneous functions in the context of the ground data system. Each one fulfills some specific need as determined via operational experience. The most distinctive aspect of these programs is that they are integrated into the larger in situ image processing system via the PIG (Planetary Image Geometry) library. They work directly with in situ data from spacecraft, understanding the appropriate image meta-data fields and updating them properly. The programs themselves are completely multimission; all mission dependencies are handled by PIG. The suite consists of: (1) marscahv: generates a linearized, epi-polar aligned image given a stereo pair of images; these images are optimized for 1-D stereo correlations. (2) marscheckcm: compares the camera model in an image label with one derived via kinematics modeling on the ground. (3) marschkovl: checks the overlaps between a list of images in order to determine which might be stereo pairs; this is useful for non-traditional stereo images such as long-baseline pairs or those from an articulating arm camera. (4) marscoordtrans: translates mosaic coordinates from one form into another. (5) marsdispcompare: checks a left-to-right stereo disparity image against a right-to-left disparity image to ensure they are consistent with each other. (6) marsdispwarp: takes one image of a stereo pair and warps it through a disparity map to create a synthetic opposite-eye image; for example, a right-eye image can be transformed to look as if it were taken from the left eye. (7) marsfidfinder: finds fiducial markers in an image by projecting their approximate locations and then using correlation to locate the markers to subpixel accuracy; these fiducial markers are small targets attached to the spacecraft surface, and finding them helps verify, or improve, the pointing of in situ cameras. (8) marsinvrange: inverse of marsrange; given a range file, re-computes an XYZ file that closely matches the original. (9) marsproj: projects an XYZ coordinate through the camera model and reports the line/sample coordinates of the point in the image. (10) marsprojfid: given the output of marsfidfinder, projects the XYZ locations and compares them to the found locations, creating a report showing the fiducial errors in each image. (11) marsrad: radiometrically corrects an image. (12) marsrelabel: updates coordinate system or camera model labels in an image. (13) marstiexyz: given a stereo pair, allows the user to interactively pick a point in each image and reports the XYZ value corresponding to that pair of locations. (14) marsunmosaic: extracts a single frame from a mosaic, created such that it could have been an input to the original mosaic; useful for creating simulated input frames using camera models different from those the original mosaic used. (15) merinverter: uses an inverse lookup table to convert 8-bit telemetered data to its 12-bit original form; despite the name, it can be used in other missions.

Deen, Robert G.; Pariser, Oleg; Yeates, Matthew C.; Lee, Hyun H.; Lorre, Jean

2013-01-01
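The last utility in the suite, merinverter, is essentially an inverse lookup table applied to the image. A minimal sketch of that idea follows, with a hypothetical square-root companding law standing in for the real, mission-specific flight table:

```python
import numpy as np

# Hypothetical square-root companding: 12-bit DN -> 8-bit downlink code.
# (The real flight LUT is mission-specific; this one is only illustrative.)
dn12 = np.arange(4096)
forward = np.round(np.sqrt(dn12) * 255.0 / np.sqrt(4095.0)).astype(np.uint8)

# Inverse LUT: map each 8-bit code back to a representative 12-bit value
# (here, the mean of all 12-bit DNs that compress to that code).
inverse = np.zeros(256, dtype=np.uint16)
for code in range(256):
    dns = np.nonzero(forward == code)[0]
    if dns.size:
        inverse[code] = int(round(dns.mean()))

def merinverter_like(img8):
    """Expand an 8-bit telemetered image back to approximate 12-bit DNs."""
    return inverse[img8]

img8 = np.array([[0, 128, 255]], dtype=np.uint8)
img12 = merinverter_like(img8)
```

The expansion is lossy by construction: several original DNs share one downlink code, so the inverse can only return a representative value per code.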

134

Mobile Phone Images and Video in Science Teaching and Learning  

ERIC Educational Resources Information Center

This article reports a study into how mobile phones could be used to enhance teaching and learning in secondary school science. It describes four lessons devised by groups of Sri Lankan teachers all of which centred on the use of the mobile phone cameras rather than their communication functions. A qualitative methodological approach was used to…

Ekanayake, Sakunthala Yatigammana; Wishart, Jocelyn

2014-01-01

135

Snapping Sharks, Maddening Mindreaders, and Interactive Images: Teaching Correlation.  

ERIC Educational Resources Information Center

Understanding correlation coefficients is difficult for students. This paper describes a free computer program that helps introductory psychology students distinguish between positive and negative correlations and understand the differences between correlation coefficients of different sizes. The program is…

Mitchell, Mark L.

136

Logarithmic spiral grids for image processing  

NASA Technical Reports Server (NTRS)

A picture digitization grid based on logarithmic spirals rather than Cartesian coordinates is presented. Expressing this curvilinear grid as a conformal exponential mapping reveals useful image processing properties. The mapping induces a computational simplification that suggests parallel architectures in which most geometric transformations are effected by data shifting in memory rather than arithmetic on coordinates. These include fast, parallel noise-free rotation, scaling, and some projective transformations of pixel defined images. Conformality of the mapping preserves local picture-processing operations such as edge detection.

Weiman, C. F. R.; Chaikin, G. M.

1979-01-01
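A software analogue of such a grid is log-polar resampling. The sketch below (NumPy, nearest-neighbour sampling, with my own parameter choices rather than the authors' hardware grid) computes the mapping; under it, image rotation corresponds to a circular shift along the angular axis and uniform scaling to a shift along the radial axis, which is the data-shifting property the abstract describes.

```python
import numpy as np

def logpolar(img, n_rho=32, n_theta=64):
    """Nearest-neighbour resampling of `img` onto a log-polar grid
    centred on the image: rows index log-radius, columns index angle."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = min(cy, cx)
    rho = np.exp(np.linspace(0.0, np.log(r_max), n_rho))   # radii 1 .. r_max
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    out = np.zeros((n_rho, n_theta))
    for i, r in enumerate(rho):
        for j, t in enumerate(theta):
            y = int(round(cy + r * np.sin(t)))
            x = int(round(cx + r * np.cos(t)))
            out[i, j] = img[y, x]
    return out

# Half-plane test image: right half bright, left half dark.
img = np.zeros((65, 65))
img[:, 33:] = 1.0
out = logpolar(img)
```

Rotating the input by one angular step (2*pi/64 here) approximately rolls `out` by one column, and doubling the scale shifts its rows, so both operations reduce to memory shifts in the transformed domain.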

137

Improving Teaching through a Peer Support "Teacher Consultation Process."  

ERIC Educational Resources Information Center

The Teaching Consultation Program (TCP) is one of the most popular faculty development programs offered in the University of Kentucky Community Colleges (UKCC's), having trained over 500 faculty since its implementation in 1977. The TCP is a confidential, peer consulting program available to faculty who wish to analyze their teaching behaviors and…

Kerwin, Mike; Rhoads, Judith

138

Product review: lucis image processing software.  

PubMed

Lucis is a software program that allows the manipulation of images through the process of selective contrast pattern emphasis. Using an image-processing algorithm called Differential Hysteresis Processing (DHP), Lucis extracts and highlights patterns based on variations in image intensity (luminance). The result is that details can be seen that would otherwise be hidden in deep shadow or excessive brightness. The software is contained on a single floppy disk, is easy to install on a PC, simple to use, and runs on Windows 95, Windows 98, and Windows NT operating systems. The cost is $8,500 for a license, but is estimated to save a great deal of money in photographic materials, time, and labor that would have otherwise been spent in the darkroom. Superb images are easily obtained from unstained (no lead or uranium) sections, and stored image files sent to laser printers are of publication quality. The software can be used not only for all types of microscopy, including color fluorescence light microscopy, biological and materials science electron microscopy (TEM and SEM), but will be beneficial in medicine, such as X-ray films (pending approval by the FDA), and in the arts. PMID:10206154

Johnson, J E

1999-04-01

139

Processing Images of Craters for Spacecraft Navigation  

NASA Technical Reports Server (NTRS)

A crater-detection algorithm has been conceived to enable automation of what, heretofore, have been manual processes for utilizing images of craters on a celestial body as landmarks for navigating a spacecraft flying near or landing on that body. The images are acquired by an electronic camera aboard the spacecraft, then digitized, then processed by the algorithm, which consists mainly of the following steps: 1. Edges in an image are detected and placed in a database. 2. Crater rim edges are selected from the edge database. 3. Edges that belong to the same crater are grouped together. 4. An ellipse is fitted to each group of crater edges. 5. Ellipses are refined directly in the image domain to reduce errors introduced in the detection of edges and fitting of ellipses. 6. The quality of each detected crater is evaluated. It is planned to utilize this algorithm as the basis of a computer program for automated, real-time, onboard processing of crater-image data. Experimental studies have led to the conclusion that this algorithm is capable of a detection rate >93 percent, a false-alarm rate <5 percent, a geometric error <0.5 pixel, and a position error <0.3 pixel.

Cheng, Yang; Johnson, Andrew E.; Matthies, Larry H.

2009-01-01
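Step 4 above, fitting an ellipse to a group of crater-rim edge points, can be illustrated with a linear least-squares conic fit. The abstract does not specify the authors' fitting method, so this is a generic sketch on synthetic rim points:

```python
import numpy as np

def fit_conic(x, y):
    """Least-squares conic A x^2 + B xy + C y^2 + D x + E y + F = 0.
    The coefficient vector (up to scale) is the right-singular vector of
    the design matrix associated with its smallest singular value."""
    M = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    _, _, Vt = np.linalg.svd(M)
    return Vt[-1]

# Synthetic crater rim: points on the ellipse x^2/4 + y^2 = 1.
t = np.linspace(0.0, 2.0 * np.pi, 40, endpoint=False)
x, y = 2.0 * np.cos(t), np.sin(t)
A, B, C, D, E, F = fit_conic(x, y)
```

The discriminant B^2 - 4AC < 0 confirms the fitted conic is an ellipse; a quality check in the spirit of step 6 could then score each candidate by the residuals of its edge points against the fitted conic.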

140

Refractive surface flow visualization using image processing.  

PubMed

The importance of the wake-free-surface interaction in the detection, classification, and tracking of submerged objects has led to the development of a simple but effective free-surface visualization technique for use in controlled water-tunnel experiments. An experiment was performed to verify the effectiveness and the applicability of this method. Digital images of a spatially varying sinusoidal grid were acquired as seen through the disturbance pattern on the water surface. Image-processing techniques were used to perform phase demodulation of the distorted image. The resulting image details the outline, location, and extent of the surface deformation in a gray-scale format. Optimal digital filter specifications and spatial grid frequencies were determined experimentally for various surface-flow conditions. PMID:21037733

Britton, D F; Smith, L M

1995-04-10
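The phase-demodulation step can be sketched in one dimension: multiply the distorted grid signal by a complex carrier at the grid frequency and low-pass filter, so that what remains is the surface-induced phase distortion. The NumPy sketch below uses assumed parameters (carrier frequency, cutoff, and the synthetic distortion are all illustrative, not taken from the experiment):

```python
import numpy as np

N, f0 = 256, 32                               # samples, carrier (grid) frequency
n = np.arange(N)
phi = 0.8 * np.sin(2 * np.pi * 2 * n / N)     # surface-induced phase distortion
grid = np.cos(2 * np.pi * f0 * n / N + phi)   # one line of the distorted grid image

# Demodulate: shift the carrier to DC, then low-pass in the Fourier domain.
shifted = grid * np.exp(-2j * np.pi * f0 * n / N)
spec = np.fft.fft(shifted)
freqs = np.fft.fftfreq(N, d=1.0 / N)          # integer cycle frequencies
spec[np.abs(freqs) > 12] = 0.0                # keep only the baseband term
recovered = np.angle(np.fft.ifft(spec))       # ~ phi, up to filtering error
```

The gray-scale map of `recovered` over a 2-D grid image is exactly the kind of deformation map the abstract describes; the cutoff plays the role of the "optimal digital filter specifications" determined experimentally there.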

141

FITSH: Software Package for Image Processing  

NASA Astrophysics Data System (ADS)

FITSH provides a standalone environment for analysis of data acquired by imaging astronomical detectors. The package provides utilities both for the full pipeline of subsequent related data processing steps (including image calibration, astrometry, source identification, photometry, differential analysis, low-level arithmetic operations, multiple image combinations, spatial transformations and interpolations, etc.) and for aiding the interpretation of the (mainly photometric and/or astrometric) results. The package also features a consistent implementation of photometry based on image subtraction, point spread function fitting and aperture photometry, and provides easy-to-use interfaces for comparisons and for picking the most suitable method for a particular problem. The utilities in the package are built on top of the commonly used UNIX/POSIX shells (hence the name of the package); therefore, frequently used and well-documented tools for such environments can be exploited, and managing massive amounts of data is rather convenient.

Pál, András

2011-11-01

142

MATHEMATICAL METHODS IN MEDICAL IMAGE PROCESSING  

PubMed Central

In this paper, we describe some central mathematical problems in medical imaging. The subject has been undergoing rapid changes driven by better hardware and software. Much of the software is based on novel methods utilizing geometric partial differential equations in conjunction with standard signal/image processing techniques as well as computer graphics facilitating man/machine interactions. As part of this enterprise, researchers have been trying to base biomedical engineering principles on rigorous mathematical foundations for the development of software methods to be integrated into complete therapy delivery systems. These systems support the more effective delivery of many image-guided procedures such as radiation therapy, biopsy, and minimally invasive surgery. We will show how mathematics may impact some of the main problems in this area, including image enhancement, registration, and segmentation. PMID:23645963

Angenent, Sigurd; Pichon, Eric; Tannenbaum, Allen

2013-01-01
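As a toy instance of the PDE-based enhancement the paper surveys, explicit heat-equation (isotropic diffusion) smoothing can be written in a few lines; the geometric flows the paper actually discusses (anisotropic and curvature-driven variants) refine this basic scheme. The step count, time step, and boundary handling below are my own illustrative choices.

```python
import numpy as np

def diffuse(img, steps=20, dt=0.2):
    """Explicit heat-equation smoothing u_t = laplacian(u),
    5-point stencil, periodic boundaries (dt <= 0.25 for stability)."""
    u = img.astype(float).copy()
    for _ in range(steps):
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
               np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)
        u += dt * lap
    return u

rng = np.random.default_rng(0)
noisy = rng.normal(size=(32, 32))
smooth = diffuse(noisy)
```

Diffusion preserves the image mean while shrinking its variance, i.e., it trades noise for blur; the anisotropic versions in the paper limit that blur across edges.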

143

Web-based document image processing  

NASA Astrophysics Data System (ADS)

Increasing numbers of research libraries are turning to the Internet for electronic interlibrary loan and for document delivery to patrons. This has been made possible through the widespread adoption of software such as Ariel and DocView. Ariel, a product of the Research Libraries Group, converts paper-based documents to monochrome bitmapped images and delivers them over the Internet. The National Library of Medicine's DocView is primarily designed for library patrons. While libraries and their patrons are beginning to reap the benefits of this new technology, barriers exist, e.g., differences in image file format, that lead to difficulties in the use of library document information. To research how to overcome such barriers, the Communications Engineering Branch of the Lister Hill National Center for Biomedical Communications, an R and D division of NLM, has developed a web site called the DocMorph Server. This is part of an ongoing intramural R and D program in document imaging that has spanned many aspects of electronic document conversion and preservation, Internet document transmission, and document usage. The DocMorph Server web site is designed to fill two roles. First, in a role that will benefit both libraries and their patrons, it allows Internet users to upload scanned image files for conversion to alternative formats, thereby enabling wider delivery and easier usage of library document information. Second, the DocMorph Server provides the design team an active test bed for evaluating the effectiveness and utility of new document image processing algorithms and functions, so that they may be evaluated for possible inclusion in other image processing software products being developed at NLM or elsewhere. This paper describes the design of the prototype DocMorph Server and the image processing functions implemented on it.

Walker, Frank L.; Thoma, George R.

1999-12-01

144

Digital image processing of vascular angiograms  

NASA Technical Reports Server (NTRS)

The paper discusses the estimation of the degree of atherosclerosis in the human femoral artery through the use of a digital image processing system for vascular angiograms. The film digitizer uses an electronic image dissector camera to scan the angiogram and convert the recorded optical density information into a numerical format. Another processing step involves locating the vessel edges from the digital image. The computer has been programmed to estimate vessel abnormality through a series of measurements, some derived primarily from the vessel edge information and others from optical density variations within the lumen shadow. These measurements are combined into an atherosclerosis index, which was found in a post-mortem study to correlate well with both visual and chemical estimates of atherosclerotic disease.

Selzer, R. H.; Beckenbach, E. S.; Blankenhorn, D. H.; Crawford, D. W.; Brooks, S. H.

1975-01-01
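Locating vessel edges on a digitized density profile can be sketched as finding the extrema of a derivative-of-Gaussian response along each scanline. This is a generic illustration, not the paper's exact procedure; the synthetic profile and smoothing width are invented for the example.

```python
import numpy as np

def vessel_edges(profile, sigma=2.0):
    """Locate the two vessel edges on one scanline as the positive and
    negative extrema of the Gaussian-smoothed derivative of the profile."""
    r = int(4 * sigma)
    xs = np.arange(-r, r + 1)
    g = np.exp(-xs**2 / (2.0 * sigma**2))
    dg = -xs * g / sigma**2                 # derivative-of-Gaussian kernel
    d = np.convolve(profile, dg, mode="same")
    return int(np.argmax(d)), int(np.argmin(d))

# Synthetic lumen shadow: bright vessel between samples 30 and 70.
profile = np.zeros(100)
profile[30:70] = 1.0
left, right = vessel_edges(profile)
```

Edge positions gathered along successive scanlines trace the two vessel walls, from which width-based abnormality measurements like those in the paper's index can be derived.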

145

Review of biomedical signal and image processing  

PubMed Central

This article is a review of the book “Biomedical Signal and Image Processing” by Kayvan Najarian and Robert Splinter, which is published by CRC Press, Taylor & Francis Group. It will evaluate the contents of the book and discuss its suitability as a textbook, while mentioning highlights of the book, and providing comparison with other textbooks.

2013-01-01

146

CMSC 426: Image Processing (Computer Vision)  

E-print Network

CMSC 426: Image Processing (Computer Vision) David Jacobs Today's class · What is vision · What is computer vision · Layout of the class #12;Vision · ``to know what is where, by looking.'' (Marr). · Where · What Why is Vision Interesting? · Psychology ­ ~ 50% of cerebral cortex is for vision. ­ Vision is how

Jacobs, David

147

Progressive band processing for hyperspectral imaging  

NASA Astrophysics Data System (ADS)

Hyperspectral imaging has emerged as an important image processing technique in many applications. Hyperspectral data is so called mainly because of the massive amount of information provided by the hundreds of spectral bands that can be used for data analysis. However, due to very high band-to-band correlation, much of this information may also be redundant. Consequently, how to effectively and best utilize such rich spectral information becomes very challenging. One general approach is data dimensionality reduction, which can be performed by data compression techniques, such as data transforms, and by data reduction techniques, such as band selection. This dissertation presents a new area in hyperspectral imaging, called progressive hyperspectral imaging, which has not been explored in the past. Specifically, it derives a new theory, called Progressive Band Processing (PBP), of hyperspectral data that can significantly reduce computing time and can also be realized in real time. It is particularly suited for application areas such as hyperspectral data communications and transmission, where data can be communicated and transmitted progressively through spectral or satellite channels with limited data storage. Most importantly, PBP allows users to screen preliminary results before deciding to continue with processing the complete data set. These advantages benefit users of hyperspectral data by reducing processing time and increasing the timeliness of crucial decisions made based on the data, such as identifying key intelligence information when the required response time is short.

Schultz, Robert C.
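The band-by-band idea behind PBP can be illustrated with a running statistic that is updated as each band arrives, so that a preliminary product exists after every band. The real PBP theory recursively updates detector statistics (for example correlation matrices) rather than a simple mean; this toy sketch, with an invented random cube, only shows the streaming-update pattern.

```python
import numpy as np

rng = np.random.default_rng(1)
cube = rng.random((8, 8, 50))           # H x W x bands, streamed band by band

running = np.zeros((8, 8))
for b in range(cube.shape[2]):
    band = cube[:, :, b]                # band b arrives over the channel
    running += (band - running) / (b + 1)   # incremental update of band mean
    # `running` is a valid preliminary product after every band, so a user
    # can screen it and stop early instead of waiting for the full cube.
```

After the final band, the incrementally maintained image equals the full band average, while at any earlier point it is the best estimate from the bands received so far.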

148

Stochastic processes, estimation theory and image enhancement  

NASA Technical Reports Server (NTRS)

An introductory account of stochastic processes, estimation theory, and image enhancement is presented. The book is primarily intended for first-year graduate students and practicing engineers and scientists whose work requires an acquaintance with the theory. Fundamental concepts of probability that are required to support the main topics are reviewed. The appendices discuss the remaining mathematical background.

Assefi, T.

1978-01-01

149

A new ocean SAR imaging process simulator  

Microsoft Academic Search

In this paper, we develop the concept of a new ocean SAR imaging process simulator. We intend to come up with a simulator more complete than those developed until now. Indeed, in addition to the large number of oceanic and atmospheric phenomena considered, the simulator should be able to cope with various radar configurations (spaceborne, airborne, grazing-angle). The

M. Lamy; J.-M. Le Caillec; R. Garello; A. Khenchaf

2003-01-01

150

Satellite image processing and air pollution detection  

Microsoft Academic Search

Environmental sensing is closely related to digital processing of observed signals and images. The paper is devoted to the analysis of mathematical methods allowing for the detection of concentrations of aerosol particles observed at ground measuring stations and by satellites. The first part of the contribution presents basic methods of two-dimensional interpolation allowing for the estimation of the observed variables over

A. Prochazka; M. Kolinova; J. Fiala; P. Hampl; K. Hlavaty

2000-01-01

151

Satellite Image Processing on Computational Grids  

Microsoft Academic Search

Remote sensing image processing is a very demanding procedure in terms of data manipulation and computing power. Grid computing is a possible solution when the required computing performance or data sharing is not available at the user's site. Two scenarios of using Service Grids were analyzed in our papers (17, 18). This paper discusses another scenario of using Computational Grids.

Dana Petcu; Silviu Panica; Andrei Eckstein

2007-01-01

152

Fingerprint quality assurance using image processing  

E-print Network

Fingerprint quality assurance using image processing. Marek Dusio. Kongens Lyngby 2013, IMM-M.Sc.-2013. Concerns quality analysis methods that are fast to compute; of interest is analysing the impact of fingertip skin moisture. Prepared at the department of Informatics and Mathematical Modelling at the Technical University of Denmark during an exchange visit at the Center for Advanced

153

Teaching Practice Trends Regarding the Teaching of the Design Process within a South African Context: A Situation Analysis  

ERIC Educational Resources Information Center

In this article an analysis is made of the responses of 95 technology education teachers, 14 technology education lecturers and 25 design practitioners to questionnaires regarding the teaching and the application of the design process. The main purpose of the questionnaires is to determine whether there are any trends regarding the strategies and…

Potgieter, Calvyn

2013-01-01

154

Subband/transform functions for image processing  

NASA Technical Reports Server (NTRS)

Functions for image data processing written for use with the MATLAB(TM) software package are presented. These functions provide the capability to transform image data with block transformations (such as the Walsh-Hadamard) and to produce spatial frequency subbands of the transformed data. Block transforms are equivalent to simple subband systems. The transform coefficients are reordered using a simple permutation to give subbands. The low frequency subband is a low resolution version of the original image, while the higher frequency subbands contain edge information. The transform functions can be cascaded to provide further decomposition into more subbands. If the cascade is applied to all four of the first stage subbands (in the case of a four band decomposition), then a uniform structure of sixteen bands is obtained. If the cascade is applied only to the low frequency subband, an octave structure of seven bands results. Functions for the inverse transforms are also given. These functions can be used for image data compression systems. The transforms do not in themselves produce data compression, but prepare the data for quantization and compression. Sample quantization functions for subbands are also given. A typical compression approach is to subband the image data, quantize it, then use statistical coding (e.g., run-length coding followed by Huffman coding) for compression. Contour plots of image data and subbanded data are shown.

Glover, Daniel

1993-01-01
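The functions described are for MATLAB, but the core idea, a 2x2 Walsh-Hadamard block transform whose reordered coefficients form four subbands with perfect reconstruction, can be sketched in NumPy. The indexing conventions below are my own, not those of the original functions.

```python
import numpy as np

def subband2x2(img):
    """One-stage 2x2 Walsh-Hadamard block transform, coefficients
    reordered into four subbands: LL (low-resolution image) and
    LH, HL, HH (edge detail)."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    return ((a + b + c + d) / 2.0, (a - b + c - d) / 2.0,
            (a + b - c - d) / 2.0, (a - b - c + d) / 2.0)

def inverse2x2(ll, lh, hl, hh):
    """Inverse transform: the Hadamard matrix is self-inverse up to
    scale, so the same combinations reconstruct each 2x2 block."""
    out = np.zeros((2 * ll.shape[0], 2 * ll.shape[1]))
    out[0::2, 0::2] = (ll + lh + hl + hh) / 2.0
    out[0::2, 1::2] = (ll - lh + hl - hh) / 2.0
    out[1::2, 0::2] = (ll + lh - hl - hh) / 2.0
    out[1::2, 1::2] = (ll - lh - hl + hh) / 2.0
    return out

rng = np.random.default_rng(0)
img = rng.random((8, 8))
ll, lh, hl, hh = subband2x2(img)
recon = inverse2x2(ll, lh, hl, hh)
```

Cascading `subband2x2` on `ll` alone yields the octave structure the abstract mentions, while cascading on all four subbands yields the uniform sixteen-band structure.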

155

Process Evaluation of a Teaching and Learning Centre at a Research University  

ERIC Educational Resources Information Center

This paper describes the evaluation of a teaching and learning centre (TLC) five years after its inception at a mid-sized, midwestern state university. The mixed methods process evaluation gathered data from 209 attendees and non-attendees of the TLC from the full-time, benefit-eligible teaching faculty. Focus groups noted feelings of…

Smith, Deborah B.; Gadbury-Amyot, Cynthia C.

2014-01-01

156

The Effect of Activity Based Lexis Teaching on Vocabulary Development Process  

ERIC Educational Resources Information Center

"Teaching words" as a complementary process of teaching Turkish is a crucial field of study. However, studies in this area are insufficient. The sole aim of the designed activities, which get under way with the constructivist approach on which new education programs are based, is to provide students with the vocabulary elements of Turkish. In…

Mert, Esra Lule

2013-01-01

157

The Process of Physics Teaching Assistants' Pedagogical Content Knowledge Development  

ERIC Educational Resources Information Center

This study explored the process of physics teaching assistants' (TAs) PCK development in the context of teaching a new undergraduate introductory physics course. "Matter and Interactions" (M&I) has recently adopted a new introductory physics course that focuses on the application of a small number of fundamental physical…

Seung, Eulsun

2013-01-01

158

Teaching beliefs and the practice of e-moderators: Presage, process and product  

Microsoft Academic Search

This paper examines the beliefs, intentions and actions of e-moderators by observing their interactions during the presage, process and product phases of online discussion forums (Biggs, 1988). While the structure provided by Biggs' 3P Model is used to highlight the interrelationships of the various steps of online teaching, this paper proposes that because individuals' teaching beliefs and practices are

Mary Panko

159

Using Process and Inquiry to Teach Content: Projectile Motion and Graphing  

ERIC Educational Resources Information Center

This series of lessons uses the process of student inquiry to teach the concepts of force and motion identified in the National Science Education Standards for grades 5-8. The lesson plan also uses technology as a teaching tool through the use of interactive Web sites. The lessons are built on the 5-E format and feature embedded assessments.

Rhea, Marilyn; Lucido, Patricia; Gregerson-Malm, Cheryl

2005-01-01

160

Twitter for Teaching: Can Social Media Be Used to Enhance the Process of Learning?  

ERIC Educational Resources Information Center

Can social media be used to enhance the process of learning by students in higher education? Social media have become widely adopted by students in their personal lives. However, the application of social media to teaching and learning remains to be fully explored. In this study, the use of the social media tool Twitter for teaching was…

Evans, Chris

2014-01-01

161

The Process of Adapting a German Pedagogy for Modern Mathematics Teaching in Japan  

ERIC Educational Resources Information Center

Modern geometry teaching in schools in Japan was modeled on the pedagogies of western countries. However, the core ideas of these pedagogies were often radically changed in the process of adaptation, resulting in teaching differing fundamentally from the original models. This paper discusses the radical changes the pedagogy of a German mathematics…

Yamamoto, Shinya

2006-01-01

162

Preservice Chemistry Teachers' Images about Science Teaching in Their Future Classrooms  

ERIC Educational Resources Information Center

The purpose of this study is to explore pre-service chemistry teachers' images of science teaching in their future classrooms. Also, association between instructional style, gender, and desire to be a teacher was explored. Sixty six pre-service chemistry teachers from three public universities participated in the data collection for this study. A…

Elmas, Ridvan; Demirdogen, Betul; Geban, Omer

2011-01-01

163

Improving the Teaching/Learning Process in General Chemistry: Report on the 1997 Stony Brook General Chemistry Teaching Workshop  

NASA Astrophysics Data System (ADS)

Motivated by the widespread recognition that traditional teaching methods at postsecondary institutions no longer are meeting students' educational needs, 59 participants came to the first Stony Brook General Chemistry Teaching Workshop, July 20-July 25, 1997, on improving the teaching/learning process in General Chemistry. The instructors from 42 institutions across the country, including community colleges, liberal-arts colleges, and large research universities, had mutual concerns that students are having difficulty understanding and applying concepts, finding relevance, transferring knowledge within and across disciplines, and identifying and developing skills needed for success in college and a career. This situation has come about because challenges posed by students' increasing diversity in academic preparation, cultural background, motivation, and career goals go unmet, with too many courses maintaining the conventional objective of structuring and presenting information.

Hanson, David M.; Wolfskill, Troy

1998-02-01

164

Teaching the Process of Science: Faculty Perceptions and an Effective Methodology  

PubMed Central

Most scientific endeavors require science process skills such as data interpretation, problem solving, experimental design, scientific writing, oral communication, collaborative work, and critical analysis of primary literature. These are the fundamental skills upon which the conceptual framework of scientific expertise is built. Unfortunately, most college science departments lack a formalized curriculum for teaching undergraduates science process skills. However, evidence strongly suggests that explicitly teaching undergraduates these skills early in their education may enhance their understanding of science content. Our research reveals that faculty overwhelmingly support teaching undergraduates science process skills but typically do not spend enough time teaching skills due to the perceived need to cover content. To encourage faculty to address this issue, we provide our pedagogical philosophies, methods, and materials for teaching science process skills to freshmen pursuing life science majors. We build upon previous work, showing student learning gains in both reading primary literature and scientific writing, and share student perspectives about a course where teaching the process of science, not content, was the focus. We recommend a wider implementation of courses that teach undergraduates science process skills early in their studies with the goals of improving student success and retention in the sciences and enhancing general science literacy. PMID:21123699

Coil, David; Wenderoth, Mary Pat; Cunningham, Matthew

2010-01-01

165

Biological crystal alignment using image processing.  

SciTech Connect

Crystal location and alignment to the x-ray beam is an enabling technology necessary for automation of macromolecular crystallography at synchrotron beamlines. In the process of crystal structure determination, a small x-ray synchrotron beam with FWHM as small as 70 µm (bending magnet beamlines) or 20 µm (undulator beamlines) is focused at or downstream of the crystal sample. Protein crystals used in structure determination are becoming smaller, approaching 50 µm or less, and need to be precisely placed in the focused x-ray beam. At the Structural Biology Center the crystals are mounted on a goniostat, allowing precise crystal xyz positioning and rotations. One low and two high magnification cameras integrated into the synchrotron beamline permit imaging of the crystal mounted on the goniostat. The crystals are held near liquid nitrogen temperatures using a cryostream to control secondary radiation damage. Image processing techniques are used for automatic and precise placing of protein crystals in the synchrotron beam. Here we discuss the automatic crystal centering process considered for the Structural Biology Center, utilizing several image processing techniques.

Gofron, K. J.; Lazarski, K.; Molitsky, M.; Joachimiak, A.; Biosciences Division

2006-01-01

166

High performance image processing of satellite images using graphics processing units  

Microsoft Academic Search

This paper presents preliminary results of studies concerning possibilities of high performance processing of satellite images using graphics processing units. Even though the numerical procedures used for this kind of computation are simple and fast, the size of a typical satellite scene makes them time consuming. This problem is especially troublesome when many, sometimes hundreds of scenes, have to be processed in

Michal Rumanek; Tomasz Danek; Andrzej Lesniak

2011-01-01

167

Texture Segmentation and Classification in Biomedical Image Processing  

Microsoft Academic Search

Methods of image analysis belong to a general interdisciplinary area of multidimensional signal processing. The paper is devoted to selected intelligent techniques of biomedical image processing, namely to mathematical methods of image feature extraction and image component classification invariant to rotation. The first method under study presents an algorithm for the given image segmentation using

Ales Prochazka; Andrea Gavlasova; Oldrich Vysata

168

The use of QTVR for teaching Radiology and Diagnostic Imaging  

Microsoft Academic Search

This paper reports on the preparation of a technology-mediated alternative to print-based external study of a postgraduate unit in veterinary diagnostic imaging. A number of innovative uses of technology had to be developed in order to meet the educational requirements: large radiograph images were converted into QuickTime VR format, which enabled them to be zoomed into and navigated, with

Rob Phillips; Fred Lafitte; Jennifer L Richardson

169

Content-Based Satellite Cloud Image Processing and Information Retrieval

Microsoft Academic Search

Satellite cloud images are a useful kind of image containing abundant information; to acquire this information, image processing and feature extraction methods adapted to satellite cloud images have to be used. Content-based satellite cloud image processing and information retrieval (CBIPIR) is a very important problem in the image processing and analysis field. The basic characteristics, like color, texture, edge

Yanling Hao; Wei Shangguan; Yi Zhu; Yanhong Tang

2007-01-01

170

Digital image processing of vascular angiograms  

NASA Technical Reports Server (NTRS)

A computer image processing technique was developed to estimate the degree of atherosclerosis in the human femoral artery. With an angiographic film of the vessel as input, the computer was programmed to estimate vessel abnormality through a series of measurements, some derived primarily from the vessel edge information and others from optical density variations within the lumen shadow. These measurements were combined into an atherosclerosis index, which was found to correlate well with both visual and chemical estimates of atherosclerotic disease.

Selzer, R. H.; Blankenhorn, D. H.; Beckenbach, E. S.; Crawford, D. W.; Brooks, S. H.

1975-01-01

171

Multiresolution Markov Models for Signal and Image Processing  

E-print Network

on graphical models. Keywords--Autoregressive processes, Bayesian networks, data assimilation, data fusion, enhancement, image processing, image segmentation, inverse problems, Kalman filtering, machine vision processing, sparse matrices, state space methods, stochastic realization, trees, wavelet transforms

Willsky, Alan S.

172

Automated synthesis of image processing procedures using AI planning techniques  

NASA Technical Reports Server (NTRS)

This paper describes the Multimission VICAR (Video Image Communication and Retrieval) Planner (MVP) (Chien 1994) system, which uses artificial intelligence planning techniques (Iwasaki & Friedland, 1985, Pemberthy & Weld, 1992, Stefik, 1981) to automatically construct executable complex image processing procedures (using models of the smaller constituent image processing subprograms) in response to image processing requests made to the JPL Multimission Image Processing Laboratory (MIPL). The MVP system allows the user to specify the image processing requirements in terms of the various types of correction required. Given this information, MVP derives unspecified required processing steps and determines appropriate image processing programs and parameters to achieve the specified image processing goals. This information is output as an executable image processing program which can then be executed to fill the processing request.
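The derivation of unspecified processing steps can be caricatured as search over sets of image conditions: each subprogram requires some conditions and establishes others, and the planner finds a sequence that achieves the request's goals. A toy sketch only; the program names and condition labels below are invented, and MVP's actual planner is far richer than this breadth-first search.

```python
from collections import deque

def plan(programs, start, goals):
    """Breadth-first search over sets of image 'conditions'. Each program
    maps a precondition set to the conditions it establishes. Returns a
    program sequence achieving every goal, or None if no plan exists."""
    queue = deque([(frozenset(start), [])])
    seen = {frozenset(start)}
    while queue:
        state, seq = queue.popleft()
        if set(goals) <= state:              # all goals satisfied
            return seq
        for name, (pre, add) in programs.items():
            if set(pre) <= state:            # program is applicable
                nxt = state | set(add)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, seq + [name]))
    return None
```

For example, a request for a geometrically corrected product would be planned as radiometric calibration followed by geometric correction, if the latter's preconditions require the former.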

Chien, Steve; Mortensen, Helen

1994-01-01

173

Resistivity Imaging of Bedrock Constrained by Digital Image Processing Algorithms  

NASA Astrophysics Data System (ADS)

The resistivity method is routinely used to image the shallow subsurface and common applications include mapping of near surface geology, characterization of contaminated sites, delineation of engineered structures, and locating archeological features. The smoothness constraint is most commonly employed as the default regularization method in commercially available software for resistivity image reconstruction based on least squares minimization. This regularization constraint is conceptually appropriate for a wide range of applications, particularly when the objective is to predict changes in resistivity due to variations in moisture and/or salinity across space and time. However, resistivity imaging is increasingly used to predict targets that are characterized by sharp, rather than gradational, resistivity contrasts (e.g. depth to bedrock, as of interest here). In this case, the smoothness constraint is conceptually inappropriate as our a priori expectation is that such targets represent a sharp change in resistivity across some unknown boundary location. Here, we propose simple procedures for partly offsetting the pitfalls that result from applying the smoothness-based regularization to locate such targets, which combine: (1) an initial inversion using the standard smoothness constraint and a homogeneous starting model (as typically done in practice), (2) an image processing technique known as the watershed algorithm to subsequently predict the probable depth to bedrock from the smooth image, and (3) a second inversion step incorporating a disconnect in the regularization, defined by the probable depth to bedrock output from the watershed algorithm, to obtain an improved estimate of the variation in resistivity within and outside of the bedrock based on the incorporation of a priori information (i.e. the disconnect and the resistivity model obtained from the standard smooth inversion).
We test this approach on four synthetic variants of the depth-to-bedrock problem, as well as a field dataset from the H.J. Andrews Experimental Forest (Oregon). We do not apply our approach to data from the monitoring of an infiltration test, as sharp resistivity boundaries are not expected there.
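The core idea of step (2), extracting a sharp boundary from a deliberately smooth inverted image, can be illustrated without the watershed machinery: for each column of the resistivity section, pick the depth where the vertical resistivity gradient is largest. This is a simplified stand-in for the watershed step described above, not the authors' algorithm.

```python
import numpy as np

def pick_interface(section):
    """For each column of a smoothed resistivity section (depth x distance),
    return the depth index of the largest vertical resistivity change --
    a crude proxy for the sharp bedrock boundary that the watershed step
    extracts in the published workflow."""
    grad = np.abs(np.diff(section, axis=0))   # vertical gradient magnitude
    return grad.argmax(axis=0) + 1            # first row of the new layer
```

The picked depths could then define the regularization disconnect used in the second inversion of step (3).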

Elwaseif, M.; Slater, L.

2009-05-01

174

NCEP Exercise- Introduction to Remote Sensing: Viewing satellite imagery with image processing software  

NSDL National Science Digital Library

This exercise acts as a self-guided tutorial that demonstrates how satellite imagery can be visualized using an open source remote sensing image processing software application called OpenEV. With this software it is possible to read a broad range of satellite imagery in a variety of file formats. Once the image files are input into the software, functions exist to allow the user to enhance the image and overlay other geographic data. Additional teaching materials on topics relating to biodiversity conservation and ecology can be obtained free of charge by registering at the Network for Conservation Educators and Practitioners' website (http://ncep.amnh.org).

Horning, N.

2010-02-16

175

Color Image Processing and Object Tracking System  

NASA Technical Reports Server (NTRS)

This report describes a personal computer based system for automatic and semiautomatic tracking of objects on film or video tape, developed to meet the needs of the Microgravity Combustion and Fluids Science Research Programs at the NASA Lewis Research Center. The system consists of individual hardware components working under computer control to achieve a high degree of automation. The most important hardware components include 16-mm and 35-mm film transports, a high resolution digital camera mounted on an x-y-z micro-positioning stage, an S-VHS tapedeck, a Hi8 tapedeck, video laserdisk, and a framegrabber. All of the image input devices are remotely controlled by a computer. Software was developed to integrate the overall operation of the system including device frame incrementation, grabbing of image frames, image processing of the object's neighborhood, locating the position of the object being tracked, and storing the coordinates in a file. This process is performed repeatedly until the last frame is reached. Several different tracking methods are supported. To illustrate the process, two representative applications of the system are described. These applications represent typical uses of the system and include tracking the propagation of a flame front and tracking the movement of a liquid-gas interface with extremely poor visibility.
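One of the simplest tracking methods such a system can support is template matching: search each new frame for the patch that best resembles the object's appearance in the previous frame. A minimal sum-of-squared-differences sketch follows; the report does not specify its tracking algorithms, so this is only an illustrative stand-in.

```python
import numpy as np

def track(frame, template):
    """Locate `template` in `frame` by exhaustive sum-of-squared-differences
    search; return the (row, col) of the best match's top-left corner."""
    th, tw = template.shape
    fh, fw = frame.shape
    best, best_pos = np.inf, (0, 0)
    for r in range(fh - th + 1):
        for c in range(fw - tw + 1):
            ssd = np.sum((frame[r:r+th, c:c+tw] - template) ** 2)
            if ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos
```

Repeating this per frame, and writing each position to a file, reproduces the frame-by-frame coordinate logging the report describes.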

Klimek, Robert B.; Wright, Ted W.; Sielken, Robert S.

1996-01-01

176

Using the medical image processing package, ImageJ, for astronomy  

E-print Network

At the most fundamental level, all digital images are just large arrays of numbers that can easily be manipulated by computer software. Specialized digital imaging software packages often use routines common to many different applications and fields of study. The freely available, platform independent, image-processing package ImageJ has many such functions. We highlight ImageJ's capabilities by presenting methods of processing sequences of images to produce a star trail image and a single high quality planetary image.
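The star-trail example above boils down to a per-pixel maximum over the image sequence, which ImageJ exposes as Z-Project with Max Intensity. A minimal NumPy equivalent (the function name is ours):

```python
import numpy as np

def star_trails(frames):
    """Combine a sequence of night-sky exposures into a star-trail image by
    taking the per-pixel maximum across frames: stars that drift across the
    field leave a bright trail in the combined image."""
    return np.stack(frames).max(axis=0)
```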

Jennifer L. West; Ian D. Cameron

2006-11-21

177

Mnemonic Text-Processing Strategies: A Teaching Science for Science Teaching.  

ERIC Educational Resources Information Center

Reports on the development and evaluation of a two-component mnemonic strategy for teaching hierarchical and specific botanical concepts. Reports that compared to traditional instruction, mnemonic instruction facilitates learning of both classification and characteristic information, as well as inferential thinking on a problem-solving task. (RS)

Levin, Mary E.; And Others

1988-01-01

178

Vector processing enhancements for real-time image analysis.  

SciTech Connect

A real-time image analysis system was developed for beam imaging diagnostics. An Apple Power Mac G5 with an Active Silicon LFG frame grabber was used to capture video images that were processed and analyzed. Software routines were created to utilize vector-processing hardware to reduce the time to process images as compared to conventional methods. These improvements allow for more advanced image processing diagnostics to be performed in real time.
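The speedup principle is the same one NumPy exploits: replace per-pixel interpreted loops with whole-array operations that the hardware can vectorize. As a hedged example of the kind of beam diagnostic involved (the abstract does not give the actual routines), here is an intensity-weighted beam centroid computed entirely with array operations:

```python
import numpy as np

def beam_centroid(img):
    """Intensity-weighted centroid of a beam image, computed with
    whole-array (SIMD-style) operations instead of per-pixel Python
    loops -- analogous to moving the work onto vector hardware."""
    total = img.sum()
    rows = np.arange(img.shape[0])
    cols = np.arange(img.shape[1])
    cy = (img.sum(axis=1) * rows).sum() / total
    cx = (img.sum(axis=0) * cols).sum() / total
    return cy, cx
```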

Shoaf, S.; APS Engineering Support Division

2008-01-01

179

MedPix Radiology Teaching Files & Medical Image Database  

NSDL National Science Digital Library

For students of radiology and related fields, this database will be a most welcome find. Created by the team behind MedPix, the site includes thousands of radiology images designed to be used as educational tools. Visitors can click on the Picture of the Day to get started, and then head on over to the Weekly Quiz to test their mettle. The Radiology Tutor section includes nine different tutorials that cover topics such as Trauma, Vascular, Technique, and General Principles. The Brain Lesion Locator can help visitors learn about identifying different brain lesions via radiological images. The site is rounded out by seven different practice exams that will help visitors strengthen their basic understanding of radiological images.

2012-02-17

180

Improvement of the detection rate in digital watermarked images against image degradation caused by image processing  

NASA Astrophysics Data System (ADS)

In the current environment of medical information disclosure, general-purpose image formats such as JPEG/BMP, which do not require special software for viewing, are suitable for carrying and managing medical image information individually. These formats, however, have no way to carry patient and study information. We have therefore developed two kinds of ID embedding methods: one is a bit-swapping method for embedding an alteration-detection ID and the other is a data-imposing method in the Fourier domain using the Discrete Cosine Transform (DCT) for embedding an original-image-source ID. We then applied these two digital watermarking methods to four modality images (Chest X-ray, Head CT, Abdomen CT, Bone scintigraphy). However, there were some cases where the digital watermarked ID could not be detected correctly due to image degradation caused by image processing. In this study, we improved the detection rate in digital watermarked images using several techniques: an error correction method, a majority correction method, and a scramble location method. We applied these techniques to digital watermarked images subjected to image processing (smoothing) and evaluated the effectiveness. As a result, the majority correction method is effective for improving the detection rate in digital watermarked images against image degradation.
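The majority correction idea can be sketched abstractly, leaving the DCT-domain embedding itself aside: embed each ID bit several times, and on extraction take a majority vote over the copies so that a few bits corrupted by smoothing do not change the decoded ID. A minimal sketch under those assumptions:

```python
def embed(bits, copies):
    """Replicate each ID bit `copies` times before embedding (redundancy)."""
    return [b for b in bits for _ in range(copies)]

def majority_decode(raw, copies):
    """Recover the ID by majority vote over each bit's copies, so that a
    minority of corrupted copies cannot flip the decoded bit."""
    out = []
    for i in range(0, len(raw), copies):
        group = raw[i:i + copies]
        out.append(1 if sum(group) * 2 > len(group) else 0)
    return out
```

With 5 copies per bit, up to 2 corrupted copies in any group are tolerated.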

Nishio, Masato; Ando, Yutaka; Tsukamoto, Nobuhiro; Kawashima, Hironao; Nakamura, Shinya

2004-04-01

181

FITSH- a software package for image processing  

NASA Astrophysics Data System (ADS)

In this paper we describe the main features of the software package named FITSH, intended to provide a standalone environment for analysis of data acquired by imaging astronomical detectors. The package both provides utilities for the full pipeline of subsequent related data-processing steps (including image calibration, astrometry, source identification, photometry, differential analysis, low-level arithmetic operations, multiple-image combinations, spatial transformations and interpolations) and aids the interpretation of the (mainly photometric and/or astrometric) results. The package also features a consistent implementation of photometry based on image subtraction, point spread function fitting and aperture photometry and provides easy-to-use interfaces for comparisons and for picking the most suitable method for a particular problem. The set of utilities found in this package is built on top of the commonly used UNIX/POSIX shells (hence the name of the package); therefore, both frequently used and well-documented tools for such environments can be exploited and managing a massive amount of data is rather convenient.
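Of the photometry methods FITSH implements, aperture photometry is the simplest: sum the flux inside a circular aperture and subtract a sky level estimated in a surrounding annulus. A hedged NumPy sketch of that basic operation (not FITSH's implementation; the function name and median sky estimator are our choices):

```python
import numpy as np

def aperture_photometry(img, center, r_ap, r_in, r_out):
    """Sum flux inside a circular aperture of radius r_ap around `center`,
    subtracting the median sky level estimated in the annulus r_in..r_out."""
    yy, xx = np.indices(img.shape)
    d = np.hypot(yy - center[0], xx - center[1])
    aperture = d <= r_ap
    sky = np.median(img[(d >= r_in) & (d <= r_out)])
    return img[aperture].sum() - sky * aperture.sum()
```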

Pál, András.

2012-04-01

182

The Teaching Evaluation Process: Segmentation of Marketing Students.  

ERIC Educational Resources Information Center

A study applied the concept of market segmentation to student evaluation of college teaching, by assessing whether there exist several segments of students and how this relates to their evaluation of faculty. Subjects were 156 Australian undergraduate business administration students. Results suggest segments do exist, with different expectations…

Yau, Oliver H. M.; Kwan, Wayne

1993-01-01

183

Pharmacy students' perceptions of the teaching evaluation process in Jordan  

Microsoft Academic Search

Purpose – The purpose of this paper is to assess pharmacy students' perceptions of the usefulness of the teaching evaluation (TE) instrument and the rationale behind their responses. Design/methodology/approach – A comprehensive survey instrument is constructed by the authors. Pharmacy students at the University of Jordan (JU) are asked to complete the survey instrument. The questionnaire is completed during students first

Ibrahim Al-Abbadi; Fadi Alkhateeb; Nile Khanfar; Bahaudin Mujtaba; Latif David

2009-01-01

184

Context Effects in the Teaching-Learning Process.  

ERIC Educational Resources Information Center

A review of research findings on context variables in the classroom and their effects on student achievement provided a framework upon which to base conclusions on the following factors pertaining to effective teaching: (1) teacher control of the classroom; (2) adjustment of teacher control of learning activities to student abilities; (3)…

Soar, Robert S.; Soar, Ruth M.

185

Portable EDITOR (PEDITOR): A portable image processing system. [satellite images  

NASA Technical Reports Server (NTRS)

The PEDITOR image processing system was created to be readily transferable from one type of computer system to another. While nearly identical in function and operation to its predecessor, EDITOR, PEDITOR employs additional techniques which greatly enhance its portability. These cover system structure and processing. In order to confirm the portability of the software system, two different types of computer systems running greatly differing operating systems were used as target machines. A DEC-20 computer running the TOPS-20 operating system and using a Pascal Compiler was utilized for initial code development. The remaining programmers used a Motorola Corporation 68000-based Forward Technology FT-3000 supermicrocomputer running the UNIX-based XENIX operating system and using the Silicon Valley Software Pascal compiler and the XENIX C compiler for their initial code development.

Angelici, G.; Slye, R.; Ozga, M.; Ritter, P.

1986-01-01

186

PET Plants: Imaging Natural Processes for Renewable Energy  

E-print Network

Presentation slides: "PET Plants: Imaging Natural Processes for Renewable Energy from Plants", Benjamin A. Babst, Goldhaber Postdoctoral Fellow, Medical Department, Plant Imaging. Topics include PET imaging for medicine (tumor diagnosis), biomedical research and plants, and Brookhaven's unique capabilities for imaging movement, distribution, and metabolism.

Homes, Christopher C.

187

Development of the SOFIA Image Processing Tool  

NASA Technical Reports Server (NTRS)

The Stratospheric Observatory for Infrared Astronomy (SOFIA) is a Boeing 747SP carrying a 2.5 meter infrared telescope capable of operating at altitudes between twelve and fourteen kilometers, which is above more than 99 percent of the water vapor in the atmosphere. The ability to make observations above most water vapor, coupled with the ability to make observations from anywhere, anytime, makes SOFIA one of the world's premier infrared observatories. SOFIA uses three visible light CCD imagers to assist in pointing the telescope. The data from these imagers is stored in archive files, as is housekeeping data, which contains information such as boresight and area of interest locations. A tool that could both extract and process data from the archive files was developed.

Adams, Alexander N.

2011-01-01

188

HYMOSS signal processing for pushbroom spectral imaging  

NASA Technical Reports Server (NTRS)

The objective of the Pushbroom Spectral Imaging Program was to develop on-focal-plane electronics which compensate for detector array non-uniformities. The approach taken was to implement a simple two point calibration algorithm on the focal plane which allows for offset and linear gain correction. The key on-focal-plane features which made this technique feasible were the use of a high quality transimpedance amplifier (TIA) and an analog-to-digital converter for each detector channel. Gain compensation is accomplished by varying the feedback capacitance of the integrate and dump TIA. Offset correction is performed by storing offsets in a special on-focal-plane offset register and digitally subtracting the offsets from the readout data during the multiplexing operation. A custom integrated circuit was designed, fabricated, and tested on this program which proved that nonuniformity-compensated, analog-to-digital converting circuits may be used to read out infrared detectors. Irvine Sensors Corporation (ISC) successfully demonstrated the following innovative on-focal-plane functions that allow for correction of detector non-uniformities. Most of the circuit functions demonstrated on this program are finding their way onto future ICs because of their impact on reduced downstream processing, increased focal plane performance, simplified focal plane control, reduced number of dewar connections, as well as the noise immunity of a digital interface dewar. The potential commercial applications for this integrated circuit are primarily in imaging systems. These imaging systems may be used for: security monitoring systems, manufacturing process monitoring, robotics, and for spectral imaging when used in analytical instrumentation.
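The two-point calibration described above (per-channel offset subtraction plus linear gain trim) can be written down in a few lines. This software sketch assumes a dark reference frame (zero flux, giving each channel's offset) and a uniformly illuminated flat reference (giving its gain); the on-focal-plane circuit performs the same arithmetic with its offset register and TIA feedback capacitance.

```python
import numpy as np

def two_point_nuc(raw, dark, flat):
    """Two-point non-uniformity correction: subtract each channel's offset
    (dark frame) and rescale by its gain, estimated from a uniform flat
    reference, so a uniform scene reads out uniformly."""
    gain = (flat - dark).mean() / (flat - dark)   # per-channel gain trim
    return (raw - dark) * gain
```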

Ludwig, David E.

1991-01-01

189

Report on using TIPS (Teaching Information Processing System) in teaching physics and astronomy  

NSDL National Science Digital Library

A computer-managed instruction system, TIPS, has been used for over a decade in the teaching of diverse disciplines. This paper describes the recent use of TIPS in physics and astronomy courses at Kansas State University, Memphis State University, University of New Mexico, and University of Wisconsin–Green Bay. Student reactions to TIPS were largely positive, but the degree of success in improving student performance reported in many articles has not been observed.

Folland, Nathan; Marchini, Robert R.; Rhyner, Charles R.; Zeilik, Michael

2005-10-21

190

Localized processing for hyperspectral image analysis  

NASA Astrophysics Data System (ADS)

Target detection is one of the major tasks in hyperspectral image analysis. Constrained Energy Minimization (CEM) is a popular technique for target detection. It designs a finite impulse response filter in such a manner that the filter output energy is minimized subject to a constraint imposed by the desired target of interest. It is particularly useful when only the desired target signature is available. When those undesired signatures to be eliminated are also known, the Target Constrained Interference Minimization Filter (TCIMF) can be used to minimize the output of undesired signatures to further improve the performance. It has been demonstrated that TCIMF can better differentiate targets with similar spectral signatures. Both CEM and TCIMF involve the calculation of the data sample correlation matrix R and its inverse matrix R-1. The function of R-1 is background suppression. When the target to be detected is very small and embedded at the sub-pixel level, it is difficult to detect. But if the data sample correlation matrix R can well represent the statistics of the background surrounding the pixel containing the object such that R-1 can well suppress the background, the target may still have a chance to be detected. So in this paper we propose a localized processing technique. Instead of using all the pixels in an image scene to calculate R, only several lines of pixels near the pixel to be processed are used for the R computation. The preliminary result using an HYDICE image scene demonstrates the effectiveness of such a localized processing technique in the detection of targets at sub-pixel level. Interestingly, in some cases it can also improve the performance of CEM in target discrimination.
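CEM has a standard closed form: given the sample correlation matrix R and desired signature d, the filter is w = R^-1 d / (d^T R^-1 d), which by construction has unit gain on d while minimizing output energy. A compact sketch (global R; the localized variant would build R from only a few image lines around each pixel):

```python
import numpy as np

def cem_filter(X, d):
    """Constrained Energy Minimization filter. X is (pixels x bands), d the
    desired target signature. Returns w = R^-1 d / (d^T R^-1 d), where R is
    the data sample correlation matrix; applying w.T to a pixel suppresses
    background while passing the target signature with unit gain."""
    R = X.T @ X / X.shape[0]          # sample correlation matrix
    Rinv_d = np.linalg.solve(R, d)    # avoids forming R^-1 explicitly
    return Rinv_d / (d @ Rinv_d)
```

The detector output for a pixel x is then simply `w @ x`.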

Du, Qian

2004-12-01

191

The constructive use of images in medical teaching: a literature review  

PubMed Central

This literature review illustrates the various ways images are used in teaching and the evidence pertaining to it, along with advice regarding permissions and use. Four databases were searched; 23 papers were retained out of 135 abstracts found for the study. Images are frequently used to motivate an audience to listen to a lecture or to note key medical findings. Images can promote observation skills when linked with learning outcomes, but the timing and relevance of the images is important; it appears they must be congruent with the dialogue. Student reflection can be encouraged by asking students to actually draw their own impressions of a course as an integral part of course feedback. Careful, structured use of images improves attention, cognition, reflection and possibly memory retention. PMID:22666530

Norris, Elizabeth M

2012-01-01

192

Processing of the Eruptive Prominence Images  

NASA Astrophysics Data System (ADS)

The films of eruptive prominences (EPs) recorded at the Astronomical Institute of the Wroclaw University, Poland (1979 and 1980) were digitized with the automatic microdensitometer at the National Astronomical Observatory Rozhen, Bulgaria. The basic algorithms for processing of the digitized prominence images, determination of the kinematic parameters (velocity and acceleration) of the EP, as well as the dynamics and the evolution of the EP internal structure, are considered. The same magnetic structure is thought to be involved in EPs and flares, but at different spatial and energetic scales.

Koleva, K.; Duchlev, P.; Dechev, M.; Petrov, N.; Kokotanekova, J.; Rompolt, B.; Rudawy, P.

2006-04-01

193

Interactive Computer Assisted Instruction in Teaching of Process Analysis and Simulation.  

ERIC Educational Resources Information Center

To improve the instructional process, time shared computer-assisted instructional methods were developed to teach upper division undergraduate chemical engineering students the concepts of process simulation and analysis. The interactive computer simulation aimed at enabling the student to learn the difficult concepts of process dynamics by…

Nuttall, Herbert E., Jr.; Himmelblau, David M.

194

Imaging fault zones using 3D seismic image processing techniques  

NASA Astrophysics Data System (ADS)

Significant advances in structural analysis of deep water structure, salt tectonics and extensional rift basins come from the descriptions of fault system geometries imaged in 3D seismic data. However, even where seismic data are excellent, in most cases the trajectory of thrust faults is highly conjectural and significant uncertainty still exists as to the patterns of deformation that develop between the main fault segments, and even of the fault architectures themselves. Moreover, structural interpretations that conventionally define faults by breaks and apparent offsets of seismic reflectors are commonly conditioned by a narrow range of theoretical models of fault behavior. For example, almost all interpretations of thrust geometries on seismic data rely on theoretical "end-member" behaviors where concepts such as strain localization or multilayer mechanics are simply avoided. Yet analogue outcrop studies confirm that such descriptions are commonly unsatisfactory and incomplete. In order to fill these gaps and improve the 3D visualization of deformation in the subsurface, seismic attribute methods are developed here in conjunction with conventional mapping of reflector amplitudes (Marfurt & Chopra, 2007). These signal processing techniques, recently developed and applied especially by the oil industry, use variations in the amplitude and phase of the seismic wavelet. These seismic attributes improve the signal interpretation and are calculated and applied to the entire 3D seismic dataset. In this contribution we will show 3D seismic examples of fault structures from gravity-driven deep-water thrust structures and extensional basin systems to indicate how 3D seismic image processing methods can not only improve the geometrical interpretation of the faults but also begin to map both strain and damage through amplitude/phase properties of the seismic signal.
This is done by quantifying and delineating the short-range anomalies in the intensity of reflector amplitudes and collecting these into "disturbance geobodies". These seismic image processing methods represent a first efficient step toward the construction of a robust technique to investigate sub-seismic strain, mapping noisy deformed zones and displacement within subsurface geology (Dutzer et al., 2011; Iacopini et al., 2012). In all these cases, accurate fault interpretation is critical in applied geology to building a robust and reliable reservoir model, and is essential for further study of fault seal behavior and reservoir compartmentalization. They are also fundamental for understanding how deformation localizes within sedimentary basins, including the processes associated with active seismogenetic faults and mega-thrust systems in subduction zones. Dutzer, J.F., Basford, H., Purves, S., 2009, Investigating fault sealing potential through fault relative seismic volume analysis. Petroleum Geology Conference Series 2010, 7:509-515; doi:10.1144/0070509. Marfurt, K.J., Chopra, S., 2007, Seismic attributes for prospect identification and reservoir characterization. SEG Geophysical Development. Iacopini, D., Butler, R.W.H. & Purves, S. (2012). 'Seismic imaging of thrust faults and structural damage: a visualization workflow for deepwater thrust belts'. First Break, vol. 30, no. 5, pp. 39-46.

Iacopini, David; Butler, Rob; Purves, Steve

2013-04-01

195

Mutated bacteriorhodopsins-versatile media in optical image processing  

Microsoft Academic Search

Mutated bacteriorhodopsins are considered as versatile media in optical image processing. The following topics are discussed: the biological function of bacteriorhodopsin (BR); the photocycle of BR; light-controlled absorption of BR films; BR films in transmission- and reflection-type dynamic holograms; holographic interferometry (image processing in the time domain); holographic pattern recognition (image processing in the space domain)

N. Hampp; D. Zeisel

1994-01-01

196

ATM experiment S-056 image processing requirements definition  

NASA Technical Reports Server (NTRS)

A plan is presented for satisfying the image data processing needs of the S-056 Apollo Telescope Mount experiment. The report is based on information gathered from related technical publications, consultation with numerous image processing experts, and experience gained while working on related image processing tasks over a two-year period.

1972-01-01

197

Using Hollywood Movies as a Supplementary Tool to Teach Manufacturing Processes  

NSDL National Science Digital Library

Introductory courses on manufacturing processes are difficult to teach, and it is challenging to deliver the information in an interesting or entertaining way. In one attempt to promote student learning, Hollywood movies have been used as a supplementary tool to teach such a course at Kansas State University. This paper presents the experience of this attempt. Examples of using Hollywood movies are presented and discussed. Students' feedback and comments are also provided.

Pei, Z. J.

2012-01-27

198

DKIST visible broadband imager data processing pipeline  

NASA Astrophysics Data System (ADS)

The Daniel K. Inouye Solar Telescope (DKIST) Data Handling System (DHS) provides the technical framework and building blocks for developing on-summit instrument quality assurance and data reduction pipelines. The DKIST Visible Broadband Imager (VBI) is a first light instrument that alone will create two data streams with a bandwidth of 960 MB/s each. The high data rate and data volume of the VBI require near-real time processing capability for quality assurance and data reduction, and will be performed on-summit using Graphics Processing Unit (GPU) technology. The VBI data processing pipeline (DPP) is the first designed and developed using the DKIST DHS components, and therefore provides insight into the strengths and weaknesses of the framework. In this paper we lay out the design of the VBI DPP, examine how the underlying DKIST DHS components are utilized, and discuss how integration of the DHS framework with GPUs was accomplished. We present our results of the VBI DPP alpha release implementation of the calibration, frame selection reduction, and quality assurance display processing nodes.

Beard, Andrew; Cowan, Bruce; Ferayorni, Andrew

2014-07-01

199

Digital image processing of fundus images using scanning laser ophthalmoscopic images  

NASA Astrophysics Data System (ADS)

Many ocular diseases of the human fundus can be quantified from their visible pathological features. Fundus images are usually recorded on photographic film using a fundus camera that employs very high levels of illumination. Scanning laser ophthalmoscopy is a newer method of imaging the fundus that offers the unique capability of differentiating various pathological conditions using very low light levels, and has the confocal capability to collect light from different layers. The scanning laser ophthalmoscope (SLO) uses a narrow laser beam to illuminate the fundus. Only a single point is illuminated at a time; the fundus is scanned in a raster pattern and the reflected light is detected by a photodiode. Patients with pathological conditions such as cataract, exudates, and macular and optic disc drusen were imaged using lasers ranging from 488 nm to 830 nm. Images were analyzed using Visilog image processing software on a SUN IPX Sparc station. Image processing to quantify the area of exudates, macular drusen and optic disc drusen showed better accuracy when compared to registered digitized fundus photographs. Quantitative analysis of the contrast of retinal vessels in fundus images of cataract patients demonstrated significantly higher contrast for the SLO at all wavelengths tested. Development of a color SLO using multiple lasers and of tomographic imaging using very small confocal apertures is in progress, and preliminary results will be presented.
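Vessel-contrast comparisons of the kind described above are often computed with the Michelson definition, (Imax - Imin)/(Imax + Imin), applied to an intensity profile across the vessel. The paper's exact formula is not stated in the abstract, so this is a generic sketch:

```python
import numpy as np

def michelson_contrast(profile: np.ndarray) -> float:
    """Michelson contrast of an intensity profile, e.g. a line scan
    across a retinal vessel: (Imax - Imin) / (Imax + Imin)."""
    i_max, i_min = float(profile.max()), float(profile.min())
    total = i_max + i_min
    return (i_max - i_min) / total if total > 0 else 0.0
```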

Manivannan, A.; Kirkpatrick, J. N.; Vieira, P.; Sharp, P. F.; Koller, C.; Forrester, J. V.

1996-12-01

200

Ethical and legal aspects on the use of images and photographs in medical teaching and publication.  

PubMed

The aim of the study was to investigate the legal and ethical concerns raised by the use of photographs and images in medical publication. A search of the pertinent literature was performed. It is of paramount importance that the patient's autonomy, privacy and confidentiality are respected. In all cases in which photographs and images contain identifiable information, the patient's consent for any potential use of this material is mandatory. Patients should be aware that, with the evolution of electronic publication, once an image is published there is no efficient control of its future misuse. Physicians and hospitals have a duty to treat with confidentiality any material kept in the patient's medical records. Efforts should be made to anonymise images and photographs used in teaching and publication so that such information does not raise ethical and legal concerns. The procedures for using photographs and images in medical publication and teaching should respect the ethical principles and contain only anonymous information to avoid legal consequences. Continuous scrutiny and reform are required in order to adapt to the changing social and scientific environment. PMID:20671657

Mavroforou, A; Antoniou, G; Giannoukas, A D

2010-08-01

201

Teaching with Pensive Images: Rethinking Curiosity in Paulo Freire's "Pedagogy of the Oppressed"  

ERIC Educational Resources Information Center

Often when the author is teaching philosophy of education, his students begin the process of inquiry by prefacing their questions with something along the lines of "I'm just curious, but ...." Why do teachers and students feel compelled to express their curiosity as "just" curiosity? Perhaps there is a slight embarrassment in proclaiming their…

Lewis, Tyson E.

2012-01-01

202

Educational Use of Toshiba TDF-500 Medical Image Filing System for Teaching File Archiving and Viewing  

NASA Astrophysics Data System (ADS)

The authors have been using the medical image filing system TOSHIBA TDIS-FILE for teaching file archiving and viewing at the University of Tokyo Hospital, Department of Radiology. Image display on a CRT proved sufficient for the purpose of educating small groups of students, as well as residents. However, retrieval time for archived images, the man-machine interface, and financial expense are not yet at a satisfactory level. The authors also implemented a flexible retrieval scheme for diagnostic codes, which has proven sophisticated. Software utilities of this kind, as well as hardware evolution, are essential if such instruments are to be used as a potential component of a PACS. In our department, a PACS project is being carried out. In the system, a TOSHIBA AS3160 workstation (=SUN 3/160) handles all user interfaces, including control of medical image displays, examination databases, and the interface with the HIS.

Kimura, Michio; Yashiro, Naobumi; Kita, Koichi; Tani, Yuichiro; IIO, Masahiro

1989-05-01

203

Multispectral inverse problems in satellite image processing  

Microsoft Academic Search

Satellite imaging is nowadays one of the main sources of geophysical and environmental information. It is, therefore, extremely important to be able to solve the corresponding inverse problem: reconstructing the actual geophysics- or environment-related image from the observed noisy data. Traditional image reconstruction techniques have been developed for the case when we have a single observed image. This case corresponds

Scott A. Starks; Vladik Kreinovich

1998-01-01

204

Networks for image acquisition, processing and display  

NASA Technical Reports Server (NTRS)

The human visual system comprises layers of networks which sample, process, and code images. Understanding these networks is a valuable means of understanding human vision and of designing autonomous vision systems based on network processing. Ames Research Center has an ongoing program to develop computational models of such networks. The models predict human performance in detection of targets and in discrimination of displayed information. In addition, the models are artificial vision systems sharing properties with biological vision that has been tuned by evolution for high performance. Properties include variable density sampling, noise immunity, multi-resolution coding, and fault-tolerance. The research stresses analysis of noise in visual networks, including sampling, photon, and processing unit noises. Specific accomplishments include: models of sampling array growth with variable density and irregularity comparable to that of the retinal cone mosaic; noise models of networks with signal-dependent and independent noise; models of network connection development for preserving spatial registration and interpolation; multi-resolution encoding models based on hexagonal arrays (HOP transform); and mathematical procedures for simplifying analysis of large networks.

Ahumada, Albert J., Jr.

1990-01-01

205

HABE real-time image processing  

NASA Astrophysics Data System (ADS)

The HABE system performs real-time autonomous acquisition, pointing and tracking (ATP). The goal of the experiment, sponsored by the Ballistic Missile Defense Organization and administered by the US Air Force Research Laboratory, Kirtland AFB, Albuquerque, NM, is to demonstrate the acquisition, tracking and pointing technologies needed for an effective space-based missile defense system. The three-sensor tracking system includes two IR cameras for passive tracking of a missile plume and an intensified visible camera used to capture the return of a high-energy laser pulse reflected by the missile's nose. The HABE real-time image processor uses the images captured by each sensor to find a track point. The VME-based hardware includes four Compaq Computer Corporation Alpha processors and seven Texas Instruments TMS320C4X processors. The C4x comports and the VME bus provide the pathways needed for inter-processor communications. The software design implements a list-processing approach to command and control, which provides for flexible task redefinition, addition, and deletion while minimizing the need for code changes. The design is implemented in C. Several system performance metrics are described and tabulated.

Krainak, Joseph C.

1999-07-01

206

Image processing for automated erythrocyte classification.  

PubMed

Digital image processing and pattern recognition techniques were applied to determine the feasibility of a natural n-space subgrouping of normal and abnormal peripheral blood erythrocytes into well separated categories. The data consisted of 325 digitized red cells from 11 different cell classes. The analysis resulted in five features: (a) size, (b) roundness, (c) spicularity, (d) eccentricity and (e) central gray level distribution. These features separated the data into six distinct condensed subgroups of red cells. Each subgroup consisted of morphologically similar cells: (a) macrocytes, (b) normocytes, (c) schistocytes, acanthocytes and burr cells, (d) microcytes and spherocytes, (e) elliptocytes, sickle cells and pencil forms and (f) target cells. The concept of a quantitative "red cell differential" was introduced, utilizing these subgroup definitions to establish subpopulations of red cells, with quantifiable indices for the diagnosis of anemia, at the specimen level. PMID:1254916
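Two of the five features listed above can be sketched directly from a binary cell mask. The paper's exact feature definitions are not given in the abstract, so the roundness measure below (area relative to the centroid-centered disk that just encloses the cell) is an illustrative stand-in:

```python
import numpy as np

def cell_features(mask: np.ndarray):
    """Illustrative versions of two red-cell features:
    (a) size  = pixel area of the binary mask;
    (b) roundness = area / (pi * r_max**2), where r_max is the largest
        centroid-to-pixel distance; ~1.0 for a disk-like normocyte,
        smaller for elongated (elliptocyte) or spiculated shapes."""
    ys, xs = np.nonzero(mask)
    area = len(xs)
    cy, cx = ys.mean(), xs.mean()
    r_max = np.sqrt((ys - cy) ** 2 + (xs - cx) ** 2).max()
    roundness = area / (np.pi * r_max ** 2) if r_max > 0 else 1.0
    return area, roundness
```

Features like these, stacked into an n-dimensional vector per cell, are what allow the subgrouping described in the abstract.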

Bacus, J W; Belanger, M G; Aggarwal, R K; Trobaugh, F E

1976-01-01

207

Image processing of 2D resistivity data for imaging faults  

NASA Astrophysics Data System (ADS)

A methodology to automatically locate limits or boundaries between different geological bodies in 2D electrical tomography is proposed, using a crest-line extraction process in gradient images. The method is applied to several synthetic models and to field data sets acquired at three experimental sites, where trenches were dug, during the European project PALEOSIS. The results presented in this work are valid for electrical tomography data collected with a Wenner-alpha array and computed with an L1-norm (blocky inversion) optimization method. For the synthetic cases, three geometric contexts are modelled: a vertical fault and a dipping fault, each juxtaposing two different geological formations, and a step-like structure. A superficial layer can cover each geological structure. In these three situations, the method locates the synthetic faults and layer boundaries, and determines fault displacement, but with several limitations. The estimated fault positions correlate exactly with the synthetic ones if a conductive (or no) superficial layer overlies the studied structure. When a resistive layer with a thickness of 6 m covers the model, faults are positioned with a maximum error of 1 m. Moreover, when a resistive and/or thick top layer is present, the resolution significantly decreases for the fault displacement estimation (error up to 150%). The tests with the synthetic models for surveys using the Wenner-alpha array indicate that the proposed methodology is best suited to vertical and horizontal contacts. Application of the methodology to real data sets shows that a lateral resistivity contrast of 1:5-1:10 leads to exact fault location. A fault contact with a resistivity contrast of 1:0.75, overlaid by a resistive layer with a thickness of 1 m, gives a location error ranging from 1 to 3 m. Moreover, no result is obtained for a contact with very low contrast (˜1:0.85) overlaid by a resistive soil. The method shows poor results when vertical gradients are greater than horizontal ones. This kind of image processing technique should be used systematically to improve the objectivity of tomography interpretation when looking for limits between geological objects.
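The crest-line idea above can be reduced to a minimal 1-D sketch: candidate limits between bodies appear where the lateral resistivity gradient has a local maximum. The real method extracts crest lines from 2-D gradient images of the inverted section; this simplification is only illustrative:

```python
import numpy as np

def boundary_positions(profile: np.ndarray):
    """Much-simplified 1-D version of crest-line extraction: indices where
    the absolute lateral gradient of a resistivity profile is a local
    maximum are taken as candidate limits between geological bodies."""
    g = np.abs(np.gradient(profile.astype(np.float64)))
    return [i for i in range(1, len(g) - 1)
            if g[i] > g[i - 1] and g[i] >= g[i + 1]]
```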

Nguyen, F.; Garambois, S.; Jongmans, D.; Pirard, E.; Loke, M. H.

2005-07-01

208

Modeling as a Teaching Learning Process for Understanding Materials: A Case Study in Primary Education  

ERIC Educational Resources Information Center

Modeling is being used in teaching learning science in a number of ways. It will be considered here as a process whereby children of primary school age exercise their capacity of organizing recognizable and manageable forms during their understanding of complex phenomenologies. The aim of this work is to characterize this process in relation to…

Acher, Andres; Arca, Maria; Sanmarti, Neus

2007-01-01

209

The Reflections of Layered Curriculum to Learning-Teaching Process in Social Studies Course  

ERIC Educational Resources Information Center

The purpose of this research is to set the effect of Layered Curriculum on learning-teaching processes. The research was conducted on 2011-2012 educational year. The implementation process, which lasted for 4 weeks, was carried out with the theme named "The World of All of Us" in Social Studies lesson at 5th grade. Observation and interview…

Gun, Emine Seda

2013-01-01

210

On Anisotropic Diffusion in 3D image processing and image sequence analysis  

Microsoft Academic Search

A morphological multiscale method for 3D image and 3D image sequence processing is discussed which identifies edges on level sets and the motion of features in time. Based on these indicator evaluations, the image data are processed by applying nonlinear diffusion and the theory of geometric evolution problems. The aim is to smooth level sets of a 3D image while simultaneously

Karol Mikula; Martin Rumpf; Fiorella Sgallari

211

On Anisotropic Geometric Diffusion in 3D Image Processing and Image Sequence Analysis  

Microsoft Academic Search

A morphological multiscale method for 3D image and 3D image sequence processing is discussed which identifies edges on level sets and the motion of features in time. Based on these indicator evaluations, the image data are processed by applying nonlinear diffusion and the theory of geometric evolution problems. The aim is to smooth level sets of a 3D image while preserving

Karol Mikula; Martin Rumpf; Fiorella Sgallari

2002-01-01

212

Nonlinear Processing of Large Scale Satellite Images via Unsupervised Clustering and Image Segmentation  

Microsoft Academic Search

For large-scale satellite images, it is inevitable that images will be affected by various uncertain factors, especially those from the atmosphere. To minimize the impact of atmospheric dispersion, image segmentation is an essential procedure. As one of the most critical means of image processing and data analysis, segmentation is to classify an image into parts that have a

Jiecai Luo; Zhengmao Ye; Pradeep Bhattacharya

2005-01-01

213

A forensic image processing environment for investigation of surveillance video  

Microsoft Academic Search

We present an image processing software suite, based on the Matlab environment, specifically designed to be used as a forensic tool by law enforcement laboratories in the analysis of crime scene videos and images. Our aim is to overcome some drawbacks which normally appear when using standard image processing tools for this application, i.e. mainly the lack of full control

M. Jerian; S. Paolino; F. Cervelli; S. Carrato; A. Mattei; L. Garofano

2007-01-01

214

IPL processing of the Viking orbiter images of Mars  

Microsoft Academic Search

The Viking orbiter cameras returned over 9000 images of Mars during the 6-month nominal mission. Digital image processing was required to produce products suitable for quantitative and qualitative scientific interpretation. Processing included the production of surface elevation data using computer stereophotogrammetric techniques, crater classification based on geomorphological characteristics, and the generation of color products using multiple black-and-white images recorded through

R. M. Ruiz; D. A. Elliott; G. M. Yagi; R. B. Pomphrey; M. A. Power; W. Farrell Jr.; J. J. Lorre; W. D. Benton; R. E. Dewar; L. E. Cullen

1977-01-01

215

IPL Processing of the Viking Orbiter Images of Mars  

Microsoft Academic Search

The Viking orbiter cameras returned over 9000 images of Mars during the 6-month nominal mission. Digital image processing was required to produce products suitable for quantitative and qualitative scientific interpretation. Processing included the production of surface elevation data using computer stereophotogrammetric techniques, crater classification based on geomorphological characteristics, and the generation of color products using multiple black-and-white images recorded through

Reuben M. Ruiz; Denis A. Elliott; Gary M. Yagi; Richard B. Pomphrey; Margaret A. Power; K. Winslow Farrell; Jean J. Lorre; William D. Benton; Robert E. Dewar; Louise E. Cullen

1977-01-01

216

DSP real-time operating system coordinates image processing  

Microsoft Academic Search

Because of their high-speed arithmetic, interrupt processing, and I/O facilities, DSPs are a natural fit for applications that require image capture, control, and processing. In some cases, DSPs are used as controllers in conjunction with specialized imaging ASICs. In other applications, one or more DSPs are used to execute both control and core imaging functions like FFTs, filters, and correlations. The

M. Grosen

1996-01-01

217

Image Forensics of Digital Cameras by Analysing Image Variations using Statistical Process Control  

E-print Network

This paper describes the novel use of Statistical Process Control (SPC) as a tool for identifying anomalies in digital cameras, in order to deduce the cause of inconsistency in the device's image acquisition process. This could ultimately lead

Doran, Simon J.

218

Query Processing Issues in Image (Multimedia) Databases  

Microsoft Academic Search

Multimedia databases have attracted academic and industrial interest, and systems such as QBIC (Content Based Image Retrieval system from IBM) have been released. Such systems are essential to effectively and efficiently use the existing large collections of image data in the modern computing environment. The aim of such systems is to enable retrieval of images based on their

Surya Nepal; M. V. Ramakrishna

1999-01-01

219

Digital image processing in the Xerox DocuTech document processing system  

NASA Astrophysics Data System (ADS)

This paper describes the real-time image processing features in the Xerox DocuTech document processing system. The image processing features offered include image enhancement, halftone screen removal (de-screening) with an FIR low-pass filter, and an image segmentation algorithm developed by Xerox, which can be used for documents containing text and high-frequency halftone images, such as pages from typical magazines. The image segmentation algorithm uses a modified autocorrelation function approach to detect halftone areas on the document. With this set of image processing features, it is possible to handle a wide variety of input documents on the scanner and generate high-quality output prints.
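The de-screening step described above can be sketched with a generic FIR low-pass filter. The actual DocuTech filter kernel is not given in the abstract, so a simple boxcar (moving-average) stand-in is applied here to a single scanline:

```python
import numpy as np

def descreen_scanline(scanline: np.ndarray, taps: int = 5) -> np.ndarray:
    """Illustrative FIR low-pass (boxcar) filter on one scanline:
    attenuates the high-frequency halftone screen while keeping the
    low-frequency image content. The real de-screening filter would be
    a kernel designed for the specific screen frequency."""
    kernel = np.ones(taps) / taps
    return np.convolve(scanline.astype(np.float64), kernel, mode="same")
```

A segmentation front end (like the autocorrelation detector mentioned above) would gate this filter so text regions are left sharp.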

Lin, Ying-Wei

1994-03-01

220

Using NASA Space Imaging Technology to Teach Earth and Sun Topics  

NASA Astrophysics Data System (ADS)

We teach an experimental college-level course, directed toward elementary education majors, emphasizing "hands-on" activities that can be easily applied to the elementary classroom. This course, Physics 240: "The Sun-Earth Connection", includes various ways to study selected topics in physics, earth science, and basic astronomy. Our lesson plans and EPO materials make extensive use of NASA imagery and cover topics on magnetism; the solar photospheric, chromospheric, and coronal spectra; and earth science and climate. In addition, we are developing and will cover topics on ecosystem structure, biomass, and water on Earth. We strive to free the non-science undergraduate from the "fear of science" and replace it with the excitement of science, such that these future teachers will carry this excitement to their future students. Hands-on experiments, computer simulations, analysis of real NASA data, and vigorous seminar discussions are blended in an inquiry-driven curriculum designed to instill in the non-science student a confident understanding of basic physical science and modern, effective methods for teaching it. The course also demonstrates ways in which scientific thinking and hands-on activities can be implemented in the classroom. Most of the topics were selected using the National Science Standards and National Mathematics Standards addressed in grades K-8. The course focuses on helping education majors: 1) build knowledge of scientific concepts and processes; 2) understand the measurable attributes of objects and the units and methods of measurement; 3) conduct data analysis (collecting, organizing, and presenting scientific data, and predicting results); 4) use hands-on approaches to teach science; 5) become familiar with Internet science teaching resources. Here we share our experiences and the challenges we face while teaching this course.

Verner, E.; Bruhweiler, F. C.; Long, T.

2011-12-01

221

Three-dimensional image processing for synthetic holographic stereograms  

E-print Network

A digital image processing technique is presented that allows conventionally produced images to be prepared for undistorted printing in one-step holographic stereograms. This technique effectively predistorts the source ...

Holzbach, Mark

1987-01-01

222

Viewpoints on Medical Image Processing: From Science to Application  

PubMed Central

Medical image processing provides core innovation for medical imaging. This paper focuses on recent developments from science to application, analyzing the past fifteen years of the proceedings of the German annual meeting on medical image processing (BVM). Furthermore, some members of the program committee present their personal points of view: (i) multi-modality for imaging and diagnosis, (ii) analysis of diffusion-weighted imaging, (iii) model-based image analysis, (iv) registration of section images, (v) from images to information in digital endoscopy, and (vi) virtual reality and robotics. Medical imaging and medical image computing are seen as fields of rapid development with clear trends toward integrated applications in diagnostics, treatment planning and treatment. PMID:24078804

Deserno (ne Lehmann), Thomas M.; Handels, Heinz; Maier-Hein (ne Fritzsche), Klaus H.; Mersmann, Sven; Palm, Christoph; Tolxdorff, Thomas; Wagenknecht, Gudrun; Wittenberg, Thomas

2013-01-01

223

Teaching about the physics of medical imaging (Lat. Am. J. Phys. Educ., Vol. 6, Suppl. I, August 2012, p. 122, http://www.lajpe.org)  

E-print Network

Teaching about the physics of medical imaging: examples of research-based teaching materials, developed because these topics are not taught in beginning physics courses, to help students understand the connection between physics and medical imaging. The project's website is http://web.phys.ksu.edu/mmmm/. Keywords: teaching physics, medical imaging, optics.

Zollman, Dean

224

ESO C Library for an Image Processing Software Environment (eclipse)  

Microsoft Academic Search

Written in ANSI C, eclipse is a library offering numerous services related to astronomical image processing: FITS data access, various image and cube loading methods, binary image handling and filtering (including convolution and morphological filters), 2-D cross-correlation, connected components, cube and image arithmetic, dead pixel detection and correction, object detection, data extraction, flat-fielding with robust fit, image generation, statistics, photometry,

N. Devillard

2001-01-01

225

DTV color and image processing: past, present, and future  

NASA Astrophysics Data System (ADS)

The image processor in digital TV has started to play an important role due to customers' growing desire for higher image quality. Customers want more vivid and natural images without any visual artifacts. Image processing techniques aim to meet these needs in spite of the physical limitations of the panel. In this paper, developments in image processing techniques for DTV, in conjunction with developments in display technologies at Samsung R&D, are reviewed. The algorithms introduced cover techniques required to solve problems caused by the characteristics of the panel itself, and techniques for enhancing the image quality of input signals, optimized for the panel and for human visual characteristics.

Kim, Chang-Yeong; Lee, SeongDeok; Park, Du-Sik; Kwak, Youngshin

2006-01-01

226

Image processing software for imaging spectrometry data analysis  

NASA Technical Reports Server (NTRS)

Imaging spectrometers simultaneously collect image data in hundreds of spectral channels, from the near-UV to the IR, and can thereby provide direct surface materials identification by means resembling laboratory reflectance spectroscopy. Attention is presently given to a software system, the Spectral Analysis Manager (SPAM) for the analysis of imaging spectrometer data. SPAM requires only modest computational resources and is composed of one main routine and a set of subroutine libraries. Additions and modifications are relatively easy, and special-purpose algorithms have been incorporated that are tailored to geological applications.

Mazer, Alan; Martin, Miki; Lee, Meemong; Solomon, Jerry E.

1988-01-01

227

The Study of Image Processing Method for AIDS PA Test  

NASA Astrophysics Data System (ADS)

At present, the main test technique for AIDS in China is PA. Because judgment of the PA test image still depends on the operator, the error rate is high. To resolve this problem, we present a new image processing technique: it first processes many samples to obtain reference data, including the coordinates of the centers and the ranges of the image classes; the image is then segmented using these data; finally, the result is exported after the data have been judged. This technique is simple and accurate, and it also proves suitable for processing and analyzing the PA test images of other infectious diseases.
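The judgment step described above (stored class centers plus allowed ranges) can be sketched as a nearest-center classifier. All names, feature dimensions, and thresholds below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def judge_sample(feature: np.ndarray, class_centers: dict, class_ranges: dict):
    """Assign a sample's feature vector to the nearest class center whose
    allowed range it falls within; otherwise report it as indeterminate,
    mirroring the abstract's center-coordinate/range judgment step."""
    best, best_d = "indeterminate", np.inf
    for name, center in class_centers.items():
        d = np.linalg.norm(feature - center)
        if d <= class_ranges[name] and d < best_d:
            best, best_d = name, d
    return best
```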

Zhang, H. J.; Wang, Q. G.

2006-10-01

228

A Solution for Satellite Image Processing on Grids  

Microsoft Academic Search

Remote sensing image processing is both data- and computing-intensive. Grid technologies currently provide powerful tools for remote sensing data sharing and processing. After an overview of recent initiatives for gridifying satellite image processing, two specific usage scenarios are analyzed. The proposed solution is based on freely distributed, general-purpose software: the latest versions of the Globus Toolkit, GIMP

Dana Petcu

2006-01-01

229

An Overview of DNA Microarray Image Requirements for Automated Processing  

Microsoft Academic Search

We present an overview of DNA microarray image requirements for automated processing and information extraction from spotted glass slides. Motivation of our review comes from the need to automate high-throughput microarray data processing due to exponentially growing amounts of microarray data. In order to automate microarray image processing and draw biologically meaningful conclusions from experiments, one has to understand the

Peter Bajcsy

2005-01-01

230

UCIPE: Ubiquitous Context-Based Image Processing Engine for Medical Image Grid  

Microsoft Academic Search

Medical diagnosis and intervention increasingly rely upon medical image processing tools that are bound with high-cost hardware, designed for special diseases, and incapable of being shared by common medical terminals. In this paper, we present our Ubiquitous Context-based Image Processing Engine (UCIPE) for MedImGrid (Medical Image Grid). It encapsulates image processing algorithms as WS (Web Services), and creates virtual algorithm

Aobing Sun; Hai Jin; Ran Zheng; Ruhan He; Qin Zhang; Wei Guo; Song Wu

2007-01-01

231

Comparative study of image restoration techniques in forensic image processing  

Microsoft Academic Search

In this work we investigated the forensic applicability of some state-of-the-art image restoration techniques for digitized video-images and photographs: classical Wiener filtering, constrained maximum entropy, and some variants of constrained minimum total variation. Basic concepts and experimental results are discussed. Because all methods appeared to produce different results, a discussion is given of which method is the most suitable, depending
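Classical Wiener filtering, the first restoration technique compared above, has a standard frequency-domain form G = H* / (|H|^2 + NSR), where H is the blur transfer function. The sketch below assumes a constant noise-to-signal power ratio, which is the textbook simplification:

```python
import numpy as np

def wiener_deconvolve(blurred: np.ndarray, psf: np.ndarray, nsr: float = 0.01):
    """Classical frequency-domain Wiener filter: build the restoration
    transfer function G = conj(H) / (|H|^2 + NSR) from the blur PSF and
    apply it to the degraded image. `nsr` is the assumed (constant)
    noise-to-signal power ratio."""
    H = np.fft.fft2(psf, s=blurred.shape)          # zero-padded PSF spectrum
    G = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * G))
```

As nsr goes to zero this reduces to an inverse filter, which is exactly the noise-amplification trade-off such comparative studies examine.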

Jurrien Bijhold; Arjan Kuijper; Jaap-Harm Westhuis

1997-01-01

232

Cardiovascular Imaging and Image Processing: Theory and Practice - 1975  

NASA Technical Reports Server (NTRS)

Ultrasonography was examined with regard to the developmental highlights and present applications of cardiac ultrasound. Doppler ultrasonic techniques and the technology of miniature acoustic element arrays were reported. X-ray angiography was discussed with special consideration of quantitative three-dimensional dynamic imaging of the structure and function of the cardiopulmonary and circulatory systems in all regions of the body. Nuclear cardiography and scintigraphy, three-dimensional imaging of the myocardium with isotopes, and the commercialization of the echocardioscope were studied.

Harrison, Donald C. (editor); Sandler, Harold (editor); Miller, Harry A. (editor); Hood, Manley J. (Editor); Purser, Paul E. (Editor); Schmidt, Gene (Editor)

1975-01-01

233

CT Image Processing and Medical Rapid Prototyping  

Microsoft Academic Search

In this paper, three-dimensional (3D) models of body structures are reconstructed from computed tomography (CT) images in SolidWorks using the planar contour method, and medical rapid prototyping (MRP) models are produced. This paper reports how to transfer CT images into digital binary matrices; then, how to capture section contour points from the medical image for each slice and to create a B-spline curve with

Jiman Han; Yi Jia

2008-01-01

234

Optimizing signal and image processing applications using Intel libraries  

NASA Astrophysics Data System (ADS)

This paper presents optimized signal and image processing libraries from Intel Corporation. Intel Performance Primitives (IPP) is a low-level signal and image processing library developed by Intel Corporation to optimize code on Intel processors. Open Computer Vision library (OpenCV) is a high-level library dedicated to computer vision tasks. This article describes the use of both libraries to build flexible and efficient signal and image processing applications.

Landré, Jérôme; Truchetet, Frédéric

2007-01-01

235

On digital image processing technology and application in geometric measure  

NASA Astrophysics Data System (ADS)

Digital image processing is an emerging science that has developed along with semiconductor integrated circuit technology and computer science since the 1960s. The article introduces digital image processing techniques and principles for measurement, compared with traditional optical measurement methods. It takes geometric measurement as an example and discusses the development trends of digital image processing technology from the perspective of technology application.

Yuan, Jiugen; Xing, Ruonan; Liao, Na

2014-04-01

236

Image-Processing Software For A Hypercube Computer  

NASA Technical Reports Server (NTRS)

Concurrent Image Processing Executive (CIPE) is a software system intended for developing and using image-processing application programs in a concurrent computing environment. Designed to shield the programmer from the complexities of concurrent-system architecture, it provides an interactive image-processing environment for the end user. CIPE utilizes the architectural characteristics of a particular concurrent system to maximize efficiency while preserving architectural independence for the user and programmer. CIPE runs on a Mark-IIIfp 8-node hypercube computer and an associated SUN-4 host computer.

Lee, Meemong; Mazer, Alan S.; Groom, Steven L.; Williams, Winifred I.

1992-01-01

237

The Process of Teaching and Learning about Reflection: Research Insights from Professional Nurse Education  

ERIC Educational Resources Information Center

The study aimed to investigate the process of reflection in professional nurse education and the part it played in a teaching and learning context. The research focused on the social construction of reflection within a post-registration, palliative care programme, accessed by nurses, in the United Kingdom (UK). Through an interpretive ethnographic…

Bulman, Chris; Lathlean, Judith; Gobbi, Mary

2014-01-01

238

The Emergence of the Teaching/Learning Process in Preschoolers: Theory of Mind and Age Effect  

ERIC Educational Resources Information Center

This study analysed the gradual emergence of the teaching/learning process by examining theory of mind (ToM) acquisition and age effects in the preschool period. We observed five dyads performing a jigsaw task drawn from a previous study. Three stages were identified. In the first one, the teacher focuses on the execution of her/his own task…

Bensalah, Leila

2011-01-01

239

Metaphors in Mathematics Classrooms: Analyzing the Dynamic Process of Teaching and Learning of Graph Functions  

ERIC Educational Resources Information Center

This article presents an analysis of a phenomenon that was observed within the dynamic processes of teaching and learning to read and elaborate Cartesian graphs for functions at high-school level. Two questions were considered during this investigation: What types of metaphors does the teacher use to explain the graphic representation of functions…

Font, Vicenc; Bolite, Janete; Acevedo, Jorge

2010-01-01

240

ICCE/ICCAI 2000 Full & Short Papers (Teaching and Learning Processes).  

ERIC Educational Resources Information Center

This document contains the full and short papers on teaching and learning processes from ICCE/ICCAI 2000 (International Conference on Computers in Education/International Conference on Computer-Assisted Instruction) covering the following topics: a code restructuring tool to help scaffold novice programmers; efficient study of Kanji using…

2000

241

Teaching Processes and Practices for an Australian Multicultural Classroom: Two Complementary Models  

ERIC Educational Resources Information Center

Which pedagogical processes and practices that target the recognition, value and sharing of world views in teaching and learning can be identified as strategies for learning to live together in an Australian multicultural classroom? The question is addressed by this paper, which presents two discrete but complementary pedagogical models that…

Winch-Dummett, Carlene

2004-01-01

242

Incentives and Motivation in the Teaching-Learning Process: The Role of Teacher Intentions.  

ERIC Educational Resources Information Center

The theory of "reasoned action" is applied to the teaching-learning process. This theory asserts that people use the information available to them in a reasonable manner to arrive at their decisions and that a person's behavior follows logically and systematically from whatever information he has available. To illustrate application of the theory…

Menges, Robert J.

243

Students' Cognitive Processes While Learning from Teaching. Final Report (Volume One).  

ERIC Educational Resources Information Center

Research is reported on the cognitive mediational paradigm which postulates that teachers influence students' learning by causing them to think and behave in particular ways during teaching. Four studies are reported. The first describes five teachers and their students and explores, in classroom lessons, the cognitive processes students used in…

Winne, Philip H.; Marx, Ronald W.

244

Students' Cognitive Processes While Learning from Teaching. Final Report: Appendices. (Volume Two).  

ERIC Educational Resources Information Center

These appendices present the protocols used in research (reported in Volume 1) on the cognitive processes of students while learning from teaching. Curriculum outlines are given for the videotaped lessons used in the second and third studies: lessons in sleep and elementary psychology. Included in the appendices are: (1) the illustrative script…

Winne, Philip H.; Marx, Ronald W.

245

Exploring the Process of Integrating the Internet into English Language Teaching  

ERIC Educational Resources Information Center

The present paper explores the process of integrating the Internet into the field of English language teaching in the light of the following points: the general importance of the Internet in our everyday lives shedding some light on the increasing importance of the Internet as a new innovation in our modern life; benefits of using the Internet in…

Abdallah, Mahmoud Mohammad Sayed

2007-01-01

246

Two satellite image sets for the training and validation of image processing systems for defense applications  

Microsoft Academic Search

Many image processing algorithms utilize the discrete wavelet transform (DWT) to provide efficient compression and near-perfect reconstruction of image data. Defense applications often require the transmission of data at high levels of compression over noisy channels. In recent years, evolutionary algorithms (EAs) have been utilized to optimize image transform filters that outperform standard wavelets for bandwidth-constrained compression of satellite images.

Michael R. Peterson; Shawn Aldridge; Britny Herzog; Frank Moore

2010-01-01

247

Translational motion compensation in ISAR image processing  

Microsoft Academic Search

In inverse synthetic aperture radar (ISAR) imaging, the target rotational motion with respect to the radar line of sight contributes to the imaging ability, whereas the translational motion must be compensated out. This paper presents a novel two-step approach to translational motion compensation using an adaptive range tracking method for range bin alignment and a recursive multiple-scatterer algorithm (RMSA) for
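
The range-bin alignment step can be illustrated with a much simpler stand-in for the paper's adaptive range tracking: circular cross-correlation of each range profile against a reference. The following NumPy sketch assumes synthetic one-dimensional range profiles; all names are invented for the example:

```python
import numpy as np

def align_profiles(profiles):
    """Shift each range profile to best match the first one, using circular
    cross-correlation (a simplified stand-in for adaptive range tracking)."""
    ref = profiles[0]
    aligned = [ref]
    for p in profiles[1:]:
        # cross-correlate via FFT; the peak location gives the range shift
        xc = np.fft.ifft(np.fft.fft(ref) * np.conj(np.fft.fft(p))).real
        shift = int(np.argmax(xc))
        aligned.append(np.roll(p, shift))
    return np.array(aligned)

n = 64
base = np.exp(-0.5 * ((np.arange(n) - 20) / 2.0) ** 2)   # scatterer at bin 20
profiles = [np.roll(base, d) for d in (0, 3, 7, 12)]     # translational drift
aligned = align_profiles(profiles)
print([int(np.argmax(p)) for p in aligned])              # all peaks realigned to bin 20
```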

Haiqing Wu; Dominic Grenier; Gilles Y. Delisle; Da-Gang Fang

1995-01-01

248

An image display interface for astronomical image processing  

NASA Astrophysics Data System (ADS)

The use made of image display systems in astronomical data reduction is discussed, and a conceptual model of an image display device that supports these uses and is similar to commonly available hardware is developed. This model was used as the basis for designing a set of image display device interface routines. The interface routines support both low- and high-end display devices and a variety of device configurations. The use of high-end imaging workstations is also supported, although the interface does not attempt to provide a windowing environment. Device configuration choices may be handled via use of one or more standard settings or by use of a negotiated configuration. The interfaces are presented in a language independent fashion with language specific bindings for FORTRAN and C. Several implementations of the early versions of this interface have been built and have demonstrated the practicality of the interface, including the actual port of image display applications software from one device to another.

Terrett, D. L.; Shames, P. M. B.; Hanisch, R. J.; Albrecht, R.; Banse, K.

1988-12-01

249

Low-light-level image system with real-time digital image processing function  

Microsoft Academic Search

In night vision applications, because the light reflected by a target is very weak, the image captured by a low-light-level camera carries a great deal of noise. To remove such noise, the image must be processed in real time. In this paper a real-time image processing system built around an embedded high-speed DSP device is constructed.

Wusen Li; Zeying Chi; Wenjian Chen

2001-01-01

250

SUSAN - A New Approach to Low Level Image Processing  

Microsoft Academic Search

This paper describes a new approach to low level image processing; in particular, edge and corner detection and structure-preserving noise reduction. Non-linear filtering is used to define which parts of the image are closely related to each individual pixel; each pixel has associated with it a local image region which is of similar brightness to that pixel. The new
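
The core SUSAN idea, associating with each pixel the local area of similar brightness, can be sketched as follows. This is an illustrative square-window version in NumPy, not the circular mask or the full detector of the paper; the threshold and test image are invented:

```python
import numpy as np

def usan_area(img, t=0.1, radius=1):
    """For each pixel (the 'nucleus'), count neighbours whose brightness is
    within t of it: the USAN area. Small areas indicate edges and corners.
    Illustrative square window, not the paper's circular mask."""
    h, w = img.shape
    area = np.zeros((h, w), dtype=int)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            area += (np.abs(shifted - img) <= t).astype(int)
    return area

img = np.zeros((8, 8))
img[:, 4:] = 1.0                 # vertical step edge
a = usan_area(img)
print(a[4, 1], a[4, 4])          # full USAN in flat region, reduced USAN on the edge
```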

Stephen M. Smith; J. Michael Brady

1997-01-01

251

Image Processing Techniques for the Quantification of Atherosclerotic Changes  

E-print Network

for the examination and follow-up of the arteriosclerotic changes due to hypertension, with the help of digital image processing of fundus images. This method would help in evaluating the efficacy of various treatments for fundus image enhancement. Our method is based on segmenting the vasculature by identifying the centerline

Fisher, Bob

252

Research and implementation of remote sensing image processing method  

Microsoft Academic Search

C# is a modern, object-oriented programming language that is gradually coming into wide use because of its precision and simplicity. This article uses the C# programming language to implement, for example, gray-scale and linear transformations, achieving the desired results in remote sensing image processing. This method not only enhances the visual effect of the remote sensing image but also improves the remote sensing image

Niu Yaqin; Li Jin

2011-01-01

253

IEEE TRANSACTIONS ON IMAGE PROCESSING 1 Recovery of Surface Orientation  

E-print Network

Recovery of Surface Orientation From Diffuse Polarization is concerned with exploiting polarization by surface reflection, using images of smooth dielectric objects, starting with the Fresnel equations. These equations are used to interpret images taken

Martin, Ralph R.

254

Experiences with digital processing of images at INPE  

NASA Technical Reports Server (NTRS)

Four different research experiments with digital image processing at INPE will be described: (1) edge detection by hypothesis testing; (2) image interpolation by finite impulse response filters; (3) spatial feature extraction methods in multispectral classification; and (4) translational image registration by sequential tests of hypotheses.

Mascarenhas, N. D. A. (principal investigator)

1984-01-01

255

Teaching about Due Process of Law. ERIC Digest.  

ERIC Educational Resources Information Center

Fundamental constitutional and legal principles are central to effective instruction in the K-12 social studies curriculum. To become competent citizens, students need to develop an understanding of the principles on which their society and government are based. Few principles are as important in the social studies curriculum as due process of…

Vontz, Thomas S.

256

Teaching Information Literacy and Scientific Process Skills: An Integrated Approach.  

ERIC Educational Resources Information Center

Describes an online searching and scientific process component taught as part of the laboratory for a general zoology course. The activities were designed to be gradually more challenging, culminating in a student-developed final research project. Student evaluations were positive, and faculty indicated that student research skills transferred to…

Souchek, Russell; Meier, Marjorie

1997-01-01

257

Kagan Structures, Processing, and Excellence in College Teaching  

ERIC Educational Resources Information Center

Frequent student processing of lecture content (1) clears working memory, (2) increases long-term memory storage, (3) produces retrograde memory enhancement, (4) creates episodic memories, (5) increases alertness, and (6) activates many brain structures. These outcomes increase comprehension of and memory for content. Many professors now…

Kagan, Spencer

2014-01-01

258

Using quantum filters to process images of diffuse axonal injury  

NASA Astrophysics Data System (ADS)

Some images corresponding to a diffuse axonal injury (DAI) are processed using several quantum filters, such as Hermite, Weibull and Morse. Diffuse axonal injury is a particular, common and severe case of traumatic brain injury (TBI). DAI involves global damage of brain tissue on a microscopic scale and causes serious neurologic abnormalities. New imaging techniques provide excellent images showing cellular damage related to DAI. These images can be processed with quantum filters, which achieve high resolution of dendritic and axonal structures in both normal and pathological states. Using the Laplacian operators from the new quantum filters, excellent edge detectors for neurofiber resolution are obtained. Quantum processing of DAI images is carried out using computer algebra, specifically Maple. The construction of quantum filter plugins is proposed as a future research line; such plugins could be incorporated into the ImageJ software package, making the method simpler for medical personnel to use.
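
The use of Laplacian operators as edge detectors can be illustrated generically. The sketch below applies a plain 5-point discrete Laplacian in NumPy to a synthetic bright line standing in for a fibre; the Hermite, Weibull and Morse quantum filters themselves are not reproduced here:

```python
import numpy as np

def laplacian(img):
    """5-point discrete Laplacian: strong responses mark intensity
    discontinuities such as fibre boundaries."""
    lap = np.zeros_like(img)
    lap[1:-1, 1:-1] = (img[:-2, 1:-1] + img[2:, 1:-1] +
                       img[1:-1, :-2] + img[1:-1, 2:] -
                       4 * img[1:-1, 1:-1])
    return lap

# synthetic "fibre": a bright horizontal line on a dark background
img = np.zeros((16, 16))
img[8, :] = 1.0
edges = np.abs(laplacian(img))
print(int(edges[:, 5].argmax()))   # strongest response lands on the fibre row
```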

Pineda Osorio, Mateo

2014-06-01

259

Pattern Recognition and Image Processing of Infrared Astronomical Satellite Images  

Microsoft Academic Search

The Infrared Astronomical Satellite (IRAS) images with wavelengths of 60 µm and 100 µm contain mainly information on both extra-galactic sources and low-temperature interstellar media. The low-temperature interstellar media in the Milky Way impose a

Lun Xiong He

1996-01-01

260

Image data processing of earth resources management. [technology transfer  

NASA Technical Reports Server (NTRS)

Various image processing and information extraction systems are described along with the design and operation of an interactive multispectral information system, IMAGE 100. Analyses of ERTS data, using IMAGE 100, over a number of U.S. sites are presented. The following analyses are included: (1) investigations of crop inventory and management using remote sensing; and (2) land cover classification for environmental impact assessments. Results show that useful information is provided by IMAGE 100 analyses of ERTS data in digital form.

Desio, A. W.

1974-01-01

261

High resolution image processing on low-cost microcomputers  

NASA Technical Reports Server (NTRS)

Recent advances in microcomputer technology have resulted in systems that rival the speed, storage, and display capabilities of traditionally larger machines. Low-cost microcomputers can provide a powerful environment for image processing. A new software program which offers sophisticated image display and analysis on IBM-based systems is presented. Designed specifically for a microcomputer, this program provides a wide range of functions normally found only on dedicated graphics systems, and therefore can provide most students, universities and research groups with an affordable computer platform for processing digital images. The processing of AVHRR images within this environment is presented as an example.

Miller, R. L.

1993-01-01

262

The processing and fusion on low light level image and infrared thermal image  

Microsoft Academic Search

In this paper, based on an analysis of the characteristics of low-light-level (LLL) images and infrared (IR) thermal images, research on the processing and fusion of LLL and IR images is carried out. Dual-channel image registration techniques for LLL and IR images are put forward, including a digital real-time shift technique and a dual-channel fusion stereoscopic color display technique.

Lianfa Bai; Yi Zhang; Chuang Zhang; Weixian Qian; Baomin Zhang

2006-01-01

263

A fusion method for visible and infrared images based on contrast pyramid with teaching learning based optimization  

NASA Astrophysics Data System (ADS)

This paper proposes a novel image fusion scheme based on the contrast pyramid (CP) with teaching-learning-based optimization (TLBO) for visible and infrared images of complicated scenes under different spectra. First, CP decomposition is applied to every level of each original image. Then, TLBO is introduced to optimize the fusion coefficients, which are updated in the teaching phase and learner phase of TLBO, so that the weighted coefficients are automatically adjusted according to a fitness function, namely the evaluation standards of image quality. Finally, the fusion results are obtained by the inverse CP transformation. Experimental results show that, compared with existing methods, our method is effective and the fused images are more suitable for further human visual or machine perception.
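
The TLBO component can be sketched independently of the pyramid machinery. The following NumPy code is a minimal TLBO (teacher phase plus learner phase) minimizing a toy fitness; in the paper the fitness would instead be an image-quality measure over CP fusion coefficients, and all parameter choices here are invented:

```python
import numpy as np

def tlbo(fitness, dim=4, pop=20, iters=100, lo=-1.0, hi=1.0, seed=0):
    """Minimal teaching-learning-based optimization (lower fitness is better)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, (pop, dim))
    f = np.array([fitness(x) for x in X])
    for _ in range(iters):
        # teacher phase: move everyone toward the best learner
        teacher = X[f.argmin()]
        TF = rng.integers(1, 3)                       # teaching factor, 1 or 2
        X_new = np.clip(X + rng.random((pop, dim)) * (teacher - TF * X.mean(axis=0)), lo, hi)
        f_new = np.array([fitness(x) for x in X_new])
        better = f_new < f
        X[better], f[better] = X_new[better], f_new[better]
        # learner phase: learn from a random classmate (toward if better, away if worse)
        partners = rng.permutation(pop)
        toward = f[partners] < f
        direction = np.where(toward[:, None], X[partners] - X, X - X[partners])
        X_new = np.clip(X + rng.random((pop, dim)) * direction, lo, hi)
        f_new = np.array([fitness(x) for x in X_new])
        better = f_new < f
        X[better], f[better] = X_new[better], f_new[better]
    return X[f.argmin()], float(f.min())

best, val = tlbo(lambda x: float(np.sum(x ** 2)))
print(round(val, 4))   # best fitness found (near zero)
```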

Jin, Haiyan; Wang, Yanyan

2014-05-01

264

Vector-valued image processing by parallel level sets.  

PubMed

Vector-valued images such as RGB color images or multimodal medical images show a strong interchannel correlation, which is not exploited by most image processing tools. We propose a new notion of treating vector-valued images which is based on the angle between the spatial gradients of their channels. Through minimizing a cost functional that penalizes large angles, images with parallel level sets can be obtained. After formally introducing this idea and the corresponding cost functionals, we discuss their Gâteaux derivatives that lead to a diffusion-like gradient descent scheme. We illustrate the properties of this cost functional by several examples in denoising and demosaicking of RGB color images. They show that parallel level sets are a suitable concept for color image enhancement. Demosaicking with parallel level sets gives visually perfect results for low noise levels. Furthermore, the proposed functional yields sharper images than the other approaches in comparison. PMID:23955746
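
The central quantity, the angle between the spatial gradients of two channels, can be computed directly. A small NumPy sketch with synthetic channels (parallel versus orthogonal level sets); all names are invented for the example:

```python
import numpy as np

def grad(ch):
    """Forward-difference spatial gradient of one channel."""
    gy = np.zeros_like(ch)
    gx = np.zeros_like(ch)
    gy[:-1, :] = ch[1:, :] - ch[:-1, :]
    gx[:, :-1] = ch[:, 1:] - ch[:, :-1]
    return gx, gy

def gradient_angle(ch1, ch2, eps=1e-12):
    """Per-pixel angle between the spatial gradients of two channels.
    Parallel level sets correspond to an angle of 0 (or pi) everywhere."""
    g1x, g1y = grad(ch1)
    g2x, g2y = grad(ch2)
    dot = g1x * g2x + g1y * g2y
    norm = np.hypot(g1x, g1y) * np.hypot(g2x, g2y) + eps
    return np.arccos(np.clip(dot / norm, -1.0, 1.0))

ys, xs = np.mgrid[0:16, 0:16]
ch1 = (xs + ys).astype(float)      # diagonal ramp
ch2 = 2.0 * (xs + ys)              # parallel level sets
ch3 = (xs - ys).astype(float)      # orthogonal level sets
a12 = gradient_angle(ch1, ch2)
a13 = gradient_angle(ch1, ch3)
print(round(float(a12[5, 5]), 3), round(float(a13[5, 5]), 3))  # ~0 vs ~pi/2
```

A cost functional penalizing large values of this angle is what drives the channels toward parallel level sets.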

Ehrhardt, Matthias Joachim; Arridge, Simon R

2014-01-01

265

FREQUENCY-BASED SIGNAL PROCESSING FOR ULTRASOUND COLOR FLOW IMAGING  

Microsoft Academic Search

In ultrasound color flow imaging, the computation of flow estimates is well-recognized as a challenging problem from a signal processing perspective. The flow visualization performance of this imaging tool is often affected by error sources such as the lack of abundant signal samples available for processing, the presence of wideband clutter in the acquired signals, and the flow signal distortions

Alfred C. H. Yu; K. Wayne Johnston; Richard S. C. Cobbold

2007-01-01

266

Spinor Fourier Transform for Image Processing Thomas Batard, Michel Berthier  

E-print Network

We propose in this paper to introduce a new spinor Fourier transform for both grey-level and color image processing. Our Fourier transform may be used to perform frequency filtering that takes into account the Riemannian

Paris-Sud XI, Université de

267

Real-Time Image Processing on a Custom Computing Platform  

Microsoft Academic Search

The authors explore the utility of custom computing machinery for accelerating the development, testing, and prototyping of a diverse set of image processing applications. We chose an experimental custom computing platform called Splash-2 to investigate this approach to prototyping real time image processing designs. Custom computing platforms are emerging as a class of computers that can provide near application specific

Peter M. Athanas; A. Lynn Abbott

1995-01-01

268

Determination of SATI Instrument Filter Parameters by Processing Interference Images  

Microsoft Academic Search

This paper presents a method for determining interference filter parameters, such as the effective refraction index and the maximal transmittance wavelength, by processing images of a spectrogram produced by the Spectrometer Airglow Temperature Imager (SATI) instrument. The method employs radial sections for determination of points from the crests and valleys in the

Atanas Marinov Atanassov

2010-01-01

269

Teaching Heritage  

NSDL National Science Digital Library

Subtitled "a professional development Website for teachers," Teaching Heritage is an impressive collection of information and resources for teaching Australian history and culture. There are eight main sections to the site: four offer teaching resources and four provide teaching units. The resource sections include an examination of different ways of defining heritage, an Australian heritage timeline, discussions of different approaches to teaching heritage through media, and outcomes-based approaches in teaching and assessing heritage coursework. The teaching units deal in depth with issues of citizenship, nationalism, Australian identities, and new cultural values. A Heritage Gallery features images of various culturally significant or representative places in Australia, such as New Italy, the Dundullimal Homestead, Australian Hall, Kelly's Bush, and many more. Obviously, teachers of Civics on the southern continent will find this site extremely useful, but the teaching units -- rich with texts and images -- also offer fascinating introductions for anyone interested in the issues of Australian nation-making.

270

Efficient image processing in Fourier holography for moving images  

Microsoft Academic Search

For still images, diffraction-based beam-forming of laser light is a well-known task in optics called Fourier holography. Using a phase-only diffractive optical element (DOE) which displays a computer-generated hologram (CGH) very efficient beam forming is realized. The drawbacks of this technique are massive computation costs for phase-only holograms. In this paper, we briefly describe this diffraction-based technology. Then, we depict

Marc Bernau

2009-01-01

271

Graphical user interface for image acquisition and processing  

DOEpatents

An event-driven, GUI-based image acquisition interface for the IDL programming environment, designed for CCD camera control and image acquisition directly into the IDL environment, where image manipulation and data analysis can be performed, together with a toolbox of real-time analysis applications. Running the image acquisition hardware directly from IDL removes the necessity of first saving images in one program and then importing the data into IDL for analysis in a second step. Bringing the data directly into IDL creates an opportunity for the implementation of IDL image processing and display functions in real time. The program allows control over the available charge-coupled device (CCD) detector parameters, data acquisition, file saving and loading, and image manipulation and processing, all from within IDL. The program is built using IDL's widget libraries to control the on-screen display and user interface.

Goldberg, Kenneth A. (Berkeley, CA)

2002-01-01

272

Optical Processing of Speckle Images with Bacteriorhodopsin for Pattern Recognition  

NASA Technical Reports Server (NTRS)

Logarithmic processing of images with multiplicative noise characteristics can be utilized to transform the image into one with an additive noise distribution. This simplifies subsequent image processing steps for applications such as image restoration or correlation for pattern recognition. One particularly common form of multiplicative noise is speckle, for which the logarithmic operation not only produces additive noise, but also makes it of constant variance (signal-independent). We examine the optical transmission properties of some bacteriorhodopsin films here and find them well suited to implement such a pointwise logarithmic transformation optically in a parallel fashion. We present experimental results of the optical conversion of speckle images into transformed images with additive, signal-independent noise statistics using the real-time photochromic properties of bacteriorhodopsin. We provide an example of improved correlation performance in terms of correlation peak signal-to-noise for such a transformed speckle image.
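
The pointwise logarithm's effect on multiplicative speckle, producing additive and approximately signal-independent noise, is easy to verify numerically. A NumPy sketch using the fully developed speckle (unit-mean exponential) model; the paper performs this operation optically with bacteriorhodopsin rather than in software:

```python
import numpy as np

rng = np.random.default_rng(1)
signal = np.concatenate([np.full(50_000, 10.0), np.full(50_000, 100.0)])
speckle = rng.exponential(1.0, signal.size)   # fully developed speckle model
observed = signal * speckle                   # multiplicative noise

# before the log, the noise standard deviation scales with the signal ...
print(round(float(observed[:50_000].std())), round(float(observed[50_000:].std())))

# ... after the log, it is approximately constant (signal-independent)
log_obs = np.log(observed)
print(round(float(log_obs[:50_000].std()), 2), round(float(log_obs[50_000:].std()), 2))
```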

Downie, John D.; Tucker, Deanne (Technical Monitor)

1994-01-01

273

Image processing for drift compensation in fluorescence microscopy  

NASA Astrophysics Data System (ADS)

Fluorescence microscopy is characterized by low background noise, thus a fluorescent object appears as an area of high signal/noise. Thermal gradients may result in apparent motion of the object, leading to a blurred image. Here, we have developed an image processing methodology that may remove/reduce blur significantly for any type of microscopy. A total of ~100 images were acquired with a pixel size of 30 nm. The acquisition time for each image was approximately 1 second. We can quantify the drift in X and Y using the centroid location of an image object, computed with sub-pixel accuracy in each frame. We can measure drifts down to approximately 10 nm in size, and a drift-compensated image can therefore be reconstructed on a grid of the same size using the "Shift and Add" approach, leading to an image of identical size as the individual image. We have also reconstructed the image using a 3-fold larger grid with a pixel size of 10 nm. The resulting images reveal details at the diffraction limit. In principle we can only compensate for inter-image drift; thus the drift that takes place during the acquisition time for the individual image is not corrected. We believe that our results are of general applicability in microscopy and other types of imaging. A prerequisite for our method is the presence of a trackable object in the image, such as a cell nucleus.
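
The centroid tracking and "Shift and Add" reconstruction can be sketched with synthetic frames. This is integer-pixel compensation on the original grid only (the paper also reconstructs on a 3-fold finer grid); the Gaussian "object" and all names are invented:

```python
import numpy as np

def centroid(img):
    """Intensity-weighted centroid: the sub-pixel object position."""
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    total = img.sum()
    return float((ys * img).sum() / total), float((xs * img).sum() / total)

def shift_and_add(frames):
    """Register every frame to the first via its centroid drift, then average."""
    cy0, cx0 = centroid(frames[0])
    out = np.zeros_like(frames[0])
    for f in frames:
        cy, cx = centroid(f)
        # integer drift compensation (inter-image drift only)
        out += np.roll(f, (round(cy0 - cy), round(cx0 - cx)), axis=(0, 1))
    return out / len(frames)

def spot(y, x, n=32):
    """Synthetic fluorescent object: a small Gaussian blob."""
    ys, xs = np.mgrid[0:n, 0:n]
    return np.exp(-((ys - y) ** 2 + (xs - x) ** 2) / 4.0)

frames = [spot(16, 16), spot(18, 15), spot(13, 19)]   # apparent thermal drift
stacked = shift_and_add(frames)
cy, cx = centroid(stacked)
print(round(cy, 1), round(cx, 1))   # object restored near (16, 16)
```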

Petersen, Steffen B.; Viruthachalam, Thiagarajan; Coutinho, Isabel; Gajula, Gnana P.; Neves-Petersen, Maria Teresa

2013-02-01

274

Gaussian process interpolation for uncertainty estimation in image registration.  

PubMed

Intensity-based image registration requires resampling images on a common grid to evaluate the similarity function. The uncertainty of interpolation varies across the image, depending on the location of resampled points relative to the base grid. We propose to perform Bayesian inference with Gaussian processes, where the covariance matrix of the Gaussian process posterior distribution estimates the uncertainty in interpolation. The Gaussian process replaces a single image with a distribution over images that we integrate into a generative model for registration. Marginalization over resampled images leads to a new similarity measure that includes the uncertainty of the interpolation. We demonstrate that our approach increases the registration accuracy and propose an efficient approximation scheme that enables seamless integration with existing registration methods. PMID:25333127
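
The key property, interpolation uncertainty that depends on where a resampled point falls relative to the base grid, already shows up in a one-dimensional Gaussian process sketch. This is generic GP regression with an assumed RBF kernel, not the paper's registration model; all parameters are invented:

```python
import numpy as np

def gp_posterior(x_train, y_train, x_test, length=1.0, noise=1e-6):
    """GP regression with an RBF kernel: posterior mean and variance.
    The variance quantifies interpolation uncertainty at each test point."""
    k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)
    K = k(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = k(x_test, x_train)
    Kss = k(x_test, x_test)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks @ alpha
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.diag(cov)

grid = np.arange(0.0, 6.0)        # base image grid (1D for illustration)
values = np.sin(grid)             # "pixel" intensities
x_test = np.array([2.0, 2.5])     # on-grid vs. between-grid resampling
mean, var = gp_posterior(grid, values, x_test)
print(bool(var[0] < var[1]))      # uncertainty grows away from the grid points
```

Integrating this location-dependent variance into the similarity measure is what distinguishes the approach from treating every resampled intensity as equally reliable.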

Wachinger, Christian; Golland, Polina; Reuter, Martin; Wells, William

2014-01-01

275

Computer image processing in traffic engineering  

SciTech Connect

This book describes techniques through which television images of automobile traffic can be analyzed by computers to provide solutions to problems of traffic management or road design. It outlines the current experience of the use of these techniques along with results of the author's own research. Finally, the potential impact of these methods on traffic engineering is discussed.

Hoose, N.

1991-01-01

276

Optical Signal Processing: Poisson Image Restoration and Shearing Interferometry  

NASA Technical Reports Server (NTRS)

Optical signal processing can be performed in either digital or analog systems. Digital computers and coherent optical systems are discussed as they are used in optical signal processing. Topics include: image restoration; phase-object visualization; image contrast reversal; optical computation; image multiplexing; and fabrication of spatial filters. Digital optical data processing deals with restoration of images degraded by signal-dependent noise. When the input data of an image restoration system are the numbers of photoelectrons received from various areas of a photosensitive surface, the data are Poisson distributed with mean values proportional to the illuminance of the incoherently radiating object and background light. Optical signal processing using coherent optical systems is also discussed. Following a brief review of the pertinent details of Ronchi's diffraction grating interferometer, moire effect, carrier-frequency photography, and achromatic holography, two new shearing interferometers based on them are presented. Both interferometers can produce variable shear.

Hong, Yie-Ming

1973-01-01

277

A model for simulation and processing of radar images  

NASA Technical Reports Server (NTRS)

A model for recording, processing, presentation, and analysis of radar images in digital form is presented. The observed image is represented as having two random components. One component models the variation due to the coherent addition of electromagnetic energy scattered from different objects in the illuminated area; this component is referred to as fading. The other component is a representation of the terrain variation, which can be described as the actual signal the radar is attempting to measure. The combination of these two components describes radar images as the output of a linear space-variant filter operating on the product of the fading and terrain random processes. In addition, the model is applied to a digital image processing problem through the design and implementation of an enhancement scheme. Finally, parallel approaches are being employed as possible means of solving other processing problems such as SAR image map-matching, data compression, and pattern recognition.
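
The two-component model can be simulated directly: a terrain signal multiplied by unit-mean fading, passed through a linear filter. The NumPy sketch below uses a space-invariant box filter as a special case of the model's space-variant filter, and an exponential fading law as an assumption; the terrain and all names are invented:

```python
import numpy as np

rng = np.random.default_rng(7)

# terrain: the reflectivity signal the radar is trying to measure
terrain = np.ones((128, 128))
terrain[:, 64:] = 4.0

# fading: unit-mean multiplicative noise from coherent addition
fading = rng.exponential(1.0, terrain.shape)

def box_filter(img, k=3):
    """Space-invariant special case of the model's linear filter."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / k ** 2

observed = box_filter(terrain * fading)     # the model's observed image
left, right = observed[:, :60], observed[:, 68:]
# local means track the two terrain levels; spread scales with the signal
print(round(float(left.mean()), 1), round(float(right.mean()), 1))
```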

Stiles, J. A.; Frost, V. S.; Shanmugam, K. S.; Holtzman, J. C.

1981-01-01

278

Computer vision applications for coronagraphic optical alignment and image processing  

E-print Network

Modern coronagraphic systems require very precise alignment between optical components and can benefit greatly from automated image processing. We discuss three techniques commonly employed in the fields of computer vision and image analysis as applied to the Gemini Planet Imager, a new facility instrument for the Gemini South Observatory. We describe how feature extraction and clustering methods can be used to aid in automated system alignment tasks, and also present a search algorithm for finding regular features in science images used for calibration and data processing. Along with discussions of each technique, we present our specific implementation and show results of each one in operation.

Savransky, Dmitry; Poyneer, Lisa A; Macintosh, Bruce A; 10.1364/AO.52.003394

2013-01-01

279

Subway tunnel crack identification algorithm research based on image processing  

NASA Astrophysics Data System (ADS)

The detection of cracks in tunnels has a profound impact on tunnel safety. Low contrast, uneven illumination, and severe noise pollution are common in tunnel surface images. As traditional image processing algorithms are not suitable for detecting tunnel cracks, a new image processing method for detecting cracks in surface images of subway tunnels is presented in this paper. The algorithm includes two steps. The first step is preprocessing, which uses global and local methods simultaneously. The second step is the elimination of different types of noise based on connected components. The experimental results show that the proposed algorithm is effective for detecting tunnel surface cracks.
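
The second step, noise elimination via connected components, can be sketched in pure NumPy/Python: label the components of a binary crack mask, then drop those too small to be cracks. The size threshold and the toy "crack" are invented for the example:

```python
import numpy as np
from collections import deque

def connected_components(mask):
    """4-connected component labelling via BFS (no external dependencies)."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for y, x in zip(*np.nonzero(mask)):
        if labels[y, x]:
            continue
        current += 1
        labels[y, x] = current
        q = deque([(y, x)])
        while q:
            cy, cx = q.popleft()
            for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = current
                    q.append((ny, nx))
    return labels, current

def remove_small_components(mask, min_size=5):
    """Drop components too small to be cracks (noise elimination step)."""
    labels, n = connected_components(mask)
    keep = [i for i in range(1, n + 1) if (labels == i).sum() >= min_size]
    return np.isin(labels, keep)

img = np.zeros((12, 12), dtype=bool)
img[2, 1:11] = True                  # a long, thin "crack"
img[7, 3] = img[9, 8] = True         # isolated noise pixels
clean = remove_small_components(img)
print(int(img.sum()), int(clean.sum()))   # 12 -> 10: noise removed, crack kept
```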

Bai, Biao; Zhu, Liqiang; Wang, Yaodong

2014-04-01

280

The Goddard Space Flight Center Program to develop parallel image processing systems  

NASA Technical Reports Server (NTRS)

Parallel image processing which is defined as image processing where all points of an image are operated upon simultaneously is discussed. Coherent optical, noncoherent optical, and electronic methods are considered parallel image processing techniques.

Schaefer, D. H.

1972-01-01

281

Hadoop plugin for distributed and parallel image processing  

Microsoft Academic Search

Hadoop Distributed File System (HDFS) is widely used in large-scale data storage and processing. HDFS uses the MapReduce programming model for parallel processing. The work presented in this paper proposes a novel Hadoop plugin to process image files with the MapReduce model. The plugin introduces image-related I/O formats and novel classes for creating records from input files. HDFS is especially designed
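
The MapReduce flow such a plugin builds on (map each whole-image record, shuffle by key, reduce) can be imitated in-process. This is a toy pure-Python sketch, not Hadoop API code; the histogram mapper and all names are invented:

```python
from collections import defaultdict

def mapper(name, pixels):
    """One record per whole image (what an image input format would provide);
    emit ((image, histogram-bin), 1) pairs."""
    for p in pixels:
        yield (name, p // 64), 1

def reducer(key, counts):
    yield key, sum(counts)

def run_mapreduce(records, mapper, reducer):
    """Tiny in-process MapReduce: map, shuffle by key, reduce."""
    shuffled = defaultdict(list)
    for name, pixels in records:
        for key, value in mapper(name, pixels):
            shuffled[key].append(value)
    out = {}
    for key, values in sorted(shuffled.items()):
        for k, v in reducer(key, values):
            out[k] = v
    return out

records = [("a.png", [0, 10, 200, 255]), ("b.png", [70, 70])]
print(run_mapreduce(records, mapper, reducer))
# {('a.png', 0): 2, ('a.png', 3): 2, ('b.png', 1): 2}
```

In real Hadoop, the record-creation step is exactly what the proposed plugin's input formats supply, so the mapper sees whole images rather than arbitrary byte splits.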

Ilginc Demir; Ahmet Sayar

2012-01-01

282

Noticing Children's Learning Processes--Teachers Jointly Reflect on Their Own Classroom Interaction for Improving Mathematics Teaching  

ERIC Educational Resources Information Center

One could focus on many different aspects of improving the quality of mathematics teaching. For a better understanding of children's mathematical learning processes or teaching and learning in general, reflection on and analysis of concrete classroom situations are of major importance. On the basis of experiences gained in a collaborative research…

Scherer, Petra; Steinbring, Heinz

2006-01-01

283

Co-Teaching through Modeling Processes: Professional Development of Students and Instructors in a Teacher Training Program  

ERIC Educational Resources Information Center

In this article, a unique model of instruction based on co-teaching carried out in the framework of the practice teaching program intended for third year college students is presented. The program incorporated principles of modeling based on processes of mentoring, instruction, and discussion, showing the students the pedagogical importance of…

Bashan, Bilha; Holsblat, Rachel

2012-01-01

284

IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 6, NO. 7, JULY 1997 901 Digital Color Imaging  

E-print Network

This paper surveys the technology and research in the area of digital color imaging. In order to establish the background and lay down the foundations, it begins with the nature of the color sensing mechanisms in the human eye.

Sharma, Gaurav

285

Digital image acquisition and processing in medical x-ray imaging  

Microsoft Academic Search

This contribution discusses a selection of today's techniques and future concepts for digital x-ray imaging in medicine. Advantages of digital imaging over conventional analog methods include the possibility to archive and transmit images in digital information systems as well as to digitally process pictures before display, for example, to enhance low contrast details. After reviewing two digital x-ray radiography

Til Aach; Ulrich Schiebel; Gerhard Spekowius

1999-01-01

286

Muscular Ideal Media Images and Men’s Body Image: Social Comparison Processing and Individual Vulnerability  

Microsoft Academic Search

The study aimed to investigate the role of social comparison processes in men’s responses to images of muscular-ideal male beauty. A sample of 104 male university students viewed either 15 television commercials containing images of men who epitomize the current muscular ideal, or 15 nonappearance commercials containing no such images. Body satisfaction was assessed immediately before and after commercial viewing.

Duane A. Hargreaves; Marika Tiggemann

2009-01-01

287

Processing of polarimetric SAR images. Final report  

SciTech Connect

The objective of this work was to develop a systematic method of combining multifrequency polarized SAR images. It is shown that the traditional methods of correlation, hard targets, and template matching fail to produce acceptable results. Hence, a new algorithm was developed and tested. The new approach combines the three traditional methods and an interpolation method. An example is shown that demonstrates the new algorithm's performance. The results are summarized and suggestions for future research are presented.

Warrick, A.L.; Delaney, P.A. [Univ. of Arizona, Tucson, AZ (United States)]

1995-09-01

288

Knowledge-based aerial image understanding systems and expert systems for image processing  

SciTech Connect

This paper discusses roles of artificial intelligence in the automatic interpretation of remotely sensed imagery. The authors first discuss several image understanding systems for analyzing complex aerial photographs. The discussion is mainly concerned with knowledge representation and control structure in the aerial image understanding systems: a blackboard model for integrating diverse object detection modules, a symbolic model representation for three-dimensional object recognition, and integration of bottom-up and top-down analyses. Then, a model of expert systems for image processing is introduced that discusses which combinations of image processing operators are effective for analyzing an image.

Matsuyama, T.

1987-05-01

289

Diagnosis of skin cancer using image processing  

NASA Astrophysics Data System (ADS)

In this paper a methodology for classifying skin cancer in images of dermatologic spots, based on spectral analysis using the K-law Fourier nonlinear technique, is presented. The image is segmented and binarized to build the function that contains the area of interest. The image is divided into its RGB channels to obtain the spectral properties of each channel. The green channel contains the most information and is therefore always chosen. This information is multiplied point by point by a binary mask, and to the result a Fourier transform written in nonlinear form is applied. Where the real part of this spectrum is positive, the spectral density takes unit values; otherwise it is zero. Finally, the ratio of the sum of the unit values of the spectral density to the sum of the values of the binary mask is calculated. This ratio is called the spectral index. When the calculated value is in the spectral index range, three types of cancer can be detected. Values found outside this range indicate benign lesions.
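The spectral-index computation described in the abstract can be sketched as below. This is a hedged reconstruction: the exponent `k` of the K-law nonlinearity and the function names are assumptions, not values from the paper.

```python
import numpy as np

def spectral_index(green_channel, mask, k=0.3):
    """Sketch of the described pipeline: mask the green channel,
    apply a k-law nonlinear Fourier transform (magnitude raised to k,
    phase preserved), binarize the spectrum by the sign of its real
    part, and normalize by the mask area."""
    roi = green_channel * mask                  # point-to-point product
    F = np.fft.fft2(roi)
    # K-law nonlinearity: compress the magnitude, keep the phase.
    F_k = np.abs(F) ** k * np.exp(1j * np.angle(F))
    density = (F_k.real > 0).astype(float)      # unit values where Re > 0
    return density.sum() / mask.sum()
```

Classification then reduces to checking whether the returned ratio falls inside the empirically determined spectral-index range.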

Guerra-Rosas, Esperanza; Álvarez-Borrego, Josué; Coronel-Beltrán, Ángel

2014-10-01

290

On the Development of a Programming Teaching Tool: The Effect of Teaching by Templates on the Learning Process  

ERIC Educational Resources Information Center

One of the major issues related to teaching an introductory programming course is the excessive amount of time spent on the language's syntax, which leaves little time for developing skills in program design and solution creativity. The wide variation in the students' backgrounds, coupled with the traditional classroom (one size-fits-all) teaching

Al-Imamy, Samer; Alizadeh, Javanshir; Nour, Mohamed A.

2006-01-01

291

The emergence of the teaching/learning process in preschoolers: theory of mind and age effect  

Microsoft Academic Search

This study analysed the gradual emergence of the teaching/learning process by examining theory of mind (ToM) acquisition and age effects in the preschool period. We observed five dyads performing a jigsaw task drawn from a previous study. Three stages were identified. In the first one, the teacher focuses on the execution of her/his own task without paying any attention to

Leïla Bensalah

2011-01-01

292

Metaphors in mathematics classrooms: analyzing the dynamic process of teaching and learning of graph functions  

Microsoft Academic Search

This article presents an analysis of a phenomenon that was observed within the dynamic processes of teaching and learning to read and elaborate Cartesian graphs for functions at high-school level. Two questions were considered during this investigation: What types of metaphors does the teacher use to explain the graphic representation of functions at high-school level? Is the teacher aware of

Janete Bolite; Jorge Acevedo

2010-01-01

293

Optimization of super-resolution processing using incomplete image sets in PET imaging.  

PubMed

Super-resolution (SR) techniques are used in PET imaging to generate a high-resolution image by combining multiple low-resolution images that have been acquired from different points of view (POVs). The number of low-resolution images used defines the processing time and memory storage necessary to generate the SR image. In this paper, the authors propose two optimized SR implementations (ISR-1 and ISR-2) that require only a subset of the low-resolution images (two sides and diagonal of the image matrix, respectively), thereby reducing the overall processing time and memory storage. In an N x N matrix of low-resolution images, ISR-1 would be generated using images from the two sides of the N x N matrix, while ISR-2 would be generated from images across the diagonal of the image matrix. The objective of this paper is to investigate whether the two proposed SR methods can achieve similar performance in contrast and signal-to-noise ratio (SNR) as the SR image generated from a complete set of low-resolution images (CSR) using simulation and experimental studies. A simulation, a point source, and a NEMA/IEC phantom study were conducted for this investigation. In each study, 4 (2 x 2) or 16 (4 x 4) low-resolution images were reconstructed from the same acquired data set while shifting the reconstruction grid to generate images from different POVs. SR processing was then applied in each study to combine all as well as two different subsets of the low-resolution images to generate the CSR, ISR-1, and ISR-2 images, respectively. For reference purpose, a native reconstruction (NR) image using the same matrix size as the three SR images was also generated. The resultant images (CSR, ISR-1, ISR-2, and NR) were then analyzed using visual inspection, line profiles, SNR plots, and background noise spectra. The simulation study showed that the contrast and the SNR difference between the two ISR images and the CSR image were on average 0.4% and 0.3%, respectively. 
Line profiles of the point source study showed that the three SR images exhibited similar signal amplitudes and FWHM. The NEMA/IEC study showed that the average difference in SNR among the three SR images was 2.1% with respect to one another and they contained similar noise structure. ISR-1 and ISR-2 can be used to replace CSR, thereby reducing the total SR processing time and memory storage while maintaining similar contrast, resolution, SNR, and noise structure. PMID:19175132
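The two reduced POV subsets (ISR-1 and ISR-2) can be enumerated in a few lines. This is a sketch under an assumed index convention for "two sides" and "diagonal"; the helper names are hypothetical and the paper's handling of unsampled grid positions (interpolation) is omitted.

```python
def povs_two_sides(n):
    """ISR-1-style subset: POVs along two sides of the n x n shift
    grid (row 0 and column 0), sharing the corner (0, 0)."""
    side1 = [(0, j) for j in range(n)]
    side2 = [(i, 0) for i in range(1, n)]   # skip the duplicate (0, 0)
    return side1 + side2

def povs_diagonal(n):
    """ISR-2-style subset: POVs across the diagonal of the grid."""
    return [(i, i) for i in range(n)]
```

For a 4 x 4 grid this reduces the 16 low-resolution reconstructions to 7 (two sides) or 4 (diagonal), which is where the processing-time and memory savings come from.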

Chang, Guoping; Pan, Tinsu; Clark, John W; Mawlawi, Osama R

2008-12-01

294

Against a Negative Image of Science: History of Science and the Teaching of Physics and Chemistry  

NASA Astrophysics Data System (ADS)

After a first analysis of the current status of the history of science in high school Physics and Chemistry classes, we attempt to demonstrate that an appropriate introduction of several aspects of the History and Sociology of Science into our classes can bring about a significant improvement in pupils' image of, and attitudes toward, science and science teaching. We show that several groups of pupils aged 15 to 17 can significantly improve their interest in science after at least a year working with materials containing many different activities that involve historical aspects of science, such as context biographies, original papers, reports on STS in history, and videos showing the making and growth of major concepts in Physics and Chemistry.

Solbes, J.; Traver, M.

295

Color image processing and content-based image retrieval techniques for the analysis of dermatological lesions  

Microsoft Academic Search

This paper presents color image processing methods for the analysis of dermatological images in the context of a content-based image retrieval (CBIR) system. Tests were conducted on the classification of tissue components in skin lesions, in terms of necrotic tissue, fibrin, granulation, and mixed composition. The images were classified based on color components by an expert dermatologist following a black-yellow-red

Ederson. A. G. Dorileo; Marco A. C. Frade; Ana M. F. Roselino; Rangaraj M. Rangayyan; Paulo M. Azevedo-Marques

2008-01-01

296

Image pre-processing for optimizing automated photogrammetry performances  

NASA Astrophysics Data System (ADS)

The purpose of this paper is to analyze how optical pre-processing with polarizing filters and digital pre-processing with HDR imaging may improve the automated 3D modeling pipeline based on SFM and image matching, with special emphasis on optically non-cooperative surfaces of shiny or dark materials. Because of the automatic detection of homologous points, the presence of highlights due to shiny materials, or of nearly uniform dark patches produced by low-reflectance materials, may produce erroneous matching with wrong 3D point estimations, and consequently holes and topological errors in the mesh generated from the associated dense 3D cloud. This is due to the limited dynamic range of the 8-bit digital images that are matched to each other to generate 3D data. The same 256 levels can be employed more usefully if the actual dynamic range is compressed, avoiding luminance clipping in the darker and lighter image areas. Such an approach is considered here using both optical filtering and HDR processing with tone mapping, with experimental evaluation on different Cultural Heritage objects characterized by non-cooperative optical behavior. Three test images of each object were captured from different positions, changing the shooting conditions (filter/no filter) and the image processing (no processing/HDR processing), in order to have the same three camera orientations with different optical and digital pre-processing, and applying the same automated process to each photo set.

Guidi, G.; Gonizzi, S.; Micoli, L. L.

2014-05-01

297

Large-scale image processing on the Grid  

E-print Network

Copyright 2008 Imense Ltd. Large-scale image processing on the Grid, Edward Schofield. Overview: background, what it does, how it works, how to make it scalable. Application: automated image classification ("keywording"); industry needs: media, advertising, satellite imaging. Problems with metadata and manual image annotation: expensive and tedious; inconsistent and error prone; language-specific; homonymy: keywords can be ambiguous.

Schofield, Edward

2008-06-27

298

X-Ray Absorption Analysis By Image Processing Techniques  

NASA Astrophysics Data System (ADS)

A combination of an X-ray absorption instrument and an image processing system is described, and data on cigarette densities are presented. Using an electro-mechanically controlled positioning unit, the automatic analysis of samples larger than the sensor area is possible by sequential image storage. The final analysis of the image of the total object results in characteristic functions and parameters which can be related statistically to material-specific measurements. This procedure is illustrated with a cigarette tobacco rod.

Trinkies, Wolfgang; Muller, Bernd-H.; Wiethaup, Wolfgang

1989-03-01

299

Color sensitivity of the multi-exposure HDR imaging process  

NASA Astrophysics Data System (ADS)

Multi-exposure high dynamic range (HDR) imaging builds HDR radiance maps by stitching together different views of the same scene taken with varying exposures. In practice, this process involves converting raw sensor data into low dynamic range (LDR) images, estimating the camera response curves, and using them to recover the irradiance for every pixel. During the export, white balance settings and image stitching are applied, both of which influence the color balance of the final image. In this paper, we use a calibrated quasi-monochromatic light source, an integrating sphere, and a spectrograph to evaluate and compare the average spectral response of the image sensor. We finally draw some conclusions about the color consistency of HDR imaging and the additional steps necessary to use multi-exposure HDR imaging as a tool to measure physical quantities such as radiance and luminance.
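The irradiance-recovery step the abstract refers to can be sketched as a weighted fusion of the exposure stack. This is a minimal sketch assuming a linear (already-estimated) camera response and 8-bit pixel values; the triangle weighting is a standard choice, not necessarily the authors'.

```python
import numpy as np

def recover_irradiance(ldr_images, exposure_times):
    """Fuse an exposure stack into a relative irradiance map: each
    pixel's irradiance is a weighted average of value/exposure, with a
    triangle ('hat') weight that down-weights clipped dark and bright
    pixels near 0 and 255."""
    imgs = np.stack([im.astype(float) for im in ldr_images])
    t = np.asarray(exposure_times, dtype=float)[:, None, None]
    w = 1.0 - np.abs(imgs / 255.0 - 0.5) * 2.0    # hat weights in [0, 1]
    num = (w * imgs / t).sum(axis=0)
    den = w.sum(axis=0)
    den[den == 0] = 1e-9                          # fully clipped pixels
    return num / den
```

Color-consistency issues arise exactly here: if white balance was applied differently per exposure before this fusion, the per-channel weights and ratios no longer agree across the stack.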

Lenseigne, Boris; Jacobs, Valéry Ann; Withouck, Martijn; Hanselaer, Peter; Jonker, Pieter P.

2013-04-01

300

Image processing of a spectrogram produced by Spectrometer Airglow Temperature Imager  

Microsoft Academic Search

The Spectral Airglow Temperature Imager is an instrument specially designed for investigating wave processes in the Mesosphere-Lower Thermosphere. To determine the kinematic parameters of a wave, the values of a physical quantity at different points in space, and their changes over time, must be known. An approach for image processing of registered spectrograms is proposed. A

Atanas Marinov Atanassov

2010-01-01

301

Adaptive multiresolution image and video compression and pre\\/post-processing of image and video streams  

Microsoft Academic Search

This thesis is divided into two sections. In the first section, the focus is on adaptive transform-based image compression and motion compensation at low bit rates. In the second section, the focus is on pre-processing and post-processing of images and video streams. Natural images are two-dimensional signals with unknown or time-varying characteristics. For this type of signal, linear expansion

Hamid R Rabiee

1996-01-01

302

Detecting jaundice by using digital image processing  

NASA Astrophysics Data System (ADS)

When strong jaundice is present, babies or adults must undergo clinical exams such as the "serum bilirubin" test, which can be traumatic for patients. Jaundice often accompanies liver diseases such as hepatitis or liver cancer. In order to avoid additional trauma, we propose to detect jaundice (icterus) in newborns or adults using a painless method. By acquiring digital color images of the palms, soles, and forehead, we analyze RGB attributes and diffuse reflectance spectra as parameters to characterize patients with or without jaundice, and we correlate those parameters with the bilirubin level. By applying a support vector machine we distinguish between healthy and sick patients.
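The RGB-plus-SVM classification step can be sketched as below. Everything here is an assumption for illustration: the mean-RGB and yellowness features, the synthetic color distributions, and the RBF kernel are not taken from the paper, which also uses diffuse reflectance spectra.

```python
import numpy as np
from sklearn.svm import SVC

def rgb_features(patch):
    """Mean R, G, B of a skin patch plus a yellowness term (R+G-2B);
    an assumed feature set, not the paper's exact attributes."""
    r = patch[..., 0].mean()
    g = patch[..., 1].mean()
    b = patch[..., 2].mean()
    return [r, g, b, r + g - 2 * b]

# Synthetic training data: icteric skin is shifted toward yellow
# (high R and G relative to B). Purely illustrative numbers.
rng = np.random.default_rng(0)
healthy = rng.normal([200.0, 160.0, 140.0], 10.0, size=(50, 3))
icteric = rng.normal([210.0, 180.0, 90.0], 10.0, size=(50, 3))
X = np.vstack([healthy, icteric])
X = np.hstack([X, (X[:, 0] + X[:, 1] - 2 * X[:, 2])[:, None]])
y = np.array([0] * 50 + [1] * 50)   # 0 = healthy, 1 = jaundiced

clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
```

In a real pipeline the labels would come from measured serum bilirubin levels rather than synthetic clusters.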

Castro-Ramos, J.; Toxqui-Quitl, C.; Villa Manriquez, F.; Orozco-Guillen, E.; Padilla-Vivanco, A.; Sánchez-Escobar, JJ.

2014-03-01

303

Fundus Image Processing System For Early Detection Of Glaucoma  

NASA Astrophysics Data System (ADS)

An image processing system based on an MC 68000 microprocessor has been developed for analysis of fundus photographs. This personal-computer-based system has specific image enhancement capabilities comparable to existing large-scale systems. Basic enhancement of fundus images consists of histogram modification or kernel convolution techniques to determine regions of specific interest, such as textural differences in the nerve fiber layer or cupping of the optic nerve head. Fast Fourier transforms and filtering techniques are then utilized for specific regions of the original image. Textural differences in the nerve fiber layer are further highlighted using either interactive histogram modification or pseudocolor mappings. Menu-driven software allows review of the steps applied, creating a feedback mechanism for optimum display of the fundus image. A wider noise margin than that of digitized fundus photographs can be obtained by direct fundus imaging. The present fundus image processing system provides quantitative and detailed techniques for assessing textural changes in the fundus photographs of glaucoma patients and suspects, for better classification and early detection of glaucoma. The versatility and computing capability of the system also make it suitable for other applications such as multidimensional image processing and image analysis.
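One of the kernel-convolution enhancement steps mentioned above can be sketched as an unsharp mask: subtract a blurred copy to isolate fine structure (such as nerve-fiber-layer texture), then add it back scaled. This is a generic illustration, not the system's actual kernels; `sigma` and `amount` are assumed parameters.

```python
import numpy as np
from scipy import ndimage

def unsharp_enhance(img, sigma=2.0, amount=1.0):
    """Unsharp-mask enhancement via Gaussian convolution: boost the
    high-frequency detail (img - blurred) by `amount`."""
    img = img.astype(float)
    blurred = ndimage.gaussian_filter(img, sigma)
    return img + amount * (img - blurred)
```

A flat region passes through unchanged, while edges and texture have their local contrast increased, which is what makes subtle textural differences easier to see.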

Whiteside, S.; Mitra, S.; Krile, T. F.; Shihab, Z.

1986-12-01

304

Fusion of digital image processing and video on the desktop  

Microsoft Academic Search

The use of digital image processing techniques to create compelling multimedia and video using VideoFusion, a Macintosh application for processing QuickTime movies is described. VideoFusion provides a toolbox for video processing, including a wide variety of video editing techniques, compositing, chroma keying, special effects and 3D transforms

J. W. Klingler; L. T. Andrews; C. Vaughan

1992-01-01

305

Medical image processing using transient Fourier holography in bacteriorhodopsin films  

E-print Network

Medical image processing using transient Fourier holography in bacteriorhodopsin films is demonstrated by recording and reconstructing the transient photoisomerization grating formed in the film. A feature of the technique is the ability to transiently display selected spatial frequencies in the reconstruction process.

Rao, D.V.G.L.N.

306

An overview of rough-hybrid approaches in image processing  

Microsoft Academic Search

Rough set theory offers a novel approach to manage uncertainty that has been used for the discovery of data dependencies, importance of features, patterns in sample data, feature space dimensionality reduction, and the classification of objects. Consequently, rough sets have been successfully employed for various image processing tasks including image segmentation, enhancement and classification. Nevertheless, while rough sets on their

A. E. Hassanien; A. Abraham; James F. Peters; Gerald Schaefer

2008-01-01

307

Using spectral distances for speedup in hyperspectral image processing  

Microsoft Academic Search

This paper investigates the efficiency of spectral screening as a tool for speedup in hyperspectral image processing. Spectral screening is a technique for reducing the hyperspectral data to a representative subset of spectra. The subset is formed such that any two spectra in it are dissimilar and, for any spectrum in the original image cube, there is a similar spectrum
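The spectral-screening idea described above can be sketched as a greedy pass over the image spectra. This is an illustrative sketch, not the paper's implementation; the spectral angle is a common dissimilarity measure for hyperspectral data, and the threshold is an assumed parameter.

```python
import numpy as np

def spectral_angle(a, b):
    """Spectral angle (radians) between two spectra."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def spectral_screen(spectra, threshold):
    """Greedy spectral screening: keep a spectrum only if it is at
    least `threshold` radians from every spectrum already kept, so the
    subset is mutually dissimilar yet every original spectrum has a
    similar representative in it."""
    subset = []
    for s in spectra:
        if all(spectral_angle(s, t) >= threshold for t in subset):
            subset.append(s)
    return subset
```

Downstream processing then runs on the small subset and the results are mapped back to the full cube, which is the source of the speedup.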

S. A. Robila

2005-01-01

308

oRis: multiagents approach for image processing  

Microsoft Academic Search

In this article, we present a parallel image processing system based on the concept of reactive agents. This means that, in our system, each agent has a very simple behavior which allows it to make a decision (find an edge, a region, ...) according to its position in the image and the information enclosed in it. Our system

Vincent Rodin; Fabrice Harrouet; Pascal Ballet; Jacques Tisseau

1998-01-01

309

Satellite image processing using cellular array processor (CAP)  

Microsoft Academic Search

Since its successful launch in February of 1992, the Japan Earth Resources Satellite-1 (JERS-1) has been sending back high resolution images of the Earth for various studies, including the investigation of Earth resources, the preservation of environments and the observation of coastal lines. Currently, received images are processed using the Earth Resources Satellite Data Information System (ERSDIS). The ERSDIS is

M. Ajiro; H. Miyata; T. Kan; M. Ono

1993-01-01

310

Multiscale Statistical Models for Signal and Image Processing.  

National Technical Information Service (NTIS)

We are developing a general theory for multiscale signal and image modeling, processing, and analysis that is matched to singularity-rich data, such as transients and images with edges. Using a linguistic analogy, our model can be interpreted as grammars th...

R. G. Baraniuk

2004-01-01

311

An Image Database on a Parallel Processing Network.  

ERIC Educational Resources Information Center

Describes the design and development of an image database for photographs in the Ulster Museum (Northern Ireland) that used parallelism from a transputer network. Topics addressed include image processing techniques; documentation needed for the photographs, including indexing, classifying, and cataloging; problems; hardware and software aspects;…

Philip, G.; And Others

1991-01-01

312

Automatic Record of IGO Game by Image Processing  

Microsoft Academic Search

Although an IGO record is valuable work, it requires a great deal of laborious manual effort. In this paper, we propose an automatic recording system based on image processing. First, we control the camera shutter by detecting the moment of brightness change caused by the motion of the player's hand, and capture the n-th image in order. The contrast among

Tadao Fukuyama; Takahiro Ogisu; Jim Woo Kim; Okazaki Kozo

2006-01-01

313

A Programmable, Maximal Throughput Architecture for Neighborhood Image Processing  

Microsoft Academic Search

The authors propose a run-time reconfigurable architecture for local neighborhood image processing and discuss how the new architecture offers improved flexibility to the developer. They show that for a satellite image feature extraction application, their architecture, implemented on Stratix II and Virtex 2 field programmable gate arrays, achieves similar performance, hardware resource utilization, and throughput as fully pipelined

Reid B. Porter; Jan R. Frigo; Maya Gokhale; Christophe Wolinski; François Charot; Charles Wagner

2006-01-01

314

Notes on reconstructing the data cube in hyperspectral image processing  

E-print Network

Notes on reconstructing the data cube in hyperspectral image processing Charles Byrne (Charles 01854 May 3, 2004 Abstract The hyperspectral imaging technique described in [12] leads to the inter- esting problem of reconstructing a three-dimensional data cube from measured data. This problem involves

Byrne, Charles

315

Parallel perfusion imaging processing using GPGPU  

PubMed Central

Background and purpose The objective of brain perfusion quantification is to generate parametric maps of relevant hemodynamic quantities such as cerebral blood flow (CBF), cerebral blood volume (CBV) and mean transit time (MTT) that can be used in diagnosis of acute stroke. These calculations involve deconvolution operations that can be very computationally expensive when using local Arterial Input Functions (AIF). As time is vitally important in the case of acute stroke, reducing the analysis time will reduce the number of brain cells damaged and increase the potential for recovery. Methods GPUs originated as graphics generation dedicated co-processors, but modern GPUs have evolved to become a more general processor capable of executing scientific computations. It provides a highly parallel computing environment due to its large number of computing cores and constitutes an affordable high performance computing method. In this paper, we will present the implementation of a deconvolution algorithm for brain perfusion quantification on GPGPU (General Purpose Graphics Processor Units) using the CUDA programming model. We present the serial and parallel implementations of such algorithms and the evaluation of the performance gains using GPUs. Results Our method has gained a 5.56 and 3.75 speedup for CT and MR images respectively. Conclusions It seems that using GPGPU is a desirable approach in perfusion imaging analysis, which does not harm the quality of cerebral hemodynamic maps but delivers results faster than the traditional computation. PMID:22824549
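The deconvolution at the core of perfusion quantification can be sketched on the CPU with a truncated-SVD solver, a standard approach for this problem. This is an illustrative serial sketch, not the paper's GPU kernel; the truncation threshold is an assumed regularization parameter.

```python
import numpy as np

def deconvolve_svd(aif, tissue, dt, thresh=0.2):
    """Recover the residue function R(t) from tissue(t) = dt * (AIF * R)
    by building the lower-triangular convolution (Toeplitz) matrix from
    the AIF and inverting it with small singular values suppressed.
    CBF scales with the peak of R(t); MTT = CBV / CBF."""
    n = len(aif)
    A = np.zeros((n, n))
    for i in range(n):
        A[i, :i + 1] = aif[i::-1] * dt      # A[i, j] = aif[i-j] * dt
    U, s, Vt = np.linalg.svd(A)
    s_inv = np.zeros_like(s)
    big = s > thresh * s.max()              # drop noise-dominated modes
    s_inv[big] = 1.0 / s[big]
    return Vt.T @ (s_inv * (U.T @ tissue))
```

The GPU gain in the paper comes from running this per-voxel solve (with local AIFs) across many voxels in parallel; the math per voxel is the same.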

Zhu, Fan; Gonzalez, David Rodriguez; Carpenter, Trevor; Atkinson, Malcolm; Wardlaw, Joanna

2012-01-01

316

0.5 micrometer Photolithography Using Photo Image Restoration Process,  

National Technical Information Service (NTIS)

Photo image restoration lithography (PIR), a method for improving the contrast in the resist exposure process, has been developed using photobleachable diazonium compound. The absorption wavelength region (390-440 nm) of the diazonium compound corresponds...

Niki, H.; Kumagae, A.; Nakase, M.

1987-01-01

317

Schedule visualization and analysis for halide image processing language  

E-print Network

Image processing applications require high-performance software implementations in order to handle large input data and to run on smaller mobile devices that demand high efficiency. Halide is a language and compiler for ...

Knežević, Jovana

2013-01-01

318

SIP: A Web-Based Astronomical Image Processing Program  

NASA Astrophysics Data System (ADS)

I have written an astronomical image processing and analysis program designed to run over the internet in a Java-compatible web browser. The program, Sky Image Processor (SIP), is accessible at the SIP webpage (http://www.phys.vt.edu/SIP). Since nothing is installed on the user's machine, there is no need to download upgrades; the latest version of the program is always instantly available. Furthermore, the Java programming language is designed to work on any computer platform (any machine and operating system). The program could be used with students in web-based instruction or in a computer laboratory setting; it may also be of use in some research or outreach applications. While SIP is similar to other image processing programs, it is unique in some important respects. For example, SIP can load images from the user's machine or from the Web. An instructor can put images on a web server for students to load and analyze on their own personal computer. Or, the instructor can inform the students of images to load from any other web server. Furthermore, since SIP was written with students in mind, the philosophy is to present the user with the most basic tools necessary to process and analyze astronomical images. Images can be combined (by addition, subtraction, multiplication, or division), multiplied by a constant, smoothed, cropped, flipped, rotated, and so on. Statistics can be gathered for pixels within a box drawn by the user. Basic tools are available for gathering data from an image which can be used for performing simple differential photometry, or astrometry. Therefore, students can learn how astronomical image processing works. Since SIP is not part of a commercial CCD camera package, the program is written to handle the most common denominator image file, the FITS format.

Simonetti, J. H.

1999-12-01

319

Optimization of image processing algorithms on mobile platforms  

Microsoft Academic Search

This work presents a technique to optimize popular image processing algorithms on mobile platforms such as cell phones, net-books and personal digital assistants (PDAs). The increasing demand for video applications like context-aware computing on mobile embedded systems requires the use of computationally intensive image processing algorithms. The system engineer has a mandate to optimize them so as to meet real-time

Pramod Poudel; Mukul Shirvaikar

2011-01-01

320

Digital image processing for the earth resources technology satellite data.  

NASA Technical Reports Server (NTRS)

This paper discusses the problems of digital processing of the large volumes of multispectral image data that are expected to be received from the ERTS program. Correction of geometric and radiometric distortions are discussed and a byte oriented implementation is proposed. CPU timing estimates are given for a System/360 Model 67, and show that a processing throughput of 1000 image sets per week is feasible.

Will, P. M.; Bakis, R.; Wesley, M. A.

1972-01-01

321

Satellite Image Processing Applications in MedioGRID  

Microsoft Academic Search

This paper presents a high level architectural specification of MedioGRID, a research project aiming at implementing a real-time satellite image processing system for extracting relevant environmental and meteorological parameters on a grid system. The presentation focuses on the key architectural decisions of the GRID-aware satellite image processing system, highlighting the technologies for each of the major components. An essential part

Ovidiu Muresan; Florin Pop; Dorian Gorgan; Valentin Cristea

2006-01-01

322

Overall Sigma Level of an Imaging Department through Process Innovation  

Microsoft Academic Search

The integration of a RIS-PACS into the imaging service at the INER in Mexico City had a strong impact on their previous workflow. A process analysis was developed considering patient satisfaction, strengthened system use, and elimination of waste. The objective of this work was to determine an overall sigma level of an imaging department, presenting a strategy that considers process

J. García-Porres; M. R. Ortiz-Posadas

323

ELAS: A powerful, general purpose image processing package  

NASA Technical Reports Server (NTRS)

ELAS is a software package which has been utilized as an image processing tool for more than a decade. It has been the source of several commercial packages. Now available on UNIX workstations it is a very powerful, flexible set of software. Applications at Stennis Space Center have included a very wide range of areas including medicine, forestry, geology, ecological modeling, and sonar imagery. It remains one of the most powerful image processing packages available, either commercially or in the public domain.

Walters, David; Rickman, Douglas

1991-01-01

324

Determination of SATI Instrument Filter Parameters by Processing Interference Images  

Microsoft Academic Search

This paper presents a method for determination of interference filter parameters, such as the effective refraction index and the maximal transmittance wavelength, on the basis of image processing of a spectrogram produced by the Spectrometer Airglow Temperature Imager instrument. The method employs radial sections for determination of points from the crests and valleys in the

Atanas Marinov Atanassov

2010-01-01

325

Cardiac phase-correlated image reconstruction and advanced image processing in pulmonary CT imaging  

Microsoft Academic Search

Image quality in pulmonary CT imaging is commonly degraded by cardiac motion artifacts. Phase-correlated image reconstruction algorithms known from cardiac imaging can reduce motion artifacts but increase image noise and conventionally require a concurrently acquired ECG signal for synchronization. Techniques are presented to overcome these limitations. Based on standard and phase-correlated images that are reconstructed using a raw data-derived synchronization

Robert M. Lapp; Marc Kachelrieß; Dirk Ertel; Yiannis Kyriakou; Willi A. Kalender

2009-01-01

326

Multispectral image processing for environmental monitoring  

Microsoft Academic Search

New techniques are described for detecting environmental anomalies and changes using multispectral imagery. Environmental anomalies are areas that do not exhibit normal signatures due to man-made activities and include phenomena such as effluent discharges, smoke plumes, stressed vegetation, and deforestation. A new region-based processing technique is described for detecting these phenomena using Landsat TM imagery. Another algorithm that can detect

Mark J. Carlotto; Mark B. Lazaroff; Mark W. Brennan

1993-01-01

327

Imaging Implicit Morphological Processing: Evidence from Hebrew  

ERIC Educational Resources Information Center

Is morphology a discrete and independent element of lexical structure or does it simply reflect a fine-tuning of the system to the statistical correlation that exists among orthographic and semantic properties of words? Hebrew provides a unique opportunity to examine morphological processing in the brain because of its rich morphological system.…

Bick, Atira S.; Frost, Ram; Goelman, Gadi

2010-01-01

328

Computer Vision, Graphics, and Image Processing 40 (1987) 250-266  

E-print Network

Computer Vision, Graphics, and Image Processing 40 (1987) 250-266

Murray, David

329

Particle sizing in rocket motor studies utilizing hologram image processing  

NASA Technical Reports Server (NTRS)

A technique of obtaining particle size information from holograms of combustion products is described. The holograms are obtained with a pulsed ruby laser through windows in a combustion chamber. The reconstruction is done with a krypton laser with the real image being viewed through a microscope. The particle size information is measured with a Quantimet 720 image processing system which can discriminate various features and perform measurements of the portions of interest in the image. Various problems that arise in the technique are discussed, especially those that are a consequence of the speckle due to the diffuse illumination used in the recording process.

Netzer, David; Powers, John

1987-01-01

330

Computational techniques in zebrafish image processing and analysis.  

PubMed

The zebrafish (Danio rerio) has been widely used as a vertebrate animal model in neurobiological research. The zebrafish has several unique advantages that make it well suited for live microscopic imaging, including its fast development, large transparent embryos that develop outside the mother, and the availability of a large selection of mutant strains. As the genome of the zebrafish has been fully sequenced, it is comparatively easy to carry out large-scale forward genetic screening in zebrafish to investigate relevant human diseases, from neurological disorders like epilepsy, Alzheimer's disease, and Parkinson's disease to other conditions, such as polycystic kidney disease and cancer. All of these factors contribute to an increasing number of microscopic images of zebrafish that require advanced image processing methods to objectively, quantitatively, and quickly analyze the image dataset. In this review, we discuss the development of image analysis and quantification techniques as applied to zebrafish images, with the emphasis on phenotype evaluation, neuronal structure quantification, vascular structure reconstruction, and behavioral monitoring. Zebrafish image analysis is continually developing, and new types of images generated from a wide variety of biological experiments provide the dataset and foundation for the future development of image processing algorithms. PMID:23219894

Xia, Shunren; Zhu, Yongxu; Xu, Xiaoyin; Xia, Weiming

2013-02-15

331

High Dynamic Range Processing for Magnetic Resonance Imaging  

PubMed Central

Purpose To minimize feature loss in T1- and T2-weighted MRI by merging multiple MR images acquired at different TR and TE to generate an image with increased dynamic range. Materials and Methods High Dynamic Range (HDR) processing techniques from the field of photography were applied to a series of acquired MR images. Specifically, a method to parameterize the algorithm for MRI data was developed and tested. T1- and T2-weighted images of a number of contrast agent phantoms and a live mouse were acquired with varying TR and TE parameters. The images were computationally merged to produce HDR-MR images. All acquisitions were performed on a 7.05 T Bruker PharmaScan with a multi-echo spin echo pulse sequence. Results HDR-MRI delineated bright and dark features that were either saturated or indistinguishable from background in standard T1- and T2-weighted MRI. The increased dynamic range preserved intensity gradation over a larger range of T1 and T2 in phantoms and revealed more anatomical features in vivo. Conclusions We have developed and tested a method to apply HDR processing to MR images. The increased dynamic range of HDR-MR images as compared to standard T1- and T2-weighted images minimizes feature loss caused by magnetization recovery or low SNR. PMID:24250788
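The merging step described above can be illustrated with an exposure-fusion-style weighted average, a common HDR technique from photography; this is a sketch with toy data, not the authors' MRI-specific parameterization:

```python
# Illustrative HDR-style merge of images acquired with different parameters
# (hypothetical data; not the authors' algorithm). Each pixel is a weighted
# average, weighting well-exposed mid-range intensities most.
import numpy as np

def hdr_merge(images, sigma=0.2):
    """Merge normalized images (values in [0, 1]) into one merged image."""
    stack = np.stack(images).astype(float)
    # Gaussian "well-exposedness" weight centered on mid-grey 0.5.
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    weights += 1e-8                      # avoid division by zero
    return (weights * stack).sum(axis=0) / weights.sum(axis=0)

# Two toy "acquisitions": one saturates bright features, one loses dark ones.
short_te = np.array([[0.05, 0.50], [0.95, 1.00]])
long_te  = np.array([[0.30, 0.70], [0.60, 0.80]])
merged = hdr_merge([short_te, long_te])
print(merged.round(2))
```

The weighting function and sigma value are assumptions chosen for illustration; the paper develops its own parameterization for MRI data.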

Sukerkar, Preeti A.; Meade, Thomas J.

2013-01-01

332

Digital image processing: a path to better pictures.  

PubMed

Digital image processing has been used for over a decade to provide startling improvements in the quality of photographs from deep space, satellites and low light level photography. In the last decade considerable work has gone on in major microscope labs in applying these techniques to electron micrographs. Within the last few years a number of factors have come together which allow large scale image processing techniques to be applied at reasonable costs in microscope labs everywhere. The system described here provides hardware for digitization and storage of multiple images simultaneously and multimode scan generation. System software provides easy scan control, image digitization, automatic prescale adjust on acquisition, grey scale histogram generation, grey scale manipulation, image filtering, smoothing, and random color assignment to grey levels. PMID:6762647
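Two of the operations listed above, grey scale histogram generation and grey scale manipulation, can be sketched in a few lines. The example below (histogram equalization of a synthetic low-contrast image) is illustrative only and is unrelated to the dedicated system the abstract describes:

```python
# Grey-scale histogram generation plus one manipulation: equalization.
import numpy as np

def equalize(img, levels=256):
    """Spread grey levels so the cumulative histogram becomes ~linear."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())    # normalize to [0, 1]
    lut = np.round(cdf * (levels - 1)).astype(np.uint8)  # grey-level LUT
    return lut[img]

# A low-contrast 4x4 image confined to grey levels 100-115.
img = np.arange(100, 116, dtype=np.uint8).reshape(4, 4)
out = equalize(img)
print(out.min(), out.max())  # prints: 16 255
```

The 16 original levels, initially packed into [100, 115], are stretched toward the full 8-bit range.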

Hardy, W; Vance, J; Jones, K; Kokubo, Y

1982-01-01

333

Segmentation of acoustic images by neural network processing  

NASA Astrophysics Data System (ADS)

A segmentation method for biomedical acoustic images is reported which efficiently classifies the groups of similar image elements (pixels) and separates them into particular characteristic regions. As the input data, the method uses the pixel intensities of the source image. The classification is performed by learning vector quantization neural networks, which separate the main classes (structures, tissues, artifacts, etc.) present in the image. Because this type of neural network implies that the number of the classes is known and that the network should be trained by instruction, an expert must participate in the process of generating the input data. Results obtained by processing test acoustic (ultrasonic) images demonstrate that the method is capable of effectively solving sonography classification problems. The accuracy of the method is estimated by comparison with the segmentation performed manually.
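The learning-vector-quantization classification described above can be sketched with a minimal LVQ1 update rule; the one-dimensional pixel intensities and class labels below are hypothetical and purely illustrative:

```python
# Minimal LVQ1 sketch: prototypes (one per tissue class) move toward
# correctly classified training pixels and away from misclassified ones.
import numpy as np

def lvq1_train(samples, labels, protos, proto_labels, lr=0.1, epochs=20):
    protos = np.array(protos, dtype=float)
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            k = np.argmin(np.abs(protos - x))   # nearest prototype
            step = lr * (x - protos[k])
            protos[k] += step if proto_labels[k] == y else -step
    return protos

# Two classes of pixel intensity: "tissue" ~0.3 and "artifact" ~0.8.
samples = [0.25, 0.3, 0.35, 0.75, 0.8, 0.85]
labels  = [0, 0, 0, 1, 1, 1]
protos = lvq1_train(samples, labels, [0.4, 0.6], [0, 1])
print(protos.round(2))  # prototypes settle near 0.3 and 0.8
```

The expert-supplied class labels play the role of the instructed training the abstract mentions.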

Il'in, S. V.; Rychagov, M. N.

2004-09-01

334

Theoretical demonstration of image characteristics and image formation process depending on image displaying conditions on liquid crystal display  

NASA Astrophysics Data System (ADS)

In soft-copy diagnosis, medical images with large matrix sizes often need to be displayed as reduced images produced by sub-sampling. We analyzed overall image characteristics on a liquid crystal display (LCD) as a function of the display condition. Specifically, we measured overall Wiener spectra (WS) of displayed X-ray images at sub-sampling rates from pixel-by-pixel mode down to 35%. The image viewer used performed image reduction by sub-sampling with bilinear interpolation. We also simulated overall WS from images sub-sampled by bilinear, super-sampling, and nearest-neighbor interpolation. The measured and simulated results agreed well and demonstrated that overall noise characteristics were attributable to luminance-value fluctuation, sub-sampling effects, and the inherent image characteristics of the LCD. In addition, we measured digital MTFs (modulation transfer functions) on center and shifted alignments from sub-sampled edge images as well as simulating WS. The WS and digital MTFs showed that displaying reduced images induced noise increments through aliasing errors and made it impossible to exhibit high-frequency signals. Furthermore, because super-sampling interpolation performed the image reduction more smoothly than bilinear interpolation, it resulted in lower WS and digital MTFs. Nearest-neighbor interpolation had almost no smoothing effect, so its WS and digital MTFs were the highest.

Yamazaki, Asumi; Ichikawa, Katsuhiro; Funahashi, Masao; Kodera, Yoshie

2012-02-01

335

Digital processing of side-scan sonar data with the Woods Hole image processing system software  

USGS Publications Warehouse

Since 1985, the Branch of Atlantic Marine Geology has been involved in collecting, processing and digitally mosaicking high and low-resolution side-scan sonar data. Recent development of a UNIX-based image-processing software system includes a series of task-specific programs for processing side-scan sonar data. This report describes the steps required to process the collected data and to produce an image that has equal along- and across-track resolution.

Paskevich, Valerie F.

1992-01-01

336

Teaching Cost-Conscious Medicine: Impact of a Simple Educational Intervention on Appropriate Abdominal Imaging at a Community-Based Teaching Hospital  

PubMed Central

Background Rising costs pose a major threat to US health care. Residency programs are being asked to teach residents how to provide cost-conscious medical care. Methods An educational intervention incorporating the American College of Radiology appropriateness criteria with lectures on cost-consciousness and on the actual hospital charges for abdominal imaging was implemented for residents at Scripps Mercy Hospital in San Diego, CA. We hypothesized that residents would order fewer abdominal imaging examinations for patients with complaints of abdominal pain after the intervention. We analyzed the type and number of abdominal imaging studies completed for patients admitted to the inpatient teaching service with primary abdominal complaints for 18 months before (738 patients) and 12 months following the intervention (632 patients). Results There was a significant reduction in mean abdominal computed tomography (CT) scans per patient (from 1.7 to 1.4 studies per patient, P < .001) and total abdominal radiology studies per patient (from 3.1 to 2.7 studies per patient, P = .02) following the intervention. The avoidance of charges solely due to the reduction in abdominal CT scans following the intervention was $129 per patient or $81,528 in total. Conclusions A simple educational intervention appeared to change the radiologic test-ordering behavior of internal medicine residents. Widespread adoption of similar interventions by residency programs could result in significant savings for the health care system. PMID:24404274

Covington, Matthew F.; Agan, Donna L.; Liu, Yang; Johnson, John O.; Shaw, David J.

2013-01-01

337

Image processing for improved eye-tracking accuracy  

NASA Technical Reports Server (NTRS)

Video cameras provide a simple, noninvasive method for monitoring a subject's eye movements. An important concept is that of the resolution of the system, which is the smallest eye movement that can be reliably detected. While hardware systems are available that estimate direction of gaze in real-time from a video image of the pupil, such systems must limit image processing to attain real-time performance and are limited to a resolution of about 10 arc minutes. Two ways to improve resolution are discussed. The first is to improve the image processing algorithms that are used to derive an estimate. Off-line analysis of the data can improve resolution by at least one order of magnitude for images of the pupil. A second avenue by which to improve resolution is to increase the optical gain of the imaging setup (i.e., the amount of image motion produced by a given eye rotation). Ophthalmoscopic imaging of retinal blood vessels provides increased optical gain and improved immunity to small head movements but requires a highly sensitive camera. The large number of images involved in a typical experiment imposes great demands on the storage, handling, and processing of data. A major bottleneck had been the real-time digitization and storage of large amounts of video imagery, but recent developments in video compression hardware have made this problem tractable at a reasonable cost. Images of both the retina and the pupil can be analyzed successfully using a basic toolbox of image-processing routines (filtering, correlation, thresholding, etc.), which are, for the most part, well suited to implementation on vectorizing supercomputers.
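One of the basic toolbox routines mentioned above (thresholding) already suffices for a simple pupil-position estimate: threshold the dark pupil and take the centroid of the resulting region, which yields sub-pixel resolution. The synthetic image below is illustrative, not the paper's algorithm:

```python
# Pupil-position estimate by thresholding and centroid computation.
import numpy as np

def pupil_center(img, thresh):
    """Centroid (row, col) of all pixels darker than `thresh`."""
    mask = img < thresh
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

# 100x100 bright field with a dark disc (the "pupil") centered at (40, 60).
yy, xx = np.mgrid[0:100, 0:100]
img = np.full((100, 100), 200.0)
img[(yy - 40) ** 2 + (xx - 60) ** 2 < 15 ** 2] = 20.0

print(pupil_center(img, thresh=100))  # ≈ (40.0, 60.0)
```

Because the centroid averages over many pixels, its precision exceeds one pixel, which is the key to the off-line resolution gains the abstract describes.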

Mulligan, J. B.; Watson, A. B. (Principal Investigator)

1997-01-01

338

Vector directional filters-a new class of multichannel image processing filters  

Microsoft Academic Search

Vector directional filters (VDF) for multichannel image processing are introduced and studied. These filters separate the processing of vector-valued signals into directional processing and magnitude processing. This provides a link between single-channel image processing where only magnitude processing is essentially performed, and multichannel image processing where both the direction and the magnitude of the image vectors play an important role

Panos E. Trahanias; Anastasios N. Venetsanopoulos

1993-01-01

339

Multichannel image processing system for thermal supervision systems  

Microsoft Academic Search

Foundation principles of a multichannel real-time information processing system that solves the problem of automatic object tracking are considered in this work. Image processing algorithms used to build the three-channel system, optimized for the system core (the NeuroMatrix NM6403 microprocessor), are implemented. Information processing is accomplished using a Hopfield neural network; the processing time for one 384x288 frame is about

A. A. Zorin; I. I. Razumova; V. A. Tarkov

2005-01-01

340

Image processing and 3D visualization in forensic pathologic examination  

NASA Astrophysics Data System (ADS)

The use of image processing is becoming increasingly important in the evaluation of violent crime. While much work has been done in the use of these techniques for forensic purposes outside of forensic pathology, its use in the pathologic examination of wounding has been limited. We are investigating the use of image processing and three-dimensional visualization in the analysis of patterned injuries and tissue damage. While image processing will never replace classical understanding and interpretation of how injuries develop and evolve, it can be a useful tool in helping an observer notice features in an image, may help provide correlation of surface to deep tissue injury, and provide a mechanism for the development of a metric for analyzing how likely it may be that a given object may have caused a given wound. We are also exploring methods of acquiring three-dimensional data for such measurements, which is the subject of a second paper.

Oliver, William R.; Altschuler, Bruce R.

1996-02-01

341

An image processing pipeline to detect and segment nuclei in muscle fiber microscopic images.  

PubMed

Muscle fiber images play an important role in the medical diagnosis and treatment of many muscular diseases. The number of nuclei in skeletal muscle fiber images is a key bio-marker of the diagnosis of muscular dystrophy. In nuclei segmentation one primary challenge is to correctly separate the clustered nuclei. In this article, we developed an image processing pipeline to automatically detect, segment, and analyze nuclei in microscopic images of muscle fibers. The pipeline consists of image pre-processing, identification of isolated nuclei, identification and segmentation of clustered nuclei, and quantitative analysis. Nuclei are initially extracted from the background by using a local Otsu threshold. Based on analysis of morphological features of the isolated nuclei, including their areas, compactness, and major axis lengths, a Bayesian network is trained and applied to identify isolated nuclei from clustered nuclei and artifacts in all the images. Then a two-step refined watershed algorithm is applied to segment clustered nuclei. After segmentation, the nuclei can be quantified for statistical analysis. Comparing the segmented results with those of manual analysis and an existing technique, we find that our proposed image processing pipeline achieves good performance with high accuracy and precision. The presented image processing pipeline can therefore help biologists increase their throughput and objectivity in analyzing large numbers of nuclei in muscle fiber images. PMID:24777764
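The clustered-nuclei separation step can be sketched with a standard distance-transform watershed in scikit-image. This illustrates the general idea only; it is not the authors' two-step refined algorithm, and it omits their Bayesian-network stage:

```python
# Splitting touching "nuclei" with a distance-transform watershed
# (illustrative sketch; not the authors' pipeline).
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

# Synthetic image: two overlapping circular nuclei on a dark background.
yy, xx = np.mgrid[0:80, 0:80]
img = np.zeros((80, 80))
img[(yy - 40) ** 2 + (xx - 28) ** 2 < 14 ** 2] = 1.0
img[(yy - 40) ** 2 + (xx - 52) ** 2 < 14 ** 2] = 1.0

# 1) Foreground extraction by Otsu's threshold (global here; the paper
#    uses a local variant).
mask = img > threshold_otsu(img)

# 2) Distance transform, marker detection, and watershed to split the cluster.
dist = ndi.distance_transform_edt(mask)
peaks = peak_local_max(dist, min_distance=10, labels=mask.astype(int))
markers = np.zeros_like(mask, dtype=int)
markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
labels = watershed(-dist, markers, mask=mask)

print(labels.max())  # number of separated nuclei
```

The two overlapping discs are recovered as two labeled regions even though simple thresholding merges them into one connected component.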

Guo, Yanen; Xu, Xiaoyin; Wang, Yuanyuan; Wang, Yaming; Xia, Shunren; Yang, Zhong

2014-08-01

342

Dielectric barrier discharge image processing by Photoshop  

NASA Astrophysics Data System (ADS)

In this paper, the filamentary pattern of a dielectric barrier discharge is processed using Photoshop, and the coordinates of each filament are obtained. Two different Photoshop-based methods were used to analyze the spatial order of pattern formation in the dielectric barrier discharge. The results show that the distance between neighboring filaments at U = 14 kV and d = 0.9 mm is about 1.8 mm. Within the experimental error, the results from the two methods are similar.
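Once filament coordinates have been extracted, a neighbor spacing like the one quoted above can be computed as a mean nearest-neighbour distance; a minimal sketch with hypothetical coordinates:

```python
# Mean nearest-neighbour distance between extracted filament coordinates
# (coordinates below are hypothetical; units: mm).
import numpy as np

def mean_nn_distance(points):
    pts = np.asarray(points, dtype=float)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # ignore self-distances
    return d.min(axis=1).mean()

# Four filaments on a 1.8 mm square lattice.
filaments = [(0.0, 0.0), (1.8, 0.0), (0.0, 1.8), (1.8, 1.8)]
print(mean_nn_distance(filaments))  # 1.8
```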

Dong, Lifang; Li, Xuechen; Yin, Zengqian; Zhang, Qingli

2001-09-01

343

Parallel architecture for multidimensional image processing  

NASA Astrophysics Data System (ADS)

In order to deliver the computational power required by real-time manipulation and display of multidimensional objects, we present a massively parallel octree architecture based on a new interconnection network, the Partitionable Spanning Multibus Hypercube (PSMH). Its goal is to use one Processing Element per obel (object element), as opposed to one Processing Element per voxel (volume element). The underlying idea of the PSMH is to take advantage of the data's hierarchical ordering to reduce the computational cost. As a basic tool, we first derive a routing algorithm for the case of an object shift. Its running time is of order O(max(n^3, m)) for an 8^n PSMH, where m is the message length in bits. Because we consider obels rather than voxels, we design a compaction algorithm that meets the routing requirements, achieving a compression ratio of O(2^n). This is followed by a parallel neighbor-finding technique to account for the compaction in the routing operations.

Acharya, Raj S.; Hecquard, Jean

1990-07-01

344

Teaching the Process of Science: A Critical Component of Introductory Geoscience Courses  

NASA Astrophysics Data System (ADS)

Undergraduate students hold many misconceptions about the nature and process of science, including the social and cultural components of the scientific endeavor. These misconceptions are perhaps even more pronounced in the geosciences, where most students enter college without having been exposed to subject matter in high school. Many faculty and teachers feel that the process of science is embedded in their teaching through the inclusion of laboratory exercises and assigned readings in the primary literature. These techniques utilize the tools of science, but do not necessarily enlighten students in the actual process by which science progresses. Students do gain that understanding when they are involved in research, but the majority of the undergraduate research experiences are capstone experiences for students who choose to major in the science and engineering disciplines. A critical vehicle for teaching most undergraduate students about the process of science, therefore, is the introductory science course. In these courses, teaching the nature and process of science requires going beyond implicit use of the tools and techniques of science to making explicit reference to the process of science and, in addition, allowing students time to reflect on how they have participated in the process. We have developed a new series of freely accessible, web-based reading materials (available at http://www.visionlearning.com/process_science.php) that explicitly discuss the process of science and can be easily incorporated into any introductory science course, including introductory geoscience. These modules cover a variety of topics including specific research methods, such as experimentation, description, and modeling, as well as other aspects of the process of science like scientific writing, data analysis and interpretation, and the use of statistics. 
Our preliminary assessment results suggest that students find the texts interesting and that the materials specifically address misconceptions students hold before reading them. During fall 2008 we will evaluate the utility of these materials more thoroughly in a treatment-control study initiated in a large introductory non-major science course. This presentation will give an overview of these materials as well as preliminary data from this evaluation.

Egger, A. E.; Carpi, A.

2008-12-01

345

Active Monitoring Processing and Imaging Model  

NASA Astrophysics Data System (ADS)

Long-term coherent, controlled point-source signal transmission, stacking, complex-envelope matched filtering (MF), and instantaneous phase processing yield a multiple-signal arrival time accuracy of 1.3/(f(snr_dB - 36)) ms, where f is the modulated signal carrier frequency and snr_dB is the MF output signal-to-noise ratio in dB. In our model, travel-time tomography establishes the location, geometry, volume, and orientation of an Earth parameter contrast region. Adopting body-wave theory to describe the background medium Green function tensor, we arrive at integral representations for 3-D time-lapse contrast source inversion from which static stress, anisotropy, and anelastic perturbations are to be resolved.
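Reading the quoted accuracy expression as 1.3/(f · (snr_dB − 36)) ms (the grouping is our assumption from the abstract's notation), a quick numeric check:

```python
# Numeric illustration of the quoted arrival-time accuracy expression.
# The grouping 1.3 / (f * (snr_dB - 36)) is an assumed reading of the
# abstract's formula, not a confirmed one.
def arrival_time_accuracy_ms(f_hz, snr_db):
    return 1.3 / (f_hz * (snr_db - 36))

# e.g. a 10 Hz carrier and a 49 dB matched-filter output SNR:
print(arrival_time_accuracy_ms(10, 49))  # ≈ 0.01 ms
```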

Unger, R.

2012-12-01

346

Video image processing for nuclear safeguards  

SciTech Connect

The field of nuclear safeguards has received increasing amounts of public attention since the events of the Iraq-UN conflict over Kuwait, the dismantlement of the former Soviet Union, and more recently, the North Korean resistance to nuclear facility inspections by the International Atomic Energy Agency (IAEA). The role of nuclear safeguards in these and other events relating to the world's nuclear material inventory is to assure safekeeping of these materials and to verify the inventory and use of nuclear materials as reported by states that have signed the nuclear Nonproliferation Treaty throughout the world. Nuclear safeguards are measures prescribed by domestic and international regulatory bodies such as DOE, NRC, IAEA, and EURATOM and implemented by the nuclear facility or the regulatory body. These measures include destructive and non-destructive analysis of product materials/process by-products for materials control and accountancy purposes, physical protection for domestic safeguards, and containment and surveillance for international safeguards.

Rodriguez, C.A.; Howell, J.A.; Menlove, H.O.; Brislawn, C.M.; Bradley, J.N. [Los Alamos National Lab., NM (United States); Chare, P.; Gorten, J. [European Atomic Energy Community, Luxembourg (Luxembourg)

1995-09-01

347

Image processing system performance prediction and product quality evaluation  

NASA Technical Reports Server (NTRS)

The author has identified the following significant results. A new technique for image processing system performance prediction and product quality evaluation was developed. It was entirely objective, quantitative, and general, and should prove useful in system design and quality control. The technique and its application to determination of quality control procedures for the Earth Resources Technology Satellite NASA Data Processing Facility are described.

Stein, E. K.; Hammill, H. B. (principal investigators)

1976-01-01

348

Robust individual template pipeline processing for longitudinal MR images  

E-print Network

Robust individual template pipeline processing for longitudinal MR images. N. Guizard, V. S. Fonov (Montreal Neurological Institute, McGill University, Canada; CENIR - ICM, Pitié Salpétrière, Paris, France). … the anatomical changes underlying on-going neurodegenerative processes. In different neurological disorders

Paris-Sud XI, Université de

349

Parallel and Distributed Algorithms for High Speed Image Processing  

Microsoft Academic Search

Many image processing tasks exhibit a high degree of data locality and parallelism and map quite readily to specialized massively parallel computing hardware. However, as distributed memory machines are becoming a viable and economical parallel computing resource, it is important to understand how to use these environments for parallel image processing as well. In this paper we discuss

Jeffrey M. Squyres; Andrew Lumsdaine; Brian C. McCandless; Robert L. Stevenson

1996-01-01

350

Accelerating MATLAB Image Processing Toolbox functions on GPUs  

Microsoft Academic Search

In this paper, we present our effort in developing an open-source GPU (graphics processing units) code library for the MATLAB Image Processing Toolbox (IPT). We ported a dozen of representative functions from IPT and based on their inherent characteristics, we grouped these functions into four categories: data independent, data sharing, algorithm dependent and data dependent. For each category, we present

Jingfei Kong; Martin Dimitrov; Yi Yang; Janaka Liyanage; Lin Cao; Jacob Staples; Mike Mantor; Huiyang Zhou

2010-01-01

351

IDP: Image and data processing (software) in C++  

SciTech Connect

IDP++ (Image and Data Processing in C++) is a compiled, multidimensional, multi-data-type signal processing environment written in C++. It is being developed within the Radar Ocean Imaging group and is intended as a partial replacement for View. IDP++ takes advantage of the latest object-oriented compiler technology to provide 'information hiding.' Users need only know C, not C++. Signals are treated like any other variable, with a defined set of operators and functions, in an intuitive manner. IDP++ is designed for real-time environments where interpreted signal processing packages are less efficient.

Lehman, S. [Lawrence Livermore National Lab., CA (United States)

1994-11-15

352

Pulsed thermography image processing for damage growth monitoring  

NASA Astrophysics Data System (ADS)

An image processing algorithm based on a combination of the most commonly used signal processing techniques in pulsed thermography is applied to monitor the progression of impact damage sites during full-scale testing of a composite test article. It is demonstrated that the algorithm can be used to monitor damage during durability and damage tolerance testing. Over the first phase of the test program, although no damage growth was detected, the processed pulsed thermography images showed that the average standard deviation of the measurements was only ~0.08 inches, equivalent to 2 infrared camera pixels.

Genest, M.

2012-05-01

353

Cellular Neural Network for Real Time Image Processing  

SciTech Connect

Since their introduction in 1988, Cellular Nonlinear Networks (CNNs) have played a key role as image processing instruments. Thanks to their structure, they are capable of processing individual pixels in parallel, providing fast image processing capabilities that have been applied to a wide range of fields, among them nuclear fusion. Indeed, in recent years visible and infrared video cameras have become more and more important in tokamak fusion experiments, with the twofold aim of understanding the physics and monitoring the safety of the operation. Examining the output of these cameras in real time can provide significant information for plasma control and the safety of the machines. The potential of CNNs can be exploited to this aim. To demonstrate the feasibility of the approach, CNN image processing has been applied to several tasks at both the Frascati Tokamak Upgrade (FTU) and the Joint European Torus (JET).

Vagliasindi, G.; Arena, P.; Fortuna, L. [Dipartimento di Ingegneria Elettrica Elettronica e dei Sistemi - Universita degli Studi di Catania, I-95125 Catania (Italy); Mazzitelli, G. [ENEA-Gestione Grandi Impianti Sperimentali, via E. Fermi 45, I-00044 Frascati, Rome (Italy); Murari, A. [Consorzio RFX-Associazione EURATOM ENEA per la Fusione, I-35127 Padova (Italy)

2008-03-12

354

On-demand server-side image processing for web-based DICOM image display  

NASA Astrophysics Data System (ADS)

Low-cost image delivery is needed in modern networked hospitals. If a hospital has hundreds of clients, the cost of client systems is a big problem. Naturally, a Web-based system is the most effective solution, but a Web browser alone cannot display medical images with certain image processing, such as a lookup table transformation. We developed a Web-based medical image display system using a Web browser and on-demand server-side image processing. All images displayed on a Web page are generated from DICOM files on a server and delivered on demand. User interaction on the Web page is handled by a client-side scripting technology such as JavaScript. This combination gives the look and feel of an imaging workstation, both in functionality and in speed. Real-time update of images while tracing mouse motion is achieved in the Web browser without any client-side image processing, which would otherwise require plug-in technology such as Java Applets or ActiveX. We tested the performance of the system in three cases: a single client, a small number of clients on a fast network, and a large number of clients on a normal-speed network. The results show that communication overhead is very slight and that the system is very scalable in the number of clients.
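A typical server-side lookup-table transformation of the kind mentioned above is the window width/window level mapping of raw DICOM pixel values to 8-bit display greys; the following is an illustrative sketch, not the paper's implementation:

```python
# Window width / window level transform: map raw pixel values to 8-bit
# display greys (illustrative; example CT numbers are hypothetical).
import numpy as np

def window_level(pixels, center, width):
    lo, hi = center - width / 2, center + width / 2
    out = (np.clip(pixels, lo, hi) - lo) / (hi - lo)  # normalize the window
    return np.round(out * 255).astype(np.uint8)

raw = np.array([-200, 40, 80, 1200])   # e.g. CT numbers in HU
print(window_level(raw, center=40, width=400))
```

Values below the window clip to black, values above it to white, and the in-window range maps linearly to the display greys, which is exactly the kind of per-view recomputation a server can perform on demand.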

Sakusabe, Takaya; Kimura, Michio; Onogi, Yuzo

2000-04-01

355

Digital Image Processing Techniques to Create Attractive Astronomical Images from Research Data  

NASA Astrophysics Data System (ADS)

The quality of modern astronomical data, the power of modern computers and the agility of current image processing software enable the creation of high-quality images in a purely digital form that rival the quality of traditional photographic astronomical images. The combination of these technological advancements has created a new ability to make color astronomical images. And in many ways, it has led to a new philosophy towards how to create them. We present a practical guide to generate astronomical images from research data by using powerful image processing programs. These programs use a layering metaphor that allows an unlimited number of astronomical datasets to be combined in any desired color scheme, creating an immense parameter space to be explored using an iterative approach. Several examples of image creation are presented. We also present a philosophy on how to use color and composition to create images that simultaneously highlight the scientific detail within an image and are aesthetically appealing. We advocate an approach that uses visual grammar, defined as the elements which affect the interpretation of an image, to maximize the richness and detail in an image while maintaining scientific accuracy. By properly using visual grammar, one can imply qualities that a two-dimensional image intrinsically cannot show, such as depth, motion and energy. In addition, composition can be used to engage the viewer and keep him or her interested for a longer period of time. The effective use of these techniques can result in a striking image that will effectively convey the science within the image, to scientists and to the public.
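The layering metaphor, at its simplest, maps each normalized dataset to one color channel. A hedged sketch in pure Python, with invented data and function names, on flattened 1-D pixel lists:

```python
def normalize(data):
    # scale a dataset to [0, 1] so it can serve as one color layer
    lo, hi = min(data), max(data)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in data]

def compose_rgb(red_set, green_set, blue_set):
    # assign each dataset to a color channel, combined pixel by pixel
    r, g, b = normalize(red_set), normalize(green_set), normalize(blue_set)
    return list(zip(r, g, b))

# e.g. three narrow-band exposures of the same field, flattened to 1-D
pixels = compose_rgb([10, 20, 30], [5, 5, 10], [0, 1, 2])
print(pixels[0])  # (0.0, 0.0, 0.0)
```

Real workflows add per-layer stretches, weights and an unlimited number of layers; that is what opens up the "immense parameter space" the abstract describes.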

Rector, T. A.; Levay, Z.; Frattare, L.; English, J.; Pu'uohau-Pummill, K.

2004-05-01

356

Digital Image Processing Techniques to Create Attractive Astronomical Images from Research Data  

NASA Astrophysics Data System (ADS)

The quality of modern astronomical data, the power of modern computers and the agility of current image processing software enable the creation of high-quality images in a purely digital form that rival the quality of traditional photographic astronomical images. The combination of these technological advancements has created a new ability to make color astronomical images. And in many ways, it has led to a new philosophy towards how to create them. We present a practical guide to generate astronomical images from research data by using powerful image processing programs. These programs use a layering metaphor that allows an unlimited number of astronomical datasets to be combined in any desired color scheme, creating an immense parameter space to be explored using an iterative approach. Several examples of image creation are presented. We also present a philosophy on how to use color and composition to create images that simultaneously highlight the scientific detail within an image and are aesthetically appealing. We advocate an approach that uses visual grammar, defined as the elements which affect the interpretation of an image, to maximize the richness and detail in an image while maintaining scientific accuracy. By properly using visual grammar, one can imply qualities that a two-dimensional image intrinsically cannot show, such as depth, motion and energy. In addition, composition can be used to engage the viewer and keep him or her interested for a longer period of time. The effective use of these techniques can result in a striking image that will effectively convey the science within the image, to scientists and to the public.

Rector, T. A.; Levay, Z.; Frattare, L. M.; English, J.; Pummill, K.

2003-12-01

357

A general framework for designing image processing algorithms for coherent polarimetric images  

Microsoft Academic Search

We describe a general framework for designing optimal image processing algorithms for polarimetric images formed with coherent radiation, which can be optical or microwave. Starting from the classical speckle model for coherent signals, we show that a wide class of algorithms for such tasks as detection, localization and segmentation depends on a simple statistic, which is the determinant of

François Goudail; Frédéric Galland; Philippe Réfrégier

2003-01-01

358

Image Processing Algorithms Incorporating Textures for the Segmentation of Satellite Data  

E-print Network

In image processing, automated segmentation incorporating textures can enable further image processing to increase segmentation accuracy.

Gugat, Martin

359

Development of intelligent massage manipulator and reconstruction of massage process path using image processing technique  

Microsoft Academic Search

This paper develops a motor-driven massage artificial electro-mechanical manipulator system with intelligent biomedical sensing and monitoring capabilities and reconstructs the massage process path by using a CCD image processing technique. In this paper, we integrate a versatile inter-digital electrocardiograph (ECG) into the manipulator system and construct a massage path by using a twin-CCD image processing technique with an inverse engineering method. So, this

Chih-Cheng Peng; Thong-Shing Hwang; Chih-Jui Lin; Yao-Ting Wu; Ching-Yi Chang; Jian-Bin Huang

2010-01-01

360

Logical Consideration Of Digital Imaging Process Of Minimal Radiographs  

NASA Astrophysics Data System (ADS)

A logical consideration of the digital imaging process (6) is described using the characteristics of the film and screen of a radiograph and a non-linear compensation curve, obtained experimentally and theoretically, relating for each pixel the optical density of a 100%-exposure radiograph to that of a 25%-exposure radiograph. A radiological image of less than 100% exposure can be processed using this point-by-point mapping to form a radiographic image similar to that of 100% exposure. Mathematical expressions for the film-screen characteristics and for the conversion between the optical densities of the 100%-exposure and 25%-exposure radiographs are given. A possible application of the digital imaging process is discussed.
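The point-by-point mapping between exposure levels can be sketched as interpolation over measured calibration pairs. The calibration values below are invented for illustration; the paper's actual compensation curve is derived experimentally and theoretically:

```python
import bisect

def make_compensation(cal_25, cal_100):
    # cal_25 / cal_100: matched optical-density calibration points measured
    # at 25% and 100% exposure (the values used below are invented);
    # between points we interpolate linearly
    def compensate(d):
        if d <= cal_25[0]:
            return cal_100[0]
        if d >= cal_25[-1]:
            return cal_100[-1]
        i = bisect.bisect_right(cal_25, d)
        t = (d - cal_25[i - 1]) / (cal_25[i] - cal_25[i - 1])
        return cal_100[i - 1] + t * (cal_100[i] - cal_100[i - 1])
    return compensate

comp = make_compensation([0.2, 0.8, 1.4], [0.5, 1.6, 2.6])
print(comp(0.8))  # 1.6 at a calibration point
```

Applying `compensate` to every pixel of a 25%-exposure image is the point-by-point mapping the abstract describes.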

Wang, Yen; Li, C. C.; Tai, H. T.; Shu, David B.

1982-12-01

361

Single image dehazing using local adaptive signal processing  

NASA Astrophysics Data System (ADS)

A local adaptive algorithm for single image dehazing is presented. The algorithm is able to estimate a dehazed image from an observed hazed scene by solving an objective function whose parameters are adapted to local statistics of the hazed image inside a moving window. The proposed objective function is based on a trade-off among several local rank order statistics of the dehazed signal and the mean-squared-error between the hazed and dehazed signals. In order to achieve a high-rate signal processing, the proposed algorithm is implemented in a graphics processing unit (GPU) exploiting massive parallelism. Experimental results obtained with a laboratory prototype are presented, discussed, and compared with those results obtained with existing single image dehazing methods in terms of objective metrics and computational complexity.
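The local rank-order and mean statistics gathered inside the moving window can be sketched in one dimension. Truncating the window at the signal borders is an assumption here, not something stated in the abstract:

```python
def local_stats(signal, win):
    # minimum (a rank-order statistic) and mean inside a moving window;
    # the window is truncated at the signal borders (an assumption)
    half = win // 2
    out = []
    for i in range(len(signal)):
        w = signal[max(0, i - half): i + half + 1]
        out.append((min(w), sum(w) / len(w)))
    return out

print(local_stats([4, 2, 7, 1, 5], 3))
```

In the paper these per-window statistics feed an objective function whose parameters adapt as the window moves; on a GPU each window is processed independently, which is what makes the massive parallelism pay off.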

Valderrama, Jesus A.; Diaz-Ramirez, Victor H.; Kober, Vitaly

2014-09-01

362

Determination of SATI Instrument Filter Parameters by Processing Interference Images  

E-print Network

This paper presents a method for determination of interference filter parameters, such as the effective refraction index and the maximal transmittance wavelength, on the basis of image processing of a spectrogram produced by the Spectrometer Airglow Temperature Imager instrument. The method employs radial sections for determination of points on the crests and valleys in the spectrograms. These points are involved in a least-squares method for determination of the centres and radii of the crests and valleys. The use of the image radial sections allows determination of the maximal number of crests and valleys in the spectrogram. The application of least-squares fitting leads to determination of the image centres and the radii of the crests and valleys with precision higher than one pixel. The nocturnal course of the filter parameters produced by this method is presented and compared with that of the known ones. The values of the filter parameters thus obtained are closer to the laborator...
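A least-squares determination of a centre and radius from crest/valley points can be sketched with the standard Kåsa algebraic fit; the specific fitting formulation the authors use is not stated, so this choice is an assumption:

```python
def solve3(A, b):
    # Gaussian elimination with partial pivoting for a 3x3 linear system
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for k in range(col, 4):
                M[r][k] -= f * M[col][k]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][k] * x[k] for k in range(r + 1, 3))) / M[r][r]
    return x

def fit_circle(pts):
    # Kasa fit: least-squares solution of x^2 + y^2 + a*x + b*y + c = 0,
    # which is linear in (a, b, c) and solved via the normal equations
    n = len(pts)
    sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts); syy = sum(y * y for _, y in pts)
    sxy = sum(x * y for x, y in pts)
    sz = sum(x * x + y * y for x, y in pts)
    sxz = sum(x * (x * x + y * y) for x, y in pts)
    syz = sum(y * (x * x + y * y) for x, y in pts)
    a, b, c = solve3([[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]],
                     [-sxz, -syz, -sz])
    cx, cy = -a / 2.0, -b / 2.0
    return cx, cy, (cx * cx + cy * cy - c) ** 0.5

# four points on a circle centred at (1, 2) with radius 3
cx, cy, r = fit_circle([(4, 2), (1, 5), (-2, 2), (1, -1)])
print(round(cx, 6), round(cy, 6), round(r, 6))
```

With many crest/valley points per ring the over-determined fit averages out pixel noise, which is how sub-pixel precision for the centres and radii becomes possible.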

Atanassov, Atanas Marinov

2010-01-01

363

Dehydration process of fish analyzed by neutron beam imaging  

NASA Astrophysics Data System (ADS)

Since regulation of water content of the dried fish is an important factor for the quality of the fish, water-losing process during drying (squid and Japanese horse mackerel) was analyzed through neutron beam imaging. The neutron image showed that around the shoulder of mackerel, there was a part where water content was liable to maintain high during drying. To analyze water-losing process more in detail, spatial image was produced. From the images, it was clearly indicated that the decrease of water content was regulated around the shoulder part. It was suggested that to prevent deterioration around the shoulder part of the dried fish is an important factor to keep quality of the dried fish in the storage.

Tanoi, K.; Hamada, Y.; Seyama, S.; Saito, T.; Iikura, H.; Nakanishi, T. M.

2009-06-01

364

Images from Mars Pathfinder - Data Processing Techniques Revisited  

NASA Astrophysics Data System (ADS)

The primary goal of the Mars Pathfinder mission was the development of a low-cost system that could place a science payload on the Martian surface. Landing on July 4 and operating through September 27, 1997, the mission returned 2.6 GB of data, including over 16,000 images from the IMP (Imager for Mars Pathfinder) mounted on top of the lander and 550 images from the rover camera. To support geologists in their analysis, we applied and adapted photogrammetric techniques to map the lander area, to determine the lander's coordinates in a global context, and to orient the images with respect to the North direction. With the upcoming activities in Europe in the context of the ExoMars mission, as well as planetary rover vision projects, we report on our experience with IMP image data and discuss the processing methods and techniques that were applied.

Willner, K.; Oberst, J.; Scholten, F.; Jaumann, R.

2008-09-01

365

Near-real-time satellite image processing: metacomputing in CC++  

Microsoft Academic Search

Metacomputing combines heterogeneous system elements in a seamless computing service. In this case study, we introduce the elements of metacomputing and describe an application for cloud detection and visualization of infrared and visible-light satellite images. The application processes the satellite images by using Compositional C++ (CC++), a simple yet powerful extension of C++, and its runtime system, Nexus, to integrate specialized resources,

Craig A. Lee; Carl Kesselman; Stephen Schwab

1996-01-01

366

Echelle spectra image processing for the International Ultraviolet Explorer  

NASA Technical Reports Server (NTRS)

This paper presents the techniques needed to convert the International Ultraviolet Explorer (IUE) high-resolution echelle spectra into useful scientific data. The image is obtained with a digitally controlled vidicon camera system. The processes that must be carried out on the image include: noise removal, correction for geometric and optical distortion, nonuniform photometric corrections, and two-dimensional wavelength determination. Results from the breadboard camera system are presented.

Klinglesmith, D. A.; Dunford, E.

1975-01-01

367

Image Processing Workstations And Data Bases For Quality Control Of Geocoded Satellite Images  

Microsoft Academic Search

In the framework of processing facilities for future earth remote sensing sensors (ERS-1, SIR-C/X-SAR), the geocoding of Synthetic Aperture Radar images becomes an important aspect for supporting user data needs. DLR will offer such images, dealing with a wide range of processing and cartographic parameters. As geocoding calls for comparability with topographic references, the geometric quality control

G. Schreier

1989-01-01

368

Image processing of a spectrogram produced by Spectrometer Airglow Temperature Imager  

Microsoft Academic Search

The Spectral Airglow Temperature Imager is an instrument, specially designed for investigation of the wave processes in the Mesosphere-Lower Thermosphere. In order to determine the kinematics parameters of a wave, the values of a physical quantity in different space points and their changes in the time should be known. An approach for image processing of registered spectrograms is proposed. A

Atanas Marinov Atanassov

2010-01-01

369

High Performance Image Processing And Laser Beam Recording System  

NASA Astrophysics Data System (ADS)

The article is meant to provide the digital image recording community with an overview of digital image processing and recording. The Digital Interactive Image Processing System (DIIPS) was assembled by ESL for Air Force Systems Command under Rome Air Development Center's guidance. The system provides the capability of mensuration and exploitation of digital imagery, with both mono and stereo digital images as inputs. This development provided for system design, basic hardware, software and operational procedures to enable Air Force Systems Command photo analysts to perform digital mensuration and exploitation of stereo digital images. The engineering model was based on state-of-the-art technology and, to the extent possible, off-the-shelf hardware and software. A laser recorder was also developed for the DIIPS system and is known as the Ultra High Resolution Image Recorder (UHRIR). The UHRIR is a prototype model that will enable Air Force Systems Command to record computer-enhanced digital image data on photographic film at high resolution with geometric and radiometric distortion minimized.

Fanelli, Anthony R.

1980-09-01

370

SENTINEL-2 Level 1 Products and Image Processing Performances  

NASA Astrophysics Data System (ADS)

In partnership with the European Commission and in the frame of the Global Monitoring for Environment and Security (GMES) program, the European Space Agency (ESA) is developing the Sentinel-2 optical imaging mission devoted to the operational monitoring of land and coastal areas. The Sentinel-2 mission is based on a constellation of satellites deployed in polar sun-synchronous orbit. While ensuring data continuity with the former SPOT and LANDSAT multi-spectral missions, Sentinel-2 will also offer wide improvements such as a unique combination of global coverage with a wide field of view (290 km), a high revisit (5 days with two satellites), high resolution (10 m, 20 m and 60 m) and multi-spectral imagery (13 spectral bands in the visible and shortwave infra-red domains). In this context, the Centre National d'Etudes Spatiales (CNES) supports ESA to define the system image products and to prototype the relevant image processing techniques. This paper offers, first, an overview of the Sentinel-2 system and then introduces the image products delivered by the ground processing. The Level-0 and Level-1A are system products which correspond, respectively, to raw compressed and uncompressed data (limited to internal calibration purposes). The Level-1B is the first public product: it comprises radiometric corrections (dark signal, pixel response non-uniformity, crosstalk, defective pixels, restoration, and binning for the 60 m bands) and an enhanced physical geometric model appended to the product but not applied. The Level-1C provides ortho-rectified top-of-atmosphere reflectance with sub-pixel multi-spectral and multi-date registration; a cloud and land/water mask is associated with the product. Note that the cloud mask also provides an indication of cirrus. The ground sampling distance of the Level-1C product will be 10 m, 20 m or 60 m according to the band. The final Level-1C product is tiled following a pre-defined grid of 100x100 km2, based on the UTM/WGS84 reference frame.
The stringent image quality requirements are also described, in particular the geo-location accuracy for both the absolute (better than 12.5 m) and multi-temporal (better than 0.3 pixels) cases. The prototyped image processing techniques (both radiometric and geometric) are then addressed. The radiometric corrections consist mainly of dark signal and detector relative sensitivity correction, crosstalk correction and MTF restoration. A special focus is put on the geometric corrections, in particular the innovative method of automatic enhancement of the geometric physical model. This method takes advantage of a perfectly geo-referenced Global Reference Image database to correct the physical geometric model of each image taken. The processing is based on an automatic image matching process which provides accurate ground control points between a given band of the image to refine and a reference image, allowing the viewing model to be calibrated dynamically. The generation of the Global Reference Image database, made of Sentinel-2 pre-calibrated mono-spectral images, is also addressed. In order to perform independent validation of the prototyping activity, an image simulator dedicated to Sentinel-2 has been set up. Thanks to this, a set of images has been simulated from various source images, combining different acquisition conditions and landscapes (mountains, deserts, cities …). Given disturbances have also been simulated so as to estimate the end-to-end performance of the processing chain. Finally, the radiometric and geometric performances obtained by the prototype are presented, in particular the geo-location performance of the Level-1C products, which widely fulfils the image quality requirements.
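The image-matching step that yields ground control points can be caricatured in one dimension as picking the shift that best correlates a band against the reference. The mean-product score used below is an assumption; the operational chain's actual matching criterion is not specified in the abstract:

```python
def best_shift(ref, band, max_shift):
    # score a candidate shift by the mean product of overlapping samples,
    # then return the shift with the highest score
    def score(s):
        lo = max(0, -s)
        hi = min(len(ref), len(band) - s)
        pairs = [(ref[i], band[i + s]) for i in range(lo, hi)]
        return sum(a * b for a, b in pairs) / len(pairs)
    return max(range(-max_shift, max_shift + 1), key=score)

# a bright feature at index 2 in the reference appears at index 3 in the band
print(best_shift([0, 0, 1, 0, 0], [0, 0, 0, 1, 0], 2))  # 1
```

The real system does this in two dimensions at many tie points and feeds the measured offsets back into the physical viewing model rather than simply shifting the image.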

Baillarin, S. J.; Meygret, A.; Dechoz, C.; Petrucci, B.; Lacherade, S.; Tremas, T.; Isola, C.; Martimort, P.; Spoto, F.

2012-07-01

371

Creating & using specimen images for collection documentation, research, teaching and outreach  

NASA Astrophysics Data System (ADS)

In this age of digital media, there are many opportunities for use of good images of specimens. On-line resources such as institutional web sites and global sites such as PaleoNet and the Paleobiology Database provide venues for collection information and images. Pictures can also be made available to the general public through popular media sites such as Flickr and Facebook, where they can be retrieved and used by teachers, students, and the general public. The number of requests for specimen loans can be drastically reduced by offering the scientific community access to data and specimen images using the internet. This is an important consideration in these days of limited support budgets, since it reduces the amount of staff time necessary for giving researchers and educators access to collections. It also saves wear and tear on the specimens themselves. Many institutions now limit or refuse to send specimens out of their own countries because of the risks involved in going through security and customs. The internet can bridge political boundaries, allowing everyone equal access to collections. In order to develop photographic documentation of a collection, thoughtful preparation will make the process easier and more efficient. Acquire the necessary equipment, establish standards for images, and develop a simple workflow design. Manage images in the camera, and produce the best possible results, rather than relying on time-consuming editing after the fact. It is extremely important that the images of each specimen be of the highest quality and resolution. Poor quality, low resolution photos are not good for anything, and will often have to be retaken when another need arises. Repeating the photography process involves more handling of specimens and more staff time. Once good photos exist, smaller versions can be created for use on the web. The originals can be archived and used for publication and other purposes.

Demouthe, J. F.

2012-12-01

372

Digital image processing: a primer for JVIR authors and readers: Part 3: Digital image editing.  

PubMed

This is the final installment of a three-part series on digital image processing intended to prepare authors for online submission of manuscripts. In the first two articles of the series, the fundamentals of digital image architecture were reviewed and methods of importing images to the computer desktop were described. In this article, techniques are presented for editing images in preparation for online submission. A step-by-step guide to basic editing with use of Adobe Photoshop is provided and the ethical implications of this activity are explored. PMID:14654480

LaBerge, Jeanne M; Andriole, Katherine P

2003-12-01

373

The Multimission Image Processing Laboratory's virtual frame buffer interface  

NASA Technical Reports Server (NTRS)

Large image processing systems use multiple frame buffers with differing architectures and vendor-supplied interfaces. This variety of architectures and interfaces creates software development, maintenance and portability problems for application programs. Several machine-independent graphics standards such as ANSI Core and GKS are available, but none of them is adequate for image processing. Therefore, the Multimission Image Processing Laboratory project has implemented a programmer-level virtual frame buffer interface. This interface makes all frame buffers appear as a generic frame buffer with a specified set of characteristics. This document defines the virtual frame buffer interface and provides information such as FORTRAN subroutine definitions, frame buffer characteristics, sample programs, etc. It is intended to be used by application programmers and by system programmers who are adding new frame buffers to a system.
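The generic-frame-buffer idea can be sketched as a thin interface over interchangeable device backends. The class and method names below are illustrative (the real interface is a set of FORTRAN subroutines), but the structure is the same: applications target the virtual interface, and each new frame buffer only needs a backend:

```python
class InMemoryBackend:
    # one device-specific implementation hidden behind the generic interface
    def __init__(self, width, height):
        self.pixels = [[0] * width for _ in range(height)]
    def write_pixel(self, x, y, value):
        self.pixels[y][x] = value
    def read_pixel(self, x, y):
        return self.pixels[y][x]

class VirtualFrameBuffer:
    # the generic frame buffer: applications program against this class,
    # never against a particular device
    def __init__(self, backend):
        self._backend = backend
    def write_pixel(self, x, y, value):
        self._backend.write_pixel(x, y, value)
    def read_pixel(self, x, y):
        return self._backend.read_pixel(x, y)

fb = VirtualFrameBuffer(InMemoryBackend(4, 4))
fb.write_pixel(1, 2, 255)
print(fb.read_pixel(1, 2))  # 255
```

Porting an application to a new frame buffer then means writing one backend, not touching every application program.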

Wolfe, T.

1984-01-01

374

Latency and bandwidth considerations in parallel robotics image processing  

SciTech Connect

Parallel image processing for robotics applications differs in a fundamental way from parallel scientific computing applications: the problem size is fixed, and latency requirements are tight. This brings Amdahl's law into effect with full force, so that message-passing latency and bandwidth severely restrict performance. In this paper the authors examine an application from this domain, stereo image processing, which has been implemented in Adapt, a niche language for parallel image processing implemented on the Carnegie Mellon-Intel Corporation iWarp. High performance has been achieved for this application. They show how an I/O building-block approach on iWarp achieved this, and then examine the implications of this performance for more traditional machines that do not have iWarp's rich I/O primitive set.
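The fixed-problem-size regime the authors describe is exactly where Amdahl's law bites, and the bound is easy to compute directly. The serial fraction used below is an invented example, standing in for the latency-bound share of a stereo pipeline:

```python
def amdahl_speedup(serial_fraction, processors):
    # Amdahl's law: with a fixed problem size, the fraction of work that
    # cannot be parallelized caps the achievable speedup
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

# even a 10% serial (latency-bound) share limits 64 processors to under 9x
print(round(amdahl_speedup(0.1, 64), 2))  # 8.77
```

This is why the paper's I/O building blocks matter: shrinking the communication-bound fraction is worth more than adding processors.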

Webb, J.A. [Carnegie Mellon Univ., Pittsburgh, PA (United States). School of Computer Science

1993-12-31

375

Automated Processing of Zebrafish Imaging Data: A Survey  

PubMed Central

Abstract Due to the relative transparency of its embryos and larvae, the zebrafish is an ideal model organism for bioimaging approaches in vertebrates. Novel microscope technologies allow the imaging of developmental processes in unprecedented detail, and they enable the use of complex image-based read-outs for high-throughput/high-content screening. Such applications can easily generate Terabytes of image data, the handling and analysis of which becomes a major bottleneck in extracting the targeted information. Here, we describe the current state of the art in computational image analysis in the zebrafish system. We discuss the challenges encountered when handling high-content image data, especially with regard to data quality, annotation, and storage. We survey methods for preprocessing image data for further analysis, and describe selected examples of automated image analysis, including the tracking of cells during embryogenesis, heartbeat detection, identification of dead embryos, recognition of tissues and anatomical landmarks, and quantification of behavioral patterns of adult fish. We review recent examples for applications using such methods, such as the comprehensive analysis of cell lineages during early development, the generation of a three-dimensional brain atlas of zebrafish larvae, and high-throughput drug screens based on movement patterns. Finally, we identify future challenges for the zebrafish image analysis community, notably those concerning the compatibility of algorithms and data formats for the assembly of modular analysis pipelines. PMID:23758125

Dickmeis, Thomas; Driever, Wolfgang; Geurts, Pierre; Hamprecht, Fred A.; Kausler, Bernhard X.; Ledesma-Carbayo, Maria J.; Maree, Raphael; Mikula, Karol; Pantazis, Periklis; Ronneberger, Olaf; Santos, Andres; Stotzka, Rainer; Strahle, Uwe; Peyrieras, Nadine

2013-01-01

376

Triple Bioluminescence Imaging for In Vivo Monitoring of Cellular Processes  

PubMed Central

Bioluminescence imaging (BLI) has shown to be crucial for monitoring in vivo biological processes. So far, only dual bioluminescence imaging using firefly (Fluc) and Renilla or Gaussia (Gluc) luciferase has been achieved due to the lack of availability of other efficiently expressed luciferases using different substrates. Here, we characterized a codon-optimized luciferase from Vargula hilgendorfii (Vluc) as a reporter for mammalian gene expression. We showed that Vluc can be multiplexed with Gluc and Fluc for sequential imaging of three distinct cellular phenomena in the same biological system using vargulin, coelenterazine, and D-luciferin substrates, respectively. We applied this triple imaging system to monitor the effect of soluble tumor necrosis factor-related apoptosis-inducing ligand (sTRAIL) delivered using an adeno-associated viral vector (AAV) on brain tumors in mice. Vluc imaging showed efficient sTRAIL gene delivery to the brain, while Fluc imaging revealed a robust antiglioma therapy. Further, nuclear factor-κB (NF-κB) activation in response to sTRAIL binding to glioma cells death receptors was monitored by Gluc imaging. This work is the first demonstration of trimodal in vivo bioluminescence imaging and will have a broad applicability in many different fields including immunology, oncology, virology, and neuroscience. PMID:23778500

Maguire, Casey A; Bovenberg, M Sarah; Crommentuijn, Matheus HW; Niers, Johanna M; Kerami, Mariam; Teng, Jian; Sena-Esteves, Miguel; Badr, Christian E; Tannous, Bakhos A

2013-01-01

377

Automated processing of zebrafish imaging data: a survey.  

PubMed

Due to the relative transparency of its embryos and larvae, the zebrafish is an ideal model organism for bioimaging approaches in vertebrates. Novel microscope technologies allow the imaging of developmental processes in unprecedented detail, and they enable the use of complex image-based read-outs for high-throughput/high-content screening. Such applications can easily generate Terabytes of image data, the handling and analysis of which becomes a major bottleneck in extracting the targeted information. Here, we describe the current state of the art in computational image analysis in the zebrafish system. We discuss the challenges encountered when handling high-content image data, especially with regard to data quality, annotation, and storage. We survey methods for preprocessing image data for further analysis, and describe selected examples of automated image analysis, including the tracking of cells during embryogenesis, heartbeat detection, identification of dead embryos, recognition of tissues and anatomical landmarks, and quantification of behavioral patterns of adult fish. We review recent examples for applications using such methods, such as the comprehensive analysis of cell lineages during early development, the generation of a three-dimensional brain atlas of zebrafish larvae, and high-throughput drug screens based on movement patterns. Finally, we identify future challenges for the zebrafish image analysis community, notably those concerning the compatibility of algorithms and data formats for the assembly of modular analysis pipelines. PMID:23758125

Mikut, Ralf; Dickmeis, Thomas; Driever, Wolfgang; Geurts, Pierre; Hamprecht, Fred A; Kausler, Bernhard X; Ledesma-Carbayo, María J; Marée, Raphaël; Mikula, Karol; Pantazis, Periklis; Ronneberger, Olaf; Santos, Andres; Stotzka, Rainer; Strähle, Uwe; Peyriéras, Nadine

2013-09-01

378

Graphical Technique to Support the Teaching\\/Learning Process of Software Process Reference Models  

Microsoft Academic Search

In this paper, we propose a set of diagrams to visualize software process reference models (PRM). The diagrams, called dimods, are the combination of some visual and process modeling techniques such as rich pictures, mind maps, IDEF and RAD diagrams. We show the use of this technique by designing a set of dimods for the Mexican Software Industry Process Model

Ismael Edrein Espinosa-Curiel; Josefina Rodríguez-Jacobo; José Alberto Fernández-Zepeda

379

Image processing of a spectrogram produced by Spectrometer Airglow Temperature Imager  

E-print Network

The Spectral Airglow Temperature Imager is an instrument specially designed for investigation of the wave processes in the Mesosphere-Lower Thermosphere. In order to determine the kinematic parameters of a wave, the values of a physical quantity at different points in space, and their changes in time, should be known. An approach for image processing of registered spectrograms is proposed. A detailed description is given of the steps of this approach, related to recovering CCD pixel values influenced by cosmic particles, dark image correction, and filter parameter determination.

Atanassov, Atanas Marinov

2010-01-01

380

Grid Computing Application for Brain Magnetic Resonance Image Processing  

NASA Astrophysics Data System (ADS)

This work emphasizes the use of grid computing and web technology for automatic post-processing of brain magnetic resonance images (MRI) in the context of neuropsychiatric (Alzheimer's disease) research. Post-acquisition image processing is achieved through the interconnection of several individual processes into pipelines. Each process has input and output data ports, options and execution parameters, and performs a single task such as: a) extracting individual image attributes (e.g. dimensions, orientation, center of mass), b) performing image transformations (e.g. scaling, rotation, skewing, intensity standardization, linear and non-linear registration), c) performing image statistical analyses, and d) producing the necessary quality control images and/or files for user review. The pipelines are built to perform specific sequences of tasks on the alphanumeric data and MRIs contained in our database. The web application is coded in PHP and allows the creation of scripts to create, store and execute pipelines and their instances either on our local cluster or on high-performance computing platforms. To run an instance on an external cluster, the web application opens a communication tunnel through which it copies the necessary files, submits the execution commands and collects the results. We present results from system tests for the processing of a set of 821 brain MRIs from the Alzheimer's Disease Neuroimaging Initiative study via a nonlinear registration pipeline composed of 10 processes. Our results show successful execution on both local and external clusters, and a 4-fold increase in performance when using the external cluster. However, the latter's performance does not scale linearly, as queue waiting times and execution overhead increase with the number of tasks to be executed.
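The interconnection of single-task processes into a pipeline can be sketched as simple function composition. The step names below are illustrative, not those of the actual registration chain:

```python
def make_pipeline(*steps):
    # chain single-task processes: the output of one step feeds the next
    def run(data):
        for step in steps:
            data = step(data)
        return data
    return run

# illustrative steps standing in for e.g. intensity scaling and clipping
scale = lambda img: [v * 2 for v in img]
clip = lambda img: [min(v, 10) for v in img]
pipeline = make_pipeline(scale, clip)
print(pipeline([1, 4, 9]))  # [2, 8, 10]
```

Because each step exposes only data in and data out, instances of the same pipeline can be dispatched unchanged to a local cluster or, through a tunnel, to an external one, as the abstract describes.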

Valdivia, F.; Crépeault, B.; Duchesne, S.

2012-02-01

381

Influence of chemical processing on the imaging properties of microlenses  

NASA Astrophysics Data System (ADS)

Microlenses are produced by irradiation of a layer of tot'hema and eosin sensitized gelatin (TESG) by using a laser beam (Nd:YAG 2nd harmonic; 532 nm). All the microlenses obtained are concave with a parabolic profile. After the production, the microlenses are chemically processed with various concentrations of alum. The following imaging properties of microlenses were calculated and analyzed: the root mean square (rms) wavefront aberration, the geometric encircled energy and the spot diagram. The microlenses with higher concentrations of alum in solution had a greater effective focal length and better image quality. The microlenses chemically processed with 10% alum solution had near-diffraction-limited performance.

Vasiljević, Darko; Murić, Branka; Pantelić, Dejan; Panić, Bratimir

2009-07-01

382

Implementation of a Radiology Electronic Imaging Network: The community teaching hospital experience  

Microsoft Academic Search

Because of their typically small in-house computer and network staff, non-university hospitals often hesitate to consider a picture archiving and communication system (PACS) as a solution to the very demanding financial, clinical, and technological needs of today's Radiology Department. This article presents the experiences of the 3-year process for the design and implementation of the Radiology Electronic Imaging Network (REIN) in

Manuel Arreola; Harvey L. Neiman; Amy Sugarman; Larry Laurenti; Ron Forys

1997-01-01

383

Image-Processing Techniques for the Creation of Presentation-Quality Astronomical Images  

E-print Network

The quality of modern astronomical data, the power of modern computers and the agility of current image-processing software enable the creation of high-quality images in a purely digital form. The combination of these technological advancements has created a new ability to make color astronomical images. And in many ways it has led to a new philosophy towards how to create them. A practical guide is presented on how to generate astronomical images from research data with powerful image-processing programs. These programs use a layering metaphor that allows for an unlimited number of astronomical datasets to be combined in any desired color scheme, creating an immense parameter space to be explored using an iterative approach. Several examples of image creation are presented. A philosophy is also presented on how to use color and composition to create images that simultaneously highlight scientific detail and are aesthetically appealing. This philosophy is necessary because most datasets do not correspond to the wavelength range of sensitivity of the human eye. The use of visual grammar, defined as the elements which affect the interpretation of an image, can maximize the richness and detail in an image while maintaining scientific accuracy. By properly using visual grammar, one can imply qualities that a two-dimensional image intrinsically cannot show, such as depth, motion and energy. In addition, composition can be used to engage viewers and keep them interested for a longer period of time. The use of these techniques can result in a striking image that will effectively convey the science within the image, to scientists and to the public.
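The layering metaphor described above, assigning each dataset to a colour channel and combining them, can be sketched as follows. The datasets, their values, and the channel assignments are hypothetical, not taken from the paper.

```python
# Sketch of the layering idea: map each astronomical dataset (filter)
# to a colour channel and combine, so data outside the eye's range
# still lands in a chosen colour scheme. Values are illustrative.

def combine_rgb(red_layer, green_layer, blue_layer):
    # stack three single-channel layers into per-pixel RGB triples
    return [list(rgb) for rgb in zip(red_layer, green_layer, blue_layer)]

h_alpha = [0.9, 0.1]     # hypothetical narrowband dataset -> red
oiii = [0.2, 0.8]        # hypothetical [OIII] dataset -> green
broadband = [0.5, 0.5]   # hypothetical broadband dataset -> blue
color = combine_rgb(h_alpha, oiii, broadband)
```

Real image-processing programs add per-layer scaling, stretching and blending on top of this basic channel assignment.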

T. A. Rector; Z. G. Levay; L. M. Frattare; J. English; K. Pu'uohau-Pummill

2004-12-06

384

Fostering a theoretical and practical understanding of teaching as a relational process: a feminist participatory study of mentoring a doctoral student  

Microsoft Academic Search

We were seeking to disrupt the practice of fostering theoretical understandings about teaching and learning in our graduate courses that had little connection to students’ teaching practices. We also desired a mentoring approach for graduate students in education that would foster an understanding and practice of teaching as a relational process. This paper provides a practitioner account of an action

Gayle A. Buck; Colette M. Mast; Margaret A. Macintyre Latta; Juliann M. Kaftan

2009-01-01

385

Parallel-Processing Software for Creating Mosaic Images  

NASA Technical Reports Server (NTRS)

A computer program implements parallel processing for nearly real-time creation of panoramic mosaics of images of terrain acquired by video cameras on an exploratory robotic vehicle (e.g., a Mars rover). Because the original images are typically acquired at various camera positions and orientations, it is necessary to warp the images into the reference frame of the mosaic before stitching them together to create the mosaic. [Also see "Parallel-Processing Software for Correlating Stereo Images," Software Supplement to NASA Tech Briefs, Vol. 31, No. 9 (September 2007) page 26.] The warping algorithm in this computer program reflects the considerations that (1) for every pixel in the desired final mosaic, a good corresponding point must be found in one or more of the original images and (2) for this purpose, one needs a good mathematical model of the cameras and a good correlation of individual pixels with respect to their positions in three dimensions. The desired mosaic is divided into slices, each of which is assigned to one of a number of central processing units (CPUs) operating simultaneously. The results from the CPUs are gathered and placed into the final mosaic. The time taken to create the mosaic depends upon the number of CPUs, the speed of each CPU, and whether a local or a remote data-staging mechanism is used.
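The slice-per-CPU decomposition described above can be sketched with Python's standard concurrent.futures. The placeholder warp function is an assumption standing in for the program's actual camera-model-based warping.

```python
# Sketch of dividing a mosaic into slices, processing the slices
# concurrently, then gathering results in order, as described above.
# warp_slice is a hypothetical stand-in for the real warping step.
from concurrent.futures import ThreadPoolExecutor

def warp_slice(row):
    # placeholder: the real program traces each output pixel back
    # through a camera model to a source-image pixel
    return [v + 100 for v in row]

def build_mosaic(rows, workers=4):
    # map preserves input order, so the gathered slices reassemble
    # into the final mosaic correctly
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(warp_slice, rows))

mosaic = build_mosaic([[0, 1], [2, 3]])
```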

Klimeck, Gerhard; Deen, Robert; McCauley, Michael; DeJong, Eric

2008-01-01

386

Teaching Students About the Process of Science: Using Google to Collect and Analyze Student Lab Measurements  

NASA Astrophysics Data System (ADS)

The process of science necessarily includes critical analysis of uncertainty in repeated measurements. We demonstrate how the measurements that students make can be collected and analyzed in real time with Google Docs. Showing students how their measurements compare to the rest of the class provides a valuable opportunity to teach about uncertainty and the process of science. Student work can be compiled by the instructor after the fact, but Google makes it easy for students to submit their measurements via a web form and instantly see how their measurements fit with the rest of the class. Analysis, including histograms, fits, and virtually anything that can be done with a spreadsheet, is updated automatically and available to students. We show how the tools can be readily customized and implemented seamlessly with two examples from large undergraduate classes: measurement of the acceleration due to gravity in introductory physics lab, and measurement of the Hubble constant in introductory astronomy.
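The kind of analysis described above, combining repeated student measurements and quantifying their spread, reduces to a few lines of statistics. The measurement values below are made up for illustration; they are not class data from the paper.

```python
# Sketch of combining a class's repeated measurements of g and
# reporting the mean with a standard error, as one would in the
# spreadsheet analysis described above. Data are hypothetical.
from math import sqrt
from statistics import mean, stdev

g_measurements = [9.6, 9.9, 9.7, 10.1, 9.8]  # m/s^2, invented class data

g_mean = mean(g_measurements)
g_sem = stdev(g_measurements) / sqrt(len(g_measurements))  # standard error
```

Showing students how far their own value sits from the class mean, in units of the standard error, is one concrete way to teach measurement uncertainty.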

Larson, Kristen; Stewart, Jim

2010-03-01

387

Eclipse: ESO C Library for an Image Processing Software Environment  

NASA Astrophysics Data System (ADS)

Written in ANSI C, eclipse is a library offering numerous services related to astronomical image processing: FITS data access, various image and cube loading methods, binary image handling and filtering (including convolution and morphological filters), 2-D cross-correlation, connected components, cube and image arithmetic, dead pixel detection and correction, object detection, data extraction, flat-fielding with robust fit, image generation, statistics, photometry, image-space resampling, image combination, and cube stacking. It also contains support for mathematical tools like random number generation, FFT, curve fitting, matrices, fast median computation, and point-pattern matching. The main feature of this library is its ability to handle large amounts of input data (up to 2 GB in the current version) regardless of the amount of memory and swap available on the local machine. Another feature is the very high speed allowed by optimized C, making it an ideal base tool for programming efficient number-crunching applications, e.g., on parallel (Beowulf) systems.
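One of the services listed above, dead pixel correction, can be illustrated with a generic neighbour-median repair. This is a sketch of the general technique, not eclipse's actual C implementation or API.

```python
# Generic sketch of dead-pixel correction: replace each flagged pixel
# with the median of its valid 8-neighbours. Illustrative only; not
# the eclipse library's actual code.
from statistics import median

def correct_dead_pixels(img, dead):
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for (y, x) in dead:
        neighbours = [img[j][i]
                      for j in range(max(0, y - 1), min(h, y + 2))
                      for i in range(max(0, x - 1), min(w, x + 2))
                      if (j, i) != (y, x) and (j, i) not in dead]
        out[y][x] = median(neighbours)
    return out

img = [[10, 10, 10],
       [10, 99, 10],   # 99 is a flagged dead/hot pixel
       [10, 10, 10]]
fixed = correct_dead_pixels(img, {(1, 1)})
```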

Devillard, Nicolas

2011-12-01

388

ESO C Library for an Image Processing Software Environment (eclipse)  

NASA Astrophysics Data System (ADS)

Written in ANSI C, eclipse is a library offering numerous services related to astronomical image processing: FITS data access, various image and cube loading methods, binary image handling and filtering (including convolution and morphological filters), 2-D cross-correlation, connected components, cube and image arithmetic, dead pixel detection and correction, object detection, data extraction, flat-fielding with robust fit, image generation, statistics, photometry, image-space resampling, image combination, and cube stacking. It also contains support for mathematical tools like random number generation, FFT, curve fitting, matrices, fast median computation, and point-pattern matching. The main feature of this library is its ability to handle large amounts of input data (up to 2 GB in the current version) regardless of the amount of memory and swap available on the local machine. Another feature is the very high speed allowed by optimized C, making it an ideal base tool for programming efficient number-crunching applications, e.g., on parallel (Beowulf) systems. Running on all Unix-like platforms, eclipse is portable. A high-level interface to Python is foreseen that would allow programmers to prototype their applications much faster than through C programs.

Devillard, N.

389

Data Processing for 3D Mass Spectrometry Imaging  

NASA Astrophysics Data System (ADS)

Data processing for three dimensional mass spectrometry (3D-MS) imaging was investigated, starting with a consideration of the challenges in its practical implementation using a series of sections of a tissue volume. The technical issues related to data reduction, 2D imaging data alignment, 3D visualization, and statistical data analysis were identified. Software solutions for these tasks were developed using functions in MATLAB. Peak detection and peak alignment were applied to reduce the data size, while retaining the mass accuracy. The main morphologic features of tissue sections were extracted using a classification method for data alignment. Data insertion was performed to construct a 3D data set with spectral information that can be used for generating 3D views and for data analysis. The imaging data previously obtained for a mouse brain using desorption electrospray ionization mass spectrometry (DESI-MS) imaging have been used to test and demonstrate the new methodology.
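The peak-detection step used for data reduction, keeping only significant local maxima so each spectrum is stored as a short peak list, can be sketched simply. The threshold and spectrum values below are illustrative, not from the DESI-MS data.

```python
# Sketch of peak detection for data reduction: retain only local
# maxima above a noise threshold, so a full intensity trace reduces
# to a compact (index, intensity) peak list. Values are hypothetical.

def detect_peaks(intensities, threshold=5.0):
    peaks = []
    for i in range(1, len(intensities) - 1):
        v = intensities[i]
        if v > threshold and v > intensities[i - 1] and v > intensities[i + 1]:
            peaks.append((i, v))
    return peaks

spectrum = [0, 1, 8, 1, 0, 2, 12, 3, 0]
peaks = detect_peaks(spectrum)
```

A production pipeline would also align peak positions across spectra (the alignment step mentioned above) so that corresponding m/z values match between tissue sections.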

Xiong, Xingchuang; Xu, Wei; Eberlin, Livia S.; Wiseman, Justin M.; Fang, Xiang; Jiang, You; Huang, Zejian; Zhang, Yukui; Cooks, R. Graham; Ouyang, Zheng

2012-06-01

390

An image-processing program for automated counting  

USGS Publications Warehouse

An image-processing program developed by the National Institutes of Health, IMAGE, was modified in a cooperative project between remote sensing specialists at the Ohio State University Center for Mapping and scientists at the Alaska Science Center to facilitate estimating numbers of black brant (Branta bernicla nigricans) in flocks at Izembek National Wildlife Refuge. The modified program, DUCK HUNT, runs on Apple computers. Modifications provide users with a pull-down menu that optimizes image quality; identifies objects of interest (e.g., brant) by spectral, morphometric, and spatial parameters defined interactively by users; counts and labels objects of interest; and produces summary tables. Images from digitized photography, videography, and high-resolution digital photography have been used with this program to count various species of waterfowl.
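The core of automated counting, identifying and counting connected groups of foreground pixels after thresholding, can be sketched with a flood fill. This is a generic version of the idea, not DUCK HUNT's actual code, and it omits the spectral and morphometric filtering the program applies.

```python
# Sketch of object counting: after thresholding an image to a binary
# mask, count 4-connected groups of foreground pixels via flood fill.
# Generic illustration, not the DUCK HUNT program itself.

def count_objects(mask):
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                count += 1
                stack = [(y, x)]
                while stack:  # flood-fill one connected object
                    j, i = stack.pop()
                    if 0 <= j < h and 0 <= i < w and mask[j][i] and not seen[j][i]:
                        seen[j][i] = True
                        stack += [(j + 1, i), (j - 1, i), (j, i + 1), (j, i - 1)]
    return count

flock = [[1, 1, 0, 0],
         [0, 0, 0, 1],
         [1, 0, 0, 1]]
n = count_objects(flock)   # three separate "birds" in this tiny mask
```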

Cunningham, D. J.; Anderson, W. H.; Anthony, R. M.

1996-01-01

391

Infective endocarditis detection through SPECT/CT images digital processing  

NASA Astrophysics Data System (ADS)

Infective endocarditis (IE) is a difficult-to-diagnose pathology, since its manifestation in patients is highly variable. In this work, a semiautomatic algorithm based on digital processing of SPECT images was proposed for the detection of IE, using a CT image volume as a spatial reference. The heart/lung ratio was calculated from the SPECT image information. There were no statistically significant differences between the heart/lung ratios of a group of patients diagnosed with IE (2.62 ± 0.47) and a group of healthy control subjects (2.84 ± 0.68). However, it is necessary to increase the study sample in both the IE and control groups, as well as to improve image quality.
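The heart/lung ratio computation described above amounts to comparing mean counts in two regions of interest. The ROI values below are hypothetical, chosen only to land in the reported range.

```python
# Sketch of a heart/lung uptake-ratio computation: mean counts in a
# heart region of interest divided by mean counts in a lung region.
# ROI voxel values are invented for illustration.
from statistics import mean

heart_roi = [120, 130, 125, 128]   # hypothetical counts in heart ROI voxels
lung_roi = [45, 50, 48, 47]        # hypothetical counts in lung ROI voxels

ratio = mean(heart_roi) / mean(lung_roi)
```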

Moreno, Albino; Valdés, Raquel; Jiménez, Luis; Vallejo, Enrique; Hernández, Salvador; Soto, Gabriel

2014-03-01

392

AOIPS - An interactive image processing system. [Atmospheric and Oceanic Information Processing System  

NASA Technical Reports Server (NTRS)

The Atmospheric and Oceanographic Information Processing System (AOIPS) was developed to help applications investigators perform required interactive image data analysis rapidly and to eliminate the inefficiencies and problems associated with batch operation. This paper describes the configuration and processing capabilities of AOIPS and presents unique subsystems for displaying, analyzing, storing, and manipulating digital image data. Applications of AOIPS to research investigations in meteorology and earth resources are featured.

Bracken, P. A.; Dalton, J. T.; Quann, J. J.; Billingsley, J. B.

1978-01-01

393

Gray and Color Image Contrast Enhancement (IEEE Transactions on Image Processing, Vol. 12, No. 6, June 2003)

E-print Network

[…] by eye in an image, we often transform images before display. Histogram equalization is one of the most […] be seen as a generalization of this approach, taking all resolution levels into account. In color images […]
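Histogram equalization, the baseline method this excerpt mentions, can be sketched in a few lines: each grey level is mapped through the image's cumulative distribution so the output histogram is roughly flat. The tiny 8-level image below is illustrative.

```python
# Sketch of classical histogram equalization on a flat list of grey
# levels: map each pixel through the normalized cumulative histogram.
# The 8-level toy image is illustrative.

def equalize(pixels, levels=8):
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, total = [], 0
    for c in hist:          # cumulative distribution of grey levels
        total += c
        cdf.append(total)
    n = len(pixels)
    # map each pixel through the normalized CDF back to [0, levels-1]
    return [round((levels - 1) * cdf[p] / n) for p in pixels]

img = [0, 0, 1, 1, 1, 2, 7]
eq = equalize(img)
```

The paper's multiresolution method generalizes this single-histogram view by operating across resolution levels.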

Starck, Jean-Luc

394

Digital imaging techniques for blasting process evaluation in field  

SciTech Connect

Direct visualization of rock movement during blasting is an important key to understanding the blasting process, as well as optimizing blast designs and explosives performance. To achieve this, a digital camera system (HSIS-500) has been built. It is a custom-made high-speed solid-state camera employing advanced charge-coupled device (CCD) and dynamic random access memory (DRAM) technologies. It handles like a regular video camera but requires no film or tape, as the image is recorded in digital form on memory chips and transferred to the system hard disk for storage. The system consists of two components: the camera body and hardware, and the image processing unit. The imaging rate is 425 frames/s; the camera can also be used in single-frame mode. The recording duration can be set at 5, 10, 15 or 20 seconds. The camera can be triggered manually or by wireless remote control, and is capable of recording transient images in extremely low light. The captured images can be displayed immediately on a video screen or a computer monitor. The system's image analysis software can be run in the field for a quick preview. The full features of the software allow detailed motion digitization in Windows™ for obtaining target displacement as well as velocity. The system has been in use for over a year in several mines and quarries under extreme weather conditions (−20 °C to +43 °C). The paper describes the basic principles and features of the digital imaging system, and its actual use in blast diagnostics and optimization, and in modelling of the blasting process.
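The motion-digitization step described above, obtaining velocity from per-frame target displacement at a known frame rate, reduces to simple arithmetic. The tracked positions below are hypothetical.

```python
# Sketch of velocity estimation from tracked target positions in a
# 425 frames/s recording: displacement between consecutive frames
# times the frame rate. Position data are invented for illustration.

frame_rate = 425.0                        # frames per second, per the system spec
positions_m = [0.00, 0.02, 0.05, 0.09]    # hypothetical target position per frame (m)

velocities = [(b - a) * frame_rate
              for a, b in zip(positions_m, positions_m[1:])]   # m/s per interval
```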

Chung, S.H. [ICI Explosives Canada Inc., North York, Ontario (Canada)

1996-12-01

395

Solar physics applications of computer graphics and image processing  

NASA Technical Reports Server (NTRS)

Computer graphics devices coupled with computers and carefully developed software provide new opportunities to achieve insight into the geometry and time evolution of scalar, vector, and tensor fields and to extract more information quickly and cheaply from the same image data. Two or more different fields which overlay in space can be calculated from the data (and the physics), then displayed from any perspective, and compared visually. The maximum regions of one field can be compared with the gradients of another. Time changing fields can also be compared. Images can be added, subtracted, transformed, noise filtered, frequency filtered, contrast enhanced, color coded, enlarged, compressed, parameterized, and histogrammed, in whole or section by section. Today it is possible to process multiple digital images to reveal spatial and temporal correlations and cross correlations. Data from different observatories taken at different times can be processed, interpolated, and transformed to a common coordinate system.
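One of the operations listed above, image subtraction for comparing time-changing fields, can be shown in miniature. The pixel values are illustrative.

```python
# Sketch of image arithmetic for temporal comparison: subtract two
# registered images taken at different times to reveal what changed.
# Pixel values are invented for illustration.

def subtract(img_a, img_b):
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(img_a, img_b)]

t0 = [[5, 5], [5, 5]]
t1 = [[5, 9], [5, 5]]   # a brightening appears in one pixel
diff = subtract(t1, t0)
```

The same elementwise pattern covers the addition, noise filtering, and contrast operations the abstract enumerates; registration to a common coordinate system must happen first.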

Altschuler, M. D.

1985-01-01

396

Personal Computer (PC) based image processing applied to fluid mechanics  

NASA Technical Reports Server (NTRS)

A PC based image processing system was employed to determine the instantaneous velocity field of a two-dimensional unsteady flow. The flow was visualized using a suspension of seeding particles in water, and a laser sheet for illumination. With a finite time exposure, the particle motion was captured on a photograph as a pattern of streaks. The streak pattern was digitized and processed using various imaging operations, including contrast manipulation, noise cleaning, filtering, statistical differencing, and thresholding. Information concerning the velocity was extracted from the enhanced image by measuring the length and orientation of the individual streaks. The fluid velocities deduced from the randomly distributed particle streaks were interpolated to obtain velocities at uniform grid points. For the interpolation a simple convolution technique with an adaptive Gaussian window was used. The results are compared with a numerical prediction by a Navier-Stokes computation.
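The interpolation step described above, a convolution with an adaptive Gaussian window, can be sketched in one dimension: each grid value is a Gaussian-weighted average of nearby streak velocities. The window width and sample data are assumptions for illustration.

```python
# Sketch of Gaussian-window interpolation of scattered streak
# velocities onto a grid point: a normalized, Gaussian-weighted
# average of nearby samples. 1-D for clarity; data are hypothetical.
from math import exp

def interpolate(grid_x, samples, sigma=1.0):
    # samples: list of (x_position, velocity) from individual streaks
    weights = [exp(-((grid_x - x) ** 2) / (2 * sigma ** 2)) for x, _ in samples]
    num = sum(w * v for w, (_, v) in zip(weights, samples))
    return num / sum(weights)

streaks = [(0.0, 1.0), (2.0, 3.0)]
v_mid = interpolate(1.0, streaks)   # symmetric point: the two samples average
```

The "adaptive" part of the published method varies sigma with the local sampling density, which this fixed-sigma sketch omits.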

Cho, Y.-C.; Mclachlan, B. G.

1987-01-01

397

Parallel-Processing Software for Correlating Stereo Images  

NASA Technical Reports Server (NTRS)

A computer program implements parallel-processing algorithms for correlating images of terrain acquired by stereoscopic pairs of digital stereo cameras on an exploratory robotic vehicle (e.g., a Mars rover). Such correlations are used to create three-dimensional computational models of the terrain for navigation. In this program, the scene viewed by the cameras is segmented into subimages. Each subimage is assigned to one of a number of central processing units (CPUs) operating simultaneously.

Klimeck, Gerhard; Deen, Robert; Mcauley, Michael; DeJong, Eric

2007-01-01

398

A digital image processing workstation for the ocean sciences  

Microsoft Academic Search

An interactive digital image processing workstation has been developed for oceanographic applications (Fig. 1). The turnkey system provides the capability to process imagery from commonly used ocean observation spacecraft, in conjunction with in situ data sets. The system is based on a high-performance 32-bit processor (CPU). The display controller memory may be configured to hold up to 1280 × 1024 × 32-bit

Michael Guberek; Stephen Borders; Serge Masse

1985-01-01

399

A review of imaging low-latitude ionospheric irregularity processes  

Microsoft Academic Search

A review of the imaging of low-latitude irregularity processes conducted over the past 30 years is presented. The signature in optical data of the Rayleigh–Taylor instability (RTI) process is the development of a region of depleted emission that typically shows east–west dimensions of 50 to several hundred kilometers. In the meridional direction the depletions can at times extend over two-thousand

Jonathan J. Makela

2006-01-01

400

Learning and teaching about the nature of science through process skills  

NASA Astrophysics Data System (ADS)

This dissertation, a three-paper set, explored whether the process skills-based approach to nature of science instruction improves teachers' understandings, intentions to teach, and instructional practice related to the nature of science. The first paper examined the nature of science views of 53 preservice science teachers before and after a year of secondary science methods instruction that incorporated the process skills-based approach. Data consisted of each participant's written and interview responses to the Views of the Nature of Science (VNOS) questionnaire. Systematic data analysis led to the conclusion that participants exhibited statistically significant and practically meaningful improvements in their nature of science views and viewed teaching the nature of science as essential to their future instruction. The second and third papers assessed the outcomes of the process skills-based approach with 25 inservice middle school science teachers. For the second paper, the author collected and analyzed participants' VNOS and interview responses before, after, and 10 months after a 6-day summer professional development. Long-term retention of more aligned nature of science views underpins teachers' ability to teach aligned conceptions to their students, yet it is rarely examined. Participants substantially improved their nature of science views after the professional development, retained those views over 10 months, and attributed their more aligned understandings to the course. The third paper addressed these participants' instructional practices based on participant-created video reflections of their nature of science and inquiry instruction. Two participant interviews and class notes were also analyzed via a constant comparative approach to ascertain if, how, and why the teachers explicitly integrated the nature of science into their instruction.
The participants recognized the process skills-based approach as instrumental in the facilitation of their improved views. Additionally, the participants saw the nature of science as an important way to help students to access core science content such as the theory of evolution by natural selection. Most impressively, participants taught the nature of science explicitly and regularly. This instruction was student-centered, involving high levels of student engagement in ways that represented applying, adapting, and innovating on what they learned in the summer professional development.

Mulvey, Bridget K.

401

Digital image processing of fundus images using scanning laser ophthalmoscopic images  

Microsoft Academic Search

Many ocular diseases of the human fundus could be quantified from the visible pathological features. Fundus images are usually recorded on a photographic film using a fundus camera that employs very high levels of illumination. Scanning laser ophthalmoscopy is a new method of imaging the fundus that offers the unique capability of differentiating various pathological conditions using very low light

A. Manivannan; J. N. Kirkpatrick; P. Vieira; P. F. Sharp; C. Koller; J. V. Forrester

1996-01-01

402

Studying Students' Learning Processes Used during Physics Teaching Sequence about Gas with Networks of Ideas and Their Domain of Applicability  

ERIC Educational Resources Information Center

In the literature, several processes have been suggested to describe how conceptual change takes place. However, few studies analyse in great detail which learning processes students use in physics classes during teaching, and how they are used. Following a socio-constructivist approach using tools coming from discourse…

Givry, Damien; Tiberghien, Andree

2012-01-01

403

Going Public: Using Video To Construct Images of "Good" English Teaching.  

ERIC Educational Resources Information Center

Describes a teacher's use of videotape of a series of lessons to show that setting and negotiating assessment criteria with students was good practice. Discusses guidelines for enhancing standards of good teaching practice. Highlights this visual case model in the hope that colleagues will see its merits as a vehicle for articulating teaching

Hall, Greg

2001-01-01

404

Smartphones as image processing systems for prosthetic vision.  

PubMed

The feasibility of implants for prosthetic vision has been demonstrated by research and commercial organizations. In most devices, an essential forerunner to the internal stimulation circuit is an external electronics solution for capturing, processing and relaying image information as well as extracting useful features from the scene surrounding the patient. The capabilities and multitude of image processing algorithms that can be performed by the device in real time play a major part in the final quality of the prosthetic vision. It is therefore optimal to use powerful hardware while avoiding bulky, straining solutions. Recent publications have reported on portable single-board computers fast enough for computationally intensive image processing. Following the rapid evolution of commercial, ultra-portable ARM (Advanced RISC Machine) mobile devices, the authors investigated the feasibility of modern smartphones running complex face detection as external processing devices for vision implants. The role of dedicated graphics processors in speeding up computation was evaluated while performing a demanding noise reduction algorithm (image denoising). The time required for face detection was found to decrease by 95% from 2.5-year-old to recent devices. In denoising, graphics acceleration played a major role, speeding up denoising by a factor of 18. These results demonstrate that the technology has matured sufficiently to be considered as a valid external electronics platform for visual prosthetic research. PMID:24110531

Zapf, Marc P; Matteucci, Paul B; Lovell, Nigel H; Suaning, Gregg J

2013-01-01

405

Tutorial Counting and Tracking -Useful Examples from Digital Image Processing  

E-print Network

Tutorial: Counting and Tracking - Useful Examples from Digital Image Processing, presented by Jonathan Kollmer.

Sanderson, Yasmine

406

On optimal control methods in computer vision and image processing  

Microsoft Academic Search

In this paper, we discuss the employment of methods from optimal control for problems in computer vision and image processing. The underlying principle will be that of dynamic programming and the associated Hamilton-Jacobi equation, which allows a unified approach to tackle a number of different issues in vision. In particular, we will consider problems concerning shape theory, morphology, optical flow, nonlinear scale-spaces, and

B. Kimia; A. Tannenbaum; S. Zucker

1994-01-01

407

Stereo Vision, Residual Image Processing and Mars Rover Localization  

Microsoft Academic Search

Experiments were conducted on Mars rover detection and localization using residual image processing and stereo vision. In NASA's Pathfinder mission, an unmanned lander was landed on Mars and a microrover was released from the lander to perform scientific experiments. Rover localization is an important issue because, for navigation purposes, the rover's position needs to be continuously updated. Three

Larry Matthies; Byron Chen; Jon Petrescu

1997-01-01

408

Image Processing Software for 3D Light Microscopy  

Microsoft Academic Search

Advances in microscopy now enable researchers to easily acquire multi-channel three-dimensional (3D) images and 3D time series (4D). However, processing, analyzing, and displaying this data can often be difficult and time- consuming. We discuss some of the software tools and techniques that are available to accomplish these tasks.

Jeffrey L. Clendenon; Jason M. Byars; Deborah P. Hyink

2006-01-01

409

SPMD Image Processing on Beowulf Clusters: Directives and Libraries  

Microsoft Academic Search

Abstract Most image processing algorithms can be parallelized by splitting parallel loops and by using very few commu- nication patterns. Code parallelization using MPI still in- volves much programming,overheads. In order to reduce these overheads, we first developed a small SPMD library (SPMDlib) on top of MPI. The programmer,can use the li- brary routines themselves, because they are easy to

Paulo F. Oliveira; J. M. Hans Du Buf

2003-01-01

410

New Image Processing Tools for Structural Dynamic Monitoring  

Microsoft Academic Search

This paper presents an introduction to structural damage assessment using image processing on real data (non-ideal conditions). Our contribution is much more groundwork than a classical experimental validation. After measuring the bridge dynamic parameters on a small-resolution video, we jointly present the advantages and limitations of our method. Finally we introduce several

Joseph Morlier; P. Salom; F. Bos

2007-01-01

411

Digital Image Processing of Earth Observation Sensor Data  

Microsoft Academic Search

This paper describes digital image processing techniques that were developed to precisely correct Landsat multispectral Earth observation data and gives illustrations of the results achieved, e.g., geometric corrections with an error of less than one picture element, a relative error of one-fourth picture element, and no radiometric error effect. Techniques for enhancing the sensor data, digitally mosaicking multiple scenes, and

Ralph Bernstein

1976-01-01

412

Satellite Image Processing on a Grid-Based Platform  

Microsoft Academic Search

Satellite image processing is both data and computing intensive, and therefore raises several difficulties, or even impossibilities, when using a single computer. Moreover, the analysis and sharing of the huge amount of data provided daily by space satellites is a major challenge for the remote sensing community. Recently, Grid-based platforms were built to address these issues. This

Dana Petcu; Dorian Gorgan; Florin Pop; Dacian Tudor; Daniela Zaharie

2008-01-01

413

The Clinical Utility of Brain SPECT Imaging in Process Addictions  

Microsoft Academic Search

Brain SPECT imaging is a nuclear medicine study that uses isotopes bound to neurospecific pharmaceuticals to evaluate regional cerebral blood flow (rCBF) and indirectly metabolic activity. With current available technology and knowledge SPECT has the potential to add important clinical information to benefit patient care in many different areas of a substance abuse practice, including in the area of process

Daniel G. Amen; Kristen Willeumier; Robert Johnson

2012-01-01

414

Automatic Road Pavement Assessment with Image Processing: Review and Comparison

E-print Network

The article describes the proposed method to detect fine defects in pavement surface. This approach is based on a multi-[…] for evaluating this difficult task - road pavement crack detection - is introduced. Finally, the proposed

Boyer, Edmond

415

Egg's Bloodspot Detection Using Image Processing

Microsoft Academic Search

Quality control in manufacturing, such as of food products, is important to meet user requirements. For example, it is very important to control the quality of eggs in order to ensure that only first-quality eggs are sold and to deliver customer satisfaction. In this paper, we apply an image processing approach to detect abnormalities in eggs. The

A. Suryanti; N. S. Nurul Ain

2007-01-01

416

OPTICAL FLANK WEAR MONITORING OF CUTTING TOOLS BY IMAGE PROCESSING  

E-print Network

[…], television camera, pneumatic probe and so forth [2,3]. These methods have the advantage of high measuring […] acoustic emission, cutting temperature and surface roughness [4-8]. However, few reliable indirect […] based on image processing has been developed and tested in this work.

Kim, Yong Jung

417

Parallel asynchronous hardware implementation of image processing algorithms  

NASA Technical Reports Server (NTRS)

Research is being carried out on hardware for a new approach to focal plane processing. The hardware involves silicon injection mode devices. These devices provide a natural basis for parallel asynchronous focal plane image preprocessing. The simplicity and novel properties of the devices would permit an independent analog processing channel to be dedicated to every pixel. A laminar architecture built from arrays of the devices would form a two-dimensional (2-D) array processor with a 2-D array of inputs located directly behind a focal plane detector array. A 2-D image data stream would propagate in neuron-like asynchronous pulse-coded form through the laminar processor. No multiplexing, digitization, or serial processing would occur in the preprocessing stage. High performance is expected, based on pulse coding of input currents down to one picoampere with noise referred to input of about 10 femtoamperes. Linear pulse coding has been observed for input currents ranging up to seven orders of magnitude. Low power requirements suggest utility in space and in conjunction with very large arrays. Very low dark current and multispectral capability are possible because of hardware compatibility with the cryogenic environment of high performance detector arrays. The aforementioned hardware development effort is aimed at systems which would integrate image acquisition and image processing.

Coon, Darryl D.; Perera, A. G. U.

1990-01-01

418

Cellular mixed signal pixel array for real time image processing  

Microsoft Academic Search

Contemporary computing platforms fail to deliver the computational density required for many real-time image processing tasks. On the other hand, even the simplest of living systems are able to perceive and interpret their environment effortlessly using a conglomerate of slow and inaccurate neurons in parallel. Motivated by this observation as well as the cellular neural network paradigm, this paper presents

Gamze Erten; Fathi M. Salam

1997-01-01

419

Image Processing using Java and C#: A Comparison Approach  

Microsoft Academic Search

This paper presents the results of a study comparing the Java and C# programming languages in terms of portability, functional programming features, and execution time. The comparison makes it possible to evaluate which of the two languages performs better in the image processing area.

María Isabel; Díaz Figueroa

420

Combustion Analysis by Image Processing of Premixed Flames  

Microsoft Academic Search

We describe a three-step algorithm for the analysis of color images of flames, with the objective of indirectly analyzing the combustion process and its control parameters. The algorithm first extracts the regions of interest by applying a clustering method in the RGB color space, identifying and eliminating irrelevant regions, and applying morphological operators. Then it calculates different geometrical parameters for
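
The RGB clustering step the abstract describes can be sketched with a minimal k-means; the cluster count, colors, and initialization below are illustrative assumptions, not the authors' settings:

```python
import numpy as np

def kmeans_rgb(pixels, k=2, iters=20):
    """Minimal k-means over N x 3 RGB pixel vectors."""
    # Initialize centers from evenly spaced data points (simple heuristic).
    centers = pixels[np.linspace(0, len(pixels) - 1, k).astype(int)].astype(float)
    for _ in range(iters):
        # Assign each pixel to its nearest center (Euclidean distance in RGB).
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute each center as the mean of its cluster.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return centers, labels

rng = np.random.default_rng(0)
flame = rng.normal((220, 90, 30), 10, size=(500, 3))       # bright flame-like pixels
background = rng.normal((20, 20, 40), 10, size=(500, 3))   # dark background pixels
pixels = np.vstack([flame, background])
centers, labels = kmeans_rgb(pixels, k=2)
```

The irrelevant-region elimination and morphological steps of the paper would then operate on the resulting label map.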

Gaetano Baldini; Paola Campadelli; Raffaella Lanzarotti

2000-01-01

421

Innovative Camera and Image Processing System to Characterize Cryospheric Changes  

Microsoft Academic Search

The polar regions play an important role in Earth's climatic and geodynamic systems. Digital photogrammetric mapping provides a means for monitoring the dramatic changes observed in the polar regions during the past decades. High-resolution, photogrammetrically processed digital aerial imagery provides complementary information to surface measurements obtained by laser altimetry systems. While laser points accurately sample the ice surface, stereo images

A. Schenk; B. M. Csatho; S. Nagarajan

2010-01-01

422

Developing a Diagnosis Aiding Ontology Based on Hysteroscopy Image Processing  

NASA Astrophysics Data System (ADS)

In this paper we describe an ontology design process, introducing the steps and mechanisms required to create and develop an ontology capable of representing and describing the contents and attributes of hysteroscopy images, as well as their relationships, thus providing useful ground for the development of tools that support medical diagnosis by physicians.

Poulos, Marios; Korfiatis, Nikolaos

423

Digital image processing for wide-angle highly spatially variant imagers  

NASA Astrophysics Data System (ADS)

High resolution, wide field-of-view and large depth-of-focus imaging systems are greatly desired and have received much attention from researchers who seek to extend the capabilities of cameras. Monocentric lenses are superior in performance over other wide field-of-view lenses with the drawback that they form a hemispheric image plane which is incompatible with current sensor technology. Fiber optic bundles can be used to relay the image the lens produces to the sensor's planar surface. This requires image processing to correct for artifacts inherent to fiber bundle image transfer. Using a prototype fiber coupled monocentric lens imager we capture single exposure focal swept images from which we seek to produce extended depth-of-focus images. Point spread functions (PSF) were measured in lab and found to be both angle and depth dependent. This spatial variance enforces the requirement that the inverse problem be treated as such. This synthesis of information allowed us to establish a framework upon which to mitigate fiber bundle artifacts and extend the depth-of-focus of the imaging system.

Olivas, Stephen J.; Šorel, Michal; Arianpour, Ashkan; Stamenov, Igor; Nikzad, Nima; Schuster, Glenn M.; Motamedi, Nojan; Mellette, William M.; Stack, Ron A.; Johnson, Adam; Morrison, Rick; Agurok, Ilya P.; Ford, Joseph E.

2014-09-01

424

Images of Fractions "as" Processes and Images of Fractions "in" Processes  

ERIC Educational Resources Information Center

Within the large range of potential theoretical perspectives on fractions, this paper considers one particular interpretation: fractions' duality as process and object. By considering the number-fractionbar-number composite symbol as simultaneously representing division and rational, some process-object theories imply that fraction-as-process and…

Herman, Jan; Ilucova, Lucia; Kremsova, Veronika; Pribyl, Jiri; Ruppeldtova, Janka; Simpson, Adrian; Stehlikova, Nada; Sulista, Marek; Ulrychova, Michaela

2004-01-01

425

Limiting liability via high-resolution image processing  

NASA Astrophysics Data System (ADS)

The utilization of high resolution image processing allows forensic analysts and visualization scientists to assist detectives by enhancing field photographs, and by providing the tools and training to increase the quality and usability of field photos. Through the use of digitized photographs and computerized enhancement software, field evidence can be obtained and processed as 'evidence ready,' even in poor lighting and shadowed conditions or darkened rooms. These images, which are most often unusable when taken with standard camera equipment, can be shot in the worst of photographic conditions and be processed as usable evidence. Visualization scientists have taken digital photographic image processing and brought the processing of crime scene photos into the technology age. The use of high resolution technology will assist law enforcement in making better use of crime scene photography and positive identification of prints. Valuable courtroom and investigation time can be saved and better served by this accurate, performance-based process. Inconclusive evidence does not lead to convictions. Enhancement of photographic capability helps solve a major problem with crime scene photos: images taken with standard equipment and without the benefit of enhancement software would often be inconclusive, allowing guilty parties to go free for lack of evidence.
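
Photographic enhancement of this kind is often built from primitives such as unsharp masking; a minimal sketch follows, with an assumed box-blur radius and gain rather than any specific forensic tool's parameters:

```python
import numpy as np

def box_blur(img, radius=2):
    """Simple box blur with edge padding (naive per-pixel mean)."""
    k = 2 * radius + 1
    padded = np.pad(img.astype(float), radius, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def unsharp_mask(img, radius=2, amount=1.0):
    """Sharpen by adding back the difference between the image and its blur."""
    blurred = box_blur(img, radius)
    return np.clip(img + amount * (img - blurred), 0, 255)

# A soft vertical edge: left half dark, right half bright.
img = np.zeros((16, 16))
img[:, 8:] = 100.0
sharp = unsharp_mask(img, radius=2, amount=1.0)
```

The overshoot on either side of the edge is what makes detail appear crisper to the eye.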

Greenwade, L. E.; Overlin, Trudy K.

1997-01-01

426

Study of optical techniques for the Ames unitary wind tunnel: Digital image processing, part 6  

NASA Technical Reports Server (NTRS)

A survey of digital image processing techniques and processing systems for aerodynamic images has been conducted. These images covered many types of flows and were generated by many types of flow diagnostics. These include laser vapor screens, infrared cameras, laser holographic interferometry, Schlieren, and luminescent paints. Some general digital image processing systems, imaging networks, optical sensors, and image computing chips were briefly reviewed. Possible digital imaging network systems for the Ames Unitary Wind Tunnel were explored.

Lee, George

1993-01-01

427

Investigations into the Instructional Process. IV. Teaching as a Stochastic Process.  

ERIC Educational Resources Information Center

This study examines quantification of the instructional process through the use of Markov chaining, and by considering the transition probabilities within a framework provided by the taxonomy used, attempts to obtain information about behavior sequences common to all lessons. (Author/DLG)
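
The Markov-chain quantification the abstract describes can be sketched by estimating a first-order transition matrix from a coded behavior sequence; the behavior codes below are hypothetical:

```python
from collections import defaultdict

def transition_matrix(sequence):
    """Estimate first-order Markov transition probabilities from a sequence."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(sequence, sequence[1:]):
        counts[a][b] += 1
    probs = {}
    for state, nxt in counts.items():
        total = sum(nxt.values())
        # Normalize counts into a probability distribution over next states.
        probs[state] = {s: c / total for s, c in nxt.items()}
    return probs

# Hypothetical coded lesson: T = teacher talk, Q = question, R = student response
lesson = list("TTQRTQRRTQRT")
P = transition_matrix(lesson)
```

Comparing such matrices across lessons is how behavior sequences common to all lessons could be identified.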

Komulainen, Erkki

428

Integrated Optics for Planar imaging and Optical Signal Processing  

NASA Astrophysics Data System (ADS)

Silicon photonics is a subject of growing interest with the potential of delivering planar electro-optical devices with chip-scale integration. Silicon-on-insulator (SOI) technology has provided a marvelous platform for the photonics industry because of its capability for integration with CMOS circuits and its countless nonlinear applications in optical signal processing. This thesis focuses on the investigation of planar imaging techniques on the SOI platform and potential applications in ultra-fast optical signal processing. In the first part, a general review and background introduction to integrated photonic circuits and planar imaging techniques are provided. In chapter 2, a planar imaging platform is realized with a silicon photodiode on an SOI chip. The photodiode on the waveguide provides a high numerical aperture for an imaging transceiver pixel. An erbium-doped Y2O3 particle is excited by a 1550 nm laser, and the fluorescent image is obtained with the assistance of a scanning system. The fluorescence image is reconstructed using an image deconvolution technique. Operating in photovoltaic mode, we use an on-chip photodiode and an external PIN photodiode to realize a similar resolution of about 5 µm. In chapter 3, a time-stretching technique is extended to the spatial domain to realize a 2D imaging system as an ultrafast imaging tool. The system is evaluated based on theoretical calculation. Experimental results verify the system's capability to image a micron-sized particle or a fingerprint. Meanwhile, dynamic information for a moving object is also obtained by a correlation algorithm. In chapter 4, an optical leaky wave antenna based on an SOI waveguide has been utilized for imaging applications; extensive numerical studies have been conducted, and the theoretical explanation is supported by leaky wave theory. Highly directive radiation has been obtained from the broadside, with 15.7 dB directivity and a 3 dB beam width of θ_3dB ≈ 1.65° in a free-space environment when β₋₁ = 2.409 × 10^5 /m and α = 4.576 × 10^3 /m. At the end, an electronic beam-steering principle has been studied, and a comprehensive model has been built to explain carrier behavior in a PIN junction as an individual silicon perturbation. Results show that a carrier density of 10^19/cm^3 can be obtained with the electron injection mechanism. Although radiation modulation based on carrier injection of 10^19/cm^3 gives only 0.5 dB variation, a resonant structure, such as a Fabry-Perot cavity, can be integrated with LOWAs to enhance the modulation effect.

Song, Qi

429

Craft, Process and Art: Teaching and Learning Music Composition in Higher Education  

ERIC Educational Resources Information Center

This paper explores models of teaching and learning music composition in higher education. It analyses the pedagogical approaches apparent in the literature on teaching and learning composition in schools and universities, and introduces a teaching model as: learning from the masters; mastery of techniques; exploring ideas; and developing voice.…

Lupton, Mandy; Bruce, Christine

2010-01-01

430

Physiological basis and image processing in functional magnetic resonance imaging: Neuronal and motor activity in brain  

PubMed Central

Functional magnetic resonance imaging (fMRI) has recently developed as an imaging modality used for mapping the hemodynamics of neuronal and motor event-related tissue blood oxygen level dependence (BOLD) in terms of brain activation. Image processing is performed by segmentation and registration methods. Segmentation algorithms provide brain surface-based analysis and automated anatomical labeling of cortical fields in magnetic resonance data sets based on oxygen metabolic state. Registration algorithms provide geometric features using two or more imaging modalities to assure clinically useful neuronal and motor information on brain activation. This review article summarizes the physiological basis of the fMRI signal, its origin, contrast enhancement, physical factors, anatomical labeling by segmentation, and registration approaches, with examples of visual and motor activity in the brain. The latest developments are reviewed for clinical applications of fMRI along with other neurophysiological and imaging modalities. PMID:15125779

Sharma, Rakesh; Sharma, Avdhesh

2004-01-01

431

New Insights into Image Processing of Cortical Blood Flow Monitors Using Laser Speckle Imaging  

Microsoft Academic Search

Laser speckle imaging has increasingly become a viable technique for real-time medical imaging. However, the computational intricacies and the viewing experience involved limit its usefulness for real-time monitors such as those intended for neurosurgical applications. In this paper, we propose a new technique, tLASCA, which processes statistics primarily in the temporal direction using the laser speckle contrast analysis

Thinh M. Le; Joseph S. Paul; H. Al-nashash; A. Tan; A. R. Luft; F.-S. Sheu; S. H. Ong

2007-01-01

432

Remote sensing study based on IRSA Remote Sensing Image Processing System  

Microsoft Academic Search

The IRSA Remote Sensing Image Processing System is multi-functional software for satellite image processing. It consists of over ten of the routine and typically used modules in remote sensing image processing projects, such as a viewer and file import/export, basic processing, and image restoration. As indigenously developed software, IRSA combines the advantages and kernels of many famous imported systems,

Ling Peng; Zhongming Zhao; Linli Cui; Lu Wang

2004-01-01

433

Computed tomography perfusion imaging denoising using Gaussian process regression  

NASA Astrophysics Data System (ADS)

Brain perfusion weighted images acquired using dynamic contrast studies have an important clinical role in acute stroke diagnosis and treatment decisions. However, computed tomography (CT) images suffer from low contrast-to-noise ratios (CNR) as a consequence of limiting the patient's radiation exposure. The development of methods for improving the CNR is therefore valuable. The majority of existing approaches for denoising CT images are optimized for 3D (spatial) information, including spatial decimation (spatially weighted mean filters) and techniques based on wavelet and curvelet transforms. However, perfusion imaging data are 4D, as they also contain temporal information. Our approach uses Gaussian process regression (GPR), which takes advantage of the temporal information, to reduce the noise level. Over the entire image, GPR gains a 99% CNR improvement over the raw images and also improves the quality of haemodynamic maps, allowing a better identification of edges and detailed information. At the level of individual voxels, GPR provides a stable baseline, helps identify key parameters from tissue time-concentration curves, and reduces oscillations in the curve. GPR is superior to the comparable techniques used in this study.
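
The temporal GPR idea can be sketched on a single voxel's time-concentration curve; the RBF kernel and its hyperparameters are illustrative assumptions, not the authors' settings:

```python
import numpy as np

def gpr_denoise(t, y, length=3.0, signal_var=1.0, noise_var=0.05):
    """Posterior mean of a GP with an RBF kernel, evaluated at the inputs."""
    d = t[:, None] - t[None, :]
    K = signal_var * np.exp(-0.5 * (d / length) ** 2)   # RBF covariance
    # Solve (K + sigma_n^2 I) alpha = y, then the posterior mean is K alpha.
    alpha = np.linalg.solve(K + noise_var * np.eye(len(t)), y)
    return K @ alpha

rng = np.random.default_rng(1)
t = np.linspace(0, 40, 80)
truth = np.exp(-0.5 * ((t - 15) / 4.0) ** 2)   # idealized bolus-shaped curve
noisy = truth + rng.normal(0, 0.15, t.shape)
denoised = gpr_denoise(t, noisy)
```

The same posterior machinery also yields a variance estimate, which is what makes the baseline "stable" in the abstract's sense.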

Zhu, Fan; Carpenter, Trevor; Rodriguez Gonzalez, David; Atkinson, Malcolm; Wardlaw, Joanna

2012-06-01

434

IMAGEP - A FORTRAN ALGORITHM FOR DIGITAL IMAGE PROCESSING  

NASA Technical Reports Server (NTRS)

IMAGEP is a FORTRAN computer algorithm containing various image processing, analysis, and enhancement functions. It is a keyboard-driven program organized into nine subroutines. Within the subroutines are other routines, also selected via the keyboard. Some of the functions performed by IMAGEP include digitization, storage and retrieval of images; image enhancement by contrast expansion, addition and subtraction, magnification, inversion, and bit shifting; display and movement of cursor; display of grey level histogram of image; and display of the variation of grey level intensity as a function of image position. This algorithm has possible scientific, industrial, and biomedical applications in material flaw studies, steel and ore analysis, and pathology, respectively. IMAGEP is written in VAX FORTRAN for DEC VAX series computers running VMS. The program requires the use of a Grinnell 274 image processor which can be obtained from Mark McCloud Associates, Campbell, CA. An object library of the required GMR series software is included on the distribution media. IMAGEP requires 1Mb of RAM for execution. The standard distribution medium for this program is a 1600 BPI 9-track magnetic tape in VAX FILES-11 format. It is also available on a TK50 tape cartridge in VAX FILES-11 format. This program was developed in 1991. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation.
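
Two of the listed functions, contrast expansion and inversion, can be sketched for 8-bit grey-level images (a minimal NumPy illustration, not IMAGEP's VAX FORTRAN code):

```python
import numpy as np

def contrast_expand(img):
    """Linearly stretch grey levels to span the full 0-255 range."""
    lo, hi = img.min(), img.max()
    return ((img - lo) * 255.0 / (hi - lo)).astype(np.uint8)

def invert(img):
    """Photometric negative of an 8-bit image."""
    return (255 - img).astype(np.uint8)

img = np.array([[50, 100], [150, 200]], dtype=np.uint8)
stretched = contrast_expand(img)
neg = invert(img)
```

Inversion applied twice returns the original image, which is a convenient sanity check for this class of point operations.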

Roth, D. J.

1994-01-01

435

Distributed source separation algorithms for hyperspectral image processing  

NASA Astrophysics Data System (ADS)

This paper describes a new algorithm for feature extraction on hyperspectral images based on blind source separation (BSS) and distributed processing. I use Independent Component Analysis (ICA), a particular case of BSS in which, given a linear mixture of statistically independent sources, the goal is to recover the components by producing the unmixing matrix. In multispectral/hyperspectral imagery, the separated components can be associated with features present in the image, the source separation algorithm projecting them into different image bands. ICA-based methods have been employed for target detection and classification of hyperspectral images. However, these methods involve an iterative optimization process, and when applied to hyperspectral data this iteration results in significant execution times. The time efficiency of the method is improved by running it in a distributed environment while preserving the accuracy of the results. The design of the distributed algorithm, as well as issues related to the distributed modeling of the hyperspectral data, was taken into consideration and is presented. The effectiveness of the proposed algorithm has been tested by comparison with the sequential source separation algorithm using data from AVIRIS and HYDICE. Preliminary results indicate that, while the accuracy of the results is preserved, the new algorithm provides a considerable speed-up in processing.
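
A toy version of the source separation step: for two channels, ICA after whitening reduces to finding the rotation that maximizes non-Gaussianity (measured here by excess kurtosis). This is a didactic stand-in, not the authors' distributed ICA:

```python
import numpy as np

def whiten(X):
    """Zero-mean, identity-covariance transform of row-wise signals X (2 x N)."""
    Xc = X - X.mean(axis=1, keepdims=True)
    cov = Xc @ Xc.T / Xc.shape[1]
    vals, vecs = np.linalg.eigh(cov)
    return (vecs / np.sqrt(vals)) @ vecs.T @ Xc   # symmetric whitening cov^(-1/2)

def kurtosis(z):
    return np.mean(z ** 4) - 3.0

def ica_2d(X, steps=180):
    """Recover two independent components by rotating the whitened data."""
    Z = whiten(X)
    best, best_score = None, -np.inf
    for theta in np.linspace(0, np.pi / 2, steps):
        c, s = np.cos(theta), np.sin(theta)
        Y = np.array([[c, -s], [s, c]]) @ Z
        score = abs(kurtosis(Y[0])) + abs(kurtosis(Y[1]))  # non-Gaussianity
        if score > best_score:
            best, best_score = Y, score
    return best

rng = np.random.default_rng(2)
S = np.vstack([rng.uniform(-1, 1, 20000), rng.uniform(-1, 1, 20000)])  # sources
A = np.array([[1.0, 0.6], [0.4, 1.0]])                                  # mixing
Y = ica_2d(A @ S)
```

In the hyperspectral setting the channels are image bands rather than two toy signals, and the iterative optimization over many bands is what motivates the distributed implementation.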

Robila, Stefan A.

2004-08-01

436

Lunar Crescent Detection Based on Image Processing Algorithms  

NASA Astrophysics Data System (ADS)

For many years lunar crescent visibility has been studied by astronomers, and different criteria have been used to predict and evaluate the visibility status of new Moon crescents. Powerful equipment such as telescopes and binoculars has changed the capability of observations, and most conventional statistical criteria made wrong predictions when new observations based on modern equipment were reported. In order to verify such reports and modify the criteria, not only should previous statistical parameters be considered, but also new and effective parameters such as high magnification, the contour effect, low signal-to-noise ratio, eyestrain and weather conditions. In this paper a new method is presented for lunar crescent detection based on the processing of lunar crescent images. The method includes two main steps: first, an image processing algorithm that improves the signal-to-noise ratio and detects lunar crescents based on the circular Hough transform (CHT); second, an algorithm based on image histogram processing that detects the crescent visually. The final decision is made by comparing the results of the visual and CHT algorithms. To evaluate the proposed method, a database including 31 images was tested. The method can distinguish and extract crescents that even the eye cannot recognize. It significantly reduces artifacts, increases SNR, and can be used easily both by astronomers and by those who want to develop a new criterion, as a reliable method for verifying empirical observations.
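
The CHT stage can be sketched for a known radius: each edge pixel votes for every center lying at that radius from it, and the accumulator peak gives the circle center. A minimal illustration, not the authors' full pipeline:

```python
import numpy as np

def hough_circle_center(edges, radius, n_angles=90):
    """Accumulate center votes for a fixed-radius circular Hough transform."""
    acc = np.zeros(edges.shape, dtype=int)
    ys, xs = np.nonzero(edges)
    angles = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    for y, x in zip(ys, xs):
        # Candidate centers lie on a circle of the same radius around (y, x).
        cy = np.round(y - radius * np.sin(angles)).astype(int)
        cx = np.round(x - radius * np.cos(angles)).astype(int)
        ok = (cy >= 0) & (cy < acc.shape[0]) & (cx >= 0) & (cx < acc.shape[1])
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    return np.unravel_index(acc.argmax(), acc.shape)

# Synthetic edge image: a circle of radius 10 centered at (32, 32).
edges = np.zeros((64, 64), dtype=bool)
t = np.linspace(0, 2 * np.pi, 200)
edges[np.round(32 + 10 * np.sin(t)).astype(int),
      np.round(32 + 10 * np.cos(t)).astype(int)] = True
cy, cx = hough_circle_center(edges, radius=10)
```

A crescent contributes only an arc of the circle, but the votes from that arc still concentrate at the true center, which is what makes the CHT robust to partial edges.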

Fakhar, Mostafa; Moalem, Peyman; Badri, Mohamad Ali

2014-11-01

438

EMAN2: an extensible image processing suite for electron microscopy.  

PubMed

EMAN is a scientific image processing package with a particular focus on single particle reconstruction from transmission electron microscopy (TEM) images. It was first released in 1999, and new versions have been released typically 2-3 times each year since that time. EMAN2 has been under development for the last two years, with a completely refactored image processing library, and a wide range of features to make it much more flexible and extensible than EMAN1. The user-level programs are better documented, more straightforward to use, and written in the Python scripting language, so advanced users can modify the programs' behavior without any recompilation. A completely rewritten 3D transformation class simplifies translation between Euler angle standards and symmetry conventions. The core C++ library has over 500 functions for image processing and associated tasks, and it is modular with introspection capabilities, so programmers can add new algorithms with minimal effort and programs can incorporate new capabilities automatically. Finally, a flexible new parallelism system has been designed to address the shortcomings in the rigid system in EMAN1. PMID:16859925

Tang, Guang; Peng, Liwei; Baldwin, Philip R; Mann, Deepinder S; Jiang, Wen; Rees, Ian; Ludtke, Steven J

2007-01-01

439

Scheduling algorithms for PIPE (Pipelined Image-Processing Engine)  

SciTech Connect

In this paper the authors present heuristic scheduling algorithms for the National Bureau of Standards/Aspex Inc. Pipelined Image-Processing Engine (PIPE). PIPE is a special-purpose machine for low-level image processing consisting of a linearly connected array of processing stages. A program is specified as a directed acyclic graph (DAG). The authors' first algorithm schedules planar DAGs. It works top-down through the graph and uses the greedy approach to schedule operations on a stage. It uses several heuristics to control the movement of images between stages. The worst case time for the schedule generated by the algorithm is O(N) times the optimal schedule, where N is the maximum width of the graph. The authors generalize this algorithm to work on nonplanar graphs, using heuristics for repositioning images on the stages of PIPE. The worst case time for the more general algorithm is also O(N) times the optimal schedule. Finally, the authors analyze the problem of optimizing throughput and latency for a sequence of DAGs on PIPE.
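
The greedy, top-down idea can be sketched as list scheduling of a DAG onto a linear array of stages; this simplification ignores PIPE's image-movement heuristics:

```python
def greedy_schedule(dag, n_stages):
    """dag: {op: [predecessor ops]}. Returns {op: (time_step, stage)}."""
    done, schedule, t = set(), {}, 0
    while len(done) < len(dag):
        # Ready operations: not yet scheduled, all predecessors complete.
        ready = [op for op in dag
                 if op not in done and all(p in done for p in dag[op])]
        # Greedily pack up to n_stages ready operations into this time step.
        for stage, op in enumerate(ready[:n_stages]):
            schedule[op] = (t, stage)
        done.update(ready[:n_stages])
        t += 1
    return schedule

# A small planar DAG: A feeds B and C, which both feed D.
dag = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
sched = greedy_schedule(dag, n_stages=2)
```

The paper's O(N)-of-optimal bound concerns exactly this kind of greedy packing when the graph's width N exceeds the number of stages.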

Stewart, C.V.; Dyer, C.R.

1988-04-01

440

Lunar and Planetary Science XXXV: Image Processing and Earth Observations  

NASA Technical Reports Server (NTRS)

The titles in this section include: 1) Expansion in Geographic Information Services for PIGWAD; 2) Modernization of the Integrated Software for Imagers and Spectrometers; 3) Science-based Region-of-Interest Image Compression; 4) Topographic Analysis with a Stereo Matching Tool Kit; 5) Central Avra Valley Storage and Recovery Project (CAVSARP) Site, Tucson, Arizona: Floodwater and Soil Moisture Investigations with Extraterrestrial Applications; 6) ASE Floodwater Classifier Development for EO-1 HYPERION Imagery; 7) Autonomous Sciencecraft Experiment (ASE) Operations on EO-1 in 2004; 8) Autonomous Vegetation Cover Scene Classification of EO-1 Hyperion Hyperspectral Data; 9) Long-Term Continental Areal Reduction Produced by Tectonic Processes.

2004-01-01

441

Over the last two decades, we have witnessed an explosive growth in both the diversity of techniques and the range of applications of image processing. However, the area of color image processing is still sporadically  

E-print Network

images or, more generally, processing multichannel images, such as satellite images, color filter array… However, the area of color image processing appears to become the main focus of the image processing research community. Processing color

Plataniotis, Konstantinos N.

442

A Simple Microscopy Assay to Teach the Processes of Phagocytosis and Exocytosis  

PubMed Central

Phagocytosis and exocytosis are two cellular processes involving membrane dynamics. While it is easy to understand the purpose of these processes, it can be extremely difficult for students to comprehend the actual mechanisms. As membrane dynamics play a significant role in many cellular processes ranging from cell signaling to cell division to organelle renewal and maintenance, we felt that we needed to do a better job of teaching these types of processes. Thus, we developed a classroom-based protocol to simultaneously study phagocytosis and exocytosis in Tetrahymena pyriformis. In this paper, we present our results demonstrating that our undergraduate classroom experiment delivers results comparable with those acquired in a professional research laboratory. In addition, students performing the experiment do learn the mechanisms of phagocytosis and exocytosis. Finally, we demonstrate a mathematical exercise to help the students apply their data to the cell. Ultimately, this assay sets the stage for future inquiry-based experiments, in which the students develop their own experimental questions and delve deeper into the mechanisms of phagocytosis and exocytosis. PMID:22665590

Gray, Ross; Gray, Andrew; Fite, Jessica L.; Jordan, Renee; Stark, Sarah; Naylor, Kari

2012-01-01

443

Blurred star image processing for star sensors under dynamic conditions.  

PubMed

The precision of star point location is significant for identifying the star map and acquiring the aircraft attitude from star sensors. Under dynamic conditions, star images are not only corrupted by various noises but also blurred due to the angular rate of the star sensor. To handle different angular rates under dynamic conditions, a novel method is proposed in this article, which includes a denoising method based on an adaptive wavelet threshold and a restoration method for large angular rates. The adaptive threshold is adopted for denoising the star image when the angular rate is in the dynamic range. Then, a mathematical model of motion blur is deduced so as to restore the star map blurred by a large angular rate. Simulation results validate the effectiveness of the proposed method, which is suitable for blurred star image processing and practical for attitude determination of satellites under dynamic conditions. PMID:22778666
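
The wavelet-threshold denoising step can be sketched with a one-level Haar transform and soft thresholding; the universal threshold used here is a common heuristic, not the authors' adaptive rule:

```python
import numpy as np

def haar_denoise(signal):
    """One-level Haar decomposition, soft-threshold the details, reconstruct."""
    even, odd = signal[0::2], signal[1::2]
    approx = (even + odd) / np.sqrt(2)
    detail = (even - odd) / np.sqrt(2)
    # Universal threshold with a robust noise estimate from the details.
    sigma = np.median(np.abs(detail)) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(signal)))
    detail = np.sign(detail) * np.maximum(np.abs(detail) - thr, 0.0)
    # Inverse one-level Haar transform.
    out = np.empty_like(signal, dtype=float)
    out[0::2] = (approx + detail) / np.sqrt(2)
    out[1::2] = (approx - detail) / np.sqrt(2)
    return out

rng = np.random.default_rng(3)
x = np.linspace(0, 4 * np.pi, 256)
truth = np.sin(x)
noisy = truth + rng.normal(0, 0.3, x.shape)
denoised = haar_denoise(noisy)
```

An adaptive scheme like the paper's would vary the threshold with local statistics instead of using one global value.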

Zhang, Weina; Quan, Wei; Guo, Lei

2012-01-01

444

Fast Implementation of Matched Filter Based Automatic Alignment Image Processing  

SciTech Connect

Video images of laser beams imprinted with distinguishable features are used for alignment of the 192 laser beams at the National Ignition Facility (NIF). Algorithms designed to determine the position of these beams enable the control system to perform the task of alignment. Centroiding is a common approach used for determining the position of beams. However, real-world beam images suffer from intensity fluctuations and other distortions, which make such an approach susceptible to higher position measurement variability. Matched filtering used for identifying the beam position results in greater stability of position measurement compared with that obtained using the centroiding technique. However, this gain is achieved at the expense of extra processing time required for each beam image. In this work we explore the possibility of using a field programmable gate array (FPGA) to speed up these computations. The results indicate a performance improvement by a factor of 20 using the FPGA relative to a 3 GHz Pentium 4 processor.
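
The centroid-versus-matched-filter comparison can be sketched in one dimension; the beam profile, noise level, and template are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.arange(200)
true_center = 120
beam = np.exp(-0.5 * ((x - true_center) / 5.0) ** 2)
frame = beam + rng.normal(0, 0.02, x.shape)        # noisy camera line

# Centroid estimate: intensity-weighted mean (negatives clipped first).
clipped = np.maximum(frame, 0)
centroid = float((x * clipped).sum() / clipped.sum())

# Matched-filter estimate: peak of the correlation with the known profile.
template = np.exp(-0.5 * ((x - 100) / 5.0) ** 2)   # template centered mid-frame
corr = np.correlate(frame, template, mode="same")
matched = float(x[np.argmax(corr)])
```

The correlation peak stays locked to the beam, while the clipped background noise biases the centroid toward the frame center, which is the stability gap that motivates the matched-filter approach in the abstract.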

Awwal, A S; Rice, K; Taha, T

2008-04-02

445

Design of interchannel MRF model for probabilistic multichannel image processing.  

PubMed

In this paper, we present a novel framework that exploits an informative reference channel in the processing of another channel. We formulate the problem as a maximum a posteriori estimation problem considering a reference channel and develop a probabilistic model encoding the interchannel correlations based on Markov random fields. Interestingly, the proposed formulation results in an image-specific and region-specific linear filter for each site. The strength of filter response can also be controlled in order to transfer the structural information of a channel to the others. Experimental results on satellite image fusion and chrominance image interpolation with denoising show that our method provides improved subjective and objective performance compared with conventional approaches. PMID:20875973

Koo, Hyung Il; Cho, Nam Ik

2011-03-01

446

GUI-based Processing of Near Infrared Imaging  

NASA Astrophysics Data System (ADS)

As part of The University of Tennessee at Martin's University Scholars mentored research program, we have developed GUI-based, interactive software to aid in the processing of ground-based, near-infrared images. The software is coded in Perl and Tcl/Tk to maximize cross-platform compatibility. The software reduces the raw images to a final set of flat-fielded, sky-subtracted images, which are then input into PhotVis (Gutermuth et al. 2004) to extract the positions and photometric magnitudes of the sources. The final result is a band-merged catalog of sources ready for scientific analysis. Using near-infrared observations of molecular cloud cores obtained with the Magellan 6.5-meter Baade telescope at Las Campanas Observatory, we demonstrate the capability of this software.

Lambert, Dustin; Crews, L. J.; Huard, T. L.; Gutermuth, R. A.

2007-05-01

447

Tracker: Image-Processing and Object-Tracking System Developed  

NASA Technical Reports Server (NTRS)

Tracker is an object-tracking and image-processing program designed and developed at the NASA Lewis Research Center to help with the analysis of images generated by microgravity combustion and fluid physics experiments. Experiments are often recorded on film or videotape for analysis later. Tracker automates the process of examining each frame of the recorded experiment, performing image-processing operations to bring out the desired detail, and recording the positions of the objects of interest. It can load sequences of images from disk files or acquire images (via a frame grabber) from film transports, videotape, laser disks, or a live camera. Tracker controls the image source to automatically advance to the next frame. It can employ a large array of image-processing operations to enhance the detail of the acquired images and can analyze an arbitrarily large number of objects simultaneously. Several different tracking algorithms are available, including conventional threshold and correlation-based techniques, and more esoteric procedures such as "snake" tracking and automated recognition of character data in the image. The Tracker software was written to be operated by researchers, so every attempt was made to make the software as user friendly and self-explanatory as possible. Tracker is used by most of the microgravity combustion and fluid physics experiments performed by Lewis, and by visiting researchers. This includes experiments performed on the space shuttles, Mir, sounding rockets, zero-g research airplanes, drop towers, and ground-based laboratories. This software automates the analysis of the flame's or liquid's physical parameters such as position, velocity, acceleration, size, shape, intensity characteristics, color, and centroid, as well as a number of other measurements. It can perform these operations on multiple objects simultaneously. Another key feature of Tracker is that it performs optical character recognition (OCR). 
This feature is useful in extracting numerical instrumentation data that are embedded in images. All the results are saved in files for further data reduction and graphing. There are currently three Tracking Systems (workstations) operating near the laboratories and offices of Lewis Microgravity Science Division researchers. These systems are used independently by students, scientists, and university-based principal investigators. The researchers bring their tapes or films to the workstation and perform the tracking analysis. The resultant data files generated by the tracking process can then be analyzed on the spot, although most of the time researchers prefer to transfer them via the network to their offices for further analysis or plotting. In addition, many researchers have installed Tracker on computers in their office for desktop analysis of digital image sequences, which can be digitized by the Tracking System or some other means. Tracker has not only provided a capability to efficiently and automatically analyze large volumes of data, saving many hours of tedious work, but has also provided new capabilities to extract valuable information and phenomena that were heretofore undetected and unexploited.
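
The simplest of the tracking algorithms mentioned, conventional threshold tracking, amounts to binarising each frame and following the centroid of the above-threshold region; a minimal sketch (illustrative only, not the Tracker source):

```python
import numpy as np

def track_threshold(frame, thresh):
    """One step of threshold tracking: binarise the frame and return the
    centroid (row, col) of the above-threshold pixels, or None if empty."""
    mask = frame > thresh
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

def track_sequence(frames, thresh):
    """Apply threshold tracking frame by frame, as Tracker automates."""
    return [track_threshold(f, thresh) for f in frames]
```

Correlation-based and "snake" trackers replace the threshold step with template matching or active-contour evolution, but the frame-by-frame loop is the same.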

Klimek, Robert B.; Wright, Theodore W.

1999-01-01

448

Mobile medical image retrieval  

Microsoft Academic Search

Images are an integral part of medical practice for diagnosis, treatment planning and teaching. Image retrieval has gained in importance mainly as a research domain over the past 20 years. Both textual and visual retrieval of images are essential. In the process of mobile devices becoming reliable and having a functionality equaling that of formerly desktop clients, mobile computing has

Samuel Duc; Adrien Depeursinge; Ivan Eggel; Henning Müller

2011-01-01

449

Enhanced Processing and Analysis of Cassini SAR Images of Titan  

NASA Astrophysics Data System (ADS)

SAR images suffer from speckle noise, which hinders interpretation and quantitative analysis. We have adapted a non-local algorithm for de-noising images using an appropriate multiplicative noise model [1] for analysis of Cassini SAR images. We illustrate some examples here that demonstrate the improvement of landform interpretation by focusing on transport processes at Titan's surface. Interpretation of the geomorphic features is facilitated (Figure 1), including details of the channels incised into the terrain, shoreline morphology, and contrast variations in the dark, liquid-covered areas. The latter are suggestive of submarine channels and gradients in the bathymetry. Furthermore, substantial quantitative improvements are possible. We show that a Digital Elevation Model derived from radargrammetry [2] using the de-noised images is obtained with a greater number of matching points (up to 80%) and a better correlation (59% of the pixels give a good correlation in the de-noised data compared with 18% in the original SAR image). An elevation hypsogram of our enhanced DEM shows evidence that fluvial and/or lacustrine processes have affected the topographic distribution substantially. Dune wavelengths and interdune extents are more precisely measured. Finally, radarclinometry techniques applied to our new data are more accurate in dune and mountainous regions. [1] Deledalle C-A., et al., 2009, Weighted maximum likelihood denoising with iterative and probabilistic patch-based weights, Telecom Paris. [2] Kirk, R.L., et al., 2007, First stereoscopic radar images of Titan, Lunar Planet. Sci., XXXVIII, Abstract #1427, Lunar and Planetary Institute, Houston
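
The non-local idea underlying the adapted de-noiser can be sketched with classic non-local means: each pixel becomes a weighted average of pixels whose surrounding patches look similar. The Gaussian (additive-noise) patch distance below is the textbook version; the Cassini adaptation [1] replaces it with a distance suited to multiplicative speckle, which is not reproduced here:

```python
import numpy as np

def _box_sum(a, r):
    """Sum of a over a (2r+1)^2 window, edge-padded."""
    p = np.pad(a, r, mode="edge")
    out = np.zeros_like(a)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += p[dy:dy + a.shape[0], dx:dx + a.shape[1]]
    return out

def nl_means(img, patch=1, search=3, h=0.5):
    """Minimal non-local means with a Gaussian patch-similarity kernel."""
    img = img.astype(float)
    H, W = img.shape
    p = np.pad(img, search, mode="reflect")
    out = np.zeros((H, W))
    weights = np.zeros((H, W))
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = p[search + dy:search + dy + H, search + dx:search + dx + W]
            dist = _box_sum((shifted - img) ** 2, patch)  # patch-wise squared distance
            w = np.exp(-dist / (h * h))
            out += w * shifted
            weights += w
    return out / weights
```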

Lucas, A.; Aharonson, O.; Hayes, A. G.; Deledalle, C. A.; Kirk, R. L.

2011-12-01

450

High Speed Data Processing for Imaging MS-Based Molecular Histology Using Graphical Processing Units  

NASA Astrophysics Data System (ADS)

Imaging MS enables the distributions of hundreds of biomolecular ions to be determined directly from tissue samples. The application of multivariate methods to identify pixels possessing correlated MS profiles is referred to as molecular histology, as tissues can be annotated on the basis of their MS profiles. The application of imaging MS-based molecular histology to larger tissue series, for clinical applications, requires significantly increased computational capacity in order to efficiently analyze the very large, highly dimensional datasets. Such datasets are highly suited to processing using graphics processing units (GPUs), a very cost-effective solution for high speed processing. Here we demonstrate up to 13× speed improvements for imaging MS-based molecular histology using off-the-shelf components, and demonstrate equivalence with CPU-based calculations. It is then discussed how imaging MS investigations may be designed to fully exploit the high speed of GPUs.
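
A representative multivariate step for such data is projecting each pixel's spectrum onto its leading principal components; a CPU sketch via the SVD (the abstract's point is that exactly this kind of dense linear algebra maps well onto GPUs):

```python
import numpy as np

def pca_scores(X, k=2):
    """Project pixel spectra (rows of X) onto their first k principal
    components; pixels with correlated MS profiles cluster in this space."""
    Xc = X - X.mean(axis=0)                      # centre each m/z channel
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                          # scores, shape (pixels, k)
```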

Jones, Emrys A.; van Zeijl, René J. M.; Andrén, Per E.; Deelder, André M.; Wolters, Lex; McDonnell, Liam A.

2012-04-01

451

Optimal Estimation of States in Quantum Image Processing  

E-print Network

An optimal estimator of quantum states based on a modified Kalman filter is presented in this work. The estimator acts after state measurement, allowing an optimal estimate of the quantum state at the output of any quantum image algorithm. In addition, new criteria, logic, and arithmetic based exclusively on projections onto the vertical axis of the Bloch sphere are presented. This approach allows: 1) a simpler development of logic and arithmetic quantum operations, bringing them closer to those used in classical digital image processing algorithms, and 2) the construction of simple and robust classical-to-quantum and quantum-to-classical interfaces. These results extend to quantum algorithms outside image processing as well. In a special section on metrics and simulations, three new metrics based on the comparison between the classical and quantum versions of algorithms for filtering and edge detection of images are presented. Notable differences between the results of the classical and quantum versions of such algorithms (outside and inside the quantum computer, respectively) show the need for modeling state and measurement noise inside the estimation scheme.
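
For orientation, the measurement-update step of an ordinary scalar Kalman filter is shown below; the paper's modified filter for quantum states builds on this same prior-plus-measurement fusion, which is not reproduced here:

```python
def kalman_update(x, P, z, R):
    """One measurement update of a scalar Kalman filter: combine a prior
    estimate x (variance P) with a noisy measurement z (variance R)."""
    K = P / (P + R)                     # Kalman gain: trust ratio
    return x + K * (z - x), (1 - K) * P  # posterior mean and variance
```

With equal prior and measurement variance the update splits the difference and halves the uncertainty.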

Mario Mastriani

2014-06-19

452

IMAGING AND IMAGE PROCESSING. HOLOGRAPHY Some features of photolithography image formation in partially coherent light  

NASA Astrophysics Data System (ADS)

The coherent-noise level in projection images of the sharp edge of an opaque screen, formed in a model photolithography system at different degrees of spatial coherence of the screen-illuminating light, is studied experimentally. The spatial coherence of the laser radiation was reduced by a specially developed device, used as a separate functional unit in the system model. The smoothing of the spatial fluctuations of radiation intensity in the obtained images, caused by the random spatial inhomogeneity of the initial beam intensity, is shown to be highly efficient.

Kitsak, M. A.; Kitsak, A. I.

2010-12-01

453

Small Interactive Image Processing System (SMIPS) users manual  

NASA Technical Reports Server (NTRS)

The Small Interactive Image Processing System (SMIPS) is designed to facilitate the acquisition, digital processing and recording of image data as well as pattern recognition in an interactive mode. Objectives of the system are ease of communication with the computer by personnel who are not expert programmers, fast response to requests for information on pictures, complete error recovery as well as simplification of future programming efforts for extension of the system. The SMIPS system is intended for operation under OS/MVT on an IBM 360/75 or 91 computer equipped with the IBM-2250 Model 1 display unit. This terminal is used as an interface between user and main computer. It has an alphanumeric keyboard, a programmed function keyboard and a light pen which are used for specification of input to the system. Output from the system is displayed on the screen as messages and pictures.

Moik, J. G.

1973-01-01

454

Neon: A Domain-Specific Programming Language for Image Processing  

Microsoft Academic Search

Neon is a high-level domain-specific programming language for writing efficient image processing programs which can run on either the CPU or the GPU. End users write Neon programs in a C# programming environment. When the Neon program is executed, our optimizing code generator outputs human-readable source files for either the CPU or GPU. These source files are then added to

Brian Guenter; Diego Nehab

455

Ultra high speed image processing techniques. [electronic packaging techniques  

NASA Technical Reports Server (NTRS)

Packaging techniques for ultra high speed image processing were developed. These techniques involve the development of a signal feedthrough technique through LSI/VLSI sapphire substrates. This allows the stacking of LSI/VLSI circuit substrates in a 3-dimensional package with greatly reduced length of interconnecting lines between the LSI/VLSI circuits. The reduced parasitic capacitances result in higher LSI/VLSI computational speeds at significantly reduced power consumption levels.

Anthony, T.; Hoeschele, D. F.; Connery, R.; Ehland, J.; Billings, J.

1981-01-01

456

Automatic Denoising and Unmixing in Hyperspectral Image Processing  

NASA Astrophysics Data System (ADS)

This thesis addresses two important aspects in hyperspectral image processing: automatic hyperspectral image denoising and unmixing. The first part of this thesis is devoted to a novel automatic optimized vector bilateral filter denoising algorithm, while the remainder concerns nonnegative matrix factorization with deterministic annealing for unsupervised unmixing in remote sensing hyperspectral images. The need for automatic hyperspectral image processing has been promoted by the development of potent hyperspectral systems, with hundreds of narrow contiguous bands, spanning the visible to the long-wave infrared range of the electromagnetic spectrum. Due to the large volume of raw data generated by such sensors, automatic processing in the hyperspectral image processing chain is preferred to minimize human workload and achieve optimal results. Two of the most heavily researched processing steps in this automation effort are: hyperspectral image denoising, which is an important preprocessing step for almost all remote sensing tasks, and unsupervised unmixing, which decomposes the pixel spectra into a collection of endmember spectral signatures and their corresponding abundance fractions. Two new methodologies are introduced in this thesis to tackle the automatic processing problems described above. Vector bilateral filtering has been shown to provide a good tradeoff between noise removal and edge degradation when applied to multispectral/hyperspectral image denoising. It has also been demonstrated to provide dynamic range enhancement of bands that have impaired signal-to-noise ratios. Typical vector bilateral filtering usage does not employ parameters that have been determined to satisfy optimality criteria. This thesis also introduces an approach for selection of the parameters of a vector bilateral filter through an optimization procedure rather than by ad hoc means. 
The approach is based on posing the filtering problem as one of nonlinear estimation and minimizing Stein's unbiased risk estimate (SURE) of this nonlinear estimator. Along the way, this thesis provides a plausibility argument with an analytical example as to why vector bilateral filtering outperforms band-wise 2D bilateral filtering in enhancing SNR. Experimental results show that the optimized vector bilateral filter provides improved denoising performance on multispectral images when compared to several other approaches. The non-negative matrix factorization (NMF) technique and its extensions were developed to find part-based, linear representations of non-negative multivariate data. They have been shown to provide more interpretable results with realistic non-negative constraints in unsupervised learning applications such as hyperspectral imagery unmixing, image feature extraction, and data mining. This thesis extends the NMF method by incorporating a deterministic annealing optimization procedure, which helps solve the non-convexity problem in NMF and provides a better choice of sparseness constraint. The approach is based on replacing the difficult non-convex optimization problem of NMF with an easier one by adding an auxiliary convex entropy constraint term and solving this first. Experimental results for the hyperspectral unmixing application show that the proposed technique provides improved unmixing performance compared to other state-of-the-art methods.
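
For context, the baseline NMF that the thesis extends can be written with the standard Lee-Seung multiplicative updates; this sketch omits the deterministic-annealing entropy term that is the thesis's actual contribution:

```python
import numpy as np

def nmf(V, r, iters=200, seed=0):
    """Plain NMF: factor non-negative V (m x n) as W (m x r) @ H (r x n)
    by multiplicative updates that keep both factors non-negative."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + 1e-3
    H = rng.random((r, n)) + 1e-3
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H
```

In the unmixing setting the columns of W play the role of endmember spectra and the rows of H their abundance fractions; the non-convexity that annealing addresses shows up as sensitivity to the random initialisation.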

Peng, Honghong

457

Microspectrofluorometry by digital image processing: measurement of cytoplasmic pH  

Microsoft Academic Search

An interface of our microspectrofluorometer with an image processing system performs microspectrofluorometric measurements in living cells by digital image processing. Fluorescence spectroscopic parameters can be measured by digital image processing directly from microscopic images of cells, and are automatically normalized for pathlength and accessible volume. Thus, an accurate cytoplasmic map of various spectroscopic parameters can be produced. The resting cytoplasmic

L. Tanasugarn; P. McNeil; G. T. Reynolds; D. L. Taylor

1984-01-01

458

Real-time low-light-level image-processing system  

Microsoft Academic Search

A real time low light level image processing and tracking system has been developed in our photoelectronic system laboratory. The system contains five basic parts: a low light level (LLL) CCD camera, several special real time image processing hardware function blocks, processed image output lookup table transfer, target tracking function block, and a controlling CPU. The image (512 X 512

Zebin Wei; Huien Zhang; Min Zhou; Yisong Zou; Shanfeng Hou

1993-01-01

459

Laser doppler blood flow imaging using a CMOS imaging sensor with on-chip signal processing.  

PubMed

The first fully integrated 2D CMOS imaging sensor with on-chip signal processing for applications in laser Doppler blood flow (LDBF) imaging has been designed and tested. Obtaining a space-efficient design over 64 × 64 pixels means that the standard processing electronics used off-chip cannot be implemented. Therefore the analog signal processing at each pixel is a tailored design for LDBF signals with balanced optimization for signal-to-noise ratio and silicon area. This custom-made sensor offers key advantages over conventional sensors, viz. the analog signal processing at the pixel level carries out signal normalization; the AC amplification in combination with an anti-aliasing filter allows analog-to-digital conversion with a low number of bits; low-resource implementation of the digital processor enables on-chip processing and the data bottleneck that exists between the detector and processing electronics has been overcome. The sensor demonstrates good agreement with simulation at each design stage. The measured optical performance of the sensor is demonstrated using modulated light signals and in vivo blood flow experiments. Images showing blood flow changes with arterial occlusion and an inflammatory response to a histamine skin-prick demonstrate that the sensor array is capable of detecting blood flow signals from tissue. PMID:24051525
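
The per-pixel computation in conventional LDBF processing is the first moment of the AC photocurrent power spectrum, normalised by the DC level; an off-line numpy sketch of that standard algorithm (illustrative, not the chip's circuit design — the normalisation convention varies between systems):

```python
import numpy as np

def flow_index(signal, fs):
    """Laser Doppler flow estimate for one pixel: first moment of the power
    spectrum of the AC component, normalised by the squared DC level."""
    dc = signal.mean()
    spec = np.abs(np.fft.rfft(signal - dc)) ** 2   # AC power spectrum
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    return (freqs * spec).sum() / dc ** 2           # frequency-weighted power
```

Faster-moving blood Doppler-shifts the signal to higher frequencies, so the frequency-weighted sum grows with flow.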

He, Diwei; Nguyen, Hoang C; Hayes-Gill, Barrie R; Zhu, Yiqun; Crowe, John A; Gill, Cally; Clough, Geraldine F; Morgan, Stephen P

2013-01-01

460

Laser Doppler Blood Flow Imaging Using a CMOS Imaging Sensor with On-Chip Signal Processing  

PubMed Central

The first fully integrated 2D CMOS imaging sensor with on-chip signal processing for applications in laser Doppler blood flow (LDBF) imaging has been designed and tested. Obtaining a space-efficient design over 64 × 64 pixels means that the standard processing electronics used off-chip cannot be implemented. Therefore the analog signal processing at each pixel is a tailored design for LDBF signals with balanced optimization for signal-to-noise ratio and silicon area. This custom-made sensor offers key advantages over conventional sensors, viz. the analog signal processing at the pixel level carries out signal normalization; the AC amplification in combination with an anti-aliasing filter allows analog-to-digital conversion with a low number of bits; low-resource implementation of the digital processor enables on-chip processing and the data bottleneck that exists between the detector and processing electronics has been overcome. The sensor demonstrates good agreement with simulation at each design stage. The measured optical performance of the sensor is demonstrated using modulated light signals and in vivo blood flow experiments. Images showing blood flow changes with arterial occlusion and an inflammatory response to a histamine skin-prick demonstrate that the sensor array is capable of detecting blood flow signals from tissue. PMID:24051525

He, Diwei; Nguyen, Hoang C.; Hayes-Gill, Barrie R.; Zhu, Yiqun; Crowe, John A.; Gill, Cally; Clough, Geraldine F.; Morgan, Stephen P.

2013-01-01

461

Teaching the Writing Process through Digital Storytelling in Pre-service Education  

E-print Network

blocks of knowledge, the foundation of memory and learning. Stories function as symbolic tools, ways of understanding experience as unfolding in time and space (Bruner 1986). Stories serve as avenues to personal experience, and they are the way... and meaningful interactions with more capable peers or adults who are able to model problem solving and assist students in finding solutions. These thinking processes, which utilize psychological tools such as language, symbols, images, writing, mapping...

Green, Martha Robison

2012-07-16

462

Negotiating a space to teach science: Stories of community conversation and personal process in a school reform effort  

NASA Astrophysics Data System (ADS)

This is a qualitative study about elementary teachers in a school district who are involved in a science curricular reform effort. The teachers attempted to move from textbook-based science teaching to a more inquiry and process-based approach. I specifically explore how teachers negotiate their place within changes in pedagogy and curriculum and how this negotiation is enacted in the space of a teacher's own classroom. The account developed here is based on a two-year study. Presented are descriptions, analysis, and my own interpretations of teaching and conversations as teachers spoke with one another, with me and with children as they tried out the new science curriculum and pedagogies. I conclude that people interested in school reform should consider the following ideas as they work with teachers to implement pedagogical and curricular changes. (1) Teaching is a personal/individual process that takes place within a larger community. This leads to a complex context for working and making decisions. (2) Despite feeling that changes were imposed, teachers make the curriculum work for the needs in their own classroom. (3) Change is a process that teachers view as part of their work. Teachers expect that they will adapt curriculum and make it work for the children in their classes and for themselves. I suggest that those who advocate various reform efforts in teaching and curriculum should consider the spaces that teachers create as they become a part of the change process including intellectual, physical, and emotional ones. In my stories I assert: teachers create their own spaces for making changes in pedagogy and curriculum and they do this as a complex negotiation of external demands (such as their community, relationships with colleagues, and state standards) and their own values and interpretations. 
The way that teachers implement the change process is a personal one, and because it is a personal process, school reform efforts largely depend on the teachers making these efforts a part of their own thinking, teaching, and learning.

Barker, Heidi Bulmahn

463

Applicability of the Multiple Intelligence Theory to the Process of Organizing and Planning of Learning and Teaching  

ERIC Educational Resources Information Center

It has long been under discussion how the teaching and learning environment should be arranged, how individuals achieve learning, and how teachers can effectively contribute to this process. Accordingly, a considerable number of theories and models have been proposed. Gardner (1983) caused a remarkable shift in the perception of learning theory as…

Acat, M. Bahaddin

2005-01-01

464

Validation Study of the Scale for "Assessment of the Teaching-Learning Process", Student Version (ATLP-S)  

ERIC Educational Resources Information Center

Introduction: The main goal of this study is to evaluate the psychometric and assessment features of the Scale for the "Assessment of the Teaching-Learning Process, Student Version" (ATLP-S), for both practical and theoretical reasons. From an applied point of view, this self-report measurement instrument has been designed to encourage student…

de la Fuente, Jesus; Sander, Paul; Justicia, Fernando; Pichardo, M. Carmen; Garcia-Berben, Ana B.

2010-01-01

465

Optimized Laplacian image sharpening algorithm based on graphic processing unit  

NASA Astrophysics Data System (ADS)

In classical Laplacian image sharpening, all pixels are processed one by one, which leads to a large amount of computation. Traditional Laplacian sharpening on a CPU is considerably time-consuming, especially for large images. In this paper, we propose a parallel implementation of Laplacian sharpening based on the Compute Unified Device Architecture (CUDA), a computing platform for Graphics Processing Units (GPUs), and analyze the impact of image size on performance as well as the relationship between data transfer time and parallel computing time. Further, according to the different characteristics of the different memory types, an improved scheme of our method is developed, which exploits shared memory on the GPU instead of global memory and further increases efficiency. Experimental results show that the two novel algorithms outperform the traditional sequential method based on OpenCV in computing speed.
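
The per-pixel computation being parallelised is small: subtract the 4-neighbour Laplacian from each pixel so edges are boosted. A vectorised numpy reference version (the paper's contribution is the CUDA kernel and its shared-memory variant, not this formula):

```python
import numpy as np

def laplacian_sharpen(img, strength=1.0):
    """Classical Laplacian sharpening with the 4-neighbour kernel
    [[0,1,0],[1,-4,1],[0,1,0]]: out = img - strength * laplacian."""
    p = np.pad(img.astype(float), 1, mode="edge")
    lap = (p[:-2, 1:-1] + p[2:, 1:-1] +
           p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * p[1:-1, 1:-1])
    return img - strength * lap
```

In the CUDA version each output pixel maps to one thread; the shared-memory scheme tiles the padded neighbourhood into on-chip memory so the five reads per pixel hit fast storage instead of global memory.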

Ma, Tinghuai; Li, Lu; Ji, Sai; Wang, Xin; Tian, Yuan; Al-Dhelaan, Abdullah; Al-Rodhaan, Mznah

2014-12-01

466

Processing Earth Observing images with Ames Stereo Pipeline  

NASA Astrophysics Data System (ADS)

ICESat with its GLAS instrument provided valuable elevation measurements of glaciers. The loss of this spacecraft caused a demand for alternative elevation sources. I