These are representative sample records from Science.gov related to your search topic.
For comprehensive and current results, perform a real-time search at Science.gov.
1

Image Processing for Teaching.  

ERIC Educational Resources Information Center

The Image Processing for Teaching project provides a powerful medium to excite students about science and mathematics, especially children from minority groups and others whose needs have not been met by traditional teaching. Using professional-quality software on microcomputers, students explore a variety of scientific data sets, including…

Greenberg, R.; And Others

1993-01-01

2

Teaching image processing and pattern recognition with the Intel OpenCV library  

Microsoft Academic Search

In this paper we present an approach to teaching image processing and pattern recognition with the use of the OpenCV library. Image processing, pattern recognition and computer vision are important branches of science and apply to tasks ranging from critical, involving medical diagnostics, to everyday tasks including art and entertainment purposes. It is therefore crucial to provide students of image…

Adam Kozlowski; Aleksandra Królak

2009-01-01

3

Enhancing the Teaching of Digital Processing of Remote Sensing Image Course through Geospatial Web Processing Services  

NASA Astrophysics Data System (ADS)

Remote sensing (RS) is an essential method to collect data for Earth science research. Huge amounts of remote sensing data, most of them in image form, have been acquired. Almost all geography departments in the world offer courses in digital processing of remote sensing images. Such courses place emphasis on how to digitally process large amounts of multi-source images for solving real-world problems. However, due to the diversity and complexity of RS images and the shortcomings of current data and processing infrastructure, obstacles to effectively teaching such courses remain. The major obstacles include 1) difficulties in finding, accessing, integrating and using massive RS images by students and educators, and 2) inadequate processing functions and computing facilities for students to freely explore the massive data. Recent developments in geospatial Web processing service systems, which make massive data, computing power, and processing capabilities available to average Internet users anywhere in the world, promise the removal of these obstacles. The GeoBrain system developed by CSISS is an example of such systems. All functions available in the GRASS open-source GIS have been implemented as Web services in GeoBrain. Petabytes of remote sensing images in NASA data centers, the USGS Landsat data archive, and NOAA CLASS are transparently accessible and processable through GeoBrain. The GeoBrain system is operated on a high-performance cluster server with large disk storage and a fast Internet connection. All GeoBrain capabilities can be accessed from any Internet-connected Web browser. Dozens of universities have used GeoBrain as an ideal platform to support data-intensive remote sensing education. This presentation gives a specific example of using GeoBrain geoprocessing services to enhance the teaching of GGS 588, Digital Remote Sensing, taught at the Department of Geography and Geoinformation Science, George Mason University.
The course uses the textbook "Introductory Digital Image Processing: A Remote Sensing Perspective" by John Jensen, which is widely adopted in geography departments around the world for training students in digital processing of remote sensing images. In the traditional setting for the course, the instructor prepares a set of sample remote sensing images, and commercial desktop remote sensing software, such as ERDAS, is used for the lab exercises. The students have to do the exercises in the lab and can use only the prepared sample images. For this specific course at GMU, we developed GeoBrain-based lab exercises. With GeoBrain, students can now explore petabytes of remote sensing images in the NASA, NOAA, and USGS data archives instead of dealing only with sample images. Students have a much more powerful computing facility available for their lab exercises, and they can explore the data and do the exercises at any time and place, as long as they can access the Internet through a Web browser. The feedback from students on learning digital image processing with the help of GeoBrain Web processing services has been very positive. The teaching/lab materials and GeoBrain services are freely available to anyone at http://www.laits.gmu.edu.

Di, L.; Deng, M.

2010-12-01

4

Teaching Effectively with Visual Effect in an Image-Processing Class.  

ERIC Educational Resources Information Center

Describes a course teaching the use of computers in emulating human visual capability and image processing and proposes an interactive presentation using multimedia technology to capture and sustain student attention. Describes the three phase presentation: introduction of image processing equipment, presentation of lecture material, and…

Ng, G. S.

1997-01-01

5

Image Processing for Teaching: Transforming a Scientific Research Tool into an Educational Technology.  

ERIC Educational Resources Information Center

Describes the Image Processing for Teaching (IPT) project, which provides digital image processing to excite students about science and mathematics as they use research-quality software on microcomputers. Provides information on IPT, whose dissemination components have included widespread teacher education and curriculum-based materials…

Greenberg, Richard

1998-01-01

6

The teaching of computer programming and digital image processing in radiography.  

PubMed

The increased use of digital processing techniques in medical radiation imaging modalities, along with the rapid advance of information technology, has resulted in a significant change in the delivery of radiographic teaching programs. This paper details a methodology used to concurrently educate radiographers in both computer programming and image processing. The students learn to program in Visual Basic for Applications (VBA), and the programming skills are contextualised by requiring the students to write a digital subtraction angiography (DSA) package. Program code generation and the image presentation interface are handled by the Microsoft Excel spreadsheet. The user-friendly nature of this common interface enables all students to readily begin program creation. The teaching of programming and image processing skills by this method may be readily generalised to other vocational fields where digital image manipulation is a professional requirement. PMID:9726504
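
The DSA package the students write reduces, at its core, to pixel-wise subtraction of a pre-contrast mask image from a contrast-enhanced image. A minimal pure-Python sketch of that core operation (the toy pixel values are invented for illustration; the students' actual implementation uses VBA and Excel):

```python
def dsa_subtract(contrast, mask):
    """Digital subtraction angiography: subtract the pre-contrast mask
    image from the contrast image, pixel by pixel, clamping negative
    differences to zero so only contrast-filled structures remain."""
    return [[max(c - m, 0) for c, m in zip(crow, mrow)]
            for crow, mrow in zip(contrast, mask)]

# Toy 3x3 images: uniform background of 100, vessel filled with contrast agent.
mask     = [[100, 100, 100], [100, 100, 100], [100, 100, 100]]
contrast = [[100, 180, 100], [100, 200, 100], [100, 190, 100]]

print(dsa_subtract(contrast, mask))
# Only the vessel remains: [[0, 80, 0], [0, 100, 0], [0, 90, 0]]
```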

Allan, G L; Zylinski, J

1998-06-01

7

Teaching image processing and pattern recognition with the Intel OpenCV library  

NASA Astrophysics Data System (ADS)

In this paper we present an approach to teaching image processing and pattern recognition with the use of the OpenCV library. Image processing, pattern recognition and computer vision are important branches of science and apply to tasks ranging from critical, involving medical diagnostics, to everyday tasks including art and entertainment purposes. It is therefore crucial to provide students of image processing and pattern recognition with the most up-to-date solutions available. In the Institute of Electronics at the Technical University of Lodz we facilitate the teaching process in this subject with the OpenCV library, which is an open-source set of classes, functions and procedures that can be used in programming efficient and innovative algorithms for various purposes. The topics of student projects completed with the help of the OpenCV library range from automatic correction of image quality parameters or creation of panoramic images from video to pedestrian tracking in surveillance camera video sequences or head-movement-based mouse cursor control for the motorically impaired.
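
One of the project topics mentioned, automatic correction of image quality parameters, can be illustrated with a simple linear contrast stretch. This is a pure-Python sketch of the general idea, not the OpenCV calls the students would actually use:

```python
def contrast_stretch(pixels, lo=0, hi=255):
    """Linearly rescale pixel values so the darkest pixel maps to lo and
    the brightest to hi -- a basic automatic contrast correction."""
    pmin, pmax = min(pixels), max(pixels)
    if pmax == pmin:                  # flat image: nothing to stretch
        return pixels[:]
    scale = (hi - lo) / (pmax - pmin)
    return [round(lo + (p - pmin) * scale) for p in pixels]

print(contrast_stretch([50, 60, 70, 80]))   # -> [0, 85, 170, 255]
```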

Kozłowski, Adam; Królak, Aleksandra

2009-06-01

8

Teaching High School Science Using Image Processing: A Case Study of Implementation of Computer Technology.  

ERIC Educational Resources Information Center

Outlines an in-depth case study of teachers' use of image processing in biology, earth science, and physics classes in one high school science department. Explores issues surrounding technology implementation. Contains 21 references. (DDR)

Greenberg, Richard; Raphael, Jacqueline; Keller, Jill L.; Tobias, Sheila

1998-01-01

9

A self-teaching image processing and voice-recognition-based, intelligent and interactive system to educate visually impaired children  

NASA Astrophysics Data System (ADS)

A self-teaching image processing and voice recognition based system is developed to educate visually impaired children, chiefly in their primary education. The system comprises a computer, a vision camera, an ear speaker and a microphone. The camera, attached to the computer, is mounted on the ceiling opposite (at the required angle) the desk on which the book is placed. Sample images and voices, in the form of instructions and commands for English and Urdu alphabets, numeric digits, operators and shapes, are stored in the database. A blind child first reads the embossed character (object) with the fingers, then speaks the answer (the name of the character, shape, etc.) into the microphone. When the child's voice command is received by the microphone, an image is taken by the camera and processed by a MATLAB® program, developed with the Image Acquisition and Image Processing toolboxes, which generates a response or the required set of instructions for the child via the ear speaker, resulting in self-education of a visually impaired child. The speech recognition program is also developed in MATLAB®, with the Data Acquisition and Signal Processing toolboxes, and records and processes the commands of the blind child.
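
The system's command-and-response loop can be sketched schematically in Python; the database contents and function names below are invented for illustration, standing in for the MATLAB® image- and speech-recognition steps the authors describe:

```python
# Hypothetical stored database: recognized character -> expected spoken answer.
DATABASE = {"A": "a", "7": "seven", "circle": "circle"}

def check_answer(recognized_char, spoken_answer):
    """Compare the child's spoken answer with the stored answer for the
    character recognized in the camera image, and return the feedback
    the system would speak through the ear speaker."""
    expected = DATABASE.get(recognized_char)
    if expected is None:
        return "character not in database"
    if spoken_answer.strip().lower() == expected:
        return "correct"
    return f"try again, this is {expected}"

print(check_answer("7", "Seven"))   # -> correct
```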

Iqbal, Asim; Farooq, Umar; Mahmood, Hassan; Asad, Muhammad Usman; Khan, Akrama; Atiq, Hafiz Muhammad

2010-02-01

10

SSMiles: Using Models to Teach about Remote Sensing and Image Processing.  

ERIC Educational Resources Information Center

Presents an introductory lesson on remote sensing and image processing to be used in cooperative groups. Students are asked to solve a problem by gathering information, making inferences, transforming data into other forms, and making and testing hypotheses. Includes four expansions of the lesson and a reproducible student worksheet. (MKR)

Tracy, Dyanne M., Ed.

1994-01-01

11

The Use of Undergraduate Project Courses for Teaching Image and Signal Processing Techniques at Purdue University  

Microsoft Academic Search

This paper describes our approaches to introduce service learning and research concepts from image and signal processing into the undergraduate ECE curriculum at Purdue University. In particular, we describe two project courses we have developed: one is in the context of the Purdue Engineering Projects in Community Service (EPICS) program and the other is a new course known as Vertically…

Edward J Delp; Yung-Hsiang Lu

2006-01-01

12

Signals and Images: Image Processing  

E-print Network

Presentation slides on signals and images: wavelets, image processing, models and approximations, and data-driven approximations.

Lakey, Joseph D.

13

Image Processing  

NASA Technical Reports Server (NTRS)

Electronic Imagery, Inc.'s ImageScale Plus software, developed through a Small Business Innovation Research (SBIR) contract with Kennedy Space Center for use on the Space Shuttle orbiter in 1991, enables astronauts to conduct image processing, prepare electronic still camera images in orbit, display them, and downlink them to ground-based scientists for evaluation. Electronic Imagery, Inc.'s ImageCount, a spin-off product of ImageScale Plus, is used to count trees in Florida orange groves. Other applications include X-ray and MRI imagery, textile designs, and special effects for movies. As of 1/28/98, the company could not be located; therefore the contact/product information is no longer valid.

1993-01-01

14

Teaching Reflection Seismic Processing  

NASA Astrophysics Data System (ADS)

Without pictures, it is difficult to give students a feeling for wave propagation, transmission, and reflection. Even with pictures, wave propagation is still static to many. However, when students use and modify scripts that generate wavefronts and rays through a geologic model that they have modified themselves, we find that students gain a real feeling for wave propagation. To facilitate teaching 2-D seismic reflection data processing (from acquisition through migration) to our undergraduate and graduate Reflection Seismology students, we use Seismic Un*x (SU) software. SU is maintained and distributed by Colorado School of Mines, and it is freely available (at www.cwp.mines.edu/cwpcodes). Our approach includes use of synthetic and real seismic data, processing scripts, and detailed explanation of the scripts. Our real data were provided by Gregory F. Moore of the University of Hawaii. This approach can be used by any school at virtually no expense for either software or data, and can provide students with a sound introduction to techniques used in processing of reflection seismic data. The same software can be used for other purposes, such as research, with no additional expense. Students who have completed a course using SU are well equipped to begin using it for research, as well. Scripts for each processing step are supplied and explained to the students. Our detailed description of the scripts means students do not have to know anything about SU to start. Experience with the Unix operating system is preferable but not necessary -- our notes include Computer Hints to help the beginner work with the Unix operating system. We include several examples of synthetic model building, acquiring shot gathers through synthetic models, sorting shot gathers to CMP gathers, gain, 1-D frequency filtering, f-k filtering, deconvolution, semblance displays and velocity analysis, flattening data (NMO), stacking the CMPs, and migration. We use two real (marine) data sets. 
One of these is very easy to process, yet provides an extraordinary example of the importance of migration after stack. The other data set is a challenge to process, due to contamination by multiples. Students who complete the SU exercises learn the structure of reflection seismic data, the fundamentals of seismic data processing, and gain an introduction to signal processing, providing them with the tools required to make appropriate career choices and/or to continue their research.
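
The NMO (normal moveout) step in the processing sequence above flattens reflection hyperbolas before stacking. A short Python sketch of the standard hyperbolic traveltime relation t(x) = sqrt(t0² + x²/v²) that NMO correction removes (the reflector depth, velocity, and offset values are illustrative, not from the course data):

```python
import math

def nmo_traveltime(t0, offset, velocity):
    """Hyperbolic reflection traveltime: t(x) = sqrt(t0^2 + (x/v)^2),
    for zero-offset time t0 (s), offset x (m), and velocity v (m/s)."""
    return math.sqrt(t0 ** 2 + (offset / velocity) ** 2)

def nmo_correction(t0, offset, velocity):
    """Moveout removed when flattening the gather: t(x) - t0."""
    return nmo_traveltime(t0, offset, velocity) - t0

# Reflector at t0 = 1.0 s, velocity 2000 m/s, receiver offset 1500 m.
t = nmo_traveltime(1.0, 1500.0, 2000.0)
print(round(t, 3), round(nmo_correction(1.0, 1500.0, 2000.0), 3))  # -> 1.25 0.25
```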

Forel, D.; Benz, T.; Pennington, W. D.

2004-12-01

15

Teaching: A Reflective Process  

ERIC Educational Resources Information Center

In this article, the authors describe how they used formative assessments to ferret out possible misconceptions among middle-school students in a unit about weather-related concepts. Because they teach fifth- and eighth-grade science, this assessment also gives them a chance to see how student understanding develops over the years. This year they…

German, Susan; O'Day, Elizabeth

2009-01-01

16

Teaching Image-Processing Concepts in Junior High School: Boys' and Girls' Achievements and Attitudes towards Technology  

ERIC Educational Resources Information Center

Background: This research focused on the development, implementation and evaluation of a course on image-processing principles aimed at middle-school students. Purpose: The overarching purpose of the study was that of integrating the learning of subjects in science, technology, engineering and mathematics (STEM), and linking the learning of these…

Barak, Moshe; Asad, Khaled

2012-01-01

17

Image Visualization: Medical Image Processing  

E-print Network

Lecture slides for ENG4BF3 Medical Image Processing, covering image visualization: visualization methods for medical images (determining quantitative information about the properties of anatomic structures) and two-dimensional image generation and visualization.

Wu, Xiaolin

18

Image Processing  

NASA Technical Reports Server (NTRS)

The Computer Graphics Center of North Carolina State University uses LAS, a COSMIC program, to analyze and manipulate data from Landsat and SPOT providing information for government and commercial land resource application projects. LAS is used to interpret aircraft/satellite data and enables researchers to improve image-based classification accuracies. The system is easy to use and has proven to be a valuable remote sensing training tool.

1991-01-01

19

Linear Algebra and Image Processing  

ERIC Educational Resources Information Center

We use the computing technology digital image processing (DIP) to enhance the teaching of linear algebra so as to make the course more visual and interesting. Certainly, this visual approach by using technology to link linear algebra to DIP is interesting and unexpected to both students as well as many faculty. (Contains 2 tables and 11 figures.)
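
The linear-algebra-to-DIP link can be made concrete by treating a grayscale image as a matrix and applying a linear map to it. A minimal pure-Python sketch (not taken from the article): right-multiplying by a permutation matrix reverses the columns, i.e. mirrors the image horizontally.

```python
def matmul(A, B):
    """Plain-Python matrix product, so the linear-algebra step is explicit."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

# A 3x3 grayscale "image" as a matrix.
img = [[10, 20, 30],
       [40, 50, 60],
       [70, 80, 90]]

# This permutation matrix swaps the first and last columns on
# right-multiplication -- a horizontal mirror, as pure linear algebra.
flip = [[0, 0, 1],
        [0, 1, 0],
        [1, 0, 0]]

print(matmul(img, flip))   # -> [[30, 20, 10], [60, 50, 40], [90, 80, 70]]
```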

Allali, Mohamed

2010-01-01

20

Using Process Visuals to Teach Art  

ERIC Educational Resources Information Center

Meeting the diverse needs of Pamela Malkin's students forced her to really look at what she teaches and how she teaches it. While she was teaching her sixth-grade students to create a mask from clay, she found that many were having a hard time remembering all the steps of her demonstration. Since most of the process was taught through…

Malkin, Pamela

2005-01-01

21

Image Processing  

NASA Technical Reports Server (NTRS)

A new spinoff product was derived from Geospectra Corporation's expertise in processing LANDSAT data in a software package. Called ATOM (for Automatic Topographic Mapping), it's capable of digitally extracting elevation information from stereo photos taken by spaceborne cameras. ATOM offers a new dimension of realism in applications involving terrain simulations, producing extremely precise maps of an area's elevations at a lower cost than traditional methods. ATOM has a number of applications involving defense training simulations and offers utility in architecture, urban planning, forestry, petroleum and mineral exploration.

1987-01-01

22

Biomedical image processing  

Microsoft Academic Search

Biomedical image processing is a very broad field; it covers biomedical signal gathering, image forming, picture processing, and image display to medical diagnosis based on features extracted from images. This article reviews this topic in both its fundamentals and applications. In its fundamentals, some basic image processing techniques including outlining, deblurring, noise cleaning, filtering, search, classical analysis and texture analysis…

1981-01-01

23

Computers in Public Schools: Changing the Image with Image Processing.  

ERIC Educational Resources Information Center

The kinds of educational technologies selected can make the difference between uninspired, rote computer use and challenging learning experiences. University of Arizona's Image Processing for Teaching Project has worked with over 1,000 teachers to develop image-processing techniques that provide students with exciting, open-ended opportunities for…

Raphael, Jacqueline; Greenberg, Richard

1995-01-01

24

Digital Imaging and Image Processing  

NSDL National Science Digital Library

The first site is an excellent introduction to digital imaging from the Eastman Kodak Company (1). There are five lessons with review questions and competency exams, covering fundamentals, image capture, and processing. A more technical introduction is found at the Digital Imaging Glossary (2). This educational resource has several short articles about compression algorithms and specific imaging techniques. The Hypermedia Image Processing Reference (3) goes into the theory of image processing. It describes operations involving image arithmetic, blending multiple images, and feature detectors, to name a few; and several of the sections have illustrative Java applets. The Center for Imaging Science at Johns Hopkins University (4) offers two chapters from a book on "metric pattern theory." A brief overview of the material is provided on the main page, and the chapters can be viewed on or offline with special plug-ins given on the Web site. The Journal of Electronic Imaging (5) is a quarterly publication with many papers on current research. The final issue of 2002 has a special section on Internet imaging that is quite interesting. A research project at the University of Washington (6) focuses on the role of mathematics in image processing. Besides a thorough description of the project, there is free software and documentation given on the Web site. Philips Research (7) is working on a product that seems like something from a science fiction movie. Three dimensional television and the technologies that make it possible are described on the site. Related to this is a November 2002 news article discussing holograms and 3-D video displays (8). The devices are being studied by the Spatial Imaging Group at the Massachusetts Institute of Technology Media Lab.
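
The image arithmetic and blending operations covered by the Hypermedia Image Processing Reference are easy to sketch. A minimal pure-Python example of alpha blending two grayscale images (toy pixel values chosen for illustration):

```python
def blend(img_a, img_b, alpha=0.5):
    """Pixel-wise alpha blend of two equal-sized grayscale images:
    result = alpha * a + (1 - alpha) * b, rounded to integer levels."""
    return [[round(alpha * a + (1 - alpha) * b) for a, b in zip(ra, rb)]
            for ra, rb in zip(img_a, img_b)]

a = [[0, 100], [200, 255]]
b = [[255, 100], [0, 55]]
print(blend(a, b, 0.5))   # -> [[128, 100], [100, 155]]
```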

Leske, Cavin.

2002-01-01

25

Multispectral imaging and image processing  

NASA Astrophysics Data System (ADS)

The color accuracy of conventional RGB cameras is not sufficient for many color-critical applications. One of these applications, namely the measurement of color defects in yarns, is why Prof. Til Aach and the Institute of Image Processing and Computer Vision (RWTH Aachen University, Germany) started off with multispectral imaging. The first acquisition device was a camera using a monochrome sensor and seven bandpass color filters positioned sequentially in front of it. The camera allowed sampling the visible wavelength range more accurately and reconstructing the spectra for each acquired image position. An overview will be given over several optical and imaging aspects of the multispectral camera that have been investigated. For instance, optical aberrations caused by filters and camera lens deteriorate the quality of captured multispectral images. The different aberrations were analyzed thoroughly and compensated based on models for the optical elements and the imaging chain by utilizing image processing. With this compensation, geometrical distortions disappear and sharpness is enhanced, without reducing the color accuracy of multispectral images. Strong foundations in multispectral imaging were laid and a fruitful cooperation was initiated with Prof. Bernhard Hill. Current research topics like stereo multispectral imaging and goniometric multispectral measurements that are further explored with his expertise will also be presented in this work.

Klein, Julie

2014-02-01

26

Biomedical image processing  

SciTech Connect

Biomedical image processing is a very broad field; it covers biomedical signal gathering, image forming, picture processing, and image display to medical diagnosis based on features extracted from images. This article reviews this topic in both its fundamentals and applications. In its fundamentals, some basic image processing techniques including outlining, deblurring, noise cleaning, filtering, search, classical analysis and texture analysis have been reviewed together with examples. The state-of-the-art image processing systems have been introduced and discussed in two categories: general purpose image processing systems and image analyzers. In order for these systems to be effective for biomedical applications, special biomedical image processing languages have to be developed. The combination of both hardware and software leads to clinical imaging devices. Two different types of clinical imaging devices have been discussed. The first is radiological imaging, which includes radiography, thermography, ultrasound, nuclear medicine and CT. Among these, thermography is the most noninvasive but is limited in application due to the low energy of its source. X-ray CT is excellent for static anatomical images and is moving toward the measurement of dynamic function, whereas nuclear imaging is moving toward organ metabolism and ultrasound toward tissue physical characteristics. Heart imaging is one of the most interesting and challenging research topics in biomedical image processing; current methods, including the invasive-technique cineangiography and the noninvasive ultrasound, nuclear medicine, transmission, and emission CT methodologies, have been reviewed.
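
Of the basic techniques the review lists, noise cleaning is the easiest to illustrate. A pure-Python sketch of a 1-D median filter, a standard noise-cleaning operation (not necessarily the specific method used in the article):

```python
def median_filter(signal, width=3):
    """Slide a window over the signal and replace each sample with the
    window median; isolated impulse-noise spikes are removed while
    step edges are preserved. Windows are clipped at the boundaries."""
    half = width // 2
    out = []
    for i in range(len(signal)):
        window = sorted(signal[max(0, i - half): i + half + 1])
        out.append(window[len(window) // 2])
    return out

noisy = [10, 10, 200, 10, 10, 10]   # one impulse-noise spike
print(median_filter(noisy))         # -> [10, 10, 10, 10, 10, 10]
```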

Huang, H.K.

1981-01-01

27

Art Images for College Teaching (AICT)  

NSDL National Science Digital Library

Allan T. Kohl of the Minneapolis College of Art and Design presents Art Images for College Teaching (AICT), "a royalty-free image exchange resource for the educational community." AICT images may be downloaded, and making derivative copies is permitted, as long as the images will be used for educational or personal purposes, not commercial. AICT on the Web consists of selections from a more extensive image collection on CD, arranged in five broad chronological sections: Ancient, Medieval Era, Renaissance & Baroque, 18th - 20th Century, and Non-Western cultures. AICT is still in development; for example, the 18th - 20th Century section currently contains several messages informing users "there is nothing here." Educational institutions and individuals who find this too limiting can rent entire image CDs. Already, AICT includes a great many of the images necessary for teaching art history courses. One helpful feature is a concordance to about a dozen standard art history textbooks, allowing users to cross-reference AICT images to these books.

28

Delivering labeled teaching images over the Web.  

PubMed

The Web provides educators with the best opportunity to date for distributing teaching images across the educational enterprise and within the clinical environment. Experience in the pre-Web era showed that labels and information linked to parts of the image are crucial to student learning, yet standard Web technology does not enable the delivery of labeled images. We have developed an environment called OverLayer that supports authoring and delivering such images in a variety of formats. OverLayer has a number of functional specifications based on the literature and on our experience, among them the following: users should be able to find components by name or by image, and to receive feedback about their choices in order to test themselves; the image should be of arbitrary size, reusable, and linked to further information; the labels should not obscure the image and should be linked to further information; and images should be stand-alone files that can be transferred among faculty members. Implemented in Java, OverLayer (http://omie.med.jhmi.edu/overlayer) has at its heart a set of object classes that have been reused in a number of applets for different teaching purposes, together with a file format for creating OverLayer images. We have created a 350-image histology library and a 500-image pathology library, and are working on a 400-image GI endoscopy library. We hope that the OverLayer suite of classes and implementations will help to further the gains made by previous image-based hyperlinked technologies. PMID:9929253

Lehmann, H P; Nguyen, B; Freedman, J

1998-01-01

29

Digital image processing  

Microsoft Academic Search

The field of digital image processing is reviewed with reference to its origins, progress, current status, and prospects for the future. Consideration is given to the evolution of image processor display devices, developments in the functional components of an image processor display system (e.g. memory, data bus, and pipeline central processing unit), and developments in the software. The major future…

B. R. Hunt

1981-01-01

30

Teaching Peer Review and the Process of Scientific Writing  

E-print Network

Notes that students understand neither the process of scientific writing nor the significance of peer review, and that while existing exercises introduce aspects of the publishing process, none fully reproduced peer review and revision of papers together…

Guilford, William

31

Retinex Image Processing  

NSDL National Science Digital Library

Retinex Image Processing technology, developed by NASA, is used to compensate for the effect of poor lighting in recorded images. Shadows, changes in the color of illumination, and several other factors can cause image quality to be highly variable. Using an advanced system that sharpens images and efficiently renders colors, a much more constant image quality can be achieved regardless of the lighting. Retinex technology is described in several online publications that can be downloaded from this Web site. Additionally, some example pictures of scenes taken with and without the image processing are shown.
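
The core retinex idea, separating local contrast from overall illumination, can be sketched in one dimension: subtract the log of a local average (the "surround") from the log of each sample. This schematic Python sketch is far simpler than NASA's multiscale retinex, which uses Gaussian surrounds over 2-D color images:

```python
import math

def retinex_1d(intensities, window=3):
    """Schematic single-scale retinex on a 1-D signal: output the log of
    each sample minus the log of a local average, so the result reflects
    local contrast rather than the absolute illumination level."""
    half = window // 2
    out = []
    for i in range(len(intensities)):
        surround = intensities[max(0, i - half): i + half + 1]
        avg = sum(surround) / len(surround)
        out.append(math.log(intensities[i]) - math.log(avg))
    return out

# A uniform region under dim vs. bright illumination yields the same
# retinex output: the illumination level cancels out.
dim    = retinex_1d([10.0, 10.0, 10.0])
bright = retinex_1d([100.0, 100.0, 100.0])
print(dim == bright)   # -> True
```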

32

Biomedical image processing.  

PubMed

Biomedical image processing is a very broad field; it covers biomedical signal gathering, image forming, picture processing, and image display to medical diagnosis based on features extracted from images. This article reviews this topic in both its fundamentals and applications. In its fundamentals, some basic image processing techniques including outlining, deblurring, noise cleaning, filtering, search, classical analysis and texture analysis have been reviewed together with examples. The state-of-the-art image processing systems have been introduced and discussed in two categories: general purpose image processing systems and image analyzers. In order for these systems to be effective for biomedical applications, special biomedical image processing languages have to be developed. The combination of both hardware and software leads to clinical imaging devices. Two different types of clinical imaging devices have been discussed. The first is radiological imaging, which includes radiography, thermography, ultrasound, nuclear medicine and CT. Among these, thermography is the most noninvasive but is limited in application due to the low energy of its source. X-ray CT is excellent for static anatomical images and is moving toward the measurement of dynamic function, whereas nuclear imaging is moving toward organ metabolism and ultrasound toward tissue physical characteristics. Heart imaging is one of the most interesting and challenging research topics in biomedical image processing; current methods, including the invasive-technique cineangiography and the noninvasive ultrasound, nuclear medicine, transmission, and emission CT methodologies, have been reviewed. Two current federally funded research projects in heart imaging, the dynamic spatial reconstructor and the dynamic cardiac three-dimensional densitometer, should bring some fruitful results in the near future. 
Microscopic imaging techniques are very different from radiological imaging techniques in the sense that interaction between the operator and the imaging device is essential. The white blood cell analyzer has been developed to the point that it has become a daily clinical imaging device. An interactive chromosome karyotyper is being clinically evaluated, and its preliminary indications are very encouraging. Tremendous efforts have been devoted to the automation of cancer cytology; it is hoped that some prototypes will be available for clinical trials very soon. Automation of histology is still in its infancy; much work remains to be done in this area. The 1970s were very fruitful in applying imaging techniques to biomedical applications, the computerized tomographic scanner and the white blood cell analyzer being the most successful imaging devices... PMID:7023828

Huang, H K

1981-01-01

33

Space variant image processing  

Microsoft Academic Search

This paper describes a graph-based approach to image processing, intended for use with images obtained from sensors having space variant sampling grids. The connectivity graph (CG) is presented as a fundamental framework for posing image operations in any kind of space variant sensor. Partially motivated by the observation that human vision is strongly space variant, a number of research groups

Richard S. Wallace; Ping-wen Ong; Benjamin B. Bederson; Eric L. Schwartz

1994-01-01

34

Teaching evolutionary processes to skeptical students  

NASA Astrophysics Data System (ADS)

This article draws on current information from scientific and educational sources to provide an extremely useful summary of problems and solutions when teaching about evolutionary processes in physics and astronomy. The article addresses the process of science as described in position statements from professional organizations and actual experiences of instructors in the classroom as described at an AAS panel discussion.

Bobrowsky, Matthew

2000-12-01

35

Image processing mini manual  

NASA Technical Reports Server (NTRS)

The intent is to provide an introduction to the image processing capabilities available at the Langley Research Center (LaRC) Central Scientific Computing Complex (CSCC). Various image processing software components are described. Information is given concerning the use of these components in the Data Visualization and Animation Laboratory at LaRC.

Matthews, Christine G.; Posenau, Mary-Anne; Leonard, Desiree M.; Avis, Elizabeth L.; Debure, Kelly R.; Stacy, Kathryn; Vonofenheim, Bill

1992-01-01

36

Image Processing Software  

NASA Technical Reports Server (NTRS)

To convert raw data into environmental products, the National Weather Service and other organizations use the Global 9000 image processing system marketed by Global Imaging, Inc. The company's GAE software package is an enhanced version of the TAE, developed by Goddard Space Flight Center to support remote sensing and image processing applications. The system can be operated in three modes and is combined with HP Apollo workstation hardware.

1992-01-01

37

Apple Image Processing Educator  

NASA Technical Reports Server (NTRS)

A software system design is proposed and demonstrated with pilot-project software. The system permits the Apple II microcomputer to be used for personalized computer-assisted instruction in the digital image processing of LANDSAT images. The programs provide data input, menu selection, graphic and hard-copy displays, and both general and detailed instructions. The pilot-project results are considered to be successful indicators of the capabilities and limits of microcomputers for digital image processing education.

Gunther, F. J.

1981-01-01

38

Image Processing Learning Resources  

NSDL National Science Digital Library

The Hypermedia Image Processing Reference (HIPR) offers a wealth of resources for users of image processing and an introduction to hypermedia (through use with Web browsers). HIPR was developed at the Department of Artificial Intelligence in the University of Edinburgh as computer-based tutorial materials for use in courses on image processing and machine vision. The material is available as a package that can easily be shared on a local area network and then made available at any suitably equipped computer connected to that network. The materials cover a wide range of image processing operations and are complemented by an extensive collection of actual digitized images, all organized for easy cross-referencing. Some features include a reference section with information on some of the most common classes of image-processing operations currently used, a section describing how each operation works, and various other instructional tools, such as Java demonstrations; interactive tableau where multiple operators can demonstrate sequences of operations; suggestions for appropriate use of operations; example input and output images for each operation; suggested student exercises; an encyclopedic glossary of common image processing concepts and terms; and other reference information. From the index, visitors can search on a particular topic covered in this website.

39

Image Processing System  

NASA Technical Reports Server (NTRS)

Mallinckrodt Institute of Radiology (MIR) is using a digital image processing system which employs NASA-developed technology. MIR's computer system is the largest radiology system in the world. It is used in diagnostic imaging. Blood vessels are injected with x-ray dye, and the images which are produced indicate whether arteries are hardened or blocked. A computer program developed by Jet Propulsion Laboratory known as Mini-VICAR/IBIS was supplied to MIR by COSMIC. The program provides the basis for developing the computer imaging routines for data processing, contrast enhancement and picture display.

1986-01-01

40

Clinical image processing engine  

NASA Astrophysics Data System (ADS)

Our group provides clinical image processing services to various institutes at NIH. We develop or adapt image processing programs for a variety of applications. However, each program requires a human operator to select a specific set of images and execute the program, as well as store the results appropriately for later use. To improve efficiency, we design a parallelized clinical image processing engine (CIPE) to streamline and parallelize our service. The engine takes DICOM images from a PACS server, sorts and distributes the images to different applications, multithreads the execution of applications, and collects results from the applications. The engine consists of four modules: a listener, a router, a job manager and a data manager. A template filter in XML format is defined to specify the image specification for each application. A MySQL database is created to store and manage the incoming DICOM images and application results. The engine achieves two important goals: reduce the amount of time and manpower required to process medical images, and reduce the turnaround time for responding. We tested our engine on three different applications with 12 datasets and demonstrated that the engine improved the efficiency dramatically.

Han, Wei; Yao, Jianhua; Chen, Jeremy; Summers, Ronald

2009-02-01
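The engine's flow (listener, router, job manager, data manager) can be sketched as a small dispatch loop. Only the module names come from the abstract; the filter shape, study records, and application callables below are invented for illustration:

```python
import queue
from concurrent.futures import ThreadPoolExecutor

def run_engine(studies, applications, workers=4):
    """Toy sketch of the engine's flow: a router matches each incoming
    study against a per-application filter, a job manager multithreads
    the executions, and the results are collected at the end."""
    jobs = queue.Queue()
    for study in studies:                       # "listener" receives studies
        for name, (accepts, run) in applications.items():
            if accepts(study):                  # "router" applies the filter
                jobs.put((name, study, run))
    def worker(job):
        name, study, run = job
        return name, study["id"], run(study)
    results = []
    with ThreadPoolExecutor(max_workers=workers) as pool:  # "job manager"
        pending = []
        while not jobs.empty():
            pending.append(pool.submit(worker, jobs.get()))
        for f in pending:
            results.append(f.result())          # "data manager" stores these
    return sorted(results)

# Two hypothetical applications: one accepts only CT studies, one accepts all.
apps = {
    "lung": (lambda s: s["modality"] == "CT", lambda s: "lung-ok"),
    "bone": (lambda s: True,                  lambda s: "bone-ok"),
}
out = run_engine([{"id": 1, "modality": "CT"},
                  {"id": 2, "modality": "MR"}], apps)
```

In the real system the filter is an XML template and the store is a MySQL database; here both are reduced to plain Python objects to show the routing and multithreading pattern.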

41

Teaching Academic Essay Writing: Accelerating the Process.  

ERIC Educational Resources Information Center

This paper describes a simple instructional approach which social science professors can use to teach the fundamentals of academic essay writing to their students. The paper discusses the evolution of this alternative writing approach--the instructional problems leading to its development as well as the process by which a solution was identified.…

Medina, Suzanne L.

42

Teaching Psychological Report Writing: Content and Process  

ERIC Educational Resources Information Center

The purpose of this article is to discuss the process of teaching graduate students in school psychology to write psychological reports that teachers and parents find readable and that guide intervention. The consensus from studies across four decades of research is that effective psychological reports connect to the client's context; have clear…

Wiener, Judith; Costaris, Laurie

2012-01-01

43

BAOlab: Image processing program  

NASA Astrophysics Data System (ADS)

BAOlab is an image processing package written in C that should run on nearly any UNIX system with just the standard C libraries. It reads and writes images in standard FITS format; 16- and 32-bit integer as well as 32-bit floating-point formats are supported. Multi-extension FITS files are currently not supported. Among its tools are ishape for size measurements of compact sources, mksynth for generating synthetic images consisting of a background signal including Poisson noise and a number of pointlike sources, imconvol for convolving two images (a “source” and a “kernel”) with each other using fast Fourier transforms (FFTs) and storing the output as a new image, and kfit2d for fitting a two-dimensional King model to an image.

Larsen, Søren S.

2014-03-01
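The operation imconvol computes can be illustrated directly. BAOlab itself works on FITS images in C and uses FFTs, which by the convolution theorem give the same result far faster for large kernels; this pure-Python sketch just shows what the convolution does:

```python
def convolve2d(src, kernel):
    """Direct 2-D convolution with zero-padded borders. The kernel is
    flipped (true convolution, as in FFT-based multiplication)."""
    sh, sw = len(src), len(src[0])
    kh, kw = len(kernel), len(kernel[0])
    cy, cx = kh // 2, kw // 2
    out = [[0.0] * sw for _ in range(sh)]
    for y in range(sh):
        for x in range(sw):
            acc = 0.0
            for j in range(kh):
                for i in range(kw):
                    sy, sx = y + j - cy, x + i - cx
                    if 0 <= sy < sh and 0 <= sx < sw:
                        acc += src[sy][sx] * kernel[kh - 1 - j][kw - 1 - i]
            out[y][x] = acc
    return out

# A single point source convolved with a kernel reproduces the kernel,
# which is how a PSF spreads a pointlike source in a synthetic image.
point = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
blur = [[0.0, 0.1, 0.0], [0.1, 0.6, 0.1], [0.0, 0.1, 0.0]]
result = convolve2d(point, blur)
```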

44

Teaching Process Design through Integrated Process Synthesis  

ERIC Educational Resources Information Center

The design course is an integral part of chemical engineering education. A novel approach to the design course was recently introduced at the University of the Witwatersrand, Johannesburg, South Africa. The course aimed to introduce students to systematic tools and techniques for setting and evaluating performance targets for processes, as well as…

Metzger, Matthew J.; Glasser, Benjamin J.; Patel, Bilal; Hildebrandt, Diane; Glasser, David

2012-01-01

45

Processing Of Binary Images  

NASA Astrophysics Data System (ADS)

An overview of the recent progress in the area of digital processing of binary images in the context of document processing is presented here. The topics covered include input scan, adaptive thresholding, halftoning, scaling and resolution conversion, data compression, character recognition, electronic mail, digital typography, and output scan. Emphasis has been placed on illustrating the basic principles rather than descriptions of a particular system. Recent technology advances and research in this field are also mentioned.

Hou, H. S.

1985-07-01
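Adaptive thresholding, one of the steps listed above, compares each pixel with a statistic of its neighborhood rather than with one global cutoff. A minimal sketch using the local mean (the window radius and offset are arbitrary choices, not taken from the paper):

```python
def adaptive_threshold(img, radius=1, offset=0.0):
    """Binarize by comparing each pixel with the mean of its
    (2*radius+1)^2 neighborhood; pixels above (mean - offset) map to 1.
    Windows are clipped at the image borders."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, count = 0.0, 0
            for j in range(max(0, y - radius), min(h, y + radius + 1)):
                for i in range(max(0, x - radius), min(w, x + radius + 1)):
                    total += img[j][i]
                    count += 1
            out[y][x] = 1 if img[y][x] > total / count - offset else 0
    return out

# Uneven illumination: a single global threshold would confuse the dim
# corner with the bright background, but local means separate them.
page = [[10, 10, 90],
        [10, 80, 90],
        [70, 90, 90]]
binary = adaptive_threshold(page, radius=1)
```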

46

Video image processing  

NASA Technical Reports Server (NTRS)

Current technology projections indicate a lack of availability of special-purpose computing for Space Station applications. Potential functions for video image special-purpose processing are being investigated, such as smoothing, enhancement, restoration and filtering, data compression, feature extraction, object detection and identification, pixel interpolation/extrapolation, spectral estimation and factorization, and vision synthesis. Architectural approaches are also being identified and a conceptual design generated. Computationally simple algorithms will be researched and their image/vision effectiveness determined. Suitable algorithms will be implemented in an overall architectural approach that provides image/vision processing at video rates that are flexible, selectable, and programmable. Information is given in the form of charts, diagrams and outlines.

Murray, N. D.

1985-01-01

47

Image processing and reconstruction  

SciTech Connect

This talk will examine some mathematical methods for image processing and the solution of underdetermined, linear inverse problems. The talk will have a tutorial flavor, mostly accessible to undergraduates, while still presenting research results. The primary approach is the use of optimization problems. We will find that relaxing the usual assumption of convexity will give us much better results.

Chartrand, Rick [Los Alamos National Laboratory]

2012-06-15

48

Image-Processing Program  

NASA Technical Reports Server (NTRS)

IMAGEP manipulates digital image data to effect various processing, analysis, and enhancement functions. It is a keyboard-driven program organized into nine subroutines. Within the subroutines are sub-subroutines, also selected via keyboard. The algorithm has possible scientific, industrial, and biomedical applications in the study of flows in materials, the analysis of steels and ores, and pathology, respectively.

Roth, D. J.; Hull, D. R.

1994-01-01

49

Image processing techniques for acoustic images  

NASA Astrophysics Data System (ADS)

The primary goal of this research is to test the effectiveness of various image processing techniques applied to acoustic images generated in MATLAB. The simulated acoustic images have the same characteristics as those generated by a computer model of a high-resolution imaging sonar. Edge detection and segmentation are the two image processing techniques discussed in this study. The two methods tested are a modified version of Kalman filtering, and median filtering.

Murphy, Brian P.

1991-06-01
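Median filtering, one of the two methods tested, can be sketched in a few lines. This toy version is not the study's MATLAB implementation, but it shows why the method suits sonar imagery: it suppresses impulsive speckle while preserving edges better than mean filtering:

```python
def median_filter3x3(img):
    """3x3 median filter: replace each pixel with the median of its
    neighborhood (edge pixels use the partial window that fits)."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            window = [img[j][i]
                      for j in range(max(0, y - 1), min(h, y + 2))
                      for i in range(max(0, x - 1), min(w, x + 2))]
            window.sort()
            out[y][x] = window[len(window) // 2]
    return out

# A single impulsive "speckle" spike in a flat region is removed
# entirely; a mean filter would instead smear it over the neighborhood.
noisy = [[5, 5, 5],
         [5, 99, 5],
         [5, 5, 5]]
clean = median_filter3x3(noisy)
```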

50

Digital Images and Video for Teaching Science  

NSDL National Science Digital Library

New technologies have revolutionized our ability to see and learn scientific phenomena. Reasonably priced digital still and video cameras have recently become popular additions to many classrooms, and teachers use them regularly to document student learning activities for newsletters, websites, and electronic slideshows. In addition, the advent of the internet has opened up a limitless supply of images and videos on every imaginable science topic. This free chapter will focus on how science teachers can take advantage of the digital images and video available to them on the web, as well as on how to engage students in capturing their own images and video in the process of learning science. It includes the Table of Contents, Preface, and the Index.

John C. Park

2008-01-01

51

Geology and image processing  

NASA Technical Reports Server (NTRS)

Digital image processing for geological applications will be integrated with geographic information systems and data base management systems. While multiband data sets from radar and multispectral scanners will make extreme demands on memory, bus and processor architectures, it is expected that array processors and VLSI/VHSIC dedicated function chips will allow the use of fast Fourier transform and classification algorithms. It is anticipated that, as processor power increases, the weakest link of a processing system will become the analyst who uses it. Human engineering of systems is therefore recommended for the most effective utilization of remotely sensed geologic data.

Daily, M.

1982-01-01

52

scikit-image: image processing in Python.  

PubMed

scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image. More information can be found on the project homepage, http://scikit-image.org. PMID:25024921

van der Walt, Stéfan; Schönberger, Johannes L; Nunez-Iglesias, Juan; Boulogne, François; Warner, Joshua D; Yager, Neil; Gouillart, Emmanuelle; Yu, Tony

2014-01-01

53

scikit-image: image processing in Python  

PubMed Central

scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications. It is released under the liberal Modified BSD open source license, provides a well-documented API in the Python programming language, and is developed by an active, international team of collaborators. In this paper we highlight the advantages of open source to achieve the goals of the scikit-image library, and we showcase several real-world image processing applications that use scikit-image. More information can be found on the project homepage, http://scikit-image.org. PMID:25024921

Schönberger, Johannes L.; Nunez-Iglesias, Juan; Boulogne, François; Warner, Joshua D.; Yager, Neil; Gouillart, Emmanuelle; Yu, Tony

2014-01-01

54

Teaching People and Machines to Enhance Images  

NASA Astrophysics Data System (ADS)

Procedural tasks such as following a recipe or editing an image are very common. They require a person to execute a sequence of operations (e.g. chop onions, or sharpen the image) in order to achieve the goal of the task. People commonly use step-by-step tutorials to learn these tasks. We focus on software tutorials, more specifically photo manipulation tutorials, and present a set of tools and techniques to help people learn, compare and automate photo manipulation procedures. We describe three different systems that are each designed to help with a different stage in acquiring procedural knowledge. Today, people primarily rely on hand-crafted tutorials in books and on websites to learn photo manipulation procedures. However, putting together a high quality step-by-step tutorial is a time-consuming process. As a consequence, many online tutorials are poorly designed which can lead to confusion and slow down the learning process. We present a demonstration-based system for automatically generating succinct step-by-step visual tutorials of photo manipulations. An author first demonstrates the manipulation using an instrumented version of GIMP (GNU Image Manipulation Program) that records all changes in interface and application state. From the example recording, our system automatically generates tutorials that illustrate the manipulation using images, text, and annotations. It leverages automated image labeling (recognition of facial features and outdoor scene structures in our implementation) to generate more precise text descriptions of many of the steps in the tutorials. A user study finds that our tutorials are effective for learning the steps of a procedure; users are 20-44% faster and make 60-95% fewer errors when using our tutorials than when using screencapture video tutorials or hand-designed tutorials. We also demonstrate a new interface that allows learners to navigate, explore and compare large collections (i.e. 
thousands) of photo manipulation tutorials based on their command-level structure. Sites such as tutorialized.com or good-tutorials.com collect tens of thousands of photo manipulation tutorials. These collections typically contain many different tutorials for the same task. For example, there are many different tutorials that describe how to recolor the hair of a person in an image. Learners often want to compare these tutorials to understand the different ways a task can be done. They may also want to identify common strategies that are used across tutorials for a variety of tasks. However, the large number of tutorials in these collections and their inconsistent formats can make it difficult for users to systematically explore and compare them. Current tutorial collections do not exploit the underlying command-level structure of tutorials, and to explore the collection users have to either page through long lists of tutorial titles or perform keyword searches on the natural language tutorial text. We present a new browsing interface to help learners navigate, explore and compare collections of photo manipulation tutorials based on their command-level structure. Our browser indexes tutorials by their commands, identifies common strategies within the tutorial collection, and highlights the similarities and differences between sets of tutorials that execute the same task. User feedback suggests that our interface is easy to understand and use, and that users find command-level browsing to be useful for exploring large tutorial collections. They strongly preferred to explore tutorial collections with our browser over keyword search. Finally, we present a framework for generating content-adaptive macros (programs) that can transfer complex photo manipulation procedures to new target images. After learners master a photo manipulation procedure, they often repeatedly apply it to multiple images. 
For example, they might routinely apply the same vignetting effect to all their photographs. This process can be very tedious especially for procedures that involve many steps. While image manipulation programs pro

Berthouzoz, Floraine Sara Martianne

55

Image processing and recognition for biological images  

PubMed Central

This paper reviews image processing and pattern recognition techniques, which will be useful to analyze bioimages. Although this paper does not provide their technical details, it will be possible to grasp their main tasks and typical tools to handle the tasks. Image processing is a large research area to improve the visibility of an input image and acquire some valuable information from it. As the main tasks of image processing, this paper introduces gray-level transformation, binarization, image filtering, image segmentation, visual object tracking, optical flow and image registration. Image pattern recognition is the technique to classify an input image into one of the predefined classes and also has a large research area. This paper overviews its two main modules, that is, feature extraction module and classification module. Throughout the paper, it will be emphasized that bioimage is a very difficult target for even state-of-the-art image processing and pattern recognition techniques due to noises, deformations, etc. This paper is expected to be one tutorial guide to bridge biology and image processing researchers for their further collaboration to tackle such a difficult target. PMID:23560739

Uchida, Seiichi

2013-01-01

56

Image Processing Diagnostics: Emphysema  

NASA Astrophysics Data System (ADS)

Currently the computerized tomography (CT) scan can detect emphysema sooner than traditional x-rays, but other tests are required to measure more accurately the amount of affected lung. CT scan images clearly show whether a patient has emphysema, but visual inspection alone cannot quantify the degree of the disease, as it appears merely as subtle, barely distinct, dark spots on the lung. Our goal is to create a software plug-in that interfaces with existing open-source medical imaging software to automate the process of accurately diagnosing and determining emphysema severity levels in patients. This will be accomplished by performing a number of statistical calculations using data taken from CT scan images of several patients representing a wide range of severity of the disease. These analyses include an examination of the deviation from a normal distribution curve to determine skewness, a commonly used statistical parameter. Our preliminary results show that this method of assessment appears to be more accurate and robust than currently utilized methods, which involve looking at percentages of radiodensities in the air passages of the lung.

McKenzie, Alex

2009-10-01
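The skewness statistic the record relies on is the normalized third central moment. A minimal sketch with made-up numbers (not the study's CT data) shows the idea: a long tail of low values, as in diseased lung histograms, drives the statistic negative:

```python
def skewness(values):
    """Sample skewness g1 = m3 / m2**1.5, i.e. the third central moment
    normalized by the variance raised to the 3/2 power. It is 0 for a
    symmetric distribution and negative when the long tail is on the
    low-value side."""
    n = len(values)
    mean = sum(values) / n
    m2 = sum((v - mean) ** 2 for v in values) / n
    m3 = sum((v - mean) ** 3 for v in values) / n
    return m3 / m2 ** 1.5

symmetric = [-2, -1, 0, 1, 2]
left_tailed = [-9, 0, 1, 1, 2]   # long tail of low values
s1 = skewness(symmetric)
s2 = skewness(left_tailed)
```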

57

Smart Image Enhancement Process  

NASA Technical Reports Server (NTRS)

Contrast and lightness measures are used to first classify the image as being one of non-turbid and turbid. If turbid, the original image is enhanced to generate a first enhanced image. If non-turbid, the original image is classified in terms of a merged contrast/lightness score based on the contrast and lightness measures. The non-turbid image is enhanced to generate a second enhanced image when a poor contrast/lightness score is associated therewith. When the second enhanced image has a poor contrast/lightness score associated therewith, this image is enhanced to generate a third enhanced image. A sharpness measure is computed for one image that is selected from (i) the non-turbid image, (ii) the first enhanced image, (iii) the second enhanced image when a good contrast/lightness score is associated therewith, and (iv) the third enhanced image. If the selected image is not-sharp, it is sharpened to generate a sharpened image. The final image is selected from the selected image and the sharpened image.

Jobson, Daniel J. (Inventor); Rahman, Zia-ur (Inventor); Woodell, Glenn A. (Inventor)

2012-01-01
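The decision cascade described in the abstract can be sketched as plain control flow. The actual turbidity, contrast/lightness, and sharpness measures are not given here, so they are passed in as caller-supplied functions; the toy stand-ins below (an "image" that is just a number) are invented purely to exercise the branching:

```python
def smart_enhance(img, is_turbid, cl_score_good, is_sharp, enhance, sharpen):
    """Sketch of the cascade: turbid images get one enhancement pass;
    non-turbid images get up to two passes while their
    contrast/lightness score stays poor; the selected image is then
    sharpened only if it fails the sharpness measure."""
    if is_turbid(img):
        selected = enhance(img)              # "first enhanced image"
    else:
        selected = img
        for _ in range(2):                   # second, then third enhanced image
            if cl_score_good(selected):
                break
            selected = enhance(selected)
    if not is_sharp(selected):
        selected = sharpen(selected)
    return selected

# Toy stand-ins: enhancement adds 1, sharpening adds 10.
result = smart_enhance(
    0,
    is_turbid=lambda v: False,
    cl_score_good=lambda v: v >= 2,   # poor until enhanced twice
    is_sharp=lambda v: v >= 10,
    enhance=lambda v: v + 1,
    sharpen=lambda v: v + 10,
)
```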

58

Processes and priorities in planning mathematics teaching  

NASA Astrophysics Data System (ADS)

Insights into teachers' planning of mathematics reported here were gathered as part of a broader project examining aspects of the implementation of the Australian curriculum in mathematics (and English). In particular, the responses of primary and secondary teachers to a survey of various aspects of decisions that inform their use of curriculum documents and assessment processes to plan their teaching are discussed. Teachers appear to have a clear idea of the overall topic as the focus of their planning, but they are less clear when asked to articulate the important ideas in that topic. While there is considerable diversity in the processes that teachers use for planning and in the ways that assessment information informs that planning, a consistent theme was that teachers make active decisions at all stages in the planning process. Teachers use a variety of assessment data in various ways, but these are not typically data extracted from external assessments. This research has important implications for those responsible for supporting teachers in the transition to the Australian Curriculum: Mathematics.

Sullivan, Peter; Clarke, David J.; Clarke, Doug M.; Farrell, Lesley; Gerrard, Jessica

2013-12-01

59

Filter for biomedical imaging and image processing  

NASA Astrophysics Data System (ADS)

Image filtering techniques have numerous potential applications in biomedical imaging and image processing. The design of filters largely depends on a priori knowledge about the type of noise corrupting the image, which makes the standard filters application specific. Widely used filters such as average, Gaussian, and Wiener reduce noisy artifacts by smoothing. However, this operation normally results in smoothing of the edges as well. On the other hand, sharpening filters enhance the high-frequency details, making the image nonsmooth. An integrated general approach to design a finite impulse response filter based on Hebbian learning is proposed for optimal image filtering. This algorithm exploits the interpixel correlation by updating the filter coefficients using Hebbian learning. The algorithm is made iterative for achieving efficient learning from the neighborhood pixels. It performs optimal smoothing of the noisy image by preserving high-frequency as well as low-frequency features. Evaluation results show that the proposed finite impulse response filter is robust under various noise distributions such as Gaussian noise, salt-and-pepper noise, and speckle noise. Furthermore, the proposed approach does not require any a priori knowledge about the type of noise. The number of unknown parameters is few, and most of these parameters are adaptively obtained from the processed image. The proposed filter is successfully applied for image reconstruction in a positron emission tomography (PET) imaging modality. The images reconstructed by the proposed algorithm are found to be superior in quality compared with those reconstructed by existing PET image reconstruction methodologies.

Mondal, Partha P.; Rajan, K.; Ahmad, Imteyaz

2006-07-01
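The abstract does not give the exact coefficient update, so the sketch below uses the generic Hebb rule (w += eta * y * x, output times input) with renormalization, applied to a 1-D FIR filter; treat the rule, tap count, and learning rate as assumptions for illustration only:

```python
def hebbian_filter(signal, taps=3, eta=0.01, passes=5):
    """Adapt FIR filter coefficients with a generic Hebbian update as
    the filter slides over the signal, then apply the adapted filter.
    Renormalizing after each update keeps unit DC gain."""
    w = [1.0 / taps] * taps              # start as a plain moving average
    half = taps // 2
    n = len(signal)
    for _ in range(passes):
        for c in range(half, n - half):
            x = signal[c - half:c + half + 1]
            y = sum(wi * xi for wi, xi in zip(w, x))   # filter output
            w = [wi + eta * y * xi for wi, xi in zip(w, x)]  # Hebb rule
            s = sum(w)
            w = [wi / s for wi in w]     # renormalize: sum(w) == 1
    out = list(signal)                   # borders copied unchanged
    for c in range(half, n - half):
        out[c] = sum(wi * xi for wi, xi in
                     zip(w, signal[c - half:c + half + 1]))
    return out, w

smoothed, weights = hebbian_filter([0, 0, 1, 5, 1, 0, 0])
```

The Hebbian term grows the weights of taps whose inputs correlate with the output, which is one way to read the paper's claim of exploiting interpixel correlation; the real filter's 2-D form and stopping criteria are not reproduced here.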

60

Enhancing the Teaching-Learning Process: A Knowledge Management Approach  

ERIC Educational Resources Information Center

Purpose: The purpose of this paper is to emphasize the need for knowledge management (KM) in the teaching-learning process in technical educational institutions (TEIs) in India, and to assert the impact of information technology (IT) based KM intervention in the teaching-learning process. Design/methodology/approach: The approach of the paper is…

Bhusry, Mamta; Ranjan, Jayanthi

2012-01-01

61

Processing Visual Images  

SciTech Connect

The back of the eye is lined by an extraordinary biological pixel detector, the retina. This neural network is able to extract vital information about the external visual world, and transmit this information in a timely manner to the brain. In this talk, Professor Litke will describe a system that has been implemented to study how the retina processes and encodes dynamic visual images. Based on techniques and expertise acquired in the development of silicon microstrip detectors for high energy physics experiments, this system can simultaneously record the extracellular electrical activity of hundreds of retinal output neurons. After presenting first results obtained with this system, Professor Litke will describe additional applications of this incredible technology.

Litke, Alan [UC Santa Cruz]

2006-03-27

62

Chemistry Graduate Teaching Assistants' Experiences in Academic Laboratories and Development of a Teaching Self-image  

NASA Astrophysics Data System (ADS)

Graduate teaching assistants (GTAs) play a prominent role in chemistry laboratory instruction at research based universities. They teach almost all undergraduate chemistry laboratory courses. However, their role in laboratory instruction has often been overlooked in educational research. Interest in chemistry GTAs has been placed on training and their perceived expectations, but less attention has been paid to their experiences or their potential benefits from teaching. This work was designed to investigate GTAs' experiences in and benefits from laboratory instructional environments. This dissertation includes three related studies on GTAs' experiences teaching in general chemistry laboratories. Qualitative methods were used for each study. First, phenomenological analysis was used to explore GTAs' experiences in an expository laboratory program. Post-teaching interviews were the primary data source. GTAs experiences were described in three dimensions: doing, knowing, and transferring. Gains available to GTAs revolved around general teaching skills. However, no gains specifically related to scientific development were found in this laboratory format. Case-study methods were used to explore and illustrate ways GTAs develop a GTA self-image---the way they see themselves as instructors. Two general chemistry laboratory programs that represent two very different instructional frameworks were chosen for the context of this study. The first program used a cooperative project-based approach. The second program used weekly, verification-type activities. End of the semester interviews were collected and served as the primary data source. A follow-up case study of a new cohort of GTAs in the cooperative problem-based laboratory was undertaken to investigate changes in GTAs' self-images over the course of one semester. Pre-semester and post-semester interviews served as the primary data source. 
Findings suggest that GTAs' construction of their self-image is shaped through the interaction of 1) prior experiences, 2) training, 3) beliefs about the nature of knowledge, 4) beliefs about the nature of laboratory work, and 5) involvement in the laboratory setting. Further, GTAs' self-images are malleable and susceptible to change through their laboratory teaching experiences. Overall, this dissertation contributes to chemistry education by providing a model useful for exploring GTAs' development of a self-image in laboratory teaching. This work may assist laboratory instructors and coordinators in reconsidering, when applicable, GTA training and support. It also holds considerable implications for how teaching experiences are conceptualized as part of the chemistry graduate education experience. Findings suggest that appropriate teaching experiences may contribute toward better preparing graduate students for their journey in becoming scientists.

Gatlin, Todd Adam

63

Medical Image Processing  

Microsoft Academic Search

Of many types of images around us, medical images have been a ubiquitous type since x-rays were first discovered in 1895. The recent years, especially after the introduction of computed tomography (CT) in 1972, have witnessed an explosion in the use of medical imaging and, consequently, in the volume of medical image data being produced. It is estimated that 40,000 terabytes of

Raj Shekhar; Vivek Walimbe; William Plishker

64

Cooperative processes in image segmentation  

NASA Technical Reports Server (NTRS)

Research into the role of cooperative, or relaxation, processes in image segmentation is surveyed. Cooperative processes can be employed at several levels of the segmentation process as a preprocessing enhancement step, during supervised or unsupervised pixel classification and, finally, for the interpretation of image segments based on segment properties and relations.

Davis, L. S.

1982-01-01

65

Topics in genomic image processing  

E-print Network

Topics in Genomic Image Processing. A dissertation by Jianping Hua, submitted to Texas A&M University in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Electrical Engineering. Approved as to style and content by: Zixiang Xiong (Chair of Advisory Committee). (December 2004) Jianping Hua, B.E., Tsinghua University, P.R. China; M.S., Tsinghua University, P.R. China. The image processing methodologies that have been actively studied and developed...

Hua, Jianping

2006-04-12

66

Voyager image processing at the Image Processing Laboratory  

NASA Technical Reports Server (NTRS)

This paper discusses new digital processing techniques as applied to the Voyager Imaging Subsystem and devised to explore atmospheric dynamics, spectral variations, and the morphology of Jupiter, Saturn and their satellites. Radiometric and geometric decalibration processes, the modulation transfer function, and processes to determine and remove photometric properties of the atmosphere and surface of Jupiter and its satellites are examined. It is shown that selected images can be processed into 'approach at constant longitude' time lapse movies which are useful in observing atmospheric changes of Jupiter. Photographs are included to illustrate various image processing techniques.

Jepsen, P. L.; Mosher, J. A.; Yagi, G. M.; Avis, C. C.; Lorre, J. J.; Garneau, G. W.

1980-01-01

67

Neighborhood graphs and image processing  

NASA Astrophysics Data System (ADS)

Many image processing and image segmentation problems, in two or three dimensions, can be addressed and solved with methods and tools developed within graph theory. Two types of graphs are studied: neighborhood graphs (with their duals, the Voronoi diagram and the Delaunay graph) and adjacency graphs. In this paper, we propose an image representation based on graphs: the graph object, together with methods for attributing and weighting the graph and methods to merge nodes, is defined within an object-oriented library of image processing operators. To demonstrate the merits of the approach, several applications dealing with 2D images are briefly described and discussed: we show that this change of representation can greatly simplify the tuning of image processing plans, and that complex sequences of image operators can be replaced by a single basic operation on graphs. As the results are promising, our library of graph operators is being extended to 3D images.
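The node-merging operation on a region adjacency graph that this abstract describes can be sketched in a few lines of Python. This is a minimal illustration, not the authors' library: region labels, mean intensities, the adjacency edges, and the merge threshold below are all hypothetical.

```python
# Merge adjacent regions of an image whose mean intensities are similar,
# using union-find over a region adjacency graph (illustrative data).

def merge_regions(means, edges, threshold):
    """Return a mapping from each region to its merged-group representative."""
    parent = {r: r for r in means}

    def find(r):
        # Path-halving union-find lookup.
        while parent[r] != r:
            parent[r] = parent[parent[r]]
            r = parent[r]
        return r

    for a, b in edges:
        if abs(means[a] - means[b]) <= threshold:
            ra, rb = find(a), find(b)
            if ra != rb:
                parent[ra] = rb
    return {r: find(r) for r in means}

# Four regions: A and B are dark, C and D are bright.
means = {"A": 10, "B": 12, "C": 200, "D": 205}
edges = [("A", "B"), ("B", "C"), ("C", "D")]
merged = merge_regions(means, edges, threshold=20)
print(merged["A"] == merged["B"])  # -> True  (similar, merged)
print(merged["B"] == merged["C"])  # -> False (strong edge, kept separate)
```

A chain of thresholding, labeling, and masking operators collapses here into one graph operation, which is the simplification the abstract points to.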

Angot, Francois; Clouard, Regis; Elmoataz, Abderrahim; Revenu, Marinette

1996-08-01

68

Factors Causing Demotivation in EFL Teaching Process: A Case Study  

ERIC Educational Resources Information Center

Studies have mainly focused on strategies to motivate teachers or the student-teacher motivation relationships rather than teacher demotivation in the English as a foreign language (EFL) teaching process, whereas no data have been found on the factors that cause teacher demotivation in the Turkish EFL teaching contexts at the elementary education…

Aydin, Selami

2012-01-01

69

Industrial Applications of Image Processing  

NASA Astrophysics Data System (ADS)

The recent advances in sensor quality and processing power provide us with excellent tools for designing more complex image processing and pattern recognition tasks. In this paper we review the existing applications of image processing and pattern recognition in industrial engineering. First we define the role of vision in an industrial setting. Then a discussion of some image processing techniques, feature extraction, object recognition and industrial robotic guidance is presented. Moreover, examples of implementations of such techniques in industry are presented. Such implementations include automated visual inspection, process control, part identification, and robot control. Finally, we present some conclusions regarding the investigated topics and directions for future investigation.

Ciora, Radu Adrian; Simion, Carmen Mihaela

2014-11-01

70

Fundamentals of Image Processing

E-print Network

Slide: Hoiem. Matching with filters. Goal: find a given template in the image. Method 2: SSD. Input: 1 - sqrt(SSD), then thresholded. h[m,n] = sum over k,l of (g[k,l] - f[m+k, n+l])^2. What is the potential downside of SSD? SSD is sensitive to average intensity.
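The SSD matching criterion these slides present can be sketched directly in Python. The toy image and template below are illustrative, not the course's actual examples.

```python
# Template matching by sum of squared differences (SSD):
# h[m, n] = sum over k, l of (g[k, l] - f[m + k, n + l])^2,
# where g is the template and f the image. The best match minimizes h.

def ssd(image, template, m, n):
    """SSD between the template and the image patch at offset (m, n)."""
    return sum(
        (template[k][l] - image[m + k][n + l]) ** 2
        for k in range(len(template))
        for l in range(len(template[0]))
    )

def match(image, template):
    """Return the (row, col) offset with the smallest SSD."""
    rows = len(image) - len(template) + 1
    cols = len(image[0]) - len(template[0]) + 1
    return min(
        ((m, n) for m in range(rows) for n in range(cols)),
        key=lambda pos: ssd(image, template, *pos),
    )

image = [
    [0, 0, 0, 0, 0],
    [0, 9, 8, 0, 0],
    [0, 7, 9, 0, 0],
    [0, 0, 0, 0, 0],
]
template = [[9, 8], [7, 9]]
print(match(image, template))  # -> (1, 1)
```

Adding a constant to every image pixel changes every SSD score, which is the "sensitive to average intensity" downside the slide asks about.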

Erdem, Erkut

71

Fundamentals of Image Processing  

E-print Network

Slide: Hoiem. Matching with filters. Goal: find a given template in the image. Method 2: SSD. Input: 1 - sqrt(SSD), then thresholded. h[m,n] = sum over k,l of (g[k,l] - f[m+k, n+l])^2. What is the potential downside of SSD? SSD is sensitive to average intensity.

Erdem, Erkut

72

Teaching Image Computation: From Computer Graphics to Computer Vision  

E-print Network

Teaching Image Computation: From Computer Graphics to Computer Vision Bruce A. Draper and J. Ross Beveridge Department of Computer Science Colorado State University Fort Collins, CO 80523 draper@cs.colostate.edu ross@cs.colostate.edu Keywords: Computer Vision, Computer Graphics, Education, Course Design

Draper, Bruce A.

73

Transmitting Musical Images: Using Music to Teach Public Speaking  

ERIC Educational Resources Information Center

Oral communication courses traditionally help students identify the proper arrangement of words to convey particular images. Although instructors emphasize the importance of speaking powerfully, they often struggle to find effective ways to teach their students "how" to deliver a message that resonates with the audience. Given the clear importance…

Cohen, Steven D.; Wei, Thomas E.

2010-01-01

74

Annotating radiological images for computer assisted communication and teaching  

Microsoft Academic Search

The simple but powerful idea of annotating radiological images by pointing out and naming can be exploited for multimedia communication and teaching purposes. In this paper we describe a family of Unix workstation-based demonstrators to implement different facets of this paradigm in a radiology environment. We provide technical details on each demonstrator, and discuss results from validation experiments.

Johan Van Cleynenbreugel; Erwin Bellon; Guy Marchal; Paul Suetens

1996-01-01

75

SWNT Imaging Using Multispectral Image Processing  

NASA Astrophysics Data System (ADS)

A flexible optical system was developed to image carbon single-wall nanotube (SWNT) photoluminescence using the multispectral capabilities of a typical CCD camcorder. The built-in Bayer filter of the CCD camera was utilized, using OpenCV C++ libraries for image processing, to decompose the image generated in a high magnification epifluorescence microscope setup into three pseudo-color channels. By carefully calibrating the filter beforehand, it was possible to extract spectral data from these channels and effectively isolate the SWNT signals from the background.
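The calibrate-then-decompose idea described here amounts to per-pixel linear unmixing: each (R, G, B) reading is a known mixture of a few spectral bands, and inverting the calibration recovers the band intensities. The sketch below uses a hypothetical 3x3 calibration matrix and synthetic pixel values, not the paper's measured filter responses.

```python
# Per-pixel spectral unmixing through a calibrated Bayer response.
# Column j of `calib` holds the (R, G, B) response to spectral band j
# (hypothetical numbers). Solving calib @ x = rgb recovers band strengths.

def det3(a):
    """Determinant of a 3x3 matrix given as nested lists."""
    return (a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1])
            - a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0])
            + a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]))

def unmix(calib, rgb):
    """Solve calib @ x = rgb for the band intensities x (Cramer's rule)."""
    d = det3(calib)
    bands = []
    for j in range(3):
        m = [row[:] for row in calib]
        for i in range(3):
            m[i][j] = rgb[i]  # replace column j with the observed pixel
        bands.append(det3(m) / d)
    return bands

calib = [[0.80, 0.10, 0.10],
         [0.10, 0.70, 0.20],
         [0.05, 0.15, 0.90]]  # hypothetical channel responses
# A pixel produced by band intensities (2, 1, 3):
rgb = [0.80 * 2 + 0.10 * 1 + 0.10 * 3,
       0.10 * 2 + 0.70 * 1 + 0.20 * 3,
       0.05 * 2 + 0.15 * 1 + 0.90 * 3]
print([round(x, 6) for x in unmix(calib, rgb)])  # -> [2.0, 1.0, 3.0]
```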

Blades, Michael; Pirbhai, Massooma; Rotkin, Slava V.

2012-02-01

76

Threshold Choice in Image Processing  

NASA Astrophysics Data System (ADS)

The present work proposes a simple procedure to evaluate the efficiency of a given image processing technique applied to weld radiographs. Specifically, special attention is given to the problem of how to evaluate whether or not a threshold value was suitably set in order to preserve relevant image features. In short, a quantitative comparison is made between the images of welds revealing defects before and after processing. Six weld radiographs of rather different image quality were chosen to be analyzed using this proposed procedure.
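The before/after comparison this procedure implies can be sketched as follows: threshold the processed image and measure what fraction of known defect pixels survives. This is a minimal illustration assuming a ground-truth defect mask; the function names and the tiny test image are hypothetical, not the paper's actual procedure.

```python
# Evaluate a threshold choice by how well it preserves known defect pixels.

def threshold(image, t):
    """Binarize: 1 where the pixel is at least t, else 0."""
    return [[1 if p >= t else 0 for p in row] for row in image]

def preserved_fraction(binary, defect_mask):
    """Fraction of defect pixels that survive thresholding."""
    kept = sum(b & d for brow, drow in zip(binary, defect_mask)
               for b, d in zip(brow, drow))
    total = sum(sum(row) for row in defect_mask)
    return kept / total

image = [[10, 200, 190],
         [12, 180, 15],
         [11, 13, 14]]
defects = [[0, 1, 1],
           [0, 1, 0],
           [0, 0, 0]]  # ground-truth defect locations
print(preserved_fraction(threshold(image, 100), defects))  # -> 1.0
print(preserved_fraction(threshold(image, 185), defects))  # 2 of 3 kept
```

A threshold of 100 keeps all three defect pixels, while 185 loses one, quantifying the "suitably set" question the abstract raises.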

Almeida, Rômulo M.; Rebello, Joao Marcos A.

2011-06-01

77

The Tao of Teaching: Romance and Process.  

ERIC Educational Resources Information Center

Because college teaching aims to elevate, not entertain, it must be nourished and appreciated as a pedagogical alchemy mixing facts and feelings, ideas and skills, history and mystery. The current debate on educational reform should focus more on quality of learning experience, and on how to create and sustain it. (MSE)

Schindler, Stefan

1991-01-01

78

Teaching Science: A Picture Perfect Process.  

ERIC Educational Resources Information Center

Explains how teachers can use graphs and graphing concepts when teaching art, language arts, history, social studies, and science. Students can graph the lifespans of the Ninja Turtles' Renaissance namesakes (Donatello, Michelangelo, Raphael, and Leonardo da Vinci) or world population growth. (MDM)

Leyden, Michael B.

1994-01-01

79

Astronomical Image Processing with Hadoop  

NASA Astrophysics Data System (ADS)

In the coming decade astronomical surveys of the sky will generate tens of terabytes of images and detect hundreds of millions of sources every night. With a requirement that these images be analyzed in real time to identify moving sources such as potentially hazardous asteroids or transient objects such as supernovae, these data streams present many computational challenges. In the commercial world, new techniques that utilize cloud computing have been developed to handle massive data streams. In this paper we describe how cloud computing, and in particular the map-reduce paradigm, can be used in astronomical data processing. We will focus on our experience implementing a scalable image-processing pipeline for the SDSS database using Hadoop (http://hadoop.apache.org). This multi-terabyte imaging dataset approximates future surveys such as those which will be conducted with the LSST. Our pipeline performs image coaddition in which multiple partially overlapping images are registered, integrated and stitched into a single overarching image. We will first present our initial implementation, then describe several critical optimizations that have enabled us to achieve high performance, and finally describe how we are incorporating a large in-house existing image processing library into our Hadoop system. The optimizations involve prefiltering of the input to remove irrelevant images from consideration, grouping individual FITS files into larger, more efficient indexed files, and a hybrid system in which a relational database is used to determine the input images relevant to the task. The incorporation of an existing image processing library, written in C++, presented difficult challenges since Hadoop is programmed primarily in Java. We will describe how we achieved this integration and the sophisticated image processing routines that were made feasible as a result. 
We will end by briefly describing the longer term goals of our work, namely detection and classification of transient objects and automated object classification.
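The coaddition step described above, averaging registered, partially overlapping images into one mosaic, can be sketched without any Hadoop machinery. The offsets are assumed already known (registration is omitted), and the tiny 2x2 "images" are illustrative.

```python
# Coadd partially overlapping images: accumulate per-pixel sums and counts,
# then average. Each image carries a (row, col) offset into the mosaic.

def coadd(images_with_offsets):
    sums, counts = {}, {}
    for image, (dr, dc) in images_with_offsets:
        for r, row in enumerate(image):
            for c, value in enumerate(row):
                key = (r + dr, c + dc)
                sums[key] = sums.get(key, 0) + value
                counts[key] = counts.get(key, 0) + 1
    # Average wherever one or more images contributed.
    return {k: sums[k] / counts[k] for k in sums}

a = [[1, 1], [1, 1]]
b = [[3, 3], [3, 3]]
mosaic = coadd([(a, (0, 0)), (b, (0, 1))])  # b shifted one column right
print(mosaic[(0, 1)])  # overlap of a and b -> 2.0
print(mosaic[(0, 0)])  # covered only by a -> 1.0
```

In the map-reduce formulation, the per-pixel (sum, count) pairs are exactly what mappers emit and reducers combine.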

Wiley, K.; Connolly, A.; Krughoff, S.; Gardner, J.; Balazinska, M.; Howe, B.; Kwon, Y.; Bu, Y.

2011-07-01

80

Image processing with LERBS  

NASA Astrophysics Data System (ADS)

We investigate the performance of image compression using a custom transform, related to the discrete cosine transform, where the shape of the waveform basis function can be adjusted via setting a shape parameter. A strategy for generating quantization tables for various shapes of the basis function, including the cosine function, is proposed.

Dalmo, Rune; Bratlie, Jostein; Zanaty, Peter

2014-12-01

81

Image processing: some challenging problems.  

PubMed Central

Image processing can be broadly defined as the manipulation of signals which are inherently multidimensional. The most common such signals are photographs and video sequences. The goals of processing or manipulation can be (i) compression for storage or transmission; (ii) enhancement or restoration; (iii) analysis, recognition, and understanding; or (iv) visualization for human observers. The use of image processing techniques has become almost ubiquitous; they find applications in such diverse areas as astronomy, archaeology, medicine, video communication, and electronic games. Nonetheless, many important problems in image processing remain unsolved. It is the goal of this paper to discuss some of these challenging problems. In Section I, we mention a number of outstanding problems. Then, in the remainder of this paper, we concentrate on one of them: very-low-bit-rate video compression. This is chosen because it involves almost all aspects of image processing. PMID:8234312

Huang, T S; Aizawa, K

1993-01-01

82

Image processing of aerodynamic data  

NASA Technical Reports Server (NTRS)

The use of digital image processing techniques in analyzing and evaluating aerodynamic data is discussed. An image processing system that converts images derived from digital data or from transparent film into black and white, full color, or false color pictures is described. Applications to black and white images of a model wing with a NACA 64-210 section in simulated rain and to computed flow properties for transonic flow past a NACA 0012 airfoil are presented. Image processing techniques are used to visualize the variations of water film thicknesses on the wing model and to illustrate the contours of computed Mach numbers for the flow past the NACA 0012 airfoil. Since the computed data for the NACA 0012 airfoil are available only at discrete spatial locations, an interpolation method is used to provide values of the Mach number over the entire field.
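The interpolation step mentioned at the end of this abstract can be sketched as bilinear interpolation between the four grid samples surrounding a query point. The 2x2 grid of "Mach numbers" below is illustrative, not the paper's data.

```python
# Bilinear interpolation of values known only at integer grid locations.

def bilinear(grid, x, y):
    """Interpolate grid[i][j] values at fractional coordinates (x, y)."""
    i, j = int(x), int(y)
    fx, fy = x - i, y - j  # fractional parts within the grid cell
    return ((1 - fx) * (1 - fy) * grid[i][j]
            + fx * (1 - fy) * grid[i + 1][j]
            + (1 - fx) * fy * grid[i][j + 1]
            + fx * fy * grid[i + 1][j + 1])

mach = [[0.5, 1.0],
        [1.5, 2.0]]  # illustrative Mach numbers at four grid points
print(bilinear(mach, 0.5, 0.5))  # midpoint of the four samples -> 1.25
print(bilinear(mach, 0.0, 0.0))  # exactly on a sample -> 0.5
```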

Faulcon, N. D.

1985-01-01

83

Fractional differentiation based image processing  

E-print Network

There are many resources useful for processing images, most of them freely available and quite friendly to use. In spite of this abundance of tools, a study of the processing methods is still worthy of effort. Here, we want to discuss the new possibilities arising from the use of fractional differential calculus. This calculus evolved within pure mathematics until 1920, when applied science started to use it. Only recently has fractional calculus been applied to image processing methods. As we shall see, fractional calculation is able to enhance the quality of images, with interesting possibilities in edge detection and image restoration. We also suggest fractional differentiation as a tool to reveal faint objects in astronomical images.
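One standard way to compute the fractional derivative this abstract applies is the Grünwald-Letnikov construction, whose weights follow a simple recurrence. The sketch below applies it to a single 1-D row of pixels (a step edge); the data and truncation length are illustrative, and this is not the paper's specific implementation.

```python
# Grünwald-Letnikov fractional differentiation of order alpha.
# Weights: w[0] = 1, w[k] = w[k-1] * (k - 1 - alpha) / k.
# For alpha = 1 the weights reduce to the ordinary first difference.

def gl_weights(alpha, n):
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (k - 1 - alpha) / k)
    return w

def frac_diff(signal, alpha, n_terms=5):
    """Truncated GL fractional derivative of a 1-D signal."""
    w = gl_weights(alpha, n_terms)
    out = []
    for i in range(len(signal)):
        out.append(sum(w[k] * signal[i - k]
                       for k in range(min(i + 1, n_terms))))
    return out

row = [0, 0, 0, 10, 10, 10]   # a step edge
print(gl_weights(1.0, 3))     # alpha = 1: ordinary first difference
print(frac_diff(row, 0.5))    # half-derivative: slowly decaying edge response
```

Unlike the integer derivative, the half-derivative response decays gradually after the edge, which is what gives fractional masks their mixed edge-enhancing and texture-preserving character.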

Sparavigna, Amelia Carolina

2009-01-01

84

Teaching with "Voix et Images de France"  

ERIC Educational Resources Information Center

A report on the classroom use of "Voix et Images de France," the French text prepared by the Centre de Recherche et d'Etude pour la Diffusion du Francais (CREDIF) at the Ecole Normale Superieure de Saint-Cloud in France. (FB)

Marrow, G. D.

1970-01-01

85

Using Classic and Contemporary Visual Images in Clinical Teaching.  

ERIC Educational Resources Information Center

The patient's body is an image that medical students and residents use to process information. The classic use of images, centered on the patient, is qualitative and personal. The contemporary use of images is quantitative and impersonal. The contemporary use of imaging includes radiographic, nuclear, scintigraphic, and nuclear magnetic resonance…

Edwards, Janine C.

1990-01-01

86

Science Sampler: Teaching—A reflective process  

NSDL National Science Digital Library

In this article, the authors describe how they used formative assessments to ferret out possible misconceptions among middle-school students in a unit about weather-related concepts. Because they teach fifth- and eighth-grade science, this assessment also gives them a chance to see how student understanding develops over the years. This year they used the formative assessment probe “Wet Jeans” from Uncovering Student Ideas in Science: 25 Formative Assessment Probes (Keeley, Eberle, and Farrin 2005).

Susan German

2009-07-01

87

Fuzzy image processing in sun sensor  

NASA Technical Reports Server (NTRS)

This paper describes how fuzzy image processing is implemented in the instrument. A comparison of fuzzy image processing with a more conventional image processing algorithm is provided, showing that the fuzzy approach yields better accuracy than conventional image processing.

Mobasser, S.; Liebe, C. C.; Howard, A.

2003-01-01

88

Neurological Foundation of Image Processing  

Microsoft Academic Search

A popular computational approach is to process visual images by dividing them into crisp (winner-takes-all) parts, in analogy to the properties of neurophysiological receptive fields. The problem with such a symbolic representation is that, in a real environment, object attributes are seldom invariant. We propose to divide images into rough parts using hierarchical, multi-valued processes. The bottom-up computation (BUC) is related to prediction

Andrzej W. Przybyszewski

2009-01-01

89

Law and Pop Culture: Teaching and Learning about Law Using Images from Popular Culture.  

ERIC Educational Resources Information Center

Believes that using popular culture images of law, lawyers, and the legal system is an effective way for teaching about real law. Offers examples of incorporating popular culture images when teaching about law. Includes suggestions for teaching activities, a mock trial based on Dr. Seuss's book "Yertle the Turtle," and additional resources. (CMK)

Joseph, Paul R.

2000-01-01

90

UCLA: Image Processing Research Group  

NSDL National Science Digital Library

The University of California at Los Angeles' Image Processing Research Group (or IMAGERS) focuses on mathematical modeling and computational techniques for image processing, with a particular emphasis on using Partial Differential Equations. The group is part of the UCLA Mathematics Department and the Institute of Pure and Applied Mathematics, but also collaborates with other departments on campus. The research areas described on their website include: Image Reconstruction, Inpainting, Computation, Segmentation & Active Contours, Level Set, Wavelets and Compression, Tomography, and Vision Modeling. They provide overviews of each research area along with links to full reports offering more in-depth explanations. Recent papers by IMAGERS' researchers are posted online and available to download free of charge.

91

Teaching Tools: Physics Downloads, Movies, and Images  

NSDL National Science Digital Library

The University of California Berkeley Physics Lecture Demonstrations Web site contains a page entitled Things of Interest: Downloads, Movies, and Images. The highlight of the site is the downloadable movies of physics experiments that should be very helpful for time and/or money constrained educators. The ten experiments include movies of a chladni disk, Jacob's ladder, dippy bird, a person rotating in a chair while holding dumbbells, a person in a chair with a rotating bicycle wheel, gyroscopic precession, a superconductor, a levitator, jumping rings, and a Tesla coil.

2002-01-01

92

Using peer review process for teaching introductory physics laboratory  

NASA Astrophysics Data System (ADS)

In recent years various peer instruction methods have been widely used and proven to be successful for teaching of introductory physics courses. Most of these methods refer to student interactions in small peer groups during lectures and/or discussion sessions. At the same time peer review process has been a standard part of any scientific enterprise and/or scientific publication process for more than a century. We have incorporated a method very similar to professional peer review into teaching of introductory physics laboratory. In this process students are asked to review anonymous copies of each other's lab reports and determine whether or not these reports are suitable for publication in a scientific journal. This technique has become an essential part of the Modular Curriculum Approach (MCA) teaching model designed and adopted at McMurry University. MCA has demonstrated significant gains in student learning.

Bykov, Tikhon

2012-03-01

93

Enhanced imaging process for xeroradiography  

NASA Astrophysics Data System (ADS)

An enhanced mammographic imaging process has been developed which is based on the conventional powder-toner selenium technology used in the Xerox 125/126 x-ray imaging system. The process is derived from improvements in the amorphous selenium x-ray photoconductor, the blue powder toner and the aerosol powder dispersion process. Comparisons of image quality and x-ray dose using the Xerox aluminum-wedge breast phantom and the Radiation Measurements Model 152D breast phantom have been made between the new Enhanced Process, the standard Xerox 125/126 System and screen-film at mammographic x-ray exposure parameters typical for each modality. When comparing the Enhanced Xeromammographic Process with the standard 125/126 System, a distinct advantage is seen for the Enhanced Process in equivalent mass detection and in superior fiber and speck detection. The broader imaging latitude of enhanced and standard Xeroradiography, in comparison to film, is illustrated in images made using the aluminum-wedge breast phantom.

Fender, William D.; Zanrosso, Eddie M.

1993-09-01

94

Computer processing of radiographic images  

NASA Technical Reports Server (NTRS)

In the past 20 years, a substantial amount of effort has been expended on the development of computer techniques for enhancement of X-ray images and for automated extraction of quantitative diagnostic information. The historical development of these methods is described. Illustrative examples are presented and factors influencing the relative success or failure of various techniques are discussed. Some examples of current research in radiographic image processing are described.

Selzer, R. H.

1984-01-01

95

Polar Caps: Image Processing Tutorial  

NSDL National Science Digital Library

This lesson plan is part of the Center for Educational Resources (CERES), a series of web-based astronomy lessons created by a team of master teachers, university faculty, and NASA researchers. In this tutorial, students learn to use computer image processing techniques to measure the size of Earth's polar ice caps and analyze various phenomena visible on planetary images. This lesson contains expected outcomes for students, materials, background information, follow-up questions, and assessment strategies.

George Tuthill

96

Teaching the Process...with Calvin & Hobbes.  

ERIC Educational Resources Information Center

Discusses the use of Calvin & Hobbes comic strips to point out steps in the Big6 information problem-solving process. Students can see what Calvin is doing wrong and then explain how to improve the process. (LRW)

Big6 Newsletter, 1998

1998-01-01

97

Image Processing: A State-of-the-Art Way to Learn Science.  

ERIC Educational Resources Information Center

Teachers participating in the Image Processing for Teaching project, begun at the University of Arizona's Lunar and Planetary Laboratory in 1989, find this technology ideal for encouraging student discovery, promoting constructivist science or math experiences, and adapting to classrooms. Because image processing is not a computerized text, it…

Raphael, Jacqueline; Greenberg, Richard

1995-01-01

98

Online Radiology Teaching Files and Medical Image Atlas and Database  

NSDL National Science Digital Library

Radiologists, students of radiology, and those who are interested in medical images in general will be delighted to hear about this website. Created by MedPix, the teaching files and medical images offered here can be browsed by organ system, and after selecting a particular system, visitors can learn more about each slide in great detail. Another feature of the site allows visitors to offer their own diagnosis before learning the particulars of each discrete medical case. Visitors can also consider the "Case of the Week" area, which features a case that has been selected by the peer review board that vets all of the images that find their way into their archive. Additionally, the site also features an advanced search engine for those users who know exactly what they want.

99

Fingerprint recognition using image processing  

NASA Astrophysics Data System (ADS)

Fingerprint recognition is concerned with the difficult task of efficiently matching the image of a person's fingerprint against the fingerprints present in a database. Fingerprint recognition is used in forensic science, where it helps identify criminals, and in authentication of a particular person, since a fingerprint is unique to each individual. The present paper describes fingerprint recognition methods using various edge detection techniques, and shows how to detect a fingerprint correctly using camera images. The described method does not require a special device; a simple camera can be used, so the technique can also be applied in a camera-equipped mobile phone. Factors affecting the process include poor illumination, noise, viewpoint dependence, climate factors, and imaging conditions. Because these factors must be taken into account, various image enhancement techniques are performed to increase image quality and remove noise. The paper describes the technique of applying contour tracking to the fingerprint image, then performing edge detection on the contour, and finally matching the edges inside the contour.
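Edge detection of the kind this abstract surveys is commonly done with Sobel operators; the sketch below computes a Sobel gradient magnitude on a tiny synthetic image. This is a generic illustration of one such edge detector, not the paper's specific pipeline, and the 5x5 test image is hypothetical.

```python
# Sobel edge detection: convolve with horizontal and vertical 3x3 kernels
# and combine the responses into a gradient magnitude.

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(image):
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for r in range(1, h - 1):          # skip the 1-pixel border
        for c in range(1, w - 1):
            gx = sum(SOBEL_X[i][j] * image[r - 1 + i][c - 1 + j]
                     for i in range(3) for j in range(3))
            gy = sum(SOBEL_Y[i][j] * image[r - 1 + i][c - 1 + j]
                     for i in range(3) for j in range(3))
            out[r][c] = (gx * gx + gy * gy) ** 0.5
    return out

# Vertical edge between a dark left region and a bright right region:
img = [[0, 0, 100, 100, 100] for _ in range(5)]
mag = sobel_magnitude(img)
print(mag[2][2], mag[2][3])  # strong response at the edge, none beyond it
```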

Dholay, Surekha; Mishra, Akassh A.

2011-06-01

100

Teaching Peer Review and the Process of Scientific Writing  

NSDL National Science Digital Library

Many undergraduate and graduate students understand neither the process of scientific writing nor the significance of peer review. In response, some instructors have created writing assignments that teach or mimic parts of the scientific publishing process. However, none fully reproduced peer review and revision of papers together with the writing and publishing process from research to final, accepted draft. In addition, most have been instituted at the graduate rather than undergraduate level. We present a detailed method for teaching undergraduate students the full scientific publishing process, including anonymous peer review, during the process of writing a "term paper." The result is a review article in the format for submission to a major scientific journal. This method has been implemented in the course Cell and Molecular Biology for Engineers at the University of Virginia. Use of this method resulted in improved grades, much higher quality in the final manuscript, greater objectivity in grading, and improved understanding of the importance of peer review.

Dr. William H. Guilford (University of Virginia Department of Biomedical Engineering)

2002-01-25

101

The processing of astronomical and space images  

Microsoft Academic Search

Results of the digital and analog processing of space and astronomical images at several Soviet observatories during 1966-1984 are presented. The processing techniques are described, and attention is given to the digital processing of Venus radar altimetry data, TV images of Mars, and images of the lunar surface. The coherent-optical processing of images of Jupiter, the Orion Nebula, and binary

A. Ia. Usikov; Iu. V. Kornienko; V. N. Dudinov; V. S. Tsvetkova; Iu. G. Shkuratov

1986-01-01

102

Concept Learning through Image Processing.  

ERIC Educational Resources Information Center

This study explored computer-based image processing as a study strategy for middle school students' science concept learning. Specifically, the research examined the effects of computer graphics generation on science concept learning and the impact of using computer graphics to show interrelationships among concepts during study time. The 87…

Cifuentes, Lauren; Yi-Chuan, Jane Hsieh

103

Using the Results of Teaching Evaluations to Improve Teaching: A Case Study of a New Systematic Process  

ERIC Educational Resources Information Center

This article describes a new 14-step process for using student evaluations of teaching to improve teaching. The new process includes examination of student evaluations in the context of instructor goals, student evaluations of the same course completed in prior terms, and evaluations of similar courses taught by other instructors. The process has…

Malouff, John M.; Reid, Jackie; Wilkes, Janelle; Emmerton, Ashley J.

2015-01-01

104

A Process-Oriented Framework for Acquiring Online Teaching Competencies  

ERIC Educational Resources Information Center

As a multidimensional construct which requires multiple competencies, online teaching is forcing universities to rethink traditional faculty roles and competencies. With this consideration in mind, this paper presents a process-oriented framework structured around three sequential non-linear phases: (1) "before": preparing, planning, and…

Abdous, M'hammed

2011-01-01

105

Interactive Virtual Client for Teaching Occupational Therapy Evaluative Processes  

E-print Network

Interactive Virtual Client for Teaching Occupational Therapy Evaluative Processes. Sharon Stansfield (sstansfield@ithaca.edu, +1 607-274-3630) and Marilyn Kane, Occupational Therapy Department, Ithaca College, Ithaca, NY, USA. A …-based educational tool for Occupational Therapy students learning client evaluation techniques. The software

Stansfield, Sharon

106

Modeling the Research Process: Alternative Approaches to Teaching Undergraduates  

NSDL National Science Digital Library

An Introduction to Research course was modified to better teach the process of scientific inquiry to students who were not engaged in research projects. Students completed several tasks involved in research projects, including making presentations in a journal club format, writing mock grant proposals, and working as teams to evaluate grant proposals.

Janet Cooper

2005-05-01

107

Chemical Process Design: An Integrated Teaching Approach.  

ERIC Educational Resources Information Center

Reviews a one-semester senior plant design/laboratory course, focusing on course structure, student projects, laboratory assignments, and course evaluation. Includes discussion of laboratory exercises related to process waste water and sludge. (SK)

Debelak, Kenneth A.; Roth, John A.

1982-01-01

108

Teaching the Dance Class: Strategies to Enhance Skill Acquisition, Mastery and Positive Self-Image  

ERIC Educational Resources Information Center

Effective teaching of dance skills is informed by a variety of theoretical frameworks and individual teaching and learning styles. The purpose of this paper is to present practical teaching strategies that enhance the mastery of skills and promote self-esteem, self-efficacy, and positive self-image. The predominant thinking and primary research…

Mainwaring, Lynda M.; Krasnow, Donna H.

2010-01-01

109

Teaching Basic Nanofabrication Processing Using Core Facilities  

NSDL National Science Digital Library

Nanofabrication is "manipulating and assembling materials atom by atom" and it is used to create materials, devices, and systems with new and unique properties. This involves the application of nanofabrication processing equipment, devices and materials. It behooves industrial technology programs to prepare students with skills necessary to supervise and manage the workforce of any organization that desire to implement nanofabrication technology. This paper addresses the educational aspects of research facilities and nano-research clusters for nanofabrication processing at Jackson State University (JSU).

Ejiwale, James

110

Image processing in optical astronomy  

NASA Technical Reports Server (NTRS)

Successful efforts to enhance optical-astronomy images through digital processing often exploit such 'weaknesses' of the image as the objects' near-symmetry, their preferred directionality, or a differentiation in spatial frequency between the object or objects and superimposed clutter. Attention is presently given to the calibration of a camera prior to astronomical data-acquisition, methods for the enhancement of faint surface brightness features, automated target detection and extraction techniques, the importance of the geometric transformations of digital imagery, the preparation of two-dimensional histograms, and the application of polarization.

Lorre, Jean J.

1988-01-01

111

Word Processing: Teach It or Ignore It?  

ERIC Educational Resources Information Center

Provides background information on what is involved in word processing (WP), reviews the attitudes of business people and educators, and offers business teachers methods and suggestions for incorporating WP training into their business programs and for enlisting the help of organizations, businesses, and educational institutions. (TA)

Addams, H. Lon; Baker, William H.

1977-01-01

112

Concentration by laser image processing  

Microsoft Academic Search

This research is intended to increase knowledge of liquid/liquid two-phase pipeline flow systems. The feasibility of a new measurement scheme using digital image processing has been demonstrated for liquid/liquid turbulent flow. New correlations will be developed for drop size distribution and concentration as a function of stream velocity and average concentration. The effect of pipeline bends on mixing will be

Hanzevack

1986-01-01

113

Turbine Blade Image Processing System  

NASA Astrophysics Data System (ADS)

A vision system has been developed at North Carolina State University to identify the orientation and three dimensional location of steam turbine blades that are stacked in an industrial A-frame cart. The system uses a controlled light source for structured illumination and a single camera to extract the information required by the image processing software to calculate the position and orientation of a turbine blade in real time.

Page, Neal S.; Snyder, Wesley E.; Rajala, Sarah A.

1983-10-01

114

Review of image processing fundamentals  

NASA Technical Reports Server (NTRS)

Image processing through convolution, transform coding, spatial frequency alterations, sampling, and interpolation are considered. It is postulated that convolution in one domain (real or frequency) is equivalent to multiplication in the other (frequency or real), and that the relative amplitudes of the Fourier components must be retained to reproduce any waveshape. It is suggested that all digital systems may be considered equivalent, with a frequency content approximately at the Nyquist limit, and with a Gaussian frequency response. An optimized cubic version of the interpolation continuum image is derived as a set of cubic splines. Pixel replication has been employed to enlarge the visible area of digital samples; however, suitable elimination of the extraneous high frequencies involved in the visible edges, by defocusing, is necessary to allow the underlying object represented by the data values to be seen.
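The postulate that convolution in one domain is equivalent to multiplication in the other can be checked numerically. The following pure-Python sketch, with naive DFT helpers that are illustrative rather than taken from the report, compares a direct circular convolution against the frequency-domain product:

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform of a sequence."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse DFT."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def circular_convolve(x, h):
    """Direct circular convolution in the real (spatial) domain."""
    N = len(x)
    return [sum(x[m] * h[(n - m) % N] for m in range(N)) for n in range(N)]

x = [1.0, 2.0, 3.0, 4.0]
h = [1.0, 0.0, 0.5, 0.0]

direct = circular_convolve(x, h)
# Multiplication in the frequency domain, then transform back:
via_fft = [c.real for c in idft([a * b for a, b in zip(dft(x), dft(h))])]

print([round(v, 6) for v in direct])   # matches via_fft to rounding error
```

The two results agree to floating-point precision, which is the content of the convolution theorem the abstract states.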

Billingsley, F. C.

1985-01-01

115

Multicomputer processing for medical imaging  

NASA Astrophysics Data System (ADS)

Medical imaging applications have growing processing requirements, and scalable multicomputers are needed to support these applications. Scalability -- performance speedup equal to the increased number of processors -- is necessary for a cost-effective multicomputer. We performed tests of performance and scalability on one through 16 processors on a RACE multicomputer using Parallel Application system (PAS) software. Data transfer and synchronization mechanisms introduced a minimum of overhead to the multicomputer's performance. We implemented magnetic resonance (MR) image reconstruction and multiplanar reformatting (MPR) algorithms, and demonstrated high scalability; the 16- processor configuration was 80% to 90% efficient, and the smaller configurations had higher efficiencies. Our experience is that PAS is a robust and high-productivity tool for developing scalable multicomputer applications.

Goddard, Iain; Greene, Jonathon; Bouzas, Brian

1995-04-01

116

Image Inpainting with Gaussian Processes Alfredo Kalaitzis  

E-print Network

Image Inpainting with Gaussian Processes (The University of Edinburgh): applies Gaussian processes to image inpainting, a process which reconstructs lost or deteriorated parts of an image based on the surrounding image data.

Rattray, Magnus

117

Image processing technologies in intelligent transportation systems  

NASA Astrophysics Data System (ADS)

Nowadays in intelligent transportation systems (ITS), information gathering depends heavily on visual information. Image processing technologies (IPT) play a key role. After a brief introduction of ITS, IPT is illustrated from three aspects: image sensor, image processing methods and image processing system. Among many applications for image processing in ITS, the paper presents a roadside example, licence plate recognition (LPR). Attention is centered around two aspects of LPR: plate character isolation and plate character recognition. Lastly, the paper indicates the trends of image-processing technologies in ITS.

Wang, Zhengyou; Liu, Jilin

2003-09-01

118

Radiology image orientation processing for workstation display  

NASA Astrophysics Data System (ADS)

Radiology images are acquired electronically using phosphor plates that are read in Computed Radiology (CR) readers. An automated radiology image orientation processor (RIOP) for determining the orientation for chest images and for abdomen images has been devised. In addition, the chest images are differentiated as front (AP or PA) or side (Lateral). Using the processing scheme outlined, hospitals will improve the efficiency of quality assurance (QA) technicians who orient images and prepare the images for presentation to the radiologists.

Chang, Chung-Fu; Hu, Kermit; Wilson, Dennis L.

1998-06-01

119

SIP : a smart digital image processing library  

E-print Network

The Smart Image Processing (SIP) library was developed to provide automated real-time digital image processing functions on camera phones with integer microprocessors. Many of the functions are not available on commercial ...

Zhou, Mengyao

2005-01-01

120

Multispectral Image Processing for Plants  

NASA Technical Reports Server (NTRS)

The development of a machine vision system to monitor plant growth and health is one of three essential steps towards establishing an intelligent system capable of accurately assessing the state of a controlled ecological life support system for long-term space travel. Besides a network of sensors, simulators are needed to predict plant features, and artificial intelligence algorithms are needed to determine the state of a plant based life support system. Multispectral machine vision and image processing can be used to sense plant features, including health and nutritional status.

Miles, Gaines E.

1991-01-01

121

Using NASA Space Imaging to Teach Earth and Sun Topics in Professional Development Courses for In-Service Teachers  

NASA Astrophysics Data System (ADS)

several PD courses using NASA imaging technology. It includes various ways to study selected topics in physics and astronomy. We use NASA images to develop lesson plans and EPO materials for PreK-8 grades. Topics are space-based and vary from measurements and magnetism on Earth to those for our Sun. In addition we cover topics on ecosystem structure, biomass, and water on Earth. Hands-on experiments, computer simulations, analysis of real-time NASA data, and vigorous seminar discussions are blended in an inquiry-driven curriculum to instill a confident understanding of basic physical science and modern, effective methods for teaching it. The course also demonstrates how scientific thinking and hands-on activities can be implemented in the classroom. This course is designed to provide the non-science student a confident understanding of basic physical science and modern, effective methods for teaching it. Most of the topics were selected using National Science Standards and National Mathematics Standards to be addressed in grades PreK-8. The course focuses on helping in several areas of teaching: 1) building knowledge of scientific concepts and processes; 2) understanding the measurable attributes of objects and the units and methods of measurement; 3) conducting data analysis (collecting, organizing, and presenting scientific data, and predicting results); 4) using hands-on approaches to teach science; and 5) becoming familiar with Internet science teaching resources. Here we share our experiences and the challenges we faced teaching this course.

Verner, E.; Bruhweiler, F. C.; Long, T.; Edwards, S.; Ofman, L.; Brosius, J. W.; Holman, G.; St Cyr, O. C.; Krotkov, N. A.; Fatoyinbo Agueh, T.

2012-12-01

122

Concurrent Image Processing Executive (CIPE)  

NASA Technical Reports Server (NTRS)

The design and implementation of a Concurrent Image Processing Executive (CIPE), which is intended to become the support system software for a prototype high performance science analysis workstation, are discussed. The target machine for this software is a JPL/Caltech Mark IIIfp Hypercube hosted by either a MASSCOMP 5600 or a Sun-3 or Sun-4 workstation; however, the design will accommodate other concurrent machines of similar architecture, i.e., local memory, multiple-instruction-multiple-data (MIMD) machines. The CIPE system provides both a multimode user interface and an applications programmer interface, and has been designed around four loosely coupled modules: (1) user interface, (2) host-resident executive, (3) hypercube-resident executive, and (4) application functions. The loose coupling between modules allows modification of a particular module without significantly affecting the other modules in the system. In order to enhance hypercube memory utilization and to allow expansion of image processing capabilities, a specialized program management method, incremental loading, was devised. To minimize data transfer between host and hypercube, a data management method which distributes, redistributes, and tracks data set information was implemented.

Lee, Meemong; Cooper, Gregory T.; Groom, Steven L.; Mazer, Alan S.; Williams, Winifred I.

1988-01-01

123

Signal processing in medical imaging and image-guided intervention  

Microsoft Academic Search

This is an introduction to a special session of ICASSP devoted to signal processing techniques in medical imaging and image analysis that consists of this introduction and 5 research presentations, each addressing one aspect of the medical imaging field in which signal processing plays an irreplaceable role. The topics cover a broad spectrum of medical imaging problems from image acquisition to image analysis to population-based anatomical modeling. The focus is

Milan Sonka

2011-01-01

124

Teaching Comprehension Processes Using Magazines, Paperback Novels, and Content Area Textbooks.  

ERIC Educational Resources Information Center

Argues that teaching students the process of comprehension and ways to improve their own comprehension helps to develop skills in reluctant or poor readers. Offers teaching ideas that involve a variety of reading materials. (FL)

Nist, Sherrie L.; And Others

1983-01-01

125

Teaching the NIATx Model of Process Improvement as an Evidence-Based Process  

ERIC Educational Resources Information Center

Process Improvement (PI) is an approach for helping organizations to identify and resolve inefficient and ineffective processes through problem solving and pilot testing change. Use of PI in improving client access, retention and outcomes in addiction treatment is on the rise through the teaching of the Network for the Improvement of Addiction…

Evans, Alyson C.; Rieckmann, Traci; Fitzgerald, Maureen M.; Gustafson, David H.

2007-01-01

126

Teaching Anatomy and Physiology Using Computer-Based, Stereoscopic Images  

NSDL National Science Digital Library

Learning real three-dimensional (3D) anatomy for the first time can be challenging. Two-dimensional drawings and plastic models tend to over-simplify the complexity of anatomy. The approach described uses stereoscopy to create 3D images of the process of cadaver dissection and to demonstrate the underlying anatomy related to the speech mechanisms.

David Kuehn

2007-01-01

127

Teaching Anatomy and Physiology Using Computer-Based, Stereoscopic Images  

ERIC Educational Resources Information Center

Learning real three-dimensional (3D) anatomy for the first time can be challenging. Two-dimensional drawings and plastic models tend to over-simplify the complexity of anatomy. The approach described uses stereoscopy to create 3D images of the process of cadaver dissection and to demonstrate the underlying anatomy related to the speech mechanisms.…

Perry, Jamie; Kuehn, David; Langlois, Rick

2007-01-01

128

Teaching the Dance Class: Strategies to Enhance Skill Acquisition, Mastery and Positive Self-Image  

Microsoft Academic Search

Effective teaching of dance skills is informed by a variety of theoretical frameworks and individual teaching and learning styles. The purpose of this paper is to present practical teaching strategies that enhance the mastery of skills and promote self-esteem, self-efficacy, and positive self-image. The predominant thinking and primary research findings from dance pedagogy, education, physical education and sport pedagogy, and

Lynda M. Mainwaring; Donna H. Krasnow

2010-01-01

129

Video and Image Processing in Multimedia Systems (Video Processing)  

E-print Network

Course description for COT 6930, Video and Image Processing in Multimedia Systems (Video Processing); instructor: Borko Furht. Topics include content-based image and video indexing and retrieval techniques, video scene analysis and video segmentation, and video processing using compressed data.

Furht, Borko

130

Digital Image Processing Instructor: Namrata Vaswani  

E-print Network

Digital Image Processing. Instructor: Namrata Vaswani (http://www.ece.iastate.edu/~namrata). The course notes observe that standard signal models do not always model image noise well; that there are no standard statistical models to categorize images, since every problem is different; and that the projection of a 3D scene onto 2D images can involve occlusions, so many problems are ill-posed.

Vaswani, Namrata

131

Combining image-processing and image compression schemes  

NASA Technical Reports Server (NTRS)

An investigation into combining image-processing schemes, specifically an image enhancement scheme, with existing compression schemes is discussed. Results are presented on the pyramid coding scheme, the subband coding scheme, and progressive transmission. Encouraging results are demonstrated for the combination of image enhancement and pyramid image coding schemes, especially at low bit rates. Adding the enhancement scheme to progressive image transmission allows enhanced visual perception at low resolutions. In addition, further processing of the transmitted images, such as edge detection, can gain from the added image resolution via the enhancement.

Greenspan, H.; Lee, M.-C.

1995-01-01

132

Unified Digital Image Display And Processing System  

NASA Astrophysics Data System (ADS)

Our institution, like many others, is faced with a proliferation of medical imaging techniques. Many of these methods give rise to digital images (e.g. digital radiography, computerized tomography (CT), nuclear medicine and ultrasound). We feel that a unified, digital system approach to image management (storage, transmission and retrieval), image processing and image display will help in integrating these new modalities into the present diagnostic radiology operations. Future techniques are likely to employ digital images, so such a system could readily be expanded to include other image sources. We presently have the core of such a system. We can both view and process digital nuclear medicine (conventional gamma camera) images, positron emission tomography (PET) and CT images on a single system. Images from our recently installed digital radiographic unit can be added. Our paper describes our present system, explains the rationale for its configuration, and describes the directions in which it will expand.

Horii, Steven C.; Maguire, Gerald Q.; Noz, Marilyn E.; Schimpf, James H.

1981-11-01

133

Pyramid Methods in Image Processing  

Microsoft Academic Search

The data structure used to represent image information can be critical to the successful completion of an image processing task. One structure that has attracted considerable attention is the image pyramid. This consists of a set of lowpass or bandpass copies of an image, each representing pattern information of a different scale. Here we describe a variety of pyramid methods that we have developed for image data compression, enhancement, analysis
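The pyramid structure this abstract describes, a set of lowpass copies each at a coarser scale, can be sketched in a few lines. This 1D, pure-Python version using a [1, 2, 1]/4 smoothing kernel is an illustration of the idea, not the authors' implementation:

```python
def reduce_once(signal):
    """One pyramid level: lowpass with a [1, 2, 1]/4 kernel, then downsample by 2."""
    n = len(signal)
    blurred = [
        (signal[max(i - 1, 0)] + 2 * signal[i] + signal[min(i + 1, n - 1)]) / 4.0
        for i in range(n)
    ]
    return blurred[::2]

def gaussian_pyramid(signal, levels):
    """Return the signal plus successively lowpassed, half-length copies."""
    pyramid = [signal]
    for _ in range(levels):
        pyramid.append(reduce_once(pyramid[-1]))
    return pyramid

row = [0.0] * 8 + [1.0] * 8           # a step edge, 16 samples
pyr = gaussian_pyramid(row, 3)
print([len(level) for level in pyr])  # lengths halve at each level
```

Differences between adjacent levels (after upsampling) give the bandpass copies the abstract also mentions.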

E. H. Adelson; C. H. Anderson; J. R. Bergen; P. J. Burt; J. M. Ogden

1984-01-01

134

Computation-Efficient Image Signal Processing for CMOS Image Sensors  

Microsoft Academic Search

This paper presents an efficient image signal processing method proposed for CMOS image sensors. In the proposed method, the color correction is moved to the front of the color demosaic to reduce the arithmetic complexity required in the color correction to one third, and a new color correction method is suggested to achieve good images with less data. In spite
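One way to read the claimed one-third reduction: applied before demosaicing, the 3x3 color-correction matrix touches the single known color sample per pixel instead of three interpolated channels. A back-of-the-envelope sketch (the operation counts are an illustrative assumption, not taken from the paper):

```python
def multiplies_after_demosaic(pixels):
    # Full RGB per pixel: a 3x3 matrix applied to 3 channels -> 9 multiplies/pixel.
    return pixels * 9

def multiplies_before_demosaic(pixels):
    # Raw Bayer data: one known color sample per pixel, so only the matrix
    # column for that channel is needed -> 3 multiplies/pixel.
    return pixels * 3

n = 640 * 480
ratio = multiplies_before_demosaic(n) / multiplies_after_demosaic(n)
print(ratio)  # one third of the arithmetic
```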

Ki-Seok Kwon; Eun-Joo Bae; Seokho Lee; Jinook Song; In-Cheol Park

135

Teaching Learning Collaborative: A Process for Supporting Professional Learning Communities  

NSDL National Science Digital Library

The teaching learning collaborative (TLC) is a unique professional development strategy that engages groups of teachers in collaborative planning and teaching of a science lesson, coaching and mentoring, and examining student work. In this chapter, the au

Jo Topps

2009-04-02

136

Programmable remapper for image processing  

NASA Technical Reports Server (NTRS)

A video-rate coordinate remapper includes a memory for storing a plurality of transformations on look-up tables for remapping input images from one coordinate system to another. Such transformations are operator selectable. The remapper includes a collective processor by which certain input pixels of an input image are transformed to a portion of the output image in a many-to-one relationship. The remapper includes an interpolative processor by which the remaining input pixels of the input image are transformed to another portion of the output image in a one-to-many relationship. The invention includes certain specific transforms for creating output images useful for certain defects of visually impaired people. The invention also includes means for shifting input pixels and means for scrolling the output matrix.
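The many-to-one "collective" transformation can be sketched as a lookup-table remap that accumulates input pixels into output cells. This minimal version is a guess at the general idea only, not the patented hardware design:

```python
def remap(image, lut, out_h, out_w):
    """Collective remap: lut maps each input (row, col) to an output (row, col);
    input pixels landing on the same output cell are accumulated (many-to-one)."""
    out = [[0] * out_w for _ in range(out_h)]
    for r, row in enumerate(image):
        for c, value in enumerate(row):
            orow, ocol = lut[(r, c)]
            out[orow][ocol] += value
    return out

# A 2x4 image folded onto a 2x2 output: columns 0,1 and 2,3 merge pairwise.
img = [[1, 2, 3, 4],
       [5, 6, 7, 8]]
lut = {(r, c): (r, c // 2) for r in range(2) for c in range(4)}
print(remap(img, lut, 2, 2))  # -> [[3, 7], [11, 15]]
```

Swapping in a different `lut` changes the coordinate system, which mirrors the operator-selectable transformations the abstract describes; the complementary one-to-many (interpolative) pass would fill output pixels no input maps onto.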

Juday, Richard D. (inventor); Sampsell, Jeffrey B. (inventor)

1991-01-01

137

The Graphic Novel Classroom: POWerful Teaching and Learning with Images  

ERIC Educational Resources Information Center

Could you use a superhero to teach reading, writing, critical thinking, and problem solving? While seeking the answer, secondary language arts teacher Maureen Bakis discovered a powerful pedagogy that teaches those skills and more. The amazingly successful results prompted her to write this practical guide that shows middle and high school…

Bakis, Maureen

2011-01-01

138

Survey: Interpolation Methods in Medical Image Processing  

Microsoft Academic Search

Image interpolation techniques often are required in medical imaging for image generation (e.g., discrete back projection for inverse Radon transform) and processing such as compression or resampling. Since the ideal interpolation function is spatially unlimited, several interpolation kernels of finite size have been introduced. This paper compares 1) truncated and windowed sinc; 2) nearest neighbor; 3) linear; 4) quadratic;
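Two of the finite-size kernels compared in the survey, nearest neighbor and linear, can be sketched for the 1D case. The function below is illustrative, not the paper's implementation:

```python
def resample(samples, factor, mode="linear"):
    """Upsample a 1D signal by an integer factor using a finite kernel."""
    n = len(samples)
    out = []
    for i in range((n - 1) * factor + 1):
        x = i / factor                    # position in input coordinates
        left = int(x)
        frac = x - left
        if mode == "nearest":
            out.append(samples[left if frac < 0.5 else min(left + 1, n - 1)])
        else:  # linear: weight the two neighbors by distance
            right = min(left + 1, n - 1)
            out.append((1 - frac) * samples[left] + frac * samples[right])
    return out

sig = [0.0, 1.0, 0.0]
print(resample(sig, 2, "nearest"))  # -> [0.0, 1.0, 1.0, 0.0, 0.0]
print(resample(sig, 2, "linear"))   # -> [0.0, 0.5, 1.0, 0.5, 0.0]
```

Nearest neighbor copies the closest sample (blocky but cheap), while linear interpolation averages the two neighbors, which illustrates the accuracy/cost trade-off the survey quantifies for larger kernels.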

Thomas Martin Lehmann; Claudia Gönner; Klaus Spitzer

1999-01-01

139

A residual hybrid encoder for image processing  

E-print Network

or transmission. A block diagram of an image processor (mapper, quantizer, coder) is shown in Fig. 1 [4]. The existing methods of image compression fit into two basic categories: 1) spatial domain, where Pulse...

Beck, John Andrew

1980-01-01

140

Parallel processing considerations for image recognition tasks  

NASA Astrophysics Data System (ADS)

Many image recognition tasks are well-suited to parallel processing. The most obvious example is that many imaging tasks require the analysis of multiple images. From this standpoint, then, parallel processing need be no more complicated than assigning individual images to individual processors. However, there are three less trivial categories of parallel processing that will be considered in this paper: parallel processing (1) by task; (2) by image region; and (3) by meta-algorithm. Parallel processing by task allows the assignment of multiple workflows (as diverse as optical character recognition [OCR], document classification, and barcode reading) to parallel pipelines. This can substantially decrease time to completion for the document tasks. For this approach, each parallel pipeline is generally performing a different task. Parallel processing by image region allows a larger imaging task to be sub-divided into a set of parallel pipelines, each performing the same task but on a different data set. This type of image analysis is readily addressed by a map-reduce approach. Examples include document skew detection and multiple face detection and tracking. Finally, parallel processing by meta-algorithm allows different algorithms to be deployed on the same image simultaneously. This approach may result in improved accuracy.
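Parallel processing by image region, the same task applied to different tiles, can be sketched with Python's standard library; the thresholding task and horizontal-band split below are hypothetical stand-ins for the paper's workloads:

```python
from concurrent.futures import ThreadPoolExecutor

def threshold_rows(rows, cutoff=128):
    """The per-region task: binarize a horizontal band of the image."""
    return [[255 if p >= cutoff else 0 for p in row] for row in rows]

def process_by_region(image, n_regions=4):
    """Split the image into horizontal bands, process the bands in parallel,
    and reassemble the result in order (a map step followed by a reduce)."""
    step = max(1, len(image) // n_regions)
    bands = [image[i:i + step] for i in range(0, len(image), step)]
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(threshold_rows, bands))
    return [row for band in results for row in band]

img = [[10, 200], [130, 90], [255, 0], [128, 127]]
print(process_by_region(img, n_regions=2))
```

Because each band is independent, the result equals the sequential output regardless of how many workers run, which is what makes region-parallel tasks a natural fit for the map-reduce approach the abstract mentions.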

Simske, Steven J.

2011-01-01

141

Quantitative image processing in fluid mechanics  

NASA Technical Reports Server (NTRS)

The current status of digital image processing in fluid flow research is reviewed. In particular, attention is given to a comprehensive approach to the extraction of quantitative data from multivariate databases and examples of recent developments. The discussion covers numerical simulations and experiments, data processing, generation and dissemination of knowledge, traditional image processing, hybrid processing, fluid flow vector field topology, and isosurface analysis using Marching Cubes.

Hesselink, Lambertus; Helman, James; Ning, Paul

1992-01-01

142

Non-linear Post Processing Image Enhancement  

NASA Technical Reports Server (NTRS)

A non-linear filter for image post-processing based on the feedforward neural network topology is presented. This study was undertaken to investigate the usefulness of "smart" filters in image post-processing. The filter has been shown to be useful in recovering high frequencies, such as those lost during the JPEG compression-decompression process. The filtered images have a higher signal-to-noise ratio and a higher perceived image quality. Simulation studies comparing the proposed filter with the optimum mean square non-linear filter, examples of the high-frequency recovery, and the statistical properties of the filter are given.

Hunt, Shawn; Lopez, Alex; Torres, Angel

1997-01-01

143

Programmable Iterative Optical Image And Data Processing  

NASA Technical Reports Server (NTRS)

Proposed method of iterative optical image and data processing overcomes limitations imposed by loss of optical power after repeated passes through many optical elements - especially, beam splitters. Involves selective, timed combination of optical wavefront phase conjugation and amplification to regenerate images in real time to compensate for losses in optical iteration loops; timing such that amplification turned on to regenerate desired image, then turned off so as not to regenerate other, undesired images or spurious light propagating through loops from unwanted reflections.

Jackson, Deborah J.

1995-01-01

144

Water surface capturing by image processing  

Technology Transfer Automated Retrieval System (TEKTRAN)

An alternative means of measuring the water surface interface during laboratory experiments is processing a series of sequentially captured images. Image processing can provide a continuous, non-intrusive record of the water surface profile whose accuracy is not dependent on water depth. More trad...

145

DSP based image processing for retinal prosthesis  

Microsoft Academic Search

The real-time image processing in retinal prosthesis consists of the implementation of various image processing algorithms like edge detection, edge enhancement, decimation etc. The algorithmic computations in real-time may have high level of computational complexity and hence the use of digital signal processors (DSPs) for the implementation of such algorithms is proposed here. This application desires that the DSPs be

Neha J. Parikh; James D. Weiland; Mark S. Humayun; Saloni S. Shah; Gaurav S. Mohile

2004-01-01

146

Web interface for image processing algorithms  

NASA Astrophysics Data System (ADS)

In this contribution we present an interface for image processing algorithms that has been made recently available on the Internet (http://nibbler.uni-koblenz.de). First, we show its usefulness compared to some other existing products. After a description of its architecture, its main features are then presented: the particularity of the user management, its image database, its interface, and its original quarantine system. We finally present the result of an evaluation performed by students in image processing.

Chastel, Serge; Schwab, Guido; Paulus, Dietrich

2003-12-01

147

Improvement of hospital processes through business process management in Qaem Teaching Hospital: A work in progress  

PubMed Central

In a world of continuously changing business environments, organizations have no option but to deal with a large degree of transformation in order to adjust to the consequent demands. Therefore, many companies need to continually improve and review their processes to maintain their competitive advantages in an uncertain environment. Meeting these challenges requires implementing the most efficient possible business processes, geared to the needs of the industry and market segments that the organization serves globally. In the last 10 years, total quality management, business process reengineering, and business process management (BPM) have been some of the management tools applied by organizations to increase business competitiveness. This paper is an original article that presents the implementation of the BPM approach in the healthcare domain, which allows an organization to improve and review its critical business processes. This project was performed in Qaem Teaching Hospital in Mashhad city, Iran and consists of four distinct steps: (1) identify business processes, (2) document the process, (3) analyze and measure the process, and (4) improve the process. Implementing BPM in Qaem Teaching Hospital changed the nature of management by allowing the organization to avoid the complexity of disparate, siloed systems. BPM instead enabled the organization to focus on business processes at a higher level. PMID:25540784

Yarmohammadian, Mohammad H.; Ebrahimipour, Hossein; Doosty, Farzaneh

2014-01-01

148

Image processing: mathematics, engineering, or art  

SciTech Connect

From the strict mathematical viewpoint, it is impossible to fully achieve the goal of digital image processing, which is to determine an unknown function of two dimensions from a finite number of discrete measurements linearly related to it. However, the necessity to display image data in a form that is visually useful to an observer supersedes such mathematically correct admonitions. Engineering defines the technological limits of what kind of image processing can be done and how the resulting image can be displayed. The appeal and usefulness of the final image to the human eye pertains to aesthetics. Effective image processing necessitates unification of mathematical theory, practical implementation, and artistic display. 59 references, 6 figures.

Hanson, K.M.

1985-01-01

149

Parallel digital signal processing architectures for image processing  

NASA Astrophysics Data System (ADS)

This paper describes research into a high speed image processing system using parallel digital signal processors for the processing of electro-optic images. The objective of the system is to reduce the processing time of non-contact type inspection problems, including industrial and medical applications. A single processor cannot deliver the processing power required for such applications; hence, a MIMD system is designed and constructed to enable fast processing of electro-optic images. The Texas Instruments TMS320C40 digital signal processor is used due to its high speed floating point CPU and its support for the parallel processing environment. A custom designed VISION bus is provided to transfer images between processors. The system is being applied to solder joint inspection of high technology printed circuit boards.

Kshirsagar, Shirish P.; Hartley, David A.; Harvey, David M.; Hobson, Clifford A.

1994-10-01

150

The Teaching of Information Processing in the University of Buenos Aires, Argentina  

Microsoft Academic Search

The article broadly describes the current curriculum in the Departamento de Bibliotecología y Ciencia de la Información at the Facultad de Filosofía y Letras of the Universidad de Buenos Aires. The Information Processing area, including cataloging and classification, is introduced: its composition, theoretical background, strategies, and techniques used in the teaching/learning process, its relationship with other areas in the curriculum,

Elsa E. Barber; Silvia L. Pisano

2006-01-01

151

Teaching and Learning Information Technology Process: From a 25 Year Perspective--Math Regents  

ERIC Educational Resources Information Center

This paper will describe the Teaching and Learning Informational Technology Process (TLITP). Before present-day strategies, teaching and learning relied on transformations based on quantification to measure performance. The process will be a non-linear construct of three elements: teacher, student, and community. Emphasizing old practices now is the…

Lewis Sanchez, Louise

2007-01-01

152

Learning from the National Board Portfolio Process: What Teachers Discovered about Literacy Teaching and Learning  

ERIC Educational Resources Information Center

Using a communities-of-practice framework, this qualitative study investigated what eight teachers learned about literacy teaching and learning through participation in the National Board for Professional Teaching Standards certification process. This article presents two selected teacher cases which suggest that the National Board process created…

Place, Nancy A.; Coskie, Tracy L.

2006-01-01

153

The Effect of Student Teaching Programs on Students' Beliefs about Teaching and Learning Processes.  

ERIC Educational Resources Information Center

This study examined the influence of the student teaching program within the College of Education at the University of Bahrain on the prior beliefs and attitudes of 120 student teachers. A 24-item questionnaire was constructed, based on dilemmas of teaching and learning (knowledge and curriculum, the teacher's role, and the teacher-student…

Al-Musawi, Nu'man M.

154

Improved image quality with Bayesian image processing in digital mammography  

NASA Astrophysics Data System (ADS)

Recent developments in digital detectors have led to investigating the importance of grids in mammography. We propose to examine the use of Bayesian Image Estimation (BIE) as a software means of removing scatter post-acquisition and to compare this technique to a grid. BIE is an iterative, non-linear statistical estimation technique that reduces scatter content while improving CNR. Images of the ACR breast phantom were acquired both with and without a grid on a calibrated digital mammography system. A BIE algorithm was developed and was used to process the images acquired without the grid. Scatter fractions (SF) were compared for the image acquired with the grid, the image acquired without the grid, and the image acquired without the grid and processed by BIE. Images acquired without the anti-scatter grid had an initial SF of 0.46. Application of the Bayesian image estimation technique reduced this to 0.03. In comparison, the use of the grid reduced the SF to 0.19. The use of Bayesian image estimation in digital mammography is beneficial in reducing scatter fractions. This technique is very useful as it can reduce scatter content effectively without introducing any adverse effects such as aliasing caused by gridlines.

Baydush, Alan H.; Floyd, Carey E., Jr.

2000-06-01

155

Earth Observation Services (Image Processing Software)  

NASA Technical Reports Server (NTRS)

San Diego State University and Environmental Systems Research Institute, with other agencies, have applied satellite imaging and image processing techniques to geographic information systems (GIS) updating. The resulting images display land use and are used by a regional planning agency for applications like mapping vegetation distribution and preserving wildlife habitats. The EOCAP program provides government co-funding to encourage private investment in, and to broaden the use of NASA-developed technology for analyzing information about Earth and ocean resources.

1992-01-01

156

Nonlinear Optical Image Processing with Bacteriorhodopsin Films  

NASA Technical Reports Server (NTRS)

The transmission properties of some bacteriorhodopsin film spatial light modulators are uniquely suited to allow nonlinear optical image processing operations to be applied to images with multiplicative noise characteristics. A logarithmic amplitude transmission feature of the film permits the conversion of multiplicative noise to additive noise, which may then be linearly filtered out in the Fourier plane of the transformed image. The bacteriorhodopsin film displays the logarithmic amplitude response for write beam intensities spanning a dynamic range greater than 2.0 orders of magnitude. We present experimental results demonstrating the principle and capability for several different image and noise situations, including deterministic noise and speckle. Using the bacteriorhodopsin film, we successfully filter out image noise from the transformed image that cannot be removed from the original image.
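The log-transform principle the film implements optically can be sketched digitally: take a logarithm so multiplicative noise becomes additive, filter linearly in the Fourier plane, and exponentiate back. The Gaussian low-pass and its cutoff below are illustrative choices, not the film's actual transfer function:

```python
import numpy as np

def homomorphic_filter(img, cutoff=0.1):
    # img must be strictly positive. Multiplicative noise g = f * n becomes
    # additive after the logarithm: log g = log f + log n, so high-frequency
    # noise can be removed with a linear low-pass in the Fourier plane.
    log_img = np.log(img)
    F = np.fft.fft2(log_img)
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    lowpass = np.exp(-(fx ** 2 + fy ** 2) / (2.0 * cutoff ** 2))
    filtered = np.real(np.fft.ifft2(F * lowpass))
    return np.exp(filtered)
```

A high-frequency multiplicative pattern (e.g. speckle-like checkerboard noise) is strongly attenuated while a slowly varying scene survives.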

Downie, John D.; Deiss, Ron (Technical Monitor)

1994-01-01

157

HYPERSPECTRAL IMAGING FOR FOOD PROCESSING AUTOMATION  

Technology Transfer Automated Retrieval System (TEKTRAN)

A hyperspectral imaging system could be used effectively for detecting feces (from duodenum, ceca, and colon) and ingesta on the surface of poultry carcasses, and has potential application for real-time, on-line processing of poultry for automatic safety inspection. The hyperspectral imaging system inc...

158

Kernel Regression for Image Processing and Reconstruction  

Microsoft Academic Search

In this paper, we make contact with the field of nonparametric statistics and present a development and generalization of tools and results for use in image processing and reconstruction. In particular, we adapt and expand kernel regression ideas for use in image denoising, upscaling, interpolation, fusion, and more. Furthermore, we establish key relationships with some popular existing methods and show
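The simplest member of the kernel-regression family the paper generalizes is the zeroth-order (Nadaraya-Watson) estimator: a Gaussian-weighted local average over pixel coordinates. This is a minimal denoising sketch, not the authors' data-adaptive implementation:

```python
import numpy as np

def nw_denoise(img, sigma=1.5, radius=4):
    # Zeroth-order kernel regression: each output pixel is a Gaussian-weighted
    # average of its neighbours within the given radius.
    h, w = img.shape
    pad = np.pad(img, radius, mode='reflect')
    num = np.zeros((h, w))
    den = 0.0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            wgt = np.exp(-(dx * dx + dy * dy) / (2.0 * sigma ** 2))
            num += wgt * pad[radius + dy:radius + dy + h,
                             radius + dx:radius + dx + w]
            den += wgt
    return num / den
```

Higher-order and data-adapted kernels, as developed in the paper, additionally fit local polynomial structure and steer the kernel along edges.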

Hiroyuki Takeda; Sina Farsiu; Peyman Milanfar

2007-01-01

159

Digital Image Processing in Private Industry.  

ERIC Educational Resources Information Center

Examines various types of private industry optical disk installations in terms of business requirements for digital image systems in five areas: records management; transaction processing; engineering/manufacturing; information distribution; and office automation. Approaches for implementing image systems are addressed as well as key success…

Moore, Connie

1986-01-01

160

Axioms and fundamental equations of image processing  

Microsoft Academic Search

Image-processing transforms must satisfy a list of formal requirements. We discuss these requirements and classify them into three categories: “architectural requirements” like locality, recursivity and causality in the scale space, “stability requirements” like the comparison principle and “morphological requirements”, which correspond to shape-preserving properties (rotation invariance, scale invariance, etc.). A complete classification is given of all image multiscale transforms satisfying

Luis Alvarez; Frédéric Guichard; Pierre-Louis Lions; Jean-Michel Morel

1993-01-01

161

Color image processing for date quality evaluation  

NASA Astrophysics Data System (ADS)

Many agricultural non-contact visual inspection applications use color image processing techniques because color is often a good indicator of product quality. Color evaluation is an essential step in the processing and inventory control of fruits and vegetables that directly affects profitability. Most color spaces such as RGB and HSV represent colors with three-dimensional data, which makes using color image processing a challenging task. Since most agricultural applications only require analysis on a predefined set or range of colors, mapping these relevant colors to a small number of indexes allows simple and efficient color image processing for quality evaluation. This paper presents a simple but efficient color mapping and image processing technique that is designed specifically for real-time quality evaluation of Medjool dates. In contrast with more complex color image processing techniques, the proposed color mapping method makes it easy for a human operator to specify and adjust color-preference settings for different color groups representing distinct quality levels. Using this color mapping technique, the color image is first converted to a color map in which a single color index represents the color value of each pixel. Fruit maturity level is evaluated based on these color indices. A skin lamination threshold is then determined based on the fruit surface characteristics. This adaptive threshold is used to detect delaminated fruit skin and hence determine the fruit quality. This robust color grading technique has been used for real-time Medjool date grading.
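The color-index mapping step might look like the following sketch, in which hypothetical hue ranges (which a real system would let the operator tune per quality level) are collapsed to a handful of indices:

```python
import numpy as np

# Hypothetical hue ranges (on a 0-1 hue axis) standing in for the operator-tuned
# color-preference settings described above; each index is one quality group.
COLOR_CLASSES = {
    0: (0.00, 0.05),  # dark red -> fully mature
    1: (0.05, 0.12),  # amber    -> intermediate
    2: (0.12, 0.20),  # yellow   -> immature
}

def map_to_indices(hue):
    # Collapse each pixel's hue to a small color index; -1 = not of interest.
    idx = np.full(hue.shape, -1, dtype=int)
    for k, (lo, hi) in COLOR_CLASSES.items():
        idx[(hue >= lo) & (hue < hi)] = k
    return idx
```

Once the image is reduced to a small index map, maturity evaluation becomes a matter of counting pixels per index rather than reasoning in a three-dimensional color space.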

Lee, Dah Jye; Archibald, James K.

2010-01-01

162

Snapping Sharks, Maddening Mindreaders, and Interactive Images: Teaching Correlation.  

ERIC Educational Resources Information Center

Understanding correlation coefficients is difficult for students. A free computer program that helps introductory psychology students distinguish between positive and negative correlation, and which also teaches them to understand the differences between correlation coefficients of different size is described in this paper. The program is…

Mitchell, Mark L.

163

Mobile Phone Images and Video in Science Teaching and Learning  

ERIC Educational Resources Information Center

This article reports a study into how mobile phones could be used to enhance teaching and learning in secondary school science. It describes four lessons devised by groups of Sri Lankan teachers all of which centred on the use of the mobile phone cameras rather than their communication functions. A qualitative methodological approach was used to…

Ekanayake, Sakunthala Yatigammana; Wishart, Jocelyn

2014-01-01

164

Elements of image processing in localization microscopy  

NASA Astrophysics Data System (ADS)

Localization microscopy software generally contains three elements: a localization algorithm to determine fluorophore positions on a specimen, a quality control method to exclude imprecise localizations, and a visualization technique to reconstruct an image of the specimen. Such algorithms may be designed for either sparse or partially overlapping (dense) fluorescence image data, and making a suitable choice of software depends on whether an experiment calls for simplicity and resolution (favouring sparse methods), or for rapid data acquisition and time resolution (requiring dense methods). We discuss the factors involved in this choice. We provide a full set of MATLAB routines as a guide to localization image processing, and demonstrate the usefulness of image simulations as a guide to the potential artefacts that can arise when processing over-dense experimental fluorescence images with a sparse localization algorithm.
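A minimal sparse-data localization routine in the spirit described above (the paper itself supplies MATLAB routines) thresholds the image, finds local maxima, and refines each with an intensity-weighted centroid:

```python
import numpy as np

def localize_sparse(img, threshold, win=3):
    # Sparse-emitter localization: keep pixels that are both above threshold
    # and a 3x3 local maximum, then refine each candidate to sub-pixel
    # precision with an intensity-weighted centroid in a (2*win+1)^2 window.
    peaks = []
    h, w = img.shape
    for y in range(win, h - win):
        for x in range(win, w - win):
            patch = img[y - 1:y + 2, x - 1:x + 2]
            if img[y, x] >= threshold and img[y, x] == patch.max():
                sub = img[y - win:y + win + 1, x - win:x + win + 1]
                ys, xs = np.indices(sub.shape)
                m = sub.sum()
                peaks.append((y - win + (ys * sub).sum() / m,
                              x - win + (xs * sub).sum() / m))
    return peaks
```

With over-dense data this simple detector merges overlapping emitters, which is exactly the kind of artefact the simulations in the paper are designed to expose.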

Rees, Eric J.; Erdelyi, Miklos; Kaminski Schierle, Gabriele S.; Knight, Alex; Kaminski, Clemens F.

2013-09-01

165

Rotation Covariant Image Processing for Biomedical Applications  

PubMed Central

With the advent of novel biomedical 3D image acquisition techniques, the efficient and reliable analysis of volumetric images has become more and more important. The amount of data is enormous and demands automated processing. The applications are manifold, ranging from image enhancement, image reconstruction, and image description to object/feature detection and high-level contextual feature extraction. In most scenarios, it is expected that geometric transformations alter the output in a mathematically well-defined manner. In this paper we focus on 3D translations and rotations. Many algorithms rely on intensity or low-order tensorial-like descriptions to fulfill this demand. This paper proposes a general mathematical framework based on concepts and theories transferred from mathematical physics and harmonic analysis into the domain of image analysis and pattern recognition. Based on two basic operations, spherical tensor differentiation and spherical tensor multiplication, we show how to design a variety of 3D image processing methods in an efficient way. The framework has already been applied to several biomedical applications ranging from feature and object detection tasks to image enhancement and image restoration techniques. In this paper, the proposed methods are applied to a variety of different 3D data modalities stemming from the medical and biological sciences. PMID:23710255

Reisert, Marco

2013-01-01

166

The Fallacy of Excluded Instruction: A Common But Correctable Error in Process Oriented Teaching Strategies.  

ERIC Educational Resources Information Center

Suggests that indirect process oriented teaching methods are not too successful with elementary school social studies students because there is a tendency for teachers to provide too little information to enable a rational child to discover the ideas he or she is supposed to learn. Modifications in the form and application of the teaching strategy…

McKenzie, Gary R.

1979-01-01

167

The Process of Adapting a German Pedagogy for Modern Mathematics Teaching in Japan  

ERIC Educational Resources Information Center

Modern geometry teaching in schools in Japan was modeled on the pedagogies of western countries. However, the core ideas of these pedagogies were often radically changed in the process of adaptation, resulting in teaching differing fundamentally from the original models. This paper discusses the radical changes the pedagogy of a German mathematics…

Yamamoto, Shinya

2006-01-01

168

Scaling-up Process-Oriented Guided Inquiry Learning Techniques for Teaching Large Information Systems Courses  

ERIC Educational Resources Information Center

Promoting engagement during lectures becomes significantly more challenging as class sizes increase. Therefore, lecturers need to experiment with new teaching methodologies to embolden deep learning outcomes and to develop interpersonal skills amongst students. Process Oriented Guided Inquiry Learning is a teaching approach that uses highly…

Trevathan, Jarrod; Myers, Trina; Gray, Heather

2014-01-01

169

Twitter for Teaching: Can Social Media Be Used to Enhance the Process of Learning?  

ERIC Educational Resources Information Center

Can social media be used to enhance the process of learning by students in higher education? Social media have become widely adopted by students in their personal lives. However, the application of social media to teaching and learning remains to be fully explored. In this study, the use of the social media tool Twitter for teaching was…

Evans, Chris

2014-01-01

170

Process Evaluation of a Teaching and Learning Centre at a Research University  

ERIC Educational Resources Information Center

This paper describes the evaluation of a teaching and learning centre (TLC) five years after its inception at a mid-sized, midwestern state university. The mixed methods process evaluation gathered data from 209 attendees and non-attendees of the TLC from the full-time, benefit-eligible teaching faculty. Focus groups noted feelings of…

Smith, Deborah B.; Gadbury-Amyot, Cynthia C.

2014-01-01

171

Image-processing with augmented reality (AR)  

NASA Astrophysics Data System (ADS)

The aim of this project is to create an image-based Android application built on real-time image detection and processing, a convenient approach that gives users information about imagery on the spot. Past attempts at image-based applications have gone only as far as image finders that work with images already stored in some form of database. The Android platform is rapidly spreading around the world and provides by far the most interactive and technical platform for smart-phones, which is why this study and research were based on it. Augmented reality additionally allows the user to manipulate the data and add enhanced features (video, GPS tags) to the image taken.

Babaei, Hossein R.; Mohurutshe, Pagiel L.; Habibi Lashkari, Arash

2013-03-01

172

Overview on METEOSAT geometrical image data processing  

NASA Technical Reports Server (NTRS)

Digital images acquired from the geostationary METEOSAT satellites are processed and disseminated at ESA's European Space Operations Centre in Darmstadt, Germany. Their scientific value is mainly dependent on their radiometric quality and geometric stability. This paper gives an overview of the image processing activities performed at ESOC, concentrating on geometrical restoration and quality evaluation. The performance of the rectification process for the various satellites over the past years is presented, and the impacts of external events, such as the Pinatubo eruption in 1991, are explained. Special developments in both hardware and software, necessary to cope with demanding tasks such as new image resampling or correcting for spacecraft anomalies, are presented as well. The rotating lens of MET-5, which causes severe geometrical image distortions, is an example of the latter.

Diekmann, Frank J.

1994-01-01

173

Using a Web-Based System To Support Teaching Processes  

Microsoft Academic Search

A platform-independent Java Web application named TSI (Teacher-Student Interaction) that supports communication between an instructor, teaching assistants and students in a traditional on-campus course is presented in this paper. Using the TSI, the instructor and teaching assistants can handle most of the routine work: upload student personal information, send students personal emails, etc. The system can easily be installed and

V. Klyuev; Tomokazu Tsuchimoto; Gennadiy Nikishkov

2008-01-01

174

Teaching the Process of Science: Faculty Perceptions and an Effective Methodology  

PubMed Central

Most scientific endeavors require science process skills such as data interpretation, problem solving, experimental design, scientific writing, oral communication, collaborative work, and critical analysis of primary literature. These are the fundamental skills upon which the conceptual framework of scientific expertise is built. Unfortunately, most college science departments lack a formalized curriculum for teaching undergraduates science process skills. However, evidence strongly suggests that explicitly teaching undergraduates skills early in their education may enhance their understanding of science content. Our research reveals that faculty overwhelmingly support teaching undergraduates science process skills but typically do not spend enough time teaching skills due to the perceived need to cover content. To encourage faculty to address this issue, we provide our pedagogical philosophies, methods, and materials for teaching science process skills to freshmen pursuing life science majors. We build upon previous work, showing student learning gains in both reading primary literature and scientific writing, and share student perspectives about a course where teaching the process of science, not content, was the focus. We recommend a wider implementation of courses that teach undergraduates science process skills early in their studies with the goals of improving student success and retention in the sciences and enhancing general science literacy. PMID:21123699

Coil, David; Wenderoth, Mary Pat; Cunningham, Matthew

2010-01-01

175

Teaching the process of science: faculty perceptions and an effective methodology.  

PubMed

Most scientific endeavors require science process skills such as data interpretation, problem solving, experimental design, scientific writing, oral communication, collaborative work, and critical analysis of primary literature. These are the fundamental skills upon which the conceptual framework of scientific expertise is built. Unfortunately, most college science departments lack a formalized curriculum for teaching undergraduates science process skills. However, evidence strongly suggests that explicitly teaching undergraduates skills early in their education may enhance their understanding of science content. Our research reveals that faculty overwhelmingly support teaching undergraduates science process skills but typically do not spend enough time teaching skills due to the perceived need to cover content. To encourage faculty to address this issue, we provide our pedagogical philosophies, methods, and materials for teaching science process skills to freshmen pursuing life science majors. We build upon previous work, showing student learning gains in both reading primary literature and scientific writing, and share student perspectives about a course where teaching the process of science, not content, was the focus. We recommend a wider implementation of courses that teach undergraduates science process skills early in their studies with the goals of improving student success and retention in the sciences and enhancing general science literacy. PMID:21123699

Coil, David; Wenderoth, Mary Pat; Cunningham, Matthew; Dirks, Clarissa

2010-01-01

176

Real-time optical image processing techniques  

NASA Technical Reports Server (NTRS)

Nonlinear real-time optical processing based on spatial pulse frequency modulation has been pursued through the analysis, design, and fabrication of pulse frequency modulated halftone screens and the modification of micro-channel spatial light modulators (MSLMs). Micro-channel spatial light modulators are modified via the Fabry-Perot method to achieve the high gamma operation required for nonlinear operation. Real-time nonlinear processing was performed using the halftone screen and MSLM. The experiments showed the effectiveness of the thresholding and also showed the need for a higher SBP for image processing. The Hughes LCLV has been characterized and found to yield high gamma (about 1.7) when operated in low frequency and low bias mode. Cascading of two LCLVs should also provide enough gamma for nonlinear processing. In this case, the SBP of the LCLV is sufficient but the uniformity of the LCLV needs improvement. Applications investigated include image correlation, computer generation of holograms, pseudo-color image encoding for image enhancement, and associative retrieval in neural processing. The discovery of the only known optical method for dynamic range compression of an input image in real time by using GaAs photorefractive crystals is reported. Finally, a new architecture for nonlinear multiple sensory, neural processing has been suggested.

Liu, Hua-Kuang

1988-01-01

177

The Uncertainty Principle in Image Processing  

Microsoft Academic Search

The uncertainty principle is recognized as one of the fundamental results in signal processing. Its role in inference is, however, less well known outside of quantum mechanics. It is the aim of this paper to provide a unified approach to the problem of uncertainty in image processing. It is shown that uncertainty can be derived from the fundamental constraints on
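For context, the classical one-dimensional uncertainty relation that such derivations build on (the Gabor limit, in one common normalization with frequency measured in cycles) is shown below; the paper's own derivation for images may use a different normalization:

```latex
\Delta x \,\Delta u \;\ge\; \frac{1}{4\pi},
\qquad
\Delta x^{2} = \frac{\int (x-\bar{x})^{2}\,|f(x)|^{2}\,dx}{\int |f(x)|^{2}\,dx},
\quad
\Delta u^{2} = \frac{\int (u-\bar{u})^{2}\,|\hat{f}(u)|^{2}\,du}{\int |\hat{f}(u)|^{2}\,du}
```

Here \(\Delta x\) and \(\Delta u\) are the RMS widths of the signal and of its Fourier transform: a feature cannot be arbitrarily localized in both position and frequency at once.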

Roland Wilson; Goesta H. Granlund

1984-01-01

178

Image Processing Application for Cognition (IPAC) - Traditional and Emerging Topics in Image Processing in Astronomy (Invited)  

NASA Astrophysics Data System (ADS)

A new application framework for advanced image processing for astronomy is presented. It implements standard two-dimensional operators, and recent developments in the field of non-astronomical image processing (IP), as well as original algorithms based on nonlinear partial differential equations (PDE). These algorithms are especially well suited for multi-scale astronomical images since they increase signal to noise ratio without smearing localized and diffuse objects. The visualization component is based on the extensive tools that we developed for Spitzer Space Telescope's observation planning tool Spot and archive retrieval tool Leopard. It contains many common features, combines images in new and unique ways and interfaces with many astronomy data archives. Both interactive and batch mode processing are incorporated. In the interactive mode, the user can set up simple processing pipelines, and monitor and visualize the resulting images from each step of the processing stream. The system is platform-independent and has an open architecture that allows extensibility by addition of plug-ins. This presentation addresses astronomical applications of traditional topics of IP (image enhancement, image segmentation) as well as emerging new topics like automated image quality assessment (QA) and feature extraction, which have potential for shaping future developments in the field. Our application framework embodies a novel synergistic approach based on integration of image processing, image visualization and image QA (iQA).
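A generic example of the nonlinear-PDE smoothers mentioned above (not IPAC's own algorithm) is Perona-Malik anisotropic diffusion, which raises signal-to-noise ratio while limiting diffusion across edges:

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.1, dt=0.2):
    # Explicit scheme for u_t = div( g(|grad u|) * grad u ) with the
    # exponential edge-stopping function g(d) = exp(-(d/kappa)^2).
    def g(d):
        return np.exp(-(d / kappa) ** 2)
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # One-sided differences to the four neighbours, zero flux at borders.
        dn = np.roll(u, -1, 0) - u; dn[-1, :] = 0.0
        ds = np.roll(u, 1, 0) - u;  ds[0, :] = 0.0
        de = np.roll(u, -1, 1) - u; de[:, -1] = 0.0
        dw = np.roll(u, 1, 1) - u;  dw[:, 0] = 0.0
        u = u + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

Because the conductivity g falls off at large gradients, smoothing slows near edges, which is why PDE methods of this family can denoise multi-scale astronomical images without smearing localized or diffuse objects.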

Pesenson, M.; Roby, W.; Helou, G.; McCollum, B.; Ly, L.; Wu, X.; Laine, S.; Hartley, B.

2008-08-01

179

Bistatic SAR: Signal Processing and Image Formation.  

SciTech Connect

This report describes the significant processing steps that were used to take the raw recorded digitized signals from the bistatic synthetic aperture radar (SAR) hardware built for the NCNS Bistatic SAR project to a final bistatic SAR image. In general, the processing steps herein are applicable to bistatic SAR signals that include the direct-path signal and the reflected signal. The steps include preprocessing, data extraction to form a phase history, and finally, image formation. Various plots and values are shown at most steps to illustrate the processing for a bistatic COSMO SkyMed collection gathered on June 10, 2013 at Kirtland Air Force Base, New Mexico.

Wahl, Daniel E.; Yocky, David A.

2014-10-01

180

JIP: Java image processing on the Internet  

NASA Astrophysics Data System (ADS)

In this paper, we present JIP - Java Image Processing on the Internet, a new Internet-based application for remote education and software presentation. JIP offers an integrated learning environment on the Internet where remote users not only can share static HTML documents and lecture notes, but also can run and reuse dynamic distributed software components, without having the source code or any extra work of software compilation, installation, and configuration. By implementing a platform-independent distributed computational model, local computational resources are consumed instead of the resources on a central server. As an extended Java applet, JIP allows users to select local image files on their computers or specify any image on the Internet using a URL as input. Multimedia lectures such as streaming video/audio and digital images are integrated into JIP and intelligently associated with specific image processing functions. Watching demonstrations and practicing the functions with user-selected input data dramatically encourages learning interest, while promoting the understanding of image processing theory. The JIP framework can be easily applied to other subjects in education or software presentation, such as digital signal processing, business, mathematics, physics, or other areas such as employee training and charged software consumption.

Wang, Dongyan; Lin, Bo; Zhang, Jun

1998-12-01

181

Fundamental concepts of digital image processing  

SciTech Connect

The field of digital image processing has experienced dramatic growth and increasingly widespread applicability in recent years. Fortunately, advances in computer technology have kept pace with the rapid growth in the volume of image data across applications. Digital image processing has become economical in many fields of research and in industrial and military applications. While each application has requirements unique from the others, all are concerned with faster, cheaper, more accurate, and more extensive computation. The trend is toward real-time and interactive operations, where the user of the system obtains preliminary results within a short enough time that the next decision can be made by the human processor without loss of concentration on the task at hand. An example of this is the obtaining of two-dimensional (2-D) computer-aided tomography (CAT) images. A medical decision might be made while the patient is still under observation rather than days later.

Twogood, R.E.

1983-03-01

182

Prospective faculty developing understanding of teaching and learning processes in science  

NASA Astrophysics Data System (ADS)

Historically, teaching has been considered a burden by many academics at institutions of higher education, particularly research scientists. Furthermore, university faculty and prospective faculty often have limited exposure to issues associated with effective teaching and learning. As a result, ineffective teaching and learning strategies are pervasive in university classrooms. This exploratory case study focuses on four biology graduate teaching fellows (BGF) who participated in a National Science Foundation (NSF) GK-12 Program. Such programs were introduced by NSF to enhance the preparation of prospective faculty for their future professional responsibilities. In this particular program, BGF were paired with high school biology teachers (pedagogical mentors) for at least one year. During this yearlong partnership, BGF were involved in a series of activities related to teaching and learning ranging from classroom teaching, tutoring, lesson planning, grading, to participating in professional development conferences and reflecting upon their practices. The purpose of this study was to examine the changes in BGF understanding of teaching and learning processes in science as a function of their pedagogical content knowledge (PCK). In addition, the potential transfer of this knowledge between high school and higher education contexts was investigated. The findings of this study suggest that understanding of teaching and learning processes in science by the BGF changed. Specific aspects of the BGF involvement in the program (such as classroom observations, practice teaching, communicating with mentors, and reflecting upon one's practice) contributed to PCK development. In fact, there is evidence to suggest that constant reflection is critical in the process of change. Concurrently, the BGFs' enhanced understanding of science teaching and learning processes may be transferable from the high school context to the university context.
Future research studies should be designed to explore this transfer phenomenon explicitly.

Pareja, Jose I.

183

Hardware implementation of machine vision systems: image and video processing  

NASA Astrophysics Data System (ADS)

This contribution surveys the topics covered by the special issue titled `Hardware Implementation of Machine Vision Systems': FPGA, GPU, embedded-system, and multicore implementations for image analysis tasks such as edge detection, segmentation, pattern recognition and object recognition/interpretation, image enhancement/restoration, image/video compression, image similarity and retrieval, satellite image processing, medical image processing, motion estimation, neuromorphic and bioinspired vision systems, video processing, image formation and physics-based vision, 3D processing/coding, scene understanding, and multimedia.

Botella, Guillermo; García, Carlos; Meyer-Bäse, Uwe

2013-12-01

184

Digital-image processing and image analysis of glacier ice  

USGS Publications Warehouse

This document provides a methodology for extracting grain statistics from 8-bit color and grayscale images of thin sections of glacier ice—a subset of physical properties measurements typically performed on ice cores. This type of analysis is most commonly used to characterize the evolution of ice-crystal size, shape, and intercrystalline spatial relations within a large body of ice sampled by deep ice-coring projects from which paleoclimate records will be developed. However, such information is equally useful for investigating the stress state and physical responses of ice to stresses within a glacier. The methods of analysis presented here go hand-in-hand with the analysis of ice fabrics (aggregate crystal orientations) and, when combined with fabric analysis, provide a powerful method for investigating the dynamic recrystallization and deformation behaviors of bodies of ice in motion. The procedures described in this document compose a step-by-step handbook for a specific image acquisition and data reduction system built in support of U.S. Geological Survey ice analysis projects, but the general methodology can be used with any combination of image processing and analysis software. The specific approaches in this document use the FoveaPro 4 plug-in toolset to Adobe Photoshop CS5 Extended but it can be carried out equally well, though somewhat less conveniently, with software such as the image processing toolbox in MATLAB, Image-Pro Plus, or ImageJ.
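Grain-size extraction of the kind described can be sketched as connected-component labeling of a thresholded grain mask; this toy 4-connected flood fill stands in for the FoveaPro/ImageJ-style tooling the handbook actually uses:

```python
import numpy as np
from collections import deque

def grain_sizes(binary):
    # Label 4-connected grains in a binary mask via breadth-first flood fill
    # and return the pixel area of each grain.
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=int)
    sizes, current = [], 0
    for sy in range(h):
        for sx in range(w):
            if binary[sy, sx] and labels[sy, sx] == 0:
                current += 1
                labels[sy, sx] = current
                q, area = deque([(sy, sx)]), 0
                while q:
                    y, x = q.popleft()
                    area += 1
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] \
                                and labels[ny, nx] == 0:
                            labels[ny, nx] = current
                            q.append((ny, nx))
                sizes.append(area)
    return sizes
```

From the per-grain areas one can derive the size distributions used to track ice-crystal evolution with depth; shape and spatial statistics would require keeping the label image as well.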

Fitzpatrick, Joan J.

2013-01-01

185

Variational PDE Models in Image Processing  

Microsoft Academic Search

This article gives a broad picture of mathematical image processing through one of the most recent and very successful approaches: the variational PDE (partial differential equation) method. We first discuss two crucial ingredients for image processing: image modeling or representation, and processor modeling. We then focus on the variational PDE method. The backbone of the article consists of two major problems in image processing that we personally have worked on: inpainting and segmentation.
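A canonical example of the variational approach to segmentation is the Mumford-Shah functional, shown here in its standard form (the article's own formulations for inpainting and segmentation are variants of this idea):

```latex
E(u, K) \;=\; \mu\,\mathrm{length}(K)
\;+\; \lambda \int_{\Omega} (u - f)^{2}\,dx
\;+\; \int_{\Omega \setminus K} |\nabla u|^{2}\,dx
```

Here \(f\) is the observed image on the domain \(\Omega\), \(u\) is a piecewise-smooth approximation, and \(K\) is the edge set; minimizing \(E\) trades off fidelity to the data, smoothness away from edges, and the total length of the edges.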

Tony F. Chan; Luminita Vese

2003-01-01

186

Image processing of angiograms: A pilot study  

NASA Technical Reports Server (NTRS)

The technology transfer application this report describes is the result of a pilot study of image-processing methods applied to the image enhancement, coding, and analysis of arteriograms. Angiography is a subspecialty of radiology that employs the introduction of media with high X-ray absorption into arteries in order to study vessel pathology as well as to infer disease of the organs supplied by the vessel in question.

Larsen, L. E.; Evans, R. A.; Roehm, J. O., Jr.

1974-01-01

187

Imaging for Literacy Learning: A Reflective Teaching Inquiry.  

ERIC Educational Resources Information Center

A study examined whether imaging is a universal ability that can be used by learners, and if it can be developed for more effective use once the learner can observe the role of imaging in his or her own mental activity. Subjects, 196 students in college upperclass writing classes, graduate level workshops, basic writing classes, an undergraduate…

Thompson, Nancy S.

188

Factors Related to Integration of Technology Into the Teaching\\/learning Process in Agriscience Education Programs  

Microsoft Academic Search

This study addressed how technology was being integrated in the teaching\\/learning process in secondary agriscience education programs at four levels: Exploration, Experimentation, Adoption, and Advanced Integration. The study was based on the Kotrlik\\/Redmann Technology Integration Model©. Technology is being integrated by agriscience teachers in the teaching\\/learning process to a moderate extent. They are more active in the areas of exploration

Joe W. Kotrlik; Donna H. Redmann; Bruce B. Douglas

189

Logarithmic spiral grids for image processing  

NASA Technical Reports Server (NTRS)

A picture digitization grid based on logarithmic spirals rather than Cartesian coordinates is presented. Expressing this curvilinear grid as a conformal exponential mapping reveals useful image processing properties. The mapping induces a computational simplification that suggests parallel architectures in which most geometric transformations are effected by data shifting in memory rather than arithmetic on coordinates. These include fast, parallel noise-free rotation, scaling, and some projective transformations of pixel defined images. Conformality of the mapping preserves local picture-processing operations such as edge detection.
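
The rotation-as-data-shift property described above can be sketched in a few lines of numpy (a toy illustration; the grid sizes and the analytic test scene are assumptions, not taken from the paper):

```python
import numpy as np

# Log-polar grid: u indexes log-radius, v indexes angle.  Under the
# conformal map (x, y) = (e^u cos v, e^u sin v), image rotation becomes
# a circular shift along v (and uniform scaling a shift along u).
NU, NV = 64, 90                      # assumed grid resolution
u = np.linspace(0.0, 3.0, NU, endpoint=False)
v = np.linspace(0.0, 2.0 * np.pi, NV, endpoint=False)
U, V = np.meshgrid(u, v, indexing="ij")

def sample(scene, angle=0.0):
    """Sample a continuous scene on the log-polar grid after rotating
    the scene coordinates by `angle`."""
    r = np.exp(U)
    return scene(r * np.cos(V + angle), r * np.sin(V + angle))

scene = lambda x, y: np.cos(0.5 * x) + np.sin(0.3 * y)   # any smooth scene

base = sample(scene)
rotated = sample(scene, angle=2.0 * np.pi / NV)  # rotate by one grid step
shifted = np.roll(base, -1, axis=1)              # shift data by one row of v
print(np.allclose(rotated, shifted))             # prints True
```

Rotation of the scene and a one-row shift of the sampled data agree, which is exactly what lets the architecture replace coordinate arithmetic with memory shifts.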

Weiman, C. F. R.; Chaikin, G. M.

1979-01-01

190

Processing infrared images of aircraft lapjoints  

NASA Technical Reports Server (NTRS)

Techniques for processing IR images of aging aircraft lapjoint data are discussed. Attention is given to a technique for detecting disbonds in aircraft lapjoints which clearly delineates the disbonded region from the bonded regions. The technique is weak on unpainted aircraft skin surfaces, but this limitation can be overcome by using a self-adhering contact sheet. Neural network analysis of raw temperature data has been shown to be an effective tool for visualization of images. Numerical simulation results show the above processing technique to be an effective tool in delineating disbonds.
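
The delineation idea can be illustrated with a minimal thresholding sketch on a synthetic temperature image (the two-level image and the midpoint threshold rule are illustrative assumptions, not the paper's actual processing):

```python
import numpy as np

rng = np.random.default_rng(8)

# Toy thermal frame: bonded skin cools to ~20 while a disbonded
# patch stays warm at ~28 (hypothetical temperatures).
img = np.full((40, 40), 20.0)
img[12:20, 15:28] = 28.0                  # the 8 x 13 disbonded patch
img += rng.normal(0.0, 0.5, img.shape)    # sensor noise

# Delineation: threshold midway between the coolest and warmest pixels.
thresh = 0.5 * (img.min() + img.max())
mask = img > thresh
print(mask.sum())   # 104: exactly the 8 x 13 disbond pixels
```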

Syed, Hazari; Winfree, William P.; Cramer, K. E.

1992-01-01

191

Support Routines for In Situ Image Processing  

NASA Technical Reports Server (NTRS)

This software consists of a set of application programs that support ground-based image processing for in situ missions. These programs represent a collection of utility routines that perform miscellaneous functions in the context of the ground data system. Each one fulfills some specific need as determined via operational experience. The most unique aspect of these programs is that they are integrated into the large, in situ image processing system via the PIG (Planetary Image Geometry) library. They work directly with in situ data, understanding the appropriate image meta-data fields and updating them properly. The programs themselves are completely multimission; all mission dependencies are handled by PIG. This suite of programs consists of: (1) marscahv: Generates a linearized, epipolar-aligned image given a stereo pair of images. These images are optimized for 1-D stereo correlations. (2) marscheckcm: Compares the camera model in an image label with one derived via kinematics modeling on the ground. (3) marschkovl: Checks the overlaps between a list of images in order to determine which might be stereo pairs. This is useful for non-traditional stereo images like long-baseline pairs or those from an articulating arm camera. (4) marscoordtrans: Translates mosaic coordinates from one form into another. (5) marsdispcompare: Checks a left-to-right stereo disparity image against a right-to-left disparity image to ensure they are consistent with each other. (6) marsdispwarp: Takes one image of a stereo pair and warps it through a disparity map to create a synthetic opposite-eye image. For example, a right-eye image could be transformed to look like it was taken from the left eye via this program. (7) marsfidfinder: Finds fiducial markers in an image by projecting their approximate location and then using correlation to locate the markers to subpixel accuracy. These fiducial markers are small targets attached to the spacecraft surface. This helps verify, or improve, the pointing of in situ cameras. (8) marsinvrange: Inverse of marsrange; given a range file, re-computes an XYZ file that closely matches the original. (9) marsproj: Projects an XYZ coordinate through the camera model, and reports the line/sample coordinates of the point in the image. (10) marsprojfid: Given the output of marsfidfinder, projects the XYZ locations and compares them to the found locations, creating a report showing the fiducial errors in each image. (11) marsrad: Radiometrically corrects an image. (12) marsrelabel: Updates coordinate system or camera model labels in an image. (13) marstiexyz: Given a stereo pair, allows the user to interactively pick a point in each image and reports the XYZ value corresponding to that pair of locations. (14) marsunmosaic: Extracts a single frame from a mosaic, which will be created such that it could have been an input to the original mosaic. Useful for creating simulated input frames using different camera models than the original mosaic used. (15) merinverter: Uses an inverse lookup table to convert 8-bit telemetered data to its 12-bit original form. Can be used in other missions despite the name.

Deen, Robert G.; Pariser, Oleg; Yeates, Matthew C.; Lee, Hyun H.; Lorre, Jean

2013-01-01

192

Teaching Ethics in the Community College Data Processing Curriculum.  

ERIC Educational Resources Information Center

Explores computer-related crime, possible causes for the dramatic increase in criminal activity in the last 20 years, and the moral standards of the computer criminal. Discusses the responsibility of community colleges to teach individuals the moral implications inherent in the use of computers. Suggests objectives for a course dealing with these…

Gottleber, T. T.

1988-01-01

193

Architecture as a Quality in the Learning and Teaching Process.  

ERIC Educational Resources Information Center

Using an outline format accompanied by numerous photographs and sketches, this brochure explores the relationship of "school" to people's conceptions, actions, and physical surroundings, highlighting changes over the past 20 years in Scandinavian school design. Two major conceptual changes are decentralized administration and teaching and learning…

Cold, Birgit

194

The Teaching Evaluation Process: Segmentation of Marketing Students.  

ERIC Educational Resources Information Center

A study applied the concept of market segmentation to student evaluation of college teaching, by assessing whether there exist several segments of students and how this relates to their evaluation of faculty. Subjects were 156 Australian undergraduate business administration students. Results suggest segments do exist, with different expectations…

Yau, Oliver H. M.; Kwan, Wayne

1993-01-01

195

Student Satisfaction and Its Implications in the Process of Teaching  

ERIC Educational Resources Information Center

Student satisfaction is widely recognized as an indicator of the quality of students' learning and teaching experience. This study aims to highlight how satisfied students (from the primary and preschool pedagogy specialization within the Faculty of Psychology and Educational Sciences, who are studying to become future kindergarten and…

Ciobanu, Alina; Ostafe, Livia

2014-01-01

196

Foundation of single-frame image processing  

NASA Astrophysics Data System (ADS)

Single-frame image processing with noise can be cast as a Fredholm integral equation of the first kind. Its inversion for object reconstruction is known to become increasingly unstable as the ratio of the maximum and minimum eigenvalues (of the square of the point spread function) grows with the number of discrete samples. Therefore single-frame image processing belongs to the class of ill-posed inverse problems. The root of the instability is identified to be the singularity of the ratio of the noise power spectrum and the modulation transfer function. Such over-amplification of noise at high spatial frequency is well known in optics and in image extrapolation problems, and ad hoc procedures have commonly been adopted to suppress it. Recently an approach developed for a geological inversion problem has been applied to the trade-off between noise and resolution in image problems. The ad hoc combination procedure can, however, be simplified using the Euler-Lagrange variational approach instead of the conventional gradient method; this will be shown to be rigorously achievable. Qualitative illustrations are presented which give insight into the present method of dynamical regularization for the ill-posed inverse problem. Two exact solutions of the Bojarski-Lewis inverse scattering problem are obtained, showing different noise behaviors in the reconstructed object profile. The impact on the super-resolution and image extrapolation problems is indicated using an optical implementation of an iteration algorithm for image restoration.
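
A standard remedy for the noise amplification described above (in the spirit of the noise-resolution trade-off, though not necessarily the author's exact procedure) is Tikhonov regularization. A minimal numpy sketch on a discretized Gaussian smoothing kernel; the kernel width, noise level, and regularization weight are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Discretized Fredholm smoothing kernel (a Gaussian blur matrix).
n = 80
x = np.linspace(0.0, 1.0, n)
K = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 0.02 ** 2))
K /= K.sum(axis=1, keepdims=True)

f_true = np.sin(2 * np.pi * x) + 0.5 * np.sin(5 * np.pi * x)
g = K @ f_true + 1e-3 * rng.standard_normal(n)   # blurred data + noise

# Naive inversion amplifies noise carried by the small eigenvalues...
f_naive = np.linalg.solve(K, g)
# ...while Tikhonov regularization trades resolution for stability:
#     minimize ||K f - g||^2 + lam * ||f||^2
lam = 1e-4
f_reg = np.linalg.solve(K.T @ K + lam * np.eye(n), K.T @ g)

err_naive = np.linalg.norm(f_naive - f_true)
err_reg = np.linalg.norm(f_reg - f_true)
print(err_reg < err_naive)   # prints True: regularization wins
```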

Szu, H. Harold

1980-12-01

197

Processing Images of Craters for Spacecraft Navigation  

NASA Technical Reports Server (NTRS)

A crater-detection algorithm has been conceived to enable automation of what, heretofore, have been manual processes for utilizing images of craters on a celestial body as landmarks for navigating a spacecraft flying near or landing on that body. The images are acquired by an electronic camera aboard the spacecraft, then digitized, then processed by the algorithm, which consists mainly of the following steps: 1. Edges in an image are detected and placed in a database. 2. Crater rim edges are selected from the edge database. 3. Edges that belong to the same crater are grouped together. 4. An ellipse is fitted to each group of crater edges. 5. Ellipses are refined directly in the image domain to reduce errors introduced in the detection of edges and fitting of ellipses. 6. The quality of each detected crater is evaluated. It is planned to utilize this algorithm as the basis of a computer program for automated, real-time, onboard processing of crater-image data. Experimental studies have led to the conclusion that this algorithm is capable of a detection rate >93 percent, a false-alarm rate <5 percent, a geometric error <0.5 pixel, and a position error <0.3 pixel.
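
Step 4 above, fitting an ellipse to a group of rim-edge points, can be sketched with a direct least-squares conic fit (a generic method; the paper does not specify its fitting procedure, and the synthetic crater geometry here is an assumption):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "crater rim" edge group: an ellipse centered at (40, 25)
# with semi-axes 12 and 7, rotated 30 degrees, plus pixel noise.
t = np.linspace(0.0, 2 * np.pi, 200, endpoint=False)
ca, sa = np.cos(np.pi / 6), np.sin(np.pi / 6)
px = 40 + 12 * np.cos(t) * ca - 7 * np.sin(t) * sa
py = 25 + 12 * np.cos(t) * sa + 7 * np.sin(t) * ca
px = px + 0.2 * rng.standard_normal(t.size)
py = py + 0.2 * rng.standard_normal(t.size)

# Least-squares conic fit: a x^2 + b xy + c y^2 + d x + e y + f = 0.
# The coefficient vector is the right singular vector of the design
# matrix associated with its smallest singular value.
D = np.column_stack([px**2, px * py, py**2, px, py, np.ones_like(px)])
a, b, c, d, e, f = np.linalg.svd(D)[2][-1]

# The ellipse center is where the conic's gradient vanishes.
center = np.linalg.solve([[2 * a, b], [b, 2 * c]], [-d, -e])
print(center.round(1))   # close to the true center (40, 25)
```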

Cheng, Yang; Johnson, Andrew E.; Matthies, Larry H.

2009-01-01

198

MATHEMATICAL METHODS IN MEDICAL IMAGE PROCESSING  

PubMed Central

In this paper, we describe some central mathematical problems in medical imaging. The subject has been undergoing rapid changes driven by better hardware and software. Much of the software is based on novel methods utilizing geometric partial differential equations in conjunction with standard signal/image processing techniques as well as computer graphics facilitating man/machine interactions. As part of this enterprise, researchers have been trying to base biomedical engineering principles on rigorous mathematical foundations for the development of software methods to be integrated into complete therapy delivery systems. These systems support the more effective delivery of many image-guided procedures such as radiation therapy, biopsy, and minimally invasive surgery. We will show how mathematics may impact some of the main problems in this area, including image enhancement, registration, and segmentation. PMID:23645963

ANGENENT, SIGURD; PICHON, ERIC; TANNENBAUM, ALLEN

2013-01-01

199

Image processing in swallowing and speech research.  

PubMed

An image processing system for application to studies of the temporal and spatial parameters of movement during swallowing and speech is described. Image sequences from videotape are digitized for computerized manipulation and analysis in an attempt to improve on conventional visual inspection. The system is "interactive" or "event-driven": after executing a function, the computer waits for guidance from the user who controls the program through keyboard and mouse input, selecting options from menus and responding to prompts. The analyst alters image clarity by the application of filters and heightens contrast through video enhancement. A technique called "remapping" reduces head motion and provides uniform spatial scaling. Animated sequences of images are used, as opposed to frame-by-frame analysis, to preserve temporal context and increase efficiency of measurement. Low cost off-the-shelf personal computer hardware is used along with original software tailored to the application. PMID:1884636
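
The filtering and contrast-enhancement operations described can be sketched in numpy (a generic 3x3 mean filter and percentile stretch, assumed for illustration rather than taken from the system described):

```python
import numpy as np

def smooth(frame):
    """3x3 mean filter to reduce frame noise (edges handled by padding)."""
    p = np.pad(frame.astype(float), 1, mode="edge")
    h, w = frame.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def contrast_stretch(frame, lo_pct=2, hi_pct=98):
    """Linearly remap gray levels so the given percentiles span 0..255."""
    lo, hi = np.percentile(frame, [lo_pct, hi_pct])
    out = (frame - lo) / max(hi - lo, 1e-9)
    return np.clip(out * 255.0, 0, 255).astype(np.uint8)

# Toy low-contrast "video frame" of mid-range gray levels.
rng = np.random.default_rng(2)
frame = (100 + 30 * rng.random((48, 64))).astype(np.uint8)
enhanced = contrast_stretch(smooth(frame))
print(enhanced.min(), enhanced.max())   # prints 0 255: full range used
```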

Dengel, G; Robbins, J; Rosenbek, J C

1991-01-01

200

FITSH: Software Package for Image Processing  

NASA Astrophysics Data System (ADS)

FITSH provides a standalone environment for analysis of data acquired by imaging astronomical detectors. The package provides utilities both for the full pipeline of subsequent related data processing steps (including image calibration, astrometry, source identification, photometry, differential analysis, low-level arithmetic operations, multiple image combinations, spatial transformations and interpolations, etc.) and for aiding the interpretation of the (mainly photometric and/or astrometric) results. The package also features a consistent implementation of photometry based on image subtraction, point spread function fitting and aperture photometry, and provides easy-to-use interfaces for comparisons and for picking the most suitable method for a particular problem. The utilities in the package are built on top of the commonly used UNIX/POSIX shells (hence the name of the package), therefore both frequently used and well-documented tools for such environments can be exploited, and managing massive amounts of data is rather convenient.
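
Aperture photometry, one of the methods the package implements, can be sketched in a few lines of numpy (a toy sketch on a synthetic Gaussian star; FITSH's actual photometry utilities are far more capable):

```python
import numpy as np

def aperture_photometry(img, cx, cy, r_ap, r_in, r_out):
    """Sum the flux inside a circular aperture and subtract the median
    sky level estimated in a surrounding annulus."""
    yy, xx = np.indices(img.shape)
    d = np.hypot(xx - cx, yy - cy)
    ap = d <= r_ap
    sky = (d >= r_in) & (d <= r_out)
    return img[ap].sum() - np.median(img[sky]) * ap.sum()

# Synthetic frame: a Gaussian star of total flux 1000 on a flat sky of 10.
n, flux, sigma = 64, 1000.0, 2.0
yy, xx = np.indices((n, n))
star = np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / (2 * sigma ** 2))
star *= flux / star.sum()                 # normalize: star.sum() == flux
img = star + 10.0                         # add the sky background

est = aperture_photometry(img, 32, 32, r_ap=6, r_in=10, r_out=16)
print(round(est))   # close to 1000 (a 3-sigma aperture misses ~1%)
```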

Pál, András

2011-11-01

201

Digital image processing of vascular angiograms  

NASA Technical Reports Server (NTRS)

The paper discusses the estimation of the degree of atherosclerosis in the human femoral artery through the use of a digital image processing system for vascular angiograms. The film digitizer uses an electronic image dissector camera to scan the angiogram and convert the recorded optical density information into a numerical format. Another processing step involves locating the vessel edges from the digital image. The computer has been programmed to estimate vessel abnormality through a series of measurements, some derived primarily from the vessel edge information and others from optical density variations within the lumen shadow. These measurements are combined into an atherosclerosis index, which is found in a post-mortem study to correlate well with both visual and chemical estimates of atherosclerotic disease.
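
The vessel-edge-location step can be sketched on a single density scanline, taking the walls at the extrema of the first derivative (a generic edge criterion assumed for illustration; the paper's own edge operator is not reproduced here):

```python
import numpy as np

# One scanline across the vessel shadow: optical density rises inside
# the lumen (edges placed at x = 30 and x = 70 for this toy profile).
x = np.arange(100)
profile = 0.2 + 0.8 / (1 + np.exp(-(x - 30))) - 0.8 / (1 + np.exp(-(x - 70)))

# The vessel walls sit at the extrema of the density gradient.
grad = np.gradient(profile)
left, right = grad.argmax(), grad.argmin()
print(left, right, right - left)   # prints 30 70 40
```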

Selzer, R. H.; Beckenbach, E. S.; Blankenhorn, D. H.; Crawford, D. W.; Brooks, S. H.

1975-01-01

202

Web-based document image processing  

NASA Astrophysics Data System (ADS)

Increasing numbers of research libraries are turning to the Internet for electronic interlibrary loan and for document delivery to patrons. This has been made possible through the widespread adoption of software such as Ariel and DocView. Ariel, a product of the Research Libraries Group, converts paper-based documents to monochrome bitmapped images and delivers them over the Internet. The National Library of Medicine's DocView is primarily designed for library patrons. While libraries and their patrons are beginning to reap the benefits of this new technology, barriers exist, e.g., differences in image file format, that lead to difficulties in the use of library document information. To research how to overcome such barriers, the Communications Engineering Branch of the Lister Hill National Center for Biomedical Communications, an R&D division of NLM, has developed a web site called the DocMorph Server. This is part of an ongoing intramural R&D program in document imaging that has spanned many aspects of electronic document conversion and preservation, Internet document transmission, and document usage. The DocMorph Server web site is designed to fill two roles. First, in a role that will benefit both libraries and their patrons, it allows Internet users to upload scanned image files for conversion to alternative formats, thereby enabling wider delivery and easier usage of library document information. Second, the DocMorph Server provides the design team an active test bed for evaluating the effectiveness and utility of new document image processing algorithms and functions, so that they may be evaluated for possible inclusion in other image processing software products being developed at NLM or elsewhere. This paper describes the design of the prototype DocMorph Server and the image processing functions being implemented on it.

Walker, Frank L.; Thoma, George R.

1999-12-01

203

Limiting liability via high resolution image processing  

SciTech Connect

The utilization of high resolution image processing allows forensic analysts and visualization scientists to assist detectives by enhancing field photographs, and by providing the tools and training to increase the quality and usability of field photos. Through the use of digitized photographs and computerized enhancement software, field evidence can be obtained and processed as "evidence ready", even in poor lighting and shadowed conditions or darkened rooms. These images, which are most often unusable when taken with standard camera equipment, can be shot in the worst of photographic conditions and be processed into usable evidence. Visualization scientists have taken digital photographic image processing and moved crime scene photography into the technology age. The use of high resolution technology will assist law enforcement in making better use of crime scene photography and positive identification of prints. Valuable courtroom and investigation time can be saved and better served by this accurate, performance-based process. Inconclusive evidence does not lead to convictions. Enhancement of photographic capability helps solve one major problem with crime scene photos: images that, if taken with standard equipment and without the benefit of enhancement software, would be inconclusive, thus allowing guilty parties to go free for lack of evidence.

Greenwade, L.E.; Overlin, T.K.

1996-12-31

204

Progressive band processing for hyperspectral imaging  

NASA Astrophysics Data System (ADS)

Hyperspectral imaging has emerged as an image processing technique in many applications. Hyperspectral data is so called mainly because of the massive amount of information provided by the hundreds of spectral bands that can be used for data analysis. However, due to very high band-to-band correlation, much of this information may also be redundant. Consequently, how to effectively and best utilize such rich spectral information becomes very challenging. One general approach is data dimensionality reduction, which can be performed by data compression techniques, such as data transforms, and data reduction techniques, such as band selection. This dissertation presents a new area in hyperspectral imaging, to be called progressive hyperspectral imaging, which has not been explored in the past. Specifically, it derives a new theory, called Progressive Band Processing (PBP) of hyperspectral data, that can significantly reduce computing time and can also be realized in real time. It is particularly suited for application areas such as hyperspectral data communications and transmission, where data can be communicated and transmitted progressively through spectral or satellite channels with limited data storage. Most importantly, PBP allows users to screen preliminary results before deciding to continue with processing the complete data set. These advantages benefit users of hyperspectral data by reducing processing time and increasing the timeliness of crucial decisions made on the data, such as identifying key intelligence information when the required response time is short.
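
The band-by-band recursive flavor of PBP can be illustrated with a toy numpy sketch (the statistics chosen, a running mean and a running brightest-band index, are illustrative stand-ins, not the dissertation's actual detectors):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hyperspectral cube flattened to (pixels, bands); bands arrive one at
# a time over a band-sequential channel.
n_pix, n_bands = 500, 30
cube = rng.random((n_pix, n_bands))

# Progressive idea: update per-pixel results recursively as each band
# arrives instead of waiting for the whole cube.
mean = np.zeros(n_pix)
best = np.zeros(n_pix, dtype=int)
best_val = np.full(n_pix, -np.inf)
for b in range(n_bands):
    band = cube[:, b]                    # the newly received band
    mean += (band - mean) / (b + 1)      # recursive mean update
    upd = band > best_val
    best_val[upd] = band[upd]
    best[upd] = b

print(np.allclose(mean, cube.mean(axis=1)))       # prints True
print(np.array_equal(best, cube.argmax(axis=1)))  # prints True
```

After every band the intermediate `mean` and `best` are valid preliminary results, which is what lets a user screen them before committing to the full data set.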

Schultz, Robert C.

205

Stochastic processes, estimation theory and image enhancement  

NASA Technical Reports Server (NTRS)

An introductory account of stochastic processes, estimation theory, and image enhancement is presented. The book is primarily intended for first-year graduate students and practicing engineers and scientists whose work requires an acquaintance with the theory. Fundamental concepts of probability were reviewed that are required to support the main topics. The appendices discuss the remaining mathematical background.

Assefi, T.

1978-01-01

206

CMSC 426: Image Processing (Computer Vision)  

E-print Network

CMSC 426: Image Processing (Computer Vision), David Jacobs. Today's class: what is vision; what is computer vision; layout of the class. Vision: "to know what is where, by looking" (Marr): where, and what. Why is vision interesting? Psychology: roughly 50% of cerebral cortex is devoted to vision. Vision is how

Jacobs, David

207

CT imaging of wet specimens from a pathology museum: How to build a "virtual museum" for radiopathological correlation teaching.  

PubMed

X-rays and CT have been used to examine specimens such as human remains, mummies and formalin-fixed specimens. However, CT has not been used to study formalin-fixed wet specimens within their containers. The purpose of our study is firstly to demonstrate the role of CT as a non-destructive imaging method for the study of wet pathological specimens and secondly to use the CT data as a method for teaching pathological and radiological correlation. CT scanning of 31 musculoskeletal specimens from a pathology museum was carried out. Images were reconstructed using both soft-tissue and bone algorithms. Further processing of the data produced coronal and sagittal reformats of each specimen. The container and storage solution were manually removed using Volume Viewer Voxtool software to produce a 3D reconstruction of each specimen. Photographs of each specimen (container and close-up) were displayed alongside selected coronal, sagittal, 3D reconstructions and cine sequences in a specially designed computer program. CT is a non-destructive imaging modality for building didactic materials from wet specimens in a Pathology Museum, for teaching radiological and pathological correlation. PMID:16814293

Chhem, R K; Woo, J K H; Pakkiri, P; Stewart, E; Romagnoli, C; Garcia, B

2006-01-01

208

MedPix Radiology Teaching Files & Medical Image Database  

NSDL National Science Digital Library

For students of radiology and related fields, this database will be a most welcome find. Created by the team behind MedPix, the site includes thousands of radiology images designed to be used as educational tools. Visitors can click on the Picture of the Day to get started, and then head on over to the Weekly Quiz to test their mettle. The Radiology Tutor section includes nine different tutorials that cover topics such as Trauma, Vascular, Technique, and General Principles. The Brain Lesion Locator can help visitors learn about identifying different brain lesions via radiological images. The site is rounded out by seven different practice exams that will help visitors strengthen their basic understanding of radiological images.

2012-02-17

209

Imaging college educators (abstract only)  

Microsoft Academic Search

Within computing, the imaging field includes computer vision, image understanding, and image processing. While much research and teaching is done at the graduate level, the typical imaging educator at an undergraduate institution is the only specialist in his or her department. This BOF brings together educators who currently teach imaging courses or may be interested in expanding curricular offerings. We

Jerod Weinman; Ellen Walker

2012-01-01

210

Subband/transform functions for image processing  

NASA Technical Reports Server (NTRS)

Functions for image data processing written for use with the MATLAB(TM) software package are presented. These functions provide the capability to transform image data with block transformations (such as the Walsh Hadamard) and to produce spatial frequency subbands of the transformed data. Block transforms are equivalent to simple subband systems. The transform coefficients are reordered using a simple permutation to give subbands. The low frequency subband is a low resolution version of the original image, while the higher frequency subbands contain edge information. The transform functions can be cascaded to provide further decomposition into more subbands. If the cascade is applied to all four of the first stage subbands (in the case of a four band decomposition), then a uniform structure of sixteen bands is obtained. If the cascade is applied only to the low frequency subband, an octave structure of seven bands results. Functions for the inverse transforms are also given. These functions can be used for image data compression systems. The transforms do not in themselves produce data compression, but prepare the data for quantization and compression. Sample quantization functions for subbands are also given. A typical compression approach is to subband the image data, quantize it, then use statistical coding (e.g., run-length coding followed by Huffman coding) for compression. Contour plots of image data and subbanded data are shown.
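
The equivalence of block transforms and simple subband systems described above can be sketched for the 2x2 Walsh-Hadamard case (a numpy sketch, not the original MATLAB functions):

```python
import numpy as np

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)  # 2-point Hadamard

def subband2x2(img):
    """2x2 Walsh-Hadamard block transform, with the coefficients
    permuted into four subbands (LL, LH, HL, HH)."""
    h, w = img.shape
    blocks = img.reshape(h // 2, 2, w // 2, 2).transpose(0, 2, 1, 3)
    coef = H @ blocks @ H.T               # transform every 2x2 block
    # Same-position coefficients across blocks form one subband.
    return [coef[:, :, i, j] for i in (0, 1) for j in (0, 1)]

def inverse2x2(bands):
    ll, lh, hl, hh = bands
    hb, wb = ll.shape
    coef = np.stack([np.stack([ll, lh], -1), np.stack([hl, hh], -1)], -2)
    blocks = H.T @ coef @ H               # H is orthogonal: inverse = H^T
    return blocks.transpose(0, 2, 1, 3).reshape(2 * hb, 2 * wb)

rng = np.random.default_rng(4)
img = rng.random((8, 8))
bands = subband2x2(img)
# The LL subband is a half-resolution version (2x each block mean)...
print(np.allclose(bands[0], 2 * img.reshape(4, 2, 4, 2).mean((1, 3))))
# ...and the transform is perfectly invertible.
print(np.allclose(inverse2x2(bands), img))
```

Cascading `subband2x2` on the LL band alone would give the octave structure mentioned in the abstract; cascading it on all four bands gives the uniform structure.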

Glover, Daniel

1993-01-01

211

Images and the History Lecture: Teaching the History Channel Generation  

ERIC Educational Resources Information Center

No sensible historian would argue that using images in history lectures is a pedagogical waste of time. All people seem to accept the idea that visual elements (paintings, photographs, films, maps, charts, etc.) enhance the retention of historical information and add greatly to student enjoyment of the subject. However, there seems to be very…

Coohill, Joseph

2006-01-01

212

The constructive use of images in medical teaching: a literature review  

PubMed Central

This literature review illustrates the various ways images are used in teaching, the evidence appertaining to their use, and advice regarding permissions and use. Four databases were searched; 23 papers were retained out of 135 abstracts found for the study. Images are frequently used to motivate an audience to listen to a lecture or to note key medical findings. Images can promote observation skills when linked with learning outcomes, but the timing and relevance of the images is important – it appears they must be congruent with the dialogue. Student reflection can be encouraged by asking students to draw their own impressions of a course as an integral part of course feedback. Careful, structured use of images improves attention, cognition, reflection and possibly memory retention. PMID:22666530

Norris, Elizabeth M

2012-01-01

213

Automated synthesis of image processing procedures using AI planning techniques  

NASA Technical Reports Server (NTRS)

This paper describes the Multimission VICAR (Video Image Communication and Retrieval) Planner (MVP) (Chien 1994) system, which uses artificial intelligence planning techniques (Iwasaki & Friedland, 1985, Pemberthy & Weld, 1992, Stefik, 1981) to automatically construct executable complex image processing procedures (using models of the smaller constituent image processing subprograms) in response to image processing requests made to the JPL Multimission Image Processing Laboratory (MIPL). The MVP system allows the user to specify the image processing requirements in terms of the various types of correction required. Given this information, MVP derives unspecified required processing steps and determines appropriate image processing programs and parameters to achieve the specified image processing goals. This information is output as an executable image processing program which can then be executed to fill the processing request.

Chien, Steve; Mortensen, Helen

1994-01-01

214

IMAGE 100: The interactive multispectral image processing system  

NASA Technical Reports Server (NTRS)

The need for rapid, cost-effective extraction of useful information from vast quantities of multispectral imagery available from aircraft or spacecraft has resulted in the design, implementation and application of a state-of-the-art processing system known as IMAGE 100. Operating on the general principle that all objects or materials possess unique spectral characteristics or signatures, the system uses this signature uniqueness to identify similar features in an image by simultaneously analyzing signatures in multiple frequency bands. Pseudo-colors, or themes, are assigned to features having identical spectral characteristics. These themes are displayed on a color CRT, and may be recorded on tape, film, or other media. The system was designed to incorporate key features such as interactive operation, user-oriented displays and controls, and rapid-response machine processing. Owing to these features, the user can readily control and/or modify the analysis process based on his knowledge of the input imagery. Effective use can be made of conventional photographic interpretation skills and state-of-the-art machine analysis techniques in the extraction of useful information from multispectral imagery. This approach results in highly accurate multitheme classification of imagery in seconds or minutes rather than the hours often involved in processing using other means.
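
The signature-matching principle can be sketched as a minimum-distance classifier (a simplified stand-in for IMAGE 100's multi-band signature analysis; the signatures and band count below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)

# Reference spectral signatures (assumed 4-band sensor, 3 materials).
signatures = np.array([
    [0.10, 0.15, 0.40, 0.60],   # e.g. vegetation-like
    [0.30, 0.28, 0.25, 0.22],   # e.g. soil-like
    [0.05, 0.04, 0.03, 0.02],   # e.g. water-like
])

def classify(pixels, sigs):
    """Assign each pixel the theme whose signature is nearest
    (minimum Euclidean distance across all bands at once)."""
    d = np.linalg.norm(pixels[:, None, :] - sigs[None, :, :], axis=2)
    return d.argmin(axis=1)

# Synthetic scene: pixels are noisy copies of known signatures.
truth = rng.integers(0, 3, size=200)
pixels = signatures[truth] + 0.01 * rng.standard_normal((200, 4))
themes = classify(pixels, signatures)
print((themes == truth).mean())   # prints 1.0: every pixel themed correctly
```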

Schaller, E. S.; Towles, R. W.

1975-01-01

215

''Virtual Welding,'' a new aid for teaching Manufacturing Process Engineering  

NASA Astrophysics Data System (ADS)

Overcrowding in the classroom is a serious problem in universities, particularly in specialties that require a certain type of teaching practice. These practices often require expenditure on consumables and a space large enough to hold both the necessary materials and the materials that have already been used. Apart from the budget, another problem concerns the attention paid to each student. The use of simulation systems in the early learning stages of the welding technique can prove very beneficial thanks to the error-detection functions installed in the system, which provide the student with feedback during the execution of the practice session, and thanks to the significant savings in both consumables and energy.

Portela, José M.; Huerta, María M.; Pastor, Andrés; Álvarez, Miguel; Sánchez-Carrilero, Manuel

2009-11-01

216

Vector processing enhancements for real-time image analysis.  

SciTech Connect

A real-time image analysis system was developed for beam imaging diagnostics. An Apple Power Mac G5 with an Active Silicon LFG frame grabber was used to capture video images that were processed and analyzed. Software routines were created to utilize vector-processing hardware to reduce the time to process images as compared to conventional methods. These improvements allow for more advanced image processing diagnostics to be performed in real time.
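
The speed-up from vector processing comes from replacing per-pixel loops with whole-array operations; a numpy sketch of the same beam statistic computed both ways (the centroid statistic is an assumed example, not the cited diagnostic):

```python
import numpy as np

rng = np.random.default_rng(9)
frame = rng.random((120, 160))

# Scalar reference: per-pixel Python loop (what vector hardware replaces).
def centroid_loop(img):
    sy = sx = total = 0.0
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            v = img[y, x]
            total += v
            sy += v * y
            sx += v * x
    return sy / total, sx / total

# Vectorized version: whole-array operations, same arithmetic.
def centroid_vec(img):
    yy, xx = np.indices(img.shape)
    total = img.sum()
    return (img * yy).sum() / total, (img * xx).sum() / total

c1 = centroid_loop(frame)
c2 = centroid_vec(frame)
print(np.allclose(c1, c2))   # prints True: same beam centroid
```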

Shoaf, S.; APS Engineering Support Division

2008-01-01

217

Digital image processing of coal stream combustion  

E-print Network

homogeneous and burning time. However, when there are too many particles in a cloud, the results are different. The pyrolysis and burning rates are reduced due to the presence of other particles because of competition for heat and oxygen. Hence... particles, a particle collection probe for collecting particles and an image processing system for analyzing the flame structure. The particles introduced into the hot gases ignite and burn. The ash % of fired and collected particles is determined...

Gopalakrishnan, Chengappalli Periyasamy

1994-01-01

218

Digital image processing of vascular angiograms  

NASA Technical Reports Server (NTRS)

A computer image processing technique was developed to estimate the degree of atherosclerosis in the human femoral artery. With an angiographic film of the vessel as input, the computer was programmed to estimate vessel abnormality through a series of measurements, some derived primarily from the vessel edge information and others from optical density variations within the lumen shadow. These measurements were combined into an atherosclerosis index, which was found to correlate well with both visual and chemical estimates of atherosclerotic disease.

Selzer, R. H.; Blankenhorn, D. H.; Beckenbach, E. S.; Crawford, D. W.; Brooks, S. H.

1975-01-01

219

IPLIB (Image processing library) user's manual  

NASA Technical Reports Server (NTRS)

IPLIB is a collection of HP FORTRAN 77 subroutines and functions that facilitate the use of a COMTAL image processing system driven by an HP-1000 computer. It is intended for programmers who want to use the HP 1000 to drive the COMTAL Vision One/20 system. It is assumed that the programmer knows HP 1000 FORTRAN 77 or at least one FORTRAN dialect. It is also assumed that the programmer has some familiarity with the COMTAL Vision One/20 system.

Faulcon, N. D.; Monteith, J. H.; Miller, K.

1985-01-01

220

Using the medical image processing package, ImageJ, for astronomy  

E-print Network

At the most fundamental level, all digital images are just large arrays of numbers that can easily be manipulated by computer software. Specialized digital imaging software packages often use routines common to many different applications and fields of study. The freely available, platform independent, image-processing package ImageJ has many such functions. We highlight ImageJ's capabilities by presenting methods of processing sequences of images to produce a star trail image and a single high quality planetary image.
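The star-trail method described here reduces, at its core, to a per-pixel maximum over the image stack (ImageJ's Max-Intensity Z-projection), while the planetary method aligns and averages frames. A NumPy sketch on synthetic frames, with a drifting "star" standing in for real data:

```python
import numpy as np

# Synthetic sequence: a bright "star" drifting one pixel per frame.
frames = np.zeros((10, 64, 64), dtype=np.uint8)
for t in range(10):
    frames[t, 32, 10 + t] = 255

# Star-trail image: per-pixel maximum across the stack
# (equivalent to ImageJ's Image > Stacks > Z Project, Max Intensity).
trail = frames.max(axis=0)

# The trail spans all ten positions of the moving star.
assert (trail[32, 10:20] == 255).all()

# For a single high-quality planetary image one would instead align
# the frames and average them to suppress noise:
stacked = frames.astype(np.float64).mean(axis=0)
```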

Jennifer L. West; Ian D. Cameron

2006-11-21

221

Optical processing of imaging spectrometer data  

NASA Technical Reports Server (NTRS)

The data-processing problems associated with imaging spectrometer data are reviewed; new algorithms and optical processing solutions are advanced for this computationally intensive application. Optical decision net, directed graph, and neural net solutions are considered. Decision nets and mineral element determination of nonmixture data are emphasized here. A new Fisher/minimum-variance clustering algorithm is advanced; initialization using minimum-variance clustering is found to be preferred and fast. Tests on a 500-class problem show the excellent performance of this algorithm.
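The paper's exact Fisher/minimum-variance algorithm is not given in this record; as a hedged stand-in, the basic minimum-variance iteration (assign each sample to the nearest class mean, then recompute the means) can be sketched as:

```python
import numpy as np

def min_variance_cluster(X, init_centers, iters=20):
    """Basic minimum-variance iteration (k-means style): assign each
    sample to the nearest class mean, then recompute the means."""
    centers = init_centers.astype(float).copy()
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = d.argmin(axis=1)
        for j in range(len(centers)):
            if np.any(labels == j):          # skip empty classes
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Two well-separated synthetic "mineral classes" in a 2-D feature space
# (illustrative data, not imaging-spectrometer measurements):
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (50, 2)),
               rng.normal(5.0, 0.1, (50, 2))])
labels, centers = min_variance_cluster(X, init_centers=X[[0, 50]])

assert labels[0] != labels[50]      # the two classes are separated
```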

Liu, Shiaw-Dong; Casasent, David

1988-01-01

222

Color Image Processing and Object Tracking System  

NASA Technical Reports Server (NTRS)

This report describes a personal computer based system for automatic and semiautomatic tracking of objects on film or video tape, developed to meet the needs of the Microgravity Combustion and Fluids Science Research Programs at the NASA Lewis Research Center. The system consists of individual hardware components working under computer control to achieve a high degree of automation. The most important hardware components include 16-mm and 35-mm film transports, a high resolution digital camera mounted on an x-y-z micro-positioning stage, an S-VHS tapedeck, a Hi8 tapedeck, a video laserdisk, and a framegrabber. All of the image input devices are remotely controlled by a computer. Software was developed to integrate the overall operation of the system including device frame incrementation, grabbing of image frames, image processing of the object's neighborhood, locating the position of the object being tracked, and storing the coordinates in a file. This process is performed repeatedly until the last frame is reached. Several different tracking methods are supported. To illustrate the process, two representative applications of the system are described. These applications represent typical uses of the system and include tracking the propagation of a flame front and tracking the movement of a liquid-gas interface with extremely poor visibility.

Klimek, Robert B.; Wright, Ted W.; Sielken, Robert S.

1996-01-01

223

Development of the Instructional Model by Integrating Information Literacy in the Class Learning and Teaching Processes  

ERIC Educational Resources Information Center

This study was aimed at developing an instructional model by integrating information literacy in the instructional process of general education courses at an undergraduate level. The research query, "What is the teaching methodology that integrates information literacy in the instructional process of general education courses at an undergraduate…

Maitaouthong, Therdsak; Tuamsuk, Kulthida; Techamanee, Yupin

2011-01-01

224

The Design Studio as Teaching/Learning Medium--A Process-Based Approach  

ERIC Educational Resources Information Center

This article discusses a design studio teaching experience exploring the design process itself as a methodological tool. We consider the structure of important phases of the process that contain different levels of design thinking: conception, function and practical knowledge as well as the transitions from inception to construction. We show how…

Ozturk, Maya N.; Turkkan, Elif E.

2006-01-01

225

Teaching Students About the Process of Science: Using Google to Collect and Analyze Student Lab Measurements  

Microsoft Academic Search

The process of science necessarily includes critical analysis of uncertainty in repeated measurements. We demonstrate how the measurements that students make can be collected and analyzed in real time with Google Docs. Showing students how their measurements compare to the rest of the class provides a valuable opportunity to teach about uncertainty and the process of science. Student work can
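The kind of analysis the pooled class data supports can be sketched as follows; the pendulum-period readings below are invented for illustration:

```python
import numpy as np

# Hypothetical class data: each student measures a pendulum period (s).
measurements = np.array([2.01, 1.98, 2.05, 1.95, 2.02, 2.00, 2.07, 1.96])

mean = measurements.mean()
std = measurements.std(ddof=1)           # sample standard deviation
sem = std / np.sqrt(len(measurements))   # standard error of the mean

# A student's single reading is judged against the class spread,
# which is the teachable moment about measurement uncertainty:
my_reading = 2.04
within_spread = abs(my_reading - mean) < 2 * std
assert within_spread
```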

Kristen Larson; Jim Stewart

2010-01-01

226

Teaching with Pensive Images: Rethinking Curiosity in Paulo Freire's "Pedagogy of the Oppressed"  

ERIC Educational Resources Information Center

Often when the author is teaching philosophy of education, his students begin the process of inquiry by prefacing their questions with something along the lines of "I'm just curious, but ...." Why do teachers and students feel compelled to express their curiosity as "just" curiosity? Perhaps there is a slight embarrassment in proclaiming their…

Lewis, Tyson E.

2012-01-01

227

Teaching Fraunhofer diffraction via experimental and simulated images in the laboratory  

NASA Astrophysics Data System (ADS)

Diffraction is an important phenomenon introduced to Physics university students in a subject of Fundamentals of Optics. In addition, in the Physics Degree syllabus of the Universitat Autònoma de Barcelona, there is an elective subject in Applied Optics. In this subject, diverse diffraction concepts are discussed in-depth from different points of view: theory, experiments in the laboratory and computing exercises. In this work, we have focused on the process of teaching Fraunhofer diffraction through laboratory training. Our approach involves students working in small groups. They visualize and acquire some important diffraction patterns with a CCD camera, such as those produced by a slit, a circular aperture or a grating. First, each group calibrates the CCD camera, that is to say, they obtain the relation between distances in the diffraction plane in millimeters and on the computer screen in pixels. Afterwards, they measure the significant distances in the diffraction patterns and, using the appropriate diffraction formalism, they calculate the size of the analyzed apertures. Concomitantly, students grasp the convolution theorem in the Fourier domain by analyzing the diffraction of 2-D gratings of elemental apertures. Finally, the learners use specific software to simulate diffraction patterns of different apertures. They can control several parameters: shape, size and number of apertures, 1-D or 2-D gratings, wavelength, focal length of the lens or pixel size. Therefore, the program allows them to reproduce the images obtained experimentally, and to generate others by changing certain parameters. This software has been created in our research group, and it is freely distributed to the students in order to help their learning of diffraction. We have observed that these hands-on experiments help students to consolidate their theoretical knowledge of diffraction in a pedagogical and stimulating learning process.
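The slit-width recovery the students perform can be illustrated with a simulated single-slit Fraunhofer pattern, I(x) ∝ sinc²(a·x / (λ·f)); all numerical values below (He-Ne wavelength, 100 µm slit, 500 mm lens, 10 µm pixels) are illustrative, not taken from the laboratory described:

```python
import numpy as np

wavelength = 632.8e-9    # m, He-Ne laser (illustrative)
slit_width = 100e-6      # m
focal      = 0.5         # m, lens focal length
pixel_size = 10e-6       # m, from the students' mm-per-pixel calibration

# Pixel-centre positions across the CCD in the focal plane:
x = (np.arange(-512, 512) + 0.5) * pixel_size
u = slit_width * x / (wavelength * focal)
intensity = np.sinc(u) ** 2          # np.sinc(u) = sin(pi*u)/(pi*u)

# Students recover the slit width from the first diffraction minimum,
# located at x_min = lambda * f / a:
x_min_theory = wavelength * focal / slit_width
mask = (x > 0) & (x < 2 * x_min_theory)
x_min_measured = x[mask][np.argmin(intensity[mask])]
slit_estimate = wavelength * focal / x_min_measured

assert abs(x_min_measured - x_min_theory) <= pixel_size
```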

Peinado, Alba; Vidal, Josep; Escalera, Juan Carlos; Lizana, Angel; Campos, Juan; Yzuel, Maria

2012-10-01

228

FITSH- a software package for image processing  

NASA Astrophysics Data System (ADS)

In this paper we describe the main features of the software package named FITSH, intended to provide a standalone environment for analysis of data acquired by imaging astronomical detectors. The package both provides utilities for the full pipeline of subsequent related data-processing steps (including image calibration, astrometry, source identification, photometry, differential analysis, low-level arithmetic operations, multiple-image combinations, spatial transformations and interpolations) and aids the interpretation of the (mainly photometric and/or astrometric) results. The package also features a consistent implementation of photometry based on image subtraction, point spread function fitting and aperture photometry and provides easy-to-use interfaces for comparisons and for picking the most suitable method for a particular problem. The set of utilities found in this package is built on top of the commonly used UNIX/POSIX shells (hence the name of the package); therefore, both frequently used and well-documented tools for such environments can be exploited and managing a massive amount of data is rather convenient.

Pál, András.

2012-04-01

229

The Airborne Ocean Color Imager - System description and image processing  

NASA Technical Reports Server (NTRS)

The Airborne Ocean Color Imager was developed as an aircraft instrument to simulate the spectral and radiometric characteristics of the next generation of satellite ocean color instrumentation. Data processing programs have been developed as extensions of the Coastal Zone Color Scanner algorithms for atmospheric correction and bio-optical output products. The latter include several bio-optical algorithms for estimating phytoplankton pigment concentration, as well as one for the diffuse attenuation coefficient of the water. Additional programs have been developed to geolocate these products and remap them into a georeferenced data base, using data from the aircraft's inertial navigation system. Examples illustrate the sequential data products generated by the processing system, using data from flightlines near the mouth of the Mississippi River: from raw data to atmospherically corrected data, to bio-optical data, to geolocated data, and, finally, to georeferenced data.
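The CZCS-heritage bio-optical step can be sketched as a two-band ratio algorithm of the classic power-law form; the coefficients below are illustrative textbook values, not the ones implemented in the AOCI processing chain:

```python
import numpy as np

def pigment_band_ratio(r443, r550, a=1.13, b=-1.71):
    """CZCS-style two-band ratio estimate of phytoplankton pigment
    concentration (mg m^-3), C = a * (R443/R550)**b. The coefficients
    a and b are illustrative, not those of the AOCI system."""
    return a * (np.asarray(r443) / np.asarray(r550)) ** b

# Atmospherically corrected water-leaving radiance ratios, 3 pixels:
r443 = np.array([0.9, 0.5, 0.2])
r550 = np.array([0.3, 0.3, 0.3])
pigment = pigment_band_ratio(r443, r550)

# Bluer water (high 443/550 ratio) implies lower pigment concentration.
assert pigment[0] < pigment[1] < pigment[2]
```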

Wrigley, Robert C.; Slye, Robert E.; Klooster, Steven A.; Freedman, Richard S.; Carle, Mark; Mcgregor, Lloyd F.

1992-01-01

230

Images of a 'good nurse' presented by teaching staff.  

PubMed

Nursing is at the same time a vocation, a profession and a job. By nature, nursing is a moral endeavor, and being a 'good nurse' is an issue and an aspiration for professionals. The aim of our qualitative research project carried out with 18 nurse teachers at a university nursing school in Brazil was to identify the ethical image of nursing. In semistructured interviews the participants were asked to choose one of several pictures, to justify their choice and explain what they meant by an ethical nurse. Five different perspectives were revealed: good nurses fulfill their duties correctly; they are proactive patient advocates; they are prepared and available to welcome others as persons; they are talented, competent, and carry out professional duties excellently; and they combine authority with power sharing in patient care. The results point to a transition phase from a historical introjection of religious values of obedience and service to a new sense of a secular, proactive, scientific and professional identity. PMID:21097967

de Araujo Sartorio, Natalia; Pavone Zoboli, Elma Lourdes Campos

2010-11-01

231

Image Modeling with Parametric Texture Sources for Design and Analysis of Image Processing Algorithms  

E-print Network

A statistical image model is proposed to facilitate the design and analysis of image processing algorithms. A mean-removed image neighborhood is modeled as a scaled segment of a hypothetical texture source, char...

Girod, Bernd

232

Development of the SOFIA Image Processing Tool  

NASA Technical Reports Server (NTRS)

The Stratospheric Observatory for Infrared Astronomy (SOFIA) is a Boeing 747SP carrying a 2.5 meter infrared telescope capable of operating at altitudes between twelve and fourteen kilometers, which is above more than 99 percent of the water vapor in the atmosphere. The ability to make observations above most water vapor, coupled with the ability to make observations from anywhere, anytime, makes SOFIA one of the world's premier infrared observatories. SOFIA uses three visible light CCD imagers to assist in pointing the telescope. The data from these imagers are stored in archive files, as is housekeeping data, which contains information such as boresight and area-of-interest locations. A tool that could both extract and process data from the archive files was developed.

Adams, Alexander N.

2011-01-01

233

Color Correction in the Framework of Color Logarithmic Image Processing  

E-print Network

Hélène Gouinaud, Yann ... and with the multiplicative image formation model. In this paper, the so-called Color Logarithmic Image Processing (CoLIP) framework is introduced. This novel framework expands the LIP theory to color images in the context

Paris-Sud XI, Université de

234

Image Processing Start simple: look at small windows  

E-print Network

Goal: start simple by looking at small windows; identify useful image structures ('clues' useful for recognizing objects); eliminate irrelevant aspects of image appearance. We don't "see" most information in an image.

Oliensis, John

235

A New Image Processing and GIS Package  

NASA Technical Reports Server (NTRS)

The image processing and GIS package ELAS was developed during the 1980's by NASA. It proved to be a popular, influential and powerful tool for the manipulation of digital imagery. Before the advent of PCs it was used by hundreds of institutions, mostly schools. It is the unquestioned, direct progenitor of two commercial GIS remote sensing packages, ERDAS and MapX, and influenced others, such as PCI. Its power was demonstrated by its use for work far beyond its original purpose: it has processed several different types of medical imagery, photomicrographs of rock, images of turtle flippers and numerous other esoteric imagery. Although development largely stopped in the early 1990's, the package still offers as much or more power and flexibility than any other roughly comparable package, public or commercial. It is a huge body of code, representing more than a decade of work by full-time, professional programmers. The current versions all have several deficiencies compared to current software standards and usage, notably the strictly command-line interface. In order to support their research needs, the authors are in the process of fundamentally changing ELAS, and in the process greatly increasing its power, utility, and ease of use. The new software is called ELAS II. This paper discusses the design of ELAS II.

Rickman, D.; Luvall, J. C.; Cheng, T.

1998-01-01

236

The Effects of Image-Based Concept Mapping on the Learning Outcomes and Cognitive Processes of Mobile Learners  

ERIC Educational Resources Information Center

The purpose of this study was to investigate the effects of different teaching strategies (text-based concept mapping vs. image-based concept mapping) on the learning outcomes and cognitive processes of mobile learners. Eighty-six college freshmen enrolled in the "Local Area Network Planning and Implementation" course taught by the first author…

Yen, Jung-Chuan; Lee, Chun-Yi; Chen, I-Jung

2012-01-01

237

Research on pavement crack recognition methods based on image processing  

NASA Astrophysics Data System (ADS)

In order to briefly review and analyze pavement crack recognition methods and to identify the problems that currently exist in pavement crack image processing, popular crack image processing methods such as the neural network method, the morphology method, the fuzzy logic method and traditional image processing are discussed, and some effective solutions to those problems are presented.
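The morphology method mentioned in this record can be sketched with SciPy: threshold the dark pixels, then apply a morphological opening with an elongated structuring element so that line-like cracks survive while isolated specks are removed. The data, threshold, and element size below are all illustrative:

```python
import numpy as np
from scipy import ndimage

# Synthetic pavement patch: bright surface, one dark crack, one speck.
rng = np.random.default_rng(0)
img = np.full((100, 100), 200.0) + rng.normal(0, 5, (100, 100))
img[50, 10:90] -= 120          # horizontal crack
img[20, 20] -= 120             # isolated dark speck (noise)

# 1. Threshold: cracks are darker than the pavement surface.
dark = img < 140

# 2. Opening with a line element keeps elongated cracks, drops specks.
crack = ndimage.binary_opening(dark, structure=np.ones((1, 5)))

labels, n = ndimage.label(crack)
assert n == 1                            # only the crack survives
assert crack[50, 40] and not crack[20, 20]
```

In practice a crack has arbitrary orientation, so several oriented line elements (or a rank/area-opening) would be combined; this single-orientation version just shows the principle.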

Cai, Yingchun; Zhang, Yamin

2011-06-01

238

Using NASA Space Imaging Technology to Teach Earth and Sun Topics  

NASA Astrophysics Data System (ADS)

We teach an experimental college-level course, directed toward elementary education majors, emphasizing "hands-on" activities that can be easily applied to the elementary classroom. This course, Physics 240: "The Sun-Earth Connection" includes various ways to study selected topics in physics, earth science, and basic astronomy. Our lesson plans and EPO materials make extensive use of NASA imagery and cover topics in magnetism; the solar photospheric, chromospheric, and coronal spectra; and earth science and climate. In addition, we are developing and will cover topics on ecosystem structure, biomass and water on Earth. We strive to free the non-science undergraduate from the "fear of science" and replace it with the excitement of science, such that these future teachers will carry this excitement to their future students. Hands-on experiments, computer simulations, analysis of real NASA data, and vigorous seminar discussions are blended in an inquiry-driven curriculum to instill a confident understanding of basic physical science and modern, effective methods for teaching it. The course also demonstrates ways in which scientific thinking and hands-on activities can be implemented in the classroom. Most of the topics were selected using the National Science Standards and National Mathematics Standards that are addressed in grades K-8. The course focuses on helping education majors: 1) Build knowledge of scientific concepts and processes; 2) Understand the measurable attributes of objects and the units and methods of measurement; 3) Conduct data analysis (collecting, organizing, and presenting scientific data, and predicting results); 4) Use hands-on approaches to teach science; 5) Be familiar with Internet science teaching resources. Here we share our experiences and the challenges we face while teaching this course.

Verner, E.; Bruhweiler, F. C.; Long, T.

2011-12-01

239

Using Image Processing to Determine Emphysema Severity  

NASA Astrophysics Data System (ADS)

Currently X-rays and computerized tomography (CT) scans are used to detect emphysema, but other tests are required to accurately quantify the amount of lung that has been affected by the disease. These images clearly show if a patient has emphysema, but are unable, by visual scan alone, to quantify the degree of the disease, as it presents as subtle, dark spots on the lung. Our goal is to use these CT scans to accurately diagnose and determine emphysema severity levels in patients. This will be accomplished by performing several different analyses of CT scan images of several patients representing a wide range of severity of the disease. In addition to analyzing the original CT data, this process will convert the data to one- and two-bit images and will then examine the deviation from a normal distribution curve to determine skewness. Our preliminary results show that this method of assessment appears to be more accurate and robust than the currently utilized methods, which involve looking at percentages of radiodensities in the air passages of the lung.
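The skewness referred to here is the third standardized moment of the CT density histogram; emphysema adds low-density voxels and so skews the distribution. A sketch with invented Hounsfield-unit samples, not patient data:

```python
import numpy as np

def skewness(values):
    """Third standardized moment of a sample; emphysematous lungs add
    very low-density voxels, pulling the histogram left-skewed."""
    v = np.asarray(values, dtype=float)
    d = v - v.mean()
    return (d ** 3).mean() / (d ** 2).mean() ** 1.5

# Illustrative Hounsfield-unit samples (not real patient data):
rng = np.random.default_rng(0)
normal_lung = rng.normal(-800, 50, 10000)
# Emphysema adds a tail of very low-density voxels near -950 HU:
emphysema = np.concatenate([normal_lung, rng.normal(-950, 20, 3000)])

assert abs(skewness(normal_lung)) < 0.15        # near-symmetric
assert skewness(emphysema) < skewness(normal_lung)  # left-skewed
```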

McKenzie, Alexander; Sadun, Alberto

2010-10-01

240

Teaching Writing Using the Process-Oriented Approach.  

ERIC Educational Resources Information Center

This study had three objectives: (1) to examine and describe factors that help to create a positive attitude toward learning; (2) to examine and describe factors that led to students' participation in the process-oriented approach; and (3) to examine and describe perceptions and experiences that students had involving the process-oriented…

Westervelt, Lisa

241

Effects of Dictation/Word Processing Systems on Teaching Writing.  

ERIC Educational Resources Information Center

Because of dramatic changes in the technology of communication systems in business, industry, government, and the professions, college graduates are no longer writing the way they were taught to write. Instead of being physically engaged in a recursive pen-in-hand process, they are dictating their communications for word processing systems. A…

Halpern, Jeanne W.

242

Personalized Instruction, Group Process and the Teaching of Psychological Theories of Learning.  

ERIC Educational Resources Information Center

An innovative approach to teaching learning theory to undergraduates was tested by comparing a modified Personalized System of Instruction (PSI) group process class (n=19) to a traditional teacher-centered control class (n=32). Predictions were that academic performance and motivation would be improved by the PSI method, and student satisfaction…

DiScipio, William J.; Crohn, Joan

243

A National Research Survey of Technology Use in the BSW Teaching and Learning Process  

ERIC Educational Resources Information Center

The purpose of this descriptive-correlational research study was to assess the overall use of technology in the teaching and learning process (TLP) by BSW educators. The accessible and target population included all full-time, professorial-rank, BSW faculty in Council on Social Work Education--accredited BSW programs at land grant universities.…

Buquoi, Brittany; McClure, Carli; Kotrlik, Joseph W.; Machtmes, Krisanna; Bunch, J. C.

2013-01-01

244

ICCE/ICCAI 2000 Full & Short Papers (Teaching and Learning Processes).  

ERIC Educational Resources Information Center

This document contains the full and short papers on teaching and learning processes from ICCE/ICCAI 2000 (International Conference on Computers in Education/International Conference on Computer-Assisted Instruction) covering the following topics: a code restructuring tool to help scaffold novice programmers; efficient study of Kanji using…

2000

245

The Emergence of the Teaching/Learning Process in Preschoolers: Theory of Mind and Age Effect  

ERIC Educational Resources Information Center

This study analysed the gradual emergence of the teaching/learning process by examining theory of mind (ToM) acquisition and age effects in the preschool period. We observed five dyads performing a jigsaw task drawn from a previous study. Three stages were identified. In the first one, the teacher focuses on the execution of her/his own task…

Bensalah, Leila

2011-01-01

246

Optimizing the Teaching-Learning Process Through a Linear Programming Model--Stage Increment Model.  

ERIC Educational Resources Information Center

An operations research method to optimize the teaching-learning process is introduced in this paper. In particular, a linear programing model is proposed which, unlike dynamic or control theory models, allows the computer to react to the responses of a learner in seconds or less. To satisfy the assumptions of linearity, the seemingly complicated…

Belgard, Maria R.; Min, Leo Yoon-Gee

247

Internet Access, Use and Sharing Levels among Students during the Teaching-Learning Process  

ERIC Educational Resources Information Center

The purpose of this study was to determine the awareness among students and levels regarding student access, use, and knowledge sharing during the teaching-learning process. The triangulation method was utilized in this study. The population of the research universe was 21,747. The student sample population was 1,292. Two different data collection…

Tutkun, Omer F.

2011-01-01

248

Learning and Teaching about the Nature of Science through Process Skills  

ERIC Educational Resources Information Center

This dissertation, a three-paper set, explored whether the process skills-based approach to nature of science instruction improves teachers' understandings, intentions to teach, and instructional practice related to the nature of science. The first paper examined the nature of science views of 53 preservice science teachers before and after a…

Mulvey, Bridget K.

2012-01-01

249

Using Multimedia and Cooperative Learning for Online Teaching of Signal Processing Techniques in Communication Systems  

Microsoft Academic Search

The manuscript describes how multimedia technology and cooperative learning were used to teach an online course, Signal Processing Techniques in Communication Systems (SP/CS). Students study online presentations enhanced by multimedia animations and perception quizzes, discuss problems using an asynchronous conferencing system, and complete cooperative laboratory simulations based on Matlab. A locally designed course management system (CMS) provides access

Amir Asif

250

Three Strategies for Intensifying Student Involvement in the Teaching/Learning Process  

ERIC Educational Resources Information Center

This article describes the library media program at John Glenn Elementary School (San Antonio, Texas). The program's goal was to increase student involvement in the teaching/learning process through a library media program that would generate excitement, intensity, and commitment. The three strategies they employed to achieve this goal were: (1)…

McGuire, Patience Lea

2005-01-01

251

Exploring the Process of Integrating the Internet into English Language Teaching  

ERIC Educational Resources Information Center

The present paper explores the process of integrating the Internet into the field of English language teaching in the light of the following points: the general importance of the Internet in our everyday lives shedding some light on the increasing importance of the Internet as a new innovation in our modern life; benefits of using the Internet in…

Abdallah, Mahmoud Mohammad Sayed

2007-01-01

252

Metaphors in Mathematics Classrooms: Analyzing the Dynamic Process of Teaching and Learning of Graph Functions  

ERIC Educational Resources Information Center

This article presents an analysis of a phenomenon that was observed within the dynamic processes of teaching and learning to read and elaborate Cartesian graphs for functions at high-school level. Two questions were considered during this investigation: What types of metaphors does the teacher use to explain the graphic representation of functions…

Font, Vicenc; Bolite, Janete; Acevedo, Jorge

2010-01-01

253

A Development of Environmental Education Teaching Process by Using Ethics Infusion for Undergraduate Students  

ERIC Educational Resources Information Center

Environmental problems were made by human beings because they lack environmental ethics. The sustainable solving of environmental problems must rely on a teaching process using an environmental ethics infusion method. The purposes of this research were to study knowledge of environment and environmental ethics through an environmental education…

Wongchantra, Prayoon; Boujai, Pairoj; Sata, Winyoo; Nuangchalerm, Prasart

2008-01-01

254

Understanding Reactions to Workplace Injustice through Process Theories of Motivation: A Teaching Module and Simulation  

ERIC Educational Resources Information Center

Management and organizational behavior students are often overwhelmed by the plethora of motivation theories they must master at the undergraduate level. This article offers a teaching module geared toward helping students understand how two major process theories of motivation, equity and expectancy theories and theories of organizational…

Stecher, Mary D.; Rosse, Joseph G.

2007-01-01

255

Using a Laboratory Simulator in the Teaching and Study of Chemical Processes in Estuarine Systems  

ERIC Educational Resources Information Center

The teaching of Chemical Oceanography in the Faculty of Marine and Environmental Sciences of the University of Cadiz (Spain) has been improved since 1994 by the employment of a device for the laboratory simulation of estuarine mixing processes and the characterisation of the chemical behaviour of many substances that pass through an estuary. The…

Garcia-Luque, E.; Ortega, T.; Forja, J. M.; Gomez-Parra, A.

2004-01-01

256

TGINF: GRAPHIC INFORMATION PROCESSING FOR TEACHING. P. Orús, G. Villarroya. Department of Mathematics, University Jaume I, Castellón, Spain  

E-print Network

TGINF: GRAPHIC INFORMATION PROCESSING FOR TEACHING. P. Orús, G. Villarroya. Department of Mathematics, University Jaume I, Castellón, Spain. ABSTRACT: We present TGINF, a software application for graphic processing in teaching. We will describe the software, the rationale behind it, how it works and an example

Paris-Sud XI, Université de

257

Effects of image processing on the detective quantum efficiency  

NASA Astrophysics Data System (ADS)

Digital radiography has gained popularity in many areas of clinical practice. This transition brings interest in advancing the methodologies for image quality characterization. However, as these methodologies have not been standardized, the results of such studies cannot be directly compared. The primary objective of this study was to standardize methodologies for image quality characterization. The secondary objective was to evaluate how the image processing algorithm affects the modulation transfer function (MTF), noise power spectrum (NPS), and detective quantum efficiency (DQE). Image performance parameters such as MTF, NPS, and DQE were evaluated using the RQA5 radiographic technique defined by the International Electrotechnical Commission (IEC 62220-1). Computed radiography (CR) images of a hand posterior-anterior (PA) view for measuring the signal-to-noise ratio (SNR), a slit image for measuring the MTF, and a white image for measuring the NPS were obtained, and various Multi-Scale Image Contrast Amplification (MUSICA) parameters were applied to each of the acquired images. In the results, all of the modified images considerably influenced the evaluation of SNR, MTF, NPS, and DQE. Images modified by post-processing had higher DQE than the MUSICA=0 image. This suggests that MUSICA values, as a post-processing step, affect the image when it is evaluated for image quality. In conclusion, the control parameters of image processing should be taken into account when characterizing image quality. The results of this study could serve as a baseline for evaluating imaging systems and their imaging characteristics by measuring MTF, NPS, and DQE.
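The three quantities are linked, in the IEC 62220-1 formulation, by DQE(f) = MTF(f)² / (q · NNPS(f)), where NNPS is the NPS normalized by the squared large-area signal and q is the photon fluence. A sketch with illustrative numbers (not measured values from this study):

```python
import numpy as np

def dqe(mtf, nnps, q):
    """IEC 62220-1-style estimate: DQE(f) = MTF(f)^2 / (q * NNPS(f)),
    with NNPS the signal-normalized noise power spectrum (mm^2)
    and q the photon fluence (photons / mm^2)."""
    return np.asarray(mtf) ** 2 / (q * np.asarray(nnps))

# Illustrative values at a few spatial frequencies (cycles/mm):
freqs = np.array([0.5, 1.0, 2.0])
mtf   = np.array([0.90, 0.70, 0.40])
nnps  = np.array([4.0e-6, 3.5e-6, 3.0e-6])   # mm^2
q     = 2.5e5                                 # photons / mm^2

d = dqe(mtf, nnps, q)
assert np.all((d > 0) & (d < 1))   # a physical detector cannot exceed 1
```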

Park, Hye-Suk; Kim, Hee-Joung; Cho, Hyo-Min; Lee, Chang-Lae; Lee, Seung-Wan; Choi, Yu-Na

2010-04-01

258

Imaging fault zones using 3D seismic image processing techniques  

NASA Astrophysics Data System (ADS)

Significant advances in structural analysis of deep water structure, salt tectonic and extensional rift basin come from the descriptions of fault system geometries imaged in 3D seismic data. However, even where seismic data are excellent, in most cases the trajectory of thrust faults is highly conjectural and still significant uncertainty exists as to the patterns of deformation that develop between the main faults segments, and even of the fault architectures themselves. Moreover structural interpretations that conventionally define faults by breaks and apparent offsets of seismic reflectors are commonly conditioned by a narrow range of theoretical models of fault behavior. For example, almost all interpretations of thrust geometries on seismic data rely on theoretical "end-member" behaviors where concepts as strain localization or multilayer mechanics are simply avoided. Yet analogue outcrop studies confirm that such descriptions are commonly unsatisfactory and incomplete. In order to fill these gaps and improve the 3D visualization of deformation in the subsurface, seismic attribute methods are developed here in conjunction with conventional mapping of reflector amplitudes (Marfurt & Chopra, 2007)). These signal processing techniques recently developed and applied especially by the oil industry use variations in the amplitude and phase of the seismic wavelet. These seismic attributes improve the signal interpretation and are calculated and applied to the entire 3D seismic dataset. In this contribution we will show 3D seismic examples of fault structures from gravity-driven deep-water thrust structures and extensional basin systems to indicate how 3D seismic image processing methods can not only build better the geometrical interpretations of the faults but also begin to map both strain and damage through amplitude/phase properties of the seismic signal. 
This is done by quantifying and delineating short-range anomalies in the intensity of reflector amplitudes and collecting these into "disturbance geobodies". These seismic image processing methods represent a first efficient step toward the construction of a robust technique to investigate sub-seismic strain, mapping noisy deformed zones and displacement within subsurface geology (Dutzer et al., 2011; Iacopini et al., 2012). In all these cases, accurate fault interpretation is critical in applied geology to building a robust and reliable reservoir model, and is essential for further study of fault seal behavior and reservoir compartmentalization. It is also fundamental for understanding how deformation localizes within sedimentary basins, including the processes associated with active seismogenic faults and mega-thrust systems in subduction zones. Dutzer, J.F., Basford, H., Purves, S., 2009. Investigating fault sealing potential through fault relative seismic volume analysis. Petroleum Geology Conference Series 2010, 7:509-515; doi:10.1144/0070509. Marfurt, K.J., Chopra, S., 2007. Seismic attributes for prospect identification and reservoir characterization. SEG Geophysical Developments. Iacopini, D., Butler, R.W.H., Purves, S., 2012. Seismic imaging of thrust faults and structural damage: a visualization workflow for deepwater thrust belts. First Break, 30(5), 39-46.
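The idea of flagging short-range amplitude anomalies and collecting them into "disturbance geobodies" can be sketched numerically. The snippet below is a minimal, illustrative stand-in, not the authors' workflow: the function name, window size and threshold are assumptions. It marks samples whose amplitude deviates from the local mean by more than k local standard deviations.

```python
import numpy as np

def disturbance_mask(amplitude, win=3, k=2.0):
    """Flag short-range amplitude anomalies (illustrative sketch only).

    amplitude : 2D array (trace x time sample) of reflector amplitudes
    win       : half-width of the local averaging window
    k         : anomaly threshold in local standard deviations
    """
    h, w = amplitude.shape
    mask = np.zeros_like(amplitude, dtype=bool)
    for i in range(h):
        for j in range(w):
            # local window, clipped at the volume edges
            i0, i1 = max(0, i - win), min(h, i + win + 1)
            j0, j1 = max(0, j - win), min(w, j + win + 1)
            patch = amplitude[i0:i1, j0:j1]
            if patch.std() > 0 and abs(amplitude[i, j] - patch.mean()) > k * patch.std():
                mask[i, j] = True
    return mask
```

Connected runs of flagged samples would then be grouped (e.g. by connected-component labeling) into candidate geobodies.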

Iacopini, David; Butler, Rob; Purves, Steve

2013-04-01

259

Contrast-detail comparison between unprocessed and processed CDMAM images  

NASA Astrophysics Data System (ADS)

The purpose of this study is to compare Contrast Detail Curves (CDCs) of unprocessed and processed digital images. Images of a CDMAM (contrast detail for mammography) phantom were acquired at 29 kV with a Tungsten-Rhodium anode-filter combination and 100 mAs; unprocessed images were subsequently processed using five clinically available image processing algorithms. Scoring of the CDMAM images was then performed using human observers and automatic reading. Five observers conducted a four-alternative forced-choice experiment on a set of four images for each processing condition. For the automatic analysis of the CDMAM images the CDCOM software program was used. Contrast Detail Curves were then computed for both the human and the automatic reading by fitting a psychometric curve, after applying a smoothing algorithm (Gaussian filter). For both types of reading the CDCs from processed and unprocessed images were compared. We verified the statistical significance of the difference between contrast threshold measurements at 0.1 mm target size (Figure of Merit, FoM) for unprocessed and processed images, for each image processing algorithm separately. The non-parametric bootstrap method was used. No statistically significant difference was found between raw and processed images. This study shows that CDMAM images may not be appropriate for assessing image processing algorithms.
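The non-parametric bootstrap comparison mentioned above can be sketched as follows. This is a generic illustration, not the study's code: the function name and data are hypothetical, and the statistic here is simply a difference of mean contrast thresholds.

```python
import random

def bootstrap_diff_ci(a, b, n_boot=2000, alpha=0.05, seed=0):
    """Non-parametric bootstrap confidence interval for the difference in
    mean contrast threshold between two reading sets (e.g. unprocessed
    vs. processed). Illustrative sketch, not the study's implementation.
    """
    rng = random.Random(seed)
    diffs = []
    for _ in range(n_boot):
        # resample each group with replacement
        ra = [rng.choice(a) for _ in a]
        rb = [rng.choice(b) for _ in b]
        diffs.append(sum(ra) / len(ra) - sum(rb) / len(rb))
    diffs.sort()
    lo = diffs[int((alpha / 2) * n_boot)]
    hi = diffs[int((1 - alpha / 2) * n_boot) - 1]
    # the difference is "significant" at level alpha if the CI excludes 0
    return lo, hi
```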

Zanca, F.; Bosmans, H.; Jacobs, J.; Michielsen, K.; Sisini, F.; Nens, J.; Young, K. C.; Shaheen, E.; Jacobs, A.; Marchal, G.

2009-02-01

260

DKIST visible broadband imager data processing pipeline  

NASA Astrophysics Data System (ADS)

The Daniel K. Inouye Solar Telescope (DKIST) Data Handling System (DHS) provides the technical framework and building blocks for developing on-summit instrument quality assurance and data reduction pipelines. The DKIST Visible Broadband Imager (VBI) is a first light instrument that alone will create two data streams with a bandwidth of 960 MB/s each. The high data rate and data volume of the VBI require near-real-time processing capability for quality assurance and data reduction, which will be performed on-summit using Graphics Processing Unit (GPU) technology. The VBI data processing pipeline (DPP) is the first designed and developed using the DKIST DHS components, and therefore provides insight into the strengths and weaknesses of the framework. In this paper we lay out the design of the VBI DPP, examine how the underlying DKIST DHS components are utilized, and discuss how integration of the DHS framework with GPUs was accomplished. We present our results of the VBI DPP alpha release implementation of the calibration, frame selection reduction, and quality assurance display processing nodes.
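As a rough illustration of the frame-selection step in such a pipeline, one common approach is to rank short-exposure frames by a sharpness proxy such as RMS contrast and keep only the best. The metric and API below are assumptions for illustration, not the DKIST VBI implementation (which runs on GPUs):

```python
import numpy as np

def select_frames(frames, keep=10):
    """Rank frames by RMS contrast and keep the sharpest ones: a simple
    stand-in for a frame-selection reduction step (illustrative only).
    """
    def rms_contrast(img):
        m = img.mean()
        # standard deviation normalized by mean brightness
        return img.std() / m if m else 0.0
    ranked = sorted(frames, key=rms_contrast, reverse=True)
    return ranked[:keep]
```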

Beard, Andrew; Cowan, Bruce; Ferayorni, Andrew

2014-07-01

261

Image processing using light-sensitive chemical waves  

Microsoft Academic Search

Basic principles of information processing by chemical light-sensitive reaction–diffusion media, and the dynamic modes of these media adequate to information processing operations, are studied. A specialized experimental laboratory technique, optimized for image processing investigations, is elaborated. New modes of image evolution in the process of its transformation by a reaction–diffusion medium are observed. Basic features of image processing by chemical reaction–diffusion media could

N. G. Rambidi; K. E. Shamayaev; G. Yu. Peshkov

2002-01-01

262

Image processing and products for the Magellan mission to Venus  

NASA Technical Reports Server (NTRS)

The Magellan mission to Venus is providing planetary scientists with massive amounts of new data about the surface geology of Venus. Digital image processing is an integral part of the ground data system that provides data products to the investigators. The mosaicking of synthetic aperture radar (SAR) image data from the spacecraft is being performed at JPL's Multimission Image Processing Laboratory (MIPL). MIPL hosts and supports the Image Data Processing Subsystem (IDPS), which was developed in a VAXcluster environment of hardware and software that includes optical disk jukeboxes and the TAE-VICAR (Transportable Applications Executive-Video Image Communication and Retrieval) system. The IDPS is being used by processing analysts of the Image Data Processing Team to produce the Magellan image data products. Various aspects of the image processing procedure are discussed.

Clark, Jerry; Alexander, Doug; Andres, Paul; Lewicki, Scott; Mcauley, Myche

1992-01-01

263

Teaching Heritage  

NSDL National Science Digital Library

Subtitled "a professional development Website for teachers," Teaching Heritage is an impressive collection of information and resources for teaching Australian history and culture. There are eight main sections to the site: four offer teaching resources and four provide teaching units. The resource sections include an examination of different ways of defining heritage, an Australian heritage timeline, discussions of different approaches to teaching heritage through media, and outcomes-based approaches in teaching and assessing heritage coursework. The teaching units deal in depth with issues of citizenship, nationalism, Australian identities, and new cultural values. A Heritage Gallery features images of various culturally significant or representative places in Australia, such as New Italy, the Dundullimal Homestead, Australian Hall, Kelly's Bush, and many more. Obviously, teachers of Civics on the southern continent will find this site extremely useful, but the teaching units -- rich with texts and images -- also offer fascinating introductions for anyone interested in the issues of Australian nation-making.

264

Image Processing Using ImageJ In this assignment, we use the built-in functions in ImageJ to process images. ImageJ is rewritten in Java and  

E-print Network

Image Processing Using ImageJ. In this assignment, we use the built-in functions in ImageJ to process images. ImageJ is written in Java and can be extended with Java plugins. I. INSTALLATION Download ImageJ from http://rsbweb.nih.gov/ij/download.html. Be sure to choose the right version that fits

Jiang, Hao

265

Spot restoration for GPR image post-processing  

DOEpatents

A method and system for detecting the presence of subsurface objects within a medium is provided. In some embodiments, the imaging and detection system operates in a multistatic mode to collect radar return signals generated by an array of transceiver antenna pairs that is positioned across the surface and that travels down the surface. The imaging and detection system pre-processes the return signal to suppress certain undesirable effects. The imaging and detection system then generates synthetic aperture radar images from real aperture radar images generated from the pre-processed return signal. The imaging and detection system then post-processes the synthetic aperture radar images to improve detection of subsurface objects. The imaging and detection system identifies peaks in the energy levels of the post-processed image frame, which indicates the presence of a subsurface object.
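The final peak-identification step can be illustrated with a toy two-dimensional local-maximum search over an energy image. This sketch is not the patented method; the neighborhood rule, threshold and names are assumptions:

```python
def find_peaks_2d(energy, threshold):
    """Return (row, col) cells that are strict local maxima above a
    threshold: a toy version of picking energy peaks in a post-processed
    SAR frame as candidate subsurface objects (illustrative only).
    """
    peaks = []
    rows, cols = len(energy), len(energy[0])
    for r in range(rows):
        for c in range(cols):
            v = energy[r][c]
            if v < threshold:
                continue
            # 8-connected neighborhood, clipped at the frame edges
            neighbors = [energy[rr][cc]
                         for rr in range(max(0, r - 1), min(rows, r + 2))
                         for cc in range(max(0, c - 1), min(cols, c + 2))
                         if (rr, cc) != (r, c)]
            if all(v > n for n in neighbors):
                peaks.append((r, c))
    return peaks
```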

Paglieroni, David W; Beer, N. Reginald

2014-05-20

266

Vision-sensing image analysis for GTAW process control  

SciTech Connect

Image analysis of a gas tungsten arc welding (GTAW) process was completed using video images from a charge coupled device (CCD) camera inside a specially designed coaxial (GTAW) electrode holder. Video data was obtained from filtered and unfiltered images, with and without the GTAW arc present, showing weld joint features and locations. Data Translation image processing boards, installed in an IBM PC AT 386 compatible computer, and Media Cybernetics image processing software were used to investigate edge flange weld joint geometry for image analysis.

Long, D.D.

1994-11-01

267

Networks for image acquisition, processing and display  

NASA Technical Reports Server (NTRS)

The human visual system comprises layers of networks which sample, process, and code images. Understanding these networks is a valuable means of understanding human vision and of designing autonomous vision systems based on network processing. Ames Research Center has an ongoing program to develop computational models of such networks. The models predict human performance in detection of targets and in discrimination of displayed information. In addition, the models are artificial vision systems sharing properties with biological vision that has been tuned by evolution for high performance. Properties include variable density sampling, noise immunity, multi-resolution coding, and fault-tolerance. The research stresses analysis of noise in visual networks, including sampling, photon, and processing unit noises. Specific accomplishments include: models of sampling array growth with variable density and irregularity comparable to that of the retinal cone mosaic; noise models of networks with signal-dependent and independent noise; models of network connection development for preserving spatial registration and interpolation; multi-resolution encoding models based on hexagonal arrays (HOP transform); and mathematical procedures for simplifying analysis of large networks.

Ahumada, Albert J., Jr.

1990-01-01

268

Image Algebra Matlab language version 2.3 for image processing and compression research  

Microsoft Academic Search

Image algebra is a rigorous, concise notation that unifies linear and nonlinear mathematics in the image domain. Image algebra was developed under DARPA and US Air Force sponsorship at University of Florida for over 15 years beginning in 1984. Image algebra has been implemented in a variety of programming languages designed specifically to support the development of image processing and

Mark S. Schmalz; Gerhard X. Ritter; Eric Hayden

2010-01-01

269

Teaching about Due Process of Law. ERIC Digest.  

ERIC Educational Resources Information Center

Fundamental constitutional and legal principles are central to effective instruction in the K-12 social studies curriculum. To become competent citizens, students need to develop an understanding of the principles on which their society and government are based. Few principles are as important in the social studies curriculum as due process of…

Vontz, Thomas S.

270

Teaching MBA Statistics Online: A Pedagogically Sound Process Approach  

ERIC Educational Resources Information Center

Delivering MBA statistics in the online environment presents significant challenges to educators and students alike because of varying student preparedness levels, complexity of content, difficulty in assessing learning outcomes, and faculty availability and technological expertise. In this article, the author suggests a process model that…

Grandzol, John R.

2004-01-01

271

Application of Structure Process Theory to the Teaching of Reading.  

ERIC Educational Resources Information Center

Consistent with definitions of theory as offered by Skinner and Bruner, and based on psychological and neurophysiological evidence for a cognitive hierarchy as propounded by eminent psychologists, the structure process theory has validity in constructing a model of reading instruction. The appropriateness of model construction arises from the…

Frost, Joe L.

272

Kagan Structures, Processing, and Excellence in College Teaching  

ERIC Educational Resources Information Center

Frequent student processing of lecture content (1) clears working memory, (2) increases long-term memory storage, (3) produces retrograde memory enhancement, (4) creates episodic memories, (5) increases alertness, and (6) activates many brain structures. These outcomes increase comprehension of and memory for content. Many professors now…

Kagan, Spencer

2014-01-01

273

Teaching Information Literacy and Scientific Process Skills: An Integrated Approach.  

ERIC Educational Resources Information Center

Describes an online searching and scientific process component taught as part of the laboratory for a general zoology course. The activities were designed to be gradually more challenging, culminating in a student-developed final research project. Student evaluations were positive, and faculty indicated that student research skills transferred to…

Souchek, Russell; Meier, Marjorie

1997-01-01

274

PET Plants: Imaging Natural Processes for Renewable Energy  

E-print Network

PET Plants: Imaging Natural Processes for Renewable Energy from Plants. Benjamin A. Babst, Goldhaber Postdoctoral Fellow, Medical Department. [Slide-deck extract; recoverable slide titles: Plant Imaging; PET imaging for medicine; tumor diagnosis; ... of molecules in plants; Outline: bioenergy to mitigate energy crisis; PET tools & imaging; plant resource ...]

Homes, Christopher C.

275

BIL 415 -Image Processing Practicum Department of Computer Engineering  

E-print Network

BIL 415 - Image Processing Practicum, Department of Computer Engineering. Problem Set 2, Fall 2014-2015. Dr. Erkut Erdem; TA: Levent Karacan. "Create Your Own Image Effects", due 23:59 on Friday, October 31st, 2014. Figure 1: (a) input image, (b) created effect.

Erdem, Erkut

276

Parallel processing for fusing SPOT5 satellite images  

NASA Astrophysics Data System (ADS)

Based on the FFT-enhanced IHS transform method, a modified fusion method for SPOT5 images is proposed. Because image fusion is computationally demanding, a combination of pipeline parallelism and data parallelism is applied in practice. Experimental results indicate that the spectral quality of the fused images is good and that distributed parallel processing solves the problem of demanding computation in image fusion.
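A bare-bones IHS-style fusion step can be sketched as below. This shows only the generic intensity-substitution idea; the paper's FFT-enhanced variant, which additionally filters the panchromatic band before substitution, is omitted, and all names are illustrative:

```python
import numpy as np

def ihs_fuse(rgb, pan):
    """Minimal IHS-style pan-sharpening: swap the intensity component of
    a multispectral image for the high-resolution panchromatic band,
    preserving band ratios (hue). Illustrative sketch only.

    rgb : float array of shape (h, w, 3); pan : float array (h, w)
    """
    intensity = rgb.mean(axis=2)  # the I of the IHS triple
    # per-pixel gain that replaces I with the pan value (0 where I == 0)
    scale = np.divide(pan, intensity,
                      out=np.zeros_like(pan), where=intensity > 0)
    return rgb * scale[..., None]
```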

Ai, Haibin; Zhang, Jianqing; Zhang, Yong

2007-11-01

277

Corn plant locating by image processing  

NASA Astrophysics Data System (ADS)

The feasibility of using machine vision technology to locate corn plants is an important issue for field production automation in the agricultural industry. This paper presents an approach developed to locate the center of a corn plant using image processing techniques. Corn plants were first identified using a main-vein detection algorithm, which detects a local feature of corn leaves (the leaf main veins) based on the spectral difference between veins and leaves; the center of the plant could then be located using a center-locating algorithm that traces and extends each detected vein line and estimates the center of the plant from the intersection points of those lines. The experimental results show the usefulness of the algorithm for machine vision applications related to corn plant identification. Such a technique can be used for precise spraying of pesticides or biotech chemicals.
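The center-locating step, estimating the plant center from intersections of traced vein lines, can be sketched geometrically. This is a toy with hypothetical names, not the authors' implementation; lines are given as two points each:

```python
def line_intersection(l1, l2):
    """Intersection of two infinite lines, each given as (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = l1
    x3, y3, x4, y4 = l2
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if d == 0:
        return None  # parallel lines
    px = ((x1 * y2 - y1 * x2) * (x3 - x4) - (x1 - x2) * (x3 * y4 - y3 * x4)) / d
    py = ((x1 * y2 - y1 * x2) * (y3 - y4) - (y1 - y2) * (x3 * y4 - y3 * x4)) / d
    return px, py

def plant_center(vein_lines):
    """Estimate the plant center as the mean of all pairwise vein-line
    intersections, following the center-locating idea described above."""
    pts = [p for i, a in enumerate(vein_lines)
             for b in vein_lines[i + 1:]
             if (p := line_intersection(a, b)) is not None]
    xs, ys = zip(*pts)
    return sum(xs) / len(pts), sum(ys) / len(pts)
```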

Jia, Jiancheng; Krutz, Gary W.; Gibson, Harry W.

1991-02-01

278

Statistical Calibration of the CCD Imaging Process  

Microsoft Academic Search

Charge-Coupled Device (CCD) cameras are widely used imaging sensors in computer vision systems. Many photometric algorithms, such as shape from shading, color constancy, and photometric stereo, implicitly assume that the image intensity is proportional to scene radiance. The actual image measurements deviate significantly from this assumption since the transformation from scene radiance to image intensity is non-linear and is

Yanghai Tsin; Visvanathan Ramesh; Takeo Kanade

2001-01-01

279

The image processing and target identification of laser imaging fuze  

Microsoft Academic Search

Imaging detection can obtain the geometric shape and exterior features of a target at very close distance, and supply the information for target recognition. But because of the large amount of information and the real-time requirement, the image of the target is always distorted and incomplete. To solve this problem, the image processing and target identification methods of a laser imaging fuze are introduced.

Song Chengtian; Wang Keyong; Zheng Lian

2008-01-01

280

Application of near-infrared image processing in agricultural engineering  

Microsoft Academic Search

Recently, with the development of computer technology, the application field of near-infrared (NIR) image processing has become much wider. In this paper the technical characteristics and development of modern NIR imaging and NIR spectroscopy analysis are introduced. The application and study of NIR image processing techniques in agricultural engineering in recent years are summarized, based on the application principles and developing

Ming-Hong Chen; Guo-Ping Zhang; Hongxing Xia

2009-01-01

281

Image processing on ECG chart for ECG signal recovery  

Microsoft Academic Search

Medical imaging plays an indispensable role in medical informatics. Most image processing technology is focused on identifying the locations of diseases in MRI, CT, PET, and SPECT images. However, only a few studies have focused on one-dimensional signal recovery or reconstruction of electronic signals. Spatial and frequency-domain methods were provided to process color or gray-level electrocardiogram

T. W. Shen; T. F. Laio

2009-01-01

282

An image-processing software package: UU and Fig for optical metrology applications  

NASA Astrophysics Data System (ADS)

Modern optical metrology applications are largely supported by computational methods, such as phase shifting [1], Fourier Transform [2], digital image correlation [3], camera calibration [4], etc., in which image processing is a critical and indispensable component. While it is not too difficult to obtain a wide variety of image-processing programs from the internet, few cater to the relatively specialized area of optical metrology. This paper introduces an image-processing software package, UU (data processing) and Fig (data rendering), that incorporates many useful functions to process optical metrological data. The cross-platform programs UU and Fig are developed based on wxWidgets. At the time of writing, they have been tested on Windows, Linux and Mac OS. The user interface is designed to offer precise control of the underlying processing procedures in a scientific manner. The data input/output mechanism is designed to accommodate diverse file formats and to facilitate interaction with other independent programs. In terms of robustness, although the software was initially developed for personal use, it is comparable in stability and accuracy to most commercial software of a similar nature. In addition to functions for optical metrology, the software package has a rich collection of useful tools in the following areas: real-time image streaming from USB and GigE cameras, computational geometry, computer vision, fitting of data, 3D image processing, vector image processing, precision device control (rotary stage, PZT stage, etc.), point cloud to surface reconstruction, volume rendering, batch processing, etc. The software package is currently used in a number of universities for teaching and research.
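Phase shifting, the first computational method cited, has a compact closed form that metrology packages typically implement. A minimal sketch of the classic four-step algorithm (generic textbook formula, not code from UU/Fig) is:

```python
import math

def four_step_phase(i1, i2, i3, i4):
    """Recover the wrapped phase from four intensity samples taken with
    phase shifts of 0, pi/2, pi and 3*pi/2. With I_k = A + B*cos(phi + k*pi/2):

        phi = atan2(I4 - I2, I1 - I3)

    since I4 - I2 = 2*B*sin(phi) and I1 - I3 = 2*B*cos(phi).
    """
    return math.atan2(i4 - i2, i1 - i3)
```

In a full fringe-analysis pipeline this formula is applied per pixel, followed by phase unwrapping.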

Chen, Lujie

2013-06-01

283

Post-digital image processing based on microlens array  

NASA Astrophysics Data System (ADS)

Benefiting from attractive features such as compact volume and thin, lightweight construction, imaging systems based on microlens arrays have become an active area of research. However, current imaging systems based on microlens arrays have insufficient imaging quality, so they cannot meet the practical requirements of most applications. As a result, post-digital image processing for image reconstruction from the low-resolution sub-image sequence becomes particularly important. In general, post-digital image processing mainly includes two parts: accurate estimation of the motion parameters between the sub-image sequence and reconstruction of the high-resolution image. In this paper, given the fact that preprocessing of the unit images can make the edges of the reconstructed high-resolution image clearer, the low-resolution images are preprocessed before the post-digital image processing. Then, after processing with the pixel rearrange method, a high-resolution image is obtained. From the result, we find that the edges of the reconstructed high-resolution image are clearer than without preprocessing.
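The pixel rearrange step can be illustrated for the simplest case of four sub-images with known half-pixel shifts. In the paper the shifts are first estimated as motion parameters; this sketch assumes they are already known, and all names are illustrative:

```python
import numpy as np

def pixel_rearrange(subimgs):
    """Rebuild a higher-resolution frame from a 2x2 set of sub-images
    with half-pixel shifts by interleaving their pixels. Illustrative
    sketch of the basic pixel-rearrange idea only.

    subimgs : dict mapping (dy, dx) in {0,1}x{0,1} to (h, w) arrays
    """
    h, w = subimgs[(0, 0)].shape
    out = np.empty((2 * h, 2 * w), dtype=subimgs[(0, 0)].dtype)
    for (dy, dx), img in subimgs.items():
        out[dy::2, dx::2] = img  # place each sub-image on its own sub-grid
    return out
```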

Shi, Chaiyuan; Xu, Feng

2014-10-01

284

Automatic processing method of mass MODIS image data  

NASA Astrophysics Data System (ADS)

As one of the most popular types of optical remote sensing imagery, MODIS (Moderate Resolution Imaging Spectroradiometer) images are widely used in many areas. However, the processing of MODIS image data is considered cumbersome, time-consuming work, especially for long time series earth observation research. Automatic processing technology is especially needed here. But because of the complex procedure of image matching and the high requirements of location calibration, these images are manually processed in most research. This paper presents an automatic processing method for MODIS image products (mainly for Level 1B; it can be applied to the 8-day snow observation image product and daily snow cover optical image data as well). By using the automatic processing system, the efficiency of optical remote sensing image processing is sharply increased while the calibration accuracy remains the same as with the traditional processing method. The working flowchart of the processing system is introduced for those who will deal with masses of MODIS data in their research. Finally, an automatic processing system for a snow cover monitoring model based on MODIS L1B image data in the ENVI/IDL environment is discussed as a practical application of the processing method in long time series snow cover monitoring over Northeast China with MODIS images. The performance shows that the time spent in data processing can be reduced from 48 manual working days to 2 working days (10.41 hours) of automatic computer processing, which proves that the processing efficiency for long time series remote sensing data, especially MODIS L1B data, can be greatly increased, saving processing time from months to days and freeing researchers from burdensome manual work.
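The skeleton of such an unattended batch pipeline, a loop that discovers inputs, applies a processing function and writes outputs, can be sketched in a few lines. The paper's system is built in ENVI/IDL; this Python sketch with hypothetical names only illustrates the pattern:

```python
from pathlib import Path

def batch_process(in_dir, out_dir, process, pattern="*.hdf"):
    """Apply `process` to every file matching `pattern` and write the
    result under out_dir: the skeleton of an unattended batch pipeline
    (names, pattern and byte-level processing are illustrative).
    """
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    done = []
    for src in sorted(Path(in_dir).glob(pattern)):
        dst = out / src.name
        # in a real pipeline: calibration, matching, reprojection, etc.
        dst.write_bytes(process(src.read_bytes()))
        done.append(dst)
    return done
```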

Yan, Su; Xie, Chengjun

2013-12-01

285

Teaching undergraduates the process of peer review: learning by doing  

NSDL National Science Digital Library

An active approach allowed undergraduates in Health Sciences to learn the dynamics of peer review at first hand. A four-stage process was used. In stage 1, students formed self-selected groups to explore specific issues. In stage 2, each group posted their interim reports online on a specific date. Each student read all the other reports and prepared detailed critiques. In stage 3, each report was discussed at sessions where the lead discussant was selected at random. All students participated in the peer review process. The written critiques were collated and returned to each group, who were asked to resubmit their revised reports within 2 wk. In stage 4, final submissions accompanied by rebuttals were graded. Student responses to a questionnaire were highly positive. They recognized the individual steps in standard peer review, appreciated the complexities involved, and got first-hand experience of some of its inherent variability. The absence of formal presentations and the opportunity to read each other's reports permitted them to study issues in greater depth.

2010-09-01

286

Viewpoints on Medical Image Processing: From Science to Application  

PubMed Central

Medical image processing provides core innovation for medical imaging. This paper focuses on recent developments from science to applications, analyzing the past fifteen years of the proceedings of the German annual meeting on medical image processing (BVM). Furthermore, some members of the program committee present their personal points of view: (i) multi-modality for imaging and diagnosis, (ii) analysis of diffusion-weighted imaging, (iii) model-based image analysis, (iv) registration of section images, (v) from images to information in digital endoscopy, and (vi) virtual reality and robotics. Medical imaging and medical image computing are seen as a field of rapid development with clear trends toward integrated applications in diagnostics, treatment planning and treatment. PMID:24078804

Deserno (né Lehmann), Thomas M.; Handels, Heinz; Maier-Hein (né Fritzsche), Klaus H.; Mersmann, Sven; Palm, Christoph; Tolxdorff, Thomas; Wagenknecht, Gudrun; Wittenberg, Thomas

2013-01-01

287

Fluroscopic Image Processing for Computer-Aided Orthopaedic Surgery  

Microsoft Academic Search

This paper describes the fluoroscopic X-ray image processing techniques of Fracas, a computer-integrated orthopaedic system for bone fracture reduction. Fluoroscopic image processing consists of image dewarping,\\u000a camera calibration, and bone contour extraction. Our approach focuses on bone imaging and emphasizes integration, full automation,\\u000a simplicity, robustness, and practicality. We describe the experimental setup and report results quantifying the accuracy of\\u000a our

Ziv Yaniv; Leo Joskowicz; Ariel Simkin; María A. Garza-jinich; Charles Milgrom

1998-01-01

288

A fusion method for visible and infrared images based on contrast pyramid with teaching learning based optimization  

NASA Astrophysics Data System (ADS)

This paper proposes a novel image fusion scheme based on the contrast pyramid (CP) with teaching-learning-based optimization (TLBO) for visible and infrared images of complicated scenes under different spectra. First, CP decomposition is applied to every level of each original image. Then, we introduce TLBO to optimize the fusion coefficients, which are updated in the teacher phase and learner phase of TLBO, so that the weighted coefficients can be automatically adjusted according to a fitness function, namely the evaluation standards of image quality. Finally, fusion results are obtained by the inverse transformation of the CP. Compared with existing methods, experimental results show that our method is effective and the fused images are more suitable for further human visual or machine perception.
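TLBO itself is a simple population scheme and can be sketched compactly. The version below minimizes a generic cost function; applying it to fusion coefficients scored by image-quality metrics, as in the paper, would only change the cost function. All names and parameter values are illustrative:

```python
import random

def tlbo_minimize(f, dim, bounds, pop=20, iters=100, seed=1):
    """Minimal teaching-learning-based optimization (TLBO).

    Teacher phase: learners move toward the best solution relative to
    the class mean. Learner phase: random pairs move toward whichever
    of the two is better. Moves are kept only if they improve f.
    """
    rng = random.Random(seed)
    lo, hi = bounds
    clip = lambda v: max(lo, min(hi, v))
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    for _ in range(iters):
        teacher = min(X, key=f)  # best learner so far
        mean = [sum(x[d] for x in X) / pop for d in range(dim)]
        for i, x in enumerate(X):  # teacher phase
            tf = rng.choice((1, 2))  # teaching factor
            cand = [clip(x[d] + rng.random() * (teacher[d] - tf * mean[d]))
                    for d in range(dim)]
            if f(cand) < f(x):
                X[i] = cand
        for i, x in enumerate(X):  # learner phase
            j = rng.randrange(pop)
            if j == i:
                continue
            sign = 1.0 if f(x) < f(X[j]) else -1.0
            cand = [clip(x[d] + rng.random() * sign * (x[d] - X[j][d]))
                    for d in range(dim)]
            if f(cand) < f(x):
                X[i] = cand
    return min(X, key=f)
```

A notable design point of TLBO is that, unlike genetic algorithms or PSO, it has no algorithm-specific tuning constants beyond population size and iteration count.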

Jin, Haiyan; Wang, Yanyan

2014-05-01

289

AN EIGHT WEEK SUMMER INSTITUTE TRAINING PROGRAM TO RETRAIN OFFICE EDUCATION TEACHERS FOR TEACHING BUSINESS ELECTRONIC DATA PROCESSING.  

ERIC Educational Resources Information Center

A 16-WEEK TWO-SUMMER INSTITUTE WAS HELD TO ASSIST IN DEVELOPING THE KNOWLEDGE AND SKILL ESSENTIAL FOR TEACHING SPECIALIZED COURSES IN A 2-YEAR CURRICULUM IN BUSINESS ELECTRONIC DATA PROCESSING. THE REPORT DESCRIBES THE INSTITUTE'S ENROLLMENT, ENVIRONMENT (AREA AND SCHOOL), TEACHING STAFF, TEXT MATERIAL, AND COURSE OUTLINES. EVALUATIONS BY BOTH THE…

BREESE, WILLIAM E.

290

The Evolution of English Language Teaching during Societal Transition in Finland--A Mutual Relationship or a Distinctive Process?  

ERIC Educational Resources Information Center

This study describes the evolution of English language teaching in Finland and looks into the connections of the societal and educational changes in the country as explanatory factors in the process. The results of the study show that the language teaching methodology and the status of foreign languages in Finland are clearly connected to the…

Jaatinen, Riitta; Saarivirta, Toni

2014-01-01

291

Automating image processing for scientific data analysis of a large image database.  

NASA Astrophysics Data System (ADS)

Describes the Multimission VICAR Planner (MVP): an AI planning system which uses knowledge about image processing steps and their requirements to construct executable image processing scripts to support high-level science requests made to the Jet Propulsion Laboratory (JPL) Multimission Image Processing Subsystem (MIPS). This article describes a general AI planning approach to automation and applies the approach to a specific area of image processing for planetary science involving radiometric correction, color triplet reconstruction, and mosaicking, in which the MVP system significantly reduces the effort required from image processing experts to fill a typical request.

Chien, S. A.; Mortensen, H. B.

1996-08-01

292

Food Log by snapping and processing images  

Microsoft Academic Search

We present the current status of FoodLog, a multimedia Internet application that enables easy capture and archival of information regarding our daily meals. The primary purpose of FoodLog is to facilitate dietary management support with minimum manual recording of information. It analyzes image archives that belong to a user to identify images of meals. Further image analysis determines the nutritional

Kiyoharu Aizawa; Gamhewage C. de Silva; Makoto Ogawa; Yohei Sato

2010-01-01

293

The Study of Image Processing Method for AIDS PA Test  

NASA Astrophysics Data System (ADS)

At present, the main test technique for AIDS in China is PA. Because the judgment of the PA test image still depends on the operator, the error rate is high. To resolve this problem, we present a new image processing technique, which first processes many samples to obtain data including the coordinates of the center and the ranges of the image classes; the image can then be segmented with these data; finally, the result is exported after the data are judged. This technique is simple and accurate, and it also turns out to be suitable for the processing and analysis of other infectious diseases' PA test images.

Zhang, H. J.; Wang, Q. G.

2006-10-01

294

Lat. Am. J. Phys. Educ. Vol. 6, Suppl. I, August 2012 122 http://www.lajpe.org Teaching about the physics of medical imaging  

E-print Network

Teaching about the physics of medical imaging: Examples of research-based teaching materials. Dean Zollman, Dyan Jones, Sytil ... [Slide extract: ... Medical Machines is an educational research and development effort to teach some physics in a medical context ... before the discovery of X-rays, attempts at non-invasive medical imaging required an understanding ...]

Zollman, Dean

295

GStreamer as a framework for image processing applications in image fusion  

NASA Astrophysics Data System (ADS)

Multiple source band image fusion can sometimes be a multi-step process that consists of several intermediate image processing steps. Typically, each of these steps is required to be in a particular arrangement in order to produce a unique output image. GStreamer is an open source, cross platform multimedia framework, and using this framework, engineers at NVESD have produced a software package that allows for real time manipulation of processing steps for rapid prototyping in image fusion.

Burks, Stephen D.; Doe, Joshua M.

2011-05-01

296

[Embedded system design of color-blind image processing].  

PubMed

An ARM-based embedded system design scheme is proposed for a color-blind image processing system. The hardware and software of the embedded color-blind image processing system are designed around an ARM core processor, and a simple, convenient user interface is implemented. The system supplies a general hardware platform for applications of color-blind image processing algorithms, so it may bring convenience to the testing and rectification of color blindness. PMID:21553537

Wang, Eric; Ma, Yu; Wang, Yuanyuan

2011-01-01

297

Image-Processing Software For A Hypercube Computer  

NASA Technical Reports Server (NTRS)

Concurrent Image Processing Executive (CIPE) is software system intended to develop and use image-processing application programs on concurrent computing environment. Designed to shield programmer from complexities of concurrent-system architecture, it provides interactive image-processing environment for end user. CIPE utilizes architectural characteristics of particular concurrent system to maximize efficiency while preserving architectural independence from user and programmer. CIPE runs on Mark-IIIfp 8-node hypercube computer and associated SUN-4 host computer.

Lee, Meemong; Mazer, Alan S.; Groom, Steven L.; Williams, Winifred I.

1992-01-01

298

Optimizing signal and image processing applications using Intel libraries  

NASA Astrophysics Data System (ADS)

This paper presents optimized signal and image processing libraries from Intel Corporation. Intel Performance Primitives (IPP) is a low-level signal and image processing library developed by Intel Corporation to optimize code on Intel processors. Open Computer Vision library (OpenCV) is a high-level library dedicated to computer vision tasks. This article describes the use of both libraries to build flexible and efficient signal and image processing applications.

Landré, Jérôme; Truchetet, Frédéric

2007-01-01

299

A proposal for an educational system service to support the teaching/learning process for logic programming

Microsoft Academic Search

Information and Communication Technology (ICT) has been successfully used to transform both partial/total face-to-face and distance learning education [1]. Educational software systems aid the teaching/learning process, promoting the development of many Virtual Learning Environments (VLEs) and improving the assimilation of the content presented in class. Using mechanisms such as Hardware as a Service (HaaS) and

Eric R. G. Dantas; Ryan R. de Azevedo; Cleyton M. O. Rodrigues; Silas C. Almeida; Fred Freitas; Vinicius C. Garcia

2011-01-01

300

Cutaneous blood circulation monitoring by IR imaging and image processing  

NASA Astrophysics Data System (ADS)

A thermography-based cutaneous blood flow monitoring system prototype for physiological studies and microsurgery was constructed. The prototype has already been used to analyze blood flow during several operations and physiological experiments. The system filters raw frames from a far-IR camera and compresses them, so that images can be stored at a small fraction of their original size for filing and later inspection by our system or other suitable Windows programs. Multicolor display combined with image-enhancement filtering substantially helps clinical personnel interpret thermal image information.
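The multicolor display of scalar temperature data can be sketched as a pseudocolor mapping; the temperature range and the simple blue-to-red ramp below are illustrative choices, not the palette of the described system.

```python
def pseudocolor(t, t_min=28.0, t_max=38.0):
    """Map a temperature to an (r, g, b) tuple on a linear blue-to-red ramp.

    The skin-temperature range and the two-color ramp are hypothetical,
    chosen only to illustrate scalar-to-color mapping.
    """
    # Clamp and normalise the temperature to [0, 1]
    x = max(0.0, min(1.0, (t - t_min) / (t_max - t_min)))
    r = int(round(255 * x))        # warm areas trend red
    b = int(round(255 * (1 - x)))  # cool areas trend blue
    return (r, 0, b)

# Map a scan line of skin temperatures to display colors
line = [28.0, 33.0, 38.0]
print([pseudocolor(t) for t in line])
```

Real clinical palettes usually pass through several hues (e.g. blue-green-yellow-red) for finer discrimination; the two-color ramp keeps the sketch short.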

Alander, Jarmo T.; Setala, Henri; Karonen, Aimo

1996-10-01

301

Adaptable infrared image processing module implemented in FPGA  

NASA Astrophysics Data System (ADS)

Rapid development of infrared detector arrays has created a need for a robust signal processing chain able to operate on infrared images in real time. Every infrared detector array suffers from so-called nonuniformity, which has to be compensated digitally by the internal circuits of the camera. The digital circuit also has to detect and replace the signal from damaged detectors, and finally the image has to be prepared for display on an external display unit. For comfortable viewing, the delay between registering the infrared image and displaying it should be as short as possible, so the image processing has to be done with minimum latency. This demand enforces the use of special processing techniques such as pipelining and parallel processing. The designed infrared processing module performs standard operations on infrared images with very low latency. Additionally, its modular design and defined data bus allow easy expansion of the signal processing chain. The presented image processing module was used in two camera designs, one based on an uncooled microbolometric detector array from ULIS and one on a cooled photon detector from Sofradir. The image processing module was implemented in an FPGA structure and worked with an external ARM processor for control and coprocessing. The paper describes the design of the processing unit, results of image processing, and module parameters such as power consumption and hardware utilization.

Bieszczad, Grzegorz; Sosnowski, Tomasz; Madura, Henryk; Kastek, Mariusz; Barela, Jaroslaw

2010-04-01

302

Bessel filters applied in biomedical image processing  

NASA Astrophysics Data System (ADS)

A magnetic resonance image is obtained by an imaging test that uses magnets and radio waves to create pictures of the body; however, in some images it is difficult to recognize organs or foreign agents present in the body. The objective of these Bessel filters is to significantly increase the resolution of the magnetic resonance images, making them much clearer in order to detect anomalies and diagnose the illness. As is known, Bessel functions appear in the solution of the Schrödinger equation for a particle enclosed in a cylinder, and the filters affect the image by modifying its colors and contours. Therein lies the effectiveness of these filters: a clearer, more defined outline makes abnormalities inside the body easier to recognize.

Mesa Lopez, Juan Pablo; Castañeda Saldarriaga, Diego Leon

2014-06-01

303

Interactive image processing console A6471  

SciTech Connect

Many system designs and implementations of image processors have been published and discussed, to which the authors add another one promising a good compromise between speed, flexibility, and costs. Its main components are programmable semiconductor image refresh memories and a fast parallel processor, both acting at TV scan velocity. They are embedded in a 16-bit microcomputer system which interfaces them to the user and the programmer. Special features are the possibility to share the image memories between several systems and a cross-connection between the image processor and graphic data. A glance at the programming techniques is given. Prototypes of such a system are operating in remote sensing and biomedical applications. 6 references.

Kempe, V.; Rebel, B.; Wilhelmi, W.

1982-01-01

304

An Image Processing Algorithm Based On FMAT  

NASA Technical Reports Server (NTRS)

Information deleted in ways minimizing adverse effects on reconstructed images. New grey-scale generalization of medial axis transformation (MAT), called FMAT (short for Fuzzy MAT) proposed. Formulated by making natural extension to fuzzy-set theory of all definitions and conditions (e.g., characteristic function of disk, subset condition of disk, and redundancy checking) used in defining MAT of crisp set. Does not need image to have any kind of priori segmentation, and allows medial axis (and skeleton) to be fuzzy subset of input image. Resulting FMAT (consisting of maximal fuzzy disks) capable of reconstructing exactly original image.
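The crisp medial axis transformation (MAT) that FMAT generalizes can be sketched in a few lines: compute a distance transform of a binary image, then keep the centers of maximal inscribed disks, i.e. local maxima of the distance. This pure-Python sketch uses city-block distance and a 4-neighbour local-maximum test; both are illustrative choices, and the fuzzy generalization described above is not implemented here.

```python
from collections import deque

def distance_transform(img):
    """City-block distance from each foreground pixel (1) to the nearest
    background pixel (0), computed by multi-source BFS from the background."""
    h, w = len(img), len(img[0])
    INF = h * w
    dist = [[0 if img[y][x] == 0 else INF for x in range(w)] for y in range(h)]
    q = deque((y, x) for y in range(h) for x in range(w) if img[y][x] == 0)
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and dist[ny][nx] > dist[y][x] + 1:
                dist[ny][nx] = dist[y][x] + 1
                q.append((ny, nx))
    return dist

def medial_axis(img):
    """Crisp MAT: foreground pixels whose distance is a local (4-neighbour)
    maximum, i.e. centres of maximal inscribed disks, with their radii."""
    dist = distance_transform(img)
    h, w = len(img), len(img[0])
    axis = []
    for y in range(h):
        for x in range(w):
            if img[y][x] and all(
                dist[y + dy][x + dx] <= dist[y][x]
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= y + dy < h and 0 <= x + dx < w
            ):
                axis.append((y, x, dist[y][x]))
    return axis

img = [[0, 0, 0, 0, 0],
       [0, 1, 1, 1, 0],
       [0, 1, 1, 1, 0],
       [0, 1, 1, 1, 0],
       [0, 0, 0, 0, 0]]
print(medial_axis(img))  # maximal-disk centres as (y, x, radius)
```

In the fuzzy extension sketched in the abstract, the disk characteristic function and the subset and redundancy conditions all become fuzzy, so the axis is a fuzzy subset of the grey-scale input rather than a crisp point set.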

Wang, Lui; Pal, Sankar K.

1995-01-01

305

Experiments with recursive estimation in astronomical image processing  

NASA Technical Reports Server (NTRS)

Recursive estimation concepts have been applied to image enhancement problems since the 1970s. However, very few applications in the particular area of astronomical image processing are known. These concepts were derived, for 2-dimensional images, from the well-known theory of Kalman filtering in one dimension. The historic reasons for applying these techniques to digital images are related to the images' scanned nature, in which the temporal output of a scanner device can be processed on-line by techniques borrowed directly from 1-dimensional recursive signal analysis. However, recursive estimation has particular properties that make it attractive even in modern days, when big computer memories make the full scanned image available to the processor at any given time. One particularly important aspect is the ability of recursive techniques to deal with non-stationary phenomena, that is, phenomena whose statistical properties vary in time (or position in a 2-D image). Many image processing methods make underlying stationarity assumptions either for the stochastic field being imaged, for the imaging system properties, or both. They will underperform, or even fail, when applied to images that deviate significantly from stationarity. Recursive methods, on the contrary, make it feasible to perform adaptive processing, that is, to process the image with a processor whose properties are tuned to the image's local statistical properties. Recursive estimation can be used to build estimates of images degraded by such phenomena as noise and blur. We show examples of recursive adaptive processing of astronomical images, using several local statistical properties to drive the adaptive processor, such as average signal intensity, signal-to-noise ratio and autocorrelation function. Software was developed under IRAF, and as such will be made available to interested users.
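The 1-D recursive estimation these 2-D techniques were derived from can be sketched as a scalar Kalman filter smoothing a noisy scan line; the random-walk signal model and the noise variances q and r below are illustrative assumptions, not values from the paper.

```python
def kalman_smooth(scan, q=0.01, r=0.5):
    """Scalar Kalman filter over a 1-D scan line (random-walk signal model).

    q: process-noise variance (how fast the true signal may drift),
    r: measurement-noise variance. Both values are hypothetical.
    """
    x, p = scan[0], 1.0   # initial state estimate and its variance
    out = [x]
    for z in scan[1:]:
        p = p + q                 # predict: variance grows by process noise
        k = p / (p + r)           # Kalman gain
        x = x + k * (z - x)       # update with the new pixel measurement
        p = (1 - k) * p           # posterior variance shrinks
        out.append(x)
    return out

noisy = [10.0, 10.4, 9.7, 10.1, 30.0, 29.6, 30.3]  # a noisy step edge
print([round(v, 2) for v in kalman_smooth(noisy)])
```

An adaptive variant, in the spirit of the abstract, would re-tune q and r from local statistics (e.g. local variance or signal-to-noise) as the filter sweeps across the image, instead of holding them fixed.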

Busko, I.

1992-01-01

306

APPLEPIPS /Apple Personal Image Processing System/ - An interactive digital image processing system for the Apple II microcomputer  

NASA Technical Reports Server (NTRS)

Recent developments related to microprocessor-based personal computers have made low-cost digital image processing systems a reality. Image analysis systems built around these microcomputers provide color image displays for images as large as 256 by 240 pixels in sixteen colors. Descriptive statistics can be computed for portions of an image, and supervised image classification can be obtained. The systems support Basic, Fortran, Pascal, and assembler language. A description is provided of a system which is representative of the new microprocessor-based image processing systems currently on the market. While small systems may never be truly independent of larger mainframes, because they lack 9-track tape drives, the independent processing power of the microcomputers will help alleviate some of the turn-around time problems associated with image analysis and display on the larger multiuser systems.

Masuoka, E.; Rose, J.; Quattromani, M.

1981-01-01

307

Cardiovascular Imaging and Image Processing: Theory and Practice - 1975  

NASA Technical Reports Server (NTRS)

Ultrasonography was examined in regard to the developmental highlights and present applicatons of cardiac ultrasound. Doppler ultrasonic techniques and the technology of miniature acoustic element arrays were reported. X-ray angiography was discussed with special considerations on quantitative three dimensional dynamic imaging of structure and function of the cardiopulmonary and circulatory systems in all regions of the body. Nuclear cardiography and scintigraphy, three--dimensional imaging of the myocardium with isotopes, and the commercialization of the echocardioscope were studied.

Harrison, Donald C. (editor); Sandler, Harold (editor); Miller, Harry A. (editor); Hood, Manley J. (Editor); Purser, Paul E. (Editor); Schmidt, Gene (Editor)

1975-01-01

308

An optimised framework for reconstructing and processing MR phase images  

Microsoft Academic Search

Phase contrast imaging holds great potential for in vivo biodistribution studies of paramagnetic molecules and materials. However, in vivo quantification of iron storage and other paramagnetic materials requires improvements in reconstruction and processing of MR complex images. To achieve this, we have developed a framework including (i) an optimal coil sensitivity smoothing filter for phase imaging determined at the maximal

Zhaolin Chen; Leigh A. Johnston; Dae Hyuk Kwon; Se Hong Oh; Zang-Hee Cho; Gary F. Egan

2010-01-01

309

Digital Image Processing The mathematics behind the pretty pictures  

E-print Network

Digital Image Processing The mathematics behind the pretty pictures Unlike many other branches of science, students of digital image warping benefit from the direct visual realization of mathematical abstractions and concepts. As a result, readers are fortunate to have images clarify what mathematical notation

Breuer, Florian

310

Processing thermal images to detect breast cancer and assess pain  

Microsoft Academic Search

Thermography began to be used to detect breast cancer in the 1960s and assess pain in the 1980s. The images were interpreted through the naked eye and subtle differences were difficult to identify. More recently, widespread use of PCs led to computer processing for the analysis of thermal images. Thermal imaging records the skin temperature distribution of the body and

Monique Frize; Christophe Herry; Nathan Scales

2003-01-01

311

PHOTOGRAMMETRIC PROCESSING OF LOW ALTITUDE IMAGE SEQUENCES BY UNMANNED AIRSHIP  

Microsoft Academic Search

Low altitude aerial image sequences have the advantages of high overlap, multi viewing and very high ground resolution. These kinds of images can be used in various applications that need high precision or fine texture. This paper mainly focuses on the photogrammetric processing of low altitude image sequences acquired by unmanned airship, which automatically flies according to the predefined flight

Yongjun Zhang

312

Experiences with digital processing of images at INPE  

NASA Technical Reports Server (NTRS)

Four different research experiments with digital image processing at INPE will be described: (1) edge detection by hypothesis testing; (2) image interpolation by finite impulse response filters; (3) spatial feature extraction methods in multispectral classification; and (4) translational image registration by sequential tests of hypotheses.

Mascarenhas, N. D. A. (principal investigator)

1984-01-01

313

SUSAN - A New Approach to Low Level Image Processing  

Microsoft Academic Search

This paper describes a new approach to low level image processing; in particular, edge and corner detection and structure preserving noise reduction. Non-linear filtering is used to define which parts of the image are closely related to each individual pixel; each pixel has associated with it a local image region which is of similar brightness to that pixel. The new

Stephen M. Smith; J. Michael Brady; Stephen M. Smith

1997-01-01

314

Probabilistic Sequence Models for Image Sequence Processing and Recognition  

E-print Network

Probabilistic Sequence Models for Image Sequence Processing and Recognition. PhD thesis by Philippe Dreuw. This PhD thesis investigates image sequence labeling problems and a hidden Markov model (HMM) based image sequence recognition system which has been adopted from a large vocabulary continuous

Ney, Hermann

315

A color image processing pipeline for digital microscope  

NASA Astrophysics Data System (ADS)

Digital microscopes have found wide application in biology, medicine and other fields. A digital microscope differs from a traditional optical microscope in that there is no need to observe the sample through an eyepiece directly, because the optical image is projected directly onto a CCD/CMOS camera. However, because of the imaging differences between the human eye and the sensor, a color image processing pipeline is needed for the digital microscope's electronic eyepiece to obtain a fine image. The color image pipeline for a digital microscope, comprising the procedures that convert the RAW image data captured by the sensor into a real color image, is of great concern to the quality of the microscopic image. The color pipeline for a digital microscope differs from that of digital still cameras and video cameras because of the specific requirements of microscopic images, which should have high dynamic range, keep the same color as the objects observed, and support a variety of image post-processing. In this paper, a new color image processing pipeline is proposed to satisfy the requirements of digital microscope images. The algorithm of each step in the pipeline is designed and optimized with the purpose of getting a high quality image and accommodating diverse user preferences. With the proposed pipeline implemented on the digital microscope platform, the output color images meet the various analysis requirements of images in the medicine and biology fields very well. The major steps of the proposed color imaging pipeline are: black level adjustment, defect pixel removal, noise reduction, linearization, white balance, RGB color correction, tone scale correction and gamma correction.
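Three of the listed stages (black level adjustment, white balance and gamma correction) can be illustrated for a single RGB pixel; the offset, channel gains and gamma value below are hypothetical placeholders, not parameters from the paper.

```python
def black_level(rgb, offset=16):
    """Subtract the sensor's black-level offset, clamping at zero.
    The offset of 16 is an illustrative assumption."""
    return tuple(max(0, c - offset) for c in rgb)

def white_balance(rgb, gains=(1.8, 1.0, 1.5)):
    """Scale each channel by its white-balance gain (hypothetical gains)."""
    return tuple(min(255, int(c * g)) for c, g in zip(rgb, gains))

def gamma_correct(rgb, gamma=2.2):
    """Encode linear channel values with a display gamma of 1/2.2."""
    return tuple(int(round(255 * (c / 255) ** (1 / gamma))) for c in rgb)

def pipeline(raw_rgb):
    """Minimal three-stage sketch: black level -> white balance -> gamma."""
    return gamma_correct(white_balance(black_level(raw_rgb)))

print(pipeline((80, 120, 60)))
```

The remaining stages (defect pixel removal, noise reduction, linearization, RGB color correction, tone scale correction) are spatial or matrix operations that follow the same per-stage, composable structure.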

Liu, Yan; Liu, Peng; Zhuang, Zhefeng; Chen, Enguo; Yu, Feihong

2012-10-01

316

Using quantum filters to process images of diffuse axonal injury  

NASA Astrophysics Data System (ADS)

Some images corresponding to diffuse axonal injury (DAI) are processed using several quantum filters such as Hermite, Weibull and Morse. Diffuse axonal injury is a particular, common and severe case of traumatic brain injury (TBI). DAI involves global damage to brain tissue on a microscopic scale and causes serious neurologic abnormalities. New imaging techniques provide excellent images showing cellular damage related to DAI. Such images can be processed with quantum filters, which achieve high resolution of dendritic and axonal structures in both normal and pathological states. Using the Laplacian operators from the new quantum filters, excellent edge detectors for neurofiber resolution are obtained. Quantum processing of DAI images is performed using computer algebra, specifically Maple. Construction of quantum filter plugins, which could be incorporated into the ImageJ software package to make its use simpler for medical personnel, is proposed as a future research line.
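A Laplacian edge detector of the kind mentioned above can be sketched with the standard discrete 4-neighbour Laplacian; this is a generic operator, not the specific Hermite, Weibull or Morse quantum-filter Laplacians used in the paper.

```python
def laplacian(img):
    """Discrete 4-neighbour Laplacian; pixels outside the image count as zero.
    Large response magnitude marks intensity edges."""
    h, w = len(img), len(img[0])

    def at(y, x):
        return img[y][x] if 0 <= y < h and 0 <= x < w else 0

    return [
        [at(y - 1, x) + at(y + 1, x) + at(y, x - 1) + at(y, x + 1) - 4 * at(y, x)
         for x in range(w)]
        for y in range(h)
    ]

# A flat region next to a bright column: response is zero inside flat
# areas and non-zero along the intensity boundary.
img = [[0, 0, 9],
       [0, 0, 9],
       [0, 0, 9]]
print(laplacian(img))
```

Thresholding the magnitude of this response gives a crude edge map; the quantum filters in the paper refine the kernel, not this overall detect-by-second-derivative structure.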

Pineda Osorio, Mateo

2014-06-01

317

Color quantitation through image processing in dermatology  

Microsoft Academic Search

Classical color models and their applications to computer vision are reviewed. The performances of color quantitation from digitized images are compared with those derived from a chromameter. The color quantitation obtained from either digitized color slides or directly digitized images is proved to be more efficient than the conventional visual assessment of observers. A methodology is proposed for determining the

M. Herbin; A. Venot; J. Y. Devaux; C. Piette

1990-01-01

318

Colour model analysis for microscopic image processing  

Microsoft Academic Search

Aims: This article presents a comparative study between different colour models (RGB, HSI and CIEL*a*b*) applied to very large microscopic image analysis. Such analysis of different colour models is needed in order to carry out a successful detection and therefore a classification of different regions of interest (ROIs) within the image. Methods: All colour models have their advantages and

Gloria Bueno; Roberto González; Oscar Déniz; Jesús González; Marcial García-Rojo

2000-01-01

319

Infrared image processing and data analysis  

Microsoft Academic Search

Infrared thermography in nondestructive testing provides images (thermograms) in which zones of interest (defects) sometimes appear as subtle signatures. In this context, raw images are often not appropriate, since most defects would be missed. In some other cases, what is needed is a quantitative analysis, such as for defect detection and characterization. In this paper, presentation is made of various methods

C. Ibarra-Castanedo; D. González; M. Klein; M. Pilla; S. Vallerand; X. Maldague

2004-01-01

320

Interactive image processing console A6471  

Microsoft Academic Search

Many system designs and implementations of image processors have been published and discussed to which the authors add another one promising a good compromise between speed, flexibility, and costs. Its main components are programmable semiconductor image refresh memories and a fast parallel processor both acting at TV scan velocity. They are embedded in a 16-bit microcomputer system which interfaces them

V. Kempe; B. Rebel; W. Wilhelmi

1982-01-01

321

Application of near-infrared image processing in agricultural engineering  

NASA Astrophysics Data System (ADS)

Recently, with the development of computer technology, the application field of near-infrared (NIR) image processing has become much wider. This paper introduces the technical characteristics and development of modern NIR imaging and NIR spectroscopy analysis, and surveys applications and studies of NIR image processing in agricultural engineering in recent years, based on the application principles and development characteristics of near-infrared imaging. NIR imaging is very useful for nondestructive inspection of the external and internal quality of agricultural products, and near-infrared spectroscopy is important for detecting stored-grain insects. Computer vision detection based on NIR imaging can help manage food logistics, and the application of NIR imaging has promoted quality management of agricultural products. Finally, some advice and prospects concerning further application research fields of NIR imaging in agricultural engineering are put forward.

Chen, Ming-hong; Zhang, Guo-ping; Xia, Hongxing

2009-07-01

322

Image processing using light-sensitive chemical waves  

Microsoft Academic Search

Image processing is usually concerned with the computer manipulation and analysis of pictures. Typical procedures in computer image-processing are concerned with improvement of degraded (low-contrast or noisy) pictures, restoration and reconstruction, segmenting of pictures into parts and pattern recognition of properties of the pre-processed pictures. To solve these problems, digitized pictures are processed by local operations in a sequential manner.

L. Kuhnert; K. I. Agladze; V. I. Krinsky

1989-01-01

323

IPL processing of the Viking orbiter images of Mars  

NASA Technical Reports Server (NTRS)

The Viking orbiter cameras returned over 9000 images of Mars during the 6-month nominal mission. Digital image processing was required to produce products suitable for quantitative and qualitative scientific interpretation. Processing included the production of surface elevation data using computer stereophotogrammetric techniques, crater classification based on geomorphological characteristics, and the generation of color products using multiple black-and-white images recorded through spectral filters. The Image Processing Laboratory of the Jet Propulsion Laboratory was responsible for the design, development, and application of the software required to produce these 'second-order' products.

Ruiz, R. M.; Elliott, D. A.; Yagi, G. M.; Pomphrey, R. B.; Power, M. A.; Farrell, W., Jr.; Lorre, J. J.; Benton, W. D.; Dewar, R. E.; Cullen, L. E.

1977-01-01

324

High resolution image processing on low-cost microcomputers  

NASA Technical Reports Server (NTRS)

Recent advances in microcomputer technology have resulted in systems that rival the speed, storage, and display capabilities of traditionally larger machines. Low-cost microcomputers can provide a powerful environment for image processing. A new software program which offers sophisticated image display and analysis on IBM-based systems is presented. Designed specifically for a microcomputer, this program provides a wide-range of functions normally found only on dedicated graphics systems, and therefore can provide most students, universities and research groups with an affordable computer platform for processing digital images. The processing of AVHRR images within this environment is presented as an example.

Miller, R. L.

1993-01-01

325

Image and Natural Language Processing for Multimedia Information Retrieval  

Microsoft Academic Search

Image annotation, the task of automatically generating description words for a picture, is a key component in various image search and retrieval applications. Creating image databases for model development is, however, costly and time consuming, since the keywords must be hand-coded and the process repeated for new collections. In this work we exploit the vast resource of images and documents

Mirella Lapata

2010-01-01

326

Spatially Variant Morphological Image Processing: Theory and Applications  

E-print Network

Spatially Variant Morphological Image Processing: Theory and Applications (N. Bouaynaya and D.). Classical mathematical morphology is invariant under Euclidean translations; there has been interest in the extension of mathematical morphology to spatially-variant (image) processing. This paper presents a general theory of spatially-variant mathematical morphology

Bouaynaya, Nidhal

327

Characterisation of magnetotactic bacteria using image processing techniques  

Microsoft Academic Search

The response of magnetotactic bacteria to an applied magnetic field has been analyzed using image processing techniques. Bacterial characteristics, including magnetic movement, have been processed at a higher rate and evaluated with greater accuracy. This method offers a unique tool in data analysis and enhancement for recorded images of biological systems

A. S. Bahaj; P. A. B. James

1993-01-01

328

Signal and Image Processing for Crime Control and Crime Prevention  

Microsoft Academic Search

In this paper we will take a critical look at the research and development of signal and image processing technologies and new applications of existing technologies to improve crime control and crime prevention. Signal and image processing techniques are used in many aspects of sensing the environment both during a crime and in the post-crime analysis of the scene. Common

Susan Hackwood; P. Aaron Potter

1999-01-01

329

Image Processing In Laser-Beam-Steering Subsystem  

NASA Technical Reports Server (NTRS)

Conceptual design of image-processing circuitry developed for proposed tracking apparatus described in "Beam-Steering Subsystem For Laser Communication" (NPO-19069). In proposed system, desired frame rate achieved by "windowed" readout scheme in which only pixels containing and surrounding two spots read out and others skipped without being read. Image data processed rapidly and efficiently to achieve high frequency response.
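The "windowed" readout scheme can be sketched as cropping a small pixel window around each spot and skipping everything else; the frame contents, window size and spot centres below are hypothetical.

```python
def read_window(frame, center, half=1):
    """Return only the (2*half+1)-square window of pixels around a spot
    centre, clipped to the frame; pixels outside the window are skipped."""
    cy, cx = center
    h, w = len(frame), len(frame[0])
    return [row[max(0, cx - half):min(w, cx + half + 1)]
            for row in frame[max(0, cy - half):min(h, cy + half + 1)]]

frame = [[y * 10 + x for x in range(6)] for y in range(6)]  # 6x6 test frame
spots = [(1, 1), (4, 4)]  # two hypothetical spot centres
windows = [read_window(frame, s) for s in spots]
print(windows[0])
```

Reading two 3x3 windows instead of the full 6x6 frame cuts the pixel count from 36 to 18 here; at realistic sensor sizes the saving is what makes the high frame rate achievable.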

Lesh, James R.; Ansari, Homayoon; Chen, Chien-Chung; Russell, Donald W.

1996-01-01

330

The processing of cardiac image based on optical mapping  

Microsoft Academic Search

Cardiac optical mapping converts cardiac membrane potential into an optical signal with a voltage-sensitive dye, and carries out image acquisition, storage, processing and other functions. This paper successfully completes image processing of the rabbit heart based on optical mapping: displaying the membrane potential conduction map, denoising the single-cell action potential duration map, locating the beginning of depolarization and the end of repolarization, and

Hua Xiao; Xizhao Lv

2010-01-01

331

A toolkit for parallel image processing  

E-print Network

A toolkit for parallel image processing (J. M. Squyres, A. Lumsdaine, R. L. Stevenson). The computational problem is due to the very high resolution of the imagery data; a desktop workstation can become a severe bottleneck in the enhancement of imagery data. Due to the nature

Lumsdaine, Andrew

332

Practical image processing systems for improving player's skill  

Microsoft Academic Search

In this paper, we introduce practical image processing systems for improving a player's skill in sport. Existing image processing research produces highly abstracted information, but in sports education and training the information required is information a trainer can use as feedback for a player. We show three systems (spin measurement of a table tennis ball, ball speed estimation of

Toru TAMAKI; Higashi Hiroshima

333

FPGA Based Controller for Heterogeneous Image Processing System  

Microsoft Academic Search

In the present paper the construction of a controller is described, which is used to control the RETINA image processing platform. The 32-bit RETINA card is dedicated to image acquisition, processing and analysis. The module resources include a video ADC, a Virtex FPGA device, a floating point Motorola 96002 DSP and a PCI Master interface, which enables the execution of all the

Marek Gorgon; Jaromir Przybylo

2001-01-01

334

Infrared image processing and its application to forest fire surveillance  

Microsoft Academic Search

This paper describes a scheme for automatic forest surveillance. A complete system for forest fire detection is first presented, although we focus on infrared image processing. The proposed scheme, based on infrared image processing, performs early detection of any fire threat. With the aim of determining the presence or absence of fire, the proposed algorithm performs the fusion of different

Ignacio Bosch; Soledad Gomez; Luis Vergara; Jorge Moragues

2007-01-01

335

Cryo-imaging of 70+ GB mice: image processing/visualization challenges and biotechnology applications  

Microsoft Academic Search

The Case whole-mouse cryo-imaging system is a section-and-image system which provides microscopic, information-rich, whole-mouse color brightfield and molecular fluorescence images of an entire mouse. Cryo-imaging creates extremely large data sets (>70 GB color brightfield) and presents a multitude of instrumentation and image processing/visualization challenges. Precise mechanical control was required to move a 50 lb microscope payload for tiled

David L. Wilson; Madhusudhana Gargesha; Debashish Roy; Mohammed Q. Qutaish; Ganapathi Krishnamurthi; Kristin Sullivant; Patiwet Wuttisarnwattana; Hong Lu; Benjamin Moore; Christian Anderson

2011-01-01

336

Evaluating Teaching through Teaching Awards.  

ERIC Educational Resources Information Center

Reviews the literature on teaching awards and then examines a specific case, the Alan P. Stuart Award for Excellence in Teaching at the University of New Brunswick, using Menges's three tests of effective awards. Suggests that examining the strengths and weaknesses of institutional teaching awards can help illuminate the processes involved in…

Carusetta, Ellen

2001-01-01

337

Development of an image processing software for medical thermogram analysis using a commercially available image processing system  

Microsoft Academic Search

The medical far infrared (FIR) imaging system has recently been modified to be more sensitive, rapid and cheap. Also, the image format for processing on a personal computer and output format from integrated circuit of a FIR sensor have gradually been standardized. The objective of the report is to develop an application software to analyze clinical FIR images. We developed

Iwao Fujimasa; Hideo Nakazawa; Eiichi Miyasaka

1998-01-01

338

Advanced technology development for image gathering, coding, and processing  

NASA Technical Reports Server (NTRS)

Three overlapping areas of research activities are presented: (1) Information theory and optimal filtering are extended to visual information acquisition and processing. The goal is to provide a comprehensive methodology for quantitatively assessing the end-to-end performance of image gathering, coding, and processing. (2) Focal-plane processing techniques and technology are developed to combine effectively image gathering with coding. The emphasis is on low-level vision processing akin to the retinal processing in human vision. (3) A breadboard adaptive image-coding system is being assembled. This system will be used to develop and evaluate a number of advanced image-coding technologies and techniques as well as research the concept of adaptive image coding.

Huck, Friedrich O.

1990-01-01

339

Teaching Image Formation by Extended Light Sources: The Use of a Model Derived from the History of Science  

NASA Astrophysics Data System (ADS)

This research, carried out in Greece on pupils aged 12-16, focuses on the transformation of their representations concerning light emission and image formation by extended light sources. The instructive process was carried out in two stages, each one having a different, distinct target set. During the first stage, the appropriate conflict conditions were created by contrasting the subjects’ predictions with the results of experimental situations inspired by the History of Science, with a view to destabilizing the pupils’ alternative representations. During the second stage, the experimental teaching intervention was carried out; it was based on the geometrical optics model and its parameters were derived from Kepler’s relevant historic experiment. For the duration of this process and within the framework of didactical interactions, an effort was made to reorganize initial limited representations and restructure them at the level of the accepted scientific model. The effectiveness of the intervention was evaluated two weeks later, using experimental tasks which had the same cognitive yet different empirical content with respect to the tasks conducted during the intervention. The results of the study showed that the majority of the subjects accepted the model of geometrical optics, that is, the pupils were able to correctly predict and adequately justify the experimental results based on the principle of punctiform light emission. Educational and research implications are discussed.

Dedes, Christos; Ravanis, Konstantinos

2009-01-01

340

Dynamic feature analysis for Voyager at the Image Processing Laboratory  

NASA Technical Reports Server (NTRS)

Voyager 1 and 2 were launched from Cape Kennedy to Jupiter, Saturn, and beyond on September 5, 1977 and August 20, 1977. The role of the Image Processing Laboratory is to provide the Voyager Imaging Team with the necessary support to identify atmospheric features (tiepoints) for Jupiter and Saturn data, and to analyze and display them in a suitable form. This support includes the software needed to acquire and store tiepoints, the hardware needed to interactively display images and tiepoints, and the general image processing environment necessary for decalibration and enhancement of the input images. The objective is an understanding of global circulation in the atmospheres of Jupiter and Saturn. Attention is given to the Voyager imaging subsystem, the Voyager imaging science objectives, hardware, software, display monitors, a dynamic feature study, decalibration, navigation, and data base.

Yagi, G. M.; Lorre, J. J.; Jepsen, P. L.

1978-01-01

341

Optical Processing of Speckle Images with Bacteriorhodopsin for Pattern Recognition  

NASA Technical Reports Server (NTRS)

Logarithmic processing of images with multiplicative noise characteristics can be utilized to transform the image into one with an additive noise distribution. This simplifies subsequent image processing steps for applications such as image restoration or correlation for pattern recognition. One particularly common form of multiplicative noise is speckle, for which the logarithmic operation not only produces additive noise, but also makes it of constant variance (signal-independent). We examine the optical transmission properties of some bacteriorhodopsin films here and find them well suited to implement such a pointwise logarithmic transformation optically in a parallel fashion. We present experimental results of the optical conversion of speckle images into transformed images with additive, signal-independent noise statistics using the real-time photochromic properties of bacteriorhodopsin. We provide an example of improved correlation performance in terms of correlation peak signal-to-noise for such a transformed speckle image.

Downie, John D.; Tucker, Deanne (Technical Monitor)

1994-01-01
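The pointwise logarithmic transform described above is easy to illustrate digitally. A minimal Python sketch, with simulated unit-mean exponential speckle standing in for the optical bacteriorhodopsin implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Clean signal and fully developed speckle: exponentially distributed
# multiplicative noise with unit mean.
signal = np.linspace(1.0, 10.0, 100_000)
speckle = rng.exponential(scale=1.0, size=signal.size)
observed = signal * speckle                 # multiplicative noise model

# The pointwise logarithm turns the product into a sum:
# log(observed) = log(signal) + log(speckle)
additive_noise = np.log(observed) - np.log(signal)

# The residual noise is now additive and signal-independent: its variance
# is the same where the signal is weak and where it is strong.
var_weak = additive_noise[signal < 3.0].var()
var_strong = additive_noise[signal > 8.0].var()
print(var_weak, var_strong)                 # nearly equal (about pi^2/6)
```

The constant-variance property is exactly what makes subsequent correlation or restoration steps simpler after the transform.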

342

Graphical user interface for image acquisition and processing  

DOEpatents

An event-driven GUI-based image acquisition interface for the IDL programming environment, designed for CCD camera control and image acquisition directly into the IDL environment, where image manipulation and data analysis can be performed, together with a toolbox of real-time analysis applications. Running the image acquisition hardware directly from IDL removes the necessity of first saving images in one program and then importing the data into IDL for analysis in a second step. Bringing the data directly into IDL creates an opportunity for the implementation of IDL image processing and display functions in real time. The program allows control over the available charge coupled device (CCD) detector parameters, data acquisition, file saving and loading, and image manipulation and processing, all from within IDL. The program is built using IDL's widget libraries to control the on-screen display and user interface.

Goldberg, Kenneth A. (Berkeley, CA)

2002-01-01

343

Teaching the Process of Science: A Critical Component of Introductory Geoscience Courses  

NASA Astrophysics Data System (ADS)

Undergraduate students hold many misconceptions about the nature and process of science, including the social and cultural components of the scientific endeavor. These misconceptions are perhaps even more pronounced in the geosciences, where most students enter college without having been exposed to subject matter in high school. Many faculty and teachers feel that the process of science is embedded in their teaching through the inclusion of laboratory exercises and assigned readings in the primary literature. These techniques utilize the tools of science, but do not necessarily enlighten students in the actual process by which science progresses. Students do gain that understanding when they are involved in research, but the majority of the undergraduate research experiences are capstone experiences for students who choose to major in the science and engineering disciplines. A critical vehicle for teaching most undergraduate students about the process of science, therefore, is the introductory science course. In these courses, teaching the nature and process of science requires going beyond implicit use of the tools and techniques of science to making explicit reference to the process of science and, in addition, allowing students time to reflect on how they have participated in the process. We have developed a new series of freely accessible, web-based reading materials (available at http://www.visionlearning.com/process_science.php) that explicitly discuss the process of science and can be easily incorporated into any introductory science course, including introductory geoscience. These modules cover a variety of topics including specific research methods, such as experimentation, description, and modeling, as well as other aspects of the process of science like scientific writing, data analysis and interpretation, and the use of statistics. 
Our preliminary assessment results suggest that students find the text interesting and that they specifically address misconceptions held by students prior to reading them. During fall 2008 we will be more thoroughly evaluating the utility of these materials in a treatment-control designed study initiated in a large, introductory non-major science course. This presentation will present an overview of these materials as well as preliminary data from this evaluation.

Egger, A. E.; Carpi, A.

2008-12-01

344

Approach to retina optical coherence tomography image processing  

NASA Astrophysics Data System (ADS)

Optical coherence tomography (OCT) is a recently developed imaging technology. Using the Zeiss STRATUS OCT, one can obtain clear tomographic pictures of the retina and macula lutea. The clinical use of image processing requires both medical knowledge and expertise in image processing techniques. This paper focuses on the processing of retinal OCT images to design an automatic retinal OCT image identification system that could help evaluate the retina and examine and clinically diagnose fundus diseases. The motivation of our work is to extract the contour and highlight the feature area of the lesion clearly and exactly. Generally this involves image segmentation, enhancement, and binarization. In this paper we reduce the image noise and connect the symbolic area through color segmentation, low-pass filtering, and mathematical morphology algorithms, and finally discern some common and distinct properties of the post-processed images compared with the real OCT images. Experiments were done on cystoid macular edema, macular hole, and normal retina OCT images. The results show that the proposed approach is feasible and suitable for further image identification and classification according to ophthalmology criteria.

Yuan, Jiali; Liu, Ruihua; Xuan, Gao; Yang, Jun; Yuan, Libo

2007-03-01

345

Optical Signal Processing: Poisson Image Restoration and Shearing Interferometry  

NASA Technical Reports Server (NTRS)

Optical signal processing can be performed in either digital or analog systems. Digital computers and coherent optical systems are discussed as they are used in optical signal processing. Topics include: image restoration; phase-object visualization; image contrast reversal; optical computation; image multiplexing; and fabrication of spatial filters. Digital optical data processing deals with restoration of images degraded by signal-dependent noise. When the input data of an image restoration system are the numbers of photoelectrons received from various areas of a photosensitive surface, the data are Poisson distributed with mean values proportional to the illuminance of the incoherently radiating object and background light. Optical signal processing using coherent optical systems is also discussed. Following a brief review of the pertinent details of Ronchi's diffraction grating interferometer, moire effect, carrier-frequency photography, and achromatic holography, two new shearing interferometers based on them are presented. Both interferometers can produce variable shear.

Hong, Yie-Ming

1973-01-01

346

Image processing for nuclear microprobe at Fudan University  

NASA Astrophysics Data System (ADS)

A noisy secondary electron image of a flower cell was processed by filtering both in the spatial domain and in the Fourier frequency domain. All the pulse noise and the periodic noise were clearly eliminated. Two further images, taken by PIXE analysis of a silicon chip covered by a patterned photoresist layer, were also processed: one was skewed and squeezed, and after geometric correction the realistic silicon distribution was obtained; the other was processed to enhance the edge information.

Zou, Degang; Ren, Chigang; Zhou, Shijun; Tang, Jiangyong; Yang, Fujia

1995-09-01
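Removing periodic noise by filtering in the Fourier frequency domain, as done above, can be sketched with a notch filter; the synthetic stripe pattern and its frequency here are illustrative:

```python
import numpy as np

# Synthetic 128x128 "image": a smooth ramp plus periodic stripe noise,
# standing in for the periodic noise in the secondary electron image.
n = 128
y, x = np.mgrid[0:n, 0:n]
clean = x / n
noisy = clean + 0.5 * np.sin(2 * np.pi * 16 * y / n)   # 16-cycle stripes

# In the Fourier domain the periodic noise appears as two sharp conjugate
# peaks; a notch filter simply zeroes them out.
F = np.fft.fft2(noisy)
F[16, 0] = 0.0
F[-16, 0] = 0.0
restored = np.fft.ifft2(F).real

print(np.abs(noisy - clean).max(), np.abs(restored - clean).max())
```

Because the stripe frequency fits the grid exactly, the notch removes it completely here; on real data the peaks are spread over a few bins and a small neighborhood is zeroed instead.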

347

Rapid prototyping in the development of image processing systems  

Microsoft Academic Search

This contribution presents a rapid prototyping approach for the real-time demonstration of image processing algorithms. As an example, EADS/LFK has developed a basic IR target tracking system implementing this approach. Traditionally, in research and industry, time-independent simulation of image processing algorithms is performed on a host computer. This method is good for demonstrating the algorithms' capabilities. Rarely done is a

Arno von der Fecht; Claus Thomas Kelm

2004-01-01

348

Data management in pattern recognition and image processing systems  

NASA Technical Reports Server (NTRS)

Data management considerations are important to any system which handles large volumes of data or where the manipulation of data is technically sophisticated. A particular problem is the introduction of image-formatted files into the mainstream of data processing application. This report describes a comprehensive system for the manipulation of image, tabular, and graphical data sets which involve conversions between the various data types. A key characteristic is the use of image processing technology to accomplish data management tasks. Because of this, the term 'image-based information system' has been adopted.

Zobrist, A. L.; Bryant, N. A.

1976-01-01

349

Subway tunnel crack identification algorithm research based on image processing  

NASA Astrophysics Data System (ADS)

The detection of cracks in tunnels has a profound impact on tunnel safety. Low contrast, uneven illumination, and severe noise pollution are common in tunnel surface images. As traditional image processing algorithms are not suitable for detecting tunnel cracks, a new image processing method for detecting cracks in surface images of subway tunnels is presented in this paper. The algorithm consists of two steps. The first is preprocessing, which uses global and local methods simultaneously. The second is the elimination of different types of noise based on connected components. The experimental results show that the proposed algorithm is effective for detecting tunnel surface cracks.

Bai, Biao; Zhu, Liqiang; Wang, Yaodong

2014-04-01
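The second step above, eliminating noise based on connected components, can be sketched with `scipy.ndimage`; the crack geometry, noise rate, and size threshold are all assumed for illustration:

```python
import numpy as np
from scipy import ndimage

# Binary crack map: one elongated crack plus scattered speckle noise.
rng = np.random.default_rng(1)
img = np.zeros((64, 64), dtype=bool)
img[30, 5:60] = True                       # thin crack-like component
img |= rng.random((64, 64)) < 0.01         # isolated noise pixels

# Label connected components and keep only those large enough to be
# cracks (the size threshold of 20 pixels is an assumed parameter).
labels, n_comp = ndimage.label(img)
sizes = ndimage.sum(img, labels, range(1, n_comp + 1))
keep = np.flatnonzero(sizes >= 20) + 1
cleaned = np.isin(labels, keep)

print(img.sum(), cleaned.sum())            # noise pixels are removed
```

In practice the filter could also use elongation or orientation of each component, since cracks are thin and long while noise blobs are compact.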

350

Digital image processing for radioactive ion microscopy  

E-print Network

…of these measures is given in Table 1. Various authors [3, 4, 8, 9] discuss the use of several preprocessing methods, primarily gradient-related operations, used to enhance the performance of the correlation measure. One author [9], in using a gradient… The negative sign serves as a flag for later recovery of the proper data. The slight degradation in precision is not visible in the final images shown in Section III, since the output images are quantized to 8 bits…

Nash, Reuel William

1983-01-01

351

Hardcopy Image Barcodes via Block Error Diffusion (IEEE Trans. on Image Processing, revised)  

E-print Network

Hardcopy image barcodes via block error diffusion that is explicitly modeled. We refer to the encoded printed version as an image barcode due to its high information…

Evans, Brian L.

352

Fast image processing on chain board of inverted tooth chain  

NASA Astrophysics Data System (ADS)

This paper discusses ordinary image processing techniques for the chain board of the inverted tooth chain, including noise reduction, image segmentation, edge detection, and contour extraction, and puts forward a new sub-pixel method for edge location of circles. The method first applies Canny edge detection to the image to improve the initial location precision of the edge, then calculates the gradient direction, interpolates the gradient image (detected by the Sobel operator) along the gradient direction, and finally obtains the sub-pixel location of the edge. Two least-squares fitting methods were applied to line edges to obtain their sub-pixel location; analysis and experiments show that the location error of the improved least-squares linear fitting method is one quarter of that of ordinary least-squares linear fitting. The sub-pixel location of circles increases the effective resolution of the CCD by a factor of 42, greatly enhancing the location precision of image edges. For subsequent rapid on-line inspection, the whole environment was integrated, containing image preprocessing, Hough transform for lines, setting of image position and orientation, sub-pixel location of lines and circles, and output of calculation results. The whole process runs without an operator, and the processing time for a single part is less than 0.3 second. The sub-pixel location method proposed here suits precise image location, the integrated calculation method meets the requirement of rapid inspection, and together they lay the foundation for on-line precision visual measurement.

Liu, Qing-min; Li, Guo-fa

2007-12-01
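One common way to push an edge location below one pixel, in the spirit of the gradient interpolation described above (though not necessarily the paper's exact scheme), is a three-point parabolic fit to the gradient peak:

```python
import numpy as np

# 1-D edge profile: a smooth step whose true edge lies between pixels.
x = np.arange(20, dtype=float)
true_edge = 9.3                                         # sub-pixel ground truth
profile = 1.0 / (1.0 + np.exp(-2.0 * (x - true_edge)))  # sigmoid step

# The gradient magnitude peaks at the edge; an integer argmax is only
# pixel-accurate.
grad = np.gradient(profile)
k = int(np.argmax(grad))

# Three-point parabolic interpolation of the gradient peak refines the
# location to sub-pixel precision.
a, b, c = grad[k - 1], grad[k], grad[k + 1]
subpixel_edge = k + 0.5 * (a - c) / (a - 2.0 * b + c)
print(k, subpixel_edge)
```

The refined estimate lands much closer to the true edge than the integer argmax, which is the effect the abstract quantifies as an increase in effective CCD resolution.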

353

Parallel Implementation of Hyperspectral Image Processing Algorithms  

E-print Network

Space Flight Center in Maryland developed the concept of the Beowulf cluster with the aim of creating a cost-effective parallel computing system from commodity components to satisfy specific computational requirements. Hyperspectral imaging applications require analysis algorithms able to provide a response in (near) real-time.

Plaza, Antonio J.

354

Real-time image processing architecture for robot vision  

NASA Astrophysics Data System (ADS)

This paper presents a study of the impact of MMX technology and PIII Streaming SIMD Extensions (Single Instruction stream, Multiple Data stream) on image processing and machine vision applications, which, because of their hard real-time constraints, are an undoubtedly challenging task. A comparison with traditional scalar code and with another parallel SIMD architecture (the IMAP-VISION board) is discussed, with emphasis on the particular programming strategies for speed optimization. More precisely, we discuss the low-level and intermediate-level image processing algorithms which are best suited for parallel SIMD implementation; high-level image processing algorithms are more suitable for parallel implementation on MIMD architectures. While the IMAP-VISION system performs better because of its large number of processing elements, the MMX processor and PIII (with Streaming SIMD Extensions) remain good candidates for low-level image processing.

Persa, Stelian; Jonker, Pieter P.

2000-10-01

355

Image processing using wavelet- and fractal-based algorithms  

NASA Astrophysics Data System (ADS)

Modern image and signal processing methods strive to maximize signal-to-noise ratios, even in the presence of severe noise. Frequently, real-world data are degraded by undersampling of intrinsic periodicities, or by sampling at unevenly spaced intervals. This results in dropout or missing data, and such data sets are particularly difficult to process using conventional image processing methods. In many cases, one must still extract as much information as possible from a given data set, even though the available data may be sparse or noisy. In such cases, we suggest that algorithms based on the wavelet transform and fractal theory offer a viable alternative, as some early work in the area has indicated. An architecture for a software system is suggested to implement an improved scheme for the analysis, representation, and processing of images. The scheme is based on treating segments of images as wavelets and fractals so that small details in the images can be exploited and the data can be compressed. The objective is to improve this scheme so that it automatically and rapidly decomposes a 2D image into a combination of elemental images to which an array of processing methods can be applied. The images analyzed could be the patterns that the system is required to recognize, so the scheme offers potential utility for industrial and military applications involving robot vision and/or automatic recognition of targets.

Siddiqui, Khalid J.

1998-03-01
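A one-level Haar decomposition illustrates the wavelet side of such a scheme: small details land in the high-frequency subbands while a smooth image compresses into the low-frequency one. A self-contained sketch (the test image is synthetic):

```python
import numpy as np

def haar2d(img):
    """One level of the orthonormal 2-D Haar wavelet transform."""
    # Split rows into pairwise averages (low-pass) and differences (high-pass).
    lo = (img[:, 0::2] + img[:, 1::2]) / np.sqrt(2)
    hi = (img[:, 0::2] - img[:, 1::2]) / np.sqrt(2)
    # Repeat on columns, giving the four subbands LL, LH, HL, HH.
    ll = (lo[0::2, :] + lo[1::2, :]) / np.sqrt(2)
    lh = (lo[0::2, :] - lo[1::2, :]) / np.sqrt(2)
    hl = (hi[0::2, :] + hi[1::2, :]) / np.sqrt(2)
    hh = (hi[0::2, :] - hi[1::2, :]) / np.sqrt(2)
    return ll, lh, hl, hh

# A smooth image concentrates almost all of its energy in the LL subband,
# which is what makes wavelet decomposition useful for compression.
yy, xx = np.mgrid[0:64, 0:64]
img = np.sin(xx / 20.0) + np.cos(yy / 25.0)
ll, lh, hl, hh = haar2d(img)

total = (img ** 2).sum()
print((ll ** 2).sum() / total)   # close to 1 for a smooth image
```

Recursing on the LL subband yields the multi-level decomposition; the transform is orthonormal, so total energy is preserved across the four subbands.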

356

Evaluation of clinical image processing algorithms used in digital mammography.  

PubMed

Screening is the only proven approach to reduce the mortality of breast cancer, but significant numbers of breast cancers remain undetected even when all quality assurance guidelines are implemented. With the increasing adoption of digital mammography systems, image processing may be a key factor in the imaging chain. Although to our knowledge statistically significant effects of manufacturer-recommended image processing algorithms have not been previously demonstrated, the subjective experience of our radiologists, that the apparent image quality can vary considerably between different algorithms, motivated this study. This article addresses the impact of five such algorithms on the detection of clusters of microcalcifications. A database of unprocessed (raw) images of 200 normal digital mammograms, acquired with the Siemens Novation DR, was collected retrospectively. Realistic simulated microcalcification clusters were inserted in half of the unprocessed images. All unprocessed images were subsequently processed with five manufacturer-recommended image processing algorithms (Agfa Musica 1, IMS Raffaello Mammo 1.2, Sectra Mamea AB Sigmoid, Siemens OPVIEW v2, and Siemens OPVIEW v1). Four breast imaging radiologists were asked to locate and score the clusters in each image on a five-point rating scale. The free-response data were analyzed by the jackknife free-response receiver operating characteristic (JAFROC) method and, for comparison, also with the receiver operating characteristic (ROC) method. JAFROC analysis revealed highly significant differences between the image processing algorithms (F = 8.51, p < 0.0001), suggesting that image processing strongly impacts the detectability of clusters. Siemens OPVIEW v2 and Siemens OPVIEW v1 yielded the highest and lowest performances, respectively. ROC analysis of the data also revealed significant differences between the algorithms but at lower significance (F = 3.47, p = 0.0305) than JAFROC. 
Both statistical analysis methods revealed that the same six pairs of modalities were significantly different, but the JAFROC confidence intervals were about 32% smaller than the ROC confidence intervals. This study shows that image processing has a significant impact on the detection of microcalcifications in digital mammograms. Objective measurements, such as described here, should be used by the manufacturers to select the optimal image processing algorithm. PMID:19378737

Zanca, Federica; Jacobs, Jurgen; Van Ongeval, Chantal; Claus, Filip; Celis, Valerie; Geniets, Catherine; Provost, Veerle; Pauwels, Herman; Marchal, Guy; Bosmans, Hilde

2009-03-01

357

Image pre-processing for optimizing automated photogrammetry performances  

NASA Astrophysics Data System (ADS)

The purpose of this paper is to analyze how optical pre-processing with polarizing filters and digital pre-processing with HDR imaging may improve the automated 3D modeling pipeline based on SFM and Image Matching, with special emphasis on optically non-cooperative surfaces of shiny or dark materials. Because of the automatic detection of homologous points, the presence of highlights due to shiny materials, or nearly uniform dark patches produced by low-reflectance materials, may produce erroneous matching involving wrong 3D point estimations, and consequently holes and topological errors on the mesh originated by the associated dense 3D cloud. This is due to the limited dynamic range of the 8-bit digital images that are matched with each other for generating 3D data. The same 256 levels can be more usefully employed if the actual dynamic range is compressed, avoiding luminance clipping on the darker and lighter image areas. Such an approach is here considered both using optical filtering and HDR processing with tone mapping, with experimental evaluation on different Cultural Heritage objects characterized by non-cooperative optical behavior. Three test images of each object have been captured from different positions, changing the shooting conditions (filter/no-filter) and the image processing (no processing/HDR processing), in order to have the same 3 camera orientations with different optical and digital pre-processing, and applying the same automated process to each photo set.

Guidi, G.; Gonizzi, S.; Micoli, L. L.

2014-05-01

358

Research on the infrared image processing algorithm based on FPGA  

NASA Astrophysics Data System (ADS)

In recent years, infrared guidance technology has found more and more applications in the field of precision guidance, because it is not limited by night or by the weather. The development of infrared guidance technology depends on infrared image processing technology. This paper introduces an algorithm for infrared image nonuniformity correction based on FPGA. It uses multiplication instead of division and adopts an efficient pipeline technique to reduce the system's logic resource usage and improve the efficiency of the system. Because infrared imaging is influenced by environmental temperature, this paper proposes an infrared nonuniformity correction algorithm with compensation for environment temperature. This algorithm is very effective in reducing the influence of the environment temperature on infrared imaging. This infrared image processing system lays a foundation for the application of infrared guidance in the field of precision guidance.

Xin, Li

2014-11-01

359

Subband/Transform MATLAB Functions For Processing Images  

NASA Technical Reports Server (NTRS)

SUBTRANS software is package of routines implementing image-data-processing functions for use with MATLAB*(TM) software. Provides capability to transform image data with block transforms and to produce spatial-frequency subbands of transformed data. Functions cascaded to provide further decomposition into more subbands. Also used in image-data-compression systems. For example, transforms used to prepare data for lossy compression. Written for use in MATLAB mathematical-analysis environment.

Glover, D.

1995-01-01
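SUBTRANS itself is a set of MATLAB functions; a Python analogue of the block-transform subband idea (8x8 DCT blocks with an assumed 4x4 low-frequency retention, as done to prepare data for lossy compression) looks like this:

```python
import numpy as np
from scipy.fft import dctn, idctn

# Synthetic test image, mildly smoothed so low frequencies dominate.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
img = (img + np.roll(img, 1, 0) + np.roll(img, 1, 1)) / 3

mask = np.zeros((8, 8))
mask[:4, :4] = 1.0                 # low-frequency subband selector

# Transform each 8x8 block, drop the high-frequency subbands, invert.
recon = np.zeros_like(img)
for i in range(0, 64, 8):
    for j in range(0, 64, 8):
        coeffs = dctn(img[i:i + 8, j:j + 8], norm="ortho")
        recon[i:i + 8, j:j + 8] = idctn(coeffs * mask, norm="ortho")

print(np.abs(recon - img).mean())  # small residual from dropped subbands
```

Cascading the same split on the retained low-frequency coefficients gives the further subband decomposition the abstract mentions.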

360

Digital processing of stereoscopic image pairs.  

NASA Technical Reports Server (NTRS)

The problem under consideration is concerned with scene analysis during robot navigation on the surface of Mars. In this mode, the world model of the robot must be continuously updated to include sightings of new obstacles and scientific samples. In order to describe the content of a particular scene, it is first necessary to segment it into known objects. One technique for accomplishing this segmentation is by analyzing the pair of images produced by the stereoscopic cameras mounted on the robot. A heuristic method is presented for determining the range for each point in the two-dimensional scene under consideration. The method is conceptually based on a comparison of corresponding points in the left and right images of the stereo pair. However, various heuristics which are adaptive in nature are used to make the algorithm both efficient and accurate. Examples are given of the use of this so-called range picture for the purpose of scene segmentation.

Levine, M. D.

1973-01-01
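The comparison of corresponding points in the left and right images can be sketched as one-window block matching along a scanline; the toy images and search range here are illustrative, not the paper's adaptive heuristics:

```python
import numpy as np

# Toy stereo pair: the right image is the left image shifted horizontally
# by a known disparity, as for a fronto-parallel surface.
rng = np.random.default_rng(2)
left = rng.random((32, 64))
true_disp = 5
right = np.roll(left, -true_disp, axis=1)

# Compare a window around a left-image point with windows shifted along the
# corresponding right-image scanline; the SSD minimum gives the disparity.
r0, c0, w = 16, 30, 5                   # window centre and half-size
patch = left[r0 - w:r0 + w + 1, c0 - w:c0 + w + 1]
costs = []
for d in range(10):
    cand = right[r0 - w:r0 + w + 1, c0 - d - w:c0 - d + w + 1]
    costs.append(((patch - cand) ** 2).sum())
best_d = int(np.argmin(costs))
print(best_d)                            # matches true_disp
```

Disparity maps to range via triangulation, which is how the "range picture" used for scene segmentation is built.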

361

A novel data processing technique for image reconstruction of penumbral imaging  

NASA Astrophysics Data System (ADS)

CT image reconstruction techniques were applied to the data processing of penumbral imaging. Compared with other traditional processing techniques for penumbral coded-pinhole images, such as Wiener, Lucy-Richardson and blind deconvolution, this approach is brand new. In this method, the coded-aperture processing was for the first time made independent of the point spread function of the imaging diagnostic system. In this way, the technical obstacles in traditional coded-pinhole image processing caused by the uncertainty of the point spread function of the imaging diagnostic system were overcome. Based on this theoretical study, a simulation of penumbral imaging and image reconstruction was carried out, with fairly good results. In a visible-light experiment, a point light source was used to irradiate a 5mm×5mm object after diffuse scattering and volume scattering, and penumbral imaging was performed with an aperture size of ~20mm. Finally, the CT image reconstruction technique was used for image reconstruction, again with fairly good results.

Xie, Hongwei; Li, Hongyun; Xu, Zeping; Song, Guzhou; Zhang, Faqiang; Zhou, Lin

2011-06-01

362

Processing of polarametric SAR images. Final report  

SciTech Connect

The objective of this work was to develop a systematic method of combining multifrequency polarized SAR images. It is shown that the traditional methods of correlation, hard targets, and template matching fail to produce acceptable results. Hence, a new algorithm was developed and tested. The new approach combines the three traditional methods and an interpolation method. An example is shown that demonstrates the new algorithm's performance. The results are summarized and suggestions for future research are presented.

Warrick, A.L.; Delaney, P.A. [Univ. of Arizona, Tucson, AZ (United States)

1995-09-01

363

Color sensitivity of the multi-exposure HDR imaging process  

NASA Astrophysics Data System (ADS)

Multi-exposure high dynamic range (HDR) imaging builds HDR radiance maps by stitching together different views of the same scene taken with varying exposures. In practice, this process involves converting raw sensor data into low dynamic range (LDR) images, estimating the camera response curves, and using them to recover the irradiance for every pixel. During export, white balance settings and image stitching are applied, both of which influence the color balance in the final image. In this paper, we use a calibrated quasi-monochromatic light source, an integrating sphere, and a spectrograph to evaluate and compare the average spectral response of the image sensor. We finally draw some conclusions about the color consistency of HDR imaging and the additional steps necessary to use multi-exposure HDR imaging as a tool to measure physical quantities such as radiance and luminance.

Lenseigne, Boris; Jacobs, Valéry Ann; Withouck, Martijn; Hanselaer, Peter; Jonker, Pieter P.

2013-04-01
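Recovering irradiance from multiple exposures, the core of the process described above, can be sketched as a weighted merge. The hat weighting and the identity response curve are simplifying assumptions, not the paper's calibration:

```python
import numpy as np

# Each exposure gives an irradiance estimate Z/t, combined with a hat
# weight that discounts pixel values near the clipped extremes. A linear
# (identity) camera response is assumed; a real pipeline estimates the
# response curve first.
rng = np.random.default_rng(3)
E_true = rng.uniform(0.1, 2.0, size=1000)         # scene irradiance
times = np.array([0.1, 0.5, 2.0])                 # exposure times

num = np.zeros_like(E_true)
den = np.zeros_like(E_true)
for t in times:
    Z = np.clip(E_true * t, 0.0, 1.0)             # clipped sensor reading
    w = 1.0 - np.abs(2.0 * Z - 1.0)               # hat weight, 0 at 0 and 1
    num += w * Z / t
    den += w
E_hat = num / den

print(np.abs(E_hat - E_true).max())               # irradiance recovered
```

Long exposures clip the bright pixels and short ones see them cleanly, so the weighting lets each exposure contribute only where it is reliable.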

364

Diagnosis of skin cancer using image processing  

NASA Astrophysics Data System (ADS)

In this paper a methodology for classifying skin cancer in images of dermatologic spots based on spectral analysis using the K-law Fourier non-linear technique is presented. The image is segmented and binarized to build the function that contains the area of interest. The image is divided into its respective RGB channels to obtain the spectral properties of each channel. The green channel contains the most information and therefore this channel is always chosen. This information is multiplied point by point with a binary mask, and to this result a Fourier transform written in non-linear form is applied. Where the real part of this spectrum is positive, the spectral density takes unit values; otherwise it is zero. Finally, the ratio of the sum of the unit values of the spectral density to the sum of the values of the binary mask is calculated. This ratio is called the spectral index. When the calculated value falls within the spectral index range, three types of cancer can be detected; values found outside this range correspond to benign lesions.

Guerra-Rosas, Esperanza; Álvarez-Borrego, Josué; Coronel-Beltrán, Ángel

2014-10-01
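The spectral-index computation described above can be sketched as follows; the synthetic data, the plain FFT in place of the K-law non-linearity, and the normalization details are assumptions:

```python
import numpy as np

# Stand-in green channel and binary mask of the spot area (synthetic).
rng = np.random.default_rng(4)
green = rng.random((64, 64))
mask = np.zeros((64, 64))
mask[20:44, 20:44] = 1.0

# Point-by-point multiplication by the mask, then a Fourier transform;
# the spectral density is 1 where the real part is positive, else 0.
F = np.fft.fft2(green * mask)
density = (F.real > 0).astype(float)

# Spectral index: unit spectral-density values relative to the mask area.
spectral_index = density.sum() / mask.sum()
print(spectral_index)
```

In the paper this scalar is then compared against calibrated ranges to separate malignant types from benign lesions; those clinical ranges are not reproduced here.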

365

Image processing of underwater multispectral imagery  

USGS Publications Warehouse

Capturing in situ fluorescence images of marine organisms presents many technical challenges. The effects of the medium, as well as the particles and organisms within it, are intermixed with the desired signal. Methods for extracting and preparing the imagery for analysis are discussed in reference to a novel underwater imaging system called the low-light-level underwater multispectral imaging system (LUMIS). The instrument supports both uni- and multispectral collections, each of which is discussed in the context of an experimental application. In unispectral mode, LUMIS was used to investigate the spatial distribution of phytoplankton. A thin sheet of laser light (532 nm) induced chlorophyll fluorescence in the phytoplankton, which was recorded by LUMIS. Inhomogeneities in the light sheet led to the development of a beam-pattern-correction algorithm. Separating individual phytoplankton cells from a weak background fluorescence field required a two-step procedure consisting of edge detection followed by a series of binary morphological operations. In multispectral mode, LUMIS was used to investigate the bio-assay potential of fluorescent pigments in corals. Problems with the commercial optical-splitting device produced nonlinear distortions in the imagery. A tessellation algorithm, including an automated tie-point-selection procedure, was developed to correct the distortions. Only pixels corresponding to coral polyps were of interest for further analysis. Extraction of these pixels was performed by a dynamic global-thresholding algorithm.

Zawada, D.G.

2003-01-01

366

Processing ISS Images of Titan's Surface  

NASA Technical Reports Server (NTRS)

One of the primary goals of the Cassini-Huygens mission, in orbit around Saturn since July 2004, is to understand the surface and atmosphere of Titan. Surface investigations are primarily accomplished with RADAR, the Visual and Infrared Mapping Spectrometer (VIMS), and the Imaging Science Subsystem (ISS) [1]. The latter two use methane "windows", regions in Titan's reflectance spectrum where its atmosphere is most transparent, to observe the surface. For VIMS, this produces clear views of the surface near 2 and 5 microns [2]. ISS uses a narrow continuum band filter (CB3) at 938 nanometers. While these methane windows provide our best views of the surface, the images produced are not as crisp as ISS images of satellites like Dione and Iapetus [3] due to the atmosphere. Given a reasonable estimate of contrast (approx. 30%), the apparent resolution of features is approximately 5 pixels due to the effects of the atmosphere and the Modulation Transfer Function of the camera [1,4]. The atmospheric haze also reduces contrast, especially with increasing emission angles [5].

Perry, Jason; McEwen, Alfred; Fussner, Stephanie; Turtle, Elizabeth; West, Robert; Porco, Carolyn; Knowles, Ben; Dawson, Doug

2005-01-01

367

Image processing methods to obtain symmetrical distribution from projection image.  

PubMed

Flow visualization and measurement of cross-sectional liquid distribution is very effective to clarify the effects of obstacles in a conduit on heat transfer and flow characteristics of gas-liquid two-phase flow. In this study, two methods to obtain cross-sectional distribution of void fraction are applied to vertical upward air-water two-phase flow. These methods need projection image only from one direction. Radial distributions of void fraction in a circular tube and a circular-tube annuli with a spacer were calculated by Abel transform based on the assumption of axial symmetry. On the other hand, cross-sectional distributions of void fraction in a circular tube with a wire coil whose conduit configuration rotates about the tube central axis periodically were measured by CT method based on the assumption that the relative distributions of liquid phase against the wire were kept along the flow direction. PMID:15246409
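The Abel-transform reconstruction under the axial-symmetry assumption can be sketched with a discrete forward operator whose entries are the chord lengths of concentric annuli seen by parallel rays. This onion-peeling-style discretization is illustrative, not the authors' exact scheme.

```python
import numpy as np

def abel_matrix(n, dr=1.0):
    """Discrete forward Abel operator: chord lengths of n concentric annuli
    intersected by n parallel rays at heights y_i = (i + 0.5) * dr."""
    A = np.zeros((n, n))
    for i in range(n):
        y = (i + 0.5) * dr
        for j in range(n):
            r_in, r_out = j * dr, (j + 1) * dr
            if r_out > y:  # annulus j is crossed by ray i only if r_out > y
                a = max(r_in, y)
                A[i, j] = 2.0 * (np.sqrt(r_out**2 - y**2) - np.sqrt(a**2 - y**2))
    return A

def abel_invert(projection, dr=1.0):
    """Radial profile from a single projection, assuming axial symmetry."""
    return np.linalg.solve(abel_matrix(len(projection), dr), projection)

# consistency check: forward-project a known radial void-fraction profile
n = 20
r = np.arange(n) + 0.5
f_true = np.exp(-(r / 6.0) ** 2)
proj = abel_matrix(n) @ f_true
f_rec = abel_invert(proj)
```

Because the operator is triangular and well conditioned here, a direct solve recovers the profile exactly; real projection data would call for noise-aware regularization.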

Asano, H; Takenaka, N; Fujii, T; Nakamatsu, E; Tagami, Y; Takeshima, K

2004-10-01

368

SAR image processing by a memetic algorithm  

Microsoft Academic Search

In this study, a recently introduced evolutionary computing algorithm known as the memetic algorithm is utilized for data processing of synthetic aperture radar (SAR) imagery. Since its invention by Dr. Holland in the 1970s, the Genetic Algorithm (GA) has gained popularity in a wide range of engineering applications. The genetic approach is used for processing of SAR imagery to find a region

M. E. Aydemir; T. Gunel; S. Kargin; I. Erer; S. Kurnaz

2005-01-01

369

Fundus Image Processing System For Early Detection Of Glaucoma  

NASA Astrophysics Data System (ADS)

An image processing system based on a MC 68000 microprocessor has been developed for analysis of fundus photographs. This personal-computer-based system has specific image enhancement capabilities comparable to existing large-scale systems. Basic enhancement of fundus images consists of histogram modification or kernel convolution techniques to determine regions of specific interest such as textural differences in the nerve fiber layer or cupping of the optic nerve head. Fast Fourier transforms and filtering techniques are then utilized for specific regions of the original image. Textural differences in the nerve fiber layer are further highlighted using either interactive histogram modification or pseudocolor mappings. Menu-driven software allows review of the steps applied, creating a feedback mechanism for optimum display of the fundus image. A wider noise margin than that of digitized fundus photographs can be obtained by direct fundus imaging. The present fundus image processing system provides quantitative and detailed techniques for assessing textural changes in the fundus photographs of glaucoma patients and suspects, for better classification and early detection of glaucoma. The versatility and computing capability of the system make it also suitable for other applications such as multidimensional image processing and image analysis.
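The "histogram modification" step mentioned above can be illustrated with plain global histogram equalization, a minimal member of that family (the record does not specify which modification was used).

```python
import numpy as np

def hist_equalize(img, bins=256):
    """Global histogram equalization: map each pixel through the empirical
    cumulative distribution so intensities spread over the full range."""
    flat = img.ravel()
    hist, edges = np.histogram(flat, bins=bins, range=(0.0, 1.0))
    cdf = hist.cumsum() / flat.size          # cumulative distribution in [0, 1]
    return np.interp(flat, edges[1:], cdf).reshape(img.shape)

rng = np.random.default_rng(0)
fundus = rng.random((64, 64)) ** 3           # synthetic dark, low-contrast image
enhanced = hist_equalize(fundus)
```

After equalization the intensity distribution is approximately uniform, which is what makes subtle textural differences easier to see on a display.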

Whiteside, S.; Mitra, S.; Krile, T. F.; Shihab, Z.

1986-12-01

370

Digital image processing and analysis for activated sludge wastewater treatment.  

PubMed

The activated sludge system is generally used in wastewater treatment plants for processing domestic influent. Conventionally, activated sludge wastewater treatment is monitored by measuring physico-chemical parameters such as total suspended solids (TSSol), sludge volume index (SVI), and chemical oxygen demand (COD). For the measurement, tests are conducted in the laboratory, which take many hours to yield a result. Digital image processing and analysis offers a better alternative, not only to monitor and characterize the current state of activated sludge but also to predict its future state. The characterization by image processing and analysis is done by correlating the time evolution of parameters extracted by image analysis of floc and filaments with the physico-chemical parameters. This chapter briefly reviews activated sludge wastewater treatment and the procedures of image acquisition, preprocessing, segmentation, and analysis in the specific context of activated sludge wastewater treatment. In the latter part, additional procedures such as z-stacking and image stitching are introduced for wastewater image preprocessing; these have not previously been used in the context of activated sludge. Different preprocessing and segmentation techniques are proposed, along with a survey of imaging procedures reported in the literature. Finally, the image-analysis-based morphological parameters, and their correlation with the monitoring and prediction of activated sludge, are discussed. Hence it is observed that image analysis can play a very useful role in the monitoring of activated sludge wastewater treatment plants. PMID:25381111

Khan, Muhammad Burhan; Lee, Xue Yong; Nisar, Humaira; Ng, Choon Aun; Yeap, Kim Ho; Malik, Aamir Saeed

2015-01-01

371

Temperature imaging and image processing in the steel industry  

Microsoft Academic Search

Our aim is twofold: to present our temperature measurement system based on CCD technology, which gives a linear response versus temperature, and to present two industrial applications in which our system has been involved to optimize and characterize the process. We present a short summary dealing with temperature evaluation from radiation measurements. We consider especially the problems of the surroundings,

Fabrice Meriaudeau; Eric Renier; Frederic Truchetet

1996-01-01

372

Identifying corners of clothes using image processing method  

Microsoft Academic Search

This paper presents a method to identify the edges and corners of clothes in an image using image processing methods. In clothes manipulation, it is important for the GUI program to identify the shape of clothes before spreading and folding. The first step for clothes manipulation is that the robot needs to grab two real corners so the clothes can

Yew Cheong Hou; Khairul Salleh Mohamed Sahari

2010-01-01

373

SAR Image Processing Based on Fast Discrete Curvelet Transform  

Microsoft Academic Search

The Curvelet transform is a new kind of multiscale analysis algorithm that is well suited to image processing. Compared with the wavelet transform, it can better analyze line and curve edge characteristics, offers better approximation precision and sparsity description, and has good directivity. This paper introduces remote sensing image speckle reduction based on the Curvelet transform. Synthetic Aperture Radar

Zhiyu Zhang; Xiaodan Zhang; Jiulong Zhang

2009-01-01

374

Review Article Automated Processing of Zebrafish Imaging Data  

E-print Network

Automated Processing of Zebrafish Imaging Data: A Survey. Owing to the transparency of its embryos and larvae, the zebrafish is an ideal model organism for bioimaging approaches in vertebrates. Novel developments in computational image analysis in the zebrafish system are surveyed. We discuss the challenges encountered when handling high

Hamprecht, Fred A.

375

Assessing vehicle detection utilizing video image processing technology  

E-print Network

processing system used in the study was the Autoscope 2004 by Image Sensing Systems, Inc. The camera imaging device was a one-half (1/2) inch interline transfer microlens charge-coupled device (CCD). The camera lens was a six (6) mm, f/1.2 auto iris lens...

Hartmann, Duane E

1996-01-01

376

A Comparison between Image-processing Approaches to Textile Inspection  

Microsoft Academic Search

Visual inspection is an important part of quality control in the textile industry. In order to increase accuracy, attempts are being made to replace manual inspection by automated visual inspections employing a camera and imaging routines. Process-accompanying image acquisition and automatic evaluation could form the basis for a system that ensures a very high degree of fabric quality control. However,

A. Conci; C. B. Proença

2000-01-01

377

High performance acoustic three-dimensional image processing system  

SciTech Connect

The reactor vessel of a fast breeder reactor (FBR) is filled with optically opaque liquid sodium; the ultrasonic imaging technique is therefore useful for inspecting in-vessel structures in sodium. The authors have developed a high-speed, high-resolution three-dimensional image processing technique. For imaging in the sodium, a two-dimensional matrix transducer and an M-series transmitting signal were used. Cross-correlation processing between the transmitted and received signals was used to enhance the S/N ratio. Image synthesis also enhances resolution through the synthetic aperture focusing technique (SAFT). High-speed processing was realized by use of parallel processing boards.
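The M-series transmit / cross-correlation receive scheme used to enhance the S/N ratio can be reproduced in a few lines. The LFSR tap choice, echo delay, and noise level below are illustrative, not the authors' parameters.

```python
import numpy as np

def mls7():
    """127-chip maximum-length sequence from a 7-bit Fibonacci LFSR
    (primitive polynomial x^7 + x^6 + 1; tap choice is illustrative)."""
    reg = [1] * 7
    out = []
    for _ in range(2 ** 7 - 1):
        out.append(reg[-1])
        fb = reg[-1] ^ reg[-2]       # feedback from stages 7 and 6
        reg = [fb] + reg[:-1]
    return 2 * np.array(out) - 1     # map {0, 1} -> {-1, +1}

code = mls7().astype(float)
rng = np.random.default_rng(2)
delay = 30
rx = np.zeros(300)
rx[delay:delay + len(code)] = code           # echo buried at an unknown delay
rx += 0.5 * rng.standard_normal(300)         # receiver noise
corr = np.correlate(rx, code, mode="valid")  # matched filter / pulse compression
est_delay = int(np.argmax(corr))
```

The correlation peak concentrates the energy of all 127 chips into one lag, which is why the echo delay survives noise that would swamp a single pulse.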

Suzuki, T.; Nagai, S.; Maruyama, F. [Toshiba Corp., Yokohama (Japan); Furukawa, H. [JEOL System Technology Co., Ltd., Tokyo (Japan)

1995-08-01

378

Image processing for flight crew enhanced situation awareness  

NASA Technical Reports Server (NTRS)

This presentation describes the image processing work that is being performed for the Enhanced Situational Awareness System (ESAS) application. Specifically, the presented work supports the Enhanced Vision System (EVS) component of ESAS.

Roberts, Barry

1993-01-01

379

Computer Vision, Graphics, and Image Processing 40 (1987) 250-266  

E-print Network


Murray, David

380

ELAS: A powerful, general purpose image processing package  

NASA Technical Reports Server (NTRS)

ELAS is a software package which has been utilized as an image processing tool for more than a decade. It has been the source of several commercial packages. Now available on UNIX workstations it is a very powerful, flexible set of software. Applications at Stennis Space Center have included a very wide range of areas including medicine, forestry, geology, ecological modeling, and sonar imagery. It remains one of the most powerful image processing packages available, either commercially or in the public domain.

Walters, David; Rickman, Douglas

1991-01-01

381

Optimization of image processing algorithms on mobile platforms  

Microsoft Academic Search

This work presents a technique to optimize popular image processing algorithms on mobile platforms such as cell phones, netbooks, and personal digital assistants (PDAs). The increasing demand for video applications like context-aware computing on mobile embedded systems requires the use of computationally intensive image processing algorithms. The system engineer has a mandate to optimize them so as to meet real-time

Pramod Poudel; Mukul Shirvaikar

2011-01-01

382

Detecting jaundice by using digital image processing  

NASA Astrophysics Data System (ADS)

When strong jaundice is present, babies or adults must undergo clinical exams such as the "serum bilirubin" test, which can cause trauma in patients. Jaundice often occurs in liver diseases such as hepatitis or liver cancer. In order to avoid additional trauma, we propose to detect jaundice (icterus) in newborns or adults by a painless method. By acquiring digital color images of the palms, soles, and forehead, we analyze RGB attributes and diffuse reflectance spectra as parameters to characterize patients with or without jaundice, and we correlate those parameters with the bilirubin level. By applying a support vector machine, we distinguish between healthy and sick patients.
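A minimal sketch of the classification step: mean RGB values per skin patch serve as features, and a tiny hinge-loss linear classifier stands in for a library SVM. The color statistics of the synthetic data are invented for illustration (jaundiced skin shifted toward yellow, i.e. lower blue).

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=300, lr=0.05):
    """Tiny linear SVM trained by sub-gradient descent on the hinge loss --
    a stand-in for the library SVM referred to in the abstract."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):          # labels yi in {-1, +1}
            if yi * (xi @ w + b) < 1.0:   # margin violated: hinge sub-gradient
                w += lr * (yi * xi - lam * w)
                b += lr * yi
            else:
                w -= lr * lam * w         # weight decay only
    return w, b

# synthetic mean-RGB features per patient (values are invented)
rng = np.random.default_rng(1)
healthy = rng.normal([0.80, 0.60, 0.55], 0.03, size=(20, 3))
icteric = rng.normal([0.85, 0.75, 0.35], 0.03, size=(20, 3))
X = np.vstack([healthy, icteric])
y = np.array([-1] * 20 + [1] * 20)
w, b = train_linear_svm(X, y)
accuracy = float((np.sign(X @ w + b) == y).mean())
```

In practice each feature vector would be the mean RGB of a palm, sole, or forehead patch, and the decision value `X @ w + b` could be correlated against measured bilirubin levels.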

Castro-Ramos, J.; Toxqui-Quitl, C.; Villa Manriquez, F.; Orozco-Guillen, E.; Padilla-Vivanco, A.; Sánchez-Escobar, JJ.

2014-03-01

383

An Image Processing Approach to Linguistic Translation  

NASA Astrophysics Data System (ADS)

The art of translation is as old as written literature. Developments since the Industrial Revolution have influenced the practice of translation, nurturing schools, professional associations, and standards. In this paper, we propose a method for translating typed Kannada text (taken as an image) into its equivalent English text. The National Instruments (NI) Vision Assistant (version 8.5) has been used for Optical Character Recognition (OCR). We developed a new way of transliteration (which we call NIV transliteration) to simplify the training of characters. Also, we built a special type of dictionary for the purpose of translation.

Kubatur, Shruthi; Sreehari, Suhas; Hegde, Rajeshwari

2011-12-01

384

Digital processing of side-scan sonar data with the Woods Hole image processing system software  

USGS Publications Warehouse

Since 1985, the Branch of Atlantic Marine Geology has been involved in collecting, processing, and digitally mosaicking high- and low-resolution side-scan sonar data. Recent development of a UNIX-based image-processing software system includes a series of task-specific programs for processing side-scan sonar data. This report describes the steps required to process the collected data and to produce an image that has equal along- and across-track resolution.

Paskevich, Valerie F.

1992-01-01

385

Intelligent control method of rotary kiln process based on image processing technology: A survey  

Microsoft Academic Search

The rotary kiln industrial production system is a typical complex nonlinear multivariable process with strong coupling and large time delays. This paper surveys up-to-date research results on optimized operation and intelligent control of the rotary kiln process based on image processing technology. These mainly include flame image processing technology, soft-sensor modeling, state recognition, fault diagnosis, and hybrid intelligent control strategies. In

Wang Jie-Sheng; Zhang Li; Gao Xian-Wen; Sun Shi-Feng

2010-01-01

386

Toward high-quality image communications: inverse problems in image processing  

NASA Astrophysics Data System (ADS)

Recently, image communications have become increasingly popular, and there is a growing need to provide consumers with high-quality services. Although image communication services already exist over third-generation wireless networks, limited bandwidth still prevents high-quality image communications. Thus, more research is required to overcome the limited bandwidth of current communications systems and achieve high-quality image reconstruction in real applications. From the point of view of image processing, the core technologies for high-quality image reconstruction are face hallucination and compression artifact reduction. Consumers' main interest lies in facial regions, and compression inevitably introduces artifacts; these two technologies are therefore closely related to inverse problems in image processing. We review recent studies on face hallucination and compression artifact reduction, and provide an outline of current research. Furthermore, we discuss practical considerations and possible solutions for implementing these two technologies in real mobile applications.

Jung, Cheolkon; Jiao, Licheng; Liu, Bing; Qi, Hongtao; Sun, Tian

2012-10-01

387

The research on image processing technology of the star tracker  

NASA Astrophysics Data System (ADS)

As the core of visual sensing via imaging, image processing technology for the star tracker is mainly characterized by such items as image exposure, optimal storage, background estimation, feature correction, target extraction, and iteration compensation. This paper first summarizes new research on those items at home and abroad. Then, based on the star tracker's practical engineering, in-orbit environment, and lifetime information, it presents an architecture for rapid fusion of multiple image frames, which can be used to restrain oversaturation of the effective pixels, making the star tracker more precise, more robust, and more stable.

Li, Yu-ming; Li, Chun-jiang; Zheng, Ran; Li, Xiao; Yang, Jun

2014-11-01

388

Parallel perfusion imaging processing using GPGPU  

PubMed Central

Background and purpose The objective of brain perfusion quantification is to generate parametric maps of relevant hemodynamic quantities such as cerebral blood flow (CBF), cerebral blood volume (CBV) and mean transit time (MTT) that can be used in diagnosis of acute stroke. These calculations involve deconvolution operations that can be very computationally expensive when using local Arterial Input Functions (AIF). As time is vitally important in the case of acute stroke, reducing the analysis time will reduce the number of brain cells damaged and increase the potential for recovery. Methods GPUs originated as dedicated graphics co-processors, but modern GPUs have evolved into more general processors capable of executing scientific computations. They provide a highly parallel computing environment due to their large number of computing cores and constitute an affordable high-performance computing method. In this paper, we present the implementation of a deconvolution algorithm for brain perfusion quantification on GPGPU (General Purpose Graphics Processor Units) using the CUDA programming model. We present the serial and parallel implementations of such algorithms and evaluate the performance gains obtained with GPUs. Results Our method achieved speedups of 5.56 and 3.75 for CT and MR images, respectively. Conclusions Using GPGPU appears to be a desirable approach for perfusion imaging analysis: it does not harm the quality of the cerebral hemodynamic maps but delivers results faster than traditional computation. PMID:22824549
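The deconvolution at the heart of this pipeline is commonly performed by truncated-SVD inversion of the AIF convolution matrix; the sketch below shows a serial CPU version of that standard idea (the part such work ports to CUDA). The exponential AIF, residue function, and truncation threshold are illustrative, not the paper's data.

```python
import numpy as np

def svd_deconvolve(aif, tissue, rel_thresh=1e-8):
    """Recover the residue function R(t) from tissue = AIF (*) R by
    truncated-SVD inversion of the convolution matrix (standard sSVD idea;
    the relative truncation threshold is illustrative)."""
    n = len(aif)
    A = np.zeros((n, n))
    for j in range(n):
        A[j:, j] = aif[: n - j]          # lower-triangular Toeplitz convolution
    U, s, Vt = np.linalg.svd(A)
    s_inv = np.where(s > rel_thresh * s[0], 1.0 / s, 0.0)  # drop tiny singular values
    return Vt.T @ (s_inv * (U.T @ tissue))

t = np.arange(40, dtype=float)
aif = np.exp(-t / 2.0)                   # simple decaying bolus (real AIFs are gamma-variate)
R_true = np.exp(-t / 8.0)                # exponential residue function
tissue = np.convolve(aif, R_true)[:40]   # simulated tissue concentration curve
R_est = svd_deconvolve(aif, tissue)
```

CBF is then read off as the peak of the recovered residue function; with a local AIF this solve is repeated per voxel, which is exactly why the parallel GPU formulation pays off.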

Zhu, Fan; Gonzalez, David Rodriguez; Carpenter, Trevor; Atkinson, Malcolm; Wardlaw, Joanna

2012-01-01

389

AN 8-WEEK SUMMER INSTITUTE TRAINING PROGRAM TO RETRAIN OFFICE EDUCATION TEACHERS FOR TEACHING BUSINESS ELECTRONIC DATA PROCESSING.  

ERIC Educational Resources Information Center

AN INSTITUTE WAS CONDUCTED TO RETRAIN TEACHERS IN (1) THE TEACHING OF BUSINESS ELECTRONIC PROCESSING AND PROGRAMING AND (2) THE DEVELOPMENT OF COURSE MATERIAL. COURSES OF STUDY AND ENRICHMENT EXPERIENCES WERE INCLUDED IN THE 8-WEEK PROGRAM. TRAINING IN ELECTRONIC DATA PROCESSING WAS RECEIVED BY 35 TEACHERS. CLASS OUTLINES WERE ALSO DEVELOPED TO…

NICELEY, JOHN B.; VALENTINE, IVAN E.

390

Academic Supervision (Advisory) Guide  

E-print Network

Since a student is the aim of the teaching and learning process, and to provide the student with the necessary help within students with their supervisors' names. e- Following up the process of supervision, especially: - Signing the registration form and stamping it by the supervisor or any one

391

A Case Study Analysing the Process of Analogy-Based Learning in a Teaching Unit about Simple Electric Circuits  

ERIC Educational Resources Information Center

The purpose of this case study is to analyse the learning processes of a 16-year-old student as she learns about simple electric circuits in response to an analogy-based teaching sequence. Analogical thinking processes are modelled by a sequence of four steps according to Gentner's structure mapping theory (activate base domain, postulate local…

Paatz, Roland; Ryder, James; Schwedes, Hannelore; Scott, Philip

2004-01-01

392

Creating & using specimen images for collection documentation, research, teaching and outreach  

NASA Astrophysics Data System (ADS)

In this age of digital media, there are many opportunities for use of good images of specimens. On-line resources such as institutional web sites and global sites such as PaleoNet and the Paleobiology Database provide venues for collection information and images. Pictures can also be made available to the general public through popular media sites such as Flickr and Facebook, where they can be retrieved and used by teachers, students, and the general public. The number of requests for specimen loans can be drastically reduced by offering the scientific community access to data and specimen images using the internet. This is an important consideration in these days of limited support budgets, since it reduces the amount of staff time necessary for giving researchers and educators access to collections. It also saves wear and tear on the specimens themselves. Many institutions now limit or refuse to send specimens out of their own countries because of the risks involved in going through security and customs. The internet can bridge political boundaries, allowing everyone equal access to collections. In order to develop photographic documentation of a collection, thoughtful preparation will make the process easier and more efficient. Acquire the necessary equipment, establish standards for images, and develop a simple workflow design. Manage images in the camera, and produce the best possible results, rather than relying on time-consuming editing after the fact. It is extremely important that the images of each specimen be of the highest quality and resolution. Poor quality, low resolution photos are not good for anything, and will often have to be retaken when another need arises. Repeating the photography process involves more handling of specimens and more staff time. Once good photos exist, smaller versions can be created for use on the web. The originals can be archived and used for publication and other purposes.

Demouthe, J. F.

2012-12-01

393

Learning and teaching about the nature of science through process skills  

NASA Astrophysics Data System (ADS)

This dissertation, a three-paper set, explored whether the process skills-based approach to nature of science instruction improves teachers' understandings, intentions to teach, and instructional practice related to the nature of science. The first paper examined the nature of science views of 53 preservice science teachers before and after a year of secondary science methods instruction that incorporated the process skills-based approach. Data consisted of each participant's written and interview responses to the Views of the Nature of Science (VNOS) questionnaire. Systematic data analysis led to the conclusion that participants exhibited statistically significant and practically meaningful improvements in their nature of science views and viewed teaching the nature of science as essential to their future instruction. The second and third papers assessed the outcomes of the process skills-based approach with 25 inservice middle school science teachers. For the second paper, she collected and analyzed participants' VNOS and interview responses before, after, and 10 months after a 6-day summer professional development. Long-term retention of more aligned nature of science views underpins teachers' ability to teach aligned conceptions to their students yet it is rarely examined. Participants substantially improved their nature of science views after the professional development, retained those views over 10 months, and attributed their more aligned understandings to the course. The third paper addressed these participants' instructional practices based on participant-created video reflections of their nature of science and inquiry instruction. Two participant interviews and class notes also were analyzed via a constant comparative approach to ascertain if, how, and why the teachers explicitly integrated the nature of science into their instruction. 
The participants recognized the process skills-based approach as instrumental in the facilitation of their improved views. Additionally, the participants saw the nature of science as an important way to help students to access core science content such as the theory of evolution by natural selection. Most impressively, participants taught the nature of science explicitly and regularly. This instruction was student-centered, involving high levels of student engagement in ways that represented applying, adapting, and innovating on what they learned in the summer professional development.

Mulvey, Bridget K.

394

Image processing for improved eye-tracking accuracy  

NASA Technical Reports Server (NTRS)

Video cameras provide a simple, noninvasive method for monitoring a subject's eye movements. An important concept is that of the resolution of the system, which is the smallest eye movement that can be reliably detected. While hardware systems are available that estimate direction of gaze in real-time from a video image of the pupil, such systems must limit image processing to attain real-time performance and are limited to a resolution of about 10 arc minutes. Two ways to improve resolution are discussed. The first is to improve the image processing algorithms that are used to derive an estimate. Off-line analysis of the data can improve resolution by at least one order of magnitude for images of the pupil. A second avenue by which to improve resolution is to increase the optical gain of the imaging setup (i.e., the amount of image motion produced by a given eye rotation). Ophthalmoscopic imaging of retinal blood vessels provides increased optical gain and improved immunity to small head movements but requires a highly sensitive camera. The large number of images involved in a typical experiment imposes great demands on the storage, handling, and processing of data. A major bottleneck had been the real-time digitization and storage of large amounts of video imagery, but recent developments in video compression hardware have made this problem tractable at a reasonable cost. Images of both the retina and the pupil can be analyzed successfully using a basic toolbox of image-processing routines (filtering, correlation, thresholding, etc.), which are, for the most part, well suited to implementation on vectorizing supercomputers.
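One of the basic toolbox routines mentioned above (thresholding) already illustrates how off-line processing beats the hardware resolution limit: thresholding the dark pupil and taking the centroid of the mask yields a sub-pixel estimate of gaze position. The synthetic image and threshold below are illustrative.

```python
import numpy as np

def pupil_center(img, thresh=0.5):
    """Sub-pixel pupil centre: threshold the dark pupil and take the
    intensity-mask centroid (a basic thresholding routine)."""
    dark = (img < thresh).astype(float)          # pupil is darker than iris/sclera
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    m = dark.sum()
    return (xs * dark).sum() / m, (ys * dark).sum() / m

# synthetic eye image: dark disc centred at (x, y) = (20.5, 14.5)
yy, xx = np.mgrid[0:32, 0:48]
img = np.where((xx - 20.5) ** 2 + (yy - 14.5) ** 2 < 36.0, 0.1, 0.9)
cx, cy = pupil_center(img)
```

Averaging over the whole mask is what pushes the resolution below one pixel: the centroid of many pupil pixels is far more stable than any single edge location, which is the kind of gain off-line analysis extracts from the same video frames.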

Mulligan, J. B.; Watson, A. B. (Principal Investigator)

1997-01-01

395

Computer tomography imaging of fast plasmachemical processes  

SciTech Connect

Results are presented from experimental studies of the interaction of a high-enthalpy methane plasma bunch with gaseous methane in a plasmachemical reactor. The interaction of the plasma flow with the rest gas was visualized by using streak imaging and computer tomography. Tomography was applied for the first time to reconstruct the spatial structure and dynamics of the reagent zones in the microsecond range by the maximum entropy method. The reagent zones were identified from the emission of atomic hydrogen (the H{sub {alpha}} line) and molecular carbon (the Swan bands). The spatiotemporal behavior of the reagent zones was determined, and their relation to the shock-wave structure of the plasma flow was examined.

Denisova, N. V.; Katsnelson, S. S.; Pozdnyakov, G. A. [Russian Academy of Sciences, Khristianovich Institute of Theoretical and Applied Mechanics, Siberian Branch (Russian Federation)

2007-11-15

396

Theoretical demonstration of image characteristics and image formation process depending on image displaying conditions on liquid crystal display  

NASA Astrophysics Data System (ADS)

In soft-copy diagnosis, medical images with large matrix sizes often need to be displayed as reduced images produced by sub-sampling processing. We analyzed overall image characteristics on a liquid crystal display (LCD) depending on the display condition. Specifically, we measured overall Wiener spectra (WS) of displayed X-ray images at sub-sampling rates from pixel-by-pixel mode down to 35%. The image viewer used performed image reduction by sub-sampling processing with bilinear interpolation. We also simulated overall WS from sub-sampled images produced by bilinear, super-sampling, and nearest-neighbor interpolation. The measured and simulated results agreed well and demonstrated that overall noise characteristics were attributable to luminance-value fluctuation, sub-sampling effects, and the inherent image characteristics of the LCD. In addition, we measured digital MTFs (modulation transfer functions) on center and shifted alignments from sub-sampled edge images, as well as simulating WS. The WS and digital MTFs showed that displaying reduced images induced noise increments through aliasing errors and made it impossible to exhibit high-frequency signals. Furthermore, because super-sampling interpolation performed the image reductions more smoothly than bilinear interpolation, it resulted in lower WS and digital MTFs. Nearest-neighbor interpolation had almost no smoothing effect, so its WS and digital MTFs indicated the highest values.
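The contrast between the reduction schemes can be reproduced numerically: nearest-neighbour sub-sampling preserves (and aliases) the full noise power, while block averaging, used here as a crude stand-in for super-sampling interpolation, suppresses it. The white-noise input is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
noise = rng.standard_normal((256, 256))   # unit-variance white noise field

# 50 % reduction by nearest-neighbour sub-sampling: keep every second pixel
sub_nn = noise[::2, ::2]

# 50 % reduction by 2x2 block averaging (a crude super-sampling-style filter)
sub_avg = noise.reshape(128, 2, 128, 2).mean(axis=(1, 3))

var_nn, var_avg = float(sub_nn.var()), float(sub_avg.var())
```

The averaged reduction carries roughly a quarter of the noise variance of the nearest-neighbour one, mirroring the lower Wiener spectra the study reports for smoother interpolators.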

Yamazaki, Asumi; Ichikawa, Katsuhiro; Funahashi, Masao; Kodera, Yoshie

2012-02-01

397

Digital image processing of bone - Problems and potentials  

NASA Technical Reports Server (NTRS)

The development of a digital image processing system for bone histomorphometry and fluorescent marker monitoring is discussed. The system in question is capable of making measurements of UV or light microscope features on a video screen with either video or computer-generated images, and comprises a microscope, low-light-level video camera, video digitizer and display terminal, color monitor, and PDP 11/34 computer. Capabilities demonstrated in the analysis of an undecalcified rat tibia include the measurement of perimeter and total bone area, and the generation of microscope images, false color images, digitized images and contoured images for further analysis. Software development will be based on an existing software library, specifically the mini-VICAR system developed at JPL. It is noted that the potentials of the system in terms of speed and reliability far exceed any problems associated with hardware and software development.

Morey, E. R.; Wronski, T. J.

1980-01-01

398

Anomalous diffusion process applied to magnetic resonance image enhancement.  

PubMed

The diffusion process is widely applied to digital image enhancement, both directly, by introducing a diffusion equation as in the anisotropic diffusion (AD) filter, and indirectly, by convolution as in the Gaussian filter. The anomalous diffusion process (ADP), given by a nonlinear relationship in the diffusion equation and characterized by an anomalous parameter q, is supposed to be consistent with inhomogeneous media. Although the classic diffusion process is widely studied and effective in various image settings, the effectiveness of ADP for image enhancement is still unknown. In this paper we propose anomalous diffusion filters in both isotropic (IAD) and anisotropic (AAD) forms for magnetic resonance imaging (MRI) enhancement. Filters based on a discrete implementation of anomalous diffusion were applied to noisy MRI T2w images (brain, chest and abdominal) in order to quantify SNR gains, estimating the performance of the proposed anomalous filter when realistic noise is added to those images. Results show that for images containing complex structures, e.g. brain structures, anomalous diffusion presents the highest enhancement when compared to the classical diffusion approach. Furthermore, ADP presented a more effective enhancement for images containing Rayleigh and Gaussian noise. The anomalous filters showed an ability to preserve anatomic edges and an SNR improvement of 26% for brain images, compared to the classical filter. In addition, the AAD and IAD filters showed optimum results for noise distributions that appear in extreme situations in MRI, i.e. in low-SNR images with an approximately Rayleigh noise distribution, and in high-SNR images with Gaussian or non-central χ noise distributions. The AAD and IAD filters showed the best results for the parametric range 1.2 < q < 1.6, suggesting that the anomalous diffusion regime is more suitable for MRI. This study indicates the proposed anomalous filters as promising approaches for qualitative and quantitative MRI enhancement. PMID:25716129
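A minimal sketch of an anomalous diffusion filter. Modulating a Perona-Malik edge-stopping diffusivity by u**(q-1) is one plausible discretization of an ADP with parameter q, not necessarily the paper's exact AAD/IAD scheme; boundaries are treated as periodic for brevity.

```python
import numpy as np

def anomalous_diffusion(img, q=1.4, kappa=0.1, dt=0.2, steps=20):
    """Explicit scheme for an anomalous (porous-medium-type) diffusion filter.

    The u**(q-1) modulation of the diffusivity is an assumed discretization
    of the anomalous parameter q; q = 1 recovers classic anisotropic diffusion."""
    u = img.astype(float).copy()
    for _ in range(steps):
        flux = 0.0
        for axis, shift in ((0, -1), (0, 1), (1, -1), (1, 1)):
            d = np.roll(u, shift, axis) - u            # neighbour difference
            g = np.exp(-(d / kappa) ** 2)              # edge-stopping function
            flux = flux + g * np.clip(u, 1e-6, None) ** (q - 1) * d
        u = u + dt * flux
    return u

rng = np.random.default_rng(0)
noisy = 0.5 + 0.05 * rng.standard_normal((32, 32))     # flat region + noise
smoothed = anomalous_diffusion(noisy)
```

Because the edge-stopping function suppresses flux across large intensity steps, noise in flat regions is averaged away while strong anatomical edges are largely preserved.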

da S Senra Filho, A C; Garrido Salmon, C E; Murta Junior, L O

2015-03-21
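The record above does not give the discrete scheme. A minimal sketch of one isotropic anomalous diffusion step is shown below, assuming a porous-media-type diffusivity proportional to u**(q-1) so that q = 1 recovers classical linear diffusion; the exact form, step size, and normalization are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def anomalous_diffusion_step(u, q=1.4, dt=0.1):
    """One explicit Euler step of isotropic anomalous diffusion.

    Diffusivity scales as u**(q-1) (an assumed porous-media form);
    the image is expected to be normalized to (0, 1].
    """
    d = np.clip(u, 1e-6, None) ** (q - 1.0)   # nonlinear diffusivity
    up = np.pad(u, 1, mode="edge")            # replicate borders
    lap = (up[:-2, 1:-1] + up[2:, 1:-1] +
           up[1:-1, :-2] + up[1:-1, 2:] - 4 * u)
    return u + dt * d * lap

rng = np.random.default_rng(0)
img = np.clip(0.5 + 0.1 * rng.standard_normal((64, 64)), 0.01, 1.0)
out = img
for _ in range(20):
    out = anomalous_diffusion_step(out, q=1.4)
# Repeated steps smooth the noise, so the standard deviation drops
print(img.std(), out.std())
```

The q value 1.4 is chosen from the middle of the 1.2 < q < 1.6 range reported in the abstract.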

399

Anomalous diffusion process applied to magnetic resonance image enhancement  

NASA Astrophysics Data System (ADS)

The diffusion process is widely applied to digital image enhancement, both directly, by introducing the diffusion equation as in the anisotropic diffusion (AD) filter, and indirectly, by convolution as in the Gaussian filter. The anomalous diffusion process (ADP), given by a nonlinear relationship in the diffusion equation and characterized by an anomalous parameter q, is supposed to be consistent with inhomogeneous media. Although the classic diffusion process is widely studied and effective in various image settings, the effectiveness of ADP for image enhancement is still unknown. In this paper we propose anomalous diffusion filters in both isotropic (IAD) and anisotropic (AAD) forms for magnetic resonance imaging (MRI) enhancement. Filters based on a discrete implementation of anomalous diffusion were applied to noisy T2w MRI images (brain, chest and abdominal) in order to quantify SNR gains, estimating the performance of the proposed anomalous filters when realistic noise is added to those images. Results show that for images containing complex structures, e.g. brain structures, anomalous diffusion presents the highest enhancement when compared to the classical diffusion approach. Furthermore, ADP presented a more effective enhancement for images containing Rayleigh and Gaussian noise. Anomalous filters showed an ability to preserve anatomic edges and an SNR improvement of 26% for brain images, compared to the classical filter. In addition, the AAD and IAD filters showed optimum results for noise distributions that appear in extreme situations in MRI, i.e. in low-SNR images with an approximately Rayleigh noise distribution and in high-SNR images with Gaussian or noncentral χ noise distributions. The AAD and IAD filters showed the best results in the parametric range 1.2 < q < 1.6, suggesting that the anomalous diffusion regime is more suitable for MRI. This study indicates the proposed anomalous filters as promising approaches to qualitative and quantitative MRI enhancement.

Senra Filho, A. C. da S.; Garrido Salmon, C. E.; Murta Junior, L. O.

2015-03-01

400

Imaging Implicit Morphological Processing: Evidence from Hebrew  

ERIC Educational Resources Information Center

Is morphology a discrete and independent element of lexical structure or does it simply reflect a fine-tuning of the system to the statistical correlation that exists among orthographic and semantic properties of words? Hebrew provides a unique opportunity to examine morphological processing in the brain because of its rich morphological system.…

Bick, Atira S.; Frost, Ram; Goelman, Gadi

2010-01-01

401

Multispectral image processing for environmental monitoring  

Microsoft Academic Search

New techniques are described for detecting environmental anomalies and changes using multispectral imagery. Environmental anomalies are areas that do not exhibit normal signatures due to man-made activities and include phenomena such as effluent discharges, smoke plumes, stressed vegetation, and deforestation. A new region-based processing technique is described for detecting these phenomena using Landsat TM imagery. Another algorithm that can detect

Mark J. Carlotto; Mark B. Lazaroff; Mark W. Brennan

1993-01-01

402

An image processing pipeline to detect and segment nuclei in muscle fiber microscopic images.  

PubMed

Muscle fiber images play an important role in the medical diagnosis and treatment of many muscular diseases. The number of nuclei in skeletal muscle fiber images is a key biomarker in the diagnosis of muscular dystrophy. In nuclei segmentation, one primary challenge is to correctly separate clustered nuclei. In this article, we developed an image processing pipeline to automatically detect, segment, and analyze nuclei in microscopic images of muscle fibers. The pipeline consists of image pre-processing, identification of isolated nuclei, identification and segmentation of clustered nuclei, and quantitative analysis. Nuclei are initially extracted from the background by using local Otsu's threshold. Based on analysis of morphological features of the isolated nuclei, including their areas, compactness, and major axis lengths, a Bayesian network is trained and applied to distinguish isolated nuclei from clustered nuclei and artifacts in all the images. Then a two-step refined watershed algorithm is applied to segment clustered nuclei. After segmentation, the nuclei can be quantified for statistical analysis. Comparing the segmented results with those of manual analysis and an existing technique, we find that our proposed image processing pipeline achieves good performance with high accuracy and precision. The presented image processing pipeline can therefore help biologists increase their throughput and objectivity in analyzing large numbers of nuclei in muscle fiber images. PMID:24777764

Guo, Yanen; Xu, Xiaoyin; Wang, Yuanyuan; Wang, Yaming; Xia, Shunren; Yang, Zhong

2014-08-01
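A toy version of the early pipeline stages (thresholding plus connected-component counting, with an area filter standing in for the paper's morphological artifact rejection) can be sketched as follows. It uses a global Otsu threshold rather than the paper's local Otsu, and omits the Bayesian network and watershed stages entirely.

```python
import numpy as np
from scipy import ndimage as ndi

def otsu_threshold(img, nbins=256):
    """Otsu's method: pick the threshold maximizing between-class variance."""
    hist, edges = np.histogram(img, bins=nbins)
    hist = hist.astype(float)
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(hist)                      # background class weight
    w1 = w0[-1] - w0                          # foreground class weight
    m = np.cumsum(hist * centers)
    mu0 = m / np.where(w0 == 0, 1, w0)
    mu1 = (m[-1] - m) / np.where(w1 == 0, 1, w1)
    between = w0 * w1 * (mu0 - mu1) ** 2
    return centers[np.argmax(between)]

# Synthetic stand-in for a fluorescence image: two bright nuclei, noisy background
yy, xx = np.mgrid[0:100, 0:100]
img = (np.exp(-((yy - 30) ** 2 + (xx - 30) ** 2) / 100.0) +
       np.exp(-((yy - 70) ** 2 + (xx - 70) ** 2) / 100.0))
img += 0.05 * np.random.default_rng(1).standard_normal(img.shape)

mask = img > otsu_threshold(img)
labels, n = ndi.label(mask)
# Reject tiny components as artifacts (area cutoff of 20 px is illustrative)
sizes = ndi.sum(mask, labels, index=np.arange(1, n + 1))
n_nuclei = int((sizes > 20).sum())
print(n_nuclei)
```

On this synthetic image the two well-separated nuclei survive the area filter; separating touching nuclei is exactly the case that requires the paper's watershed step.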

403

Image data processing system requirements study. Volume 1: Analysis. [for Earth Resources Survey Program  

NASA Technical Reports Server (NTRS)

Digital image processing, image recorders, high-density digital data recorders, and data system element processing for use in an Earth Resources Survey image data processing system are studied. Loading to various ERS systems is also estimated by simulation.

Honikman, T.; Mcmahon, E.; Miller, E.; Pietrzak, L.; Yorsz, W.

1973-01-01

404

Multimission image processing and science data visualization  

NASA Technical Reports Server (NTRS)

The Operational Science Analysis (OSA) functional area supports science instrument data display, analysis, visualization, and photo processing in support of flight operations of planetary spacecraft managed by the Jet Propulsion Laboratory (JPL). This paper describes the data products generated by the OSA functional area and the current computer system used to generate them. The objectives of a system upgrade now in process are described. The design approach to development of the new system is reviewed, including the use of the Unix operating system and X-Window display standards to provide platform independence, portability, and modularity within the new system. The new system should provide a modular and scalable capability supporting a variety of future missions at JPL.

Green, William B.

1993-01-01

405

Some aspects of image processing using foams  

NASA Astrophysics Data System (ADS)

We have explored some concepts of chaotic dynamics and wave light transport in foams. Using some experiments, we have obtained the main features of light intensity distribution through foams. We are proposing a model for this phenomenon, based on the combination of two processes: a diffusive process and another one derived from chaotic dynamics. We have presented a short outline of the chaotic dynamics involving light scattering in foams. We also have studied the existence of caustics from scattering of light from foams, with typical patterns observed in the light diffraction in transparent films. The nonlinear geometry of the foam structure was explored in order to create optical elements, such as hyperbolic prisms and filters.

Tufaile, A.; Freire, M. V.; Tufaile, A. P. B.

2014-08-01

406

GPU-enabled parallel processing for image halftoning applications  

Microsoft Academic Search

The programmable Graphics Processing Unit (GPU) has emerged as a powerful parallel processing architecture for various applications requiring a large amount of CPU cycles. In this paper, we study the feasibility of using this architecture for image halftoning, in particular implementing computationally intensive neighborhood halftoning algorithms such as error diffusion and Direct Binary Search (DBS). We show that it is possible

Barry Trager; Chai Wah Wu; Mikel Stanich; Kartheek Chandu

2011-01-01
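The error diffusion the record refers to can be illustrated with the classic serial Floyd-Steinberg algorithm; the GPU parallelization discussed in the paper is not attempted here, and this serial form is precisely what makes the algorithm hard to parallelize (each pixel depends on its predecessors' errors).

```python
import numpy as np

def floyd_steinberg(img):
    """Serial Floyd-Steinberg error diffusion: binarize each pixel and
    push the quantization error onto the not-yet-visited neighbors."""
    out = img.astype(float).copy()
    h, w = out.shape
    for y in range(h):
        for x in range(w):
            old = out[y, x]
            new = 255.0 if old >= 128 else 0.0
            out[y, x] = new
            err = old - new
            if x + 1 < w:
                out[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    out[y + 1, x - 1] += err * 3 / 16
                out[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    out[y + 1, x + 1] += err * 1 / 16
    return out

gray = np.full((32, 32), 100.0)   # a constant ~39% gray patch
halftone = floyd_steinberg(gray)
# Output is strictly binary, yet the average tone stays near the input's
print(halftone.mean())
```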

407

Image processing system performance prediction and product quality evaluation  

NASA Technical Reports Server (NTRS)

The author has identified the following significant results. A new technique for image processing system performance prediction and product quality evaluation was developed. It was entirely objective, quantitative, and general, and should prove useful in system design and quality control. The technique and its application to determination of quality control procedures for the Earth Resources Technology Satellite NASA Data Processing Facility are described.

Stein, E. K.; Hammill, H. B. (principal investigators)

1976-01-01

408

Image processing on compressed data for large video databases  

Microsoft Academic Search

This paper presents a novel approach to processing encoded video sequences prior to decoding. Scene changes may be easily detected using DCT coefficients in JPEG and MPEG encoded video sequences. In addition, by analyzing the DCT coefficients, regions of interest may be isolated prior to decompression, increasing efficiency of any subsequent image processing steps, such as edge detection. The results

Farshid Arman; Arding Hsu; Ming-Yee Chiu

1993-01-01
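The DC coefficient of each 8x8 DCT block is proportional to the block mean, so compressed-domain scene-change detection of the kind described above can be sketched directly on block means; the difference threshold below is a hypothetical, data-dependent choice, not a value from the paper.

```python
import numpy as np

def dc_image(frame, block=8):
    """Per-block DC terms: the 8x8 DCT DC coefficient is proportional
    to the block mean, so block means serve as a stand-in."""
    h, w = frame.shape
    h, w = h - h % block, w - w % block
    blocks = frame[:h, :w].reshape(h // block, block, w // block, block)
    return blocks.mean(axis=(1, 3))

def scene_changes(frames, thresh=20.0):
    """Flag frame indices whose mean absolute DC-image difference from
    the previous frame exceeds `thresh` (hypothetical threshold)."""
    dcs = [dc_image(f) for f in frames]
    return [i for i in range(1, len(dcs))
            if np.abs(dcs[i] - dcs[i - 1]).mean() > thresh]

rng = np.random.default_rng(2)
scene_a = np.full((64, 64), 60.0)
scene_b = np.full((64, 64), 180.0)
frames = [scene_a + rng.normal(0, 2, scene_a.shape) for _ in range(3)]
frames += [scene_b + rng.normal(0, 2, scene_b.shape) for _ in range(3)]
print(scene_changes(frames))   # the cut falls between frames 2 and 3
```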

409

Night vision imaging spectrometer (NVIS) processing and viewing tools  

Microsoft Academic Search

The US Army's Night Vision and Electronic Sensors Directorate (NVESD) has developed software tools for processing, viewing, and analyzing hyperspectral data. The tools were specifically developed for use with the U.S. Army's NVESD Night Vision Imaging Spectrometer (NVIS), but they can also be used to process hyperspectral data in a variety of other formats. The first of these tools is

Christopher G. Simi; Roberta Dixon; Michael J. Schlangen; Edwin M. Winter; Christopher LaSota

2001-01-01

410

Automating the Photogrammetric Bridging Based on MMS Image Sequence Processing  

NASA Astrophysics Data System (ADS)

The photogrammetric bridging or traverse is a special bundle block adjustment (BBA) for connecting a sequence of stereo-pairs and determining the exterior orientation parameters (EOP). An object point must be imaged in more than one stereo-pair. In each stereo-pair the distance ratio between an object and its corresponding image point varies significantly. We propose to automate the photogrammetric bridging based on a fully automatic extraction of homologous points in stereo-pairs and on an arbitrary Cartesian datum to refer the EOP and tie points. The technique uses the SIFT algorithm, and keypoint matching is based on the smallest distance between the similarity descriptors of the keypoints. All the matched points are used as tie points. The technique was applied initially to two pairs. The block formed by four images was treated by BBA. The process follows up to the end of the sequence, and it is semiautomatic because each block is processed independently and the transition from one block to the next depends on the operator. Besides four-image blocks (two pairs), we experimented with other arrangements with block sizes of six, eight, and up to twenty images (respectively, three, four, five and up to ten bases). After the whole sequence of image pairs had been sequentially adjusted in each experiment, a simultaneous BBA was run to estimate the EOP set of each image. The results for classical ("normal case") pairs were analyzed based on standard statistics regularly applied to phototriangulation, and the figures validate the process.

Silva, J. F. C.; Lemes Neto, M. C.; Blasechi, V.

2014-11-01
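Matching keypoints by the smallest descriptor distance, as described above, is commonly combined with Lowe's ratio test to reject ambiguous matches. The sketch below uses synthetic 128-dimensional descriptors rather than actual SIFT extraction, so only the matching stage is demonstrated.

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbor descriptor matching with Lowe's ratio test:
    keep a match only when the best distance is clearly smaller
    than the second-best distance."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        j, k = np.argsort(dists)[:2]
        if dists[j] < ratio * dists[k]:
            matches.append((i, int(j)))
    return matches

rng = np.random.default_rng(3)
desc_b = rng.normal(size=(50, 128))                      # "right image" descriptors
perm = rng.permutation(50)[:20]
desc_a = desc_b[perm] + rng.normal(0, 0.01, (20, 128))   # noisy copies in "left image"

matches = match_descriptors(desc_a, desc_b)
correct = sum(perm[i] == j for i, j in matches)
print(len(matches), correct)
```

The accepted pairs would then become the tie points fed to the bundle block adjustment.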

411

The SIVA Image Processing Demos  

E-print Network

Most people have performed some form of image processing. Irrespective of their familiarity with the theory of image processing, most people have used image editing software such as Adobe Photoshop, GIMP, Picasa

Rajashekar, Umesh

412

Small Interactive Image Processing System (SMIPS) system description  

NASA Technical Reports Server (NTRS)

The Small Interactive Image Processing System (SMIPS) operates under control of the IBM-OS/MVT operating system and uses an IBM-2250 model 1 display unit as an interactive graphic device. The input language, in the form of character strings or attentions from keys and light pen, is interpreted and causes processing of built-in image processing functions as well as execution of a variable number of application programs kept on a private disk file. A description of design considerations is given, and the characteristics, structure, and logic flow of SMIPS are summarized. Data management and graphic programming techniques used for the interactive manipulation and display of digital pictures are also discussed.

Moik, J. G.

1973-01-01

413

SPARX, a new environment for Cryo-EM image processing.  

PubMed

SPARX (single particle analysis for resolution extension) is a new image processing environment with a particular emphasis on transmission electron microscopy (TEM) structure determination. It includes a graphical user interface that provides a complete graphical programming environment with a novel data/process-flow infrastructure, an extensive library of Python scripts that perform specific TEM-related computational tasks, and a core library of fundamental C++ image processing functions. In addition, SPARX relies on the EMAN2 library and cctbx, the open-source computational crystallography library from PHENIX. The design of the system is such that future inclusion of other image processing libraries is a straightforward task. The SPARX infrastructure intelligently handles retention of intermediate values, even those inside programming structures such as loops and function calls. SPARX and all dependencies are free for academic use and available with complete source. PMID:16931051

Hohn, Michael; Tang, Grant; Goodyear, Grant; Baldwin, P R; Huang, Zhong; Penczek, Pawel A; Yang, Chao; Glaeser, Robert M; Adams, Paul D; Ludtke, Steven J

2007-01-01

414

Cellular Neural Network for Real Time Image Processing  

NASA Astrophysics Data System (ADS)

Since their introduction in 1988, Cellular Nonlinear Networks (CNNs) have found a key role as image processing instruments. Thanks to their structure, they are capable of processing individual pixels in parallel, providing fast image processing capabilities that have been applied to a wide range of fields, among them nuclear fusion. In recent years, indeed, visible and infrared video cameras have become more and more important in tokamak fusion experiments, with the twofold aim of understanding the physics and monitoring the safety of the operation. Examining the output of these cameras in real time can provide significant information for plasma control and the safety of the machines. The potential of CNNs can be exploited to this aim. To demonstrate the feasibility of the approach, CNN image processing has been applied to several tasks both at the Frascati Tokamak Upgrade (FTU) and the Joint European Torus (JET).

Vagliasindi, G.; Arena, P.; Fortuna, L.; Mazzitelli, G.; Murari, A.

2008-03-01
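The standard CNN cell equation (x' = -x + A*y + B*u + z with a saturated-linear output) can be simulated directly to see the parallel, per-pixel dynamics the abstract describes. The template below is a simple illustrative thresholding template; its values are assumptions for demonstration, not the templates used at FTU/JET.

```python
import numpy as np
from scipy.signal import convolve2d

def cnn_run(u, A, B, z, steps=200, dt=0.05):
    """Euler simulation of the CNN state equation
       x' = -x + A*y + B*u + z,  y = 0.5*(|x+1| - |x-1|),
    with all cells updated in parallel."""
    x = np.zeros_like(u, dtype=float)
    for _ in range(steps):
        y = 0.5 * (np.abs(x + 1) - np.abs(x - 1))
        x = x + dt * (-x + convolve2d(y, A, mode="same")
                      + convolve2d(u, B, mode="same") + z)
    return 0.5 * (np.abs(x + 1) - np.abs(x - 1))

# Illustrative threshold template: cells settle to +1 where u > 0.25, else -1
A = np.array([[0, 0, 0], [0, 2, 0], [0, 0, 0]], float)
B = np.array([[0, 0, 0], [0, 1, 0], [0, 0, 0]], float)
u = np.linspace(-1, 1, 16).reshape(1, 16).repeat(16, axis=0)
y = cnn_run(u, A, B, z=-0.25)
```

With self-feedback greater than 1, each cell is bistable and latches to +1 or -1 according to the sign of the input drive u + z, which is what makes the network converge to a clean binary output.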

415

Cellular Neural Network for Real Time Image Processing  

SciTech Connect

Since their introduction in 1988, Cellular Nonlinear Networks (CNNs) have found a key role as image processing instruments. Thanks to their structure, they are capable of processing individual pixels in parallel, providing fast image processing capabilities that have been applied to a wide range of fields, among them nuclear fusion. In recent years, indeed, visible and infrared video cameras have become more and more important in tokamak fusion experiments, with the twofold aim of understanding the physics and monitoring the safety of the operation. Examining the output of these cameras in real time can provide significant information for plasma control and the safety of the machines. The potential of CNNs can be exploited to this aim. To demonstrate the feasibility of the approach, CNN image processing has been applied to several tasks both at the Frascati Tokamak Upgrade (FTU) and the Joint European Torus (JET).

Vagliasindi, G.; Arena, P.; Fortuna, L. [Dipartimento di Ingegneria Elettrica Elettronica e dei Sistemi - Universita degli Studi di Catania, I-95125 Catania (Italy); Mazzitelli, G. [ENEA-Gestione Grandi Impianti Sperimentali, via E. Fermi 45, I-00044 Frascati, Rome (Italy); Murari, A. [Consorzio RFX-Associazione EURATOM ENEA per la Fusione, I-35127 Padova (Italy)

2008-03-12

416

Models of formation and some algorithms of hyperspectral image processing  

NASA Astrophysics Data System (ADS)

Algorithms and information technologies for processing Earth hyperspectral imagery are presented. Several new approaches are discussed. Peculiar properties of processing the hyperspectral imagery, such as multifold signal-to-noise reduction, atmospheric distortions, access to spectral characteristics of every image point, and high dimensionality of data, were studied. Different measures of similarity between individual hyperspectral image points and the effect of additive uncorrelated noise on these measures were analyzed. It was shown that these measures are substantially affected by noise, and a new measure free of this disadvantage was proposed. The problem of detecting the observed scene object boundaries, based on comparing the spectral characteristics of image points, is considered. It was shown that contours are processed much better when spectral characteristics are used instead of energy brightness. A statistical approach to the correction of atmospheric distortions, which makes it possible to solve the stated problem based on analysis of a distorted image in contrast to analytical multiparametric models, was proposed. Several algorithms used to integrate spectral zonal images with data from other survey systems, which make it possible to image observed scene objects with a higher quality, are considered. Quality characteristics of hyperspectral data processing were proposed and studied.

Achmetov, R. N.; Stratilatov, N. R.; Yudakov, A. A.; Vezenov, V. I.; Eremeev, V. V.

2014-12-01
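A common similarity measure between hyperspectral image points is the spectral angle, which, unlike Euclidean distance, is insensitive to overall brightness; whether this is the noise-robust measure proposed in the record is not stated, so treat the sketch below (with invented signatures) purely as an illustration of comparing spectral characteristics rather than energy brightness.

```python
import numpy as np

def spectral_angle(a, b):
    """Angle between two spectral signatures: invariant to a global
    brightness scaling of either signature."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

bands = np.linspace(0.4, 2.5, 200)     # wavelengths in microns (illustrative)
veg = np.exp(-((bands - 0.55) ** 2) / 0.01) + 0.5 * (bands > 0.7)
soil = 0.2 + 0.3 * bands

# Same material under half illumination: Euclidean distance is large,
# but the spectral angle stays at zero.
dim_veg = 0.5 * veg
print(np.linalg.norm(veg - dim_veg), spectral_angle(veg, dim_veg))
print(spectral_angle(veg, soil))
```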

417

Focal-Plane Processing Architectures for Real-Time Hyperspectral Image Processing  

Microsoft Academic Search

Real-time image processing requires high computational and I/O throughputs obtained by use of opto-electronic system solutions. A novel architecture that uses focal-plane optoelectronic-area I/O with a fine-grain, low-memory, single-instruction-multiple-data (SIMD) processor array is presented as an efficient computational solution for real-time hyperspectral image processing. The architecture is evaluated by use of realistic workloads to determine data throughputs, processing

Sek M. Chai; Antonio Gentile; Wilfredo E. Lugo-Beauchamp; Javier Fonseca; D. Scott Wills

2000-01-01

418

Multispectral image processing for environmental monitoring  

NASA Astrophysics Data System (ADS)

New techniques are described for detecting environmental anomalies and changes using multispectral imagery. Environmental anomalies are areas that do not exhibit normal signatures due to man-made activities and include phenomena such as effluent discharges, smoke plumes, stressed vegetation, and deforestation. A new region-based processing technique is described for detecting these phenomena using Landsat TM imagery. Another algorithm that can detect the appearance or disappearance of environmental phenomena is also described and an example illustrating its use in detecting urban changes using SPOT imagery is presented.

Carlotto, Mark J.; Lazaroff, Mark B.; Brennan, Mark W.

1993-03-01

419

Photographic Images as an Interactive Online Teaching Technology: Creating Online Communities  

ERIC Educational Resources Information Center

Creating a sense of community in the online classroom is a challenge for educators who teach via the Internet. There is a growing body of literature supporting the importance of the community construct in online courses (Liu, Magjuka, Curtis, & Lee, 2007). Thus, educators are challenged to develop and implement innovative teaching technologies…

Perry, Beth; Dalton, Janice; Edwards, Margaret

2009-01-01

420

Enhancement of structure images of interstellar diamond microcrystals by image processing  

NASA Technical Reports Server (NTRS)

Image-processed high resolution TEM images of diamond crystals found in oxidized acid residues of carbonaceous chondrites are presented. Two models of the origin of the diamonds are discussed. The model proposed by Lewis et al. (1987) supposes that the diamonds formed under low pressure conditions, whereas that of Blake et al. (1988) suggests that the diamonds formed due to particle-particle collisions behind supernova shock waves. The TEM images of the diamonds presented here support the high pressure model.

O'Keefe, Michael A.; Hetherington, Crispin; Turner, John; Blake, David; Freund, Friedemann

1988-01-01

421

Color image processing and object tracking workstation  

NASA Technical Reports Server (NTRS)

A system is described for automatic and semiautomatic tracking of objects on film or video tape which was developed to meet the needs of the microgravity combustion and fluid science experiments at NASA Lewis. The system consists of individual hardware parts working under computer control to achieve a high degree of automation. The most important hardware parts include a 16 mm film projector, a lens system, a video camera, an S-VHS tapedeck, a frame grabber, and some storage and output devices. Both the projector and tapedeck have a computer interface enabling remote control. Tracking software was developed to control the overall operation. In the automatic mode, the main tracking program controls the projector or the tapedeck frame incrementation, grabs a frame, processes it, locates the edge of the objects being tracked, and stores the coordinates in a file. This process is performed repeatedly until the last frame is reached. Three representative applications are described. These applications represent typical uses and include tracking the propagation of a flame front, tracking the movement of a liquid-gas interface with extremely poor visibility, and characterizing a diffusion flame according to color and shape.

Klimek, Robert B.; Paulick, Michael J.

1992-01-01
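Locating the edge of a tracked object frame by frame, as the tracking software above does, can be sketched with a gradient-peak detector plus parabolic sub-pixel refinement on a scan line; this is an illustrative method, not the NASA Lewis code, and the synthetic "flame front" below is an invented test signal.

```python
import numpy as np

def edge_position(row):
    """Locate a step edge in a scan line as the gradient-magnitude peak,
    refined to sub-pixel accuracy by a parabolic fit around the peak."""
    g = np.abs(np.diff(row))
    i = int(np.argmax(g))
    if 0 < i < len(g) - 1:
        a, b, c = g[i - 1], g[i], g[i + 1]
        denom = a - 2 * b + c
        if denom != 0:
            # Vertex of the parabola through the three gradient samples
            return i + 0.5 * (a - c) / denom + 0.5
    return i + 0.5

# Synthetic flame front advancing 2 px per frame along a 1-D profile
frames = []
for t in range(5):
    row = np.zeros(100)
    row[: 30 + 2 * t] = 1.0
    frames.append(row)

positions = [edge_position(f) for f in frames]
print(positions)
```

Storing one such coordinate per frame reproduces, in miniature, the coordinate files the workstation writes out for later analysis.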

422

AR/D image processing system  

NASA Technical Reports Server (NTRS)

General Dynamics has developed advanced hardware, software, and algorithms for use with the Tomahawk cruise missile and other unmanned vehicles. We have applied this technology to the problem of locating and determining the orientation of the docking port of a target vehicle with respect to an approaching spacecraft. The system described in this presentation utilizes a multi-processor based computer to digitize and process television imagery and extract parameters such as range to the target vehicle, approach velocity, and pitch and yaw angles. The processor is based on the Inmos T-800 Transputer and is configured as a loosely coupled array. Each processor operates asynchronously and has its own local memory. This allows additional processors to be easily added if additional processing power is required for more complex tasks. Total system throughput is approximately 100 MIPS (scalar) and 60 MFLOPS and can be expanded as desired. The algorithm implemented on the system uses a unique adaptive thresholding technique to locate the target vehicle and determine the approximate position of the docking port. A target pattern surrounding the port is then analyzed in the imagery to determine the range and orientation of the target. This information is passed to an autopilot, which uses it to perform course and speed corrections. Future upgrades to the processor are described which will enhance its capabilities for a variety of missions.

Wookey, Cathy; Nicholson, Bruce

1991-01-01
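The "unique adaptive thresholding technique" is not specified in the record; a generic local-statistics threshold of the same flavor can be sketched as follows. The window size, the factor k, and the synthetic scene are all hypothetical choices for illustration, not General Dynamics' method.

```python
import numpy as np
from scipy import ndimage as ndi

def adaptive_threshold(img, size=15, k=2.0):
    """Local-statistics threshold (illustrative): a pixel is foreground
    when it exceeds the local mean by k local standard deviations."""
    mean = ndi.uniform_filter(img, size)
    sq = ndi.uniform_filter(img * img, size)
    std = np.sqrt(np.clip(sq - mean * mean, 0, None))
    return img > mean + k * std

# Dim gradient background with a small bright "docking target"
yy, xx = np.mgrid[0:80, 0:80]
img = 0.2 + 0.001 * xx.astype(float)
img[38:43, 38:43] += 1.0

mask = adaptive_threshold(img)
ys, xs = np.nonzero(mask)
print(ys.mean(), xs.mean())   # detection centroid, near the target center
```

Because the threshold tracks the local mean, the slowly varying background is rejected even though parts of it are brighter than a fixed global threshold would allow.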

423

Enhancement of structure images of interstellar diamond microcrystals by image processing  

Microsoft Academic Search

Image-processed high resolution TEM images of diamond crystals found in oxidized acid residues of carbonaceous chondrites are presented. Two models of the origin of the diamonds are discussed. The model proposed by Lewis et al. (1987) supposes that the diamonds formed under low pressure conditions, whereas that of Blake et al. (1988) suggests that the diamonds formed due to

Michael A. O'Keefe; Crispin Hetherington; John Turner; David Blake; Friedemann Freund

1988-01-01

424

A Simple Microscopy Assay to Teach the Processes of Phagocytosis and Exocytosis  

PubMed Central

Phagocytosis and exocytosis are two cellular processes involving membrane dynamics. While it is easy to understand the purpose of these processes, it can be extremely difficult for students to comprehend the actual mechanisms. As membrane dynamics play a significant role in many cellular processes ranging from cell signaling to cell division to organelle renewal and maintenance, we felt that we needed to do a better job of teaching these types of processes. Thus, we developed a classroom-based protocol to simultaneously study phagocytosis and exocytosis in Tetrahymena pyriformis. In this paper, we present our results demonstrating that our undergraduate classroom experiment delivers results comparable with those acquired in a professional research laboratory. In addition, students performing the experiment do learn the mechanisms of phagocytosis and exocytosis. Finally, we demonstrate a mathematical exercise to help the students apply their data to the cell. Ultimately, this assay sets the stage for future inquiry-based experiments, in which the students develop their own experimental questions and delve deeper into the mechanisms of phagocytosis and exocytosis. PMID:22665590

Gray, Ross; Gray, Andrew; Fite, Jessica L.; Jordan, Renée; Stark, Sarah; Naylor, Kari

2012-01-01

425

A simple microscopy assay to teach the processes of phagocytosis and exocytosis.  

PubMed

Phagocytosis and exocytosis are two cellular processes involving membrane dynamics. While it is easy to understand the purpose of these processes, it can be extremely difficult for students to comprehend the actual mechanisms. As membrane dynamics play a significant role in many cellular processes ranging from cell signaling to cell division to organelle renewal and maintenance, we felt that we needed to do a better job of teaching these types of processes. Thus, we developed a classroom-based protocol to simultaneously study phagocytosis and exocytosis in Tetrahymena pyriformis. In this paper, we present our results demonstrating that our undergraduate classroom experiment delivers results comparable with those acquired in a professional research laboratory. In addition, students performing the experiment do learn the mechanisms of phagocytosis and exocytosis. Finally, we demonstrate a mathematical exercise to help the students apply their data to the cell. Ultimately, this assay sets the stage for future inquiry-based experiments, in which the students develop their own experimental questions and delve deeper into the mechanisms of phagocytosis and exocytosis. PMID:22665590

Gray, Ross; Gray, Andrew; Fite, Jessica L; Jordan, Renée; Stark, Sarah; Naylor, Kari

2012-01-01

426

Digital Image Processing Techniques to Create Attractive Astronomical Images from Research Data  

NASA Astrophysics Data System (ADS)

The quality of modern astronomical data, the power of modern computers and the agility of current image processing software enable the creation of high-quality images in a purely digital form that rival the quality of traditional photographic astronomical images. The combination of these technological advancements has created a new ability to make color astronomical images. And in many ways, it has led to a new philosophy towards how to create them. We present a practical guide to generate astronomical images from research data by using powerful image processing programs. These programs use a layering metaphor that allows an unlimited number of astronomical datasets to be combined in any desired color scheme, creating an immense parameter space to be explored using an iterative approach. Several examples of image creation are presented. We also present a philosophy on how to use color and composition to create images that simultaneously highlight the scientific detail within an image and are aesthetically appealing. We advocate an approach that uses visual grammar, defined as the elements which affect the interpretation of an image, to maximize the richness and detail in an image while maintaining scientific accuracy. By properly using visual grammar, one can imply qualities that a two-dimensional image intrinsically cannot show, such as depth, motion and energy. In addition, composition can be used to engage the viewer and keep him or her interested for a longer period of time. The effective use of these techniques can result in a striking image that will effectively convey the science within the image, to scientists and to the public.

Rector, T. A.; Levay, Z.; Frattare, L.; English, J.; Pu'uohau-Pummill, K.

2004-05-01
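The layering metaphor described above can be illustrated with two colorized datasets combined by a "screen" blend, one common layer-combination rule in image-editing software. The dataset names and hue assignments below are invented stand-ins for normalized narrowband exposures.

```python
import numpy as np

def colorize(gray, rgb):
    """Map a normalized grayscale dataset onto a single hue."""
    return gray[..., None] * np.asarray(rgb, float)

def screen(a, b):
    """'Screen' layer blend: keeps bright detail from both layers and
    never darkens, since 1-(1-a)(1-b) >= max(a, b) on [0, 1]."""
    return 1.0 - (1.0 - a) * (1.0 - b)

rng = np.random.default_rng(4)
halpha = rng.random((32, 32))   # stand-in for a normalized H-alpha exposure
oiii = rng.random((32, 32))     # stand-in for a normalized [O III] exposure

composite = screen(colorize(halpha, (1.0, 0.2, 0.2)),
                   colorize(oiii, (0.2, 0.4, 1.0)))
print(composite.shape, composite.min(), composite.max())
```

Because each dataset sits on its own layer, the hue and weighting of every input can be re-tuned iteratively, which is the parameter-space exploration the abstract advocates.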

427

Dehydration process of fish analyzed by neutron beam imaging  

NASA Astrophysics Data System (ADS)

Since regulation of water content in dried fish is an important factor for its quality, the water-loss process during drying of squid and Japanese horse mackerel was analyzed through neutron beam imaging. The neutron images showed that around the shoulder of the mackerel there was a region where the water content tended to remain high during drying. To analyze the water-loss process in more detail, spatial images were produced. From the images, it was clearly indicated that the decrease of water content was restricted around the shoulder. It was suggested that preventing deterioration around the shoulder of the dried fish is an important factor in maintaining the quality of the dried fish during storage.

Tanoi, K.; Hamada, Y.; Seyama, S.; Saito, T.; Iikura, H.; Nakanishi, T. M.

2009-06-01

428

New Zealand involvement in Radio Astronomical VLBI Image Processing  

E-print Network

With the establishment of the AUT University 12m radio telescope at Warkworth, New Zealand has now become part of the international Very Long Baseline Interferometry (VLBI) community. A major product of VLBI observations is images in the radio domain of astronomical objects such as Active Galactic Nuclei (AGN). By using large geographical separations between radio antennas, very high angular resolution can be achieved. Detailed images can be created using the technique of VLBI Earth Rotation Aperture Synthesis. We review the current process of VLBI radio imaging. In addition, we model VLBI configurations using the Warkworth telescope, AuScope (a new array of three 12m antennas in Australia) and the Australian Square Kilometre Array Pathfinder (ASKAP) array currently under construction in Western Australia, and discuss how the configuration of these arrays affects the quality of images. Recent imaging results demonstrate the modeled improvements from inclusion of the AUT and first ASKAP telescope in the Au...

Weston, Stuart; Gulyaev, Sergei

2011-01-01

429

Cloud based toolbox for image analysis, processing and reconstruction tasks.  

PubMed

This chapter describes a novel way of carrying out image analysis, reconstruction, and processing tasks using a cloud-based service provided on the Australian National eResearch Collaboration Tools and Resources (NeCTAR) infrastructure. The toolbox gives users free access to a wide range of useful blocks of functionality (imaging functions) that can be connected together in workflows, allowing the creation of even more complex algorithms that can be re-run on different data sets, shared with others, or further adjusted. The functions provided are in the areas of cellular imaging, advanced X-ray image analysis, computed tomography, and 3D medical imaging and visualisation. The service is currently available on the website www.cloudimaging.net.au . PMID:25381109

Bednarz, Tomasz; Wang, Dadong; Arzhaeva, Yulia; Lagerstrom, Ryan; Vallotton, Pascal; Burdett, Neil; Khassapov, Alex; Szul, Piotr; Chen, Shiping; Sun, Changming; Domanski, Luke; Thompson, Darren; Gureyev, Timur; Taylor, John A

2015-01-01

430

Use of homogeneous neuron-like media for image processing  

SciTech Connect

A homogeneous neuron-like medium with close nonlocal coupling between the positive center and the negative surroundings is used for image processing. The input image is transformed into a stationary structure dependent on the form of the coupling function and the type of the element's response to external action. Using the idea of plausible activity structures, we can determine the model parameters needed to extract image features such as the contour, the lines of definite direction, the ends of lines, the central axes of the image, the negative, the points of intersection of lines, the texture boundaries, and the objects of definite scale, as well as the parameters that permit reconstruction of broken lines, smoothing of lines, automatic determination of image scales, and binarization of the initial image by a medium with a tunable coupling function.

Nuidel, I.V.; Kuznetsov, S.O.

1995-02-01

431

Rapid prototyping in the development of image processing systems  

NASA Astrophysics Data System (ADS)

This contribution presents a rapid prototyping approach for the real-time demonstration of image processing algorithms. As an example, EADS/LFK has developed a basic IR target tracking system implementing this approach. Traditionally, research and industry perform time-independent simulation of image processing algorithms on a host computer. This method is good for demonstrating an algorithm's capabilities. Rarely is a time-dependent simulation, or even a real-time demonstration on a target platform, carried out to prove real-time capabilities. In 1D signal processing applications, time-dependent simulation and real-time demonstration have already been used for quite a while. For time-dependent simulation, Simulink from The MathWorks has established itself as an industry standard. Combined with The MathWorks' Real-Time Workshop, the simulation model can be transferred to a real-time target processor. The executable is generated automatically by the Real-Time Workshop directly from the simulation model. In 2D signal processing applications like image processing, The MathWorks' Matlab is commonly used for time-independent simulation. To achieve time-dependent simulation and real-time demonstration capabilities, the algorithms can be transferred to Simulink, which in fact runs on top of Matlab. Additionally, to increase performance, Simulink models or parts of them can be transferred to Xilinx FPGAs using Xilinx' System Generator. With a single model and the automatic workflow, both time-dependent simulation and real-time demonstration are covered, leading to an easy and flexible rapid prototyping approach. EADS/LFK is going to use this approach for a wider spectrum of IR image processing applications such as automatic target recognition, image based navigation, and imaging laser radar target recognition.

von der Fecht, Arno; Kelm, Claus Thomas

2004-08-01

432

Near-real-time satellite image processing: metacomputing in CC++  

Microsoft Academic Search

Metacomputing combines heterogeneous system elements in a seamless computing service. In this case study, we introduce the elements of metacomputing and describe an application for cloud detection and visualization of infrared and visible-light satellite images. The application processes the satellite images by using Compositional C++ (CC++), a simple yet powerful extension of C++, and its runtime system, Nexus, to integrate specialized resources,

Craig A. Lee; Carl Kesselman; Stephen Schwab

1996-01-01

433

Saturable Fabry-Perot filter for nonlinear optical image processing.  

PubMed

Experimental results of nonlinear optical image processing by a saturable Fabry-Perot filter are described. Fulgide, a photochromic compound, in toluene is used as a saturable absorber for the green light of an Ar-ion laser. Image thresholding using a steep increase of transmission with a critical power density of 700 mW/cm² is demonstrated. The spatial resolution is found to be more than 10 lines/mm over a 30-mm-diameter aperture. PMID:19701344

Mitsuhashi, Y

1981-03-01

434

Detection of ash fusion temperatures based on the image processing  

NASA Astrophysics Data System (ADS)

The detection of ash fusion temperatures is important in the research of coal characteristics. The prevalent method is to build an ash cone of specified dimensions and detect the characteristic temperatures from its morphological changes. However, conventional detection work is not accurate and is labor-intensive, since it relies on visual, real-time observation. To address the shortcomings of the conventional method, a new method to determine ash fusion temperatures with image processing techniques is introduced in this paper. Seven techniques (image cutting, image sharpening, edge picking, open operation, dilate operation, close operation, geometrical property extraction) are used in the image processing program. The processing results show that image sharpening can intensify the outline of the ash cone; the Prewitt operator extracts the edge well compared with many other operators; and mathematical morphology can filter noise effectively while filling cracks introduced by filtration, which is useful for further processing. The characteristic ash fusion temperatures can be measured from the cone's depth-to-width ratio. Ash fusion temperatures derived from this method match normal values well, which proves that the method is feasible for the detection of ash fusion temperatures.
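The Prewitt edge-picking step mentioned above can be sketched as a small gradient-magnitude computation. This is a generic Python illustration, not the authors' program: the synthetic bright block stands in for an ash cone silhouette, and no thresholds from the paper are reproduced.

```python
import numpy as np

# Prewitt kernels for horizontal and vertical gradients.
KX = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], dtype=float)
KY = KX.T

def correlate2d(img, kernel):
    """Naive 'valid' 2-D cross-correlation, sufficient for small images."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def prewitt_magnitude(img):
    """Gradient magnitude from the two Prewitt responses."""
    gx = correlate2d(img, KX)
    gy = correlate2d(img, KY)
    return np.hypot(gx, gy)

# A synthetic bright block on a dark background: edges appear at its border,
# while uniform regions (inside or outside the block) give zero response.
img = np.zeros((10, 10))
img[3:7, 3:7] = 1.0
mag = prewitt_magnitude(img)
```

The magnitude image is zero over uniform regions and nonzero only along the block's outline, which is exactly the property the abstract relies on for outlining the cone.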

Li, Peisheng; Yue, Yanan; Hu, Yi; Li, Jie; Yu, Wan; Yang, Jun; Hu, Niansu; Yang, Guolu

2007-11-01

435

Rapid color-based segmentation in digital image processing  

NASA Astrophysics Data System (ADS)

A necessary part of digital image processing is segmentation of the images into a set of objects which exist on some background. We are interested in a class of objects whose distinguishing characteristic is their color. Such objects occur in many applications, such as microscopy, printing, and production line monitoring. In this work, a general method of rapid color-based segmentation is presented. The only hardware requirement is that lookup tables (LUTs) be available. Most image processing hardware contains either LUTs or processors capable of rapidly performing table lookup. The method presented allows simultaneous application of constraints on both hue and saturation. In addition, it allows for the use of different color transformations. As such, it constitutes a general approach to the analysis of images consisting of three spectral components. Because of the speed of LUT operations, this approach is suitable for many time-sensitive applications. The major drawback of using LUTs is that the gray-level resolution per color is limited by the size of the LUT. This method was implemented on a Matrox MVP-AT image processor, which is capable of processing 12-bit images (4 bits/color). In many cases, this resolution is adequate, as can be seen from the examples presented.
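A minimal sketch of the LUT-based classification idea in Python/NumPy, assuming 4 bits per color as on the Matrox MVP-AT. The per-channel thresholds below are hypothetical stand-ins; the actual method applies constraints in a hue/saturation transform, which the abstract does not specify in detail.

```python
import numpy as np

BITS = 4                    # 4 bits per color, as on the 12-bit Matrox MVP-AT
LEVELS = 1 << BITS          # 16 quantized levels per channel

# Hypothetical target class: pixels with high red and low green/blue.
# One boolean LUT per channel over all quantized levels; classification
# then costs only a table lookup per pixel.
lut_r = np.arange(LEVELS) >= 10
lut_g = np.arange(LEVELS) < 6
lut_b = np.arange(LEVELS) < 6

def segment(rgb):
    """rgb: (..., 3) integer array of levels in [0, LEVELS). Boolean mask."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return lut_r[r] & lut_g[g] & lut_b[b]

img = np.array([[[15, 2, 3], [8, 8, 8]],
                [[12, 5, 0], [0, 15, 0]]])
mask = segment(img)   # reddish pixels flagged, gray and green rejected
```

Because the tables are precomputed, the per-pixel work is independent of how complicated the color constraint is, which is what makes the approach fast enough for time-sensitive applications.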

Engelberg, Yaakov M.; Chavel, A. C.; Stroh, U.; Weiss, Aryeh M.; Rotman, Stanley R.

1993-08-01

436

Image processing for the laser spot thermography of composite materials  

NASA Astrophysics Data System (ADS)

This paper describes an image processing algorithm in support of infrared-based nondestructive testing. The algorithm analyzes the raw thermal infrared images obtained using the nondestructive evaluation method of laser spot thermography. In the study presented in this paper, a laser was used to scan a test specimen through the generation of single pulses. The temperature distribution produced by this thermoelastic source was measured by an infrared camera and processed with a two-stage algorithm. In the first stage, a few statistical parameters were used to flag the presence of damage. In the second stage, the images that revealed the presence of damage were processed by computing the first and second spatial derivatives. Two spatial filters were also used to enhance contrast and to locate and size the defect. The algorithm was experimentally validated by scanning the surface of a CFRP and a GFRP composite plate with induced defects.
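The two-stage structure (statistical flagging, then spatial derivatives) might be sketched as follows. The variance screen, its threshold, and the 5-point Laplacian are illustrative choices only, not the authors' exact statistics or filters.

```python
import numpy as np

def flag_damage(frame, std_threshold=0.1):
    """Stage 1: a simple statistical screen. High spatial variance is
    treated as a possible defect signature (threshold is hypothetical)."""
    return frame.std() > std_threshold

def laplacian(frame):
    """Stage 2: discrete second spatial derivative (5-point stencil)."""
    f = frame.astype(float)
    lap = np.zeros_like(f)
    lap[1:-1, 1:-1] = (f[2:, 1:-1] + f[:-2, 1:-1] +
                       f[1:-1, 2:] + f[1:-1, :-2] - 4 * f[1:-1, 1:-1])
    return lap

# A uniform frame passes the screen; a frame with a hot spot is flagged
# and the Laplacian localizes the anomaly by its sign change.
uniform = np.ones((8, 8))
hot_spot = np.ones((8, 8))
hot_spot[4, 4] = 5.0
```

Only frames that fail stage 1 would proceed to the more expensive derivative stage, mirroring the screening-then-localization flow described in the abstract.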

Vandone, Ambra; Rizzo, Piervincenzo; Vanali, Marcello

2012-04-01

437

Latency and bandwidth considerations in parallel robotics image processing  

SciTech Connect

Parallel image processing for robotics applications differs in a fundamental way from parallel scientific computing applications: the problem size is fixed, and latency requirements are tight. This brings Amdahl's law into effect with full force, so that message-passing latency and bandwidth severely restrict performance. In this paper the authors examine an application from this domain, stereo image processing, which has been implemented in Adapt, a niche language for parallel image processing implemented on the Carnegie Mellon-Intel Corporation iWarp. High performance has been achieved for this application. They show how an I/O building-block approach on iWarp achieved this, and then examine the implications of this performance for more traditional machines that lack iWarp's rich I/O primitive set.
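The Amdahl's-law effect of a fixed problem size plus communication cost can be illustrated numerically. The serial fraction and overhead values below are hypothetical, not measurements from the iWarp system.

```python
def speedup(serial_fraction, n_procs, comm_overhead=0.0):
    """Amdahl's-law speedup for a fixed-size problem, with an optional
    additive communication-overhead term (all times normalized so that
    total single-processor work = 1)."""
    t_parallel = serial_fraction + (1 - serial_fraction) / n_procs + comm_overhead
    return 1.0 / t_parallel

# With a 5% serial fraction, 64 processors give under 16x speedup,
# and any fixed messaging overhead erodes that bound further.
s64 = speedup(0.05, 64)
s64_overhead = speedup(0.05, 64, comm_overhead=0.02)
```

Because the image size does not grow with the machine, the overhead term cannot be amortized away, which is the point the abstract makes about latency-bound robotics workloads.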

Webb, J.A. [Carnegie Mellon Univ., Pittsburgh, PA (United States). School of Computer Science

1993-12-31

438

The Multimission Image Processing Laboratory's virtual frame buffer interface  

NASA Technical Reports Server (NTRS)

Large image processing systems use multiple frame buffers with differing architectures and vendor-supplied interfaces. This variety of architectures and interfaces creates software development, maintenance, and portability problems for application programs. Several device-independent graphics standards such as ANSI Core and GKS are available, but none of them are adequate for image processing. Therefore, the Multimission Image Processing Laboratory project has implemented a programmer-level virtual frame buffer interface. This interface makes all frame buffers appear as a generic frame buffer with a specified set of characteristics. This document defines the virtual frame buffer interface and provides information such as FORTRAN subroutine definitions, frame buffer characteristics, sample programs, etc. It is intended to be used by application programmers and system programmers who are adding new frame buffers to a system.

Wolfe, T.

1984-01-01

439

Real-time microstructural and functional imaging and image processing in optical coherence tomography  

NASA Astrophysics Data System (ADS)

Optical Coherence Tomography (OCT) is a noninvasive optical imaging technique that allows high-resolution cross-sectional imaging of tissue microstructure, achieving a spatial resolution of about 10 μm. OCT is similar to B-mode ultrasound (US) except that it uses infrared light instead of ultrasound. In contrast to US, no coupling gel is needed, simplifying image acquisition. Furthermore, the fiber optic implementation of OCT is compatible with endoscopes. In recent years, the transition from slow, bench-top imaging systems to real-time clinical systems has been under way. This has led to a variety of applications, namely in ophthalmology, gastroenterology, dermatology and cardiology. First, this dissertation will demonstrate that OCT is capable of imaging and differentiating clinically relevant tissue structures in the gastrointestinal tract. A careful in vitro correlation study between endoscopic OCT images and corresponding histological slides was performed. Besides structural imaging, OCT systems were further developed for functional imaging, for example to visualize blood flow. Previously, imaging flow in small vessels in real time was not possible. For this research, a new processing scheme similar to real-time Doppler in US was introduced. It was implemented in dedicated hardware to allow real-time acquisition and overlaid display of blood flow in vivo. A sensitivity of 0.5 mm/s was achieved. Optical coherence microscopy (OCM) is a variation of OCT that improves the resolution even further, to a few micrometers. Advances made in the OCT scan engine for the Doppler setup enabled real-time imaging in vivo with OCM. In order to generate geometrically correct images for all the previous applications in real time, extensive image processing algorithms were developed. Algorithms for correction of distortions due to non-telecentric scanning, nonlinear scan mirror movements, and refraction were developed and demonstrated.
This has led to interesting new applications, for example in imaging of the anterior segment of the eye.

Westphal, Volker

440

Software architecture for intelligent image processing using Prolog  

NASA Astrophysics Data System (ADS)

We describe a prototype system for interactive image processing using Prolog, implemented by the first author on an Apple Macintosh computer. This system is inspired by Prolog+, but differs from it in two particularly important respects. The first is that whereas Prolog+ assumes the availability of dedicated image processing hardware, with which the Prolog system communicates, our present system implements image processing functions in software using the C programming language. The second difference is that although our present system supports Prolog+ commands, these are implemented in terms of lower-level Prolog predicates which provide a more flexible approach to image manipulation. We discuss the impact of the Apple Macintosh operating system upon the implementation of the image-processing functions, and the interface between these functions and the Prolog system. We also explain how the Prolog+ commands have been implemented. The system described in this paper is a fairly early prototype, and we outline how we intend to develop the system, a task which is expedited by the extensible architecture we have implemented.

Jones, Andrew C.; Batchelor, Bruce G.

1994-10-01

441

ISLE (Image and Signal Processing LISP Environment) reference manual  

SciTech Connect

ISLE is a rapid prototyping system for performing image and signal processing. It is designed to meet the needs of a person developing image and signal processing algorithms in a research environment. The image and signal processing modules in ISLE form a very capable package in themselves. They also provide a rich environment for quickly and easily integrating user-written software modules into the package. ISLE is well suited to applications in which there is a need to develop a processing algorithm in an interactive manner. It is straightforward to develop an algorithm, load it into ISLE, apply the algorithm to an image or signal, display the results, then modify the algorithm and repeat the develop-load-apply-display cycle. ISLE consists of a collection of image and signal processing modules integrated into a cohesive package through a standard command interpreter. The ISLE developers elected to concentrate their effort on developing image and signal processing software rather than developing a command interpreter. A COMMON LISP interpreter was selected as the command interpreter because it already has the features desired of a command interpreter, supports dynamic loading of modules for customization purposes, supports run-time parameter and argument type checking, is very well documented, and is a commercially supported product. This manual is intended to be a reference manual for the ISLE functions. The functions are grouped into a number of categories and briefly discussed in the Function Summary chapter. Full descriptions of the functions and all their arguments are given in the Function Descriptions chapter. 6 refs.

Sherwood, R.J.; Searfus, R.M.

1990-01-01

442

Automatic image processing system for beautifying human faces  

NASA Astrophysics Data System (ADS)

In this paper, we propose an automatic image processing system to beautify human faces in frontal-parallel color images. Although most image processing packages provide functions to beautify color images, few of them, at least to our knowledge, are specific to beautifying human faces. Applied directly, such general functions make the processed face images look unnatural; for example, they remove most of the natural edges around special regions such as the eyes and mouth. Therefore, the proposed system processes only the face regions. To make the processed face region smoother, our system treats the eye/mouth regions and the rest of the face region differently. By using different smoothing methods for these two types of regions, we can keep almost all the natural edges around the eyes and mouth while removing wrinkles and spots on the rest of the face. The process of our system is as follows. First, we convert the RGB color space into YCbCr space so as to segment face regions from the scene background, based on the skin-color value range proposed by H. A. Rowley et al. Within the face region, the system uses chain codes to locate the eye region and the mouth region. For the eye and mouth regions, we adjust the image pixel by pixel; the remaining pixels are adjusted block by block. To evaluate the performance of our system, we compare it with the tool XCleanSkinFX, which can be found at http://www.mediachance.com. Our system outperforms it.
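The RGB-to-YCbCr conversion and skin-color thresholding step can be sketched as below. The BT.601 conversion coefficients are standard, but the Cb/Cr bounds are commonly cited illustrative values; the abstract does not give the exact range the authors used.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Full-range RGB -> YCbCr using the ITU-R BT.601 coefficients."""
    rgb = rgb.astype(float)
    y  =  0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    cb = 128 - 0.168736 * rgb[..., 0] - 0.331264 * rgb[..., 1] + 0.5 * rgb[..., 2]
    cr = 128 + 0.5 * rgb[..., 0] - 0.418688 * rgb[..., 1] - 0.081312 * rgb[..., 2]
    return np.stack([y, cb, cr], axis=-1)

def skin_mask(rgb, cb_range=(77, 127), cr_range=(133, 173)):
    """Threshold Cb/Cr into a skin-tone box (ranges are illustrative)."""
    ycbcr = rgb_to_ycbcr(rgb)
    cb, cr = ycbcr[..., 1], ycbcr[..., 2]
    return ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1]))
```

Working in YCbCr decouples brightness (Y) from chromaticity (Cb, Cr), so the skin test stays stable across lighting changes, which is why skin segmentation is usually done in this space rather than directly in RGB.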

Ho, Kevin I. C.; Chen, Tung-Shou; Su, Hsing-Yi

2002-09-01

443

Automated Processing of Zebrafish Imaging Data: A Survey  

PubMed Central

Abstract Due to the relative transparency of its embryos and larvae, the zebrafish is an ideal model organism for bioimaging approaches in vertebrates. Novel microscope technologies allow the imaging of developmental processes in unprecedented detail, and they enable the use of complex image-based read-outs for high-throughput/high-content screening. Such applications can easily generate Terabytes of image data, the handling and analysis of which becomes a major bottleneck in extracting the targeted information. Here, we describe the current state of the art in computational image analysis in the zebrafish system. We discuss the challenges encountered when handling high-content image data, especially with regard to data quality, annotation, and storage. We survey methods for preprocessing image data for further analysis, and describe selected examples of automated image analysis, including the tracking of cells during embryogenesis, heartbeat detection, identification of dead embryos, recognition of tissues and anatomical landmarks, and quantification of behavioral patterns of adult fish. We review recent examples for applications using such methods, such as the comprehensive analysis of cell lineages during early development, the generation of a three-dimensional brain atlas of zebrafish larvae, and high-throughput drug screens based on movement patterns. Finally, we identify future challenges for the zebrafish image analysis community, notably those concerning the compatibility of algorithms and data formats for the assembly of modular analysis pipelines. PMID:23758125

Dickmeis, Thomas; Driever, Wolfgang; Geurts, Pierre; Hamprecht, Fred A.; Kausler, Bernhard X.; Ledesma-Carbayo, María J.; Marée, Raphaël; Mikula, Karol; Pantazis, Periklis; Ronneberger, Olaf; Santos, Andres; Stotzka, Rainer; Strähle, Uwe; Peyriéras, Nadine

2013-01-01

444

Image Algebra Matlab language version 2.3 for image processing and compression research  

NASA Astrophysics Data System (ADS)

Image algebra is a rigorous, concise notation that unifies linear and nonlinear mathematics in the image domain. Image algebra was developed under DARPA and US Air Force sponsorship at the University of Florida over more than 15 years, beginning in 1984. Image algebra has been implemented in a variety of programming languages designed specifically to support the development of image processing and computer vision algorithms and software. The University of Florida has been associated with image algebra implementations in FORTRAN, Ada, Lisp, and C++. The latter implementation involved a class library, iac++, that supported image algebra programming in C++. Since image processing and computer vision are generally performed with array-based operands, the Matlab™ programming language is ideal for implementing the common subset of image algebra. Objects include sets and set operations, images and operations on images, as well as templates and image-template convolution operations. This implementation, called Image Algebra Matlab (IAM), has been found to be useful for research in data, image, and video compression, as described herein. Due to the widespread acceptance of the Matlab programming language in the computing community, IAM offers exciting possibilities for supporting a large group of users. The control over an object's computational resources provided to the algorithm designer by Matlab means that IAM programs can employ versatile representations for the operands and operations of the algebra, which are supported by the underlying libraries written in Matlab. In a previous publication, we showed how the functionality of iac++ could be carried forth into a Matlab implementation, and provided practical details of a prototype implementation called IAM Version 1.
In this paper, we further elaborate the purpose and structure of image algebra, then present a maturing implementation of Image Algebra Matlab called IAM Version 2.3, which extends the previous implementation of IAM to include polymorphic operations over different point sets, as well as recursive convolution operations and functional composition. We also show how image algebra and IAM can be employed in image processing and compression research, as well as algorithm development and analysis.

Schmalz, Mark S.; Ritter, Gerhard X.; Hayden, Eric

2010-08-01

445

An image processing approach to analyze morphological features of microscopic images of muscle fibers.  

PubMed

We present an image processing approach to automatically analyze duo-channel microscopic images of muscle fiber nuclei and cytoplasm. Nuclei and cytoplasm play a critical role in determining the health and functioning of muscle fibers, as changes in nuclei and cytoplasm manifest in many diseases such as muscular dystrophy and hypertrophy. Quantitative evaluation of muscle fiber nuclei and cytoplasm is thus of great importance to researchers in musculoskeletal studies. The proposed computational approach consists of image processing steps to segment and delineate cytoplasm and identify nuclei in two-channel images. Morphological operations such as skeletonization are applied to extract the length of the cytoplasm for quantification. We tested the approach on real images and found that it achieves high accuracy, objectivity, and robustness. PMID:25124286
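The length-quantification step that follows skeletonization can be sketched as below, assuming a 1-pixel-wide binary skeleton has already been produced by some thinning routine. Summing 8-connected neighbor distances (diagonal steps weighted by √2) is one common way to turn a skeleton into a length estimate; it is not necessarily the authors' exact measure.

```python
import numpy as np

def skeleton_length(skel, spacing=1.0):
    """Estimate curve length from a 1-pixel-wide binary skeleton by summing
    distances between 8-connected neighbor pairs; each pair is counted once
    by only looking in four of the eight directions."""
    ys, xs = np.nonzero(skel)
    pts = set(zip(ys.tolist(), xs.tolist()))
    length = 0.0
    for y, x in pts:
        for dy, dx in [(0, 1), (1, 0), (1, 1), (1, -1)]:
            if (y + dy, x + dx) in pts:
                length += np.hypot(dy, dx) * spacing
    return length

# A straight horizontal skeleton of 5 pixels spans 4 unit steps.
skel = np.zeros((3, 7), dtype=bool)
skel[1, 1:6] = True
```

The `spacing` parameter (a hypothetical calibration factor) converts pixel distances to physical units, e.g. micrometers per pixel from the microscope metadata.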

Comin, Cesar Henrique; Xu, Xiaoyin; Wang, Yaming; Costa, Luciano da Fontoura; Yang, Zhong

2014-12-01

446

Negotiating a space to teach science: Stories of community conversation and personal process in a school reform effort  

NASA Astrophysics Data System (ADS)

This is a qualitative study about elementary teachers in a school district who are involved in a science curricular reform effort. The teachers attempted to move from textbook-based science teaching to a more inquiry and process-based approach. I specifically explore how teachers negotiate their place within changes in pedagogy and curriculum and how this negotiation is enacted in the space of a teacher's own classroom. The account developed here is based on a two-year study. Presented are descriptions, analysis, and my own interpretations of teaching and conversations as teachers spoke with one another, with me and with children as they tried out the new science curriculum and pedagogies. I conclude that people interested in school reform should consider the following ideas as they work with teachers to implement pedagogical and curricular changes. (1) Teaching is a personal/individual process that takes place within a larger community. This leads to a complex context for working and making decisions. (2) Despite feeling that changes were imposed, teachers make the curriculum work for the needs in their own classroom. (3) Change is a process that teachers view as part of their work. Teachers expect that they will adapt curriculum and make it work for the children in their classes and for themselves. I suggest that those who advocate various reform efforts in teaching and curriculum should consider the spaces that teachers create as they become a part of the change process including intellectual, physical, and emotional ones. In my stories I assert: teachers create their own spaces for making changes in pedagogy and curriculum and they do this as a complex negotiation of external demands (such as their community, relationships with colleagues, and state standards) and their own values and interpretations. 
The way that teachers implement the change process is a personal one, and because it is a personal process, school reform efforts largely depend on teachers making these efforts a part of their own thinking, teaching, and learning.

Barker, Heidi Bulmahn

447

Teaching strategies for using projected images to develop conceptual understanding: Exploring discussion practices in computer simulation and static image-based lessons  

NASA Astrophysics Data System (ADS)

The availability and sophistication of visual display images, such as simulations, for use in science classrooms has increased exponentially; however, it can be difficult for teachers to use these images to encourage and engage active student thinking. There is a need to describe flexible discussion strategies that use visual media to engage active thinking. This mixed methods study analyzes teacher behavior in lessons using visual media about the particulate model of matter that were taught by three experienced middle school teachers. Each teacher taught half of their students with lessons using static overheads and the other half with lessons using a projected dynamic simulation. The quantitative analysis of pre-post data found significant gain differences between the two image mode conditions, suggesting that students assigned to the simulation condition learned more than students assigned to the overhead condition. Open coding was used to identify a set of eight image-based teaching strategies that teachers were using with visual displays. Fixed codes for this set of image-based discussion strategies were then developed and used to analyze video and transcripts of whole class discussions from 12 lessons. The image-based discussion strategies were refined over time in a set of three in-depth 2x2 comparative case studies of two teachers teaching one lesson topic with two image display modes. The comparative case study data suggest that the simulation mode may have offered greater affordances than the overhead mode for planning and enacting discussions. The 12 discussions were also coded for overall teacher-student interaction patterns, such as presentation, IRE, and IRF. When teachers moved during a lesson from using no image to using either image mode, some teachers asked more questions when the image was displayed while others asked many fewer questions.
The changes in teacher-student interaction patterns suggest that teachers vary on whether they treat the displayed image as a "tool-for-telling" or a "tool-for-asking." The study attempts to provide new descriptions of strategies teachers use to orchestrate image-based discussions designed to promote student engagement and reasoning in lessons with conceptual goals.

Price, Norman T.

448

A synoptic description of coal basins via image processing  

NASA Technical Reports Server (NTRS)

An existing image processing system is adapted to describe the geologic attributes of a regional coal basin. This scheme handles a map as if it were a matrix, in contrast to more conventional approaches which represent map information in terms of linked polygons. The utility of the image processing approach is demonstrated by a multiattribute analysis of the Herrin No. 6 coal seam in Illinois. Findings include the location of a resource and estimation of tonnage corresponding to constraints on seam thickness, overburden, and Btu value, which are illustrative of the need for new mining technology.
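The map-as-matrix idea can be sketched with a toy multiattribute query. All grid values, the cell size, and the coal density below are hypothetical, and the thresholds merely illustrate the thickness/overburden/Btu screening described in the abstract.

```python
import numpy as np

# Each map layer is a matrix over the same grid (hypothetical 2x2 example).
thickness = np.array([[1.8, 2.4], [0.9, 2.1]])        # seam thickness, m
overburden = np.array([[40.0, 95.0], [30.0, 60.0]])   # overburden depth, m
btu = np.array([[11500.0, 12200.0], [10800.0, 11900.0]])  # heating value

CELL_AREA_M2 = 1.0e6        # assumed 1 km x 1 km cells
DENSITY_T_PER_M3 = 1.3      # assumed in-place coal density, t/m^3

# Resource = cells meeting all constraints at once; tonnage sums
# thickness x area x density over the masked cells.
mask = (thickness >= 1.0) & (overburden <= 90.0) & (btu >= 11000.0)
tonnage = (thickness * mask * CELL_AREA_M2 * DENSITY_T_PER_M3).sum()
```

Because each attribute is just another matrix over the same lattice, adding a constraint is a single elementwise comparison, which is the advantage over polygon-linked map representations that the abstract points to.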

Farrell, K. W., Jr.; Wherry, D. B.

1978-01-01

449

High-resolution imaging of the supercritical antisolvent process  

NASA Astrophysics Data System (ADS)

A high-magnification and high-resolution imaging technique was developed for the supercritical fluid antisolvent (SAS) precipitation process. Visualizations of the jet injection, flow patterns, droplets, and particles were obtained in a high-pressure vessel for polylactic acid and budesonide precipitation in supercritical CO2. The results show two regimes for particle production: one where turbulent mixing occurs in gas-like plumes, and another where distinct droplets were observed in the injection. Images are presented to demonstrate the capabilities of the method for examining particle formation theories and for understanding the underlying fluid mechanics, thermodynamics, and mass transport in the SAS process.

Bell, Philip W.; Stephens, Amendi P.; Roberts, Christopher B.; Duke, Steve R.

2005-06-01

450

Health-Related Intensity Profiles of Physical Education Classes at Different Phases of the Teaching/Learning Process  

ERIC Educational Resources Information Center

Study aim: To assess the intensities of three types of physical education (PE) classes corresponding to the phases of the teaching/learning process: Type 1--acquiring and developing skills, Type 2--selecting and applying skills, tactics and compositional principles and Type 3--evaluating and improving performance skills. Material and methods: A…

Bronikowski, Michal; Bronikowska, Malgorzata; Kantanista, Adam; Ciekot, Monika; Laudanska-Krzeminska, Ida; Szwed, Szymon

2009-01-01

451

Validation Study of the Scale for "Assessment of the Teaching-Learning Process", Student Version (ATLP-S)  

ERIC Educational Resources Information Center

Introduction: The main goal of this study is to evaluate the psychometric and assessment features of the Scale for the "Assessment of the Teaching-Learning Process, Student Version" (ATLP-S), for both practical and theoretical reasons. From an applied point of view, this self-report measurement instrument has been designed to encourage student…

de la Fuente, Jesus; Sander, Paul; Justicia, Fernando; Pichardo, M. Carmen; Garcia-Berben, Ana B.

2010-01-01

452

The Perceptions of Student Teachers about the Effects of Class Size with Regard to Effective Teaching Process  

ERIC Educational Resources Information Center

The main purpose of this study was to determine student teachers' perceptions concerning the effects of class size with regard to the teaching process. A total of 41 fourth-year student teachers participated in the study. A questionnaire including open-ended items was used for data collection. The study revealed that there is a direct relationship…

Cakmak, Melek

2009-01-01

453

Improving Teaching through Continuous Learning: The Inquiry Process John Wooden Used to Become Coach of the Century  

Microsoft Academic Search

Past and contemporary scholars have emphasized the importance of job-embedded, systematic instructional inquiry for educators. A recent review of the literature highlights four key features shared by several well documented inquiry approaches for classroom teachers. Interestingly, another line of research suggests that these key features also characterized the process that UCLA's John Wooden used to systematically improve his teaching of

Bradley Alan Ermeling

2012-01-01

454

Combined optimization of image-gathering optics and image-processing algorithm for edge detection  

NASA Technical Reports Server (NTRS)

This paper investigates the relationships between the image-gathering and image-processing systems for minimum mean-squared error estimation of scene characteristics. A stochastic optimization problem is formulated in which the objective is to estimate a spatial characteristic of the scene rather than a feature of the already blurred, sampled, and noisy image data. The Wiener filter for the sampled-image case is obtained as a special case, in which the desired characteristic is scene restoration. Optimal edge detection is investigated. It is shown that the optimal edge detector compensates for the blurring introduced by the image-gathering optics and, notably, that it is not circularly symmetric. The lack of circular symmetry is largely due to the geometric effects of the sampling lattice used in image acquisition.

Halyo, N.; Samms, R. W.

1986-01-01
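The Wiener-filter special case mentioned in the abstract can be illustrated with a minimal frequency-domain sketch. This is a generic Wiener deconvolution, not the authors' sampled-scene estimator; the function name, parameters, and test scene are illustrative:

```python
import numpy as np

def wiener_restore(blurred, psf, noise_power, signal_power):
    """Classical Wiener deconvolution in the frequency domain.

    Estimates the scene from a blurred, noisy image given the
    point-spread function (psf) of the image-gathering optics.
    """
    H = np.fft.fft2(psf, s=blurred.shape)        # optics transfer function
    G = np.fft.fft2(blurred)                     # observed image spectrum
    # Wiener filter: conj(H) / (|H|^2 + noise-to-signal power ratio)
    W = np.conj(H) / (np.abs(H) ** 2 + noise_power / signal_power)
    return np.real(np.fft.ifft2(W * G))          # restored scene estimate

# Tiny demonstration: blur a step edge with a 3x3 box psf, then restore it.
scene = np.zeros((32, 32))
scene[:, 16:] = 1.0                              # vertical step edge
psf = np.ones((3, 3)) / 9.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(psf, s=scene.shape) * np.fft.fft2(scene)))
restored = wiener_restore(blurred, psf, noise_power=1e-3, signal_power=1.0)
```

The regularizing noise-to-signal term keeps the filter finite where the optics attenuate the spectrum, which is exactly the situation the paper's joint optics/algorithm optimization addresses.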

455

Remotely sensed image processing service composition based on heuristic search  

NASA Astrophysics Data System (ADS)

As remote sensing technology becomes ever more powerful, with multiple platforms and sensors, it has been widely recognized as a contributor to geospatial information efforts. Because remotely sensed image processing demands large-scale, collaborative processing and massive storage capabilities to satisfy the increasing demands of various applications, its effectiveness and efficiency fall far short of users' expectations. The emergence of Service Oriented Architecture (SOA) may make this challenge manageable: it encapsulates all processing functions as services and recombines them into service chains, and on-demand service composition has become a hot topic. Aiming at the success rate, quality, and efficiency of processing service composition for remote sensing applications, a remotely sensed image processing service composition method is proposed in this paper. It composes services for a user requirement in two steps: 1) dynamically constructing a complete service dependency graph for the user requirement on-line; 2) using AO*-based heuristic search to find an optimal valid path in the service dependency graph. The services within the dependency graph are those relevant to the specific request, rather than all registered services. The second step, heuristic search, is a promising approach to automated planning: starting from the initial state, AO* uses a heuristic function to select states until the user requirement is reached. Experimental results show that this method performs well even when the repository contains a large number of processing services.

Yang, Xiaoxia; Zhu, Qing; Li, Hai-feng; Zhao, Wen-hao

2008-12-01
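The two-step idea, searching a dependency graph of processing services for a valid chain, can be sketched in a simplified form. True AO* operates on AND/OR graphs (services with multiple required inputs); the sketch below uses plain best-first search over single-input services, and the service names and data types are hypothetical:

```python
import heapq

# Hypothetical service repository: each service maps the data type it
# requires to the data type it produces (a simplified dependency graph).
SERVICES = {
    "radiometric_correction": ("raw_image", "corrected_image"),
    "geometric_correction":   ("corrected_image", "georeferenced_image"),
    "ndvi":                   ("georeferenced_image", "ndvi_map"),
    "classification":         ("georeferenced_image", "landcover_map"),
}

def compose(start, goal, cost=lambda s: 1, h=lambda t: 0):
    """Best-first search for a valid service chain turning `start` data
    into the `goal` product; `h` is an admissible heuristic on data types."""
    frontier = [(h(start), 0, start, [])]        # (f, g, data type, chain)
    seen = set()
    while frontier:
        _, g, data, chain = heapq.heappop(frontier)
        if data == goal:
            return chain
        if data in seen:
            continue
        seen.add(data)
        for name, (needs, makes) in SERVICES.items():
            if needs == data:
                heapq.heappush(frontier,
                               (g + cost(name) + h(makes), g + cost(name),
                                makes, chain + [name]))
    return None

print(compose("raw_image", "ndvi_map"))
# -> ['radiometric_correction', 'geometric_correction', 'ndvi']
```

Restricting the search to services reachable from the request (rather than scanning every registered service) is what keeps the composition tractable as the repository grows, which matches the paper's stated motivation.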

456

Introduction to Image Processing I-Liang Chern  

E-print Network

Topics covered: image enhancement (intensity adjustment, denoising, deblurring), image inpainting ("Image Inpainting: An Overview"), image segmentation, and image registration. Book: Rafael C…

457

Galaxy Image Processing and Morphological Classification Using Machine Learning  

NASA Astrophysics Data System (ADS)

This work uses data from the Sloan Digital Sky Survey (SDSS) and the Galaxy Zoo Project for classification of galaxy morphologies via machine learning. SDSS imaging data together with reliable human classifications from Galaxy Zoo provide the training set and test set for the machine learning architectures. Classification is performed with hand-picked, pre-computed features from SDSS as well as with the raw imaging data from SDSS that was available to humans in the Galaxy Zoo project. With the hand-picked features and a logistic regression classifier, 95.21% classification accuracy and an area under the ROC curve of 0.986 are attained. In the case of the raw imaging data, the images are first processed to remove background noise, image artifacts, and celestial objects other than the galaxy of interest. They are then rotated onto their principal axis of variance to guarantee rotational invariance. The processed images are used to compute color information, up to 4th-order central normalized moments, and radial intensity profiles. These features are used to train a support vector machine with a 3rd-degree polynomial kernel, which achieves a classification accuracy of 95.89% with an ROC area of 0.943.

Kates-Harbeck, Julian

2012-03-01
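The classification stage described in the abstract can be sketched with standard tools. This is only the classifier setup, with synthetic stand-in features in place of the SDSS/Galaxy Zoo measurements; all data, labels, and hyperparameters other than the 3rd-degree polynomial kernel are illustrative:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Stand-in features: color indices, central moments, radial profile bins.
X = rng.normal(size=(1000, 12))
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 > 0).astype(int)   # synthetic labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Logistic regression on "hand-picked" features, as in the abstract.
logit = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# SVM with a 3rd-degree polynomial kernel, as in the abstract
# (coef0=1 includes lower-degree terms in the kernel expansion).
svm = SVC(kernel="poly", degree=3, coef0=1).fit(X_tr, y_tr)

print(logit.score(X_te, y_te), svm.score(X_te, y_te))
```

On real data, accuracy and ROC area would of course be evaluated against the Galaxy Zoo human labels rather than synthetic ones.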

458

Modeling of illumination effects for image processing of microvessels  

NASA Astrophysics Data System (ADS)

This research is in support of the development of an image processing system capable of detecting and tracking blood vessels in photographs or video images of the human microcirculation system. We describe a model that replicates the illumination processes contributing to a film or video image of the microvessels of the human bulbar conjunctiva. The model provides a foundation for microvessel detection algorithms, for measurement of vessel parameters, for determining the relative depth of blood vessels, and for separating neighboring vessels in complex images. The model is based on a cylindrical vessel embedded in a diffuse medium which is on a reflecting background. A light source illuminating the scene is reflected by its components and passes through a pinhole to an image plane, which records these reflections as intensity values at discrete pixel locations. Fundamental physical principles, including Lambert's cosine law, isotropic spreading, Fresnel's law, and Beer's law, are systematically applied to the model. A video apparatus and a phantom were constructed to analyze different illumination conditions and to verify the model. A simulation based on the model compared favorably with data taken from phantom images.

Wick, Carl E.; Loew, Murray H.; Kurantsin-Mills, Joseph

1993-06-01
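Two of the physical principles named in the abstract, Lambert's cosine law and Beer's law, combine naturally into a cross-sectional intensity profile for a cylindrical vessel. The sketch below is illustrative, not the authors' model; the function, parameter values, and geometry are assumptions:

```python
import numpy as np

def vessel_profile(x, r=1.0, mu=2.0, i0=1.0, theta=0.0):
    """Predicted image intensity at lateral offset x from the vessel axis.

    Beer's law attenuates light by exp(-mu * path length), where the
    chord through a cylinder of radius r at offset x has length
    2*sqrt(r^2 - x^2); Lambert's cosine law scales the source by
    cos(theta), the angle between the light and the surface normal.
    """
    x = np.asarray(x, dtype=float)
    chord = np.where(np.abs(x) < r,
                     2.0 * np.sqrt(np.maximum(r ** 2 - x ** 2, 0.0)),
                     0.0)                          # no absorber off-vessel
    return i0 * np.cos(theta) * np.exp(-mu * chord)

xs = np.linspace(-2, 2, 9)
profile = vessel_profile(xs)
# Darkest at the vessel axis (longest absorbing path); background ~ i0.
```

A profile of this shape is what a detection algorithm would match against, and fitting mu and r to observed profiles is one route to the vessel-parameter measurements the abstract mentions.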

459

Inpainting of Binary Images Using the Cahn-Hilliard Equation (submitted to IEEE Transactions on Image Processing)  

E-print Network

Andrea Bertozzi, Selim Esedoglu, and Alan Gillette. Abstract: Image inpainting is the filling in of damaged or missing image regions; applications include restoration of degraded text as well as super-resolution of high-contrast images. Index Terms: image inpainting, super-resolution.

Soatto, Stefano

460

Mobile medical image retrieval  

Microsoft Academic Search

Images are an integral part of medical practice for diagnosis, treatment planning, and teaching. Image retrieval has gained in importance, mainly as a research domain, over the past 20 years. Both textual and visual retrieval of images are essential. As mobile devices have become reliable, with functionality equaling that of former desktop clients, mobile computing has…

Samuel Duc; Adrien Depeursinge; Ivan Eggel; Henning Müller

2011-01-01

461

Referential processing: reciprocity and correlates of naming and imaging.  

PubMed

To shed light on the referential processes that underlie mental translation between representations of objects and words, we studied the reciprocity and determinants of naming and imaging reaction times (RT). Ninety-six subjects pressed a key when they had covertly named 248 pictures or imaged to their names. Mean naming and imagery RTs for each item were correlated with one another, and with properties of names, images, and their interconnections suggested by prior research and dual coding theory. Imagery RTs correlated .56 (df = 246) with manual naming RTs and .58 with voicekey naming RTs from prior studies. A factor analysis of the RTs and of 31 item characteristics revealed 7 dimensions. Imagery and naming RTs loaded on a common referential factor that included variables related to both directions of processing (e.g., missing names and missing images). Naming RTs also loaded on a nonverbal-to-verbal factor that included such variables as number of different names, whereas imagery RTs loaded on a verbal-to-nonverbal factor that included such variables as rated consistency of imagery. The other factors were verbal familiarity, verbal complexity, nonverbal familiarity, and nonverbal complexity. The findings confirm the reciprocity of imaging and naming, and their relation to constructs associated with distinct phases of referential processing. PMID:2927314

Paivio, A; Clark, J M; Digdon, N; Bons, T

1989-03-01

462

Parallel-Processing Software for Creating Mosaic Images  

NASA Technical Reports Server (NTRS)

A computer program implements parallel processing for nearly real-time creation of panoramic mosaics of images of terrain acquired by video cameras on an exploratory robotic vehicle (e.g., a Mars rover). Because the original images are typically acquired at various camera positions and orientations, it is necessary to warp the images into the reference frame of the mosaic before stitching them together to create the mosaic. [Also see "Parallel-Processing Software for Correlating Stereo Images," Software Supplement to NASA Tech Briefs, Vol. 31, No. 9 (September 2007) page 26.] The warping algorithm in this computer program reflects the considerations that (1) for every pixel in the desired final mosaic, a good corresponding point must be found in one or more of the original images and (2) for this purpose, one needs a good mathematical model of the cameras and a good correlation of individual pixels with respect to their positions in three dimensions. The desired mosaic is divided into slices, each of which is assigned to one of a number of central processing units (CPUs) operating simultaneously. The results from the CPUs are gathered and placed into the final mosaic. The time taken to create the mosaic depends upon the number of CPUs, the speed of each CPU, and whether a local or a remote data-staging mechanism is used.

Klimeck, Gerhard; Deen, Robert; McCauley, Michael; DeJong, Eric

2008-01-01
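The slice-per-CPU strategy in this abstract (divide the mosaic into slices, render each on a separate CPU, then gather the results) can be sketched with Python's multiprocessing as a stand-in for the original software; the rendering itself is stubbed, since the real warping needs the camera model and source images:

```python
import numpy as np
from multiprocessing import Pool

HEIGHT, WIDTH = 64, 256                          # mosaic dimensions (arbitrary)

def render_slice(bounds):
    """Warp/fill one horizontal slice of the mosaic. Stubbed here as a
    row-index gradient; a real implementation would, for each pixel,
    find the corresponding point in one or more source images."""
    r0, r1 = bounds
    rows = np.arange(r0, r1).reshape(-1, 1)
    return r0, rows * np.ones((1, WIDTH))        # (row offset, pixel block)

def build_mosaic(n_workers=4):
    step = HEIGHT // n_workers
    bounds = [(i * step, (i + 1) * step) for i in range(n_workers)]
    mosaic = np.zeros((HEIGHT, WIDTH))
    with Pool(n_workers) as pool:                # one slice per CPU
        for r0, block in pool.map(render_slice, bounds):
            mosaic[r0:r0 + block.shape[0]] = block   # gather into final mosaic
    return mosaic

if __name__ == "__main__":
    m = build_mosaic()
```

As the abstract notes, wall-clock time for such a scheme depends on the number and speed of the CPUs and on how the source image data are staged to each worker.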

463

Line scan CCD image processing for biomedical application  

NASA Astrophysics Data System (ADS)

Blood samples are frequently analyzed for blood disorders and other diseases in research and clinical applications. Most of the analyses are related to blood cell counts and blood cell sizes. In this paper, a line scan CCD imaging system is developed, based on Texas Instruments' TMS320C6416T (DSP6416), a high-performance digital signal processor, and Altera's EP3C25F324 Field Programmable Gate Array (FPGA). It is used to acquire and process images of blood cells, counting the number of cells and sizing and positioning them. The cell image is captured by a line scan CCD sensor; the digital image data converted by the Analogue Front-End (AFE) are transferred into the FPGA, pre-processed, and then transferred into DSP6416 through a First In First Out (FIFO) interface in the FPGA and the External Memory Interface (EMIF) of DSP6416, where the image data are processed. Experimental results show that this system is useful and efficient.

Lee, Choon-Young; Yan, Lei; Lee, Sang-Ryong

2010-02-01

464

Image processing of metal surface with structured light  

NASA Astrophysics Data System (ADS)

In a structured light vision measurement system, the ideal image of a structured light stripe contains, apart from the black background, only the gray-level information at the position of the stripe. The actual image, however, contains image noise, complex background, and other content that does not belong to the stripe and interferes with the useful information. To extract the stripe center on a metal surface accurately, a new processing method is presented. Adaptive median filtering first performs a preliminary removal of noise, and the noise introduced by the CCD camera and the measurement environment is further removed with a difference-image method. To highlight fine details and enhance the blurred regions between the stripe and the noise, a sharpening algorithm is used that combines the best features of the Laplacian and Sobel operators. Morphological opening and closing operations then compensate for the loss of information. Experimental results show that this method is effective: it both suppresses interference and heightens contrast, which benefits subsequent processing.

Luo, Cong; Feng, Chang; Wang, Congzheng

2014-09-01
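The four-stage pipeline described in the abstract (median filtering, difference image, combined Laplacian/Sobel sharpening, morphological opening and closing) can be sketched with scipy.ndimage as a stand-in for the authors' implementation; the weights, threshold, and synthetic stripe are assumptions:

```python
import numpy as np
from scipy import ndimage

def clean_stripe(img, background):
    # 1) median filtering to remove impulse noise
    den = ndimage.median_filter(img, size=3)
    # 2) difference image: subtract a stripe-free background frame to
    #    suppress sensor and ambient interference
    diff = np.clip(den - background, 0, None)
    # 3) sharpen by combining Laplacian (fine detail) and Sobel (edge
    #    energy) responses with the difference image
    lap = ndimage.laplace(diff)
    sob = np.hypot(ndimage.sobel(diff, axis=0), ndimage.sobel(diff, axis=1))
    sharp = diff - 0.5 * lap + 0.5 * sob
    # 4) morphological opening then closing on a binary mask to remove
    #    speckle and bridge small gaps in the stripe
    mask = sharp > sharp.mean()
    return ndimage.binary_closing(ndimage.binary_opening(mask))

img = np.zeros((32, 32)); img[14:18, :] = 1.0    # synthetic light stripe
bg = np.zeros((32, 32))
mask = clean_stripe(img + 0.05, bg)              # small uniform offset as "noise"
```

In practice the resulting mask would feed a stripe-center extractor (e.g. a per-column centroid), which is the measurement the paper is ultimately after.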

465

Teaching dual-process diagnostic reasoning to doctor of nursing practice students: problem-based learning and the illness script.  

PubMed

Accelerating the development of diagnostic reasoning skills for nurse practitioner students is high on the wish list of many faculty. The purpose of this article is to describe how the teaching strategy of problem-based learning (PBL) that drills the hypothetico-deductive or analytic reasoning process when combined with an assignment that fosters pattern recognition (a nonanalytic process) teaches and reinforces the dual process of diagnostic reasoning. In an online Doctor of Nursing Practice program, four PBL cases that start with the same symptom unfold over 2 weeks. These four cases follow different paths as they unfold leading to different diagnoses. Culminating each PBL case, a unique assignment called an illness script was developed to foster the development of pattern recognition. When combined with hypothetico-deductive reasoning drilled during the PBL case, students experience the dual process approach to diagnostic reasoning used by clinicians. PMID:25350904

Durham, Catherine O; Fowler, Terri; Kennedy, Sally

2014-11-01

466

Study of optical techniques for the Ames unitary wind tunnel: Digital image processing, part 6  

NASA Technical Reports Server (NTRS)

A survey of digital image processing techniques and processing systems for aerodynamic images has been conducted. These images covered many types of flows and were generated by many types of flow diagnostics. These include laser vapor screens, infrared cameras, laser holographic interferometry, Schlieren, and luminescent paints. Some general digital image processing systems, imaging networks, optical sensors, and image computing chips were briefly reviewed. Possible digital imaging network systems for the Ames Unitary Wind Tunnel were explored.

Lee, George

1993-01-01