3D Texture Features Mining for MRI Brain Tumor Identification
NASA Astrophysics Data System (ADS)
Rahim, Mohd Shafry Mohd; Saba, Tanzila; Nayer, Fatima; Syed, Afraz Zahra
2014-03-01
Medical image segmentation is a process to extract regions of interest and to divide an image into its individual meaningful, homogeneous components. These components have a strong relationship with the objects of interest in an image. Medical image segmentation is a mandatory initial step for computer-aided diagnosis and therapy, and it is a challenging task because of the complex nature of medical images. Indeed, successful medical image analysis is heavily dependent on segmentation accuracy. Texture is one of the major features used to identify regions of interest in an image or to classify an object, but 2D texture features yield poor classification results. Hence, this paper presents 3D feature extraction using texture analysis, with an SVM as the segmentation technique in the testing methodology.
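As a hedged illustration of the idea (not the authors' implementation), the sketch below builds simple per-voxel 3D texture features and classifies voxels with an SVM; the volume, labels, and neighbourhood size are placeholders.

```python
# Minimal sketch: classify voxels of a 3D MRI volume with an SVM trained on
# simple local 3D texture statistics (illustrative features, synthetic data).
import numpy as np
from scipy import ndimage
from sklearn.svm import SVC

def texture_features_3d(volume, size=5):
    """Stack per-voxel 3D texture measures: local mean, local variance,
    and gradient magnitude over a size^3 neighbourhood."""
    local_mean = ndimage.uniform_filter(volume, size=size)
    local_var = ndimage.uniform_filter(volume**2, size=size) - local_mean**2
    grad_mag = ndimage.gaussian_gradient_magnitude(volume, sigma=1.0)
    return np.stack([local_mean, local_var, grad_mag], axis=-1)

volume = np.random.rand(32, 64, 64)      # placeholder for a real MRI volume
labels = (volume > 0.95).astype(int)     # placeholder voxel labels (1 = tumor)

feats = texture_features_3d(volume).reshape(-1, 3)
y = labels.ravel()
clf = SVC(kernel="rbf", gamma="scale").fit(feats[::50], y[::50])  # subsample for speed
segmentation = clf.predict(feats).reshape(volume.shape)
```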
Cardiovascular imaging environment: will the future be cloud-based?
Kawel-Boehm, Nadine; Bluemke, David A
2017-07-01
In cardiovascular CT and MR imaging, large datasets have to be stored, post-processed, analyzed and distributed. Besides basic assessment of volume and function in cardiac magnetic resonance imaging, for example, more sophisticated quantitative analysis is requested, requiring specific software. Many institutions cannot afford various types of software or provide the expertise to perform sophisticated analysis. Areas covered: Various cloud services exist for data storage and analysis specifically for cardiovascular CT and MR imaging. Instead of on-site data storage, cloud providers offer flexible storage services on a pay-per-use basis. To avoid purchase and maintenance of specialized software for cardiovascular image analysis, e.g. to assess myocardial iron overload, MR 4D flow and fractional flow reserve, evaluation can be performed with cloud-based software by the consumer, or the complete analysis is performed by the cloud provider. However, challenges to widespread implementation of cloud services include regulatory issues regarding patient privacy and data security. Expert commentary: If patient privacy and data security are guaranteed, cloud imaging is a valuable option to cope with storage of large image datasets and to offer sophisticated cardiovascular image analysis for institutions of all sizes.
Data management in pattern recognition and image processing systems
NASA Technical Reports Server (NTRS)
Zobrist, A. L.; Bryant, N. A.
1976-01-01
Data management considerations are important to any system which handles large volumes of data or where the manipulation of data is technically sophisticated. A particular problem is the introduction of image-formatted files into the mainstream of data processing applications. This report describes a comprehensive system for the manipulation of image, tabular, and graphical data sets which involves conversions between the various data types. A key characteristic is the use of image processing technology to accomplish data management tasks. Because of this, the term 'image-based information system' has been adopted.
High resolution image processing on low-cost microcomputers
NASA Technical Reports Server (NTRS)
Miller, R. L.
1993-01-01
Recent advances in microcomputer technology have resulted in systems that rival the speed, storage, and display capabilities of traditionally larger machines. Low-cost microcomputers can provide a powerful environment for image processing. A new software program which offers sophisticated image display and analysis on IBM-based systems is presented. Designed specifically for a microcomputer, this program provides a wide range of functions normally found only on dedicated graphics systems, and therefore can provide most students, universities and research groups with an affordable computer platform for processing digital images. The processing of AVHRR images within this environment is presented as an example.
Advantages and Disadvantages in Image Processing with Free Software in Radiology.
Mujika, Katrin Muradas; Méndez, Juan Antonio Juanes; de Miguel, Andrés Framiñan
2018-01-15
Currently, there are sophisticated applications that make it possible to visualize medical images and even to manipulate them. These software applications are of great interest, both from a teaching and a radiological perspective. In addition, some of these applications are known as Free Open Source Software because they are free and their source code is freely available, and therefore they can easily be obtained even on personal computers. Two examples of free open source software are Osirix Lite® and 3D Slicer®. However, this last group of free applications has limitations in its use. For the radiological field, manipulating and post-processing images is increasingly important. Consequently, sophisticated computing tools that combine software and hardware to process medical images are needed. In radiology, graphic workstations allow their users to process, review, analyse, communicate and exchange multidimensional digital images acquired with different image-capturing radiological devices. These radiological devices are basically CT (Computerised Tomography), MRI (Magnetic Resonance Imaging), PET (Positron Emission Tomography), etc. Nevertheless, the programs included in these workstations have a high cost which always depends on the software provider and is always subject to its norms and requirements. With this study, we aim to present the advantages and disadvantages of these radiological image visualization systems in the advanced management of radiological studies. We will compare the features of the VITREA2® and AW VolumeShare 5® radiology workstations with free open source software applications like OsiriX® and 3D Slicer®, with examples from specific studies.
Advanced imaging programs: maximizing a multislice CT investment.
Falk, Robert
2008-01-01
Advanced image processing has moved from a luxury to a necessity in the practice of medicine. A hospital's adoption of sophisticated 3D imaging entails several important steps, with many factors to consider in order to be successful. Like any new hospital program, 3D post-processing should be introduced through a strategic planning process that includes administrators, physicians, and technologists to design, implement, and market a program that is scalable, one that minimizes up-front costs while providing top-level service. This article outlines the steps for planning, implementation, and growth of an advanced imaging program.
New method for identifying features of an image on a digital video display
NASA Astrophysics Data System (ADS)
Doyle, Michael D.
1991-04-01
The MetaMap process extends the concept of direct manipulation human-computer interfaces to new limits. Its specific capabilities include the correlation of discrete image elements to relevant text information and the correlation of these image features to other images as well as to program control mechanisms. The correlation is accomplished through reprogramming of both the color map and the image so that discrete image elements comprise unique sets of color indices. This process allows the correlation to be accomplished with very efficient data storage and program execution times. Image databases adapted to this process become object-oriented as a result. Very sophisticated interrelationships can be set up between images, text, and program control mechanisms using this process. An application of this interfacing process to the design of an interactive atlas of medical histology, as well as other possible applications, are described. The MetaMap process is protected by U. S. patent #4
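A minimal sketch of the underlying mechanism, with hypothetical region names and annotations: once each image feature owns a unique set of palette indices, correlating a pixel with its text is a constant-time table lookup.

```python
# Hedged illustration of the MetaMap idea: paint each image feature with its
# own palette index, then resolve clicks via the index. Data are hypothetical.
import numpy as np

indexed_image = np.zeros((100, 100), dtype=np.uint8)
indexed_image[20:60, 20:60] = 1      # feature 1 occupies palette index 1
indexed_image[70:90, 10:40] = 2      # feature 2 occupies palette index 2

feature_info = {
    0: "background",
    1: "epithelial layer - see histology notes, section 3",
    2: "basement membrane - links to electron micrograph",
}

def on_click(x, y):
    """Correlate a clicked pixel with its annotation via the color index."""
    return feature_info[int(indexed_image[y, x])]

print(on_click(30, 30))   # -> text linked to feature 1
```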
MorphoHawk: Geometric-based Software for Manufacturing and More
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keith Arterburn
2001-04-01
Hollywood movies portray facial recognition as a perfected technology, but the reality is that sophisticated computers and algorithmic calculations are far from perfect. In fact, the most sophisticated and successful computer for recognizing faces and other imagery is the human brain, with more than 10 billion nerve cells. Beginning at birth, humans process data and connect optical and sensory experiences, creating an unparalleled accumulation of data for people to associate images with life experiences, emotions and knowledge. Computers are powerful, rapid and tireless, but still cannot compare to the highly sophisticated relational calculations and associations that the human computer can produce in connecting ‘what we see with what we know.’
Electromagnetic Imaging Methods for Nondestructive Evaluation Applications
Deng, Yiming; Liu, Xin
2011-01-01
Electromagnetic nondestructive tests are important and widely used within the field of nondestructive evaluation (NDE). Recent advances in sensing technology, hardware and software development dedicated to imaging and image processing, and material sciences have greatly expanded the application fields, increased the sophistication of system designs, and made the potential of electromagnetic NDE imaging seemingly unlimited. This review provides a comprehensive summary of research work on electromagnetic imaging methods for NDE applications, followed by a summary and discussion of future directions. PMID:22247693
Commercial applications for optical data storage
NASA Astrophysics Data System (ADS)
Tas, Jeroen
1991-03-01
Optical data storage has spurred the market for document imaging systems. These systems are increasingly being used to electronically manage the processing, storage and retrieval of documents. Applications range from straightforward archives to sophisticated workflow management systems. The technology is developing rapidly and within a few years optical imaging facilities will be incorporated in most of the office information systems. This paper gives an overview of the status of the market, the applications and the trends of optical imaging systems.
Applying a visual language for image processing as a graphical teaching tool in medical imaging
NASA Astrophysics Data System (ADS)
Birchman, James J.; Tanimoto, Steven L.; Rowberg, Alan H.; Choi, Hyung-Sik; Kim, Yongmin
1992-05-01
Typical user interaction in image processing is with command line entries, pull-down menus, or text menu selections from a list, and as such is not generally graphical in nature. Although applying these interactive methods to construct more sophisticated algorithms from a series of simple image processing steps may be clear to engineers and programmers, it may not be clear to clinicians. A solution to this problem is to implement a visual programming language using visual representations to express image processing algorithms. Visual representations promote a more natural and rapid understanding of image processing algorithms by providing more visual insight into what the algorithms do than the interactive methods mentioned above can provide. Individuals accustomed to dealing with images will be more likely to understand an algorithm that is represented visually. This is especially true of referring physicians, such as surgeons in an intensive care unit. With the increasing acceptance of picture archiving and communications system (PACS) workstations and the trend toward increasing clinical use of image processing, referring physicians will need to learn more sophisticated concepts than simply image access and display. If the procedures that they perform commonly, such as window width and window level adjustment and image enhancement using unsharp masking, are depicted visually in an interactive environment, it will be easier for them to learn and apply these concepts. The software described in this paper is a visual programming language for image processing which has been implemented on the NeXT computer using NeXTstep user interface development tools and other tools in an object-oriented environment. The concept is based upon the description of a visual language titled 'Visualization of Vision Algorithms' (VIVA). Iconic representations of simple image processing steps are placed onto a workbench screen and connected together into a dataflow path by the user. As the user creates and edits a dataflow path, more complex algorithms can be built on the screen. Once the algorithm is built, it can be executed, its results can be reviewed, and operator parameters can be interactively adjusted until an optimized output is produced. The optimized algorithm can then be saved and added to the system as a new operator. This system has been evaluated as a graphical teaching tool for window width and window level adjustment, image enhancement using unsharp masking, and other techniques.
The Hico Image Processing System: A Web-Accessible Hyperspectral Remote Sensing Toolbox
NASA Astrophysics Data System (ADS)
Harris, A. T., III; Goodman, J.; Justice, B.
2014-12-01
As the quantity of Earth-observation data increases, the use-case for hosting analytical tools in geospatial data centers becomes increasingly attractive. To address this need, HySpeed Computing and Exelis VIS have developed the HICO Image Processing System, a prototype cloud computing system that provides online, on-demand, scalable remote sensing image processing capabilities. The system provides a mechanism for delivering sophisticated image processing analytics and data visualization tools into the hands of a global user community, who will only need a browser and internet connection to perform analysis. Functionality of the HICO Image Processing System is demonstrated using imagery from the Hyperspectral Imager for the Coastal Ocean (HICO), an imaging spectrometer located on the International Space Station (ISS) that is optimized for acquisition of aquatic targets. Example applications include a collection of coastal remote sensing algorithms that are directed at deriving critical information on water and habitat characteristics of our vulnerable coastal environment. The project leverages the ENVI Services Engine as the framework for all image processing tasks, and can readily accommodate the rapid integration of new algorithms, datasets and processing tools.
Technical Note: Detection of gas bubble leakage via correlation of water column multibeam images
NASA Astrophysics Data System (ADS)
Schneider von Deimling, J.; Papenberg, C.
2012-03-01
Hydroacoustic detection of natural gas release from the seafloor has been conducted in the past by using singlebeam echosounders. In contrast, modern multibeam swath mapping systems allow much wider coverage, higher resolution, and offer 3-D spatial correlation. Up to the present, the extremely high data rate hampers water column backscatter investigations, and more sophisticated visualization and processing techniques are needed. Here, we present water column backscatter data acquired with a 50 kHz prototype multibeam system over a period of 75 seconds. The data are displayed both as swath images and in a "re-sorted" singlebeam presentation. Thus, individual and/or groups of gas bubbles rising from the 24 m deep seafloor clearly emerge in the acoustic images, making it possible to estimate rise velocities. A sophisticated processing scheme is introduced to identify those rising gas bubbles in the hydroacoustic data. We apply a cross-correlation technique adapted from particle imaging velocimetry (PIV) to the acoustic backscatter images. Temporal and spatial drift patterns of the bubbles are assessed and are shown to match measured and theoretical rise patterns very well. The application of this processing to our field data gives clear results with respect to unambiguous bubble detection and remote bubble rise velocimetry. The method can identify and exclude the main source of misinterpretations, i.e. fish-mediated echoes. Although image-based cross-correlation techniques are well known in the field of fluid mechanics for high-resolution and non-invasive current flow field analysis, we present the first application of this technique as an acoustic bubble detector.
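The following is a minimal sketch, not the authors' code, of the PIV-style correlation step: a window from one acoustic frame is cross-correlated with the next frame, and the correlation peak offset gives the bubble displacement; the frame interval and pixel scale below are illustrative.

```python
# PIV-style displacement estimation between consecutive backscatter frames.
import numpy as np
from scipy.signal import correlate2d

def displacement(win_t0, win_t1):
    """Return (dy, dx) shift that best aligns win_t0 with win_t1."""
    a = win_t0 - win_t0.mean()
    b = win_t1 - win_t1.mean()
    corr = correlate2d(b, a, mode="same")
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    center = (corr.shape[0] // 2, corr.shape[1] // 2)
    return peak[0] - center[0], peak[1] - center[1]

frame_t0 = np.random.rand(64, 64)                 # placeholder echogram frame
frame_t1 = np.roll(frame_t0, shift=-3, axis=0)    # bubble "risen" by 3 pixels
dy, dx = displacement(frame_t0, frame_t1)
rise_velocity = -dy * 0.05 / 0.5                  # assume 5 cm/px, 0.5 s frame interval
```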
2005-01-01
more legible and to restore its unity [2]. The need to retouch the image in an unobtrusive way extended naturally from paintings to photography and... to software tools that allow a sophisticated but mostly manual process [7]. In this article we introduce a novel algorithm for automatic digital... This is done only for a didactic purpose, since our algorithm was devised for 2D, and there are other techniques (such as splines) that might yield
Applications in Digital Image Processing
ERIC Educational Resources Information Center
Silverman, Jason; Rosen, Gail L.; Essinger, Steve
2013-01-01
Students are immersed in a mathematically intensive, technological world. They engage daily with iPods, HDTVs, and smartphones--technological devices that rely on sophisticated but accessible mathematical ideas. In this article, the authors provide an overview of four lab-type activities that have been used successfully in high school mathematics…
Document Examination: Applications of Image Processing Systems.
Kopainsky, B
1989-12-01
Dealing with images is a familiar business for an expert in questioned documents: microscopic, photographic, infrared, and other optical techniques generate images containing the information he or she is looking for. A recent method for extracting most of this information is digital image processing, ranging from simple contrast and contour enhancement to the advanced restoration of blurred texts. When combined with a sophisticated physical imaging system, an image processing system has proven to be a powerful and fast tool for routine non-destructive scanning of suspect documents. This article reviews frequent applications, comprising techniques to increase legibility, two-dimensional spectroscopy (ink discrimination, alterations, erased entries, etc.), comparison techniques (stamps, typescript letters, photo substitution), and densitometry. Computerized comparison of handwriting is not included. Copyright © 1989 Central Police University.
Visual Communications And Image Processing
NASA Astrophysics Data System (ADS)
Hsing, T. Russell; Tzou, Kou-Hu
1989-07-01
This special issue on Visual Communications and Image Processing contains 14 papers that cover a wide spectrum in this fast growing area. For the past few decades, researchers and scientists have devoted their efforts to these fields. Through this long-lasting devotion, we witness today the growing popularity of low-bit-rate video as a convenient tool for visual communication. We also see the integration of high-quality video into broadband digital networks. Today, with more sophisticated processing, clearer and sharper pictures are being restored from blurring and noise. Also, thanks to the advances in digital image processing, even a PC-based system can be built to recognize highly complicated Chinese characters at the speed of 300 characters per minute. This special issue can be viewed as a milestone of visual communications and image processing on its journey to eternity. It presents some overviews on advanced topics as well as some new development in specific subjects.
Computer-aided light sheet flow visualization using photogrammetry
NASA Technical Reports Server (NTRS)
Stacy, Kathryn; Severance, Kurt; Childers, Brooks A.
1994-01-01
A computer-aided flow visualization process has been developed to analyze video images acquired from rotating and translating light sheet visualization systems. The computer process integrates a mathematical model for image reconstruction, advanced computer graphics concepts, and digital image processing to provide a quantitative and a visual analysis capability. The image reconstruction model, based on photogrammetry, uses knowledge of the camera and light sheet locations and orientations to project two-dimensional light sheet video images into three-dimensional space. A sophisticated computer visualization package, commonly used to analyze computational fluid dynamics (CFD) results, was chosen to interactively display the reconstructed light sheet images with the numerical surface geometry for the model or aircraft under study. The photogrammetric reconstruction technique and the image processing and computer graphics techniques and equipment are described. Results of the computer-aided process applied to both a wind tunnel translating light sheet experiment and an in-flight rotating light sheet experiment are presented. The capability to compare reconstructed experimental light sheet images with CFD solutions in the same graphics environment is also demonstrated.
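As a simplified sketch of the photogrammetric reconstruction step (a pinhole camera is assumed, and all numbers are illustrative rather than from the experiment), each light sheet pixel is back-projected to a ray and intersected with the known light sheet plane:

```python
# Back-project a pixel through a pinhole camera and intersect the ray with
# the light sheet plane to place the pixel in 3D space.
import numpy as np

def pixel_to_3d(pixel, cam_pos, cam_rot, focal_px, principal_pt, plane_pt, plane_n):
    """Ray-plane intersection for a single light sheet image pixel."""
    u, v = pixel
    ray_cam = np.array([(u - principal_pt[0]) / focal_px,
                        (v - principal_pt[1]) / focal_px,
                        1.0])
    ray_world = cam_rot @ ray_cam                       # rotate into world frame
    t = plane_n @ (plane_pt - cam_pos) / (plane_n @ ray_world)
    return cam_pos + t * ray_world                      # 3D point on the sheet

point = pixel_to_3d(
    pixel=(320, 240), cam_pos=np.zeros(3), cam_rot=np.eye(3),
    focal_px=800.0, principal_pt=(320, 240),
    plane_pt=np.array([0.0, 0.0, 2.0]),                 # light sheet 2 m ahead
    plane_n=np.array([0.0, 0.0, 1.0]),
)
```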
Computer-Aided Light Sheet Flow Visualization
NASA Technical Reports Server (NTRS)
Stacy, Kathryn; Severance, Kurt; Childers, Brooks A.
1993-01-01
A computer-aided flow visualization process has been developed to analyze video images acquired from rotating and translating light sheet visualization systems. The computer process integrates a mathematical model for image reconstruction, advanced computer graphics concepts, and digital image processing to provide a quantitative and visual analysis capability. The image reconstruction model, based on photogrammetry, uses knowledge of the camera and light sheet locations and orientations to project two-dimensional light sheet video images into three-dimensional space. A sophisticated computer visualization package, commonly used to analyze computational fluid dynamics (CFD) data sets, was chosen to interactively display the reconstructed light sheet images, along with the numerical surface geometry for the model or aircraft under study. A description is provided of the photogrammetric reconstruction technique, and the image processing and computer graphics techniques and equipment. Results of the computer aided process applied to both a wind tunnel translating light sheet experiment and an in-flight rotating light sheet experiment are presented. The capability to compare reconstructed experimental light sheet images and CFD solutions in the same graphics environment is also demonstrated.
NASA Astrophysics Data System (ADS)
Mishra, Deependra K.; Umbaugh, Scott E.; Lama, Norsang; Dahal, Rohini; Marino, Dominic J.; Sackman, Joseph
2016-09-01
CVIPtools is a software package for the exploration of computer vision and image processing, developed in the Computer Vision and Image Processing Laboratory at Southern Illinois University Edwardsville. CVIPtools is available in three variants - a) the CVIPtools Graphical User Interface, b) the CVIPtools C library and c) the CVIPtools MATLAB toolbox - which makes it accessible to a variety of different users. It offers students, faculty, researchers and any user a free and easy way to explore computer vision and image processing techniques. Many functions have been implemented and are updated on a regular basis, and the library has reached a level of sophistication that makes it suitable for both educational and research purposes. In this paper, a detailed list of the functions available in the CVIPtools MATLAB toolbox is presented, along with how these functions can be used in image analysis and computer vision applications. The CVIPtools MATLAB toolbox allows the user to gain practical experience to better understand underlying theoretical problems in image processing and pattern recognition. As an example application, the algorithm for the automatic creation of masks for veterinary thermographic images is presented.
Real-Time Processing of Pressure-Sensitive Paint Images
2006-12-01
intermediate or final data to the hard disk in 3D grid format. In addition to the pressure or pressure coefficient at every grid point, the saved file may...occurs. Nevertheless, to achieve an accurate mapping between 2D image coordinates and 3D spatial coordinates, additional parameters must be introduced. A...improved mapping between the 2D and 3D coordinates. In a more sophisticated approach, additional terms corresponding to specific deformation modes
Technical Note: Detection of gas bubble leakage via correlation of water column multibeam images
NASA Astrophysics Data System (ADS)
Schneider von Deimling, J.; Papenberg, C.
2011-07-01
Hydroacoustic detection of natural gas release from the seafloor has been conducted in the past by using singlebeam echosounders. In contrast, modern multibeam swath mapping systems allow much wider coverage, higher resolution, and offer 3-D spatial correlation. However, up to the present, the extremely high data rate hampers water column backscatter investigations. More sophisticated visualization and processing techniques for water column backscatter analysis are still under development. We here present such water column backscattering data gathered with a 50 kHz prototype multibeam system. The data are presented in video frames grabbed over 75 s and in a "re-sorted" singlebeam presentation. Thus, individual gas bubbles rising from the 24 m deep seafloor clearly emerge in the acoustic images, and rise velocities can be determined. A sophisticated processing scheme is introduced to identify those rising gas bubbles in the hydroacoustic data. It applies a cross-correlation technique similar to that used in particle imaging velocimetry (PIV) to the acoustic backscatter images. Temporal and spatial drift patterns of the bubbles are assessed and match measured and theoretical rise patterns very well. The application of this processing scheme to our field data gives impressive results with respect to unambiguous bubble detection and remote bubble rise velocimetry. The method can identify and exclude the main driver for misinterpretations, i.e. fish-mediated echoes. Even though image-based cross-correlation techniques are well known in the field of fluid mechanics for high-resolution and non-invasive current flow field analysis, this technique has never before been applied in the proposed sense as an acoustic bubble detector.
Single image super-resolution reconstruction algorithm based on edge selection
NASA Astrophysics Data System (ADS)
Zhang, Yaolan; Liu, Yijun
2017-05-01
Super-resolution (SR) has become more important because it can generate high-quality high-resolution (HR) images from low-resolution (LR) input images. At present, a lot of work concentrates on developing sophisticated image priors to improve image quality, while paying much less attention to estimating and incorporating the blur model, which can also impact the reconstruction results. We present a new reconstruction method based on edge selection. This method takes full account of the factors that affect blur kernel estimation and accurately estimates the blur process. Compared with state-of-the-art methods, our method achieves comparable performance.
Synchrotron radiation microtomography of Taylor bubbles in capillary two-phase flow
NASA Astrophysics Data System (ADS)
Boden, Stephan; dos Santos Rolo, Tomy; Baumbach, Tilo; Hampel, Uwe
2014-07-01
We report on a study to measure the three-dimensional shape of Taylor bubbles in capillaries using synchrotron radiation in conjunction with ultrafast radiographic imaging. Moving Taylor bubbles in 2-mm round and square capillaries were radiographically scanned with an ultrahigh frame rate of up to 36,000 fps and 5.6-µm pixel separation. Consecutive images were properly processed to yield 2D transmission radiographs of high contrast-to-noise ratio. Application of 3D tomographic image reconstruction disclosed the 3D bubble shape. The results provide a reference database for the development of sophisticated interface-resolving CFD computations.
CellAnimation: an open source MATLAB framework for microscopy assays.
Georgescu, Walter; Wikswo, John P; Quaranta, Vito
2012-01-01
Advances in microscopy technology have led to the creation of high-throughput microscopes that are capable of generating several hundred gigabytes of images in a few days. Analyzing such a wealth of data manually is nearly impossible and requires an automated approach. There are at present a number of open-source and commercial software packages that allow the user to apply algorithms of different degrees of sophistication to the images and extract desired metrics. However, the types of metrics that can be extracted are severely limited by the specific image processing algorithms that the application implements, and by the expertise of the user. In most commercial software, code unavailability prevents implementation by the end user of newly developed algorithms better suited for a particular type of imaging assay. While it is possible to implement new algorithms in open-source software, rewiring an image processing application requires a high degree of expertise. To obviate these limitations, we have developed an open-source high-throughput application that allows implementation of different biological assays, such as cell tracking or ancestry recording, through the use of small, relatively simple image processing modules connected into sophisticated imaging pipelines. By connecting modules, non-expert users can apply the particular combination of well-established and novel algorithms developed by us and others that is best suited for each individual assay type. In addition, our data exploration and visualization modules make it easy to discover or select specific cell phenotypes from a heterogeneous population. CellAnimation is distributed under the Creative Commons Attribution-NonCommercial 3.0 Unported license (http://creativecommons.org/licenses/by-nc/3.0/). CellAnimation source code and documentation may be downloaded from www.vanderbilt.edu/viibre/software/documents/CellAnimation.zip. Sample data are available at www.vanderbilt.edu/viibre/software/documents/movies.zip. walter.georgescu@vanderbilt.edu Supplementary data are available at Bioinformatics online.
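A hedged sketch of the module-chaining idea, not CellAnimation's actual API: small single-purpose modules are composed into an imaging pipeline, so swapping algorithms means swapping list entries.

```python
# Toy pipeline of composable image processing modules (hypothetical names).
import numpy as np
from scipy import ndimage

def smooth(img):    return ndimage.gaussian_filter(img, sigma=2)
def threshold(img): return img > img.mean()
def label(img):     return ndimage.label(img)[0]

def pipeline(img, modules):
    for module in modules:   # each module consumes the previous module's output
        img = module(img)
    return img

cells = pipeline(np.random.rand(128, 128), [smooth, threshold, label])
```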
Fast automated analysis of strong gravitational lenses with convolutional neural networks.
Hezaveh, Yashar D; Levasseur, Laurence Perreault; Marshall, Philip J
2017-08-30
Quantifying image distortions caused by strong gravitational lensing-the formation of multiple images of distant sources due to the deflection of their light by the gravity of intervening structures-and estimating the corresponding matter distribution of these structures (the 'gravitational lens') has primarily been performed using maximum likelihood modelling of observations. This procedure is typically time- and resource-consuming, requiring sophisticated lensing codes, several data preparation steps, and finding the maximum likelihood model parameters in a computationally expensive process with downhill optimizers. Accurate analysis of a single gravitational lens can take up to a few weeks and requires expert knowledge of the physical processes and methods involved. Tens of thousands of new lenses are expected to be discovered with the upcoming generation of ground and space surveys. Here we report the use of deep convolutional neural networks to estimate lensing parameters in an extremely fast and automated way, circumventing the difficulties that are faced by maximum likelihood methods. We also show that the removal of lens light can be made fast and automated using independent component analysis of multi-filter imaging data. Our networks can recover the parameters of the 'singular isothermal ellipsoid' density profile, which is commonly used to model strong lensing systems, with an accuracy comparable to the uncertainties of sophisticated models but about ten million times faster: 100 systems in approximately one second on a single graphics processing unit. These networks can provide a way for non-experts to obtain estimates of lensing parameters for large samples of data.
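As a toy sketch of the approach (not the authors' architecture or training setup), a small convolutional network can regress singular isothermal ellipsoid parameters directly from a lens image; once trained on simulated lenses, inference is a single forward pass.

```python
# Toy CNN that regresses lens parameters (e.g. Einstein radius, ellipticity
# components, centre) from an image. Layer sizes are illustrative.
import torch
import torch.nn as nn

class LensNet(nn.Module):
    def __init__(self, n_params=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, n_params)  # one output per lens parameter

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = LensNet()
images = torch.randn(8, 1, 96, 96)   # placeholder for simulated lens images
params = model(images)               # fast amortized parameter estimates
```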
1981-03-13
ABSTRACT (continued): ...in concert with a sophisticated detector has... and New York, 1969. Whalen, M.F., L.J. O'Brien, and A.N. Mucciardi, "Application of Adaptive Learning Networks for the Characterization of Two
Image segmentation using hidden Markov Gauss mixture models.
Pyun, Kyungsuk; Lim, Johan; Won, Chee Sun; Gray, Robert M
2007-07-01
Image segmentation is an important tool in image processing and can serve as an efficient front end to sophisticated algorithms and thereby simplify subsequent processing. We develop a multiclass image segmentation method using hidden Markov Gauss mixture models (HMGMMs) and provide examples of segmentation of aerial images and textures. HMGMMs incorporate supervised learning, fitting the observation probability distribution given each class by a Gauss mixture estimated using vector quantization with a minimum discrimination information (MDI) distortion. We formulate the image segmentation problem using a maximum a posteriori criterion and find the hidden states that maximize the posterior density given the observation. We estimate both the hidden Markov parameters and hidden states using a stochastic expectation-maximization algorithm. Our results demonstrate that HMGMMs provide better classification in terms of Bayes risk and spatial homogeneity of the classified objects than do several popular methods, including classification and regression trees, learning vector quantization, causal hidden Markov models (HMMs), and multiresolution HMMs. The computational load of HMGMMs is similar to that of causal HMMs.
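A minimal sketch of the Gauss mixture portion of the method, with the hidden Markov spatial prior omitted and all training data synthetic: one mixture is fitted per class, and each feature vector receives the maximum a posteriori label.

```python
# Per-class Gauss mixtures with MAP labeling (spatial HMM prior omitted).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Placeholder training features (e.g. block intensities) for two classes.
train = {0: rng.normal(0.2, 0.05, (500, 4)), 1: rng.normal(0.7, 0.1, (500, 4))}
priors = {c: 0.5 for c in train}

models = {c: GaussianMixture(n_components=3).fit(x) for c, x in train.items()}

def classify(features):
    """MAP label: log p(x|class) + log prior, maximized over classes."""
    scores = np.stack([models[c].score_samples(features) + np.log(priors[c])
                       for c in sorted(models)], axis=1)
    return np.argmax(scores, axis=1)

test = rng.normal(0.7, 0.1, (10, 4))
print(classify(test))     # mostly class 1
```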
The interactive digital video interface
NASA Technical Reports Server (NTRS)
Doyle, Michael D.
1989-01-01
A frequent complaint in the computer-oriented trade journals is that current hardware technology is progressing so quickly that software developers cannot keep up. An example of this phenomenon can be seen in the field of microcomputer graphics. To exploit the advantages of new mechanisms of information storage and retrieval, new approaches must be made towards incorporating existing programs as well as developing entirely new applications. A particular area of need is the correlation of discrete image elements to textual information. The interactive digital video (IDV) interface embodies a new concept in software design which addresses these needs. The IDV interface is a patented, device- and language-independent process for identifying image features on a digital video display which allows a number of different processes to be keyed to that identification. Its capabilities include the correlation of discrete image elements to relevant text information and the correlation of these image features to other images as well as to program control mechanisms. Sophisticated interrelationships can be set up between images, text, and program control mechanisms.
Development of a fusion approach selection tool
NASA Astrophysics Data System (ADS)
Pohl, C.; Zeng, Y.
2015-06-01
During the last decades, the number and quality of available remote sensing satellite sensors for Earth observation have grown significantly. The amount of available multi-sensor images, along with their increased spatial and spectral resolution, provides new challenges to Earth scientists. With a Fusion Approach Selection Tool (FAST), the remote sensing community would obtain access to an optimized and improved image processing technology. Remote sensing image fusion is a means to produce images containing information that is not inherent in any single image alone. In the meantime, the user has access to sophisticated commercial image fusion techniques, plus the option to tune the parameters of each individual technique to match the anticipated application. This leaves the operator with an uncountable number of options for combining remote sensing images, not to mention the selection of the appropriate images, resolution and bands. Image fusion can be a machine- and time-consuming endeavour. In addition, it requires knowledge about remote sensing, image fusion, digital image processing and the application. FAST shall provide the user with a quick overview of processing flows to choose from to reach the target. FAST will ask for available images, application parameters and desired information, and process this input to come out with a workflow to quickly obtain the best results. It will optimize data and image fusion techniques. It provides an overview of the possible results, from which the user can choose the best. FAST will enable even inexperienced users to use advanced processing methods to maximize the benefit of multi-sensor image exploitation.
MIDAS - ESO's new image processing system
NASA Astrophysics Data System (ADS)
Banse, K.; Crane, P.; Grosbol, P.; Middleburg, F.; Ounnas, C.; Ponz, D.; Waldthausen, H.
1983-03-01
The Munich Image Data Analysis System (MIDAS) is an image processing system whose heart is a pair of VAX 11/780 computers linked together via DECnet. One of these computers, VAX-A, is equipped with 3.5 Mbytes of memory, 1.2 Gbytes of disk storage, and two tape drives with 800/1600 bpi density. The other computer, VAX-B, has 4.0 Mbytes of memory, 688 Mbytes of disk storage, and one tape drive with 1600/6250 bpi density. MIDAS is a command-driven system geared toward the interactive user. The type and number of parameters in a command depend on the particular command invoked. MIDAS is a highly modular system that provides building blocks for undertaking more sophisticated applications. Presently, 175 commands are available. These include interactive modification of the color-lookup table to enhance various image features, and interactive extraction of subimages.
Fast automated analysis of strong gravitational lenses with convolutional neural networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hezaveh, Yashar D.; Levasseur, Laurence Perreault; Marshall, Philip J.
Quantifying image distortions caused by strong gravitational lensing—the formation of multiple images of distant sources due to the deflection of their light by the gravity of intervening structures—and estimating the corresponding matter distribution of these structures (the ‘gravitational lens’) has primarily been performed using maximum likelihood modelling of observations. This procedure is typically time- and resource-consuming, requiring sophisticated lensing codes, several data preparation steps, and finding the maximum likelihood model parameters in a computationally expensive process with downhill optimizers. Accurate analysis of a single gravitational lens can take up to a few weeks and requires expert knowledge of the physical processes and methods involved. Tens of thousands of new lenses are expected to be discovered with the upcoming generation of ground and space surveys. We report the use of deep convolutional neural networks to estimate lensing parameters in an extremely fast and automated way, circumventing the difficulties that are faced by maximum likelihood methods. We also show that the removal of lens light can be made fast and automated using independent component analysis of multi-filter imaging data. Our networks can recover the parameters of the ‘singular isothermal ellipsoid’ density profile, which is commonly used to model strong lensing systems, with an accuracy comparable to the uncertainties of sophisticated models but about ten million times faster: 100 systems in approximately one second on a single graphics processing unit. These networks can provide a way for non-experts to obtain estimates of lensing parameters for large samples of data.
Software organization for a prolog-based prototyping system for machine vision
NASA Astrophysics Data System (ADS)
Jones, Andrew C.; Hack, Ralf; Batchelor, Bruce G.
1996-11-01
We describe PIP (Prolog image processing), a prototype system for interactive image processing using Prolog, implemented on an Apple Macintosh computer. PIP is the latest in a series of products whose implementation the third author has been involved in, under the collective title Prolog+. PIP differs from our previous systems in two particularly important respects. The first is that, whereas we previously required dedicated image processing hardware, the present system implements image processing routines in software. The second difference is that our present system is hierarchical in structure, where the top level of the hierarchy emulates Prolog+, but there is a flexible infrastructure which supports more sophisticated image manipulation that we will be able to exploit in due course. We discuss the impact of the Apple Macintosh operating system upon the implementation of the image processing functions, and the interface between these functions and the Prolog system. We also explain how the existing set of Prolog+ commands has been implemented. PIP is now nearing maturity, and we will make a version of it generally available in the near future. However, although the present version of PIP constitutes a complete image processing tool, there are a number of ways in which we intend to enhance future versions, with a view to added flexibility and efficiency; we discuss these ideas briefly near the end of the present paper.
PCA-based spatially adaptive denoising of CFA images for single-sensor digital cameras.
Zheng, Lei; Lukac, Rastislav; Wu, Xiaolin; Zhang, David
2009-04-01
Single-sensor digital color cameras use a process called color demosaicking to produce full color images from the data captured by a color filter array (CFA). The quality of demosaicked images is degraded by the sensor noise introduced during image acquisition. The conventional solution to combating CFA sensor noise is demosaicking first, followed by separate denoising. This strategy generates many noise-caused color artifacts in the demosaicking process, which are hard to remove in the denoising process. Few denoising schemes that work directly on the CFA images have been presented because of the difficulties arising from the red, green and blue interlaced mosaic pattern, yet a well-designed "denoising first and demosaicking later" scheme can have advantages such as fewer noise-caused color artifacts and cost-effective implementation. This paper presents a principal component analysis (PCA)-based spatially adaptive denoising algorithm, which works directly on the CFA data using a supporting window to analyze the local image statistics. By exploiting the spatial and spectral correlations existing in the CFA image, the proposed method can effectively suppress noise while preserving color edges and details. Experiments using both simulated and real CFA images indicate that the proposed scheme outperforms many existing approaches, including sophisticated demosaicking and denoising schemes, in terms of both objective measurement and visual evaluation.
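The following is a heavily simplified sketch of patch-based PCA denoising on grayscale data; the CFA-specific mosaic handling and spatially adaptive windowing of the paper are omitted, and the noise level is assumed known.

```python
# Patch-PCA denoising: project patches onto principal components and apply
# a Wiener-like shrinkage per component (toy data, known noise variance).
import numpy as np

def pca_denoise_patches(patches, noise_var):
    """patches: (n, d) rows of vectorized local patches."""
    mean = patches.mean(axis=0)
    x = patches - mean
    cov = x.T @ x / len(x)
    eigval, eigvec = np.linalg.eigh(cov)
    coeffs = x @ eigvec
    signal_var = np.maximum(eigval - noise_var, 0.0)
    shrink = signal_var / (signal_var + noise_var + 1e-12)  # per-component gain
    return coeffs * shrink @ eigvec.T + mean

rng = np.random.default_rng(1)
clean = np.tile(np.linspace(0, 1, 16), (1000, 1))   # flat toy "patches"
noisy = clean + rng.normal(0, 0.1, clean.shape)
denoised = pca_denoise_patches(noisy, noise_var=0.01)
```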
ERIC Educational Resources Information Center
Moraes, Edgar P.; da Silva, Nilbert S. A.; de Morais, Camilo de L. M.; das Neves, Luiz S.; de Lima, Kassio M. G.
2014-01-01
The flame test is a classical analytical method that is often used to teach students how to identify specific metals. However, some universities in developing countries have difficulties acquiring the sophisticated instrumentation needed to demonstrate how to identify and quantify metals. In this context, a method was developed based on the flame…
SVM Pixel Classification on Colour Image Segmentation
NASA Astrophysics Data System (ADS)
Barui, Subhrajit; Latha, S.; Samiappan, Dhanalakshmi; Muthu, P.
2018-04-01
The aim of image segmentation is to simplify the representation of an image, with the help of cluster pixels, into something meaningful to analyze. Segmentation is typically used to locate boundaries and curves in an image, and more precisely to label every pixel so that each pixel has an independent identity. SVM pixel classification for colour image segmentation is the topic highlighted in this paper. It has useful applications in concept-based image retrieval, machine vision, medical imaging and object detection. The process is accomplished step by step. First, we need to recognize the type of colour and texture used as input to the SVM classifier. These inputs are extracted via a local spatial similarity measure model and a steerable filter, also known as a Gabor filter. The classifier is then trained using FCM (Fuzzy C-Means). Both the pixel-level information of the image and the ability of the SVM classifier are combined by a sophisticated algorithm to form the final image. The method yields a well-developed segmented image, with increased quality and faster processing compared with the segmentation methods proposed earlier. One of the latest application results is the Light L16 camera.
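A rough sketch of such a pipeline (parameter choices are illustrative, and the FCM training step is replaced here by placeholder labels): Gabor filter responses serve as per-pixel features for an SVM classifier.

```python
# Gabor filter-bank features plus an SVM for per-pixel classification.
import numpy as np
from skimage.filters import gabor
from sklearn.svm import SVC

def pixel_features(gray):
    """Stack Gabor magnitude responses at a few orientations."""
    responses = []
    for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
        real, imag = gabor(gray, frequency=0.2, theta=theta)
        responses.append(np.hypot(real, imag))
    return np.stack(responses, axis=-1).reshape(-1, len(responses))

gray = np.random.rand(64, 64)                  # placeholder image
feats = pixel_features(gray)
labels = (gray > 0.5).astype(int).ravel()      # placeholder training labels
clf = SVC().fit(feats[::20], labels[::20])     # subsample for speed
segmented = clf.predict(feats).reshape(gray.shape)
```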
Automated visual imaging interface for the plant floor
NASA Astrophysics Data System (ADS)
Wutke, John R.
1991-03-01
The paper will provide an overview of the challenges facing a user of automated visual imaging ("AVI") machines and the philosophies that should be employed in designing them. As manufacturing tools and equipment become more sophisticated, it is increasingly difficult to maintain an efficient interaction between the operator and machine. The typical user of an AVI machine in a production environment is technically unsophisticated. Also, operator and machine ergonomics are often a neglected or poorly addressed part of an efficient manufacturing process. This paper presents a number of man-machine interface design techniques and philosophies that effectively solve these problems.
Raster Scan Computer Image Generation (CIG) System Based On Refresh Memory
NASA Astrophysics Data System (ADS)
Dichter, W.; Doris, K.; Conkling, C.
1982-06-01
A full color, Computer Image Generation (CIG) raster visual system has been developed which provides a high level of training sophistication by utilizing advanced semiconductor technology and innovative hardware and firmware techniques. Double buffered refresh memory and efficient algorithms eliminate the problem of conventional raster line ordering by allowing the generated image to be stored in a random fashion. Modular design techniques and simplified architecture provide significant advantages in reduced system cost, standardization of parts, and high reliability. The major system components are a general purpose computer to perform interfacing and data base functions; a geometric processor to define the instantaneous scene image; a display generator to convert the image to a video signal; an illumination control unit which provides final image processing; and a CRT monitor for display of the completed image. Additional optional enhancements include texture generators, increased edge and occultation capability, curved surface shading, and data base extensions.
Single-Scale Fusion: An Effective Approach to Merging Images.
Ancuti, Codruta O; Ancuti, Cosmin; De Vleeschouwer, Christophe; Bovik, Alan C
2017-01-01
Due to its robustness and effectiveness, multi-scale fusion (MSF) based on the Laplacian pyramid decomposition has emerged as a popular technique that has shown utility in many applications. Guided by several intuitive measures (weight maps), the MSF process is versatile and straightforward to implement. However, the number of pyramid levels increases with the image size, which implies sophisticated data management and memory accesses, as well as additional computations. Here, we introduce a simplified formulation that reduces MSF to only a single-level process. Starting from the MSF decomposition, we explain both mathematically and intuitively (visually) a way to simplify the classical MSF approach with minimal loss of information. The resulting single-scale fusion (SSF) solution is a close approximation of the MSF process that eliminates important redundant computations. It also provides insights regarding why MSF is so effective. While our simplified expression is derived in the context of high dynamic range imaging, we show its generality on several well-known fusion-based applications, such as image compositing, extended depth of field, medical imaging, and blending thermal (infrared) images with visible light. Besides visual validation, quantitative evaluations demonstrate that our SSF strategy is able to yield results that are highly competitive with traditional MSF approaches.
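As a bare-bones illustration of weight-map fusion, the starting point that both MSF and SSF refine to avoid seams (the well-exposedness weights below are illustrative, not the paper's measures):

```python
# Naive per-pixel weighted fusion of two exposures.
import numpy as np

def fuse(images, weights):
    w = np.stack(weights) + 1e-12
    w /= w.sum(axis=0)                      # normalize weight maps per pixel
    return (np.stack(images) * w).sum(axis=0)

under = np.clip(np.random.rand(64, 64) * 0.4, 0, 1)   # placeholder exposures
over = np.clip(under + 0.5, 0, 1)
weights = [np.exp(-((im - 0.5) ** 2) / 0.08) for im in (under, over)]
fused = fuse([under, over], weights)
```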
Space Spurred Computer Graphics
NASA Technical Reports Server (NTRS)
1983-01-01
Dicomed Corporation was asked by NASA in the early 1970s to develop processing capabilities for recording images sent from Mars by Viking spacecraft. The company produced a film recorder which increased the intensity levels and the capability for color recording. This development led to a strong technology base resulting in sophisticated computer graphics equipment. Dicomed systems are used to record CAD (computer aided design) and CAM (computer aided manufacturing) equipment, to update maps and produce computer generated animation.
Radiomics: Images Are More than Pictures, They Are Data
Kinahan, Paul E.; Hricak, Hedvig
2016-01-01
In the past decade, the field of medical image analysis has grown exponentially, with an increased number of pattern recognition tools and an increase in data set sizes. These advances have facilitated the development of processes for high-throughput extraction of quantitative features that result in the conversion of images into mineable data and the subsequent analysis of these data for decision support; this practice is termed radiomics. This is in contrast to the traditional practice of treating medical images as pictures intended solely for visual interpretation. Radiomic data contain first-, second-, and higher-order statistics. These data are combined with other patient data and are mined with sophisticated bioinformatics tools to develop models that may potentially improve diagnostic, prognostic, and predictive accuracy. Because radiomics analyses are intended to be conducted with standard of care images, it is conceivable that conversion of digital images to mineable data will eventually become routine practice. This report describes the process of radiomics, its challenges, and its potential power to facilitate better clinical decision making, particularly in the care of patients with cancer. PMID:26579733
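As a small example of the first-order statistics mentioned above, computed over a segmented region of interest (the feature set, names, and data are illustrative):

```python
# First-order radiomic features over a region of interest.
import numpy as np
from scipy import stats

def first_order_features(image, mask):
    roi = image[mask > 0].astype(float)
    return {
        "mean": roi.mean(),
        "std": roi.std(),
        "skewness": stats.skew(roi),
        "kurtosis": stats.kurtosis(roi),
        "entropy": stats.entropy(np.histogram(roi, bins=32)[0] + 1e-12),
    }

image = np.random.rand(128, 128)     # placeholder scan
mask = np.zeros_like(image)
mask[40:80, 40:80] = 1               # placeholder tumor segmentation
print(first_order_features(image, mask))
```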
Imaging in Central Nervous System Drug Discovery.
Gunn, Roger N; Rabiner, Eugenii A
2017-01-01
The discovery and development of central nervous system (CNS) drugs is an extremely challenging process requiring large resources, long timelines, and high associated costs. The high rate of failure leads to high levels of risk. Over the past couple of decades, PET imaging has become a central component of the CNS drug-development process, enabling decision-making in phase I studies, where early discharge of risk provides increased confidence to progress a candidate to more costly later-phase testing at the right dose level, or alternatively to kill a compound through failure to meet key criteria. The so-called "3 pillars" of drug survival, namely tissue exposure, target engagement, and pharmacologic activity, are particularly well suited for evaluation by PET imaging. This review introduces the process of CNS drug development before considering how PET imaging of the "3 pillars" has advanced to provide valuable tools for decision-making on the critical path of CNS drug development. Finally, we review the advances in PET science of biomarker development and analysis that enable sophisticated drug-development studies in man. Copyright © 2017 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Zheng; Ukida, H.; Ramuhalli, Pradeep
2010-06-05
Imaging- and vision-based techniques play an important role in industrial inspection. The sophistication of the techniques assures high-quality performance of the manufacturing process through precise positioning, online monitoring, and real-time classification. Advanced systems incorporating multiple imaging and/or vision modalities provide robust solutions to complex situations and problems in industrial applications. A diverse range of industries, including aerospace, automotive, electronics, pharmaceutical, biomedical, semiconductor, and food/beverage, have benefited from recent advances in multi-modal imaging, data fusion, and computer vision technologies. Many of the open problems in this context are in the general area of image analysis methodologies (preferably in an automated fashion). This editorial article introduces a special issue of this journal highlighting recent advances and demonstrating the successful applications of integrated imaging and vision technologies in industrial inspection.
Flow-gated radial phase-contrast imaging in the presence of weak flow.
Peng, Hsu-Hsia; Huang, Teng-Yi; Wang, Fu-Nien; Chung, Hsiao-Wen
2013-01-01
To implement a flow-gating method to acquire phase-contrast (PC) images of carotid arteries without use of an electrocardiography (ECG) signal to synchronize the acquisition of imaging data with pulsatile arterial flow. The flow-gating method was realized through radial scanning and sophisticated post-processing, including downsampling, complex difference, and correlation analysis, to improve the evaluation of flow-gating times in radial phase-contrast scans. Flow-related parameters, including mean velocity, mean flow rate, and flow volume, agreed quantitatively with conventional ECG-gated imaging (R = 0.92-0.96, n = 9), demonstrating that the proposed method is highly feasible. The radial flow-gating PC imaging method is applicable in carotid arteries. The proposed flow-gating method can potentially avoid the setting up of ECG-related equipment for brain imaging. This technique has potential use in patients with arrhythmia or weak ECG signals.
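As a rough illustration of the correlation-analysis idea (not the paper's actual pipeline), the toy sketch below treats the complex difference between successive radial spokes as a surrogate cardiac signal, estimates the beat period from its autocorrelation, and flags strongly pulsatile spokes; the names, thresholds, and data layout are all assumptions.

```python
import numpy as np

def estimate_gating(profiles, tr, thresh=0.5):
    """profiles: (n_spokes, n_samples) complex radial readouts acquired
    every `tr` seconds. Returns flagged gate times and a crude estimate
    of the cardiac period (assumes the autocorrelation of the
    inter-spoke difference series peaks at the beat period)."""
    d = np.abs(profiles[1:] - profiles[:-1]).sum(axis=1)  # complex difference
    d = (d - d.min()) / (np.ptp(d) + 1e-12)               # normalize to [0, 1]
    z = d - d.mean()
    ac = np.correlate(z, z, mode='full')[z.size - 1:]     # autocorrelation
    period = (np.argmax(ac[1:]) + 1) * tr                 # dominant repeat time
    gates = np.where(d > thresh)[0] * tr                  # pulsatile spokes
    return gates, period
```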
EISCAT Aperture Synthesis Imaging (EASI _3D) for the EISCAT_3D Project
NASA Astrophysics Data System (ADS)
La Hoz, Cesar; Belyey, Vasyl
2012-07-01
Aperture Synthesis Imaging Radar (ASIR) is one of the technologies adopted by the EISCAT_3D project to endow it with imaging capabilities in 3 dimensions, including sub-beam resolution. Complemented by pulse compression, it will provide 3-dimensional images of certain types of incoherent scatter radar targets resolved to about 100 metres at 100 km range, depending on the signal-to-noise ratio. This ability will open new research opportunities to map small structures associated with non-homogeneous, unstable processes such as aurora, summer and winter polar radar echoes (PMSE and PMWE), Naturally Enhanced Ion Acoustic Lines (NEIALs), structures excited by HF ionospheric heating, meteors, space debris, and others. The underlying physico-mathematical principles of the technique are the same as those employed in radioastronomy to image stellar objects; both require sophisticated inversion techniques to obtain reliable images.
Masedunskas, Andrius; Milberg, Oleg; Porat-Shliom, Natalie; Sramkova, Monika; Wigand, Tim; Amornphimoltham, Panomwat; Weigert, Roberto
2012-01-01
Intravital microscopy is an extremely powerful tool that enables imaging of several biological processes in live animals. Recently, the ability to image subcellular structures in several organs, combined with the development of sophisticated genetic tools, has made it possible to extend this approach to investigate several aspects of cell biology. Here we provide a general overview of intravital microscopy with the goal of highlighting its potential and challenges. Specifically, this review is geared toward researchers who are new to intravital microscopy and focuses on practical aspects of carrying out imaging in live animals. Here we share the know-how that comes from first-hand experience, including topics such as choosing the right imaging platform and modality, surgery and stabilization techniques, and anesthesia and temperature control. Moreover, we highlight some of the approaches that facilitate subcellular imaging in live animals by providing numerous examples of imaging selected organelles and the actin cytoskeleton in multiple organs. PMID:22992750
NASA Astrophysics Data System (ADS)
Zielinski, Jerzy S.
The dramatic increase in the number and volume of digital images produced in medical diagnostics, the escalating demand for rapid access to these relevant medical data, and the need for interpretation and retrieval have become of paramount importance to a modern healthcare system. Therefore, there is an ever-growing need for processed, interpreted and saved images of various types. Due to the high cost and unreliability of human-dependent image analysis, it is necessary to develop automated methods for feature extraction, using sophisticated mathematical algorithms and reasoning. This work is focused on digital image signal processing of biological and biomedical data in one-, two- and three-dimensional space. Methods and algorithms presented in this work were used to acquire data from genomic sequences, breast cancer images, and biofilm images. One-dimensional analysis was applied to DNA sequences, which were treated as non-stationary sequences and modeled by a time-dependent autoregressive moving average (TD-ARMA) model. Two-dimensional analysis used a 2D-ARMA model and applied it to detect breast cancer from x-ray mammograms or ultrasound images. Three-dimensional detection and classification techniques were applied to biofilm images acquired using confocal laser scanning microscopy. Modern medical images are geometrically arranged arrays of data. The broadening scope of imaging as a way to organize our observations of the biophysical world has led to a dramatic increase in our ability to apply new processing techniques and to combine multiple channels of data into sophisticated and complex mathematical models of physiological function and dysfunction. With the explosion of the amount of data produced in the field of biomedicine, it is crucial to be able to construct accurate mathematical models of the data at hand. The two main purposes of signal modeling are data size conservation and parameter extraction. Specifically, in biomedical imaging there are four key problems that were addressed in this work: (i) registration, i.e. automated methods of data acquisition and the ability to align multiple data sets with each other; (ii) visualization and reconstruction, i.e. the environment in which registered data sets can be displayed on a plane or in multidimensional space; (iii) segmentation, i.e. automated and semi-automated methods to create models of relevant anatomy from images; (iv) simulation and prediction, i.e. techniques that can be used to simulate the growth and evolution of the phenomenon under study. Mathematical models can not only be used to verify experimental findings, but also to make qualitative and quantitative predictions that might serve as guidelines for the future development of technology and/or treatment.
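To make the autoregressive-modeling step concrete, here is a minimal least-squares AR(p) fit of the kind that underlies ARMA sequence models. It is a stationary simplification (the TD-ARMA model described above lets the coefficients vary with time), and the numeric DNA encoding shown is only a placeholder.

```python
import numpy as np

def fit_ar(x, p):
    """Least-squares fit of x[n] ~ a1*x[n-1] + ... + ap*x[n-p]."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    X = np.column_stack([x[p - k - 1:N - k - 1] for k in range(p)])
    y = x[p:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a, y - X @ a                      # coefficients, residuals

# Toy usage: a DNA fragment encoded numerically (A,C,G,T -> 0..3).
seq = np.random.randint(0, 4, 500).astype(float)
coeffs, resid = fit_ar(seq, p=4)
print(coeffs, resid.var())
```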
NASA Astrophysics Data System (ADS)
Kretschmer, E.; Bachner, M.; Blank, J.; Dapp, R.; Ebersoldt, A.; Friedl-Vallon, F.; Guggenmoser, T.; Gulde, T.; Hartmann, V.; Lutz, R.; Maucher, G.; Neubert, T.; Oelhaf, H.; Preusse, P.; Schardt, G.; Schmitt, C.; Schönfeld, A.; Tan, V.
2015-06-01
The Gimballed Limb Observer for Radiance Imaging of the Atmosphere (GLORIA), a Fourier-transform-spectrometer-based limb spectral imager, operates on high-altitude research aircraft to study the transit region between the troposphere and the stratosphere. It is one of the most sophisticated systems to be flown on research aircraft in Europe, requiring constant monitoring and human intervention in addition to an automation system. To ensure proper functionality and interoperability on multiple platforms, a flexible control and communication system was laid out. The architectures of the communication system as well as the protocols used are reviewed. The integration of this architecture in the automation process as well as the scientific campaign flight application context are discussed.
NASA Astrophysics Data System (ADS)
Kretschmer, E.; Bachner, M.; Blank, J.; Dapp, R.; Ebersoldt, A.; Friedl-Vallon, F.; Guggenmoser, T.; Gulde, T.; Hartmann, V.; Lutz, R.; Maucher, G.; Neubert, T.; Oelhaf, H.; Preusse, P.; Schardt, G.; Schmitt, C.; Schönfeld, A.; Tan, V.
2015-02-01
The Gimballed Limb Observer for Radiance Imaging of the Atmosphere (GLORIA), a Fourier transform spectrometer based limb spectral imager, operates on high-altitude research aircraft to study the transit region between the troposphere and the stratosphere. It is one of the most sophisticated systems to be flown on research aircraft in Europe, requiring constant monitoring and human intervention in addition to an automation system. To ensure proper functionality and interoperability on multiple platforms, a flexible control and communication system was laid out. The architectures of the communication system as well as the protocols used are reviewed. The integration of this architecture in the automation process as well as the scientific campaign flight application context are discussed.
The continual innovation of commercial PET/CT solutions in nuclear cardiology: Siemens Healthineers.
Bendriem, Bernard; Reed, Jessie; McCullough, Kathryn; Khan, Mohammad Raza; Smith, Anne M; Thomas, Damita; Long, Misty
2018-04-10
Cardiac PET/CT is an evolving, non-invasive imaging modality that impacts patient management in many clinical scenarios. Beyond offering the capability to assess myocardial perfusion, inflammatory cardiac pathologies, and myocardial viability, cardiac PET/CT also allows for the non-invasive quantitative assessment of myocardial blood flow (MBF) and myocardial flow reserve (MFR). Recognizing the need for an enhanced comprehension of coronary physiology, Siemens Healthineers implemented a sophisticated solution for the calculation of MBF and MFR in 2009. As a result, each aspect of their innovative scanner and image-processing technology seamlessly integrates into an efficient, easy-to-use workflow for everyday clinical use that maximizes the number of patients who potentially benefit from this imaging modality.
Acute Severe Aortic Regurgitation: Imaging with Pathological Correlation.
Janardhanan, Rajesh; Pasha, Ahmed Khurshid
2016-03-01
Acute aortic regurgitation (AR) is an important finding associated with a wide variety of disease processes. Its timely diagnosis is of utmost importance, as a delay in diagnosis could prove fatal. We describe a case of acute severe AR that was diagnosed in a timely fashion using real-time three-dimensional (3D) transesophageal echocardiography (3D TEE). Not only did 3D TEE establish the diagnosis, but the images it produced closely matched the pathologic specimen. Use of this sophisticated imaging modality, mostly available at tertiary centers, helped in the timely diagnosis, which led to optimal management that saved the patient's life. Echocardiography, and especially 3D TEE, can diagnose AR very accurately. Surgical intervention is the definitive treatment, but medical therapy is utilized to stabilize the patient initially.
NASA Astrophysics Data System (ADS)
Jaušovec, Norbert
2017-07-01
Recently the number of theories trying to explain the brain-cognition-behavior relation has increased, promoted on the one hand by the development of sophisticated brain imaging techniques, such as functional magnetic resonance imaging (fMRI) and diffusion tensor imaging (DTI), and on the other by complex computational models based on chaos and graph theory. But has this really advanced our understanding of the brain-behavior relation beyond Descartes's dualistic mind-body division? One could critically argue that replacing the pineal body with extracellular electric fields represented in the electroencephalogram (EEG) as rapid transitional processes (RTS), combined with algebraic topology and dubbed brain topodynamics [1], is just putting lipstick on an outmoded evergreen.
Space Optic Manufacturing - X-ray Mirror
NASA Technical Reports Server (NTRS)
1998-01-01
NASA's Space Optics Manufacturing Center has been working to expand our view of the universe via sophisticated new telescopes. The Optics Center's goal is to develop low-cost, advanced space optics technologies for the NASA program in the 21st century - including the long-term goal of imaging Earth-like planets in distant solar systems. To reduce the cost of mirror fabrication, Marshall Space Flight Center (MSFC) has developed replication techniques, the machinery and materials to replicate electro-formed nickel mirrors. The process allows fabricating precisely shaped mandrels to be used and reused as masters for replicating high-quality mirrors. This image shows a lightweight replicated x-ray mirror with gold coatings applied.
Autonomous robot software development using simple software components
NASA Astrophysics Data System (ADS)
Burke, Thomas M.; Chung, Chan-Jin
2004-10-01
Developing software to control a sophisticated lane-following, obstacle-avoiding, autonomous robot can be demanding and beyond the capabilities of novice programmers - but it doesn't have to be. A creative software design utilizing only basic image processing and a little algebra has been employed to control the LTU-AISSIG autonomous robot - a contestant in the 2004 Intelligent Ground Vehicle Competition (IGVC). This paper presents a software design equivalent to that used during the IGVC, but with much of the complexity removed. The result is an autonomous robot software design that is robust, reliable, and can be implemented by programmers with a limited understanding of image processing. This design provides a solid basis for further work in autonomous robot software, as well as an interesting and achievable robotics project for students.
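In the spirit of the paper's "basic image processing and a little algebra" (this sketch is ours, not the LTU-AISSIG code), a lane follower can reduce to thresholding bright markings in the lower half of the frame, taking a centroid, and steering proportionally to its offset:

```python
import numpy as np

def steering_from_frame(gray, k=0.005):
    """gray: 2-D grayscale frame. Returns a steering command where
    positive steers right and negative steers left (gain k assumed)."""
    h, w = gray.shape
    roi = gray[h // 2:, :]                    # road region ahead of the robot
    mask = roi > roi.mean() + 2 * roi.std()   # bright lane-marking pixels
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return 0.0                            # no lane found: hold course
    return k * (xs.mean() - w / 2.0)          # offset of lane centroid
```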
Real-time face and gesture analysis for human-robot interaction
NASA Astrophysics Data System (ADS)
Wallhoff, Frank; Rehrl, Tobias; Mayer, Christoph; Radig, Bernd
2010-05-01
Human communication relies on a large number of different communication mechanisms like spoken language, facial expressions, or gestures. Facial expressions and gestures are among the main nonverbal communication mechanisms and pass large amounts of information between human dialog partners. Therefore, to allow for intuitive human-machine interaction, real-time capable processing and recognition of facial expressions and hand and head gestures are of great importance. We present a system that is tackling these challenges. The input features for the dynamic head gestures and facial expressions are obtained from a sophisticated three-dimensional model, which is fitted to the user in a real-time capable manner. Applying this model, different kinds of information are extracted from the image data and afterwards handed over to a real-time capable data-transferring framework, the so-called Real-Time DataBase (RTDB). In addition to the head- and facial-related features, low-level image features of the human hand (optical flow, Hu moments) are also stored in the RTDB for the evaluation of hand gestures. In general, the input of a single camera is sufficient for the parallel evaluation of the different gestures and facial expressions. The real-time capable recognition of the dynamic hand and head gestures is performed via different Hidden Markov Models, which have proven to be a quick and real-time capable classification method. For the facial expressions, on the other hand, classical decision trees or more sophisticated support vector machines are used for the classification process. The results of the classification processes are again handed over to the RTDB, where other processes (like a Dialog Management Unit) can easily access them without any blocking effects. In addition, an adjustable amount of history can be stored by the RTDB buffer unit.
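The hand features named in the abstract are standard OpenCV fare; a minimal sketch (our own, with an assumed binary hand mask) computes dense Farneback optical flow and the seven Hu moments:

```python
import cv2
import numpy as np

def hand_features(prev_gray, cur_gray, hand_mask):
    """Mean optical flow over the hand region plus Hu moments of the
    hand silhouette; hand_mask is an 8-bit binary image."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, cur_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mean_flow = flow[hand_mask > 0].mean(axis=0)          # average (dx, dy)
    hu = cv2.HuMoments(cv2.moments(hand_mask, binaryImage=True)).flatten()
    return np.concatenate([mean_flow, hu])                # 9-D feature vector
```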
Examining Candidate Information Search Processes: The Impact of Processing Goals and Sophistication.
ERIC Educational Resources Information Center
Huang, Li-Ning
2000-01-01
Investigates how 4 different information-processing goals, varying on the dimensions of effortful versus effortless and impression-driven versus non-impression-driven processing, and individual differences in political sophistication affect the depth at which undergraduate students process candidate information and their decision-making strategies.…
Sparsity based target detection for compressive spectral imagery
NASA Astrophysics Data System (ADS)
Boada, David Alberto; Arguello Fuentes, Henry
2016-09-01
Hyperspectral imagery provides significant information about the spectral characteristics of objects and materials present in a scene. It enables object and feature detection, classification, or identification based on the acquired spectral characteristics. However, it relies on sophisticated acquisition and data processing systems able to acquire, process, store, and transmit hundreds or thousands of image bands from a given area of interest, which demands enormous computational resources in terms of storage, computation, and I/O throughput. Specialized optical architectures have been developed for the compressed acquisition of spectral images using a reduced set of coded measurements, contrary to traditional architectures that need a complete set of measurements of the data cube for image acquisition, thereby addressing the storage and acquisition limitations. Despite this improvement, if any processing is desired, the image has to be reconstructed by an inverse algorithm in order to be processed, which is also an expensive task. In this paper, a sparsity-based algorithm for target detection in compressed spectral images is presented. Specifically, the target detection model adapts a sparsity-based target detector to work in a compressive domain, modifying the sparse representation basis in the compressive sensing problem by means of over-complete training dictionaries and a wavelet basis representation. Simulations show that the presented method can achieve even better detection results than state-of-the-art methods.
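A common form of sparsity-based detection (shown here as a generic sketch, without the paper's coded-aperture projection or wavelet dictionary modification) represents each pixel spectrum over concatenated target and background dictionaries and compares reconstruction residuals:

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def detect_target(pixel, D_target, D_background, k=5):
    """pixel: measurement vector; D_*: dictionaries with (assumed)
    unit-norm columns. Returns True if the target residual is smaller."""
    D = np.hstack([D_target, D_background])
    coef = orthogonal_mp(D, pixel, n_nonzero_coefs=k)   # sparse code
    n_t = D_target.shape[1]
    r_t = np.linalg.norm(pixel - D_target @ coef[:n_t])
    r_b = np.linalg.norm(pixel - D_background @ coef[n_t:])
    return r_t < r_b
```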
Web-accessible cervigram automatic segmentation tool
NASA Astrophysics Data System (ADS)
Xue, Zhiyun; Antani, Sameer; Long, L. Rodney; Thoma, George R.
2010-03-01
Uterine cervix image analysis is of great importance to the study of uterine cervix cancer, which is among the leading cancers affecting women worldwide. In this paper, we describe our proof-of-concept, Web-accessible system for automated segmentation of significant tissue regions in uterine cervix images, which also demonstrates our research efforts toward promoting collaboration between engineers and physicians for medical image analysis projects. Our design and implementation unifies the merits of two commonly used languages, MATLAB and Java. It circumvents the heavy workload of recoding the sophisticated segmentation algorithms originally developed in MATLAB into Java, while allowing remote users who are not experienced programmers and algorithm developers to apply those processing methods to their own cervicographic images and evaluate the algorithms. Several other practical issues of the system are also discussed, such as the compression of images and the format of the segmentation results.
Estimation of color filter array data from JPEG images for improved demosaicking
NASA Astrophysics Data System (ADS)
Feng, Wei; Reeves, Stanley J.
2006-02-01
On-camera demosaicking algorithms are necessarily simple and therefore do not yield the best possible images. However, off-camera demosaicking algorithms face the additional challenge that the data has been compressed and therefore corrupted by quantization noise. We propose a method to estimate the original color filter array (CFA) data from JPEG-compressed images so that more sophisticated (and better) demosaicking schemes can be applied to get higher-quality images. The JPEG image formation process, including simple demosaicking, color space transformation, chrominance channel decimation and DCT, is modeled as a series of matrix operations followed by quantization on the CFA data, which is estimated by least squares. An iterative method is used to conserve memory and speed computation. Our experiments show that the mean square error (MSE) with respect to the original CFA data is reduced significantly using our algorithm, compared to that of unprocessed JPEG and deblocked JPEG data.
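Since the forward model (demosaicking, color transform, decimation, DCT) is linear up to quantization, the least-squares estimate can be computed iteratively without forming explicit matrices. Below is a generic Landweber-style sketch with placeholder operators A/At standing in for that pipeline; this is our simplification, not the authors' exact scheme.

```python
import numpy as np

def estimate_cfa(y, A, At, x0, n_iter=50, lam=0.5):
    """Minimize ||A(x) - y||^2 by gradient iteration, where A models the
    JPEG formation pipeline and At is its adjoint. Convergence requires
    lam < 2 / ||A||^2; memory use stays at a few image-sized buffers."""
    x = x0.copy()
    for _ in range(n_iter):
        x += lam * At(y - A(x))     # gradient step toward the LS solution
    return x
```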
Poka Yoke system based on image analysis and object recognition
NASA Astrophysics Data System (ADS)
Belu, N.; Ionescu, L. M.; Misztal, A.; Mazăre, A.
2015-11-01
Poka Yoke is a method of quality management aimed at preventing faults from arising during production processes. It deals with "fail-safing" or "mistake-proofing". The Poka Yoke concept was generated and developed by Shigeo Shingo for the Toyota Production System. Poka Yoke is used in many fields, especially in monitoring production processes. In many cases, identifying faults in a production process costs more than simply disposing of the faulty items. Usually, Poka Yoke solutions are based on multiple sensors that identify nonconformities, which means the presence of different equipment (mechanical, electronic) on the production line. This, coupled with the fact that the method itself is invasive and affects the production process, increases the cost of diagnostics, and the bulky machines by which a Poka Yoke system is implemented become more sophisticated. In this paper we propose a solution for the Poka Yoke system based on image analysis and identification of faults. The solution consists of a module for image acquisition, mid-level processing, and an object recognition module using associative memory (Hopfield network type). All are integrated into an embedded system with an AD (analog-to-digital) converter and a Zynq 7000 (28 nm technology).
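The associative-memory module can be as small as a textbook Hopfield network. The sketch below (ours, with toy 64-pixel patterns) stores reference silhouettes by the Hebbian rule and recalls the closest one from a corrupted view:

```python
import numpy as np

class Hopfield:
    def __init__(self, n):
        self.W = np.zeros((n, n))

    def train(self, patterns):            # Hebbian outer-product rule
        for p in patterns:
            self.W += np.outer(p, p)
        np.fill_diagonal(self.W, 0)

    def recall(self, x, steps=20):        # synchronous sign updates
        x = x.copy()
        for _ in range(steps):
            x = np.sign(self.W @ x)
            x[x == 0] = 1
        return x

# Store three reference part silhouettes, recall from a 10%-corrupted view.
pats = [np.sign(np.random.randn(64)) for _ in range(3)]
net = Hopfield(64)
net.train(pats)
noisy = pats[0] * np.where(np.random.rand(64) < 0.1, -1, 1)
print(np.array_equal(net.recall(noisy), pats[0]))
```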
Acute Severe Aortic Regurgitation: Imaging with Pathological Correlation
Janardhanan, Rajesh; Pasha, Ahmed Khurshid
2016-01-01
Context: Acute aortic regurgitation (AR) is an important finding associated with a wide variety of disease processes. Its timely diagnosis is of utmost importance, as a delay in diagnosis could prove fatal. Case Report: We describe a case of acute severe AR that was diagnosed in a timely fashion using real-time three-dimensional (3D) transesophageal echocardiography (3D TEE). Not only did 3D TEE establish the diagnosis, but the images it produced closely matched the pathologic specimen. Use of this sophisticated imaging modality, mostly available at tertiary centers, helped in the timely diagnosis, which led to optimal management that saved the patient's life. Conclusion: Echocardiography, and especially 3D TEE, can diagnose AR very accurately. Surgical intervention is the definitive treatment, but medical therapy is utilized to stabilize the patient initially. PMID:27114975
Escaping compound eye ancestry: the evolution of single-chamber eyes in holometabolous larvae.
Buschbeck, Elke K
2014-08-15
Stemmata, the eyes of holometabolous insect larvae, have gained little attention, even though they exhibit remarkably different optical solutions, ranging from compound eyes with upright images, to sophisticated single-chamber eyes with inverted images. Such optical differences raise the question of how major transitions may have occurred. Stemmata evolved from compound eye ancestry, and optical differences are apparent even in some of the simplest systems that share strong cellular homology with adult ommatidia. The transition to sophisticated single-chamber eyes occurred many times independently, and in at least two different ways: through the fusion of many ommatidia [as in the sawfly (Hymenoptera)], and through the expansion of single ommatidia [as in tiger beetles (Coleoptera), antlions (Neuroptera) and dobsonflies (Megaloptera)]. Although ommatidia-like units frequently have multiple photoreceptor layers (tiers), sophisticated image-forming stemmata tend to only have one photoreceptor tier, presumably a consequence of the lens only being able to efficiently focus light on to one photoreceptor layer. An interesting exception is found in some diving beetles [Dytiscidae (Coleoptera)], in which two retinas receive sharp images from a bifocal lens. Taken together, stemmata represent a great model system to study an impressive set of optical solutions that evolved from a relatively simple ancestral organization. © 2014. Published by The Company of Biologists Ltd.
Development of Nanotechnology for X-Ray Astronomy Instrumentation
NASA Technical Reports Server (NTRS)
Schattenburg, Mark L.
2004-01-01
This Research Grant provides support for development of nanotechnology for x-ray astronomy instrumentation. MIT has made significant progress in several development areas. In the last year we have made considerable progress in demonstrating the high-fidelity patterning and replication of x-ray reflection gratings. We developed a process for fabricating blazed gratings in silicon with extremely smooth and sharp sawtooth profiles, and developed a nanoimprint process for replication. We also developed sophisticated new fixturing for holding thin optics during metrology without causing distortion. We developed a new image processing algorithm for our Shack-Hartmann tool that uses Zernike polynomials. This has resulted in much more accurate and repeatable measurements on thin optics.
How does the brain process music?
Warren, Jason
2008-02-01
The organisation of the musical brain is a major focus of interest in contemporary neuroscience. This reflects the increasing sophistication of tools (especially imaging techniques) to examine brain anatomy and function in health and disease, and the recognition that music provides unique insights into a number of aspects of nonverbal brain function. The emerging picture is complex but coherent, and moves beyond older ideas of music as the province of a single brain area or hemisphere to the concept of music as a 'whole-brain' phenomenon. Music engages a distributed set of cortical modules that process different perceptual, cognitive and emotional components with varying selectivity. 'Why' rather than 'how' the brain processes music is a key challenge for the future.
Imaging enzyme-triggered self-assembly of small molecules inside live cells
Gao, Yuan; Shi, Junfeng; Yuan, Dan; Xu, Bing
2012-01-01
Self-assembly of small molecules in water to form nanofibers, besides generating sophisticated biomaterials, promises a simple system inside cells for regulating cellular processes. But lack of a convenient approach for studying the self-assembly of small molecules inside cells hinders the development of such systems. Here we report a method to image enzyme-triggered self-assembly of small molecules inside live cells. After linking a fluorophore to a self-assembly motif to make a precursor, we confirmed by 31P NMR and rheology that enzyme-triggered conversion of the precursor to a hydrogelator results in the formation of a hydrogel via self-assembly. The imaging contrast conferred by the nanofibers of the hydrogelators allowed the evaluation of intracellular self-assembly; the dynamics, and the localization of the nanofibers of the hydrogelators in live cells. This approach explores supramolecular chemistry inside cells and may lead to new insights, processes, or materials at the interface of chemistry and biology. PMID:22929790
Digital Compositing Techniques for Coronal Imaging (Invited review)
NASA Astrophysics Data System (ADS)
Espenak, F.
2000-04-01
The solar corona exhibits a huge range in brightness which cannot be captured in any single photographic exposure. Short exposures show the bright inner corona and prominences, while long exposures reveal faint details in equatorial streamers and polar brushes. For many years, radial gradient filters and other analog techniques have been used to compress the corona's dynamic range in order to study its morphology. Such techniques demand perfect pointing and tracking during the eclipse, and can be difficult to calibrate. In the past decade, the speed, memory and hard disk capacity of personal computers have rapidly increased as prices continue to drop. It is now possible to perform sophisticated image processing of eclipse photographs on commercially available CPUs. Software programs such as Adobe Photoshop permit combining multiple eclipse photographs into a composite image which compresses the corona's dynamic range and can reveal subtle features and structures. Algorithms and digital techniques used for processing 1998 eclipse photographs will be discussed which are equally applicable to the recent eclipse of 1999 August 11.
Astronomical Image Processing with Hadoop
NASA Astrophysics Data System (ADS)
Wiley, K.; Connolly, A.; Krughoff, S.; Gardner, J.; Balazinska, M.; Howe, B.; Kwon, Y.; Bu, Y.
2011-07-01
In the coming decade astronomical surveys of the sky will generate tens of terabytes of images and detect hundreds of millions of sources every night. With a requirement that these images be analyzed in real time to identify moving sources such as potentially hazardous asteroids or transient objects such as supernovae, these data streams present many computational challenges. In the commercial world, new techniques that utilize cloud computing have been developed to handle massive data streams. In this paper we describe how cloud computing, and in particular the map-reduce paradigm, can be used in astronomical data processing. We will focus on our experience implementing a scalable image-processing pipeline for the SDSS database using Hadoop (http://hadoop.apache.org). This multi-terabyte imaging dataset approximates future surveys such as those which will be conducted with the LSST. Our pipeline performs image coaddition in which multiple partially overlapping images are registered, integrated and stitched into a single overarching image. We will first present our initial implementation, then describe several critical optimizations that have enabled us to achieve high performance, and finally describe how we are incorporating a large in-house existing image processing library into our Hadoop system. The optimizations involve prefiltering of the input to remove irrelevant images from consideration, grouping individual FITS files into larger, more efficient indexed files, and a hybrid system in which a relational database is used to determine the input images relevant to the task. The incorporation of an existing image processing library, written in C++, presented difficult challenges since Hadoop is programmed primarily in Java. We will describe how we achieved this integration and the sophisticated image processing routines that were made feasible as a result. We will end by briefly describing the longer term goals of our work, namely detection and classification of transient objects and automated object classification.
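The map-reduce shape of coaddition is easy to see in miniature. The toy below (plain Python, not the SDSS/Hadoop pipeline) has the map phase emit (mosaic-pixel, value) pairs from each registered image and the reduce phase average everything that lands on the same pixel; Hadoop distributes exactly this shuffle across a cluster.

```python
import numpy as np
from collections import defaultdict

def coadd_mapreduce(images):
    """images: list of (2-D array, (row, col) offset in the mosaic)."""
    groups = defaultdict(list)
    for img, (r0, c0) in images:               # map + shuffle
        for (r, c), v in np.ndenumerate(img):
            groups[(r0 + r, c0 + c)].append(v)
    rows = 1 + max(r for r, _ in groups)
    cols = 1 + max(c for _, c in groups)
    mosaic = np.zeros((rows, cols))
    for (r, c), vals in groups.items():        # reduce: average the stack
        mosaic[r, c] = np.mean(vals)
    return mosaic

tiles = [(np.ones((4, 4)), (0, 0)), (3 * np.ones((4, 4)), (2, 2))]
print(coadd_mapreduce(tiles))                  # overlap region averages to 2
```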
Positron emission tomography (PET) advances in neurological applications
NASA Astrophysics Data System (ADS)
Sossi, V.
2003-09-01
Positron Emission Tomography (PET) is a functional imaging modality used in brain research to map in vivo neurotransmitter and receptor activity and to investigate glucose utilization or blood flow patterns both in healthy and disease states. Such research is made possible by the wealth of radiotracers available for PET, by the fact that metabolic and kinetic parameters of particular processes can be extracted from PET data and by the continuous development of imaging techniques. In recent years great advancements have been made in the areas of PET instrumentation, data quantification and image reconstruction that allow for more detailed and accurate biological information to be extracted from PET data. It is now possible to quantitatively compare data obtained either with different tracers or with the same tracer under different scanning conditions. These sophisticated imaging approaches enable detailed investigation of disease mechanisms and system response to disease and/or therapy.
Hyperpolarized 13C pyruvate mouse brain metabolism with absorptive-mode EPSI at 1 T
NASA Astrophysics Data System (ADS)
Miloushev, Vesselin Z.; Di Gialleonardo, Valentina; Salamanca-Cardona, Lucia; Correa, Fabian; Granlund, Kristin L.; Keshari, Kayvan R.
2017-02-01
The expected signal in echo-planar spectroscopic imaging experiments was explicitly modeled jointly in spatial and spectral dimensions. Using this as a basis, absorptive-mode type detection can be achieved by appropriate choice of spectral delays and post-processing techniques. We discuss the effects of gradient imperfections and demonstrate the implementation of this sequence at low field (1.05 T), with application to hyperpolarized [1-13C] pyruvate imaging of the mouse brain. The sequence achieves sufficient signal-to-noise to monitor the conversion of hyperpolarized [1-13C] pyruvate to lactate in the mouse brain. Hyperpolarized pyruvate imaging of mouse brain metabolism using an absorptive-mode EPSI sequence can be applied to more sophisticated murine disease and treatment models. The simple modifications presented in this work, which permit absorptive-mode detection, are directly translatable to human clinical imaging and generate improved absorptive-mode spectra without the need for refocusing pulses.
Handling of huge multispectral image data volumes from a spectral hole burning device (SHBD)
NASA Astrophysics Data System (ADS)
Graff, Werner; Rosselet, Armel C.; Wild, Urs P.; Gschwind, Rudolf; Keller, Christoph U.
1995-06-01
We use chlorin-doped polymer films at low temperatures as the primary imaging detector. Based on the principles of persistent spectral hole burning, this system is capable of storing spatial and spectral information simultaneously in one exposure with extremely high resolution. The sun as an extended light source has been imaged onto the film. The information recorded amounts to tens of GBytes. This data volume is read out by scanning the frequency of a tunable dye laser and reading the images with a digital CCD camera. For acquisition, archival, processing, and visualization, we use MUSIC (MUlti processor System with Intelligent Communication), a single instruction multiple data parallel processor system equipped with the necessary I/O facilities. The huge amount of data requires the development of sophisticated algorithms to efficiently calibrate the data and to extract useful and new information for solar physics.
Aerial image metrology for OPC modeling and mask qualification
NASA Astrophysics Data System (ADS)
Chen, Ao; Foong, Yee Mei; Thaler, Thomas; Buttgereit, Ute; Chung, Angeline; Burbine, Andrew; Sturtevant, John; Clifford, Chris; Adam, Kostas; De Bisschop, Peter
2017-06-01
As nodes become smaller and smaller, the OPC applied to enable these nodes becomes more and more sophisticated. This trend peaks today in curvilinear OPC approaches that are starting to appear on the roadmap. With this sophistication of OPC, mask pattern complexity increases. CD-SEM based mask qualification strategies as they are used today are starting to struggle to provide a precise forecast of the printing behavior of a mask on wafer. An aerial image CD measurement performed on the ZEISS Wafer-Level CD system (WLCD) is a complementary approach to mask CD-SEMs for judging the lithographical performance of the mask and its critical production features. The advantage of the aerial image is that it includes all optical effects of the mask, such as OPC, SRAF, and 3D mask effects, once the image is taken under scanner-equivalent illumination conditions. Additionally, it reduces the feature complexity and analyzes the printing-relevant CD.
Yagahara, Ayako; Yokooka, Yuki; Jiang, Guoqian; Tsuji, Shintarou; Fukuda, Akihisa; Nishimoto, Naoki; Kurowarabi, Kunio; Ogasawara, Katsuhiko
2018-03-01
Describing complex mammography examination processes is important for improving the quality of mammograms. It is often difficult for experienced radiologic technologists to explain the process because their techniques depend on their experience and intuition. In our previous study, we analyzed the process using a new bottom-up hierarchical task analysis and identified key components of the process. Leveraging the results of the previous study, the purpose of this study was to construct a mammographic examination process ontology to formally describe the relationships between the process and image evaluation criteria to improve the quality of mammograms. First, we identified and created root classes: task, plan, and clinical image evaluation (CIE). Second, we described an "is-a" relation referring to the result of the previous study and the structure of the CIE. Third, the procedural steps in the ontology were described using the new properties: "isPerformedBefore," "isPerformedAfter," and "isPerformedAfterIfNecessary." Finally, the relationships between tasks and CIEs were described using the "isAffectedBy" property to represent the influence of the process on image quality. In total, there were 219 classes in the ontology. By introducing new properties related to the process flow, a sophisticated mammography examination process could be visualized. In relationships between tasks and CIEs, it became clear that the tasks affecting the evaluation criteria related to positioning were greater in number than those for image quality. We developed a mammographic examination process ontology that makes knowledge explicit for a comprehensive mammography process. Our research will support education and help promote knowledge sharing about mammography examination expertise.
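The ontology structure described above maps naturally onto RDF/OWL. A minimal rdflib sketch follows; the namespace IRI and the example tasks are hypothetical, and only the root-class and property names come from the abstract.

```python
from rdflib import Graph, Namespace
from rdflib.namespace import OWL, RDF, RDFS

MEP = Namespace("http://example.org/mammo-process#")   # hypothetical IRI
g = Graph()
g.bind("mep", MEP)

# Root classes from the abstract: task, plan, clinical image evaluation.
for cls in (MEP.Task, MEP.Plan, MEP.ClinicalImageEvaluation):
    g.add((cls, RDF.type, OWL.Class))

# Ordering and influence properties introduced by the paper.
for prop in (MEP.isPerformedBefore, MEP.isPerformedAfter,
             MEP.isPerformedAfterIfNecessary, MEP.isAffectedBy):
    g.add((prop, RDF.type, OWL.ObjectProperty))

# Illustrative task chain (these task names are ours, not the paper's).
g.add((MEP.Positioning, RDFS.subClassOf, MEP.Task))
g.add((MEP.Compression, RDFS.subClassOf, MEP.Task))
g.add((MEP.Positioning, MEP.isPerformedBefore, MEP.Compression))
g.add((MEP.PositioningCriterion, MEP.isAffectedBy, MEP.Positioning))

print(g.serialize(format="turtle"))
```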
Co-design of software and hardware to implement remote sensing algorithms
NASA Astrophysics Data System (ADS)
Theiler, James P.; Frigo, Janette R.; Gokhale, Maya; Szymanski, John J.
2002-01-01
Both for offline searches through large data archives and for onboard computation at the sensor head, there is a growing need for ever-more rapid processing of remote sensing data. For many algorithms of use in remote sensing, the bulk of the processing takes place in an "inner loop" with a large number of simple operations. For these algorithms, dramatic speedups can often be obtained with specialized hardware. The difficulty and expense of digital design continues to limit the applicability of this approach, but the development of new design tools is making it more feasible, and some notable successes have been reported. On the other hand, it is often the case that processing can also be accelerated by adopting a more sophisticated algorithm design. Unfortunately, a more sophisticated algorithm is much harder to implement in hardware, so these approaches are often at odds with each other. With careful planning, however, it is sometimes possible to combine software and hardware design in such a way that each complements the other, and the final implementation achieves speedup that would not have been possible with a hardware-only or a software-only solution. We will in particular discuss the co-design of software and hardware to achieve substantial speedup of algorithms for multispectral image segmentation and for endmember identification.
Twellmann, Thorsten; Meyer-Baese, Anke; Lange, Oliver; Foo, Simon; Nattkemper, Tim W.
2008-01-01
Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) has become an important tool in breast cancer diagnosis, but evaluation of multitemporal 3D image data holds new challenges for human observers. To aid the image analysis process, we apply supervised and unsupervised pattern recognition techniques for computing enhanced visualizations of suspicious lesions in breast MRI data. These techniques represent an important component of future sophisticated computer-aided diagnosis (CAD) systems and support the visual exploration of spatial and temporal features of DCE-MRI data stemming from patients with confirmed lesion diagnosis. By taking into account the heterogeneity of cancerous tissue, these techniques reveal signals with malignant, benign and normal kinetics. They also provide a regional subclassification of pathological breast tissue, which is the basis for pseudo-color presentations of the image data. Intelligent medical systems are expected to have substantial implications in healthcare politics by contributing to the diagnosis of indeterminate breast lesions by non-invasive imaging. PMID:19255616
Securing quality of camera-based biomedical optics
NASA Astrophysics Data System (ADS)
Guse, Frank; Kasper, Axel; Zinter, Bob
2009-02-01
As sophisticated optical imaging technologies move into clinical applications, manufacturers need to guarantee that their products meet required performance criteria over long lifetimes and in very different environmental conditions. Consistent quality management tracks critical component features derived from end-user requirements in a top-down approach. Careful risk analysis in the design phase defines the sample sizes for production tests, whereas first-article inspection assures the reliability of the production processes. We demonstrate the application of these basic quality principles to camera-based biomedical optics for a variety of examples including molecular diagnostics, dental imaging, ophthalmology and digital radiography, covering a wide range of CCD/CMOS chip sizes and resolutions. Novel concepts in fluorescence detection and structured illumination are also highlighted.
1999-04-01
NASA's Space Optics Manufacturing Center has been working to expand our view of the universe via sophisticated new telescopes. The Optics Center's goal is to develop low-cost, advanced space optics technologies for the NASA program in the 21st century - including the long-term goal of imaging Earth-like planets in distant solar systems. To reduce the cost of mirror fabrication, Marshall Space Flight Center (MSFC) has developed replication techniques, the machinery, and materials to replicate electro-formed nickel mirrors. The process allows fabricating precisely shaped mandrels to be used and reused as masters for replicating high-quality mirrors. Image shows Dr. Alan Shapiro cleaning a mirror mandrel to be applied with a highly reflective and high-density coating in the Large Aperture Coating Chamber, MSFC Space Optics Manufacturing Technology Center (SOMTC).
Prefrontal Cortex, Emotion, and Approach/Withdrawal Motivation
Spielberg, Jeffrey M.; Stewart, Jennifer L.; Levin, Rebecca L.; Miller, Gregory A.; Heller, Wendy
2010-01-01
This article provides a selective review of the literature and current theories regarding the role of prefrontal cortex, along with some other critical brain regions, in emotion and motivation. Seemingly contradictory findings have often appeared in this literature. Research attempting to resolve these contradictions has been the basis of new areas of growth and has led to more sophisticated understandings of emotional and motivational processes as well as neural networks associated with these processes. Progress has, in part, depended on methodological advances that allow for increased resolution in brain imaging. A number of issues are currently in play, among them the role of prefrontal cortex in emotional or motivational processes. This debate fosters research that will likely lead to further refinement of conceptualizations of emotion, motivation, and the neural processes associated with them. PMID:20574551
Prefrontal Cortex, Emotion, and Approach/Withdrawal Motivation.
Spielberg, Jeffrey M; Stewart, Jennifer L; Levin, Rebecca L; Miller, Gregory A; Heller, Wendy
2008-01-01
This article provides a selective review of the literature and current theories regarding the role of prefrontal cortex, along with some other critical brain regions, in emotion and motivation. Seemingly contradictory findings have often appeared in this literature. Research attempting to resolve these contradictions has been the basis of new areas of growth and has led to more sophisticated understandings of emotional and motivational processes as well as neural networks associated with these processes. Progress has, in part, depended on methodological advances that allow for increased resolution in brain imaging. A number of issues are currently in play, among them the role of prefrontal cortex in emotional or motivational processes. This debate fosters research that will likely lead to further refinement of conceptualizations of emotion, motivation, and the neural processes associated with them.
NASA Technical Reports Server (NTRS)
2002-01-01
Warmer surface temperatures over just a few months in the Antarctic can splinter an ice shelf and prime it for a major collapse, NASA and university scientists report in the latest issue of the Journal of Glaciology. Using satellite images of tell-tale melt water on the ice surface and a sophisticated computer simulation of the motions and forces within an ice shelf, the scientists demonstrated that added pressure from surface water filling crevasses can crack the ice entirely through. The process can be expected to become more widespread if Antarctic summer temperatures increase. This true-color image from Landsat 7, acquired on February 21, 2000, shows pools of melt water on the surface of the Larsen Ice Shelf, and drifting icebergs that have split from the shelf. The upper image is an overview of the shelf's edge, while the lower image is displayed at full resolution of 30 meters (98 feet) per pixel. The labeled pond in the lower image measures roughly 1.6 by 1.6 km (1.0 x 1.0 miles). Image courtesy Landsat 7 Science Team and NASA GSFC.
Interferometric synthetic aperture radar: Building tomorrow's tools today
Lu, Zhong
2006-01-01
A synthetic aperture radar (SAR) system transmits electromagnetic (EM) waves at a wavelength that can range from a few millimeters to tens of centimeters. The radar wave propagates through the atmosphere and interacts with the Earth’s surface. Part of the energy is reflected back to the SAR system and recorded. Using a sophisticated image processing technique, called SAR processing (Curlander and McDonough, 1991), both the intensity and phase of the reflected (or backscattered) signal of each ground resolution element (a few meters to tens of meters) can be calculated in the form of a complex-valued SAR image representing the reflectivity of the ground surface. The amplitude or intensity of the SAR image is determined primarily by terrain slope, surface roughness, and dielectric constants, whereas the phase of the SAR image is determined primarily by the distance between the satellite antenna and the ground targets, slowing of the signal by the atmosphere, and the interaction of EM waves with ground surface. Interferometric SAR (InSAR) imaging, a recently developed remote sensing technique, utilizes the interaction of EM waves, referred to as interference, to measure precise distances. Very simply, InSAR involves the use of two or more SAR images of the same area to extract landscape topography and its deformation patterns.
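The distance sensitivity that makes InSAR work can be stated in one standard relation (textbook background, not specific to this article): the repeat-pass interferometric phase responds to line-of-sight range change as

```latex
\[
  \Delta\phi \;=\; \frac{4\pi}{\lambda}\,\Delta R
  \;+\; \Delta\phi_{\mathrm{topo}}
  \;+\; \Delta\phi_{\mathrm{atm}}
  \;+\; \Delta\phi_{\mathrm{noise}},
\]
```

so one full 2π fringe corresponds to a line-of-sight displacement of λ/2, roughly 2.8 cm for a C-band system (λ ≈ 5.7 cm), once the topographic, atmospheric, and noise terms are separated out.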
Automated, on-board terrain analysis for precision landings
NASA Technical Reports Server (NTRS)
Rahman, Zia-ur; Jobson, Daniel J.; Woodell, Glenn A.; Hines, Glenn D.
2006-01-01
Advances in space robotics technology hinge to a large extent upon the development and deployment of sophisticated new vision-based methods for automated in-space mission operations and scientific survey. To this end, we have developed a new concept for automated terrain analysis that is based upon a generic image enhancement platform: multi-scale retinex (MSR) and visual servo (VS) processing. This pre-conditioning with the MSR and the VS produces a "canonical" visual representation that is largely independent of lighting variations and exposure errors. Enhanced imagery is then processed with a biologically inspired two-channel edge detection process, followed by a smoothness-based criterion for image segmentation. Landing sites can be automatically determined by examining the results of the smoothness-based segmentation, which shows those areas in the image that surpass a minimum degree of smoothness. Though the MSR has proven to be a very strong enhancement engine, the other elements of the approach (the VS, terrain map generation, and smoothness-based segmentation) are in early stages of development. Experimental results on data from the Mars Global Surveyor show that the imagery can be processed to automatically obtain smooth landing sites. In this paper, we describe the method used to obtain these landing sites, and also examine the smoothness criteria in terms of the imager and scene characteristics. Several examples of applying this method to simulated and real imagery are shown.
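The MSR itself is compact enough to show in full. A minimal NumPy/SciPy version follows (ours; the surround scales are conventional choices, not taken from the paper): it subtracts log-domain Gaussian surrounds at several scales and averages the results.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multi_scale_retinex(img, sigmas=(15, 80, 250)):
    """Average of single-scale retinex outputs log(I) - log(G_sigma * I),
    which largely removes illumination and exposure variation."""
    img = img.astype(np.float64) + 1.0            # avoid log(0)
    out = sum(np.log(img) - np.log(gaussian_filter(img, s)) for s in sigmas)
    out /= len(sigmas)
    out = (out - out.min()) / (out.max() - out.min() + 1e-12)
    return (255 * out).astype(np.uint8)           # stretch for display
```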
ERIC Educational Resources Information Center
Price, Norman T.
2013-01-01
The availability and sophistication of visual display images, such as simulations, for use in science classrooms has increased exponentially; however, it can be difficult for teachers to use these images to encourage and engage active student thinking. There is a need to describe flexible discussion strategies that use visual media to engage active…
Switching non-local median filter
NASA Astrophysics Data System (ADS)
Matsuoka, Jyohei; Koga, Takanori; Suetake, Noriaki; Uchino, Eiji
2015-06-01
This paper describes a novel image filtering method for the removal of random-valued impulse noise superimposed on grayscale images. It is well known that switching-type median filters are effective for impulse noise removal. In this paper, we propose a switching-type impulse noise removal method that is more sophisticated in terms of detail-preserving performance. Specifically, the noise detector of the proposed method identifies noise-corrupted pixels by focusing on the difference between the value of a pixel of interest (POI) and the median of its neighboring pixel values, and on the POI's tendency to be isolated from the surrounding pixels. Furthermore, the removal of the detected noise is performed by the newly proposed median filter based on non-local processing, which has superior detail-preservation capability compared to the conventional median filter. The effectiveness and validity of the proposed method are verified by experiments using natural grayscale images.
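A conventional switching median filter built on the two detection cues named above (median deviation and isolation) fits in a few lines. This sketch substitutes a plain local median for the paper's non-local median at the removal stage, and the thresholds are assumptions:

```python
import numpy as np
from scipy.ndimage import median_filter

def switching_median(img, t_diff=40, t_iso=30):
    img = img.astype(np.float64)
    med = median_filter(img, size=3)
    diff = np.abs(img - med)                       # deviation from local median
    pads = np.pad(img, 1, mode='edge')             # 8-neighbor shifted views
    neigh = np.stack([pads[dr:dr + img.shape[0], dc:dc + img.shape[1]]
                      for dr in range(3) for dc in range(3)
                      if (dr, dc) != (1, 1)])
    iso = np.abs(neigh - img).min(axis=0)          # distance to nearest neighbor
    noisy = (diff > t_diff) & (iso > t_iso)        # detector: both cues fire
    out = img.copy()
    out[noisy] = med[noisy]                        # replace flagged pixels only
    return out.astype(np.uint8)
```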
Real-time generation of infrared ocean scene based on GPU
NASA Astrophysics Data System (ADS)
Jiang, Zhaoyi; Wang, Xun; Lin, Yun; Jin, Jianqiu
2007-12-01
Infrared (IR) image synthesis for ocean scenes has become more and more important nowadays, especially for remote sensing and military applications. Although a number of works present ready-to-use simulations, those techniques cover only a few of the possible ways in which water interacts with the environment, and detailed calculation of ocean temperature has rarely been considered by previous investigators. With the advance of programmable features of graphics cards, many algorithms previously limited to offline processing have become feasible for real-time usage. In this paper, we propose an efficient algorithm for real-time rendering of infrared ocean scenes using the newest features of programmable graphics processors (GPUs). It differs from previous works in three aspects: adaptive GPU-based ocean surface tessellation, a sophisticated thermal balance equation for the ocean surface, and GPU-based rendering of the infrared ocean scene. Finally, some resulting infrared images are shown, which are in good accordance with real images.
The AAO fiber instrument data simulator
NASA Astrophysics Data System (ADS)
Goodwin, Michael; Farrell, Tony; Smedley, Scott; Heald, Ron; Heijmans, Jeroen; De Silva, Gayandhi; Carollo, Daniela
2012-09-01
The fiber instrument data simulator is an in-house software tool, developed by the Australian Astronomical Observatory (AAO), that simulates detector images of fiber-fed spectrographs. In addition to helping validate the instrument designs, the resulting simulated images are used to develop the required data reduction software. Example applications that have benefited from the tool's usage are the HERMES and SAMI instrumental projects for the Anglo-Australian Telescope (AAT). Given the sophistication of these projects, an end-to-end data simulator that accurately models the predicted detector images is required. The data simulator encompasses all aspects of the transmission and optical aberrations of the light path: from the science object, through the atmosphere, telescope, fibers, spectrograph and finally the camera detectors. The simulator runs under a Linux environment that uses pre-calculated information derived from ZEMAX models and processed data from MATLAB. In this paper, we discuss the aspects of the model, software, example simulations and verification.
Assessing the impact of graphical quality on automatic text recognition in digital maps
NASA Astrophysics Data System (ADS)
Chiang, Yao-Yi; Leyk, Stefan; Honarvar Nazari, Narges; Moghaddam, Sima; Tan, Tian Xiang
2016-08-01
Converting geographic features (e.g., place names) in map images into a vector format is the first step for incorporating cartographic information into a geographic information system (GIS). With the advancement in computational power and algorithm design, map processing systems have been considerably improved over the last decade. However, the fundamental map processing techniques such as color image segmentation, (map) layer separation, and object recognition are sensitive to minor variations in graphical properties of the input image (e.g., scanning resolution). As a result, most map processing results would not meet user expectations if the user does not "properly" scan the map of interest, pre-process the map image (e.g., using compression or not), and train the processing system, accordingly. These issues could slow down the further advancement of map processing techniques as such unsuccessful attempts create a discouraged user community, and less sophisticated tools would be perceived as more viable solutions. Thus, it is important to understand what kinds of maps are suitable for automatic map processing and what types of results and process-related errors can be expected. In this paper, we shed light on these questions by using a typical map processing task, text recognition, to discuss a number of map instances that vary in suitability for automatic processing. We also present an extensive experiment on a diverse set of scanned historical maps to provide measures of baseline performance of a standard text recognition tool under varying map conditions (graphical quality) and text representations (that can vary even within the same map sheet). Our experimental results help the user understand what to expect when a fully or semi-automatic map processing system is used to process a scanned map with certain (varying) graphical properties and complexities in map content.
A ganglion-cell-based primary image representation method and its contribution to object recognition
NASA Astrophysics Data System (ADS)
Wei, Hui; Dai, Zhi-Long; Zuo, Qing-Song
2016-10-01
A visual stimulus is represented by the biological visual system at several levels: in order from low to high, these are photoreceptor cells, ganglion cells (GCs), lateral geniculate nucleus cells, and visual cortical neurons. Retinal GCs at the early level need to represent raw data only once, but meet a wide range of diverse requests from different vision-based tasks. This means the information representation at this level is general and not task-specific. Neurobiological findings have attributed this universal adaptation to GCs' receptive field (RF) mechanisms. For the purpose of developing a highly efficient image representation method that can facilitate information processing and interpretation at later stages, here we design a computational model to simulate the GC's non-classical RF. This new image representation method can extract major structural features from raw data, and is consistent with other statistical measures of the image. Based on the new representation, the performance of other state-of-the-art algorithms in contour detection and segmentation can be improved markedly. This work concludes that applying a sophisticated representation schema at an early stage is an efficient and promising strategy in visual information processing.
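The classical center-surround part of a GC receptive field is a difference of Gaussians, and a broad third Gaussian is one simple stand-in for the non-classical surround. The sketch below is that simplification (parameters are illustrative), not the paper's full model:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gc_response(img, sigma_c=1.0, sigma_s=3.0, sigma_n=9.0, k=0.9, m=0.3):
    """ON-center ganglion-cell activity map: center minus (classical)
    surround, further suppressed by a broad non-classical surround term."""
    img = img.astype(np.float64)
    classical = gaussian_filter(img, sigma_c) - k * gaussian_filter(img, sigma_s)
    return classical - m * gaussian_filter(img, sigma_n)
```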
Live Cell in Vitro and in Vivo Imaging Applications: Accelerating Drug Discovery
Isherwood, Beverley; Timpson, Paul; McGhee, Ewan J; Anderson, Kurt I; Canel, Marta; Serrels, Alan; Brunton, Valerie G; Carragher, Neil O
2011-01-01
Dynamic regulation of specific molecular processes and cellular phenotypes in live cell systems reveals unique insights into cell fate and drug pharmacology that are not gained from traditional fixed-endpoint assays. Recent advances in microscopic imaging platform technology, combined with the development of novel optical biosensors and sophisticated image analysis solutions, have increased the scope of live cell imaging applications in drug discovery. We highlight recent literature examples where live cell imaging has uncovered novel insight into biological mechanisms or drug mode-of-action. We survey distinct types of optical biosensors and associated analytical methods for monitoring molecular dynamics, in vitro and in vivo. We describe the recent expansion of live cell imaging into automated target validation and drug screening activities through the development of dedicated brightfield and fluorescence kinetic imaging platforms. We provide specific examples of how temporal profiling of phenotypic response signatures using such kinetic imaging platforms can increase the value of in vitro high-content screening. Finally, we offer a prospective view of how further application and development of live cell imaging technology and reagents can accelerate preclinical lead optimization cycles and enhance the in vitro to in vivo translation of drug candidates. PMID:24310493
Theory of Remote Image Formation
NASA Astrophysics Data System (ADS)
Blahut, Richard E.
2004-11-01
In many applications, images, such as ultrasonic or X-ray signals, are recorded and then analyzed with digital or optical processors in order to extract information. Such processing requires the development of algorithms of great precision and sophistication. This book presents a unified treatment of the mathematical methods that underpin the various algorithms used in remote image formation. The author begins with a review of transform and filter theory. He then discusses two- and three-dimensional Fourier transform theory, the ambiguity function, image construction and reconstruction, tomography, baseband surveillance systems, and passive systems (where the signal source might be an earthquake or a galaxy). Information-theoretic methods in image formation are also covered, as are phase errors and phase noise. Throughout the book, practical applications illustrate theoretical concepts, and there are many homework problems. The book is aimed at graduate students of electrical engineering and computer science, and at practitioners in industry; it also provides insights into the design parameters of real systems.
Planetary Radar Imaging with the Deep-Space Network's 34 Meter Uplink Array
NASA Technical Reports Server (NTRS)
Vilnrotter, V.; Tsao, P.; Lee, D.; Cornish, T.; Jao, J.; Slade, M.
2011-01-01
A coherent uplink array consisting of up to three 34-meter antennas of NASA's Deep Space Network has been developed for the primary purpose of increasing EIRP at the spacecraft. Greater EIRP ensures greater reach, higher uplink data rates for command and configuration control, as well as improved search and recovery capabilities during spacecraft emergencies. It has been conjectured that Doppler-delay radar imaging of lunar targets can be extended to planetary imaging, where the long baseline of the uplink array can provide greater resolution than a single antenna, as well as potentially higher EIRP. However, due to the well-known R⁻⁴ loss in radar links, imaging of distant planets is a very challenging endeavor, requiring accurate phasing of the Uplink Array antennas, cryogenically cooled low-noise receiver amplifiers, and sophisticated processing of the received data to extract the weak echoes characteristic of planetary radar. This article describes experiments currently under way to image the planets Mercury and Venus, highlights improvements in equipment and techniques, and presents planetary images obtained to date with two 34-meter antennas configured as a coherently phased Uplink Array.
Dawn of Advanced Molecular Medicine: Nanotechnological Advancements in Cancer Imaging and Therapy
Kaittanis, Charalambos; Shaffer, Travis M.; Thorek, Daniel L. J.; Grimm, Jan
2014-01-01
Nanotechnology plays an increasingly important role not only in our everyday life (with all its benefits and dangers) but also in medicine. Nanoparticles are to date the most intriguing option for delivering high concentrations of agents specifically and directly to cancer cells; therefore, a wide variety of these nanomaterials has been developed and explored. These span the range from simple nanoagents to sophisticated smart devices for drug delivery or imaging. Nanomaterials usually provide a large surface area, allowing decoration with many moieties for additional functionality or targeting. Besides being used solely for imaging purposes, particles can also carry a therapeutic agent as a payload; if both are combined within the same particle, a theranostic agent is created. The sophistication of highly developed nanotechnology targeting approaches provides a promising means for many clinical implementations and can improve otherwise suboptimal formulations. In this review we explore nanotechnology for both imaging and therapy to provide a general overview of the field and its impact on cancer imaging and therapy. PMID:25271430
Gopakumar, Gopalakrishna Pillai; Swetha, Murali; Sai Siva, Gorthi; Sai Subrahmanyam, Gorthi R K
2018-03-01
The present paper introduces a focus stacking-based approach for automated quantitative detection of Plasmodium falciparum malaria from blood smears. For the detection, a custom-designed convolutional neural network (CNN) operating on a focus stack of images is used. The cell counting problem is addressed as a segmentation problem, and we propose a two-level segmentation strategy. The use of a CNN operating on a focus stack for the detection of malaria is the first of its kind; it not only improved detection accuracy (both sensitivity [97.06%] and specificity [98.50%]) but also favored processing on cell patches and avoided the need for hand-engineered features. The slide images are acquired with a custom-built portable slide scanner made from low-cost, off-the-shelf components, which is suitable for point-of-care diagnostics. The proposed approach of employing sophisticated algorithmic processing together with inexpensive instrumentation can potentially help clinicians diagnose malaria. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
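For illustration only — the paper's actual architecture, patch size, and stack depth are not given here, so every dimension below is an assumption — a CNN that consumes the focus stack as input channels could be sketched in Python/PyTorch as:

    # Toy CNN over a focus stack: the stack's focal planes become input
    # channels, and the network classifies a cell patch as one of two
    # classes (e.g., infected / uninfected). All sizes are assumptions.
    import torch
    import torch.nn as nn

    class FocusStackCNN(nn.Module):
        def __init__(self, n_focus_planes=5):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(n_focus_planes, 16, kernel_size=3, padding=1),
                nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1),
                nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(32 * 8 * 8, 2)  # assumes 32x32 patches

        def forward(self, x):            # x: (batch, planes, 32, 32)
            f = self.features(x)
            return self.classifier(f.flatten(1))

    patches = torch.randn(4, 5, 32, 32)  # 4 patches, 5 focus planes each
    logits = FocusStackCNN()(patches)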
Image based performance analysis of thermal imagers
NASA Astrophysics Data System (ADS)
Wegner, D.; Repasi, E.
2016-05-01
Due to advances in technology, modern thermal imagers resemble sophisticated image processing systems in functionality. Advanced signal and image processing tools enclosed in the camera body extend the basic image capturing capability of thermal cameras, enhancing the display presentation of the captured scene or of specific scene details. Usually, the implemented methods are proprietary company expertise, distributed without extensive documentation. This makes the comparison of thermal imagers, especially from different companies, a difficult task (or at least a very time-consuming and expensive one, e.g. requiring a field trial and/or an observer trial). A thermal camera equipped with turbulence mitigation capability is an example of such a closed system. The Fraunhofer IOSB has started to build up a system for testing thermal imagers by image-based methods in the lab environment. This will extend our capability of measuring the classical IR-system parameters (e.g. MTF, MTDP, etc.) in the lab. The system is set up around an IR-scene projector, which is necessary for the thermal display (projection) of an image sequence for the IR camera under test. The same set of thermal test sequences can be presented to every unit under test; for turbulence mitigation tests, this could be the same turbulence sequence. During system tests, gradual variation of input parameters (e.g. thermal contrast) can be applied. First ideas on test scene selection, and on how to assemble an imaging suite (a set of image sequences) for the analysis of imaging thermal systems containing such black boxes in the image forming path, are discussed.
NASA Astrophysics Data System (ADS)
Hirano, Ryoichi; Iida, Susumu; Amano, Tsuyoshi; Watanabe, Hidehiro; Hatakeyama, Masahiro; Murakami, Takeshi; Yoshikawa, Shoji; Suematsu, Kenichi; Terao, Kenji
2015-07-01
High-sensitivity EUV mask pattern defect detection is one of the major issues to be solved before device fabrication using EUV lithography can be realized. We have already designed a novel Projection Electron Microscope (PEM) optics that has been integrated into a new inspection system named EBEYE-V30 ("Model EBEYE" is an EBARA model code), which appears quite promising for 16 nm hp generation EUVL patterned-mask inspection (PI). Defect inspection sensitivity was evaluated by capturing an electron image generated at the mask by focusing onto an image sensor. Progress in the novel PEM optics comes not only from a higher-resolution image sensor but also from better image processing to enhance the defect signal. In this paper, we describe the experimental results of EUV patterned mask inspection using the above-mentioned system. The performance of the system is measured in terms of defect detectability for the 11 nm hp generation EUV mask. To improve inspection throughput for 11 nm hp generation defect detection, a data processing rate greater than 1.5 Giga-Pixel-Per-Second (GPPS) would be required to realize less than eight hours of inspection time, including the step-and-scan motion associated with the process. The aims of the development program are to attain a higher throughput and to enhance defect detection sensitivity by using an adequate pixel size with sophisticated image processing at a higher processing rate.
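For scale, our own back-of-the-envelope arithmetic on the figures quoted above (not from the paper): at 1.5 GPPS an eight-hour run processes 1.5 × 10⁹ px/s × 8 h × 3600 s/h ≈ 4.3 × 10¹³ pixels.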
Data to Pictures to Data: Outreach Imaging Software and Metadata
NASA Astrophysics Data System (ADS)
Levay, Z.
2011-07-01
A convergence between astronomy science and digital photography has enabled a steady stream of visually rich imagery from state-of-the-art data. The accessibility of hardware and software has facilitated an explosion of astronomical images for outreach, from space-based observatories, ground-based professional facilities, and the vibrant amateur astrophotography community. Producing imagery from science data involves a combination of custom software to understand FITS data (FITS Liberator), off-the-shelf, industry-standard software to composite multi-wavelength data and edit digital photographs (Adobe Photoshop), and application of photo/image-processing techniques. Some additional effort is needed to close the loop and make this imagery conveniently available for purposes beyond web and print publication. The metadata paradigms in digital photography now interoperate with FITS and science software to carry information such as keyword tags and world coordinates, enabling these images to be used in more sophisticated, imaginative ways, exemplified by Sky in Google Earth and WorldWide Telescope.
Jacob, A L; Regazzoni, P; Bilecen, D; Rasmus, M; Huegli, R W; Messmer, P
2007-01-01
Technology integration is an enabling technological prerequisite for a major breakthrough in sophisticated intra-operative imaging, navigation, and robotics in minimally invasive and/or emergency diagnosis and therapy. Without a high degree of integration and reliability, comparable to that achieved in the aircraft industry, image guidance in its different facets will not ultimately succeed. As of today, technology integration in the field of image guidance is close to nonexistent. Technology integration requires inter-departmental integration of human and financial resources and of medical processes in a dialectic way. This expanded techno-socio-economic integration has profound consequences for the administration and working conditions in hospitals. At the University Hospital of Basel, Switzerland, a multimodality, multifunction sterile suite was put into operation after a substantial pre-run. We report the lessons learned during our venture into the world of medical technology integration and describe new possibilities for similar integration projects in the future.
Enhancing Web applications in radiology with Java: estimating MR imaging relaxation times.
Dagher, A P; Fitzpatrick, M; Flanders, A E; Eng, J
1998-01-01
Java is a relatively new programming language that has been used to develop a World Wide Web-based tool for estimating magnetic resonance (MR) imaging relaxation times, thereby demonstrating how Java may be used for Web-based radiology applications beyond improving the user interface of teaching files. A standard processing algorithm coded with Java is downloaded along with the hypertext markup language (HTML) document. The user (client) selects the desired pulse sequence and inputs data obtained from a region of interest on the MR images. The algorithm is used to modify selected MR imaging parameters in an equation that models the phenomenon being evaluated. MR imaging relaxation times are estimated, and confidence intervals and a P value expressing the accuracy of the final results are calculated. Design features such as simplicity, object-oriented programming, and security restrictions allow Java to expand the capabilities of HTML by offering a more versatile user interface that includes dynamic annotations and graphics. Java also allows the client to perform more sophisticated information processing and computation than is usually associated with Web applications. Java is likely to become a standard programming option, and the development of stand-alone Java applications may become more common as Java is integrated into future versions of computer operating systems.
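The relaxation-time estimate itself is a standard curve fit. As a hedged sketch (written in Python for brevity rather than the article's Java, with made-up echo times and signal values), T2 can be estimated from the monoexponential model S(TE) = S0·exp(−TE/T2) by a log-linear fit:

    # Log-linear T2 fit: ln S = ln S0 - TE/T2, so the slope of a
    # straight-line fit to (TE, ln S) gives -1/T2. Inputs are
    # illustrative ROI means, not data from the article.
    import numpy as np

    te = np.array([20.0, 40.0, 60.0, 80.0])          # echo times (ms)
    signal = np.array([880.0, 640.0, 470.0, 345.0])  # ROI mean intensities

    slope, intercept = np.polyfit(te, np.log(signal), 1)
    t2 = -1.0 / slope
    s0 = np.exp(intercept)
    print(f"T2 ~ {t2:.1f} ms, S0 ~ {s0:.0f}")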
New Capabilities With Dry Silver Recording Materials
NASA Astrophysics Data System (ADS)
Morgan, David A.
1987-04-01
Dry Silver technology was discovered at 3M and introduced into various imaging applications in the mid-sixties. In the early 1980s, quality films and papers with extended spectral responses, greater dynamic ranges, and improved sensitivity and edge acuity were introduced into sophisticated imaging systems. These products also have improved shelf life at elevated storage temperatures and improved print stability. At the present time, 3M is developing a full-color dry silver product. This product has the same rapid-access, easy-to-use characteristics as the black-and-white dry silver recording materials. It has high resolution, a long gray scale, and adequate sensitivity for CRTs and other electronically addressable exposure devices. The product can be processed in the same processors used for the black-and-white dry silver products.
STS-42 MS/PLC Norman E. Thagard adjusts Rack 10 FES equipment in IML-1 module
1992-01-30
STS042-05-006 (22-30 Jan 1992) --- Astronaut Norman E. Thagard, payload commander, operates the Fluids Experiment System (FES) in the International Microgravity Laboratory (IML-1) science module. The FES is a NASA-developed facility that produces optical images of fluid flows during the processing of materials in space. The system's sophisticated optics consist of a laser to make holograms of samples and a video camera to record images of flows in and around samples. Thagard was joined by six fellow crewmembers for eight days of scientific research aboard Discovery in Earth-orbit. Most of their on-duty time was spent in this IML-1 science module, positioned in the cargo bay and attached via a tunnel to Discovery's airlock.
USDA-ARS?s Scientific Manuscript database
Recent developments in spectrally encoded microspheres (SEMs)-based technologies provide high multiplexing possibilities. Most SEMs-based assays required a flow cytometer with sophisticated fluidics and optics. The new imaging superparamagnetic SEMs-based platform transports SEMs with considerably ...
Black Males and Television: New Images Versus Old Stereotypes.
ERIC Educational Resources Information Center
Douglas, Robert L.
1987-01-01
This paper focuses on historic portrayal of black males in service and support roles in the media and their relation to social reality. Both television and films use glamorous sophisticated trappings seemingly to enhance the image of black males, but the personalities of the characters they play remain stereotypic. (VM)
Milewski, Robert J; Kumagai, Yutaro; Fujita, Katsumasa; Standley, Daron M; Smith, Nicholas I
2010-11-19
Macrophages represent the front lines of our immune system; they recognize and engulf pathogens or foreign particles, thus initiating the immune response. Imaging macrophages presents unique challenges, as most optical techniques require labeling or staining of the cellular compartments in order to resolve organelles, and such stains or labels have the potential to perturb the cell, particularly in cases where incomplete information exists regarding the precise cellular reaction under observation. Label-free imaging techniques such as Raman microscopy are thus valuable tools for studying the transformations that occur in immune cells upon activation, both at the molecular and organelle levels. Due to extremely low signal levels, however, Raman microscopy requires sophisticated image processing techniques for noise reduction and signal extraction. To date, efficient, automated algorithms for resolving sub-cellular features in noisy, multi-dimensional image sets have not been explored extensively. We show that hybrid z-score normalization and standard regression (Z-LSR) can highlight the spectral differences within the cell and provide image contrast dependent on spectral content. In contrast to typical Raman image processing methods using multivariate analysis, such as singular value decomposition (SVD), our implementation of the Z-LSR method can operate nearly in real time. In spite of its computational simplicity, Z-LSR can automatically remove background and bias in the signal, improve the resolution of spatially distributed spectral differences, and enable sub-cellular features to be resolved in Raman microscopy images of mouse macrophage cells. Significantly, the Z-LSR-processed images automatically exhibited subcellular architectures, whereas SVD, in general, requires human assistance in selecting the components of interest. The computational efficiency of Z-LSR enables automated resolution of sub-cellular features in large Raman microscopy data sets without compromise in image quality or information loss in the associated spectra. These results motivate further use of label-free microscopy techniques in real-time imaging of live immune cells.
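A minimal sketch of our reading of the Z-LSR idea — z-score each pixel spectrum, least-squares regress it on a reference spectrum, and use the residual energy as contrast; the choice of reference and the residual-energy contrast are our assumptions, not the authors' exact implementation (Python):

    # Z-score each spectrum, regress it on the image-mean spectrum,
    # and map the residual energy: pixels whose spectra deviate from
    # the reference light up, giving spectral-content-dependent contrast.
    import numpy as np

    def zlsr_contrast(cube):
        """cube: (H, W, n_wavenumbers) Raman hyperspectral image."""
        h, w, n = cube.shape
        spectra = cube.reshape(-1, n).astype(float)
        # z-score each spectrum (removes per-pixel offset/scale bias)
        z = (spectra - spectra.mean(1, keepdims=True)) / \
            (spectra.std(1, keepdims=True) + 1e-12)
        ref = z.mean(0)                              # reference spectrum
        coef = z @ ref / (ref @ ref)                 # per-pixel LSQ slope
        resid = z - np.outer(coef, ref)              # spectral differences
        return np.linalg.norm(resid, axis=1).reshape(h, w)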
a Clustering-Based Approach for Evaluation of EO Image Indexing
NASA Astrophysics Data System (ADS)
Bahmanyar, R.; Rigoll, G.; Datcu, M.
2013-09-01
The volume of Earth Observation data is increasing immensely, on the order of several terabytes a day. To explore and investigate the content of this huge amount of data, more sophisticated Content-Based Information Retrieval (CBIR) systems are in high demand. These systems should be able not only to discover unknown structures behind the data, but also to provide relevant results to users' queries. Since in any retrieval system the images are processed based on a discrete set of their features (i.e., feature descriptors), study and assessment of the structure of the feature space built by different feature descriptors is of high importance. In this paper, we introduce a clustering-based approach to studying the content of image collections. In our approach, we claim that using both internal and external evaluation of clusters for different feature descriptors helps in understanding the structure of the feature space. Moreover, users' semantic understanding of the images can also be assessed. To validate the performance of our approach, we used an annotated Synthetic Aperture Radar (SAR) image collection. Quantitative results, besides the visualization of the feature space, demonstrate the applicability of our approach.
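One hedged way to realize such internal-plus-external cluster evaluation (the specific indices below, silhouette and adjusted Rand, are our choices for illustration; the paper does not name its measures here):

    # Cluster the descriptor space, then score it two ways:
    # an internal index (how compact/separated the clusters are)
    # and an external index (how well clusters match annotations).
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_score, adjusted_rand_score

    def evaluate_descriptor(features, annotations, n_clusters):
        """features: (n_images, n_dims); annotations: semantic labels."""
        labels = KMeans(n_clusters=n_clusters, n_init=10,
                        random_state=0).fit_predict(features)
        internal = silhouette_score(features, labels)        # compactness
        external = adjusted_rand_score(annotations, labels)  # semantics
        return internal, external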
A robust hidden Markov Gauss mixture vector quantizer for a noisy source.
Pyun, Kyungsuk Peter; Lim, Johan; Gray, Robert M
2009-07-01
Noise is ubiquitous in real life and changes image acquisition, communication, and processing characteristics in an uncontrolled manner. Gaussian noise and salt-and-pepper noise, in particular, are prevalent in noisy communication channels, camera and scanner sensors, and medical MRI images. It is not unusual for highly sophisticated image processing algorithms developed for clean images to malfunction when used on noisy images. For example, hidden Markov Gauss mixture models (HMGMM) have been shown to perform well in image segmentation applications, but they are quite sensitive to image noise. We propose a modified HMGMM procedure specifically designed to improve performance in the presence of noise. The key feature of the proposed procedure is the adjustment of covariance matrices in Gauss mixture vector quantizer codebooks to minimize an overall minimum discrimination information (MDI) distortion. In adjusting covariance matrices, we expand or shrink their elements based on the noisy image. While most results reported in the literature assume a particular noise type, we propose a framework without assuming particular noise characteristics. Without denoising the corrupted source, we apply our method directly to the segmentation of noisy sources. We apply the proposed procedure to the segmentation of aerial images with salt-and-pepper noise and with independent Gaussian noise, and we compare our results with those of the median filter restoration method and the blind deconvolution-based method, respectively. We show that our procedure outperforms image restoration-based techniques and closely matches the performance of HMGMM on clean images in terms of both visual segmentation results and error rate.
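A toy sketch of the covariance-adjustment idea, assuming additive noise whose variance is crudely estimated from pixel differences; the paper's actual adjustment minimizes an MDI distortion rather than using this shortcut:

    # "Expand" each codebook covariance by an estimated noise variance,
    # so the Gauss mixture tolerates the extra spread a noisy source adds.
    import numpy as np

    def expand_covariances(codebook_covs, noisy_image):
        # Crude noise-variance estimate from horizontal pixel differences
        # (assumption: for i.i.d. noise on a smooth image,
        # var(diff) ~ 2 * sigma_noise^2). Not the paper's MDI derivation.
        noise_var = 0.5 * np.var(np.diff(noisy_image.astype(float), axis=1))
        d = codebook_covs[0].shape[0]
        return [cov + noise_var * np.eye(d) for cov in codebook_covs]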
Contrast-enhanced and targeted ultrasound
Postema, Michiel; Gilja, Odd Helge
2011-01-01
Ultrasonic imaging is becoming the most popular medical imaging modality, owing to the low price per examination and its safety. However, blood is a poor scatterer of ultrasound waves at clinical diagnostic transmit frequencies. For perfusion imaging, markers have been designed to enhance the contrast in B-mode imaging. These so-called ultrasound contrast agents consist of microscopically small gas bubbles encapsulated in biodegradable shells. In this review, the physical principles of ultrasound contrast agent microbubble behavior and their adjustment for drug delivery including sonoporation are described. Furthermore, an outline of clinical imaging applications of contrast-enhanced ultrasound is given. It is a challenging task to quantify and predict which bubble phenomenon occurs under which acoustic condition, and how these phenomena may be utilized in ultrasonic imaging. Aided by high-speed photography, our improved understanding of encapsulated microbubble behavior will lead to more sophisticated detection and delivery techniques. More sophisticated methods use quantitative approaches to measure the amount and the time course of bolus or reperfusion curves, and have shown great promise in revealing effective tumor responses to anti-angiogenic drugs in humans before tumor shrinkage occurs. These are beginning to be accepted into clinical practice. In the long term, targeted microbubbles for molecular imaging and eventually for directed anti-tumor therapy are expected to be tested. PMID:21218081
Cao, Lu; Graauw, Marjo de; Yan, Kuan; Winkel, Leah; Verbeek, Fons J
2016-05-03
Endocytosis is regarded as a mechanism of attenuating epidermal growth factor receptor (EGFR) signaling and of receptor degradation. Increasing evidence shows that breast cancer progression is associated with a defect in EGFR endocytosis. In order to find related ribonucleic acid (RNA) regulators in this process, high-throughput imaging with fluorescent markers is used to visualize the complex EGFR endocytosis process. Subsequently, a dedicated automatic image and data analysis system is developed and applied to extract phenotype measurements and distinguish different developmental episodes from the huge number of images acquired through high-throughput imaging. In the image analysis, a phenotype measurement quantifies the important image information into distinct features or measurements. Therefore, the manner in which prominent measurements are chosen to represent the dynamics of the EGFR process becomes a crucial step for the identification of the phenotype. In the subsequent data analysis, classification is used to categorize each observation by making use of all prominent measurements obtained from image analysis. A well-constructed classification strategy therefore raises the performance of the whole image and data analysis system. In this paper, we illustrate an integrated analysis method for EGFR signalling through image analysis of microscopy images. Sophisticated wavelet-based texture measurements are used to obtain a good description of the characteristic stages in EGFR signalling. A hierarchical classification strategy is designed to improve the recognition of phenotypic episodes of EGFR during endocytosis. Different strategies for normalization, feature selection, and classification are evaluated. The results of the performance assessment clearly demonstrate that our hierarchical classification scheme, combined with a selected set of features, provides a notable improvement in the temporal analysis of EGFR endocytosis. Moreover, it is shown that the addition of the wavelet-based texture features contributes to this improvement. Our workflow can be applied in drug discovery to analyze defective EGFR endocytosis processes.
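A hedged sketch of wavelet-based texture measurements, with the wavelet family, decomposition level, and energy features all being our assumptions (the paper's measurements and hierarchical classifier are richer):

    # Per-subband energies of a 2-level wavelet decomposition as a
    # compact texture feature vector for a cell-image patch.
    import numpy as np
    import pywt

    def wavelet_texture_features(patch, wavelet="db2", level=2):
        coeffs = pywt.wavedec2(patch.astype(float), wavelet, level=level)
        feats = [np.mean(coeffs[0] ** 2)]        # approximation energy
        for (ch, cv, cd) in coeffs[1:]:          # detail subbands per level
            feats += [np.mean(ch ** 2), np.mean(cv ** 2), np.mean(cd ** 2)]
        return np.array(feats)                   # feeds any classifier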
A regional assessment of information technology sophistication in Missouri nursing homes.
Alexander, Gregory L; Madsen, Richard; Wakefield, Douglas
2010-08-01
To provide a state profile of information technology (IT) sophistication in Missouri nursing homes. Primary survey data were collected from December 2006 to August 2007. A descriptive, exploratory cross-sectional design was used to investigate dimensions of IT sophistication (technological, functional, and integration) related to resident care, clinical support, and administrative processes. Each dimension was used to describe the clinical domains and demographics (ownership, regional location, and bed size). The final sample included 185 nursing homes. A wide range of IT sophistication is being used in administrative and resident care management processes, but very little in clinical support activities. Evidence suggests nursing homes in Missouri are expanding use of IT beyond traditional administrative and billing applications to patient care and clinical applications. This trend is important to provide support for capabilities which have been implemented to achieve national initiatives for meaningful use of IT in health care settings.
A Java software for creation of image mosaics.
Bossert, Oliver
2004-08-01
Besides the dimensions of the selected image field, the resolution of the individual objects is of major importance for automatic reconstruction and other sophisticated histological work. The software solution presented here allows the user to create image mosaics from a combination of several photographs. Optimum control is achieved by combining two procedures and several control mechanisms. In sample tests involving 50 image pairs, all images were mosaiced without error. The program is ready for public download.
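The paper does not spell out the two combined procedures, so as a hedged stand-in, here is the classic phase-correlation estimate of the offset between two overlapping photographs, a common building block for mosaicking:

    # Phase correlation: the normalized cross-power spectrum of two
    # same-sized images peaks at their relative translation.
    import numpy as np

    def phase_correlation_offset(img_a, img_b):
        """Return (dy, dx) shift that best aligns img_b to img_a."""
        fa = np.fft.fft2(img_a.astype(float))
        fb = np.fft.fft2(img_b.astype(float))
        cross = fa * np.conj(fb)
        corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        # wrap shifts past the half-size into negative offsets
        if dy > corr.shape[0] // 2: dy -= corr.shape[0]
        if dx > corr.shape[1] // 2: dx -= corr.shape[1]
        return dy, dx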
The Ansel Adams zone system: HDR capture and range compression by chemical processing
NASA Astrophysics Data System (ADS)
McCann, John J.
2010-02-01
We tend to think of digital imaging and the tools of Photoshop™ as a new phenomenon in imaging. We are also familiar with multiple-exposure HDR techniques intended to capture a wider range of scene information than conventional film photography. We know about tone-scale adjustments to make better pictures. We tend to think of everyday consumer silver-halide photography as a fixed window of scene capture with a limited, standard range of response. This description of photography is certainly true, between 1950 and 2000, for instant films and negatives processed at the drugstore. These systems had fixed dynamic range and fixed tone-scale response to light. All pixels in the film have the same response to light, so the same light exposure from different pixels was rendered as the same film density. Ansel Adams, along with Fred Archer, formulated the Zone System starting in 1940. It predates the trillions of consumer photos of the second half of the 20th century, yet it was much more sophisticated than today's digital techniques. This talk describes the chemical mechanisms of the Zone System in the parlance of digital image processing, including its chemical techniques for image synthesis. It also discusses dodging and burning techniques to fit the HDR scene into the LDR print. Although current HDR imaging shares some of the Zone System's achievements, it usually does not achieve all of them.
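As a hedged digital paraphrase of the Zone System's exposure logic (zones are one photographic stop apart, with Zone V at middle gray; the mapping below is our rendering, not Adams'):

    # Each zone is one stop (a factor of 2 in luminance); Zone V is
    # middle gray, and zones clip to the 0..X (0..10) scale.
    import numpy as np

    def zone_of(luminance, middle_gray):
        ratio = np.maximum(luminance, 1e-6) / middle_gray  # guard log(0)
        zone = 5 + np.log2(ratio)
        return np.clip(np.round(zone), 0, 10)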
Prostate seed implant quality assessment using MR and CT image fusion.
Amdur, R J; Gladstone, D; Leopold, K A; Harris, R D
1999-01-01
After a seed implant of the prostate, computerized tomography (CT) is ideal for determining seed distribution but soft tissue anatomy is frequently not well visualized. Magnetic resonance (MR) images soft tissue anatomy well but seed visualization is problematic. We describe a method of fusing CT and MR images to exploit the advantages of both of these modalities when assessing the quality of a prostate seed implant. Eleven consecutive prostate seed implant patients were imaged with axial MR and CT scans. MR and CT images were fused in three dimensions using the Pinnacle 3.0 version of the ADAC treatment planning system. The urethra and bladder base were used to "line up" MR and CT image sets during image fusion. Alignment was accomplished using translation and rotation in the three ortho-normal planes. Accuracy of image fusion was evaluated by calculating the maximum deviation in millimeters between the center of the urethra on axial MR versus CT images. Implant quality was determined by comparing dosimetric results to previously set parameters. Image fusion was performed with a high degree of accuracy. When lining up the urethra and base of bladder, the maximum difference in axial position of the urethra between MR and CT averaged 2.5 mm (range 1.3-4.0 mm, SD 0.9 mm). By projecting CT-derived dose distributions over MR images of soft tissue structures, qualitative and quantitative evaluation of implant quality is straightforward. The image-fusion process we describe provides a sophisticated way of assessing the quality of a prostate seed implant. Commercial software makes the process time-efficient and available to any clinical practice with a high-quality treatment planning system. While we use MR to image soft tissue structures, the process could be used with any imaging modality that is able to visualize the prostatic urethra (e.g., ultrasound).
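The reported accuracy check reduces to a simple computation; a sketch with illustrative coordinate arrays (the function name and data layout are assumptions):

    # After fusion, compare the urethra center on each axial slice in
    # MR vs. CT and report the maximum (and summary) in-plane deviation.
    import numpy as np

    def max_urethra_deviation(mr_centers, ct_centers):
        """centers: (n_slices, 2) in-plane (x, y) positions in mm."""
        d = np.linalg.norm(np.asarray(mr_centers, float) -
                           np.asarray(ct_centers, float), axis=1)
        return d.max(), d.mean(), d.std()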
Cancer Imaging Phenomics Toolkit (CaPTK) | Informatics Technology for Cancer Research (ITCR)
CaPTk is a tool that facilitates translation of highly sophisticated methods that help us gain a comprehensive understanding of the underlying mechanisms of cancer from medical imaging research to the clinic. It replicates basic interactive functionalities of radiological workstations and is distributed under a BSD-style license.
Biological Parametric Mapping: A Statistical Toolbox for Multi-Modality Brain Image Analysis
Casanova, Ramon; Ryali, Srikanth; Baer, Aaron; Laurienti, Paul J.; Burdette, Jonathan H.; Hayasaka, Satoru; Flowers, Lynn; Wood, Frank; Maldjian, Joseph A.
2006-01-01
In recent years multiple brain MR imaging modalities have emerged; however, analysis methodologies have mainly remained modality specific. In addition, when comparing across imaging modalities, most researchers have been forced to rely on simple region-of-interest type analyses, which do not allow the voxel-by-voxel comparisons necessary to answer more sophisticated neuroscience questions. To overcome these limitations, we developed a toolbox for multimodal image analysis called biological parametric mapping (BPM), based on a voxel-wise use of the general linear model. The BPM toolbox incorporates information obtained from other modalities as regressors in a voxel-wise analysis, thereby permitting investigation of more sophisticated hypotheses. The BPM toolbox has been developed in MATLAB with a user friendly interface for performing analyses, including voxel-wise multimodal correlation, ANCOVA, and multiple regression. It has a high degree of integration with the SPM (statistical parametric mapping) software relying on it for visualization and statistical inference. Furthermore, statistical inference for a correlation field, rather than a widely-used T-field, has been implemented in the correlation analysis for more accurate results. An example with in-vivo data is presented demonstrating the potential of the BPM methodology as a tool for multimodal image analysis. PMID:17070709
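The core voxel-wise idea can be sketched as an ordinary GLM at each voxel, with the second modality (plus an intercept) as regressor; the BPM toolbox adds proper inference on correlation fields, ANCOVA, and SPM integration, none of which this toy loop attempts:

    # At every voxel, regress modality Y on modality X across subjects.
    import numpy as np

    def voxelwise_glm(y_images, x_images):
        """y_images, x_images: (n_subjects, n_voxels). Per-voxel slope."""
        n_subj, n_vox = y_images.shape
        slopes = np.empty(n_vox)
        for v in range(n_vox):
            X = np.column_stack([np.ones(n_subj), x_images[:, v]])
            beta, *_ = np.linalg.lstsq(X, y_images[:, v], rcond=None)
            slopes[v] = beta[1]          # effect of X on Y at this voxel
        return slopes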
NASA Astrophysics Data System (ADS)
Tyson, Eric J.; Buckley, James; Franklin, Mark A.; Chamberlain, Roger D.
2008-10-01
The imaging atmospheric Cherenkov technique for high-energy gamma-ray astronomy is emerging as an important new way of studying the high-energy universe. Current experiments have data rates of ≈20 TB/year and duty cycles of about 10%. In the future, more sensitive experiments may produce up to 1000 TB/year. The data analysis task for these experiments requires keeping up with this data rate in close to real time. Such data analysis is a classic example of a streaming application with very high performance requirements. This class of application often benefits greatly from non-traditional approaches to computation, including special-purpose hardware (FPGAs and ASICs) or sophisticated parallel processing techniques. However, designing, debugging, and deploying to these architectures is difficult, and thus they are not widely used by the astrophysics community. This paper presents the Auto-Pipe design toolset, developed to address many of the difficulties in exploiting complex streaming computer architectures for such applications. Auto-Pipe incorporates a high-level coordination language, functional and performance simulation tools, and the ability to deploy applications to sophisticated architectures. Using the Auto-Pipe toolset, we have implemented the front-end portion of an imaging Cherenkov data analysis application, suitable for real-time or offline analysis. The application operates on data from the VERITAS experiment and shows how Auto-Pipe can greatly ease performance optimization and application deployment on a wide variety of platforms. We demonstrate a performance improvement over a traditional software approach of 32x using an FPGA solution and 3.6x using a multiprocessor-based solution.
Bio-inspired nano-sensor-enhanced CNN visual computer.
Porod, Wolfgang; Werblin, Frank; Chua, Leon O; Roska, Tamas; Rodriguez-Vazquez, Angel; Roska, Botond; Fay, Patrick; Bernstein, Gary H; Huang, Yih-Fang; Csurgay, Arpad I
2004-05-01
Nanotechnology opens new ways to utilize recent discoveries in biological image processing by translating the underlying functional concepts into the design of CNN (cellular neural/nonlinear network)-based systems incorporating nanoelectronic devices. There is a natural intersection joining studies of retinal processing, spatio-temporal nonlinear dynamics embodied in CNN, and the possibility of miniaturizing the technology through nanotechnology. This intersection serves as the springboard for our multidisciplinary project. Biological feature and motion detectors map directly into the spatio-temporal dynamics of CNN for target recognition, image stabilization, and tracking. The neural interactions underlying color processing will drive the development of nanoscale multispectral sensor arrays for image fusion. Implementing such nanoscale sensors on a CNN platform will allow the implementation of device feedback control, a hallmark of biological sensory systems. These biologically inspired CNN subroutines are incorporated into the new world of analog-and-logic algorithms and software, containing also many other active-wave computing mechanisms, including nature-inspired (physics and chemistry) as well as PDE-based sophisticated spatio-temporal algorithms. Our goal is to design and develop several miniature prototype devices for target detection, navigation, tracking, and robotics. This paper presents an example illustrating the synergies emerging from the convergence of nanotechnology, biotechnology, and information and cognitive science.
NASA Astrophysics Data System (ADS)
Hollmach, Julia; Schweizer, Julia; Steiner, Gerald; Knels, Lilla; Funk, Richard H. W.; Thalheim, Silko; Koch, Edmund
2011-07-01
Retinal diseases like age-related macular degeneration have become an important cause of visual loss, owing to increasing life expectancy and lifestyle habits. Because no satisfactory treatment exists, early diagnosis and prevention are the only ways to stop the degeneration. The protein cytochrome c (cyt c) is a suitable marker for degeneration processes and apoptosis because it is part of the respiratory chain and involved in the apoptotic pathway. Determining the local distribution and oxidative state of cyt c in living cells allows the characterization of cell degeneration processes. Since cyt c exhibits characteristic absorption bands between 400 and 650 nm wavelength, uv/vis in situ spectroscopic imaging was used for its characterization in retinal ganglion cells. The large amount of data, consisting of spatial and spectral information, was processed by multivariate data analysis. The challenge consists in the identification of the molecular information of cyt c. Baseline correction, principal component analysis (PCA), and cluster analysis (CA) were performed in order to identify cyt c within the spectral dataset. The combination of PCA and CA reveals cyt c and its oxidative state. The results demonstrate that uv/vis spectroscopic imaging, in conjunction with sophisticated multivariate methods, is a suitable tool to characterize cyt c under in situ conditions.
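A minimal sketch of the described pipeline — crude baseline correction, PCA, then clustering — with the component and cluster counts as assumptions (Python/scikit-learn):

    # Reduce each pixel's spectrum with PCA, then cluster the scores to
    # delineate regions with distinct (e.g., cyt-c-like) absorption.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    def segment_spectral_cube(cube, n_components=5, n_clusters=4):
        """cube: (H, W, n_wavelengths) spectroscopic image."""
        h, w, n = cube.shape
        spectra = cube.reshape(-1, n).astype(float)
        spectra -= spectra.min(axis=1, keepdims=True)   # crude baseline
        scores = PCA(n_components=n_components).fit_transform(spectra)
        labels = KMeans(n_clusters=n_clusters, n_init=10,
                        random_state=0).fit_predict(scores)
        return labels.reshape(h, w)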
Comprehensive approach to image-guided surgery
NASA Astrophysics Data System (ADS)
Peters, Terence M.; Comeau, Roch M.; Kasrai, Reza; St. Jean, Philippe; Clonda, Diego; Sinasac, M.; Audette, Michel A.; Fenster, Aaron
1998-06-01
Image-guided surgery has evolved over the past 15 years from stereotactic planning, where the surgeon planned approaches to intracranial targets on the basis of 2D images presented on a simple workstation, to the use of sophisticated multi-modality 3D image integration in the operating room, with guidance provided by mechanically, optically, or electro-magnetically tracked probes or microscopes. In addition, sophisticated procedures such as thalamotomies and pallidotomies to relieve the symptoms of Parkinson's disease are performed with the aid of volumetric atlases integrated with the 3D image data. Operations that are performed stereotactically, that is to say via a small burr-hole in the skull, can assume that the information contained in the pre-operative imaging study accurately represents the brain morphology during the surgical procedure. Performing a procedure via an open craniotomy, on the other hand, presents a problem: not only does tissue shift when the operation begins, even the act of opening the skull can cause significant shift of the brain tissue due to the relief of intra-cranial pressure or the effect of drugs. Means of tracking and correcting such shifts form an important part of the work in the field of image-guided surgery today. One approach has been the development of intra-operative MRI imaging systems. We describe an alternative approach which integrates intra-operative ultrasound with pre-operative MRI to track such changes in tissue morphology.
From synchrotron radiation to lab source: advanced speckle-based X-ray imaging using abrasive paper
NASA Astrophysics Data System (ADS)
Wang, Hongchang; Kashyap, Yogesh; Sawhney, Kawal
2016-02-01
X-ray phase and dark-field imaging techniques provide complementary information that is inaccessible to conventional X-ray absorption or visible-light imaging. However, such methods typically require sophisticated experimental apparatus or X-ray beams with specific properties. Recently, an X-ray speckle-based technique has shown great potential for X-ray phase and dark-field imaging with a simple experimental arrangement. However, it still suffers from either poor resolution or the time-consuming process of collecting a large number of images. To overcome these limitations, we demonstrate in this report that absorption, dark-field, phase-contrast, and two orthogonal differential phase-contrast images can be generated simultaneously by scanning a piece of abrasive paper in only one direction. We propose a novel theoretical approach to quantitatively extract the above five images by utilising the remarkable properties of speckles. Importantly, the technique has been extended from a synchrotron light source to a lab-based microfocus X-ray source and flat-panel detector. Removing the need to raster the optics in two directions significantly reduces the acquisition time and absorbed dose, which can be of vital importance for many biological samples. This new imaging method could potentially provide a breakthrough for numerous practical imaging applications in biomedical research and materials science.
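A hedged sketch of the speckle-tracking core: for each analysis window, find the local displacement of the speckle pattern between reference and sample images (the displacement relates to differential phase, the loss of speckle visibility to dark-field); window handling and search range are assumptions:

    # Brute-force local correlation search for the integer speckle shift
    # between a reference window (no sample) and a sample window.
    import numpy as np

    def local_shift(ref_win, sam_win, search=3):
        """Integer (dy, dx) maximizing correlation within +/-search px."""
        r = ref_win - ref_win.mean()
        best, best_shift = -np.inf, (0, 0)
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                s = np.roll(sam_win, (dy, dx), axis=(0, 1))
                c = np.sum(r * (s - s.mean()))
                if c > best:
                    best, best_shift = c, (dy, dx)
        return best_shift   # shift map -> differential phase per window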
ERIC Educational Resources Information Center
Greene, Jeffrey Alan; Azevedo, Roger
2009-01-01
In this study, we used think-aloud verbal protocols to examine how various macro-level processes of self-regulated learning (SRL; e.g., planning, monitoring, strategy use, handling of task difficulty and demands) were associated with the acquisition of a sophisticated mental model of a complex biological system. Numerous studies examine how…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fromm, Catherine
2015-08-20
Ptychography is an advanced diffraction-based imaging technique that can achieve resolution of 5 nm and below. It is done by scanning a sample through a beam of focused x-rays using discrete yet overlapping scan steps. Scattering data are collected on a CCD camera, and the phase of the scattered light is reconstructed with sophisticated iterative algorithms. Because the experimental setup is similar, ptychography setups can be created by retrofitting existing STXM beamlines with new hardware. The other challenge comes in the reconstruction of the collected scattering images. Scattering data must be adjusted and packaged with experimental parameters to calibrate the reconstruction software. The necessary pre-processing of data prior to reconstruction is unique to each beamline setup, and even to the optical alignments used on a particular day. Pre-processing software must be developed to be flexible and efficient in order to allow experimenters appropriate control and freedom in the analysis of their hard-won data. This paper describes the implementation of pre-processing software which successfully connects data collection steps to reconstruction steps, letting the user accomplish accurate and reliable ptychography.
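A generic sketch in the spirit described — dark-frame subtraction, crude centering and cropping, then packaging frames with the parameters a reconstruction engine needs; every step and field name here is an assumption, since each beamline's real pipeline differs:

    # Subtract the detector dark frame, center/crop each diffraction
    # pattern, and bundle the stack with experimental parameters.
    import numpy as np

    def preprocess_frames(frames, dark, crop=256):
        """frames: (n_scan, H, W) CCD images; dark: (H, W) dark frame."""
        half, out = crop // 2, []
        for f in frames:
            f = np.clip(f.astype(float) - dark, 0, None)
            cy, cx = np.unravel_index(np.argmax(f), f.shape)  # crude center
            cy = int(np.clip(cy, half, f.shape[0] - half))
            cx = int(np.clip(cx, half, f.shape[1] - half))
            out.append(f[cy - half:cy + half, cx - half:cx + half])
        return np.stack(out)

    def package(frames, energy_eV, step_m, det_pix_m, dist_m):
        # illustrative field names, not any particular engine's format
        return {"data": frames, "energy_eV": energy_eV,
                "scan_step_m": step_m, "detector_pixel_m": det_pix_m,
                "detector_distance_m": dist_m}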
TOF-SIMS imaging technique with information entropy
NASA Astrophysics Data System (ADS)
Aoyagi, Satoka; Kawashima, Y.; Kudo, Masahiro
2005-05-01
Time-of-flight secondary ion mass spectrometry (TOF-SIMS) is capable, in principle, of chemical imaging of proteins on insulated samples. However, the selection of specific peaks related to a particular protein, which is necessary for chemical imaging, out of numerous candidates had been difficult without an appropriate spectrum analysis technique. Therefore, multivariate analysis techniques, such as principal component analysis (PCA), and analysis with mutual information as defined by information theory, have been applied to interpret SIMS spectra of protein samples. In this study, mutual information was applied to select specific peaks related to proteins in order to obtain chemical images. Proteins on insulated materials were measured with TOF-SIMS, and the SIMS spectra were then analyzed by means of the analysis method based on comparison using mutual information. A chemical map of each protein was obtained using the specific peaks related to that protein, selected on the basis of mutual information values. The resulting TOF-SIMS images of proteins on the materials provide useful information on protein adsorption properties, the optimality of immobilization processes, and reactions between proteins. Thus chemical images of proteins by TOF-SIMS contribute to understanding interactions between material surfaces and proteins and to developing sophisticated biomaterials.
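A hedged sketch of peak selection by mutual information: discretize each candidate peak's intensity across measured spots and score it against the known protein label; the binning and scoring choices are ours, not the paper's exact formulation:

    # Rank candidate SIMS peaks by mutual information with the protein
    # label; high-MI peaks are the candidates for chemical imaging.
    import numpy as np
    from sklearn.metrics import mutual_info_score

    def rank_peaks(intensities, labels, n_bins=8):
        """intensities: (n_spectra, n_peaks); labels: protein id per spectrum."""
        scores = []
        for p in range(intensities.shape[1]):
            edges = np.histogram_bin_edges(intensities[:, p], n_bins)
            binned = np.digitize(intensities[:, p], edges)
            scores.append(mutual_info_score(labels, binned))
        return np.argsort(scores)[::-1]       # peak indices, best first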
[Digital imaging and robotics in endoscopic surgery].
Go, P M
1998-05-23
The introduction of endoscopic surgery has, among other things, influenced technical developments in surgery. Owing to digitalisation, major progress will be made in imaging and in the sophisticated technology sometimes called robotics. Digital storage makes the results of imaging diagnostics (e.g. the results of radiological examination) suitable for transmission via video conference systems for telediagnostic purposes. The availability of digital video technology renders possible the processing, storage, and retrieval of moving images as well. During endoscopic operations, use may be made of a robot arm which replaces the camera operator. The arm does not grow tired and provides a stable image. The surgeon himself can operate or address the arm, and it can remember fixed image positions to which it can return on command. The next step is to carry out surgical manipulations via a robot arm, which may make operations more patient-friendly. A robot arm can also be remotely controlled: telerobotics. At the Internet site of this journal a number of supplements to this article can be found, including three-dimensional (3D) illustrations (which is the purpose of the 3D spectacles enclosed with this issue) and a quiz (http://appendix.niwi.knaw.nl).
Sundareshan, Malur K; Bhattacharjee, Supratik; Inampudi, Radhika; Pang, Ho-Yuen
2002-12-10
Computational complexity is a major impediment to the real-time implementation of image restoration and superresolution algorithms in many applications. Although powerful restoration algorithms have been developed within the past few years utilizing sophisticated mathematical machinery (based on statistical optimization and convex set theory), these algorithms are typically iterative in nature and require a sufficient number of iterations to be executed to achieve the desired resolution improvement that may be needed to meaningfully perform postprocessing image exploitation tasks in practice. Additionally, recent technological breakthroughs have facilitated novel sensor designs (focal plane arrays, for instance) that make it possible to capture megapixel imagery data at video frame rates. A major challenge in the processing of these large-format images is to complete the execution of the image processing steps within the frame capture times and to keep up with the output rate of the sensor so that all data captured by the sensor can be efficiently utilized. Consequently, development of novel methods that facilitate real-time implementation of image restoration and superresolution algorithms is of significant practical interest and is the primary focus of this study. The key to designing computationally efficient processing schemes lies in strategically introducing appropriate preprocessing steps together with the superresolution iterations to tailor optimized overall processing sequences for imagery data of specific formats. For substantiating this assertion, three distinct methods for tailoring a preprocessing filter and integrating it with the superresolution processing steps are outlined. These methods consist of a region-of-interest extraction scheme, a background-detail separation procedure, and a scene-derived information extraction step for implementing a set-theoretic restoration of the image that is less demanding in computation compared with the superresolution iterations. A quantitative evaluation of the performance of these algorithms for restoring and superresolving various imagery data captured by diffraction-limited sensing operations are also presented.
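The "preprocess, then iterate" strategy can be sketched as follows: extract a region of interest so the expensive iterations touch far fewer pixels, then run a simple iterative deblurring (Van Cittert here, as a stand-in for the paper's set-theoretic iterations; the blur model and gains are assumptions):

    # Restore only the ROI: Van Cittert iteration f <- f + beta*(g - H f)
    # with a Gaussian stand-in for the sensor blur H.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def restore_roi(image, roi, sigma=2.0, beta=0.5, n_iter=25):
        """roi: (y0, y1, x0, x1); deblur only that window."""
        y0, y1, x0, x1 = roi
        g = image[y0:y1, x0:x1].astype(float)   # observed ROI
        f = g.copy()
        for _ in range(n_iter):
            f = f + beta * (g - gaussian_filter(f, sigma))
        return f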
Middleton, Mark; Frantzis, Jim; Healy, Brendan; Jones, Mark; Murry, Rebecca; Kron, Tomas; Plank, Ashley; Catton, Charles; Martin, Jarad
2011-12-01
The quality assurance (QA) of image-guided radiation therapy (IGRT) within clinical trials is in its infancy, but its importance will continue to grow as IGRT becomes the standard of care. The purpose of this study was to demonstrate the feasibility of IGRT QA as part of the credentialing process for a clinical trial. As part of the accreditation process for a randomized trial of prostate cancer hypofractionation, IGRT benchmarking across multiple sites was incorporated. Each participating site underwent IGRT credentialing via a site visit. In all centers, intraprostatic fiducials were used. A real-time assessment of IGRT analysis was performed using Varian's Offline Review image analysis package. Two-dimensional (2D) kV and MV electronic portal imaging datasets from a prostate patient were used, consisting of 39 treatment verification images for 2D/2D comparison with the digitally reconstructed radiograph derived from the planning scan. The influence of site, imaging modality, and observer experience on IGRT was then assessed. Statistical analysis of the mean mismatch errors showed that IGRT analysis was performed uniformly across the three orthogonal planes, regardless of institution, therapist seniority, or imaging modality. The IGRT component of clinical trials that include sophisticated planning and treatment protocols must undergo stringent QA. The IGRT technique of intraprostatic fiducials has been shown in the context of this trial to be undertaken in a uniform manner across Australia. Extending this concept to many sites with different equipment and IGRT experience will require a robust remote credentialing process. Crown Copyright © 2011. Published by Elsevier Inc. All rights reserved.
Reliability-Based Model to Analyze the Performance and Cost of a Transit Fare Collection System.
DOT National Transportation Integrated Search
1985-06-01
The collection of transit system fares has become more sophisticated in recent years, with more flexible structures requiring more sophisticated fare collection equipment to process tickets and admit passengers. However, this new and complex equipmen...
Automated Blazar Light Curves Using Machine Learning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Spencer James
Every night in a remote clearing called Fenton Hill, high in the Jemez Mountains of central New Mexico, a bank of robotically controlled telescopes tilt their lenses to the sky for another round of observation through digital imaging. Los Alamos National Laboratory's Thinking Telescopes project is watching for celestial transients, including high-power cosmic flashes, and like all science, it can be messy work. To keep the project clicking along, Los Alamos scientists routinely install equipment upgrades, maintain the site, and refine the sophisticated machine-learning computer programs that process those images and extract useful data from them. Each week the system amasses 100,000 digital images of the heavens, some of which are compromised by clouds, wind gusts, focus problems, and so on. For a graduate student at the Lab taking a year's break between master's and Ph.D. studies, working with state-of-the-art autonomous telescopes that can make fundamental discoveries feels light years beyond the classroom.
NASA Technical Reports Server (NTRS)
Rice, R. F.
1978-01-01
Various communication systems were considered which are required to transmit both imaging and a typically error-sensitive class of data called general science/engineering (gse) over a Gaussian channel. The approach jointly treats the imaging and gse transmission problems, allowing comparisons of systems which include various channel coding and data compression alternatives. Actual system comparisons include an Advanced Imaging Communication System (AICS) which exhibits the rather significant potential advantages of sophisticated data compression coupled with powerful yet practical channel coding.
Two-Dimensional Optoelectronic Graphene Nanoprobes for Neural Network
NASA Astrophysics Data System (ADS)
Hong, Tu; Kitko, Kristina; Wang, Rui; Zhang, Qi; Xu, Yaqiong
2014-03-01
The brain is the most complex network created by nature, with billions of neurons connected by trillions of synapses through sophisticated wiring patterns and countless modulatory mechanisms. Current methods to study neuronal processes, whether by electrophysiology or optical imaging, have significant limitations in throughput and sensitivity. Here, we use graphene, a monolayer of carbon atoms, as a two-dimensional nanoprobe for neural networks. Scanning photocurrent measurement is applied to detect the local integration of electrical and chemical signals in mammalian neurons. Such an interface between a nanoscale electronic device and a biological system provides not only ultra-high sensitivity but also sub-millisecond temporal resolution, owing to the high carrier mobility of graphene.
Performance comparison of denoising filters for source camera identification
NASA Astrophysics Data System (ADS)
Cortiana, A.; Conotter, V.; Boato, G.; De Natale, F. G. B.
2011-02-01
Source identification for digital content is one of the main branches of digital image forensics. It relies on the extraction of the photo-response non-uniformity (PRNU) noise as a unique intrinsic fingerprint that efficiently characterizes the digital device which generated the content. Such noise is estimated as the difference between the content and its de-noised version obtained via denoising filter processing. This paper proposes a performance comparison of different denoising filters for source identification purposes. In particular, results achieved with a sophisticated 3D filter are presented and discussed with respect to state-of-the-art denoising filters previously employed in such a context.
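To make the residual-extraction step concrete, here is a minimal sketch in Python, with a Gaussian blur standing in for the denoising filter under test; the function names are illustrative, and the choice of denoiser is an assumption rather than any of the filters actually compared in the paper:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(image, sigma=1.0):
    """Estimate the noise residual W = I - F(I), where F is a denoising
    filter; a Gaussian blur stands in for the filters the paper compares."""
    img = image.astype(np.float64)
    return img - gaussian_filter(img, sigma=sigma)

def prnu_fingerprint(images):
    """Average the residuals of many images from one camera to estimate
    its PRNU fingerprint (random noise averages out, PRNU remains)."""
    return np.mean([noise_residual(img) for img in images], axis=0)
```

In practice, a better denoiser leaves less scene content in the residual, which is exactly why the choice of filter drives identification performance.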
1999-04-01
NASA's Space Optics Manufacturing Center has been working to expand our view of the universe via sophisticated new telescopes. The Optics Center's goal is to develop low-cost, advanced space optics technologies for the NASA program in the 21st century - including the long-term goal of imaging Earth-like planets in distant solar systems. To reduce the cost of mirror fabrication, Marshall Space Flight Center (MSFC) has developed replication techniques, the machinery, and materials to replicate electro-formed nickel mirrors. The process allows precisely shaped mandrels to be fabricated and reused as masters for replicating high-quality mirrors. The photograph shows J.R. Griffith inspecting a replicated x-ray mirror mandrel.
X-ray Tomography and Chemical Imaging within Butterfly Wing Scales
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen Jianhua; Lee Yaochang; Tang, M.-T.
2007-01-19
The rainbow-like color of butterfly wings is associated with the internal and surface structures of the wing scales. While the photonic structure of the scales is believed to diffract specific wavelengths of light at different angles, there has been no adequate probe directly resolving the 3-D structures with sufficient spatial resolution. The NSRRC nano-transmission x-ray microscope (nTXM), with tens-of-nanometers spatial resolution, is able to image biological specimens without the artifacts usually introduced in sophisticated sample staining processes. With the intrinsic deep penetration of x-rays, the nTXM is capable of nondestructively investigating the internal structures of fragile and soft samples. In this study, we imaged the structure of butterfly wing scales in 3-D view with 60 nm spatial resolution. In addition, synchrotron-radiation-based Fourier transform infrared (FT-IR) microspectroscopy was employed to analyze the chemical components, with spatial information, of the butterfly wing scales. Based on the infrared spectral images, we suggest that the major components of the scale structure are rich in protein and polysaccharide.
Bio-inspired color sketch for eco-friendly printing
NASA Astrophysics Data System (ADS)
Safonov, Ilia V.; Tolstaya, Ekaterina V.; Rychagov, Michael N.; Lee, Hokeun; Kim, Sang Ho; Choi, Donchul
2012-01-01
Saving toner/ink consumption is an important task in modern printing devices, with positive ecological and social impact. We propose a technique for converting print-job pictures into recognizable and pleasant color sketches. Drawing a "pencil sketch" from a photo belongs to a special area of image processing and computer graphics: non-photorealistic rendering. We describe a new approach for automatic sketch generation which creates well-recognizable sketches and partly preserves the colors of the initial picture. Our sketches contain significantly fewer color dots than the initial images, and this helps to save toner/ink. Our bio-inspired approach is based on a sophisticated edge detection technique for mask creation, followed by multiplication of the contrast-increased source image by this mask. To construct the mask we use DoG edge detection, which results from blending the initial image with its blurred copy through the alpha-channel, which is in turn created from a Saliency Map according to a pre-attentive human vision model. Measurement of the percentage of saved toner and a user study prove the effectiveness of the proposed technique for toner saving in an eco-friendly printing mode.
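A minimal sketch of the mask-and-multiply pipeline, using plain difference-of-Gaussians edges; the saliency-driven alpha blending of the actual method is omitted, and all thresholds and names are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_edge_mask(gray, sigma=1.0, k=1.6, threshold=4.0):
    """Difference-of-Gaussians edge mask: subtract a more strongly
    blurred copy from a lightly blurred one and keep strong responses."""
    g = gray.astype(np.float64)
    dog = gaussian_filter(g, sigma) - gaussian_filter(g, k * sigma)
    return (np.abs(dog) > threshold).astype(np.float64)

def color_sketch(rgb, mask, contrast=1.3):
    """Multiply a contrast-boosted source image by the edge mask, leaving
    white paper elsewhere, so toner is spent only on edge regions."""
    boosted = np.clip(128 + contrast * (rgb.astype(np.float64) - 128), 0, 255)
    mask3 = mask[..., None]  # broadcast the mask over the color channels
    return (boosted * mask3 + 255.0 * (1.0 - mask3)).astype(np.uint8)
```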
SkinScan©: A PORTABLE LIBRARY FOR MELANOMA DETECTION ON HANDHELD DEVICES
Wadhawan, Tarun; Situ, Ning; Lancaster, Keith; Yuan, Xiaojing; Zouridakis, George
2011-01-01
We have developed a portable library for automated detection of melanoma termed SkinScan© that can be used on smartphones and other handheld devices. Compared to desktop computers, embedded processors have limited processing speed, memory, and power, but they have the advantage of portability and low cost. In this study we explored the feasibility of running a sophisticated application for automated skin cancer detection on an Apple iPhone 4. Our results demonstrate that the proposed library with the advanced image processing and analysis algorithms has excellent performance on handheld and desktop computers. Therefore, deployment of smartphones as screening devices for skin cancer and other skin diseases can have a significant impact on health care delivery in underserved and remote areas. PMID:21892382
Using brain stimulation to disentangle neural correlates of conscious vision
de Graaf, Tom A.; Sack, Alexander T.
2014-01-01
Research into the neural correlates of consciousness (NCCs) has blossomed, due to the advent of new and increasingly sophisticated brain research tools. Neuroimaging has uncovered a variety of brain processes that relate to conscious perception, obtained in a range of experimental paradigms. But methods such as functional magnetic resonance imaging or electroencephalography do not always afford inference on the functional role these brain processes play in conscious vision. Such empirical NCCs could reflect neural prerequisites, neural consequences, or neural substrates of a conscious experience. Here, we take a closer look at the use of non-invasive brain stimulation (NIBS) techniques in this context. We discuss and review how NIBS methodology can enlighten our understanding of brain mechanisms underlying conscious vision by disentangling the empirical NCCs. PMID:25295015
Submillimeter video imaging with a superconducting bolometer array
NASA Astrophysics Data System (ADS)
Becker, Daniel Thomas
Millimeter wavelength radiation holds promise for detection of security threats at a distance, including suicide bombers and maritime threats in poor weather. The high sensitivity of superconducting Transition Edge Sensor (TES) bolometers makes them ideal for passive imaging of thermal signals at millimeter and submillimeter wavelengths. I have built a 350 GHz video-rate imaging system using an array of feedhorn-coupled TES bolometers. The system operates at standoff distances of 16 m to 28 m with a measured spatial resolution of 1.4 cm (at 17 m). It currently contains one 251-detector sub-array, and can be expanded to contain four sub-arrays for a total of 1004 detectors. The system has been used to take video images that reveal the presence of weapons concealed beneath a shirt in an indoor setting. This dissertation describes the design, implementation and characterization of this system. It presents an overview of the challenges associated with standoff passive imaging and how these problems can be overcome through the use of large-format TES bolometer arrays. I describe the design of the system and cover the results of detector and optical characterization. I explain the procedure used to generate video images using the system, and present a noise analysis of those images. This analysis indicates that the Noise Equivalent Temperature Difference (NETD) of the video images is currently limited by artifacts of the scanning process. More sophisticated image processing algorithms can eliminate these artifacts and reduce the NETD to 100 mK, which is the target value for the most demanding passive imaging scenarios. I finish with an overview of future directions for this system.
Filtering algorithm for dotted interferences
NASA Astrophysics Data System (ADS)
Osterloh, K.; Bücherl, T.; Lierse von Gostomski, Ch.; Zscherpel, U.; Ewert, U.; Bock, S.
2011-09-01
An algorithm has been developed to reliably remove dotted interferences that impair the perceptibility of objects within a radiographic image. This is a major challenge particularly with neutron radiographs collected at the NECTAR facility, Forschungs-Neutronenquelle Heinz Maier-Leibnitz (FRM II): the resulting images are dominated by features resembling a snow flurry. These artefacts are caused by scattered neutrons, gamma radiation, cosmic radiation, etc., all hitting the detector CCD directly in spite of sophisticated shielding. This makes such images rather useless for further direct evaluation. One approach to resolving the problem of these random effects would be to collect a vast number of single images, to combine them appropriately, and to process them with common image filtering procedures. However, it has been shown that, e.g., median filtering, depending on the kernel size in the plane and/or the number of single shots to be combined, is either insufficient or tends to blur sharp line structures. This makes visually controlled, image-by-image processing unavoidable. Particularly in tomographic studies, it would be far too tedious to treat each single projection this way. Alternatively, it would be not only more convenient but in many cases the only reasonable approach to filter a stack of images in a batch procedure to get rid of the disturbing interferences. The algorithm presented here meets all these requirements. It reliably frees the images from the snowy pattern described above without loss of fine structures and without a general blurring of the image. It is an iterative, parameter-free filtering algorithm, suitable for batch procedures, that aims to eliminate the often complex interfering artefacts while leaving the original information untouched as far as possible.
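For contrast, the stack-combination baseline the authors find insufficient can be sketched in a few lines; the stack shape and kernel size here are hypothetical:

```python
import numpy as np
from scipy.ndimage import median_filter

def median_combine(stack):
    """Pixel-wise median over N single shots (stack: N x H x W).
    Suppresses random dots, but only with enough frames."""
    return np.median(stack, axis=0)

def spatial_median(image, kernel=3):
    """In-plane median filter: larger kernels remove bigger dots
    but blur sharp line structures, as noted above."""
    return median_filter(image, size=kernel)
```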
Object recognition through turbulence with a modified plenoptic camera
NASA Astrophysics Data System (ADS)
Wu, Chensheng; Ko, Jonathan; Davis, Christopher
2015-03-01
Atmospheric turbulence adds accumulated distortion to images obtained by cameras and surveillance systems. When the turbulence grows stronger or the object is further from the observer, increasing the recording device's resolution does little to improve image quality. Many sophisticated methods to correct the distorted images have been invented, such as using a known feature on or near the target object to perform a deconvolution process, or using adaptive optics. However, most of these methods depend heavily on the object's location, and optical ray propagation through the turbulence is not directly considered. Alternatively, selecting a lucky image over many frames provides a feasible solution, but at the cost of time. In our work, we propose an innovative approach to improving image quality through turbulence by making use of a modified plenoptic camera. This type of camera adds a micro-lens array to a traditional high-resolution camera to form a semi-camera array that records duplicate copies of the object, as well as "superimposed" turbulence, at slightly different angles. By performing several steps of image reconstruction, turbulence effects are suppressed to reveal more details of the object independently (without finding references near the object). Meanwhile, the redundant information obtained by the plenoptic camera raises the possibility of performing lucky-image algorithmic analysis with fewer frames, which is more efficient. The details of our modified plenoptic cameras and image processing algorithms are introduced. The proposed method can be applied to coherently as well as incoherently illuminated objects. Our results show that the turbulence effect can be effectively suppressed by the plenoptic camera at the hardware layer, and a reconstructed "lucky image" can help the viewer identify the object even when a "lucky image" from ordinary cameras is not achievable.
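The "lucky image" selection mentioned above can be sketched with a standard sharpness proxy (variance of the Laplacian); the metric and names are illustrative, not necessarily the authors' choice:

```python
import numpy as np
from scipy.ndimage import laplace

def sharpness(frame):
    """Variance of the Laplacian: a common focus/sharpness proxy."""
    return float(laplace(frame.astype(np.float64)).var())

def lucky_frame(frames):
    """Return the least turbulence-degraded frame in a sequence."""
    return max(frames, key=sharpness)
```

The plenoptic redundancy described above effectively multiplies the number of candidate views per exposure, which is why fewer frames suffice.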
A contemporary perspective on capitated reimbursement for imaging services.
Schwartz, H W
1995-01-01
Capitation ensures predictability of healthcare costs, requires acceptance of a premium in return for providing all required medical services and defines the actual dollar amount paid to a physician or hospital on a per member per month basis for a service or group of services. Capitation is expected to dramatically affect the marketplace in the near future, as private enterprise demands lower, more stable healthcare costs. Capitation requires detailed quantitative and financial data, including: eligibility and benefits determination, encounter processing, referral management, claims processing, case management, physician compensation, insurance management functions, outcomes reporting, performance management and cost accounting. It is important to understand actuarial risk and capitation marketing when considering a capitation contract. Also, capitated payment methodologies may vary to include modified fee-for-service, incentive pay, risk pool redistributions, merit, or a combination. Risk is directly related to the ability to predict utilization and unit cost of imaging services provided to a specific insured population. In capitated environments, radiologists will have even less control over referrals than they have today and will serve many more "covered lives"; long-term relationships with referring physicians will continue to evaporate; and services will be provided under exclusive, multi-year contracts. In addition to intensified use of technology for image transfer, telecommunications and sophisticated data processing and tracking systems, imaging departments must continue to provide the greatest amount of appropriate diagnostic information in a timely fashion at the lowest feasible cost and risk to the patient.
Autonomous system for Web-based microarray image analysis.
Bozinov, Daniel
2003-12-01
Software-based feature extraction from DNA microarray images still requires human intervention on various levels. Manual adjustment of grid and metagrid parameters, precise alignment of superimposed grid templates and gene spots, or simply identification of large-scale artifacts have to be performed beforehand to reliably analyze DNA signals and correctly quantify their expression values. Ideally, a Web-based system with input solely confined to a single microarray image and a data table as output containing measurements for all gene spots would directly transform raw image data into abstracted gene expression tables. Sophisticated algorithms with advanced procedures for iterative correction can overcome inherent challenges in image processing. Herein is introduced an integrated software system with a Java-based interface on the client side that allows for decentralized access and furthermore enables the scientist to instantly employ the most updated software version at any given time. This software tool extends PixClust, as used in Extractiff, incorporating Java Web Start deployment technology. Ultimately, this setup is destined for high-throughput pipelines in genome-wide medical diagnostics labs or microarray core facilities aimed at providing fully automated service to their users.
Scalable splitting algorithms for big-data interferometric imaging in the SKA era
NASA Astrophysics Data System (ADS)
Onose, Alexandru; Carrillo, Rafael E.; Repetti, Audrey; McEwen, Jason D.; Thiran, Jean-Philippe; Pesquet, Jean-Christophe; Wiaux, Yves
2016-11-01
In the context of next-generation radio telescopes, like the Square Kilometre Array (SKA), the efficient processing of large-scale data sets is extremely important. Convex optimization tasks under the compressive sensing framework have recently emerged and provide both enhanced image reconstruction quality and scalability to increasingly larger data sets. We focus herein mainly on scalability and propose two new convex optimization algorithmic structures able to solve the convex optimization tasks arising in radio-interferometric imaging. They rely on proximal splitting and forward-backward iterations and can be seen, by analogy with the CLEAN major-minor cycle, as running sophisticated CLEAN-like iterations in parallel in multiple data, prior, and image spaces. Both methods support any convex regularization function, in particular the well-studied ℓ1 priors promoting image sparsity in an adequate domain. Tailored for big data, they employ parallel and distributed computations to achieve scalability in terms of memory and computational requirements. One of them also exploits randomization, over data blocks at each iteration, offering further flexibility. We present simulation results showing the feasibility of the proposed methods as well as their advantages compared to state-of-the-art algorithmic solvers. Our MATLAB code is available online on GitHub.
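To make the splitting machinery concrete, here is the simplest serial instance of a forward-backward iteration with an ℓ1 prior (plain ISTA on a dense measurement matrix). The authors' algorithms distribute and parallelize steps of this kind across data, prior, and image spaces; this sketch is illustrative, not their implementation:

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximity operator of tau * ||.||_1 (the 'backward' step)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def forward_backward(A, y, lam, n_iter=200):
    """ISTA: minimize 0.5 * ||A x - y||^2 + lam * ||x||_1 by alternating
    a gradient step on the data term with the l1 prox on the prior."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)             # forward (gradient) step direction
        x = soft_threshold(x - step * grad, step * lam)
    return x
```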
ERIC Educational Resources Information Center
Gothelf, Doron; Furfaro, Joyce A.; Penniman, Lauren C.; Glover, Gary H.; Reiss, Allan L.
2005-01-01
Studying the biological mechanisms underlying mental retardation and developmental disabilities (MR/DD) is a very complex task. This is due to the wide heterogeneity of etiologies and pathways that lead to MR/DD. Breakthroughs in genetics and molecular biology and the development of sophisticated brain imaging techniques during the last decades…
Advanced Pediatric Brain Imaging Research and Training Program
2014-10-01
…death and disability in children. Recent advances in pediatric magnetic resonance imaging (MRI) techniques are revolutionizing our understanding of… principles of pediatric brain injury and recovery following injury, as well as the clinical application of sophisticated MRI techniques. Keywords: MRI, brain injury.
Introductory review on `Flying Triangulation': a motion-robust optical 3D measurement principle
NASA Astrophysics Data System (ADS)
Ettl, Svenja
2015-04-01
'Flying Triangulation' (FlyTri) is a recently developed principle which allows for motion-robust optical 3D measurement of rough surfaces. It combines a simple sensor with sophisticated algorithms: a single-shot sensor acquires 2D camera images. From each camera image, a 3D profile is generated. The series of 3D profiles so generated is aligned by algorithms, without relying on any external tracking device. The principle delivers real-time feedback of the measurement process, which enables an all-around measurement of objects. It has great potential for small-space acquisition environments, such as the measurement of the interior of a car, and motion-sensitive measurement tasks, such as the intraoral measurement of teeth. This article gives an overview of the basic ideas and applications of FlyTri. The main challenges and their solutions are discussed. Measurement examples are also given to demonstrate the potential of the measurement principle.
Dust-penetrating (DUSPEN) see-through lidar for helicopter situational awareness in DVE
NASA Astrophysics Data System (ADS)
Murray, James T.; Seely, Jason; Plath, Jeff; Gotfredson, Eric; Engel, John; Ryder, Bill; Van Lieu, Neil; Goodwin, Ron; Wagner, Tyler; Fetzer, Greg; Kridler, Nick; Melancon, Chris; Panici, Ken; Mitchell, Anthony
2013-10-01
Areté Associates recently developed and flight tested a next-generation low-latency near real-time dust-penetrating (DUSPEN) imaging lidar system. These tests were accomplished for Naval Air Warfare Center (NAWC) Aircraft Division (AD) 4.5.6 (EO/IR Sensor Division) under the Office of Naval Research (ONR) Future Naval Capability (FNC) Helicopter Low-Level Operations (HELO) Product 2 program. Areté's DUSPEN system captures full lidar waveforms and uses sophisticated real-time detection and filtering algorithms to discriminate hard target returns from dust and other obscurants. Down-stream 3D image processing methods are used to enhance pilot visualization of threat objects and ground features during severe DVE conditions. This paper presents results from these recent flight tests in full brown-out conditions at Yuma Proving Grounds (YPG) from a CH-53E Super Stallion helicopter platform.
1999-04-01
NASA's Space Optics Manufacturing Center has been working to expand our view of the universe via sophisticated new telescopes. The Optics Center's goal is to develop low-cost, advanced space optics technologies for the NASA program in the 21st century - including the long-term goal of imaging Earth-like planets in distant solar systems. To reduce the cost of mirror fabrication, Marshall Space Flight Center (MSFC) has developed replication techniques, the machinery, and materials to replicate electro-formed nickel mirrors. The process allows fabricating precisely shaped mandrels to be used and reused as masters for replicating high-quality mirrors. MSFC's Space Optics Manufacturing Technology Center (SOMTC) has grinding and polishing equipment ranging from conventional spindles to custom-designed polishers. These capabilities allow us to grind precisely and polish a variety of optical devices, including x-ray mirror mandrels. This image shows Charlie Griffith polishing the half-meter mandrel at SOMTC.
The Optics Option: Preparing For A Career In Optics
NASA Astrophysics Data System (ADS)
Hartmann, Rudolf
1989-04-01
We live in a visual world. Without vision, our perception of the environment would be severely limited. Visual stimuli are seen, recorded, and processed in many different ways. Astronomy, the process of imaging distant objects, and microscopy, the process of magnifying minute detail, are extensions of vision. Other extensions of vision include seeing things in different spectra, processing images for enhancement, making decisions automatically, and guiding and controlling sophisticated, complex industrial and military equipment. Optics is the study of this vision and its applications. Optics is a fascinating field that is growing rapidly. Students and practitioners of optics are attracted to the field for a variety of reasons. Hobbies such as photography, astronomy, and video recording, as well as academic pursuits, such as a high school physics or science project, may spawn an interest in optics; however, college training is the cornerstone of an optics career. Optics is part of physics, and as such requires coursework in the areas of geometrical optics, physical optics, spectroscopy, electricity, magnetism, and solid state physics. In addition, mathematics is extremely important for optics design, analysis, and modeling. Optics is the successful synergism of these many disciplines. Many colleges and universities offer undergraduate and graduate optics curricula. The University of Rochester's Institute of Optics and the Optical Sciences Center of the University of Arizona are the most prestigious of these institutions. Further, such societies as the Optical Society of America (OSA) and the International Society for Optical Engineering (SPIE) offer a wide variety of valuable short courses, tutorials, seminars, and papers at conferences that are held several times a year. Traditional optics fields, such as optometry, the examination of the eye and correction of its defects, or ophthalmology, the study of disease and treatment of the eye, are optics-oriented careers. Exciting new fields, such as optical communication, optical computing, phase conjugation, adaptive optics, and holography, are expanding the scope of optics technologies. Development of sophisticated military EO systems presents one of the greatest opportunities and challenges in the optics world today.
A new concept for medical imaging centered on cellular phone technology.
Granot, Yair; Ivorra, Antoni; Rubinsky, Boris
2008-04-30
According to World Health Organization reports, some three quarters of the world population does not have access to medical imaging. In addition, in developing countries over 50% of medical equipment that is available is not being used because it is too sophisticated or in disrepair or because the health personnel are not trained to use it. The goal of this study is to introduce and demonstrate the feasibility of a new concept in medical imaging that is centered on cellular phone technology and which may provide a solution to medical imaging in underserved areas. The new system replaces the conventional stand-alone medical imaging device with a new medical imaging system made of two independent components connected through cellular phone technology. The independent units are: a) a data acquisition device (DAD) at a remote patient site that is simple, with limited controls and no image display capability and b) an advanced image reconstruction and hardware control multiserver unit at a central site. The cellular phone technology transmits unprocessed raw data from the patient site DAD and receives and displays the processed image from the central site. (This is different from conventional telemedicine where the image reconstruction and control is at the patient site and telecommunication is used to transmit processed images from the patient site). The primary goal of this study is to demonstrate that the cellular phone technology can function in the proposed mode. The feasibility of the concept is demonstrated using a new frequency division multiplexing electrical impedance tomography system, which we have developed for dynamic medical imaging, as the medical imaging modality. The system is used to image through a cellular phone a simulation of breast cancer tumors in a medical imaging diagnostic mode and to image minimally invasive tissue ablation with irreversible electroporation in a medical imaging interventional mode.
Different coding strategies for the perception of stable and changeable facial attributes.
Taubert, Jessica; Alais, David; Burr, David
2016-09-01
Perceptual systems face competing requirements: improving signal-to-noise ratios of noisy images, by integration; and maximising sensitivity to change, by differentiation. Both processes occur in human vision, under different circumstances: they have been termed priming, or serial dependencies, leading to positive sequential effects; and adaptation or habituation, which leads to negative sequential effects. We reasoned that for stable attributes, such as the identity and gender of faces, the system should integrate: while for changeable attributes like facial expression, it should also engage contrast mechanisms to maximise sensitivity to change. Subjects viewed a sequence of images varying simultaneously in gender and expression, and scored each as male or female, and happy or sad. We found strong and consistent positive serial dependencies for gender, and negative dependency for expression, showing that both processes can operate at the same time, on the same stimuli, depending on the attribute being judged. The results point to highly sophisticated mechanisms for optimizing use of past information, either by integration or differentiation, depending on the permanence of that attribute.
NASA Technical Reports Server (NTRS)
Cardullo, Frank M.; Lewis, Harold W., III; Panfilov, Peter B.
2007-01-01
An extremely innovative approach has been presented, which is to have the surgeon operate through a simulator running in real-time, enhanced with an intelligent controller component to improve the safety and efficiency of a remotely conducted operation. The use of a simulator enables the surgeon to operate in a virtual environment free from the impediments of telecommunication delay. The simulator functions as a predictor, and periodically the simulator state is corrected with truth data. Three major research areas must be explored in order to ensure achieving the objectives: simulator as predictor, image processing, and intelligent control. Each is equally necessary for the success of the project, and each involves a significant intelligent component. These are diverse, interdisciplinary areas of investigation, thereby requiring a highly coordinated effort by all the members of our team to ensure an integrated system. The following is a brief discussion of those areas. Simulator as a predictor: The delays encountered in remote robotic surgery will be greater than any encountered in human-machine systems analysis, with the possible exception of remote operations in space. Therefore, novel compensation techniques will be developed. Included will be the development of the real-time simulator, which is at the heart of our approach. The simulator will present real-time, stereoscopic images and artificial haptic stimuli to the surgeon. Image processing: Because of the delay and the possibility of insufficient bandwidth, a high level of novel image processing is necessary. This image processing will include several innovative aspects, including image interpretation, video-to-graphical conversion, texture extraction, geometric processing, image compression, and image generation at the surgeon station. Intelligent control: Since the approach we propose is in a sense predictor-based, albeit with a very sophisticated predictor, a controller which not only optimizes end-effector trajectory but also avoids error is essential. We propose to investigate two different approaches to the controller design. One approach employs an optimal controller based on modern control theory; the other involves soft computing techniques, i.e., fuzzy logic, neural networks, genetic algorithms, and hybrids of these.
InterFace: A software package for face image warping, averaging, and principal components analysis.
Kramer, Robin S S; Jenkins, Rob; Burton, A Mike
2017-12-01
We describe InterFace, a software package for research in face recognition. The package supports image warping, reshaping, averaging of multiple face images, and morphing between faces. It also supports principal components analysis (PCA) of face images, along with tools for exploring the "face space" produced by PCA. The package uses a simple graphical user interface, allowing users to perform these sophisticated image manipulations without any need for programming knowledge. The program is available for download in the form of an app, which requires that users also have access to the (freely available) MATLAB Runtime environment.
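A minimal sketch of the eigenface-style PCA underlying the "face space", assuming aligned face images flattened into rows of a matrix; this illustrates the technique generically and is not InterFace's internal code:

```python
import numpy as np

def face_space(faces, n_components=20):
    """PCA on aligned face images, one flattened image per row.
    Returns the mean face and the leading 'eigenfaces'."""
    mean_face = faces.mean(axis=0)
    centered = faces - mean_face
    # SVD of the centered data; rows of Vt are the principal directions
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    return mean_face, Vt[:n_components]

def project(face, mean_face, eigenfaces):
    """Coordinates of one face in the face space."""
    return eigenfaces @ (face - mean_face)
```

Averaging or morphing then amounts to arithmetic on these coordinates, followed by reconstruction with the same eigenfaces.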
Do, Bao H; Wu, Andrew; Biswal, Sandip; Kamaya, Aya; Rubin, Daniel L
2010-11-01
Storing and retrieving radiology cases is an important activity for education and clinical research, but this process can be time-consuming. In the process of structuring reports and images into organized teaching files, incidental pathologic conditions not pertinent to the primary teaching point can be omitted, as when a user saves images of an aortic dissection case but disregards the incidental osteoid osteoma. An alternate strategy for identifying teaching cases is text search of reports in radiology information systems (RIS), but retrieved reports are unstructured, teaching-related content is not highlighted, and patient identifying information is not removed. Furthermore, searching unstructured reports requires sophisticated retrieval methods to achieve useful results. An open-source, RadLex®-compatible teaching file solution called RADTF, which uses natural language processing (NLP) methods to process radiology reports, was developed to create a searchable teaching resource from the RIS and the picture archiving and communication system (PACS). The NLP system extracts and de-identifies teaching-relevant statements from full reports to generate a stand-alone database, thus converting existing RIS archives into an on-demand source of teaching material. Using RADTF, the authors generated a semantic search-enabled, Web-based radiology archive containing over 700,000 cases with millions of images. RADTF combines a compact representation of the teaching-relevant content in radiology reports and a versatile search engine with the scale of the entire RIS-PACS collection of case material. ©RSNA, 2010
Application of failure mode and effect analysis in a radiology department.
Thornton, Eavan; Brook, Olga R; Mendiratta-Lala, Mishal; Hallett, Donna T; Kruskal, Jonathan B
2011-01-01
With increasing deployment, complexity, and sophistication of equipment and related processes within the clinical imaging environment, system failures are more likely to occur. These failures may have varying effects on the patient, ranging from no harm to devastating harm. Failure mode and effect analysis (FMEA) is a tool that permits the proactive identification of possible failures in complex processes and provides a basis for continuous improvement. This overview of the basic principles and methodology of FMEA provides an explanation of how FMEA can be applied to clinical operations in a radiology department to reduce, predict, or prevent errors. The six sequential steps in the FMEA process are explained, and clinical magnetic resonance imaging services are used as an example for which FMEA is particularly applicable. A modified version of traditional FMEA called Healthcare Failure Mode and Effect Analysis, which was introduced by the U.S. Department of Veterans Affairs National Center for Patient Safety, is briefly reviewed. In conclusion, FMEA is an effective and reliable method to proactively examine complex processes in the radiology department. FMEA can be used to highlight the high-risk subprocesses and allows these to be targeted to minimize the future occurrence of failures, thus improving patient safety and streamlining the efficiency of the radiology department. RSNA, 2010
Evaluation of Lunar Dark Mantle Deposits as Key to Future Human Missions
NASA Technical Reports Server (NTRS)
Coombs, Cassandra
1997-01-01
I proposed to continue detailed mapping, analysis and assessment of the lunar pyroclastic dark mantle deposits in support of the Human Exploration and Development of Space (HEDS) initiative. Specifically: (1) I continued gathering data via the Internet and mailable media, and a variety of other digital lunar images including; high resolution digital images of the new Apollo masters from JSC, images from Clementine and Galileo, and recent telescopic images from Hawaii; (2) continued analyses on these images using sophisticated hardware and software at JSC and the College of Charleston to determine and map composition using returned sample data for calibration; (3) worked closely with Dr. David McKay and others at JSC to relate sample data to image data using laboratory spectra from JSC and Brown University; (4) mapped the extent, thickness, and composition of important dark mantle deposits in selected study areas; and (5) began composing a geographically referenced database of lunar pyroclastic materials in the Apollo 17 area. The results have been used to identify and evaluate several candidate landing sites in dark mantle terrains. Additional work spawned from this effort includes the development of an educational CD-Rom on exploring the Moon: Contact Light. Throughout the whole process I have been in contact with the JSC HEDS personnel.
NASA Astrophysics Data System (ADS)
Ouma, Yashon O.
2016-01-01
Technologies for imaging the surface of the Earth through satellite-based Earth observations (EO) have evolved enormously over the past 50 years. The trends are likely to evolve further as the user community grows and its awareness of and demand for EO data also increase. In this review paper, a development trend in EO imaging systems is presented with the objective of deriving the evolving patterns for the EO user community. From the review and analysis of medium-to-high resolution EO-based land-surface sensor missions, it is observed that there is a predictive pattern in the EO evolution trends, such that every 10-15 years more sophisticated EO imaging systems with application-specific capabilities emerge. Such new systems, as determined in this review, are likely to comprise agile, small payload-mass EO land-surface imaging satellites with the ability for high-velocity data transmission and huge volumes of spatial, spectral, temporal, and radiometric resolution data. This availability of data will magnify the phenomenon of "Big Data" in Earth observation. Because of the "Big Data" issue, new computing and processing platforms such as telegeoprocessing and grid computing are expected to be incorporated in EO data processing and distribution networks. In general, it is observed that the demand for EO is growing exponentially as the applications and cost-benefits are being recognized in support of resource management.
Spaceborne Imaging Radar Symposium
NASA Technical Reports Server (NTRS)
Elachi, C.
1983-01-01
An overview of the present state of the art in the different scientific and technological fields related to spaceborne imaging radars was presented. The data acquired with the SEASAT SAR (1978) and the Shuttle Imaging Radar, SIR-A (1981), clearly demonstrated that the important emphasis in the '80s is going to be on in-depth research investigations conducted with the more flexible and sophisticated SIR series instruments and on long-term monitoring of geophysical phenomena conducted from free-flying platforms such as ERS-1 and RADARSAT.
NASA Astrophysics Data System (ADS)
Coleman, L. W.
1985-01-01
Progress in laser fusion research has increased the need for detail and precision in the diagnosis of experiments. This has spawned the development and use of sophisticated sub-nanosecond resolution diagnostic systems. These systems typically use ultrafast X-ray or optical streak cameras in combination with spatially imaging or spectrally dispersing elements. These instruments provide high resolution data essential for understanding the processes occurring in the interaction of high intensity laser light with targets. Several of these types of instruments and their capabilities will be discussed. The utilization of these kinds of diagnostics systems on the nearly completed 100 kJ Nova laser facility will be described.
NASA Astrophysics Data System (ADS)
Wang, Bingyuan; Zhang, Yao; Liu, Dongyuan; Ding, Xuemei; Dan, Mai; Pan, Tiantian; Wang, Yihan; Li, Jiao; Zhou, Zhongxing; Zhang, Limin; Zhao, Huijuan; Gao, Feng
2018-02-01
Functional near-infrared spectroscopy (fNIRS) is a non-invasive neuroimaging method for monitoring cerebral hemodynamics through optical changes measured at the scalp surface. It has played an increasingly important role in the psychology and medical imaging communities. Real-time imaging of brain function using NIRS makes it possible to explore sophisticated human brain functions unexplored before. A Kalman estimator has frequently been used in combination with modified Beer-Lambert law (MBLL) based optical topography (OT) for real-time brain function imaging. However, the spatial resolution of OT is low, hampering its application to more complicated brain functions. In this paper, we develop a real-time imaging method combining diffuse optical tomography (DOT) and a Kalman estimator, greatly improving the spatial resolution. Instead of presenting only a spatially distributed image of the changes in the absorption coefficients at each time point during the recording process, a single real-time updated image is provided using the Kalman estimator; each voxel represents the amplitude of the hemodynamic response function (HRF) associated with that voxel. We evaluate this method in simulation experiments, demonstrating that it can obtain images with more reliable spatial resolution. Furthermore, a statistical analysis is conducted to help decide whether a voxel in the field of view is activated or not.
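A minimal sketch of the per-voxel estimator idea: a random-walk state model for the HRF amplitude, updated by one scalar Kalman step per reconstructed frame. The noise variances q and r and the regressor h are illustrative assumptions, not the authors' settings:

```python
def kalman_update(beta, P, y, h, q=1e-5, r=1e-2):
    """One Kalman step for a random-walk amplitude model:
         state:       beta_k = beta_{k-1} + w,   w ~ N(0, q)
         measurement: y_k    = h_k * beta_k + v, v ~ N(0, r)
    y is the reconstructed absorption change in one voxel at frame k,
    h is the value of the canonical HRF regressor at frame k, and
    beta is the running estimate of the HRF amplitude for that voxel."""
    P = P + q                          # predict: state uncertainty grows
    K = P * h / (h * h * P + r)        # Kalman gain
    beta = beta + K * (y - h * beta)   # correct with the innovation
    P = (1.0 - K * h) * P              # updated uncertainty
    return beta, P
```

Running this update for every voxel as each DOT frame is reconstructed yields the real-time amplitude image described above.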
Research on properties of an infrared imaging diffractive element
NASA Astrophysics Data System (ADS)
Rachoń, M.; Wegrzyńska, K.; Doch, M.; Kołodziejczyk, A.; Siemion, A.; Suszek, J.; Kakarenko, K.; Sypek, M.
2014-09-01
Novel thermovision imaging systems with high efficiency require very sophisticated optical components. This paper describes diffractive optical elements designed for wavelengths between 8 and 14 μm for application in FLIR cameras. The authors present phase-only diffractive elements manufactured in etched gallium arsenide. Due to the simplicity of the manufacturing process, only binary phase elements were designed and manufactured. Such a solution exhibits huge chromatic aberration. Moreover, the performance of such elements is rather poor, which is caused by two factors. The first is the limited diffraction efficiency (ca. 40%) of binary phase structures. The second lies in the Fresnel losses coming from reflection at the two surfaces (around 50%). The performance of these structures is limited and the imaging contrast is poor. However, such structures can be used for relatively cheap practical testing of new ideas; for example, this solution is sufficient for point spread function (PSF) measurements. Different diffractive elements were compared. The first was the equivalent of a lens designed on the basis of the paraxial approximation. For the second, a non-paraxial design approach was used, because f/# was equal to 1. With the non-paraxial design, the focal spot is smaller and better focused. Moreover, binary phase structures suffer from huge chromatic aberrations. Finally, it is shown that a non-paraxially designed optical element imaging with extended depth of focus (light-sword) can suppress chromatic aberration and therefore create an image not only in the nominal image plane.
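The roughly 40% efficiency ceiling quoted above is the textbook first-order diffraction efficiency of a two-level (0/π) phase profile with 50% duty cycle; a short derivation:

```latex
% First-order Fourier coefficient of a binary 0/\pi phase grating
% with period \Lambda and equal half-periods:
c_1 = \frac{1}{\Lambda}\int_0^{\Lambda} e^{i\phi(x)}\,e^{-2\pi i x/\Lambda}\,dx
    = -\frac{2i}{\pi},
\qquad
\eta_1 = |c_1|^2 = \frac{4}{\pi^2} \approx 40.5\%
```

The roughly 50% Fresnel loss is likewise consistent with uncoated gallium arsenide: assuming n ≈ 3.3 in the LWIR, each surface reflects ((n−1)/(n+1))² ≈ 29%, so two surfaces transmit about (1−0.29)² ≈ 51% of the light.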
"Big Data" in Rheumatology: Intelligent Data Modeling Improves the Quality of Imaging Data.
Landewé, Robert B M; van der Heijde, Désirée
2018-05-01
Analysis of imaging data in rheumatology is a challenge. Reliability of scores is an issue for several reasons. Signal-to-noise ratio of most imaging techniques is rather unfavorable (too little signal in relation to too much noise). Optimal use of all available data may help to increase credibility of imaging data, but knowledge of complicated statistical methodology and the help of skilled statisticians are required. Clinicians should appreciate the merits of sophisticated data modeling and liaise with statisticians to increase the quality of imaging results, as proper imaging studies in rheumatology imply more than a supersensitive imaging technique alone. Copyright © 2018 Elsevier Inc. All rights reserved.
Sensing Super-position: Visual Instrument Sensor Replacement
NASA Technical Reports Server (NTRS)
Maluf, David A.; Schipper, John F.
2006-01-01
The coming decade of fast, cheap, and miniaturized electronics and sensory devices opens new pathways for the development of sophisticated equipment to overcome limitations of the human senses. This project addresses the technical feasibility of augmenting human vision through Sensing Super-position using a Visual Instrument Sensory Organ Replacement (VISOR). The current implementation of the VISOR device translates visual and other passive or active sensory instruments into sounds, which become relevant when the visual resolution is insufficient for very difficult and particular sensing tasks. A successful Sensing Super-position meets many human and pilot-vehicle system requirements. The system can be further developed into a cheap, portable, low-power device, taking into account the limited capabilities of the human user as well as the typical characteristics of his dynamic environment. The system operates in real time, giving the desired information for the particular augmented sensing tasks. The Sensing Super-position device increases perceived image resolution via an auditory representation in addition to the visual representation. Auditory mapping is performed to distribute an image in time. The three-dimensional spatial brightness and multi-spectral maps of a sensed image are processed using real-time image processing techniques (e.g., histogram normalization) and transformed into a two-dimensional map of an audio signal as a function of frequency and time. This paper details the approach of developing Sensing Super-position systems as a way to augment the human vision system by exploiting the capabilities of the human hearing system as an additional neural input. The human hearing system is capable of learning to process and interpret extremely complicated and rapidly changing auditory patterns. The known capabilities of the human hearing system to learn and understand complicated auditory patterns provided the basic motivation for developing an image-to-sound mapping system.
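The image-to-sound mapping lends itself to a small sketch. This is one plausible column-scan scheme (vertical position mapped to pitch, brightness to amplitude); the actual VISOR mapping, parameter values, and function names here are assumptions:

```python
import numpy as np

def image_to_audio(image, duration=1.0, fs=44100, f_lo=200.0, f_hi=8000.0):
    """Scan a grayscale image (H x W, values 0-255) left-to-right in time.
    Each row drives a sinusoid whose frequency encodes vertical position
    (top -> high pitch) and whose amplitude encodes pixel brightness."""
    h, w = image.shape
    n = int(fs * duration)
    t = np.linspace(0.0, duration, n, endpoint=False)
    freqs = np.geomspace(f_hi, f_lo, h)              # one pitch per row
    cols = np.minimum((t / duration * w).astype(int), w - 1)
    amps = image[:, cols] / 255.0                    # (h, n) amplitude envelope
    audio = (amps * np.sin(2 * np.pi * freqs[:, None] * t)).sum(axis=0)
    return audio / (np.max(np.abs(audio)) + 1e-12)   # normalize to [-1, 1]
```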
Fuzzy-based propagation of prior knowledge to improve large-scale image analysis pipelines
Mikut, Ralf
2017-01-01
Many automatically analyzable scientific questions are well-posed and a variety of information about expected outcomes is available a priori. Although often neglected, this prior knowledge can be systematically exploited to make automated analysis operations sensitive to a desired phenomenon or to evaluate extracted content with respect to this prior knowledge. For instance, the performance of processing operators can be greatly enhanced by a more focused detection strategy and by direct information about the ambiguity inherent in the extracted data. We present a new concept that increases the result quality awareness of image analysis operators by estimating and distributing the degree of uncertainty involved in their output based on prior knowledge. This allows the use of simple processing operators that are suitable for analyzing large-scale spatiotemporal (3D+t) microscopy images without compromising result quality. On the foundation of fuzzy set theory, we transform available prior knowledge into a mathematical representation and extensively use it to enhance the result quality of various processing operators. These concepts are illustrated on a typical bioimage analysis pipeline comprised of seed point detection, segmentation, multiview fusion and tracking. The functionality of the proposed approach is further validated on a comprehensive simulated 3D+t benchmark data set that mimics embryonic development and on large-scale light-sheet microscopy data of a zebrafish embryo. The general concept introduced in this contribution represents a new approach to efficiently exploit prior knowledge to improve the result quality of image analysis pipelines. The generality of the concept makes it applicable to practically any field with processing strategies that are arranged as linear pipelines. The automated analysis of terabyte-scale microscopy data will especially benefit from sophisticated and efficient algorithms that enable a quantitative and fast readout. PMID:29095927
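Where the text describes transforming prior knowledge into a mathematical representation, a minimal sketch of the fuzzy-set idea: encode an expected range as a trapezoidal membership function and use it to weight extracted objects. The function form and the example size range are illustrative assumptions, not the paper's actual rules:

```python
import numpy as np

def trapezoidal_membership(x, a, b, c, d):
    """Fuzzy membership: 0 outside [a, d], 1 on [b, c],
    linear ramps on [a, b] and [c, d]."""
    x = np.asarray(x, dtype=float)
    rising = np.clip((x - a) / (b - a), 0.0, 1.0)
    falling = np.clip((d - x) / (d - c), 0.0, 1.0)
    return np.minimum(rising, falling)

# Hypothetical prior: detected nuclei are expected to be 5-12 um across,
# and anything outside 3-16 um is implausible.
sizes = np.array([2.0, 6.0, 10.0, 14.0, 20.0])
weights = trapezoidal_membership(sizes, a=3.0, b=5.0, c=12.0, d=16.0)
# -> [0.0, 1.0, 1.0, 0.5, 0.0]: downstream operators can propagate these
#    degrees of uncertainty instead of making hard accept/reject calls.
```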
Using hyperspectral remote sensing for land cover classification
NASA Astrophysics Data System (ADS)
Zhang, Wendy W.; Sriharan, Shobha
2005-01-01
This project used a hyperspectral data set to classify land cover using remote sensing techniques. Many different earth-sensing satellites, with diverse sensors mounted on sophisticated platforms, are currently in earth orbit. These sensors are designed to cover a wide range of the electromagnetic spectrum and are generating enormous amounts of data that must be processed, stored, and made available to the user community. The Airborne Visible-Infrared Imaging Spectrometer (AVIRIS) collects data in 224 contiguous bands, each approximately 9.6 nm wide, between 0.40 and 2.45 μm. Hyperspectral sensors acquire images in many very narrow, contiguous spectral bands throughout the visible, near-IR, and thermal IR portions of the spectrum. The unsupervised image classification procedure automatically categorizes the pixels in an image into land cover classes or themes. Experiments on using hyperspectral remote sensing for land cover classification were conducted during the 2003 and 2004 NASA Summer Faculty Fellowship Program at Stennis Space Center. Research Systems Inc.'s (RSI) ENVI software package was used in this application framework. In this application, emphasis was placed on: (1) spectrally oriented classification procedures for land cover mapping, particularly supervised surface classification using AVIRIS data; and (2) identifying data endmembers.
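The unsupervised classification step can be sketched by clustering per-pixel spectra; k-means is used here as an illustrative stand-in for the classifier actually run in ENVI, and all names are hypothetical:

```python
import numpy as np
from sklearn.cluster import KMeans

def unsupervised_classify(cube, n_classes=8):
    """Cluster a hyperspectral cube (H x W x bands) into land-cover
    themes by treating each pixel's spectrum as a feature vector."""
    h, w, bands = cube.shape
    spectra = cube.reshape(-1, bands).astype(np.float64)
    labels = KMeans(n_clusters=n_classes, n_init=10).fit_predict(spectra)
    return labels.reshape(h, w)   # thematic class map
```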
Capturing intraoperative deformations: research experience at Brigham and Women's Hospital.
Warfield, Simon K; Haker, Steven J; Talos, Ion-Florin; Kemper, Corey A; Weisenfeld, Neil; Mewes, Andrea U J; Goldberg-Zimring, Daniel; Zou, Kelly H; Westin, Carl-Fredrik; Wells, William M; Tempany, Clare M C; Golby, Alexandra; Black, Peter M; Jolesz, Ferenc A; Kikinis, Ron
2005-04-01
During neurosurgical procedures the objective of the neurosurgeon is to resect as much diseased tissue as possible while preserving healthy brain tissue. The restricted capacity of the conventional operating room to enable the surgeon to visualize critical healthy brain structures and the tumor margin has led, over the past decade, to the development of sophisticated intraoperative imaging techniques to enhance visualization. However, both rigid motion due to patient placement and nonrigid deformations occurring as a consequence of the surgical intervention disrupt the correspondence between preoperative data used to plan surgery and the intraoperative configuration of the patient's brain. Similar challenges are faced in other interventional therapies, such as cryoablation of the liver or biopsy of the prostate. We have developed algorithms to model the motion of key anatomical structures, and system implementations that enable us to estimate the deformation of the critical anatomy from sequences of volumetric images and to prepare updated fused visualizations of preoperative and intraoperative images at a rate compatible with surgical decision making. This paper reviews the experience at Brigham and Women's Hospital through the process of developing and applying novel algorithms for capturing intraoperative deformations in support of image-guided therapy.
Boone, John M; Yang, Kai; Burkett, George W; Packard, Nathan J; Huang, Shih-ying; Bowen, Spencer; Badawi, Ramsey D; Lindfors, Karen K
2010-02-01
Mammography has served the population of women who are at risk for breast cancer well over the past 30 years. While mammography has undergone a number of changes as digital detector technology has advanced, other modalities such as computed tomography have experienced technological sophistication over this same time frame as well. The advent of large field-of-view flat-panel detector systems has enabled the development of breast CT and several other niche CT applications which rely on cone beam geometry. The breast, it turns out, is well suited to cone beam CT imaging because the lack of bones reduces artifacts, and the natural tapering of the breast anteriorly reduces the x-ray path lengths through the breast at large cone angles, reducing cone beam artifacts as well. We are in the process of designing a third prototype system which will enable the use of breast CT for image-guided interventional procedures. This system will have several copies fabricated so that several breast CT scanners can be used in a multi-institutional clinical trial to better understand the role that this technology can bring to breast imaging.
Musical Sophistication and the Effect of Complexity on Auditory Discrimination in Finnish Speakers.
Dawson, Caitlin; Aalto, Daniel; Šimko, Juraj; Vainio, Martti; Tervaniemi, Mari
2017-01-01
Musical experiences and native language are both known to affect auditory processing. The present work aims to disentangle the influences of native language phonology and musicality on behavioral and subcortical sound feature processing in a population of musically diverse Finnish speakers as well as to investigate the specificity of enhancement from musical training. Finnish speakers are highly sensitive to duration cues since in Finnish, vowel and consonant duration determine word meaning. Using a correlational approach with a set of behavioral sound feature discrimination tasks, brainstem recordings, and a musical sophistication questionnaire, we find no evidence for an association between musical sophistication and more precise duration processing in Finnish speakers either in the auditory brainstem response or in behavioral tasks, but they do show an enhanced pitch discrimination compared to Finnish speakers with less musical experience and show greater duration modulation in a complex task. These results are consistent with a ceiling effect set for certain sound features which corresponds to the phonology of the native language, leaving an opportunity for music experience-based enhancement of sound features not explicitly encoded in the language (such as pitch, which is not explicitly encoded in Finnish). Finally, the pattern of duration modulation in more musically sophisticated Finnish speakers suggests integrated feature processing for greater efficiency in a real world musical situation. These results have implications for research into the specificity of plasticity in the auditory system as well as to the effects of interaction of specific language features with musical experiences.
NASA Astrophysics Data System (ADS)
Schiepers, Christiaan; Hoh, Carl K.; Dahlbom, Magnus; Wu, Hsiao-Ming; Phelps, Michael E.
1999-05-01
PET imaging can quantify metabolic processes in vivo; this requires the measurement of an input function, which is invasive and labor-intensive. A non-invasive, semi-automated, image-based method of input function generation would be efficient, patient-friendly, and allow quantitative PET to be applied routinely. A fully automated procedure would be ideal for studies across institutions. Factor analysis (FA) was applied as a processing tool for the definition of temporally changing structures in the field of view. FA has been proposed earlier, but the perceived mathematical difficulty has prevented widespread use. FA was utilized to delineate structures and extract blood and tissue time-activity curves (TACs). These TACs were used as input and output functions for tracer kinetic modeling, the results of which were compared with those from an input function obtained with serial blood sampling. Dynamic image data of myocardial perfusion studies with N-13 ammonia, O-15 water, or Rb-82, cancer studies with F-18 FDG, and skeletal studies with F-18 fluoride were evaluated. Correlation coefficients of kinetic parameters obtained with factor and plasma input functions were high. Linear regression usually furnished a slope near unity. Processing time was 7 min/patient on an UltraSPARC. Conclusion: FA can non-invasively generate input functions from image data, eliminating the need for blood sampling. Output (tissue) functions can be generated simultaneously. The method is simple, requires no sophisticated operator interaction and has little inter-operator variability. FA is well suited for studies across institutions and standardized evaluations.
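The factor-analysis step can be pictured as a blind decomposition of the dynamic data into a few time-activity curves plus their spatial weights. The sketch below uses non-negative matrix factorization as a stand-in for the authors' FA formulation (which is not detailed in the abstract); the array shapes, the two-factor toy data, and all names are illustrative assumptions, not the authors' code.

```python
import numpy as np
from sklearn.decomposition import NMF

def extract_factors(dyn_image, n_factors=2):
    """Decompose voxel time-activity data into factor curves and images.

    dyn_image : (n_voxels, n_frames) non-negative activity array.
    Returns (factor_images, factor_tacs): spatial weights and the
    time-activity curves serving as candidate input/output functions.
    """
    model = NMF(n_components=n_factors, init="nndsvda", max_iter=500)
    factor_images = model.fit_transform(dyn_image)   # (n_voxels, n_factors)
    factor_tacs = model.components_                  # (n_factors, n_frames)
    return factor_images, factor_tacs

# Toy example: two ground-truth kinetics mixed across 1000 voxels
t = np.linspace(0.1, 20, 30)
blood = np.exp(-0.5 * t)            # fast-clearing, blood-like curve
tissue = 1 - np.exp(-0.3 * t)       # accumulating, tissue-like curve
mix = np.random.rand(1000, 2)
data = mix @ np.vstack([blood, tissue])
imgs, tacs = extract_factors(data)  # tacs approximate the two curves
```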
Acharya, Rajendra Udyavara; Yu, Wenwei; Zhu, Kuanyi; Nayak, Jagadish; Lim, Teik-Cheng; Chan, Joey Yiptong
2010-08-01
The human eye is a highly sophisticated organ, with interrelated subsystems such as the retina, pupil, iris, cornea, lens and optic nerve. Cataract, a clouding of the lens, is a major health problem in old age; it is painless, develops slowly over a long period, and gradually diminishes vision, leading to blindness. It is most common around the age of 65, and one third of people of this age worldwide have a cataract in one or both eyes. A system for detecting cataract and for testing the efficacy of post-cataract surgery from optical images is proposed using artificial intelligence techniques. Image processing and a fuzzy K-means clustering algorithm are applied to the raw optical images to extract features specific to the three classes to be classified. The backpropagation algorithm (BPA) is then used for classification. In this work, we used 140 optical images belonging to the three classes. The ANN classifier showed an average rate of 93.3% in detecting normal, cataract and post-cataract optical images. The proposed system exhibited 98% sensitivity and 100% specificity, which indicates that the results are clinically significant. The system can also be used to test the efficacy of a cataract operation by examining post-surgery optical images.
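For illustration, a minimal fuzzy c-means clusterer of the kind named above ("fuzzy K-means") might look as follows; the feature extraction and the BPA classifier are out of scope here, and the function name and parameters are assumptions rather than the authors' implementation.

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy c-means. X is (n_samples, n_features); c clusters;
    m is the fuzzifier. Returns (centers, membership matrix)."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)            # memberships sum to 1
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        # Distances of every sample to every center
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # Standard FCM membership update: u ~ d^(-2/(m-1)), normalized
        U = 1.0 / (d ** (2.0 / (m - 1.0)))
        U /= U.sum(axis=1, keepdims=True)
    return centers, U
```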
Catching the whispers from Uranus
NASA Technical Reports Server (NTRS)
Bartok, C. D.
1986-01-01
Sophisticated telecommunications techniques are described that were used to acquire images of Uranus, its 14 moons and ten narrow rings darker than coal. The images, equal in quality to those transmitted from Saturn several years earlier despite the signal being weaker by 6 dB due to the increased distance, were received from Voyager 2 during its January 24, 1986 flyby of Uranus. Solutions to the problem of the weakening signal were found in modifications to Voyager's image processing system and NASA's ground tracking network. In April 1985, Voyager's prime flight data computer was reconfigured to accept only nonimaging science data, and its backup only imaging data; the latter was reprogrammed to determine only arithmetic differences between adjacent pixel intensities rather than absolute intensities. By such image compression, equivalent imaging information could be sent at lower bit rates. Instead of Golay coding, Reed-Solomon onboard encoding was used. These techniques gained the equivalent of 4 dB in imaging yield. Additional improvements were gained by using earth station antennas in pairs (the Parkes radio telescope and the Canberra ground station antenna). Moves under way to prepare for the Voyager encounter with Neptune in 1989 are described (using additional antennas and arrays, scaling up the Deep Space Network antennas from 64 m to 70 m, etc.) to assure almost Saturn-equivalent pictures despite a further 3.5 dB drop in signal strength.
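The gain from transmitting adjacent-pixel differences comes from entropy reduction: in a smooth image, differences cluster tightly around zero and can be coded in fewer bits per pixel. A toy sketch of that effect (synthetic image and names are illustrative; this is not the Voyager flight code):

```python
import numpy as np

def entropy_bits(values):
    """Shannon entropy (bits per sample) of an integer array."""
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# Smooth synthetic image: neighboring pixels are highly correlated,
# so adjacent-pixel differences compress far better than raw values.
x, y = np.meshgrid(np.arange(256), np.arange(256))
image = (127 + 100 * np.sin(x / 40.0) * np.cos(y / 55.0)).astype(np.int32)

raw_bits = entropy_bits(image.ravel())
diff_bits = entropy_bits(np.diff(image, axis=1).ravel())
print(f"raw: {raw_bits:.2f} b/px, differenced: {diff_bits:.2f} b/px")
```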
Information management of a department of diagnostic imaging.
Vincenzoni, M; Campioni, P; Vecchioli Scaldazza, A; Capocasa, G; Marano, P
1998-01-01
It is well known that while the RIS allows the management of all input and output data of a radiology service, the PACS plays a major role in the management of all radiologic images. However, the two systems should be closely integrated: scheduling of a radiologic exam requires direct automated integration with the image management system for retrieval of previous exams and storage of the exam just completed. A modern information system integrating data and radiologic images should be based on automated workflow management in all its components, while remaining flexible and compatible with ward organization, to support and computerize each stage of the working process. Similarly, the standard protocols (DICOM 3.0, HL7) defined for interfacing the Diagnostic Imaging (D.I.) department with the other components or modules of a modern HIS should be used. They ensure that the system is expandable and accessible, so that information can be shared and integrated with the HIS, the emergency service or the wards. Correct RIS/PACS integration allows a marked improvement in the efficiency of a modern D.I. department, with a positive impact on daily activity, prompt availability of previous data and images, and sophisticated handling of diagnostic images to enhance reporting quality. The increasing diffusion of internet and intranet technology points to future developments still to be discovered.
NASA Astrophysics Data System (ADS)
Dinten, Jean-Marc; Petié, Philippe; da Silva, Anabela; Boutet, Jérôme; Koenig, Anne; Hervé, Lionel; Berger, Michel; Laidevant, Aurélie; Rizo, Philippe
2006-03-01
Optical imaging of fluorescent probes is an essential tool for the investigation of molecular events in small animals for drug development. In order to obtain localization and quantification information on fluorescent labels, CEA-LETI has developed efficient approaches in classical reflectance imaging as well as in diffuse optical tomographic imaging with continuous and temporal signals. This paper presents an overview of the different approaches investigated and their performance. High-quality fluorescence reflectance imaging is obtained thanks to the development of an original "multiple wavelengths" system. The uniformity of the excitation light over the surface area is better than 15%. Combined with the use of adapted fluorescent probes, this system enables accurate detection of pathological tissues, such as nodules, beneath the animal's observed area. Performance for the detection of ovarian nodules on a nude mouse is shown. In order to investigate deeper inside animals and obtain 3D localization, diffuse optical tomography systems are being developed for both slab and cylindrical geometries. For these two geometries, our reconstruction algorithms are based on an analytical expression of light diffusion. Thanks to an accurate introduction of the light/matter interaction process into the algorithms, high-quality reconstructions of tumors in mice have been obtained. Reconstructions of lung tumors in mice are presented. By the use of temporal diffuse optical imaging, localization and quantification performance can be improved, at the price of a more sophisticated acquisition system and more elaborate information processing methods. Such a system, based on a pulsed laser diode and a time-correlated single photon counting system, has been set up. The performance of this system for localization and quantification of fluorescent probes is presented.
[From the x-ray department to the institute for imaging diagnosis].
Voegeli, E; Steck, W
1985-02-01
The increasing sophistication of diagnostic radiology has led to rising emphasis on modality-related training and practice in radiological subspecialties. To accomplish both optimal patient management and a rational, cost-effective analysis of imaging procedures, a comprehensive approach to modern radiology is needed rather than a technology-related attitude. The imaging department, where the various imaging data are synthesized and correlated by a general practitioner of radiology, as opposed to the subspecialty radiologist, is the most suitable solution. The authors describe the principles of management and the layout of such a center.
NASA Astrophysics Data System (ADS)
Konik, Arda; Madsen, Mark T.; Sunderland, John J.
2012-10-01
In human emission tomography, combined PET/CT and SPECT/CT cameras provide accurate attenuation maps for sophisticated scatter and attenuation corrections. Having proven their potential, these scanners are being adapted for small animal imaging using similar correction approaches. However, attenuation and scatter effects in small animal imaging are substantially smaller than in human imaging. Hence, the value of sophisticated corrections is not obvious for small animal imaging, considering the additional cost and complexity of these methods. In this study, using the GATE Monte Carlo package, we simulated the Inveon small animal SPECT (single pinhole collimator) scanner to find the scatter fractions of various sizes of the NEMA-mouse (diameter: 2-5.5 cm, length: 7 cm), NEMA-rat (diameter: 3-5.5 cm, length: 15 cm) and MOBY (diameter: 2.1-5.5 cm, length: 3.5-9.1 cm) phantoms. The simulations were performed for three radionuclides commonly used in small animal SPECT studies: 99mTc (140 keV), 111In (171 keV, 90% and 245 keV, 94%) and 125I (effective 27.5 keV). For the MOBY phantoms, the total Compton scatter fractions ranged (over the range of phantom sizes) from 4-10% for 99mTc (126-154 keV), 7-16% for 111In (154-188 keV), 3-7% for 111In (220-270 keV) and 17-30% for 125I (15-45 keV), including the scatter contributions from the tungsten collimator, lead shield and air (inside and outside the camera heads). For the NEMA-rat phantoms, the scatter fractions ranged from 10-15% (99mTc), 17-23% (111In: 154-188 keV), 8-12% (111In: 220-270 keV) and 32-40% (125I). Our results suggest that energy window methods based solely on emission data are sufficient for all mouse and most rat studies with 99mTc and 111In. However, more sophisticated methods may be needed for 125I.
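Once a Monte Carlo run has tagged each detected photon as scattered or primary, the scatter-fraction bookkeeping itself is straightforward. A hedged sketch, using the 99mTc acquisition window of 126-154 keV quoted above as the default (function and argument names are illustrative):

```python
import numpy as np

def scatter_fraction(energies_kev, scattered_mask, window=(126.0, 154.0)):
    """Scatter fraction inside an acquisition energy window.

    energies_kev   : detected photon energies from a Monte Carlo run
    scattered_mask : boolean array, True where the photon scattered
    window         : (lo, hi) keV acceptance window
    """
    lo, hi = window
    in_win = (energies_kev >= lo) & (energies_kev <= hi)
    total = in_win.sum()
    # SF = scattered counts / all counts accepted in the window
    return float((in_win & scattered_mask).sum()) / total if total else 0.0
```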
Towards Guided Underwater Survey Using Light Visual Odometry
NASA Astrophysics Data System (ADS)
Nawaf, M. M.; Drap, P.; Royer, J. P.; Merad, D.; Saccone, M.
2017-02-01
A light distributed visual odometry method adapted to an embedded hardware platform is proposed. The aim is to guide underwater surveys in real time. We rely on an image stream captured using a portable stereo rig attached to the embedded system. The captured images are analyzed on the fly to assess image quality in terms of sharpness and lightness, so that immediate actions can be taken accordingly. Images are then transferred over the network to another processing unit to compute the odometry. Building on a standard ego-motion estimation approach, we speed up point matching between image quadruplets using a low-level point matching scheme relying on the fast Harris operator and template matching that is invariant to illumination changes. We benefit from having the light source attached to the hardware platform to estimate an a priori rough depth belief following the law of light divergence over distance. The rough depth is used to limit the point-correspondence search zone, as it maps linearly to disparity. A stochastic relative bundle adjustment is applied to minimize re-projection errors. The evaluation of the proposed method demonstrates the gain in computation time w.r.t. other approaches that use more sophisticated feature descriptors. The built system opens promising areas for further development and integration of embedded computer vision techniques.
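A sketch of how an inverse-square brightness prior can bound the disparity search, assuming the lamp is rigidly mounted on the rig so observed brightness falls off roughly as 1/Z²; the calibration constants (i_ref, z_ref) and the tolerance are illustrative assumptions, not values from the paper:

```python
import numpy as np

def disparity_search_window(intensity, i_ref, z_ref, focal_px, baseline_m,
                            tol=0.3):
    """Rough depth prior from inverse-square light falloff.

    With brightness I ~ 1/Z^2, depth is roughly Z = z_ref * sqrt(i_ref / I),
    calibrated by a reference brightness i_ref observed at depth z_ref.
    Stereo disparity d = f * B / Z then bounds the correspondence search.
    """
    z = z_ref * np.sqrt(i_ref / np.maximum(intensity, 1e-6))
    d = focal_px * baseline_m / z
    return d * (1 - tol), d * (1 + tol)   # (min, max) disparity to search
```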
NASA Astrophysics Data System (ADS)
Bismuth, Vincent; Vancamberg, Laurence; Gorges, Sébastien
2009-02-01
During interventional radiology procedures, guide-wires are usually inserted into the patient's vascular tree for diagnostic or therapeutic purposes. These procedures are monitored with an X-ray interventional system providing images of the interventional devices navigating through the patient's body. The automatic detection of such tools by image processing has gained maturity over the past years and enables applications ranging from image enhancement to multimodal image fusion. Sophisticated detection methods are emerging, which rely on a variety of device enhancement techniques. In this article we reviewed and classified these techniques into three families. We chose a state-of-the-art approach in each of them and built a rigorous framework to compare their detection capability and computational complexity. Through simulations and the intensive use of ROC curves, we demonstrated that Hessian-based methods are the most robust to strong curvature of the devices, and that the family of rotated-filter techniques is the best suited for detecting low-CNR, low-curvature devices. The steerable filter approach demonstrated less interesting detection capabilities and appears to be the most expensive to compute. Finally, we demonstrated the interest of automatic guide-wire detection on a clinical topic: the compensation of respiratory motion in multimodal image fusion.
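As an illustration of the Hessian family that the comparison favors for strongly curved devices, a Frangi-style ridge measure can be sketched as below; the scale and response parameters are assumed settings, not the paper's, and this is not the evaluated implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hessian_line_measure(image, sigma=1.5, beta=0.5):
    """Frangi-style ridge measure for thin curvilinear devices (sketch)."""
    img = np.asarray(image, float)
    # Scale-normalized second derivatives at scale sigma
    hxx = gaussian_filter(img, sigma, order=(0, 2)) * sigma**2
    hyy = gaussian_filter(img, sigma, order=(2, 0)) * sigma**2
    hxy = gaussian_filter(img, sigma, order=(1, 1)) * sigma**2
    # Eigenvalues of the 2x2 Hessian at every pixel
    root = np.sqrt((hxx - hyy) ** 2 + 4.0 * hxy**2)
    l1 = 0.5 * (hxx + hyy - root)
    l2 = 0.5 * (hxx + hyy + root)
    swap = np.abs(l1) > np.abs(l2)           # enforce |l1| <= |l2|
    l1[swap], l2[swap] = l2[swap], l1[swap]
    rb = np.abs(l1) / (np.abs(l2) + 1e-12)   # blob vs. tube discriminator
    s = np.hypot(l1, l2)                     # second-order structure energy
    c = 0.5 * s.max() + 1e-12
    # High response for tube-like structures with strong contrast
    return np.exp(-(rb**2) / (2 * beta**2)) * (1 - np.exp(-(s**2) / (2 * c**2)))
```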
Changing requirements and solutions for unattended ground sensors
NASA Astrophysics Data System (ADS)
Prado, Gervasio; Johnson, Robert
2007-10-01
Unattended Ground Sensors (UGS) were first used to monitor Viet Cong activity along the Ho Chi Minh Trail in the 1960s. In the 1980s, significant improvement in the capabilities of UGS became possible with the development of digital signal processors; this led to their use as fire control devices for smart munitions (for example, the Wide Area Mine) and later to monitor the movements of mobile missile launchers. In these applications, the targets of interest were large military vehicles with strong acoustic, seismic and magnetic signatures. Currently, the requirements imposed by new terrorist threats and illegal border crossings have shifted the emphasis to the monitoring of light vehicles and foot traffic. These new requirements have changed the way UGS are used. To improve performance against targets with lower emissions, sensors are used in multi-modal arrangements. Non-imaging sensors (acoustic, seismic, magnetic and passive infrared) are now being used principally as activity sensors to cue imagers and remote cameras. The availability of better imaging technology has made imagers the preferred source of "actionable intelligence". Infrared cameras are now based on uncooled detector arrays that have made their application in UGS possible in terms of cost and power consumption. Visible-light imagers are also more sensitive, extending their utility well beyond twilight. The imagers are equipped with sophisticated image processing capabilities (image enhancement, moving target detection and tracking, image compression). Various commercial satellite services now provide relatively inexpensive long-range communications, and the Internet provides fast worldwide access to the data.
Automatic initialization and quality control of large-scale cardiac MRI segmentations.
Albà, Xènia; Lekadir, Karim; Pereañez, Marco; Medrano-Gracia, Pau; Young, Alistair A; Frangi, Alejandro F
2018-01-01
Continuous advances in imaging technologies enable ever more comprehensive phenotyping of human anatomy and physiology. Concomitant reduction of imaging costs has resulted in widespread use of imaging in large clinical trials and population imaging studies. Magnetic Resonance Imaging (MRI), in particular, offers one-stop-shop multidimensional biomarkers of cardiovascular physiology and pathology. A wide range of analysis methods offer sophisticated cardiac image assessment and quantification for clinical and research studies. However, most methods have only been evaluated on relatively small databases often not accessible for open and fair benchmarking. Consequently, published performance indices are not directly comparable across studies and their translation and scalability to large clinical trials or population imaging cohorts is uncertain. Most existing techniques still rely on considerable manual intervention for the initialization and quality control of the segmentation process, becoming prohibitive when dealing with thousands of images. The contributions of this paper are three-fold. First, we propose a fully automatic method for initializing cardiac MRI segmentation, by using image features and random forests regression to predict an initial position of the heart and key anatomical landmarks in an MRI volume. In processing a full imaging database, the technique predicts the optimal corrective displacements and positions in relation to the initial rough intersections of the long and short axis images. Second, we introduce for the first time a quality control measure capable of identifying incorrect cardiac segmentations with no visual assessment. The method uses statistical, pattern and fractal descriptors in a random forest classifier to detect failures to be corrected or removed from subsequent statistical analysis. Finally, we validate these new techniques within a full pipeline for cardiac segmentation applicable to large-scale cardiac MRI databases. The results obtained based on over 1200 cases from the Cardiac Atlas Project show the promise of fully automatic initialization and quality control for population studies. Copyright © 2017 Elsevier B.V. All rights reserved.
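The initialization step described above can be pictured as multi-output regression from cheap image features to landmark coordinates. A hedged sketch with synthetic features follows; the paper's actual feature set and forest configuration are not reproduced here, and all names and data are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def train_initializer(feature_vectors, landmark_xyz):
    """Regress a rough heart-centre position (x, y, z) from global
    image features; stands in for the random-forest initialization."""
    rf = RandomForestRegressor(n_estimators=200, random_state=0)
    rf.fit(feature_vectors, landmark_xyz)   # multi-output regression
    return rf

# Synthetic stand-in data: 500 "volumes", 32 features, 3 coordinates
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 32))          # e.g. intensity-profile features
true_w = rng.standard_normal((32, 3))
Y = X @ true_w + 0.1 * rng.standard_normal((500, 3))

model = train_initializer(X[:400], Y[:400])
pred = model.predict(X[400:])               # initial heart-centre guesses
```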
Kukkonen, C A
1995-06-01
High-speed information processing technologies being developed and applied by the Jet Propulsion Laboratory for NASA and Department of Defense mission needs have potential dual uses in telemedicine and other medical applications. Fiber-optic ground networks connected with microwave satellite links allow NASA to communicate with its astronauts in Earth orbit or on the moon, and with its deep space probes billions of miles away. These networks monitor the health of astronauts and of robotic spacecraft. Similar communications technology will also allow patients to communicate with doctors anywhere on Earth. NASA space missions have science as a major objective. Science sensors have become so sophisticated that they can take more data than our scientists can analyze by hand. High-performance computers--workstations, supercomputers and massively parallel computers--are being used to transform these data into knowledge. This is done using image processing, data visualization and other techniques to present the data--ones and zeros--in forms that a human analyst can readily relate to and understand. Medical sensors have likewise exploded in data output--witness CT scans, MRI, and ultrasound. These data must be presented in visual form, and computers will allow routine combination of many two-dimensional MRI images into three-dimensional reconstructions of organs that can then be fully examined by physicians. Emerging technologies such as neural networks that are being "trained" to detect craters on planets or incoming missiles amongst decoys can be used to identify microcalcifications in mammograms.
Operational GPS Imaging System at Multiple Scales for Earth Science and Monitoring of Geohazards
NASA Astrophysics Data System (ADS)
Blewitt, Geoffrey; Hammond, William; Kreemer, Corné
2016-04-01
Toward scientific targets that range from slow deep Earth processes to geohazard rapid response, our operational GPS data analysis system produces smooth, yet detailed maps of 3-dimensional land motion with respect to our Earth's center of mass at multiple spatio-temporal scales with various latencies. "GPS Imaging" is implemented operationally as a back-end processor to our GPS data processing facility, which uses JPL's GIPSY OASIS II software to produce positions from 14,000 GPS stations in ITRF every 5 minutes, with coordinate precision that gradually improves as latency increases upward from 1 hour to 2 weeks. Our GPS Imaging system then applies sophisticated signal processing and image filtering techniques to generate images of land motion covering our Earth's continents with high levels of robustness, accuracy, spatial resolution, and temporal resolution. Techniques employed by our GPS Imaging system include: (1) similarity transformation of polyhedron coordinates to ITRF with optional common-mode filtering to enhance local transient signal-to-noise ratio; (2) a comprehensive database of ~100,000 potential step events based on earthquake catalogs and equipment logs; (3) an automatic, robust, and accurate non-parametric estimator of station velocity that is insensitive to prevalent step discontinuities, outliers, seasonality, and heteroscedasticity; (4) a realistic estimator of velocity error bars based on subsampling statistics; (5) image processing to create a map of land motion that is based on median spatial filtering on the Delaunay triangulation, which is effective at despeckling the data while faithfully preserving edge features; (6) a velocity time series estimator to assist identification of transient behavior, such as unloading caused by drought; and (7) a method of integrating InSAR and GPS for fine-scale seamless imaging in ITRF. Our system is being used to address three main scientific focus areas, including (1) deep Earth processes, (2) anthropogenic lithospheric processes, and (3) dynamic solid Earth events. Our prototype images show that the striking, first-order signal in North America and Europe is large-scale uplift and subsidence from mantle flow driven by Glacial Isostatic Adjustment. At regional scales, the images reveal that anthropogenic lithospheric processes can dominate vertical land motion in extended regions, such as the rapid subsidence of California's Central Valley (CV) exacerbated by drought. The Earth's crust is observed to rebound elastically, as evidenced by uplift of surrounding mountain ranges. Images also reveal natural uplift of mountains, mantle relaxation associated with earthquakes over the last century, and uplift at plate boundaries driven by interseismic locking. Using the high-rate positions at low latency, earthquake events can be rapidly imaged, modeled, and monitored for afterslip, potential aftershocks, and subsequent deeper relaxation. Thus we are imaging deep Earth processes with unprecedented scope, resolution and accuracy. In addition to supporting these scientific focus areas, the data products are also being used to support the global reference frame (ITRF), and show potential to enhance missions such as GRACE and NISAR by providing complementary information on Earth processes.
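The robust velocity estimator of item (3) can be approximated by a median of pairwise slopes taken over near-one-year epoch separations, which cancels the seasonal cycle and resists steps and outliers. A simplified sketch under those assumptions (not the operational code; the pair-selection tolerance is illustrative):

```python
import numpy as np

def robust_velocity(t_yr, pos_mm, pair_sep=1.0, tol=0.1):
    """Median of pairwise slopes between epochs ~pair_sep years apart."""
    t_yr, pos_mm = np.asarray(t_yr, float), np.asarray(pos_mm, float)
    slopes = []
    for i in range(len(t_yr)):
        dt = t_yr - t_yr[i]
        j = np.where(np.abs(dt - pair_sep) < tol)[0]   # ~1-yr pairs only
        slopes.extend(((pos_mm[j] - pos_mm[i]) / dt[j]).tolist())
    if not slopes:
        return np.nan, np.nan
    slopes = np.asarray(slopes)
    v = np.median(slopes)
    # Dispersion-based error bar (scaled MAD), echoing the subsampling idea
    sig = 1.4826 * np.median(np.abs(slopes - v)) / np.sqrt(len(slopes))
    return v, sig
```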
Mental imagery and idiom comprehension: a comparison of school-age children and adults.
Nippold, Marilyn A; Duthie, Jill K
2003-08-01
Previous research has shown that transparent idioms (e.g., paddle your own canoe) are generally easier for children to interpret than opaque idioms (e.g., paint the town red), results that support the metasemantic hypothesis of figurative understanding (M. A. Nippold, 1998). This is the view that beyond exposure to idioms and attention to the linguistic context, the learner analyzes the expressions internally to infer meaning, a process that is easier to execute when the literal and nonliteral meanings overlap. The present study was designed to investigate mental imagery in relation to the discrepancy in difficulty between transparent and opaque expressions. Twenty familiar idioms, half transparent and half opaque, were presented to 40 school-age children (mean age = 12;3 [years;months]) and 40 adults (mean age = 27;0) who were asked to describe in writing their own mental images for each expression. The participants were also given a written multiple-choice task to measure their comprehension of the idioms. The results indicated that mental imagery for idioms undergoes a developmental process and is associated with comprehension. Although school-age children were able to report relevant mental images for idioms, their images were less sophisticated than those of adults and were more likely to be concrete and to reflect only a partial understanding of the expressions. In contrast, the images reported by adults were more likely to be figurative. The findings suggest that the mental images people report for idioms may serve as a barometer of their depth of understanding of the expressions.
Supervised Classification Techniques for Hyperspectral Data
NASA Technical Reports Server (NTRS)
Jimenez, Luis O.
1997-01-01
The recent development of more sophisticated remote sensing systems enables the measurement of radiation in many more spectral intervals than previously possible. An example of this technology is the AVIRIS system, which collects image data in 220 bands. The increased dimensionality of such hyperspectral data provides a challenge to current techniques for analyzing such data. Human experience in three-dimensional space tends to mislead one's intuition of geometrical and statistical properties in high-dimensional space, properties which must guide our choices in the data analysis process. In this paper, high-dimensional space properties are discussed along with their implications for high-dimensional data analysis, in order to illuminate the next steps that need to be taken for the next generation of hyperspectral data classifiers.
Digital video steganalysis exploiting collusion sensitivity
NASA Astrophysics Data System (ADS)
Budhia, Udit; Kundur, Deepa
2004-09-01
In this paper we present an effective steganalysis technique for digital video sequences based on the collusion attack. Steganalysis is the process of detecting, with high probability and low complexity, the presence of covert data in multimedia. Existing algorithms for steganalysis target covert information in still images. When applied directly to video sequences these approaches are suboptimal. In this paper, we present a method that overcomes this limitation by using redundant information present in the temporal domain to detect covert messages in the form of Gaussian watermarks. Our gains are achieved by exploiting the collusion attack that has recently been studied in the field of digital video watermarking, and more sophisticated pattern recognition tools. Applications of our scheme include cybersecurity and cyberforensics.
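The collusion idea is that temporally averaging neighboring frames estimates the cover video, so the per-frame residual concentrates any additive Gaussian watermark. A minimal sketch follows; the window size and the variance-based score are illustrative choices, not the paper's detector.

```python
import numpy as np

def collusion_detect(frames, window=5, threshold=3.0):
    """Flag frames whose residual against a temporal collusion estimate
    has anomalously high variance (suggesting embedded Gaussian noise).

    frames : (n, h, w) float array of a video sequence
    """
    n = len(frames)
    scores = np.empty(n)
    for k in range(n):
        lo, hi = max(0, k - window), min(n, k + window + 1)
        idx = [i for i in range(lo, hi) if i != k]
        cover_est = frames[idx].mean(axis=0)      # collusion (cover) estimate
        scores[k] = (frames[k] - cover_est).var() # residual energy
    z = (scores - np.median(scores)) / (scores.std() + 1e-12)
    return z > threshold                           # boolean mask of suspects
```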
MO-DE-206-00: Joint AAPM-WMIS Symposium: Metabolic Imaging of Cancer
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
In this symposium jointly sponsored by the World Molecular Imaging Society (WMIS) and the AAPM, luminary speakers on imaging metabolism will discuss three impactful topics. The first presentation, on Cellular Metabolism of FDG, will be given by Guillem Pratx (Stanford). This presentation will detail new work on how the most common molecular imaging agent, fluoro-deoxy-glucose, is metabolized at a cellular level. This will be followed by a talk on an improved approach to whole-body PET imaging by Simon Cherry (UC Davis). Simon's work on a new whole-body PET imaging system promises dramatic improvement in our ability to detect and characterize cancer using PET. Finally, Jim Bankson (MD Anderson) will discuss extremely sophisticated approaches to quantifying hyperpolarized-13-C pyruvate metabolism using MR imaging. This technology promises to complement the exquisite sensitivity of PET with an ability to measure not just uptake, but tumor metabolism. Learning Objectives: Understand the metabolism of FDG at a cellular level. Appreciate the engineering related to a novel new high-sensitivity whole-body PET imaging system. Understand the process of hyperpolarization, how pyruvate relates to metabolism and how advanced modeling can be used to better quantify these data. G. Pratx, Funding: 5R01CA186275, 1R21CA193001, and Damon Runyon Cancer Foundation. S. Cherry, National Institutes of Health; University of California, Davis; Siemens Medical Solutions. J. Bankson, GE Healthcare; NCI P30-CA016672; CPRIT PR140021-P5.
A Perceptually Weighted Rank Correlation Indicator for Objective Image Quality Assessment
NASA Astrophysics Data System (ADS)
Wu, Qingbo; Li, Hongliang; Meng, Fanman; Ngan, King N.
2018-05-01
In the field of objective image quality assessment (IQA), Spearman's ρ and Kendall's τ are the two most popular rank correlation indicators; they straightforwardly assign uniform weight to all quality levels and assume each pair of images is sortable. They are successful at measuring the average accuracy of an IQA metric in ranking multiple processed images. However, two important perceptual properties are ignored by them. First, the sorting accuracy (SA) for high-quality images is usually more important than for poor-quality ones in many real-world applications, where only the top-ranked images are pushed to users. Second, due to the subjective uncertainty in making judgements, two perceptually similar images are usually hardly sortable, and their ranks should not contribute to the evaluation of an IQA metric. To more accurately compare different IQA algorithms, we explore a perceptually weighted rank correlation indicator in this paper, which rewards the capability of correctly ranking high-quality images and suppresses attention toward insensitive rank mistakes. More specifically, we focus on activating 'valid' pairwise comparisons of image quality, whose difference exceeds a given sensory threshold (ST). Meanwhile, each image pair is assigned a unique weight, determined by both the quality level and the rank deviation. By modifying the perception threshold, we can illustrate the sorting accuracy with a more sophisticated SA-ST curve, rather than a single rank correlation coefficient. The proposed indicator offers a new insight for interpreting visual perception behaviors. Furthermore, the applicability of our indicator is validated in recommending robust IQA metrics for both degraded and enhanced image data.
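A simplified reading of the indicator can be sketched as thresholded, quality-weighted pairwise agreement. The paper's exact weighting combines quality level and rank deviation, so the max-quality weight below is an assumption, as are the function and argument names.

```python
import numpy as np

def weighted_rank_agreement(subjective, objective, st=0.5):
    """Threshold- and quality-weighted pairwise rank agreement.

    Only pairs whose subjective quality difference exceeds the sensory
    threshold `st` count as 'valid'; each valid pair is weighted toward
    its higher-quality member, rewarding correct ranking at the top.
    """
    s = np.asarray(subjective, float)
    o = np.asarray(objective, float)
    num = den = 0.0
    for i in range(len(s)):
        for j in range(i + 1, len(s)):
            dq = s[i] - s[j]
            if abs(dq) <= st:
                continue                    # perceptually unsortable pair
            w = max(s[i], s[j])             # emphasize high-quality images
            den += w
            if np.sign(dq) == np.sign(o[i] - o[j]):
                num += w                    # metric ranked the pair correctly
    return num / den if den else np.nan
```

Sweeping `st` over a range of thresholds and plotting the returned agreement traces out an SA-ST curve in the spirit of the one described above.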
ERIC Educational Resources Information Center
Felten, Peter
2008-01-01
Living in an image-rich world does not mean students (or faculty and administrators) naturally possess sophisticated visual literacy skills, just as continually listening to an iPod does not teach a person to critically analyze or create music. Instead, "visual literacy involves the ability to understand, produce, and use culturally significant…
Burkovskiy, I; Lehmann, C; Jiang, C; Zhou, J
2016-11-01
Intravital microscopy of the intestine is a sophisticated technique that allows qualitative and quantitative in vivo observation of dynamic cellular interactions and blood flow at high resolution. Physiological conditions of the animal, and in particular of the observed organ, such as temperature and moisture, are crucial for intravital imaging. Often, the microscopy stage with the animal or the organ of interest imposes limitations on how well the animal can be maintained. In addition, access for additional oxygen supply or drug administration during the procedure is rather restricted. To address these limitations, we developed a novel intravital microscopy platform, allowing improved access to the animal during the intravital microscopy procedure, as well as improved microenvironmental maintenance. The production process of this prototype platform is based on 3D printing of device parts in a single-step process. The simplicity of production and the advantages of this versatile and customizable design are shown and discussed in this paper. Our design potentially represents a major step forward in facilitating intestinal intravital imaging using fluorescence microscopy. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
The geophysical processor system: Automated analysis of ERS-1 SAR imagery
NASA Technical Reports Server (NTRS)
Stern, Harry L.; Rothrock, D. Andrew; Kwok, Ronald; Holt, Benjamin
1994-01-01
The Geophysical Processor System (GPS) at the Alaska (U.S.) SAR (Synthetic Aperture Radar) Facility (ASF) uses ERS-1 SAR images as input to generate three types of products: sea ice motion, sea ice type, and ocean wave spectra. The GPS, operating automatically with minimal human intervention, delivers its output to the Archive and Catalog System (ACS) where scientists can search and order the products on line. The GPS has generated more than 10,000 products since it became operational in Feb. 1992, and continues to deliver 500 new products per month to the ACS. These products cover the Beaufort and Chukchi Seas and the western portion of the central Arctic Ocean. More geophysical processing systems are needed to handle the large volumes of data from current and future satellites. Images must be routinely and consistently analyzed to yield useful information for scientists. The current GPS is a good, working prototype on the way to more sophisticated systems.
NASA Astrophysics Data System (ADS)
Pauley, Mark A.; Dalrymple, Glenn V.; Zhu, Quiming; Chu, Wei-Kom
2000-12-01
With the continued centralization of medical care into large regional centers, there is a growing need for a flexible, inexpensive, and secure system to rapidly provide referring physicians in the field with the results of the sophisticated medical tests performed at these facilities. Furthermore, the medical community has long recognized the need for a system with similar characteristics to maintain and upgrade patient case sets for oral and written student examinations. With the move toward filmless radiographic instrumentation and the widespread and growing use of digital methods and the Internet, both of these processes can now be realized. This article describes the conceptual development and testing of a protocol that allows users to transmit, modify, remotely store and display the images and textual information of medical cases via the Internet. We also discuss some of the legal issues we encountered regarding the transmission of medical information; these issues have had a direct impact on the implementation of the results of this project.
Trigger and Readout System for the Ashra-1 Detector
NASA Astrophysics Data System (ADS)
Aita, Y.; Aoki, T.; Asaoka, Y.; Morimoto, Y.; Motz, H. M.; Sasaki, M.; Abiko, C.; Kanokohata, C.; Ogawa, S.; Shibuya, H.; Takada, T.; Kimura, T.; Learned, J. G.; Matsuno, S.; Kuze, S.; Binder, P. M.; Goldman, J.; Sugiyama, N.; Watanabe, Y.
A highly sophisticated trigger and readout system has been developed for the All-sky Survey High Resolution Air-shower (Ashra) detector. The Ashra-1 detector has a 42-degree-diameter field of view. Detecting Cherenkov and fluorescence light against the large background in this large field of view requires a finely segmented, high-speed trigger and readout system. The system is composed of an optical fiber image transmission system, a 64 × 64 channel trigger sensor and an FPGA-based trigger logic processor. The system typically processes the image within 10 to 30 ns and opens the shutter on the fine CMOS sensor. A 64 × 64 coarse split image is transferred via a 64 × 64 precisely aligned optical fiber bundle to a photon sensor. Current signals from the photon sensor are discriminated by custom-made trigger amplifiers. The FPGA-based processor processes the 64 × 64 hit pattern, and the corresponding partial area of the fine image is acquired. A commissioning Earth-skimming tau neutrino observational search was carried out with this trigger system. In addition to the geometrical advantage of the Ashra observational site, the excellent tau shower axis measurement based on the fine imaging and the night-sky background rejection based on the fine and fast imaging allow a zero-background tau shower search. Adoption of the optical fiber bundle and trigger LSI realized the 4k-channel trigger system inexpensively. Detectability of tau showers is also confirmed by simultaneously observed Cherenkov air showers. Reduction of the trigger threshold appears to enhance the effective area, especially in the PeV tau neutrino energy region. A new two-dimensional trigger LSI was introduced and the trigger threshold was lowered. A new calibration system for the trigger system was recently developed and introduced to the Ashra detector.
Digital image film generation: from the photoscientist's perspective
Boyd, John E.
1982-01-01
The technical sophistication of photoelectronic transducers, integrated circuits, and laser-beam film recorders has made digital imagery an alternative to traditional analog imagery for remote sensing. Because a digital image is stored in discrete digital values, image enhancement is possible before the data are converted to a photographic image. To create a special film-reproduction curve - which can simulate any desired gamma, relative film speed, and toe/shoulder response - the digital-to-analog transfer function of the film recorder is uniquely defined and implemented by a lookup table in the film recorder. Because the image data are acquired in spectral bands, false-color composites also can be given special characteristics by selecting a reproduction curve tailored for each band.
Melanoma detection using smartphone and multimode hyperspectral imaging
NASA Astrophysics Data System (ADS)
MacKinnon, Nicholas; Vasefi, Fartash; Booth, Nicholas; Farkas, Daniel L.
2016-04-01
This project's goal is to determine how to effectively implement a technology continuum, from a low-cost, remotely deployable imaging device to a more sophisticated multimode imaging system, within standard clinical practice. In this work a smartphone is used in conjunction with an optical attachment to capture cross-polarized and collinear color images of a nevus, which are analyzed to quantify chromophore distribution. The nevus is also imaged by a multimode hyperspectral system, our proprietary SkinSpect™ device. The relative accuracy and biological plausibility of the two systems' algorithms are compared to assess the feasibility of in-home or primary-care-practitioner smartphone screening prior to rigorous clinical analysis via the SkinSpect.
An approach for quantitative image quality analysis for CT
NASA Astrophysics Data System (ADS)
Rahimi, Amir; Cochran, Joe; Mooney, Doug; Regensburger, Joe
2016-03-01
An objective and standardized approach to assessing the image quality of Computed Tomography (CT) systems is required in a wide variety of imaging processes to identify CT systems appropriate for a given application. We present an overview of the framework we have developed to help standardize and objectively assess CT image quality for different models of CT scanners used for security applications. Within this framework, we have developed methods to quantitatively measure metrics that should correlate with feature identification, detection accuracy and precision, and image registration capabilities of CT machines, and to identify strengths and weaknesses in different CT imaging technologies in transportation security. To that end we have designed, developed and constructed phantoms that allow for systematic and repeatable measurements of roughly 88 image quality metrics, representing modulation transfer function, noise equivalent quanta, noise power spectra, slice sensitivity profiles, streak artifacts, CT number uniformity, CT number consistency, object length accuracy, CT number path length consistency, and object registration. Furthermore, we have developed a sophisticated MATLAB-based image analysis toolkit to analyze CT-generated images of phantoms and report these metrics in a format that is standardized across the considered models of CT scanners, allowing for comparative image quality analysis within a CT model or between different CT models. In addition, we have developed a modified sparse principal component analysis (SPCA) method to generate a modified set of PCA components, as compared to standard principal component analysis (PCA), with sparse loadings, in conjunction with the Hotelling T2 statistical analysis method to compare, qualify, and detect faults in the tested systems.
Cosmacini, P; Piacentini, P
2008-08-01
A few centuries after the practice of mummification was finally abolished in the seventh century A.D., mummies began to capture the collective imagination, exerting a mysterious fascination that continues to this day. From the beginning, the radiological study of Egyptian mummies permitted the collection not only of medical data but also of anthropological and archaeological evidence. The first radiological study of an Egyptian mummy was performed by Flinders Petrie shortly after the discovery of X-rays in 1895, and since then, radiology has never stopped investigating these special patients. By the end of the 1970s, computed tomography (CT) scanning permitted more in-depth studies to be carried out without requiring the mummies to be removed from their cartonnage. CT images can be used to obtain a three-dimensional reconstruction of the mummy that provides important new information, in part thanks to the virtual endoscopy technique known as "fly through". Moreover, starting from CT data and using sophisticated graphics software, one can reconstruct an image of the face of the mummified individual at the time of his or her death. The history of imaging, from its origins until now, from the simplest to the most sophisticated technique, allows us to appreciate why these studies have been, and still are, fundamental in the study of Egyptian mummies.
NASA Astrophysics Data System (ADS)
Akilan, A.; Nagasubramanian, V.; Chaudhry, A.; Reddy, D. Rajesh; Sudheer Reddy, D.; Usha Devi, R.; Tirupati, T.; Radhadevi, P. V.; Varadan, G.
2014-11-01
Block adjustment is a technique for large-area mapping with images obtained from different remote sensing satellites. The challenge in this process is to handle huge numbers of satellite images from different sources, with different resolutions and accuracies, at the system level. This paper explains a system with various tools and techniques to effectively handle the end-to-end chain in large-area mapping and production, with a good level of automation and provisions for intuitive analysis of final results in 3D and 2D environments. In addition, the interface for using open-source ortho and DEM references (viz. ETM, SRTM, etc.) and for displaying ESRI shapes of the image footprints is explained. Rigorous theory, mathematical modelling, workflow automation and sophisticated software engineering tools are included to ensure high photogrammetric accuracy and productivity. Major building blocks like the Georeferencing, Geo-capturing and Geo-Modelling tools included in the block adjustment solution are explained in this paper. To provide an optimal bundle block adjustment solution with high-precision results, the system has been optimized at many stages to make full use of hardware resources. The robustness of the system is ensured by handling failures in the automatic procedure and saving the process state at every stage for subsequent restoration from the point of interruption. The results obtained from the various stages of the system are presented in the paper.
A Multi-Functional Imaging Approach to High-Content Protein Interaction Screening
Matthews, Daniel R.; Fruhwirth, Gilbert O.; Weitsman, Gregory; Carlin, Leo M.; Ofo, Enyinnaya; Keppler, Melanie; Barber, Paul R.; Tullis, Iain D. C.; Vojnovic, Borivoj; Ng, Tony; Ameer-Beg, Simon M.
2012-01-01
Functional imaging can provide a level of quantification that is not possible in what might be termed traditional high-content screening. This is due to the fact that the current state-of-the-art high-content screening systems take the approach of scaling-up single cell assays, and are therefore based on essentially pictorial measures as assay indicators. Such phenotypic analyses have become extremely sophisticated, advancing screening enormously, but this approach can still be somewhat subjective. We describe the development, and validation, of a prototype high-content screening platform that combines steady-state fluorescence anisotropy imaging with fluorescence lifetime imaging (FLIM). This functional approach allows objective, quantitative screening of small molecule libraries in protein-protein interaction assays. We discuss the development of the instrumentation, the process by which information on fluorescence resonance energy transfer (FRET) can be extracted from wide-field, acceptor fluorescence anisotropy imaging and cross-checking of this modality using lifetime imaging by time-correlated single-photon counting. Imaging of cells expressing protein constructs where eGFP and mRFP1 are linked with amino-acid chains of various lengths (7, 19 and 32 amino acids) shows the two methodologies to be highly correlated. We validate our approach using a small-scale inhibitor screen of a Cdc42 FRET biosensor probe expressed in epidermoid cancer cells (A431) in a 96 microwell-plate format. We also show that acceptor fluorescence anisotropy can be used to measure variations in hetero-FRET in protein-protein interactions. We demonstrate this using a screen of inhibitors of internalization of the transmembrane receptor, CXCR4. These assays enable us to demonstrate all the capabilities of the instrument, image processing and analytical techniques that have been developed. Direct correlation between acceptor anisotropy and donor FLIM is observed for FRET assays, providing an opportunity to rapidly screen proteins, interacting on the nano-meter scale, using wide-field imaging. PMID:22506000
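For reference, the donor-lifetime route to FRET on which the FLIM cross-check relies reduces to the standard relation E = 1 − τ_DA/τ_D. A one-function sketch, with illustrative variable names:

```python
def fret_efficiency_from_lifetime(tau_da_ns, tau_d_ns):
    """Standard FLIM-FRET relation: E = 1 - tau_DA / tau_D, where tau_DA
    is the donor lifetime with the acceptor present and tau_D is the
    donor-only lifetime. Units cancel, so any consistent unit works."""
    return 1.0 - tau_da_ns / tau_d_ns

# e.g. a 2.0 ns donor quenched to 1.4 ns implies E = 0.3
print(fret_efficiency_from_lifetime(1.4, 2.0))
```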
Teachers as Leaders in a Knowledge Society: Encouraging Signs of a New Professionalism
ERIC Educational Resources Information Center
Andrews, Dorothy; Crowther, Frank
2006-01-01
Challenges confronting schools worldwide are greater than ever, and, likewise, many teachers possess capabilities, talents, and formal credentials more sophisticated than ever. However, the responsibility and authority accorded to teachers have not grown significantly, nor has the image of teaching as a profession advanced significantly. The…
USDA-ARS's Scientific Manuscript database
Greenhouse cultivation has evolved from simple covered rows of open-fields crops to highly sophisticated controlled environment agriculture (CEA) facilities that projected the image of plant factories for urban farming. The advances and improvements in CEA have promoted the scientific solutions for ...
ERIC Educational Resources Information Center
Needelman, Bert; Weiner, Norman L.
1976-01-01
Argues that artistic viewpoints significantly influence our perceptions of everyday life; the arts have been a major force in the construction of the deviant role and physical appearance has been a prime artistic device in concretizing this image. As populations continue to become more educated and sophisticated the great mass of people will…
UAVs Being Used for Environmental Surveying
Chung, Sandra
2017-12-09
UAVs are much more sophisticated than your typical remote-controlled plane. INL robotics and remote sensing experts have added state-of-the-art imaging and wireless technology to the UAVs to create intelligent remote surveillance craft that can rapidly survey a wide area for damage and track down security threats.
Enhancing Teaching and Learning in Higher Education with a Total Multimedia Approach.
ERIC Educational Resources Information Center
Wells, F. Stuart; Kick, Russell C.
If multimedia technology is to be successfully employed to enhance classroom instruction and learning, the full capabilities of the technology must be used. The complete power of multimedia includes high quality graphics and images, sophisticated navigational techniques and transitional effects, appropriate music and sound, animation, and,…
Sharma, Harshita; Alekseychuk, Alexander; Leskovsky, Peter; Hellwich, Olaf; Anand, R S; Zerbe, Norman; Hufnagl, Peter
2012-10-04
Computer-based analysis of digitalized histological images has been gaining increasing attention, due to their extensive use in research and routine practice. The article aims to contribute towards the description and retrieval of histological images by employing a structural method using graphs. Due to their expressive ability, graphs are considered as a powerful and versatile representation formalism and have obtained a growing consideration especially by the image processing and computer vision community. The article describes a novel method for determining similarity between histological images through graph-theoretic description and matching, for the purpose of content-based retrieval. A higher order (region-based) graph-based representation of breast biopsy images has been attained and a tree-search based inexact graph matching technique has been employed that facilitates the automatic retrieval of images structurally similar to a given image from large databases. The results obtained and evaluation performed demonstrate the effectiveness and superiority of graph-based image retrieval over a common histogram-based technique. The employed graph matching complexity has been reduced compared to the state-of-the-art optimal inexact matching methods by applying a pre-requisite criterion for matching of nodes and a sophisticated design of the estimation function, especially the prognosis function. The proposed method is suitable for the retrieval of similar histological images, as suggested by the experimental and evaluation results obtained in the study. It is intended for the use in Content Based Image Retrieval (CBIR)-requiring applications in the areas of medical diagnostics and research, and can also be generalized for retrieval of different types of complex images. The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/1224798882787923.
MO-DE-206-01: Cellular Metabolism of FDG
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pratx, G.
In this symposium jointly sponsored by the World Molecular Imaging Society (WMIS) and the AAPM, luminary speakers on imaging metabolism will discuss three impactful topics. The first presentation, on Cellular Metabolism of FDG, will be given by Guillem Pratx (Stanford). This presentation will detail new work on how the most common molecular imaging agent, fluoro-deoxy-glucose, is metabolized at a cellular level. This will be followed by a talk on an improved approach to whole-body PET imaging by Simon Cherry (UC Davis). Simon's work on a new whole-body PET imaging system promises dramatic improvement in our ability to detect and characterize cancer using PET. Finally, Jim Bankson (MD Anderson) will discuss extremely sophisticated approaches to quantifying hyperpolarized-13-C pyruvate metabolism using MR imaging. This technology promises to complement the exquisite sensitivity of PET with an ability to measure not just uptake, but tumor metabolism. Learning Objectives: Understand the metabolism of FDG at a cellular level. Appreciate the engineering related to a novel new high-sensitivity whole-body PET imaging system. Understand the process of hyperpolarization, how pyruvate relates to metabolism and how advanced modeling can be used to better quantify these data. G. Pratx, Funding: 5R01CA186275, 1R21CA193001, and Damon Runyon Cancer Foundation. S. Cherry, National Institutes of Health; University of California, Davis; Siemens Medical Solutions. J. Bankson, GE Healthcare; NCI P30-CA016672; CPRIT PR140021-P5.
MO-DE-206-03: Quantifying Metabolism with Hyperpolarized MR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bankson, J.
In this symposium jointly sponsored by the World Molecular Imaging Society (WMIS) and the AAPM, luminary speakers on imaging metabolism will discuss three impactful topics. The first presentation, on Cellular Metabolism of FDG, will be given by Guillem Pratx (Stanford). This presentation will detail new work on how the most common molecular imaging agent, fluoro-deoxy-glucose, is metabolized at a cellular level. This will be followed by a talk on an improved approach to whole-body PET imaging by Simon Cherry (UC Davis). Simon's work on a new whole-body PET imaging system promises dramatic improvement in our ability to detect and characterize cancer using PET. Finally, Jim Bankson (MD Anderson) will discuss extremely sophisticated approaches to quantifying hyperpolarized-13-C pyruvate metabolism using MR imaging. This technology promises to complement the exquisite sensitivity of PET with an ability to measure not just uptake, but tumor metabolism. Learning Objectives: Understand the metabolism of FDG at a cellular level. Appreciate the engineering related to a novel new high-sensitivity whole-body PET imaging system. Understand the process of hyperpolarization, how pyruvate relates to metabolism and how advanced modeling can be used to better quantify these data. G. Pratx, Funding: 5R01CA186275, 1R21CA193001, and Damon Runyon Cancer Foundation. S. Cherry, National Institutes of Health; University of California, Davis; Siemens Medical Solutions. J. Bankson, GE Healthcare; NCI P30-CA016672; CPRIT PR140021-P5.
MO-DE-206-02: Cellular Metabolism of FDG
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cherry, S.
In this symposium jointly sponsored by the World Molecular Imaging Society (WMIS) and the AAPM, luminary speakers on imaging metabolism will discuss three impactful topics. The first presentation on Cellular Metabolism of FDG will be given by Guillem Pratx (Stanford). This presentation will detail new work on looking at how the most common molecular imaging agent, fluoro-deoxy-glucose, is metabolized at a cellular level. This will be followed by a talk on an improved approach to whole-body PET imaging by Simon Cherry (UC Davis). Simon's work on a new whole-body PET imaging system promises dramatic improvement in our ability to detect and characterize cancer using PET. Finally, Jim Bankson (MD Anderson) will discuss extremely sophisticated approaches to quantifying hyperpolarized-13-C pyruvate metabolism using MR imaging. This technology promises to complement the exquisite sensitivity of PET with an ability to measure not just uptake, but tumor metabolism. Learning Objectives: Understand the metabolism of FDG at a cellular level. Appreciate the engineering related to a novel high-sensitivity whole-body PET imaging system. Understand the process of hyperpolarization, how pyruvate relates to metabolism, and how advanced modeling can be used to better quantify this data. G. Pratx, Funding: 5R01CA186275, 1R21CA193001, and Damon Runyon Cancer Foundation. S. Cherry, National Institutes of Health; University of California, Davis; Siemens Medical Solutions. J. Bankson, GE Healthcare; NCI P30-CA016672; CPRIT PR140021-P5.
Novel approach for low-cost muzzle flash detection system
NASA Astrophysics Data System (ADS)
Voskoboinik, Asher
2008-04-01
A low-cost muzzle flash detection system based on CMOS sensor technology is proposed. This low-cost technology makes it possible to detect transient events with characteristic times from tens of microseconds up to tens of milliseconds, while sophisticated algorithms successfully separate them from false alarms by exploiting differences in geometrical characteristics and/or temporal signatures. The proposed system consists of off-the-shelf smart CMOS cameras with built-in signal and image processing capabilities for pre-processing, together with allocated memory for storing a buffer of images for further post-processing. Such a sensor does not need to send huge amounts of raw data to a real-time processing unit; instead, all calculations are performed in situ, and the processing results are the output of the sensor. This patented CMOS muzzle flash detection concept exhibits high-performance detection capability with very low false-alarm rates. It was found that most false alarms due to sun glints come from sources at distances of 500-700 meters from the sensor and can be distinguished from muzzle flash signals by examining their temporal behavior. This makes it possible to eliminate up to 80% of false alarms due to specular sun reflections on the battlefield. An additional means of distinguishing sun glints from suspected muzzle flash signals is optimization of the spectral band in the near-IR region. The proposed system can be used for muzzle flash detection of small arms, missiles, rockets, and other military applications.
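A minimal sketch of the temporal-signature idea: flag pixels that spike briefly above the background and reject sources that stay bright too long, as sun glints and lamps do. The statistics, thresholds, and frame-buffer interface below are illustrative assumptions, not the patented algorithm.

```python
import numpy as np

def detect_transients(stack, fps, k=6.0, max_ms=5.0):
    """Flag muzzle-flash-like transients in a buffered image stack.
    stack: (T, H, W) array of frames from the smart camera's buffer.
    A flash rises sharply above the background and decays within a few
    frames; persistent bright sources are rejected by duration."""
    bg = np.median(stack, axis=0)              # per-pixel background level
    sigma = stack.std(axis=0) + 1e-6           # per-pixel noise estimate
    hot = (stack - bg) > k * sigma             # (T, H, W) excursion mask
    duration = hot.sum(axis=0)                 # frames each pixel stays hot
    max_frames = max(1, int(max_ms / 1000.0 * fps))
    # Keep pixels that fired at least once but not longer than a flash can.
    return (duration >= 1) & (duration <= max_frames)
```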
Modeling, simulation, and analysis of optical remote sensing systems
NASA Technical Reports Server (NTRS)
Kerekes, John Paul; Landgrebe, David A.
1989-01-01
Remote sensing of the Earth's resources from space-based sensors has evolved in the past 20 years from a scientific experiment to a commonly used technological tool. The scientific applications and engineering aspects of remote sensing systems have been studied extensively. However, most of these studies have been aimed at understanding individual aspects of the remote sensing process, while relatively few have studied their interrelations. A motivation for studying these interrelationships has arisen with the advent of highly sophisticated configurable sensors as part of the Earth Observing System (EOS) proposed by NASA for the 1990s. Two approaches to investigating remote sensing systems are developed. In one approach, detailed models of the scene, the sensor, and the processing aspects of the system are implemented in a discrete simulation. This approach is useful in creating simulated images with desired characteristics for use in sensor or processing algorithm development. A less complete, but computationally simpler, method based on a parametric model of the system is also developed. In this analytical model the various informational classes are parameterized by their spectral mean vector and covariance matrix. These class statistics are modified by models of the atmosphere, the sensor, and the processing algorithms, and an estimate is made of the resulting classification accuracy among the informational classes. These models are applied to the study of the proposed High Resolution Imaging Spectrometer (HRIS). The interrelationships among observational conditions, sensor effects, and processing choices are investigated, with several interesting results.
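Class separability in such a parametric model is commonly summarized by a statistical distance between the Gaussian class models. The sketch below uses the Bhattacharyya distance, whose exponential bounds the pairwise Bayes error; this is a standard choice for this kind of analysis, offered here as an illustration rather than the specific estimator used in the work.

```python
import numpy as np

def bhattacharyya(m1, c1, m2, c2):
    """Bhattacharyya distance B between two Gaussian classes with means
    m1, m2 and covariances c1, c2. The pairwise Bayes error is bounded by
    sqrt(P1 * P2) * exp(-B), so larger B means easier classification."""
    c = 0.5 * (c1 + c2)
    dm = (m2 - m1).reshape(-1, 1)
    term_mean = 0.125 * float(dm.T @ np.linalg.solve(c, dm))
    term_cov = 0.5 * np.log(np.linalg.det(c) /
                            np.sqrt(np.linalg.det(c1) * np.linalg.det(c2)))
    return term_mean + term_cov
```

Propagating atmosphere and sensor effects through the class means and covariances, then recomputing B for each class pair, gives the kind of accuracy estimate the abstract describes.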
Software Simulates Sight: Flat Panel Mura Detection
NASA Technical Reports Server (NTRS)
2008-01-01
In the increasingly sophisticated world of high-definition flat screen monitors and television screens, image clarity and the elimination of distortion are paramount concerns. As the devices that reproduce images become more and more sophisticated, so do the technologies that verify their accuracy. By simulating the manner in which a human eye perceives and interprets a visual stimulus, NASA scientists have found ways to automatically and accurately test new monitors and displays. The Spatial Standard Observer (SSO) software metric, developed by Dr. Andrew B. Watson at Ames Research Center, measures visibility and defects in screens, displays, and interfaces. In the design of such a software tool, a central challenge is determining which aspects of visual function to include; while accuracy and generality are important, relative simplicity of the software module is also a key virtue. Based on data collected in ModelFest, a large cooperative multi-lab project hosted by the Optical Society of America, the SSO simulates a simplified model of human spatial vision, operating on a pair of images that are viewed at a specific viewing distance with pixels having a known relation to luminance. The SSO measures the visibility of foveal spatial patterns, or the discriminability of two patterns, by incorporating only a few essential components of vision. These components include local contrast transformation, a contrast sensitivity function, local masking, and local pooling. By this construction, the SSO provides output in units of "just noticeable differences" (JND), a unit of measure based on the assumed smallest difference of sensory input detectable by a human being. Herein lies the truly remarkable ability of the SSO: while conventional methods merely manipulate images, the SSO models human perception. This set of equations actually defines a mathematical way of working with an image that accurately reflects the way in which the human eye and mind behold a stimulus. The SSO is intended for a wide variety of applications, such as evaluating vision from unmanned aerial vehicles, measuring visibility of damage to aircraft and to the space shuttles, predicting outcomes of corrective laser eye surgery, inspecting displays during the manufacturing process, estimating the quality of compressed digital video, evaluating legibility of text, and predicting discriminability of icons or symbols in a graphical user interface.
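As a rough illustration of that pipeline (contrast transformation, contrast-sensitivity weighting, and pooling to JND units), here is a toy metric. The filter shape, constants, and pooling exponent are invented placeholders; the actual SSO is calibrated against the ModelFest data and includes masking, which is omitted here.

```python
import numpy as np

def jnd_estimate(img_a, img_b, beta=3.0):
    """Toy visibility metric in the spirit of the SSO pipeline: convert
    the luminance difference to contrast, weight by a crude band-pass
    surrogate for the contrast sensitivity function (CSF), then pool
    with a Minkowski norm. All constants are illustrative."""
    mean_lum = 0.5 * (img_a.mean() + img_b.mean()) + 1e-6
    contrast_diff = (img_a - img_b) / mean_lum        # contrast transform
    f = np.fft.fft2(contrast_diff)
    fy = np.fft.fftfreq(img_a.shape[0])[:, None]      # cycles per pixel
    fx = np.fft.fftfreq(img_a.shape[1])[None, :]
    r = np.hypot(fx, fy) + 1e-6
    csf = r * np.exp(-8.0 * r)                        # crude band-pass CSF
    filtered = np.real(np.fft.ifft2(f * csf))         # visibility-weighted
    return (np.abs(filtered) ** beta).mean() ** (1.0 / beta)  # pooled "JND"
```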
Superior pattern processing is the essence of the evolved human brain
Mattson, Mark P.
2014-01-01
Humans have long pondered the nature of their mind/brain and, particularly, why its capacities for reasoning, communication, and abstract thought are far superior to those of other species, including closely related anthropoids. This article considers superior pattern processing (SPP) as the fundamental basis of most, if not all, unique features of the human brain, including intelligence, language, imagination, invention, and the belief in imaginary entities such as ghosts and gods. SPP involves the electrochemical, neuronal network-based encoding, integration, and transfer to other individuals of perceived or mentally fabricated patterns. During human evolution, pattern processing capabilities became increasingly sophisticated as a result of the expansion of the cerebral cortex, particularly the prefrontal cortex and regions involved in the processing of images. Specific patterns, real or imagined, are reinforced by emotional experiences, indoctrination, and even psychedelic drugs. Impaired or dysregulated SPP is fundamental to cognitive and psychiatric disorders. A broader understanding of SPP mechanisms, and of their roles in normal and abnormal function of the human brain, may enable the development of interventions that reduce irrational decisions and destructive behaviors. PMID:25202234
Experimental characterization of the imaging properties of multifocal intraocular lenses
NASA Astrophysics Data System (ADS)
Gobbi, Pier Giorgio; Fasce, Francesco; Bozza, Stefano; Brancato, Rosario
2003-07-01
Many different types of intraocular lenses (IOL) are currently available for implantation, both as crystalline lens replacements and as phakic refractive elements. Their optical design is increasingly sophisticated, including aspherical surface profiles and multi-zone multifocal structures; however, a quantitative and comparative characterization of their imaging properties is lacking. A qualitative visualization of their properties would also be very useful for patients in the lens choice process. To this end, an experimental eye model has been developed to allow for simulated in-vivo testing of IOLs. The model cornea is made of PMMA with a dioptric power of 43 D, and it has an aspherical profile designed to minimize spherical aberration across the visible spectrum. The eye model has a variable iris and a mechanical support to accommodate IOLs, immersed in physiological solution. The eye length is variable and the retina is replaced by a glass plate. The image formed on this "retina" is optically conjugated to a CCD camera, with a suitable magnification in order to mimic the resolution of the human fovea, and displayed on a monitor. With such an opto-mechanical eye model, two types of images have been used to characterize IOLs: letter charts and variable contrast gratings, in order to directly simulate human visual acuity and contrast sensitivity.
Use of high-radiant flux, high-resolution DMD light engines in industrial applications
NASA Astrophysics Data System (ADS)
Müller, Alexandra; Ram, Surinder
2014-03-01
The field of application of industrial projectors is growing day by day. New Digital Micromirror Device (DMD) based applications such as 3D printing, 3D scanning, Printed Circuit Board (PCB) printing, and others are getting more and more sophisticated. The technical demands on the projection system are rising as new and more stringent requirements appear. The specifications for industrial projection systems differ substantially from those of business and home projectors ("beamers"). Beamers are designed to please the human eye: bright colors and image enhancement are far more important than uniformity of the illumination or image distortion. The human eye, aided by the processing of the brain, can live with quite high intensity variations on the screen and with image distortion. On the other hand, a projector designed for use in a specialized field has to be tailored to its unique requirements so that no quality compromises are made. For instance, when the image is projected onto a light-sensitive resin, good uniformity of the illumination is crucial for good material hardening (curing) results. The demands on the hardware and software are often very challenging. In the following we review some parameters that have to be considered carefully in the design of industrial projectors in order to obtain the optimum result without compromises.
NASA Astrophysics Data System (ADS)
Alexakis, Dimitrios; Seiradakis, Kostas; Tsanis, Ioannis
2016-04-01
This article presents a remote sensing approach for spatio-temporal monitoring of both soil erosion and roughness using an Unmanned Aerial Vehicle (UAV). Soil erosion by water is commonly known as one of the main reasons for land degradation. Gully erosion causes considerable soil loss and soil degradation. Furthermore, quantification of soil roughness (irregularities of the soil surface due to soil texture) is important, as it affects surface storage and infiltration. Soil roughness is one of the soil characteristics most susceptible to variation in time and space, and it depends on different parameters such as cultivation practices and soil aggregation. A UAV equipped with a digital camera was employed to monitor soil in terms of erosion and roughness in two different study areas in Chania, Crete, Greece. The UAV followed flight paths computed by the relevant flight planning software. The photogrammetric image processing enabled the development of sophisticated Digital Terrain Models (DTMs) and ortho-image mosaics with very high resolution, on a sub-decimeter level. The DTMs were developed using photogrammetric processing of more than 500 images acquired with the UAV from different heights above the ground level. As the geomorphic formations can be observed from above using UAVs, shadowing effects do not generally occur and the generated point clouds have very homogeneous and high point densities. The DTMs generated from the UAV were compared in terms of vertical absolute accuracy with a Global Navigation Satellite System (GNSS) survey. The developed data products were used for quantifying gully erosion and soil roughness in 3D, as well as for the analysis of the surrounding areas. The significant elevation changes in the multi-temporal UAV elevation data were used for diachronically estimating soil loss and sediment delivery without installing sediment traps. Concerning roughness, statistical indicators of surface elevation point measurements were estimated, and various parameters such as the standard deviation of the DTM, the deviation of residuals, and the standard deviation of prominence were calculated directly from the extracted DTM. Sophisticated statistical filters and elevation indices were developed to quantify both soil erosion and roughness. The applied methodology for monitoring both soil erosion and roughness provides an optimum way of reducing the existing gap between field scale and satellite scale. Keywords: UAV, soil, erosion, roughness, DTM
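Once the multi-temporal DTMs are co-registered, the elevation-differencing and roughness statistics described above reduce to a few lines of array arithmetic. A minimal sketch, assuming gridded DTMs in metres and an illustrative change threshold standing in for the survey's vertical accuracy:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def erosion_volume(dtm_t0, dtm_t1, cell_area, min_change=0.05):
    """Volume of soil lost between two co-registered DTMs (cubic metres).
    Differences smaller than min_change (m) are masked as survey noise;
    the 0.05 m default is illustrative, not the study's accuracy figure."""
    dz = dtm_t1 - dtm_t0
    loss = np.where(dz < -min_change, -dz, 0.0)   # keep only lowering cells
    return float(loss.sum() * cell_area)

def roughness_std(dtm, window=5):
    """Soil roughness as the standard deviation of detrended elevations,
    one of the statistical indicators mentioned above. The local trend is
    removed with a moving-average surface of `window` cells."""
    trend = uniform_filter(dtm, size=window)
    return float((dtm - trend).std())
```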
It's not the pixel count, you fool
NASA Astrophysics Data System (ADS)
Kriss, Michael A.
2012-01-01
The first thing a "marketing guy" asks the digital camera engineer is how many pixels it has, for we need as many megapixels as possible since the other guys are killing us with their "umpteen" megapixel pocket-sized digital cameras. And so it goes, until the pixels get smaller and smaller in order to inflate the pixel count in the never-ending pixel wars. These small pixels are just not very good. The truth of the matter is that the most important feature of digital cameras in the last five years is the automatic motion control that stabilizes the image on the sensor, along with some very sophisticated image processing. All the rest has been hype and some "cool" design. What is the future for digital imaging, and what will drive growth of camera sales (not counting the cell phone cameras, which totally dominate the market in terms of camera sales) and, more importantly, after-sales profits? Well, sit in on the Dark Side of Color and find out what is being done to increase after-sales profits, and don't be surprised if it was all done long ago in some basement lab of a photographic company and, of course, before its time.
ARCHAEOLOGY: Paintings in Italian Cave May Be Oldest Yet.
Balter, M
2000-10-20
Stone slabs bearing images of an animal and a half-human, half-beast figure were uncovered during excavations by an Italian team at the Fumane Cave northwest of Verona. The images are believed to be at least as ancient as some found in the Grotte Chauvet in southern France--the current record holder at 32,000 years--and possibly even older. More important, cave art experts say, the new paintings bolster other evidence that humans engaged in sophisticated symbolic expression much earlier than once thought.
When Machines Think: Radiology's Next Frontier.
Dreyer, Keith J; Geis, J Raymond
2017-12-01
Artificial intelligence (AI), machine learning, and deep learning are terms now seen frequently, all of which refer to computer algorithms that change as they are exposed to more data. Many of these algorithms are surprisingly good at recognizing objects in images. Large amounts of machine-consumable digital data, increased and cheaper computing power, and increasingly sophisticated statistical models combine to enable machines to find patterns in data in ways that are not only cost-effective but also potentially beyond humans' abilities. Building an AI algorithm can be surprisingly easy. Understanding the associated data structures and statistics, on the other hand, is often difficult and obscure. Converting the algorithm into a sophisticated product that works consistently in broad, general clinical use is complex and incompletely understood. Showing that these AI products reduce costs and improve outcomes will require clinical translation and industrial-grade integration into routine workflow. Radiology has the chance to leverage AI to become a center of intelligently aggregated, quantitative, diagnostic information. Centaur radiologists, formed as a synergy of human plus computer, will provide interpretations using data extracted from images by humans and by image-analysis computer algorithms, as well as from the electronic health record, genomics, and other disparate sources. These interpretations will form the foundation of precision health care, or care customized to an individual patient. © RSNA, 2017.
MO-C-18A-01: Advances in Model-Based 3D Image Reconstruction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, G; Pan, X; Stayman, J
2014-06-15
Recent years have seen the emergence of CT image reconstruction techniques that exploit physical models of the imaging system, photon statistics, and even the patient to achieve improved 3D image quality and/or reduction of radiation dose. With numerous advantages in comparison to conventional 3D filtered backprojection, such techniques bring a variety of challenges as well, including: a demanding computational load associated with sophisticated forward models and iterative optimization methods; nonlinearity and nonstationarity in image quality characteristics; a complex dependency on multiple free parameters; and the need to understand how best to incorporate prior information (including patient-specific prior images) within the reconstruction process. The advantages, however, are even greater – for example: improved image quality; reduced dose; robustness to noise and artifacts; task-specific reconstruction protocols; suitability to novel CT imaging platforms and noncircular orbits; and incorporation of known characteristics of the imager and patient that are conventionally discarded. This symposium features experts in 3D image reconstruction, image quality assessment, and the translation of such methods to emerging clinical applications. Dr. Chen will address novel methods for the incorporation of prior information in 3D and 4D CT reconstruction techniques. Dr. Pan will show recent advances in optimization-based reconstruction that enable potential reduction of dose and sampling requirements. Dr. Stayman will describe a "task-based imaging" approach that leverages models of the imaging system and patient in combination with a specification of the imaging task to optimize both the acquisition and reconstruction process. Dr. Samei will describe the development of methods for image quality assessment in such nonlinear reconstruction techniques and the use of these methods to characterize and optimize image quality and dose in a spectrum of clinical applications. Learning Objectives: Learn the general methodologies associated with model-based 3D image reconstruction. Learn the potential advantages in image quality and dose associated with model-based image reconstruction. Learn the challenges associated with computational load and image quality assessment for such reconstruction methods. Learn how the imaging task can be incorporated as a means to drive optimal image acquisition and reconstruction techniques. Learn how model-based reconstruction methods can incorporate prior information to improve image quality, ease sampling requirements, and reduce dose.
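As a concrete, much-simplified instance of the model-based formulation discussed in this session, the following sketch minimises a penalized weighted least-squares cost by plain gradient descent. The dense system matrix, diagonal weights, smoothness prior, and optimizer are deliberately toy-sized stand-ins for the sophisticated forward models and iterative methods the speakers address.

```python
import numpy as np

def pwls_recon(A, y, w, beta=0.1, iters=200):
    """Minimise  (y - Ax)' W (y - Ax) + beta * ||Dx||^2  by gradient
    descent, where W = diag(w) carries the photon statistics and D is a
    circular first-difference operator standing in for a smoothness
    prior. A real CT solver would use matched projector/backprojector
    pairs, edge-preserving penalties, and far faster optimizers."""
    x = np.zeros(A.shape[1])
    # Step size from a Lipschitz bound: 2*||A||^2*max(w) for the data
    # term, plus at most 8*beta for the circulant difference penalty.
    lip = 2.0 * (np.linalg.norm(A, 2) ** 2) * w.max() + 8.0 * beta
    step = 1.0 / lip
    for _ in range(iters):
        grad_data = 2.0 * A.T @ (w * (A @ x - y))          # fidelity term
        grad_reg = 2.0 * beta * (2.0 * x - np.roll(x, 1) - np.roll(x, -1))
        x -= step * (grad_data + grad_reg)
    return x
```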
A novel mesh processing based technique for 3D plant analysis
2012-01-01
Background In recent years, imaging-based, automated, non-invasive, and non-destructive high-throughput plant phenotyping platforms have become popular tools for plant biology, underpinning the field of plant phenomics. Such platforms acquire and record large amounts of raw data that must be accurately and robustly calibrated, reconstructed, and analysed, requiring the development of sophisticated image understanding and quantification algorithms. The raw data can be processed in different ways, and the past few years have seen the emergence of two main approaches: 2D image processing and 3D mesh processing algorithms. Direct image quantification methods (usually 2D) dominate the current literature due to their comparative simplicity. However, 3D mesh analysis offers tremendous potential to accurately estimate specific morphological features cross-sectionally and monitor them over time. Results In this paper, we present a novel 3D mesh based technique developed for temporal high-throughput plant phenomics and perform initial tests for the analysis of Gossypium hirsutum vegetative growth. Based on plant meshes previously reconstructed from multi-view images, the methodology involves several stages, including morphological mesh segmentation, phenotypic parameter estimation, and tracking of plant organs over time. The initial study focuses on presenting and validating the accuracy of the methodology on dicotyledons such as cotton, but we believe the approach will be more broadly applicable. This study involved applying our technique to a set of six Gossypium hirsutum (cotton) plants studied over four time-points. Manual measurements, performed for each plant at every time-point, were used to assess the accuracy of our pipeline and quantify the error on the morphological parameters estimated. Conclusion By directly comparing our automated mesh-based quantitative data with manual measurements of individual stem height, leaf width, and leaf length, we obtained mean absolute errors of 9.34%, 5.75%, and 8.78%, and correlation coefficients of 0.88, 0.96, and 0.95, respectively. The temporal matching of leaves was accurate in 95% of the cases, and the average execution time required to analyse a plant over four time-points was 4.9 minutes. The mesh processing based methodology is thus considered suitable for quantitative 4D monitoring of plant phenotypic features. PMID:22553969
Neuroimaging for psychotherapy research: Current trends
WEINGARTEN, CAROL P.; STRAUMAN, TIMOTHY J.
2014-01-01
Objective This article reviews neuroimaging studies that inform psychotherapy research. An introduction to neuroimaging methods is provided as background for the increasingly sophisticated breadth of methods and findings appearing in psychotherapy research. Method We compiled and assessed a comprehensive list of neuroimaging studies of psychotherapy outcome, along with selected examples of other types of studies that also are relevant to psychotherapy research. We emphasized magnetic resonance imaging (MRI) since it is the dominant neuroimaging modality in psychological research. Results We summarize findings from neuroimaging studies of psychotherapy outcome, including treatment for depression, obsessive-compulsive disorder (OCD), and schizophrenia. Conclusions The increasing use of neuroimaging methods in the study of psychotherapy continues to refine our understanding of both outcome and process. We suggest possible directions for future neuroimaging studies in psychotherapy research. PMID:24527694
Intravital Fluorescence Videomicroscopy to Study Tumor Angiogenesis and Microcirculation
Vajkoczy, Peter; Ullrich, Axel; Menger, Michael D
2000-01-01
Abstract Angiogenesis and microcirculation play a central role in growth and metastasis of human neoplasms, and, thus, represent a major target for novel treatment strategies. Mechanistic analysis of processes involved in tumor vascularization, however, requires sophisticated in vivo experimental models and techniques. Intravital microscopy allows direct assessment of tumor angiogenesis, microcirculation and overall perfusion. Its application to the study of tumor-induced neovascularization further provides information on molecular transport and delivery, intra- and extravascular cell-to-cell and cell-to-matrix interaction, as well as tumor oxygenation and metabolism. With the recent advances in the field of bioluminescence and fluorescent reporter genes, appropriate for in vivo imaging, the intravital fluorescent microscopic approach has to be considered a powerful tool to study microvascular, cellular and molecular mechanisms of tumor growth. PMID:10933068
Biocomputing nanoplatforms as therapeutics and diagnostics.
Evans, A C; Thadani, N N; Suh, J
2016-10-28
Biocomputing nanoplatforms are designed to detect and integrate single or multiple inputs under defined algorithms, such as Boolean logic gates, and generate functionally useful outputs, such as delivery of therapeutics or release of optically detectable signals. Using sensing modules composed of small molecules, polymers, nucleic acids, or proteins/peptides, nanoplatforms have been programmed to detect and process extrinsic stimuli, such as magnetic fields or light, or intrinsic stimuli, such as nucleic acids, enzymes, or pH. Stimulus detection can be transduced by the nanomaterial via three different mechanisms: system assembly, system disassembly, or system transformation. The increasingly sophisticated suite of biocomputing nanoplatforms may be invaluable for a multitude of applications, including medical diagnostics, biomedical imaging, environmental monitoring, and delivery of therapeutics to target cell populations. Copyright © 2016 Elsevier B.V. All rights reserved.
Quantifying biodiversity using digital cameras and automated image analysis.
NASA Astrophysics Data System (ADS)
Roadknight, C. M.; Rose, R. J.; Barber, M. L.; Price, M. C.; Marshall, I. W.
2009-04-01
Monitoring the effects on biodiversity of extensive grazing in complex semi-natural habitats is labour intensive. There are also concerns about the standardization of semi-quantitative data collection. We have chosen to focus initially on automating the most time-consuming aspect: the image analysis. The advent of cheaper and more sophisticated digital camera technology has led to a sudden increase in the number of habitat monitoring images and the amount of information being collected. We report on the use of automated trail cameras (designed for the game hunting market) to continuously capture images of grazer activity in a variety of habitats at Moor House National Nature Reserve, which is situated in the North of England at an average altitude of over 600 m. Rainfall is high, and in most areas the soil consists of deep peat (1 m to 3 m), populated by a mix of heather, mosses and sedges. The cameras have been in continuous operation over a 6-month period; daylight images are in full colour and night images (IR flash) are black and white. We have developed artificial-intelligence-based methods to assist in the analysis of the large number of images collected, generating alert states for new or unusual image conditions. This paper describes the data collection techniques, outlines the quantitative and qualitative data collected, and proposes online and offline systems that can reduce the manpower overheads and increase focus on important subsets of the collected data. By converting digital image data into statistical composite data, the images can be handled in a similar way to other biodiversity statistics, thus improving the scalability of monitoring experiments. Unsupervised feature detection methods and supervised neural methods were tested and offered solutions for simplifying the process. Accurate (85 to 95%) categorization of faunal content can be obtained, requiring human intervention only for those images containing rare animals or unusual (undecidable) conditions, and enabling automatic deletion of images generated by erroneous triggering (e.g. cloud movements). This is the first step towards a hierarchical image processing framework, where situation subclasses such as birds or climatic conditions can be fed into more appropriate automated or semi-automated data mining software.
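The alert-state idea, flagging new or unusual image conditions so that humans review only rare content, can be sketched as a running outlier test on per-image feature vectors. The choice of features, the Mahalanobis test, and the threshold below are illustrative assumptions, not the authors' specific detectors.

```python
import numpy as np

def update_alert_state(feature_vec, mean, cov_inv, n, threshold=12.0):
    """Flag an image whose feature vector (e.g. edge density, colour
    histogram moments) lies far from the running mean of everything
    seen so far, then fold it into the running statistics.
    threshold is an illustrative squared Mahalanobis distance cutoff."""
    diff = feature_vec - mean
    d2 = float(diff @ cov_inv @ diff)        # squared Mahalanobis distance
    alert = d2 > threshold                   # unusual image: route to human
    new_mean = mean + diff / (n + 1)         # incremental mean update
    return alert, new_mean
```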
The 'Adventist advantage'. Glendale Adventist Medical Center distinguishes itself.
Botvin, Judith D
2002-01-01
Glendale Adventist Medical Center, Glendale, Calif., adopted an image-building campaign to differentiate the 450-bed hospital from its neighbors. This included the headline "Adventist Advantage," used in a series of sophisticated ads, printed in gold. In all their efforts, marketers consider the sensibilities of the sizable Armenian, Korean, Hispanic and Chinese populations.
Aslam, Tariq Mehmood; Shakir, Savana; Wong, James; Au, Leon; Ashworth, Jane
2012-12-01
Mucopolysaccharidoses (MPS) can cause corneal opacification that is currently difficult to objectively quantify. With newer treatments for MPS comes an increased need for a more objective, valid and reliable index of disease severity for clinical and research use. Clinical evaluation by slit lamp is very subjective and techniques based on colour photography are difficult to standardise. In this article the authors present evidence for the utility of dedicated image analysis algorithms applied to images obtained by a highly sophisticated iris recognition camera that is small, manoeuvrable and adapted to achieve rapid, reliable and standardised objective imaging in a wide variety of patients while minimising artefactual interference in image quality.
Coaxial fundus camera for ophthalmology
NASA Astrophysics Data System (ADS)
de Matos, Luciana; Castro, Guilherme; Castro Neto, Jarbas C.
2015-09-01
A fundus camera for ophthalmology is a high-definition device that must provide low-light illumination of the human retina, high resolution at the retina, and a reflection-free image. Those constraints make its optical design very sophisticated, but the most difficult requirements to comply with are the reflection-free illumination and the final alignment, due to the high number of non-coaxial optical components in the system. Reflections of the illumination, both in the objective and at the cornea, mask image quality, and a poor alignment makes the sophisticated optical design useless. In this work we developed a totally axial optical system for a non-mydriatic fundus camera. The illumination is performed by a LED ring, coaxial with the optical system and composed of IR or visible LEDs. The illumination ring is projected by the objective lens onto the cornea. The objective, LED illuminator, and CCD lens are coaxial, making the final alignment easy to perform. The CCD-plus-capture-lens module is a CCTV camera with built-in autofocus and zoom, added to a 175 mm focal length doublet corrected for infinity, making the system easy to operate and very compact.
The role of simulation in neurosurgery.
Rehder, Roberta; Abd-El-Barr, Muhammad; Hooten, Kristopher; Weinstock, Peter; Madsen, Joseph R; Cohen, Alan R
2016-01-01
In an era of residency duty-hour restrictions, there has been a recent effort to implement simulation-based training methods in neurosurgery teaching institutions. Several surgical simulators have been developed, ranging from physical models to sophisticated virtual reality systems. To date, there is a paucity of information describing the clinical benefits of existing simulators and the assessment strategies to help implement them into neurosurgical curricula. Here, we present a systematic review of the current models of simulation and discuss the state of the art and future directions for simulation in neurosurgery. Retrospective literature review. Multiple simulators have been developed for neurosurgical training, including those for minimally invasive procedures, vascular, skull base, pediatric, tumor resection, functional neurosurgery, and spine surgery. The pros and cons of existing systems are reviewed. Advances in imaging and computer technology have led to the development of different simulation models to complement traditional surgical training. Sophisticated virtual reality (VR) simulators with haptic feedback and impressive imaging technology have provided novel options for training in neurosurgery. Breakthrough training simulation using 3D printing technology holds promise for future simulation practice, providing high-fidelity patient-specific models to complement residency surgical learning.
1994-04-12
STS059-S-040 (12 April 1994) --- STS-59's MAPS (Measurement of Air Pollution from Satellites) experiment is sending real-time data that provides the most comprehensive view of carbon monoxide concentrations on Earth ever recorded. This computer image shows a summary of "quick look" data obtained by the MAPS instrument during its first days of operations as part of the Space Shuttle Endeavour's SRL-1 payload. These data will be processed using more sophisticated techniques following the flight. The color red indicates areas with the highest levels of carbon monoxide. These Northern Hemisphere springtime carbon monoxide values are generally significantly higher than the values found in the Southern Hemisphere. This is in direct contrast to the data obtained by the MAPS experiment during November 1981 and October 1984, i.e., during Northern Hemisphere fall. The astronauts aboard Endeavour have seen fires in most of the areas showing higher carbon monoxide values (China, Eastern Australia, and equatorial Africa). The relationship between the observed fires and the higher carbon monoxide values will be investigated following SRL-1 by combining the MAPS data with meteorological data, surface imagery, and Space Shuttle hand-held photographs. By the end of SRL-1, MAPS will have acquired data over most of the globe between 57 degrees north and 57 degrees south latitude. The entire data set will be carefully analyzed using sophisticated post-flight data processing techniques. The data will then be applied in a variety of scientific studies concerning chemistry and transport processes in the atmosphere. The MAPS experiment measures carbon monoxide in the lower atmosphere. This gas is produced both by natural processes and by human activities. The primary human sources of carbon monoxide are automobiles, industry, and the burning of plant materials. The primary natural source is the interaction of sunlight with naturally occurring ozone and water vapor. The strength of all of these sources changes seasonally.
Introducing DeBRa: a detailed breast model for radiological studies
NASA Astrophysics Data System (ADS)
Ma, Andy K. W.; Gunn, Spencer; Darambara, Dimitra G.
2009-07-01
Currently, x-ray mammography is the method of choice in breast cancer screening programmes. As the mammography technology moves from 2D imaging modalities to 3D, conventional computational phantoms do not have sufficient detail to support the studies of these advanced imaging systems. Studies of these 3D imaging systems call for a realistic and sophisticated computational model of the breast. DeBRa (Detailed Breast model for Radiological studies) is the most advanced, detailed, 3D computational model of the breast developed recently for breast imaging studies. A DeBRa phantom can be constructed to model a compressed breast, as in film/screen, digital mammography and digital breast tomosynthesis studies, or a non-compressed breast as in positron emission mammography and breast CT studies. Both the cranial-caudal and mediolateral oblique views can be modelled. The anatomical details inside the phantom include the lactiferous duct system, the Cooper ligaments and the pectoral muscle. The fibroglandular tissues are also modelled realistically. In addition, abnormalities such as microcalcifications, irregular tumours and spiculated tumours are inserted into the phantom. Existing sophisticated breast models require specialized simulation codes. Unlike its predecessors, DeBRa has elemental compositions and densities incorporated into its voxels including those of the explicitly modelled anatomical structures and the noise-like fibroglandular tissues. The voxel dimensions are specified as needed by any study and the microcalcifications are embedded into the voxels so that the microcalcification sizes are not limited by the voxel dimensions. Therefore, DeBRa works with general-purpose Monte Carlo codes. Furthermore, general-purpose Monte Carlo codes allow different types of imaging modalities and detector characteristics to be simulated with ease. DeBRa is a versatile and multipurpose model specifically designed for both x-ray and γ-ray imaging studies.
Dynamic heart phantom with functional mitral and aortic valves
NASA Astrophysics Data System (ADS)
Vannelli, Claire; Moore, John; McLeod, Jonathan; Ceh, Dennis; Peters, Terry
2015-03-01
Cardiac valvular stenosis, prolapse and regurgitation are increasingly common conditions, particularly in an elderly population with limited potential for on-pump cardiac surgery. NeoChord©, MitraClip© and numerous stent-based transcatheter aortic valve implantation (TAVI) devices provide an alternative to intrusive cardiac operations; performed while the heart is beating, these procedures require surgeons and cardiologists to learn new image-guidance-based techniques. Developing these visual aids and protocols is a challenging task that benefits from sophisticated simulators. Existing models lack features needed to simulate off-pump valvular procedures: functional, dynamic valves, apical and vascular access, and user flexibility for different activation patterns such as variable heart rates and rapid pacing. We present a left ventricle phantom with these characteristics. The phantom can be used to simulate valvular repair and replacement procedures with magnetic tracking, augmented reality, fluoroscopy and ultrasound guidance. This tool serves as a platform to develop image-guidance and image processing techniques required for a range of minimally invasive cardiac interventions. The phantom mimics in vivo mitral and aortic valve motion, permitting realistic ultrasound images of these components to be acquired. It also has a physiologically realistic left ventricular ejection fraction of 50%. Given its realistic imaging properties and non-biodegradable composition (silicone for tissue, water for blood), the system promises to reduce the number of animal trials required to develop image guidance applications for valvular repair and replacement. The phantom has been used in validation studies for both TAVI image-guidance techniques [1] and image-based mitral valve tracking algorithms [2].
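For reference, the quoted ejection fraction follows from the standard volume definition; the numbers in the comment are illustrative, not measurements from the phantom.

```latex
% Left ventricular ejection fraction from end-diastolic (EDV) and
% end-systolic (ESV) volumes; an EF of 50% corresponds, for example,
% to EDV = 120 ml and ESV = 60 ml (illustrative values).
\mathrm{EF} = \frac{\mathrm{EDV} - \mathrm{ESV}}{\mathrm{EDV}} \times 100\%
```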
A neural marker of medical visual expertise: implications for training.
Rourke, Liam; Cruikshank, Leanna C; Shapke, Larissa; Singhal, Anthony
2016-12-01
Researchers have identified a component of the EEG that discriminates visual experts from novices. The marker indexes a comprehensive model of visual processing, and if it is apparent in physicians, it could be used to investigate the development and training of their visual expertise. The purpose of this study was to determine whether a neural marker of visual expertise, the enhanced N170 event-related potential (ERP), is apparent in the EEGs of physicians as they interpret diagnostic images. We conducted a controlled trial with 10 cardiologists and 9 pulmonologists. Each participant completed 520 trials of a standard visual processing task involving the rapid evaluation of EKGs and CXRs indicating lung disease. Ostensibly, each participant is expert with one type of image and merely competent with the other. We collected behavioral data on the participants' expertise with EKGs and CXRs, and electrophysiological data on the magnitude, latency, and scalp location of their N170 ERPs as they interpreted the two types of images. Cardiologists demonstrated significantly more expertise with EKGs than CXRs, and this was reflected in an increased amplitude of their N170 ERPs while reading EKGs compared to CXRs. Pulmonologists demonstrated equal expertise with both types of images, and this was reflected in equal N170 ERP amplitudes for EKGs and CXRs. The results suggest, provisionally, that visual expertise has a similar substrate in medical practice as in other domains that have been studied extensively. This provides support for applying a sophisticated body of literature to questions about the training and assessment of visual expertise among physicians.
Space Radar Image of Wadi Kufra, Libya
1998-04-14
The ability of a sophisticated radar instrument to image large regions of the world from space, using different frequencies that can penetrate dry sand cover, produced the discovery in this image: a previously unknown branch of an ancient river, buried under thousands of years of windblown sand in a region of the Sahara Desert in North Africa. This area is near the Kufra Oasis in southeast Libya, centered at 23.3 degrees north latitude, 22.9 degrees east longitude. The image was acquired by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar (SIR-C/X-SAR) when it flew aboard the space shuttle Endeavour on its 60th orbit on October 4, 1994. This SIR-C image reveals a system of old, now inactive stream valleys, called "paleodrainage systems." http://photojournal.jpl.nasa.gov/catalog/PIA01310
Fluorescence Molecular Tomography: Principles and Potential for Pharmaceutical Research
Stuker, Florian; Ripoll, Jorge; Rudin, Markus
2011-01-01
Fluorescence microscopic imaging is widely used in biomedical research to study molecular and cellular processes in cell culture or tissue samples. This is motivated by the high inherent sensitivity of fluorescence techniques, the spatial resolution that compares favorably with cellular dimensions, the stability of the fluorescent labels used, and the sophisticated labeling strategies that have been developed for selectively labeling target molecules. More recently, two- and three-dimensional optical imaging methods have also been applied to monitor biological processes in intact organisms such as animals or even humans. These whole-body optical imaging approaches have to cope with the fact that biological tissue is a highly scattering and absorbing medium. As a consequence, light propagation in tissue is well described by a diffusion approximation, and accurate reconstruction of spatial information is demanding. While in vivo optical imaging is a highly sensitive method, the signal is strongly surface weighted, i.e., the signal detected from the same light source becomes weaker the deeper the source is embedded in tissue, and it strongly depends on the optical properties of the surrounding tissue. Derivation of quantitative information, therefore, requires tomographic techniques such as fluorescence molecular tomography (FMT), which maps the three-dimensional distribution of a fluorescent probe or protein concentration. The combination of FMT with a structural imaging method such as X-ray computed tomography (CT) or Magnetic Resonance Imaging (MRI) allows molecular information to be mapped onto a high-definition anatomical reference and enables the use of prior information on the tissue's optical properties to enhance both resolution and sensitivity. Today many of the fluorescent assays originally developed for studies in cellular systems have been successfully translated for experimental studies in animals. The opportunity of monitoring molecular processes non-invasively in the intact organism is highly attractive from a diagnostic point of view, but even more so for the drug developer, who can use the techniques for proof-of-mechanism and proof-of-efficacy studies. This review elucidates the current status and potential of fluorescence tomography, including recent advances in multimodality imaging approaches for preclinical and clinical drug development. PMID:24310495
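The diffusion approximation mentioned above is usually written, for a continuous-wave source, as the following elliptic equation for the photon fluence; this is the standard forward model that FMT reconstruction inverts.

```latex
% Continuous-wave diffusion approximation: fluence \Phi driven by a
% source S in tissue with absorption \mu_a and reduced scattering \mu_s'.
\nabla \cdot \left( D(\mathbf{r}) \, \nabla \Phi(\mathbf{r}) \right)
  - \mu_a(\mathbf{r}) \, \Phi(\mathbf{r}) = -\,S(\mathbf{r}),
\qquad
D = \frac{1}{3\left(\mu_a + \mu_s'\right)}
```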
Derakhshanrad, Seyed Alireza; Piven, Emily; Ghoochani, Bahareh Zeynalzadeh
2017-10-01
Walter J. Freeman pioneered the neurodynamic model of brain activity when he described the brain dynamics of cognitive information transfer as a process of circular causality at the intention, meaning, and perception (IMP) levels. This view contributed substantially to the establishment of the Intention, Meaning, and Perception Model of Neuro-occupation in occupational therapy. As described by the model, the IMP levels are three components of the brain dynamics system, with nonlinear connections that enable cognitive function to be processed in a circular-causality fashion, known as the Cognitive Process of Circular Causality (CPCC). Although considerable research has been devoted to studying brain dynamics with sophisticated computerized imaging techniques, less attention has been paid to studying it by investigating the adaptation process of thoughts and behaviors. To explore how CPCC manifested in thinking and behavioral patterns, a qualitative case study was conducted on two matched female participants with strokes, who were of comparable ages, affected sides, and other characteristics, except for their resilience and motivational behaviors. CPCC was compared between the two participants by matrix analysis, using content analysis with pre-determined categories. Different patterns of thinking and behavior may have occurred due to disparate regulation of CPCC between the two participants.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, D; Gach, H; Li, H
Purpose: The daily treatment MRIs acquired on MR-IGRT systems, like diagnostic MRIs, suffer from intensity inhomogeneity associated with B1 and B0 inhomogeneities. An improved homomorphic unsharp mask (HUM) filtering method, an automatic and robust body segmentation method, and an imaging field-of-view (FOV) detection method were developed to compute the multiplicative slowly varying correction field and correct the intensity inhomogeneity. The goal is to improve and normalize the voxel intensity so that the images can be processed more accurately by quantitative methods (e.g., segmentation and registration) that require consistent image voxel intensity values. Methods: HUM methods have been widely used for years. A body mask is required; otherwise the body surface in the corrected image would be incorrectly bright due to the sudden intensity transition at the body surface. In this study, we developed an improved HUM-based correction method that includes three main components: 1) robust body segmentation on the normalized image gradient map, 2) robust FOV detection (needed for body segmentation) using region growing and morphologic filters, and 3) an effective implementation of HUM using repeated Gaussian convolution. Results: The proposed method was successfully tested on patient images of common anatomical sites (H/N, lung, abdomen and pelvis). Initial qualitative comparisons showed that this improved HUM method outperformed three recently published algorithms (FCM, LEMS, MICO) in both computation speed (by 50+ times) and robustness (in intermediate to severe inhomogeneity situations). Currently implemented in MATLAB, it takes 20 to 25 seconds to process a 3D MRI volume. Conclusion: Compared to more sophisticated MRI inhomogeneity correction algorithms, the improved HUM method is simple and effective. The inhomogeneity correction, body mask, and FOV detection methods developed in this study would be useful as preprocessing tools for many MRI-related research and clinical applications in radiotherapy. Authors have received research grants from ViewRay and Varian.
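A minimal sketch of the HUM correction described above: the slowly varying multiplicative field is estimated by repeated Gaussian convolution restricted to the body mask (the masking is what prevents the spurious brightening at the body surface), and then divided out. The sigma, number of passes, and intensity renormalization are illustrative assumptions, not the authors' tuned implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hum_correct(img, body_mask, sigma=40.0, passes=3):
    """Homomorphic-unsharp-mask style bias correction (2D or 3D array).
    The correction field is a masked (normalized) smoothing of the image,
    so intensities outside the body never leak into the estimate."""
    m = body_mask.astype(float)
    field = img * m
    norm = m.copy()
    for _ in range(passes):                    # repeated Gaussian convolution
        field = gaussian_filter(field, sigma)
        norm = gaussian_filter(norm, sigma)
    field = field / np.maximum(norm, 1e-6)     # masked smoothing estimate
    field = np.maximum(field, 1e-6)
    corrected = np.where(body_mask, img / field, img)
    # Rescale so the mean intensity inside the body is preserved.
    scale = img[body_mask].mean() / max(corrected[body_mask].mean(), 1e-6)
    return corrected * scale
```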
Extensible Computational Chemistry Environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
2012-08-09
ECCE provides a sophisticated graphical user interface, scientific visualization tools, and the underlying data management framework, enabling scientists to efficiently set up calculations and to store, retrieve, and analyze the rapidly growing volumes of data produced by computational chemistry studies. ECCE was conceived as part of the Environmental Molecular Sciences Laboratory (EMSL) construction to solve the problem of enabling researchers to effectively utilize complex computational chemistry codes and massively parallel high-performance compute resources. Bringing the power of these codes and resources to the desktops of researchers, and thus enabling world-class research without users needing a detailed understanding of the inner workings of either the theoretical codes or the supercomputers needed to run them, was a grand challenge problem in the original version of the EMSL. ECCE allows collaboration among researchers using a web-based data repository where the inputs and results for all calculations done within ECCE are organized. ECCE is a first-of-its-kind end-to-end problem-solving environment for all phases of computational chemistry research: setting up calculations with a sophisticated GUI and direct-manipulation visualization tools, submitting and monitoring calculations on remote high-performance supercomputers without having to be familiar with the details of using these compute resources, and performing results visualization and analysis, including creating publication-quality images. ECCE is a suite of tightly integrated applications that are employed as the user moves through the modeling process.
Reduction and analysis techniques for infrared imaging data
NASA Technical Reports Server (NTRS)
Mccaughrean, Mark
1989-01-01
Infrared detector arrays are becoming increasingly available to the astronomy community, with a number of array cameras already in use at national observatories and others under development at many institutions. As the detector technology and imaging instruments grow more sophisticated, more attention is focused on the business of turning raw data into scientifically significant information. Turning pictures into papers, or equivalently astronomy into astrophysics, both accurately and efficiently, is discussed. Also discussed are some of the factors that can be considered at each of the three major stages: acquisition, reduction, and analysis, concentrating in particular on several of the questions most relevant to the techniques currently applied to near-infrared imaging.
NASA Technical Reports Server (NTRS)
Wattson, R. B.; Harvey, P.; Swift, R.
1975-01-01
An intrinsic silicon charge injection device (CID) television sensor array has been used in conjunction with a CaMoO4 collinear tunable acousto-optic filter, a 61-inch reflector, a sophisticated computer system, and a digital color TV scan converter/computer to produce near-IR images of Saturn and Jupiter with 10 Å spectral resolution and approximately 3 arcsec spatial resolution. The CID camera has successfully obtained digitized 100 x 100 array images with 5 minutes of exposure time and slow-scanned readout to a computer. Details of the equipment setup, innovations, problems, experience, data, and final equipment performance limits are given.
IRLooK: an advanced mobile infrared signature measurement, data reduction, and analysis system
NASA Astrophysics Data System (ADS)
Cukur, Tamer; Altug, Yelda; Uzunoglu, Cihan; Kilic, Kayhan; Emir, Erdem
2007-04-01
Infrared signature measurement capability has a key role in the development of electronic warfare (EW) self-protection systems. In this article, the IRLooK system and its capabilities are introduced. IRLooK is a truly innovative mobile infrared signature measurement system, with all of its design, manufacturing, and integration accomplished by an engineering philosophy peculiar to ASELSAN. IRLooK measures the infrared signatures of military and civil platforms such as fixed/rotary-wing aircraft, tracked/wheeled vehicles, and navy vessels. IRLooK has capabilities for data acquisition, pre-processing, post-processing, analysis, storage, and archiving over the short-wave, mid-wave, and long-wave infrared spectrum by means of its high-resolution radiometric sensors and highly sophisticated software analysis tools. The sensor suite of the IRLooK system includes imaging and non-imaging radiometers and a spectroradiometer. Single or simultaneous multiple in-band measurements, as well as high radiant intensity measurements, can be performed. The system provides detailed information on the spectral, spatial, and temporal infrared signature characteristics of targets. It also determines IR decoy characteristics. The system is equipped with a high-quality, field-proven two-axis tracking mount to facilitate target tracking. Manual or automatic tracking is achieved by using a passive imaging tracker. The system also includes a high-quality weather station and field-calibration equipment, including cavity and extended-area blackbodies. The units composing the system are mounted on flat-bed trailers, and the complete system is designed to be transportable by large-body aircraft.
Minimising back reflections from the common path objective in a fundus camera
NASA Astrophysics Data System (ADS)
Swat, A.
2016-11-01
Eliminating back reflections is critical in the design of a fundus camera with an internal illuminating system. As there is very little light reflected from the retina, even excellent antireflective coatings do not provide sufficient suppression of ghost reflections; therefore, the number of surfaces in the optics common to the illuminating and imaging paths shall be minimised. Typically a single aspheric objective is used. In the paper an alternative approach, an objective with all spherical surfaces, is presented. As more surfaces are required, a more sophisticated method is needed to get rid of back reflections. Typically, back-reflection analysis comprises treating subsequent objective surfaces as mirrors, and reflections from the objective surfaces are traced back through the imaging path. This approach can be applied in both sequential and non-sequential ray tracing. It is good enough for a system check, but it is not very suitable for the early optimisation process in the optical system design phase. Standard ghost-control merit function operands are also available in sequential ray tracing, for example in the Zemax system, but these do not allow a back ray-trace along an alternative optical path (illumination vs. imaging). What is proposed in the paper is a complete method of incorporating ghost-reflected energy into the ray-tracing merit function in sequential mode, which is more efficient in the optimisation process. Although developed for the specific case of a fundus camera, the method might be utilised in a wider range of applications where ghost control is critical.
Focus on: Washington Hospital Center, Biomedical Engineering Department.
Hughes, J D
1995-01-01
The Biomedical Engineering Department of the Washington Hospital Center provides clinical engineering services to an urban 907-bed, tertiary care teaching hospital and a variety of associated healthcare facilities. With an annual budget of over $3,000,000, the 24-person department provides cradle-to-grave support for a host of sophisticated medical devices and imaging systems such as lasers, CT scanners, and linear accelerators as well as traditional patient care instrumentation. Hallmarks of the department include its commitment to customer service and patient care, close collaboration with clinicians and quality assurance teams throughout the hospital system, proactive involvement in all phases of the technology management process, and shared leadership in safety standards with the hospital's risk management group. Through this interactive process, the department has assisted the Center not only in the acquisition of 11,000 active devices with a value of more than $64 million, but also in becoming one of the leading providers of high technology healthcare in the Washington, DC metropolitan area.
Moving Object Detection Using a Parallax Shift Vector Algorithm
NASA Astrophysics Data System (ADS)
Gural, Peter S.; Otto, Paul R.; Tedesco, Edward F.
2018-07-01
There are various algorithms currently in use to detect asteroids from ground-based observatories, but they are generally restricted to linear or mildly curved movement of the target object across the field of view. Space-based sensors in high inclination, low Earth orbits can induce significant parallax in a collected sequence of images, especially for objects at the typical distances of asteroids in the inner solar system. This results in a highly nonlinear motion pattern of the asteroid across the sensor, which requires a more sophisticated search pattern for detection processing. Both the classical pattern matching used in ground-based asteroid search and the more sensitive matched filtering and synthetic tracking techniques, can be adapted to account for highly complex parallax motion. A new shift vector generation methodology is discussed along with its impacts on commonly used detection algorithms, processing load, and responsiveness to asteroid track reporting. The matched filter, template generator, and pattern matcher source code for the software described herein are available via GitHub.
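To make the geometry concrete, here is a hedged sketch (an illustration, not the authors' algorithm) of parallax-aware shift-vector generation: given the sensor's position at each exposure and a hypothesized object range, predict the apparent per-frame pixel shift so frames can be co-added along the hypothesized track:

```python
# Hedged sketch of shift-vector generation under sensor parallax.
# All names and the small-angle detector basis are illustrative assumptions.
import numpy as np

def shift_vectors(sensor_pos_km, unit_los, distance_km, plate_scale_arcsec):
    """Per-frame (dx, dy) pixel shifts of a hypothesized object.

    sensor_pos_km : (N, 3) sensor positions at each exposure midpoint.
    unit_los      : (3,) unit line of sight to the object in frame 0.
    distance_km   : hypothesized range to the object (the search parameter).
    Assumes unit_los is not parallel to the +z reference axis.
    """
    rad_to_arcsec = 206265.0
    obj = sensor_pos_km[0] + distance_km * np.asarray(unit_los)  # fixed point
    los = obj - sensor_pos_km                                    # per-frame LOS
    los /= np.linalg.norm(los, axis=1, keepdims=True)
    x = np.cross(unit_los, [0.0, 0.0, 1.0]); x /= np.linalg.norm(x)
    y = np.cross(unit_los, x)                 # detector-plane basis in frame 0
    dx = (los @ x) * rad_to_arcsec / plate_scale_arcsec          # small angles
    dy = (los @ y) * rad_to_arcsec / plate_scale_arcsec
    return np.stack([dx, dy], axis=1)         # pixel shifts for co-adding
```

Sweeping `distance_km` over a grid of hypothesized ranges yields the family of nonlinear shift tracks that a matched filter or synthetic-tracking stack would then test.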
Rates of inactivation of waterborne coliphages by monochloramine.
Dee, S W; Fogleman, J C
1992-01-01
A sophisticated water quality monitoring program was established to evaluate virus removal through Denver's 1-million-gal (ca. 4-million-liter)/day Direct Potable Reuse Demonstration Plant. As a comparison point for the reuse demonstration plant, Denver's main water treatment facility was also monitored for coliphage organisms. Through routine monitoring of the main plant, it was discovered that coliphage organisms were escaping the water treatment processes. Monochloramine residuals and contact times (CT values) required to achieve 99% inactivation were determined for coliphage organisms entering and leaving this conventional water treatment plant. The coliphages tested in the effluent waters had higher CT values on average than those of the influent waters. CT values established for some of these coliphages suggest that monochloramine alone is not capable of removing 2 orders of magnitude of these specific organisms in a typical water treatment facility. Electron micrographs revealed one distinct type of phage capable of escaping the water treatment processes and three distinct types of phages in all. PMID:1444427
Producing picture-perfect posters.
Bach, D B; Vellet, A D; Karlik, S J; Downey, D B; Levin, M F; Munk, P L
1993-06-01
Scientific posters form an integral part of many radiology meetings. They provide the opportunity for interested parties to read the material at an individualized pace, to study the images in detail, and to return to the exhibit numerous times. Although the content of the poster is undoubtedly its most important component, the visual presentation of the material can enhance or detract from the clarity of the message. With the wide availability of sophisticated computer programs for desktop publishing (DTP), one can now create the poster on a computer monitor with full control of the form as well as the content. This process will result in a professional-appearing poster, yet still allow the author the opportunity to make innumerable revisions, as the poster is visualized in detail on the computer monitor before printing. Furthermore, this process is less expensive than the traditional method of typesetting individual sections separately and mounting them on cardboard for display. The purpose of this article is to present our approach to poster production using commercially available DTP computer programs.
The bedside examination of the vestibulo-ocular reflex (VOR): An update
Kheradmand, A.; Zee, D.S.
2014-01-01
Diagnosing dizzy patients remains a daunting challenge to the clinician in spite of modern imaging and increasingly sophisticated electrophysiological testing. Here we review the major bedside tests of the vestibulo-ocular reflex and how, when combined with a proper examination of the other eye movement systems, one can arrive at an accurate vestibular diagnosis. PMID:22981296
Towards a low-cost photoacoustic microscopy system for evaluation of skin health
NASA Astrophysics Data System (ADS)
Hariri, Ali; Fatima, Afreen; Mohammadian, Nafiseh; Bely, Nicholas; Nasiriavanaki, Mohammadreza
2016-09-01
Photoacoustic imaging (PAI) combines optical and ultrasound imaging; owing to this combination, the system is capable of generating high-resolution images with good penetration depth. With the growing applications of PAI in neurology, vascular biology, dermatology, ophthalmology, tissue engineering, angiogenesis, etc., there is a need to make the system more compact, inexpensive and effective. We therefore designed an economical and compact version of a PAI system by replacing expensive and sophisticated lasers with a robust pulsed laser diode of 905 nm wavelength. In this study, we determine the feasibility of photoacoustic microscopy with a very low excitation energy of 0.1 µJ. We developed a low-cost, portable photoacoustic imaging system, including reflection-mode microscopy; a phantom study was performed in this configuration, and an ex vivo image was obtained from mouse skin.
Brown, Gregory G; Anderson, Vicki; Bigler, Erin D; Chan, Agnes S; Fama, Rosemary; Grabowski, Thomas J; Zakzanis, Konstantine K
2017-11-01
The American Psychological Association (APA) celebrated its 125th anniversary in 2017. As part of this celebration, the APA journal Neuropsychology has published in its November 2017 issue 11 papers describing some of the advances in the field of neuropsychology over the past 25 years. The papers address three broad topics: assessment and intervention, brain imaging, and theory and methods. The papers describe the rise of new assessment and intervention technologies, the impact of evidence for neuroplasticity on neurorehabilitation, examples of the use of mathematical models of cognition to investigate latent neurobehavioral processes, the development of the field of neuropsychology in select international countries, the increasing sophistication of brain imaging methods, the recent evidence for localizationist and connectionist accounts of neurobehavioral functioning, the advances in neurobehavioral genomics, and descriptions of newly developed statistical models of longitudinal change. Together the papers convey evidence of the vibrant growth in the field of neuropsychology over the quarter century since APA's 100th anniversary in 1992. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Validating spatial structure in canopy water content using geostatistics
NASA Technical Reports Server (NTRS)
Sanderson, E. W.; Zhang, M. H.; Ustin, S. L.; Rejmankova, E.; Haxo, R. S.
1995-01-01
Heterogeneity in ecological phenomena is scale dependent and affects the hierarchical structure of image data. AVIRIS pixels average the reflectance produced by complex absorption and scattering interactions between biogeochemical composition, canopy architecture, view and illumination angles, species distributions, and plant cover, as well as other factors. These scales affect validation of pixel reflectance, typically performed by relating pixel spectra to ground measurements acquired at scales of 1 m² or less (e.g., field spectra, foliage and soil samples, etc.). As image analyses become more sophisticated, such as those for detection of canopy chemistry, better validation becomes a critical problem. This paper presents a methodology for bridging between point measurements and pixels using geostatistics. Geostatistics have been extensively used in geological or hydrogeological studies but have received little application in ecological studies. The key criterion for kriging estimation is that the phenomenon varies in space and that an underlying controlling process produces spatial correlation between the measured data points. Ecological variation meets this requirement because communities vary along environmental gradients such as soil moisture, nutrient availability, or topography.
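The standard first step in such a geostatistical analysis is the empirical semivariogram, which quantifies how dissimilarity between point measurements grows with their separation. A minimal sketch follows (binning scheme and the synthetic example data are illustrative assumptions, not the paper's data):

```python
# Hedged sketch of an empirical semivariogram: gamma(h) = 0.5 * E[(z_i - z_j)^2]
# for sample pairs whose separation falls in lag bin h.
import numpy as np

def empirical_semivariogram(coords, values, n_bins=15):
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sq = 0.5 * (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)            # unique pairs only
    lags, semivar = d[iu], sq[iu]
    edges = np.linspace(0, lags.max(), n_bins + 1)
    idx = np.digitize(lags, edges) - 1
    gamma = np.array([semivar[idx == b].mean() if np.any(idx == b) else np.nan
                      for b in range(n_bins)])
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, gamma

# Example with synthetic ground samples of canopy water content:
rng = np.random.default_rng(0)
xy = rng.uniform(0, 100, size=(200, 2))               # sample positions (m)
z = np.sin(xy[:, 0] / 15.0) + 0.1 * rng.standard_normal(200)
h, g = empirical_semivariogram(xy, z)                 # fit a model, then krige
```

A variogram model fitted to `(h, g)` then supplies the spatial-correlation structure that kriging uses to interpolate point measurements up to pixel scales.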
Pixel level optical-transfer-function design based on the surface-wave-interferometry aperture
Zheng, Guoan; Wang, Yingmin; Yang, Changhuei
2010-01-01
The design of the optical transfer function (OTF) is of significant importance for optical information processing in various imaging and vision systems. Typically, OTF design relies on a sophisticated bulk optical arrangement in the light path of the optical system. In this letter, we demonstrate a surface-wave-interferometry aperture (SWIA) that can be directly incorporated onto optical sensors to accomplish OTF design at the pixel level. The whole aperture design is based on the bull's-eye structure: it consists of a central hole (diameter of 300 nm) and a periodic groove (period of 560 nm) in a 340 nm thick gold layer. We show, with both simulation and experiment, that different types of optical transfer functions (notch, highpass and lowpass filters) can be achieved by manipulating the interference between the direct transmission through the central hole and the surface wave (SW) component induced by the periodic groove. Pixel-level OTF design provides a low-cost, ultra-robust, highly compact method for numerous applications such as optofluidic microscopy, wavefront detection, darkfield imaging, and computational photography. PMID:20721038
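As a purely illustrative toy model (not the authors' electromagnetic simulation), the pixel's angular response can be pictured as two-beam interference between the direct transmission and a surface-wave term whose relative phase varies with incidence angle; the amplitudes and phase law below are assumptions chosen only to show the mechanism:

```python
# Hedged toy model: interference of direct transmission and a surface-wave
# component sets the pixel's angular response; sweeping phase_offset moves
# the response between notch-like, highpass-like and lowpass-like shapes.
import numpy as np

def pixel_transfer(theta_rad, a_direct=1.0, a_sw=0.8, phase_offset=np.pi,
                   groove_period_nm=560.0, wavelength_nm=700.0):
    k = 2 * np.pi / wavelength_nm
    phi = phase_offset + k * groove_period_nm * np.sin(theta_rad)  # assumed law
    field = a_direct + a_sw * np.exp(1j * phi)
    return np.abs(field) ** 2

angles = np.linspace(-0.3, 0.3, 301)     # incidence angle (rad)
response = pixel_transfer(angles)        # near-destructive at phi ~ pi: a notch
```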
MovieMaker: a web server for rapid rendering of protein motions and interactions
Maiti, Rajarshi; Van Domselaar, Gary H.; Wishart, David S.
2005-01-01
MovieMaker is a web server that allows short (∼10 s), downloadable movies of protein motions to be generated. It accepts PDB files or PDB accession numbers as input and automatically calculates, renders and merges the necessary image files to create colourful animations covering a wide range of protein motions and other dynamic processes. Users have the option of animating (i) simple rotation, (ii) morphing between two end-state conformers, (iii) short-scale, picosecond vibrations, (iv) ligand docking, (v) protein oligomerization, (vi) mid-scale nanosecond (ensemble) motions and (vii) protein folding/unfolding. MovieMaker does not perform molecular dynamics calculations. Instead it is an animation tool that uses a sophisticated superpositioning algorithm in conjunction with Cartesian coordinate interpolation to rapidly and automatically calculate the intermediate structures needed for many of its animations. Users have extensive control over the rendering style, structure colour, animation quality, background and other image features. MovieMaker is intended to be a general-purpose server that allows both experts and non-experts to easily generate useful, informative protein animations for educational and illustrative purposes. MovieMaker is accessible at . PMID:15980488
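The generic technique behind morphing between two end-state conformers is easy to sketch: rigidly superpose the end states, then linearly interpolate Cartesian coordinates to generate intermediate frames. The sketch below uses the standard Kabsch algorithm for the superposition step as a stand-in; it illustrates the general approach, not MovieMaker's actual algorithm:

```python
# Hedged sketch: rigid superposition (Kabsch) plus Cartesian interpolation
# to produce intermediate frames between two matched coordinate sets.
import numpy as np

def kabsch_superpose(P, Q):
    """Rotate/translate Q onto P; both are (N, 3) arrays of matched atoms."""
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    U, _, Vt = np.linalg.svd(Qc.T @ Pc)
    d = np.sign(np.linalg.det(U @ Vt))           # avoid improper rotation
    R = U @ np.diag([1.0, 1.0, d]) @ Vt
    return Qc @ R + P.mean(0)

def morph_frames(start, end, n_frames=25):
    end_aligned = kabsch_superpose(start, end)
    ts = np.linspace(0.0, 1.0, n_frames)
    return [(1 - t) * start + t * end_aligned for t in ts]
```

Each returned frame would then be rendered and the images merged into the final animation; linear interpolation is the simplest choice, and a production tool may use more sophisticated paths.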
Photonanomedicine: a convergence of photodynamic therapy and nanotechnology
NASA Astrophysics Data System (ADS)
Obaid, Girgis; Broekgaarden, Mans; Bulin, Anne-Laure; Huang, Huang-Chiao; Kuriakose, Jerrin; Liu, Joyce; Hasan, Tayyaba
2016-06-01
As clinical nanomedicine has emerged over the past two decades, phototherapeutic advancements using nanotechnology have also evolved and impacted disease management. Because of unique features attributable to the light activation process of molecules, photonanomedicine (PNM) holds significant promise as a personalized, image-guided therapeutic approach for cancer and non-cancer pathologies. The convergence of advanced photochemical therapies such as photodynamic therapy (PDT) and imaging modalities with sophisticated nanotechnologies is enabling the ongoing evolution of fundamental PNM formulations, such as Visudyne®, into progressive forward-looking platforms that integrate theranostics (therapeutics and diagnostics), molecular selectivity, the spatiotemporally controlled release of synergistic therapeutics, along with regulated, sustained drug dosing. Considering that the envisioned goal of these integrated platforms is proving to be realistic, this review will discuss how PNM has evolved over the years as a preclinical and clinical amalgamation of nanotechnology with PDT. The encouraging investigations that emphasize the potent synergy between photochemistry and nanotherapeutics, in addition to the growing realization of the value of these multi-faceted theranostic nanoplatforms, will assist in driving PNM formulations into mainstream oncological clinical practice as a necessary tool in the medical armamentarium.
Extracting Semantic Building Models from Aerial Stereo Images and Conversion to CityGML
NASA Astrophysics Data System (ADS)
Sengul, A.
2012-07-01
The collection of geographic data is of primary importance for the creation and maintenance of a GIS. Traditionally, the acquisition of 3D information has been the task of photogrammetry using aerial stereo images. Digital photogrammetric systems employ sophisticated software to extract digital terrain models or to plot 3D objects. The demand for 3D city models leads to new applications and new standards. City Geography Markup Language (CityGML), a concept for the modelling and exchange of 3D city and landscape models, defines the classes and relations for the most relevant topographic objects in city and regional models with respect to their geometric, topological, and semantic properties. It is now increasingly accepted, since it fulfils the prerequisites required, e.g., for risk analysis, urban planning, and simulations. There is a need to include existing 3D information derived from photogrammetric processes in CityGML databases. To fill this gap, this paper reports on a framework for transferring data plotted with Erdas LPS and Stereo Analyst for ArcGIS to CityGML using Safe Software's Feature Manipulation Engine (FME).
Neuroimaging of Cerebrovascular Disease in the Aging Brain
Gupta, Ajay; Nair, Sreejit; Schweitzer, Andrew D.; Kishore, Sirish; Johnson, Carl E.; Comunale, Joseph P.; Tsiouris, Apostolos J.; Sanelli, Pina C.
2012-01-01
Cerebrovascular disease remains a significant public health burden with its greatest impact on the elderly population. Advances in neuroimaging techniques allow detailed and sophisticated evaluation of many manifestations of cerebrovascular disease in the brain parenchyma as well as in the intracranial and extracranial vasculature. These tools continue to contribute to our understanding of the multifactorial processes that occur in the age-dependent development of cerebrovascular disease. Structural abnormalities related to vascular disease in the brain and vessels have been well characterized with CT and MRI based techniques. We review some of the pathophysiologic mechanisms in the aging brain and cerebral vasculature and the related structural abnormalities detectable on neuroimaging, including evaluation of age-related white matter changes, atherosclerosis of the cerebral vasculature, and cerebral infarction. In addition, newer neuroimaging techniques, such as diffusion tensor imaging, perfusion techniques, and assessment of cerebrovascular reserve, are also reviewed, as these techniques can detect physiologic alterations which complement the morphologic changes that cause cerebrovascular disease in the aging brain. Further investigation of these advanced imaging techniques has potential application to the understanding and diagnosis of cerebrovascular disease in the elderly. PMID:23185721
Formal treatment of astronomical images with a spatially variable PSF
NASA Astrophysics Data System (ADS)
Sánchez, B. O.; Domínguez, M. J.; Lares, M.
2017-10-01
We present a Python implementation of a method for PSF determination in the context of optimal subtraction of astronomical images. We introduce an expansion of the spatially variant point spread function (PSF) in terms of the Karhunen-Loève basis. The advantage of this approach is that the basis naturally adapts to the data, instead of imposing a fixed, ad hoc analytic form. Reconstruction of simulated images using the measured PSF was analyzed, with good agreement between the reconstructed and original images in terms of sky background level. The technique is simple enough to be incorporated into more sophisticated image subtraction methods, improving their results without extra computational cost in a spatially variant PSF environment.
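In practice, a Karhunen-Loève basis of this kind can be obtained as the principal components of PSF stamps measured across the image; the sketch below illustrates that general construction under stated assumptions (it is not the paper's implementation, and the per-position coefficient maps are hypothetical callables, e.g. low-order polynomial fits):

```python
# Hedged sketch: PCA of measured PSF stamps yields a Karhunen-Loeve basis;
# the spatially variant PSF is then a per-position combination of the basis.
import numpy as np

def kl_basis(psf_stamps, n_components=4):
    """psf_stamps: (N, h, w) stack of normalized PSF cutouts (e.g., stars)."""
    N, h, w = psf_stamps.shape
    X = psf_stamps.reshape(N, h * w)
    mean = X.mean(axis=0)
    # Principal components of the stamp stack via SVD of the centered data.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean.reshape(h, w), Vt[:n_components].reshape(n_components, h, w)

def psf_at(coeff_maps, x, y, mean, basis):
    """Rebuild the PSF at (x, y) from smooth per-component coefficient maps."""
    coeffs = [c(x, y) for c in coeff_maps]   # hypothetical fitted maps
    return mean + sum(a * b for a, b in zip(coeffs, basis))
```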
Cover estimation and payload location using Markov random fields
NASA Astrophysics Data System (ADS)
Quach, Tu-Thach
2014-02-01
Payload location is an approach to find the message bits hidden in steganographic images, but not necessarily their logical order. Its success relies primarily on the accuracy of the underlying cover estimators and can be improved if more estimators are used. This paper presents an approach based on Markov random field to estimate the cover image given a stego image. It uses pairwise constraints to capture the natural two-dimensional statistics of cover images and forms a basis for more sophisticated models. Experimental results show that it is competitive against current state-of-the-art estimators and can locate payload embedded by simple LSB steganography and group-parity steganography. Furthermore, when combined with existing estimators, payload location accuracy improves significantly.
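To show the flavor of such pairwise-constrained estimation, here is a hedged sketch in which each cover pixel is chosen to balance closeness to the stego pixel (unary term) against smoothness with its neighbors (pairwise term); a few ICM sweeps stand in for whatever inference the paper actually uses, and the weights and candidate set are illustrative assumptions:

```python
# Hedged sketch of MRF-style cover estimation via iterated conditional modes.
import numpy as np

def estimate_cover(stego, n_sweeps=5, smooth_weight=0.6):
    """stego: 2D integer image; returns a smoothed cover estimate."""
    cover = stego.astype(np.int32).copy()
    H, W = cover.shape
    candidates = np.array([-2, -1, 0, 1, 2])        # offsets from stego value
    for _ in range(n_sweeps):
        for i in range(1, H - 1):
            for j in range(1, W - 1):
                cands = stego[i, j] + candidates
                unary = (cands - stego[i, j]) ** 2  # stay close to observation
                nbrs = [cover[i-1, j], cover[i+1, j],
                        cover[i, j-1], cover[i, j+1]]
                pairwise = sum(np.abs(cands - n) for n in nbrs)
                cover[i, j] = cands[np.argmin(unary + smooth_weight * pairwise)]
    return cover
```

Pixels where the estimated cover disagrees with the stego image in the relevant bit plane are then the candidate payload locations.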
NASA Astrophysics Data System (ADS)
Dupuy, Pascal; Harter, Jean
1995-09-01
IRIS is a modular infrared thermal imager developed by SAGEM since 1988, based on a 288 by 4 IRCCD detector. The first section of the presentation gives a description of the different modules of the IRIS thermal imager and their evolution in recent years. The second section covers the major evolution, namely the integrated detector cooler assembly (IDCA), using a SOFRADIR 288 by 4 detector and a SAGEM microcooler, now integrated in the IRIS thermal imagers. The third section describes two functions integrated in the IRIS thermal imager: (1) image enhancement, using a digital convolution filter, and (2) automatic hot-point detection and tracking, offering assistance to surveillance and automatic detection. The last section presents several navy, air force, and land programs for which IRIS has already been selected and fielded.
NASA Astrophysics Data System (ADS)
Zaleta, Kristy L.
The purpose of this study was to investigate the impact of gender and type of inquiry curriculum (open or structured) on science process skills and epistemological beliefs in science of sixth grade students. The current study took place in an urban northeastern middle school. The researcher utilized a sample of convenience comprised of 303 sixth grade students taught by four science teachers on separate teams. The study employed mixed methods with a quasi-experimental design, pretest-posttest comparison group with 17 intact classrooms of students. Students' science process skills and epistemological beliefs in science (source, certainty, development, and justification) were measured before and after the intervention, which exposed different groups of students to different types of inquiry (structured or open). Differences between comparison and treatment groups and between male and female students were analyzed after the intervention, on science process skills, using a two-way analysis of covariance (ANCOVA), and, on epistemological beliefs in science, using a two-way multivariate analysis of covariance (MANCOVA). Responses from two focus groups of open inquiry students were cycle coded and examined for themes and patterns. Quantitative measurements indicated that girls scored significantly higher on science process skills than boys, regardless of type of inquiry instruction. Neither gender nor type of inquiry instruction predicted students' epistemological beliefs in science after accounting for students' pretest scores. The dimension Development accounted for 10.6% of the variance in students' science process skills. Qualitative results indicated that students with sophisticated epistemological beliefs expressed engagement with the open-inquiry curriculum. Students in both the sophisticated and naive beliefs groups identified challenges with the curriculum and improvement in learning as major themes. The types of challenges identified differed between the groups: sophisticated beliefs group students focused on their insecurity of not knowing how to complete the activities correctly, and naive beliefs group students focused on the amount of work and how long it took them to complete it. The description of the improvement in learning was at a basic level for the naive beliefs group and at a more complex level for the sophisticated beliefs group. Implications for researchers and educators are discussed.
NASA Astrophysics Data System (ADS)
Sebesta, Mikael; Egelberg, Peter J.; Langberg, Anders; Lindskov, Jens-Henrik; Alm, Kersti; Janicke, Birgit
2016-03-01
Live-cell imaging enables studying dynamic cellular processes that cannot be visualized in fixed-cell assays. An increasing number of scientists in academia and the pharmaceutical industry are choosing live-cell analysis over, or in addition to, traditional fixed-cell assays. We have developed a time-lapse, label-free imaging cytometer, HoloMonitor M4. HoloMonitor M4 helps researchers overcome inherent disadvantages of fluorescent analysis, specifically the effects of chemical labels or genetic modifications, which can alter cellular behavior. Additionally, label-free analysis is simple and eliminates the costs associated with staining procedures. The underlying technology is based on digital off-axis holography. While multiple alternatives exist for this type of analysis, we prioritized our development to achieve the following: a) an all-inclusive system - hardware and sophisticated cytometric analysis software; b) ease of use, enabling utilization of the instrumentation by expert- and entry-level researchers alike; c) validated quantitative assay end-points tracked over time, such as optical path length shift, optical volume and multiple derived imaging parameters; d) reliable digital autofocus; e) robust long-term operation in the incubator environment; f) high throughput and walk-away capability; and finally g) data management suitable for single- and multi-user networks. We provide examples of HoloMonitor applications in label-free cell viability measurements and monitoring of cell cycle phase distribution.
NASA Astrophysics Data System (ADS)
Kuehnel, C.; Hennemuth, A.; Oeltze, S.; Boskamp, T.; Peitgen, H.-O.
2008-03-01
The diagnosis support in the field of coronary artery disease (CAD) is very complex due to the numerous symptoms and studies that lead to the final diagnosis. CTA and MRI are on their way to replacing invasive catheter angiography. Thus, there is a need for sophisticated software tools that present the different analysis results and correlate the anatomical and dynamic image information. We introduce a new software assistant for the combined visualization of results from CTA and MR images, in which a dedicated concept for the structured presentation of original data, segmentation results, and individual findings is realized. To this end, we define a comprehensive class hierarchy and assign suitable interaction functions. User guidance is coupled as closely as possible with the available data, supporting a straightforward workflow design. The analysis results are extracted from two previously developed software assistants, providing coronary artery analysis and measurements, function analysis, and late-enhancement data investigation. As an extension, we introduce a finding concept that directly relates suspicious positions to the underlying data. An affine registration of CT and MR data in combination with the AHA 17-segment model enables the coupling of local findings to positions in all data sets. Furthermore, sophisticated visualization in 2D and 3D and interactive bull's eye plots facilitate the correlation of coronary stenoses and physiology. The software has been evaluated on 20 patient data sets.
Mousavi, Hojjat Seyed; Monga, Vishal; Rao, Ganesh; Rao, Arvind U K
2015-01-01
Histopathological images have rich structural information, are multi-channel in nature and contain meaningful pathological information at various scales. Sophisticated image analysis tools that can automatically extract discriminative information from histopathology slides for diagnosis remain an area of significant research activity. In this work, we focus on automated brain cancer grading, specifically glioma grading. Grading of a glioma is a highly important problem in pathology and is largely done manually by medical experts based on an examination of pathology slides (images). To complement the efforts of clinicians engaged in brain cancer diagnosis, we develop novel image processing algorithms and systems to automatically grade glioma tumors into two categories: low-grade glioma (LGG) and high-grade glioma (HGG), the latter representing a more advanced stage of the disease. We propose novel image processing algorithms based on spatial domain analysis for glioma tumor grading that complement the clinical interpretation of the tissue. The image processing techniques are developed in close collaboration with medical experts to mimic the visual cues that a clinician looks for in judging the grade of the disease. Specifically, two algorithmic techniques are developed: (1) cell segmentation and cell-count profile creation for identification of pseudopalisading necrosis, and (2) a customized operation of spatial and morphological filters to accurately identify microvascular proliferation (MVP). In both techniques, a hierarchical decision is made via a decision tree mechanism. If either pseudopalisading necrosis or MVP is found present in any part of the histopathology slide, the whole slide is identified as HGG, which is consistent with World Health Organization guidelines. Experimental results on the Cancer Genome Atlas database are presented in the form of: (1) successful detection rates of pseudopalisading necrosis and MVP regions, (2) overall classification accuracy into LGG and HGG categories, and (3) receiver operating characteristic curves which can facilitate a desirable trade-off between HGG detection and false-alarm rates. The proposed method demonstrates fairly high accuracy and compares favorably against best-known alternatives such as the state-of-the-art WND-CHARM feature set provided by NIH combined with a powerful support vector machine classifier. Our results reveal that the proposed method can be beneficial to a clinician in effectively separating histopathology slides into LGG and HGG categories, particularly where the analysis of a large number of slides is needed. Our work also reveals that MVP regions are much harder to detect than pseudopalisading necrosis, and increasing the accuracy of automated image processing for MVP detection emerges as a significant future research direction.
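The slide-level decision rule described above is simple to state in code. The following hedged sketch captures only that rule (detector functions are placeholders for the segmentation and filtering pipelines, and the tile-scan structure is an assumption):

```python
# Hedged sketch of the slide-level grading rule: if either pseudopalisading
# necrosis or MVP is detected anywhere on the slide, grade it HGG; else LGG.
def grade_slide(tiles, detect_necrosis, detect_mvp):
    """tiles: iterable of image regions; detectors return True/False."""
    for tile in tiles:
        if detect_necrosis(tile):   # cell segmentation + cell-count profile
            return "HGG"
        if detect_mvp(tile):        # spatial/morphological filtering
            return "HGG"
    return "LGG"
```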
NASA Astrophysics Data System (ADS)
Schuster, J.
2018-02-01
Military requirements demand both single- and dual-color infrared (IR) imaging systems with both high resolution and sharp contrast. To quantify the performance of these imaging systems, a key measure of performance, the modulation transfer function (MTF), describes how well an optical system reproduces an object's contrast in the image plane at different spatial frequencies. At the center of an IR imaging system is the focal plane array (FPA). IR FPAs are hybrid structures consisting of a semiconductor detector pixel array, typically fabricated from HgCdTe, InGaAs or III-V superlattice materials, hybridized with heat/pressure to a silicon read-out integrated circuit (ROIC), with indium bumps on each pixel providing the mechanical and electrical connection. Due to the growing sophistication of the pixel arrays in these FPAs, sophisticated modeling techniques are required to predict, understand, and benchmark the pixel-array MTF that contributes to the total imaging system MTF. To model the pixel-array MTF, computationally exhaustive 2D and 3D numerical simulation approaches are required to correctly account for complex architectures and effects such as lateral diffusion from the pixel corners. It is paramount to accurately model the lateral diffusion (pixel crosstalk), as it can become the dominant mechanism limiting the detector MTF if not properly mitigated. Once the detector MTF has been simulated, it is directly decomposed into its constituent contributions to reveal exactly what is limiting the total detector MTF, providing a path for optimization. An overview of the MTF is given and the simulation approach is discussed in detail, along with how different simulation parameters affect the MTF calculation. Finally, MTF optimization strategies (crosstalk mitigation) are discussed.
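As a textbook-style illustration of the decomposition idea (not the paper's full 2D/3D numerical simulation), the detector MTF is often approximated as a product of a geometric pixel-aperture term and a lateral-diffusion term; the diffusion model and parameter values below are assumptions for illustration:

```python
# Hedged sketch: detector MTF as pixel-aperture sinc times a simple
# diffusion roll-off, so the two contributions can be compared directly.
import numpy as np

def detector_mtf(f_cyc_per_mm, pitch_mm=0.012, diffusion_length_mm=0.004):
    aperture = np.abs(np.sinc(f_cyc_per_mm * pitch_mm))   # geometric pixel
    diffusion = 1.0 / (1.0 + (2 * np.pi * f_cyc_per_mm * diffusion_length_mm) ** 2)
    return aperture * diffusion, aperture, diffusion

f = np.linspace(0, 1 / (2 * 0.012), 200)     # up to Nyquist for 12 um pitch
total, ap, diff = detector_mtf(f)            # inspect which term limits MTF
```

Comparing the two factors at Nyquist shows immediately whether the geometric aperture or the crosstalk term is the limiter, which is the same diagnostic the full decomposition provides.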
Diagnostic Imaging Services in Magnet and Non-Magnet Hospitals: Trends in Utilization and Costs.
Jayawardhana, Jayani; Welton, John M
2015-12-01
The purpose of this study was to better understand trends in utilization and costs of diagnostic imaging services at Magnet hospitals (MHs) and non-Magnet hospitals (NMHs). A data set was created by merging hospital-level data from the American Hospital Association's annual survey and Medicare cost reports, individual-level inpatient data from the Healthcare Cost and Utilization Project, and Magnet recognition status data from the American Nurses Credentialing Center. A descriptive analysis was conducted to evaluate the trends in utilization and costs of CT, MRI, and ultrasound procedures among MHs and NMHs in urban locations between 2000 and 2006 from the following ten states: Arizona, California, Colorado, Florida, Iowa, Maryland, North Carolina, New Jersey, New York, and Washington. When matched by bed size, severity of illness (case mix index), and clinical technological sophistication (Saidin index) quantiles, MHs in higher quantiles indicated higher rates of utilization of imaging services for MRI, CT, and ultrasound in comparison with NMHs in the same quantiles. However, average costs of MRI, CT, and ultrasounds were lower at MHs in comparison with NMHs in the same quantiles. Overall, MHs that are larger in size (number of beds), serve more severely ill patients (case mix index), and are more technologically sophisticated (Saidin index) show higher utilization of diagnostic imaging services, although costs per procedure at MHs are lower in comparison with similar NMHs, indicating possible cost efficiency at MHs. Further research is necessary to understand the relationship between the utilization of diagnostic imaging services among MHs and its impact on patient outcomes. Copyright © 2015 American College of Radiology. Published by Elsevier Inc. All rights reserved.
Computer based guidance in the modern operating room: a historical perspective.
Bova, Frank
2010-01-01
The past few decades have seen the introduction of many different and innovative approaches aimed at enhancing surgical technique. As microprocessors have decreased in size and increased in processing power, more sophisticated systems have been developed. Some systems have attempted to provide enhanced instrument control while others have attempted to provide tools for surgical guidance. These systems include robotics, image enhancements, and frame-based and frameless guidance procedures. In almost every case the system's design goals were achieved and surgical outcomes were enhanced, yet a vast majority of today's surgical procedures are conducted without the aid of these advances. As new tools are developed and existing tools refined, special attention to the systems interface and integration into the operating room environment will be required before increased utilization of these technologies can be realized.
Zhu, Xiao-Hong; Lu, Ming; Chen, Wei
2018-07-01
Brain energy metabolism relies predominantly on glucose and oxygen utilization to generate biochemical energy in the form of adenosine triphosphate (ATP). ATP is essential for maintaining basal electrophysiological activities in a resting brain and supporting evoked neuronal activity under an activated state. Studying complex neuroenergetic processes in the brain requires sophisticated neuroimaging techniques enabling noninvasive and quantitative assessment of cerebral energy metabolism and quantification of metabolic rates. Recent state-of-the-art in vivo X-nuclear MRS techniques, including ²H, ¹⁷O and ³¹P MRS, have shown promise, especially at ultra-high fields, in the quest to understand neuroenergetics and brain function using preclinical models and in human subjects under healthy and diseased conditions. Copyright © 2018 Elsevier Inc. All rights reserved.
1999-04-21
NASA's Space Optics Manufacturing Center has been working to expand our view of the universe via sophisticated new telescopes. The Optics Center's goal is to develop low-cost, advanced space optics technologies for the NASA program in the 21st century - including the long-term goal of imaging Earth-like planets in distant solar systems. To reduce the cost of mirror fabrication, Marshall Space Flight Center (MSFC) has developed replication techniques, machinery, and materials to replicate electro-formed nickel mirrors. The process allows precisely shaped mandrels to be fabricated and then used and reused as masters for replicating high-quality mirrors. Dr. Joe Ritter examines a replicated electro-formed nickel-alloy mirror which exemplifies the improvements in mirror fabrication techniques, with benefits such as dramatic weight reduction, that have been achieved at the Marshall Space Flight Center's Space Optics Manufacturing Technology Center (SOMTC).
Bone Marrow Adipocyte Developmental Origin and Biology.
Bukowska, Joanna; Frazier, Trivia; Smith, Stanley; Brown, Theodore; Bender, Robert; McCarthy, Michelle; Wu, Xiying; Bunnell, Bruce A; Gimble, Jeffrey M
2018-06-01
This review explores how the relationships of bone marrow adipose tissue (BMAT) adipogenesis with advancing age, obesity, and/or bone diseases (osteopenia or osteoporosis) contribute to mechanisms underlying musculoskeletal pathophysiology. Recent studies have redefined adipose tissue as a dynamic, vital organ with functions extending beyond its historic identity as solely an energy reservoir or sink. "State of the art" methodologies provide novel insights into the developmental origin, physiology, and function of different adipose tissue depots. These include genetic tracking of adipose progenitors, application of viral vectors, and sophisticated non-invasive imaging modalities. While constricted within the rigid bone cavity, BMAT vigorously contributes to local and systemic metabolic processes, including hematopoiesis, osteogenesis, and energy metabolism, and undergoes dynamic changes as a function of age, diet, bone topography, or sex. These insights will impact future research and therapies relating to osteoporosis.
NASA Astrophysics Data System (ADS)
Döring, D.; Solodov, I.; Busse, G.
Sound and ultrasound in air are the products of a multitude of different processes and thus can be favorable or undesirable phenomena. Development of experimental tools for non-invasive measurements and imaging of airborne sound fields is of importance for linear and nonlinear nondestructive material testing as well as noise control in industrial or civil engineering applications. One possible solution is based on acousto-optic interaction, like light diffraction imaging. The diffraction approach usually requires a sophisticated setup with fine optical alignment barely applicable in industrial environment. This paper focuses on the application of the robust experimental tool of scanning laser vibrometry, which utilizes commercial off-the-shelf equipment. The imaging technique of air-coupled vibrometry (ACV) is based on the modulation of the optical path length by the acoustic pressure of the sound wave. The theoretical considerations focus on the analysis of acousto-optical phase modulation. The sensitivity of the ACV in detecting vibration velocity was estimated as ~1 mm/s. The ACV applications to imaging of linear airborne fields are demonstrated for leaky wave propagation and measurements of ultrasonic air-coupled transducers. For higher-intensity ultrasound, the classical nonlinear effect of the second harmonic generation was measured in air. Another nonlinear application includes a direct observation of the nonlinear air-coupled emission (NACE) from the damaged areas in solid materials. The source of the NACE is shown to be strongly localized around the damage and proposed as a nonlinear "tag" to discern and image the defects.
Large scale digital atlases in neuroscience
NASA Astrophysics Data System (ADS)
Hawrylycz, M.; Feng, D.; Lau, C.; Kuan, C.; Miller, J.; Dang, C.; Ng, L.
2014-03-01
Imaging in neuroscience has revolutionized our current understanding of brain structure, architecture and, increasingly, its function. Many characteristics of morphology, cell type, and neuronal circuitry have been elucidated through methods of neuroimaging. Combining these data in a meaningful, standardized, and accessible manner is the scope and goal of the digital brain atlas. Digital brain atlases are used today in neuroscience to characterize the spatial organization of neuronal structures, for planning and guidance during neurosurgery, and as a reference for interpreting other data modalities such as gene expression and connectivity data. The field of digital atlases is extensive and, in addition to atlases of the human, includes high-quality brain atlases of the mouse, rat, rhesus macaque, and other model organisms. Using techniques based on histology, structural and functional magnetic resonance imaging, as well as gene expression data, modern digital atlases use probabilistic and multimodal techniques, along with sophisticated visualization software, to form an integrated product. Toward this goal, brain atlases form a common coordinate framework for summarizing, accessing, and organizing this knowledge and will undoubtedly remain a key technology in neuroscience in the future. Since the development of its flagship project, a genome-wide image-based atlas of the mouse brain, the Allen Institute for Brain Science has used imaging as a primary data modality for many of its large-scale atlas projects. We present an overview of Allen Institute digital atlases in neuroscience, with a focus on the challenges and opportunities for image processing and computation.
NASA Astrophysics Data System (ADS)
White, Joshua S.; Matthews, Jeanna N.; Stacy, John L.
2012-06-01
Phishing website analysis is largely still a time-consuming manual process of discovering potential phishing sites, verifying whether suspicious sites truly are malicious spoofs and, if so, distributing their URLs to the appropriate blacklisting services. Attackers increasingly use sophisticated systems for bringing phishing sites up and down rapidly at new locations, making automated response essential. In this paper, we present a method for rapid, automated detection and analysis of phishing websites. Our method relies on near real-time gathering and analysis of URLs posted on social media sites. We fetch the pages pointed to by each URL and characterize each page with a set of easily computed values such as the number of images and links. We also capture a screenshot of the rendered page, compute a hash of the image and use the Hamming distance between these image hashes as a form of visual comparison. We provide initial results that demonstrate the feasibility of our techniques by comparing legitimate sites to known fraudulent versions from Phishtank.com, by actively introducing a series of minor changes to a phishing toolkit captured in a local honeypot, and by performing initial analysis on a set of over 2.8 million URLs posted to Twitter over 4 days in August 2011. We discuss the issues encountered during our testing, such as the resolvability and legitimacy of URLs posted on Twitter, the data sets used, the characteristics of the phishing sites we discovered, and our plans for future work.
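The screenshot-comparison step can be illustrated with a standard perceptual average hash; this hedged sketch mirrors the kind of image-hash-plus-Hamming-distance comparison described above, with the hash size, crude downscaling and threshold all being illustrative assumptions rather than the paper's parameters:

```python
# Hedged sketch: average-hash a rendered page screenshot, then compare pages
# by Hamming distance between their bit strings.
import numpy as np

def average_hash(gray_image, hash_size=8):
    """gray_image: 2D float array; returns a flat boolean hash."""
    h, w = gray_image.shape
    ys = np.arange(hash_size) * h // hash_size
    xs = np.arange(hash_size) * w // hash_size
    small = gray_image[np.ix_(ys, xs)]        # crude nearest-pixel downscale
    return (small > small.mean()).ravel()

def hamming(h1, h2):
    return int(np.count_nonzero(h1 != h2))    # small distance => near-duplicate

# Pages whose hash distance to a known legitimate page falls below a
# threshold (e.g., a few bits out of 64) are flagged as potential spoofs.
```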
Helioviewer: A Web 2.0 Tool for Visualizing Heterogeneous Heliophysics Data
NASA Astrophysics Data System (ADS)
Hughitt, V. K.; Ireland, J.; Lynch, M. J.; Schmeidel, P.; Dimitoglou, G.; Müeller, D.; Fleck, B.
2008-12-01
Solar physics datasets are becoming larger, richer, more numerous and more distributed. Feature/event catalogs (describing objects of interest in the original data) are becoming important tools in navigating these data. In the wake of this increasing influx of data and catalogs there has been a growing need for highly sophisticated tools for accessing and visualizing this wealth of information. Helioviewer is a novel tool for integrating and visualizing disparate sources of solar and Heliophysics data. Taking advantage of the newly available power of modern web application frameworks, Helioviewer merges image and feature catalog data, and provides for Heliophysics data a familiar interface not unlike Google Maps or MapQuest. In addition to streamlining the process of combining heterogeneous Heliophysics datatypes such as full-disk images and coronagraphs, the inclusion of visual representations of automated and human-annotated features provides the user with an integrated and intuitive view of how different factors may be interacting on the Sun. Currently, Helioviewer offers images from The Extreme ultraviolet Imaging Telescope (EIT), The Large Angle and Spectrometric COronagraph experiment (LASCO) and the Michelson Doppler Imager (MDI) instruments onboard The Solar and Heliospheric Observatory (SOHO), as well as The Transition Region and Coronal Explorer (TRACE). Helioviewer also incorporates feature/event information from the LASCO CME List, NOAA Active Regions, CACTus CME and Type II Radio Bursts feature/event catalogs. The project is undergoing continuous development with many more data sources and additional functionality planned for the near future.
Wang, Yu; Helminen, Emily; Jiang, Jingfeng
2015-09-01
Quasistatic ultrasound elastography (QUE) is being used to augment in vivo characterization of breast lesions. Results from early clinical trials indicated that there was a lack of confidence in image interpretation. Such confidence can only be gained through rigorous imaging tests using complex, heterogeneous but known media. The objective of this study is to build a virtual breast QUE simulation platform in the public domain that can be used not only for innovative QUE research but also for rigorous imaging tests. The main thrust of this work is to streamline biomedical ultrasound simulations by leveraging existing open source software packages including Field II (ultrasound simulator), VTK (geometrical visualization and processing), FEBio [finite element (FE) analysis], and Tetgen (mesh generator). However, integration of these open source packages is nontrivial and requires interdisciplinary knowledge. In the first step, a virtual breast model containing complex anatomical geometries was created through a novel combination of image-based landmark structures and randomly distributed (small) structures. Image-based landmark structures were based on data from the NIH Visible Human Project. Subsequently, an unstructured FE-mesh was created by Tetgen. In the second step, randomly positioned point scatterers were placed within the meshed breast model through an octree-based algorithm to make a virtual breast ultrasound phantom. In the third step, an ultrasound simulator (Field II) was used to interrogate the virtual breast phantom to obtain simulated ultrasound echo data. Of note, tissue deformation generated using a FE-simulator (FEBio) was the basis of deforming the original virtual breast phantom in order to obtain the postdeformation breast phantom for subsequent ultrasound simulations. Using the procedures described above, a full cycle of QUE simulations involving complex and highly heterogeneous virtual breast phantoms can be accomplished for the first time. Representative examples were used to demonstrate capabilities of this virtual simulation platform. In the first set of three ultrasound simulation examples, three heterogeneous volumes of interest were selected from a virtual breast ultrasound phantom to perform sophisticated ultrasound simulations. These resultant B-mode images realistically represented the underlying complex but known media. In the second set of three QUE examples, advanced applications in QUE were simulated. The first QUE example was to show breast tumors with complex shapes and/or compositions. The resultant strain images showed complex patterns that were normally seen in freehand clinical ultrasound data. The second and third QUE examples demonstrated (deformation-dependent) nonlinear strain imaging and time-dependent strain imaging, respectively. The proposed virtual QUE platform was implemented and successfully tested in this study. Through show-case examples, the proposed work has demonstrated its capabilities of creating sophisticated QUE data in a way that cannot be done through the manufacture of physical tissue-mimicking phantoms and other software. This open software architecture will soon be made available in the public domain and can be readily adapted to meet specific needs of different research groups to drive innovations in QUE.
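The full simulation cycle described above drives external packages (Tetgen, FEBio, Field II); the hedged toy below replaces each stage with a tiny runnable stand-in purely to make the control flow of a QUE simulation concrete (uniform compression, known displacement, strain as its axial gradient; none of this is the authors' code):

```python
# Hedged toy sketch of the QUE simulation cycle: scatterer phantom ->
# deformation -> strain. Every stage is a simplified stand-in.
import numpy as np

rng = np.random.default_rng(1)
scatterers = rng.uniform(0, 30, size=(5000, 2))    # mm; toy 2D phantom
amplitudes = rng.standard_normal(5000)             # scatterer strengths

def axial_displacement(y, applied_strain=0.02):
    return -applied_strain * y                     # uniform compression stand-in

# "FEBio" stage: deform the phantom with the displacement field.
deformed = scatterers.copy()
deformed[:, 1] += axial_displacement(scatterers[:, 1])

# "Field II + strain estimation" stage: with the true displacement known,
# axial strain is (minus) its gradient; recovers 0.02 everywhere here.
ys = np.linspace(0, 30, 64)
strain = -np.gradient(axial_displacement(ys), ys)
```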
NASA Astrophysics Data System (ADS)
Marschall, L. A.; Snyder, G. A.; Good, R. F.; Hayden, M. B.; Cooper, P. R.
1998-12-01
Students in introductory and advanced astronomy classes can now experience the process of discovering asteroids, can measure proper motions, and can actually see the parallax of real astronomical objects on the screen, using a new set of computer-based exercises from Project CLEA. The heart of the exercise is a sophisticated astrometry program "Astrometry of Asteroids", which is a restricted version of CLEA's research software "Tools for Astrometry" described elsewhere at this meeting. The program, as used in the teaching lab, allows students to read and display digital images, co-align pairs of images using designated reference stars, blink and identify moving objects on the pairs, compare images with charts produced from the HST Guide Star Catalog (GSC), and fit equatorial coordinates to the images using designated reference stars from the GSC. Complete technical manuals for the exercise are provided for the use of the instructor, and a set of digital images, in FITS format, is included for the exercise. A student manual is provided for an exercise in which students go through the step-by-step process of determining the tangential velocity of an asteroid. Students first examine a series of images of a near-earth asteroid taken over several hours, blinking pairs to identify the moving object. They next measure the equatorial coordinates on a half-dozen images, and from this calculate an angular velocity of the object. Finally, using a pair of images of the asteroid taken simultaneously at the National Undergraduate Research Observatory (NURO) and at Colgate University, they measure the parallax of the asteroid, and thus its distance, which enables them to convert the angular velocity into a tangential velocity. An optional set of 10 pairs of images is provided, some of which contain asteroids, so that students can try to "find the asteroid" for themselves. The software is extremely flexible, and though materials are provided for a self-contained exercise, teachers can adapt the material to a wide variety of uses. The software and manuals are currently available on the Web. Project CLEA is supported by grants from Gettysburg College and the National Science Foundation.
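The arithmetic of the exercise's final step is a useful worked example: a simultaneous two-site parallax gives the distance, which converts the measured angular velocity into a tangential velocity. The numbers below are invented purely for illustration, not taken from the CLEA materials:

```python
# Hedged worked example: parallax -> distance -> tangential velocity.
# All input values are made up for illustration.
import math

baseline_km = 300.0                  # separation of the two observatories
parallax_arcsec = 15.0               # angular offset in the simultaneous pair
ang_vel_arcsec_per_s = 0.02          # from positions measured over hours

ARCSEC_PER_RAD = 206265.0
distance_km = baseline_km / (parallax_arcsec / ARCSEC_PER_RAD)   # small angle
v_tan_km_s = distance_km * ang_vel_arcsec_per_s / ARCSEC_PER_RAD

print(f"distance ~ {distance_km:.3g} km, v_tan ~ {v_tan_km_s:.3g} km/s")
# ~4.13e6 km and ~0.4 km/s for these illustrative inputs
```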
DOT National Transportation Integrated Search
2017-06-30
The ever-increasing processing speed and computational power of computers and simulation systems has led to correspondingly larger, more sophisticated representations of evacuation traffic processes. Today, micro-level analyses can be conducted for m...
Techniques to derive geometries for image-based Eulerian computations
Dillard, Seth; Buchholz, James; Vigmostad, Sarah; Kim, Hyunggun; Udaykumar, H.S.
2014-01-01
Purpose The performance of three frequently used level set-based segmentation methods is examined for the purpose of defining features and boundary conditions for image-based Eulerian fluid and solid mechanics models. The focus of the evaluation is to identify an approach that produces the best geometric representation from a computational fluid/solid modeling point of view. In particular, extraction of geometries from a wide variety of imaging modalities and noise intensities, to supply to an immersed boundary approach, is targeted. Design/methodology/approach Two- and three-dimensional images, acquired from optical, X-ray CT, and ultrasound imaging modalities, are segmented with active contours, k-means, and adaptive clustering methods. Segmentation contours are converted to level sets and smoothed as necessary for use in fluid/solid simulations. Results produced by the three approaches are compared visually and with contrast ratio, signal-to-noise ratio, and contrast-to-noise ratio measures. Findings While the active contours method possesses built-in smoothing and regularization and produces continuous contours, the clustering methods (k-means and adaptive clustering) produce discrete (pixelated) contours that require smoothing using speckle-reducing anisotropic diffusion (SRAD). Thus, for images with high contrast and low to moderate noise, active contours are generally preferable. However, adaptive clustering is found to be far superior to the other two methods for images possessing high levels of noise and global intensity variations, due to its more sophisticated use of local pixel/voxel intensity statistics. Originality/value It is often difficult to know a priori which segmentation will perform best for a given image type, particularly when geometric modeling is the ultimate goal. This work offers insight to the algorithm selection process, as well as outlining a practical framework for generating useful geometric surfaces in an Eulerian setting. PMID:25750470
Precision Navigation of Cassini Images Using Rings, Icy Satellites, and Fuzzy Bodies
NASA Astrophysics Data System (ADS)
French, Robert S.; Showalter, Mark R.; Gordon, Mitchell K.
2016-10-01
Before images from the Cassini spacecraft can be analyzed, errors in the published pointing information (up to ~110 pixels for the Imaging Science Subsystem Narrow Angle Camera) must be corrected so that the line of sight vector for each pixel is known. This complicated and labor-intensive process involves matching the image contents with known features such as stars, rings, or moons. Metadata, such as lighting geometry or ring radius and longitude, must be computed for each pixel as well. Both steps require mastering the SPICE toolkit, a highly capable piece of software with a steep learning curve. Only after these steps are completed can the actual scientific investigation begin.We have embarked on a three-year project to perform these steps for all 400,000+ Cassini ISS images as well as images taken by the VIMS, UVIS, and CIRS instruments. The result will be a series of SPICE kernels that include accurate pointing information and a series of backplanes that include precomputed metadata for each pixel. All data will be made public through the PDS Ring-Moon Systems Node (http://www.pds-rings.seti.org). We expect this project to dramatically decrease the time required for scientists to analyze Cassini data.In a previous poster (French et al. 2014, DPS #46, 422.01) we discussed our progress navigating images using stars, simple ring models, and well-defined icy bodies. In this poster we will report on our current progress including the use of more sophisticated ring models, navigation of "fuzzy" bodies such as Titan and Saturn, and use of crater matching on high-resolution images of the icy satellites.
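The star-based part of the navigation step can be sketched simply: predict star pixel positions from the published pointing, detect stars in the image, and estimate the pointing error as a robust offset between the two sets. The nearest-neighbor matching below is a hedged simplification of the real process, with the search radius chosen to match the ~110 pixel error quoted above:

```python
# Hedged sketch: estimate (dx, dy) pointing error from predicted vs.
# detected star positions via nearest-neighbor matching and a median offset.
# Assumes at least one predicted star matches within the search radius.
import numpy as np

def pointing_offset(predicted_xy, detected_xy, search_px=120.0):
    offsets = []
    for p in predicted_xy:
        d = np.linalg.norm(detected_xy - p, axis=1)
        j = np.argmin(d)
        if d[j] < search_px:
            offsets.append(detected_xy[j] - p)
    return np.median(np.array(offsets), axis=0)   # robust (dx, dy) in pixels
```

Applying the recovered offset to the pointing then lets every pixel's line-of-sight vector, and hence the per-pixel metadata, be computed correctly.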
Visual air quality simulation techniques
NASA Astrophysics Data System (ADS)
Molenar, John V.; Malm, William C.; Johnson, Christopher E.
Visual air quality is primarily a human perceptual phenomenon beginning with the transfer of image-forming information through an illuminated, scattering and absorbing atmosphere. Visibility, especially the visual appearance of industrial emissions or the degradation of a scenic view, is the principal atmospheric characteristic through which humans perceive air pollution, and is more sensitive to changing pollution levels than any other air pollution effect. Every attempt to quantify the economic costs and benefits of air pollution has indicated that good visibility is a highly valued and desired environmental condition. Measurement programs can at best approximate the state of the ambient atmosphere at a few points in a scenic vista viewed by an observer. Fully understanding the visual effect of various changes in the concentration and distribution of optically important atmospheric pollutants requires the use of aerosol and radiative transfer models. Communication of the output of these models to scientists, decision makers and the public is best done by applying modern image-processing systems to generate synthetic images representing the modeled air quality conditions. This combination of modeling techniques has been under development for the past 15 yr. Initially, visual air quality simulations were limited, by a lack of computational power, to simplified models depicting Gaussian plumes or uniform haze conditions. The recent explosive growth in low-cost, high-powered computer technology has allowed the development of sophisticated aerosol and radiative transfer models that incorporate realistic terrain, multiple scattering, non-uniform illumination, varying spatial distribution, concentration and optical properties of atmospheric constituents, and relative humidity effects on aerosol scattering properties. This paper discusses these improved models and image-processing techniques in detail. Results addressing uniform and non-uniform layered haze conditions in both urban and remote pristine areas are presented.
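The simplest radiative-transfer relation underlying such simulations is the well-known exponential contrast decay along a sight path (the Beer-Lambert/Koschmieder relation): apparent contrast falls as C(d) = C0 * exp(-b_ext * d). Modern simulators go far beyond this (multiple scattering, terrain, non-uniform illumination), but the baseline is a one-liner; the values below are illustrative:

```python
# Hedged baseline sketch: sight-path contrast decay with extinction.
import numpy as np

def apparent_contrast(c0, b_ext_per_km, distance_km):
    return c0 * np.exp(-b_ext_per_km * distance_km)

# e.g., inherent contrast 0.9 viewed through 30 km of haze, b_ext = 0.05/km:
print(apparent_contrast(0.9, 0.05, 30.0))   # ~0.20
```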
A comprehensive numerical analysis of background phase correction with V-SHARP.
Özbay, Pinar Senay; Deistung, Andreas; Feng, Xiang; Nanz, Daniel; Reichenbach, Jürgen Rainer; Schweser, Ferdinand
2017-04-01
Sophisticated harmonic artifact reduction for phase data (SHARP) is a method to remove background field contributions from MRI phase images, an essential processing step for quantitative susceptibility mapping (QSM). To perform SHARP, a spherical kernel radius and a regularization parameter need to be defined. In this study, we carried out an extensive analysis of the effect of these two parameters on the corrected phase images and on the reconstructed susceptibility maps. Because the parameters depend on acquisition and processing characteristics, we propose a new SHARP scheme with generalized parameters. The new SHARP scheme uses a high-pass filtering approach to define the regularization parameter. We employed the variable-kernel SHARP (V-SHARP) approach, using different maximum radii (Rm) between 1 and 15 mm and varying regularization parameters (f) in a numerical brain model. The local root-mean-square error (RMSE) between the ground-truth, background-corrected field map and the results from SHARP decreased towards the center of the brain. The RMSE of susceptibility maps calculated with a spatial-domain algorithm was smallest for Rm between 6 and 10 mm and f between 0 and 0.01 mm⁻¹, and for maps calculated with a Fourier-domain algorithm, for Rm between 10 and 15 mm and f between 0 and 0.0091 mm⁻¹. We demonstrated and confirmed the new parameter scheme in vivo. The novel regularization scheme allows the use of the same regularization parameter irrespective of other imaging parameters, such as image resolution. Copyright © 2016 John Wiley & Sons, Ltd.
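For readers unfamiliar with the method, the core SHARP operation is a spherical-mean-value filter followed by a truncated deconvolution. The minimal sketch below illustrates that structure only; it omits the brain-mask erosion and the variable-kernel (V-SHARP) machinery of published implementations, and the threshold value is an illustrative assumption:

```python
# Hedged minimal sketch of SHARP-style background field removal:
# convolve phase with (delta - sphere), then apply a truncated inverse.
# Brain masking/erosion and V-SHARP's variable radii are omitted.
import numpy as np

def sharp_background_removal(phase, radius_vox=6, tsvd_threshold=0.05):
    shape = phase.shape
    zz, yy, xx = np.indices(shape)
    center = [s // 2 for s in shape]
    r = np.sqrt((zz - center[0])**2 + (yy - center[1])**2 + (xx - center[2])**2)
    sphere = (r <= radius_vox).astype(float)
    sphere /= sphere.sum()                      # normalized spherical mean
    delta = np.zeros(shape); delta[tuple(center)] = 1.0
    K = np.fft.fftn(np.fft.ifftshift(delta - sphere))
    filtered = np.fft.fftn(phase) * K           # phase minus its spherical mean
    K_inv = np.where(np.abs(K) > tsvd_threshold, 1.0 / K, 0.0)  # truncated inverse
    return np.real(np.fft.ifftn(filtered * K_inv))   # internal field estimate
```

The `tsvd_threshold` here plays the role of the regularization parameter f discussed above: raising it suppresses more of the near-null kernel frequencies, which is what the paper's high-pass filtering interpretation generalizes.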
Real-time rendering for multiview autostereoscopic displays
NASA Astrophysics Data System (ADS)
Berretty, R.-P. M.; Peters, F. J.; Volleberg, G. T. G.
2006-02-01
In video systems, the introduction of 3D video might be the next revolution after the introduction of color. Nowadays multiview autostereoscopic displays are in development. Such displays offer various views at the same time and the image content observed by the viewer depends upon his position with respect to the screen. His left eye receives a signal that is different from what his right eye gets; this gives, provided the signals have been properly processed, the impression of depth. The various views produced on the display differ with respect to their associated camera positions. A possible video format that is suited for rendering from different camera positions is the usual 2D format enriched with a depth related channel, e.g. for each pixel in the video not only its color is given, but also e.g. its distance to a camera. In this paper we provide a theoretical framework for the parallactic transformations which relates captured and observed depths to screen and image disparities. Moreover we present an efficient real time rendering algorithm that uses forward mapping to reduce aliasing artefacts and that deals properly with occlusions. For improved perceived resolution, we take the relative position of the color subpixels and the optics of the lenticular screen into account. Sophisticated filtering techniques results in high quality images.
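The core parallactic relation can be illustrated with a one-scanline forward mapper: each pixel is shifted horizontally by a disparity that depends on its depth relative to the screen plane, with a z-buffer so nearer pixels win at occlusions. This hedged sketch shows the generic 2D-plus-depth technique, not the paper's algorithm; the linear disparity law and its constants are assumptions:

```python
# Hedged sketch: forward-map one scanline of a 2D-plus-depth signal to a
# new viewpoint of a multiview screen, resolving occlusions with a z-buffer.
import numpy as np

def render_view(color_row, depth_row, view_gain):
    """depth_row: 0 = far, 1 = near (normalized); view_gain: signed pixels
    of maximum parallax assigned to this particular view."""
    w = color_row.shape[0]
    out = np.zeros_like(color_row)
    zbuf = np.full(w, -np.inf)
    disparity = (view_gain * (depth_row - 0.5)).round().astype(int)
    for x in range(w):
        t = x + disparity[x]
        if 0 <= t < w and depth_row[x] > zbuf[t]:   # nearer pixel wins
            out[t], zbuf[t] = color_row[x], depth_row[x]
    return out    # holes at disocclusions still need filling/filtering
```

A production renderer additionally handles hole filling, anti-alias filtering and the subpixel/lenticular geometry discussed above.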
A new JPEG-based steganographic algorithm for mobile devices
NASA Astrophysics Data System (ADS)
Agaian, Sos S.; Cherukuri, Ravindranath C.; Schneider, Erik C.; White, Gregory B.
2006-05-01
Currently, cellular phones constitute a significant portion of the global telecommunications market. Modern cellular phones offer sophisticated features such as Internet access, on-board cameras, and expandable memory which provide these devices with excellent multimedia capabilities. Because of the high volume of cellular traffic, as well as the ability of these devices to transmit nearly all forms of data, the need for an increased level of security in wireless communications is becoming a growing concern. Steganography could provide a solution to this important problem. In this article, we present a new algorithm for JPEG-compressed images which is applicable to mobile platforms. This algorithm embeds sensitive information into quantized discrete cosine transform coefficients obtained from the cover JPEG. These coefficients are rearranged based on certain statistical properties and the inherent processing and memory constraints of mobile devices. Based on the energy variation and block characteristics of the cover image, the sensitive data is hidden by using a switching embedding technique proposed in this article. The proposed system offers high capacity while simultaneously withstanding visual and statistical attacks. Based on simulation results, the proposed method demonstrates an improved retention of first-order statistics when compared to existing JPEG-based steganographic algorithms, while maintaining a capacity which is comparable to F5 for certain cover images.
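The switching embedding technique itself is not detailed in this abstract, but the substrate it operates on is common to JPEG steganography: writing message bits into quantized DCT coefficients. A minimal, deliberately naive LSB sketch under that assumption (the block ranking by energy variation and the mobile memory constraints from the paper are not modeled):

```python
import numpy as np

def embed_bits(coeffs, bits):
    # coeffs: int array of shape (n_blocks, 64), quantized DCT blocks.
    # Writes bits into LSBs of nonzero AC coefficients, preserving sign.
    it = iter(bits)
    for block in coeffs:
        for i in range(1, 64):          # index 0 is the DC coefficient
            c = int(block[i])
            if c == 0:
                continue                # zeros are left untouched
            b = next(it, None)
            if b is None:
                return coeffs           # message exhausted
            mag = (abs(c) & ~1) | b     # set LSB of the magnitude
            # note: |c| == 1 with bit 0 shrinks to 0, the classic
            # "shrinkage" problem that schemes like F5 must handle
            block[i] = mag if c > 0 else -mag
    return coeffs
```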
Green, Walton A.; Little, Stefan A.; Price, Charles A.; Wing, Scott L.; Smith, Selena Y.; Kotrc, Benjamin; Doria, Gabriela
2014-01-01
The reticulate venation that is characteristic of a dicot leaf has excited interest from systematists for more than a century, and from physiological and developmental botanists for decades. The tools of digital image acquisition and computer image analysis, however, are only now approaching the sophistication needed to quantify aspects of the venation network found in real leaves quickly, easily, accurately, and reliably enough to produce biologically meaningful data. In this paper, we examine 120 leaves distributed across vascular plants (representing 118 genera and 80 families) using two approaches: a semiquantitative scoring system called “leaf ranking,” devised by the late Leo Hickey, and an automated image-analysis protocol. In the process of comparing these approaches, we review some methodological issues that arise in trying to quantify a vein network, and discuss the strengths and weaknesses of automatic data collection and human pattern recognition. We conclude that subjective leaf rank provides a relatively consistent, semiquantitative measure of areole size among other variables; that modal areole size is generally consistent across large sections of a leaf lamina; and that both approaches—semiquantitative, subjective scoring; and fully quantitative, automated measurement—have appropriate places in the study of leaf venation. PMID:25202646
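One concrete way to automate the areole-size measurement discussed above is to treat areoles as connected background regions enclosed by a binarized vein network and read off the mode of their size distribution. This is a hedged sketch, not the authors' protocol; the scipy labeling and log-binned mode are assumptions:

```python
import numpy as np
from scipy import ndimage

def areole_sizes(vein_mask, pixel_area):
    # vein_mask: boolean image, True on vein pixels. Areoles are the
    # connected non-vein regions; border-touching regions should be
    # excluded in practice (omitted here for brevity).
    labels, n = ndimage.label(~vein_mask)
    return np.bincount(labels.ravel())[1:] * pixel_area

def modal_areole_size(sizes, bins=30):
    # Mode of the log-binned size histogram, a robust summary for
    # the heavy-tailed size distributions of real leaves.
    hist, edges = np.histogram(np.log10(sizes), bins=bins)
    return 10 ** edges[np.argmax(hist)]
```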
ERIC Educational Resources Information Center
Dann, Chris; Richardson, Tony
2015-01-01
This article examines the case of Catch Me Excel (CeMeE), an electronic feedback system developed to facilitate video, image and written feedback in the workplace to educators about pedagogical outcomes. It comprises a sophisticated technological feedback system whose resultant data can be used to enhance classroom, schools and…
NASA Technical Reports Server (NTRS)
Squyres, S. W.
1993-01-01
The MESUR mission will place a network of small, robust landers on the Martian surface, making a coordinated set of observations for at least one Martian year. MESUR presents some major challenges for development of instruments, instrument deployment systems, and onboard data processing techniques. The instrument payload has not yet been selected, but the straw man payload is (1) a three-axis seismometer; (2) a meteorology package that senses pressure, temperature, wind speed and direction, humidity, and sky brightness; (3) an alpha-proton-X-ray spectrometer (APXS); (4) a thermal analysis/evolved gas analysis (TA/EGA) instrument; (5) a descent imager; (6) a panoramic surface imager; (7) an atmospheric structure instrument (ASI) that senses pressure, temperature, and acceleration during descent to the surface; and (8) radio science. Because of the large number of landers to be sent (about 16), all these instruments must be very lightweight. All but the descent imager and the ASI must survive landing loads that may approach 100 g. The meteorology package, seismometer, and surface imager must be able to survive on the surface for at least one Martian year. The seismometer requires deployment off the lander body. The panoramic imager and some components of the meteorology package require deployment above the lander body. The APXS must be placed directly against one or more rocks near the lander, prompting consideration of a microrover for deployment of this instrument. The TA/EGA requires a system to acquire, contain, and heat a soil sample. Both the imagers and, especially, the seismometer will be capable of producing large volumes of data, and will require use of sophisticated data compression techniques.
WE-D-303-00: Computational Phantoms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lewis, John; Brigham and Women’s Hospital and Dana-Farber Cancer Institute, Boston, MA
2015-06-15
Modern medical physics deals with complex problems such as 4D radiation therapy and imaging quality optimization. Such problems involve a large number of radiological parameters, and anatomical and physiological breathing patterns. A major challenge is how to develop, test, evaluate and compare various new imaging and treatment techniques, which often involves testing over a large range of radiological parameters as well as varying patient anatomies and motions. It would be extremely challenging, if not impossible, both ethically and practically, to test every combination of parameters and every task on every type of patient under clinical conditions. Computer-based simulation using computational phantoms offers a practical technique with which to evaluate, optimize, and compare imaging technologies and methods. Within simulation, the computerized phantom provides a virtual model of the patient's anatomy and physiology. Imaging data can be generated from it as if it were a live patient, using accurate models of the physics of the imaging and treatment process. With sophisticated simulation algorithms, it is possible to perform virtual experiments entirely on the computer. By serving as virtual patients, computational phantoms hold great promise in solving some of the most complex problems in modern medical physics. In this proposed symposium, we will present the history and recent developments of computational phantom models, share experiences in their application to advanced imaging and radiation applications, and discuss their promises and limitations. Learning Objectives: (1) Understand the need and requirements of computational phantoms in medical physics research; (2) Discuss the developments and applications of computational phantoms; (3) Know the promises and limitations of computational phantoms in solving complex problems.
Online Process Scaffolding and Students' Self-Regulated Learning with Hypermedia.
ERIC Educational Resources Information Center
Azevedo, Roger; Cromley, Jennifer G.; Thomas, Leslie; Seibert, Diane; Tron, Myriam
This study examined the role of different scaffolding instructional interventions in facilitating students' shift to more sophisticated mental models as indicated by both performance and process data. Undergraduate students (n=53) were randomly assigned to 1 of 3 scaffolding conditions (adaptive content and process scaffolding (ACPS), adaptive…
Applications of Machine Learning for Radiation Therapy.
Arimura, Hidetaka; Nakamoto, Takahiro
2016-01-01
Radiation therapy has advanced substantially as image-guided radiation therapy (IGRT) by taking advantage of image engineering technologies. Recently, novel frameworks based on image engineering technologies as well as machine learning technologies have been studied to make radiation therapy more sophisticated. In this review paper, the author introduces several studies of applications of machine learning to radiation therapy. For example, a method to determine the threshold values of standardized uptake value (SUV) for estimation of gross tumor volume (GTV) in positron emission tomography (PET) images, an approach to estimate multileaf collimator (MLC) position errors between treatment plans and radiation delivery, and prediction frameworks for esophageal stenosis and radiation pneumonitis risk after radiation therapy are described. Finally, the author introduces seven issues that one should consider when applying machine learning models to radiation therapy.
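The first example, SUV thresholding for GTV estimation, can be stated in a few lines. The conventional baseline is a fixed fraction of the maximum SUV; the machine-learning approach reviewed here effectively replaces that fixed fraction with a learned, image-dependent value. A minimal sketch of the baseline (the 40% figure is a commonly quoted convention, not a value from this paper):

```python
import numpy as np

def gtv_mask(suv, fraction=0.40):
    # Fixed-fraction thresholding: voxels at or above
    # fraction * SUVmax are labeled as gross tumor volume.
    return suv >= fraction * suv.max()
```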
NASA Astrophysics Data System (ADS)
Lin, Chien-Liang; Su, Yu-Zheng; Hung, Min-Wei; Huang, Kuo-Cheng
2010-08-01
In recent years, Augmented Reality (AR)[1][2][3] has become very popular in universities and research organizations. The AR technology has been widely used in Virtual Reality (VR) fields, such as sophisticated weapons, flight vehicle development, data model visualization, virtual training, entertainment and arts. AR has characteristics that enhance the display output as a real environment with specific user-interactive functions or specific object recognition. It can be used in medical treatment, anatomy training, precision instrument casting, warplane guidance, engineering and distance robot control. AR has many advantages over VR. The system developed here combines sensors, software and imaging algorithms to make the augmented content feel real, tangible and present to users. The imaging algorithms include a gray-level method, an image binarization method, and a white balance method, in order to achieve accurate image recognition and overcome the effects of lighting.
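Of the imaging steps listed above, binarization is the easiest to make concrete. Below is a self-contained sketch of Otsu's method, a standard gray-level binarization; the paper does not state which binarization rule it uses, so this is an illustrative stand-in:

```python
import numpy as np

def otsu_binarize(gray):
    # gray: uint8 image. Choose the threshold that maximizes
    # between-class variance of the intensity histogram.
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    w = np.cumsum(p)                       # cumulative class probability
    mu = np.cumsum(p * np.arange(256))     # cumulative class mean
    var_between = (mu[-1] * w - mu) ** 2 / (w * (1.0 - w) + 1e-12)
    t = int(np.argmax(var_between))
    return (gray > t).astype(np.uint8), t
```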
Clegg, G; Roebuck, S; Steedman, D
2001-01-01
Objectives—To develop a computer based storage system for clinical images—radiographs, photographs, ECGs, text—for use in teaching, training, reference and research within an accident and emergency (A&E) department. Exploration of methods to access and utilise the data stored in the archive. Methods—Implementation of a digital image archive using flatbed scanner and digital camera as capture devices. A sophisticated coding system based on ICD 10. Storage via an "intelligent" custom interface. Results—A practical solution to the problems of clinical image storage for teaching purposes. Conclusions—We have successfully developed a digital image capture and storage system, which provides an excellent teaching facility for a busy A&E department. We have revolutionised the practice of the "hand-over meeting". PMID:11435357
An automated digital imaging system for environmental monitoring applications
Bogle, Rian; Velasco, Miguel; Vogel, John
2013-01-01
Recent improvements in the affordability and availability of high-resolution digital cameras, data loggers, embedded computers, and radio/cellular modems have advanced the development of sophisticated automated systems for remote imaging. Researchers have successfully placed and operated automated digital cameras in remote locations and in extremes of temperature and humidity, ranging from the islands of the South Pacific to the Mojave Desert and the Grand Canyon. With the integration of environmental sensors, these automated systems are able to respond to local conditions and modify their imaging regimes as needed. In this report we describe in detail the design of one type of automated imaging system developed by our group. It is easily replicated, low-cost, highly robust, and is a stand-alone automated camera designed to be placed in remote locations, without wireless connectivity.
Image-guided interventions and computer-integrated therapy: Quo vadis?
Peters, Terry M; Linte, Cristian A
2016-10-01
Significant efforts have been dedicated to minimizing invasiveness associated with surgical interventions, most of which have been possible thanks to the developments in medical imaging, surgical navigation, visualization and display technologies. Image-guided interventions have promised to dramatically change the way therapies are delivered to many organs. However, in spite of the development of many sophisticated technologies over the past two decades, other than some isolated examples of successful implementations, minimally invasive therapy is far from enjoying the wide acceptance once envisioned. This paper provides a large-scale overview of the state-of-the-art developments, identifies several barriers thought to have hampered the wider adoption of image-guided navigation, and suggests areas of research that may potentially advance the field. Copyright © 2016. Published by Elsevier B.V.
Controls for Burning Solid Wastes
ERIC Educational Resources Information Center
Toro, Richard F.; Weinstein, Norman J.
1975-01-01
Modern thermal solid waste processing systems are becoming more complex, incorporating features that require instrumentation and control systems to a degree greater than that previously required just for proper combustion control. With the advent of complex, sophisticated, thermal processing systems, TV monitoring and computer control should…
Budget Limits Prompt Humming Hive Togetherness.
ERIC Educational Resources Information Center
Dain, Jo Anne
1983-01-01
Describes Truckee Meadows Community College's word processing center, in which students are trained in modern word processing techniques on the same equipment that meets the college's needs for a sophisticated computerized system. Considers equipment, safeguards, advantages, and current and potential uses of the center. (DMM)
Fast, cheap and in control: spectral imaging with handheld devices
NASA Astrophysics Data System (ADS)
Gooding, Edward A.; Deutsch, Erik R.; Huehnerhoff, Joseph; Hajian, Arsen R.
2017-05-01
Remote sensing has moved out of the laboratory and into the real world. Instruments using reflection or Raman imaging modalities become faster, cheaper and more powerful annually. Enabling technologies include virtual slit spectrometer design, high power multimode diode lasers, fast open-loop scanning systems, low-noise IR-sensitive array detectors and low-cost computers with touchscreen interfaces. High-volume manufacturing assembles these components into inexpensive portable or handheld devices that make possible sophisticated decision-making based on robust data analytics. Examples include threat, hazmat and narcotics detection; remote gas sensing; biophotonic screening; environmental remediation and a host of other applications.
Improving multispectral satellite image compression using onboard subpixel registration
NASA Astrophysics Data System (ADS)
Albinet, Mathieu; Camarero, Roberto; Isnard, Maxime; Poulet, Christophe; Perret, Jokin
2013-09-01
Future CNES earth observation missions will have to deal with an ever increasing telemetry data rate due to improvements in resolution and addition of spectral bands. Current CNES image compressors implement a discrete wavelet transform (DWT) followed by a bit-plane encoding (BPE), but only on a mono-spectral basis, and do not profit from the multispectral redundancy of the observed scenes. Recent CNES studies have proven a substantial gain on the achievable compression ratio, +20% to +40% on selected scenarios, by implementing a multispectral compression scheme based on a Karhunen-Loeve transform (KLT) followed by the classical DWT+BPE. But such results can be achieved only on perfectly registered bands; a registration error as low as 0.5 pixel ruins all the benefits of multispectral compression. In this work, we first study the possibility to implement a multi-band subpixel onboard registration based on registration grids generated on-the-fly by the satellite attitude control system and simplified resampling and interpolation techniques. Indeed, band registration is usually performed on the ground using sophisticated techniques too computationally intensive for onboard use. This fully quantized algorithm is tuned to meet acceptable registration performances within stringent image quality criteria, with the objective of onboard real-time processing. In a second part, we describe an FPGA implementation developed to evaluate the design complexity and, by extrapolation, the data rate achievable on a space-qualified ASIC. Finally, we present the impact of this approach on the processing chain not only onboard but also on ground, and the impacts on the design of the instrument.
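The spectral decorrelation at the heart of the scheme is a Karhunen-Loeve transform across bands, i.e., a PCA of the inter-band covariance applied pixel-wise, after which the usual DWT+BPE chain compresses the decorrelated "eigen-bands". A minimal floating-point sketch (the fully quantized onboard version described above is far more constrained):

```python
import numpy as np

def klt_bands(cube):
    # cube: (bands, rows, cols), assumed registered to subpixel
    # accuracy -- as the study notes, the KLT gain collapses otherwise.
    b = cube.reshape(cube.shape[0], -1).astype(float)
    mean = b.mean(axis=1, keepdims=True)
    cov = np.cov(b - mean)
    vals, vecs = np.linalg.eigh(cov)       # eigenvalues ascending
    order = vecs[:, ::-1].T                # principal components first
    klt = order @ (b - mean)
    return klt.reshape(cube.shape), order, mean
```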
Word Processing and Its Implications for Business Communications Courses.
ERIC Educational Resources Information Center
Kruk, Leonard B.
Word processing, a systematic approach to office work, is currently based on the use of sophisticated dictating and typing machines. The word processing market is rapidly increasing with the paper explosion brought on by such factors as increasing governmental regulation, Internal Revenue Service requirements, and the need for stockholders to be…
ERIC Educational Resources Information Center
Seethamraju, Ravi
2011-01-01
The sophistication of the integrated world of work and increased recognition of business processes as critical corporate assets require graduates to develop "process orientation" and an "integrated view" of business. Responding to these dynamic changes in business organizations, business schools are also continuing to modify…
ERIC Educational Resources Information Center
Sun, Lina
2017-01-01
Graphic novels, which tell real and fictional stories using a combination of words and images, are often sophisticated, and involve intriguing topics. There has been an increasing interest in teaching with graphic novels to promote literacy as one alternative to traditional literacy pedagogy (e.g., Gorman, 2003; Schwarz, 2002). A pedagogy of…
ERIC Educational Resources Information Center
Kovack-Lesh, Kristine A.; McMurray, Bob; Oakes, Lisa M.
2014-01-01
We assessed the eye-movements of 4-month-old infants (N = 38) as they visually inspected pairs of images of cats or dogs. In general, infants who had previous experience with pets exhibited more sophisticated inspection than did infants without pet experience, both directing more visual attention to the informative head regions of the animals,…
Extending the Stabilized Supralinear Network model for binocular image processing.
Selby, Ben; Tripp, Bryan
2017-06-01
The visual cortex is both extensive and intricate. Computational models are needed to clarify the relationships between its local mechanisms and high-level functions. The Stabilized Supralinear Network (SSN) model was recently shown to account for many receptive field phenomena in V1, and also to predict subtle receptive field properties that were subsequently confirmed in vivo. In this study, we performed a preliminary exploration of whether the SSN is suitable for incorporation into large, functional models of the visual cortex, considering both its extensibility and computational tractability. First, whereas the SSN receives abstract orientation signals as input, we extended it to receive images (through a linear-nonlinear stage), and found that the extended version behaved similarly. Secondly, whereas the SSN had previously been studied in a monocular context, we found that it could also reproduce data on interocular transfer of surround suppression. Finally, we reformulated the SSN as a convolutional neural network, and found that it scaled well on parallel hardware. These results provide additional support for the plausibility of the SSN as a model of lateral interactions in V1, and suggest that the SSN is well suited as a component of complex vision models. Future work will use the SSN to explore relationships between local network interactions and sophisticated vision processes in large networks. Copyright © 2017 Elsevier Ltd. All rights reserved.
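For reference, the SSN's defining dynamics use a supralinear (power-law) input-output function, tau dr/dt = -r + k [W r + h]_+^n; the extension described above replaces the abstract input h with the output of a linear-nonlinear image stage. A minimal Euler-integration sketch with illustrative parameter values (not those of the paper):

```python
import numpy as np

def ssn_step(r, W, h, k=0.04, n=2.0, tau=0.01, dt=0.0005):
    # One Euler step of tau dr/dt = -r + k * [W r + h]_+^n
    drive = np.maximum(W @ r + h, 0.0)
    return r + (dt / tau) * (-r + k * drive ** n)

def ssn_settle(W, h, steps=4000, **kw):
    # Run to (approximate) steady state for a fixed input h.
    r = np.zeros(h.shape)
    for _ in range(steps):
        r = ssn_step(r, W, h, **kw)
    return r
```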
Sensing Super-Position: Human Sensing Beyond the Visual Spectrum
NASA Technical Reports Server (NTRS)
Maluf, David A.; Schipper, John F.
2007-01-01
The coming decade of fast, cheap and miniaturized electronics and sensory devices opens new pathways for the development of sophisticated equipment to overcome limitations of the human senses. This paper addresses the technical feasibility of augmenting human vision through Sensing Super-position by mixing in natural human sensing. The current implementation of the device translates visual and other passive or active sensory instruments into sounds, which become relevant when the visual resolution is insufficient for very difficult and particular sensing tasks. A successful Sensing Super-position meets many human and pilot vehicle system requirements. The system can be further developed into a cheap, portable, and low-power form, taking into account the limited capabilities of the human user as well as the typical characteristics of his dynamic environment. The system operates in real time, giving the desired information for the particular augmented sensing tasks. The Sensing Super-position device increases the image resolution perception, which is obtained via an auditory representation as well as the visual representation. Auditory mapping is performed to distribute an image in time. The three-dimensional spatial brightness and multi-spectral maps of a sensed image are processed using real-time image processing techniques (e.g. histogram normalization) and transformed into a two-dimensional map of an audio signal as a function of frequency and time. This paper details the approach of developing Sensing Super-position systems as a way to augment the human vision system by exploiting the capabilities of the human hearing system as an additional neural input. The human hearing system is capable of learning to process and interpret extremely complicated and rapidly changing auditory patterns. The known capabilities of the human hearing system to learn and understand complicated auditory patterns provided the basic motivation for developing an image-to-sound mapping system. The human brain is superior to most existing computer systems in rapidly extracting relevant information from blurred, noisy, and redundant images. From a theoretical viewpoint, this means that the available bandwidth is not exploited in an optimal way. While image-processing techniques can manipulate, condense and focus the information (e.g., Fourier Transforms), keeping the mapping as direct and simple as possible might also reduce the risk of accidentally filtering out important clues. After all, a perfectly non-redundant sound representation in particular is prone to loss of relevant information in the non-perfect human hearing system. Also, a complicated non-redundant image-to-sound mapping may well be far more difficult to learn and comprehend than a straightforward mapping, while the mapping system would increase in complexity and cost. This work will demonstrate some basic information processing for optimal information capture for head-mounted systems.
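The frequency-time mapping described above can be caricatured in a few lines: scan the image left to right, let vertical position set sinusoid frequency and brightness set amplitude. The sketch below is one simple such mapping; the log-frequency axis and the parameter values are assumptions, and the per-column phase resets would need smoothing in a usable system:

```python
import numpy as np

def image_to_sound(img, duration=1.0, fs=22050, fmin=200.0, fmax=8000.0):
    # img: float image in [0, 1], shape (rows, cols); the top row maps
    # to the highest frequency, and columns are scanned in time order.
    rows, cols = img.shape
    freqs = np.logspace(np.log10(fmax), np.log10(fmin), rows)
    col_len = int(duration * fs / cols)
    t = np.arange(col_len) / fs
    columns = [
        (img[:, c, None] * np.sin(2 * np.pi * freqs[:, None] * t)).sum(axis=0)
        for c in range(cols)
    ]
    audio = np.concatenate(columns)
    return audio / (np.abs(audio).max() + 1e-12)
```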
Managing Credit Card Expenses: Nova Southeastern University Shares Cost-Saving Techniques.
ERIC Educational Resources Information Center
Peskin, Carol Ann
1994-01-01
Nova Southeastern University, Florida, has implemented a variety of techniques of cost containment for campus credit card transactions. These include restricted card acceptance parameters, careful merchant rate negotiation, increased automation of transaction processing, and sophisticated processing techniques. The university has demonstrated…
Assessment of input uncertainty by seasonally categorized latent variables using SWAT
USDA-ARS?s Scientific Manuscript database
Watershed processes have been explored with sophisticated simulation models for the past few decades. It has been stated that uncertainty attributed to alternative sources such as model parameters, forcing inputs, and measured data should be incorporated during the simulation process. Among varyin...
Recent advances in the application of electron tomography to materials chemistry.
Leary, Rowan; Midgley, Paul A; Thomas, John Meurig
2012-10-16
Nowadays, tomography plays a central role in pure and applied science, in medicine, and in many branches of engineering and technology. It entails reconstructing the three-dimensional (3D) structure of an object from a tilt series of two-dimensional (2D) images. Its origin goes back to 1917, when Radon showed mathematically how a series of 2D projection images could be converted to the 3D structural one. Tomographic X-ray and positron scanning for 3D medical imaging, with a resolution of ∼1 mm, is now ubiquitous in major hospitals. Electron tomography, a relatively new chemical tool, with a resolution of ∼1 nm, has been recently adopted by materials chemists as an invaluable aid for the 3D study of the morphologies, spatially-discriminating chemical compositions, and defect properties of nanostructured materials. In this Account, we review the advances that have been made in facilitating the recording of the required series of 2D electron microscopic images and the subsequent process of 3D reconstruction of specimens that are vulnerable, to a greater or lesser degree, to electron beam damage. We describe how high-fidelity 3D tomograms may be obtained from relatively few 2D images by incorporating prior structural knowledge into the reconstruction process. In particular, we highlight the vital role of compressed sensing, a recently developed procedure well-known to information theorists that exploits ideas of image compression and "sparsity" (that the important image information can be captured in a reduced data set). We also touch upon another promising approach, "discrete" tomography, which builds into the reconstruction process a prior assumption that the object can be described in discrete terms, such as the number of constituent materials and their expected densities. Other advances made recently that we outline, such as the availability of aberration-corrected electron microscopes, electron wavelength monochromators, and sophisticated specimen goniometers, have all contributed significantly to the further development of quantitative 3D studies of nanostructured materials, including nanoparticle-heterogeneous catalysts, fuel-cell components, and drug-delivery systems, as well as photovoltaic and plasmonic devices, and are likely to enhance our knowledge of many other facets of materials chemistry, such as organic-inorganic composites, solar-energy devices, bionanotechnology, biomineralization, and energy-storage systems composed of high-permittivity metal oxides.
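The Radon-transform relationship underlying all of this is easy to demonstrate numerically: project a 2D slice over a tilt series, then invert by filtered back-projection. The sketch below uses scikit-image (the parameter name filter_name assumes a recent version); a compressed-sensing reconstruction would replace the iradon call with a sparsity-regularized optimization over far fewer angles:

```python
import numpy as np
from skimage.transform import radon, iradon

# A synthetic 2D slice standing in for one section of a specimen.
slice2d = np.zeros((128, 128))
slice2d[40:80, 50:90] = 1.0

# Limited tilt range, as in electron tomography (the "missing wedge").
angles = np.linspace(-70, 70, 71)
sinogram = radon(slice2d, theta=angles)            # the tilt series
recon = iradon(sinogram, theta=angles, filter_name="ramp")
```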
Automatic micropropagation of plants
NASA Astrophysics Data System (ADS)
Otte, Clemens; Schwanke, Joerg; Jensch, Peter F.
1996-12-01
Micropropagation is a sophisticated technique for the rapid multiplication of plants. It has a great commercial potential due to the speed of propagation, the high plant quality, and the ability to produce disease-free plants. However, micropropagation is usually done by hand, which makes the process cost-intensive and tedious for the workers, especially because it requires a sterile workplace. Therefore, we have developed a prototype automation system for the micropropagation of a grass species (miscanthus sinensis gigantheus). The objective of this paper is to describe the robotic system in an overview and to discuss the vision system more closely, including the implemented morphological operations recognizing the cutting and gripping points of miscanthus plants. Fuzzy controllers are used to adapt the parameters of image operations on-line to each individual plant. Finally, we discuss our experiences with the developed prototype and give a preview of a possible real production-line system.
Multilamellar Structures and Filament Bundles Are Found on the Cell Surface during Bunyavirus Egress
Sanz-Sánchez, Laura; Risco, Cristina
2013-01-01
Inside cells, viruses build specialized compartments for replication and morphogenesis. We observed that virus release associates with specific structures found on the surface of mammalian cells. Cultured adherent cells were infected with a bunyavirus and processed for oriented sectioning and transmission electron microscopy. Imaging of cell basal regions showed sophisticated multilamellar structures (MLS) and extracellular filament bundles with attached viruses. Correlative light and electron microscopy confirmed that both MLS and filaments proliferated during the maximum egress of new viruses. MLS dimensions and structure were reminiscent of those reported for the nanostructures on gecko fingertips, which are responsible for the extraordinary attachment capacity of these lizards. As infected cells with MLS were more resistant to detachment than control cells, we propose an adhesive function for these structures, which would compensate for the loss of adherence during release of new virus progeny. PMID:23799021
Structural neuroimaging in neuropsychology: History and contemporary applications.
Bigler, Erin D
2017-11-01
Neuropsychology's origins began long before there were any in vivo methods to image the brain. That changed with the advent of computed tomography in the 1970s and magnetic resonance imaging in the early 1980s. Now computed tomography and magnetic resonance imaging are routinely a part of neuropsychological investigations, with an increasing number of sophisticated methods for image analysis. This review examines the history of neuroimaging utilization in neuropsychological investigations, highlighting the basic methods that go into image quantification and the various metrics that can be derived. Neuroimaging methods, and their limitations for identifying what constitutes a lesion, are discussed. Likewise, the demographic and developmental factors that influence quantification of brain structure are reviewed. Neuroimaging is an integral part of 21st Century neuropsychology. The importance of neuroimaging to advancing neuropsychology is emphasized. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Ultrasound tissue analysis and characterization
NASA Astrophysics Data System (ADS)
Kaufhold, John; Chan, Ray C.; Karl, William C.; Castanon, David A.
1999-07-01
On the battlefield of the future, it may become feasible for medics to perform, via application of new biomedical technologies, more sophisticated diagnoses and surgery than is currently practiced. Emerging biomedical technology may enable the medic to perform laparoscopic surgical procedures to remove, for example, shrapnel from injured soldiers. Battlefield conditions constrain the types of medical image acquisition and interpretation which can be performed. Ultrasound is the only viable biomedical imaging modality appropriate for deployment on the battlefield -- which leads to image interpretation issues because of the poor quality of ultrasound imagery. To help overcome these issues, we develop and implement a method of image enhancement which could aid non-experts in the rapid interpretation and use of ultrasound imagery. We describe an energy minimization approach to finding boundaries in medical images and show how prior information on edge orientation can be incorporated into this framework to detect tissue boundaries oriented at a known angle.
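A crude version of the orientation prior mentioned above can be written directly: weight gradient magnitude by how close the local edge orientation is to the expected tissue-boundary angle, and use the negated product as the boundary energy to be minimized. This sketch is an assumption-laden simplification of the paper's energy-minimization framework; it ignores angle wrap-around and any curve-regularity terms:

```python
import numpy as np

def oriented_edge_energy(img, theta0, sigma_theta=0.3):
    # Lower energy where gradients are strong AND oriented near theta0
    # (theta0 in radians, the known boundary angle).
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    theta = np.arctan2(gy, gx)
    prior = np.exp(-((theta - theta0) ** 2) / (2.0 * sigma_theta ** 2))
    return -mag * prior
```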
Arbabi, Amir; Arbabi, Ehsan; Kamali, Seyedeh Mahsa; Horie, Yu; Han, Seunghoon; Faraon, Andrei
2016-01-01
Optical metasurfaces are two-dimensional arrays of nano-scatterers that modify optical wavefronts at subwavelength spatial resolution. They are poised to revolutionize optics by enabling complex low-cost systems where multiple metasurfaces are lithographically stacked and integrated with electronics. For imaging applications, metasurface stacks can perform sophisticated image corrections and can be directly integrated with image sensors. Here we demonstrate this concept with a miniature flat camera integrating a monolithic metasurface lens doublet corrected for monochromatic aberrations, and an image sensor. The doublet lens, which acts as a fisheye photographic objective, has a small f-number of 0.9, an angle-of-view larger than 60° × 60°, and operates at 850 nm wavelength with 70% focusing efficiency. The camera exhibits nearly diffraction-limited image quality, which indicates the potential of this technology in the development of optical systems for microscopy, photography, and computer vision. PMID:27892454
Cellular resolution functional imaging in behaving rats using voluntary head restraint
Scott, Benjamin B.; Brody, Carlos D.; Tank, David W.
2013-01-01
High-throughput operant conditioning systems for rodents provide efficient training on sophisticated behavioral tasks. Combining these systems with technologies for cellular resolution functional imaging would provide a powerful approach to study neural dynamics during behavior. Here we describe an integrated two-photon microscope and behavioral apparatus that allows cellular resolution functional imaging of cortical regions during epochs of voluntary head restraint. Rats were trained to initiate periods of restraint up to 8 seconds in duration, which provided the mechanical stability necessary for in vivo imaging while allowing free movement between behavioral trials. A mechanical registration system repositioned the head to within a few microns, allowing the same neuronal populations to be imaged on each trial. In proof-of-principle experiments, calcium dependent fluorescence transients were recorded from GCaMP-labeled cortical neurons. In contrast to previous methods for head restraint, this system can also be incorporated into high-throughput operant conditioning systems. PMID:24055015
End-to-end imaging information rate advantages of various alternative communication systems
NASA Technical Reports Server (NTRS)
Rice, R. F.
1982-01-01
The efficiencies of various deep space communication systems that are required to transmit both imaging data and a typically error-sensitive class of data called general science and engineering (gse) are compared. The approach jointly treats the imaging and gse transmission problems, allowing comparisons of systems which include various channel coding and data compression alternatives. Actual system comparisons include an advanced imaging communication system (AICS) which exhibits the rather significant advantages of sophisticated data compression coupled with powerful yet practical channel coding. For example, under certain conditions the improved AICS efficiency could provide as much as a two-order-of-magnitude increase in imaging information rate compared to a single channel uncoded, uncompressed system, while maintaining the same gse data rate in both systems. Additional details describing AICS compression and coding concepts, as well as efforts to apply them, are provided in support of the system analysis.
Wide-field optical coherence tomography based microangiography for retinal imaging
Zhang, Qinqin; Lee, Cecilia S.; Chao, Jennifer; Chen, Chieh-Li; Zhang, Thomas; Sharma, Utkarsh; Zhang, Anqi; Liu, Jin; Rezaei, Kasra; Pepple, Kathryn L.; Munsen, Richard; Kinyoun, James; Johnstone, Murray; Van Gelder, Russell N.; Wang, Ruikang K.
2016-01-01
Optical coherence tomography angiography (OCTA) allows for the evaluation of functional retinal vascular networks without a need for contrast dyes. For sophisticated monitoring and diagnosis of retinal diseases, OCTA capable of providing wide-field and high definition images of retinal vasculature in a single image is desirable. We report OCTA with motion tracking through an auxiliary real-time line scan ophthalmoscope that is clinically feasible to image functional retinal vasculature in patients, with a coverage of more than 60 degrees of retina while still maintaining high definition and resolution. We demonstrate six illustrative cases with unprecedented details of vascular involvement in retinal diseases. In each case, OCTA yields images of the normal and diseased microvasculature at all levels of the retina, with higher resolution than observed with fluorescein angiography. Wide-field OCTA technology will be an important next step in augmenting the utility of OCT technology in clinical practice. PMID:26912261
Cellular Level Brain Imaging in Behaving Mammals: An Engineering Approach
Hamel, Elizabeth J.O.; Grewe, Benjamin F.; Parker, Jones G.; Schnitzer, Mark J.
2017-01-01
Fluorescence imaging offers expanding capabilities for recording neural dynamics in behaving mammals, including the means to monitor hundreds of cells targeted by genetic type or connectivity, track cells over weeks, densely sample neurons within local microcircuits, study cells too inactive to isolate in extracellular electrical recordings, and visualize activity in dendrites, axons, or dendritic spines. We discuss recent progress and future directions for imaging in behaving mammals from a systems engineering perspective, which seeks holistic consideration of fluorescent indicators, optical instrumentation, and computational analyses. Today, genetically encoded indicators of neural Ca2+ dynamics are widely used, and those of trans-membrane voltage are rapidly improving. Two complementary imaging paradigms involve conventional microscopes for studying head-restrained animals and head-mounted miniature microscopes for imaging in freely behaving animals. Overall, the field has attained sufficient sophistication that increased cooperation between those designing new indicators, light sources, microscopes, and computational analyses would greatly benefit future progress. PMID:25856491
NASA Astrophysics Data System (ADS)
Seeto, Wen Jun; Lipke, Elizabeth Ann
2016-03-01
Tracking of rolling cells via in vitro experiment is now commonly performed using customized computer programs. In most cases, two critical challenges continue to limit analysis of cell rolling data: long computation times due to the complexity of tracking algorithms and difficulty in accurately correlating a given cell with itself from one frame to the next, which is typically due to errors caused by cells that either come close in proximity to each other or come in contact with each other. In this paper, we have developed a sophisticated, yet simple and highly effective, rolling cell tracking system to address these two critical problems. This optical cell tracking analysis (OCTA) system first employs ImageJ for cell identification in each frame of a cell rolling video. A custom MATLAB code was written to use the geometric and positional information of all cells as the primary parameters for matching each individual cell with itself between consecutive frames and to avoid errors when tracking cells that come within close proximity to one another. Once the cells are matched, rolling velocity can be obtained for further analysis. The use of ImageJ for cell identification eliminates the need for high level MATLAB image processing knowledge. As a result, only fundamental MATLAB syntax is necessary for cell matching. OCTA has been implemented in the tracking of endothelial colony forming cell (ECFC) rolling under shear. The processing time needed to obtain tracked cell data from a 2 min ECFC rolling video recorded at 70 frames per second with a total of over 8000 frames is less than 6 min using a computer with an Intel® Core™ i7 CPU 2.80 GHz (8 CPUs). This cell tracking system benefits cell rolling analysis by substantially reducing the time required for post-acquisition data processing of high frame rate video recordings and preventing tracking errors when individual cells come in close proximity to one another.
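The matching step described above, pairing each cell with itself across consecutive frames from geometric and positional information, can be sketched as a greedy assignment on a cost matrix. The specific cost (centroid distance plus a weighted area difference) and the thresholds below are illustrative assumptions, not the authors' exact criteria:

```python
import numpy as np

def match_cells(prev, curr, max_cost=15.0, w_area=0.1):
    # prev, curr: (N, 3) and (M, 3) arrays of (x, y, area) per cell,
    # e.g. exported from ImageJ particle analysis of two frames.
    cost = (np.hypot(prev[:, 0, None] - curr[None, :, 0],
                     prev[:, 1, None] - curr[None, :, 1])
            + w_area * np.abs(prev[:, 2, None] - curr[None, :, 2]))
    matches = []
    while cost.size and cost.min() < max_cost:
        i, j = np.unravel_index(np.argmin(cost), cost.shape)
        matches.append((int(i), int(j)))
        cost[i, :] = np.inf           # each cell matched at most once
        cost[:, j] = np.inf
    return matches  # rolling velocity follows from displacement x frame rate
```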
Clark, Roger N.; Swayze, Gregg A.; Livo, K. Eric; Kokaly, Raymond F.; Sutley, Steve J.; Dalton, J. Brad; McDougal, Robert R.; Gent, Carol A.
2003-01-01
Imaging spectroscopy is a tool that can be used to spectrally identify and spatially map materials based on their specific chemical bonds. Spectroscopic analysis requires significantly more sophistication than has been employed in conventional broadband remote sensing analysis. We describe a new system that is effective at material identification and mapping: a set of algorithms within an expert system decision‐making framework that we call Tetracorder. The expertise in the system has been derived from scientific knowledge of spectral identification. The expert system rules are implemented in a decision tree where multiple algorithms are applied to spectral analysis, additional expert rules and algorithms can be applied based on initial results, and more decisions are made until spectral analysis is complete. Because certain spectral features are indicative of specific chemical bonds in materials, the system can accurately identify and map those materials. In this paper we describe the framework of the decision making process used for spectral identification, describe specific spectral feature analysis algorithms, and give examples of what analyses and types of maps are possible with imaging spectroscopy data. We also present the expert system rules that describe which diagnostic spectral features are used in the decision making process for a set of spectra of minerals and other common materials. We demonstrate the applications of Tetracorder to identify and map surface minerals, to detect sources of acid rock drainage, and to map vegetation species, ice, melting snow, water, and water pollution, all with one set of expert system rules. Mineral mapping can aid in geologic mapping and fault detection and can provide a better understanding of weathering, mineralization, hydrothermal alteration, and other geologic processes. Environmental site assessment, such as mapping source areas of acid mine drainage, has resulted in the acceleration of site cleanup, saving millions of dollars and years in cleanup time. Imaging spectroscopy data and Tetracorder analysis can be used to study both terrestrial and planetary science problems. Imaging spectroscopy can be used to probe planetary systems, including their atmospheres, oceans, and land surfaces.
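A single diagnostic-feature score of the kind Tetracorder's rules operate on can be computed with continuum removal: fit a straight-line continuum across an absorption feature's shoulders and measure the residual band depth. This is a minimal sketch only; Tetracorder itself performs least-squares fits against library spectra inside an expert-system decision tree, which this does not attempt:

```python
import numpy as np

def band_depth(wavelength, reflectance, left, right):
    # wavelength ascending; left/right are the feature's shoulder
    # wavelengths. A depth of 0 means no absorption feature.
    rl = np.interp(left, wavelength, reflectance)
    rr = np.interp(right, wavelength, reflectance)
    sel = (wavelength >= left) & (wavelength <= right)
    continuum = rl + (rr - rl) * (wavelength[sel] - left) / (right - left)
    removed = reflectance[sel] / continuum
    return 1.0 - removed.min()
```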
Fourier-interpolation superresolution optical fluctuation imaging (fSOFi) (Conference Presentation)
NASA Astrophysics Data System (ADS)
Enderlein, Joerg; Stein, Simon C.; Huss, Anja; Hähnel, Dirk; Gregor, Ingo
2016-02-01
Stochastic Optical Fluctuation Imaging (SOFI) is a superresolution fluorescence microscopy technique that enhances the spatial resolution of an image by evaluating the temporal fluctuations of blinking fluorescent emitters. SOFI is not based on the identification and localization of single molecules, as in the widely used Photoactivation Localization Microscopy (PALM) or Stochastic Optical Reconstruction Microscopy (STORM), but computes a superresolved image via temporal cumulants from a recorded movie. A technical challenge is that, when directly applying the SOFI algorithm to a movie of raw images, the pixel size of the final SOFI image is the same as that of the original images, which becomes problematic when the final SOFI resolution is much smaller than this value. In the past, sophisticated cross-correlation schemes have been used to tackle this problem. Here, we present an alternative, exact, straightforward, and simple solution using an interpolation scheme based on Fourier transforms. We exemplify the method on simulated and experimental data.
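The Fourier-interpolation idea is compact enough to show directly: zero-pad each frame's spectrum to upsample it exactly (band-limited interpolation), then compute the temporal cumulant on the upsampled movie; the second-order cumulant reduces to the per-pixel variance. A minimal sketch under those assumptions:

```python
import numpy as np
from numpy.fft import fft2, ifft2, fftshift, ifftshift

def fourier_upsample(frame, factor=2):
    # Exact band-limited interpolation via spectral zero-padding.
    h, w = frame.shape
    spec = fftshift(fft2(frame))
    pad = np.zeros((h * factor, w * factor), dtype=complex)
    y0, x0 = (h * (factor - 1)) // 2, (w * (factor - 1)) // 2
    pad[y0:y0 + h, x0:x0 + w] = spec
    return ifft2(ifftshift(pad)).real * factor ** 2

def fsofi2(movie, factor=2):
    # Second-order SOFI: per-pixel temporal variance of the
    # Fourier-interpolated movie (movie: (frames, h, w)).
    up = np.stack([fourier_upsample(f, factor) for f in movie])
    return up.var(axis=0)
```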
Scientific meaning of meanings: quests for discoveries concerning our cultural ills.
Patterson, C C
1998-08-01
This paper outlines pioneering concepts of fundamental physical and emotional features of the human brain which served as primary operators. These have developed during the past 10,000 years, giving rise to our present global megacultures and their various ancestral culture progenitors. Essential points are these: (1) Biological evolution endowed the human brain (quite inadvertently and unintentionally) with enormous latent powers for complex and sophisticated abstract ratiocinations. (2) Magnitudes of these latent powers grew exponentially with linear enlargements of brain size during the evolution of the genetic ancestors of Homo sapiens sapiens (Hss) during the past 3 million years, but these latent powers never materialized in utilized forms within the environmental contexts in which they evolved. (3) These sophisticated, abstract ratiocinations, both latent powers and operative forms in today's Hss brain, are divided between two major categories: utilitarian thinking and nonutilitarian thinking. (4) These two different types of thinking processes are carried out within separate, different regional combinations of neuronal biochemical entities within the same individual brain. (5) Sensitivities of abstract, sophisticated ratiocination processes within the human brain to influences from communication interactions with other human brains are exponentially greater in comparison with any other species of central nervous system in the earth's biosphere. This makes the brain population density the utmost critical factor, and determines the character of human thought within interacting populations of brains at a given time and place within a particular culture. (6) Abrupt increases of sedentary brain population densities, unnaturally greater by orders of magnitude than those that existed previously in biological evolutionary contexts, were engendered by the inauguration of agricultural practices 10,000 years ago. This enabled latent powers of the human brain used for complex and sophisticated abstract ratiocinations to become manifest in materialized forms of usage within relatively large groups of humans living in certain regions of the earth. (7) Thinking processes of the utilitarian category within brains living in such regions guided and dominated the development of sophisticated and complex social hierarchies and institutions, forms of communication, technologies, and cultures since that time. This dominating factor relegated thinking processes within the nonutilitarian categories of those brains to subservient roles during those developments. (8) Nonutilitarian abstract ratiocinations possess a potential for proper adjudication and guidance of utilitarian abstract ratiocinations in the latter's development of culture. However, the lack of the former's proper role in cultural developments since the beginning of the Holocene interglacial era has resulted in the imprisonment of Hss as aliens in an intellectual hell on a foreign planet.
Toews, Michael D; Pearson, Tom C; Campbell, James F
2006-04-01
Computed tomography, an imaging technique commonly used for diagnosing internal human health ailments, uses multiple x-rays and sophisticated software to recreate a cross-sectional representation of a subject. The use of this technique to image hard red winter wheat, Triticum aestivum L., samples infested with pupae of Sitophilus oryzae (L.) was investigated. A software program was developed to rapidly recognize and quantify the infested kernels. Samples were imaged in a 7.6-cm (o.d.) plastic tube containing 0, 50, or 100 infested kernels per kg of wheat. Interkernel spaces were filled with corn oil so as to increase the contrast between voids inside kernels and voids among kernels. Automated image processing, using a custom C language software program, was conducted separately on each 100-g portion of the prepared samples. The average detection accuracy in the five infested kernels per 100-g samples was 94.4 +/- 7.3% (mean +/- SD, n = 10), whereas the average detection accuracy in the 10 infested kernels per 100-g samples was 87.3 +/- 7.9% (n = 10). Detection accuracy in the 10 infested kernels per 100-g samples was slightly less than in the five infested kernels per 100-g samples because some infested kernels overlapped with each other or with air bubbles in the oil. A mean of 1.2 +/- 0.9 (n = 10) bubbles per tube was incorrectly classed as infested kernels in replicates containing no infested kernels. In light of these positive results, future studies should be conducted using additional grains, insect species, and life stages.
Automated delineation of radiotherapy volumes: are we going in the right direction?
Whitfield, G A; Price, P; Price, G J; Moore, C J
2013-01-01
Rapid and accurate delineation of target volumes and multiple organs at risk, within the enduring International Commission on Radiation Units and Measurements framework, is now hugely important in radiotherapy, owing to the rapid proliferation of intensity-modulated radiotherapy and the advent of four-dimensional image-guided adaptation. Nevertheless, delineation is still generally performed clinically with little if any machine assistance, even though it is both time-consuming and prone to interobserver variation. Currently available segmentation tools include those based on image greyscale interrogation, statistical shape modelling and body atlas-based methods. However, all too often these are not able to match the accuracy of the expert clinician, which remains the universally acknowledged gold standard. In this article we suggest that current methods are fundamentally limited by their lack of ability to incorporate essential human clinical decision-making into the underlying models. Hybrid techniques that utilise prior knowledge, make sophisticated use of greyscale information and allow clinical expertise to be integrated are needed. This may require a change in focus from automated segmentation to machine-assisted delineation. Similarly, new metrics of image quality reflecting fitness for purpose would be extremely valuable. We conclude that methods need to be developed to take account of the clinician's expertise and honed visual processing capabilities as much as the underlying, clinically meaningful information content of the image data being interrogated. We illustrate our observations and suggestions through our own experiences with two software tools developed as part of research council-funded projects. PMID:23239689
New technologies lead to a new frontier: cognitive multiple data representation
NASA Astrophysics Data System (ADS)
Buffat, S.; Liege, F.; Plantier, J.; Roumes, C.
2005-05-01
The increasing number and complexity of operational sensors (radar, infrared, hyperspectral...) and the availability of huge amounts of data lead to more and more sophisticated information presentations. But one key element of the IMINT line cannot be improved beyond initial system specification: the operator. In order to overcome this issue, we have to better understand human visual object representation. Object recognition theories in human vision balance between matching 2D template representations with viewpoint-dependent information and a viewpoint-invariant system based on structural descriptions. Spatial frequency content is relevant due to early vision filtering. Orientation in depth is an important variable to challenge object constancy. Three objects, each seen from three different points of view in a natural environment, made up the original images in this study. Test images were a combination of spatial-frequency-filtered original images and an additive contrast level of white noise. In the first experiment, the observer's task was a same-versus-different forced choice with spatial alternative. Test images had the same noise level within a presentation row. Discrimination threshold was determined by modifying the white noise contrast level by means of an adaptive method. In the second experiment, a repetition blindness paradigm was used to further investigate the viewpoint effect on object recognition. The results shed some light on the human visual system's processing of objects displayed under different physical descriptions. This is an important achievement because targets that do not always match the physical properties of usual visual stimuli can increase operational workload.
A numerical study of sensory-guided multiple views for improved object identification
NASA Astrophysics Data System (ADS)
Blakeslee, B. A.; Zelnio, E. G.; Koditschek, D. E.
2014-06-01
We explore the potential of on-line adjustment of sensory controls for improved object identification and discrimination in the context of a simulated high-resolution camera system carried onboard a maneuverable robotic platform that can actively choose its observational position and pose. Our early numerical studies suggest the significant efficacy and enhanced performance achieved by even very simple feedback-driven iteration of the view, in contrast to identification from a fixed pose uninformed by any active adaptation. Specifically, we contrast the discriminative performance of the same conventional classification system when informed by: a random glance at a vehicle; two random glances at a vehicle; or a random glance followed by a guided second look. After each glance, edge detection algorithms isolate the most salient features of the image and template matching is performed using the Hausdorff distance, comparing the simulated sensed images with reference images of the vehicles. We present initial simulation statistics that overwhelmingly favor the third scenario. We conclude with a sketch of our near-future steps in this study, which will entail: the incorporation of more sophisticated image processing and template matching algorithms; more complex discrimination tasks such as distinguishing between two similar vehicles or vehicles in motion; more realistic models of the observer's mobility, including platform dynamics and eventually environmental constraints; and expanding the sensing task beyond the identification of a specified object selected from a pre-defined library of alternatives.
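The Hausdorff-distance matching step can be reproduced directly: it scores two edge maps by the worst-case distance from a point in one set to its nearest neighbour in the other. A minimal sketch using SciPy (toy edge maps, not the paper's imagery):

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(edges_a: np.ndarray, edges_b: np.ndarray) -> float:
    """Symmetric Hausdorff distance between two (N, 2) arrays of edge coordinates."""
    return max(directed_hausdorff(edges_a, edges_b)[0],
               directed_hausdorff(edges_b, edges_a)[0])

# Toy example: edge pixels of a sensed image vs. a slightly shifted template
sensed = np.argwhere(np.eye(50, dtype=bool))
template = np.argwhere(np.eye(50, k=2, dtype=bool))
print(f"Hausdorff distance = {hausdorff(sensed, template):.2f} px")
```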
Classifier dependent feature preprocessing methods
NASA Astrophysics Data System (ADS)
Rodriguez, Benjamin M., II; Peterson, Gilbert L.
2008-04-01
In mobile applications, computational complexity limits the sophistication of the algorithms that can be implemented on these devices. This paper provides an initial solution for applying pattern recognition systems on mobile devices by combining existing preprocessing algorithms for recognition. In pattern recognition systems, it is essential to properly apply feature preprocessing tools prior to training classification models, in an attempt to reduce computational complexity and improve the overall classification accuracy. The feature preprocessing tools extended for the mobile environment are feature ranking, feature extraction, data preparation and outlier removal. Most desktop systems today are capable of running the majority of available classification algorithms without concern for processing time; the same is not true on mobile platforms. As an application of pattern recognition for mobile devices, the recognition system targets the problem of steganalysis: determining whether an image contains hidden information. The measure of performance shows that feature preprocessing increases the overall steganalysis classification accuracy by an average of 22%. The methods in this paper are tested on a workstation and a Nokia 6620 (Symbian operating system) camera phone with similar results.
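The general shape of such a pipeline, rank the features, keep a small subset, then train a lighter classifier, can be sketched with scikit-learn. The feature set, ranking criterion and model below are illustrative stand-ins, not the paper's methods:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 50))      # stand-in image features (not steganalysis features)
y = rng.integers(0, 2, size=200)    # toy labels: 1 = stego, 0 = clean

# Rank features (ANOVA F-score), keep the 10 best, then train a lightweight model;
# shrinking the feature set is what makes classification feasible on a phone.
pipeline = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=10),
    RandomForestClassifier(n_estimators=50, random_state=0),
)
# Random toy data, so expect chance-level (~0.5) accuracy here.
print(cross_val_score(pipeline, X, y, cv=5).mean())
```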
Enhancing critical thinking with case studies and nursing process.
Neill, K M; Lachat, M F; Taylor-Panek, S
1997-01-01
Challenged to enhance critical thinking concepts in a sophomore nursing process course, faculty expanded the lecture format to include group explorations of patient case studies. The group format facilitated a higher level of analysis of patient cases and more sophisticated applications of nursing process. This teaching strategy was a positive learning experience for students and faculty.
Wang, Yu; Helminen, Emily; Jiang, Jingfeng
2015-01-01
Purpose: Quasistatic ultrasound elastography (QUE) is being used to augment in vivo characterization of breast lesions. Results from early clinical trials indicated that there was a lack of confidence in image interpretation. Such confidence can only be gained through rigorous imaging tests using complex, heterogeneous but known media. The objective of this study is to build a virtual breast QUE simulation platform in the public domain that can be used not only for innovative QUE research but also for rigorous imaging tests. Methods: The main thrust of this work is to streamline biomedical ultrasound simulations by leveraging existing open source software packages including Field II (ultrasound simulator), VTK (geometrical visualization and processing), FEBio [finite element (FE) analysis], and Tetgen (mesh generator). However, integration of these open source packages is nontrivial and requires interdisciplinary knowledge. In the first step, a virtual breast model containing complex anatomical geometries was created through a novel combination of image-based landmark structures and randomly distributed (small) structures. Image-based landmark structures were based on data from the NIH Visible Human Project. Subsequently, an unstructured FE-mesh was created by Tetgen. In the second step, randomly positioned point scatterers were placed within the meshed breast model through an octree-based algorithm to make a virtual breast ultrasound phantom. In the third step, an ultrasound simulator (Field II) was used to interrogate the virtual breast phantom to obtain simulated ultrasound echo data. Of note, tissue deformation generated using an FE-simulator (FEBio) was the basis of deforming the original virtual breast phantom in order to obtain the postdeformation breast phantom for subsequent ultrasound simulations. Using the procedures described above, a full cycle of QUE simulations involving complex and highly heterogeneous virtual breast phantoms can be accomplished for the first time. Results: Representative examples were used to demonstrate capabilities of this virtual simulation platform. In the first set of three ultrasound simulation examples, three heterogeneous volumes of interest were selected from a virtual breast ultrasound phantom to perform sophisticated ultrasound simulations. These resultant B-mode images realistically represented the underlying complex but known media. In the second set of three QUE examples, advanced applications in QUE were simulated. The first QUE example was to show breast tumors with complex shapes and/or compositions. The resultant strain images showed complex patterns that are normally seen in freehand clinical ultrasound data. The second and third QUE examples demonstrated (deformation-dependent) nonlinear strain imaging and time-dependent strain imaging, respectively. Conclusions: The proposed virtual QUE platform was implemented and successfully tested in this study. Through show-case examples, the proposed work has demonstrated its capabilities of creating sophisticated QUE data in a way that cannot be done through the manufacture of physical tissue-mimicking phantoms or with other software. This open software architecture will soon be made available in the public domain and can be readily adapted to meet specific needs of different research groups to drive innovations in QUE. PMID:26328994
NASA Technical Reports Server (NTRS)
Margetan, Frank J.; Leckey, Cara A.; Barnard, Dan
2012-01-01
The size and shape of a delamination in a multi-layered structure can be estimated in various ways from an ultrasonic pulse/echo image. For example, the -6 dB contours of the measured response provide one simple estimate of the boundary. More sophisticated approaches can be imagined in which one adjusts the proposed boundary to bring measured and predicted UT images into optimal agreement. Such approaches require suitable models of the inspection process. In this paper we explore issues pertaining to model-based size estimation for delaminations in carbon fiber reinforced laminates. In particular we consider the influence on sizing when the delamination is non-planar or partially transmitting in certain regions. Two models for predicting broadband sonic time-domain responses are considered: (1) a fast "simple" model using paraxial beam expansions and Kirchhoff and phase-screen approximations; and (2) the more exact (but computationally intensive) 3D elastodynamic finite integration technique (EFIT). Model-to-model and model-to-experiment comparisons are made for delaminations in uniaxial composite plates, and the simple model is then used to critique the -6 dB rule for delamination sizing.
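The -6 dB rule itself is easy to state in code: threshold the C-scan amplitude image at half the peak response and take the resulting contour as the flaw boundary. A minimal sketch on synthetic data (the response shape and grid spacing are assumptions):

```python
import numpy as np

# Synthetic C-scan: peak response over a simulated delamination
x, y = np.meshgrid(np.linspace(-1, 1, 200), np.linspace(-1, 1, 200))
amplitude = np.exp(-(x**2 + y**2) / 0.1)     # stand-in measured response map

# -6 dB in amplitude is a factor of 10**(-6/20) ~ 0.5 relative to the peak
mask = amplitude >= 10 ** (-6 / 20) * amplitude.max()

pixel_area = (2.0 / 200) ** 2                # assumed scan grid spacing squared
print(f"-6 dB area estimate: {mask.sum() * pixel_area:.4f} (scan units squared)")
```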
Surveillance of Arthropod Vector-Borne Infectious Diseases Using Remote Sensing Techniques: A Review
Kalluri, Satya; Gilruth, Peter; Rogers, David; Szczur, Martha
2007-01-01
Epidemiologists are adopting new remote sensing techniques to study a variety of vector-borne diseases. Associations between satellite-derived environmental variables such as temperature, humidity, and land cover type and vector density are used to identify and characterize vector habitats. The convergence of factors such as the availability of multi-temporal satellite data and georeferenced epidemiological data, collaboration between remote sensing scientists and biologists, and the availability of sophisticated, statistical geographic information system and image processing algorithms in a desktop environment creates a fertile research environment. The use of remote sensing techniques to map vector-borne diseases has evolved significantly over the past 25 years. In this paper, we review the status of remote sensing studies of arthropod vector-borne diseases due to mosquitoes, ticks, blackflies, tsetse flies, and sandflies, which are responsible for the majority of vector-borne diseases in the world. Examples of simple image classification techniques that associate land use and land cover types with vector habitats, as well as complex statistical models that link satellite-derived multi-temporal meteorological observations with vector biology and abundance, are discussed here. Future improvements in remote sensing applications in epidemiology are also discussed. PMID:17967056
Passive radiation detection using optically active CMOS sensors
NASA Astrophysics Data System (ADS)
Dosiek, Luke; Schalk, Patrick D.
2013-05-01
Recently, there have been a number of small-scale and hobbyist successes in employing commodity CMOS-based camera sensors for radiation detection. For example, several smartphone applications initially developed for use in areas near the Fukushima nuclear disaster are capable of detecting radiation using a cell phone camera, provided opaque tape is placed over the lens. In all current useful implementations, it is required that the sensor not be exposed to visible light. We seek to build a system that does not have this restriction. While building such a system would require sophisticated signal processing, it would nevertheless provide great benefits. In addition to fulfilling their primary function of image capture, cameras would also be able to detect unknown radiation sources even when the danger is considered to be low or non-existent. By experimentally profiling the image artifacts generated by gamma ray and β particle impacts, algorithms are developed to identify the unique features of radiation exposure, while discarding optical interaction and thermal noise effects. Preliminary results focus on achieving this goal in a laboratory setting, without regard to integration time or computational complexity. However, future work will seek to address these additional issues.
Lefor, Alan T
2011-08-01
Oncology research has traditionally been conducted using techniques from the biological sciences. The new field of computational oncology has forged a new relationship between the physical sciences and oncology to further advance research. By applying physics and mathematics to oncologic problems, new insights will emerge into the pathogenesis and treatment of malignancies. One major area of investigation in computational oncology centers around the acquisition and analysis of data, using improved computing hardware and software. Large databases of cellular pathways are being analyzed to understand the interrelationship among complex biological processes. Computer-aided detection is being applied to the analysis of routine imaging data including mammography and chest imaging to improve the accuracy and detection rate for population screening. The second major area of investigation uses computers to construct sophisticated mathematical models of individual cancer cells as well as larger systems using partial differential equations. These models are further refined with clinically available information to more accurately reflect living systems. One of the major obstacles in the partnership between physical scientists and the oncology community is communications. Standard ways to convey information must be developed. Future progress in computational oncology will depend on close collaboration between clinicians and investigators to further the understanding of cancer using these new approaches.
MovieMaker: a web server for rapid rendering of protein motions and interactions.
Maiti, Rajarshi; Van Domselaar, Gary H; Wishart, David S
2005-07-01
MovieMaker is a web server that allows short (approximately 10 s), downloadable movies of protein motions to be generated. It accepts PDB files or PDB accession numbers as input and automatically calculates, renders and merges the necessary image files to create colourful animations covering a wide range of protein motions and other dynamic processes. Users have the option of animating (i) simple rotation, (ii) morphing between two end-state conformers, (iii) short-scale, picosecond vibrations, (iv) ligand docking, (v) protein oligomerization, (vi) mid-scale nanosecond (ensemble) motions and (vii) protein folding/unfolding. MovieMaker does not perform molecular dynamics calculations. Instead it is an animation tool that uses a sophisticated superpositioning algorithm in conjunction with Cartesian coordinate interpolation to rapidly and automatically calculate the intermediate structures needed for many of its animations. Users have extensive control over the rendering style, structure colour, animation quality, background and other image features. MovieMaker is intended to be a general-purpose server that allows both experts and non-experts to easily generate useful, informative protein animations for educational and illustrative purposes. MovieMaker is accessible at http://wishart.biology.ualberta.ca/moviemaker.
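MovieMaker's superpositioning algorithm is not detailed in the abstract, but the Cartesian-interpolation idea it names can be illustrated with a naive linear blend between two already-aligned conformers (a stand-in for, not a reproduction of, the server's method):

```python
import numpy as np

def interpolate_conformers(start: np.ndarray, end: np.ndarray, n_frames: int):
    """Linear Cartesian interpolation between two aligned (N, 3) conformers."""
    for t in np.linspace(0.0, 1.0, n_frames):
        yield (1.0 - t) * start + t * end    # intermediate structure for frame t

# Toy example: five frames between two three-atom conformations
conf_a = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [3.0, 0.0, 0.0]])
conf_b = np.array([[0.0, 0.0, 0.0], [1.2, 0.9, 0.0], [2.4, 1.8, 0.0]])
frames = list(interpolate_conformers(conf_a, conf_b, 5))
print(len(frames), frames[2])                # midpoint geometry at t = 0.5
```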
NASA Astrophysics Data System (ADS)
Winstroth, J.; Schoen, L.; Ernst, B.; Seume, J. R.
2014-06-01
Optical full-field measurement methods such as Digital Image Correlation (DIC) provide a new opportunity for measuring deformations and vibrations with high spatial and temporal resolution. However, application to full-scale wind turbines is not trivial: elaborate preparation of the experiment is vital, and sophisticated post-processing of the DIC results is essential. In the present study, a rotor blade of a 3.2 MW wind turbine is equipped with a random black-and-white dot pattern at four different radial positions. Two cameras are located in front of the wind turbine and the response of the rotor blade is monitored using DIC for different turbine operations. In addition, a Light Detection and Ranging (LiDAR) system is used to measure the wind conditions. Wind fields are created based on the LiDAR measurements and used to perform aeroelastic simulations of the wind turbine by means of advanced multibody codes. The results from the optical DIC system appear plausible when checked against common and expected results. In addition, the comparison of relative out-of-plane blade deflections shows good agreement between DIC results and aeroelastic simulations.
NASA Astrophysics Data System (ADS)
Srinivas, G.; Raghunandana, K.; Satish Shenoy, B.
2018-02-01
In recent years, enhancing the performance of turbomachinery materials has played a vital role, especially in air-breathing aircraft engines such as turbojet, turboprop, turboshaft and turbofan engines. Transonic-flow engines in particular require highly sophisticated materials that can sustain the entire thrust created by the engine. The main objective of this paper is to give an overview of present cost-effective and technologically capable processes for turbomachinery component materials. The main focus is on electrophysical, photonic additive/removal, and electrochemical processes for turbomachinery parts manufacture. Aeronautical propulsion manufacturing technologies are reviewed thoroughly with respect to surface reliability, geometrical precision, material removal, and deposition rates for highly strengthened composite materials and difficult-to-cut dedicated steels and titanium- and nickel-based alloys. The paper covers past aeronautical and propulsion manufacturing technologies, current sophisticated technologies, and future challenging material processing techniques. It also briefly describes shaping and coating processes for turbomachinery components in aeromechanical applications.
FIB-SEM tomography in biology.
Kizilyaprak, Caroline; Bittermann, Anne Greet; Daraspe, Jean; Humbel, Bruno M
2014-01-01
Three-dimensional information is much easier to understand than a set of two-dimensional images. Therefore a layman is thrilled by the pseudo-3D image taken in a scanning electron microscope (SEM) while, when seeing a transmission electron micrograph, his imagination is challenged. First approaches to gain insight into the third dimension were to make serial microtome sections of a region of interest (ROI) and then build a model of the object. Serial microtome sectioning is tedious, skill-demanding work and is therefore seldom done. In the last two decades, with the increase of computer power, sophisticated display options, and the development of new instruments - an SEM with a built-in microtome as well as the focused ion beam scanning electron microscope (FIB-SEM) - serial sectioning and 3D analysis have become far easier and faster. Due to the relief-like topology of the microtome-trimmed block face of resin-embedded tissue, the ROI can be searched in the secondary electron mode, and at the selected spot the ROI is prepared with the ion beam for 3D analysis. For FIB-SEM tomography, a thin slice is removed with the ion beam and the newly exposed face is imaged with the electron beam, usually by recording the backscattered electrons. The process, also called "slice and view," is repeated until the desired volume is imaged. As FIB-SEM allows 3D imaging of biological fine structure at high resolution of only small volumes, it is crucial to perform slice and view at carefully selected spots. Finding the region of interest is therefore a prerequisite for meaningful imaging. Thin-layer plastification of biofilms offers direct access to the original sample surface and allows the selection of an ROI for site-specific FIB-SEM tomography just by its pronounced topographic features.
WE-D-303-01: Development and Application of Digital Human Phantoms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Segars, P.
2015-06-15
Modern medical physics deals with complex problems such as 4D radiation therapy and imaging quality optimization. Such problems involve a large number of radiological parameters, and anatomical and physiological breathing patterns. A major challenge is how to develop, test, evaluate and compare various new imaging and treatment techniques, which often involves testing over a large range of radiological parameters as well as varying patient anatomies and motions. It would be extremely challenging, if not impossible, both ethically and practically, to test every combination of parameters and every task on every type of patient under clinical conditions. Computer-based simulation using computational phantoms offers a practical technique with which to evaluate, optimize, and compare imaging technologies and methods. Within simulation, the computerized phantom provides a virtual model of the patient's anatomy and physiology. Imaging data can be generated from it as if it was a live patient using accurate models of the physics of the imaging and treatment process. With sophisticated simulation algorithms, it is possible to perform virtual experiments entirely on the computer. By serving as virtual patients, computational phantoms hold great promise in solving some of the most complex problems in modern medical physics. In this proposed symposium, we will present the history and recent developments of computational phantom models, share experiences in their application to advanced imaging and radiation applications, and discuss their promises and limitations. Learning Objectives: (1) Understand the need and requirements of computational phantoms in medical physics research; (2) Discuss the developments and applications of computational phantoms; (3) Know the promises and limitations of computational phantoms in solving complex problems.
Fortier, Véronique; Levesque, Ives R
2018-06-01
Phase processing impacts the accuracy of quantitative susceptibility mapping (QSM). Techniques for phase unwrapping and background removal have been proposed and demonstrated mostly in brain. In this work, phase processing was evaluated in the context of large susceptibility variations (Δχ) and negligible signal, in particular for susceptibility estimation using the iterative phase replacement (IPR) algorithm. Continuous Laplacian, region-growing, and quality-guided unwrapping were evaluated. For background removal, Laplacian boundary value (LBV), projection onto dipole fields (PDF), sophisticated harmonic artifact reduction for phase data (SHARP), variable-kernel sophisticated harmonic artifact reduction for phase data (V-SHARP), regularization enabled sophisticated harmonic artifact reduction for phase data (RESHARP), and 3D quadratic polynomial field removal were studied. Each algorithm was quantitatively evaluated in simulation and qualitatively in vivo. Additionally, IPR-QSM maps were produced to evaluate the impact of phase processing on the susceptibility in the context of large Δχ with negligible signal. Quality-guided unwrapping was the most accurate technique, whereas continuous Laplacian performed poorly in this context. All background removal algorithms tested resulted in important phase inaccuracies, suggesting that techniques used for brain do not translate well to situations where large Δχ and no or low signal are expected. LBV produced the smallest errors, followed closely by PDF. Results suggest that quality-guided unwrapping should be preferred, with PDF or LBV for background removal, for QSM in regions with large Δχ and negligible signal. This reduces the susceptibility inaccuracy introduced by phase processing. Accurate background removal remains an open question. Magn Reson Med 79:3103-3113, 2017. © 2017 International Society for Magnetic Resonance in Medicine.
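For readers unfamiliar with the wrapping problem these algorithms address: measured phase is only known modulo 2π, so large susceptibility-induced phase ramps alias. A one-dimensional toy illustration with NumPy (the quality-guided and Laplacian methods evaluated in the paper are noise-robust 3D generalizations of this operation):

```python
import numpy as np

# A steep phase ramp exceeding +/- pi, as large susceptibility variations produce
true_phase = np.linspace(0.0, 6.0 * np.pi, 500)
wrapped = np.angle(np.exp(1j * true_phase))   # "measured" phase, wrapped to (-pi, pi]

# 1D unwrapping: add multiples of 2*pi wherever consecutive samples jump by > pi
unwrapped = np.unwrap(wrapped)
print(np.allclose(unwrapped, true_phase))     # True for this noiseless toy example
```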
Introduction to the concepts of TELEDEMO and TELEDIMS
NASA Technical Reports Server (NTRS)
Rice, R. F.; Schlutsmeyer, A. P.
1982-01-01
An introduction to the system concepts TELEDEMO and TELEDIMS is provided. TELEDEMO is derived primarily from computer graphics and, via incorporation of sophisticated image data compression, enables effective low-cost teleconferencing at data rates as low as 1K bit/second using dial-up phone lines. Combining TELEDEMO's powerful capabilities for the development of presentation material with microprocessor-based Information Management Systems (IMS) yields a truly all-electronic IMS called TELEDIMS.
Punch Response of Gels at Different Loading Rates
2014-03-01
calibration (4, 6). While similar in density, neither clay nor gelatin simulates the tissue structure of the human body accurately. Danelson et al. (7) ... the load response of human tissue. Recent work on gelatins has shown promise in robotics, sensors, and microfluidics (9). Hydrogels (water-based) ... images of a high-contrast, random pattern of speckles and a sophisticated optimization program to measure full-field deformation. Figure 1 shows an ...
Worth the WAIT: Engaging Social Studies Students with Art in a Digital Age
ERIC Educational Resources Information Center
Crawford, B. Scott; Hicks, David; Doherty, Nicole
2009-01-01
If the mission of the social studies is to educate global citizens for the twenty-first century, then students must learn how to engage in the type of systematic and sophisticated literacy work that recognizes the power of images as well as texts. In an era of high stakes testing, it is not easy for teachers to find time to locate appropriate art,…
Imaging and Data Acquisition in Clinical Trials for Radiation Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
FitzGerald, Thomas J., E-mail: Thomas.Fitzgerald@umassmed.edu; Bishop-Jodoin, Maryann; Followill, David S.
2016-02-01
Cancer treatment evolves through oncology clinical trials. Cancer trials are multimodal and complex. Assuring that high-quality data are available to answer not only study objectives but also questions not anticipated at study initiation is the role of quality assurance. The National Cancer Institute reorganized its cancer clinical trials program in 2014. The National Clinical Trials Network (NCTN) was formed and within it was established a Diagnostic Imaging and Radiation Therapy Quality Assurance Organization. This organization is the Imaging and Radiation Oncology Core (IROC) Group, consisting of 6 quality assurance centers that provide imaging and radiation therapy quality assurance for the NCTN. Sophisticated imaging is used for cancer diagnosis, treatment, and management as well as for image-driven technologies to plan and execute radiation treatment. Integration of imaging and radiation oncology data acquisition, review, management, and archive strategies is essential for trial compliance and future research. Lessons learned from previous trials provide evidence to support diagnostic imaging and radiation therapy data acquisition in NCTN trials.
Missile signal processing common computer architecture for rapid technology upgrade
NASA Astrophysics Data System (ADS)
Rabinkin, Daniel V.; Rutledge, Edward; Monticciolo, Paul
2004-10-01
Interceptor missiles process IR images to locate an intended target and guide the interceptor towards it. Signal processing requirements have increased as sensor bandwidth increases and interceptors operate against more sophisticated targets. A typical interceptor signal processing chain comprises two parts. Front-end video processing operates on all pixels of the image and performs such operations as non-uniformity correction (NUC), image stabilization, frame integration and detection. Back-end target processing, which tracks and classifies targets detected in the image, performs such algorithms as Kalman tracking, spectral feature extraction and target discrimination. In the past, video processing was implemented using ASIC components or FPGAs because computation requirements exceeded the throughput of general-purpose processors. Target processing was performed using hybrid architectures that included ASICs, DSPs and general-purpose processors. The resulting systems tended to be function-specific, and required custom software development. They were developed using non-integrated toolsets and test equipment was developed along with the processor platform. The lifespan of a system utilizing the signal processing platform often spans decades, while the specialized nature of processor hardware and software makes it difficult and costly to upgrade. As a result, the signal processing systems often run on outdated technology, algorithms are difficult to update, and system effectiveness is impaired by the inability to rapidly respond to new threats. A new design approach is made possible by three developments: Moore's Law-driven improvement in computational throughput; a newly introduced vector computing capability in general-purpose processors; and a modern set of open interface software standards. Today's multiprocessor commercial-off-the-shelf (COTS) platforms have sufficient throughput to support interceptor signal processing requirements. This application may be programmed under existing real-time operating systems using parallel processing software libraries, resulting in highly portable code that can be rapidly migrated to new platforms as processor technology evolves. Standardized development tools and third-party software upgrades are enabled, as well as rapid upgrade of processing components as improved algorithms are developed. The resulting weapon system will have a superior processing capability over a custom approach at the time of deployment as a result of shorter development cycles and use of newer technology. The signal processing computer may be upgraded over the lifecycle of the weapon system, and can migrate between weapon system variants enabled by modification simplicity. This paper presents a reference design using the new approach that utilizes an Altivec PowerPC parallel COTS platform. It uses a VxWorks-based real-time operating system (RTOS), and application code developed using an efficient parallel vector library (PVL). A quantification of computing requirements and a demonstration of an interceptor algorithm operating on this real-time platform are provided.
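Among the front-end stages listed, non-uniformity correction is simple enough to sketch: a two-point gain/offset calibration computed from uniform reference frames. The sensor model and numbers below are synthetic assumptions, not the reference design's implementation:

```python
import numpy as np

rng = np.random.default_rng(2)
gain = 1.0 + 0.05 * rng.normal(size=(128, 128))   # per-pixel gain non-uniformity
offset = 2.0 * rng.normal(size=(128, 128))        # per-pixel offset non-uniformity

def sensor(scene):
    """Toy focal-plane-array model: each pixel applies its own gain and offset."""
    return gain * scene + offset

# Two-point calibration against uniform "cold" and "hot" reference sources
cold, hot = 20.0, 80.0
cold_frame = sensor(np.full((128, 128), cold))
hot_frame = sensor(np.full((128, 128), hot))
g = (hot - cold) / (hot_frame - cold_frame)       # per-pixel correction gain
o = cold - g * cold_frame                         # per-pixel correction offset

corrected = g * sensor(np.full((128, 128), 50.0)) + o
print(f"residual non-uniformity: {corrected.std():.2e}")  # ~0 for this noiseless model
```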
Priye, Aashish; Wong, Season; Bi, Yuanpeng; Carpio, Miguel; Chang, Jamison; Coen, Mauricio; Cope, Danielle; Harris, Jacob; Johnson, James; Keller, Alexandra; Lim, Richard; Lu, Stanley; Millard, Alex; Pangelinan, Adriano; Patel, Neal; Smith, Luke; Chan, Kamfai; Ugaz, Victor M
2016-05-03
We introduce a portable biochemical analysis platform for rapid field deployment of nucleic acid-based diagnostics using consumer-class quadcopter drones. This approach exploits the ability to isothermally perform the polymerase chain reaction (PCR) with a single heater, enabling the system to be operated using standard 5 V USB sources that power mobile devices (via battery, solar, or hand crank action). Time-resolved fluorescence detection and quantification is achieved using a smartphone camera and integrated image analysis app. Standard sample preparation is enabled by leveraging the drone's motors as centrifuges via 3D printed snap-on attachments. These advancements make it possible to build a complete DNA/RNA analysis system at a cost of ∼$50 ($US). Our instrument is rugged and versatile, enabling pinpoint deployment of sophisticated diagnostics to distributed field sites. This capability is demonstrated by successful in-flight replication of Staphylococcus aureus and λ-phage DNA targets in under 20 min. The ability to perform rapid in-flight assays with smartphone connectivity eliminates delays between sample collection and analysis so that test results can be delivered in minutes, suggesting new possibilities for drone-based systems to function in broader and more sophisticated roles beyond cargo transport and imaging.
Talent Development Gamification in Talent Selection Assessment Centres
ERIC Educational Resources Information Center
Tansley, Carole; Hafermalz, Ella; Dery, Kristine
2016-01-01
Purpose: The purpose of this paper is to examine the relationship between the use of sophisticated talent selection processes such as gamification and training and development interventions designed to ensure that candidates can successfully navigate the talent assessment process. Gamification is the application of game elements to non-game…
Acousto-Optic Tunable Filter for Time-Domain Processing of Ultra-Short Optical Pulses
The application of acousto-optic tunable filters for shaping of ultra-fast pulses in the time domain is analyzed and demonstrated. With the rapid ... advance of acousto-optic tunable filter (AOTF) technology, the opportunity for sophisticated signal processing capabilities arises. AOTFs offer unique ...
ERIC Educational Resources Information Center
Latta, Raymond F.; Downey, Carolyn J.
This book presents a wide array of sophisticated problem-solving tools and shows how to use them in a humanizing way that involves all stakeholders in the process. Chapter 1 develops the rationale for educational stakeholders to consider quality tools. Chapter 2 highlights three quality group-process tools--brainstorming, the nominal group…
Heiss, Alexander; Park, Daesung; Joel, Anna-Christin
2018-04-01
Spiders are natural specialists in fiber processing. In particular, cribellate spiders manifest this ability as they produce a wool of nanofibers to capture prey. During its production they deploy a sophisticated movement of their spinnerets to darn in the fibers, as well as a comb-like row of setae on the metatarsus, termed the calamistrum, which plays a key role in nanofiber processing. In comparison to the elaborate nanofiber extraction and handling process of the spider's calamistrum, the human endeavors of spinning and handling artificial nanofibers are still a primitive technical process. An implementation of biomimetics in spinning technology could lead to new materials and applications. Despite the general progress in related fields of nanoscience, the expected leap forward in spinning technology depends on a better understanding of the specific shapes and surfaces that control the forces at the nanoscale and that are involved in the mechanical processing of the nanofibers. In this study, the authors investigated the morphology of the calamistrum of the cribellate spider Uloborus plumipes. Focused ion beam and scanning electron microscopy (FIB-SEM) tomography provided good image contrast and the best trade-off between investigated volume and spatial resolution. A comprehensive three-dimensional model is presented and the putative role of the calamistrum in nanofiber processing is discussed.
Emerging diagnostic and therapeutic molecular imaging applications in vascular disease
Eraso, Luis H; Reilly, Muredach P; Sehgal, Chandra; Mohler, Emile R
2013-01-01
Assessment of vascular disease has evolved from mere indirect and direct measurements of luminal stenosis to sophisticated imaging methods to depict millimeter structural changes of the vasculature. In the near future, the emergence of multimodal molecular imaging strategies may enable robust therapeutic and diagnostic (‘theragnostic’) approaches to vascular diseases that comprehensively consider structural, functional, biological and genomic characteristics of the disease in individualized risk assessment, early diagnosis and delivery of targeted interventions. This review presents a summary of recent preclinical and clinical developments in molecular imaging and theragnostic applications covering diverse atherosclerosis events such as endothelial activation, macrophage inflammatory activity, plaque neovascularization and arterial thrombosis. The main focus is on molecular targets designed for imaging platforms commonly used in clinical medicine including magnetic resonance, computed tomography and positron emission tomography. A special emphasis is given to vascular ultrasound applications, considering the important role this imaging platform plays in the clinical and research practice of the vascular medicine specialty. PMID:21310769
New neutron imaging techniques to close the gap to scattering applications
NASA Astrophysics Data System (ADS)
Lehmann, Eberhard H.; Peetermans, S.; Trtik, P.; Betz, B.; Grünzweig, C.
2017-01-01
Neutron scattering and neutron imaging are activities at the strong neutron sources which have been developed rather independently. However, there are similarities and overlaps in the research topics to which both methods can contribute, and thus useful synergies can be found. In particular, the spatial resolution of neutron imaging has improved recently, which - together with enhanced efficiency in data acquisition - can be exploited to narrow the energy band and to implement more sophisticated methods like neutron grating interferometry. This paper provides a report on the current options in neutron imaging and describes how the gap to neutron scattering data can be closed in the future, e.g. by diffractive imaging, the use of polarized neutrons and the dark-field imaging of relevant materials. This overview is focused on the interaction between neutron imaging and neutron scattering with the aim of synergy. It reflects mainly the authors’ experiences at their PSI facilities without ignoring the activities at the different other labs world-wide.
Dynamics of the DNA damage response: insights from live-cell imaging
Karanam, Ketki; Loewer, Alexander
2013-01-01
All organisms have to safeguard the integrity of their genome to prevent malfunctioning and oncogenic transformation. Sophisticated DNA damage response mechanisms have evolved to detect and repair genomic lesions. With the emergence of live-cell microscopy of individual cells, we now begin to appreciate the complex spatiotemporal kinetics of the DNA damage response and can address the causes and consequences of the heterogeneity in the responses of genetically identical cells. Here, we highlight key discoveries where live-cell imaging has provided unprecedented insights into how cells respond to DNA double-strand breaks and discuss the main challenges and promises in using this technique. PMID:23292635
Lung Cancer: Posttreatment Imaging: Radiation Therapy and Imaging Findings.
Benveniste, Marcelo F; Welsh, James; Viswanathan, Chitra; Shroff, Girish S; Betancourt Cuellar, Sonia L; Carter, Brett W; Marom, Edith M
2018-05-01
In this review, we discuss the different radiation delivery techniques available to treat non-small cell lung cancer, typical radiologic manifestations of conventional radiotherapy, and different patterns of lung injury and temporal evolution of the newer radiotherapy techniques. More sophisticated techniques include intensity-modulated radiotherapy, stereotactic body radiotherapy, proton therapy, and respiration-correlated computed tomography or 4-dimensional computed tomography for radiotherapy planning. Knowledge of the radiation treatment plan and technique, the completion date of radiotherapy, and the temporal evolution of radiation-induced lung injury is important to identify expected manifestations of radiation-induced lung injury and differentiate them from tumor recurrence or infection. Published by Elsevier Inc.
Sydney Observatory and astronomy teaching in the 90s
NASA Astrophysics Data System (ADS)
Lomb, N.
1996-05-01
Computers and the Internet have created a revolution in the way astronomy can be communicated to the public. At Sydney Observatory we make full use of these recent developments. In our lecture room a variety of sophisticated computer programs can show, with the help of a projection TV system, the appearance and motion of the sky at any place, date or time. The latest HST images obtained from the Internet can be shown, as can images taken through our own Meade 16 inch telescope. This recently installed computer-controlled telescope with its accurate pointing is an ideal instrument for a light-polluted site such as ours.
Creation of Anatomically Accurate Computer-Aided Design (CAD) Solid Models from Medical Images
NASA Technical Reports Server (NTRS)
Stewart, John E.; Graham, R. Scott; Samareh, Jamshid A.; Oberlander, Eric J.; Broaddus, William C.
1999-01-01
Most surgical instrumentation and implants used in the world today are designed with sophisticated Computer-Aided Design (CAD)/Computer-Aided Manufacturing (CAM) software. This software automates the mechanical development of a product from its conceptual design through manufacturing. CAD software also provides a means of manipulating solid models prior to Finite Element Modeling (FEM). Few surgical products are designed in conjunction with accurate CAD models of human anatomy because of the difficulty with which these models are created. We have developed a novel technique that creates anatomically accurate, patient specific CAD solids from medical images in a matter of minutes.
Ozkan, Mehmet; Gündüz, Sabahattin; Yildiz, Mustafa; Duran, Nilüfer Eksi
2010-05-01
Prosthetic heart valve obstruction (PHVO) caused by pannus formation is an uncommon but serious complication. Although two-dimensional transesophageal echocardiography (2D-TEE) is the method of choice in the evaluation of PHVO, visualization of pannus is almost impossible with 2D-TEE. While demonstrating the precise aetiology of PHVO is essential for guiding the therapy, either thrombolysis for valve thrombosis or surgery for pannus formation, more sophisticated imaging techniques are needed in patients with suspected pannus formation. We present real-time 3D-TEE imaging in a patient with mechanical mitral PHVO, clearly demonstrating pannus overgrowth.
The clinical value of large neuroimaging data sets in Alzheimer's disease.
Toga, Arthur W
2012-02-01
Rapid advances in neuroimaging and cyberinfrastructure technologies have brought explosive growth in the Web-based warehousing, availability, and accessibility of imaging data on a variety of neurodegenerative and neuropsychiatric disorders and conditions. There has been a prolific development and emergence of complex computational infrastructures that serve as repositories of databases and provide critical functionalities such as sophisticated image analysis algorithm pipelines and powerful three-dimensional visualization and statistical tools. The statistical and operational advantages of collaborative, distributed team science in the form of multisite consortia push this approach in a diverse range of population-based investigations. Copyright © 2012 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Istvan Etesi, Laszlo; Tolbert, K.; Schwartz, R.; Zarro, D.; Dennis, B.; Csillaghy, A.
2010-05-01
In our project "Extending the Virtual Solar Observatory (VSO)," we have combined some of the features available in Solar Software (SSW) to produce an integrated environment for data analysis, supporting the complete workflow from data location, retrieval, preparation, and analysis to creating publication-quality figures. Our goal is an integrated analysis experience in IDL, easy-to-use but flexible enough to allow more sophisticated procedures such as multi-instrument analysis. To that end, we have made the transition from a locally oriented setting where all the analysis is done on the user's computer, to an extended analysis environment where IDL has access to services available on the Internet. We have implemented a form of Cloud Computing that uses the VSO search and a new data retrieval and pre-processing server (PrepServer) that provides remote execution of instrument-specific data preparation. We have incorporated the interfaces to the VSO search and the PrepServer into an IDL widget (SHOW_SYNOP) that provides user-friendly searching and downloading of raw solar data and optionally sends search results for pre-processing to the PrepServer prior to downloading the data. The raw and pre-processed data can be displayed with our plotting suite, PLOTMAN, which can handle different data types (light curves, images, and spectra) and perform basic data operations such as zooming, image overlays, solar rotation, etc. PLOTMAN is highly configurable and suited for visual data analysis and for creating publishable figures. PLOTMAN and SHOW_SYNOP work hand-in-hand for a convenient working environment. Our environment supports a growing number of solar instruments that currently includes RHESSI, SOHO/EIT, TRACE, SECCHI/EUVI, HINODE/XRT, and HINODE/EIS.
Real-time automatic fiducial marker tracking in low contrast cine-MV images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Wei-Yang; Lin, Shu-Fang; Yang, Sheng-Chang
2013-01-15
Purpose: To develop a real-time automatic method for tracking implanted radiographic markers in low-contrast cine-MV patient images used in image-guided radiation therapy (IGRT). Methods: Intrafraction motion tracking using radiotherapy beam-line MV images has gained some attention recently in IGRT because no additional imaging dose is introduced. However, MV images have much lower contrast than kV images; therefore, a robust and automatic algorithm for marker detection in MV images is a prerequisite. Previous marker detection methods are all based on template matching or its derivatives. Template matching needs to match object shape that changes significantly for different implantations and projection angles. While these methods require a large number of templates to cover various situations, they are often forced to use a smaller number of templates to reduce the computation load because their methods all require exhaustive search in the region of interest. The authors solve this problem by synergetic use of modern but well-tested computer vision and artificial intelligence techniques; specifically the authors detect implanted markers utilizing discriminant analysis for initialization and use mean-shift feature space analysis for sequential tracking. This novel approach avoids exhaustive search by exploiting the temporal correlation between consecutive frames and makes it possible to perform more sophisticated detection at the beginning to improve the accuracy, followed by ultrafast sequential tracking after the initialization. The method was evaluated and validated using 1149 cine-MV images from two prostate IGRT patients and compared with manual marker detection results from six researchers. The average of the manual detection results is considered the ground truth for comparisons. Results: The average root-mean-square errors of our real-time automatic tracking method from the ground truth are 1.9 and 2.1 pixels for the two patients (0.26 mm/pixel). The standard deviations of the results from the 6 researchers are 2.3 and 2.6 pixels. The proposed framework takes about 128 ms to detect four markers in the first MV image and about 23 ms to track these markers in each of the subsequent images. Conclusions: The unified framework for tracking of multiple markers presented here can achieve marker detection accuracy similar to manual detection even in low-contrast cine-MV images. It can cope with shape deformations of fiducial markers at different gantry angles. The fast processing speed reduces the image processing portion of the system latency and can therefore improve the performance of real-time motion compensation.
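The sequential-tracking stage rests on mean-shift iteration over a likelihood map, an operation available directly in OpenCV. A toy sketch follows; the raw-intensity likelihood below is a stand-in, since the paper derives its map from discriminant analysis, which is not reproduced here:

```python
import cv2
import numpy as np

rng = np.random.default_rng(3)
frame = (20 * rng.random((256, 256))).astype(np.uint8)  # noisy, low-contrast MV frame
frame[118:124, 148:154] = 200                           # bright stand-in for one marker

# Likelihood map: here just the raw intensity (marker assumed bright)
likelihood = frame

window = (140, 110, 20, 20)  # initial (x, y, w, h) search window near the marker
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
n_iter, window = cv2.meanShift(likelihood, window, criteria)
print("converged window:", window)
```

Because mean shift only climbs the local density around the previous window, each frame costs a handful of small-window iterations rather than an exhaustive search, which is the speedup the abstract describes.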
NASA Technical Reports Server (NTRS)
Schwarz, F. C.
1971-01-01
Processing of electric power has been presented as a discipline that draws on almost every field of electrical engineering, including system and control theory, communications theory, electronic network design, and power component technology. The cost of power processing equipment, which often equals that of expensive, sophisticated, and unconventional sources of electrical energy, such as solar batteries, is a significant consideration in the choice of electric power systems.
NASA Astrophysics Data System (ADS)
Plionis, A. A.; Peterson, D. S.; Tandon, L.; LaMont, S. P.
2010-03-01
Uranium particles within the respirable size range pose a significant hazard to the health and safety of workers. Significant differences in the deposition and incorporation patterns of aerosols within the respirable range can be identified and integrated into sophisticated health physics models. Data characterizing the uranium particle size distribution resulting from specific foundry-related processes are needed. Using personal air sampling cascade impactors, particles collected from several foundry processes were sorted by activity median aerodynamic diameter onto various Marple substrates. After an initial gravimetric assessment of each impactor stage, the substrates were analyzed by alpha spectrometry to determine the uranium content of each stage. Alpha spectrometry provides rapid nondestructive isotopic data that can distinguish process uranium from natural sources and the degree of uranium contribution to the total accumulated particle load. In addition, the particle size bins utilized by the impactors provide adequate resolution to determine whether a process particle size distribution is lognormal, bimodal, or trimodal. Data on process uranium particle size values and distributions facilitate the development of more sophisticated and accurate models for internal dosimetry, resulting in an improved understanding of foundry worker health and safety.
Neo-Sophistic Rhetorical Theory: Sophistic Precedents for Contemporary Epistemic Rhetoric.
ERIC Educational Resources Information Center
McComiskey, Bruce
Interest in the sophists has recently intensified among rhetorical theorists, culminating in the notion that rhetoric is epistemic. Epistemic rhetoric has its first and deepest roots in sophistic epistemological and rhetorical traditions, so that the view of rhetoric as epistemic is now being dubbed "neo-sophistic." In epistemic…
Temperature measurement with industrial color camera devices
NASA Astrophysics Data System (ADS)
Schmidradler, Dieter J.; Berndorfer, Thomas; van Dyck, Walter; Pretschuh, Juergen
1999-05-01
This paper discusses color camera based temperature measurement. Usually, visual imaging and infrared image sensing are treated as two separate disciplines. We will show that a well-selected color camera device might be a cheaper, more robust and more sophisticated solution for optical temperature measurement in several cases. Herein, only implementation fragments and important restrictions for the sensing element will be discussed. Our aim is to draw the reader's attention to the use of visual image sensors for measuring thermal radiation and temperature, and to give reasons for the need for improved technologies for infrared camera devices. With AVL List, our industrial partner, we successfully used the proposed sensor to perform temperature measurement of flames inside the combustion chamber of diesel engines, which finally led to the presented insights.
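The physical basis for colour-camera thermometry of flames is ratio pyrometry: under Wien's approximation, the ratio of radiance in two spectral bands (for example, the red and green channels) depends on temperature but cancels emissivity, provided emissivity is equal in both bands (greybody). A sketch of that inversion, with assumed effective wavelengths rather than any calibrated camera response:

```python
import numpy as np

C2 = 1.4388e-2  # second radiation constant, m*K

def wien_radiance(lam, temperature):
    """Greybody spectral radiance under Wien's approximation (emissivity dropped)."""
    return lam ** -5 * np.exp(-C2 / (lam * temperature))

def ratio_temperature(i_red, i_green, lam_red=620e-9, lam_green=540e-9):
    """Invert the red/green radiance ratio for temperature; assumes equal
    emissivity in both bands and radiometrically calibrated channel values."""
    num = C2 * (1.0 / lam_green - 1.0 / lam_red)
    den = np.log(i_red / i_green) - 5.0 * np.log(lam_green / lam_red)
    return num / den

# Round-trip check at 2000 K with the assumed effective wavelengths
t = ratio_temperature(wien_radiance(620e-9, 2000.0), wien_radiance(540e-9, 2000.0))
print(f"{t:.1f} K")  # ~2000.0
```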
X-ray phase contrast tomography by tracking near field speckle
Wang, Hongchang; Berujon, Sebastien; Herzen, Julia; Atwood, Robert; Laundy, David; Hipp, Alexander; Sawhney, Kawal
2015-01-01
X-ray imaging techniques that capture variations in the x-ray phase can yield higher contrast images with lower x-ray dose than is possible with conventional absorption radiography. However, the extraction of phase information is often more difficult than the extraction of absorption information and requires a more sophisticated experimental arrangement. We here report a method for three-dimensional (3D) X-ray phase contrast computed tomography (CT) which gives quantitative volumetric information on the real part of the refractive index. The method is based on the recently developed X-ray speckle tracking technique in which the displacement of near field speckle is tracked using a digital image correlation algorithm. In addition to differential phase contrast projection images, the method allows the dark-field images to be simultaneously extracted. After reconstruction, compared to conventional absorption CT images, the 3D phase CT images show greatly enhanced contrast. This new imaging method has advantages compared to other X-ray imaging methods in simplicity of experimental arrangement, speed of measurement and relative insensitivity to beam movements. These features make the technique an attractive candidate for material imaging such as in-vivo imaging of biological systems containing soft tissue. PMID:25735237
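The core operation of the method, measuring the local displacement of the speckle pattern between two images by digital image correlation, can be sketched with a subpixel cross-correlation routine from scikit-image. The synthetic speckle and imposed shift below are assumptions for illustration, not the authors' data:

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

rng = np.random.default_rng(4)
reference = rng.random((64, 64))                        # near-field speckle, reference
displaced = nd_shift(reference, (0.6, -1.3), order=3)   # speckle displaced by the sample

# Subpixel displacement of the speckle pattern; in the full method this is done per
# subwindow, and the map of local shifts yields the differential phase image.
shift, error, _ = phase_cross_correlation(reference, displaced, upsample_factor=100)
print(shift)  # magnitude ~(0.6, 1.3); sign follows skimage's registration convention
```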
NASA Technical Reports Server (NTRS)
Hall, David G.; Bridges, James
1992-01-01
A sophisticated, multi-channel computerized data acquisition and processing system was developed at the NASA LeRC for use in noise experiments. This technology, which is available for transfer to industry, provides a convenient, cost-effective alternative to analog tape recording for high frequency acoustic measurements. This system provides 32-channel acquisition of microphone signals with an analysis bandwidth up to 100 kHz per channel. Cost was minimized through the use of off-the-shelf components. Requirements to allow for future expansion were met by choosing equipment which adheres to established industry standards for hardware and software. Data processing capabilities include narrow band and 1/3 octave spectral analysis, compensation for microphone frequency response/directivity, and correction of acoustic data to standard day conditions. The system was used successfully in a major wind tunnel test program at NASA LeRC to acquire and analyze jet noise data in support of the High Speed Civil Transport (HSCT) program.
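Of the processing steps listed, 1/3-octave analysis is easy to make concrete: band centre frequencies follow a geometric series with three bands per octave. A short sketch of the base-2 convention (the system's exact band definitions are not specified here):

```python
import numpy as np

# Base-2 1/3-octave band centre frequencies spanning roughly 50 Hz to 100 kHz
f_ref = 1000.0                      # reference centre frequency, Hz
k = np.arange(-13, 21)              # band indices relative to 1 kHz
centres = f_ref * 2.0 ** (k / 3.0)
lo, hi = centres * 2 ** (-1 / 6), centres * 2 ** (1 / 6)  # band edges

for fc, f1, f2 in zip(centres[-3:], lo[-3:], hi[-3:]):
    print(f"centre {fc / 1e3:7.2f} kHz, edges {f1 / 1e3:7.2f}-{f2 / 1e3:7.2f} kHz")
```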
Tools for Atmospheric Radiative Transfer: Streamer and FluxNet. Revised
NASA Technical Reports Server (NTRS)
Key, Jeffrey R.; Schweiger, Axel J.
1998-01-01
Two tools for the solution of radiative transfer problems are presented. Streamer is a highly flexible medium spectral resolution radiative transfer model based on the plane-parallel theory of radiative transfer. Capable of computing either fluxes or radiances, it is suitable for studying radiative processes at the surface or within the atmosphere and for the development of remote-sensing algorithms. FluxNet is a fast neural network-based implementation of Streamer for computing surface fluxes. It allows for a sophisticated treatment of radiative processes in the analysis of large data sets and potential integration into geophysical models where computational efficiency is an issue. Documentation and tools for the development of alternative versions of FluxNet are available. Collectively, Streamer and FluxNet solve a wide variety of problems related to radiative transfer: Streamer provides the detail and sophistication needed to perform basic research on most aspects of complex radiative processes, while the efficiency and simplicity of FluxNet make it ideal for operational use.
Learning discriminative features from RGB-D images for gender and ethnicity identification
NASA Astrophysics Data System (ADS)
Azzakhnini, Safaa; Ballihi, Lahoucine; Aboutajdine, Driss
2016-11-01
The development of sophisticated sensor technologies gave rise to an interesting variety of data. With the appearance of affordable devices, such as the Microsoft Kinect, depth-maps and three-dimensional data became easily accessible. This attracted many computer vision researchers seeking to exploit this information in classification and recognition tasks. In this work, the problem of face classification in the context of RGB images and depth information (RGB-D images) is addressed. The purpose of this paper is to study and compare some popular techniques for gender recognition and ethnicity classification to understand how much depth data can improve the quality of recognition. Furthermore, we investigate which combination of face descriptors, feature selection methods, and learning techniques is best suited to better exploit RGB-D images. The experimental results show that depth data improve the recognition accuracy for gender and ethnicity classification applications in many use cases.
Lee, Kai-Hui; Chiu, Pei-Ling
2013-10-01
Conventional visual cryptography (VC) suffers from a pixel-expansion problem, or an uncontrollable display quality problem for recovered images, and lacks a general approach to construct visual secret sharing schemes for general access structures. We propose a general and systematic approach to address these issues without sophisticated codebook design. This approach can be used for binary secret images in non-computer-aided decryption environments. To avoid pixel expansion, we design a set of column vectors to encrypt secret pixels rather than using the conventional VC-based approach. We begin by formulating a mathematical model for the VC construction problem to find the column vectors for the optimal VC construction, after which we develop a simulated-annealing-based algorithm to solve the problem. The experimental results show that the display quality of the recovered image is superior to that reported in previous studies.
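The paper's exact cost function and neighborhood moves are not reproduced here; below is a generic simulated-annealing skeleton of the kind that could search for good encryption column vectors, with `cost` and `neighbor` left as problem-specific stubs.

```python
import math, random

def simulated_annealing(init, cost, neighbor, t0=1.0, alpha=0.995, steps=20000):
    """Generic SA loop; `cost` would encode the VC display-quality objective
    and feasibility constraints on the encryption column vectors, and
    `neighbor` would perturb the current candidate construction."""
    x, fx = init, cost(init)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        y = neighbor(x)
        fy = cost(y)
        # Accept improvements always, worse moves with Boltzmann probability.
        if fy < fx or random.random() < math.exp((fx - fy) / max(t, 1e-12)):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= alpha                       # geometric cooling schedule
    return best, fbest
```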
DOE Office of Scientific and Technical Information (OSTI.GOV)
Makowska, Małgorzata G.; Theil Kuhn, Luise; Cleemann, Lars N.; ...
2015-12-17
High material penetration by neutrons allows for experiments using sophisticated sample environments providing complex conditions. Thus, neutron imaging holds potential for performing in situ nondestructive measurements on large samples or even full technological systems, which are not possible with any other technique. This paper presents a new sample environment for in situ high resolution neutron imaging experiments at temperatures from room temperature up to 1100 °C and/or using a controllable flow of reactive atmospheres. The design also offers the possibility to directly combine imaging with diffraction measurements. Design, special features, and specification of the furnace are described. In addition, examples of experiments successfully performed at various neutron facilities with the furnace, as well as examples of possible applications, are presented. This work covers a broad field of research from fundamental to technological investigations of various types of materials and components.
Mitigating the Backlash: US Airpower as a Military Instrument of Policy
2003-06-01
maintain their preeminence by employing strategies based more on benevolence than coercion.”31 This is a key point, as it marks a line of departure...weapons are easily defeated with smoke or fire in the target area (to defeat laser designators and thermal imaging), by adequate concealment and...create decoy surface-to-air missiles (SAMs) and radars, some quite sophisticated, and to employ previously “strategic” (immobile, point-defense
Basic equipment requirements for hemodynamic monitoring.
Morton, B C
1979-01-01
Hemodynamic monitoring in the critically ill patient requires the use of sophisticated electronic devices. To use this equipment one should have a general understanding of the principles involved and the requirements of a reliable system. This communication serves to explain the requirements of the various components of a hemodynamic monitoring system and to demonstrate how they interact to produce accurate and safe electronic signals from mechanical wave forms obtained from the patient. PMID:497978
Securing Information with Complex Optical Encryption Networks
2015-08-11
Network Security, Network Vulnerability, Multi-dimensional Processing, optoelectronic devices... optoelectronic devices and systems should be analyzed before the retrieval, any hostile hacker will need to possess multi-disciplinary scientific...sophisticated optoelectronic principles and systems where he/she needs to process the information. However, in the military applications, most military
How Do Students Regulate their Learning of Complex Systems with Hypermedia?
ERIC Educational Resources Information Center
Azevedo, Roger; Seibert, Diane; Guthrie, John T.; Cromley, Jennifer G.; Wang, Huei-yu; Tron, Myriam
This study examined the role of different goal-setting instructional interventions in facilitating students' shift to more sophisticated mental models of the circulatory system as indicated by both performance and process data. Researchers adopted the information processing model of self-regulated learning of P. Winne and colleagues (1998, 2001)…
DOT National Transportation Integrated Search
1998-09-16
In order to have effective public involvement, governments need a road map for the decision-making process. Yet, citizens from small and medium sized cities frequently do not have the resources to use sophisticated technology for public involve...
That Elusive, Eclectic Thing Called Thermal Environment: What a Board Should Know About It
ERIC Educational Resources Information Center
Schutte, Frederick
1970-01-01
Discussion of proper thermal environment for protection of sophisticated educational equipment such as computer and data-processing machines, magnetic tapes, closed-circuit television and video tape communications systems.
NASA Astrophysics Data System (ADS)
Tauro, Flavia; Grimaldi, Salvatore
2017-04-01
Recently, several efforts have been devoted to the design and development of innovative, and often unintended, approaches for the acquisition of hydrological data. Among such pioneering techniques, this presentation reports recent advancements towards the establishment of a novel noninvasive and potentially continuous methodology based on the acquisition and analysis of images for spatially distributed observations of the kinematics of surface waters. The approach aims at enabling rapid, affordable, and accurate surface flow monitoring of natural streams. Flow monitoring is an integral part of hydrological sciences and is essential for disaster risk reduction and the comprehension of natural phenomena. However, water processes are inherently complex to observe: they are characterized by multiscale and highly heterogeneous phenomena which have traditionally demanded sophisticated and costly measurement techniques. Challenges in the implementation of such techniques have also resulted in a lack of hydrological data during extreme events, in difficult-to-access environments, and at high temporal resolution. By combining low-cost yet high-resolution images and several velocimetry algorithms, noninvasive flow monitoring has been successfully conducted at highly heterogeneous scales, spanning from rills to highly turbulent streams and medium-scale rivers, with minimal supervision by external users. Noninvasive image data acquisition has also afforded observations in high flow conditions. The latest developments towards continuous flow monitoring at the catchment scale have entailed the development of a remote gauge-cam station on the Tiber River and integration of flow monitoring through image analysis with unmanned aerial systems (UASs) technology. The gauge-cam station and the UAS platform both afford noninvasive image acquisition and calibration through an innovative laser-based setup. Compared to traditional point-based instrumentation, images allow for generating surface flow velocity maps which fully describe the kinematics of the velocity field in natural streams. Also, continuous observations provide a close picture of the evolving dynamics of natural water bodies. Despite such promising achievements, dealing with images also involves coping with adverse illumination, massive data handling and storage, and data-intensive computing. Most importantly, establishing a novel observational technique requires estimation of the uncertainty associated with measurements and thorough comparison to existing benchmark approaches. In this presentation, we provide answers to some of these issues and offer perspectives for future research.
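A minimal sketch of the image-velocimetry core behind such monitoring: cross-correlate an interrogation window between two frames and convert the correlation-peak offset into a velocity. The window handling, calibration factor, and frame interval are illustrative assumptions; real implementations add subpixel peak fitting and outlier filtering.

```python
import numpy as np
from scipy.signal import fftconvolve

def window_displacement(f0, f1):
    """Peak of the cross-correlation between two interrogation windows
    gives an integer-pixel estimate of the mean tracer displacement."""
    a = f0 - f0.mean()
    b = f1 - f1.mean()
    corr = fftconvolve(b, a[::-1, ::-1], mode="same")   # cross-correlation
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    cy, cx = np.array(corr.shape) // 2                  # zero-shift location
    return dy - cy, dx - cx

# velocity (m/s) = displacement (px) * ground_sampling_distance (m/px) / dt (s)
```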
Comparison of fMRI paradigms assessing visuospatial processing: Robustness and reproducibility
Herholz, Peer; Zimmermann, Kristin M.; Westermann, Stefan; Frässle, Stefan; Jansen, Andreas
2017-01-01
The development of brain imaging techniques, in particular functional magnetic resonance imaging (fMRI), made it possible to non-invasively study the hemispheric lateralization of cognitive brain functions in large cohorts. Comprehensive models of hemispheric lateralization are, however, still missing and should not only account for the hemispheric specialization of individual brain functions, but also for the interactions among different lateralized cognitive processes (e.g., language and visuospatial processing). This calls for robust and reliable paradigms to study hemispheric lateralization for various cognitive functions. While numerous reliable imaging paradigms have been developed for language, which represents the most prominent left-lateralized brain function, the reliability of imaging paradigms investigating typically right-lateralized brain functions, such as visuospatial processing, has received comparatively less attention. In the present study, we aimed to establish an fMRI paradigm that robustly and reliably identifies right-hemispheric activation evoked by visuospatial processing in individual subjects. In a first study, we therefore compared three frequently used paradigms for assessing visuospatial processing and evaluated their utility to robustly detect right-lateralized brain activity on a single-subject level. In a second study, we then assessed the test-retest reliability of the so-called Landmark task, the paradigm that yielded the most robust results in study 1. At the single-voxel level, we found poor reliability of the brain activation underlying visuospatial attention. This suggests that poor signal-to-noise ratios can become a limiting factor for test-retest reliability. This represents a common detriment of fMRI paradigms investigating visuospatial attention in general and therefore highlights the need for careful considerations of both the possibilities and limitations of the respective fMRI paradigm, in particular when one is interested in effects at the single-voxel level. Notably, however, when focusing on the reliability of measures of hemispheric lateralization (which was the main goal of study 2), we show that hemispheric dominance (quantified by the lateralization index, LI, with |LI| > 0.4) of the evoked activation could be robustly determined in more than 62% and, if considering only two categories (i.e., left, right), in more than 93% of our subjects. Furthermore, the reliability of the lateralization strength (LI) was “fair” to “good”. In conclusion, our results suggest that the degree of right-hemispheric dominance during visuospatial processing can be reliably determined using the Landmark task, both at the group and single-subject level, while at the same time stressing the need for future refinements of experimental paradigms and more sophisticated fMRI data acquisition techniques. PMID:29059201
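The lateralization index underlying these numbers is conventionally LI = (L - R)/(L + R); a tiny sketch follows, where the hemispheric inputs could be counts of suprathreshold voxels (the exact activation measure is an assumption here, not taken from the study).

```python
def lateralization_index(n_left, n_right):
    """LI = (L - R) / (L + R); under the |LI| > 0.4 criterion, LI > 0.4
    indicates left-hemispheric and LI < -0.4 right-hemispheric dominance."""
    return (n_left - n_right) / float(n_left + n_right)

print(lateralization_index(120, 480))   # -0.6 -> right-hemispheric dominance
```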
Multi-modality 3D breast imaging with X-Ray tomosynthesis and automated ultrasound.
Sinha, Sumedha P; Roubidoux, Marilyn A; Helvie, Mark A; Nees, Alexis V; Goodsitt, Mitchell M; LeCarpentier, Gerald L; Fowlkes, J Brian; Chalek, Carl L; Carson, Paul L
2007-01-01
This study evaluated the utility of 3D automated ultrasound in conjunction with 3D digital X-Ray tomosynthesis for breast cancer detection and assessment, to better localize and characterize lesions in the breast. Tomosynthesis image volumes and automated ultrasound image volumes were acquired in the same geometry and in the same view for 27 patients. Three MQSA-certified radiologists independently reviewed the image volumes, visually correlating the images from the two modalities with in-house software. More sophisticated software was used on a smaller set of 10 cases, which enabled the radiologist to draw a 3D box around the suspicious lesion in one image set and isolate an anatomically correlated, similarly boxed region in the other modality image set. In the primary study, correlation was found to be moderately useful to the readers. In the additional study, using improved software, the median usefulness rating increased and confidence in localizing and identifying the suspicious mass increased in more than half the cases. As automated scanning and reading software techniques advance, superior results are expected.
Remote sensing: a tool for park planning and management
Draeger, William C.; Pettinger, Lawrence R.
1981-01-01
Remote sensing may be defined as the science of imaging or measuring objects from a distance. More commonly, however, the term is used in reference to the acquisition and use of photographs, photo-like images, and other data acquired from aircraft and satellites. Thus, remote sensing includes the use of such diverse materials as photographs taken by hand from a light aircraft, conventional aerial photographs obtained with a precision mapping camera, satellite images acquired with sophisticated scanning devices, radar images, and magnetic and gravimetric data that may not even be in image form. Remotely sensed images may be color or black and white, can vary in scale from those that cover only a few hectares of the earth's surface to those that cover tens of thousands of square kilometers, and they may be interpreted visually or with the assistance of computer systems. This article attempts to describe several of the commonly available types of remotely sensed data, to discuss approaches to data analysis, and to demonstrate (with image examples) typical applications that might interest managers of parks and natural areas.
[Rational imaging in locally advanced prostate cancer].
Beissert, M; Lorenz, R; Gerharz, E W
2008-11-01
Prostate cancer is one of the principal medical problems facing the male population in developed countries with an increasing need for sophisticated imaging techniques and risk-adapted treatment options. This article presents an overview of the current imaging procedures in the diagnosis of locally advanced prostate cancer. Apart from conventional gray-scale transrectal ultrasound (TRUS) as the most frequently used primary imaging modality we describe computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET). CT and MRI not only allow assessment of prostate anatomy but also a specific evaluation of the pelvic region. Color-coded and contrast-enhanced ultrasound, real-time elastography, dynamic contrast enhancement in MR imaging, diffusion imaging, and MR spectroscopy may lead to a clinically relevant improvement in the diagnosis of prostate cancer. While bone scintigraphy with (99m)Tc-bisphosphonates is still the method of choice in the evaluation of bone metastasis, whole-body MRI and PET using (18)F-NaF, (18)F-FDG, (11)C-choline, (11)C-acetate, and (18)F-choline as tracers achieve higher sensitivities.
Lip-reading enhancement for law enforcement
NASA Astrophysics Data System (ADS)
Theobald, Barry J.; Harvey, Richard; Cox, Stephen J.; Lewis, Colin; Owen, Gari P.
2006-09-01
Accurate lip-reading techniques would be of enormous benefit for agencies involved in counter-terrorism and other law-enforcement areas. Unfortunately, there are very few skilled lip-readers, and it is apparently a difficult skill to transmit, so the area is under-resourced. In this paper we investigate the possibility of making the lip-reading task more amenable to a wider range of operators by enhancing lip movements in video sequences using active appearance models. These are generative, parametric models commonly used to track faces in images and video sequences. The parametric nature of the model allows a face in an image to be encoded in terms of a few tens of parameters, while the generative nature allows faces to be re-synthesised using the parameters. The aim of this study is to determine if exaggerating lip-motions in video sequences by amplifying the parameters of the model improves lip-reading ability. We also present results of lip-reading tests undertaken by experienced (but non-expert) adult subjects who claim to use lip-reading in their speech recognition process. The results, which are comparisons of word error-rates on unprocessed and processed video, are mixed. We find that there appears to be the potential to improve the word error rate, but for the method to improve intelligibility, more sophisticated tracking and visual modelling are needed. Our technique can also act as an expression or visual gesture amplifier and so has applications to animation and the presentation of information via avatars or synthetic humans.
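The amplification idea can be pictured as scaling the fitted model parameters away from a mean configuration before re-synthesis; in this sketch, `fit_aam` and `synthesize` are hypothetical placeholders for the tracking and rendering stages described above.

```python
import numpy as np

def exaggerate(params, mean_params, gain=2.0):
    """Scale AAM parameter deviations from the mean face/utterance to
    amplify lip motion before re-synthesis (gain is a tunable factor)."""
    return mean_params + gain * (np.asarray(params) - np.asarray(mean_params))

# Per video frame (fit_aam and synthesize are hypothetical stand-ins):
#   p = fit_aam(frame)                              # tracking step
#   frame_out = synthesize(exaggerate(p, p_mean))   # amplified re-synthesis
```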
Multi-Atlas Segmentation of Biomedical Images: A Survey
Iglesias, Juan Eugenio; Sabuncu, Mert R.
2015-01-01
Multi-atlas segmentation (MAS), first introduced and popularized by the pioneering work of Rohlfing, Brandt, Menzel and Maurer Jr (2004), Klein, Mensh, Ghosh, Tourville and Hirsch (2005), and Heckemann, Hajnal, Aljabar, Rueckert and Hammers (2006), is becoming one of the most widely-used and successful image segmentation techniques in biomedical applications. By manipulating and utilizing the entire dataset of “atlases” (training images that have been previously labeled, e.g., manually by an expert), rather than some model-based average representation, MAS has the flexibility to better capture anatomical variation, thus offering superior segmentation accuracy. This benefit, however, typically comes at a high computational cost. Recent advancements in computer hardware and image processing software have been instrumental in addressing this challenge and facilitated the wide adoption of MAS. Today, MAS has come a long way and the approach includes a wide array of sophisticated algorithms that employ ideas from machine learning, probabilistic modeling, optimization, and computer vision, among other fields. This paper presents a survey of published MAS algorithms and studies that have applied these methods to various biomedical problems. In writing this survey, we have three distinct aims. Our primary goal is to document how MAS was originally conceived, later evolved, and now relates to alternative methods. Second, this paper is intended to be a detailed reference of past research activity in MAS, which now spans over a decade (2003 – 2014) and entails novel methodological developments and application-specific solutions. Finally, our goal is to also present a perspective on the future of MAS, which, we believe, will be one of the dominant approaches in biomedical image segmentation. PMID:26201875
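The simplest MAS fusion rule, majority voting across registered atlases, fits in a few lines; this is a baseline sketch, not one of the more sophisticated fusion schemes the survey covers.

```python
import numpy as np

def majority_vote(labels):
    """Fuse propagated atlas labels voxel-wise by majority vote.
    `labels`: (n_atlases, X, Y, Z) integer label maps, already warped
    into the target image space by registration."""
    labels = np.asarray(labels)
    n_classes = int(labels.max()) + 1
    votes = np.zeros((n_classes,) + labels.shape[1:], dtype=np.int32)
    for k in range(n_classes):
        votes[k] = (labels == k).sum(axis=0)   # count atlases voting for k
    return votes.argmax(axis=0)                # winning label per voxel
```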
Arbabi, Amir; Arbabi, Ehsan; Kamali, Seyedeh Mahsa; ...
2016-11-28
Optical metasurfaces are two-dimensional arrays of nano-scatterers that modify optical wavefronts at subwavelength spatial resolution. They are poised to revolutionize optics by enabling complex low-cost systems where multiple metasurfaces are lithographically stacked and integrated with electronics. For imaging applications, metasurface stacks can perform sophisticated image corrections and can be directly integrated with image sensors. Here we demonstrate this concept with a miniature flat camera integrating a monolithic metasurface lens doublet corrected for monochromatic aberrations, and an image sensor. The doublet lens, which acts as a fisheye photographic objective, has a small f-number of 0.9, an angle-of-view larger than 60° × 60°, and operates at 850 nm wavelength with 70% focusing efficiency. The camera exhibits nearly diffraction-limited image quality, which indicates the potential of this technology in the development of optical systems for microscopy, photography, and computer vision.
The contribution of physics to Nuclear Medicine: physicians' perspective on future directions.
Mankoff, David A; Pryma, Daniel A
2014-12-01
Advances in Nuclear Medicine physics enabled the specialty of Nuclear Medicine and directed research in other aspects of radiotracer imaging, ultimately leading to Nuclear Medicine's emergence as an important component of current medical practice. Nuclear Medicine's unique ability to characterize in vivo biology without perturbing it will assure its ongoing role in a practice of medicine increasingly driven by molecular biology. However, in the future, it is likely that advances in molecular biology and radiopharmaceutical chemistry will increasingly direct future developments in Nuclear Medicine physics, rather than relying on physics as the primary driver of advances in Nuclear Medicine. Working hand-in-hand with clinicians, chemists, and biologists, Nuclear Medicine physicists can greatly enhance the specialty by creating more sensitive and robust imaging devices, by enabling more facile and sophisticated image analysis to yield quantitative measures of regional in vivo biology, and by combining the strengths of radiotracer imaging with other imaging modalities in hybrid devices, with the overall goal to enhance Nuclear Medicine's ability to characterize regional in vivo biology.
NASA Astrophysics Data System (ADS)
Reddy, K. Rasool; Rao, Ch. Madhava
2018-04-01
Security is currently one of the primary concerns in the transmission of images, owing to their increasing use in industrial applications, so it is necessary to protect image data from unauthorized individuals. Various strategies have been investigated to secure such data, among which encryption is one of the most prominent. This paper presents a sophisticated Rijndael (AES) algorithm to protect the data from unauthorized parties. An Exponential Key Exchange (EKE) concept is also introduced to exchange the key between client and server. Data are exchanged over the network between client and server through a simple protocol known as the Trivial File Transfer Protocol (TFTP). This protocol is used mainly in embedded servers to transfer data, and it can also protect the data if protection capabilities are integrated. In this paper, we implement a GUI environment for image encryption and decryption. All experiments were carried out in a Linux environment using an OpenCV-Python script.
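A minimal sketch of AES image encryption in Python, using the PyCryptodome library and OpenCV for image I/O; CBC mode and local key generation are assumptions made for illustration, whereas in the described scheme the key would come from the EKE exchange and the ciphertext would travel over TFTP.

```python
import cv2
import numpy as np
from Crypto.Cipher import AES               # PyCryptodome
from Crypto.Random import get_random_bytes
from Crypto.Util.Padding import pad, unpad

key = get_random_bytes(16)   # assumption: generated locally; the paper's
iv = get_random_bytes(16)    # scheme would derive the key via EKE instead

img = cv2.imread("input.png")               # assumes the file exists
raw = img.tobytes()

# Encrypt, then decrypt and rebuild the image to verify round-tripping.
ct = AES.new(key, AES.MODE_CBC, iv).encrypt(pad(raw, AES.block_size))
pt = unpad(AES.new(key, AES.MODE_CBC, iv).decrypt(ct), AES.block_size)

restored = np.frombuffer(pt, dtype=img.dtype).reshape(img.shape)
assert (restored == img).all()
```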
Red, purple and pink: the colors of diffusion on pinterest.
Bakhshi, Saeideh; Gilbert, Eric
2015-01-01
Many lab studies have shown that colors can evoke powerful emotions and impact human behavior. Might these phenomena drive how we act online? A key research challenge for image-sharing communities is uncovering the mechanisms by which content spreads through the community. In this paper, we investigate whether there is a link between color and diffusion. Drawing on a corpus of one million images crawled from Pinterest, we find that color significantly impacts the diffusion of images and adoption of content on image sharing communities such as Pinterest, even after partially controlling for network structure and activity. Specifically, red, purple, and pink seem to promote diffusion, while green, blue, black, and yellow suppress it. To our knowledge, our study is the first to investigate how colors relate to online user behavior. In addition to contributing to the research conversation surrounding diffusion, these findings suggest future work using sophisticated computer vision techniques. We conclude with a discussion on the theoretical, practical and design implications suggested by this work, e.g., the design of engaging image filters.
Chen, Yang; Luo, Yan; Huang, Wei; Hu, Die; Zheng, Rong-Qin; Cong, Shu-Zhen; Meng, Fan-Kun; Yang, Hong; Lin, Hong-Jun; Sun, Yan; Wang, Xiu-Yan; Wu, Tao; Ren, Jie; Pei, Shu-Fang; Zheng, Ying; He, Yun; Hu, Yu; Yang, Na; Yan, Hongmei
2017-10-01
Hepatic fibrosis is a common middle stage of the pathological processes of chronic liver diseases. Clinical intervention during the early stages of hepatic fibrosis can slow the development of liver cirrhosis and reduce the risk of developing liver cancer. Performing a liver biopsy, the gold standard for viral liver disease management, has drawbacks such as invasiveness and a relatively high sampling error rate. Real-time tissue elastography (RTE), one of the most recently developed technologies, might be a promising imaging technology because it is both noninvasive and provides accurate assessments of hepatic fibrosis. However, determining the stage of liver fibrosis from RTE images in a clinic is a challenging task. In this study, in contrast to the previous liver fibrosis index (LFI) method, which predicts the stage of diagnosis using RTE images and multiple regression analysis, we employed four classical classifiers (i.e., Support Vector Machine, Naïve Bayes, Random Forest and K-Nearest Neighbor) to build a decision-support system to improve the hepatitis B stage diagnosis performance. Eleven RTE image features were obtained from 513 subjects who underwent liver biopsies in this multicenter collaborative research. The experimental results showed that the adopted classifiers significantly outperformed the LFI method and that the Random Forest (RF) classifier provided the highest average accuracy among the four machine-learning algorithms. This result suggests that sophisticated machine-learning methods can be powerful tools for evaluating the stage of hepatic fibrosis and show promise for clinical applications. Copyright © 2017 Elsevier Ltd. All rights reserved.
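As an illustration of the classifier comparison described above, the sketch below cross-validates a Random Forest on an 11-feature design matrix; the data are random placeholders standing in for the 513-subject RTE features, and the five-stage label coding is an assumption.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# X: (n_subjects, 11) RTE image features; y: biopsy-proven fibrosis stages.
# Random placeholder data stand in for the 513-subject cohort.
rng = np.random.default_rng(42)
X = rng.normal(size=(513, 11))
y = rng.integers(0, 5, size=513)   # assumed five-stage coding (F0-F4)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print("mean CV accuracy:", scores.mean())
```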
Research of spectacle frame measurement system based on structured light method
NASA Astrophysics Data System (ADS)
Guan, Dong; Chen, Xiaodong; Zhang, Xiuda; Yan, Huimin
2016-10-01
The automatic eyeglass lens edging system is now widely used to automatically cut and polish uncut lenses based on the spectacle frame shape data obtained from the spectacle frame measuring machine installed on the system. The conventional approach to acquiring the frame shape data works in a contact scanning mode, with a probe tracing the groove contour of the spectacle frame, which requires a sophisticated mechanical and numerical control system. In this paper, a novel non-contact optical measuring method based on structured light to measure the three dimensional (3D) data of the spectacle frame is proposed. First, we focus on the processing approach that addresses the deterioration of the structured light stripes caused by intense specular reflection on the frame surface. The techniques of bright-dark bi-level fringe projection, multiple exposures, and high dynamic range imaging are introduced to obtain a high-quality image of the structured light stripes. Then, a gamma transform and median filtering are applied to enhance image contrast. In order to remove background noise from the image and extract the region of interest (ROI), an auxiliary lighting system of special design is utilized to help effectively distinguish between the object and the background. In addition, a morphological method with specific structuring elements is adopted to remove noise between the stripes and the boundary of the spectacle frame. By further fringe center extraction and depth information acquisition through a look-up table method, the 3D shape of the spectacle frame is recovered.
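A hedged sketch of the contrast-enhancement and morphological cleanup steps using OpenCV; the gamma value, median kernel size, and structuring-element shape are tuning assumptions, not the paper's calibrated settings.

```python
import cv2
import numpy as np

img = cv2.imread("fringes.png", cv2.IMREAD_GRAYSCALE)   # assumed input

# Gamma transform to lift dark stripe regions, then a median filter to
# suppress speckle and specular-reflection noise.
gamma = 0.6
lut = ((np.arange(256) / 255.0) ** gamma * 255).astype(np.uint8)
enhanced = cv2.medianBlur(cv2.LUT(img, lut), 5)

# Morphological opening with a stripe-shaped structuring element to
# remove clutter between fringes while preserving the stripes themselves.
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (9, 3))
cleaned = cv2.morphologyEx(enhanced, cv2.MORPH_OPEN, kernel)
```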
Bayesian Multiscale Analysis of X-Ray Jet Features in High Redshift Quasars
NASA Astrophysics Data System (ADS)
McKeough, Kathryn; Siemiginowska, A.; Kashyap, V.; Stein, N.
2014-01-01
X-ray emission of powerful quasar jets may be a result of the inverse Compton (IC) process, in which Cosmic Microwave Background (CMB) photons gain energy through interactions with the jet's relativistic electrons. However, there is no definite evidence that the IC/CMB process is responsible for the observed X-ray emission of large scale jets. A step toward understanding the X-ray emission process is to study the radio and X-ray morphologies of the jet. We implement a sophisticated Bayesian image analysis program, Low-count Image Reconstruction and Analysis (LIRA) (Esch et al. 2004; Connors & van Dyk 2007), to analyze jet features in 11 Chandra images of high redshift quasars (z ~ 2 - 4.8). Out of the 36 regions where knots are visible in the radio jets, nine showed detectable X-ray emission. We measured the ratios of the X-ray and radio luminosities of the detected features and found that they are consistent with the CMB radiation relationship. We derived a range of the bulk Lorentz factor (Γ) for detected jet features under the CMB jet emission model. There is no discernible trend of Γ with redshift within the sample. The efficiency of the X-ray emission between the detected jet feature and the corresponding quasar also shows no correlation with redshift. This work is supported in part by the National Science Foundation REU and the Department of Defense ASSURE programs under NSF Grant No. 1262851 and by the Smithsonian Institution, and by NASA Contract NAS8-39073 to the Chandra X-ray Center (CXC). This research has made use of data obtained from the Chandra Data Archive and Chandra Source Catalog, and software provided by the CXC in the application packages CIAO, ChIPS, and Sherpa. We thank Teddy Cheung for providing the VLA radio images. Connors, A., & van Dyk, D. A. 2007, Statistical Challenges in Modern Astronomy IV, 371, 101; Esch, D. N., Connors, A., Karovska, M., & van Dyk, D. A. 2004, ApJ, 610, 1213
Ludwig, Susann K J; Zhu, Hongying; Phillips, Stephen; Shiledar, Ashutosh; Feng, Steve; Tseng, Derek; van Ginkel, Leendert A; Nielen, Michel W F; Ozcan, Aydogan
2014-11-01
Current contaminant and residue monitoring throughout the food chain is based on sampling, transport, administration, and analysis in specialized control laboratories. This is a highly inefficient and costly process since typically more than 99% of the samples are found to be compliant. On-site simplified prescreening may provide a scenario in which only samples that are suspect are transported and further processed. Such a prescreening can be performed using a small attachment on a cellphone. To this end, a cellphone-based imaging platform for a microsphere fluorescence immunoassay that detects the presence of anti-recombinant bovine somatotropin (rbST) antibodies in milk extracts was developed. RbST administration to cows increases their milk production, but is illegal in the EU and a public health concern in the USA. The cellphone monitors the presence of anti-rbST antibodies (rbST biomarker), which are endogenously produced upon administration of rbST and excreted in milk. The rbST biomarker present in milk extracts was captured by rbST covalently coupled to paramagnetic microspheres and labeled by quantum dot (QD)-coupled detection antibodies. The emitted fluorescence light from these captured QDs was then imaged using the cellphone camera. Additionally, a dark-field image was taken in which all microspheres present were visible. The fluorescence and dark-field microimages were analyzed using a custom-developed Android application running on the same cellphone. With this setup, the microsphere fluorescence immunoassay and cellphone-based detection were successfully applied to milk sample extracts from rbST-treated and untreated cows. An 80% true-positive rate and 95% true-negative rate were achieved using this setup. Next, the cellphone-based detection platform was benchmarked against a newly developed planar imaging array and found to perform on par with this much more sophisticated alternative. Using cellphone-based on-site analysis in future residue monitoring can limit, at an early stage, the number of samples sent for laboratory analysis, thereby making the entire monitoring process much more efficient and economical.
Long Term Value of Apollo Samples: How Fundamental Understanding of a Body Takes Decades of Study
NASA Astrophysics Data System (ADS)
Borg, L. E.; Gaffney, A. M.; Kruijer, T. K.; Sio, C. K.
2018-04-01
Fundamental understanding of a body evolves as more sophisticated technology is applied to a progressively better understood sample set. Sample diversity is required to understand many geologic processes.
Imaging Strategies for Tissue Engineering Applications
Nam, Seung Yun; Ricles, Laura M.; Suggs, Laura J.
2015-01-01
Tissue engineering has evolved with multifaceted research being conducted using advanced technologies, and it is progressing toward clinical applications. As tissue engineering technology significantly advances, it proceeds toward increasing sophistication, including nanoscale strategies for material construction and synergetic methods for combining with cells, growth factors, or other macromolecules. Therefore, to assess advanced tissue-engineered constructs, tissue engineers need versatile imaging methods capable of monitoring not only morphological but also functional and molecular information. However, there is no single imaging modality that is suitable for all tissue-engineered constructs. Each imaging method has its own range of applications and provides information based on the specific properties of the imaging technique. Therefore, according to the requirements of the tissue engineering studies, the most appropriate tool should be selected among a variety of imaging modalities. The goal of this review article is to describe available biomedical imaging methods to assess tissue engineering applications and to provide tissue engineers with criteria and insights for determining the best imaging strategies. Commonly used biomedical imaging modalities, including X-ray and computed tomography, positron emission tomography and single photon emission computed tomography, magnetic resonance imaging, ultrasound imaging, optical imaging, and emerging techniques and multimodal imaging, will be discussed, focusing on the latest trends of their applications in recent tissue engineering studies. PMID:25012069
Challenges With the Diagnosis and Treatment of Cerebral Radiation Necrosis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chao, Samuel T., E-mail: chaos@ccf.org; Rose Ella Burkhardt Brain Tumor and Neuro-oncology Center, Cleveland Clinic, Cleveland, Ohio; Ahluwalia, Manmeet S.
The incidence of radiation necrosis has increased secondary to greater use of combined modality therapy for brain tumors and stereotactic radiosurgery. Given that its characteristics on standard imaging are no different than those of tumor recurrence, it is difficult to diagnose without use of more sophisticated imaging and nuclear medicine scans, although the accuracy of such scans is controversial. Historically, treatment had been limited to steroids, hyperbaric oxygen, anticoagulants, and surgical resection. A recent prospective randomized study has confirmed the efficacy of bevacizumab in treating radiation necrosis. Novel therapies include using focused interstitial laser thermal therapy. This article will review the diagnosis and treatment of radiation necrosis.
Kranz, Christine
2014-01-21
In recent years, major developments in scanning electrochemical microscopy (SECM) have significantly broadened the application range of this electroanalytical technique from high-resolution electrochemical imaging via nanoscale probes to large scale mapping using arrays of microelectrodes. A major driving force in advancing the SECM methodology is based on developing more sophisticated probes beyond conventional micro-disc electrodes usually based on noble metals or carbon microwires. This critical review focuses on the design and development of advanced electrochemical probes particularly enabling combinations of SECM with other analytical measurement techniques to provide information beyond exclusively measuring electrochemical sample properties. Consequently, this critical review will focus on recent progress and new developments towards multifunctional imaging.
Edmond, Gary
2013-03-01
Using as a case study the forensic comparison of images for purposes of identification, this essay considers how the history, philosophy and sociology of science might help courts to improve their responses to scientific and technical forms of expert opinion evidence in ways that are more consistent with legal system goals and values. It places an emphasis on the need for more sophisticated models of science and expertise that are capable of helping judges to identify sufficiently reliable types of expert evidence and to reflexively incorporate the weakness of trial safeguards and personnel into their admissibility decision making. Copyright © 2013. Published by Elsevier Ltd.
Development of a Learning Progression for the Formation of the Solar System
ERIC Educational Resources Information Center
Plummer, Julia D.; Palma, Christopher; Flarend, Alice; Rubin, KeriAnn; Ong, Yann Shiou; Botzer, Brandon; McDonald, Scott; Furman, Tanya
2015-01-01
This study describes the process of defining a hypothetical learning progression (LP) for astronomy around the big idea of "Solar System formation." At the most sophisticated level, students can explain how the formation process led to the current Solar System by considering how the planets formed from the collapse of a rotating cloud of…
Designing a Web-Based Science Learning Environment for Model-Based Collaborative Inquiry
ERIC Educational Resources Information Center
Sun, Daner; Looi, Chee-Kit
2013-01-01
The paper traces a research process in the design and development of a science learning environment called WiMVT (web-based inquirer with modeling and visualization technology). The WiMVT system is designed to help secondary school students build a sophisticated understanding of scientific conceptions, and the science inquiry process, as well as…
The Neuroscience of Dance and the Dance of Neuroscience: Defining a Path of Inquiry
ERIC Educational Resources Information Center
Dale, J. Alexander; Hyatt, Janyce; Hollerman, Jeff
2007-01-01
The neural processes of a person comprehending or creating music have intrigued neuroscientists and prompted them to examine the processing of information and emotion with some of the most recent and sophisticated techniques in the brain sciences (see, for example, Zatorre and his colleagues' work). These techniques and the excitement of studying…
Recent developments in dimensional nanometrology using AFMs
NASA Astrophysics Data System (ADS)
Yacoot, Andrew; Koenders, Ludger
2011-12-01
Scanning probe microscopes, in particular the atomic force microscope (AFM), have developed into sophisticated instruments that, throughout the world, are no longer used just for imaging, but for quantitative measurements. A role of the national measurement institutes has been to provide traceable metrology for these instruments. This paper presents a brief overview as to how this has been achieved, highlights the future requirements for metrology to support developments in AFM technology and describes work in progress to meet this need.
[The image of Byzantine medicine in the satire "Timarion"].
Leven, K H
1990-01-01
Byzantine medicine is usually regarded as a static and non-creative descendant of classical Greek medicine, a point of view confirmed by the Byzantine medical texts. In this essay, the anonymous satire "Timarion" is analyzed with respect to its image of contemporary medical theory. Timarion, the fictive narrator, falls ill with a fever and is brought to Hades by two conductors of souls. They assert that he cannot survive, because he has secreted all his elementary bile. According to a decree by Asclepios and Hippocrates posted in Hades, any person who has lost one of his four elements may not live longer. In Hades, Timarion brings his case before the court of the judges of the dead. His lawyer, the sophist Theodore of Smyrna, persuades the judges that the bile excreted by Timarion was not elementary in the sense of humoral pathology. So Timarion is allowed to return to life. The author of the satire ridicules the fundamental axiom of the four humours. Asclepios, Hippocrates and Erasistratos, who are attached to the infernal court as experts, cannot defend their theory against the convincing arguments of a sophist. The "divine" Galen, who probably would have been able to, is absent in order to complete a book of his. The "Timarion" with its harsh critique of medical theory is very amusing and a rare example of "actuality" in Byzantine literature.
Optical imaging of airglow structure in equatorial plasma bubbles at radio scintillation scales
NASA Astrophysics Data System (ADS)
Holmes, J. M.; Pedersen, T.; Parris, R. T.; Stephens, B.; Caton, R. G.; Dao, E. V.; Kratochvil, S.; Morton, Y.; Xu, D.; Jiao, Y.; Taylor, S.; Carrano, C. S.
2015-12-01
Imagery of optical emissions from F-region plasma is one of the few means available to determine plasma density structure in two dimensions. However, the smallest spatial scales observable with this technique are typically limited not by magnification of the lens or resolution of the detector but rather by the optical throughput of the system, which drives the integration time, which in turn causes smearing of the features that are typically moving at speeds of 100 m/s or more. In this paper we present high spatio-temporal resolution imagery of equatorial plasma bubbles (EPBs) from an imaging system called the Large Aperture Ionospheric Structure Imager (LAISI), which was specifically designed to capture short-integration, high-resolution images of F-region recombination airglow at λ557.7 nm. The imager features 8-inch diameter entrance optics comprised of a unique F/0.87 lens, combined with a monolithic 8-inch diameter interference filter and a 2x2-inch CCD detector. The LAISI field of view is approximately 30 degrees. Filtered all-sky images at common airglow wavelengths are combined with magnetic field-aligned LAISI images, GNSS scintillation, and VHF scintillation data obtained at Ascension Island (7.98S, 14.41W geographic). A custom-built, multi-constellation GNSS data collection system was employed that sampled GPS L1, L2C, L5, GLONASS L1 and L2, Beidou B1, and Galileo E1 and E5a signals. Sophisticated processing software was able to maintain lock on all signals during strong scintillation, providing unprecedented spatial observability of L band scintillation. The smallest-resolvable scale sizes above the noise floor in the EPBs, as viewed by LAISI, are illustrated for integration times of 1, 5 and 10 seconds, with concurrent zonal irregularity drift speeds from both spaced-receiver VHF measurements and single-station GNSS measurements of S4 and σφ. These observable optical scale sizes are placed in the context of those that give rise to radio scintillation in VHF and L band signals.
Image labeling. The need for a better look.
Hunter, T
1994-10-01
The important message in this editorial is for radiologists to critically examine how well images are labeled in their own department. If it is not satisfactory, then institute corrective measures. These can range from sophisticated computer programs for printing flashcards to merely sending the chief technologist all those films one comes across with unreadable labels. The quality of the image labeling should also be a consideration when purchasing CT, MRI, ultrasound, computed radiography and digital angiography equipment. The fact that you consider this important should be communicated to equipment manufacturers in the hope that they will pay more attention to it and offer more flexibility for each department to design its own labels. In any event, I feel consistently bad film labeling results in sloppy radiology with possible patient harm and unpleasant legal consequences for the radiologist.
Setting up and running an advanced light microscopy and imaging facility.
Sánchez, Carlos; Muñoz, Ma Ángeles; Villalba, Maite; Labrador, Verónica; Díez-Guerra, F Javier
2011-07-01
During the last twenty years, interest in light microscopy and imaging techniques has grown in various fields, such as molecular and cellular biology, developmental biology, and neurobiology. In addition, the number of scientific articles and journals using these techniques is rapidly increasing. Nowadays, most research institutions require sophisticated microscopy systems to cover their investigation demands. In general, such instruments are too expensive and complex to be purchased and managed by a single laboratory or research group, so they have to be shared with other groups and supervised by specialized personnel. This is the reason why microscopy and imaging facilities are becoming so important at research institutions nowadays. In this unit, we have gathered and presented a number of issues and considerations from our own experience that we hope will be helpful when planning or setting up a new facility.
Real-time quantitative Schlieren imaging by fast Fourier demodulation of a checkered backdrop
NASA Astrophysics Data System (ADS)
Wildeman, Sander
2018-06-01
A quantitative synthetic Schlieren imaging (SSI) method based on fast Fourier demodulation is presented. Instead of a random dot pattern (as usually employed in SSI), a 2D periodic pattern (such as a checkerboard) is used as a backdrop to the refractive object of interest. The range of validity and accuracy of this "Fast Checkerboard Demodulation" (FCD) method are assessed using both synthetic data and experimental recordings of patterns optically distorted by small waves on a water surface. It is found that the FCD method is at least as accurate as sophisticated, multi-stage, digital image correlation (DIC) or optical flow (OF) techniques used with random dot patterns, and it is significantly faster. Efficient, fully vectorized, implementations of both the FCD and DIC/OF schemes developed for this study are made available as open source Matlab scripts.
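A 1-D analogue of the Fourier demodulation at the heart of FCD: isolate the pattern's carrier peak in the spectrum, take the phase of the resulting analytic signal, and convert the phase difference between distorted and reference patterns into displacement. The pattern period and filter bandwidth below are illustrative choices; the published 2-D method works with the two carrier peaks of the checkerboard pattern.

```python
import numpy as np

def demodulate_1d(signal, k0, bw=0.5):
    """Extract the local phase of a periodic pattern by isolating its
    carrier peak in Fourier space (1-D analogue of the 2-D FCD scheme)."""
    n = len(signal)
    freq = np.fft.fftfreq(n) * 2 * np.pi        # angular frequency, rad/pixel
    spec = np.fft.fft(signal)
    spec[np.abs(freq - k0) > bw * k0] = 0.0     # keep one sideband only
    analytic = np.fft.ifft(spec)
    return np.unwrap(np.angle(analytic))

# Synthetic test: a 16-pixel-period pattern displaced by a known field u(x).
x = np.arange(2048)
k0 = 2 * np.pi / 16
u = 0.8 * np.sin(2 * np.pi * x / 512)           # imposed displacement (pixels)
ref = np.cos(k0 * x)
dist = np.cos(k0 * (x - u))
dphi = demodulate_1d(dist, k0) - demodulate_1d(ref, k0)
u_rec = -dphi / k0                              # recovered displacement ~ u
```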
Marinelli, A; Dunning, M; Weathersby, S; Hemsing, E; Xiang, D; Andonian, G; O'Shea, F; Miao, Jianwei; Hast, C; Rosenzweig, J B
2013-03-01
With the advent of coherent x rays provided by the x-ray free-electron laser (FEL), strong interest has been kindled in sophisticated diffraction imaging techniques. In this Letter, we exploit such techniques for the diagnosis of the density distribution of the intense electron beams typically utilized in an x-ray FEL itself. We have implemented this method by analyzing the far-field coherent transition radiation emitted by an inverse-FEL microbunched electron beam. This analysis utilizes an oversampling phase retrieval method on the transition radiation angular spectrum to reconstruct the transverse spatial distribution of the electron beam. This application of diffraction imaging represents a significant advance in electron beam physics, having critical applications to the diagnosis of high-brightness beams, as well as the collective microbunching instabilities afflicting these systems.
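The authors' oversampling phase retrieval algorithm is not reproduced here; as a textbook stand-in, the sketch below implements a classic error-reduction loop (Gerchberg-Saxton/Fienup family), alternating between the measured Fourier magnitude and object-domain constraints (support, realness, non-negativity).

```python
import numpy as np

def error_reduction(measured_magnitude, support, n_iter=500, seed=0):
    """Error-reduction phase retrieval: alternate between enforcing the
    measured far-field magnitude and a real, non-negative object confined
    to a known support region."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(-np.pi, np.pi, measured_magnitude.shape)
    field = measured_magnitude * np.exp(1j * phase)   # random initial phase
    for _ in range(n_iter):
        obj = np.fft.ifft2(field).real
        obj[~support] = 0.0                 # object-domain: support constraint
        obj = np.clip(obj, 0.0, None)       # object-domain: non-negativity
        F = np.fft.fft2(obj)
        field = measured_magnitude * np.exp(1j * np.angle(F))  # Fourier constraint
    return obj
```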
NASA Astrophysics Data System (ADS)
Bresnahan, Patricia A.; Pukinskis, Madeleine; Wiggins, Michael
1999-03-01
Image quality assessment systems differ greatly with respect to the number and types of images they need to evaluate, and their overall architectures. Managers of these systems, however, all need to be able to tune and evaluate system performance, requirements often overlooked or under-designed during project planning. Performance tuning tools allow users to define acceptable quality standards for image features and attributes by adjusting parameter settings. Performance analysis tools allow users to evaluate and/or predict how well a system performs in a given parameter state. While image assessment algorithms are becoming quite sophisticated, duplicating or surpassing the human decision making process in their speed and reliability, they often require a greater investment in 'training' or fine tuning of parameters in order to achieve optimum performance. This process may involve the analysis of hundreds or thousands of images, generating a large database of files and statistics that can be difficult to sort through and interpret. Compounding the difficulty is the fact that personnel charged with tuning and maintaining the production system may not have the statistical or analytical background required for the task. Meanwhile, hardware innovations have greatly increased the volume of images that can be handled in a given time frame, magnifying the consequences of running a production site with an inadequately tuned system. In this paper, some general requirements for a performance evaluation and tuning data visualization system are discussed. A custom engineered solution to the tuning and evaluation problem is then presented, developed within the context of a high volume image quality assessment, data entry, OCR, and image archival system. A key factor influencing the design of the system was the context-dependent definition of image quality, as perceived by a human interpreter. This led to the development of a five-level, hierarchical approach to image quality evaluation. Lower-level pass-fail conditions and decision rules were coded into the system. Higher-level image quality states were defined by allowing the users to interactively adjust the system's sensitivity to various image attributes by manipulating graphical controls. Results were presented in easily interpreted bar graphs. These graphs were mouse-sensitive, allowing the user to more fully explore the subsets of data indicated by various color blocks. In order to simplify the performance evaluation and tuning process, users could choose to view the results of (1) the existing system parameter state, (2) any arbitrary parameter values they chose, or (3) a quasi-optimum parameter state, derived by applying a decision rule to a large set of possible parameter states. Giving managers easy-to-use tools for defining the more subjective aspects of quality resulted in a system that responded to contextual cues that are difficult to hard-code. It had the additional advantage of allowing the definition of quality to evolve over time, as users became more knowledgeable as to the strengths and limitations of an automated quality inspection system.
Hillman, Elizabeth Mc; Voleti, Venkatakaushik; Patel, Kripa; Li, Wenze; Yu, Hang; Perez-Campos, Citlali; Benezra, Sam E; Bruno, Randy M; Galwaduge, Pubudu T
2018-06-01
As optical reporters and modulators of cellular activity have become increasingly sophisticated, the amount that can be learned about the brain via high-speed cellular imaging has increased dramatically. However, despite fervent innovation, point-scanning microscopy is facing a fundamental limit in achievable 3D imaging speeds and fields of view. A range of alternative approaches are emerging, some of which are moving away from point-scanning to use axially-extended beams or sheets of light, for example swept confocally aligned planar excitation (SCAPE) microscopy. These methods are proving effective for high-speed volumetric imaging of the nervous system of small organisms such as Drosophila (fruit fly) and D. rerio (zebrafish), and are showing promise for imaging activity in the living mammalian brain using both single- and two-photon excitation. This article describes these approaches and presents a simple model that demonstrates key advantages of axially-extended illumination over point-scanning strategies for high-speed volumetric imaging, including longer integration times per voxel, improved photon efficiency and reduced photodamage. Copyright © 2018 Elsevier Ltd. All rights reserved.
Image-Based Predictive Modeling of Heart Mechanics.
Wang, V Y; Nielsen, P M F; Nash, M P
2015-01-01
Personalized biophysical modeling of the heart is a useful approach for noninvasively analyzing and predicting in vivo cardiac mechanics. Three main developments support this style of analysis: state-of-the-art cardiac imaging technologies, modern computational infrastructure, and advanced mathematical modeling techniques. In vivo measurements of cardiac structure and function can be integrated using sophisticated computational methods to investigate mechanisms of myocardial function and dysfunction, and can aid in clinical diagnosis and developing personalized treatment. In this article, we review the state-of-the-art in cardiac imaging modalities, model-based interpretation of 3D images of cardiac structure and function, and recent advances in modeling that allow personalized predictions of heart mechanics. We discuss how using such image-based modeling frameworks can increase the understanding of the fundamental biophysics behind cardiac mechanics, and assist with diagnosis, surgical guidance, and treatment planning. Addressing the challenges in this field will require a coordinated effort from both the clinical-imaging and modeling communities. We also discuss future directions that can be taken to bridge the gap between basic science and clinical translation.
Virtual microscopy in virtual tumor banking.
Isabelle, M; Teodorovic, I; Oosterhuis, J W; Riegman, P H J; Passioukov, A; Lejeune, S; Therasse, P; Dinjens, W N M; Lam, K H; Oomen, M H A; Spatz, A; Ratcliffe, C; Knox, K; Mager, R; Kerr, D; Pezzella, F; Van Damme, B; Van de Vijver, M; Van Boven, H; Morente, M M; Alonso, S; Kerjaschki, D; Pammer, J; López-Guerrero, J A; Llombart-Bosch, A; Carbone, A; Gloghini, A; Van Veen, E B
2006-01-01
Many systems have already been designed and successfully used for sharing histology images over large distances, without transfer of the original glass slides. Rapid evolution was seen when digital images could be transferred over the Internet. Nowadays, sophisticated virtual microscope systems can be acquired, with the capability to quickly scan large batches of glass slides at high magnification and compress and store the large images on disc, which subsequently can be consulted through the Internet. The images are stored on an image server, which can give simple, easy to transfer pictures to the user specifying a certain magnification on any position in the scan. This offers new opportunities in histology review, overcoming the necessity of the dynamic telepathology systems to have compatible software systems and microscopes and in addition, an adequate connection of sufficient bandwidth. Consulting the images now only requires an Internet connection and a computer with a high quality monitor. A system of complete pathology review supporting biorepositories is described, based on the implementation of this technique in the European Human Frozen Tumor Tissue Bank (TuBaFrost).
NASA Astrophysics Data System (ADS)
Eilon, Zachary; Fischer, Karen M.; Dalton, Colleen A.
2018-07-01
We present a methodology for 1-D imaging of upper-mantle structure using a Bayesian approach that incorporates a novel combination of seismic data types and an adaptive parametrization based on piecewise discontinuous splines. Our inversion algorithm lays the groundwork for improved seismic velocity models of the lithosphere and asthenosphere by harnessing the recent expansion of large seismic arrays and computational power alongside sophisticated data analysis. Careful processing of P- and S-wave arrivals isolates converted phases generated at velocity gradients between the mid-crust and 300 km depth. This data is allied with ambient noise and earthquake Rayleigh wave phase velocities to obtain detailed VS and VP velocity models. Synthetic tests demonstrate that converted phases are necessary to accurately constrain velocity gradients, and S-p phases are particularly important for resolving mantle structure, while surface waves are necessary for capturing absolute velocities. We apply the method to several stations in the northwest and north-central United States, finding that the imaged structure improves upon existing models by sharpening the vertical resolution of absolute velocity profiles, offering robust uncertainty estimates, and revealing mid-lithospheric velocity gradients indicative of thermochemical cratonic layering. This flexible method holds promise for increasingly detailed understanding of the upper mantle.
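As a schematic of the Bayesian machinery such an inversion rests on, the sketch below runs a toy Metropolis random-walk sampler over a layered Vs model; the forward model, misfit, and proposal width are placeholder assumptions, not the authors' algorithm.

```python
import numpy as np

# Toy Metropolis sampler over a 1-D layered Vs model: a schematic of the
# Bayesian machinery only; the real inversion uses converted phases,
# Rayleigh-wave dispersion, and an adaptive spline parametrization.
rng = np.random.default_rng(0)

def forward(vs):
    """Placeholder forward model: pretend the predicted data are the layer velocities."""
    return vs

def log_likelihood(vs, d_obs, sigma=0.1):
    r = forward(vs) - d_obs
    return -0.5 * np.sum((r / sigma) ** 2)

d_obs = np.array([3.6, 4.4, 4.2, 4.5])        # "observed" data (synthetic)
vs = np.full(4, 4.0)                          # starting model (km/s)
samples = []
for _ in range(20000):
    prop = vs + rng.normal(0, 0.05, vs.size)  # random-walk proposal
    if log_likelihood(prop, d_obs) - log_likelihood(vs, d_obs) > np.log(rng.random()):
        vs = prop                             # Metropolis accept
    samples.append(vs.copy())

post = np.array(samples[5000:])               # discard burn-in
print("posterior mean:", post.mean(axis=0).round(2))
print("posterior std: ", post.std(axis=0).round(2))
```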
Salinas, Carlota; Fernández, Roemi; Montes, Héctor; Armada, Manuel
2015-01-01
Image registration for sensor fusion is a valuable technique for acquiring 3D and colour information of a scene. Nevertheless, this process normally relies on feature-matching techniques, which is a drawback when combining sensors that are not able to deliver common features; the combination of ToF and RGB cameras is an instance of that problem. Typically, the fusion of these sensors is based on computing the extrinsic parameters of the coordinate transformation between the two cameras. This leads to a loss of colour information because of the low resolution of the ToF camera, and sophisticated algorithms are required to minimize this issue. This work proposes a method for registering sensors that lack common features while avoiding the loss of colour information. The depth information is used as a virtual feature for estimating a depth-dependent homography lookup table (Hlut). The homographies are computed within sets of ground control points of 104 images. Since the distances from the control points to the ToF camera are known, the working distance of each element in the Hlut is estimated. Finally, two series of experimental tests have been carried out in order to validate the capabilities of the proposed method. PMID:26404315
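A minimal sketch of how a depth-indexed homography lookup table might be built and queried, assuming a plain DLT estimator and nearest-depth selection; the calibration data structure and point shapes are assumptions, not the paper's implementation.

```python
import numpy as np

def fit_homography(src, dst):
    """Estimate a 3x3 homography from >= 4 point pairs via the DLT."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    h = vt[-1].reshape(3, 3)
    return h / h[2, 2]

def build_hlut(calib):
    """calib: iterable of (depth_m, tof_pts, rgb_pts) control-point sets."""
    table = [(d, fit_homography(tof, rgb)) for d, tof, rgb in calib]
    return sorted(table, key=lambda entry: entry[0])

def lookup(hlut, depth):
    """Pick the homography whose calibration depth is nearest the query depth."""
    return min(hlut, key=lambda entry: abs(entry[0] - depth))[1]

def warp_point(h, x, y):
    """Map a ToF pixel into RGB coordinates with the selected homography."""
    p = h @ np.array([x, y, 1.0])
    return p[:2] / p[2]
```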
Ocular biomarkers of Alzheimer's disease.
Heaton, George R; Davis, Benjamin M; Turner, Lisa A; Cordeiro, Maria F
2015-01-01
Alzheimer's disease (AD) is a devastating neurodegenerative disease characterised clinically by a progressive decline in executive functions, memory and cognition. Classic neuropathological hallmarks of AD include intracellular hyperphosphorylated tau protein, which forms neurofibrillary tangles (NFT), and extracellular deposits of amyloid β (Aβ) protein, the primary constituent of senile plaques (SP). The gradual process of pathogenic amyloid accumulation is thought to occur 10-20 years prior to symptomatic manifestation. Early detection of these deposits therefore offers a highly promising avenue for prodromal AD diagnosis. Currently, the most sophisticated method of 'probable AD' diagnosis is via neuroimaging or cerebrospinal fluid (CSF) biomarker analysis. Whilst these methods have reported a high degree of diagnostic specificity and accuracy, they fall significantly short in terms of practicality; they are often highly invasive, expensive or unsuitable for large-scale population screening. In recent years, ocular screening has received substantial attention from the scientific community due to its potential for non-invasive and inexpensive central nervous system (CNS) imaging. In this appraisal we build upon our previous reviews detailing ocular structural and functional changes in AD (Retinal manifestations of Alzheimer's disease; Alzheimer's disease and Retinal Neurodegeneration) and consider their use as biomarkers. In addition, we present an overview of current advances in the use of fluorescent reporters to detect AD pathology through non-invasive retinal imaging.
Taking advantage of acoustic inhomogeneities in photoacoustic measurements
NASA Astrophysics Data System (ADS)
Da Silva, Anabela; Handschin, Charles; Riedinger, Christophe; Piasecki, Julien; Mensah, Serge; Litman, Amélie; Akhouayri, Hassan
2016-03-01
Photoacoustic imaging offers promising perspectives in probing and imaging subsurface optically absorbing structures in biological tissues. The optical fluence absorbed is partly dissipated into heat, accompanied by microdilatations that generate acoustic pressure waves, the intensity of which is related to the amount of fluence absorbed. Hence the measured photoacoustic signal offers access, at least potentially, to a local monitoring of the absorption coefficient, in 3D if tomographic measurements are considered. However, due to both the diffusing and absorbing nature of the surrounding tissues, the major part of the fluence is deposited locally at the periphery of the tissue, generating an intense acoustic pressure wave that may hide relevant photoacoustic signals. Experimental strategies have been developed in order to measure exclusively the photoacoustic waves generated by the structure of interest (orthogonal illumination and detection). Temporal or more sophisticated filters (wavelets) can also be applied. However, the measurement of this primary acoustic wave carries a lot of information about the acoustically inhomogeneous nature of the medium. We propose a protocol that includes the processing of this primary intense acoustic wave, leading to the quantification of the surrounding medium sound speed and, if appropriate, to an acoustic parametric image of the heterogeneities. This information is then included as prior knowledge in the photoacoustic reconstruction scheme to improve localization and quantification.
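A minimal sketch of the first step such a protocol implies: estimating the bulk sound speed from the arrival time of the intense primary wave; the threshold detector, geometry, and synthetic pulse below are assumptions, not the authors' processing chain.

```python
import numpy as np

def first_arrival_time(signal, fs, threshold_frac=0.5):
    """Time (s) at which |signal| first exceeds a fraction of its peak."""
    env = np.abs(signal)
    idx = np.argmax(env > threshold_frac * env.max())  # first crossing index
    return idx / fs

def sound_speed(distance_m, t_arrival_s):
    """Bulk speed of sound from propagation distance and time of flight."""
    return distance_m / t_arrival_s

# Synthetic example: a pulse arriving after ~20 us over 30 mm of tissue.
fs = 50e6                                    # 50 MHz sampling (assumed)
t = np.arange(int(100e-6 * fs)) / fs
sig = np.exp(-((t - 20e-6) / 0.5e-6) ** 2)   # Gaussian "primary wave"
t0 = first_arrival_time(sig, fs)
print(f"estimated c = {sound_speed(0.03, t0):.0f} m/s")  # ~1500 m/s expected
```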
Towards constructing multi-bit binary adder based on Belousov-Zhabotinsky reaction
NASA Astrophysics Data System (ADS)
Zhang, Guo-Mao; Wong, Ieong; Chou, Meng-Ta; Zhao, Xin
2012-04-01
It has been proposed that spatially extended excitable media can perform a wide range of computational operations, from image processing, to path planning, to logical and arithmetic computations. Experimental realizations of chemical logic and arithmetic have so far been mainly concerned with single, simple logical functions. In this study, based on the Belousov-Zhabotinsky reaction, we performed simulations toward the realization of a more complex operation, the binary adder. Combining some of the existing functional structures that have been verified experimentally, we designed a planar geometrical binary adder chemical device. Through numerical simulations, we first demonstrate that the device can implement the function of a single-bit full binary adder. We then show that the binary adder units can be further extended in the plane and coupled together to realize a two-bit, or even multi-bit, binary adder. The realization of chemical adders can guide the construction of other sophisticated arithmetic functions, ultimately leading to the implementation of chemical computers and other intelligent systems.
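The logic being realized chemically can be sketched in ordinary code; the ripple-carry composition below mirrors a textbook full-adder chain, not the paper's reactor geometry.

```python
def full_adder(a, b, cin):
    """Single-bit full adder from AND/OR/XOR, the function realized chemically."""
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def ripple_adder(x, y, n_bits):
    """Chain n full-adder units, the carry coupling unit i to unit i + 1,
    just as the paper couples adder units in the plane."""
    carry, out = 0, 0
    for i in range(n_bits):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        out |= s << i
    return out | (carry << n_bits)

assert ripple_adder(0b101, 0b011, 3) == 0b1000  # 5 + 3 = 8
```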
Detection of protease activity in cells and animals.
Verdoes, Martijn; Verhelst, Steven H L
2016-01-01
Proteases are involved in a wide variety of biologically and medically important events. They are entangled in a complex network of processes that regulate their activity, which makes their study intriguing, but challenging. For comprehensive understanding of protease biology and effective drug discovery, it is therefore essential to study proteases in models that are close to their complex native environments such as live cells or whole organisms. Protease activity can be detected by reporter substrates and activity-based probes, but not all of these reagents are suitable for intracellular or in vivo use. This review focuses on the detection of proteases in cells and in vivo. We summarize the use of probes and substrates as molecular tools, discuss strategies to deliver these tools inside cells, and describe sophisticated read-out techniques such as mass spectrometry and various imaging applications. This article is part of a Special Issue entitled: Physiological Enzymology and Protein Functions. Copyright © 2015 Elsevier B.V. All rights reserved.
Amateur Radio Flash Mob: Citizen Radio Science Response to a Solar Eclipse
NASA Astrophysics Data System (ADS)
Hirsch, M.; Frissell, N. A.
2017-12-01
Over a decade's worth of scientifically useful data from radio amateurs worldwide is publicly available, with momentum building in the scientific exploitation of these data. For the 2017 solar eclipse, a "flash mob" of radio amateurs was organized in the form of a contest. Licensed radio amateurs transmitted on specific frequency bands, with awards given for a new generation of data collection that retains raw ADC samples, allowing sophisticated post-processing to extract quantities such as the Doppler shift due to ionospheric lifting. We discuss translating science priorities into gamified scoring procedures that incentivize the public to submit the highest quality and quantity of archival raw radio science data. The choice of frequency bands to encourage in the face of regulatory limitations is discussed. An update is given on initial field experiments using wideband experimental modulation, specially licensed yet receivable by radio amateurs, for high spatiotemporal resolution imaging of the ionosphere. The cost of this equipment is less than $500 per node, comparing favorably to legacy oblique ionospheric sounding networks.
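A toy sketch of the kind of post-processing that raw ADC data enables: locating the carrier peak in a long FFT to read off a small Doppler shift; the sample rate, carrier frequency, and shift are invented numbers.

```python
import numpy as np

fs = 48_000.0                      # ADC sample rate (assumed)
f_carrier = 10_000.0               # nominal received carrier (assumed)
f_true_shift = 1.7                 # simulated ionospheric Doppler (Hz)

t = np.arange(0, 10.0, 1 / fs)     # 10 s record gives ~0.1 Hz resolution
x = np.cos(2 * np.pi * (f_carrier + f_true_shift) * t)

# FFT peak near the nominal carrier gives the apparent frequency;
# the offset from nominal is the Doppler shift.
spec = np.abs(np.fft.rfft(x * np.hanning(x.size)))
freqs = np.fft.rfftfreq(x.size, 1 / fs)
band = (freqs > f_carrier - 50) & (freqs < f_carrier + 50)
f_meas = freqs[band][np.argmax(spec[band])]
print(f"measured Doppler: {f_meas - f_carrier:+.2f} Hz")
```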
McKenna, J W; Williams, K N
1993-01-01
Focus group research conducted by the Centers for Disease Control and Prevention's Office on Smoking and Health suggested that the desire of teenagers to gain control over their lives would make them responsive to a counteradvertising strategy aimed at exposing the predatory marketing techniques of the tobacco industry. On the basis of this strategy, the office developed draft print advertisements and a rough TV commercial featuring such theme lines as "You get an image. They get an addict." In those ads, "they" referred to cigarette companies. Subsequent testing of the campaign materials, however, indicated that the subtle, sophisticated execution of this concept of manipulation by the industry did not communicate clearly and effectively to an audience of young teens. In fact, 38 percent of those who viewed the rough TV spot believed that the main message promoted smoking. These negative test findings underscore the critical need for ongoing audience research throughout the creative process to ensure that campaign planners stay "in tune" with their consumers. PMID:8210278
Evolution of Biological Image Stabilization.
Hardcastle, Ben J; Krapp, Holger G
2016-10-24
The use of vision to coordinate behavior requires an efficient control design that stabilizes the world on the retina or directs the gaze towards salient features in the surroundings. With a level gaze, visual processing tasks are simplified and behaviorally relevant features from the visual environment can be extracted. No matter how simple or sophisticated the eye design, mechanisms have evolved across phyla to stabilize gaze. In this review, we describe functional similarities in eyes and gaze stabilization reflexes, emphasizing their fundamental role in transforming sensory information into motor commands that support postural and locomotor control. We then focus on gaze stabilization design in flying insects and detail some of the underlying principles. Systems analysis reveals that gaze stabilization often involves several sensory modalities, including vision itself, and makes use of feedback as well as feedforward signals. Independent of phylogenetic distance, the physical interaction between an animal and its natural environment - its available senses and how it moves - appears to shape the adaptation of all aspects of gaze stabilization. Copyright © 2016 Elsevier Ltd. All rights reserved.
SAW chirp filter technology for satellite on-board processing applications
NASA Astrophysics Data System (ADS)
Shaw, M. D.; Miller, N. D. J.; Malarky, A. P.; Warne, D. H.
1989-11-01
Market growth in the area of thin route satellite communications services has led to consideration of nontraditional system architectures requiring sophisticated on-board processing functions. Surface acoustic wave (SAW) technology exists today which can provide implementation of key on-board processing subsystems by using multicarrier demodulators. This paper presents a review of this signal processing technology, along with a brief review of dispersive SAW device technology as applied to the implementation of multicarrier demodulators for on-board signal processing.
NASA Astrophysics Data System (ADS)
Brahmi, Djamel; Serruys, Camille; Cassoux, Nathalie; Giron, Alain; Triller, Raoul; Lehoang, Phuc; Fertil, Bernard
2000-06-01
Medical images provide experienced physicians with meaningful visual stimuli, but their features are frequently hard to decipher. The development of a computational model to mimic physicians' expertise is a demanding task, especially if significant and sophisticated preprocessing of images is required. Learning from well-expertised images may be a more convenient approach, inasmuch as a large and representative set of samples is available. A four-stage approach has been designed, which combines image sub-sampling with unsupervised image coding, supervised classification and image reconstruction in order to extract medical expertise directly from raw images. The system has been applied (1) to the detection of features related to the diagnosis of black tumors of skin (a classification issue) and (2) to the detection of virus-infected and healthy areas in retina angiography, in order to locate precisely the border between them and characterize the evolution of infection. For reasonably balanced training sets, we obtained about 90% correct classification of features (black tumors). Boundaries generated by our system match the reproducibility of outlines hand-drawn by experts (segmentation of virus-infected areas).
Robust Dehaze Algorithm for Degraded Image of CMOS Image Sensors.
Qu, Chen; Bi, Du-Yan; Sui, Ping; Chao, Ai-Nong; Wang, Yun-Fei
2017-09-22
The CMOS (Complementary Metal-Oxide-Semiconductor) sensor is a solid-state image sensor device widely used in object tracking, object recognition, intelligent navigation, and so on. However, images captured by outdoor CMOS sensor devices are usually affected by suspended atmospheric particles (such as haze), causing reduced image contrast, color distortion, and related problems. In view of this, we propose a novel dehazing approach based on a locally consistent Markov random field (MRF) framework. The neighboring clique in the traditional MRF is extended to a non-neighboring clique, defined on locally consistent blocks based on two clues: both the atmospheric light and the transmission map satisfy the character of local consistency. In this framework, our model can strengthen the restriction over the whole image while incorporating more sophisticated statistical priors, resulting in more expressive modeling power, thus solving inadequate detail recovery effectively and alleviating color distortion. Moreover, the locally consistent MRF framework recovers detail while maintaining better dehazing results, which effectively improves the quality of images captured by the CMOS image sensor. Experimental results verified that the proposed method has the combined advantages of detail recovery and color preservation.
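Not the paper's locally consistent MRF, but a minimal dark-channel-style baseline showing the two quantities any such model constrains, atmospheric light A and transmission t, with the haze imaging equation I = J·t + A·(1 − t) inverted per pixel; the window size and omega are conventional assumptions.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dehaze(img, win=15, omega=0.95, t_min=0.1):
    """Baseline dehazing: estimate A and t, then invert I = J*t + A*(1-t).
    img: float RGB in [0, 1], shape (H, W, 3)."""
    dark = minimum_filter(img.min(axis=2), size=win)     # dark channel
    flat = dark.ravel()
    idx = np.argsort(flat)[-max(1, flat.size // 1000):]  # brightest 0.1%
    A = img.reshape(-1, 3)[idx].mean(axis=0)             # atmospheric light
    t = 1 - omega * minimum_filter((img / A).min(axis=2), size=win)
    t = np.clip(t, t_min, 1.0)[..., None]                # avoid blow-up at small t
    return np.clip((img - A) / t + A, 0.0, 1.0)

# Usage (assumed input): clear = dehaze(hazy_img)
```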
Medical Students' Understanding of Directed Questioning by Their Clinical Preceptors.
Lo, Lawrence; Regehr, Glenn
2017-01-01
Phenomenon: Throughout clerkship, preceptors ask medical students questions for both assessment and teaching purposes. However, the cognitive and strategic aspects of students' approaches to managing this situation have not been explored. Without an understanding of how students approach the question and answer activity, medical educators are unable to appreciate how effectively this activity fulfills their purposes of assessment or determine the activity's associated educational effects. A convenience sample of nine 4th-year medical students participated in semistructured one-on-one interviews exploring their approaches to managing situations in which they have been challenged with questions from preceptors to which they do not know the answer. Through an iterative and recursive analytic reading of the interview transcripts, data were coded and organized to identify themes relevant to the students' considerations in answering such questions. Students articulated deliberate strategies for managing the directed questioning activity, which at times focused on the optimization of their learning but always included considerations of image management. Managing image involved projecting not only being knowledgeable but also being teachable. The students indicated that their considerations in selecting an appropriate strategy in a given situation involved their perceptions of their preceptors' intentions and preferences as well as several contextual factors. Insights: The medical students we interviewed were quite sophisticated in their understanding of the social nuances of the directed questioning process and described a variety of contextually invoked strategies to manage the situation and maintain a positive image.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martini, B; Silver, E; Pickles, W
2004-03-25
Growing interest and exploration dollars within the geothermal sector have paved the way for increasingly sophisticated suites of geophysical and geochemical tools and methodologies. Efforts to characterize and assess known geothermal fields and find new, previously unknown resources have been aided by the advent of higher spatial resolution airborne geophysics (e.g. aeromagnetics), the development of new seismic processing techniques, and the genesis of modern multi-dimensional fluid flow and structural modeling algorithms, to name a few. One of the newest techniques on the scene is hyperspectral imaging. Really an optical analytical geochemical tool, hyperspectral imagers (or imaging spectrometers, as they are also called) are generally flown at medium to high altitudes aboard mid-sized aircraft, much in the same way more familiar geophysics are flown. The hyperspectral data record a continuous spatial record of the earth's surface, as well as a continuous spectral record of reflected sunlight or emitted thermal radiation. This high fidelity, uninterrupted spatial and spectral record allows for accurate material distribution mapping and quantitative identification at the pixel to sub-pixel level. In volcanic/geothermal regions, this capability translates to synoptic, high spatial resolution, large-area mineral maps generated at time scales conducive both to the faster pace of exploration and drilling managers and to the slower pace of geologists and other researchers trying to understand the geothermal system over the long run.
Dried fruits quality assessment by hyperspectral imaging
NASA Astrophysics Data System (ADS)
Serranti, Silvia; Gargiulo, Aldo; Bonifazi, Giuseppe
2012-05-01
Dried fruit products command different market values according to their quality. Such quality is usually quantified in terms of freshness of the products, as well as the presence of contaminants (pieces of shell, husk, and small stones), defects, mould and decay. The combination of these parameters, in terms of relative presence, represents a fundamental set of attributes conditioning the humanly perceptible characteristics of dried fruits (visual appearance, organoleptic properties, etc.) and their overall quality as marketable products. Sorting-selection strategies exist, but they sometimes fail when a higher degree of detection is required, especially when discriminating between dried fruits of relatively small dimensions or aiming at "early detection" of the pathogen agents responsible for future mould and decay development. Surface characteristics of dried fruits can be investigated by hyperspectral imaging (HSI). In this paper, specific "ad hoc" applications proposing quality detection logics based on an HSI approach are described, compared and critically evaluated. Reflectance spectra of selected dried fruits (hazelnuts) of different quality, characterized by the presence of different contaminants and defects, were acquired by a laboratory device equipped with two HSI systems working in two different spectral ranges: visible-near infrared (400-1000 nm) and near infrared (1000-1700 nm). The spectra were processed and the results evaluated adopting both a simple and fast wavelength band ratio approach and a more sophisticated classification logic based on principal component analysis (PCA).
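A compact sketch of the two analysis routes named above, a band-ratio index and PCA on pixel spectra; the cube shape and band positions are assumed, not taken from the paper.

```python
import numpy as np

# cube: hyperspectral image, shape (rows, cols, bands); wl: band centers (nm).
# Synthetic stand-in data; the band choices below are illustrative.
rng = np.random.default_rng(1)
cube = rng.random((64, 64, 121))
wl = np.linspace(1000, 1700, 121)

def band(wl_nm):
    """Index of the band closest to a requested wavelength."""
    return int(np.argmin(np.abs(wl - wl_nm)))

# Route 1: simple band ratio image (e.g. a water-absorption band over a
# reference band): thresholding it flags anomalous pixels.
ratio = cube[:, :, band(1450)] / (cube[:, :, band(1100)] + 1e-9)
print("ratio image range:", ratio.min().round(2), "-", ratio.max().round(2))

# Route 2: PCA on pixel spectra; the leading components capture most
# spectral variance and can feed a downstream classifier.
X = cube.reshape(-1, cube.shape[2])
Xc = X - X.mean(axis=0)
_, s, vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ vt[:3].T                 # first 3 principal-component scores
explained = (s[:3] ** 2) / (s ** 2).sum()
print("variance explained by PC1-3:", explained.round(3))
```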
Future Perspective of Single-Molecule FRET Biosensors and Intravital FRET Microscopy.
Hirata, Eishu; Kiyokawa, Etsuko
2016-09-20
Förster (or fluorescence) resonance energy transfer (FRET) is a nonradiative energy transfer process between two fluorophores located in close proximity to each other. To date, a variety of biosensors based on the principle of FRET have been developed to monitor the activity of kinases, proteases, GTPases or lipid concentration in living cells. In addition, generation of biosensors that can monitor physical stresses such as mechanical power, heat, or electric/magnetic fields is also expected based on recent discoveries on the effects of these stressors on cell behavior. These biosensors can now be stably expressed in cells and mice by transposon technologies. In addition, two-photon excitation microscopy can be used to detect the activities or concentrations of bioactive molecules in vivo. In the future, more sophisticated techniques for image acquisition and quantitative analysis will be needed to obtain more precise FRET signals in spatiotemporal dimensions. Improvement of tissue/organ position fixation methods for mouse imaging is the first step toward effective image acquisition. Progress in the development of fluorescent proteins that can be excited with longer wavelength should be applied to FRET biosensors to obtain deeper structures. The development of computational programs that can separately quantify signals from single cells embedded in complicated three-dimensional environments is also expected. Along with the progress in these methodologies, two-photon excitation intravital FRET microscopy will be a powerful and valuable tool for the comprehensive understanding of biomedical phenomena. Copyright © 2016 Biophysical Society. Published by Elsevier Inc. All rights reserved.
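A minimal sketch of the ratiometric readout such FRET biosensors rely on, the pixelwise acceptor/donor intensity ratio after background subtraction; the camera offsets and noise threshold are assumed values.

```python
import numpy as np

def fret_ratio(donor, acceptor, bg_donor=100.0, bg_acceptor=100.0, eps=1e-6):
    """Pixelwise acceptor/donor ratio after background subtraction.
    Camera offsets (bg_*) are assumed values, not from the article."""
    d = np.clip(donor.astype(float) - bg_donor, 0, None)
    a = np.clip(acceptor.astype(float) - bg_acceptor, 0, None)
    mask = d > 5.0 * np.sqrt(bg_donor)          # keep pixels well above noise
    return np.where(mask, a / (d + eps), np.nan)

# Usage: the mean ratio inside a cell ROI reports biosensor activation;
# ratio time courses track kinase/GTPase activity frame by frame.
```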
Guerrero, Adán; Carneiro, Jorge; Pimentel, Arturo; Wood, Christopher D.; Corkidi, Gabriel; Darszon, Alberto
2011-01-01
The spermatozoon must find its female gamete partner and deliver its genetic material to generate a new individual. This requires that the spermatozoon be motile and endowed with sophisticated swimming strategies to locate the oocyte. A common strategy is chemotaxis, in which spermatozoa detect and follow a gradient of chemical signals released by the egg and its associated structures. Decoding the female gamete’s positional information is a process that spermatozoa undergo in a three-dimensional (3D) space; however, due to their speed and small size, this process has been studied almost exclusively in spermatozoa restricted to swimming in two dimensions (2D). This review examines the relationship between the mechanics of sperm propulsion and the physiological function of these cells in 3D. It also considers whether it is possible to derive all the 3D sperm swimming characteristics by extrapolating from 2D measurements. It is concluded that full insight into flagellar beat dynamics, swimming paths and chemotaxis under physiological conditions will eventually require quantitative imaging of flagellar form, ion flux changes, cell trajectories and modelling of free-swimming spermatozoa in 3D. PMID:21642645
Learning process in fashion design students: link with industry and social media
NASA Astrophysics Data System (ADS)
Marques, A. D.; Moschatou, A.
2017-10-01
Portugal is today an important player in the European fashion industry. The Portuguese footwear industry, "low-tech", mature, traditional and dominated by SMEs, is also a success case in the Portuguese economy. With own brands, own collections and own products, the quality, innovation and international image of Portuguese clothes, accessories and shoes are increasing year by year in the most sophisticated markets worldwide. The new information economy and social media present a new set of opportunities and threats to established companies, new challenges and new markets, demanding that all companies rethink their strategy and prepare new business plans. Portuguese companies in the fashion industry are starting to perceive that a brand's transition to social media means a transformation of the customer relationship, wherein social media and the community members are allies of the brand rather than an "audience". Universities are likewise preparing new professionals for the fashion industry, and the learning process has to be managed according to these new challenges. The University of Minho offers the Bachelor in Fashion Design and Marketing, an excellent course for preparing new skills for these fashion companies: the textile, clothing and footwear industries.
NASA Astrophysics Data System (ADS)
Lohweg, Volker; Schaede, Johannes; Türke, Thomas
2006-02-01
The authenticity checking and inspection of banknotes is a highly labour-intensive process in which, traditionally, every note on every sheet is inspected manually. However, with the advent of more and more sophisticated security features, both visible and invisible, and the requirement of cost reduction in the printing process, it is clear that automation is required. As more print techniques and new security features become established, total quality security, authenticity and banknote printing must be assured; this necessitates a broader sensorial concept in general. We propose a concept covering both authenticity checking and inspection methods for pattern recognition and classification of securities and banknotes, based on sensor fusion and fuzzy interpretation of data measures. The approach combines different methods of authenticity analysis and print flaw detection, and can be used in vending or sorting machines as well as in printing machines. Usually only the existence or appearance of colours and their textures are checked by cameras. Our method combines visible camera images with IR-spectral sensitive sensors and with acoustical and other measurements, such as the temperature and pressure of printing machines.
A NEW LAND-SURFACE MODEL IN MM5
There has recently been a general realization that more sophisticated modeling of land-surface processes can be important for mesoscale meteorology models. Land-surface models (LSMs) have long been important components in global-scale climate models because of their more compl...
Instructional Systems Development
ERIC Educational Resources Information Center
Watson, Russell
The United States Army, confronted with sophisticated defense machinery and entry level soldiers with low educational backgrounds, selected a systems approach to training that was developed in 1975 by Florida State University. Instructional Systems Development (ISD), a five-phase process encompassing the entire educational environment, is…
Handbook of automated data collection methods for the National Transit Database
DOT National Transportation Integrated Search
2003-10-01
In recent years, with the increasing sophistication and capabilities of information processing technologies, there has been a renewed interest on the part of transit systems to tap the rich information potential of the National Transit Database (NTD)...
ERIC Educational Resources Information Center
Azevedo, Roger; Guthrie, John T.; Seibert, Diane
2004-01-01
This study examines the role of self-regulated learning (SRL) in facilitating students' shifts to more sophisticated mental models of the circulatory system as indicated by both performance and process data. We began with Winne and colleagues' information processing model of SRL (Winne, 2001; Winne & Hadwin, 1998) and used it to examine how…
ERIC Educational Resources Information Center
Tierney, Robert J.
This 2-year longitudinal study explored whether computers promote more sophisticated thinking, and examined how students' thinking changes as they become experienced computer users. The first-year study examined the thinking process of four ninth-grade Apple Classrooms of Tomorrow (ACOT) students. The second-year study continued following these…
2009-04-18
intake and sophisticated signal processing of electroencephalographic (EEG), electrooculographic (EOG), electrocardiographic (ECG), and electromyographic (EMG) physiological signals. It also has markedly... ambulatory physiological acquisition and quantitative signal processing; (2) Brain Amp MR Plus 32 and BrainVision Recorder Professional Software Package for
Multi-Sensor Data Fusion Project
2000-02-28
seismic network by detecting T phases generated by underground events (generally earthquakes) and associating these phases to seismic events. The... between underwater explosions (H), underground sources, mostly earthquake-generated (T), and noise detections (N). The phases classified as H are the only... processing for infrasound sensors is most similar to seismic array processing, with the exception that the detections are based on a more sophisticated
Numerical Order Processing in Children: From Reversing the Distance-Effect to Predicting Arithmetic
ERIC Educational Resources Information Center
Lyons, Ian M.; Ansari, Daniel
2015-01-01
Recent work has demonstrated that how we process the relative order--ordinality--of numbers may be key to understanding how we represent numbers symbolically, and has proven to be a robust predictor of more sophisticated math skills in both children and adults. However, it remains unclear whether numerical ordinality is primarily a by-product of…
CTC Sentinel. Volume 9, Issue 4. April 2016
2016-09-21
stakes are high. Since the Islamic State, unlike al-Qa`ida and its various regional affiliates, places such great emphasis on its image as state... not going to bomb all the banks in Mosul or starve the economy of millions of people. There are material constraints to what we can do while ISIS... close to the investigation told CNN that the laptop bomb was sophisticated and went undetected by an airport X-ray machine. In a statement released on
Digital holographic microscopy for toxicity testing and cell culture quality control
NASA Astrophysics Data System (ADS)
Kemper, Björn
2018-02-01
For the example of digital holographic microscopy (DHM), it is illustrated how label-free biophysical parameter sets can be extracted from quantitative phase images of adherent and suspended cells, and how the retrieved data can be applied for in-vitro toxicity testing and cell culture quality assessment. This includes results from the quantification of the reactions of cells to toxic substances as well as data from sophisticated monitoring of cell alterations that are related to changes of cell culture conditions.
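One standard label-free parameter derivable from a quantitative phase image is cellular dry mass, via m = (λ / 2πα) ∫ φ dA with the specific refraction increment α ≈ 0.19 ml/g; the sketch below uses a synthetic phase map and an assumed pixel size.

```python
import numpy as np

def dry_mass_pg(phase, px_um, wavelength_um=0.532, alpha_ml_per_g=0.19):
    """Cell dry mass (pg) from a quantitative phase image (radians).
    Implements m = (lambda / (2*pi*alpha)) * sum(phase) * pixel_area."""
    # optical path difference per pixel, in micrometers
    opd = phase * wavelength_um / (2 * np.pi)
    # alpha = 0.19 ml/g = 0.19 um^3/pg, so mass = OPD volume / alpha
    return opd.sum() * px_um ** 2 / alpha_ml_per_g

# Synthetic "cell": a smooth phase bump on a flat background (assumed data).
y, x = np.mgrid[-64:64, -64:64]
phase = 2.0 * np.exp(-(x ** 2 + y ** 2) / (2 * 20.0 ** 2))   # radians
print(f"dry mass ~ {dry_mass_pg(phase, px_um=0.2):.1f} pg")  # ~100 pg scale
```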
Light in diagnosis, therapy and surgery
Yun, Seok Hyun; Kwok, Sheldon J. J.
2016-01-01
Light and optical techniques have made profound impacts on modern medicine, with numerous lasers and optical devices being currently used in clinical practice to assess health and treat disease. Recent advances in biomedical optics have enabled increasingly sophisticated technologies — in particular those that integrate photonics with nanotechnology, biomaterials and genetic engineering. In this Review, we revisit the fundamentals of light–matter interactions, describe the applications of light in imaging, diagnosis, therapy and surgery, overview their clinical use, and discuss the promise of emerging light-based technologies. PMID:28649464
Autonomous chemical and biological miniature wireless-sensor
NASA Astrophysics Data System (ADS)
Goldberg, Bar-Giora
2005-05-01
The presentation discusses a new concept and a paradigm shift in biological, chemical and explosive sensor system design and deployment: from large, heavy, centralized and expensive systems to distributed wireless sensor networks utilizing miniature platforms (nodes) that are lightweight, low cost and wirelessly connected. These new systems are possible due to the emergence and convergence of new innovative radio, imaging, networking and sensor technologies. Miniature integrated radio-sensor networks are a technology whose time has come. These network systems are based on large numbers of distributed low-cost, short-range wireless platforms that sense and process their environment and communicate data through a network to a command center. The recent emergence of chemical and explosive sensor technology based on silicon nanostructures, coupled with the fast evolution of low-cost CMOS imagers, low-power DSP engines and integrated radio chips, has created an opportunity to realize the vision of autonomous wireless networks. These threat detection networks will perform sophisticated analysis at the sensor node and convey alarm information up the command chain. Sensor networks of this type are expected to revolutionize the ability to detect and locate biological, chemical, or explosive threats. The ability to distribute large numbers of low-cost sensors over large areas puts these devices close to the targeted threats, improving detection efficiency and enabling rapid counter-responses. These sensor networks will be used for homeland security, shipping container monitoring, and other applications such as laboratory medical analysis, drug discovery, automotive, environmental and/or in-vivo monitoring. Avaak's system concept is to image a chromatic biological, chemical and/or explosive sensor utilizing a digital imager, analyze the images, and distribute alarm or image data wirelessly through the network. All the imaging, processing and communications take place within the miniature, low-cost distributed sensor platforms. This concept, however, presents a significant challenge due to the combination and convergence of the required new technologies mentioned above. Passive biological and chemical sensors with very high sensitivity, requiring no assaying, are in development using a technique to optically and chemically encode silicon wafers with tailored nanostructures. The silicon wafer is patterned with nanostructures designed to change colors and patterns when exposed to the target analytes (TICs, TIMs, VOCs). A small video camera detects the color and pattern changes on the sensor. To determine if an alarm condition is present, an onboard DSP processor, using specialized image processing algorithms and statistical analysis, determines whether color gradient changes have occurred on the sensor array. These sensors can detect several agents simultaneously. This system is currently under development by Avaak, with funding from DARPA through an SBIR grant.
NASA Astrophysics Data System (ADS)
Kergosien, Yannick L.; Racoceanu, Daniel
2017-11-01
This article presents our vision of the next generation of challenges in computational/digital pathology. The key role of the domain ontology, developed in a sustainable manner (i.e. using reference checklists and protocols as living semantic repositories), opens the way to effective and sustainable traceability and relevance feedback concerning the use of existing machine learning algorithms proven to be very performant in the latest digital pathology challenges (i.e. convolutional neural networks). Being able to work in an accessible web-service environment, with strictly controlled issues regarding intellectual property (image and data processing/analysis algorithms) and medical data/image confidentiality, is essential for the future. Among the web services involved in the proposed approach, living yellow pages in the area of computational pathology seem very important in order to reach operational awareness, validation, and feasibility. This represents a very promising way to reach the next generation of tools, able to bring more guidance to computer scientists and confidence to pathologists, towards effective and efficient daily use. Moreover, consistent feedback and insights are likely to emerge in the near future from these sophisticated machine learning tools back to the pathologists, strengthening the interaction between the different actors of a sustainable biomedical ecosystem (patients, clinicians, biologists, engineers, scientists, etc.). Beyond going digital/computational, with virtual slide technology demanding new workflows, pathology must prepare for another coming revolution: semantic web technologies now enable the knowledge of experts to be stored in databases, shared through the Internet, and accessed by machines. Traceability, disambiguation of reports, quality monitoring, and interoperability between health centers are some of the associated benefits that pathologists have been seeking. However, major changes are also to be expected in the relation of human diagnosis to machine-based procedures. Improving on a former imaging platform, which used a local knowledge base and a reasoning engine to combine image processing modules into higher-level tasks, we propose a framework in which different actors of the histopathology imaging world can cooperate using web services, exchanging knowledge as well as imaging services, and in which the results of such collaborations on diagnostic tasks can be evaluated in international challenges such as those recently organized for mitosis detection, nuclear atypia, or tissue architecture in the context of cancer grading. This framework is likely to offer effective context guidance and traceability to deep learning approaches, with a promising perspective given by the multi-task learning (MTL) paradigm, distinguished by its applicability to several different learning algorithms, its non-reliance on specialized architectures, and the promising results demonstrated, in particular, on the problem of weak supervision, an issue found when direct links from pathology terms in reports to corresponding regions within images are missing.
Maier, Joscha; Sawall, Stefan; Kachelrieß, Marc
2014-05-01
Phase-correlated microcomputed tomography (micro-CT) imaging plays an important role in the assessment of mouse models of cardiovascular diseases and the determination of functional parameters such as the left ventricular volume. As the current gold standard, the phase-correlated Feldkamp reconstruction (PCF), shows poor performance in the case of low-dose scans, more sophisticated reconstruction algorithms have been proposed to enable low-dose imaging. In this study, the authors focus on the McKinnon-Bates (MKB) algorithm, the low-dose phase-correlated (LDPC) reconstruction, and the high-dimensional total variation minimization reconstruction (HDTV), and investigate their potential to accurately determine the left ventricular volume at dose levels from 50 to 500 mGy. The results were verified in phantom studies of a five-dimensional (5D) mathematical mouse phantom. Micro-CT data of eight mice, each administered an x-ray dose of 500 mGy, were acquired, retrospectively gated for cardiac and respiratory motion, and reconstructed using PCF, MKB, LDPC, and HDTV. Dose levels down to 50 mGy were simulated by using only a fraction of the projections. Contrast-to-noise ratio (CNR) was evaluated as a measure of image quality. Left ventricular volume was determined using different segmentation algorithms (Otsu, level sets, region growing). Forward projections of the 5D mouse phantom were performed to simulate a micro-CT scan, and the simulated data were processed the same way as the real mouse datasets. Compared to the conventional PCF reconstruction, the MKB, LDPC, and HDTV algorithms yield images of increased quality in terms of CNR. While the MKB reconstruction provides only small improvements, a significant increase of the CNR is observed in LDPC and HDTV reconstructions. The phantom studies demonstrate that left ventricular volumes can be determined accurately at 500 mGy. For lower dose levels, which were simulated for the real mouse datasets, the HDTV algorithm shows the best performance: at 50 mGy, the deviations from the reference obtained at 500 mGy were less than 4%. The LDPC algorithm also provides reasonable results, with deviations of less than 10% at 50 mGy, while the PCF and MKB reconstructions show larger deviations even at higher dose levels. LDPC and HDTV increase CNR and allow for quantitative evaluations even at dose levels as low as 50 mGy. The left ventricular volumes illustrate that cardiac parameters can be accurately estimated at the lowest dose levels if sophisticated algorithms are used. This allows dose to be reduced by a factor of 10 compared to today's gold standard and opens new options for longitudinal studies of the heart.
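The image-quality metric used throughout, contrast-to-noise ratio, is straightforward to compute from two regions of interest; in this sketch the image and ROI coordinates are synthetic stand-ins.

```python
import numpy as np

def cnr(image, roi_signal, roi_background):
    """Contrast-to-noise ratio between two rectangular ROIs.
    ROIs are (row_slice, col_slice) pairs; coordinates are assumed."""
    sig = image[roi_signal]
    bg = image[roi_background]
    return abs(sig.mean() - bg.mean()) / bg.std()

# Synthetic reconstruction: a bright ventricle-like patch on noisy background.
rng = np.random.default_rng(0)
img = rng.normal(100.0, 10.0, (256, 256))
img[100:140, 100:140] += 50.0
roi_sig = (slice(100, 140), slice(100, 140))
roi_bg = (slice(0, 40), slice(0, 40))
print(f"CNR = {cnr(img, roi_sig, roi_bg):.1f}")   # ~5 for these parameters
```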
Sophistry, the Sophists and modern medical education.
Macsuibhne, S P
2010-01-01
The term 'sophist' has become a term of intellectual abuse in both general discourse and that of educational theory. However the actual thought of the fifth century BC Athenian-based philosophers who were the original Sophists was very different from the caricature. In this essay, I draw parallels between trends in modern medical educational practice and the thought of the Sophists. Specific areas discussed are the professionalisation of medical education, the teaching of higher-order characterological attributes such as personal development skills, and evidence-based medical education. Using the specific example of the Sophist Protagoras, it is argued that the Sophists were precursors of philosophical approaches and practices of enquiry underlying modern medical education.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamp, J; Karmanos Cancer Institute - International Imaging Center, Detroit, MI; Malyarenko, E
Purpose: To test the feasibility of developing a practical medium frequency ultrasound tomography method for small animal imaging. The ability to produce cross-sectional or full body images of a live small animal using a low-cost tabletop ultrasound scanner without any special license would be very beneficial to long term biological studies, where repeated scanning is often required over an extended period of time. Methods: The cross sectional images were produced by compounding multiple B-scans of a laboratory phantom or an animal acquired at different projection angles. Two imaging systems were used to test the concept. The first system included a programmable 64-channel phased array controller driving a 128-channel, 5–10 MHz linear probe to produce 143 B-Mode projections of the spinning object. The second system, designed and manufactured in house, produced 64 or 128 B-Mode projections with a single unfocused 8 MHz transducer scanning with a 0.116 mm step size. Results: The phased array system provided good penetration through the phantoms/mice (with the exception of the lungs) and allowed data to be acquired in a very short time. The cross-sectional images have enough resolution and dynamic range to detect both high- and low-contrast organs. The single transducer system takes longer to scan, and the data require more sophisticated processing. To date, our images allow seeing details as small as 1–2 mm in the phantoms and in small animals, with the contrast mostly due to highly reflecting bones and air inclusions. Conclusion: The work indicates that very detailed and anatomically correct images can be created by relatively simple and inexpensive means. With more advanced algorithms and improved system design, scan time can be reduced considerably, enabling high-resolution full 3D imaging. This will allow for quick and easy scans that can help monitor tumor growth and/or regression without contributing any dose to the animal. The authors would like to acknowledge the financial and engineering support from Tessonics.
There's gold in them thar' databases.
Gillespie, G
2000-11-01
Some health care organizations are using sophisticated data mining applications to unearth hidden truths buried in their online clinical and financial information. But the lack of a standard clinical vocabulary and standard work processes is an obstacle CIOs must blast through to reach their treasure.
Off-axis holographic laser speckle contrast imaging of blood vessels in tissues
NASA Astrophysics Data System (ADS)
Abdurashitov, Arkady; Bragina, Olga; Sindeeva, Olga; Sergey, Sindeev; Semyachkina-Glushkovskaya, Oxana V.; Tuchin, Valery V.
2017-09-01
Laser speckle contrast imaging (LSCI) has become one of the most common tools for functional imaging in tissues. Its incomplete theoretical description and the sophisticated interpretation of measurement results are outweighed by low-cost, simple hardware, speed, consistent results, and repeatability. Beyond the relatively small measurement volume, with a probing depth of around 700 μm for illumination in the visible spectral range, there is no depth selectivity in the conventional LSCI configuration; furthermore, in the case of a high-NA objective, the actual penetration depth of light in tissue is greater than the depth of field (DOF) of the imaging system. Thus, information about these out-of-focus regions persists in the recorded frames but cannot be retrieved by intensity-based registration. We propose a simple modification of the LSCI system, based on off-axis holography, that introduces post-acquisition refocusing to overcome both the depth-selectivity and DOF problems, and potentially to produce a cross-sectional view of the specimen.
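The core LSCI computation itself is compact: local speckle contrast K = σ/μ over a small sliding window; the 7×7 window below is a conventional choice, not taken from this paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def speckle_contrast(raw, win=7):
    """Spatial speckle contrast K = sigma/mu in a sliding win x win window."""
    raw = raw.astype(float)
    mu = uniform_filter(raw, win)
    mu2 = uniform_filter(raw ** 2, win)
    sigma = np.sqrt(np.clip(mu2 - mu ** 2, 0, None))
    return sigma / (mu + 1e-12)

# Example on synthetic speckle: fully developed static speckle gives K ~ 1;
# motion (e.g. blood flow) blurs the speckle and lowers K.
rng = np.random.default_rng(0)
frame = rng.exponential(1.0, (256, 256))   # ideal speckle intensity statistics
print(f"mean K = {speckle_contrast(frame).mean():.2f}")
```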
In situ real-time imaging of self-sorted supramolecular nanofibres
NASA Astrophysics Data System (ADS)
Onogi, Shoji; Shigemitsu, Hajime; Yoshii, Tatsuyuki; Tanida, Tatsuya; Ikeda, Masato; Kubota, Ryou; Hamachi, Itaru
2016-08-01
Self-sorted supramolecular nanofibres—a multicomponent system that consists of several types of fibre, each composed of distinct building units—play a crucial role in complex, well-organized systems with sophisticated functions, such as living cells. Designing and controlling self-sorting events in synthetic materials and understanding their structures and dynamics in detail are important elements in developing functional artificial systems. Here, we describe the in situ real-time imaging of self-sorted supramolecular nanofibre hydrogels consisting of a peptide gelator and an amphiphilic phosphate. The use of appropriate fluorescent probes enabled the visualization of self-sorted fibres entangled in two and three dimensions through confocal laser scanning microscopy and super-resolution imaging, with 80 nm resolution. In situ time-lapse imaging showed that the two types of fibre have different formation rates and that their respective physicochemical properties remain intact in the gel. Moreover, we directly visualized stochastic non-synchronous fibre formation and observed a cooperative mechanism.
Mouse blood vessel imaging by in-line x-ray phase-contrast imaging
NASA Astrophysics Data System (ADS)
Zhang, Xi; Liu, Xiao-Song; Yang, Xin-Rong; Chen, Shao-Liang; Zhu, Pei-Ping; Yuan, Qing-Xi
2008-10-01
It is virtually impossible to observe blood vessels by conventional x-ray imaging techniques without using contrast agents. In addition, such x-ray systems are typically incapable of detecting vessels with diameters less than 200 µm. Here we show that vessels as small as 30 µm could be detected using in-line phase-contrast x-ray imaging without the use of contrast agents. Image quality was greatly improved by replacing resident blood with physiological saline. Furthermore, an entire branch of the portal vein from the main axial portal vein to the eighth generation of branching could be captured in a single phase-contrast image. Prior to our work, detection of 30 µm diameter blood vessels could only be achieved using x-ray interferometry, which requires sophisticated x-ray optics. Our results thus demonstrate that in-line phase-contrast x-ray imaging, using physiological saline as a contrast agent, provides an alternative to the interferometric method that can be much more easily implemented and also offers the advantage of a larger field of view. A possible application of this methodology is in animal tumor models, where it can be used to observe tumor angiogenesis and the treatment effects of antineoplastic agents.
NASA Technical Reports Server (NTRS)
2002-01-01
Nuclear magnetic resonance (NMR) is a powerful and versatile, noninvasive method for studying fluid transport problems. However, its applications to these types of investigations have been limited. A primary factor limiting the application of NMR has been the lack of a user-friendly, versatile, and inexpensive NMR imaging apparatus that can be used by scientists who are not familiar with sophisticated NMR techniques. To rectify this situation, we developed a user-friendly NMR imager for projects of relevance to the MRD science community. To that end, we performed preliminary collaborative experiments between NASA, NCMR, and New Mexico Resonance using the high-field NMR setup at New Mexico Resonance to track wetting-front dynamics in foams under gravity. The experiments were done in a 30 cm, 1.9 T Oxford magnet with a TECMAG Libra spectrometer (Tecmag, Inc., Houston, TX). We used two different imaging strategies depending on whether the water in the foam sample was static or moving. Stationary water distributions were imaged with the standard Fourier imaging method, as used in medical MRI, in which data are acquired from all parts of the region of interest at all times and Fourier transformed into a static spatial image.
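For the standard Fourier imaging method mentioned above, reconstruction amounts to an inverse 2-D Fourier transform of the fully sampled k-space data. A hedged sketch (array layout and function name are assumptions, not specified in the abstract):

```python
import numpy as np

def fourier_reconstruct(kspace: np.ndarray) -> np.ndarray:
    """Standard Fourier (spin-warp) reconstruction: inverse 2-D FFT of
    complex k-space samples, returning the magnitude image."""
    img = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(kspace)))
    return np.abs(img)  # magnitude image, as displayed in MRI
```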
Migration of the digital interactive breast-imaging teaching file
NASA Astrophysics Data System (ADS)
Cao, Fei; Sickles, Edward A.; Huang, H. K.; Zhou, Xiaoqiang
1998-06-01
The digital breast imaging teaching file developed during the last two years in our laboratory has been used successfully at UCSF (University of California, San Francisco) as a routine teaching tool for training radiology residents and fellows in mammography. Building on this success, we have ported the teaching file from an old Pixar imaging/Sun SPARC 470 display system to our newly designed telemammography display workstation (Ultra SPARC 2 platform with two DOME Md5/SBX display boards). The old Pixar/Sun 470 system, although adequate for fast and high-resolution image display, is 4-year-old technology, expensive to maintain and difficult to upgrade. The new display workstation is more cost-effective and is also compatible with the digital image format from a full-field direct digital mammography system. The digital teaching file is built on a sophisticated computer-aided instruction (CAI) model, which simulates the management sequences used in imaging interpretation and work-up. Each user can be prompted to respond by making his/her own observations, assessments, and work-up decisions as well as the marking of image abnormalities. This effectively replaces the traditional 'show-and-tell' teaching file experience with an interactive, response-driven type of instruction.
Pathmanathan, Angela U; van As, Nicholas J; Kerkmeijer, Linda G W; Christodouleas, John; Lawton, Colleen A F; Vesprini, Danny; van der Heide, Uulke A; Frank, Steven J; Nill, Simeon; Oelfke, Uwe; van Herk, Marcel; Li, X Allen; Mittauer, Kathryn; Ritter, Mark; Choudhury, Ananya; Tree, Alison C
2018-02-01
Radiation therapy to the prostate involves increasingly sophisticated delivery techniques and changing fractionation schedules. With a low estimated α/β ratio, a larger dose per fraction would be beneficial, with moderate fractionation schedules rapidly becoming a standard of care. The integration of a magnetic resonance imaging (MRI) scanner and linear accelerator allows for accurate soft tissue tracking with the capacity to replan for the anatomy of the day. Extreme hypofractionation schedules become a possibility using the potentially automated steps of autosegmentation, MRI-only workflow, and real-time adaptive planning. The present report reviews the steps involved in hypofractionated adaptive MRI-guided prostate radiation therapy and addresses the challenges for implementation.
MRI-guided focused ultrasound surgery in musculoskeletal diseases: the hot topics
Napoli, Alessandro; Sacconi, Beatrice; Battista, Giuseppe; Guglielmi, Giuseppe; Catalano, Carlo; Albisinni, Ugo
2016-01-01
MRI-guided focused ultrasound surgery (MRgFUS) is a minimally invasive treatment guided by the most sophisticated imaging tool available in today's clinical practice. Both the imaging and therapeutic sides of the equipment are based on non-ionizing energy. This technique is a very promising option as a potential treatment for several pathologies, including musculoskeletal (MSK) disorders. Apart from clinical applications, MRgFUS technology is the result of long, cumulative efforts to explore the effects of ultrasound on biological tissues and function, the generation of focused ultrasound, and treatment monitoring by MRI. The aim of this article is to give an updated overview of a “new” interventional technique and of its applications for MSK and allied sciences. PMID:26607640
DOE Office of Scientific and Technical Information (OSTI.GOV)
Virnstein, R.; Tepera, M.; Beazley, L.
1997-06-01
A pilot study is very briefly summarized in the article. The study tested the potential of multi-spectral digital imagery for discriminating seagrass densities and species, algae, and bottom types. Imagery was obtained with the Compact Airborne Spectral Imager (casi), with two flight lines flown in hyperspectral mode. The photogrammetric method used allowed interpretation of the highest-quality product, eliminating limitations caused by outdated or poor-quality base maps and the errors associated with transfer of polygons. Initial image analysis indicates that the multi-spectral imagery has several advantages, including sophisticated spectral signature recognition and classification, ease of geo-referencing, and rapid mosaicking.
Meteorological data fields 'in perspective'
NASA Technical Reports Server (NTRS)
Hasler, A. F.; Pierce, H.; Morris, K. R.; Dodge, J.
1985-01-01
Perspective display techniques can be applied to meteorological data sets to aid in their interpretation. Examples of a perspective display procedure applied to satellite and aircraft visible and infrared image pairs and to stereo cloud-top height analyses are presented. The procedure uses a sophisticated shading algorithm that produces perspective images with greatly improved comprehensibility when compared with the wire-frame perspective displays that have been used in the past. By changing the 'eye-point' and 'view-point' inputs to the program in a systematic way, movie loops that give the impression of flying over or through the data field have been made. This paper gives examples that show how several kinds of meteorological data fields are more effectively illustrated using the perspective technique.
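A perspective display of this kind reduces to projecting each 3-D data point through an eye-point toward a view-point and performing a perspective divide. A simplified pinhole-camera sketch (the shading algorithm itself is not reproduced; the world 'up' axis and all names are assumptions):

```python
import numpy as np

def project(points: np.ndarray, eye: np.ndarray, view: np.ndarray,
            focal: float = 1.0) -> np.ndarray:
    """Project 3-D points (N, 3) onto the image plane of a pinhole camera
    at `eye` looking toward `view`; world 'up' is taken as the z axis."""
    forward = view - eye
    forward = forward / np.linalg.norm(forward)
    right = np.cross(forward, np.array([0.0, 0.0, 1.0]))
    right = right / np.linalg.norm(right)
    up = np.cross(right, forward)
    # Coordinates in the camera basis: dot each point with (right, up, forward).
    cam = (points - eye) @ np.stack([right, up, forward], axis=1)
    return focal * cam[:, :2] / cam[:, 2:3]  # perspective divide by depth
```

Sweeping `eye` along a path while holding `view` fixed produces exactly the fly-over movie loops described in the abstract.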
Ultrafast Pulse Sequencing for Fast Projective Measurements of Atomic Hyperfine Qubits
NASA Astrophysics Data System (ADS)
Ip, Michael; Ransford, Anthony; Campbell, Wesley
2015-05-01
Projective readout of quantum information stored in atomic hyperfine structure typically uses state-dependent CW laser-induced fluorescence. This method requires an often sophisticated imaging system to spatially filter out the background CW laser light. We present an alternative approach that instead uses simple pulse sequences from a mode-locked laser to effect the same state-dependent excitations in less than 1 ns. The resulting atomic fluorescence occurs in the dark, allowing the placement of non-imaging detectors right next to the atom to improve the qubit state detection efficiency and speed. We also discuss methods of Doppler cooling with mode-locked lasers for trapped ions, where the creation of the necessary UV light is often difficult with CW lasers.
Using neuroimaging to understand the cortical mechanisms of auditory selective attention
Lee, Adrian KC; Larson, Eric; Maddox, Ross K; Shinn-Cunningham, Barbara G
2013-01-01
Over the last four decades, a range of different neuroimaging tools have been used to study human auditory attention, spanning from classic event-related potential studies using electroencephalography to modern multimodal imaging approaches (e.g., combining anatomical information based on magnetic resonance imaging with magneto- and electroencephalography). This review begins by exploring the different strengths and limitations inherent to different neuroimaging methods, and then outlines some common behavioral paradigms that have been adopted to study auditory attention. We argue that in order to design a neuroimaging experiment that produces interpretable, unambiguous results, the experimenter must not only have a deep appreciation of the imaging technique employed, but also a sophisticated understanding of perception and behavior. Only with the proper caveats in mind can one begin to infer how the cortex supports a human in solving the “cocktail party” problem. PMID:23850664
Time-lapse cinematography in living Drosophila tissues: preparation of material.
Davis, Ilan; Parton, Richard M
2006-11-01
The fruit fly, Drosophila melanogaster, has been an extraordinarily successful model organism for studying the genetic basis of development and evolution. It is arguably the best-understood complex multicellular model system, owing its success to many factors. Recent developments in imaging techniques, in particular sophisticated fluorescence microscopy methods and equipment, now allow cellular events to be studied at high resolution in living material. This ability has enabled the study of features that tend to be lost or damaged by fixation, such as transient or dynamic events. Although many of the techniques of live cell imaging in Drosophila are shared with the greater community of cell biologists working on other model systems, studying living fly tissues presents unique difficulties in keeping the cells alive, introducing fluorescent probes, and imaging through thick hazy cytoplasm. This protocol outlines the preparation of major tissue types amenable to study by time-lapse cinematography and different methods for keeping them alive.
Single pulse two photon fluorescence lifetime imaging (SP-FLIM) with MHz pixel rate.
Eibl, Matthias; Karpf, Sebastian; Weng, Daniel; Hakert, Hubertus; Pfeiffer, Tom; Kolb, Jan Philip; Huber, Robert
2017-07-01
Two-photon-excited fluorescence lifetime imaging microscopy (FLIM) is a chemically specific 3-D sensing modality providing valuable information about the microstructure, composition and function of a sample. However, a more widespread application of this technique is hindered by the need for a sophisticated ultra-short pulse laser source and by speed limitations of current FLIM detection systems. To overcome these limitations, we combined a robust sub-nanosecond fiber laser as the excitation source with high analog bandwidth detection. Due to the long pulse length in our configuration, more fluorescence photons are generated per pulse, which allows us to derive the lifetime with a single excitation pulse only. In this paper, we show high quality FLIM images acquired at a pixel rate of 1 MHz. This approach is a promising candidate for an easy-to-use and benchtop FLIM system to make this technique available to a wider research community.
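With enough fluorescence photons per excitation pulse, a mono-exponential lifetime can be estimated from a single digitized decay trace, I(t) = I0·exp(-t/τ). A minimal log-linear fit as a sketch (sampling interval and names are illustrative, not the paper's detection chain):

```python
import numpy as np

def fit_lifetime(trace: np.ndarray, dt: float) -> float:
    """Estimate a mono-exponential fluorescence lifetime tau (seconds) from
    one decay trace sampled every `dt` seconds, via log-linear least squares."""
    t = np.arange(len(trace)) * dt
    keep = trace > 0                        # the log requires positive samples
    slope, _ = np.polyfit(t[keep], np.log(trace[keep]), 1)
    return -1.0 / slope                     # since log I(t) = log I0 - t / tau
```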
Segmentation and classification of cell cycle phases in fluorescence imaging.
Ersoy, Ilker; Bunyak, Filiz; Chagin, Vadim; Cardoso, M Christina; Palaniappan, Kannappan
2009-01-01
Current chemical biology methods for studying spatiotemporal correlation between biochemical networks and cell cycle phase progression in live-cells typically use fluorescence-based imaging of fusion proteins. Stable cell lines expressing fluorescently tagged protein GFP-PCNA produce rich, dynamically varying sub-cellular foci patterns characterizing the cell cycle phases, including the progress during the S-phase. Variable fluorescence patterns, drastic changes in SNR, shape and position changes and abundance of touching cells require sophisticated algorithms for reliable automatic segmentation and cell cycle classification. We extend the recently proposed graph partitioning active contours (GPAC) for fluorescence-based nucleus segmentation using regional density functions and dramatically improve its efficiency, making it scalable for high content microscopy imaging. We utilize surface shape properties of GFP-PCNA intensity field to obtain descriptors of foci patterns and perform automated cell cycle phase classification, and give quantitative performance by comparing our results to manually labeled data.
Assessing the Risks for Modern Diagnostic Ultrasound Imaging
NASA Astrophysics Data System (ADS)
William, Jr.
1998-05-01
Some 35 years after Paul-Jacques and Pierre Curie discovered piezoelectricity, ultrasonic imaging was developed by Paul Langevin. During this work, ultrasonic energy was observed to have a detrimental biological effect. These observations were confirmed a decade later by R. W. Wood and A. L. Loomis. It was not until the early 1950s that ultrasonic exposure conditions were controlled and specified so that studies could focus on the mechanisms by which ultrasound influenced biological materials. In the late 1940s, pioneering work was initiated to image the human body by ultrasonic techniques. These engineers and physicians were aware of the deleterious ultrasound effects at sufficiently high levels; this endeavored them to keep the exposure levels reasonably low. Over the past three decades, diagnostic ultrasound has become a sophisticated technology. Yet, our understanding of the potential risks has not changed appreciably. It is very encouraging that human injury has never been attributed to clinical practice of diagnostic ultrasound.
iShadow: Design of a Wearable, Real-Time Mobile Gaze Tracker.
Mayberry, Addison; Hu, Pan; Marlin, Benjamin; Salthouse, Christopher; Ganesan, Deepak
2014-06-01
Continuous, real-time tracking of eye gaze is valuable in a variety of scenarios including hands-free interaction with the physical world, detection of unsafe behaviors, leveraging visual context for advertising, life logging, and others. While eye tracking is commonly used in clinical trials and user studies, it has not bridged the gap to everyday consumer use. The challenge is that a real-time eye tracker is a power-hungry and computation-intensive device which requires continuous sensing of the eye using an imager running at many tens of frames per second, and continuous processing of the image stream using sophisticated gaze estimation algorithms. Our key contribution is the design of an eye tracker that dramatically reduces the sensing and computation needs for eye tracking, thereby achieving orders of magnitude reductions in power consumption and form-factor. The key idea is that eye images are extremely redundant, therefore we can estimate gaze by using a small subset of carefully chosen pixels per frame. We instantiate this idea in a prototype hardware platform equipped with a low-power image sensor that provides random access to pixel values, a low-power ARM Cortex M3 microcontroller, and a bluetooth radio to communicate with a mobile phone. The sparse pixel-based gaze estimation algorithm is a multi-layer neural network learned using a state-of-the-art sparsity-inducing regularization function that minimizes the gaze prediction error while simultaneously minimizing the number of pixels used. Our results show that we can operate at roughly 70mW of power, while continuously estimating eye gaze at the rate of 30 Hz with errors of roughly 3 degrees.
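The paper's estimator is a multi-layer neural network; as a simplified stand-in for the sparsity-inducing idea, an L1-penalized linear regressor also drives most pixel weights to exactly zero, so only the surviving pixels need to be read from the random-access sensor. A sketch on synthetic data (shapes and the penalty weight are assumptions):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.random((500, 112 * 112))   # flattened eye images (synthetic stand-in)
y = rng.random((500, 2))           # (azimuth, elevation) gaze labels

# The L1 penalty zeroes most pixel weights, selecting a sparse pixel subset.
model = Lasso(alpha=0.01).fit(X, y)
active_pixels = np.unique(np.nonzero(model.coef_)[1])
print(f"pixels retained: {active_pixels.size} of {X.shape[1]}")
```

At run time, only `active_pixels` would be sampled from the imager, which is the source of the power savings the abstract describes.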
Automated Functional Analysis of Astrocytes from Chronic Time-Lapse Calcium Imaging Data
Wang, Yinxue; Shi, Guilai; Miller, David J.; Wang, Yizhi; Wang, Congchao; Broussard, Gerard; Wang, Yue; Tian, Lin; Yu, Guoqiang
2017-01-01
Recent discoveries that astrocytes exert proactive regulatory effects on neural information processing and that they are deeply involved in normal brain development and disease pathology have stimulated broad interest in understanding astrocyte functional roles in brain circuits. Measuring astrocyte functional status is now technically feasible, due to recent advances in modern microscopy and ultrasensitive cell-type-specific genetically encoded Ca2+ indicators for chronic imaging. However, there is a big gap between the capability of generating large datasets via calcium imaging and the availability of sophisticated analytical tools for decoding astrocyte function. Current practice is essentially manual, which not only limits analysis throughput but also risks introducing bias and missing important information latent in complex, dynamic big data. Here, we report a suite of computational tools, called Functional AStrocyte Phenotyping (FASP), for automatically quantifying the functional status of astrocytes. Considering the complex nature of Ca2+ signaling in astrocytes and the low signal-to-noise ratio, FASP is designed with data-driven and probabilistic principles, to flexibly account for various patterns and to perform robustly with noisy data. In particular, FASP explicitly models signal propagation, which rules out the applicability of tools designed for other types of data. We demonstrate the effectiveness of FASP using extensive synthetic and real data sets. The findings by FASP were verified by manual inspection. FASP also detected signals that were missed by purely manual analysis but could be confirmed by more careful manual examination under the guidance of automatic analysis. All algorithms and the analysis pipeline are packaged into a plugin for Fiji (ImageJ), with the source code freely available online at https://github.com/VTcbil/FASP. PMID:28769780
Carasso, Alfred S; Vladár, András E
2012-01-01
Helium ion microscopes (HIM) are capable of acquiring images with better than 1 nm resolution, and HIM images are particularly rich in morphological surface details. However, such images are generally quite noisy. A major challenge is to denoise these images while preserving delicate surface information. This paper presents a powerful slow motion denoising technique, based on solving linear fractional diffusion equations forward in time. The method is easily implemented computationally, using fast Fourier transform (FFT) algorithms. When applied to actual HIM images, the method is found to reproduce the essential surface morphology of the sample with high fidelity. In contrast, such highly sophisticated methodologies as Curvelet Transform denoising, and Total Variation denoising using split Bregman iterations, are found to eliminate vital fine scale information, along with the noise. Image Lipschitz exponents are a useful image metrology tool for quantifying the fine structure content in an image. In this paper, this tool is applied to rank order the above three distinct denoising approaches, in terms of their texture preserving properties. In several denoising experiments on actual HIM images, it was found that fractional diffusion smoothing performed noticeably better than split Bregman TV, which in turn, performed slightly better than Curvelet denoising.
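The fractional step is straightforward in Fourier space: the exact solution of u_t = -λ(-Δ)^β u multiplies each Fourier coefficient by exp(-tλ|k|^(2β)), and 'slow motion' corresponds to applying that factor in many small time steps. A hedged sketch (parameter values are illustrative, not the paper's):

```python
import numpy as np

def fractional_diffusion_denoise(img: np.ndarray, beta: float = 0.2,
                                 lam: float = 1e-3, steps: int = 10) -> np.ndarray:
    """Smooth `img` by integrating u_t = -lam * (-Laplacian)**beta * u forward
    in time, using the exact Fourier-space update in small 'slow motion' steps."""
    ny, nx = img.shape
    ky = np.fft.fftfreq(ny)[:, None]
    kx = np.fft.fftfreq(nx)[None, :]
    symbol = (4.0 * np.pi ** 2 * (kx ** 2 + ky ** 2)) ** beta  # (-Laplacian)^beta
    u_hat = np.fft.fft2(img)
    for _ in range(steps):
        u_hat *= np.exp(-lam * symbol)      # one small forward-time step
    return np.fft.ifft2(u_hat).real
```

Stopping after few steps preserves fine surface texture; more steps smooth more aggressively, which is the 'slow motion' control that distinguishes this approach from one-shot denoisers.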
Low Cost Desktop Image Analysis Workstation With Enhanced Interactive User Interface
NASA Astrophysics Data System (ADS)
Ratib, Osman M.; Huang, H. K.
1989-05-01
A multimodality picture archiving and communication system (PACS) is in routine clinical use in the UCLA Radiology Department. Several types of workstations are currently implemented for this PACS. Among them, the Apple Macintosh II personal computer was recently chosen to serve as a desktop workstation for display and analysis of radiological images. This personal computer was selected mainly because of its extremely friendly user interface, its popularity among the academic and medical community, and its low cost. In comparison to other microcomputer-based systems, the Macintosh II offers the following advantages: the extreme standardization of its user interface, file system and networking, and the availability of a very large variety of commercial software packages. In the current configuration the Macintosh II operates as a stand-alone workstation where images are imported from a centralized PACS server through an Ethernet network using the standard TCP/IP protocol, and stored locally on magnetic disk. The use of high-resolution screens (1024x768 pixels x 8 bits) offers sufficient performance for image display and analysis. We focused our project on the design and implementation of a variety of image analysis algorithms ranging from automated structure and edge detection to sophisticated dynamic analysis of sequential images. Specific analysis programs were developed for ultrasound images, digitized angiograms, MRI and CT tomographic images and scintigraphic images.
Landsat Science: 40 Years of Innovation and Opportunity
NASA Technical Reports Server (NTRS)
Cook, Bruce D.; Irons, James R.; Masek, Jeffrey G.; Loveland, Thomas R.
2012-01-01
Landsat satellites have provided unparalleled Earth-observing data for nearly 40 years, allowing scientists to describe, monitor and model the global environment during a period of time that has seen dramatic changes in population growth, land use, and climate. The success of the Landsat program can be attributed to well-designed instrument specifications, astute engineering, comprehensive global acquisition and calibration strategies, and innovative scientists who have developed analytical techniques and applications to address a wide range of needs at local to global scales (e.g., crop production, water resource management, human health and environmental quality, urbanization, deforestation and biodiversity). Early Landsat contributions included inventories of natural resources and land cover classification maps, which were initially prepared by a visual interpretation of Landsat imagery. Over time, advances in computer technology facilitated the development of sophisticated image processing algorithms and complex ecosystem modeling, enabling scientists to create accurate, reproducible, and more realistic simulations of biogeochemical processes (e.g., plant production and ecosystem dynamics). Today, the Landsat data archive is freely available for download through the USGS, creating new opportunities for scientists to generate global image datasets, develop new change detection algorithms, and provide products in support of operational programs such as Reducing Emissions from Deforestation and Forest Degradation in Developing Countries (REDD). In particular, the use of dense (approximately annual) time series to characterize both rapid and progressive landscape change has yielded new insights into how the land environment is responding to anthropogenic and natural pressures. The launch of the Landsat Data Continuity Mission (LDCM) satellite in 2012 will continue to propel innovative Landsat science.
NASA Astrophysics Data System (ADS)
Traganos, D.; Cerra, D.; Reinartz, P.
2017-05-01
Seagrasses are one of the most productive and widespread yet threatened coastal ecosystems on Earth. Despite their importance, they are declining due to various threats, which are mainly anthropogenic. Lack of data on their distribution hinders any effort to rectify this decline through effective detection, mapping and monitoring. Remote sensing can mitigate this data gap by allowing retrospective quantitative assessment of seagrass beds over large and remote areas. In this paper, we evaluate the quantitative application of Planet high resolution imagery for the detection of seagrasses in the Thermaikos Gulf, NW Aegean Sea, Greece. The low signal-to-noise ratio (SNR) that characterizes spectral bands at shorter wavelengths prompts the application of unmixing-based denoising (UBD) as a pre-processing step for seagrass detection. A total of 15 spectral-temporal patterns are extracted from a Planet image time series to restore the corrupted blue and green bands in the processed Planet image. Subsequently, we implement Lyzenga's empirical water column correction and Support Vector Machines (SVM) to evaluate the quantitative benefits of denoising. Denoising aids detection of the seagrass species Posidonia oceanica, increasing its producer and user accuracy by 31.7 % and 10.4 %, respectively, with a corresponding increase in its Kappa value from 0.3 to 0.48. In the near future, our objective is to improve seagrass detection accuracy by applying more sophisticated, analytical water column correction algorithms to Planet imagery, developing time- and cost-effective monitoring of seagrass distribution that will in turn enable the effective management and conservation of these highly valuable and productive ecosystems.
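Lyzenga's correction combines the log-transformed, deep-water-corrected radiances of a band pair into a depth-invariant bottom index, X = ln(L_i - L_si) - (k_i/k_j)·ln(L_j - L_sj); the indices, rather than raw radiances, are then fed to the classifier. A hedged sketch (variable names and the clipping floor are illustrative):

```python
import numpy as np

def depth_invariant_index(Li, Lj, Lsi, Lsj, k_ratio):
    """Lyzenga's depth-invariant bottom index for one band pair.

    Li, Lj   : radiance images for two visible bands
    Lsi, Lsj : deep-water radiances, estimated over optically deep pixels
    k_ratio  : ratio of diffuse attenuation coefficients k_i / k_j, estimated
               from pixels of a single bottom type at varying depth
    """
    xi = np.log(np.clip(Li - Lsi, 1e-6, None))  # floor avoids log of <= 0
    xj = np.log(np.clip(Lj - Lsj, 1e-6, None))
    return xi - k_ratio * xj
```

The resulting index image(s) can be stacked as features for an SVM (e.g. scikit-learn's SVC) to separate seagrass from other bottom types.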
Dong, Biqin; Li, Hao; Zhang, Zhen; Zhang, Kevin; Chen, Siyu; Sun, Cheng; Zhang, Hao F
2015-01-01
Photoacoustic microscopy (PAM) is an attractive imaging tool complementary to established optical microscopic modalities by providing additional molecular specificity through imaging optical absorption contrast. While the development of optical resolution photoacoustic microscopy (ORPAM) offers high lateral resolution, the acoustically determined axial resolution is limited by the constraint on ultrasonic detection bandwidth. ORPAM with isometric spatial resolution along both the axial and lateral directions has yet to be developed. Although recently developed sophisticated optical illumination and reconstruction methods offer improved axial resolution in ORPAM, the image acquisition procedures are rather complicated, limiting high-speed imaging and easy integration with established optical microscopic modalities. Here we report an isometric ORPAM based on an optically transparent micro-ring resonator ultrasonic detector and a commercial inverted microscope platform. Owing to the superior spatial resolution and the ease of integrating our ORPAM with established microscopic modalities, single-cell imaging with extrinsic fluorescence staining, intrinsic autofluorescence, and optical absorption can be achieved simultaneously. This technique holds promise to greatly improve the accessibility of PAM to the broader biomedical research community.
Red, Purple and Pink: The Colors of Diffusion on Pinterest
Bakhshi, Saeideh; Gilbert, Eric
2015-01-01
Many lab studies have shown that colors can evoke powerful emotions and impact human behavior. Might these phenomena drive how we act online? A key research challenge for image-sharing communities is uncovering the mechanisms by which content spreads through the community. In this paper, we investigate whether there is a link between color and diffusion. Drawing on a corpus of one million images crawled from Pinterest, we find that color significantly impacts the diffusion of images and the adoption of content on image-sharing communities such as Pinterest, even after partially controlling for network structure and activity. Specifically, red, purple and pink seem to promote diffusion, while green, blue, black and yellow suppress it. To our knowledge, our study is the first to investigate how colors relate to online user behavior. In addition to contributing to the research conversation surrounding diffusion, these findings suggest future work using sophisticated computer vision techniques. We conclude with a discussion on the theoretical, practical and design implications suggested by this work, e.g. the design of engaging image filters. PMID:25658423
The Future of Weapons of Mass Destruction: Their Nature and Role in 2030
2014-06-01
substantial improvements are allowed under the rubric of life extension. Other states are not so constrained and may find different ways to develop pure... The foregoing capabilities do not involve genetic manipulation or bioengineering; they utilize longstanding biological knowledge and processes. More... sophisticated understanding of biological systems (genomic and proteomic information) and processes (genetic modification, bioengineering) for
Image-guided surgery and therapy: current status and future directions
NASA Astrophysics Data System (ADS)
Peters, Terence M.
2001-05-01
Image-guided surgery and therapy is assuming an increasingly important role, particularly considering the current emphasis on minimally-invasive surgical procedures. Volumetric CT and MR images have been used now for some time in conjunction with stereotactic frames, to guide many neurosurgical procedures. With the development of systems that permit surgical instruments to be tracked in space, image-guided surgery now includes the use of frame-less procedures, and the application of the technology has spread beyond neurosurgery to include orthopedic applications and therapy of various soft-tissue organs such as the breast, prostate and heart. Since tracking systems allow image- guided surgery to be undertaken without frames, a great deal of effort has been spent on image-to-image and image-to- patient registration techniques, and upon the means of combining real-time intra-operative images with images acquired pre-operatively. As image-guided surgery systems have become increasingly sophisticated, the greatest challenges to their successful adoption in the operating room of the future relate to the interface between the user and the system. To date, little effort has been expended to ensure that the human factors issues relating to the use of such equipment in the operating room have been adequately addressed. Such systems will only be employed routinely in the OR when they are designed to be intuitive, unobtrusive, and provide simple access to the source of the images.
ERIC Educational Resources Information Center
Benedis-Grab, Gregory
2011-01-01
Computers have changed the landscape of scientific research in profound ways. Technology has always played an important role in scientific experimentation--through the development of increasingly sophisticated tools, the measurement of elusive quantities, and the processing of large amounts of data. However, the advent of social networking and the…
Television camera as a scientific instrument
NASA Technical Reports Server (NTRS)
Smokler, M. I.
1970-01-01
A rigorous calibration program, coupled with a sophisticated data-processing program that compensated for system response to correct photometry, geometric linearity, and resolution, converted a television camera into a quantitative measuring instrument. The output data are in the form of both numeric printout records and photographs.
The New Paradox of the College Textbook.
ERIC Educational Resources Information Center
Lichtenberg, James
1992-01-01
As college textbooks have become more attractive, sophisticated, and useful, the textbook industry is suffering from high costs, increased popularity of used books, effects of rapidly advancing information and instructional technology, the atypical business structure of the college textbook market, and changing textbook development processes. (MSE)
Biofiltration represents a novel strategy for controlling VOC emissions from a variety of industrial processes. As commercial applications of these systems increase, sophisticated theoretical models will be useful in establishing design criteria for providing insights into impor...
A Methodology for Distributing the Corporate Database.
ERIC Educational Resources Information Center
McFadden, Fred R.
The trend to distributed processing is being fueled by numerous forces, including advances in technology, corporate downsizing, increasing user sophistication, and acquisitions and mergers. Increasingly, the trend in corporate information systems (IS) departments is toward sharing resources over a network of multiple types of processors, operating…
Statistical mechanics of complex economies
NASA Astrophysics Data System (ADS)
Bardoscia, Marco; Livan, Giacomo; Marsili, Matteo
2017-04-01
In the pursuit of ever increasing efficiency and growth, our economies have evolved to remarkable degrees of complexity, with nested production processes feeding each other in order to create products of greater sophistication from less sophisticated ones, down to raw materials. The engine of such an expansion has been competitive markets that, according to general equilibrium theory (GET), achieve efficient allocations under specific conditions. We study large random economies within the GET framework, as templates of complex economies, and we find that a non-trivial phase transition occurs: the economy freezes in a state where all production processes collapse when either the number of primary goods or the number of available technologies falls below a critical threshold. As in other examples of phase transitions in large random systems, this is an unintended consequence of the growth in complexity. Our findings suggest that the Industrial Revolution can be regarded as a sharp transition between different phases, but also imply that well-developed economies can collapse if too many intermediate goods are introduced.
NASA Astrophysics Data System (ADS)
Vagh, Hardik A.; Baghai-Wadji, Alireza
2008-12-01
Current technological challenges in materials science and high-tech device industry require the solution of boundary value problems (BVPs) involving regions of various scales, e.g. multiple thin layers, fibre-reinforced composites, and nano/micro pores. In most cases straightforward application of standard variational techniques to BVPs of practical relevance necessarily leads to unsatisfactorily ill-conditioned analytical and/or numerical results. To remedy the computational challenges associated with sub-sectional heterogeneities various sophisticated homogenization techniques need to be employed. Homogenization refers to the systematic process of smoothing out the sub-structural heterogeneities, leading to the determination of effective constitutive coefficients. Ordinarily, homogenization involves a sophisticated averaging and asymptotic order analysis to obtain solutions. In the majority of the cases only zero-order terms are constructed due to the complexity of the processes involved. In this paper we propose a constructive scheme for obtaining homogenized solutions involving higher order terms, and thus, guaranteeing higher accuracy and greater robustness of the numerical results. We present
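The zero- and higher-order terms referred to above are those of the classical two-scale asymptotic expansion; written generically (this is the textbook form, not the authors' specific scheme):

```latex
u^{\varepsilon}(x) \;=\; u_0\!\left(x,\tfrac{x}{\varepsilon}\right)
 \;+\; \varepsilon\, u_1\!\left(x,\tfrac{x}{\varepsilon}\right)
 \;+\; \varepsilon^{2}\, u_2\!\left(x,\tfrac{x}{\varepsilon}\right) \;+\; \cdots
```

Substituting this expansion into the BVP and collecting powers of ε yields a hierarchy of cell problems; the effective constitutive coefficients come from the zero-order problem, while retaining u_1, u_2, ... supplies the higher-order corrections that the proposed scheme targets.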
Time-Lapse Acoustic Impedance Inversion in CO2 Sequestration Study (Weyburn Field, Canada)
NASA Astrophysics Data System (ADS)
Wang, Y.; Morozov, I. B.
2016-12-01
Acoustic-impedance (AI) pseudo-logs are useful for characterising subtle variations of fluid content during seismic monitoring of reservoirs undergoing enhanced oil recovery and/or geologic CO2 sequestration. However, highly accurate AI images are required for time-lapse analysis, which may be difficult to achieve with conventional inversion approaches. In this study, two enhancements of time-lapse AI analysis are proposed. First, a well-known uncertainty of AI inversion is caused by the lack of low-frequency signal in reflection seismic data. To resolve this difficulty, we utilize an integrated AI inversion approach combining seismic data, acoustic well logs and seismic-processing velocities. The use of well logs helps stabilizing the recursive AI inverse, and seismic-processing velocities are used to complement the low-frequency information in seismic records. To derive the low-frequency AI from seismic-processing velocity data, an empirical relation is determined by using the available acoustic logs. This method is simple and does not require subjective choices of parameters and regularization schemes as in the more sophisticated joint inversion methods. The second improvement to accurate time-lapse AI imaging consists in time-variant calibration of reflectivity. Calibration corrections consist of time shifts, amplitude corrections, spectral shaping and phase rotations. Following the calibration, average and differential reflection amplitudes are calculated, from which the average and differential AI are obtained. The approaches are applied to a time-lapse 3-D 3-C dataset from Weyburn CO2 sequestration project in southern Saskatchewan, Canada. High quality time-lapse AI volumes are obtained. Comparisons with traditional recursive and colored AI inversions (obtained without using seismic-processing velocities) show that the new method gives a better representation of spatial AI variations. Although only early stages of monitoring seismic data are available, time-lapse AI variations mapped within and near the reservoir zone suggest correlations with CO2 injection. By extending this procedure to elastic impedances, additional constraints on the variations of physical properties within the reservoir can be obtained.
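For reference, the recursive AI inverse mentioned above follows directly from the normal-incidence reflectivity definition (these are standard relations, not specific to this paper):

```latex
r_i \;=\; \frac{Z_{i+1} - Z_i}{Z_{i+1} + Z_i}
\qquad\Longrightarrow\qquad
Z_n \;=\; Z_0 \prod_{i=0}^{n-1} \frac{1 + r_i}{1 - r_i}
```

Any error in the band-limited reflectivity accumulates multiplicatively down the trace, which is why the low-frequency trend, supplied here by acoustic logs and seismic-processing velocities, is essential for stable absolute impedance.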
ERIC Educational Resources Information Center
Murray, Jenny; Bartelmay, Kathy
2005-01-01
Can second-grade students construct an understanding of sophisticated science processes and explore physics concepts while creating their own inventions? Yes! Students accomplished this and much more through a month-long project in which they used Legos and Robolab, the Lego computer programing software, to create their own inventions. One…
ENDOCRINE DISRUPTING COMPOUNDS: PROCESSES FOR REMOVAL FROM DRINKING WATER AND WASTEWATER
Although the list of potentially harmful substances is still being compiled and more sophisticated laboratory tests for detection of endocrine disrupting chemicals (EDCs) are being developed, an initial list of known EDCs has been made and an array of drinking water and wastewate...
A Critical Review of Some Qualitative Research Methods Used to Explore Rater Cognition
ERIC Educational Resources Information Center
Suto, Irenka
2012-01-01
Internationally, many assessment systems rely predominantly on human raters to score examinations. Arguably, this facilitates the assessment of multiple sophisticated educational constructs, strengthening assessment validity. It can introduce subjectivity into the scoring process, however, engendering threats to accuracy. The present objectives…
Agriculture and the Community: The Sociological Perspective.
ERIC Educational Resources Information Center
Heffernan, William D.; Campbell, Rex R.
Emergence of a dual agricultural system, need for sophisticated knowledge and equipment, declining importance of labor, and geographic and organizational concentration of the production and processing of certain commodities are creating changes in rural communities. While some changes will have negative social/economic impacts, the importance of…
Single-cell PCR of genomic DNA enabled by automated single-cell printing for cell isolation.
Stumpf, F; Schoendube, J; Gross, A; Rath, C; Niekrawietz, S; Koltay, P; Roth, G
2015-07-15
Single-cell analysis has developed into a key topic in cell biology with future applications in personalized medicine, tumor identification as well as tumor discovery (Editorial, 2013). Here we employ inkjet-like printing to isolate individual living human B cells (Raji cell line) and load them directly into standard PCR tubes. Single cells are optically detected in the nozzle of the microfluidic piezoelectric dispenser chip to ensure printing of droplets with single cells only. The printing process has been characterized by using microbeads (10 µm diameter), resulting in single-bead delivery in 27 out of 28 cases and a relative positional precision of ±350 µm at a printing distance of 6 mm between nozzle and tube lid. Process-integrated optical imaging made it possible to identify a printing failure as a void droplet and to exclude it from downstream processing. PCR of truly single-cell DNA was performed without pre-amplification directly from single Raji cells with a 33% success rate (N=197) and Cq values of 36.3±2.5. Additionally, single-cell whole genome amplification (WGA) was employed to pre-amplify the single-cell DNA by a factor of >1000. This facilitated subsequent PCR for the same gene, yielding a success rate of 64% (N=33), which will allow more sophisticated downstream analysis such as sequencing, electrophoresis or multiplexing.
Theory of Mind: Did Evolution Fool Us?
Devaine, Marie; Hollard, Guillaume; Daunizeau, Jean
2014-01-01
Theory of Mind (ToM) is the ability to attribute mental states (e.g., beliefs and desires) to other people in order to understand and predict their behaviour. If others are rewarded to compete or cooperate with you, then what they will do depends upon what they believe about you. This is the reason why social interaction induces recursive ToM, of the sort “I think that you think that I think, etc.”. Critically, recursion is the common notion behind the definition of sophistication of human language, strategic thinking in games, and, arguably, ToM. Although sophisticated ToM is believed to have high adaptive fitness, broad experimental evidence from behavioural economics, experimental psychology and linguistics points towards limited recursivity in representing others' beliefs. In this work, we test whether such an apparent limitation may in fact prove to be adaptive, i.e. optimal in an evolutionary sense. First, we propose a meta-Bayesian approach that can predict the behaviour of ToM sophistication phenotypes who engage in social interactions. Second, we measure their adaptive fitness using evolutionary game theory. Our main contribution is to show that one does not have to appeal to biological costs to explain our limited ToM sophistication. In fact, the evolutionary cost/benefit ratio of ToM sophistication is non-trivial. This is partly because an informational cost prevents highly sophisticated ToM phenotypes from fully exploiting less sophisticated ones (in a competitive context). In addition, cooperation surprisingly favours lower levels of ToM sophistication. Taken together, these quantitative corollaries of the “social Bayesian brain” hypothesis provide an evolutionary account for both the limitation of ToM sophistication in humans as well as the persistence of low ToM sophistication levels. PMID:24505296
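Adaptive fitness in an evolutionary-game setting is typically tracked with replicator dynamics over the competing phenotypes. A generic sketch (the payoff matrix below is hypothetical, not the paper's measured ToM payoffs):

```python
import numpy as np

def replicator_step(x: np.ndarray, A: np.ndarray, dt: float = 0.01) -> np.ndarray:
    """One Euler step of replicator dynamics dx_i/dt = x_i ((A x)_i - x.A x),
    where x holds phenotype frequencies and A is the payoff matrix."""
    fitness = A @ x
    mean_fitness = x @ fitness
    x = x + dt * x * (fitness - mean_fitness)
    return x / x.sum()  # renormalize against numerical drift

# Hypothetical payoffs for three phenotypes of increasing ToM sophistication.
A = np.array([[1.0, 0.4, 0.2],
              [1.2, 1.0, 0.5],
              [1.3, 1.1, 1.0]])
x = np.full(3, 1.0 / 3.0)
for _ in range(5000):
    x = replicator_step(x, A)
print(x)  # long-run frequencies of each phenotype
```

A phenotype's long-run frequency under such dynamics is one operational measure of its adaptive fitness in the population.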
SimHap GUI: an intuitive graphical user interface for genetic association analysis.
Carter, Kim W; McCaskie, Pamela A; Palmer, Lyle J
2008-12-25
Researchers wishing to conduct genetic association analysis involving single nucleotide polymorphisms (SNPs) or haplotypes are often confronted with the lack of user-friendly graphical analysis tools, requiring sophisticated statistical and informatics expertise to perform relatively straightforward tasks. Tools, such as the SimHap package for the R statistics language, provide the necessary statistical operations to conduct sophisticated genetic analysis, but lack a graphical user interface that allows anyone but a professional statistician to effectively utilise the tool. We have developed SimHap GUI, a cross-platform integrated graphical analysis tool for conducting epidemiological, single SNP and haplotype-based association analysis. SimHap GUI features a novel workflow interface that guides the user through each logical step of the analysis process, making it accessible to both novice and advanced users. This tool provides a seamless interface to the SimHap R package, while providing enhanced functionality such as sophisticated data checking, automated data conversion, and real-time estimations of haplotype simulation progress. SimHap GUI provides a novel, easy-to-use, cross-platform solution for conducting a range of genetic and non-genetic association analyses. This provides a free alternative to commercial statistics packages that is specifically designed for genetic association analysis.
1982-06-23
Administration Systems Research and Development Service, Washington, D.C. 20591. The work reported in this document was... consider sophisticated signal processing techniques as an alternative method of improving system performance. Some work in this area has already taken place... demands on the frequency spectrum. As noted in Table 1-1, there has been considerable work on advanced signal processing in the MLS context
Research in speech communication.
Flanagan, J
1995-01-01
Advances in digital speech processing are now supporting application and deployment of a variety of speech technologies for human/machine communication. In fact, new businesses are rapidly forming about these technologies. But these capabilities are of little use unless society can afford them. Happily, explosive advances in microelectronics over the past two decades have assured affordable access to this sophistication as well as to the underlying computing technology. The research challenges in speech processing remain in the traditionally identified areas of recognition, synthesis, and coding. These three areas have typically been addressed individually, often with significant isolation among the efforts. But they are all facets of the same fundamental issue--how to represent and quantify the information in the speech signal. This implies deeper understanding of the physics of speech production, the constraints that the conventions of language impose, and the mechanism for information processing in the auditory system. In ongoing research, therefore, we seek more accurate models of speech generation, better computational formulations of language, and realistic perceptual guides for speech processing--along with ways to coalesce the fundamental issues of recognition, synthesis, and coding. Successful solution will yield the long-sought dictation machine, high-quality synthesis from text, and the ultimate in low bit-rate transmission of speech. It will also open the door to language-translating telephony, where the synthetic foreign translation can be in the voice of the originating talker. PMID:7479806
Transformation as a Design Process and Runtime Architecture for High Integrity Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bespalko, S.J.; Winter, V.L.
1999-04-05
We have discussed two aspects of creating high integrity software that greatly benefit from the availability of transformation technology, which in this case is manifested by the requirement for a sophisticated backtracking parser. First, because of the potential for correctly manipulating programs via small changes, an automated non-procedural transformation system can be a valuable tool for constructing high assurance software. Second, modeling the process of translating data into information as a (perhaps context-dependent) grammar leads to an efficient, compact implementation. From a practical perspective, the transformation process should begin in the domain language in which a problem is initially expressed. Thus in order for a transformation system to be practical it must be flexible with respect to domain-specific languages. We have argued that transformation applied to specification results in a highly reliable system. We also attempted to briefly demonstrate that transformation technology applied to the runtime environment will result in a safe and secure system. We thus believe that the sophisticated multi-lookahead backtracking parsing technology is central to the task of being in a position to demonstrate the existence of HIS.
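The backtracking behaviour named above can be illustrated with a toy recursive-descent parser: when one grammar alternative fails, the parser rewinds to its saved input position and tries the next alternative. A minimal sketch for the hypothetical ambiguous grammar S -> 'a' S 'a' | 'a' (not the paper's grammar):

```python
def parse_s(s: str, pos: int) -> int | None:
    """Return the input position after a successful S parse, or None."""
    if pos < len(s) and s[pos] == 'a':
        mid = parse_s(s, pos + 1)          # try alternative 1: 'a' S 'a'
        if mid is not None and mid < len(s) and s[mid] == 'a':
            return mid + 1
        return pos + 1                     # backtrack to alternative 2: 'a'
    return None

print(parse_s("aaa", 0) == 3)  # True: 'a S a' with the inner S matching one 'a'
```

A parser committed to a single fixed alternative would reject some sentences this grammar generates; saving and restoring the position is what makes the ambiguity tractable, at the cost of potentially re-scanning input.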