Quantum Image Processing and Its Application to Edge Detection: Theory and Experiment
NASA Astrophysics Data System (ADS)
Yao, Xi-Wei; Wang, Hengyan; Liao, Zeyang; Chen, Ming-Cheng; Pan, Jian; Li, Jun; Zhang, Kechao; Lin, Xingcheng; Wang, Zhehui; Luo, Zhihuang; Zheng, Wenqiang; Li, Jianzhong; Zhao, Meisheng; Peng, Xinhua; Suter, Dieter
2017-07-01
Processing of digital images is continuously gaining in volume and relevance, with concomitant demands on data storage, transmission, and processing power. Encoding and processing the image information in quantum-mechanical systems instead of classical ones may alleviate some of these challenges. Here we demonstrate a framework of quantum image processing in which a pure quantum state encodes the image information: the pixel values are encoded in the probability amplitudes and the pixel positions in the computational basis states. Our quantum image representation reduces the required number of qubits compared to existing implementations, and we present image processing algorithms that provide exponential speed-up over their classical counterparts. For the commonly used task of detecting the edges of an image, we propose and implement a quantum algorithm that completes the task with only one single-qubit operation, independent of the size of the image. This demonstrates the potential of quantum image processing for highly efficient image and video processing in the big data era.
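As a point of reference for the single-qubit edge-detection idea, the following is a minimal classical (numpy) simulation of the underlying amplitude-encoding trick, not the authors' experiment: a Hadamard on the least significant qubit turns adjacent-pixel pairs into sums and differences, so the difference amplitudes mark edges. Function and variable names are illustrative; the full algorithm also processes a shifted copy of the image to catch edges between the remaining pixel pairs.

```python
import numpy as np

def hadamard_edge_sketch(img_row):
    """Classically simulate amplitude-encoded edge detection: after a
    Hadamard on the least significant qubit, the 'odd' amplitudes hold
    (c_2k - c_2k+1)/sqrt(2), i.e. differences of adjacent pixels."""
    c = img_row.astype(float)
    c = c / np.linalg.norm(c)                 # amplitude encoding (unit norm)
    pairs = c.reshape(-1, 2)                  # pairs sharing all high qubits
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    mixed = pairs @ H.T                       # Hadamard on the last qubit
    return np.abs(mixed[:, 1])                # difference terms = edge signal

row = np.array([10, 10, 10, 200, 200, 200, 10, 10])
print(hadamard_edge_sketch(row))              # peaks at the intensity jump
```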
Joint Processing of Envelope Alignment and Phase Compensation for Isar Imaging
NASA Astrophysics Data System (ADS)
Chen, Tao; Jin, Guanghu; Dong, Zhen
2018-04-01
Range envelope alignment and phase compensation are split into two isolated steps in the classical methods of translational motion compensation in Inverse Synthetic Aperture Radar (ISAR) imaging. In the classical method of rotating-object imaging, the two reference points used for envelope alignment and for Phase Difference (PD) estimation are probably not the same point, making it difficult to decouple the coupling term when performing the correction of Migration Through Resolution Cells (MTRC). In this paper, an improved joint-processing approach is proposed that selects a certain scattering point as the sole reference point, using the Prominent Point Processing (PPP) method. To this end, we first obtain an initial image using classical methods, from which a certain scattering point can be chosen. Envelope alignment and phase compensation are subsequently conducted using the selected scattering point as the common reference point. The keystone transform can then be smoothly applied to further improve imaging quality. Both simulation experiments and real data processing are provided to demonstrate the performance of the proposed method compared with the classical method.
OSM-Classic: An optical imaging technique for accurately determining strain
NASA Astrophysics Data System (ADS)
Aldrich, Daniel R.; Ayranci, Cagri; Nobes, David S.
OSM-Classic is a program designed in MATLAB® to provide a method of accurately determining strain in a test sample using an optical imaging technique. Measuring strain for the mechanical characterization of materials is most commonly performed with extensometers, LVDTs (linear variable differential transformers), and strain gauges; however, these strain measurement methods suffer from their fragile nature, and it is not particularly easy to attach these devices to the material for testing. To alleviate these potential problems, an optical approach that does not require contact with the specimen can be implemented to measure the strain. OSM-Classic is a software package that interrogates a series of images to determine elongation in a test sample and hence the strain of the specimen. It was designed to provide a graphical user interface that includes image processing with a dynamic region of interest. Additionally, the strain is calculated directly while providing active feedback during the processing.
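OSM-Classic itself is written in MATLAB®, but the core optical strain measurement reduces to tracking marker positions across frames and taking the relative change of the gauge length. A minimal sketch of that calculation, with hypothetical marker arrays, might look as follows:

```python
import numpy as np

def engineering_strain(marker_a, marker_b):
    """Strain from two tracked marker positions over a series of frames.
    marker_a, marker_b: (n_frames, 2) arrays of pixel coordinates; the
    first frame is taken as the undeformed reference."""
    lengths = np.linalg.norm(marker_b - marker_a, axis=1)
    return (lengths - lengths[0]) / lengths[0]

a = np.array([[100.0, 50.0], [100.1, 50.0], [100.2, 50.0]])
b = np.array([[300.0, 50.0], [300.9, 50.0], [301.8, 50.0]])
print(engineering_strain(a, b))   # [0.    0.004 0.008]
```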
Quantum realization of the bilinear interpolation method for NEQR.
Zhou, Ri-Gui; Hu, Wenwen; Fan, Ping; Ian, Hou
2017-05-31
In recent years, quantum image processing has been one of the most active fields in quantum computation and quantum information. Image scaling, a kind of geometric image transformation, has been widely studied and applied in classical image processing; however, a quantum version did not yet exist. This paper is concerned with the feasibility of classical bilinear interpolation based on the novel enhanced quantum image representation (NEQR). Firstly, the feasibility of bilinear interpolation for NEQR is proven. Then the concrete quantum circuits of bilinear interpolation, including scaling up and scaling down for NEQR, are given by using the multi-controlled NOT operation, the special add-one operation, the reverse parallel adder, the parallel subtractor, and the multiplier and division operations. Finally, the complexity of the quantum network circuit is analyzed in terms of the basic quantum gates. Simulation results show that an image scaled up using bilinear interpolation is clearer and less distorted than one produced by nearest-neighbor interpolation.
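For reference, the classical operation that the NEQR circuits implement with quantum arithmetic modules is plain bilinear interpolation; a minimal numpy sketch (illustrative names, not from the paper):

```python
import numpy as np

def bilinear_resize(img, new_h, new_w):
    """Classical bilinear interpolation: each output pixel is a weighted
    average of the four nearest input pixels."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, new_h)
    xs = np.linspace(0, w - 1, new_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    top = (1 - wx) * img[np.ix_(y0, x0)] + wx * img[np.ix_(y0, x1)]
    bot = (1 - wx) * img[np.ix_(y1, x0)] + wx * img[np.ix_(y1, x1)]
    return (1 - wy) * top + wy * bot

print(bilinear_resize(np.array([[0.0, 10.0], [20.0, 30.0]]), 3, 3))
```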
Imaging learning and memory: classical conditioning.
Schreurs, B G; Alkon, D L
2001-12-15
The search for the biological basis of learning and memory has, until recently, been constrained by the limits of technology to classic anatomic and electrophysiologic studies. With the advent of functional imaging, we have begun to delve into what, for many, was a "black box." We review several different types of imaging experiments, including steady state animal experiments that image the functional labeling of fixed tissues, and dynamic human studies based on functional imaging of the intact brain during learning. The data suggest that learning and memory involve a surprising conservation of mechanisms and the integrated networking of a number of structures and processes. Copyright 2001 Wiley-Liss, Inc.
Iplt--image processing library and toolkit for the electron microscopy community.
Philippsen, Ansgar; Schenk, Andreas D; Stahlberg, Henning; Engel, Andreas
2003-01-01
We present the foundation for establishing a modular, collaborative, integrated, open-source architecture for image processing of electron microscopy images, named iplt. It is designed around object-oriented paradigms and implemented using the programming languages C++ and Python. In many aspects it deviates from classical image processing approaches. This paper intends to motivate developers within the community to participate in this ongoing project. The iplt homepage can be found at http://www.iplt.org.
Ciulla, Carlo; Veljanovski, Dimitar; Rechkoska Shikoska, Ustijana; Risteski, Filip A
2015-11-01
This research presents signal-image post-processing techniques called Intensity-Curvature Measurement Approaches with application to the diagnosis of human brain tumors detected through Magnetic Resonance Imaging (MRI). Post-processing of the MRI of the human brain encompasses the following model functions: (i) bivariate cubic polynomial, (ii) bivariate cubic Lagrange polynomial, (iii) monovariate sinc, and (iv) bivariate linear. The following Intensity-Curvature Measurement Approaches were used: (i) classic-curvature, (ii) signal resilient to interpolation, (iii) intensity-curvature measure and (iv) intensity-curvature functional. The results revealed that the classic-curvature, the signal resilient to interpolation and the intensity-curvature functional are able to add information useful to the diagnosis carried out with MRI. The contributions of our study to MRI diagnosis are: (i) the enhanced gray-level scale of the tumor mass and the well-behaved representation of the tumor provided through the signal resilient to interpolation, and (ii) the visually perceptible third dimension perpendicular to the image plane provided through the classic-curvature and the intensity-curvature functional.
Khrennikov, Andrei
2011-09-01
We propose a model of quantum-like (QL) processing of mental information. This model is based on quantum information theory. However, in contrast to models of the "quantum physical brain" that reduce mental activity (at least at the highest level) to quantum physical phenomena in the brain, our model matches well the basic neuronal paradigm of cognitive science. QL information processing is based (surprisingly) on classical electromagnetic signals induced by the joint activity of neurons. This novel approach to quantum information is based on the representation of quantum mechanics as a version of classical signal theory, recently elaborated by the author. The brain uses the QL representation (QLR) for working with abstract concepts; concrete images are described by classical information theory. The two processes, classical and QL, are performed in parallel. Moreover, information is actively transmitted from one representation to the other. A QL concept, given in our model by a density operator, can generate a variety of concrete images given by temporal realizations of the corresponding (Gaussian) random signal. This signal has a covariance operator coinciding with the density operator encoding the abstract concept under consideration. The presence of various temporal scales in the brain plays a crucial role in the creation of the QLR in the brain. Moreover, in our model electromagnetic noise produced by neurons is a source of superstrong QL correlations between processes in different spatial domains of the brain; the binding problem is solved on the QL level, but with the aid of the classical background fluctuations. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Image model: new perspective for image processing and computer vision
NASA Astrophysics Data System (ADS)
Ziou, Djemel; Allili, Madjid
2004-05-01
We propose a new image model in which the image support and image quantities are modeled using algebraic topology concepts. The image support is viewed as a collection of chains encoding combinations of pixels grouped by dimension and linking different dimensions with the boundary operators. Image quantities are encoded using the notion of a cochain, which associates with the pixels of a given dimension values that can be scalar, vector, or tensor depending on the problem considered. This allows algebraic equations to be obtained directly from the physical laws. The coboundary and codual operators, which are generic operations on cochains, make it possible to formulate the classical differential operators as applied to field functions and differential forms, in both global and local forms. This image model makes the association between the image support and the image quantities explicit, which results in several advantages: it allows the derivation of efficient algorithms that operate in any dimension and the unification of mathematics and physics to solve classical problems in image processing and computer vision. We show the effectiveness of this model by considering isotropic diffusion.
Woźniak, Krzysztof; Moskała, Artur; Urbanik, Andrzej; Kopacz, Paweł; Kłys, Małgorzata
2009-01-01
The techniques employed in "classic" forensic autopsy have been virtually unchanged for many years. One of the fundamental purposes of forensic documentation is to register as objectively as possible the changes found by forensic pathologists. The authors present the review of techniques of postmortem imaging studies, which aim not only at increased objectivity of observations, but also at extending the scope of the registered data. The paper is illustrated by images originating from research carried out by the authors.
Quantum image median filtering in the spatial domain
NASA Astrophysics Data System (ADS)
Li, Panchi; Liu, Xiande; Xiao, Hong
2018-03-01
Spatial filtering is one principal tool used in image processing for a broad spectrum of applications. Median filtering has become a prominent representative of spatial filtering because of its excellent performance in noise reduction. Although filtering of quantum images in the frequency domain has been described in the literature, and there is a one-to-one correspondence between linear spatial filters and filters in the frequency domain, median filtering is a nonlinear process that cannot be achieved in the frequency domain. We therefore investigate the spatial filtering of quantum images, focusing on the design of a quantum median filter and its application to image de-noising. To this end, we first present the quantum circuits for three basic modules (Cycle Shift, Comparator, and Swap), and then design two composite modules (Sort and Median Calculation). We next construct a complete quantum circuit that implements the median filtering task and present the results of several simulation experiments on grayscale images with different noise patterns. Although the experimental results show that the proposed scheme has almost the same noise suppression capacity as its classical counterpart, the complexity analysis shows that it reduces the computational complexity of the classical median filter from exponential in the image size n to a second-order polynomial in n, thereby speeding up the classical method.
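The classical counterpart that the Sort and Median Calculation modules reproduce is the ordinary sliding-window median filter; a compact numpy sketch for comparison (illustrative, not the paper's circuit):

```python
import numpy as np

def median_filter(img, k=3):
    """k x k median filter: replace each pixel by the median of its
    neighbourhood, which removes impulse ('salt and pepper') noise."""
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

noisy = np.array([[10, 10, 10], [10, 255, 10], [10, 10, 10]])
print(median_filter(noisy))        # the central impulse is removed
```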
MRI Superresolution Using Self-Similarity and Image Priors
Manjón, José V.; Coupé, Pierrick; Buades, Antonio; Collins, D. Louis; Robles, Montserrat
2010-01-01
In typical clinical Magnetic Resonance Imaging settings, both low- and high-resolution images of different types are routinely acquired. In some cases, the acquired low-resolution images have to be upsampled to match other high-resolution images for posterior analysis or post-processing such as registration or multimodal segmentation. However, classical interpolation techniques are not able to recover the high-frequency information lost during the acquisition process. In the present paper, a new superresolution method is proposed to reconstruct high-resolution images from low-resolution ones using information from coplanar high-resolution images acquired from the same subject. Furthermore, the reconstruction process is constrained to be physically consistent with the MR acquisition model, which allows a meaningful interpretation of the results. Experiments on synthetic and real data are supplied to show the effectiveness of the proposed approach. A comparison with classical state-of-the-art interpolation techniques is presented to demonstrate the improved performance of the proposed methodology. PMID:21197094
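The acquisition-model constraint mentioned above can be illustrated with a small iterative back-projection step: the high-resolution estimate is corrected until its block average reproduces the observed low-resolution image. This is a sketch of the consistency idea only, under an assumed block-averaging acquisition model, not the authors' self-similarity reconstruction:

```python
import numpy as np

def downsample(x, f):              # block averaging: stand-in acquisition model
    h, w = x.shape
    return x.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def upsample(x, f):                # nearest-neighbour spreading
    return np.repeat(np.repeat(x, f, axis=0), f, axis=1)

def enforce_consistency(hr_estimate, lr_observed, f, n_iter=5):
    """Correct an HR estimate so its block average matches the LR data.
    With these simple operators one pass is exact; smoother resampling
    operators require iterating the correction."""
    x = hr_estimate.astype(float).copy()
    for _ in range(n_iter):
        x += upsample(lr_observed - downsample(x, f), f)
    return x
```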
Quantum computation in the analysis of hyperspectral data
NASA Astrophysics Data System (ADS)
Gomez, Richard B.; Ghoshal, Debabrata; Jayanna, Anil
2004-08-01
Recent research on the topic of quantum computation provides us with quantum algorithms offering higher efficiency and speed-up compared with their classical counterparts. In this paper, we report the results of our investigation of several applications of such quantum algorithms, especially Grover's search algorithm, in the analysis of hyperspectral data. We found many parallels with Grover's method in existing data processing work that makes use of classical spectral matching algorithms. Our efforts also included the study of several methods dealing with hyperspectral image analysis in which classical computation methods involving large data sets could be replaced with quantum computation methods. The crux of the problem in computation involving a hyperspectral image data cube is converting the large amount of data in high-dimensional space into real information. Currently, using the classical model, different time-consuming methods and steps are necessary to analyze these data, including animation, the Minimum Noise Fraction transform, the Pixel Purity Index algorithm, N-dimensional scatter plots, and identification of endmember spectra. If a quantum model of computation involving hyperspectral image data can be developed and formalized, it is highly likely that information retrieval from hyperspectral image data cubes would be a much easier process and the final information content would be much more meaningful and timely. In this case, dimensionality would not be a curse, but a blessing.
Classical Photogrammetry and UAV - Selected Aspects
NASA Astrophysics Data System (ADS)
Mikrut, S.
2016-06-01
UAV technology seems to be highly future-oriented due to its low costs compared with traditional aerial images taken from classical photogrammetric aircraft. The AGH University of Science and Technology in Cracow - Department of Geoinformation, Photogrammetry and Environmental Remote Sensing focuses mainly on the geometry and radiometry of recorded images. Various scientific research centres all over the world have been conducting the relevant research for years. The paper presents selected aspects of processing digital images made with UAV technology. Using a practical example, it compares a digital image taken from an airborne (classical) height with one made from UAV level. The author seeks to answer the question: to what extent does UAV technology diverge today from classical photogrammetry, and what are the advantages and disadvantages of both methods? The flight plan was made over the Tokarnia Village Museum (more than 0.5 km2) for two separate flights: the first was made by a UAV - the System FT-03A built by FlyTech Solution Ltd.; the second with a classical photogrammetric Cessna aircraft equipped with an airborne photogrammetric camera (UltraCam Eagle). Both sets of photographs were taken with a pixel size of about 3 cm, in order to have reliable data allowing the two systems to be compared. Aerotriangulation was carried out independently for the two flights, the DTM was generated automatically, and the last step was the generation of an orthophoto. The geometry of the images was checked during the aerotriangulation process. To compare the accuracy of the two flights, control and check points were used and RMSEs were calculated. The radiometry was checked by a visual method and using the author's own feature-extraction algorithm (to define edges with subpixel accuracy). After initial pre-processing of the data, the images were put together and shown side by side. Buildings and strips on the road were selected from the whole data set for the comparison of edges and details. The details on UAV images were not worse than those on classical photogrammetric ones, and one might suppose that they were also geometrically correct; the results of aerotriangulation confirm this, with final results at the level of RMS = 1 pixel (about 3 cm). In general, it can be said that photographs from UAVs are not worse than classical ones. In the author's opinion, geometric and radiometric qualities are at a similar level for this kind of area (a small village). This is a very significant result as regards mapping: it means that UAV data can be used in mapping production.
[Application of computed tomography (CT) examination for forensic medicine].
Urbanik, Andrzej; Chrzan, Robert
2013-01-01
The aim of the study is to present our own experience with the use of post-mortem CT examination in forensic medicine. 181 corpses were examined with a 16-slice CT scanner. The imaging data obtained during acquisition were then processed with dedicated programs. Images were analyzed as axial sections, multiplanar reconstructions, and 3D reconstructions. The information gained was of great help when the classical autopsy was performed, making it more accurate. CT images recorded digitally make it possible to evaluate corpses at any time, despite processes of putrefaction or cremation. If possible, CT examination should precede the classical autopsy.
(Pea)nuts and Bolts of Visual Narrative: Structure and Meaning in Sequential Image Comprehension
ERIC Educational Resources Information Center
Cohn, Neil; Paczynski, Martin; Jackendoff, Ray; Holcomb, Phillip J.; Kuperberg, Gina R.
2012-01-01
Just as syntax differentiates coherent sentences from scrambled word strings, the comprehension of sequential images must also use a cognitive system to distinguish coherent narrative sequences from random strings of images. We conducted experiments analogous to two classic studies of language processing to examine the contributions of narrative…
Second Iteration of Photogrammetric Pipeline to Enhance the Accuracy of Image Pose Estimation
NASA Astrophysics Data System (ADS)
Nguyen, T. G.; Pierrot-Deseilligny, M.; Muller, J.-M.; Thom, C.
2017-05-01
In the classical photogrammetric processing pipeline, automatic tie point extraction plays a key role in the quality of the achieved results. Image tie points are crucial to pose estimation and have a significant influence on the precision of the calculated orientation parameters. Therefore, both the relative and absolute orientations of the 3D model can be affected. By improving the precision of image tie point measurement, one can enhance the quality of image orientation. The quality of image tie points is influenced by several factors, such as their multiplicity, measurement precision, and distribution in the 2D images as well as in the 3D scene. In complex acquisition scenarios such as indoor applications and oblique aerial images, tie point extraction is limited because only image information can be exploited. Hence, we propose a method which improves the precision of pose estimation in complex scenarios by adding a second iteration to the classical processing pipeline. The result of a first iteration is used as a priori information to guide the extraction of new tie points with better quality. Evaluated on multiple case studies, the proposed method shows its validity and its high potential for precision improvement.
NASA Technical Reports Server (NTRS)
Chien, S.
1994-01-01
This paper describes work on the Multimission VICAR Planner (MVP) system to automatically construct executable image processing procedures for custom image processing requests at the JPL Multimission Image Processing Lab (MIPL). This paper focuses on two issues. First, large search spaces caused by complex plans required the use of hand-encoded control information. In order to address this in a manner similar to that used by human experts, MVP uses a decomposition-based planner to implement hierarchical/skeletal planning at the higher level and then uses a classical operator-based planner to solve subproblems in contexts defined by the high-level decomposition.
Memory preservation made prestigious but easy
NASA Astrophysics Data System (ADS)
Fageth, Reiner; Debus, Christina; Sandhaus, Philipp
2011-01-01
Preserving memories combined with story-telling, using either photo books for multiple images or high-quality products such as one or a few images printed on canvas or mounted on acrylic to create high-quality wall decorations, is gradually becoming more popular than classical 4x6 prints and classical silver halide posters. Digital printing via electrophotography and ink jet is increasingly replacing classical silver halide technology as the dominant production technology for these kinds of products. Maintaining a consistent and comparable quality of output is more challenging than with silver halide paper for both prints and posters. This paper describes a unique approach combining desktop-based software to initiate a compelling project with the use of online capabilities to finalize and optimize that project in an online environment through a community process. A comparison of consumer behavior between online and desktop-based solutions for generating photo books is also presented.
Classical and neural methods of image sequence interpolation
NASA Astrophysics Data System (ADS)
Skoneczny, Slawomir; Szostakowski, Jaroslaw
2001-08-01
An image interpolation problem is often encountered in many areas. Some examples are interpolation in the coding/decoding process for transmission purposes, reconstruction of a full frame from two interlaced sub-frames in normal TV or HDTV, and reconstruction of missing frames in old, damaged cinematic sequences. In this paper an overview of interframe interpolation methods is presented. Both direct and motion-compensated interpolation techniques are illustrated with examples. The methodology can be either classical or based on neural networks, depending on the demands of the specific interpolation problem.
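A minimal sketch of the two families of techniques surveyed, direct temporal averaging versus motion-compensated averaging with a (here, global and assumed known) motion vector:

```python
import numpy as np

def interpolate_frame(f0, f1, mv=(0, 0), t=0.5):
    """Interpolate a frame at fraction t between f0 and f1.
    mv is a single global motion vector (dy, dx) from f0 to f1; real
    methods estimate a dense or block-wise motion field instead."""
    dy, dx = mv
    f0_mc = np.roll(f0, (int(t * dy), int(t * dx)), axis=(0, 1))
    f1_mc = np.roll(f1, (-int((1 - t) * dy), -int((1 - t) * dx)), axis=(0, 1))
    direct = (1 - t) * f0 + t * f1              # direct temporal average
    compensated = (1 - t) * f0_mc + t * f1_mc   # motion-compensated average
    return direct, compensated
```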
Restoration of motion blurred images
NASA Astrophysics Data System (ADS)
Gaxiola, Leopoldo N.; Juarez-Salazar, Rigoberto; Diaz-Ramirez, Victor H.
2017-08-01
Image restoration is a classic problem in image processing. Image degradations can occur for several reasons, for instance, imperfections of imaging systems, quantization errors, atmospheric turbulence, and relative motion between the camera and objects, among others. Motion blur is a typical degradation in dynamic imaging systems. In this work, we present a method to estimate the parameters of linear motion blur degradation from a captured blurred image. The proposed method is based on analyzing the frequency spectrum of the captured image in order first to estimate the degradation parameters, and then to restore the image with a linear filter. The performance of the proposed method is evaluated by processing synthetic and real-life images. The obtained results are characterized in terms of the accuracy of image restoration given by an objective criterion.
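Once the blur length and direction have been estimated from the spectrum, the restoration step can be any linear filter; a common choice is Wiener deconvolution. A sketch under the assumption of horizontal linear motion (names and the regularization constant are illustrative):

```python
import numpy as np

def motion_psf(length, shape):
    """Horizontal linear-motion point-spread function (origin-anchored)."""
    psf = np.zeros(shape)
    psf[0, :length] = 1.0 / length
    return psf

def wiener_restore(blurred, psf, k=0.01):
    """Wiener deconvolution: F = conj(H) / (|H|^2 + k) * G, where k
    absorbs the (unknown) noise-to-signal power ratio."""
    H = np.fft.fft2(psf)
    G = np.fft.fft2(blurred)
    F = np.conj(H) / (np.abs(H) ** 2 + k) * G
    return np.real(np.fft.ifft2(F))
```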
Atomic Scale Imaging of Nucleation and Growth Trajectories of an Interfacial Bismuth Nanodroplet.
Li, Yingxuan; Bunes, Benjamin R; Zang, Ling; Zhao, Jie; Li, Yan; Zhu, Yunqing; Wang, Chuanyi
2016-02-23
Because of the lack of experimental evidence, much confusion still exists on the nucleation and growth dynamics of a nanostructure, particularly of metal. The situation is even worse for nanodroplets because it is more difficult to induce the formation of a nanodroplet while imaging the dynamic process with atomic resolution. Here, taking advantage of an electron beam to induce the growth of Bi nanodroplets on a SrBi2Ta2O9 platelet under a high resolution transmission electron microscope (HRTEM), we directly observed the detailed growth pathways of Bi nanodroplets from the earliest stage of nucleation that were previously inaccessible. Atomic scale imaging reveals that the dynamics of nucleation involves a much more complex trajectory than previously predicted based on classical nucleation theory (CNT). The monatomic Bi layer was first formed in the nucleation process, which induced the formation of the prenucleated clusters. Following that, critical nuclei for the nanodroplets formed both directly from the addition of atoms to the prenucleated clusters by the classical growth process and indirectly through transformation of an intermediate liquid film based on the Stranski-Krastanov growth mode, in which the liquid film was induced by the self-assembly of the prenucleated clusters. Finally, the growth of the Bi nanodroplets advanced through the classical pathway and sudden droplet coalescence. This study allows us to visualize the critical steps in the nucleation process of an interfacial nanodroplet, which suggests a revision of the perspective of CNT.
Analysis and improvement of the quantum image matching
NASA Astrophysics Data System (ADS)
Dang, Yijie; Jiang, Nan; Hu, Hao; Zhang, Wenyin
2017-11-01
We investigate the quantum image matching algorithm proposed by Jiang et al. (Quantum Inf Process 15(9):3543-3572, 2016). Although the complexity of this algorithm is much better than that of the classical exhaustive algorithm, there may be an error in it: after matching the area between two images, only the pixel at the upper left corner of the matched area plays a part in the following steps. That is to say, the paper only matched one pixel, instead of an area. If more than one pixel in the big image is the same as the one at the upper left corner of the small image, the algorithm will randomly measure one of them, which causes the error. In this paper, an improved version is presented which takes full advantage of the whole matched area to locate a small image in a big image. The theoretical analysis indicates that the network complexity is higher than that of the previous algorithm, but it is still far lower than that of the classical algorithm. Hence, this algorithm is still efficient.
A comparison of classical histology to anatomy revealed by hard x-rays
NASA Astrophysics Data System (ADS)
Richter, Claus-Peter; Tan, Xiaodong; Young, Hunter; Stock, Stuart; Robinson, Alan; Byskosh, Orest; Zheng, Jing; Soriano, Carmen; Xiao, Xianghui; Whitlon, Donna
2016-10-01
Many diseases trigger morphological changes in affected tissue. Today, classical histology is still the "gold standard" used to study and describe those changes. Classical histology, however, is time consuming and requires chemical tissue manipulations that can result in significant tissue distortions. It is sometimes difficult to separate tissue-processing artifacts from changes caused by the disease process. We show that synchrotron X-ray phase-contrast micro-computed tomography (micro-CT) can be used to examine non-embedded, hydrated tissue at a resolution comparable to that obtained with classical histology. The data analysis from stacks of reconstructed micro-CT images is more flexible and faster than when using the classical, physically embedded sections that are by necessity fixed in a particular orientation. We show that in a three-dimensional (3D) structure with meticulous structural details such as the cochlea and the kidney, micro-CT is more flexible, faster and more convenient for morphological studies and disease diagnoses.
Adaptive marginal median filter for colour images.
Morillas, Samuel; Gregori, Valentín; Sapena, Almanzor
2011-01-01
This paper describes a new filter for impulse noise reduction in colour images which is aimed at improving the noise reduction capability of the classical vector median filter. The filter is inspired by the application of a vector marginal median filtering process over a selected group of pixels in each filtering window. This selection, which is based on the vector median, along with the application of the marginal median operation constitutes an adaptive process that leads to a more robust filter design. Also, the proposed method is able to process colour images without introducing colour artifacts. Experimental results show that the images filtered with the proposed method contain less noisy pixels than those obtained through the vector median filter.
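The classical vector median that the proposed filter builds on picks, within each window, the colour sample whose summed distance to all other samples is minimal; a compact numpy sketch:

```python
import numpy as np

def vector_median(window):
    """Vector median of colour vectors: the sample minimizing the sum of
    L2 distances to every other sample in the window."""
    d = np.linalg.norm(window[:, None, :] - window[None, :, :], axis=2)
    return window[np.argmin(d.sum(axis=1))]

win = np.array([[250, 10, 10], [10, 250, 10], [12, 248, 11],
                [11, 251, 9], [9, 249, 12]])  # one red impulse among greens
print(vector_median(win))                     # a green vector is selected
```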
Image charge effects on electron capture by dust grains in dusty plasmas.
Jung, Y D; Tawara, H
2001-07-01
Electron-capture processes by negatively charged dust grains from hydrogenic ions in dusty plasmas are investigated in accordance with the classical Bohr-Lindhard model. The attractive interaction between the electron in a hydrogenic ion and its own image charge inside the dust grain is included to obtain the total interaction energy between the electron and the dust grain. The electron-capture radius is determined by the total interaction energy and the kinetic energy of the released electron in the frame of the projectile dust grain. The classical straight-line trajectory approximation is applied to the motion of the ion in order to visualize the electron-capture cross section as a function of the impact parameter, kinetic energy of the projectile ion, and dust charge. It is found that the image charge inside the dust grain plays a significant role in the electron-capture process near the surface of the dust grain. The electron-capture cross section is found to be quite sensitive to the collision energy and dust charge.
Multimodal evaluation of ultra-short laser pulses treatment for skin burn injuries.
Santos, Moises Oliveira Dos; Latrive, Anne; De Castro, Pedro Arthur Augusto; De Rossi, Wagner; Zorn, Telma Maria Tenorio; Samad, Ricardo Elgul; Freitas, Anderson Zanardi; Cesar, Carlos Lenz; Junior, Nilson Dias Vieira; Zezell, Denise Maria
2017-03-01
Thousands of people die every year from burn injuries. The aim of this study is to evaluate the feasibility of high-intensity femtosecond lasers as an auxiliary treatment of skin burns. We used an in vivo animal model and monitored the healing process using four different imaging modalities: histology, Optical Coherence Tomography (OCT), Second Harmonic Generation (SHG), and Fourier Transform Infrared (FTIR) spectroscopy. Three dorsal areas of 20 anesthetized Wistar rats were burned by water vapor exposure and subsequently treated either by classical surgical debridement, by laser ablation, or left without treatment. Skin burn tissues were non-invasively characterized by OCT imaging and biopsied for further histopathology analysis, SHG imaging, and FTIR spectroscopy at 3, 5, 7, and 14 days after burn. The laser protocol was found to be as efficient as the classical treatment for promoting the healing process. The study concludes with the validation of femtosecond ultra-short-pulse laser treatment for skin burns, with the advantage of minimizing operative trauma.
Noise reduction and image enhancement using a hardware implementation of artificial neural networks
NASA Astrophysics Data System (ADS)
David, Robert; Williams, Erin; de Tremiolles, Ghislain; Tannhof, Pascal
1999-03-01
In this paper, we present a neural-based solution developed for noise reduction and image enhancement using the ZISC, an IBM hardware processor which implements the Restricted Coulomb Energy algorithm and the K-Nearest Neighbor algorithm. Artificial neural networks offer the advantages of reduced processing time in comparison with classical models, adaptability, and the weighted property of pattern learning. The goal of the developed application is image enhancement in order to restore old movies (noise reduction, focus correction, etc.), to improve digital television images, or to treat images which require adaptive processing (medical images, spatial images, special effects, etc.). Image results show a quantitative improvement over the noisy image as well as the efficiency of the system. Further enhancements are being examined to improve the output of the system.
Positron Emission Tomography in Cochlear Implant and Auditory Brainstem Implant Recipients.
ERIC Educational Resources Information Center
Miyamoto, Richard T.; Wong, Donald
2001-01-01
Positron emission tomography imaging was used to evaluate the brain's response to auditory stimulation, including speech, in deaf adults (five with cochlear implants and one with an auditory brainstem implant). Functional speech processing was associated with activation in areas classically associated with speech processing. (Contains five…
NASA Technical Reports Server (NTRS)
Caron, R. H.; Rifman, S. S.; Simon, K. W.
1974-01-01
The development of an ERTS/MSS image processing system responsive to the needs of the user community is discussed. An overview of the TRW ERTS/MSS processor is presented, followed by a more detailed discussion of the image processing functions satisfied by the system. The particular functions chosen for discussion evolved from advanced signal processing techniques rooted in the areas of communication and control. These examples show how classical aerospace technology can be transferred to solve the more contemporary problems confronting the users of spaceborne imagery.
Ghost imaging of phase objects with classical incoherent light
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shirai, Tomohiro; Setaelae, Tero; Friberg, Ari T.
2011-10-15
We describe an optical setup for performing spatial Fourier filtering in ghost imaging with classical incoherent light. This is achieved by a modification of the conventional geometry for lensless ghost imaging. It is shown on the basis of classical coherence theory that with this technique one can realize what we call phase-contrast ghost imaging to visualize pure phase objects.
Fixed-Cell Imaging of Schizosaccharomyces pombe.
Hagan, Iain M; Bagley, Steven
2016-07-01
The acknowledged genetic malleability of fission yeast has been matched by impressive cytology to drive major advances in our understanding of basic molecular cell biological processes. In many of the more recent studies, traditional approaches of fixation followed by processing to accommodate classical staining procedures have been superseded by live-cell imaging approaches that monitor the distribution of fusion proteins between a molecule of interest and a fluorescent protein. Although such live-cell imaging is uniquely informative for many questions, fixed-cell imaging remains the better option for others and is an important, sometimes critical, complement to the analysis of fluorescent fusion proteins by live-cell imaging. Here, we discuss the merits of fixed- and live-cell imaging as well as specific issues for fluorescence microscopy imaging of fission yeast. © 2016 Cold Spring Harbor Laboratory Press.
Image processing and 3D visualization in forensic pathologic examination
NASA Astrophysics Data System (ADS)
Oliver, William R.; Altschuler, Bruce R.
1996-02-01
The use of image processing is becoming increasingly important in the evaluation of violent crime. While much work has been done in the use of these techniques for forensic purposes outside of forensic pathology, their use in the pathologic examination of wounding has been limited. We are investigating the use of image processing and three-dimensional visualization in the analysis of patterned injuries and tissue damage. While image processing will never replace classical understanding and interpretation of how injuries develop and evolve, it can be a useful tool in helping an observer notice features in an image, may help provide correlation of surface to deep tissue injury, and may provide a mechanism for the development of a metric for analyzing how likely it is that a given object caused a given wound. We are also exploring methods of acquiring three-dimensional data for such measurements, which is the subject of a second paper.
Ultrasonic Imaging Techniques for Breast Cancer Detection
NASA Astrophysics Data System (ADS)
Goulding, N. R.; Marquez, J. D.; Prewett, E. M.; Claytor, T. N.; Nadler, B. R.
2008-02-01
Improving the resolution and specificity of current ultrasonic imaging technology is needed to enhance its relevance to breast cancer detection. A novel ultrasonic imaging reconstruction method is described that exploits classical straight-ray migration. This method improves signal processing for better image resolution and uses novel staging hardware options with a pulse-echo approach. A breast phantom with various inclusions is imaged using the classical migration method and compared to standard computed tomography (CT) scans. These innovative ultrasonic methods incorporate ultrasound data acquisition, beam profile characterization, and image reconstruction. For an ultrasonic frequency of 2.25 MHz, imaged inclusions of approximately 1 cm are resolved and identified; better resolution is expected with minor modifications. Improved image quality and resolution enable earlier detection and more accurate diagnoses of tumors, thus reducing the number of biopsies performed, increasing treatment options, and lowering remission percentages. Using these new techniques the inclusions in the phantom are resolved and compared to the results of standard methods. Refinement of this application using other imaging techniques such as time-reversal mirrors (TRM), the synthetic aperture focusing technique (SAFT), decomposition of the time reversal operator (DORT), and factorization methods is also discussed.
Quantum image pseudocolor coding based on the density-stratified method
NASA Astrophysics Data System (ADS)
Jiang, Nan; Wu, Wenya; Wang, Luo; Zhao, Na
2015-05-01
Pseudocolor processing is a branch of image enhancement. It dyes grayscale images into color images to make them more attractive or to highlight certain parts of the images. This paper proposes a quantum image pseudocolor coding scheme based on the density-stratified method, which defines a colormap and changes the density values from gray to color in parallel according to that colormap. Firstly, two data structures, the quantum image representation GQIR and the quantum colormap QCR, are reviewed or proposed. Then, the quantum density-stratified algorithm is presented. Based on these, the quantum realization in the form of circuits is given. The main advantages of the quantum version of pseudocolor processing over the classical approach are that it needs less memory and can speed up the computation. Two kinds of examples help to describe the scheme further. Finally, future work is discussed.
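Classically, density stratification amounts to splitting the grey range into as many strata as the colormap has entries and dyeing each stratum with one colour; a minimal sketch with an assumed four-colour map:

```python
import numpy as np

def pseudocolor(gray, colormap):
    """Density-stratified pseudocolour coding: map each grey stratum to
    one colormap entry. gray: uint8 image, colormap: (n, 3) RGB table."""
    n = len(colormap)
    strata = np.minimum((gray.astype(int) * n) // 256, n - 1)
    return colormap[strata]                    # (H, W) -> (H, W, 3)

cmap = np.array([[0, 0, 128], [0, 128, 0], [255, 255, 0], [255, 0, 0]],
                dtype=np.uint8)                # blue, green, yellow, red
img = np.arange(256, dtype=np.uint8).reshape(16, 16)
print(pseudocolor(img, cmap).shape)            # (16, 16, 3)
```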
Defocusing effects of lensless ghost imaging and ghost diffraction with partially coherent sources
NASA Astrophysics Data System (ADS)
Zhou, Shuang-Xi; Sheng, Wei; Bi, Yu-Bo; Luo, Chun-Ling
2018-04-01
The defocusing effect is inevitable and degrades the image quality significantly in the conventional optical imaging process due to the close confinement of the imaging lens. Based on classical optical coherence theory and linear algebra, we develop a unified formula to describe the defocusing effects of both lensless ghost imaging (LGI) and lensless ghost diffraction (LGD) systems with a partially coherent source. Numerical examples are given to illustrate the influence of the defocusing length on the quality of LGI and LGD. We find that the defocusing effects of the test and reference paths in the LGI or LGD systems are entirely different, while the LGD system is more robust against defocusing than the LGI system. Specifically, we find that the imaging process for LGD systems can be viewed as pinhole imaging, which may find applications in ultra-short-wave band imaging without imaging lenses, e.g. x-ray diffraction and γ-ray imaging.
Reducing noise component on medical images
NASA Astrophysics Data System (ADS)
Semenishchev, Evgeny; Voronin, Viacheslav; Dub, Vladimir; Balabaeva, Oksana
2018-04-01
Medical visualization and analysis of medical data is an active research direction. Medical images are used in microbiology, genetics, roentgenology, oncology, surgery, ophthalmology, etc. Initial data processing is a major step towards obtaining a good diagnostic result. This paper considers an approach that allows image filtering with preservation of object borders. The algorithm proposed in this paper is based on sequential data processing. In the first stage, local areas are determined; for this purpose, threshold processing as well as the classical ICI algorithm is applied. The second stage uses a method based on two criteria, namely the L2 norm and the first-order squared difference. To preserve the boundaries of objects, the transition boundary and its local neighborhood are processed with a fixed-coefficient filtering algorithm. As examples, reconstructed images from CT, x-ray, and microbiological studies are shown. The test images demonstrate the effectiveness of the proposed algorithm and its applicability to many medical imaging applications.
NASA Astrophysics Data System (ADS)
DelMarco, Stephen
2011-06-01
Hypercomplex approaches are seeing increased application to signal and image processing problems. The use of multicomponent hypercomplex numbers, such as quaternions, enables the simultaneous co-processing of multiple signal or image components. This joint processing capability can provide improved exploitation of the information contained in the data, thereby leading to improved performance in detection and recognition problems. In this paper, we apply hypercomplex processing techniques to the logo image recognition problem. Specifically, we develop an image matcher by generalizing classical phase correlation to the biquaternion case. We further incorporate biquaternion Fourier domain alpha-rooting enhancement to create Alpha-Rooted Biquaternion Phase Correlation (ARBPC). We present the mathematical properties which justify use of ARBPC as an image matcher. We present numerical performance results of a logo verification problem using real-world logo data, demonstrating the performance improvement obtained using the hypercomplex approach. We compare results of the hypercomplex approach to standard multi-template matching approaches.
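The classical phase correlation that ARBPC generalizes registers two translated images from the peak of the normalized cross-power spectrum; a minimal numpy sketch of that baseline (not the biquaternion or alpha-rooted variant):

```python
import numpy as np

def phase_correlation(a, b):
    """Return the integer shift that maps b onto a, from the peak of the
    inverse FFT of the normalized cross-power spectrum."""
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = Fa * np.conj(Fb)
    r = np.fft.ifft2(cross / (np.abs(cross) + 1e-12))
    return np.unravel_index(np.argmax(np.abs(r)), r.shape)

a = np.zeros((64, 64)); a[20:30, 20:30] = 1.0
b = np.roll(a, (5, 7), axis=(0, 1))
print(phase_correlation(b, a))                 # (5, 7): recovered shift
```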
Real-time algorithm for acoustic imaging with a microphone array.
Huang, Xun
2009-05-01
Acoustic phased arrays have become an important testing tool in aeroacoustic research, where the conventional beamforming algorithm has been adopted as a classical processing technique. The computation, however, has to be performed off-line due to its expensive cost. An innovative algorithm with real-time capability is proposed in this work. The algorithm is similar to a classical observer in the time domain, extended for array processing to the frequency domain. The observer-based algorithm is beneficial mainly for its capability of operating over sampling blocks recursively. Expensive experimental time can therefore be reduced extensively, since any defect in a test can be corrected instantaneously.
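For context, the conventional (off-line) beamformer referred to above scans a grid of candidate source positions and evaluates the array's cross-spectral matrix against a steering vector at each point; a simplified frequency-domain sketch under free-field assumptions:

```python
import numpy as np

def conventional_beamform(C, mics, grid, freq, c=343.0):
    """Delay-and-sum beamforming power map.
    C: (M, M) cross-spectral matrix; mics: (M, 3) microphone positions;
    grid: (G, 3) scan points; freq: analysis frequency in Hz."""
    k = 2 * np.pi * freq / c
    power = np.empty(len(grid))
    for g, pt in enumerate(grid):
        r = np.linalg.norm(mics - pt, axis=1)
        v = np.exp(-1j * k * r) / len(mics)    # steering vector
        power[g] = np.real(v.conj() @ C @ v)   # beamformer output power
    return power
```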
Moment inference from tomograms
Day-Lewis, F. D.; Chen, Y.; Singha, K.
2007-01-01
Time-lapse geophysical tomography can provide valuable qualitative insights into hydrologic transport phenomena associated with aquifer dynamics, tracer experiments, and engineered remediation. Increasingly, tomograms are used to infer the spatial and/or temporal moments of solute plumes; these moments provide quantitative information about transport processes (e.g., advection, dispersion, and rate-limited mass transfer) and controlling parameters (e.g., permeability, dispersivity, and rate coefficients). The reliability of moments calculated from tomograms is, however, poorly understood because classic approaches to image appraisal (e.g., the model resolution matrix) are not directly applicable to moment inference. Here, we present a semi-analytical approach to construct a moment resolution matrix based on (1) the classic model resolution matrix and (2) image reconstruction from orthogonal moments. Numerical results for radar and electrical-resistivity imaging of solute plumes demonstrate that moment values calculated from tomograms depend strongly on plume location within the tomogram, survey geometry, regularization criteria, and measurement error. Copyright 2007 by the American Geophysical Union.
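The spatial moments inferred from a tomogram are straightforward to compute once the image is reconstructed; a minimal sketch for a 2D concentration field (zeroth moment, centroid, and second central moments):

```python
import numpy as np

def plume_moments(conc, x, y):
    """Spatial moments of a 2D field conc with shape (len(y), len(x)):
    total mass, centroid, and second central moments (spread)."""
    X, Y = np.meshgrid(x, y)
    m0 = conc.sum()
    cx, cy = (X * conc).sum() / m0, (Y * conc).sum() / m0
    sxx = ((X - cx) ** 2 * conc).sum() / m0
    syy = ((Y - cy) ** 2 * conc).sum() / m0
    return m0, (cx, cy), (sxx, syy)
```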
Zhang, T; Godavarthi, C; Chaumet, P C; Maire, G; Giovannini, H; Talneau, A; Prada, C; Sentenac, A; Belkebir, K
2015-02-15
Tomographic diffractive microscopy is a marker-free optical digital imaging technique in which three-dimensional samples are reconstructed from a set of holograms recorded under different angles of incidence. We show experimentally that, by processing the holograms with singular value decomposition, it is possible to image objects in a noisy background that are invisible with classical wide-field microscopy and conventional tomographic reconstruction procedure. The targets can be further characterized with a selective quantitative inversion.
Phase-Sensitive Coherence and the Classical-Quantum Boundary in Ghost Imaging
NASA Technical Reports Server (NTRS)
Erkmen, Baris I.; Hardy, Nicholas D.; Venkatraman, Dheera; Wong, Franco N. C.; Shapiro, Jeffrey H.
2011-01-01
The theory of partial coherence has a long and storied history in classical statistical optics. The vast majority of this work addresses fields that are statistically stationary in time, hence their complex envelopes only have phase-insensitive correlations. The quantum optics of squeezed-state generation, however, depends on nonlinear interactions producing baseband field operators with phase-insensitive and phase-sensitive correlations. Utilizing quantum light to enhance imaging has been a topic of considerable current interest, much of it involving biphotons, i.e., streams of entangled-photon pairs. Biphotons have been employed for quantum versions of optical coherence tomography, ghost imaging, holography, and lithography. However, their seemingly quantum features have been mimicked with classical-state light, raising the question of where the classical-quantum boundary lies. We have shown, for the case of Gaussian-state light, that this boundary is intimately connected to the theory of phase-sensitive partial coherence. Here we present that theory, contrasting it with the familiar case of phase-insensitive partial coherence, and use it to elucidate the classical-quantum boundary of ghost imaging. We show, both theoretically and experimentally, that classical phase-sensitive light produces ghost images most closely mimicking those obtained with biphotons, and we derive the spatial resolution, image contrast, and signal-to-noise ratio of a standoff-sensing ghost imager, taking into account target-induced speckle.
Micro-optical artificial compound eyes.
Duparré, J W; Wippermann, F C
2006-03-01
Natural compound eyes combine small eye volumes with a large field of view at the cost of comparatively low spatial resolution. For small invertebrates such as flies or moths, compound eyes are the perfectly adapted solution to obtaining sufficient visual information about their environment without overloading their brains with the necessary image processing. However, to date little effort has been made to adopt this principle in optics. Classical imaging has always had its archetype in natural single-aperture eyes, on which, for example, human vision is based. But a high-resolution image is not always required; often the focus is on very compact, robust and cheap vision systems. The main question is consequently: what is the better approach for extremely miniaturized imaging systems, simply scaling classical lens designs, or taking inspiration from alternative imaging principles evolved by nature in small insects? In this paper, it is shown that such optical systems can be achieved using state-of-the-art micro-optics technology. This enables the generation of highly precise and uniform microlens arrays and their accurate alignment to the subsequent optics, spacing, and optoelectronics structures. The results are thin, simple and monolithic imaging devices with the high accuracy of photolithography. Two different artificial compound eye concepts for compact vision systems have been investigated in detail: the artificial apposition compound eye and the cluster eye. Novel optical design methods and characterization tools were developed to allow the layout and experimental testing of the planar micro-optical imaging systems, which were fabricated for the first time by micro-optics technology. The artificial apposition compound eye can be considered a simple imaging optical sensor, while the cluster eye is capable of becoming a valid alternative to classical bulk objectives, though it is much more complex than the first system.
Local wavelet transform: a cost-efficient custom processor for space image compression
NASA Astrophysics Data System (ADS)
Masschelein, Bart; Bormans, Jan G.; Lafruit, Gauthier
2002-11-01
Thanks to its intrinsic scalability features, the wavelet transform has become increasingly popular as a decorrelator in image compression applications. Throughput, memory requirements and complexity are important parameters when developing hardware image compression modules. An implementation of the classical, global wavelet transform requires large memory sizes and implies a large latency between the availability of the input image and the production of minimal data entities for entropy coding. Image tiling methods, as proposed by JPEG2000, reduce the memory sizes and the latency, but inevitably introduce image artefacts. The Local Wavelet Transform (LWT), presented in this paper, is a low-complexity wavelet transform architecture using block-based processing that produces the same transformed images as those obtained by the global wavelet transform. The architecture minimizes the processing latency with a limited amount of memory. Moreover, as the LWT is an instruction-based custom processor, it can be programmed for specific tasks, such as push-broom processing of infinite-length satellite images. These features make the LWT appropriate for use in space image compression, where high throughput, low memory sizes, low complexity, low power and push-broom processing are important requirements.
A fast discrete S-transform for biomedical signal processing.
Brown, Robert A; Frayne, Richard
2008-01-01
Determining the frequency content of a signal is a basic operation in signal and image processing. The S-transform provides both the true frequency and globally referenced phase measurements characteristic of the Fourier transform and also generates local spectra, as does the wavelet transform. Due to this combination, the S-transform has been successfully demonstrated in a variety of biomedical signal and image processing tasks. However, the computational demands of the S-transform have limited its application in medicine to this point in time. This abstract introduces the fast S-transform, a more efficient discrete implementation of the classic S-transform with dramatically reduced computational requirements.
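For orientation, the classic discrete S-transform that the fast version accelerates can be computed voice by voice in the frequency domain: shift the signal's spectrum by n bins, apply a Gaussian whose width scales with frequency, and inverse-FFT. A sketch of that baseline (not Brown and Frayne's fast algorithm):

```python
import numpy as np

def s_transform(h):
    """Classic discrete S-transform of a real signal h (length N).
    Returns an (N//2, N) array: rows are frequencies, columns time."""
    N = len(h)
    H = np.fft.fft(h)
    m = np.fft.fftfreq(N) * N                  # signed frequency indices
    S = np.zeros((N // 2, N), dtype=complex)
    S[0] = np.mean(h)                          # zero-frequency voice
    for n in range(1, N // 2):
        G = np.exp(-2 * np.pi ** 2 * m ** 2 / n ** 2)  # scaled Gaussian
        S[n] = np.fft.ifft(np.roll(H, -n) * G)         # H[m + n] * G[m]
    return S
```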
Research on Palmprint Identification Method Based on Quantum Algorithms
Zhang, Zhanzhan
2014-01-01
Quantum image recognition is a technology that uses quantum algorithms to process image information; it can obtain better results than classical algorithms. In this paper, four different quantum algorithms are used in the three stages of palmprint recognition. First, a quantum adaptive median filtering algorithm is presented for palmprint filtering; comparison shows that the quantum filtering algorithm achieves a better filtering result than the classical algorithm. Next, the quantum Fourier transform (QFT) is used to extract pattern features in only one operation thanks to quantum parallelism. The proposed algorithm exhibits an exponential speed-up compared with the discrete Fourier transform in the feature extraction. Finally, quantum set operations and Grover's algorithm are used in palmprint matching. According to the experimental results, the quantum algorithm only needs on the order of the square root of N operations to find the target palmprint, whereas the traditional method needs N calculations. At the same time, the matching accuracy of the quantum algorithm is almost 100%. PMID:25105165
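The square-root speed-up quoted for the matching stage is the standard Grover scaling; a small classical simulation of Grover's iteration illustrates it (illustrative of the generic algorithm, not the palmprint circuits):

```python
import numpy as np

def grover_search(n_items, target):
    """Simulate Grover's algorithm: about (pi/4) * sqrt(N) rounds of
    'flip the target's phase, then invert all amplitudes about the mean'."""
    state = np.full(n_items, 1.0 / np.sqrt(n_items))
    for _ in range(int(np.pi / 4 * np.sqrt(n_items))):
        state[target] *= -1                    # oracle: phase flip
        state = 2 * state.mean() - state       # diffusion operator
    return int(np.argmax(state ** 2)), float(state[target] ** 2)

idx, prob = grover_search(1024, target=137)
print(idx, round(prob, 4))                     # 137, probability near 1
```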
A novel quantum steganography scheme for color images
NASA Astrophysics Data System (ADS)
Li, Panchi; Liu, Xiande
In quantum image steganography, embedding capacity and security are two important issues. This paper presents a novel quantum steganography scheme using color images as cover images. First, the secret information is divided into 3-bit segments, and then each 3-bit segment is embedded into the LSB of one color pixel in the cover image according to its own value and using Gray code mapping rules. Extraction is the inverse of embedding. We designed the quantum circuits that implement the embedding and extracting process. The simulation results on a classical computer show that the proposed scheme outperforms several other existing schemes in terms of embedding capacity and security.
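The abstract specifies the embedding only at the level of its quantum circuits. As a rough illustration of the underlying idea, the following classical sketch embeds Gray-coded 3-bit segments into the R, G, B least significant bits of successive pixels; the per-channel bit order and the exact Gray-code mapping are assumptions of this sketch, not the paper's circuit-level rules.

```python
import numpy as np

def gray_encode(v: int) -> int:
    """Standard binary-reflected Gray code."""
    return v ^ (v >> 1)

def gray_decode(g: int) -> int:
    v = 0
    while g:
        v ^= g
        g >>= 1
    return v

def embed(cover: np.ndarray, bits: str) -> np.ndarray:
    """Embed 3-bit segments into the R, G, B LSBs of successive pixels.

    cover: (H, W, 3) uint8 image; bits: '0'/'1' string, length divisible by 3
    and at most 3 * number of pixels.
    """
    stego = cover.copy()
    flat = stego.reshape(-1, 3)
    for i in range(0, len(bits), 3):
        g = gray_encode(int(bits[i:i + 3], 2))        # Gray-code the segment
        for c in range(3):                            # one bit per color channel
            flat[i // 3, c] = (flat[i // 3, c] & 0xFE) | ((g >> (2 - c)) & 1)
    return stego

def extract(stego: np.ndarray, n_bits: int) -> str:
    """Inverse of embed: read LSBs, undo the Gray code."""
    flat = stego.reshape(-1, 3)
    out = []
    for p in range(n_bits // 3):
        g = sum(int(flat[p, c] & 1) << (2 - c) for c in range(3))
        out.append(format(gray_decode(g), '03b'))
    return ''.join(out)
```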
Advanced sensor-simulation capability
NASA Astrophysics Data System (ADS)
Cota, Stephen A.; Kalman, Linda S.; Keller, Robert A.
1990-09-01
This paper provides an overview of an advanced simulation capability currently in use for analyzing visible and infrared sensor systems. The software system, called VISTAS (VISIBLE/INFRARED SENSOR TRADES, ANALYSES, AND SIMULATIONS) combines classical image processing techniques with detailed sensor models to produce static and time dependent simulations of a variety of sensor systems including imaging, tracking, and point target detection systems. Systems modelled to date include space-based scanning line-array sensors as well as staring 2-dimensional array sensors which can be used for either imaging or point source detection.
Guided filtering for solar image/video processing
NASA Astrophysics Data System (ADS)
Xu, Long; Yan, Yihua; Cheng, Jun
2017-06-01
A new image enhancement algorithm employing guided filtering is proposed in this work for the enhancement of solar images and videos, so that users can easily pick out the important fine structures embedded in the recorded images/movies of solar observation. The proposed algorithm can efficiently remove image noise, including Gaussian and impulse noise. Meanwhile, it can further highlight fibrous structures on and beyond the solar disk. These fibrous structures clearly trace the progress of solar flares, prominence eruptions, coronal mass ejections, magnetic fields, and so on. The experimental results show that the proposed algorithm significantly enhances the visual quality of solar images relative to the original input and to several classical image enhancement algorithms, thus facilitating easier identification of interesting solar burst activities in recorded images/movies.
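The guided filter underlying the proposed algorithm is a well-known edge-preserving operator (He et al.); a minimal grayscale sketch follows. The solar-specific pipeline (joint Gaussian/impulse noise removal and fibrous-structure boosting) is not reproduced here, and the radius and regularization defaults are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, r=8, eps=1e-3):
    """Edge-preserving guided filter; I: guide image, p: input image,
    both float arrays scaled to [0, 1]. r is the box-filter radius,
    eps the regularization controlling edge sensitivity."""
    size = 2 * r + 1
    mean_I = uniform_filter(I, size)
    mean_p = uniform_filter(p, size)
    corr_Ip = uniform_filter(I * p, size)
    corr_II = uniform_filter(I * I, size)
    var_I = corr_II - mean_I * mean_I
    cov_Ip = corr_Ip - mean_I * mean_p
    a = cov_Ip / (var_I + eps)           # local linear coefficients
    b = mean_p - a * mean_I
    mean_a = uniform_filter(a, size)
    mean_b = uniform_filter(b, size)
    return mean_a * I + mean_b

# Simple detail enhancement built on the filter (illustrative usage):
# base = guided_filter(img, img); enhanced = base + k * (img - base)
```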
NASA Astrophysics Data System (ADS)
Tan, Ru-Chao; Lei, Tong; Zhao, Qing-Min; Gong, Li-Hua; Zhou, Zhi-Hong
2016-12-01
To improve on the slow processing speed of classical image encryption algorithms and to enhance the security of private color images, a new quantum color image encryption algorithm based on a hyper-chaotic system is proposed, in which the sequences generated by Chen's hyper-chaotic system are used to scramble and diffuse the three components of the original color image. Subsequently, the quantum Fourier transform is exploited to complete the encryption. Numerical simulations show that the presented quantum color image encryption algorithm possesses a large key space to resist illegal attacks, sensitive dependence on initial keys, a uniform distribution of gray values in the encrypted image and weak correlation between adjacent pixels in the cipher-image.
High visibility temporal ghost imaging with classical light
NASA Astrophysics Data System (ADS)
Liu, Jianbin; Wang, Jingjing; Chen, Hui; Zheng, Huaibin; Liu, Yanyan; Zhou, Yu; Li, Fu-li; Xu, Zhuo
2018-03-01
High visibility temporal ghost imaging with classical light is possible when superbunching pseudothermal light is employed. In numerical simulation, the visibility of temporal ghost imaging with pseudothermal light, equaling (4.7 ± 0.2)%, can be increased to (75 ± 8)% in the same scheme with superbunching pseudothermal light. The reasons why the retrieved images differ for superbunching pseudothermal light with different values of the degree of second-order coherence are discussed in detail. It is concluded that a high-visibility and high-quality temporal ghost image can be obtained by collecting a sufficient number of data points. The results are helpful for understanding the difference between ghost imaging with classical light and with entangled photon pairs. Superbunching pseudothermal light can be employed to improve image quality in ghost imaging applications.
Consciousness and values in the quantum universe
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stapp, H.P.
1985-01-01
Application of quantum mechanical description to neurophysiological processes appears to provide for a natural unification of the physical and humanistic sciences. The categories of thought used to represent physical and psychical processes become united, and the mechanical conception of man created by classical physics is replaced by a profoundly different quantum conception. This revised image of man allows human values to be rooted in contemporary science.
Three-dimensional digital projection in neurosurgical education: technical note.
Martins, Carolina; Ribas, Eduardo Carvalhal; Rhoton, Albert L; Ribas, Guilherme Carvalhal
2015-10-01
Three-dimensional images have become an important tool in teaching surgical anatomy, and its didactic power is enhanced when combined with 3D surgical images and videos. This paper describes the method used by the last author (G.C.R.) since 2002 to project 3D anatomical and surgical images using a computer source. Projecting 3D images requires the superposition of 2 similar but slightly different images of the same object. The set of images, one mimicking the view of the left eye and the other mimicking the view of the right eye, constitute the stereoscopic pair and can be processed using anaglyphic or horizontal-vertical polarization of light for individual use or presentation to larger audiences. Classically, 3D projection could be obtained by using a double set of slides, projected through 2 slide projectors, each of them equipped with complementary filters, shooting over a medium that keeps light polarized (a silver screen) and having the audience wear appropriate glasses. More recently, a digital method of 3D projection has been perfected. In this method, a personal computer is used as the source of the images, which are arranged in a Microsoft PowerPoint presentation. A beam splitter device is used to connect the computer source to 2 digital, portable projectors. Filters, a silver screen, and glasses are used, similar to the classic method. Among other advantages, this method brings flexibility to 3D presentations by allowing the combination of 3D anatomical and surgical still images and videos. It eliminates the need for using film and film developing, lowering the costs of the process. In using small, powerful digital projectors, this method substitutes for the previous technology, without incurring a loss of quality, and enhances portability.
NASA Astrophysics Data System (ADS)
Bednar, Earl; Drager, Steven L.
2007-04-01
The objective of quantum information processing is to harness the paradigm shift offered by quantum computing to solve classically hard, computationally challenging problems. Some of our computationally challenging problems of interest include: rapid image processing, rapid optimization of logistics, protecting information, secure distributed simulation, and massively parallel computation. Currently, one important problem with quantum information processing is that quantum computers are difficult to realize due to poor scalability and high error rates. Therefore, we have supported the development of Quantum eXpress and QuIDD Pro, two quantum computer simulators running on classical computers for the development and testing of new quantum algorithms and processes. This paper examines the different methods used by these two quantum computing simulators. It reviews both simulators, highlighting each simulator's background, interface, and special features, and demonstrates the implementation of current quantum algorithms on each simulator. It concludes with summary comments on both simulators.
A Communication Configuration of AIDS.
ERIC Educational Resources Information Center
Hughey, Jim D.
A study focused on the way that image, knowledge, behavioral intent, and communicative responsiveness are configured for Acquired Immunodeficiency Syndrome (AIDS). The classic model of the adoption process expects that knowledge about a subject will lead to a favorable evaluation of it, which in turn will lead to a decision to act. But the…
A heuristic statistical stopping rule for iterative reconstruction in emission tomography.
Ben Bouallègue, F; Crouzet, J F; Mariano-Goulart, D
2013-01-01
We propose a statistical stopping criterion for iterative reconstruction in emission tomography based on a heuristic statistical description of the reconstruction process. The method was assessed for MLEM reconstruction. Based on Monte-Carlo numerical simulations and using a perfectly modeled system matrix, our method was compared with classical iterative reconstruction followed by low-pass filtering in terms of Euclidean distance to the exact object, noise, and resolution. The stopping criterion was then evaluated with realistic PET data of a Hoffman brain phantom produced using the GATE platform for different count levels. The numerical experiments showed that, compared with the classical method, our technique yielded a significant improvement of the noise-resolution tradeoff for a wide range of counting statistics compatible with routine clinical settings. When working with realistic data, the stopping rule allowed a qualitatively and quantitatively efficient determination of the optimal image. Our method appears to give a reliable estimation of the optimal stopping point for iterative reconstruction. It should thus be of practical interest, as it produces images with similar or better quality than classical post-filtered iterative reconstruction with a mastered computation time.
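For context, the MLEM iteration that such a stopping rule monitors can be sketched in a few lines. This is the textbook update, not the paper's criterion itself; the generator form below, which exposes each iterate to an external stopping rule, is an assumption of this sketch.

```python
import numpy as np

def mlem(A, y, n_iter, x0=None):
    """Classic MLEM for emission tomography, assuming y ~ Poisson(A @ x).

    A: (n_bins, n_voxels) system matrix; y: measured counts.
    Yields successive iterates so an external stopping rule (such as the
    one proposed in this paper) can decide when to truncate the loop.
    """
    x = np.ones(A.shape[1]) if x0 is None else x0.astype(float)
    sens = A.sum(axis=0)                        # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x                            # forward projection
        ratio = y / np.maximum(proj, 1e-12)     # guard against division by zero
        x = x * (A.T @ ratio) / np.maximum(sens, 1e-12)
        yield x.copy()

# Usage sketch: iterate until a criterion fires.
# for k, xk in enumerate(mlem(A, y, 100)):
#     if my_stopping_rule(xk):
#         break
```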
Massive ovarian edema, due to adjacent appendicitis.
Callen, Andrew L; Illangasekare, Tushani; Poder, Liina
2017-04-01
Massive ovarian edema is a benign clinical entity whose imaging findings can mimic an adnexal mass or ovarian torsion. In the setting of acute abdominal pain, identifying massive ovarian edema is key to avoiding potentially fertility-threatening surgery in young women. In addition, it is important to consider other contributing pathology when ovarian edema is secondary to another process. We present the case of a young woman presenting with subacute abdominal pain, whose initial workup revealed a markedly enlarged right ovary. Further imaging, diagnostic tests, and eventually diagnostic laparoscopy revealed that the ovarian enlargement was secondary to subacute appendicitis, rather than a primary adnexal process. We review the classic ultrasound and MRI findings and pitfalls that relate to this diagnosis.
NASA Astrophysics Data System (ADS)
Szu, Harold H.
1993-09-01
Classical artificial neural networks (ANN) and neurocomputing are reviewed for implementing real-time medical image diagnosis. An algorithm known as the self-reference matched filter, which emulates the spatio-temporal integration ability of the human visual system, might be utilized for multi-frame processing of medical imaging data. A Cauchy machine, implementing a fast simulated annealing schedule, can determine the degree of abnormality by the degree of orthogonality between the patient imagery and the class of features of healthy persons. An automatic inspection process based on multiple-modality image sequences is simulated by incorporating the following new developments: (1) 1-D space-filling Peano curves to preserve the 2-D neighborhood relationship of pixels; (2) fast simulated Cauchy annealing for the global optimization of self-feature extraction; and (3) a mini-max energy function for intra- and inter-cluster segregation, useful for top-down ANN designs.
Bayesian cloud detection for MERIS, AATSR, and their combination
NASA Astrophysics Data System (ADS)
Hollstein, A.; Fischer, J.; Carbajal Henken, C.; Preusker, R.
2014-11-01
A broad range of different Bayesian cloud detection schemes is applied to measurements from the Medium Resolution Imaging Spectrometer (MERIS), the Advanced Along-Track Scanning Radiometer (AATSR), and their combination. The cloud masks were designed to be numerically efficient and suited for the processing of large amounts of data. Results from the classical and naive approaches to Bayesian cloud masking are discussed for MERIS and AATSR as well as for their combination. A sensitivity study on the resolution of multidimensional histograms, which were post-processed by Gaussian smoothing, shows how theoretically insufficient amounts of truth data can be used to set up accurate classical Bayesian cloud masks. Sets of exploited features from single and derived channels are numerically optimized, and results for naive and classical Bayesian cloud masks are presented. The application of the Bayesian approach is discussed in terms of reproducing existing algorithms, enhancing existing algorithms, increasing the robustness of existing algorithms, and setting up new classification schemes based on manually classified scenes.
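A minimal sketch may clarify the distinction drawn here: in the naive variant each feature contributes an independent histogram likelihood, whereas the classical variant uses joint multidimensional histograms. Bin counts, Laplace smoothing, and function names below are assumptions of this sketch, not the authors' configuration.

```python
import numpy as np

def fit_naive_bayes(X, y, bins=32):
    """Per-feature histogram likelihoods for a two-class naive Bayes mask.

    X: (n_samples, n_features) reflectances / brightness temperatures;
    y: boolean cloud flags from manually classified 'truth' scenes.
    """
    edges, like = [], {0: [], 1: []}
    for f in range(X.shape[1]):
        e = np.histogram_bin_edges(X[:, f], bins=bins)
        edges.append(e)
        for c in (0, 1):
            h, _ = np.histogram(X[y == c, f], bins=e)
            like[c].append((h + 1) / (h.sum() + bins))   # Laplace smoothing
    prior = np.array([np.mean(y == 0), np.mean(y == 1)])
    return edges, like, prior

def predict_cloud_prob(X, edges, like, prior):
    """P(cloud | features) under the naive independence assumption."""
    logp = np.log(np.tile(prior, (X.shape[0], 1)))
    for f, e in enumerate(edges):
        idx = np.clip(np.searchsorted(e, X[:, f]) - 1, 0, len(e) - 2)
        for c in (0, 1):
            logp[:, c] += np.log(like[c][f][idx])
    p = np.exp(logp - logp.max(axis=1, keepdims=True))  # stable normalization
    return p[:, 1] / p.sum(axis=1)
```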
Bayesian cloud detection for MERIS, AATSR, and their combination
NASA Astrophysics Data System (ADS)
Hollstein, A.; Fischer, J.; Carbajal Henken, C.; Preusker, R.
2015-04-01
A broad range of different Bayesian cloud detection schemes is applied to measurements from the Medium Resolution Imaging Spectrometer (MERIS), the Advanced Along-Track Scanning Radiometer (AATSR), and their combination. The cloud detection schemes were designed to be numerically efficient and suited for the processing of large amounts of data. Results from the classical and naive approaches to Bayesian cloud masking are discussed for MERIS and AATSR as well as for their combination. A sensitivity study on the resolution of multidimensional histograms, which were post-processed by Gaussian smoothing, shows how theoretically insufficient amounts of truth data can be used to set up accurate classical Bayesian cloud masks. Sets of exploited features from single and derived channels are numerically optimized, and results for naive and classical Bayesian cloud masks are presented. The application of the Bayesian approach is discussed in terms of reproducing existing algorithms, enhancing existing algorithms, increasing the robustness of existing algorithms, and setting up new classification schemes based on manually classified scenes.
Multiscale hidden Markov models for photon-limited imaging
NASA Astrophysics Data System (ADS)
Nowak, Robert D.
1999-06-01
Photon-limited image analysis is often hindered by low signal-to-noise ratios. A novel Bayesian multiscale modeling and analysis method is developed in this paper to assist in these challenging situations. In addition to providing a very natural and useful framework for modeling and processing images, Bayesian multiscale analysis is often much less computationally demanding than classical Markov random field models. This paper focuses on a probabilistic graph model called the multiscale hidden Markov model (MHMM), which captures the key inter-scale dependencies present in natural image intensities. The MHMM framework presented here is specifically designed for photon-limited imaging applications involving Poisson statistics, and applications to image intensity analysis are examined.
Recursive Hierarchical Image Segmentation by Region Growing and Constrained Spectral Clustering
NASA Technical Reports Server (NTRS)
Tilton, James C.
2002-01-01
This paper describes an algorithm for hierarchical image segmentation (referred to as HSEG) and its recursive formulation (referred to as RHSEG). The HSEG algorithm is a hybrid of region growing and constrained spectral clustering that produces a hierarchical set of image segmentations based on detected convergence points. In the main, HSEG employs the hierarchical stepwise optimization (HSWO) approach to region growing, which seeks to produce segmentations that are more optimized than those produced by more classic approaches to region growing. In addition, HSEG optionally interjects, between HSWO region growing iterations, merges between spatially non-adjacent regions (i.e., spectrally based merging or clustering) constrained by a threshold derived from the previous HSWO region growing iteration. While the addition of constrained spectral clustering improves the segmentation results, especially for larger images, it also significantly increases HSEG's computational requirements. To counteract this, a computationally efficient recursive, divide-and-conquer implementation of HSEG (RHSEG) has been devised and is described herein. Included in this description is special code that is required to avoid processing artifacts caused by RHSEG's recursive subdivision of the image data. Implementations for single-processor and multiple-processor computer systems are described. Results with Landsat TM data are included, comparing HSEG with classic region growing. Finally, an application to image information mining and knowledge discovery is discussed.
Content based image retrieval using local binary pattern operator and data mining techniques.
Vatamanu, Oana Astrid; Frandeş, Mirela; Lungeanu, Diana; Mihalaş, Gheorghe-Ioan
2015-01-01
Content based image retrieval (CBIR) concerns the retrieval of similar images from image databases using feature vectors extracted from the images. These feature vectors globally define the visual content of an image, e.g., texture, colour, shape, and spatial relations. Herein, we propose defining the feature vectors using the Local Binary Pattern (LBP) operator. A study was performed to determine the optimum LBP variant for the general definition of image feature vectors. The chosen LBP variant is then used to build an ultrasound image database and a database of images obtained from Wireless Capsule Endoscopy. The image indexing process is optimized using data clustering techniques for images belonging to the same class. Finally, the proposed indexing method is compared to the classical indexing technique, which is nowadays widely used.
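The basic 3x3 LBP operator at the core of the studied variants is easy to sketch; the following NumPy version computes the 8-bit code image and the normalized histogram used as a feature vector. The specific LBP variant selected by the study is not reproduced here.

```python
import numpy as np

def lbp_basic(img):
    """Basic 3x3 LBP: threshold the 8 neighbors at the center value
    and pack the comparison results into an 8-bit code per pixel."""
    img = img.astype(np.int32)
    c = img[1:-1, 1:-1]                               # interior centers
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]       # clockwise neighbors
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy,
                 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.int32) << bit
    return code

def lbp_feature_vector(img, bins=256):
    """Normalized LBP histogram used as the image's feature vector."""
    h, _ = np.histogram(lbp_basic(img), bins=bins, range=(0, 256))
    return h / max(h.sum(), 1)
```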
Shearlet Features for Registration of Remotely Sensed Multitemporal Images
NASA Technical Reports Server (NTRS)
Murphy, James M.; Le Moigne, Jacqueline
2015-01-01
We investigate the role of anisotropic feature extraction methods for automatic image registration of remotely sensed multitemporal images. Building on the classical use of wavelets in image registration, we develop an algorithm based on shearlets, a mathematical generalization of wavelets that offers increased directional sensitivity. Initial experimental results on LANDSAT images are presented, which indicate superior performance of the shearlet algorithm when compared to classical wavelet algorithms.
NASA Astrophysics Data System (ADS)
Chávez, G. Moreno; Sarocchi, D.; Santana, E. Arce; Borselli, L.
2015-12-01
The study of grain size distribution is fundamental for understanding sedimentological environments. Through these analyses, clast erosion, transport and deposition processes can be interpreted and modeled. However, grain size distribution analysis can be difficult in some outcrops due to the number and complexity of the arrangement of clasts and matrix and their physical size. Despite various technological advances, it is almost impossible to obtain the full grain size distribution (from blocks down to sand grains) with a single method or instrument of analysis. For this reason, development in this area continues to be fundamental. In recent years, various methods of particle size analysis by automatic image processing have been developed, owing to their potential advantages over classical ones: speed and the detailed information content of the final result (virtually for each analyzed particle). In this framework, we have developed a novel algorithm and software for grain size distribution analysis, based on color image segmentation using an entropy-controlled quadratic Markov measure field algorithm and on the Rosiwal method for counting intersections between clasts and linear transects in the images. We tested the novel algorithm on different sedimentary deposit types from 14 varieties of sedimentological environments. The results of the new algorithm were compared with manual grain counts performed by experts using the same Rosiwal method. The new algorithm has the same accuracy as a classical manual count process, but the application of this innovative methodology is much easier and dramatically less time-consuming. The final productivity of the new software for the analysis of clast deposits after recording field outcrop images can thus be increased significantly.
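The Rosiwal intersection-counting stage is straightforward once a segmentation is available. The sketch below assumes a binary clast mask is already in hand (the entropy-controlled Markov segmentation is out of scope) and samples horizontal transects, returning chord lengths whose distribution approximates the grain size distribution; the function name and transect count are illustrative.

```python
import numpy as np

def rosiwal_intercepts(clast_mask, n_transects=20):
    """Rosiwal-style linear-transect analysis on a binary clast mask.

    Samples evenly spaced horizontal transects and returns the list of
    intercept (chord) lengths, in pixels, where the transects cross clasts.
    """
    H, _ = clast_mask.shape
    rows = np.linspace(0, H - 1, n_transects).astype(int)
    chords = []
    for r in rows:
        line = clast_mask[r].astype(np.int8)
        d = np.diff(np.concatenate(([0], line, [0])))  # pad so edges count
        starts = np.flatnonzero(d == 1)                # a clast run begins
        ends = np.flatnonzero(d == -1)                 # a clast run ends
        chords.extend((ends - starts).tolist())
    return chords
```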
Images as embedding maps and minimal surfaces: Movies, color, and volumetric medical images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kimmel, R.; Malladi, R.; Sochen, N.
A general geometrical framework for image processing is presented. The authors consider intensity images as surfaces in the (x,I) space; a gray level image is thereby a two dimensional surface in three dimensional space. The new formulation unifies many classical schemes, algorithms, and measures via choices of parameters in a "master" geometrical measure. More important, it is a simple and efficient tool for the design of natural schemes for image enhancement, segmentation, and scale space. Here the authors give the basic motivation and apply the scheme to enhance images. They present the concept of an image as a surface in dimensions higher than the three dimensional intuitive space. This will help them handle movies, color, and volumetric medical images.
Mental Visualization of Objects from Cross-Sectional Images
ERIC Educational Resources Information Center
Wu, Bing; Klatzky, Roberta L.; Stetten, George D.
2012-01-01
We extended the classic anorthoscopic viewing procedure to test a model of visualization of 3D structures from 2D cross-sections. Four experiments were conducted to examine key processes described in the model, localizing cross-sections within a common frame of reference and spatiotemporal integration of cross sections into a hierarchical object…
A perspective view of the plane mixing layer
NASA Technical Reports Server (NTRS)
Jimenez, J.; Cogollos, M.; Bernal, L. P.
1984-01-01
A three-dimensional model of the plane mixing layer is constructed by applying digital image processing and computer graphic techniques to laser fluorescent motion pictures of its transversal sections. A system of streamwise vortex pairs is shown to exist on top of the classical spanwise eddies. Its influence on mixing is examined.
Educational Technology Classics: Educational Technology Doesn't Really Exist
ERIC Educational Resources Information Center
Silvern, Leonard C.
2013-01-01
The improvement of a professional group is due, in part, to its ability for introspection and self-evaluation. This is essentially the process of "analyzing" the profession as it currently is practiced, identifying necessary changes and improvements, and "synthesizing" or creating a new image or model of the profession to…
Language Proficiency Modulates the Recruitment of Non-Classical Language Areas in Bilinguals
Leonard, Matthew K.; Torres, Christina; Travis, Katherine E.; Brown, Timothy T.; Hagler, Donald J.; Dale, Anders M.; Elman, Jeffrey L.; Halgren, Eric
2011-01-01
Bilingualism provides a unique opportunity for understanding the relative roles of proficiency and order of acquisition in determining how the brain represents language. In a previous study, we combined magnetoencephalography (MEG) and magnetic resonance imaging (MRI) to examine the spatiotemporal dynamics of word processing in a group of Spanish-English bilinguals who were more proficient in their native language. We found that from the earliest stages of lexical processing, words in the second language evoke greater activity in bilateral posterior visual regions, while activity to the native language is largely confined to classical left hemisphere fronto-temporal areas. In the present study, we sought to examine whether these effects relate to language proficiency or order of language acquisition by testing Spanish-English bilingual subjects who had become dominant in their second language. Additionally, we wanted to determine whether activity in bilateral visual regions was related to the presentation of written words in our previous study, so we presented subjects with both written and auditory words. We found greater activity for the less proficient native language in bilateral posterior visual regions for both the visual and auditory modalities, which started during the earliest word encoding stages and continued through lexico-semantic processing. In classical left fronto-temporal regions, the two languages evoked similar activity. Therefore, it is the lack of proficiency rather than secondary acquisition order that determines the recruitment of non-classical areas for word processing. PMID:21455315
Aberration-free superresolution imaging via binary speckle pattern encoding and processing
NASA Astrophysics Data System (ADS)
Ben-Eliezer, Eyal; Marom, Emanuel
2007-04-01
We present an approach that provides superresolution beyond the classical limit as well as image restoration in the presence of aberrations; in particular, the ability to obtain superresolution while simultaneously extending the depth of field (DOF) is tested experimentally. It is based on a recently proposed approach shown to increase the resolution significantly for in-focus images by speckle encoding and decoding. In our approach, an object multiplied by a fine binary speckle pattern may be located anywhere along an extended DOF region. Since the exact magnification is not known in the presence of defocus aberration, the acquired low-resolution image is electronically processed via a parallel-branch decoding scheme, where in each branch the image is multiplied by the same high-resolution, synchronized, time-varying binary speckle but with a different magnification. Finally, a hard-decision algorithm chooses the branch that provides the highest-resolution output image, thus achieving insensitivity to aberrations as well as to DOF variations. Simulation as well as experimental results are presented, exhibiting significant resolution improvement factors.
NASA Astrophysics Data System (ADS)
Pesaresi, Martino; Ouzounis, Georgios K.; Gueguen, Lionel
2012-06-01
A new compact representation of differential morphological profile (DMP) vector fields is presented. It is referred to as the CSL model and is conceived to radically reduce the dimensionality of the DMP descriptors. The model maps three characteristic parameters, namely scale, saliency and level, into the RGB space through an HSV transform. The result is a medium-abstraction semantic layer used for visual exploration, image information mining and pattern classification. Fused with the PANTEX built-up presence index, the CSL model converges to an approximate building footprint representation layer in which color represents building class labels. This process is demonstrated on the first high resolution (HR) global human settlement layer (GHSL) computed from multi-modal HR and VHR satellite images. Results of the first massive processing exercise, involving several thousands of scenes around the globe, are reported along with validation figures.
Optimized Laplacian image sharpening algorithm based on graphic processing unit
NASA Astrophysics Data System (ADS)
Ma, Tinghuai; Li, Lu; Ji, Sai; Wang, Xin; Tian, Yuan; Al-Dhelaan, Abdullah; Al-Rodhaan, Mznah
2014-12-01
In classical Laplacian image sharpening, all pixels are processed one by one, which leads to a large amount of computation. Traditional Laplacian sharpening performed on a CPU is considerably time-consuming, especially for large pictures. In this paper, we propose a parallel implementation of Laplacian sharpening based on the Compute Unified Device Architecture (CUDA), a computing platform for Graphic Processing Units (GPUs), and analyze the impact of picture size on performance as well as the relationship between data transfer time and parallel computing time. Further, exploiting the characteristics of the different GPU memory types, an improved scheme of our method is developed, which uses shared memory in the GPU instead of global memory and further increases efficiency. Experimental results show that the two novel algorithms outperform the traditional sequential method based on OpenCV in terms of computing speed.
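The per-pixel operation being parallelized is the classical Laplacian sharpening update. A NumPy sketch is given below rather than the paper's CUDA kernel, which is not reproduced in the abstract; each output pixel depends only on a 3x3 neighborhood, which is exactly what makes a one-thread-per-pixel GPU mapping natural.

```python
import numpy as np

def laplacian_sharpen(img, alpha=1.0):
    """Classical Laplacian sharpening: out = img - alpha * laplacian(img).

    Uses the 4-neighbor Laplacian kernel [[0,1,0],[1,-4,1],[0,1,0]];
    alpha controls the sharpening strength.
    """
    f = img.astype(np.float64)
    p = np.pad(f, 1, mode='edge')                    # replicate border pixels
    lap = (p[:-2, 1:-1] + p[2:, 1:-1]                # top + bottom neighbors
           + p[1:-1, :-2] + p[1:-1, 2:]              # left + right neighbors
           - 4.0 * p[1:-1, 1:-1])                    # minus 4x the center
    return np.clip(f - alpha * lap, 0, 255).astype(img.dtype)
```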
Methodology for diagnosing of skin cancer on images of dermatologic spots by spectral analysis.
Guerra-Rosas, Esperanza; Álvarez-Borrego, Josué
2015-10-01
In this paper, a new methodology for diagnosing skin cancer in images of dermatologic spots using image processing is presented. Currently, skin cancer is one of the most frequent diseases in humans. The methodology is based on Fourier spectral analysis using filters such as the classic, inverse and k-law nonlinear filters. The sample images were obtained by a medical specialist, and a new spectral technique was developed to obtain a quantitative measurement of the complex pattern found in cancerous skin spots. Finally, a spectral index is calculated to obtain a range of spectral indices defined for skin cancer. Our results show a confidence level of 95.4%.
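The k-law nonlinear filter named here has a standard form: the Fourier magnitude is compressed by an exponent k while the phase is preserved. The sketch below shows that operation together with a deliberately simplistic scalar summary; the paper's actual spectral index is defined differently, so both the function names and the index formula are illustrative assumptions only.

```python
import numpy as np

def k_law_spectrum(img, k=0.3):
    """k-law nonlinear processing of an image spectrum.

    k = 1 keeps the classical linear spectrum, k = 0 keeps phase only;
    intermediate k compresses the magnitude while preserving phase.
    """
    F = np.fft.fft2(img.astype(float))
    return np.abs(F) ** k * np.exp(1j * np.angle(F))

def toy_spectral_index(img, k=0.3):
    """Illustrative scalar summary of the processed spectrum (NOT the
    paper's diagnostic index, whose definition is not reproduced here)."""
    S = np.fft.fftshift(np.abs(k_law_spectrum(img, k)))
    return S.mean() / S.max()
```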
Methodology for diagnosing of skin cancer on images of dermatologic spots by spectral analysis
Guerra-Rosas, Esperanza; Álvarez-Borrego, Josué
2015-01-01
In this paper, a new methodology for diagnosing skin cancer in images of dermatologic spots using image processing is presented. Currently, skin cancer is one of the most frequent diseases in humans. The methodology is based on Fourier spectral analysis using filters such as the classic, inverse and k-law nonlinear filters. The sample images were obtained by a medical specialist, and a new spectral technique was developed to obtain a quantitative measurement of the complex pattern found in cancerous skin spots. Finally, a spectral index is calculated to obtain a range of spectral indices defined for skin cancer. Our results show a confidence level of 95.4%. PMID:26504638
Zaitsev, Vladimir Y; Matveyev, Alexandr L; Matveev, Lev A; Gelikonov, Grigory V; Gelikonov, Valentin M; Vitkin, Alex
2015-07-01
Feasibility of speckle tracking in optical coherence tomography (OCT) based on digital image correlation (DIC) is discussed in the context of elastography problems. Specifics of applying DIC methods to OCT, compared to processing of photographic images in mechanical engineering applications, are emphasized and main complications are pointed out. Analytical arguments are augmented by accurate numerical simulations of OCT speckle patterns. In contrast to DIC processing for displacement and strain estimation in photographic images, the accuracy of correlational speckle tracking in deformed OCT images is strongly affected by the coherent nature of speckles, for which strain-induced complications of speckle “blinking” and “boiling” are typical. The tracking accuracy is further compromised by the usually more pronounced pixelated structure of OCT scans compared with digital photographic images in classical DIC applications. Processing of complex-valued OCT data (comprising both amplitude and phase) compared to intensity-only scans mitigates these deleterious effects to some degree. Criteria of the attainable speckle tracking accuracy and its dependence on the key OCT system parameters are established.
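The core DIC operation discussed here, locating the displacement of a deformed speckle sub-image relative to a reference, can be sketched with an FFT-based cross-correlation. This amplitude-only version is an illustration; the complex-valued processing advocated in the paper would feed complex OCT data through analogous machinery, and sub-pixel refinement is omitted.

```python
import numpy as np

def track_shift(ref, cur):
    """Estimate the integer-pixel displacement of `cur` relative to `ref`
    by locating the peak of their circular cross-correlation (via FFT).

    Assumes cur(x) ~ ref(x - d) over the window; returns (dy, dx) = d.
    """
    r = ref - ref.mean()                 # remove DC so the peak is sharp
    c = cur - cur.mean()
    xc = np.fft.ifft2(np.fft.fft2(c) * np.conj(np.fft.fft2(r))).real
    dy, dx = np.unravel_index(np.argmax(xc), xc.shape)
    # unwrap circular indices to signed displacements
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx
```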
Moderate Deviation Analysis for Classical Communication over Quantum Channels
NASA Astrophysics Data System (ADS)
Chubb, Christopher T.; Tan, Vincent Y. F.; Tomamichel, Marco
2017-11-01
We analyse families of codes for classical data transmission over quantum channels that have both a vanishing probability of error and a code rate approaching capacity as the code length increases. To characterise the fundamental tradeoff between decoding error, code rate and code length for such codes, we introduce a quantum generalisation of the moderate deviation analysis proposed by Altuğ and Wagner as well as by Polyanskiy and Verdú. We derive such a tradeoff for classical-quantum (as well as image-additive) channels in terms of the channel capacity and the channel dispersion, giving further evidence that the latter quantity characterises the necessary backoff from capacity when transmitting finite blocks of classical data. To derive these results we also study asymmetric binary quantum hypothesis testing in the moderate deviations regime. Due to the central importance of the latter task, we expect that our techniques will find further applications in the analysis of other quantum information processing tasks.
"Needle and Stick" Save the World: Sustainable Development and the Universal Child
ERIC Educational Resources Information Center
Dahlbeck, Johan; De Lucia Dahlbeck, Moa
2012-01-01
This text deals with a problem concerning processes of the productive power of knowledge. We draw on the so-called poststructural theories challenging the classical image of thought--as hinged upon a representational logic identifying entities in a rigid sense--when formulating a problem concerning the gap between knowledge and the object of…
From quantum to classical interactions between a free electron and a surface
NASA Astrophysics Data System (ADS)
Beierle, Peter James
Quantum theory is often cited as one of the most empirically validated theories in terms of its predictive power and precision. These attributes have led to numerous scientific discoveries and technological advancements. However, the precise relationship between quantum and classical physics remains obscure. The prevailing description is known as decoherence theory, in which classical physics emerges from a more general quantum theory through environmental interaction. Sometimes referred to as the decoherence program, it does not solve the quantum measurement problem, and we believe experiments performed between the microscopic and macroscopic worlds may help finish the program. The following considers a free electron that interacts with a surface (the environment), providing a controlled decoherence mechanism. There are non-decohering interactions to be examined and quantified before the weaker decohering effects can be filtered out. In the first experiment, an electron beam passes over a surface that is illuminated by low-power laser light. This induces a surface charge redistribution that deflects the electron, and the parameters of this phenomenon are investigated. The system can be well understood in terms of classical electrodynamics, and the technological applications of this electron beam switch are considered. Such phenomena may mask decoherence effects. A second experiment tests decoherence theory by introducing a nanofabricated diffraction grating before the surface. The electron undergoes diffraction through the grating, but various physical models predict that, as the electron passes over the surface, it will lose its wave interference property. Image-charge-based models, which predict a larger loss of contrast than is observed, are falsified (even though the electron does experience an image charge force). A theoretical study demonstrates how a loss of contrast may be due not to the irreversible process of decoherence, but to dephasing (a reversible process arising from randomization of the wavefunction's phase). To resolve this ambiguity, a correlation function on an ensemble of diffraction patterns is analyzed after an electron undergoes either process in a path integral calculation. The diffraction pattern is successfully recovered for dephasing, but not for decoherence, thus verifying the correlation function as a potential tool in experimental studies to determine the nature of the observed process.
Rosnell, Tomi; Honkavaara, Eija
2012-01-01
The objective of this investigation was to develop and investigate methods for point cloud generation by image matching using aerial image data collected by quadrocopter type micro unmanned aerial vehicle (UAV) imaging systems. Automatic generation of high-quality, dense point clouds from digital images by image matching is a recent, cutting-edge step forward in digital photogrammetric technology. The major components of the system for point cloud generation are a UAV imaging system, an image data collection process using high image overlaps, and post-processing with image orientation and point cloud generation. Two post-processing approaches were developed: one of the methods is based on Bae Systems’ SOCET SET classical commercial photogrammetric software and another is built using Microsoft®’s Photosynth™ service available on the Internet. Empirical testing was carried out in two test areas. Photosynth processing showed that it is possible to orient the images and generate point clouds fully automatically without any a priori orientation information or interactive work. The photogrammetric processing line provided dense and accurate point clouds that followed the theoretical principles of photogrammetry, but some artifacts were also detected. The point clouds from the Photosynth processing were sparser and noisier, largely because the method is not optimized for dense point cloud generation. Careful photogrammetric processing with self-calibration is required to achieve the highest accuracy. Our results demonstrate the high performance potential of the approach and show that with rigorous processing it is possible to reach results that are consistent with theory. We also point out several further research topics. Based on theoretical and empirical results, we give recommendations for the properties of the imaging sensor, data collection and processing of UAV image data to ensure accurate point cloud generation. PMID:22368479
Rosnell, Tomi; Honkavaara, Eija
2012-01-01
The objective of this investigation was to develop and investigate methods for point cloud generation by image matching using aerial image data collected by quadrocopter type micro unmanned aerial vehicle (UAV) imaging systems. Automatic generation of high-quality, dense point clouds from digital images by image matching is a recent, cutting-edge step forward in digital photogrammetric technology. The major components of the system for point cloud generation are a UAV imaging system, an image data collection process using high image overlaps, and post-processing with image orientation and point cloud generation. Two post-processing approaches were developed: one of the methods is based on Bae Systems' SOCET SET classical commercial photogrammetric software and another is built using Microsoft(®)'s Photosynth™ service available on the Internet. Empirical testing was carried out in two test areas. Photosynth processing showed that it is possible to orient the images and generate point clouds fully automatically without any a priori orientation information or interactive work. The photogrammetric processing line provided dense and accurate point clouds that followed the theoretical principles of photogrammetry, but some artifacts were also detected. The point clouds from the Photosynth processing were sparser and noisier, largely because the method is not optimized for dense point cloud generation. Careful photogrammetric processing with self-calibration is required to achieve the highest accuracy. Our results demonstrate the high performance potential of the approach and show that with rigorous processing it is possible to reach results that are consistent with theory. We also point out several further research topics. Based on theoretical and empirical results, we give recommendations for the properties of the imaging sensor, data collection and processing of UAV image data to ensure accurate point cloud generation.
Edge directed image interpolation with Bamberger pyramids
NASA Astrophysics Data System (ADS)
Rosiles, Jose Gerardo
2005-08-01
Image interpolation is a standard feature in digital image editing software, digital camera systems and printers. Classical methods for resizing produce blurred images of unacceptable quality. Bamberger pyramids and filter banks have been successfully used for texture and image analysis; they provide excellent multiresolution and directional selectivity. In this paper we present an edge-directed image interpolation algorithm which takes advantage of simultaneous spatial-directional edge localization at the subband level. The proposed algorithm outperforms classical schemes such as bilinear and bicubic interpolation from both the visual and the numerical point of view.
Classical Statistics and Statistical Learning in Imaging Neuroscience
Bzdok, Danilo
2017-01-01
Brain-imaging research has predominantly generated insight by means of classical statistics, including regression-type analyses and null-hypothesis testing using t-test and ANOVA. Throughout recent years, statistical learning methods enjoy increasing popularity especially for applications in rich and complex data, including cross-validated out-of-sample prediction using pattern classification and sparsity-inducing regression. This concept paper discusses the implications of inferential justifications and algorithmic methodologies in common data analysis scenarios in neuroimaging. It is retraced how classical statistics and statistical learning originated from different historical contexts, build on different theoretical foundations, make different assumptions, and evaluate different outcome metrics to permit differently nuanced conclusions. The present considerations should help reduce current confusion between model-driven classical hypothesis testing and data-driven learning algorithms for investigating the brain with imaging techniques. PMID:29056896
The Effect of Mental Rotation on Surgical Pathological Diagnosis.
Park, Heejung; Kim, Hyun Soo; Cha, Yoon Jin; Choi, Junjeong; Minn, Yangki; Kim, Kyung Sik; Kim, Se Hoon
2018-05-01
Pathological diagnosis involves a very delicate and complex sequence of processing steps conducted by a pathologist. The recognition of false patterns might be an important cause of misdiagnosis in the field of surgical pathology. In this study, we evaluated the influence of visual and cognitive bias on surgical pathologic diagnosis, focusing on the influence of "mental rotation." We designed three sets of the same images of biopsied uterine cervix specimens (original, left-to-right mirror images, and 180-degree rotated images), and recruited 32 pathologists to diagnose the three sets individually. The items were first found to be adequate for analysis by classical test theory, generalizability theory, and item response theory. The results showed no statistically significant differences in difficulty, discrimination indices, or response duration between the image sets: mental rotation did not influence the pathologists' diagnoses in practice. Interestingly, outliers were more frequent in the rotated image sets, suggesting that the mental rotation process may influence the pathological diagnoses of a few individual pathologists. © Copyright: Yonsei University College of Medicine 2018.
Single-Scale Fusion: An Effective Approach to Merging Images.
Ancuti, Codruta O; Ancuti, Cosmin; De Vleeschouwer, Christophe; Bovik, Alan C
2017-01-01
Due to its robustness and effectiveness, multi-scale fusion (MSF) based on the Laplacian pyramid decomposition has emerged as a popular technique that has shown utility in many applications. Guided by several intuitive measures (weight maps), the MSF process is versatile and straightforward to implement. However, the number of pyramid levels increases with the image size, which implies sophisticated data management and memory accesses, as well as additional computations. Here, we introduce a simplified formulation that reduces MSF to a single-level process. Starting from the MSF decomposition, we explain both mathematically and intuitively (visually) a way to simplify the classical MSF approach with minimal loss of information. The resulting single-scale fusion (SSF) solution is a close approximation of the MSF process that eliminates important redundant computations. It also provides insights into why MSF is so effective. While our simplified expression is derived in the context of high dynamic range imaging, we show its generality on several well-known fusion-based applications, such as image compositing, extended depth of field, medical imaging, and blending thermal (infrared) images with visible light. Besides visual validation, quantitative evaluations demonstrate that our SSF strategy yields results that are highly competitive with traditional MSF approaches.
Cryo-EM Structure Determination Using Segmented Helical Image Reconstruction.
Fromm, S A; Sachse, C
2016-01-01
Treating helices as sequences of single-particle-like segments followed by helical image reconstruction has become the method of choice for high-resolution structure determination of well-ordered helical viruses as well as flexible filaments. In this review, we illustrate how the combination of the latest hardware developments with optimized image processing routines has led to a series of near-atomic resolution structures of helical assemblies. Originally, the treatment of helices as a sequence of segments followed by Fourier-Bessel reconstruction revealed the potential to determine near-atomic resolution structures from helical specimens. In the meantime, real-space image processing of helices in a stack of single particles was developed, which enabled the structure determination of specimens that resisted classical Fourier helical reconstruction and also facilitated high-resolution structure determination. Despite the progress in real-space analysis, the combination of Fourier and real-space processing is still commonly used to better estimate the symmetry parameters, as the imposition of the correct helical symmetry is essential for high-resolution structure determination. The recent hardware advance introduced by direct electron detectors has significantly enhanced image quality and, together with improved image processing procedures, has made segmented helical reconstruction a very productive cryo-EM structure determination method. © 2016 Elsevier Inc. All rights reserved.
Generation of Classical DInSAR and PSI Ground Motion Maps on a Cloud Thematic Platform
NASA Astrophysics Data System (ADS)
Mora, Oscar; Ordoqui, Patrick; Romero, Laia
2016-08-01
This paper presents the experience of ALTAMIRA INFORMATION in deploying InSAR (Synthetic Aperture Radar Interferometry) services on the Geohazard Exploitation Platform (GEP), supported by ESA. Two different processing chains are presented together with ground motion maps obtained from cloud computing: DIAPASON for classical DInSAR and SPN (Stable Point Network) for PSI (Persistent Scatterer Interferometry) processing. The product obtained from DIAPASON is the interferometric phase related to ground motion (phase fringes from a SAR pair). SPN provides motion data (mean velocity and time series) on high-quality pixels from a stack of SAR images. DIAPASON is already implemented, while SPN is under development to be exploited with historical data from the ERS-1/2 and ENVISAT satellites and current acquisitions of SENTINEL-1 in SLC and TOPSAR modes.
Autonomous quantum to classical transitions and the generalized imaging theorem
NASA Astrophysics Data System (ADS)
Briggs, John S.; Feagin, James M.
2016-03-01
The mechanism of the transition of a dynamical system from quantum to classical mechanics is of continuing interest. Practically it is of importance for the interpretation of multi-particle coincidence measurements performed at macroscopic distances from a microscopic reaction zone. Here we prove the generalized imaging theorem which shows that the spatial wave function of any multi-particle quantum system, propagating over distances and times large on an atomic scale but still microscopic, and subject to deterministic external fields and particle interactions, becomes proportional to the initial momentum wave function where the position and momentum coordinates define a classical trajectory. Currently, the quantum to classical transition is considered to occur via decoherence caused by stochastic interaction with an environment. The imaging theorem arises from unitary Schrödinger propagation and so is valid without any environmental interaction. It implies that a simultaneous measurement of both position and momentum will define a unique classical trajectory, whereas a less complete measurement of say position alone can lead to quantum interference effects.
Autonomous quantum to classical transitions and the generalized imaging theorem
Briggs, John S.; Feagin, James M.
2016-03-16
The mechanism of the transition of a dynamical system from quantum to classical mechanics is of continuing interest. Practically it is of importance for the interpretation of multi-particle coincidence measurements performed at macroscopic distances from a microscopic reaction zone. We prove the generalized imaging theorem which shows that the spatial wave function of any multi-particle quantum system, propagating over distances and times large on an atomic scale but still microscopic, and subject to deterministic external fields and particle interactions, becomes proportional to the initial momentum wave function where the position and momentum coordinates define a classical trajectory. Currently, the quantum to classical transition is considered to occur via decoherence caused by stochastic interaction with an environment. The imaging theorem arises from unitary Schrödinger propagation and so is valid without any environmental interaction. It implies that a simultaneous measurement of both position and momentum will define a unique classical trajectory, whereas a less complete measurement of say position alone can lead to quantum interference effects.
Spatially-resolved heterogeneous dynamics in a strong colloidal gel
NASA Astrophysics Data System (ADS)
Buzzaccaro, Stefano; Alaimo, Matteo David; Secchi, Eleonora; Piazza, Roberto
2015-05-01
We re-examine the classical problem of irreversible colloid aggregation, showing that the application of Digital Fourier Imaging (DFI), a class of optical correlation methods that combine the power of light scattering and imaging, allows one to pick out novel useful evidence concerning the restructuring processes taking place in a strong colloidal gel. In particular, the spatially-resolved displacement fields provided by DFI strongly suggest that the temporally-intermittent local rearrangements taking place in the course of gel ageing are characterized by very long-ranged spatial correlations.
ERIC Educational Resources Information Center
Villarreal, Ronald P.; Steinmetz, Joseph E.
2005-01-01
How the nervous system encodes learning and memory processes has interested researchers for 100 years. Over this span of time, a number of basic neuroscience methods has been developed to explore the relationship between learning and the brain, including brain lesion, stimulation, pharmacology, anatomy, imaging, and recording techniques. In this…
Cost analysis of a project to digitize classic articles in neurosurgery*
Bauer, Kathleen
2002-01-01
In summer 2000, the Cushing/Whitney Medical Library at Yale University began a demonstration project to digitize classic articles in neurosurgery from the late 1800s and early 1900s. The objective of the first phase of the project was to measure the time and costs involved in digitization, and those results are reported here. In the second phase, metadata will be added to the digitized articles, and the project will be publicized. Thirteen articles were scanned using optical character recognition (OCR) software, and the resulting text files were carefully proofread. Time for photocopying, scanning, and proofreading were recorded. This project achieved an average cost per item (total pages plus images) of $4.12, a figure at the high end of average costs found in other studies. This project experienced high costs for two reasons. First, the articles contained many images, which required extra processing. Second, the older fonts and the poor condition of many of these articles complicated the OCR process. The average article cost $84.46 to digitize. Although costs were high, the selection of historically important articles maximized the benefit gained from the investment in digitization. PMID:11999182
Cost analysis of a project to digitize classic articles in neurosurgery.
Bauer, Kathleen
2002-04-01
In summer 2000, the Cushing/Whitney Medical Library at Yale University began a demonstration project to digitize classic articles in neurosurgery from the late 1800s and early 1900s. The objective of the first phase of the project was to measure the time and costs involved in digitization, and those results are reported here. In the second phase, metadata will be added to the digitized articles, and the project will be publicized. Thirteen articles were scanned using optical character recognition (OCR) software, and the resulting text files were carefully proofread. Time for photocopying, scanning, and proofreading were recorded. This project achieved an average cost per item (total pages plus images) of $4.12, a figure at the high end of average costs found in other studies. This project experienced high costs for two reasons. First, the articles contained many images, which required extra processing. Second, the older fonts and the poor condition of many of these articles complicated the OCR process. The average article cost $84.46 to digitize. Although costs were high, the selection of historically important articles maximized the benefit gained from the investment in digitization.
On Distinctions between Classical and Modern Rhetoric.
ERIC Educational Resources Information Center
Ede, Lisa; Lunsford, Andrea
The emergence of a modern or "new" rhetoric has been characterized by its attempt both to recover and reexamine the concepts of classical rhetoric and to define itself against that classical tradition. The distinctions that are persistently drawn between classical and modern rhetoric fall under four related heads: images of man and…
New approach to gallbladder ultrasonic images analysis and lesions recognition.
Bodzioch, Sławomir; Ogiela, Marek R
2009-03-01
This paper presents a new approach to gallbladder ultrasonic image processing and analysis aimed at detecting disease symptoms in the processed images. First, the paper presents a new method of filtering gallbladder contours out of USG images. A major stage in this filtration is segmenting and sectioning off the areas occupied by the organ. In most cases this procedure is based on filtration, which plays a key role in the process of diagnosing pathological changes. Unfortunately, ultrasound images are among the most troublesome to analyze owing to the echogenic inconsistency of the structures under observation. This paper provides an inventive algorithm for the holistic extraction of gallbladder image contours, based on rank filtration as well as on the analysis of histogram sections of the tested organs. The second part concerns detecting lesion symptoms of the gallbladder. Automating a process of diagnosis always comes down to developing algorithms that analyze the object of such diagnosis and verify the occurrence of symptoms related to a given affliction; usually the final stage is to make a diagnosis based on the detected symptoms. This last stage can be carried out either through dedicated expert systems or through a more classic pattern analysis approach, such as using rules to determine the illness from the detected symptoms. This paper discusses pattern analysis algorithms for gallbladder image interpretation towards the classification of the most frequent illness symptoms of this organ.
Improving Zernike moments comparison for optimal similarity and rotation angle retrieval.
Revaud, Jérôme; Lavoué, Guillaume; Baskurt, Atilla
2009-04-01
Zernike moments constitute a powerful shape descriptor in terms of robustness and description capability. However the classical way of comparing two Zernike descriptors only takes into account the magnitude of the moments and loses the phase information. The novelty of our approach is to take advantage of the phase information in the comparison process while still preserving the invariance to rotation. This new Zernike comparator provides a more accurate similarity measure together with the optimal rotation angle between the patterns, while keeping the same complexity as the classical approach. This angle information is particularly of interest for many applications, including 3D scene understanding through images. Experiments demonstrate that our comparator outperforms the classical one in terms of similarity measure. In particular the robustness of the retrieval against noise and geometric deformation is greatly improved. Moreover, the rotation angle estimation is also more accurate than state-of-the-art algorithms.
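The principle of the phase-aware comparison can be sketched compactly: under a rotation by theta, a Zernike moment of repetition m acquires a phase factor exp(-i m theta), so a similarity measure and the optimal angle can be obtained jointly. The grid search below illustrates the principle only; the paper's comparator achieves this at the same complexity as the classical magnitude-only comparison, and the dict-of-moments interface is an assumption of this sketch.

```python
import numpy as np

def best_rotation(A, B, n_angles=360):
    """Phase-aware comparison of two sets of complex Zernike moments.

    A, B: dicts mapping (n, m) -> complex moment. Grid-searches theta for
    the distance  sum |A - B * exp(-1j*m*theta)|^2  over shared moments and
    returns (optimal angle in radians, residual distance). A closed-form or
    coarse-to-fine search would be faster; this shows the principle only.
    """
    thetas = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    dist = np.zeros_like(thetas)
    for (n, m) in sorted(set(A) & set(B)):
        dist += np.abs(A[(n, m)] - B[(n, m)] * np.exp(-1j * m * thetas)) ** 2
    i = int(np.argmin(dist))
    return thetas[i], dist[i]
```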
NASA Astrophysics Data System (ADS)
Guerrero Prado, Patricio; Nguyen, Mai K.; Dumas, Laurent; Cohen, Serge X.
2017-01-01
Characterization and interpretation of flat ancient material objects, such as those found in archaeology, paleoenvironments, paleontology, and cultural heritage, have remained a challenging task to perform by means of conventional x-ray tomography methods due to their anisotropic morphology and flattened geometry. To overcome the limitations of the mentioned methodologies for such samples, an imaging modality based on Compton scattering is proposed in this work. Classical x-ray tomography treats Compton scattering data as noise in the image formation process, while in Compton scattering tomography the conditions are set such that Compton data become the principal image contrasting agent. Under these conditions, we are able, first, to avoid relative rotations between the sample and the imaging setup, and second, to obtain three-dimensional data even when the object is supported by a dense material by exploiting backscattered photons. Mathematically this problem is addressed by means of a conical Radon transform and its inversion. The image formation process and object reconstruction model are presented. The feasibility of this methodology is supported by numerical simulations.
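For orientation, the two ingredients named in the abstract can be written out. In Compton scattering tomography, the recorded energy selects the scattering angle through the Compton relation, so each measurement integrates the object over a cone; a schematic form (not the paper's exact operator) is

$$E_\omega = \frac{E_0}{1 + \dfrac{E_0}{m_e c^2}\,(1 - \cos\omega)}, \qquad (\mathcal{C}f)(\mathbf{d}, \omega) = \int_{\mathrm{cone}(\mathbf{d},\,\omega)} f \, \mathrm{d}s,$$

where $E_0$ is the source energy, $E_\omega$ the energy scattered through angle $\omega$, $m_e c^2$ the electron rest energy, and $\mathbf{d}$ the detection point on which the cone's vertex sits.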
Error image aware content restoration
NASA Astrophysics Data System (ADS)
Choi, Sungwoo; Lee, Moonsik; Jung, Byunghee
2015-12-01
As the resolution of TV has significantly increased, content consumers have become increasingly sensitive to the subtlest defects in TV content. This rising standard of quality demanded by consumers has posed a new challenge in today's context, where the tape-based process has transitioned to the file-based process: the transition necessitated digitalizing old archives, a process which inevitably produces errors such as disordered pixel blocks, scattered white noise, or totally missing pixels. Unsurprisingly, detecting and fixing such errors requires a substantial amount of time and human labor to meet the standard demanded by today's consumers. In this paper, we introduce a novel, automated error restoration algorithm which can be applied to different types of classic errors by utilizing adjacent images while preserving the undamaged parts of an error image as much as possible. We tested our method on error images detected by our quality-check system in the KBS (Korean Broadcasting System) video archive. We are also implementing the algorithm as a plugin for a well-known NLE (non-linear editing system), a familiar tool for quality-control agents.
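A toy sketch of the general strategy described above (not KBS's production algorithm): flag pixels that deviate strongly from the temporal median of the adjacent frames and replace only those, so undamaged content is preserved. The threshold is a hypothetical parameter.

```python
# Illustrative sketch: repair damaged pixel blocks using the temporal
# median of adjacent frames, touching only pixels flagged as erroneous.
import numpy as np

def restore_with_neighbors(frames: np.ndarray, t: int, thresh: float = 40.0):
    """frames: (T, H, W) grayscale sequence; t: index of the error frame."""
    reference = np.median(frames[[t - 1, t + 1]], axis=0)  # adjacent frames
    error_mask = np.abs(frames[t].astype(float) - reference) > thresh
    restored = frames[t].copy()
    restored[error_mask] = reference[error_mask]           # fill errors only
    return restored, error_mask
```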
Atomic photoionization processes under magnification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lepine, F.; Bordas, Ch.; Nicole, C.
2004-09-01
Recently, classical simulations of threshold photoionization in the presence of an electric field have shown that a clear distinction between direct and indirect trajectories followed by the outgoing electron can be observed in the patterns of electron impacts on a two-dimensional detector. Subsequently, slow photoelectron imaging experiments have been reported where this distinction could be observed in atomic xenon. Furthermore, using a magnifying electrostatic lens to improve the velocity-map imaging technique, oscillatory patterns were observed modulating the classical envelope that was measured in the experiments of Nicole et al. [Phys. Rev. Lett. 88, 133001 (2002)]. This extension of slow photoelectron imaging, called photoionization microscopy, relies on the existence of interferences between various trajectories by which the electron moves from the atom to the plane of observation. In this article we present the main experimental results obtained both in slow photoelectron imaging and in photoionization microscopy. The formation of the interference pattern is discussed in the framework of a semiclassical model that is described in detail elsewhere. The qualitative information that can be drawn from the experiments is discussed, and the potential applications of photoionization microscopy are considered. Particular attention is paid to the role of continuum Stark resonances that appear between the saddle point in the Coulomb+dc field potential and the field-free ionization limit.
Contour sensitive saliency and depth application in image retargeting
NASA Astrophysics Data System (ADS)
Lu, Hongju; Yue, Pengfei; Zhao, Yanhui; Liu, Rui; Fu, Yuanbin; Zheng, Yuanjie; Cui, Jia
2018-04-01
Image retargeting requires preserving important information with little edge distortion while increasing or decreasing the image size. The major existing content-aware methods perform well; however, two problems remain: slight distortion at object edges and structure distortion in non-salient areas. According to psychological theories, people evaluate image quality based on multi-level judgments and comparisons between different areas, considering both image content and image structure. This paper proposes a new criterion: structure preservation in the non-salient area. From observation and image analysis, slight blur is generally present at the edges of objects. This blur feature is used to estimate the depth cue, named the blur depth descriptor, which can be used in the saliency computation for a balanced image retargeting result. In order to keep the structure information in the non-salient area, a salient edge map is used in the Seam Carving process instead of a field-based saliency computation. The derivative saliency from the x- and y-directions avoids the redundant energy seams around salient objects that cause structure distortion. Comparison experiments between classical approaches and ours demonstrate the feasibility of our algorithm.
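As a concrete reference for the energy modification described above, here is a minimal Seam Carving sketch: a gradient-magnitude energy augmented with a derivative (x-/y-direction) saliency term, followed by the standard dynamic-programming seam search. The blend weight `alpha` is hypothetical; the paper's exact weighting is not reproduced.

```python
# Sketch of classical Seam Carving with an added edge-saliency term.
import numpy as np

def seam_energy(gray: np.ndarray, saliency: np.ndarray, alpha: float = 0.5):
    gy, gx = np.gradient(gray.astype(float))
    # derivative (x-/y-direction) saliency instead of a field-based map
    sy, sx = np.gradient(saliency.astype(float))
    return np.hypot(gx, gy) + alpha * np.hypot(sx, sy)

def min_vertical_seam(energy: np.ndarray) -> np.ndarray:
    h, w = energy.shape
    cost = energy.copy()
    for i in range(1, h):                      # dynamic programming pass
        left = np.r_[np.inf, cost[i - 1, :-1]]
        right = np.r_[cost[i - 1, 1:], np.inf]
        cost[i] += np.minimum(np.minimum(left, cost[i - 1]), right)
    seam = np.empty(h, dtype=int)
    seam[-1] = np.argmin(cost[-1])
    for i in range(h - 2, -1, -1):             # backtrack the cheapest path
        j = seam[i + 1]
        lo, hi = max(j - 1, 0), min(j + 2, w)
        seam[i] = lo + np.argmin(cost[i, lo:hi])
    return seam
```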
Imaging multiple sclerosis and other neurodegenerative diseases
Inglese, Matilde; Petracca, Maria
2013-01-01
Although the prevalence of neurodegenerative diseases is increasing as a consequence of the growing aging population, the exact pathophysiological mechanisms leading to these diseases remain obscure. Multiple sclerosis (MS), an autoimmune disease of the central nervous system and the most frequent cause of disability among young people after traumatic brain injury, is characterized by inflammatory/demyelinating and neurodegenerative processes that occur earlier in life. The ability to make an early diagnosis of MS with the support of conventional MRI techniques provides the opportunity to study neurodegeneration and the underlying pathophysiological processes at earlier stages than in classical neurodegenerative diseases. This review summarizes mechanisms of neurodegeneration common to MS and to Alzheimer disease, Parkinson disease, and amyotrophic lateral sclerosis, and provides a brief overview of the neuroimaging studies employing MRI and PET techniques to investigate and monitor neurodegeneration in both MS and classical neurodegenerative diseases. PMID:23117868
NASA Astrophysics Data System (ADS)
Ullah, Kaleem; Garcia-Camara, Braulio; Habib, Muhammad; Yadav, N. P.; Liu, Xuefeng
2018-07-01
In this work, we report an indirect way to image the Stokes parameters of a sample under test (SUT) with sub-diffraction scattering information. We apply our previously reported technique, called parametric indirect microscopic imaging (PIMI), based on a fitting and filtration process, to measure the Stokes parameters of a submicron particle. A comparison with a classical Stokes measurement is also shown. By modulating the incident field in a precise way, the fitting and filtration process at each pixel of the detector in PIMI enables us to resolve and sense the scattering information of the SUT and map it in terms of the Stokes parameters. We believe that our findings can be very useful in fields such as singular optics, optical nanoantennas, and biomedicine. The spatial signature of the Stokes parameters given by our method has been confirmed with the finite-difference time-domain (FDTD) method.
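For context, the classical (direct) Stokes measurement that the paper compares against can be written compactly; PIMI's modulation, fitting, and filtration steps are not reproduced in this sketch.

```python
# Sketch of the classical Stokes measurement used as the baseline here.
import numpy as np

def stokes_from_intensities(i0, i90, i45, i135, ircp, ilcp):
    """Each argument is an intensity image taken behind the usual
    polarization analyzers; returns the four Stokes parameter maps."""
    s0 = i0 + i90            # total intensity
    s1 = i0 - i90            # horizontal vs vertical linear
    s2 = i45 - i135          # +45 vs -45 linear
    s3 = ircp - ilcp         # right vs left circular
    return np.stack([s0, s1, s2, s3])
```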
Functional imaging of conditioned aversive emotional responses in antisocial personality disorder.
Schneider, F; Habel, U; Kessler, C; Posse, S; Grodd, W; Müller-Gärtner, H W
2000-01-01
Individuals with antisocial personality disorder (n = 12) and healthy controls (n = 12) were examined for cerebral regional activation involved in the processing of negative affect. A differential aversive classical conditioning paradigm was applied with odors as unconditioned stimuli and faces as conditioned stimuli. Functional magnetic resonance imaging (fMRI) based on echo-planar imaging was used while cerebral activity was studied during habituation, acquisition, and extinction. Individually defined cerebral regions were analyzed. Both groups showed behavioral conditioning, as indicated by subjective ratings of emotional valence for the conditioned stimuli. Differential effects were found during acquisition in the amygdala and dorsolateral prefrontal cortex: controls showed signal decreases, patients signal increases. These preliminary results revealed unexpected signal increases in cortical/subcortical areas of patients. The increases may result from an additional effort put in by these individuals to form negative emotional associations, a pattern of processing that may correspond to their characteristically deviant emotional behavior. Copyright 2000 S. Karger AG, Basel.
Segmentation-free image processing and analysis of precipitate shapes in 2D and 3D
NASA Astrophysics Data System (ADS)
Bales, Ben; Pollock, Tresa; Petzold, Linda
2017-06-01
Segmentation-based image analysis techniques are routinely employed for quantitative analysis of complex microstructures containing two or more phases. The primary advantage of these approaches is that spatial information on the distribution of phases is retained, enabling subjective judgements of the quality of the segmentation and of the subsequent analysis process. The downside is that computing micrograph segmentations from data on morphologically complex microstructures gathered with error-prone detectors is challenging and, if no special care is taken, the artifacts of the segmentation will make any subsequent analysis and conclusions uncertain. In this paper we demonstrate, using a two-phase nickel-base superalloy microstructure as a model system, a new methodology for analysis of precipitate shapes using a segmentation-free approach based on the histogram of oriented gradients feature descriptor, a classic tool in image analysis. The benefits of this methodology for analysis of microstructure in two and three dimensions are demonstrated.
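A brief sketch of the segmentation-free idea under stated assumptions: a histogram-of-oriented-gradients descriptor computed on the raw micrograph stands in for precipitate shape statistics, with no segmentation step. Parameter values are illustrative, not those of the paper.

```python
# Sketch: one global shape-statistics vector per micrograph via HOG.
import numpy as np
from skimage.feature import hog

def micrograph_descriptor(image: np.ndarray) -> np.ndarray:
    return hog(image,
               orientations=9,
               pixels_per_cell=(16, 16),
               cells_per_block=(2, 2),
               feature_vector=True)

# two microstructures can then be compared by, e.g., the distance
# between their descriptors: np.linalg.norm(d1 - d2)
```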
NASA Astrophysics Data System (ADS)
García Juan, David; Delattre, Bénédicte M. A.; Trombella, Sara; Lynch, Sean; Becker, Matthias; Choi, Hon Fai; Ratib, Osman
2014-03-01
Musculoskeletal disorders (MSD) are becoming a major healthcare and economic burden in developed countries with aging populations. Classical methods used in clinical practice for muscle assessment, such as biopsy or EMG, are invasive and not sufficiently accurate for measuring impairments of muscular performance. Non-invasive imaging techniques can nowadays provide effective alternatives for static and dynamic assessment of muscle function. In this paper we present work aimed toward the development of a generic data structure for handling n-dimensional metabolic and anatomical data acquired from hybrid PET/MR scanners. Special static and dynamic protocols were developed for the assessment of physical and functional images of individual muscles of the lower limb. In an initial stage of the project, a manual segmentation of selected muscles was performed on high-resolution 3D static images and subsequently interpolated to a full dynamic set of contours from selected 2D dynamic images across different levels of the leg. This results in a full set of 4D data of lower limb muscles at rest and during exercise. These data can further be extended to 5D by adding metabolic data obtained from PET images. Our data structure and the corresponding image processing extension allow for better evaluation of the large volumes of multidimensional imaging data that are acquired and processed to generate dynamic models of the moving lower limb and its muscular function.
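A hypothetical sketch of such a generic n-dimensional container, here using xarray labelled dimensions (the project's actual data structure is not published in the abstract): 4D dynamic anatomy plus a modality axis gives the 5D dataset.

```python
import numpy as np
import xarray as xr

# 4D dynamic anatomy: time x slice x row x column (sizes are placeholders)
anatomy = xr.DataArray(np.zeros((20, 40, 256, 256), dtype=np.float32),
                       dims=("time", "z", "y", "x"))
metabolism = xr.DataArray(np.zeros((20, 40, 256, 256), dtype=np.float32),
                          dims=("time", "z", "y", "x"))

# stacking the two modalities adds the fifth dimension
limb_5d = xr.concat([anatomy, metabolism], dim="modality")
limb_5d = limb_5d.assign_coords(modality=["MR", "PET"])

# labelled selection, e.g. the PET series of one slice over time:
series = limb_5d.sel(modality="PET").isel(z=10)
```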
Benameur, S.; Mignotte, M.; Meunier, J.; Soucy, J. -P.
2009-01-01
Image restoration is usually viewed as an ill-posed problem in image processing, since there is no unique solution associated with it. The quality of the restored image depends closely on the constraints imposed on the characteristics of the solution. In this paper, we propose an original extension of the NAS-RIF restoration technique that uses information fusion as prior information, with application to SPECT medical imaging. That extension allows the restoration process to be constrained by efficiently incorporating, within the NAS-RIF method, a regularization term which stabilizes the inverse solution. Our restoration method is constrained by anatomical information extracted from a high-resolution anatomical procedure such as magnetic resonance imaging (MRI). This structural anatomy-based regularization term uses the result of an unsupervised Markovian segmentation obtained after a preliminary registration step between the MRI and SPECT data volumes from each patient. The method was successfully tested on 30 pairs of brain MRI and SPECT acquisitions from different subjects and on Hoffman and Jaszczak SPECT phantoms. The experiments demonstrated that the method performs better, in terms of signal-to-noise ratio, than a classical supervised restoration approach using a Metz filter. PMID:19812704
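Schematically, and only as one plausible reading of the abstract, such a constrained restoration can be written as

$$\hat{u} = \arg\min_{u} \; J_{\text{NAS-RIF}}(u) + \lambda \sum_{x} \bigl(u(x) - \bar{u}_{s(x)}\bigr)^2,$$

where $s(x)$ is the anatomical label assigned to voxel $x$ by the Markovian MRI segmentation, $\bar{u}_{s(x)}$ is the mean intensity of that region, and $\lambda$ weights the stabilizing term. The quadratic form is our assumption; the paper's exact regularizer may differ.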
Second-Order Asymptotics for the Classical Capacity of Image-Additive Quantum Channels
NASA Astrophysics Data System (ADS)
Tomamichel, Marco; Tan, Vincent Y. F.
2015-08-01
We study non-asymptotic fundamental limits for transmitting classical information over memoryless quantum channels, i.e. we investigate the amount of classical information that can be transmitted when a quantum channel is used a finite number of times and a fixed, non-vanishing average error is permissible. In this work we consider the classical capacity of quantum channels that are image-additive, including all classical to quantum channels, as well as the product state capacity of arbitrary quantum channels. In both cases we show that the non-asymptotic fundamental limit admits a second-order approximation that illustrates the speed at which the rate of optimal codes converges to the Holevo capacity as the blocklength tends to infinity. The behavior is governed by a new channel parameter, called channel dispersion, for which we provide a geometrical interpretation.
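The second-order approximation referred to in the abstract has the standard form

$$\log M^{*}(n, \varepsilon) = nC + \sqrt{nV}\, \Phi^{-1}(\varepsilon) + O(\log n),$$

where $M^{*}(n, \varepsilon)$ is the maximum code size for $n$ channel uses at average error $\varepsilon$, $C$ is the Holevo capacity, $V$ is the channel dispersion introduced by the authors, and $\Phi^{-1}$ is the inverse of the standard normal cumulative distribution function.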
Accelerating Spaceborne SAR Imaging Using Multiple CPU/GPU Deep Collaborative Computing
Zhang, Fan; Li, Guojun; Li, Wei; Hu, Wei; Hu, Yuxin
2016-01-01
With the development of synthetic aperture radar (SAR) technologies in recent years, the huge amount of remote sensing data brings challenges for real-time imaging processing. Therefore, high performance computing (HPC) methods have been presented to accelerate SAR imaging, especially GPU based methods. In the classical GPU based imaging algorithm, the GPU is employed to accelerate image processing by massively parallel computing, while the CPU is used only for auxiliary work such as data input/output (IO). The computing capability of the CPU is thus ignored and underestimated. In this work, a new deep collaborative SAR imaging method based on multiple CPUs/GPUs is proposed to achieve real-time SAR imaging. Through the proposed task partitioning and scheduling strategy, the whole image can be generated with deep collaborative multiple CPU/GPU computing. For the CPU parallel imaging part, the advanced vector extensions (AVX) method is introduced into the multi-core CPU parallel method for higher efficiency. As for GPU parallel imaging, not only are the bottlenecks of memory limitation and frequent data transfers broken, but several optimization strategies, such as streaming and parallel pipelining, are also applied. Experimental results demonstrate that the deep CPU/GPU collaborative imaging method enhances the efficiency of SAR imaging over a single-core CPU by a factor of 270 and realizes real-time imaging, in that the imaging rate outperforms the raw-data generation rate. PMID:27070606
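A schematic sketch of the task-partitioning idea only; a real system would pair CPU workers with GPU streams, and everything here, including the stand-in focusing kernel, is illustrative.

```python
# Schematic sketch: azimuth blocks dealt out to CPU workers (standing in
# for the AVX multi-core path); a separate worker would own the GPU
# stream in a real system.
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def focus_block(block: np.ndarray) -> np.ndarray:
    # stand-in for range compression: matched filtering via FFT
    spectrum = np.fft.fft(block, axis=1)
    return np.fft.ifft(spectrum * np.conj(spectrum), axis=1).real

def collaborative_imaging(raw: np.ndarray, n_workers: int = 4):
    blocks = np.array_split(raw, n_workers, axis=0)   # task partitioning
    # call from inside `if __name__ == "__main__":` on spawn-based platforms
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        return np.vstack(list(pool.map(focus_block, blocks)))
```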
[Spatial domain display for interference image dataset].
Wang, Cai-Ling; Li, Yu-Shan; Liu, Xue-Bin; Hu, Bing-Liang; Jing, Juan-Juan; Wen, Jia
2011-11-01
The need for visualization of imaging interferometer data is pressing for users performing image interpretation and information extraction. However, conventional research on visualization focuses only on spectral image datasets in the spectral domain. Hence, the quick display of interference spectral image datasets is one of the key steps in interference image processing. The conventional visualization of an interference dataset applies a classical spectral image display method after a Fourier transformation. In the present paper, the problem of quickly viewing interference imager data in the image domain is addressed, and an algorithm that simplifies the matter is proposed. The Fourier transformation is an obstacle since its computation time is large, and the situation deteriorates further as the dataset size increases. The proposed algorithm, named interference weighted envelopes, frees the dataset from the transformation. The authors choose three interference weighted envelopes based respectively on the Fourier transformation, the features of interference data, and the human visual system. A comparison of the proposed method with conventional methods shows a large difference in display time.
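The three envelopes themselves are not specified in the abstract, so the following heavily hedged sketch shows only the transform-free principle: weight the interferogram cube along the OPD axis and sum, with no Fourier transform. The weighting shown is an assumption for illustration.

```python
# Generic transform-free quick look: weighted envelope along the OPD axis.
import numpy as np

def weighted_envelope(cube: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """cube: (opd, y, x) interferogram dataset; weights: (opd,) vector."""
    return np.tensordot(weights, np.abs(cube), axes=1)  # no Fourier transform

# e.g., a triangular weighting emphasizing the zero-OPD burst:
# w = np.bartlett(cube.shape[0]); quicklook = weighted_envelope(cube, w)
```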
Vaquero-Cristóbal, Raquel; Kazarez, Miguel; Esparza-Ros, Francisco
2017-11-16
Dance discipline may modulate the presence of body image alterations, a factor related to eating disorders. The aim was to analyze body image distortion and dissatisfaction in student dancers by dance discipline. Two hundred and ninety-eight preadolescent, adolescent, and young classical, contemporary, and Spanish dancers took part in the study. Participants completed the "silhouette scale for adolescents" in order to determine their perceived and ideal images. The real body image was calculated from body mass index (BMI) data. After that, the distortion index, the dissatisfaction index, and the relation between real and ideal image were calculated. Regarding the distortion index, classical and contemporary dancers perceived themselves with a higher BMI than they had, whereas Spanish dancers showed the opposite tendency, with significant differences between classical dancers and the other modalities (p < 0.017). Based on the distortion index results, ten dancers showed a high risk of developing an eating disorder. For the dissatisfaction index, all disciplines selected ideal silhouettes thinner than they perceived themselves to be, without significant differences. Regarding the real/ideal index, contemporary and Spanish dancers considered silhouettes with a lower BMI than they had as ideal, whereas classical dancers showed the opposite tendency, with significant differences between this group and the others (p < 0.017). Most dancers have a self-image that does not correspond to reality, which could act as a factor inducing eating disorders.
The role of images in the development of Renaissance natural history.
Kusukawa, Sachiko
2011-01-01
This review surveys recent scholarship on the history of natural history with special attention to the role of images in the Renaissance. It discusses how classicism, collecting and printing were important catalysts for the Renaissance study of nature. Classicism provided inspiration of how to study and what kind of object to examine in nature, and several images from the period can be shown to reflect these classical values. The development of the passion for collecting and the rise of commerce in nature's commodities led to the circulation of a large number of exotic flora and fauna. Pictures enabled scholars to access unobtainable objects, build up knowledge of rare objects over time, and study them long after the live specimens had died away. Printing replicated pictures alongside texts and enabled scholars to share and accumulate knowledge. Images, alongside objects and text, were an important means of studying nature. Naturalists' images, in turn, became part of a larger visual culture in which nature was regarded as a beautiful and fascinating object of admiration.
High-Speed Imaging Analysis of Register Transitions in Classically and Jazz-Trained Male Voices.
Dippold, Sebastian; Voigt, Daniel; Richter, Bernhard; Echternach, Matthias
2015-01-01
Little data are available concerning register functions in different styles of singing such as classically or jazz-trained voices. Differences between registers seem to be much more audible in jazz singing than classical singing, and so we hypothesized that classically trained singers exhibit a smoother register transition, stemming from more regular vocal fold oscillation patterns. High-speed digital imaging (HSDI) was used for 19 male singers (10 jazz-trained singers, 9 classically trained) who performed a glissando from modal to falsetto register across the register transition. Vocal fold oscillation patterns were analyzed in terms of different parameters of regularity such as relative average perturbation (RAP), correlation dimension (D2) and shimmer. HSDI observations showed more regular vocal fold oscillation patterns during the register transition for the classically trained singers. Additionally, the RAP and D2 values were generally lower and more consistent for the classically trained singers compared to the jazz singers. However, intergroup comparisons showed no statistically significant differences. Some of our results may support the hypothesis that classically trained singers exhibit a smoother register transition from modal to falsetto register. © 2015 S. Karger AG, Basel.
Smart sensor for terminal homing
NASA Astrophysics Data System (ADS)
Panda, D.; Aggarwal, R.; Hummel, R.
1980-01-01
The practical scene matching problem is considered to present certain complications which must extend classical image processing capabilities. Certain aspects of the scene matching problem which must be addressed by a smart sensor for terminal homing are discussed. First a philosophy for treating the matching problem for the terminal homing scenario is outlined. Then certain aspects of the feature extraction process and symbolic pattern matching are considered. It is thought that in the future general ideas from artificial intelligence will be more useful for terminal homing requirements of fast scene recognition and pattern matching.
Hierarchical Image Segmentation of Remotely Sensed Data using Massively Parallel GNU-LINUX Software
NASA Technical Reports Server (NTRS)
Tilton, James C.
2003-01-01
A hierarchical set of image segmentations is a set of several image segmentations of the same image at different levels of detail, in which the segmentations at coarser levels of detail can be produced from simple merges of regions at finer levels of detail. In [1], Tilton et al. describe an approach for producing hierarchical segmentations (called HSEG) and give a progress report on exploiting these hierarchical segmentations for image information mining. The HSEG algorithm is a hybrid of region growing and constrained spectral clustering that produces a hierarchical set of image segmentations based on detected convergence points. In the main, HSEG employs the hierarchical stepwise optimization (HSWO) approach to region growing, which was described as early as 1989 by Beaulieu and Goldberg. The HSWO approach seeks to produce segmentations that are more optimized than those produced by more classic approaches to region growing (e.g., Horowitz and Pavlidis [3]). In addition, HSEG optionally interjects, between HSWO region growing iterations, merges between spatially non-adjacent regions (i.e., spectrally based merging or clustering) constrained by a threshold derived from the previous HSWO region growing iteration. While the addition of constrained spectral clustering improves the utility of the segmentation results, especially for larger images, it also significantly increases HSEG's computational requirements. To counteract this, a computationally efficient recursive, divide-and-conquer implementation of HSEG (RHSEG) was devised, which includes special code to avoid processing artifacts caused by RHSEG's recursive subdivision of the image data. The recursive nature of RHSEG makes for a straightforward parallel implementation. This paper describes the HSEG algorithm, its recursive formulation (referred to as RHSEG), and the implementation of RHSEG using massively parallel GNU-LINUX software. Results with Landsat TM data are included, comparing RHSEG with classic region growing.
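As a toy illustration of the HSWO idea (best-merge region growing recorded as a hierarchy; HSEG's constrained spectral-clustering step is omitted), under the assumption of a small grayscale image and a mean-difference merge criterion:

```python
# Toy HSWO-style region growing: always perform the merge of adjacent
# regions that least increases dissimilarity, recording the hierarchy.
import numpy as np

def best_merge(label, stats):
    pairs = set()
    for a, b in ((label[:, :-1], label[:, 1:]),     # horizontal neighbors
                 (label[:-1, :], label[1:, :])):    # vertical neighbors
        for p, q in zip(a.ravel(), b.ravel()):
            if p != q:
                pairs.add((min(p, q), max(p, q)))
    return min(pairs, key=lambda pq: abs(stats[pq[0]][0] - stats[pq[1]][0]))

def hswo(img: np.ndarray, n_regions: int):
    label = np.arange(img.size).reshape(img.shape)  # one region per pixel
    stats = {i: (float(v), 1) for i, v in enumerate(img.ravel())}  # mean, size
    hierarchy = []                                  # merge order = detail levels
    while len(stats) > n_regions:
        p, q = best_merge(label, stats)
        (mp, sp), (mq, sq) = stats[p], stats[q]
        stats[p] = ((mp * sp + mq * sq) / (sp + sq), sp + sq)
        del stats[q]
        label[label == q] = p
        hierarchy.append((p, q))
    return label, hierarchy
```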
NASA Astrophysics Data System (ADS)
Nijssen, B.; Hamman, J.; Bohn, T. J.
2015-12-01
The Variable Infiltration Capacity (VIC) model is a macro-scale semi-distributed hydrologic model. VIC development began in the early 1990s and it has been used extensively, applied from basin to global scales. VIC has been applied in many use cases, including the construction of hydrologic data sets, trend analysis, data evaluation and assimilation, forecasting, coupled climate modeling, and climate change impact analysis. Ongoing applications of the VIC model include the University of Washington's drought monitor and forecast systems, and NASA's land data assimilation systems. The development of VIC version 5.0 focused on reconfiguring the legacy VIC source code to support a wider range of modern modeling applications. The VIC source code has been moved to a public GitHub repository to encourage participation by the model development community at large. The reconfiguration has separated the physical core of the model from the driver, which is responsible for memory allocation, pre- and post-processing, and I/O. VIC 5.0 includes four drivers that use the same physical model core: classic, image, CESM, and Python. The classic driver supports legacy VIC configurations and runs in the traditional time-before-space configuration. The image driver includes a space-before-time configuration, netCDF I/O, and uses MPI for parallel processing. This configuration facilitates the direct coupling of streamflow routing, reservoir, and irrigation processes within VIC. The image driver is the foundation of the CESM driver, which couples VIC to CESM's CPL7 and a prognostic atmosphere. Finally, we have added a Python driver that provides access to the functions and datatypes of VIC's physical core from a Python interface. This presentation demonstrates how reconfiguring legacy source code extends the life and applicability of a research model.
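A schematic sketch of the driver/physical-core split using a hypothetical API (not VIC's actual source): each driver reuses the same core function and differs only in loop order and I/O responsibilities.

```python
# Hypothetical illustration of one physical core shared by two drivers.
import numpy as np

def physical_core(state: float, forcing: float) -> float:
    return state + 0.1 * (forcing - state)     # stand-in for VIC physics

def classic_driver(states: np.ndarray, forcings: np.ndarray):
    # time-before-space: march each cell through all timesteps in turn
    for cell in range(states.shape[0]):
        for t in range(forcings.shape[1]):
            states[cell] = physical_core(states[cell], forcings[cell, t])

def image_driver(states: np.ndarray, forcings: np.ndarray):
    # space-before-time: advance every cell one step, then move on
    for t in range(forcings.shape[1]):
        for cell in range(states.shape[0]):
            states[cell] = physical_core(states[cell], forcings[cell, t])
        # (an MPI/netCDF layer would scatter cells and flush output here)
```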
ERIC Educational Resources Information Center
Moraes, Edgar P.; da Silva, Nilbert S. A.; de Morais, Camilo de L. M.; das Neves, Luiz S.; de Lima, Kassio M. G.
2014-01-01
The flame test is a classical analytical method that is often used to teach students how to identify specific metals. However, some universities in developing countries have difficulties acquiring the sophisticated instrumentation needed to demonstrate how to identify and quantify metals. In this context, a method was developed based on the flame…
Validation of Inertial and Optical Navigation Techniques for Space Applications with UAVS
NASA Astrophysics Data System (ADS)
Montaño, J.; Wis, M.; Pulido, J. A.; Latorre, A.; Molina, P.; Fernández, E.; Angelats, E.; Colomina, I.
2015-09-01
PERIGEO is an R&D project, funded by the INNPRONTA 2011-2014 programme of the Spanish CDTI, which aims to investigate the use of UAV technologies and processes for the validation of space-oriented technologies. For this purpose, among different space missions and technologies, a set of activities on absolute and relative navigation is being carried out to deal with the attitude and position estimation problem from a temporal image sequence from a camera in the visible spectrum and/or a Light Detection and Ranging (LIDAR) sensor. The process is covered entirely: from sensor measurements and data acquisition (images, LiDAR ranges and angles), through data pre-processing (calibration and co-registration of camera and LIDAR data) and extraction of features and landmarks from the images, to image/LiDAR-based state estimation. In addition to the image processing area, a classical navigation system based on inertial sensors is also included in the research. The reason for combining both approaches is to retain navigation capability in environments or missions where a radio beacon or reference signal, such as a GNSS satellite, is not available (for example, an atmospheric flight at Titan). The rationale behind the combination of these systems is that they complement each other. The INS is capable of providing accurate position, velocity, and full-attitude estimates at high data rates, but it needs an absolute reference observation to compensate for the errors that accumulate over time due to inertial sensor inaccuracies. On the other hand, imaging observables can provide absolute and relative position and attitude estimates, but they require the sensor head to point toward the ground (something that may not be possible if the carrying platform is maneuvering), and they cannot deliver the update rates of some hundreds of Hz that an INS can. This mutual complementarity has been observed in PERIGEO, and because of this the two are combined into one system. The inertial navigation system implemented in PERIGEO is based on a classical loosely coupled INS/GNSS approach that is very similar to the implementation of the INS/imaging navigation system mentioned above. The activities envisaged in PERIGEO cover algorithm development and validation and technology testing on UAVs under representative conditions. Past activities have covered the design and development of the algorithms and systems. This paper presents the most recent activities and results in the area of image processing for robust estimation within PERIGEO, which are related to the definition of the hardware platforms (including sensors) and their integration in UAVs. Results for the tests performed during the flight campaigns in representative outdoor environments will also be presented and analyzed (the tests are to be performed by the time of full-paper submission), together with a roadmap definition for future developments.
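As background for the loosely coupled INS/GNSS approach mentioned above, a minimal one-dimensional Kalman fusion sketch (position and velocity error states, GNSS observing position only); PERIGEO's actual filter, state vector, and tuning are more elaborate.

```python
# Minimal loosely coupled INS/GNSS fusion step in 1D.
import numpy as np

def predict(x, P, Q, dt):
    F = np.array([[1.0, dt], [0.0, 1.0]])      # inertial error propagation
    return F @ x, F @ P @ F.T + Q

def gnss_update(x, P, z, r):
    H = np.array([[1.0, 0.0]])                 # GNSS observes position only
    S = H @ P @ H.T + r                        # innovation covariance
    K = P @ H.T / S                            # Kalman gain
    x = x + (K * (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

# usage: x, P = predict(x, P, Q=np.eye(2) * 1e-4, dt=0.01)
#        x, P = gnss_update(x, P, z=gnss_position, r=4.0)
```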
A fully convolutional networks (FCN) based image segmentation algorithm in binocular imaging system
NASA Astrophysics Data System (ADS)
Long, Zourong; Wei, Biao; Feng, Peng; Yu, Pengwei; Liu, Yuanyuan
2018-01-01
This paper proposes an image segmentation algorithm using fully convolutional networks (FCN) in a binocular imaging system under various circumstances. Image segmentation is treated as semantic segmentation: an FCN classifies individual pixels and thereby achieves semantic segmentation of the image. Unlike classical convolutional neural networks (CNN), an FCN uses convolution layers instead of fully connected layers, so it can accept images of arbitrary size. In this paper, we combine the convolutional neural network with scale-invariant feature matching to solve the problem of visual positioning in different scenarios. All high-resolution images are captured with our calibrated binocular imaging system, and several groups of test data are collected to verify the method. The experimental results show that the binocular images are effectively segmented without over-segmentation. With these segmented images, feature matching via the SURF method is implemented to obtain regional information for further image processing. The final positioning procedure shows that the results are acceptable in the range of 1.4-1.6 m, with a distance error of less than 10 mm.
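A minimal FCN sketch in PyTorch illustrating the property the abstract relies on: with only convolutional layers (a 1x1 convolution replacing the fully connected classifier), any input size is accepted. This is an illustration of the idea, not the paper's network.

```python
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 1/2 resolution
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        # 1x1 convolution replaces the fully connected classifier
        self.classify = nn.Conv2d(32, n_classes, 1)
        self.upsample = nn.ConvTranspose2d(n_classes, n_classes, 2, stride=2)

    def forward(self, x):
        return self.upsample(self.classify(self.encode(x)))

# works for arbitrary (even) sizes: TinyFCN()(torch.rand(1, 3, 480, 640))
```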
A pipeline for comprehensive and automated processing of electron diffraction data in IPLT.
Schenk, Andreas D; Philippsen, Ansgar; Engel, Andreas; Walz, Thomas
2013-05-01
Electron crystallography of two-dimensional crystals allows the structural study of membrane proteins in their native environment, the lipid bilayer. Determining the structure of a membrane protein at near-atomic resolution by electron crystallography remains, however, a very labor-intense and time-consuming task. To simplify and accelerate the data processing aspect of electron crystallography, we implemented a pipeline for the processing of electron diffraction data using the Image Processing Library and Toolbox (IPLT), which provides a modular, flexible, integrated, and extendable cross-platform, open-source framework for image processing. The diffraction data processing pipeline is organized as several independent modules implemented in Python. The modules can be accessed either from a graphical user interface or through a command line interface, thus meeting the needs of both novice and expert users. The low-level image processing algorithms are implemented in C++ to achieve optimal processing performance, and their interface is exported to Python using a wrapper. For enhanced performance, the Python processing modules are complemented with a central data managing facility that provides a caching infrastructure. The validity of our data processing algorithms was verified by processing a set of aquaporin-0 diffraction patterns with the IPLT pipeline and comparing the resulting merged data set with that obtained by processing the same diffraction patterns with the classical set of MRC programs. Copyright © 2013 Elsevier Inc. All rights reserved.
Large-scale quantitative analysis of painting arts.
Kim, Daniel; Son, Seung-Woo; Jeong, Hawoong
2014-12-11
Scientists have made efforts to understand the beauty of painting in their own languages. As digital image acquisition of paintings has made rapid progress, researchers have come to a point where it is possible to perform statistical analysis of a large-scale database of artistic paintings to build a bridge between art and science. Using digital image processing techniques, we investigate three quantitative measures of images: the usage of individual colors, the variety of colors, and the roughness of the brightness. We found a difference in color usage between classical paintings and photographs, and a significantly low color variety in the medieval period. Interestingly, moreover, the increase of the roughness exponent as painting techniques such as chiaroscuro and sfumato have advanced is consistent with historical circumstances.
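In the spirit of the three measures (exact definitions in the paper may differ), a sketch: color variety as the entropy of a quantized palette, and a brightness roughness exponent from the scaling of local fluctuations, assuming an 8-bit RGB array of at least roughly 64x64 pixels.

```python
import numpy as np

def color_variety(rgb: np.ndarray, levels: int = 8) -> float:
    q = (rgb // (256 // levels)).reshape(-1, 3).astype(int)
    codes = q[:, 0] * levels * levels + q[:, 1] * levels + q[:, 2]
    p = np.bincount(codes, minlength=levels ** 3) / codes.size
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())          # palette entropy

def roughness_exponent(brightness: np.ndarray) -> float:
    scales = np.array([2, 4, 8, 16, 32])
    flucts = [np.mean([brightness[i:i + s, j:j + s].std()
                       for i in range(0, brightness.shape[0] - s, s)
                       for j in range(0, brightness.shape[1] - s, s)])
              for s in scales]
    # fluctuation ~ scale**alpha  =>  alpha from a log-log fit
    return float(np.polyfit(np.log(scales), np.log(flucts), 1)[0])
```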
Jia, Yuanyuan; He, Zhongshi; Gholipour, Ali; Warfield, Simon K
2016-11-01
In magnetic resonance (MR), hardware limitations, scanning time, and patient comfort often result in the acquisition of anisotropic 3-D MR images. Enhancing image resolution is desired but has been very challenging in medical image processing. Super-resolution reconstruction based on sparse representation and an overcomplete dictionary has lately been employed to address this problem; however, these methods require extra training sets, which may not always be available. This paper proposes a novel single anisotropic 3-D MR image upsampling method via sparse representation and an overcomplete dictionary that is trained from in-plane high-resolution slices to upsample in the out-of-plane dimensions. The proposed method, therefore, does not require extra training sets. Extensive experiments, conducted on simulated and clinical brain MR images, show that the proposed method is more accurate than classical interpolation. When compared to a recent upsampling method based on the nonlocal means approach, the proposed method did not show improved results at low upsampling factors with simulated images, but generated comparable results with much better computational efficiency in clinical cases. Therefore, the proposed approach can be efficiently implemented and routinely used to upsample MR images in the out-of-plane views for radiologic assessment and post-acquisition processing.
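A very condensed sketch of the training idea under stated assumptions (dictionary learned on in-plane high-resolution patches, then sparse coding for reconstruction); the paper's patch pairing and degradation modelling are not shown.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

def train_inplane_dictionary(hr_patches: np.ndarray, n_atoms: int = 128):
    """hr_patches: (n_patches, patch_dim) from high-resolution slices."""
    model = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=1.0)
    model.fit(hr_patches)
    return model

def sparse_reconstruct(model, patches: np.ndarray) -> np.ndarray:
    codes = model.transform(patches)        # sparse coefficients
    return codes @ model.components_        # reconstructed patches
```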
Development of a Mobile User Interface for Image-based Dietary Assessment.
Kim, Sungye; Schap, Tusarebecca; Bosch, Marc; Maciejewski, Ross; Delp, Edward J; Ebert, David S; Boushey, Carol J
2010-12-31
In this paper, we present a mobile user interface for image-based dietary assessment. The mobile user interface provides a front end to client-server image recognition and portion estimation software. In the client-server configuration, the user interactively records a series of food images using a built-in camera on the mobile device. Images are sent from the mobile device to the server, and the calorie content of the meal is estimated. In this paper, we describe and discuss the design and development of our mobile user interface features. We discuss the design concepts, from initial ideas through implementations. For each concept, we discuss qualitative user feedback from participants using the mobile client application. We then discuss future designs, including work on design considerations for the mobile application to allow the user to interactively correct errors in the automatic processing while reducing the user burden associated with classical pen-and-paper dietary records.
A Simple Encryption Algorithm for Quantum Color Image
NASA Astrophysics Data System (ADS)
Li, Panchi; Zhao, Ya
2017-06-01
In this paper, a simple encryption scheme for quantum color images is proposed. First, a color image is transformed into a quantum superposition state by employing NEQR (novel enhanced quantum representation), where the R, G, B values of every pixel in a 24-bit RGB true color image are represented by 24 single-qubit basis states, with 8 qubits per value. Then, these 24 qubits are each transformed from a basis state into a balanced superposition state by applying controlled rotation gates. At this point, the gray-scale values of R, G, B of every pixel are in a balanced superposition of 2^24 multi-qubit basis states. After measurement, the whole image is uniform white noise, which does not provide any information. Decryption is the reverse process of encryption. The experimental results on a classical computer show that the proposed encryption scheme has better security.
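A classical toy simulation of the idea (amplitudes only, one qubit per bit; not the paper's full scheme): a rotation takes each basis state to a balanced superposition, so measurement outcomes look like uniform noise to anyone without the rotation angles.

```python
import numpy as np

def ry(theta: float) -> np.ndarray:
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])          # single-qubit Y rotation

def encrypt_bit(bit: int, theta: float) -> np.ndarray:
    state = np.array([1.0, 0.0]) if bit == 0 else np.array([0.0, 1.0])
    return ry(theta) @ state                    # superposition amplitudes

def measure(state: np.ndarray, rng) -> int:
    return int(rng.random() < state[1] ** 2)    # Born rule

rng = np.random.default_rng(0)
# theta = pi/2 balances the superposition: P(0) = P(1) = 1/2, i.e. noise
bits = [measure(encrypt_bit(b, np.pi / 2), rng) for b in (0, 1, 1, 0)]
```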
The anterior temporal lobes support residual comprehension in Wernicke’s aphasia
Robson, Holly; Zahn, Roland; Keidel, James L.; Binney, Richard J.; Sage, Karen; Lambon Ralph, Matthew A.
2014-01-01
Wernicke’s aphasia occurs after a stroke to classical language comprehension regions in the left temporoparietal cortex. Consequently, auditory–verbal comprehension is significantly impaired in Wernicke’s aphasia but the capacity to comprehend visually presented materials (written words and pictures) is partially spared. This study used functional magnetic resonance imaging to investigate the neural basis of written word and picture semantic processing in Wernicke’s aphasia, with the wider aim of examining how the semantic system is altered after damage to the classical comprehension regions. Twelve participants with chronic Wernicke’s aphasia and 12 control participants performed semantic animate–inanimate judgements and a visual height judgement baseline task. Whole brain and region of interest analysis in Wernicke’s aphasia and control participants found that semantic judgements were underpinned by activation in the ventral and anterior temporal lobes bilaterally. The Wernicke’s aphasia group displayed an ‘over-activation’ in comparison with control participants, indicating that anterior temporal lobe regions become increasingly influential following reduction in posterior semantic resources. Semantic processing of written words in Wernicke’s aphasia was additionally supported by recruitment of the right anterior superior temporal lobe, a region previously associated with recovery from auditory-verbal comprehension impairments. Overall, the results provide support for models in which the anterior temporal lobes are crucial for multimodal semantic processing and that these regions may be accessed without support from classic posterior comprehension regions. PMID:24519979
NASA Astrophysics Data System (ADS)
Harvey, James E.
2012-10-01
Professor Bill Wolfe was an exceptional mentor for his graduate students, and he made a major contribution to the field of optical engineering by teaching the (largely ignored) principles of radiometry for over forty years. This paper describes an extension of Bill's work on surface scatter behavior and the application of the BRDF to practical optical engineering problems. Most currently-available image analysis codes require the BRDF data as input in order to calculate the image degradation from residual optical fabrication errors. This BRDF data is difficult to measure and rarely available for short EUV wavelengths of interest. Due to a smooth-surface approximation, the classical Rayleigh-Rice surface scatter theory cannot be used to calculate BRDFs from surface metrology data for even slightly rough surfaces. The classical Beckmann-Kirchhoff theory has a paraxial limitation and only provides a closed-form solution for Gaussian surfaces. Recognizing that surface scatter is a diffraction process, and by utilizing sound radiometric principles, we first developed a linear systems theory of non-paraxial scalar diffraction in which diffracted radiance is shift-invariant in direction cosine space. Since random rough surfaces are merely a superposition of sinusoidal phase gratings, it was a straightforward extension of this non-paraxial scalar diffraction theory to develop a unified surface scatter theory that is valid for moderately rough surfaces at arbitrary incident and scattered angles. Finally, the above two steps are combined to yield a linear systems approach to modeling image quality for systems suffering from a variety of image degradation mechanisms. A comparison of image quality predictions with experimental results taken from on-orbit Solar X-ray Imager (SXI) data is presented.
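Two relations summarize the framework sketched above (symbols as commonly used in this literature): the scattered radiance is shift-invariant in the direction cosine $\beta = \sin\theta_s$, depending only on $\beta - \beta_0$ with $\beta_0 = \sin\theta_i$; and, in the smooth-surface limit, the total integrated scatter is

$$\mathrm{TIS} = \left( \frac{4\pi\, \sigma_{\mathrm{rel}} \cos\theta_i}{\lambda} \right)^{2},$$

with $\sigma_{\mathrm{rel}}$ the relevant rms surface roughness, $\theta_i$ the incidence angle, and $\lambda$ the wavelength.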
Wang, Hongkai; Zhou, Zongwei; Li, Yingci; Chen, Zhonghua; Lu, Peiou; Wang, Wenzhi; Liu, Wanyu; Yu, Lijuan
2017-12-01
This study aimed to compare one state-of-the-art deep learning method and four classical machine learning methods for classifying mediastinal lymph node metastasis of non-small cell lung cancer (NSCLC) from 18F-FDG PET/CT images. Another objective was to compare the discriminative power of the recently popular PET/CT texture features with the widely used diagnostic features such as tumor size, CT value, SUV, image contrast, and intensity standard deviation. The four classical machine learning methods were random forests, support vector machines, adaptive boosting, and artificial neural networks. The deep learning method was convolutional neural networks (CNN). The five methods were evaluated using 1397 lymph nodes collected from PET/CT images of 168 patients, with the corresponding pathology analysis results as the gold standard. The comparison was conducted using 10 times 10-fold cross-validation based on the criteria of sensitivity, specificity, accuracy (ACC), and area under the ROC curve (AUC). For each classical method, different input features were compared to select the optimal feature set. Based on the optimal feature set, the classical methods were compared with CNN, as well as with human doctors from our institute. For the classical methods, the diagnostic features resulted in 81-85% ACC and 0.87-0.92 AUC, which were significantly higher than the results of the texture features. CNN's sensitivity, specificity, ACC, and AUC were 84%, 88%, 86%, and 0.91, respectively. There was no significant difference between the results of CNN and the best classical method. The sensitivity, specificity, and ACC of the human doctors were 73%, 90%, and 82%, respectively. All five machine learning methods had higher sensitivities but lower specificities than the human doctors. The present study shows that the performance of CNN is not significantly different from the best classical methods and human doctors for classifying mediastinal lymph node metastasis of NSCLC from PET/CT images. Because CNN does not need tumor segmentation or feature calculation, it is more convenient and more objective than the classical methods. However, CNN does not make use of the important diagnostic features, which have been proved more discriminative than the texture features for classifying small-sized lymph nodes. Therefore, incorporating the diagnostic features into CNN is a promising direction for future research.
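A sketch of the evaluation protocol for the four classical learners (feature extraction, the CNN branch, and the real dataset are omitted; X and y below are random placeholders):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

X, y = np.random.rand(200, 10), np.random.randint(0, 2, 200)  # placeholders

cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=10)  # 10 x 10-fold
models = {
    "random forest": RandomForestClassifier(),
    "SVM": SVC(),
    "AdaBoost": AdaBoostClassifier(),
    "neural net": MLPClassifier(max_iter=1000),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
    print(f"{name}: AUC = {auc.mean():.3f} +/- {auc.std():.3f}")
```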
Pitfalls in classical nuclear medicine: myocardial perfusion imaging
NASA Astrophysics Data System (ADS)
Fragkaki, C.; Giannopoulou, Ch
2011-09-01
Scintigraphic imaging is a complex functional procedure subject to a variety of artefacts and pitfalls that may limit its clinical and diagnostic accuracy. It is important to be aware of them, to recognize them when present, and to eliminate them whenever possible. Pitfalls may occur at any stage of the imaging procedure and can be related to the γ-camera or other equipment, personnel handling, patient preparation, image processing, or the procedure itself. Often, potential causes of artefacts and pitfalls overlap. In this short review, special attention is given to cardiac scintigraphic imaging. The most common causes of artefacts in myocardial perfusion imaging are soft-tissue attenuation as well as motion and gating errors. Additionally, clinical problems such as cardiac abnormalities may cause interpretation pitfalls, and nuclear medicine physicians should be familiar with these in order to ensure the correct evaluation of the study. Artefacts or suboptimal image quality can also result from infiltrated injections, misalignment in patient positioning, power instability or interruption, flood-field non-uniformities, a cracked crystal, and several other technical causes.
Double-image storage optimized by cross-phase modulation in a cold atomic system
NASA Astrophysics Data System (ADS)
Qiu, Tianhui; Xie, Min
2017-09-01
A tripod-type cold atomic system driven by double probe fields and a coupling field is explored to store double images based on electromagnetically induced transparency (EIT). During the storage time, an intensity-dependent signal field is applied to extend the system to a fifth level, introducing cross-phase modulation for coherently manipulating the stored images. Both analytical analysis and numerical simulation clearly demonstrate that a tunable phase shift with low nonlinear absorption can be imprinted on the stored images, which can effectively improve the visibility of the reconstructed images. The phase shift and the energy retrieval rate of the probe fields are immune to the coupling intensity and the atomic optical density. The proposed scheme can easily be extended to the simultaneous storage of multiple images. This work may be exploited toward EIT-based multiple-image storage devices for all-optical classical and quantum information processing.
Salient man-made structure detection in infrared images
NASA Astrophysics Data System (ADS)
Li, Dong-jie; Zhou, Fu-gen; Jin, Ting
2013-09-01
Target detection, segmentation, and recognition is a hot research topic in the field of image processing and pattern recognition, and salient area or object detection is one of the core technologies of precision-guided weapons. In this paper, we detect salient objects in a series of input infrared images by using the classical feature integration theory and Itti's visual attention system. In order to find the salient object in an image accurately, we present a new method that solves the edge blur problem by calculating and using an edge mask. We also greatly improve the computing speed by improving the center-surround differences method: unlike the traditional algorithm, we calculate the center-surround differences through rows and columns separately. Experimental results show that our method is effective in detecting salient objects accurately and rapidly.
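A short sketch of the separable speed-up described above: box-filter means computed along rows and then columns (which together equal a 2D box mean) feed the center-surround difference. Window sizes are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter1d

def center_surround(feature: np.ndarray, center: int = 3, surround: int = 21):
    feature = feature.astype(float)
    def box_mean(img, size):
        rows = uniform_filter1d(img, size, axis=0)   # pass over rows
        return uniform_filter1d(rows, size, axis=1)  # then over columns
    return np.abs(box_mean(feature, center) - box_mean(feature, surround))
```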
NASA Astrophysics Data System (ADS)
Barros, George O.; Navarro, Brenda; Duarte, Angelo; Dos-Santos, Washington L. C.
2017-04-01
PathoSpotter is a computational system designed to assist pathologists in teaching about, and researching, kidney diseases. PathoSpotter-K is the version that was developed to detect nephrological lesions in digital images of kidneys. Here, we present the results obtained using the first version of PathoSpotter-K, which uses classical image processing and pattern recognition methods to detect proliferative glomerular lesions with an accuracy of 88.3 ± 3.6%. Such performance is only achieved by similar systems if they use images of cells in contexts that are much less complex than the glomerular structure. The results indicate that the approach can be applied to the development of systems designed to train pathology students and to assist pathologists in determining large-scale clinicopathological correlations in morphological research.
Imaging in Classic Form of Maple Syrup Urine Disease: A Rare Metabolic Central Nervous System
Jain, Aditi; Jagdeesh, K.; Mane, Ranoji; Singla, Saurabh
2013-01-01
Maple syrup urine disease (MSUD) is a rare autosomal recessive disorder of branched-chain amino acid metabolism. The condition gets its name from the distinctive sweet odour of affected infants’ urine. MSUD is caused by a deficiency of the branched-chain α-ketoacid dehydrogenase enzyme complex, leading to accumulation of the branched-chain amino acids (leucine, isoleucine, and valine) and their toxic by-products (ketoacids) in the blood and urine. Imaging is characterized by MSUD oedema affecting the myelinated white matter. We present a neonate with the classic type of MSUD and its imaging features on computed tomography, conventional magnetic resonance imaging, diffusion-weighted imaging, and magnetic resonance spectroscopy. PMID:24049754
Fully Convolutional Neural Networks Improve Abdominal Organ Segmentation.
Bobo, Meg F; Bao, Shunxing; Huo, Yuankai; Yao, Yuang; Virostko, Jack; Plassard, Andrew J; Lyu, Ilwoo; Assad, Albert; Abramson, Richard G; Hilmes, Melissa A; Landman, Bennett A
2018-03-01
Abdominal image segmentation is a challenging, yet important clinical problem. Variations in body size, position, and relative organ positions greatly complicate the segmentation process. Historically, multi-atlas methods have achieved leading results across imaging modalities and anatomical targets. However, deep learning is rapidly overtaking classical approaches for image segmentation. Recently, Zhou et al. showed that fully convolutional networks produce excellent results in abdominal organ segmentation of computed tomography (CT) scans. Yet, deep learning approaches have not been applied to whole-abdomen magnetic resonance imaging (MRI) segmentation. Herein, we evaluate the applicability of an existing fully convolutional neural network (FCNN) designed for CT imaging to segment abdominal organs on T2-weighted (T2w) MRIs with two examples. In the primary example, we compare a classical multi-atlas approach with the FCNN on forty-five T2w MRIs acquired from splenomegaly patients with five organs labeled (liver, spleen, left kidney, right kidney, and stomach). Thirty-six images were used for training while nine were used for testing. The FCNN resulted in a Dice similarity coefficient (DSC) of 0.930 in spleens, 0.730 in left kidneys, 0.780 in right kidneys, 0.913 in livers, and 0.556 in stomachs. The performance measures for livers, spleens, right kidneys, and stomachs were significantly better than multi-atlas (p < 0.05, Wilcoxon rank-sum test). In a secondary example, we compare the multi-atlas approach with the FCNN on 138 distinct T2w MRIs with manually labeled pancreases (one label). On the pancreas dataset, the FCNN resulted in a median DSC of 0.691 in pancreases versus 0.287 for multi-atlas. The results are highly promising given relatively limited training data and without specific training of the FCNN model, and they illustrate the potential of deep learning approaches to transcend imaging modalities.
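For reference, the reported scores use the Dice similarity coefficient, DSC = 2|A and B| / (|A| + |B|) for a predicted mask A and a manual mask B; a direct implementation:

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray, label: int) -> float:
    a, b = pred == label, truth == label
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```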
Ablikim, Utuq; Bomme, Cédric; Xiong, Hui; Savelyev, Evgeny; Obaid, Razib; Kaderiya, Balram; Augustin, Sven; Schnorr, Kirsten; Dumitriu, Ileana; Osipov, Timur; Bilodeau, René; Kilcoyne, David; Kumarappan, Vinod; Rudenko, Artem; Berrah, Nora; Rolles, Daniel
2016-12-02
An experimental route to identify and separate geometric isomers by means of coincident Coulomb explosion imaging is presented, allowing isomer-resolved photoionization studies on isomerically mixed samples. We demonstrate the technique on cis/trans 1,2-dibromoethene (C2H2Br2). The momentum correlation between the bromine ions in a three-body fragmentation process induced by bromine 3d inner-shell photoionization is used to identify the cis and trans structures of the isomers. The experimentally determined momentum correlations and the isomer-resolved fragment-ion kinetic energies are matched closely by a classical Coulomb explosion model. PMID:27910943
Goycoolea, Marcos; Levy, Raquel; Ramírez, Carlos
2013-04-01
There is seemingly some inherent component in selected musical compositions that elicits specific emotional perceptions, feelings, and physical responses. The purpose of the study was to determine whether the emotional perceptions of those listening to classical music are inherent in the composition or acquired by the listeners. Fifteen kindergarten students, aged 5 years, from three different sociocultural groups were evaluated. They were exposed to portions of five purposefully selected classical compositions and asked to describe their emotions while listening. All were instrumental compositions without human voices or spoken language, and they were played to listeners old enough to describe their perceptions who supposedly had no significant previous exposure to classical music. Regardless of their sociocultural background, the children in the three groups consistently identified similar emotions (e.g. fear, happiness, sadness), feelings (e.g. love), and mental images (e.g. giants or dangerous animals walking) when listening to specific compositions. The compositions also elicited physical responses that were reflected in the children's body language. Although the sensations were similar, the way of expressing them differed according to the children's background.
Acousto-optic RF signal acquisition system
NASA Astrophysics Data System (ADS)
Bloxham, Laurence H.
1990-09-01
This paper describes the architecture and performance of a prototype Acousto-Optic RF Signal Acquisition System designed to intercept, automatically identify, and track communication signals in the VHF band. The system covers 28.0 to 92.0 MHz with five manually selectable, dual-conversion, 12.8 MHz bandwidth front ends. An acousto-optic spectrum analyzer (AOSA) implemented using a tellurium dioxide (TeO2) Bragg cell is used to channelize the 12.8 MHz pass band into 512 25-kHz channels. Polarization switching is used to suppress optical noise. Excellent isolation and dynamic range are achieved by using a linear array of 512 custom 40/50 micron fiber optic cables to collect the light at the focal plane of the AOSA and route the light to individual photodetectors. The photodetectors are operated in the photovoltaic mode to compress the greater than 60 dB input optical dynamic range into an easily processed electrical signal. The 512 signals are multiplexed and processed as a line in a video image by a customized digital image processing system. The image processor simultaneously analyzes the channelized signal data and produces a classical waterfall display.
NASA Astrophysics Data System (ADS)
Daněk, J.; Klaiber, M.; Hatsagortsyan, K. Z.; Keitel, C. H.; Willenberg, B.; Maurer, J.; Mayer, B. W.; Phillips, C. R.; Gallmann, L.; Keller, U.
2018-06-01
We study strong-field ionization and rescattering beyond the long-wavelength limit of the dipole approximation with elliptically polarized mid-IR laser pulses. Full three-dimensional photoelectron momentum distributions (PMDs) measured with velocity map imaging and tomographic reconstruction revealed an unexpected sharp ridge structure in the polarization plane (2018 Phys. Rev. A 97 013404). This thin line-shaped ridge structure for low-energy photoelectrons is correlated with the ellipticity-dependent asymmetry of the PMD along the beam propagation direction. The peak of the projection of the PMD onto the beam propagation axis is shifted from negative to positive values when the sharp ridge fades away with increasing ellipticity. With classical trajectory Monte Carlo simulations and analytical analysis, we study the underlying physics of this feature, which is based on the interplay between the lateral drift of the ionized electron, the laser magnetic field induced drift in the laser propagation direction, and Coulomb focusing. To apply our observations to emerging techniques relying on strong-field ionization processes, including time-resolved holography and molecular imaging, we present a detailed classical trajectory-based analysis that explains the fine structure of the ridge and its non-dipole behavior upon rescattering while introducing restrictions on the ellipticity. These restrictions, as well as the ionization and recollision phases, provide additional observables to gain information on the timing of the ionization and recollision process and on the non-dipole properties of the ionization process.
A General Purpose Feature Extractor for Light Detection and Ranging Data
2010-11-17
Feature extraction is a central step of processing Light Detection and Ranging (LIDAR) data. Existing detectors tend to exploit … detector for both 2D and 3D LIDAR data that is applicable to virtually any environment. Our method adapts classic feature detection methods from the image … datasets, and the 3D MIT DARPA Urban Challenge dataset. Keywords: SLAM; LIDARs; feature detection; uncertainty estimates; descriptors.
A Comparison of Wood Density between Classical Cremonese and Modern Violins
Stoel, Berend C.; Borman, Terry M.
2008-01-01
Classical violins created by Cremonese masters, such as Antonio Stradivari and Giuseppe Guarneri Del Gesu, have become the benchmark to which the sound of all violins is compared in terms of expressiveness and projection. By general consensus, no luthier since that time has been able to replicate the sound quality of these classical instruments. The vibration and sound radiation characteristics of a violin are determined by an instrument's geometry and the material properties of the wood. New test methods allow the non-destructive examination of one of the key material properties, the wood density, at the growth ring level of detail. The densities of five classical and eight modern violins were compared using computed tomography and specially developed image-processing software. No significant differences were found between the median densities of the modern and the antique violins; however, the density difference between wood grains of early and late growth was significantly smaller in the classical Cremonese violins than in the modern violins, in both the top (spruce) and back (maple) plates (p = 0.028 and 0.008, respectively). The mean (SE) density differential of the top plates of the modern and classical violins was 274 (26.6) and 183 (11.7) gram/liter, respectively. For the back plates, the values were 128 (2.6) and 115 (2.0) gram/liter. These differences in density differentials may reflect similar changes in stiffness distributions, which could directly impact vibrational efficacy or indirectly modify sound radiation via altered damping characteristics. Either of these mechanisms may help explain the acoustical differences between the classical and modern violins. PMID:18596937
Citation classics in neuro-oncology: assessment of historical trends and scientific progress.
Hachem, Laureen D; Mansouri, Alireza; Juraschka, Kyle; Taslimi, Shervin; Pirouzmand, Farhad; Zadeh, Gelareh
2017-09-01
Citation classics represent the highest cited works in a field and are often regarded as the most influential literature. Analyzing thematic trends in citation classics across eras enables recognition of important historical advances within a field. We present the first analysis of the citation classics in neuro-oncology. The Web of Science database was searched using terms relevant to "neuro-oncology." Articles with >400 citations were identified and the top 100 cited articles were evaluated. The top 100 neuro-oncology citation classics consisted of 43 clinical studies (17 retrospective, 10 prospective, 16 randomized trials), 43 laboratory investigations, 8 reviews/meta-analyses, and 6 guidelines/consensus statements. Articles were classified into 4 themes: 13 pertained to tumor classification, 37 to tumor pathogenesis/clinical presentation, 6 to imaging, 44 to therapy (15 chemotherapy, 10 radiotherapy, 5 surgery, 14 new agents). Gliomas were the most common tumor type examined, with 70 articles. There was a significant increase in the number of citation classics in the late 1990s, which was paralleled by an increase in studies examining tumor pathogenesis, chemotherapy, and new agents along with laboratory and randomized studies. The majority of citation classics in neuro-oncology are related to gliomas and pertain to tumor pathogenesis and treatment. The rise in citation classics in recent years investigating tumor biology, new treatment agents, and chemotherapeutics may reflect increasing scientific interest in nonsurgical treatments for CNS tumors and the need for fundamental investigations into disease processes.
Cloud Engineering Principles and Technology Enablers for Medical Image Processing-as-a-Service.
Bao, Shunxing; Plassard, Andrew J; Landman, Bennett A; Gokhale, Aniruddha
2017-04-01
Traditional in-house, laboratory-based medical imaging studies use hierarchical data structures (e.g., NFS file stores) or databases (e.g., COINS, XNAT) for storage and retrieval. The resulting performance from these approaches is, however, impeded by standard network switches since they can saturate network bandwidth during transfer from storage to processing nodes for even moderate-sized studies. To that end, a cloud-based "medical image processing-as-a-service" offers promise in utilizing the ecosystem of Apache Hadoop, which is a flexible framework providing distributed, scalable, fault tolerant storage and parallel computational modules, and HBase, which is a NoSQL database built atop Hadoop's distributed file system. Despite this promise, HBase's load distribution strategy of region split and merge is detrimental to the hierarchical organization of imaging data (e.g., project, subject, session, scan, slice). This paper makes two contributions to address these concerns by describing key cloud engineering principles and technology enhancements we made to the Apache Hadoop ecosystem for medical imaging applications. First, we propose a row-key design for HBase, which is a necessary step that is driven by the hierarchical organization of imaging data. Second, we propose a novel data allocation policy within HBase to strongly enforce collocation of hierarchically related imaging data. The proposed enhancements accelerate data processing by minimizing network usage and localizing processing to machines where the data already exist. Moreover, our approach is amenable to the traditional scan, subject, and project-level analysis procedures, and is compatible with standard command line/scriptable image processing software. Experimental results for an illustrative sample of imaging data reveal that our new HBase policy results in a three-fold time improvement in conversion of classic DICOM to NiFTI file formats when compared with the default HBase region split policy, and nearly a six-fold improvement over a commonly available network file system (NFS) approach even for relatively small file sets. Moreover, file access latency is lower than network attached storage.
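The row-key idea described above can be illustrated with a short sketch. The field names, ordering, and delimiter below are hypothetical (the abstract does not publish the exact key layout); the point is that lexicographic key order keeps hierarchically related records adjacent, which is what lets a prefix-aware allocation policy enforce collocation in HBase:

```python
def make_row_key(project, subject, session, scan, slice_idx):
    """Compose a hierarchical HBase row key so that lexicographic
    ordering keeps related imaging data adjacent (and thus collocated
    under a prefix-aware region split policy)."""
    # Zero-pad numeric fields so string order matches numeric order.
    return "|".join([project, subject, session, scan, f"{slice_idx:05d}"])

# Example: all slices of one scan share the prefix
# "ProjA|Subj001|Sess01|T1w|", so they sort (and store) together.
key = make_row_key("ProjA", "Subj001", "Sess01", "T1w", 42)
```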
A novel double patterning approach for 30nm dense holes
NASA Astrophysics Data System (ADS)
Hsu, Dennis Shu-Hao; Wang, Walter; Hsieh, Wei-Hsien; Huang, Chun-Yen; Wu, Wen-Bin; Shih, Chiang-Lin; Shih, Steven
2011-04-01
Double Patterning Technology (DPT) was commonly accepted as the major workhorse beyond water immersion lithography for sub-38nm half-pitch line patterning before EUV production. For dense hole patterning, classical DPT employs self-aligned spacer deposition and uses the intersection of horizontal and vertical lines to define the desired hole patterns. However, the increase in manufacturing cost and process complexity is tremendous. Several innovative approaches have been proposed and tested to address the manufacturing and technical challenges. A novel process of double-patterned pillars combined with image reverse is proposed for the realization of low-cost dense holes in 30nm node DRAM. The nature of pillar formation lithography provides much better optical contrast compared to the counterpart hole patterning with similar CD requirements. By the utilization of a reliable freezing process, double-patterned pillars can be readily implemented. A novel image reverse process at the last stage defines the hole patterns with high fidelity. In this paper, several freezing processes for the construction of the double-patterned pillars were tested and compared, and 30nm double-patterned pillars were demonstrated successfully. A variety of different image reverse processes will be investigated and discussed for their pros and cons. An economic approach with optimized lithography performance will be proposed for 30nm node DRAM applications.
Advanced multispectral dynamic thermography as a new tool for inspection of gas-fired furnaces
NASA Astrophysics Data System (ADS)
Pregowski, Piotr; Goleniewski, Grzegorz; Komosa, Wojciech; Korytkowski, Waldemar
2004-04-01
The main special feature of the elaborated method is that the dynamic IR thermography (DIRT) is based on forming a single image consisting of each pixel's chosen minimum (IMIN) or maximum (IMAX) value, noted during an adequately long sequence of thermograms, with total independence from the moment of the image's capture. In this way, additive or suppressed interferences of a fluctuating character are bypassed. Such an "artificial thermogram", thereafter elaborated in the classic way, offers a quality impossible to achieve with a classic "one shot" method. Although preliminary, the results obtained clearly show the great potential of the method and confirm its validity in decreasing errors caused by fluctuating disturbances. In the case of gas-fired, and especially coal-fired, process furnaces, application of the presented solutions should significantly increase the reliability of IR thermography. By use of properly chosen optical filters and algorithms, the elaborated method offers new potential for testing temperature problems other than in tubes, for example the symmetry and efficiency of the furnace heaters.
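The "artificial thermogram" construction reduces to a per-pixel extreme over the frame axis; a minimal NumPy sketch, assuming the sequence is stacked as a (frames, height, width) array:

```python
import numpy as np

def composite_extreme(thermogram_stack, mode="max"):
    """Build the 'artificial thermogram' described above: for every
    pixel, keep the minimum (IMIN) or maximum (IMAX) value observed
    over the sequence, regardless of when that extreme occurred."""
    stack = np.asarray(thermogram_stack, dtype=float)  # (frames, H, W)
    return stack.max(axis=0) if mode == "max" else stack.min(axis=0)
```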
Autonomous control systems: applications to remote sensing and image processing
NASA Astrophysics Data System (ADS)
Jamshidi, Mohammad
2001-11-01
One of the main challenges of any control (or image processing) paradigm is being able to handle complex systems under unforeseen uncertainties. A system may be called complex here if its dimension (order) is too high, its model (if available) is nonlinear and interconnected, and information on the system is so uncertain that classical techniques cannot easily handle the problem. Examples of complex systems are power networks, space robotic colonies, the national air traffic control system, an integrated manufacturing plant, the Hubble Telescope, the International Space Station, etc. Soft computing, a consortium of methodologies such as fuzzy logic, neuro-computing, genetic algorithms and genetic programming, has proven to be a powerful tool for adding autonomy and semi-autonomy to many complex systems. For such systems the size of the soft computing control architecture will be nearly infinite. In this paper new paradigms using soft computing approaches are utilized to design autonomous controllers and image enhancers for a number of application areas: satellite array formations for synthetic aperture radar interferometry (InSAR) and enhancement of analog and digital images.
q-Space Deep Learning: Twelve-Fold Shorter and Model-Free Diffusion MRI Scans.
Golkov, Vladimir; Dosovitskiy, Alexey; Sperl, Jonathan I; Menzel, Marion I; Czisch, Michael; Samann, Philipp; Brox, Thomas; Cremers, Daniel
2016-05-01
Numerous scientific fields rely on elaborate but partly suboptimal data processing pipelines. An example is diffusion magnetic resonance imaging (diffusion MRI), a non-invasive microstructure assessment method with a prominent application in neuroimaging. Advanced diffusion models providing accurate microstructural characterization so far have required long acquisition times and thus have been inapplicable for children and adults who are uncooperative, uncomfortable, or unwell. We show that the long scan time requirements are mainly due to disadvantages of classical data processing. We demonstrate how deep learning, a group of algorithms based on recent advances in the field of artificial neural networks, can be applied to reduce diffusion MRI data processing to a single optimized step. This modification allows obtaining scalar measures from advanced models at twelve-fold reduced scan time and detecting abnormalities without using diffusion models. We set a new state of the art by estimating diffusion kurtosis measures from only 12 data points and neurite orientation dispersion and density measures from only 8 data points. This allows unprecedentedly fast and robust protocols facilitating clinical routine and demonstrates how classical data processing can be streamlined by means of deep learning.
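As a purely illustrative sketch of collapsing the fitting pipeline into one learned step: a tiny fully connected network mapping 12 q-space measurements directly to a scalar measure. The architecture, layer sizes, and random placeholder weights below are assumptions for illustration, not the authors' trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy network: 12 q-space measurements in, one scalar measure out.
# These weights are random placeholders; in the paper's setting they
# would be learned from fully sampled training scans.
W1, b1 = rng.normal(size=(64, 12)), np.zeros(64)
W2, b2 = rng.normal(size=(1, 64)), np.zeros(1)

def predict(signal_12pts):
    """Map a 12-point diffusion signal directly to a scalar measure,
    replacing the classical multi-step model-fitting pipeline."""
    h = np.maximum(0.0, W1 @ signal_12pts + b1)   # ReLU hidden layer
    return (W2 @ h + b2)[0]
```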
Ross, William N; Miyazaki, Kenichi; Popovic, Marko A; Zecevic, Dejan
2015-04-01
Dynamic calcium and voltage imaging is a major tool in modern cellular neuroscience. Since the beginning of their use over 40 years ago, there have been major improvements in indicators, microscopes, imaging systems, and computers. While cutting edge research has trended toward the use of genetically encoded calcium or voltage indicators, two-photon microscopes, and in vivo preparations, it is worth noting that some questions still may be best approached using more classical methodologies and preparations. In this review, we highlight a few examples in neurons where the combination of charge-coupled device (CCD) imaging and classical organic indicators has revealed information that has so far been more informative than results using the more modern systems. These experiments take advantage of the high frame rates, sensitivity, and spatial integration of the best CCD cameras. These cameras can respond to the faster kinetics of organic voltage and calcium indicators, which closely reflect the fast dynamics of the underlying cellular events.
Quantum-enhanced feature selection with forward selection and backward elimination
NASA Astrophysics Data System (ADS)
He, Zhimin; Li, Lvzhou; Huang, Zhiming; Situ, Haozhen
2018-07-01
Feature selection is a well-known preprocessing technique in machine learning that can remove irrelevant features to improve the generalization capability of a classifier and reduce training and inference time. However, feature selection is time-consuming, particularly for applications that have thousands of features, such as image retrieval, text mining and microarray data analysis, so it is crucial to accelerate the feature selection process. We propose a quantum version of wrapper-based feature selection, which converts a classical feature selection method to its quantum counterpart and is valuable for machine learning on a quantum computer. In this paper, we focus on two popular kinds of feature selection methods, i.e., wrapper-based forward selection and backward elimination. The proposed feature selection algorithm can quadratically accelerate the classical one.
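The classical wrapper that the quantum algorithm accelerates can be sketched as follows; this is a generic greedy forward selection using assumed scikit-learn tooling, not the paper's code:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def forward_selection(X, y, n_keep):
    """Greedy classical wrapper: repeatedly add the feature that most
    improves cross-validated accuracy. The quantum version described
    above quadratically accelerates the search over candidates."""
    selected, remaining = [], list(range(X.shape[1]))
    while remaining and len(selected) < n_keep:
        def score(j):
            cols = selected + [j]
            clf = LogisticRegression(max_iter=1000)
            return cross_val_score(clf, X[:, cols], y, cv=5).mean()
        best = max(remaining, key=score)   # the costly inner search
        selected.append(best)
        remaining.remove(best)
    return selected
```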
Mohammed, Ali I; Gritton, Howard J; Tseng, Hua-an; Bucklin, Mark E; Yao, Zhaojie; Han, Xue
2016-02-08
Advances in neurotechnology have been integral to the investigation of neural circuit function in systems neuroscience. Recent improvements in high-performance fluorescent sensors and scientific CMOS cameras enable optical imaging of neural networks at a much larger scale. While exciting technical advances demonstrate the potential of this technique, further improvement in data acquisition and analysis, especially improvements that allow effective processing of increasingly larger datasets, would greatly promote the application of optical imaging in systems neuroscience. Here we demonstrate the ability of wide-field imaging to capture the concurrent dynamic activity from hundreds to thousands of neurons over millimeters of brain tissue in behaving mice. This system allows the visualization of morphological details at a higher spatial resolution than has been previously achieved using similar functional imaging modalities. To analyze the expansive data sets, we developed software to facilitate rapid downstream data processing. Using this system, we show that a large fraction of anatomically distinct hippocampal neurons respond to discrete environmental stimuli associated with classical conditioning, and that the observed temporal dynamics of transient calcium signals are sufficient for exploring certain spatiotemporal features of large neural networks.
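One common first step in such downstream processing, not spelled out in the abstract, is converting each cell's raw fluorescence trace to ΔF/F; a hedged sketch, assuming a low percentile of the trace serves as the baseline F0:

```python
import numpy as np

def delta_f_over_f(trace, baseline_pct=20):
    """ΔF/F for one cell's fluorescence time series, using a low
    percentile of the trace as the baseline F0 — a common first step
    when extracting transient calcium signals from wide-field data."""
    f0 = np.percentile(trace, baseline_pct)
    return (trace - f0) / f0
```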
NASA Astrophysics Data System (ADS)
Massambone de Oliveira, Rafael; Salomão Helou, Elias; Fontoura Costa, Eduardo
2016-11-01
We present a method for non-smooth convex minimization which is based on subgradient directions and string-averaging techniques. In this approach, the set of available data is split into sequences (strings) and a given iterate is processed independently along each string, possibly in parallel, by an incremental subgradient method (ISM). The end-points of all strings are averaged to form the next iterate. The method is useful to solve sparse and large-scale non-smooth convex optimization problems, such as those arising in tomographic imaging. A convergence analysis is provided under realistic, standard conditions. Numerical tests are performed in a tomographic image reconstruction application, showing good performance for the convergence speed when measured as the decrease ratio of the objective function, in comparison to classical ISM.
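The method itself is compact enough to sketch directly; assuming user-supplied `subgrad(i, x)` and `step(k)` callables, one outer iteration advances the iterate along each string and averages the end-points:

```python
import numpy as np

def string_averaging_ism(x0, strings, subgrad, step, n_iter):
    """String-averaging incremental subgradient sketch: each string is
    a list of component-function indices; the iterate is advanced
    independently along every string and the end-points are averaged.

    subgrad(i, x) must return a subgradient of the i-th component
    function at x; step(k) is the step size at outer iteration k."""
    x = np.asarray(x0, dtype=float)
    for k in range(n_iter):
        ends = []
        for string in strings:          # strings could run in parallel
            z = x.copy()
            for i in string:            # incremental pass along string
                z -= step(k) * subgrad(i, z)
            ends.append(z)
        x = np.mean(ends, axis=0)       # average the string end-points
    return x
```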
Large-Scale Quantitative Analysis of Painting Arts
Kim, Daniel; Son, Seung-Woo; Jeong, Hawoong
2014-01-01
Scientists have made efforts to understand the beauty of painting art in their own languages. As digital image acquisition of painting arts has made rapid progress, researchers have come to a point where it is possible to perform statistical analysis of a large-scale database of artistic paintings to make a bridge between art and science. Using digital image processing techniques, we investigate three quantitative measures of images – the usage of individual colors, the variety of colors, and the roughness of the brightness. We found a difference in color usage between classical paintings and photographs, and a significantly low color variety in the medieval period. Interestingly, moreover, the increase of the roughness exponent as painting techniques such as chiaroscuro and sfumato advanced is consistent with historical circumstances. PMID:25501877
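The abstract does not give the exact estimators; as one plausible stand-in for the "variety of colors" measure, a Shannon entropy over a quantised colour histogram can be sketched as follows (the quantisation and entropy choice are assumptions, not the paper's definition):

```python
import numpy as np

def color_variety(image_rgb, bins=16):
    """Shannon entropy of a quantised colour histogram — one simple
    way to score the 'variety of colors' in a digitised painting."""
    q = (image_rgb // (256 // bins)).reshape(-1, 3)   # quantise RGB
    codes = q[:, 0] * bins * bins + q[:, 1] * bins + q[:, 2]
    p = np.bincount(codes, minlength=bins ** 3).astype(float)
    p = p[p > 0] / p.sum()
    return -(p * np.log2(p)).sum()
```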
Visual attention based bag-of-words model for image classification
NASA Astrophysics Data System (ADS)
Wang, Qiwei; Wan, Shouhong; Yue, Lihua; Wang, Che
2014-04-01
Bag-of-words is a classical method for image classification. The core problems are how to count the frequency of the visual words and which visual words to select. In this paper, we propose a visual attention based bag-of-words model (VABOW model) for the image classification task. The VABOW model utilizes a visual attention method to generate a saliency map, and uses the saliency map as a weighting matrix to guide the counting of visual-word frequencies. In addition, the VABOW model combines shape, color and texture cues and uses an L1 regularization logistic regression method to select the most relevant and most efficient features. We compare our approach with the traditional bag-of-words based method on two datasets, and the results show that our VABOW model outperforms the state-of-the-art method for image classification.
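The saliency-weighted counting step reduces to a weighted histogram; a minimal NumPy sketch, assuming each local descriptor has already been assigned a visual-word id and a saliency value sampled at its image location:

```python
import numpy as np

def weighted_bow(word_ids, saliency, vocab_size):
    """Saliency-weighted bag-of-words: instead of counting each visual
    word once, accumulate the saliency of the patch it came from, so
    words in attended regions dominate the histogram."""
    hist = np.bincount(word_ids, weights=saliency, minlength=vocab_size)
    s = hist.sum()
    return hist / s if s > 0 else hist    # L1-normalise the histogram
```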
High frequency ultrasound with color Doppler in dermatology
Barcaui, Elisa de Oliveira; Carvalho, Antonio Carlos Pires; Lopes, Flavia Paiva Proença Lobo; Piñeiro-Maceira, Juan; Barcaui, Carlos Baptista
2016-01-01
Ultrasonography is an imaging method that is classically used in dermatology to study changes in the hypodermis, such as nodules and infectious and inflammatory processes. The introduction of high-frequency, high-resolution equipment has enabled the observation of superficial structures, allowing differentiation between skin layers and providing details for the analysis of the skin and its appendages. This paper aims to review the basic principles of high frequency ultrasound and its applications in different areas of dermatology. PMID:27438191
DOE Office of Scientific and Technical Information (OSTI.GOV)
Knogler, Thomas; El-Rabadi, Karem; Weber, Michael
2014-12-15
Purpose: To determine the diagnostic performance of three-dimensional (3D) texture analysis (TA) of contrast-enhanced computed tomography (CE-CT) images for treatment response assessment in patients with Hodgkin lymphoma (HL), compared with F-18-fludeoxyglucose (FDG) positron emission tomography/CT. Methods: 3D TA of 48 lymph nodes in 29 patients was performed on venous-phase CE-CT images before and after chemotherapy. All lymph nodes showed pathologically elevated FDG uptake at baseline. A stepwise logistic regression with forward selection was performed to identify classic CT parameters and texture features (TF) that enable the separation of complete response (CR) and persistent disease. Results: The TF fraction of image in runs, calculated for the 45° direction, was able to correctly identify CR with an accuracy of 75%, a sensitivity of 79.3%, and a specificity of 68.4%. Classical CT features achieved an accuracy of 75%, a sensitivity of 86.2%, and a specificity of 57.9%, whereas the combination of TF and CT imaging achieved an accuracy of 83.3%, a sensitivity of 86.2%, and a specificity of 78.9%. Conclusions: 3D TA of CE-CT images is potentially useful to identify nodal residual disease in HL, with a performance comparable to that of classical CT parameters. Best results are achieved when TA and classical CT features are combined.
Statistical ultrasonics: the influence of Robert F. Wagner
NASA Astrophysics Data System (ADS)
Insana, Michael F.
2009-02-01
An important ongoing question for higher education is how to successfully mentor the next generation of scientists and engineers. It has been my privilege to have been mentored by one of the best, Dr Robert F. Wagner and his colleagues at the CDRH/FDA during the mid 1980s. Bob introduced many of us in medical ultrasonics to statistical imaging techniques. These ideas continue to broadly influence studies on adaptive aperture management (beamforming, speckle suppression, compounding), tissue characterization (texture features, Rayleigh/Rician statistics, scatterer size and number density estimators), and fundamental questions about how limitations of the human eye-brain system for extracting information from textured images can motivate image processing. He adapted the classical techniques of signal detection theory to coherent imaging systems that, for the first time in ultrasonics, related common engineering metrics for image quality to task-based clinical performance. This talk summarizes my wonderfully-exciting three years with Bob as I watched him explore topics in statistical image analysis that formed a rational basis for many of the signal processing techniques used in commercial systems today. It is a story of an exciting time in medical ultrasonics, and of how a sparkling personality guided and motivated the development of junior scientists who flocked around him in admiration and amazement.
Wars of Ideas and the War of Ideas
2008-06-01
… product. A classic example is the "Cola Wars" between Coca-Cola and Pepsi-Cola. Wars of Ideas: Some Conclusions. Inconclusive outcomes are not … A classic example is the ongoing war of slogans and images between Coca-Cola and Pepsi-Cola. Each uses a combination of slogans, images, and celebrities in … to the United States with an acquired taste for Coca-Cola, and in a global bottling and distribution network. Another notable marketing move was …
Fantoni, Frédéric; Hervé, Lionel; Poher, Vincent; Gioux, Sylvain; Mars, Jérôme I; Dinten, Jean-Marc
2014-01-01
Intraoperative fluorescence imaging in reflectance geometry is an attractive imaging modality to noninvasively monitor fluorescence-targeted tumors. In some situations, this kind of imaging suffers from poor resolution due to the diffusive nature of photons in tissue. The proposed technique is designed to tackle this limitation. It relies on scanning the medium with a laser line illumination and acquiring images at each position of excitation. The proposed detection scheme takes advantage of the acquired stack of images to enhance the resolution and the contrast of the final image. The experimental protocol is described to fully explain why we surpass the classical limits, and the scheme is validated on tissue-like phantoms and in vivo in preliminary testing. The results are compared with those obtained with a classical wide-field illumination.
Image based performance analysis of thermal imagers
NASA Astrophysics Data System (ADS)
Wegner, D.; Repasi, E.
2016-05-01
Due to advances in technology, modern thermal imagers resemble sophisticated image processing systems in functionality. Advanced signal and image processing tools enclosed in the camera body extend the basic image capturing capability of thermal cameras, in order to enhance the display presentation of the captured scene or of specific scene details. Usually, the implemented methods are proprietary company expertise, distributed without extensive documentation. This makes the comparison of thermal imagers, especially from different companies, a difficult task (or at least a very time consuming/expensive task, e.g. requiring the execution of a field trial and/or an observer trial). For example, a thermal camera equipped with turbulence mitigation capability stands for such a closed system. The Fraunhofer IOSB has started to build up a system for testing thermal imagers by image based methods in the lab environment. This will extend our capability of measuring the classical IR-system parameters (e.g. MTF, MTDP, etc.) in the lab. The system is set up around the IR-scene projector, which is necessary for the thermal display (projection) of an image sequence for the IR-camera under test. The same set of thermal test sequences can be presented to every unit under test; for turbulence mitigation tests, this could be, e.g., the same turbulence sequence. During system tests, gradual variation of input parameters (e.g. thermal contrast) can be applied. First ideas on test scene selection and on how to assemble an imaging suite (a set of image sequences) for the analysis of imaging thermal systems containing such black boxes in the image forming path are discussed.
NASA Astrophysics Data System (ADS)
Chalupka, Uwe; Rothe, Hendrik
2012-03-01
We report progress on the laser- and stereo-camera-based trajectory measurement system that we proposed and described in recent publications. The system design was extended from one to two more powerful, DSP-controllable laser systems. Experimental results of the extended system using different projectile/weapon combinations are shown and discussed. Automatic processing of acquired images using common 3DIP (3D image processing) techniques was realized. Processing steps to extract trajectory segments from images, representative of the current application, are presented, along with the algorithms used for backward-calculation of the projectile trajectory. Verification of the results is done against simulated trajectories, both in terms of detection robustness and in terms of detection accuracy. Fields of use for the current system are within the ballistic domain. The first purpose is trajectory measurement of small and middle caliber projectiles on a shooting range. Extension to big caliber projectiles, as well as application to sniper detection, is imaginable but would require further work. Besides classical RADAR, acoustic and optical projectile detection methods, the current system represents a further projectile location method in the new class of electro-optical methods that have evolved in recent decades and that use 3D imaging acquisition and processing techniques.
Computer vision camera with embedded FPGA processing
NASA Astrophysics Data System (ADS)
Lecerf, Antoine; Ouellet, Denis; Arias-Estrada, Miguel
2000-03-01
Traditional computer vision is based on a camera-computer system in which the image understanding algorithms are embedded in the computer. To circumvent the computational load of vision algorithms, low-level processing and imaging hardware can be integrated in a single compact module where a dedicated architecture is implemented. This paper presents a Computer Vision Camera based on an open architecture implemented in an FPGA. The system is targeted to real-time computer vision tasks where low level processing and feature extraction tasks can be implemented in the FPGA device. The camera integrates a CMOS image sensor, an FPGA device, two memory banks, and an embedded PC for communication and control tasks. The FPGA device is a medium size one equivalent to 25,000 logic gates. The device is connected to two high speed memory banks, an IS interface, and an imager interface. The camera can be accessed for architecture programming, data transfer, and control through an Ethernet link from a remote computer. A hardware architecture can be defined in a Hardware Description Language (like VHDL), simulated and synthesized into digital structures that can be programmed into the FPGA and tested on the camera. The architecture of a classical multi-scale edge detection algorithm based on a Laplacian of Gaussian convolution has been developed to show the capabilities of the system.
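The classical multi-scale edge detector named above can be prototyped in software before committing an architecture to the FPGA; a minimal sketch of one scale of Laplacian-of-Gaussian filtering followed by zero-crossing detection, using SciPy rather than the camera's hardware pipeline:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def log_edges(image, sigma=2.0):
    """Laplacian-of-Gaussian edge map: smooth-and-differentiate with
    gaussian_laplace, then mark zero crossings of the response."""
    log = gaussian_laplace(image.astype(float), sigma)
    # A pixel is an edge if the LoG response changes sign against a
    # horizontal or vertical neighbour.
    sign = log > 0
    edges = np.zeros_like(sign)
    edges[:-1, :] |= sign[:-1, :] != sign[1:, :]
    edges[:, :-1] |= sign[:, :-1] != sign[:, 1:]
    return edges
```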
Amarante Andrade, Pedro; Švec, Jan G
2016-07-01
Differences in classical and non-classical singing are due primarily to aesthetic style requirements. The head position can affect the sound quality. This study aimed at comparing the head position for famous classical and non-classical male singers performing high notes. Images of 39 Western classical and 34 non-classical male singers during live performances were obtained from YouTube. Ten raters evaluated the frontal rotational head position (depression versus elevation) and transverse head position (retraction versus protraction) visually using a visual analogue scale. The results showed a significant difference for frontal rotational head position. Most non-classical singers in the sample elevated their heads for high notes while the classical singers were observed to keep it around the neutral position. This difference may be attributed to different singing techniques and phonatory system adjustments utilized by each group.
Twelve automated thresholding methods for segmentation of PET images: a phantom study.
Prieto, Elena; Lecumberri, Pablo; Pagola, Miguel; Gómez, Marisol; Bilbao, Izaskun; Ecay, Margarita; Peñuelas, Iván; Martí-Climent, Josep M
2012-06-21
Tumor volume delineation over positron emission tomography (PET) images is of great interest for proper diagnosis and therapy planning. However, standard segmentation techniques (manual or semi-automated) are operator dependent and time consuming while fully automated procedures are cumbersome or require complex mathematical development. The aim of this study was to segment PET images in a fully automated way by implementing a set of 12 automated thresholding algorithms, classical in the fields of optical character recognition, tissue engineering or non-destructive testing images in high-tech structures. Automated thresholding algorithms select a specific threshold for each image without any a priori spatial information of the segmented object or any special calibration of the tomograph, as opposed to usual thresholding methods for PET. Spherical (18)F-filled objects of different volumes were acquired on clinical PET/CT and on a small animal PET scanner, with three different signal-to-background ratios. Images were segmented with 12 automatic thresholding algorithms and results were compared with the standard segmentation reference, a threshold at 42% of the maximum uptake. Ridler and Ramesh thresholding algorithms, based on clustering and histogram-shape information, respectively, provided better results than the classical 42%-based threshold (p < 0.05). We have herein demonstrated that fully automated thresholding algorithms can provide better results than classical PET segmentation tools.
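Of the twelve algorithms, Ridler's (the iterative-intermeans, or isodata, scheme) is easy to sketch: the threshold is moved to the midpoint of the mean intensities above and below it until it stabilises. A minimal NumPy version, assuming a grey-level PET image array:

```python
import numpy as np

def ridler_threshold(image, tol=1e-3):
    """Ridler-Calvard (isodata) clustering threshold: iterate the
    threshold to the midpoint of the mean uptake above and below it."""
    t = float(image.mean())
    while True:
        lo = image[image <= t]
        hi = image[image > t]
        if lo.size == 0 or hi.size == 0:   # degenerate split: stop
            return t
        t_new = 0.5 * (lo.mean() + hi.mean())
        if abs(t_new - t) < tol:
            return t_new
        t = t_new
```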
Fifty years of computer analysis in chest imaging: rule-based, machine learning, deep learning.
van Ginneken, Bram
2017-03-01
Half a century ago, the term "computer-aided diagnosis" (CAD) was introduced in the scientific literature. Pulmonary imaging, with chest radiography and computed tomography, has always been one of the focus areas in this field. In this study, I describe how machine learning became the dominant technology for tackling CAD in the lungs, generally producing better results than do classical rule-based approaches, and how the field is now rapidly changing: in the last few years, we have seen how even better results can be obtained with deep learning. The key differences among rule-based processing, machine learning, and deep learning are summarized and illustrated for various applications of CAD in the chest.
Classical imaging with undetected light
NASA Astrophysics Data System (ADS)
Cardoso, A. C.; Berruezo, L. P.; Ávila, D. F.; Lemos, G. B.; Pimenta, W. M.; Monken, C. H.; Saldanha, P. L.; Pádua, S.
2018-03-01
We obtained the phase and intensity images of an object by detecting classical light which never interacted with it. With a double passage of pump and signal laser beams through a nonlinear crystal, we observe interference between the two idler beams produced by stimulated parametric down conversion. The object is placed in the amplified signal beam after its first passage through the crystal and the image is observed in the interference of the generated idler beams. High contrast images can be obtained even for objects with a small transmittance coefficient, due to the geometry of the interferometer and to the stimulated parametric emission. Like its quantum counterpart, this three-color imaging concept can be useful when the object must be probed with light at a wavelength for which detectors are not available.
Novel windowing technique realized in FPGA for radar system
NASA Astrophysics Data System (ADS)
Escamilla-Hernandez, E.; Kravchenko, V. F.; Ponomaryov, V. I.; Ikuo, Arai
2006-02-01
To improve the weak target detection ability in radar applications, pulse compression is usually used, which in the case of linear FM modulation can improve the SNR. One drawback is that it can add range side-lobes to reflectivity measurements. Using weighting window processing in the time domain, it is possible to decrease the side-lobe level (SLL) significantly and resolve small or low power targets that are masked by powerful ones. Classical windows such as Hamming, Hanning, etc. are usually used in window processing. In addition to classical ones, in this paper we also use a novel class of windows based on atomic functions (AF) theory. For comparison of simulation and experimental results we applied the standard parameters, such as coefficient of amplification, maximum side-lobe level, width of the main lobe, etc. To implement the compression-windowing model at the hardware level, an FPGA has been employed. This work aims at demonstrating a reasonably flexible implementation of the linear-FM signal, pulse compression and windowing employing FPGAs. Classical and novel AF window techniques have been investigated to reduce the SLL, taking into account the noise influence, and to increase the detection ability for small or weak targets in imaging radar. The paper presents the experimental hardware results of windowing in pulse compression radar resolving several targets for rectangular, Hamming, Kaiser-Bessel, and (see manuscript for formula) function windows. The windows created by use of the atomic functions offer a sufficiently better decrease of the SLL in the presence of noise, and away from the main lobe, in comparison with classical windows.
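The compression-windowing chain for a linear-FM pulse can be sketched in a few NumPy lines. The parameter values are illustrative, and Hamming stands in for the window under study; the atomic-function windows would replace `np.hamming`:

```python
import numpy as np

fs, T, B = 50e6, 10e-6, 5e6                 # sample rate, pulse, bandwidth
t = np.arange(int(T * fs)) / fs
chirp = np.exp(1j * np.pi * (B / T) * t**2)  # linear-FM (chirp) pulse

# Matched filter weighted by a window to suppress range side-lobes;
# Hamming is one of the classical choices discussed above.
ref = np.conj(chirp[::-1]) * np.hamming(chirp.size)
echo = np.concatenate([np.zeros(200), chirp, np.zeros(200)])  # toy echo
compressed = np.convolve(echo, ref)
sll_db = 20 * np.log10(np.abs(compressed) / np.abs(compressed).max())
```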
Parmaksızoğlu, Selami; Alçı, Mustafa
2011-01-01
Cellular Neural Networks (CNNs) have been widely used recently in applications such as edge detection, noise reduction and object detection, which are among the main computer imaging processes. They can also be realized as hardware-based imaging sensors. The fact that hardware CNN models produce robust and effective results has attracted the attention of researchers using these structures within image sensors. Realization of desired CNN behavior such as edge detection can be achieved by correctly setting a cloning template without changing the structure of the CNN. To achieve different behaviors effectively, designing a cloning template is one of the most important research topics in this field. In this study, the edge detecting process that is used as a preliminary process for segmentation, identification and coding applications is conducted by using CNN structures. In order to design the cloning template of goal-oriented CNN architecture, an Artificial Bee Colony (ABC) algorithm, which is inspired by the foraging behavior of honeybees, is used, and the performance of ABC for this application is examined with multiple runs. The CNN template generated by the ABC algorithm is tested by using artificial and real test images. The results are subjectively and quantitatively compared with well-known classical edge detection methods, and other CNN based edge detector cloning templates available in the imaging literature. The results show that the proposed method is more successful than other methods. PMID:22163903
Development of a Mobile User Interface for Image-based Dietary Assessment
Kim, SungYe; Schap, TusaRebecca; Bosch, Marc; Maciejewski, Ross; Delp, Edward J.; Ebert, David S.; Boushey, Carol J.
2011-01-01
In this paper, we present a mobile user interface for image-based dietary assessment. The mobile user interface provides a front end to a client-server image recognition and portion estimation software. In the client-server configuration, the user interactively records a series of food images using a built-in camera on the mobile device. Images are sent from the mobile device to the server, and the calorie content of the meal is estimated. In this paper, we describe and discuss the design and development of our mobile user interface features. We discuss the design concepts, through initial ideas and implementations. For each concept, we discuss qualitative user feedback from participants using the mobile client application. We then discuss future designs, including work on design considerations for the mobile application to allow the user to interactively correct errors in the automatic processing while reducing the user burden associated with classical pen-and-paper dietary records. PMID:24455755
The Large Angle Spectroscopic Coronagraph (LASCO): Visible light coronal imaging and spectroscopy
NASA Technical Reports Server (NTRS)
Brueckner, Guenter E.; Howard, Russell A.; Koomen, Martin J.; Korendyke, C.; Michels, D. J.; Socker, D. G.; Lamy, Philippe; Llebaria, Antoine; Maucherat, J.; Schwenn, Rainer
1992-01-01
The Large Angle Spectroscopic Coronagraph (LASCO) is a triple coronagraph being jointly developed for the Solar and Heliospheric Observatory (SOHO) mission. LASCO comprises three nested coronagraphs (C1, C2, and C3) that image the solar corona from 1.1 to 30 solar radii (C1: 1.1 to 3 solar radii, C2: 1.5 to 6 solar radii, and C3: 3 to 30 solar radii). The inner coronagraph (C1) is a newly developed mirror version of the classic Lyot coronagraph without an external occultor, while the middle coronagraph (C2) and the outer coronagraph (C3) are externally occulted instruments. High resolution coronal spectroscopy from 1.1 to 3 solar radii can be performed using a Fabry-Perot interferometer, which is part of C1. High volume memories and a high speed microprocessor enable extensive onboard image processing. Image compression by factors of 10 to 20 will result in the transmission of 10 to 20 full images per hour.
X-ray phase-contrast imaging: the quantum perspective
NASA Astrophysics Data System (ADS)
Slowik, J. M.; Santra, R.
2013-08-01
Time-resolved phase-contrast imaging using ultrafast x-ray sources is an emerging method to investigate ultrafast dynamical processes in matter. Schemes to generate attosecond x-ray pulses have been proposed, bringing electronic timescales into reach and emphasizing the demand for a quantum description. In this paper, we present a method to describe propagation-based x-ray phase-contrast imaging in nonrelativistic quantum electrodynamics. We explain why the standard scattering treatment via Fermi’s golden rule cannot be applied. Instead, the quantum electrodynamical treatment of phase-contrast imaging must be based on a different approach. It turns out that it is essential to select a suitable observable. Here, we choose the quantum-mechanical Poynting operator. We determine the expectation value of our observable and demonstrate that the leading order term describes phase-contrast imaging. It recovers the classical expression of phase-contrast imaging. Thus, it makes the instantaneous electron density of non-stationary electronic states accessible to time-resolved imaging. Interestingly, inelastic (Compton) scattering does automatically not contribute in leading order, explaining the success of the semiclassical description.
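For orientation, one textbook form of the classical propagation-based phase-contrast expression that the leading-order quantum result recovers is the transport-of-intensity relation below, written under the usual assumptions of a weakly absorbing object and near-field propagation (this specific form is supplied for illustration and is not quoted from the paper):

```latex
% Near-field intensity behind a weak phase object \varphi(\mathbf{r}_\perp),
% with incident intensity I(\mathbf{r}_\perp, 0), wavenumber k, and
% propagation distance z:
I(\mathbf{r}_\perp, z) \;\approx\; I(\mathbf{r}_\perp, 0)
\left[\, 1 - \frac{z}{k}\,\nabla_\perp^{2}\varphi(\mathbf{r}_\perp) \right]
```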
Scattering features for lung cancer detection in fibered confocal fluorescence microscopy images.
Rakotomamonjy, Alain; Petitjean, Caroline; Salaün, Mathieu; Thiberville, Luc
2014-06-01
To assess the feasibility of lung cancer diagnosis using the fibered confocal fluorescence microscopy (FCFM) imaging technique and scattering features for pattern recognition. FCFM is a new medical imaging technique whose interest for diagnosis has yet to be established. This paper addresses the problem of lung cancer detection using FCFM images and, as a first contribution, assesses the feasibility of computer-aided diagnosis through these images. Towards this aim, we have built a pattern recognition scheme which involves a feature extraction stage and a classification stage. The second contribution relies on the features used for discrimination. Indeed, we have employed the so-called scattering transform for extracting discriminative features, which are robust to small deformations in the images. We have also compared and combined these features with classical yet powerful features like local binary patterns (LBP) and their variants denoted as local quinary patterns (LQP). We show that scattering features yielded better recognition performance than classical features like LBP and their LQP variants for the FCFM image classification problems. Another finding is that LBP-based and scattering-based features provide complementary discriminative information and, in some situations, we empirically establish that performance can be improved when jointly using LBP, LQP and scattering features. In this work we analyze the joint capability of FCFM images and scattering features for lung cancer diagnosis. The proposed method achieves a good recognition rate for such a diagnosis problem. It also performs well when used in conjunction with other features for other classical medical imaging classification problems.
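The LBP baseline against which the scattering features are compared is simple to reproduce; a minimal 8-neighbour sketch (without the rotation-invariant or quinary refinements used in the paper):

```python
import numpy as np

def lbp_8neighbour(image):
    """Plain 8-neighbour local binary pattern: threshold each pixel's
    ring of neighbours against the centre, pack the bits into a code
    in [0, 255], and return the histogram of codes as the feature."""
    img = image.astype(float)
    c = img[1:-1, 1:-1]                      # interior (centre) pixels
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy,
                 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    return np.bincount(code.ravel(), minlength=256)
```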
Tomographic PIV: particles versus blobs
NASA Astrophysics Data System (ADS)
Champagnat, Frédéric; Cornic, Philippe; Cheminet, Adam; Leclaire, Benjamin; Le Besnerais, Guy; Plyer, Aurélien
2014-08-01
We present an alternative approach to tomographic particle image velocimetry (tomo-PIV) that seeks to recover nearly single-voxel particles rather than blobs of extended size. The baseline of our approach is a particle-based representation of image data. An appropriate discretization of this representation yields an original linear forward model with a weight matrix built from specific samples of the system’s point spread function (PSF). Such an approach requires only a few voxels to explain the image appearance; it therefore favors much more sparsely reconstructed volumes than classic tomo-PIV. The proposed forward model is general and flexible and can be embedded in a classical multiplicative algebraic reconstruction technique (MART) or a simultaneous multiplicative algebraic reconstruction technique (SMART) inversion procedure. We show, using synthetic PIV images and by way of a large exploration of the generating conditions and a variety of performance metrics, that the model leads to better results than the classical tomo-PIV approach, in particular in the case of seeding densities greater than 0.06 particles per pixel and of PSFs characterized by a standard deviation larger than 0.8 pixels.
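For reference, a schematic MART iteration consistent with the inversion procedure named above (dense weights for clarity; real tomo-PIV systems use sparse PSF-sampled weights):

    # Sketch: multiplicative algebraic reconstruction technique (MART).
    import numpy as np

    def mart(W, b, n_iter=5, mu=1.0, eps=1e-12):
        """W: (n_pixels, n_voxels) weight matrix, b: (n_pixels,) pixel intensities."""
        x = np.ones(W.shape[1])                     # uniform, strictly positive start
        for _ in range(n_iter):
            for i in range(len(b)):
                proj = W[i] @ x + eps               # current projection of the voxel field
                x *= (b[i] / proj) ** (mu * W[i])   # multiplicative, entrywise update
        return x

    W = np.random.rand(20, 50)
    b = W @ np.random.rand(50)                      # synthetic pixel intensities
    print(np.linalg.norm(W @ mart(W, b) - b))       # residual shrinks with iterations

The multiplicative update preserves positivity and leaves voxels with zero weight untouched, which is what makes MART-type schemes attractive for sparse particle fields.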
Model of the lines of sight for an off-axis optical instrument Pleiades
NASA Astrophysics Data System (ADS)
Sauvage, Dominique; Gaudin-Delrieu, Catherine; Tournier, Thierry
2017-11-01
Future Earth observation missions aim at delivering images with a high resolution and a large field of view. These images have to be processed to obtain a very accurate localisation. To that end, the individual lines of sight of each photosensitive element must be evaluated according to the location of the pixels in the focal plane. With an off-axis Korsch telescope (like PLEIADES), however, the classical model has to be adapted. This is possible by using optical ground measurements made after the integration of the instrument. The processing of these results leads to several parameters, which are functions of the offsets of the focal plane and the real focal length. This study, proposed for the PLEIADES mission, leads to a more elaborate model which provides the relation between the lines of sight and the location of the pixels with a very good accuracy, close to the pixel size.
2007-08-01
Our experiments on the 3 kJ Nike KrF laser at the Naval Research Laboratory (NRL) seek detailed understanding of laser plasma interactions and the physical processes… It has been first used in our ICF-related hydrodynamic experiments on the NRL's Nike KrF laser, and later implemented… as implemented on Nike. In Section 3 we present some results of our hydrodynamic experiments, which have been made possible by this diagnostic.
Kyiv UkrVO glass archives: new life
NASA Astrophysics Data System (ADS)
Pakuliak, L.; Golovnya, V.; Andruk, V.; Shatokhina, S.; Yizhakevych, O.; Kazantseva, L.; Lukianchuk, V.
In the framework of the UkrVO national project, new methods of plate digital image processing have been developed. The photographic material of the UkrVO Joint Digital Archive (JDA) is used for the solution of the classic astrometric problem: positional and photometric determination of the objects registered on the plates. The results of the tested methods show that the positional rms errors are better than ±150 mas in both coordinates, and the photometric ones better than ±0.20 mag, with the Tycho-2 catalogue as reference.
Kinematic reconstruction in cardiovascular imaging.
Bastarrika, G; Huebra Rodríguez, I J González de la; Calvo-Imirizaldu, M; Suárez Vega, V M; Alonso-Burgos, A
2018-05-17
Advances in clinical applications of computed tomography have been accompanied by improvements in advanced post-processing tools. In addition to multiplanar reconstructions, curved planar reconstructions, maximum intensity projections, and volumetric reconstructions, very recently kinematic reconstruction has been developed. This new technique, based on mathematical models that simulate the propagation of light beams through a volume of data, makes it possible to obtain very realistic three-dimensional images. This article illustrates examples of kinematic reconstructions and compares them with classical volumetric reconstructions in patients with cardiovascular disease, in a way that makes it easy to establish the differences between the two types of reconstruction. Kinematic reconstruction is a new method for representing three-dimensional images that facilitates the explanation and comprehension of the findings. Copyright © 2018 SERAM. Published by Elsevier España, S.L.U. All rights reserved.
Mental visualization of objects from cross-sectional images
Wu, Bing; Klatzky, Roberta L.; Stetten, George D.
2011-01-01
We extended the classic anorthoscopic viewing procedure to test a model of visualization of 3D structures from 2D cross-sections. Four experiments were conducted to examine key processes described in the model: localization of cross-sections within a common frame of reference, and spatiotemporal integration of cross-sections into a hierarchical object representation. Participants used a hand-held device to reveal a hidden object as a sequence of cross-sectional images. The process of localization was manipulated by contrasting two displays, in-situ vs. ex-situ, which differed in whether cross-sections were presented at their source locations or displaced to a remote screen. The process of integration was manipulated by varying the structural complexity of target objects and their components. Experiments 1 and 2 demonstrated visualization of 2D and 3D line-segment objects and verified predictions about display and complexity effects. In Experiments 3 and 4, the visualized forms were familiar letters and numbers. Errors and orientation effects showed that displacing cross-sectional images to a remote display (ex-situ viewing) impeded the ability to determine spatial relationships among pattern components, a failure of integration at the object level. PMID:22217386
Palmieri, Roberta; Bonifazi, Giuseppe; Serranti, Silvia
2014-11-01
This study characterizes the composition of plastic frames and printed circuit boards from end-of-life mobile phones. This knowledge may help define an optimal processing strategy for using these items as potential raw materials. Correct handling of such waste is essential for its further "sustainable" recovery, especially to maximize the extraction of base, rare and precious metals while minimizing the environmental impact of the entire process chain. A combination of electronic and chemical imaging techniques was therefore examined, applied and critically evaluated in order to optimize the processing, through the identification and topological assessment of the materials of interest and their quantitative distribution. To reach this goal, end-of-life mobile phone derived wastes were systematically characterized adopting both "traditional" (e.g. scanning electron microscopy combined with microanalysis and Raman spectroscopy) and innovative (e.g. hyperspectral imaging in the short wave infrared field) techniques, with reference to frames and printed circuit boards. Results showed that the combination of the two approaches (i.e. traditional and innovative) could dramatically improve the design of recycling strategies, as well as final product recovery. Copyright © 2014 Elsevier Ltd. All rights reserved.
Timmers, Inge; van den Hurk, Job; Di Salle, Francesco; Rubio-Gozalbo, M Estela; Jansma, Bernadette M
2011-04-01
Most humans are social beings, and we express our thoughts and feelings through language. In contrast to the ease with which we speak, the underlying cognitive and neural processes of language production are fairly complex and still little understood. In the hereditary metabolic disease classic galactosemia, failures in language production processes are among the most reported difficulties. It is unclear, however, what the underlying neural cause of this cognitive problem is. Modern brain imaging techniques allow us to look into the brain of a thinking patient online, while he or she is performing a task, such as speaking. We can indirectly measure neural activity related to the output side of a process (e.g. articulation). But most importantly, we can look into the planning phase prior to an overt response, hence tapping into subcomponents of speech planning. These components include verbal memory, the intention to speak, and the planning of meaning, syntax, and phonology. This paper briefly introduces cognitive theories on language production and methods used in cognitive neuroscience. It reviews the possibilities of applying them in experimental paradigms to investigate language production and verbal memory in galactosemia.
[Structuralist reading of radiologic images].
Wackenheim, A
1984-02-01
The author suggests analysing the radiological image according to the classical principles of structuralism, gestaltism, semiology, and semantics. He describes applications in routine radiology: the perception of the complete theoretical displacement of parts of the image, the phenomenology of three images (A-B-C) in theory and in examinations, and errors of perception by analogy.
Research on Bayes matting algorithm based on Gaussian mixture model
NASA Astrophysics Data System (ADS)
Quan, Wei; Jiang, Shan; Han, Cheng; Zhang, Chao; Jiang, Zhengang
2015-12-01
The digital matting problem is a classical problem of imaging. It aims at separating non-rectangular foreground objects from a background image and compositing them with a new background image. Accurate matting determines the quality of the composited image. A Bayesian matting algorithm based on a Gaussian mixture model is proposed to solve this matting problem. Firstly, the traditional Bayesian framework is improved by introducing a Gaussian mixture model. Then, a weighting factor is added in order to suppress the noise of the composited images. Finally, the effect is further improved by regulating the user's input. The algorithm is applied to matting jobs on classical images, and the results are compared to those of the traditional Bayesian method. It is shown that our algorithm performs better on details such as hair, eliminates noise well, and is very effective for objects of interest with intricate boundaries.
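A minimal sketch of the core ingredient, modelling foreground and background colors as Gaussian mixtures and turning their likelihoods into a soft alpha, is given below. It is not the paper's full Bayesian solver; the component count, the scikit-learn usage, and the random stand-in data are assumptions.

    # Sketch: soft alpha matte from foreground/background Gaussian mixture color models.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    fg_pixels = np.random.rand(500, 3)   # stand-in for user-marked foreground RGB samples
    bg_pixels = np.random.rand(500, 3)   # stand-in for user-marked background RGB samples
    unknown = np.random.rand(1000, 3)    # pixels in the unknown region of the trimap

    gmm_fg = GaussianMixture(n_components=5, random_state=0).fit(fg_pixels)
    gmm_bg = GaussianMixture(n_components=5, random_state=0).fit(bg_pixels)

    log_fg = gmm_fg.score_samples(unknown)          # per-pixel log-likelihoods
    log_bg = gmm_bg.score_samples(unknown)
    alpha = 1.0 / (1.0 + np.exp(log_bg - log_fg))   # posterior probability of foreground
    print(alpha.min(), alpha.max())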
Retinal Connectomics: Towards Complete, Accurate Networks
Marc, Robert E.; Jones, Bryan W.; Watt, Carl B.; Anderson, James R.; Sigulinsky, Crystal; Lauritzen, Scott
2013-01-01
Connectomics is a strategy for mapping complex neural networks based on high-speed automated electron optical imaging, computational assembly of neural data volumes, web-based navigational tools to explore 10^12–10^15 byte (terabyte to petabyte) image volumes, and annotation and markup tools to convert images into rich networks with cellular metadata. These collections of network data and associated metadata, analyzed using tools from graph theory and classification theory, can be merged with classical systems theory, giving a more completely parameterized view of how biologic information processing systems are implemented in retina and brain. Networks have two separable features: topology and connection attributes. The first findings from connectomics strongly validate the idea that the topologies of complete retinal networks are far more complex than the simple schematics that emerged from classical anatomy. In particular, connectomics has permitted an aggressive refactoring of the retinal inner plexiform layer, demonstrating that network function cannot be simply inferred from stratification; exposing the complex geometric rules for inserting different cells into a shared network; revealing unexpected bidirectional signaling pathways between mammalian rod and cone systems; and documenting selective feedforward systems, novel candidate signaling architectures, new coupling motifs, and the highly complex architecture of the mammalian AII amacrine cell. This is but the beginning, as the underlying principles of connectomics are readily transferable to non-neural cell complexes and provide new contexts for assessing intercellular communication. PMID:24016532
The new classic data acquisition system for NPOI
NASA Astrophysics Data System (ADS)
Sun, B.; Jorgensen, A. M.; Landavazo, M.; Hutter, D. J.; van Belle, G. T.; Mozurkewich, David; Armstrong, J. T.; Schmitt, H. R.; Baines, E. K.; Restaino, S. R.
2014-07-01
The New Classic data acquisition system is an important part of a new project of stellar surface imaging with the NPOI, funded by the National Science Foundation, and enables the data acquisition necessary for the project. The NPOI can simultaneously deliver beams from 6 telescopes to the beam combining facility, and in the Classic beam combiner these are combined 4 at a time on 3 separate spectrographs with all 15 possible baselines observed. The Classic data acquisition system is limited to 16 of 32 wavelength channels on two spectrographs and to 30 s integrations followed by a pause to flush data. Classic also has some limitations in its fringe-tracking capability. These factors, and the fact that Classic incorporates 1990s technology which cannot be easily replaced, are the motivation for upgrading the data acquisition system. The New Classic data acquisition system is based around modern electronics, including a high-end Stratix FPGA, a 200 MB/s Direct Memory Access card, and a fast modern Linux computer. These allow for continuous recording of all 96 channels across three spectrographs, increasing the total amount of data recorded by an estimated order of magnitude. The additional computing power of the data acquisition system also allows for the implementation of the more sophisticated fringe-tracking algorithms which are needed for the Stellar Surface Imaging project. In this paper we describe the New Classic system design and implementation, describe the background and motivation for the system, and show some initial results from using it.
Classical and unusual imaging appearances of melorheostosis.
Suresh, S; Muthukumar, T; Saifuddin, A
2010-08-01
This comprehensive review will discuss the classical and unusual radiological features of melorheostosis, which is an uncommon, non-hereditary, benign, sclerosing mesodermal disease with an incidence of 0.9 cases per million. The presentation of melorheostosis in the appendicular skeleton (more commonly involved) and in the axial skeleton (very few documented case reports) will be discussed. The aim of the review is to illustrate the associations and rare, but recognized, complications of the disorder. The role of cross-sectional imaging in the form of magnetic resonance imaging (MRI) and computed tomography (CT) in revealing the spectrum of disease manifestation and differentiation from other disease entities and malignancy will be explored.
An efficient hole-filling method based on depth map in 3D view generation
NASA Astrophysics Data System (ADS)
Liang, Haitao; Su, Xiu; Liu, Yilin; Xu, Huaiyuan; Wang, Yi; Chen, Xiaodong
2018-01-01
A new virtual view is synthesized through depth image based rendering (DIBR) using a single color image and its associated depth map in 3D view generation. Holes are unavoidably generated in the 2D-to-3D conversion process. We propose a hole-filling method based on the depth map to address this problem. Firstly, we improve the DIBR process by proposing a one-to-four (OTF) algorithm, using the "z-buffer" algorithm to solve the overlap problem. Then, based on the classical patch-based algorithm of Criminisi et al., we propose a hole-filling algorithm that uses the information of the depth map to handle the image after DIBR. In order to improve the accuracy of the virtual image, inpainting starts from the background side. In the calculation of the priority, in addition to the confidence term and the data term, we add a depth term, as sketched below. In the search for the most similar patch in the source region, we define a depth similarity to improve the accuracy of the search. Experimental results show that the proposed method can effectively improve the quality of the 3D virtual view both subjectively and objectively.
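The modified priority can be sketched as follows. The abstract does not give the exact form of the added depth term, so the product form and the background-first depth weighting below (assuming larger depth values mean farther from the camera) are illustrative assumptions.

    # Sketch: Criminisi-style patch priority extended with a depth term.
    import numpy as np

    def priority(confidence, grad_x, grad_y, normal, depth_patch, depth_max, p):
        """Priority P(p) = C(p) * D(p) * Z(p) for a pixel p on the fill front."""
        y, x = p
        c_term = confidence[y, x]                           # confidence term C(p)
        isophote = np.array([-grad_y[y, x], grad_x[y, x]])  # gradient rotated by 90 degrees
        d_term = abs(isophote @ normal) / 255.0             # data term D(p) of Criminisi et al.
        z_term = depth_patch.mean() / depth_max             # assumed depth term Z(p): favors the
                                                            # background (larger depth) side first
        return c_term * d_term * z_term

Pixels on the fill front would be ranked by this priority each iteration, so that background regions are inpainted before foreground ones.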
A ganglion-cell-based primary image representation method and its contribution to object recognition
NASA Astrophysics Data System (ADS)
Wei, Hui; Dai, Zhi-Long; Zuo, Qing-Song
2016-10-01
A visual stimulus is represented by the biological visual system at several levels, from low to high: photoreceptor cells, ganglion cells (GCs), lateral geniculate nucleus cells, and visual cortical neurons. Retinal GCs at the early level need to represent raw data only once, yet meet a wide range of diverse requests from different vision-based tasks. This means the information representation at this level is general and not task-specific. Neurobiological findings have attributed this universal adaptability to the GCs' receptive field (RF) mechanisms. For the purpose of developing a highly efficient image representation method that can facilitate information processing and interpretation at later stages, here we design a computational model to simulate the GC's non-classical RF. This new image representation method can extract major structural features from raw data and is consistent with other statistical measures of the image. Based on the new representation, the performance of other state-of-the-art algorithms in contour detection and segmentation can be upgraded remarkably. This work concludes that applying a sophisticated representation scheme at an early stage is an efficient and promising strategy in visual information processing.
NASA Astrophysics Data System (ADS)
Thiebaut, C.; Perraud, L.; Delvit, J. M.; Latry, C.
2016-07-01
We present an on-board satellite implementation of a gradient-based (optical flow) algorithm for estimating the shifts between images of a Shack-Hartmann wave-front sensor over extended landscapes. The proposed algorithm has low complexity in comparison with classical correlation methods, a major advantage for on-board use at high instrument data rates and in real time. The electronic board used for this implementation is designed for space applications and is composed of radiation-hardened software and hardware. The processing times of both the shift estimation and the pre-processing steps are compatible with on-board real-time computation.
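For a pure translation, the gradient-based (optical-flow) estimate reduces to a 2x2 linear least-squares solve, which is why it is so much cheaper than correlation; the sketch below illustrates the principle and is in no way the flight implementation.

    # Sketch: least-squares estimation of a global shift from image gradients.
    import numpy as np

    def estimate_shift(ref, img):
        """Estimate (dy, dx) such that img(x) ~ ref(x - d); valid for small shifts."""
        gy, gx = np.gradient(ref.astype(float))
        gt = img.astype(float) - ref.astype(float)        # difference image
        A = np.array([[np.sum(gy * gy), np.sum(gy * gx)],
                      [np.sum(gx * gy), np.sum(gx * gx)]])
        b = -np.array([np.sum(gy * gt), np.sum(gx * gt)])
        return np.linalg.solve(A, b)                      # (dy, dx) in pixels

For shifts beyond a fraction of a pixel, one would iterate this solve while warping one image toward the other.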
Developing Students' Ideas about Lens Imaging: Teaching Experiments with an Image-Based Approach
ERIC Educational Resources Information Center
Grusche, Sascha
2017-01-01
Lens imaging is a classic topic in physics education. To guide students from their holistic viewpoint to the scientists' analytic viewpoint, an image-based approach to lens imaging has recently been proposed. To study the effect of the image-based approach on undergraduate students' ideas, teaching experiments are performed and evaluated using…
Insight into efficient image registration techniques and the demons algorithm.
Vercauteren, Tom; Pennec, Xavier; Malis, Ezio; Perchant, Aymeric; Ayache, Nicholas
2007-01-01
As image registration becomes more and more central to many biomedical imaging applications, the efficiency of the algorithms becomes a key issue. Image registration is classically performed by optimizing a similarity criterion over a given spatial transformation space. Even if this problem is considered as almost solved for linear registration, we show in this paper that some tools that have recently been developed in the field of vision-based robot control can outperform classical solutions. The adequacy of these tools for linear image registration leads us to revisit non-linear registration and allows us to provide interesting theoretical roots to the different variants of Thirion's demons algorithm. This analysis predicts a theoretical advantage to the symmetric forces variant of the demons algorithm. We show that, on controlled experiments, this advantage is confirmed, and yields a faster convergence.
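As a rough sketch of the force computation discussed above (a simplified illustration, not the paper's full framework): Thirion's update uses the fixed-image gradient, and replacing it with the mean of the fixed and moving gradients gives the symmetric-forces variant whose faster convergence the paper predicts and confirms.

    # Sketch: one demons force computation on 2D images (no regularization shown).
    import numpy as np

    def demons_force(fixed, moving, symmetric=True):
        f, m = fixed.astype(float), moving.astype(float)
        gfy, gfx = np.gradient(f)
        gmy, gmx = np.gradient(m)
        if symmetric:                                   # symmetric-forces variant
            jy, jx = 0.5 * (gfy + gmy), 0.5 * (gfx + gmx)
        else:                                           # Thirion's original force
            jy, jx = gfy, gfx
        diff = m - f
        denom = jx**2 + jy**2 + diff**2 + 1e-12         # demons normalization
        return -diff * jy / denom, -diff * jx / denom   # displacement field (uy, ux)

    # A full registration loop alternates this force with warping of `moving`
    # and Gaussian smoothing of the accumulated field (the demons regularization).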
Cloud Engineering Principles and Technology Enablers for Medical Image Processing-as-a-Service
Bao, Shunxing; Plassard, Andrew J.; Landman, Bennett A.; Gokhale, Aniruddha
2017-01-01
Traditional in-house, laboratory-based medical imaging studies use hierarchical data structures (e.g., NFS file stores) or databases (e.g., COINS, XNAT) for storage and retrieval. The resulting performance from these approaches is, however, impeded by standard network switches since they can saturate network bandwidth during transfer from storage to processing nodes for even moderate-sized studies. To that end, a cloud-based “medical image processing-as-a-service” offers promise in utilizing the ecosystem of Apache Hadoop, which is a flexible framework providing distributed, scalable, fault-tolerant storage and parallel computational modules, and HBase, which is a NoSQL database built atop Hadoop’s distributed file system. Despite this promise, HBase’s load distribution strategy of region split and merge is detrimental to the hierarchical organization of imaging data (e.g., project, subject, session, scan, slice). This paper makes two contributions to address these concerns by describing key cloud engineering principles and technology enhancements we made to the Apache Hadoop ecosystem for medical imaging applications. First, we propose a row-key design for HBase, which is a necessary step that is driven by the hierarchical organization of imaging data. Second, we propose a novel data allocation policy within HBase to strongly enforce collocation of hierarchically related imaging data. The proposed enhancements accelerate data processing by minimizing network usage and localizing processing to machines where the data already exist. Moreover, our approach is amenable to the traditional scan, subject, and project-level analysis procedures, and is compatible with standard command line/scriptable image processing software. Experimental results for an illustrative sample of imaging data reveal that our new HBase policy results in a three-fold time improvement in conversion of classic DICOM to NiFTI file formats when compared with the default HBase region split policy, and nearly a six-fold improvement over a commonly available network file system (NFS) approach even for relatively small file sets. Moreover, file access latency is lower than that of network-attached storage. PMID:28884169
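The hierarchical row-key idea can be sketched in a few lines of code; the separator and fixed field widths below are illustrative assumptions rather than the paper's exact design.

    # Sketch: hierarchical HBase row key so related imaging data sort together.
    def make_row_key(project: str, subject: str, session: str,
                     scan: str, slice_no: int) -> str:
        # Most-significant-field-first, fixed-width keys keep all rows of a
        # project/subject/session/scan contiguous in HBase's sorted key space,
        # which is what lets a data allocation policy collocate them.
        return f"{project:>8}-{subject:>8}-{session:>8}-{scan:>8}-{slice_no:05d}"

    print(make_row_key("proj01", "subj42", "sess001", "scan003", 7))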
Nestor, Adrian; Vettel, Jean M; Tarr, Michael J
2013-11-01
What basic visual structures underlie human face detection and how can we extract such structures directly from the amplitude of neural responses elicited by face processing? Here, we address these issues by investigating an extension of noise-based image classification to BOLD responses recorded in high-level visual areas. First, we assess the applicability of this classification method to such data and, second, we explore its results in connection with the neural processing of faces. To this end, we construct luminance templates from white noise fields based on the response of face-selective areas in the human ventral cortex. Using behaviorally and neurally-derived classification images, our results reveal a family of simple but robust image structures subserving face representation and detection. Thus, we confirm the role played by classical face selective regions in face detection and we help clarify the representational basis of this perceptual function. From a theory standpoint, our findings support the idea of simple but highly diagnostic neurally-coded features for face detection. At the same time, from a methodological perspective, our work demonstrates the ability of noise-based image classification in conjunction with fMRI to help uncover the structure of high-level perceptual representations. Copyright © 2012 Wiley Periodicals, Inc.
Multiple Point Statistics algorithm based on direct sampling and multi-resolution images
NASA Astrophysics Data System (ADS)
Julien, S.; Renard, P.; Chugunova, T.
2017-12-01
Multiple Point Statistics (MPS) has been popular for more than a decade in the Earth Sciences, because these methods can generate random fields reproducing the highly complex spatial features given in a conceptual model, the training image, where classical geostatistics techniques based on two-point statistics (covariance or variogram) fail to generate realistic models. Among MPS methods, direct sampling consists in borrowing patterns from the training image to populate a simulation grid. The grid is filled sequentially by visiting its nodes in a random order; the patterns, whose number of nodes is fixed, become narrower during the simulation process as the grid becomes more densely informed. Hence, large-scale structures are captured at the beginning of the simulation and small-scale ones at the end. However, MPS may mix spatial characteristics that are distinguishable at different scales in the training image, and thereby lose the spatial arrangement of different structures. To overcome this limitation, we propose to perform MPS simulation using a decomposition of the training image into a set of images at multiple resolutions. Applying a Gaussian kernel to the training image (convolution) yields a lower-resolution image; iterating this process builds a pyramid of images showing fewer details at each level, as is done in image processing, for example, to reduce the storage size of a photograph. Direct sampling is then employed to simulate the lowest-resolution level, and then each level up to the finest resolution, conditioned on the level one rank coarser. This scheme helps reproduce the spatial structures of the training image at every scale and thus generates more realistic models. We illustrate the method with aerial photographs (satellite images) and natural textures. Indeed, these kinds of images often display typical structures at different scales and are well suited to MPS simulation techniques.
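The multi-resolution decomposition can be sketched as a standard Gaussian pyramid; the kernel width and factor-of-two subsampling below are illustrative assumptions.

    # Sketch: Gaussian pyramid of a training image for multi-resolution MPS.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def gaussian_pyramid(training_image, levels=3, sigma=1.0):
        pyramid = [np.asarray(training_image, dtype=float)]
        for _ in range(levels):
            smoothed = gaussian_filter(pyramid[-1], sigma)  # Gaussian kernel convolution
            pyramid.append(smoothed[::2, ::2])              # subsample: coarser level
        return pyramid                                      # pyramid[0] finest, [-1] coarsest

    for level in gaussian_pyramid(np.random.rand(128, 128)):
        print(level.shape)

Simulation then runs from pyramid[-1] (coarsest) up to pyramid[0], each level conditioned on the previous one.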
Fruit fly optimization based least square support vector regression for blind image restoration
NASA Astrophysics Data System (ADS)
Zhang, Jiao; Wang, Rui; Li, Junshan; Yang, Yawei
2014-11-01
The goal of image restoration is to reconstruct the original scene from a degraded observation; it is a critical and challenging task in image processing. Classical restorations require explicit knowledge of the point spread function (PSF) and a description of the noise as priors. However, this is not practical for many real image-processing tasks, and recovery must then be performed as blind image restoration. Since blind deconvolution is an ill-posed problem, many blind restoration methods need to make additional assumptions to construct restrictions. Because PSFs and noise energies differ, blurred images can be quite different, and it is difficult in blind deconvolution to strike a good balance between proper assumptions and high restoration quality. Recently, machine learning techniques have been applied to blind image restoration. Least squares support vector regression (LSSVR) has proven to offer strong potential in estimation and forecasting problems. Therefore, this paper proposes an LSSVR-based image restoration method. However, selecting the optimal parameters for the support vector machine is essential to the training result. As a novel meta-heuristic, the fruit fly optimization algorithm (FOA) can handle such optimization problems and converges quickly to the global optimum. In the proposed method, training samples are created that map a neighborhood in the degraded image to the central pixel in the original image. The mapping between the degraded image and the original image is learned by training the LSSVR, whose two parameters are optimized through FOA; the fitness function of FOA is the restoration error. With the acquired mapping, the degraded image can be recovered. Experimental results show the proposed method obtains a satisfactory restoration effect. Compared with BP neural network regression, the SVR method, and the Lucy-Richardson algorithm, it speeds up restoration and performs better. Both objective and subjective restoration performance are studied in the comparison experiments.
Two-dimensional imaging via a narrowband MIMO radar system with two perpendicular linear arrays.
Wang, Dang-wei; Ma, Xiao-yan; Su, Yi
2010-05-01
This paper presents a system model and method for the 2-D imaging application via a narrowband multiple-input multiple-output (MIMO) radar system with two perpendicular linear arrays. Furthermore, the imaging formulation for our method is developed through Fourier integral processing, and the parameters of the antenna array, including the cross-range resolution, required size, and sampling interval, are also examined. Different from the spatial sequential procedure of sampling the scattered echoes over multiple snapshot illuminations in inverse synthetic aperture radar (ISAR) imaging, the proposed method uses a spatial parallel procedure to sample the scattered echoes during a single snapshot illumination. Consequently, the complex motion compensation of ISAR imaging can be avoided. Moreover, in our array configuration, multiple narrowband spectrum-shared waveforms coded with orthogonal polyphase sequences are employed. The mainlobes of the compressed echoes from the different filter bands can be located in the same range bin, and thus the range alignment of classical ISAR imaging is not necessary. Numerical simulations based on synthetic data are provided to test the proposed method.
Classical Wave Model of Quantum-Like Processing in Brain
NASA Astrophysics Data System (ADS)
Khrennikov, A.
2011-01-01
We discuss the conjecture of quantum-like (QL) processing of information in the brain. It is not based on a physical quantum brain (e.g., Penrose) with quantum physical carriers of information. In our approach the brain creates a QL representation (QLR) of information in Hilbert space and uses quantum information rules in decision making. The existence of such a QLR has been (at least preliminarily) confirmed by experimental data from cognitive psychology. The violation of the law of total probability in these experiments is an important sign of the nonclassicality of the data. In the so-called "constructive wave function approach" such data can be represented by complex amplitudes. We previously presented the QL model of decision making [1,2]. In this paper we speculate on a possible physical realization of the QLR in the brain: a classical wave model producing the QLR. It is based on the variety of time scales in the brain. Each pair of scales (fine: the background fluctuations of the electromagnetic field; rough: the cognitive image scale) induces a QL representation. The background field plays the crucial role in the creation of "superstrong QL correlations" in the brain.
Salient contour extraction from complex natural scene in night vision image
NASA Astrophysics Data System (ADS)
Han, Jing; Yue, Jiang; Zhang, Yi; Bai, Lian-fa
2014-03-01
The theory of center-surround interaction in the non-classical receptive field can be applied to night vision information processing. In this work, an optimized compound receptive field modulation method is proposed to extract salient contours from complex natural scenes in low-light-level (LLL) and infrared images. The kernel idea is that multi-feature analysis can recognize the inhomogeneity in the modulatory coverage more accurately, and that center-surround pairs whose grouping structure satisfies the Gestalt rule deserve a high connection probability. Computationally, a multi-feature contrast-weighted inhibition model is presented to suppress the background and lower the mutual inhibition among contour elements; a fuzzy connection facilitation model is proposed to achieve enhancement of the contour response, connection of discontinuous contours, and further elimination of randomly distributed noise and texture; and a multi-scale iterative attention method is designed to accomplish the dynamic modulation process and extract contours of targets of multiple sizes. This work provides a series of biologically motivated, high-performance computational visual models for contour detection in cluttered night vision scenes.
Estimation of kinetic parameters from list-mode data using an indirect approach
NASA Astrophysics Data System (ADS)
Ortiz, Joseph Christian
This dissertation explores the possibility of using an imaging approach to model classical pharmacokinetic (PK) problems. The kinetic parameters, which describe the uptake rates of a drug within a biological system, are the parameters of interest. Knowledge of the drug uptake in a system is useful for expediting the drug development process, as well as for establishing a dosage regimen for patients. Traditionally, the uptake rate of a drug in a system is obtained by sampling the concentration of the drug in a central compartment, usually the blood, and fitting the data to a curve. In a system consisting of multiple compartments, the number of kinetic parameters is proportional to the number of compartments, and in classical PK experiments the number of identifiable parameters is less than the total number of parameters. Using an imaging approach to model classical PK problems, the support region of each compartment within the system is exactly known, and all the kinetic parameters are uniquely identifiable. To solve for the kinetic parameters, an indirect, two-part approach was used: first the compartmental activity was obtained from the data, and then the kinetic parameters were estimated. The novel aspect of the research is using list-mode data, as opposed to a traditional binned approach, to obtain the activity curves of a system. Using techniques from information-theoretic learning, particularly kernel density estimation, a non-parametric probability density function for the voltage outputs of each photomultiplier tube, for each event, was generated on the fly and used in a least squares optimization routine to estimate the compartmental activity. The estimability of the activity curves for varying noise levels and time sample densities was explored. Once an estimate of the activity was obtained, the kinetic parameters were estimated using multiple cost functions and compared to each other using the mean squared error as the figure of merit.
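The density-estimation step can be illustrated with SciPy's Gaussian KDE as a stand-in for the information-theoretic estimator described above; the per-event photomultiplier voltages here are synthetic.

    # Sketch: nonparametric density of per-event photomultiplier voltage outputs.
    import numpy as np
    from scipy.stats import gaussian_kde

    voltages = np.random.normal(1.2, 0.3, size=(4, 1000))  # 4 PMTs, 1000 synthetic events
    kde = gaussian_kde(voltages)                # joint 4-D density over PMT outputs
    query = np.array([[1.2], [1.1], [1.3], [1.2]])
    print(kde(query))                           # density at one voltage 4-tuple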
Citation classics in periodontology: a controlled study.
Nieri, Michele; Saletta, Daniele; Guidi, Luisa; Buti, Jacopo; Franceschi, Debora; Mauro, Saverio; Pini-Prato, Giovanpaolo
2007-04-01
The aims of this study were to identify the most cited articles in Periodontology published from January 1990 to March 2005 and to analyse the differences between citation Classics and less cited articles. The search was carried out in four international periodontal journals: Journal of Periodontology, Journal of Clinical Periodontology, International Journal of Periodontics and Restorative Dentistry, and Journal of Periodontal Research. The Classics, i.e. articles cited at least 100 times, were identified using the Science Citation Index database. From every issue of a journal that contained a Classic, another article was randomly selected and used as a Control. Fifty-five Classics and 55 Controls were identified. Classic articles were longer, used more images, had more authors, and contained more self-references than Controls. Moreover, Classics had on average a larger sample size and often dealt with etiopathogenesis and prognosis, but were rarely controlled or randomized studies. Classic articles play an instructive role, but are often non-controlled studies.
Windowing technique in FM radar realized by FPGA for better target resolution
NASA Astrophysics Data System (ADS)
Ponomaryov, Volodymyr I.; Escamilla-Hernandez, Enrique; Kravchenko, Victor F.
2006-09-01
Remote sensing systems such as SAR usually apply FM signals to resolve closely spaced targets (objects) and improve SNR. A main drawback of pulse compression of FM radar signals is that it can add range side-lobes to reflectivity measurements. Using weighting-window processing in the time domain, it is possible to significantly decrease the side-lobe level (SLL) of the output radar signal, which permits resolving small or low-power targets that are masked by powerful ones. Classical windows such as Hamming, Hanning, Blackman-Harris, Kaiser-Bessel, Dolph-Chebyshev, Gauss, etc. are usually used in window processing. In addition to the classical ones, here we also use a novel class of windows based on atomic function (AF) theory. For comparison of simulation and experimental results we applied the standard parameters, such as coefficient of amplification, maximum side-lobe level, and width of the main lobe. In this paper we also propose to implement the compression-windowing model at the hardware level employing a Field Programmable Gate Array (FPGA), which offers benefits such as instantaneous implementation, dynamic reconfiguration, design flexibility, and field programmability. The pulse compression design was investigated on an FPGA applying classical and novel window techniques to reduce the SLL in the absence and presence of noise. The paper presents simulated and experimental examples of detection of small or closely spaced targets in the imaging radar. It also presents experimental hardware results of windowing in FM radar demonstrating resolution of several targets for the classical rectangular, Hamming, and Kaiser-Bessel windows, and for some novel ones: Up(x), fup_4(x)•D_3(x), fup_6(x)•G_3(x), etc. It is possible to conclude that windows created on the basis of AFs offer better SLL reduction, in the presence or absence of noise and away from the main lobe, in comparison with classical windows.
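A minimal numerical illustration of the windowing effect (classical windows only, since atomic-function windows are not available in standard libraries): compress a linear FM chirp with rectangular and Hamming weighting and compare the resulting peak side-lobe levels.

    # Sketch: side-lobe reduction by windowing the matched filter of an LFM chirp.
    import numpy as np

    n = 256
    t = np.arange(n) / n
    chirp = np.exp(1j * np.pi * 80 * t**2)          # linear FM pulse (80-cycle sweep)

    for name, w in [("rectangular", np.ones(n)), ("Hamming", np.hamming(n))]:
        h = np.conj((chirp * w)[::-1])              # windowed matched filter
        out = np.abs(np.convolve(chirp, h))
        out /= out.max()
        peak = out.argmax()
        side = np.delete(out, np.arange(peak - 3, peak + 4))   # mask the main lobe
        print(f"{name}: peak side-lobe about {20 * np.log10(side.max()):.1f} dB")

The rectangular window gives the familiar roughly -13 dB first side-lobe of an LFM pulse, while the Hamming weight trades a slightly wider main lobe for much lower side-lobes.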
Scanning transmission electron microscopy through-focal tilt-series on biological specimens.
Trepout, Sylvain; Messaoudi, Cédric; Perrot, Sylvie; Bastin, Philippe; Marco, Sergio
2015-10-01
Since scanning transmission electron microscopy can produce high signal-to-noise ratio bright-field images of thick (≥500 nm) specimens, this tool is emerging as the method of choice for studying thick biological samples via tomographic approaches. However, in a convergent-beam configuration, the depth of field is limited because only a thin portion of the specimen (from a few nanometres to tens of nanometres, depending on the convergence angle) can be imaged in focus. A method known as through-focal imaging enables recovery of the full depth of information by combining images acquired at different levels of focus. In this work, we compare tomographic reconstruction from the through-focal tilt-series approach (a multifocal series of images per tilt angle) with reconstruction from the classic tilt-series acquisition scheme (one single-focus image per tilt angle). We visualised the base of the flagellum in the protist Trypanosoma brucei via an acquisition and image-processing method tailored to obtain quantitative and qualitative descriptors of the reconstructed volumes. Reconstructions using through-focal imaging contained more contrast and more detail for thick (≥500 nm) biological samples. Copyright © 2015 Elsevier Ltd. All rights reserved.
LSB-based Steganography Using Reflected Gray Code for Color Quantum Images
NASA Astrophysics Data System (ADS)
Li, Panchi; Lu, Aiping
2018-02-01
At present, classical least-significant-bit (LSB) based image steganography has been extended to quantum image processing. For the existing LSB-based quantum image steganography schemes, the embedding capacity is no more than 3 bits per pixel. Therefore, it is meaningful to study how to improve the embedding capacity of quantum image steganography. This work presents a novel LSB-based steganography using the reflected Gray code for color quantum images, with an embedding capacity of up to 4 bits per pixel. In the proposed scheme, the secret qubit sequence is considered as a sequence of 4-bit segments. For the four bits in each segment, the first bit is embedded in the second LSB of the B channel of the cover image, and the remaining three bits are embedded in the LSBs of the RGB channels of each color pixel simultaneously, using the reflected Gray code to determine the embedded bits from the secret information. Under this transformation rule, the LSBs of the stego-image are not always the same as the secret bits, and the differences amount to almost 50%. Experimental results confirm that the proposed scheme shows good performance and outperforms previous schemes in the literature in terms of embedding capacity.
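The reflected Gray code at the heart of the embedding rule is easy to state in code. The sketch below converts 3-bit values (the payload of the three RGB LSBs) to and from the reflected Gray code; it illustrates the coding only, not the quantum circuit.

    # Sketch: reflected (binary) Gray code used to map secret bits to LSB patterns.
    def to_gray(n: int) -> int:
        return n ^ (n >> 1)

    def from_gray(g: int) -> int:
        n = 0
        while g:
            n ^= g
            g >>= 1
        return n

    for value in range(8):                 # the 3 bits carried by the RGB LSBs
        g = to_gray(value)
        assert from_gray(g) == value       # the mapping is invertible
        print(f"{value:03b} -> {g:03b}")   # adjacent codes differ in exactly one bit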
The Müller-Lyer Illusion in a Computational Model of Biological Object Recognition
Zeman, Astrid; Obst, Oliver; Brooks, Kevin R.; Rich, Anina N.
2013-01-01
Studying illusions provides insight into the way the brain processes information. The Müller-Lyer Illusion (MLI) is a classical geometrical illusion of size, in which perceived line length is decreased by arrowheads and increased by arrowtails. Many theories have been put forward to explain the MLI, such as misapplied size constancy scaling, the statistics of image-source relationships and the filtering properties of signal processing in primary visual areas. Artificial models of the ventral visual processing stream allow us to isolate factors hypothesised to cause the illusion and test how these affect classification performance. We trained a feed-forward feature hierarchical model, HMAX, to perform a dual category line length judgment task (short versus long) with over 90% accuracy. We then tested the system in its ability to judge relative line lengths for images in a control set versus images that induce the MLI in humans. Results from the computational model show an overall illusory effect similar to that experienced by human subjects. No natural images were used for training, implying that misapplied size constancy and image-source statistics are not necessary factors for generating the illusion. A post-hoc analysis of response weights within a representative trained network ruled out the possibility that the illusion is caused by a reliance on information at low spatial frequencies. Our results suggest that the MLI can be produced using only feed-forward, neurophysiological connections. PMID:23457510
Orientational analysis of planar fibre systems observed as a Poisson shot-noise process.
Kärkkäinen, Salme; Lantuéjoul, Christian
2007-10-01
We consider two-dimensional fibrous materials observed as a digital greyscale image. The problem addressed is to estimate the orientation distribution of unobservable thin fibres from a greyscale image modelled by a planar Poisson shot-noise process. The classical stereological approach is not straightforward, because the point intensities of thin fibres along sampling lines may not be observable. For such cases, Kärkkäinen et al. (2001) suggested the use of scaled variograms determined from grey values along sampling lines in several directions. Their method is based on the assumption that the proportion between the scaled variograms and point intensities in all directions of sampling lines is constant. This assumption is proved to be valid asymptotically for Boolean models and dead leaves models, under some regularity conditions. In this work, we derive the scaled variogram and its approximations for a planar Poisson shot-noise process using the modified Bessel function. In the case of reasonably high resolution of the observed image, the scaled variogram has an approximate functional relation to the point intensity, and in the case of high resolution the relation is proportional. As the obtained relations are approximate, they are tested on simulations. The existing orientation analysis method based on the proportional relation is further tested on images with different resolutions. The new result, the asymptotic proportionality between the scaled variograms and the point intensities for a Poisson shot-noise process, complements the earlier results for the Boolean models and for the dead leaves models.
TU-AB-207-01: Introduction to Tomosynthesis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sechopoulos, I.
2015-06-15
Digital Tomosynthesis (DT) is becoming increasingly common in breast imaging and many other applications. DT is a form of computed tomography in which a limited set of projection images are acquired over a small angular range and reconstructed into a tomographic data set. The angular range and number of projections is determined both by the imaging task and equipment manufacturer. For example, in breast imaging between 9 and 25 projections are acquired over a range of 15° to 60°. It is equally valid to treat DT as the digital analog of classical tomography - for example, linear tomography. In fact, the name “tomosynthesis” is an acronym for “synthetic tomography”. DT shares many common features with classical tomography, including the radiographic appearance, dose, and image quality considerations. As such, both the science and practical physics of DT systems is a hybrid between CT and classical tomographic methods. This lecture will consist of three presentations that will provide a complete overview of DT, including a review of the fundamentals of DT, a discussion of testing methods for DT systems, and a description of the clinical applications of DT. While digital breast tomosynthesis will be emphasized, analogies will be drawn to body imaging to illustrate and compare tomosynthesis methods. Learning Objectives: To understand the fundamental principles behind tomosynthesis, including the determinants of image quality and dose. To learn how to test the performance of tomosynthesis imaging systems. To appreciate the uses of tomosynthesis in the clinic and the future applications of tomosynthesis.
TU-AB-207-03: Tomosynthesis: Clinical Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maidment, A.
2015-06-15
TU-AB-207-00: Digital Tomosynthesis
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
2015-06-15
TU-AB-207-02: Testing of Body and Breast Tomosynthesis Sytems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, A.
2015-06-15
NASA Astrophysics Data System (ADS)
Dahms, Rainer N.; Oefelein, Joseph C.
2013-09-01
A theory that explains the operating pressures where liquid injection processes transition from exhibiting classical two-phase spray atomization phenomena to single-phase diffusion-dominated mixing is presented. Imaging from a variety of experiments has long shown that under certain conditions, typically when the pressure of the working fluid exceeds the thermodynamic critical pressure of the liquid phase, the presence of discrete two-phase flow processes becomes diminished. Instead, the classical gas-liquid interface is replaced by diffusion-dominated mixing. When and how this transition occurs, however, is not well understood. Modern theory still lacks a physically based model to quantify this transition and the precise mechanisms that lead to it. In this paper, we derive a new model that explains how the transition occurs in multicomponent fluids and present a detailed analysis to quantify it. The model applies a detailed property evaluation scheme based on a modified 32-term Benedict-Webb-Rubin equation of state that accounts for the relevant real-fluid thermodynamic and transport properties of the multicomponent system. This framework is combined with Linear Gradient Theory, which describes the detailed molecular structure of the vapor-liquid interface region. Our analysis reveals that the two-phase interface breaks down not necessarily due to vanishing surface tension forces, but due to thickened interfaces at high subcritical temperatures coupled with an inherent reduction of the mean free molecular path. At a certain point, the combination of reduced surface tension, the thicker interface, and reduced mean free molecular path enters the continuum length scale regime. When this occurs, inter-molecular forces approach those of the multicomponent continuum, where transport processes dominate across the interfacial region. This leads to a continuous phase transition from compressed liquid to supercritical mixture states. Based on this theory, a regime diagram for liquid injection is developed that quantifies the conditions under which classical sprays transition to dense-fluid jets. It is shown that the chamber pressure required to support diffusion-dominated mixing dynamics depends on the composition and temperature of the injected liquid and ambient gas. To illustrate the method and analysis, we use conditions typical of diesel engine injection. We also present a companion set of high-speed images to provide experimental validation of the presented theory. The basic theory is quite general and applies to a wide range of modern propulsion and power systems such as liquid rockets, gas turbines, and reciprocating engines. Interestingly, the regime diagram associated with diesel engine injection suggests that classical spray phenomena at typical injection conditions do not occur.
Shapes of rotating superfluid helium nanodroplets
Bernando, Charles; Tanyag, Rico Mayro P.; Jones, Curtis; ...
2017-02-16
Rotating superfluid He droplets of approximately 1 μm in diameter were obtained in a free nozzle beam expansion of liquid He in vacuum and were studied by single-shot coherent diffractive imaging using an x-ray free electron laser. The formation of strongly deformed droplets is evidenced by large anisotropies and intensity anomalies (streaks) in the obtained diffraction images. The analysis of the images shows that in addition to previously described axially symmetric oblate shapes, some droplets exhibit prolate shapes. Forward modeling of the diffraction images indicates that the shapes of rotating superfluid droplets are very similar to their classical counterparts, giving direct access to the droplet angular momenta and angular velocities. Here, the analyses of the radial intensity distribution and appearance statistics of the anisotropic images confirm the existence of oblate metastable superfluid droplets with large angular momenta beyond the classical bifurcation threshold.
Oechslin, Mathias S; Gschwind, Markus; James, Clara E
2018-04-01
As a functional homolog of left-hemispheric syntax processing in language, neuroimaging studies have evidenced the involvement of right prefrontal regions in musical syntax processing, whose underlying white matter connectivity has so far remained unexplored. In the current experiment, we investigated the underlying pathway architecture in subjects with 3 levels of musical expertise. Employing diffusion tensor imaging tractography, departing from seeds from our previous functional magnetic resonance imaging study on music syntax processing in the same participants, we identified a pathway in the right ventral stream that connects the middle temporal lobe with the inferior frontal cortex via the extreme capsule and corresponds to the left-hemisphere ventral stream classically attributed to syntax processing in language comprehension. Additional morphometric consistency analyses allowed dissociating the tract core from more dispersed fiber portions. Musical expertise was related to higher tract consistency of the right ventral stream pathway. Specifically, tract consistency in this pathway predicted sensitivity to musical syntax violations. We conclude that enduring musical practice sculpts ventral stream architecture. Our results suggest that training-related pathway plasticity facilitates information transfer in the right-hemisphere ventral stream, supporting improved sound-to-meaning mapping in music.
A Fully Automated Method to Detect and Segment a Manufactured Object in an Underwater Color Image
NASA Astrophysics Data System (ADS)
Barat, Christian; Phlypo, Ronald
2010-12-01
We propose a fully automated active contours-based method for the detection and the segmentation of a moored manufactured object in an underwater image. Detection of objects in underwater images is difficult due to the variable lighting conditions and shadows on the object. The proposed technique is based on the information contained in the color maps and uses the visual attention method, combined with a statistical approach for the detection and an active contour for the segmentation of the object to overcome the above problems. In the classical active contour method the region descriptor is fixed and the convergence of the method depends on the initialization. With our approach, this dependence is overcome with an initialization using the visual attention results and a criterion to select the best region descriptor. This approach improves the convergence and the processing time while providing the advantages of a fully automated method.
LensFlow: A Convolutional Neural Network in Search of Strong Gravitational Lenses
NASA Astrophysics Data System (ADS)
Pourrahmani, Milad; Nayyeri, Hooshang; Cooray, Asantha
2018-03-01
In this work, we present our machine learning classification algorithm for identifying strong gravitational lenses from wide-area surveys using convolutional neural networks: LensFlow. We train and test the algorithm using a wide variety of strong gravitational lens configurations from simulations of lensing events. Images are processed through multiple convolutional layers that extract feature maps necessary to assign a lens probability to each image. LensFlow provides a ranking scheme for all sources that could be used to identify potential gravitational lens candidates by significantly reducing the number of images that have to be visually inspected. We apply our algorithm to the HST/ACS i-band observations of the COSMOS field and present our sample of identified lensing candidates. The developed machine learning algorithm is computationally more efficient than, and complementary to, classical lens identification algorithms, and is ideal for discovering such events across wide areas in current and future surveys such as LSST and WFIRST.
Adipose Tissue Quantification by Imaging Methods: A Proposed Classification
Shen, Wei; Wang, ZiMian; Punyanita, Mark; Lei, Jianbo; Sinav, Ahmet; Kral, John G.; Imielinska, Celina; Ross, Robert; Heymsfield, Steven B.
2007-01-01
Recent advances in imaging techniques and understanding of differences in the molecular biology of adipose tissue have rendered classical anatomy obsolete, requiring a new classification of the topography of adipose tissue. Adipose tissue is one of the largest body compartments, yet a classification that defines specific adipose tissue depots based on their anatomic location and related functions is lacking. The absence of an accepted taxonomy poses problems for investigators studying adipose tissue topography and its functional correlates. The aim of this review was to critically examine the literature on imaging of whole body and regional adipose tissue and to create the first systematic classification of adipose tissue topography. Adipose tissue terminology was examined in over 100 original publications. Our analysis revealed inconsistencies in the use of specific definitions, especially for the compartment termed “visceral” adipose tissue. This analysis leads us to propose an updated classification of total body and regional adipose tissue, providing a well-defined basis for correlating imaging studies of specific adipose tissue depots with molecular processes. PMID:12529479
Illumination invariant feature point matching for high-resolution planetary remote sensing images
NASA Astrophysics Data System (ADS)
Wu, Bo; Zeng, Hai; Hu, Han
2018-03-01
Despite its success with regular close-range and remote-sensing images, the scale-invariant feature transform (SIFT) algorithm is essentially not invariant to illumination differences due to the use of gradients for feature description. In planetary remote sensing imagery, which normally lacks sufficient textural information, salient regions are generally triggered by the shadow effects of keypoints, reducing the matching performance of classical SIFT. Based on the observation of dual peaks in the histogram of the dominant orientations of SIFT keypoints, this paper proposes an illumination-invariant SIFT matching method for high-resolution planetary remote sensing images. First, as the peaks in the orientation histogram are generally aligned closely with the sub-solar azimuth angle at the time of image collection, an adaptive suppression Gaussian function is tuned to level the histogram and thereby alleviate the differences in illumination caused by a changing solar angle. Next, the suppression function is incorporated into the original SIFT procedure for obtaining feature descriptors, which are used for initial image matching. Finally, as the distribution of feature descriptors changes after anisotropic suppression, and the ratio check used for matching and outlier removal in classical SIFT may produce inferior results, this paper proposes an improved matching procedure based on cross-checking and template image matching. The experimental results for several high-resolution remote sensing images from both the Moon and Mars, with illumination differences of 20°-180°, reveal that the proposed method retrieves about 40%-60% more matches than the classical SIFT method. The proposed method is of significance for matching or co-registration of planetary remote sensing images for their synergistic use in various applications. It also has the potential to be useful for flyby and rover images when integrated with affine-invariant feature detectors.
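To make the histogram-leveling step concrete, here is a minimal sketch that builds a 36-bin dominant-orientation histogram and down-weights keypoints near its peak with an inverted Gaussian. The function name, bin count, and the exact form of the suppression are illustrative assumptions; the paper tunes its suppression function adaptively to the sub-solar azimuth.

```python
import numpy as np

def suppress_orientation_peaks(angles_deg, n_bins=36, sigma_bins=2.0):
    """Down-weight keypoints whose dominant orientation sits near the
    histogram peak (assumed illumination-induced), leveling the histogram."""
    angles_deg = np.asarray(angles_deg, dtype=float)
    hist, edges = np.histogram(angles_deg, bins=n_bins, range=(0.0, 360.0))
    peak = int(np.argmax(hist))
    bins = np.arange(n_bins)
    # circular bin distance to the peak
    d = np.minimum(np.abs(bins - peak), n_bins - np.abs(bins - peak))
    depth = (hist[peak] - hist.mean()) / (hist[peak] + 1e-9)
    suppression = 1.0 - depth * np.exp(-0.5 * (d / sigma_bins) ** 2)
    idx = np.clip(np.digitize(angles_deg, edges) - 1, 0, n_bins - 1)
    return suppression[idx]  # per-keypoint weight in (0, 1]
```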
NASA Astrophysics Data System (ADS)
Mohammad, Fatimah; Ansari, Rashid; Shahidi, Mahnaz
2013-03-01
The visibility and continuity of the inner segment outer segment (ISOS) junction layer of the photoreceptors on spectral domain optical coherence tomography images is known to be related to visual acuity in patients with age-related macular degeneration (AMD). Automatic detection and segmentation of lesions and pathologies in retinal images is crucial for the screening, diagnosis, and follow-up of patients with retinal diseases. One of the challenges of using classical level-set algorithms for segmentation is the placement of the initial contour: manually defining the contour, or placing it randomly in the image, may lead to segmentation of erroneous structures. It is important to be able to define the contour automatically using information provided by image features. We explored a level-set method based on the classical Chan-Vese model that utilizes image feature information for automatic contour placement, for the segmentation of pathologies in fluorescein angiograms and en face retinal images of the ISOS layer. This was accomplished by exploiting a priori knowledge of the shape and intensity distribution, allowing the use of projection profiles to detect the presence of pathologies that are characterized by intensity differences with surrounding areas in retinal images. We first tested our method by applying it to fluorescein angiograms. We then applied it to en face retinal images of patients with AMD. The experimental results demonstrate that the proposed method provided a quick and improved outcome compared to the classical Chan-Vese method with a randomly placed initial contour, indicating the potential to provide a more accurate and detailed view of changes in pathologies due to disease progression and treatment.
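A minimal sketch of the projection-profile initialization described above, assuming a grayscale float image and scikit-image's `chan_vese` as the level-set solver; the helper name and deviation threshold are illustrative choices, not the authors' code.

```python
import numpy as np
from skimage.segmentation import chan_vese  # pip install scikit-image

def profile_init_level_set(img, frac=0.5):
    """Build an initial level set from row/column projection profiles.

    Pathologies that differ in intensity from their surroundings pull
    the profiles away from their medians; the crossing region gives a
    data-driven starting box instead of a random initial contour.
    """
    rows, cols = img.mean(axis=1), img.mean(axis=0)
    r_sel = np.abs(rows - np.median(rows)) > frac * rows.std()
    c_sel = np.abs(cols - np.median(cols)) > frac * cols.std()
    init = np.zeros_like(img, dtype=float)
    if r_sel.any() and c_sel.any():
        init[np.ix_(r_sel, c_sel)] = 1.0
    else:  # fall back to a centered box
        h, w = img.shape
        init[h // 4: 3 * h // 4, w // 4: 3 * w // 4] = 1.0
    return init

# seg = chan_vese(img, init_level_set=profile_init_level_set(img))
```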
Quantum-optical coherence tomography with classical light.
Lavoie, J; Kaltenbaek, R; Resch, K J
2009-03-02
Quantum-optical coherence tomography (Q-OCT) is an interferometric technique for axial imaging offering several advantages over conventional methods. Chirped-pulse interferometry (CPI) was recently demonstrated to exhibit all of the benefits of the quantum interferometer upon which Q-OCT is based. Here we use CPI to measure axial interferograms to profile a sample, retaining the important benefits of Q-OCT, including automatic dispersion cancellation, but with 10 million times higher signal. Our technique solves the artifact problem in Q-OCT and highlights the power of classical correlation in optical imaging.
Seismic imaging: From classical to adjoint tomography
NASA Astrophysics Data System (ADS)
Liu, Q.; Gu, Y. J.
2012-09-01
Seismic tomography has been a vital tool in probing the Earth's internal structure and enhancing our knowledge of dynamical processes in the Earth's crust and mantle. While various tomographic techniques differ in data types utilized (e.g., body vs. surface waves), data sensitivity (ray vs. finite-frequency approximations), and choices of model parameterization and regularization, most global mantle tomographic models agree well at long wavelengths, owing to the presence and typical dimensions of cold subducted oceanic lithospheres and hot, ascending mantle plumes (e.g., in central Pacific and Africa). Structures at relatively small length scales remain controversial, though, as will be discussed in this paper, they are becoming increasingly resolvable with the fast expanding global and regional seismic networks and improved forward modeling and inversion techniques. This review paper aims to provide an overview of classical tomography methods, key debates pertaining to the resolution of mantle tomographic models, as well as to highlight recent theoretical and computational advances in forward-modeling methods that spearheaded the developments in accurate computation of sensitivity kernels and adjoint tomography. The first part of the paper is devoted to traditional traveltime and waveform tomography. While these approaches established a firm foundation for global and regional seismic tomography, data coverage and the use of approximate sensitivity kernels remained as key limiting factors in the resolution of the targeted structures. In comparison to classical tomography, adjoint tomography takes advantage of full 3D numerical simulations in forward modeling and, in many ways, revolutionizes the seismic imaging of heterogeneous structures with strong velocity contrasts. For this reason, this review provides details of the implementation, resolution and potential challenges of adjoint tomography. Further discussions of techniques that are presently popular in seismic array analysis, such as noise correlation functions, receiver functions, inverse scattering imaging, and the adaptation of adjoint tomography to these different datasets highlight the promising future of seismic tomography.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vaghmare, Kaustubh; Kembhavi, Ajit; Barway, Sudhanshu, E-mail: kaustubh@iucaa.ernet.in, E-mail: akk@iucaa.ernet.in, E-mail: barway@saao.ac.za
In this Letter, we present a systematic study of lenticular (S0) galaxies based on mid-infrared imaging data on 185 objects taken using the Spitzer Infrared Array Camera. We identify the S0s hosting pseudobulges based on the position of the bulge on the Kormendy diagram and the Sersic index of the bulge. We find that pseudobulges preferentially occur in the fainter luminosity class (defined as having total K-band absolute magnitude M_K fainter than -22.66 in the AB system). We present relations between bulge and disk parameters obtained as a function of the bulge type. The disks in the pseudobulge hosting galaxies are found to have distinct trends on the r_e-r_d and μ_d(0)-r_d correlations compared to those in galaxies with classical bulges. We show that the disks of pseudobulge hosts possess on average a smaller scale length and have a fainter central surface brightness than their counterparts occurring in classical bulge hosting galaxies. The differences found for disks in pseudobulge and classical bulge hosting galaxies may be a consequence of the different processes creating the central mass concentrations.
NASA Astrophysics Data System (ADS)
Riveros, H. G.; Rosenberger, Franz
2012-05-01
This article discusses two 'magic tricks' in terms of underlying optical principles. The first trick is new and produces a 'ghost' in the air, and the second is the classical real image produced with two parabolic mirrors.
Bravo, Ignacio; Mazo, Manuel; Lázaro, José L.; Gardel, Alfredo; Jiménez, Pedro; Pizarro, Daniel
2010-01-01
This paper presents a complete implementation of the Principal Component Analysis (PCA) algorithm in Field Programmable Gate Array (FPGA) devices applied to high rate background segmentation of images. The classical sequential execution of different parts of the PCA algorithm has been parallelized. This parallelization has led to the specific development and implementation in hardware of the different stages of PCA, such as computation of the correlation matrix, matrix diagonalization using the Jacobi method and subspace projections of images. On the application side, the paper presents a motion detection algorithm, also entirely implemented on the FPGA, and based on the developed PCA core. This consists of dynamically thresholding the differences between the input image and the one obtained by expressing the input image in the PCA linear subspace previously obtained as a background model. The proposal achieves a high processing rate (up to 120 frames per second) and high quality segmentation results, with a completely embedded and reliable hardware architecture based on commercial CMOS sensors and FPGA devices. PMID:22163406
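The core of this "eigenbackground" pipeline can be sketched in a few lines of NumPy: learn a PCA subspace from background frames, then flag pixels whose reconstruction from that subspace differs from the input by more than a threshold. This is a software stand-in for the FPGA stages (correlation matrix, Jacobi diagonalization, projection) described above; the names and the threshold are illustrative.

```python
import numpy as np

def fit_background_model(frames, k=8):
    """Learn a PCA background subspace from (n, h, w) training frames."""
    n = frames.shape[0]
    X = frames.reshape(n, -1).astype(float)
    mean = X.mean(axis=0)
    # SVD of the centered data gives the principal components directly
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]                      # (h*w,), (k, h*w)

def detect_motion(frame, mean, basis, thresh=25.0):
    """Flag pixels poorly explained by the PCA background model."""
    x = frame.reshape(-1).astype(float) - mean
    recon = basis.T @ (basis @ x)            # projection onto the subspace
    return np.abs(x - recon).reshape(frame.shape) > thresh
```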
Face recognition via sparse representation of SIFT feature on hexagonal-sampling image
NASA Astrophysics Data System (ADS)
Zhang, Daming; Zhang, Xueyong; Li, Lu; Liu, Huayong
2018-04-01
This paper investigates a face recognition approach based on the Scale Invariant Feature Transform (SIFT) and sparse representation. The approach takes advantage of SIFT, a local feature, rather than the holistic features used in the classical Sparse Representation based Classification (SRC) algorithm, and possesses strong robustness to expression, pose and illumination variations. Since hexagonal images have inherent merits over square images that make the recognition process more efficient, we extract SIFT keypoints from hexagonally sampled images. Instead of matching SIFT features directly, the sparse representation of each SIFT keypoint is first computed over a constructed dictionary; these sparse vectors are then quantized according to the dictionary; finally, each face image is represented by a histogram, and these so-called Bag-of-Words vectors are classified by an SVM. Owing to the use of local features, the proposed method achieves good results even when the number of training samples is small. In the experiments, the proposed method achieved higher face recognition rates than other methods on the ORL and Yale B face databases; the effectiveness of hexagonal sampling in the proposed method is also verified.
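The descriptor-to-histogram stage of such a pipeline can be pictured with the simplified sketch below, which replaces the paper's sparse-coding quantization with hard k-means assignment; `cv2.SIFT_create` (OpenCV 4.4+) and square-sampled grayscale images are assumed, and hexagonal sampling is omitted.

```python
import cv2                             # OpenCV >= 4.4 for SIFT_create
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def bow_histograms(images, n_words=128):
    """SIFT descriptors -> k-means vocabulary -> per-image word histograms."""
    sift = cv2.SIFT_create()
    per_image = []
    for img in images:                 # grayscale uint8 arrays
        _, desc = sift.detectAndCompute(img, None)
        per_image.append(desc if desc is not None
                         else np.zeros((0, 128), np.float32))
    vocab = KMeans(n_clusters=n_words, n_init=4).fit(np.vstack(per_image))
    hists = np.array([np.bincount(vocab.predict(d), minlength=n_words)
                      if len(d) else np.zeros(n_words) for d in per_image],
                     dtype=float)
    hists /= np.maximum(hists.sum(axis=1, keepdims=True), 1)  # L1-normalize
    return hists, vocab

# clf = SVC(kernel="linear").fit(train_hists, train_labels)
```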
Designed tools for analysis of lithography patterns and nanostructures
NASA Astrophysics Data System (ADS)
Dervillé, Alexandre; Baderot, Julien; Bernard, Guilhem; Foucher, Johann; Grönqvist, Hanna; Labrosse, Aurélien; Martinez, Sergio; Zimmermann, Yann
2017-03-01
We introduce a set of designed tools for the analysis of lithography patterns and nanostructures. The classical metrological analysis of these objects has the drawbacks of being time consuming, requiring manual tuning, and lacking robustness and user friendliness. With the goal of improving the current situation, we propose new image processing tools at different levels: semi-automatic, automatic, and machine-learning-enhanced tools. The complete set of tools has been integrated into a software platform designed to transform the lab into a virtual fab. The underlying idea is to master nano processes at the research and development level by accelerating access to knowledge and hence speeding up implementation in product lines.
Liu, Xiaozheng; Yuan, Zhenming; Zhu, Junming; Xu, Dongrong
2013-12-07
The demons algorithm is a popular algorithm for non-rigid image registration because of its computational efficiency and simple implementation. The deformation forces of the classic demons algorithm were derived from image gradients by considering the deformation to decrease the intensity dissimilarity between images. However, methods using the difference of image intensity for medical image registration are easily affected by image artifacts, such as image noise, non-uniform imaging and partial volume effects. The gradient magnitude image is constructed from the local information of an image, so differences in gradient magnitude can be regarded as more reliable and robust to these artifacts. Registering medical images by considering the differences in both image intensity and gradient magnitude is therefore a straightforward choice. In this paper, based on a diffeomorphic demons algorithm, we propose a chain-type diffeomorphic demons algorithm that combines the differences in both image intensity and gradient magnitude for medical image registration. Previous work has shown that the classic demons algorithm can be considered an approximation of a second-order gradient descent on the sum of the squared intensity differences. By optimizing the new dissimilarity criteria, we also present a set of new demons forces derived from the gradients of the image and the gradient magnitude image. We show, in controlled experiments, that this advantage is confirmed and yields fast convergence.
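A minimal, non-diffeomorphic sketch of one such update follows, combining the classic intensity-based demons force with the same force computed on gradient-magnitude images (weighted by `alpha`); the chain-type composition and the diffeomorphic machinery of the paper are omitted, and all names and parameters are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_gradient_magnitude

def demons_step(fixed, moving, sigma=2.0, alpha=0.5):
    """One demons update combining intensity and gradient-magnitude forces.

    Classic force: (m - f) grad(f) / (|grad f|^2 + (m - f)^2); here the
    same force computed on gradient-magnitude images is added with
    weight alpha, and the field is Gaussian-smoothed for regularity.
    """
    def force(f, m):
        diff = m - f
        gy, gx = np.gradient(f)
        denom = gx ** 2 + gy ** 2 + diff ** 2 + 1e-9
        return diff * gx / denom, diff * gy / denom

    fm = gaussian_gradient_magnitude(fixed, 1.0)
    mm = gaussian_gradient_magnitude(moving, 1.0)
    ux1, uy1 = force(fixed, moving)
    ux2, uy2 = force(fm, mm)
    ux, uy = ux1 + alpha * ux2, uy1 + alpha * uy2
    return gaussian_filter(ux, sigma), gaussian_filter(uy, sigma)
```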
The Golden Beauty: Brain Response to Classical and Renaissance Sculptures
Di Dio, Cinzia; Macaluso, Emiliano; Rizzolatti, Giacomo
2007-01-01
Is there an objective, biological basis for the experience of beauty in art? Or is aesthetic experience entirely subjective? Using fMRI technique, we addressed this question by presenting viewers, naïve to art criticism, with images of masterpieces of Classical and Renaissance sculpture. Employing proportion as the independent variable, we produced two sets of stimuli: one composed of images of original sculptures; the other of a modified version of the same images. The stimuli were presented in three conditions: observation, aesthetic judgment, and proportion judgment. In the observation condition, the viewers were required to observe the images with the same mind-set as if they were in a museum. In the other two conditions they were required to give an aesthetic or proportion judgment on the same images. Two types of analyses were carried out: one which contrasted brain response to the canonical and the modified sculptures, and one which contrasted beautiful vs. ugly sculptures as judged by each volunteer. The most striking result was that the observation of original sculptures, relative to the modified ones, produced activation of the right insula as well as of some lateral and medial cortical areas (lateral occipital gyrus, precuneus and prefrontal areas). The activation of the insula was particularly strong during the observation condition. Most interestingly, when volunteers were required to give an overt aesthetic judgment, the images judged as beautiful selectively activated the right amygdala, relative to those judged as ugly. We conclude that, in observers naïve to art criticism, the sense of beauty is mediated by two non-mutually exclusive processes: one based on a joint activation of sets of cortical neurons, triggered by parameters intrinsic to the stimuli, and the insula (objective beauty); the other based on the activation of the amygdala, driven by one's own emotional experiences (subjective beauty). PMID:18030335
Using Fractal And Morphological Criteria For Automatic Classification Of Lung Diseases
NASA Astrophysics Data System (ADS)
Vehel, Jacques Levy
1989-11-01
Medical images are difficult to analyze by means of classical image processing tools because they are very complex and irregular. Such shapes are obtained for instance in Nuclear Medicine with the spatial distribution of activity for organs such as the lungs, liver, and heart. We have applied two different theories to these signals: Fractal Geometry, which deals with the analysis of complex irregular shapes that cannot be well described by classical Euclidean geometry, and Integral Geometry, which treats sets globally and allows the introduction of robust measures. We computed three parameters on three kinds of lung SPECT images (normal, pulmonary embolism, and chronic disease): the commonly used fractal dimension (FD), which gives a measurement of the irregularity of the 3D shape; the generalized lacunarity dimension (GLD), defined as the variance of the ratio of the local activity to the mean activity, which is sensitive only to the distribution and size of gaps in the surface; and the Favard length, which gives an approximation of the surface of a 3D shape. The results show that each slice of the lung, considered as a 3D surface, is fractal and that the fractal dimension is the same for each slice and for the three kinds of lungs; the lacunarity and Favard length, in contrast, are clearly different for normal lungs, pulmonary embolisms and chronic diseases. These results indicate that automatic classification of lung SPECT images can be achieved, and that a quantitative measurement of the evolution of the disease could be made.
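Both measures are straightforward to approximate numerically. Below, a box-counting estimate of the fractal dimension and a coarse lacunarity (variance of the local-to-global activity ratio over fixed boxes); the box sizes and the binary/activity split are illustrative assumptions, not the paper's exact estimators.

```python
import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal dimension of a binary mask by box counting."""
    counts = []
    for s in sizes:
        h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
        blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope                      # N(s) ~ s^-D

def lacunarity(activity, s=8):
    """Variance of (local mean / global mean) over s x s boxes."""
    h, w = (activity.shape[0] // s) * s, (activity.shape[1] // s) * s
    boxes = activity[:h, :w].reshape(h // s, s, w // s, s).mean(axis=(1, 3))
    return np.var(boxes / (activity.mean() + 1e-9))
```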
Fluorescence Time-lapse Imaging of the Complete S. venezuelae Life Cycle Using a Microfluidic Device
Schlimpert, Susan; Flärdh, Klas; Buttner, Mark
2016-01-01
Live-cell imaging of biological processes at the single cell level has been instrumental to our current understanding of the subcellular organization of bacterial cells. However, the application of time-lapse microscopy to study the cell biological processes underpinning development in the sporulating filamentous bacteria Streptomyces has been hampered by technical difficulties. Here we present a protocol to overcome these limitations by growing the new model species, Streptomyces venezuelae, in a commercially available microfluidic device which is connected to an inverted fluorescence widefield microscope. Unlike the classical model species, Streptomyces coelicolor, S. venezuelae sporulates in liquid, allowing the application of microfluidic growth chambers to cultivate and microscopically monitor the cellular development and differentiation of S. venezuelae over long time periods. In addition to monitoring morphological changes, the spatio-temporal distribution of fluorescently labeled target proteins can also be visualized by time-lapse microscopy. Moreover, the microfluidic platform offers the experimental flexibility to exchange the culture medium, which is used in the detailed protocol to stimulate sporulation of S. venezuelae in the microfluidic chamber. Images of the entire S. venezuelae life cycle are acquired at specific intervals and processed in the open-source software Fiji to produce movies of the recorded time-series. PMID:26967231
A robust embedded vision system feasible white balance algorithm
NASA Astrophysics Data System (ADS)
Wang, Yuan; Yu, Feihong
2018-01-01
White balance is a very important part of the color image processing pipeline. In order to meet the need for efficiency and accuracy in embedded machine vision systems, an efficient and robust white balance algorithm combining several classical ones is proposed. The proposed algorithm has three main parts. Firstly, to guarantee higher efficiency, an initial parameter calculated from the statistics of the R, G and B components of the raw data is used to initialize the following iterative method. After that, bilinear interpolation is used for the demosaicing step. Finally, an adaptive step-adjustment scheme is introduced to ensure the controllability and robustness of the algorithm. To verify the proposed algorithm's performance on an embedded vision system, a smart camera based on the IMX6 DualLite, IMX291 and XC6130 was designed. Extensive experiments on a large number of images under different color temperatures and exposure conditions illustrate that the proposed white balance algorithm effectively avoids color deviation, achieves a good balance between efficiency and quality, and is suitable for embedded machine vision systems.
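A minimal sketch of an iterative gray-world stage of this kind, assuming an 8-bit RGB input; the fixed-decay step here is a crude stand-in for the paper's adaptive step-adjustment scheme, the demosaicing stage is omitted, and all names and parameters are illustrative.

```python
import numpy as np

def iterative_gray_world(rgb, max_iter=20, tol=0.01, step=0.5):
    """Gray-world white balance with a shrinking (crudely adaptive) step."""
    img = rgb.astype(float)
    gains = np.ones(3)
    for _ in range(max_iter):
        means = img.reshape(-1, 3).mean(axis=0)
        err = means / (means[1] + 1e-9) - 1.0   # deviation from the G channel
        if np.abs(err[[0, 2]]).max() < tol:
            break
        gains[[0, 2]] *= 1.0 - step * err[[0, 2]]
        step *= 0.9                             # stand-in for the adaptive step
        img = rgb.astype(float) * gains
    return np.clip(img, 0, 255).astype(np.uint8), gains
```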
Orthonormal aberration polynomials for anamorphic optical imaging systems with rectangular pupils.
Mahajan, Virendra N
2010-12-20
The classical aberrations of an anamorphic optical imaging system, representing the terms of a power-series expansion of its aberration function, are separable in the Cartesian coordinates of a point on its pupil. We discuss the balancing of a classical aberration of a certain order with one or more such aberrations of lower order to minimize its variance across a rectangular pupil of such a system. We show that the balanced aberrations are the products of two Legendre polynomials, one for each of the two Cartesian coordinates of the pupil point. The compound Legendre polynomials are orthogonal across a rectangular pupil and, like the classical aberrations, are inherently separable in the Cartesian coordinates of the pupil point. They are different from the balanced aberrations and the corresponding orthogonal polynomials for a system with rotational symmetry but a rectangular pupil.
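The separability and orthogonality claims are easy to check numerically: a balanced aberration is a product L_m(x)·L_n(y) of Legendre polynomials, and inner products of distinct modes over a uniformly sampled rectangular pupil vanish up to quadrature error. A quick sketch using numpy.polynomial.legendre (normalization constants omitted; grid size is an arbitrary choice):

```python
import numpy as np
from numpy.polynomial import legendre as L

# Uniform sampling of the rectangular pupil on [-1, 1] x [-1, 1]
x = np.linspace(-1, 1, 201)
y = np.linspace(-1, 1, 201)
X, Y = np.meshgrid(x, y)

def mode(m, n):
    """Product aberration polynomial L_m(x) * L_n(y)."""
    cm = np.zeros(m + 1); cm[m] = 1
    cn = np.zeros(n + 1); cn[n] = 1
    return L.legval(X, cm) * L.legval(Y, cn)

# Inner products of distinct modes vanish (up to quadrature error)
inner = np.mean(mode(2, 1) * mode(3, 2))
print(f"<L2*L1, L3*L2> ~ {inner:.2e}")  # approximately 0
```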
Plenoptic imaging with second-order correlations of light
NASA Astrophysics Data System (ADS)
Pepe, Francesco V.; Scarcelli, Giuliano; Garuccio, Augusto; D'Angelo, Milena
2016-01-01
Plenoptic imaging is a promising optical modality that simultaneously captures the location and the propagation direction of light in order to enable three-dimensional imaging in a single shot. We demonstrate that it is possible to implement plenoptic imaging through second-order correlations of chaotic light, making it possible to overcome the typical limitations of classical plenoptic devices.
Edge detection for optical synthetic aperture based on deep neural network
NASA Astrophysics Data System (ADS)
Tan, Wenjie; Hui, Mei; Liu, Ming; Kong, Lingqin; Dong, Liquan; Zhao, Yuejin
2017-09-01
Synthetic aperture optics can meet the demands of next-generation space telescopes to be lighter, larger and foldable. However, the boundaries of segmented aperture systems are much more complex than those of a monolithic aperture. More edge regions mean more edge pixels in the image, which are often mixed and discretized. In order to achieve high-resolution imaging, it is necessary to identify the gaps between the sub-apertures and the edges of the projected fringes. In this work, we introduce deep neural networks into edge detection for optical synthetic aperture imaging. According to the detection needs, we constructed image sets from experiments and simulations. Based on MatConvNet, a MATLAB toolbox, we trained the neural network on the training image set and tested its performance on a validation set; training was stopped when the test error on the validation set stopped declining. Given an input image, the neighborhood around each pixel is fed into the trained multi-hidden-layer network, scanning the image pixel by pixel, and the network output judges whether the center of the input block lies on an edge of the fringes. We experimented with various pre-processing and post-processing techniques to reveal their influence on edge detection performance. Compared with traditional algorithms and their improvements, our method makes decisions over a much larger neighborhood and is more global and comprehensive. Experiments on more than 2,000 images are also reported to show that our method outperforms classical algorithms in edge detection on optical images.
Continuous quantum measurement and the quantum to classical transition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhattacharya, Tanmoy; Habib, Salman; Jacobs, Kurt
2003-04-01
While ultimately they are described by quantum mechanics, macroscopic mechanical systems are nevertheless observed to follow the trajectories predicted by classical mechanics. Hence, in the regime defining macroscopic physics, the trajectories of the correct classical motion must emerge from quantum mechanics, a process referred to as the quantum to classical transition. Extending previous work [Bhattacharya, Habib, and Jacobs, Phys. Rev. Lett. 85, 4852 (2000)], here we elucidate this transition in some detail, showing that once the measurement processes that affect all macroscopic systems are taken into account, quantum mechanics indeed predicts the emergence of classical motion. We derive inequalities that describe the parameter regime in which classical motion is obtained, and provide numerical examples. We also demonstrate two further important properties of the classical limit: first, that multiple observers all agree on the motion of an object, and second, that classical statistical inference may be used to correctly track the classical motion.
Functional renal imaging: new trends in radiology and nuclear medicine.
Durand, Emmanuel; Chaumet-Riffaud, Philippe; Grenier, Nicolas
2011-01-01
The objective of this work is to compare the characteristics of various techniques for functional renal imaging, with a focus on nuclear medicine and magnetic resonance imaging. Even with low spatial resolution and a rather poor signal-to-noise ratio, classical nuclear medicine has the advantages of linearity and good sensitivity. It remains the gold-standard technique for assessing relative renal function. Technetium-99m ((99m)Tc)-labeled diethylenetriamine penta-acetate remains the reference glomerular tracer. Tubular tracers have been improved: (123)I- or (131)I-hippuran, (99m)Tc-MAG3 and, recently, (99m)Tc-nitrilotriacetic acid. However, advances in molecular imaging have not produced a groundbreaking tracer. Renal magnetic resonance imaging with classical gadolinated tracers probably has potential in this domain but lacks linearity, and therefore its value still needs evaluation. Moreover, the advent of nephrogenic systemic fibrosis has delayed its expansion. Other developments, such as diffusion or blood oxygen level-dependent imaging, may have a role in the future. The other modalities have a limited role in clinical practice for functional renal imaging.
NASA Astrophysics Data System (ADS)
Marchand, Paul J.; Bouwens, Arno; Shamaei, Vincent; Nguyen, David; Extermann, Jerome; Bolmont, Tristan; Lasser, Theo
2016-03-01
Magnetic Resonance Imaging has revolutionised our understanding of brain function through its ability to image human cerebral structures non-invasively over the entire brain. By exploiting the different magnetic properties of oxygenated and deoxygenated blood, functional MRI can indirectly map areas undergoing neural activation. Alongside the development of fMRI, powerful statistical tools have been developed in an effort to shed light on the neural pathways involved in the processing of sensory and cognitive information. In spite of the major improvements made in fMRI technology, its spatial resolution of hundreds of microns prevents MRI from resolving and monitoring processes occurring at the cellular level. In this regard, Optical Coherence Microscopy (OCM) is ideal instrumentation, as it can image at high spatio-temporal resolution. Moreover, by measuring the mean and the width of the Doppler spectra of light scattered by moving particles, OCM allows extraction of the axial and lateral velocity components of red blood cells. The ability to assess total blood velocity quantitatively, as opposed to classical axial-velocity Doppler OCM, is of paramount importance in brain imaging, as a large proportion of the cortical vasculature is oriented perpendicularly to the optical axis. Here we combine quantitative blood flow imaging with extended-focus Optical Coherence Microscopy and Statistical Parametric Mapping tools to generate maps of stimulus-evoked cortical hemodynamics at the capillary level.
Lu, Yisu; Jiang, Jun; Yang, Wei; Feng, Qianjin; Chen, Wufan
2014-01-01
Brain-tumor segmentation is an important clinical requirement for brain-tumor diagnosis and radiotherapy planning. It is well known that the number of clusters is one of the most important parameters for automatic segmentation, yet it is difficult to define owing to the high diversity in appearance of tumor tissue among different patients and the ambiguous boundaries of lesions. In this study, a nonparametric mixture of Dirichlet process (MDP) model is applied to segment the tumor images, and the MDP segmentation can be performed without initializing the number of clusters. Because the classical MDP segmentation cannot be applied for real-time diagnosis, a new nonparametric segmentation algorithm combined with anisotropic diffusion and a Markov random field (MRF) smoothness constraint is proposed in this study. Besides the segmentation of single-modal brain-tumor images, we developed the algorithm to segment multimodal brain-tumor images using magnetic resonance (MR) multimodal features, obtaining the active tumor and the edema at the same time. The proposed algorithm is evaluated using 32 multimodal MR glioma image sequences, and the segmentation results are compared with other approaches. The accuracy and computation time of our algorithm demonstrate very impressive performance, showing great potential for practical real-time clinical use. PMID:25254064
Hartmann, Sébastien; Elsäßer, Wolfgang
2017-01-01
Initially, ghost imaging (GI) was demonstrated with entangled light from parametric down conversion. Later, classical light sources were introduced with the development of thermal light GI concepts. State-of-the-art classical GI light sources rely either on complex combinations of coherent light with spatially randomizing optical elements or on incoherent lamps with monochromating optics, which however suffer strong losses of efficiency and directionality. Here, a broad-area superluminescent diode is proposed as a new light source for classical ghost imaging. The coherence behavior of this spectrally broadband opto-electronic light source is investigated in detail. An interferometric two-photon detection technique is exploited to resolve the ultra-short correlation timescales. We thereby quantify the coherence time, the photon statistics and the number of spatial modes, revealing completely incoherent light behavior. With a one-dimensional proof-of-principle GI experiment, we introduce these compact emitters to the field; they could be beneficial for high-speed GI systems as well as for long-range GI sensing in future applications. PMID:28150737
Model Based Reconstruction of UT Array Data
NASA Astrophysics Data System (ADS)
Calmon, P.; Iakovleva, E.; Fidahoussen, A.; Ribay, G.; Chatillon, S.
2008-02-01
Beyond the detection of defects, their characterization (identification, positioning, sizing) is a goal of great importance often assigned to the analysis of NDT data. In ultrasonic testing, the first step of such analysis amounts to imaging the detected echoes within the part. This operation is generally achieved by considering times of flight and applying simplified algorithms which are often valid only in canonical situations. In this communication we present an overview of different imaging techniques studied at CEA LIST that are based on the exploitation of direct models, enable complex configurations to be addressed, and are available in the CIVA software platform. We discuss in particular ray-model-based algorithms, algorithms derived from classical synthetic focusing, and processing of the full inter-element matrix (MUSIC algorithm).
A DBN based anomaly targets detector for HSI
NASA Astrophysics Data System (ADS)
Ma, Ning; Wang, Shaojun; Yu, Jinxiang; Peng, Yu
2017-10-01
Traditional Mahalanobis-distance-based anomaly detectors assume that hyperspectral image (HSI) data conform to a Gaussian distribution and perform poorly when this assumption does not hold. To solve this problem, a deep learning based detector, the Deep Belief Network (DBN) anomaly detector (DBN-AD), is proposed to fit the unknown distribution of HSI data by energy modeling; the reconstruction errors of this encode-decode processing are used to discriminate the anomaly targets. Experiments are implemented on real and synthesized HSI datasets collected by the Airborne Visible Infra-Red Imaging Spectrometer (AVIRIS). Compared with classic anomaly detectors, the proposed method shows better performance: its Area Under the ROC Curve (AUC) is about 0.17 higher than that of the Reed-Xiaoli detector (RXD) and Kernel-RXD (K-RXD).
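To make the encode-decode idea concrete, a shallow autoencoder (an MLP trained to reproduce its input) can stand in for the DBN: pixels whose spectra reconstruct poorly are scored as anomaly candidates. This is a simplified stand-in under stated assumptions, not the paper's network, and all names and sizes are illustrative.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def reconstruction_anomaly_scores(hsi, hidden=32):
    """Score anomalies by per-pixel spectral reconstruction error.

    hsi: (rows, cols, bands). An MLP trained to reproduce its input
    (a shallow autoencoder) stands in for the DBN's encode-decode stage.
    """
    r, c, b = hsi.shape
    X = hsi.reshape(-1, b).astype(float)
    X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-9)
    ae = MLPRegressor(hidden_layer_sizes=(hidden,), max_iter=300,
                      random_state=0)
    ae.fit(X, X)   # background dominates, so the fit models the background
    err = np.linalg.norm(X - ae.predict(X), axis=1)
    return err.reshape(r, c)  # high values flag anomaly candidates
```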
An iterative approach to region growing using associative memories
NASA Technical Reports Server (NTRS)
Snyder, W. E.; Cowart, A.
1983-01-01
Region growing is often given as a classical example of the recursive control structures used in image processing, which are awkward to implement in hardware when the intent is segmentation of an image at raster-scan rates. It is addressed here in light of the postulate that any computation which can be performed recursively can be performed easily and efficiently by iteration coupled with association. Attention is given to an algorithm and hardware structure able to perform region labeling iteratively at scan rates. Every pixel is individually labeled with an identifier signifying the region to which it belongs. Difficulties otherwise requiring recursion are handled by maintaining an equivalence table in hardware, transparent to the computer that reads the labeled pixels. A simulation of the associative memory has demonstrated its effectiveness.
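In software, this iterate-and-associate scheme corresponds to classic two-pass labeling, where a union-find table plays the role of the hardware equivalence memory. A minimal sketch (function and variable names are illustrative):

```python
import numpy as np

def two_pass_label(binary):
    """Raster-scan region labeling with a software equivalence table.

    Pass 1 assigns provisional labels and records label equivalences
    (the associative-memory role); pass 2 rewrites each pixel with the
    root of its equivalence class.
    """
    labels = np.zeros(binary.shape, dtype=int)
    parent = [0]                       # parent[i]: equivalent earlier label

    def find(i):
        while parent[i] != i:
            i = parent[i]
        return i

    nxt = 1
    for y in range(binary.shape[0]):
        for x in range(binary.shape[1]):
            if not binary[y, x]:
                continue
            above = labels[y - 1, x] if y else 0
            left = labels[y, x - 1] if x else 0
            if above == 0 and left == 0:
                parent.append(nxt)     # new region, new provisional label
                labels[y, x] = nxt
                nxt += 1
            else:
                labels[y, x] = min(l for l in (above, left) if l)
                if above and left:     # record the equivalence
                    ra, rb = find(above), find(left)
                    if ra != rb:
                        parent[max(ra, rb)] = min(ra, rb)
    roots = np.array([find(i) for i in range(nxt)])
    return roots[labels]
```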
Two fast approximate wavelet algorithms for image processing, classification, and recognition
NASA Astrophysics Data System (ADS)
Wickerhauser, Mladen V.
1994-07-01
We use large libraries of template waveforms with remarkable orthogonality properties to recast the relatively complex principal orthogonal decomposition (POD) into an optimization problem with a fast solution algorithm. It then becomes practical to use POD to solve two related problems: recognizing or classifying images, and inverting a complicated map from a low-dimensional configuration space to a high-dimensional measurement space. When the number N of pixels or measurements exceeds 1000 or so, the classical O(N³) POD algorithm becomes very costly, but it can be replaced with an approximate best-basis method of complexity O(N² log N). A variation of POD can also be used to compute an approximate Jacobian for the complicated map.
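For reference, the classical side of that comparison is just a dense decomposition of the snapshot matrix; a minimal sketch of classical POD (the expensive baseline, not the paper's best-basis approximation):

```python
import numpy as np

def pod_basis(X, k):
    """Classical POD: top-k left singular vectors of the snapshot matrix.

    X: (N, n_snapshots). The dense decomposition is the O(N^3)-class
    step that the approximate best-basis method is designed to avoid.
    """
    U, _, _ = np.linalg.svd(X - X.mean(axis=1, keepdims=True),
                            full_matrices=False)
    return U[:, :k]
```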
Differential dynamic microscopy to characterize Brownian motion and bacteria motility
NASA Astrophysics Data System (ADS)
Germain, David; Leocmach, Mathieu; Gibaud, Thomas
2016-03-01
We have developed a lab module for undergraduate students which involves quantifying the dynamics of a suspension of microscopic particles using Differential Dynamic Microscopy (DDM). DDM is a relatively new technique that constitutes an alternative to more classical techniques such as dynamic light scattering (DLS) and video particle tracking (VPT). The technique consists of imaging a particle dispersion with a standard light microscope and a camera and analyzing the images using digital Fourier transforms to obtain the intermediate scattering function, an autocorrelation function that characterizes the dynamics of the dispersion. We first illustrate DDM in the textbook case of colloids under Brownian motion, where we measure the diffusion coefficient. Then we show that DDM is a pertinent tool to characterize biological systems such as motile bacteria.
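The heart of DDM fits in a dozen lines of NumPy: difference frames at each lag, average the power spectrum of the differences, and radially average over |q|. A minimal sketch assuming a (time, height, width) image stack; the bin handling is deliberately crude.

```python
import numpy as np

def ddm_structure_function(stack, lags=(1, 2, 4, 8, 16)):
    """D(q, tau) = < |FFT2( I(t+tau) - I(t) )|^2 >_t, radially averaged."""
    stack = stack.astype(float)              # avoid uint8 wrap-around
    t, h, w = stack.shape
    fy, fx = np.meshgrid(np.fft.fftfreq(h) * h, np.fft.fftfreq(w) * w,
                         indexing="ij")
    qbin = np.round(np.hypot(fy, fx)).astype(int).ravel()
    counts = np.maximum(np.bincount(qbin), 1)
    out = {}
    for tau in lags:
        diff = stack[tau:] - stack[:-tau]                    # all frame pairs
        power = np.abs(np.fft.fft2(diff, axes=(1, 2))) ** 2  # per-pair spectra
        out[tau] = np.bincount(qbin, weights=power.mean(axis=0).ravel()) / counts
    return out  # lag -> radially averaged structure function
```

For colloids under Brownian motion, the intermediate scattering function extracted from D(q, τ) decays as exp(−Dq²τ), which is how the diffusion coefficient is measured in the module.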
On the performances of computer vision algorithms on mobile platforms
NASA Astrophysics Data System (ADS)
Battiato, S.; Farinella, G. M.; Messina, E.; Puglisi, G.; Ravì, D.; Capra, A.; Tomaselli, V.
2012-01-01
Computer Vision enables mobile devices to extract the meaning of the observed scene from information acquired with the onboard sensor cameras. Nowadays there is growing interest in Computer Vision algorithms able to work on mobile platforms (e.g., phone cameras, point-and-shoot cameras, etc.). Indeed, bringing Computer Vision capabilities to mobile devices opens up new opportunities in different application contexts. The implementation of vision algorithms on mobile devices is still a challenging task, since these devices have poor image sensors and optics as well as limited processing power. In this paper we consider different algorithms covering classic Computer Vision tasks: keypoint extraction, face detection, and image segmentation. Several tests have been run to compare the performance of the mobile platforms involved: Nokia N900, LG Optimus One, and Samsung Galaxy SII.
Welch, David A.; Mehdi, Beata L.; Hatchell, Hanna J.; ...
2015-03-25
Understanding the fundamental processes taking place at the electrode-electrolyte interface in batteries will play a key role in the development of next generation energy storage technologies. One of the most fundamental aspects of the electrode-electrolyte interface is the electrical double layer (EDL). Given the recent development of high spatial resolution in-situ electrochemical cells for scanning transmission electron microscopy (STEM), there now exists the possibility that we can directly observe the formation and dynamics of the EDL. In this paper we predict electrolyte structure within the EDL using classical models and atomistic Molecular Dynamics (MD) simulations. The MD simulations show that the classical models fail to accurately reproduce concentration profiles that exist within the electrolyte. It is thus suggested that MD must be used in order to accurately predict STEM images of the electrode-electrolyte interface. Using MD and image simulations together for a high contrast electrolyte (the high atomic number CsCl electrolyte), it is determined that, for a smooth interface, concentration profiles within the EDL should be visible experimentally. When normal experimental parameters such as rough interfaces and low-Z electrolytes (like those used in Li-ion batteries) are considered, observation of the EDL appears to be more difficult.
Quality and utilization of food co-products and residues
NASA Astrophysics Data System (ADS)
Cooke, P.; Bao, G.; Broderick, C.; Fishman, M.; Liu, L.; Onwulata, C.
2010-06-01
Some agricultural industries generate large amounts of low value co-products/residues, including citrus peel, sugar beet pulp and whey protein from the production of orange juice, sugar and cheese commodities, respectively. National Program #306 of the USDA Agricultural Research Service aims to characterize and enhance quality and develop new processes and uses for value-added foods and bio-based products. In parallel projects, we applied scanning microscopies to examine the molecular organization of citrus pectin gels, covalent crosslinking to reduce debonding in sugar beet pulp-PLA composites and functional modification of whey protein through extrusion in order to evaluate new methods of processing and formulating new products. Also, qualitative attributes of fresh produce that could potentially guide germ line development and crop management were explored through fluorescence imaging: synthesis and accumulation of oleoresin in habanero peppers suggest a complicated mechanism of secretion that differs from the classical scheme. Integrated imaging appears to offer significant structural insights to help understand practical properties and features of important food co-products/residues.
The Statistical Interpretation of Classical Thermodynamic Heating and Expansion Processes
ERIC Educational Resources Information Center
Cartier, Stephen F.
2011-01-01
A statistical model has been developed and applied to interpret thermodynamic processes typically presented from the macroscopic, classical perspective. Through this model, students learn and apply the concepts of statistical mechanics, quantum mechanics, and classical thermodynamics in the analysis of the (i) constant volume heating, (ii)…
Optimization in First Semester Calculus: A Look at a Classic Problem
ERIC Educational Resources Information Center
LaRue, Renee; Infante, Nicole Engelke
2015-01-01
Optimization problems in first semester calculus have historically been a challenge for students. Focusing on the classic optimization problem of finding the minimum amount of fencing required to enclose a fixed area, we examine students' activity through the lens of Tall and Vinner's concept image and Carlson and Bloom's multidimensional…
Exploring Classical Art at the Museum of Fine Arts, Boston.
ERIC Educational Resources Information Center
Burchenal, Margaret; Foote, Allison
This resource packet is designed to help teachers incorporate the study of ancient Greek and Roman art into junior and senior high school classrooms. The packet consists of four curriculum units based upon aspects of classical life or culture. These units are: "Daily Life"; "Mythology"; "Images of Power"; and "Echoes of…
Fonseca, Eduardo Kaiser Ururahy Nunes; Yamauchi, Fernando Ide; Tridente, Cassia Franco; Baroni, Ronaldo Hueb
2017-03-01
Corkscrew esophagus (also referred to as rosary bead esophagus) is a classic finding of diffuse esophageal spasm (DES) in barium studies, reflecting abnormal contractions that lead to compartmentalization and curling of the esophagus, ultimately giving an appearance similar to a corkscrew or rosary beads. We review the pathophysiology of this finding, correlating it with the corkscrew and rosary images that gave rise to this classic description.
Purmann, Sascha; Pollmann, Stefan
2015-01-01
To process information selectively and to continuously fine-tune the selectivity of information processing are important abilities for successful goal-directed behavior. One phenomenon thought to reflect this fine-tuning is the conflict adaptation effect in interference tasks, i.e., the reduction of interference after an incompatible trial and when incompatible trials are frequent. The neurocognitive mechanisms of these effects are currently only partly understood, and results from brain imaging studies so far are mixed. In our study we validate and extend recent findings by examining adaptation to recent conflict in the classical Stroop task using functional magnetic resonance imaging. Consistent with previous research, we found increased activity in a fronto-parietal network comprising the medial prefrontal cortex, ventro-lateral prefrontal cortex, and posterior parietal cortex when contrasting incompatible with compatible trials. These areas have been associated with attentional processes and might reflect increased cognitive conflict and its resolution during incompatible trials. While carefully controlling for non-attentional sequential effects, we found smaller Stroop interference after an incompatible trial (the conflict adaptation effect). These behavioral conflict adaptation effects were accompanied by changes in activity in visual color-selective areas (V4, V4α), while there was no modulation by previous-trial compatibility in a visual word-selective area (VWFA). Our results provide further evidence for the notion that adaptation to recent conflict is based mainly on enhancement of the processing of task-relevant information.
Spectral-Spatial Scale Invariant Feature Transform for Hyperspectral Images.
Al-Khafaji, Suhad Lateef; Jun Zhou; Zia, Ali; Liew, Alan Wee-Chung
2018-02-01
Spectral-spatial feature extraction is an important task in hyperspectral image processing. In this paper we propose a novel method to extract distinctive invariant features from hyperspectral images for registration of hyperspectral images with different spectral conditions. Spectral condition means images are captured with different incident lights, viewing angles, or using different hyperspectral cameras. In addition, spectral condition includes images of objects with the same shape but different materials. This method, which is named spectral-spatial scale invariant feature transform (SS-SIFT), explores both spectral and spatial dimensions simultaneously to extract spectral and geometric transformation invariant features. Similar to the classic SIFT algorithm, SS-SIFT consists of keypoint detection and descriptor construction steps. Keypoints are extracted from spectral-spatial scale space and are detected from extrema after 3D difference of Gaussian is applied to the data cube. Two descriptors are proposed for each keypoint by exploring the distribution of spectral-spatial gradient magnitude in its local 3D neighborhood. The effectiveness of the SS-SIFT approach is validated on images collected in different light conditions, different geometric projections, and using two hyperspectral cameras with different spectral wavelength ranges and resolutions. The experimental results show that our method generates robust invariant features for spectral-spatial image matching.
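A stripped-down sketch of the keypoint-detection stage follows, treating the data cube as a 3D volume and taking local extrema of successive difference-of-Gaussian levels. Unlike full SS-SIFT this ignores cross-scale extrema checks and descriptor construction, and all names, sigmas, and thresholds are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter, minimum_filter

def ss_dog_keypoints(cube, sigmas=(1.0, 1.6, 2.6, 4.2), thresh=0.02):
    """Keypoints as local extrema of a 3D difference-of-Gaussians stack.

    cube: (rows, cols, bands); smoothing acts on spatial and spectral
    axes alike, in the spirit of the spectral-spatial scale space.
    """
    cube = cube.astype(float)
    blurred = [gaussian_filter(cube, s) for s in sigmas]
    keypoints = []
    for b1, b2 in zip(blurred, blurred[1:]):
        dog = b2 - b1
        is_max = maximum_filter(dog, size=3) == dog
        is_min = minimum_filter(dog, size=3) == dog
        mask = (is_max | is_min) & (np.abs(dog) > thresh)
        keypoints.extend(zip(*np.where(mask)))   # (row, col, band) triples
    return keypoints
```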
A Laplacian based image filtering using switching noise detector.
Ranjbaran, Ali; Hassan, Anwar Hasni Abu; Jafarpour, Mahboobe; Ranjbaran, Bahar
2015-01-01
This paper presents a Laplacian-based image filtering method. Using a local noise estimator function in an energy-functional minimization scheme, we show that the Laplacian, well known as an edge detection function, can also be used for noise removal. The algorithm can be implemented on a 3x3 window and is easily tuned by the number of iterations. Image denoising reduces to adjusting each pixel value by its Laplacian weighted by the local noise estimator. The only parameter controlling smoothness is the number of iterations. The noise reduction quality of the introduced method is evaluated and compared with classic algorithms such as Wiener and Total Variation based filters for Gaussian noise, and also with the state-of-the-art BM3D method on several images. The algorithm proves to be simple, fast and comparable with many classic denoising algorithms for Gaussian noise.
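A minimal sketch of the iteration, using local variance as a crude stand-in for the paper's noise estimator: flat noisy regions receive weights near one and strong edges near zero, so the diffusion-like Laplacian step smooths noise while sparing edges. The kernel, weight form, and step size are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import convolve, uniform_filter

LAP = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)  # 3x3 Laplacian

def laplacian_denoise(img, n_iter=10, k=0.2):
    x = img.astype(float)
    for _ in range(n_iter):
        lap = convolve(x, LAP, mode="nearest")
        local_var = uniform_filter(x ** 2, 3) - uniform_filter(x, 3) ** 2
        m = np.median(local_var) + 1e-9
        w = m / (local_var + m)     # ~1 in flat areas, small on edges
        x += k * w * lap            # diffusion step gated by the noise weight
    return x
```

As in the paper, the number of iterations is the main smoothness control.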
Dispersion-cancelled biological imaging with quantum-inspired interferometry
Mazurek, M. D.; Schreiter, K. M.; Prevedel, R.; Kaltenbaek, R.; Resch, K. J.
2013-01-01
Quantum information science promises transformative impact over a range of key technologies in computing, communication, and sensing. A prominent example uses entangled photons to overcome the resolution-degrading effects of dispersion in the medical-imaging technology of optical coherence tomography. The quantum solution introduces new challenges: inherently low signal, and artifacts (additional unwanted signal features). It has recently been shown that entanglement is not a requirement for automatic dispersion cancellation. Such classical techniques could solve the low-signal problem; however, they all still suffer from artifacts. Here, we introduce a method of chirped-pulse interferometry based on shaped laser pulses, and use it to produce artifact-free, high-resolution, dispersion-cancelled images of the internal structure of a biological sample. Our work fulfills one of the promises of quantum technologies: automatic-dispersion-cancellation interferometry in biomedical imaging. It also shows how subtle differences between a quantum technique and its classical analogue may have unforeseen, yet beneficial, consequences. PMID:23545597
Evolution of illustrations in anatomy: a study from the classical period in Europe to modern times.
Ghosh, Sanjib Kumar
2015-01-01
Illustrations constitute an essential element of learning anatomy in modern times. However, a significant evolutionary process spread over centuries was required for illustrations to achieve their present status in the subject of anatomy. This review article attempts to outline that evolutionary process by highlighting the works of esteemed anatomists in chronological order. Available literature suggests that illustrations were not used in anatomy during the classical period, when the subject was dominated by the descriptive text of Galen. Guido da Vigevano was the first to use illustrations in anatomy during the Late Middle Ages, and this concept developed further during the Renaissance period, when Andreas Vesalius pioneered the use of illustrations as an indispensable tool in conveying anatomical details. Toward the later stages of the Renaissance, Fabricius ab Aquapendente endeavored to restrict the dramatization of anatomical illustrations, a prevalent trend in the early Renaissance. During the 18th century, anatomical artwork was characterized by the individual styles of prominent anatomists, leading to suppression of anatomical details. In the 19th century, Henry Gray used illustrations in his anatomical masterpiece that focused on depicting anatomical structures and were free from any artistic style. From the early part of the 20th century, medical images and photographs started to complement traditional handmade anatomical illustrations. Computer technology and advanced software systems played a key role in the evolution of anatomical illustrations during the late 20th century, resulting in new-generation 3D image datasets that are being used in the 21st century in innovative formats for teaching and learning anatomy.
The emotional power of music: how music enhances the feeling of affective pictures.
Baumgartner, Thomas; Lutz, Kai; Schmidt, Conny F; Jäncke, Lutz
2006-02-23
Music is an intriguing stimulus widely used in movies to increase the emotional experience. However, no brain imaging study has to date examined this enhancement effect using emotional pictures (the modality mostly used in emotion research) and musical excerpts. Therefore, we designed this functional magnetic resonance imaging study to explore how musical stimuli enhance the feeling of affective pictures. In a classical block design carefully controlling for habituation and order effects, we presented fearful and sad pictures (mostly taken from the IAPS) either alone or combined with congruent emotional musical excerpts (classical pieces). Subjective ratings clearly indicated that the emotional experience was markedly increased in the combined relative to the picture condition. Furthermore, using a second-level analysis and regions of interest approach, we observed a clear functional and structural dissociation between the combined and the picture condition. Besides increased activation in brain areas known to be involved in auditory as well as in neutral and emotional visual-auditory integration processes, the combined condition showed increased activation in many structures known to be involved in emotion processing (including for example amygdala, hippocampus, parahippocampus, insula, striatum, medial ventral frontal cortex, cerebellum, fusiform gyrus). In contrast, the picture condition only showed an activation increase in the cognitive part of the prefrontal cortex, mainly in the right dorsolateral prefrontal cortex. Based on these findings, we suggest that emotional pictures evoke a more cognitive mode of emotion perception, whereas congruent presentations of emotional visual and musical stimuli rather automatically evoke strong emotional feelings and experiences.
Least significant qubit algorithm for quantum images
NASA Astrophysics Data System (ADS)
Sang, Jianzhi; Wang, Shen; Li, Qiong
2016-11-01
To study the feasibility of the classical image least significant bit (LSB) information hiding algorithm on a quantum computer, a least significant qubit (LSQb) information hiding algorithm for quantum images is proposed. In this paper, we focus on a novel quantum representation for color digital images (NCQI). Firstly, by designing a three-qubit comparator and unitary operators, the reasonability and feasibility of LSQb based on NCQI are presented. Then, the concrete LSQb information hiding algorithm is proposed, which embeds the secret qubits into the least significant qubits of the RGB channels of the quantum cover image. The quantum circuit of the LSQb information hiding algorithm is also illustrated. Furthermore, the secret-extraction algorithm and circuit are illustrated, utilizing controlled-swap gates. Our algorithm has two merits: (1) it is absolutely blind, and (2) extracting the secret binary qubits requires no quantum measurement operation or any other help from a classical computer. Finally, simulation and comparative analysis show the performance of our algorithm.
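For reference, the classical LSB scheme that LSQb transplants to a quantum register can be sketched in a few lines of Python; the function names and the uint8 cover-image assumption are ours, not the paper's.

```python
import numpy as np

def lsb_embed(cover, secret_bits):
    """Hide a 1-D array of 0/1 bits in the LSBs of a uint8 image."""
    flat = cover.reshape(-1).copy()
    assert secret_bits.size <= flat.size, "secret too large for cover"
    # Clear each byte's LSB, then write the secret bit into it.
    flat[:secret_bits.size] = (flat[:secret_bits.size] & 0xFE) | secret_bits
    return flat.reshape(cover.shape)

def lsb_extract(stego, n_bits):
    """Recover the embedded bits by masking the LSB of each byte."""
    return stego.reshape(-1)[:n_bits] & 1
```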
Multiple Active Contours Guided by Differential Evolution for Medical Image Segmentation
Cruz-Aceves, I.; Avina-Cervantes, J. G.; Lopez-Hernandez, J. M.; Rostro-Gonzalez, H.; Garcia-Capulin, C. H.; Torres-Cisneros, M.; Guzman-Cabrera, R.
2013-01-01
This paper presents a new image segmentation method based on multiple active contours guided by differential evolution, called MACDE. The segmentation method uses differential evolution over a polar coordinate system to increase the exploration and exploitation capabilities relative to the classical active contour model. To evaluate the performance of the proposed method, a set of synthetic images with complex objects, Gaussian noise, and deep concavities is introduced. Subsequently, MACDE is applied to datasets of sequential computed tomography and magnetic resonance images which contain the human heart and the human left ventricle, respectively. Finally, to obtain a quantitative and qualitative evaluation of the medical image segmentations against regions outlined by experts, a set of distance and similarity metrics has been adopted. According to the experimental results, MACDE outperforms the classical active contour model and the interactive Tseng method in terms of efficiency and robustness in obtaining the optimal control points, and attains highly accurate segmentations. PMID:23983809
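The optimizer at the heart of such a method is the standard DE/rand/1/bin loop, sketched below; the fitness callback is a placeholder for the contour energy that MACDE actually minimizes, and the control parameters (F, CR, population size) are generic defaults rather than the paper's settings.

```python
import numpy as np

def differential_evolution(fitness, bounds, pop_size=30, F=0.8, CR=0.9,
                           n_gen=200, seed=0):
    """Minimize fitness(x) over box bounds [(lo, hi), ...] with DE."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = lo.size
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    cost = np.array([fitness(x) for x in pop])
    for _ in range(n_gen):
        for i in range(pop_size):
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)   # mutation
            cross = rng.random(dim) < CR                # binomial crossover
            cross[rng.integers(dim)] = True             # keep >= 1 mutant gene
            trial = np.where(cross, mutant, pop[i])
            f = fitness(trial)
            if f < cost[i]:                             # greedy selection
                pop[i], cost[i] = trial, f
    return pop[cost.argmin()], cost.min()
```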
[Research and development strategies in classical herbal formulae].
Chen, Chang; Cheng, Jin-Tang; Liu, An
2017-05-01
As an outstanding representative of traditional Chinese medicine prescriptions, classical herbal formulae are the essence of the great treasure of traditional Chinese medicine. To support their development, the state and the relevant administrative departments have successively promulgated encouraging policies. However, some key issues in the development process of classical herbal formulae have not yet reached unified consensus and standards, and these problems are discussed in depth here. The authors discuss the registration requirements of classical herbal formulae and propose specific screening indicators, the basis for determining prescription and dosage, methods for screening the production process, and basic principles of clinical localization, in order to offer valuable opinions and provide a reference for the development of classical herbal formulae and for policy formulation. Copyright© by the Chinese Pharmaceutical Association.
Classical trajectory studies on the dynamics of one-photon double photoionization of H2O
NASA Astrophysics Data System (ADS)
Streeter, Zachary; Yip, Frank; Reedy, Dylan P.; Landers, Allen; McCurdy, C. William
2017-04-01
Recent momentum imaging experiments at the Advanced Light Source have opened the possibility of measuring the complete triple differential cross section (TDCS) for one-photon double ionization of H2O in the molecular frame. The measurements depend on the complete breakup process, H2O + hν -> 2e- + H+ + H+ + O. At the 57 eV photon energy of the experiment this process could proceed via any of the nine energetically accessible electronic states of H2O++. To discover which ionization channels contribute to the observed TDCS for the electrons measured in coincidence with different kinetic energy releases, we have carried out classical trajectory studies for breakup of the water dication on all nine potential surfaces, sampling from a Wigner phase space distribution for the vibrational ground state of H2O. The final momentum distributions of the protons and branching ratios between two- and three-body breakup are then analyzed and the results are compared with experiment to identify which ionization channels contribute to the TDCS observed in coincidence measurements of the ejected electrons. Office of Basic Energy Sciences, U.S. DOE.
Liver transplantation for classical maple syrup urine disease: long-term follow-up.
Díaz, Victoria M; Camarena, Carmen; de la Vega, Ángela; Martínez-Pardo, Mercedes; Díaz, Carmen; López, Manuel; Hernández, Francisco; Andrés, Ane; Jara, Paloma
2014-11-01
The aim of the study was to evaluate indications, results, and clinical and neurological evolution in children who underwent liver transplantation for classical maple syrup urine disease (MSUD). Descriptive study of liver transplantation for MSUD between 1991 and 2012. Eight patients were transplanted. Indications for transplant were poor metabolic control, expressed as significant psychomotor disabilities (4 had psychomotor delays, 5 had spasticity, and 5 had epilepsy) and poor quality of life (mean numbers of acute metabolic decompensations and of total hospitalizations before transplantation were 5 and 12, respectively). Four required a nasogastric tube, with a maximum 4 g/day protein-restricted diet in all of them. Seven had significant alterations on brain magnetic resonance imaging. Mean leucine and alloisoleucine levels were 608 (standard deviation [SD] 516) and 218 μmol/L (SD 216), respectively. All patients received deceased-donor livers, at ages between 1.5 and 2.5 years (mean 1.78 years). Mean posttransplantation follow-up was 12.2 years (range 5-21 years). Final patient and graft survival were 87.5% and 75%, respectively. Following transplantation, none required hospitalization in the last 3 years of follow-up, nor did any have new acute metabolic decompensations despite a normal diet. Five followed normal schooling, 2 had motor disabilities, and 2 had convulsive crises. Brain magnetic resonance imaging was repeated in 4 patients, showing neuroimaging improvement in 3 of them. Mean leucine levels were <350 μmol/L from the immediate posttransplantation period (mean 225 μmol/L, SD 78), with a maximum alloisoleucine level of 20 μmol/L. Liver transplantation is an effective treatment for classical MSUD that arrests brain damage, although it does not reverse the process.
Students' Ideas about Prismatic Images: Teaching Experiments for an Image-Based Approach
ERIC Educational Resources Information Center
Grusche, Sascha
2017-01-01
Prismatic refraction is a classic topic in science education. To investigate how undergraduate students think about prismatic dispersion, and to see how they change their thinking when observing dispersed images, five teaching experiments were done and analysed according to the Model of Educational Reconstruction. For projection through a prism,…
ERIC Educational Resources Information Center
Riveros, H. G.; Rosenberger, Franz
2012-01-01
This article discusses two "magic tricks" in terms of underlying optical principles. The first trick is new and produces a "ghost" in the air, and the second is the classical real image produced with two parabolic mirrors. (Contains 2 figures and 6 photos.)
NASA Technical Reports Server (NTRS)
2011-01-01
Topics covered include: Optimal Tuner Selection for Kalman-Filter-Based Aircraft Engine Performance Estimation; Airborne Radar Interferometric Repeat-Pass Processing; Plug-and-Play Environmental Monitoring Spacecraft Subsystem; Power-Combined GaN Amplifier with 2.28-W Output Power at 87 GHz; Wallops Ship Surveillance System; Source Lines Counter (SLiC) Version 4.0; Guidance, Navigation, and Control Program; Single-Frame Terrain Mapping Software for Robotic Vehicles; Auto Draw from Excel Input Files; Observation Scheduling System; CFDP for Interplanetary Overlay Network; X-Windows Widget for Image Display; Binary-Signal Recovery; Volumetric 3D Display System with Static Screen; MMIC Replacement for Gunn Diode Oscillators; Feature Acquisition with Imbalanced Training Data; Mount Protects Thin-Walled Glass or Ceramic Tubes from Large Thermal and Vibration Loads; Carbon Nanotube-Based Structural Health Monitoring Sensors; Wireless Inductive Power Device Suppresses Blade Vibrations; Safe, Advanced, Adaptable Isolation System Eliminates the Need for Critical Lifts; Anti-Rotation Device Releasable by Insertion of a Tool; A Magnetically Coupled Cryogenic Pump; Single Piezo-Actuator Rotary-Hammering Drill; Fire-Retardant Polymeric Additives; Catalytic Generation of Lift Gases for Balloons; Ionic Liquids to Replace Hydrazine; Variable Emittance Electrochromics Using Ionic Electrolytes and Low Solar Absorptance Coatings; Spacecraft Radiator Freeze Protection Using a Regenerative Heat Exchanger; Multi-Mission Power Analysis Tool; Correction for Self-Heating When Using Thermometers as Heaters in Precision Control Applications; Gravitational Wave Detection with Single-Laser Atom Interferometers; Titanium Alloy Strong Back for IXO Mirror Segments; Improved Ambient Pressure Pyroelectric Ion Source; Multi-Modal Image Registration and Matching for Localization of a Balloon on Titan; Entanglement in Quantum-Classical Hybrid; Algorithm for Autonomous Landing; Quantum-Classical Hybrid for Information Processing; Small-Scale Dissipation in Binary-Species Transitional Mixing Layers; Superpixel-Augmented Endmember Detection for Hyperspectral Images; Coding for Parallel Links to Maximize the Expected Value of Decodable Messages; and Microwave Tissue Soldering for Immediate Wound Closure.
Classical theory of atomic collisions - The first hundred years
NASA Astrophysics Data System (ADS)
Grujić, Petar V.
2012-05-01
Classical calculations of atomic processes started in 1911 with Rutherford's famous evaluation of the differential cross section for α particles scattered on foil atoms [1]. The success of these calculations was soon overshadowed by the rise of Quantum Mechanics in 1925 and its triumphal success in describing processes at the atomic and subatomic levels. It was generally recognized that the classical approach should be inadequate, and it was neglected until 1953, when the famous paper by Gregory Wannier appeared, in which the threshold law for the behaviour of the single-ionization cross section under electron impact was derived. All later calculations and experimental studies confirmed the law derived by purely classical theory. The next step was taken by Ian Percival and collaborators in the 1960s, who developed a general classical three-body computer code, which was used by many researchers in evaluating various atomic processes like ionization, excitation, detachment, dissociation, etc. Another approach was pursued by Michal Gryzinski from Warsaw, who started a far-reaching programme for treating atomic particles and processes as purely classical objects [2]. Though often criticized for overestimating the domain of classical theory, his group's results matched many experimental data. The Belgrade group pursued the classical approach using both analytical and numerical calculations, studying a number of atomic collisions, in particular near-threshold processes. The Riga group, led by Modris Gailitis [3], contributed considerably to the field, as did Valentin Ostrovsky and coworkers from Saint Petersburg, who developed powerful analytical methods within purely classical mechanics [4]. We give an overview of these approaches and show some of the remarkable results, which were subsequently confirmed by semiclassical and quantum mechanical calculations, as well as by experimental evidence. Finally, we discuss the theoretical and epistemological background of the classical calculations and explain why they turned out so successful, despite the essentially quantum nature of atomic and subatomic systems.
ERIC Educational Resources Information Center
Sodd, Mary Jo
Moliere's "Tartuffe" is an attack, not on religion, but on people who hide behind religion and exploit it. As a college professor in charge of a student production searched for a director's concept for "Tartuffe," she realized that it would be unwise to attempt a museum staging of neo-classical theater with limited funding. She…
Implementation of a watershed algorithm on FPGAs
NASA Astrophysics Data System (ADS)
Zahirazami, Shahram; Akil, Mohamed
1998-10-01
In this article we present an implementation of a watershed algorithm on a multi-FPGA architecture. The implementation is based on a hierarchical FIFO (H-FIFO), with a separate FIFO for each gray level. The gray-scale value of a pixel is taken as the altitude of the point, so the image is viewed as a relief. We proceed by a flooding step, as if the relief were immersed in a lake: the water rises, and when the waters of two different catchment basins meet, a separator, or 'watershed', is constructed. This approach is data dependent, hence the processing time differs from image to image. The H-FIFO guarantees the nature of immersion, which requires two types of priority: all points at altitude n are processed before any point at altitude n + 1, and within an altitude, water propagates from the source with constant velocity in all directions. The operator needs two input images: the original image (or its gradient) and a marker image. A classic way to construct the marker image is to build an image of minimal regions, each with a unique label. The label is the color of the water and is used to detect when two different waters touch. The algorithm first fills the hierarchical FIFO with the neighbors of all labeled regions. It then fetches the first pixel from the first non-empty FIFO and processes it: the pixel takes the color of its labeled neighbor, and any neighbors not already in the H-FIFO are placed in their corresponding FIFO. The process ends when the H-FIFO is empty. The result is a segmented and labeled image.
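A compact software model of this flooding procedure, with one FIFO per gray level, can be sketched in Python as below. It is a functional reference for the FPGA design described above, not the hardware implementation itself; duplicate queue entries are tolerated and skipped at pop time, and watershed separators are left implicit as the boundaries between labels.

```python
from collections import deque
import numpy as np

def watershed_hfifo(gradient, markers):
    """gradient: 2-D uint8 relief; markers: int labels, 0 = unlabeled."""
    labels = markers.copy()
    hfifo = [deque() for _ in range(256)]       # one FIFO per gray level
    h, w = gradient.shape
    neighbors = lambda y, x: ((y-1, x), (y+1, x), (y, x-1), (y, x+1))
    # Seed: enqueue the unlabeled neighbors of every marked region.
    for y in range(h):
        for x in range(w):
            if labels[y, x]:
                for ny, nx in neighbors(y, x):
                    if 0 <= ny < h and 0 <= nx < w and not labels[ny, nx]:
                        hfifo[gradient[ny, nx]].append((ny, nx))
    # Flood: altitude n is fully drained before altitude n + 1.
    for level in range(256):
        q = hfifo[level]
        while q:
            y, x = q.popleft()
            if labels[y, x]:
                continue                        # already colored
            for ny, nx in neighbors(y, x):
                if not (0 <= ny < h and 0 <= nx < w):
                    continue
                if labels[ny, nx] and not labels[y, x]:
                    labels[y, x] = labels[ny, nx]   # take neighbor's color
                elif not labels[ny, nx]:
                    # Never re-open a drained FIFO: clamp to current level.
                    hfifo[max(int(gradient[ny, nx]), level)].append((ny, nx))
    return labels
```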
Minimized state complexity of quantum-encoded cryptic processes
NASA Astrophysics Data System (ADS)
Riechers, Paul M.; Mahoney, John R.; Aghamohammadi, Cina; Crutchfield, James P.
2016-05-01
The predictive information required for proper trajectory sampling of a stochastic process can be more efficiently transmitted via a quantum channel than a classical one. This recent discovery allows quantum information processing to drastically reduce the memory necessary to simulate complex classical stochastic processes. It also points to a new perspective on the intrinsic complexity that nature must employ in generating the processes we observe. The quantum advantage increases with codeword length: the length of process sequences used in constructing the quantum communication scheme. In analogy with the classical complexity measure, statistical complexity, we use this reduced communication cost as an entropic measure of state complexity in the quantum representation. Previously difficult to compute, the quantum advantage is expressed here in closed form using spectral decomposition. This allows for efficient numerical computation of the quantum-reduced state complexity at all encoding lengths, including infinite. Additionally, it makes clear how finite-codeword reduction in state complexity is controlled by the classical process's cryptic order, and it allows asymptotic analysis of infinite-cryptic-order processes.
Fast principal component analysis for stacking seismic data
NASA Astrophysics Data System (ADS)
Wu, Juan; Bai, Min
2018-04-01
Stacking seismic data plays an indispensable role in many steps of the seismic data processing and imaging workflow. Optimal stacking of seismic data can help mitigate seismic noise and greatly enhance the principal components. Traditional average-based seismic stacking methods cannot achieve optimal performance when the ambient noise is extremely strong. We propose a principal component analysis (PCA) algorithm for stacking seismic data that is insensitive to the noise level. Considering the computational bottleneck of the classic PCA algorithm in processing massive seismic data, we propose an efficient PCA algorithm that makes the method readily applicable in industrial applications. Two numerically designed examples and one real seismic data set are used to demonstrate the performance of the presented method.
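The idea can be illustrated with a plain SVD-based version (the paper's contribution is a faster PCA, which this sketch does not reproduce); the rescaling to a mean-comparable amplitude is our own convention.

```python
import numpy as np

def pca_stack(gather):
    """PCA stack of an NMO-corrected gather of shape (n_traces, n_samples).

    The leading singular vector captures the waveform coherent across
    traces, while random noise spreads over the remaining components.
    """
    U, s, Vt = np.linalg.svd(gather, full_matrices=False)
    # Averaging the rank-1 approximation over traces makes the result
    # directly comparable to the conventional mean stack.
    return (s[0] * U[:, 0].mean()) * Vt[0]
```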
Qualification process of CR system and quantification of digital image quality
NASA Astrophysics Data System (ADS)
Garnier, P.; Hun, L.; Klein, J.; Lemerle, C.
2013-01-01
CEA Valduc uses several X-ray generators to carry out many inspections: void search, welding expertise, gap measurements, etc. Most of these inspections are carried out on silver-based plates. For several years, CEA/Valduc has been qualifying new devices such as digital plates and CCD/flat-panel detectors. On one hand, this technological orientation anticipates the assumed eventual disappearance of silver-based plates; on the other hand, it keeps our skills up to date. The main improvement brought by digital plates is the continuous progress in measurement accuracy, especially with image data processing. It is now common to measure a defect's thickness or depth position within a part. In such applications, image data processing yields complementary information compared with scanned silver-based plates. The scanning procedure is harmful for measurements: it corrupts the resolution, adds numerical noise, and is time-consuming. Digital plates eliminate the scanning procedure and increase resolution. It is nonetheless difficult to define a single criterion for the quality of digital images. A procedure has to be defined to estimate the quality of the digital data itself; the impact of the scanning device and the configuration parameters must also be taken into account. This presentation deals with the qualification process developed by CEA/Valduc for digital plates (DUR-NDT), based on quantitative criteria chosen to define a direct numerical image quality that can be compared with scanned silver-based pictures and the classical optical density. The versatility of the X-ray parameters (tube voltage, intensity, exposure time) is also discussed. The aim is to transfer CEA/Valduc's years of experience with silver-based plate inspection to these new digital plate supports. This is an industrial stake.
ERIC Educational Resources Information Center
Brembs, Bjorn; Baxter, Douglas A.; Byrne, John H.
2004-01-01
Operant and classical conditioning are major processes shaping behavioral responses in all animals. Although the understanding of the mechanisms of classical conditioning has expanded significantly, the understanding of the mechanisms of operant conditioning is more limited. Recent developments in "Aplysia" are helping to narrow the gap in the…
An extended algebraic reconstruction technique (E-ART) for dual spectral CT.
Zhao, Yunsong; Zhao, Xing; Zhang, Peng
2015-03-01
Compared with standard computed tomography (CT), dual spectral CT (DSCT) has many advantages for object separation, contrast enhancement, artifact reduction, and material composition assessment. But it is generally difficult to reconstruct images from polychromatic projections acquired by DSCT, because of the nonlinear relation between the polychromatic projections and the images to be reconstructed. This paper first models the DSCT reconstruction problem as a nonlinear system problem, and then extends the classic ART method to solve the nonlinear system. One feature of the proposed method is its flexibility: it fits any commonly used scanning configuration and does not require consistent rays for different X-ray spectra. Another feature is its high degree of parallelism, which makes the method suitable for acceleration on GPUs (graphics processing units) or other parallel systems. The method is validated with numerical experiments on simulated noise-free and noisy data. High-quality images are reconstructed with the proposed method from the polychromatic projections of DSCT. The reconstructed images remain satisfactory even when there are certain errors in the estimated X-ray spectra.
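The classic, single-spectrum ART update that E-ART generalizes is the Kaczmarz row-projection iteration, sketched here; the relaxation factor lam and the sweep count are illustrative choices. E-ART, as described above, replaces this linear residual with one derived from the nonlinear polychromatic projection model.

```python
import numpy as np

def art(A, b, n_sweeps=10, lam=0.5):
    """Classic ART for the linear system A @ x = b.

    A: (m, n) system matrix of ray/pixel intersection lengths.
    b: (m,) measured (monochromatic) projections.
    """
    x = np.zeros(A.shape[1])
    row_norms = (A * A).sum(axis=1)
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            if row_norms[i] == 0:
                continue
            residual = b[i] - A[i] @ x
            x += lam * (residual / row_norms[i]) * A[i]  # project onto row i
    return x
```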
Deep linear autoencoder and patch clustering-based unified one-dimensional coding of image and video
NASA Astrophysics Data System (ADS)
Li, Honggui
2017-09-01
This paper proposes a unified one-dimensional (1-D) coding framework for image and video, which depends on a deep-learning neural network and image patch clustering. First, an improved K-means clustering algorithm for image patches is employed to obtain compact inputs for the deep artificial neural network. Second, for the purpose of best reconstructing the original image patches, the deep linear autoencoder (DLA), a linear version of the classical deep nonlinear autoencoder, is introduced to achieve the 1-D representation of image blocks. With this 1-D representation, DLA can attain zero reconstruction error, which is impossible for classical nonlinear dimensionality-reduction methods. Third, a unified 1-D coding infrastructure for image, intraframe, interframe, multiview video, three-dimensional (3-D) video, and multiview 3-D video is built by incorporating the different categories of video into the inputs of the patch clustering algorithm. Finally, simulation experiments show that the proposed methods simultaneously achieve a higher compression ratio and a higher peak signal-to-noise ratio than the state-of-the-art methods in low-bitrate transmission.
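A single linear autoencoder layer can be fitted in closed form, since its optimum coincides with PCA; the sketch below (our construction, not the paper's training procedure) shows why a linear code of full patch dimension can reconstruct exactly.

```python
import numpy as np

def linear_autoencoder(patches, k):
    """Closed-form linear autoencoder for (n, d) flattened patches.

    The optimal linear encoder/decoder pair spans the top-k principal
    subspace; with k equal to the patch dimension (and enough patches)
    the reconstruction error is exactly zero.
    """
    mu = patches.mean(axis=0)
    _, _, Vt = np.linalg.svd(patches - mu, full_matrices=False)
    W = Vt[:k]                                  # (k, d) weights
    encode = lambda X: (X - mu) @ W.T           # 1-D codes of length k
    decode = lambda Z: Z @ W + mu               # patch reconstruction
    return encode, decode
```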
The physics of a popsicle stick bomb
NASA Astrophysics Data System (ADS)
Sautel, Jérémy; Bourges, Andréane; Caussarieu, Aude; Plihon, Nicolas; Taberlet, Nicolas
2017-10-01
Popsicle sticks can be interlocked in the so-called "cobra weave" to form a chain under tension. When one end of the chain is released, the sticks rapidly disentangle, forming a traveling wave that propagates down the chain. In this paper, the properties of the traveling front are studied experimentally, and classical results from the theory of elasticity allow for a dimensional analysis of the height and speed of the traveling wave. The study presented here can help undergraduate students familiarize themselves with experimental techniques of image processing, and it also demonstrates the power of dimensional analysis and scaling laws.
NASA Astrophysics Data System (ADS)
Raupov, Dmitry S.; Myakinin, Oleg O.; Bratchenko, Ivan A.; Kornilin, Dmitry V.; Zakharov, Valery P.; Khramov, Alexander G.
2016-04-01
Optical coherence tomography (OCT) is usually employed for the measurement of tumor topology, which reflects structural changes of a tissue. We investigated the ability of OCT to detect such changes using computer texture analysis based on Haralick texture features, fractal dimension, and the complex directional field method. These features identify spatial characteristics that distinguish healthy tissue from various skin cancers in cross-sectional OCT images (B-scans). Speckle reduction is an important pre-processing stage for OCT image processing; here, an interval type-II fuzzy anisotropic diffusion algorithm was used for speckle noise reduction. The Haralick texture feature set includes contrast, correlation, energy, and homogeneity, evaluated in different directions. A box-counting method is applied to compute the fractal dimension of the investigated tissues. Additionally, we used the complex directional field, calculated by the local gradient method, to increase the quality of the diagnostic assessment. The complex directional field (like the "classical" directional field) describes an image as a set of directions. Because malignant tissue grows anisotropically, principal grooves may be observed on dermoscopic images, implying possible principal directions in OCT images. Our results suggest that the described texture features may provide useful information to differentiate pathological from healthy tissue. The problem of distinguishing melanoma from nevi is addressed in this work owing to the large quantity of experimental data (143 OCT images of tumors including basal cell carcinoma (BCC), malignant melanoma (MM), and nevi). We obtained a sensitivity of about 90% and a specificity of about 85%. Further research is warranted to determine how this approach may be used to select regions of interest automatically.
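Of the listed features, the box-counting fractal dimension is the most self-contained to sketch; the dyadic box sizes and the binarized-input assumption are ours.

```python
import numpy as np

def box_counting_dimension(binary):
    """Estimate the fractal dimension of a 2-D boolean image."""
    scales, counts = [], []
    k = 2
    while k <= min(binary.shape) // 2:          # dyadic box sizes
        h = (binary.shape[0] // k) * k
        w = (binary.shape[1] // k) * k
        blocks = binary[:h, :w].reshape(h // k, k, w // k, k)
        n_boxes = blocks.any(axis=(1, 3)).sum() # boxes touching the set
        if n_boxes > 0:
            scales.append(k)
            counts.append(n_boxes)
        k *= 2
    # Dimension = slope of log N(k) against log (1/k).
    slope, _ = np.polyfit(np.log(1.0 / np.array(scales)), np.log(counts), 1)
    return slope
```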
Wu, Chunyan; Wang, Xuefeng
2017-01-01
This paper presents a survey of a system that uses digital image processing techniques to identify anthracnose and powdery mildew diseases of sandalwood from digital images. Our main objective is to find the most suitable identification technology for anthracnose and powdery mildew on sandalwood leaves, providing algorithmic support for real-time machine judgment of the health status and disease level of sandalwood. We conducted real-time monitoring of Hainan sandalwood leaves with varying severity levels of anthracnose and powdery mildew beginning in March 2014. We used image segmentation, feature extraction, and digital image classification and recognition technology in a comparative experimental study of images of powdery mildew, anthracnose, and healthy leaves in the field. Tests on a large number of diseased leaves led to three conclusions: (1) Among the classical methods, the BP (back-propagation) neural network discriminated sandalwood leaf anthracnose and powdery mildew relatively well; its estimated lesion areas were closest to the actual ones. (2) The differences between the two diseases are well captured by the shape, color, and texture features of the disease image. (3) An SVM based on a radial basis kernel function identified and diagnosed the diseased leaves with good results: the identification rate for anthracnose and for healthy leaves was 92%, and that for powdery mildew was 84%. This disease identification technology lays the foundation for remote monitoring and diagnosis, preparing for remote transmission of disease images, and is a useful guide and reference for further research on disease identification and diagnosis systems in sandalwood and other tree species. PMID:28749977
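The third conclusion corresponds to a standard RBF-kernel SVM pipeline, sketched below with scikit-learn; the feature extraction is assumed already done, and the hyperparameters shown are generic defaults rather than the study's tuned values.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def train_leaf_classifier(features, labels):
    """features: (n, d) shape/color/texture vectors per leaf image;
    labels: 'anthracnose' | 'powdery_mildew' | 'healthy'."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        features, labels, test_size=0.25, random_state=0, stratify=labels)
    clf = make_pipeline(StandardScaler(),
                        SVC(kernel='rbf', C=10.0, gamma='scale'))
    clf.fit(X_tr, y_tr)
    print("identification rate:", clf.score(X_te, y_te))
    return clf
```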
Extreme Quantum Memory Advantage for Rare-Event Sampling
NASA Astrophysics Data System (ADS)
Aghamohammadi, Cina; Loomis, Samuel P.; Mahoney, John R.; Crutchfield, James P.
2018-02-01
We introduce a quantum algorithm for memory-efficient biased sampling of rare events generated by classical memoryful stochastic processes. Two efficiency metrics are used to compare quantum and classical resources for rare-event sampling. For a fixed stochastic process, the first is the classical-to-quantum ratio of required memory. We show for two example processes that there exists an infinite number of rare-event classes for which the memory ratio for sampling is larger than r , for any large real number r . Then, for a sequence of processes each labeled by an integer size N , we compare how the classical and quantum required memories scale with N . In this setting, since both memories can diverge as N →∞ , the efficiency metric tracks how fast they diverge. An extreme quantum memory advantage exists when the classical memory diverges in the limit N →∞ , but the quantum memory has a finite bound. We then show that finite-state Markov processes and spin chains exhibit memory advantage for sampling of almost all of their rare-event classes.
NASA Astrophysics Data System (ADS)
Oblow, E. M.
1982-10-01
An evaluation was made of the mathematical and economic basis for conversion processes in the Long-term Energy Analysis Program (LEAP) energy economy model. Conversion processes are the main modeling subunit in LEAP used to represent energy conversion industries and are supposedly based on the classical economic theory of the firm. Questions about uniqueness and existence of LEAP solutions and their relation to classical equilibrium economic theory prompted the study. An analysis of classical theory and LEAP model equations was made to determine their exact relationship. The conclusions drawn from this analysis were that LEAP theory is not consistent with the classical theory of the firm. Specifically, the capacity factor formalism used by LEAP does not support a classical interpretation in terms of a technological production function for energy conversion processes. The economic implications of this inconsistency are suboptimal process operation and short term negative profits in years where plant operation should be terminated. A new capacity factor formalism, which retains the behavioral features of the original model, is proposed to resolve these discrepancies.
Arnold, Corey W; Bui, Alex A T; Morioka, Craig; El-Saden, Suzie; Kangarloo, Hooshang
2007-01-01
The communication of imaging findings to a referring physician is an important role of the radiologist. However, communication between onsite and offsite physicians is a time-consuming process that can obstruct work flow and frequently involves no exchange of visual information, which is especially problematic given the importance of radiologic images for diagnosis and treatment. A prototype World Wide Web-based image documentation and reporting system was developed to support a "communication loop" based on the concept of a classic "wet-read" system. The proposed system attempts to address many of the problems seen in current communication work flows by implementing a well-documented and easily accessible communication loop that is adaptable to different types of imaging study evaluation. Images are displayed in native Digital Imaging and Communications in Medicine (DICOM) format with a Java applet, which allows accurate presentation along with the use of various image manipulation tools. The Web-based infrastructure consists of a server that stores imaging studies and reports, with Web browsers that download and install the necessary client software on demand. Application logic consists of a set of PHP (hypertext preprocessor) modules that are accessible through an application programming interface. The system may be adapted to any clinician-specialist communication loop and, because it integrates radiologic standards with Web-based technologies, can more effectively communicate and document imaging data. RSNA, 2007
Multi-modal automatic montaging of adaptive optics retinal images
Chen, Min; Cooper, Robert F.; Han, Grace K.; Gee, James; Brainard, David H.; Morgan, Jessica I. W.
2016-01-01
We present a fully automated adaptive optics (AO) retinal image montaging algorithm using the classic scale-invariant feature transform (SIFT) with random sample consensus (RANSAC) for outlier removal. Our approach is capable of using information from multiple AO modalities (confocal, split detection, and dark field) and can accurately detect discontinuities in the montage. The algorithm output is compared to manual montaging by evaluating the similarity of the overlapping regions after montaging and by calculating the detection rate of discontinuities in the montage. Our results show that the proposed algorithm has high alignment accuracy and a discontinuity detection rate that is comparable (and often superior) to manual montaging. In addition, we analyze and show the benefits of using multiple modalities in the montaging process. We provide the algorithm presented in this paper as open source, freely available to download. PMID:28018714
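The pairwise SIFT-plus-RANSAC alignment at the core of such a pipeline looks roughly as follows in OpenCV; the multi-modal match fusion and global montage assembly of the actual algorithm are omitted, and the similarity-transform model is our assumption.

```python
import cv2
import numpy as np

def align_pair(img_a, img_b, min_matches=10):
    """Estimate the transform mapping img_b onto img_a, or None."""
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_b, des_a, k=2)
    # Lowe's ratio test discards ambiguous correspondences.
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    if len(good) < min_matches:
        return None            # candidate montage discontinuity
    src = np.float32([kp_b[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_a[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    # RANSAC rejects outlier matches while fitting a similarity transform.
    M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    return M
```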
A functional MRI study of happy and sad affective states induced by classical music.
Mitterschiffthaler, Martina T; Fu, Cynthia H Y; Dalton, Jeffrey A; Andrew, Christopher M; Williams, Steven C R
2007-11-01
The present study investigated the functional neuroanatomy of transient mood changes in response to Western classical music. In a pilot experiment, 53 healthy volunteers (mean age: 32.0; SD = 9.6) evaluated their emotional responses to 60 classical musical pieces using a visual analogue scale (VAS) ranging from 0 (sad) through 50 (neutral) to 100 (happy). Twenty pieces were found to accurately induce the intended emotional states with good reliability, consisting of 5 happy, 5 sad, and 10 emotionally unevocative, neutral musical pieces. In a subsequent functional magnetic resonance imaging (fMRI) study, the blood oxygenation level dependent (BOLD) signal contrast was measured in response to the mood state induced by each musical stimulus in a separate group of 16 healthy participants (mean age: 29.5; SD = 5.5). Mood state ratings during scanning were made by a VAS, which confirmed the emotional valence of the selected stimuli. Increased BOLD signal contrast during presentation of happy music was found in the ventral and dorsal striatum, anterior cingulate, parahippocampal gyrus, and auditory association areas. With sad music, increased BOLD signal responses were noted in the hippocampus/amygdala and auditory association areas. Presentation of neutral music was associated with increased BOLD signal responses in the insula and auditory association areas. Our findings suggest that an emotion processing network in response to music integrates the ventral and dorsal striatum, areas involved in reward experience and movement; the anterior cingulate, which is important for targeting attention; and medial temporal areas, traditionally found in the appraisal and processing of emotions. Copyright 2006 Wiley-Liss, Inc.
ERIC Educational Resources Information Center
Olson, Joel A.; Nordell, Karen J.; Chesnik, Marla A.; Landis, Clark R.; Ellis, Arthur B.; Rzchowski, M. S.; Condren, S. Michael; Lisensky, George C.
2000-01-01
Describes a set of simple, inexpensive, classical demonstrations of nuclear magnetic resonance (NMR) and magnetic resonance imaging (MRI) principles that illustrate the resonance condition associated with magnetic dipoles and the dependence of the resonance frequency on environment. (WRM)
Carcaud, Julie; Giurfa, Martin; Sandoz, Jean Christophe
2016-01-01
The function of parallel neural processing is a fundamental problem in Neuroscience, as it is found across sensory modalities and evolutionary lineages, from insects to humans. Recently, parallel processing has attracted increased attention in the olfactory domain, with the demonstration in both insects and mammals that different populations of second-order neurons encode and/or process odorant information differently. Among insects, Hymenoptera present a striking olfactory system with a clear neural dichotomy from the periphery to higher-order centers, based on two main tracts of second-order (projection) neurons: the medial and lateral antennal lobe tracts (m-ALT and l-ALT). To unravel the functional role of these two pathways, we combined specific lesions of the m-ALT tract with behavioral experiments, using the classical conditioning of the proboscis extension response (PER conditioning). Lesioned and intact bees had to learn to associate an odorant (1-nonanol) with sucrose. Then the bees were subjected to a generalization procedure with a range of odorants differing in terms of their carbon chain length or functional group. We show that m-ALT lesion strongly affects acquisition of an odor-sucrose association. However, lesioned bees that still learned the association showed a normal gradient of decreasing generalization responses to increasingly dissimilar odorants. Generalization responses could be predicted to some extent by in vivo calcium imaging recordings of l-ALT neurons. The m-ALT pathway therefore seems necessary for normal classical olfactory conditioning performance. PMID:26834589
[Diagnostic possibilities of digital volume tomography].
Lemkamp, Michael; Filippi, Andreas; Berndt, Dorothea; Lambrecht, J Thomas
2006-01-01
Cone beam computed tomography provides high-quality 3D images of craniofacial structures. Although detail resolution is increased, x-ray exposure is reduced compared with classical computed tomography. The volume is analysed in three orthogonal planes, which can be rotated independently without quality loss. Cone beam computed tomography appears to be a less expensive and lower-exposure alternative to classical computed tomography.
Eger, Evelyn; Dolan, Raymond; Henson, Richard N.
2009-01-01
It is often assumed that neural activity in face-responsive regions of primate cortex correlates with conscious perception of faces. However, whether such activity occurs without awareness is still debated. Using functional magnetic resonance imaging (fMRI) in conjunction with a novel masked face priming paradigm, we observed neural modulations that could not be attributed to perceptual awareness. More specifically, we found reduced activity in several classic face-processing regions, including the “fusiform face area,” “occipital face area,” and superior temporal sulcus, when a face was preceded by a briefly flashed image of the same face, relative to a different face, even when 2 images of the same face differed. Importantly, unlike most previous studies, which have minimized awareness by using conditions of inattention, the present results occurred when the stimuli (the primes) were attended. By contrast, when primes were perceived consciously, in a long-lag priming paradigm, we found repetition-related activity increases in additional frontal and parietal regions. These data not only demonstrate that fMRI activity in face-responsive regions can be modulated independently of perceptual awareness, but also document where such subliminal face-processing occurs (i.e., restricted to face-responsive regions of occipital and temporal cortex) and to what extent (i.e., independent of the specific image). PMID:18400791
Theoretical foundations of spatially-variant mathematical morphology part II: gray-level images.
Bouaynaya, Nidhal; Schonfeld, Dan
2008-05-01
In this paper, we develop a spatially-variant (SV) mathematical morphology theory for gray-level signals and images in the Euclidean space. The proposed theory preserves the geometrical concept of the structuring function, which provides the foundation of classical morphology and is essential in signal and image processing applications. We define the basic SV gray-level morphological operators (i.e., SV gray-level erosion, dilation, opening, and closing) and investigate their properties. We demonstrate the ubiquity of SV gray-level morphological systems by deriving a kernel representation for a large class of systems, called V-systems, in terms of the basic SV gray-level morphological operators. A V-system is defined to be a gray-level operator, which is invariant under gray-level (vertical) translations. Particular attention is focused on the class of SV flat gray-level operators. The kernel representation for increasing V-systems is a generalization of Maragos' kernel representation for increasing and translation-invariant function-processing systems. A representation of V-systems in terms of their kernel elements is established for increasing and upper-semi-continuous V-systems. This representation unifies a large class of spatially-variant linear and non-linear systems under the same mathematical framework. Finally, simulation results show the potential power of the general theory of gray-level spatially-variant mathematical morphology in several image analysis and computer vision applications.
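As a concrete instance, SV flat erosion replaces the fixed structuring element of classical erosion with a position-dependent window; the sketch below uses a hypothetical per-pixel radius map as the simplest such structuring function.

```python
import numpy as np

def sv_flat_erosion(image, radius_map):
    """Spatially-variant flat erosion of a 2-D gray-level image.

    Each output pixel is the infimum of the input over a square window
    whose half-width radius_map[y, x] varies with position; a constant
    map recovers classical translation-invariant erosion.
    """
    h, w = image.shape
    out = np.empty_like(image)
    for y in range(h):
        for x in range(w):
            r = int(radius_map[y, x])
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            out[y, x] = image[y0:y1, x0:x1].min()
    return out
```

SV dilation follows as the dual (supremum over the reflected windows), and SV opening and closing follow by composition, as in the classical theory.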
Cellular image segmentation using n-agent cooperative game theory
NASA Astrophysics Data System (ADS)
Dimock, Ian B.; Wan, Justin W. L.
2016-03-01
Image segmentation is an important problem in computer vision and has significant applications in the segmentation of cellular images. Many different imaging techniques exist and produce a variety of image properties which pose difficulties to image segmentation routines. Bright-field images are particularly challenging because of the non-uniform shape of the cells, the low contrast between cells and background, and imaging artifacts such as halos and broken edges. Classical segmentation techniques often produce poor results on these challenging images. Previous attempts at bright-field imaging are often limited in scope to the images that they segment. In this paper, we introduce a new algorithm for automatically segmenting cellular images. The algorithm incorporates two game theoretic models which allow each pixel to act as an independent agent with the goal of selecting their best labelling strategy. In the non-cooperative model, the pixels choose strategies greedily based only on local information. In the cooperative model, the pixels can form coalitions, which select labelling strategies that benefit the entire group. Combining these two models produces a method which allows the pixels to balance both local and global information when selecting their label. With the addition of k-means and active contour techniques for initialization and post-processing purposes, we achieve a robust segmentation routine. The algorithm is applied to several cell image datasets including bright-field images, fluorescent images and simulated images. Experiments show that the algorithm produces good segmentation results across the variety of datasets which differ in cell density, cell shape, contrast, and noise levels.
NASA Astrophysics Data System (ADS)
Strocchi, S.; Ghielmi, M.; Basilico, F.; Macchi, A.; Novario, R.; Ferretti, R.; Binaghi, E.
2016-03-01
This work quantitatively evaluates the effects induced by the susceptibility characteristics of materials commonly used in dental practice on the quality of head MR images in a clinical 1.5 T device. The proposed evaluation procedure measures the image artifacts induced by susceptibility and provides an index consistent with the global degradation as perceived by experts. Susceptibility artifacts were evaluated in a near-clinical setup, using a phantom with susceptibility and geometric characteristics similar to those of a human head. We tested different dental materials (PAL Keramit, Ti6Al4V-ELI, Keramit NP, ILOR F, and Zirconia) and different clinical MR acquisition sequences, such as "classical" SE and fast, gradient, and diffusion sequences. The evaluation is designed as a matching process between reference and artifact-affected images recording the same scene. The extent of the degradation induced by susceptibility is then measured in terms of similarity with the corresponding reference image. The matching process involves a multimodal registration task and the use of an adequate, psychophysically validated similarity index based on the correlation coefficient. The proposed analyses are integrated within a computer-supported procedure that interactively guides the users through the different phases of the evaluation method. Two-dimensional and three-dimensional indexes are computed for each material and each acquisition sequence. From these, averaging the results obtained, we derived a ranking of the materials: Zirconia and ILOR F appear to be the best choice from the susceptibility-artifact point of view, followed, in order, by PAL Keramit, Ti6Al4V-ELI, and Keramit NP.
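Once the two images are co-registered, a correlation-coefficient index of the kind described reduces to a few lines; this is a generic Pearson correlation, not necessarily the exact psychophysically calibrated index used in the study.

```python
import numpy as np

def correlation_index(reference, degraded):
    """Pearson correlation of two co-registered images, in [-1, 1].

    Values near 1 indicate little susceptibility degradation; lower
    values indicate stronger artifacts.
    """
    a = reference.astype(float).ravel()
    b = degraded.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```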
Sandoval, Guillermo A; Brown, Adalsteinn D; Wodchis, Walter P; Anderson, Geoffrey M
2018-05-17
Measuring the value of medical imaging is challenging, in part, due to the lack of conceptual frameworks underlying potential mechanisms where value may be assessed. To address this gap, this article proposes a framework that builds on the large body of literature on quality of hospital care and the classic structure-process-outcome paradigm. The framework was also informed by the literature on adoption of technological innovations and introduces 2 distinct though related aspects of imaging technology not previously addressed specifically in the literature on quality of hospital care: adoption (a structural hospital characteristic) and use (an attribute of the process of care). The framework hypothesizes a 2-part causality where adoption is proposed to be a central, linking factor between hospital structural characteristics, market factors, and hospital outcomes (ie, quality and efficiency). The first part indicates that hospital structural characteristics and market factors influence or facilitate the adoption of high technology medical imaging within an institution. The presence of this technology, in turn, is hypothesized to improve the ability of the hospital to deliver high quality and efficient care. The second part describes this ability throughout 3 main mechanisms pointing to the importance of imaging use on patients, to the presence of staff and qualified care providers, and to some elements of organizational capacity capturing an enhanced clinical environment. The framework has the potential to assist empirical investigations of the value of adoption and use of medical imaging, and to advance understanding of the mechanisms that produce quality and efficiency in hospitals. Copyright © 2018 John Wiley & Sons, Ltd.
Multi-image acquisition-based distance sensor using agile laser spot beam.
Riza, Nabeel A; Amin, M Junaid
2014-09-01
We present a novel laser-based distance measurement technique that uses multiple-image-based spatial processing to enable distance measurements. Compared with the first-generation distance sensor using spatial processing, the modified sensor is no longer hindered by the classic Rayleigh axial resolution limit for the propagating laser beam at its minimum beam waist location. The proposed high-resolution distance sensor uses an electronically controlled variable focus lens (ECVFL) in combination with an optical imaging device, such as a charge-coupled device (CCD), to produce and capture laser spot images on the target whose sizes differ from the minimal spot size possible at that distance. By exploiting the unique relationship between the target-located spot sizes and the varying ECVFL focal length at each target distance, the proposed sensor can compute the target distance with a resolution better than the axial resolution given by the Rayleigh criterion. Using a 30 mW 633 nm He-Ne laser coupled with an electromagnetically actuated liquid ECVFL and a 20 cm focal length bias lens, and capturing five spot images per target position with a CCD-based Nikon camera, a proof-of-concept distance sensor is successfully implemented in the laboratory over target ranges from 10 to 100 cm with a demonstrated sub-cm axial resolution, which is better than the axial Rayleigh resolution limit at these target distances. Applications for the proposed, potentially cost-effective distance sensor are diverse and include industrial inspection and measurement and 3D object shape mapping and imaging.
NASA Astrophysics Data System (ADS)
Zwart, Christine M.; Venkatesan, Ragav; Frakes, David H.
2012-10-01
Interpolation is an essential and broadly employed function of signal processing. Accordingly, considerable development has focused on advancing interpolation algorithms toward optimal accuracy. Such development has motivated a clear shift in the state-of-the art from classical interpolation to more intelligent and resourceful approaches, registration-based interpolation for example. As a natural result, many of the most accurate current algorithms are highly complex, specific, and computationally demanding. However, the diverse hardware destinations for interpolation algorithms present unique constraints that often preclude use of the most accurate available options. For example, while computationally demanding interpolators may be suitable for highly equipped image processing platforms (e.g., computer workstations and clusters), only more efficient interpolators may be practical for less well equipped platforms (e.g., smartphones and tablet computers). The latter examples of consumer electronics present a design tradeoff in this regard: high accuracy interpolation benefits the consumer experience but computing capabilities are limited. It follows that interpolators with favorable combinations of accuracy and efficiency are of great practical value to the consumer electronics industry. We address multidimensional interpolation-based image processing problems that are common to consumer electronic devices through a decomposition approach. The multidimensional problems are first broken down into multiple, independent, one-dimensional (1-D) interpolation steps that are then executed with a newly modified registration-based one-dimensional control grid interpolator. The proposed approach, decomposed multidimensional control grid interpolation (DMCGI), combines the accuracy of registration-based interpolation with the simplicity, flexibility, and computational efficiency of a 1-D interpolation framework. Results demonstrate that DMCGI provides improved interpolation accuracy (and other benefits) in image resizing, color sample demosaicing, and video deinterlacing applications, at a computational cost that is manageable or reduced in comparison to popular alternatives.
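The decomposition idea can be illustrated with a toy separable resize in which each dimension is handled by an independent 1-D pass; plain linear interpolation stands in here for the registration-based 1-D control grid interpolator that DMCGI actually uses.

```python
import numpy as np

def resize_separable(image, new_h, new_w):
    """Resize a 2-D image with two independent 1-D interpolation passes."""
    h, w = image.shape
    # Pass 1: interpolate every row to the new width.
    xs_old, xs_new = np.arange(w), np.linspace(0, w - 1, new_w)
    rows = np.stack([np.interp(xs_new, xs_old, row) for row in image])
    # Pass 2: interpolate every column of the intermediate result.
    ys_old, ys_new = np.arange(h), np.linspace(0, h - 1, new_h)
    cols = np.stack([np.interp(ys_new, ys_old, col) for col in rows.T])
    return cols.T    # shape (new_h, new_w)
```

In DMCGI, each 1-D pass instead solves a small registration problem along the line, which is where the accuracy gain over classical separable interpolation comes from.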
Real-time windowing in imaging radar using FPGA technique
NASA Astrophysics Data System (ADS)
Ponomaryov, Volodymyr I.; Escamilla-Hernandez, Enrique
2005-02-01
The imaging radar uses high-frequency electromagnetic waves reflected from different objects to estimate their parameters. Pulse compression is a standard signal processing technique used to minimize the peak transmission power, maximize the SNR, and obtain better resolution. Usually pulse compression is achieved with a matched filter, and the side-lobe level in imaging radar can be reduced by weighting-function processing. Many weighting functions are widely used in signal processing applications: Hamming, Hanning, Blackman, Chebyshev, Blackman-Harris, Kaiser-Bessel, etc. Field-programmable gate arrays (FPGAs) offer great benefits such as instantaneous implementation, dynamic reconfiguration, and field programmability; this reconfigurability makes FPGAs a better solution than custom-made integrated circuits. This work demonstrates a reasonably flexible implementation of linear-FM signal generation and pulse compression using Matlab, Simulink, and System Generator. Employing an FPGA and the mentioned software, we have implemented pulse compression on the FPGA using classical and novel window techniques to reduce the side-lobe level, which increases the ability to detect small or closely spaced targets in imaging radar. The real-time parallelism of the FPGA permits the proposed algorithms to be realized. The paper also presents experimental results of the proposed windowing procedure in a marine radar with the following parameters: linear FM (chirp) signal; frequency deviation DF of 9.375 MHz; pulse width T of 3.2 μs; 800 taps in the matched filter; sampling frequency of 253.125 MHz. Side-lobe levels were reduced in real time, permitting better resolution of small targets.
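The windowing effect itself is easy to reproduce offline; the paper's implementation runs in Matlab/Simulink on the FPGA, whereas the numpy sketch below only models the signal processing with the quoted radar parameters (the filter length follows from T * fs, about 810 samples here, close to the paper's 800 taps).

```python
import numpy as np

fs = 253.125e6                      # sampling frequency, Hz
T = 3.2e-6                          # pulse width, s
df = 9.375e6                        # frequency deviation, Hz

t = np.arange(int(T * fs)) / fs     # samples across one pulse
chirp = np.exp(1j * np.pi * (df / T) * t**2)        # linear-FM pulse
matched = np.conj(chirp[::-1])                      # matched filter
weighted = matched * np.hamming(matched.size)       # side-lobe taper

compressed = np.abs(np.convolve(chirp, weighted))
peak_db = 20 * np.log10(compressed / compressed.max() + 1e-12)
# The Hamming taper trades a slightly wider main lobe for side lobes
# roughly 40 dB down, versus about 13 dB for the unweighted filter.
```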
Computation and Dynamics: Classical and Quantum
NASA Astrophysics Data System (ADS)
Kisil, Vladimir V.
2010-05-01
We discuss classical and quantum computations in terms of the corresponding Hamiltonian dynamics. This allows us to introduce quantum computations which involve parallel processing of both the data and the programme instructions. Using mixed quantum-classical dynamics, we look for the full cost of computations on quantum computers with classical terminals.
ERIC Educational Resources Information Center
Álvarez-Rúa, Carmen; Borge, Javier
2016-01-01
Thermodynamic processes are complex phenomena that can be understood as a set of successive stages. When treating processes, classical thermodynamics (and most particularly, the Gibbsian formulation, predominantly used in chemistry) only pays attention to initial and final states. However, reintroducing the notion of process is absolutely…
Using hyperentanglement to enhance resolution, signal-to-noise ratio, and measurement time
NASA Astrophysics Data System (ADS)
Smith, James F.
2017-03-01
A hyperentanglement-based atmospheric imaging/detection system involving only a signal and an ancilla photon will be considered for optical and infrared frequencies. Only the signal photon will propagate in the atmosphere and its loss will be classical. The ancilla photon will remain within the sensor experiencing low loss. Closed form expressions for the wave function, normalization, density operator, reduced density operator, symmetrized logarithmic derivative, quantum Fisher information, quantum Cramer-Rao lower bound, coincidence probabilities, probability of detection, probability of false alarm, probability of error after M measurements, signal-to-noise ratio, quantum Chernoff bound, time-on-target expressions related to probability of error, and resolution will be provided. The effect of noise in every mode will be included as well as loss. The system will provide the basic design for an imaging/detection system functioning at optical or infrared frequencies that offers better than classical angular and range resolution. Optimization for enhanced resolution will be included. The signal-to-noise ratio will be increased by a factor equal to the number of modes employed during the hyperentanglement process. Likewise, the measurement time can be reduced by the same factor. The hyperentanglement generator will typically make use of entanglement in polarization, energy-time, orbital angular momentum and so on. Mathematical results will be provided describing the system's performance as a function of loss mechanisms and noise.
ERIC Educational Resources Information Center
Bauer-Dantoin, Angela C.; Hanke, Craig J.
2007-01-01
Two significant benefits derived from reading and discussing classic scientific papers in undergraduate biology courses are 1) providing students with the realistic perspective that science is an ongoing process (rather than a set of inarguable facts) and 2) deepening the students' understanding of physiological processes. A classic paper that is…
NASA Astrophysics Data System (ADS)
Dinten, Jean-Marc; Petié, Philippe; da Silva, Anabela; Boutet, Jérôme; Koenig, Anne; Hervé, Lionel; Berger, Michel; Laidevant, Aurélie; Rizo, Philippe
2006-03-01
Optical imaging of fluorescent probes is an essential tool for the investigation of molecular events in small animals for drug development. In order to obtain localization and quantification information on fluorescent labels, CEA-LETI has developed efficient approaches in classical reflectance imaging as well as in diffuse optical tomographic imaging with continuous and temporal signals. This paper presents an overview of the different approaches investigated and their performances. High-quality fluorescence reflectance imaging is obtained thanks to the development of an original "multiple wavelengths" system. The uniformity of the excitation light over the surface area is better than 15%. Combined with the use of adapted fluorescent probes, this system enables accurate detection of pathological tissues, such as nodules, beneath the observed area of the animal. Performance for the detection of ovarian nodules on a nude mouse is shown. In order to investigate deeper inside animals and obtain 3D localization, diffuse optical tomography systems are being developed for both slab and cylindrical geometries. For these two geometries, our reconstruction algorithms are based on analytical expressions of light diffusion. Thanks to an accurate treatment of the light/matter interaction process in the algorithms, high-quality reconstructions of tumors in mice have been obtained. Reconstructions of lung tumors in mice are presented. By the use of temporal diffuse optical imaging, localization and quantification performance can be improved at the price of a more sophisticated acquisition system and more elaborate information processing methods. Such a system, based on a pulsed laser diode and a time-correlated single photon counting system, has been set up. The performance of this system for localization and quantification of fluorescent probes is presented.
Morawski, Markus; Kirilina, Evgeniya; Scherf, Nico; Jäger, Carsten; Reimann, Katja; Trampel, Robert; Gavriilidis, Filippos; Geyer, Stefan; Biedermann, Bernd; Arendt, Thomas; Weiskopf, Nikolaus
2017-11-28
Recent breakthroughs in magnetic resonance imaging (MRI) have enabled quantitative relaxometry and diffusion-weighted imaging with sub-millimeter resolution. Combined with biophysical models of MR contrast, the emerging methods promise in vivo mapping of cyto- and myelo-architectonics, i.e., in vivo histology using MRI (hMRI) in humans. The hMRI methods require histological reference data for model building and validation. This is currently provided by MRI on post mortem human brain tissue in combination with classical histology on sections. However, this well-established approach is limited to qualitative 2D information, while a systematic validation of hMRI requires quantitative 3D information on macroscopic voxels. We present a promising histological method based on optical 3D imaging combined with a tissue clearing method, Clear Lipid-exchanged Acrylamide-hybridized Rigid Imaging compatible Tissue hYdrogel (CLARITY), adapted for hMRI validation. Adapting CLARITY to the needs of hMRI is challenging due to poor antibody penetration into large sample volumes and the high opacity of aged post mortem human brain tissue. In a pilot experiment we achieved transparency of up to 8 mm-thick samples and immunohistochemical staining of up to 5 mm-thick post mortem brain tissue by a combination of active and passive clearing with prolonged clearing and staining times. We combined 3D optical imaging of the cleared samples with tailored image processing methods. We demonstrated the feasibility of quantifying neuron density, fiber orientation distribution, and cell type classification within a volume of size similar to a typical MRI voxel. The presented combination of MRI, 3D optical microscopy, and image processing is a promising tool for validation of MRI-based microstructure estimates.
Demi, Libertario; Ramalli, Alessandro; Giannini, Gabriele; Mischi, Massimo
2015-01-01
In classic pulse-echo ultrasound imaging, the data acquisition rate is limited by the speed of sound. To overcome this, parallel beamforming techniques in transmit (PBT) and in receive (PBR) mode have been proposed. In particular, PBT techniques, based on the transmission of focused beams, are more suitable for harmonic imaging because they are capable of generating stronger harmonics. Recently, orthogonal frequency division multiplexing (OFDM) has been investigated as a means to obtain parallel beamformed tissue harmonic images. To date, only numerical studies and experiments in water have been performed, hence neglecting the effect of frequency-dependent absorption. Here we present the first in vitro and in vivo tissue harmonic images obtained with PBT by means of OFDM, and we compare the results with classic B-mode tissue harmonic imaging. The resulting contrast-to-noise ratio, used here as a performance metric, is comparable. A reduction of 2 dB is observed in the case in which three parallel lines are reconstructed. In conclusion, the applicability of this technique to ultrasonography as a means to improve the data acquisition rate is confirmed.
Early differential processing of material images: Evidence from ERP classification.
Wiebel, Christiane B; Valsecchi, Matteo; Gegenfurtner, Karl R
2014-06-24
Investigating the temporal dynamics of natural image processing using event-related potentials (ERPs) has a long tradition in object recognition research. In a classical Go-NoGo task two characteristic effects have been emphasized: an early task-independent category effect and a later task-dependent target effect. Here, we set out to use this well-established Go-NoGo paradigm to study the time course of material categorization. Material perception has gained increasing interest over the years, as its importance in natural viewing conditions was long ignored. In addition to analyzing standard ERPs, we conducted a single-trial ERP pattern analysis. To validate this procedure, we also measured ERPs in two object categories (people and animals). Our linear classification procedure was able to largely capture the overall pattern of results from the canonical analysis of the ERPs and even extend it. We replicate the known target effect (differential Go-NoGo potential at frontal sites) for the material images. Furthermore, we observe task-independent differential activity between the two material categories as early as 140 ms after stimulus onset. Using our linear classification approach, we show that material categories can be differentiated consistently based on the ERP pattern in single trials around 100 ms after stimulus onset, independent of the target-related status. This strengthens the idea of early differential visual processing of material categories independent of the task, probably due to differences in low-level image properties, and suggests pattern classification of ERP topographies as a strong instrument for investigating electrophysiological brain activity.
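For orientation, a minimal sketch of single-trial linear classification of ERP topographies follows, using linear discriminant analysis on simulated trial-by-channel data; the data, labels, and classifier are illustrative stand-ins, not the authors' pipeline.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Simulated single-trial ERP data: trials x channels, taken from one
# post-stimulus time window (e.g., around 100 ms). Shapes and labels
# are illustrative only.
rng = np.random.default_rng(0)
n_trials, n_channels = 200, 64
labels = rng.integers(0, 2, n_trials)          # two material categories
topo = rng.normal(size=n_channels)             # class-specific topography
X = rng.normal(size=(n_trials, n_channels)) + 0.5 * np.outer(labels, topo)

# Linear classification of single-trial topographies, analogous in
# spirit to the pattern analysis described above.
clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, X, labels, cv=5)
print(f"decoding accuracy: {scores.mean():.2f}")
```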
A Freeware Path to Neutron Computed Tomography
NASA Astrophysics Data System (ADS)
Schillinger, Burkhard; Craft, Aaron E.
Neutron computed tomography has become a routine method at many neutron sources due to the availability of digital detection systems, powerful computers and advanced software. The commercial packages Octopus by Inside Matters and VGStudio by Volume Graphics have been established as a quasi-standard for high-end computed tomography. However, these packages require a stiff investment and are available to the users only on-site at the imaging facility to do their data processing. There is a demand from users to have image processing software at home to do further data processing; in addition, neutron computed tomography is now being introduced even at smaller and older reactors. Operators need to show a first working tomography setup before they can obtain a budget to build an advanced tomography system. Several packages are available on the web for free; however, these have been developed for X-rays or synchrotron radiation and are not immediately useable for neutron computed tomography. Three reconstruction packages and three 3D-viewers have been identified and used even for Gigabyte datasets. This paper is not a scientific publication in the classic sense, but is intended as a review to provide searchable help to make the described packages usable for the tomography community. It presents the necessary additional preprocessing in ImageJ, some workarounds for bugs in the software, and undocumented or badly documented parameters that need to be adapted for neutron computed tomography. The result is a slightly complicated, but surprisingly high-quality path to neutron computed tomography images in 3D, but not a replacement for the even more powerful commercial software mentioned above.
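As an example of the kind of preprocessing the review refers to, here is a minimal flat-field/dark-field normalization sketch in Python; the steps described in the paper are carried out in ImageJ, and the function name and epsilon guard are assumptions.

```python
import numpy as np

def normalize_projection(raw, open_beam, dark, eps=1e-6):
    """Standard flat-field/dark-field correction for neutron (or X-ray)
    projections, the kind of preprocessing typically done before
    handing data to a reconstruction package."""
    raw = raw.astype(np.float64)
    transmission = (raw - dark) / np.maximum(open_beam - dark, eps)
    transmission = np.clip(transmission, eps, None)
    # Line integrals for filtered back-projection: -ln(transmission).
    return -np.log(transmission)
```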
On the co-creation of classical and modern physics.
Staley, Richard
2005-12-01
While the concept of "classical physics" has long framed our understanding of the environment from which modern physics emerged, it has consistently been read back into a period in which the physicists concerned initially considered their work in quite other terms. This essay explores the shifting currency of the rich cultural image of the classical/modern divide by tracing empirically different uses of "classical" within the physics community from the 1890s to 1911. A study of fin-de-siècle addresses shows that the earliest general uses of the concept proved controversial. Our present understanding of the term was in large part shaped by its incorporation (in different ways) within the emerging theories of relativity and quantum theory, where the content of "classical" physics was defined by proponents of the new. Studying the diverse ways in which Boltzmann, Larmor, Poincaré, Einstein, Minkowski, and Planck invoked the term "classical" will help clarify the critical relations between physicists' research programs and their use of worldview arguments in fashioning modern physics.
MO-DE-209-03: Assessing Image Quality
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, W.
Digital Breast Tomosynthesis (DBT) is rapidly replacing mammography as the standard of care in breast cancer screening and diagnosis. DBT is a form of computed tomography, in which a limited set of projection images are acquired over a small angular range and reconstructed into tomographic data. The angular range varies from 15° to 50° and the number of projections varies between 9 and 25, as determined by the equipment manufacturer. It is equally valid to treat DBT as the digital analog of classical tomography, that is, linear tomography. In fact, the name "tomosynthesis" stands for "synthetic tomography." DBT shares many common features with classical tomography, including the radiographic appearance, dose, and image quality considerations. As such, both the science and the practical physics of DBT systems are a hybrid between computed tomography and classical tomographic methods. In this lecture, we will explore the continuum from radiography to computed tomography to illustrate the characteristics of DBT. This lecture will consist of four presentations that provide a complete overview of DBT, including a review of the fundamentals of DBT acquisition, a discussion of DBT reconstruction methods, an overview of dosimetry for DBT systems, and a summary of the underlying image theory of DBT, thereby relating image quality and dose. Learning Objectives: To understand the fundamental principles behind tomosynthesis image acquisition. To understand the fundamentals of tomosynthesis image reconstruction. To learn the determinants of image quality and dose in DBT, including measurement techniques. To learn the image theory underlying tomosynthesis, and the relationship between dose and image quality.
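To illustrate the "synthetic tomography" idea, a minimal shift-and-add reconstruction sketch follows, under an idealized parallel-shift geometry; commercial DBT systems use filtered back-projection or iterative methods, and all names and parameters here are illustrative.

```python
import numpy as np

def shift_and_add(projections, angles_deg, plane_height, pixel_pitch):
    """Minimal shift-and-add tomosynthesis sketch.

    For a reconstruction plane at height z above the detector, each
    projection is shifted by approximately z*tan(angle) and the results
    are averaged, bringing that plane into focus while blurring
    structures at other heights. np.roll's wrap-around and the flat
    geometry are simplifications of a real DBT system.
    """
    recon = np.zeros_like(projections[0], dtype=np.float64)
    for proj, ang in zip(projections, angles_deg):
        shift_px = int(round(plane_height * np.tan(np.radians(ang)) / pixel_pitch))
        recon += np.roll(proj, shift_px, axis=1)
    return recon / len(projections)
```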
NASA Astrophysics Data System (ADS)
Aydogan, D.
2007-04-01
An image processing technique called the cellular neural network (CNN) approach is used in this study to locate geological features giving rise to gravity anomalies, such as faults or the boundary of two geologic zones. CNN is a stochastic image processing technique based on template optimization using the neighborhood relationships of cells. These cells can be characterized by a functional block diagram that is typical of neural network theory. The functionality of CNN is described in its entirety by a number of small matrices (A, B and I) called the cloning template. CNN can also be considered a nonlinear convolution of these matrices. This template describes the strength of the nearest-neighbor interconnections in the network. The recurrent perceptron learning algorithm (RPLA) is used in the optimization of the cloning template. The CNN and standard Canny algorithms were first tested on two sets of synthetic gravity data with the aim of checking the reliability of the proposed approach. The CNN method was compared with classical derivative techniques by applying the cross-correlation method (CC) to the same anomaly map, as this latter approach can detect some features that are difficult to identify on Bouguer anomaly maps. This approach was then applied to the Bouguer anomaly map of Biga and its surrounding area, in Turkey. Structural features in the area between Bandirma, Biga, Yenice and Gonen in the southwest Marmara region are investigated by applying the CNN and CC to the Bouguer anomaly map. Faults identified by these algorithms are generally in accordance with previously mapped surface faults. These examples show that geologic boundaries can be detected from Bouguer anomaly maps using the cloning template approach. A visual evaluation of the outputs of the CNN and CC approaches is carried out, and the results are compared with each other. This approach provides quantitative solutions based on just a few assumptions, which makes the method more powerful than the classical methods.
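A minimal sketch of the CNN state equation described above follows, with a commonly quoted edge-detection template pair standing in for the paper's RPLA-optimized cloning templates (the templates are an assumption, not the paper's result).

```python
import numpy as np
from scipy.signal import convolve2d

def cnn_run(u, A, B, I, steps=100, dt=0.1):
    """Euler-discretized cellular neural network iteration:
    x' = -x + A*y + B*u + I, with output y = 0.5*(|x+1| - |x-1|).
    A and B are 3x3 cloning templates and I is a bias."""
    x = np.zeros_like(u, dtype=np.float64)
    for _ in range(steps):
        y = 0.5 * (np.abs(x + 1) - np.abs(x - 1))
        x += dt * (-x + convolve2d(y, A, mode="same")
                      + convolve2d(u, B, mode="same") + I)
    return 0.5 * (np.abs(x + 1) - np.abs(x - 1))

# A commonly quoted CNN edge-detection template pair; input u in [-1, 1].
A = np.array([[0, 0, 0], [0, 2, 0], [0, 0, 0]], float)
B = np.array([[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]], float)
out = cnn_run(np.sign(np.random.rand(32, 32) - 0.5), A, B, I=-0.5)
```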
Photosynthetic Energy Transfer at the Quantum/Classical Border.
Keren, Nir; Paltiel, Yossi
2018-06-01
Quantum mechanics diverges from the classical description of our world when very small scales or very fast processes are involved. Unlike classical mechanics, quantum effects cannot be easily related to our everyday experience and are often counterintuitive to us. Nevertheless, the dimensions and time scales of photosynthetic energy transfer processes put them close to the quantum/classical border, bringing them into the range of measurable quantum effects. Here we review recent advances in the field and suggest that photosynthetic processes can take advantage of the sensitivity of quantum effects to the environmental 'noise' as a means of tuning exciton energy transfer efficiency. If true, this design principle could be a basis for 'nontrivial' coherent wave property nano-devices.
Thermodynamics and Kinetics of Prenucleation Clusters, Classical and Non-Classical Nucleation
Zahn, Dirk
2015-01-01
Recent observations of prenucleation species and multi-stage crystal nucleation processes challenge the long-established view of the thermodynamics of crystal formation. Here, we review and generalize extensions to classical nucleation theory. Going beyond the conventional implementation that has been used for more than a century, nucleation inhibitors, precursor clusters, and non-classical nucleation processes are likewise rationalized by analogous concepts based on competing interface and bulk energy terms. This is illustrated by recent examples of species formed prior to (or instead of) crystal nucleation and by multi-step nucleation processes. Much of the discussed insight was obtained from molecular simulation using advanced sampling techniques, briefly summarized herein for both nucleation-controlled and diffusion-controlled aggregate formation. PMID:25914369
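For reference, the competing interface and bulk terms of classical nucleation theory that the review generalizes can be written as:

```latex
\Delta G(r) \;=\; \underbrace{\tfrac{4}{3}\pi r^{3}\,\Delta g_v}_{\text{bulk term}}
\;+\; \underbrace{4\pi r^{2}\,\gamma}_{\text{interface term}},
\qquad
r^{*} = -\frac{2\gamma}{\Delta g_v},
\qquad
\Delta G^{*} = \frac{16\pi\gamma^{3}}{3\,\Delta g_v^{2}}
```

with Δg_v < 0 the bulk free-energy gain per unit volume and γ the interfacial energy; the non-classical extensions discussed above supplement or replace these terms.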
Sarink, M J; Koelewijn, R; Slingerland, B C G C; Tielens, A G M; van Genderen, P J J; van Hellemond, J J
2018-06-28
Diagnosis of cystic echinococcosis (CE) is at present mainly based on imaging techniques. Serology has a complementary role, partly due to the small number of standardized and commercially available assays. Therefore, we examined the clinical performance of the SERION ELISA classic Echinococcus IgG test. Using 10 U/ml as a cut-off point, and serum samples from 50 CE patients and 105 healthy controls, the sensitivity and specificity were 98.0% and 96.2%, respectively. If patients with other infectious diseases were used as negative controls, the specificity decreased to 76.9%, which leads to poor positive predictive values. However, if results between 10 and 15 U/ml are classified as indecisive, the specificity of positive results (≥15 U/ml) increased to 92.5% without greatly affecting the sensitivity (92.0%). Using this approach in combination with imaging studies, the SERION ELISA classic Echinococcus IgG test can be a useful aid in the diagnosis of CE.
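Why a lower specificity causes poor positive predictive values follows directly from Bayes' rule; the sketch below computes PPV for the reported sensitivity/specificity pairs at an assumed 5% pre-test prevalence (the study does not report prevalence).

```python
def ppv(sens, spec, prevalence):
    """Positive predictive value from sensitivity, specificity, and
    pre-test prevalence (Bayes' rule)."""
    tp = sens * prevalence
    fp = (1 - spec) * (1 - prevalence)
    return tp / (tp + fp)

# (sensitivity, specificity) pairs reported above; prevalence assumed.
for sens, spec in [(0.98, 0.962), (0.98, 0.769), (0.92, 0.925)]:
    print(f"sens={sens:.2f}, spec={spec:.3f}: "
          f"PPV={ppv(sens, spec, 0.05):.2f}")
```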
Principle of minimal work fluctuations.
Xiao, Gaoyang; Gong, Jiangbin
2015-08-01
Understanding and manipulating work fluctuations in microscale and nanoscale systems are of both fundamental and practical interest. For example, in considering the Jarzynski equality ⟨e^{-βW}⟩ = e^{-βΔF}, a change in the fluctuations of e^{-βW} may impact how rapidly the statistical average of e^{-βW} converges towards the theoretical value e^{-βΔF}, where W is the work, β is the inverse temperature, and ΔF is the free energy difference between two equilibrium states. Motivated by our previous study aiming at the suppression of work fluctuations, here we obtain a principle of minimal work fluctuations. In brief, adiabatic processes as treated in quantum and classical adiabatic theorems yield the minimal fluctuations in e^{-βW}. In the quantum domain, if a system initially prepared at thermal equilibrium is subjected to a work protocol but isolated from a bath during the time evolution, then a quantum adiabatic process without energy level crossing (or an assisted adiabatic process reaching the same final states as in a conventional adiabatic process) yields the minimal fluctuations in e^{-βW}, where W is the quantum work defined by two energy measurements at the beginning and at the end of the process. In the classical domain, where the classical work protocol is realizable by an adiabatic process, the classical adiabatic process also yields the minimal fluctuations in e^{-βW}. Numerical experiments based on a Landau-Zener process confirm our theory in the quantum domain, and our theory in the classical domain explains our previous numerical findings regarding the suppression of classical work fluctuations [G. Y. Xiao and J. B. Gong, Phys. Rev. E 90, 052132 (2014)].
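For reference, the two-point-measurement definition of quantum work and the fluctuation quantity at issue can be written as:

```latex
W \;=\; E^{f}_{m} - E^{i}_{n}
\quad\text{(two energy measurements, start and end)},
\qquad
\bigl\langle e^{-\beta W} \bigr\rangle \;=\; e^{-\beta \Delta F},
\qquad
\operatorname{Var}\!\bigl(e^{-\beta W}\bigr)
 \;=\; \bigl\langle e^{-2\beta W} \bigr\rangle - e^{-2\beta \Delta F}
```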
Ulrich, Martin; Adams, Sarah C; Kiefer, Markus
2014-11-01
In classical theories of attention, unconscious automatic processes are thought to be independent of higher-level attentional influences. Here, we propose that unconscious processing depends on attentional enhancement of task-congruent processing pathways implemented by a dynamic modulation of the functional communication between brain regions. Using functional magnetic resonance imaging, we tested our model with a subliminally primed lexical decision task preceded by an induction task preparing either a semantic or a perceptual task set. Subliminal semantic priming was significantly greater after semantic compared to perceptual induction in ventral occipito-temporal (vOT) and inferior frontal cortex, brain areas known to be involved in semantic processing. The functional connectivity pattern of vOT varied depending on the induction task and successfully predicted the magnitude of behavioral and neural priming. Together, these findings support the proposal that dynamic establishment of functional networks by task sets is an important mechanism in the attentional control of unconscious processing.
Secure Oblivious Hiding, Authentication, Tamper Proofing, and Verification Techniques
2002-08-01
compressing the bit-planes. The algorithm always starts with inspecting the 5th LSB plane. For color images, all three color channels are compressed... use classical encryption engines, such as IDEA or DES. These algorithms have a fixed encryption block size, and, depending on the image dimensions, we... information can be stored either in a separate file, in the image header, or embedded in the image itself utilizing the modern concepts of steganography
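The LSB-embedding baseline mentioned above can be sketched in a few lines; this is the generic technique, not the report's specific algorithm, and all names are illustrative.

```python
import numpy as np

def lsb_embed(cover, bits):
    """Embed a bit stream into the least significant bits of an 8-bit
    image: the classic, fragile baseline. Returns a new stego array
    (flatten() copies the data)."""
    flat = cover.flatten()
    assert bits.size <= flat.size
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return flat.reshape(cover.shape)

def lsb_extract(stego, n_bits):
    return stego.flatten()[:n_bits] & 1

cover = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
payload = np.random.randint(0, 2, 16, dtype=np.uint8)
stego = lsb_embed(cover, payload)
assert np.array_equal(lsb_extract(stego, 16), payload)
```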
Bókkon, I; Salari, V; Tuszynski, J A; Antal, I
2010-09-02
Recently, we have proposed a redox molecular hypothesis about the natural biophysical substrate of visual perception and imagery [1,6]. Namely, the retina transforms external photon signals into electrical signals that are carried to V1 (striate cortex). Then, V1 retinotopic electrical signals (spike-related electrical signals along classical axonal-dendritic pathways) can be converted into regulated ultraweak bioluminescent photons (biophotons) through redox processes within retinotopic visual neurons, which make it possible to create intrinsic biophysical pictures during visual perception and imagery. However, the consensus opinion is to consider biophotons as by-products of cellular metabolism. This paper argues that biophotons are not mere by-products but rather originate from regulated cellular radical/redox processes. It also shows that the biophoton intensity can be considerably higher inside cells than outside. Our simple calculations suggest, within a level of accuracy, that the real biophoton intensity in retinotopic neurons may be sufficient for creating an intrinsic biophysical picture representation of a single-object image during visual perception.
Bidirectional Classical Stochastic Processes with Measurements and Feedback
NASA Technical Reports Server (NTRS)
Hahne, G. E.
2005-01-01
A measurement on a quantum system is said to cause the "collapse" of the quantum state vector or density matrix. An analogous collapse occurs with measurements on a classical stochastic process. This paper addresses the question of describing the response of a classical stochastic process when there is feedback from the output of a measurement to the input, and is intended to give a model for quantum-mechanical processes that occur along a space-like reaction coordinate. The classical system can be thought of in physical terms as two counterflowing probability streams, which stochastically exchange probability currents in such a way that the net probability current, and hence the overall probability, suitably interpreted, is conserved. The proposed formalism extends the mathematics of those stochastic processes describable with linear, single-step, unidirectional transition probabilities, known as Markov chains and stochastic matrices. It is shown that a certain rearrangement and combination of the input and output of two stochastic matrices of the same order yields another matrix of the same type. Each measurement causes the partial collapse of the probability current distribution in the midst of such a process, giving rise to calculable, but non-Markov, values for the ensuing modification of the system's output probability distribution. The paper concludes with an analysis of a classical probabilistic version of the so-called grandfather paradox.
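The closure property of ordinary stochastic matrices that the proposed formalism extends can be checked directly; a small numerical illustration:

```python
import numpy as np

# The product of two (row-)stochastic matrices is again stochastic,
# the closure property that the paper's input/output rearrangement
# generalizes to bidirectional processes.
rng = np.random.default_rng(1)
P = rng.random((4, 4)); P /= P.sum(axis=1, keepdims=True)
Q = rng.random((4, 4)); Q /= Q.sum(axis=1, keepdims=True)
R = P @ Q
print(np.allclose(R.sum(axis=1), 1.0))  # True: rows still sum to 1
```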
Photodissociation of methyl formate: Conical intersections, roaming and triple fragmentation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, King-Chuen; Tsai, Po-Yu; Institute of Atomic and Molecular Sciences, Academia Sinica, Taipei 106, Taiwan
2015-12-31
The photodissociation channels of methyl formate have been extensively investigated by two different advanced experimental techniques, ion imaging and Fourier-transform infrared emission spectroscopy, combined with quantum chemical calculations and molecular dynamics simulations. Our aim is to characterize the role of alternative routes to the conventional transition-state-mediated pathway: the roaming and triple fragmentation processes. The photolysis experiments, carried out at a range of laser wavelengths in the vicinity of the triple fragmentation threshold, together with the simulation of large batches of classical trajectories with different initial conditions, have shown that both mechanisms share a common path that involves a conical intersection during the relaxation process from the electronic excited state S₁ to the ground state S₀.
A review of biomedical multiphoton microscopy and its laser sources
NASA Astrophysics Data System (ADS)
Lefort, Claire
2017-10-01
Multiphoton microscopy (MPM) has been the subject of major development efforts for about 25 years for imaging biological specimens at the micron scale, and has been presented as an elegant alternative to classical fluorescence methods such as confocal microscopy. In this topical review, the main interests and technical requirements of MPM are addressed, with a focus on the crucial role of the excitation source in the optimization of multiphoton processes. Then, an overview of the different sources successfully demonstrated in the literature for MPM is presented, and their physical parameters are inventoried. A classification of these sources according to their ability to optimize multiphoton processes is proposed, following a protocol found in the literature. Starting from these considerations, a suggestion of a possible identikit of the ideal laser source for MPM concludes this topical review. Dedicated to Martin.
Conditionally prepared photon and quantum imaging
NASA Astrophysics Data System (ADS)
Lvovsky, Alexander I.; Aichele, Thomas
2004-10-01
We discuss a classical model allowing one to visualize and characterize the optical mode of the single photon generated by means of a conditional measurement on a biphoton produced in parametric down-conversion. The model is based on Klyshko's advanced wave interpretation, but extends beyond it, providing a precise mathematical description of the advanced wave. The optical mode of the conditional photon is shown to be identical to the mode of the classical difference-frequency field generated due to nonlinear interaction of the partially coherent advanced wave with the pump pulse. With this "nonlinear advanced wave model" most coherence properties of the conditional photon become manifest, which permits one to intuitively understand many recent results, in particular, in quantum imaging.
Conservative classical and quantum resolution limits for incoherent imaging
NASA Astrophysics Data System (ADS)
Tsang, Mankei
2018-06-01
I propose classical and quantum limits to the statistical resolution of two incoherent optical point sources from the perspective of minimax parameter estimation. Unlike earlier results based on the Cramér-Rao bound (CRB), the limits proposed here, based on the worst-case error criterion and a Bayesian version of the CRB, are valid for any biased or unbiased estimator and obey photon-number scalings that are consistent with the behaviours of actual estimators. These results prove that, from the minimax perspective, the spatial-mode demultiplexing measurement scheme recently proposed by Tsang, Nair, and Lu [Phys. Rev. X 6, 031033 (2016)] remains superior to direct imaging for sufficiently high photon numbers.
Trajectory description of the quantum–classical transition for wave packet interference
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chou, Chia-Chun, E-mail: ccchou@mx.nthu.edu.tw
2016-08-15
The quantum–classical transition for wave packet interference is investigated using a hydrodynamic description. A nonlinear quantum–classical transition equation is obtained by introducing a degree of quantumness, ranging from zero to one, into the classical time-dependent Schrödinger equation. This equation provides a continuous description of the transition process of physical systems from purely quantum to purely classical regimes. In this study, the transition trajectory formalism is developed to provide a hydrodynamic description of the quantum–classical transition. The flow momentum of transition trajectories is defined by the gradient of the action function in the transition wave function, and these trajectories follow the main features of the evolving probability density. Then, the transition trajectory formalism is employed to analyze the quantum–classical transition of wave packet interference. For collision-like wave packet interference, where the propagation velocity is faster than the spreading speed of the wave packet, the interference process remains collision-like for all degrees of quantumness. However, the interference features demonstrated by transition trajectories gradually disappear when the degree of quantumness approaches zero. For diffraction-like wave packet interference, the interference process changes continuously from a diffraction-like to a collision-like case as the degree of quantumness gradually decreases. This study provides an insightful trajectory interpretation of the quantum–classical transition of wave packet interference.
Change detection of polarimetric SAR images based on the KummerU Distribution
NASA Astrophysics Data System (ADS)
Chen, Quan; Zou, Pengfei; Li, Zhen; Zhang, Ping
2014-11-01
In the field of PolSAR image segmentation, change detection, and classification, the classical Wishart distribution has been used for a long time, but it is especially suited to low-resolution SAR images, because with traditional sensors only a small number of scatterers are present in each resolution cell. With the improvement of SAR systems in recent years, the classical statistical models can therefore be reconsidered for the high resolution and polarimetric information contained in the images acquired by these advanced systems. In this study, SAR image segmentation based on the level-set method, augmented with distance regularized level-set evolution (DRLSE), is performed using Envisat/ASAR single-polarization data and Radarsat-2 polarimetric images, respectively. The KummerU heterogeneous clutter model is used in the latter to overcome the homogeneity hypothesis at high-resolution cells. An enhanced distance regularized level-set evolution (DRLSE-E) is also applied in the latter case, to ensure accurate computation and stable level-set evolution. Finally, change detection based on four polarimetric Radarsat-2 time-series images is carried out for the Genhe area of the Inner Mongolia Autonomous Region, northeastern China, where a heavy flood disaster occurred during the summer of 2013; the results show that the recommended segmentation method can detect the change of the watershed effectively.
Novel hyperspectral prediction method and apparatus
NASA Astrophysics Data System (ADS)
Kemeny, Gabor J.; Crothers, Natalie A.; Groth, Gard A.; Speck, Kathy A.; Marbach, Ralf
2009-05-01
Both the power and the challenge of hyperspectral technologies lie in the very large amount of data produced by spectral cameras. While off-line methodologies allow the collection of gigabytes of data, extended data analysis sessions are required to convert the data into useful information. In contrast, real-time monitoring, such as on-line process control, requires that compression of spectral data and analysis occur at a sustained full camera data rate. Efficient, high-speed practical methods for calibration and prediction are therefore sought to optimize the value of hyperspectral imaging. A novel method of matched filtering known as science based multivariate calibration (SBC) was developed for hyperspectral calibration. Classical (MLR) and inverse (PLS, PCR) methods are combined by spectroscopically measuring the spectral "signal" and by statistically estimating the spectral "noise." The accuracy of the inverse model is thus combined with the easy interpretability of the classical model. The SBC method is optimized for hyperspectral data in the Hyper-Cal™ software used for the present work. The prediction algorithms can then be downloaded into a dedicated FPGA-based High-Speed Prediction Engine™ module. Spectral pretreatments and calibration coefficients are stored on interchangeable SD memory cards, and predicted compositions are produced on a USB interface at real-time camera output rates. Applications include minerals, pharmaceuticals, food processing, and remote sensing.
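A generic matched-filter regression vector consistent with the SBC idea (signal measured spectroscopically, noise covariance estimated statistically) can be sketched as follows; this is an assumed textbook form, not the vendor's implementation, and all names and values are illustrative.

```python
import numpy as np

def sbc_regression_vector(signal, noise_cov):
    """Matched-filter style regression vector b = C^-1 s / (s^T C^-1 s),
    combining the measured spectral 'signal' s with a statistical
    estimate of the spectral 'noise' covariance C."""
    w = np.linalg.solve(noise_cov, signal)
    return w / (signal @ w)

# Prediction is then one dot product per pixel spectrum, cheap enough
# for FPGA execution at camera rates.
n_bands = 100
s = np.exp(-0.5 * ((np.arange(n_bands) - 50) / 6.0) ** 2)  # analyte band
C = 0.01 * np.eye(n_bands)                                  # toy noise model
b = sbc_regression_vector(s, C)
concentration = (2.5 * s + 0.1 * np.random.randn(n_bands)) @ b
print(round(concentration, 2))  # close to 2.5
```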
Eagle's syndrome-A non-perceived differential diagnosis of temporomandibular disorder.
Thoenissen, P; Bittermann, G; Schmelzeisen, R; Oshima, T; Fretwurst, T
2015-01-01
This article presents a case of the classic styloid syndrome and shows that panoramic imaging and ultrasound can be an alternative to computed tomography. In addition, the endoscope-assisted extraoral approach using CT-based navigation is useful. Eagle's syndrome is an aggregate of symptoms described by Eagle in 1937. He described different forms: the classic styloid syndrome, consisting of elongation of the styloid process, which causes pain; and the stylo-carotid-artery syndrome, which is responsible for transient ischemic attack or stroke. Using the example of a 66-year-old male patient suffering from long-term pain, we explain our diagnostic and surgical approach. After dissecting the styloid process of the right side using an extraoral approach, the pain ceased and the patient could be discharged without any recurrence of the pain up to this point. Eagle's syndrome, with its similar symptoms, is rather difficult to differentiate from temporomandibular joint disorders (TMD), but can be easily excluded from possible differential diagnoses of TMD using panoramic radiographs and ultrasound. Making use of low-cost and easily accessible diagnostic workup techniques can reveal this particular cause of chronic pain restricting quality of life. Thereby, differentiation from the TMD symptom complex is possible.
Non-stationary pre-envelope covariances of non-classically damped systems
NASA Astrophysics Data System (ADS)
Muscolino, G.
1991-08-01
A new formulation is given to evaluate the stationary and non-stationary response of linear non-classically damped systems subjected to multi-correlated non-separable Gaussian input processes. This formulation is based on a new and more suitable definition of the impulse response function matrix for such systems. It is shown that, when using this definition, the stochastic response of non-classically damped systems involves the evaluation of quantities similar to those of classically damped ones. Furthermore, considerations about non-stationary cross-covariances, spectral moments and pre-envelope cross-covariances are presented for a monocorrelated input process.
NASA Astrophysics Data System (ADS)
Chiu, L.; Vongsaard, J.; El-Ghazawi, T.; Weinman, J.; Yang, R.; Kafatos, M.
Due to the poor temporal sampling by satellites, data gaps exist in satellite-derived time series of precipitation. This poses a challenge for assimilating rainfall data into forecast models. To yield a continuous time series, the classic image processing technique of digital image morphing has been used. However, the digital morphing technique was applied manually, which is time consuming. In order to avoid human intervention in the process, an automatic procedure for image morphing is needed for real-time operations. For this purpose, the Genetic Algorithm Based Image Registration Automatic Morphing (GRAM) model was developed and tested in this paper. Specifically, the automatic morphing technique was integrated with a Genetic Algorithm and the Feature Based Image Metamorphosis technique to fill in data gaps between satellite coverage. The technique was tested using NOWRAD data, which are generated from the network of NEXRAD radars. Time series of NOWRAD data from storm Floyd, which occurred over the US eastern region on September 16, 1999, at 00:00, 01:00, 02:00, 03:00, and 04:00 am were used. The GRAM technique was applied to data collected at 00:00 and 04:00 am. These images were also manually morphed. Images at 01:00, 02:00, and 03:00 am were interpolated by GRAM and by manual morphing and compared with the original NOWRAD rain rates. The results show that the GRAM technique outperforms manual morphing: the correlation coefficients for the manually morphed images are 0.905, 0.900, and 0.905 at 01:00, 02:00, and 03:00 am, while the corresponding correlation coefficients for the GRAM technique are 0.946, 0.911, and 0.913, respectively. Index terms: Remote Sensing, Image Registration, Hydrology, Genetic Algorithm, Morphing, NEXRAD
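For orientation, the intensity-blending half of a morph can be sketched in a few lines; GRAM additionally warps features along GA-optimized correspondence lines (feature-based metamorphosis), which this sketch omits.

```python
import numpy as np

def morph_frames(img0, img1, n_inter):
    """Plain cross-dissolve between two co-registered rain-rate images,
    producing n_inter intermediate frames. Only the blending step of a
    full morph; the warping step is omitted here."""
    alphas = np.linspace(0.0, 1.0, n_inter + 2)[1:-1]
    return [(1 - a) * img0 + a * img1 for a in alphas]
```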
Quantum color image watermarking based on Arnold transformation and LSB steganography
NASA Astrophysics Data System (ADS)
Zhou, Ri-Gui; Hu, Wenwen; Fan, Ping; Luo, Gaofeng
In this paper, a quantum color image watermarking scheme is proposed through twice-scrambling by Arnold transformations and steganography of the least significant bit (LSB). Both the carrier image and the watermark image are represented by the novel quantum representation of color digital images model (NCQI). The image sizes for the carrier and watermark are assumed to be 2^n×2^n and 2^(n-1)×2^(n-1), respectively. At first, the watermark is scrambled into a disordered form through the image preprocessing technique of exchanging the image pixel positions and altering the color information, based on Arnold transforms, simultaneously. Then, the scrambled watermark with 2^(n-1)×2^(n-1) image size and 24-qubit grayscale is further expanded to an image with size 2^n×2^n and 6-qubit grayscale using the nearest-neighbor interpolation method. Finally, the scrambled and expanded watermark is embedded into the carrier by the steganography of the LSB scheme, and a key image of size 2^n×2^n with 3-qubit information is generated at the same time, so that the original watermark can be retrieved only with the key image. The extraction of the watermark is the reverse process of embedding, achieved by applying a sequence of operations in the reverse order. Simulation-based experimental results involving different carrier and watermark images (i.e. conventional or non-quantum) are obtained with the classical computer's MATLAB 2014b software, and illustrate that the present method performs well in terms of three criteria: visual quality, robustness, and steganography capacity.
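The classical analogue of the Arnold scrambling step is compact; a sketch on an N×N array follows (the quantum version acts on the position register of the NCQI state, and the iteration count would serve as part of the key).

```python
import numpy as np

def arnold_scramble(img, iterations=1):
    """Arnold transformation on an N x N image:
    (x, y) -> ((x + y) mod N, (x + 2y) mod N). Iterating scrambles the
    pixel positions; the map is invertible, so the watermark can be
    recovered by applying the inverse the same number of times."""
    n = img.shape[0]
    assert img.shape[0] == img.shape[1]
    out = img.copy()
    for _ in range(iterations):
        x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        nx, ny = (x + y) % n, (x + 2 * y) % n
        scrambled = np.empty_like(out)
        scrambled[nx, ny] = out[x, y]
        out = scrambled
    return out
```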
(Pea)nuts and bolts of visual narrative: Structure and meaning in sequential image comprehension
Cohn, Neil; Paczynski, Martin; Jackendoff, Ray; Holcomb, Phillip J.; Kuperberg, Gina R.
2012-01-01
Just as syntax differentiates coherent sentences from scrambled word strings, the comprehension of sequential images must also use a cognitive system to distinguish coherent narrative sequences from random strings of images. We conducted experiments analogous to two classic studies of language processing to examine the contributions of narrative structure and semantic relatedness to processing sequential images. We compared four types of comic strips: 1) Normal sequences with both structure and meaning, 2) Semantic Only sequences (in which the panels were related to a common semantic theme, but had no narrative structure), 3) Structural Only sequences (narrative structure but no semantic relatedness), and 4) Scrambled sequences of randomly-ordered panels. In Experiment 1, participants monitored for target panels in sequences presented panel-by-panel. Reaction times were slowest to panels in Scrambled sequences, intermediate in both Structural Only and Semantic Only sequences, and fastest in Normal sequences. This suggests that both semantic relatedness and narrative structure offer advantages to processing. Experiment 2 measured ERPs to all panels across the whole sequence. The N300/N400 was largest to panels in both the Scrambled and Structural Only sequences, intermediate in Semantic Only sequences and smallest in the Normal sequences. This implies that a combination of narrative structure and semantic relatedness can facilitate semantic processing of upcoming panels (as reflected by the N300/N400). Also, panels in the Scrambled sequences evoked a larger left-lateralized anterior negativity than panels in the Structural Only sequences. This localized effect was distinct from the N300/N400, and appeared despite the fact that these two sequence types were matched on local semantic relatedness between individual panels. These findings suggest that sequential image comprehension uses a narrative structure that may be independent of semantic relatedness. Altogether, we argue that the comprehension of visual narrative is guided by an interaction between structure and meaning. PMID:22387723
Restoration of uneven illumination in light sheet microscopy images.
Uddin, Mohammad Shorif; Lee, Hwee Kuan; Preibisch, Stephan; Tomancak, Pavel
2011-08-01
Light microscopy images suffer from poor contrast due to light absorption and scattering by the media. The resulting decay in contrast varies exponentially across the image along the incident light path. Classical space invariant deconvolution approaches, while very effective in deblurring, are not designed for the restoration of uneven illumination in microscopy images. In this article, we present a modified radiative transfer theory approach to solve the contrast degradation problem of light sheet microscopy (LSM) images. We confirmed the effectiveness of our approach through simulation as well as real LSM images.
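A crude first-order correction for the exponential decay described above, given for orientation only; the paper's modified radiative transfer approach models the degradation far more carefully, and mu here is an assumed attenuation coefficient.

```python
import numpy as np

def naive_attenuation_correction(img, mu, axis=1):
    """First-order compensation of exponential signal decay along the
    illumination axis of a light sheet image: divide each pixel by
    exp(-mu * depth), i.e. multiply by exp(mu * depth). A Beer-Lambert
    style correction for illustration only."""
    depth = np.arange(img.shape[axis], dtype=np.float64)
    gain = np.exp(mu * depth)
    shape = [1, 1]
    shape[axis] = -1
    return img * gain.reshape(shape)
```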
Quantum realization of the nearest-neighbor interpolation method for FRQI and NEQR
NASA Astrophysics Data System (ADS)
Sang, Jianzhi; Wang, Shen; Niu, Xiamu
2016-01-01
This paper is concerned with the feasibility of classical nearest-neighbor interpolation based on the flexible representation of quantum images (FRQI) and the novel enhanced quantum representation (NEQR). Firstly, the feasibility of classical image nearest-neighbor interpolation for quantum images in FRQI and NEQR is proven. Then, by defining the halving operation and by making use of quantum rotation gates, the concrete quantum circuit of nearest-neighbor interpolation for FRQI is designed for the first time. Furthermore, the quantum circuit of nearest-neighbor interpolation for NEQR is given. The merit of the proposed NEQR circuit lies in its low complexity, which is achieved by utilizing the halving operation and the quantum oracle operator. Finally, in order to further improve the performance of the former circuits, new interpolation circuits for FRQI and NEQR are presented by using Control-NOT gates instead of the halving operation. Simulation results show the effectiveness of the proposed circuits.
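The classical operation being ported to quantum circuits is simple; a sketch of floor-based nearest-neighbor resizing follows (the integer index mapping is the kind of arithmetic that a halving operation realizes on the position register; function and parameter names are illustrative).

```python
import numpy as np

def nearest_neighbor_resize(img, new_h, new_w):
    """Classical nearest-neighbor interpolation: each output pixel
    copies the closest input pixel, via a floor-based index map."""
    h, w = img.shape[:2]
    rows = np.minimum(np.arange(new_h) * h // new_h, h - 1)
    cols = np.minimum(np.arange(new_w) * w // new_w, w - 1)
    return img[rows[:, None], cols[None, :]]

img = np.arange(16).reshape(4, 4)
print(nearest_neighbor_resize(img, 8, 8).shape)  # (8, 8)
```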
DOE Office of Scientific and Technical Information (OSTI.GOV)
MacGillavry, Harold D., E-mail: h.d.macgillavry@uu.nl; Hoogenraad, Casper C., E-mail: c.hoogenraad@uu.nl
2015-07-15
The molecular architecture of dendritic spines defines the efficiency of signal transmission across excitatory synapses. It is therefore critical to understand the mechanisms that control the dynamic localization of the molecular constituents within spines. However, because of the small scale at which most processes within spines take place, conventional light microscopy techniques are not adequate to provide the necessary level of resolution. Recently, super-resolution imaging techniques have overcome the classical barrier imposed by the diffraction of light, and can now resolve the localization and dynamic behavior of proteins within small compartments with nanometer precision, revolutionizing the study of dendritic spine architecture. Here, we highlight exciting new findings from recent super-resolution studies on neuronal spines, and discuss how these studies revealed important new insights into how protein complexes are assembled and how their dynamic behavior shapes the efficiency of synaptic transmission.
Improved color constancy in honey bees enabled by parallel visual projections from dorsal ocelli.
Garcia, Jair E; Hung, Yu-Shan; Greentree, Andrew D; Rosa, Marcello G P; Endler, John A; Dyer, Adrian G
2017-07-18
How can a pollinator, like the honey bee, perceive the same colors on visited flowers, despite continuous and rapid changes in ambient illumination and background color? A hundred years ago, von Kries proposed an elegant solution to this problem, color constancy, which is currently incorporated in many imaging and technological applications. However, empirical evidence on how this method can operate on animal brains remains tenuous. Our mathematical modeling proposes that the observed spectral tuning of simple ocellar photoreceptors in the honey bee allows for the necessary input for an optimal color constancy solution to most natural light environments. The model is fully supported by our detailed description of a neural pathway allowing for the integration of signals originating from the ocellar photoreceptors to the information processing regions in the bee brain. These findings reveal a neural implementation to the classic color constancy problem that can be easily translated into artificial color imaging systems.
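The von Kries solution mentioned above is a per-channel rescaling by the estimated illuminant; a minimal sketch follows (in the bee model the illuminant estimate would come from the ocellar pathway, whereas here it is simply supplied as a vector).

```python
import numpy as np

def von_kries_adapt(rgb, illum_est):
    """von Kries chromatic adaptation: scale each channel by the
    inverse of the estimated illuminant response."""
    return rgb / np.asarray(illum_est, dtype=np.float64)

# A surface seen under two illuminants maps to the same adapted signal.
surface = np.array([0.4, 0.5, 0.2])
for illum in (np.array([1.0, 1.0, 1.0]), np.array([1.3, 1.0, 0.6])):
    print(von_kries_adapt(surface * illum, illum))  # identical outputs
```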
Geada, Isidro Lorenzo; Ramezani-Dakhel, Hadi; Jamil, Tariq; Sulpizi, Marialore; Heinz, Hendrik
2018-02-19
Metallic nanostructures have become popular for applications in therapeutics, catalysts, imaging, and gene delivery. Molecular dynamics simulations are gaining influence to predict nanostructure assembly and performance; however, instantaneous polarization effects due to induced charges in the free electron gas are not routinely included. Here we present a simple, compatible, and accurate polarizable potential for gold that consists of a Lennard-Jones potential and a harmonically coupled core-shell charge pair for every metal atom. The model reproduces the classical image potential of adsorbed ions as well as surface, bulk, and aqueous interfacial properties in excellent agreement with experiment. Induced charges affect the adsorption of ions onto gold surfaces in the gas phase at a strength similar to chemical bonds while ions and charged peptides in solution are influenced at a strength similar to intermolecular bonds. The proposed model can be applied to complex gold interfaces, electrode processes, and extended to other metals.
NASA Astrophysics Data System (ADS)
Szu, Harold H.; Buss, James R.; Kopriva, Ivica
2004-04-01
We propose a physics approach to solve a physical inverse problem, namely to choose the unique equilibrium solution at the minimum free energy H = E − T₀S, which includes the Wiener least-mean-squares solution (minimum E) and ICA (maximum S) as special cases. "Unsupervised classification" presumes that the required information must be learned and derived directly and solely from the data alone, consistent with the classical Duda-Hart ATR definition of "unlabelled data". Such a truly unsupervised methodology is presented for space-variant image processing for a single pixel in real-world cases of remote sensing, early tumor detection, and SARS. The indeterminacy among the multiple solutions of the inverse problem is regulated, and a solution selected, by means of the absolute minimum of the isothermal free energy, serving as the ground truth of the local equilibrium condition at the single-pixel footprint.
Topview stereo: combining vehicle-mounted wide-angle cameras to a distance sensor array
NASA Astrophysics Data System (ADS)
Houben, Sebastian
2015-03-01
The variety of vehicle-mounted sensors needed to fulfill a growing number of driver assistance tasks has become a substantial factor in automobile manufacturing cost. We present a stereo distance method exploiting the overlapping fields of view of a multi-camera fisheye surround-view system, as used for near-range vehicle surveillance tasks, e.g. in parking maneuvers. Hence, we aim at creating a new input signal from sensors that are already installed. Particular properties of wide-angle cameras (e.g. strongly position-dependent resolution) demand an adaptation of the image processing pipeline to several problems that do not arise in classical stereo vision performed with cameras carefully designed for that purpose. We introduce the algorithms for rectification, correspondence analysis, and regularization of the disparity image, discuss reasons for and avoidance of the caveats shown, and present first results on a prototype topview setup.
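For orientation, the pinhole disparity-to-depth relation underlying any stereo distance method is sketched below; after fisheye rectification the effective focal length varies across the field, which is one of the caveats the paper addresses, so this is only a first approximation.

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Pinhole-model depth from disparity: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

print(depth_from_disparity(12.0, 400.0, 1.5))  # 50.0 (meters)
```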
Experimental investigation of flow field around the elastic flag flapping in periodic state
NASA Astrophysics Data System (ADS)
Jia, Yongxia; Jia, Lichao; Su, Zhuang; Yuan, Huijing
2018-05-01
The flapping of a flag in the wind is a classical fluid-structure problem that concerns the interaction of elastic bodies with the ambient fluid. We focus on obtaining detailed experimental measurements of the flow around the flapping flag. By immersing an elastic yet self-supporting heavy flag into a water flow, we use particle image velocimetry (PIV) to obtain the whole flow field around the midspan of the flag interacting with the fluid in the periodic state. A dedicated PIV image processing method is used to measure near-wall flow velocities around the moving elastic flag. A thin flow circulation region exists on the suction side of the flag in the periodic state. This observation suggests that viscous flow models may be needed to improve theoretical predictions of the flapping flag in the periodic state, especially at large amplitude.
Posterior impingement syndromes of the ankle.
Lee, Justin C; Calder, James D F; Healy, Jeremiah C
2008-06-01
Acute, or repetitive, compression of the posterior structures of the ankle may lead to posterior ankle impingement (PAI) syndrome, posteromedial ankle impingement (PoMI) syndrome, or Haglund's syndrome. The etiology of each of these conditions is quite different. Variations in posterior ankle osseous and soft tissue anatomy contribute to the etiology of PAI and Haglund's syndromes. The presence of an os trigonum or Stieda process is classically associated with PAI syndrome, whereas a prominent posterosuperior tubercle of the os calcis or Haglund's deformity is the osseous predisposing factor in Haglund's syndrome. PoMI has no defined predisposing anatomical variants but typically follows an inversion-supination injury of the ankle joint. This article discusses the biomechanics, clinical features, imaging, and management of each of these conditions. Magnetic resonance imaging (MRI) provides the optimal tool in posterior ankle assessment, and this review focuses on the MRI findings of each of the conditions just listed.
Pauli structures arising from confined particles interacting via a statistical potential
NASA Astrophysics Data System (ADS)
Batle, Josep; Ciftja, Orion; Farouk, Ahmed; Alkhambashi, Majid; Abdalla, Soliman
2017-09-01
There have been suggestions that the Pauli exclusion principle alone can lead a non-interacting (free) system of identical fermions to form crystalline structures dubbed Pauli crystals. Single-shot imaging experiments on ultra-cold systems of free spin-polarized fermionic atoms in a two-dimensional harmonic trap appear to show geometric arrangements that cannot be characterized as Wigner crystals. This work explores this idea and considers a well-known approach that enables one to treat a quantum system of free fermions as a system of classical particles interacting through a statistical interaction potential. The model under consideration, though classical in nature, incorporates the quantum statistics by endowing the classical particles with an effective interaction potential. The reasonable expectation is that Pauli crystal features seen in experiments may manifest in this model, which captures the correct quantum statistics as a first-order correction. We use the Monte Carlo simulated annealing method to obtain the most stable configurations of finite two-dimensional systems of confined particles that interact via an appropriate statistical repulsion potential. We consider both an isotropic harmonic and a hard-wall confinement potential. Despite minor differences, the most stable configurations observed in our model correspond to the reported Pauli crystals in single-shot imaging experiments of free spin-polarized fermions in a harmonic trap. The crystalline configurations observed appear to be different from the classical Wigner crystal structures that would emerge if the confined classical particles interacted through a pairwise Coulomb repulsion.
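A minimal sketch of the approach follows, assuming the textbook effective pair potential for ideal fermions, βv(r) = -ln(1 - e^(-r²)) with the thermal wavelength absorbed into the length unit, and harmonic confinement; the particle number, cooling schedule, and units are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 6          # number of confined particles (illustrative)
beta = 1.0     # inverse temperature (illustrative units)

def energy(pos):
    # Isotropic harmonic confinement plus fermionic statistical
    # repulsion beta*v(r) = -ln(1 - exp(-r^2)).
    e = 0.5 * np.sum(pos ** 2)
    for i in range(N):
        for j in range(i + 1, N):
            r2 = np.sum((pos[i] - pos[j]) ** 2)
            e -= np.log(1.0 - np.exp(-r2) + 1e-12) / beta
    return e

pos = rng.normal(size=(N, 2))
T = 1.0
for step in range(20000):       # simulated annealing loop
    T *= 0.9997                  # geometric cooling schedule
    trial = pos + rng.normal(scale=0.05, size=pos.shape)
    if energy(trial) < energy(pos) or rng.random() < np.exp(
            (energy(pos) - energy(trial)) / T):
        pos = trial
print(pos)  # most stable configuration found (e.g., a shell pattern)
```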
Quantum enhanced superresolution microscopy (Conference Presentation)
NASA Astrophysics Data System (ADS)
Oron, Dan; Tenne, Ron; Israel, Yonatan; Silberberg, Yaron
2017-02-01
Far-field optical microscopy beyond the Abbe diffraction limit, making use of nonlinear excitation (e.g. STED), or temporal fluctuations in fluorescence (PALM, STORM, SOFI) is already a reality. In contrast, overcoming the diffraction limit using non-classical properties of light is very difficult to achieve due to the fragility of quantum states of light. Here, we experimentally demonstrate superresolution microscopy based on quantum properties of light naturally emitted by fluorophores used as markers in fluorescence microscopy. Our approach is based on photon antibunching, the tendency of fluorophores to emit photons one by one rather than in bursts. Although a distinctively quantum phenomenon, antibunching is readily observed in most common fluorophores even at room temperature. This nonclassical resource can be utilized directly to enhance the imaging resolution, since the non-classical far-field intensity correlations induced by antibunching carry high spatial frequency information on the spatial distribution of emitters. Detecting photon statistics simultaneously in the entire field of view, we were able to detect non-classical correlations of the second and third order, and reconstructed images with resolution significantly beyond the diffraction limit. Alternatively, we demonstrate the utilization of antibunching for augmenting the capabilities of localization-based superresolution imaging in the presence of multiple emitters, using a novel detector comprised of an array of single photon detectors connected to a densely packed fiber bundle. These features allow us to enhance the spatial and temporal resolution with which multiple emitters can be imaged compared with other techniques that rely on CCD cameras.
Halimi, Abdelghafour; Batatia, Hadj; Le Digabel, Jimmy; Josse, Gwendal; Tourneret, Jean Yves
2017-01-01
Detecting skin lentigo in reflectance confocal microscopy images is an important and challenging problem. This imaging modality has not yet been widely investigated for this problem, and only a few automatic processing techniques exist. They are mostly based on machine learning approaches and rely on numerous classical image features that lead to high computational costs given the very large resolution of these images. This paper presents a detection method with very low computational complexity that is able to identify the skin depth at which the lentigo can be detected. The proposed method performs multiresolution decomposition of the image obtained at each skin depth. The distribution of image pixels at a given depth can be approximated accurately by a generalized Gaussian distribution whose parameters depend on the decomposition scale, resulting in a very-low-dimension parameter space. SVM classifiers are then investigated to classify the scale parameter of this distribution, allowing real-time detection of lentigo. The method is applied to 45 healthy and lentigo patients from a clinical study, where a sensitivity of 81.4% and a specificity of 83.3% are achieved. Our results show that lentigo is identifiable at depths between 50μm and 60μm, corresponding to the average location of the dermoepidermal junction. This result is in agreement with the clinical practices that characterize the lentigo by assessing the disorganization of the dermoepidermal junction. PMID:29296480
ERIC Educational Resources Information Center
Koren, Pazit; Bar, Varda
2009-01-01
The physical and social image of the scientist among school children, student teachers, and teachers over the last 50 years was investigated. Interest has also been shown in the perception of the personality behind the physical stereotype. Nevertheless, the value judgments of science and scientists and the positive and negative mind-sets attaching…
Non-Markovian Complexity in the Quantum-to-Classical Transition
Xiong, Heng-Na; Lo, Ping-Yuan; Zhang, Wei-Min; Feng, Da Hsuan; Nori, Franco
2015-01-01
The quantum-to-classical transition is due to environment-induced decoherence, and it depicts how classical dynamics emerges from quantum systems. Previously, the quantum-to-classical transition has mainly been described with memory-less (Markovian) quantum processes. Here we study the complexity of the quantum-to-classical transition through general non-Markovian memory processes. That is, the influence of various reservoirs results in a given initial quantum state evolving into one of the following four scenarios: thermal state, thermal-like state, quantum steady state, or oscillating quantum nonstationary state. In the latter two scenarios, the system maintains partial or full quantum coherence due to the strong non-Markovian memory effect, so that in these cases the quantum-to-classical transition never occurs. This unexpected feature opens a new avenue for the development of future quantum technologies because the remaining quantum oscillations in steady states are decoherence-free. PMID:26303002
MO-DE-209-01: Primer On Tomosynthesis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maidment, A.
2016-06-15
Digital Breast Tomosynthesis (DBT) is rapidly replacing mammography as the standard of care in breast cancer screening and diagnosis. DBT is a form of computed tomography in which a limited set of projection images are acquired over a small angular range and reconstructed into tomographic data. The angular range varies from 15° to 50° and the number of projections varies between 9 and 25, as determined by the equipment manufacturer. It is equally valid to treat DBT as the digital analog of classical tomography – that is, linear tomography. In fact, the name “tomosynthesis” stands for “synthetic tomography.” DBT shares many common features with classical tomography, including the radiographic appearance, dose, and image quality considerations. As such, both the science and the practical physics of DBT systems are a hybrid between computed tomography and classical tomographic methods. In this lecture, we will explore the continuum from radiography to computed tomography to illustrate the characteristics of DBT. This lecture will consist of four presentations that will provide a complete overview of DBT, including a review of the fundamentals of DBT acquisition, a discussion of DBT reconstruction methods, an overview of dosimetry for DBT systems, and a summary of the underlying image theory of DBT, thereby relating image quality and dose. Learning Objectives: To understand the fundamental principles behind tomosynthesis image acquisition. To understand the fundamentals of tomosynthesis image reconstruction. To learn the determinants of image quality and dose in DBT, including measurement techniques. To learn the image theory underlying tomosynthesis, and the relationship between dose and image quality. ADM is a consultant to, and holds stock in, Real Time Tomography, LLC. ADM receives research support from Hologic Inc., Analogic Inc., and Barco NV. ADM is a member of the Scientific Advisory Board for Gamma Medica Inc. A. Maidment: Research Support, Hologic, Inc.; Research Support, Barco, Inc.; Scientific Advisory Board, Gamma Medica, Inc.; Scientific Advisory Board, Real-Time Tomography, LLC; Shareholder, Real-Time Tomography, LLC. J. Mainprize: Our lab has a research agreement with GE Healthcare on various topics in digital mammography and digital tomosynthesis. W. Zhao: Research grant from Siemens Health Care.
MO-DE-209-04: Radiation Dosimetry in Breast Tomosynthesis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sechopoulos, I.
2016-06-15
Digital Breast Tomosynthesis (DBT) is rapidly replacing mammography as the standard of care in breast cancer screening and diagnosis. DBT is a form of computed tomography in which a limited set of projection images are acquired over a small angular range and reconstructed into tomographic data. The angular range varies from 15° to 50° and the number of projections varies between 9 and 25, as determined by the equipment manufacturer. It is equally valid to treat DBT as the digital analog of classical tomography – that is, linear tomography. In fact, the name “tomosynthesis” stands for “synthetic tomography.” DBT shares many common features with classical tomography, including the radiographic appearance, dose, and image quality considerations. As such, both the science and the practical physics of DBT systems are a hybrid between computed tomography and classical tomographic methods. In this lecture, we will explore the continuum from radiography to computed tomography to illustrate the characteristics of DBT. This lecture will consist of four presentations that will provide a complete overview of DBT, including a review of the fundamentals of DBT acquisition, a discussion of DBT reconstruction methods, an overview of dosimetry for DBT systems, and a summary of the underlying image theory of DBT, thereby relating image quality and dose. Learning Objectives: To understand the fundamental principles behind tomosynthesis image acquisition. To understand the fundamentals of tomosynthesis image reconstruction. To learn the determinants of image quality and dose in DBT, including measurement techniques. To learn the image theory underlying tomosynthesis, and the relationship between dose and image quality. ADM is a consultant to, and holds stock in, Real Time Tomography, LLC. ADM receives research support from Hologic Inc., Analogic Inc., and Barco NV. ADM is a member of the Scientific Advisory Board for Gamma Medica Inc. A. Maidment: Research Support, Hologic, Inc.; Research Support, Barco, Inc.; Scientific Advisory Board, Gamma Medica, Inc.; Scientific Advisory Board, Real-Time Tomography, LLC; Shareholder, Real-Time Tomography, LLC. J. Mainprize: Our lab has a research agreement with GE Healthcare on various topics in digital mammography and digital tomosynthesis. W. Zhao: Research grant from Siemens Health Care.
MO-DE-209-02: Tomosynthesis Reconstruction Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mainprize, J.
2016-06-15
Digital Breast Tomosynthesis (DBT) is rapidly replacing mammography as the standard of care in breast cancer screening and diagnosis. DBT is a form of computed tomography in which a limited set of projection images are acquired over a small angular range and reconstructed into tomographic data. The angular range varies from 15° to 50° and the number of projections varies between 9 and 25, as determined by the equipment manufacturer. It is equally valid to treat DBT as the digital analog of classical tomography – that is, linear tomography. In fact, the name “tomosynthesis” stands for “synthetic tomography.” DBT shares many common features with classical tomography, including the radiographic appearance, dose, and image quality considerations. As such, both the science and the practical physics of DBT systems are a hybrid between computed tomography and classical tomographic methods. In this lecture, we will explore the continuum from radiography to computed tomography to illustrate the characteristics of DBT. This lecture will consist of four presentations that will provide a complete overview of DBT, including a review of the fundamentals of DBT acquisition, a discussion of DBT reconstruction methods, an overview of dosimetry for DBT systems, and a summary of the underlying image theory of DBT, thereby relating image quality and dose. Learning Objectives: To understand the fundamental principles behind tomosynthesis image acquisition. To understand the fundamentals of tomosynthesis image reconstruction. To learn the determinants of image quality and dose in DBT, including measurement techniques. To learn the image theory underlying tomosynthesis, and the relationship between dose and image quality. ADM is a consultant to, and holds stock in, Real Time Tomography, LLC. ADM receives research support from Hologic Inc., Analogic Inc., and Barco NV. ADM is a member of the Scientific Advisory Board for Gamma Medica Inc. A. Maidment: Research Support, Hologic, Inc.; Research Support, Barco, Inc.; Scientific Advisory Board, Gamma Medica, Inc.; Scientific Advisory Board, Real-Time Tomography, LLC; Shareholder, Real-Time Tomography, LLC. J. Mainprize: Our lab has a research agreement with GE Healthcare on various topics in digital mammography and digital tomosynthesis. W. Zhao: Research grant from Siemens Health Care.
Determination of piezo-optic coefficients of crystals by means of four-point bending.
Krupych, Oleg; Savaryn, Viktoriya; Krupych, Andriy; Klymiv, Ivan; Vlokh, Rostyslav
2013-06-10
A technique developed recently for determining piezo-optic coefficients (POCs) of isotropic optical media, which combines digital imaging laser interferometry with a classical four-point bending method, is generalized and applied to a single-crystalline anisotropic material. The peculiarities of the measuring procedures and data processing for the case of optically uniaxial crystals are described in detail. The capabilities of the technique are tested on the canonical nonlinear optical crystal LiNbO3. The high precision achieved in determining the POCs for isotropic and anisotropic materials indicates that the technique is both versatile and reliable.
New Insights in Anorexia Nervosa
Gorwood, Philip; Blanchet-Collet, Corinne; Chartrel, Nicolas; Duclos, Jeanne; Dechelotte, Pierre; Hanachi, Mouna; Fetissov, Serguei; Godart, Nathalie; Melchior, Jean-Claude; Ramoz, Nicolas; Rovere-Jovene, Carole; Tolle, Virginie; Viltart, Odile; Epelbaum, Jacques
2016-01-01
Anorexia nervosa (AN) is classically defined as a condition in which an abnormally low body weight is associated with an intense fear of gaining weight and distorted cognitions regarding weight, shape, and drive for thinness. This article reviews recent evidence from physiology, genetics, epigenetics, and brain imaging that makes it possible to consider AN as an abnormality of reward pathways or an attempt to preserve mental homeostasis. Special emphasis is placed on ghrelin resistance and the importance of orexigenic peptides of the lateral hypothalamus, the gut microbiota, and a dysimmune disorder of neuropeptide signaling. Physiological processes secondary to underlying and premorbid vulnerability factors, the "pondero-nutritional-feeding basements", are also discussed. PMID:27445651
Salient regions detection using convolutional neural networks and color volume
NASA Astrophysics Data System (ADS)
Liu, Guang-Hai; Hou, Yingkun
2018-03-01
Convolutional neural networks are an important technique in machine learning, pattern recognition and image processing. In order to reduce the computational burden and extend the classical LeNet-5 model to the field of saliency detection, we propose a simple and novel computing model based on the LeNet-5 network. In the proposed model, hue, saturation and intensity are utilized to extract depth cues; we then integrate the depth cues and color volume for saliency detection, following the basic structure of the feature integration theory. Experimental results show that the proposed computing model outperforms some existing state-of-the-art methods on the MSRA1000 and ECSSD datasets.
Chojniak, Rubens; Carneiro, Dominique Piacenti; Moterani, Gustavo Simonetto Peres; Duarte, Ivone da Silva; Bitencourt, Almir Galvão Vieira; Muglia, Valdair Francisco; D'Ippolito, Giuseppe
2017-01-01
The aim of this study was to map the different methods of diagnostic imaging instruction at medical schools in Brazil. In this cross-sectional study, a questionnaire was sent to each of the coordinators of 178 Brazilian medical schools. The following characteristics were assessed: teaching model; total course hours; infrastructure; numbers of students and professionals involved; themes addressed; diagnostic imaging modalities covered; and education policies related to diagnostic imaging. Of the 178 questionnaires sent, 45 (25.3%) were completed and returned. Of those 45 responses, 17 (37.8%) were from public medical schools, whereas 28 (62.2%) were from private medical schools. Among the 45 medical schools evaluated, the method of diagnostic imaging instruction was modular at 21 (46.7%), classic (an independent discipline) at 13 (28.9%), hybrid (classical and modular) at 9 (20.0%), and none of the preceding at 3 (6.7%). Diagnostic imaging is part of the formal curriculum at 36 (80.0%) of the schools, an elective course at 3 (6.7%), and included within another modality at 6 (13.3%). The professors involved in diagnostic imaging teaching are radiologists at 43 (95.5%) of the institutions. The survey showed that medical courses in Brazil tend to offer diagnostic imaging instruction in courses that include other content and at different time points during the course. Radiologists are extensively involved in undergraduate medical education, regardless of the teaching methodology employed at the institution.
NASA Astrophysics Data System (ADS)
Bonoli, Carlotta; Balestra, Andrea; Bortoletto, Favio; D'Alessandro, Maurizio; Farinelli, Ruben; Medinaceli, Eduardo; Stephen, John; Borsato, Enrico; Dusini, Stefano; Laudisio, Fulvio; Sirignano, Chiara; Ventura, Sandro; Auricchio, Natalia; Corcione, Leonardo; Franceschi, Enrico; Ligori, Sebastiano; Morgante, Gianluca; Patrizii, Laura; Sirri, Gabriele; Trifoglio, Massimo; Valenziano, Luca
2016-07-01
The Near Infrared Spectrograph and Photometer (NISP) is one of the two instruments on board the EUCLID mission, now in the implementation phase; VIS, the Visible Imager, is the second instrument working on the same shared optical beam. The NISP focal plane is based on a detector mosaic deploying 16 HAWAII-II HgCdTe detectors of 2048×2048 pixels each, now in an advanced delivery phase from Teledyne Imaging Scientific (TIS), and will provide NIR imaging in three bands (Y, J, H) plus slit-less spectroscopy in the range 0.9–2.0 µm. All the NISP observational modes will be supported by different parametrizations of the classic multi-accumulation IR detector readout mode, covering the specific needs of spectroscopic, photometric and calibration exposures. Due to the large number of deployed detectors and the limited satellite telemetry available to ground, a substantial part of the data processing, conventionally performed off-line, will be accomplished on board, in parallel with the flow of data acquisitions. This has led to the development of a specific on-board HW/SW data processing pipeline, and to the design of computationally capable control electronics suited to the time constraints of the NISP acquisition sequences during the sky survey. In this paper we present the architecture of the NISP on-board processing system, directly interfaced to the SIDECAR ASICs system managing the detector focal plane, and the implementation of the on-board pipeline allowing all the basic operations of input frame averaging, final frame interpolation and data-volume compression before ground down-link.
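A minimal sketch of the kind of on-board co-addition just described: non-destructive reads from a multi-accumulation (up-the-ramp) sequence are averaged into groups, and a per-pixel least-squares slope gives the signal rate. Function names and array shapes are hypothetical; the actual NISP pipeline is implemented in flight hardware/software and is not reproduced here.

```python
import numpy as np

def average_groups(ramp, reads_per_group):
    """Average consecutive non-destructive reads into groups
    (on-board co-addition of a multi-accumulation sequence).
    ramp: (n_reads, ny, nx) array of detector reads."""
    n_groups = ramp.shape[0] // reads_per_group
    trimmed = ramp[:n_groups * reads_per_group]
    return trimmed.reshape(n_groups, reads_per_group, *ramp.shape[1:]).mean(axis=1)

def fit_slope(groups, dt):
    """Per-pixel least-squares slope (signal rate) through the group averages,
    assuming uniform spacing dt between group times."""
    t = dt * np.arange(groups.shape[0])
    t_c = t - t.mean()                               # centered times
    return np.tensordot(t_c, groups - groups.mean(axis=0), axes=(0, 0)) / (t_c @ t_c)
```

The slope image (one number per pixel) is what would then be interpolated and compressed before down-link, which is where most of the telemetry saving comes from.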
NASA Astrophysics Data System (ADS)
Murtiyoso, A.; Koehl, M.; Grussenmeyer, P.; Freville, T.
2017-08-01
Photogrammetry has seen an increase in the use of UAVs (Unmanned Aerial Vehicles) for both large- and smaller-scale cartography. UAVs are also advantageous because they can be deployed for tasks requiring a quick response, including the inspection and monitoring of buildings. The objective of the project is to study the acquisition and processing protocols that exist in the literature and to adapt them for UAV projects. This implies a study of the calibration of the sensors, flight planning, a comparison of software solutions, data management, and an analysis of the different products of a UAV project. Two historical buildings of the city of Strasbourg were used as case studies: a part of the Rohan Palace façade and the St-Pierre-le-Jeune Catholic church. In addition, a preliminary test was performed on the Josephine Pavilion. Two UAVs were used in this research, namely the Sensefly Albris and the DJI Phantom 3 Professional. The experiments have shown that the calibration parameters tend to be unstable for small sensors. Furthermore, the dense matching of images remains a particular problem to address in a close-range photogrammetry project, all the more so in the presence of noise in the images. Data management is also very important in cases where the number of images is high. The UAV is nevertheless a suitable solution for the surveying and recording of historical buildings because it is able to take images from points of view that are normally inaccessible to classical terrestrial techniques.
Maiti, Panchanan; Hall, Tia C; Paladugu, Leela; Kolli, Nivya; Learman, Cameron; Rossignol, Julien; Dunbar, Gary L
2016-11-01
Deposition of amyloid beta protein (Aβ) is a key component in the pathogenesis of Alzheimer's disease (AD). As an anti-amyloid natural polyphenol, curcumin (Cur) has been used as a therapy for AD. Its fluorescent activity, preferential binding to Aβ, and structural similarities with other traditional amyloid-binding dyes make it a promising candidate for labeling and imaging of Aβ plaques in vivo. The present study was designed to test whether dietary Cur and nanocurcumin (NC) provide more sensitivity for labeling and imaging of Aβ plaques in brain tissues from 5×-familial AD (5×FAD) mice than the classical Aβ-binding dyes, such as Congo red and Thioflavin-S. These comparisons were made in postmortem brain tissues from the 5×FAD mice. We observed that Cur and NC labeled Aβ plaques to the same degree as an Aβ-specific antibody and to a greater extent than the classical amyloid-binding dyes. Cur and NC also labeled Aβ plaques in 5×FAD brain tissues when injected intraperitoneally. Nanomolar concentrations of Cur or NC are sufficient for labeling and imaging of Aβ plaques in 5×FAD brain tissue. Cur and NC also labeled different types of Aβ plaques, including core, neuritic, diffuse, and burned-out, to a greater degree than other amyloid-binding dyes. Therefore, Cur and/or NC can be used as an alternative to Aβ-specific antibodies for labeling and imaging of Aβ plaques ex vivo and in vivo, providing an easy and inexpensive means of detecting Aβ-plaque load in postmortem brain tissue of animal models of AD after anti-amyloid therapy.
Light, Imaging, Vision: An interdisciplinary undergraduate course
NASA Astrophysics Data System (ADS)
Nelson, Philip
Students in the physical and life sciences, and in engineering, need to know about the physics and biology of light. In the 21st century, it has become increasingly clear that the quantum nature of light is essential both for the latest imaging modalities and for advancing our knowledge of fundamental processes, such as photosynthesis and human vision. But many optics courses remain rooted in classical physics, with photons as an afterthought. I'll describe a new undergraduate course, for students in several science and engineering majors, that takes students from the rudiments of probability theory to modern methods like fluorescence imaging and Förster resonance energy transfer. After a digression into color vision, students then see how the Feynman principle explains the apparently wavelike phenomena associated with light, including applications like the diffraction limit, subdiffraction imaging, total internal reflection, and TIRF microscopy. Then we see how scientists documented the single-quantum sensitivity of the eye seven decades earlier than `ought' to have been possible, and finally close with the remarkable signaling cascade that delivers such outstanding performance. A new textbook embodying this course will be published by Princeton University Press in Spring 2017. Partially supported by the United States National Science Foundation under Grant PHY-1601894.
Cerebella segmentation on MR images of pediatric patients with medulloblastoma
NASA Astrophysics Data System (ADS)
Shan, Zu Y.; Ji, Qing; Glass, John; Gajjar, Amar; Reddick, Wilburn E.
2005-04-01
In this study, an automated method has been developed to identify the cerebellum in T1-weighted MR brain images of patients with medulloblastoma. A new objective function analogous to the Gibbs free energy of classical physics was defined, and brain structure delineation was viewed as a process of minimizing this Gibbs free energy. We used a rigid-body registration and an active contour (snake) method to minimize the Gibbs free energy in this study. The method was applied to 20 patient data sets to generate cerebellum images and volumetric results. The generated cerebellum images were compared with two manually drawn results. Strong correlations were found between the automatically and manually generated volumetric results; the correlation coefficients with each of the manual results were 0.971 and 0.974, respectively. The average Jaccard similarities with each of the two manual results were 0.89 and 0.88, respectively. The average Kappa indexes with each of the two manual results were 0.94 and 0.93, respectively. These results showed that the method is both robust and accurate for cerebellum segmentation. The method may be applied to research and clinical investigations in which cerebellum segmentation and quantitative MR measurement of the cerebellum are needed.
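For reference, the agreement measures quoted above are simple voxel-overlap statistics. A minimal sketch follows, under the common convention (an assumption on our part, not stated in the abstract) that the "Kappa index" for binary masks reduces to the Dice coefficient.

```python
import numpy as np

def jaccard(a, b):
    """Jaccard similarity |A∩B| / |A∪B| between two boolean segmentation masks."""
    a, b = a.astype(bool), b.astype(bool)
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()

def dice(a, b):
    """Dice coefficient 2|A∩B| / (|A|+|B|); often reported as a 'kappa index'
    for segmentation masks (assumed convention here)."""
    a, b = a.astype(bool), b.astype(bool)
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```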
Gaucher cell, photomicrograph (image)
Gaucher disease is called a "lipid storage disease" where abnormal amounts of lipids called "glycosphingolipids" are stored in special cells called reticuloendothelial cells. Classically, the nucleus is ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Xusheng; Moon, Seoksu; Gao, Jian
Fuel atomization and vaporization processes play a critical role in determining engine combustion and emissions. The primary near-nozzle breakup is the vital link between the fuel emerging from the nozzle and the fully atomized spray. In this study, the near-nozzle spray characteristics of diesel injectors with different umbrella angles (UA) were investigated using high-speed X-ray phase-contrast imaging and quantitative image processing. A classic 'dumbbell' profile of spray width (SW) was observed, composed of three stages: an opening stage, a semi-steady stage and a closing stage. The SW peak of two-hole injectors was more than twice that of the single-hole injector at the opening and closing stages, corresponding to the hollow-cone spray. This indicated that vortex flow formed as the UA increased. Higher injection pressure had little influence on the SW but led to earlier breakup closer to the nozzle. A significant fuel effect on the SW was found at higher needle lift. However, this effect can be neglected at lower needle lift because internal flow and cavitation play the leading role in the near-field spray characteristics. In addition, the morphology-based breakup process was observed, which highlighted the important effect of internal flow on spray development. The possibility of using hollow-cone sprays in diesel injectors is also discussed.
Kuizon, Salomon; DiMaiuta, Kathleen; Walus, Marius; Jenkins, Edmund C; Kuizon, Marisol; Kida, Elizabeth; Golabek, Adam A; Espinoza, Daniel O; Pullarkat, Raju K; Junaid, Mohammed A
2010-08-03
Tripeptidyl aminopeptidase I (TPPI) is a crucial lysosomal enzyme that is deficient in the fatal neurodegenerative disorder called classic late-infantile neuronal ceroid lipofuscinosis (LINCL). It is involved in the catabolism of proteins in the lysosomes. Recent X-ray crystallographic studies have provided insights into the structural/functional aspects of TPPI catalysis and indicated the presence of an octahedrally coordinated Ca(2+). Purified precursor and mature TPPI were used to study inhibition by NBS and EDTA using biochemical and immunological approaches. Site-directed mutagenesis combined with confocal imaging identified a W residue critical for TPPI activity and for the processing of the precursor into the mature enzyme. NBS is a potent inhibitor of the purified TPPI. In mammalian TPPI, W542 is critical for tripeptidyl peptidase activity as well as autocatalysis. Transfection studies have indicated that TPPI mutants harboring residues other than W at position 542 show delayed processing and are retained in the ER rather than transported to lysosomes. EDTA inhibits the autocatalytic processing of the precursor TPPI. We propose that W542 and Ca(2+) are critical for maintaining the proper tertiary structure of the precursor proprotein as well as the mature TPPI. Additionally, Ca(2+) is necessary for the autocatalytic processing of the precursor protein into the mature TPPI. We have identified NBS as a potent TPPI inhibitor, which helped delineate a critical role for the W542 residue. Studies with such compounds will prove valuable in identifying the residues critical to TPPI catalysis and in its structure-function analysis.
Use of synchrotron tomography to image naturalistic anatomy in insects
NASA Astrophysics Data System (ADS)
Socha, John J.; De Carlo, Francesco
2008-08-01
Understanding the morphology of anatomical structures is a cornerstone of biology. For small animals, classical methods such as histology have provided a wealth of data, but such techniques can be problematic due to destruction of the sample. More importantly, fixation and physical slicing can cause deformation of anatomy, a critical limitation when precise three-dimensional data are required. Modern techniques such as confocal microscopy, MRI, and tabletop x-ray microCT provide effective non-invasive methods, but each of these tools has limitations, including sample size constraints, resolution limits, and difficulty visualizing soft tissue. Our research group at the Advanced Photon Source (Argonne National Laboratory) studies physiological processes in insects, focusing on the dynamics of breathing and feeding. To determine the size, shape, and relative location of internal anatomy in insects, we use synchrotron microtomography at beamline 2-BM to image structures including tracheal tubes, muscles, and gut. Because obtaining naturalistic, undeformed anatomical information is a key component of our studies, we have developed methods to image fresh, non-fixed whole animals and tissues. Although motion artifacts remain a problem, we have successfully imaged multiple species including beetles, ants, fruit flies, and butterflies. Here we discuss advances in biological imaging and highlight key findings in insect morphology.
Local structure preserving sparse coding for infrared target recognition
Han, Jing; Yue, Jiang; Zhang, Yi; Bai, Lianfa
2017-01-01
Sparse coding performs well in image classification. However, robust target recognition requires a large set of comprehensive template images, and the sparse learning process is complex. We incorporate sparsity into a template matching concept to construct a local sparse structure matching (LSSM) model for general infrared target recognition. A local structure preserving sparse coding (LSPSc) formulation is proposed to simultaneously preserve the local sparse and structural information of objects. By adding a spatial local structure constraint to the classical sparse coding algorithm, LSPSc can improve the stability of the sparse representation of targets and inhibit background interference in infrared images. Furthermore, a kernel LSPSc (K-LSPSc) formulation is proposed, which extends LSPSc to the kernel space to weaken the influence of the linear structure constraint in nonlinear natural data. Because of their anti-interference and fault-tolerant capabilities, both LSPSc- and K-LSPSc-based LSSM can implement target identification based on a simple template set, which needs only several images containing enough local sparse structures to learn a sufficient sparse structure dictionary of a target class. Specifically, this LSSM approach performs stably in target detection under scene, shape and occlusion variations. High performance is demonstrated on several datasets, indicating robust infrared target recognition in diverse environments and imaging conditions. PMID:28323824
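For orientation, the classical l1 sparse coding problem that LSPSc extends can be sketched as follows. The spatial local-structure penalty of LSPSc itself is not reproduced, and the ISTA solver below is our illustrative choice, not necessarily the authors'.

```python
import numpy as np

def ista_sparse_code(D, y, lam=0.1, n_iter=200):
    """Classical l1 sparse coding: min_a 0.5*||y - D a||^2 + lam*||a||_1,
    solved by iterative shrinkage-thresholding (ISTA).
    LSPSc would add a spatial local-structure term to this objective."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - y)           # gradient of the quadratic term
        z = a - grad / L
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return a
```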
Renosh, P R; Schmitt, Francois G; Loisel, Hubert
2015-01-01
Satellite remote sensing observations allow the ocean surface to be sampled synoptically over large spatio-temporal scales. The images provided by visible and thermal infrared satellite observations are widely used in physical, biological, and ecological oceanography. The present work proposes a method for understanding the multi-scaling properties of satellite products such as Chlorophyll-a (Chl-a) and Sea Surface Temperature (SST), which have rarely been studied in this respect. The specific objective of this study is to show how the small-scale heterogeneities of satellite images can be characterised using tools borrowed from the field of turbulence. For that purpose, we show how the structure function, which is classically used in the framework of scaling time-series analysis, can also be used in 2D. The main advantage of this method is that it can be applied to images which have missing data. Based on both simulated and real images, we demonstrate that coarse-graining (CG) of a gradient modulus transform of the original image does not provide correct scaling exponents. We show, using a fractional Brownian simulation in 2D, that the structure function (SF) can be used with randomly sampled pairs of points, and verify that one million point pairs provide enough statistics.
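A minimal sketch of the pair-sampling estimator described above: the order-q structure function S_q(r) = <|f(x+r) - f(x)|^q> is estimated from randomly drawn point pairs, skipping pairs that touch missing (NaN) pixels. Bin counts and pair counts are hypothetical tuning choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def structure_function(img, n_pairs=1_000_000, order=2, nbins=30):
    """2D structure function from random point pairs; NaN pixels are skipped,
    which is what makes the estimator tolerant of missing data."""
    ny, nx = img.shape
    p1 = np.column_stack([rng.integers(ny, size=n_pairs),
                          rng.integers(nx, size=n_pairs)])
    p2 = np.column_stack([rng.integers(ny, size=n_pairs),
                          rng.integers(nx, size=n_pairs)])
    f1 = img[p1[:, 0], p1[:, 1]]
    f2 = img[p2[:, 0], p2[:, 1]]
    ok = ~(np.isnan(f1) | np.isnan(f2))            # drop pairs hitting gaps
    r = np.hypot(*(p1[ok] - p2[ok]).T)             # pair separations
    dq = np.abs(f1[ok] - f2[ok]) ** order          # order-q increments
    bins = np.logspace(0, np.log10(r.max()), nbins)
    idx = np.digitize(r, bins)
    sq = np.array([dq[idx == i].mean() if np.any(idx == i) else np.nan
                   for i in range(1, nbins)])
    return bins[1:], sq
```

The scaling exponents then follow from the log-log slope of S_q(r) over the inertial range of separations.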
NASA Astrophysics Data System (ADS)
Kurek, A. R.; Stachowski, A.; Banaszek, K.; Pollo, A.
2018-05-01
High-angular-resolution imaging is crucial for many applications in modern astronomy and astrophysics. The fundamental diffraction limit constrains the resolving power of both ground-based and spaceborne telescopes. The recent idea of a quantum telescope based on the optical parametric amplification (OPA) of light aims to bypass this limit for the imaging of extended sources by an order of magnitude or more. We present an updated scheme of an OPA-based device and a more accurate model of the signal amplification by such a device. The semiclassical model that we present predicts that the noise in such a system will form so-called light speckles as a result of light interference in the optical path. Based on this model, we analysed the efficiency of OPA in increasing the angular resolution of the imaging of extended targets and the precise localization of a distant point source. According to our new model, OPA offers a gain in resolved imaging in comparison to classical optics. For a given time-span, we found that OPA can be more efficient in localizing a single distant point source than classical telescopes.
Quantum information processing by a continuous Maxwell demon
NASA Astrophysics Data System (ADS)
Stevens, Josey; Deffner, Sebastian
Quantum computing is believed to be fundamentally superior to classical computing; however quantifying the specific thermodynamic advantage has been elusive. Experimentally motivated, we generalize previous minimal models of discrete demons to continuous state space. Analyzing our model allows one to quantify the thermodynamic resources necessary to process quantum information. By further invoking the semi-classical limit we compare the quantum demon with its classical analogue. Finally, this model also serves as a starting point to study open quantum systems.
Jasinska, K K; Petitto, L A
2013-10-01
Is the developing bilingual brain fundamentally similar to the monolingual brain (e.g., neural resources supporting language and cognition)? Or does early-life bilingual language experience change the brain? If so, how does age of first bilingual exposure impact neural activation for language? We compared how typically-developing bilingual and monolingual children (ages 7-10) and adults recruit brain areas during sentence processing using functional Near Infrared Spectroscopy (fNIRS) brain imaging. Bilingual participants included early-exposed (bilingual exposure from birth) and later-exposed individuals (bilingual exposure between ages 4-6). Both bilingual children and adults showed greater neural activation in left-hemisphere classic language areas, and additionally, right-hemisphere homologues (Right Superior Temporal Gyrus, Right Inferior Frontal Gyrus). However, important differences were observed between early-exposed and later-exposed bilinguals in their earliest-exposed language. Early bilingual exposure imparts fundamental changes to classic language areas instead of alterations to brain regions governing higher cognitive executive functions. However, age of first bilingual exposure does matter. Later-exposed bilinguals showed greater recruitment of the prefrontal cortex relative to early-exposed bilinguals and monolinguals. The findings provide fascinating insight into the neural resources that facilitate bilingual language use and are discussed in terms of how early-life language experiences can modify the neural systems underlying human language processing. Copyright © 2013 The Authors. Published by Elsevier Ltd. All rights reserved.
Aeroelastic Flight Data Analysis with the Hilbert-Huang Algorithm
NASA Technical Reports Server (NTRS)
Brenner, Martin J.; Prazenica, Chad
2006-01-01
This report investigates the utility of the Hilbert-Huang transform for the analysis of aeroelastic flight data. It is well known that the classical Hilbert transform can be used for time-frequency analysis of functions or signals. Unfortunately, the Hilbert transform can only be effectively applied to an extremely small class of signals, namely those that are characterized by a single frequency component at any instant in time. The recently-developed Hilbert-Huang algorithm addresses the limitations of the classical Hilbert transform through a process known as empirical mode decomposition. Using this approach, the data is filtered into a series of intrinsic mode functions, each of which admits a well-behaved Hilbert transform. In this manner, the Hilbert-Huang algorithm affords time-frequency analysis of a large class of signals. This powerful tool has been applied in the analysis of scientific data, structural system identification, mechanical system fault detection, and even image processing. The purpose of this report is to demonstrate the potential applications of the Hilbert-Huang algorithm for the analysis of aeroelastic systems, with improvements such as localized online processing. Applications for correlations between system input and output, and amongst output sensors, are discussed to characterize the time-varying amplitude and frequency correlations present in the various components of multiple data channels. Online stability analyses and modal identification are also presented. Examples are given using aeroelastic test data from the F-18 Active Aeroelastic Wing airplane, an Aerostructures Test Wing, and pitch plunge simulation.
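A toy illustration of the empirical mode decomposition just described: each intrinsic mode function (IMF) is obtained by repeatedly subtracting the mean of cubic-spline envelopes through the local extrema, and is then removed from the signal before extracting the next IMF. This crude sketch uses a fixed sift count rather than the usual standard-deviation stopping criterion, and it is not the implementation used in the report.

```python
import numpy as np
from scipy.signal import argrelextrema
from scipy.interpolate import CubicSpline

def sift_once(t, x):
    """One sifting pass: subtract the mean of the upper and lower
    extremal envelopes from the signal."""
    imax = argrelextrema(x, np.greater)[0]
    imin = argrelextrema(x, np.less)[0]
    if len(imax) < 4 or len(imin) < 4:
        return x                                   # too few extrema: treat as residue
    upper = CubicSpline(t[imax], x[imax])(t)
    lower = CubicSpline(t[imin], x[imin])(t)
    return x - 0.5 * (upper + lower)

def emd(t, x, n_imfs=4, n_sifts=10):
    """Crude EMD: sift to get an IMF, subtract it, repeat."""
    imfs, residue = [], x.copy()
    for _ in range(n_imfs):
        h = residue.copy()
        for _ in range(n_sifts):                   # fixed sift count, no SD criterion
            h = sift_once(t, h)
        imfs.append(h)
        residue = residue - h
    return imfs, residue
```

Each IMF is narrow-band at any instant, which is exactly the property that makes its Hilbert transform (and hence an instantaneous frequency) well behaved.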
Aeroelastic Flight Data Analysis with the Hilbert-Huang Algorithm
NASA Technical Reports Server (NTRS)
Brenner, Marty; Prazenica, Chad
2005-01-01
This paper investigates the utility of the Hilbert-Huang transform for the analysis of aeroelastic flight data. It is well known that the classical Hilbert transform can be used for time-frequency analysis of functions or signals. Unfortunately, the Hilbert transform can only be effectively applied to an extremely small class of signals, namely those that are characterized by a single frequency component at any instant in time. The recently-developed Hilbert-Huang algorithm addresses the limitations of the classical Hilbert transform through a process known as empirical mode decomposition. Using this approach, the data is filtered into a series of intrinsic mode functions, each of which admits a well-behaved Hilbert transform. In this manner, the Hilbert-Huang algorithm affords time-frequency analysis of a large class of signals. This powerful tool has been applied in the analysis of scientific data, structural system identification, mechanical system fault detection, and even image processing. The purpose of this paper is to demonstrate the potential applications of the Hilbert-Huang algorithm for the analysis of aeroelastic systems, with improvements such as localized/online processing. Applications for correlations between system input and output, and amongst output sensors, are discussed to characterize the time-varying amplitude and frequency correlations present in the various components of multiple data channels. Online stability analyses and modal identification are also presented. Examples are given using aeroelastic test data from the F/A-18 Active Aeroelastic Wing aircraft, an Aerostructures Test Wing, and pitch-plunge simulation.
Gaucher cell, photomicrograph #2 (image)
Gaucher disease is called a "lipid storage disease" where abnormal amounts of lipids called "glycosphingolipids" are stored in special cells called reticuloendothelial cells. Classically, the nucleus is ...
Quantum stochastic walks on networks for decision-making.
Martínez-Martínez, Ismael; Sánchez-Burillo, Eduardo
2016-03-31
Recent experiments report violations of the classical law of total probability and incompatibility of certain mental representations when humans process and react to information. Evidence suggests that a more general quantum theory may provide a better explanation of the dynamics and structure of real decision-making processes than classical probability theory. Inspired by this, we show how behavioral choice-probabilities can arise as the unique stationary distribution of quantum stochastic walkers on the classical network defined from Luce's response probabilities. This work is relevant because (i) we provide a very general framework integrating the positive characteristics of both quantum and classical approaches previously in confrontation, and (ii) we define a cognitive network which can be used to bring other connectionist approaches to decision-making into the quantum stochastic realm. We model the decision-maker as an open system in contact with her surrounding environment, and the time-length of the decision-making process also turns out to be a measure of the degree of interplay between the unitary and irreversible dynamics. Implementing quantum coherence on classical networks may be a door to better integrating human-like reasoning biases in stochastic models for decision-making.
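For contrast with the quantum walk, the purely classical baseline is the stationary distribution of an ordinary random walk on the choice network; a minimal sketch with hypothetical transition probabilities follows. The quantum stochastic walk itself requires a Lindblad master-equation solver, which is beyond this sketch.

```python
import numpy as np

def stationary_distribution(P, tol=1e-12, max_iter=100_000):
    """Stationary distribution of a column-stochastic transition matrix P
    (classical random walk), via power iteration."""
    p = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(max_iter):
        p_next = P @ p
        if np.linalg.norm(p_next - p, 1) < tol:
            break
        p = p_next
    return p_next

# Toy 3-alternative choice network (hypothetical transition probabilities;
# columns sum to 1).
P = np.array([[0.5, 0.2, 0.3],
              [0.3, 0.6, 0.1],
              [0.2, 0.2, 0.6]])
print(stationary_distribution(P))
```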
Kim, Tae Hyung; Setsompop, Kawin; Haldar, Justin P.
2016-01-01
Purpose Parallel imaging and partial Fourier acquisition are two classical approaches for accelerated MRI. Methods that combine these approaches often rely on prior knowledge of the image phase, but the need to obtain this prior information can place practical restrictions on the data acquisition strategy. In this work, we propose and evaluate SENSE-LORAKS, which enables combined parallel imaging and partial Fourier reconstruction without requiring prior phase information. Theory and Methods The proposed formulation is based on combining the classical SENSE model for parallel imaging data with the more recent LORAKS framework for MR image reconstruction using low-rank matrix modeling. Previous LORAKS-based methods have successfully enabled calibrationless partial Fourier parallel MRI reconstruction, but have been most successful with nonuniform sampling strategies that may be hard to implement for certain applications. By combining LORAKS with SENSE, we enable highly-accelerated partial Fourier MRI reconstruction for a broader range of sampling trajectories, including widely-used calibrationless uniformly-undersampled trajectories. Results Our empirical results with retrospectively undersampled datasets indicate that when SENSE-LORAKS reconstruction is combined with an appropriate k-space sampling trajectory, it can provide substantially better image quality at high-acceleration rates relative to existing state-of-the-art reconstruction approaches. Conclusion The SENSE-LORAKS framework provides promising new opportunities for highly-accelerated MRI. PMID:27037836
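As background for the combination described above, the classical SENSE model alone can be sketched as a regularized least-squares problem solved with FFT operators; the low-rank LORAKS component is not reproduced here, and all names, the fixed step size, and the assumption of normalized coil maps (so the forward operator has norm at most one) are ours.

```python
import numpy as np

def sense_recon(kdata, sens, mask, lam=1e-3, n_iter=50):
    """Least-squares SENSE: recover image x from undersampled multi-coil
    k-space, solving min_x sum_c ||M F S_c x - y_c||^2 + lam ||x||^2
    by gradient descent. kdata: (ncoil, ny, nx) k-space; sens: coil maps
    (ncoil, ny, nx); mask: (ny, nx) sampling pattern."""
    F = lambda im: np.fft.fft2(im, norm="ortho")
    Fh = lambda ks: np.fft.ifft2(ks, norm="ortho")
    A = lambda x: mask * F(sens * x)                       # forward model per coil
    Ah = lambda y: np.sum(np.conj(sens) * Fh(mask * y), axis=0)
    x = Ah(kdata)                                          # adjoint as initial guess
    step = 0.5                 # safe for ||A|| <= 1 (normalized coil maps assumed)
    for _ in range(n_iter):
        grad = Ah(A(x) - kdata) + lam * x
        x = x - step * grad
    return x
```

SENSE-LORAKS replaces the simple l2 penalty with a low-rank constraint on a structured k-space matrix, which is what removes the need for prior phase or calibration information.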
Expectation maximization for hard X-ray count modulation profiles
NASA Astrophysics Data System (ADS)
Benvenuto, F.; Schwartz, R.; Piana, M.; Massone, A. M.
2013-07-01
Context. This paper is concerned with the image reconstruction problem when the measured data are solar hard X-ray modulation profiles obtained from the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI) instrument. Aims: Our goal is to demonstrate that a statistical iterative method classically applied to the image deconvolution problem is very effective when utilized to analyze count modulation profiles in solar hard X-ray imaging based on rotating modulation collimators. Methods: The algorithm described in this paper solves the maximum likelihood problem iteratively and encodes a positivity constraint into the iterative optimization scheme. The result is therefore a classical expectation maximization method, this time applied not to an image deconvolution problem but to image reconstruction from count modulation profiles. The technical reason that makes our implementation particularly effective in this application is the use of a very reliable stopping rule, which is able to regularize the solution while providing, at the same time, a very satisfactory Cash statistic (C-statistic). Results: The method is applied both to reproduce synthetic flaring configurations and to reconstruct images from experimental data corresponding to three real events. In the second case, expectation maximization shows accuracy comparable to that of Pixon image reconstruction at a notably reduced computational burden, and better fidelity to the measurements than CLEAN at comparable computational effectiveness. Conclusions: If optimally stopped, expectation maximization represents a very reliable method for image reconstruction in the RHESSI context when count modulation profiles are used as input data.
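The core positivity-preserving update of such an expectation-maximization scheme is the familiar multiplicative (Richardson-Lucy-type) iteration for Poisson counts. A generic sketch follows, in which the response matrix H mapping image pixels to expected modulation-profile counts is hypothetical, and the paper's C-statistic stopping rule is replaced by a fixed iteration count.

```python
import numpy as np

def ml_em(H, counts, n_iter=100, x0=None):
    """ML-EM for Poisson data: counts ~ Poisson(H @ x), with x >= 0.
    The multiplicative update preserves positivity automatically."""
    m, n = H.shape
    x = np.full(n, counts.mean()) if x0 is None else x0.copy()
    norm = H.sum(axis=0)                          # per-pixel sensitivity
    for _ in range(n_iter):
        expected = H @ x                          # predicted modulation profile
        ratio = counts / np.maximum(expected, 1e-12)
        x = x * (H.T @ ratio) / np.maximum(norm, 1e-12)
    return x
```

In the paper, iteration is halted by the stopping rule rather than run to convergence; stopping early is what regularizes the otherwise noise-amplifying maximum-likelihood solution.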
From Mr. Wameke to Mr. Rivelle to Mr. Woodman: Images of Principals in Film and Television.
ERIC Educational Resources Information Center
Glanz, Jeffrey
Despite burgeoning literature that acknowledges the importance of the principalship in achieving and maintaining school effectiveness, principals have been depicted unfavorably in film and television as insecure autocrats, petty bureaucrats, and classic buffoons. This paper presents findings of a study that not only catalogued images of principals…
Task-driven dictionary learning.
Mairal, Julien; Bach, Francis; Ponce, Jean
2012-04-01
Modeling data with linear combinations of a few elements from a learned dictionary has been the focus of much recent research in machine learning, neuroscience, and signal processing. For signals such as natural images that admit such sparse representations, it is now well established that these models are well suited to restoration tasks. In this context, learning the dictionary amounts to solving a large-scale matrix factorization problem, which can be done efficiently with classical optimization tools. The same approach has also been used for learning features from data for other purposes, e.g., image classification, but tuning the dictionary in a supervised way for these tasks has proven to be more difficult. In this paper, we present a general formulation for supervised dictionary learning adapted to a wide variety of tasks, and present an efficient algorithm for solving the corresponding optimization problem. Experiments on handwritten digit classification, digital art identification, nonlinear inverse image problems, and compressed sensing demonstrate that our approach is effective in large-scale settings, and is well suited to supervised and semi-supervised classification, as well as regression tasks for data that admit sparse representations.
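The classical (unsupervised) dictionary learning problem that this task-driven formulation builds on can be sketched with standard tooling. The scikit-learn call below is our illustrative choice, not the authors' solver, and the supervised, task-driven step (differentiating through the sparse coding to tune the dictionary for a downstream loss) is not shown.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

# Toy data: rows are signals (e.g. vectorized image patches), values hypothetical.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 64))

# Classical (unsupervised) dictionary learning with an l1 sparsity penalty.
dico = MiniBatchDictionaryLearning(n_components=100, alpha=1.0,
                                   transform_algorithm="lasso_lars")
codes = dico.fit(X).transform(X)        # sparse codes: one row per signal
D = dico.components_                    # learned dictionary atoms (100 x 64)
```

In the task-driven setting, the codes feed a supervised predictor and the dictionary is updated by the gradient of the task loss, which is exactly the harder optimization the abstract refers to.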
Zang, Emerson; Brandes, Susanne; Tovar, Miguel; Martin, Karin; Mech, Franziska; Horbert, Peter; Henkel, Thomas; Figge, Marc Thilo; Roth, Martin
2013-09-21
The majority of today's antimicrobial therapeutics are derived from secondary metabolites produced by Actinobacteria. While it is generally assumed that less than 1% of Actinobacteria species from soil habitats have been cultivated so far, classic screening approaches fail to supply new substances, often due to limited throughput and frequent rediscovery of already known strains. To overcome these restrictions, we implement high-throughput cultivation of soil-derived Actinobacteria in microfluidic pL-droplets, generating more than 600,000 pure cultures per hour from a spore suspension that can subsequently be incubated for days to weeks. Moreover, we introduce triggered imaging with real-time image-based droplet classification as a novel universal method for pL-droplet sorting. Growth-dependent droplet sorting at frequencies above 100 Hz is performed for label-free enrichment and extraction of microcultures. The combination of cultivation of Actinobacteria in pL-droplets with real-time detection of growing Actinobacteria has great potential in screening for as yet unknown species as well as their undiscovered natural products.
Towards an Imaging Mid-Infrared Heterodyne Spectrometer
NASA Technical Reports Server (NTRS)
Hewagama, T.; Aslam, S.; Jones, H.; Kostiuk, T.; Villanueva, G.; Roman, P.; Shaw, G. B.; Livengood, T.; Allen, J. E.
2012-01-01
We are developing a concept for a compact, low-mass, low-power, mid-infrared (MIR; 5–12 µm) imaging heterodyne spectrometer that incorporates fiber-optic coupling, a Quantum Cascade Laser (QCL) local oscillator, a photomixer array, and a Radio Frequency Software Defined Readout (RFSDR) for spectral analysis. Planetary Decadal Surveys have highlighted the need for miniaturized, robust, low-mass, and minimal-power remote sensing technologies for flight missions. The drive to miniaturize remote sensing spectroscopy and radiometry techniques has been a continuing process. The advent of MIR fibers, and of MEMS techniques for producing waveguides, has proven to be an important recent advancement for the miniaturization of infrared spectrometers. In conjunction with well-established photonics techniques, the miniaturization of spectrometers is transitioning from classic free-space optical systems to waveguide/fiber-based structures for light transport and for producing interference effects. By their very nature, these new devices are compact and lightweight. Mercury-Cadmium-Telluride (MCT) and Quantum Well Infrared Photodiode (QWIP) arrays for heterodyne applications are also being developed. Bulky electronics has been another barrier precluding the extension of heterodyne systems into imaging applications, and our RFSDR will address this aspect.
NASA Astrophysics Data System (ADS)
Moore, J. M.; Grundy, W. M.; Spencer, J. R.; McKinnon, W. B.; Cruikshank, D. P.; White, O. L.; Umurhan, O. M.; Beyer, R. A.; Singer, K. N.; Schenk, P.; Stern, A.; Weaver, H. A., Jr.; Olkin, C.
2017-12-01
With the New Horizons encounter on 1 January 2019, 2014 MU69 will be the first small Kuiper belt object to be studied in detail from a spacecraft. The prospect that the cold classical population, which includes 2014 MU69, may represent a primordial, in situ population is exciting. Indeed, as we have learned just how complex and dynamic the early Solar System was, the cold classical population of the Kuiper belt has emerged as a singular candidate for a fundamentally unaltered original planetesimal population. MU69 in particular provides a unique opportunity to explore the disk processes and chemistry of the primordial solar nebula. As such, compositional measurements during the NH flyby are of paramount importance. So is high-resolution imaging of shape and structure, as the intermediate size of MU69 (much smaller than Pluto but much larger than a typical comet) may show signs of its accretion from much smaller bodies (layers, pebbles, lobes, etc., in the manner of 67P/C-G), or alternatively, derivation via the collisional fragmentation of a larger body if KBOs are "born big". MU69 may also be big enough to show signs of internal evolution driven by radiogenic heat from 26Al decay, if it accreted early enough and fast enough. The size of MU69 (20 - 40 km) places it in a class that has the potential to harbor unusual, and in some cases possibly active, surface geological processes: several small satellites of similar size, including Helene and Epimetheus, display what appears to be fine-grained material covering large portions of their surfaces, and the surface of Phobos displays an unusual system of parallel grooves. Invariably, these intriguing surface features are only clearly defined at imaging resolutions of at least tens of meters per pixel. The best images of MU69 are planned to have resolutions of 20 - 40 m/pixel at a phase angle range of 40 - 70°. We also plan color imaging in 4 channels at 0.4 to 1 µm at 200 - 500 m/pixel, and 256-channel spectroscopy from 1.25 to 2.5 µm at 1 - 4 km/pixel. Ices such as H2O, NH3, CO2, and CH3OH would be stable and can be detected and mapped if they are exposed at the surface. It will be especially instructive to compare with Cassini VIMS spectra of Phoebe, thought to be a captured outer solar system planetesimal that formed in a nebular environment related to where MU69 formed.
Larsson, Emanuel; Martin, Sabine; Lazzarini, Marcio; Tromba, Giuliana; Missbach-Guentner, Jeannine; Pinkert-Leetsch, Diana; Katschinski, Dörthe M.; Alves, Frauke
2017-01-01
The small size of the adult and developing mouse heart poses a great challenge for imaging in preclinical research. The aim of the study was to establish a phosphotungstic acid (PTA) ex-vivo staining approach that efficiently enhances the x-ray attenuation of soft tissue to allow high-resolution 3D visualization of mouse hearts by synchrotron-radiation-based μCT (SRμCT) and classical μCT. We demonstrate that SRμCT of PTA-stained mouse hearts ex vivo allows imaging of the cardiac atria, ventricles, and myocardium, especially its fibre structure and vessel walls, in great detail, and furthermore enables the depiction of growth and anatomical changes during distinct developmental stages of hearts in mouse embryos. Our x-ray-based virtual histology approach is not limited to SRμCT, as it does not require monochromatic and/or coherent x-ray sources and, even more importantly, can be combined with conventional histological procedures. Furthermore, it permits volumetric measurements, as we show for the assessment of plaque volumes in the aortic valve region of mice from an ApoE-/- mouse model. Subsequently, Masson-Goldner trichrome staining of paraffin sections of PTA-stained samples revealed intact collagen and muscle fibres, and positive immunohistochemical staining of CD31 on endothelial cells illustrates that our approach does not preclude immunohistochemical analysis. The feasibility of scanning hearts already embedded in paraffin ensured a 100% correlation between virtual cut sections of the CT data sets and histological heart sections of the same sample, and may in the future allow the cutting process to be guided to specific regions of interest. In summary, since our CT-based virtual histology approach is a powerful tool for the 3D depiction of morphological alterations in hearts and embryos at high resolution and can be combined with classical histological analysis, it may be used in preclinical research to unravel structural alterations in various heart diseases. PMID:28178293
Degraded Chinese rubbing images thresholding based on local first-order statistics
NASA Astrophysics Data System (ADS)
Wang, Fang; Hou, Ling-Ying; Huang, Han
2017-06-01
Segmenting Chinese characters from degraded document images is a necessary step in optical character recognition (OCR); however, it is challenging due to the various kinds of noise in such images. In this paper, we present three local first-order statistics methods for adaptive thresholding that segment the text and non-text regions of Chinese rubbing images. The segmentation results were evaluated both by visual inspection and numerically. In experiments, the methods obtained better results than classical techniques in the binarization of real Chinese rubbing images and the PHIBD 2012 dataset.
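The abstract does not specify the exact statistics used. A standard member of this family is the Niblack-style threshold built from the local mean and standard deviation, sketched below; the window size and the weight k are hypothetical tuning parameters.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def niblack_threshold(img, window=25, k=-0.2):
    """Local threshold T = m + k*s from first-order statistics (mean m,
    standard deviation s) in a sliding window; pixels darker than T are
    classed as text (assuming dark ink on a light background)."""
    img = img.astype(float)
    m = uniform_filter(img, window)               # local mean
    m2 = uniform_filter(img * img, window)        # local mean of squares
    s = np.sqrt(np.maximum(m2 - m * m, 0.0))      # local standard deviation
    return img < (m + k * s)                      # boolean text mask
```

Variants such as Sauvola's method rescale the s term to be more robust to the low-contrast, stained backgrounds typical of rubbing images.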
Bekhtereva, Valeria; Müller, Matthias M
2017-10-01
Is color a critical feature in emotional content extraction and involuntary attentional orienting toward affective stimuli? Here we used briefly presented emotional distractors to investigate the extent to which color information can influence the time course of attentional bias in early visual cortex. While participants performed a demanding visual foreground task, complex unpleasant and neutral background images were displayed in color or grayscale format for a short period of 133 ms and were immediately masked. Such a short presentation poses a challenge for visual processing. In the visual detection task, participants attended to flickering squares that elicited the steady-state visual evoked potential (SSVEP), allowing us to analyze the temporal dynamics of the competition for processing resources in early visual cortex. Concurrently we measured the visual event-related potentials (ERPs) evoked by the unpleasant and neutral background scenes. The results showed (a) that the distraction effect was greater with color than with grayscale images and (b) that it lasted longer with colored unpleasant distractor images. Furthermore, classical and mass-univariate ERP analyses indicated that, when presented in color, emotional scenes elicited more pronounced early negativities (N1-EPN) relative to neutral scenes, than when the scenes were presented in grayscale. Consistent with neural data, unpleasant scenes were rated as being more emotionally negative and received slightly higher arousal values when they were shown in color than when they were presented in grayscale. Taken together, these findings provide evidence for the modulatory role of picture color on a cascade of coordinated perceptual processes: by facilitating the higher-level extraction of emotional content, color influences the duration of the attentional bias to briefly presented affective scenes in lower-tier visual areas.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palmieri, Roberta; Bonifazi, Giuseppe; Serranti, Silvia, E-mail: silvia.serranti@uniroma1.it
Highlights: • A recycling-oriented characterization of end-of-life mobile phones was carried out. • Characterization was developed in a zero-waste perspective, aiming to recover all the mobile phone materials. • Plastic frames and printed circuit boards were analyzed by electronic and chemical imaging. • Suitable milling/classification strategies were set up to define specialized pre-concentrated streams. • The proposed approach can improve the recovery of polymers, base/precious metals, rare earths and critical raw materials. - Abstract: This study characterizes the composition of plastic frames and printed circuit boards from end-of-life mobile phones. This knowledge may help define an optimal processing strategy for using these items as potential raw materials. Correct handling of such waste is essential for its further "sustainable" recovery, especially to maximize the extraction of base, rare and precious metals while minimizing the environmental impact of the entire process chain. A combination of electronic and chemical imaging techniques was thus examined, applied and critically evaluated in order to optimize the processing, through the identification and topological assessment of the materials of interest and their quantitative distribution. To reach this goal, end-of-life mobile phone derived wastes were systematically characterized adopting both "traditional" (e.g. scanning electron microscopy combined with microanalysis and Raman spectroscopy) and innovative (e.g. hyperspectral imaging in the short-wave infrared field) techniques, with reference to frames and printed circuit boards. Results showed that the combination of the two approaches (i.e. traditional and innovative) could dramatically improve the set-up of recycling strategies, as well as the recovery of final products.
High frequency oscillations are associated with cognitive processing in human recognition memory.
Kucewicz, Michal T; Cimbalnik, Jan; Matsumoto, Joseph Y; Brinkmann, Benjamin H; Bower, Mark R; Vasoli, Vincent; Sulc, Vlastimil; Meyer, Fred; Marsh, W R; Stead, S M; Worrell, Gregory A
2014-08-01
High frequency oscillations are associated with normal brain function, but also increasingly recognized as potential biomarkers of the epileptogenic brain. Their role in human cognition has been predominantly studied in classical gamma frequencies (30-100 Hz), which reflect neuronal network coordination involved in attention, learning and memory. Invasive brain recordings in animals and humans demonstrate that physiological oscillations extend beyond the gamma frequency range, but their function in human cognitive processing has not been fully elucidated. Here we investigate high frequency oscillations spanning the high gamma (50-125 Hz), ripple (125-250 Hz) and fast ripple (250-500 Hz) frequency bands using intracranial recordings from 12 patients (five males and seven females, age 21-63 years) during memory encoding and recall of a series of affectively charged images. Presentation of the images induced high frequency oscillations in all three studied bands within the primary visual, limbic and higher order cortical regions in a sequence consistent with the visual processing stream. These induced oscillations were detected on individual electrodes localized in the amygdala, hippocampus and specific neocortical areas, revealing discrete oscillations of characteristic frequency, duration and latency from image presentation. Memory encoding and recall significantly modulated the number of induced high gamma, ripple and fast ripple detections in the studied structures, which was greater in the primary sensory areas during the encoding (Wilcoxon rank sum test, P = 0.002) and in the higher-order cortical association areas during the recall (Wilcoxon rank sum test, P = 0.001) of memorized images. Furthermore, the induced high gamma, ripple and fast ripple responses discriminated the encoded and the affectively charged images. In summary, our results show that high frequency oscillations, spanning a wide range of frequencies, are associated with memory processing and generated along distributed cortical and limbic brain regions. These findings support an important role for fast network synchronization in human cognition and extend our understanding of normal physiological brain activity during memory processing.
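The abstract does not give the detection algorithm itself; a common generic approach to counting induced oscillations in one frequency band (bandpass filter, Hilbert envelope, amplitude threshold) might be sketched as follows, with every cutoff and threshold an illustrative assumption rather than the study's actual pipeline:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def count_band_events(x, fs, band=(125, 250), n_sd=3.0, min_cycles=4):
    """Count candidate oscillatory events in one band of a single trace.

    Bandpass -> Hilbert envelope -> threshold at mean + n_sd * SD, keeping
    only runs lasting at least min_cycles of the band's center frequency.
    All parameter values here are illustrative assumptions.
    """
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    env = np.abs(hilbert(filtfilt(b, a, x)))
    above = env > env.mean() + n_sd * env.std()
    min_len = int(min_cycles * fs / np.mean(band))
    idx = np.flatnonzero(above)
    if idx.size == 0:
        return 0
    # split supra-threshold samples into contiguous runs, count long ones
    runs = np.split(idx, np.where(np.diff(idx) > 1)[0] + 1)
    return sum(len(r) >= min_len for r in runs)
```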
Combining local scaling and global methods to detect soil pore space
NASA Astrophysics Data System (ADS)
Martin-Sotoca, Juan Jose; Saa-Requejo, Antonio; Grau, Juan B.; Tarquis, Ana M.
2017-04-01
The characterization of the spatial distribution of soil pore structures is essential to obtain different parameters used in several models related to water flow and/or microbial growth processes. The first step in pore structure characterization is obtaining soil images that best approximate reality. Over the last decade, major technological advances in X-ray computed tomography (CT) have allowed for the investigation and reconstruction of natural porous media architectures at very fine scales. The subsequent step is delimiting the pore structure (pore space) from the CT soil images by applying a threshold. Often, CT-scan images show low contrast at the solid-void interface, which makes this step difficult. Different delimitation methods can result in different spatial distributions of pores, influencing the parameters used in the models. Recently, a new local segmentation method using local greyscale value (GV) concentration variabilities, based on fractal concepts, has been presented. This method creates singularity maps to measure the GV concentration at each point. The C-A method was combined with the singularity map approach (Singularity-CA method) to define local thresholds that can be applied to binarize CT images. Comparing this method with classical methods, such as Otsu and Maximum Entropy, we observed that more pores can be detected, mainly due to its ability to amplify anomalous concentrations. However, it delineated many small pores that were incorrect. In this work, we present an improved version of the Singularity-CA method that avoids this problem, essentially by combining it with the classical global methods. References: Martín-Sotoca, J.J., A. Saa-Requejo, J.B. Grau, A.M. Tarquis. New segmentation method based on fractal properties using singularity maps. Geoderma, 287, 40-53, 2017. Martín-Sotoca, J.J., A. Saa-Requejo, J.B. Grau, A.M. Tarquis. Local 3D segmentation of soil pore space based on fractal properties using singularity maps. Geoderma, http://dx.doi.org/10.1016/j.geoderma.2016.11.029. Torre, Iván G., Juan C. Losada, A.M. Tarquis. Multiscaling properties of soil images. Biosystems Engineering, http://dx.doi.org/10.1016/j.biosystemseng.2016.11.006.
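For reference, the classical global Otsu method mentioned above can be written compactly; this is a generic sketch of that baseline only, not of the Singularity-CA combination itself:

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Classical global Otsu threshold: maximize between-class variance."""
    hist, edges = np.histogram(img.ravel(), bins=nbins)
    p = hist.astype(np.float64) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                  # class-0 weight at each cut
    w1 = 1.0 - w0
    mu0 = np.cumsum(p * centers)       # unnormalized class-0 mean
    mu_t = mu0[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mu_t * w0 - mu0) ** 2 / (w0 * w1)
    between[~np.isfinite(between)] = 0.0
    return centers[np.argmax(between)]

# pores = img < otsu_threshold(img)  # voids are darker in CT greyscale
```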
Towards a Unified Approach to Information Integration - A review paper on data/information fusion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whitney, Paul D.; Posse, Christian; Lei, Xingye C.
2005-10-14
Information or data fusion from different sources is ubiquitous in many applications, from epidemiology, medicine, biology, politics, and intelligence to military applications. Data fusion involves integration of spectral, imaging, text, and many other sensor data. For example, in epidemiology, information is often obtained from many studies conducted by different researchers in different regions with different protocols. In the medical field, the diagnosis of a disease is often based on imaging (MRI, X-ray, CT), clinical examination, and lab results. In the biological field, information is obtained from studies conducted on many different species. In the military field, information is obtained from radar sensors, text messages, chemical-biological sensors, acoustic sensors, optical warning and many other sources. Many methodologies are used in the data integration process, ranging from classical and Bayesian methods to evidence-based expert systems. The implementation of data integration ranges from pure software designs to mixtures of software and hardware. In this review we summarize the methodologies and implementations of the data fusion process, and illustrate in more detail the methodologies involved in three examples. We propose a unified multi-stage and multi-path mapping approach to the data fusion process, and point out future prospects and challenges.
Xu, Tiantian; Feng, Yuanjing; Wu, Ye; Zeng, Qingrun; Zhang, Jun; He, Jianzhong; Zhuge, Qichuan
2017-01-01
Diffusion-weighted magnetic resonance imaging is a non-invasive imaging method that has been increasingly used in neuroscience imaging over the last decade. Partial volume effects (PVEs) exist in the sampled signal for many physical and practical reasons, and they lead to inaccurate fiber imaging. We overcome the influence of PVEs by separating the isotropic signal from the diffusion-weighted signal, which provides a more accurate estimation of fiber orientations. In this work, we use a novel response function (RF) and the corresponding fiber orientation distribution function (fODF) to construct different signal models, in which the fODF is represented using dictionary basis functions. We then put forward a new index, Piso, which is a part of the fODF, to quantify white and gray matter. The classic Richardson-Lucy (RL) model is widely used in digital image processing to solve spherical deconvolution problems for which direct least-squares inversion is highly ill-posed. Here, we propose an innovative model integrating the RL model with spatial regularization to solve the suggested dual models, which improves noise resistance and imaging accuracy. Experimental results on simulated and real data show that the proposed method, which we call iRL, robustly reconstructs a more accurate fODF, and that the quantitative index Piso performs better than fractional anisotropy and generalized fractional anisotropy.
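The abstract does not write out the RL update; in generic matrix form, with an assumed dictionary A mapping fODF coefficients to the predicted signal, the classic multiplicative Richardson-Lucy iteration (without the paper's spatial regularization term) can be sketched as:

```python
import numpy as np

def richardson_lucy(signal, A, n_iter=100, eps=1e-10):
    """Classic multiplicative Richardson-Lucy iteration for s ~ A @ f.

    f stays non-negative by construction; A is assumed to hold rotated
    copies of the response function (an assumption of this sketch).
    """
    f = np.full(A.shape[1], signal.mean() / A.shape[1])  # flat init
    At = A.T
    for _ in range(n_iter):
        pred = A @ f + eps                    # forward model
        f *= (At @ (signal / pred)) / (At @ np.ones_like(signal) + eps)
    return f
```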
Demonstration of the CDMA-mode CAOS smart camera.
Riza, Nabeel A; Mazhar, Mohsin A
2017-12-11
Demonstrated is the code division multiple access (CDMA)-mode coded access optical sensor (CAOS) smart camera suited for bright target scenarios. Deploying a silicon CMOS sensor and a silicon point detector within a digital micro-mirror device (DMD)-based spatially isolating hybrid camera design, this smart imager first engages the DMD staring mode with a controlled high optical attenuation factor of 200 on the scene irradiance to provide a classic unsaturated CMOS sensor-based image for target intelligence gathering. Next, this CMOS sensor image data is used to acquire a more robust, un-attenuated true target image of a focused zone using the time-modulated CDMA mode of the CAOS camera. Using four different bright-light test target scenes, successfully demonstrated is a proof-of-concept visible-band CAOS smart camera operating in the CDMA mode using Walsh-design CAOS pixel codes of up to 4096 bits in length with a maximum 10 kHz code bit rate, giving a 0.4096 s CAOS frame acquisition time. A 16-bit analog-to-digital converter (ADC) with time-domain correlation digital signal processing (DSP) generates the CDMA-mode images with a 3600 CAOS pixel count and a best spatial resolution of one square micro-mirror pixel of 13.68 μm side. The CDMA mode of the CAOS smart camera is suited for applications where robust high dynamic range (DR) imaging is needed for un-attenuated, un-spoiled bright-light spectrally diverse targets.
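The CDMA encode/decode principle can be illustrated with a toy Walsh-code simulation; the code length and pixel count below are assumptions of the sketch, not the camera's parameters:

```python
import numpy as np
from scipy.linalg import hadamard

# Each CAOS pixel is time-modulated by its own Walsh (Hadamard row) code;
# the point detector sees the coded sum. Correlating the detector time
# series with each code recovers that pixel's irradiance (noise-free toy).
n = 64                            # code length = pixel count (assumed)
H = hadamard(n)                   # +/-1 Walsh-Hadamard codes
pixels = np.random.rand(n)        # true pixel irradiances
detector = H.T @ pixels           # detector time series, one sample per bit
recovered = (H @ detector) / n    # correlation decode, since H @ H.T = n*I
assert np.allclose(recovered, pixels)
```

The decode works because the Walsh codes are mutually orthogonal, so each pixel's contribution survives its own correlation while all others cancel.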
Finding regions of interest in pathological images: an attentional model approach
NASA Astrophysics Data System (ADS)
Gómez, Francisco; Villalón, Julio; Gutierrez, Ricardo; Romero, Eduardo
2009-02-01
This paper introduces an automated method for finding diagnostic regions-of-interest (RoIs) in histopathological images. This method is based on the cognitive process of visual selective attention that arises during a pathologist's image examination. Specifically, it emulates the first examination phase, which consists of a coarse search for tissue structures at a "low zoom" to separate the image into relevant regions. The pathologist's cognitive performance depends on inherent image visual cues - bottom-up information - and on acquired clinical medicine knowledge - top-down mechanisms. Our pathologist's visual attention model integrates these two components. The selected bottom-up information includes local low-level features such as intensity, color, orientation and texture information. Top-down information is related to the anatomical and pathological structures known by the expert. A coarse approximation to these structures is achieved by an oversegmentation algorithm, inspired by psychological grouping theories. The algorithm parameters are learned from an expert pathologist's segmentation. Top-down and bottom-up integration is achieved by calculating a unique index for each of the low-level characteristics inside the region. Relevancy is estimated as a simple average of these indexes. Finally, a binary decision rule defines whether or not a region is interesting. The method was evaluated on a set of 49 images using a perceptually-weighted evaluation criterion, finding a quality gain of 3 dB compared with a classical bottom-up model of attention.
Spatial/Spectral Identification of Endmembers from AVIRIS Data using Mathematical Morphology
NASA Technical Reports Server (NTRS)
Plaza, Antonio; Martinez, Pablo; Gualtieri, J. Anthony; Perez, Rosa M.
2001-01-01
During the last several years, a number of airborne and satellite hyperspectral sensors have been developed or improved for remote sensing applications. Imaging spectrometry allows the detection of materials, objects and regions in a particular scene with a high degree of accuracy. Hyperspectral data typically consist of hundreds of thousands of spectra, so the analysis of this information is a key issue. Mathematical morphology theory is a widely used nonlinear technique for image analysis and pattern recognition. Although it is especially well suited to segmenting binary or grayscale images with irregular and complex shapes, its application to the classification/segmentation of multispectral or hyperspectral images has been quite rare. In this paper, we discuss a new, completely automated methodology to find endmembers in the hyperspectral data cube using mathematical morphology. The extension of classic morphology to the hyperspectral domain allows us to integrate spectral and spatial information in the analysis process. In Section 3, some basic concepts about mathematical morphology and the technical details of our algorithm are provided. In Section 4, the accuracy of the proposed method is tested by its application to real hyperspectral data obtained from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS). Some details about these data and reference results, obtained by well-known endmember extraction techniques, are provided in Section 2. Finally, in Section 5 we present the main conclusions.
Fault-tolerant NAND-flash memory module for next-generation scientific instruments
NASA Astrophysics Data System (ADS)
Lange, Tobias; Michel, Holger; Fiethe, Björn; Michalik, Harald; Walter, Dietmar
2015-10-01
Remote sensing instruments on today's space missions deliver a large amount of data, which is typically evaluated on the ground. Especially for deep space missions, the telemetry downlink is very limited, which creates the need for scientific evaluation, and thereby a reduction of data volume, already on board the spacecraft. A demanding example is the Polarimetric and Helioseismic Imager (PHI) instrument on Solar Orbiter. To enable on-board offline processing for data reduction, the instrument has to be equipped with a high-capacity memory module. The module is based on non-volatile NAND flash technology, which requires more advanced operation than volatile DRAM. Unlike classical mass memories, the module is integrated into the instrument and allows readback of data for processing. The architecture and safe operation of this kind of memory module are described in this paper.
Improved Contrast-Enhanced Ultrasound Imaging With Multiplane-Wave Imaging.
Gong, Ping; Song, Pengfei; Chen, Shigao
2018-02-01
Contrast-enhanced ultrasound (CEUS) imaging has great potential for use in new ultrasound clinical applications such as myocardial perfusion imaging and abdominal lesion characterization. In CEUS imaging, contrast agents (i.e., microbubbles) are used to improve contrast between blood and tissue because of their high nonlinearity under low ultrasound pressure. However, the quality of CEUS imaging sometimes suffers from a low signal-to-noise ratio (SNR) in deeper imaging regions when a low mechanical index (MI) is used to avoid microbubble disruption, especially for imaging at off-resonance transmit frequencies. In this paper, we propose a new strategy of combining CEUS sequences with the recently proposed multiplane-wave (MW) compounding method to improve the SNR of CEUS in deeper imaging regions without increasing MI or sacrificing frame rate. The MW-CEUS method emits multiple Hadamard-coded CEUS pulses in each transmission event (i.e., pulse-echo event). The received echo signals first undergo fundamental bandpass filtering (i.e., the filter is centered on the transmit frequency) to eliminate the microbubble's second-harmonic signals because they cannot be encoded by pulse inversion. The filtered signals are then Hadamard decoded and realigned in fast time to recover the signals as they would have been obtained using classic CEUS pulses, followed by designed recombination to cancel the linear tissue responses. The MW-CEUS method significantly improved contrast-to-tissue ratio and SNR of CEUS imaging by transmitting longer coded pulses. The image resolution was also preserved. The microbubble disruption ratio and motion artifacts in MW-CEUS were similar to those of classic CEUS imaging. In addition, the MW-CEUS sequence can be adapted to other transmission coding formats. These properties of MW-CEUS can potentially facilitate CEUS imaging for many clinical applications, especially assessing deep abdominal organs or the heart.
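The Hadamard encode/decode step at the heart of MW compounding can be illustrated with a toy simulation (pulse inversion and the fast-time realignment described above are omitted; sizes and noise level are illustrative assumptions):

```python
import numpy as np
from scipy.linalg import hadamard

# Multiplane-wave idea (sketch): N transmission events each fire N coded
# pulses with Hadamard signs. Decoding the N received traces recovers the
# echo of each individual pulse with noise averaged over N events,
# i.e. roughly a sqrt(N) SNR gain at unchanged frame rate.
N, T = 4, 1024
H = hadamard(N)
echoes = np.random.randn(N, T)            # per-pulse echoes (stand-in data)
received = H @ echoes                     # N coded pulse-echo events
received += 0.5 * np.random.randn(N, T)   # independent noise per event
decoded = (H.T @ received) / N            # Hadamard decode
# decoded[k] estimates echoes[k]; noise power is reduced by a factor N
```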
Formation of Penumbra in a Sample of Active Regions Observed by the SDO Satellite
NASA Astrophysics Data System (ADS)
Murabito, Mariarita; Zuccarello, Francesca; Guglielmino, Salvo L.; Romano, Paolo
2018-03-01
Recently, high-resolution observations improved our understanding of the penumbra formation process around sunspots. In particular, two aspects have been carefully investigated: whether the settlement of the penumbra can occur between the main opposite magnetic polarities where new magnetic flux is still emerging, and the establishment of the Evershed flow. In this paper, we present the analysis of twelve active regions (ARs) where both the penumbra formation and the onset of the Evershed flow were observed. We used data acquired by the Helioseismic and Magnetic Imager (HMI) instrument on board the Solar Dynamics Observatory (SDO) satellite, analyzing continuum images, magnetograms, and Dopplergrams of the selected ARs. The results obtained in our sample provided the following information about the stable settlement of the penumbra: eight spots formed the first stable penumbral sector in the region between the two opposite polarities, and nine spots formed it on the opposite side. Moreover, eleven sunspots showed an inverse Evershed flow (i.e., a plasma motion directed toward the protospot border) before the penumbra formation, which changes within 1–6 hr into the classical Evershed flow as soon as the penumbra forms. Comparing our results with recent observations, we are able to discriminate between the different ways of penumbra formation. Moreover, we suggest that the change from the inverse Evershed flow, visible before the penumbra appears, into the classical Evershed flow may be a signature of the formation of penumbral filaments.
Solving the Mystery of Galaxy Bulges and Bulge Substructure
NASA Astrophysics Data System (ADS)
Erwin, Peter
2017-08-01
Understanding galaxy bulges is crucial for understanding galaxy evolution and the growth of supermassive black holes (SMBHs). Recent studies have shown that at least some - perhaps most - disk-galaxy bulges are actually composite structures, with both classical-bulge (spheroid) and pseudobulge (disky) components; this calls into question the standard practice of using simple, low-resolution bulge/disk decompositions to determine spheroid and SMBH mass functions. We propose WFC3 optical and near-IR imaging of a volume- and mass-limited sample of local disk galaxies to determine the full range of pure-classical, pure-pseudobulge, and composite-bulge frequencies and parameters, including stellar masses for classical bulges, disky pseudobulges, and boxy/peanut-shaped bulges. We will combine this with ground-based spectroscopy to determine the stellar-kinematic and population characteristics of the different substructures revealed by our WFC3 imaging. This will help resolve growing uncertainties about the status and nature of bulges and their relation to SMBH masses, and will provide an essential local-universe reference for understanding bulge (and SMBH) formation and evolution.
MRI and clinical features of maple syrup urine disease: preliminary results in 10 cases.
Cheng, Ailan; Han, Lianshu; Feng, Yun; Li, Huimin; Yao, Rong; Wang, Dengbin; Jin, Biao
2017-01-01
We aimed to evaluate the magnetic resonance imaging (MRI) and clinical features of maple syrup urine disease (MSUD). This retrospective study consisted of 10 MSUD patients confirmed by genetic testing. All patients underwent brain MRI. Phenotype, genotype, and areas of brain injury on MRI were retrospectively reviewed. Six patients (60%) had the classic form of MSUD with BCKDHB mutation, three patients (30%) had the intermittent form (two with BCKDHA mutations and one with DBT mutation), and one patient (10%) had the thiamine-responsive form with DBT mutation. On diffusion-weighted imaging, nine cases presented restricted diffusion in myelinated areas, and one intermittent case with DBT mutation was normal. The classic form of MSUD involved the basal ganglia in six cases; the cerebellum, mesencephalon, pons, and supratentorial area in five cases; and the thalamus in four cases. The intermittent form involved the cerebellum, pons, and supratentorial area in two cases. The thiamine-responsive form involved the basal ganglia and supratentorial area. Our preliminary results indicate that patients with MSUD presented most commonly with the classic form and BCKDHB mutation, and displayed extensive brain injury on MRI.
Reflective ghost imaging through turbulence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hardy, Nicholas D.; Shapiro, Jeffrey H.
2011-12-15
Recent work has indicated that ghost imaging may have applications in standoff sensing. However, most theoretical work has addressed transmission-based ghost imaging. To be a viable remote-sensing system, the ghost imager needs to image rough-surfaced targets in reflection through long, turbulent optical paths. We develop, within a Gaussian-state framework, expressions for the spatial resolution, image contrast, and signal-to-noise ratio of such a system. We consider rough-surfaced targets that create fully developed speckle in their returns and Kolmogorov-spectrum turbulence that is uniformly distributed along all propagation paths. We address both classical and nonclassical optical sources, as well as a computational ghost imager.
Students' ideas about prismatic images: teaching experiments for an image-based approach
NASA Astrophysics Data System (ADS)
Grusche, Sascha
2017-05-01
Prismatic refraction is a classic topic in science education. To investigate how undergraduate students think about prismatic dispersion, and to see how they change their thinking when observing dispersed images, five teaching experiments were done and analysed according to the Model of Educational Reconstruction. For projection through a prism, the students used a 'split image projection' conceptualisation. For the view through a prism, this conceptualisation was not fruitful. Based on the observed images, six of seven students changed to a 'diverted image projection' conceptualisation. From a comparison between students' and scientists' ideas, teaching implications are derived for an image-based approach.
Developing students’ ideas about lens imaging: teaching experiments with an image-based approach
NASA Astrophysics Data System (ADS)
Grusche, Sascha
2017-07-01
Lens imaging is a classic topic in physics education. To guide students from their holistic viewpoint to the scientists’ analytic viewpoint, an image-based approach to lens imaging has recently been proposed. To study the effect of the image-based approach on undergraduate students’ ideas, teaching experiments are performed and evaluated using qualitative content analysis. Some of the students’ ideas have not been reported before, namely those related to blurry lens images, and those developed by the proposed teaching approach. To describe learning pathways systematically, a conception-versus-time coordinate system is introduced, specifying how teaching actions help students advance toward a scientific understanding.
NASA Astrophysics Data System (ADS)
Quirin, Sean Albert
The joint application of tailored optical Point Spread Functions (PSF) and estimation methods is an important tool for designing quantitative imaging and sensing solutions. By enhancing the information transfer encoded by the optical waves into an image, matched post-processing algorithms are able to complete tasks with improved performance relative to conventional designs. In this thesis, new engineered PSF solutions with image processing algorithms are introduced and demonstrated for quantitative imaging using information-efficient signal processing tools and/or optical-efficient experimental implementations. The use of a 3D engineered PSF, the Double-Helix (DH-PSF), is applied as one solution for three-dimensional, super-resolution fluorescence microscopy. The DH-PSF is a tailored PSF which was engineered to have enhanced information transfer for the task of localizing point sources in three dimensions. Both an information- and optical-efficient implementation of the DH-PSF microscope are demonstrated here for the first time. This microscope is applied to image single-molecules and micro-tubules located within a biological sample. A joint imaging/axial-ranging modality is demonstrated for application to quantifying sources of extended transverse and axial extent. The proposed implementation has improved optical-efficiency relative to prior designs due to the use of serialized cycling through select engineered PSFs. This system is demonstrated for passive-ranging, extended Depth-of-Field imaging and digital refocusing of random objects under broadband illumination. Although the serialized engineered PSF solution is an improvement over prior designs for the joint imaging/passive-ranging modality, it requires the use of multiple PSFs---a potentially significant constraint. Therefore an alternative design is proposed, the Single-Helix PSF, where only one engineered PSF is necessary and the chromatic behavior of objects under broadband illumination provides the necessary information transfer. The matched estimation algorithms are introduced along with an optically-efficient experimental system to image and passively estimate the distance to a test object. An engineered PSF solution is proposed for improving the sensitivity of optical wave-front sensing using a Shack-Hartmann Wave-front Sensor (SHWFS). The performance limits of the classical SHWFS design are evaluated and the engineered PSF system design is demonstrated to enhance performance. This system is fabricated and the mechanism for additional information transfer is identified.
Chojniak, Rubens; Carneiro, Dominique Piacenti; Moterani, Gustavo Simonetto Peres; Duarte, Ivone da Silva; Bitencourt, Almir Galvão Vieira; Muglia, Valdair Francisco; D'Ippolito, Giuseppe
2017-01-01
Objective To map the different methods for diagnostic imaging instruction at medical schools in Brazil. Materials and Methods In this cross-sectional study, a questionnaire was sent to each of the coordinators of 178 Brazilian medical schools. The following characteristics were assessed: teaching model; total course hours; infrastructure; numbers of students and professionals involved; themes addressed; diagnostic imaging modalities covered; and education policies related to diagnostic imaging. Results Of the 178 questionnaires sent, 45 (25.3%) were completed and returned. Of those 45 responses, 17 (37.8%) were from public medical schools, whereas 28 (62.2%) were from private medical schools. Among the 45 medical schools evaluated, the method of diagnostic imaging instruction was modular at 21 (46.7%), classic (independent discipline) at 13 (28.9%), hybrid (classical and modular) at 9 (20.0%), and none of the preceding at 3 (6.7%). Diagnostic imaging is part of the formal curriculum at 36 (80.0%) of the schools, an elective course at 3 (6.7%), and included within another modality at 6 (13.3%). Professors involved in diagnostic imaging teaching are radiologists at 43 (95.5%) of the institutions. Conclusion The survey showed that medical courses in Brazil tend to offer diagnostic imaging instruction in courses that include other content and at different time points during the course. Radiologists are extensively involved in undergraduate medical education, regardless of the teaching methodology employed at the institution. PMID:28298730
Quantum Stabilizer Codes Can Realize Access Structures Impossible by Classical Secret Sharing
NASA Astrophysics Data System (ADS)
Matsumoto, Ryutaroh
We show a simple example of a secret sharing scheme encoding a classical secret into quantum shares that can realize an access structure impossible by classical information processing with limitations on the size of each share. The example is based on quantum stabilizer codes.
Classical molecular dynamics simulation of electronically non-adiabatic processes.
Miller, William H; Cotton, Stephen J
2016-12-22
Both classical and quantum mechanics (as well as hybrids thereof, i.e., semiclassical approaches) find widespread use in simulating dynamical processes in molecular systems. For large chemical systems, however, which involve potential energy surfaces (PES) of general/arbitrary form, it is usually the case that only classical molecular dynamics (MD) approaches are feasible, and their use is thus ubiquitous nowadays, at least for chemical processes involving dynamics on a single PES (i.e., within a single Born-Oppenheimer electronic state). This paper reviews recent developments in an approach which extends standard classical MD methods to the treatment of electronically non-adiabatic processes, i.e., those that involve transitions between different electronic states. The approach treats nuclear and electronic degrees of freedom (DOF) equivalently (i.e., by classical mechanics, thereby retaining the simplicity of standard MD), and provides "quantization" of the electronic states through a symmetrical quasi-classical (SQC) windowing model. The approach is seen to be capable of treating extreme regimes of strong and weak coupling between the electronic states, as well as accurately describing coherence effects in the electronic DOF (including the de-coherence of such effects caused by coupling to the nuclear DOF). A survey of recent applications is presented to illustrate the performance of the approach. Also described is a newly developed variation on the original SQC model (found universally superior to the original) and a general extension of the SQC model to obtain the full electronic density matrix (at no additional cost/complexity).
NASA Astrophysics Data System (ADS)
Feng, L.; Vaulin, R.; Hewitt, J. N.; Remillard, R.; Kaplan, D. L.; Murphy, Tara; Kudryavtseva, N.; Hancock, P.; Bernardi, G.; Bowman, J. D.; Briggs, F.; Cappallo, R. J.; Deshpande, A. A.; Gaensler, B. M.; Greenhill, L. J.; Hazelton, B. J.; Johnston-Hollitt, M.; Lonsdale, C. J.; McWhirter, S. R.; Mitchell, D. A.; Morales, M. F.; Morgan, E.; Oberoi, D.; Ord, S. M.; Prabu, T.; Udaya Shankar, N.; Srivani, K. S.; Subrahmanyan, R.; Tingay, S. J.; Wayth, R. B.; Webster, R. L.; Williams, A.; Williams, C. L.
2017-03-01
Many astronomical sources produce transient phenomena at radio frequencies, but the transient sky at low frequencies (<300 MHz) remains relatively unexplored. Blind surveys with new wide-field radio instruments are setting increasingly stringent limits on the transient surface density on various timescales. Although many of these instruments are limited by classical confusion noise from an ensemble of faint, unresolved sources, one can in principle detect transients below the classical confusion limit to the extent that the classical confusion noise is independent of time. We develop a technique for detecting radio transients that is based on temporal matched filters applied directly to time series of images, rather than relying on source-finding algorithms applied to individual images. This technique has well-defined statistical properties and is applicable to variable and transient searches for both confusion-limited and non-confusion-limited instruments. Using the Murchison Widefield Array as an example, we demonstrate that the technique works well on real data despite the presence of classical confusion noise, sidelobe confusion noise, and other systematic errors. We searched for transients lasting between 2 minutes and 3 months. We found no transients and set improved upper limits on the transient surface density at 182 MHz for flux densities between ˜20 and 200 mJy, providing the best limits to date for hour- and month-long transients.
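As a generic illustration of the idea (not the authors' pipeline), a temporal matched filter applied directly to an image time series might look like this sketch, with the template, noise model and threshold all assumptions:

```python
import numpy as np

def matched_filter_detect(cube, template, n_sigma=5.0):
    """Slide a temporal template over each pixel's time series.

    cube: (T, ny, nx) image time series, assumed noise-whitened per pixel;
    template: (L,) transient light curve shape. Returns a boolean map of
    pixels whose peak filter response exceeds n_sigma times its robust
    scatter. Thresholds are purely illustrative.
    """
    t = template - template.mean()
    t /= np.sqrt((t ** 2).sum())                 # unit-normalized template
    T, ny, nx = cube.shape
    flat = cube.reshape(T, -1)
    # correlate every pixel time series with the template at each lag
    out = np.array([t @ flat[i:i + len(t)] for i in range(T - len(t) + 1)])
    peak = out.max(axis=0)
    sigma = 1.4826 * np.median(np.abs(out - np.median(out, axis=0)), axis=0)
    return (peak > n_sigma * sigma).reshape(ny, nx)
```

Filtering the image time series directly, rather than running a source finder per image, is what lets the detection statistic stay well defined even when each individual image is confusion-limited.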
NASA Astrophysics Data System (ADS)
Belfort, Benjamin; Weill, Sylvain; Lehmann, François
2017-04-01
A novel, non-invasive imaging technique that determines 2D maps of water content in unsaturated porous media is presented. This method directly relates digitally measured intensities to the water content of the porous medium. It requires the classical image analysis steps, i.e., normalization, filtering, background subtraction, scaling and calibration. The main advantages of this approach are that no calibration experiment is needed and that no tracer or dye is injected into the flow tank. The procedure enables effective processing of a large number of photographs and thus produces 2D water content maps at high temporal resolution. A drainage/imbibition experiment in a 2D flow tank with inner dimensions of 40 cm x 14 cm x 6 cm (L x W x D) was carried out to validate the methodology. The accuracy of the proposed approach is assessed using numerical simulations with a state-of-the-art computational code that solves the Richards equation. Comparison of the cumulative mass leaving and entering the flow tank, and of the water content maps produced by the photographic measurement technique and the numerical simulations, demonstrates the efficiency and high accuracy of the proposed method for investigating vadose zone flow processes. Application examples with a larger flow tank and various boundary conditions are finally presented to illustrate the potential of the methodology.
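As a loose illustration of mapping intensities to water content (the published relation is calibration-free and may differ; the dry/saturated reference images and the residual/saturated contents below are assumptions of this sketch only):

```python
import numpy as np

def water_content_map(img, img_dry, img_sat, theta_r=0.05, theta_s=0.40):
    """Map pixel intensities to volumetric water content (illustrative).

    Linear interpolation between co-registered dry and saturated reference
    images; theta_r / theta_s are assumed residual and saturated contents,
    not values from the paper.
    """
    img, dry, sat = (a.astype(np.float64) for a in (img, img_dry, img_sat))
    s = np.clip((img - dry) / (sat - dry + 1e-12), 0.0, 1.0)  # saturation proxy
    return theta_r + s * (theta_s - theta_r)
```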
Mudanyali, Onur; Erlinger, Anthony; Seo, Sungkyu; Su, Ting-Wei; Tseng, Derek; Ozcan, Aydogan
2009-12-14
Conventional optical microscopes image cells by use of objective lenses that work together with other lenses and optical components. While quite effective, this classical approach has certain limitations for miniaturization of the imaging platform to make it compatible with the advanced state of the art in microfluidics. In this report, we introduce experimental details of a lensless on-chip imaging concept termed LUCAS (Lensless Ultra-wide field-of-view Cell monitoring Array platform based on Shadow imaging) that does not require any microscope objectives or other bulky optical components to image a heterogeneous cell solution over an ultra-wide field of view that can span as large as approximately 18 cm². Moreover, unlike conventional microscopes, LUCAS can image a heterogeneous cell solution of interest over a depth-of-field of approximately 5 mm without the need for refocusing, which corresponds to a sample volume of up to approximately 9 mL. This imaging platform records the shadows (i.e., lensless digital holograms) of each cell of interest within its field of view, and automated digital processing of these cell shadows can determine the type, the count and the relative positions of cells within the solution. Because it does not require any bulky optical components or mechanical scanning stages, it offers a significantly miniaturized platform that at the same time reduces the cost, which is especially important for point-of-care diagnostic tools. Furthermore, the imaging throughput of this platform is orders of magnitude better than that of conventional optical microscopes, which could be exceedingly valuable for high-throughput cell-biology experiments.
Ranaweera, Ruwan D; Kwon, Minseok; Hu, Shuowen; Tamer, Gregory G; Luh, Wen-Ming; Talavage, Thomas M
2016-01-01
This study investigated the hemisphere-specific effects of the temporal pattern of imaging related acoustic noise on auditory cortex activation. Hemodynamic responses (HDRs) to five temporal patterns of imaging noise, corresponding to noise generated by unique combinations of imaging volume and effective repetition time (TR), were obtained using a stroboscopic event-related paradigm with extra-long (≥27.5 s) TR to minimize inter-acquisition effects. In addition to confirmation that fMRI responses in auditory cortex do not behave in a linear manner, temporal patterns of imaging noise were found to modulate both the shape and spatial extent of hemodynamic responses, with classically non-auditory areas exhibiting responses to longer duration noise conditions. Hemispheric analysis revealed the right primary auditory cortex to be more sensitive than the left to the presence of imaging related acoustic noise. Right primary auditory cortex responses were significantly larger during all conditions. This asymmetry of response to imaging related acoustic noise could lead to different baseline activation levels during acquisition schemes using short TR, inducing an observed asymmetry in the responses to an intended acoustic stimulus through limitations of dynamic range, rather than due to differences in neuronal processing of the stimulus. These results emphasize the importance of accounting for the temporal pattern of the acoustic noise when comparing findings across different fMRI studies, especially those involving acoustic stimulation.
Multiphoton imaging with a nanosecond supercontinuum source
NASA Astrophysics Data System (ADS)
Lefort, Claire; O'Connor, Rodney P.; Blanquet, Véronique; Baraige, Fabienne; Tombelaine, Vincent; Lévêque, Philippe; Couderc, Vincent; Leproux, Philippe
2016-03-01
Multiphoton microscopy is a well-established technique for biological imaging of several kinds of targets. It is classically based on multiphoton processes allowing two means of contrast simultaneously: two-photon fluorescence (TPF) and second harmonic generation (SHG). Today, the laser technology used almost exclusively for this purpose is the femtosecond titanium-sapphire (Ti:Sa) laser. We experimentally demonstrate that a nanosecond supercontinuum laser source (STM-250-VIS-IR-custom, Leukos, France; 1 ns, 600-2400 nm, 250 kHz, 1 W) allows the same kind of image quality to be obtained for both TPF and SHG, provided it is properly filtered. The first set of images concerns the muscle of a mouse. It highlights the simultaneous detection of TPF and SHG. TPF is obtained thanks to the labelling of alpha-actinin with Alexa Fluor® 546 by immunochemistry. SHG is created from the non-centrosymmetric organization of myosin. As expected, discs of actin and myosin alternate. The resulting images are compared with those obtained from a standard femtosecond Ti:Sa source. The physical parameters of the supercontinuum are discussed. Finally, the full benefit of using an ultra-broadband source is demonstrated with images obtained in vivo on the brain of a mouse in which eGFP-labeled tumor cells were grafted. Texas Red®-conjugated dextran is injected into the blood vessel network. Thus, two fluorophores with absorption wavelengths separated by 80 nm are imaged simultaneously with a single laser source.
Cooper, Emily A.; Norcia, Anthony M.
2015-01-01
The nervous system has evolved in an environment with structure and predictability. One of the ubiquitous principles of sensory systems is the creation of circuits that capitalize on this predictability. Previous work has identified predictable non-uniformities in the distributions of basic visual features in natural images that are relevant to the encoding tasks of the visual system. Here, we report that the well-established statistical distributions of visual features -- such as visual contrast, spatial scale, and depth -- differ between bright and dark image components. Following this analysis, we go on to trace how these differences in natural images translate into different patterns of cortical input that arise from the separate bright (ON) and dark (OFF) pathways originating in the retina. We use models of these early visual pathways to transform natural images into statistical patterns of cortical input. The models include the receptive fields and non-linear response properties of the magnocellular (M) and parvocellular (P) pathways, with their ON and OFF pathway divisions. The results indicate that there are regularities in visual cortical input beyond those that have previously been appreciated from the direct analysis of natural images. In particular, several dark/bright asymmetries provide a potential account for recently discovered asymmetries in how the brain processes visual features, such as violations of classic energy-type models. On the basis of our analysis, we expect that the dark/bright dichotomy in natural images plays a key role in the generation of both cortical and perceptual asymmetries. PMID:26020624
NASA Astrophysics Data System (ADS)
Morozovska, A. N.; Eliseev, E. A.; Balke, N.; Kalinin, S. V.
2010-09-01
Electrochemical insertion-deintercalation reactions are typically associated with significant change in molar volume of the host compound. This strong coupling between ionic currents and strains underpins image formation mechanisms in electrochemical strain microscopy (ESM), and allows exploring the tip-induced electrochemical processes locally. Here we analyze the signal formation mechanism in ESM, and develop the analytical description of operation in frequency and time domains. The ESM spectroscopic modes are compared to classical electrochemical methods including potentiostatic and galvanostatic intermittent titration, and electrochemical impedance spectroscopy. This analysis illustrates the feasibility of spatially resolved studies of Li-ion dynamics on the sub-10-nm level using electromechanical detection.
Functional mechanisms involved in the internal inhibition of taboo words.
Severens, Els; Kühn, Simone; Hartsuiker, Robert J; Brass, Marcel
2012-04-01
The present study used functional magnetic resonance imaging to investigate brain processes associated with the inhibition of socially undesirable speech. It is tested whether the inhibition of undesirable speech is solely related to brain areas associated with classical stop signal tasks or rather also involves brain areas involved in endogenous self-control. During the experiment, subjects had to do a SLIP task, which was designed to elicit taboo or neutral spoonerisms. Here we show that the internal inhibition of taboo words activates the right inferior frontal gyrus, an area that has previously been associated with externally triggered inhibition. This finding strongly suggests that external social rules become internalized and act as a stop-signal.
Non-classicality of the molecular vibrations assisting exciton energy transfer at room temperature
O’Reilly, Edward J.; Olaya-Castro, Alexandra
2014-01-01
Advancing the debate on quantum effects in light-initiated reactions in biology requires clear identification of non-classical features that these processes can exhibit and utilize. Here we show that in prototype dimers present in a variety of photosynthetic antennae, efficient vibration-assisted energy transfer in the sub-picosecond timescale and at room temperature can manifest and benefit from non-classical fluctuations of collective pigment motions. Non-classicality of initially thermalized vibrations is induced via coherent exciton–vibration interactions and is unambiguously indicated by negativities in the phase–space quasi-probability distribution of the effective collective mode coupled to the electronic dynamics. These quantum effects can be prompted upon incoherent input of excitation. Our results therefore suggest that investigation of the non-classical properties of vibrational motions assisting excitation and charge transport, photoreception and chemical sensing processes could be a touchstone for revealing a role for non-trivial quantum phenomena in biology. PMID:24402469
Experimental quantum annealing: case study involving the graph isomorphism problem.
Zick, Kenneth M; Shehab, Omar; French, Matthew
2015-06-08
Quantum annealing is a proposed combinatorial optimization technique meant to exploit quantum mechanical effects such as tunneling and entanglement. Real-world quantum annealing-based solvers require a combination of annealing and classical pre- and post-processing; at this early stage, little is known about how to partition and optimize the processing. This article presents an experimental case study of quantum annealing and some of the factors involved in real-world solvers, using a 504-qubit D-Wave Two machine and the graph isomorphism problem. To illustrate the role of classical pre-processing, a compact Hamiltonian is presented that enables a reduced Ising model for each problem instance. On random N-vertex graphs, the median number of variables is reduced from N² to fewer than N log2 N and solvable graph sizes increase from N = 5 to N = 13. Additionally, error correction via classical post-processing majority voting is evaluated. While the solution times are not competitive with classical approaches to graph isomorphism, the enhanced solver ultimately classified correctly every problem that was mapped to the processor and demonstrated clear advantages over the baseline approach. The results shed some light on the nature of real-world quantum annealing and the associated hybrid classical-quantum solvers.
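The classical majority-vote post-processing step can be illustrated generically; this sketch assumes bitstring reads from repeated anneals of one instance and is not the paper's exact protocol:

```python
from collections import Counter

def majority_vote(reads):
    """Bitwise majority vote over repeated annealer reads (sketch).

    reads: list of equal-length bitstrings from many anneals of the same
    problem instance. Voting repairs sporadic single-bit errors; assumed
    setup only, not the paper's exact error-correction procedure.
    """
    n = len(reads[0])
    votes = [Counter(read[i] for read in reads) for i in range(n)]
    return "".join(v.most_common(1)[0][0] for v in votes)

# majority_vote(["0110", "0100", "0110"]) -> "0110"
```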
Experimental quantum annealing: case study involving the graph isomorphism problem
Zick, Kenneth M.; Shehab, Omar; French, Matthew
2015-01-01
Quantum annealing is a proposed combinatorial optimization technique meant to exploit quantum mechanical effects such as tunneling and entanglement. Real-world quantum annealing-based solvers require a combination of annealing and classical pre- and post-processing; at this early stage, little is known about how to partition and optimize the processing. This article presents an experimental case study of quantum annealing and some of the factors involved in real-world solvers, using a 504-qubit D-Wave Two machine and the graph isomorphism problem. To illustrate the role of classical pre-processing, a compact Hamiltonian is presented that enables a reduced Ising model for each problem instance. On random N-vertex graphs, the median number of variables is reduced from N2 to fewer than N log2 N and solvable graph sizes increase from N = 5 to N = 13. Additionally, error correction via classical post-processing majority voting is evaluated. While the solution times are not competitive with classical approaches to graph isomorphism, the enhanced solver ultimately classified correctly every problem that was mapped to the processor and demonstrated clear advantages over the baseline approach. The results shed some light on the nature of real-world quantum annealing and the associated hybrid classical-quantum solvers. PMID:26053973
Region growing using superpixels with learned shape prior
NASA Astrophysics Data System (ADS)
Borovec, Jiří; Kybic, Jan; Sugimoto, Akihiro
2017-11-01
Region growing is a classical image segmentation method based on hierarchical region aggregation using local similarity rules. Our proposed method differs from classical region growing in three important aspects. First, it works on the level of superpixels instead of pixels, which leads to a substantial speed-up. Second, our method uses learned statistical shape properties that encourage plausible shapes. In particular, we use ray features to describe the object boundary. Third, our method can segment multiple objects and ensure that the segmentations do not overlap. The problem is represented as an energy minimization and is solved either greedily or iteratively using graph cuts. We demonstrate the performance of the proposed method and compare it with alternative approaches on the task of segmenting individual eggs in microscopy images of Drosophila ovaries.
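A greedy superpixel region-growing step of the kind described might be sketched as follows; the plain feature-distance test stands in for the paper's energy with the learned ray-feature shape prior, which is omitted here:

```python
import numpy as np

def grow_region(seed, features, adjacency, tau=0.1):
    """Greedy region growing over superpixels (sketch, shape prior omitted).

    features: (n_superpixels, d) mean descriptors; adjacency: dict of sets
    of neighboring superpixel indices. Superpixels join while their feature
    distance to the running region mean stays below the assumed tau.
    """
    region = {seed}
    mean = features[seed].copy()
    frontier = set(adjacency[seed])
    while frontier:
        s = min(frontier, key=lambda i: np.linalg.norm(features[i] - mean))
        frontier.discard(s)
        if np.linalg.norm(features[s] - mean) > tau:
            break                    # best candidate too dissimilar: stop
        region.add(s)
        mean = features[list(region)].mean(axis=0)
        frontier |= (adjacency[s] - region)
    return region
```

Working on superpixels keeps the candidate set small (hundreds of nodes instead of millions of pixels), which is where the paper's reported speed-up over pixel-level growing comes from.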
Brain imaging with ¹²³I-IMP-SPECT in migraine between attacks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schlake, H.P.; Boettger, I.G.G.; Grotemeyer, K.H.
1989-06-01
¹²³I-IMP-SPECT brain imaging was performed in patients with classic migraine (n = 5) and migraine accompagnée (n = 18) during the headache-free interval. A regional reduction of tracer uptake into the brain was observed in all patients with migraine accompagnée, while in patients with classic migraine only one case showed an area of decreased activity. The most marked alteration was found in a patient with persisting neurological symptoms (complicated migraine). In most cases the areas of decreased tracer uptake corresponded to headache localization as well as to the topography of neurologic symptoms during migraine attacks. It may be concluded that migraine attacks occur in connection with exacerbations of preexisting changes of cerebral autoregulation due to endogenous or exogenous factors.
Montaux-Lambert, Antoine; Mercère, Pascal; Primot, Jérôme
2015-11-02
An interferogram conditioning procedure for subsequent phase retrieval by Fourier demodulation is presented here as a fast iterative approach aimed at fulfilling the classical boundary conditions imposed by Fourier transform techniques. Interference fringe patterns with typical edge discontinuities were simulated in order to reveal the edge artifacts that classically appear in traditional Fourier analysis, and were subsequently used to demonstrate the correction efficiency of the proposed conditioning technique. Optimization of the algorithm parameters is also presented and discussed. Finally, the procedure was applied to grating-based interferometric measurements performed in the hard X-ray regime. The proposed algorithm enables nearly edge-artifact-free retrieval of the phase derivatives. A similar enhancement of the retrieved absorption and fringe visibility images is also achieved.
Taking Advantage of Selective Change Driven Processing for 3D Scanning
Vegara, Francisco; Zuccarello, Pedro; Boluda, Jose A.; Pardo, Fernando
2013-01-01
This article deals with the application of the principles of SCD (Selective Change Driven) vision to 3D laser scanning. Two experimental setups have been implemented for comparative purposes: one with a classical CMOS (Complementary Metal-Oxide Semiconductor) sensor, and the other with a recently developed CMOS SCD sensor, both using the technique known as Active Triangulation. An SCD sensor delivers only the pixels that have changed most, ordered by the magnitude of their change since their last readout. The 3D scanning method is based on a systematic search through the entire image for pixels that exceed a certain threshold, making the SCD approach ideal for this application. Several experiments with both capturing strategies have been performed to find the limitations in high-speed acquisition/processing. The classical approach is limited by the sequential array acquisition, as predicted by the Nyquist–Shannon sampling theorem, and this has been experimentally demonstrated in the case of a rotating helix. These limitations are overcome by the SCD 3D scanning prototype, which achieves significantly higher performance. The aim of this article is to compare both capturing strategies in terms of performance in the time and frequency domains; they share all static characteristics, including resolution and 3D scanning method, thus yielding the same 3D reconstruction in static scenes. PMID:24084110
Callcut, S; Knowles, J C
2002-05-01
Glass-reinforced hydroxyapatite (HA) foams were produced by reticulated foam technology, using polyurethane templates with two different pore size distributions. The mechanical properties were evaluated and the structure analyzed through density measurements, image analysis, X-ray diffraction (XRD) and scanning electron microscopy (SEM). Regarding the mechanical properties, the use of a glass significantly improved the ultimate compressive strength (UCS), as did the use of a second coating. All the samples tested showed the classic three regions characteristic of an elastic brittle foam. From the density measurements, after application of a correction to compensate for the closed porosity, the bulk and apparent density showed a 1 : 1 correlation. When relative bulk density was plotted against UCS, a non-linear relationship characteristic of an isotropic open-celled material was found. Image analysis showed that the pore size distribution did not change and that there was no degradation of the macrostructure when replicating the ceramic from the initial polyurethane template during processing. However, the pore size distributions did shift to a lower size, by about 0.5 mm, due to the firing process. The ceramic foams were found to exhibit mechanical properties typical of isotropic open cellular foams.
Neural correlates of the self-concept in adolescence-A focus on the significance of friends.
Romund, Lydia; Golde, Sabrina; Lorenz, Robert C; Raufelder, Diana; Pelz, Patricia; Gleich, Tobias; Heinz, Andreas; Beck, Anne
2017-02-01
The formation of a coherent and unified self-concept represents a key developmental stage during adolescence. Imaging studies on self-referential processing in adolescents are rare, and it is not clear whether neural structures involved in self-reflection are also involved in reflections of familiar others. In the current study, 41 adolescents were asked to make judgments about trait adjectives during functional magnetic resonance imaging (fMRI): they had to indicate whether the word describes themselves, their friends, their teachers or politicians. Findings indicate a greater overlap in neural networks for responses to self- and friend-related judgments compared to teachers and politicians. In particular, classic self-reference structures such as the ventromedial prefrontal cortex and medial posterior parietal cortex also exhibited higher activation to judgments about friends. In contrast, brain responses towards judgments of teachers (familiar others) compared to politicians (unfamiliar others) did not significantly differ. Results support behavioral findings of a greater relevance of friends for the development of a self-concept during adolescence and indicate underlying functional brain processes. Hum Brain Mapp 38:987-996, 2017. © 2016 Wiley Periodicals, Inc.
Meaning of Interior Tomography
Wang, Ge; Yu, Hengyong
2013-01-01
The classic imaging geometry for computed tomography is for collection of un-truncated projections and reconstruction of a global image, with the Fourier transform as the theoretical foundation, which is intrinsically non-local. Recently, interior tomography research has led to theoretically exact relationships between localities in the projection and image spaces, and to practically promising reconstruction algorithms. Initially, interior tomography was developed for x-ray computed tomography. It was then elevated to a general imaging principle. Finally, a novel framework known as “omni-tomography” is being developed for the grand fusion of multiple imaging modalities, allowing tomographic synchrony of diversified features. PMID:23912256
An object-oriented framework for medical image registration, fusion, and visualization.
Zhu, Yang-Ming; Cochoff, Steven M
2006-06-01
An object-oriented framework for image registration, fusion, and visualization was developed based on the classic model-view-controller paradigm. The framework employs many design patterns to facilitate legacy code reuse, manage software complexity, and enhance its maintainability and portability. Three sample applications built atop this framework are illustrated to show its effectiveness: the first is for volume image grouping and re-sampling, the second for 2D registration and fusion, and the last for visualization of single images as well as registered volume images.
Increasing the field of view of adaptive optics scanning laser ophthalmoscopy.
Laslandes, Marie; Salas, Matthias; Hitzenberger, Christoph K; Pircher, Michael
2017-11-01
An adaptive optics scanning laser ophthalmoscope (AO-SLO) set-up with two deformable mirrors (DM) is presented. It allows high resolution imaging of the retina over a 4°×4° field of view (FoV), considering a 7 mm pupil diameter at the entrance of the eye. Imaging over such a FoV, which is larger than that of classical AO-SLO instruments, is enabled by the use of the two DMs. The first DM is located in a plane that is conjugated to the pupil of the eye and corrects for aberrations that are constant across the FoV. The second DM is conjugated to a plane located ∼0.7 mm anterior to the retina and corrects for anisoplanatism effects within the FoV. The control of the DMs combines the classical AO technique, using a Shack-Hartmann wave-front sensor, with sensorless AO, which uses a criterion characterizing the image quality. The retinas of four healthy volunteers were imaged in vivo with the developed instrument. In order to assess the performance of the set-up and to demonstrate the benefits of the two-DM configuration, the acquired images were compared with images taken under conventional conditions, over a smaller FoV and with only one DM. Moreover, an image of a larger patch of the retina was obtained by stitching 9 images acquired with a 4°×4° FoV, resulting in a total FoV of 10°×10°. Finally, different retinal layers were imaged by shifting the focal plane.
Learning Photogrammetry with Interactive Software Tool PhoX
NASA Astrophysics Data System (ADS)
Luhmann, T.
2016-06-01
Photogrammetry is a complex topic in high-level university teaching, especially in the fields of geodesy, geoinformatics and metrology, where high quality results are demanded. In addition, more and more black-box solutions for 3D image processing and point cloud generation are available that easily generate nice results, e.g. by structure-from-motion approaches. Within this context, the classical approach to teaching photogrammetry (e.g. focusing on aerial stereophotogrammetry) has to be reformed in order to educate students and professionals in new topics and provide them with more information behind the scenes. For around 20 years, photogrammetry courses at the Jade University of Applied Sciences in Oldenburg, Germany, have included the use of digital photogrammetry software that provides individual exercises, deep analysis of calculation results and a wide range of visualization tools for almost all standard tasks in photogrammetry. In recent years the software package PhoX has been developed as part of a new didactic concept in photogrammetry and related subjects. It also serves as an analysis tool in recent research projects. PhoX consists of a project-oriented data structure for images, image data, measured points and features, and 3D objects. It provides almost all basic photogrammetric measurement tools, image processing, calculation methods, graphical analysis functions, simulations and much more. Students use the program to conduct predefined exercises in which they have the opportunity to analyse results in a high level of detail. This includes the analysis of statistical quality parameters but also the meaning of transformation parameters, rotation matrices, calibration and orientation data. As one specific advantage, PhoX allows for the interactive modification of single parameters and the direct view of the resulting effect in image or object space.
NASA Astrophysics Data System (ADS)
Liu, X.; Zhang, J. X.; Zhao, Z.; Ma, A. D.
2015-06-01
Synthetic aperture radar (SAR) is being applied ever more widely in remote sensing because of its day-and-night, all-weather operation, and feature extraction from high-resolution SAR images has become a research topic of great interest. In particular, with the continuous improvement of airborne SAR image resolution, image texture information has become more abundant, which is of great significance for classification and extraction. In this paper, a novel method for built-up area extraction using both statistical and structural features is proposed, based on the texture characteristics of built-up areas. First, statistical texture features and structural features are extracted by the classical gray level co-occurrence matrix method and the variogram method, respectively, with direction information taken into account. Next, feature weights are calculated according to the Bhattacharyya distance. All features are then fused by weighting. Finally, the fused image is classified with the K-means method and the built-up areas are extracted after a post-classification step. The proposed method has been tested on domestic airborne P-band polarimetric SAR images, alongside two comparison experiments based on statistical texture alone and on structural texture alone. In addition to qualitative analysis, quantitative analysis based on manually selected built-up areas was performed: in the relatively simple test area the detection rate exceeds 90%, and in the relatively complex test area the detection rate is also higher than that of the other two methods. The results show that this method can effectively and accurately extract built-up areas in high-resolution airborne SAR imagery.
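A minimal sketch of the Bhattacharyya-distance feature weighting described above (Python with NumPy; the Gaussian class-conditional assumption and the normalization to unit sum are assumptions, as the abstract does not give the exact rule):

    import numpy as np

    def bhattacharyya_gauss(a, b):
        # Bhattacharyya distance between two 1-D feature samples,
        # assuming Gaussian class-conditional distributions.
        m1, m2, v1, v2 = a.mean(), b.mean(), a.var(), b.var()
        return (0.25 * np.log(0.25 * (v1 / v2 + v2 / v1 + 2.0))
                + 0.25 * (m1 - m2) ** 2 / (v1 + v2))

    def feature_weights(built_up, background):
        # built_up, background: (n_samples, n_features) training pixels for
        # the two classes; returns weights proportional to each feature's
        # class separability.
        d = np.array([bhattacharyya_gauss(built_up[:, j], background[:, j])
                      for j in range(built_up.shape[1])])
        return d / d.sum()

The fused image would then be a weighted sum of the per-feature images before K-means clustering.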
Multiscale approach to contour fitting for MR images
NASA Astrophysics Data System (ADS)
Rueckert, Daniel; Burger, Peter
1996-04-01
We present a new multiscale contour fitting process which combines information about the image and the contour of the object at different levels of scale. The algorithm is based on energy minimizing deformable models but avoids some of the problems associated with these models. The segmentation algorithm starts by constructing a linear scale-space of an image through convolution of the original image with a Gaussian kernel at different levels of scale, where the scale corresponds to the standard deviation of the Gaussian kernel. At high levels of scale, large-scale features of the objects are preserved while small-scale features, such as object details and noise, are suppressed. In order to maximize the accuracy of the segmentation, the contour of the object of interest is then tracked in scale-space from coarse to fine scales. We propose a hybrid multi-temperature simulated annealing (SA) optimization to minimize the energy of the deformable model. At high levels of scale the SA optimization is started at high temperatures, enabling it to find a globally optimal solution. At lower levels of scale the SA optimization is started at lower temperatures (at the lowest level the temperature is close to 0). This enforces a more deterministic behavior of the SA optimization at lower scales and leads to an increasingly local optimization, as high energy barriers cannot be crossed. The performance and robustness of the algorithm have been tested on spin-echo MR images of the cardiovascular system. The task was to segment the ascending and descending aorta in 15 datasets of different individuals in order to measure regional aortic compliance. The results show that the algorithm is able to provide more accurate segmentation results than the classic contour fitting process while remaining very robust to noise and initialization.
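The coarse-to-fine structure described above can be sketched as follows (Python with SciPy; the per-scale fitting routine and the temperature-proportional-to-scale schedule are placeholders and assumptions, not the paper's exact settings):

    from scipy.ndimage import gaussian_filter

    def linear_scale_space(image, sigmas=(8.0, 4.0, 2.0, 1.0)):
        # Gaussian scale-space, ordered coarse to fine; sigma is the
        # standard deviation of the Gaussian kernel, as above.
        return [(s, gaussian_filter(image.astype(float), s)) for s in sigmas]

    def coarse_to_fine_fit(image, contour, fit_at_scale):
        # fit_at_scale(img, contour, t0) stands in for one simulated-
        # annealing contour fit started at temperature t0; the start
        # temperature is lowered as the scale decreases.
        for sigma, smoothed in linear_scale_space(image):
            t0 = sigma  # hypothetical schedule: start temperature ~ scale
            contour = fit_at_scale(smoothed, contour, t0)
        return contour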
Paradigm-shift: radiological changes in the asymptomatic iNPH-patient to be: an observational study.
Engel, D C; Adib, S D; Schuhmann, M U; Brendle, C
2018-02-09
Many radiological signs are known for the diagnosis of idiopathic normal pressure hydrocephalus (iNPH). However, there is little information about these signs in the pre-symptomatic phase. For pathophysiological investigative purposes we conducted a descriptive image analysis study of pre-symptomatic patients. Patients who had contact with either the neurological or the neurosurgical department of the university hospital Tuebingen from 2010 through 2016, and for whom magnetic resonance images > 3 years before onset of symptoms were available, were included. The date of onset and severity of symptoms, date of first imaging and birth date were recorded. Evans' index (EI), width of the third ventricle (3VW), tight high convexity (THC), Sylvian fissure, extent of white matter hyperintensities and aqueductal flow were assessed in images before and around symptom onset. Ten patients were included. In all ten patients the first symptom was gait disturbance. Nine of ten pre-symptomatic images showed classic signs of iNPH. EI showed a significant increase between the pre-symptomatic and symptomatic phase. 3VW showed a trend toward increase without significance. THC changed back and forth over time within some patients. In accordance with the scarce literature available, radiological changes are present at least 3 years before onset of iNPH symptoms. EI seems to be a robust measure of pre-symptomatic radiological changes. Extrapolating the data, the development of iNPH-typical changes might be an insidious process and the development of THC might be a variable and non-linear process. Further studies with larger sample sizes are necessary to put these findings into pathophysiological perspective for the development of iNPH.
Open Quantum Systems and Classical Trajectories
NASA Astrophysics Data System (ADS)
Rebolledo, Rolando
2004-09-01
A Quantum Markov Semigroup consists of a family $\mathcal{T} = (\mathcal{T}_t)_{t \in \mathbb{R}_+}$ of normal, $\omega^*$-continuous, completely positive maps on a von Neumann algebra $\mathfrak{M}$ which preserve the unit and satisfy the semigroup property. This class of semigroups has been extensively used to represent open quantum systems. This article is aimed at studying the existence of a $\mathcal{T}$-invariant abelian subalgebra $\mathfrak{A}$ of $\mathfrak{M}$. When this happens, the restriction of $\mathcal{T}_t$ to $\mathfrak{A}$ defines a classical Markov semigroup $T = (T_t)$
Striatal necrosis in type 1 glutaric aciduria: Different stages in two siblings.
Sen, Anitha; Pillay, Rajesh Subramonia
2011-07-01
Two siblings born of a consanguineous marriage, with a history of neurologic deterioration, were imaged. Imaging features are classic for glutaric aciduria type 1 (GA-1): the acute (striatal necrosis) stage in the younger sibling and the chronic stage in the older sibling. GA-1 is an autosomal recessive disease with typical imaging features. Greater awareness of this condition among clinicians and radiologists is essential for early diagnosis and prevention of its catastrophic consequences. Striatal necrosis with stroke-like signal intensity on imaging correlates with the clinical stage of patients. PMID:22408669
Quantum realization of the nearest neighbor value interpolation method for INEQR
NASA Astrophysics Data System (ADS)
Zhou, RiGui; Hu, WenWen; Luo, GaoFeng; Liu, XingAo; Fan, Ping
2018-07-01
This paper presents the nearest neighbor value (NNV) interpolation algorithm for the improved novel enhanced quantum representation of digital images (INEQR). Interpolation is necessary in image scaling because the number of pixels increases or decreases. The difference between the proposed scheme and nearest neighbor interpolation is that the concept applied to estimate the missing pixel value is guided by the nearest value rather than by the distance. Firstly, a sequence of quantum operations is predefined, such as cyclic shift transformations and the basic arithmetic operations. Then, the feasibility of the nearest neighbor value interpolation method for quantum images in the INEQR representation is proven using the previously designed quantum operations. Furthermore, a quantum image scaling algorithm, in the form of circuits implementing NNV interpolation for INEQR, is constructed for the first time. The merit of the proposed INEQR circuits lies in their low complexity, which is achieved by utilizing the unique properties of quantum superposition and entanglement. Finally, simulation experiments with different classical (i.e., conventional, non-quantum) images and scaling ratios are performed in MATLAB 2014b on a classical computer; they demonstrate that the proposed interpolation method performs better in terms of resolution than nearest neighbor and bilinear interpolation.
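For contrast, the classical, distance-guided nearest neighbor scaling that NNV departs from can be sketched in a few lines (Python with NumPy; this is the comparison baseline, not the NNV rule itself):

    import numpy as np

    def nearest_neighbor_scale(img, new_h, new_w):
        # Classical nearest-neighbor scaling: each output pixel copies
        # the spatially nearest input pixel. NNV instead estimates the
        # missing pixel from the neighboring *values* (see above).
        h, w = img.shape[:2]
        rows = np.arange(new_h) * h // new_h
        cols = np.arange(new_w) * w // new_w
        return img[rows[:, None], cols]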
Improved wavefront correction for coherent image restoration.
Zelenka, Claudius; Koch, Reinhard
2017-08-07
Coherent imaging has a wide range of applications in, for example, microscopy, astronomy, and radar imaging. Particularly interesting is the field of microscopy, where the optical quality of the lens is the main limiting factor. In this article, novel algorithms for the restoration of blurred images in a system with known optical aberrations are presented. Physically motivated by scalar diffraction theory, the new algorithms are based on Haugazeau POCS and FISTA, and are faster and more robust than methods presented earlier. With the new approach the level of restoration quality on real images is very high; blurring and ringing caused by defocus can be effectively removed. In classical microscopy, lenses with very low aberration must be used, which puts a practical limit on their size and numerical aperture. A coherent microscope using the novel restoration method overcomes this limitation. In contrast to incoherent microscopy, severe optical aberrations including defocus can be removed, hence the requirements on the quality of the optics are lower. This can be exploited for an essential price reduction of the optical system. It can also be used to achieve higher resolution than in classical microscopy, using lenses with high numerical aperture and high aberration. All this makes coherent microscopy superior to traditional incoherent microscopy in suitable applications.
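The FISTA part of the restoration can be sketched generically (Python with NumPy; the gradient and proximal operators are placeholders standing in for the paper's diffraction-based data term, which the abstract does not spell out):

    import numpy as np

    def fista(grad, prox, x0, lipschitz, n_iter=100):
        # Generic FISTA (Beck & Teboulle): a gradient step on the data
        # term, a proximal step on the regularizer, Nesterov momentum.
        x, y, t = x0.copy(), x0.copy(), 1.0
        for _ in range(n_iter):
            x_new = prox(y - grad(y) / lipschitz)
            t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            y = x_new + ((t - 1.0) / t_new) * (x_new - x)
            x, t = x_new, t_new
        return x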
Geomorphology, tectonics, and exploration
NASA Technical Reports Server (NTRS)
Sabins, F. F., Jr.
1985-01-01
Explorationists interpret satellite images for tectonic features and patterns that may be clues to mineral and energy deposits. The tectonic features of interest range in scale from regional (sedimentary basins, fold belts) to local (faults, fractures) and are generally expressed as geomorphic features in remote sensing images. Explorationists typically employ classic concepts of geomorphology and landform analysis for their interpretations, which leads to the question: are there new and evolving concepts in geomorphology that may be applicable to tectonic analyses of images?
Targeted Silver Nanoparticles for Dual-Energy Breast X-Ray Imaging
2013-03-01
imaging parameters. In addition, Ag performs better than I when imaging at the optimal conditions for I. For example, using a rhodium filter ... and a 27 kVp low-energy beam with rhodium filtration, at a dose distribution of 50:50. This low-energy technique is a classic example of an ...
Kim, Tae Hyung; Setsompop, Kawin; Haldar, Justin P
2017-03-01
Parallel imaging and partial Fourier acquisition are two classical approaches for accelerated MRI. Methods that combine these approaches often rely on prior knowledge of the image phase, but the need to obtain this prior information can place practical restrictions on the data acquisition strategy. In this work, we propose and evaluate SENSE-LORAKS, which enables combined parallel imaging and partial Fourier reconstruction without requiring prior phase information. The proposed formulation is based on combining the classical SENSE model for parallel imaging data with the more recent LORAKS framework for MR image reconstruction using low-rank matrix modeling. Previous LORAKS-based methods have successfully enabled calibrationless partial Fourier parallel MRI reconstruction, but have been most successful with nonuniform sampling strategies that may be hard to implement for certain applications. By combining LORAKS with SENSE, we enable highly accelerated partial Fourier MRI reconstruction for a broader range of sampling trajectories, including widely used calibrationless uniformly undersampled trajectories. Our empirical results with retrospectively undersampled datasets indicate that when SENSE-LORAKS reconstruction is combined with an appropriate k-space sampling trajectory, it can provide substantially better image quality at high-acceleration rates relative to existing state-of-the-art reconstruction approaches. The SENSE-LORAKS framework provides promising new opportunities for highly accelerated MRI. Magn Reson Med 77:1021-1035, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
Noninvasive measurement of pharmacokinetics by near-infrared fluorescence imaging in the eye of mice
NASA Astrophysics Data System (ADS)
Dobosz, Michael; Strobel, Steffen; Stubenrauch, Kay-Gunnar; Osl, Franz; Scheuer, Werner
2014-01-01
Purpose: For generating preclinical pharmacokinetics (PKs) of compounds, blood is drawn at different time points and levels are quantified by different analytical methods. In order to receive statistically meaningful data, 3 to 5 animals are used for each time point to get the serum peak level and half-life of the compound. Both characteristics are determined by data interpolation, which may influence the accuracy of these values. We provide a method that allows continuous monitoring of blood levels noninvasively by measuring the fluorescence intensity of labeled compounds in the eye and other body regions of anesthetized mice. Procedures: The method evaluation was performed with four different fluorescent compounds: (i) indocyanine green, a nontargeting dye; (ii) OsteoSense750, a bone targeting agent; (iii) tumor targeting Trastuzumab-Alexa750; and (iv) its F(ab')2-Alexa750 fragment. The latter was used for a direct comparison between fluorescence imaging and classical blood analysis using enzyme-linked immunosorbent assay (ELISA). Results: We found an excellent correlation between blood levels measured by noninvasive eye imaging and the results generated by classical methods. A strong correlation between eye imaging and ELISA was demonstrated for the F(ab')2 fragment. Whole body imaging revealed a compound accumulation in the expected regions (e.g., liver, bone). Conclusions: The combination of eye and whole body fluorescence imaging enables the simultaneous measurement of blood PKs and biodistribution of fluorescent-labeled compounds.
Ramakumar, Adarsh; Subramanian, Uma; Prasanna, Pataje G S
2015-11-01
High-throughput individual diagnostic dose assessment is essential for the medical management of radiation-exposed subjects after a mass casualty. Cytogenetic assays such as the Dicentric Chromosome Assay (DCA) are recognized as the gold standard by international regulatory authorities. DCA is a multi-step, multi-day bioassay. DCA, as described in the IAEA manual, can be used to assess dose up to 4-6 weeks post-exposure quite accurately, but throughput remains a major issue and automation is essential. The throughput is limited both in sample preparation and in the analysis of chromosome aberrations. Thus, there is a need to design and develop novel solutions that utilize extensive laboratory automation for sample preparation and bioinformatics approaches for chromosome-aberration analysis to overcome throughput issues. We have transitioned the bench-based cytogenetic DCA to a coherent process performing high-throughput automated biodosimetry for individual dose assessment, ensuring quality control (QC) and quality assurance (QA) in accordance with international harmonized protocols. A Laboratory Information Management System (LIMS) was designed, implemented and adapted to manage increased sample processing capacity, develop and maintain standard operating procedures (SOPs) for robotic instruments, avoid data transcription errors during processing, and automate the analysis of chromosome aberrations using an image analysis platform. The efforts described in this paper intend to bridge current technological gaps and enhance the potential application of DCA for dose-based stratification of subjects following a mass casualty. This paper describes one such potential integrated automated laboratory system and the functional evolution of the classical DCA towards the critically needed increase in throughput. Published by Elsevier B.V.
Kuizon, Salomon; DiMaiuta, Kathleen; Walus, Marius; Jenkins, Edmund C.; Kuizon, Marisol; Kida, Elizabeth; Golabek, Adam A.; Espinoza, Daniel O.; Pullarkat, Raju K.; Junaid, Mohammed A.
2010-01-01
Background: Tripeptidyl aminopeptidase I (TPPI) is a crucial lysosomal enzyme that is deficient in the fatal neurodegenerative disorder called classic late-infantile neuronal ceroid lipofuscinosis (LINCL). It is involved in the catabolism of proteins in the lysosomes. Recent X-ray crystallographic studies have provided insights into the structural/functional aspects of TPPI catalysis and indicated the presence of an octahedrally coordinated Ca2+. Methodology: Purified precursor and mature TPPI were used to study inhibition by NBS and EDTA using biochemical and immunological approaches. Site-directed mutagenesis combined with confocal imaging identified a W residue critical for TPPI activity and for the processing of the precursor into the mature enzyme. Principal Findings: NBS is a potent inhibitor of the purified TPPI. In mammalian TPPI, W542 is critical for tripeptidyl peptidase activity as well as autocatalysis. Transfection studies indicated that TPPI mutants harboring residues other than W at position 542 show delayed processing and are retained in the ER rather than transported to lysosomes. EDTA inhibits the autocatalytic processing of the precursor TPPI. Conclusions/Significance: We propose that W542 and Ca2+ are critical for maintaining the proper tertiary structure of the precursor proprotein as well as of the mature TPPI. Additionally, Ca2+ is necessary for the autocatalytic processing of the precursor protein into mature TPPI. We have identified NBS as a potent TPPI inhibitor, which led to the delineation of a critical role for the W542 residue. Studies with such compounds will prove valuable in identifying the critical residues in TPPI catalysis and its structure-function analysis. PMID:20689811
Self-amplified CMOS image sensor using a current-mode readout circuit
NASA Astrophysics Data System (ADS)
Santos, Patrick M.; de Lima Monteiro, Davies W.; Pittet, Patrick
2014-05-01
The feature size of CMOS processes has decreased over the past few years, and problems such as reduced dynamic range have become more significant in voltage-mode pixels, even though the integration of more functionality inside the pixel has become easier. This work contributes on both sides: the possibility of a high signal excursion range using current-mode circuits, together with added functionality through signal amplification inside the pixel. The classic 3T pixel architecture was rebuilt with small modifications to integrate a transconductance amplifier providing a current as output. The matrix of these new pixels operates as one large transistor sourcing an amplified current that is used for signal processing. This current is controlled by the intensity of the light received by the matrix, modulated pixel by pixel. The output current can be controlled by the biasing circuits to achieve a very large range of output signal levels. It can also be controlled through the matrix size, which gives a very high degree of freedom in the signal level, subject to the current densities allowed inside the integrated circuit. In addition, the matrix can operate at very small integration times. Its applications would be those in which fast image processing and high signal amplification are required and low resolution is not a major problem, such as UV image sensors. Simulation results are presented to support operation, control, design, signal excursion levels and linearity for a matrix of pixels conceived using this new sensor concept.
NASA Astrophysics Data System (ADS)
Belfort, Benjamin; Weill, Sylvain; Lehmann, François
2017-07-01
A novel, non-invasive imaging technique is proposed that determines 2D maps of water content in unsaturated porous media. The method directly relates digitally measured intensities to the water content of the porous medium. It requires the classical image analysis steps, i.e., normalization, filtering, background subtraction, scaling and calibration. The main advantages of this approach are that no separate calibration experiment is needed, because the calibration curve relating water content and reflected light intensities is established during the main monitoring phase of each experiment, and that no tracer or dye is injected into the flow tank. The procedure enables effective processing of a large number of photographs and thus produces 2D water content maps at high temporal resolution. A drainage/imbibition experiment in a 2D flow tank with inner dimensions of 40 cm × 14 cm × 6 cm (L × W × D) was carried out to validate the methodology. The accuracy of the proposed approach is assessed using a statistical framework for error analysis and numerical simulations with a state-of-the-art computational code that solves the Richards equation. Comparison of the cumulative mass leaving and entering the flow tank, and of the water content maps produced by the photographic measurement technique and the numerical simulations, demonstrates the efficiency and high accuracy of the proposed method for investigating vadose zone flow processes. Finally, the photometric procedure has been developed expressly with its extension to heterogeneous media in mind. Other processes may be investigated through different laboratory experiments, which will serve as benchmarks for the validation of numerical codes.
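The image analysis chain named above can be sketched as follows (Python with NumPy/SciPy; the median filter, the linear dry-to-wet scaling and all variable names are illustrative assumptions, since the paper builds its calibration curve during the experiment itself):

    import numpy as np
    from scipy.ndimage import median_filter

    def intensity_to_water_content(photo, background, dry, wet):
        # photo: raw image; background: background image to subtract;
        # dry, wet: reference intensities (after the same corrections)
        # at zero and full saturation.
        img = median_filter(photo.astype(float), size=3)   # filtering
        img = img - background                             # background subtraction
        theta = (img - dry) / (wet - dry)                  # scaling / calibration
        return np.clip(theta, 0.0, 1.0)                    # water content map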
Experimental cancellation of aberrations in intensity correlation in classical optics
NASA Astrophysics Data System (ADS)
Jesus-Silva, A. J.; Silva, Juarez G.; Monken, C. H.; Fonseca, E. J. S.
2018-01-01
We study the classical correlation function of spatially incoherent beams with a phase aberration in the beam path. On the basis of our experimental measurements and of optical coherence theory, we show that the effects of phase disturbances, independently of their kind and without the need for coordinate inversion, can be canceled out if the same phase is aligned in the signal and reference beam paths. These results can be useful for imaging and microscopy through random media.
Trichuris trichiura egg (image)
... is the classical appearance of the Trichuris (whipworm) egg. The eggs are highly infectious. After a person eats contaminated food, the worms hatch from the eggs and live in the intestine, causing vomiting and ...
NASA Astrophysics Data System (ADS)
Bruynooghe, Michel M.
1998-04-01
In this paper, we present a robust method for automatic object detection and delineation in noisy complex images. The proposed procedure is a three-stage process that integrates image segmentation by multidimensional pixel clustering with geometrically constrained optimization of deformable contours. The first step is to enhance the original image by nonlinear unsharp masking. The second step is to segment the enhanced image by multidimensional pixel clustering, using our reducible-neighborhoods clustering algorithm, which has a very attractive theoretical maximal complexity. Candidate objects are then extracted and initially delineated by an optimized region merging algorithm based on ascendant hierarchical clustering with contiguity constraints and on the maximization of average contour gradients. The third step is to optimize the delineation of the previously extracted and initially delineated objects. Deformable object contours are modeled by cubic splines, and an affine invariant is used to control the undesired formation of cusps and loops. Nonlinear constrained optimization is used to maximize the external energy. This avoids the difficult and non-reproducible choice of regularization parameters required by classical snake models. The proposed method has been applied successfully to the detection of fine and subtle microcalcifications in X-ray mammographic images, to defect detection by moiré image analysis, and to the analysis of microrugosities of thin metallic films. A later implementation of the proposed method on a digital signal processor associated with a vector coprocessor would allow the design of a real-time object detection and delineation system for applications in medical imaging and industrial computer vision.
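The first, enhancement step can be illustrated with a small sketch (Python with SciPy; clipping as the nonlinearity and the gain and sigma values are illustrative choices, since the paper's exact nonlinear operator is not given in the abstract):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def nonlinear_unsharp_mask(image, sigma=2.0, gain=1.5):
        # Unsharp masking: add back the high-pass residual, then clip to
        # the valid intensity range (the clipping makes it nonlinear).
        img = image.astype(float)
        lowpass = gaussian_filter(img, sigma)
        return np.clip(img + gain * (img - lowpass), 0, 255)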
ERIC Educational Resources Information Center
Pina-Camacho, Laura; Villero, Sonia; Boada, Leticia; Fraguas, David; Janssen, Joost; Mayoral, Maria; Llorente, Cloe; Arango, Celso; Parellada, Mara
2013-01-01
This systematic review aims to determine whether or not structural magnetic resonance imaging (sMRI) data support the DSM-5 proposal of an autism spectrum disorder (ASD) diagnostic category, and whether or not classical DSM-IV autistic disorder (AD) and Asperger syndrome (AS) categories should be subsumed into it. The most replicated sMRI findings…
Inverting Images of the 40s: The Berlin Wall and Collective Amnesia.
ERIC Educational Resources Information Center
Loshitzky, Yosefa
1995-01-01
Examines images of World War II invoked in two live, international music concerts (one rock, one classical) celebrating the fall of the Berlin Wall. Argues that Western television's choice of imagery represented the Wall's demise as a marker of the end of the Cold War rather than a vanishing monument of Germany's conflicted struggle with Holocaust…
Mohebbi, Sara; Erfurth, Florian; Hennersdorf, Philipp; Brakhage, Axel A.; Saluz, Hans Peter
2016-01-01
Hyperspectral imaging (HSI) is a technique based on the combination of classical spectroscopy and conventional digital image processing. It is also well suited for the biological assays and quantitative real-time analysis since it provides spectral and spatial data of samples. The method grants detailed information about a sample by recording the entire spectrum in each pixel of the whole image. We applied HSI to quantify the constituent pH variation in a single infected apoptotic monocyte as a model system. Previously, we showed that the human-pathogenic fungus Aspergillus fumigatus conidia interfere with the acidification of phagolysosomes. Here, we extended this finding to monocytes and gained a more detailed analysis of this process. Our data indicate that melanised A. fumigatus conidia have the ability to interfere with apoptosis in human monocytes as they enable the apoptotic cell to recover from mitochondrial acidification and to continue with the cell cycle. We also showed that this ability of A. fumigatus is dependent on the presence of melanin, since a non-pigmented mutant did not stop the progression of apoptosis and consequently, the cell did not recover from the acidic pH. By conducting the current research based on the HSI, we could measure the intracellular pH in an apoptotic infected human monocyte and show the pattern of pH variation during 35 h of measurements. As a conclusion, we showed the importance of melanin for determining the fate of intracellular pH in a single apoptotic cell. PMID:27727286
Nanoscale thermal imaging of dissipation in quantum systems
NASA Astrophysics Data System (ADS)
Halbertal, D.; Cuppens, J.; Shalom, M. Ben; Embon, L.; Shadmi, N.; Anahory, Y.; Naren, H. R.; Sarkar, J.; Uri, A.; Ronen, Y.; Myasoedov, Y.; Levitov, L. S.; Joselevich, E.; Geim, A. K.; Zeldov, E.
2016-11-01
Energy dissipation is a fundamental process governing the dynamics of physical, chemical and biological systems. It is also one of the main characteristics that distinguish quantum from classical phenomena. In particular, in condensed matter physics, scattering mechanisms, loss of quantum information or breakdown of topological protection are deeply rooted in the intricate details of how and where the dissipation occurs. Yet the microscopic behaviour of a system is usually not formulated in terms of dissipation because energy dissipation is not a readily measurable quantity on the micrometre scale. Although nanoscale thermometry has gained much recent interest, existing thermal imaging methods are not sensitive enough for the study of quantum systems and are also unsuitable for the low-temperature operation that is required. Here we report a nano-thermometer based on a superconducting quantum interference device with a diameter of less than 50 nanometres that resides at the apex of a sharp pipette: it provides scanning cryogenic thermal sensing that is four orders of magnitude more sensitive than previous devices—below 1 μK Hz-1/2. This non-contact, non-invasive thermometry allows thermal imaging of very low intensity, nanoscale energy dissipation down to the fundamental Landauer limit of 40 femtowatts for continuous readout of a single qubit at one gigahertz at 4.2 kelvin. These advances enable the observation of changes in dissipation due to single-electron charging of individual quantum dots in carbon nanotubes. They also reveal a dissipation mechanism attributable to resonant localized states in graphene encapsulated within hexagonal boron nitride, opening the door to direct thermal imaging of nanoscale dissipation processes in quantum matter.
Minimizing camera-eye optical aberrations during the 3D reconstruction of retinal structures
NASA Astrophysics Data System (ADS)
Aldana-Iuit, Javier; Martinez-Perez, M. Elena; Espinosa-Romero, Arturo; Diaz-Uribe, Rufino
2010-05-01
3D reconstruction of blood vessels is a powerful visualization tool for physicians, since it allows them to refer to a qualitative representation of their subject of study. In this paper we propose a 3D reconstruction method for retinal vessels from fundus images. The reconstruction method proposed herein uses images of the same retinal structure in epipolar geometry. Images are preprocessed by the RISA system to segment blood vessels and obtain feature points for correspondences. Point correspondences are established using correlation. LMedS analysis and the Graph Transformation Matching algorithm are used for outlier suppression. Camera projection matrices are computed with the normalized eight-point algorithm. Finally, we retrieve the 3D positions of the retinal tree points by linear triangulation. In order to increase the power of visualization, 3D tree skeletons are represented by surfaces via generalized cylinders whose radii correspond to morphological measurements obtained by RISA. In this paper the complete calibration process, including the fundus camera and the optical properties of the eye (the so-called camera-eye system), is proposed. On one hand, the internal parameters of the fundus camera are obtained by classical algorithms using a reference pattern. On the other hand, we minimize the undesirable effects of the aberrations induced by the eyeball optical system, assuming that the contact enlarging lens corrects astigmatism, that spherical and coma aberrations are reduced by changing the aperture size, and that eye refractive errors are suppressed by adjusting camera focus during image acquisition. Evaluation of two self-calibration proposals and results of 3D blood vessel surface reconstruction are presented.
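The geometric core of such a pipeline, eight-point estimation of the fundamental matrix followed by linear triangulation, can be sketched with OpenCV (Python; the helper name and the purely projective camera pair are assumptions, and the RISA-specific preprocessing is omitted):

    import cv2
    import numpy as np

    def reconstruct_points(pts1, pts2):
        # pts1, pts2: matched feature points, float32 arrays of shape (N, 2).
        F, _ = cv2.findFundamentalMat(pts1, pts2, cv2.FM_8POINT)
        # The epipole e' spans the null space of F^T; P2 = [[e']_x F | e'].
        _, _, vt = np.linalg.svd(F.T)
        e2 = vt[-1]
        e2x = np.array([[0, -e2[2], e2[1]],
                        [e2[2], 0, -e2[0]],
                        [-e2[1], e2[0], 0]])
        P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
        P2 = np.hstack([e2x @ F, e2.reshape(3, 1)])
        X = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
        return (X[:3] / X[3]).T  # N x 3 points in a projective frame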
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vecchiola, Aymeric; Concept Scientific Instruments, ZA de Courtaboeuf, 2 rue de la Terre de Feu, 91940 Les Ulis; Unité Mixte de Physique CNRS-Thales UMR 137, 1 avenue Augustin Fresnel, 91767 Palaiseau
An imaging technique associating a slowly intermittent contact mode of atomic force microscopy (AFM) with a home-made multi-purpose resistance sensing device is presented. It aims at extending the widespread resistance measurements classically operated in contact mode AFM to broaden their application fields to soft materials (molecular electronics, biology) and fragile or weakly anchored nano-objects, for which nanoscale electrical characterization is highly demanded and often proves to be a challenging task in contact mode. Compared with the state of the art concerning less aggressive solutions for AFM electrical imaging, our technique brings a significantly wider range of resistance measurement (over 10 decades) without any manual switching, which is a major advantage for the characterization of materials with large on-sample resistance variations. After describing the basics of the set-up, we report on preliminary investigations focused on academic samples of self-assembled monolayers with various thicknesses as a demonstrator of the imaging capabilities of our instrument, from qualitative and semi-quantitative viewpoints. Then two application examples are presented, regarding an organic photovoltaic thin film and an array of individual vertical carbon nanotubes. Both attest to the relevance of the technique for the control and optimization of technological processes.
Retina vascular network recognition
NASA Astrophysics Data System (ADS)
Tascini, Guido; Passerini, Giorgio; Puliti, Paolo; Zingaretti, Primo
1993-09-01
The analysis of morphological and structural modifications of the retina vascular network is an interesting investigation method in the study of diabetes and hypertension. Normally this analysis is carried out by qualitative evaluations according to standardized criteria, though medical research attaches great importance to quantitative analysis of vessel color, shape and dimensions. The paper describes a system which automatically segments and recognizes the ocular fundus circulation and microcirculation network, and extracts a set of features related to morphometric aspects of vessels. For this class of images the classical segmentation methods prove weak. We propose a computer vision system in which the segmentation and recognition phases are strictly connected. The system is hierarchically organized in four modules. Firstly, the Image Enhancement Module (IEM) performs a set of custom image enhancements to remove blur and to prepare data for the subsequent segmentation and recognition processes. Secondly, the Papilla Border Analysis Module (PBAM) automatically recognizes the number, position and local diameter of blood vessels departing from the optical papilla. Then the Vessel Tracking Module (VTM) analyses vessels by comparing the results of body and edge tracking, and detects branches and crossings. Finally, the Feature Extraction Module evaluates PBAM and VTM output data and extracts numerical indexes. The algorithms used appear robust and have been successfully tested on various ocular fundus images.
Robinson, Alan M; Stock, Stuart R; Soriano, Carmen; Xiao, Xianghui; Richter, Claus-Peter
2016-11-01
The aim of this study was to determine whether X-ray micro-computed tomography can be used to locate and characterize tissue damage caused by laser irradiation, and to describe its advantages over classical histology for this application. A surgical CO₂ laser, operated in single pulse mode (100 milliseconds) at different power settings, was used to ablate different types of cadaveric animal tissues. Tissue samples were then harvested and imaged with synchrotron X-ray phase-contrast and micro-computed tomography to generate stacks of virtual sections of the tissues. Subsequently, Fiji (ImageJ) software was used to locate tissue damage and to quantify the volumes of laser ablation cones and thermal coagulation damage from 3D renderings of the tissue image stacks. Visual comparisons of tissue structures in the X-ray images with those visible by classic light microscopy histology were made. We demonstrated that micro-computed tomography can be used to rapidly identify areas of surgical laser ablation, vacuolization, carbonization, and thermally coagulated tissue. Quantification and comparison of the ablation crater volume, which represents the volume of ablated tissue, and the thermal coagulation zone volume were performed faster than is possible with classical histology. We demonstrated that these procedures can be performed on fresh hydrated and non-sectioned plastic-embedded tissue, and that non-destructive micro-computed tomography can be applied to the visualization and analysis of laser-induced tissue damage without tissue sectioning. This will improve the evaluation of new surgical lasers and their corresponding effects on tissues. Lasers Surg. Med. 48:866-877, 2016. © 2016 Wiley Periodicals, Inc.
An innovative recycling process to obtain pure polyethylene and polypropylene from household waste.
Serranti, Silvia; Luciani, Valentina; Bonifazi, Giuseppe; Hu, Bin; Rem, Peter C
2015-01-01
An innovative recycling process, based on magnetic density separation (MDS) and hyperspectral imaging (HSI), to obtain high quality polypropylene and polyethylene as secondary raw materials is presented. More specifically, MDS was applied to two different polyolefin mixtures coming from household waste. The quality of the two separated PP and PE streams, in terms of purity, was evaluated by a classification procedure based on HSI working in the near infrared range (1000-1700 nm). The classification model was built using known PE and PP samples as a training set. The results obtained by HSI were compared with those obtained by classical density analysis carried out in the laboratory on the same polymers. The results obtained by MDS and the quality assessment of the plastic products by HSI showed that the combined action of these two technologies is a valid solution that can be implemented at the industrial level. Copyright © 2014 Elsevier Ltd. All rights reserved.
STAR Data Reconstruction at NERSC/Cori, an adaptable Docker container approach for HPC
NASA Astrophysics Data System (ADS)
Mustafa, Mustafa; Balewski, Jan; Lauret, Jérôme; Porter, Jefferson; Canon, Shane; Gerhardt, Lisa; Hajdu, Levente; Lukascsyk, Mark
2017-10-01
As HPC facilities grow their resources, adaptation of classic HEP/NP workflows becomes a necessity. Linux containers may well offer a way to lower the bar to exploiting such resources and, at the same time, help collaborations reach the vast elastic resources of such facilities and address their massive current and future data processing challenges. In this proceeding, we showcase the STAR data reconstruction workflow on the Cori HPC system at NERSC. The STAR software is packaged in a Docker image and runs at Cori in Shifter containers. We highlight two of the typical end-to-end optimization challenges for such pipelines: 1) the data transfer rate, carried over ESnet after optimizing the end points, and 2) scalable deployment of a conditions database in an HPC environment. Our tests demonstrate equally efficient data processing workflows on Cori/HPC, comparable to standard Linux clusters.
Pre-processing ambient noise cross-correlations with equalizing the covariance matrix eigenspectrum
NASA Astrophysics Data System (ADS)
Seydoux, Léonard; de Rosny, Julien; Shapiro, Nikolai M.
2017-09-01
Passive imaging techniques from ambient seismic noise require a nearly isotropic distribution of the noise sources in order to ensure reliable traveltime measurements between seismic stations. However, real ambient seismic noise often only partially fulfils this condition. It is generated in preferential areas (in the deep ocean or near continental shores), and some highly coherent pulse-like signals may be present in the data, such as those generated by earthquakes. Several pre-processing techniques have been developed in order to attenuate the directional and deterministic behaviour of this real ambient noise. Most of them are applied to individual seismograms before cross-correlation computation; the most widely used are spectral whitening and temporal smoothing of the individual seismic traces. We here propose an additional pre-processing step to be used together with the classical ones, based on a spatial analysis of the seismic wavefield. We compute the cross-spectra between all available station pairs in the spectral domain, leading to the data covariance matrix. We apply a one-bit normalization to the covariance matrix eigenspectrum before extracting the cross-correlations in the time domain. The efficiency of the method is shown with several numerical tests. We apply the method to data collected by the USArray when the M8.8 Maule earthquake occurred on 2010 February 27. The method shows a clear improvement compared with classical equalization in attenuating the highly energetic and coherent waves incoming from the earthquake, and allows reliable traveltime measurements even in the presence of the earthquake.
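At a single frequency, the eigenspectrum equalization described above can be sketched as follows (Python with NumPy; the windowing, tolerance and variable names are assumptions; with fewer windows than stations the result is a projector onto the signal subspace rather than the identity):

    import numpy as np

    def equalize_covariance(spectra, tol=1e-10):
        # spectra: complex array (n_windows, n_stations) of Fourier
        # coefficients of the noise records at one frequency.
        C = spectra.conj().T @ spectra / spectra.shape[0]  # covariance matrix
        w, V = np.linalg.eigh(C)                           # real eigenvalues, w >= 0
        keep = w > tol * w.max()                           # estimated rank
        Vk = V[:, keep]
        # "One-bit" equalization: every retained eigenvalue is set to 1.
        return Vk @ Vk.conj().T                            # equalized cross-spectra

Inverse Fourier transforming the off-diagonal entries across frequencies would then yield the equalized noise cross-correlations.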
NASA Astrophysics Data System (ADS)
Wang, Heming; Liu, Yu; Song, Yongchen; Zhao, Yuechao; Zhao, Jiafei; Wang, Dayong
2012-11-01
Pore structure is one of the important factors affecting the properties of porous media, but it is difficult to describe its complexity exactly. Fractal theory is an effective and available method for quantifying complex and irregular pore structure. In this paper, the fractal dimension, calculated by the box-counting method, was applied to characterize the pore structure of artificial cores. The microstructure, i.e. the pore distribution in the porous material, was obtained using nuclear magnetic resonance imaging (MRI). Three classical fractals and one sand-packed-bed model were selected as the experimental material to investigate the influence of box sizes, threshold value, and image resolution on the fractal analysis. To avoid the influence of box sizes, a sequence of divisors of the image size was proposed and compared with two other algorithms (geometric sequence and arithmetic sequence) in terms of partitioning the image completely and yielding the least fitting error. Threshold values selected manually and automatically showed that the threshold plays an important role in image binarization, and that the minimum-error method can be used to obtain an appropriate one. Images obtained with different pixel matrices in MRI were used to analyze the influence of image resolution: higher image resolution can detect more of the pore structure and increases its measured irregularity. Taking these influencing factors into account, fractal analysis of four kinds of artificial cores showed that the fractal dimension can be used to distinguish the different kinds of artificial cores, and that the relationship between fractal dimension and porosity or permeability can be expressed by the model D = a - b ln(x + c).
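A minimal box-counting sketch for a 2-D binary pore map (Python with NumPy; box sizes taken from divisors of the image size, echoing the recommendation above; the function name is illustrative):

    import numpy as np

    def box_counting_dimension(binary, sizes=(1, 2, 4, 8, 16, 32)):
        # binary: 2-D boolean array (True = pore). Counts occupied boxes
        # N(s) for each box size s and fits log N(s) = -D log s + c.
        counts = []
        for s in sizes:
            h, w = binary.shape[0] // s, binary.shape[1] // s
            blocks = binary[:h * s, :w * s].reshape(h, s, w, s)
            counts.append(blocks.any(axis=(1, 3)).sum())
        slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
        return -slope  # estimated fractal dimension D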
Quantum stochastic walks on networks for decision-making
NASA Astrophysics Data System (ADS)
Martínez-Martínez, Ismael; Sánchez-Burillo, Eduardo
2016-03-01
Recent experiments report violations of the classical law of total probability and incompatibility of certain mental representations when humans process and react to information. Evidence shows promise of a more general quantum theory providing a better explanation of the dynamics and structure of real decision-making processes than classical probability theory. Inspired by this, we show how the behavioral choice-probabilities can arise as the unique stationary distribution of quantum stochastic walkers on the classical network defined from Luce’s response probabilities. This work is relevant because (i) we provide a very general framework integrating the positive characteristics of both quantum and classical approaches previously in confrontation, and (ii) we define a cognitive network which can be used to bring other connectivist approaches to decision-making into the quantum stochastic realm. We model the decision-maker as an open system in contact with her surrounding environment, and the time-length of the decision-making process reveals to be also a measure of the process’ degree of interplay between the unitary and irreversible dynamics. Implementing quantum coherence on classical networks may be a door to better integrate human-like reasoning biases in stochastic models for decision-making. PMID:27030372
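For reference, quantum stochastic walks are commonly defined by a Lindblad-type master equation that interpolates between unitary and classical dynamics; a standard form (an assumption here, as the abstract does not write the generator out) is

$$\frac{d\rho}{dt} = -(1-\omega)\, i[H,\rho] + \omega \sum_{k}\Big(L_k\,\rho\,L_k^{\dagger} - \tfrac{1}{2}\big\{L_k^{\dagger}L_k,\rho\big\}\Big),$$

where $\omega \in [0,1]$ tunes the mix of coherent and irreversible dynamics, $H$ is the Hamiltonian of the network, and the jump operators $L_k$ encode the classical transition rates (here, those defined from Luce's response probabilities); the choice probabilities are then read from the diagonal of the stationary $\rho$.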
2007-11-14
This image from NASA Galaxy Evolution Explorer shows the galaxy NGC 300, located about seven million light-years away in the constellation Sculptor. It is a classic spiral galaxy with open arms and vigorous star formation throughout.
2005-04-11
Young hot blue stars dominate the outer spiral arms of nearby galaxy NGC 300, while the older stars congregate in the nuclear regions which appear yellow-green in this image from NASA Galaxy Evolution Explorer.
Collisions of plastic and foam laser-driven foils studied by orthogonal x-ray imaging.
NASA Astrophysics Data System (ADS)
Aglitskiy, Y.; Metzler, N.; Karasik, M.; Serlin, V.; Obenschain, S. P.; Schmitt, A. J.; Velikovich, A. L.; Zalesak, S. T.; Gardner, J. H.; Weaver, J.; Oh, J.; Harding, E. C.
2007-11-01
We report an experimental study of hydrodynamic Rayleigh-Taylor and Richtmyer-Meshkov-type instabilities developing at the material interface produced in double-foil collisions. Our double-foil targets consist of a plastic foil irradiated by the 4 ns Nike KrF laser pulse at ~50 TW/cm^2 and accelerated toward a stationary plastic or foam foil. Either the rear side of the front foil or the front side of the rear foil is rippled. Orthogonal imaging, i.e., simultaneous side-on and face-on x-ray radiography of the targets, has been used in these experiments to observe the process of collision and the evolution of the areal-mass amplitude modulation. The observed evolution is similar to the case of the classical RM instability in finite-thickness targets first studied by Y. Aglitsky et al., Phys. Plasmas 13, 80703 (2006). Our data compare favorably with 1D and 2D simulation results.
Štys, Dalibor; Urban, Jan; Vaněk, Jan; Císař, Petr
2011-06-01
We report an objective analysis of the information in microscopic images of a cell monolayer. The transfer of information about the cell through the microscope is analyzed in terms of the classical Shannon information-transfer scheme: the information source is the biological object, the transfer channel is the whole microscope including the camera chip, and the destination is the model of the biological system. The information contribution of a point is analyzed as the information it carries relative to the overall information in the image. From this we obtain an information reflection of the biological object, which is transformed into the biological model, in information terminology, the destination. This model, we propose, should be constructed as state transitions in individual cells modulated by information bonds between the cells. We show examples of detected cell states in a multidimensional state space, which is reflected in a phenomenological state space of colour-channel intensities. We have also observed information bonds and show examples of them.
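One simple way to make the "information carried by a point" concrete, though not necessarily the authors' exact measure, is the self-information of each pixel's intensity under the image's own histogram:

    import numpy as np

    def point_information(img):
        # Bits carried by each pixel: -log2 p(v) of its intensity v,
        # with p estimated from the image's own intensity histogram.
        p = np.bincount(img.ravel(), minlength=256) / img.size
        info_map = -np.log2(p[img] + 1e-12)
        entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))  # whole-image information
        return info_map, entropy

    img = (np.random.rand(128, 128) * 255).astype(np.uint8)
    info_map, H = point_information(img)
    print(H, float(info_map.mean()))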
Multiple template-based image matching using alpha-rooted quaternion phase correlation
NASA Astrophysics Data System (ADS)
DelMarco, Stephen
2010-04-01
In computer vision applications, image matching performed on quality-degraded imagery is difficult due to image content distortion and noise effects. State-of-the-art keypoint-based matchers, such as SURF and SIFT, work very well on clean imagery. However, performance can degrade significantly in the presence of high noise and clutter levels: noise and clutter cause the formation of false features which can degrade recognition performance. To address this problem, we previously developed an extension to the classical amplitude and phase correlation forms which provides improved robustness and tolerance to image geometric misalignments and noise. This extension, called Alpha-Rooted Phase Correlation (ARPC), combines Fourier-domain alpha-rooting enhancement with classical phase correlation. ARPC provides tunable parameters to control the alpha-rooting enhancement; these parameter values can be optimized to trade off between high, narrow correlation peaks and wider, smaller, but more robust peaks. Previously, we applied ARPC in the Radon transform domain for logo image recognition in the presence of rotational image misalignments. In this paper, we extend ARPC to incorporate quaternion Fourier transforms, thereby creating Alpha-Rooted Quaternion Phase Correlation (ARQPC), and apply it to the logo image recognition problem. We use ARQPC to perform multiple-reference logo template matching by representing multiple same-class reference templates as quaternion-valued images. We generate recognition performance results on publicly available logo imagery and compare them to results generated from standard approaches. We show that small deviations in reference templates of same-class logos can lead to improved recognition performance using the joint matching inherent in ARQPC.
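A scalar, single-channel sketch of the ARPC idea (our simplification; the quaternion extension is omitted): the cross-spectrum magnitude is re-weighted by |.|^alpha, so alpha = 0 reduces to pure phase correlation and alpha = 1 to plain cross-correlation:

    import numpy as np

    def arpc(f, g, alpha=0.6):
        # Alpha-rooted phase correlation between two same-size images.
        F, G = np.fft.fft2(f), np.fft.fft2(g)
        cross = np.conj(F) * G
        mag = np.abs(cross) + 1e-12
        surface = np.fft.ifft2(cross * mag ** (alpha - 1.0)).real
        peak = np.unravel_index(np.argmax(surface), surface.shape)
        return surface, peak  # peak location estimates the translation

    rng = np.random.default_rng(0)
    a = rng.random((64, 64))
    b = np.roll(a, (5, 9), axis=(0, 1))  # g is a circularly shifted f
    print(arpc(a, b)[1])                 # expect (5, 9)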
NASA Astrophysics Data System (ADS)
Wild, Walter James
1988-12-01
External nuclear medicine diagnostic imaging of early primary and metastatic lung cancer tumors is difficult due to the poor sensitivity and resolution of existing gamma cameras. Nonimaging counting detectors used for internal tumor detection give ambiguous results because distant background variations are difficult to discriminate from neighboring tumor sites. This suggests that an internal imaging nuclear medicine probe, particularly an esophageal probe, may be advantageously used to detect small tumors because of its ability to discriminate against background variations and its capability to get close to sites neighboring the esophagus. The design, theory of operation, preliminary bench tests, characterization of noise behavior, and optimization of such an imaging probe are the central theme of this work. The central concept lies in the representation of the aperture shell by a sequence of binary digits. This, coupled with the mode of operation, which is data encoding within an axial slice of space, leads to the fundamental imaging equation in which the coding operation is conveniently described by a circulant matrix operator. The coding/decoding process is a classic coded-aperture problem, and various estimators to achieve decoding are discussed. Some estimators require a priori information about the object (or object class) being imaged; the only unbiased estimator that does not impose this requirement is the simple inverse-matrix operator. The effects of noise on the estimate (or reconstruction) are discussed for general noise models and various codes/decoding operators. The choice of an optimal aperture for detector count times of clinical relevance is examined using a statistical class-separability formalism.
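Because the coding operator is circulant, encoding is a circular convolution of the object slice with the binary aperture code, and the unbiased inverse-matrix decoder diagonalizes under the FFT. A 1-D toy version (ours, with a lightly regularized inverse to guard against near-zero code frequencies):

    import numpy as np

    rng = np.random.default_rng(1)
    n = 32
    code = rng.integers(0, 2, n).astype(float)    # binary aperture shell
    obj = np.zeros(n); obj[[4, 20]] = [3.0, 1.0]  # two point-like sources

    # Encode: detector data = circular convolution (circulant operator).
    data = np.real(np.fft.ifft(np.fft.fft(code) * np.fft.fft(obj)))

    # Decode: inverse of the circulant matrix, applied in Fourier space.
    C = np.fft.fft(code)
    recon = np.real(np.fft.ifft(np.fft.fft(data) * np.conj(C) / (np.abs(C) ** 2 + 1e-9)))
    print(np.round(recon[[4, 20]], 3))            # recovers ~[3.0, 1.0]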
MRI and clinical features of maple syrup urine disease: preliminary results in 10 cases
Cheng, Ailan; Han, Lianshu; Feng, Yun; Li, Huimin; Yao, Rong; Wang, Dengbin; Jin, Biao
2017-01-01
PURPOSE We aimed to evaluate the magnetic resonance imaging (MRI) and clinical features of maple syrup urine disease (MSUD). METHODS This retrospective study consisted of 10 MSUD patients confirmed by genetic testing. All patients underwent brain MRI. Phenotype, genotype, and areas of brain injury on MRI were retrospectively reviewed. RESULTS Six patients (60%) had the classic form of MSUD with BCKDHB mutation, three patients (30%) had the intermittent form (two with BCKDHA mutations and one with DBT mutation), and one patient (10%) had the thiamine-responsive form with DBT mutation. On diffusion-weighted imaging, nine cases presented restricted diffusion in myelinated areas, and one intermittent case with DBT mutation was normal. The classic form of MSUD involved the basal ganglia in six cases; the cerebellum, mesencephalon, pons, and supratentorial area in five cases; and the thalamus in four cases. The intermittent form involved the cerebellum, pons, and supratentorial area in two cases. The thiamine-responsive form involved the basal ganglia and supratentorial area. CONCLUSION Our preliminary results indicate that patients with MSUD presented most commonly with the classic form and BCKDHB mutation and displayed extensive brain injury on MRI. PMID:28830848
Prudlo, Johannes; Bißbort, Charlotte; Glass, Aenne; Grossmann, Annette; Hauenstein, Karlheinz; Benecke, Reiner; Teipel, Stefan J
2012-09-01
The aim of this work was to investigate white-matter microstructural changes within and outside the corticospinal tract in classical amyotrophic lateral sclerosis (ALS) and in lower motor neuron (LMN) ALS variants by means of diffusion tensor imaging (DTI). We investigated 22 ALS patients and 21 age-matched controls using a whole-brain approach with a 1.5-T scanner for DTI. The patient group comprised 15 classical ALS and seven LMN ALS-variant patients (progressive muscular atrophy, flail arm and flail leg syndrome). Disease severity was measured by the revised version of the functional rating scale. White-matter fractional anisotropy (FA) was assessed using tract-based spatial statistics (TBSS) and a region of interest (ROI) approach. We found significant FA reductions in motor and extra-motor cerebral fiber tracts in classical ALS and in the LMN ALS-variant patients compared to controls. The voxel-based TBSS results were confirmed by the ROI findings. The white-matter damage correlated with disease severity in the patient group and was found in a similar distribution, but to a lesser extent, in the LMN ALS-variant subgroup. ALS and LMN ALS variants are multisystem degenerations. DTI shows the potential to enable an earlier diagnosis, particularly in LMN ALS variants. The statistically identical findings of white-matter lesions in classical ALS and LMN variants as ascertained by DTI further underline that these variants should be regarded as part of the ALS spectrum.
Flat clathrin lattices: stable features of the plasma membrane.
Grove, Joe; Metcalf, Daniel J; Knight, Alex E; Wavre-Shapton, Silène T; Sun, Tony; Protonotarios, Emmanouil D; Griffin, Lewis D; Lippincott-Schwartz, Jennifer; Marsh, Mark
2014-11-05
Clathrin-mediated endocytosis (CME) is a fundamental property of eukaryotic cells. Classical CME proceeds via the formation of clathrin-coated pits (CCPs) at the plasma membrane, which invaginate to form clathrin-coated vesicles, a process that is well understood. However, clathrin also assembles into flat clathrin lattices (FCLs); these structures remain poorly described, and their contribution to cell biology is unclear. We used quantitative imaging to provide the first comprehensive description of FCLs and explore their influence on plasma membrane organization. Ultrastructural analysis by electron and superresolution microscopy revealed two discrete populations of clathrin structures. CCPs were typified by their sphericity, small size, and homogeneity. FCLs were planar, large, and heterogeneous and present on both the dorsal and ventral surfaces of cells. Live microscopy demonstrated that CCPs are short lived and culminate in a peak of dynamin recruitment, consistent with classical CME. In contrast, FCLs were long lived, with sustained association with dynamin. We investigated the biological relevance of FCLs using the chemokine receptor CCR5 as a model system. Agonist activation leads to sustained recruitment of CCR5 to FCLs. Quantitative molecular imaging indicated that FCLs partitioned receptors at the cell surface. Our observations suggest that FCLs provide stable platforms for the recruitment of endocytic cargo.
NASA Astrophysics Data System (ADS)
Rack, A.; Stiller, M.; Nelson, K.; Knabe, C.; Rack, T.; Zabler, S.; Dalügge, O.; Riesemeier, H.; Cecilia, A.; Goebbels, J.
2010-09-01
Biocompatible materials such as porous bioactive calcium phosphate ceramics or titanium are regularly applied in dental surgery: ceramics are used to support local bone regeneration in a given defect, after which titanium implants replace lost teeth. The current gold standard for bone reconstruction in implant dentistry is the use of autogenous bone grafts, but guided bone regeneration (GBR) has become a predictable and well-documented surgical approach using biomaterials (bioactive calcium phosphate ceramics) that qualify as bone substitutes for this kind of application as well. We applied high-resolution synchrotron microtomography and subsequent 3D image analysis to investigate bone formation and degradation of the bone substitute material in a three-dimensional manner, extending the knowledge beyond the limits of classical histology. Following bone regeneration, titanium-based implants that replace lost teeth call for high mechanical precision, especially when two-piece concepts are used, in order to guarantee leak tightness. Here, synchrotron-based radiography, in comparison with classical laboratory radiography, yields high spatial resolution in combination with high contrast even when resolving micro-sized features in this kind of highly attenuating object. We could therefore study micro-gap formation at the interfaces in two-piece dental implants with the specimen under different mechanical loads. We proved the existence of micro-gaps for implants with conical connections and studied the micromechanical behavior of the mating zone of conical implants during loading. The micro-gap is a potential point of failure, i.e., bacterial leakage, which can induce an inflammatory process.
Kapke, Jonathan T; Epperla, Narendranath; Shah, Namrata; Richardson, Kristin; Carrum, George; Hari, Parameswaran N; Pingali, Sai R; Hamadani, Mehdi; Karmali, Reem; Fenske, Timothy S
2017-07-01
Patients with relapsed and refractory classical Hodgkin lymphoma (cHL) are often treated with autologous hematopoietic cell transplantation (auto-HCT). After auto-HCT, most transplant centers implement routine surveillance imaging to monitor for disease relapse; however, there is limited evidence to support this practice. In this multicenter, retrospective study, we identified cHL patients (n = 128) who received auto-HCT, achieved complete remission (CR) after transplantation, and then were followed with routine surveillance imaging. Of these, 29 (23%) relapsed after day 100 after auto-HCT. Relapse was detected clinically in 14 patients and with routine surveillance imaging in 15 patients. When clinically detected relapse was compared with radiographically detected relapse, the median overall survival (2084 days [range, 225-4161] vs. 2737 days [range, 172-2750]; P = .51), the median time to relapse (247 days [range, 141-3974] vs. 814 days [range, 96-1682]; P = .30), and the median postrelapse survival (674 days [range, 13-1883] vs. 1146 days [range, 4-2548]; P = .52) were not statistically different. In patients who never relapsed after auto-HCT, a median of 4 (range, 1-25) surveillance imaging studies were performed over a median follow-up period of 3.5 years. A minority of patients with cHL who achieve CR after auto-HCT will ultimately relapse. Surveillance imaging detected approximately half of relapses; however, outcomes were similar whether relapse was detected by routine surveillance imaging or clinically in between surveillance imaging studies. There appears to be limited utility for routine surveillance imaging in cHL patients who achieve CR after auto-HCT.
NASA Astrophysics Data System (ADS)
Tian, J.; Krauß, T.; d'Angelo, P.
2017-05-01
Automatic rooftop extraction is one of the most challenging problems in remote sensing image analysis. Classical 2D image processing techniques are expensive due to the large number of features required to locate buildings. This problem can be avoided when 3D information is available. In this paper, we show how to fuse the spectral and height information of stereo imagery to achieve efficient and robust rooftop extraction. In the first step, the digital terrain model (DTM), and in turn the normalized digital surface model (nDSM), is generated using a new step-edge approach. In the second step, the initial building locations and rooftop boundaries are derived by removing low-level (ground) pixels and those high-level pixels that are more likely to be trees or shadows. This boundary then serves as the initial level-set function, which is further refined to fit the best possible boundaries through distance-regularized level-set curve evolution. During the fitting procedure, an edge-based active contour model is adopted and implemented using edge indicators extracted from the panchromatic image. The performance of the proposed approach is tested on WorldView-2 satellite data captured over Munich.
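A minimal sketch of the height-based initialization step; the thresholds are illustrative and the NDVI test is our stand-in for the paper's tree and shadow removal:

    import numpy as np

    def initial_building_mask(dsm, dtm, ndvi, min_height=2.5, ndvi_veg=0.3):
        # nDSM = DSM - DTM; keep elevated pixels, drop likely vegetation.
        ndsm = dsm - dtm
        mask = ndsm > min_height      # remove low-level (ground) pixels
        mask &= ndvi < ndvi_veg       # remove elevated pixels likely trees
        return mask

    dsm = np.random.rand(100, 100) * 10.0
    dtm = np.zeros_like(dsm)
    ndvi = np.random.rand(100, 100)
    print(initial_building_mask(dsm, dtm, ndvi).sum())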
Electro-optical design for efficient visual communication
NASA Astrophysics Data System (ADS)
Huck, Friedrich O.; Fales, Carl L.; Jobson, Daniel J.; Rahman, Zia-ur
1994-06-01
Visual communication can be regarded as efficient only if the amount of information that it conveys from the scene to the observer approaches the maximum possible and the associated cost approaches the minimum possible. To deal with this problem, Fales and Huck have integrated the critical limiting factors that constrain image gathering into classical concepts of communication theory. This paper uses this approach to assess the electro-optical design of the image gathering device. Design variables include the f-number and apodization of the objective lens, the aperture size and sampling geometry of the photodetection mechanism, and lateral inhibition and nonlinear radiance-to-signal conversion akin to the retinal processing in the human eye. It is an agreeable consequence of this approach that the image gathering device that is designed along the guidelines developed from communication theory behaves very much like the human eye. The performance approaches the maximum possible in terms of the information content of the acquired data, and thereby, the fidelity, sharpness and clarity with which fine detail can be restored, the efficiency with which the visual information can be transmitted in the form of decorrelated data, and the robustness of these two attributes to the temporal and spatial variations in scene illumination.
2015-07-06
This little-known galaxy, officially named J04542829-6625280, but most often referred to as LEDA 89996, is a classic example of a spiral galaxy. The galaxy is much like our own galaxy, the Milky Way. The disc-shaped galaxy is seen face on, revealing the winding structure of the spiral arms. Dark patches in these spiral arms are in fact dust and gas — the raw materials for new stars. The many young stars that form in these regions make the spiral arms appear bright and bluish. The galaxy sits in a vibrant area of the night sky within the constellation of Dorado (The Swordfish), and appears very close to the Large Magellanic Cloud — one of the satellite galaxies of the Milky Way. The observations were carried out with the high resolution channel of Hubble’s Advanced Camera for Surveys. This instrument has delivered some of the sharpest views of the Universe so far achieved by mankind. This image covers only a tiny patch of sky — about the size of a one cent euro coin held 100 metres away! A version of this image was entered into the Hubble’s Hidden Treasures image processing competition by flickr user c.claude.
[Study of automatic marine oil spills detection using imaging spectroscopy].
Liu, De-Lian; Han, Liang; Zhang, Jian-Qi
2013-11-01
To reduce manual auxiliary work in the oil spill detection process, an automatic oil spill detection method based on an adaptive matched filter is presented. Firstly, the characteristics of the reflectance spectral signature of the C-H bond in oil spills are analyzed, and an oil spill spectral-signature extraction model is designed using this spectral feature; it provides the reference spectral signature for the subsequent detection step. Secondly, the reflectance spectral signatures of sea water, clouds, and oil spills are compared, and the bands in which they differ most are selected. Using these bands, the sea-water pixels are segmented and the background parameters are then calculated. Finally, the classical adaptive matched filter from target detection is improved and applied to oil spill detection. The proposed method is applied to real Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) hyperspectral imagery captured during the Deepwater Horizon oil spill in the Gulf of Mexico. The results show that the proposed method has high efficiency, needs no manual auxiliary work, and can be used for automatic detection of marine oil spills.
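The classical adaptive matched filter that the method starts from can be sketched as follows (our generic version, not the paper's improved detector): each pixel x is scored by (s^T C^-1 x)^2 / (s^T C^-1 s) against the estimated background covariance C:

    import numpy as np

    def adaptive_matched_filter(X, s):
        # X: (pixels, bands) scene; s: (bands,) reference oil spectrum.
        mu = X.mean(axis=0)
        Xc, sc = X - mu, s - mu
        C = (Xc.T @ Xc) / (X.shape[0] - 1)
        Cinv = np.linalg.inv(C + 1e-6 * np.eye(X.shape[1]))
        return (Xc @ Cinv @ sc) ** 2 / (sc @ Cinv @ sc)  # high = oil-like

    rng = np.random.default_rng(2)
    X = rng.normal(size=(5000, 20))     # synthetic 20-band scene
    s = rng.normal(size=20)
    print(adaptive_matched_filter(X, s).shape)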
Tracking of Ball and Players in Beach Volleyball Videos
Gomez, Gabriel; Herrera López, Patricia; Link, Daniel; Eskofier, Bjoern
2014-01-01
This paper presents methods for the determination of players' positions and contact time points by tracking the players and the ball in beach volleyball videos. Two player tracking methods are compared, a classical particle filter and a rigid grid integral histogram tracker. Due to mutual occlusion of the players and the camera perspective, results are best for the front players, with 74.6% and 82.6% of correctly tracked frames for the particle method and the integral histogram method, respectively. Results suggest improved robustness against player confusion between different particle sets when tracking with a rigid grid approach. Faster processing and fewer player confusions make this method superior to the classical particle filter. Two different ball tracking methods are used that detect ball candidates from movement difference images using a background subtraction algorithm. Ball trajectories are estimated and interpolated from parabolic flight equations. The tracking accuracy of the ball is 54.2% for the trajectory growth method and 42.1% for the Hough line detection method. Tracking results of over 90% from the literature could not be confirmed. Ball contact frames were estimated from parabolic trajectory intersection, resulting in 48.9% of correctly estimated ball contact points. PMID:25426936
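Estimating the trajectory from parabolic flight equations amounts to a quadratic fit over candidate detections; a minimal sketch with made-up frame data:

    import numpy as np

    t = np.array([0.00, 0.04, 0.08, 0.12, 0.16])   # frame times (s)
    y = np.array([310., 280., 258., 244., 238.])   # ball height (pixels)
    a, b, c = np.polyfit(t, y, 2)                  # y ~ a*t^2 + b*t + c
    t_next = 0.20
    print(round(a * t_next**2 + b * t_next + c, 1))  # extrapolated height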
Liquid jet breakup regimes at supercritical pressures
Oefelein, Joseph C.; Dahms, Rainer Norbert Uwe
2015-07-23
Previously, a theory has been presented that explains how discrete vapor–liquid interfaces become diminished at certain high-pressure conditions in a manner that leads to well known qualitative trends observed from imaging in a variety of experiments. Rather than surface tension forces, transport processes can dominate over relevant ranges of conditions. In this paper, this framework is now generalized to treat a wide range of fuel-oxidizer combinations in a manner consistent with theories of capillary flows and extended corresponding states theory. Different flow conditions and species-specific molecular properties are shown to produce distinct variations of interfacial structures and local free molecular paths. These variations are shown to occur over the operating ranges in a variety of propulsion and power systems. Despite these variations, the generalized analysis reveals that the envelope of flow conditions at which the transition from classical sprays to diffusion-dominated mixing occurs exhibits a characteristic shape for all liquid–gas combinations. As a result, for alkane-oxidizer mixtures, it explains that these conditions shift to higher pressure flow conditions with increasing carbon number and demonstrates that, instead of widely assumed classical spray atomization, diffusion-dominated mixing may occur under relevant high-pressure conditions in many modern devices.
NASA Astrophysics Data System (ADS)
Ying, Yibin; Liu, Yande; Fu, Xiaping; Lu, Huishan
2005-11-01
Artificial neural networks (ANNs) have been used successfully in applications such as pattern recognition, image processing, automation and control. The majority of today's ANN applications, however, use the back-propagation feed-forward ANN (BP-ANN). In this paper, BP-ANNs were applied to model the soluble solid content (SSC) of intact pears from their Fourier transform near-infrared (FT-NIR) spectra. One hundred and sixty-four pear samples were used to build the calibration models and evaluate their predictive ability. The results are compared to classical calibration approaches, i.e., principal component regression (PCR), partial least squares (PLS) and non-linear PLS (NPLS). The effect of optimizing the training parameters on the prediction model was also investigated. BP-ANN combined with principal component regression (PCR) consistently performed better than the classical PCR, PLS and Weight-PLS methods from the point of view of predictive ability. Based on these results, it can be concluded that FT-NIR spectroscopy with BP-ANN models can properly be employed for rapid and nondestructive determination of fruit internal quality.
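A bare-bones principal component regression, the classical baseline mentioned above (our own code; in the paper the PC scores additionally feed a back-propagation network):

    import numpy as np

    def pcr_fit_predict(X_train, y_train, X_test, n_comp=8):
        # Project spectra onto the first n_comp principal components,
        # then ordinary least squares on the scores.
        mu = X_train.mean(axis=0)
        _, _, Vt = np.linalg.svd(X_train - mu, full_matrices=False)
        P = Vt[:n_comp].T                                  # loadings
        T = (X_train - mu) @ P                             # scores
        A = np.c_[np.ones(len(T)), T]
        coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)
        Tt = (X_test - mu) @ P
        return np.c_[np.ones(len(Tt)), Tt] @ coef

    rng = np.random.default_rng(3)
    X = rng.normal(size=(164, 100))         # 164 spectra, 100 wavelengths
    y = X[:, :5].sum(axis=1) + 0.1 * rng.normal(size=164)  # synthetic SSC
    print(pcr_fit_predict(X[:130], y[:130], X[130:]).shape)  # (34,)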
Scheimpflug with computational imaging to extend the depth of field of iris recognition systems
NASA Astrophysics Data System (ADS)
Sinharoy, Indranil
Despite the enormous success of iris recognition in close-range and well-regulated spaces for biometric authentication, it has hitherto failed to gain wide-scale adoption in less controlled, public environments. The problem arises from a limitation in imaging called the depth of field (DOF): the limited range of distances outside which subjects appear blurry in the image. The loss of spatial details in the iris image outside the small DOF limits iris image capture to a small volume, the capture volume. Existing techniques to extend the capture volume are usually expensive, computationally intensive, or afflicted by noise. Is there a way to combine the classical Scheimpflug principle with modern computational imaging techniques to extend the capture volume? The solution we found is surprisingly simple; yet it provides several key advantages over existing approaches. Our method, called Angular Focus Stacking (AFS), consists of capturing a set of images while rotating the lens, followed by registration and blending of the in-focus regions from the images in the stack. The theoretical underpinnings of AFS arose from a pair of new and general imaging models we developed for Scheimpflug imaging that directly incorporate the pupil parameters. The model revealed that we can register the images in the stack analytically if we pivot the lens at the center of its entrance pupil, rendering the registration process exact. Additionally, we found that a specific lens design further reduces the complexity of image registration, making AFS suitable for real-time performance. We have demonstrated up to an order of magnitude improvement in the axial capture volume over conventional image capture without sacrificing optical resolution or signal-to-noise ratio. The total time required for capturing the set of images for AFS is less than the time needed for a single-exposure conventional image of the same DOF and brightness level. The net reduction in capture time can significantly relax the constraints on subject movement during iris acquisition, making it less restrictive.
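The blending step of a focus-stacking scheme like AFS can be sketched as picking, per pixel, the frame with the highest local sharpness; this is our illustration, and it assumes the stack is already registered (AFS does the registration analytically by pivoting the lens at its entrance pupil):

    import numpy as np
    from scipy.ndimage import laplace, uniform_filter

    def blend_stack(images):
        # Per-pixel local Laplacian energy as a sharpness measure.
        sharp = np.stack([uniform_filter(laplace(im.astype(float)) ** 2, 9)
                          for im in images])
        best = np.argmax(sharp, axis=0)          # sharpest frame per pixel
        stack = np.stack(images)
        return np.take_along_axis(stack, best[None], axis=0)[0]

    imgs = [np.random.rand(64, 64) for _ in range(5)]
    print(blend_stack(imgs).shape)               # (64, 64) composite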
Imaging Total Stations - Modular and Integrated Concepts
NASA Astrophysics Data System (ADS)
Hauth, Stefan; Schlüter, Martin
2010-05-01
Keywords: 3D metrology, engineering geodesy, digital image processing. Initialized in 2009, the Institute for Spatial Information and Surveying Technology i3mainz, Mainz University of Applied Sciences, pursues research towards modular concepts for imaging total stations. On the one hand, this research is driven by the successful setup of high-precision imaging motor theodolites in the recent past; on the other hand, it is pushed by the recent introduction of integrated imaging total stations to the positioning market by the manufacturers Topcon and Trimble. Modular concepts for imaging total stations are largely manufacturer-independent and consist of a particular combination of accessory hardware, software and algorithmic procedures. The hardware part consists mainly of an interchangeable eyepiece adapter offering opportunities for digital imaging and motorized focus control. Easy assembly and disassembly in the field allow the user to switch between the classical and the imaging use of a robotic total station. The software part primarily has to ensure hardware control, but several levels of algorithmic support might be added and have to be distinguished. Algorithmic procedures allow several levels of calibration concerning the geometry of the external digital camera and the total station. We give insight into our recent developments and quality characteristics. Both the modular and the integrated approach seem to have their individual strengths and weaknesses, so we expect the two approaches to point at different target applications. Our aim is a better understanding of appropriate applications for robotic imaging total stations. First results are presented.
The Use of Multiple Data Sources in the Process of Topographic Maps Updating
NASA Astrophysics Data System (ADS)
Cantemir, A.; Visan, A.; Parvulescu, N.; Dogaru, M.
2016-06-01
The methods used in the process of updating maps have evolved and become more complex, especially with the development of digital technology. At the same time, the development of technology has led to an abundance of available data that can be used in the updating process. The data sources come in a great variety of forms and formats, from different acquisition sensors. Satellite images provided by certain satellite missions are now available on space agencies' portals; images stored in the archives of satellite missions such as Sentinel and Landsat can be downloaded free of charge. Their main advantages are the large coverage area and rather good spatial resolution, which enable the use of these images for map updating at an appropriate scale. In our study we focused our research on the 1:50,000 scale map. Globally available DEMs can provide an appropriate input for watershed delineation and stream-network generation, which can support updates of the hydrography thematic layer. If, in addition to remote sensing, aerial photogrammetry and LiDAR data are used, the accuracy of the data sources is enhanced. Orthophotoimages and digital terrain models are the main products that can be used for feature extraction and update. On the other hand, the use of georeferenced analogue basemaps is a significant addition to the process. Concerning the thematic maps, the classic representation of the terrain by contour lines derived from the DTM remains the best method of depicting the Earth's surface on a map; nevertheless, correlation with other layers such as hydrography is mandatory. In the context of the current national coverage of the digital terrain model, one of the main concerns of the National Center of Cartography, through the Cartography and Photogrammetry Department, is the exploitation of the available data in order to update the layers of the Topographic Reference Map 1:5000, known as TOPRO5, and at the same time, through generalization and additional data sources, the Romanian 1:50,000 scale map. This paper also investigates the general perspective of the automatic use of DTM-derived products in the process of updating topographic maps.
Zinser, Max J; Sailer, Hermann F; Ritter, Lutz; Braumann, Bert; Maegele, Marc; Zöller, Joachim E
2013-12-01
Advances in computers and imaging have permitted the adoption of 3-dimensional (3D) virtual planning protocols in orthognathic surgery, which may allow a paradigm shift once the virtual plan can be transferred properly. The purpose of this investigation was to compare the versatility and precision of innovative computer-aided design and computer-aided manufacturing (CAD/CAM) surgical splints, intraoperative navigation, and "classic" intermaxillary occlusal splints for the surgical transfer of virtual orthognathic planning. The protocols consisted of maxillofacial imaging, diagnosis, virtual orthognathic planning, and surgical planning transfer using newly designed CAD/CAM splints (approach A), navigation (approach B), and intermaxillary occlusal splints (approach C). In this prospective observational study, all patients underwent bimaxillary osteotomy. Eight patients were treated using approach A, 10 using approach B, and 12 using approach C. These techniques were evaluated by applying 13 hard and 7 soft tissue parameters to compare the virtual orthognathic planning (T0) with the postoperative result (T1) using 3D cephalometry and image fusion (ΔT1 vs T0). The highest precision (ΔT1 vs T0) for the maxillary planning transfer was observed with CAD/CAM splints (<0.23 mm; P > .05), followed by surgical "waferless" navigation (<0.61 mm; P < .05) and classic intermaxillary occlusal splints (<1.1 mm; P < .05). Only the innovative CAD/CAM splints kept the condyles in their central position in the temporomandibular joint. However, no technique enables a precise prediction of the mandible and soft tissue. CAD/CAM splints and surgical navigation provide a reliable, innovative, and precise approach for the transfer of virtual orthognathic planning. These computer-assisted techniques may offer an alternative to the use of classic intermaxillary occlusal splints.
Fully automated corneal endothelial morphometry of images captured by clinical specular microscopy
NASA Astrophysics Data System (ADS)
Bucht, Curry; Söderberg, Per; Manneberg, Göran
2009-02-01
The corneal endothelium serves as the posterior barrier of the cornea. Factors such as clarity and the refractive properties of the cornea are in direct relationship to the quality of the endothelium. The endothelial cell density is considered the most important morphological factor. Morphometry of the corneal endothelium is presently done by semi-automated analysis of pictures captured by a Clinical Specular Microscope (CSM). Because of the occasional need for operator involvement, this process can be tedious, having a negative impact on sampling size. This study was dedicated to the development of fully automated analysis of images of the corneal endothelium, captured by CSM, using Fourier analysis. Software was developed in the mathematical programming language Matlab. Pictures of the corneal endothelium, captured by CSM, were read into the analysis software, which automatically performed digital enhancement of the images. The digitally enhanced images of the corneal endothelium were transformed using the fast Fourier transform (FFT). Tools were developed and applied for identification and analysis of relevant characteristics of the Fourier-transformed images. The data obtained from each Fourier-transformed image were used to calculate the mean cell density of the corresponding corneal endothelium, based on well-known diffraction theory. Estimates of the cell density of the corneal endothelium were thus obtained using fully automated analysis software on images captured by CSM. The cell density obtained by the fully automated analysis was compared to the cell density obtained from classical, semi-automated analysis, and a strong correlation was found.
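The diffraction-theory idea can be sketched in simplified form (our own reduction, not the authors' Matlab software): the quasi-regular endothelial mosaic produces a ring in the Fourier magnitude whose radius is inversely proportional to the mean cell spacing, from which a density estimate follows under an assumed roughly hexagonal packing:

    import numpy as np

    def cell_density_fft(img, um_per_px):
        n = img.shape[0]                      # assume a square image
        F = np.abs(np.fft.fftshift(np.fft.fft2(img - img.mean())))
        yy, xx = np.indices(F.shape)
        r = np.hypot(yy - n // 2, xx - n // 2).astype(int)
        # Radial average of the spectrum; the ring shows up as a peak.
        radial = np.bincount(r.ravel(), F.ravel()) / np.bincount(r.ravel())
        peak = np.argmax(radial[2:n // 2]) + 2           # skip DC region
        spacing_um = n * um_per_px / peak                # mean cell spacing
        return 1e6 / (np.sqrt(3) / 2 * spacing_um ** 2)  # cells per mm^2

    img = np.random.rand(256, 256)                       # stand-in image
    print(cell_density_fft(img, um_per_px=1.0))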
Label-free in vivo imaging of Drosophila melanogaster by multiphoton microscopy
NASA Astrophysics Data System (ADS)
Lin, Chiao-Ying; Hovhannisyan, Vladimir; Wu, June-Tai; Lin, Sung-Jan; Lin, Chii-Wann; Chen, Jyh-Horng; Dong, Chen-Yuan
2008-02-01
The fruit fly Drosophila melanogaster is one of the most valuable organisms in genetic and developmental biology studies. Drosophila is a small organism with a short life cycle, and is inexpensive and easy to maintain. The entire genome of Drosophila has been sequenced. These advantages make the fruit fly an attractive model organism for biomedical research. Unlike humans, Drosophila can be subjected to genetic manipulation with relative ease. Originally, Drosophila was mostly used in classical genetics studies; in the modern era of molecular biology, the fruit fly has become a model organism for developmental biology research. Numerous molecularly modified mutants with well-defined genetic defects affecting different aspects of the developmental processes have been identified and studied. Traditionally, however, the developmental defects of the mutant flies are mostly examined in isolated fixed tissues, which precludes observation of the dynamic interaction of the different cell types and the extracellular matrix. Therefore, the ability to image different organelles of the fruit fly without extrinsic labeling is invaluable for Drosophila biology. In this work, we acquire in vivo images of both developing muscles and axons of motor neurons in the three larval stages using the minimally invasive imaging modality of multiphoton microscopy, including second harmonic generation (SHG). We found that while SHG imaging is useful in revealing the muscular architecture of the developing larva, it is the autofluorescence signal that allows label-free imaging of various organelles to be achieved. Our results demonstrate that multiphoton imaging is a powerful technique for investigating the development of Drosophila.
NASA Astrophysics Data System (ADS)
Liu, Yang; Li, Feng; Xin, Lei; Fu, Jie; Huang, Puming
2017-10-01
A large amount of data is one of the most obvious features of satellite-based remote sensing systems, and a burden for data processing and transmission. The theory of compressive sensing (CS) has been around for almost a decade, and extensive experiments show that CS performs well in data compression and recovery, so we apply CS theory to remote sensing image acquisition. In CS, the construction of a classical sensing matrix for all sparse signals has to satisfy the Restricted Isometry Property (RIP) strictly, which limits the practical application of CS to image compression. For remote sensing images, however, we know some inherent characteristics such as non-negativity and smoothness. The goal of this paper is therefore to present a novel measurement matrix that relaxes the RIP requirement. The new sensing matrix consists of two parts: a standard Nyquist sampling matrix for thumbnails and a conventional CS sampling matrix. Since most sun-synchronous satellites orbit the Earth every 90 minutes and the revisit cycle is short, many previously captured remote sensing images of the same place are available in advance. This motivates us to reconstruct remote sensing images from the measurements of the new framework with a deep learning approach. We therefore propose a novel deep convolutional neural network (CNN) architecture that takes the undersampled measurements as input and outputs an intermediate reconstruction image. The training procedure for the network takes a long time; fortunately, training needs to be done only once, which makes the approach attractive for a host of sparse recovery problems.
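The two-part sensing matrix can be written down directly; the sizes and the point-sampled thumbnail below are our assumptions for illustration:

    import numpy as np

    rng = np.random.default_rng(4)
    n = 64 * 64                             # vectorized image length
    img = rng.random(n)

    # Part 1: standard Nyquist rows that copy pixels of a thumbnail.
    thumb = np.zeros((64, n))
    idx = np.linspace(0, n - 1, 64).astype(int)
    thumb[np.arange(64), idx] = 1.0

    # Part 2: conventional random CS rows.
    cs = rng.normal(size=(448, n)) / np.sqrt(n)

    Phi = np.vstack([thumb, cs])            # (512, 4096) sensing matrix
    y = Phi @ img                           # measurements to transmit
    print(Phi.shape, y.shape)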
Fluorescent Nano-Probes to Image Plant Cell Walls by Super-Resolution STED Microscopy
Paës, Gabriel; Habrant, Anouck; Terryn, Christine
2018-01-01
Lignocellulosic biomass is a complex network of polymers making up the cell walls of plants. It represents a feedstock of sustainable resources to be converted into fuels, chemicals, and materials. Because of its complex architecture, lignocellulose is a recalcitrant material that requires some pretreatments and several types of catalysts to be transformed efficiently. Gaining more knowledge in the architecture of plant cell walls is therefore important to understand and optimize transformation processes. For the first time, super-resolution imaging of poplar wood samples has been performed using the Stimulated Emission Depletion (STED) technique. In comparison to standard confocal images, STED reveals new details in cell wall structure, allowing the identification of secondary walls and middle lamella with fine details, while keeping open the possibility to perform topochemistry by the use of relevant fluorescent nano-probes. In particular, the deconvolution of STED images increases the signal-to-noise ratio so that images become very well defined. The obtained results show that the STED super-resolution technique can be easily implemented by using cheap commercial fluorescent rhodamine-PEG nano-probes which outline the architecture of plant cell walls due to their interaction with lignin. Moreover, the sample preparation only requires easily-prepared plant sections of a few tens of micrometers, in addition to an easily-implemented post-treatment of images. Overall, the STED super-resolution technique in combination with a variety of nano-probes can provide a new vision of plant cell wall imaging by filling in the gap between classical photon microscopy and electron microscopy. PMID:29415498
Image Classification of Human Carcinoma Cells Using Complex Wavelet-Based Covariance Descriptors
Keskin, Furkan; Suhre, Alexander; Kose, Kivanc; Ersahin, Tulin; Cetin, A. Enis; Cetin-Atalay, Rengul
2013-01-01
Cancer cell lines are widely used for research purposes in laboratories all over the world. Computer-assisted classification of cancer cells can alleviate the burden of manual labeling and help cancer research. In this paper, we present a novel computerized method for cancer cell line image classification. The aim is to automatically classify 14 different classes of cell lines including 7 classes of breast and 7 classes of liver cancer cells. Microscopic images containing irregular carcinoma cell patterns are represented by subwindows which correspond to foreground pixels. For each subwindow, a covariance descriptor utilizing the dual-tree complex wavelet transform (DT-CWT) coefficients and several morphological attributes is computed. Directionally selective DT-CWT feature parameters are preferred primarily because of their ability to characterize edges at multiple orientations, which is the characteristic feature of carcinoma cell line images. A Support Vector Machine (SVM) classifier with radial basis function (RBF) kernel is employed for final classification. Over a dataset of 840 images, we achieve an accuracy above 98%, which outperforms the classical covariance-based methods. The proposed system can be used as a reliable decision maker for laboratory studies. Our tool provides an automated, time- and cost-efficient analysis of cancer cell morphology to classify different cancer cell lines using image-processing techniques, which can be used as an alternative to the costly short tandem repeat (STR) analysis. The data set used in this manuscript is available as supplementary material through http://signal.ee.bilkent.edu.tr/cancerCellLineClassificationSampleImages.html. PMID:23341908
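A region covariance descriptor in generic form, with simple intensity and gradient features standing in for the DT-CWT coefficients and morphological attributes used in the paper:

    import numpy as np

    def covariance_descriptor(patch):
        # Per-pixel feature vectors -> small symmetric covariance matrix.
        gy, gx = np.gradient(patch.astype(float))
        feats = np.stack([patch.ravel(), gx.ravel(), gy.ravel(),
                          (np.abs(gx) + np.abs(gy)).ravel()])
        return np.cov(feats)                 # 4x4 descriptor per subwindow

    patch = np.random.rand(16, 16)
    print(covariance_descriptor(patch).shape)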
Assembly and microscopic characterization of DNA origami structures.
Scheible, Max; Jungmann, Ralf; Simmel, Friedrich C
2012-01-01
DNA origami is a revolutionary method for the assembly of molecular nanostructures from DNA with precisely defined dimensions and with an unprecedented yield. It can be utilized to arrange nanoscale components such as proteins or nanoparticles into pre-defined patterns. For applications it is now of interest to arrange such components into functional complexes and study their geometry-dependent interactions. While DNA nanostructures are commonly characterized by atomic force microscopy or electron microscopy, these techniques often lack the time resolution to study dynamic processes. It is therefore of considerable interest to also apply fluorescence microscopy techniques to DNA nanostructures. Of particular importance here is the utilization of novel super-resolved microscopy methods that enable imaging beyond the classical diffraction limit.
NASA Astrophysics Data System (ADS)
N'Diaye, Mamadou; Choquet, Elodie; Carlotti, Alexis; Pueyo, Laurent; Egron, Sylvain; Leboulleux, Lucie; Levecq, Olivier; Perrin, Marshall D.; Wallace, J. Kent; Long, Chris; Lajoie, Rachel; Lajoie, Charles-Philippe; Eldorado Riggs, A. J.; Zimmerman, Neil T.; Groff, Tyler Dean; Kasdin, N. Jeremy; Vanderbei, Robert J.; Mawet, Dimitri; Macintosh, Bruce; Shaklan, Stuart; Soummer, Remi
2015-01-01
HiCAT is a high-contrast imaging testbed designed to provide complete solutions in wavefront sensing, control and starlight suppression with complex aperture telescopes. Primary mirror segmentation, central obstruction and spiders in the pupil of an on-axis telescope introduce additional diffraction features in the point spread function, which make high-contrast imaging very challenging. The testbed alignment was completed in the summer of 2014, exceeding specifications with a total wavefront error of 12 nm rms over an 18 mm pupil. Two deformable mirrors are to be installed for wavefront control in the fall of 2014. In this communication, we report on the first testbed results using a classical Lyot coronagraph. We have developed novel coronagraph designs combining an Apodized Pupil Lyot Coronagraph (APLC) with shaped-pupil type optimizations. We present the results of these new APLC-type solutions with two-dimensional shaped-pupil apodizers for the HiCAT geometry. These solutions render the system quasi-insensitive to jitter and low-order aberrations, while improving the performance in terms of inner working angle, bandpass and contrast over a classical APLC.
NASA Astrophysics Data System (ADS)
N'Diaye, Mamadou; Mazoyer, Johan; Choquet, Élodie; Pueyo, Laurent; Perrin, Marshall D.; Egron, Sylvain; Leboulleux, Lucie; Levecq, Olivier; Carlotti, Alexis; Long, Chris A.; Lajoie, Rachel; Soummer, Rémi
2015-09-01
HiCAT is a high-contrast imaging testbed designed to provide complete solutions in wavefront sensing, control and starlight suppression with complex aperture telescopes. The pupil geometry of such observatories includes primary mirror segmentation, central obstruction, and spider vanes, which make the direct imaging of habitable worlds very challenging. The testbed alignment was completed in the summer of 2014, exceeding specifications with a total wavefront error of 12 nm rms over an 18 mm pupil. The installation of two deformable mirrors for wavefront control is to be completed in the winter of 2015. In this communication, we report on the first testbed results using a classical Lyot coronagraph. We also present the coronagraph design for the HiCAT geometry, based on our recent development of the Apodized Pupil Lyot Coronagraph (APLC) with shaped-pupil type optimizations. These new APLC-type solutions using a two-dimensional shaped-pupil apodizer render the system quasi-insensitive to jitter and low-order aberrations, while improving the performance in terms of inner working angle, bandpass and contrast over a classical APLC.
Sadoun, A.; Strelnikov, K.; Bonté, E.; Fonta, C.; Girard, P.
2015-01-01
The number of studies that use the common marmoset (Callithrix jacchus) in various fields of neuroscience is increasing dramatically. In general, animals enter a study when their health status is considered satisfactory on the basis of classical clinical investigations. In behavioral studies, variations of score between individuals are frequently observed, with some animals considered poor performers or outliers, and experimenters rarely consider that this could be related to a brain anomaly. This raises the important issue of the reliability of such classical behavioral approaches without complementary imaging, especially in animals lacking striking external clinical signs. Here we report the case of a young marmoset which presented a set of cognitive impairments in two different tasks compared to other age-matched animals. Brain imaging revealed a patent right lateral ventricular enlargement with a mild hippocampal atrophy. This abnormality could explain the cognitive impairments of this animal. Such a case points to the importance of complementing behavioral studies with imaging explorations to avoid experimental bias. PMID:26527211
Relational Contract: Applicable to Department of Defense Contracts
1989-12-01
examine the evolution of contract law and, in particular, the role of contractual incompleteness in exchange relationships. 2.1.1. The Classical Approach... Classical contract law facilitates exchange by separately detailing all aspects of the contracting process at the outset by prespecification of all... modifications after contractual performance has begun. According to Williamson (1979), classical contract law implements prespecification through legal...
Quantum-mechanical machinery for rational decision-making in classical guessing game
NASA Astrophysics Data System (ADS)
Bang, Jeongho; Ryu, Junghee; Pawłowski, Marcin; Ham, Byoung S.; Lee, Jinhyoung
2016-02-01
In quantum game theory, one of the most intriguing and important questions is, “Is it possible to get quantum advantages without any modification of the classical game?” The answer to this question so far has largely been negative: it has usually been thought that a change of the classical game setting is unavoidable for obtaining quantum advantages. However, we give an affirmative answer here, focusing on the decision-making process (which we call ‘reasoning’) used to generate the best strategy, which may occur internally, e.g., in the player’s brain. To show this, we consider a classical guessing game. We then define a one-player reasoning problem in the context of decision-making theory, where the machinery processes are designed to simulate classical and quantum reasoning. In this setting, we present a scenario where a rational player is able to make better use of his or her weak preferences due to quantum reasoning, without any altering or resetting of the classically defined game. We also argue in further analysis that quantum reasoning may make the player fail, and even make the situation worse, due to inappropriate preferences.