Sample records for image processing theory

  1. Theory on data processing and instrumentation. [remote sensing]

    NASA Technical Reports Server (NTRS)

    Billingsley, F. C.

    1978-01-01

    A selection of NASA Earth observations programs is reviewed, emphasizing hardware capabilities. Sampling theory, noise and detection considerations, and image evaluation are discussed for remote sensor imagery. Vision and perception are considered, leading to numerical image processing. The use of multispectral scanners and of multispectral data processing systems, including digital image processing, is depicted. Multispectral sensing and analysis in application with land use and geographical data systems are also covered.

  2. Stochastic processes, estimation theory and image enhancement

    NASA Technical Reports Server (NTRS)

    Assefi, T.

    1978-01-01

    An introductory account of stochastic processes, estimation theory, and image enhancement is presented. The book is primarily intended for first-year graduate students and practicing engineers and scientists whose work requires an acquaintance with the theory. Fundamental concepts of probability that are required to support the main topics are reviewed, and the appendices discuss the remaining mathematical background.

  3. Television Images and Adolescent Girls' Body Image Disturbance.

    ERIC Educational Resources Information Center

    Botta, Renee A.

    1999-01-01

    Contributes to scholarship on the effects of media images on adolescents, using social-comparison theory and critical-viewing theory. Finds that media do have an impact on body-image disturbance. Suggests that body-image processing is the key to understanding how television images affect adolescent girls' body-image attitudes and behaviors. (SR)

  4. Relationships between digital signal processing and control and estimation theory

    NASA Technical Reports Server (NTRS)

    Willsky, A. S.

    1978-01-01

    Research areas associated with digital signal processing and control and estimation theory are identified. Particular attention is given to image processing, system identification problems (parameter identification, linear prediction, least squares, Kalman filtering), stability analyses (the use of the Liapunov theory, frequency domain criteria, passivity), and multiparameter systems, distributed processes, and random fields.

  5. Responding mindfully to distressing psychosis: A grounded theory analysis.

    PubMed

    Abba, Nicola; Chadwick, Paul; Stevenson, Chris

    2008-01-01

    This study investigates the psychological process involved when people with current distressing psychosis learn to respond mindfully to unpleasant psychotic sensations (voices, thoughts, and images). Sixteen participants were interviewed on completion of a mindfulness group program. Grounded theory methodology was used to generate a theory of the core psychological process, using a systematically applied set of methods linking analysis with data collection. The resulting theory describes the experience of relating differently to psychosis through a three-stage process: centering in awareness of psychosis; allowing voices, thoughts, and images to come and go without reacting or struggling; and reclaiming power through acceptance of psychosis and the self. The conceptual and clinical applications of the theory and its limits are discussed.

  6. Recognition-by-Components: A Theory of Human Image Understanding.

    ERIC Educational Resources Information Center

    Biederman, Irving

    1987-01-01

    The theory proposed (recognition-by-components) hypothesizes the perceptual recognition of objects to be a process in which the image of the input is segmented at regions of deep concavity into an arrangement of simple geometric components. Experiments on the perception of briefly presented pictures support the theory. (Author/LMO)

  7. Information theoretic analysis of linear shift-invariant edge-detection operators

    NASA Astrophysics Data System (ADS)

    Jiang, Bo; Rahman, Zia-ur

    2012-06-01

    Generally, the designs of digital image processing algorithms and image gathering devices remain separate. Consequently, the performance of digital image processing algorithms is evaluated without taking into account the influence of the image gathering process. However, experiments show that the image gathering process has a profound impact on the performance of digital image processing and on the quality of the resulting images. Huck et al. proposed a definitive theoretical analysis of visual communication channels, in which the different parts, such as image gathering, processing, and display, are assessed in an integrated manner using Shannon's information theory. We perform an end-to-end information-theoretic system analysis to assess linear shift-invariant edge-detection algorithms. We evaluate the performance of the different algorithms as a function of the characteristics of the scene and of the parameters, such as sampling and additive noise, that define the image gathering system. An edge-detection algorithm is regarded as having high performance only if the information rate from the scene to the edge image approaches the maximum possible. This goal can be achieved only by jointly optimizing all processes. Our information-theoretic assessment provides a new tool that allows us to compare different linear shift-invariant edge detectors in a common environment.
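
    The information rate used as the figure of merit here cannot be computed from the abstract alone. As a minimal illustration of the underlying idea (an assumption, not the authors' method), the sketch below estimates the mutual information between a synthetic scene and the edge image produced by a Sobel operator, a common linear shift-invariant edge detector, from a joint histogram.

      import numpy as np
      from scipy import ndimage

      def mutual_information(a, b, bins=32):
          """Histogram-based estimate of the mutual information I(A;B), in bits."""
          joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
          p = joint / joint.sum()
          px = p.sum(axis=1, keepdims=True)
          py = p.sum(axis=0, keepdims=True)
          nz = p > 0
          return float(np.sum(p[nz] * np.log2(p[nz] / (px @ py)[nz])))

      rng = np.random.default_rng(0)
      scene = ndimage.gaussian_filter(rng.standard_normal((256, 256)), 3)  # synthetic scene
      acquired = scene + 0.05 * rng.standard_normal(scene.shape)           # image-gathering noise
      edges = np.hypot(ndimage.sobel(acquired, axis=0), ndimage.sobel(acquired, axis=1))
      print("estimated I(scene; edge image):", mutual_information(scene, edges), "bits")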

  8. Introduction to computer image processing

    NASA Technical Reports Server (NTRS)

    Moik, J. G.

    1973-01-01

    Theoretical backgrounds and digital techniques for a class of image processing problems are presented. Image formation in the context of linear system theory, image evaluation, noise characteristics, and mathematical operations on images and their implementation are discussed. Various techniques for image restoration and image enhancement are presented. Methods for object extraction and the problem of pictorial pattern recognition and classification are discussed.

  9. Image-plane processing of visual information

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Fales, C. L.; Park, S. K.; Samms, R. W.

    1984-01-01

    Shannon's theory of information is used to optimize the optical design of sensor-array imaging systems which use neighborhood image-plane signal processing for enhancing edges and compressing dynamic range during image formation. The resultant edge-enhancement, or band-pass-filter, response is found to be very similar to that of human vision. Comparisons of traits in human vision with results from information theory suggest that: (1) Image-plane processing, like preprocessing in human vision, can improve visual information acquisition for pattern recognition when resolving power, sensitivity, and dynamic range are constrained. Improvements include reduced sensitivity to changes in light levels, reduced signal dynamic range, reduced data transmission and processing, and reduced aliasing and photosensor noise degradation. (2) Information content can be an appropriate figure of merit for optimizing the optical design of imaging systems when visual information is acquired for pattern recognition. The design trade-offs involve spatial response, sensitivity, and sampling interval.
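
    The abstract does not give the form of the edge-enhancing, band-pass response. As a hedged stand-in, a difference-of-Gaussians (center minus surround) filter produces a qualitatively similar band-pass behaviour and illustrates the trade-off between fine detail and noise averaging.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def dog_bandpass(image, sigma_center=1.0, sigma_surround=3.0):
          """Center-surround (difference-of-Gaussians) band-pass filter: it
          suppresses both the mean signal level and fine-grained noise while
          enhancing edges, loosely mimicking retinal preprocessing."""
          center = gaussian_filter(image.astype(float), sigma_center)
          surround = gaussian_filter(image.astype(float), sigma_surround)
          return center - surround

      # A step edge produces the characteristic biphasic (Mach-band-like) response.
      step = np.tile(np.concatenate([np.zeros(32), np.ones(32)]), (64, 1))
      response = dog_bandpass(step)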

  10. Designing a Virtual Item Bank Based on the Techniques of Image Processing

    ERIC Educational Resources Information Center

    Liao, Wen-Wei; Ho, Rong-Guey

    2011-01-01

    One of the major weaknesses of the item exposure rates of figural items in Intelligence Quotient (IQ) tests lies in their inaccuracy. In this study, a new approach is proposed and a useful test tool known as the Virtual Item Bank (VIB) is introduced. The VIB combines Automatic Item Generation theory and image processing theory with the concepts of…

  11. Position Estimation Using Image Derivative

    NASA Technical Reports Server (NTRS)

    Mortari, Daniele; deDilectis, Francesco; Zanetti, Renato

    2015-01-01

    This paper describes an image processing algorithm for Moon and/or Earth images. The theory presented is based on the fact that hard edge points of the Moon are characterized by the highest values of the image derivative. Outliers are eliminated by two sequential filters. The Moon's center and radius are then estimated by nonlinear least squares using circular sigmoid functions. The proposed image processing algorithm has been applied and validated using real and synthetic Moon images.
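
    The paper fits circular sigmoid functions by nonlinear least squares; the sketch below is a simplified stand-in that keeps only the two ingredients named in the abstract, selection of the strongest image-derivative pixels as limb candidates and a least-squares circle fit (here the simpler algebraic Kasa fit), with the sigmoid model and outlier filters omitted.

      import numpy as np
      from scipy import ndimage

      def fit_circle(points):
          """Algebraic (Kasa) least-squares circle fit: solve
          x^2 + y^2 + D*x + E*y + F = 0 for D, E, F."""
          x, y = points[:, 0], points[:, 1]
          A = np.column_stack([x, y, np.ones_like(x)])
          b = -(x**2 + y**2)
          D, E, F = np.linalg.lstsq(A, b, rcond=None)[0]
          cx, cy = -D / 2.0, -E / 2.0
          return cx, cy, np.sqrt(cx**2 + cy**2 - F)

      def moon_center_radius(image, keep_fraction=0.01):
          """Keep the strongest image-derivative (gradient) pixels as limb
          candidates, then fit a circle through them."""
          img = np.asarray(image, dtype=float)
          grad = np.hypot(ndimage.sobel(img, axis=0), ndimage.sobel(img, axis=1))
          threshold = np.quantile(grad, 1.0 - keep_fraction)
          ys, xs = np.nonzero(grad >= threshold)
          return fit_circle(np.column_stack([xs, ys]).astype(float))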

  12. Prospects for Image Restoration

    NASA Astrophysics Data System (ADS)

    Hunt, B. R.

    Image restoration is the theory and practice of processing an image to correct it for distortions caused by the image formation process. The first efforts in image restoration appeared more than 25 years ago. In this article we review the more recent trends in image restoration and discuss the main directions that are expected to influence the continued evolution of this technology.

  13. Relationships between digital signal processing and control and estimation theory

    NASA Technical Reports Server (NTRS)

    Willsky, A. S.

    1978-01-01

    Research directions in the fields of digital signal processing and modern control and estimation theory are discussed. Stability theory, linear prediction and parameter identification, system synthesis and implementation, two-dimensional filtering, decentralized control and estimation, and image processing are considered in order to uncover some of the basic similarities and differences in the goals, techniques, and philosophy of the disciplines.

  14. Multiresponse imaging system design for improved resolution

    NASA Technical Reports Server (NTRS)

    Alter-Gartenberg, Rachel; Fales, Carl L.; Huck, Friedrich O.; Rahman, Zia-Ur; Reichenbach, Stephen E.

    1991-01-01

    Multiresponse imaging is a process that acquires A images, each with a different optical response, and reassembles them into a single image with an improved resolution that can approach 1/√A times the photodetector-array sampling lattice. Our goals are to optimize the performance of this process in terms of the resolution and fidelity of the restored image and to assess the amount of information required to do so. The theoretical approach is based on the extension of both image restoration and rate-distortion theories from their traditional realm of signal processing to image processing, which includes image gathering and display.

  15. Anterior EEG asymmetries and opponent process theory.

    PubMed

    Kline, John P; Blackhart, Ginette C; Williams, William C

    2007-03-01

    The opponent process theory of emotion [Solomon, R.L., and Corbit, J.D. (1974). An opponent-process theory of motivation: I. Temporal dynamics of affect. Psychological Review, 81, 119-143.] predicts a temporary reversal of emotional valence during the recovery from emotional stimulation. We hypothesized that this affective contrast would be apparent in asymmetrical activity patterns in the frontal lobes, and would be more apparent for left frontally active individuals. The present study tested this prediction by examining EEG asymmetries during and after blocked presentations of aversive pictures selected from the International Affective Picture System (IAPS). 12 neutral images, 12 aversive images, and 24 neutral images were presented in blocks. Participants who were right frontally active at baseline did not show changes in EEG asymmetry while viewing aversive slides or after cessation. Participants left frontally active at baseline, however, exhibited greater relative left frontal activity after aversive stimulation than before stimulation. Asymmetrical activity patterns in the frontal lobes may relate to affect regulatory processes, including contrasting opponent after-reactions to aversive stimuli.

  16. Digital signal processing and control and estimation theory -- Points of tangency, area of intersection, and parallel directions

    NASA Technical Reports Server (NTRS)

    Willsky, A. S.

    1976-01-01

    A number of current research directions in the fields of digital signal processing and modern control and estimation theory were studied. Topics such as stability theory, linear prediction and parameter identification, system analysis and implementation, two-dimensional filtering, decentralized control and estimation, image processing, and nonlinear system theory were examined in order to uncover some of the basic similarities and differences in the goals, techniques, and philosophy of the two disciplines. An extensive bibliography is included.

  17. The Effect of Mental Rotation on Surgical Pathological Diagnosis.

    PubMed

    Park, Heejung; Kim, Hyun Soo; Cha, Yoon Jin; Choi, Junjeong; Minn, Yangki; Kim, Kyung Sik; Kim, Se Hoon

    2018-05-01

    Pathological diagnosis involves very delicate and complex processing conducted by a pathologist. The recognition of false patterns might be an important cause of misdiagnosis in the field of surgical pathology. In this study, we evaluated the influence of visual and cognitive bias in surgical pathologic diagnosis, focusing on the influence of "mental rotation." We designed three sets of the same images of biopsied uterine cervix specimens (original, left-to-right mirror images, and 180-degree rotated images), and recruited 32 pathologists to diagnose the items in the three sets individually. First, the items were found to be adequate for analysis by classical test theory, generalizability theory, and item response theory. The results showed no statistically significant differences in difficulty, discrimination indices, or response duration time between the image sets. Mental rotation did not influence the pathologists' diagnoses in practice. Interestingly, outliers were more frequent in the rotated image sets, suggesting that the mental rotation process may influence the pathological diagnoses of a few individual pathologists. © Copyright: Yonsei University College of Medicine 2018.

  18. Graph theory for feature extraction and classification: a migraine pathology case study.

    PubMed

    Jorge-Hernandez, Fernando; Garcia Chimeno, Yolanda; Garcia-Zapirain, Begonya; Cabrera Zubizarreta, Alberto; Gomez Beldarrain, Maria Angeles; Fernandez-Ruanova, Begonya

    2014-01-01

    Graph theory is widely used as a representational form and characterization of brain connectivity networks, as is machine learning for classifying groups depending on the features extracted from images. Many of these studies use different techniques, such as preprocessing, correlations, features, or algorithms. This paper proposes an automatic tool to perform a standard process using images from a Magnetic Resonance Imaging (MRI) machine. The process includes pre-processing, building the graph per subject with different correlations and atlases, extraction of features found relevant in the literature, and finally providing a set of machine learning algorithms that can produce analyzable results for physicians or specialists. In order to verify the process, a set of images from prescription drug abusers and patients with migraine has been used. In this way, the proper functioning of the tool has been demonstrated, providing success rates of 87% and 92%, depending on the classifier used.
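
    The specific atlas, correlation measures, features, and classifiers are not given in the abstract; the sketch below is a generic, hypothetical stand-in showing how graph-theoretic features can be extracted from per-subject correlation matrices and passed to a classifier (the data here are random, so the cross-validation score is meaningless).

      import numpy as np
      import networkx as nx
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_score

      def graph_features(corr, threshold=0.5):
          """Threshold a region-by-region correlation matrix into a binary
          connectivity graph and extract a few simple graph-theoretic features."""
          adjacency = (np.abs(corr) > threshold).astype(int)
          np.fill_diagonal(adjacency, 0)
          g = nx.from_numpy_array(adjacency)
          degrees = np.array([d for _, d in g.degree()])
          return np.array([degrees.mean(), nx.average_clustering(g), nx.density(g)])

      # Hypothetical data: one 90-region correlation matrix per subject, binary labels.
      rng = np.random.default_rng(1)
      subjects = [np.corrcoef(rng.standard_normal((90, 120))) for _ in range(40)]
      labels = rng.integers(0, 2, size=40)
      X = np.vstack([graph_features(c) for c in subjects])
      scores = cross_val_score(RandomForestClassifier(random_state=0), X, labels, cv=5)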

  19. Symmetrical group theory for mathematical complexity reduction of digital holograms

    NASA Astrophysics Data System (ADS)

    Perez-Ramirez, A.; Guerrero-Juk, J.; Sanchez-Lara, R.; Perez-Ramirez, M.; Rodriguez-Blanco, M. A.; May-Alarcon, M.

    2017-10-01

    This work presents the use of mathematical group theory through an algorithm to reduce the multiplicative computational complexity in the process of creating digital holograms. An object is considered as a set of point sources using mathematical symmetry properties of both the core in the Fresnel integral and the image, where the image is modeled using group theory. This algorithm has multiplicative complexity equal to zero and an additive complexity of (k - 1) × N for the case of sparse matrices and binary images, where k is the number of pixels other than zero and N is the total number of points in the image.

  20. Information theoretic analysis of edge detection in visual communication

    NASA Astrophysics Data System (ADS)

    Jiang, Bo; Rahman, Zia-ur

    2010-08-01

    Generally, the designs of digital image processing algorithms and image gathering devices remain separate. Consequently, the performance of digital image processing algorithms is evaluated without taking into account the artifacts introduced by the image gathering process. However, experiments show that the image gathering process profoundly impacts the performance of digital image processing and the quality of the resulting images. Huck et al. proposed a definitive theoretical analysis of visual communication channels, in which the different parts, such as image gathering, processing, and display, are assessed in an integrated manner using Shannon's information theory. In this paper, we perform an end-to-end information-theoretic system analysis to assess edge detection methods. We evaluate the performance of the different algorithms as a function of the characteristics of the scene and of the parameters, such as sampling and additive noise, that define the image gathering system. An edge detection algorithm is regarded as having high performance only if the information rate from the scene to the edge image approaches the maximum possible. This goal can be achieved only by jointly optimizing all processes. People generally use subjective judgment to compare different edge detection methods; there is no common tool that can be used to evaluate the performance of the different algorithms and to guide the selection of the best algorithm for a given system or scene. Our information-theoretic assessment provides this new tool, which allows us to compare the different edge detection operators in a common environment.

  1. Advanced technology development for image gathering, coding, and processing

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.

    1990-01-01

    Three overlapping areas of research activities are presented: (1) Information theory and optimal filtering are extended to visual information acquisition and processing. The goal is to provide a comprehensive methodology for quantitatively assessing the end-to-end performance of image gathering, coding, and processing. (2) Focal-plane processing techniques and technology are developed to combine effectively image gathering with coding. The emphasis is on low-level vision processing akin to the retinal processing in human vision. (3) A breadboard adaptive image-coding system is being assembled. This system will be used to develop and evaluate a number of advanced image-coding technologies and techniques as well as research the concept of adaptive image coding.

  2. The 6th International Conference on Computer Science and Computational Mathematics (ICCSCM 2017)

    NASA Astrophysics Data System (ADS)

    2017-09-01

    The ICCSCM 2017 (The 6th International Conference on Computer Science and Computational Mathematics) aimed to provide a platform to discuss computer science and mathematics-related issues, including Algebraic Geometry, Algebraic Topology, Approximation Theory, Calculus of Variations, Category Theory; Homological Algebra, Coding Theory, Combinatorics, Control Theory, Cryptology, Geometry, Difference and Functional Equations, Discrete Mathematics, Dynamical Systems and Ergodic Theory, Field Theory and Polynomials, Fluid Mechanics and Solid Mechanics, Fourier Analysis, Functional Analysis, Functions of a Complex Variable, Fuzzy Mathematics, Game Theory, General Algebraic Systems, Graph Theory, Group Theory and Generalizations, Image Processing, Signal Processing and Tomography, Information Fusion, Integral Equations, Lattices, Algebraic Structures, Linear and Multilinear Algebra; Matrix Theory, Mathematical Biology and Other Natural Sciences, Mathematical Economics and Financial Mathematics, Mathematical Physics, Measure Theory and Integration, Neutrosophic Mathematics, Number Theory, Numerical Analysis, Operations Research, Optimization, Operator Theory, Ordinary and Partial Differential Equations, Potential Theory, Real Functions, Rings and Algebras, Statistical Mechanics, Structure of Matter, Topological Groups, Wavelets and Wavelet Transforms, 3G/4G Network Evolutions, Ad-Hoc, Mobile, Wireless Networks and Mobile Computing, Agent Computing & Multi-Agent Systems, all topics related to Image/Signal Processing, any topics related to Computer Networks, any topics related to ISO SC-27 and SC-17 standards, any topics related to PKI (Public Key Infrastructures), Artificial Intelligence (A.I.) & Pattern/Image Recognition, Authentication/Authorization Issues, Biometric Authentication and Algorithms, CDMA/GSM Communication Protocols, Combinatorics, Graph Theory, and Analysis of Algorithms, Cryptography and Foundations of Computer Security, Data Base (D.B.) Management & Information Retrieval, Data Mining, Web Image Mining, & Applications, Defining Spectrum Rights and Open Spectrum Solutions, E-Commerce, Ubiquitous, RFID, Applications, Fingerprint/Hand/Biometric Recognition and Technologies, Foundations of High-Performance Computing, IC-card Security, OTP, and Key Management Issues, IDS/Firewall, Anti-Spam Mail, Anti-Virus Issues, Mobile Computing for E-Commerce, Network Security Applications, Neural Networks and Biomedical Simulations, Quality of Service and Communication Protocols, Quantum Computing, Coding, and Error Control, Satellite and Optical Communication Systems, Theory of Parallel Processing and Distributed Computing, Virtual Visions, 3-D Object Retrieval, & Virtual Simulations, Wireless Access Security, etc. The success of ICCSCM 2017 is reflected in the papers received from authors in several countries around the world, which allowed a highly multinational and multicultural exchange of ideas and experience. The accepted papers of ICCSCM 2017 are published in this book. Please check http://www.iccscm.com for further news. A conference such as ICCSCM 2017 can only become successful through a team effort, so we want to thank the International Technical Committee and the Reviewers for their efforts in the review process as well as their valuable advice. We are thankful to all those who contributed to the success of ICCSCM 2017. The Secretary

  3. Developing the Image and Public Reputation of Universities: The Managerial Process.

    ERIC Educational Resources Information Center

    Davies, John L.; Melchiori, Gerlinda S.

    1982-01-01

    Managerial processes used in developing programs to improve an institution's public image are outlined, drawing on both theory and experience in college administration and public relations. Eight case studies provide illustrations. A five-stage managerial plan is presented. (MSE)

  4. Building an exceptional imaging management team: from theory to practice.

    PubMed

    Hogan, Laurie

    2010-01-01

    Building a strong, cohesive, and talented managerial team is a critical endeavor for imaging administrators, as the job will be enhanced if supported by a group of high-performing, well-developed managers. For the purposes of this article, leadership and management are discussed as two separate, yet equally important, components of an imaging administrator's role. The difference between the two is defined as: leadership relates to people, management relates to process. There are abundant leadership and management theories that can help imaging administrators develop managers and ultimately build a better team. Administrators who apply these theories in practical and meaningful ways will improve their teams' leadership and management aptitude. Imaging administrators will find it rewarding to coach and develop managers and witness transformations that result from improved leadership and management abilities.

  5. Parallel and serial grouping of image elements in visual perception.

    PubMed

    Houtkamp, Roos; Roelfsema, Pieter R

    2010-12-01

    The visual system groups image elements that belong to an object and segregates them from other objects and the background. Important cues for this grouping process are the Gestalt criteria, and most theories propose that these are applied in parallel across the visual scene. Here, we find that Gestalt grouping can indeed occur in parallel in some situations, but we demonstrate that there are also situations where Gestalt grouping becomes serial. We observe substantial time delays when image elements have to be grouped indirectly through a chain of local groupings. We call this chaining process incremental grouping and demonstrate that it can occur for only a single object at a time. We suggest that incremental grouping requires the gradual spread of object-based attention so that eventually all the object's parts become grouped explicitly by an attentional labeling process. Our findings inspire a new incremental grouping theory that relates the parallel, local grouping process to feedforward processing and the serial, incremental grouping process to recurrent processing in the visual cortex.

  6. The Art of Multi-Image.

    ERIC Educational Resources Information Center

    Gordon, Roger L., Ed.

    This guide to multi-image program production for practitioners describes the process from the beginning stages through final presentation, examines historical perspectives, theory, and research in multi-image, and provides examples of successful utilization. Ten chapters focus on the following topics: (1) definition of multi-image field and…

  7. Rhetorical Approaches to Crisis Communication: The Research, Development, and Validation of an Image Repair Situational Theory for Educational Leaders

    ERIC Educational Resources Information Center

    Vogelaar, Robert J.

    2005-01-01

    In this project a product to aid educational leaders in the process of communicating in crisis situations is presented. The product was created and received a formative evaluation using an educational research and development methodology. Ultimately, an administrative training course that utilized an Image Repair Situational Theory was developed.…

  8. ART AND SCIENCE OF IMAGE MAPS.

    USGS Publications Warehouse

    Kidwell, Richard D.; McSweeney, Joseph A.

    1985-01-01

    The visual image of reflected light is influenced by the complex interplay of human color discrimination, spatial relationships, surface texture, and the spectral purity of light, dyes, and pigments. Scientific theories of image processing may not always achieve acceptable results because the variety of factors involved, some psychological, is in part unpredictable. Tonal relationships that affect digital image processing, and the transfer functions used to transform from the continuous-tone source image to a lithographic image, may be interpreted for insight into where art and science fuse in the production process. The application of art and science in image map production at the U.S. Geological Survey is illustrated and discussed.

  9. Spatial vision processes: From the optical image to the symbolic structures of contour information

    NASA Technical Reports Server (NTRS)

    Jobson, Daniel J.

    1988-01-01

    The significance of machine and natural vision is discussed together with the need for a general approach to image acquisition and processing aimed at recognition. An exploratory scheme is proposed which encompasses the definition of spatial primitives, intrinsic image properties and sampling, 2-D edge detection at the smallest scale, the construction of spatial primitives from edges, and the isolation of contour information from textural information. Concepts drawn from or suggested by natural vision at both perceptual and physiological levels are relied upon heavily to guide the development of the overall scheme. The scheme is intended to provide a larger context in which to place the emerging technology of detector array focal-plane processors. The approach differs from many recent efforts in edge detection and image coding by emphasizing smallest scale edge detection as a foundation for multi-scale symbolic processing while diminishing somewhat the importance of image convolutions with multi-scale edge operators. Cursory treatments of information theory illustrate that the direct application of this theory to structural information in images could not be realized.

  10. Information Acquisition, Analysis and Integration

    DTIC Science & Technology

    2016-08-03

    Keywords: sensing and processing, theory, applications, signal processing, image and video processing, machine learning, technology transfer. Reported accomplishments include solving old problems such as image and video deblurring with new approaches, and work on deep learning with hierarchical convolution factor analysis (Polatkan, Sapiro, Blei, Dunson, and Carin, IEEE).

  11. Quantum Image Processing and Its Application to Edge Detection: Theory and Experiment

    NASA Astrophysics Data System (ADS)

    Yao, Xi-Wei; Wang, Hengyan; Liao, Zeyang; Chen, Ming-Cheng; Pan, Jian; Li, Jun; Zhang, Kechao; Lin, Xingcheng; Wang, Zhehui; Luo, Zhihuang; Zheng, Wenqiang; Li, Jianzhong; Zhao, Meisheng; Peng, Xinhua; Suter, Dieter

    2017-07-01

    Processing of digital images is continuously gaining in volume and relevance, with concomitant demands on data storage, transmission, and processing power. Encoding the image information in quantum-mechanical systems instead of classical ones and replacing classical with quantum information processing may alleviate some of these challenges. Here we demonstrate the framework of quantum image processing, in which a pure quantum state encodes the image information: we encode the pixel values in the probability amplitudes and the pixel positions in the computational basis states. Our quantum image representation reduces the required number of qubits compared to existing implementations, and we present image processing algorithms that provide exponential speed-up over their classical counterparts. For the commonly used task of detecting the edge of an image, we propose and implement a quantum algorithm that completes the task with only one single-qubit operation, independent of the size of the image. This demonstrates the potential of quantum image processing for highly efficient image and video processing in the big data era.
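
    The encoding described, pixel values in probability amplitudes and pixel positions in computational basis states, can be illustrated with a classical simulation; the sketch below (a numpy state vector, not the NMR experiment reported in the paper) builds such an amplitude-encoded state.

      import numpy as np

      def amplitude_encode(image):
          """Encode an image as a pure-state vector: each pixel position labels a
          computational basis state and the normalized pixel value is its amplitude."""
          values = np.asarray(image, dtype=float).ravel()
          n_qubits = int(np.ceil(np.log2(values.size)))
          state = np.zeros(2**n_qubits)
          state[:values.size] = values
          state /= np.linalg.norm(state)       # unit norm, as required of a quantum state
          return state, n_qubits

      image = np.arange(16, dtype=float).reshape(4, 4)     # toy 4x4 image
      state, n_qubits = amplitude_encode(image)            # 16 pixels -> 4 qubits
      print(n_qubits, np.isclose(np.sum(state**2), 1.0))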

  12. Assessment of visual communication by information theory

    NASA Astrophysics Data System (ADS)

    Huck, Friedrich O.; Fales, Carl L.

    1994-01-01

    This assessment of visual communication integrates the optical design of the image-gathering device with the digital processing for image coding and restoration. Results show that informationally optimized image gathering ordinarily can be relied upon to maximize the information efficiency of decorrelated data and the visual quality of optimally restored images.

  13. Parallel-hierarchical processing and classification of laser beam profile images based on the GPU-oriented architecture

    NASA Astrophysics Data System (ADS)

    Yarovyi, Andrii A.; Timchenko, Leonid I.; Kozhemiako, Volodymyr P.; Kokriatskaia, Nataliya I.; Hamdi, Rami R.; Savchuk, Tamara O.; Kulyk, Oleksandr O.; Surtel, Wojciech; Amirgaliyev, Yedilkhan; Kashaganova, Gulzhan

    2017-08-01

    The paper deals with the problem of the insufficient productivity of existing computing means for large-image processing, which do not meet the modern requirements posed by resource-intensive computing tasks in laser beam profiling. The research concentrated on one of the profiling problems, namely, real-time processing of spot images of the laser beam profile. Development of a theory of parallel-hierarchical transformation made it possible to produce models for high-performance parallel-hierarchical processes, as well as algorithms and software for their implementation based on a GPU-oriented architecture using GPGPU technologies. The analyzed performance of the suggested computerized tools for processing and classification of laser beam profile images allows real-time processing of dynamic images of various sizes.

  14. Hailstone classifier based on Rough Set Theory

    NASA Astrophysics Data System (ADS)

    Wan, Huisong; Jiang, Shuming; Wei, Zhiqiang; Li, Jian; Li, Fengjiao

    2017-09-01

    The Rough Set Theory was used to construct a hailstone classifier. First, a database of radar image features was constructed: the base data reflected by the Doppler radar were transformed into a viewable bitmap format, and then, through image processing, color, texture, shape, and other dimensional features were extracted and saved as the feature database to provide data support for the follow-up work. Second, using the Rough Set Theory, a hailstone classifier was built to achieve automatic classification of the hailstone samples.

  15. Statistical lamb wave localization based on extreme value theory

    NASA Astrophysics Data System (ADS)

    Harley, Joel B.

    2018-04-01

    Guided wave localization methods based on delay-and-sum imaging, matched field processing, and other techniques have been designed and researched to create images that locate and describe structural damage. The maximum value of these images typically represents an estimated damage location. Yet it is often unclear whether this maximum value, or any other value in the image, is a statistically significant indicator of damage. Furthermore, there are currently few, if any, approaches to assess the statistical significance of guided wave localization images. As a result, we present statistical delay-and-sum and statistical matched field processing localization methods to create statistically significant images of damage. Our framework uses constant false alarm rate statistics and extreme value theory to detect damage with little prior information. We demonstrate our methods with in situ guided wave data from an aluminum plate to detect two 0.75 cm diameter holes. Our results show an expected improvement in statistical significance as the number of sensors increases. With seventeen sensors, both methods successfully detect damage with statistical significance.
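
    The statistical (constant-false-alarm-rate and extreme value) layer is the paper's contribution and is not reproduced here; the sketch below shows only the underlying delay-and-sum image formation for pitch-catch sensor pairs, assuming a known constant group velocity c and baseline-subtracted signals.

      import numpy as np

      def delay_and_sum(signals, pairs, sensors, grid_x, grid_y, c, fs):
          """Delay-and-sum damage image: for every pixel, sum each transmit-receive
          pair's signal at the expected transmitter-pixel-receiver travel time.
          signals[k]: residual waveform of pair k; sensors: (n, 2) coordinates."""
          image = np.zeros((grid_y.size, grid_x.size))
          for k, (tx, rx) in enumerate(pairs):
              envelope = np.abs(signals[k])            # crude envelope; hilbert() is typical
              for iy, y in enumerate(grid_y):
                  for ix, x in enumerate(grid_x):
                      dist = (np.hypot(x - sensors[tx, 0], y - sensors[tx, 1]) +
                              np.hypot(x - sensors[rx, 0], y - sensors[rx, 1]))
                      idx = int(round(dist / c * fs))  # travel time -> sample index
                      if idx < envelope.size:
                          image[iy, ix] += envelope[idx]
          return image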

  16. On the assessment of visual communication by information theory

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.; Fales, Carl L.

    1993-01-01

    This assessment of visual communication integrates the optical design of the image-gathering device with the digital processing for image coding and restoration. Results show that informationally optimized image gathering ordinarily can be relied upon to maximize the information efficiency of decorrelated data and the visual quality of optimally restored images.

  17. Infrared small target detection based on Danger Theory

    NASA Astrophysics Data System (ADS)

    Lan, Jinhui; Yang, Xiao

    2009-11-01

    To solve the problem that traditional methods cannot detect small objects whose local SNR is less than 2 in IR images, a Danger Theory-based model to detect infrared small targets is presented in this paper. First, by analogy with immunology, definitions are given for such terms as danger signal, antigen, APC, and antibody, and the matching rule between antigen and antibody is improved. Prior to training the detection model and detecting the targets, the IR images are processed with an adaptive smoothing filter to decrease the stochastic noise. Then, in the training process, the deletion rule, generation rule, crossover rule, and mutation rule are established after a large number of experiments in order to realize immediate convergence and obtain good antibodies. The Danger Theory-based model is built after the training process, and this model can detect targets whose local SNR is only 1.5.

  18. Central and Divided Visual Field Presentation of Emotional Images to Measure Hemispheric Differences in Motivated Attention.

    PubMed

    O'Hare, Aminda J; Atchley, Ruth Ann; Young, Keith M

    2017-11-16

    Two dominant theories on lateralized processing of emotional information exist in the literature. One theory posits that unpleasant emotions are processed by right frontal regions, while pleasant emotions are processed by left frontal regions. The other theory posits that the right hemisphere is more specialized for the processing of emotional information overall, particularly in posterior regions. Assessing the different roles of the cerebral hemispheres in processing emotional information can be difficult without the use of neuroimaging methodologies, which are not accessible or affordable to all scientists. Divided visual field presentation of stimuli can allow for the investigation of lateralized processing of information without the use of neuroimaging technology. This study compared central versus divided visual field presentations of emotional images to assess differences in motivated attention between the two hemispheres. The late positive potential (LPP) was recorded using electroencephalography (EEG) and event-related potentials (ERPs) methodologies to assess motivated attention. Future work will pair this paradigm with a more active behavioral task to explore the behavioral impacts on the attentional differences found.

  19. Development and validation of a short-lag spatial coherence theory for photoacoustic imaging

    NASA Astrophysics Data System (ADS)

    Graham, Michelle T.; Lediju Bell, Muyinatu A.

    2018-02-01

    We previously derived a spatial coherence theory for studying the theoretical properties of Short-Lag Spatial Coherence (SLSC) beamforming applied to photoacoustic images. In this paper, our newly derived theoretical equation is evaluated to generate SLSC images of a point target and a 1.2 mm diameter target, along with the corresponding lateral profiles. We compared SLSC images simulated solely from our theory with SLSC images created by beamforming acoustic channel data from k-Wave simulations of a 1.2 mm diameter disc target. This process was repeated for a point target, and the full width at half the maximum signal amplitude was measured to estimate the resolution of each imaging system. Resolution as a function of lag was comparable for the first 10% of the receive aperture (i.e., the short-lag region), after which the resolution measurements diverged by a maximum of 1 mm between the two types of simulated images. These results indicate the potential for both simulation methods to be utilized as independent resources to study coherence-based photoacoustic beamformers when imaging point-like targets.
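
    The theoretical coherence equation derived in the paper is not reproduced here; for orientation, the sketch below computes the standard short-lag spatial coherence value for a single pixel from focused (delay-corrected) channel data, summing the normalized spatial correlation over the first few lags.

      import numpy as np

      def slsc_pixel(channel_data, max_lag):
          """Short-lag spatial coherence for one pixel.
          channel_data: (n_elements, n_samples) focused RF data over a short
          axial kernel centred on the pixel; max_lag: the short-lag cutoff M."""
          n_elements = channel_data.shape[0]
          value = 0.0
          for m in range(1, max_lag + 1):
              corr = 0.0
              for i in range(n_elements - m):
                  a, b = channel_data[i], channel_data[i + m]
                  corr += np.dot(a, b) / np.sqrt(np.dot(a, a) * np.dot(b, b))
              value += corr / (n_elements - m)   # mean normalized correlation at lag m
          return value                            # summed over the short-lag region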

  20. Oximetry using multispectral imaging: theory and application

    NASA Astrophysics Data System (ADS)

    MacKenzie, Lewis E.; Harvey, Andrew R.

    2018-06-01

    Multispectral imaging (MSI) is a technique for measurement of blood oxygen saturation in vivo that can be applied using various imaging modalities to provide new insights into physiology and disease development. This tutorial aims to provide a thorough introduction to the theory and application of MSI oximetry for researchers new to the field, whilst also providing detailed information for more experienced researchers. The optical theory underlying two-wavelength oximetry, three-wavelength oximetry, pulse oximetry, and multispectral oximetry algorithms are described in detail. The varied challenges of applying MSI oximetry to in vivo applications are outlined and discussed, covering: the optical properties of blood and tissue, optical paths in blood vessels, tissue auto-fluorescence, oxygen diffusion, and common oximetry artefacts. Essential image processing techniques for MSI are discussed, in particular, image acquisition, image registration strategies, and blood vessel line profile fitting. Calibration and validation strategies for MSI are discussed, including comparison techniques, physiological interventions, and phantoms. The optical principles and unique imaging capabilities of various cutting-edge MSI oximetry techniques are discussed, including photoacoustic imaging, spectroscopic optical coherence tomography, and snapshot MSI.
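
    The tutorial's oximetry algorithms are not reproduced here; as a grounded reference point, the sketch below implements the textbook two-wavelength Beer-Lambert estimate, in which the ratio of optical densities at two wavelengths is solved for oxygen saturation. The extinction-coefficient arguments are placeholders that must come from published Hb/HbO2 spectra.

      def two_wavelength_so2(od1, od2, eps_hb1, eps_hbo1, eps_hb2, eps_hbo2):
          """Two-wavelength oximetry under the Beer-Lambert model.
          OD(lambda) is proportional to SO2*eps_HbO2(lambda) + (1 - SO2)*eps_Hb(lambda),
          so the ratio R = OD1/OD2 yields SO2 in closed form (the path length cancels)."""
          r = od1 / od2
          return (eps_hb1 - r * eps_hb2) / (r * (eps_hbo2 - eps_hb2) - (eps_hbo1 - eps_hb1))

    Scattering, wavelength-dependent path lengths, and auto-fluorescence are exactly the effects that the tutorial's more elaborate algorithms and calibration strategies are designed to handle.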

  1. Formal Foundations for the Specification of Software Architecture.

    DTIC Science & Technology

    1995-03-01

    Relationships between architecture theories (including formal architecture specification with a KWIC case study, Feature-Oriented Domain Analysis (FODA), and constraint-based architectures) were investigated. A feasibility analysis on an image processing application demonstrated that architecture theories…

  2. Laterality in Metaphor Processing: Lack of Evidence from Functional Magnetic Resonance Imaging for the Right Hemisphere Theory

    ERIC Educational Resources Information Center

    Rapp, Alexander M.; Leube, Dirk T.; Erb, Michael; Grodd, Wolfgang; Kircher, Tilo T. J.

    2007-01-01

    We investigated processing of metaphoric sentences using event-related functional magnetic resonance imaging (fMRI). Seventeen healthy subjects (6 female, 11 male) read 60 novel short German sentence pairs with either metaphoric or literal meaning and performed two different tasks: judging the metaphoric content and judging whether the sentence…

  3. Retinex enhancement of infrared images.

    PubMed

    Li, Ying; He, Renjie; Xu, Guizhi; Hou, Changzhi; Sun, Yunyan; Guo, Lei; Rao, Liyun; Yan, Weili

    2008-01-01

    With the ability to image the temperature distribution of the body, infrared imaging is promising for the diagnosis and prognosis of diseases. However, the poor quality of raw infrared images has limited such applications, and one of the essential problems is the low-contrast appearance of the imaged object. In this paper, image enhancement based on Retinex theory is studied, a process that automatically restores visual realism to images. The algorithms, including the Frankle-McCann algorithm, the McCann99 algorithm, the single-scale Retinex algorithm, the multi-scale Retinex algorithm, and the multi-scale Retinex algorithm with color restoration, are applied to the enhancement of infrared images. Entropy measurements along with visual inspection were compared, and the results show that the algorithms based on Retinex theory are able to enhance infrared images. Of the algorithms compared, MSRCR demonstrated the best performance.
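
    The single-scale and multi-scale Retinex algorithms referred to here have a standard form; the sketch below follows that common formulation (not necessarily the exact variants or parameters compared in the paper).

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def single_scale_retinex(image, sigma):
          """SSR: log of the image minus the log of a Gaussian-blurred
          illumination estimate, which boosts local contrast."""
          img = image.astype(float) + 1.0                    # avoid log(0)
          return np.log(img) - np.log(gaussian_filter(img, sigma))

      def multi_scale_retinex(image, sigmas=(15, 80, 250), weights=None):
          """MSR: a weighted sum of single-scale Retinex outputs at several scales."""
          weights = weights or [1.0 / len(sigmas)] * len(sigmas)
          return sum(w * single_scale_retinex(image, s) for w, s in zip(weights, sigmas))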

  4. V-Sipal - a Virtual Laboratory for Satellite Image Processing and Analysis

    NASA Astrophysics Data System (ADS)

    Buddhiraju, K. M.; Eeti, L.; Tiwari, K. K.

    2011-09-01

    In this paper a virtual laboratory for Satellite Image Processing and Analysis (v-SIPAL), being developed at the Indian Institute of Technology Bombay, is described. v-SIPAL comprises a set of experiments that are normally carried out by students learning digital processing and analysis of satellite images using commercial software. Currently, the experiments that are available on the server include Image Viewer, Image Contrast Enhancement, Image Smoothing, Edge Enhancement, Principal Component Transform, Texture Analysis by the Co-occurrence Matrix method, Image Indices, Color Coordinate Transforms, Fourier Analysis, Mathematical Morphology, Unsupervised Image Classification, Supervised Image Classification, and Accuracy Assessment. The virtual laboratory includes a theory module for each option of every experiment, a description of the procedure to perform each experiment, the menu to choose and perform the experiment, a module on interpretation of results when performed with a given image and pre-specified options, a bibliography, links to useful internet resources, and user feedback. The user can upload his/her own images for performing the experiments and can also reuse outputs of one experiment in another experiment where applicable. Some of the other experiments currently under development include georeferencing of images, data fusion, feature evaluation by divergence and J-M distance, image compression, wavelet image analysis, and change detection. Additions to the theory module include self-assessment quizzes, audio-video clips on selected concepts, and a discussion of the elements of visual image interpretation. v-SIPAL is at the stage of internal evaluation within IIT Bombay and will soon be open to selected educational institutions in India for evaluation.

  5. Image gathering, coding, and processing: End-to-end optimization for efficient and robust acquisition of visual information

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.; Fales, Carl L.

    1990-01-01

    Researchers are concerned with the end-to-end performance of image gathering, coding, and processing. The applications range from high-resolution television to vision-based robotics, wherever the resolution, efficiency and robustness of visual information acquisition and processing are critical. For the presentation at this workshop, it is convenient to divide research activities into the following two overlapping areas: The first is the development of focal-plane processing techniques and technology to effectively combine image gathering with coding, with an emphasis on low-level vision processing akin to the retinal processing in human vision. The approach includes the familiar Laplacian pyramid, the new intensity-dependent spatial summation, and parallel sensing/processing networks. Three-dimensional image gathering is attained by combining laser ranging with sensor-array imaging. The second is the rigorous extension of information theory and optimal filtering to visual information acquisition and processing. The goal is to provide a comprehensive methodology for quantitatively assessing the end-to-end performance of image gathering, coding, and processing.

  6. Phase-space evolution of x-ray coherence in phase-sensitive imaging.

    PubMed

    Wu, Xizeng; Liu, Hong

    2008-08-01

    X-ray coherence evolution in the imaging process plays a key role for x-ray phase-sensitive imaging. In this work we present a phase-space formulation for the phase-sensitive imaging. The theory is reformulated in terms of the cross-spectral density and associated Wigner distribution. The phase-space formulation enables an explicit and quantitative account of partial coherence effects on phase-sensitive imaging. The presented formulas for x-ray spectral density at the detector can be used for performing accurate phase retrieval and optimizing the phase-contrast visibility. The concept of phase-space shearing length derived from this phase-space formulation clarifies the spatial coherence requirement for phase-sensitive imaging with incoherent sources. The theory has been applied to x-ray Talbot interferometric imaging as well. The peak coherence condition derived reveals new insights into three-grating-based Talbot-interferometric imaging and gratings-based x-ray dark-field imaging.

  7. Intensity dependent spread theory

    NASA Technical Reports Server (NTRS)

    Holben, Richard

    1990-01-01

    The Intensity Dependent Spread (IDS) procedure is an image-processing technique based on a model of the processing which occurs in the human visual system. IDS processing is relevant to many aspects of machine vision and image processing. For quantum limited images, it produces an ideal trade-off between spatial resolution and noise averaging, performs edge enhancement thus requiring only mean-crossing detection for the subsequent extraction of scene edges, and yields edge responses whose amplitudes are independent of scene illumination, depending only upon the ratio of the reflectance on the two sides of the edge. These properties suggest that the IDS process may provide significant bandwidth reduction while losing only minimal scene information when used as a preprocessor at or near the image plane.

  8. Halftoning and Image Processing Algorithms

    DTIC Science & Technology

    1999-02-01

    ...screening techniques with the quality advantages of error diffusion in the halftoning of color maps, and on color image enhancement for halftone ...image quality. Our goals in this research were to advance the understanding in image science for our new halftone algorithm and to contribute to ...image retrieval and noise theory for such imagery. In the field of color halftone printing, research was conducted on deriving a theoretical model of our...

  9. An improved K-means clustering algorithm in agricultural image segmentation

    NASA Astrophysics Data System (ADS)

    Cheng, Huifeng; Peng, Hui; Liu, Shanmei

    Image segmentation is the first important step in image analysis and image processing. In this paper, according to the characteristics of color crop images, we first transform the color space of the image from RGB to HSI, and then select a proper initial clustering center and cluster number by applying a mean-variance approach and rough set theory, followed by the clustering calculation, in such a way as to automatically and rapidly segment the color components and accurately extract target objects from the background. This provides a reliable basis for identification, analysis, follow-up calculation, and processing of crop images. Experimental results demonstrate that the improved k-means clustering algorithm is able to reduce the amount of computation and to enhance the precision and accuracy of the clustering.
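
    The paper's contribution is the rough-set and mean-variance selection of the initial cluster centers and cluster number, which the abstract does not specify in detail. The sketch below shows only the surrounding pipeline, RGB-to-HSI conversion (Gonzalez-Woods formulas) followed by ordinary k-means, with the standard initialization as an admitted stand-in.

      import numpy as np
      from sklearn.cluster import KMeans

      def rgb_to_hsi(rgb):
          """Convert an RGB image with values in [0, 1] to HSI."""
          r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
          intensity = (r + g + b) / 3.0
          saturation = 1.0 - np.minimum(np.minimum(r, g), b) / np.maximum(intensity, 1e-8)
          num = 0.5 * ((r - g) + (r - b))
          den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-8
          theta = np.arccos(np.clip(num / den, -1.0, 1.0))
          hue = np.where(b <= g, theta, 2.0 * np.pi - theta)
          return np.stack([hue, saturation, intensity], axis=-1)

      def segment(rgb, n_clusters=3):
          """Cluster HSI pixel vectors with k-means; returns a label image."""
          hsi = rgb_to_hsi(rgb)
          kmeans = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
          return kmeans.fit_predict(hsi.reshape(-1, 3)).reshape(rgb.shape[:2])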

  10. Theory of Remote Image Formation

    NASA Astrophysics Data System (ADS)

    Blahut, Richard E.

    2004-11-01

    In many applications, images, such as ultrasonic or X-ray signals, are recorded and then analyzed with digital or optical processors in order to extract information. Such processing requires the development of algorithms of great precision and sophistication. This book presents a unified treatment of the mathematical methods that underpin the various algorithms used in remote image formation. The author begins with a review of transform and filter theory. He then discusses two- and three-dimensional Fourier transform theory, the ambiguity function, image construction and reconstruction, tomography, baseband surveillance systems, and passive systems (where the signal source might be an earthquake or a galaxy). Information-theoretic methods in image formation are also covered, as are phase errors and phase noise. Throughout the book, practical applications illustrate theoretical concepts, and there are many homework problems. The book is aimed at graduate students of electrical engineering and computer science, and practitioners in industry. Presents a unified treatment of the mathematical methods that underpin the algorithms used in remote image formation Illustrates theoretical concepts with reference to practical applications Provides insights into the design parameters of real systems

  11. [Image processing applying in analysis of motion features of cultured cardiac myocyte in rat].

    PubMed

    Teng, Qizhi; He, Xiaohai; Luo, Daisheng; Wang, Zhengrong; Zhou, Beiyi; Yuan, Zhirun; Tao, Dachang

    2007-02-01

    Studying the mechanisms of drug action by quantitative analysis of cultured cardiac myocytes is one of the cutting-edge research areas in myocyte dynamics and molecular biology. The ability of cardiac myocytes to beat spontaneously without external stimulation makes this research meaningful. Research on the morphology and motion of cardiac myocytes using image analysis can reveal the fundamental mechanisms of drug action, increase the accuracy of drug screening, and help design optimal drug formulations for the best treatment. A system of hardware and software has been built with a complete set of functions, including living cardiac myocyte image acquisition, image processing, motion image analysis, and image recognition. In this paper, theories and approaches are introduced for analyzing living cardiac myocyte motion images and implementing quantitative analysis of cardiac myocyte features. A motion estimation algorithm is used for motion vector detection of particular points and for amplitude and frequency detection of a cardiac myocyte. The beating of cardiac myocytes is sometimes very small; in such cases it is difficult to detect the motion vectors from the particular points in a time sequence of images, so image correlation theory is employed to detect the beating frequencies. An active contour algorithm based on an energy function is proposed to approximate the boundary and detect changes in the myocyte edge.
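
    The correlation-based frequency detection mentioned for weakly beating cells can be sketched as follows (my simplification, with a hypothetical frames array and frame rate): each frame is correlated with the first frame, and the dominant peak of that time series' spectrum is taken as the beating frequency.

      import numpy as np

      def beating_frequency(frames, fps):
          """Estimate the dominant beating frequency (Hz) of an image sequence by
          correlating every frame with the first frame and locating the strongest
          non-DC peak in the spectrum of the resulting time series."""
          ref = frames[0].astype(float).ravel()
          ref -= ref.mean()
          series = []
          for frame in frames:
              vec = frame.astype(float).ravel()
              vec -= vec.mean()
              series.append(np.dot(vec, ref) / (np.linalg.norm(vec) * np.linalg.norm(ref)))
          series = np.asarray(series)
          spectrum = np.abs(np.fft.rfft(series - series.mean()))
          freqs = np.fft.rfftfreq(series.size, d=1.0 / fps)
          return freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin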

  12. Synthesis strategy: building a culturally sensitive mid-range theory of risk perception using literary, quantitative, and qualitative methods.

    PubMed

    Siaki, Leilani A; Loescher, Lois J; Trego, Lori L

    2013-03-01

    This article presents a discussion of the development of a mid-range theory of risk perception. Unhealthy behaviours contribute to the development of health inequalities worldwide. The link between perceived risk and successful health behaviour change is inconclusive, particularly in vulnerable populations. This may be attributed to inattention to culture. The synthesis strategy of theory building guided the process using three methods: (1) a systematic review of literature published between 2000 and 2011 targeting perceived risk in vulnerable populations; (2) qualitative and (3) quantitative data from a study of Samoan Pacific Islanders at high risk of cardiovascular disease and diabetes. The main concepts of this theory include risk attention, appraisal processes, cognition, and affect. Overarching these concepts is health-world view: cultural ways of knowing, beliefs, values, images, and ideas. This theory proposes the following: (1) risk attention varies based on knowledge of the health risk in the context of health-world views; (2) risk appraisals are influenced by affect, health-world views, cultural customs, and protocols that intersect with the health risk; (3) the strength of cultural beliefs, values, and images (cultural identity) mediates risk attention and risk appraisal, influencing the likelihood that persons will engage in health-promoting behaviours that may contradict cultural customs/protocols. Interventions guided by a culturally sensitive mid-range theory may improve behaviour-related health inequalities in vulnerable populations. The synthesis strategy is an intensive process for developing a culturally sensitive mid-range theory. Testing of the theory will ascertain its usefulness for reducing health inequalities in vulnerable groups. © 2012 Blackwell Publishing Ltd.

  13. Rank-based decompositions of morphological templates.

    PubMed

    Sussner, P; Ritter, G X

    2000-01-01

    Methods for matrix decomposition have found numerous applications in image processing, in particular for the problem of template decomposition. Since existing matrix decomposition techniques are mainly concerned with the linear domain, we consider it timely to investigate matrix decomposition techniques in the nonlinear domain with applications in image processing. The mathematical basis for these investigations is the new theory of rank within minimax algebra. Thus far, only minimax decompositions of rank 1 and rank 2 matrices into outer product expansions are known to the image processing community. We derive a heuristic algorithm for the decomposition of matrices having arbitrary rank.
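
    The rank-k decomposition algorithm itself is not reproduced here; the sketch below only verifies the rank-1 case in minimax (max-plus) algebra: a template that is a max-plus outer product of a column and a row, T(i, j) = r(i) + c(j), lets a 2-D grey-scale dilation be computed as two 1-D dilations.

      import numpy as np

      def grey_dilate(f, t):
          """Grey-scale dilation (f (+) t)(x) = max_y f(x - y) + t(y),
          same-size output, with out-of-range samples excluded."""
          out = np.empty(f.shape)
          for x1 in range(f.shape[0]):
              for x2 in range(f.shape[1]):
                  best = -np.inf
                  for y1 in range(t.shape[0]):
                      for y2 in range(t.shape[1]):
                          i, j = x1 - y1, x2 - y2
                          if 0 <= i < f.shape[0] and 0 <= j < f.shape[1]:
                              best = max(best, f[i, j] + t[y1, y2])
                  out[x1, x2] = best
          return out

      rng = np.random.default_rng(0)
      f = rng.integers(0, 10, size=(6, 6)).astype(float)
      r = rng.integers(0, 3, size=3).astype(float)     # column factor
      c = rng.integers(0, 3, size=3).astype(float)     # row factor
      T = r[:, None] + c[None, :]                      # max-plus outer product (rank 1)
      full = grey_dilate(f, T)
      separable = grey_dilate(grey_dilate(f, c[None, :]), r[:, None])
      print(np.allclose(full, separable))              # True: the template decomposes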

  14. High-resolution imaging of the supercritical antisolvent process

    NASA Astrophysics Data System (ADS)

    Bell, Philip W.; Stephens, Amendi P.; Roberts, Christopher B.; Duke, Steve R.

    2005-06-01

    A high-magnification and high-resolution imaging technique was developed for the supercritical fluid antisolvent (SAS) precipitation process. Visualizations of the jet injection, flow patterns, droplets, and particles were obtained in a high-pressure vessel for polylactic acid and budesonide precipitation in supercritical CO2. The results show two regimes for particle production: one where turbulent mixing occurs in gas-like plumes, and another where distinct droplets were observed in the injection. Images are presented to demonstrate the capabilities of the method for examining particle formation theories and for understanding the underlying fluid mechanics, thermodynamics, and mass transport in the SAS process.

  15. Application of two-dimensional crystallography and image processing to atomic resolution Z-contrast images.

    PubMed

    Morgan, David G; Ramasse, Quentin M; Browning, Nigel D

    2009-06-01

    Zone axis images recorded using high-angle annular dark-field scanning transmission electron microscopy (HAADF-STEM or Z-contrast imaging) reveal the atomic structure with a resolution that is defined by the probe size of the microscope. In most cases, the full images contain many sub-images of the crystal unit cell and/or interface structure. Thanks to the repetitive nature of these images, it is possible to apply standard image processing techniques that have been developed for the electron crystallography of biological macromolecules and have been used widely in other fields of electron microscopy for both organic and inorganic materials. These methods can be used to enhance the signal-to-noise present in the original images, to remove distortions in the images that arise from either the instrumentation or the specimen itself and to quantify properties of the material in ways that are difficult without such data processing. In this paper, we describe briefly the theory behind these image processing techniques and demonstrate them for aberration-corrected, high-resolution HAADF-STEM images of Si(46) clathrates developed for hydrogen storage.
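
    A minimal sketch of one such technique, translational (unit-cell) averaging, is shown below under the assumption that the unit-cell period in pixels is already known; averaging the repeated sub-images raises the signal-to-noise ratio roughly as the square root of the number of cells. It is only an illustration of the idea, not the crystallographic processing pipeline used in the paper.

    ```python
    import numpy as np

    def unit_cell_average(img, cell_h, cell_w):
        """Translational averaging of a periodic image: crop to a whole number of unit
        cells and average them to raise the signal-to-noise ratio (cell size assumed known)."""
        H = (img.shape[0] // cell_h) * cell_h
        W = (img.shape[1] // cell_w) * cell_w
        cells = img[:H, :W].reshape(H // cell_h, cell_h, W // cell_w, cell_w)
        return cells.mean(axis=(0, 2))
    ```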

  16. Theory and applications of structured light single pixel imaging

    NASA Astrophysics Data System (ADS)

    Stokoe, Robert J.; Stockton, Patrick A.; Pezeshki, Ali; Bartels, Randy A.

    2018-02-01

    Many single-pixel imaging techniques have been developed in recent years. Though the methods of image acquisition vary considerably, they share unifying features that make general analysis possible. Furthermore, the methods developed thus far are based on intuitive processes that enable simple and physically motivated reconstruction algorithms; however, this approach may not leverage the full potential of single-pixel imaging. We present a general theoretical framework of single-pixel imaging based on frame theory, which enables general, mathematically rigorous analysis. We apply our theoretical framework to existing single-pixel imaging techniques and provide a foundation for developing more advanced methods of image acquisition and reconstruction. The proposed frame-theoretic framework for single-pixel imaging results in improved noise robustness and a decrease in acquisition time, and can take advantage of special properties of the specimen under study. By building on this framework, new methods of imaging with a single-element detector can be developed to realize the full potential associated with single-pixel imaging.
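
    To make the frame-theoretic view concrete, here is a small Python sketch in which random ±1 illumination patterns play the role of frame vectors and the image is recovered from bucket-detector measurements with the canonical dual frame (the pseudo-inverse). The scene, pattern count, and noise level are invented for the example; it is a sketch of the general idea rather than any specific method analysed in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 16                                        # hypothetical n x n scene
    scene = np.zeros((n, n)); scene[4:12, 6:10] = 1.0
    x = scene.ravel()

    m = 400                                       # number of illumination patterns (frame vectors)
    P = rng.choice([-1.0, 1.0], size=(m, n * n))  # random +/-1 structured-light patterns
    y = P @ x + 0.01 * rng.standard_normal(m)     # noisy bucket-detector measurements

    # Frame-theoretic recovery: apply the canonical dual frame, i.e. the pseudo-inverse of P.
    x_hat = np.linalg.pinv(P) @ y
    print(np.linalg.norm(x_hat - x) / np.linalg.norm(x))   # small relative error
    ```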

  17. Focusing on media body ideal images triggers food intake among restrained eaters: a test of restraint theory and the elaboration likelihood model.

    PubMed

    Boyce, Jessica A; Kuijer, Roeline G

    2014-04-01

    Although research consistently shows that images of thin women in the media (media body ideals) affect women negatively (e.g., increased weight dissatisfaction and food intake), this effect is less clear among restrained eaters. The majority of experiments demonstrate that restrained eaters - identified with the Restraint Scale - consume more food than do other participants after viewing media body ideal images; whereas a minority of experiments suggest that such images trigger restrained eaters' dietary restraint. Weight satisfaction and mood results are just as variable. One reason for these inconsistent results might be that different methods of image exposure (e.g., slideshow vs. film) afford varying levels of attention. Therefore, we manipulated attention levels and measured participants' weight satisfaction and food intake. We based our hypotheses on the elaboration likelihood model and on restraint theory. We hypothesised that advertent (i.e., processing the images via central routes of persuasion) and inadvertent (i.e., processing the images via peripheral routes of persuasion) exposure would trigger differing degrees of weight dissatisfaction and dietary disinhibition among restrained eaters (cf. restraint theory). Participants (N = 174) were assigned to one of four conditions: advertent or inadvertent exposure to media or control images. The dependent variables were measured in a supposedly unrelated study. Although restrained eaters' weight satisfaction was not significantly affected by either media exposure condition, advertent (but not inadvertent) media exposure triggered restrained eaters' eating. These results suggest that teaching restrained eaters how to pay less attention to media body ideal images might be an effective strategy in media-literacy interventions. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. Computational method for multi-modal microscopy based on transport of intensity equation

    NASA Astrophysics Data System (ADS)

    Li, Jiaji; Chen, Qian; Sun, Jiasong; Zhang, Jialin; Zuo, Chao

    2017-02-01

    In this paper, we develop the requisite theory to describe a hybrid virtual-physical multi-modal imaging system which yields quantitative phase, Zernike phase contrast, differential interference contrast (DIC), and light field moment imaging simultaneously based on the transport of intensity equation (TIE). We then give the experimental demonstration of these ideas by time-lapse imaging of live HeLa cell mitosis. Experimental results verify that a tunable-lens-based TIE system, combined with the appropriate post-processing algorithm, can achieve a variety of promising imaging modalities in parallel with the quantitative phase images for the dynamic study of cellular processes.
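
    For readers unfamiliar with the transport of intensity equation, the following Python sketch shows the standard FFT-based TIE solver under a uniform-intensity approximation: the axial intensity derivative is estimated from two defocused images and an inverse Laplacian is applied in Fourier space. The function name and parameters are illustrative assumptions; the paper's multi-modal post-processing is more elaborate than this.

    ```python
    import numpy as np

    def tie_phase(I_minus, I_plus, dz, wavelength, pixel, I0):
        """Minimal FFT-based TIE phase retrieval assuming nearly uniform intensity I0."""
        k = 2 * np.pi / wavelength
        dIdz = (I_plus - I_minus) / (2 * dz)            # axial intensity derivative
        ny, nx = dIdz.shape
        fx = np.fft.fftfreq(nx, d=pixel)
        fy = np.fft.fftfreq(ny, d=pixel)
        FX, FY = np.meshgrid(fx, fy)
        freq2 = (2 * np.pi) ** 2 * (FX ** 2 + FY ** 2)  # Laplacian symbol magnitude
        freq2[0, 0] = np.inf                            # suppress the undefined DC term
        rhs = -k / I0 * dIdz                            # Poisson equation right-hand side
        return np.real(np.fft.ifft2(np.fft.fft2(rhs) / (-freq2)))
    ```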

  19. "I just feel so guilty": The role of introjected regulation in linking appearance goals for exercise with women's body image.

    PubMed

    Hurst, Megan; Dittmar, Helga; Banerjee, Robin; Bond, Rod

    2017-03-01

    Appearance goals for exercise are consistently associated with negative body image, but research has yet to consider the processes that link these two variables. Self-determination theory offers one such process: introjected (guilt-based) regulation of exercise behavior. Study 1 investigated these relationships within a cross-sectional sample of female UK students (n=215, 17-30 years). Appearance goals were indirectly, negatively associated with body image due to links with introjected regulation. Study 2 experimentally tested this pathway, manipulating guilt relating to exercise and appearance goals independently and assessing post-test guilt and body anxiety (n=165, 18-27 years). The guilt manipulation significantly increased post-test feelings of guilt, and these increases were associated with increased post-test body anxiety, but only for participants in the guilt condition. The implications of these findings for self-determination theory and the importance of guilt for the body image literature are discussed. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  20. Geometric shapes inversion method of space targets by ISAR image segmentation

    NASA Astrophysics Data System (ADS)

    Huo, Chao-ying; Xing, Xiao-yu; Yin, Hong-cheng; Li, Chen-guang; Zeng, Xiang-yun; Xu, Gao-gui

    2017-11-01

    The geometric shape of a target is an effective characteristic in the recognition of space targets. This paper proposes a method for the shape inversion of space targets based on component segmentation of ISAR images. The Radon transform, Hough transform, K-means clustering, and triangulation are introduced into ISAR image processing. First, the Radon transform and edge detection are used to extract the spindles of the target's main body and solar panels from the ISAR image. Then the target's main body, solar panels, and rectangular and circular antennas are segmented from the ISAR image based on image detection theory. Finally, the size of every structural component is computed. The effectiveness of this method is verified using simulation data of typical targets.
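
    As an illustration of the Radon-transform step, the sketch below (using scikit-image, which is an assumption on my part, not the authors' toolchain) estimates the orientation at which the line integrals of a segmented target concentrate most strongly, which is how a main-body or solar-panel spindle direction can be read off a binary mask.

    ```python
    import numpy as np
    from skimage.transform import radon

    def main_axis_angle(mask):
        """Angle (degrees, in skimage's Radon convention) at which the line integrals of a
        segmented target concentrate most strongly, i.e. the dominant spindle direction."""
        theta = np.arange(0.0, 180.0, 1.0)
        sinogram = radon(mask.astype(float), theta=theta, circle=False)
        return theta[int(np.argmax(sinogram.max(axis=0)))]
    ```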

  1. Using fuzzy fractal features of digital images for material surface analysis

    NASA Astrophysics Data System (ADS)

    Privezentsev, D. G.; Zhiznyakov, A. L.; Astafiev, A. V.; Pugin, E. V.

    2018-01-01

    Edge detection is an important task in image processing. There are many approaches in this area, including the Sobel and Canny operators. One promising technique in image processing is the use of fuzzy logic and fuzzy set theory, which allow processing quality to be increased by representing information in fuzzy form. Most existing fuzzy image processing methods switch to fuzzy sets at very late stages, which leads to the loss of some useful information. In this paper, a novel method of edge detection based on a fuzzy image representation and fuzzy pixels is proposed. With this approach, the image is converted to fuzzy form in the first step. Different approaches to this conversion are described. Several membership functions for fuzzy pixel description, and requirements for their form, are given. A novel approach to edge detection based on the Sobel operator and the fuzzy image representation is proposed. Experimental testing of the developed method was performed on remote sensing images.
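
    A minimal sketch of the general idea, combining a Sobel gradient with a fuzzy "edgeness" membership, is given below; the membership thresholds low and high are hypothetical choices and the paper's fuzzy pixel representation is richer than this.

    ```python
    import numpy as np
    from scipy import ndimage

    def fuzzy_sobel_edges(image, low=10.0, high=60.0):
        """Sobel gradient magnitude mapped to a fuzzy edge membership in [0, 1]
        through a piecewise-linear membership function (thresholds are placeholders)."""
        gx = ndimage.sobel(image.astype(float), axis=1)
        gy = ndimage.sobel(image.astype(float), axis=0)
        mag = np.hypot(gx, gy)
        return np.clip((mag - low) / (high - low), 0.0, 1.0)
    ```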

  2. Laser Speckle Imaging to Monitor Microvascular Blood Flow: A Review.

    PubMed

    Vaz, Pedro G; Humeau-Heurtier, Anne; Figueiras, Edite; Correia, Carlos; Cardoso, Joao

    2016-01-01

    Laser speckle is a complex interference phenomenon that can easily be understood in concept but is difficult to predict mathematically, because it is a stochastic process. The use of laser speckle to produce images, which can carry many types of information, is called laser speckle imaging (LSI). The biomedical applications of LSI started in 1981 and, since then, many scientists have improved the laser speckle theory and developed different imaging techniques. During this process, some inconsistencies have been propagated up to now. These inconsistencies should be clarified in order to avoid errors in future works. This article reviews the laser speckle theory used in biomedical applications, as well as the practical concepts that are useful in the construction of laser speckle imagers. This study is not only an exposition of the concepts that can be found in the literature but also a critical analysis of the investigations presented so far. Concepts such as the scatterer velocity distribution, the effect of static scatterers, optimal speckle size, light penetration angle, and contrast computation algorithms are discussed in detail.
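
    The spatial contrast computation mentioned above is commonly defined as K = σ/⟨I⟩ over a small sliding window; the following Python sketch computes it with box filters. The window size and the numerical guard are arbitrary choices for the example, not values recommended by the review.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def speckle_contrast(raw, window=7):
        """Local spatial speckle contrast K = sigma / mean over a sliding window."""
        raw = raw.astype(float)
        mean = uniform_filter(raw, window)
        mean_sq = uniform_filter(raw ** 2, window)
        var = np.maximum(mean_sq - mean ** 2, 0.0)   # guard against negative rounding error
        return np.sqrt(var) / (mean + 1e-12)
    ```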

  3. Go With the Flow, on Jupiter and Snow. Coherence from Model-Free Video Data Without Trajectories

    NASA Astrophysics Data System (ADS)

    AlMomani, Abd AlRahman R.; Bollt, Erik

    2018-06-01

    Viewing a data set such as the clouds of Jupiter, coherence is readily apparent to human observers, especially the Great Red Spot, but also other great storms and persistent structures. There are now many different definitions and perspectives mathematically describing coherent structures, but we take an image processing perspective here. We describe an inference of coherent sets of a fluidic system directly from image data, without attempting to first model the underlying flow fields, related to a concept in image processing called motion tracking. In contrast to standard spectral methods for image processing, which are generally related to a symmetric affinity matrix and hence to standard spectral graph theory, we need a non-symmetric affinity, which arises naturally from the underlying arrow of time. We develop an anisotropic, directed diffusion operator corresponding to flow on a directed graph, built from a directed affinity matrix designed with coherence in mind, together with the corresponding spectral graph theory from the graph Laplacian. Our methodology is not offered as more accurate than traditional methods of finding coherent sets; rather, our approach works with alternative kinds of data sets, in the absence of a vector field. Our examples include partitioning the weather and cloud structures of Jupiter, a lake-effect snow event local to Potsdam, NY, on Earth, and the benchmark double-gyre system.
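
    The directed-graph machinery can be sketched as follows: from a non-symmetric affinity matrix one forms a (teleported) random walk, symmetrizes it with the stationary distribution in the style of Chung's directed Laplacian, and partitions by the sign of the second eigenvector. This is a generic construction offered only to make the idea concrete, not the operator developed in the paper.

    ```python
    import numpy as np

    def directed_spectral_partition(W, alpha=0.99):
        """Two-way partition from a directed affinity matrix W (assumed nonnegative with
        positive row sums), via Chung's symmetrized Laplacian of a teleported random walk."""
        n = W.shape[0]
        P = W / W.sum(axis=1, keepdims=True)            # row-stochastic transition matrix
        P = alpha * P + (1.0 - alpha) / n               # teleportation: unique stationary law
        evals, evecs = np.linalg.eig(P.T)
        phi = np.real(evecs[:, np.argmax(np.real(evals))])
        phi = np.abs(phi) / np.abs(phi).sum()           # stationary distribution
        S = np.diag(np.sqrt(phi)) @ P @ np.diag(1.0 / np.sqrt(phi))
        L = np.eye(n) - 0.5 * (S + S.T)                 # directed (Chung) graph Laplacian
        _, v = np.linalg.eigh(L)
        return v[:, 1] > 0                              # sign of the second eigenvector
    ```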

  4. Unhealthy weight control behaviours in adolescent girls: a process model based on self-determination theory.

    PubMed

    Thøgersen-Ntoumani, Cecilie; Ntoumanis, Nikos; Nikitaras, Nikitas

    2010-06-01

    This study used self-determination theory (Deci, E.L., & Ryan, R.M. (2000). The 'what' and 'why' of goal pursuits: Human needs and the self-determination of behavior. Psychological Inquiry, 11, 227-268.) to examine predictors of body image concerns and unhealthy weight control behaviours in a sample of 350 Greek adolescent girls. A process model was tested which proposed that perceptions of parental autonomy support and two life goals (health and image) would predict adolescents' degree of satisfaction of their basic psychological needs. In turn, psychological need satisfaction was hypothesised to negatively predict body image concerns (i.e. drive for thinness and body dissatisfaction) and, indirectly, unhealthy weight control behaviours. The predictions of the model were largely supported indicating that parental autonomy support and adaptive life goals can indirectly impact upon the extent to which female adolescents engage in unhealthy weight control behaviours via facilitating the latter's psychological need satisfaction.

  5. Research on the lesion segmentation of breast tumor MR images based on FCM-DS theory

    NASA Astrophysics Data System (ADS)

    Zhang, Liangbin; Ma, Wenjun; Shen, Xing; Li, Yuehua; Zhu, Yuemin; Chen, Li; Zhang, Su

    2017-03-01

    Magnetic resonance imaging (MRI) plays an important role in the treatment of breast tumours by high-intensity focused ultrasound (HIFU). Clinicians evaluate the extent, distribution, and benign or malignant status of a breast tumour by analysing several MRI modalities, such as T2, DWI, and DCE images, in order to make an accurate preoperative treatment plan and to evaluate the effect of the operation. This paper presents a method for lesion segmentation of breast tumours based on FCM-DS theory. The fuzzy c-means clustering (FCM) algorithm combined with Dempster-Shafer (DS) theory is used to process the uncertainty in the information, segmenting the lesion areas on the DWI and DCE modalities of MRI and reducing the extent of the uncertain regions. Experimental results show that FCM-DS can fuse the DWI and DCE images to achieve accurate segmentation and indicate whether the lesion area is benign or malignant via the Time-Intensity Curve (TIC), which could be beneficial in making preoperative treatment plans and evaluating the effect of the therapy.
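
    For readers unfamiliar with fuzzy c-means, the core update equations are sketched below for a 1-D feature vector (for example, flattened DWI intensities); the Dempster-Shafer fusion step that the authors add on top is not reproduced here, and the parameter choices are placeholders.

    ```python
    import numpy as np

    def fuzzy_c_means(x, c=2, m=2.0, n_iter=50, seed=0):
        """Minimal fuzzy c-means on a 1-D feature vector x; returns memberships and centres."""
        rng = np.random.default_rng(seed)
        u = rng.random((c, x.size))
        u /= u.sum(axis=0)                                   # fuzzy memberships, columns sum to 1
        for _ in range(n_iter):
            um = u ** m
            centers = (um @ x) / um.sum(axis=1)              # fuzzily weighted cluster centres
            d = np.abs(x[None, :] - centers[:, None]) + 1e-12
            u = d ** (-2.0 / (m - 1.0))
            u /= u.sum(axis=0)                               # standard FCM membership update
        return u, centers
    ```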

  6. Child Development and Curriculum in Waldorf Education.

    ERIC Educational Resources Information Center

    Schmitt-Stegmann, Astrid

    Every educational theory has behind it a particular image of human beings and their development that supports a particular view of the learning process. This paper examines the image of children underlying Waldorf education. The paper identifies the individual and unique Self as the "third factor," that together with heredity and…

  7. Toward a Rhetoric of Change: Reconstructing Image and Narrative in Distressed Organizations.

    ERIC Educational Resources Information Center

    Faber, Brenton

    1998-01-01

    Proposes a model of organizational change by describing organizational change as a discursive process, sparked by a rhetorical conflict in an organization's narratives and images. Examines the educational assumptions and theories that structured a training course used by a company that was restructuring and reorganizing. (SG)

  8. [An improved low spectral distortion PCA fusion method].

    PubMed

    Peng, Shi; Zhang, Ai-Wu; Li, Han-Lun; Hu, Shao-Xing; Meng, Xian-Gang; Sun, Wei-Dong

    2013-10-01

    Aiming at the spectral distortion produced in the PCA fusion process, the present paper proposes an improved low-spectral-distortion PCA fusion method. This method uses the NCUT (normalized cut) image segmentation algorithm to partition a complex hyperspectral remote sensing image into multiple sub-images, increasing the separability of samples and weakening the spectral distortion of traditional PCA fusion. A pixel-similarity weighting matrix and masks are produced using graph theory and clustering theory. These masks are used to cut the hyperspectral image and the high-resolution image into sub-region objects. All corresponding sub-region objects of the hyperspectral image and the high-resolution image are fused using the PCA method, and all sub-region fusion results are spliced together to produce a new image. In the experiment, Hyperion hyperspectral data and RapidEye data were used. The experimental results show that the proposed method retains the ability to enhance spatial resolution while offering greater spectral fidelity.
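
    The classical PCA fusion (pan-sharpening) step that the method improves on can be sketched as follows: the first principal component of the upsampled multispectral bands is replaced by a mean/variance-matched high-resolution image and the transform is inverted. The array shapes and the matching rule are assumptions for the sketch; the NCUT segmentation and per-object fusion of the paper are omitted.

    ```python
    import numpy as np

    def pca_fusion(ms, pan):
        """Classic PCA pan-sharpening sketch: ms is (bands, H, W) upsampled multispectral data,
        pan is the (H, W) high-resolution image; PC1 is swapped for the matched pan image."""
        b, h, w = ms.shape
        X = ms.reshape(b, -1).astype(float)
        mean = X.mean(axis=1, keepdims=True)
        Xc = X - mean
        U, _, _ = np.linalg.svd(np.cov(Xc))                # eigenvectors of the band covariance
        pcs = U.T @ Xc                                     # principal component scores
        p = pan.ravel().astype(float)
        p = (p - p.mean()) / (p.std() + 1e-12) * pcs[0].std() + pcs[0].mean()  # mean/std match
        pcs[0] = p                                         # replace PC1 with pan detail
        return (U @ pcs + mean).reshape(b, h, w)
    ```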

  9. Social Comparison and Body Image in Adolescence: A Grounded Theory Approach

    ERIC Educational Resources Information Center

    Krayer, A.; Ingledew, D. K.; Iphofen, R.

    2008-01-01

    This study explored the use of social comparison appraisals in adolescents' lives with particular reference to enhancement appraisals which can be used to counter threats to the self. Social comparison theory has been increasingly used in quantitative research to understand the processes through which societal messages about appearance influence…

  10. Research on assessment and improvement method of remote sensing image reconstruction

    NASA Astrophysics Data System (ADS)

    Sun, Li; Hua, Nian; Yu, Yanbo; Zhao, Zhanping

    2018-01-01

    Remote sensing image quality assessment and improvement is an important part of image processing. Generally, the use of compressive sampling theory in a remote sensing imaging system can compress images while sampling, which improves efficiency. In this paper, a two-dimensional principal component analysis (2DPCA) method is proposed to reconstruct the remote sensing image and improve the quality of the compressed image; the reconstruction retains the useful information in the image while suppressing noise. The factors influencing remote sensing image quality are then analysed, and evaluation parameters for quantitative assessment are introduced. On this basis, the quality of the reconstructed images is evaluated and the influence of the different factors on the reconstruction is analysed, providing meaningful reference data for enhancing the quality of remote sensing images. The experimental results show that the evaluation agrees with human visual perception, and that the proposed method has good application value in the field of remote sensing image processing.
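
    A row-wise 2DPCA-style reconstruction can be sketched in a few lines: the image rows are treated as samples, the column covariance is eigendecomposed, and only the leading eigenvectors are kept before back-projection, which suppresses noise. The choice of k and the row-wise convention are assumptions for illustration, not the exact formulation of the paper.

    ```python
    import numpy as np

    def twod_pca_reconstruct(img, k=20):
        """Row-wise 2DPCA-style reconstruction: rows are the samples; keeping only the
        top-k eigenvectors of the column covariance suppresses noise."""
        A = img.astype(float)
        mean = A.mean(axis=0, keepdims=True)
        Ac = A - mean
        G = Ac.T @ Ac / A.shape[0]            # image (column) covariance matrix
        _, V = np.linalg.eigh(G)
        X = V[:, ::-1][:, :k]                  # top-k eigenvectors
        return Ac @ X @ X.T + mean             # project and back-project
    ```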

  11. Processing Digital Imagery to Enhance Perceptions of Realism

    NASA Technical Reports Server (NTRS)

    Woodell, Glenn A.; Jobson, Daniel J.; Rahman, Zia-ur

    2003-01-01

    Multi-scale retinex with color restoration (MSRCR) is a method of processing digital image data based on Edwin Land's retinex (retina + cortex) theory of human color vision. An outgrowth of basic scientific research and its application to NASA's remote-sensing mission, MSRCR is embodied in a general-purpose algorithm that greatly improves the perception of visual realism and the quantity and quality of perceived information in a digitized image. In addition, the MSRCR algorithm includes provisions for automatic corrections to accelerate and facilitate what could otherwise be a tedious image-editing process. The MSRCR algorithm has been, and is expected to continue to be, the basis for development of commercial image-enhancement software designed to extend and refine its capabilities for diverse applications.
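
    The multi-scale retinex core of MSRCR can be sketched for a single channel as a sum of log-ratios between the image and Gaussian-blurred surrounds at several scales; the colour-restoration and automatic gain/offset steps of the full MSRCR algorithm are omitted, and the scale values below are conventional choices rather than NASA's published settings.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def multi_scale_retinex(channel, sigmas=(15, 80, 250)):
        """Single-channel multi-scale retinex; colour restoration and gain/offset omitted."""
        I = channel.astype(float) + 1.0                       # avoid log(0)
        msr = sum(np.log(I) - np.log(gaussian_filter(I, s)) for s in sigmas) / len(sigmas)
        return (msr - msr.min()) / (msr.max() - msr.min() + 1e-12)   # rescale for display
    ```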

  12. Theory-informed design of values clarification methods: a cognitive psychological perspective on patient health-related decision making.

    PubMed

    Pieterse, Arwen H; de Vries, Marieke; Kunneman, Marleen; Stiggelbout, Anne M; Feldman-Stewart, Deb

    2013-01-01

    Healthcare decisions, particularly those involving weighing benefits and harms that may significantly affect quality and/or length of life, should reflect patients' preferences. To support patients in making choices, patient decision aids and values clarification methods (VCM) in particular have been developed. VCM intend to help patients to determine the aspects of the choices that are important to their selection of a preferred option. Several types of VCM exist. However, they are often designed without clear reference to theory, which makes it difficult for their development to be systematic and internally coherent. Our goal was to provide theory-informed recommendations for the design of VCM. Process theories of decision making specify components of decision processes, thus, identify particular processes that VCM could aim to facilitate. We conducted a review of the MEDLINE and PsycINFO databases and of references to theories included in retrieved papers, to identify process theories of decision making. We selected a theory if (a) it fulfilled criteria for a process theory; (b) provided a coherent description of the whole process of decision making; and (c) empirical evidence supports at least some of its postulates. Four theories met our criteria: Image Theory, Differentiation and Consolidation theory, Parallel Constraint Satisfaction theory, and Fuzzy-trace Theory. Based on these, we propose that VCM should: help optimize mental representations; encourage considering all potentially appropriate options; delay selection of an initially favoured option; facilitate the retrieval of relevant values from memory; facilitate the comparison of options and their attributes; and offer time to decide. In conclusion, our theory-based design recommendations are explicit and transparent, providing an opportunity to test each in a systematic manner. Copyright © 2012 Elsevier Ltd. All rights reserved.

  13. Post-image acquisition processing approaches for coherent backscatter validation

    NASA Astrophysics Data System (ADS)

    Smith, Christopher A.; Belichki, Sara B.; Coffaro, Joseph T.; Panich, Michael G.; Andrews, Larry C.; Phillips, Ronald L.

    2014-10-01

    Utilizing a retro-reflector at a target point, the reflected irradiance of a laser beam traveling back toward the transmitting point contains a peak of intensity known as the enhanced backscatter (EBS) phenomenon. EBS depends on the strength regime of the turbulence occurring within the atmosphere as the beam propagates across and back. In order to capture and analyze this phenomenon so that it may be compared to theory, an imaging system is integrated into the optical setup. With proper imaging established, we are able to implement various post-image-acquisition techniques to help determine the detection and positioning of EBS, which can then be validated against theory by inspection of certain dependent meteorological parameters such as the refractive index structure parameter, Cn2, and wind speed.

  14. Cardiovascular imaging and image processing: Theory and practice - 1975; Proceedings of the Conference, Stanford University, Stanford, Calif., July 10-12, 1975

    NASA Technical Reports Server (NTRS)

    Harrison, D. C.; Sandler, H.; Miller, H. A.

    1975-01-01

    The present collection of papers outlines advances in ultrasonography, scintigraphy, and commercialization of medical technology as applied to cardiovascular diagnosis in research and clinical practice. Particular attention is given to instrumentation, image processing and display. As necessary concomitants to mathematical analysis, recently improved magnetic recording methods using tape or disks and high-speed computers of large capacity are coming into use. Major topics include Doppler ultrasonic techniques, high-speed cineradiography, three-dimensional imaging of the myocardium with isotopes, sector-scanning echocardiography, and commercialization of the echocardioscope. Individual items are announced in this issue.

  15. Radiometry rocks

    NASA Astrophysics Data System (ADS)

    Harvey, James E.

    2012-10-01

    Professor Bill Wolfe was an exceptional mentor for his graduate students, and he made a major contribution to the field of optical engineering by teaching the (largely ignored) principles of radiometry for over forty years. This paper describes an extension of Bill's work on surface scatter behavior and the application of the BRDF to practical optical engineering problems. Most currently-available image analysis codes require the BRDF data as input in order to calculate the image degradation from residual optical fabrication errors. This BRDF data is difficult to measure and rarely available for short EUV wavelengths of interest. Due to a smooth-surface approximation, the classical Rayleigh-Rice surface scatter theory cannot be used to calculate BRDFs from surface metrology data for even slightly rough surfaces. The classical Beckmann-Kirchhoff theory has a paraxial limitation and only provides a closed-form solution for Gaussian surfaces. Recognizing that surface scatter is a diffraction process, and by utilizing sound radiometric principles, we first developed a linear systems theory of non-paraxial scalar diffraction in which diffracted radiance is shift-invariant in direction cosine space. Since random rough surfaces are merely a superposition of sinusoidal phase gratings, it was a straightforward extension of this non-paraxial scalar diffraction theory to develop a unified surface scatter theory that is valid for moderately rough surfaces at arbitrary incident and scattered angles. Finally, the above two steps are combined to yield a linear systems approach to modeling image quality for systems suffering from a variety of image degradation mechanisms. A comparison of image quality predictions with experimental results taken from on-orbit Solar X-ray Imager (SXI) data is presented.

  16. Enhancing the image resolution in a single-pixel sub-THz imaging system based on compressed sensing

    NASA Astrophysics Data System (ADS)

    Alkus, Umit; Ermeydan, Esra Sengun; Sahin, Asaf Behzat; Cankaya, Ilyas; Altan, Hakan

    2018-04-01

    Compressed sensing (CS) techniques allow for faster imaging when combined with scan architectures, which typically suffer from slow acquisition. This technique, when implemented with a sub-terahertz (sub-THz) single-detector scanning imaging system, provides images whose resolution is limited only by the pixel size of the pattern used to scan the image plane. To overcome this limitation, the image of the target can be oversampled; however, this results in slower imaging rates, especially if it is done in two dimensions across the image plane. We show that by implementing a one-dimensional (1-D) scan of the image plane, a modified approach to CS theory applied with an appropriate reconstruction algorithm allows for successful reconstruction of the reflected oversampled image of a target placed in a standoff configuration from the source. The experiments are done in a reflection-mode configuration where the operating frequency is 93 GHz and the corresponding wavelength is λ = 3.2 mm. To reconstruct the image with fewer samples, CS theory is applied using masks where the pixel size is 5 mm × 5 mm, and each mask covers an image area of 5 cm × 5 cm, meaning that the basic image is resolved as 10 × 10 pixels. To enhance the resolution, the information between two consecutive pixels is used, and oversampling along 1-D, coupled with a modification of the masks in CS theory, allowed oversampled images to be reconstructed rapidly in 20 × 20 and 40 × 40 pixel formats. These are then compared using two different reconstruction algorithms, TVAL3 and ℓ1-MAGIC. The performance of these methods is compared for both simulated and real signals. It is found that the modified CS approach coupled with the TVAL3 reconstruction process, even when scanning along only 1-D, allows for rapid, precise reconstruction of the oversampled target.
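
    TVAL3 and ℓ1-MAGIC are MATLAB solvers; to show the flavour of the reconstruction problem in Python, the sketch below recovers a sparse vector from random ±1 mask measurements with plain ISTA for the ℓ1-regularized least-squares objective. It is a stand-in for, not a reimplementation of, the solvers compared in the paper, and the scene and mask dimensions are made up.

    ```python
    import numpy as np

    def ista_l1(A, y, lam=0.05, n_iter=300):
        """Plain ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1 (a stand-in for TVAL3 / l1-MAGIC)."""
        x = np.zeros(A.shape[1])
        t = 1.0 / np.linalg.norm(A, 2) ** 2              # step size from the spectral norm of A
        for _ in range(n_iter):
            g = x - t * A.T @ (A @ x - y)                # gradient step on the quadratic term
            x = np.sign(g) * np.maximum(np.abs(g) - t * lam, 0.0)   # soft threshold (prox of l1)
        return x

    rng = np.random.default_rng(0)
    x_true = np.zeros(100); x_true[[5, 37, 80]] = [1.0, -0.5, 2.0]   # hypothetical sparse scene
    A = rng.choice([-1.0, 1.0], size=(40, 100))                      # random +/-1 measurement masks
    x_hat = ista_l1(A, A @ x_true)
    ```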

  17. Optimal Mass Transport for Statistical Estimation, Image Analysis, Information Geometry, and Control

    DTIC Science & Technology

    2017-01-10

    Metric Uncertainty for Spectral Estimation based on Nevanlinna-Pick Interpolation (with J. Karlsson), Intern. Symp. on the Math. Theory of Networks and Systems, Melbourne 2012. 22. Geometric tools for the estimation of structured covariances (with L. Ning, X. Jiang), Intern. Symposium on the Math. Theory... estimation and the reversibility of stochastic processes (with Y. Chen, J. Karlsson), Proc. Int. Symp. on Math. Theory of Networks and Syst., July

  18. The Fall of Parity.

    ERIC Educational Resources Information Center

    Forman, Paul

    1982-01-01

    Physicists had assumed that the world is distinguishable from its mirror image and constructed theories to ensure that the corresponding mathematical property (parity) is conserved in all subatomic processes. However, a scientific experiment demonstrated an intrinsic handedness to at least one physical process. The experiment, equipment, and…

  19. Structured light: theory and practice and practice and practice...

    NASA Astrophysics Data System (ADS)

    Keizer, Richard L.; Jun, Heesung; Dunn, Stanley M.

    1991-04-01

    We have developed a structured light system for noncontact 3-D measurement of human body surface areas and volumes. We illustrate the image processing steps and algorithms used to recover range data from a single camera image, reconstruct a complete surface from one or more sets of range data, and measure areas and volumes. The development of a working system required the solution to a number of practical problems in image processing and grid labeling (the stereo correspondence problem for structured light). In many instances we found that the standard cookbook techniques for image processing failed. This was due in part to the domain (human body), the restrictive assumptions of the models underlying the cookbook techniques, and the inability to consistently predict the outcome of the image processing operations. In this paper, we will discuss some of our successes and failures in two key steps in acquiring range data using structured light: First, the problem of detecting intersections in the structured light grid, and secondly, the problem of establishing correspondence between projected and detected intersections. We will outline the problems and solutions we have arrived at after several years of trial and error. We can now measure range data with an r.m.s. relative error of 0.3% and measure areas on the human body surface within 3% and volumes within 10%. We have found that the solution to building a working vision system requires the right combination of theory and experimental verification.

  20. Cardiovascular Imaging and Image Processing: Theory and Practice - 1975

    NASA Technical Reports Server (NTRS)

    Harrison, Donald C. (Editor); Sandler, Harold (Editor); Miller, Harry A. (Editor); Hood, Manley J. (Editor); Purser, Paul E. (Editor); Schmidt, Gene (Editor)

    1975-01-01

    Ultrasonography was examined in regard to the developmental highlights and present applications of cardiac ultrasound. Doppler ultrasonic techniques and the technology of miniature acoustic element arrays were reported. X-ray angiography was discussed with special consideration of quantitative three-dimensional dynamic imaging of the structure and function of the cardiopulmonary and circulatory systems in all regions of the body. Nuclear cardiography and scintigraphy, three-dimensional imaging of the myocardium with isotopes, and the commercialization of the echocardioscope were studied.

  1. Defocusing effects of lensless ghost imaging and ghost diffraction with partially coherent sources

    NASA Astrophysics Data System (ADS)

    Zhou, Shuang-Xi; Sheng, Wei; Bi, Yu-Bo; Luo, Chun-Ling

    2018-04-01

    The defocusing effect is inevitable and significantly degrades image quality in the conventional optical imaging process due to the close confinement of the imaging lens. Based on classical optical coherence theory and linear algebra, we develop a unified formula to describe the defocusing effects of both lensless ghost imaging (LGI) and lensless ghost diffraction (LGD) systems with a partially coherent source. Numerical examples are given to illustrate the influence of the defocusing length on the quality of LGI and LGD. We find that the defocusing effects of the test and reference paths in the LGI or LGD systems are entirely different, while the LGD system is more robust against defocusing than the LGI system. Specifically, we find that the imaging process for LGD systems can be viewed as pinhole imaging, which may find applications in ultra-short-wavelength imaging without imaging lenses, e.g. x-ray diffraction and γ-ray imaging.

  2. Tutte polynomial in functional magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    García-Castillón, Marlly V.

    2015-09-01

    Methods of graph theory are applied to the processing of functional magnetic resonance images. Specifically, the Tutte polynomial is used to analyze such images. Functional magnetic resonance imaging provides connectivity networks in the brain, which are represented by graphs to which the Tutte polynomial is applied. The problem of computing the Tutte polynomial for a given graph is #P-hard, even for planar graphs. For practical computation, the Maple packages "GraphTheory" and "SpecialGraphs" are used. We consider a diagram depicting functional connectivity, specifically between frontal and posterior areas, in autism during an inferential text comprehension task. The Tutte polynomial for the resulting neural network is computed and some numerical invariants of the network are obtained. Our results show that the Tutte polynomial is a powerful tool to analyze and characterize networks obtained from functional magnetic resonance imaging.
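
    Because the Tutte polynomial may be unfamiliar, the following self-contained Python sketch computes it by deletion-contraction for small multigraphs, returning coefficients of x^i y^j; it mirrors what the Maple packages do, at exponential cost, and is meant only to make the object concrete (the fMRI networks themselves are not reproduced here).

    ```python
    def tutte(edges):
        """Tutte polynomial T(G; x, y) by deletion-contraction, returned as a dict mapping
        (i, j) -> coefficient of x^i y^j. Exponential time: small (multi)graphs only."""
        edges = list(edges)
        if not edges:
            return {(0, 0): 1}
        (u, v), rest = edges[0], edges[1:]
        if u == v:                                            # loop: factor of y
            return {(i, j + 1): c for (i, j), c in tutte(rest).items()}
        contracted = [(_merge(a, u, v), _merge(b, u, v)) for a, b in rest]
        if not _connected(u, v, rest):                        # bridge: factor of x
            return {(i + 1, j): c for (i, j), c in tutte(contracted).items()}
        result = dict(tutte(rest))                            # delete the edge ...
        for key, coeff in tutte(contracted).items():          # ... plus contract the edge
            result[key] = result.get(key, 0) + coeff
        return result

    def _merge(w, u, v):
        return u if w == v else w                             # contraction maps v onto u

    def _connected(u, v, edges):
        adj = {}
        for a, b in edges:
            adj.setdefault(a, set()).add(b)
            adj.setdefault(b, set()).add(a)
        stack, seen = [u], {u}
        while stack:
            node = stack.pop()
            if node == v:
                return True
            for nb in adj.get(node, ()):
                if nb not in seen:
                    seen.add(nb)
                    stack.append(nb)
        return False

    # Example: the triangle K3 gives T = x^2 + x + y.
    print(tutte([(0, 1), (1, 2), (2, 0)]))
    ```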

  3. [Perception of odor quality by Free Image-Association Test].

    PubMed

    Ueno, Y

    1992-10-01

    A method was devised for evaluating odor quality. Subjects were requested to freely describe the images elicited by smelling odors. This test was named the "Free Image-Association Test" (FIT). The test was applied to 20 flavors of various foods, five odors from the standards of the T&T olfactometer (the Japanese standard olfactory test), yak milk butter, and incense from Lamaist temples. The words used to express imagery were analyzed by multidimensional scaling and cluster analysis. Seven clusters of odors were obtained. The features of these clusters were quite similar to those of the primary odors suggested by previous studies. However, the clustering of odors cannot be explained on the basis of the primary-odor theory, but rather by the information-processing theory originally proposed by Miller (1956). These results support the usefulness of the Free Image-Association Test for investigating odor perception based on the images associated with odors.

  4. Generalized Fourier slice theorem for cone-beam image reconstruction.

    PubMed

    Zhao, Shuang-Ren; Jiang, Dazong; Yang, Kevin; Yang, Kang

    2015-01-01

    Cone-beam reconstruction theory was developed by Kirillov in 1961, Tuy in 1983, Feldkamp in 1984, Smith in 1985, and Grangeat in 1990. The Fourier slice theorem was proposed by Bracewell in 1956 and leads to the Fourier image reconstruction method for parallel-beam geometry. The Fourier slice theorem was extended to fan-beam geometry by Zhao in 1993 and 1995. By combining the cone-beam reconstruction theory with the fan-beam Fourier slice theory, a Fourier slice theorem for cone-beam geometry was proposed by Zhao in a short conference publication in 1995. This article offers the details of the derivation and implementation of this Fourier slice theorem for cone-beam geometry. In particular, the problem of reconstruction from the Fourier domain has been overcome, namely that the value at the origin of Fourier space takes the indeterminate form 0/0; this limit is handled properly. As examples, the implementation results for single-circle and two-perpendicular-circle source orbits are shown. If an interpolation step is included in the cone-beam reconstruction, the number of calculations for the generalized Fourier slice theorem algorithm is O(N^4), which is close to that of the filtered back-projection method, where N is the image size in one dimension. However, the interpolation step can be avoided, in which case the number of calculations is O(N^5).
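
    The parallel-beam Fourier slice theorem underlying all of this is easy to verify numerically: the 1-D FFT of a projection equals a central slice of the 2-D FFT. The sketch below checks the θ = 0 case on a random image; the cone-beam generalization of the paper is, of course, far more involved.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    f = rng.random((128, 128))

    # Parallel-beam projection along the vertical axis (theta = 0).
    projection = f.sum(axis=0)

    # Central slice of the 2-D spectrum at ky = 0.
    slice_ky0 = np.fft.fft2(f)[0, :]

    print(np.allclose(np.fft.fft(projection), slice_ky0))   # expected: True (Fourier slice theorem)
    ```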

  5. Theoretical foundations of spatially-variant mathematical morphology part ii: gray-level images.

    PubMed

    Bouaynaya, Nidhal; Schonfeld, Dan

    2008-05-01

    In this paper, we develop a spatially-variant (SV) mathematical morphology theory for gray-level signals and images in the Euclidean space. The proposed theory preserves the geometrical concept of the structuring function, which provides the foundation of classical morphology and is essential in signal and image processing applications. We define the basic SV gray-level morphological operators (i.e., SV gray-level erosion, dilation, opening, and closing) and investigate their properties. We demonstrate the ubiquity of SV gray-level morphological systems by deriving a kernel representation for a large class of systems, called V-systems, in terms of the basic SV gray-level morphological operators. A V-system is defined to be a gray-level operator, which is invariant under gray-level (vertical) translations. Particular attention is focused on the class of SV flat gray-level operators. The kernel representation for increasing V-systems is a generalization of Maragos' kernel representation for increasing and translation-invariant function-processing systems. A representation of V-systems in terms of their kernel elements is established for increasing and upper-semi-continuous V-systems. This representation unifies a large class of spatially-variant linear and non-linear systems under the same mathematical framework. Finally, simulation results show the potential power of the general theory of gray-level spatially-variant mathematical morphology in several image analysis and computer vision applications.
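
    A brute-force sketch of a spatially-variant flat erosion, where the structuring element's half-width varies per pixel, is given below to make the SV notion concrete; it is a naive illustration of the definition, not the kernel-representation machinery developed in the paper.

    ```python
    import numpy as np

    def sv_flat_erosion(f, radius_map):
        """Spatially-variant flat erosion: the square structuring element at pixel (i, j)
        has half-width radius_map[i, j]. Brute force, for illustration only."""
        h, w = f.shape
        out = np.empty((h, w), dtype=float)
        for i in range(h):
            for j in range(w):
                r = int(radius_map[i, j])
                win = f[max(i - r, 0):i + r + 1, max(j - r, 0):j + r + 1]
                out[i, j] = win.min()
        return out
    ```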

  6. Northeast Artificial Intelligence Consortium Annual Report for 1987. Volume 4. Research in Automated Photointerpretation

    DTIC Science & Technology

    1989-03-01

    [Figure captions: Automated Photointerpretation Testbed; An Initial Segmentation of an Image.] Markov random field (MRF) theory provides a powerful alternative texture model and has resulted in intensive research activity in MRF model-based texture analysis... interpretation process. 5. Additional, and perhaps more powerful, features have to be incorporated into the image segmentation procedure. 6. Object detection

  7. Automatic building identification under bomb damage conditions

    NASA Astrophysics Data System (ADS)

    Woodley, Robert; Noll, Warren; Barker, Joseph; Wunsch, Donald C., II

    2009-05-01

    Given the vast amount of image intelligence utilized in support of planning and executing military operations, a passive automated image processing capability for target identification is urgently required. Furthermore, transmitting large image streams from remote locations would quickly consume the available bandwidth (BW), precipitating the need for processing to occur at the sensor location. This paper addresses the problem of automatic target recognition for battle damage assessment (BDA). We utilize an Adaptive Resonance Theory approach to cluster templates of target buildings. The results show that the network successfully classifies targets from non-targets in a virtual test bed environment.
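
    A minimal ART1-style clustering loop for binary template vectors is sketched below to illustrate the adaptive-resonance idea (choice, vigilance test, resonance update); the vigilance and choice parameters are placeholders, and the authors' network for clustering building templates is certainly more elaborate than this.

    ```python
    import numpy as np

    def art1_cluster(patterns, rho=0.7, alpha=0.001):
        """Minimal ART1 clustering of binary template vectors (a simplified sketch of
        the adaptive-resonance approach, not the paper's exact network)."""
        weights, labels = [], []
        for I in patterns:
            I = np.asarray(I, dtype=float)
            candidates = list(range(len(weights)))
            chosen = None
            while candidates:
                scores = [np.sum(np.minimum(I, weights[j])) / (alpha + weights[j].sum())
                          for j in candidates]                       # choice function
                j = candidates[int(np.argmax(scores))]
                match = np.sum(np.minimum(I, weights[j])) / (I.sum() + 1e-12)
                if match >= rho:                      # resonance: update the winning category
                    weights[j] = np.minimum(I, weights[j])
                    chosen = j
                    break
                candidates.remove(j)                  # reset: try the next-best category
            if chosen is None:                        # no category matched: commit a new one
                weights.append(I.copy())
                chosen = len(weights) - 1
            labels.append(chosen)
        return labels, weights
    ```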

  8. Bidirectional light-scattering image processing method for high-concentration jet sprays

    NASA Astrophysics Data System (ADS)

    Shimizu, I.; Emori, Y.; Yang, W.-J.; Shimoda, M.; Suzuki, T.

    1985-01-01

    In order to study the distributions of droplet size and volume density in high-concentration jet sprays, a new technique is developed which combines a forward and backward light scattering method with an image processing method. A pulsed ruby laser is used as the light source. The Mie scattering theory is applied to the results obtained from image processing of the scattering photographs. The time history is obtained for the droplet size and volume density distributions, and the method is demonstrated on diesel fuel sprays under various injection conditions. The validity of the technique is verified by the good agreement between the injected fuel volume distributions obtained by the present method and by injection rate measurements.

  9. Real-time single image dehazing based on dark channel prior theory and guided filtering

    NASA Astrophysics Data System (ADS)

    Zhang, Zan

    2017-10-01

    Images and videos taken outdoors on foggy days are seriously degraded. In order to restore images degraded by fog and to overcome the residual haze at edges produced by traditional dark channel prior algorithms, we propose a new dehazing method. We first find the fog area in the dark channel map using a quadtree search to obtain the estimated value of the transmittance. Then we regard the gray-scale image after guided filtering as the atmospheric light map and remove haze based on it. Box filtering and image downsampling are also used to improve the processing speed. Finally, the atmospheric light scattering model is used to restore the image. Extensive experiments show that the algorithm is effective and efficient and has a wide range of application.
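
    For reference, the textbook dark-channel-prior pipeline (dark channel, atmospheric light from the haziest pixels, transmission estimate, scattering-model inversion) can be sketched as follows for an RGB image in [0, 1]; the quadtree search, guided filtering, box filtering, and downsampling refinements described above are omitted, and the parameter values are conventional choices rather than the paper's.

    ```python
    import numpy as np
    from scipy.ndimage import minimum_filter

    def dehaze_dark_channel(img, patch=15, omega=0.95, t0=0.1):
        """Textbook dark-channel-prior dehazing for an RGB image in [0, 1]."""
        dark = minimum_filter(img.min(axis=2), size=patch)                  # dark channel
        top = dark.ravel().argsort()[-max(dark.size // 1000, 1):]           # haziest 0.1% pixels
        A = np.maximum(img.reshape(-1, 3)[top].max(axis=0), 1e-6)           # atmospheric light
        t = 1.0 - omega * minimum_filter((img / A).min(axis=2), size=patch) # transmission map
        t = np.maximum(t, t0)[..., None]
        return np.clip((img - A) / t + A, 0.0, 1.0)                         # invert the haze model
    ```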

  10. Blind Forensics of Successive Geometric Transformations in Digital Images Using Spectral Method: Theory and Applications.

    PubMed

    Chen, Chenglong; Ni, Jiangqun; Shen, Zhaoyi; Shi, Yun Qing

    2017-06-01

    Geometric transformations, such as resizing and rotation, are almost always needed when two or more images are spliced together to create convincing image forgeries. In recent years, researchers have developed many digital forensic techniques to identify these operations. Most previous works in this area focus on the analysis of images that have undergone single geometric transformations, e.g., resizing or rotation. In several recent works, researchers have addressed yet another practical and realistic situation: successive geometric transformations, e.g., repeated resizing, resizing-rotation, rotation-resizing, and repeated rotation. We will also concentrate on this topic in this paper. Specifically, we present an in-depth analysis in the frequency domain of the second-order statistics of the geometrically transformed images. We give an exact formulation of how the parameters of the first and second geometric transformations influence the appearance of periodic artifacts. The expected positions of characteristic resampling peaks are analytically derived. The theory developed here helps to address the gap left by previous works on this topic and is useful for image security and authentication, in particular, the forensics of geometric transformations in digital images. As an application of the developed theory, we present an effective method that allows one to distinguish between the aforementioned four different processing chains. The proposed method can further estimate all the geometric transformation parameters. This may provide useful clues for image forgery detection.

  11. Research on Multi-Temporal PolInSAR Modeling and Applications

    NASA Astrophysics Data System (ADS)

    Hong, Wen; Pottier, Eric; Chen, Erxue

    2014-11-01

    In the study of theory and processing methodology, we apply accurate topographic phase to the Freeman-Durden decomposition for PolInSAR data. On the other hand, we present a TomoSAR imaging method based on convex optimization regularization theory. The target decomposition and reconstruction performance will be evaluated by multi-temporal L- and P-band fully polarimetric images acquired in BioSAR campaigns. In the study of hybrid Quad-Pol system performance, we analyse the expression of the range ambiguity to signal ratio (RASR) in this architecture. Simulations are used to verify its advantage in the improvement of range ambiguities.

  13. Salient man-made structure detection in infrared images

    NASA Astrophysics Data System (ADS)

    Li, Dong-jie; Zhou, Fu-gen; Jin, Ting

    2013-09-01

    Target detection, segmentation, and recognition are hot research topics in the field of image processing and pattern recognition, among which salient area or object detection is one of the core technologies of precision-guided weapons. Many theories have been proposed in this area. In this paper, we detect salient objects in a series of input infrared images by using the classical feature integration theory and Itti's visual attention system. In order to find the salient object in an image accurately, we present a new method that solves the edge blur problem by calculating and using an edge mask. We also greatly improve the computing speed by improving the center-surround differences method: unlike the traditional algorithm, we calculate the center-surround differences over rows and columns separately. Experimental results show that our method detects salient objects accurately and rapidly.
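
    The center-surround idea at the heart of Itti-style saliency can be sketched as a difference of Gaussians on the intensity channel, as below; the scales are illustrative assumptions, and the row/column speed-up and edge-mask handling described in the paper are not reproduced.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def center_surround_saliency(intensity, sigma_center=2.0, sigma_surround=16.0):
        """Center-surround saliency as a difference of Gaussians on the intensity image,
        rescaled to [0, 1]."""
        img = intensity.astype(float)
        sal = np.abs(gaussian_filter(img, sigma_center) - gaussian_filter(img, sigma_surround))
        return (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)
    ```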

  14. The neural code of thoughts and feelings. Comment on "Topodynamics of metastable brains" by Arturo Tozzi et al.

    NASA Astrophysics Data System (ADS)

    Jaušovec, Norbert

    2017-07-01

    Recently, the number of theories trying to explain the brain - cognition - behavior relation has increased, promoted on the one hand by the development of sophisticated brain imaging techniques, such as functional magnetic resonance imaging (fMRI) and diffusion tensor imaging (DTI), and on the other by complex computational models based on chaos and graph theory. But has this really advanced our understanding of the brain-behavior relation beyond Descartes's dualistic mind-body division? One could critically argue that replacing the pineal body with extracellular electric fields represented in the electroencephalogram (EEG) as rapid transitional processes (RTS), combined with algebraic topology and dubbed brain topodynamics [1], is just putting lipstick on an outmoded evergreen.

  15. The influence of processor focus on speckle correlation statistics for a Shuttle imaging radar scene of Hurricane Josephine

    NASA Technical Reports Server (NTRS)

    Tilley, David G.

    1988-01-01

    The surface wave field produced by Hurricane Josephine was imaged by the L-band SAR aboard the Challenger on October 12, 1984. Exponential trends found in the two-dimensional autocorrelations of speckled image data support an equilibrium theory model of sea surface hydrodynamics. The notions of correlated specular reflection, surface coherence, optimal Doppler parameterization and spatial resolution are discussed within the context of a Poisson-Rayleigh statistical model of the SAR imaging process.

  16. Recognizing 3 D Objects from 2D Images Using Structural Knowledge Base of Genetic Views

    DTIC Science & Technology

    1988-08-31

    technical report. [BIE85] I. Biederman, "Human image understanding: Recent research and a theory", Computer Vision, Graphics, and Image Processing, vol... model bases", Technical Report 87-85, COINS Dept., University of Massachusetts, Amherst, MA 01003, August 1987. [BUR87b] Burns, J. B. and L. J. Kitchen... "Recognition in 2D images of 3D objects from large model bases using prediction hierarchies", Proc. IJCAI-10, 1987. [BUR89] J. B. Burns, forthcoming

  17. What difference reveals about similarity.

    PubMed

    Sagi, Eyal; Gentner, Dedre; Lovett, Andrew

    2012-08-01

    Detecting that two images are different is faster for highly dissimilar images than for highly similar images. Paradoxically, we showed that the reverse occurs when people are asked to describe how two images differ--that is, to state a difference between two images. Following structure-mapping theory, we propose that this disassociation arises from the multistage nature of the comparison process. Detecting that two images are different can be done in the initial (local-matching) stage, but only for pairs with low overlap; thus, "different" responses are faster for low-similarity than for high-similarity pairs. In contrast, identifying a specific difference generally requires a full structural alignment of the two images, and this alignment process is faster for high-similarity pairs. We described four experiments that demonstrate this dissociation and show that the results can be simulated using the Structure-Mapping Engine. These results pose a significant challenge for nonstructural accounts of similarity comparison and suggest that structural alignment processes play a significant role in visual comparison. Copyright © 2012 Cognitive Science Society, Inc.

  18. 2012 MULTIPHOTON PROCESSES GRC, JUNE 3-8, 2012

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walker, Barry

    2012-03-08

    The sessions will focus on: Attosecond science; Strong-field processes in molecules and solids; Generation of harmonics and attosecond pulses; Free-electron laser experiments and theory; Ultrafast imaging; Applications of very high intensity lasers; Propagation of intense laser fields.

  19. The case for integrating grounded theory and participatory action research: empowering clients to inform professional practice.

    PubMed

    Teram, Eli; Schachter, Candice L; Stalker, Carol A

    2005-10-01

    Grounded theory and participatory action research methods are distinct approaches to qualitative inquiry. Although grounded theory has been conceptualized in constructivist terms, it has elements of positivist thinking with an image of neutral search for objective truth through rigorous data collection and analysis. Participatory action research is based on a critique of this image and calls for more inclusive research processes. It questions the possibility of objective social sciences and aspires to engage people actively in all stages of generating knowledge. The authors applied both approaches in a project designed to explore the experiences of female survivors of childhood sexual abuse with physical therapy and subsequently develop a handbook on sensitive practice for clinicians that takes into consideration the needs and perspectives of these clients. Building on this experience, they argue that the integration of grounded theory and participatory action research can empower clients to inform professional practice.

  20. Brain-based Learning.

    ERIC Educational Resources Information Center

    Weiss, Ruth Palombo

    2000-01-01

    Discusses brain research and how new imaging technologies allow scientists to explore how human brains process memory, emotion, attention, patterning, motivation, and context. Explains how brain research is being used to revise learning theories. (JOW)

  1. Parallel and Serial Grouping of Image Elements in Visual Perception

    ERIC Educational Resources Information Center

    Houtkamp, Roos; Roelfsema, Pieter R.

    2010-01-01

    The visual system groups image elements that belong to an object and segregates them from other objects and the background. Important cues for this grouping process are the Gestalt criteria, and most theories propose that these are applied in parallel across the visual scene. Here, we find that Gestalt grouping can indeed occur in parallel in some…

  2. JIP: Java image processing on the Internet

    NASA Astrophysics Data System (ADS)

    Wang, Dongyan; Lin, Bo; Zhang, Jun

    1998-12-01

    In this paper, we present JIP - Java Image Processing on the Internet, a new Internet-based application for remote education and software presentation. JIP offers an integrated learning environment on the Internet where remote users not only can share static HTML documents and lecture notes, but also can run and reuse dynamic distributed software components, without having the source code or any extra work of software compilation, installation, and configuration. By implementing a platform-independent distributed computational model, local computational resources are consumed instead of the resources on a central server. As an extended Java applet, JIP allows users to select local image files on their computers or to specify any image on the Internet using a URL as input. Multimedia lectures such as streaming video/audio and digital images are integrated into JIP and intelligently associated with specific image processing functions. Watching demonstrations and practicing the functions with user-selected input data dramatically encourages learning interest, while promoting the understanding of image processing theory. The JIP framework can be easily applied to other subjects in education or software presentation, such as digital signal processing, business, mathematics, physics, or other areas such as employee training and charged software consumption.

  3. Imaging episodic memory: implications for cognitive theories and phenomena.

    PubMed

    Nyberg, L

    1999-01-01

    Functional neuroimaging studies are beginning to identify neuroanatomical correlates of various cognitive functions. This paper presents results relevant to several theories and phenomena of episodic memory, including component processes of episodic retrieval, encoding specificity, inhibition, item versus source memory, encoding-retrieval overlap, and the picture-superiority effect. Overall, by revealing specific activation patterns, the results provide support for existing theoretical views and they add some unique information which may be important to consider in future attempts to develop cognitive theories of episodic memory.

  4. Experiments with recursive estimation in astronomical image processing

    NASA Technical Reports Server (NTRS)

    Busko, I.

    1992-01-01

    Recursive estimation concepts have been applied to image enhancement problems since the 1970s. However, very few applications in the particular area of astronomical image processing are known. These concepts were derived, for 2-dimensional images, from the well-known theory of Kalman filtering in one dimension. The historic reasons for the application of these techniques to digital images are related to the images' scanned nature, in which the temporal output of a scanner device can be processed on-line by techniques borrowed directly from 1-dimensional recursive signal analysis. However, recursive estimation has particular properties that make it attractive even today, when large computer memories make the full scanned image available to the processor at any given time. One particularly important aspect is the ability of recursive techniques to deal with non-stationary phenomena, that is, phenomena whose statistical properties vary in time (or position in a 2-D image). Many image processing methods make underlying stationarity assumptions either for the stochastic field being imaged, for the imaging system properties, or both. They will underperform, or even fail, when applied to images that deviate significantly from stationarity. Recursive methods, on the contrary, make it feasible to perform adaptive processing, that is, to process the image with a processor whose properties are tuned to the image's local statistical properties. Recursive estimation can be used to build estimates of images degraded by such phenomena as noise and blur. We show examples of recursive adaptive processing of astronomical images, using several local statistical properties to drive the adaptive processor, such as average signal intensity, signal-to-noise ratio, and the autocorrelation function. The software was developed under IRAF, and as such will be made available to interested users.
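
    As a concrete miniature of recursive estimation along a scan line, the following scalar Kalman filter with a random-walk signal model smooths one noisy row of pixels; the process and measurement variances q and r are assumed values, and an adaptive version would retune them from local statistics as described above.

    ```python
    import numpy as np

    def recursive_line_estimate(scan, q=0.01, r=1.0):
        """Scalar Kalman filter (random-walk signal model) run recursively along one scan
        line; q and r are assumed process and measurement noise variances."""
        x, p = float(scan[0]), r
        out = np.empty(len(scan))
        for i, z in enumerate(scan):
            p = p + q                        # predict
            k = p / (p + r)                  # Kalman gain
            x = x + k * (z - x)              # update with the new pixel value
            p = (1.0 - k) * p
            out[i] = x
        return out
    ```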

  5. Image processing system design for microcantilever-based optical readout infrared arrays

    NASA Astrophysics Data System (ADS)

    Tong, Qiang; Dong, Liquan; Zhao, Yuejin; Gong, Cheng; Liu, Xiaohua; Yu, Xiaomei; Yang, Lei; Liu, Weiyu

    2012-12-01

    Compared with traditional infrared imaging technology, the new type of optical-readout uncooled infrared imaging technology based on MEMS has many advantages, such as low cost, small size, and simple fabrication. In addition, theory shows that the technology offers high thermal detection sensitivity, so it has very broad application prospects in the field of high-performance infrared detection. This paper mainly focuses on an image capturing and processing system for this new type of optical-readout uncooled infrared imaging technology based on MEMS. The image capturing and processing system consists of software and hardware. We build our image processing core hardware platform on TI's high-performance DSP chip, the TMS320DM642, and then design our image capturing board around the MT9P031, Micron's high-frame-rate, low-power CMOS image sensor. Finally, we use Intel's LXT971A network transceiver to design the network output board. The software system is built on the real-time operating system DSP/BIOS. We design our video capture driver program based on TI's class mini-driver and our network output program based on the NDK kit for image capturing, processing, and transmission. Experiments show that the system offers high capture resolution and fast processing speed, with network transmission speeds of up to 100 Mbps.

  6. Acid diffusion, standing waves, and information theory: a molecular-scale model of chemically amplified resist

    NASA Astrophysics Data System (ADS)

    Trefonas, Peter, III; Allen, Mary T.

    1992-06-01

    Shannon's information theory is adapted to analyze the photolithographic process, defining the mask pattern as the prior state. Definitions and constraints to the general theory are developed so that the information content at various stages of the lithographic process can be described. Its application is illustrated by exploring the information content within projected aerial images and resultant latent images. Next, a 3-dimensional molecular scale model of exposure, acid diffusion, and catalytic crosslinking in acid-hardened resists (AHR) is presented. In this model, initial positions of photogenerated acids are determined by probability functions generated from the aerial images and the local light intensity in the film. In order to simulate post-exposure baking processes, acids are diffused in a random walk manner, for which the catalytic chain length and the average distance between crosslinks can be set. Crosslink locations are defined in terms of the topologically minimized number required to link different chains. The size and location of polymer chains involved in a larger scale crosslinked network is established and related to polymer solubility. In this manner, the nature of the crosslinked latent image can be established. Good correlation with experimental data is found for the calculated percent insolubilization as a function of dose when the rms acid diffusion length is about 500 angstroms. Information analysis is applied in detail to the specific example of AHR chemistry. The information contained within the 3-D crosslinked latent image is explored as a function of exposure dose, catalytic chain length, and average distance between crosslinks. Eopt (the exposure dose which optimizes the information contained within the latent image) was found to vary with catalytic chain length in a manner similar to that observed experimentally in a plot of E90 versus post-exposure bake time. Surprisingly, the information content of the crosslinked latent image remains high even when rms diffusion lengths are as long as 1500 angstroms. The information content of a standing wave is shown to decrease with increasing diffusion length, with essentially all standing wave information being lost at diffusion lengths greater than 450 angstroms. A unique mechanism for self-contrast enhancement and high resolution in AHR resist is proposed.
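    As a rough illustration of the Monte Carlo elements described above, the sketch below generates photogenerated acids in 1-D with probability proportional to an aerial image modulated by a standing wave, diffuses them by a Gaussian random walk of a chosen rms length, and reports how the latent-image modulation washes out with diffusion. It is a toy 1-D stand-in for the authors' 3-D model; all numbers are illustrative assumptions.

```python
# Toy 1-D acid generation and diffusion: diffusion washes out the
# standing-wave modulation of the latent image.
import numpy as np

nm_per_px = 5.0
x = np.arange(0, 2000, nm_per_px)                    # 2 um of resist, in nm
aerial = 1 + 0.8 * np.cos(2 * np.pi * x / 500)       # mask pattern, 500 nm pitch
standing = 1 + 0.5 * np.cos(2 * np.pi * x / 120)     # standing wave, 120 nm period
intensity = aerial * standing

rng = np.random.default_rng(1)
n_acids = 20000
acid_x = rng.choice(x, size=n_acids, p=intensity / intensity.sum())

for rms_nm in (0.0, 150.0, 450.0):                   # rms diffusion length
    diffused = acid_x + rng.normal(0.0, rms_nm, n_acids) if rms_nm else acid_x
    latent, _ = np.histogram(diffused, bins=x)       # crude latent image
    modulation = (latent.max() - latent.min()) / (latent.max() + latent.min())
    print(f"rms diffusion {rms_nm:5.0f} nm -> latent-image modulation {modulation:.2f}")
```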

  7. Data analysis for GOPEX image frames

    NASA Technical Reports Server (NTRS)

    Levine, B. M.; Shaik, K. S.; Yan, T.-Y.

    1993-01-01

    The data analysis based on the image frames received at the Solid State Imaging (SSI) camera of the Galileo Optical Experiment (GOPEX) demonstration conducted between 9-16 Dec. 1992 is described. Laser uplink was successfully established between the ground and the Galileo spacecraft during its second Earth-gravity-assist phase in December 1992. SSI camera frames were acquired which contained images of detected laser pulses transmitted from the Table Mountain Facility (TMF), Wrightwood, California, and the Starfire Optical Range (SOR), Albuquerque, New Mexico. Laser pulse data were processed using standard image-processing techniques at the Multimission Image Processing Laboratory (MIPL) for preliminary pulse identification and to produce public release images. Subsequent image analysis corrected for background noise to measure received pulse intensities. Data were plotted to obtain histograms on a daily basis and were then compared with theoretical results derived from applicable weak-turbulence and strong-turbulence considerations. Processing steps are described and the theories are compared with the experimental results. Quantitative agreement was found in both turbulence regimes, and better agreement would have been found, given more received laser pulses. Future experiments should consider methods to reliably measure low-intensity pulses and, through experimental planning, to geometrically locate pulse positions with greater certainty.

  8. Improving lateral resolution and image quality of optical coherence tomography by the multi-frame superresolution technique for 3D tissue imaging.

    PubMed

    Shen, Kai; Lu, Hui; Baig, Sarfaraz; Wang, Michael R

    2017-11-01

    The multi-frame superresolution technique is introduced to significantly improve the lateral resolution and image quality of spectral domain optical coherence tomography (SD-OCT). Using several sets of low resolution C-scan 3D images with lateral sub-spot-spacing shifts between sets, the multi-frame superresolution processing of these sets at each depth layer reconstructs a higher resolution and quality lateral image. Layer by layer processing yields an overall high lateral resolution and quality 3D image. In theory, the superresolution processing, including deconvolution, can address the diffraction limit, lateral scan density, and background noise problems together. In experiments, a lateral resolution improvement of ~3 times, reaching 7.81 µm and 2.19 µm with sample arm optics of 0.015 and 0.05 numerical aperture respectively, as well as a doubling of image quality, has been confirmed by imaging a known resolution test target. Improved lateral resolution on in vitro skin C-scan images has been demonstrated. For in vivo 3D SD-OCT imaging of human skin, fingerprint, and retina layers, we used the multi-modal volume registration method to effectively estimate the lateral image shifts among different C-scans caused by random minor unintended live body motion. Further processing of these images generated high lateral resolution 3D images as well as high quality B-scan images of these in vivo tissues.
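    The core shift-and-add step of multi-frame superresolution can be sketched as follows for a single depth layer, assuming the sub-pixel lateral shifts are already known; the registration and deconvolution stages of the actual SD-OCT pipeline are omitted, and NumPy is an assumed dependency.

```python
# Shift-and-add superresolution: interleave shifted low-res frames on a finer grid.
import numpy as np

def shift_and_add(frames, shifts, factor):
    """frames: low-res arrays; shifts: (dy, dx) per frame, in low-res pixels."""
    h, w = frames[0].shape
    accum = np.zeros((h * factor, w * factor))
    count = np.zeros_like(accum)
    for frame, (dy, dx) in zip(frames, shifts):
        yy = np.arange(h)[:, None] * factor + int(round(dy * factor))
        xx = np.arange(w)[None, :] * factor + int(round(dx * factor))
        accum[yy, xx] += frame          # place each frame on the finer grid
        count[yy, xx] += 1
    return np.where(count > 0, accum / np.maximum(count, 1), 0.0)

# Example: four frames shifted by half a low-res pixel tile a 2x finer grid.
factor = 2
truth = np.random.default_rng(0).random((64, 64))     # stand-in "high-res" scene
shifts = [(0.0, 0.0), (0.0, 0.5), (0.5, 0.0), (0.5, 0.5)]
frames = [truth[int(round(dy * factor))::factor, int(round(dx * factor))::factor]
          for dy, dx in shifts]
high_res = shift_and_add(frames, shifts, factor)
assert np.allclose(high_res, truth)    # these shifts exactly tile the fine grid
```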

  9. An Improved Image Matching Method Based on Surf Algorithm

    NASA Astrophysics Data System (ADS)

    Chen, S. J.; Zheng, S. Z.; Xu, Z. G.; Guo, C. C.; Ma, X. L.

    2018-04-01

    Many state-of-the-art image matching methods based on feature matching have been widely studied in the remote sensing field. These feature matching methods achieve high operating efficiency, but suffer from low accuracy and robustness. This paper proposes an improved image matching method based on the SURF algorithm. The proposed method introduces a color invariant transformation, information entropy theory, and a series of constraint conditions to increase feature point detection and matching accuracy. First, the color invariant transformation model is introduced for the two matching images, aiming at obtaining more color information during the matching process, and information entropy theory is used to extract the most informative content of the two matching images. Then the SURF algorithm is applied to detect and describe points from the images. Finally, constraint conditions including Delaunay triangulation construction, a similarity function, and a projective invariant are employed to eliminate mismatches so as to improve matching precision. The proposed method has been validated on remote sensing images, and the results demonstrate its high precision and robustness.
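    A hedged sketch of feature-based matching with constraint-based mismatch rejection is given below using OpenCV. ORB is used as a freely available stand-in for SURF (which requires the non-free opencv-contrib build), and Lowe's ratio test plus a RANSAC homography stand in for the paper's fuller set of constraint conditions; the input file names are hypothetical.

```python
import numpy as np
import cv2

img1 = cv2.imread("scene_a.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input files
img2 = cv2.imread("scene_b.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
raw = matcher.knnMatch(des1, des2, k=2)

# Ratio-test constraint: keep a match only if clearly better than its runner-up.
good = [m for m, n in (p for p in raw if len(p) == 2) if m.distance < 0.75 * n.distance]

# Geometric constraint: a RANSAC-fitted homography rejects remaining outliers.
if len(good) >= 4:
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    print(f"{int(inlier_mask.sum())} geometrically consistent matches")
```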

  10. Incremental retinal-defocus theory of myopia development--schematic analysis and computer simulation.

    PubMed

    Hung, George K; Ciuffreda, Kenneth J

    2007-07-01

    Previous theories of myopia development involved subtle and complex processes such as the sensing and analyzing of chromatic aberration, spherical aberration, spatial gradient of blur, or spatial frequency content of the retinal image, but they have not been able to explain satisfactorily the diverse experimental results reported in the literature. On the other hand, our newly proposed incremental retinal-defocus theory (IRDT) has been able to explain all of these results. This theory is based on a relatively simple and direct mechanism for the regulation of ocular growth. It states that a time-averaged decrease in retinal-image defocus area decreases the rate of release of retinal neuromodulators, which decreases the rate of retinal proteoglycan synthesis with an associated decrease in scleral structural integrity. This increases the rate of scleral growth, and in turn the eye's axial length, which leads to myopia. Our schematic analysis has provided a clear explanation for the eye's ability to grow in the appropriate direction under a wide range of experimental conditions. In addition, the theory has been able to explain how repeated cycles of nearwork-induced transient myopia lead to repeated periods of decreased retinal-image defocus, whose cumulative effect over an extended period of time results in an increase in axial growth that leads to permanent myopia. Thus, this unifying theory forms the basis for understanding the underlying retinal and scleral mechanisms of myopia development.

  11. Better safe than sorry: simplistic fear-relevant stimuli capture attention.

    PubMed

    Forbes, Sarah J; Purkis, Helena M; Lipp, Ottmar V

    2011-08-01

    It has been consistently demonstrated that fear-relevant images capture attention preferentially over fear-irrelevant images. Current theory suggests that this faster processing could be mediated by an evolved module that allows certain stimulus features to attract attention automatically, prior to the detailed processing of the image. The present research investigated whether simplified images of fear-relevant stimuli would produce interference with target detection in a visual search task. In Experiment 1, silhouettes and degraded silhouettes of fear-relevant animals produced more interference than did the fear-irrelevant images. Experiment 2 compared the effects of fear-relevant and fear-irrelevant distracters and confirmed that the interference produced by fear-relevant distracters was not an effect of novelty. Experiment 3 suggested that fear-relevant stimuli produced interference regardless of whether participants were instructed as to the content of the images. The three experiments indicate that even very simplistic images of fear-relevant animals can divert attention.

  12. What does cancer treatment look like in consumer cancer magazines? An exploratory analysis of photographic content in consumer cancer magazines.

    PubMed

    Phillips, Selene G; Della, Lindsay J; Sohn, Steve H

    2011-04-01

    In an exploratory analysis of several highly circulated consumer cancer magazines, the authors evaluated congruency between visual images of cancer patients and target audience risk profile. The authors assessed 413 images of cancer patients/potential patients for demographic variables such as age, gender, and ethnicity/race. They compared this profile with actual risk statistics. The images in the magazines are considerably younger, more female, and more White than what is indicated by U.S. cancer risk statistics. The authors also assessed images for visual signs of cancer testing/diagnosis and treatment. Few individuals show obvious signs of cancer treatment (e.g., head scarves, skin/nail abnormalities, thin body types). Most images feature healthier looking people, some actively engaged in construction work, bicycling, and yoga. In contrast, a scan of the editorial content showed that nearly two thirds of the articles focus on treatment issues. To explicate the implications of this imagery-text discontinuity on readers' attention and cognitive processing, the authors used constructs from information processing and social identity theories. On the basis of these models/theories, the authors provide recommendations for consumer cancer magazines, suggesting that the imagery be adjusted to reflect cancer diagnosis realities for enhanced message attention and comprehension.

  13. Generative Adversarial Networks: An Overview

    NASA Astrophysics Data System (ADS)

    Creswell, Antonia; White, Tom; Dumoulin, Vincent; Arulkumaran, Kai; Sengupta, Biswa; Bharath, Anil A.

    2018-01-01

    Generative adversarial networks (GANs) provide a way to learn deep representations without extensively annotated training data. They achieve this by deriving backpropagation signals through a competitive process involving a pair of networks. The representations that can be learned by GANs may be used in a variety of applications, including image synthesis, semantic image editing, style transfer, image super-resolution and classification. The aim of this review paper is to provide an overview of GANs for the signal processing community, drawing on familiar analogies and concepts where possible. In addition to identifying different methods for training and constructing GANs, we also point to remaining challenges in their theory and application.
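    The competitive training process can be illustrated with a minimal generator/discriminator loop. The sketch below uses PyTorch (an assumption of this example, not of the review) and a toy 2-D Gaussian in place of image data.

```python
# Minimal GAN training loop: G learns to produce samples D cannot tell from data.
import torch
import torch.nn as nn

def real_batch(n):                       # toy "data": 2-D Gaussian centred at (2, -1)
    return torch.randn(n, 2) * 0.5 + torch.tensor([2.0, -1.0])

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # Discriminator update: real -> 1, fake -> 0.
    real = real_batch(64)
    fake = G(torch.randn(64, 8)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator update: backpropagation signal comes through D (non-saturating loss).
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

print(G(torch.randn(1000, 8)).mean(dim=0))   # should move toward (2, -1)
```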

  14. Validation of Western North America Models based on finite-frequency and ray theory imaging methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larmat, Carene; Maceira, Monica; Porritt, Robert W.

    2015-02-02

    We validate seismic models developed for western North America with a focus on the effect of the imaging method on data fit. We use the DNA09 models, for which our collaborators provide models built with both the body-wave finite-frequency (FF) approach and the ray-theory (RT) approach, with the same data selection, processing, and reference models.

  15. Racial Identity and the Development of Body Image Issues among African American Adolescent Girls

    ERIC Educational Resources Information Center

    Hesse-Biber, Sharlene Nagy; Howling, Stephanie A.; Leavy, Patricia; Lovejoy, Meg

    2004-01-01

    This study focuses on the impact of race, and its intersection with gender, in influencing and/or preventing the development of disordered body image. Specifically, Black samples are examined to see the role that racial identity plays in the process of developing such attitudes. Using qualitative data analysis methods rooted in grounded theory,…

  16. Developments of Finite-Frequency Seismic Theory and Applications to Regional Tomographic Imaging

    DTIC Science & Technology

    2009-01-31

    In this project, we use the “banana-doughnut” sensitivity kernels of teleseismic body waves to image the crust and mantle beneath eastern Eurasia, replacing body-wave ray paths with “banana-doughnut” sensitivity kernels calculated in 1D (Dahlen et al., 2000; Hung et al., 2000; Zhao et al., 2000).

  17. Integration of instrumentation and processing software of a laser speckle contrast imaging system

    NASA Astrophysics Data System (ADS)

    Carrick, Jacob J.

    Laser speckle contrast imaging (LSCI) has the potential to be a powerful tool in medicine, but more research in the field is required so it can be used properly. To help in the progression of Michigan Tech's research in the field, a graphical user interface (GUI) was designed in Matlab to control the instrumentation of the experiments as well as process the raw speckle images into contrast images while they are being acquired. The design of the system was successful and is currently being used by Michigan Tech's Biomedical Engineering department. This thesis describes the development of the LSCI GUI as well as offering a full introduction into the history, theory and applications of LSCI.

  18. The minimal local-asperity hypothesis of early retinal lateral inhibition.

    PubMed

    Balboa, R M; Grzywacz, N M

    2000-07-01

    Recently, we found that the information-theoretic theories existing in the literature cannot explain the behavior of the extent of the lateral inhibition mediated by retinal horizontal cells as a function of background light intensity. These theories can explain the fall of the extent from intermediate to high intensities, but not its rise from dim to intermediate intensities. We propose an alternate hypothesis that accounts for the extent's bell-shaped behavior. This hypothesis proposes that lateral-inhibition adaptation in the early retina is part of a system to extract several image attributes, such as occlusion borders and contrast. To do so, this system would use prior probabilistic knowledge about the biological processing and relevant statistics in natural images. A key novel statistic used here is the probability of the presence of an occlusion border as a function of local contrast. Using this probabilistic knowledge, the retina would optimize the spatial profile of lateral inhibition to minimize attribute-extraction error. The two significant errors that this minimization process must reduce are due to the quantal noise in photoreceptors and the straddling of occlusion borders by lateral inhibition.

  19. An Image Processing Algorithm Based On FMAT

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Pal, Sankar K.

    1995-01-01

    Information is deleted in ways minimizing adverse effects on reconstructed images. A new grey-scale generalization of the medial axis transformation (MAT), called FMAT (short for Fuzzy MAT), is proposed. It is formulated by making a natural extension to fuzzy-set theory of all definitions and conditions (e.g., the characteristic function of a disk, the subset condition of a disk, and redundancy checking) used in defining the MAT of a crisp set. It does not need the image to have any kind of a priori segmentation, and allows the medial axis (and skeleton) to be a fuzzy subset of the input image. The resulting FMAT (consisting of maximal fuzzy disks) is capable of reconstructing exactly the original image.
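    For orientation, the sketch below computes the crisp medial axis transform that FMAT generalizes: the Euclidean distance transform gives the maximal-disk radius at each object pixel, and its ridge points approximate the medial axis. The fuzzy extension described above (memberships in place of the crisp characteristic function) is not reproduced; SciPy is an assumed dependency.

```python
# Crisp MAT sketch: maximal-disk radii from the distance transform.
import numpy as np
from scipy.ndimage import distance_transform_edt, maximum_filter

obj = np.zeros((64, 64), dtype=bool)
obj[16:48, 20:44] = True                 # a simple binary object

# Radius of the maximal inscribed disk centred at each object pixel.
radius = distance_transform_edt(obj)

# Crude skeleton: ridge points (local maxima) of the distance transform.
# The exact MAT keeps *every* centre of a maximal disk; this local-maxima
# approximation is only for visualisation.
skeleton = obj & (radius == maximum_filter(radius, size=3))
```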

  20. Wave field restoration using three-dimensional Fourier filtering method.

    PubMed

    Kawasaki, T; Takai, Y; Ikuta, T; Shimizu, R

    2001-11-01

    A wave field restoration method in transmission electron microscopy (TEM) was mathematically derived based on a three-dimensional (3D) image formation theory. Wave field restoration using this method together with spherical aberration correction was experimentally confirmed in through-focus images of amorphous tungsten thin film, and the resolution of the reconstructed phase image was successfully improved from the Scherzer resolution limit to the information limit. In an application of this method to a crystalline sample, the surface structure of Au(110) was observed in a profile-imaging mode. The processed phase image showed quantitatively the atomic relaxation of the topmost layer.

  1. Rotation covariant image processing for biomedical applications.

    PubMed

    Skibbe, Henrik; Reisert, Marco

    2013-01-01

    With the advent of novel biomedical 3D image acquisition techniques, the efficient and reliable analysis of volumetric images has become more and more important. The amount of data is enormous and demands an automated processing. The applications are manifold, ranging from image enhancement, image reconstruction, and image description to object/feature detection and high-level contextual feature extraction. In most scenarios, it is expected that geometric transformations alter the output in a mathematically well-defined manner. In this paper, we focus on 3D translations and rotations. Many algorithms rely on intensity or low-order tensorial-like descriptions to fulfill this demand. This paper proposes a general mathematical framework based on mathematical concepts and theories transferred from mathematical physics and harmonic analysis into the domain of image analysis and pattern recognition. Based on two basic operations, spherical tensor differentiation and spherical tensor multiplication, we show how to design a variety of 3D image processing methods in an efficient way. The framework has already been applied to several biomedical applications ranging from feature and object detection tasks to image enhancement and image restoration techniques. In this paper, the proposed methods are applied to a variety of different 3D data modalities stemming from medical and biological sciences.

  2. Image wavelet decomposition and applications

    NASA Technical Reports Server (NTRS)

    Treil, N.; Mallat, S.; Bajcsy, R.

    1989-01-01

    The general problem of computer vision has been investigated for more than 20 years and is still one of the most challenging fields in artificial intelligence. Indeed, taking a look at the human visual system can give us an idea of the complexity of any solution to the problem of visual recognition. This general task can be decomposed into a whole hierarchy of problems ranging from pixel processing to high level segmentation and complex objects recognition. Contrasting an image at different representations provides useful information such as edges. An example of low level signal and image processing using the theory of wavelets is introduced which provides the basis for multiresolution representation. Like the human brain, we use a multiorientation process which detects features independently in different orientation sectors. So, images of the same orientation but of different resolutions are contrasted to gather information about an image. An interesting image representation using energy zero crossings is developed. This representation is shown to be experimentally complete and leads to some higher level applications such as edge and corner finding, which in turn provides two basic steps to image segmentation. The possibilities of feedback between different levels of processing are also discussed.
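    A minimal multiresolution decomposition in the same spirit can be written with the PyWavelets library (an assumed dependency; the paper builds its own multiorientation wavelet representation rather than using this package).

```python
# Multiresolution wavelet decomposition of an image into approximation and
# horizontal/vertical/diagonal detail bands at several scales.
import numpy as np
import pywt

image = np.random.default_rng(0).random((256, 256))     # stand-in image

coeffs = pywt.wavedec2(image, wavelet="db2", level=3)
approx = coeffs[0]                                       # coarsest approximation
for lvl, (cH, cV, cD) in enumerate(coeffs[1:], start=1):
    print(f"level {lvl}: detail bands of shape {cH.shape}")

# Simple edge-like map from the finest-scale detail coefficients.
cH, cV, cD = coeffs[-1]
edges = np.sqrt(cH ** 2 + cV ** 2)
```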

  3. Perceptual load interacts with stimulus processing across sensory modalities.

    PubMed

    Klemen, J; Büchel, C; Rose, M

    2009-06-01

    According to perceptual load theory, processing of task-irrelevant stimuli is limited by the perceptual load of a parallel attended task if both the task and the irrelevant stimuli are presented to the same sensory modality. However, it remains a matter of debate whether the same principles apply to cross-sensory perceptual load and, more generally, what form cross-sensory attentional modulation in early perceptual areas takes in humans. Here we addressed these questions using functional magnetic resonance imaging. Participants undertook an auditory one-back working memory task of low or high perceptual load, while concurrently viewing task-irrelevant images at one of three object visibility levels. The processing of the visual and auditory stimuli was measured in the lateral occipital cortex (LOC) and auditory cortex (AC), respectively. Cross-sensory interference with sensory processing was observed in both the LOC and AC, in accordance with previous results of unisensory perceptual load studies. The present neuroimaging results therefore warrant the extension of perceptual load theory from a unisensory to a cross-sensory context: a validation of this cross-sensory interference effect through behavioural measures would consolidate the findings.

  4. Comment on Vaknine, R. and Lorenz, W.J. Lateral filtering of medical ultrasonic B-scans before image generation.

    PubMed

    Dickinson, R J

    1985-04-01

    In a recent paper, Vaknine and Lorenz discuss the merits of lateral deconvolution of demodulated B-scans. While this technique will decrease the lateral blurring of single discrete targets, such as the diaphragm in their figure 3, it is inappropriate to apply the method to the echoes arising from inhomogeneous structures such as soft tissue. In this latter case, the echoes from individual scatterers within the resolution cell of the transducer interfere to give random fluctuations in received echo amplitude termed speckle. Although this process can be modeled as a linear convolution similar to that of conventional image formation theory, the process of demodulation is a nonlinear process which loses the all-important phase information, and prevents the subsequent restoration of the image by Wiener filtering, itself a linear process.

  5. Simultaneous parametric generation and up-conversion of entangled optical images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saygin, M. Yu., E-mail: mihasyu@gmail.com; Chirkin, A. S., E-mail: aschirkin@rambler.r

    A quantum theory of parametric amplification and frequency conversion of an optical image in coupled nonlinear optical processes that include one parametric amplification process at high-frequency pumping and two up-conversion processes in the same pump field is developed. The field momentum operator that takes into account the diffraction and group velocities of the waves is used to derive the quantum equations related to the spatial dynamics of the images during the interaction. An optical scheme for the amplification and conversion of a close image is considered. The mean photon number density and signal-to-noise ratio are calculated in the fixed-pump-field approximation for images at various frequencies. It has been established that the signal-to-noise ratio decreases with increasing interaction length in the amplified image and increases in the images at the generated frequencies, tending to asymptotic values for all interacting waves. The variance of the difference of the numbers of photons is calculated for various pairs of frequencies. The quantum entanglement of the optical images formed in a high-frequency pump field is shown to be converted to higher frequencies during the generation of sum frequencies. Thus, two pairs of entangled optical images are produced in the process considered.

  6. Coarsening in Solid-Liquid Mixtures Studied on the Space Shuttle

    NASA Technical Reports Server (NTRS)

    Caruso, John J.

    1999-01-01

    Ostwald ripening, or coarsening, is a process in which large particles in a two-phase mixture grow at the expense of small particles. It is a ubiquitous natural phenomenon occurring in the late stages of virtually all phase separation processes. In addition, a large number of commercially important alloys undergo coarsening because they are composed of particles embedded in a matrix. Many of them, such as high-temperature superalloys used for turbine blade materials and low-temperature aluminum alloys, coarsen in the solid state. In addition, many alloys, such as the tungsten-heavy metal systems, coarsen in the solid-liquid state during liquid phase sintering. Numerous theories have been proposed that predict the rate at which the coarsening process occurs and the shape of the particle size distribution. Unfortunately, these theories have never been tested using a system that satisfies all the assumptions of the theory. In an effort to test these theories, NASA studied the coarsening process in a solid-liquid mixture composed of solid tin particles in a liquid lead-tin matrix. On Earth, the solid tin particles float to the surface of the sample, like ice in water. In contrast, in a microgravity environment this does not occur. The microstructures in the ground- and space-processed samples (see the photos) show clearly the effects of gravity on the coarsening process. The STS-83-processed sample (right image) shows nearly spherical uniformly dispersed solid tin particles. In contrast, the identically processed, ground-based sample (left image) shows significant density-driven, nonspherical particles, and because of the higher effective solid volume fraction, a larger particle size after the same coarsening time. The "Coarsening in Solid-Liquid Mixtures" (CSLM) experiment was conducted in the Middeck Glovebox facility (MGBX) flown aboard the shuttle in the Microgravity Science Laboratory (MSL-1/1R) on STS-83/94. The primary objective of CSLM is to measure the temporal evolution of the solid particles during coarsening.

  7. Processing Distracting Non-face Emotional Images: No Evidence of an Age-Related Positivity Effect

    PubMed Central

    Madill, Mark; Murray, Janice E.

    2017-01-01

    Cognitive aging may be accompanied by increased prioritization of social and emotional goals that enhance positive experiences and emotional states. The socioemotional selectivity theory suggests this may be achieved by giving preference to positive information and avoiding or suppressing negative information. Although there is some evidence of a positivity bias in controlled attention tasks, it remains unclear whether a positivity bias extends to the processing of affective stimuli presented outside focused attention. In two experiments, we investigated age-related differences in the effects of to-be-ignored non-face affective images on target processing. In Experiment 1, 27 older (64–90 years) and 25 young adults (19–29 years) made speeded valence judgments about centrally presented positive or negative target images taken from the International Affective Picture System. To-be-ignored distractor images were presented above and below the target image and were either positive, negative, or neutral in valence. The distractors were considered task relevant because they shared emotional characteristics with the target stimuli. Both older and young adults responded slower to targets when distractor valence was incongruent with target valence relative to when distractors were neutral. Older adults responded faster to positive than to negative targets but did not show increased interference effects from positive distractors. In Experiment 2, affective distractors were task irrelevant as the target was a three-digit array and did not share emotional characteristics with the distractors. Twenty-six older (63–84 years) and 30 young adults (18–30 years) gave speeded responses on a digit disparity task while ignoring the affective distractors positioned in the periphery. Task performance in either age group was not influenced by the task-irrelevant affective images. In keeping with the socioemotional selectivity theory, these findings suggest that older adults preferentially process task-relevant positive non-face images but only when presented within the main focus of attention. PMID:28450848

  8. Processing Distracting Non-face Emotional Images: No Evidence of an Age-Related Positivity Effect.

    PubMed

    Madill, Mark; Murray, Janice E

    2017-01-01

    Cognitive aging may be accompanied by increased prioritization of social and emotional goals that enhance positive experiences and emotional states. The socioemotional selectivity theory suggests this may be achieved by giving preference to positive information and avoiding or suppressing negative information. Although there is some evidence of a positivity bias in controlled attention tasks, it remains unclear whether a positivity bias extends to the processing of affective stimuli presented outside focused attention. In two experiments, we investigated age-related differences in the effects of to-be-ignored non-face affective images on target processing. In Experiment 1, 27 older (64-90 years) and 25 young adults (19-29 years) made speeded valence judgments about centrally presented positive or negative target images taken from the International Affective Picture System. To-be-ignored distractor images were presented above and below the target image and were either positive, negative, or neutral in valence. The distractors were considered task relevant because they shared emotional characteristics with the target stimuli. Both older and young adults responded slower to targets when distractor valence was incongruent with target valence relative to when distractors were neutral. Older adults responded faster to positive than to negative targets but did not show increased interference effects from positive distractors. In Experiment 2, affective distractors were task irrelevant as the target was a three-digit array and did not share emotional characteristics with the distractors. Twenty-six older (63-84 years) and 30 young adults (18-30 years) gave speeded responses on a digit disparity task while ignoring the affective distractors positioned in the periphery. Task performance in either age group was not influenced by the task-irrelevant affective images. In keeping with the socioemotional selectivity theory, these findings suggest that older adults preferentially process task-relevant positive non-face images but only when presented within the main focus of attention.

  9. Information theoretical assessment of visual communication with subband coding

    NASA Astrophysics Data System (ADS)

    Rahman, Zia-ur; Fales, Carl L.; Huck, Friedrich O.

    1994-09-01

    A well-designed visual communication channel is one which transmits the most information about a radiance field with the fewest artifacts. The role of image processing, encoding and restoration is to improve the quality of visual communication channels by minimizing the error in the transmitted data. Conventionally this role has been analyzed strictly in the digital domain neglecting the effects of image-gathering and image-display devices on the quality of the image. This results in the design of a visual communication channel which is `suboptimal.' We propose an end-to-end assessment of the imaging process which incorporates the influences of these devices in the design of the encoder and the restoration process. This assessment combines Shannon's communication theory with Wiener's restoration filter and with the critical design factors of the image gathering and display devices, thus providing the metrics needed to quantify and optimize the end-to-end performance of the visual communication channel. Results show that the design of the image-gathering device plays a significant role in determining the quality of the visual communication channel and in designing the analysis filters for subband encoding.
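    The restoration stage discussed above can be illustrated with a basic frequency-domain Wiener filter; the sketch below assumes a known point-spread function and a scalar noise-to-signal ratio, and does not model the image-gathering and display devices that the paper's end-to-end analysis includes.

```python
# Frequency-domain Wiener restoration with a known PSF.
import numpy as np

def wiener_restore(blurred, psf, noise_to_signal=0.01):
    H = np.fft.fft2(np.fft.ifftshift(psf), s=blurred.shape)
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + noise_to_signal)     # Wiener filter
    return np.real(np.fft.ifft2(W * G))

# Example: blur a test image with a Gaussian PSF, add noise, then restore.
rng = np.random.default_rng(0)
img = np.zeros((128, 128)); img[40:90, 50:80] = 1.0
y, x = np.mgrid[-64:64, -64:64]
psf = np.exp(-(x ** 2 + y ** 2) / (2 * 2.0 ** 2)); psf /= psf.sum()
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(np.fft.ifftshift(psf))))
noisy = blurred + rng.normal(0, 0.01, img.shape)
restored = wiener_restore(noisy, psf)
```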

  10. The interactive processes of accommodation and vergence.

    PubMed

    Semmlow, J L; Bérard, P V; Vercher, J L; Putteman, A; Gauthier, G M

    1994-01-01

    A near target generates two different, though related stimuli: image disparity and image blur. Fixation of that near target evokes three motor responses: the so-called oculomotor "near triad". It has long been known that both disparity and blur stimuli are each capable of independently generating all three responses, and a recent theory of near triad control (the Dual Interactive Theory) describes how these stimulus components normally work together in the aid of near vision. However, this theory also indicates that when the system becomes unbalanced, as in high AC/A ratios of some accommodative esotropes, the two components will become antagonistic. In this situation, the interaction between the blur and disparity driven components exaggerates the imbalance created in the vergence motor output. Conversely, there is enhanced restoration when the AC/A ratio is effectively reduced surgically.

  11. Restoration of solar and star images with phase diversity-based blind deconvolution

    NASA Astrophysics Data System (ADS)

    Li, Qiang; Liao, Sheng; Wei, Honggang; Shen, Mangzuo

    2007-04-01

    The images recorded by a ground-based telescope are often degraded by atmospheric turbulence and the aberration of the optical system. Phase diversity-based blind deconvolution is an effective post-processing method that can be used to overcome the turbulence-induced degradation. The method uses an ensemble of short-exposure images obtained simultaneously from multiple cameras to jointly estimate the object and the wavefront distribution on the pupil. Based on signal estimation theory and optimization theory, we derive the cost function and solve the large-scale optimization problem using a limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method. We apply the method to turbulence-degraded images generated with a computer, solar images acquired with the Swedish Vacuum Solar Telescope (SVST, 0.475 m) in La Palma, and star images collected with the 1.2-m telescope at Yunnan Observatory. In order to avoid edge effects in the restoration of the solar images, a modified Hanning apodized window is adopted. The star image can still be restored when the defocus distance is measured inaccurately. The restored results demonstrate that the method is efficient for removing the effect of turbulence and reconstructing point-like or extended objects.
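    The optimization machinery can be sketched in a reduced setting: the code below restores an image blurred by a known kernel by minimizing a least-squares cost with SciPy's L-BFGS-B solver, whereas the paper jointly estimates the object and the pupil wavefront from phase-diverse image pairs. All names and numbers are illustrative.

```python
# Least-squares deconvolution with a known blur, solved by L-BFGS-B.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
truth = np.zeros((64, 64)); truth[20:44, 24:40] = 1.0
y, x = np.mgrid[-32:32, -32:32]
psf = np.exp(-(x ** 2 + y ** 2) / (2 * 1.5 ** 2)); psf /= psf.sum()
H = np.fft.fft2(np.fft.ifftshift(psf))
observed = np.real(np.fft.ifft2(np.fft.fft2(truth) * H)) + rng.normal(0, 0.01, truth.shape)

def cost_and_grad(flat):
    img = flat.reshape(truth.shape)
    resid = np.real(np.fft.ifft2(np.fft.fft2(img) * H)) - observed
    cost = np.sum(resid ** 2)
    grad = 2.0 * np.real(np.fft.ifft2(np.fft.fft2(resid) * np.conj(H)))
    return cost, grad.ravel()

res = minimize(cost_and_grad, observed.ravel(), jac=True, method="L-BFGS-B",
               options={"maxiter": 200})
restored = res.x.reshape(truth.shape)
```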

  12. Image-Guided Intraoperative Cortical Deformation Recovery Using Game Theory: Application to Neocortical Epilepsy Surgery

    PubMed Central

    DeLorenzo, Christine; Papademetris, Xenophon; Staib, Lawrence H.; Vives, Kenneth P.; Spencer, Dennis D.; Duncan, James S.

    2010-01-01

    During neurosurgery, nonrigid brain deformation prevents preoperatively-acquired images from accurately depicting the intraoperative brain. Stereo vision systems can be used to track intraoperative cortical surface deformation and update preoperative brain images in conjunction with a biomechanical model. However, these stereo systems are often plagued with calibration error, which can corrupt the deformation estimation. In order to decouple the effects of camera calibration from the surface deformation estimation, a framework that can solve for disparate and often competing variables is needed. Game theory, which was developed to handle decision making in this type of competitive environment, has been applied to various fields from economics to biology. In this paper, game theory is applied to cortical surface tracking during neocortical epilepsy surgery and used to infer information about the physical processes of brain surface deformation and image acquisition. The method is successfully applied to eight in vivo cases, resulting in an 81% decrease in mean surface displacement error. This includes a case in which some of the initial camera calibration parameters had errors of 70%. Additionally, the advantages of using a game theoretic approach in neocortical epilepsy surgery are clearly demonstrated in its robustness to initial conditions. PMID:20129844

  13. Stereo-Video Data Reduction of Wake Vortices and Trailing Aircrafts

    NASA Technical Reports Server (NTRS)

    Alter-Gartenberg, Rachel

    1998-01-01

    This report presents stereo image theory and the corresponding image processing software developed to analyze stereo imaging data acquired for the wake-vortex hazard flight experiment conducted at NASA Langley Research Center. In this experiment, a leading Lockheed C-130 was equipped with wing-tip smokers to visualize its wing vortices, while a trailing Boeing 737 flew into the wake vortices of the leading airplane. A Rockwell OV-10A airplane, fitted with video cameras under its wings, flew at 400 to 1000 feet above and parallel to the wakes, and photographed the wake interception process for the purpose of determining the three-dimensional location of the trailing aircraft relative to the wake. The report establishes the image-processing tools developed to analyze the video flight-test data, identifies sources of potential inaccuracies, and assesses the quality of the resultant set of stereo data reduction.

  14. A prediction model for cognitive performance in health ageing using diffusion tensor imaging with graph theory.

    PubMed

    Yun, Ruijuan; Lin, Chung-Chih; Wu, Shuicai; Huang, Chu-Chung; Lin, Ching-Po; Chao, Yi-Ping

    2013-01-01

    In this study, we employed diffusion tensor imaging (DTI) to construct brain structural networks and derive the connection matrices from 96 healthy elderly subjects. Correlation analysis between the topological properties of the networks, based on graph theory, and the Cognitive Abilities Screening Instrument (CASI) index was performed to extract the significant network characteristics. These characteristics were then integrated to estimate models using various machine-learning algorithms to predict users' cognitive performance. From the results, the linear regression model and the Gaussian processes model showed better predictive ability, with lower mean absolute errors of 5.8120 and 6.25, respectively. Moreover, these extracted topological properties of the brain structural network derived from DTI could also be regarded as bio-signatures for further evaluation of brain degeneration in healthy aging and early diagnosis of mild cognitive impairment (MCI).
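    A hedged sketch of this pipeline is shown below: graph-theoretic properties of thresholded connection matrices (computed with NetworkX) are used as features in a linear regression predicting a cognitive score. The connection matrices and scores are random stand-ins, and NetworkX and scikit-learn are assumed dependencies.

```python
# Graph-theory features from connection matrices -> regression of a cognitive score.
import numpy as np
import networkx as nx
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_subjects, n_nodes = 40, 30

scores = rng.normal(75, 10, n_subjects)                 # stand-in cognitive scores
features = []
for _ in range(n_subjects):
    W = rng.random((n_nodes, n_nodes)); W = (W + W.T) / 2   # symmetric "connectivity"
    np.fill_diagonal(W, 0)
    G = nx.from_numpy_array((W > 0.7).astype(int))          # thresholded binary graph
    features.append([
        nx.density(G),
        nx.average_clustering(G),
        nx.global_efficiency(G),
    ])

model = LinearRegression().fit(np.array(features), scores)
pred = model.predict(np.array(features))
print("mean absolute error:", np.mean(np.abs(pred - scores)))
```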

  15. Methods of training the graduate level and professional geologist in remote sensing technology

    NASA Technical Reports Server (NTRS)

    Kolm, K. E.

    1981-01-01

    Requirements for a basic course in remote sensing to accommodate the needs of the graduate level and professional geologist are described. The course should stress the general topics of basic remote sensing theory, the theory and data types relating to different remote sensing systems, an introduction to the basic concepts of computer image processing and analysis, the characteristics of different data types, the development of methods for geological interpretations, the integration of all scales and data types of remote sensing in a given study, the integration of other data bases (geophysical and geochemical) into a remote sensing study, and geological remote sensing applications. The laboratories should stress hands-on experience to reinforce the concepts and procedures presented in the lecture. The geologist should then be encouraged to pursue a second course in computer image processing and analysis of remotely sensed data.

  16. Lexicons, contexts, events, and images: commentary on Elman (2009) from the perspective of dual coding theory.

    PubMed

    Paivio, Allan; Sadoski, Mark

    2011-01-01

    Elman (2009) proposed that the traditional role of the mental lexicon in language processing can largely be replaced by a theoretical model of schematic event knowledge founded on dynamic context-dependent variables. We evaluate Elman's approach and propose an alternative view, based on dual coding theory and evidence that modality-specific cognitive representations contribute strongly to word meaning and language performance across diverse contexts which also have effects predictable from dual coding theory. Copyright © 2010 Cognitive Science Society, Inc.

  17. Improving lateral resolution and image quality of optical coherence tomography by the multi-frame superresolution technique for 3D tissue imaging

    PubMed Central

    Shen, Kai; Lu, Hui; Baig, Sarfaraz; Wang, Michael R.

    2017-01-01

    The multi-frame superresolution technique is introduced to significantly improve the lateral resolution and image quality of spectral domain optical coherence tomography (SD-OCT). Using several sets of low resolution C-scan 3D images with lateral sub-spot-spacing shifts between sets, the multi-frame superresolution processing of these sets at each depth layer reconstructs a higher resolution and quality lateral image. Layer by layer processing yields an overall high lateral resolution and quality 3D image. In theory, the superresolution processing, including deconvolution, can address the diffraction limit, lateral scan density, and background noise problems together. In experiments, a lateral resolution improvement of ~3 times, reaching 7.81 µm and 2.19 µm with sample arm optics of 0.015 and 0.05 numerical aperture respectively, as well as a doubling of image quality, has been confirmed by imaging a known resolution test target. Improved lateral resolution on in vitro skin C-scan images has been demonstrated. For in vivo 3D SD-OCT imaging of human skin, fingerprint, and retina layers, we used the multi-modal volume registration method to effectively estimate the lateral image shifts among different C-scans caused by random minor unintended live body motion. Further processing of these images generated high lateral resolution 3D images as well as high quality B-scan images of these in vivo tissues. PMID:29188089

  18. Terahertz imaging with compressed sensing and phase retrieval.

    PubMed

    Chan, Wai Lam; Moravec, Matthew L; Baraniuk, Richard G; Mittleman, Daniel M

    2008-05-01

    We describe a novel, high-speed pulsed terahertz (THz) Fourier imaging system based on compressed sensing (CS), a new signal processing theory, which allows image reconstruction with fewer samples than traditionally required. Using CS, we successfully reconstruct a 64 x 64 image of an object with pixel size 1.4 mm using a randomly chosen subset of the 4096 pixels, which defines the image in the Fourier plane, and observe improved reconstruction quality when we apply phase correction. For our chosen image, only about 12% of the pixels are required for reassembling the image. In combination with phase retrieval, our system has the capability to reconstruct images with only a small subset of Fourier amplitude measurements and thus has potential application in THz imaging with cw sources.
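    The idea of reconstructing an image from a small subset of Fourier-plane samples can be sketched with plain iterative soft-thresholding (ISTA), assuming sparsity directly in the image domain; this is a simplified stand-in for the paper's reconstruction and omits the phase-retrieval step.

```python
# ISTA reconstruction from randomly undersampled Fourier data.
import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(0)
truth = np.zeros((64, 64)); truth[24:40, 20:30] = 1.0; truth[10:16, 40:52] = 0.5

mask = rng.random(truth.shape) < 0.25                 # keep ~25% of Fourier samples
y = mask * np.fft.fft2(truth, norm="ortho")           # measured Fourier data

x = np.zeros_like(truth)
lam = 0.01                                            # sparsity threshold
for _ in range(300):                                  # ISTA iterations (step size 1)
    resid = y - mask * np.fft.fft2(x, norm="ortho")
    x = soft(x + np.real(np.fft.ifft2(mask * resid, norm="ortho")), lam)

print("relative error:", np.linalg.norm(x - truth) / np.linalg.norm(truth))
```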

  19. Point Cloud Generation from Aerial Image Data Acquired by a Quadrocopter Type Micro Unmanned Aerial Vehicle and a Digital Still Camera

    PubMed Central

    Rosnell, Tomi; Honkavaara, Eija

    2012-01-01

    The objective of this investigation was to develop and investigate methods for point cloud generation by image matching using aerial image data collected by quadrocopter type micro unmanned aerial vehicle (UAV) imaging systems. Automatic generation of high-quality, dense point clouds from digital images by image matching is a recent, cutting-edge step forward in digital photogrammetric technology. The major components of the system for point cloud generation are a UAV imaging system, an image data collection process using high image overlaps, and post-processing with image orientation and point cloud generation. Two post-processing approaches were developed: one of the methods is based on Bae Systems’ SOCET SET classical commercial photogrammetric software and another is built using Microsoft®’s Photosynth™ service available in the Internet. Empirical testing was carried out in two test areas. Photosynth processing showed that it is possible to orient the images and generate point clouds fully automatically without any a priori orientation information or interactive work. The photogrammetric processing line provided dense and accurate point clouds that followed the theoretical principles of photogrammetry, but also some artifacts were detected. The point clouds from the Photosynth processing were sparser and noisier, which is to a large extent due to the fact that the method is not optimized for dense point cloud generation. Careful photogrammetric processing with self-calibration is required to achieve the highest accuracy. Our results demonstrate the high performance potential of the approach and that with rigorous processing it is possible to reach results that are consistent with theory. We also point out several further research topics. Based on theoretical and empirical results, we give recommendations for properties of imaging sensor, data collection and processing of UAV image data to ensure accurate point cloud generation. PMID:22368479

  20. Point cloud generation from aerial image data acquired by a quadrocopter type micro unmanned aerial vehicle and a digital still camera.

    PubMed

    Rosnell, Tomi; Honkavaara, Eija

    2012-01-01

    The objective of this investigation was to develop and investigate methods for point cloud generation by image matching using aerial image data collected by quadrocopter type micro unmanned aerial vehicle (UAV) imaging systems. Automatic generation of high-quality, dense point clouds from digital images by image matching is a recent, cutting-edge step forward in digital photogrammetric technology. The major components of the system for point cloud generation are a UAV imaging system, an image data collection process using high image overlaps, and post-processing with image orientation and point cloud generation. Two post-processing approaches were developed: one of the methods is based on Bae Systems' SOCET SET classical commercial photogrammetric software and another is built using Microsoft(®)'s Photosynth™ service available in the Internet. Empirical testing was carried out in two test areas. Photosynth processing showed that it is possible to orient the images and generate point clouds fully automatically without any a priori orientation information or interactive work. The photogrammetric processing line provided dense and accurate point clouds that followed the theoretical principles of photogrammetry, but also some artifacts were detected. The point clouds from the Photosynth processing were sparser and noisier, which is to a large extent due to the fact that the method is not optimized for dense point cloud generation. Careful photogrammetric processing with self-calibration is required to achieve the highest accuracy. Our results demonstrate the high performance potential of the approach and that with rigorous processing it is possible to reach results that are consistent with theory. We also point out several further research topics. Based on theoretical and empirical results, we give recommendations for properties of imaging sensor, data collection and processing of UAV image data to ensure accurate point cloud generation.

  1. Knowledge Driven Image Mining with Mixture Density Mercer Kernels

    NASA Technical Reports Server (NTRS)

    Srivastava, Ashok N.; Oza, Nikunj

    2004-01-01

    This paper presents a new methodology for automatic knowledge driven image mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite dimensional feature space. In that high dimensional feature space, linear clustering, prediction, and classification algorithms can be applied and the results can be mapped back down to the original image space. Thus, highly nonlinear structure in the image can be recovered through the use of well-known linear mathematics in the feature space. This process has a number of advantages over traditional methods in that it allows for nonlinear interactions to be modelled with only a marginal increase in computational costs. In this paper, we present the theory of Mercer Kernels, describe its use in image mining, discuss a new method to generate Mercer Kernels directly from data, and compare the results with existing algorithms on data from the MODIS (Moderate Resolution Spectral Radiometer) instrument taken over the Arctic region. We also discuss the potential application of these methods on the Intelligent Archive, a NASA initiative for developing a tagged image data warehouse for the Earth Sciences.
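    The general recipe (implicit mapping by a Mercer kernel, then linear algorithms in feature space) can be sketched with an off-the-shelf RBF kernel rather than the paper's data-driven mixture-density kernel; scikit-learn is an assumed dependency and the "image" is a synthetic stand-in.

```python
# Kernel feature mapping followed by linear clustering of image patches.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
image = np.concatenate([rng.normal(0.2, 0.05, (32, 64)),     # two synthetic "regions"
                        rng.normal(0.8, 0.05, (32, 64))], axis=0)

# 4x4 non-overlapping patches as feature vectors (256 patches of length 16).
patches = image.reshape(16, 4, 16, 4).transpose(0, 2, 1, 3).reshape(-1, 16)

# RBF Mercer kernel -> low-dimensional feature-space embedding -> linear k-means.
embedded = KernelPCA(n_components=5, kernel="rbf", gamma=2.0).fit_transform(patches)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embedded)
label_map = labels.reshape(16, 16)        # patch-level segmentation of the image
```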

  2. Knowledge Driven Image Mining with Mixture Density Mercer Kernels

    NASA Technical Reports Server (NTRS)

    Srivastava, Ashok N.; Oza, Nikunj

    2004-01-01

    This paper presents a new methodology for automatic knowledge driven image mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite dimensional feature space. In that high dimensional feature space, linear clustering, prediction, and classification algorithms can be applied and the results can be mapped back down to the original image space. Thus, highly nonlinear structure in the image can be recovered through the use of well-known linear mathematics in the feature space. This process has a number of advantages over traditional methods in that it allows for nonlinear interactions to be modelled with only a marginal increase in computational costs. In this paper we present the theory of Mercer Kernels, describe its use in image mining, discuss a new method to generate Mercer Kernels directly from data, and compare the results with existing algorithms on data from the MODIS (Moderate Resolution Spectral Radiometer) instrument taken over the Arctic region. We also discuss the potential application of these methods on the Intelligent Archive, a NASA initiative for developing a tagged image data warehouse for the Earth Sciences.

  3. Practical research on the teaching of Optical Design

    NASA Astrophysics Data System (ADS)

    Fan, Changjiang; Ren, Zhijun; Ying, Chaofu; Peng, Baojin

    2017-08-01

    Optical design, together with applied optics, forms a complete system from basic theory to application, and it plays a very important role in professional education. In order to improve senior undergraduates' understanding of optical design, this course is divided into three parts: theoretical knowledge, software design, and product processing. Through the theoretical knowledge, students can master aberration theory and the design principles of typical optical systems. By using ZEMAX (an imaging design software), TRACEPRO (an illumination optical design software), and SOLIDWORKS or PROE (mechanical design software), students can establish a complete model of an optical system. Students can then use the carving machine located in the lab or at cooperating units to fabricate the model. Through these three parts, students learn the necessary practical knowledge, improve their learning and analysis abilities, and get enough practice to develop their creative abilities, gradually changing from learners of scientific theory into optics engineers.

  4. Similar Brain Activation during False Belief Tasks in a Large Sample of Adults with and without Autism

    PubMed Central

    Dufour, Nicholas; Redcay, Elizabeth; Young, Liane; Mavros, Penelope L.; Moran, Joseph M.; Triantafyllou, Christina; Gabrieli, John D. E.; Saxe, Rebecca

    2013-01-01

    Reading about another person’s beliefs engages ‘Theory of Mind’ processes and elicits highly reliable brain activation across individuals and experimental paradigms. Using functional magnetic resonance imaging, we examined activation during a story task designed to elicit Theory of Mind processing in a very large sample of neurotypical (N = 462) individuals, and a group of high-functioning individuals with autism spectrum disorders (N = 31), using both region-of-interest and whole-brain analyses. This large sample allowed us to investigate group differences in brain activation to Theory of Mind tasks with unusually high sensitivity. There were no differences between neurotypical participants and those diagnosed with autism spectrum disorder. These results imply that the social cognitive impairments typical of autism spectrum disorder can occur without measurable changes in the size, location or response magnitude of activity during explicit Theory of Mind tasks administered to adults. PMID:24073267

  5. Similar brain activation during false belief tasks in a large sample of adults with and without autism.

    PubMed

    Dufour, Nicholas; Redcay, Elizabeth; Young, Liane; Mavros, Penelope L; Moran, Joseph M; Triantafyllou, Christina; Gabrieli, John D E; Saxe, Rebecca

    2013-01-01

    Reading about another person's beliefs engages 'Theory of Mind' processes and elicits highly reliable brain activation across individuals and experimental paradigms. Using functional magnetic resonance imaging, we examined activation during a story task designed to elicit Theory of Mind processing in a very large sample of neurotypical (N = 462) individuals, and a group of high-functioning individuals with autism spectrum disorders (N = 31), using both region-of-interest and whole-brain analyses. This large sample allowed us to investigate group differences in brain activation to Theory of Mind tasks with unusually high sensitivity. There were no differences between neurotypical participants and those diagnosed with autism spectrum disorder. These results imply that the social cognitive impairments typical of autism spectrum disorder can occur without measurable changes in the size, location or response magnitude of activity during explicit Theory of Mind tasks administered to adults.

  6. Wavelength-Modulated Differential Photoacoustic Spectroscopy (WM-DPAS): Theory of a High-Sensitivity Methodology for the Detection of Early-Stage Tumors in Tissues

    NASA Astrophysics Data System (ADS)

    Choi, S.; Mandelis, A.; Guo, X.; Lashkari, B.; Kellnberger, S.; Ntziachristos, V.

    2015-06-01

    In the field of medical diagnostics, biomedical photoacoustics (PA) is a non-invasive hybrid optical-ultrasonic imaging modality. Owing to this unique combination of optical and acoustic imaging, PA imaging has risen to the frontiers of medical diagnostic procedures such as human breast cancer detection. While conventional PA imaging has mainly been carried out with high-power pulsed lasers, an alternative technology, the frequency-domain biophotoacoustic radar (FD-PAR), is under intensive development. It uses a continuous-wave optical source whose intensity is modulated by a frequency-swept waveform for acoustic wave generation. The small amplitude of the generated acoustic wave is compensated by a large increase in signal-to-noise ratio (several orders of magnitude) obtained with matched-filter and pulse-compression correlation processing, in a manner similar to radar systems. The current study introduces the theory of a novel FD-PAR modality for ultra-sensitive characterization of functional information in breast cancer imaging. The newly developed theory of wavelength-modulated differential PA spectroscopy (WM-DPAS) detection addresses angiogenesis and hypoxia monitoring, two well-known benchmarks of breast tumor formation. Based on the WM-DPAS theory, this modality efficiently suppresses background absorption and is expected to detect very small changes in total hemoglobin concentration and oxygenation levels, thereby identifying pre-malignant tumors before they are anatomically apparent. An experimental system design for WM-DPAS is presented, and preliminary single-ended laser results are compared with a limiting case of the theoretical formalism.

  7. Fuzzy geometry, entropy, and image information

    NASA Technical Reports Server (NTRS)

    Pal, Sankar K.

    1991-01-01

    Presented here are various uncertainty measures arising from grayness ambiguity and spatial ambiguity in an image, and their possible applications as image information measures. Definitions are given of an image in the light of fuzzy set theory, and of information measures and tools relevant for processing and analysis, e.g., fuzzy geometrical properties, correlation, bound functions, and entropy measures. A formulation of algorithms, along with the management of uncertainty, is also given for segmentation, object extraction, and edge detection. The output obtained is both fuzzy and nonfuzzy. Ambiguity in the evaluation and assessment of membership functions is also described.
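
    As an aside for readers who want to experiment, the following is a minimal NumPy sketch of one grayness-ambiguity measure of the kind described above, the De Luca-Termini fuzzy entropy; the linear membership function, the helper name fuzzy_entropy, and the test images are illustrative assumptions rather than Pal's exact formulation.

```python
import numpy as np

def fuzzy_entropy(image):
    """De Luca-Termini fuzzy entropy as a simple grayness-ambiguity measure.

    Gray levels are mapped to a membership value mu in [0, 1] with a linear
    membership function (an illustrative choice); the measure is ~0 for a
    crisp binary image and maximal when every pixel sits near mu = 0.5.
    """
    img = image.astype(float)
    mu = (img - img.min()) / (img.max() - img.min() + 1e-12)  # linear membership
    mu = np.clip(mu, 1e-12, 1 - 1e-12)                        # avoid log(0)
    h = -(mu * np.log(mu) + (1 - mu) * np.log(1 - mu))
    return float(h.sum() / (img.size * np.log(2)))            # normalised to [0, 1]

rng = np.random.default_rng(0)
noisy = rng.integers(0, 256, size=(64, 64))          # highly ambiguous gray levels
binary = (noisy > 127).astype(np.uint8) * 255        # crisp, unambiguous image
print(fuzzy_entropy(noisy), fuzzy_entropy(binary))   # large vs. ~0
```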

  8. An incompressible fluid flow model with mutual information for MR image registration

    NASA Astrophysics Data System (ADS)

    Tsai, Leo; Chang, Herng-Hua

    2013-03-01

    Image registration is one of the fundamental and essential tasks within image processing. It is the process of determining the correspondence between structures in two images, called the template image and the reference image, respectively. The challenge of registration is to find an optimal geometric transformation between corresponding image data. This paper develops a new MR image registration algorithm that uses a closed incompressible viscous fluid model associated with mutual information. In our approach, we treat the image pixels as the fluid elements of a viscous fluid flow governed by the nonlinear Navier-Stokes partial differential equation (PDE). We replace the pressure term with a body force, used mainly to guide the transformation, whose weighting coefficient is expressed by the mutual information between the template and reference images. To solve this modified Navier-Stokes PDE, we adopted the fast numerical techniques proposed by Seibold [1]. The registration process of updating the body force and the velocity and deformation fields is repeated until the mutual information weight reaches a prescribed threshold. We applied our approach to the BrainWeb and real MR images. Consistent with the theory of the proposed fluid model, we found that our method accurately transformed the template images into the reference images based on the intensity flow. Experimental results indicate that our method has potential in a wide variety of medical image registration applications.
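
    The body-force weighting in this model is driven by the mutual information between the template and reference images. Below is a minimal, hedged sketch of a histogram-based mutual information estimate of the kind such a weight could use; the bin count, helper name, and random test images are assumptions, and the fluid solver itself is not shown.

```python
import numpy as np

def mutual_information(template, reference, bins=64):
    """Histogram-based mutual information between two images (in bits)."""
    joint, _, _ = np.histogram2d(template.ravel(), reference.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint probability
    px = pxy.sum(axis=1, keepdims=True)       # marginal of the template
    py = pxy.sum(axis=0, keepdims=True)       # marginal of the reference
    nz = pxy > 0                              # avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(1)
ref = rng.normal(size=(128, 128))
print(mutual_information(ref, ref))                           # high: identical images
print(mutual_information(ref, rng.normal(size=(128, 128))))   # near zero: independent images
```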

  9. Comparison of Image Processing Techniques using Random Noise Radar

    DTIC Science & Technology

    2014-03-27

    detection; UWB: ultra-wideband; EM: electromagnetic; CW: continuous wave; RCS: radar cross section; RFI: radio frequency interference; FFT: fast Fourier transform ... several factors including radar cross section (RCS), orientation, and material makeup. A single monostatic radar at some position collects only range and ... Chapter 2 is to provide the theory behind noise radar and SAR imaging. Section 2.1 presents the basic concepts in transmitting and receiving random

  10. Two-Dimensional Signal Processing and Storage and Theory and Applications of Electromagnetic Measurements.

    DTIC Science & Technology

    1983-06-01

    system, provides a convenient, low-noise, fully parallel method of improving contrast and enhancing structural detail in an image prior to input to a ... directed towards problems in deconvolution, reconstruction from projections, bandlimited extrapolation, and shift-varying deblurring of images ... deconvolution algorithm has been studied with promising results [1] for simulated motion blurs. Future work will focus on noise effects and the extension

  11. Rotation Covariant Image Processing for Biomedical Applications

    PubMed Central

    Reisert, Marco

    2013-01-01

    With the advent of novel biomedical 3D image acquisition techniques, the efficient and reliable analysis of volumetric images has become more and more important. The amount of data is enormous and demands automated processing. The applications are manifold, ranging from image enhancement, image reconstruction, and image description to object/feature detection and high-level contextual feature extraction. In most scenarios, it is expected that geometric transformations alter the output in a mathematically well-defined manner. In this paper we focus on 3D translations and rotations. Many algorithms rely on intensity or low-order tensorial-like descriptions to fulfill this demand. This paper proposes a general mathematical framework based on concepts and theories transferred from mathematical physics and harmonic analysis into the domain of image analysis and pattern recognition. Based on two basic operations, spherical tensor differentiation and spherical tensor multiplication, we show how to design a variety of 3D image processing methods in an efficient way. The framework has already been applied to several biomedical applications ranging from feature and object detection tasks to image enhancement and image restoration techniques. In this paper, the proposed methods are applied to a variety of different 3D data modalities stemming from medical and biological sciences. PMID:23710255

  12. Driving the brain towards creativity and intelligence: A network control theory analysis.

    PubMed

    Kenett, Yoed N; Medaglia, John D; Beaty, Roger E; Chen, Qunlin; Betzel, Richard F; Thompson-Schill, Sharon L; Qiu, Jiang

    2018-01-04

    High-level cognitive constructs, such as creativity and intelligence, entail complex and multiple processes, including cognitive control processes. Recent neurocognitive research on these constructs highlights the importance of dynamic interaction across neural network systems and the role of cognitive control processes in guiding such a dynamic interaction. How can we quantitatively examine the extent and ways in which cognitive control contributes to creativity and intelligence? To address this question, we apply a computational network control theory (NCT) approach to structural brain imaging data acquired via diffusion tensor imaging in a large sample of participants, to examine how NCT relates to individual differences in distinct measures of creative ability and intelligence. Recent application of this theory at the neural level is built on a model of brain dynamics, which mathematically models patterns of inter-region activity propagated along the structure of an underlying network. The strength of this approach is its ability to characterize the potential role of each brain region in regulating whole-brain network function based on its anatomical fingerprint and a simplified model of node dynamics. We find that intelligence is related to the ability to "drive" the brain system into easy-to-reach neural states by the right inferior parietal lobe and lower integration abilities in the left retrosplenial cortex. We also find that creativity is related to the ability to "drive" the brain system into difficult-to-reach states by the right dorsolateral prefrontal cortex (inferior frontal junction) and higher integration abilities in sensorimotor areas. Furthermore, we found that different facets of creativity (fluency, flexibility, and originality) relate to generally similar but not identical network controllability processes. We relate our findings to general theories on intelligence and creativity. Copyright © 2018 Elsevier Ltd. All rights reserved.
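
    For readers unfamiliar with network control theory, the sketch below illustrates one commonly used NCT quantity, a per-node "average controllability" score computed as the trace of a finite-horizon controllability Gramian for a linear discrete-time model driven from a single node. The spectral normalization, horizon, helper name, and random adjacency matrix are illustrative assumptions; this is not the authors' exact pipeline.

```python
import numpy as np

def average_controllability(adjacency, horizon=10):
    """Per-node average controllability for x[t+1] = A x[t] + B u[t].

    The adjacency matrix is scaled to be stable (a common convention in the
    network-control literature), B selects a single control node, and the
    score is the trace of the finite-horizon controllability Gramian
    W = sum_k A^k B B^T (A^T)^k.
    """
    A = np.asarray(adjacency, dtype=float)
    A = A / (1.0 + np.linalg.svd(A, compute_uv=False)[0])   # spectral scaling
    n = A.shape[0]
    scores = np.zeros(n)
    for node in range(n):
        B = np.zeros((n, 1))
        B[node] = 1.0
        Ak, W = np.eye(n), np.zeros((n, n))
        for _ in range(horizon):
            W += Ak @ B @ B.T @ Ak.T
            Ak = Ak @ A
        scores[node] = np.trace(W)
    return scores

rng = np.random.default_rng(2)
adj = rng.random((8, 8))
adj = (adj + adj.T) / 2.0          # symmetric structural "connectome"
np.fill_diagonal(adj, 0.0)
print(average_controllability(adj).round(3))   # higher = easier "driver" node
```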

  13. Dependence of quantitative accuracy of CT perfusion imaging on system parameters

    NASA Astrophysics Data System (ADS)

    Li, Ke; Chen, Guang-Hong

    2017-03-01

    Deconvolution is a popular method to calculate parametric perfusion parameters from four-dimensional CT perfusion (CTP) source images. During the deconvolution process, the four-dimensional space is squeezed into three-dimensional space by removing the temporal dimension, and prior knowledge is often used to suppress noise associated with the process. These additional complexities confound the understanding of the deconvolution-based CTP imaging system and of how its quantitative accuracy depends on the parameters and sub-operations involved in the image formation process. Meanwhile, there has been a strong clinical need to answer this question, as physicians often rely heavily on the quantitative values of perfusion parameters to make diagnostic decisions, particularly during an emergent clinical situation (e.g. diagnosis of acute ischemic stroke). The purpose of this work was to develop a theoretical framework that quantitatively relates the quantification accuracy of parametric perfusion parameters to CTP acquisition and post-processing parameters. This goal was achieved with the help of a cascaded systems analysis for deconvolution-based CTP imaging systems. Based on the cascaded systems analysis, the quantitative relationship between regularization strength, source image noise, arterial input function, and the quantification accuracy of perfusion parameters was established. The theory could potentially be used to guide developments of CTP imaging technology for better quantification accuracy and lower radiation dose.
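
    As context for the deconvolution step discussed above, here is a generic truncated-SVD deconvolution of a tissue time-attenuation curve with an arterial input function, a common baseline in CTP processing; the gamma-variate AIF, sampling interval, threshold, and helper name are illustrative assumptions, and the sketch does not reproduce the paper's cascaded systems analysis.

```python
import numpy as np
from scipy.linalg import toeplitz

def tsvd_deconvolve(aif, tissue, dt, threshold=0.1):
    """Estimate the flow-scaled residue function R(t) from C(t) = dt * (AIF * R)(t).

    A lower-triangular Toeplitz convolution matrix is built from the AIF and
    inverted with truncated SVD; singular values below `threshold` times the
    largest one are discarded to limit noise amplification.
    """
    A = dt * toeplitz(aif, np.zeros_like(aif))      # discrete convolution matrix
    U, s, Vt = np.linalg.svd(A)
    s_inv = np.where(s > threshold * s[0], 1.0 / s, 0.0)
    return Vt.T @ (s_inv * (U.T @ tissue))

# Illustrative gamma-variate AIF and a noiseless synthetic tissue curve
dt, t = 1.0, np.arange(40, dtype=float)
aif = np.where(t > 2, (t - 2) ** 3 * np.exp(-(t - 2) / 1.5), 0.0)
true_r = 0.02 * np.exp(-t / 8.0)                    # flow-scaled residue function
tissue = dt * np.convolve(aif, true_r)[: len(t)]
r_est = tsvd_deconvolve(aif, tissue, dt)
print("estimated peak (~CBF, a.u.):", round(float(r_est.max()), 4),
      "true:", round(float(true_r.max()), 4))
```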

  14. Myers-Briggs typology and Jungian individuation.

    PubMed

    Myers, Steve

    2016-06-01

    Myers-Briggs typology is widely seen as equivalent to and representative of Jungian theory by the users of the Myers-Briggs Type Indicator (MBTI) and similar questionnaires. However, the omission of the transcendent function from the theory, and the use of typological functions as its foundation, has resulted in an inadvertent reframing of the process of individuation. This is despite some attempts to integrate individuation and typology, and reintroduce the transcendent function into Myers-Briggs theory. This paper examines the differing views of individuation in Myers-Briggs and Jungian theory, and some of the challenges of reconciling those differences, particularly in the context of normality. It proposes eight principles, drawn mainly from Jungian and classical post-Jungian work, that show how individuation as a process can be integrated with contemporary Myers-Briggs typology. These principles show individuation as being a natural process that can be encouraged outside of the analytic process. They make use of a wide range of opposites as well as typological functions, whilst being centred on the transcendent function. Central to the process is the alchemical image of the caduceus and a practical interpretation of the axiom of Maria, both of which Jung used to illustrate the process of individuation. © 2016, The Society of Analytical Psychology.

  15. Association between Social Anxiety and Visual Mental Imagery of Neutral Scenes: The Moderating Role of Effortful Control.

    PubMed

    Moriya, Jun

    2017-01-01

    According to cognitive theories, verbal processing attenuates emotional processing, whereas visual imagery enhances emotional processing and contributes to the maintenance of social anxiety. Individuals with social anxiety report negative mental images in social situations. However, the general ability of visual mental imagery of neutral scenes in individuals with social anxiety is still unclear. The present study investigated the general ability of non-emotional mental imagery (vividness, preferences for imagery vs. verbal processing, and object or spatial imagery) and the moderating role of effortful control in attenuating social anxiety. The participants (N = 231) completed five questionnaires. The results showed that social anxiety was not necessarily associated with all aspects of mental imagery. As suggested by theories, social anxiety was not associated with a preference for verbal processing. However, social anxiety was positively correlated with the visual imagery scale, especially the object imagery scale, which concerns the ability to construct pictorial images of individual objects. Further, it was negatively correlated with the spatial imagery scale, which concerns the ability to process information about spatial relations between objects. Although object imagery and spatial imagery positively and negatively predicted the degree of social anxiety, respectively, these effects were attenuated when socially anxious individuals had high effortful control. Specifically, in individuals with high effortful control, neither object nor spatial imagery was associated with social anxiety. Socially anxious individuals might prefer to construct pictorial images of individual objects in natural scenes through object imagery. However, even in individuals who exhibit these features of mental imagery, effortful control could inhibit the increase in social anxiety.

  16. Information theory analysis of sensor-array imaging systems for computer vision

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Fales, C. L.; Park, S. K.; Samms, R. W.; Self, M. O.

    1983-01-01

    Information theory is used to assess the performance of sensor-array imaging systems, with emphasis on the performance obtained with image-plane signal processing. By electronically controlling the spatial response of the imaging system, as suggested by the mechanism of human vision, it is possible to trade off edge enhancement for sensitivity, increase dynamic range, and reduce data transmission. Computational results show that: signal information density varies little with large variations in the statistical properties of random radiance fields; most information (generally about 85 to 95 percent) is contained in the signal intensity transitions rather than levels; and performance is optimized when the OTF of the imaging system is nearly limited to the sampling passband to minimize aliasing at the cost of blurring, and the SNR is very high to permit the retrieval of small spatial detail from the extensively blurred signal. Shading the lens aperture transmittance to increase depth of field and using a regular hexagonal sensor array instead of a square lattice to decrease sensitivity to edge orientation also improve the signal information density, by up to about 30 percent at high SNRs.
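
    A toy calculation of the information carried per sample over an additive Gaussian channel, I = 0.5*log2(1 + SNR), gives a feel for the kind of information-density bookkeeping behind such assessments; the SNR values below are arbitrary, and the paper's full OTF, sampling-passband, and aliasing analysis is not modeled.

```python
import numpy as np

# Information per sample of an additive white Gaussian noise channel,
# I = 0.5 * log2(1 + SNR): the basic quantity behind "signal information
# density" assessments (a complete analysis would also fold in the system
# OTF, the sampling passband, and aliasing, which are omitted here).
snr_db = np.array([10.0, 20.0, 30.0, 40.0])
snr = 10.0 ** (snr_db / 10.0)
bits_per_sample = 0.5 * np.log2(1.0 + snr)
for db, b in zip(snr_db, bits_per_sample):
    print(f"SNR = {db:4.1f} dB  ->  {b:4.2f} bits/sample")
```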

  17. Visual communication - Information and fidelity. [of images

    NASA Technical Reports Server (NTRS)

    Huck, Freidrich O.; Fales, Carl L.; Alter-Gartenberg, Rachel; Rahman, Zia-Ur; Reichenbach, Stephen E.

    1993-01-01

    This assessment of visual communication deals with image gathering, coding, and restoration as a whole rather than as separate and independent tasks. The approach focuses on two mathematical criteria, information and fidelity, and on their relationships to the entropy of the encoded data and to the visual quality of the restored image. Past applications of these criteria to the assessment of image coding and restoration have been limited to the link that connects the output of the image-gathering device to the input of the image-display device. By contrast, the approach presented in this paper explicitly includes the critical limiting factors that constrain image gathering and display. This extension leads to an end-to-end assessment theory of visual communication that combines optical design with digital processing.

  18. Electrocortical consequences of image processing: The influence of working memory load and worry.

    PubMed

    White, Evan J; Grant, DeMond M

    2017-03-30

    Research suggests that worry precludes emotional processing and biases attentional processes. Although there is burgeoning evidence for the relationship between executive functioning and worry, more research in this area is needed. A recent theory suggests that one mechanism for the negative effects of worry on neural indicators of attention may be working memory load; however, few studies have examined this directly. The goal of the current study was to document the influence of both visual and verbal working memory load and worry on attention allocation during processing of emotional images in a cued image paradigm. It was hypothesized that working memory load would decrease attention allocation during processing of emotional images. This was tested among 38 participants using a modified S1-S2 paradigm. Results indicated that both the visual and verbal working memory tasks resulted in a reduction of attention allocation to the processing of images across stimulus types compared with the baseline task, although only for individuals low in worry. These data extend the literature by documenting decreased neural responding (i.e., LPP amplitude) to imagery under both visual and verbal working memory load, particularly among individuals low in worry. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.

  19. Design of polarization imaging system based on CIS and FPGA

    NASA Astrophysics Data System (ADS)

    Zeng, Yan-an; Liu, Li-gang; Yang, Kun-tao; Chang, Da-ding

    2008-02-01

    Polarization is an important characteristic of light, and polarization image detection is an emerging technology that combines polarimetry with image processing. In contrast to traditional intensity-based image detection, polarization imaging can acquire important information that traditional imaging cannot, so it is expected to find wide use in both civilian and military fields and has been researched widely around the world. This paper first introduces the physical theory of polarization imaging and then describes image acquisition and polarization image processing based on a CIS (CMOS image sensor) and an FPGA. The polarization imaging system consists of hardware and software. The hardware includes the CMOS image sensor drive module, the VGA display module, the SRAM access module, and the FPGA-based real-time image acquisition system; the circuit diagram and PCB were designed. The computation of the Stokes vector and the polarization angle is analyzed in the software part, where the floating-point multiplications of the Stokes vector computation are optimized into shift and addition operations. Experimental results show that the real-time acquisition system can collect and display image data from the CMOS image sensor in real time.
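
    The Stokes-vector arithmetic mentioned above is simple enough to sketch. The snippet below computes the linear Stokes parameters, degree of linear polarization, and angle of polarization from four intensity frames taken behind a polarizer at 0, 45, 90, and 135 degrees; the four-angle convention and random stand-in frames are assumptions, and the fixed-point shift-and-add optimization used on the FPGA is not reproduced.

```python
import numpy as np

def linear_stokes(i0, i45, i90, i135):
    """Linear Stokes parameters, degree and angle of linear polarization
    from four intensity images taken behind a polarizer at 0/45/90/135 deg."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)          # total intensity
    s1 = i0 - i90                               # horizontal vs. vertical
    s2 = i45 - i135                             # +45 deg vs. -45 deg
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)   # degree of linear pol.
    aop = 0.5 * np.arctan2(s2, s1)              # angle of polarization (radians)
    return s0, s1, s2, dolp, aop

rng = np.random.default_rng(3)
frames = [rng.random((4, 4)) for _ in range(4)]   # stand-ins for four CIS frames
s0, s1, s2, dolp, aop = linear_stokes(*frames)
print(dolp.mean(), np.degrees(aop).mean())
```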

  20. Image Encryption Algorithm Based on Hyperchaotic Maps and Nucleotide Sequences Database

    PubMed Central

    2017-01-01

    Image encryption technology is one of the main means to ensure the safety of image information. Using the characteristics of chaos, such as randomness, regularity, ergodicity, and initial-value sensitivity, combined with the unique spatial conformation of DNA molecules and their unique information storage and processing ability, an efficient method for image encryption based on chaos theory and a DNA sequence database is proposed. In this paper, digital image encryption transforms the image pixel gray values by scrambling pixel locations with a chaotic sequence and by establishing a hyperchaotic mapping between quaternary sequences and DNA sequences, combined with the transformation logic between DNA sequences. The bases are replaced according to displacement rules using DNA coding over a number of iterations driven by an enhanced quaternary hyperchaotic sequence generated by the Chen chaotic system. The cipher feedback mode and chaos iteration are employed in the encryption process to enhance the confusion and diffusion properties of the algorithm. Theoretical analysis and experimental results show that the proposed scheme not only demonstrates excellent encryption but also effectively resists chosen-plaintext attack, statistical attack, and differential attack. PMID:28392799
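
    As a deliberately simplified illustration of chaos-based image encryption (permutation plus XOR diffusion driven by a logistic-map keystream), the sketch below shows the general idea; it omits the DNA coding, Chen hyperchaos, and cipher feedback of the proposed scheme, the key and map parameter are arbitrary, and it is not secure for real use.

```python
import numpy as np

def logistic_sequence(x0, n, r=3.99):
    """Logistic-map chaotic sequence x_{k+1} = r * x_k * (1 - x_k)."""
    seq, x = np.empty(n), x0
    for k in range(n):
        x = r * x * (1.0 - x)
        seq[k] = x
    return seq

def chaotic_encrypt(image, key=0.3456):
    """Scramble pixel locations and XOR with a chaotic keystream (toy scheme)."""
    flat = image.ravel().astype(np.uint8)
    chaos = logistic_sequence(key, flat.size)
    perm = np.argsort(chaos)                        # permutation derived from the chaos
    keystream = (chaos * 256).astype(np.uint8)      # diffusion keystream
    return (flat[perm] ^ keystream).reshape(image.shape)

def chaotic_decrypt(cipher, key=0.3456):
    chaos = logistic_sequence(key, cipher.size)
    perm = np.argsort(chaos)
    keystream = (chaos * 256).astype(np.uint8)
    flat = cipher.ravel() ^ keystream
    out = np.empty_like(flat)
    out[perm] = flat                                # undo the permutation
    return out.reshape(cipher.shape)

img = np.arange(16, dtype=np.uint8).reshape(4, 4)
cipher = chaotic_encrypt(img)
assert np.array_equal(chaotic_decrypt(cipher), img)
print(cipher)
```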

  1. The social mysteries of the superior temporal sulcus.

    PubMed

    Beauchamp, Michael S

    2015-09-01

    The superior temporal sulcus (STS) is implicated in a variety of social processes, ranging from language perception to simulating the mental processes of others (theory of mind). In a new study, Deen and colleagues use functional magnetic resonance imaging (fMRI) to show a regular anterior-posterior organization in the STS for different social tasks. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. Modified tandem gratings anastigmatic imaging spectrometer with oblique incidence for spectral broadband

    NASA Astrophysics Data System (ADS)

    Cui, Chengguang; Wang, Shurong; Huang, Yu; Xue, Qingsheng; Li, Bo; Yu, Lei

    2015-09-01

    A modified spectrometer with tandem gratings that exhibits high spectral resolution and imaging quality for solar observation, monitoring, and understanding of coastal ocean processes is presented in this study. Spectral broadband anastigmatic imaging condition, spectral resolution, and initial optical structure are obtained based on geometric aberration theory. Compared with conventional tandem gratings spectrometers, this modified design permits flexibility in selecting gratings. A detailed discussion of the optical design and optical performance of an ultraviolet spectrometer with tandem gratings is also included to explain the advantage of oblique incidence for spectral broadband.

  3. Determination of differential cross sections and kinetic energy release of co-products from central sliced images in photo-initiated dynamic processes.

    PubMed

    Chen, Kuo-mei; Chen, Yu-wei

    2011-04-07

    For photo-initiated inelastic and reactive collisions, dynamic information can be extracted from central sliced images of state-selected Newton spheres of product species. An analysis framework has been established to determine differential cross sections and the kinetic energy release of co-products from experimental images. When one of the reactants exhibits a high recoil speed in a photo-initiated dynamic process, the present theory can be employed to analyze central sliced images from ion imaging or three-dimensional sliced fluorescence imaging experiments. It is demonstrated that the differential cross section of a scattering process can be determined from the central sliced image by a double Legendre moment analysis, for either a fixed or a continuously distributed recoil speed in the center-of-mass reference frame. Simultaneous equations which lead to the determination of the kinetic energy release of co-products can be established from the second-order Legendre moment of the experimental image, as soon as the differential cross section is extracted. The intensity distribution of the central sliced image, along with its outer and inner ring sizes, provides all the clues to decipher the differential cross section and the kinetic energy release of co-products.
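
    The Legendre-moment idea can be illustrated with a small synthetic example: fit an angular intensity distribution I(theta) with a Legendre series in cos(theta) and read the anisotropy off the second-order moment. The synthetic beta value, noise level, and fit order below are assumptions; the paper's double-moment formalism and ring-size analysis are not reproduced.

```python
import numpy as np
from numpy.polynomial import legendre

# Synthetic sliced-image angular distribution: I(theta) ~ 1 + beta * P2(cos theta)
rng = np.random.default_rng(4)
theta = np.linspace(0.0, np.pi, 181)
x = np.cos(theta)
beta_true = 0.7
intensity = 1.0 + beta_true * legendre.legval(x, [0, 0, 1])   # P2(cos theta)
intensity += rng.normal(scale=0.02, size=intensity.shape)     # measurement noise

# Fit Legendre moments up to order 4; odd and higher moments should be ~0 here
coeffs = legendre.legfit(x, intensity, deg=4)
beta_est = coeffs[2] / coeffs[0]          # anisotropy from the 2nd-order moment
print("recovered beta:", round(float(beta_est), 3))
```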

  4. Simulation of target interpretation based on infrared image features and psychology principle

    NASA Astrophysics Data System (ADS)

    Lin, Wei; Chen, Yu-hua; Gao, Hong-sheng; Wang, Zhan-feng; Wang, Ji-jun; Su, Rong-hua; Huang, Yan-ping

    2009-07-01

    Target feature extraction and identification are important and complex steps in target interpretation: they directly affect the psychosensory response of the interpreter to the target infrared image and ultimately determine target viability. Using statistical decision theory and psychological principles, and drawing on four designed psychophysical experiments, an interpretation model for infrared targets is established. The model obtains the target detection probability by calculating the similarity of four features between the target region and the background region, both delineated on the infrared image. Verified against a large number of practical target interpretations, the model can effectively simulate the target interpretation and detection process and yield objective interpretation results, providing technical support for target extraction, identification, and decision-making.

  5. Quantum-like model of processing of information in the brain based on classical electromagnetic field.

    PubMed

    Khrennikov, Andrei

    2011-09-01

    We propose a model of quantum-like (QL) processing of mental information. This model is based on quantum information theory. However, in contrast to models of "quantum physical brain" reducing mental activity (at least at the highest level) to quantum physical phenomena in the brain, our model matches well with the basic neuronal paradigm of cognitive science. QL information processing is based (surprisingly) on classical electromagnetic signals induced by joint activity of neurons. This novel approach to quantum information is based on the representation of quantum mechanics as a version of classical signal theory, which was recently elaborated by the author. The brain uses the QL representation (QLR) for working with abstract concepts; concrete images are described by classical information theory. The two processes, classical and QL, are performed in parallel. Moreover, information is actively transmitted from one representation to another. A QL concept given in our model by a density operator can generate a variety of concrete images given by temporal realizations of the corresponding (Gaussian) random signal. This signal has the covariance operator coinciding with the density operator encoding the abstract concept under consideration. The presence of various temporal scales in the brain plays the crucial role in creation of the QLR in the brain. Moreover, in our model electromagnetic noise produced by neurons is a source of superstrong QL correlations between processes in different spatial domains in the brain; the binding problem is solved on the QL level, but with the aid of the classical background fluctuations. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  6. Self-affirmation theory and cigarette smoking warning images.

    PubMed

    DiBello, Angelo M; Neighbors, Clayton; Ammar, Joe

    2015-02-01

    The present study examined self-affirmation theory, cigarette smoking, and health-related images depicting adverse effects of smoking. Previous research examining self-affirmation and negative health-related images has shown that individuals who engage in a self-affirmation activity are more receptive to messages when compared to those who do not affirm. We were interested in examining the extent to which self-affirmation would reduce defensive responding to negative health images related to cigarette smoking. Participants included 203 daily smokers who were undergraduate students at a large southern university. Participants completed a battery of questionnaires and were then randomly assigned to one of four conditions (non-smoking image control, smoking image control, low affirmation, and high affirmation). Analyses evaluated the effectiveness of affirmation condition as it related to defensive responding. Results indicated that both affirmation conditions were effective in reducing defensive responding for those at greatest risk (heavier smokers) and those more resistant to health benefits associated with quitting. Findings are discussed in terms of potential public health implications as well as the role defensive responding plays in the evaluation and processing of negative health messages. Copyright © 2014 Elsevier Ltd. All rights reserved.

  7. Communication in diagnostic radiology: meeting the challenges of complexity.

    PubMed

    Larson, David B; Froehle, Craig M; Johnson, Neil D; Towbin, Alexander J

    2014-11-01

    As patients and information flow through the imaging process, value is added step-by-step when information is acquired, interpreted, and communicated back to the referring clinician. However, radiology information systems are often plagued with communication errors and delays. This article presents theories and recommends strategies to continuously improve communication in the complex environment of modern radiology. Communication theories, methods, and systems that have proven their effectiveness in other environments can serve as models for radiology.

  8. The two-process theory of face processing: modifications based on two decades of data from infants and adults.

    PubMed

    Johnson, Mark H; Senju, Atsushi; Tomalski, Przemyslaw

    2015-03-01

    Johnson and Morton (1991. Biology and Cognitive Development: The Case of Face Recognition. Blackwell, Oxford) used Gabriel Horn's work on the filial imprinting model to inspire a two-process theory of the development of face processing in humans. In this paper we review evidence accrued over the past two decades from infants and adults, and from other primates, that informs this two-process model. While work with newborns and infants has been broadly consistent with predictions from the model, further refinements and questions have been raised. With regard to adults, we discuss more recent evidence on the extension of the model to eye contact detection, and to subcortical face processing, reviewing functional imaging and patient studies. We conclude with discussion of outstanding caveats and future directions of research in this field. Copyright © 2014 The Authors. Published by Elsevier Ltd.. All rights reserved.

  9. Bayesian or Laplacien inference, entropy and information theory and information geometry in data and signal processing

    NASA Astrophysics Data System (ADS)

    Mohammad-Djafari, Ali

    2015-01-01

    The main object of this tutorial article is first to review the main inference tools using Bayesian approach, Entropy, Information theory and their corresponding geometries. This review is focused mainly on the ways these tools have been used in data, signal and image processing. After a short introduction of the different quantities related to the Bayes rule, the entropy and the Maximum Entropy Principle (MEP), relative entropy and the Kullback-Leibler divergence, Fisher information, we will study their use in different fields of data and signal processing such as: entropy in source separation, Fisher information in model order selection, different Maximum Entropy based methods in time series spectral estimation and finally, general linear inverse problems.
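
    Two of the quantities reviewed in this tutorial, Shannon entropy and the Kullback-Leibler divergence, are easy to compute for discrete distributions; the sketch below is purely illustrative, with arbitrary example distributions.

```python
import numpy as np

def entropy(p):
    """Shannon entropy H(p) = -sum p log2 p of a discrete distribution."""
    p = np.asarray(p, dtype=float)
    p = p / p.sum()
    nz = p > 0
    return float(-np.sum(p[nz] * np.log2(p[nz])))

def kl_divergence(p, q):
    """Kullback-Leibler divergence KL(p || q) in bits (q must cover p's support)."""
    p = np.asarray(p, dtype=float); p = p / p.sum()
    q = np.asarray(q, dtype=float); q = q / q.sum()
    nz = p > 0
    return float(np.sum(p[nz] * np.log2(p[nz] / q[nz])))

p = [0.5, 0.25, 0.25]
q = [1 / 3, 1 / 3, 1 / 3]           # maximum-entropy (uniform) reference
print(entropy(p), entropy(q))        # 1.5 bits vs. log2(3) ~ 1.585 bits
print(kl_divergence(p, q))           # divergence of p from the uniform prior
```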

  10. Review of Current Aided/Automatic Target Acquisition Technology for Military Target Acquisition Tasks

    DTIC Science & Technology

    2011-07-01

    radar [e.g., synthetic aperture radar (SAR)]. EO/IR includes multi- and hyperspectral imaging. Signal processing of data from nonimaging sensors, such ... enhanced recognition ability. Other non-image-based techniques, such as category theory [45], hierarchical systems [46], and gradient index flow [47], are possible ... the battlefield. There is a plethora of imaging and nonimaging sensors on the battlefield that are being networked together for transmission of

  11. Real time diffuse reflectance polarisation spectroscopy imaging to evaluate skin microcirculation

    NASA Astrophysics Data System (ADS)

    O'Doherty, Jim; Henricson, Joakim; Nilsson, Gert E.; Anderson, Chris; Leahy, Martin J.

    2007-07-01

    This article describes the theoretical development and design of a real-time microcirculation imaging system, an extension of a technology previously developed by our group. The technology utilises polarisation spectroscopy, a technique used to selectively gate photons returning from different compartments of human skin tissue, namely the superficial layers of the epidermis and the deeper backscattered light from the dermal matrix. A consumer-end digital camcorder captures colour data with three individual CCDs, and a custom-designed light source consisting of a 24-LED ring light provides broadband illumination over the 400 nm - 700 nm wavelength region. The theory developed leads to an image processing algorithm whose output scales linearly with increasing red blood cell (RBC) concentration. Processed images are displayed online in real time at a rate of 25 frames s-1 with a frame size of 256 x 256 pixels, limited only by computer RAM and processing speed. General demonstrations of the technique in vivo display several advantages over similar technology.

  12. Information theoretic methods for image processing algorithm optimization

    NASA Astrophysics Data System (ADS)

    Prokushkin, Sergey F.; Galil, Erez

    2015-01-01

    Modern image processing pipelines (e.g., those used in digital cameras) are full of advanced, highly adaptive filters that often have a large number of tunable parameters (sometimes > 100). This makes the calibration procedure for these filters very complex, and optimal results are barely achievable with manual calibration; thus an automated approach is a must. We discuss an information-theory-based metric for evaluating algorithm adaptive characteristics (an "adaptivity criterion"), using noise reduction algorithms as an example. The method finds an "orthogonal decomposition" of the filter parameter space into "filter adaptivity" and "filter strength" directions. This metric can be used as a cost function in automatic filter optimization. Since it measures physical "information restoration" rather than perceived image quality, it helps reduce the set of filter parameters to a smaller subset that is easier for a human operator to tune to achieve better subjective image quality. With appropriate adjustments, the criterion can be used for assessment of the whole imaging system (sensor plus post-processing).

  13. Referential processing: reciprocity and correlates of naming and imaging.

    PubMed

    Paivio, A; Clark, J M; Digdon, N; Bons, T

    1989-03-01

    To shed light on the referential processes that underlie mental translation between representations of objects and words, we studied the reciprocity and determinants of naming and imaging reaction times (RT). Ninety-six subjects pressed a key when they had covertly named 248 pictures or imaged to their names. Mean naming and imagery RTs for each item were correlated with one another, and with properties of names, images, and their interconnections suggested by prior research and dual coding theory. Imagery RTs correlated .56 (df = 246) with manual naming RTs and .58 with voicekey naming RTs from prior studies. A factor analysis of the RTs and of 31 item characteristics revealed 7 dimensions. Imagery and naming RTs loaded on a common referential factor that included variables related to both directions of processing (e.g., missing names and missing images). Naming RTs also loaded on a nonverbal-to-verbal factor that included such variables as number of different names, whereas imagery RTs loaded on a verbal-to-nonverbal factor that included such variables as rated consistency of imagery. The other factors were verbal familiarity, verbal complexity, nonverbal familiarity, and nonverbal complexity. The findings confirm the reciprocity of imaging and naming, and their relation to constructs associated with distinct phases of referential processing.

  14. A Biological-Plausable Architecture for Shape Recognition

    DTIC Science & Technology

    2006-06-30

    between curves. Information Processing Letters, 64, 1997. [4] Irving Biederman. Recognition-by-components: A theory of human image understanding. Psychological Review, 94(2):115–147, 1987. [5] C. Cadieu, M. Kouh, M. Riesenhuber, and T. Poggio. Shape representation in V4: Investigating position

  15. Autonomous Image Processing Algorithms Locate Region-of-Interests: The Mars Rover Application

    NASA Technical Reports Server (NTRS)

    Privitera, Claudio; Azzariti, Michela; Stark, Lawrence W.

    1998-01-01

    In this report, we demonstrate that bottom-up IPAs (image-processing algorithms) can perform a new visual task: selecting and locating regions of interest (ROIs). This task has been defined on the basis of a theory of top-down human vision, the scanpath theory. Further, using the measures Sp and Ss (the similarity of location and ordering, respectively), developed over the years in studying human perception and the active looking role of eye movements, we could quantify how efficiently and efficaciously IPAs imitate human vision in locating ROIs. The means to quantitatively evaluate IPA performance has been an important part of our study. In fact, these measures were essential in choosing, from the initial wide variety of IPAs, the particular one that best serves a given type of picture and a required task. It should be emphasized that the selection of efficient IPAs has depended upon their correlation with ROIs actually chosen by humans for the same type of picture and the same required task.

  16. Quantitative image quality evaluation of MR images using perceptual difference models

    PubMed Central

    Miao, Jun; Huo, Donglai; Wilson, David L.

    2008-01-01

    The authors are using a perceptual difference model (Case-PDM) to quantitatively evaluate image quality of the thousands of test images which can be created when optimizing fast magnetic resonance (MR) imaging strategies and reconstruction techniques. In this validation study, they compared human evaluation of MR images from multiple organs and from multiple image reconstruction algorithms to Case-PDM and similar models. The authors found that Case-PDM compared very favorably to human observers in double-stimulus continuous-quality scale and functional measurement theory studies over a large range of image quality. The Case-PDM threshold for nonperceptible differences in a 2-alternative forced choice study varied with the type of image under study, but was ≈1.1 for diffuse image effects, providing a rule of thumb. Ordering the image quality evaluation models, the overall result was Case-PDM ≈ IDM (Sarnoff Corporation) ≈ SSIM [Wang et al. IEEE Trans. Image Process. 13, 600–612 (2004)] > mean squared error ≈ NR [Wang et al. (2004) (unpublished)] > DCTune (NASA) > IQM (MITRE Corporation). The authors conclude that Case-PDM is very useful in MR image evaluation but that one should probably restrict studies to similar images and similar processing, normally not a limitation in image reconstruction studies. PMID:18649487
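
    Since SSIM appears in the ordering above, the snippet below shows how such a comparison term can be computed with scikit-image alongside a plain mean squared error; the random reference image and noise level are stand-ins for real MR reconstructions, and Case-PDM itself is not reproduced.

```python
import numpy as np
from skimage.metrics import structural_similarity

rng = np.random.default_rng(5)
reference = rng.random((128, 128))            # stand-in for a fully sampled MR image
degraded = reference + rng.normal(scale=0.05, size=reference.shape)  # e.g. fast-acquisition noise

ssim_value = structural_similarity(reference, degraded,
                                   data_range=degraded.max() - degraded.min())
mse_value = float(np.mean((reference - degraded) ** 2))
print(f"SSIM = {ssim_value:.3f}, MSE = {mse_value:.5f}")
```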

  17. Spatial imaging in color and HDR: prometheus unchained

    NASA Astrophysics Data System (ADS)

    McCann, John J.

    2013-03-01

    The Human Vision and Electronic Imaging Conferences (HVEI) at the IS&T/SPIE Electronic Imaging meetings have brought together research in the fundamentals of both vision and digital technology. This conference has incorporated many color disciplines that have contributed to the theory and practice of today's imaging: color constancy, models of vision, digital output, high-dynamic-range imaging, and the understanding of perceptual mechanisms. Before digital imaging, silver halide color was a pixel-based mechanism. Color films are closely tied to colorimetry, the science of matching pixels in a black surround. The quanta catch of the sensitized silver salts determines the amount of colored dyes in the final print. The rapid expansion of digital imaging over the past 25 years has eliminated the limitations of using small local regions in forming images. Spatial interactions can now generate images more like vision. Since the 1950's, neurophysiology has shown that post-receptor neural processing is based on spatial interactions. These results reinforced the findings of 19th century experimental psychology. This paper reviews the role of HVEI in color, emphasizing the interaction of research on vision and the new algorithms and processes made possible by electronic imaging.

  18. Linear landmark extraction in SAR images with application to augmented integrity aero-navigation: an overview to a novel processing chain

    NASA Astrophysics Data System (ADS)

    Fabbrini, L.; Messina, M.; Greco, M.; Pinelli, G.

    2011-10-01

    In the context of augmented-integrity Inertial Navigation Systems (INS), recent technological developments have focused on landmark extraction from high-resolution synthetic aperture radar (SAR) images in order to retrieve aircraft position and attitude. This article puts forward a processing chain that automatically detects linear landmarks in high-resolution SAR images and can also be successfully exploited in the context of augmented-integrity INS. The processing chain uses constant false alarm rate (CFAR) edge detectors as its first step. Our studies confirm that the ratio of averages (RoA) edge detector detects object boundaries more effectively than the Student t-test and the Wilcoxon-Mann-Whitney (WMW) test. Nevertheless, all these statistical edge detectors are sensitive to violations of the assumptions that underlie their theory. In addition to presenting a solution to this problem, we put forward a new post-processing algorithm to remove the main false alarms, select the most probable edge position, reconstruct broken edges, and finally vectorize them. SAR images from the "MSTAR clutter" dataset were used to prove the effectiveness of the proposed algorithms.
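
    A minimal sketch of the ratio-of-averages (RoA) idea is given below: the local mean is computed in windows on either side of each pixel and the symmetrized ratio max(r, 1/r) flags boundaries between regions of different mean reflectivity. The window size, one-directional (vertical-edge) formulation, wrap-around border handling, and synthetic speckle model are assumptions; the CFAR thresholding and post-processing chain are not shown.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def roa_edge_strength(image, window=5):
    """Ratio-of-Averages (RoA) edge strength for vertical edges in a SAR image.

    Local means are taken in windows to the left and right of each pixel; the
    symmetrised ratio max(r, 1/r) is large on boundaries between regions of
    different mean reflectivity and ~1 in homogeneous clutter.  (Wrap-around
    at the image borders is ignored in this sketch.)
    """
    img = image.astype(float) + 1e-6              # avoid division by zero
    local_mean = uniform_filter(img, size=window)
    left = np.roll(local_mean, window, axis=1)    # window centred to the left
    right = np.roll(local_mean, -window, axis=1)  # window centred to the right
    ratio = left / right
    return np.maximum(ratio, 1.0 / ratio)

# Two homogeneous regions with multiplicative speckle; the seam shows up as a ridge
rng = np.random.default_rng(6)
scene = np.hstack([np.full((64, 64), 1.0), np.full((64, 64), 4.0)])
sar = scene * rng.gamma(shape=4.0, scale=0.25, size=scene.shape)   # unit-mean speckle
strength = roa_edge_strength(sar)
print(strength[:, 60:68].mean(axis=0).round(2))   # peaks near the column-64 boundary
```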

  19. Empathy-Related Responses to Depicted People in Art Works

    PubMed Central

    Kesner, Ladislav; Horáček, Jiří

    2017-01-01

    Existing theories of empathic response to visual art works postulate the primacy of automatic embodied reaction to images based on mirror neuron mechanisms. Arguing for a more inclusive concept of empathy-related response and integrating four distinct bodies of literature, we discuss contextual, and personal factors which modulate empathic response to depicted people. We then present an integrative model of empathy-related responses to depicted people in art works. The model assumes that a response to empathy-eliciting figural artworks engages the dynamic interaction of two mutually interlinked sets of processes: socio-affective/cognitive processing, related to the person perception, and esthetic processing, primarily concerned with esthetic appreciation and judgment and attention to non-social aspects of the image. The model predicts that the specific pattern of interaction between empathy-related and esthetic processing is co-determined by several sets of factors: (i) the viewer's individual characteristics, (ii) the context variables (which include various modes of priming by narratives and other images), (iii) multidimensional features of the image, and (iv) aspects of a viewer's response. Finally we propose that the model is implemented by the interaction of functionally connected brain networks involved in socio-cognitive and esthetic processing. PMID:28286487

  20. Human Image Understanding

    DTIC Science & Technology

    1989-01-01

    "A Theory of Human Image Understanding" and the reprint of the chapter "Aspects and ... Extensions of a Theory of Human Image Understanding" in Z. Pylyshyn (Ed.). CONTENTS: I. Introduction and Background; II. A ... edges. Fig. 4: Some nonaccidental differences between a brick and a cylinder. From Fig. 5, Recognition-by-Components: A theory of human image

  1. Multifaceted free-space image distributor for optical interconnects in massively parallel processing

    NASA Astrophysics Data System (ADS)

    Zhao, Feng; Frietman, Edward E. E.; Han, Zhong; Chen, Ray T.

    1999-04-01

    A characteristic feature of a conventional von Neumann computer is that computing power is delivered by a single processing unit. Although increasing the clock frequency improves the performance of the computer, the switching speed of the semiconductor devices and the finite speed at which electrical signals propagate along the bus set the boundaries. Architectures containing large numbers of nodes can solve this performance dilemma, but the main obstacle in designing such systems is the difficulty of finding solutions that guarantee efficient communication among the nodes. Exchanging data becomes a real bottleneck when all nodes are connected by a shared resource. Only optics, with its inherent parallelism, can remove that bottleneck. Here, we explore a multi-faceted free-space image distributor to be used in optical interconnects for massively parallel processing. Physical and optical models of the image distributor are developed, from the diffraction theory of light waves to optical simulations, and the general features and performance of the distributor are described, together with a new distributor structure and the corresponding simulations. Digital simulation and experiment show that the multi-faceted free-space image distributing technique is well suited to free-space optical interconnection in massively parallel processing, and that the new structure of the distributor performs better.

  2. Features Extraction of Flotation Froth Images and BP Neural Network Soft-Sensor Model of Concentrate Grade Optimized by Shuffled Cuckoo Searching Algorithm

    PubMed Central

    Wang, Jie-sheng; Han, Shuang; Shen, Na-na; Li, Shu-xia

    2014-01-01

    To meet the forecasting requirements for key technology indicators in the flotation process, a BP neural network soft-sensor model based on feature extraction from flotation froth images and optimized by a shuffled cuckoo search algorithm is proposed. Based on digital image processing techniques, the color features in HSI color space, the visual features based on the gray-level co-occurrence matrix, and the shape characteristics based on the geometric theory of flotation froth images are extracted as the input variables of the proposed soft-sensor model. The isometric mapping method is then used to reduce the input dimension, the network size, and the learning time of the BP neural network. Finally, a shuffled cuckoo search algorithm is adopted to optimize the BP neural network soft-sensor model. Simulation results show that the model has better generalization and prediction accuracy. PMID:25133210
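
    The gray-level co-occurrence features used as soft-sensor inputs can be sketched with scikit-image as below; the quantization level, offsets, angles, chosen properties, and random stand-in froth frame are illustrative assumptions, and the color, shape, isometric-mapping, and network stages are not reproduced.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray_image, levels=32):
    """Texture features of a froth image from the gray-level co-occurrence matrix."""
    # Quantise to a small number of gray levels to keep the GLCM well populated
    q = (gray_image.astype(float) / gray_image.max() * (levels - 1)).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    return {prop: float(graycoprops(glcm, prop).mean())
            for prop in ("contrast", "homogeneity", "energy", "correlation")}

rng = np.random.default_rng(7)
froth = rng.integers(0, 256, size=(128, 128)).astype(np.uint8)  # stand-in for a froth frame
print(glcm_features(froth))
```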

  3. Tell me more: Can a memory test reduce analogue traumatic intrusions?

    PubMed

    Krans, Julie; Näring, Gérard; Holmes, Emily A; Becker, Eni S

    2009-05-01

    Information processing theories of post-traumatic stress disorder (PTSD) state that intrusive images emerge due to a lack of integration of perceptual trauma representations in autobiographical memory. To test this hypothesis experimentally, participants were shown an aversive film to elicit intrusive images. After viewing, they received a recognition test for just one part of the film. The test contained neutrally formulated items to rehearse information from the film. Participants reported intrusive images for the film in an intrusion diary during one week after viewing. In line with expectations, the number of intrusive images decreased only for the part of the film for which the recognition test was given. Furthermore, deliberate cued-recall memory after one week was selectively enhanced for the film part that was in the recognition test a week before. The findings provide new evidence supporting information processing models of PTSD and have potential implications for early interventions after trauma.

  4. Point spread functions and deconvolution of ultrasonic images.

    PubMed

    Dalitz, Christoph; Pohle-Fröhlich, Regina; Michalk, Thorsten

    2015-03-01

    This article investigates the restoration of ultrasonic pulse-echo C-scan images by means of deconvolution with a point spread function (PSF). The deconvolution concept from linear system theory (LST) is linked to the wave equation formulation of the imaging process, and an analytic formula for the PSF of planar transducers is derived. For this analytic expression, different numerical and analytic approximation schemes for evaluating the PSF are presented. By comparing simulated images with measured C-scan images, we demonstrate that the assumptions of LST in combination with our formula for the PSF are a good model for the pulse-echo imaging process. To reconstruct the object from a C-scan image, we compare different deconvolution schemes: the Wiener filter, the ForWaRD algorithm, and the Richardson-Lucy algorithm. The best results are obtained with the Richardson-Lucy algorithm with total variation regularization. For distances greater than or equal to twice the near-field distance, our experiments show that the numerically computed PSF can be replaced with a simple closed-form analytic term based on a far-field approximation.
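
    A basic Richardson-Lucy restoration of a blurred C-scan-like image can be sketched with scikit-image as below. The Gaussian kernel is only a stand-in for the analytic planar-transducer PSF derived in the article, the scene and iteration count are arbitrary, and the total-variation-regularized variant that performed best is not included in this basic call.

```python
import numpy as np
from scipy.signal import convolve2d
from skimage.restoration import richardson_lucy

# Gaussian stand-in for the transducer PSF (the article's analytic PSF would be used in practice)
y, x = np.mgrid[-7:8, -7:8]
psf = np.exp(-(x**2 + y**2) / (2.0 * 2.0**2))
psf /= psf.sum()

rng = np.random.default_rng(8)
obj = np.zeros((64, 64))
obj[20:24, 30:34] = 1.0        # small reflector
obj[45, 10] = 1.0              # point-like reflector
blurred = convolve2d(obj, psf, mode="same")
blurred += rng.normal(scale=0.005, size=blurred.shape)   # measurement noise
blurred = np.clip(blurred, 0, None)

restored = richardson_lucy(blurred, psf, 30)   # 30 iterations, no extra regularisation
print(float(np.abs(restored - obj).mean()), float(np.abs(blurred - obj).mean()))
```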

  5. Making Holograms at School

    ERIC Educational Resources Information Center

    Robinson, M. L. A.; Saunders, A. P.

    1973-01-01

    Discusses holography, a process by which a three-dimensional image of an object can be completely recorded on a photographic film or plate. Introduces hologram theory with a view to presenting it to senior high school students, and explains how the apparently very great experimental difficulties can be overcome. (JR)

  6. Low-complexity video encoding method for wireless image transmission in capsule endoscope.

    PubMed

    Takizawa, Kenichi; Hamaguchi, Kiyoshi

    2010-01-01

    This paper presents a low-complexity video encoding method applicable to wireless image transmission from capsule endoscopes. The encoding method is based on Wyner-Ziv theory, in which information that would normally be exploited at the transmitter is instead treated as side information at the receiver. Complex video-encoding processes, such as motion-vector estimation, are therefore moved to the receiver side, which has a larger-capacity battery. As a result, the encoding process reduces to decimating the channel-coded original data. We provide a performance evaluation of a low-density parity-check (LDPC) coding method over the AWGN channel.

  7. Modulation of the semantic system by word imageability.

    PubMed

    Sabsevitz, D S; Medler, D A; Seidenberg, M; Binder, J R

    2005-08-01

    A prevailing neurobiological theory of semantic memory proposes that part of our knowledge about concrete, highly imageable concepts is stored in the form of sensory-motor representations. While this theory predicts differential activation of the semantic system by concrete and abstract words, previous functional imaging studies employing this contrast have provided relatively little supporting evidence. We acquired event-related functional magnetic resonance imaging (fMRI) data while participants performed a semantic similarity judgment task on a large number of concrete and abstract noun triads. Task difficulty was manipulated by varying the degree to which the words in the triad were similar in meaning. Concrete nouns, relative to abstract nouns, produced greater activation in a bilateral network of multimodal and heteromodal association areas, including ventral and medial temporal, posterior-inferior parietal, dorsal prefrontal, and posterior cingulate cortex. In contrast, abstract nouns produced greater activation almost exclusively in the left hemisphere in superior temporal and inferior frontal cortex. Increasing task difficulty modulated activation mainly in attention, working memory, and response monitoring systems, with almost no effect on areas that were modulated by imageability. These data provide critical support for the hypothesis that concrete, imageable concepts activate perceptually based representations not available to abstract concepts. In contrast, processing abstract concepts makes greater demands on left perisylvian phonological and lexical retrieval systems. The findings are compatible with dual coding theory and less consistent with single-code models of conceptual representation. The lack of overlap between imageability and task difficulty effects suggests that once the neural representation of a concept is activated, further maintenance and manipulation of that information in working memory does not further increase neural activation in the conceptual store.

  8. Future trends in image coding

    NASA Astrophysics Data System (ADS)

    Habibi, Ali

    1993-01-01

    The objective of this article is to present a discussion on the future of image data compression in the next two decades. It is virtually impossible to predict with any degree of certainty the breakthroughs in theory and developments, the milestones in advancement of technology and the success of the upcoming commercial products in the market place which will be the main factors in establishing the future stage to image coding. What we propose to do, instead, is look back at the progress in image coding during the last two decades and assess the state of the art in image coding today. Then, by observing the trends in developments of theory, software, and hardware coupled with the future needs for use and dissemination of imagery data and the constraints on the bandwidth and capacity of various networks, predict the future state of image coding. What seems to be certain today is the growing need for bandwidth compression. The television is using a technology which is half a century old and is ready to be replaced by high definition television with an extremely high digital bandwidth. Smart telephones coupled with personal computers and TV monitors accommodating both printed and video data will be common in homes and businesses within the next decade. Efficient and compact digital processing modules using developing technologies will make bandwidth compressed imagery the cheap and preferred alternative in satellite and on-board applications. In view of the above needs, we expect increased activities in development of theory, software, special purpose chips and hardware for image bandwidth compression in the next two decades. The following sections summarize the future trends in these areas.

  9. Body-Image Evaluation and Body-Image Investment among Adolescents: A Test of Sociocultural and Social Comparison Theories

    ERIC Educational Resources Information Center

    Morrison, Todd G.; Kalin, Rudolf; Morrison, Melanie A.

    2004-01-01

    Sociocultural theory and social comparison theory were used to account for variations in body-image evaluation and body-image investment among male and female adolescents (N = 1,543). Exposure to magazines and television programs containing idealistic body imagery as well as frequency of self-comparison to universalistic targets (e.g., fashion…

  10. NEFI: Network Extraction From Images

    PubMed Central

    Dirnberger, M.; Kehl, T.; Neumann, A.

    2015-01-01

    Networks are amongst the central building blocks of many systems. Given a graph of a network, methods from graph theory enable a precise investigation of its properties. Software for the analysis of graphs is widely available and has been applied to study various types of networks. In some applications, graph acquisition is relatively simple. However, for many networks data collection relies on images where graph extraction requires domain-specific solutions. Here we introduce NEFI, a tool that extracts graphs from images of networks originating in various domains. Regarding previous work on graph extraction, theoretical results are fully accessible only to an expert audience and ready-to-use implementations for non-experts are rarely available or insufficiently documented. NEFI provides a novel platform allowing practitioners to easily extract graphs from images by combining basic tools from image processing, computer vision and graph theory. Thus, NEFI constitutes an alternative to tedious manual graph extraction and special purpose tools. We anticipate NEFI to enable time-efficient collection of large datasets. The analysis of these novel datasets may open up the possibility to gain new insights into the structure and function of various networks. NEFI is open source and available at http://nefi.mpi-inf.mpg.de. PMID:26521675
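
    NEFI itself is a standalone tool, but the underlying idea can be sketched in a few lines: skeletonize a binary image of a network, treat skeleton pixels as nodes, and connect 8-neighbours into a graph for analysis with networkx. The cross-shaped test image and pixel-level graph construction below are simplifying assumptions, not NEFI's actual pipeline.

```python
import numpy as np
import networkx as nx
from skimage.morphology import skeletonize

def graph_from_binary(binary_image):
    """Build a pixel-level graph from a binary image of a network:
    skeletonize, make every skeleton pixel a node, connect 8-neighbours."""
    skel = skeletonize(binary_image.astype(bool))
    graph = nx.Graph()
    pixels = set(zip(*np.nonzero(skel)))
    for (r, c) in pixels:
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if (dr, dc) != (0, 0) and (r + dr, c + dc) in pixels:
                    graph.add_edge((r, c), (r + dr, c + dc))
    return graph

# A simple cross-shaped "network"
img = np.zeros((21, 21), dtype=bool)
img[10, :] = True
img[:, 10] = True
g = graph_from_binary(img)
print(g.number_of_nodes(), g.number_of_edges(),
      [n for n in g if g.degree(n) > 2])   # high-degree, junction-like pixels
```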

  11. Application of the EM algorithm to radiographic images.

    PubMed

    Brailean, J C; Little, D; Giger, M L; Chen, C T; Sullivan, B J

    1992-01-01

    The expectation maximization (EM) algorithm has received considerable attention in the area of positron emission tomography (PET) as a restoration and reconstruction technique. In this paper, the restoration capabilities of the EM algorithm when applied to radiographic images are investigated. This application does not involve reconstruction. The performance of the EM algorithm is quantitatively evaluated using a "perceived" signal-to-noise ratio (SNR) as the image quality metric. This perceived SNR is based on statistical decision theory and includes both the observer's visual response function and a noise component internal to the eye-brain system. For a variety of processing parameters, the relative SNR (ratio of the processed SNR to the original SNR) is calculated and used as a metric to compare quantitatively the effects of the EM algorithm with two other image enhancement techniques: global contrast enhancement (windowing) and unsharp mask filtering. The results suggest that the EM algorithm's performance is superior when compared to unsharp mask filtering and global contrast enhancement for radiographic images which contain objects smaller than 4 mm.
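
    For readers who want to experiment with EM-based restoration of this kind, the following sketch implements the Richardson-Lucy iteration, which is the EM estimator for a blurred image under a Poisson noise model; the Gaussian PSF, iteration count, and synthetic test object are assumptions for the example, not the parameters used in the paper.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, num_iter=30, eps=1e-12):
    """EM (Richardson-Lucy) restoration for a blurred, Poisson-noisy image."""
    estimate = np.full_like(observed, observed.mean(), dtype=float)
    psf_mirror = psf[::-1, ::-1]
    for _ in range(num_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = observed / (blurred + eps)
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

# toy usage: blur a synthetic "radiograph" with a Gaussian PSF and restore it
x, y = np.meshgrid(np.arange(15) - 7, np.arange(15) - 7)
psf = np.exp(-(x**2 + y**2) / (2 * 2.0**2))
psf /= psf.sum()
truth = np.zeros((128, 128))
truth[60:68, 60:68] = 1.0
blurred_truth = np.clip(fftconvolve(truth, psf, mode="same"), 0, None)
observed = np.random.poisson(blurred_truth * 50 + 1.0).astype(float)
restored = richardson_lucy(observed, psf)
```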

  12. The concreteness effect: evidence for dual coding and context availability.

    PubMed

    Jessen, F; Heun, R; Erb, M; Granath, D O; Klose, U; Papassotiropoulos, A; Grodd, W

    2000-08-01

    The term concreteness effect refers to the observation that concrete nouns are processed faster and more accurately than abstract nouns in a variety of cognitive tasks. Two models have been proposed to explain the neuronal basis of the concreteness effect. The dual-coding theory attributes the advantage to the access of a right-hemisphere image-based system, in addition to a verbal system, by concrete words. The context availability theory argues that concrete words activate broader contextual verbal support, which results in faster processing, but does not access a distinct image-based system. We used event-related fMRI to detect the brain regions that subserve the concreteness effect. We found greater activation in the lower right and left parietal lobes, in the left inferior frontal lobe and in the precuneus during encoding of concrete compared to abstract nouns. This makes a single exclusive theory unlikely and rather suggests a combination of both models. Superior encoding of concrete words in the present study may result from (1) greater verbal context resources reflected by the activation of left parietal and frontal associative areas, and (2) the additional activation of a non-verbal, perhaps spatial imagery-based system, in the right parietal lobe. Copyright 2000 Academic Press.

  13. Computer model for harmonic ultrasound imaging.

    PubMed

    Li, Y; Zagzebski, J A

    2000-01-01

    Harmonic ultrasound imaging has received great attention from ultrasound scanner manufacturers and researchers. In this paper, we present a computer model that can generate realistic harmonic images. In this model, the incident ultrasound is modeled after the "KZK" equation, and the echo signal is modeled using linear propagation theory because the echo signal is much weaker than the incident pulse. Both time domain and frequency domain numerical solutions to the "KZK" equation were studied. Realistic harmonic images of spherical lesion phantoms were generated for scans by a circular transducer. This model can be a very useful tool for studying the harmonic buildup and dissipation processes in a nonlinear medium, and it can be used to investigate a wide variety of topics related to B-mode harmonic imaging.

  14. Computer model for harmonic ultrasound imaging.

    PubMed

    Li, Y; Zagzebski, J A

    2000-01-01

    Harmonic ultrasound imaging has received great attention from ultrasound scanner manufacturers and researchers. Here, the authors present a computer model that can generate realistic harmonic images. In this model, the incident ultrasound is modeled after the "KZK" equation, and the echo signal is modeled using linear propagation theory because the echo signal is much weaker than the incident pulse. Both time domain and frequency domain numerical solutions to the "KZK" equation were studied. Realistic harmonic images of spherical lesion phantoms were generated for scans by a circular transducer. This model can be a very useful tool for studying the harmonic buildup and dissipation processes in a nonlinear medium, and it can be used to investigate a wide variety of topics related to B-mode harmonic imaging.

  15. How lateral inhibition and fast retinogeniculo-cortical oscillations create vision: A new hypothesis.

    PubMed

    Jerath, Ravinder; Cearley, Shannon M; Barnes, Vernon A; Nixon-Shapiro, Elizabeth

    2016-11-01

    The role of the physiological processes involved in human vision escapes clarification in current literature. Many unanswered questions about vision include: 1) whether there is more to lateral inhibition than previously proposed, 2) the role of the discs in rods and cones, 3) how inverted images on the retina are converted to erect images for visual perception, 4) what portion of the image formed on the retina is actually processed in the brain, 5) the reason we have an after-image with antagonistic colors, and 6) how we remember space. This theoretical article attempts to clarify some of the physiological processes involved with human vision. The global integration of visual information is conceptual; therefore, we include illustrations to present our theory. Universally, the eyeball is 2.4 cm in diameter and works together with membrane potential, correspondingly representing the retinal layers, photoreceptors, and cortex. Images formed within the photoreceptors must first be converted into chemical signals on the photoreceptors' individual discs and the signals at each disc are transduced from light photons into electrical signals. We contend that the discs code the electrical signals into accurate distances and are shown in our figures. The pre-existing oscillations among the various cortices including the striate and parietal cortex, and the retina work in unison to create an infrastructure of visual space that functionally "places" the objects within this "neural" space. The horizontal layers integrate all discs accurately to create a retina that is pre-coded for distance. Our theory suggests image inversion never takes place on the retina, but rather images fall onto the retina as compressed and coiled, then amplified through lateral inhibition through intensification and amplification on the OFF-center cones. The intensified and amplified images are decompressed and expanded in the brain, which become the images we perceive as external vision. This is a theoretical article presenting a novel hypothesis about the physiological processes in vision, and expounds upon the visual aspect of two of our previously published articles, "A unified 3D default space consciousness model combining neurological and physiological processes that underlie conscious experience", and "Functional representation of vision within the mind: A visual consciousness model based in 3D default space." Currently, neuroscience teaches that visual images are initially inverted on the retina, processed in the brain, and then conscious perception of vision happens in the visual cortex. Here, we propose that inversion of visual images never takes place because images enter the retina as coiled and compressed graded potentials that are intensified and amplified in OFF-center photoreceptors. Once they reach the brain, they are decompressed and expanded to the original size of the image, which is perceived by the brain as the external image. We adduce that pre-existing oscillations (alpha, beta, and gamma) among the various cortices in the brain (including the striate and parietal cortex) and the retina, work together in unison to create an infrastructure of visual space that functionally "places" the objects within a "neural" space. These fast oscillations "bring" the faculties of the cortical activity to the retina, creating the infrastructure of the space within the eye where visual information can be immediately recognized by the brain.
By this we mean that the visual (striate) cortex synchronizes the information with the photoreceptors in the retina, and the brain instantaneously receives the already processed visual image, thereby relinquishing the eye from being required to send the information to the brain to be interpreted before it can rise to consciousness. The visual system is a heavily studied area of neuroscience yet very little is known about how vision occurs. We believe that our novel hypothesis provides new insights into how vision becomes part of consciousness, helps to reconcile various previously proposed models, and further elucidates current questions in vision based on our unified 3D default space model. Illustrations are provided to aid in explaining our theory. Copyright © 2016. Published by Elsevier Ltd.

  16. Applications of independent component analysis in SAR images

    NASA Astrophysics Data System (ADS)

    Huang, Shiqi; Cai, Xinhua; Hui, Weihua; Xu, Ping

    2009-07-01

    The detection of faint, small, and hidden targets in synthetic aperture radar (SAR) images is still an open issue for automatic target recognition (ATR) systems. How to effectively separate these targets from the complex background is the aim of this paper. Independent component analysis (ICA) theory can enhance SAR image targets and improve the signal-to-clutter ratio (SCR), which benefits the detection and recognition of faint targets. Therefore, this paper proposes a new SAR image target detection algorithm based on ICA. In the experiments, the fast ICA (FICA) algorithm is utilized. Finally, real SAR image data are used to test the method. The experimental results verify that the algorithm is feasible, and that it can improve the SCR of SAR images and increase the detection rate for faint, small targets.
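
    A minimal sketch of the general idea, using scikit-learn's FastICA rather than the authors' exact FICA pipeline: several co-registered, speckle-like intensity images are treated as mixtures, ICA separates them into independent components, and the most non-Gaussian component is taken as the enhanced target channel. The synthetic image stack, the gamma speckle model, and the kurtosis-based selection rule are assumptions for the example.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Hypothetical input: a stack of K co-registered SAR-like intensity images,
# each treated as one observed mixture of clutter and a faint target.
rng = np.random.default_rng(0)
H, W, K = 64, 64, 4
target = np.zeros((H, W))
target[30:34, 30:34] = 5.0                              # faint point target
stack = np.stack([rng.gamma(1.0, 1.0, (H, W)) + target  # speckle-like clutter + target
                  for _ in range(K)])

X = stack.reshape(K, -1).T            # samples = pixels, features = the K images
ica = FastICA(n_components=K, random_state=0)
sources = ica.fit_transform(X)        # independent components, one per column

# pick the most non-Gaussian (highest-kurtosis) component as the target channel
kurt = ((sources - sources.mean(0)) ** 4).mean(0) / sources.var(0) ** 2 - 3
enhanced = np.abs(sources[:, np.argmax(kurt)]).reshape(H, W)
```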

  17. Quantitative Image Restoration in Bright Field Optical Microscopy.

    PubMed

    Gutiérrez-Medina, Braulio; Sánchez Miranda, Manuel de Jesús

    2017-11-07

    Bright field (BF) optical microscopy is regarded as a poor method to observe unstained biological samples due to intrinsic low image contrast. We introduce quantitative image restoration in bright field (QRBF), a digital image processing method that restores out-of-focus BF images of unstained cells. Our procedure is based on deconvolution, using a point spread function modeled from theory. By comparing with reference images of bacteria observed in fluorescence, we show that QRBF faithfully recovers shape and enables quantification of the size of individual cells, even from a single input image. We applied QRBF in a high-throughput image cytometer to assess shape changes in Escherichia coli during hyperosmotic shock, finding size heterogeneity. We demonstrate that QRBF is also applicable to eukaryotic cells (yeast). Altogether, digital restoration emerges as a straightforward alternative to methods designed to generate contrast in BF imaging for quantitative analysis. Copyright © 2017 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  18. Efficient image enhancement using sparse source separation in the Retinex theory

    NASA Astrophysics Data System (ADS)

    Yoon, Jongsu; Choi, Jangwon; Choe, Yoonsik

    2017-11-01

    Color constancy is the feature of the human vision system (HVS) that ensures the relative constancy of the perceived color of objects under varying illumination conditions. The Retinex theory of machine vision systems is based on the HVS. Among Retinex algorithms, the physics-based algorithms are efficient; however, they generally do not satisfy the local characteristics of the original Retinex theory because they eliminate global illumination from their optimization. We apply the sparse source separation technique to the Retinex theory to present a physics-based algorithm that satisfies the locality characteristic of the original Retinex theory. Previous Retinex algorithms have limited use in image enhancement because the total variation Retinex results in an overly enhanced image and the sparse source separation Retinex cannot completely restore the original image. In contrast, our proposed method preserves the image edge and can very nearly replicate the original image without any special operation.
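
    For orientation, the classic single-scale Retinex decomposition (not the sparse-source-separation algorithm proposed in the paper) can be sketched in a few lines: the illumination is approximated by a Gaussian-smoothed copy of the image and removed in the log domain. The scale sigma and the display rescaling are assumptions for the example.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(image, sigma=30.0):
    """Classic single-scale Retinex: reflectance = log(I) - log(Gaussian-smoothed I)."""
    img = image.astype(float) + 1.0               # avoid log(0)
    illumination = gaussian_filter(img, sigma)    # smooth estimate of global illumination
    reflectance = np.log(img) - np.log(illumination)
    # rescale to [0, 1] for display
    return (reflectance - reflectance.min()) / (np.ptp(reflectance) + 1e-12)
```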

  19. The Use of Narrative Paradigm Theory in Assessing Audience Value Conflict in Image Advertising.

    ERIC Educational Resources Information Center

    Stutts, Nancy B.; Barker, Randolph T.

    1999-01-01

    Presents an analysis of image advertisement developed from Narrative Paradigm Theory. Suggests that the nature of postmodern culture makes image advertising an appropriate external communication strategy for generating stake holder loyalty. Suggests that Narrative Paradigm Theory can identify potential sources of audience conflict by illuminating…

  20. Calibration of a polarimetric imaging SAR

    NASA Technical Reports Server (NTRS)

    Sarabandi, K.; Pierce, L. E.; Ulaby, F. T.

    1991-01-01

    Calibration of polarimetric imaging Synthetic Aperture Radars (SAR's) using point calibration targets is discussed. The four-port network calibration technique is used to describe the radar error model. The polarimetric ambiguity function of the SAR is then found using a single point target, namely a trihedral corner reflector. Based on this, an estimate for the backscattering coefficient of the terrain is found by a deconvolution process. A radar image taken by the JPL Airborne SAR (AIRSAR) is used for verification of the deconvolution calibration method. The calibrated responses of point targets in the image are compared both with theory and with the POLCAL technique. Also, the responses of a distributed target are compared using the deconvolution and POLCAL techniques.

  1. Efficient generalized cross-validation with applications to parametric image restoration and resolution enhancement.

    PubMed

    Nguyen, N; Milanfar, P; Golub, G

    2001-01-01

    In many image restoration/resolution enhancement applications, the blurring process, i.e., the point spread function (PSF) of the imaging system, is not known or is known only to within a set of parameters. We estimate these PSF parameters for this ill-posed class of inverse problems from raw data, along with the regularization parameters required to stabilize the solution, using the generalized cross-validation (GCV) method. We propose efficient approximation techniques based on the Lanczos algorithm and Gauss quadrature theory, reducing the computational complexity of the GCV. Data-driven PSF and regularization parameter estimation experiments with synthetic and real image sequences are presented to demonstrate the effectiveness and robustness of our method.
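
    To make the GCV idea concrete, the sketch below computes the GCV score of a Tikhonov-regularized least-squares problem directly from an SVD and picks the regularization parameter that minimizes it; this is the plain (non-approximated) GCV, not the Lanczos/Gauss-quadrature acceleration the paper proposes, and the toy linear system is an assumption for the example.

```python
import numpy as np

def gcv_curve(A, y, lambdas):
    """Generalized cross-validation scores for Tikhonov-regularized least squares."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    Uty = U.T @ y
    m = len(y)
    scores = []
    for lam in lambdas:
        filt = s**2 / (s**2 + lam)           # filter factors of the "hat" matrix
        residual = y - U @ (filt * Uty)      # y - A x_lambda
        trace_hat = filt.sum()
        scores.append(m * (residual @ residual) / (m - trace_hat) ** 2)
    return np.array(scores)

# pick the regularization parameter that minimizes the GCV score on a toy problem
rng = np.random.default_rng(1)
A = rng.normal(size=(50, 20))
y = A @ rng.normal(size=20) + 0.1 * rng.normal(size=50)
lams = np.logspace(-4, 2, 40)
best_lambda = lams[np.argmin(gcv_curve(A, y, lams))]
```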

  2. Quantum Random Number Generation Using a Quanta Image Sensor

    PubMed Central

    Amri, Emna; Felk, Yacine; Stucki, Damien; Ma, Jiaju; Fossum, Eric R.

    2016-01-01

    A new quantum random number generation method is proposed. The method is based on the randomness of the photon emission process and the single-photon counting capability of the Quanta Image Sensor (QIS). It has the potential to generate high-quality random numbers at a remarkable data output rate. In this paper, the principles of photon statistics and the theory of entropy are discussed. Sample data were collected with a QIS jot device, and their randomness quality was analyzed. The randomness assessment method and results are discussed. PMID:27367698
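
    A hedged toy version of the principle (simulated, not the QIS jot hardware): Poisson-distributed photon counts supply the raw randomness, the parity of each count gives a candidate bit, von Neumann debiasing removes residual bias, and the empirical Shannon entropy checks the output. The mean photon number and the parity/debiasing choices are assumptions for the illustration.

```python
import numpy as np

rng = np.random.default_rng()            # stands in for the physical sensor readout
mean_photons = 1.5                       # assumed mean photon number per jot per frame
counts = rng.poisson(mean_photons, size=1_000_000)

bits = counts & 1                        # parity of each count -> candidate random bit

# von Neumann debiasing: from non-overlapping pairs keep 01 -> 0 and 10 -> 1
pairs = bits[: len(bits) // 2 * 2].reshape(-1, 2)
keep = pairs[:, 0] != pairs[:, 1]
random_bits = pairs[keep, 0]

# empirical Shannon entropy per output bit (should be close to 1)
p1 = random_bits.mean()
entropy = -(p1 * np.log2(p1) + (1 - p1) * np.log2(1 - p1))
```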

  3. Coherent multiscale image processing using dual-tree quaternion wavelets.

    PubMed

    Chan, Wai Lam; Choi, Hyeokho; Baraniuk, Richard G

    2008-07-01

    The dual-tree quaternion wavelet transform (QWT) is a new multiscale analysis tool for geometric image features. The QWT is a near shift-invariant tight frame representation whose coefficients sport a magnitude and three phases: two phases encode local image shifts while the third contains image texture information. The QWT is based on an alternative theory for the 2-D Hilbert transform and can be computed using a dual-tree filter bank with linear computational complexity. To demonstrate the properties of the QWT's coherent magnitude/phase representation, we develop an efficient and accurate procedure for estimating the local geometrical structure of an image. We also develop a new multiscale algorithm for estimating the disparity between a pair of images that is promising for image registration and flow estimation applications. The algorithm features multiscale phase unwrapping, linear complexity, and sub-pixel estimation accuracy.

  4. Images Encryption Method using Steganographic LSB Method, AES and RSA algorithm

    NASA Astrophysics Data System (ADS)

    Moumen, Abdelkader; Sissaoui, Hocine

    2017-03-01

    The vulnerability of communication of digital images is an extremely important issue nowadays, particularly when the images are communicated through insecure channels. To improve communication security, many cryptosystems have been presented in the image encryption literature. This paper proposes a novel image encryption technique based on an algorithm that is faster than current methods. The proposed algorithm eliminates the step in which the secret key is shared during the encryption process. It is formulated based on symmetric encryption, asymmetric encryption and steganography theories. The image is encrypted using a symmetric algorithm; then, the secret key is encrypted by means of an asymmetric algorithm and hidden in the ciphered image using a least-significant-bits steganographic scheme. The analysis results show that, while enjoying the faster computation, our method performs close to optimal in terms of accuracy.
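
    A small sketch of the steganographic step only, assuming an 8-bit grayscale cover image and a payload that would, in the paper's scheme, be the asymmetrically encrypted secret key; the encryption itself is omitted here and the helper names are hypothetical.

```python
import numpy as np

def embed_lsb(cover, payload_bytes):
    """Hide payload bytes in the least-significant bits of an 8-bit image."""
    bits = np.unpackbits(np.frombuffer(payload_bytes, dtype=np.uint8))
    flat = cover.astype(np.uint8).ravel().copy()
    if bits.size > flat.size:
        raise ValueError("payload too large for this cover image")
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(cover.shape)

def extract_lsb(stego, n_bytes):
    """Recover n_bytes previously embedded with embed_lsb."""
    bits = stego.ravel()[: n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

# usage: the "payload" would be the RSA-encrypted AES key in the paper's scheme
cover = np.random.randint(0, 256, size=(256, 256), dtype=np.uint8)
secret = b"example encrypted key bytes"
stego = embed_lsb(cover, secret)
assert extract_lsb(stego, len(secret)) == secret
```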

  5. Role of electron-electron interference in ultrafast time-resolved imaging of electronic wavepackets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dixit, Gopal; Santra, Robin; Department of Physics, University of Hamburg, D-20355 Hamburg

    2013-04-07

    Ultrafast time-resolved x-ray scattering is an emerging approach to image the dynamical evolution of the electronic charge distribution during complex chemical and biological processes in real-space and real-time. Recently, the differences between semiclassical and quantum-electrodynamical (QED) theory of light-matter interaction for scattering of ultrashort x-ray pulses from the electronic wavepacket were formally demonstrated and visually illustrated by scattering patterns calculated for an electronic wavepacket in atomic hydrogen [G. Dixit, O. Vendrell, and R. Santra, Proc. Natl. Acad. Sci. U.S.A. 109, 11636 (2012)]. In this work, we present a detailed analysis of time-resolved x-ray scattering from a sample containing a mixture of non-stationary and stationary electrons within both the theories. In a many-electron system, the role of scattering interference between a non-stationary and several stationary electrons to the total scattering signal is investigated. In general, QED and semiclassical theory provide different results for the contribution from the scattering interference, which depends on the energy resolution of the detector and the x-ray pulse duration. The present findings are demonstrated by means of a numerical example of x-ray time-resolved imaging for an electronic wavepacket in helium. It is shown that the time-dependent scattering interference vanishes within semiclassical theory and the corresponding patterns are dominated by the scattering contribution from the time-independent interference, whereas the time-dependent scattering interference contribution does not vanish in the QED theory and the patterns are dominated by the scattering contribution from the non-stationary electron scattering.

  6. Role of electron-electron interference in ultrafast time-resolved imaging of electronic wavepackets

    NASA Astrophysics Data System (ADS)

    Dixit, Gopal; Santra, Robin

    2013-04-01

    Ultrafast time-resolved x-ray scattering is an emerging approach to image the dynamical evolution of the electronic charge distribution during complex chemical and biological processes in real-space and real-time. Recently, the differences between semiclassical and quantum-electrodynamical (QED) theory of light-matter interaction for scattering of ultrashort x-ray pulses from the electronic wavepacket were formally demonstrated and visually illustrated by scattering patterns calculated for an electronic wavepacket in atomic hydrogen [G. Dixit, O. Vendrell, and R. Santra, Proc. Natl. Acad. Sci. U.S.A. 109, 11636 (2012)], 10.1073/pnas.1202226109. In this work, we present a detailed analysis of time-resolved x-ray scattering from a sample containing a mixture of non-stationary and stationary electrons within both the theories. In a many-electron system, the role of scattering interference between a non-stationary and several stationary electrons to the total scattering signal is investigated. In general, QED and semiclassical theory provide different results for the contribution from the scattering interference, which depends on the energy resolution of the detector and the x-ray pulse duration. The present findings are demonstrated by means of a numerical example of x-ray time-resolved imaging for an electronic wavepacket in helium. It is shown that the time-dependent scattering interference vanishes within semiclassical theory and the corresponding patterns are dominated by the scattering contribution from the time-independent interference, whereas the time-dependent scattering interference contribution does not vanish in the QED theory and the patterns are dominated by the scattering contribution from the non-stationary electron scattering.

  7. Role of electron-electron interference in ultrafast time-resolved imaging of electronic wavepackets.

    PubMed

    Dixit, Gopal; Santra, Robin

    2013-04-07

    Ultrafast time-resolved x-ray scattering is an emerging approach to image the dynamical evolution of the electronic charge distribution during complex chemical and biological processes in real-space and real-time. Recently, the differences between semiclassical and quantum-electrodynamical (QED) theory of light-matter interaction for scattering of ultrashort x-ray pulses from the electronic wavepacket were formally demonstrated and visually illustrated by scattering patterns calculated for an electronic wavepacket in atomic hydrogen [G. Dixit, O. Vendrell, and R. Santra, Proc. Natl. Acad. Sci. U.S.A. 109, 11636 (2012)]. In this work, we present a detailed analysis of time-resolved x-ray scattering from a sample containing a mixture of non-stationary and stationary electrons within both the theories. In a many-electron system, the role of scattering interference between a non-stationary and several stationary electrons to the total scattering signal is investigated. In general, QED and semiclassical theory provide different results for the contribution from the scattering interference, which depends on the energy resolution of the detector and the x-ray pulse duration. The present findings are demonstrated by means of a numerical example of x-ray time-resolved imaging for an electronic wavepacket in helium. It is shown that the time-dependent scattering interference vanishes within semiclassical theory and the corresponding patterns are dominated by the scattering contribution from the time-independent interference, whereas the time-dependent scattering interference contribution does not vanish in the QED theory and the patterns are dominated by the scattering contribution from the non-stationary electron scattering.

  8. Behavioral change in patients with severe self-injurious behavior: a patient's perspective.

    PubMed

    Kool, Nienke; van Meijel, Berno; Bosman, Maartje

    2009-02-01

    Semistructured interviews were conducted with 12 women who had successfully stopped self-injuring to gain an understanding of the process of stopping self-injury. The data were analyzed based on the grounded theory method. The researchers found that the process of stopping self-injury consists of six phases. Connection was identified as key to all phases of the process. Nursing interventions should focus on forging a connection, encouraging people who self-injure to develop a positive self-image and learn alternative behavior.

  9. Grammatical verb aspect and event roles in sentence processing.

    PubMed

    Madden-Lombardi, Carol; Dominey, Peter Ford; Ventre-Dominey, Jocelyne

    2017-01-01

    Two experiments examine how grammatical verb aspect constrains our understanding of events. According to linguistic theory, an event described in the perfect aspect (John had opened the bottle) should evoke a mental representation of a finished event with focus on the resulting object, whereas an event described in the imperfective aspect (John was opening the bottle) should evoke a representation of the event as ongoing, including all stages of the event, and focusing all entities relevant to the ongoing action (instruments, objects, agents, locations, etc.). To test this idea, participants saw rebus sentences in the perfect and imperfective aspect, presented one word at a time, self-paced. In each sentence, the instrument and the recipient of the action were replaced by pictures (John was using/had used a *corkscrew* to open the *bottle* at the restaurant). Time to process the two images as well as speed and accuracy on sensibility judgments were measured. Although experimental sentences always made sense, half of the object and instrument pictures did not match the temporal constraints of the verb. For instance, in perfect sentences aspect-congruent trials presented an image of the corkscrew closed (no longer in-use) and the wine bottle fully open. The aspect-incongruent yet still sensible versions either replaced the corkscrew with an in-use corkscrew (open, in-hand) or the bottle image with a half-opened bottle. In this case, the participant would still respond "yes", but with longer expected response times. A three-way interaction among Verb Aspect, Sentence Role, and Temporal Match on image processing times showed that participants were faster to process images that matched rather than mismatched the aspect of the verb, especially for resulting objects in perfect sentences. A second experiment replicated and extended the results to confirm that this was not due to the placement of the object in the sentence. These two experiments extend previous research, showing how verb aspect drives not only the temporal structure of event representation, but also the focus on specific roles of the event. More generally, the findings of visual match during online sentence-picture processing are consistent with theories of perceptual simulation.

  10. Grammatical verb aspect and event roles in sentence processing

    PubMed Central

    Madden-Lombardi, Carol; Dominey, Peter Ford; Ventre-Dominey, Jocelyne

    2017-01-01

    Two experiments examine how grammatical verb aspect constrains our understanding of events. According to linguistic theory, an event described in the perfect aspect (John had opened the bottle) should evoke a mental representation of a finished event with focus on the resulting object, whereas an event described in the imperfective aspect (John was opening the bottle) should evoke a representation of the event as ongoing, including all stages of the event, and focusing all entities relevant to the ongoing action (instruments, objects, agents, locations, etc.). To test this idea, participants saw rebus sentences in the perfect and imperfective aspect, presented one word at a time, self-paced. In each sentence, the instrument and the recipient of the action were replaced by pictures (John was using/had used a *corkscrew* to open the *bottle* at the restaurant). Time to process the two images as well as speed and accuracy on sensibility judgments were measured. Although experimental sentences always made sense, half of the object and instrument pictures did not match the temporal constraints of the verb. For instance, in perfect sentences aspect-congruent trials presented an image of the corkscrew closed (no longer in-use) and the wine bottle fully open. The aspect-incongruent yet still sensible versions either replaced the corkscrew with an in-use corkscrew (open, in-hand) or the bottle image with a half-opened bottle. In this case, the participant would still respond “yes”, but with longer expected response times. A three-way interaction among Verb Aspect, Sentence Role, and Temporal Match on image processing times showed that participants were faster to process images that matched rather than mismatched the aspect of the verb, especially for resulting objects in perfect sentences. A second experiment replicated and extended the results to confirm that this was not due to the placement of the object in the sentence. These two experiments extend previous research, showing how verb aspect drives not only the temporal structure of event representation, but also the focus on specific roles of the event. More generally, the findings of visual match during online sentence-picture processing are consistent with theories of perceptual simulation. PMID:29287091

  11. New possibilities in the prevention of eating disorders: The introduction of positive body image measures.

    PubMed

    Piran, Niva

    2015-06-01

    Delineating positive psychological processes in inhabiting the body, as well as quantitative measures to assess them, can facilitate progress in the field of prevention of eating disorders by expanding outcome evaluation of prevention interventions, identifying novel mediators of change, and increasing highly needed research into protective factors. Moreover, enhancing positive ways of inhabiting the body may contribute toward the maintenance of gains of prevention interventions. Integrated social etiological models to eating disorders that focus on gender and other social variables, such as the Developmental Theory of Embodiment (Piran & Teall, 2012a), can contribute to positive body image intervention development and research within the prevention field. Using the Developmental Theory of Embodiment as a lens, this article explores whether existing prevention programs (i.e., Cognitive Dissonance and Media Smart) may already work to promote positive body image, and whether prevention programs need to be expanded toward this goal. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. Contour sensitive saliency and depth application in image retargeting

    NASA Astrophysics Data System (ADS)

    Lu, Hongju; Yue, Pengfei; Zhao, Yanhui; Liu, Rui; Fu, Yuanbin; Zheng, Yuanjie; Cui, Jia

    2018-04-01

    Image retargeting requires preserving important information with minimal edge distortion while increasing or decreasing image size. The major existing content-aware methods perform well; however, two problems should be improved: the slight distortion that appears at object edges and the structure distortion in non-salient areas. According to psychological theories, people evaluate image quality through multi-level judgments and comparisons between different areas, considering both image content and image structure. This paper proposes a new criterion: structure preservation in non-salient areas. Observation and image analysis show that slight blur generally exists at the edges of objects. This blur feature is used to estimate a depth cue, named the blur depth descriptor, which can be used in the saliency computation to balance the image retargeting result. To preserve structure information in non-salient areas, a salient edge map is used in the Seam Carving process instead of a field-based saliency computation; the derivative saliency in the x- and y-directions avoids redundant energy seams around salient objects that cause structure distortion. Comparison experiments between classical approaches and ours demonstrate the feasibility of the algorithm.
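
    For context, the sketch below shows the core Seam Carving operation that such retargeting methods build on: a gradient-magnitude energy map (the place where a saliency or salient-edge map like the paper's could be substituted) and dynamic-programming removal of one minimum-energy vertical seam. The simple energy definition is an assumption of the example, not the blur-depth descriptor proposed above.

```python
import numpy as np

def gradient_energy(gray):
    """Simple gradient-magnitude energy map; a saliency map could be added here."""
    gy, gx = np.gradient(gray.astype(float))
    return np.abs(gx) + np.abs(gy)

def remove_vertical_seam(gray):
    """Remove one minimum-energy vertical seam via dynamic programming."""
    energy = gradient_energy(gray)
    h, w = energy.shape
    cost = energy.copy()
    for i in range(1, h):
        left = np.r_[np.inf, cost[i - 1, :-1]]
        up = cost[i - 1]
        right = np.r_[cost[i - 1, 1:], np.inf]
        cost[i] += np.minimum(np.minimum(left, up), right)
    # backtrack the cheapest seam from bottom to top
    seam = np.empty(h, dtype=int)
    seam[-1] = int(np.argmin(cost[-1]))
    for i in range(h - 2, -1, -1):
        j = seam[i + 1]
        lo, hi = max(j - 1, 0), min(j + 2, w)
        seam[i] = lo + int(np.argmin(cost[i, lo:hi]))
    # delete the seam pixel from every row
    mask = np.ones_like(gray, dtype=bool)
    mask[np.arange(h), seam] = False
    return gray[mask].reshape(h, w - 1)

# usage: repeatedly call remove_vertical_seam(image) to shrink the width
```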

  13. Speech processing: from peripheral to hemispheric asymmetry of the auditory system.

    PubMed

    Lazard, Diane S; Collette, Jean-Louis; Perrot, Xavier

    2012-01-01

    Language processing from the cochlea to auditory association cortices shows side-dependent specificities with an apparent left hemispheric dominance. The aim of this article was to propose to nonspeech specialists a didactic review of two complementary theories about hemispheric asymmetry in speech processing. Starting from anatomico-physiological and clinical observations of auditory asymmetry and interhemispheric connections, this review then exposes behavioral (dichotic listening paradigm) as well as functional (functional magnetic resonance imaging and positron emission tomography) experiments that assessed hemispheric specialization for speech processing. Even though speech at an early phonological level is regarded as being processed bilaterally, a left-hemispheric dominance exists for higher-level processing. This asymmetry may arise from a segregation of the speech signal, broken apart within nonprimary auditory areas in two distinct temporal integration windows--a fast one on the left and a slower one on the right--modeled through the asymmetric sampling in time theory or a spectro-temporal trade-off, with a higher temporal resolution in the left hemisphere and a higher spectral resolution in the right hemisphere, modeled through the spectral/temporal resolution trade-off theory. Both theories deal with the concept that lower-order tuning principles for acoustic signal might drive higher-order organization for speech processing. However, the precise nature, mechanisms, and origin of speech processing asymmetry are still being debated. Finally, an example of hemispheric asymmetry alteration, which has direct clinical implications, is given through the case of auditory aging that mixes peripheral disorder and modifications of central processing. Copyright © 2011 The American Laryngological, Rhinological, and Otological Society, Inc.

  14. Modeling fixation locations using spatial point processes.

    PubMed

    Barthelmé, Simon; Trukenbrod, Hans; Engbert, Ralf; Wichmann, Felix

    2013-10-01

    Whenever eye movements are measured, a central part of the analysis has to do with where subjects fixate and why they fixated where they fixated. To a first approximation, a set of fixations can be viewed as a set of points in space; this implies that fixations are spatial data and that the analysis of fixation locations can be beneficially thought of as a spatial statistics problem. We argue that thinking of fixation locations as arising from point processes is a very fruitful framework for eye-movement data, helping turn qualitative questions into quantitative ones. We provide a tutorial introduction to some of the main ideas of the field of spatial statistics, focusing especially on spatial Poisson processes. We show how point processes help relate image properties to fixation locations. In particular, we show how point processes naturally express the idea that image features' predictability for fixations may vary from one image to another. We review other methods of analysis used in the literature, show how they relate to point process theory, and argue that thinking in terms of point processes substantially extends the range of analyses that can be performed and clarifies their interpretation.
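
    As a minimal sketch of the point-process framing (not the authors' analysis code), an inhomogeneous spatial Poisson process can be approximated by binning fixations on a grid and fitting a Poisson regression of counts on an image covariate with statsmodels; the simulated covariate and true intensity are assumptions for the example.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical setup: fixation counts per spatial bin on one image, with a
# per-bin image covariate (e.g. local saliency or edge density).
rng = np.random.default_rng(2)
n_bins = 400
feature = rng.uniform(size=n_bins)                # stand-in image covariate
true_intensity = np.exp(-1.0 + 2.0 * feature)     # inhomogeneous Poisson rate
counts = rng.poisson(true_intensity)              # fixations per bin

# fit log(intensity) = b0 + b1 * feature, i.e. an inhomogeneous Poisson point
# process approximated by Poisson regression on the binned counts
X = sm.add_constant(feature)
result = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
print(result.params)    # recovers roughly (-1.0, 2.0)
```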

  15. From quantum to classical interactions between a free electron and a surface

    NASA Astrophysics Data System (ADS)

    Beierle, Peter James

    Quantum theory is often cited as one of the most empirically validated theories in terms of its predictive power and precision. These attributes have led to numerous scientific discoveries and technological advancements. However, the precise relationship between quantum and classical physics remains obscure. The prevailing description is known as decoherence theory, in which classical physics emerges from a more general quantum theory through environmental interaction. Sometimes referred to as the decoherence program, it does not solve the quantum measurement problem. We believe experiments performed between the microscopic and macroscopic worlds may help finish the program. The following considers a free electron that interacts with a surface (the environment), providing a controlled decoherence mechanism. There are non-decohering interactions to be examined and quantified before the weaker decohering effects are filtered out. In the first experiment, an electron beam passes over a surface that is illuminated by low-power laser light. This induces a surface charge redistribution causing the electron to deflect, and the parameters of this phenomenon are investigated. This system can be well understood in terms of classical electrodynamics, and the technological applications of this electron beam switch are considered. Such phenomena may mask decoherence effects. A second experiment tests decoherence theory by introducing a nanofabricated diffraction grating before the surface. The electron undergoes diffraction through the grating, but as it passes over the surface, various physical models predict that it will lose its wave-interference properties. Image-charge-based models, which predict a larger loss of contrast than is observed, are falsified (even though the electron does experience an image-charge force). A theoretical study demonstrates how a loss of contrast may be due not to decoherence (an irreversible process) but to dephasing (a reversible process arising from randomization of the wavefunction's phase). To resolve this ambiguity, a correlation function on an ensemble of diffraction patterns is analyzed after an electron undergoes either process in a path integral calculation. The diffraction pattern is successfully recovered for dephasing, but not for decoherence, thus verifying the correlation function as a potential tool in experimental studies to determine the nature of the observed process.

  16. Observation of wave celerity evolution in the nearshore using digital video imagery

    NASA Astrophysics Data System (ADS)

    Yoo, J.; Fritz, H. M.; Haas, K. A.; Work, P. A.; Barnes, C. F.; Cho, Y.

    2008-12-01

    The celerity of incident waves in the nearshore is observed from oblique video imagery collected at Myrtle Beach, S.C. The video camera covers a field of view with length scales of O(100) m. The celerity of waves propagating in shallow water, including the surf zone, is estimated by applying advanced image processing and analysis methods to individual video images sampled at 3 Hz. Original image sequences are processed through frame differencing and directional low-pass filtering to reduce the noise arising from foam in the surf zone. The breaking wave celerity is computed along a cross-shore transect from wave crest tracks extracted by a Radon-transform-based line detection method. The observed celerity from the nearshore video imagery is larger than the linear wave celerity computed from the measured water depths over the entire surf zone. Compared to the celerity based on the nonlinear shallow water wave equations (NSWE), computed using the measured depths and wave heights, the video-based celerity generally shows good agreement over the surf zone except in the regions around the incipient wave-breaking locations. In the regions around the breaker points, the observed wave celerity is even larger than the NSWE-based celerity due to the transition of wave crest shapes. The celerity observed from video imagery can be used to monitor the nearshore geometry through depth inversion based on nonlinear wave celerity theories. For this purpose, the excess celerity around the breaker points needs to be corrected relative to the nonlinear wave celerity theory applied.
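
    To illustrate the depth-inversion idea in its simplest form (a linear shallow-water approximation and a crude amplitude correction, not the full nonlinear celerity theory used in the study), one can invert observed celerity for depth as follows; the numbers are invented for the example.

```python
import numpy as np

g = 9.81                                     # gravitational acceleration (m/s^2)
celerity = np.array([4.5, 5.2, 6.0])         # observed phase speeds along the transect (m/s)
wave_height = np.array([0.8, 0.9, 1.0])      # observed wave heights (m)

depth_linear = celerity**2 / g                    # linear shallow-water inversion, c = sqrt(g h)
depth_corrected = celerity**2 / g - wave_height   # crude amplitude correction, c ~ sqrt(g (h + H))
```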

  17. The Socio-Moral Image Database (SMID): A novel stimulus set for the study of social, moral and affective processes.

    PubMed

    Crone, Damien L; Bode, Stefan; Murawski, Carsten; Laham, Simon M

    2018-01-01

    A major obstacle for the design of rigorous, reproducible studies in moral psychology is the lack of suitable stimulus sets. Here, we present the Socio-Moral Image Database (SMID), the largest standardized moral stimulus set assembled to date, containing 2,941 freely available photographic images, representing a wide range of morally (and affectively) positive, negative and neutral content. The SMID was validated with over 820,525 individual judgments from 2,716 participants, with normative ratings currently available for all images on affective valence and arousal, moral wrongness, and relevance to each of the five moral values posited by Moral Foundations Theory. We present a thorough analysis of the SMID regarding (1) inter-rater consensus, (2) rating precision, and (3) breadth and variability of moral content. Additionally, we provide recommendations for use aimed at efficient study design and reproducibility, and outline planned extensions to the database. We anticipate that the SMID will serve as a useful resource for psychological, neuroscientific and computational (e.g., natural language processing or computer vision) investigations of social, moral and affective processes. The SMID images, along with associated normative data and additional resources are available at https://osf.io/2rqad/.

  18. A novel algorithm of super-resolution image reconstruction based on multi-class dictionaries for natural scene

    NASA Astrophysics Data System (ADS)

    Wu, Wei; Zhao, Dewei; Zhang, Huan

    2015-12-01

    Super-resolution image reconstruction is an effective method for improving image quality and has important research significance in the field of image processing. However, the choice of dictionary directly affects the efficiency of image reconstruction. Sparse representation theory is introduced into the problem of nearest-neighbor selection. Building on sparse-representation-based super-resolution reconstruction, a super-resolution image reconstruction algorithm based on multi-class dictionaries is analyzed. This method avoids the redundancy problem of training a single overcomplete dictionary, makes each sub-dictionary more representative, and replaces the traditional Euclidean distance computation to improve the quality of the whole image reconstruction. In addition, non-local self-similarity regularization is introduced to address the ill-posed nature of the problem. Experimental results show that the algorithm achieves much better results than state-of-the-art algorithms in terms of both PSNR and visual perception.
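
    A hedged sketch of the sparse-coding machinery such methods rest on, using a single scikit-learn dictionary rather than the multi-class dictionaries proposed in the paper: patches are extracted, a dictionary is learned, each patch is sparsely coded with OMP, and the image is reassembled from the coded patches. Patch size, number of atoms, and sparsity level are assumptions for the example.

```python
import numpy as np
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d
from sklearn.decomposition import MiniBatchDictionaryLearning

# toy grayscale image in [0, 1]; in an SR pipeline this would be the low-resolution input
rng = np.random.default_rng(3)
image = rng.random((64, 64))

patches = extract_patches_2d(image, (8, 8))
X = patches.reshape(len(patches), -1)
mean = X.mean(axis=1, keepdims=True)
X_centered = X - mean

dico = MiniBatchDictionaryLearning(n_components=64, alpha=1.0,
                                   transform_algorithm="omp",
                                   transform_n_nonzero_coefs=5,
                                   random_state=0)
codes = dico.fit(X_centered).transform(X_centered)   # sparse codes per patch
X_hat = codes @ dico.components_ + mean               # reconstructed patches

reconstructed = reconstruct_from_patches_2d(X_hat.reshape(patches.shape), image.shape)
```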

  19. Absorbed dose in AgBr in direct film for photon energies ( < 150 keV): relation to optical density. Theoretical calculation and experimental evaluation.

    PubMed

    Helmrot, E; Alm Carlsson, G

    1996-01-01

    In the radiological process it is necessary to develop tools so as to explore how X-rays can be used in the most effective way. Evaluation of models to derive measures of image quality and risk-related parameters is one possibility of getting such a tool. Modelling the image receptor, an important part of the imaging chain, is then required. The aim of this work was to find convenient and accurate ways of describing the blackening of direct dental films by X-rays. Since the beginning of the 20th century, the relation between optical density and photon interactions in the silver bromide in X-ray films has been investigated by many authors. The first attempts used simple quantum theories with no consideration of underlying physical interaction processes. The theories were gradually made more realistic by the introduction of dosimetric concepts and cavity theory. A review of cavity theories for calculating the mean absorbed dose in the AgBr grains of the film emulsion is given in this work. The cavity theories of GREENING (15) and SPIERS-CHARLTON (37) were selected for calculating the mean absorbed dose in the AgBr grains relative to the air collision kerma (Kc,air) of the incident photons of Ultra-speed and Ektaspeed (intraoral) films using up-to-date values of interaction coefficients. GREENING'S theory is a multi-grain theory and the results depend on the relative amounts of silver bromide and gelatine in the emulsion layer. In the single grain theory of SPIERS-CHARLTON, the shape and size of the silver bromide grain are important. Calculations of absorbed dose in the silver bromide were compared with measurements of optical densities in Ultra-speed and Ektaspeed films for a broad range (25-145 kV) of X-ray energy. The calculated absorbed dose values were appropriately averaged over the complete photon energy spectrum, which was determined experimentally using a Compton spectrometer. For the whole range of tube potentials used, the measured optical densities of the films were found to be proportional to the mean absorbed dose in the AgBr grains calculated according to GREENING'S theory. They were also found to be proportional to the collision kerma in silver bromide (Kc,AgBr) indicating proportionality between Kc,AgBr and the mean absorbed dose in silver bromide. While GREENING'S theory shows that the quotient of the mean absorbed dose in silver bromide and Kc,AgBr varies with photon energy, this is not apparent when averaged over the broad (diagnostic) X-ray energy spectra used here. Alternatively, proportionality between Kc,AgBr and the mean absorbed dose in silver bromide can be interpreted as resulting from a combination of the SPIERS-CHARLTON theory, valid at low photon energies ( < 30 keV) and GREENING'S theory, which is strictly valid at energies above 50 keV. This study shows that the blackening of non-screen films can be related directly to the energy absorbed in the AgBr grains of the emulsion layer and that, for the purpose of modelling the imaging chain in intraoral radiography, film response can be represented by Kc,AgBr (at the position of the film) independent of photon energy. The importance of taking the complete X-ray energy spectrum into full account in deriving Kc,AgBr is clearly demonstrated, showing that the concept of effective energy must be used with care.

  20. Generalized probabilistic scale space for image restoration.

    PubMed

    Wong, Alexander; Mishra, Akshaya K

    2010-10-01

    A novel generalized sampling-based probabilistic scale space theory is proposed for image restoration. We explore extending the definition of scale space to better account for both noise and observation models, which is important for producing accurately restored images. A new class of scale-space realizations based on sampling and probability theory is introduced to realize this extended definition in the context of image restoration. Experimental results using 2-D images show that generalized sampling-based probabilistic scale-space theory can be used to produce more accurate restored images when compared with state-of-the-art scale-space formulations, particularly under situations characterized by low signal-to-noise ratios and image degradation.

  1. Einstein's creative thinking and the general theory of relativity: a documented report.

    PubMed

    Rothenberg, A

    1979-01-01

    A document written by Albert Einstein has recently come to light in which the eminent scientist described the actual sequence of his thoughts leading to the development of the general theory of relativity. The key creative thought was an instance of a type of creative cognition the author has previously designated "Janusian thinking." Janusian thinking consists of actively conceiving two or more opposite or antithetical concepts, ideas, or images simultaneously. This form of high-level secondary process cognition has been found to operate widely in art, science, and other fields.

  2. Image Theory: Policies, Goals, Strategies and Tactics in Decision Making.

    DTIC Science & Technology

    1986-03-01

    Report references: Harvard Business Review, July/August, 49-61; Mintzberg, H., Raisinghani, D., & Theoret, A. (1976), The structure of unstructured decision processes. Technical Report 86-3. Keywords: decision making, doubt, policies, uncertainty, subjective probability, tactics. (The remainder of the scanned report form is illegible.)

  3. Experimental Study for Automatic Colony Counting System Based Onimage Processing

    NASA Astrophysics Data System (ADS)

    Fang, Junlong; Li, Wenzhe; Wang, Guoxin

    Colony counting in many experiments is currently performed manually, which makes it difficult to execute quickly and accurately. A new automatic colony counting system was therefore developed. Using image-processing technology, a study was made of the feasibility of objectively distinguishing white bacterial colonies from clear plates according to RGB color theory. An optimal chromatic value was obtained from extensive experiments on the distribution of chromatic values. The method is shown to greatly improve the accuracy and efficiency of colony counting, and the counting result is not affected by the inoculation method or by the shape or size of the colonies. Automatic detection of colony counts using image-processing technology is thus an effective approach.
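
    A minimal sketch of an automatic counting step of this kind, assuming a grayscale plate image in which colonies appear brighter than the background (the paper works with RGB chromatic values, which this simplification omits): Otsu thresholding, small-object removal, and connected-component labelling with scikit-image.

```python
import numpy as np
from skimage import filters, measure, morphology

def count_colonies(gray_plate, min_area=20):
    """Count bright colonies on a darker plate via thresholding and labelling."""
    thresh = filters.threshold_otsu(gray_plate)
    binary = morphology.remove_small_objects(gray_plate > thresh, min_size=min_area)
    labels = measure.label(binary, connectivity=2)
    return int(labels.max()), measure.regionprops(labels)

# usage: n, props = count_colonies(plate_image); colony areas are in props[i].area
```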

  4. Approach motivation and cognitive resources combine to influence memory for positive emotional stimuli.

    PubMed

    Crowell, Adrienne; Schmeichel, Brandon J

    2016-01-01

    Inspired by the elaborated intrusion theory of desire, the current research tested the hypothesis that persons higher in trait approach motivation process positive stimuli deeply, which enhances memory for them. Ninety-four undergraduates completed a measure of trait approach motivation, viewed positive or negative image slideshows in the presence or absence of a cognitive load, and one week later completed an image memory test. Higher trait approach motivation predicted better memory for the positive slideshow, but this memory boost disappeared under cognitive load. Approach motivation did not influence memory for the negative slideshow. The current findings support the idea that individuals higher in approach motivation spontaneously devote limited resources to processing positive stimuli.

  5. Sixth Annual Flight Mechanics/Estimation Theory Symposium

    NASA Technical Reports Server (NTRS)

    Lefferts, E. (Editor)

    1981-01-01

    Methods of orbital position estimation were reviewed. The problem of accuracy in orbital mechanics is discussed and various techniques in current use are presented along with suggested improvements. Of special interest is the compensation for bias in satelliteborne instruments due to attitude instabilities. Image processing and correctional techniques are reported for geodetic measurements and mapping.

  6. On use of image quality metrics for perceptual blur modeling: image/video compression case

    NASA Astrophysics Data System (ADS)

    Cha, Jae H.; Olson, Jeffrey T.; Preece, Bradley L.; Espinola, Richard L.; Abbott, A. Lynn

    2018-02-01

    Linear system theory is employed to make target acquisition performance predictions for electro-optical/infrared imaging systems where the modulation transfer function (MTF) may be imposed by a nonlinear degradation process. Previous research relying on image quality metrics (IQM) methods, which heuristically estimate perceived MTF, has supported the idea that an average perceived MTF can be used to model some types of degradation such as image compression. Here, we discuss the validity of the IQM approach by mathematically analyzing the associated heuristics from the perspective of reliability, robustness, and tractability. Experiments with standard images compressed by x.264 encoding suggest that the compression degradation can be estimated by a perceived MTF within boundaries defined by well-behaved curves with marginal error. Our results confirm that the IQM linearizer methodology provides a credible tool for sensor performance modeling.
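
    As a rough illustration of estimating an effective, MTF-like curve from image data (a heuristic stand-in, not the IQM methodology analyzed in the paper), the sketch below radially averages the ratio of Fourier magnitudes of a degraded image to its reference; the bin count and normalization are assumptions for the example.

```python
import numpy as np

def estimated_mtf(reference, degraded, n_bins=32):
    """Radially averaged ratio of Fourier magnitudes: a crude effective-MTF estimate."""
    F_ref = np.abs(np.fft.fftshift(np.fft.fft2(reference.astype(float))))
    F_deg = np.abs(np.fft.fftshift(np.fft.fft2(degraded.astype(float))))
    h, w = reference.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r_norm = np.hypot(yy - h / 2, xx - w / 2)
    r_norm = r_norm / r_norm.max()
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    mtf = np.full(n_bins, np.nan)
    for i in range(n_bins):
        mask = (r_norm >= edges[i]) & (r_norm < edges[i + 1])
        if mask.any():
            mtf[i] = F_deg[mask].mean() / (F_ref[mask].mean() + 1e-12)
    return mtf
```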

  7. Final Technical Report for SISGR: Ultrafast Molecular Scale Chemical Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hersam, Mark C.; Guest, Jeffrey R.; Guisinger, Nathan P.

    2017-04-10

    The Northwestern-Argonne SISGR program utilized newly developed instrumentation and techniques including integrated ultra-high vacuum tip-enhanced Raman spectroscopy/scanning tunneling microscopy (UHV-TERS/STM) and surface-enhanced femtosecond stimulated Raman scattering (SE-FSRS) to advance the spatial and temporal resolution of chemical imaging for the study of photoinduced dynamics of molecules on plasmonically active surfaces. An accompanying theory program addressed modeling of charge transfer processes using constrained density functional theory (DFT) in addition to modeling of SE-FSRS, thereby providing a detailed description of the excited state dynamics. This interdisciplinary and highly collaborative research resulted in 62 publications with ~ 48% of them being co-authored by multiple SISGR team members. A summary of the scientific accomplishments from this SISGR program is provided in this final technical report.

  8. Analysis of Particle Image Velocimetry (PIV) Data for Acoustic Velocity Measurements

    NASA Technical Reports Server (NTRS)

    Blackshire, James L.

    1997-01-01

    Acoustic velocity measurements were taken using Particle Image Velocimetry (PIV) in a Normal Incidence Tube configuration at various frequency, phase, and amplitude levels. This report presents the results of the PIV analysis and data reduction portions of the test and details the processing that was done. Estimates of lower measurement sensitivity levels were determined based on PIV image quality, correlation, and noise level parameters used in the test. Comparisons of measurements with linear acoustic theory are presented. The onset of nonlinear, harmonic-frequency acoustic levels was also studied for various decibel and frequency levels ranging from 90 to 132 dB and 500 to 3000 Hz, respectively.
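
    The core PIV processing step can be sketched as follows: the displacement between two interrogation windows taken from successive frames is located at the peak of their cross-correlation, and velocity then follows from the pixel size and inter-frame time. This is a generic sketch, not the analysis software used in the test; the window contents and calibration factors are assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

def piv_displacement(window_a, window_b):
    """Displacement of window_b relative to window_a from the cross-correlation peak."""
    a = window_a.astype(float) - window_a.mean()
    b = window_b.astype(float) - window_b.mean()
    corr = fftconvolve(b, a[::-1, ::-1], mode="full")   # 2-D cross-correlation
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    dy = peak[0] - (a.shape[0] - 1)                     # zero shift sits at (Ha-1, Wa-1)
    dx = peak[1] - (a.shape[1] - 1)
    return dy, dx

# velocity follows from calibration: v = displacement * pixel_size / dt,
# where pixel_size and dt are properties of the particular PIV setup
```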

  9. [Medical image elastic registration smoothed by unconstrained optimized thin-plate spline].

    PubMed

    Zhang, Yu; Li, Shuxiang; Chen, Wufan; Liu, Zhexing

    2003-12-01

    Elastic registration of medical images is an important subject in medical image processing. Previous work has concentrated on selecting corresponding landmarks manually and then using thin-plate spline interpolation to obtain the elastic transformation. However, landmark extraction is always prone to error, which influences the registration results, and localizing the landmarks manually is also difficult and time-consuming. We used optimization theory to improve the thin-plate spline interpolation and, based on it, an automatic method to extract the landmarks. Combining these two steps, we propose an automatic, accurate and robust registration method that yields satisfactory registration results.
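
    A brief sketch of thin-plate spline warping driven by landmark pairs, using SciPy's RBFInterpolator (SciPy >= 1.7) with a nonzero smoothing term as one way to relax exact interpolation when landmarks are noisy; this is a generic illustration, not the authors' optimized formulation, and the landmark coordinates are invented.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Source and target landmark pairs (hypothetical, e.g. from automatic extraction).
src = np.array([[10.0, 10.0], [10.0, 50.0], [50.0, 10.0], [50.0, 50.0], [30.0, 30.0]])
dst = src + np.array([[1.5, -0.5], [0.5, 1.0], [-1.0, 0.5], [0.5, 0.5], [2.0, 1.5]])

# Thin-plate-spline mapping; smoothing > 0 relaxes exact interpolation, which
# reduces sensitivity to landmark localization errors.
tps = RBFInterpolator(src, dst, kernel="thin_plate_spline", smoothing=1.0)

# Warp an arbitrary grid of points from the source frame into the target frame.
yy, xx = np.mgrid[0:64, 0:64]
grid = np.column_stack([yy.ravel(), xx.ravel()]).astype(float)
warped = tps(grid).reshape(64, 64, 2)
```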

  10. Functional imaging of small tissue volumes with diffuse optical tomography

    NASA Astrophysics Data System (ADS)

    Klose, Alexander D.; Hielscher, Andreas H.

    2006-03-01

    Imaging of dynamic changes in blood parameters, functional brain imaging, and tumor imaging are the most advanced application areas of diffuse optical tomography (DOT). When dealing with the image reconstruction problem one is faced with the fact that near-infrared photons, unlike X-rays, are highly scattered when they traverse biological tissue. Image reconstruction schemes are required that model the light propagation inside biological tissue and predict measurements on the tissue surface. By iteratively changing the tissue parameters until the predictions agree with the real measurements, a spatial distribution of optical properties inside the tissue is found. The optical properties can be related to the tissue oxygenation, inflammation, or to the fluorophore concentration of a biochemical marker. If the model of light propagation is inaccurate, the reconstruction process will lead to an inaccurate result as well. Here, we focus on difficulties that are encountered when DOT is employed for functional imaging of small tissue volumes, for example, in cancer studies involving small animals, or human finger joints for early diagnosis of rheumatoid arthritis. Most of the currently employed image reconstruction methods rely on diffusion theory, which is an approximation to the equation of radiative transfer. However, in the case of small tissue volumes and tissues that contain low-scattering regions, diffusion theory has been shown to be of limited applicability. Therefore, we employ a light propagation model that is based on the equation of radiative transfer, which promises to overcome these limitations.

  11. Determination of Small Animal Long Bone Properties Using Densitometry

    NASA Technical Reports Server (NTRS)

    Breit, Gregory A.; Goldberg, BethAnn K.; Whalen, Robert T.; Hargens, Alan R. (Technical Monitor)

    1996-01-01

    Assessment of bone structural property changes due to loading regimens or pharmacological treatment typically requires destructive mechanical testing and sectioning. Our group has accurately and non-destructively estimated three-dimensional cross-sectional areal properties (principal moments of inertia, Imax and Imin, and principal angle, Theta) of human cadaver long bones from pixel-by-pixel analysis of three non-coplanar densitometry scans. Because the scanner beam width is on the order of typical small-animal diaphyseal diameters, applying this technique to high-resolution scans of rat long bones necessitates additional processing to minimize errors induced by beam smearing, such as dependence on sample orientation and overestimation of Imax and Imin. We hypothesized that these errors are correctable by digital image processing of the raw scan data. In all cases, four scans, using only the low-energy data (Hologic QDR-1000W, small animal mode), are averaged to increase image signal-to-noise ratio. Raw scans are additionally processed by interpolation, deconvolution by a filter derived from scanner beam characteristics, and masking using a variable threshold based on image dynamic range. To assess accuracy, we scanned an aluminum step phantom at 12 orientations over a range of 180 deg about the longitudinal axis, in 15 deg increments. The phantom dimensions (2.5, 3.1, 3.8 mm x 4.4 mm; Imin/Imax: 0.33-0.74) were comparable to the dimensions of a rat femur, which was also scanned. Cross-sectional properties were determined at 0.25 mm increments along the length of the phantom and femur. The table shows the average error (+/- SD) from theory of Imax, Imin, and Theta over the 12 orientations, calculated from raw and fully processed phantom images, as well as standard deviations about the mean for the femur scans. Processing of phantom scans increased agreement with theory, indicating improved accuracy. Smaller standard deviations with processing indicate increased precision and repeatability. Standard deviations for the femur are consistent with those of the phantom. We conclude that, in conjunction with digital image enhancement, densitometry scans are suitable for non-destructive determination of areal properties of small-animal bones of comparable size to our phantom, allowing prediction of Imax and Imin within 2.5% and Theta within a fraction of a degree. This method represents a considerable extension of current methods of analyzing bone tissue distribution in small animal bones.
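
    As a rough illustration of the pixel-by-pixel computation of cross-sectional properties described above, the sketch below derives Imax, Imin, and Theta from a binary (thresholded) cross-sectional image. It is a minimal sketch, not the authors' processing chain: the function and parameter names are illustrative, the pixel size is assumed isotropic, and density weighting of the pixels is omitted.

      import numpy as np

      def principal_area_moments(mask, pixel_size=1.0):
          """Principal cross-sectional moments (Imax, Imin) and principal angle
          Theta (degrees) from a binary cross-section, computed pixel by pixel.
          Convention: Ixx = sum(y^2) dA, Iyy = sum(x^2) dA, Ixy = sum(x*y) dA,
          with tan(2*theta) = -2*Ixy / (Ixx - Iyy)."""
          ys, xs = np.nonzero(mask)
          a = pixel_size ** 2                                # area of one pixel
          x = (xs - xs.mean()) * pixel_size                  # centroid-centered coordinates
          y = (ys - ys.mean()) * pixel_size
          Ixx, Iyy, Ixy = a * np.sum(y * y), a * np.sum(x * x), a * np.sum(x * y)
          avg, diff = (Ixx + Iyy) / 2.0, (Ixx - Iyy) / 2.0
          r = np.hypot(diff, Ixy)
          theta = 0.5 * np.degrees(np.arctan2(-2.0 * Ixy, Ixx - Iyy))
          return avg + r, avg - r, theta                     # Imax, Imin, Theta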

  12. Word attributes and lateralization revisited: implications for dual coding and discrete versus continuous processing.

    PubMed

    Boles, D B

    1989-01-01

    Three attributes of words are their imageability, concreteness, and familiarity. From a literature review and several experiments, I previously concluded (Boles, 1983a) that only familiarity affects the overall near-threshold recognition of words, and that none of the attributes affects right-visual-field superiority for word recognition. Here these conclusions are modified by two experiments demonstrating a critical mediating influence of intentional versus incidental memory instructions. In Experiment 1, subjects were instructed to remember the words they were shown, for subsequent recall. The results showed effects of both imageability and familiarity on overall recognition, as well as an effect of imageability on lateralization. In Experiment 2, word-memory instructions were deleted and the results essentially reinstated the findings of Boles (1983a). It is concluded that right-hemisphere imagery processes can participate in word recognition under intentional memory instructions. Within the dual coding theory (Paivio, 1971), the results argue that both discrete and continuous processing modes are available, that the modes can be used strategically, and that continuous processing can occur prior to response stages.

  13. A manual for microcomputer image analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rich, P.M.; Ranken, D.M.; George, J.S.

    1989-12-01

    This manual is intended to serve three basic purposes: as a primer in microcomputer image analysis theory and techniques, as a guide to the use of IMAGE©, a public domain microcomputer program for image analysis, and as a stimulus to encourage programmers to develop microcomputer software suited for scientific use. Topics discussed include the principles of image processing and analysis, use of standard video for input and display, spatial measurement techniques, and the future of microcomputer image analysis. A complete reference guide that lists the commands for IMAGE is provided. IMAGE includes capabilities for digitization, input and output of images, hardware display lookup table control, editing, edge detection, histogram calculation, measurement along lines and curves, measurement of areas, examination of intensity values, output of analytical results, conversion between raster and vector formats, and region movement and rescaling. The control structure of IMAGE emphasizes efficiency, precision of measurement, and scientific utility. 18 refs., 18 figs., 2 tabs.

  14. Green's function and image system for the Laplace operator in the prolate spheroidal geometry

    NASA Astrophysics Data System (ADS)

    Xue, Changfeng; Deng, Shaozhong

    2017-01-01

    In the present paper, electrostatic image theory is studied for Green's function for the Laplace operator in the case where the fundamental domain is either the exterior or the interior of a prolate spheroid. In either case, an image system is developed to consist of a point image inside the complement of the fundamental domain and an additional symmetric continuous surface image over a confocal prolate spheroid outside the fundamental domain, although the process of calculating such an image system is easier for the exterior than for the interior Green's function. The total charge of the surface image is zero and its centroid is at the origin of the prolate spheroid. In addition, if the source is on the focal axis outside the prolate spheroid, then the image system of the exterior Green's function consists of a point image on the focal axis and a line image on the line segment between the two focal points.
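
    For orientation only, the free-space Green's function of the Laplace operator and the general structure of the image representation described above can be written as follows. This is a hedged sketch: the location and strength of the point image and the surface density are derived in the paper itself; only the overall form and the zero-total-charge constraint are taken from the abstract.

      % Free-space Green's function of the 3D Laplace operator
      % (convention \nabla^2 G_0 = -\delta):
      G_0(\mathbf{r},\mathbf{r}') = \frac{1}{4\pi\,\lvert \mathbf{r}-\mathbf{r}' \rvert}

      % Structure of the image system: free-space term, a point image of
      % strength q at \mathbf{r}_p in the complement of the fundamental
      % domain, and a continuous surface image of density \sigma over a
      % confocal prolate spheroid S, with zero total surface charge:
      G(\mathbf{r},\mathbf{r}') = G_0(\mathbf{r},\mathbf{r}')
        + q\,G_0(\mathbf{r},\mathbf{r}_p)
        + \oint_{S} \sigma(\mathbf{r}'')\,G_0(\mathbf{r},\mathbf{r}'')\,\mathrm{d}S'',
      \qquad \oint_{S} \sigma\,\mathrm{d}S = 0 .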

  15. Vision, healing brush, and fiber bundles

    NASA Astrophysics Data System (ADS)

    Georgiev, Todor

    2005-03-01

    The Healing Brush is a tool introduced for the first time in Adobe Photoshop (2002) that removes defects in images by seamless cloning (gradient domain fusion). The Healing Brush algorithms are built on a new mathematical approach that uses Fibre Bundles and Connections to model the representation of images in the visual system. Our mathematical results are derived from first principles of human vision, related to adaptation transforms of von Kries type and Retinex theory. In this paper we present the new result of Healing in an arbitrary color space. In addition to supporting image repair and seamless cloning, our approach also produces the exact solution to the problem of high dynamic range compression of [17] and can be applied to other image processing algorithms.
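
    The gradient-domain fusion underlying such seamless cloning can be sketched in a few lines: solve a Poisson equation inside the repaired region using the source patch's Laplacian as guidance and the destination image as boundary condition. The sketch below is a minimal grayscale Jacobi iteration, not Adobe's implementation; it assumes the mask does not touch the image border (np.roll wraps around) and all names are illustrative.

      import numpy as np

      def seamless_clone(dest, src, mask, iters=500):
          """Gradient-domain fusion (Poisson blending) of src into dest inside
          mask, via plain Jacobi iterations on grayscale float arrays."""
          out = dest.astype(float).copy()
          lap = (-4.0 * src                                  # discrete Laplacian of the source
                 + np.roll(src, 1, 0) + np.roll(src, -1, 0)
                 + np.roll(src, 1, 1) + np.roll(src, -1, 1))
          inside = mask.astype(bool)
          for _ in range(iters):
              nb = (np.roll(out, 1, 0) + np.roll(out, -1, 0)
                    + np.roll(out, 1, 1) + np.roll(out, -1, 1))
              out[inside] = (nb[inside] - lap[inside]) / 4.0  # Jacobi update of the Poisson equation
          return out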

  16. Wavelet domain image restoration with adaptive edge-preserving regularization.

    PubMed

    Belge, M; Kilmer, M E; Miller, E L

    2000-01-01

    In this paper, we consider a wavelet based edge-preserving regularization scheme for use in linear image restoration problems. Our efforts build on a collection of mathematical results indicating that wavelets are especially useful for representing functions that contain discontinuities (i.e., edges in two dimensions or jumps in one dimension). We interpret the resulting theory in a statistical signal processing framework and obtain a highly flexible framework for adapting the degree of regularization to the local structure of the underlying image. In particular, we are able to adapt quite easily to scale-varying and orientation-varying features in the image while simultaneously retaining the edge preservation properties of the regularizer. We demonstrate a half-quadratic algorithm for obtaining the restorations from observed data.
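
    The flavor of wavelet-domain restoration can be illustrated with a much simpler, non-adaptive stand-in for the scheme above: decompose the image, shrink the detail coefficients, and reconstruct. The sketch assumes the PyWavelets package is available and uses plain soft thresholding rather than the paper's adaptive edge-preserving half-quadratic regularizer; all parameter values are illustrative.

      import numpy as np
      import pywt

      def wavelet_shrinkage(img, wavelet='db4', level=3, thresh=0.1):
          """Basic wavelet-domain restoration: decompose, soft-threshold the
          detail coefficients, reconstruct. A simplified stand-in only."""
          coeffs = pywt.wavedec2(np.asarray(img, dtype=float), wavelet, level=level)
          new_coeffs = [coeffs[0]]                           # keep the approximation band untouched
          for detail in coeffs[1:]:
              new_coeffs.append(tuple(pywt.threshold(d, thresh, mode='soft')
                                      for d in detail))
          return pywt.waverec2(new_coeffs, wavelet)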

  17. Theory of mind for processing unexpected events across contexts.

    PubMed

    Dungan, James A; Stepanovic, Michael; Young, Liane

    2016-08-01

    Theory of mind, or mental state reasoning, may be particularly useful for making sense of unexpected events. Here, we investigated unexpected behavior across both social and non-social contexts in order to characterize the precise role of theory of mind in processing unexpected events. We used functional magnetic resonance imaging to examine how people respond to unexpected outcomes when initial expectations were based on (i) an object's prior behavior, (ii) an agent's prior behavior and (iii) an agent's mental states. Consistent with prior work, brain regions for theory of mind were preferentially recruited when people first formed expectations about social agents vs non-social objects. Critically, unexpected vs expected outcomes elicited greater activity in dorsomedial prefrontal cortex, which also discriminated in its spatial pattern of activity between unexpected and expected outcomes for social events. In contrast, social vs non-social events elicited greater activity in precuneus across both expected and unexpected outcomes. Finally, given prior information about an agent's behavior, unexpected vs expected outcomes elicited an especially robust response in right temporoparietal junction, and the magnitude of this difference across participants correlated negatively with autistic-like traits. Together, these findings illuminate the distinct contributions of brain regions for theory of mind for processing unexpected events across contexts. © The Author (2016). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  18. Forensic hash for multimedia information

    NASA Astrophysics Data System (ADS)

    Lu, Wenjun; Varna, Avinash L.; Wu, Min

    2010-01-01

    Digital multimedia such as images and videos are prevalent on today's internet and cause significant social impact, which can be evidenced by the proliferation of social networking sites with user generated contents. Due to the ease of generating and modifying images and videos, it is critical to establish trustworthiness for online multimedia information. In this paper, we propose novel approaches to perform multimedia forensics using compact side information to reconstruct the processing history of a document. We refer to this as FASHION, standing for Forensic hASH for informatION assurance. Based on the Radon transform and scale space theory, the proposed forensic hash is compact and can effectively estimate the parameters of geometric transforms and detect local tampering that an image may have undergone. Forensic hash is designed to answer a broader range of questions regarding the processing history of multimedia data than the simple binary decision from traditional robust image hashing, and also offers more efficient and accurate forensic analysis than multimedia forensic techniques that do not use any side information.
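
    A toy version of a Radon-based signature conveys the idea of compact side information, though it is not the authors' FASHION construction: project the image at a small number of angles, coarsen and normalize each projection, and concatenate the result into a short vector. The sketch assumes scikit-image is available; all names and sizes are illustrative.

      import numpy as np
      from skimage.transform import radon

      def radon_signature(img, n_angles=32, n_bins=16):
          """Compact Radon-based image signature: a coarse, normalized
          projection profile per angle, concatenated into one vector."""
          angles = np.linspace(0.0, 180.0, n_angles, endpoint=False)
          sinogram = radon(np.asarray(img, dtype=float), theta=angles, circle=False)
          sig = []
          for k in range(n_angles):
              chunks = np.array_split(sinogram[:, k], n_bins)   # coarsen the projection
              v = np.array([c.mean() for c in chunks])
              sig.append(v / (np.linalg.norm(v) + 1e-12))       # per-angle normalization
          return np.concatenate(sig)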

  19. Influence of Burke and Lessing on the Semiotic Theory of Document Design: Ideologies and Good Visual Images of Documents.

    ERIC Educational Resources Information Center

    Ding, Daniel D.

    2000-01-01

    Presents historical roots of page design principles, arguing that current theories and practices of document design have their roots in gender-related theories of images. Claims visual design should be evaluated regarding the rhetorical situation in which the design is used. Focuses on visual images of documents in professional communication,…

  20. Automatic data-processing equipment of moon mark of nail for verifying some experiential theory of Traditional Chinese Medicine.

    PubMed

    Niu, Renjie; Fu, Chenyu; Xu, Zhiyong; Huang, Jianyuan

    2016-04-29

    Doctors who practice Traditional Chinese Medicine (TCM) diagnose using four methods - inspection, auscultation and olfaction, interrogation, and pulse feeling/palpation. The shape of the moon marks on the nails, and changes in that shape, are an important indication when judging the patient's health. There is a series of classical and experiential theories about moon marks in TCM that lacks support from statistical data. The aim was to verify some experiential theories on moon marks in TCM using automatic data-processing equipment. This paper proposes equipment that uses image-processing technology to collect moon-mark data from different target groups conveniently and quickly, building a database that combines this information with that gathered from the health and mental status questionnaire administered in each test. The equipment has a simple design, a low cost, and an optimized algorithm, and in practice it has been shown to complete automatic acquisition and preservation of key moon-mark data quickly. In the future, conclusions are expected from these data; changes of moon marks related to specific pathological changes will be established with statistical methods.

  1. Self-aligning and compressed autosophy video databases

    NASA Astrophysics Data System (ADS)

    Holtz, Klaus E.

    1993-04-01

    Autosophy, an emerging new science, explains `self-assembling structures,' such as crystals or living trees, in mathematical terms. This research provides a new mathematical theory of `learning' and a new `information theory' which permits the growing of self-assembling data networks in a computer memory, similar to the growing of `data crystals' or `data trees,' without data processing or programming. Autosophy databases are educated very much like a human child to organize their own internal data storage. Input patterns, such as written questions or images, are converted to points in a mathematical omni-dimensional hyperspace. The input patterns are then associated with output patterns, such as written answers or images. Omni-dimensional information storage results in enormous data compression because each pattern fragment is stored only once. Pattern recognition in the text or image files is greatly simplified by the peculiar omni-dimensional storage method. Video databases absorb input images from a TV camera and associate them with textual information. The `black box' operations are totally self-aligning: the input data determine their own hyperspace storage locations. Self-aligning autosophy databases may lead to a new generation of brain-like devices.

  2. Embedded Implementation of VHR Satellite Image Segmentation

    PubMed Central

    Li, Chao; Balla-Arabé, Souleymane; Ginhac, Dominique; Yang, Fan

    2016-01-01

    Processing and analysis of Very High Resolution (VHR) satellite images provide a mass of crucial information, which can be used for urban planning, security issues or environmental monitoring. However, they are computationally expensive and, thus, time consuming, while some of the applications, such as natural disaster monitoring and prevention, require high performance. Fortunately, parallel computing techniques and embedded systems have made great progress in recent years, and a series of massively parallel image processing devices, such as digital signal processors or Field Programmable Gate Arrays (FPGAs), have become available to engineers at a very convenient price and demonstrate significant advantages in terms of running cost, embeddability, power consumption, flexibility, etc. In this work, we designed a texture region segmentation method for very high resolution satellite images by using the level set algorithm and multi-kernel theory in a high-abstraction C environment and realized its register-transfer-level implementation with the help of a newly proposed high-level synthesis-based design flow. The evaluation experiments demonstrate that the proposed design can produce high quality image segmentation with a significant running-cost advantage. PMID:27240370
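
    As a CPU-side illustration of the level-set step (not the paper's FPGA multi-kernel design), the sketch below runs the morphological Chan-Vese active contour from scikit-image on a normalized single-band image; the iteration count and smoothing value are illustrative.

      import numpy as np
      from skimage.segmentation import morphological_chan_vese

      def segment_regions(gray, iters=50):
          """Level-set style two-phase segmentation of a single band using the
          morphological Chan-Vese algorithm; returns a binary label image."""
          g = np.asarray(gray, dtype=float)
          rng = float(g.max() - g.min()) or 1.0              # guard against a flat image
          return morphological_chan_vese((g - g.min()) / rng, iters, smoothing=2)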

  3. Nonpuerperal mastitis and subareolar abscess of the breast.

    PubMed

    Kasales, Claudia J; Han, Bing; Smith, J Stanley; Chetlen, Alison L; Kaneda, Heather J; Shereef, Serene

    2014-02-01

    The purpose of this article is to show radiologists how to readily recognize nonpuerperal subareolar abscess and its complications in order to help reduce the time to definitive therapy and improve patient care. To achieve this purpose, the various theories of pathogenesis and the associated histopathologic features are reviewed; the typical clinical characteristics are detailed in contrast to those seen in lactational abscess and inflammatory breast cancer; the common imaging findings are described with emphasis on the sonographic features; correlative pathologic findings are presented to reinforce the imaging findings as they pertain to disease origins; and the various treatment options are reviewed. Nonpuerperal subareolar mastitis and abscess is a benign breast entity often associated with prolonged morbidity. Through better understanding of the underlying disease process, the imaging, physical, and clinical findings of this rare entity can be more readily recognized and treatment options expedited, improving patient care.

  4. Neuronal correlates of theory of mind and empathy: a functional magnetic resonance imaging study in a nonverbal task.

    PubMed

    Völlm, Birgit A; Taylor, Alexander N W; Richardson, Paul; Corcoran, Rhiannon; Stirling, John; McKie, Shane; Deakin, John F W; Elliott, Rebecca

    2006-01-01

    Theory of Mind (ToM), the ability to attribute mental states to others, and empathy, the ability to infer emotional experiences, are important processes in social cognition. Brain imaging studies in healthy subjects have described a brain system involving medial prefrontal cortex, superior temporal sulcus and temporal pole in ToM processing. Studies investigating networks associated with empathic responding also suggest involvement of temporal and frontal lobe regions. In this fMRI study, we used a cartoon task derived from Sarfati et al. (1997) [Sarfati, Y., Hardy-Bayle, M.C., Besche, C., Widlocher, D. 1997. Attribution of intentions to others in people with schizophrenia: a non-verbal exploration with comic strips. Schizophrenia Research 25, 199-209.] with both ToM and empathy stimuli in order to allow comparison of brain activations in these two processes. Results of 13 right-handed, healthy, male volunteers were included. Functional images were acquired using a 1.5 T Philips Gyroscan. Our results confirmed that ToM and empathy stimuli are associated with overlapping but distinct neuronal networks. Common areas of activation included the medial prefrontal cortex, temporoparietal junction and temporal poles. Compared to the empathy condition, ToM stimuli revealed increased activations in lateral orbitofrontal cortex, middle frontal gyrus, cuneus and superior temporal gyrus. Empathy, on the other hand, was associated with enhanced activations of paracingulate, anterior and posterior cingulate and amygdala. We therefore suggest that ToM and empathy both rely on networks associated with making inferences about mental states of others. However, empathic responding requires the additional recruitment of networks involved in emotional processing. These results have implications for our understanding of disorders characterized by impairments of social cognition, such as autism and psychopathy.

  5. The Syntax of Moving Images: Principles and Applications.

    ERIC Educational Resources Information Center

    Metallinos, Nikos

    This paper examines the various theories of motion relating to visual communication media, discusses the syntactic rules of moving images derived from those of still pictures, and underlines the motions employed in the construction of moving images, primarily television pictures. The following theories of motion and moving images are presented:…

  6. Numerical correction of distorted images in full-field optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Min, Gihyeon; Kim, Ju Wan; Choi, Woo June; Lee, Byeong Ha

    2012-03-01

    We propose a numerical method that corrects the distorted en face images obtained with a full-field optical coherence tomography (FF-OCT) system. It is shown that the FF-OCT image of the deep region of a biological sample is easily blurred or degraded because the sample generally has a refractive index (RI) much higher than that of its surrounding medium. Analysis shows that the focal plane of the imaging system becomes separated from the imaging plane of the coherence-gated system due to the RI mismatch. This image-blurring phenomenon is experimentally confirmed by imaging the chrome pattern of a resolution test target through its glass substrate in water. Moreover, we demonstrate that the blurred image can be appreciably corrected by using a numerical correction process based on the Fresnel-Kirchhoff diffraction theory. The proposed correction method is applied to enhance the image of a human hair, which permits the distinct identification of the melanin granules inside the cortex layer of the hair shaft.
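
    The kind of numerical refocusing involved can be sketched with a scalar angular-spectrum propagator: given a sampled complex field, multiply its spectrum by the propagation transfer function and transform back. This is a generic diffraction sketch, not the authors' exact Fresnel-Kirchhoff implementation; wavelength, pixel pitch, and distance are illustrative parameters.

      import numpy as np

      def angular_spectrum_propagate(field, wavelength, dx, z):
          """Propagate a sampled complex field by a distance z using the
          angular-spectrum method of scalar diffraction (dx = pixel pitch)."""
          ny, nx = field.shape
          fx = np.fft.fftfreq(nx, d=dx)
          fy = np.fft.fftfreq(ny, d=dx)
          FX, FY = np.meshgrid(fx, fy)
          k = 2.0 * np.pi / wavelength
          arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
          kz = k * np.sqrt(np.maximum(arg, 0.0))             # evanescent components clamped to zero
          return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))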

  7. Semiology and a Semiological Reading of Power Myths in Education

    ERIC Educational Resources Information Center

    Kükürt, Remzi Onur

    2016-01-01

    By referring to the theory of semiology, this study aims to present how certain phrases, applications, images, and objects, which are assumed to be unnoticed in the educational process as if they were natural, could be read as signs encrypted with certain ideologically-loaded cultural codes, and to propose semiology as a method for educational…

  8. "Needle and Stick" Save the World: Sustainable Development and the Universal Child

    ERIC Educational Resources Information Center

    Dahlbeck, Johan; De Lucia Dahlbeck, Moa

    2012-01-01

    This text deals with a problem concerning processes of the productive power of knowledge. We draw on the so-called poststructural theories challenging the classical image of thought--as hinged upon a representational logic identifying entities in a rigid sense--when formulating a problem concerning the gap between knowledge and the object of…

  9. Opponent appetitive-aversive neural processes underlie predictive learning of pain relief.

    PubMed

    Seymour, Ben; O'Doherty, John P; Koltzenburg, Martin; Wiech, Katja; Frackowiak, Richard; Friston, Karl; Dolan, Raymond

    2005-09-01

    Termination of a painful or unpleasant event can be rewarding. However, whether the brain treats relief in a similar way as it treats natural reward is unclear, and the neural processes that underlie its representation as a motivational goal remain poorly understood. We used fMRI (functional magnetic resonance imaging) to investigate how humans learn to generate expectations of pain relief. Using a Pavlovian conditioning procedure, we show that subjects experiencing prolonged experimentally induced pain can be conditioned to predict pain relief. This proceeds in a manner consistent with contemporary reward-learning theory (average reward/loss reinforcement learning), reflected by neural activity in the amygdala and midbrain. Furthermore, these reward-like learning signals are mirrored by opposite aversion-like signals in lateral orbitofrontal cortex and anterior cingulate cortex. This dual coding has parallels to 'opponent process' theories in psychology and promotes a formal account of prediction and expectation during pain.

  10. Lasting mantle scars lead to perennial plate tectonics.

    PubMed

    Heron, Philip J; Pysklywec, Russell N; Stephenson, Randell

    2016-06-10

    Mid-ocean ridges, transform faults, subduction and continental collisions form the conventional theory of plate tectonics to explain non-rigid behaviour at plate boundaries. However, the theory does not explain directly the processes involved in intraplate deformation and seismicity. Recently, damage structures in the lithosphere have been linked to the origin of plate tectonics. Despite seismological imaging suggesting that inherited mantle lithosphere heterogeneities are ubiquitous, their plate tectonic role is rarely considered. Here we show that deep lithospheric anomalies can dominate shallow geological features in activating tectonics in plate interiors. In numerical experiments, we found that structures frozen into the mantle lithosphere through plate tectonic processes can behave as quasi-plate boundaries reactivated under far-field compressional forcing. Intraplate locations where proto-lithospheric plates have been scarred by earlier suturing could be regions where latent plate boundaries remain, and where plate tectonics processes are expressed as a 'perennial' phenomenon.

  11. Lasting mantle scars lead to perennial plate tectonics

    PubMed Central

    Heron, Philip J.; Pysklywec, Russell N.; Stephenson, Randell

    2016-01-01

    Mid-ocean ridges, transform faults, subduction and continental collisions form the conventional theory of plate tectonics to explain non-rigid behaviour at plate boundaries. However, the theory does not explain directly the processes involved in intraplate deformation and seismicity. Recently, damage structures in the lithosphere have been linked to the origin of plate tectonics. Despite seismological imaging suggesting that inherited mantle lithosphere heterogeneities are ubiquitous, their plate tectonic role is rarely considered. Here we show that deep lithospheric anomalies can dominate shallow geological features in activating tectonics in plate interiors. In numerical experiments, we found that structures frozen into the mantle lithosphere through plate tectonic processes can behave as quasi-plate boundaries reactivated under far-field compressional forcing. Intraplate locations where proto-lithospheric plates have been scarred by earlier suturing could be regions where latent plate boundaries remain, and where plate tectonics processes are expressed as a ‘perennial' phenomenon. PMID:27282541

  12. 2010 MULTIPHOTON PROCESSES GORDON RESEARCH CONFERENCE, JUNE 6-11, 2010, TILTON, NH

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mette Gaarde

    2010-06-11

    The Gordon Research Conference on Multiphoton Processes will be held for the 15th time in 2010. The meeting continues to evolve as it embraces both the rapid technological and intellectual growth in the field as well as the multi-disciplinary expertise of the participants. This time the sessions will focus on: (1) Ultrafast coherent control; (2) Free-electron laser experiments and theory; (3) Generation of harmonics and attosecond pulses; (4) Ultrafast imaging; (5) Applications of very high intensity laser fields; (6) Strong-field processes in molecules and solids; (7) Attosecond science; and (8) Controlling light. The scientific program will blur traditional disciplinary boundaries as the presenters and discussion leaders involve chemists, physicists, and optical engineers, representing both experiment and theory. The broad range of expertise and different perspectives of attendees should provide a stimulating and unique environment for solving problems and developing new ideas in this rapidly evolving field.

  13. Efficient content-based low-altitude images correlated network and strips reconstruction

    NASA Astrophysics Data System (ADS)

    He, Haiqing; You, Qi; Chen, Xiaoyong

    2017-01-01

    Manual intervention is widely used to reconstruct strips for subsequent aerial triangulation in low-altitude photogrammetry, but it is clearly not suited to fully automatic photogrammetric data processing. In this paper, we explore a content-based approach to strip reconstruction that requires no manual intervention or external information. Feature descriptors of local spatial patterns are extracted by SIFT to construct a vocabulary tree, in which these features are encoded with the TF-IDF numerical statistical algorithm to generate a new representation for each low-altitude image. An image-correlation network is then reconstructed by similarity measurement, image matching, and geometric graph theory. Finally, strips are reconstructed automatically by tracing straight lines and gradually growing adjacent images. Experimental results show that the proposed approach is highly effective in automatically rearranging strips of low-altitude images and can provide rough relative orientation for further aerial triangulation.
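
    The TF-IDF weighting and similarity step can be written compactly once visual-word histograms are available from the vocabulary tree; the sketch below assumes precomputed histograms and is only a minimal illustration of that step, not the full pipeline.

      import numpy as np

      def tfidf_similarity(histograms):
          """Given an (n_images, n_words) matrix of visual-word counts, apply
          TF-IDF weighting and return the pairwise cosine-similarity matrix."""
          H = np.asarray(histograms, dtype=float)
          tf = H / np.maximum(H.sum(axis=1, keepdims=True), 1.0)      # term frequency per image
          df = np.count_nonzero(H > 0, axis=0)                        # document frequency per word
          idf = np.log(H.shape[0] / np.maximum(df, 1))
          W = tf * idf
          W /= np.maximum(np.linalg.norm(W, axis=1, keepdims=True), 1e-12)
          return W @ W.T                                              # cosine similarities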

  14. Retinex based low-light image enhancement using guided filtering and variational framework

    NASA Astrophysics Data System (ADS)

    Zhang, Shi; Tang, Gui-jin; Liu, Xiao-hua; Luo, Su-huai; Wang, Da-dong

    2018-03-01

    A new image enhancement algorithm based on Retinex theory is proposed to address the poor visual quality of images captured in low-light conditions. First, an image is converted from the RGB color space to the HSV color space to obtain the V channel. Next, the illumination is estimated separately by guided filtering and by a variational framework on the V channel, and the two estimates are combined into a new illumination map by average gradient. The new reflectance is calculated from the V channel and the new illumination. A new V channel, obtained by multiplying the new illumination and reflectance, is then processed with contrast-limited adaptive histogram equalization (CLAHE). Finally, the new image in HSV space is converted back to RGB space to obtain the enhanced image. Experimental results show that the proposed method has better subjective and objective quality than existing methods.
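
    A stripped-down version of this pipeline can be sketched with OpenCV. It is only a sketch under simplifying assumptions: a single Gaussian blur stands in for the paper's guided-filter and variational illumination estimates, the combined illumination is simply gamma-brightened before recombination, and all parameter values are illustrative.

      import cv2
      import numpy as np

      def enhance_low_light(bgr, sigma=25, gamma=0.5, eps=1e-3):
          """Retinex-style low-light enhancement sketch on the HSV V channel:
          estimate illumination, divide it out, recombine with a brightened
          illumination, equalize with CLAHE, and convert back to BGR."""
          hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
          v = hsv[:, :, 2].astype(np.float32) / 255.0
          illum = cv2.GaussianBlur(v, (0, 0), sigma) + eps   # rough illumination estimate
          reflect = v / illum                                # Retinex reflectance
          new_v = np.clip(reflect * illum ** gamma, 0.0, 1.0)
          clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
          hsv[:, :, 2] = clahe.apply((new_v * 255).astype(np.uint8))
          return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)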

  15. The Role of Auroral Imaging in Understanding Ionosphere-Inner Magnetosphere Interactions

    NASA Technical Reports Server (NTRS)

    Spann, Jim; Khazanov, George; Mende, Stephen

    2004-01-01

    The more ways we probe the ionosphere and inner magnetosphere, the better we can understand their interaction. For example, the multifaceted imaging of geospace with the IMAGE mission complements the more traditional in situ measurements made with many previous missions. Together they have enabled new knowledge of the ionosphere-magnetosphere (IM) coupling. The role of imaging the aurora in understanding this interaction has received renewed attention recently. Based on in situ data, such as FAST or DMSP, and our recent theories, we believe that imaging multiscale features of the aurora is a key component to gaining insight into the processes and mechanisms at work. This talk will explore how auroral imaging can be used to provide improved insight of the dynamics of IM interaction on micro and meso scales, with an emphasis on the current limitations and future possibilities of quantitative analyses.

  16. The record of electrical and communication engineering conversazione Tohoku University Volume 63, No. 3

    NASA Astrophysics Data System (ADS)

    1995-05-01

    English abstracts contained are from papers authored by the research staff of the Research Institute of Electrical Communication and the departments of Electrical Engineering, Electrical Communications, Electronic Engineering, and Information Engineering, Tohoku University, which originally appeared in scientific journals in 1994. The abstracts are organized under the following disciplines: electromagnetic theory; physics; fundamental theory of information; communication theory and systems; signal and image processing; systems control; computers; artificial intelligence; recording; acoustics and speech; ultrasonic electronics; antenna, propagation, and transmission; optoelectronics and optical communications; quantum electronics; superconducting materials and applications; magnetic materials and magnetics; semiconductors; electronic materials and parts; electronic devices and integrated circuits; electronic circuits; medical electronics and bionics; measurements and applied electronics; electric power; and miscellaneous.

  17. The Socio-Moral Image Database (SMID): A novel stimulus set for the study of social, moral and affective processes

    PubMed Central

    Bode, Stefan; Murawski, Carsten; Laham, Simon M.

    2018-01-01

    A major obstacle for the design of rigorous, reproducible studies in moral psychology is the lack of suitable stimulus sets. Here, we present the Socio-Moral Image Database (SMID), the largest standardized moral stimulus set assembled to date, containing 2,941 freely available photographic images, representing a wide range of morally (and affectively) positive, negative and neutral content. The SMID was validated with over 820,525 individual judgments from 2,716 participants, with normative ratings currently available for all images on affective valence and arousal, moral wrongness, and relevance to each of the five moral values posited by Moral Foundations Theory. We present a thorough analysis of the SMID regarding (1) inter-rater consensus, (2) rating precision, and (3) breadth and variability of moral content. Additionally, we provide recommendations for use aimed at efficient study design and reproducibility, and outline planned extensions to the database. We anticipate that the SMID will serve as a useful resource for psychological, neuroscientific and computational (e.g., natural language processing or computer vision) investigations of social, moral and affective processes. The SMID images, along with associated normative data and additional resources are available at https://osf.io/2rqad/. PMID:29364985

  18. Wavelet-Based Signal and Image Processing for Target Recognition

    NASA Astrophysics Data System (ADS)

    Sherlock, Barry G.

    2002-11-01

    The PI visited NSWC Dahlgren, VA, for six weeks in May-June 2002 and collaborated with scientists in the G33 TEAMS facility, and with Marilyn Rudzinsky of T44 Technology and Photonic Systems Branch. During this visit the PI also presented six educational seminars to NSWC scientists on various aspects of signal processing. Several items from the grant proposal were completed, including (1) wavelet-based algorithms for interpolation of 1-d signals and 2-d images; (2) Discrete Wavelet Transform domain-based algorithms for filtering of image data; (3) wavelet-based smoothing of image sequence data originally obtained for the CRITTIR (Clutter Rejection Involving Temporal Techniques in the Infra-Red) project. The PI visited the University of Stellenbosch, South Africa to collaborate with colleagues Prof. B.M. Herbst and Prof. J. du Preez on the use of wavelet image processing in conjunction with pattern recognition techniques. The University of Stellenbosch has offered the PI partial funding to support a sabbatical visit in Fall 2003, the primary purpose of which is to enable the PI to develop and enhance his expertise in Pattern Recognition. During the first year, the grant supported publication of 3 refereed papers, presentation of 9 seminars and an intensive two-day course on wavelet theory. The grant supported the work of two students who functioned as research assistants.

  19. A novel method about detecting missing holes on the motor carling

    NASA Astrophysics Data System (ADS)

    Xu, Hongsheng; Tan, Hao; Li, Guirong

    2018-03-01

    After a detailed analysis of how an image-processing system can detect missing holes on the motor carling, we designed the whole system in line with the actual production conditions of the motor carling. We then describe the system's hardware and software in detail, introduce their general functions, and discuss the corresponding hardware and software modules and the principles used to design them. The steps of confirming the image region to be processed, edge detection, and circle detection with the randomized Hough transform are explained in detail. Finally, test results obtained in the laboratory and in the factory are given.
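
    The circle-detection step can be sketched with OpenCV's Hough transform (the standard gradient variant rather than the randomized one discussed above): detect circles in the confirmed region of interest and compare the count with the expected number of holes. All thresholds and radii below are illustrative and would need tuning to the actual carling images.

      import cv2

      def count_holes(gray_roi, expected, dp=1.2, min_dist=20):
          """Detect circular holes in an 8-bit grayscale region of interest and
          flag whether any of the expected holes are missing."""
          blur = cv2.medianBlur(gray_roi, 5)                 # suppress noise before voting
          circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, dp, min_dist,
                                     param1=100, param2=30,
                                     minRadius=5, maxRadius=40)
          found = 0 if circles is None else circles.shape[1]
          return found, found < expected                     # (count, missing-hole flag)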

  20. Discrete shearlet transform: faithful digitization concept and its applications

    NASA Astrophysics Data System (ADS)

    Lim, Wang-Q.

    2011-09-01

    Over the past years, various representation systems which sparsely approximate functions governed by anisotropic features such as edges in images have been proposed. Alongside the theoretical development of these systems, algorithmic realizations of the associated transforms were provided. However, one of the most common shortcomings of these frameworks is the lack of a unified treatment of the continuum and digital worlds, i.e., allowing a digital theory to be a natural digitization of the continuum theory. Shearlets were introduced as a means to sparsely encode anisotropic singularities of multivariate data while providing a unified treatment of the continuous and digital realm. In this paper, we introduce a discrete framework which allows a faithful digitization of the continuum-domain shearlet transform based on compactly supported shearlets. Finally, we show numerical experiments demonstrating the potential of the discrete shearlet transform in several image processing applications.

  1. Anterior cingulate hyperactivations during negative emotion processing among men with schizophrenia and a history of violent behavior.

    PubMed

    Tikàsz, Andràs; Potvin, Stéphane; Lungu, Ovidiu; Joyal, Christian C; Hodgins, Sheilagh; Mendrek, Adrianna; Dumais, Alexandre

    2016-01-01

    Evidence suggests a 2.1-4.6 times increase in the risk of violent behavior in schizophrenia compared to the general population. Current theories propose that the processing of negative emotions is defective in violent individuals and that dysfunctions within the neural circuits involved in emotion processing are implicated in violence. Although schizophrenia patients show enhanced sensitivity to negative stimuli, there are only few functional neuroimaging studies that have examined emotion processing among men with schizophrenia and a history of violence. The present study aimed to identify the brain regions with greater neurofunctional alterations, as detected by functional magnetic resonance imaging during an emotion processing task, of men with schizophrenia who had engaged in violent behavior compared with those who had not. Sixty men were studied; 20 with schizophrenia and a history of violence, 19 with schizophrenia and no violence, and 21 healthy men were scanned while viewing positive, negative, and neutral images. Negative images elicited hyperactivations in the anterior cingulate cortex (ACC), left and right lingual gyrus, and the left precentral gyrus in violent men with schizophrenia, compared to nonviolent men with schizophrenia and healthy men. Neutral images elicited hyperactivations in the right and left middle occipital gyrus, left lingual gyrus, and the left fusiform gyrus in violent men with schizophrenia, compared to the other two groups. Violent men with schizophrenia displayed specific increases in ACC in response to negative images. Given the role of the ACC in information integration, these results indicate a specific dysfunction in the processing of negative emotions that may trigger violent behavior in men with schizophrenia.

  2. An Enduring Dialogue between Computational and Empirical Vision.

    PubMed

    Martinez-Conde, Susana; Macknik, Stephen L; Heeger, David J

    2018-04-01

    In the late 1970s, key discoveries in neurophysiology, psychophysics, computer vision, and image processing had reached a tipping point that would shape visual science for decades to come. David Marr and Ellen Hildreth's 'Theory of edge detection', published in 1980, set out to integrate the newly available wealth of data from behavioral, physiological, and computational approaches in a unifying theory. Although their work had wide and enduring ramifications, their most important contribution may have been to consolidate the foundations of the ongoing dialogue between theoretical and empirical vision science. Copyright © 2018 Elsevier Ltd. All rights reserved.
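
    The edge detector at the center of that 1980 paper is simple enough to sketch: filter with a Laplacian of Gaussian and mark zero crossings. The sketch below is a minimal rendering of that idea using SciPy, with an illustrative sigma and a deliberately simple zero-crossing test.

      import numpy as np
      from scipy.ndimage import gaussian_laplace

      def marr_hildreth_edges(img, sigma=2.0):
          """Marr-Hildreth style edge map: Laplacian-of-Gaussian filtering
          followed by a sign-change test against right and lower neighbours."""
          log = gaussian_laplace(np.asarray(img, dtype=float), sigma)
          edges = np.zeros(log.shape, dtype=bool)
          edges[:, :-1] |= np.signbit(log[:, :-1]) != np.signbit(log[:, 1:])   # horizontal crossings
          edges[:-1, :] |= np.signbit(log[:-1, :]) != np.signbit(log[1:, :])   # vertical crossings
          return edges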

  3. Quantum Theory of Superresolution for Incoherent Optical Imaging

    NASA Astrophysics Data System (ADS)

    Tsang, Mankei

    Rayleigh's criterion for resolving two incoherent point sources has been the most influential measure of optical imaging resolution for over a century. In the context of statistical image processing, violation of the criterion is especially detrimental to the estimation of the separation between the sources, and modern far-field superresolution techniques rely on suppressing the emission of close sources to enhance the localization precision. Using quantum optics, quantum metrology, and statistical analysis, here we show that, even if two close incoherent sources emit simultaneously, measurements with linear optics and photon counting can estimate their separation from the far field almost as precisely as conventional methods do for isolated sources, rendering Rayleigh's criterion irrelevant to the problem. Our results demonstrate that superresolution can be achieved not only for fluorophores but also for stars. Recent progress in generalizing our theory for multiple sources and spectroscopy will also be discussed. This work is supported by the Singapore National Research Foundation under NRF Grant No. NRF-NRFF2011-07 and the Singapore Ministry of Education Academic Research Fund Tier 1 Project R-263-000-C06-112.
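
    For reference, Rayleigh's criterion and the key quantity of the quantum analysis can be written as below. The quantum Fisher information expression is recalled from the related superresolution literature for a Gaussian point-spread function and is stated here as an assumption rather than a quotation from this abstract.

      % Rayleigh's angular resolution criterion for a circular aperture of
      % diameter D at wavelength \lambda:
      \theta_{\mathrm{R}} \approx 1.22\,\frac{\lambda}{D}

      % Recalled result for a Gaussian point-spread function of width \sigma
      % and N detected photons: the quantum Fisher information for the
      % separation d stays constant as d \to 0,
      J_{Q}(d) = \frac{N}{4\sigma^{2}},
      % whereas the Fisher information of direct intensity imaging vanishes
      % as d \to 0 ("Rayleigh's curse").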

  4. Language-motor interference reflected in MEG beta oscillations.

    PubMed

    Klepp, Anne; Niccolai, Valentina; Buccino, Giovanni; Schnitzler, Alfons; Biermann-Ruben, Katja

    2015-04-01

    The involvement of the brain's motor system in action-related language processing can lead to overt interference with simultaneous action execution. The aim of the current study was to find evidence for this behavioural interference effect and to investigate its neurophysiological correlates using oscillatory MEG analysis. Subjects performed a semantic decision task on single action verbs, describing actions executed with the hands or the feet, and abstract verbs. Right hand button press responses were given for concrete verbs only. Therefore, longer response latencies for hand compared to foot verbs should reflect interference. We found interference effects to depend on verb imageability: overall response latencies for hand verbs did not differ significantly from foot verbs. However, imageability interacted with effector: while response latencies to hand and foot verbs with low imageability were equally fast, those for highly imageable hand verbs were longer than for highly imageable foot verbs. The difference is reflected in motor-related MEG beta band power suppression, which was weaker for highly imageable hand verbs compared with highly imageable foot verbs. This provides a putative neuronal mechanism for language-motor interference where the involvement of cortical hand motor areas in hand verb processing interacts with the typical beta suppression seen before movements. We found that the facilitatory effect of higher imageability on action verb processing time is perturbed when verb and motor response relate to the same body part. Importantly, this effect is accompanied by neurophysiological effects in beta band oscillations. The attenuated power suppression around the time of movement, reflecting decreased cortical excitability, seems to result from motor simulation during action-related language processing. This is in line with embodied cognition theories. Copyright © 2015. Published by Elsevier Inc.

  5. Application of abstract harmonic analysis to the high-speed recognition of images

    NASA Technical Reports Server (NTRS)

    Usikov, D. A.

    1979-01-01

    Methods are constructed for rapidly computing correlation functions using the theory of abstract harmonic analysis. The theory developed includes, as a particular case, the familiar Fourier-transform method for computing a correlation function, which makes it possible to find images independent of their translation in the plane. Two examples of the application of the general theory described are the search for images independent of their rotation and scale, and the search for images independent of their translations and rotations in the plane.
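
    The Fourier special case mentioned above is the familiar correlation theorem: a translation-invariant match reduces to elementwise products of Fourier transforms. The sketch below shows phase correlation between two equally sized images as a minimal illustration; it is not the paper's group-theoretic generalization.

      import numpy as np

      def phase_correlation_shift(a, b):
          """Estimate the relative translation between two equally sized images
          via phase correlation (sign convention depends on which image is the
          reference)."""
          R = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
          R /= np.maximum(np.abs(R), 1e-12)                  # keep phase only
          corr = np.fft.ifft2(R).real
          dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
          if dy > a.shape[0] // 2: dy -= a.shape[0]          # wrap into signed range
          if dx > a.shape[1] // 2: dx -= a.shape[1]
          return dy, dx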

  6. SAR Image Change Detection Based on Fuzzy Markov Random Field Model

    NASA Astrophysics Data System (ADS)

    Zhao, J.; Huang, G.; Zhao, Z.

    2018-04-01

    Most existing SAR image change-detection algorithms consider only single-pixel information from the different images and do not consider the spatial dependencies among image pixels, so the detection results are susceptible to image noise and the detection performance is not ideal. Markov Random Fields (MRF) can make full use of the spatial dependence of image pixels and improve detection accuracy. When segmenting the difference image, regions of different categories are highly similar where they adjoin, and it is difficult to clearly assign labels to pixels near these boundaries. In the traditional MRF method, each pixel is given a hard label at every iteration; this hard decision causes a loss of information. This paper applies the combination of fuzzy theory and MRF to the change detection of SAR images. The experimental results show that the proposed method has a better detection effect than the traditional MRF method.
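
    To make the role of the spatial prior concrete, the sketch below runs a plain hard-label iterated-conditional-modes (ICM) baseline on a log-ratio difference image: a Gaussian data term per class plus a Potts penalty for disagreeing neighbours. It is deliberately the hard-decision variant the paper improves on (the fuzzy memberships are omitted), uses synchronous updates, and assumes both classes stay non-empty; all parameters are illustrative.

      import numpy as np

      def icm_change_map(img1, img2, beta=1.5, n_iter=10, eps=1e-6):
          """Two-class MRF change map via simple synchronous ICM on the
          log-ratio image: Gaussian likelihood + Potts smoothness prior."""
          d = np.log((np.asarray(img1, float) + eps) / (np.asarray(img2, float) + eps))
          labels = (np.abs(d) > np.abs(d).mean()).astype(int)           # crude initialization
          for _ in range(n_iter):
              mu = np.array([d[labels == k].mean() for k in (0, 1)])
              var = np.array([d[labels == k].var() + eps for k in (0, 1)])
              nb1 = sum(np.roll(labels, s, a) for s in (1, -1) for a in (0, 1))
              energies = []
              for k in (0, 1):
                  data = (d - mu[k]) ** 2 / (2.0 * var[k]) + 0.5 * np.log(var[k])
                  agree = nb1 if k == 1 else 4 - nb1                    # 4-neighbourhood agreement
                  energies.append(data + beta * (4 - agree))            # Potts disagreement penalty
              labels = np.argmin(np.stack(energies), axis=0)
          return labels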

  7. "But I Like My Body": Positive body image characteristics and a holistic model for young-adult women.

    PubMed

    Wood-Barcalow, Nichole L; Tylka, Tracy L; Augustus-Horvath, Casey L

    2010-03-01

    Extant body image research has provided a rich understanding of negative body image but a rather underdeveloped depiction of positive body image. Thus, this study used Grounded Theory to analyze interviews from 15 college women classified as having positive body image and five body image experts. Many characteristics of positive body image emerged, including appreciating the unique beauty and functionality of their body, filtering information (e.g., appearance commentary, media ideals) in a body-protective manner, defining beauty broadly, and highlighting their body's assets while minimizing perceived imperfections. A holistic model emerged: when women processed mostly positive and rejected negative source information, their body investment decreased and body evaluation became more positive, illustrating the fluidity of body image. Women reciprocally influenced these sources (e.g., mentoring others to love their bodies, surrounding themselves with others who promote body acceptance, taking care of their health), which, in turn, promoted increased positive source information. Copyright 2010. Published by Elsevier Ltd.

  8. Electromagnetic Vortex-Based Radar Imaging Using a Single Receiving Antenna: Theory and Experimental Results

    PubMed Central

    Yuan, Tiezhu; Wang, Hongqiang; Cheng, Yongqiang; Qin, Yuliang

    2017-01-01

    Radar imaging based on electromagnetic vortex can achieve azimuth resolution without relative motion. The present paper investigates this imaging technique with the use of a single receiving antenna through theoretical analysis and experimental results. In contrast to the case of multiple receiving antennas, the echoes from a single receiver cannot be used directly for image reconstruction using the Fourier method. The reason is revealed by using the point spread function. An additional phase is compensated for each mode before the imaging process, based on the array parameters and the elevation of the targets. A proof-of-concept imaging system based on a circular phased array is created, and imaging experiments of corner-reflector targets are performed in an anechoic chamber. The azimuthal image is reconstructed by the use of Fourier transform and spectral estimation methods. The azimuth resolution of the two methods is analyzed and compared through experimental data. The experimental results verify the principle of azimuth resolution and the proposed phase compensation method. PMID:28335487

  9. Estimation of images degraded by film-grain noise.

    PubMed

    Naderi, F; Sawchuk, A A

    1978-04-15

    Film-grain noise describes the intrinsic noise produced by a photographic emulsion during the process of image recording and reproduction. In this paper we consider the restoration of images degraded by film-grain noise. First a detailed model for the overall photographic imaging system is presented. The model includes linear blurring effects and the signal-dependent effect of film-grain noise. The accuracy of this model is tested by simulating images according to it and comparing the results to images of similar targets that were actually recorded on film. The restoration of images degraded by film-grain noise is then considered in the context of estimation theory. A discrete Wiener filter is developed which explicitly allows for the signal dependence of the noise. The filter adaptively alters its characteristics based on the nonstationary first-order statistics of an image and is shown to have advantages over the conventional Wiener filter. Experimental results for modeling and the adaptive estimation filter are presented.
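
    The standard locally adaptive Wiener filter that this work generalizes can be sketched in a few lines: shrink each pixel toward its local mean by an amount set by the local signal-to-noise ratio. The sketch assumes stationary additive noise of known variance; in the film-grain setting that variance would be made signal-dependent, which is the extension the paper develops.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def adaptive_wiener(img, noise_var, size=7):
          """Locally adaptive (Lee-type) Wiener filter using local mean and
          variance estimated in a sliding window of the given size."""
          x = np.asarray(img, dtype=float)
          local_mean = uniform_filter(x, size)
          local_var = np.maximum(uniform_filter(x * x, size) - local_mean ** 2, 0.0)
          gain = np.maximum(local_var - noise_var, 0.0) / np.maximum(local_var, 1e-12)
          return local_mean + gain * (x - local_mean)        # shrink toward the local mean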

  10. Fractal analysis and its impact factors on pore structure of artificial cores based on the images obtained using magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Wang, Heming; Liu, Yu; Song, Yongchen; Zhao, Yuechao; Zhao, Jiafei; Wang, Dayong

    2012-11-01

    Pore structure is one of the important factors affecting the properties of porous media, but it is difficult to describe its complexity exactly. Fractal theory is an effective and available method for quantifying complex and irregular pore structure. In this paper, the fractal dimension calculated by the box-counting method based on fractal theory was applied to characterize the pore structure of artificial cores. The microstructure or pore distribution in the porous material was obtained using nuclear magnetic resonance imaging (MRI). Three classical fractals and one sand-packed bed model were selected as the experimental material to investigate the influence of box size, threshold value, and image resolution when performing fractal analysis. To avoid the influence of box size, a sequence of divisors of the image size was proposed and compared with two other schemes (geometric sequence and arithmetic sequence) in terms of partitioning the image completely and yielding the smallest fitting error. Comparison of manually and automatically selected threshold values showed that the threshold plays an important role in image binarization and that the minimum-error method can be used to obtain an appropriate value. Images obtained under different pixel matrices in MRI were used to analyze the influence of image resolution; higher image resolution detects more of the pore structure and increases its apparent irregularity. Taking these factors into account, fractal analysis of four kinds of artificial cores showed that the fractal dimension can be used to distinguish the different kinds of artificial cores and that the relationship between fractal dimension and porosity or permeability can be expressed by the model D = a - b·ln(x + c).
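
    The box-counting estimate itself is short to write down; the sketch below applies it to a binary pore image, using box sizes that divide the image evenly in the spirit of the divisor sequence discussed above. It assumes the image contains at least one pore pixel at every box size; sizes and names are illustrative.

      import numpy as np

      def box_counting_dimension(binary_img, sizes=(2, 4, 8, 16, 32)):
          """Box-counting fractal dimension of a binary image: count occupied
          boxes at each box size and fit log N(s) against log(1/s)."""
          img = np.asarray(binary_img, dtype=bool)
          counts = []
          for s in sizes:
              h, w = img.shape[0] // s * s, img.shape[1] // s * s       # crop to a multiple of s
              blocks = img[:h, :w].reshape(h // s, s, w // s, s)
              counts.append(blocks.any(axis=(1, 3)).sum())              # boxes containing pore pixels
          slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes, float)), np.log(counts), 1)
          return slope                                                  # fitted dimension D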

  11. Attention trees and semantic paths

    NASA Astrophysics Data System (ADS)

    Giusti, Christian; Pieroni, Goffredo G.; Pieroni, Laura

    2007-02-01

    In the last few decades several techniques for image content extraction, often based on segmentation, have been proposed. It has been suggested that under the assumption of very general image content, segmentation becomes unstable and classification becomes unreliable. According to recent psychological theories, certain image regions attract the attention of human observers more than others and, generally, an image's main meaning appears concentrated in those regions. Initially, regions attracting our attention are perceived as a whole and hypotheses on their content are formulated; subsequently, the components of those regions are carefully analyzed and a more precise interpretation is reached. It is interesting to observe that an image decomposition process performed according to these psychological visual attention theories might present advantages with respect to a traditional segmentation approach. In this paper we propose an automatic procedure generating image decomposition based on the detection of visual attention regions. A new clustering algorithm that exploits Delaunay-Voronoi diagrams to achieve the decomposition is proposed. By applying that algorithm recursively, starting from the whole image, a transformation of the image into a tree of related meaningful regions is obtained (Attention Tree). Subsequently, a semantic interpretation of the leaf nodes is carried out by using a structure of Neural Networks (Neural Tree) assisted by a knowledge base (Ontology Net). Starting from leaf nodes, paths toward the root node across the Attention Tree are attempted. Each path relates the semantics of successive child-parent node pairs and, consequently, merges the corresponding image regions; each relationship detected between two tree nodes thus extends the interpreted image area at every step of the path. The construction of several Attention Trees has been performed and partial results will be shown.

  12. Cybernetic Basis and System Practice of Remote Sensing and Spatial Information Science

    NASA Astrophysics Data System (ADS)

    Tan, X.; Jing, X.; Chen, R.; Ming, Z.; He, L.; Sun, Y.; Sun, X.; Yan, L.

    2017-09-01

    Cybernetics provides a new set of ideas and methods for modern science and has been applied extensively in many areas, yet it has rarely been introduced into the field of remote sensing. Based on the imaging process of a remote sensing system, this paper introduces cybernetics into remote sensing and establishes a space-time closed-loop control theory for its actual operation. The approach makes the flow of spatial information coherent and improves the overall efficiency of spatial information from acquisition and processing to transformation and application. We describe the application of cybernetics not only to remote sensing platform control, sensor control, and data-processing control, but also to control of the whole remote sensing imaging process. Feeding the output information back to the input controls the efficient operation of the entire system. This combination of cybernetics and remote sensing science can raise remote sensing to a higher level.

  13. Smokers exhibit biased neural processing of smoking and affective images.

    PubMed

    Oliver, Jason A; Jentink, Kade G; Drobes, David J; Evans, David E

    2016-08-01

    There has been growing interest in the role that implicit processing of drug cues can play in motivating drug use behavior. However, the extent to which drug cue processing biases relate to the processing biases exhibited to other types of evocative stimuli is largely unknown. The goal of the present study was to determine how the implicit cognitive processing of smoking cues relates to the processing of affective cues using a novel paradigm. Smokers (n = 50) and nonsmokers (n = 38) completed a picture-viewing task, in which participants were presented with a series of smoking, pleasant, unpleasant, and neutral images while engaging in a distractor task designed to direct controlled resources away from conscious processing of image content. Electroencephalogram recordings were obtained throughout the task for extraction of event-related potentials (ERPs). Smokers exhibited differential processing of smoking cues across 3 different ERP indices compared with nonsmokers. Comparable effects were found for pleasant cues on 2 of these indices. Late cognitive processing of smoking and pleasant cues was associated with nicotine dependence and cigarette use. Results suggest that cognitive biases may extend across classes of stimuli among smokers. This raises important questions about the fundamental meaning of cognitive biases, and suggests the need to consider generalized cognitive biases in theories of drug use behavior and interventions based on cognitive bias modification. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  14. Age-related Neural Differences in Affiliation and Isolation

    PubMed Central

    Beadle, Janelle N.; Yoon, Carolyn; Gutchess, Angela H.

    2012-01-01

    While previous aging studies have focused on particular components of social perception (e.g., theory of mind, self-referencing), little is known about age-related differences specifically for the neural basis of perception of affiliation and isolation. This study investigates age-related similarities and differences in the neural basis of affiliation and isolation. Participants viewed images of affiliation (groups engaged in social interaction), and isolation (lone individuals), as well as non-social stimuli (e.g., landscapes) while making pleasantness judgments and undergoing functional neuroimaging (BOLD fMRI). Results indicated age-related similarities in response to affiliation and isolation in recruitment of regions involved in theory of mind and self-referencing (e.g. temporal pole, medial prefrontal cortex). Yet, age-related differences also emerged in response to affiliation and isolation in regions implicated in theory of mind as well as self-referencing. Specifically, in response to isolation versus affiliation images, older adults showed greater recruitment than younger adults of the temporal pole, a region that is important for retrieval of personally-relevant memories utilized to understand others’ mental states. Furthermore, in response to images of affiliation versus isolation, older adults showed greater recruitment than younger adults of the precuneus, a region implicated in self-referencing. We suggest that age-related divergence in neural activation patterns underlying judgments of scenes depicting isolation versus affiliation may indicate that older adults’ theory of mind processes are driven by retrieval of isolation-relevant information. Moreover, older adults’ greater recruitment of the precuneus for affiliation versus isolation suggests that the positivity bias for emotional information may extend to social information involving affiliation. PMID:22371086

  15. [Dream in the land of paradoxical sleep].

    PubMed

    Pire, E; Herman, G; Cambron, L; Maquet, P; Poirrier, R

    2008-01-01

    Paradoxical sleep (PS or REM sleep) is traditionally a matter for neurophysiology, a science of the brain. Dream is associated with neuropsychology and the sciences of the mind. The relationships between sleep and dream are better understood in the light of new methodologies in both domains, particularly those of basic neurosciences, which elucidate the mechanisms underlying PS, and functional imaging techniques. Data from these approaches are placed here in the perspective of rather old clinical observations of human cerebral lesions and of the phylogeny of vertebrates, in order to support a theory of dream. Dreams may be seen as a living marker of a cognitivo-emotional process, called here the "eidictic process", involving posterior brain and limbic structures, which keeps up during wakefulness but is subjected, at that time, to the leading role of a cognitivo-rational process, called here the "thought process". The latter is of instrumental origin in human beings. It involves prefrontal cortices (executive tasks) and frontal/parietal cortices (attention) in the brain. Some clinical implications of the theory are illustrated.

  16. Theoretical study for aerial image intensity in resist in high numerical aperture projection optics and experimental verification with one-dimensional patterns

    NASA Astrophysics Data System (ADS)

    Shibuya, Masato; Takada, Akira; Nakashima, Toshiharu

    2016-04-01

    In optical lithography, high-performance exposure tools are indispensable for obtaining not only fine patterns but also precise pattern widths. Since an accurate theoretical method is necessary to predict these values, some pioneering and valuable studies have been proposed. However, there might be some ambiguity or lack of consensus regarding the treatment of diffraction by the object, the incoming inclination factor onto the image plane in scalar imaging theory, and the paradoxical phenomenon of the inclined entrance plane wave onto the image in vector imaging theory. We have reconsidered imaging theory in detail and phenomenologically resolved the paradox. By comparing theoretical aerial image intensity with experimentally measured pattern widths for one-dimensional patterns, we have validated our theoretical considerations.

  17. Parallel algorithm of real-time infrared image restoration based on total variation theory

    NASA Astrophysics Data System (ADS)

    Zhu, Ran; Li, Miao; Long, Yunli; Zeng, Yaoyuan; An, Wei

    2015-10-01

    Image restoration is a necessary preprocessing step for infrared remote sensing applications. Traditional methods remove the noise but penalize too heavily the gradients corresponding to edges. Image restoration techniques based on variational approaches can solve this over-smoothing problem thanks to their well-defined mathematical modeling of the restoration procedure. The total variation (TV) of the infrared image is introduced as an L1 regularization term added to the objective energy functional. This converts restoration into the optimization of a functional involving a fidelity term to the image data plus a regularization term. Infrared image restoration with the TV-L1 model exploits the acquired remote sensing data fully and preserves information at edges caused by clouds. The numerical implementation algorithm is presented in detail. Analysis indicates that the structure of this algorithm lends itself to parallelization. Therefore a parallel implementation of the TV-L1 filter based on a multicore architecture with shared memory is proposed for infrared real-time remote sensing systems. The massive computation over image data is performed in parallel by cooperating threads running simultaneously on multiple cores. Several groups of synthetic infrared image data are used to validate the feasibility and effectiveness of the proposed parallel algorithm. A quantitative analysis of the restored image quality compared to the input image is presented. Experimental results show that the TV-L1 filter can restore the varying background image reasonably, and that its performance meets the requirements of real-time image processing.
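
    As a point of reference for the functional described above, the following is a minimal, single-threaded sketch of smoothed TV-L1 denoising by gradient descent. It is not the authors' parallel implementation; the step size, smoothing constant `eps`, regularization weight `lam`, and iteration count are assumed values chosen only for illustration.

```python
# Minimal smoothed TV-L1 denoising by gradient descent (illustrative sketch only).
# Minimizes  E(u) = sum sqrt(|grad u|^2 + eps) + lam * sum sqrt((u - f)^2 + eps).
import numpy as np

def tv_l1_denoise(f, lam=1.0, step=0.1, eps=1e-3, n_iter=200):
    u = f.astype(float).copy()
    for _ in range(n_iter):
        # Forward differences with replicated borders.
        ux = np.diff(u, axis=1, append=u[:, -1:])
        uy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(ux**2 + uy**2 + eps)
        px, py = ux / mag, uy / mag
        # Divergence of (px, py) via backward differences.
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        fidelity = lam * (u - f) / np.sqrt((u - f)**2 + eps)
        u -= step * (fidelity - div)
    return u

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0
    noisy = clean + 0.2 * rng.standard_normal(clean.shape)
    restored = tv_l1_denoise(noisy, lam=1.5)
    print("MAE noisy:", float(np.abs(noisy - clean).mean()))
    print("MAE restored:", float(np.abs(restored - clean).mean()))
```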

  18. Retinal Connectomics: Towards Complete, Accurate Networks

    PubMed Central

    Marc, Robert E.; Jones, Bryan W.; Watt, Carl B.; Anderson, James R.; Sigulinsky, Crystal; Lauritzen, Scott

    2013-01-01

    Connectomics is a strategy for mapping complex neural networks based on high-speed automated electron optical imaging, computational assembly of neural data volumes, web-based navigational tools to explore 10^12-10^15 byte (terabyte to petabyte) image volumes, and annotation and markup tools to convert images into rich networks with cellular metadata. These collections of network data and associated metadata, analyzed using tools from graph theory and classification theory, can be merged with classical systems theory, giving a more completely parameterized view of how biologic information processing systems are implemented in retina and brain. Networks have two separable features: topology and connection attributes. The first findings from connectomics strongly validate the idea that the topologies of complete retinal networks are far more complex than the simple schematics that emerged from classical anatomy. In particular, connectomics has permitted an aggressive refactoring of the retinal inner plexiform layer, demonstrating that network function cannot be simply inferred from stratification; exposing the complex geometric rules for inserting different cells into a shared network; revealing unexpected bidirectional signaling pathways between mammalian rod and cone systems; documenting selective feedforward systems, novel candidate signaling architectures, new coupling motifs, and the highly complex architecture of the mammalian AII amacrine cell. This is but the beginning, as the underlying principles of connectomics are readily transferable to non-neural cell complexes and provide new contexts for assessing intercellular communication. PMID:24016532

  19. Effects of Modality and Redundancy Principles on the Learning and Attitude of a Computer-Based Music Theory Lesson among Jordanian Primary Pupils

    ERIC Educational Resources Information Center

    Aldalalah, Osamah Ahmad; Fong, Soon Fook

    2010-01-01

    The purpose of this study was to investigate the effects of modality and redundancy principles on the attitude and learning of music theory among primary pupils of different aptitudes in Jordan. The lesson of music theory was developed in three different modes, audio and image (AI), text with image (TI) and audio with image and text (AIT). The…

  20. Image Repair Discourse and Crisis Communication.

    ERIC Educational Resources Information Center

    Benoit, William L.

    1997-01-01

    Describes the theory of image restoration discourse as an approach for understanding corporate crisis situations. States this theory can be used by practitioners to help design messages during crises and by critics or educators to critically evaluate such messages. Describes and illustrates the theory's basic concepts. Offers suggestions for…

  1. Auditory-musical processing in autism spectrum disorders: a review of behavioral and brain imaging studies.

    PubMed

    Ouimet, Tia; Foster, Nicholas E V; Tryfon, Ana; Hyde, Krista L

    2012-04-01

    Autism spectrum disorder (ASD) is a complex neurodevelopmental condition characterized by atypical social and communication skills, repetitive behaviors, and atypical visual and auditory perception. Studies in vision have reported enhanced detailed ("local") processing but diminished holistic ("global") processing of visual features in ASD. Individuals with ASD also show enhanced processing of simple visual stimuli but diminished processing of complex visual stimuli. Relative to the visual domain, auditory global-local distinctions, and the effects of stimulus complexity on auditory processing in ASD, are less clear. However, one remarkable finding is that many individuals with ASD have enhanced musical abilities, such as superior pitch processing. This review provides a critical evaluation of behavioral and brain imaging studies of auditory processing with respect to current theories in ASD. We have focused on auditory-musical processing in terms of global versus local processing and simple versus complex sound processing. This review contributes to a better understanding of auditory processing differences in ASD. A deeper comprehension of sensory perception in ASD is key to better defining ASD phenotypes and, in turn, may lead to better interventions. © 2012 New York Academy of Sciences.

  2. [Recent progress of research and applications of fractal and its theories in medicine].

    PubMed

    Cai, Congbo; Wang, Ping

    2014-10-01

    Fractal, a mathematical concept, is used to describe images with self-similarity and scale invariance. Fractal characteristics have been discovered in some organisms, such as the cerebral cortex surface, the retinal vessel structure, the cardiovascular network, and trabecular bone. It has been preliminarily confirmed that the three-dimensional structure of cells cultured in vitro can be significantly enhanced by a bionic fractal surface. Moreover, fractal theory in clinical research will help the early diagnosis and treatment of diseases, reducing the patient's pain and suffering. The development of diseases in the human body can be expressed by fractal theory parameters. It is of considerable significance to retrospectively review the preparation and application of fractal surfaces and their diagnostic value in medicine. This paper reviews applications of fractals and fractal theory in medical science, based on the research achievements in our laboratory.
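
    To make the self-similarity and scale-invariance idea concrete (this example is not taken from the paper), a box-counting estimate of fractal dimension counts occupied boxes at several scales and fits the slope of log N(s) against log(1/s). The test image and box sizes below are assumptions.

```python
# Box-counting fractal dimension of a binary image (illustrative sketch).
import numpy as np

def box_counting_dimension(mask, box_sizes=(2, 4, 8, 16, 32)):
    """Estimate fractal dimension as the slope of log N(s) vs log(1/s)."""
    counts = []
    for s in box_sizes:
        h, w = mask.shape
        # Trim so the image tiles exactly into s x s boxes.
        trimmed = mask[:h - h % s, :w - w % s]
        boxes = trimmed.reshape(trimmed.shape[0] // s, s, trimmed.shape[1] // s, s)
        occupied = boxes.any(axis=(1, 3)).sum()
        counts.append(occupied)
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

if __name__ == "__main__":
    # A filled square should yield a dimension close to 2; a thin line close to 1.
    img = np.zeros((256, 256), dtype=bool)
    img[64:192, 64:192] = True
    print(round(box_counting_dimension(img), 2))
```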

  3. Evaluating a Split Processing Model of Visual Word Recognition: Effects of Orthographic Neighborhood Size

    ERIC Educational Resources Information Center

    Lavidor, Michal; Hayes, Adrian; Shillcock, Richard; Ellis, Andrew W.

    2004-01-01

    The split fovea theory proposes that visual word recognition of centrally presented words is mediated by the splitting of the foveal image, with letters to the left of fixation being projected to the right hemisphere (RH) and letters to the right of fixation being projected to the left hemisphere (LH). Two lexical decision experiments aimed to…

  4. Proceedings of Selected Research and Development Presentations at the 1996 National Convention of the Association for Educational Communications and Technology Sponsored by the Research and Theory Division (18th, Indianapolis, IN, 1996).

    ERIC Educational Resources Information Center

    Simonson, Michael R., Ed.; And Others

    1996-01-01

    This proceedings volume contains 77 papers. Subjects addressed include: image processing; new faculty research methods; preinstructional activities for preservice teacher education; computer "window" presentation styles; interface design; stress management instruction; cooperative learning; graphical user interfaces; student attitudes,…

  5. Flight Mechanics/Estimation Theory Symposium. [with application to autonomous navigation and attitude/orbit determination

    NASA Technical Reports Server (NTRS)

    Fuchs, A. J. (Editor)

    1979-01-01

    Onboard and real time image processing to enhance geometric correction of the data is discussed with application to autonomous navigation and attitude and orbit determination. Specific topics covered include: (1) LANDSAT landmark data; (2) star sensing and pattern recognition; (3) filtering algorithms for Global Positioning System; and (4) determining orbital elements for geostationary satellites.

  6. Students Perception towards the Implementation of Computer Graphics Technology in Class via Unified Theory of Acceptance and Use of Technology (UTAUT) Model

    NASA Astrophysics Data System (ADS)

    Binti Shamsuddin, Norsila

    Technology advancement and development in a higher learning institution is a chance for students to be motivated to learn the information technology areas in depth. Students should seize the opportunity to blend their skills with these technologies as preparation for graduation. The curriculum itself can raise students' interest and persuade them to be directly involved in the evolution of the technology. The aim of this study is to see how deep the students' involvement is, as well as their acceptance of the technology used in Computer Graphics and Image Processing subjects. The study targets Bachelor students in the Faculty of Industrial Information Technology (FIIT), Universiti Industri Selangor (UNISEL): Bac. in Multimedia Industry, BSc. Computer Science and BSc. Computer Science (Software Engineering). The study utilizes the Unified Theory of Acceptance and Use of Technology (UTAUT) to further validate the model and enhance our understanding of the adoption of Computer Graphics and Image Processing technologies. Four of the eight independent factors in UTAUT are studied against the dependent factor.

  7. Measuring and managing radiologist workload: application of lean and constraint theories and production planning principles to planning radiology services in a major tertiary hospital.

    PubMed

    MacDonald, Sharyn L S; Cowan, Ian A; Floyd, Richard; Mackintosh, Stuart; Graham, Rob; Jenkins, Emma; Hamilton, Richard

    2013-10-01

    We describe how techniques traditionally used in the manufacturing industry (lean management, the theory of constraints and production planning) can be applied to planning radiology services to reduce the impact of constraints such as limited radiologist hours, and to subsequently reduce delays in accessing imaging and in report turnaround. Targets for imaging and reporting were set aligned with clinical needs. Capacity was quantified for each modality and for radiologists and recorded in activity lists. Demand was quantified and forecasting commenced based on historical referral rates. To mitigate the impact of radiologists as a constraint, lean management processes were applied to radiologist workflows. A production planning process was implemented. Outpatient waiting times to access imaging steadily decreased. Report turnaround times improved, with the percentage of overnight/on-call reports completed by the 1030 target time increasing from approximately 30% to 80-90%. The percentage of emergency and inpatient reports completed within one hour increased from approximately 15% to approximately 50%, with 80-90% available within 4 hours. The number of unreported cases on the radiologist work-list at the end of the working day was reduced. The average weekly accuracy of demand forecasts for emergency and inpatient CT, MRI and plain film imaging was 91%, 83% and 92% respectively. For outpatient CT, MRI and plain film imaging the accuracy was 60%, 55% and 77% respectively. Reliable routine weekly and medium- to longer-term service planning is now possible. Tools from industry can be successfully applied to diagnostic imaging services to improve performance. They allow an accurate understanding of the demands on a service and its capacity, and can reliably predict the impact of changes in demand or capacity on service delivery. © 2013 The Royal Australian and New Zealand College of Radiologists.

  8. Small blob identification in medical images using regional features from optimum scale.

    PubMed

    Zhang, Min; Wu, Teresa; Bennett, Kevin M

    2015-04-01

    Recent advances in medical imaging technology have greatly enhanced imaging-based diagnosis, which requires computationally efficient and accurate algorithms to process the images (e.g., measure the objects) for quantitative assessment. In this research, we are interested in one type of imaging object: small blobs. Examples of small blob objects are cells in histopathology images, glomeruli in MR images, etc. This problem is particularly challenging because the small blobs often have inhomogeneous intensity distributions and an indistinct boundary against the background. Yet, in general, these blobs have similar sizes. Motivated by this finding, we propose a novel detector termed Hessian-based Laplacian of Gaussian (HLoG), using scale space theory as its foundation. As in most imaging detectors, the image is first smoothed via LoG. Hessian analysis is then launched to identify the single optimal scale on which a presegmentation is conducted. The advantage of the Hessian process is that it is capable of delineating the blobs. As a result, regional features can be retrieved. These features enable an unsupervised clustering algorithm for postpruning, which should be more robust and sensitive than the traditional threshold-based postpruning commonly used in most imaging detectors. To test the performance of the proposed HLoG, two sets of 2-D grey medical images are studied. HLoG is compared against three state-of-the-art detectors: generalized LoG, Radial-Symmetry and LoG, using precision, recall, and F-score metrics. We observe that HLoG statistically outperforms the compared detectors.
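
    The sketch below runs the classical multiscale LoG blob detector from scikit-image, one of the baselines named above; it is not the proposed HLoG detector, and the sigma range and threshold are assumed values.

```python
# Classical multiscale LoG blob detection with scikit-image (a baseline the paper
# compares against, not the proposed HLoG detector). Parameters are illustrative.
import numpy as np
from skimage.feature import blob_log

def detect_blobs(image, min_sigma=2, max_sigma=10, num_sigma=9, threshold=0.05):
    """Return an array of (row, col, sigma) for detected bright blobs."""
    blobs = blob_log(image, min_sigma=min_sigma, max_sigma=max_sigma,
                     num_sigma=num_sigma, threshold=threshold)
    # blob_log reports sigma; the approximate blob radius is sqrt(2) * sigma.
    return blobs

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    img = np.zeros((128, 128))
    yy, xx = np.mgrid[0:128, 0:128]
    for cy, cx in [(32, 32), (90, 70)]:   # two synthetic Gaussian blobs
        img += np.exp(-((yy - cy)**2 + (xx - cx)**2) / (2 * 4.0**2))
    img += 0.02 * rng.standard_normal(img.shape)
    print(detect_blobs(img))
```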

  9. GLOBECOM '84 - Global Telecommunications Conference, Atlanta, GA, November 26-29, 1984, Conference Record. Volume 3

    NASA Astrophysics Data System (ADS)

    Attention is given to aspects of quality assurance methodologies in development life cycles, optical intercity transmission systems, multiaccess protocols, system and technology aspects of regional/domestic satellites, advances in SSB-AM radio transmission over terrestrial and satellite networks, and development environments for telecommunications systems. Other subjects studied concern business communication networks for voice and data, VLSI in local networks and communication protocols, product evaluation and support, an update regarding Videotex, topics in communication theory, topics in radio propagation, a status report on the societal effects of technology in the workplace, digital image processing, and adaptive signal processing for communications. The management of the reliability function in the development process is considered, along with gigabit technologies for long-distance, large-capacity optical transmission equipment, the application of gallium arsenide analog and digital integrated circuits to high-speed fiber-optic communications, and a simple algorithm for image data coding.

  10. [Seeking the aetiology of autistic spectrum disorder. Part 2: Functional neuroimaging].

    PubMed

    Bryńska, Anita

    2012-01-01

    Multiple functional imaging techniques contribute to a better understanding of the neurobiological basis of autism-spectrum disorders (ASD). Early functional imaging studies of ASD focused on task-specific methods related to core symptom domains and explored patterns of activation in response to face processing, theory of mind tasks, language processing and executive function tasks. On the other hand, fMRI research in ASD has focused on the development of functional connectivity methods and has provided evidence of alterations in cortical connectivity in ASD, establishing autism as a disorder of under-connectivity among the brain regions participating in cortical networks. This atypical functional connectivity in ASD results in inefficiency and poor integration of processing in the network connections needed to achieve task performance. The goal of this review is to summarise the current functional neuroimaging data and examine their implications for understanding the neurobiology of ASD.

  11. A learning tool for optical and microwave satellite image processing and analysis

    NASA Astrophysics Data System (ADS)

    Dashondhi, Gaurav K.; Mohanty, Jyotirmoy; Eeti, Laxmi N.; Bhattacharya, Avik; De, Shaunak; Buddhiraju, Krishna M.

    2016-04-01

    This paper presents a self-learning tool containing a number of virtual experiments for the processing and analysis of Optical/Infrared and Synthetic Aperture Radar (SAR) images. The tool is named the Virtual Satellite Image Processing and Analysis Lab (v-SIPLAB). Experiments included in the learning tool are related to: Optical/Infrared - image and edge enhancement, smoothing, PCT, vegetation indices, mathematical morphology, accuracy assessment, supervised/unsupervised classification, etc.; Basic SAR - parameter extraction and range spectrum estimation, range compression, Doppler centroid estimation, azimuth reference function generation and compression, multilooking, image enhancement, texture analysis, edge detection, etc.; SAR Interferometry - baseline calculation, extraction of single-look SAR images, registration, resampling, and interferogram generation; SAR Polarimetry - conversion of AirSAR or Radarsat data to S2/C3/T3 matrices, speckle filtering, power/intensity image generation, decomposition of S2/C3/T3, and classification of S2/C3/T3 using the Wishart classifier [3]. Professional-quality polarimetric SAR software can be found at [8]; part of its functionality is available in our system. The learning tool also contains other modules besides the executable software experiments, such as aim, theory, procedure, interpretation, quizzes, links to additional reading material and user feedback. Students can gain an understanding of optical and SAR remotely sensed images through discussion of basic principles, supported by structured procedures for running and interpreting the experiments. Quizzes for self-assessment and a provision for online feedback are also provided to make this learning tool self-contained. Results can be downloaded after performing the experiments.

  12. Neural correlates of concreteness in semantic categorization.

    PubMed

    Pexman, Penny M; Hargreaves, Ian S; Edwards, Jodi D; Henry, Luke C; Goodyear, Bradley G

    2007-08-01

    In some contexts, concrete words (CARROT) are recognized and remembered more readily than abstract words (TRUTH). This concreteness effect has historically been explained by two theories of semantic representation: dual-coding [Paivio, A. Dual coding theory: Retrospect and current status. Canadian Journal of Psychology, 45, 255-287, 1991] and context-availability [Schwanenflugel, P. J. Why are abstract concepts hard to understand? In P. J. Schwanenflugel (Ed.), The psychology of word meanings (pp. 223-250). Hillsdale, NJ: Erlbaum, 1991]. Past efforts to adjudicate between these theories using functional magnetic resonance imaging have produced mixed results. Using event-related functional magnetic resonance imaging, we reexamined this issue with a semantic categorization task that allowed for uniform semantic judgments of concrete and abstract words. The participants were 20 healthy adults. Functional analyses contrasted activation associated with concrete and abstract meanings of ambiguous and unambiguous words. Results showed that for both ambiguous and unambiguous words, abstract meanings were associated with more widespread cortical activation than concrete meanings in numerous regions associated with semantic processing, including temporal, parietal, and frontal cortices. These results are inconsistent with both dual-coding and context-availability theories, as these theories propose that the representations of abstract concepts are relatively impoverished. Our results suggest, instead, that semantic retrieval of abstract concepts involves a network of association areas. We argue that this finding is compatible with a theory of semantic representation such as Barsalou's [Barsalou, L. W. Perceptual symbol systems. Behavioral & Brain Sciences, 22, 577-660, 1999] perceptual symbol systems, whereby concrete and abstract concepts are represented by similar mechanisms but with differences in focal content.

  13. The collaboration of grouping laws in vision.

    PubMed

    Grompone von Gioi, Rafael; Delon, Julie; Morel, Jean-Michel

    2012-01-01

    Gestalt theory gives a list of geometric grouping laws that could in principle give a complete account of human image perception. Based on an extensive thesaurus of clever graphical images, this theory discusses how grouping laws collaborate, and conflict, toward a global image understanding. Unfortunately, as shown in the bibliographical analysis herein, attempts to formalize the grouping laws in computer vision and psychophysics have at best succeeded in computing individual partial structures (or partial gestalts), such as alignments or symmetries. Nevertheless, we show here that a clever, never formalized Gestalt experimental procedure, the Nachzeichnung, suggests a numerical setup to implement and test the collaboration of partial gestalts. The new computational procedure proposed here analyzes a digital image and performs a numerical simulation that we call Nachtanz, or Gestaltic dance. In this dance, the analyzed digital image is gradually deformed in a random way while maintaining the detected partial gestalts. The resulting dancing images should be perceptually indistinguishable if and only if the grouping process was complete. Like the Nachzeichnung, the Nachtanz permits a visual exploration of the degrees of freedom still available to a figure after all partial groups (or gestalts) have been detected. In the newly proposed procedure, instead of drawing themselves, subjects are shown samples of the automatic Gestalt dances and asked to evaluate whether the figures are similar. Several preliminary numerical results with this new Gestaltic experimental setup are thoroughly discussed. Copyright © 2012 Elsevier Ltd. All rights reserved.

  14. BIM-Sim: Interactive Simulation of Broadband Imaging Using Mie Theory

    PubMed Central

    Berisha, Sebastian; van Dijk, Thomas; Bhargava, Rohit; Carney, P. Scott; Mayerich, David

    2017-01-01

    Understanding the structure of a scattered electromagnetic (EM) field is critical to improving the imaging process. Mechanisms such as diffraction, scattering, and interference affect an image, limiting the resolution, and potentially introducing artifacts. Simulation and visualization of scattered fields thus plays an important role in imaging science. However, EM fields are high-dimensional, making them time-consuming to simulate, and difficult to visualize. In this paper, we present a framework for interactively computing and visualizing EM fields scattered by micro and nano-particles. Our software uses graphics hardware for evaluating the field both inside and outside of these particles. We then use Monte-Carlo sampling to reconstruct and visualize the three-dimensional structure of the field, spectral profiles at individual points, the structure of the field at the surface of the particle, and the resulting image produced by an optical system. PMID:29170738

  15. Iterative Refinement of Transmission Map for Stereo Image Defogging Using a Dual Camera Sensor.

    PubMed

    Kim, Heegwang; Park, Jinho; Park, Hasil; Paik, Joonki

    2017-12-09

    Recently, the stereo imaging-based image enhancement approach has attracted increasing attention in the field of video analysis. This paper presents a dual camera-based stereo image defogging algorithm. Optical flow is first estimated from the stereo foggy image pair, and the initial disparity map is generated from the estimated optical flow. Next, an initial transmission map is generated using the initial disparity map. Atmospheric light is then estimated using the color line theory. The defogged result is finally reconstructed using the estimated transmission map and atmospheric light. The proposed method can refine the transmission map iteratively. Experimental results show that the proposed method can successfully remove fog without color distortion. The proposed method can be used as a pre-processing step for an outdoor video analysis system and a high-end smartphone with a dual camera system.
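
    The final reconstruction step described above follows the standard atmospheric scattering (haze) model, J = (I - A) / max(t, t0) + A. The sketch below shows only that step, under the assumption that the transmission map and atmospheric light have already been estimated; the optical-flow and iterative refinement stages of the pipeline are not reproduced.

```python
# Scene radiance recovery from the standard haze model (final defogging step only;
# the transmission map and atmospheric light are assumed to be already estimated).
import numpy as np

def recover_radiance(foggy, transmission, atmospheric_light, t_min=0.1):
    """J = (I - A) / max(t, t_min) + A, applied per channel."""
    t = np.clip(transmission, t_min, 1.0)[..., np.newaxis]   # (H, W, 1)
    A = np.asarray(atmospheric_light).reshape(1, 1, 3)       # (1, 1, 3)
    J = (foggy.astype(float) - A) / t + A
    return np.clip(J, 0.0, 1.0)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    clean = rng.uniform(0, 1, (4, 4, 3))
    t_true = np.full((4, 4), 0.6)
    A_true = np.array([0.9, 0.9, 0.9])
    # Forward haze model: I = J * t + A * (1 - t); inverting it recovers the clean image.
    foggy = clean * t_true[..., None] + A_true * (1 - t_true[..., None])
    print(np.allclose(recover_radiance(foggy, t_true, A_true), clean, atol=1e-6))
```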

  16. Fundamental limits of reconstruction-based superresolution algorithms under local translation.

    PubMed

    Lin, Zhouchen; Shum, Heung-Yeung

    2004-01-01

    Superresolution is a technique that can produce images of a higher resolution than that of the originally captured ones. Nevertheless, improvement in resolution using such a technique is very limited in practice. This makes it significant to study the problem: "Do fundamental limits exist for superresolution?" In this paper, we focus on a major class of superresolution algorithms, called the reconstruction-based algorithms, which compute high-resolution images by simulating the image formation process. Assuming local translation among low-resolution images, this paper is the first attempt to determine the explicit limits of reconstruction-based algorithms, under both real and synthetic conditions. Based on the perturbation theory of linear systems, we obtain the superresolution limits from the conditioning analysis of the coefficient matrix. Moreover, we determine the number of low-resolution images that are sufficient to achieve the limit. Both real and synthetic experiments are carried out to verify our analysis.
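
    One way to make the conditioning analysis concrete is the toy 1-D construction below (an illustrative sketch, not the paper's exact formation model): each low-resolution observation is a box-averaged copy of the high-resolution signal at one integer sub-pixel shift, and stacking all shifts gives a coefficient matrix whose condition number, for a nonsingular system, bounds how strongly relative data errors are amplified in the reconstruction. The signal length and magnification factor are assumptions.

```python
# Conditioning of a toy 1-D reconstruction-based superresolution system (illustration).
# Each low-resolution observation is a box-averaged copy of the high-resolution signal
# at one integer sub-pixel shift; stacking all shifts gives the system matrix A.
import numpy as np

def sr_system_matrix(n_high, factor):
    n_low = n_high // factor
    rows = []
    for shift in range(factor):                  # one observation per sub-pixel shift
        for i in range(n_low):
            row = np.zeros(n_high)
            start = i * factor + shift
            row[start:start + factor] = 1.0 / factor   # box average (truncated at the border)
            rows.append(row)
    return np.array(rows)

if __name__ == "__main__":
    A = sr_system_matrix(n_high=64, factor=4)
    # For a nonsingular system, perturbation theory bounds the relative reconstruction
    # error by cond(A) times the relative error in the low-resolution data.
    print("system shape:", A.shape, "condition number:", np.linalg.cond(A))
```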

  17. The Fringe-Imaging Skin Friction Technique PC Application User's Manual

    NASA Technical Reports Server (NTRS)

    Zilliac, Gregory G.

    1999-01-01

    A personal computer application (CXWIN4G) has been written which greatly simplifies the task of extracting skin friction measurements from interferograms of oil flows on the surface of wind tunnel models. Images are first calibrated, using a novel approach to one-camera photogrammetry, to obtain accurate spatial information on surfaces with curvature. As part of the image calibration process, an auxiliary file containing the wind tunnel model geometry is used in conjunction with a two-dimensional direct linear transformation to relate the image plane to the physical (model) coordinates. The application then applies a nonlinear regression model to accurately determine the fringe spacing from interferometric intensity records as required by the Fringe Imaging Skin Friction (FISF) technique. The skin friction is found through application of a simple expression that makes use of lubrication theory to relate fringe spacing to skin friction.

  18. Image-Word Pairing-Congruity Effect on Affective Responses

    NASA Astrophysics Data System (ADS)

    Sanabria Z., Jorge C.; Cho, Youngil; Sambai, Ami; Yamanaka, Toshimasa

    The present study explores the effects of familiarity on affective responses (pleasure and arousal) to Japanese ad elements, based on the schema incongruity theory. Print ads showing natural scenes (landscapes) were used to create the stimuli (images and words). An empirical study was conducted to measure subjects' affective responses to image-word combinations that varied in terms of incongruity. The level of incongruity was based on familiarity levels, and was statistically determined by a variable called ‘pairing-congruity status’. The tested hypothesis proposed that even highly familiar image-word combinations, when combined incongruously, would elicit strong affective responses. Subjects assessed the stimuli using bipolar scales. The study was effective in tracing interactions between familiarity, pleasure and arousal, although the incongruous image-word combinations did not elicit the predicted strong effects on pleasure and arousal. The results suggest a need for further research incorporating kansei (i.e., creativity) into the process of stimuli selection.

  19. Iterative Refinement of Transmission Map for Stereo Image Defogging Using a Dual Camera Sensor

    PubMed Central

    Park, Jinho; Park, Hasil

    2017-01-01

    Recently, the stereo imaging-based image enhancement approach has attracted increasing attention in the field of video analysis. This paper presents a dual camera-based stereo image defogging algorithm. Optical flow is first estimated from the stereo foggy image pair, and the initial disparity map is generated from the estimated optical flow. Next, an initial transmission map is generated using the initial disparity map. Atmospheric light is then estimated using the color line theory. The defogged result is finally reconstructed using the estimated transmission map and atmospheric light. The proposed method can refine the transmission map iteratively. Experimental results show that the proposed method can successfully remove fog without color distortion. The proposed method can be used as a pre-processing step for an outdoor video analysis system and a high-end smartphone with a dual camera system. PMID:29232826

  20. The New Physical Optics Notebook: Tutorials in Fourier Optics.

    ERIC Educational Resources Information Center

    Reynolds, George O.; And Others

    This is a textbook of Fourier optics for the classroom or self-study. Major topics included in the 38 chapters are: Huygens' principle and Fourier transforms; image formation; optical coherence theory; coherent imaging; image analysis; coherent noise; interferometry; holography; communication theory techniques; analog optical computing; phase…

  1. Conceptual Coordination Bridges Information Processing and Neurophysiology

    NASA Technical Reports Server (NTRS)

    Clancey, William J.; Norrig, Peter (Technical Monitor)

    2000-01-01

    Information processing theories of memory and skills can be reformulated in terms of how categories are physically and temporally related, a process called conceptual coordination. Dreaming can then be understood as a story understanding process in which two mechanisms found in everyday comprehension are missing: conceiving sequences (chunking categories in time as a categorization) and coordinating across modalities (e.g., relating the sound of a word and the image of its meaning). On this basis, we can readily identify isomorphisms between dream phenomenology and neurophysiology, and explain the function of dreaming as facilitating future coordination of sequential, cross-modal categorization (i.e., REM sleep lowers activation thresholds, "unlearning").

  2. VICAR image processing system guide to system use

    NASA Technical Reports Server (NTRS)

    Seidman, J. B.

    1977-01-01

    The functional characteristics and operating requirements of the VICAR (Video Image Communication and Retrieval) system are described. An introduction to the system describes the functional characteristics and the basic theory of operation. A brief description of the data flow as well as tape and disk formats is also presented. A formal presentation of the control statement formats is given along with a guide to usage of the system. The guide provides a step-by-step reference to the creation of a VICAR control card deck. Simple examples are employed to illustrate the various options and the system response thereto.

  3. Imaging 2D optical diffuse reflectance in skeletal muscle

    NASA Astrophysics Data System (ADS)

    Ranasinghesagara, Janaka; Yao, Gang

    2007-04-01

    We discovered a unique pattern of optical reflectance from fresh prerigor skeletal muscles which cannot be described using existing theories. A numerical fitting function was developed to quantify the equi-intensity contours of the acquired reflectance images. Using this model, we studied the changes in the reflectance profile during stretching and the rigor process. We found that the prominent anisotropic features diminished after rigor completion. These results suggest that muscle sarcomere structures play important roles in modulating light propagation in whole muscle. When incorporating sarcomere diffraction into a Monte Carlo model, we showed that the resulting reflectance profiles quantitatively resemble the experimental observations.

  4. Fluorinated Paramagnetic Complexes: Sensitive and Responsive Probes for Magnetic Resonance Spectroscopy and Imaging

    NASA Astrophysics Data System (ADS)

    Peterson, Katie L.; Srivastava, Kriti; Pierre, Valérie C.

    2018-05-01

    Fluorine magnetic resonance spectroscopy (MRS) and magnetic resonance imaging (MRI) of chemical and physiological processes is becoming more widespread. The strength of this technique comes from the negligible background signal in in vivo 19F MRI and the large chemical shift window of 19F that enables it to image concomitantly more than one marker. These same advantages have also been successfully exploited in the design of responsive 19F probes. Part of the recent growth of this technique can be attributed to novel designs of 19F probes with improved imaging parameters due to the incorporation of paramagnetic metal ions. In this review, we provide a description of the theories and strategies that have been employed successfully to improve the sensitivity of 19F probes with paramagnetic metal ions. The Bloch-Wangsness-Redfield theory accurately predicts how molecular parameters such as distance, geometry, rotational correlation times, as well as the nature, oxidation state, and spin state of the metal ion affect the sensitivity of the fluorine-based probes. The principles governing the design of responsive 19F probes are subsequently described in a “how to” guide format. Examples of such probes and their advantages and disadvantages are highlighted through a synopsis of the literature.

  5. Retinex at 50: color theory and spatial algorithms, a review

    NASA Astrophysics Data System (ADS)

    McCann, John J.

    2017-05-01

    Retinex Imaging shares two distinct elements: first, a model of human color vision; second, a spatial-imaging algorithm for making better reproductions. Edwin Land's 1964 Retinex Color Theory began as a model of human color vision of real complex scenes. He designed many experiments, such as Color Mondrians, to understand why retinal cone quanta catch fails to predict color constancy. Land's Retinex model used three spatial channels (L, M, S) that calculated three independent sets of monochromatic lightnesses. Land and McCann's lightness model used spatial comparisons followed by spatial integration across the scene. The parameters of their model were derived from extensive observer data. This work was the beginning of the second Retinex element, namely, using models of spatial vision to guide image reproduction algorithms. Today, there are many different Retinex algorithms. This special section, "Retinex at 50," describes a wide variety of them, along with their different goals, and the ground truths used to measure their success. This paper reviews (and provides links to) the original Retinex experiments and image-processing implementations. Observer matches (measuring appearances) have extended our understanding of how human spatial vision works. This paper describes a collection of very challenging datasets, accumulated by Land and McCann, for testing algorithms that predict appearance.
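
    As a concrete (and deliberately simplified) illustration of spatial comparison, the sketch below implements the widely used single-scale "surround" Retinex variant, in which each pixel is compared with a Gaussian-weighted average of its neighborhood in the log domain. This is not Land and McCann's path-based ratio-product model, and the surround scale is an assumed parameter.

```python
# Single-scale Retinex-style processing (a common surround-based variant, not
# Land and McCann's path/ratio-product model): each pixel is compared with a
# Gaussian-weighted spatial average of its surround in the log domain.
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(channel, sigma=30.0, eps=1e-6):
    log_image = np.log(channel + eps)
    log_surround = np.log(gaussian_filter(channel, sigma) + eps)
    return log_image - log_surround          # spatial comparison in log space

if __name__ == "__main__":
    # Synthetic scene: a reflectance pattern under a slowly varying illumination gradient.
    reflectance = np.tile(np.linspace(0.2, 0.8, 64), (64, 1))
    illumination = np.linspace(0.3, 1.0, 64)[:, None]
    image = reflectance * illumination
    out = single_scale_retinex(image)
    print(out.shape, float(out.min()), float(out.max()))
```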

  6. Ghost microscope imaging system from the perspective of coherent-mode representation

    NASA Astrophysics Data System (ADS)

    Shen, Qian; Bai, Yanfeng; Shi, Xiaohui; Nan, Suqin; Qu, Lijie; Li, Hengxing; Fu, Xiquan

    2018-03-01

    The coherent-mode representation theory of partially coherent fields is used for the first time to analyze a two-arm ghost microscope imaging system. It is shown that the quality of the generated images depends crucially on the distribution of the decomposition coefficients of the imaged object when the light source is fixed. The theory is also suitable for demonstrating the effects on imaging quality of moving the object away from the original plane. Our results are verified theoretically and experimentally.

  7. Transition theory and its relevance to patients with chronic wounds.

    PubMed

    Neil, J A; Barrell, L M

    1998-01-01

    A wound, in the broadest sense, is a disruption of normal anatomic structure and function. Acute wounds progress through a timely and orderly sequence of repair that leads to the restoration of functional integrity. In chronic wounds, this timely and orderly sequence goes awry. As a result, people with chronic wounds often face not only physiological difficulties but emotional ones as well. The study of body image and its damage as a result of a chronic wound fits well with Selder's transition theory. This article describes interviews with seven patients with chronic wounds. The themes that emerged from those interviews were compared with Selder's theory to describe patients' experience with chronic wounds as a transition process that can be identified and better understood by healthcare providers.

  8. High efficient optical remote sensing images acquisition for nano-satellite: reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Li, Feng; Xin, Lei; Fu, Jie; Huang, Puming

    2017-10-01

    A large amount of data is one of the most obvious features of satellite-based remote sensing systems, and it is a burden for data processing and transmission. The theory of compressive sensing (CS) was proposed almost a decade ago, and extensive experiments show that CS performs favorably in data compression and recovery, so we apply CS theory to remote sensing image acquisition. In CS, the classical sensing matrix must satisfy the Restricted Isometry Property (RIP) strictly for all sparse signals, which limits the practical application of CS to image compression. For remote sensing images, however, some inherent characteristics are known, such as non-negativity and smoothness. Therefore, the goal of this paper is to present a novel measurement matrix that does not rely on the RIP. The new sensing matrix consists of two parts: a standard Nyquist sampling matrix for thumbnails and a conventional CS sampling matrix. Since most sun-synchronous satellites orbit the Earth in about 90 minutes and the revisit cycle is also short, many previously captured remote sensing images of the same place are available in advance. This motivates us to reconstruct remote sensing images through a deep learning approach using measurements from the new framework. We therefore propose a novel deep convolutional neural network (CNN) architecture which takes undersampled measurements as input and outputs an intermediate reconstructed image. Although training the network takes a long time, the training step needs to be done only once, which makes the approach attractive for a host of sparse recovery problems.
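
    A toy version of the described two-part sensing matrix can be assembled as below: block-averaging rows that produce a thumbnail (Nyquist-style samples) stacked on top of random Gaussian rows (conventional CS measurements), applied to a vectorized image. The image size, block size, and number of random rows are assumptions chosen only for illustration.

```python
# Toy two-part measurement matrix for a vectorized image: block-averaging rows
# produce a thumbnail (Nyquist-style samples), random Gaussian rows provide the
# conventional CS measurements. Sizes and the sampling ratio are illustrative.
import numpy as np

def thumbnail_matrix(h, w, block):
    """Each row averages one non-overlapping block x block patch of the image."""
    rows = []
    for by in range(h // block):
        for bx in range(w // block):
            mask = np.zeros((h, w))
            mask[by*block:(by+1)*block, bx*block:(bx+1)*block] = 1.0 / block**2
            rows.append(mask.ravel())
    return np.array(rows)

def combined_sensing_matrix(h, w, block=4, n_random=200, seed=0):
    rng = np.random.default_rng(seed)
    thumb = thumbnail_matrix(h, w, block)
    gaussian = rng.standard_normal((n_random, h * w)) / np.sqrt(h * w)
    return np.vstack([thumb, gaussian])

if __name__ == "__main__":
    h, w = 32, 32
    Phi = combined_sensing_matrix(h, w)
    x = np.random.default_rng(1).uniform(size=h * w)   # vectorized "image"
    y = Phi @ x                                        # measurements sent to the decoder
    print(Phi.shape, y.shape)                          # (264, 1024) (264,)
```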

  9. Topography of Cells Revealed by Variable-Angle Total Internal Reflection Fluorescence Microscopy.

    PubMed

    Cardoso Dos Santos, Marcelina; Déturche, Régis; Vézy, Cyrille; Jaffiol, Rodolphe

    2016-09-20

    We propose an improved version of variable-angle total internal reflection fluorescence microscopy (vaTIRFM) adapted to modern TIRF setups. This technique involves recording a stack of TIRF images while gradually increasing the incident angle of the light beam on the sample. A comprehensive theory was developed to extract the membrane/substrate separation distance from fluorescently labeled cell membranes. A straightforward image processing procedure was then established to compute the topography of cells with a nanometric axial resolution, typically 10-20 nm. To highlight the new opportunities offered by vaTIRFM for quantifying the adhesion process of motile cells, the adhesion of MDA-MB-231 cancer cells on glass substrates coated with fibronectin was examined. Copyright © 2016 Biophysical Society. Published by Elsevier Inc. All rights reserved.
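
    Angle-dependent TIRF analyses of this kind rest on the standard evanescent-field penetration depth, d(theta) = lambda / (4*pi*sqrt(n1^2 sin^2(theta) - n2^2)), which shrinks as the incidence angle increases beyond the critical angle. The sketch below evaluates this textbook formula; the wavelength and refractive indices are typical assumed values, not taken from the paper.

```python
# Evanescent-field penetration depth vs incidence angle for TIRF (standard formula,
# used here only to illustrate why varying the angle probes different depths; the
# wavelength and refractive indices are typical assumed values, not from the paper).
import numpy as np

def penetration_depth(theta_deg, wavelength_nm=488.0, n1=1.515, n2=1.337):
    theta = np.radians(theta_deg)
    arg = n1**2 * np.sin(theta)**2 - n2**2
    if np.any(arg <= 0):
        raise ValueError("angle below the critical angle: no total internal reflection")
    return wavelength_nm / (4.0 * np.pi * np.sqrt(arg))

if __name__ == "__main__":
    critical = np.degrees(np.arcsin(1.337 / 1.515))      # roughly 62 degrees
    print("critical angle:", round(float(critical), 1), "deg")
    for angle in (63, 65, 68, 72):
        print(angle, "deg ->", round(float(penetration_depth(angle)), 1), "nm")
```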

  10. Arnheim's Gestalt theory of visual balance: Examining the compositional structure of art photographs and abstract images

    PubMed Central

    McManus, I C; Stöver, Katharina; Kim, Do

    2011-01-01

    In Art and Visual Perception, Rudolf Arnheim, following on from Denman Ross's A Theory of Pure Design, proposed a Gestalt theory of visual composition. The current paper assesses a physicalist interpretation of Arnheim's theory, calculating an image's centre of mass (CoM). Three types of data are used: a large, representative collection of art photographs of recognised quality; croppings by experts and non-experts of photographs; and Ross and Arnheim's procedure of placing a frame around objects such as Arnheim's two black disks. Compared with control images, the CoM of art photographs was closer to an axis (horizontal, vertical, or diagonal), as was the case for photographic croppings. However, stronger, within-image, paired comparison studies, comparing art photographs with the CoM moved on or off an axis (the ‘gamma-ramp study’), or comparing adjacent croppings on or off an axis (the ‘spider-web study’), showed no support for the Arnheim–Ross theory. Finally, studies moving a frame around two disks, of different size, greyness, or background, did not support Arnheim's Gestalt theory. Although the detailed results did not support the Arnheim–Ross theory, several significant results were found which clearly require explanation by any adequate theory of the aesthetics of visual composition. PMID:23145250

  11. Arnheim's Gestalt theory of visual balance: Examining the compositional structure of art photographs and abstract images.

    PubMed

    McManus, I C; Stöver, Katharina; Kim, Do

    2011-01-01

    In Art and Visual Perception, Rudolf Arnheim, following on from Denman Ross's A Theory of Pure Design, proposed a Gestalt theory of visual composition. The current paper assesses a physicalist interpretation of Arnheim's theory, calculating an image's centre of mass (CoM). Three types of data are used: a large, representative collection of art photographs of recognised quality; croppings by experts and non-experts of photographs; and Ross and Arnheim's procedure of placing a frame around objects such as Arnheim's two black disks. Compared with control images, the CoM of art photographs was closer to an axis (horizontal, vertical, or diagonal), as was the case for photographic croppings. However, stronger, within-image, paired comparison studies, comparing art photographs with the CoM moved on or off an axis (the 'gamma-ramp study'), or comparing adjacent croppings on or off an axis (the 'spider-web study'), showed no support for the Arnheim-Ross theory. Finally, studies moving a frame around two disks, of different size, greyness, or background, did not support Arnheim's Gestalt theory. Although the detailed results did not support the Arnheim-Ross theory, several significant results were found which clearly require explanation by any adequate theory of the aesthetics of visual composition.
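
    Under the physicalist interpretation tested in these two records, the key quantity is an intensity-weighted centre of mass and its offset from the frame's axes. A minimal sketch of that computation follows; the synthetic test image and the restriction to the horizontal and vertical centre lines are assumptions.

```python
# Intensity-weighted centre of mass (CoM) of a greyscale image and its offsets from
# the frame's horizontal and vertical centre lines (a minimal sketch of the
# physicalist interpretation tested in the paper; the test image is synthetic).
import numpy as np

def centre_of_mass(image):
    weights = image.astype(float)
    total = weights.sum()
    rows, cols = np.indices(image.shape)
    return (rows * weights).sum() / total, (cols * weights).sum() / total

def axis_offsets(image):
    """Normalized distances of the CoM from the horizontal and vertical centre lines."""
    h, w = image.shape
    r, c = centre_of_mass(image)
    return abs(r - (h - 1) / 2) / h, abs(c - (w - 1) / 2) / w

if __name__ == "__main__":
    img = np.zeros((100, 100))
    img[20:40, 60:90] = 1.0            # a bright patch off-centre
    print(centre_of_mass(img))         # approximately (29.5, 74.5)
    print(axis_offsets(img))
```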

  12. Anterior cingulate hyperactivations during negative emotion processing among men with schizophrenia and a history of violent behavior

    PubMed Central

    Tikàsz, Andràs; Potvin, Stéphane; Lungu, Ovidiu; Joyal, Christian C; Hodgins, Sheilagh; Mendrek, Adrianna; Dumais, Alexandre

    2016-01-01

    Background Evidence suggests a 2.1–4.6 times increase in the risk of violent behavior in schizophrenia compared to the general population. Current theories propose that the processing of negative emotions is defective in violent individuals and that dysfunctions within the neural circuits involved in emotion processing are implicated in violence. Although schizophrenia patients show enhanced sensitivity to negative stimuli, there are only few functional neuroimaging studies that have examined emotion processing among men with schizophrenia and a history of violence. Objective The present study aimed to identify the brain regions with greater neurofunctional alterations, as detected by functional magnetic resonance imaging during an emotion processing task, of men with schizophrenia who had engaged in violent behavior compared with those who had not. Methods Sixty men were studied; 20 with schizophrenia and a history of violence, 19 with schizophrenia and no violence, and 21 healthy men were scanned while viewing positive, negative, and neutral images. Results Negative images elicited hyperactivations in the anterior cingulate cortex (ACC), left and right lingual gyrus, and the left precentral gyrus in violent men with schizophrenia, compared to nonviolent men with schizophrenia and healthy men. Neutral images elicited hyperactivations in the right and left middle occipital gyrus, left lingual gyrus, and the left fusiform gyrus in violent men with schizophrenia, compared to the other two groups. Discussion Violent men with schizophrenia displayed specific increases in ACC in response to negative images. Given the role of the ACC in information integration, these results indicate a specific dysfunction in the processing of negative emotions that may trigger violent behavior in men with schizophrenia. PMID:27366072

  13. Application of signal detection theory to optics. [image evaluation and restoration

    NASA Technical Reports Server (NTRS)

    Helstrom, C. W.

    1973-01-01

    Basic quantum detection and estimation theory, applications to optics, photon counting, and filtering theory are studied. Recent work on the restoration of degraded optical images received at photoelectrically emissive surfaces is also reported; the data used by the method are the numbers of electrons ejected from various parts of the surface.

  14. High compression image and image sequence coding

    NASA Technical Reports Server (NTRS)

    Kunt, Murat

    1989-01-01

    The digital representation of an image requires a very large number of bits. This number is even larger for an image sequence. The goal of image coding is to reduce this number, as much as possible, and reconstruct a faithful duplicate of the original picture or image sequence. Early efforts in image coding, solely guided by information theory, led to a plethora of methods. The compression ratio reached a plateau around 10:1 a couple of years ago. Recent progress in the study of the brain mechanism of vision and scene analysis has opened new vistas in picture coding. Directional sensitivity of the neurones in the visual pathway combined with the separate processing of contours and textures has led to a new class of coding methods capable of achieving compression ratios as high as 100:1 for images and around 300:1 for image sequences. Recent progress on some of the main avenues of object-based methods is presented. These second generation techniques make use of contour-texture modeling, new results in neurophysiology and psychophysics and scene analysis.

  15. Computational polarization difference underwater imaging based on image fusion

    NASA Astrophysics Data System (ADS)

    Han, Hongwei; Zhang, Xiaohui; Guan, Feng

    2016-01-01

    Polarization difference imaging can improve the quality of images acquired underwater, whether the background and veiling light are unpolarized or partially polarized. The computational polarization difference imaging technique, which replaces the mechanical rotation of the polarization analyzer and shortens the time spent selecting the optimum orthogonal ∥ and ⊥ axes, is an improvement on conventional PDI. Originally, however, it obtains the output image by manually setting the weight coefficient to a single constant for all pixels. In this paper, an algorithm is proposed to combine the Q and U parameters of the Stokes vector through pixel-level image fusion based on the non-subsampled contourlet transform. An experimental system, built from a green LED array with a polarizer to illuminate a flat target immersed in water and a CCD with a polarization analyzer to obtain target images at different analyzer angles, is used to verify the effect of the proposed algorithm. The results show that the output processed by our algorithm reveals more details of the flat target and has higher contrast than the original computational polarization difference imaging.
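
    For reference, the Stokes parameters named above are conventionally obtained from four analyzer orientations as Q = I0 - I90 and U = I45 - I135. The sketch below computes Q and U and a naive fixed-weight difference image; it does not reproduce the paper's NSCT-based pixel-level fusion, and the synthetic intensities are assumptions.

```python
# Stokes Q and U from four analyzer orientations and a simple polarization-difference
# image (illustrative; the paper's NSCT-based pixel-level fusion of Q and U is not
# reproduced here, and the synthetic intensities are assumptions).
import numpy as np

def stokes_qu(i0, i45, i90, i135):
    q = i0 - i90
    u = i45 - i135
    return q, u

def polarization_difference(q, u, w_q=0.5, w_u=0.5):
    """Naive fixed-weight combination of the two difference images."""
    return w_q * q + w_u * u

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    shape = (64, 64)
    unpol = rng.uniform(0.4, 0.6, shape)                   # unpolarized veiling light
    target = np.zeros(shape); target[24:40, 24:40] = 0.2   # partially polarized target
    i0, i45 = unpol + target, unpol + 0.5 * target
    i90, i135 = unpol.copy(), unpol + 0.5 * target
    q, u = stokes_qu(i0, i45, i90, i135)
    pd = polarization_difference(q, u)
    print(float(pd[30, 30]), float(pd[5, 5]))   # target pixel stands out from background
```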

  16. Ab initio Simulation of Helium-Ion Microscopy Images: The Case of Suspended Graphene

    NASA Astrophysics Data System (ADS)

    Zhang, Hong; Miyamoto, Yoshiyuki; Rubio, Angel

    2012-12-01

    Helium ion microscopy (HIM), which was released in 2006 by Ward et al., provides nondestructive imaging of nanoscale objects with higher contrast than scanning electron microscopy. HIM measurement of suspended graphene under typical conditions is simulated by first-principles time-dependent density functional theory and the 30 keV He+ collision is found to induce the emission of electrons dependent on the impact point. This finding suggests the possibility of obtaining a highly accurate image of the honeycomb pattern of suspended graphene by HIM. Comparison with a simulation of He0 under the same kinetic energy shows that electron emission is governed by the impact ionization instead of Auger process initiated by neutralization of He+.

  17. Topological anomaly detection performance with multispectral polarimetric imagery

    NASA Astrophysics Data System (ADS)

    Gartley, M. G.; Basener, W.

    2009-05-01

    Polarimetric imaging has demonstrated utility for increasing the contrast of manmade targets above natural background clutter. Manual detection of manmade targets in multispectral polarimetric imagery can be a challenging and subjective process for large datasets. Analyst exploitation may be improved by utilizing conventional anomaly detection algorithms such as RX. In this paper we examine the performance of a relatively new approach to anomaly detection, which leverages topology theory, applied to spectral polarimetric imagery. Detection results for manmade targets embedded in a complex natural background are presented for both the RX and Topological Anomaly Detection (TAD) approaches. We also present detailed results examining detection sensitivities relative to: (1) the number of spectral bands, (2) utilization of Stokes images versus intensity images, and (3) airborne versus spaceborne measurements.
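
    The conventional baseline mentioned above, the global RX detector, scores each pixel by its Mahalanobis distance to the scene's mean spectrum. A minimal sketch follows (this is the baseline, not the topological TAD method); the synthetic data cube is an assumption.

```python
# Global RX anomaly detector: per-pixel Mahalanobis distance to the scene's mean
# spectrum (the conventional baseline named above, not the topological TAD method).
import numpy as np

def rx_detector(cube):
    """cube: (H, W, B) multispectral/polarimetric image; returns (H, W) RX scores."""
    h, w, b = cube.shape
    x = cube.reshape(-1, b).astype(float)
    mean = x.mean(axis=0)
    cov = np.cov(x, rowvar=False) + 1e-6 * np.eye(b)   # regularize for stability
    cov_inv = np.linalg.inv(cov)
    centered = x - mean
    scores = np.einsum("ij,jk,ik->i", centered, cov_inv, centered)
    return scores.reshape(h, w)

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    cube = rng.normal(0.5, 0.05, (64, 64, 6))          # background clutter
    cube[40, 20] += np.linspace(0.3, 0.6, 6)           # one anomalous pixel
    scores = rx_detector(cube)
    print(np.unravel_index(scores.argmax(), scores.shape))   # expect (40, 20)
```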

  18. A survey of visual preprocessing and shape representation techniques

    NASA Technical Reports Server (NTRS)

    Olshausen, Bruno A.

    1988-01-01

    Many recent theories and methods proposed for visual preprocessing and shape representation are summarized. The survey brings together research from the fields of biology, psychology, computer science, electrical engineering, and most recently, neural networks. It was motivated by the need to preprocess images for a sparse distributed memory (SDM), but the techniques presented may also prove useful for applying other associative memories to visual pattern recognition. The material of this survey is divided into three sections: an overview of biological visual processing; methods of preprocessing (extracting parts of shape, texture, motion, and depth); and shape representation and recognition (form invariance, primitives and structural descriptions, and theories of attention).

  19. Motor cognition-motor semantics: action perception theory of cognition and communication.

    PubMed

    Pulvermüller, Friedemann; Moseley, Rachel L; Egorova, Natalia; Shebani, Zubaida; Boulenger, Véronique

    2014-03-01

    A new perspective on cognition views cortical cell assemblies linking together knowledge about actions and perceptions not only as the vehicles of integrated action and perception processing but, furthermore, as a brain basis for a wide range of higher cortical functions, including attention, meaning and concepts, sequences, goals and intentions, and even communicative social interaction. This article explains mechanisms relevant to mechanistic action perception theory, points to concrete neuronal circuits in brains along with artificial neuronal network simulations, and summarizes recent brain imaging and other experimental data documenting the role of action perception circuits in cognition, language and communication. © 2013 Published by Elsevier Ltd.

  20. Algorithms for Image Analysis and Combination of Pattern Classifiers with Application to Medical Diagnosis

    NASA Astrophysics Data System (ADS)

    Georgiou, Harris

    2009-10-01

    Medical Informatics and the application of modern signal processing to assist the diagnostic process in medical imaging is one of the more recent and active research areas today. This thesis addresses a variety of issues related to the general problem of medical image analysis, specifically in mammography, and presents a series of algorithms and design approaches for all the intermediate levels of a modern system for computer-aided diagnosis (CAD). The diagnostic problem is analyzed with a systematic approach, first defining the imaging characteristics and features that are relevant to probable pathology in mammograms. Next, these features are quantified and fused into new, integrated radiological systems that exhibit embedded digital signal processing, in order to improve the final result and minimize the radiological dose for the patient. At a higher level, special algorithms are designed for detecting and encoding these clinically interesting imaging features, in order to be used as input to advanced pattern classifiers and machine learning models. Finally, these approaches are extended to multi-classifier models under the scope of game theory and optimum collective decision, in order to produce efficient solutions for combining classifiers with minimum computational cost for advanced diagnostic systems. The material covered in this thesis is related to a total of 18 published papers, 6 in scientific journals and 12 in international conferences.

  1. Low dose reconstruction algorithm for differential phase contrast imaging.

    PubMed

    Wang, Zhentian; Huang, Zhifeng; Zhang, Li; Chen, Zhiqiang; Kang, Kejun; Yin, Hongxia; Wang, Zhenchang; Marco, Stampanoni

    2011-01-01

    Differential phase contrast imaging computed tomography (DPCI-CT) is a novel x-ray inspection method that reconstructs the distribution of the refraction index, rather than the attenuation coefficient, in weakly absorbing samples. In this paper, we propose an iterative reconstruction algorithm for DPCI-CT that benefits from the new compressed sensing theory. We first realize a differential algebraic reconstruction technique (DART) by discretizing the projection process of differential phase contrast imaging into a linear partial derivative matrix. In this way, the compressed sensing reconstruction problem of DPCI can be transformed into a problem already solved for transmission-imaging CT. Our algorithm has the potential to reconstruct the refraction index distribution of the sample from highly undersampled projection data, and thus can significantly reduce the dose and inspection time. The proposed algorithm has been validated by numerical simulations and actual experiments.
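    A hedged way to write the sparsity-regularized reconstruction problem the abstract alludes to, with purely illustrative symbols (A_d for the discretized differential projection operator, p for the measured differential phase data, Psi for a sparsifying transform, lambda for a regularization weight), is

    \[
      \hat{f} \;=\; \arg\min_{f}\ \tfrac{1}{2}\,\lVert A_{d} f - p \rVert_2^2 \;+\; \lambda\,\lVert \Psi f \rVert_1 ,
    \]

    which is the generic compressed-sensing form; a DART-style iteration is one way of approximately solving such a problem.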

  2. Taking the lead from our colleagues in medical education: the use of images of the in-vivo setting in teaching concepts of pharmaceutical science.

    PubMed

    Curley, Louise E; Kennedy, Julia; Hinton, Jordan; Mirjalili, Ali; Svirskis, Darren

    2017-01-01

    Despite pharmaceutical sciences being a core component of pharmacy curricula, few published studies have focussed on innovative methodologies to teach the content. This commentary identifies imaging techniques which can visualise oral dosage forms in-vivo and observe formulation disintegration in order to achieve a better understanding of in-vivo performance. Images formed through these techniques can provide students with a deeper appreciation of the fate of oral formulations in the body compared to standard disintegration and dissolution testing, which is conducted in-vitro. Such images which represent the in-vivo setting can be used in teaching to give context to both theory and experimental work, thereby increasing student understanding and enabling teaching of pharmaceutical sciences supporting students to correlate in-vitro and in-vivo processes.

  3. Research on registration algorithm for check seal verification

    NASA Astrophysics Data System (ADS)

    Wang, Shuang; Liu, Tiegen

    2008-03-01

    Nowadays seals play an important role in China. With the development of the social economy, the traditional method of manual check seal identification can no longer meet the needs of banking transactions. This paper focuses on pre-processing and registration algorithms for check seal verification using the theory of image processing and pattern recognition. First, the complex characteristics of check seals are analyzed. To eliminate differences in producing conditions and the disturbance caused by background and writing in the check image, several methods are used in the pre-processing stage, such as color-component transformation, linear transformation to a gray-scale image, median filtering, Otsu thresholding, morphological closing, and a labeling algorithm based on mathematical morphology. After these steps, a clean binary seal image is obtained. On the basis of the traditional registration algorithm, a two-level registration method combining rough and precise registration is proposed; the deflection angle of the precise registration step is accurate to 0.1°. The paper introduces the concepts of inside difference and outside difference and uses their percentages to judge whether a seal is genuine or fake. Experimental results on a large number of check seals are satisfactory, showing that the presented methods and algorithms are robust to noisy sealing conditions and tolerant of within-class differences.
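    The pre-processing chain described above maps naturally onto standard OpenCV calls. The sketch below is a minimal, hypothetical reading of that chain (a red-minus-green difference is assumed as the color-component transform, which the paper may not use; the function name is invented).

    ```python
    import cv2
    import numpy as np

    def preprocess_seal(path):
        """Rough pre-processing chain in the spirit of the abstract:
        color-component transform, median filtering, Otsu binarization,
        morphological closing and connected-component labeling."""
        bgr = cv2.imread(path)
        # seals are typically red; a red-minus-green difference is one simple
        # color-component transform (the paper's exact transform may differ)
        comp = cv2.subtract(bgr[:, :, 2], bgr[:, :, 1])
        comp = cv2.medianBlur(comp, 3)
        _, bw = cv2.threshold(comp, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
        bw = cv2.morphologyEx(bw, cv2.MORPH_CLOSE, kernel)
        n_labels, labels = cv2.connectedComponents(bw)
        return bw, n_labels, labels
    ```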

  4. Visual attention to food cues in obesity: an eye-tracking study.

    PubMed

    Doolan, Katy J; Breslin, Gavin; Hanna, Donncha; Murphy, Kate; Gallagher, Alison M

    2014-12-01

    Based on the theory of incentive sensitization, the aim of this study was to investigate differences in attentional processing of food-related visual cues between normal-weight and overweight/obese males and females. Twenty-six normal-weight (14M, 12F) and 26 overweight/obese (14M, 12F) adults completed a visual probe task and an eye-tracking paradigm. Reaction times and eye movements to food and control images were collected during both a fasted and a fed condition in a counterbalanced design. Participants had greater visual attention towards high-energy-density food images compared to low-energy-density food images regardless of hunger condition. This was most pronounced in overweight/obese males, who had significantly greater maintained attention towards high-energy-density food images when compared with their normal-weight counterparts; however, no between-weight-group differences were observed for female participants. High-energy-density food images appear to capture visual attention more readily than low-energy-density food images. Results also suggest the possibility of an altered visual food cue-associated reward system in overweight/obese males. Attentional processing of food cues may play a role in eating behaviors and thus should be taken into consideration as part of an integrated approach to curbing obesity. © 2014 The Obesity Society.

  5. Statistical ultrasonics: the influence of Robert F. Wagner

    NASA Astrophysics Data System (ADS)

    Insana, Michael F.

    2009-02-01

    An important ongoing question for higher education is how to successfully mentor the next generation of scientists and engineers. It has been my privilege to have been mentored by one of the best, Dr Robert F. Wagner and his colleagues at the CDRH/FDA during the mid 1980s. Bob introduced many of us in medical ultrasonics to statistical imaging techniques. These ideas continue to broadly influence studies on adaptive aperture management (beamforming, speckle suppression, compounding), tissue characterization (texture features, Rayleigh/Rician statistics, scatterer size and number density estimators), and fundamental questions about how limitations of the human eye-brain system for extracting information from textured images can motivate image processing. He adapted the classical techniques of signal detection theory to coherent imaging systems that, for the first time in ultrasonics, related common engineering metrics for image quality to task-based clinical performance. This talk summarizes my wonderfully-exciting three years with Bob as I watched him explore topics in statistical image analysis that formed a rational basis for many of the signal processing techniques used in commercial systems today. It is a story of an exciting time in medical ultrasonics, and of how a sparkling personality guided and motivated the development of junior scientists who flocked around him in admiration and amazement.

  6. MO-PIS-Exhibit Hall-01: Tools for TG-142 Linac Imaging QA I

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clements, M; Wiesmeyer, M

    2014-06-15

    Partners in Solutions is an exciting new program in which AAPM partners with our vendors to present practical “hands-on” information about the equipment and software systems that we use in our clinics. The therapy topic this year is solutions for TG-142 recommendations for linear accelerator imaging QA. Note that the sessions are being held in a special purpose room built on the Exhibit Hall Floor, to encourage further interaction with the vendors. Automated Imaging QA for TG-142 with RIT Presentation Time: 2:45 – 3:15 PM This presentation will discuss software tools for automated imaging QA and phantom analysis for TG-142. All modalities used in radiation oncology will be discussed, including CBCT, planar kV imaging, planar MV imaging, and imaging and treatment coordinate coincidence. Vendor supplied phantoms as well as a variety of third-party phantoms will be shown, along with appropriate analyses, proper phantom setup procedures and scanning settings, and a discussion of image quality metrics. Tools for process automation will be discussed which include: RIT Cognition (machine learning for phantom image identification), RIT Cerberus (automated file system monitoring and searching), and RunQueueC (batch processing of multiple images). In addition to phantom analysis, tools for statistical tracking, trending, and reporting will be discussed. This discussion will include an introduction to statistical process control, a valuable tool in analyzing data and determining appropriate tolerances. An Introduction to TG-142 Imaging QA Using Standard Imaging Products Presentation Time: 3:15 – 3:45 PM Medical Physicists want to understand the logic behind TG-142 Imaging QA. What is often missing is a firm understanding of the connections between the EPID and OBI phantom imaging, the software “algorithms” that calculate the QA metrics, the establishment of baselines, and the analysis and interpretation of the results. The goal of our brief presentation will be to establish and solidify these connections. Our talk will be motivated by the Standard Imaging, Inc. phantom and software solutions. We will present and explain each of the image quality metrics in TG-142 in terms of the theory, mathematics, and algorithms used to implement them in the Standard Imaging PIPSpro software. In the process, we will identify the regions of phantom images that are analyzed by each algorithm. We then will discuss the process of the creation of baselines and typical ranges of acceptable values for each imaging quality metric.

  7. Feature extraction algorithm for space targets based on fractal theory

    NASA Astrophysics Data System (ADS)

    Tian, Balin; Yuan, Jianping; Yue, Xiaokui; Ning, Xin

    2007-11-01

    In order to offer the potential of extending the life of satellites and reducing launch and operating costs, satellite servicing, including on-orbit repairs, upgrades, and refueling, will become much more frequent. Future space operations can be executed more economically and reliably using machine vision systems, which can meet the real-time and tracking-reliability requirements for image tracking in a space surveillance system. Machine vision is applied here to relative pose estimation for spacecraft, for which feature extraction is the basis. This paper presents a fractal-geometry-based edge extraction algorithm that can be used for determining and tracking the relative pose of an observed satellite during proximity operations in a machine vision system. The method computes a gray-level image of local fractal dimension using the Differential Box-Counting (DBC) approach of fractal theory to suppress noise, and then detects continuous edges using mathematical morphology. The validity of the proposed method is examined by processing and analyzing images of space targets. The edge extraction method not only extracts the outline of the target but also keeps the inner details; meanwhile, edge extraction is performed only in the moving area, which greatly reduces computation. Simulation results compare edge detection using the presented method with other detection methods and indicate that the presented algorithm is a valid approach to the relative pose problem for spacecraft.
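    For readers unfamiliar with DBC, the sketch below estimates a single fractal dimension for a grayscale image by differential box counting (per-block gray-level box spans counted over several box sizes, then a log-log slope fit). It is a generic, global illustration rather than the paper's pixel-wise fractal-dimension image, and the box sizes are arbitrary choices.

    ```python
    import numpy as np

    def dbc_fractal_dimension(img, sizes=(2, 4, 8, 16)):
        """Differential box-counting estimate of the fractal dimension of a
        grayscale image (uint8); a rough sketch of the DBC idea."""
        img = np.asarray(img, dtype=np.float64)
        m = min(img.shape)
        g = 256.0                      # number of gray levels assumed
        log_inv_r, log_n = [], []
        for s in sizes:
            h = s * g / m              # gray-level box height at this scale
            n_r = 0
            for y in range(0, img.shape[0] - s + 1, s):
                for x in range(0, img.shape[1] - s + 1, s):
                    block = img[y:y + s, x:x + s]
                    n_r += int(np.floor(block.max() / h) - np.floor(block.min() / h)) + 1
            log_inv_r.append(np.log(m / s))
            log_n.append(np.log(n_r))
        slope, _ = np.polyfit(log_inv_r, log_n, 1)   # slope approximates the dimension
        return slope
    ```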

  8. Physically-Based Models for the Reflection, Transmission and Subsurface Scattering of Light by Smooth and Rough Surfaces, with Applications to Realistic Image Synthesis

    NASA Astrophysics Data System (ADS)

    He, Xiao Dong

    This thesis studies light scattering processes off rough surfaces. Analytic models for reflection, transmission and subsurface scattering of light are developed. The results are applicable to realistic image generation in computer graphics. The investigation focuses on the basic issue of how light is scattered locally by general surfaces which are neither diffuse nor specular; physical optics is employed to account for diffraction and interference, which play a crucial role in the scattering of light for most surfaces. The thesis presents: (1) a new reflectance model; (2) a new transmittance model; (3) a new subsurface scattering model. All of these models are physically-based, depend on only physical parameters, apply to a wide range of materials and surface finishes and, more importantly, provide a smooth transition from diffuse-like to specular reflection as the wavelength and incidence angle are increased or the surface roughness is decreased. The reflectance and transmittance models are based on the Kirchhoff Theory and the subsurface scattering model is based on Energy Transport Theory. They are valid only for surfaces with shallow slopes. The thesis shows that predicted reflectance distributions given by the reflectance model compare favorably with experiment. The thesis also investigates and implements fast ways of computing the reflectance and transmittance models. Furthermore, the thesis demonstrates that a high level of realistic image generation can be achieved due to the physically-correct treatment of the scattering processes by the reflectance model.

  9. Acceptance Presentation and Research Study Summary: Research in Educational Communications and Technology. 1982 Association for Educational Communications and Technology Young Researcher Award, Research and Theory Division.

    ERIC Educational Resources Information Center

    Canelos, James

    An internal cognitive variable--mental imagery representation--was studied using a set of three information-processing strategies under external stimulus visual display conditions for various learning levels. The copy strategy provided verbal and visual dual-coding and required formation of a vivid mental image. The relational strategy combined…

  10. Even the Best Laid Plans Sometimes Go Askew: Career Self-Management Processes, Career Shocks, and the Decision to Pursue Graduate Education

    ERIC Educational Resources Information Center

    Seibert, Scott E.; Kraimer, Maria L.; Holtom, Brooks C.; Pierotti, Abigail J.

    2013-01-01

    Drawing on career self-management frameworks as well as image theory and the unfolding model of turnover, we developed a model predicting early career employees' decisions to pursue graduate education. Using a sample of 337 alumni from 2 universities, we found that early career individuals with intrinsic career goals, who engaged in career…

  11. Complete information acquisition in scanning probe microscopy

    DOE PAGES

    Belianinov, Alex; Kalinin, Sergei V.; Jesse, Stephen

    2015-03-13

    In the last three decades, scanning probe microscopy (SPM) has emerged as a primary tool for exploring and controlling the nanoworld. A critical part of the SPM measurements is the information transfer from the tip-surface junction to a macroscopic measurement system. This process reduces the many degrees of freedom of a vibrating cantilever to relatively few parameters recorded as images. Similarly, the details of dynamic cantilever response at sub-microsecond time scales of transients, higher-order eigenmodes and harmonics are averaged out by transitioning to the millisecond time scale of pixel acquisition. Hence, the amount of information available to the external observer is severely limited, and its selection is biased by the chosen data processing method. Here, we report a fundamentally new approach for SPM imaging based on information theory-type analysis of the data stream from the detector. This approach allows full exploration of complex tip-surface interactions, spatial mapping of multidimensional variability of material properties and their mutual interactions, and SPM imaging at the information channel capacity limit.

  12. Directional emittance surface measurement system and process

    NASA Technical Reports Server (NTRS)

    Puram, Chith K. (Inventor); Daryabeigi, Kamran (Inventor); Wright, Robert (Inventor); Alderfer, David W. (Inventor)

    1994-01-01

    Apparatus and process for measuring the variation of directional emittance of surfaces at various temperatures using a radiometric infrared imaging system. A surface test sample is coated onto a copper target plate provided with selective heating over the desired incremental temperature range to be tested and positioned on a precision rotator to present selected inclination angles of the sample relative to the fixed, optically aligned infrared imager. A thermal insulator holder maintains the target plate on the precision rotator. A screen display of the temperature obtained by the infrared imager and the inclination readings are provided, with computer calculations of directional emittance performed automatically according to equations that convert selected incremental target temperatures and inclination angles to relative directional emittance values. The directional emittance measurements obtained for flat black lacquer and an epoxy resin are in agreement with the predictions of electromagnetic theory and with directional emittance data inferred from directional reflectance measurements made on a spectrophotometer.

  13. Physics, Techniques and Review of Neuroradiological Applications of Diffusion Kurtosis Imaging (DKI).

    PubMed

    Marrale, M; Collura, G; Brai, M; Toschi, N; Midiri, F; La Tona, G; Lo Casto, A; Gagliardo, C

    2016-12-01

    In recent years many papers on diagnostic applications of diffusion tensor imaging (DTI) have been published, because DTI makes it possible to evaluate, in vivo and non-invasively, the diffusion of water molecules in biological tissues. However, the simplified description of the diffusion process assumed in DTI does not allow complete mapping of the complex underlying cellular components and structures that hinder and restrict the diffusion of water molecules. These limitations can be partially overcome by means of diffusion kurtosis imaging (DKI). The aim of this paper is to describe the theory of DKI, a topic of growing interest in radiology. DKI is a higher-order diffusion model that is a straightforward extension of the DTI model. Here, we analyze the physics underlying this method, report our MRI acquisition protocol together with the preprocessing pipeline used and the DKI parametric maps obtained on a 1.5 T scanner, and review the most relevant clinical applications of this technique in various neurological diseases.
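    For orientation, the single-direction signal model usually fitted in DKI extends the monoexponential DTI decay with a quadratic kurtosis term; this is the standard form from the DKI literature, not a protocol-specific detail of this paper:

    \[
      \ln S(b) \;=\; \ln S_0 \;-\; b\,D_{\mathrm{app}} \;+\; \tfrac{1}{6}\, b^{2}\, D_{\mathrm{app}}^{2}\, K_{\mathrm{app}},
    \]

    where D_app and K_app are the apparent diffusion coefficient and apparent kurtosis along the chosen gradient direction.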

  14. A robust nonlinear filter for image restoration.

    PubMed

    Koivunen, V

    1995-01-01

    A class of nonlinear regression filters based on robust estimation theory is introduced. The goal of the filtering is to recover a high-quality image from degraded observations. Models for desired image structures and contaminating processes are employed, but deviations from strict assumptions are allowed since the assumptions on signal and noise are typically only approximately true. The robustness of filters is usually addressed only in a distributional sense, i.e., the actual error distribution deviates from the nominal one. In this paper, the robustness is considered in a broad sense since the outliers may also be due to inappropriate signal model, or there may be more than one statistical population present in the processing window, causing biased estimates. Two filtering algorithms minimizing a least trimmed squares criterion are provided. The design of the filters is simple since no scale parameters or context-dependent threshold values are required. Experimental results using both real and simulated data are presented. The filters effectively attenuate both impulsive and nonimpulsive noise while recovering the signal structure and preserving interesting details.
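    As a concrete, much-simplified instance of a least-trimmed-squares filter, the sketch below replaces each pixel by the LTS location estimate of its window, the constant (zeroth-order) special case of the regression filters described. Window size and trimming fraction are arbitrary choices, and the function names are hypothetical.

    ```python
    import numpy as np

    def lts_location(window, h):
        """Least-trimmed-squares location estimate of a 1-D sample: the mean
        of the h contiguous order statistics with the smallest within-subset
        sum of squared deviations."""
        x = np.sort(np.ravel(window))
        best_mean = x[:h].mean()
        best_ss = np.sum((x[:h] - best_mean) ** 2)
        for i in range(1, len(x) - h + 1):
            sub = x[i:i + h]
            ss = np.sum((sub - sub.mean()) ** 2)
            if ss < best_ss:
                best_mean, best_ss = sub.mean(), ss
        return best_mean

    def lts_filter(img, win=3, trim=0.5):
        """Slide a win x win window over the image and replace the centre pixel
        by the LTS location estimate (a crude stand-in for the regression
        filters described in the abstract)."""
        pad = win // 2
        h = max(2, int(np.ceil((1 - trim) * win * win)))   # samples kept per window
        padded = np.pad(img.astype(float), pad, mode='edge')
        out = np.empty(img.shape, dtype=float)
        for y in range(img.shape[0]):
            for x in range(img.shape[1]):
                out[y, x] = lts_location(padded[y:y + win, x:x + win], h)
        return out
    ```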

  15. A systematic review of visual image theory, assessment, and use in skin cancer and tanning research.

    PubMed

    McWhirter, Jennifer E; Hoffman-Goetz, Laurie

    2014-01-01

    Visual images increase attention, comprehension, and recall of health information and influence health behaviors. Health communication campaigns on skin cancer and tanning often use visual images, but little is known about how such images are selected or evaluated. A systematic review of peer-reviewed, published literature on skin cancer and tanning was conducted to determine (a) what visual communication theories were used, (b) how visual images were evaluated, and (c) how visual images were used in the research studies. Seven databases were searched (PubMed/MEDLINE, EMBASE, PsycINFO, Sociological Abstracts, Social Sciences Full Text, ERIC, and ABI/INFORM) resulting in 5,330 citations. Of those, 47 met the inclusion criteria. Only one study specifically identified a visual communication theory guiding the research. No standard instruments for assessing visual images were reported. Most studies lacked, to varying degrees, comprehensive image description, image pretesting, full reporting of image source details, adequate explanation of image selection or development, and example images. The results highlight the need for greater theoretical and methodological attention to visual images in health communication research in the future. To this end, the authors propose a working definition of visual health communication.

  16. Separation of irradiance and reflectance from observed color images by logarithmical nonlinear diffusion process

    NASA Astrophysics Data System (ADS)

    Saito, Takahiro; Takahashi, Hiromi; Komatsu, Takashi

    2006-02-01

    The Retinex theory, first proposed by Land, deals with the separation of irradiance from reflectance in an observed image. The separation problem is ill-posed, and Land and others proposed various Retinex separation algorithms. Recently, Kimmel and others proposed a variational framework that unifies previous Retinex algorithms, such as the Poisson-equation-type algorithms developed by Horn and others, and presented a Retinex separation algorithm based on the time-evolution of a linear diffusion process. However, Kimmel's separation algorithm cannot achieve physically rational separation if the true irradiance varies among color channels. To cope with this problem, we introduce a nonlinear diffusion process into the time-evolution. Moreover, for its extension to color images, we present two approaches to treating the color channels: an independent approach that treats each color channel separately and a collective approach that treats all color channels collectively; the latter outperforms the former. Furthermore, we apply our separation algorithm to high-quality chroma keying, in which, before a foreground frame and a background frame are combined into an output image, the color of each pixel in the foreground frame is spatially adaptively corrected through a transformation of the separated irradiance. Experiments demonstrate the superiority of our separation algorithm over Kimmel's separation algorithm.
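    One generic way to realize a nonlinear diffusion of the log-image, in the spirit of (but not identical to) the algorithm described, is a Perona-Malik style evolution whose edge-stopping function slows diffusion across strong log-domain edges; the smoothed result is taken as log-irradiance and the residual as log-reflectance. Parameter values and the periodic boundary handling below are arbitrary simplifications.

    ```python
    import numpy as np

    def log_nonlinear_diffusion(image, n_iter=50, kappa=0.05, dt=0.2, eps=1e-6):
        """Estimate a smooth irradiance component by Perona-Malik style
        diffusion of the log-image; reflectance is the residual."""
        s = np.log(image.astype(float) + eps)
        u = s.copy()
        for _ in range(n_iter):
            # differences to the four neighbours (periodic borders via roll)
            dn = np.roll(u, -1, axis=0) - u
            ds = np.roll(u, 1, axis=0) - u
            de = np.roll(u, -1, axis=1) - u
            dw = np.roll(u, 1, axis=1) - u
            # edge-stopping conductivities reduce smoothing across strong edges
            cn = 1.0 / (1.0 + (dn / kappa) ** 2)
            cs = 1.0 / (1.0 + (ds / kappa) ** 2)
            ce = 1.0 / (1.0 + (de / kappa) ** 2)
            cw = 1.0 / (1.0 + (dw / kappa) ** 2)
            u = u + dt * (cn * dn + cs * ds + ce * de + cw * dw)
        irradiance = np.exp(u)
        reflectance = np.exp(s - u)
        return irradiance, reflectance
    ```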

  17. Relaxation in x-space magnetic particle imaging.

    PubMed

    Croft, Laura R; Goodwill, Patrick W; Conolly, Steven M

    2012-12-01

    Magnetic particle imaging (MPI) is a new imaging modality that noninvasively images the spatial distribution of superparamagnetic iron oxide nanoparticles (SPIOs). MPI has demonstrated high contrast and zero attenuation with depth, and MPI promises superior safety compared to current angiography methods, X-ray, computed tomography, and magnetic resonance imaging angiography. Nanoparticle relaxation can delay the SPIO magnetization, and in this work we investigate the open problem of the role relaxation plays in MPI scanning and its effect on the image. We begin by amending the x-space theory of MPI to include nanoparticle relaxation effects. We then validate the amended theory with experiments from a Berkeley x-space relaxometer and a Berkeley x-space projection MPI scanner. Our theory and experimental data indicate that relaxation reduces SNR and asymmetrically blurs the image in the scanning direction. While relaxation effects can have deleterious effects on the MPI scan, we show theoretically and experimentally that x-space reconstruction remains robust in the presence of relaxation. Furthermore, the role of relaxation in x-space theory provides guidance as we develop methods to minimize relaxation-induced blurring. This will be an important future area of research for the MPI community.
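    A common first-order (Debye) relaxation picture consistent with the delayed-magnetization description above writes the measured magnetization as the ideal adiabatic response blurred by an exponential kernel; the exact model adopted in the paper may differ in detail. Here tau is the nanoparticle relaxation time constant and u(t) the unit step:

    \[
      M(t) \;=\; \bigl(M_{\mathrm{adiab}} \ast r\bigr)(t), \qquad r(t) \;=\; \frac{1}{\tau}\, e^{-t/\tau}\, u(t).
    \]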

  18. Brain correlates of stuttering and syllable production. A PET performance-correlation analysis.

    PubMed

    Fox, P T; Ingham, R J; Ingham, J C; Zamarripa, F; Xiong, J H; Lancaster, J L

    2000-10-01

    To distinguish the neural systems of normal speech from those of stuttering, PET images of brain blood flow were probed (correlated voxel-wise) with per-trial speech-behaviour scores obtained during PET imaging. Two cohorts were studied: 10 right-handed men who stuttered and 10 right-handed, age- and sex-matched non-stuttering controls. Ninety PET blood flow images were obtained in each cohort (nine per subject as three trials of each of three conditions) from which r-value statistical parametric images (SPI{r}) were computed. Brain correlates of stutter rate and syllable rate showed striking differences in both laterality and sign (i.e. positive or negative correlations). Stutter-rate correlates, both positive and negative, were strongly lateralized to the right cerebral and left cerebellar hemispheres. Syllable correlates in both cohorts were bilateral, with a bias towards the left cerebral and right cerebellar hemispheres, in keeping with the left-cerebral dominance for language and motor skills typical of right-handed subjects. For both stutters and syllables, the brain regions that were correlated positively were those of speech production: the mouth representation in the primary motor cortex; the supplementary motor area; the inferior lateral premotor cortex (Broca's area); the anterior insula; and the cerebellum. The principal difference between syllable-rate and stutter-rate positive correlates was hemispheric laterality. A notable exception to this rule was that cerebellar positive correlates for syllable rate were far more extensive in the stuttering cohort than in the control cohort, which suggests a specific role for the cerebellum in enabling fluent utterances in persons who stutter. Stutters were negatively correlated with right-cerebral regions (superior and middle temporal gyrus) associated with auditory perception and processing, regions which were positively correlated with syllables in both the stuttering and control cohorts. These findings support long-held theories that the brain correlates of stuttering are the speech-motor regions of the non-dominant (right) cerebral hemisphere, and extend this theory to include the non-dominant (left) cerebellar hemisphere. The present findings also indicate a specific role of the cerebellum in the fluent utterances of persons who stutter. Support is also offered for theories that implicate auditory processing problems in stuttering.

  19. Imaging complex objects using learning tomography

    NASA Astrophysics Data System (ADS)

    Lim, JooWon; Goy, Alexandre; Shoreh, Morteza Hasani; Unser, Michael; Psaltis, Demetri

    2018-02-01

    Optical diffraction tomography (ODT) can be described by the scattering process through an inhomogeneous medium. An inherent nonlinearity relates the scattering medium to the scattered field because of multiple scattering. Multiple scattering is often assumed to be negligible in weakly scattering media, but this assumption becomes invalid as the sample gets more complex, resulting in distorted image reconstructions; the issue is critical when imaging a complex sample. Multiple scattering can be simulated using the beam propagation method (BPM) as the forward model of ODT, combined with an iterative reconstruction scheme. The iterative error-reduction scheme and the multi-layer structure of the BPM are similar to neural networks, and we therefore refer to our imaging method as learning tomography (LT). To fairly assess the performance of LT in imaging complex samples, we compared LT with the conventional iterative linear scheme using Mie theory, which provides the ground truth. We also demonstrate the capacity of LT to image complex samples using experimental data from a biological cell.
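    A minimal split-step beam-propagation step of the kind used as a forward model here might look like the following; it is a generic paraxial BPM sketch, not the authors' code, and the wavelength, sampling, and background index values are illustrative.

    ```python
    import numpy as np

    def bpm_step(field, delta_n, dz, wavelength, dx, n0=1.33):
        """One paraxial split-step of the beam propagation method: diffraction
        in the Fourier domain, then a thin phase screen for the local
        refractive-index contrast delta_n of the current slice."""
        k0 = 2 * np.pi / wavelength
        k = n0 * k0
        ny, nx = field.shape
        kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
        ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
        kxx, kyy = np.meshgrid(kx, ky)
        # paraxial diffraction propagator over a slice of thickness dz
        prop = np.exp(-1j * (kxx ** 2 + kyy ** 2) * dz / (2 * k))
        field = np.fft.ifft2(np.fft.fft2(field) * prop)
        # refraction by the slice, modeled as a thin phase screen
        return field * np.exp(1j * k0 * delta_n * dz)

    # propagating a plane wave through a stack of weak random slices
    field = np.ones((128, 128), dtype=complex)
    for _ in range(50):
        slab = 0.01 * np.random.rand(128, 128)
        field = bpm_step(field, slab, dz=0.2e-6, wavelength=532e-9, dx=0.1e-6)
    ```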

  20. Spatially variant morphological restoration and skeleton representation.

    PubMed

    Bouaynaya, Nidhal; Charif-Chefchaouni, Mohammed; Schonfeld, Dan

    2006-11-01

    The theory of spatially variant (SV) mathematical morphology is used to extend and analyze two important image processing applications: morphological image restoration and skeleton representation of binary images. For morphological image restoration, we propose the SV alternating sequential filters and SV median filters. We establish the relation of SV median filters to the basic SV morphological operators (i.e., SV erosions and SV dilations). For skeleton representation, we present a general framework for the SV morphological skeleton representation of binary images. We study the properties of the SV morphological skeleton representation and derive conditions for its invertibility. We also develop an algorithm for the implementation of the SV morphological skeleton representation of binary images. The latter algorithm is based on the optimal construction of the SV structuring element mapping designed to minimize the cardinality of the SV morphological skeleton representation. Experimental results show the dramatic improvement in the performance of the SV morphological restoration and SV morphological skeleton representation algorithms in comparison to their translation-invariant counterparts.
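    To make the notion of a spatially variant operator concrete, the sketch below implements an SV erosion in which the structuring element is a square whose half-width varies per pixel according to a user-supplied map. It is an illustrative toy, not the paper's optimal structuring-element mapping; the border convention (padding with foreground) is one arbitrary choice.

    ```python
    import numpy as np

    def sv_erosion(img, radius_map):
        """Spatially variant erosion of a binary image: at each pixel the
        minimum is taken over a square window whose half-width is given by
        the per-pixel radius_map (a simple structuring element mapping)."""
        h, w = img.shape
        rmax = int(radius_map.max())
        padded = np.pad(img, rmax, mode='constant', constant_values=1)
        out = np.empty_like(img)
        for y in range(h):
            for x in range(w):
                r = int(radius_map[y, x])
                win = padded[y + rmax - r:y + rmax + r + 1,
                             x + rmax - r:x + rmax + r + 1]
                out[y, x] = win.min()
        return out
    ```

    An SV dilation would take the corresponding maximum; note that forming a proper adjunction (and hence SV openings, closings, and alternating sequential filters) generally requires the transposed structuring element mapping, which is part of what the theory formalizes.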

  1. Fuzzy connectedness and object definition

    NASA Astrophysics Data System (ADS)

    Udupa, Jayaram K.; Samarasekera, Supun

    1995-04-01

    Approaches to object information extraction from images should attempt to use the fact that images are fuzzy. In past image segmentation research, the notion of `hanging togetherness' of image elements specified by their fuzzy connectedness has been lacking. We present a theory of fuzzy objects for n-dimensional digital spaces based on a notion of fuzzy connectedness of image elements. Although our definitions lead to problems of enormous combinatorial complexity, the theoretical results allow us to reduce this dramatically. We demonstrate the utility of the theory and algorithms in image segmentation based on several practical examples.
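    The "hanging togetherness" idea can be made concrete with a small max-min propagation: the strength of a path is the weakest affinity along it, and the fuzzy connectedness of a pixel to a seed is the strength of its best path. The sketch below uses a simple intensity-homogeneity affinity on a normalized image and a Dijkstra-like scan; the affinity choice and parameters are illustrative only, not the authors' formulation.

    ```python
    import heapq
    import numpy as np

    def fuzzy_connectedness(img, seed, sigma=0.1):
        """Max-min fuzzy connectedness map from a seed pixel, using an
        intensity-homogeneity affinity between 4-neighbours and a
        Dijkstra-like best-first propagation. img is assumed in [0, 1]."""
        h, w = img.shape
        conn = np.zeros((h, w))
        conn[seed] = 1.0
        heap = [(-1.0, seed)]
        while heap:
            neg_strength, (y, x) = heapq.heappop(heap)
            strength = -neg_strength
            if strength < conn[y, x]:
                continue                      # stale heap entry
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    affinity = np.exp(-((img[y, x] - img[ny, nx]) ** 2) / (2 * sigma ** 2))
                    cand = min(strength, affinity)   # path strength = weakest link
                    if cand > conn[ny, nx]:
                        conn[ny, nx] = cand
                        heapq.heappush(heap, (-cand, (ny, nx)))
        return conn
    ```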

  2. Image enhancement using the hypothesis selection filter: theory and application to JPEG decoding.

    PubMed

    Wong, Tak-Shing; Bouman, Charles A; Pollak, Ilya

    2013-03-01

    We introduce the hypothesis selection filter (HSF) as a new approach for image quality enhancement. We assume that a set of filters has been selected a priori to improve the quality of a distorted image containing regions with different characteristics. At each pixel, HSF uses a locally computed feature vector to predict the relative performance of the filters in estimating the corresponding pixel intensity in the original undistorted image. The prediction result then determines the proportion of each filter used to obtain the final processed output. In this way, the HSF serves as a framework for combining the outputs of a number of different user selected filters, each best suited for a different region of an image. We formulate our scheme in a probabilistic framework where the HSF output is obtained as the Bayesian minimum mean square error estimate of the original image. Maximum likelihood estimates of the model parameters are determined from an offline fully unsupervised training procedure that is derived from the expectation-maximization algorithm. To illustrate how to apply the HSF and to demonstrate its potential, we apply our scheme as a post-processing step to improve the decoding quality of JPEG-encoded document images. The scheme consistently improves the quality of the decoded image over a variety of image content with different characteristics. We show that our scheme results in quantitative improvements over several other state-of-the-art JPEG decoding methods.

  3. Image preprocessing for improving computational efficiency in implementation of restoration and superresolution algorithms.

    PubMed

    Sundareshan, Malur K; Bhattacharjee, Supratik; Inampudi, Radhika; Pang, Ho-Yuen

    2002-12-10

    Computational complexity is a major impediment to the real-time implementation of image restoration and superresolution algorithms in many applications. Although powerful restoration algorithms have been developed within the past few years utilizing sophisticated mathematical machinery (based on statistical optimization and convex set theory), these algorithms are typically iterative in nature and require a sufficient number of iterations to be executed to achieve the desired resolution improvement that may be needed to meaningfully perform postprocessing image exploitation tasks in practice. Additionally, recent technological breakthroughs have facilitated novel sensor designs (focal plane arrays, for instance) that make it possible to capture megapixel imagery data at video frame rates. A major challenge in the processing of these large-format images is to complete the execution of the image processing steps within the frame capture times and to keep up with the output rate of the sensor so that all data captured by the sensor can be efficiently utilized. Consequently, development of novel methods that facilitate real-time implementation of image restoration and superresolution algorithms is of significant practical interest and is the primary focus of this study. The key to designing computationally efficient processing schemes lies in strategically introducing appropriate preprocessing steps together with the superresolution iterations to tailor optimized overall processing sequences for imagery data of specific formats. For substantiating this assertion, three distinct methods for tailoring a preprocessing filter and integrating it with the superresolution processing steps are outlined. These methods consist of a region-of-interest extraction scheme, a background-detail separation procedure, and a scene-derived information extraction step for implementing a set-theoretic restoration of the image that is less demanding in computation compared with the superresolution iterations. A quantitative evaluation of the performance of these algorithms for restoring and superresolving various imagery data captured by diffraction-limited sensing operations are also presented.

  4. Electro-optical design for efficient visual communication

    NASA Astrophysics Data System (ADS)

    Huck, Friedrich O.; Fales, Carl L.; Jobson, Daniel J.; Rahman, Zia-ur

    1994-06-01

    Visual communication can be regarded as efficient only if the amount of information that it conveys from the scene to the observer approaches the maximum possible and the associated cost approaches the minimum possible. To deal with this problem, Fales and Huck have integrated the critical limiting factors that constrain image gathering into classical concepts of communication theory. This paper uses this approach to assess the electro-optical design of the image gathering device. Design variables include the f-number and apodization of the objective lens, the aperture size and sampling geometry of the photodetection mechanism, and lateral inhibition and nonlinear radiance-to-signal conversion akin to the retinal processing in the human eye. It is an agreeable consequence of this approach that the image gathering device that is designed along the guidelines developed from communication theory behaves very much like the human eye. The performance approaches the maximum possible in terms of the information content of the acquired data, and thereby, the fidelity, sharpness and clarity with which fine detail can be restored, the efficiency with which the visual information can be transmitted in the form of decorrelated data, and the robustness of these two attributes to the temporal and spatial variations in scene illumination.

  5. Image Ambiguity and Fluency

    PubMed Central

    Jakesch, Martina; Leder, Helmut; Forster, Michael

    2013-01-01

    Ambiguity is often associated with negative affective responses, and enjoying ambiguity seems restricted to only a few situations, such as experiencing art. Nevertheless, theories of judgment formation, especially the “processing fluency account”, suggest that easy-to-process (non-ambiguous) stimuli are processed faster and are therefore preferred to (ambiguous) stimuli, which are hard to process. In a series of six experiments, we investigated these contrasting approaches by manipulating fluency (presentation duration: 10ms, 50ms, 100ms, 500ms, 1000ms) and testing effects of ambiguity (ambiguous versus non-ambiguous pictures of paintings) on classification performance (Part A; speed and accuracy) and aesthetic appreciation (Part B; liking and interest). As indicated by signal detection analyses, classification accuracy increased with presentation duration (Exp. 1a), but we found no effects of ambiguity on classification speed (Exp. 1b). Fifty percent of the participants were able to successfully classify ambiguous content at a presentation duration of 100 ms, and at 500ms even 75% performed above chance level. Ambiguous artworks were found to be more interesting (in conditions 50ms to 1000ms) and were preferred over non-ambiguous stimuli at 500ms and 1000ms (Exp. 2a - 2c, 3). Importantly, ambiguous images were nonetheless rated significantly harder to process than non-ambiguous images. These results suggest that ambiguity is an essential ingredient in art appreciation, even though, or maybe because, it is harder to process. PMID:24040172

  6. Dynamic Stimuli And Active Processing In Human Visual Perception

    NASA Astrophysics Data System (ADS)

    Haber, Ralph N.

    1990-03-01

    Theories of visual perception traditionally have considered a static retinal image to be the starting point for processing, and have considered processing to be both passive and a literal translation of that frozen, two-dimensional, pictorial image. This paper considers five problem areas in the analysis of human visually guided locomotion, in which the traditional approach is contrasted to newer ones that utilize dynamic definitions of stimulation and an active perceiver: (1) differentiation between object motion and self motion, and among the various kinds of self motion (e.g., eyes only, head only, whole body, and their combinations); (2) the sources and contents of visual information that guide movement; (3) the acquisition and performance of perceptual motor skills; (4) the nature of spatial representations, percepts, and the perceived layout of space; and (5) why the retinal image is a poor starting point for perceptual processing. These newer approaches argue that stimuli must be considered as dynamic: humans process the systematic changes in patterned light when objects move and when they themselves move. Furthermore, the processing of visual stimuli must be active and interactive, so that perceivers can construct panoramic and stable percepts from an interaction of stimulus information and expectancies of what is contained in the visual environment. These developments all suggest a very different approach to the computational analyses of object location and identification, and of the visual guidance of locomotion.

  7. The implicit processing of categorical and dimensional strategies: an fMRI study of facial emotion perception

    PubMed Central

    Matsuda, Yoshi-Taka; Fujimura, Tomomi; Katahira, Kentaro; Okada, Masato; Ueno, Kenichi; Cheng, Kang; Okanoya, Kazuo

    2013-01-01

    Our understanding of facial emotion perception has been dominated by two seemingly opposing theories: the categorical and dimensional theories. However, we have recently demonstrated that hybrid processing involving both categorical and dimensional perception can be induced in an implicit manner (Fujimura et al., 2012). The underlying neural mechanisms of this hybrid processing remain unknown. In this study, we tested the hypothesis that separate neural loci might intrinsically encode categorical and dimensional processing functions that serve as a basis for hybrid processing. We used functional magnetic resonance imaging to measure neural correlates while subjects passively viewed emotional faces and performed tasks that were unrelated to facial emotion processing. Activity in the right fusiform face area (FFA) increased in response to psychologically obvious emotions and decreased in response to ambiguous expressions, demonstrating the role of the FFA in categorical processing. The amygdala, insula and medial prefrontal cortex exhibited evidence of dimensional (linear) processing that correlated with physical changes in the emotional face stimuli. The occipital face area and superior temporal sulcus did not respond to these changes in the presented stimuli. Our results indicated that distinct neural loci process the physical and psychological aspects of facial emotion perception in a region-specific and implicit manner. PMID:24133426

  8. Stereomotion is processed by the third-order motion system: reply to comment on Three-systems theory of human visual motion perception: review and update

    NASA Astrophysics Data System (ADS)

    Lu, Zhong-Lin; Sperling, George

    2002-10-01

    Two theories are considered to account for the perception of motion of depth-defined objects in random-dot stereograms (stereomotion). In the Lu-Sperling three-motion-systems theory [J. Opt. Soc. Am. A 18, 2331 (2001)], stereomotion is perceived by the third-order motion system, which detects the motion of areas defined as figure (versus ground) in a salience map. Alternatively, in his comment [J. Opt. Soc. Am. A 19, 2142 (2002)], Patterson proposes a low-level motion-energy system dedicated to stereo depth. The critical difference between these theories is the preprocessing (figure-ground based on depth and other cues versus simply stereo depth) rather than the motion-detection algorithm itself (because the motion-extraction algorithm for third-order motion is undetermined). Furthermore, the ability of observers to perceive motion in alternating feature displays in which stereo depth alternates with other features such as texture orientation indicates that the third-order motion system can perceive stereomotion. This reduces the stereomotion question to "Is it third-order alone or third-order plus dedicated depth-motion processing?" Two new experiments intended to support the dedicated depth-motion processing theory are shown here to be perfectly accounted for by third-order motion, as are many older experiments that have previously been shown to be consistent with third-order motion. Cyclopean and rivalry images are shown to be a likely confound in stereomotion studies, rivalry motion being as strong as stereomotion. The phase dependence of superimposed same-direction stereomotion stimuli, rivalry stimuli, and isoluminant color stimuli indicates that these stimuli are processed in the same (third-order) motion system. The phase-dependence paradigm [Lu and Sperling, Vision Res. 35, 2697 (1995)] ultimately can resolve the question of which types of signals share a single motion detector. All the evidence accumulated so far is consistent with the three-motion-systems theory. © 2002 Optical Society of America

  9. Neural response to reward anticipation under risk is nonlinear in probabilities.

    PubMed

    Hsu, Ming; Krajbich, Ian; Zhao, Chen; Camerer, Colin F

    2009-02-18

    A widely observed phenomenon in decision making under risk is the apparent overweighting of unlikely events and the underweighting of nearly certain events. This violates standard assumptions in expected utility theory, which requires that expected utility be linear (objective) in probabilities. Models such as prospect theory have relaxed this assumption and introduced the notion of a "probability weighting function," which captures the key properties found in experimental data. This study reports functional magnetic resonance imaging (fMRI) data showing that neural response to expected reward is nonlinear in probabilities. Specifically, we found that activity in the striatum during valuation of monetary gambles is nonlinear in probabilities in the pattern predicted by prospect theory, suggesting that probability distortion is reflected at the level of the reward encoding process. The degree of nonlinearity reflected in individual subjects' decisions is also correlated with striatal activity across subjects. Our results shed light on the neural mechanisms of reward processing, and have implications for future neuroscientific studies of decision making involving extreme tails of the distribution, where probability weighting provides an explanation for commonly observed behavioral anomalies.
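    The "probability weighting function" referred to here is often written in the one-parameter Tversky-Kahneman form (the study may fit a different parameterization, such as Prelec's):

    \[
      w(p) \;=\; \frac{p^{\gamma}}{\bigl(p^{\gamma} + (1-p)^{\gamma}\bigr)^{1/\gamma}}, \qquad 0 < \gamma < 1,
    \]

    which overweights small probabilities and underweights probabilities near one, producing the characteristic inverse-S shape.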

  10. Seismic to­mography; theory and practice

    USGS Publications Warehouse

    Iver, H.M.; Hirahara, Kazuro

    1993-01-01

    Although highly theoretical and computer-orientated, seismic tomography has created spectacular images of anomalies within the Earth with dimensions of thousands of kilometers to a few tens of meters. These images have enabled Earth scientists working on diverse areas to attack fundamental problems relating to the deep dynamical processes within our planet. Additionally, this technique is being used extensively to study the Earth's hazardous regions such as earthquake fault zones and volcanoes, as well as features beneficial to man such as oil or mineral-bearing structures. This book has been written by world experts and describes the theories, experimental and analytical procedures and results of applying seismic tomography from global to purely local scale. It represents the collective global perspective on the state of the art and focuses not only on the theoretical and practical aspects, but also on the uses for hydrocarbon, mineral and geothermal exploitation. Students and researchers in the Earth sciences, and research and exploration geophysicists should find this a useful, practical reference book for all aspects of their work.

  11. Developmental dyscalculia is related to visuo-spatial memory and inhibition impairment☆

    PubMed Central

    Szucs, Denes; Devine, Amy; Soltesz, Fruzsina; Nobes, Alison; Gabriel, Florence

    2013-01-01

    Developmental dyscalculia is thought to be a specific impairment of mathematics ability. Currently dominant cognitive neuroscience theories of developmental dyscalculia suggest that it originates from the impairment of the magnitude representation of the human brain, residing in the intraparietal sulcus, or from impaired connections between number symbols and the magnitude representation. However, behavioral research offers several alternative theories for developmental dyscalculia and neuro-imaging also suggests that impairments in developmental dyscalculia may be linked to disruptions of other functions of the intraparietal sulcus than the magnitude representation. Strikingly, the magnitude representation theory has never been explicitly contrasted with a range of alternatives in a systematic fashion. Here we have filled this gap by directly contrasting five alternative theories (magnitude representation, working memory, inhibition, attention and spatial processing) of developmental dyscalculia in 9–10-year-old primary school children. Participants were selected from a pool of 1004 children and took part in 16 tests and nine experiments. The dominant features of developmental dyscalculia are visuo-spatial working memory, visuo-spatial short-term memory and inhibitory function (interference suppression) impairment. We hypothesize that inhibition impairment is related to the disruption of central executive memory function. Potential problems of visuo-spatial processing and attentional function in developmental dyscalculia probably depend on short-term memory/working memory and inhibition impairments. The magnitude representation theory of developmental dyscalculia was not supported. PMID:23890692

  12. Aberration measurement of projection optics in lithographic tools based on two-beam interference theory.

    PubMed

    Ma, Mingying; Wang, Xiangzhao; Wang, Fan

    2006-11-10

    The degradation of image quality caused by aberrations of projection optics in lithographic tools is a serious problem in optical lithography. We propose what we believe to be a novel technique for measuring aberrations of projection optics based on two-beam interference theory. By utilizing the partial coherent imaging theory, a novel model that accurately characterizes the relative image displacement of a fine grating pattern to a large pattern induced by aberrations is derived. Both even and odd aberrations are extracted independently from the relative image displacements of the printed patterns by two-beam interference imaging of the zeroth and positive first orders. The simulation results show that by using this technique we can measure the aberrations present in the lithographic tool with higher accuracy.

  13. Psyche=singularity: A comparison of Carl Jung's transpersonal psychology and Leonard Susskind's holographic string theory

    NASA Astrophysics Data System (ADS)

    Desmond, Timothy

    In this dissertation I discern what Carl Jung calls the mandala image of the ultimate archetype of unity underlying and structuring cosmos and psyche by pointing out parallels between his transpersonal psychology and Stanford physicist Leonard Susskind's string theory. Despite his atheistic, materialistically reductionist interpretation of it, I demonstrate how Susskind's string theory of holographic information conservation at the event horizons of black holes, and the cosmic horizon of the universe, corroborates the following four topics about which Jung wrote: (1) his near-death experience of the cosmic horizon after a heart attack in 1944; ( 2) his equation relating psychic energy to mass, "Psyche=highest intensity in the smallest space" (1997, 162), which I translate into the equation, Psyche=Singularity; (3) his theory that the mandala, a circle or sphere with a central point, is the symbolic image of the ultimate archetype of unity through the union of opposites, which structures both cosmos and psyche, and which rises spontaneously from the collective unconscious to compensate a conscious mind torn by irreconcilable demands (1989, 334-335, 396-397); and (4) his theory of synchronicity. I argue that Susskind's inside-out black hole model of our Big Bang universe forms a geometrically perfect mandala: a central Singularity encompassed by a two-dimensional sphere which serves as a universal memory bank. Moreover, in precise fulfillment of Jung's theory, Susskind used that mandala to reconcile the notoriously incommensurable paradigms of general relativity and quantum mechanics, providing in the process a mathematically plausible explanation for Jung's near-death experience of his past, present, and future life simultaneously at the cosmic horizon. Finally, Susskind's theory also provides a plausible cosmological model to explain Jung's theory of synchronicity--meaningful coincidences may be tied together by strings at the cosmic horizon, from which they radiate inward as the holographic "movie" of our three-dimensional world.

  14. Account Credibility and Public Image: Excuses, Justifications, Denials, and Sexual Harassment.

    ERIC Educational Resources Information Center

    Dunn, Deborah; Cody, Michael J.

    2000-01-01

    Examines and challenges theories of account giving and public image following an accusation of sexual harassment in the workplace, using college students and working adults as subjects. Challenges the existing theories of account giving and public image, and lays to rest the notion that full apologies and excuses are mitigating in serious account…

  15. Image Theory and Career Aspirations: Indirect and Interactive Effects of Status-Related Variables

    ERIC Educational Resources Information Center

    Thompson, Mindi N.; Dahling, Jason J.

    2010-01-01

    The present study applied Image Theory (Beach, 1990) to test how different components of a person's value image (i.e., perceived social status identity and conformity to masculine and feminine gender role norms) interact to influence trajectories toward high career aspirations (i.e., high value for status in one's work and aspirations for…

  16. Aesthetic Pursuits: Windows, Frames, Words, Images--Part II

    ERIC Educational Resources Information Center

    Burke, Ken

    2005-01-01

    In Part I of this study (Burke, 2005), the author presented the essentials of Image Presentation Theory--IPT--and its application to the analytical explication of various spatial designs in and psychological responses to images, from the illusions of depth in what is referred to as "windows" in cinema theory to the more patterned abstractions of…

  17. Spatial estimation from remotely sensed data via empirical Bayes models

    NASA Technical Reports Server (NTRS)

    Hill, J. R.; Hinkley, D. V.; Kostal, H.; Morris, C. N.

    1984-01-01

    Multichannel satellite image data, available as LANDSAT imagery, are recorded as a multivariate time series (four channels, multiple passovers) in two spatial dimensions. The application of parametric empirical Bayes theory to classification of, and estimation of the probability of, each crop type at each of a large number of pixels is considered. This theory involves both the probability distribution of imagery data, conditional on crop types, and the prior spatial distribution of crop types. For the latter, Markov models indexed by estimable parameters are used. A broad outline of the general theory reveals several questions for further research. Some detailed results are given for the special case of two crop types when only a line transect is analyzed. Finally, the estimation of an underlying continuous process on the lattice is discussed, which would be applicable to such quantities as crop yield.

  18. 3D animation model with augmented reality for natural science learning in elementary school

    NASA Astrophysics Data System (ADS)

    Hendajani, F.; Hakim, A.; Lusita, M. D.; Saputra, G. E.; Ramadhana, A. P.

    2018-05-01

    Many primary school students regard Natural Science as a difficult subject. Several topics are not easily understood, especially materials that teach theories about natural processes such as rainfall, condensation, and many others. The difficulty students experience is that they cannot picture the phenomena described in the material. Hands-on material for practicing these theories exists but is quite limited, as are videos or simulations in the form of 2D animated images, and students' grasp of the underlying concepts remains weak. This Natural Science learning medium uses three-dimensional (3D) animation models with augmented reality technology to visualize science lessons. The application was created to visualize processes in Natural Science subject matter, with the aim of improving students' conceptual understanding. The app runs on a personal computer equipped with a webcam and displays a 3D animation whenever the camera recognizes the marker.

  19. Body Talk: Body Image Commentary on Queerty.com.

    PubMed

    Schwartz, Joseph; Grimm, Josh

    2016-08-01

    In this study, we conducted a content analysis of 243 photographic images of men published on the gay male-oriented blog Queerty.com. We also analyzed 435 user-generated comments from a randomly selected 1-year sample. Focusing on images' body types, we found that the range of body types featured on the blog was quite narrow: the vast majority of images had very low levels of body fat and very high levels of muscularity. Users' body image-related comments typically endorsed and celebrated images; critiques of images were comparatively rare. Perspectives from objectification theory and social comparison theory suggest that the images and commentary found on the blog likely reinforce unhealthy body image in gay male communities.

  20. Study on the Integrated Geophysic Methods and Application of Advanced Geological Detection for Complicated Tunnel

    NASA Astrophysics Data System (ADS)

    Zhou, L.; Xiao, G.

    2014-12-01

    The engineering geological and hydrological conditions of current tunnels are increasingly complicated as tunnels become longer and deeper. In constructing such complicated tunnels, geological hazards are prone to occur, induced by unfavorable geological bodies such as fault zones, karst, or water-bearing structures. The emphasis and difficulty of advanced geological exploration for complicated tunnels therefore lie mainly in characterizing the structure and water content of these unfavorable geological bodies. This paper systematically studies the theory and application of advanced geological exploration for complicated tunnels and discusses the key technical points and the conclusions drawn from them. To make the exploration results comprehensive and accurate, the objective of this paper is a combined examination of the structure and water-bearing characteristics of unfavorable geological bodies in complicated tunnels. Multi-component seismic modeling on a more realistic model that includes the air medium allows the wave-field response characteristics of unfavorable geological bodies to be analyzed, providing a theoretical foundation for the layout of the observation system and for the signal processing and interpretation of seismic methods. Based on the tomographic imaging theory of seismic and electromagnetic methods, 2D integrated seismic and electromagnetic tomographic imaging and visualization software was designed and, after validation of forward and inverse modeling results on theoretical models, applied to advanced drilling holes in the tunnel face. The transmission-wave imaging technology introduced in this paper can serve as a new criterion for detecting unfavorable geological bodies. After careful study of the basic theory, data processing, and interpretation, and of practical applications of the TSP and ground penetrating radar (GPR) methods, together with close examination of their application examples, this paper formulates a comprehensive application system of seismic and electromagnetic methods for the advanced geological exploration of complicated tunnels. This research is funded by the National Natural Science Foundation of China (Grant No. 41202223).

  1. The Electromagnetic Conception of Nature at the Root of the Special and General Relativity Theories and its Revolutionary Meaning

    NASA Astrophysics Data System (ADS)

    Giannetto, Enrico R. A.

    2009-06-01

    The revolution in XX century physics, induced by relativity theories, had its roots within the electromagnetic conception of Nature. It was developed through a tradition related to Brunian and Leibnizian physics, to the German Naturphilosophie and English XIXth physics. The electromagnetic conception of Nature was in some way realized by the relativistic dynamics of Poincaré of 1905. Einstein, on the contrary, after some years, linked relativistic dynamics to a semi-mechanist conception of Nature. He developed general relativity theory on the same ground, but Hilbert formulated it starting from the electromagnetic conception of Nature. Here, a comparison between these two conceptions is proposed in order to understand the conceptual foundations of special relativity within the context of the changing world views. The whole history of physics as well as history of science can be considered as a conflict among different worldviews. Every theory, as well as every different formulation of a theory implies a different worldview: a particular image of Nature implies a particular image of God (atheism too has a particular image of God) as well as of mankind and of their relationship. Thus, it is very relevant for scientific education to point out which image of Nature belongs to a particular formulation of a theory, which image comes to dominate and for which ideological reason.

  2. Study on the application of MRF and the D-S theory to image segmentation of the human brain and quantitative analysis of the brain tissue

    NASA Astrophysics Data System (ADS)

    Guan, Yihong; Luo, Yatao; Yang, Tao; Qiu, Lei; Li, Junchang

    2012-01-01

    The spatial information captured by a Markov random field (MRF) model is used in image segmentation; it can effectively remove noise and yield more accurate segmentation results. Based on the fuzziness and clustering of pixel grayscale information, we find the clustering centers of the different tissues and the background in a medical image using the fuzzy c-means clustering method. We then find the threshold points for multi-threshold segmentation using a two-dimensional histogram method and segment the image accordingly. Multivariate information is fused on the basis of Dempster-Shafer evidence theory to obtain the fused segmentation. This paper adopts these three theories to propose a new human brain image segmentation method. Experimental results show that the segmentation is more in line with human vision and is of vital significance for the accurate analysis and application of brain tissue.
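
    Of the three ingredients combined above, the fuzzy c-means step is the easiest to isolate. The following sketch is a minimal fuzzy c-means clustering of grayscale values with assumed parameters (three clusters, fuzzifier m = 2); it deliberately omits the MRF spatial term and the Dempster-Shafer fusion that the paper adds on top.

    ```python
    import numpy as np

    def fuzzy_c_means(values, n_clusters=3, m=2.0, n_iter=100, seed=0):
        """Minimal fuzzy c-means on 1-D grayscale values.

        Returns (centers, memberships), where memberships[i, k] is the degree
        to which pixel i belongs to cluster k.
        """
        rng = np.random.default_rng(seed)
        x = np.asarray(values, dtype=float).ravel()
        u = rng.random((x.size, n_clusters))
        u /= u.sum(axis=1, keepdims=True)          # membership rows sum to one
        for _ in range(n_iter):
            w = u ** m
            centers = (w * x[:, None]).sum(axis=0) / w.sum(axis=0)
            dist = np.abs(x[:, None] - centers[None, :]) + 1e-12
            # Standard FCM membership update.
            u = 1.0 / (dist ** (2.0 / (m - 1)))
            u /= u.sum(axis=1, keepdims=True)
        return centers, u

    # Toy grayscale data: two tissue intensities plus background.
    rng = np.random.default_rng(1)
    gray = np.concatenate([rng.normal(30, 5, 500),
                           rng.normal(120, 8, 500),
                           rng.normal(200, 6, 500)])
    centers, u = fuzzy_c_means(gray, n_clusters=3)
    print(np.sort(centers))
    ```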

  3. General imaging of advanced 3D mask objects based on the fully-vectorial extended Nijboer-Zernike (ENZ) theory

    NASA Astrophysics Data System (ADS)

    van Haver, Sven; Janssen, Olaf T. A.; Braat, Joseph J. M.; Janssen, Augustus J. E. M.; Urbach, H. Paul; Pereira, Silvania F.

    2008-03-01

    In this paper we introduce a new mask imaging algorithm that is based on the source point integration method (or Abbe method). The method presented here distinguishes itself from existing methods by exploiting the through-focus imaging feature of the Extended Nijboer-Zernike (ENZ) theory of diffraction. An introduction to ENZ-theory and its application in general imaging is provided after which we describe the mask imaging scheme that can be derived from it. The remainder of the paper is devoted to illustrating the advantages of the new method over existing methods (Hopkins-based). To this extent several simulation results are included that illustrate advantages arising from: the accurate incorporation of isolated structures, the rigorous treatment of the object (mask topography) and the fully vectorial through-focus image formation of the ENZ-based algorithm.

  4. Artificial retina model for the retinally blind based on wavelet transform

    NASA Astrophysics Data System (ADS)

    Zeng, Yan-an; Song, Xin-qiang; Jiang, Fa-gang; Chang, Da-ding

    2007-01-01

    An artificial retina is aimed at stimulating the remaining retinal neurons in patients with degenerated photoreceptors, and microelectrode arrays have been developed for this purpose as part of the stimulator. Designing such microelectrode arrays first requires a suitable mathematical model of human retinal information processing. In this paper, a flexible and adjustable model of human visual information extraction, based on the wavelet transform, is presented. Given the flexibility of the wavelet transform for image information processing and its consistency with how the human visual system extracts information, wavelet transform theory is applied to an artificial retina model for the retinally blind. The response of the model to a synthetic image is shown. The simulated experiment demonstrates that the model behaves in a manner qualitatively similar to biological retinas and thus may serve as a basis for the development of an artificial retina.
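
    As a rough illustration of the multiresolution decomposition that a wavelet-based retina model builds on, the sketch below performs one level of a 2-D Haar wavelet transform in plain NumPy on a toy image. It is a generic example under simplifying assumptions (Haar basis, single level), not the model proposed in the paper.

    ```python
    import numpy as np

    def haar_dwt2(image):
        """One level of a 2-D Haar wavelet transform.

        Returns (LL, LH, HL, HH): the coarse approximation and the three
        detail bands. Image sides must be even.
        """
        img = np.asarray(image, dtype=float)
        # Row transform: scaled sum and difference of adjacent column pairs.
        lo = (img[:, 0::2] + img[:, 1::2]) / np.sqrt(2)
        hi = (img[:, 0::2] - img[:, 1::2]) / np.sqrt(2)
        # Column transform applied to both row outputs.
        LL = (lo[0::2, :] + lo[1::2, :]) / np.sqrt(2)
        LH = (lo[0::2, :] - lo[1::2, :]) / np.sqrt(2)
        HL = (hi[0::2, :] + hi[1::2, :]) / np.sqrt(2)
        HH = (hi[0::2, :] - hi[1::2, :]) / np.sqrt(2)
        return LL, LH, HL, HH

    # Toy "retinal image": a bright disk on a dark background.
    y, x = np.mgrid[:64, :64]
    image = ((x - 32) ** 2 + (y - 32) ** 2 < 15 ** 2).astype(float)
    LL, LH, HL, HH = haar_dwt2(image)
    print(LL.shape, abs(LH).max(), abs(HL).max(), abs(HH).max())
    ```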

  5. Is pictorial perception robust? The effect of the observer vantage point on the perceived depth structure of linear-perspective images.

    PubMed

    Todorović, Dejan

    2008-01-01

    Every image of a scene produced in accord with the rules of linear perspective has an associated projection centre. Only if observed from that position does the image provide the stimulus which is equivalent to the one provided by the original scene. According to the perspective-transformation hypothesis, observing the image from other vantage points should result in specific transformations of the structure of the conveyed scene, whereas according to the vantage-point compensation hypothesis it should have little effect. Geometrical analyses illustrating the transformation theory are presented. An experiment is reported to confront the two theories. The results provide little support for the compensation theory and are generally in accord with the transformation theory, but also show systematic deviations from it, possibly due to cue conflict and asymmetry of visual angles.

  6. Flame-Vortex Interactions in Microgravity to Improve Models of Turbulent Combustion

    NASA Technical Reports Server (NTRS)

    Driscoll, James F.

    1999-01-01

    A unique flame-vortex interaction experiment is being operated in microgravity in order to obtain fundamental data to assess the Theory of Flame Stretch which will be used to improve models of turbulent combustion. The experiment provides visual images of the physical process by which an individual eddy in a turbulent flow increases the flame surface area, changes the local flame propagation speed, and can extinguish the reaction. The high quality microgravity images provide benchmark data that are free from buoyancy effects. Results are used to assess Direct Numerical Simulations of Dr. K. Kailasanath at NRL, which were run for the same conditions.

  7. Incoherent imaging of radar targets

    NASA Astrophysics Data System (ADS)

    van Ommen, A.; van der Spek, G. A.

    1986-05-01

    Theory suggests that, if a target can be modeled as a rigid constellation of point scatterers, the RCS pattern over a certain aspect change can be used to produce a one-dimensional image. The results for actual measured RCS patterns, however, are not promising. This is illustrated by processing on 4 s of echo data obtained from a Boeing 737 in straight flight, during which its aspect change is 2 deg. The conclusion might be that, for the application considered, aircraft cannot be modeled as a rigid constellation of point scatterers; this is partly due to the treatment of a three-dimensional target as a line target.

  8. Cognitive factors in subjective stabilization of the visual world.

    PubMed

    Bridgeman, B

    1981-08-01

    If an eye movement signal is fed through a galvanic mirror to move a projected image which a subject is inspecting, prominent objects in the image may seem to jiggle or jump with the eye when the gain is just below the threshold for detecting a jump of the entire image (Brune and Lücking 1969). We have refined and extended this observation with both naive and practiced subjects, finding results which contradict all of the current theories about the mechanism of stabilization of the visual world and suggest that cognitive factors in perception exert important influences on the stabilization process. Using this method with a paired photocell system to detect horizontal eye movements, some subjects saw a prominent object in the display jump slightly while the rest of the scene remained stable. The task was done first with landscape slides, then repeated with Escher prints in which two sets of alternating figures completely filled the image. Subjects could concentrate on one set of forms as the "figure" and the other as the "ground", and reverse the two at will. In a majority of practiced subjects and in a smaller proportion of naive subjects, motion of part of the "figure" was seen regardless of which alternative set of forms constituted it. Reversibility of the effect controlled for the influence of object size, brightness, etc. in inducing the selective jump. These and related observations show that cognitive or attentional variables are as important as image properties or gain alone in determining subjective stabilization of the visual world, even though current theories (inflow, outflow, cancellation, etc.) treat image position as a simple variable. Another experiment showed that image movement on the retina during saccades cannot explain saccadic suppression of displacement.

  9. Scalets, wavelets and (complex) turning point quantization

    NASA Astrophysics Data System (ADS)

    Handy, C. R.; Brooks, H. A.

    2001-05-01

    Despite the many successes of wavelet analysis in image and signal processing, the incorporation of continuous wavelet transform theory within quantum mechanics has lacked a compelling, first principles, motivating analytical framework, until now. For arbitrary one-dimensional rational fraction Hamiltonians, we develop a simple, unified formalism, which clearly underscores the complementary, and mutually interdependent, role played by moment quantization theory (i.e. via scalets, as defined herein) and wavelets. This analysis involves no approximation of the Hamiltonian within the (equivalent) wavelet space, and emphasizes the importance of (complex) multiple turning point contributions in the quantization process. We apply the method to three illustrative examples. These include the (double-well) quartic anharmonic oscillator potential problem, V(x) = Z2x2 + gx4, the quartic potential, V(x) = x4, and the very interesting and significant non-Hermitian potential V(x) = -(ix)3, recently studied by Bender and Boettcher.

  10. REMOTE LAND MINE(FIELD) DETECTION. An Overview of Techniques (DETECTIE VAN LANDMIJNEN EN MIJNENVELDEN OP AFSTAND. Een Overzicht van de technieken),

    DTIC Science & Technology

    1994-09-01

    Title: DETECTIE VAN LANDMIJNEN EN MIJNENVELDEN OP AFSTAND, een overzicht van de technieken; author(s): Drs. J.S. Groot, Ir. Y.H.L. Janssen; date: September...functions based on set theory. The fundamental theory was developed in the sixties and was applicable to binary (black-and-white) images...held at TNO-FEL. Various subjects related to fusion techniques: Dempster-Shafer theory, Bayesian inference, Kalman filtering, fuzzy logic. [A15], [B4

  11. Molecular imaging and the unification of multilevel mechanisms and data in medical physics.

    PubMed

    Nikiforidis, George C; Sakellaropoulos, George C; Kagadis, George C

    2008-08-01

    Molecular imaging (MI) constitutes a recently developed approach of imaging, where modalities and agents have been reinvented and used in novel combinations in order to expose and measure biologic processes occurring at molecular and cellular levels. It is an approach that bridges the gap between modalities acquiring data from high (e.g., computed tomography, magnetic resonance imaging, and positron-emitting isotopes) and low (e.g., PCR, microarrays) levels of a biological organization. While data integration methodologies will lead to improved diagnostic and prognostic performance, interdisciplinary collaboration, triggered by MI, will result in a better perception of the underlying biological mechanisms. Toward the development of a unifying theory describing these mechanisms, medical physicists can formulate new hypotheses, provide the physical constraints bounding them, and consequently design appropriate experiments. Their new scientific and working environment calls for interventions in their syllabi to educate scientists with enhanced capabilities for holistic views and synthesis.

  12. The impact of approximations and arbitrary choices on geophysical images

    NASA Astrophysics Data System (ADS)

    Valentine, Andrew P.; Trampert, Jeannot

    2016-01-01

    Whenever a geophysical image is to be constructed, a variety of choices must be made. Some, such as those governing data selection and processing, or model parametrization, are somewhat arbitrary: there may be little reason to prefer one choice over another. Others, such as defining the theoretical framework within which the data are to be explained, may be more straightforward: typically, an `exact' theory exists, but various approximations may need to be adopted in order to make the imaging problem computationally tractable. Differences between any two images of the same system can be explained in terms of differences between these choices. Understanding the impact of each particular decision is essential if images are to be interpreted properly-but little progress has been made towards a quantitative treatment of this effect. In this paper, we consider a general linearized inverse problem, applicable to a wide range of imaging situations. We write down an expression for the difference between two images produced using similar inversion strategies, but where different choices have been made. This provides a framework within which inversion algorithms may be analysed, and allows us to consider how image effects may arise. In this paper, we take a general view, and do not specialize our discussion to any specific imaging problem or setup (beyond the restrictions implied by the use of linearized inversion techniques). In particular, we look at the concept of `hybrid inversion', in which highly accurate synthetic data (typically the result of an expensive numerical simulation) is combined with an inverse operator constructed based on theoretical approximations. It is generally supposed that this offers the benefits of using the more complete theory, without the full computational costs. We argue that the inverse operator is as important as the forward calculation in determining the accuracy of results. We illustrate this using a simple example, based on imaging the density structure of a vibrating string.
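
    The idea of attributing image differences to the choice of inverse operator can be made concrete with a toy linear problem: the sketch below builds two damped least-squares inverses for the same data, one from the "exact" forward matrix and one from a perturbed approximation, and measures the difference between the resulting models. All matrices and noise levels are made up for illustration; this is not the framework of the paper, only its simplest possible analogue.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy linear imaging problem: d = G_true @ m_true + noise.
    n_data, n_model = 80, 40
    G_true = rng.normal(size=(n_data, n_model))
    m_true = np.zeros(n_model)
    m_true[10:20] = 1.0                      # a simple "anomaly" to image
    d = G_true @ m_true + 0.01 * rng.normal(size=n_data)

    # An approximate theory: the same forward matrix with small errors.
    G_approx = G_true + 0.05 * rng.normal(size=G_true.shape)

    def damped_inverse(G, eps=0.1):
        """Damped least-squares (Tikhonov) inverse operator."""
        return np.linalg.solve(G.T @ G + eps * np.eye(G.shape[1]), G.T)

    m_exact = damped_inverse(G_true) @ d     # accurate forward and inverse
    m_hybrid = damped_inverse(G_approx) @ d  # accurate data, approximate inverse

    # Image difference attributable purely to the choice of inverse operator.
    print("rms image difference:", np.sqrt(np.mean((m_exact - m_hybrid) ** 2)))
    ```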

  13. Lexical Processing in Toddlers with ASD: Does Weak Central Coherence Play a Role?

    PubMed

    Ellis Weismer, Susan; Haebig, Eileen; Edwards, Jan; Saffran, Jenny; Venker, Courtney E

    2016-12-01

    This study investigated whether vocabulary delays in toddlers with autism spectrum disorders (ASD) can be explained by a cognitive style that prioritizes processing of detailed, local features of input over global contextual integration-as claimed by the weak central coherence (WCC) theory. Thirty toddlers with ASD and 30 younger, cognition-matched typical controls participated in a looking-while-listening task that assessed whether perceptual or semantic similarities among named images disrupted word recognition relative to a neutral condition. Overlap of perceptual features invited local processing whereas semantic overlap invited global processing. With the possible exception of a subset of toddlers who had very low vocabulary skills, these results provide no evidence that WCC is characteristic of lexical processing in toddlers with ASD.

  14. Adaptive Two Dimensional RLS (Recursive Least Squares) Algorithms

    DTIC Science & Technology

    1989-03-01

    I. INTRODUCTION Adaptive algorithms have been used successfully for many years in a wide range of digital signal...SIMULATION RESULTS The 2-D FRLS algorithm was tested both on computer-generated data and on digitized images. For a baseline reference the 2-D LMS...Alexander, S. T. Adaptive Signal Processing: Theory and Applications. Springer-Verlag, New York. 1986. 7. Bellanger, Maurice G. Adaptive Digital

  15. Perception for Outdoor Navigation

    DTIC Science & Technology

    1990-11-01

    without lane markings. Our perception modules use a variety of techniques for video processing (clustering theory, symbolic feature detection, neural nets...on gravel and dirt roads, as expected. The most difficult case involved a dirt road in a forest, which was mainly distinguishable in the video images...in that estimate. Neural Nets. Under separate funding, we have driven the Navlab using neural nets to track the road in video images. We are

  16. Unsupervised color normalisation for H and E stained histopathology image analysis

    NASA Astrophysics Data System (ADS)

    Celis, Raúl; Romero, Eduardo

    2015-12-01

    In histology, each dye component is intended to characterise specific microscopic structures. In the case of the Hematoxylin-Eosin (H&E) stain, universally used for routine examination, quantitative analysis often requires the inspection of different morphological signatures related mainly to nuclei patterns, but also to stroma distribution. Nevertheless, computer systems for automatic diagnosis are often hampered by colour variations, which range from the capturing device to the laboratory-specific staining protocol and stains. This paper presents a novel colour normalisation method for H&E-stained histopathology images. The method is based upon opponent process theory and blindly estimates the best colour basis for the Hematoxylin and Eosin stains without relying on prior knowledge. Stain normalisation and colour separation are transversal to any framework of histopathology image analysis.
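
    For orientation, a common baseline for blind stain estimation (in the spirit of optical-density methods widely used elsewhere, and explicitly not the opponent-process approach of this paper) converts RGB pixels to optical density and takes the two dominant directions of that point cloud as the Hematoxylin and Eosin vectors. A minimal sketch, with assumed thresholds and toy data:

    ```python
    import numpy as np

    def estimate_stain_basis(rgb, io=255.0, od_threshold=0.15):
        """Blindly estimate two stain colour vectors from an RGB H&E image.

        Illustrative baseline only: convert to optical density, keep tissue
        pixels, and take the two dominant directions via SVD.
        """
        pixels = rgb.reshape(-1, 3).astype(float)
        od = -np.log((pixels + 1.0) / io)            # Beer-Lambert optical density
        od = od[np.linalg.norm(od, axis=1) > od_threshold]
        # Plane spanned by the two largest singular vectors of the OD cloud.
        _, _, vt = np.linalg.svd(od - od.mean(axis=0), full_matrices=False)
        basis = vt[:2]
        # Orient each basis vector so its components are mostly positive.
        basis *= np.where(basis.sum(axis=1, keepdims=True) < 0, -1.0, 1.0)
        return basis / np.linalg.norm(basis, axis=1, keepdims=True)

    # Toy usage on random "tissue-like" data (replace with a real H&E image).
    rng = np.random.default_rng(0)
    fake_rgb = rng.integers(60, 220, size=(64, 64, 3))
    print(estimate_stain_basis(fake_rgb))
    ```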

  17. Measurement of Galactic Logarithmic Spiral Arm Pitch Angle Using Two-dimensional Fast Fourier Transform Decomposition

    NASA Astrophysics Data System (ADS)

    Davis, Benjamin L.; Berrier, Joel C.; Shields, Douglas W.; Kennefick, Julia; Kennefick, Daniel; Seigar, Marc S.; Lacy, Claud H. S.; Puerari, Ivânio

    2012-04-01

    A logarithmic spiral is a prominent feature appearing in a majority of observed galaxies. This feature has long been associated with the traditional Hubble classification scheme, but historical quotes of pitch angle of spiral galaxies have been almost exclusively qualitative. We have developed a methodology, utilizing two-dimensional fast Fourier transformations of images of spiral galaxies, in order to isolate and measure the pitch angles of their spiral arms. Our technique provides a quantitative way to measure this morphological feature. This will allow comparison of spiral galaxy pitch angle to other galactic parameters and test spiral arm genesis theories. In this work, we detail our image processing and analysis of spiral galaxy images and discuss the robustness of our analysis techniques.
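
    The core of the 2-D FFT approach can be sketched compactly: resample the galaxy image onto a (ln r, θ) grid, take a 2-D FFT, locate the peak radial frequency p for a chosen harmonic m, and convert it to a pitch angle via arctan(-m/p). The code below is a bare-bones illustration with nearest-neighbour resampling and a synthetic two-armed spiral; it omits the deprojection, bar masking, and stability checks that a real pipeline such as the one described above would need.

    ```python
    import numpy as np

    def pitch_angle(image, m=2, n_r=128, n_theta=256, r_min=2.0):
        """Estimate the pitch angle (degrees) of m-armed logarithmic spiral
        structure in a face-on image via a 2-D FFT in (ln r, theta)."""
        ny, nx = image.shape
        cy, cx = (ny - 1) / 2.0, (nx - 1) / 2.0
        u = np.linspace(np.log(r_min), np.log(min(cx, cy)), n_r)   # ln r
        theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
        rr, tt = np.meshgrid(np.exp(u), theta, indexing='ij')
        xs = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, nx - 1)
        ys = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, ny - 1)
        polar = image[ys, xs]
        n_pad = 8 * n_r                                            # finer peak location
        spec = np.fft.fft2(polar, s=(n_pad, n_theta))
        p_freqs = np.fft.fftfreq(n_pad, d=u[1] - u[0]) * 2.0 * np.pi
        p_peak = p_freqs[np.argmax(np.abs(spec[:, m]))]            # fixed harmonic m
        return np.degrees(np.arctan(-m / p_peak))

    # Synthetic two-armed spiral with a pitch angle of about 20 degrees.
    ny = nx = 256
    y, x = np.mgrid[:ny, :nx] - 127.5
    r = np.hypot(x, y) + 1e-6
    th = np.arctan2(y, x)
    phase = 2 * th + 2 * np.log(r) / np.tan(np.radians(20.0))
    image = (1 + np.cos(phase)) * np.exp(-r / 80.0)
    print(round(abs(pitch_angle(image)), 1))   # roughly 20; sign encodes winding sense
    ```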

  18. Use of graph algorithms in the processing and analysis of images with focus on the biomedical data.

    PubMed

    Zdimalova, M; Roznovjak, R; Weismann, P; El Falougy, H; Kubikova, E

    2017-01-01

    Image segmentation is a well-known problem in the field of image processing, and a great number of methods based on different approaches have been created. One of these approaches utilizes findings from graph theory. Our work focuses on segmentation using shortest paths in a graph. Specifically, we deal with "Intelligent Scissors" methods, which use Dijkstra's algorithm to find the shortest paths. We created new software in the Microsoft Visual Studio 2013 integrated development environment with Visual C++, in the language C++/CLI: a desktop application with a graphical user interface for Windows, using the .NET platform (version 4.5). The program was used for handling and processing the original medical data. The major disadvantage of the "Intelligent Scissors" method is the running time of Dijkstra's algorithm; however, after the implementation of a more efficient priority queue, this problem can be alleviated. The main advantage of the method, in our view, is the training step, which allows it to adapt to the particular kind of edge that we need to segment. User involvement has a significant influence on the segmentation process and greatly helps to achieve high-quality results (Fig. 7, Ref. 13).
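
    The shortest-path idea behind "Intelligent Scissors" is easy to sketch: treat pixels as graph nodes, make steps cheap where the image gradient is strong, and run Dijkstra's algorithm between two seed points. The following minimal example (standard-library heap, toy cost function) illustrates the principle only; it has none of the training or interactive components, nor the C++/CLI implementation, described above.

    ```python
    import heapq
    import numpy as np

    def dijkstra_path(cost, start, goal):
        """Shortest path between two pixels on a 4-connected grid.

        cost : 2-D array of per-pixel step costs (low cost = attractive edge)
        start, goal : (row, col) tuples
        """
        ny, nx = cost.shape
        dist = np.full((ny, nx), np.inf)
        prev = {}
        dist[start] = 0.0
        heap = [(0.0, start)]
        while heap:
            d, (r, c) = heapq.heappop(heap)
            if (r, c) == goal:
                break
            if d > dist[r, c]:
                continue                       # stale queue entry
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < ny and 0 <= nc < nx:
                    nd = d + cost[nr, nc]
                    if nd < dist[nr, nc]:
                        dist[nr, nc] = nd
                        prev[(nr, nc)] = (r, c)
                        heapq.heappush(heap, (nd, (nr, nc)))
        path, node = [], goal
        while node != start:
            path.append(node)
            node = prev[node]
        return [start] + path[::-1]

    # Toy "image": a step edge along the diagonal; cost is low where the gradient is high.
    img = np.tri(64, 64, dtype=float)
    gy, gx = np.gradient(img)
    cost = 1.0 / (np.hypot(gx, gy) + 1e-3)
    path = dijkstra_path(cost, start=(0, 0), goal=(63, 63))
    print(len(path), path[:3], path[-3:])
    ```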

  19. What you see is what you expect: rapid scene understanding benefits from prior experience.

    PubMed

    Greene, Michelle R; Botros, Abraham P; Beck, Diane M; Fei-Fei, Li

    2015-05-01

    Although we are able to rapidly understand novel scene images, little is known about the mechanisms that support this ability. Theories of optimal coding assert that prior visual experience can be used to ease the computational burden of visual processing. A consequence of this idea is that more probable visual inputs should be facilitated relative to more unlikely stimuli. In three experiments, we compared the perceptions of highly improbable real-world scenes (e.g., an underwater press conference) with common images matched for visual and semantic features. Although the two groups of images could not be distinguished by their low-level visual features, we found profound deficits related to the improbable images: Observers wrote poorer descriptions of these images (Exp. 1), had difficulties classifying the images as unusual (Exp. 2), and even had lower sensitivity to detect these images in noise than to detect their more probable counterparts (Exp. 3). Taken together, these results place a limit on our abilities for rapid scene perception and suggest that perception is facilitated by prior visual experience.

  20. Model-based restoration using light vein for range-gated imaging systems.

    PubMed

    Wang, Canjin; Sun, Tao; Wang, Tingfeng; Wang, Rui; Guo, Jin; Tian, Yuzhen

    2016-09-10

    The images captured by an airborne range-gated imaging system are degraded by many factors, such as light scattering, noise, defocus of the optical system, atmospheric disturbances, platform vibrations, and so on. The characteristics of low illumination, few details, and high noise make state-of-the-art restoration methods fail. In this paper, we present a restoration method designed especially for range-gated imaging systems. The degradation process is divided into two parts: the static part and the dynamic part. For the static part, we establish the physical model of the imaging system according to laser transmission theory and estimate the static point spread function (PSF). For the dynamic part, a so-called light vein feature extraction method is presented to estimate the blur parameter of the atmospheric disturbance and platform movement, which contribute to the dynamic PSF. Finally, combining the static and dynamic PSFs, an iterative updating framework is used to restore the image. Compared with state-of-the-art methods, the proposed method can effectively suppress ringing artifacts and achieve better performance in a range-gated imaging system.
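
    Two generic building blocks of this kind of restoration pipeline are the cascading of a static and a dynamic blur (a product of transfer functions) and a frequency-domain Wiener inversion. The sketch below shows only those textbook steps with made-up Gaussian PSFs and noise levels; it is not the light-vein estimation or the iterative updating framework of the paper.

    ```python
    import numpy as np

    def gaussian_psf(shape, sigma):
        """Centered 2-D Gaussian blur kernel, normalised to unit sum."""
        y, x = np.indices(shape)
        cy, cx = (shape[0] - 1) / 2.0, (shape[1] - 1) / 2.0
        psf = np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2.0 * sigma ** 2))
        return psf / psf.sum()

    def transfer_function(psf):
        """FFT of a centered PSF (origin shifted to pixel (0, 0))."""
        return np.fft.fft2(np.fft.ifftshift(psf))

    def wiener_restore(blurred, H, nsr=1e-2):
        """Frequency-domain Wiener filter with a constant noise-to-signal ratio."""
        W = np.conj(H) / (np.abs(H) ** 2 + nsr)
        return np.real(np.fft.ifft2(W * np.fft.fft2(blurred)))

    shape = (128, 128)
    # Static PSF (optical system) and dynamic PSF (vibration/atmosphere);
    # their cascade is a product of transfer functions.
    H = transfer_function(gaussian_psf(shape, 1.5)) * transfer_function(gaussian_psf(shape, 3.0))

    rng = np.random.default_rng(0)
    scene = np.zeros(shape)
    scene[40:90, 50:80] = 1.0                       # a toy target
    blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * H))
    blurred += 0.01 * rng.normal(size=shape)        # sensor noise
    restored = wiener_restore(blurred, H, nsr=1e-2)
    print(float(np.abs(restored - scene).mean()), float(np.abs(blurred - scene).mean()))
    ```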

  1. Statistical Limits to Super Resolution

    NASA Astrophysics Data System (ADS)

    Lucy, L. B.

    1992-08-01

    The limits imposed by photon statistics on the degree to which Rayleigh's resolution limit for diffraction-limited images can be surpassed by applying image restoration techniques are investigated. An approximate statistical theory is given for the number of detected photons required in the image of an unresolved pair of equal point sources in order that its information content allows in principle resolution by restoration. This theory is confirmed by numerical restoration experiments on synthetic images, and quantitative limits are presented for restoration of diffraction-limited images formed by slit and circular apertures.

  2. On the transition between two-phase and single-phase interface dynamics in multicomponent fluids at supercritical pressures

    NASA Astrophysics Data System (ADS)

    Dahms, Rainer N.; Oefelein, Joseph C.

    2013-09-01

    A theory that explains the operating pressures where liquid injection processes transition from exhibiting classical two-phase spray atomization phenomena to single-phase diffusion-dominated mixing is presented. Imaging from a variety of experiments have long shown that under certain conditions, typically when the pressure of the working fluid exceeds the thermodynamic critical pressure of the liquid phase, the presence of discrete two-phase flow processes become diminished. Instead, the classical gas-liquid interface is replaced by diffusion-dominated mixing. When and how this transition occurs, however, is not well understood. Modern theory still lacks a physically based model to quantify this transition and the precise mechanisms that lead to it. In this paper, we derive a new model that explains how the transition occurs in multicomponent fluids and present a detailed analysis to quantify it. The model applies a detailed property evaluation scheme based on a modified 32-term Benedict-Webb-Rubin equation of state that accounts for the relevant real-fluid thermodynamic and transport properties of the multicomponent system. This framework is combined with Linear Gradient Theory, which describes the detailed molecular structure of the vapor-liquid interface region. Our analysis reveals that the two-phase interface breaks down not necessarily due to vanishing surface tension forces, but due to thickened interfaces at high subcritical temperatures coupled with an inherent reduction of the mean free molecular path. At a certain point, the combination of reduced surface tension, the thicker interface, and reduced mean free molecular path enter the continuum length scale regime. When this occurs, inter-molecular forces approach that of the multicomponent continuum where transport processes dominate across the interfacial region. This leads to a continuous phase transition from compressed liquid to supercritical mixture states. Based on this theory, a regime diagram for liquid injection is developed that quantifies the conditions under which classical sprays transition to dense-fluid jets. It is shown that the chamber pressure required to support diffusion-dominated mixing dynamics depends on the composition and temperature of the injected liquid and ambient gas. To illustrate the method and analysis, we use conditions typical of diesel engine injection. We also present a companion set of high-speed images to provide experimental validation of the presented theory. The basic theory is quite general and applies to a wide range of modern propulsion and power systems such as liquid rockets, gas turbines, and reciprocating engines. Interestingly, the regime diagram associated with diesel engine injection suggests that classical spray phenomena at typical injection conditions do not occur.

  3. Person perception involves functional integration between the extrastriate body area and temporal pole.

    PubMed

    Greven, Inez M; Ramsey, Richard

    2017-02-01

    The majority of human neuroscience research has focussed on understanding functional organisation within segregated patches of cortex. The ventral visual stream has been associated with the detection of physical features such as faces and body parts, whereas the theory-of-mind network has been associated with making inferences about mental states and underlying character, such as whether someone is friendly, selfish, or generous. To date, however, it is largely unknown how such distinct processing components integrate neural signals. Using functional magnetic resonance imaging and connectivity analyses, we investigated the contribution of functional integration to social perception. During scanning, participants observed bodies that had previously been associated with trait-based or neutral information. Additionally, we independently localised the body perception and theory-of-mind networks. We demonstrate that when observing someone who cues the recall of stored social knowledge compared to non-social knowledge, a node in the ventral visual stream (extrastriate body area) shows greater coupling with part of the theory-of-mind network (temporal pole). These results show that functional connections provide an interface between perceptual and inferential processing components, thus providing neurobiological evidence that supports the view that understanding the visual environment involves interplay between conceptual knowledge and perceptual processing. Copyright © 2017 The Authors. Published by Elsevier Ltd.. All rights reserved.

  4. A Control Theory Model of Smoking

    PubMed Central

    Bobashev, Georgiy; Holloway, John; Solano, Eric; Gutkin, Boris

    2017-01-01

    We present a heuristic control theory model that describes smoking under restricted and unrestricted access to cigarettes. The model is based on the allostasis theory and uses a formal representation of a multiscale opponent process. The model simulates smoking behavior of an individual and produces both short-term (“loading up” after not smoking for a while) and long-term smoking patterns (e.g., gradual transition from a few cigarettes to one pack a day). By introducing a formal representation of withdrawal- and craving-like processes, the model produces gradual increases over time in withdrawal- and craving-like signals associated with abstinence and shows that after 3 months of abstinence, craving disappears. The model was programmed as a computer application allowing users to select simulation scenarios. The application links images of brain regions that are activated during the binge/intoxication, withdrawal, or craving with corresponding simulated states. The model was calibrated to represent smoking patterns described in peer-reviewed literature; however, it is generic enough to be adapted to other drugs, including cocaine and opioids. Although the model does not mechanistically describe specific neurobiological processes, it can be useful in prevention and treatment practices as an illustration of drug-using behaviors and expected dynamics of withdrawal and craving during abstinence. PMID:28868531

  5. The iconographic brain. A critical philosophical inquiry into (the resistance of) the image

    PubMed Central

    De Vos, Jan

    2014-01-01

    The brain image plays a central role in contemporary image culture and, in turn, (co)constructs contemporary forms of subjectivity. The central aim of this paper is to probe the unmistakably potent interpellative power of brain images by delving into the power of imaging and the power of the image itself. This is not without relevance for the neurosciences, inasmuch as these do not take place in a vacuum; hence the importance of inquiring into the status of the image within scientific culture and science itself. I will mount a critical philosophical investigation of the brain qua image, focusing on the issue of mapping the mental onto the brain and how, in turn, the brain image plays a pivotal role in processes of subjectivation. Hereto, I draw upon Science & Technology Studies, juxtaposed with culture and ideology critique and theories of image culture. The first section sets out from Althusser's concept of interpellation, linking ideology to subjectivity. Doing so allows to spell out the central question of the paper: what could serve as the basis for a critical approach, or, where can a locus of resistance be found? In the second section, drawing predominantly on Baudrillard, I delve into the dimension of virtuality as this is opened up by brain image culture. This leads to the question of whether the digital brain must be opposed to old analog psychology: is it the psyche which resists? This issue is taken up in the third section which, ultimately, concludes that the psychological is not the requisite locus of resistance. The fourth section proceeds to delineate how the brain image is constructed from what I call the data-gaze (the claim that brain data are always already visual). In the final section, I discuss how an engagement with theories of iconology affords a critical understanding of the interpellative force of the brain image, which culminates in the somewhat unexpected claim that the sought after resistance lies in the very status of the image itself. PMID:24860480

  6. Internal representations for face detection: an application of noise-based image classification to BOLD responses.

    PubMed

    Nestor, Adrian; Vettel, Jean M; Tarr, Michael J

    2013-11-01

    What basic visual structures underlie human face detection and how can we extract such structures directly from the amplitude of neural responses elicited by face processing? Here, we address these issues by investigating an extension of noise-based image classification to BOLD responses recorded in high-level visual areas. First, we assess the applicability of this classification method to such data and, second, we explore its results in connection with the neural processing of faces. To this end, we construct luminance templates from white noise fields based on the response of face-selective areas in the human ventral cortex. Using behaviorally and neurally-derived classification images, our results reveal a family of simple but robust image structures subserving face representation and detection. Thus, we confirm the role played by classical face selective regions in face detection and we help clarify the representational basis of this perceptual function. From a theory standpoint, our findings support the idea of simple but highly diagnostic neurally-coded features for face detection. At the same time, from a methodological perspective, our work demonstrates the ability of noise-based image classification in conjunction with fMRI to help uncover the structure of high-level perceptual representations. Copyright © 2012 Wiley Periodicals, Inc.
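
    The noise-based classification-image idea itself can be shown with a few lines of code: generate white-noise fields, record a scalar response (behavioural or BOLD-like) to each, and subtract the average noise field on low-response trials from the average on high-response trials. The simulation below uses an invented linear "observer" and a median split; it is a conceptual sketch, not the fMRI analysis reported above.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    size = 32                      # noise fields are size x size pixels
    n_trials = 5000

    # A hidden "template" the simulated observer or voxel is sensitive to.
    y, x = np.mgrid[:size, :size]
    template = np.exp(-(((x - 16) ** 2) / 40.0 + ((y - 12) ** 2) / 20.0))
    template -= template.mean()

    # Each trial: a white-noise field and a noisy scalar response to it.
    noise = rng.normal(size=(n_trials, size, size))
    responses = (noise * template).sum(axis=(1, 2)) + rng.normal(scale=5.0, size=n_trials)

    # Classification image: mean noise on high-response trials minus mean noise
    # on low-response trials (a median split for simplicity).
    high = responses > np.median(responses)
    classification_image = noise[high].mean(axis=0) - noise[~high].mean(axis=0)

    # The recovered image should correlate with the hidden template.
    corr = np.corrcoef(classification_image.ravel(), template.ravel())[0, 1]
    print(round(corr, 2))
    ```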

  7. A Bayesian Model for Highly Accelerated Phase-Contrast MRI

    PubMed Central

    Rich, Adam; Potter, Lee C.; Jin, Ning; Ash, Joshua; Simonetti, Orlando P.; Ahmad, Rizwan

    2015-01-01

    Purpose Phase-contrast magnetic resonance imaging (PC-MRI) is a noninvasive tool to assess cardiovascular disease by quantifying blood flow; however, low data acquisition efficiency limits the spatial and temporal resolutions, real-time application, and extensions to 4D flow imaging in clinical settings. We propose a new data processing approach called Reconstructing Velocity Encoded MRI with Approximate message passing aLgorithms (ReVEAL) that accelerates the acquisition by exploiting data structure unique to PC-MRI. Theory and Methods ReVEAL models physical correlations across space, time, and velocity encodings. The proposed Bayesian approach exploits the relationships in both magnitude and phase among velocity encodings. A fast iterative recovery algorithm is introduced based on message passing. For validation, prospectively undersampled data are processed from a pulsatile flow phantom and five healthy volunteers. Results ReVEAL is in good agreement, quantified by peak velocity and stroke volume (SV), with reference data for acceleration rates R ≤ 10. For SV, Pearson r ≥ 0.996 for phantom imaging (n = 24) and r ≥ 0.956 for prospectively accelerated in vivo imaging (n = 10) for R ≤ 10. Conclusion ReVEAL enables accurate quantification of blood flow from highly undersampled data. The technique is extensible to 4D flow imaging, where higher acceleration may be possible due to additional redundancy. PMID:26444911
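
    As background, the elementary quantity phase-contrast MRI measures is velocity encoded in the phase difference between two velocity encodings, scaled so that ±VENC maps to ±π. The sketch below shows only that basic relationship with an assumed VENC value; the ReVEAL reconstruction itself (undersampling, message passing) is not reproduced here.

    ```python
    import numpy as np

    def velocity_map(img_plus, img_minus, venc_cm_s):
        """Velocity (cm/s) from two complex velocity-encoded images.

        The phase difference between the encodings is proportional to velocity,
        reaching +/- pi at +/- VENC; aliasing occurs beyond that range.
        """
        phase_diff = np.angle(img_plus * np.conj(img_minus))
        return venc_cm_s * phase_diff / np.pi

    # Toy example: a single pixel moving at 60 cm/s with an assumed VENC of 150 cm/s.
    venc = 150.0
    true_v = 60.0
    phi = np.pi * true_v / venc
    pixel_plus = 1.0 * np.exp(1j * (+phi / 2))
    pixel_minus = 1.0 * np.exp(1j * (-phi / 2))
    print(velocity_map(np.array([pixel_plus]), np.array([pixel_minus]), venc))  # ~[60.]
    ```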

  8. The Müller-Lyer Illusion in a Computational Model of Biological Object Recognition

    PubMed Central

    Zeman, Astrid; Obst, Oliver; Brooks, Kevin R.; Rich, Anina N.

    2013-01-01

    Studying illusions provides insight into the way the brain processes information. The Müller-Lyer Illusion (MLI) is a classical geometrical illusion of size, in which perceived line length is decreased by arrowheads and increased by arrowtails. Many theories have been put forward to explain the MLI, such as misapplied size constancy scaling, the statistics of image-source relationships and the filtering properties of signal processing in primary visual areas. Artificial models of the ventral visual processing stream allow us to isolate factors hypothesised to cause the illusion and test how these affect classification performance. We trained a feed-forward feature hierarchical model, HMAX, to perform a dual category line length judgment task (short versus long) with over 90% accuracy. We then tested the system in its ability to judge relative line lengths for images in a control set versus images that induce the MLI in humans. Results from the computational model show an overall illusory effect similar to that experienced by human subjects. No natural images were used for training, implying that misapplied size constancy and image-source statistics are not necessary factors for generating the illusion. A post-hoc analysis of response weights within a representative trained network ruled out the possibility that the illusion is caused by a reliance on information at low spatial frequencies. Our results suggest that the MLI can be produced using only feed-forward, neurophysiological connections. PMID:23457510

  9. Is the linear modeling technique good enough for optimal form design? A comparison of quantitative analysis models.

    PubMed

    Lin, Yang-Cheng; Yeh, Chung-Hsing; Wang, Chen-Cheng; Wei, Chun-Chun

    2012-01-01

    How to design highly reputable and hot-selling products is an essential issue in product design. Whether consumers choose a product depends largely on their perception of the product image. A consumer-oriented design approach presented in this paper helps product designers incorporate consumers' perceptions of product forms in the design process. The consumer-oriented design approach uses quantification theory type I, grey prediction (the linear modeling technique), and neural networks (the nonlinear modeling technique) to determine the optimal form combination of product design for matching a given product image. An experimental study based on the concept of Kansei Engineering is conducted to collect numerical data for examining the relationship between consumers' perception of product image and product form elements of personal digital assistants (PDAs). The result of performance comparison shows that the QTTI model is good enough to help product designers determine the optimal form combination of product design. Although the PDA form design is used as a case study, the approach is applicable to other consumer products with various design elements and product images. The approach provides an effective mechanism for facilitating the consumer-oriented product design process.
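
    Quantification theory type I amounts to linear regression on dummy-coded categorical design variables, which is straightforward to sketch. The example below fits such a model to invented Kansei-style ratings for hypothetical form elements (body shape, display ratio); the variables, levels, and coefficients are all made up to show the mechanics, not to reproduce the study.

    ```python
    import numpy as np

    # Hypothetical form elements for a handheld device: body shape and
    # display-to-body ratio, each a categorical design variable.
    shapes = ["round", "square", "slim"]
    ratios = ["small", "large"]

    rng = np.random.default_rng(0)
    n = 60
    shape_idx = rng.integers(0, len(shapes), n)
    ratio_idx = rng.integers(0, len(ratios), n)

    # Dummy coding (drop the first level of each factor to avoid collinearity).
    X = np.column_stack([
        np.ones(n),
        (shape_idx == 1).astype(float),   # square vs round
        (shape_idx == 2).astype(float),   # slim vs round
        (ratio_idx == 1).astype(float),   # large vs small
    ])

    # Made-up "image" ratings (e.g., perceived "simplicity" on a 1-7 scale).
    beta_true = np.array([4.0, -0.5, 1.2, 0.8])
    ratings = X @ beta_true + rng.normal(scale=0.4, size=n)

    # QTTI-style fit: ordinary least squares on the dummy-coded design matrix.
    beta_hat, *_ = np.linalg.lstsq(X, ratings, rcond=None)
    print(np.round(beta_hat, 2))

    # The categories with the largest positive coefficients suggest the form
    # element combination that best matches the target product image.
    ```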

  10. Is the Linear Modeling Technique Good Enough for Optimal Form Design? A Comparison of Quantitative Analysis Models

    PubMed Central

    Lin, Yang-Cheng; Yeh, Chung-Hsing; Wang, Chen-Cheng; Wei, Chun-Chun

    2012-01-01

    How to design highly reputable and hot-selling products is an essential issue in product design. Whether consumers choose a product depends largely on their perception of the product image. A consumer-oriented design approach presented in this paper helps product designers incorporate consumers' perceptions of product forms in the design process. The consumer-oriented design approach uses quantification theory type I, grey prediction (the linear modeling technique), and neural networks (the nonlinear modeling technique) to determine the optimal form combination of product design for matching a given product image. An experimental study based on the concept of Kansei Engineering is conducted to collect numerical data for examining the relationship between consumers' perception of product image and product form elements of personal digital assistants (PDAs). The result of performance comparison shows that the QTTI model is good enough to help product designers determine the optimal form combination of product design. Although the PDA form design is used as a case study, the approach is applicable to other consumer products with various design elements and product images. The approach provides an effective mechanism for facilitating the consumer-oriented product design process. PMID:23258961

  11. Maskless EUV lithography: an already difficult technology made even more complicated?

    NASA Astrophysics Data System (ADS)

    Chen, Yijian

    2012-03-01

    In this paper, we present the research progress made in maskless EUV lithography and discuss the emerging opportunities for this disruptive technology. It will be shown that a nanomirror-based maskless approach is one path to cost-effective and defect-free EUV lithography, rather than one that makes it even more complicated. The focus of our work is to optimize the existing vertical comb process and scale the mirror size down from several microns to the sub-micron regime. Nanomirror device scaling, system configuration, and design issues will be addressed. We also report our theoretical and simulation study of reflective EUV nanomirror-based imaging behavior. Dense line/space patterns are formed with an EUV nanomirror array by assigning a phase shift of π to neighboring nanomirrors. Our simulation results show that phase/intensity imbalance is an inherent characteristic of maskless EUV lithography, but it poses only a manageable challenge to CD control and the process window. The image blur induced by wafer scanning and EUV laser jitter is discussed, and a blurred-imaging theory is constructed. This blur effect is found to degrade image contrast at a level that depends mainly on the wafer scan speed.

  12. Brain Imaging, Forward Inference, and Theories of Reasoning

    PubMed Central

    Heit, Evan

    2015-01-01

    This review focuses on the issue of how neuroimaging studies address theoretical accounts of reasoning, through the lens of the method of forward inference (Henson, 2005, 2006). After theories of deductive and inductive reasoning are briefly presented, the method of forward inference for distinguishing between psychological theories based on brain imaging evidence is critically reviewed. Brain imaging studies of reasoning, comparing deductive and inductive arguments, comparing meaningful versus non-meaningful material, investigating hemispheric localization, and comparing conditional and relational arguments, are assessed in light of the method of forward inference. Finally, conclusions are drawn with regard to future research opportunities. PMID:25620926

  13. Brain imaging, forward inference, and theories of reasoning.

    PubMed

    Heit, Evan

    2014-01-01

    This review focuses on the issue of how neuroimaging studies address theoretical accounts of reasoning, through the lens of the method of forward inference (Henson, 2005, 2006). After theories of deductive and inductive reasoning are briefly presented, the method of forward inference for distinguishing between psychological theories based on brain imaging evidence is critically reviewed. Brain imaging studies of reasoning, comparing deductive and inductive arguments, comparing meaningful versus non-meaningful material, investigating hemispheric localization, and comparing conditional and relational arguments, are assessed in light of the method of forward inference. Finally, conclusions are drawn with regard to future research opportunities.

  14. Evaluation of the image quality of telescopes using the star test

    NASA Astrophysics Data System (ADS)

    Vazquez y Monteil, Sergio; Salazar Romero, Marcos A.; Gale, David M.

    2004-10-01

    The point spread function (PSF), or star test, is one of the main criteria for judging the quality of the image formed by a telescope. In a real system the distribution of irradiance in the image of a point source is given by the PSF, a function which is highly sensitive to aberrations. The PSF of a telescope may be determined by measuring the intensity distribution in the image of a star. Alternatively, if we already know the aberrations present in the optical system, we may use diffraction theory to calculate the function. In this paper we propose a method for determining the wavefront aberrations from the PSF, using genetic algorithms to perform an optimization that starts from the PSF instead of the more traditional approach of fitting an aberration polynomial. We show that this method of phase recovery is immune to noise-induced errors arising during image acquisition and registration. Some practical results are shown.
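
    The forward calculation that such a genetic-algorithm fit repeatedly evaluates can be sketched in a few lines: build a circular pupil, add an aberration phase, and take the squared magnitude of its Fourier transform as the PSF. The example below uses only a simple defocus term and made-up parameters, and leaves out the optimization loop itself.

    ```python
    import numpy as np

    def psf_from_pupil(n=256, aperture_radius=0.3, defocus_waves=0.5):
        """Diffraction PSF of a circular pupil with a defocus aberration.

        The pupil phase is 2*pi*W(x, y) with W = defocus_waves * (2*rho^2 - 1),
        a simple Zernike defocus term. The PSF is |FFT(pupil)|^2.
        """
        x = np.linspace(-0.5, 0.5, n)
        xx, yy = np.meshgrid(x, x)
        rho = np.hypot(xx, yy) / aperture_radius      # normalised pupil radius
        inside = rho <= 1.0
        wavefront = defocus_waves * (2.0 * rho ** 2 - 1.0)
        pupil = inside * np.exp(2j * np.pi * wavefront)
        field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
        psf = np.abs(field) ** 2
        return psf / psf.sum()

    # Compare the aberrated star image with the diffraction-limited one.
    ideal = psf_from_pupil(defocus_waves=0.0)
    blurred = psf_from_pupil(defocus_waves=0.5)
    print(float(ideal.max()), float(blurred.max()))   # Strehl-like peak drop
    ```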

  15. BIM-Sim: Interactive Simulation of Broadband Imaging Using Mie Theory

    NASA Astrophysics Data System (ADS)

    Berisha, Sebastian; van Dijk, Thomas; Bhargava, Rohit; Carney, P. Scott; Mayerich, David

    2017-02-01

    Understanding the structure of a scattered electromagnetic (EM) field is critical to improving the imaging process. Mechanisms such as diffraction, scattering, and interference affect an image, limiting the resolution and potentially introducing artifacts. Simulation and visualization of scattered fields thus plays an important role in imaging science. However, the calculation of scattered fields is extremely time-consuming on desktop systems and computationally challenging on task-parallel systems such as supercomputers and cluster systems. In addition, EM fields are high-dimensional, making them difficult to visualize. In this paper, we present a framework for interactively computing and visualizing EM fields scattered by micro and nano-particles. Our software uses graphics hardware for evaluating the field both inside and outside of these particles. We then use Monte-Carlo sampling to reconstruct and visualize the three-dimensional structure of the field, spectral profiles at individual points, the structure of the field at the surface of the particle, and the resulting image produced by an optical system.

  16. Imaging and image restoration of an on-axis three-mirror Cassegrain system with wavefront coding technology.

    PubMed

    Guo, Xiaohu; Dong, Liquan; Zhao, Yuejin; Jia, Wei; Kong, Lingqin; Wu, Yijian; Li, Bing

    2015-04-01

    Wavefront coding (WFC) technology is adopted in the space optical system to resolve the problem of defocus caused by temperature difference or vibration of satellite motion. According to the theory of WFC, we calculate and optimize the phase mask parameter of the cubic phase mask plate, which is used in an on-axis three-mirror Cassegrain (TMC) telescope system. The simulation analysis and the experimental results indicate that the defocused modulation transfer function curves and the corresponding blurred images have a perfect consistency in the range of 10 times the depth of focus (DOF) of the original TMC system. After digital image processing by a Wiener filter, the spatial resolution of the restored images is up to 57.14 line pairs/mm. The results demonstrate that the WFC technology in the TMC system has superior performance in extending the DOF and less sensitivity to defocus, which has great value in resolving the problem of defocus in the space optical system.

  17. Generalized free-space diffuse photon transport model based on the influence analysis of a camera lens diaphragm.

    PubMed

    Chen, Xueli; Gao, Xinbo; Qu, Xiaochao; Chen, Duofang; Ma, Xiaopeng; Liang, Jimin; Tian, Jie

    2010-10-10

    The camera lens diaphragm is an important component in a noncontact optical imaging system and has a crucial influence on the images registered on the CCD camera. However, this influence has not been taken into account in the existing free-space photon transport models. To model the photon transport process more accurately, a generalized free-space photon transport model is proposed. It combines Lambertian source theory with analysis of the influence of the camera lens diaphragm to simulate photon transport process in free space. In addition, the radiance theorem is also adopted to establish the energy relationship between the virtual detector and the CCD camera. The accuracy and feasibility of the proposed model is validated with a Monte-Carlo-based free-space photon transport model and physical phantom experiment. A comparison study with our previous hybrid radiosity-radiance theorem based model demonstrates the improvement performance and potential of the proposed model for simulating photon transport process in free space.

  18. Telerobotic Surgery: An Intelligent Systems Approach to Mitigate the Adverse Effects of Communication Delay. Chapter 4

    NASA Technical Reports Server (NTRS)

    Cardullo, Frank M.; Lewis, Harold W., III; Panfilov, Peter B.

    2007-01-01

    An extremely innovative approach has been presented, which is to have the surgeon operate through a simulator running in real-time enhanced with an intelligent controller component to enhance the safety and efficiency of a remotely conducted operation. The use of a simulator enables the surgeon to operate in a virtual environment free from the impediments of telecommunication delay. The simulator functions as a predictor and periodically the simulator state is corrected with truth data. Three major research areas must be explored in order to ensure achieving the objectives. They are: simulator as predictor, image processing, and intelligent control. Each is equally necessary for success of the project and each of these involves a significant intelligent component in it. These are diverse, interdisciplinary areas of investigation, thereby requiring a highly coordinated effort by all the members of our team, to ensure an integrated system. The following is a brief discussion of those areas. Simulator as a predictor: The delays encountered in remote robotic surgery will be greater than any encountered in human-machine systems analysis, with the possible exception of remote operations in space. Therefore, novel compensation techniques will be developed. Included will be the development of the real-time simulator, which is at the heart of our approach. The simulator will present real-time, stereoscopic images and artificial haptic stimuli to the surgeon. Image processing: Because of the delay and the possibility of insufficient bandwidth a high level of novel image processing is necessary. This image processing will include several innovative aspects, including image interpretation, video to graphical conversion, texture extraction, geometric processing, image compression and image generation at the surgeon station. Intelligent control: Since the approach we propose is in a sense predictor based, albeit a very sophisticated predictor, a controller, which not only optimizes end effector trajectory but also avoids error, is essential. We propose to investigate two different approaches to the controller design. One approach employs an optimal controller based on modern control theory; the other one involves soft computing techniques, i.e. fuzzy logic, neural networks, genetic algorithms and hybrids of these.

  19. My belief or yours? Differential theory of mind deficits in frontotemporal dementia and Alzheimer's disease.

    PubMed

    Le Bouc, Raphaël; Lenfant, Pierre; Delbeuck, Xavier; Ravasi, Laura; Lebert, Florence; Semah, Franck; Pasquier, Florence

    2012-10-01

    Theory of mind reasoning-the ability to understand someone else's mental states, such as beliefs, intentions and desires-is crucial in social interaction. It has been suggested that a theory of mind deficit may account for some of the abnormalities in interpersonal behaviour that characterize patients affected by behavioural variant frontotemporal dementia. However, there are conflicting reports as to whether understanding someone else's mind is a key difference between behavioural variant frontotemporal dementia and other neurodegenerative conditions such as Alzheimer's disease. Literature data on the relationship between theory of mind abilities and executive functions are also contradictory. These disparities may be due to underestimation of the fractionation within theory of mind components. A recent theoretical framework suggests that taking someone else's mental perspective requires two distinct processes: inferring someone else's belief and inhibiting one's own belief, with involvement of the temporoparietal and right frontal cortices, respectively. Therefore, we performed a neuropsychological and neuroimaging study to investigate the hypothesis whereby distinct cognitive deficits could impair theory of mind reasoning in patients with Alzheimer's disease and patients with behavioural variant frontotemporal dementia. We used a three-option false belief task to assess theory of mind components in 11 patients with behavioural variant frontotemporal dementia, 12 patients with Alzheimer's disease and 20 healthy elderly control subjects. The patients with behavioural variant frontotemporal dementia and those with Alzheimer's disease were matched for age, gender, education and global cognitive impairment. [(18)F]-fluorodeoxyglucose-positron emission tomography imaging was used to investigate neural correlates of theory of mind reasoning deficits. Performance in the three-option false belief task revealed differential impairments in the components of theory of mind reasoning; patients with Alzheimer's disease had a predominant deficit in inferring someone else's belief, whereas patients with behavioural variant frontotemporal dementia were selectively impaired in inhibiting their own mental perspective. Moreover, inhibiting one's own perspective was strongly correlated with inhibition in a Stroop task but not with other subprocesses of executive functions. This finding suggests that self-perspective inhibition may depend on cognitive processes that are not specific to the social domain. Last, the severity of the deficit in inferring someone else's beliefs correlated significantly over all subjects with hypometabolism in the left temporoparietal junction, whereas the severity of the deficit in self-perspective inhibition correlated significantly with hypometabolism in the right lateral prefrontal cortex. In conclusion, our findings provided clinical and imaging evidence to support differential deficits in two components of theory of mind reasoning (subserved by distinct brain regions) in patients with Alzheimer's disease and patients with behavioural variant frontotemporal dementia.

  20. The key to unlocking the virtual body: virtual reality in the treatment of obesity and eating disorders.

    PubMed

    Riva, Giuseppe

    2011-03-01

    Obesity and eating disorders are usually considered unrelated problems with different causes. However, various studies identify unhealthful weight-control behaviors (fasting, vomiting, or laxative abuse), induced by a negative experience of the body, as the common antecedents of both obesity and eating disorders. But how might negative body image--common to most adolescents, not only to medical patients--be behind the development of obesity and eating disorders? In this paper, I review the "allocentric lock theory" of negative body image as the possible antecedent of both obesity and eating disorders. Evidence from psychology and neuroscience indicates that our bodily experience involves the integration of different sensory inputs within two different reference frames: egocentric (first-person experience) and allocentric (third-person experience). Even though functional relations between these two frames are usually limited, they influence each other during the interaction between long- and short-term memory processes in spatial cognition. If this process is impaired either through exogenous (e.g., stress) or endogenous causes, the egocentric sensory inputs are unable to update the contents of the stored allocentric representation of the body. In other words, these patients are locked in an allocentric (observer view) negative image of their body, which their sensory inputs are no longer able to update even after a demanding diet and a significant weight loss. This article discusses the possible role of virtual reality in addressing this problem within an integrated treatment approach based on the allocentric lock theory. © 2011 Diabetes Technology Society.

  1. "We shall have to make the best of it": The conversion of Dennis Sciama

    NASA Astrophysics Data System (ADS)

    Hunt, James Christopher

    The cosmologist Dennis W. Sciama (1926-1999) was a long-standing advocate of the steady state model of the universe. This theory, originally proposed in 1948 by Hermann Bondi, Thomas Gold, and Fred Hoyle, suggested that the universe was eternal and unchanging on the largest scales. Contrary to the popular image of a scientist as a dispassionate, unbiased investigator of nature, Sciama fervently hoped that the steady state model was correct. In addition, and also pace the stereotypical image of a scientist, Sciama was motivated significantly by "extrascientific" or aesthetic factors in his adoption of the model. Finally, Sciama, in stark contrast to the naive falsificationism usually presented as a virtue of the "scientific method," went through a several-year period of attempting to "save" the model from hostile data. However, Sciama abandoned the model in 1966 due to increasingly reliable data relating to the distribution of quasars. Thus the Sciama case also stands as a counterexample to irrationalist criticisms of science, according to which scientists can and will always find ways to hold on to their "pet" theories until they die, regardless of contradictory data. Sciama's conversion also sheds light on the iterative process that goes on as scientists localize and attempt to repair faults in their theories.

  2. Rigorous diffraction analysis using geometrical theory of diffraction for future mask technology

    NASA Astrophysics Data System (ADS)

    Chua, Gek S.; Tay, Cho J.; Quan, Chenggen; Lin, Qunying

    2004-05-01

    Advanced lithographic techniques such as phase shift masks (PSM) and optical proximity correction (OPC) result in a more complex mask design and technology. In contrast to binary masks, which have only transparent and nontransparent regions, phase shift masks also take into consideration transparent features with a different optical thickness and a modified phase of the transmitted light. PSMs are well known to show prominent diffraction effects, which cannot be described by the assumption of an infinitely thin mask (Kirchhoff approach) that is used in many commercial photolithography simulators. Correct prediction of sidelobe printability, process windows and the linearity of OPC masks requires the application of rigorous diffraction theory. The problem of aerial image intensity imbalance through focus with alternating phase shift masks (altPSMs) is analysed and compared between a time-domain finite-difference (TDFD) algorithm (TEMPEST) and the geometrical theory of diffraction (GTD). Using GTD, with the solutions to the canonical problems, we obtain a relationship between an edge on the mask and the disturbance in image space. The main interest is to develop useful formulations that can be readily applied to solve rigorous diffraction problems for future mask technology. Analysis of rigorous diffraction effects for altPSMs using the GTD approach is discussed.

  3. Democratization of Nanoscale Imaging and Sensing Tools Using Photonics

    DTIC Science & Technology

    2015-06-12

    Excerpt (fragmentary): the angular scattering pattern of a sample is projected onto the cell phone image sensor, and the one-dimensional radial scattering profile is then fitted with Mie theory to estimate particle size; the experimental measurements closely match the predictions of the theory and simulations.

  4. Feature-based attention: it is all bottom-up priming.

    PubMed

    Theeuwes, Jan

    2013-10-19

    Feature-based attention (FBA) enhances the representation of image characteristics throughout the visual field, a mechanism that is particularly useful when searching for a specific stimulus feature. Even though most theories of visual search implicitly or explicitly assume that FBA is under top-down control, we argue that the role of top-down processing in FBA may be limited. Our review of the literature indicates that all behavioural and neuro-imaging studies investigating FBA suffer from the shortcoming that they cannot rule out an effect of priming. The mere attending to a feature enhances the mandatory processing of that feature across the visual field, an effect that is likely to occur in an automatic, bottom-up way. Studies that have investigated the feasibility of FBA by means of cueing paradigms suggest that the role of top-down processing in FBA is limited (e.g. prepare for red). Instead, the actual processing of the stimulus is needed to cause the mandatory tuning of responses throughout the visual field. We conclude that it is likely that all FBA effects reported previously are the result of bottom-up priming.

  5. Feature-based attention: it is all bottom-up priming

    PubMed Central

    Theeuwes, Jan

    2013-01-01

    Feature-based attention (FBA) enhances the representation of image characteristics throughout the visual field, a mechanism that is particularly useful when searching for a specific stimulus feature. Even though most theories of visual search implicitly or explicitly assume that FBA is under top-down control, we argue that the role of top-down processing in FBA may be limited. Our review of the literature indicates that all behavioural and neuro-imaging studies investigating FBA suffer from the shortcoming that they cannot rule out an effect of priming. The mere attending to a feature enhances the mandatory processing of that feature across the visual field, an effect that is likely to occur in an automatic, bottom-up way. Studies that have investigated the feasibility of FBA by means of cueing paradigms suggest that the role of top-down processing in FBA is limited (e.g. prepare for red). Instead, the actual processing of the stimulus is needed to cause the mandatory tuning of responses throughout the visual field. We conclude that it is likely that all FBA effects reported previously are the result of bottom-up priming. PMID:24018717

  6. Theory and analysis of a large field polarization imaging system with obliquely incident light.

    PubMed

    Lu, Xiaotian; Jin, Weiqi; Li, Li; Wang, Xia; Qiu, Su; Liu, Jing

    2018-02-05

    Polarization imaging technology provides information about not only the irradiance of a target but also the degree of polarization and the angle of polarization, which gives it extensive application potential. However, polarization imaging theory is based on paraxial optics. When a beam of obliquely incident light passes through an analyser, the direction of light propagation is not perpendicular to the surface of the analyser, and the applicability of the traditional paraxial polarization imaging theory is challenged. This paper investigates a theoretical model of a polarization imaging system with obliquely incident light and establishes a polarization imaging transmission model for a large field of obliquely incident light. In an imaging experiment with an integrating sphere light source and a rotatable polarizer, the polarization imaging transmission model is verified and analysed for the two cases of natural light and linearly polarized light incidence. Although the results indicate that the theoretical model is consistent with the experimental results, the theoretical model distinctly differs from the traditional paraxial approximation model. The results demonstrate the accuracy and necessity of the theoretical model and its guiding significance for theoretical and systematic research on large-field polarization imaging.

  7. How Radiologists Think: Understanding Fast and Slow Thought Processing and How It Can Improve Our Teaching.

    PubMed

    van der Gijp, Anouk; Webb, Emily M; Naeger, David M

    2017-06-01

    Scholars have identified two distinct ways of thinking. This "Dual Process Theory" distinguishes a fast, nonanalytical way of thinking, called "System 1," and a slow, analytical way of thinking, referred to as "System 2." In radiology, we use both methods when interpreting and reporting images, and both should ideally be emphasized when educating our trainees. This review provides practical tips for improving radiology education, by enhancing System 1 and System 2 thinking among our trainees. Copyright © 2017 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.

  8. The software application and classification algorithms for welds radiograms analysis

    NASA Astrophysics Data System (ADS)

    Sikora, R.; Chady, T.; Baniukiewicz, P.; Grzywacz, B.; Lopato, P.; Misztal, L.; Napierała, L.; Piekarczyk, B.; Pietrusewicz, T.; Psuj, G.

    2013-01-01

    The paper presents a software implementation of an Intelligent System for Radiogram Analysis (ISAR). The system supports radiologists in weld quality inspection. The image processing part of the software, with a graphical user interface, and the weld classification part are described together with selected classification results. Classification was based on several algorithms: an artificial neural network, k-means clustering, a simplified k-means and rough set theory.
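
    As an illustration of one of the classifiers listed above, a minimal k-means clustering sketch over feature vectors extracted from weld indications is given below; the feature names and data are synthetic stand-ins, not the ISAR system's actual inputs.

```python
import numpy as np

# Plain k-means clustering applied to feature vectors describing segmented weld
# indications (e.g. area, mean intensity, elongation). Data are synthetic.

def kmeans(features, k=3, n_iter=50, seed=0):
    """Assign each feature vector to the nearest centroid, recompute centroids, repeat."""
    rng = np.random.default_rng(seed)
    centroids = features[rng.choice(len(features), size=k, replace=False)]
    for _ in range(n_iter):
        # distances of every sample to every centroid -> (n_samples, k)
        d = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = features[labels == j].mean(axis=0)
    return labels, centroids

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    feats = rng.normal(size=(200, 3))      # stand-in weld-indication features
    labels, cents = kmeans(feats, k=3)
    print(labels[:10], cents.shape)
```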

  9. General method of pattern classification using the two-domain theory

    NASA Technical Reports Server (NTRS)

    Rorvig, Mark E. (Inventor)

    1993-01-01

    Human beings judge patterns (such as images) by complex mental processes, some of which may not be known, while computing machines extract features. By representing the human judgements with simple measurements and reducing them and the machine extracted features to a common metric space and fitting them by regression, the judgements of human experts rendered on a sample of patterns may be imposed on a pattern population to provide automatic classification.
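
    A minimal sketch of the two-domain idea, under the assumption of a simple linear (least-squares) regression linking machine-extracted features to expert judgement scores; all data here are synthetic stand-ins.

```python
import numpy as np

# Machine-extracted features for a judged sample are mapped onto human judgement
# scores by regression, and the fitted map is then applied to unjudged patterns.

rng = np.random.default_rng(0)

# Machine domain: feature vectors for a small judged sample of patterns
X_sample = rng.normal(size=(50, 4))
# Human domain: expert judgement scores for the same sample (synthetic)
y_sample = X_sample @ np.array([0.8, -0.3, 0.5, 0.1]) + rng.normal(0, 0.1, 50)

# Least-squares fit linking the two domains (with an intercept term)
A = np.hstack([X_sample, np.ones((len(X_sample), 1))])
coeffs, *_ = np.linalg.lstsq(A, y_sample, rcond=None)

# Impose the experts' judgements on the wider, unjudged pattern population
X_population = rng.normal(size=(1000, 4))
predicted_scores = np.hstack([X_population, np.ones((1000, 1))]) @ coeffs
print(predicted_scores[:5])
```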

  10. Sensory Information Processing

    DTIC Science & Technology

    1977-04-01

    Excerpt (fragmentary): the deblurred image, obtained with no sensor noise, gives a good representation of the original double star; experiments were performed to test the theory and to indicate the effects of sensor noise on the performance of these procedures. Labeyrie has shown experimentally that ⟨|S(u)|²⟩ has a useful signal-to-noise ratio out to the diffraction limit of the telescope.

  11. Advanced Spectroscopic and Thermal Imaging Instrumentation for Shock Tube and Ballistic Range Facilities

    DTIC Science & Technology

    2010-04-01

    Excerpt (fragmentary): the most recent investigations for Earth and Mars atmospheres are discussed, including the Earth lunar-return case.

  12. General method of pattern classification using the two-domain theory

    NASA Technical Reports Server (NTRS)

    Rorvig, Mark E. (Inventor)

    1990-01-01

    Human beings judge patterns (such as images) by complex mental processes, some of which may not be known, while computing machines extract features. By representing the human judgements with simple measurements and reducing them and the machine extracted features to a common metric space and fitting them by regression, the judgements of human experts rendered on a sample of patterns may be imposed on a pattern population to provide automatic classification.

  13. Change detection of bitemporal multispectral images based on FCM and D-S theory

    NASA Astrophysics Data System (ADS)

    Shi, Aiye; Gao, Guirong; Shen, Shaohong

    2016-12-01

    In this paper, we propose a change detection method for bitemporal multispectral images based on D-S theory and the fuzzy c-means (FCM) algorithm. First, the uncertainty and certainty regions are determined by a thresholding method applied to the magnitude of the difference image (MDI) and the spectral angle information (SAI) of the bitemporal images. Second, the FCM algorithm is applied to the MDI and SAI in the uncertainty region, respectively. The basic probability assignment (BPA) functions of the changed and unchanged classes are then obtained from the fuzzy membership values of the FCM algorithm. In addition, the optimal value of the fuzzy exponent of FCM is adaptively determined from the degree of conflict between the MDI and SAI in the uncertainty region. Finally, D-S theory is applied to obtain a new fuzzy partition matrix for the uncertainty region, from which the change map is obtained. Experiments on bitemporal Landsat TM images and bitemporal SPOT images validate that the proposed method is effective.
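
    A minimal sketch of the evidence-fusion step, assuming a frame of discernment {changed, unchanged} with mass also placed on the whole frame; the BPA values are illustrative, whereas the paper derives them from the FCM fuzzy partition matrices.

```python
# Per-pixel basic probability assignments (BPAs), one from the MDI and one from
# the SAI, are combined with Dempster's rule of combination.

def dempster_combine(m1, m2):
    """Combine two BPAs over the frame {'changed', 'unchanged', 'theta'},
    where 'theta' denotes the whole frame (ignorance)."""
    # conflict: mass assigned to contradictory singletons
    K = m1['changed'] * m2['unchanged'] + m1['unchanged'] * m2['changed']
    m = {}
    for h in ('changed', 'unchanged'):
        m[h] = (m1[h] * m2[h] + m1[h] * m2['theta'] + m1['theta'] * m2[h]) / (1 - K)
    m['theta'] = (m1['theta'] * m2['theta']) / (1 - K)
    return m

# Illustrative BPAs for one uncertain pixel, e.g. built from FCM membership values
m_mdi = {'changed': 0.55, 'unchanged': 0.25, 'theta': 0.20}
m_sai = {'changed': 0.60, 'unchanged': 0.30, 'theta': 0.10}
fused = dempster_combine(m_mdi, m_sai)
print(fused)   # decide 'changed' vs 'unchanged' from the fused masses
```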

  14. Theory and algorithms for image reconstruction on chords and within regions of interest

    NASA Astrophysics Data System (ADS)

    Zou, Yu; Pan, Xiaochuan; Sidky, Emil Y.

    2005-11-01

    We introduce a formula for image reconstruction on a chord of a general source trajectory. We subsequently develop three algorithms for exact image reconstruction on a chord from data acquired with the general trajectory. Interestingly, two of the developed algorithms can accommodate data containing transverse truncations. The widely used helical trajectory and other trajectories discussed in the literature can be interpreted as special cases of the general trajectory, and the developed theory and algorithms are thus directly applicable to reconstructing images exactly from data acquired with these trajectories. For instance, chords on a helical trajectory are equivalent to the n-PI-line segments. In this situation, the proposed algorithms become the algorithms that we proposed previously for image reconstruction on PI-line segments. We have performed preliminary numerical studies, including image reconstruction on chords of a two-circle trajectory, which is non-smooth, and on n-PI lines of a helical trajectory, which is smooth. Quantitative results of these studies verify and demonstrate the proposed theory and algorithms.

  15. Dehazed Image Quality Assessment by Haze-Line Theory

    NASA Astrophysics Data System (ADS)

    Song, Yingchao; Luo, Haibo; Lu, Rongrong; Ma, Junkai

    2017-06-01

    Images captured in bad weather suffer from low contrast and faint color. Recently, many dehazing algorithms have been proposed to enhance visibility and restore color. However, there is a lack of evaluation metrics to assess the performance of these algorithms or to rate them. In this paper, an indicator of contrast enhancement is proposed based on the recently proposed haze-line theory. The theory assumes that the colors of a haze-free image are well approximated by a few hundred distinct colors, which form tight clusters in RGB space. The presence of haze makes each color cluster form a line, which is named a haze-line. Using these haze-lines, we assess the performance of dehazing algorithms designed to enhance contrast by measuring the inter-cluster deviations between the different colors of the dehazed image. Experimental results demonstrate that the proposed Color Contrast (CC) index correlates well with human judgments of image contrast collected in a subjective test on various scenes of dehazed images and performs better than state-of-the-art metrics.
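
    A minimal sketch of the clustering-based contrast idea, assuming k-means color clustering and a mean pairwise inter-cluster distance as the score; the cluster count and distance statistic are illustrative choices and not necessarily those of the CC index itself.

```python
import numpy as np
from sklearn.cluster import KMeans

# Cluster the colors of an image in RGB space and score contrast by how far
# apart the cluster centres lie; a well-dehazed image should score higher.

def color_contrast(image_rgb, n_clusters=50, seed=0):
    pixels = image_rgb.reshape(-1, 3).astype(np.float64)
    km = KMeans(n_clusters=n_clusters, n_init=4, random_state=seed).fit(pixels)
    centres = km.cluster_centers_
    # mean pairwise distance between distinct colour clusters
    d = np.linalg.norm(centres[:, None, :] - centres[None, :, :], axis=2)
    return d[np.triu_indices(n_clusters, k=1)].mean()

if __name__ == "__main__":
    hazy = np.random.default_rng(0).random((64, 64, 3))        # stand-in images
    dehazed = np.clip((hazy - 0.5) * 1.5 + 0.5, 0.0, 1.0)      # toy "dehazing"
    print(color_contrast(hazy), color_contrast(dehazed))       # higher = more contrast
```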

  16. Speckle statistics in adaptive optics images at visible wavelengths

    NASA Astrophysics Data System (ADS)

    Stangalini, Marco; Pedichini, Fernando; Ambrosino, Filippo; Centrone, Mauro; Del Moro, Dario

    2016-07-01

    Residual speckles in adaptive optics (AO) images represent a well-known limitation to achieving the contrast needed for the detection of faint stellar companions. Speckles in AO imagery can be the result of either residual atmospheric aberrations not corrected by the AO, or slowly evolving aberrations induced by the optical system. In this work we take advantage of new high temporal cadence (1 ms) data acquired by the SHARK forerunner experiment at the Large Binocular Telescope (LBT) to characterize the AO residual speckles at visible wavelengths. By means of an automatic identification of speckles, we study the main statistical properties of the AO residuals. In addition, we also study the memory of the process, and thus the clearance time of the atmospheric aberrations, by using information theory. This information is useful for increasing the realism of numerical simulations aimed at assessing instrumental performance, and for the application of post-processing techniques to AO imagery.

  17. Patient-Adaptive Reconstruction and Acquisition in Dynamic Imaging with Sensitivity Encoding (PARADISE)

    PubMed Central

    Sharif, Behzad; Derbyshire, J. Andrew; Faranesh, Anthony Z.; Bresler, Yoram

    2010-01-01

    MR imaging of the human heart without explicit cardiac synchronization promises to extend the applicability of cardiac MR to a larger patient population and potentially expand its diagnostic capabilities. However, conventional non-gated imaging techniques typically suffer from low image quality or inadequate spatio-temporal resolution and fidelity. Patient-Adaptive Reconstruction and Acquisition in Dynamic Imaging with Sensitivity Encoding (PARADISE) is a highly-accelerated non-gated dynamic imaging method that enables artifact-free imaging with high spatio-temporal resolutions by utilizing novel computational techniques to optimize the imaging process. In addition to using parallel imaging, the method gains acceleration from a physiologically-driven spatio-temporal support model; hence, it is doubly accelerated. The support model is patient-adaptive, i.e., its geometry depends on dynamics of the imaged slice, e.g., subject’s heart-rate and heart location within the slice. The proposed method is also doubly adaptive as it adapts both the acquisition and reconstruction schemes. Based on the theory of time-sequential sampling, the proposed framework explicitly accounts for speed limitations of gradient encoding and provides performance guarantees on achievable image quality. The presented in-vivo results demonstrate the effectiveness and feasibility of the PARADISE method for high resolution non-gated cardiac MRI during a short breath-hold. PMID:20665794

  18. Imaging quality analysis of multi-channel scanning radiometer

    NASA Astrophysics Data System (ADS)

    Fan, Hong; Xu, Wujun; Wang, Chengliang

    2008-03-01

    The multi-channel scanning radiometer on board the FY-2 geostationary meteorological satellite plays a key role in remote sensing because of its wide field of view and its continuous acquisition of multi-spectral images. It is important to evaluate image quality after the performance parameters of the imaging system have been validated. Several methods of evaluating imaging quality are discussed; of these, the most fundamental is the MTF. The MTF of a photoelectric scanning remote sensing instrument in the scanning direction is the product of the optics transfer function (OTF), the detector transfer function (DTF) and the electronics transfer function (ETF). For image motion compensation, the moving speed of the scanning mirror should be considered. The optical MTF measurement is performed in both the EAST/WEST and NORTH/SOUTH directions; the values are used for alignment purposes and to determine the general health of the instrument during integration and testing. Imaging systems cannot perfectly reproduce what they see and end up "blurring" the image. Many parts of the imaging system can cause blurring, among them the optical elements, the sampling of the detector itself, post-processing, or the earth's atmosphere for systems that image through it. Through theoretical calculation and actual measurement, it is shown that the DTF and ETF are the main factors in the system MTF and that the imaging quality satisfies the instrument design requirements.
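
    A minimal sketch of the cascaded-MTF relation in the scan direction, using generic textbook component models (Gaussian optics blur, sinc detector aperture, first-order electronics roll-off) rather than the radiometer's measured curves.

```python
import numpy as np

# System MTF in the scan direction modelled as the product of optics (OTF),
# detector (DTF) and electronics (ETF) transfer functions. The component models
# below are generic illustrative forms, not the instrument's actual curves.

f = np.linspace(0.0, 1.0, 201)             # normalized spatial frequency

otf = np.exp(-(f / 0.6) ** 2)              # assumed optics blur
dtf = np.abs(np.sinc(f))                   # ideal square detector aperture
etf = 1.0 / np.sqrt(1.0 + (f / 0.8) ** 2)  # assumed electronics roll-off

system_mtf = otf * dtf * etf
print(f"system MTF at f = 0.5: {np.interp(0.5, f, system_mtf):.3f}")
```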

  19. Remote Sensing of Landscapes with Spectral Images

    NASA Astrophysics Data System (ADS)

    Adams, John B.; Gillespie, Alan R.

    2006-05-01

    Remote Sensing of Landscapes with Spectral Images describes how to process and interpret spectral images using physical models to bridge the gap between the engineering and theoretical sides of remote sensing and the world that we encounter when we venture outdoors. The emphasis is on the practical use of images rather than on theory and mathematical derivations. Examples are drawn from a variety of landscapes and interpretations are tested against the reality seen on the ground. The reader is led through analysis of real images (using figures and explanations); the examples are chosen to illustrate important aspects of the analytic framework. This textbook will form a valuable reference for graduate students and professionals in a variety of disciplines including ecology, forestry, geology, geography, urban planning, archeology and civil engineering. The book presents a coherent view of practical remote sensing, leading from imaging and field work to the generation of useful thematic maps, and explains how to apply physical models to help interpret spectral images. It is supplemented by a website hosting digital colour versions of the figures in the book as well as ancillary images (www.cambridge.org/9780521662214).

  20. Downscaling remotely sensed imagery using area-to-point cokriging and multiple-point geostatistical simulation

    NASA Astrophysics Data System (ADS)

    Tang, Yunwei; Atkinson, Peter M.; Zhang, Jingxiong

    2015-03-01

    A cross-scale data integration method was developed and tested based on the theory of geostatistics and multiple-point geostatistics (MPG). The goal was to downscale remotely sensed images while retaining spatial structure by integrating images at different spatial resolutions. During the downscaling process, a rich spatial correlation model in the form of a training image was incorporated to facilitate the reproduction of similar local patterns in the simulated images. Area-to-point cokriging (ATPCK) was used as a locally varying mean (LVM) (i.e., soft data) to deal with the change of support problem (COSP) for cross-scale integration, which MPG cannot achieve alone. Several pairs of spectral bands of remotely sensed images were tested for integration within different cross-scale case studies. The experiments show that MPG can restore the spatial structure of the image at a fine spatial resolution given the training image and conditioning data. The super-resolution image can be predicted using the proposed method, which cannot be realised using most data integration methods. The results show that the ATPCK-MPG approach can achieve greater accuracy than methods which do not account for the change of support issue.

  1. Spatio-temporal Hotelling observer for signal detection from image sequences

    PubMed Central

    Caucci, Luca; Barrett, Harrison H.; Rodríguez, Jeffrey J.

    2010-01-01

    Detection of signals in noisy images is necessary in many applications, including astronomy and medical imaging. The optimal linear observer for performing a detection task, called the Hotelling observer in the medical literature, can be regarded as a generalization of the familiar prewhitening matched filter. Performance on the detection task is limited by randomness in the image data, which stems from randomness in the object, randomness in the imaging system, and randomness in the detector outputs due to photon and readout noise, and the Hotelling observer accounts for all of these effects in an optimal way. If multiple temporal frames of images are acquired, the resulting data set is a spatio-temporal random process, and the Hotelling observer becomes a spatio-temporal linear operator. This paper discusses the theory of the spatio-temporal Hotelling observer and estimation of the required spatio-temporal covariance matrices. It also presents a parallel implementation of the observer on a cluster of Sony PLAYSTATION 3 gaming consoles. As an example, we consider the use of the spatio-temporal Hotelling observer for exoplanet detection. PMID:19550494
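
    A minimal sketch of the Hotelling observer on stacked spatio-temporal data, assuming toy dimensions and training sets; real covariance estimation for image sequences requires far more care than shown here.

```python
import numpy as np

# Image sequences are stacked into a single vector; the Hotelling template is
# the covariance-whitened difference of the class means, and the scalar test
# statistic is the template's inner product with the data vector.

rng = np.random.default_rng(0)
n_pix, n_frames, n_train = 16, 4, 500
d = n_pix * n_frames                                   # stacked spatio-temporal dimension

signal = np.zeros(d); signal[0] = 0.5                  # weak, known signal
absent = rng.normal(size=(n_train, d))                 # signal-absent training data
present = absent + signal                              # signal-present training data

K = np.cov(np.vstack([absent, present - signal]).T)    # pooled covariance estimate
w = np.linalg.solve(K, present.mean(0) - absent.mean(0))   # Hotelling template

g = rng.normal(size=d) + signal                        # one test image sequence
t = w @ g                                              # compare t to a threshold
print(f"test statistic: {t:.3f}")
```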

  2. Spatio-temporal Hotelling observer for signal detection from image sequences.

    PubMed

    Caucci, Luca; Barrett, Harrison H; Rodriguez, Jeffrey J

    2009-06-22

    Detection of signals in noisy images is necessary in many applications, including astronomy and medical imaging. The optimal linear observer for performing a detection task, called the Hotelling observer in the medical literature, can be regarded as a generalization of the familiar prewhitening matched filter. Performance on the detection task is limited by randomness in the image data, which stems from randomness in the object, randomness in the imaging system, and randomness in the detector outputs due to photon and readout noise, and the Hotelling observer accounts for all of these effects in an optimal way. If multiple temporal frames of images are acquired, the resulting data set is a spatio-temporal random process, and the Hotelling observer becomes a spatio-temporal linear operator. This paper discusses the theory of the spatio-temporal Hotelling observer and estimation of the required spatio-temporal covariance matrices. It also presents a parallel implementation of the observer on a cluster of Sony PLAYSTATION 3 gaming consoles. As an example, we consider the use of the spatio-temporal Hotelling observer for exoplanet detection.

  3. Sparse radar imaging using 2D compressed sensing

    NASA Astrophysics Data System (ADS)

    Hou, Qingkai; Liu, Yang; Chen, Zengping; Su, Shaoying

    2014-10-01

    Radar imaging is an ill-posed linear inverse problem, and compressed sensing (CS) has been shown to have tremendous potential in this field. This paper surveys the theory of radar imaging and concludes that ISAR imaging can be expressed mathematically as a 2D sparse decomposition problem. Based on CS, we propose a novel measurement strategy for ISAR imaging radar that uses random sub-sampling in both the range and azimuth dimensions, which reduces the amount of sampled data tremendously. In handling the 2D reconstruction problem, the ordinary solution is to convert the 2D problem into 1D via the Kronecker product, which sharply increases the dictionary size and the computational cost. In this paper, we introduce the 2D-SL0 algorithm into the image reconstruction. It is shown that 2D-SL0 achieves results equivalent to other 1D reconstruction methods while significantly reducing computational complexity and memory usage. Moreover, we present the results of simulation experiments that demonstrate the effectiveness and feasibility of our method.
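
    A minimal sketch of the 2D measurement model and its Kronecker-product equivalence, which is the point exploited by 2D solvers such as 2D-SL0; matrix sizes are toy values for illustration only.

```python
import numpy as np

# Sub-sampling in range and azimuth can be written as Y = A X B^T, which equals
# the Kronecker form vec(Y) = (B kron A) vec(X) without ever building the huge
# Kronecker dictionary explicitly.

rng = np.random.default_rng(0)
n_range, n_azimuth = 64, 64
m_range, m_azimuth = 32, 24                       # randomly sub-sampled measurements

A = rng.normal(size=(m_range, n_range))           # range-dimension sensing matrix
B = rng.normal(size=(m_azimuth, n_azimuth))       # azimuth-dimension sensing matrix
X = np.zeros((n_range, n_azimuth))
X[10, 20] = 1.0; X[40, 5] = -0.7                  # sparse scene (point scatterers)

Y_2d = A @ X @ B.T                                # 2-D form used by 2D-SL0-style solvers
Y_kron = (np.kron(B, A) @ X.flatten(order="F")).reshape((m_range, m_azimuth), order="F")

print(np.allclose(Y_2d, Y_kron))                  # True: same measurements, far less memory
```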

  4. Two-level image authentication by two-step phase-shifting interferometry and compressive sensing

    NASA Astrophysics Data System (ADS)

    Zhang, Xue; Meng, Xiangfeng; Yin, Yongkai; Yang, Xiulun; Wang, Yurong; Li, Xianye; Peng, Xiang; He, Wenqi; Dong, Guoyan; Chen, Hongyi

    2018-01-01

    A two-level image authentication method is proposed; the method is based on two-step phase-shifting interferometry, double random phase encoding, and compressive sensing (CS) theory, by which the certification image can be encoded into two interferograms. Through discrete wavelet transform (DWT), sparseness processing, Arnold transform, and data compression, two compressed signals can be generated and delivered to two different participants of the authentication system. Only the participant who possesses the first compressed signal can attempt to pass the low-level authentication. Application of the Orthogonal Matching Pursuit CS reconstruction algorithm, inverse Arnold transform, inverse DWT, two-step phase-shifting wavefront reconstruction, and inverse Fresnel transform results in a remarkable peak in the central location of the nonlinear correlation coefficient distribution between the recovered image and the standard certification image. Then, the other participant, who possesses the second compressed signal, is authorized to carry out the high-level authentication. Therefore, both compressed signals are collected to reconstruct the original meaningful certification image with a high correlation coefficient. Theoretical analysis and numerical simulations verify the feasibility of the proposed method.

  5. Multimodal computational microscopy based on transport of intensity equation

    NASA Astrophysics Data System (ADS)

    Li, Jiaji; Chen, Qian; Sun, Jiasong; Zhang, Jialin; Zuo, Chao

    2016-12-01

    The transport of intensity equation (TIE) is a powerful tool for phase retrieval and quantitative phase imaging, which requires intensity measurements only at axially closely spaced planes, without a separate reference beam. It does not require coherent illumination and works well on conventional bright-field microscopes. The quantitative phase reconstructed by TIE gives valuable information that has been encoded in the complex wave field by passage through a sample of interest. Such information may provide tremendous flexibility to emulate various microscopy modalities computationally without requiring specialized hardware components. We develop the requisite theory to describe such a hybrid computational multimodal imaging system, which yields quantitative phase, Zernike phase contrast, differential interference contrast, and light field moment imaging simultaneously, making various observations of biomedical samples easy to obtain. We then give an experimental demonstration of these ideas by time-lapse imaging of live HeLa cell mitosis. The experimental results verify that a tunable lens-based TIE system, combined with an appropriate postprocessing algorithm, can achieve a variety of promising imaging modalities in parallel with the quantitative phase images for the dynamic study of cellular processes.
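
    For reference, the standard form of the transport of intensity equation that such systems solve from intensity measurements at closely spaced planes (generic notation, with k the wavenumber):

```latex
% Transport of intensity equation: the axial intensity derivative is linked to
% the transverse divergence of the intensity-weighted phase gradient.
\[
  -k\,\frac{\partial I(\mathbf{r}_\perp, z)}{\partial z}
  \;=\;
  \nabla_\perp \cdot \bigl[ I(\mathbf{r}_\perp, z)\, \nabla_\perp \phi(\mathbf{r}_\perp, z) \bigr]
\]
```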

  6. Developmental dyscalculia is related to visuo-spatial memory and inhibition impairment.

    PubMed

    Szucs, Denes; Devine, Amy; Soltesz, Fruzsina; Nobes, Alison; Gabriel, Florence

    2013-01-01

    Developmental dyscalculia is thought to be a specific impairment of mathematics ability. Currently dominant cognitive neuroscience theories of developmental dyscalculia suggest that it originates from the impairment of the magnitude representation of the human brain, residing in the intraparietal sulcus, or from impaired connections between number symbols and the magnitude representation. However, behavioral research offers several alternative theories for developmental dyscalculia and neuro-imaging also suggests that impairments in developmental dyscalculia may be linked to disruptions of other functions of the intraparietal sulcus than the magnitude representation. Strikingly, the magnitude representation theory has never been explicitly contrasted with a range of alternatives in a systematic fashion. Here we have filled this gap by directly contrasting five alternative theories (magnitude representation, working memory, inhibition, attention and spatial processing) of developmental dyscalculia in 9-10-year-old primary school children. Participants were selected from a pool of 1004 children and took part in 16 tests and nine experiments. The dominant features of developmental dyscalculia are visuo-spatial working memory, visuo-spatial short-term memory and inhibitory function (interference suppression) impairment. We hypothesize that inhibition impairment is related to the disruption of central executive memory function. Potential problems of visuo-spatial processing and attentional function in developmental dyscalculia probably depend on short-term memory/working memory and inhibition impairments. The magnitude representation theory of developmental dyscalculia was not supported. Copyright © 2013 The Authors. Published by Elsevier Ltd.. All rights reserved.

  7. Modeling the depth-sectioning effect in reflection-mode dynamic speckle-field interferometric microscopy

    PubMed Central

    Zhou, Renjie; Jin, Di; Hosseini, Poorya; Singh, Vijay Raj; Kim, Yang-hyo; Kuang, Cuifang; Dasari, Ramachandra R.; Yaqoob, Zahid; So, Peter T. C.

    2017-01-01

    Unlike most optical coherence microscopy (OCM) systems, dynamic speckle-field interferometric microscopy (DSIM) achieves depth sectioning through the spatial-coherence gating effect. Under high numerical aperture (NA) speckle-field illumination, our previous experiments have demonstrated less than 1 μm depth resolution in reflection-mode DSIM, while doubling the diffraction limited resolution as under structured illumination. However, there has not been a physical model to rigorously describe the speckle imaging process, in particular explaining the sectioning effect under high illumination and imaging NA settings in DSIM. In this paper, we develop such a model based on the diffraction tomography theory and the speckle statistics. Using this model, we calculate the system response function, which is used to further obtain the depth resolution limit in reflection-mode DSIM. Theoretically calculated depth resolution limit is in an excellent agreement with experiment results. We envision that our physical model will not only help in understanding the imaging process in DSIM, but also enable better designing such systems for depth-resolved measurements in biological cells and tissues. PMID:28085800

  8. Salient contour extraction from complex natural scene in night vision image

    NASA Astrophysics Data System (ADS)

    Han, Jing; Yue, Jiang; Zhang, Yi; Bai, Lian-fa

    2014-03-01

    The theory of center-surround interaction in the non-classical receptive field can be applied to night vision information processing. In this work, an optimized compound receptive field modulation method is proposed to extract salient contours from complex natural scenes in low-light-level (LLL) and infrared images. The key idea is that multi-feature analysis can recognize the inhomogeneity in modulatory coverage more accurately, and that a center and surround whose grouping structure satisfies the Gestalt rule deserve a high connection probability. Computationally, a multi-feature contrast weighted inhibition model is presented to suppress background and lower the mutual inhibition among contour elements; a fuzzy connection facilitation model is proposed to achieve the enhancement of contour response, the connection of discontinuous contours and the further elimination of randomly distributed noise and texture; and a multi-scale iterative attention method is designed to accomplish the dynamic modulation process and extract contours of targets of multiple sizes. This work provides a series of biologically motivated computational visual models with high performance for contour detection from cluttered scenes in night vision images.

  9. An innovative and shared methodology for event reconstruction using images in forensic science.

    PubMed

    Milliet, Quentin; Jendly, Manon; Delémont, Olivier

    2015-09-01

    This study presents an innovative methodology for forensic science image analysis for event reconstruction. The methodology is based on experiences from real cases. It provides real added value to technical guidelines such as standard operating procedures (SOPs) and enriches the community of practices at stake in this field. This bottom-up solution outlines the many facets of analysis and the complexity of the decision-making process. Additionally, the methodology provides a backbone for articulating more detailed and technical procedures and SOPs. It emerged from a grounded theory approach; data from individual and collective interviews with eight Swiss and nine European forensic image analysis experts were collected and interpreted in a continuous, circular and reflexive manner. Throughout the process of conducting interviews and panel discussions, similarities and discrepancies were discussed in detail to provide a comprehensive picture of practices and points of view and to ultimately formalise shared know-how. Our contribution sheds light on the complexity of the choices, actions and interactions along the path of data collection and analysis, enhancing both the researchers' and participants' reflexivity. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  10. Synthesis of Systemic Functional Theory & Dynamical Systems Theory for Socio-Cultural Modeling

    DTIC Science & Technology

    2011-01-26

    Excerpt (fragmentary): language and other resources (e.g. images and sound) are conceptualised as inter-locking systems of meaning, organised across hierarchical ranks and strata (e.g. sounds, word groups, clauses, and complex discourse structures in language, and elements, figures and episodes in images), providing an integrating platform for describing how language and other resources work together to fulfil particular objectives.

  11. Generalized Wideband Harmonic Imaging of Nonlinearly Loaded Scatterers: Theory, Analysis, and Application for Forward-Looking Radar Target Detection

    DTIC Science & Technology

    2014-09-01

    Excerpt (fragmentary): the formulation covers small-signal operations and is general enough to accommodate high-power (large-signal) sensing as well, which may be needed to detect targets. (Army Research Laboratory, Adelphi, MD, report ARL-TR-7121, September 2014.)

  12. Superior temporal sulcus--It's my area: or is it?

    PubMed

    Hein, Grit; Knight, Robert T

    2008-12-01

    The superior temporal sulcus (STS) is the chameleon of the human brain. Several research areas claim the STS as the host brain region for their particular behavior of interest. Some see it as one of the core structures for theory of mind. For others, it is the main region for audiovisual integration. It plays an important role in biological motion perception, but is also claimed to be essential for speech processing and processing of faces. We review the foci of activations in the STS from multiple functional magnetic resonance imaging studies, focusing on theory of mind, audiovisual integration, motion processing, speech processing, and face processing. The results indicate a differentiation of the STS region in an anterior portion, mainly involved in speech processing, and a posterior portion recruited by cognitive demands of all these different research areas. The latter finding argues against a strict functional subdivision of the STS. In line with anatomical evidence from tracer studies, we propose that the function of the STS varies depending on the nature of network coactivations with different regions in the frontal cortex and medial-temporal lobe. This view is more in keeping with the notion that the same brain region can support different cognitive operations depending on task-dependent network connections, emphasizing the role of network connectivity analysis in neuroimaging.

  13. Lexical Processing in Toddlers with ASD: Does Weak Central Coherence Play a Role?

    PubMed Central

    Weismer, Susan Ellis; Haebig, Eileen; Edwards, Jan; Saffran, Jenny; Venker, Courtney E.

    2016-01-01

    This study investigated whether vocabulary delays in toddlers with autism spectrum disorders (ASD) can be explained by a cognitive style that prioritizes processing of detailed, local features of input over global contextual integration – as claimed by the weak central coherence (WCC) theory. Thirty toddlers with ASD and 30 younger, cognition-matched typical controls participated in a looking-while-listening task that assessed whether perceptual or semantic similarities among named images disrupted word recognition relative to a neutral condition. Overlap of perceptual features invited local processing whereas semantic overlap invited global processing. With the possible exception of a subset of toddlers who had very low vocabulary skills, these results provide no evidence that WCC is characteristic of lexical processing in toddlers with ASD. PMID:27696177

  14. The current ability to test theories of gravity with black hole shadows

    NASA Astrophysics Data System (ADS)

    Mizuno, Yosuke; Younsi, Ziri; Fromm, Christian M.; Porth, Oliver; De Laurentis, Mariafelicia; Olivares, Hector; Falcke, Heino; Kramer, Michael; Rezzolla, Luciano

    2018-04-01

    Our Galactic Centre, Sagittarius A*, is believed to harbour a supermassive black hole, as suggested by observations tracking individual orbiting stars1,2. Upcoming submillimetre very-long baseline interferometry images of Sagittarius A* carried out by the Event Horizon Telescope collaboration (EHTC)3,4 are expected to provide critical evidence for the existence of this supermassive black hole5,6. We assess our present ability to use EHTC images to determine whether they correspond to a Kerr black hole as predicted by Einstein's theory of general relativity or to a black hole in alternative theories of gravity. To this end, we perform general-relativistic magnetohydrodynamical simulations and use general-relativistic radiative-transfer calculations to generate synthetic shadow images of a magnetized accretion flow onto a Kerr black hole. In addition, we perform these simulations and calculations for a dilaton black hole, which we take as a representative solution of an alternative theory of gravity. Adopting the very-long baseline interferometry configuration from the 2017 EHTC campaign, we find that it could be extremely difficult to distinguish between black holes from different theories of gravity, thus highlighting that great caution is needed when interpreting black hole images as tests of general relativity.

  15. The electromagnetic-trait imaging computation of traveling wave method in breast tumor microwave sensor system.

    PubMed

    Tao, Zhi-Fu; Han, Zhong-Ling; Yao, Meng

    2011-01-01

    Using the difference in dielectric constant between malignant tumor tissue and normal breast tissue, the breast tumor microwave sensor system (BRATUMASS) determines the electromagnetic-trait image of the detected target by analyzing the properties of the back wave from the target tissue obtained after near-field microwave irradiation. The key to relating the obtained target properties and reconstructing the detected space is to analyze the characteristics of the whole process from microwave transmission to back-wave reception. Using the traveling wave method, we derive the spatial transmission properties and the relationship between the distances of the detected points, and evaluate the properties of each unit by statistical evaluation theory. This chapter gives the experimental data analysis results.

  16. Computational characterization of ordered nanostructured surfaces

    NASA Astrophysics Data System (ADS)

    Mohieddin Abukhdeir, Nasser

    2016-08-01

    A vital and challenging task for materials researchers is to determine relationships between material characteristics and desired properties. While the measurement and assessment of material properties can be complex, quantitatively characterizing their structure is frequently a more challenging task. This issue is magnified for materials researchers in the areas of nanoscience and nanotechnology, where material structure is further complicated by phenomena such as self-assembly, collective behavior, and measurement uncertainty. Recent progress has been made in this area for both self-assembled and nanostructured surfaces due to increasing accessibility of imaging techniques at the nanoscale. In this context, recent advances in nanomaterial surface structure characterization are reviewed including the development of new theory and image processing methods.

  17. Nanometer-scale sizing accuracy of particle suspensions on an unmodified cell phone using elastic light scattering.

    PubMed

    Smith, Zachary J; Chu, Kaiqin; Wachsmann-Hogiu, Sebastian

    2012-01-01

    We report on the construction of a Fourier plane imaging system attached to a cell phone. By illuminating particle suspensions with a collimated beam from an inexpensive diode laser, angularly resolved scattering patterns are imaged by the phone's camera. Analyzing these patterns with Mie theory results in predictions of size distributions of the particles in suspension. Despite using consumer grade electronics, we extracted size distributions of sphere suspensions with better than 20 nm accuracy in determining the mean size. We also show results from milk, yeast, and blood cells. Performing these measurements on a portable device presents opportunities for field-testing of food quality, process monitoring, and medical diagnosis.
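
    A minimal sketch of the fitting loop described above, with a toy diffraction-like forward model standing in for a real Mie solver so that the example runs; in the actual system the forward model is Mie theory.

```python
import numpy as np

# The recorded angular scattering pattern is fitted against a forward model over
# candidate particle diameters; the best match gives the size estimate.
# toy_pattern is only a runnable stand-in for a real Mie scattering calculation.

def toy_pattern(angles_rad, diameter_um, wavelength_um=0.65):
    """Stand-in forward model: the size parameter sets how fast the lobes oscillate."""
    x = np.pi * diameter_um / wavelength_um
    return np.sinc(x * np.sin(angles_rad) / np.pi) ** 2 + 1e-6

def estimate_diameter(angles_rad, measured, candidates_um):
    """Grid-search least-squares fit of the (normalized) angular pattern."""
    errs = [np.sum((measured / measured.max()
                    - toy_pattern(angles_rad, d) / toy_pattern(angles_rad, d).max()) ** 2)
            for d in candidates_um]
    return candidates_um[int(np.argmin(errs))]

angles = np.deg2rad(np.linspace(5, 45, 200))
truth = toy_pattern(angles, 1.2) * (1 + 0.02 * np.random.default_rng(0).normal(size=200))
print(estimate_diameter(angles, truth, np.linspace(0.5, 2.0, 151)))   # ~1.2 um
```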

  18. Smart light random memory sprays Retinex: a fast Retinex implementation for high-quality brightness adjustment and color correction.

    PubMed

    Banić, Nikola; Lončarić, Sven

    2015-11-01

    Removing the influence of illumination on image colors and adjusting the brightness across the scene are important image enhancement problems. This is achieved by applying adequate color constancy and brightness adjustment methods. One of the earliest models to deal with both of these problems was the Retinex theory. Some of the Retinex implementations tend to give high-quality results by performing local operations, but they are computationally relatively slow. One of the recent Retinex implementations is light random sprays Retinex (LRSR). In this paper, a new method is proposed for brightness adjustment and color correction that overcomes the main disadvantages of LRSR. There are three main contributions of this paper. First, a concept of memory sprays is proposed to reduce the number of LRSR's per-pixel operations to a constant regardless of the parameter values, thereby enabling a fast Retinex-based local image enhancement. Second, an effective remapping of image intensities is proposed that results in significantly higher quality. Third, the problem of LRSR's halo effect is significantly reduced by using an alternative illumination processing method. The proposed method enables a fast Retinex-based image enhancement by processing Retinex paths in a constant number of steps regardless of the path size. Due to the halo effect removal and remapping of the resulting intensities, the method outperforms many of the well-known image enhancement methods in terms of resulting image quality. The results are presented and discussed. It is shown that the proposed method outperforms most of the tested methods in terms of image brightness adjustment, color correction, and computational speed.

  19. The representation of abstract words: why emotion matters.

    PubMed

    Kousta, Stavroula-Thaleia; Vigliocco, Gabriella; Vinson, David P; Andrews, Mark; Del Campo, Elena

    2011-02-01

    Although much is known about the representation and processing of concrete concepts, knowledge of what abstract semantics might be is severely limited. In this article we first address the adequacy of the 2 dominant accounts (dual coding theory and the context availability model) put forward in order to explain representation and processing differences between concrete and abstract words. We find that neither proposal can account for experimental findings and that this is, at least partly, because abstract words are considered to be unrelated to experiential information in both of these accounts. We then address a particular type of experiential information, emotional content, and demonstrate that it plays a crucial role in the processing and representation of abstract concepts: Statistically, abstract words are more emotionally valenced than are concrete words, and this accounts for a residual latency advantage for abstract words, when variables such as imageability (a construct derived from dual coding theory) and rated context availability are held constant. We conclude with a discussion of our novel hypothesis for embodied abstract semantics. (c) 2010 APA, all rights reserved.

  20. No strong evidence for abnormal levels of dysfunctional attitudes, automatic thoughts, and emotional information-processing biases in remitted bipolar I affective disorder.

    PubMed

    Lex, Claudia; Meyer, Thomas D; Marquart, Barbara; Thau, Kenneth

    2008-03-01

    Beck extended his original cognitive theory of depression by suggesting that mania was a mirror image of depression, characterized by extremely positive cognitions about the self, the world, and the future. However, there were no suggestions as to what might be special about cognitive features in bipolar patients (Mansell & Scott, 2006). We therefore used different indicators to evaluate cognitive processes in bipolar patients and healthy controls. We compared 19 remitted bipolar I patients (BPs) without any Axis I comorbidity with 19 healthy individuals (CG). All participants completed the Beck Depression Inventory, the Dysfunctional Attitude Scale, the Automatic Thoughts Questionnaire, the Emotional Stroop Test, and an incidental recall task. No significant group differences were found in automatic thinking or in information-processing styles (Emotional Stroop Test, incidental recall task). Regarding dysfunctional attitudes, we obtained ambiguous results. It appears that individuals with remitted bipolar affective disorder do not show the cognitive vulnerability proposed in Beck's theory of depression if they report only subthreshold levels of depressive symptoms. Perhaps the cognitive vulnerability might only be observable if mood induction procedures are used.

  1. Exact image theory for the problem of dielectric/magnetic slab

    NASA Technical Reports Server (NTRS)

    Lindell, I. V.

    1987-01-01

    Exact image method, recently introduced for the exact solution of electromagnetic field problems involving homogeneous half spaces and microstrip-like geometries, is developed for the problem of homogeneous slab of dielectric and/or magnetic material in free space. Expressions for image sources, creating the exact reflected and transmitted fields, are given and their numerical evaluation is demonstrated. Nonradiating modes, guided by the slab and responsible for the loss of convergence of the image functions, are considered and extracted. The theory allows, for example, an analysis of finite ground planes in microstrip antenna structures.

  2. An information theory of image gathering

    NASA Technical Reports Server (NTRS)

    Fales, Carl L.; Huck, Friedrich O.

    1991-01-01

    Shannon's mathematical theory of communication is extended to image gathering. Expressions are obtained for the total information that is received with a single image-gathering channel and with parallel channels. It is concluded that the aliased signal components carry information even though these components interfere with the within-passband components in conventional image gathering and restoration, thereby degrading the fidelity and visual quality of the restored image. An examination of the expression for minimum mean-square-error, or Wiener-matrix, restoration from parallel image-gathering channels reveals a method for unscrambling the within-passband and aliased signal components to restore spatial frequencies beyond the sampling passband out to the spatial frequency response cutoff of the optical aperture.
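
    For reference, the standard single-channel Wiener (minimum mean-square-error) restoration filter in generic notation; the Wiener-matrix restoration from parallel channels discussed above generalizes this single-channel form:

```latex
% Single-channel Wiener restoration: tau is the image-gathering system's
% frequency response, Phi_L and Phi_N the radiance-field and noise power
% spectra, and G the spectrum of the acquired signal.
\[
  \hat{F}(\nu) \;=\;
  \frac{\tau^{*}(\nu)\,\Phi_{L}(\nu)}
       {\lvert\tau(\nu)\rvert^{2}\,\Phi_{L}(\nu) + \Phi_{N}(\nu)}\; G(\nu)
\]
```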

  3. a Band Selection Method for High Precision Registration of Hyperspectral Image

    NASA Astrophysics Data System (ADS)

    Yang, H.; Li, X.

    2018-04-01

    During the registration of hyperspectral images with high spatial resolution images, the large number of bands in a hyperspectral image makes it difficult to select bands with good registration performance, and poorly chosen bands can reduce matching speed and accuracy. To solve this problem, an algorithm based on Cramér-Rao lower bound (CRLB) theory is proposed in this paper to select good matching bands. The algorithm applies CRLB theory to the study of registration accuracy and selects good matching bands from the CRLB parameters. Experiments show that the algorithm can choose good matching bands and provide better data for the registration of hyperspectral and high spatial resolution images.
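
    For reference, the standard Cramér-Rao lower bound on which such a band-selection criterion rests (generic notation; the paper's specific CRLB parameters for registration are not reproduced here):

```latex
% Cramér-Rao lower bound: the covariance of any unbiased estimator of the
% registration parameters theta is bounded below by the inverse Fisher
% information matrix computed from the data likelihood p(g; theta).
\[
  \operatorname{cov}\!\bigl(\hat{\boldsymbol\theta}\bigr) \;\succeq\; \mathbf{I}(\boldsymbol\theta)^{-1},
  \qquad
  \bigl[\mathbf{I}(\boldsymbol\theta)\bigr]_{ij}
  = \mathbb{E}\!\left[
      \frac{\partial \ln p(\mathbf{g};\boldsymbol\theta)}{\partial \theta_i}\,
      \frac{\partial \ln p(\mathbf{g};\boldsymbol\theta)}{\partial \theta_j}
    \right]
\]
```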

  4. Temporal lobe abnormalities in semantic processing by criminal psychopaths as revealed by functional magnetic resonance imaging.

    PubMed

    Kiehl, Kent A; Smith, Andra M; Mendrek, Adrianna; Forster, Bruce B; Hare, Robert D; Liddle, Peter F

    2004-04-30

    We tested the hypothesis that psychopathy is associated with abnormalities in semantic processing of linguistic information. Functional magnetic resonance imaging (fMRI) was used to elucidate and characterize the neural architecture underlying lexico-semantic processes in criminal psychopathic individuals and in a group of matched control participants. Participants performed a lexical decision task in which blocks of linguistic stimuli alternated with a resting baseline condition. In each lexical decision block, the stimuli were either concrete words and pseudowords or abstract words and pseudowords. Consistent with our hypothesis, psychopathic individuals, relative to controls, showed poorer behavioral performance for processing abstract words. Analysis of the fMRI data for both groups indicated that processing of word stimuli, compared with the resting baseline condition, was associated with neural activation in bilateral fusiform gyrus, anterior cingulate, left middle temporal gyrus, right posterior superior temporal gyrus, and left and right inferior frontal gyrus. Analyses confirmed our prediction that psychopathic individuals would fail to show the appropriate neural differentiation between abstract and concrete stimuli in the right anterior temporal gyrus and surrounding cortex. The results are consistent with other studies of semantic processing in psychopathy and support the theory that psychopathy is associated with right hemisphere abnormalities for processing conceptually abstract material.

  5. Temporal lobe abnormalities in semantic processing by criminal psychopaths as revealed by functional magnetic resonance imaging.

    PubMed

    Kiehl, Kent A; Smith, Andra M; Mendrek, Adrianna; Forster, Bruce B; Hare, Robert D; Liddle, Peter F

    2004-01-15

    We tested the hypothesis that psychopathy is associated with abnormalities in semantic processing of linguistic information. Functional magnetic resonance imaging (fMRI) was used to elucidate and characterize the neural architecture underlying lexico-semantic processes in criminal psychopathic individuals and in a group of matched control participants. Participants performed a lexical decision task in which blocks of linguistic stimuli alternated with a resting baseline condition. In each lexical decision block, the stimuli were either concrete words and pseudowords or abstract words and pseudowords. Consistent with our hypothesis, psychopathic individuals, relative to controls, showed poorer behavioral performance for processing abstract words. Analysis of the fMRI data for both groups indicated that processing of word stimuli, compared with the resting baseline condition, was associated with neural activation in bilateral fusiform gyrus, anterior cingulate, left middle temporal gyrus, right posterior superior temporal gyrus, and left and right inferior frontal gyrus. Analyses confirmed our prediction that psychopathic individuals would fail to show the appropriate neural differentiation between abstract and concrete stimuli in the right anterior temporal gyrus and surrounding cortex. The results are consistent with other studies of semantic processing in psychopathy and support the theory that psychopathy is associated with right hemisphere abnormalities for processing conceptually abstract material.

  6. Long-Range Reduced Predictive Information Transfers of Autistic Youths in EEG Sensor-Space During Face Processing.

    PubMed

    Khadem, Ali; Hossein-Zadeh, Gholam-Ali; Khorrami, Anahita

    2016-03-01

    The majority of previous functional/effective connectivity studies of autistic patients converge on the underconnectivity theory of ASD: "long-range underconnectivity and sometimes short-range overconnectivity". However, to the best of our knowledge the total (linear and nonlinear) predictive information transfers (PITs) of autistic patients have not yet been investigated, and EEG data have rarely been used to explore information-processing deficits in autistic subjects. This study compares the total (linear and nonlinear) PITs of autistic and typically developing healthy youths during human face processing using EEG data. The ERPs of 12 autistic youths and 19 age-matched healthy control (HC) subjects were recorded while they watched upright and inverted human face images. The PITs among EEG channels were quantified using two measures separately: transfer entropy with self-prediction optimality (TESPO) and modified transfer entropy with self-prediction optimality (MTESPO). Directed differential connectivity graphs (dDCGs) were then constructed to characterize significant changes in the estimated PITs of autistic subjects compared with HC subjects. Both TESPO and MTESPO revealed a long-range reduction of PITs in the ASD group during face processing (particularly from frontal channels to right temporal channels). The orientation of the face images (upright or inverted) did not significantly modulate the binary pattern of the PIT-based dDCGs. Moreover, compared with TESPO, the results of MTESPO were more compatible with the underconnectivity theory of ASD, in the sense that MTESPO showed no long-range increase in PIT. To the best of our knowledge, this is also the first application of a version of MTE to patient data (here, ASD) and its first use for EEG analysis.
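
    As an illustration of the kind of quantity being estimated, the sketch below computes a plain lag-1, histogram-based transfer entropy between two signals. The TESPO/MTESPO estimators used in the study add self-prediction optimization and modifications that are not reproduced here, so treat this only as a minimal stand-in.

      import numpy as np

      def transfer_entropy(x, y, n_bins=8):
          # Plug-in (histogram) estimate of TE(x -> y) in bits, lag 1:
          # TE = sum p(y1, y0, x0) * log2[ p(y1 | y0, x0) / p(y1 | y0) ]
          xd = np.digitize(x, np.histogram_bin_edges(x, n_bins)[1:-1])
          yd = np.digitize(y, np.histogram_bin_edges(y, n_bins)[1:-1])
          y1, y0, x0 = yd[1:], yd[:-1], xd[:-1]

          joint = np.zeros((n_bins, n_bins, n_bins))        # counts of (y_{t+1}, y_t, x_t)
          np.add.at(joint, (y1, y0, x0), 1.0)
          p_yyx = joint / joint.sum()
          p_yx = p_yyx.sum(axis=0)                          # p(y_t, x_t)
          p_yy = p_yyx.sum(axis=2)                          # p(y_{t+1}, y_t)
          p_y0 = p_yyx.sum(axis=(0, 2))                     # p(y_t)

          te = 0.0
          for i, j, k in zip(*np.nonzero(p_yyx)):
              te += p_yyx[i, j, k] * np.log2(
                  p_yyx[i, j, k] * p_y0[j] / (p_yy[i, j] * p_yx[j, k]))
          return te

      # Toy check: y is a noisy, delayed copy of x, so TE(x->y) >> TE(y->x)
      rng = np.random.default_rng(0)
      x = rng.standard_normal(20000)
      y = np.roll(x, 1) + 0.5 * rng.standard_normal(20000)
      print(transfer_entropy(x, y), transfer_entropy(y, x))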

  7. A theoretical Gaussian framework for anomalous change detection in hyperspectral images

    NASA Astrophysics Data System (ADS)

    Acito, Nicola; Diani, Marco; Corsini, Giovanni

    2017-10-01

    Exploitation of temporal series of hyperspectral images is a relatively new discipline with a wide variety of possible applications in fields such as remote sensing, area surveillance, defense and security, and search and rescue. In this work, we discuss how images taken at two different times can be processed to detect changes caused by the insertion, deletion or displacement of small objects in the monitored scene. This problem is known in the literature as anomalous change detection (ACD), and it can be viewed as the extension, to the multitemporal case, of the well-known anomaly detection problem in a single image. In both cases, the hyperspectral images are processed blindly, in an unsupervised manner and without a priori knowledge of the target spectrum. We introduce the ACD problem using an approach based on statistical decision theory and derive a common framework encompassing different ACD approaches. In particular, we define the observation space, the statistical distribution of the data conditioned on the two competing hypotheses, and the procedure followed to arrive at the solution. The overview places emphasis on techniques based on the multivariate Gaussian model, which allows a formal presentation of the ACD problem and a rigorous derivation of the possible solutions in a way that is both mathematically tractable and easy to interpret. We also discuss practical problems related to applying the detectors in the real world and present affordable solutions. Specifically, we describe the ACD processing chain, including the strategies commonly adopted to compensate for pervasive radiometric changes caused by differing illumination and atmospheric conditions, and to mitigate residual geometric image co-registration errors. Results obtained on real, freely available data are discussed in order to test and compare the methods within the proposed general framework.
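
    For concreteness, the sketch below implements one simple member of the Gaussian ACD family discussed in such frameworks: predict the second acquisition from the first with a linear (chronochrome-style) regression and score each pixel by the Mahalanobis norm of its residual. The function and variable names are ours, and no radiometric normalization or co-registration correction is modeled.

      import numpy as np

      def gaussian_acd(X, Y, eps=1e-6):
          # X, Y: (n_pixels, n_bands) spectra from the two co-registered
          # acquisitions.  Score = Mahalanobis norm of the residual after
          # linearly predicting Y from X.
          Xc, Yc = X - X.mean(axis=0), Y - Y.mean(axis=0)
          n = X.shape[0]
          Cxx = Xc.T @ Xc / n + eps * np.eye(X.shape[1])
          Cyx = Yc.T @ Xc / n
          L = Cyx @ np.linalg.inv(Cxx)            # least-squares predictor
          R = Yc - Xc @ L.T                       # prediction residuals
          Crr = R.T @ R / n + eps * np.eye(Y.shape[1])
          return np.einsum('ij,jk,ik->i', R, np.linalg.inv(Crr), R)

      # Toy usage: two 3-band "images" flattened to pixel lists; a pervasive
      # gain change affects everything, one pixel undergoes an anomalous change.
      rng = np.random.default_rng(1)
      X = rng.standard_normal((5000, 3))
      Y = 1.2 * X + 0.1 * rng.standard_normal((5000, 3))
      Y[0] += np.array([3.0, -3.0, 3.0])
      scores = gaussian_acd(X, Y)
      print(scores[0], np.median(scores))         # the anomalous pixel scores high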

  8. A Framework for Integrating Cultural Factors in Military Modeling and Simulation

    DTIC Science & Technology

    2006-01-01

    how individuals and groups view their surroundings. Here, Beach's (1990) image theory is used to elucidate the major cultural image questions relevant... [Figure 3: Beach's Image Theory for Cultural Knowledge Capture; Figure 4: Cultural Cognition of Peace Symbol] ...and language to define the rhythms of war, including new methods of deception. The Coalition Forces, led by the USA, are challenged with the daily

  9. Improving Range Estimation of a 3-Dimensional Flash Ladar via Blind Deconvolution

    DTIC Science & Technology

    2010-09-01

    [Contents excerpt: 2.1.4 Optical Imaging as a Linear and Nonlinear System; 2.1.5 Coherence Theory and Laser Light Statistics; 2.2 Deconvolution] ...rather than deconvolution. 2.1.5 Coherence Theory and Laser Light Statistics. Using [24] and [25], this section serves as background on coherence theory... the laser light incident on the detector surface. The image intensity related to different types of coherence is governed by the laser light's spatial

  10. Simultaneous measurement of lipid and aqueous layers of tear film using optical coherence tomography and statistical decision theory

    NASA Astrophysics Data System (ADS)

    Huang, Jinxin; Clarkson, Eric; Kupinski, Matthew; Rolland, Jannick P.

    2014-03-01

    The prevalence of Dry Eye Disease (DED) in the USA is approximately 40 million aging adults, with an economic burden of about $3.8 billion. However, a comprehensive understanding of tear film dynamics, which is a prerequisite to advancing the management of DED, has yet to be realized. To extend our understanding of tear film dynamics, we investigate the simultaneous estimation of the lipid- and aqueous-layer thicknesses by combining optical coherence tomography (OCT) with statistical decision theory. Specifically, we develop a mathematical model of Fourier-domain OCT that accounts for the different statistical processes in the imaging chain. We formulate the first- and second-order statistics of the OCT system output, from which simulated OCT spectra can be generated. The object being imaged is a tear film model consisting of a lipid and an aqueous layer on top of a rough corneal surface. We then implement a maximum-likelihood (ML) estimator to interpret the simulated OCT data and estimate the thicknesses of both layers of the tear film. Results show that an axial resolution of 1 μm allows estimates down to the nanometer scale. We use the root-mean-square error of the estimates as a metric to evaluate system parameters, such as the tradeoff between imaging speed and estimation precision. This framework further provides the theoretical basis for optimizing the imaging setup for a specific thickness-estimation task.
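
    The sketch below illustrates only the estimation-theoretic core of such an approach: under additive white Gaussian noise, ML estimation of the two thicknesses reduces to a least-squares grid search against a forward model. The cosine-fringe forward model and all parameter values here are toy assumptions, not the paper's full statistical OCT model.

      import numpy as np

      k = np.linspace(0.0, 40.0, 2048)            # wavenumber samples (a.u.)

      def spectrum(d_lipid, d_aqueous):
          # Hypothetical forward model: each interface contributes a cosine
          # fringe whose frequency scales with its depth (thicknesses in um).
          z1 = d_lipid                            # lipid/aqueous interface
          z2 = d_lipid + d_aqueous                # aqueous/cornea interface
          return 0.5 * np.cos(2 * k * z1) + 1.0 * np.cos(2 * k * z2)

      rng = np.random.default_rng(2)
      data = spectrum(0.05, 3.0) + 0.2 * rng.standard_normal(k.size)

      # With additive white Gaussian noise, the ML estimate is the
      # least-squares fit over a grid of candidate thickness pairs.
      lipids = np.linspace(0.01, 0.20, 40)
      aqueous = np.linspace(2.0, 4.0, 81)
      sse = np.array([[np.sum((data - spectrum(dl, da)) ** 2) for da in aqueous]
                      for dl in lipids])
      i, j = np.unravel_index(np.argmin(sse), sse.shape)
      print("ML estimate:", lipids[i], "um lipid,", aqueous[j], "um aqueous")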

  11. Revealing topological organization of human brain functional networks with resting-state functional near infrared spectroscopy.

    PubMed

    Niu, Haijing; Wang, Jinhui; Zhao, Tengda; Shu, Ni; He, Yong

    2012-01-01

    The human brain is a highly complex system that can be represented as a structurally interconnected and functionally synchronized network, which assures both the segregation and integration of information processing. Recent studies have demonstrated that a variety of neuroimaging and neurophysiological techniques such as functional magnetic resonance imaging (MRI), diffusion MRI and electroencephalography/magnetoencephalography can be employed to explore the topological organization of human brain networks. However, little is known about whether functional near infrared spectroscopy (fNIRS), a relatively new optical imaging technology, can be used to map the functional connectome of the human brain and reveal meaningful and reproducible topological characteristics. We utilized resting-state fNIRS (R-fNIRS) to investigate the topological organization of human brain functional networks in 15 healthy adults. Brain networks were constructed by thresholding the temporal correlation matrices of 46 channels and analyzed using graph-theory approaches. We found that the functional brain network derived from R-fNIRS data had efficient small-world properties, significant hierarchical modular structure and highly connected hubs. These results were highly reproducible both across participants and over time and were consistent with previous findings based on other functional imaging techniques. Our results confirmed the feasibility and validity of using graph-theory approaches in conjunction with optical imaging techniques to explore the topological organization of human brain networks. These results may expand the methodological framework for utilizing fNIRS to study functional network changes that occur in association with development, aging, and neurological and psychiatric disorders.
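
    A minimal version of this kind of analysis, assuming the channel time series are already preprocessed, is sketched below with networkx; the study's full pipeline (sparsity sweeps, small-world comparison against random networks, reproducibility statistics) is not reproduced.

      import numpy as np
      import networkx as nx

      def fnirs_graph_metrics(ts, threshold=0.3):
          # ts: (n_timepoints, n_channels) array, e.g. 46 R-fNIRS channels
          corr = np.corrcoef(ts.T)                # channel-wise correlations
          np.fill_diagonal(corr, 0.0)
          G = nx.from_numpy_array((corr > threshold).astype(int))
          communities = nx.algorithms.community.greedy_modularity_communities(G)
          metrics = {
              "clustering": nx.average_clustering(G),
              "modularity": nx.algorithms.community.modularity(G, communities),
              "hubs": sorted(dict(G.degree()).items(), key=lambda kv: -kv[1])[:5],
          }
          if nx.is_connected(G):
              metrics["char_path_length"] = nx.average_shortest_path_length(G)
          return metrics

      # Toy usage: 46 synthetic channels driven by two shared components,
      # which yields a two-module network.
      rng = np.random.default_rng(3)
      ts = 0.8 * rng.standard_normal((300, 46))
      ts[:, :23] += rng.standard_normal((300, 1))
      ts[:, 23:] += rng.standard_normal((300, 1))
      print(fnirs_graph_metrics(ts))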

  12. a Study of the Effects of Processing Chemistry on the Holographic Image Space.

    NASA Astrophysics Data System (ADS)

    Kocher, Clive Joseph

    Available from UMI in association with The British Library. Processing methods for reflection and transmission holograms were evaluated with a view to minimising distortion in the images of small, metallic, near field subjects, whilst retaining optimum quality. The study was limited to recordings made with the HeNe laser (633 nm) in conjunction with the Agfa Gevaert 8E75 HD silver halide emulsion on glass or film support (5" × 4" format). Simple ray diagrams were used to help predict angular distortion arising from emulsion shrinkage for a two-dimensional model. The main conclusions are: (a) Serious distortion of the order of several millimetres, and loss of resolution, will occur in the images of reflection holograms unless careful attention is given to processing procedures. Evidence supports the hypothesis that shrinkage due to processing causes the fringe system to collapse with a resultant change in inclination angle, and hence a distortion of the reconstructed image. Minimum distortion occurs with a laser reconstructed hologram processed in a high tanning developer and rehalogenating bleach, none being detected under the test conditions. (b) The same problem was not apparent for the transmission hologram due to a different fringe orientation, and within the limitations of the measuring system, no distortion was detected for any processing system. Comparative tests were made to evaluate the differences in performance for the Agfa 8E75 HD emulsion on plate and film support. Results show a significant increase in speed for film (as high as ×4) and shrinkage (~3%), under all processing conditions. The advantages of using Phenidone based developers are shown. The report also includes a comprehensive background theory section covering basic concepts, silver halide recording material, holographic processing chemistry, distortion in holograms and pulsed laser holography. A review of previous work on phase holograms is given. Although primarily intended for measurement, this report contains useful information of benefit to display holography.

  13. Minimization of dependency length in written English.

    PubMed

    Temperley, David

    2007-11-01

    Gibson's Dependency Locality Theory (DLT) [Gibson, E. 1998. Linguistic complexity: locality of syntactic dependencies. Cognition, 68, 1-76; Gibson, E. 2000. The dependency locality theory: A distance-based theory of linguistic complexity. In A. Marantz, Y. Miyashita, & W. O'Neil (Eds.), Image, Language, Brain (pp. 95-126). Cambridge, MA: MIT Press.] proposes that the processing complexity of a sentence is related to the length of its syntactic dependencies: longer dependencies are more difficult to process. The DLT is supported by a variety of phenomena in language comprehension. This raises the question: Does language production reflect a preference for shorter dependencies as well? I examine this question in a corpus study of written English, using the Wall Street Journal portion of the Penn Treebank. The DLT makes a number of predictions regarding the length of constituents in different contexts; these predictions were tested in a series of statistical tests. A number of findings support the theory: the greater length of subject noun phrases in inverted versus uninverted quotation constructions, the greater length of direct-object versus subject NPs, the greater length of postmodifying versus premodifying adverbial clauses, the greater length of relative-clause subjects within direct-object NPs versus subject NPs, the tendency towards "short-long" ordering of postmodifying adjuncts and coordinated conjuncts, and the shorter length of subject NPs (but not direct-object NPs) in clauses with premodifying adjuncts versus those without.
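
    The corpus measure at issue is easy to state computationally: given a dependency parse, sum the distances between each word and its head. The sketch below uses a made-up example parse rather than Penn Treebank data.

      def total_dependency_length(heads):
          # heads[i] is the 1-based index of word i+1's head, 0 for the root
          return sum(abs((i + 1) - h) for i, h in enumerate(heads) if h != 0)

      # Illustrative parse of "The dog chased the cat" (word indices 1..5):
      # The->dog, dog->chased, chased=root, the->cat, cat->chased
      heads = [2, 3, 0, 5, 3]
      print(total_dependency_length(heads))   # |1-2| + |2-3| + |4-5| + |5-3| = 5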

  14. If it bleeds, it leads: separating threat from mere negativity

    PubMed Central

    Boshyan, Jasmine; Adams, Reginald B.; Mote, Jasmine; Betz, Nicole; Ward, Noreen; Hadjikhani, Nouchine; Bar, Moshe; Barrett, Lisa F.

    2015-01-01

    Most theories of emotion hold that negative stimuli are threatening and aversive. Yet in everyday experiences some negative sights (e.g. car wrecks) attract curiosity, whereas others repel (e.g. a weapon pointed in our face). To examine the diversity in negative stimuli, we employed four classes of visual images (Direct Threat, Indirect Threat, Merely Negative and Neutral) in a set of behavioral and functional magnetic resonance imaging studies. Participants reliably discriminated between the images, evaluating Direct Threat stimuli most quickly, and Merely Negative images most slowly. Threat images evoked greater and earlier blood oxygen level-dependent (BOLD) activations in the amygdala and periaqueductal gray, structures implicated in representing and responding to the motivational salience of stimuli. Conversely, the Merely Negative images evoked larger BOLD signal in the parahippocampal, retrosplenial, and medial prefrontal cortices, regions which have been implicated in contextual association processing. Ventrolateral as well as medial and lateral orbitofrontal cortices were activated by both threatening and Merely Negative images. In conclusion, negative visual stimuli can repel or attract scrutiny depending on their current threat potential, which is assessed by dynamic shifts in large-scale brain network activity. PMID:24493851

  15. Comparison and analysis of nonlinear algorithms for compressed sensing in MRI.

    PubMed

    Yu, Yeyang; Hong, Mingjian; Liu, Feng; Wang, Hua; Crozier, Stuart

    2010-01-01

    Compressed sensing (CS) theory has recently been applied in Magnetic Resonance Imaging (MRI) to accelerate the overall imaging process. In CS implementations, various algorithms have been used to solve the nonlinear equation system for better image quality and reconstruction speed. However, there are no explicit criteria for selecting an optimal CS algorithm in practical MRI applications. A systematic, comparative study of the commonly used algorithms is therefore essential for the implementation of CS in MRI. In this work, three typical algorithms, namely the Gradient Projection for Sparse Reconstruction (GPSR) algorithm, the interior-point algorithm (l1_ls), and the Stagewise Orthogonal Matching Pursuit (StOMP) algorithm, are compared and investigated in three different imaging scenarios: brain, angiogram, and phantom imaging. The algorithms' performances are characterized in terms of image quality and reconstruction speed. The theoretical results show that the performance of the CS algorithms is case sensitive; overall, the StOMP algorithm offers the best image quality, while the GPSR algorithm is the most efficient of the three methods. In the next step, the algorithm performances and characteristics will be explored experimentally. It is hoped that this research will further support the application of CS in MRI.
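
    As a minimal illustration of the problem class these solvers address, the sketch below reconstructs a toy sparse image from undersampled k-space with plain ISTA (iterative soft thresholding). It is not an implementation of GPSR, l1_ls or StOMP, and practical CS-MRI would use a wavelet or total-variation sparsifying transform rather than image-domain sparsity.

      import numpy as np

      def ista_cs_recon(y, mask, lam=0.02, n_iter=200):
          # Minimise 0.5 * ||mask * F(x) - y||^2 + lam * ||x||_1 by iterative
          # soft thresholding, with the image itself taken as the sparse object.
          x = np.zeros(mask.shape, dtype=complex)
          for _ in range(n_iter):
              resid = mask * np.fft.fft2(x, norm="ortho") - y
              x = x - np.fft.ifft2(mask * resid, norm="ortho")   # gradient step
              x = np.exp(1j * np.angle(x)) * np.maximum(np.abs(x) - lam, 0.0)
          return x

      # Toy phantom: a sparse image sampled on a random 30% k-space mask
      rng = np.random.default_rng(4)
      img = np.zeros((64, 64)); img[20:24, 30:34] = 1.0; img[45, 10] = 2.0
      mask = rng.random((64, 64)) < 0.3
      y = mask * np.fft.fft2(img, norm="ortho")
      rec = ista_cs_recon(y, mask)
      print("relative error:", np.linalg.norm(np.abs(rec) - img) / np.linalg.norm(img))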

  16. Parallel, confocal, and complete spectrum imager for fluorescent detection of high-density microarray

    NASA Astrophysics Data System (ADS)

    Bogdanov, Valery L.; Boyce-Jacino, Michael

    1999-05-01

    Confined arrays of biochemical probes deposited on a solid support surface (an analytical microarray or 'chip') provide an opportunity to analyze multiple reactions simultaneously. Microarrays are increasingly used in genetics, medicine and environmental scanning as research and analytical instruments. The power of microarray technology comes from its parallelism, which grows with array miniaturization, minimization of reagent volume per reaction site, and reaction multiplexing. An optical detector of microarray signals should combine high sensitivity with spatial and spectral resolution. Additionally, low cost and a high processing rate are needed to transfer microarray technology into biomedical practice. We designed an imager that provides confocal and complete-spectrum detection of an entire fluorescently labeled microarray in parallel. The imager uses a microlens array, a non-slit spectral decomposer, and a highly sensitive detector (cooled CCD). Two imaging channels provide simultaneous detection of the localization and the integrated and spectral intensities for each reaction site in the microarray. A dimensional match between the microarray and the imager's optics eliminates all moving parts in the instrumentation, enabling highly informative, fast and low-cost microarray detection. We report the theory of confocal hyperspectral imaging with a microlens array and experimental data for the implementation of the developed imager to detect a fluorescently labeled microarray with a density of approximately 10^3 sites per cm^2.

  17. Image preprocessing study on KPCA-based face recognition

    NASA Astrophysics Data System (ADS)

    Li, Xuan; Li, Dehua

    2015-12-01

    Face recognition, as an important biometric identification method with friendly, natural and convenient advantages, has attracted more and more attention. This paper investigates a face recognition system comprising face detection, feature extraction and recognition, focusing on the related theory and key technology of various preprocessing methods in the face detection process and on how different preprocessing methods affect recognition results when the KPCA method is used. We choose the YCbCr color space for skin segmentation and integral projection for face location. We preprocess the face images using erosion and dilation (the opening and closing operations) together with an illumination compensation method, and then apply a face recognition method based on kernel principal component analysis (KPCA); experiments were carried out on a typical face database. The algorithms were implemented on the MATLAB platform. Experimental results show that, under certain conditions, the kernel-based extension of the PCA algorithm makes the extracted features represent the original image information better, because a nonlinear feature extraction method is used, and thus a higher recognition rate can be obtained. In the image preprocessing stage, we found that different operations on the images may produce different results, and hence different recognition rates in the recognition stage. At the same time, in kernel principal component analysis the power of the polynomial kernel function can affect the recognition result.
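
    A compact sketch of the KPCA recognition stage is shown below, using scikit-learn and the Olivetti faces as a stand-in dataset (the paper's own preprocessing chain and face database are not reproduced); varying the polynomial degree mirrors the observation that the kernel power affects the recognition rate.

      from sklearn.datasets import fetch_olivetti_faces
      from sklearn.decomposition import KernelPCA
      from sklearn.model_selection import train_test_split
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.pipeline import make_pipeline

      # Recognition stage only: aligned face crops are assumed to be available
      # (here the Olivetti faces, downloaded on first use).
      faces = fetch_olivetti_faces()
      Xtr, Xte, ytr, yte = train_test_split(
          faces.data, faces.target, test_size=0.25,
          stratify=faces.target, random_state=0)

      for degree in (2, 3, 4):        # the polynomial power whose effect is studied
          clf = make_pipeline(
              KernelPCA(n_components=60, kernel="poly", degree=degree),
              KNeighborsClassifier(n_neighbors=1))
          clf.fit(Xtr, ytr)
          print(f"poly degree {degree}: accuracy {clf.score(Xte, yte):.3f}")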

  18. Measuring pictorial balance perception at first glance using Japanese calligraphy

    PubMed Central

    Gershoni, Sharon; Hochstein, Shaul

    2011-01-01

    According to art theory, pictorial balance acts to unify picture elements into a cohesive composition. For asymmetrical compositions, balancing elements is thought to be similar to balancing mechanical weights in a framework of symmetry axes. Assessment of preference for balance (APB), based on the symmetry-axes framework suggested in Arnheim R, 1974 Art and Visual Perception: A Psychology of the Creative Eye (Berkeley, CA: University of California Press), successfully matched subject balance ratings of images of geometrical shapes over unlimited viewing time. We now examine pictorial balance perception of Japanese calligraphy during first fixation, isolated from later cognitive processes, comparing APB measures with results from balance-rating and comparison tasks. Results show high between-task correlation, but low correlation with APB. We repeated the rating task, expanding the image set to include five rotations of each image, comparing balance perception of artist and novice participant groups. Rotation has no effect on APB balance computation but dramatically affects balance rating, especially for art experts. We analyze the variety of rotation effects and suggest that, rather than depending on element size and position relative to symmetry axes, first fixation balance processing derives from global processes such as grouping of lines and shapes, object recognition, preference for horizontal and vertical elements, closure, and completion, enhanced by vertical symmetry. PMID:23145242

  19. Classification and Quality Evaluation of Tobacco Leaves Based on Image Processing and Fuzzy Comprehensive Evaluation

    PubMed Central

    Zhang, Fan; Zhang, Xinhong

    2011-01-01

    Most classification, quality evaluation and grading of flue-cured tobacco leaves is performed manually, relying on the judgmental experience of experts and inevitably limited by personal, physical and environmental factors. The classification and quality evaluation are therefore subjective and experience-based. In this paper, an automatic classification method for tobacco leaves based on digital image processing and fuzzy set theory is presented. A grading system based on image processing techniques was developed for automatically inspecting and grading flue-cured tobacco leaves. This system uses machine vision for the extraction and analysis of color, size, shape and surface texture. Fuzzy comprehensive evaluation provides a high level of confidence in decision making based on fuzzy logic. A neural network is used to estimate and forecast the membership functions of the tobacco leaf features in the fuzzy sets. The experimental results of the two-level fuzzy comprehensive evaluation (FCE) show that the classification accuracy is about 94% for the trained tobacco leaves and about 72% for non-trained tobacco leaves. We believe that fuzzy comprehensive evaluation is a viable approach for the automatic classification and quality evaluation of tobacco leaves. PMID:22163744
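
    The sketch below shows the arithmetic of a two-level fuzzy comprehensive evaluation with a weighted-average composition operator. The factor groups, weights and membership values are hypothetical placeholders; in the paper the memberships are produced by a neural network from image features.

      import numpy as np

      def fce(weights, R):
          # One level of fuzzy comprehensive evaluation with the
          # weighted-average operator: B = W . R, then normalised.
          b = np.asarray(weights) @ np.asarray(R)
          return b / b.sum()

      # Hypothetical membership matrices (rows: factors, columns: grades A-D)
      R_colour = np.array([[0.6, 0.3, 0.1, 0.0],     # hue
                           [0.5, 0.3, 0.2, 0.0]])    # saturation
      R_shape = np.array([[0.2, 0.5, 0.2, 0.1],      # area
                          [0.3, 0.4, 0.2, 0.1],      # aspect ratio
                          [0.1, 0.4, 0.4, 0.1]])     # edge irregularity

      B_colour = fce([0.6, 0.4], R_colour)           # first level, per group
      B_shape = fce([0.4, 0.3, 0.3], R_shape)
      B = fce([0.55, 0.45], np.vstack([B_colour, B_shape]))   # second level
      print("grade memberships:", B, "-> grade", "ABCD"[int(np.argmax(B))])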

  20. A low-power small-area ADC array for IRFPA readout

    NASA Astrophysics Data System (ADS)

    Zhong, Shengyou; Yao, Libin

    2013-09-01

    The readout integrated circuit (ROIC) is the bridge between the infrared focal plane array (IRFPA) and the image processing circuitry in an infrared imaging system. The ROIC is the first part of the signal processing chain and is connected directly to the detectors, so its performance greatly affects the detector and even the whole imaging system. With the development of CMOS technologies, it is possible to digitize the signal inside the ROIC and develop a digital ROIC. A digital ROIC can reduce the complexity of the whole system and improve system reliability. More importantly, it can accommodate a variety of digital signal processing techniques that a traditional analog ROIC cannot. The analog-to-digital converter (ADC) is the most important building block in the digital ROIC. The requirements for ADCs inside the ROIC are low power, high dynamic range and small area. In this paper we propose an RC hybrid successive approximation register (SAR) ADC as the column ADC for a digital ROIC. In the proposed ADC structure, a resistor ladder is used to generate several reference voltages. The proposed RC hybrid structure not only reduces the area of the capacitor array but also relaxes the requirement for capacitor-array matching. Theoretical analysis and simulation show that the RC hybrid SAR ADC is suitable for ADC array applications.
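
    A behavioural sketch of the successive-approximation search that any SAR ADC performs is given below; it models only the bit-by-bit binary search, not the RC hybrid DAC circuit itself, and all values are illustrative.

      def sar_convert(vin, vref=1.0, n_bits=12):
          # Bit-by-bit binary search: try each bit from MSB to LSB, compare the
          # DAC output against the input, and keep the bit if the input is higher.
          code = 0
          for bit in range(n_bits - 1, -1, -1):
              trial = code | (1 << bit)
              vdac = vref * trial / (1 << n_bits)
              if vin >= vdac:
                  code = trial
          return code

      code = sar_convert(0.3217)
      print(code, "->", code / 2 ** 12, "V reconstructed")   # ~0.3215 V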

  1. Fisher information matrix for branching processes with application to electron-multiplying charge-coupled devices

    PubMed Central

    Chao, Jerry; Ward, E. Sally; Ober, Raimund J.

    2012-01-01

    The high quantum efficiency of the charge-coupled device (CCD) has rendered it the imaging technology of choice in diverse applications. However, under extremely low light conditions where few photons are detected from the imaged object, the CCD becomes unsuitable as its readout noise can easily overwhelm the weak signal. An intended solution to this problem is the electron-multiplying charge-coupled device (EMCCD), which stochastically amplifies the acquired signal to drown out the readout noise. Here, we develop the theory for calculating the Fisher information content of the amplified signal, which is modeled as the output of a branching process. Specifically, Fisher information expressions are obtained for a general and a geometric model of amplification, as well as for two approximations of the amplified signal. All expressions pertain to the important scenario of a Poisson-distributed initial signal, which is characteristic of physical processes such as photon detection. To facilitate the investigation of different data models, a “noise coefficient” is introduced which allows the analysis and comparison of Fisher information via a scalar quantity. We apply our results to the problem of estimating the location of a point source from its image, as observed through an optical microscope and detected by an EMCCD. PMID:23049166
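
    The effect being quantified can be illustrated with a small Monte Carlo experiment: under the common high-gain approximation that a single input electron yields an exponentially distributed output with mean equal to the gain (an assumption of this sketch, not the paper's exact branching model), the amplified signal shows an excess noise factor of about 2, i.e. roughly a halving of the usable Fisher information.

      import numpy as np

      rng = np.random.default_rng(5)
      lam, g, n_frames = 5.0, 300.0, 50_000    # mean photons, EM gain, trials

      # Poisson-distributed input electrons; each is amplified to an
      # (approximately) exponentially distributed output with mean g.
      photons = rng.poisson(lam, n_frames)
      output = np.array([rng.exponential(g, n).sum() for n in photons])

      # Variance inflation relative to a noise-free amplifier of the same input
      F2 = output.var() / (g ** 2 * lam)
      print("excess noise factor F^2 ~", F2)   # about 2 at high gain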

  2. Ultra-Rapid Categorization of Fourier-Spectrum Equalized Natural Images: Macaques and Humans Perform Similarly

    PubMed Central

    Girard, Pascal; Koenig-Robert, Roger

    2011-01-01

    Background Comparative studies of cognitive processes find similarities between humans and apes but also monkeys. Even high-level processes, like the ability to categorize classes of object from any natural scene under ultra-rapid time constraints, seem to be present in rhesus macaque monkeys (despite a smaller brain and the lack of language and a cultural background). An interesting and still open question concerns the degree to which the same images are treated with the same efficacy by humans and monkeys when a low level cue, the spatial frequency content, is controlled. Methodology/Principal Findings We used a set of natural images equalized in Fourier spectrum and asked whether it is still possible to categorize them as containing an animal and at what speed. One rhesus macaque monkey performed a forced-choice saccadic task with a good accuracy (67.5% and 76% for new and familiar images respectively) although performance was lower than with non-equalized images. Importantly, the minimum reaction time was still very fast (100 ms). We compared the performances of human subjects with the same setup and the same set of (new) images. Overall mean performance of humans was also lower than with original images (64% correct) but the minimum reaction time was still short (140 ms). Conclusion Performances on individual images (% correct but not reaction times) for both humans and the monkey were significantly correlated suggesting that both species use similar features to perform the task. A similar advantage for full-face images was seen for both species. The results also suggest that local low spatial frequency information could be important, a finding that fits the theory that fast categorization relies on a rapid feedforward magnocellular signal. PMID:21326600

  3. Analog signal processing for optical coherence imaging systems

    NASA Astrophysics Data System (ADS)

    Xu, Wei

    Optical coherence tomography (OCT) and optical coherence microscopy (OCM) are non-invasive optical coherence imaging techniques which provide micron-scale resolution and depth-resolved imaging capability. Both OCT and OCM are based on Michelson interferometer theory. They are widely used in ophthalmology, gastroenterology and dermatology because of their high resolution, safety and low cost. OCT creates cross-sectional images whereas OCM obtains en face images. In this dissertation, the design and development of three increasingly complicated analog signal processing (ASP) solutions for optical coherence imaging are presented. The first ASP solution was implemented for a time domain OCT system with a Rapid Scanning Optical Delay line (RSOD)-based optical signal modulation and logarithmic amplifier (Log amp) based demodulation. This OCT system can acquire up to 1600 A-scans per second. The measured dynamic range is 106 dB at 200 A-scans per second. This OCT signal processing electronics includes an off-the-shelf filter box with a Log amp circuit implemented on a PCB. The second ASP solution was developed for an OCM system with synchronized modulation and demodulation and compensation for interferometer phase drift. This OCM acquired micron-scale resolution, high dynamic range images at acquisition speeds up to 45,000 pixels/second. This OCM ASP solution is fully custom designed on a perforated circuit board. The third ASP solution was implemented on a single 2.2 mm x 2.2 mm complementary metal oxide semiconductor (CMOS) chip. This design is expandable to a multiple channel OCT system. A single on-chip CMOS photodetector and ASP channel was used for coherent demodulation in a time domain OCT system. Cross-sectional images were acquired with a dynamic range of 76 dB (limited by photodetector responsivity). When incorporated with a bump-bonded InGaAs photodiode with higher responsivity, the expected dynamic range is close to 100 dB.
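
    As a digital stand-in for the demodulation these analog chains perform, the sketch below recovers the envelope of a synthetic carrier-modulated interferogram via the analytic signal; the actual systems described here do this in hardware with a log amp or synchronous demodulator, and all signal parameters are illustrative.

      import numpy as np
      from scipy.signal import hilbert, butter, filtfilt

      fs = 1.0e6                                  # sample rate, Hz
      t = np.arange(0, 5e-3, 1 / fs)              # one synthetic A-scan
      rng = np.random.default_rng(7)

      # Gaussian reflectivity envelope riding on a 50 kHz carrier (standing in
      # for the delay-line-induced modulation), plus a little detector noise.
      envelope_true = np.exp(-((t - 2.5e-3) ** 2) / (2 * (2e-4) ** 2))
      interferogram = envelope_true * np.cos(2 * np.pi * 50e3 * t) \
                      + 0.01 * rng.standard_normal(t.size)

      envelope = np.abs(hilbert(interferogram))   # analytic-signal envelope
      b, a = butter(4, 20e3, fs=fs)               # 20 kHz low-pass clean-up
      envelope = filtfilt(b, a, envelope)
      print("peak position (ms):", t[np.argmax(envelope)] * 1e3)   # ~2.5 ms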

  4. On the theoretical description of weakly charged surfaces.

    PubMed

    Wang, Rui; Wang, Zhen-Gang

    2015-03-14

    It is widely accepted that the Poisson-Boltzmann (PB) theory provides a valid description for charged surfaces in the so-called weak coupling limit. Here, we show that the image charge repulsion creates a depletion boundary layer that cannot be captured by a regular perturbation approach. The correct weak-coupling theory must include the self-energy of the ion due to the image charge interaction. The image force qualitatively alters the double layer structure and properties, and gives rise to many non-PB effects, such as nonmonotonic dependence of the surface energy on concentration and charge inversion. In the presence of dielectric discontinuity, there is no limiting condition for which the PB theory is valid.
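
    As a reminder of the quantity involved (a sketch using the classical planar-interface result, not the paper's full derivation), the image-charge self-energy of an ion and its insertion into the Boltzmann weight take the form:

      W_{\mathrm{im}}(z)
        = \frac{q^{2}}{16 \pi \varepsilon_{0} \varepsilon\, z}
          \, \frac{\varepsilon - \varepsilon'}{\varepsilon + \varepsilon'} ,
      \qquad
      -\nabla \cdot \left( \varepsilon_{0} \varepsilon \, \nabla \psi \right)
        = \sum_{i} q_{i} c_{i}^{\mathrm{b}}
          \exp\!\left[ -\beta \left( q_{i} \psi + W_{\mathrm{im}} \right) \right] ,

    where the ion of charge q sits at distance z from a planar boundary between dielectrics ε (containing the ion) and ε'. The self-energy is repulsive when ε' < ε, which is what produces the depletion boundary layer described above.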

  5. [Research on Spectral Polarization Imaging System Based on Static Modulation].

    PubMed

    Zhao, Hai-bo; Li, Huan; Lin, Xu-ling; Wang, Zheng

    2015-04-01

    The main disadvantages of traditional spectral polarization imaging systems are a complex structure, moving parts, and low throughput. A novel spectral polarization imaging method is discussed, based on static polarization intensity modulation combined with Savart-polariscope interference imaging. The imaging system can obtain spectral information and the four Stokes polarization parameters in real time. Compared with conventional methods, the advantages of the imaging system are compactness, low mass, no moving parts, no electrical control, no slit, and large throughput. The system structure and basic theory are introduced. An experimental system was established in the laboratory, consisting of reimaging optics, a polarization intensity module, an interference imaging module, and a CCD data collection and processing module. The spectral range is visible and near-infrared (480-950 nm). A white board and a toy plane were imaged using the experimental system, verifying its ability to obtain spectral polarization imaging information. A calibration system for the static polarization modulation was set up; the statistical error of the degree-of-polarization measurement is less than 5%. The validity and feasibility of the basic principle are demonstrated by the experimental results. The spectral polarization data captured by the system can be applied to object identification, object classification and remote sensing detection.

  6. Research on spatial-variant property of bistatic ISAR imaging plane of space target

    NASA Astrophysics Data System (ADS)

    Guo, Bao-Feng; Wang, Jun-Ling; Gao, Mei-Guo

    2015-04-01

    The imaging plane of inverse synthetic aperture radar (ISAR) is the projection plane of the target. When an image is formed using range-Doppler theory, the imaging plane may have a spatial-variant property, which changes the scatterers' projection positions and results in migration through resolution cells. In this study, we focus on the spatial-variant property of the imaging plane of a three-axis-stabilized space target. The innovative contributions are as follows. 1) The target motion model in orbit is derived from a two-body model. 2) The instantaneous imaging plane is determined by vector analysis. 3) Three Euler angles are introduced to describe the spatial-variant property of the imaging plane, and the image quality is analyzed. The simulation results confirm the analysis of the spatial-variant property. This research is significant for the selection of the imaging segment and provides a basis for subsequent data processing and compensation algorithms. Project supported by the National Natural Science Foundation of China (Grant No. 61401024), the Shanghai Aerospace Science and Technology Innovation Foundation, China (Grant No. SAST201240), and the Basic Research Foundation of Beijing Institute of Technology (Grant No. 20140542001).

  7. Nuts and Bolts of CEST MR imaging

    PubMed Central

    Liu, Guanshu; Song, Xiaolei; Chan, Kannie W.Y.

    2013-01-01

    Chemical Exchange Saturation Transfer (CEST) has emerged as a novel MRI contrast mechanism that is well suited for molecular imaging studies. This new mechanism can be used to detect small amounts of contrast agent through saturation of rapidly exchanging protons on these agents, allowing a wide range of applications. CEST technology has a number of indispensable features, such as the possibility of simultaneous detection of multiple “colors” of agents and detecting changes in their environment (e.g. pH, metabolites, etc) through MR contrast. Currently a large number of new imaging schemes and techniques have been developed to improve the temporal resolution and specificity and to correct the influence of B0 and B1 inhomogeneities. In this review, the techniques developed over the last decade have been summarized with the different imaging strategies and post-processing methods discussed from a practical point of view including describing their relative merits for detecting CEST agents. The goal of the present work is to provide the reader with a fundamental understanding of the techniques developed, and to provide guidance to help refine future applications of this technology. This review is organized into three main sections: Basics of CEST Contrast, Implementation, Post-Processing, and also includes a brief Introduction section and Summary. The Basics of CEST Contrast section contains a description of the relevant background theory for saturation transfer and frequency labeled transfer, and a brief discussion of methods to determine exchange rates. The Implementation section contains a description of the practical considerations in conducting CEST MRI studies, including choice of magnetic field, pulse sequence, saturation pulse, imaging scheme, and strategies to separate MT and CEST. The Post-Processing section contains a description of the typical image processing employed for B0/B1 correction, Z-spectral interpolation, frequency selective detection, and improving CEST contrast maps. PMID:23303716

  8. Prayer as therapeutic process toward transforming destructiveness within a spiritual direction relationship.

    PubMed

    Kuchan, Karen L

    2011-03-01

    This article will expand previous conceptualizations (Kuchan, Presence Int J Spiritual Dir 12(4):22-34, 2006; J Religion Health 47(2):263-275, 2008; J Pastoral Care Counsel, forthcoming) of what might be occurring during a prayer practice that creates space within a spiritual direction relationship for the creation of inner images that reveal a person's unconscious relational longings and co-created representations of God that seem to facilitate therapeutic process toward aliveness. In previous articles, I suggest one way to understand the prayer experience is through a lens of Winnicottian notions of transitional space, illusion, and co-creation of God images. This article expands on these ideas to include an understanding of God as Objective Other (Lewis, The four loves, 1960) interacting with a part of a person's self (Jung, in: The structure and dynamics of the psyche, collected works 8, 1934; Symington, Narcissism, a new theory, 1993) that has capacity for subjectivity (Benjamin, Like subjects, love objects: Essays on recognition and sexual difference, 1995) and co-creation (Winnicott, Home is where we start from: Essays by a psychoanalyst, 1990), of inner representations of God (Ulanov, Winnicott, god and psychic reality, 2001). I also expand on a notion of God as "Source of aliveness" by integrating an aspect of how Symington (Narcissism, a new theory, 1993) thinks about "the lifegiver," which he understands to be a mental object. After offering this theoretical expansion of the prayer practice/experience, one woman's inner representations of self and God are reflected upon in terms of a therapeutic process toward transforming destructiveness, utilizing ideas from Winnicott, Kohut, and Benjamin.

  9. Long-range speckle imaging theory, simulation, and brassboard results

    NASA Astrophysics Data System (ADS)

    Riker, Jim F.; Tyler, Glenn A.; Vaughn, Jeff L.

    2017-09-01

    In the SPIE 2016 Unconventional Imaging session, the authors laid out a breakthrough new theory for active array imaging that exploits the speckle return to generate a high-resolution picture of the target. Since then, we have pursued that theory even in long-range (<1000-km) engagement scenarios and shown how we can obtain that high-resolution image of the target using only a few illuminators, or by using many illuminators. There is a trade of illuminators versus receivers, but many combinations provide the same synthetic aperture resolution. We will discuss that trade, along with the corresponding radiometric and speckle-imaging Signal-to-Noise Ratios (SNR) for geometries that can fit on relatively small aircraft, such as an Unmanned Aerial Vehicle (UAV). Furthermore, we have simulated the performance of the technique, and we have created a laboratory version of the approach that is able to obtain high-resolution speckle imagery. The principal results presented in this paper are the Signal to Noise Ratios (SNR) for both the radiometric and the speckle imaging portions of the problem, and the simulated results obtained for representative arrays.

  10. The practical application of signal detection theory to image quality assessment in x-ray image intensifier-TV fluoroscopy.

    PubMed

    Marshall, N W

    2001-06-01

    This paper applies a published version of signal detection theory to x-ray image intensifier fluoroscopy data and compares the results with more conventional subjective image quality measures. An eight-bit digital framestore was used to acquire temporally contiguous frames of fluoroscopy data from which the modulation transfer function (MTF(u)) and noise power spectrum were established. These parameters were then combined to give detective quantum efficiency (DQE(u)) and used in conjunction with signal detection theory to calculate contrast-detail performance. DQE(u) was found to lie between 0.1 and 0.5 for a range of fluoroscopy systems. Two separate image quality experiments were then performed in order to assess the correspondence between the objective and subjective methods. First, image quality for a given fluoroscopy system was studied as a function of doserate using objective parameters and a standard subjective contrast-detail method. Following this, the two approaches were used to assess three different fluoroscopy units. Agreement between objective and subjective methods was good; doserate changes were modelled correctly while both methods ranked the three systems consistently.
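
    The objective calculation described here can be summarized in a few lines: combine the measured MTF with the normalized noise power spectrum (NNPS) and the photon fluence as DQE(u) = MTF^2(u) / (q · NNPS(u)). The sketch below uses illustrative array values, not measured fluoroscopy data.

      import numpy as np

      u = np.linspace(0.05, 2.0, 40)          # spatial frequency, cycles/mm
      mtf = np.exp(-1.5 * u)                  # measured MTF(u) (illustrative shape)
      nnps = 8.0e-5 * (1.0 + 0.5 * u)         # normalised NPS, mm^2 (illustrative)
      q = 25_000.0                            # incident fluence, photons/mm^2

      dqe = mtf ** 2 / (q * nnps)             # DQE(u) = MTF^2 / (q * NNPS)
      print("DQE at the lowest frequency:", round(float(dqe[0]), 2))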

  11. Recent Advances of Malaria Parasites Detection Systems Based on Mathematical Morphology

    PubMed Central

    Di Ruberto, Cecilia; Kocher, Michel

    2018-01-01

    Malaria is an epidemic health disease and a rapid, accurate diagnosis is necessary for proper intervention. Generally, pathologists visually examine blood stained slides for malaria diagnosis. Nevertheless, this kind of visual inspection is subjective, error-prone and time-consuming. In order to overcome the issues, numerous methods of automatic malaria diagnosis have been proposed so far. In particular, many researchers have used mathematical morphology as a powerful tool for computer aided malaria detection and classification. Mathematical morphology is not only a theory for the analysis of spatial structures, but also a very powerful technique widely used for image processing purposes and employed successfully in biomedical image analysis, especially in preprocessing and segmentation tasks. Microscopic image analysis and particularly malaria detection and classification can greatly benefit from the use of morphological operators. The aim of this paper is to present a review of recent mathematical morphology based methods for malaria parasite detection and identification in stained blood smears images. PMID:29419781

  12. Recent Advances of Malaria Parasites Detection Systems Based on Mathematical Morphology.

    PubMed

    Loddo, Andrea; Di Ruberto, Cecilia; Kocher, Michel

    2018-02-08

    Malaria is an epidemic health disease and a rapid, accurate diagnosis is necessary for proper intervention. Generally, pathologists visually examine blood stained slides for malaria diagnosis. Nevertheless, this kind of visual inspection is subjective, error-prone and time-consuming. In order to overcome the issues, numerous methods of automatic malaria diagnosis have been proposed so far. In particular, many researchers have used mathematical morphology as a powerful tool for computer aided malaria detection and classification. Mathematical morphology is not only a theory for the analysis of spatial structures, but also a very powerful technique widely used for image processing purposes and employed successfully in biomedical image analysis, especially in preprocessing and segmentation tasks. Microscopic image analysis and particularly malaria detection and classification can greatly benefit from the use of morphological operators. The aim of this paper is to present a review of recent mathematical morphology based methods for malaria parasite detection and identification in stained blood smears images.
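
    A minimal example of the kind of morphological preprocessing surveyed here is sketched below with scipy.ndimage: threshold a stain-sensitive channel, clean the mask with opening and closing, and discard tiny components. Real pipelines of the sort reviewed add colour normalization, cell segmentation and a classification stage; the threshold and size values are arbitrary.

      import numpy as np
      from scipy import ndimage as ndi

      def candidate_parasite_mask(stain_channel, threshold=0.5, min_size=30):
          # Threshold a stain-sensitive channel, clean the mask with
          # morphological opening/closing, and drop tiny components.
          mask = stain_channel > threshold
          mask = ndi.binary_opening(mask, structure=np.ones((3, 3)))
          mask = ndi.binary_closing(mask, structure=np.ones((3, 3)))
          labels, n = ndi.label(mask)
          sizes = ndi.sum(mask, labels, index=np.arange(1, n + 1))
          keep = np.isin(labels, np.nonzero(sizes >= min_size)[0] + 1)
          return keep, int((sizes >= min_size).sum())

      # Toy usage: two bright blobs plus salt noise
      rng = np.random.default_rng(6)
      img = rng.random((128, 128)) * 0.3
      img[rng.random((128, 128)) < 0.01] = 0.9     # isolated specks
      img[30:42, 40:52] = 0.9
      img[80:88, 90:98] = 0.9
      mask, n_objects = candidate_parasite_mask(img)
      print("candidate objects:", n_objects)       # expected: 2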

  13. Perceptual load-dependent neural correlates of distractor interference inhibition.

    PubMed

    Xu, Jiansong; Monterosso, John; Kober, Hedy; Balodis, Iris M; Potenza, Marc N

    2011-01-18

    The load theory of selective attention hypothesizes that distractor interference is suppressed after perceptual processing (i.e., in the later stage of central processing) at low perceptual load of the central task, but in the early stage of perceptual processing at high perceptual load. Consistently, studies on the neural correlates of attention have found a smaller distractor-related activation in the sensory cortex at high relative to low perceptual load. However, it is not clear whether the distractor-related activation in brain regions linked to later stages of central processing (e.g., in the frontostriatal circuits) is also smaller at high rather than low perceptual load, as might be predicted based on the load theory. We studied 24 healthy participants using functional magnetic resonance imaging (fMRI) during a visual target identification task with two perceptual loads (low vs. high). Participants showed distractor-related increases in activation in the midbrain, striatum, occipital and medial and lateral prefrontal cortices at low load, but distractor-related decreases in activation in the midbrain ventral tegmental area and substantia nigra (VTA/SN), striatum, thalamus, and extensive sensory cortices at high load. Multiple levels of central processing involving midbrain and frontostriatal circuits participate in suppressing distractor interference at either low or high perceptual load. For suppressing distractor interference, the processing of sensory inputs in both early and late stages of central processing are enhanced at low load but inhibited at high load.

  14. The price of your soul: neural evidence for the non-utilitarian representation of sacred values

    PubMed Central

    Berns, Gregory S.; Bell, Emily; Capra, C. Monica; Prietula, Michael J.; Moore, Sara; Anderson, Brittany; Ginges, Jeremy; Atran, Scott

    2012-01-01

    Sacred values, such as those associated with religious or ethnic identity, underlie many important individual and group decisions in life, and individuals typically resist attempts to trade off their sacred values in exchange for material benefits. Deontological theory suggests that sacred values are processed based on rights and wrongs irrespective of outcomes, while utilitarian theory suggests that they are processed based on costs and benefits of potential outcomes, but which mode of processing an individual naturally uses is unknown. The study of decisions over sacred values is difficult because outcomes cannot typically be realized in a laboratory, and hence little is known about the neural representation and processing of sacred values. We used an experimental paradigm that used integrity as a proxy for sacredness and which paid real money to induce individuals to sell their personal values. Using functional magnetic resonance imaging (fMRI), we found that values that people refused to sell (sacred values) were associated with increased activity in the left temporoparietal junction and ventrolateral prefrontal cortex, regions previously associated with semantic rule retrieval. This suggests that sacred values affect behaviour through the retrieval and processing of deontic rules and not through a utilitarian evaluation of costs and benefits. PMID:22271790

  15. The price of your soul: neural evidence for the non-utilitarian representation of sacred values.

    PubMed

    Berns, Gregory S; Bell, Emily; Capra, C Monica; Prietula, Michael J; Moore, Sara; Anderson, Brittany; Ginges, Jeremy; Atran, Scott

    2012-03-05

    Sacred values, such as those associated with religious or ethnic identity, underlie many important individual and group decisions in life, and individuals typically resist attempts to trade off their sacred values in exchange for material benefits. Deontological theory suggests that sacred values are processed based on rights and wrongs irrespective of outcomes, while utilitarian theory suggests that they are processed based on costs and benefits of potential outcomes, but which mode of processing an individual naturally uses is unknown. The study of decisions over sacred values is difficult because outcomes cannot typically be realized in a laboratory, and hence little is known about the neural representation and processing of sacred values. We used an experimental paradigm that used integrity as a proxy for sacredness and which paid real money to induce individuals to sell their personal values. Using functional magnetic resonance imaging (fMRI), we found that values that people refused to sell (sacred values) were associated with increased activity in the left temporoparietal junction and ventrolateral prefrontal cortex, regions previously associated with semantic rule retrieval. This suggests that sacred values affect behaviour through the retrieval and processing of deontic rules and not through a utilitarian evaluation of costs and benefits.

  16. On consciousness, resting state fMRI, and neurodynamics

    PubMed Central

    2010-01-01

    Background In recent years, functional magnetic resonance imaging (fMRI) of the brain has been introduced as a new tool to measure consciousness, both in clinical settings and in basic neurocognitive research. Moreover, advanced mathematical methods and theories have entered the field of fMRI (e.g. computational neuroimaging), and functional and structural brain connectivity can now be assessed non-invasively. Results The present work takes a pluralistic approach to "consciousness", connecting theory and tools from three quite different disciplines: (1) philosophy of mind (emergentism and global workspace theory), (2) functional neuroimaging acquisition, and (3) the theory of deterministic and statistical neurodynamics, in particular the Wilson-Cowan model and stochastic resonance. Conclusions Based on recent experimental and theoretical work, we believe that the study of the large-scale neuronal processes (activity fluctuations, state transitions) that go on in the living human brain while examined with functional MRI during the "resting state" can deepen our understanding of graded consciousness in a clinical setting and clarify the concept of "consciousness" in neurocognitive and neurophilosophical research. PMID:20522270
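
    For readers unfamiliar with the deterministic side of the neurodynamics mentioned above, the sketch below integrates a minimal (refractory-free) Wilson-Cowan excitatory/inhibitory rate model; parameter values are generic textbook-style choices, not those of any particular study.

      import numpy as np
      from scipy.integrate import solve_ivp

      def sigmoid(x, a=1.2, theta=2.8):
          return 1.0 / (1.0 + np.exp(-a * (x - theta)))

      def wilson_cowan(t, s, P=1.25, Q=0.0, wEE=16.0, wEI=12.0,
                       wIE=15.0, wII=3.0, tauE=8.0, tauI=8.0):
          # Refractory-free excitatory (E) / inhibitory (I) rate equations
          E, I = s
          dE = (-E + sigmoid(wEE * E - wEI * I + P)) / tauE
          dI = (-I + sigmoid(wIE * E - wII * I + Q)) / tauI
          return [dE, dI]

      sol = solve_ivp(wilson_cowan, (0.0, 500.0), [0.1, 0.05],
                      t_eval=np.linspace(0.0, 500.0, 2000))
      E, I = sol.y
      print("E range:", E.min(), "-", E.max())   # a fixed point or a limit
                                                 # cycle, depending on drive P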

  17. The Study of Imperfection in Rough Set on the Field of Engineering and Education

    NASA Astrophysics Data System (ADS)

    Sheu, Tian-Wei; Liang, Jung-Chin; You, Mei-Li; Wen, Kun-Li

    Because of its characteristics, rough set theory overlaps with many other theories, especially fuzzy set theory, evidence theory and Boolean reasoning methods, and the rough set methodology has found many real-life applications, such as medical data analysis, finance, banking, engineering, voice recognition, image processing and others. To date, however, there has been little research on the imperfection of rough sets. Hence, the main purpose of this paper is to study the imperfection of rough sets in the fields of engineering and education. First, we review the mathematical model of rough sets and give two examples to illustrate our approach: the weighting of influence factors in a muzzle noise suppressor, and the weighting of evaluation factors in English learning. We also use MATLAB to develop a complete toolbox with a human-machine interface to support the complex calculations and the verification of large data sets. Finally, some suggestions for future research are given.
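
    The core rough-set constructions the paper builds on, lower and upper approximations and the resulting accuracy measure, are illustrated below on a tiny hypothetical decision table (not the muzzle-suppressor or English-learning data).

      from collections import defaultdict

      objects = {            # object -> ((condition attributes), decision)
          "x1": (("high", "yes"), "pass"),
          "x2": (("high", "yes"), "pass"),
          "x3": (("high", "no"), "fail"),
          "x4": (("low", "no"), "fail"),
          "x5": (("high", "no"), "pass"),   # conflicts with x3 -> boundary region
      }

      # Indiscernibility classes under the condition attributes
      classes = defaultdict(set)
      for name, (cond, _) in objects.items():
          classes[cond].add(name)

      target = {name for name, (_, dec) in objects.items() if dec == "pass"}
      lower = set().union(*(c for c in classes.values() if c <= target))
      upper = set().union(*(c for c in classes.values() if c & target))

      print("lower approximation:", sorted(lower))    # ['x1', 'x2']
      print("upper approximation:", sorted(upper))    # ['x1', 'x2', 'x3', 'x5']
      print("accuracy:", len(lower) / len(upper))     # 0.5 -> the concept is rough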

  18. Adaptive restoration of a partially coherent blurred image using an all-optical feedback interferometer with a liquid-crystal device.

    PubMed

    Shirai, Tomohiro; Barnes, Thomas H

    2002-02-01

    A liquid-crystal adaptive optics system using all-optical feedback interferometry is applied to partially coherent imaging through a phase disturbance. A theoretical analysis based on the propagation of the cross-spectral density shows that the blurred image due to the phase disturbance can be restored, in principle, irrespective of the state of coherence of the light illuminating the object. Experimental verification of the theory has been performed for two cases when the object to be imaged is illuminated by spatially coherent light originating from a He-Ne laser and by spatially incoherent white light from a halogen lamp. We observed in both cases that images blurred by the phase disturbance were successfully restored, in agreement with the theory, immediately after the adaptive optics system was activated. The origin of the deviation of the experimental results from the theory, together with the effect of the feedback misalignment inherent in our optical arrangement, is also discussed.

  19. Using Personal Construct Theory to Explore Self-Image with Adolescents with Learning Disabilities

    ERIC Educational Resources Information Center

    Thomas, Samantha; Butler, Richard; Hare, Dougal Julian; Green, David

    2011-01-01

    A young person's construct of self can be fundamental to their psychological well being (Glick 1999; Emler 2001). However limited research has been conducted in the United Kingdom to explore self-image with adolescents with learning disabilities. Previous studies have demonstrated the effective use of personal construct theory with children…

  20. The Role and Design of Screen Images in Software Documentation.

    ERIC Educational Resources Information Center

    van der Meij, Hans

    2000-01-01

    Discussion of learning a new computer software program focuses on how to support the joint handling of a manual, input devices, and screen display. Describes a study that examined three design styles for manuals that included screen images to reduce split-attention problems and discusses theory versus practice and cognitive load theory.…

  1. Semantics of User Interface for Image Retrieval: Possibility Theory and Learning Techniques.

    ERIC Educational Resources Information Center

    Crehange, M.; And Others

    1989-01-01

    Discusses the need for a rich semantics for the user interface in interactive image retrieval and presents two methods for building such interfaces: possibility theory applied to fuzzy data retrieval, and a machine learning technique applied to learning the user's deep need. Prototypes developed using videodisks and knowledge-based software are…

  2. Functional dissociation of stimulus intensity encoding and predictive coding of pain in the insula

    PubMed Central

    Geuter, Stephan; Boll, Sabrina; Eippert, Falk; Büchel, Christian

    2017-01-01

    The computational principles by which the brain creates a painful experience from nociception are still unknown. Classic theories suggest that cortical regions reflect either stimulus intensity or additive effects of intensity and expectations. By contrast, predictive coding theories provide a unified framework explaining how perception is shaped by the integration of beliefs about the world with mismatches resulting from the comparison of these beliefs against sensory input. Using functional magnetic resonance imaging during a probabilistic heat pain paradigm, we investigated which computations underlie pain perception. Skin conductance, pupil dilation, and anterior insula responses to cued pain stimuli strictly followed the response patterns hypothesized by the predictive coding model, whereas posterior insula encoded stimulus intensity. This functional dissociation of pain processing within the insula, together with previously observed alterations in chronic pain, offers a novel interpretation of aberrant pain processing as disturbed weighting of predictions and prediction errors. DOI: http://dx.doi.org/10.7554/eLife.24770.001 PMID:28524817
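
    The predictive-coding pattern reported for the anterior insula can be caricatured with a one-line precision-weighted update, sketched below with illustrative numbers (the function name and values are ours, not the study's fitted model): the same stimulus is represented as more intense after a high-pain cue than after a low-pain cue, in proportion to the relative weighting of prior and sensory precision.

      def perceived_intensity(cue_expectation, stimulus,
                              prior_precision=1.0, sensory_precision=1.0):
          # Percept = prior expectation + precision-weighted prediction error
          weight = sensory_precision / (sensory_precision + prior_precision)
          return cue_expectation + weight * (stimulus - cue_expectation)

      # The same medium stimulus is felt as less intense after a low-pain cue
      # than after a high-pain cue (illustrative units):
      print(perceived_intensity(cue_expectation=30, stimulus=50))   # 40.0
      print(perceived_intensity(cue_expectation=70, stimulus=50))   # 60.0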

  3. [About creativity].

    PubMed

    Rubia Vila, Francisco José

    2006-01-01

    The term 'creativity', understood as producing something out of nothing, is not accurate. A definition that includes the establishment, founding or introduction of something new for the first time would be more appropriate. The most accurate interpretation of the creative process is the one proposed by Albert Rothenberg, who hypothesizes that creativity is due to what he calls 'janusian thinking', characterized by conceiving simultaneously two or more opposed ideas, images or concepts. Two examples illustrate this way of thinking: one is Einstein's General Theory of Relativity and the other is Darwin's Theory of Natural Selection. Overcoming dualistic thinking while keeping full consciousness, that is, using both the primary and the secondary processes postulated by Freud, would be the key to creative thinking. From a neurophysiological point of view, it is very likely that the right hemisphere is the one more closely connected to creativity, given that creativity is a mental state that requires non-focalized attention, greater right-hemisphere activation, and low levels of prefrontal cortical activation allowing cognitive inhibition.

  4. How are the Concepts and Theories of Acid Base Reactions Presented? Chemistry in Textbooks and as Presented by Teachers

    NASA Astrophysics Data System (ADS)

    Furió-Más, Carlos; Calatayud, María Luisa; Guisasola, Jenaro; Furió-Gómez, Cristina

    2005-09-01

    This paper investigates the views of science and scientific activity that can be found in chemistry textbooks and heard from teachers when acid base reactions are introduced to grade 12 and university chemistry students. First, the main macroscopic and microscopic conceptual models are developed. Second, we attempt to show how the existence of views of science in textbooks and of chemistry teachers contributes to an impoverished image of chemistry. A varied design has been elaborated to analyse some epistemological deficiencies in teaching acid base reactions. Textbooks have been analysed and teachers have been interviewed. The results obtained show that the teaching process does not emphasize the macroscopic presentation of acids and bases. Macroscopic and microscopic conceptual models involved in the explanation of acid base processes are mixed in textbooks and by teachers. Furthermore, the non-problematic introduction of concepts, such as the hydrolysis concept, and the linear, cumulative view of acid base theories (Arrhenius and Brönsted) were detected.

  5. A circumstellar disk associated with a massive protostellar object.

    PubMed

    Jiang, Zhibo; Tamura, Motohide; Fukagawa, Misato; Hough, Jim; Lucas, Phil; Suto, Hiroshi; Ishii, Miki; Yang, Ji

    2005-09-01

    The formation process for stars with masses several times that of the Sun is still unclear. The two main theories are mergers of several low-mass young stellar objects, which requires a high stellar density, or mass accretion from circumstellar disks in the same way as low-mass stars are formed, accompanied by outflows during the process of gravitational infall. Although a number of disks have been discovered around low- and intermediate-mass young stellar objects, the presence of disks around massive young stellar objects is still uncertain and the mass of the disk system detected around one such object, M17, is disputed. Here we report near-infrared imaging polarimetry that reveals an outflow/disk system around the Becklin-Neugebauer protostellar object, which has a mass of at least seven solar masses (M⊙). This strongly supports the theory that stars with masses of at least 7 M⊙ form in the same way as lower mass stars.

  6. Mapping injustice, visualizing equity: why theory, metaphors and images matter in tackling inequalities.

    PubMed

    Krieger, N; Dorling, D; McCartney, G

    2012-03-01

    This symposium discussed "Mapping injustice, visualizing equity: why theory, metaphors and images matter in tackling inequalities". It sought to provoke critical thinking about the current theories used to analyze the health impact of injustice, variously referred to as "health inequalities" in the UK, "social inequalities in health" in the US, and "health inequities" more globally. Our focus was the types of explanations, images, and metaphors these theories employ. Building on frameworks that emphasize politics, agency, and accountability, we suggested that it was essential to engage the general public in the politics of health inequities if progress is to be made. We showcased some examples of such engagement before inviting the audience to consider how this might apply in their own areas of responsibility. Copyright © 2012 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.

  7. Conclusiveness of natural languages and recognition of images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wojcik, Z.M.

    1983-01-01

    Conclusiveness is investigated using recognition processes and a one-to-one correspondence between expressions of a natural language and graphs representing events. The graphs, as conceived in psycholinguistics, are obtained as a result of perception processes. It is possible to generate and process the graphs automatically using computers, and then to convert the resulting graphs into expressions of a natural language. Correctness and conclusiveness of the graphs and sentences are investigated using the fundamental condition for event representation processes. Some consequences of conclusiveness are discussed, e.g., the undecidability of arithmetic, human brain asymmetry, and the correctness of statistical calculations and operations research. It is suggested that group theory should be imposed on mathematical models of any real system. A proof of the fundamental condition is also presented. 14 references.

  8. Prefrontal Cortex, Emotion, and Approach/Withdrawal Motivation

    PubMed Central

    Spielberg, Jeffrey M.; Stewart, Jennifer L.; Levin, Rebecca L.; Miller, Gregory A.; Heller, Wendy

    2010-01-01

    This article provides a selective review of the literature and current theories regarding the role of prefrontal cortex, along with some other critical brain regions, in emotion and motivation. Seemingly contradictory findings have often appeared in this literature. Research attempting to resolve these contradictions has been the basis of new areas of growth and has led to more sophisticated understandings of emotional and motivational processes as well as neural networks associated with these processes. Progress has, in part, depended on methodological advances that allow for increased resolution in brain imaging. A number of issues are currently in play, among them the role of prefrontal cortex in emotional or motivational processes. This debate fosters research that will likely lead to further refinement of conceptualizations of emotion, motivation, and the neural processes associated with them. PMID:20574551

  9. Understanding the Psychological Process of Avoidance-Based Self-Regulation on Facebook.

    PubMed

    Marder, Ben; Houghton, David; Joinson, Adam; Shankar, Avi; Bull, Eleanor

    2016-05-01

    In relation to social network sites, prior research has documented behaviors (e.g., censoring) that individuals enact to avoid projecting an undesired image to their online audiences. However, no work directly examines the psychological process underpinning such behavior. Drawing upon the theory of self-focused attention and related literature, a model is proposed to fill this research gap. Two studies examine the process whereby public self-awareness (stimulated by engaging with Facebook) leads to a self-comparison with audience expectations and, if discrepant, an increase in social anxiety, which results in the intention to perform avoidance-based self-regulation. By finding support for this process, this research contributes an extended understanding of the psychological factors leading to avoidance-based regulation when online selves are subject to surveillance.

  10. Prefrontal Cortex, Emotion, and Approach/Withdrawal Motivation.

    PubMed

    Spielberg, Jeffrey M; Stewart, Jennifer L; Levin, Rebecca L; Miller, Gregory A; Heller, Wendy

    2008-01-01

    This article provides a selective review of the literature and current theories regarding the role of prefrontal cortex, along with some other critical brain regions, in emotion and motivation. Seemingly contradictory findings have often appeared in this literature. Research attempting to resolve these contradictions has been the basis of new areas of growth and has led to more sophisticated understandings of emotional and motivational processes as well as neural networks associated with these processes. Progress has, in part, depended on methodological advances that allow for increased resolution in brain imaging. A number of issues are currently in play, among them the role of prefrontal cortex in emotional or motivational processes. This debate fosters research that will likely lead to further refinement of conceptualizations of emotion, motivation, and the neural processes associated with them.

  11. Detectability of radiological images: the influence of anatomical noise

    NASA Astrophysics Data System (ADS)

    Bochud, Francois O.; Verdun, Francis R.; Hessler, Christian; Valley, Jean-Francois

    1995-04-01

    Radiological image quality can be objectively quantified with statistical decision theory. This theory is commonly applied using the noise of the imaging system alone (quantum, screen and film noise), whereas the noise actually present in the image is the 'anatomical noise' (the sum of the system noise and the anatomical texture). This anatomical texture should play a role in the detection task. This paper compares these two kinds of noise by performing 2AFC experiments and computing the area under the ROC curve. It is shown that the 'anatomical noise' cannot be treated as noise in the sense of the Wiener-spectrum approach, and that, for a small object to be detected, detection performance is the same as that obtained with the system noise alone. Furthermore, statistical decision theory with a non-prewhitening observer does not match the experimental results. This is especially the case at low contrast values, for which the theory predicts an increase in detectability as soon as the contrast differs from zero, whereas the experimental results show a contrast offset below which detectability is purely random. The theory therefore needs to be improved to take this result into account.
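
    The 2AFC experiments and ROC areas mentioned above rely on the standard equivalence between 2AFC proportion correct and the area under the ROC curve; a minimal simulation of that equivalence (invented Gaussian decision variables, not the paper's data) is sketched below:

```python
# Illustration (invented Gaussian decision variables, not the paper's data): the
# proportion correct in a two-alternative forced-choice (2AFC) task equals the
# area under the ROC curve, here estimated with a Mann-Whitney-style pairwise count.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
noise_only = rng.normal(0.0, 1.0, n)          # decision variable without the object
signal_plus_noise = rng.normal(1.0, 1.0, n)   # decision variable with the object (d' = 1)

# 2AFC: the observer picks the interval with the larger decision variable.
pc_2afc = np.mean(signal_plus_noise > noise_only)

# Area under the ROC curve via all pairwise comparisons (ties counted as 1/2).
greater = np.sum(signal_plus_noise[:, None] > noise_only[None, :])
equal = np.sum(signal_plus_noise[:, None] == noise_only[None, :])
auc = (greater + 0.5 * equal) / n**2

print(round(pc_2afc, 3), round(auc, 3))  # both close to 0.76 for d' = 1
```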

  12. Reducing Interpolation Artifacts for Mutual Information Based Image Registration

    PubMed Central

    Soleimani, H.; Khosravifard, M.A.

    2011-01-01

    Medical image registration methods that use mutual information as a similarity measure have improved considerably in recent decades. Mutual information is a basic concept of information theory that quantifies the dependency of two random variables (or two images). Evaluating the mutual information of two images requires their joint probability distribution, which is estimated with interpolation methods such as Partial Volume (PV) and bilinear interpolation. Both of these methods introduce artifacts into the mutual information function. The Partial Volume-Hanning window (PVH) and Generalized Partial Volume (GPV) methods were introduced to remove such artifacts. In this paper we show that the acceptable performance of these methods is not due to their kernel function but to the number of pixels involved in the interpolation. Since using more pixels requires a more complex and time-consuming interpolation process, we propose a new interpolation method that uses only four pixels (the same as PV and bilinear interpolation) and removes most of the artifacts. Experimental results on the registration of Computed Tomography (CT) images show the superiority of the proposed scheme. PMID:22606673
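
    A minimal sketch of the mutual information computation that such registration methods optimize, assuming grayscale images stored as NumPy arrays and a plain joint histogram rather than the PV/PVH/GPV interpolators discussed in the record:

```python
# Minimal mutual information estimate from a joint intensity histogram (assumes
# grayscale images as equal-sized NumPy arrays; uses a plain 2-D histogram rather
# than the PV/PVH/GPV interpolators discussed in the record).
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """MI(A; B) in nats, estimated from the joint histogram of intensities."""
    joint_hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = joint_hist / joint_hist.sum()
    p_a = p_ab.sum(axis=1, keepdims=True)   # marginal of image A
    p_b = p_ab.sum(axis=0, keepdims=True)   # marginal of image B
    nz = p_ab > 0
    return float(np.sum(p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz])))

# An image is more informative about itself than about an unrelated image.
rng = np.random.default_rng(1)
fixed = rng.random((64, 64))
print(mutual_information(fixed, fixed) > mutual_information(fixed, rng.random((64, 64))))  # True
```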

  13. An Optimal Image-Based Method for Identification of Acoustic Emission (AE) Sources in Plate-Like Structures Using a Lead Zirconium Titanate (PZT) Sensor Array.

    PubMed

    Yan, Gang; Zhou, Li

    2018-02-21

    This paper proposes an innovative method for identifying the locations of multiple simultaneous acoustic emission (AE) events in plate-like structures from the view of image processing. By using a linear lead zirconium titanate (PZT) sensor array to record the AE wave signals, a reverse-time frequency-wavenumber (f-k) migration is employed to produce images displaying the locations of AE sources by back-propagating the AE waves. Lamb wave theory is included in the f-k migration to consider the dispersive property of the AE waves. Since the exact occurrence time of the AE events is usually unknown when recording the AE wave signals, a heuristic artificial bee colony (ABC) algorithm combined with an optimal criterion using minimum Shannon entropy is used to find the image with the identified AE source locations and occurrence time that mostly approximate the actual ones. Experimental studies on an aluminum plate with AE events simulated by PZT actuators are performed to validate the applicability and effectiveness of the proposed optimal image-based AE source identification method.
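
    A rough sketch of the minimum-Shannon-entropy selection criterion mentioned above, applied to invented candidate images rather than real migrated AE wavefields:

```python
# Rough sketch of the minimum-Shannon-entropy selection idea: among candidate
# images (invented arrays here, not real migrated AE wavefields), prefer the one
# whose normalized energy distribution has the lowest entropy, i.e. the most
# sharply focused source spot.
import numpy as np

def image_entropy(img):
    p = np.abs(img).ravel()
    p = p / p.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

rng = np.random.default_rng(2)
diffuse = rng.random((50, 50))            # energy smeared over the whole plate image
focused = np.zeros((50, 50))
focused[25, 25] = 1.0                     # energy concentrated at one candidate source

print(image_entropy(focused) < image_entropy(diffuse))  # True: the focused image is preferred
```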

  14. An Optimal Image-Based Method for Identification of Acoustic Emission (AE) Sources in Plate-Like Structures Using a Lead Zirconium Titanate (PZT) Sensor Array

    PubMed Central

    Zhou, Li

    2018-01-01

    This paper proposes an innovative method for identifying the locations of multiple simultaneous acoustic emission (AE) events in plate-like structures from the view of image processing. By using a linear lead zirconium titanate (PZT) sensor array to record the AE wave signals, a reverse-time frequency-wavenumber (f-k) migration is employed to produce images displaying the locations of AE sources by back-propagating the AE waves. Lamb wave theory is included in the f-k migration to consider the dispersive property of the AE waves. Since the exact occurrence time of the AE events is usually unknown when recording the AE wave signals, a heuristic artificial bee colony (ABC) algorithm combined with an optimal criterion using minimum Shannon entropy is used to find the image with the identified AE source locations and occurrence time that mostly approximate the actual ones. Experimental studies on an aluminum plate with AE events simulated by PZT actuators are performed to validate the applicability and effectiveness of the proposed optimal image-based AE source identification method. PMID:29466310

  15. Hurricane Imaging Radiometer Wind Speed and Rain Rate Retrievals during the 2010 GRIP Flight Experiment

    NASA Technical Reports Server (NTRS)

    Sahawneh, Saleem; Farrar, Spencer; Johnson, James; Jones, W. Linwood; Roberts, Jason; Biswas, Sayak; Cecil, Daniel

    2014-01-01

    Microwave remote sensing observations of hurricanes from NOAA and USAF hurricane surveillance aircraft provide vital data for hurricane research and operations and for forecasting the intensity and track of tropical storms. The current operational standard for hurricane wind speed and rain rate measurements is the Stepped Frequency Microwave Radiometer (SFMR), a nadir-viewing passive microwave airborne remote sensor. The Hurricane Imaging Radiometer, HIRAD, will extend the nadir-viewing SFMR capability to provide wide-swath images of wind speed and rain rate while flying on a high-altitude aircraft. HIRAD was first flown in the Genesis and Rapid Intensification Processes (GRIP) NASA hurricane field experiment in 2010. This paper reports on geophysical retrieval results and provides hurricane images from GRIP flights. An overview of the HIRAD instrument and its radiative-transfer-based wind speed/rain rate retrieval algorithm is included. Results are presented for hurricane wind speed and rain rate for Earl and Karl, with comparisons to collocated SFMR retrievals and WP3D Fuselage Radar images for validation purposes.

  16. Understanding the experiences of people with disfigurements: An integration of four models of social and psychological functioning.

    PubMed

    Kent, G

    2000-05-01

    Both psychological (Cash, 1996; Partridge, 1998; Leary et al., 1998) and sociological (Goffman, 1968) models have been used to explain the personal and social consequences of cosmetic blemishes. In this study, people with the skin disease vitiligo were asked to describe a situation in which their condition had recently affected their lives. Consistent with theories of body image disturbance, incidents usually involved a triggering event when concerns about appearance were raised due to bodily exposure or enacted stigma. These events led respondents to be vigilant to others' behaviour, to be self-conscious and to attribute the cause of the event to their appearance. Theories of social anxiety could be used to account for how the respondents used impression management strategies such as avoidance and concealment. Respondents described how they could be uncertain as to how to deal with others' behaviour, illustrating the relevance of social skills models. In addition, avoidance/concealment had a number of social and personal costs, including the loss of valued activities, reluctance to develop intimate relationships and continuing anxiety. Thus, theories of body image, social anxiety, social skills and the sociology of stigma could be used to understand the respondents' experiences. It seems likely that therapeutic interventions based on different models are useful because they influence different aspects of the above process.

  17. The Martian Outflow Channels: Mgs Sheds New Light On Viking and Pathfinder Results

    NASA Astrophysics Data System (ADS)

    Lanz, J.; Jaumann, R.

    The Mars Global Surveyor (MGS) mission has, like most successful missions before it, given stunning new insights into the processes that shaped the Martian surface. But how do these findings and observations fit into the context of our pre-MGS knowledge, and do they fit at all? Combining data from the Viking, Pathfinder and MGS missions, erosion processes in the circum-Chryse region have been newly and extensively examined. Maximum discharge rates and flow velocities within the major outflow channels were calculated, as well as sediment transport and the sediment volumes eroded by the flows, to evaluate the erosion balance of the region. In a second step, a detailed study of the available high-resolution MOC images and lower-resolution MOC and Viking context images was performed to evaluate the geologic and morphologic inventory of the outflow channels. Focusing on morphologic and hydrologic differences from terrestrial outflow channels, as well as differences from earlier pre-MGS studies, theories and hypotheses concerning the outflow channels have been tested for their validity. New hydrologic calculations, for example, give different results than previously reported (e.g. Carr 1979, Robinson & Tanaka 1990, Komatsu & Baker 1997). Maximum discharge rates are generally smaller (see also Williams et al. 2000), in some cases by a factor of 2 to 3 (e.g. Ares Vallis), which has a strong impact on the northern ocean theory. Some morphologic features that are typical of terrestrial flood features (such as inner channels, bar deposits and gravel dunes) could not be clearly identified in any of the large outflow channels, even in high-resolution MOC imagery. Younger resurfacing processes might have covered or obscured them. Others are hard to distinguish from non-fluvial, i.e. eolian, features in satellite images. Nevertheless, the overall absence of such features in the outflow channels is striking and shows again that processes on Mars differ significantly from those on Earth, and similar features might well have different origins. A simple comparison of similarities only will inevitably be misleading or incomplete.
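
    The record does not state how the discharge rates were computed; as a hedged illustration of one common approach for palaeo-flood channels, a Manning-type estimate with invented channel geometry, slope, and roughness (Mars-gravity corrections to the roughness coefficient are ignored for simplicity):

```python
# Hedged illustration only: a Manning-equation discharge estimate for a wide
# rectangular channel. The geometry, slope, and roughness values are invented and
# Mars-gravity corrections are omitted; this is not the calculation from the study.

def manning_discharge(width_m, depth_m, slope, n=0.035):
    """Q = (1/n) * A * R**(2/3) * S**(1/2), SI units."""
    area = width_m * depth_m                              # cross-sectional area A
    hydraulic_radius = area / (width_m + 2.0 * depth_m)   # R = A / wetted perimeter
    velocity = (hydraulic_radius ** (2.0 / 3.0)) * (slope ** 0.5) / n
    return area * velocity                                # discharge in m^3/s

# A 20 km wide, 100 m deep channel on a 0.001 slope (illustrative numbers only).
print(f"{manning_discharge(20_000.0, 100.0, 0.001):.2e} m^3/s")
```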

  18. Empowering potential: a theory of wellness motivation.

    PubMed

    Fleury, J D

    1991-01-01

    Data were collected from 29 individuals who were attempting to initiate and sustain programs of cardiac risk factor modification. Data were analyzed through the technique of constant comparative analysis. Empowering potential, the basic social process identified from the data, explained individual motivation to initiate and sustain cardiovascular health behavior. Empowering potential was a continuous process of individual growth and development which facilitated the emergence of new and positive health patterns. Within the process of empowering potential, individuals use a variety of strategies which guide the initiation and maintenance of health-related change. The process of empowering potential consists of three stages: appraising readiness, changing, and integrating change. Two categories occurred throughout the process of empowering potential: imaging and social support systems. These findings provide a better understanding of how motivated action is initiated and reinitiated over time.

  19. Fusion of Local Statistical Parameters for Buried Underwater Mine Detection in Sonar Imaging

    NASA Astrophysics Data System (ADS)

    Maussang, F.; Rombaut, M.; Chanussot, J.; Hétet, A.; Amate, M.

    2008-12-01

    Detection of buried underwater objects, and especially mines, is currently a crucial strategic task. Images provided by sonar systems able to penetrate the sea floor, such as synthetic aperture sonar (SAS), are of great interest for the detection and classification of such objects. However, the signal-to-noise ratio is fairly low, and advanced information processing is required for correct and reliable detection of the echoes generated by the objects. The detection method proposed in this paper is based on a data-fusion architecture using belief theory. The inputs to this architecture are local statistical characteristics extracted from SAS data, corresponding to the first-, second-, third-, and fourth-order statistical properties of the sonar images, respectively. The relevance of these parameters is derived from a statistical model of the sonar data. Numerical criteria are also proposed to estimate the detection performance and to validate the method.
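
    A rough sketch of the kind of local first- to fourth-order statistics such a fusion architecture takes as input (a generic sliding-window computation, not the authors' implementation):

```python
# Generic sliding-window computation of the four local statistics named above
# (mean, variance, skewness, kurtosis); not the authors' implementation, and the
# test image below is invented speckle-like data.
import numpy as np

def local_moments(img, half_win=2):
    """Return mean, variance, skewness and kurtosis maps over (2*half_win+1)^2 windows."""
    h, w = img.shape
    out = np.zeros((4, h, w))
    for i in range(half_win, h - half_win):
        for j in range(half_win, w - half_win):
            patch = img[i - half_win:i + half_win + 1, j - half_win:j + half_win + 1]
            m = patch.mean()
            v = patch.var()
            out[0, i, j] = m
            out[1, i, j] = v
            out[2, i, j] = ((patch - m) ** 3).mean() / (v ** 1.5 + 1e-12)  # skewness
            out[3, i, j] = ((patch - m) ** 4).mean() / (v ** 2 + 1e-12)    # kurtosis
    return out

rng = np.random.default_rng(3)
sonar_like = rng.rayleigh(1.0, size=(64, 64))     # speckle-like seafloor background
mean_map, var_map, skew_map, kurt_map = local_moments(sonar_like)
```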

  20. A New Standard for Assessing the Performance of High Contrast Imaging Systems

    NASA Astrophysics Data System (ADS)

    Jensen-Clem, Rebecca; Mawet, Dimitri; Gomez Gonzalez, Carlos A.; Absil, Olivier; Belikov, Ruslan; Currie, Thayne; Kenworthy, Matthew A.; Marois, Christian; Mazoyer, Johan; Ruane, Garreth; Tanner, Angelle; Cantalloube, Faustine

    2018-01-01

    As planning for the next generation of high contrast imaging instruments (e.g., WFIRST, HabEx, and LUVOIR, TMT-PFI, EELT-EPICS) matures and second-generation ground-based extreme adaptive optics facilities (e.g., VLT-SPHERE, Gemini-GPI) finish their principal surveys, it is imperative that the performance of different designs, post-processing algorithms, observing strategies, and survey results be compared in a consistent, statistically robust framework. In this paper, we argue that the current industry standard for such comparisons—the contrast curve—falls short of this mandate. We propose a new figure of merit, the “performance map,” that incorporates three fundamental concepts in signal detection theory: the true positive fraction, the false positive fraction, and the detection threshold. By supplying a theoretical basis and recipe for generating the performance map, we hope to encourage the widespread adoption of this new metric across subfields in exoplanet imaging.
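
    A minimal sketch of the three quantities the proposed performance map is built from, evaluated on invented detection scores rather than real post-processed exoplanet images:

```python
# The three ingredients of the proposed performance map: detection threshold,
# true positive fraction (TPF), and false positive fraction (FPF), evaluated on
# invented detection scores rather than real post-processed exoplanet images.
import numpy as np

rng = np.random.default_rng(4)
scores_no_planet = rng.normal(0.0, 1.0, 10000)    # residual speckle-only statistics
scores_with_planet = rng.normal(3.0, 1.0, 200)    # scores at injected companions (S/N ~ 3)

for threshold in (1.0, 3.0, 5.0):
    tpf = np.mean(scores_with_planet >= threshold)   # completeness at this threshold
    fpf = np.mean(scores_no_planet >= threshold)     # false-alarm fraction
    print(f"threshold={threshold:.1f}  TPF={tpf:.2f}  FPF={fpf:.4f}")
```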
