Sample records for resolution limit problem

  1. Easy way to determine quantitative spatial resolution distribution for a general inverse problem

    NASA Astrophysics Data System (ADS)

    An, M.; Feng, M.

    2013-12-01

    Computing the spatial resolution of a solution is nontrivial and often more difficult than solving the inverse problem itself. Most geophysical studies, with the exception of tomographic studies, neglect the calculation of a practical spatial resolution. In seismic tomography, a qualitative resolution length can be indicated by visual inspection of how well a synthetic structure is restored (e.g., checkerboard tests). An effective strategy for obtaining a quantitative resolution length is to calculate Backus-Gilbert resolution kernels (also referred to as a resolution matrix) by matrix operations. However, not all resolution matrices can provide resolution length information, and computing the resolution matrix is often difficult for very large inverse problems. A new class of resolution matrices, the statistical resolution matrices (An, 2012, GJI), can be determined directly via a simple one-parameter nonlinear inversion performed on a limited number of pairs of random synthetic models and their inverse solutions. The entire procedure is restricted to the forward/inversion processes used in the real inverse problem and is independent of the degree of inverse skill used in the solution inversion. Spatial resolution lengths are obtained directly during the inversion. Tests on 1D/2D/3D model inversions demonstrate that this simple method is valid at least for general linear inverse problems.
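
An illustrative sketch of the resolution-matrix idea this record builds on, in its classical damped least-squares form rather than An's statistical method (the forward operator, damping value, and width measure below are my own assumptions):

```python
import numpy as np

# Classical model resolution matrix R = (G^T G + lam*I)^{-1} G^T G.
# Row i of R shows how a delta perturbation in model cell i is smeared
# into the recovered solution; the width of that averaging kernel is a
# resolution length.
rng = np.random.default_rng(0)
n_data, n_model = 40, 60
G = rng.normal(size=(n_data, n_model))   # hypothetical forward operator
lam = 5.0                                # assumed damping parameter
R = np.linalg.solve(G.T @ G + lam * np.eye(n_model), G.T @ G)

# Resolution length at the central cell: number of cells whose absolute
# averaging weight exceeds half the peak weight.
row = np.abs(R[n_model // 2])
width = int((row > 0.5 * row.max()).sum())
print("resolution length (cells):", width)
```

The statistical approach described in the abstract replaces this explicit matrix computation with a one-parameter fit over pairs of random synthetic models and their inverse solutions, which matters precisely when forming R like this is infeasible.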

  2. Fundamental limits of reconstruction-based superresolution algorithms under local translation.

    PubMed

    Lin, Zhouchen; Shum, Heung-Yeung

    2004-01-01

    Superresolution is a technique that can produce images of a higher resolution than that of the originally captured ones. Nevertheless, the improvement in resolution achievable with such techniques is very limited in practice. This makes it important to study the question: "Do fundamental limits exist for superresolution?" In this paper, we focus on a major class of superresolution algorithms, the reconstruction-based algorithms, which compute high-resolution images by simulating the image formation process. Assuming local translation among the low-resolution images, this paper is the first attempt to determine the explicit limits of reconstruction-based algorithms under both real and synthetic conditions. Based on the perturbation theory of linear systems, we obtain the superresolution limits from a conditioning analysis of the coefficient matrix. Moreover, we determine the number of low-resolution images that is sufficient to achieve the limit. Both real and synthetic experiments are carried out to verify our analysis.
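
A hedged numerical sketch of this style of analysis (the 1-D model, Gaussian blur, and all parameters are my own, not the paper's): each low-resolution sample is a blurred, decimated, shifted view of the high-resolution signal, and the condition number of the stacked coefficient matrix, which governs how much noise is amplified in the reconstruction, deteriorates rapidly as the magnification grows:

```python
import numpy as np

def psf(sigma, radius):
    # truncated, normalized Gaussian point-spread function
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def sr_matrix(n_hr, mag):
    # one row per (frame shift, low-res pixel): blur, shift, decimate
    k = psf(mag / 2.0, 2 * mag)          # blur widens with magnification
    rows = []
    for s in range(mag):                 # one integer sub-shift per frame
        for c in range(2 * mag, n_hr - 3 * mag, mag):
            r = np.zeros(n_hr)
            r[c + s - 2 * mag : c + s + 2 * mag + 1] = k
            rows.append(r)
    return np.array(rows)

conds = [np.linalg.cond(sr_matrix(64, mag)) for mag in (2, 3, 4)]
print(conds)   # grows rapidly with magnification
```

By the standard perturbation bound, the relative reconstruction error can be as large as the condition number times the relative noise, which is what turns a growing condition number into a practical magnification limit.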

  3. Interactive Display of High-Resolution Images on the World Wide Web.

    ERIC Educational Resources Information Center

    Clyde, Stephen W.; Hirschi, Gregory W.

    Viewing high-resolution images on the World Wide Web at a level of detail necessary for collaborative research is still a problem today, given the Internet's current bandwidth limitations and its ever increasing network traffic. ImageEyes is an interactive display tool being developed at Utah State University that addresses this problem by…

  4. Super resolution reconstruction of infrared images based on classified dictionary learning

    NASA Astrophysics Data System (ADS)

    Liu, Fei; Han, Pingli; Wang, Yi; Li, Xuan; Bai, Lu; Shao, Xiaopeng

    2018-05-01

    Infrared images often suffer from low resolution owing to the limitations of imaging devices. An economical way to combat this problem is to reconstruct high-resolution images by suitable methods without upgrading the hardware. Inspired by compressed sensing theory, this study presents and demonstrates a classified dictionary learning method for reconstructing high-resolution infrared images. It groups the features of the training samples into several reasonable clusters and trains a dictionary pair for each cluster. The optimal pair of dictionaries is chosen for each image reconstruction, so more satisfactory results are achieved without any increase in computational complexity or time cost. Experiments demonstrate that this is a viable method for infrared image reconstruction, since it improves image resolution and recovers detailed information about targets.

  5. Uncovering the effective interval of resolution parameter across multiple community optimization measures

    NASA Astrophysics Data System (ADS)

    Li, Hui-Jia; Cheng, Qing; Mao, He-Jin; Wang, Huanian; Chen, Junhua

    2017-03-01

    The study of community structure is a primary focus of network analysis and has attracted a large amount of attention. In this paper, we focus on two well-known quality functions, the Hamiltonian function H and the modularity density measure D, and aim to uncover the effective interval of their resolution parameter γ within which the resolution limit problem does not arise. Two widely used example networks are employed: the ring network of cliques and the ad hoc network. In these two networks, we use discrete convex analysis to study the interval of the resolution parameter of H and D that does not cause misidentification. By comparison, we find that in both examples, for the Hamiltonian function H, the larger the value of the resolution parameter γ, the less the network suffers from the resolution limit; for the modularity density D, the network suffers less from the resolution limit as γ decreases. Our framework is mathematically rigorous and efficient and can be applied in many scientific fields.
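
A small numerical sketch of the behaviour described for H (my own construction, not the paper's derivation): maximizing the Reichardt-Bornholdt Hamiltonian is equivalent to maximizing the generalized modularity Q(γ) = Σ_c [e_c/m − γ(d_c/2m)²]. On a ring of cliques, the classic resolution-limit example, the preferred partition flips as γ grows:

```python
# Ring of n cliques K_k, one edge between neighbouring cliques.
def Q(gamma, communities, m):
    # communities: list of (internal_edges, total_degree) per community
    return sum(e / m - gamma * (d / (2 * m)) ** 2 for e, d in communities)

n, k = 30, 5
m = n * k * (k - 1) // 2 + n                    # clique edges + ring edges
single = [(k * (k - 1) // 2, k * (k - 1) + 2)] * n          # one clique each
paired = [(k * (k - 1) + 1, 2 * (k * (k - 1) + 2))] * (n // 2)  # merged pairs

for gamma in (1.0, 2.0):
    print(gamma, Q(gamma, single, m), Q(gamma, paired, m))
```

At γ = 1 the merged-pairs partition scores higher (the resolution limit); at γ = 2 the natural single-clique partition wins, illustrating why an effective interval of γ exists.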

  6. Adaptive pixel-super-resolved lensfree in-line digital holography for wide-field on-chip microscopy.

    PubMed

    Zhang, Jialin; Sun, Jiasong; Chen, Qian; Li, Jiaji; Zuo, Chao

    2017-09-18

    High-resolution, wide field-of-view (FOV) microscopic imaging plays an essential role in various fields of biomedicine, engineering, and the physical sciences. As an alternative to conventional lens-based scanning techniques, lensfree holography provides a way to effectively bypass the intrinsic trade-off between the spatial resolution and FOV of conventional microscopes. Unfortunately, due to the limited sensor pixel size, unpredictable disturbances during image acquisition, and sub-optimal solutions to the phase retrieval problem, typical lensfree microscopes produce compromised imaging quality in terms of lateral resolution and signal-to-noise ratio (SNR). Here, we propose an adaptive pixel-super-resolved lensfree imaging (APLI) method which can solve, or at least partially alleviate, these limitations. Our approach addresses the pixel aliasing problem by Z-scanning only, without resorting to subpixel shifting or beam-angle manipulation. An automatic positional error correction algorithm and an adaptive relaxation strategy are introduced to significantly enhance the robustness and SNR of the reconstruction. Based on APLI, we perform full-FOV reconstruction of a USAF resolution target (~29.85 mm²) and achieve a half-pitch lateral resolution of 770 nm, a factor of 2.17 beyond the theoretical Nyquist-Shannon sampling resolution limit imposed by the sensor pixel size (1.67 µm). A full-FOV image of a typical dicot root is also provided to demonstrate promising potential applications in biological imaging.
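
The 2.17× figure can be checked directly: at the sensor's sampling rate, the smallest resolvable half-pitch equals the pixel size.

```python
pixel = 1.67      # sensor pixel size, micrometres
achieved = 0.770  # reported half-pitch lateral resolution, micrometres
nyquist_half_pitch = pixel   # Nyquist-Shannon limit for this sampling
print(round(nyquist_half_pitch / achieved, 2))   # -> 2.17
```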

  7. LITE microscopy: Tilted light-sheet excitation of model organisms offers high resolution and low photobleaching

    PubMed Central

    Gerbich, Therese M.; Rana, Kishan; Suzuki, Aussie; Schaefer, Kristina N.; Heppert, Jennifer K.; Boothby, Thomas C.; Allbritton, Nancy L.; Gladfelter, Amy S.; Maddox, Amy S.

    2018-01-01

    Fluorescence microscopy is a powerful approach for studying subcellular dynamics at high spatiotemporal resolution; however, conventional fluorescence microscopy techniques are light-intensive and introduce unnecessary photodamage. Light-sheet fluorescence microscopy (LSFM) mitigates these problems by selectively illuminating the focal plane of the detection objective by using orthogonal excitation. Orthogonal excitation requires geometries that physically limit the detection objective numerical aperture (NA), thereby limiting both light-gathering efficiency (brightness) and native spatial resolution. We present a novel live-cell LSFM method, lateral interference tilted excitation (LITE), in which a tilted light sheet illuminates the detection objective focal plane without a sterically limiting illumination scheme. LITE is thus compatible with any detection objective, including oil immersion, without an upper NA limit. LITE combines the low photodamage of LSFM with high resolution, high brightness, and coverslip-based objectives. We demonstrate the utility of LITE for imaging animal, fungal, and plant model organisms over many hours at high spatiotemporal resolution. PMID:29490939

  8. Parental conflict resolution styles and children's adjustment: children's appraisals and emotion regulation as mediators.

    PubMed

    Siffert, Andrea; Schwarz, Beate

    2011-01-01

    Guided by the emotional security hypothesis and the cognitive-contextual framework, the authors investigated whether the associations between negative parental conflict resolution styles and children's internalizing and externalizing problems were mediated by children's appraisals of threat and self-blame and their emotion regulation. Participants were 192 Swiss 2-parent families with children aged 9-12 years (M age = 10.62 years, SD = 0.41 years). Structural equation modeling was used to test the empirical validity of the theoretical model. Results indicated that children's maladaptive emotion regulation mediated the association between negative parental conflict resolution styles and children's internalizing as well as externalizing problems. Whereas perceived threat was related only to children's internalizing problems, self-blame did not mediate the links between negative parental conflict resolution styles and children's adjustment. Implications for understanding the mechanisms by which exposure to interparental conflict could lead to children's maladjustment and limitations of the study are discussed.

  9. Object Manifold Alignment for Multi-Temporal High Resolution Remote Sensing Images Classification

    NASA Astrophysics Data System (ADS)

    Gao, G.; Zhang, M.; Gu, Y.

    2017-05-01

    Classification of multi-temporal remote sensing images is very useful for monitoring land cover changes. Traditional approaches in this field mainly contend with limited labelled samples and spectral drift of the image information. As spatial resolution improves, a "pepper and salt" effect appears, and classification results are degraded when pixelwise classification algorithms, which ignore the spatial relationships among pixels, are applied to high-resolution satellite images. To classify multi-temporal high-resolution images under limited labelled samples, spectral drift, and the "pepper and salt" problem, an object-based manifold alignment method is proposed. First, the multi-temporal multispectral images are segmented into superpixels by simple linear iterative clustering (SLIC). Second, features obtained from each superpixel are assembled into a vector. Third, a majority-voting manifold alignment method designed for the high-resolution problem is proposed to map the vector data into an alignment space. Finally, all the data in the alignment space are classified with a KNN classifier. Multi-temporal images from different areas and from the same area are both considered in this paper. In the experiments, two groups of multi-temporal HR images collected by the Chinese GF1 and GF2 satellites are used for performance evaluation. Experimental results indicate that the proposed method not only significantly outperforms traditional domain adaptation methods in classification accuracy, but also effectively overcomes the "pepper and salt" problem.
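
A minimal sketch of the final classification step only (the SLIC segmentation and manifold alignment are assumed to have already produced per-superpixel feature vectors in a common aligned space; the data here are synthetic):

```python
import numpy as np

def knn_classify(train_x, train_y, test_x, k=3):
    # Euclidean k-nearest-neighbour majority vote, one row per sample.
    out = []
    for x in test_x:
        d = np.linalg.norm(train_x - x, axis=1)
        votes = train_y[np.argsort(d)[:k]]
        out.append(np.bincount(votes).argmax())
    return np.array(out)

rng = np.random.default_rng(1)
# two synthetic classes of 4-D "superpixel" feature vectors
a = rng.normal(0.0, 0.3, size=(20, 4))
b = rng.normal(2.0, 0.3, size=(20, 4))
train_x = np.vstack([a, b])
train_y = np.array([0] * 20 + [1] * 20)
test_x = np.array([[0.1, 0.0, -0.1, 0.2], [2.1, 1.9, 2.0, 2.2]])
pred = knn_classify(train_x, train_y, test_x)
print(pred)   # -> [0 1]
```

Working per superpixel rather than per pixel is what suppresses the "pepper and salt" effect: each object casts a single vote instead of each noisy pixel.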

  10. High-resolution two dimensional advective transport

    USGS Publications Warehouse

    Smith, P.E.; Larock, B.E.

    1989-01-01

    The paper describes a two-dimensional high-resolution scheme for advective transport that is based on an Eulerian-Lagrangian method with a flux limiter. The scheme is applied to the problem of pure advection of a rotated Gaussian hill and shown to preserve the monotonicity property of the governing conservation law.
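
A 1-D illustration of the monotonicity-preservation property (this is a plain limited upwind scheme of my own choosing, not the paper's 2-D Eulerian-Lagrangian method): the minmod flux limiter keeps the update total-variation diminishing, so advecting a Gaussian hill introduces no new extrema.

```python
import numpy as np

def minmod(a, b):
    # limited slope: zero at extrema, smallest one-sided slope otherwise
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def advect(u, c):
    # one step of limited upwind advection on a periodic grid, CFL 0 < c <= 1
    du_m = u - np.roll(u, 1)             # backward difference
    du_p = np.roll(u, -1) - u            # forward difference
    slope = minmod(du_m, du_p)
    face = u + 0.5 * (1 - c) * slope     # reconstructed right-face value
    return u - c * (face - np.roll(face, 1))

x = np.linspace(0.0, 1.0, 100, endpoint=False)
u = np.exp(-200 * (x - 0.3) ** 2)        # Gaussian hill
for _ in range(50):
    u = advect(u, 0.5)
print(u.min(), u.max())                  # stays within the initial bounds
```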

  11. A community detection algorithm using network topologies and rule-based hierarchical arc-merging strategies

    PubMed Central

    2017-01-01

    The authors use four criteria to examine a novel community detection algorithm: (a) effectiveness, in terms of producing high values of normalized mutual information (NMI) and modularity, using well-known social networks for testing; (b) examination, meaning the ability to examine how resolution limit problems are mitigated, using NMI values and synthetic networks; (c) correctness, meaning the ability to identify useful community structure, in terms of NMI values on Lancichinetti-Fortunato-Radicchi (LFR) benchmark networks; and (d) scalability, or the ability to produce comparable modularity values with fast execution times on large-scale real-world networks. In addition to describing a simple hierarchical arc-merging (HAM) algorithm that uses network topology information, we introduce rule-based arc-merging strategies for identifying community structures. Five well-studied social network datasets and eight sets of LFR benchmark networks were employed to validate correctness against ground-truth communities, eight large-scale real-world complex networks were used to measure efficiency, and two synthetic networks were used to determine susceptibility to two resolution limit problems. Our experimental results indicate that the proposed HAM algorithm exhibited satisfactory performance and efficiency, that HAM-identified and ground-truth communities were comparable on the social and LFR benchmark networks, and that resolution limit problems were mitigated. PMID:29121100
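
For reference, criterion (a) compares a detected partition with the ground truth via NMI = 2·I(X;Y) / (H(X)+H(Y)). A small self-contained implementation (my own, not the paper's code):

```python
import numpy as np

def nmi(labels_a, labels_b):
    # normalized mutual information between two partitions of the same nodes
    a, b = np.asarray(labels_a), np.asarray(labels_b)
    n = len(a)
    ca, cb = np.unique(a), np.unique(b)
    # joint distribution over (community in a, community in b)
    P = np.array([[np.sum((a == i) & (b == j)) for j in cb] for i in ca]) / n
    px, py = P.sum(axis=1), P.sum(axis=0)
    with np.errstate(divide="ignore", invalid="ignore"):
        I = np.nansum(P * np.log(P / np.outer(px, py)))   # 0*log(0) -> 0
    hx = -np.sum(px * np.log(px))
    hy = -np.sum(py * np.log(py))
    return 2 * I / (hx + hy)

value = nmi([0, 0, 1, 1], [1, 1, 0, 0])
print(value)   # ~ 1.0: identical partitions up to relabelling
```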

  12. Pixel-super-resolved lensfree holography using adaptive relaxation factor and positional error correction

    NASA Astrophysics Data System (ADS)

    Zhang, Jialin; Chen, Qian; Sun, Jiasong; Li, Jiaji; Zuo, Chao

    2018-01-01

    Lensfree holography provides a way to effectively bypass the intrinsic trade-off between the spatial resolution and field-of-view (FOV) of conventional lens-based microscopes. Unfortunately, due to the limited sensor pixel size, unpredictable disturbances during image acquisition, and sub-optimal solutions to the phase retrieval problem, typical lensfree microscopes produce compromised imaging quality in terms of lateral resolution and signal-to-noise ratio (SNR). In this paper, we propose an adaptive pixel-super-resolved lensfree imaging (APLI) method that addresses the pixel aliasing problem by Z-scanning only, without resorting to subpixel shifting or beam-angle manipulation. Furthermore, an automatic positional error correction algorithm and an adaptive relaxation strategy are introduced to significantly enhance the robustness and SNR of the reconstruction. Based on APLI, we perform full-FOV reconstruction of a USAF resolution target across a wide imaging area of ~29.85 mm² and achieve a half-pitch lateral resolution of 770 nm, a factor of 2.17 beyond the theoretical Nyquist-Shannon sampling resolution limit imposed by the sensor pixel size (1.67 μm). A full-FOV image of a typical dicot root is also provided to demonstrate promising potential applications in biological imaging.

  13. Thruster Limitation Consideration for Formation Flight Control

    NASA Technical Reports Server (NTRS)

    Xu, Yunjun; Fitz-Coy, Norman; Mason, Paul

    2003-01-01

    Physical constraints of any real system can have a drastic effect on its performance. Some of the more commonly recognized constraints are actuator and sensor saturation and bandwidth, power consumption, sampling rate (sensor and control-loop), and computational limits. These constraints can degrade a system's performance, such as settling time, overshoot, rise time, and stability margins. To address these issues, researchers have investigated robust and nonlinear controllers that can incorporate uncertainty and constraints into the controller design. For instance, uncertainties can be addressed in the synthesis model used in algorithms such as H(sub infinity) or mu, and there is a significant amount of literature addressing this type of problem. However, one constraint has seldom been considered: actuator authority resolution. In this work, thruster resolution, and controller schemes to compensate for its effects, are investigated for position and attitude control of a Low Earth Orbit formation flight system. In many academic problems, actuators are assumed to have infinite resolution; in real applications, such as formation flight systems, they do not. High-precision formation flying requires the relative position and relative attitude to be controlled on the order of millimeters and arc-seconds, respectively, so the minimum force resolution is a significant concern. Without sufficient actuator resolution, the system may be unable to attain the required pointing and position precision. Furthermore, fuel may be wasted on high-frequency chattering when attempting fine control with inadequate actuators. To address this issue, a sliding mode controller with boundary layer control is developed to provide the best control under the resolution constraints. A genetic algorithm is used to optimize the controller parameters according to a state-error and fuel-consumption criterion. The trade-offs and effects of the minimum force limitation on performance are studied and compared with the unconstrained case. Furthermore, two methods are proposed to reduce chattering and improve precision.

  14. Spread spectrum phase modulation for coherent X-ray diffraction imaging.

    PubMed

    Zhang, Xuesong; Jiang, Jing; Xiangli, Bin; Arce, Gonzalo R

    2015-09-21

    High dynamic range, phase ambiguity and radiation limited resolution are three challenging issues in coherent X-ray diffraction imaging (CXDI), which limit the achievable imaging resolution. This paper proposes a spread spectrum phase modulation (SSPM) method to address the aforementioned problems in a single strobe. The requirements on phase modulator parameters are presented, and a practical implementation of SSPM is discussed via ray optics analysis. Numerical experiments demonstrate the performance of SSPM under the constraint of available X-ray optics fabrication accuracy, showing its potential to real CXDI applications.

  15. A self-organizing Lagrangian particle method for adaptive-resolution advection-diffusion simulations

    NASA Astrophysics Data System (ADS)

    Reboux, Sylvain; Schrader, Birte; Sbalzarini, Ivo F.

    2012-05-01

    We present a novel adaptive-resolution particle method for continuous parabolic problems. In this method, particles self-organize in order to adapt to local resolution requirements. This is achieved by pseudo forces that are designed so as to guarantee that the solution is always well sampled and that no holes or clusters develop in the particle distribution. The particle sizes are locally adapted to the length scale of the solution. Differential operators are consistently evaluated on the evolving set of irregularly distributed particles of varying sizes using discretization-corrected operators. The method does not rely on any global transforms or mapping functions. After presenting the method and its error analysis, we demonstrate its capabilities and limitations on a set of two- and three-dimensional benchmark problems. These include advection-diffusion, the Burgers equation, the Buckley-Leverett five-spot problem, and curvature-driven level-set surface refinement.

  16. Solving phase appearance/disappearance two-phase flow problems with high resolution staggered grid and fully implicit schemes by the Jacobian-free Newton–Krylov Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zou, Ling; Zhao, Haihua; Zhang, Hongbin

    2016-04-01

    The phase appearance/disappearance issue presents serious numerical challenges in two-phase flow simulations. Many existing reactor safety analysis codes use different kinds of treatments for the phase appearance/disappearance problem; however, to the best of our knowledge, there are no fully satisfactory solutions. Additionally, the majority of existing reactor system analysis codes were developed using low-order numerical schemes in both space and time. In many situations, it is desirable to use high-resolution spatial discretization and fully implicit time integration schemes to reduce numerical errors. In this work, we adapted a high-resolution spatial discretization scheme on a staggered grid mesh and fully implicit time integration methods (such as BDF1 and BDF2) to solve two-phase flow problems. The discretized nonlinear system was solved by the Jacobian-free Newton-Krylov (JFNK) method, which does not require the derivation and implementation of an analytical Jacobian matrix. These methods were tested on several two-phase flow problems involving phase appearance/disappearance, such as a linear advection problem, an oscillating manometer problem, and a sedimentation problem. The JFNK method demonstrated extremely robust and stable behavior in solving these problems; no special treatments such as water level tracking or void fraction limiting were needed. The high-resolution spatial discretization and second-order fully implicit method also demonstrated their ability to significantly reduce numerical errors.
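
A bare-bones sketch of the JFNK idea on a toy nonlinear system (mine, far simpler than the paper's two-phase solver): the Jacobian-vector product is approximated by a finite difference, so no analytical Jacobian is ever formed, and each Newton step solves its linear system with a small hand-rolled GMRES.

```python
import numpy as np

def gmres(Av, b, tol=1e-12):
    # full (unrestarted) GMRES via the Arnoldi process; Av is matrix-free
    n = len(b)
    beta = np.linalg.norm(b)
    Q = [b / beta]
    H = np.zeros((n + 1, n))
    k = 0
    for j in range(n):
        w = Av(Q[j])
        for i in range(j + 1):
            H[i, j] = Q[i] @ w
            w = w - H[i, j] * Q[i]
        H[j + 1, j] = np.linalg.norm(w)
        k = j + 1
        if H[j + 1, j] < tol:      # happy breakdown: exact solution found
            break
        Q.append(w / H[j + 1, j])
    e1 = np.zeros(k + 1)
    e1[0] = beta
    y, *_ = np.linalg.lstsq(H[: k + 1, :k], e1, rcond=None)
    return np.column_stack(Q[:k]) @ y

def jfnk(F, u, eps=1e-7, iters=20):
    # Newton iteration with finite-difference J*v (never forms J)
    for _ in range(iters):
        r = F(u)
        if np.linalg.norm(r) < 1e-10:
            break
        Jv = lambda v: (F(u + eps * v) - F(u)) / eps
        u = u + gmres(Jv, -r)
    return u

# toy nonlinear system: x^2 + y^2 = 4, x*y = 1
F = lambda u: np.array([u[0] ** 2 + u[1] ** 2 - 4, u[0] * u[1] - 1])
u = jfnk(F, np.array([2.0, 0.0]))
print(u, np.linalg.norm(F(u)))
```

Production JFNK solvers add preconditioning and globalization, but the essential point survives in the sketch: only residual evaluations are needed.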

  17. High resolution time of arrival estimation for a cooperative sensor system

    NASA Astrophysics Data System (ADS)

    Morhart, C.; Biebl, E. M.

    2010-09-01

    The distance resolution of cooperative sensors is limited by the signal bandwidth. The transmission mainly uses lower frequency bands, which are more narrowband than classical radar frequencies. To compensate for this resolution problem, the combination of a pseudo-noise coded pulse compression system with superresolution time-of-arrival estimation is proposed. Coded pulse compression allows secure and fast distance measurement in multi-user scenarios and can easily be adapted for data transmission purposes (Morhart and Biebl, 2009). Due to the lack of available signal bandwidth, measurement accuracy degrades, especially in multipath scenarios. Superresolution time-of-arrival algorithms can improve this behaviour by estimating the channel impulse response from a band-limited view of the channel. For the given test system, the implementation of a MUSIC algorithm permitted a distance resolution two times better than standard pulse compression.
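
A toy illustration of the superresolution step (all parameters are my own, and frequency estimation stands in for delay estimation, which is mathematically analogous): MUSIC separates two tones spaced closer than the Fourier resolution limit 1/m of the short correlation window.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 64, 12                        # samples, correlation-window length
f1, f2 = 0.20, 0.24                  # spacing 0.04 < Fourier limit 1/m ~ 0.083
t = np.arange(n)
x = (np.exp(2j * np.pi * f1 * t) + np.exp(2j * np.pi * f2 * t)
     + 0.01 * (rng.normal(size=n) + 1j * rng.normal(size=n)))

# sample covariance from sliding windows of length m
X = np.array([x[i:i + m] for i in range(n - m + 1)])
R = X.T @ X.conj() / len(X)

vals, vecs = np.linalg.eigh(R)       # eigenvalues in ascending order
En = vecs[:, :-2]                    # noise subspace (all but 2 largest)

freqs = np.linspace(0.0, 0.5, 2001)
A = np.exp(2j * np.pi * np.outer(np.arange(m), freqs))   # steering vectors
P = 1.0 / np.linalg.norm(En.conj().T @ A, axis=0) ** 2   # pseudospectrum

peaks = [i for i in range(1, len(P) - 1) if P[i] > P[i - 1] and P[i] > P[i + 1]]
top2 = sorted(sorted(peaks, key=lambda i: P[i])[-2:])
est = freqs[np.array(top2)]
print(est)                           # close to 0.20 and 0.24
```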

  18. Compartmentalized Low-Rank Recovery for High-Resolution Lipid Unsuppressed MRSI

    PubMed Central

    Bhattacharya, Ipshita; Jacob, Mathews

    2017-01-01

    Purpose: To introduce a novel algorithm for the recovery of high-resolution magnetic resonance spectroscopic imaging (MRSI) data with minimal lipid leakage artifacts from a dual-density spiral acquisition. Methods: The reconstruction of MRSI data from dual-density spiral data is formulated as a compartmental low-rank recovery problem. The MRSI dataset is modeled as the sum of metabolite and lipid signals, which are support-limited to the brain and extracranial regions, respectively, in addition to being orthogonal to each other. The reconstruction is posed as an optimization problem, which is solved using iterative reweighted nuclear norm minimization. Results: Comparisons of the scheme against a dual-resolution reconstruction algorithm on a numerical phantom and in vivo datasets demonstrate its ability to provide higher spatial resolution and lower lipid leakage artifacts. The experiments demonstrate the ability of the scheme to recover metabolite maps from lipid-unsuppressed datasets with echo time (TE) = 55 ms. Conclusion: The proposed reconstruction method and data acquisition strategy provide an efficient way to achieve high-resolution metabolite maps without lipid suppression. The algorithm should be beneficial for fast metabolic mapping and extension to multislice acquisitions. PMID:27851875
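
A sketch of the workhorse inside nuclear-norm recovery, stripped of the paper's reweighting and metabolite/lipid compartment model (the data and parameters below are synthetic and my own): iterating a singular-value soft-threshold against the observed entries completes a low-rank matrix from a random subset of its entries.

```python
import numpy as np

rng = np.random.default_rng(0)
U, V = rng.normal(size=(30, 2)), rng.normal(size=(2, 30))
M = U @ V                                 # rank-2 ground truth
mask = rng.random(M.shape) < 0.6          # 60% of entries observed

def svt(A, tau):
    # proximal operator of the nuclear norm: soft-threshold singular values
    u, s, vt = np.linalg.svd(A, full_matrices=False)
    return u @ np.diag(np.maximum(s - tau, 0.0)) @ vt

X = np.zeros_like(M)
for _ in range(300):
    X = svt(np.where(mask, M, X), tau=0.2)   # enforce data, shrink rank
rel_err = np.linalg.norm(X - M) / np.linalg.norm(M)
print(rel_err)                               # small relative error
```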

  19. Reconstruction From Multiple Particles for 3D Isotropic Resolution in Fluorescence Microscopy.

    PubMed

    Fortun, Denis; Guichard, Paul; Hamel, Virginie; Sorzano, Carlos Oscar S; Banterle, Niccolo; Gonczy, Pierre; Unser, Michael

    2018-05-01

    The imaging of proteins within macromolecular complexes has been limited by the low axial resolution of optical microscopes. To overcome this problem, we propose a novel computational reconstruction method that yields isotropic resolution in fluorescence imaging. The guiding principle is to reconstruct a single volume from observations of multiple rotated particles. Our operational framework detects particles, estimates their orientation, and reconstructs the final volume. The main challenge comes from the absence of an initial template and of a priori knowledge about the orientations. We formulate the estimation as a blind inverse problem and propose a block-coordinate stochastic approach to solve the associated non-convex optimization problem. The reconstruction is performed jointly over multiple channels. We demonstrate that our method is able to reconstruct volumes with 3D isotropic resolution on simulated data. We also perform isotropic reconstructions from real experimental data of doubly labeled purified human centrioles. Our approach revealed the precise localization of the centriolar protein Cep63 around the centriole microtubule barrel. Overall, our method offers new perspectives for applications in biology that require the isotropic mapping of proteins within macromolecular assemblies.

  20. Quantum interpolation for high-resolution sensing

    PubMed Central

    Ajoy, Ashok; Liu, Yi-Xiang; Saha, Kasturi; Marseglia, Luca; Jaskula, Jean-Christophe; Bissbort, Ulf; Cappellaro, Paola

    2017-01-01

    Recent advances in engineering and control of nanoscale quantum sensors have opened new paradigms in precision metrology. Unfortunately, hardware restrictions often limit the sensor performance. In nanoscale magnetic resonance probes, for instance, finite sampling times greatly limit the achievable sensitivity and spectral resolution. Here we introduce a technique for coherent quantum interpolation that can overcome these problems. Using a quantum sensor associated with the nitrogen vacancy center in diamond, we experimentally demonstrate that quantum interpolation can achieve spectroscopy of classical magnetic fields and individual quantum spins with orders of magnitude finer frequency resolution than conventionally possible. Not only is quantum interpolation an enabling technique to extract structural and chemical information from single biomolecules, but it can be directly applied to other quantum systems for superresolution quantum spectroscopy. PMID:28196889

  2. A study of digital holographic filter generation

    NASA Technical Reports Server (NTRS)

    Calhoun, M.; Ingels, F.

    1976-01-01

    Problems associated with digital computer generation of holograms are discussed, along with criteria for producing optimum digital holograms. These criteria revolve around the amplitude resolution and spatial frequency limitations induced by the computer and plotter process.

  3. Single-Step 3-D Image Reconstruction in Magnetic Induction Tomography: Theoretical Limits of Spatial Resolution and Contrast to Noise Ratio

    PubMed Central

    Hollaus, Karl; Rosell-Ferrer, Javier; Merwa, Robert

    2006-01-01

    Magnetic induction tomography (MIT) is a low-resolution imaging modality for reconstructing changes of the complex conductivity in an object. MIT is based on determining the perturbation of an alternating magnetic field which is coupled from several excitation coils to the object. The conductivity distribution is reconstructed from the corresponding voltage changes induced in several receiver coils. Potential medical applications comprise the continuous, non-invasive monitoring of tissue alterations that are reflected in changes of the conductivity, e.g. edema, ventilation disorders, wound healing, and ischemic processes. MIT requires the solution of an ill-posed inverse eddy current problem. A linearized version of this problem was solved for 16 excitation coils and 32 receiver coils with a model of two spherical perturbations within a cylindrical phantom. The method was tested with simulated measurement data. Images were reconstructed with a regularized single-step Gauss-Newton approach. Theoretical limits for spatial resolution and contrast-to-noise ratio were calculated and compared with the empirical results from a Monte Carlo study. The conductivity perturbations inside a homogeneous cylinder were localized for an SNR between 44 and 64 dB. The results prove the feasibility of difference imaging with MIT and give quantitative data on the limitations of the method. PMID:17031597
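
The reconstruction style, a regularized single-step Gauss-Newton update, reduces for a linearized problem to one Tikhonov-regularized solve. A generic sketch (the random sensitivity matrix, noise level, and regularization weight are my own assumptions, not the MIT forward model):

```python
import numpy as np

rng = np.random.default_rng(0)
n_meas, n_vox = 512, 100                  # e.g. 16x32 coil pairs, coarse voxels
J = rng.normal(size=(n_meas, n_vox))      # hypothetical sensitivity matrix
dx_true = np.zeros(n_vox)
dx_true[40:43] = 1.0                      # localized conductivity perturbation
dv = J @ dx_true + 1e-3 * rng.normal(size=n_meas)   # simulated voltage changes

# single-step regularized Gauss-Newton: dx = (J^T J + lam*I)^{-1} J^T dv
lam = 1.0
dx = np.linalg.solve(J.T @ J + lam * np.eye(n_vox), J.T @ dv)
print(np.argmax(dx))                      # falls inside the true perturbation
```

The choice of lam trades spatial resolution against noise amplification, which is exactly the resolution/contrast-to-noise trade-off the record quantifies.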

  4. Plutonium and uranium determination in environmental samples: combined solvent extraction-liquid scintillation method.

    PubMed

    McDowell, W J; Farrar, D T; Billings, M R

    1974-12-01

    A method for the determination of uranium and plutonium by a combined high-resolution liquid scintillation-solvent extraction method is presented. Taking a sample count equal to the background count as the detection limit, the lower detection limit for these and other alpha-emitting nuclides is 1.0 dpm with a Pyrex sample tube, 0.3 dpm with a quartz sample tube using the present detector shielding, or 0.02 dpm with pulse-shape discrimination. Alpha-counting efficiency is 100%. With the counting data presented as an alpha-energy spectrum, an energy resolution of 0.2-0.3 MeV peak half-width and an energy identification to +/-0.1 MeV are possible. Thus, within these limits, identification and quantitative determination of a specific alpha-emitter, independent of chemical separation, are possible. The separation procedure allows greater than 98% recovery of uranium and plutonium from solutions containing large amounts of iron and other interfering substances. In most cases uranium, even when present in a 10^8-fold molar ratio, may be quantitatively separated from plutonium without loss of the plutonium. Potential applications of this general analytical concept to other alpha-counting problems are noted. Special problems associated with the determination of plutonium in soil and water samples are discussed. Results of tests to determine the pulse-height and energy-resolution characteristics of several scintillators are presented. Construction of the high-resolution liquid scintillation detector is described.

  5. Resolving complex fibre architecture by means of sparse spherical deconvolution in the presence of isotropic diffusion

    NASA Astrophysics Data System (ADS)

    Zhou, Q.; Michailovich, O.; Rathi, Y.

    2014-03-01

High angular resolution diffusion imaging (HARDI) improves upon more traditional diffusion tensor imaging (DTI) in its ability to resolve the orientations of crossing and branching neural fibre tracts. The HARDI signals are measured over a spherical shell in q-space, and are usually used as an input to q-ball imaging (QBI) which allows estimation of the diffusion orientation distribution functions (ODFs) associated with a given region of interest. Unfortunately, the partial nature of single-shell sampling imposes limits on the estimation accuracy. As a result, the recovered ODFs may not possess sufficient resolution to reveal the orientations of fibre tracts which cross each other at acute angles. A possible solution to the problem of limited resolution of QBI is provided by means of spherical deconvolution, a particular instance of which is sparse deconvolution. However, while capable of yielding high-resolution reconstructions over spatial locations corresponding to white matter, such methods tend to become unstable when applied to anatomical regions with a substantial content of isotropic diffusion. To resolve this problem, a new deconvolution approach is proposed in this paper. Apart from being uniformly stable across the whole brain, the proposed method allows one to quantify the isotropic component of cerebral diffusion, which is known to be a useful diagnostic measure by itself.

  6. High-resolution continuum observations of the Sun

    NASA Technical Reports Server (NTRS)

    Zirin, Harold

    1987-01-01

The aim of the PFI, or photometric filtergraph instrument, is to observe the Sun in the continuum at the highest resolution possible over the widest range of wavelengths. Because of financial and political problems, the CCD was eliminated, so the highest photometric accuracy is only obtainable by comparison with the CFS images. At present the experiment is limited to wavelengths above 2200 A because untreated film lacks sensitivity below 2200 A. The experiment therefore currently consists of a film camera with 1000 feet of film and 12 filters. The PFI experiments are outlined using only two cameras. Some further problems of the experiment are addressed.

  7. The Black Hole Information Problem

    NASA Astrophysics Data System (ADS)

    Polchinski, Joseph

    The black hole information problem has been a challenge since Hawking's original 1975 paper. It led to the discovery of AdS/CFT, which gave a partial resolution of the paradox. However, recent developments, in particular the firewall puzzle, show that there is much that we do not understand. I review the black hole, Hawking radiation, and the Page curve, and the classic form of the paradox. I discuss AdS/CFT as a partial resolution. I then discuss black hole complementarity and its limitations, leading to many proposals for different kinds of `drama.' I conclude with some recent ideas. Presented at the 2014-15 Jerusalem Winter School and the 2015 TASI.

  8. Facing the Limitations of Electronic Document Handling.

    ERIC Educational Resources Information Center

    Moralee, Dennis

    1985-01-01

    This essay addresses problems associated with technology used in the handling of high-resolution visual images in electronic document delivery. Highlights include visual fidelity, laser-driven optical disk storage, electronics versus micrographics for document storage, videomicrographics, and system configurations and peripherals. (EJS)

  9. Magnetic Resonance Super-resolution Imaging Measurement with Dictionary-optimized Sparse Learning

    NASA Astrophysics Data System (ADS)

    Li, Jun-Bao; Liu, Jing; Pan, Jeng-Shyang; Yao, Hongxun

    2017-06-01

Magnetic Resonance Super-resolution Imaging Measurement (MRIM) is an effective way of measuring materials. MRIM has wide applications in physics, chemistry, biology, geology, medical and material science, especially in medical diagnosis. It is feasible to improve the resolution of MR imaging by increasing the radiation intensity, but high radiation intensity and long exposure to the magnetic field harm the human body. Thus, in practical applications, hardware-based imaging has reached its resolution limit. Software-based super-resolution technology is an effective way to improve image resolution. This work proposes a framework for dictionary-optimized sparse learning based MR super-resolution. The framework solves the problem of sample selection for dictionary learning in sparse reconstruction. A textural complexity-based image quality representation is proposed to choose the optimal samples for dictionary learning. Comprehensive experiments show that dictionary-optimized sparse learning improves the performance of sparse representation.

  10. Nature's crucible: Manufacturing optical nonlinearities for high resolution, high sensitivity encoding in the compound eye of the fly, Musca domestica

    NASA Technical Reports Server (NTRS)

    Wilcox, Mike

    1993-01-01

    The number of pixels per unit area sampling an image determines Nyquist resolution. Therefore, the highest pixel density is the goal. Unfortunately, as reduction in pixel size approaches the wavelength of light, sensitivity is lost and noise increases. Animals face the same problems and have achieved novel solutions. Emulating these solutions offers potentially unlimited sensitivity with detector size approaching the diffraction limit. Once an image is 'captured', cellular preprocessing of information allows extraction of high resolution information from the scene. Computer simulation of this system promises hyperacuity for machine vision.
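The trade-off described above, pixel pitch versus the wavelength of light, can be made concrete with the usual sampling argument: the Rayleigh spacing must be sampled by at least two pixels. A toy calculation with assumed optics parameters (wavelength and numerical aperture are illustrative, not values from the fly-eye study):

```python
# Nyquist sampling of a diffraction-limited image: illustrative numbers only.
wavelength_um = 0.55   # green light, in micrometers
na = 0.5               # numerical aperture of the optics (assumed)

rayleigh_um = 0.61 * wavelength_um / na     # smallest resolvable spacing
nyquist_pitch_um = rayleigh_um / 2          # pixel pitch needed to sample it
pixels_per_mm2 = (1000 / nyquist_pitch_um) ** 2
```

Shrinking the pitch below this Nyquist value buys no extra resolvable detail, which is why the abstract's point about sensitivity loss near the wavelength scale matters.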

  11. Improvement of the energy resolution of pixelated CdTe detectors for applications in 0νββ searches

    NASA Astrophysics Data System (ADS)

    Gleixner, T.; Anton, G.; Filipenko, M.; Seller, P.; Veale, M. C.; Wilson, M. D.; Zang, A.; Michel, T.

    2015-07-01

Experiments trying to detect 0νββ are very challenging. Their requirements include a good energy resolution and a good detection efficiency. With current fine pixelated CdTe detectors there is a trade-off between the energy resolution and the detection efficiency, which limits their performance. It will be shown with simulations that this problem can be mostly negated by analysing the cathode signal, which increases the optimal sensor thickness. We will compare different types of fine pixelated CdTe detectors (Timepix, Dosepix, HEXITEC) from this point of view.

  12. Ultrafast fluorescence spectroscopy via upconversion applications to biophysics.

    PubMed

    Xu, Jianhua; Knutson, Jay R

    2008-01-01

    This chapter reviews basic concepts of nonlinear fluorescence upconversion, a technique whose temporal resolution is essentially limited only by the pulse width of the ultrafast laser. Design aspects for upconversion spectrophotofluorometers are discussed, and a recently developed system is described. We discuss applications in biophysics, particularly the measurement of time-resolved fluorescence spectra of proteins (with subpicosecond time resolution). Application of this technique to biophysical problems such as dynamics of tryptophan, peptides, proteins, and nucleic acids is reviewed.

  13. Cascaded VLSI neural network architecture for on-line learning

    NASA Technical Reports Server (NTRS)

    Thakoor, Anilkumar P. (Inventor); Duong, Tuan A. (Inventor); Daud, Taher (Inventor)

    1992-01-01

    High-speed, analog, fully-parallel, and asynchronous building blocks are cascaded for larger sizes and enhanced resolution. A hardware compatible algorithm permits hardware-in-the-loop learning despite limited weight resolution. A computation intensive feature classification application was demonstrated with this flexible hardware and new algorithm at high speed. This result indicates that these building block chips can be embedded as an application specific coprocessor for solving real world problems at extremely high data rates.

  14. Cascaded VLSI neural network architecture for on-line learning

    NASA Technical Reports Server (NTRS)

    Duong, Tuan A. (Inventor); Daud, Taher (Inventor); Thakoor, Anilkumar P. (Inventor)

    1995-01-01

    High-speed, analog, fully-parallel and asynchronous building blocks are cascaded for larger sizes and enhanced resolution. A hardware-compatible algorithm permits hardware-in-the-loop learning despite limited weight resolution. A comparison-intensive feature classification application has been demonstrated with this flexible hardware and new algorithm at high speed. This result indicates that these building block chips can be embedded as application-specific-coprocessors for solving real-world problems at extremely high data rates.

  15. Theoretical limit of spatial resolution in diffuse optical tomography using a perturbation model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Konovalov, A B; Vlasov, V V

    2014-03-28

We have assessed the limit of spatial resolution of time-domain diffuse optical tomography (DOT) based on a perturbation reconstruction model. From the viewpoint of the structure reconstruction accuracy, three different approaches to solving the inverse DOT problem are compared. The first approach involves reconstruction of diffuse tomograms from straight lines, the second – from average curvilinear trajectories of photons and the third – from total banana-shaped distributions of photon trajectories. In order to obtain estimates of resolution, we have derived analytical expressions for the point spread function and modulation transfer function, and performed a numerical experiment on reconstruction of rectangular scattering objects with circular absorbing inhomogeneities. It is shown that in passing from reconstruction from straight lines to reconstruction using distributions of photon trajectories we can improve resolution by almost an order of magnitude and exceed the accuracy of reconstruction of multi-step algorithms used in DOT.
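The point spread function and modulation transfer function used above are related by a Fourier transform, so a resolution limit can be read off as the frequency where contrast transfer falls below a threshold. A 1-D numerical sketch with an assumed Gaussian PSF (not the paper's derived expressions):

```python
import numpy as np

# Point spread function -> modulation transfer function (1-D illustration).
n, dx = 1024, 0.05                 # samples, spatial step in mm (assumed)
x = (np.arange(n) - n // 2) * dx
sigma = 1.0                        # PSF width in mm (hypothetical DOT blur)
psf = np.exp(-x**2 / (2 * sigma**2))
psf /= psf.sum()                   # normalize so the MTF starts at 1

mtf = np.abs(np.fft.rfft(np.fft.ifftshift(psf)))
freqs = np.fft.rfftfreq(n, d=dx)   # spatial frequency, cycles/mm

# Resolution limit: first frequency where contrast transfer drops below 10%.
cutoff = freqs[np.argmax(mtf < 0.1)]
```

A narrower PSF (e.g. from using full photon-trajectory distributions rather than straight lines) shifts this cutoff to higher frequencies, i.e. finer resolvable detail.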

  16. The 2015 super-resolution microscopy roadmap

    NASA Astrophysics Data System (ADS)

    Hell, Stefan W.; Sahl, Steffen J.; Bates, Mark; Zhuang, Xiaowei; Heintzmann, Rainer; Booth, Martin J.; Bewersdorf, Joerg; Shtengel, Gleb; Hess, Harald; Tinnefeld, Philip; Honigmann, Alf; Jakobs, Stefan; Testa, Ilaria; Cognet, Laurent; Lounis, Brahim; Ewers, Helge; Davis, Simon J.; Eggeling, Christian; Klenerman, David; Willig, Katrin I.; Vicidomini, Giuseppe; Castello, Marco; Diaspro, Alberto; Cordes, Thorben

    2015-11-01

    Far-field optical microscopy using focused light is an important tool in a number of scientific disciplines including chemical, (bio)physical and biomedical research, particularly with respect to the study of living cells and organisms. Unfortunately, the applicability of the optical microscope is limited, since the diffraction of light imposes limitations on the spatial resolution of the image. Consequently the details of, for example, cellular protein distributions, can be visualized only to a certain extent. Fortunately, recent years have witnessed the development of ‘super-resolution’ far-field optical microscopy (nanoscopy) techniques such as stimulated emission depletion (STED), ground state depletion (GSD), reversible saturated optical (fluorescence) transitions (RESOLFT), photoactivation localization microscopy (PALM), stochastic optical reconstruction microscopy (STORM), structured illumination microscopy (SIM) or saturated structured illumination microscopy (SSIM), all in one way or another addressing the problem of the limited spatial resolution of far-field optical microscopy. While SIM achieves a two-fold improvement in spatial resolution compared to conventional optical microscopy, STED, RESOLFT, PALM/STORM, or SSIM have all gone beyond, pushing the limits of optical image resolution to the nanometer scale. Consequently, all super-resolution techniques open new avenues of biomedical research. Because the field is so young, the potential capabilities of different super-resolution microscopy approaches have yet to be fully explored, and uncertainties remain when considering the best choice of methodology. Thus, even for experts, the road to the future is sometimes shrouded in mist. The super-resolution optical microscopy roadmap of Journal of Physics D: Applied Physics addresses this need for clarity. 
It provides guidance to the outstanding questions through a collection of short review articles from experts in the field, giving a thorough discussion on the concepts underlying super-resolution optical microscopy, the potential of different approaches, the importance of label optimization (such as reversible photoswitchable proteins) and applications in which these methods will have a significant impact. Mark Bates, Christian Eggeling

  17. Ultrafast Fluorescence Spectroscopy via Upconversion: Applications to Biophysics

    PubMed Central

    Xu, Jianhua; Knutson, Jay R.

    2012-01-01

    This chapter reviews basic concepts of nonlinear fluorescence upconversion, a technique whose temporal resolution is essentially limited only by the pulse width of the ultrafast laser. Design aspects for upconversion spectrophotofluorometers are discussed, and a recently developed system is described. We discuss applications in biophysics, particularly the measurement of time-resolved fluorescence spectra of proteins (with subpicosecond time resolution). Application of this technique to biophysical problems such as dynamics of tryptophan, peptides, proteins, and nucleic acids is reviewed. PMID:19152860

  18. High resolution decadal precipitation predictions over the continental United States for impacts assessment

    NASA Astrophysics Data System (ADS)

    Salvi, Kaustubh; Villarini, Gabriele; Vecchi, Gabriel A.

    2017-10-01

Unprecedented alterations in precipitation characteristics over the last century, and especially in the last two decades, have posed serious socio-economic problems in terms of hydro-meteorological extremes, in particular flooding and droughts. The origin of these alterations has its roots in changing climatic conditions; however, its threatening implications can only be dealt with through meticulous planning that is based on realistic and skillful decadal precipitation predictions (DPPs). Skillful DPPs represent a very challenging prospect because of the complexities associated with precipitation predictions. Because of their limited skill and coarse spatial resolution, the DPPs provided by General Circulation Models (GCMs) are not directly applicable for impact assessment. Here, we focus on nine GCMs and quantify the seasonally and regionally averaged skill in DPPs over the continental United States. We address the problems pertaining to the limited skill and resolution by applying linear and kernel regression-based statistical downscaling approaches. For both the approaches, statistical relationships established over the calibration period (1961-1990) are applied to the retrospective and near future decadal predictions by GCMs to obtain DPPs at ∼4 km resolution. The skill is quantified across different metrics that evaluate potential skill, biases, long-term statistical properties, and uncertainty. Both the statistical approaches show improvements with respect to the raw GCM data, particularly in terms of the long-term statistical properties and uncertainty, irrespective of lead time. The outcome of the study is monthly DPPs from nine GCMs with 4-km spatial resolution, which can be used as a key input for impacts assessments.
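A regression-based downscaling step of the kind described, fitting a statistical relationship over a calibration period and then applying it to new GCM output, can be sketched with synthetic data. All numbers below (series lengths, coefficients, noise level) are illustrative assumptions, not the study's:

```python
import numpy as np

# Linear-regression downscaling: fit y_obs ~ a*x_gcm + b over a calibration
# period, then apply the fit to new GCM output (toy data, one grid point).
rng = np.random.default_rng(1)
x_cal = rng.gamma(2.0, 2.0, 360)                      # 30 yr of monthly GCM precip
y_cal = 1.4 * x_cal + 0.5 + rng.normal(0, 0.3, 360)   # "observed" fine-scale series

a, b = np.polyfit(x_cal, y_cal, 1)                    # calibration (1961-1990 analogue)
x_future = rng.gamma(2.0, 2.2, 120)                   # decadal GCM prediction
y_downscaled = a * x_future + b                       # bias-corrected fine-scale estimate
```

The kernel-regression variant mentioned in the abstract replaces the global linear fit with a locally weighted one, but the calibrate-then-apply structure is the same.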

  19. Research and application of spectral inversion technique in frequency domain to improve resolution of converted PS-wave

    NASA Astrophysics Data System (ADS)

    Zhang, Hua; He, Zhen-Hua; Li, Ya-Lin; Li, Rui; He, Guamg-Ming; Li, Zhong

    2017-06-01

Multi-wave exploration is an effective means for improving precision in the exploration and development of complex oil and gas reservoirs that are dense and have low permeability. However, converted wave data is characterized by a low signal-to-noise ratio and low resolution, because the conventional deconvolution technology is easily affected by frequency range limits, and there is limited scope for improving its resolution. The spectral inversion technique is used to identify λ/8 thin layers, and its breakthrough with regard to band range limits has greatly improved seismic resolution. The difficulty associated with this technology is how to use a stable inversion algorithm to obtain a high-precision reflection coefficient, and then to use this reflection coefficient to reconstruct broadband data for processing. In this paper, we focus on how to improve the vertical resolution of the converted PS-wave for multi-wave data processing. Based on previous research, we propose a least squares inversion algorithm with a total variation constraint, in which we use the total variation as a priori information to solve under-determined problems, thereby improving the accuracy and stability of the inversion. Here, we fit a Gaussian to the amplitude spectrum to obtain broadband wavelet data, which we then process to obtain a higher resolution converted wave. We successfully applied the proposed inversion technology in the processing of high-resolution data from the Penglai region to obtain higher resolution converted wave data, which we then verified in a theoretical test. Improving the resolution of converted PS-wave data will provide more accurate data for subsequent velocity inversion and the extraction of reservoir reflection information.
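A least squares inversion with a total variation constraint, as proposed here, can be sketched as gradient descent on a smoothed-TV objective. This generic formulation is an assumption on my part, since the abstract does not specify the paper's actual algorithm:

```python
import numpy as np

def tv_least_squares(A, y, lam=0.5, eps=1e-6, iters=3000, lr=1e-3):
    """Minimize ||Ax - y||^2 + lam * TV(x) by gradient descent on a
    smoothed total variation (|d| approximated by sqrt(d^2 + eps))."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        r = A @ x - y
        d = np.diff(x)
        w = d / np.sqrt(d**2 + eps)      # derivative of smoothed |d|
        tv_grad = np.zeros_like(x)
        tv_grad[:-1] -= w                # d|d_i|/dx_i
        tv_grad[1:] += w                 # d|d_{i-1}|/dx_i
        x -= lr * (2 * A.T @ r + lam * tv_grad)
    return x

# Blocky reflectivity recovered from under-determined, noisy data (toy sizes).
rng = np.random.default_rng(2)
A = rng.standard_normal((40, 60))
x_true = np.zeros(60)
x_true[20:35] = 1.0                      # piecewise-constant model
y = A @ x_true + 0.05 * rng.standard_normal(40)
x_rec = tv_least_squares(A, y)
```

The TV term is what favors the blocky, piecewise-constant reflectivity that a plain least squares fit of an under-determined system would smear out.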

  20. Multi-color localization microscopy of fixed cells as a promising tool to study organization of bacterial cytoskeleton

    NASA Astrophysics Data System (ADS)

    Vedyaykin, A. D.; Gorbunov, V. V.; Sabantsev, A. V.; Polinovskaya, V. S.; Vishnyakov, I. E.; Melnikov, A. S.; Serdobintsev, P. Yu; Khodorkovskii, M. A.

    2015-11-01

Localization microscopy allows visualization of biological structures with resolution well below the diffraction limit. Localization microscopy has previously been used, in combination with fluorescent protein labeling, to study FtsZ organization in Escherichia coli, but the fact that the fluorescent chimeric protein was unable to rescue temperature-sensitive ftsZ mutants suggests that the obtained images may not represent native FtsZ structures faithfully. Indirect immunolabeling of FtsZ not only overcomes this problem, but also allows the use of the powerful arsenal of visualization methods available for different structures in fixed cells. In this work we simultaneously obtained super-resolution images of FtsZ structures and diffraction-limited or super-resolution images of DNA and the cell surface in E. coli, which allows for the study of the spatial arrangement of FtsZ structures with respect to nucleoid positions and septum formation.

  1. Understanding healthcare professionals' self-efficacy to resolve interprofessional conflict.

    PubMed

    Sexton, Martha; Orchard, Carole

    2016-05-01

Conflict within interprofessional healthcare teams, when not effectively resolved, has been linked to detrimental consequences; however, effective conflict resolution has been shown to enhance team performance, increase patient safety, and improve patient outcomes. Alarmingly, knowledge of healthcare professionals' ability to resolve conflict has been limited, largely due to the challenges that arise when researchers attempt to observe a conflict occurring in real time. The research literature has identified three central components that seem to influence healthcare professionals' perceived ability to resolve conflict: communication competence, problem-solving ability, and conflict resolution education and training. The purpose of this study was to investigate the impact of communication competence, problem-solving ability, and conflict resolution education and training on healthcare professionals' perceived ability to resolve conflicts. This study employed a cross-sectional survey design. Multiple regression analyses demonstrated that two of the three central components, conflict resolution education and training and communication competence, were statistically significant predictors of healthcare professionals' perceived ability to resolve conflict. Implications include a call to action for clinicians and academicians to recognize the importance of communication competence and conflict resolution education and training as a vital area in interprofessional pre- and post-licensure education and collaborative practice.

  2. Iterative Nonlinear Tikhonov Algorithm with Constraints for Electromagnetic Tomography

    NASA Technical Reports Server (NTRS)

    Xu, Feng; Deshpande, Manohar

    2012-01-01

Low frequency electromagnetic tomography such as the capacitance tomography (ECT) has been proposed for monitoring and mass-gauging of gas-liquid two-phase systems under microgravity conditions in NASA's future long-term space missions. Due to the ill-posed inverse problem of ECT, images reconstructed using conventional linear algorithms often suffer from limitations such as low resolution and blurred edges. Hence, new efficient high resolution nonlinear imaging algorithms are needed for accurate two-phase imaging. The proposed Iterative Nonlinear Tikhonov Regularized Algorithm with Constraints (INTAC) is based on an efficient finite element method (FEM) forward model of the quasi-static electromagnetic problem. It iteratively minimizes the discrepancy between FEM-simulated and actually measured capacitances by adjusting the reconstructed image using the Tikhonov regularized method. More importantly, in each iteration it enforces the known permittivity of the two phases on the unknown pixels that exceed the reasonable range of permittivity. This strategy not only stabilizes the convergence process, but also produces sharper images. Simulations show that a resolution improvement of over 2 times can be achieved by INTAC with respect to conventional approaches. Strategies to further improve spatial imaging resolution are suggested, as well as techniques to accelerate the nonlinear forward model and thus increase the temporal resolution.
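The INTAC idea of combining a Tikhonov-regularized update with a hard physical constraint on permittivity can be sketched on a toy linear problem. The phase permittivities, problem sizes, and linear stand-in for the FEM forward model are all illustrative assumptions:

```python
import numpy as np

def constrained_tikhonov(J, c_meas, forward, x0, eps_lo=1.0, eps_hi=80.0,
                         lam=1e-2, iters=20):
    """Tikhonov-regularized Gauss-Newton-style updates with a two-phase
    permittivity constraint: pixels leaving the physically admissible
    range are clipped back to the phase values (INTAC-like, simplified)."""
    x = x0.copy()
    n = J.shape[1]
    for _ in range(iters):
        r = c_meas - forward(x)                      # capacitance mismatch
        dx = np.linalg.solve(J.T @ J + lam * np.eye(n), J.T @ r)
        x = np.clip(x + dx, eps_lo, eps_hi)          # enforce known phases
    return x

# Toy linear "ECT" problem: 30 capacitance readings, 16 permittivity pixels.
rng = np.random.default_rng(6)
J = rng.standard_normal((30, 16))
x_true = np.where(rng.random(16) < 0.5, 1.0, 80.0)   # gas (1) / liquid (80)
c = J @ x_true
x_rec = constrained_tikhonov(J, c, forward=lambda x: J @ x,
                             x0=np.full(16, 40.0))
```

Because the residual is recomputed every iteration, the damped updates converge toward the data-consistent solution while the clip keeps intermediate images physically plausible, which is the stabilizing effect the abstract describes.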

  3. Super-Resolution of Plant Disease Images for the Acceleration of Image-based Phenotyping and Vigor Diagnosis in Agriculture.

    PubMed

    Yamamoto, Kyosuke; Togami, Takashi; Yamaguchi, Norio

    2017-11-06

    Unmanned aerial vehicles (UAVs or drones) are a very promising branch of technology, and they have been utilized in agriculture-in cooperation with image processing technologies-for phenotyping and vigor diagnosis. One of the problems in the utilization of UAVs for agricultural purposes is the limitation in flight time. It is necessary to fly at a high altitude to capture the maximum number of plants in the limited time available, but this reduces the spatial resolution of the captured images. In this study, we applied a super-resolution method to the low-resolution images of tomato diseases to recover detailed appearances, such as lesions on plant organs. We also conducted disease classification using high-resolution, low-resolution, and super-resolution images to evaluate the effectiveness of super-resolution methods in disease classification. Our results indicated that the super-resolution method outperformed conventional image scaling methods in spatial resolution enhancement of tomato disease images. The results of disease classification showed that the accuracy attained was also better by a large margin with super-resolution images than with low-resolution images. These results indicated that our approach not only recovered the information lost in low-resolution images, but also exerted a beneficial influence on further image analysis. The proposed approach will accelerate image-based phenotyping and vigor diagnosis in the field, because it not only saves time to capture images of a crop in a cultivation field but also secures the accuracy of these images for further analysis.

  4. Super-Resolution of Plant Disease Images for the Acceleration of Image-based Phenotyping and Vigor Diagnosis in Agriculture

    PubMed Central

    Togami, Takashi; Yamaguchi, Norio

    2017-01-01

    Unmanned aerial vehicles (UAVs or drones) are a very promising branch of technology, and they have been utilized in agriculture—in cooperation with image processing technologies—for phenotyping and vigor diagnosis. One of the problems in the utilization of UAVs for agricultural purposes is the limitation in flight time. It is necessary to fly at a high altitude to capture the maximum number of plants in the limited time available, but this reduces the spatial resolution of the captured images. In this study, we applied a super-resolution method to the low-resolution images of tomato diseases to recover detailed appearances, such as lesions on plant organs. We also conducted disease classification using high-resolution, low-resolution, and super-resolution images to evaluate the effectiveness of super-resolution methods in disease classification. Our results indicated that the super-resolution method outperformed conventional image scaling methods in spatial resolution enhancement of tomato disease images. The results of disease classification showed that the accuracy attained was also better by a large margin with super-resolution images than with low-resolution images. These results indicated that our approach not only recovered the information lost in low-resolution images, but also exerted a beneficial influence on further image analysis. The proposed approach will accelerate image-based phenotyping and vigor diagnosis in the field, because it not only saves time to capture images of a crop in a cultivation field but also secures the accuracy of these images for further analysis. PMID:29113104

  5. The Partition of Multi-Resolution LOD Based on Qtm

    NASA Astrophysics Data System (ADS)

    Hou, M.-L.; Xing, H.-Q.; Zhao, X.-S.; Chen, J.

    2011-08-01

The partition hierarchy of the Quaternary Triangular Mesh (QTM) determines the accuracy of spatial analysis and applications based on QTM. To address the problem that the partition hierarchy of QTM is limited by the level of the computer hardware, a new method, Multi-Resolution LOD (Level of Detail) based on QTM, is discussed in this paper. This method lets the resolution of the cells vary with the viewpoint position by partitioning the cells of QTM and selecting the particular area according to the viewpoint; by dealing with the cracks caused by different subdivisions, it satisfies the requirement of locally unlimited partitioning.
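The quaternary partition underlying QTM splits each triangular cell into four children via edge midpoints, so level n contains 4^n cells per base triangle. A planar sketch of that recursion (real QTM subdivides on the sphere, which this simplification ignores):

```python
def subdivide(tri):
    """Split one triangle (3 vertex tuples) into 4 via edge midpoints,
    as in Quaternary Triangular Mesh partitioning (planar sketch)."""
    a, b, c = tri
    mid = lambda p, q: ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)
    ab, bc, ca = mid(a, b), mid(b, c), mid(c, a)
    # Three corner triangles plus the central (inverted) one.
    return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]

def qtm_level(tris, levels):
    """Apply the quaternary subdivision the given number of times."""
    for _ in range(levels):
        tris = [child for tri in tris for child in subdivide(tri)]
    return tris

base = [((0.0, 0.0), (1.0, 0.0), (0.5, 1.0))]
cells = qtm_level(base, 3)   # 3 subdivision levels -> 4**3 = 64 cells
```

In an LOD scheme, cells near the viewpoint would be subdivided further than distant ones, and the cracks along boundaries between different levels must then be stitched, which is the problem the paper addresses.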

  6. Multi-pinhole SPECT Imaging with Silicon Strip Detectors

    PubMed Central

    Peterson, Todd E.; Shokouhi, Sepideh; Furenlid, Lars R.; Wilson, Donald W.

    2010-01-01

Silicon double-sided strip detectors offer outstanding intrinsic spatial resolution with reasonable detection efficiency for iodine-125 emissions. This spatial resolution allows for multiple-pinhole imaging at low magnification, minimizing the problem of multiplexing. We have conducted imaging studies using a prototype system that utilizes a detector of 300-micrometer thickness and 50-micrometer strip pitch together with a 23-pinhole collimator. These studies include an investigation of the synthetic-collimator imaging approach, which combines multiple-pinhole projections acquired at multiple magnifications to obtain tomographic reconstructions from limited-angle data using the ML-EM algorithm. Sub-millimeter spatial resolution was obtained, demonstrating the basic validity of this approach. PMID:20953300
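The ML-EM algorithm referenced above uses a multiplicative update that preserves nonnegativity of the emission image. A minimal sketch on a toy system matrix; the random geometry here is an assumption standing in for an actual pinhole projection model:

```python
import numpy as np

def mlem(A, y, iters=500):
    """ML-EM for emission tomography: multiplicative update
    x <- x * A^T(y / Ax) / A^T 1  (nonnegative A and y assumed)."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])          # sensitivity image A^T 1
    for _ in range(iters):
        ratio = y / np.maximum(A @ x, 1e-12)  # guard against division by zero
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x

# Toy system: random nonnegative projection of a 2-hot-spot "object".
rng = np.random.default_rng(3)
A = rng.random((48, 16))
x_true = np.zeros(16)
x_true[3], x_true[11] = 4.0, 2.0
y = A @ x_true                                # noiseless projections
x_rec = mlem(A, y)
```

Because the update is multiplicative, pixels initialized positive stay nonnegative, which is one reason ML-EM is favored for limited-angle emission data.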

  7. Imaging photonic crystals using hemispherical digital condensers and phase-recovery techniques.

    PubMed

    Alotaibi, Maged; Skinner-Ramos, Sueli; Farooq, Hira; Alharbi, Nouf; Alghasham, Hawra; de Peralta, Luis Grave

    2018-05-10

    We describe experiments where Fourier ptychographic microscopy (FPM) and dual-space microscopy (DSM) are implemented for imaging photonic crystals using a hemispherical digital condenser (HDC). Phase-recovery imaging simulations show that both techniques should be able to image photonic crystals with a period below the Rayleigh resolution limit. However, after processing the experimental images using both phase-recovery algorithms, we found that DSM can, but FPM cannot, image periodic structures with a period below the diffraction limit. We studied the origin of this apparent contradiction between simulations and experiments, and we concluded that the occurrence of unwanted reflections in the HDC is the source of the apparent failure of FPM. We thereafter solved the problem of reflections by using a single-directional illumination source and showed that FPM can image photonic crystals with a period below the Rayleigh resolution limit.

  8. Avalanche statistics from data with low time resolution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    LeBlanc, Michael; Nawano, Aya; Wright, Wendelin J.

Extracting avalanche distributions from experimental microplasticity data can be hampered by limited time resolution. We compute the effects of low time resolution on avalanche size distributions and give quantitative criteria for diagnosing and circumventing problems associated with low time resolution. We show that traditional analysis of data obtained at low acquisition rates can lead to avalanche size distributions with incorrect power-law exponents or no power-law scaling at all. Furthermore, we demonstrate that it can lead to apparent data collapses with incorrect power-law and cutoff exponents. We propose new methods to analyze low-resolution stress-time series that can recover the size distribution of the underlying avalanches even when the resolution is so low that naive analysis methods give incorrect results. We test these methods on both downsampled simulation data from a simple model and downsampled bulk metallic glass compression data and find that the methods recover the correct critical exponents.
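The core failure mode can be demonstrated in a few lines: block-averaging at a low acquisition rate smears out the quiet gaps that delimit avalanches, so separate events merge and the size distribution is distorted. This is a toy signal, not the paper's data or correction method:

```python
import numpy as np

def avalanche_sizes(v, threshold=0.0):
    """Sum the signal over each contiguous run above threshold."""
    sizes, cur, active = [], 0.0, False
    for s in v:
        if s > threshold:
            cur += s
            active = True
        elif active:
            sizes.append(cur)
            cur, active = 0.0, False
    if active:
        sizes.append(cur)
    return sizes

# Two well-separated avalanches at full resolution...
v = np.array([0, 3, 5, 0, 0, 0, 2, 4, 0, 0, 0, 0], dtype=float)
hi = avalanche_sizes(v)                        # two events: sizes 8 and 6

# ...downsampled 4x by block averaging: the quiet gap between the events
# is absorbed into the blocks, and the two events merge into one.
lo = avalanche_sizes(v.reshape(-1, 4).mean(axis=1))
```

At low resolution the two events of size 8 and 6 collapse into a single apparent event, exactly the kind of artifact that skews the inferred power-law exponents.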

  9. Avalanche statistics from data with low time resolution

    DOE PAGES

    LeBlanc, Michael; Nawano, Aya; Wright, Wendelin J.; ...

    2016-11-22

Extracting avalanche distributions from experimental microplasticity data can be hampered by limited time resolution. We compute the effects of low time resolution on avalanche size distributions and give quantitative criteria for diagnosing and circumventing problems associated with low time resolution. We show that traditional analysis of data obtained at low acquisition rates can lead to avalanche size distributions with incorrect power-law exponents or no power-law scaling at all. Furthermore, we demonstrate that it can lead to apparent data collapses with incorrect power-law and cutoff exponents. We propose new methods to analyze low-resolution stress-time series that can recover the size distribution of the underlying avalanches even when the resolution is so low that naive analysis methods give incorrect results. We test these methods on both downsampled simulation data from a simple model and downsampled bulk metallic glass compression data and find that the methods recover the correct critical exponents.

  10. The Enhancement of 3D Scans Depth Resolution Obtained by Confocal Scanning of Porous Materials

    NASA Astrophysics Data System (ADS)

    Martisek, Dalibor; Prochazkova, Jana

    2017-12-01

The 3D reconstruction of simple structured materials using a confocal microscope is widely used in many different areas including civil engineering. Nonetheless, scans of porous materials such as concrete or cement paste are highly problematic. The well-known problem of these scans is low depth resolution in comparison to the horizontal and vertical resolution. The degradation of the image depth resolution is caused by systematic errors and especially by different random events. Our method is focused on the elimination of such random events, mainly the additive noise. We use an averaging method based on the Lindeberg-Lévy theorem that improves the final depth resolution to a level comparable with horizontal and vertical resolution. Moreover, using the least squares method, we also precisely determine the limit value of the depth resolution. Therefore, we can continuously evaluate the difference between the current resolution and the optimal one. This substantially simplifies the scanning process because the operator can easily determine the required number of scans.
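The averaging argument can be checked directly: for additive noise, the error of an N-scan average falls as 1/sqrt(N), which is the Lindeberg-Lévy (central-limit) behaviour the method relies on. A synthetic sketch; the noise level, scan count, and flat test surface are arbitrary assumptions:

```python
import numpy as np

# Averaging repeated scans: additive noise shrinks as 1/sqrt(N).
rng = np.random.default_rng(4)
depth_true = np.full(1000, 5.0)                       # flat surface at 5 um depth
scans = depth_true + rng.normal(0, 0.4, (64, 1000))   # 64 noisy depth scans

def rms_error(est):
    return np.sqrt(((est - depth_true) ** 2).mean())

err_single = rms_error(scans[0])
err_avg = rms_error(scans.mean(axis=0))
# err_avg should be roughly err_single / sqrt(64) = err_single / 8
```

In practice the operator can keep adding scans until the measured error approaches the least-squares limit value mentioned in the abstract.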

  11. Lessons Learned During Solutions of Multidisciplinary Design Optimization Problems

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Coroneos, Rula M.; Hopkins, Dale A.; Lavelle, Thomas M.

    2000-01-01

    Optimization research at NASA Glenn Research Center has addressed the design of structures, aircraft and airbreathing propulsion engines. During solution of the multidisciplinary problems several issues were encountered. This paper lists four issues and discusses the strategies adopted for their resolution: (1) The optimization process can lead to an inefficient local solution. This deficiency was encountered during design of an engine component. The limitation was overcome through an augmentation of animation into optimization. (2) Optimum solutions obtained were infeasible for aircraft and air-breathing propulsion engine problems. Alleviation of this deficiency required a cascading of multiple algorithms. (3) Profile optimization of a beam produced an irregular shape. Engineering intuition restored the regular shape for the beam. (4) The solution obtained for a cylindrical shell by a subproblem strategy converged to a design that can be difficult to manufacture. Resolution of this issue remains a challenge. The issues and resolutions are illustrated through six problems: (1) design of an engine component, (2) synthesis of a subsonic aircraft, (3) operation optimization of a supersonic engine, (4) design of a wave-rotor-topping device, (5) profile optimization of a cantilever beam, and (6) design of a cylindrical shell. The combined effort of designers and researchers can bring the optimization method from academia to industry.

  12. An electron beam linear scanning mode for industrial limited-angle nano-computed tomography.

    PubMed

    Wang, Chengxiang; Zeng, Li; Yu, Wei; Zhang, Lingli; Guo, Yumeng; Gong, Changcheng

    2018-01-01

    Nano-computed tomography (nano-CT), which uses X-rays to investigate the inner structure of small objects, is a high-spatial-resolution, non-destructive research technique that has been widely applied in biomedical research, electronic technology, geology, material sciences, etc. A traditional nano-CT scanning model requires very high mechanical precision and stability of the object manipulator for high-resolution imaging, which is difficult to achieve when the scanned object is continuously rotated. To reduce the scanning time and attain stable, high-resolution imaging in industrial non-destructive testing, we study an electron beam linear scanning mode for a nano-CT system that avoids the mechanical vibration and object movement caused by continuously rotating the object. Furthermore, to further reduce the scanning time and to study how small a scanning range can be used while retaining acceptable spatial resolution, an alternating iterative algorithm based on ℓ0 minimization is applied to the limited-angle nano-CT reconstruction problem with the electron beam linear scanning mode. The experimental results confirm the feasibility of the electron beam linear scanning mode of the nano-CT system.

  13. Mobile and embedded fast high resolution image stitching for long length rectangular monochromatic objects with periodic structure

    NASA Astrophysics Data System (ADS)

    Limonova, Elena; Tropin, Daniil; Savelyev, Boris; Mamay, Igor; Nikolaev, Dmitry

    2018-04-01

    In this paper we describe a stitching protocol that allows obtaining high-resolution images of long monochromatic objects with periodic structure. This protocol can be used for long documents or human-made objects in satellite images of uninhabited regions such as the Arctic. The length of such objects can be considerable, while modern camera sensors have limited resolution and cannot provide a good enough image of the whole object for further processing, e.g. use in an OCR system. The idea of the proposed method is to acquire a video stream containing the full object in high resolution and to use image stitching. We expect the scanned object to have straight boundaries and periodic structure, which allows us to introduce regularization into the stitching problem and to adapt the algorithm to the limited computational power of mobile and embedded CPUs. With the help of the detected boundaries and structure we estimate the homography between frames and use this information to reduce the complexity of stitching. We demonstrate our algorithm on a mobile device and show an image processing speed of 2 fps on a Samsung Exynos 5422 processor.
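The core of any stitching pipeline is estimating the inter-frame shift from the overlapping region. A minimal 1-D sketch using a plain cross-correlation search on a synthetic noise-like profile (the paper's method additionally exploits the detected boundaries and periodic structure to regularize this search and cut its cost):

```python
import random

random.seed(2)

# Synthetic 1-D intensity profile of a long object (noise-like texture).
profile = [random.uniform(-1.0, 1.0) for _ in range(400)]

# Two camera "frames" with a true inter-frame shift of 120 samples.
frame_a = profile[0:260]
frame_b = profile[120:380]

def best_shift(a, b, max_shift):
    """Estimate the shift of frame b relative to frame a by maximizing the
    mean product over the overlapping region (cross-correlation search)."""
    def score(s):
        overlap = min(len(a) - s, len(b))
        return sum(a[s + i] * b[i] for i in range(overlap)) / overlap
    return max(range(1, max_shift + 1), key=score)

shift = best_shift(frame_a, frame_b, 200)
stitched = frame_a[:shift] + frame_b   # mosaic the two frames
print(shift, len(stitched))
```

For a truly periodic texture this search becomes ambiguous up to the period, which is exactly why prior knowledge of the object's structure is valuable as a regularizer.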

  14. An electron beam linear scanning mode for industrial limited-angle nano-computed tomography

    NASA Astrophysics Data System (ADS)

    Wang, Chengxiang; Zeng, Li; Yu, Wei; Zhang, Lingli; Guo, Yumeng; Gong, Changcheng

    2018-01-01

    Nano-computed tomography (nano-CT), which uses X-rays to investigate the inner structure of small objects, is a high-spatial-resolution, non-destructive research technique that has been widely applied in biomedical research, electronic technology, geology, material sciences, etc. A traditional nano-CT scanning model requires very high mechanical precision and stability of the object manipulator for high-resolution imaging, which is difficult to achieve when the scanned object is continuously rotated. To reduce the scanning time and attain stable, high-resolution imaging in industrial non-destructive testing, we study an electron beam linear scanning mode for a nano-CT system that avoids the mechanical vibration and object movement caused by continuously rotating the object. Furthermore, to further reduce the scanning time and to study how small a scanning range can be used while retaining acceptable spatial resolution, an alternating iterative algorithm based on ℓ0 minimization is applied to the limited-angle nano-CT reconstruction problem with the electron beam linear scanning mode. The experimental results confirm the feasibility of the electron beam linear scanning mode of the nano-CT system.

  15. Mixel camera--a new push-broom camera concept for high spatial resolution keystone-free hyperspectral imaging.

    PubMed

    Høye, Gudrun; Fridman, Andrei

    2013-05-06

    Current high-resolution push-broom hyperspectral cameras introduce keystone errors into the captured data. Efforts to correct these errors in hardware severely limit the optical design, in particular with respect to light throughput and spatial resolution, while the residual keystone often remains large. The mixel camera solves this problem by combining a hardware component, an array of light-mixing chambers, with a mathematical method that restores the hyperspectral data to its keystone-free form, based on the data recorded onto the sensor with large keystone. Virtual Camera software, developed specifically for this purpose, was used to compare the performance of the mixel camera to traditional cameras that correct keystone in hardware. The mixel camera can collect at least four times more light than most current high-resolution hyperspectral cameras, and simulations have shown that the mixel camera will be photon-noise limited, even in bright light, with a significantly improved signal-to-noise ratio compared to traditional cameras. A prototype has been built and is being tested.
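The restoration step can be viewed as inverting a known linear mixing model: calibration tells us how keystone spreads each mixel across neighboring sensor pixels, and the scene is recovered by solving that linear system. A minimal noise-free sketch with an assumed 6x6 mixing matrix (the real camera works on much larger systems and must handle noise):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical 1-D scene of 6 mixels and an assumed 6x6 mixing matrix that
# models how keystone spreads each mixel over neighboring sensor pixels.
scene = rng.uniform(0.0, 1.0, size=6)
M = 0.7 * np.eye(6) + 0.2 * np.eye(6, k=1) + 0.1 * np.eye(6, k=-1)

recorded = M @ scene                      # what lands on the sensor
restored = np.linalg.solve(M, recorded)   # restoration step (noise-free)

print(float(np.max(np.abs(restored - scene))))   # ~0 up to round-off
```

The point of the mixing chambers is precisely to make this system well-posed: without them, large keystone would make the forward model poorly conditioned.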

  16. Improving Secondary Ion Mass Spectrometry Image Quality with Image Fusion

    NASA Astrophysics Data System (ADS)

    Tarolli, Jay G.; Jackson, Lauren M.; Winograd, Nicholas

    2014-12-01

    The spatial resolution of chemical images acquired with cluster secondary ion mass spectrometry (SIMS) is limited not only by the size of the probe used to create the images but also by detection sensitivity. As the probe size is reduced to below 1 μm, for example, a low signal in each pixel limits lateral resolution because of counting statistics considerations. Although it can be useful to implement numerical methods to mitigate this problem, here we investigate the use of image fusion to combine information from scanning electron microscope (SEM) data with chemically resolved SIMS images. The advantage of this approach is that the higher intensity and, hence, spatial resolution of the electron images can help to improve the quality of the SIMS images without sacrificing chemical specificity. Using a pan-sharpening algorithm, the method is illustrated with synthetic data, experimental data acquired from a metallic grid sample, and experimental data acquired from a lawn of algae cells. The results show that up to an order-of-magnitude increase in spatial resolution can be achieved. A cross-correlation metric is used to evaluate the reliability of the procedure.
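Pan-sharpening-style fusion can be sketched as high-pass detail injection: upsample the low-resolution chemical image, then add the high-frequency content of the high-resolution image. The images and block sizes below are illustrative stand-ins, not the authors' algorithm:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical high-resolution "pan" image (stand-in for SEM) and an
# independent low-resolution chemical map (stand-in for a SIMS channel).
pan = rng.uniform(0.0, 1.0, size=(16, 16))
chem = rng.uniform(0.0, 1.0, size=(4, 4))

# Block mean of the pan image, replicated back to full resolution.
pan_low = np.kron(pan.reshape(4, 4, 4, 4).mean(axis=(1, 3)), np.ones((4, 4)))

# Fusion: upsample the chemical map, then inject the pan image's
# high-frequency detail (simple high-pass-injection pan-sharpening).
fused = np.kron(chem, np.ones((4, 4))) + (pan - pan_low)

# Each 4x4 block of the fused image still averages to the chemical value,
# so chemical specificity is preserved while spatial detail is added.
print(np.allclose(fused.reshape(4, 4, 4, 4).mean(axis=(1, 3)), chem))
```

The block-mean preservation is the property that justifies "without sacrificing chemical specificity" in the abstract: fusion adds detail but does not change the chemistry measured at the SIMS resolution.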

  17. High resolution telescope

    DOEpatents

    Massie, Norbert A.; Oster, Yale

    1992-01-01

    A large effective-aperture, low-cost optical telescope with diffraction-limited resolution enables ground-based observation of near-earth space objects. The telescope has a non-redundant, thinned-aperture array in a center-mount, single-structure space frame. It employs speckle interferometric imaging to achieve diffraction-limited resolution. The signal-to-noise ratio problem is mitigated by moving the wavelength of operation to the near-IR, and the image is sensed by a silicon CCD. The steerable, single-structure array presents a constant pupil. The center-mount, radar-like mount enables low-earth-orbit space objects to be tracked and increases the stiffness of the space frame. In the preferred embodiment, the array has elemental telescopes with subapertures of 2.1 m in a circle-of-nine configuration. The telescope array has an effective aperture of 12 m, which provides a diffraction-limited resolution of 0.02 arc seconds. Pathlength matching of the telescope array is maintained by an electro-optical system employing laser metrology. Speckle imaging relaxes the pathlength matching tolerance by one order of magnitude compared to phased arrays. Many features of the telescope contribute to a substantial reduction in costs, including eliminating the conventional protective dome and reducing on-site construction activities. The cost of the telescope scales with the first power of the aperture rather than its third power as in conventional telescopes.
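The quoted 0.02 arc second figure is consistent with the Rayleigh diffraction limit for a 12 m aperture in the near-IR; the wavelength below is an assumed value, not taken from the patent:

```python
import math

# Rayleigh criterion: theta = 1.22 * lambda / D (radians).
wavelength = 1.2e-6   # assumed near-IR operating wavelength, metres
aperture = 12.0       # effective aperture of the array, metres
RAD_TO_ARCSEC = 180.0 / math.pi * 3600.0

theta = 1.22 * wavelength / aperture * RAD_TO_ARCSEC
print(round(theta, 3))   # ~0.025 arcsec, consistent with the quoted 0.02
```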

  18. Chrominance watermark for mobile applications

    NASA Astrophysics Data System (ADS)

    Reed, Alastair; Rogers, Eliot; James, Dan

    2010-01-01

    Creating an imperceptible watermark that can be read by a broad range of cell phone cameras is a difficult problem, caused by the inherently low resolution and noise levels of typical cell phone cameras. The quality limitations of these devices compared to a typical digital camera result from the small size of the cell phone and cost trade-offs made by the manufacturer. A low-resolution watermark is therefore required that can be resolved by a typical cell phone camera. Because the visibility of a traditional luminance watermark was too great at this lower resolution, a chrominance watermark was developed. The chrominance watermark takes advantage of the relatively low sensitivity of the human visual system to chrominance changes. This enables a chrominance watermark to be inserted into an image that is imperceptible to the human eye but can be read using a typical cell phone camera. Sample images are presented with very low watermark visibility that can nonetheless be easily read by a typical cell phone camera.
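A chrominance watermark can be sketched as a low-amplitude spread-spectrum pattern added to a chrominance channel and detected by correlation with the known pattern. The channel, amplitude, and detector below are illustrative assumptions, not the authors' scheme:

```python
import numpy as np

rng = np.random.default_rng(5)

img_cb = np.full((64, 64), 128.0)   # flat blue-difference chrominance channel

# Hypothetical spread-spectrum watermark: a low-amplitude +/-1 pattern,
# hidden in chrominance where the eye is least sensitive.
pattern = rng.choice([-1.0, 1.0], size=(64, 64))
amplitude = 2.0
marked_cb = img_cb + amplitude * pattern

def detect(cb, pat):
    """Correlate the mean-removed channel with the known pattern."""
    return float(np.mean((cb - cb.mean()) * pat))

print(detect(marked_cb, pattern))   # ~ amplitude: watermark present
print(detect(img_cb, pattern))      # ~ 0: watermark absent
```

Because the pattern is zero-mean and uncorrelated with typical image content, the same correlation detector also works on textured images, at the cost of more detection noise.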

  19. Examples of current radar technology and applications, chapter 5, part B

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Basic principles and tradeoff considerations for SLAR are summarized. There are two fundamental types of SLAR sensors available to the remote sensing user: real aperture and synthetic aperture. The primary difference between the two types is that a synthetic aperture system is capable of significant improvements in target resolution but requires equally significant added complexity and cost. The advantages of real aperture SLAR include long range coverage, all-weather operation, in-flight processing and image viewing, and lower cost. The fundamental limitation of the real aperture approach is target resolution. Synthetic aperture processing is the most practical approach for remote sensing problems that require resolution higher than 30 to 40 m.

  20. Computer programs: Operational and mathematical, a compilation

    NASA Technical Reports Server (NTRS)

    1973-01-01

    Several computer programs which are available through the NASA Technology Utilization Program are outlined. Presented are: (1) Computer operational programs which can be applied to resolve procedural problems swiftly and accurately. (2) Mathematical applications for the resolution of problems encountered in numerous industries. Although the functions which these programs perform are not new and similar programs are available in many large computer center libraries, this collection may be of use to centers with limited systems libraries and for instructional purposes for new computer operators.

  1. A resolution measure for three-dimensional microscopy

    PubMed Central

    Chao, Jerry; Ram, Sripad; Abraham, Anish V.; Ward, E. Sally; Ober, Raimund J.

    2009-01-01

    A three-dimensional (3D) resolution measure for the conventional optical microscope is introduced which overcomes the drawbacks of the classical 3D (axial) resolution limit. Formulated within the context of a parameter estimation problem and based on the Cramér-Rao lower bound, this 3D resolution measure indicates the accuracy with which a given distance between two objects in 3D space can be determined from the acquired image. It predicts that, given enough photons from the objects of interest, arbitrarily small distances of separation can be estimated with prespecified accuracy. Using simulated images of point source pairs, we show that the maximum likelihood estimator is capable of attaining the accuracy predicted by the resolution measure. We also demonstrate how different factors, such as extraneous noise sources and the spatial orientation of the imaged object pair, can affect the accuracy with which a given distance of separation can be determined. PMID:20161040
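The photon-count scaling behind such a resolution measure can be illustrated in the simplest case: localizing a single point source through a Gaussian PSF, where the Cramér-Rao bound gives an accuracy of sigma/sqrt(N) for N detected photons and the sample mean (the ML estimator) attains it. A minimal 1-D simulation with illustrative parameters:

```python
import random
import statistics

random.seed(6)

SIGMA_PSF = 1.0   # Gaussian PSF width (arbitrary units)

def estimate_position(n_photons):
    """For a Gaussian PSF the sample mean of the photon arrival coordinates
    is the maximum-likelihood estimate of the source position."""
    return statistics.fmean(random.gauss(0.0, SIGMA_PSF)
                            for _ in range(n_photons))

accuracy = {}
for n in (100, 400):
    errs = [estimate_position(n) for _ in range(3000)]
    accuracy[n] = statistics.stdev(errs)
print(accuracy)   # ~ {100: 0.10, 400: 0.05}, i.e. sigma / sqrt(N)
```

This is the sense in which "given enough photons, arbitrarily small distances can be estimated with prespecified accuracy": the bound shrinks without limit as the photon count grows.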

  2. A Microfluidic Platform for Correlative Live-Cell and Super-Resolution Microscopy

    PubMed Central

    Tam, Johnny; Cordier, Guillaume Alan; Bálint, Štefan; Sandoval Álvarez, Ángel; Borbely, Joseph Steven; Lakadamyali, Melike

    2014-01-01

    Recently, super-resolution microscopy methods such as stochastic optical reconstruction microscopy (STORM) have enabled visualization of subcellular structures below the optical resolution limit. Due to the poor temporal resolution, however, these methods have mostly been used to image fixed cells or dynamic processes that evolve on slow time-scales. In particular, fast dynamic processes and their relationship to the underlying ultrastructure or nanoscale protein organization cannot be discerned. To overcome this limitation, we have recently developed a correlative and sequential imaging method that combines live-cell and super-resolution microscopy. This approach adds dynamic background to ultrastructural images providing a new dimension to the interpretation of super-resolution data. However, currently, it suffers from the need to carry out tedious steps of sample preparation manually. To alleviate this problem, we implemented a simple and versatile microfluidic platform that streamlines the sample preparation steps in between live-cell and super-resolution imaging. The platform is based on a microfluidic chip with parallel, miniaturized imaging chambers and an automated fluid-injection device, which delivers a precise amount of a specified reagent to the selected imaging chamber at a specific time within the experiment. We demonstrate that this system can be used for live-cell imaging, automated fixation, and immunostaining of adherent mammalian cells in situ followed by STORM imaging. We further demonstrate an application by correlating mitochondrial dynamics, morphology, and nanoscale mitochondrial protein distribution in live and super-resolution images. PMID:25545548

  3. Photon theory hypothesis about photon tunneling microscope's subwavelength resolution

    NASA Astrophysics Data System (ADS)

    Zhu, Yanbin; Ma, Junfu

    1995-09-01

    The foundations for the invention of the photon scanning tunneling microscope (PSTM) are the near-field scanning optical microscope, the optical fiber technique, total internal reflection, highly sensitive opto-electronic detection techniques, computer techniques, etc. Recent research results show that a subwavelength resolution of 1 - 3 nm has been obtained. How does the PSTM achieve such high subwavelength resolution? What is the limiting value of the PSTM's subwavelength resolution? To resolve these problems, this paper presents a photon theory hypothesis about the PSTM that is based on the following two basic laws: (1) A photon is not only a carrier bringing energy and optical information, but also a particle occupying a fixed space size. (2) When a photon undergoes reflection, refraction, scattering, etc., only the energy and optical information it carries change; its particle size does not change: g · p_photon = constant. Applying these two basic laws to the PSTM, the 'evanescent field' is in practice a weak photon distribution field, and the detecting fiber tip diameter is in practice a 'gate' whose size controls the number of photons entering the fiber tip. Through calculation and inference, the following three conclusions can be given: (1) Provided the PSTM's detection system sensitivity is high enough, the diameter D of the detecting fiber tip and the near-field detecting distance Z are the two most important factors determining the subwavelength resolution of the PSTM. (2) The limit of the PSTM's resolution will be reached under the conditions D = p_photon and Z = p_photon, where p_photon is the size of one photon. (3) The final resolution limit R of the PSTM will be lim R = p_photon as D → p_photon and Z → p_photon.

  4. Approach to simultaneously denoise and invert backscatter and extinction from photon-limited atmospheric lidar observations.

    PubMed

    Marais, Willem J; Holz, Robert E; Hu, Yu Hen; Kuehn, Ralph E; Eloranta, Edwin E; Willett, Rebecca M

    2016-10-10

    Atmospheric lidar observations provide a unique capability to directly observe the vertical column of cloud and aerosol scattering properties. Detector and solar-background noise, however, hinder the ability of lidar systems to provide reliable backscatter and extinction cross-section estimates. Standard methods for solving this inverse problem are most effective with high signal-to-noise ratio observations that are only available at low resolution in uniform scenes. This paper describes a novel method for solving the inverse problem with high-resolution, lower signal-to-noise ratio observations that are effective in non-uniform scenes. The novelty is twofold. First, the inferences of the backscatter and extinction are applied to images, whereas current lidar algorithms only use the information content of single profiles. Hence, the latent spatial and temporal information in noisy images are utilized to infer the cross-sections. Second, the noise associated with photon-counting lidar observations can be modeled using a Poisson distribution, and state-of-the-art tools for solving Poisson inverse problems are adapted to the atmospheric lidar problem. It is demonstrated through photon-counting high spectral resolution lidar (HSRL) simulations that the proposed algorithm yields inverted backscatter and extinction cross-sections (per unit volume) with smaller mean squared error values at higher spatial and temporal resolutions, compared to the standard approach. Two case studies of real experimental data are also provided where the proposed algorithm is applied on HSRL observations and the inverted backscatter and extinction cross-sections are compared against the standard approach.

  5. Quantum Theory of Three-Dimensional Superresolution Using Rotating-PSF Imagery

    NASA Astrophysics Data System (ADS)

    Prasad, S.; Yu, Z.

    The inverse of the quantum Fisher information (QFI) matrix (and extensions thereof) provides the ultimate lower bound on the variance of any unbiased estimation of a parameter from statistical data, whether of intrinsically quantum mechanical or classical character. We calculate the QFI for Poisson-shot-noise-limited imagery using the rotating PSF that can localize and resolve point sources fully in all three dimensions. We also propose an experimental approach based on the use of computer generated hologram and projective measurements to realize the QFI-limited variance for the problem of super-resolving a closely spaced pair of point sources at a highly reduced photon cost. The paper presents a preliminary analysis of quantum-limited three-dimensional (3D) pair optical super-resolution (OSR) problem with potential applications to astronomical imaging and 3D space-debris localization.

  6. A cost-effective strategy for nonoscillatory convection without clipping

    NASA Technical Reports Server (NTRS)

    Leonard, B. P.; Niknafs, H. S.

    1990-01-01

    Clipping of narrow extrema and distortion of smooth profiles are well-known problems associated with so-called high-resolution nonoscillatory convection schemes. A strategy is presented for accurately simulating highly convective flows containing discontinuities such as density fronts or shock waves, without distorting smooth profiles or clipping narrow local extrema. The convection algorithm is based on non-artificially diffusive third-order upwinding in smooth regions, with automatic adaptive stencil expansion to (in principle, arbitrarily) higher-order upwinding locally, in regions of rapidly changing gradients. This is highly cost effective because the wider stencil is used only where needed: in isolated narrow regions. A recently developed universal limiter assures sharp monotonic resolution of discontinuities without introducing artificial diffusion or numerical compression. An adaptive discriminator is constructed to distinguish between spurious overshoots and physical peaks; this automatically relaxes the limiter near local turning points, thereby avoiding loss of resolution in narrow extrema. Examples are given for one-dimensional pure convection of scalar profiles at constant velocity.
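The role of a limiter can be sketched with the classic minmod-limited upwind scheme for 1-D linear advection (a standard textbook scheme, not the authors' higher-order adaptive-stencil algorithm): the limited step transports a discontinuity without creating spurious over- or undershoots.

```python
def minmod(a, b):
    """Smallest-magnitude slope if a and b agree in sign, else zero."""
    if a * b <= 0.0:
        return 0.0
    return min(abs(a), abs(b)) * (1.0 if a > 0 else -1.0)

def limited_step(u, c):
    """One forward-Euler step of minmod-limited upwind advection
    (constant speed > 0, periodic domain, CFL number c)."""
    n = len(u)
    s = [minmod(u[i] - u[i - 1], u[(i + 1) % n] - u[i]) for i in range(n)]
    # Limited face value just upwind of interface i+1/2:
    f = [u[i] + 0.5 * (1.0 - c) * s[i] for i in range(n)]
    return [u[i] - c * (f[i] - f[i - 1]) for i in range(n)]

u = [1.0 if 8 <= i < 16 else 0.0 for i in range(32)]   # step profile
for _ in range(20):
    u = limited_step(u, 0.5)
print(min(u), max(u))   # stays within the initial [0, 1] bounds
```

The minmod limiter also flattens slopes at every local extremum, which is exactly the "clipping" behavior the paper's adaptive discriminator is designed to relax near physical peaks.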

  7. Fundamental Constraints on the Coherence of Probing Signals in the Problem of Maximizing the Resolution and Range in the Stroboscopic Range of Asteroids

    NASA Astrophysics Data System (ADS)

    Zakharchenko, V. D.; Kovalenko, I. G.; Pak, O. V.; Ryzhkov, V. Yu.

    2018-05-01

    The problem of coherence violation in stroboscopic ranging with high range resolution, due to mutual phase instability of the probing and reference radio signals, has been considered. It has been shown that violation of coherence in stroboscopic ranging systems is equivalent to the action of a modulating interference and leads to a decrease in system sensitivity. Requirements have been formulated for the coherence of the reference generators in the stroboscopic processing system. The results of statistical modeling have been presented. It was shown that, at the current state of technology, the frequency stability of the reference generators provides coherence sufficient to probe asteroids with super-resolving signals at ranges of up to 70 million kilometers. In this case, dispersion of the signal in the cosmic plasma limits the linear resolution of asteroid details at this range to 2.7 m. A comparison has been made with the current radar resolution of asteroids, which at the end of 2015 was 7.5 m at a range of 7 million kilometers.

  8. A practical approach to superresolution

    NASA Astrophysics Data System (ADS)

    Farsiu, Sina; Elad, Michael; Milanfar, Peyman

    2006-01-01

    Theoretical and practical limitations usually constrain the achievable resolution of any imaging device. Super-resolution (SR) methods have been developed over the years to go beyond this limit by acquiring and fusing several low-resolution (LR) images of the same scene to produce a high-resolution (HR) image. The early works on SR, although occasionally mathematically optimal for particular models of data and noise, produced poor results when applied to real images. In this paper, we discuss two of the main issues related to designing a practical SR system, namely reconstruction accuracy and computational efficiency. Reconstruction accuracy refers to the problem of designing a robust SR method applicable to images from different imaging systems. We study a general framework for optimal reconstruction of images from grayscale, color, or color-filtered (CFA) cameras. The performance of our proposed method is boosted by using powerful priors and is robust to both measurement noise (e.g., CCD readout noise) and system noise (e.g., motion estimation error). Noting that motion estimation is often considered a bottleneck for SR performance, we introduce the concept of "constrained motions" for enhancing the quality of super-resolved images. We show that using such constraints enhances the quality of the motion estimation and therefore results in more accurate reconstruction of the HR images. We also justify some practical assumptions that greatly reduce the computational complexity and memory requirements of the proposed methods. We use an efficient approximation of the Kalman filter (KF) and adopt a dynamic point of view on the SR problem. Novel methods for addressing these issues are accompanied by experimental results on real data.
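The basic idea of fusing shifted LR frames can be sketched in the idealized 1-D case of pure decimation with exactly known translations, where shift-and-add interlacing recovers the HR signal exactly (real systems must also handle blur, noise, and motion-estimation error, which is what the paper's robust framework addresses):

```python
import numpy as np

hr = np.sin(np.linspace(0.0, 6.0, 32)) + 1.0   # toy high-resolution scene

# Four LR frames: the scene sampled every 4th point at four different
# sub-pixel translations (ideal sampling with no blur -- a deliberate
# simplification of the image-formation model).
frames = {s: hr[s::4] for s in range(4)}

# Shift-and-add fusion: interlace the LR samples back onto the HR grid.
sr = np.empty(32)
for s, f in frames.items():
    sr[s::4] = f

print(float(np.abs(sr - hr).max()))   # exact recovery in this idealized case
```

This also illustrates why motion estimation is the bottleneck: if the assumed shifts were wrong by even one sample, the interlacing would scramble rather than reconstruct the scene.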

  9. Asymptotic-Preserving methods and multiscale models for plasma physics

    NASA Astrophysics Data System (ADS)

    Degond, Pierre; Deluzet, Fabrice

    2017-05-01

    The purpose of the present paper is to provide an overview of Asymptotic-Preserving methods for multiscale plasma simulations by addressing three singular perturbation problems. First, the quasi-neutral limit of fluid and kinetic models is investigated in the framework of non-magnetized as well as magnetized plasmas. Second, the drift limit for fluid descriptions of thermal plasmas under large magnetic fields is addressed. Finally efficient numerical resolutions of anisotropic elliptic or diffusion equations arising in magnetized plasma simulation are reviewed.

  10. Proceedings of the 1993 Complex Systems Engineering Synthesis and Assessment Technology Workshop (CSESAW '93)

    DTIC Science & Technology

    1993-10-17


  11. High-resolution X-ray crystal structure of bovine H-protein using the high-pressure cryocooling method.

    PubMed

    Higashiura, Akifumi; Ohta, Kazunori; Masaki, Mika; Sato, Masaru; Inaka, Koji; Tanaka, Hiroaki; Nakagawa, Atsushi

    2013-11-01

    Recently, many technical improvements in macromolecular X-ray crystallography have increased the number of structures deposited in the Protein Data Bank and improved the resolution limit of protein structures. Almost all high-resolution structures have been determined using a synchrotron radiation source in conjunction with cryocooling techniques, which are required in order to minimize radiation damage. However, optimization of cryoprotectant conditions is a time-consuming and difficult step. To overcome this problem, the high-pressure cryocooling method was developed (Kim et al., 2005) and successfully applied to many protein-structure analyses. In this report, using the high-pressure cryocooling method, the X-ray crystal structure of bovine H-protein was determined at 0.86 Å resolution. Structural comparisons between high- and ambient-pressure cryocooled crystals at ultra-high resolution illustrate the versatility of this technique. This is the first ultra-high-resolution X-ray structure obtained using the high-pressure cryocooling method.

  12. Wavelet Filter Banks for Super-Resolution SAR Imaging

    NASA Technical Reports Server (NTRS)

    Sheybani, Ehsan O.; Deshpande, Manohar; Memarsadeghi, Nargess

    2011-01-01

    This paper discusses innovative wavelet-based filter banks designed to enhance the analysis of super-resolution Synthetic Aperture Radar (SAR) images using parametric spectral methods and signal classification algorithms. SAR finds applications in many of NASA's earth science fields such as deformation, ecosystem structure, dynamics of ice, snow and cold land processes, and surface water and ocean topography. Traditionally, standard methods such as the Fast Fourier Transform (FFT) and Inverse Fast Fourier Transform (IFFT) have been used to extract images from SAR radar data. Due to the non-parametric nature of these methods, their resolution limitations, and their observation-time dependence, the use of spectral estimation and wavelet-based signal pre- and post-processing techniques to process SAR radar data has been proposed. Multi-resolution wavelet transforms and advanced spectral estimation techniques have proven to offer efficient solutions to this problem.
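A wavelet filter bank splits a signal into coarse approximation and detail sub-bands and can reconstruct it perfectly from them. A minimal sketch with the Haar wavelet, the simplest such filter bank (the paper's designs are more elaborate):

```python
import math

def haar_forward(x):
    """One level of the Haar wavelet transform: averages and details."""
    a = [(x[2 * i] + x[2 * i + 1]) / math.sqrt(2.0) for i in range(len(x) // 2)]
    d = [(x[2 * i] - x[2 * i + 1]) / math.sqrt(2.0) for i in range(len(x) // 2)]
    return a, d

def haar_inverse(a, d):
    """Perfect reconstruction from the two sub-bands."""
    x = []
    for ai, di in zip(a, d):
        x.append((ai + di) / math.sqrt(2.0))
        x.append((ai - di) / math.sqrt(2.0))
    return x

signal = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]
approx, detail = haar_forward(signal)
rebuilt = haar_inverse(approx, detail)
print(rebuilt)   # matches the input signal (perfect reconstruction)
```

Denoising or feature extraction then amounts to processing the sub-bands (e.g. shrinking small detail coefficients) before inverting, which is the multi-resolution machinery the abstract refers to.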

  13. Relative and Absolute Error Control in a Finite-Difference Method Solution of Poisson's Equation

    ERIC Educational Resources Information Center

    Prentice, J. S. C.

    2012-01-01

    An algorithm for error control (absolute and relative) in the five-point finite-difference method applied to Poisson's equation is described. The algorithm is based on discretization of the domain of the problem by means of three rectilinear grids, each of different resolution. We discuss some hardware limitations associated with the algorithm,…
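The multi-grid idea rests on the fact that the five-point scheme is second order: halving the grid spacing reduces the discretization error roughly fourfold, so comparing solutions at different resolutions yields an error estimate. A minimal two-grid sketch (the Jacobi iteration and the test problem are illustrative choices, not the article's exact algorithm):

```python
import numpy as np

def max_error(n, iters=4000):
    """Jacobi solve of the five-point discretization of -lap(u) = f on the
    unit square (u = 0 on the boundary), with f chosen so that the exact
    solution is sin(pi x) sin(pi y); returns the maximum nodal error."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    X, Y = np.meshgrid(x, x, indexing="ij")
    exact = np.sin(np.pi * X) * np.sin(np.pi * Y)
    f = 2.0 * np.pi ** 2 * exact
    u = np.zeros_like(exact)
    for _ in range(iters):
        u[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1] +
                                u[1:-1, 2:] + u[1:-1, :-2] +
                                h * h * f[1:-1, 1:-1])
    return float(np.abs(u - exact).max())

e_coarse = max_error(8)
e_fine = max_error(16)
print(e_coarse / e_fine)   # ~4: halving h quarters the discretization error
```

In practice the exact solution is unknown, so the difference between the coarse- and fine-grid solutions serves as the computable (Richardson-style) error estimate.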

  14. Unleashing spatially distributed ecohydrology modeling using Big Data tools

    NASA Astrophysics Data System (ADS)

    Miles, B.; Idaszak, R.

    2015-12-01

    Physically based spatially distributed ecohydrology models are useful for answering science and management questions related to the hydrology and biogeochemistry of prairie, savanna, forested, as well as urbanized ecosystems. However, these models can produce hundreds of gigabytes of spatial output for a single model run over decadal time scales when run at regional spatial scales and moderate spatial resolutions (~100-km2+ at 30-m spatial resolution) or when run for small watersheds at high spatial resolutions (~1-km2 at 3-m spatial resolution). Numerical data formats such as HDF5 can store arbitrarily large datasets. However even in HPC environments, there are practical limits on the size of single files that can be stored and reliably backed up. Even when such large datasets can be stored, querying and analyzing these data can suffer from poor performance due to memory limitations and I/O bottlenecks, for example on single workstations where memory and bandwidth are limited, or in HPC environments where data are stored separately from computational nodes. The difficulty of storing and analyzing spatial data from ecohydrology models limits our ability to harness these powerful tools. Big Data tools such as distributed databases have the potential to surmount the data storage and analysis challenges inherent to large spatial datasets. Distributed databases solve these problems by storing data close to computational nodes while enabling horizontal scalability and fault tolerance. Here we present the architecture of and preliminary results from PatchDB, a distributed datastore for managing spatial output from the Regional Hydro-Ecological Simulation System (RHESSys). The initial version of PatchDB uses message queueing to asynchronously write RHESSys model output to an Apache Cassandra cluster. Once stored in the cluster, these data can be efficiently queried to quickly produce both spatial visualizations for a particular variable (e.g. 
maps and animations), as well as point time series of arbitrary variables at arbitrary points in space within a watershed or river basin. By treating ecohydrology modeling as a Big Data problem, we hope to provide a platform for answering transformative science and management questions related to water quantity and quality in a world of non-stationary climate.
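    The message-queue decoupling described above — the model enqueues output and a worker persists it asynchronously — can be sketched with a producer/consumer queue. The snippet below is a minimal, hypothetical stand-in: an in-memory dict plays the role of the Apache Cassandra cluster, and the record layout (`patch_id`, `timestep`, variable dict) is an illustrative assumption, not PatchDB's actual schema.

```python
import queue
import threading

# In-memory stand-in for the Cassandra cluster, keyed by (patch_id, timestep).
datastore = {}

def consumer(q):
    """Drain the queue and persist each record. A real PatchDB worker would
    issue an INSERT against Cassandra here instead of writing to a dict."""
    while True:
        record = q.get()
        if record is None:          # sentinel: shut down the worker
            q.task_done()
            break
        patch_id, timestep, values = record
        datastore[(patch_id, timestep)] = values
        q.task_done()

def run_model_step(q, timestep, n_patches):
    """The model enqueues its output without blocking on storage I/O."""
    for patch_id in range(n_patches):
        q.put((patch_id, timestep, {"streamflow": 0.1 * patch_id}))

q = queue.Queue()
worker = threading.Thread(target=consumer, args=(q,))
worker.start()
for t in range(3):
    run_model_step(q, t, n_patches=4)
q.put(None)        # signal shutdown
worker.join()
```

    The same pattern scales out by replacing the dict with writes to a database cluster and running many consumers, which is what gives the asynchronous design its horizontal scalability.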

  15. Coronal Heating and the Need for High-Resolution Observations

    NASA Technical Reports Server (NTRS)

    Klimchuk, James A.

    2008-01-01

    Despite excellent progress in recent years in understanding coronal heating, there remain many crucial questions that are still unanswered. Limitations in the observations are one important reason. Both theoretical and observational considerations point to the importance of small spatial scales, impulsive energy release, strong dynamics, and extreme plasma nonuniformity. As a consequence, high spatial resolution, broad temperature coverage, high temperature fidelity, and sensitivity to velocities and densities are all critical observational parameters. Current instruments lack one or more of these properties, and this has led to considerable ambiguity and confusion. In this talk, I will discuss recent ideas about coronal heating and emphasize that high spatial resolution observations, especially spectroscopic observations, are needed to make major progress on this important problem.

  16. Collimator application for microchannel plate image intensifier resolution improvement

    DOEpatents

    Thomas, Stanley W.

    1996-02-27

    A collimator is included in a microchannel plate image intensifier (MCPI). Collimators can be useful in improving resolution of MCPIs by eliminating the scattered electron problem and by limiting the transverse energy of electrons reaching the screen. Due to its optical absorption, a collimator will also increase the extinction ratio of an intensifier by approximately an order of magnitude. Additionally, the smooth surface of the collimator will permit a higher focusing field to be employed in the MCP-to-collimator region than is currently permitted in the MCP-to-screen region by the relatively rough and fragile aluminum layer covering the screen. Coating the MCP and collimator surfaces with aluminum oxide appears to permit additional significant increases in the field strength, resulting in better resolution.

  17. Nanopositioning for polarimetric characterization.

    PubMed

    Qureshi, Naser; Kolokoltsev, Oleg V; Ortega-Martínez, Roberto; Ordoñez-Romero, C L

    2008-12-01

    A positioning system with approximately nanometer resolution has been developed based on a new implementation of a motor-driven screw scheme. In contrast to conventional positioning systems based on piezoelectric elements, this system shows remarkably low levels of drift and vibration, and eliminates the need for position feedback during typical data acquisition processes. During positioning or scanning processes, non-repeatability and hysteresis problems inherent in mechanical positioning systems are greatly reduced using a software feedback scheme. As a result, we are able to demonstrate an average mechanical resolution of 1.45 nm and near diffraction-limited imaging using scanning optical microscopy. We propose this approach to nanopositioning as a readily accessible alternative enabling high spatial resolution scanning probe characterization (e.g., polarimetry) and provide practical details for its implementation.

  18. Hybrid network modeling and the effect of image resolution on digitally-obtained petrophysical and two-phase flow properties

    NASA Astrophysics Data System (ADS)

    Aghaei, A.

    2017-12-01

    Digital imaging and modeling of rocks and subsequent simulation of physical phenomena in digitally-constructed rock models are becoming an integral part of core analysis workflows. One of the inherent limitations of image-based analysis, at any given scale, is image resolution. This limitation becomes more evident when the rock has multiple scales of porosity such as in carbonates and tight sandstones. Multi-scale imaging and constructions of hybrid models that encompass images acquired at multiple scales and resolutions are proposed as a solution to this problem. In this study, we investigate the effect of image resolution and unresolved porosity on petrophysical and two-phase flow properties calculated based on images. A helical X-ray micro-CT scanner with a high cone-angle is used to acquire digital rock images that are free of geometric distortion. To remove subjectivity from the analyses, a semi-automated image processing technique is used to process and segment the acquired data into multiple phases. Direct and pore network based models are used to simulate physical phenomena and obtain absolute permeability, formation factor and two-phase flow properties such as relative permeability and capillary pressure. The effect of image resolution on each property is investigated. Finally a hybrid network model incorporating images at multiple resolutions is built and used for simulations. The results from the hybrid model are compared against results from the model built at the highest resolution and those from laboratory tests.

  19. Numerical Modeling of Poroelastic-Fluid Systems Using High-Resolution Finite Volume Methods

    NASA Astrophysics Data System (ADS)

    Lemoine, Grady

    Poroelasticity theory models the mechanics of porous, fluid-saturated, deformable solids. It was originally developed by Maurice Biot to model geophysical problems, such as seismic waves in oil reservoirs, but has also been applied to modeling living bone and other porous media. Poroelastic media often interact with fluids, such as in ocean bottom acoustics or propagation of waves from soft tissue into bone. This thesis describes the development and testing of high-resolution finite volume numerical methods, and simulation codes implementing these methods, for modeling systems of poroelastic media and fluids in two and three dimensions. These methods operate on both rectilinear grids and logically rectangular mapped grids. To allow the use of these methods, Biot's equations of poroelasticity are formulated as a first-order hyperbolic system with a source term; this source term is incorporated using operator splitting. Some modifications are required to the classical high-resolution finite volume method. Obtaining correct solutions at interfaces between poroelastic media and fluids requires a novel transverse propagation scheme and the removal of the classical second-order correction term at the interface, and in three dimensions a new wave limiting algorithm is also needed to correctly limit shear waves. The accuracy and convergence rates of the methods of this thesis are examined for a variety of analytical solutions, including simple plane waves, reflection and transmission of waves at an interface between different media, and scattering of acoustic waves by a poroelastic cylinder. Solutions are also computed for a variety of test problems from the computational poroelasticity literature, as well as some original test problems designed to mimic possible applications for the simulation code.

  20. GRACE Hydrological estimates for small basins: Evaluating processing approaches on the High Plains Aquifer, USA

    NASA Astrophysics Data System (ADS)

    Longuevergne, Laurent; Scanlon, Bridget R.; Wilson, Clark R.

    2010-11-01

    The Gravity Recovery and Climate Experiment (GRACE) satellites provide observations of water storage variation at regional scales. However, when focusing on a region of interest, limited spatial resolution and noise contamination can cause estimation bias and spatial leakage, problems that are exacerbated as the region of interest approaches the GRACE resolution limit of a few hundred km. Reliable estimates of water storage variations in small basins require compromises between competing needs for noise suppression and spatial resolution. The objective of this study was to quantitatively investigate processing methods and their impacts on bias, leakage, GRACE noise reduction, and estimated total error, allowing these trade-offs to be resolved. Among the methods tested is a recently developed concentration algorithm called spatiospectral localization, which optimizes the basin shape description, taking into account limited spatial resolution. This method is particularly suited to retrieval of basin-scale water storage variations and is effective for small basins. To increase confidence in derived methods, water storage variations were calculated for both CSR (Center for Space Research) and GRGS (Groupe de Recherche de Géodésie Spatiale) GRACE products, which employ different processing strategies. The processing techniques were tested on the intensively monitored High Plains Aquifer (450,000 km2 area), where application of the appropriate optimal processing method allowed retrieval of water storage variations over a portion of the aquifer as small as ˜200,000 km2.

  1. Improving Secondary Ion Mass Spectrometry Image Quality with Image Fusion

    PubMed Central

    Tarolli, Jay G.; Jackson, Lauren M.; Winograd, Nicholas

    2014-01-01

    The spatial resolution of chemical images acquired with cluster secondary ion mass spectrometry (SIMS) is limited not only by the size of the probe utilized to create the images, but also by detection sensitivity. As the probe size is reduced to below 1 µm, for example, a low signal in each pixel limits lateral resolution due to counting statistics considerations. Although it can be useful to implement numerical methods to mitigate this problem, here we investigate the use of image fusion to combine information from scanning electron microscope (SEM) data with chemically resolved SIMS images. The advantage of this approach is that the higher intensity and, hence, spatial resolution of the electron images can help to improve the quality of the SIMS images without sacrificing chemical specificity. Using a pan-sharpening algorithm, the method is illustrated using synthetic data, experimental data acquired from a metallic grid sample, and experimental data acquired from a lawn of algae cells. The results show that up to an order-of-magnitude increase in spatial resolution can be achieved. A cross-correlation metric is used to evaluate the reliability of the procedure. PMID:24912432
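    The pan-sharpening idea above can be illustrated with the simplest ratio-based (Brovey-style) fusion — not the authors' exact algorithm — assuming the low-resolution chemical bands have already been upsampled and co-registered to the high-resolution (pan/SEM) grid:

```python
import numpy as np

def brovey_fuse(bands, pan, eps=1e-12):
    """Scale each low-resolution band by the ratio of the high-resolution
    pan image to the bands' mean intensity, injecting spatial detail while
    preserving the relative (spectral/chemical) contrast between bands.

    bands : (k, H, W) low-resolution channels, upsampled to pan's grid
    pan   : (H, W) high-resolution intensity image
    """
    intensity = bands.mean(axis=0)
    ratio = pan / (intensity + eps)        # detail map shared by all bands
    return bands * ratio[None, :, :]

# Toy example: two flat bands; the pan image carries a bright spot of detail.
bands = np.ones((2, 4, 4))
pan = np.ones((4, 4))
pan[1, 1] = 3.0
fused = brovey_fuse(bands, pan)
```

    Because every band is multiplied by the same ratio, band-to-band ratios (the chemical specificity) are unchanged while the spatial detail of the pan image is transferred.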

  2. Fast super-resolution with affine motion using an adaptive Wiener filter and its application to airborne imaging.

    PubMed

    Hardie, Russell C; Barnard, Kenneth J; Ordonez, Raul

    2011-12-19

    Fast nonuniform interpolation based super-resolution (SR) has traditionally been limited to applications with translational interframe motion. This is in part because such methods are based on an underlying assumption that the warping and blurring components in the observation model commute. For translational motion this is the case, but it is not true in general. This presents a problem for applications such as airborne imaging where translation may be insufficient. Here we present a new Fourier domain analysis to show that, for many image systems, an affine warping model with limited zoom and shear approximately commutes with the point spread function when diffraction effects are modeled. Based on this important result, we present a new fast adaptive Wiener filter (AWF) SR algorithm for non-translational motion and study its performance with affine motion. The fast AWF SR method employs a new smart observation window that allows us to precompute all the needed filter weights for any type of motion without sacrificing much of the full performance of the AWF. We evaluate the proposed algorithm using simulated data and real infrared airborne imagery that contains a thermal resolution target allowing for objective resolution analysis.

  3. G.A.M.E.: GPU-accelerated mixture elucidator.

    PubMed

    Schurz, Alioune; Su, Bo-Han; Tu, Yi-Shu; Lu, Tony Tsung-Yu; Lin, Olivia A; Tseng, Yufeng J

    2017-09-15

    GPU acceleration is useful in solving complex chemical information problems. Identifying unknown structures from the mass spectra of natural product mixtures has been a desirable yet unresolved issue in metabolomics. However, this elucidation process has been hampered by complex experimental data and the inability of instruments to completely separate different compounds. Fortunately, with current high-resolution mass spectrometry, one feasible strategy is to define this problem as extending a scaffold database with sidechains of different probabilities to match the high-resolution mass obtained from a high-resolution mass spectrum. By introducing a dynamic programming (DP) algorithm, it is possible to solve this NP-complete problem in pseudo-polynomial time. However, the running time of the DP algorithm grows by orders of magnitude as the number of mass decimal digits increases, thus limiting the boost in structural prediction capabilities. By harnessing the heavily parallel architecture of modern GPUs, we designed a "compute unified device architecture" (CUDA)-based GPU-accelerated mixture elucidator (G.A.M.E.) that considerably improves the performance of the DP, allowing up to five decimal digits for input mass data. As exemplified by four testing datasets with verified constitutions from natural products, G.A.M.E. allows for efficient and automatic structural elucidation of unknown mixtures for practical procedures.
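    The pseudo-polynomial behavior described above can be illustrated with a generic coin-change-style DP over integerized masses (a sketch of the algorithm class, not G.A.M.E. itself; the function name and mass values are illustrative). The DP table size, and hence the runtime, grows by a factor of ten with each retained decimal digit:

```python
def decompose(target_mass, sidechain_masses, digits):
    """Find one combination of sidechain masses (with repetition) whose sum
    matches target_mass to the retained number of decimal digits, or None.
    The table has ~target_mass * 10**digits entries: pseudo-polynomial."""
    scale = 10 ** digits
    target = round(target_mass * scale)
    masses = [round(m * scale) for m in sidechain_masses]
    # parent[s] = (previous sum, mass index) for one way to reach sum s
    parent = {0: None}
    for s in range(1, target + 1):
        for i, m in enumerate(masses):
            if m <= s and (s - m) in parent:
                parent[s] = (s - m, i)
                break
    if target not in parent:
        return None
    combo, s = [], target
    while s:
        s, i = parent[s]
        combo.append(sidechain_masses[i])
    return sorted(combo)
```

    A GPU version parallelizes the inner loop over candidate sums, which is what makes five-decimal-digit targets tractable.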

  4. Multiplexed phase-space imaging for 3D fluorescence microscopy.

    PubMed

    Liu, Hsiou-Yuan; Zhong, Jingshan; Waller, Laura

    2017-06-26

    Optical phase-space functions describe spatial and angular information simultaneously; examples of optical phase-space functions include light fields in ray optics and Wigner functions in wave optics. Measurement of phase-space enables digital refocusing, aberration removal and 3D reconstruction. High-resolution capture of 4D phase-space datasets is, however, challenging. Previous scanning approaches are slow, light inefficient and do not achieve diffraction-limited resolution. Here, we propose a multiplexed method that solves these problems. We use a spatial light modulator (SLM) in the pupil plane of a microscope in order to sequentially pattern multiplexed coded apertures while capturing images in real space. Then, we reconstruct the 3D fluorescence distribution of our sample by solving an inverse problem via regularized least squares with a proximal accelerated gradient descent solver. We experimentally reconstruct a 101 Megavoxel 3D volume (1010×510×500µm with NA 0.4), demonstrating improved acquisition time, light throughput and resolution compared to scanning aperture methods. Our flexible patterning scheme further allows sparsity in the sample to be exploited for reduced data capture.
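    Regularized least squares with a proximal accelerated gradient solver is, in its generic form, the FISTA iteration. The sketch below applies it to a small synthetic sparse-recovery problem; the problem sizes and regularization weight are illustrative assumptions, not the parameters of the imaging system described above:

```python
import numpy as np

def fista_l1(A, b, lam, n_iter=300):
    """Minimize 0.5*||A x - b||^2 + lam*||x||_1 by accelerated proximal
    gradient descent (FISTA): gradient step on the smooth term, then a
    soft-threshold prox for the l1 term, with Nesterov momentum."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        g = A.T @ (A @ y - b)                  # gradient of 0.5*||A y - b||^2
        z = y - g / L
        x_new = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # prox step
        t_new = 0.5 * (1 + np.sqrt(1 + 4 * t * t))
        y = x_new + ((t - 1) / t_new) * (x_new - x)                # momentum
        x, t = x_new, t_new
    return x

# Synthetic underdetermined problem with a sparse ground truth.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 80))
x_true = np.zeros(80)
x_true[[3, 17, 42]] = [1.0, -2.0, 1.5]
b = A @ x_true
x_hat = fista_l1(A, b, lam=0.05)
```

    The same solver structure applies when `A` is the coded-aperture forward model and `x` the 3D fluorescence volume; only the operator and regularizer change.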

  5. Large-scale computations in fluid mechanics; Proceedings of the Fifteenth Summer Seminar on Applied Mathematics, University of California, La Jolla, CA, June 27-July 8, 1983. Parts 1 & 2

    NASA Technical Reports Server (NTRS)

    Engquist, B. E. (Editor); Osher, S. (Editor); Somerville, R. C. J. (Editor)

    1985-01-01

    Papers are presented on such topics as the use of semi-Lagrangian advective schemes in meteorological modeling; computation with high-resolution upwind schemes for hyperbolic equations; dynamics of flame propagation in a turbulent field; a modified finite element method for solving the incompressible Navier-Stokes equations; computational fusion magnetohydrodynamics; and a nonoscillatory shock capturing scheme using flux-limited dissipation. Consideration is also given to the use of spectral techniques in numerical weather prediction; numerical methods for the incorporation of mountains in atmospheric models; techniques for the numerical simulation of large-scale eddies in geophysical fluid dynamics; high-resolution TVD schemes using flux limiters; upwind-difference methods for aerodynamic problems governed by the Euler equations; and an MHD model of the earth's magnetosphere.

  6. Image super-resolution via adaptive filtering and regularization

    NASA Astrophysics Data System (ADS)

    Ren, Jingbo; Wu, Hao; Dong, Weisheng; Shi, Guangming

    2014-11-01

    Image super-resolution (SR) is widely used in civil and military fields, especially for low-resolution remote sensing images limited by the sensor. Single-image SR refers to the task of restoring a high-resolution (HR) image from the low-resolution image coupled with some prior knowledge as a regularization term. One classic approach regularizes the image by total variation (TV), wavelets, or some other transform, which can introduce artifacts. To overcome these shortcomings, a new framework for single-image SR is proposed that applies an adaptive filter before regularization. The key to our model is that the adaptive filter first removes the spatial correlation among pixels, and then only the high-frequency (HF) part, which is sparser in the TV and transform domains, is considered in the regularization term. Concretely, by transforming the original model, the SR problem can be solved via two alternating sub-problems. Before each iteration, the adaptive filter is updated to estimate the initial HF. A high-quality HF part and HR image are obtained by solving the first and second sub-problems, respectively. In the experimental section, a set of remote sensing images captured by Landsat satellites is used to demonstrate the effectiveness of the proposed framework. Experimental results show the outstanding performance of the proposed method in quantitative evaluation and visual fidelity compared with state-of-the-art methods.

  7. CrowdPhase: crowdsourcing the phase problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jorda, Julien; Sawaya, Michael R.; Yeates, Todd O., E-mail: yeates@mbi.ucla.edu

    The idea of attacking the phase problem by crowdsourcing is introduced. Using an interactive, multi-player, web-based system, participants work simultaneously to select phase sets that correspond to better electron-density maps in order to solve low-resolution phasing problems. The human mind innately excels at some complex tasks that are difficult to solve using computers alone. For complex problems amenable to parallelization, strategies can be developed to exploit human intelligence in a collective form: such approaches are sometimes referred to as ‘crowdsourcing’. Here, a first attempt at a crowdsourced approach for low-resolution ab initio phasing in macromolecular crystallography is proposed. A collaborative online game named CrowdPhase was designed, which relies on a human-powered genetic algorithm, where players control the selection mechanism during the evolutionary process. The algorithm starts from a population of ‘individuals’, each with a random genetic makeup, in this case a map prepared from a random set of phases, and tries to cause the population to evolve towards individuals with better phases based on Darwinian survival of the fittest. Players apply their pattern-recognition capabilities to evaluate the electron-density maps generated from these sets of phases and to select the fittest individuals. A user-friendly interface, a training stage and a competitive scoring system foster a network of well-trained players who can guide the genetic algorithm towards better solutions from generation to generation via gameplay. CrowdPhase was applied to two synthetic low-resolution phasing puzzles and it was shown that players could successfully obtain phase sets in the 30° phase error range and corresponding molecular envelopes showing agreement with the low-resolution models. The successful preliminary studies suggest that with further development the crowdsourcing approach could fill a gap in current crystallographic methods by making it possible to extract meaningful information in cases where limited resolution might otherwise prevent initial phasing.

  8. Language, arithmetic word problems, and deaf students: Linguistic strategies used to solve tasks

    NASA Astrophysics Data System (ADS)

    Zevenbergen, Robyn; Hyde, Merv; Power, Des

    2001-12-01

    There has been limited examination of the intersection between language and arithmetic in the performance of deaf students, although some previous research has shown that deaf and hearing-impaired students are delayed in both their language acquisition and arithmetic performance. This paper examines the performance of deaf and hearing-impaired students in South-East Queensland, Australia, in solving arithmetic word problems. It was found that the subjects' solutions of word problems confirmed trends for hearing students, but that their performance was delayed in comparison. The results confirm other studies where deaf and hearing-impaired students are delayed in their language acquisition and this impacts on their capacity to successfully undertake the resolution of word problems.

  9. Resolution of VTI anisotropy with elastic full-waveform inversion: theory and basic numerical examples

    NASA Astrophysics Data System (ADS)

    Podgornova, O.; Leaney, S.; Liang, L.

    2018-07-01

    Extracting medium properties from seismic data faces some limitations due to the finite frequency content of the data and restricted spatial positions of the sources and receivers. Some distributions of the medium properties have little or no impact on the data. If these properties are used as the inversion parameters, then the inverse problem becomes overparametrized, leading to ambiguous results. We present an analysis of multiparameter resolution for the linearized inverse problem in the framework of elastic full-waveform inversion. We show that the spatial and multiparameter sensitivities are intertwined and non-sensitive properties are spatial distributions of some non-trivial combinations of the conventional elastic parameters. The analysis accounts for the Hessian information and frequency content of the data; it is semi-analytical (in some scenarios analytical), easy to interpret and enhances results of the widely used radiation pattern analysis. Single-type scattering is shown to have limited sensitivity, even for full-aperture data. Finite-frequency data lose multiparameter sensitivity at smooth and fine spatial scales. Also, we establish ways to quantify a spatial-multiparameter coupling and demonstrate that the theoretical predictions agree well with the numerical results.

  10. Extended-range high-resolution dynamical downscaling over a continental-scale spatial domain with atmospheric and surface nudging

    NASA Astrophysics Data System (ADS)

    Husain, S. Z.; Separovic, L.; Yu, W.; Fernig, D.

    2014-12-01

    Extended-range high-resolution mesoscale simulations with limited-area atmospheric models when applied to downscale regional analysis fields over large spatial domains can provide valuable information for many applications including the weather-dependent renewable energy industry. Long-term simulations over a continental-scale spatial domain, however, require mechanisms to control the large-scale deviations in the high-resolution simulated fields from the coarse-resolution driving fields. As enforcement of the lateral boundary conditions is insufficient to restrict such deviations, large scales in the simulated high-resolution meteorological fields are therefore spectrally nudged toward the driving fields. Different spectral nudging approaches, including the appropriate nudging length scales as well as the vertical profiles and temporal relaxations for nudging, have been investigated to propose an optimal nudging strategy. Impacts of time-varying nudging and generation of hourly analysis estimates are explored to circumvent problems arising from the coarse temporal resolution of the regional analysis fields. Although controlling the evolution of the atmospheric large scales generally improves the outputs of high-resolution mesoscale simulations within the surface layer, the prognostically evolving surface fields can nevertheless deviate from their expected values leading to significant inaccuracies in the predicted surface layer meteorology. A forcing strategy based on grid nudging of the different surface fields, including surface temperature, soil moisture, and snow conditions, toward their expected values obtained from a high-resolution offline surface scheme is therefore proposed to limit any considerable deviation. 
Finally, wind speed and temperature at wind turbine hub height predicted by different spectrally nudged extended-range simulations are compared against observations to demonstrate possible improvements achievable using higher spatiotemporal resolution.
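    The spectral nudging idea above — relaxing only the large scales of the simulated field toward the driving field — can be sketched in one dimension with an FFT low-pass filter. The cutoff wavenumber `k_cut` and relaxation time `tau` stand in for the nudging length scale and temporal relaxation discussed in the abstract (a minimal sketch, not the model's actual implementation):

```python
import numpy as np

def spectral_nudge(field, driving, k_cut, dt, tau):
    """Relax wavenumbers |k| <= k_cut of `field` toward `driving`,
    leaving the small scales free to evolve."""
    diff_hat = np.fft.fft(driving - field)
    k = np.fft.fftfreq(field.size, d=1.0 / field.size)   # integer wavenumbers
    diff_hat[np.abs(k) > k_cut] = 0.0                    # keep large scales only
    return field + (dt / tau) * np.real(np.fft.ifft(diff_hat))

n = 128
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
driving = np.sin(x)                                # large-scale driving field
field = 0.5 * np.sin(x) + 0.3 * np.sin(20 * x)     # drifted large scale + fine detail
for _ in range(50):
    field = spectral_nudge(field, driving, k_cut=4, dt=1.0, tau=10.0)
```

    After repeated steps the k=1 component converges to the driving field while the k=20 detail is untouched, which is exactly the control over large-scale deviations the abstract describes.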

  11. A Cell-Centered Multigrid Algorithm for All Grid Sizes

    NASA Technical Reports Server (NTRS)

    Gjesdal, Thor

    1996-01-01

    Multigrid methods are optimal; that is, their rate of convergence is independent of the number of grid points, because they use a nested sequence of coarse grids to represent different scales of the solution. This nesting does, however, usually lead to certain restrictions on the permissible size of the discretised problem. In cases where the modeler is free to specify the whole problem, such constraints are of little importance because they can be taken into consideration from the outset. We consider the situation in which there are other competing constraints on the resolution. These restrictions may stem from the physical problem (e.g., if the discretised operator contains experimental data measured on a fixed grid) or from the need to avoid limitations set by the hardware. In this paper we discuss a modification to the cell-centered multigrid algorithm so that it can be used for problems with any resolution. We discuss in particular a coarsening strategy and choice of intergrid transfer operators that can handle grids with either an even or an odd number of cells. The method is described and applied to linear equations obtained by discretization of two- and three-dimensional second-order elliptic PDEs.
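    A minimal 1-D sketch of cell-centered coarsening that tolerates an odd number of cells (illustrative operators, not necessarily the paper's exact choices): pairs of cells are averaged, and an unpaired trailing cell is carried to the coarse grid on its own, so the grid hierarchy exists for any cell count.

```python
import numpy as np

def coarse_size(n):
    """Pairs of cells merge; an odd trailing cell becomes its own coarse cell."""
    return (n + 1) // 2

def restrict(v):
    """Cell-centered restriction: average each pair; copy an unpaired last cell."""
    n = v.size
    vc = np.empty(coarse_size(n))
    vc[: n // 2] = 0.5 * (v[0 : 2 * (n // 2) : 2] + v[1 : 2 * (n // 2) : 2])
    if n % 2:
        vc[-1] = v[-1]
    return vc

def prolong(vc, n):
    """Piecewise-constant prolongation back to n cells."""
    v = np.empty(n)
    v[0 : 2 * (n // 2) : 2] = vc[: n // 2]
    v[1 : 2 * (n // 2) : 2] = vc[: n // 2]
    if n % 2:
        v[-1] = vc[-1]
    return v
```

    With this rule a 7-cell grid coarsens as 7 → 4 → 2 → 1, and constants are preserved by restriction followed by prolongation — the basic consistency property intergrid transfers need.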

  12. Collimator application for microchannel plate image intensifier resolution improvement

    DOEpatents

    Thomas, S.W.

    1996-02-27

    A collimator is included in a microchannel plate image intensifier (MCPI). Collimators can be useful in improving resolution of MCPIs by eliminating the scattered electron problem and by limiting the transverse energy of electrons reaching the screen. Due to its optical absorption, a collimator will also increase the extinction ratio of an intensifier by approximately an order of magnitude. Additionally, the smooth surface of the collimator will permit a higher focusing field to be employed in the MCP-to-collimator region than is currently permitted in the MCP-to-screen region by the relatively rough and fragile aluminum layer covering the screen. Coating the MCP and collimator surfaces with aluminum oxide appears to permit additional significant increases in the field strength, resulting in better resolution. 2 figs.

  13. Design of UAV high resolution image transmission system

    NASA Astrophysics Data System (ADS)

    Gao, Qiang; Ji, Ming; Pang, Lan; Jiang, Wen-tao; Fan, Pengcheng; Zhang, Xingcheng

    2017-02-01

    To address the bandwidth limitation of image transmission systems on UAVs, a scheme incorporating image compression technology for mini UAVs is proposed, based on the requirements of high-definition UAV image transmission systems. The H.264 video coding standard and its key technologies were analyzed and studied for UAV video communication. Building on this study of high-resolution image encoding/decoding techniques and wireless transmission methods, a high-resolution image transmission system was designed on an Android architecture with a video codec chip. The constructed system was verified experimentally in the laboratory: the bit rate could be controlled easily, QoS was stable, and the low latency met most application requirements, for military as well as industrial use.

  14. Optimization of Designs for Nanotube-based Scanning Probes

    NASA Technical Reports Server (NTRS)

    Harik, V. M.; Gates, T. S.; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    Optimization of designs for nanotube-based scanning probes, which may be used for high-resolution characterization of nanostructured materials, is examined. Continuum models to analyze the nanotube deformations are proposed to help guide selection of the optimum probe. The limitations on the use of these models that must be accounted for before applying to any design problem are presented. These limitations stem from the underlying assumptions and the expected range of nanotube loading, end conditions, and geometry. Once the limitations are accounted for, the key model parameters along with the appropriate classification of nanotube structures may serve as a basis for the design optimization of nanotube-based probe tips.

  15. Pan-sharpening via compressed superresolution reconstruction and multidictionary learning

    NASA Astrophysics Data System (ADS)

    Shi, Cheng; Liu, Fang; Li, Lingling; Jiao, Licheng; Hao, Hongxia; Shang, Ronghua; Li, Yangyang

    2018-01-01

    In recent compressed sensing (CS)-based pan-sharpening algorithms, pan-sharpening performance is affected by two key problems. One is that there are always errors between the high-resolution panchromatic (HRP) image and the linearly weighted high-resolution multispectral (HRM) image, resulting in loss of spatial and spectral information. The other is that the dictionary construction process depends on non-truth training samples. These problems have limited the application of CS-based pan-sharpening algorithms. To solve these two problems, we propose a pan-sharpening algorithm via compressed superresolution reconstruction and multidictionary learning. Through a two-stage implementation, the compressed superresolution reconstruction model effectively reduces the error between the HRP and the linearly weighted HRM images. Meanwhile, a multidictionary combining ridgelets and curvelets is learned for both stages of the superresolution reconstruction process. Since ridgelets and curvelets better capture structural and directional characteristics, a better reconstruction result can be obtained. Experiments are conducted on QuickBird and IKONOS satellite images. The results indicate that the proposed algorithm is competitive with recent CS-based pan-sharpening methods and other well-known methods.

  16. Mapping atomic motions with ultrabright electrons: towards fundamental limits in space-time resolution.

    PubMed

    Manz, Stephanie; Casandruc, Albert; Zhang, Dongfang; Zhong, Yinpeng; Loch, Rolf A; Marx, Alexander; Hasegawa, Taisuke; Liu, Lai Chung; Bayesteh, Shima; Delsim-Hashemi, Hossein; Hoffmann, Matthias; Felber, Matthias; Hachmann, Max; Mayet, Frank; Hirscht, Julian; Keskin, Sercan; Hada, Masaki; Epp, Sascha W; Flöttmann, Klaus; Miller, R J Dwayne

    2015-01-01

    The long held objective of directly observing atomic motions during the defining moments of chemistry has been achieved based on ultrabright electron sources that have given rise to a new field of atomically resolved structural dynamics. This class of experiments requires not only simultaneous sub-atomic spatial resolution with temporal resolution on the 100 femtosecond time scale but also has brightness requirements approaching single shot atomic resolution conditions. The brightness condition is in recognition that chemistry leads generally to irreversible changes in structure during the experimental conditions and that the nanoscale thin samples needed for electron structural probes pose upper limits to the available sample or "film" for atomic movies. Even in the case of reversible systems, the degree of excitation and thermal effects require the brightest sources possible for a given space-time resolution to observe the structural changes above background. Further progress in the field, particularly to the study of biological systems and solution reaction chemistry, requires increased brightness and spatial coherence, as well as an ability to tune the electron scattering cross-section to meet sample constraints. The electron bunch density or intensity depends directly on the magnitude of the extraction field for photoemitted electron sources and electron energy distribution in the transverse and longitudinal planes of electron propagation. This work examines the fundamental limits to optimizing these parameters based on relativistic electron sources using re-bunching cavity concepts that are now capable of achieving 10 femtosecond time scale resolution to capture the fastest nuclear motions. This analysis is given for both diffraction and real space imaging of structural dynamics in which there are several orders of magnitude higher space-time resolution with diffraction methods. 
The first experimental results from the Relativistic Electron Gun for Atomic Exploration (REGAE) are given that show the significantly reduced multiple electron scattering problem in this regime, which opens up micron scale systems, notably solution phase chemistry, to atomically resolved structural dynamics.

  17. Spectral analysis of views interpolated by chroma subpixel downsampling for 3D autostereoscopic displays

    NASA Astrophysics Data System (ADS)

    Marson, Avishai; Stern, Adrian

    2015-05-01

    One of the main limitations of horizontal parallax autostereoscopic displays is the horizontal resolution loss due to the need to repartition the pixels of the display panel among the multiple views. Recently we have shown that this problem can be alleviated by applying a color sub-pixel rendering technique. Interpolated views are generated by down-sampling the panel pixels at the sub-pixel level, thus increasing the number of views. The method takes advantage of the human eye's lower acuity for chromatic resolution. Here we supply further support for the technique by analyzing the spectra of the subsampled images.

  18. High resolution particle tracking method by suppressing the wavefront aberrations

    NASA Astrophysics Data System (ADS)

    Chang, Xinyu; Yang, Yuan; Kou, Li; Jin, Lei; Lu, Junsheng; Hu, Xiaodong

    2018-01-01

    Digital in-line holographic microscopy is one of the most efficient methods for particle tracking, as it can precisely measure the axial position of particles. However, imaging systems are often limited by detector noise, image distortions, and human operator misjudgment, making the particles hard to locate. A general method is used to solve this problem. The normalized holograms of particles were reconstructed to the pupil plane and then fit to a linear superposition of Zernike polynomial functions to suppress the aberrations. Validation experiments were carried out, and the results show that nanometer-scale resolution was achieved even when the holograms were poorly recorded.
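    The pupil-plane fitting step described above reduces to a linear least-squares problem. Below is a minimal sketch (not the authors' implementation), assuming a sampled pupil-phase map and using only a handful of low-order Zernike modes; `zernike_basis` and `fit_aberrations` are hypothetical helper names.

```python
import numpy as np

def zernike_basis(n_modes, grid_size):
    """Build a few low-order Zernike modes (piston, tilts, defocus,
    astigmatism) sampled on the unit disk. Illustrative subset only."""
    y, x = np.mgrid[-1:1:grid_size * 1j, -1:1:grid_size * 1j]
    r, t = np.hypot(x, y), np.arctan2(y, x)
    mask = r <= 1.0
    modes = [np.ones_like(r),                 # piston
             2 * r * np.cos(t),               # tilt x
             2 * r * np.sin(t),               # tilt y
             np.sqrt(3) * (2 * r**2 - 1),     # defocus
             np.sqrt(6) * r**2 * np.cos(2 * t)]  # astigmatism
    basis = np.stack([m[mask] for m in modes[:n_modes]], axis=1)
    return basis, mask

def fit_aberrations(phase, n_modes=5):
    """Least-squares fit of a pupil-plane phase map to Zernike modes;
    returns the modal coefficients and the aberration-suppressed residual."""
    basis, mask = zernike_basis(n_modes, phase.shape[0])
    coeffs, *_ = np.linalg.lstsq(basis, phase[mask], rcond=None)
    residual = phase[mask] - basis @ coeffs
    return coeffs, residual
```

    Subtracting the fitted low-order modes from the measured phase leaves the residual used for particle localization; a real implementation would fit many more modes.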

  19. Planetary investigation utilizing an imaging spectrometer system based upon charge injection technology

    NASA Technical Reports Server (NTRS)

    Wattson, R. B.; Harvey, P.; Swift, R.

    1975-01-01

    An intrinsic silicon charge injection device (CID) television sensor array has been used in conjunction with a CaMoO4 colinear tunable acousto-optic filter, a 61-inch reflector, a sophisticated computer system, and a digital color TV scan converter/computer to produce near-IR images of Saturn and Jupiter with 10 Å spectral resolution and approximately 3 arcsec spatial resolution. The CID camera successfully obtained digitized 100 x 100 array images with 5 minutes of exposure time and slow-scanned readout to a computer. Details of the equipment setup, innovations, problems, experience, data, and final equipment performance limits are given.

  20. High resolution telescope including an array of elemental telescopes aligned along a common axis and supported on a space frame with a pivot at its geometric center

    DOEpatents

    Norbert, M.A.; Yale, O.

    1992-04-28

    A large effective-aperture, low-cost optical telescope with diffraction-limited resolution enables ground-based observation of near-earth space objects. The telescope has a non-redundant, thinned-aperture array in a center-mount, single-structure space frame. It employs speckle interferometric imaging to achieve diffraction-limited resolution. The signal-to-noise ratio problem is mitigated by moving the wavelength of operation to the near-IR, and the image is sensed by a silicon CCD. The steerable, single-structure array presents a constant pupil. The center-mount, radar-like mount enables low-earth-orbit space objects to be tracked and also increases the stiffness of the space frame. In the preferred embodiment, the array has elemental telescopes with subapertures of 2.1 m in a circle-of-nine configuration. The telescope array has an effective aperture of 12 m, which provides a diffraction-limited resolution of 0.02 arc seconds. Pathlength matching of the telescope array is maintained by an electro-optical system employing laser metrology. Speckle imaging relaxes the pathlength-matching tolerance by one order of magnitude compared to phased arrays. Many features of the telescope contribute to substantial reductions in cost, including elimination of the conventional protective dome and reduced on-site construction. The cost of the telescope scales with the first power of the aperture rather than its third power as in conventional telescopes. 15 figs.

  1. High resolution telescope including an array of elemental telescopes aligned along a common axis and supported on a space frame with a pivot at its geometric center

    DOEpatents

    Norbert, Massie A.; Yale, Oster

    1992-01-01

    A large effective-aperture, low-cost optical telescope with diffraction-limited resolution enables ground-based observation of near-earth space objects. The telescope has a non-redundant, thinned-aperture array in a center-mount, single-structure space frame. It employs speckle interferometric imaging to achieve diffraction-limited resolution. The signal-to-noise ratio problem is mitigated by moving the wavelength of operation to the near-IR, and the image is sensed by a silicon CCD. The steerable, single-structure array presents a constant pupil. The center-mount, radar-like mount enables low-earth-orbit space objects to be tracked and also increases the stiffness of the space frame. In the preferred embodiment, the array has elemental telescopes with subapertures of 2.1 m in a circle-of-nine configuration. The telescope array has an effective aperture of 12 m, which provides a diffraction-limited resolution of 0.02 arc seconds. Pathlength matching of the telescope array is maintained by an electro-optical system employing laser metrology. Speckle imaging relaxes the pathlength-matching tolerance by one order of magnitude compared to phased arrays. Many features of the telescope contribute to substantial reductions in cost, including elimination of the conventional protective dome and reduced on-site construction. The cost of the telescope scales with the first power of the aperture rather than its third power as in conventional telescopes.
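    The quoted 0.02 arc second figure is consistent with the Rayleigh criterion, θ ≈ 1.22 λ/D. A quick sanity check, assuming a near-IR operating wavelength of about 1 μm (the patent specifies near-IR operation but not this exact wavelength):

```python
import math

def rayleigh_limit_arcsec(wavelength_m, aperture_m):
    """Rayleigh diffraction limit, theta = 1.22 * lambda / D, in arcsec."""
    theta_rad = 1.22 * wavelength_m / aperture_m
    return math.degrees(theta_rad) * 3600.0

# Near-IR at ~1 micron on the 12 m effective aperture (wavelength assumed):
print(round(rayleigh_limit_arcsec(1.0e-6, 12.0), 3))  # -> 0.021
```

    The result, roughly 0.021 arcsec, matches the resolution claimed for the 12 m effective aperture.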

  2. Optical aperture synthesis with electronically connected telescopes

    PubMed Central

    Dravins, Dainis; Lagadec, Tiphaine; Nuñez, Paul D.

    2015-01-01

    Highest resolution imaging in astronomy is achieved by interferometry, connecting telescopes over increasingly longer distances and at successively shorter wavelengths. Here, we present the first diffraction-limited images in visual light, produced by an array of independent optical telescopes, connected electronically only, with no optical links between them. With an array of small telescopes, second-order optical coherence of the sources is measured through intensity interferometry over 180 baselines between pairs of telescopes, and two-dimensional images reconstructed. The technique aims at diffraction-limited optical aperture synthesis over kilometre-long baselines to reach resolutions showing details on stellar surfaces and perhaps even the silhouettes of transiting exoplanets. Intensity interferometry circumvents problems of atmospheric turbulence that constrain ordinary interferometry. Since the electronic signal can be copied, many baselines can be built up between dispersed telescopes, and over long distances. Using arrays of air Cherenkov telescopes, this should enable the optical equivalent of interferometric arrays currently operating at radio wavelengths. PMID:25880705

  3. The Johns Hopkins RTR Consortium: A Collaborative Approach to Advance Translational Science and Standardize Clinical Monitoring of Restorative Transplantation

    DTIC Science & Technology

    2016-10-01

    Keywords: cell-based therapy, large animal models, allograft, hand transplantation, face transplantation, vascularized composite allotransplantation, immunoregulation, tolerance, rejection, ischemia reperfusion. (Only report-form fragments of this record were captured; the abstract itself is unavailable.)

  4. On the limits of the hadronic energy resolution of calorimeters

    NASA Astrophysics Data System (ADS)

    Lee, Sehwook; Livan, Michele; Wigmans, Richard

    2018-02-01

    In particle physics experiments, the quality of calorimetric particle detection is typically considerably worse for hadrons than for electromagnetic showers. In this paper, we investigate the root causes of this problem and evaluate two different methods that have been exploited to remedy this situation: compensation and dual readout. It turns out that the latter approach is more promising, as evidenced by experimental results.

  5. New Directions in the Digital Signal Processing of Image Data.

    DTIC Science & Technology

    1987-05-01

    Keywords: object detection and identification; restoration of photon-noise-limited imagery; image restoration from incomplete information; restoration of blurred images in additive and multiplicative noise; motion analysis with fast hierarchical algorithms; analysis at different resolutions. As is well known, the solution to the matched filter problem under additive white noise conditions is the correlation receiver.

  6. Approach to characterization of the higher order structure of disulfide-containing proteins using hydrogen/deuterium exchange and top-down mass spectrometry.

    PubMed

    Wang, Guanbo; Kaltashov, Igor A

    2014-08-05

    Top-down hydrogen/deuterium exchange (HDX) with mass spectrometric (MS) detection has recently matured to become a potent biophysical tool capable of providing valuable information on higher order structure and conformational dynamics of proteins at an unprecedented level of structural detail. However, the scope of the proteins amenable to analysis by top-down HDX MS still remains limited, with protein size and the presence of disulfide bonds being the two most important limiting factors. While the limitations imposed by the physical size of the proteins gradually become more relaxed as the sensitivity, resolution, and dynamic range of modern MS instrumentation continue to improve at an ever-accelerating pace, the presence of disulfide linkages remains a much less forgiving limitation even for proteins of relatively modest size. To circumvent this problem, we introduce an online chemical reduction step following completion and quenching of the HDX reactions and prior to the top-down MS measurements of deuterium occupancy of individual backbone amides. Application of the new methodology to the top-down HDX MS characterization of a small (99-residue) disulfide-containing protein, β2-microglobulin, allowed the backbone amide protection to be probed with nearly single-residue resolution across the entire sequence. The high-resolution backbone protection pattern deduced from the top-down HDX MS measurements carried out under native conditions is in excellent agreement with the crystal structure of the protein and high-resolution NMR data, suggesting that introduction of the chemical reduction step to the top-down routine does not trigger hydrogen scrambling either during the electrospray ionization process or in the gas phase prior to protein ion dissociation.

  7. A Shadowing Problem in the Detection of Overlapping Communities: Lifting the Resolution Limit through a Cascading Procedure

    PubMed Central

    Young, Jean-Gabriel; Allard, Antoine; Hébert-Dufresne, Laurent; Dubé, Louis J.

    2015-01-01

    Community detection is the process of assigning nodes and links to significant communities (e.g., clusters, functional modules), and its development has led to a better understanding of complex networks. When applied to sizable networks, we argue that most detection algorithms correctly identify prominent communities, but fail to do so across multiple scales. As a result, a significant fraction of the network is left uncharted. We show that this problem stems from larger or denser communities overshadowing smaller or sparser ones, and that this effect accounts for most of the undetected communities and unassigned links. We propose a generic cascading approach to community detection that circumvents the problem. Using real and artificial network datasets with three widely used community detection algorithms, we show how a simple cascading procedure allows for the detection of the missing communities. This work highlights a new detection limit of community structure, and we hope that our approach can inspire better community detection algorithms. PMID:26461919
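    The cascading idea, detect the prominent communities, strip their internal links so they no longer overshadow smaller ones, then re-run the detector on what remains, can be sketched generically. This is an illustrative sketch, not the authors' algorithm; it plugs in connected components as a toy stand-in for a real community detector, and `cascade_detect` is a hypothetical name.

```python
def connected_components(adj):
    """Toy stand-in for a real community detector: returns the connected
    components of a graph given as {node: set(neighbors)}."""
    seen, comps = set(), []
    for start in adj:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(adj[u] - comp)
        seen |= comp
        comps.append(comp)
    return comps

def cascade_detect(adj, detect=connected_components, min_size=3, max_rounds=10):
    """Cascading detection: keep communities of size >= min_size, remove
    their internal links, then re-run the detector on the remaining graph
    so smaller or sparser communities are no longer overshadowed."""
    adj = {u: set(vs) for u, vs in adj.items()}   # work on a copy
    found = []
    for _ in range(max_rounds):
        comps = [c for c in detect(adj) if len(c) >= min_size]
        if not comps:
            break
        found.extend(comps)
        for comp in comps:             # strip internal links of detected communities
            for u in comp:
                adj[u] -= comp
    return found
```

    With a real detector (modularity-based, clique percolation, etc.) substituted for the stand-in, each cascade round exposes structure that the previous round's dominant communities had masked.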

  8. Whole-cell imaging of the budding yeast Saccharomyces cerevisiae by high-voltage scanning transmission electron tomography.

    PubMed

    Murata, Kazuyoshi; Esaki, Masatoshi; Ogura, Teru; Arai, Shigeo; Yamamoto, Yuta; Tanaka, Nobuo

    2014-11-01

    Electron tomography using a high-voltage electron microscope (HVEM) provides three-dimensional information about cellular components in sections thicker than 1 μm, although in bright-field mode image degradation caused by multiple inelastic scattering of transmitted electrons limits the attainable resolution. Scanning transmission electron microscopy (STEM) is believed to give enhanced contrast and resolution compared to conventional transmission electron microscopy (CTEM). Samples up to 1 μm in thickness have been analyzed with an intermediate-voltage electron microscope because inelastic scattering is not a critical limitation, and probe broadening can be minimized. Here, we employed STEM at 1 MeV to extend the useful specimen thickness for electron tomography, demonstrated by a seamless tomographic reconstruction of a whole budding Saccharomyces cerevisiae yeast cell, ~3 μm in thickness. High-voltage STEM tomography, especially in bright-field mode, demonstrated sufficiently enhanced contrast and intensity, compared to CTEM tomography, to permit segmentation of major organelles in the whole cell. STEM imaging also reduced specimen shrinkage during tilt-series acquisition. The fidelity of structural preservation was limited by cytoplasmic extraction, and the spatial resolution was limited by the relatively large convergence angle of the scanning probe. However, the new technique has potential to solve longstanding problems of image blurring in biological specimens beyond 1 μm in thickness, and may facilitate new research in cellular structural biology.

  9. Highly reproducible laser beam scanning device for an internal source laser desorption microprobe Fourier transform mass spectrometer

    NASA Astrophysics Data System (ADS)

    Scott, Jill R.; Tremblay, Paul L.

    2002-03-01

    Traditionally, mass spectrometry has relied on manipulating the sample target to provide scanning capabilities for laser desorption microprobes. This has been problematic for an internal source laser desorption Fourier transform mass spectrometer (LD-FTMS) because of the high magnetic field (7 Tesla) and geometric constraints of the superconducting magnet bore. To overcome these limitations, we have implemented a unique external laser scanning mechanism for an internal source LD-FTMS. This mechanism provides adjustable resolution enhancement so that the spatial resolution at the target is not limited to that of the stepper motors at the light source (~5 μm/step). The spatial resolution is now limited by the practical optical diffraction limit of the final focusing lens. The scanning mechanism employs a virtual source that is wavelength independent up to the final focusing lens, which can be controlled remotely to account for focal length dependence on wavelength. A binary index provides an automatic alignment feature. The virtual source is located ~9 ft from the sample; therefore, it is completely outside the vacuum system and beyond the 50 G line of the fringing magnetic field. To eliminate reproducibility problems associated with vacuum pump vibrations, we have taken advantage of the magnetic field inherent to the FTMS and used Lenz's law for vibrational damping. The LD-FTMS microprobe has exceptional reproducibility, which enables successive mapping sequences for depth-profiling studies.

  10. River morphodynamics from space: the Landsat frontier

    NASA Astrophysics Data System (ADS)

    Schwenk, Jon; Khandelwal, Ankush; Fratkin, Mulu; Kumar, Vipin; Foufoula-Georgiou, Efi

    2017-04-01

    NASA's Landsat family of satellites have been observing the entire globe since 1984, providing over 30 years of snapshots with an 18 day frequency and 30 meter resolution. These publicly-available Landsat data are particularly exciting to researchers interested in river morphodynamics, who are often limited to use of historical maps, aerial photography, and field surveys with poor and irregular time resolutions and limited spatial extents. Landsat archives show potential for overcoming these limitations, but techniques and tools for accurately and efficiently mining the vault of scenes must first be developed. In this PICO presentation, we detail the problems we encountered while mapping and quantifying planform dynamics of over 1,300 km of the actively-migrating, meandering Ucayali River in Peru from Landsat imagery. We also present methods to overcome these obstacles and introduce the Matlab-based RivMAP (River Morphodynamics from Analysis of Planforms) toolbox that we developed to extract banklines and centerlines, compute widths, curvatures, and angles, identify cutoffs, and quantify planform changes via centerline migration and erosion/accretion over large spatial domains with high temporal resolution. Measurement uncertainties were estimated by analyzing immobile, abandoned oxbow lakes. Our results identify hotspots of planform changes, and combined with limited precipitation, stage, and topography data, we parse three simultaneous controls on river migration: climate, sediment, and meander cutoff. Overall, this study demonstrates the vast potential locked within Landsat archives to identify multi-scale controls on river migration, observe the co-evolution of width, curvature, discharge, and migration, and discover and develop new geomorphic insights.

  11. A Petaflops Era Computing Analysis

    NASA Technical Reports Server (NTRS)

    Preston, Frank S.

    1998-01-01

    This report covers a study of the potential for petaflops (10^15 floating point operations per second) computing. The study was performed in 1996 and should be considered the first step in an on-going effort. The analysis concludes that a petaflops system is technically feasible, but not achievable with today's state of the art. Since the computer arena is now a commodity business, most experts expect that a petaflops system will evolve from current technology in an evolutionary fashion. To meet the price expectations of users waiting for petaflops performance, great improvements in lowering component costs will be required. Lower power consumption is also a must. The present rate of progress in improved performance places the date of introduction of petaflops systems at about 2010. Several years before that date, chip feature sizes are projected to reach the currently known resolution limit. Aside from the economic problems and constraints, software is identified as the major problem. The tone of this initial study is more pessimistic than most of the published material available on petaflops systems. Workers in the field are expected to generate more data, which could serve as the basis for a more informed projection. This report includes an annotated bibliography.

  12. Assessment of a vertical high-resolution distributed-temperature-sensing system in a shallow thermohaline environment

    NASA Astrophysics Data System (ADS)

    Suárez, F.; Aravena, J. E.; Hausner, M. B.; Childress, A. E.; Tyler, S. W.

    2011-03-01

    In shallow thermohaline-driven lakes it is important to measure temperature on fine spatial and temporal scales to detect stratification or different hydrodynamic regimes. Raman-spectra distributed temperature sensing (DTS) is an approach available to provide high spatial and temporal temperature resolution. A vertical high-resolution DTS system was constructed to overcome the problems of typical methods used in the past, i.e., without disturbing the water column, and with resistance to corrosive environments. This paper describes a method to quantitatively assess the accuracy, precision, and other limitations of DTS systems so as to fully utilize the capacity of this technology, with a focus on vertical high resolution for measuring temperatures in shallow thermohaline environments. It also presents a new method to manually calibrate temperatures along the optical fiber, achieving significantly improved resolution. The vertical high-resolution DTS system is used to monitor the thermal behavior of a salt-gradient solar pond, an engineered shallow thermohaline system that allows collection and storage of solar energy over long periods. The system monitors the temperature profile every 1.1 cm vertically, with time averaging as short as 10 s. Temperature resolution as low as 0.035 °C is obtained when the data are collected at 5-min intervals.
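    Manual calibration of a DTS trace against fiber sections held at known reference temperatures typically reduces to fitting an offset and gain. A minimal sketch of that step, assuming a simple linear instrument response (the paper's procedure is more involved; real calibrations may also solve for differential attenuation along the fiber, and `calibrate_dts` is a hypothetical helper name):

```python
import numpy as np

def calibrate_dts(raw_temps, ref_sections, ref_temps):
    """Linear recalibration of a DTS temperature trace.
    raw_temps:    instrument-reported temperatures along the fiber (array),
    ref_sections: index slices of fiber held at known bath temperatures,
    ref_temps:    the corresponding known bath temperatures.
    Returns the gain/offset-corrected trace."""
    measured = np.array([raw_temps[s].mean() for s in ref_sections])
    known = np.asarray(ref_temps, dtype=float)
    gain, offset = np.polyfit(measured, known, 1)   # map measured -> known
    return gain * np.asarray(raw_temps) + offset
```

    With two reference sections (e.g., warm and cold calibration baths at the ends of the fiber), the fit is exact; more sections turn it into a least-squares fit.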

  13. Enhancing the isotropy of lateral resolution in coherent structured illumination microscopy

    PubMed Central

    Park, Joo Hyun; Lee, Jae Yong; Lee, Eun Seong

    2014-01-01

    We present a method to improve the isotropy of spatial resolution in structured illumination microscopy (SIM) implemented for imaging non-fluorescent samples. To alleviate the anisotropic resolution of the previous coherent SIM scheme, which employs two orthogonal standing-wave illuminations and is referred to as the orthogonal SIM, we introduce a hexagonal-lattice illumination that superimposes three standing-wave fields simultaneously at orientations equally dividing the lateral plane. A theoretical formulation is worked out rigorously for coherent image formation under such simultaneous multiple-beam illumination, and an explicit Fourier-domain framework is derived for reconstructing an image with enhanced resolution. Using a computer-synthesized resolution target as a 2D coherent sample, we perform numerical simulations to examine the imaging characteristics of our three-angle SIM compared with the orthogonal SIM. The investigation of the 2D resolving power with various test patterns of different periods and orientations reveals that the orientation-dependent undulation of lateral resolution can be reduced from 27% to 8% by using the three-angle SIM, while the best resolution (0.54 times the resolution limit of conventional coherent imaging) in the directions of structured illumination deteriorates only slightly, by 4.6%, relative to the orthogonal SIM. PMID:24940548

  14. How to Design a Spectrometer.

    PubMed

    Scheeline, Alexander

    2017-10-01

    Designing a spectrometer requires knowledge of the problem to be solved, knowledge of the molecules whose properties will contribute to a solution of that problem, and skill in many subfields of science and engineering. A seemingly simple problem, design of an ultraviolet, visible, and near-infrared spectrometer, is used to show the reasoning behind the trade-offs in instrument design. Rather than reporting a fully optimized instrument, the review describes the yin and yang of design choices leading to decisions about financial cost, materials choice, resolution, throughput, aperture, and layout. To limit scope, aspects such as grating blaze, electronics design, and light sources are not presented. The review illustrates the mixture of mathematical rigor, rule of thumb, esthetics, and availability of components that contribute to the art of spectrometer design.

  15. Pushing the plasmonic imaging nanolithography to nano-manufacturing

    NASA Astrophysics Data System (ADS)

    Gao, Ping; Li, Xiong; Zhao, Zeyu; Ma, Xiaoliang; Pu, Mingbo; Wang, Changtao; Luo, Xiangang

    2017-12-01

    Suffering from the so-called diffraction limit, the minimum resolution of conventional photolithography is limited to λ/2 or λ/4, where λ is the incident wavelength. The physical mechanism of this limit lies in the fact that the evanescent waves carrying subwavelength information about the object decay exponentially in a medium and cannot reach the image plane. Surface plasmons (SPs) are non-radiative electromagnetic waves that propagate along the interface between a metal and a dielectric and exhibit unique sub-diffraction optical characteristics. In recent years, benefiting from these features of SPs, researchers have proposed a variety of plasmonic lithography methods based on interference, imaging, and direct writing, and have demonstrated by theoretical simulation and experiment that sub-diffraction resolution can be achieved. Among the various plasmonic lithography modes, plasmonic imaging lithography appears particularly important for applications because of its compatibility with conventional lithography. Recent results show that the half pitch of a nanograting can be shrunk to 22 nm and even 16 nm. This paper gives an overview of research progress and representative achievements in plasmonic imaging lithography, the remaining problems, and the outlook for further development.
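    The exponential decay that motivates plasmonic lithography is easy to quantify: a pattern with pitch finer than the wavelength is carried by spatial frequencies whose normal wavevector component is imaginary. A small sketch, assuming free-space decay and an i-line (365 nm) source for concreteness (both assumptions; the paper does not specify these numbers):

```python
import math

def evanescent_decay_length(wavelength_nm, pitch_nm):
    """1/e decay depth of the evanescent wave carrying a grating pitch finer
    than the wavelength: kz = sqrt(kx^2 - k0^2), depth = 1/kz (in nm)."""
    k0 = 2 * math.pi / wavelength_nm
    kx = 2 * math.pi / pitch_nm      # fundamental spatial frequency of the pattern
    if kx <= k0:
        raise ValueError("pattern is propagating, not evanescent")
    return 1.0 / math.sqrt(kx**2 - k0**2)

# A 44 nm period (22 nm half pitch) probed at 365 nm:
print(round(evanescent_decay_length(365.0, 44.0), 1))  # -> 7.1
```

    A decay depth of only a few nanometers is why the subwavelength information never reaches a conventional image plane, and why SP-based near-field recovery is needed.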

  16. Visible-to-visible four-photon ultrahigh resolution microscopic imaging with 730-nm diode laser excited nanocrystals.

    PubMed

    Wang, Baoju; Zhan, Qiuqiang; Zhao, Yuxiang; Wu, Ruitao; Liu, Jing; He, Sailing

    2016-01-25

    Further development of multiphoton microscopic imaging is confronted with a number of limitations, including high cost, high complexity, and relatively low spatial resolution due to the long excitation wavelength. To overcome these problems, for the first time, we propose visible-to-visible four-photon ultrahigh-resolution microscopic imaging using a common, cost-effective 730-nm laser diode to excite Nd(3+)-sensitized upconversion nanoparticles (Nd(3+)-UCNPs). An ordinary multiphoton scanning microscope system was built using a visible CW diode laser, and a lateral imaging resolution as high as 161 nm was achieved via the four-photon upconversion process. The large saturation excitation power demonstrated for Nd(3+)-UCNPs makes four-photon imaging more practical in applications. A sample with fine structure was imaged to demonstrate the advantages of visible-to-visible four-photon ultrahigh-resolution microscopic imaging with 730-nm diode laser excited nanocrystals. Combined with the unique properties of UCNPs, the proposed visible-to-visible four-photon imaging should be highly promising and attractive in the field of multiphoton imaging.

  17. Multisensor Super Resolution Using Directionally-Adaptive Regularization for UAV Images

    PubMed Central

    Kang, Wonseok; Yu, Soohwan; Ko, Seungyong; Paik, Joonki

    2015-01-01

    In various unmanned aerial vehicle (UAV) imaging applications, the multisensor super-resolution (SR) technique has long been a challenging problem and has attracted increasing attention. Multisensor SR algorithms utilize multispectral low-resolution (LR) images to produce a higher-resolution (HR) image, improving the performance of the UAV imaging system. The primary objective of the paper is to develop a multisensor SR method based on the existing multispectral imaging framework instead of using additional sensors. To restore image details without noise amplification or unnatural post-processing artifacts, this paper presents an improved regularized SR algorithm combining directionally-adaptive constraints with a multiscale non-local means (NLM) filter. As a result, the proposed method can overcome the physical limitations of multispectral sensors by estimating the color HR image from a set of multispectral LR images using intensity-hue-saturation (IHS) image fusion. Experimental results show that the proposed method provides better SR results than existing state-of-the-art SR methods in terms of objective measures. PMID:26007744

  18. Multisensor Super Resolution Using Directionally-Adaptive Regularization for UAV Images.

    PubMed

    Kang, Wonseok; Yu, Soohwan; Ko, Seungyong; Paik, Joonki

    2015-05-22

    In various unmanned aerial vehicle (UAV) imaging applications, the multisensor super-resolution (SR) technique has long been a challenging problem and has attracted increasing attention. Multisensor SR algorithms utilize multispectral low-resolution (LR) images to produce a higher-resolution (HR) image, improving the performance of the UAV imaging system. The primary objective of the paper is to develop a multisensor SR method based on the existing multispectral imaging framework instead of using additional sensors. To restore image details without noise amplification or unnatural post-processing artifacts, this paper presents an improved regularized SR algorithm combining directionally-adaptive constraints with a multiscale non-local means (NLM) filter. As a result, the proposed method can overcome the physical limitations of multispectral sensors by estimating the color HR image from a set of multispectral LR images using intensity-hue-saturation (IHS) image fusion. Experimental results show that the proposed method provides better SR results than existing state-of-the-art SR methods in terms of objective measures.
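    Regularized reconstruction-based SR of the kind extended here solves a problem of the form min_x Σ_k ‖D S_k x − y_k‖² + λ‖Gx‖². The following 1-D toy shows that structure with plain Tikhonov (finite-difference) smoothing rather than the paper's directionally-adaptive constraints or NLM filtering; `sr_tikhonov` is an illustrative name, not the authors' code:

```python
import numpy as np

def sr_tikhonov(lr_stack, shifts, scale, lam=1e-3):
    """Toy 1-D reconstruction-based SR under pure (circular) translation:
    each LR frame is y_k = D S_k x, with S_k an integer HR-pixel shift and
    D box-average downsampling. Solves the Tikhonov-regularized normal
    equations exactly. Illustrative sketch only."""
    m = len(lr_stack[0])
    n = m * scale
    D = np.zeros((m, n))
    for i in range(m):
        D[i, i * scale:(i + 1) * scale] = 1.0 / scale    # box averaging
    G = np.eye(n) - np.roll(np.eye(n), 1, axis=1)        # finite differences
    A = lam * (G.T @ G)
    b = np.zeros(n)
    for y, s in zip(lr_stack, shifts):
        Sk = np.roll(np.eye(n), -s, axis=1)              # shift by s HR pixels
        M = D @ Sk
        A += M.T @ M
        b += M.T @ np.asarray(y)
    return np.linalg.solve(A, b)
```

    With a full set of sub-pixel shifts the system is well posed and the HR signal is recovered almost exactly; the regularizer is what keeps the inversion stable when shifts are missing or noisy, and it is exactly this term that the paper replaces with directionally-adaptive constraints.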

  19. Achievable Performance and Effective Interrogator Design for SAW RFID Sensor Tags

    NASA Technical Reports Server (NTRS)

    Barton, Richard J.

    2011-01-01

    For many NASA missions, remote sensing is a critical application that supports activities such as environmental monitoring, planetary science, structural shape and health monitoring, non-destructive evaluation, etc. The utility of the remote sensing devices themselves is greatly increased if they are passive, that is, if they do not require any on-board power supply such as batteries, and if they can be identified uniquely during the sensor interrogation process. Additional passive sensor characteristics that enable greater utilization in space applications are small size and weight, long read ranges with low interrogator power, ruggedness, and operability in extreme environments (vacuum, extreme high/low temperature, high radiation, etc.). In this paper, we consider one very promising passive sensor technology, surface acoustic wave (SAW) radio-frequency identification (RFID), that satisfies all of these criteria. Although SAW RFID tags have great potential for use in numerous space-based remote sensing applications, the limited collision resolution capability of current-generation tags limits performance in a cluttered sensing environment. That is, as more SAW-based sensors are added to the environment, numerous tag responses are superimposed at the receiver and decoding all or even a subset of the telemetry becomes increasingly difficult. Background clutter generated by reflectors other than the sensors themselves is also a problem, as are multipath interference and signal distortion, but the limiting factor in many remote sensing applications can be expected to be tag mutual interference. This problem may be greatly mitigated by proper design of the SAW tag waveform, but that remains an open research problem, and in the meantime several related questions remain to be answered, including: What are the fundamental relationships between tag parameters such as bit-rate, time-bandwidth product, SNR, and achievable collision resolution?
What are the differences in optimal or near-optimal interrogator designs between noise-limited environments and interference-limited environments? What are the performance characteristics of different interrogator designs in term of parameters such as transmitter power level, range, and number of interfering tags? In this paper, we present the results of a research effort aimed at providing at least partial answers to all of these questions.

  20. Review of surface particulate monitoring of dust events using geostationary satellite remote sensing

    NASA Astrophysics Data System (ADS)

    Sowden, M.; Mueller, U.; Blake, D.

    2018-06-01

    Accurate measurement of natural and anthropogenic aerosol particulate matter (PM) is important for managing both environmental and health risks; however, limited monitoring in regional areas hinders accurate quantification. This article provides an overview of the ability of recently launched geostationary earth orbit (GEO) satellites, such as GOES-R (North America) and HIMAWARI (Asia and Oceania), to provide near real-time ground-level PM concentrations (GLCs). The review examines the literature relating to the spatial and temporal resolution required by air quality studies, the removal of cloud and surface effects, the aerosol inversion problem, and the computation of ground-level concentrations rather than columnar aerosol optical depth (AOD). Determining surface PM concentrations using remote sensing is complicated by the need to differentiate intrinsic aerosol properties (size, shape, composition, and quantity) from extrinsic signal intensities, particularly as the number of unknown intrinsic parameters exceeds the number of known extrinsic measurements. The review confirms that the development of GEO satellite products has led to improvements in the use of coupled products such as GEOS-CHEM, that aerosol types have consolidated on model species rather than prior descriptive classifications, and that forward radiative transfer models have led to a better understanding of predicted spectral interdependencies across different aerosol types, despite fewer wavelength bands. However, it is apparent that the aerosol inversion problem remains challenging because there are limited wavelength bands for characterising localised mineralogy. The review finds that the frequency of GEO satellite data exceeds the temporal resolution required for air quality studies, but the spatial resolution is too coarse for localised air quality studies. Continual monitoring necessitates using the less sensitive thermal infra-red bands, which also reduce surface absorption effects. 
However, given the challenges of the aerosol inversion problem and difficulties in converting columnar AOD to surface concentrations, the review identifies coupled GEO-neural networks as potentially the most viable option for improving quantification.

  1. Multifocal interferometric synthetic aperture microscopy

    PubMed Central

    Xu, Yang; Chng, Xiong Kai Benjamin; Adie, Steven G.; Boppart, Stephen A.; Scott Carney, P.

    2014-01-01

    There is an inherent trade-off between transverse resolution and depth of field (DOF) in optical coherence tomography (OCT) which becomes a limiting factor for certain applications. Multifocal OCT and interferometric synthetic aperture microscopy (ISAM) each provide a distinct solution to the trade-off through modification to the experiment or via post-processing, respectively. In this paper, we have solved the inverse problem of multifocal OCT and present a general algorithm for combining multiple ISAM datasets. Multifocal ISAM (MISAM) uses a regularized combination of the resampled datasets to bring advantages of both multifocal OCT and ISAM to achieve optimal transverse resolution, extended effective DOF and improved signal-to-noise ratio. We present theory, simulation and experimental results. PMID:24977909

  2. Snorkelling between the stars: submarine methods for astronomical observations.

    NASA Astrophysics Data System (ADS)

    Velasco, S.; Quevedo, E.; Font, J.; Oscoz, A.; López, R. L.; Puga, M.; Rebolo, R.; Hernández Brito, J.; Llinas, O.; Marrero Callico, G.; Sarmiento, R.

    2017-03-01

    Trying to reach diffraction-limited astronomical observations from ground-based telescopes is very challenging because atmospheric effects contribute to a general blurring of the images. However, astronomy is not the only science facing turbulence problems; obtaining quality images of the undersea world is as ambitious as it is in the sky. One of the solutions contemplated to reach high-resolution images is the use of multiple frames of the same target, known as fusion super-resolution (Quevedo et al. 2015), which is the principle behind Lucky Imaging (Velasco et al. 2016). Here we present the successful result of joining efforts between undersea and astronomical research at the Canary Islands.

  3. Compton imaging tomography technique for NDE of large nonuniform structures

    NASA Astrophysics Data System (ADS)

    Grubsky, Victor; Romanov, Volodymyr; Patton, Ned; Jannson, Tomasz

    2011-09-01

    In this paper we describe a new nondestructive evaluation (NDE) technique called Compton Imaging Tomography (CIT) for reconstructing the complete three-dimensional internal structure of an object, based on the registration of multiple two-dimensional Compton-scattered x-ray images of the object. CIT provides high resolution and sensitivity with virtually any material, including lightweight structures and organics, which normally pose problems in conventional x-ray computed tomography because of low contrast. The CIT technique requires only one-sided access to the object, has no limitation on the object's size, and can be applied to high-resolution real-time in situ NDE of large aircraft/spacecraft structures and components. Theoretical and experimental results will be presented.

  4. Wave optics theory and 3-D deconvolution for the light field microscope

    PubMed Central

    Broxton, Michael; Grosenick, Logan; Yang, Samuel; Cohen, Noy; Andalman, Aaron; Deisseroth, Karl; Levoy, Marc

    2013-01-01

    Light field microscopy is a new technique for high-speed volumetric imaging of weakly scattering or fluorescent specimens. It employs an array of microlenses to trade off spatial resolution against angular resolution, thereby allowing a 4-D light field to be captured using a single photographic exposure without the need for scanning. The recorded light field can then be used to computationally reconstruct a full volume. In this paper, we present an optical model for light field microscopy based on wave optics, instead of previously reported ray optics models. We also present a 3-D deconvolution method for light field microscopy that is able to reconstruct volumes at higher spatial resolution, and with better optical sectioning, than previously reported. To accomplish this, we take advantage of the dense spatio-angular sampling provided by a microlens array at axial positions away from the native object plane. This dense sampling permits us to decode aliasing present in the light field to reconstruct high-frequency information. We formulate our method as an inverse problem for reconstructing the 3-D volume, which we solve using a GPU-accelerated iterative algorithm. Theoretical limits on the depth-dependent lateral resolution of the reconstructed volumes are derived. We show that these limits are in good agreement with experimental results on a standard USAF 1951 resolution target. Finally, we present 3-D reconstructions of pollen grains that demonstrate the improvements in fidelity made possible by our method. PMID:24150383
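
    The volume reconstruction described above is posed as an inverse problem solved with an iterative algorithm. As a hedged illustration only (not the authors' actual solver or its wave-optics forward model), a minimal 1-D Richardson-Lucy deconvolution in NumPy conveys the general shape of such a multiplicative iterative update:

```python
import numpy as np

def richardson_lucy(measured, psf, n_iter=50):
    """Minimal 1-D Richardson-Lucy deconvolution: repeatedly compare the
    forward-projected estimate with the measurement and apply a
    multiplicative correction."""
    psf = psf / psf.sum()
    psf_flipped = psf[::-1]                       # adjoint of the blur operator
    estimate = np.full_like(measured, measured.mean())
    for _ in range(n_iter):
        forward = np.convolve(estimate, psf, mode="same")   # simulate imaging
        ratio = measured / np.maximum(forward, 1e-12)       # data mismatch
        estimate *= np.convolve(ratio, psf_flipped, mode="same")
    return estimate

# Blur two point sources with a Gaussian PSF, then recover them.
truth = np.zeros(64)
truth[20], truth[30] = 1.0, 0.5
psf = np.exp(-0.5 * (np.arange(-8, 9) / 2.0) ** 2)
blurred = np.convolve(truth, psf / psf.sum(), mode="same")
restored = richardson_lucy(blurred, psf)   # peaks sharpen back toward 20 and 30
```

    The actual method operates on 3-D volumes with a depth-dependent light-field point spread function and GPU acceleration; this sketch only shows the iteration structure.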

  5. Resolving embarrassing medical conditions with online health information.

    PubMed

    Redston, Sarah; de Botte, Sharon; Smith, Carl

    2018-06-01

    Reliance on online health information is proliferating and the Internet has the potential to revolutionize the provision of public health information. The anonymity of online health information may be particularly appealing to people seeking advice on 'embarrassing' health problems. The purpose of this study was to investigate (1) whether data generated by the embarrassingproblems.com health information site showed any temporal patterns in problem resolution, and (2) whether successful resolution of a medical problem using online information varied with the type of medical problem. We analyzed the responses of visitors to the embarrassingproblems.com website on the resolution of their problems. The dataset comprised 100,561 responses to information provided on 77 different embarrassing problems grouped into 9 classes of medical problem over an 82-month period. Data were analyzed with a Bernoulli Generalized Linear Model using Bayesian inference. We detected a statistically important interaction between embarrassing problem type and the time period in which data were collected, with an improvement in problem resolution over time for all of the classes of medical problem on the website but with a lower rate of increase in resolution for urinary health problems and medical problems associated with the mouth and face. As far as we are aware, this is the first analysis of data of this nature. Findings support the growing recognition that online health information can contribute to the resolution of embarrassing medical problems, but demonstrate that outcomes may vary with medical problem type. The results indicate that building data collection into online information provision can help to refine and focus health information for online users.

  6. Reliability of Fault Tolerant Control Systems. Part 2

    NASA Technical Reports Server (NTRS)

    Wu, N. Eva

    2000-01-01

    This paper reports Part II of a two-part effort intended to delineate the relationship between reliability and fault-tolerant control in a quantitative manner. Reliability properties peculiar to fault-tolerant control systems are emphasized, such as the presence of analytic redundancy in high proportion, the dependence of failures on control performance, and the high risks associated with decisions in redundancy management due to multiple sources of uncertainty and sometimes large processing requirements. As a consequence, coverage of failures through redundancy management can be severely limited. The paper proposes to formulate the fault-tolerant control problem as an optimization problem that maximizes coverage of failures through redundancy management. Coverage modeling is attempted in a way that captures its dependence on the control performance and on the diagnostic resolution. Under the proposed redundancy management policy, it is shown that enhanced overall system reliability can be achieved with a control law of superior robustness, an estimator of higher resolution, and a control performance requirement of lesser stringency.

  7. Dynamical Scaling Relations and the Angular Momentum Problem in the FIRE Simulations

    NASA Astrophysics Data System (ADS)

    Schmitz, Denise; Hopkins, Philip F.; Quataert, Eliot; Keres, Dusan; Faucher-Giguere, Claude-Andre

    2015-01-01

    Simulations are an extremely important tool with which to study galaxy formation and evolution. However, even state-of-the-art simulations still fail to accurately predict important galaxy properties such as star formation rates and dynamical scaling relations. One possible explanation is the inadequacy of sub-grid models to capture the range of stellar feedback mechanisms which operate below the resolution limit of simulations. FIRE (Feedback in Realistic Environments) is a set of high-resolution cosmological galaxy simulations run using the code GIZMO. It includes more realistic models for various types of feedback including radiation pressure, supernovae, stellar winds, and photoionization and photoelectric heating. Recent FIRE results have demonstrated good agreement with the observed stellar mass-halo mass relation as well as more realistic star formation histories than previous simulations. We investigate the effects of FIRE's improved feedback prescriptions on the simulation "angular momentum problem," i.e., whether FIRE can reproduce observed scaling relations between galaxy stellar mass and rotational/dispersion velocities.

  8. Hybrid parallelization of the XTOR-2F code for the simulation of two-fluid MHD instabilities in tokamaks

    NASA Astrophysics Data System (ADS)

    Marx, Alain; Lütjens, Hinrich

    2017-03-01

    A hybrid MPI/OpenMP parallel version of the XTOR-2F code [Lütjens and Luciani, J. Comput. Phys. 229 (2010) 8130], solving the two-fluid MHD equations in full tokamak geometry by means of an iterative Newton-Krylov matrix-free method, has been developed. The present work shows that the code has been parallelized significantly despite the numerical profile of the problem solved by XTOR-2F, i.e. a discretization with pseudo-spectral representations in all angular directions, the stiffness of the two-fluid stability problem in tokamaks, and the use of a direct LU decomposition to invert the physical pre-conditioner at every Krylov iteration of the solver. The execution time of the parallelized version is an order of magnitude smaller than that of the sequential one for low-resolution cases, with increasing speedup as the discretization mesh is refined. Moreover, it allows simulations to be performed at higher resolutions, previously out of reach because of memory limitations.

  9. Estimating the resolution limit of the map equation in community detection

    NASA Astrophysics Data System (ADS)

    Kawamoto, Tatsuro; Rosvall, Martin

    2015-01-01

    A community detection algorithm is considered to have a resolution limit if the scale of the smallest modules that can be resolved depends on the size of the analyzed subnetwork. The resolution limit is known to prevent some community detection algorithms from accurately identifying the modular structure of a network. In fact, any global objective function for measuring the quality of a two-level assignment of nodes into modules must have some sort of resolution limit or an external resolution parameter. However, it is yet unknown how the resolution limit affects the so-called map equation, which is known to be an efficient objective function for community detection. We derive an analytical estimate and conclude that the resolution limit of the map equation is set by the total number of links between modules instead of the total number of links in the full network as for modularity. This mechanism makes the resolution limit much less restrictive for the map equation than for modularity; in practice, it is orders of magnitudes smaller. Furthermore, we argue that the effect of the resolution limit often results from shoehorning multilevel modular structures into two-level descriptions. As we show, the hierarchical map equation effectively eliminates the resolution limit for networks with nested multilevel modular structures.
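
    The contrast with modularity can be made concrete using the standard ring-of-cliques construction (a textbook example, not taken from this record): once the ring contains enough cliques relative to the total number of links, plain modularity scores the partition that merges adjacent cliques higher than the natural one-module-per-clique partition.

```python
def ring_of_cliques_modularity(n_cliques, clique_size, merge_pairs=False):
    """Modularity Q = sum_i (l_i / L - (d_i / 2L)^2) for a ring of cliques
    joined into a cycle by single inter-clique edges (computed in closed
    form; no explicit graph is needed)."""
    l_c = clique_size * (clique_size - 1) // 2   # edges inside one clique
    d_c = 2 * l_c + 2                            # total degree of one clique
    L = n_cliques * (l_c + 1)                    # all edges in the network
    if merge_pairs:                              # modules = pairs of cliques
        modules, l_m, d_m = n_cliques // 2, 2 * l_c + 1, 2 * d_c
    else:                                        # modules = single cliques
        modules, l_m, d_m = n_cliques, l_c, d_c
    return modules * (l_m / L - (d_m / (2 * L)) ** 2)

q_natural = ring_of_cliques_modularity(40, 5)                    # ~0.884
q_merged = ring_of_cliques_modularity(40, 5, merge_pairs=True)   # ~0.905
print(q_merged > q_natural)  # True: modularity hits its resolution limit
```

    The map equation, by contrast, ties its limit to the links between modules rather than the links in the full network, which is why it resolves such cliques at much larger network sizes.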

  10. A new high resolution permafrost map of Iceland from Earth Observation data

    NASA Astrophysics Data System (ADS)

    Barnie, Talfan; Conway, Susan; Balme, Matt; Graham, Alastair

    2017-04-01

    High resolution maps of permafrost are required for ongoing monitoring of environmental change and the resulting hazards to ecosystems, people and infrastructure. However, permafrost maps are difficult to construct: direct observations require maintaining networks of sensors and boreholes in harsh environments and are thus limited in extent in space and time, and indirect observations require models or assumptions relating the measurements (e.g. weather station air temperature, basal snow temperature) to ground temperature. Operationally produced Land Surface Temperature (LST) maps from Earth Observation data can be used to make spatially contiguous estimates of mean annual skin temperature, which has been used as a proxy for the presence of permafrost. However, these maps are subject to biases due to (i) selective sampling during the day owing to limited satellite overpass times, (ii) selective sampling over the year due to seasonally varying cloud cover, (iii) sampling of LST only during clear-sky conditions, (iv) errors in cloud masking, (v) errors in temperature-emissivity separation, and (vi) smoothing over spatial variability. In this study we attempt to compensate for some of these problems using a Bayesian modelling approach and high-resolution topography-based downscaling.

  11. Preparation of wholemount mouse intestine for high-resolution three-dimensional imaging using two-photon microscopy.

    PubMed

    Appleton, P L; Quyn, A J; Swift, S; Näthke, I

    2009-05-01

    Visualizing overall tissue architecture in three dimensions is fundamental for validating and integrating biochemical, cell biological and visual data from less complex systems such as cultured cells. Here, we describe a method to generate high-resolution three-dimensional image data of intact mouse gut tissue. Regions of highest interest lie between 50 and 200 μm within this tissue. The quality and usefulness of three-dimensional image data of tissue with such depth is limited owing to problems associated with scattered light, photobleaching and spherical aberration. Furthermore, the highest-quality oil-immersion lenses are designed to work at a maximum distance of

  12. Progress in the Development of CdZnTe Unipolar Detectors for Different Anode Geometries and Data Corrections

    PubMed Central

    Zhang, Qiushi; Zhang, Congzhe; Lu, Yanye; Yang, Kun; Ren, Qiushi

    2013-01-01

    CdZnTe detectors have been under development for the past two decades, providing good stopping power for gamma rays, lightweight camera heads and improved energy resolution. However, the performance of this type of detector is limited primarily by incomplete charge collection resulting from charge carrier trapping. This paper is a review of progress in the development of CdZnTe unipolar detectors, together with data correction techniques for improving detector performance. We first briefly review the relevant theories. Thereafter, two classes of techniques for overcoming the hole-trapping issue are summarized: irradiation direction configuration and pulse shape correction methods. CdZnTe detectors of different geometries are discussed in detail, covering the principles of electrode geometry design, design and performance characteristics, the development of detector prototypes, and special correction techniques to improve energy resolution. Finally, the state-of-the-art development of 3-D position sensing and Compton imaging techniques is also discussed. The spectroscopic performance of CdZnTe semiconductor detectors can be greatly improved, even approaching the statistical limit on energy resolution, by combining some of these techniques. PMID:23429509

  13. Network community-detection enhancement by proper weighting

    NASA Astrophysics Data System (ADS)

    Khadivi, Alireza; Ajdari Rad, Ali; Hasler, Martin

    2011-04-01

    In this paper, we show how proper assignment of weights to the edges of a complex network can enhance the detection of communities and how it can circumvent the resolution limit and the extreme degeneracy problems associated with modularity. Our general weighting scheme takes advantage of graph-theoretic measures and introduces two heuristics for tuning its parameters. We use this weighting as a preprocessing step for the greedy modularity optimization algorithm of Newman to improve its performance. The results of experiments with our approach on computer-generated and real-world networks confirm that the proposed approach not only mitigates the problems of modularity but also improves the modularity optimization.

  14. High definition clouds and precipitation for climate prediction -results from a unified German research initiative on high resolution modeling and observations

    NASA Astrophysics Data System (ADS)

    Rauser, F.

    2013-12-01

    We present results from the German BMBF initiative 'High Definition Clouds and Precipitation for advancing Climate Prediction - HD(CP)2'. This initiative addresses most of the problems discussed in this session in one unified approach: cloud physics, convection, boundary-layer development, radiation and subgrid variability are approached in one organizational framework. HD(CP)2 merges the observation and high-performance computing / model development communities to tackle a shared problem: how to improve the understanding of the most important subgrid-scale processes of cloud and precipitation physics, and how to utilize this knowledge for improved climate predictions. HD(CP)2 is a coordinated initiative to (i) realize, (ii) evaluate, and (iii) statistically characterize and exploit, for the purpose of both parameterization development and cloud/precipitation feedback analysis, ultra-high-resolution (100 m in the horizontal, 10-50 m in the vertical) regional hind-casts over time periods (3-15 y) and spatial scales (1000-1500 km) that are climatically meaningful. HD(CP)2 thus consists of three elements (the model development and simulations, their observational evaluation, and exploitation/synthesis to advance cloud and precipitation prediction), and its first three-year phase started on 1 October 2012. As a central part of HD(CP)2, the HD(CP)2 Observational Prototype Experiment (HOPE) was carried out in spring 2013. In this campaign, high-resolution measurements with a multitude of instruments from all major centers in Germany were carried out in a limited domain, allowing unprecedented resolution and precision in the observation of microphysics parameters at a resolution that will allow evaluation and improvement of ultra-high-resolution models. 
    At the same time, a local-area version of ICON, the new climate model of the Max Planck Institute and the German weather service, has been developed that allows LES-type simulations at high resolutions on limited domains. The advantage of modifying an existing, evolving climate model is to share insights from high-resolution runs directly with the large-scale modelers and to allow for easy intercomparison and evaluation later on. In this presentation, we will give a short overview of HD(CP)2, show results from the observation campaign HOPE and from LES simulations of the same domain and conditions, and discuss how these will lead to an improved understanding and evaluation background for the efforts to improve fast physics in our climate model.

  15. Improving lateral resolution and image quality of optical coherence tomography by the multi-frame superresolution technique for 3D tissue imaging.

    PubMed

    Shen, Kai; Lu, Hui; Baig, Sarfaraz; Wang, Michael R

    2017-11-01

    The multi-frame superresolution technique is introduced to significantly improve the lateral resolution and image quality of spectral-domain optical coherence tomography (SD-OCT). Using several sets of low-resolution C-scan 3D images with lateral sub-spot-spacing shifts between sets, multi-frame superresolution processing of these sets at each depth layer reconstructs a lateral image of higher resolution and quality. Layer-by-layer processing yields an overall high-resolution, high-quality 3D image. In theory, the superresolution processing, including deconvolution, can jointly address the diffraction limit, lateral scan density, and background noise problems. In experiment, an improvement in lateral resolution by a factor of ~3, reaching 7.81 µm and 2.19 µm using sample-arm optics of 0.015 and 0.05 numerical aperture respectively, as well as a doubling of image quality, has been confirmed by imaging a known resolution test target. Improved lateral resolution on in vitro skin C-scan images has been demonstrated. For in vivo 3D SD-OCT imaging of human skin, fingerprints, and retinal layers, we used a multi-modal volume registration method to effectively estimate the lateral image shifts among different C-scans caused by random minor unintended body motion. Further processing of these images generated high-lateral-resolution 3D images as well as high-quality B-scan images of these in vivo tissues.
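
    The core multi-frame idea (several laterally shifted low-resolution scans jointly determine a finer grid) can be sketched in 1-D with a simple shift-and-add reconstruction. This is an idealization with known, exactly quarter-pixel shifts, whereas the paper estimates shifts via multi-modal volume registration and applies superresolution processing with deconvolution:

```python
import numpy as np

def shift_and_add(frames, shifts, factor):
    """Place each low-resolution sample at its shifted location on a grid
    `factor` times finer, averaging wherever samples overlap (1-D sketch)."""
    n = len(frames[0])
    hi = np.zeros(n * factor)
    weight = np.zeros(n * factor)
    for frame, shift in zip(frames, shifts):
        offset = int(round(shift * factor))   # shift is in low-res pixels
        idx = np.arange(n) * factor + offset
        np.add.at(hi, idx, frame)
        np.add.at(weight, idx, 1.0)
    return hi / np.maximum(weight, 1.0)

# Four low-res frames of one scene, shifted by quarter-pixel steps,
# together tile the fine grid and recover it exactly in this noiseless toy.
scene = np.sin(np.linspace(0.0, 2.0 * np.pi, 64))
shifts = [0.0, 0.25, 0.5, 0.75]
frames = [scene[np.arange(16) * 4 + int(round(s * 4))] for s in shifts]
recon = shift_and_add(frames, shifts, factor=4)   # recon matches scene
```

    With noisy frames and imperfectly estimated shifts, the averaging and deconvolution steps do the real work; this toy only illustrates why sub-spot-spacing shifts carry recoverable high-resolution information.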

  16. An evaluation of independent consumer assistance centers on problem resolution and user satisfaction: the consumer perspective.

    PubMed

    Nascimento, Lori Miller; Cousineau, Michael R

    2005-04-01

    Individuals who wish to receive independent assistance to resolve access to care health problems have limited options. The Health Consumer Alliance (HCA) is an independent, coordinated effort of nine legal services organizations that provide free assistance to low-income health consumers in 10 California counties. The need for the HCA stems from the vast number of health consumers with unanswered questions and unresolved problems relating to access to care issues, among both insured and uninsured populations. However, little is known about the effectiveness of independent consumer assistance centers. This paper examines the effectiveness of a network of independent consumer assistance programs in resolving consumer problems and consumers' level of satisfaction with services received. As the project evaluators, we conducted telephone surveys with 1,291 users of the HCA to assess if this independent program resolved consumer problems, and to measure the level of satisfaction among HCA users. Specifically, we asked questions about the HCA's influence on problem resolution, consumer satisfaction, health insurance status and use of preventive care services. From 1997 to 2001, more than 46,000 consumers contacted the seven health consumer centers (HCCs). According to our sample of respondents, results show that the HCCs are an important resource for low-income Californians trying to access health care. After contacting the HCCs, 62 percent of the participants report that their problems were resolved. In addition, 87 percent of the participants said the HCCs were helpful and 95 percent said they would be likely to contact the HCC again if necessary.

  17. q-Space Upsampling Using x-q Space Regularization.

    PubMed

    Chen, Geng; Dong, Bin; Zhang, Yong; Shen, Dinggang; Yap, Pew-Thian

    2017-09-01

    Acquisition time in diffusion MRI increases with the number of diffusion-weighted images that need to be acquired. Particularly in clinical settings, scan time is limited and only a sparse coverage of the vast q-space is possible. In this paper, we show how non-local self-similar information in the x-q space of diffusion MRI data can be harnessed for q-space upsampling. More specifically, we establish the relationships between signal measurements in x-q space using a patch matching mechanism that caters to unstructured data. We then encode these relationships in a graph and use it to regularize an inverse problem associated with recovering a high q-space resolution dataset from its low-resolution counterpart. Experimental results indicate that the high-resolution datasets reconstructed using the proposed method exhibit greater quality, both quantitatively and qualitatively, than those obtained using conventional methods, such as interpolation using spherical radial basis functions (SRBFs).
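
    The regularized recovery step described above can be sketched, under strong simplifying assumptions, as graph-Laplacian-regularized least squares. The path graph and the subsampling operator below are toy stand-ins for the paper's patch-matching graph in x-q space, not its actual construction:

```python
import numpy as np

def graph_regularized_recovery(A, b, laplacian, lam=0.05):
    """Solve min_x ||A x - b||^2 + lam * x^T L x via its normal equations
    (A^T A + lam * L) x = A^T b; the graph term favours solutions that
    vary smoothly across connected signals."""
    return np.linalg.solve(A.T @ A + lam * laplacian, A.T @ b)

# Toy problem: recover 8 values on a path graph from samples at 4 nodes.
n = 8
A = np.zeros((4, n))
A[np.arange(4), np.arange(0, n, 2)] = 1.0        # observe every other node

laplacian = np.diag([1.0] + [2.0] * (n - 2) + [1.0])
for i in range(n - 1):                           # path-graph edges i -- i+1
    laplacian[i, i + 1] = laplacian[i + 1, i] = -1.0

x_true = np.linspace(0.0, 1.0, n)                # smooth "high-resolution" signal
b = A @ x_true                                   # low-resolution measurements
x_hat = graph_regularized_recovery(A, b, laplacian)
# x_hat fills in the unobserved nodes close to the smooth ground truth
```

    The design point is that the graph, not a fixed spatial grid, decides which measurements should agree, which is what lets unstructured x-q neighbourhoods regularize the upsampling.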

  18. Display challenges resulting from the use of wide field of view imaging devices

    NASA Astrophysics Data System (ADS)

    Petty, Gregory J.; Fulton, Jack; Nicholson, Gail; Seals, Ean

    2012-06-01

    As focal plane array technologies advance and imagers increase in resolution, display technology must outpace the imaging improvements in order to adequately represent the complete data collection. Typical display devices tend to have an aspect ratio of 4:3 or 16:9; however, a breed of Wide Field of View (WFOV) imaging devices exists that skews from the norm, with aspect ratios as high as 5:1. This particular quality, when coupled with high spatial resolution, presents a unique challenge for display devices. Standard display devices must choose between resizing the image data to fit the display and displaying the image data in native resolution, truncating potentially important information. The problem compounds when considering the applications: WFOV high-situational-awareness imagers are sought for space-limited military vehicles. Tradeoffs between these options are assessed with respect to the image quality of the WFOV sensor.

  19. Resolution enhancement of robust Bayesian pre-stack inversion in the frequency domain

    NASA Astrophysics Data System (ADS)

    Yin, Xingyao; Li, Kun; Zong, Zhaoyun

    2016-10-01

    AVO/AVA (amplitude variation with offset or angle) inversion is one of the most practical and useful approaches to estimating model parameters. So far, publications on AVO inversion in the Fourier domain have been quite limited in view of its poor stability and sensitivity to noise compared with time-domain inversion. To improve the resolution and stability of AVO inversion in the Fourier domain, a novel robust Bayesian pre-stack AVO inversion based on the mixed-domain formulation of stationary convolution is proposed, which resolves the instability and achieves superior resolution. The Fourier operator is integrated into the objective equation, which avoids the inverse Fourier transform in our inversion process. Furthermore, background constraints on the model parameters are taken into consideration to improve the stability and reliability of the inversion, which compensates for the low-frequency components of seismic signals. In addition, the different frequency components of the seismic signal decouple automatically. This helps us solve the inverse problem by means of multi-component successive iterations, and the convergence precision of the inverse problem can be improved, so superior resolution compared with conventional time-domain pre-stack inversion is achieved easily. Synthetic tests illustrate that the proposed method achieves high-resolution results in close agreement with the theoretical model and verify its robustness to noise. Finally, application to a field data case demonstrates that the proposed method obtains stable inversion results for elastic parameters from pre-stack seismic data in conformity with the real logging data.

  20. Multiple Sensor Camera for Enhanced Video Capturing

    NASA Astrophysics Data System (ADS)

    Nagahara, Hajime; Kanki, Yoshinori; Iwai, Yoshio; Yachida, Masahiko

    Camera resolution has improved drastically in response to the demand for high-quality digital images; digital still cameras, for example, now have several megapixels. Although video cameras have higher frame rates, their resolution is lower than that of still cameras. Thus, high resolution and a high frame rate are incompatible in ordinary cameras on the market. It is difficult to solve this problem with a single sensor, since it stems from the physical limitation of the pixel transfer rate. In this paper, we propose a multi-sensor camera for capturing resolution- and frame-rate-enhanced video. A common multi-CCD camera, such as a 3CCD color camera, uses identical CCDs to capture different spectral information. Our approach is to use sensors of different spatio-temporal resolution in a single camera cabinet, capturing higher-resolution and higher-frame-rate information separately. We built a prototype camera which can capture high-resolution (2588×1958 pixels, 3.75 fps) and high-frame-rate (500×500 pixels, 90 fps) videos. We also propose a calibration method for the camera. As one application, we demonstrate an enhanced video (2128×1952 pixels, 90 fps) generated from the captured videos, showing the utility of the camera.

  1. ULTRA-SHARP solution of the Smith-Hutton problem

    NASA Technical Reports Server (NTRS)

    Leonard, B. P.; Mokhtari, Simin

    1992-01-01

    Highly convective scalar transport involving near-discontinuities and strong streamline curvature was addressed in a paper by Smith and Hutton in 1982, comparing several different convection schemes applied to a specially devised test problem. First order methods showed significant artificial diffusion, whereas higher order methods gave less smearing but had a tendency to overshoot and oscillate. Perhaps because unphysical oscillations are more obvious than unphysical smearing, the intervening period has seen a rise in popularity of low order artificially diffusive schemes, especially in the numerical heat transfer industry. The present paper describes an alternate strategy of using non-artificially diffusive high order methods, while maintaining strictly monotonic transitions through the use of simple flux limited constraints. Limited third order upwinding is usually found to be the most cost effective basic convection scheme. Tighter resolution of discontinuities can be obtained at little additional cost by using automatic adaptive stencil expansion to higher order in local regions, as needed.
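
    The flux-limited high-order idea can be sketched for 1D linear advection using the Koren limiter, which recovers third-order upwind-biased interpolation in smooth regions while enforcing monotonicity near discontinuities. This is a generic illustration, not the paper's ULTRA-SHARP scheme; the grid size and CFL number are arbitrary choices.

    ```python
    import numpy as np

    def koren_limiter(r):
        # Koren limiter: third-order upwind-biased in smooth regions,
        # clipped so the scheme stays monotone near discontinuities.
        return np.maximum(0.0,
                          np.minimum(np.minimum(2.0 * r, (1.0 + 2.0 * r) / 3.0),
                                     2.0))

    def advect(u, cfl, steps):
        """Periodic 1D linear advection (positive speed) with a limited
        high-order upwind flux and explicit Euler time stepping."""
        u = u.copy()
        for _ in range(steps):
            dm = u - np.roll(u, 1)                 # backward differences
            dp = np.roll(u, -1) - u                # forward differences
            denom = np.where(np.abs(dp) > 1e-14, dp, 1e-14)
            r = dm / denom                          # smoothness ratio
            face = u + 0.5 * koren_limiter(r) * dp  # limited right-face value
            u = u - cfl * (face - np.roll(face, 1))
        return u

    u0 = np.where(np.arange(100) < 50, 1.0, 0.0)    # step profile
    u1 = advect(u0, cfl=0.4, steps=50)
    ```

    Advecting a step this way keeps the solution bounded between the initial extremes (no overshoot or undershoot) while smearing the front far less than a first-order scheme would.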

  2. A validated non-linear Kelvin-Helmholtz benchmark for numerical hydrodynamics

    NASA Astrophysics Data System (ADS)

    Lecoanet, D.; McCourt, M.; Quataert, E.; Burns, K. J.; Vasil, G. M.; Oishi, J. S.; Brown, B. P.; Stone, J. M.; O'Leary, R. M.

    2016-02-01

    The non-linear evolution of the Kelvin-Helmholtz instability is a popular test for code verification. To date, most Kelvin-Helmholtz problems discussed in the literature are ill-posed: they do not converge to any single solution with increasing resolution. This precludes comparisons among different codes and severely limits the utility of the Kelvin-Helmholtz instability as a test problem. The lack of a reference solution has led various authors to assert the accuracy of their simulations based on ad hoc proxies, e.g. the existence of small-scale structures. This paper proposes well-posed two-dimensional Kelvin-Helmholtz problems with smooth initial conditions and explicit diffusion. We show that in many cases numerical errors/noise can seed spurious small-scale structure in Kelvin-Helmholtz problems. We demonstrate convergence to a reference solution using both ATHENA, a Godunov code, and DEDALUS, a pseudo-spectral code. Problems with constant initial density throughout the domain are relatively straightforward for both codes. However, problems with an initial density jump (which are the norm in astrophysical systems) exhibit rich behaviour and are more computationally challenging. In the latter case, ATHENA simulations are prone to an instability of the inner rolled-up vortex; this instability is seeded by grid-scale errors introduced by the algorithm, and disappears as resolution increases. Both ATHENA and DEDALUS exhibit late-time chaos. Inviscid simulations are riddled with extremely vigorous secondary instabilities which induce more mixing than simulations with explicit diffusion. Our results highlight the importance of running well-posed test problems with demonstrated convergence to a reference solution. To facilitate future comparisons, we include as supplementary material the resolved, converged solutions to the Kelvin-Helmholtz problems in this paper in machine-readable form.
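
    The kind of smooth, explicitly seeded initial conditions the paper advocates can be sketched as follows: two tanh shear layers (so the problem is periodic in the vertical), a sinusoidal velocity perturbation to seed the instability deterministically, and a density jump smoothed on the same scale as the shear. The parameter values here are illustrative, not the paper's.

    ```python
    import numpy as np

    def kh_initial_conditions(nx=128, nz=256, a=0.05, sigma=0.2,
                              u0=1.0, drho=1.0, amp=0.01):
        """Smooth Kelvin-Helmholtz initial conditions on [0,1) x [-1,1):
        tanh shear layers, a localized sinusoidal vz seed, and a density
        jump smoothed over the same width as the shear."""
        x = (np.arange(nx) + 0.5) / nx
        z = -1.0 + 2.0 * (np.arange(nz) + 0.5) / nz
        X, Z = np.meshgrid(x, z, indexing="ij")
        shear = np.tanh((Z + 0.5) / a) - np.tanh((Z - 0.5) / a)
        vx = u0 * (shear - 1.0)                    # velocity jumps smoothly
        rho = 1.0 + 0.5 * drho * shear             # smoothed density jump
        vz = amp * np.sin(2.0 * np.pi * X) * (
            np.exp(-((Z + 0.5) / sigma) ** 2) +
            np.exp(-((Z - 0.5) / sigma) ** 2))     # deterministic seed
        return rho, vx, vz

    rho, vx, vz = kh_initial_conditions()
    ```

    Because every field is smooth and the perturbation is explicit, the growing mode is set by the initial data rather than by grid-scale noise, which is the prerequisite for convergence to a single reference solution.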

  3. Electrical capacitance volume tomography with high contrast dielectrics using a cuboid sensor geometry

    NASA Astrophysics Data System (ADS)

    Nurge, Mark A.

    2007-05-01

    An electrical capacitance volume tomography system has been created for use with a new image reconstruction algorithm capable of imaging high contrast dielectric distributions. The electrode geometry consists of two 4 × 4 parallel planes of copper conductors connected through custom built switch electronics to a commercially available capacitance to digital converter. Typical electrical capacitance tomography (ECT) systems rely solely on mutual capacitance readings to reconstruct images of dielectric distributions. This paper presents a method of reconstructing images of high contrast dielectric materials using only the self-capacitance measurements. By constraining the unknown dielectric material to one of two values, the inverse problem is no longer ill-determined. Resolution becomes limited only by the accuracy and resolution of the measurement circuitry. Images were reconstructed using this method with both synthetic and real data acquired using an aluminium structure inserted at different positions within the sensing region. Comparisons with standard two-dimensional ECT systems highlight the capabilities and limitations of the electronics and reconstruction algorithm.
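
    Why the two-value constraint removes the ill-determinedness can be shown with a toy sketch (not the paper's reconstruction algorithm): once the unknowns are restricted to a binary set, far fewer readings than voxels still single out the true map. For tiny problems this can even be verified by exhaustive search.

    ```python
    import itertools
    import numpy as np

    def two_level_reconstruction(A, y, lo, hi):
        """Exhaustively search binary permittivity maps x in {lo, hi}^n and
        return the one whose predicted readings best match y = A x.
        Feasible only for tiny n, but it shows that the two-value
        constraint turns an underdetermined linear problem into one with
        a unique answer."""
        n = A.shape[1]
        best, best_err = None, np.inf
        for bits in itertools.product([lo, hi], repeat=n):
            x = np.array(bits)
            err = np.linalg.norm(A @ x - y)
            if err < best_err:
                best, best_err = x, err
        return best

    rng = np.random.default_rng(1)
    A = rng.standard_normal((6, 10))     # 6 readings, 10 voxels: ill-determined
    x_true = rng.choice([1.0, 3.0], size=10)
    y = A @ x_true
    x_est = two_level_reconstruction(A, y, 1.0, 3.0)
    ```

    An unconstrained least-squares solve of the same 6×10 system has infinitely many solutions; the binary constraint collapses that null space.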

  4. Electrical capacitance volume tomography of high contrast dielectrics using a cuboid geometry

    NASA Astrophysics Data System (ADS)

    Nurge, Mark A.

    An Electrical Capacitance Volume Tomography system has been created for use with a new image reconstruction algorithm capable of imaging high contrast dielectric distributions. The electrode geometry consists of two 4 x 4 parallel planes of copper conductors connected through custom built switch electronics to a commercially available capacitance to digital converter. Typical electrical capacitance tomography (ECT) systems rely solely on mutual capacitance readings to reconstruct images of dielectric distributions. This dissertation presents a method of reconstructing images of high contrast dielectric materials using only the self capacitance measurements. By constraining the unknown dielectric material to one of two values, the inverse problem is no longer ill-determined. Resolution becomes limited only by the accuracy and resolution of the measurement circuitry. Images were reconstructed using this method with both synthetic and real data acquired using an aluminum structure inserted at different positions within the sensing region. Comparisons with standard two dimensional ECT systems highlight the capabilities and limitations of the electronics and reconstruction algorithm.

  5. Compact holographic optical neural network system for real-time pattern recognition

    NASA Astrophysics Data System (ADS)

    Lu, Taiwei; Mintzer, David T.; Kostrzewski, Andrew A.; Lin, Freddie S.

    1996-08-01

    One of the important characteristics of artificial neural networks is their capability for massive interconnection and parallel processing. Recently, specialized electronic neural network processors and VLSI neural chips have been introduced in the commercial market. The number of parallel channels they can handle is limited because of the limited parallel interconnections that can be implemented with 1D electronic wires. High-resolution pattern recognition problems can require a large number of neurons for parallel processing of an image. This paper describes a holographic optical neural network (HONN) that is based on high-resolution volume holographic materials and is capable of performing massive 3D parallel interconnection of tens of thousands of neurons. A HONN with more than 16,000 neurons packaged in an attaché case has been developed. Rotation-, shift-, and scale-invariant pattern recognition operations have been demonstrated with this system. System parameters such as the signal-to-noise ratio, dynamic range, and processing speed are discussed.

  6. Immobilized polysaccharide derivatives: chiral packing materials for efficient HPLC resolution.

    PubMed

    Ikai, Tomoyuki; Yamamoto, Chiyo; Kamigaito, Masami; Okamoto, Yoshio

    2007-01-01

    Polysaccharide-based chiral packing materials (CPMs) for high-performance liquid chromatography have frequently been used not only to determine the enantiomeric excess of chiral compounds but also to preparatively resolve a wide range of racemates. However, these CPMs can be used with only a limited number of solvents as mobile phases because some organic solvents, such as tetrahydrofuran, chloroform, and so on, dissolve or swell the polysaccharide derivatives coated on a support, e.g., silica gel, and destroy their packed columns. The limitation of mobile phase selection is sometimes a serious problem for the efficient analytical and preparative resolution of enantiomers. This defect can be resolved by the immobilization of the polysaccharide derivatives onto silica gel. Efficient immobilizations have been attained through the radical copolymerization of the polysaccharide derivatives bearing small amounts of polymerizable residues and also through the polycondensation of the polysaccharide derivatives containing a few percent of 3-(triethoxysilyl)propyl residue.

  7. Metrology for the manufacturing of freeform optics

    NASA Astrophysics Data System (ADS)

    Blalock, Todd; Myer, Brian; Ferralli, Ian; Brunelle, Matt; Lynch, Tim

    2017-10-01

    Recently, freeform surfaces have become a practical option for optical designers. These non-symmetrical optical surfaces allow unique solutions to optical design problems. The implementation of freeform optical surfaces has been limited by manufacturing capability and quality, but over the past several years freeform fabrication processes have improved in both capability and precision. As with any manufacturing, proper metrology is required to monitor and verify the process, and typical optics metrology such as interferometry has challenges and limitations with the unique shapes of freeform optics. Two contact methods for freeform metrology are presented: a Leitz coordinate measurement machine (CMM) with an uncertainty of +/- 0.5 μm and a high-resolution profilometer (Panasonic UA3P) with a measurement uncertainty of +/- 0.05 μm. We are also developing a non-contact high-resolution technique based on fringe reflection, known as deflectometry. This fast non-contact metrology has the potential to match the accuracy of the contact methods while acquiring data in seconds rather than minutes or hours.

  8. Facing the phase problem in Coherent Diffractive Imaging via Memetic Algorithms.

    PubMed

    Colombo, Alessandro; Galli, Davide Emilio; De Caro, Liberato; Scattarella, Francesco; Carlino, Elvio

    2017-02-09

    Coherent Diffractive Imaging is a lensless technique that allows imaging of matter at a spatial resolution not limited by lens aberrations. This technique exploits the measured diffraction pattern of a coherent beam scattered by periodic and non-periodic objects to retrieve spatial information. The diffracted intensity, for weak-scattering objects, is proportional to the modulus of the Fourier transform of the object's scattering function. The phase information needed to retrieve the scattering function has to be recovered by means of suitable algorithms. Here we present a new approach to the phase problem based on a memetic algorithm, i.e. a hybrid genetic algorithm, which exploits the synergy of deterministic and stochastic optimization methods. The new approach has been tested on simulated data and applied to the phasing of transmission electron microscopy coherent electron diffraction data of a SrTiO3 sample. We have been able to quantitatively retrieve the projected atomic potential, and also image the oxygen columns, which are not directly visible in the corresponding high-resolution transmission electron microscopy images. Our approach proves to be a powerful new tool for the study of matter at atomic resolution and opens new perspectives in those applications in which effective phase retrieval is necessary.
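
    The deterministic ingredient in hybrid phase-retrieval schemes of this kind is typically an iterative projection method. Below is a minimal Fienup-style error-reduction sketch for a 1D signal with known support and positivity (not the authors' memetic algorithm); the signal length and support size are arbitrary.

    ```python
    import numpy as np

    def error_reduction(mag, support, iters=200, seed=0):
        """Alternate between enforcing the measured Fourier modulus and the
        known object-domain constraints (support and positivity)."""
        rng = np.random.default_rng(seed)
        x = rng.random(mag.size) * support           # random start on support
        for _ in range(iters):
            F = np.fft.fft(x)
            F = mag * np.exp(1j * np.angle(F))       # impose measured modulus
            x = np.fft.ifft(F).real
            x = np.where(support, np.maximum(x, 0.0), 0.0)  # object constraints
        return x

    n, k = 64, 12
    rng = np.random.default_rng(3)
    obj = np.zeros(n)
    obj[:k] = rng.random(k)                          # unknown object
    support = np.zeros(n, dtype=bool)
    support[:k] = True
    mag = np.abs(np.fft.fft(obj))                    # measured modulus only
    rec = error_reduction(mag, support)
    ```

    Error reduction is guaranteed not to increase the Fourier-modulus misfit from one iteration to the next, but it can stagnate in local minima; that is precisely the gap that stochastic components such as genetic operators are meant to close.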

  9. A High-Speed, Event-Driven, Active Pixel Sensor Readout for Photon-Counting Microchannel Plate Detectors

    NASA Technical Reports Server (NTRS)

    Kimble, Randy A.; Pain, Bedabrata; Norton, Timothy J.; Haas, J. Patrick; Oegerle, William R. (Technical Monitor)

    2002-01-01

    Silicon array readouts for microchannel plate intensifiers offer several attractive features. In this class of detector, the electron cloud output of the MCP intensifier is converted to visible light by a phosphor; that light is then fiber-optically coupled to the silicon array. In photon-counting mode, the resulting light splashes on the silicon array are recognized and centroided to fractional pixel accuracy by off-chip electronics. This process can result in very high (MCP-limited) spatial resolution while operating at a modest MCP gain (desirable for dynamic range and long term stability). The principal limitation of intensified CCD systems of this type is their severely limited local dynamic range, as accurate photon counting is achieved only if there are not overlapping event splashes within the frame time of the device. This problem can be ameliorated somewhat by processing events only in pre-selected windows of interest or by using an addressable charge injection device (CID) for the readout array. We are currently pursuing the development of an intriguing alternative readout concept based on using an event-driven CMOS Active Pixel Sensor. APS technology permits the incorporation of discriminator circuitry within each pixel. When coupled with suitable CMOS logic outside the array area, the discriminator circuitry can be used to trigger the readout of small sub-array windows only when and where an event splash has been detected, completely eliminating the local dynamic range problem, while achieving a high global count rate capability and maintaining high spatial resolution. We elaborate on this concept and present our progress toward implementing an event-driven APS readout.

  10. Principle, system, and applications of tip-enhanced Raman spectroscopy

    NASA Astrophysics Data System (ADS)

    Zhang, MingQian; Wang, Rui; Wu, XiaoBin; Wang, Jia

    2012-08-01

    Raman spectroscopy is a powerful technique for characterizing chemical information. However, this spectral method faces two obstacles in nano-material detection: diffraction-limited spatial resolution, and the inherently small Raman cross section with its correspondingly weak signal. To resolve these problems, a new approach has been developed, denoted tip-enhanced Raman spectroscopy (TERS). TERS is capable of high-resolution and high-sensitivity detection and has been demonstrated to be a promising spectroscopic and micro-topographic method for characterizing nano-materials and nanostructures. In this paper, the principle and experimental system of TERS are discussed. The latest applications of TERS in molecule detection, biological specimen identification, nano-material characterization, and semiconductor material determination are presented with specific experimental examples.

  11. Image processing for improved eye-tracking accuracy

    NASA Technical Reports Server (NTRS)

    Mulligan, J. B.; Watson, A. B. (Principal Investigator)

    1997-01-01

    Video cameras provide a simple, noninvasive method for monitoring a subject's eye movements. An important concept is that of the resolution of the system, which is the smallest eye movement that can be reliably detected. While hardware systems are available that estimate direction of gaze in real-time from a video image of the pupil, such systems must limit image processing to attain real-time performance and are limited to a resolution of about 10 arc minutes. Two ways to improve resolution are discussed. The first is to improve the image processing algorithms that are used to derive an estimate. Off-line analysis of the data can improve resolution by at least one order of magnitude for images of the pupil. A second avenue by which to improve resolution is to increase the optical gain of the imaging setup (i.e., the amount of image motion produced by a given eye rotation). Ophthalmoscopic imaging of retinal blood vessels provides increased optical gain and improved immunity to small head movements but requires a highly sensitive camera. The large number of images involved in a typical experiment imposes great demands on the storage, handling, and processing of data. A major bottleneck had been the real-time digitization and storage of large amounts of video imagery, but recent developments in video compression hardware have made this problem tractable at a reasonable cost. Images of both the retina and the pupil can be analyzed successfully using a basic toolbox of image-processing routines (filtering, correlation, thresholding, etc.), which are, for the most part, well suited to implementation on vectorizing supercomputers.
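
    Off-line subpixel estimation of the kind described above can be sketched with an intensity-weighted centroid over a thresholded blob. This is a generic illustration, not the paper's algorithm; the image size, spot parameters, and threshold are arbitrary.

    ```python
    import numpy as np

    def subpixel_centroid(img, threshold):
        """Intensity-weighted centroid of the thresholded blob.
        Centroiding resolves the feature position to a small fraction
        of a pixel, well below the pixel pitch itself."""
        mask = img > threshold
        w = np.where(mask, img - threshold, 0.0)   # background-subtracted weights
        ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
        total = w.sum()
        return (ys * w).sum() / total, (xs * w).sum() / total

    # Synthetic "pupil": a Gaussian spot at a non-integer position.
    yc, xc = 20.3, 31.7
    ys, xs = np.mgrid[0:64, 0:64]
    img = np.exp(-(((ys - yc) ** 2 + (xs - xc) ** 2) / (2 * 4.0 ** 2)))
    est = subpixel_centroid(img, threshold=0.05)
    ```

    On this noiseless synthetic spot the centroid lands within a few hundredths of a pixel of the true position, which is the sense in which off-line processing can beat the nominal pixel resolution by an order of magnitude.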

  12. Traditional Athabascan Law Ways and Their Relationship to Contemporary Problems of "Bush Justice". Some Preliminary Observations on Structure and Function. Institute of Social, Economic and Government Research (ISEGR) Occasional Papers No. 7.

    ERIC Educational Resources Information Center

    Hippler, Arthur E.; Conn, Stephen

    Resolution of conflicts and disputes in traditional Athabascan society was based on assumptions that: (1) the authority of the leader was absolute, for as representative of both village and victim, he was limited only by the fact that the crime had to be serious enough for third party intervention and that severe sanctions demanded village…

  13. Blind Bayesian restoration of adaptive optics telescope images using generalized Gaussian Markov random field models

    NASA Astrophysics Data System (ADS)

    Jeffs, Brian D.; Christou, Julian C.

    1998-09-01

    This paper addresses post-processing for resolution enhancement of sequences of short-exposure adaptive optics (AO) images of space objects. The unknown residual blur is removed using Bayesian maximum a posteriori blind image restoration techniques. In the problem formulation, both the true image and the unknown blur psf's are represented by the flexible generalized Gaussian Markov random field (GGMRF) model. The GGMRF probability density function provides a natural mechanism for expressing available prior information about the image and blur. Incorporating such prior knowledge in the deconvolution optimization is crucial for the success of blind restoration algorithms. For example, space objects often contain sharp edge boundaries and geometric structures, while the residual blur psf in the corresponding partially corrected AO image is spectrally band limited, and exhibits smoothed, random, texture-like features on a peaked central core. By properly choosing parameters, GGMRF models can accurately represent both the blur psf and the object, and serve to regularize the deconvolution problem. These two GGMRF models also serve as discriminator functions to separate blur and object in the solution. Algorithm performance is demonstrated with examples from synthetic AO images. Results indicate significant resolution enhancement when applied to partially corrected AO images. An efficient computational algorithm is described.

  14. New and unconventional approaches for advancing resolution in biological transmission electron microscopy by improving macromolecular specimen preparation and preservation.

    PubMed

    Massover, William H

    2011-02-01

    Resolution in transmission electron microscopy (TEM) now is limited by the properties of specimens, rather than by those of instrumentation. The long-standing difficulties in obtaining truly high-resolution structure from biological macromolecules with TEM demand the development, testing, and application of new ideas and unconventional approaches. This review concisely describes some new concepts and innovative methodologies for TEM that deal with unsolved problems in the preparation and preservation of macromolecular specimens. The selected topics include use of better support films, a more protective multi-component matrix surrounding specimens for cryo-TEM and negative staining, and, several quite different changes in microscopy and micrography that should decrease the effects of electron radiation damage; all these practical approaches are non-traditional, but have promise to advance resolution for specimens of biological macromolecules beyond its present level of 3-10 Å (0.3-1.0 nm). The result of achieving truly high resolution will be a fulfillment of the still unrealized potential of transmission electron microscopy for directly revealing the structure of biological macromolecules down to the atomic level.

  15. Assessment of a vertical high-resolution distributed-temperature-sensing system in a shallow thermohaline environment

    NASA Astrophysics Data System (ADS)

    Suárez, F.; Aravena, J. E.; Hausner, M. B.; Childress, A. E.; Tyler, S. W.

    2011-01-01

    In shallow thermohaline-driven lakes it is important to measure temperature on fine spatial and temporal scales to detect stratification or different hydrodynamic regimes. Raman spectra distributed temperature sensing (DTS) is an approach that provides high spatial and temporal temperature resolution. A vertical high-resolution DTS system was constructed to overcome the problems of typical methods used in the past, i.e., it does not disturb the water column and it resists corrosive environments. The system monitors the temperature profile every 1.1 cm vertically, with temporal averaging intervals as short as 10 s. Temperature resolution as low as 0.035 °C is obtained when the data are collected at 5-min intervals. The vertical high-resolution DTS system is used to monitor the thermal behavior of a salt-gradient solar pond, an engineered shallow thermohaline system that allows collection and storage of solar energy over long periods. This paper describes a method to quantitatively assess the accuracy, precision, and other limitations of DTS systems in order to fully utilize the capacity of this technology. It also presents, for the first time, a method to manually calibrate temperatures along the optical fiber.

  16. A High Spatial Resolution Depth Sensing Method Based on Binocular Structured Light

    PubMed Central

    Yao, Huimin; Ge, Chenyang; Xue, Jianru; Zheng, Nanning

    2017-01-01

    Depth information has been used in many fields since the Microsoft Kinect was released, because of its low cost and easy availability. However, the Kinect and Kinect-like RGB-D sensors show limited performance in certain applications that place high demands on the accuracy and robustness of depth information. In this paper, we propose a depth sensing system that contains a laser projector similar to that used in the Kinect, and two infrared cameras located on either side of the laser projector, to obtain depth information of higher spatial resolution. We apply the block-matching algorithm to estimate the disparity. To improve the spatial resolution, we reduce the size of the matching blocks, but smaller matching blocks yield lower matching precision. To address this problem, we combine two matching modes (binocular mode and monocular mode) in the disparity estimation process. Experimental results show that our method obtains higher spatial resolution depth without loss of range-image quality, compared with the Kinect. Furthermore, our algorithm is implemented on a low-cost hardware platform, and the system supports a resolution of 1280 × 960 at up to 60 frames per second for depth image sequences. PMID:28397759
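
    The block-matching step can be sketched with a plain sum-of-absolute-differences (SAD) search along the epipolar line; the block size and disparity range below are illustrative, and the combined binocular/monocular mode switching of the paper is omitted.

    ```python
    import numpy as np

    def block_match_disparity(left, right, y, x, block=7, max_disp=20):
        """Disparity at (y, x): slide a block from the left image across
        the right image and pick the shift minimizing the SAD cost."""
        h = block // 2
        ref = left[y - h:y + h + 1, x - h:x + h + 1]
        best_d, best_cost = 0, np.inf
        for d in range(max_disp + 1):
            if x - h - d < 0:                  # candidate window off-image
                break
            cand = right[y - h:y + h + 1, x - h - d:x + h + 1 - d]
            cost = np.abs(ref - cand).sum()
            if cost < best_cost:
                best_d, best_cost = d, cost
        return best_d

    # Synthetic pair: the right view is the left view shifted by 6 pixels.
    rng = np.random.default_rng(0)
    left = rng.random((60, 80))
    true_d = 6
    right = np.roll(left, -true_d, axis=1)
    d = block_match_disparity(left, right, y=30, x=40)
    ```

    The trade-off the abstract describes is visible here: shrinking `block` sharpens the spatial localization of the disparity estimate but makes the SAD minimum less distinctive, hence the paper's combination of two matching modes.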

  17. Chemically Resolved Imaging of Biological Cells and Thin Films by Infrared Scanning Near-Field Optical Microscopy

    PubMed Central

    Cricenti, Antonio; Generosi, Renato; Luce, Marco; Perfetti, Paolo; Margaritondo, Giorgio; Talley, David; Sanghera, Jas S.; Aggarwal, Ishwar D.; Tolk, Norman H.; Congiu-Castellano, Agostina; Rizzo, Mark A.; Piston, David W.

    2003-01-01

    The infrared (IR) absorption of a biological system can potentially report on fundamentally important microchemical properties. For example, molecular IR profiles are known to change during increases in metabolic flux, protein phosphorylation, or proteolytic cleavage. However, practical implementation of intracellular IR imaging has been problematic because the diffraction limit of conventional infrared microscopy results in low spatial resolution. We have overcome this limitation by using an IR spectroscopic version of scanning near-field optical microscopy (SNOM), in conjunction with a tunable free-electron laser source. The results presented here clearly reveal different chemical constituents in thin films and biological cells. The space distribution of specific chemical species was obtained by taking SNOM images at IR wavelengths (λ) corresponding to stretch absorption bands of common biochemical bonds, such as the amide bond. In our SNOM implementation, this chemical sensitivity is combined with a lateral resolution of 0.1 μm (≈λ/70), well below the diffraction limit of standard infrared microscopy. The potential applications of this approach touch virtually every aspect of the life sciences and medical research, as well as problems in materials science, chemistry, physics, and environmental research. PMID:14507733

  18. A Novel Modified Omega-K Algorithm for Synthetic Aperture Imaging Lidar through the Atmosphere

    PubMed Central

    Guo, Liang; Xing, Mendao; Tang, Yu; Dan, Jing

    2008-01-01

    The spatial resolution of a conventional imaging lidar system is constrained by the diffraction limit of the telescope's aperture. Combining lidar with synthetic aperture (SA) processing techniques may overcome the diffraction limit and pave the way for higher resolution airborne or spaceborne remote sensors. For a lidar transmitting a frequency-modulated continuous-wave (FMCW) signal, the motion during the transmission of a sweep and the reception of the corresponding echo was expected to be one of the major problems. The modified Omega-K algorithm presented here takes this continuous motion into account: it efficiently compensates for the Doppler shift induced by the continuous motion and for the azimuth ambiguity due to the low pulse recurrence frequency limited by the tunable laser. Phase screens (PS) distorted by atmospheric turbulence following the von Karman spectrum are then simulated using the Fourier transform. Finally, the computer simulation shows the validity of the modified algorithm and indicates that, if the synthetic aperture length does not exceed the coherence length of the atmosphere for SAIL, the effect of the turbulence can be ignored. PMID:27879865

  19. Improving lateral resolution and image quality of optical coherence tomography by the multi-frame superresolution technique for 3D tissue imaging

    PubMed Central

    Shen, Kai; Lu, Hui; Baig, Sarfaraz; Wang, Michael R.

    2017-01-01

    The multi-frame superresolution technique is introduced to significantly improve the lateral resolution and image quality of spectral domain optical coherence tomography (SD-OCT). Using several sets of low-resolution C-scan 3D images with lateral sub-spot-spacing shifts between sets, multi-frame superresolution processing at each depth layer reconstructs a lateral image of higher resolution and quality. Layer-by-layer processing yields an overall high lateral resolution and high-quality 3D image. In theory, superresolution processing including deconvolution can address the diffraction limit, lateral scan density, and background noise problems together. In experiments, a roughly threefold improvement in lateral resolution, reaching 7.81 µm and 2.19 µm with sample arm optics of 0.015 and 0.05 numerical aperture respectively, as well as a doubling of image quality, was confirmed by imaging a known resolution test target. Improved lateral resolution on in vitro skin C-scan images has been demonstrated. For in vivo 3D SD-OCT imaging of human skin, fingerprints, and retinal layers, we used the multi-modal volume registration method to effectively estimate the lateral image shifts among different C-scans caused by minor unintended body motion. Further processing of these images generated high lateral resolution 3D images as well as high-quality B-scan images of these in vivo tissues. PMID:29188089
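
    With registration assumed known and no noise, the core of multi-frame superresolution reduces to interleaving sub-pixel-shifted low-resolution frames onto the high-resolution grid (shift-and-add). A real pipeline such as the paper's adds registration and deconvolution, which this sketch omits; the decimation factor of 2 and the frame shifts are illustrative.

    ```python
    import numpy as np

    def shift_and_add(frames):
        """Interleave four low-resolution frames whose sampling grids are
        offset by one high-resolution pixel (half a low-resolution pixel)
        to rebuild the high-resolution grid exactly."""
        h, w = frames[(0, 0)].shape
        hr = np.zeros((2 * h, 2 * w))
        for (sy, sx), f in frames.items():
            hr[sy::2, sx::2] = f           # each frame fills one sub-lattice
        return hr

    # Simulate four shifted low-resolution acquisitions of one scene.
    rng = np.random.default_rng(5)
    hr_true = rng.random((32, 32))
    frames = {(sy, sx): hr_true[sy::2, sx::2]
              for sy in (0, 1) for sx in (0, 1)}
    hr_rec = shift_and_add(frames)
    ```

    In the noiseless case with exactly complementary shifts the high-resolution scene is recovered perfectly; with real OCT data the shifts come from volume registration and the merge is followed by deconvolution.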

  20. [The optimizing design and experiment for a MOEMS micro-mirror spectrometer].

    PubMed

    Mo, Xiang-xia; Wen, Zhi-yu; Zhang, Zhi-hai; Guo, Yuan-jun

    2011-12-01

    A MOEMS micro-mirror spectrometer, which uses micro-mirror as a light switch so that spectrum can be detected by a single detector, has the advantages of transforming DC into AC, applying Hadamard transform optics without additional template, high pixel resolution and low cost. In this spectrometer, the vital problem is the conflict between the scales of slit and the light intensity. Hence, in order to improve the resolution of this spectrometer, the present paper gives the analysis of the new effects caused by micro structure, and optimal values of the key factors. Firstly, the effects of diffraction limitation, spatial sample rate and curved slit image on the resolution of the spectrum were proposed. Then, the results were simulated; the key values were tested on the micro mirror spectrometer. Finally, taking all these three effects into account, this micro system was optimized. With a scale of 70 mm x 130 mm, decreasing the height of the image at the plane of micro mirror can not diminish the influence of curved slit image in the spectrum; under the demand of spatial sample rate, the resolution must be twice over the pixel resolution; only if the width of the slit is 1.818 microm and the pixel resolution is 2.2786 microm can the spectrometer have the best performance.

  1. Magnetic Resonance Elastography of the Brain using Multi-Shot Spiral Readouts with Self-Navigated Motion Correction

    PubMed Central

    Johnson, Curtis L.; McGarry, Matthew D. J.; Van Houten, Elijah E. W.; Weaver, John B.; Paulsen, Keith D.; Sutton, Bradley P.; Georgiadis, John G.

    2012-01-01

    MRE has been introduced in clinical practice as a possible surrogate for mechanical palpation, but its application to study the human brain in vivo has been limited by low spatial resolution and the complexity of the inverse problem associated with biomechanical property estimation. Here, we report significant improvements in brain MRE data acquisition, presenting images with high spatial resolution and signal-to-noise ratio as quantified by octahedral shear strain metrics. Specifically, we have developed a sequence for brain MRE based on multi-shot, variable-density spiral imaging and three-dimensional displacement acquisition, and implemented a correction scheme for any resulting phase errors. A Rayleigh damped model of brain tissue mechanics was adopted to represent the parenchyma, and was integrated via a finite element-based iterative inversion algorithm. A multi-resolution phantom study demonstrates the need for obtaining high-resolution MRE data when estimating focal mechanical properties. Measurements on three healthy volunteers demonstrate satisfactory resolution of grey and white matter, and mechanical heterogeneities correspond well with white matter histoarchitecture. Together, these advances enable MRE scans that result in high-fidelity, spatially-resolved estimates of in vivo brain tissue mechanical properties, improving upon lower resolution MRE brain studies which only report volume-averaged stiffness values. PMID:23001771

  2. A new approach for solving seismic tomography problems and assessing the uncertainty through the use of graph theory and direct methods

    NASA Astrophysics Data System (ADS)

    Bogiatzis, P.; Ishii, M.; Davis, T. A.

    2016-12-01

    Seismic tomography inverse problems are among the largest high-dimensional parameter estimation tasks in the Earth sciences. We show how combinatorics and graph theory can be used to analyze the structure of such problems, and to effectively decompose them into smaller ones that can be solved efficiently by the least-squares method. In combination with recent high-performance direct sparse algorithms, this reduction in dimensionality allows for efficient computation of the model resolution and covariance matrices using limited resources. Furthermore, we show that a new sparse singular value decomposition method can be used to obtain the complete spectrum of singular values. This procedure provides the means for more objective regularization and further dimensionality reduction of the problem. We apply this methodology to a moderate-size, nonlinear seismic tomography problem to image the structure of the crust and upper mantle beneath Japan, using local deep earthquakes recorded by the High Sensitivity Seismograph Network stations.
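
    The model resolution matrix mentioned in this record can be made concrete with a toy damped least-squares problem. The sketch below is not from the paper; it computes R = (GᵀG + λI)⁻¹GᵀG for an invented 3×2 design matrix G and illustrative damping values λ, showing how damping trades resolution (diagonal entries of R below 1) against stability:

```python
# Toy model resolution matrix for a damped least-squares inversion.
# R = (G^T G + lam*I)^(-1) G^T G maps the true model to the recovered model;
# diagonal entries near 1 indicate well-resolved parameters.
# G and lam are illustrative values, not taken from the paper.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def inv2(M):
    # Closed-form inverse of a 2x2 matrix.
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def resolution_matrix(G, lam):
    GtG = matmul(transpose(G), G)
    damped = [[GtG[i][j] + (lam if i == j else 0.0) for j in range(2)]
              for i in range(2)]
    return matmul(inv2(damped), GtG)

# Three observations of two model parameters (an overdetermined toy problem).
G = [[1.0, 0.5],
     [0.2, 1.0],
     [0.7, 0.3]]

R_strong = resolution_matrix(G, lam=1.0)   # heavy damping: diagonal < 1
R_weak = resolution_matrix(G, lam=1e-6)    # light damping: R ~ identity

print(R_strong[0][0], R_weak[0][0])
```

    For large tomographic systems this dense inversion is exactly what becomes infeasible, which is where the sparse direct methods described in the record come in.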

  3. Statistical Projections for Multi-resolution, Multi-dimensional Visual Data Exploration and Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen, Hoa T.; Stone, Daithi; Bethel, E. Wes

    2016-01-01

    An ongoing challenge in visual exploration and analysis of large, multi-dimensional datasets is how to present useful, concise information to a user for specific visualization tasks. Typical approaches to this problem have proposed reduced-resolution versions of the data, projections of the data, or both. These approaches still have limitations, such as high computational cost or loss of accuracy. In this work, we explore the use of a statistical metric as the basis for both projections and reduced-resolution versions of data, with a particular focus on preserving one key trait in the data, namely variation. We use two case studies to explore this idea: one that uses a synthetic dataset, and another that uses a large ensemble collection produced by an atmospheric modeling code to study long-term changes in global precipitation. The primary finding of our work is that, in terms of preserving the variation signal inherent in the data, a statistical measure more faithfully preserves this key characteristic across both multi-dimensional projections and multi-resolution representations than a methodology based upon averaging.
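
    The averaging-versus-statistical-measure contrast can be illustrated with a minimal sketch (synthetic 1-D data, not the paper's ensemble): block averaging erases a zero-mean oscillation, while a per-block standard deviation keeps the variation signal.

```python
# Contrast two reduced-resolution operators on a synthetic 1-D signal:
# block averaging (loses oscillatory variation) versus a per-block
# statistical measure (standard deviation, which preserves it).
# The signal and block size are illustrative, not the paper's data.
import math

def block_reduce(data, size, stat):
    return [stat(data[i:i + size]) for i in range(0, len(data), size)]

def mean(xs):
    return sum(xs) / len(xs)

def std(xs):
    m = mean(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

# First block: flat; second block: a zero-mean oscillation.
signal = [2.0] * 8 + [3.0 if i % 2 == 0 else -3.0 for i in range(8)]

means = block_reduce(signal, 8, mean)  # oscillation averages away to 0
stds = block_reduce(signal, 8, std)    # oscillation survives as variation
print(means, stds)
```

    The averaged view makes the two blocks look equally featureless, while the statistical reduction singles out the block where the data vary.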

  4. Elevated-temperature luminescence measurements to improve spatial resolution

    NASA Astrophysics Data System (ADS)

    Pluska, Mariusz; Czerwinski, Andrzej

    2018-01-01

    Various branches of applied physics use luminescence based methods to investigate light-emitting specimens with high spatial resolution. A key problem is that luminescence signals lack all the advantages of high locality (i.e. of high spatial resolution) when structures with strong built-in electric field are measured. Such fields exist intentionally in most photonic structures, and occur unintentionally in many other materials. In this case, as a result of beam-induced current generation and its outflow, information that indicates irregularities, nonuniformities and inhomogeneities, such as defects, is lost. We show that to avoid nonlocality and enable truly local luminescence measurements, an elevated measurement temperature as high as 350 K (or even higher) is, perhaps surprisingly, advantageous. This is in contrast to a widely used approach, where cryogenic temperatures, or at least room temperature, are recommended. The elevated temperature of a specimen, together with the current outflow being limited by focused ion beam (FIB) milling, is shown to improve the spatial resolution of luminescence measurements greatly. All conclusions drawn using the example of cathodoluminescence are useful for other luminescence techniques.

  5. Slitless Spectroscopy

    NASA Astrophysics Data System (ADS)

    Davila, J. M.; O'Neill, J. F.

    2013-12-01

    Spectrographs provide a unique window into plasma parameters in the solar atmosphere; in fact, they provide the most accurate measurements of plasma parameters such as density, temperature, and flow speed. Traditionally, however, spectrographic instruments have suffered from an inability to cover large spatial regions of the Sun quickly. To cover an active-region-sized area, the slit must be rastered over the area of interest, with an exposure taken at each pointing location. Because of this long cycle time, spectra of dynamic events like flares, CME initiations, or transient brightenings are obtained only rarely. Even when spectra are obtained, they are either taken over an extremely small spatial region or not co-temporal across the raster; either complicates the interpretation of the spectral raster results. Imagers are able to provide high time- and spatial-resolution images of the full Sun, but with limited spectral resolution. The telescopes onboard the Solar Dynamics Observatory (SDO) normally take a full-disk solar image every 10 seconds with roughly 1 arcsec spatial resolution. However, the spectral resolution of the multilayer imagers on SDO is of order 100 times less than that of a typical spectrograph. This makes it difficult to interpret multilayer imaging data to accurately obtain plasma parameters like temperature and density, and there is no direct measure of plasma flow velocity. SERTS and EIS partially addressed this problem by using a wide slit to produce monochromatic images with a limited field of view to limit overlapping. However, dispersion within the wide-slit image remained a problem, preventing the determination of intensity, Doppler shift, and line width. Kankelborg and Thomas introduced the idea of using multiple images in the -1, 0, and +1 spectral orders of a single emission line. This scheme provided three independent images from which to measure the three spectral line parameters in each pixel with the Multi-Order Solar EUV Spectrograph (MOSES) instrument. We suggest a reconstruction approach based on tomographic methods with regularization. Preliminary results show that the typical Doppler shift and line width error introduced by the reconstruction method is of order a few km/s at 300 Å. This is on the order of the error obtained with narrow-slit spectrographs, but with data obtained over a two-dimensional field of view.

  6. Interior tomography in microscopic CT with image reconstruction constrained by full field of view scan at low spatial resolution

    NASA Astrophysics Data System (ADS)

    Luo, Shouhua; Shen, Tao; Sun, Yi; Li, Jing; Li, Guang; Tang, Xiangyang

    2018-04-01

    In high resolution (microscopic) CT applications, the scan field of view should cover the entire specimen or sample to allow complete data acquisition and image reconstruction. However, truncation may occur in the projection data and result in artifacts in the reconstructed images. In this study, we propose a low-resolution-image constrained reconstruction algorithm (LRICR) for interior tomography in microscopic CT at high resolution. In general, multi-resolution acquisition based methods can solve the data truncation problem if projection data acquired at low resolution are used to fill in the truncated projection data acquired at high resolution. However, most existing methods place quite strict restrictions on the data acquisition geometry, which greatly limits their utility in practice. In the proposed LRICR algorithm, full and partial data acquisitions (scans) at low and high resolutions, respectively, are carried out. Using the image reconstructed from sparse projection data acquired at low resolution as the prior, a microscopic image at high resolution is reconstructed from the truncated projection data acquired at high resolution. Two synthesized digital phantoms, a raw bamboo culm, and a specimen of mouse femur were used to evaluate and verify the performance of the proposed LRICR algorithm. Compared with the conventional TV minimization based algorithm and the multi-resolution scout-reconstruction algorithm, the proposed LRICR algorithm shows a significant reduction in the artifacts caused by data truncation, providing a practical solution for high quality and reliable interior tomography in microscopic CT applications.
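
    As a hedged 1-D analogue of the prior-constrained idea (not the LRICR algorithm itself), the sketch below regularizes an underdetermined "truncated" measurement with a low-resolution prior x0 by minimizing ||Ax - y||² + μ||x - x0||² via gradient descent; all values are invented for illustration.

```python
# 1-D analogue of prior-constrained interior reconstruction: the truncated
# high-resolution data alone are underdetermined (one equation, two unknowns),
# so a low-resolution prior x0 regularizes the solution:
#   minimize ||A x - y||^2 + mu * ||x - x0||^2
# solved here by plain gradient descent. All values are illustrative.

A = [[1.0, 1.0]]        # a single (truncated) measurement: x1 + x2
x_true = [3.0, 1.0]
y = [sum(a * x for a, x in zip(A[0], x_true))]   # y = 4.0

x0 = [2.0, 1.0]         # low-resolution prior: roughly right, but its
                        # prediction (3.0) disagrees with the data (4.0)
mu = 0.1                # prior weight
x = list(x0)            # start from the prior

for _ in range(2000):
    r = sum(a * xi for a, xi in zip(A[0], x)) - y[0]   # data residual
    grad = [2 * A[0][i] * r + 2 * mu * (x[i] - x0[i]) for i in range(2)]
    x = [xi - 0.05 * g for xi, g in zip(x, grad)]

print(x)   # nearly consistent with the measurement, pulled toward the prior
```

    Without the prior term the single measurement admits infinitely many solutions; with it, the estimate both fits the data closely and lands nearer the truth than the prior alone.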

  7. Superresolution imaging of Drosophila tissues using expansion microscopy.

    PubMed

    Jiang, Nan; Kim, Hyeon-Jin; Chozinski, Tyler J; Azpurua, Jorge E; Eaton, Benjamin A; Vaughan, Joshua C; Parrish, Jay Z

    2018-06-15

    The limited resolving power of conventional diffraction-limited microscopy hinders analysis of small, densely packed structural elements in cells. Expansion microscopy (ExM) provides an elegant solution to this problem, allowing for increased resolution with standard microscopes via physical expansion of the specimen in a swellable polymer hydrogel. Here, we apply, validate, and optimize ExM protocols that enable the study of Drosophila embryos, larval brains, and larval and adult body walls. We achieve a lateral resolution of ∼70 nm in Drosophila tissues using a standard confocal microscope, and we use ExM to analyze fine intracellular structures and intercellular interactions. First, we find that ExM reveals features of presynaptic active zone (AZ) structure that are observable with other superresolution imaging techniques but not with standard confocal microscopy. We further show that synapses known to exhibit age-dependent changes in activity also exhibit age-dependent changes in AZ structure. Finally, we use the significantly improved axial resolution of ExM to show that dendrites of somatosensory neurons are inserted into epithelial cells at a higher frequency than previously reported in confocal microscopy studies. Altogether, our study provides a foundation for the application of ExM to Drosophila tissues and underscores the importance of tissue-specific optimization of ExM procedures.

  8. A Conformal, Bio-interfaced Class of Silicon Electronics for Mapping Cardiac Electrophysiology

    PubMed Central

    Viventi, Jonathan; Kim, Dae-Hyeong; Moss, Joshua D.; Kim, Yun-Soung; Blanco, Justin A.; Annetta, Nicholas; Hicks, Andrew; Xiao, Jianliang; Huang, Younggang; Callans, David J.; Rogers, John A.; Litt, Brian

    2011-01-01

    The sophistication and resolution of current implantable medical devices are limited by the need to connect each sensor separately to the data acquisition system. The ability of these devices to sample and modulate tissues is further limited by the rigid, planar nature of the electronics and of the electrode-tissue interface. Here, we report the development of a class of mechanically flexible silicon electronics for measuring signals in an intimate, conformal, integrated mode on the dynamic, three-dimensional surfaces of soft tissues in the human body. We illustrate this technology in sensor systems composed of 2016 silicon nanomembrane transistors configured to record electrical activity directly from the curved, wet surface of a beating heart in vivo. The devices sample with simultaneous sub-millimeter and sub-millisecond resolution through 288 amplified and multiplexed channels. We use these systems to map the spread of spontaneous and paced ventricular depolarization in real time, at high resolution, on the epicardial surface in a porcine animal model. This clinical-scale demonstration represents one example of many possible uses of this technology in minimally invasive medical devices. [Conformal electronics and sensors intimately integrated with living tissues enable a new generation of implantable devices capable of addressing important problems in human health.] PMID:20375008

  9. Numerical viscosity and resolution of high-order weighted essentially nonoscillatory schemes for compressible flows with high Reynolds numbers.

    PubMed

    Zhang, Yong-Tao; Shi, Jing; Shu, Chi-Wang; Zhou, Ye

    2003-10-01

    A quantitative study is carried out in this paper to investigate the size of numerical viscosities and the resolution power of high-order weighted essentially nonoscillatory (WENO) schemes for solving one- and two-dimensional Navier-Stokes equations for compressible gas dynamics with high Reynolds numbers. A one-dimensional shock tube problem, a one-dimensional example with parameters motivated by supernova and laser experiments, and a two-dimensional Rayleigh-Taylor instability problem are used as numerical test problems. For the two-dimensional Rayleigh-Taylor instability problem, or similar problems with small-scale structures, the details of the small structures are determined by the physical viscosity (therefore, the Reynolds number) in the Navier-Stokes equations. Thus, to obtain faithful resolution to these small-scale structures, the numerical viscosity inherent in the scheme must be small enough so that the physical viscosity dominates. A careful mesh refinement study is performed to capture the threshold mesh for full resolution, for specific Reynolds numbers, when WENO schemes of different orders of accuracy are used. It is demonstrated that high-order WENO schemes are more CPU time efficient to reach the same resolution, both for the one-dimensional and two-dimensional test problems.

  10. High-resolution scanning electron microscopy of frozen-hydrated cells.

    PubMed

    Walther, P; Chen, Y; Pech, L L; Pawley, J B

    1992-11-01

    Cryo-fixed yeast, Paramecia, and sea urchin embryos were investigated with an in-lens-type field-emission SEM using a cold stage. The goal was to further develop and investigate the processing of frozen samples for low-temperature scanning electron microscopy (LTSEM). Uncoated frozen-hydrated samples were imaged with the low-voltage backscattered electron (BSE) signal. Resolution and contrast were sufficient to visualize cross-fractured membranes, nuclear pores, and small vesicles in the cytoplasm. It is assumed that the resolution of this approach is limited by the extraction depth of the BSE, which depends upon the accelerating voltage of the primary beam (V0). In this study, the lowest possible V0 was 2.6 kV, because below this value the sensitivity of the BSE detector is insufficient. It is concluded that the resolution of the uncoated specimen could be improved if equipment were available for high-resolution BSE imaging at 0.5-2 kV. Higher resolution was obtained with platinum cryo-coated samples, on which intramembranous particles were easily imaged. These images even show the ring-like appearance of the hexagonally arranged intramembranous particles known from high-resolution replica studies. On fully hydrated samples at high magnification, the observation time for a particular area is limited by mass loss caused by electron irradiation. Other potential sources of artefacts are the deposition of water vapour contamination and shrinkage caused by the sublimation of ice. Imaging of partially dehydrated (partially freeze-dried) samples, e.g. high-pressure-frozen Paramecium and sea urchin embryos, will probably become the main application in cell biology. In spite of possible shrinkage problems, this approach has a number of advantages compared with any other electron microscopy preparation method: no chemical fixation is necessary, eliminating this source of artefacts; due to partial removal of the water, additional structures in the cytoplasm can be investigated; and finally, the mass loss due to electron beam irradiation is greatly reduced compared with fully frozen-hydrated specimens.

  11. Multi-layer sparse representation for weighted LBP-patches based facial expression recognition.

    PubMed

    Jia, Qi; Gao, Xinkai; Guo, He; Luo, Zhongxuan; Wang, Yi

    2015-03-19

    In this paper, a novel facial expression recognition method based on sparse representation is proposed. Most contemporary facial expression recognition systems suffer from a limited ability to handle image nuisances such as low resolution and noise. For low-intensity expressions in particular, most existing training methods have quite low recognition rates. Motivated by sparse representation, the problem can be solved by finding the sparse coefficients of the test image over the whole training set. Deriving an effective facial representation from original face images is a vital step for successful facial expression recognition. We evaluate a facial representation based on weighted local binary patterns, with the Fisher separation criterion used to calculate the weights of patches. A multi-layer sparse representation framework is proposed for multi-intensity facial expression recognition, especially for the low-intensity and noisy expressions encountered in practice, a critical problem seldom addressed in existing work. To this end, several experiments based on low-resolution and multi-intensity expressions are carried out. Promising results on publicly available databases demonstrate the potential of the proposed approach.
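
    The residual-based classification at the core of sparse-representation recognition can be sketched in miniature. This is not the paper's multi-layer framework: it uses one invented feature atom per class, so the least-squares step reduces to a scalar projection, whereas real systems use many atoms per class and an l1 solver.

```python
# Stylized sketch of classification by class-wise representation: a test
# vector is explained by each class's training atom in turn, and the class
# with the smallest reconstruction residual wins. All data are illustrative.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def residual(y, atom):
    c = dot(atom, y) / dot(atom, atom)   # least-squares coefficient
    return sum((yi - c * ai) ** 2 for yi, ai in zip(y, atom))

classes = {
    "smile": [1.0, 0.8, 0.1],   # invented per-class feature atoms
    "frown": [0.1, 0.7, 1.0],
}

y = [0.9, 0.75, 0.2]            # test feature vector: a noisy "smile"
pred = min(classes, key=lambda c: residual(y, classes[c]))
print(pred)
```

    The same minimum-residual rule extends directly to multi-atom dictionaries once the coefficients come from a sparse solver.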

  12. Learning automata-based solutions to the nonlinear fractional knapsack problem with applications to optimal resource allocation.

    PubMed

    Granmo, Ole-Christoffer; Oommen, B John; Myrer, Svein Arild; Olsen, Morten Goodwin

    2007-02-01

    This paper considers the nonlinear fractional knapsack problem and demonstrates how its solution can be effectively applied to two resource allocation problems dealing with the World Wide Web. The novel solution involves a "team" of deterministic learning automata (LA). The first real-life problem relates to resource allocation in web monitoring so as to "optimize" information discovery when the polling capacity is constrained. The disadvantages of the currently reported solutions are explained in this paper. The second problem concerns allocating limited sampling resources in a "real-time" manner with the purpose of estimating multiple binomial proportions. This is the scenario encountered when the user has to evaluate multiple web sites by accessing a limited number of web pages, and the proportions of interest are the fractions of each web site that are successfully validated by an HTML validator. Using the general LA paradigm to tackle both of the real-life problems, the proposed scheme improves a current solution in an online manner through a series of informed guesses that move toward the optimal solution. At the heart of the scheme, a team of deterministic LA performs a controlled random walk on a discretized solution space. Comprehensive experimental results demonstrate that the discretization resolution determines the precision of the scheme, and that for a given precision, the current solution (to both problems) is consistently improved until a nearly optimal solution is found, even for switching environments. Thus, the scheme, while being novel to the entire field of LA, also efficiently handles a class of resource allocation problems previously not addressed in the literature.
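
    A stylized, deterministic analogue of walking on a discretized solution space is sketched below: polling capacity is split between two sources on a grid of N + 1 points, and the automaton steps to a neighboring grid point whenever it yields higher reward. The concave reward function and grid size are invented for illustration; the paper's scheme uses stochastic environment feedback and a team of automata.

```python
# Stylized analogue of a learning automaton on a discretized solution space:
# a fraction p = k/N of polling capacity goes to source 1, the rest to
# source 2, and the walker moves to a better neighboring grid point until
# no improvement remains. Reward and grid size are illustrative only.
import math

N = 100                                  # discretization resolution

def reward(k):
    p = k / N
    # Concave "information discovered" model: diminishing returns on each
    # source, with source 1 weighted more heavily (invented coefficients).
    return 3 * math.log(1 + p) + 2 * math.log(1 + (1 - p))

k = 0                                    # start with everything on source 2
while True:
    neighbors = [j for j in (k - 1, k + 1) if 0 <= j <= N]
    best = max(neighbors, key=reward)
    if reward(best) <= reward(k):
        break                            # no better neighbor: stop
    k = best

print(k / N)                             # converges to the grid optimum
```

    Because the reward is concave, the local stopping point is the global grid optimum, mirroring the paper's observation that the discretization resolution bounds the achievable precision.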

  13. Reconstructing Images in Astrophysics, an Inverse Problem Point of View

    NASA Astrophysics Data System (ADS)

    Theys, Céline; Aime, Claude

    2016-04-01

    After a short introduction, a first section provides a brief tutorial on the physics of image formation and its detection in the presence of noise. The rest of the chapter focuses on the resolution of the inverse problem. In its general form, the observed image is given by a Fredholm integral containing the object and the response of the instrument. Its inversion is formulated using linear algebra. The discretized object and image, of size N × N, are stored in vectors x and y of length N². They are related to one another by the linear relation y = Hx, where H is a matrix of size N² × N² that contains the elements of the instrument response. This matrix has particular properties for a shift-invariant point spread function, for which the Fredholm integral reduces to a convolution. The presence of noise complicates the resolution of the problem. It is shown that minimum-variance unbiased solutions fail to give good results because H is badly conditioned, leading to the need for a regularized solution. The relative strength of regularization versus fidelity to the data is discussed and briefly illustrated on an example using L-curves. The origins and construction of iterative algorithms are explained, and illustrations are given for the algorithms ISRA, for Gaussian additive noise, and Richardson-Lucy, for a pure photodetected image (Poisson statistics). In this latter case, the way the algorithm modifies the spatial frequencies of the reconstructed image is illustrated for a diluted array of apertures in space. Throughout the chapter, the inverse problem is formulated in matrix form for the general case of the Fredholm integral, while numerical illustrations are limited to the deconvolution case, allowing the use of discrete Fourier transforms, because of computer limitations.
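
    The Richardson-Lucy iteration mentioned in this record can be sketched in 1-D. This is the generic textbook form, x_{k+1} = x_k · Hᵀ(y / Hx_k) with edge normalization, not the chapter's code; the kernel, signal, and iteration count are illustrative.

```python
# Minimal 1-D Richardson-Lucy sketch for y = H x with Poisson-style data.
# A symmetric blur kernel makes H^T and H coincide, so both the forward
# model and the back-projection are the same convolution. Illustrative only.

KERNEL = [0.25, 0.5, 0.25]

def convolve(x, h):
    half = len(h) // 2
    out = []
    for i in range(len(x)):
        s = 0.0
        for j, hj in enumerate(h):
            k = i + j - half
            if 0 <= k < len(x):
                s += hj * x[k]
        out.append(s)
    return out

def richardson_lucy(y, h, iters):
    n = len(y)
    norm = convolve([1.0] * n, h)        # H^T 1, normalizes edge effects
    x = [sum(y) / n] * n                 # flat positive starting estimate
    for _ in range(iters):
        blurred = convolve(x, h)
        ratio = [yi / max(bi, 1e-12) for yi, bi in zip(y, blurred)]
        correction = convolve(ratio, h)  # H^T = H for a symmetric kernel
        x = [xi * ci / ni for xi, ci, ni in zip(x, correction, norm)]
    return x

truth = [1.0] * 6 + [6.0] + [1.0] * 6    # background plus a spike
y = convolve(truth, KERNEL)              # noiseless blurred observation

x_hat = richardson_lucy(y, KERNEL, 100)
err_blur = sum(abs(a - b) for a, b in zip(y, truth))
err_rl = sum(abs(a - b) for a, b in zip(x_hat, truth))
print(err_blur, err_rl)
```

    The multiplicative update keeps the estimate nonnegative, which is why Richardson-Lucy suits photon-counting (Poisson) data; with noisy data the iteration is typically stopped early or regularized, as the chapter discusses.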

  14. Physical principles for scalable neural recording

    PubMed Central

    Zamft, Bradley M.; Maguire, Yael G.; Shapiro, Mikhail G.; Cybulski, Thaddeus R.; Glaser, Joshua I.; Amodei, Dario; Stranges, P. Benjamin; Kalhor, Reza; Dalrymple, David A.; Seo, Dongjin; Alon, Elad; Maharbiz, Michel M.; Carmena, Jose M.; Rabaey, Jan M.; Boyden, Edward S.; Church, George M.; Kording, Konrad P.

    2013-01-01

    Simultaneously measuring the activities of all neurons in a mammalian brain at millisecond resolution is a challenge beyond the limits of existing techniques in neuroscience. Entirely new approaches may be required, motivating an analysis of the fundamental physical constraints on the problem. We outline the physical principles governing brain activity mapping using optical, electrical, magnetic resonance, and molecular modalities of neural recording. Focusing on the mouse brain, we analyze the scalability of each method, concentrating on the limitations imposed by spatiotemporal resolution, energy dissipation, and volume displacement. Based on this analysis, all existing approaches require orders of magnitude improvement in key parameters. Electrical recording is limited by the low multiplexing capacity of electrodes and their lack of intrinsic spatial resolution, optical methods are constrained by the scattering of visible light in brain tissue, magnetic resonance is hindered by the diffusion and relaxation timescales of water protons, and the implementation of molecular recording is complicated by the stochastic kinetics of enzymes. Understanding the physical limits of brain activity mapping may provide insight into opportunities for novel solutions. For example, unconventional methods for delivering electrodes may enable unprecedented numbers of recording sites, embedded optical devices could allow optical detectors to be placed within a few scattering lengths of the measured neurons, and new classes of molecularly engineered sensors might obviate cumbersome hardware architectures. We also study the physics of powering and communicating with microscale devices embedded in brain tissue and find that, while radio-frequency electromagnetic data transmission suffers from a severe power–bandwidth tradeoff, communication via infrared light or ultrasound may allow high data rates due to the possibility of spatial multiplexing. 
The use of embedded local recording and wireless data transmission would only be viable, however, given major improvements to the power efficiency of microelectronic devices. PMID:24187539

  15. Issues and Strategies in Solving Multidisciplinary Optimization Problems

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya

    2013-01-01

    Optimization research at NASA Glenn Research Center has addressed the design of structures, aircraft, and airbreathing propulsion engines. The accumulated multidisciplinary design activity is collected under a testbed entitled COMETBOARDS. Several issues were encountered during the solution of the problems. Four issues and the strategies adopted for their resolution are discussed, followed by a discussion of analytical methods that is limited to structural design applications. An optimization process can lead to an inefficient local solution; this deficiency was encountered during the design of an engine component, and the limitation was overcome through an augmentation of animation into optimization. Optimum solutions obtained were infeasible for aircraft and airbreathing propulsion engine problems; alleviating this deficiency required a cascading of multiple algorithms. Profile optimization of a beam produced an irregular shape, and engineering intuition restored the regular shape for the beam. The solution obtained for a cylindrical shell by a subproblem strategy converged to a design that can be difficult to manufacture; resolution of this issue remains a challenge. The issues and resolutions are illustrated through a set of problems: design of an engine component, synthesis of a subsonic aircraft, operation optimization of a supersonic engine, design of a wave-rotor-topping device, profile optimization of a cantilever beam, and design of a cylindrical shell. This chapter provides a cursory account of the issues; cited references provide detailed discussion of the topics. The design of a structure can also be generated by the traditional method and by the stochastic design concept. Merits and limitations of the three methods (the traditional method, the optimization method, and the stochastic concept) are illustrated. In the traditional method, the constraints are manipulated to obtain the design and the weight is back-calculated. In design optimization, the weight of a structure becomes the merit function, with constraints imposed on failure modes, and an optimization algorithm is used to generate the solution. The stochastic design concept accounts for uncertainties in loads, material properties, and other parameters, and the solution is obtained by solving a design optimization problem for a specified reliability. Acceptable solutions can be produced by all three methods. The variation in the weight calculated by the methods was found to be modest, though some variation was noticed in the designs themselves, which may be attributed to structural indeterminacy. It is prudent to develop a design by all three methods prior to fabrication. The traditional design method can be improved when the simplified sensitivities of the behavior constraints are used; such sensitivities can reduce design calculations and may have the potential to unify the traditional and optimization methods. Weight versus reliability traced out an inverted-S-shaped graph, the center of which corresponded to the mean-valued design. A heavy design with weight approaching infinity could be produced for a near-zero rate of failure, while weight can be reduced to a small value for the most failure-prone design. Probabilistic modeling of loads and material properties remained a challenge.

  16. X-ray ptychography

    NASA Astrophysics Data System (ADS)

    Pfeiffer, Franz

    2018-01-01

    X-ray ptychographic microscopy combines the advantages of raster-scanning X-ray microscopy with the more recently developed techniques of coherent diffraction imaging. It is limited neither by the fabricational challenges associated with X-ray optics nor by the requirement of isolated specimen preparation, and it offers, in principle, wavelength-limited resolution as well as a stable solution to the phase problem. In this Review, we discuss the basic principles of X-ray ptychography and summarize the main milestones in the evolution of X-ray ptychographic microscopy and tomography over the past ten years, since its first demonstration with X-rays. We also highlight the potential for applications in the life and materials sciences, and discuss the latest advanced concepts and probable future developments.

  17. Adaptive optics technique to overcome the turbulence in a large-aperture collimator.

    PubMed

    Mu, Quanquan; Cao, Zhaoliang; Li, Dayu; Hu, Lifa; Xuan, Li

    2008-03-20

    A collimator with a long focal length and large aperture is a very important apparatus for testing large-aperture optical systems, but it suffers from internal air turbulence, which may limit its performance and reduce testing accuracy. To overcome this problem, an adaptive optics system is introduced to compensate for the turbulence. This system includes a liquid-crystal-on-silicon device as the wavefront corrector and a Shack-Hartmann wavefront sensor. After correction, we obtain a plane wavefront with an rms of about 0.017λ (λ = 0.6328 μm) emitted from an aperture larger than 500 mm in diameter. The whole system reaches diffraction-limited resolution.

  18. Tomographic phase microscopy: principles and applications in bioimaging [Invited

    PubMed Central

    Jin, Di; Zhou, Renjie; Yaqoob, Zahid; So, Peter T. C.

    2017-01-01

    Tomographic phase microscopy (TPM) is an emerging optical microscopic technique for bioimaging. TPM uses digital holographic measurements of complex scattered fields to reconstruct three-dimensional refractive index (RI) maps of cells with diffraction-limited resolution by solving inverse scattering problems. In this paper, we review the developments of TPM from the fundamental physics to its applications in bioimaging. We first provide a comprehensive description of the tomographic reconstruction physical models used in TPM. The RI map reconstruction algorithms and various regularization methods are discussed. Selected TPM applications for cellular imaging, particularly in hematology, are reviewed. Finally, we examine the limitations of current TPM systems, propose future solutions, and envision promising directions in biomedical research. PMID:29386746

  19. Diffractive X-Ray Telescopes

    NASA Technical Reports Server (NTRS)

    Skinner, Gerald K.

    2010-01-01

    Diffractive X-ray telescopes, using zone plates, phase Fresnel lenses, or related optical elements, have the potential to provide astronomers with true imaging capability at resolutions many orders of magnitude better than available in any other waveband. Lenses that would be relatively easy to fabricate could have an angular resolution of the order of micro-arcseconds or even better, which would allow, for example, imaging of the distorted spacetime in the immediate vicinity of the supermassive black holes at the centers of active galaxies. What, then, is precluding their immediate adoption? Extremely long focal lengths, very limited bandwidth, and difficulty stabilizing the image are the main problems. The history and status of the development of such lenses are reviewed here, and the prospects for managing the challenges they present are discussed.

  20. Research and development of novel wireless digital capacitive displacement sensor

    NASA Astrophysics Data System (ADS)

    Cui, Junning; He, Zhangqiang; Sun, Tao; Bian, Xingyuan; Han, Lu

    2015-02-01

    To solve the problem of noncontact, wireless, and nonmagnetic displacement sensing with nanometer resolution within the critically limited space available for ultraprecision displacement monitoring in the Joule balance device, a novel wireless digital capacitive displacement sensor (WDCDS) is proposed. The WDCDS is fabricated from brass and other nonmagnetic materials and powered by a small internal battery; a small integrated circuit assembled inside converts and processes the capacitive signal, and low-power Bluetooth is used for wireless signal transmission and communication. Experimental results show that the proposed WDCDS has a resolution of better than 1 nm and a nonlinearity of 0.077%; it is therefore a suitable design for ultraprecision noncontact displacement monitoring in the Joule balance device, meeting the demands for wireless operation, nonmagnetic construction, and miniaturized size.

  1. Strategy for large-scale isolation of enantiomers in drug discovery.

    PubMed

    Leek, Hanna; Thunberg, Linda; Jonson, Anna C; Öhlén, Kristina; Klarqvist, Magnus

    2017-01-01

    A strategy for large-scale chiral resolution is illustrated by the isolation of a pure enantiomer from a 5 kg batch. Results from supercritical fluid chromatography are presented and compared with normal-phase liquid chromatography. Solubility of the compound in the supercritical mobile phase was shown to be the limiting factor. To circumvent this, extraction injection was used but proved inefficient for this compound. Finally, a method for chiral resolution by crystallization was developed and applied to give a diastereomeric salt with an enantiomeric excess of 99% at a 91% yield. Direct access to a diverse separation toolbox is shown to be essential for solving separation problems in the most cost- and time-efficient way.

  2. Genetic interaction analysis of point mutations enables interrogation of gene function at a residue-level resolution

    PubMed Central

    Braberg, Hannes; Moehle, Erica A.; Shales, Michael; Guthrie, Christine; Krogan, Nevan J.

    2014-01-01

    We have achieved a residue-level resolution of genetic interaction mapping – a technique that measures how the function of one gene is affected by the alteration of a second gene – by analyzing point mutations. Here, we describe how to interpret point mutant genetic interactions, and outline key applications for the approach, including interrogation of protein interaction interfaces and active sites, and examination of post-translational modifications. Genetic interaction analysis has proven effective for characterizing cellular processes; however, to date, systematic high-throughput genetic interaction screens have relied on gene deletions or knockdowns, which limits the resolution of gene function analysis and poses problems for multifunctional genes. Our point mutant approach addresses these issues, and further provides a tool for in vivo structure-function analysis that complements traditional biophysical methods. We also discuss the potential for genetic interaction mapping of point mutations in human cells and its application to personalized medicine. PMID:24842270

  3. A High-Speed, Event-Driven, Active Pixel Sensor Readout for Photon-Counting Microchannel Plate Detectors

    NASA Technical Reports Server (NTRS)

    Kimble, Randy A.; Pain, B.; Norton, T. J.; Haas, P.; Fisher, Richard R. (Technical Monitor)

    2001-01-01

    Silicon array readouts for microchannel plate intensifiers offer several attractive features. In this class of detector, the electron cloud output of the MCP intensifier is converted to visible light by a phosphor; that light is then fiber-optically coupled to the silicon array. In photon-counting mode, the resulting light splashes on the silicon array are recognized and centroided to fractional-pixel accuracy by off-chip electronics. This process can result in very high (MCP-limited) spatial resolution for the readout while operating at a modest MCP gain (desirable for dynamic range and long-term stability). The principal limitation of intensified CCD systems of this type is their severely limited local dynamic range, as accurate photon counting is achieved only if there are no overlapping event splashes within the frame time of the device. This problem can be ameliorated somewhat by processing events only in pre-selected windows of interest or by using an addressable charge injection device (CID) for the readout array. We are currently pursuing the development of an intriguing alternative readout concept based on an event-driven CMOS Active Pixel Sensor. APS technology permits the incorporation of discriminator circuitry within each pixel. When coupled with suitable CMOS logic outside the array area, the discriminator circuitry can be used to trigger the readout of small sub-array windows only when and where an event splash has been detected, completely eliminating the local dynamic range problem while achieving a high global count rate capability and maintaining high spatial resolution. We elaborate on this concept and present our progress toward implementing an event-driven APS readout.
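
The off-chip centroiding step can be sketched as a center-of-mass estimate over a small window around each above-threshold local maximum. This toy version (the threshold, window size, and tie handling are all simplifications of a real event processor) shows how fractional-pixel positions arise:

```python
import numpy as np

def centroid_events(frame, threshold, win=1):
    """Find local maxima above threshold and centroid a (2*win+1)^2 window
    around each to fractional-pixel accuracy (center-of-mass estimator).
    Plateaus of equal-valued maxima would each be reported in this sketch."""
    events = []
    h, w = frame.shape
    for y in range(win, h - win):
        for x in range(win, w - win):
            v = frame[y, x]
            patch = frame[y - win:y + win + 1, x - win:x + win + 1]
            if v < threshold or v < patch.max():
                continue  # below threshold, or not the local peak
            ys, xs = np.mgrid[y - win:y + win + 1, x - win:x + win + 1]
            total = patch.sum()
            events.append((float((ys * patch).sum() / total),
                           float((xs * patch).sum() / total)))
    return events
```

An asymmetric splash thus yields a sub-pixel position, which is how the readout achieves MCP-limited rather than pixel-limited resolution.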

  4. Enabling low-noise null-point scanning thermal microscopy by the optimization of scanning thermal microscope probe through a rigorous theory of quantitative measurement.

    PubMed

    Hwang, Gwangseok; Chung, Jaehun; Kwon, Ohmyoung

    2014-11-01

    The application of conventional scanning thermal microscopy (SThM) is severely limited by three major problems: (i) distortion of the measured signal due to heat transfer through the air, (ii) the unknown and variable value of the tip-sample thermal contact resistance, and (iii) perturbation of the sample temperature due to the heat flux through the tip-sample thermal contact. Recently, we proposed null-point scanning thermal microscopy (NP SThM) as a way of overcoming these problems in principle by tracking the thermal equilibrium between the end of the SThM tip and the sample surface. However, in order to obtain high spatial resolution, which is the primary motivation for SThM, NP SThM requires an extremely sensitive SThM probe that can trace the vanishingly small heat flux through the tip-sample nano-thermal contact. Herein, we derive a relation between the spatial resolution and the design parameters of a SThM probe, optimize the thermal and electrical design, and develop a batch-fabrication process. We also quantitatively demonstrate significantly improved sensitivity, lower measurement noise, and higher spatial resolution of the fabricated SThM probes. By utilizing the exceptional performance of these fabricated probes, we show that NP SThM can be used to obtain a quantitative temperature profile with nanoscale resolution independent of the changing tip-sample thermal contact resistance and without perturbation of the sample temperature or distortion due to the heat transfer through the air.

  5. Fault Management Practice: A Roadmap for Improvement

    NASA Technical Reports Server (NTRS)

    Fesq, Lorraine M.; Oberhettinger, David

    2010-01-01

    Autonomous fault management (FM) is critical for deep space and planetary missions where the limited communication opportunities may prevent timely intervention by ground control. Evidence of pervasive architecture, design, and verification/validation problems with NASA FM engineering has been revealed both during technical reviews of spaceflight missions and in flight. These problems include FM design changes required late in the life-cycle, insufficient project insight into the extent of FM testing required, unexpected test results that require resolution, spacecraft operational limitations because certain functions were not tested, and in-flight anomalies and mission failures attributable to fault management. A recent NASA initiative has characterized the FM state-of-practice throughout the spacecraft development community and identified common NASA, DoD, and commercial concerns that can be addressed in the near term through the development of a FM Practitioner's Handbook and the formation of a FM Working Group. Initial efforts will focus on standardizing FM terminology, establishing engineering processes and tools, and training.

  6. A design method for high performance seismic data acquisition based on oversampling delta-sigma modulation

    NASA Astrophysics Data System (ADS)

    Gao, Shanghua; Xue, Bing

    2017-04-01

    The dynamic range of the currently most widely used 24-bit seismic data acquisition devices is 10-20 dB lower than that of broadband seismometers, which can affect the completeness of seismic waveform recordings under certain conditions. This problem is not easy to solve because analog-to-digital converter (ADC) chips with more than 24 bits are not available on the market, so the key difficulty for higher-resolution data acquisition devices lies in achieving an ADC circuit with more than 24 bits. In this paper, we propose a method in which an adder, an integrator, a digital-to-analog converter chip, a field-programmable gate array, and an existing low-resolution ADC chip are used to build a third-order 16-bit oversampling delta-sigma modulator. This modulator is equipped with a digital decimation filter, thus forming a complete analog-to-digital converting circuit. Experimental results show that, within the 0.1-40 Hz frequency range, the circuit board's dynamic range reaches 158.2 dB, its resolution reaches 25.99 bits, and its linearity error is below 2.5 ppm, which is better than what is achieved by the commercial 24-bit ADC chips ADS1281 and CS5371. This demonstrates that the proposed method may alleviate or even solve the amplitude-limitation problem that broadband observation systems so commonly face during strong earthquakes.
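
The gain from oversampling noise shaping can be checked against the standard textbook estimate for an ideal order-L delta-sigma modulator with an N-bit internal quantizer. The parameters below are illustrative, not the paper's circuit values:

```python
import math

def sqnr_db(order, osr, quantizer_bits):
    """Ideal peak SQNR of an order-L delta-sigma modulator with an N-bit
    internal quantizer and oversampling ratio OSR (textbook estimate):
    SQNR = 6.02N + 1.76 - 10*log10(pi^(2L)/(2L+1)) + (2L+1)*10*log10(OSR)."""
    L, N = order, quantizer_bits
    return (6.02 * N + 1.76
            - 10 * math.log10(math.pi ** (2 * L) / (2 * L + 1))
            + (2 * L + 1) * 10 * math.log10(osr))

# Hypothetical: a third-order loop around a modest 4-bit quantizer.
for osr in (32, 64, 128):
    print(osr, round(sqnr_db(order=3, osr=osr, quantizer_bits=4), 1))
```

For a third-order loop, each doubling of the oversampling ratio buys about (2L+1)*10*log10(2) = 21 dB, which is why noise shaping around a low-resolution ADC can reach an effective resolution well past 24 bits.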

  7. Comparison of seismic waveform inversion results for the rupture history of a finite fault: application to the 1986 North Palm Springs, California, earthquake

    USGS Publications Warehouse

    Hartzell, S.

    1989-01-01

    The July 8, 1986, North Palm Springs earthquake is used as a basis for comparison of several different approaches to solving for the rupture history of a finite fault. The inversion of different waveform data is considered: both teleseismic P waveforms and local strong ground motion records. Linear parametrizations for slip amplitude are compared with nonlinear parametrizations for both slip amplitude and rupture time. Inversions using both synthetic and empirical Green's functions are considered. In general, accurate Green's functions are more readily calculable for the teleseismic problem, where simple ray theory and flat-layered velocity structures are usually sufficient. However, uncertainties in the variation of t* with frequency most strongly limit the resolution of teleseismic inversions. A set of empirical Green's functions that are well recorded at teleseismic distances could avoid the uncertainties in attenuation. In the inversion of strong motion data, the accurate calculation of propagation path effects other than attenuation is the limiting factor in the resolution of source parameters.

  8. Real-time data acquisition of commercial microwave link networks for hydrometeorological applications

    NASA Astrophysics Data System (ADS)

    Chwala, Christian; Keis, Felix; Kunstmann, Harald

    2016-03-01

    The usage of data from commercial microwave link (CML) networks for scientific purposes is becoming increasingly popular, in particular for rain rate estimation. However, data acquisition and availability are still a crucial problem and limit research possibilities. To overcome this issue, we have developed an open-source data acquisition system based on the Simple Network Management Protocol (SNMP). It is able to record transmitted and received signal levels of a large number of CMLs simultaneously with a temporal resolution of up to 1 s. We operate this system at Ericsson Germany, acquiring data from 450 CMLs with minutely real-time transfer to our database. Our data acquisition system is not limited to a particular CML hardware model or manufacturer, though. We demonstrate this by running the same system for CMLs of a different manufacturer, operated by an alpine ski resort in Germany. There, the data acquisition is running simultaneously for four CMLs with a temporal resolution of 1 s. We present an overview of our system, describe the details of the necessary SNMP requests and show results from its operational application.
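
Once the signal levels are in the database, rain rate is typically estimated by inverting the k-R power law between rain rate and the rain-induced specific attenuation along the path. The coefficients below are placeholders; in practice they depend on frequency and polarization (cf. ITU-R P.838):

```python
def rain_rate(total_att_db, baseline_db, path_km, a=0.2, b=1.05):
    """Invert the k-R power law k = a * R**b, where k is the rain-induced
    specific attenuation in dB/km obtained from the transmitted-minus-received
    signal level after subtracting a dry-weather baseline. The default a and b
    are placeholder values, not calibrated coefficients."""
    k = max(total_att_db - baseline_db, 0.0) / path_km
    return (k / a) ** (1.0 / b)
```

With per-second signal levels from the acquisition system, this inversion yields path-averaged rain rates at a temporal resolution no rain-gauge network can match.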

  9. Real time data acquisition of commercial microwave link networks for hydrometeorological applications

    NASA Astrophysics Data System (ADS)

    Chwala, C.; Keis, F.; Kunstmann, H.

    2015-11-01

    The usage of data from commercial microwave link (CML) networks for scientific purposes is becoming increasingly popular, in particular for rain rate estimation. However, data acquisition and availability are still a crucial problem and limit research possibilities. To overcome this issue, we have developed an open-source data acquisition system based on the Simple Network Management Protocol (SNMP). It is able to record transmitted and received signal levels of a large number of CMLs simultaneously with a temporal resolution of up to one second. We operate this system at Ericsson Germany, acquiring data from 450 CMLs with minutely real-time transfer to our database. Our data acquisition system is not limited to a particular CML hardware model or manufacturer, though. We demonstrate this by running the same system for CMLs of a different manufacturer, operated by an alpine ski resort in Germany. There, the data acquisition is running simultaneously for four CMLs with a temporal resolution of one second. We present an overview of our system, describe the details of the necessary SNMP requests and show results from its operational application.

  10. XFEL diffraction: Developing processing methods to optimize data quality

    DOE PAGES

    Sauter, Nicholas K.

    2015-01-29

    Serial crystallography, using either femtosecond X-ray pulses from free-electron laser sources or short synchrotron-radiation exposures, has the potential to reveal metalloprotein structural details while minimizing damage processes. However, deriving a self-consistent set of Bragg intensities from numerous still-crystal exposures remains a difficult problem, with optimal protocols likely to be quite different from those well established for rotation photography. Here several data processing issues unique to serial crystallography are examined. It is found that the limiting resolution differs for each shot, an effect that is likely to be due to both the sample heterogeneity and pulse-to-pulse variation in experimental conditions. Shots with lower resolution limits produce lower-quality models for predicting Bragg spot positions during the integration step. Also, still shots by their nature record only partial measurements of the Bragg intensity. An approximate model that corrects to the full-spot equivalent (with the simplifying assumption that the X-rays are monochromatic) brings the distribution of intensities closer to that expected from an ideal crystal, and improves the sharpness of anomalous difference Fourier peaks indicating metal positions.

  11. Shotgun Protein Sequencing with Meta-contig Assembly*

    PubMed Central

    Guthals, Adrian; Clauser, Karl R.; Bandeira, Nuno

    2012-01-01

    Full-length de novo sequencing from tandem mass (MS/MS) spectra of unknown proteins such as antibodies or proteins from organisms with unsequenced genomes remains a challenging open problem. Conventional algorithms designed to individually sequence each MS/MS spectrum are limited by incomplete peptide fragmentation or low signal to noise ratios and tend to result in short de novo sequences at low sequencing accuracy. Our shotgun protein sequencing (SPS) approach was developed to ameliorate these limitations by first finding groups of unidentified spectra from the same peptides (contigs) and then deriving a consensus de novo sequence for each assembled set of spectra (contig sequences). But whereas SPS enables much more accurate reconstruction of de novo sequences longer than can be recovered from individual MS/MS spectra, it still requires error-tolerant matching to homologous proteins to group smaller contig sequences into full-length protein sequences, thus limiting its effectiveness on sequences from poorly annotated proteins. Using low and high resolution CID and high resolution HCD MS/MS spectra, we address this limitation with a Meta-SPS algorithm designed to overlap and further assemble SPS contigs into Meta-SPS de novo contig sequences extending as long as 100 amino acids at over 97% accuracy without requiring any knowledge of homologous protein sequences. We demonstrate Meta-SPS using distinct MS/MS data sets obtained with separate enzymatic digestions and discuss how the remaining de novo sequencing limitations relate to MS/MS acquisition settings. PMID:22798278

  12. Shotgun protein sequencing with meta-contig assembly.

    PubMed

    Guthals, Adrian; Clauser, Karl R; Bandeira, Nuno

    2012-10-01

    Full-length de novo sequencing from tandem mass (MS/MS) spectra of unknown proteins such as antibodies or proteins from organisms with unsequenced genomes remains a challenging open problem. Conventional algorithms designed to individually sequence each MS/MS spectrum are limited by incomplete peptide fragmentation or low signal to noise ratios and tend to result in short de novo sequences at low sequencing accuracy. Our shotgun protein sequencing (SPS) approach was developed to ameliorate these limitations by first finding groups of unidentified spectra from the same peptides (contigs) and then deriving a consensus de novo sequence for each assembled set of spectra (contig sequences). But whereas SPS enables much more accurate reconstruction of de novo sequences longer than can be recovered from individual MS/MS spectra, it still requires error-tolerant matching to homologous proteins to group smaller contig sequences into full-length protein sequences, thus limiting its effectiveness on sequences from poorly annotated proteins. Using low and high resolution CID and high resolution HCD MS/MS spectra, we address this limitation with a Meta-SPS algorithm designed to overlap and further assemble SPS contigs into Meta-SPS de novo contig sequences extending as long as 100 amino acids at over 97% accuracy without requiring any knowledge of homologous protein sequences. We demonstrate Meta-SPS using distinct MS/MS data sets obtained with separate enzymatic digestions and discuss how the remaining de novo sequencing limitations relate to MS/MS acquisition settings.

  13. SLIME: scattering labeled imaging of microvasculature in excised tissues using OCT (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Liu, Yehe; Gu, Shi; Watanabe, Michiko; Rollins, Andrew M.; Jenkins, Michael W.

    2017-02-01

    Abnormal coronary development causes various health problems. However, coronary development remains one of the most neglected areas in developmental cardiology due to technological limitations. Currently, there is no robust method available to map the microvasculature throughout the entire embryonic heart in 3D. This is a challenging task because it requires micron-level resolution over a large field of view together with sufficient imaging depth. Speckle-variance optical coherence tomography (OCT) has reasonable resolution for coronary vessel mapping, but limited penetration depth and sensitivity to bulk motion have made it impossible to apply this method to late-stage beating hearts. Some success has been achieved with coronary dye perfusion, but smaller vessels are not efficiently stained and penetration depth is still an issue. To address this problem, we present an OCT imaging procedure using optical clearing and a contrast agent (titanium dioxide) that enables 3D mapping of the coronary microvasculature in developing embryonic hearts. In brief, the hearts of stage 36 quail embryos were perfused with a low-viscosity mixture of polyvinyl alcohol (PVA) and titanium dioxide through the aorta using micropipette injection. After perfusion, the viscosity of the solution was increased by crosslinking the PVA polymer chains with borate ions. The tissue was then optically cleared. The titanium dioxide particles remaining in the coronaries provided a strong OCT signal, while the rest of the cardiac structures became relatively transparent. Using this technique, we are able to investigate coronary morphologies in different disease models.

  14. Future Prospects for Very High Angular Resolution Imaging in the UV/Optical

    NASA Astrophysics Data System (ADS)

    Allen, R. J.

    2004-05-01

    Achieving the most demanding science goals outlined by the previous speakers will ultimately require the development of coherent space-based arrays of UV/Optical light collectors spread over distances of hundreds of meters. It is possible to envisage "in situ" assembly of large segmented filled-aperture telescopes in space using components ferried up with conventional launchers. However, the cost will grow roughly as the mass of material required, and this will ultimately limit the sizes of the apertures we can afford. Furthermore, since the collecting area and the angular resolution are coupled for diffraction-limited filled apertures, the sensitivity may be much higher than is actually required to do the science. Constellations of collectors deployed over large areas as interferometer arrays or sparse apertures offer the possibility of independently tailoring the angular resolution and the sensitivity in order to optimally match the science requirements. Several concept designs have been proposed to provide imaging data for different classes of targets such as protoplanetary disks, the nuclear regions of the nearest active galaxies, and the surfaces of stars of different types. Constellations of identical collectors may be built and launched at lower cost through mass production, but new challenges arise when they have to be deployed. The "aperture" synthesized is only as good as the accuracy with which the individual collectors can be placed and held to the required figure. This "station-keeping" problem is one of the most important engineering problems to be solved before the promise of virtually unlimited angular resolution in the UV/Optical can be realized. Among the attractive features of an array of free-flying collectors configured for imaging is the fact that the figure errors of the "aperture" so produced may be much more random than is the case for monolithic or segmented telescopes. This can result in a significant improvement in the dynamic range and permit imaging of faint objects near much brighter nearby sources, a task presently reserved for specially designed coronagraphs on filled apertures.

  15. Acceleration of image-based resolution modelling reconstruction using an expectation maximization nested algorithm.

    PubMed

    Angelis, G I; Reader, A J; Markiewicz, P J; Kotasidis, F A; Lionheart, W R; Matthews, J C

    2013-08-07

    Recent studies have demonstrated the benefits of a resolution model within iterative reconstruction algorithms in an attempt to account for effects that degrade the spatial resolution of the reconstructed images. However, these algorithms suffer from slower convergence rates, compared to algorithms where no resolution model is used, due to the additional need to solve an image deconvolution problem. In this paper, a recently proposed algorithm, which decouples the tomographic and image deconvolution problems within an image-based expectation maximization (EM) framework, was evaluated. This separation is convenient because more computational effort can be placed on the image deconvolution problem and thereby accelerate convergence. Since the computational cost of solving the image deconvolution problem is relatively small, multiple image-based EM iterations do not significantly increase the overall reconstruction time. The proposed algorithm was evaluated using 2D simulations, as well as measured 3D data acquired on the high-resolution research tomograph. Results showed that bias reduction can be accelerated by interleaving multiple iterations of the image-based EM algorithm, which solves the resolution model problem, with a single EM iteration solving the tomographic problem. Significant improvements were observed particularly for voxels located on the boundaries between regions of high contrast within the object being imaged and for small regions of interest, where resolution recovery is usually more challenging. Minor differences were observed between the proposed nested algorithm and the single iteration normally performed when an optimal number of iterations is used for each algorithm. However, with the proposed nested approach convergence is significantly accelerated, enabling reconstruction with far fewer tomographic iterations (up to 70% fewer for small regions). Nevertheless, the optimal number of nested image-based EM iterations is difficult to define and should be selected according to the given application.
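
The image-space sub-problem that gets iterated multiple times per tomographic update is an EM (Richardson-Lucy type) deconvolution. A minimal sketch, using a toy box PSF rather than a measured scanner resolution model, looks like this:

```python
import numpy as np

def conv2_same(img, k):
    """Direct 'same'-size 2D convolution with zero padding (the box PSFs
    used here are symmetric, so flipping is immaterial)."""
    kh, kw = k.shape
    p = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros_like(img, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += k[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def image_em_deconv(blurred, psf, n_iter):
    """Image-space EM (Richardson-Lucy) deconvolution: the sub-problem the
    nested algorithm runs several times per tomographic EM iteration."""
    est = np.full(blurred.shape, blurred.mean(), dtype=float)
    psf_flip = psf[::-1, ::-1]
    for _ in range(n_iter):
        conv = conv2_same(est, psf)
        ratio = blurred / (conv + 1e-12)      # data / model prediction
        est *= conv2_same(ratio, psf_flip)    # multiplicative EM update
    return est
```

Because the update is multiplicative and flux-conserving, extra sub-iterations sharpen the estimate without changing total counts, which is what makes interleaving them with the tomographic step cheap.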

  16. Eigenspace perturbations for structural uncertainty estimation of turbulence closure models

    NASA Astrophysics Data System (ADS)

    Jofre, Lluis; Mishra, Aashwin; Iaccarino, Gianluca

    2017-11-01

    With the present state of computational resources, a purely numerical resolution of turbulent flows encountered in engineering applications is not viable. Consequently, investigations into turbulence rely on various degrees of modeling. Archetypal among these variable-resolution approaches are RANS models with two-equation closures, and subgrid-scale models in LES. However, owing to the simplifications introduced during model formulation, the fidelity of all such models is limited, and therefore the explicit quantification of the predictive uncertainty is essential. In such a scenario, the ideal uncertainty estimation procedure must be agnostic to modeling resolution, methodology, and the nature or level of the model filter. The procedure should be able to give reliable prediction intervals for different Quantities of Interest, over varied flows and flow conditions, and at widely different levels of modeling resolution. In this talk, we present and substantiate the eigenspace perturbation framework as an uncertainty estimation paradigm that meets these criteria. Commencing from a broad overview, we outline the details of this framework at different modeling resolutions. Then, using benchmark flows along with engineering problems, the efficacy of this procedure is established. This research was partially supported by NNSA under the Predictive Science Academic Alliance Program (PSAAP) II, and by DARPA under the Enabling Quantification of Uncertainty in Physical Systems (EQUiPS) project (technical monitor: Dr Fariba Fahroo).
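
At the RANS level, eigenspace perturbation operates on the eigenvalues of the Reynolds-stress anisotropy tensor, pushing them toward realizability limits. The sketch below shows only the decompose-shift-reassemble step, with a hypothetical perturbation magnitude; the actual framework's choice of perturbation magnitudes and target states is more involved:

```python
import numpy as np

def perturb_anisotropy(R, k, delta, target_vals):
    """Eigenspace perturbation sketch: form the anisotropy a = R/(2k) - I/3,
    shift its eigenvalues a fraction delta toward a limiting realizable state
    (e.g. one-component turbulence, eigenvalues (-1/3, -1/3, 2/3)), and
    reassemble the stress. Turbulent kinetic energy k is preserved because
    both eigenvalue sets are trace-free."""
    a = R / (2.0 * k) - np.eye(3) / 3.0
    vals, vecs = np.linalg.eigh(a)
    vals_new = (1.0 - delta) * vals + delta * np.asarray(target_vals, float)
    a_new = vecs @ np.diag(vals_new) @ vecs.T
    return 2.0 * k * (a_new + np.eye(3) / 3.0)
```

Running the flow solver with stresses perturbed toward each limiting state brackets the Quantities of Interest, yielding the model-form uncertainty interval.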

  17. Multispectral image enhancement processing for microsat-borne imager

    NASA Astrophysics Data System (ADS)

    Sun, Jianying; Tan, Zheng; Lv, Qunbo; Pei, Linlin

    2017-10-01

    With the rapid development of remote sensing imaging technology, the microsatellite, a class of tiny spacecraft, has emerged over the past few years. Many studies have contributed to miniaturizing satellites for imaging purposes. Generally speaking, microsatellites weigh less than 100 kilograms, sometimes less than 50 kilograms, making them roughly the size of a common miniature refrigerator. However, the optical system design can hardly be made ideal because of the satellite's volume and weight limitations. In most cases, the unprocessed data captured by the imager on the microsatellite cannot meet application needs; spatial resolution is the key problem. For remote sensing applications, the higher the spatial resolution of the images we obtain, the wider the range of fields in which we can apply them. Consequently, how to use super-resolution (SR) and image fusion to enhance image quality deserves study. Our team, the Key Laboratory of Computational Optical Imaging Technology, Academy of Opto-Electronics, is devoted to designing high-performance microsat-borne imagers and high-efficiency image processing algorithms. This paper presents a multispectral image enhancement framework for space-borne imagery, combining pan-sharpening and super-resolution techniques to address the limited spatial resolution of microsatellites. We test the framework on remote sensing images acquired by the CX6-02 satellite and report the SR performance. The experiments illustrate that the proposed approach provides high-quality images.
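
As a point of reference for the pan-sharpening half of such a pipeline, a classical Brovey-style component substitution can be written in a few lines. This is a generic baseline, not the paper's algorithm:

```python
import numpy as np

def brovey_pansharpen(ms, pan):
    """Brovey-style component substitution: scale each multispectral band by
    the ratio of the panchromatic image to the multispectral intensity.
    ms: (bands, h, w) multispectral cube already upsampled to the pan grid;
    pan: (h, w) panchromatic image."""
    intensity = ms.mean(axis=0) + 1e-12   # avoid division by zero
    return ms * (pan / intensity)[None, :, :]
```

The output inherits the pan band's spatial detail while keeping the band ratios of the multispectral cube, which is why pan-sharpening pairs naturally with a subsequent or joint super-resolution step.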

  18. Decision-problem state analysis methodology

    NASA Technical Reports Server (NTRS)

    Dieterly, D. L.

    1980-01-01

    A methodology for analyzing a decision-problem state is presented. The methodology is based on the analysis of an incident in terms of the set of decision-problem conditions encountered. By decomposing the events that preceded an unwanted outcome, such as an accident, into the set of decision-problem conditions that were resolved, a more comprehensive understanding is possible. Not all human-error accidents are caused by faulty decision-problem resolutions, but this appears to be one of the major causes of accidents cited in the literature. A three-phase methodology is presented which accommodates a wide spectrum of events. It allows for a systems content analysis of the available data to establish: (1) the resolutions made, (2) alternatives not considered, (3) resolutions missed, and (4) possible conditions not considered. The product is a map of the decision-problem conditions that were encountered, as well as a projected, assumed set of conditions that should have been considered. The application of this methodology introduces a systematic approach to decomposing the events that transpired prior to the accident. The initial emphasis is on decision and problem resolution. The technique allows for a standardized method of decomposing an accident into a scenario which may be used for review or for the development of a training simulation.

  19. Subpixel target detection and enhancement in hyperspectral images

    NASA Astrophysics Data System (ADS)

    Tiwari, K. C.; Arora, M.; Singh, D.

    2011-06-01

    Hyperspectral data, owing to the higher information content afforded by its higher spectral resolution, is increasingly being used for various remote sensing applications, including information extraction at the subpixel level. However, matching fine-spatial-resolution data is usually lacking, particularly for target detection applications. Thus, there always exists a tradeoff between spectral and spatial resolutions due to considerations of the type of application, its cost, and the associated analytical and computational complexities. Typically, whenever an object, whether manmade, natural, or any ground cover class (called a target, endmember, component, or class), is spectrally resolved but not spatially resolved, mixed pixels result in the image. Numerous disparate manmade and/or natural substances may thus occur inside such mixed pixels, giving rise to mixed pixel classification or subpixel target detection problems. Various spectral unmixing models, such as Linear Mixture Modeling (LMM), are in vogue for recovering the components of a mixed pixel. Spectral unmixing outputs both the endmember spectra and their corresponding abundance fractions inside the pixel. It does not, however, provide the spatial distribution of these abundance fractions within a pixel. This limits the applicability of hyperspectral data for subpixel target detection. In this paper, a new inverse-Euclidean-distance-based super-resolution mapping method is presented that achieves subpixel target detection in hyperspectral images by adjusting the spatial distribution of abundance fractions within a pixel. Results obtained at different resolutions indicate that super-resolution mapping may effectively aid subpixel target detection.
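
A much-simplified variant of inverse-Euclidean-distance super-resolution mapping can be sketched as follows: each subpixel of a mixed pixel is scored by the abundances of neighboring coarse pixels weighted by inverse distance, and the class is assigned to the highest-scoring subpixels until the pixel's abundance fraction is used up. The paper's actual method is more elaborate; this only conveys the spatial-attraction idea:

```python
import numpy as np

def subpixel_attraction(abund, zoom):
    """Allocate each coarse pixel's abundance fraction to its most 'attracted'
    subpixels, where attraction = sum over 8-neighbors of
    neighbor_abundance / Euclidean distance (in coarse-pixel units)."""
    H, W = abund.shape
    fine = np.zeros((H * zoom, W * zoom))
    for y in range(H):
        for x in range(W):
            n_sub = min(int(round(abund[y, x] * zoom * zoom)), zoom * zoom)
            if n_sub == 0:
                continue
            scores = {}
            for sy in range(zoom):
                for sx in range(zoom):
                    fy = (y * zoom + sy + 0.5) / zoom   # subpixel center,
                    fx = (x * zoom + sx + 0.5) / zoom   # coarse coordinates
                    s = 0.0
                    for ny in range(max(0, y - 1), min(H, y + 2)):
                        for nx in range(max(0, x - 1), min(W, x + 2)):
                            if (ny, nx) == (y, x):
                                continue
                            d = np.hypot(fy - (ny + 0.5), fx - (nx + 0.5))
                            s += abund[ny, nx] / d
                    scores[(sy, sx)] = s
            for sy, sx in sorted(scores, key=scores.get, reverse=True)[:n_sub]:
                fine[y * zoom + sy, x * zoom + sx] = 1.0
    return fine
```

The effect is that a 50%-abundance pixel adjacent to a pure target pixel places its target subpixels on the shared border, turning per-pixel fractions into a spatially coherent subpixel map.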

  20. Controlling Reflections from Mesh Refinement Interfaces in Numerical Relativity

    NASA Technical Reports Server (NTRS)

    Baker, John G.; Van Meter, James R.

    2005-01-01

    A leading approach to improving the accuracy of numerical relativity simulations of black hole systems is through fixed or adaptive mesh refinement techniques. We describe a generic numerical error which manifests as slowly converging, artificial reflections from refinement boundaries in a broad class of mesh-refinement implementations, potentially limiting the effectiveness of mesh-refinement techniques for some numerical relativity applications. We elucidate this numerical effect by presenting a model problem which exhibits the phenomenon, but which is simple enough that its numerical error can be understood analytically. Our analysis shows that the effect is caused by variations in finite-differencing error generated across low- and high-resolution regions, and that its slow convergence is caused by the presence of dramatic speed differences among propagation modes typical of 3+1 relativity. Lastly, we resolve the problem, presenting a class of finite-differencing stencil modifications which eliminate this pathology in both our model problem and in numerical relativity examples.

  1. Dusty gas with one fluid in smoothed particle hydrodynamics

    NASA Astrophysics Data System (ADS)

    Laibe, Guillaume; Price, Daniel J.

    2014-05-01

    In a companion paper we have shown how the equations describing gas and dust as two fluids coupled by a drag term can be re-formulated to describe the system as a single-fluid mixture. Here, we present a numerical implementation of the one-fluid dusty gas algorithm using smoothed particle hydrodynamics (SPH). The algorithm preserves the conservation properties of the SPH formalism. In particular, the total gas and dust mass, momentum, angular momentum and energy are all exactly conserved. Shock viscosity and conductivity terms are generalized to handle the two-phase mixture accordingly. The algorithm is benchmarked against a comprehensive suite of problems: DUSTYBOX, DUSTYWAVE, DUSTYSHOCK and DUSTYOSCILL, each of them addressing different properties of the method. We compare the performance of the one-fluid algorithm to the standard two-fluid approach. The one-fluid algorithm is found to resolve both of the fundamental limitations of the two-fluid algorithm: it is no longer possible to concentrate dust below the resolution of the gas (they have the same resolution by definition), and the spatial resolution criterion h < c_s t_s (smoothing length smaller than the sound speed times the drag stopping time), required in two-fluid codes to avoid over-damping of kinetic energy, is unnecessary. Implicit time-stepping is straightforward. As a result, the algorithm is up to ten billion times more efficient for 3D simulations of small grains. Additional benefits include the use of half as many particles, a single kernel and fewer SPH interpolations. The only limitation is that it does not capture multi-streaming of dust in the limit of zero coupling, suggesting that in this case a hybrid approach may be required.

  2. Image resolution enhancement via image restoration using neural network

    NASA Astrophysics Data System (ADS)

    Zhang, Shuangteng; Lu, Yihong

    2011-04-01

    Image super-resolution aims to obtain a high-quality image at a resolution that is higher than that of the original coarse one. This paper presents a new neural network-based method for image super-resolution. In this technique, super-resolution is treated as an inverse problem. An observation model that closely follows the physical image acquisition process is established to solve the problem. Based on this model, a cost function is created and minimized by a Hopfield neural network to produce high-resolution images from the corresponding low-resolution ones. Unlike some other single-frame super-resolution techniques, this technique takes into consideration point spread function blurring as well as additive noise, and therefore generates high-resolution images with more preserved or restored image detail. Experimental results demonstrate that the high-resolution images obtained by this technique have a very high quality in terms of PSNR and are visually more pleasing.
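    The observation-model idea can be sketched in 1D (illustrative only: the paper minimizes a comparable cost with a Hopfield network, whereas this toy uses plain gradient descent, and the 3-tap PSF and signal are made up):

    ```python
    # Toy 1D reconstruction-based super-resolution. Observation model:
    # y = downsample(blur(x)) (+ noise); x is recovered by gradient descent
    # on the data-misfit cost ||downsample(blur(x)) - y||^2.

    def blur(x):
        # symmetric 3-tap PSF [0.25, 0.5, 0.25] with clamped boundaries;
        # with this clamping the operator matrix is symmetric, so blur()
        # also serves as its own transpose in the gradient below
        n = len(x)
        return [0.25 * x[max(i - 1, 0)] + 0.5 * x[i] + 0.25 * x[min(i + 1, n - 1)]
                for i in range(n)]

    def down(x, f=2):
        return x[::f]

    def up(y, n, f=2):
        # adjoint of down: zero insertion back to length n
        out = [0.0] * n
        for i, v in enumerate(y):
            out[f * i] = v
        return out

    def reconstruct(y, n, iters=1000, lr=0.5):
        x = [0.0] * n
        for _ in range(iters):
            r = [a - b for a, b in zip(down(blur(x)), y)]  # data residual
            g = blur(up(r, n))                             # gradient B^T D^T r
            x = [xi - lr * gi for xi, gi in zip(x, g)]
        return x

    x_true = [0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0]
    y = down(blur(x_true))              # simulated low-resolution observation
    x_hat = reconstruct(y, len(x_true))
    misfit = sum((a - b) ** 2 for a, b in zip(down(blur(x_hat)), y))
    print(misfit < 1e-3)
    ```

    With noisy observations, a regularization term would join the cost, which is where formulations like the paper's observation-model cost function come in.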

  3. High Resolution Melting (HRM) for High-Throughput Genotyping-Limitations and Caveats in Practical Case Studies.

    PubMed

    Słomka, Marcin; Sobalska-Kwapis, Marta; Wachulec, Monika; Bartosz, Grzegorz; Strapagiel, Dominik

    2017-11-03

    High resolution melting (HRM) is a convenient method for gene scanning as well as genotyping of individual and multiple single nucleotide polymorphisms (SNPs). This rapid, simple, closed-tube, homogenous, and cost-efficient approach has the capacity for high specificity and sensitivity, while allowing easy transition to high-throughput scale. In this paper, we provide examples from our laboratory practice of some problematic issues which can affect the performance and data analysis of HRM results, especially with regard to reference curve-based targeted genotyping. We present those examples in order of the typical experimental workflow, and discuss the crucial significance of the respective experimental errors and limitations for the quality and analysis of results. The experimental details which have a decisive impact on correct execution of a HRM genotyping experiment include type and quality of DNA source material, reproducibility of isolation method and template DNA preparation, primer and amplicon design, automation-derived preparation and pipetting inconsistencies, as well as physical limitations in melting curve distinction for alternative variants and careful selection of samples for validation by sequencing. We provide a case-by-case analysis and discussion of actual problems we encountered and solutions that should be taken into account by researchers newly attempting HRM genotyping, especially in a high-throughput setup.

  5. Inherent Limitations of Hydraulic Tomography

    USGS Publications Warehouse

    Bohling, Geoffrey C.; Butler, J.J.

    2010-01-01

    We offer a cautionary note in response to an increasing level of enthusiasm regarding high-resolution aquifer characterization with hydraulic tomography. We use synthetic examples based on two recent field experiments to demonstrate that a high degree of nonuniqueness remains in estimates of hydraulic parameter fields even when those estimates are based on simultaneous analysis of a number of carefully controlled hydraulic tests. We must, therefore, be careful not to oversell the technique to the community of practicing hydrogeologists, promising a degree of accuracy and resolution that, in many settings, will remain unattainable, regardless of the amount of effort invested in the field investigation. No practically feasible amount of hydraulic tomography data will ever remove the need to regularize or bias the inverse problem in some fashion in order to obtain a unique solution. Thus, along with improving the resolution of hydraulic tomography techniques, we must also strive to couple those techniques with procedures for experimental design and uncertainty assessment and with other more cost-effective field methods, such as geophysical surveying and, in unconsolidated formations, direct-push profiling, in order to develop methods for subsurface characterization with the resolution and accuracy needed for practical field applications. Copyright © 2010 The Author(s). Journal compilation © 2010 National Ground Water Association.

  6. Problems and Resolutions in the Practice of Project Teaching in Higher Vocational Schools

    ERIC Educational Resources Information Center

    Sheng, Zhichong; Tan, Jianhua

    2011-01-01

    Recently, there has been heated discussion of project teaching theory among many higher vocational schools; however, the practice of project teaching is still in its beginning period, and many problems have appeared in that practice. This paper analyzes the existing problems in the practice of project teaching and proposes some resolutions.

  7. Conflict Prevention and Resolution Center (CPRC)

    EPA Pesticide Factsheets

    The Conflict Prevention and Resolution Center is EPA's primary resource for services and expertise in the areas of consensus-building, collaborative problem solving, alternative dispute resolution, and environmental collaboration and conflict resolution.

  8. A multiresolution approach for the convergence acceleration of multivariate curve resolution methods.

    PubMed

    Sawall, Mathias; Kubis, Christoph; Börner, Armin; Selent, Detlef; Neymeyr, Klaus

    2015-09-03

    Modern computerized spectroscopic instrumentation can result in high volumes of spectroscopic data. Such accurate measurements raise special computational challenges for multivariate curve resolution techniques, since pure component factorizations are often solved via constrained minimization problems. The computational costs of these calculations grow rapidly with increased time or frequency resolution of the spectral measurements. The key idea of this paper is to define, for the given high-dimensional spectroscopic data, a sequence of coarsened subproblems with reduced resolution. The multiresolution algorithm first computes a pure component factorization for the coarsest problem, which has the lowest resolution. The factorization results are then used as initial values for the next problem, which has a higher resolution. Good initial values result in a fast solution on the next refined level. This procedure is repeated, and finally a factorization is determined for the highest level of resolution. The described multiresolution approach allows a considerable convergence acceleration. The computational procedure is analyzed and tested on experimental spectroscopic data from the rhodium-catalyzed hydroformylation together with various soft and hard models. Copyright © 2015 Elsevier B.V. All rights reserved.
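    A minimal numerical sketch of the coarse-to-fine strategy (multiplicative-update NMF stands in for the paper's constrained solver, and the data are synthetic, not hydroformylation spectra):

    ```python
    # Multiresolution sketch of a pure-component factorization D ≈ C @ S
    # (C: concentration profiles over time, S: pure spectra). A coarse
    # (time-averaged) problem is solved first; its result seeds the fine one.
    import numpy as np

    rng = np.random.default_rng(0)

    def nmf(D, C, S, iters):
        # standard multiplicative updates for min ||D - C S||_F^2, C, S >= 0
        eps = 1e-12
        for _ in range(iters):
            C *= (D @ S.T) / (C @ S @ S.T + eps)
            S *= (C.T @ D) / (C.T @ C @ S + eps)
        return C, S

    # synthetic noiseless data: 200 time points, 50 wavelengths, 2 components
    C_true = rng.random((200, 2))
    S_true = rng.random((2, 50))
    D = C_true @ S_true

    # coarse level: average adjacent time points (halved time resolution)
    Dc = 0.5 * (D[0::2] + D[1::2])
    Cc, S = nmf(Dc, rng.random((100, 2)), rng.random((2, 50)), iters=300)

    # fine level: reuse the coarse spectra S as initial values; upsample C
    C0 = np.repeat(Cc, 2, axis=0)
    C, S = nmf(D, C0, S, iters=100)

    err = np.linalg.norm(D - C @ S) / np.linalg.norm(D)
    print("relative reconstruction error:", err)
    ```

    The fine-level solve needs far fewer iterations than a cold start because the coarse factorization already supplies good initial values, which is the acceleration mechanism the paper exploits.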

  9. A review of spatial downscaling of satellite remotely sensed soil moisture

    NASA Astrophysics Data System (ADS)

    Peng, Jian; Loew, Alexander; Merlin, Olivier; Verhoest, Niko E. C.

    2017-06-01

    Satellite remote sensing technology has been widely used to estimate surface soil moisture. Numerous efforts have been devoted to develop global soil moisture products. However, these global soil moisture products, normally retrieved from microwave remote sensing data, are typically not suitable for regional hydrological and agricultural applications such as irrigation management and flood predictions, due to their coarse spatial resolution. Therefore, various downscaling methods have been proposed to improve the coarse resolution soil moisture products. The purpose of this paper is to review existing methods for downscaling satellite remotely sensed soil moisture. These methods are assessed and compared in terms of their advantages and limitations. This review also provides the accuracy level of these methods based on published validation studies. In the final part, problems and future trends associated with these methods are analyzed.

  10. Mesoscale temperature and moisture fields from satellite infrared soundings

    NASA Technical Reports Server (NTRS)

    Hillger, D. W.; Vonderhaar, T. H.

    1976-01-01

    The combined use of radiosonde and satellite infrared soundings can provide mesoscale temperature and moisture fields at the time of satellite coverage. Radiance data from the vertical temperature profile radiometer (VTPR) on NOAA polar-orbiting satellites can be used, with a radiosonde sounding as an initial guess, in an iterative retrieval algorithm. The mesoscale temperature and moisture fields at local 9-10 a.m., produced by retrieving temperature profiles at each VTPR scan spot (every 70 km), can be used for analysis or as a forecasting tool for subsequent weather events during the day. The better horizontal resolution of the satellite soundings can thus be coupled with the radiosonde temperature and moisture profile, which serves both as a best initial guess and as a means of mitigating problems due to the limited vertical resolution of satellite soundings.

  11. ELM: super-resolution analysis of wide-field images of fluorescent shell structures

    NASA Astrophysics Data System (ADS)

    Manton, James D.; Xiao, Yao; Turner, Robert D.; Christie, Graham; Rees, Eric J.

    2018-07-01

    It is often necessary to precisely quantify the size of specimens in biological studies. When measuring feature size in fluorescence microscopy, significant biases can arise due to blurring of its edges if the feature is smaller than the diffraction limit of resolution. This problem is avoided if an equation describing the feature’s entire image is fitted to its image data. In this paper we present open-source software, ELM, which uses this approach to measure the size of spheroidal or cylindrical fluorescent shells with a precision of around 10 nm. This has been used to measure coat protein locations in bacterial spores and cell wall diameter in vegetative bacilli, and may also be valuable in microbiological studies of algae, fungi and viruses. ELM is available for download at https://github.com/quantitativeimaging/ELM.
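    The fitting idea behind ELM can be illustrated in 1D (this is not the ELM code; the shell profile, PSF width, and grid are invented for the sketch): a thin shell of radius R images as two PSF-blurred peaks at ±R, and fitting the whole model recovers R even when 2R is below the blur width.

    ```python
    # Illustrative 1D model-based size measurement: a thin shell of radius R
    # produces two Gaussian-blurred peaks at +/-R. Fitting the full image
    # model recovers R even though the two peaks merge into a single blob.
    import math

    SIGMA = 1.0  # PSF width (diffraction blur), same units as R

    def model(x, R):
        g = lambda u: math.exp(-0.5 * (u / SIGMA) ** 2)
        return g(x - R) + g(x + R)

    xs = [i * 0.1 for i in range(-60, 61)]
    R_true = 0.3  # well below SIGMA: unresolved by eye
    data = [model(x, R_true) for x in xs]

    # least-squares fit of R by a simple grid search (a curve fitter would do)
    def sse(R):
        return sum((model(x, R) - d) ** 2 for x, d in zip(xs, data))

    R_hat = min((k / 1000 for k in range(0, 1001)), key=sse)
    print(abs(R_hat - R_true) < 1e-3)
    ```

    With photon noise added to `data`, the fitted radius acquires an uncertainty of order the ~10 nm precision quoted above rather than the diffraction limit, which is the point of fitting the entire image model.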

  12. Memory effect, resolution, and efficiency measurements of an Al2O3 coated plastic scintillator used for radioxenon detection

    NASA Astrophysics Data System (ADS)

    Bläckberg, L.; Fritioff, T.; Mårtensson, L.; Nielsen, F.; Ringbom, A.; Sjöstrand, H.; Klintenberg, M.

    2013-06-01

    A cylindrical plastic scintillator cell, used for radioxenon monitoring within the verification regime of the Comprehensive Nuclear-Test-Ban Treaty, has been coated with 425 nm Al2O3 using low temperature Atomic Layer Deposition, and its performance has been evaluated. The motivation is to reduce the memory effect caused by radioxenon diffusing into the plastic scintillator material during measurements, resulting in an elevated detection limit. Measurements with the coated detector show both energy resolution and efficiency comparable to uncoated detectors, and a memory effect reduction of a factor of 1000. Provided that the quality of the detector is maintained for a longer period of time, Al2O3 coatings are believed to be a viable solution to the memory effect problem in question.

  13. Importing super-resolution imaging into nanoscale puzzles of materials dynamics

    NASA Astrophysics Data System (ADS)

    King, John; Tsang, Chi Hang Boyce; Wilson, William; Granick, Steve

    2014-03-01

    A limitation of the exciting recent advances in sub-diffraction microscopy is that they focus on imaging rather than dynamical changes. We are engaged in extending this technique beyond the usual biological applications to address materials problems instead. To this end, we employ stimulated emission depletion (STED) microscopy, which relies on selectively turning off fluorescence emitters through stimulated emission, allowing only a small subset of emitters to be detected, such that the excitation spot size can be downsized to tens of nanometers. By coupling the STED excitation scheme to fluorescence correlation spectroscopy (FCS), diffusive processes are studied with nanoscale resolution. Here, we demonstrate the benefits of such experimental capabilities in a diverse range of complex systems, ranging from the diffusion of nano-objects in crowded 3D environments to the study of polymer diffusion on 2D surfaces.

  15. k-space and q-space: combining ultra-high spatial and angular resolution in diffusion imaging using ZOOPPA at 7 T.

    PubMed

    Heidemann, Robin M; Anwander, Alfred; Feiweier, Thorsten; Knösche, Thomas R; Turner, Robert

    2012-04-02

    There is ongoing debate whether using a higher spatial resolution (sampling k-space) or a higher angular resolution (sampling q-space angles) is the better way to improve diffusion MRI (dMRI) based tractography results in living humans. In both cases, the limiting factor is the signal-to-noise ratio (SNR), due to the restricted acquisition time. One possible way to increase the spatial resolution without sacrificing either SNR or angular resolution is to move to a higher magnetic field strength. Nevertheless, dMRI has not been the preferred application for ultra-high field strength (7 T). This is because single-shot echo-planar imaging (EPI) has been the method of choice for human in vivo dMRI. EPI faces several challenges related to the use of a high resolution at high field strength, for example, distortions and image blurring. These problems can easily compromise the expected SNR gain with field strength. In the current study, we introduce an adapted EPI sequence in conjunction with a combination of ZOOmed imaging and Partially Parallel Acquisition (ZOOPPA). We demonstrate that the method can produce high quality diffusion-weighted images with high spatial and angular resolution at 7 T. We provide examples of in vivo human dMRI with isotropic resolutions of 1 mm and 800 μm. These data sets are particularly suitable for resolving complex and subtle fiber architectures, including fiber crossings in the white matter, anisotropy in the cortex and fibers entering the cortex. Copyright © 2011 Elsevier Inc. All rights reserved.

  16. Analysis of single ion channel data incorporating time-interval omission and sampling

    PubMed Central

    The, Yu-Kai; Timmer, Jens

    2005-01-01

    Hidden Markov models are widely used to describe single channel currents from patch-clamp experiments. The inevitable anti-aliasing filter limits the time resolution of the measurements, and therefore the standard hidden Markov model is no longer adequate. The notion of time-interval omission has been introduced, whereby brief events are not detected. The exact solutions developed for this problem do not take into account that the measured intervals are limited by the sampling time. In this case, the dead-time that specifies the minimal detectable interval length is not defined unambiguously. We show that a wrong choice of the dead-time leads to considerably biased estimates, and we present the appropriate equations to describe sampled data. PMID:16849220
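    A toy simulation (not the paper's estimator) shows why the dead-time matters: for exponential dwell times with mean tau, intervals shorter than the dead-time d go undetected, and by the memoryless property the detected intervals have mean tau + d, so a naive mean overestimates tau by exactly the dead-time.

    ```python
    # Toy illustration of time-interval omission: exponential dwell times with
    # mean TAU; events shorter than the dead-time D are missed. The detected
    # intervals have mean TAU + D (memoryless property), so a naive average
    # is biased by D, and a wrong choice of D gives a wrong correction.
    import random

    random.seed(1)
    TAU, D = 2.0, 0.5
    dwells = [random.expovariate(1.0 / TAU) for _ in range(200000)]
    detected = [t for t in dwells if t > D]  # brief events are not detected

    naive_mean = sum(detected) / len(detected)
    corrected = naive_mean - D               # subtract the true dead-time
    print(abs(corrected - TAU) < 0.05)
    ```

    For non-exponential (e.g. aggregated Markov) dwell distributions the bias is no longer a simple shift, which is why the exact sampled-data equations of the paper are needed.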

  17. Earth Science Futuristic Trends and Implementing Strategies

    NASA Technical Reports Server (NTRS)

    Habib, Shahid

    2003-01-01

    For the last several years, there has been a strong trend in the science community toward increasing the number of space-based observations to obtain much higher temporal and spatial resolution. Such information will eventually be useful in higher-resolution models that can provide predictability with higher precision. This desirability puts a tremendous burden on any single implementing entity in terms of budget, technology readiness, and computing power. The health of planet Earth is not governed by a single country; in reality, it is the business of everyone living on this planet. It is therefore becoming impractical for any single organization or country to undertake this task alone. So far, each country has, within its means, proceeded satisfactorily in implementing or benefiting, directly or indirectly, from Earth observation data and scientific products. However, the time has come to recognize that this is too large a problem to be undertaken by a single country. This paper therefore gives serious thought to the options for undertaking this tremendous challenge. The problem is multi-dimensional in terms of budget, technology availability, environmental legislation, public awareness, and communication limitations. Some of these issues are introduced and discussed, and possible implementation strategies for moving out of this predicament are provided. A strong emphasis is placed on international cooperation and collaboration to realize a collective benefit from this effort.

  18. Andreev bound states. Some quasiclassical reflections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Y., E-mail: yiriolin@illinois.edu; Leggett, A. J.

    2014-12-15

    We discuss a very simple and essentially exactly solvable model problem which illustrates some nice features of Andreev bound states, namely, the trapping of a single Bogoliubov quasiparticle in a neutral s-wave BCS superfluid by a wide and shallow Zeeman trap. In the quasiclassical limit, the ground state is a doublet with a splitting which is proportional to the exponentially small amplitude for “normal” reflection by the edges of the trap. We comment briefly on a prima facie paradox concerning the continuity equation and conjecture a resolution to it.

  19. High temperature strain measurement with a resistance strain gage

    NASA Technical Reports Server (NTRS)

    Lei, Jih-Fen; Fichtel, ED; Mcdaniel, Amos

    1993-01-01

    A PdCr-based electrical resistance strain gage was demonstrated in the laboratory to be a viable sensor candidate for static strain measurement at high temperatures. However, difficulties were encountered while transferring the sensor to field applications. This paper therefore addresses the recognition and resolution of the problems likely to be encountered with PdCr strain gages in field applications. Errors caused by the measurement system, installation technique, and lead wire attachment are discussed. The limitations of, and some considerations related to, the temperature compensation technique used for this gage are also addressed.

  20. Role of Kinetic Modeling in Biomedical Imaging

    PubMed Central

    Huang, Sung-Cheng

    2009-01-01

    Biomedical imaging can reveal clear 3-dimensional body morphology non-invasively with high spatial resolution. Its efficacy, in both clinical and pre-clinical settings, is enhanced with its capability to provide in vivo functional/biological information in tissue. The role of kinetic modeling in providing biological/functional information in biomedical imaging is described. General characteristics and limitations in extracting biological information are addressed and practical approaches to solve the problems are discussed and illustrated with examples. Some future challenges and opportunities for kinetic modeling to expand the capability of biomedical imaging are also presented. PMID:20640185

  2. Statistical Limits to Super Resolution

    NASA Astrophysics Data System (ADS)

    Lucy, L. B.

    1992-08-01

    The limits imposed by photon statistics on the degree to which Rayleigh's resolution limit for diffraction-limited images can be surpassed by applying image restoration techniques are investigated. An approximate statistical theory is given for the number of detected photons required in the image of an unresolved pair of equal point sources in order that its information content allows in principle resolution by restoration. This theory is confirmed by numerical restoration experiments on synthetic images, and quantitative limits are presented for restoration of diffraction-limited images formed by slit and circular apertures.

  3. Deformation of two-phase aggregates using standard numerical methods

    NASA Astrophysics Data System (ADS)

    Duretz, Thibault; Yamato, Philippe; Schmalholz, Stefan M.

    2013-04-01

    Geodynamic problems often involve the large deformation of material encompassing material boundaries. In geophysical fluids, such boundaries often coincide with a discontinuity in the viscosity (or effective viscosity) field and, subsequently, in the pressure field. Here, we employ popular implementations of the finite difference and finite element methods for solving viscous flow problems. On one hand, we implemented a finite difference method coupled with a Lagrangian marker-in-cell technique to represent the deforming fluid. Thanks to its Eulerian nature, this method has limited geometric flexibility but is characterized by a light and stable discretization. On the other hand, we employ the Lagrangian finite element method, which offers full geometric flexibility at the cost of a relatively heavier discretization. In order to test the accuracy of the finite difference scheme, we ran large-strain simple shear deformation of aggregates containing either a weak or a strong circular inclusion (viscosity ratio of 10^6). The results, obtained for different grid resolutions, are compared to Lagrangian finite element results, which are considered as the reference solution. The comparison is then used to establish up to which strain finite difference simulations can be run, given the nature of the inclusions (dimensions, viscosity) and the resolution of the Eulerian mesh.

  4. Identification of multiple leaks in pipeline: Linearized model, maximum likelihood, and super-resolution localization

    NASA Astrophysics Data System (ADS)

    Wang, Xun; Ghidaoui, Mohamed S.

    2018-07-01

    This paper considers the problem of identifying multiple leaks in a water-filled pipeline based on inverse transient wave theory. The analytical solution to this problem involves nonlinear interaction terms between the various leaks. This paper shows analytically and numerically that these nonlinear terms are of the order of the leak sizes to the power of two and thus negligible. As a result of this simplification, a maximum likelihood (ML) scheme that identifies leak locations and leak sizes separately is formulated and tested. It is found that the ML estimation scheme is highly efficient and robust with respect to noise. In addition, the ML method is a super-resolution leak localization scheme because its resolvable leak distance (approximately 0.15λmin, where λmin is the minimum wavelength) is below the Nyquist-Shannon sampling theorem limit (0.5λmin). Moreover, the Cramér-Rao lower bound (CRLB) is derived and used to show the efficiency of the ML scheme estimates. The variance of the ML estimator approaches the CRLB, demonstrating that the ML scheme belongs to the class of best unbiased estimators for leak localization.

  5. Experimental/clinical evaluation of EIT image reconstruction with l1 data and image norms

    NASA Astrophysics Data System (ADS)

    Mamatjan, Yasin; Borsic, Andrea; Gürsoy, Doga; Adler, Andy

    2013-04-01

    Electrical impedance tomography (EIT) image reconstruction is ill-posed, and the spatial resolution of reconstructed images is low due to the diffuse propagation of current and the limited number of independent measurements. Generally, image reconstruction is formulated using a regularized scheme in which l2 norms are preferred for both the data misfit and image prior terms due to computational convenience, but these result in smooth solutions. However, recent work on a Primal Dual-Interior Point Method (PDIPM) framework showed its effectiveness in dealing with the minimization problem. l1 norms on the data and regularization terms in EIT image reconstruction address both the problem of reconstructing sharp edges and that of dealing with measurement errors. We aim for a clinical and experimental evaluation of the PDIPM method by selecting scenarios (human lung and dog breathing) with known electrode errors, which require rigorous regularization and cause reconstructions with the l2 norm to fail. Results demonstrate the applicability of PDIPM algorithms, especially l1 data and regularization norms, for clinical applications of EIT, showing that the l1 solution is not only more robust to measurement errors in a clinical setting, but also provides high contrast resolution on organ boundaries.
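    The robustness of the l1 data norm to electrode errors has a one-dimensional caricature (this is not an EIT reconstruction; the readings are invented): fitting a constant to data containing one gross outlier.

    ```python
    # Minimal illustration of l2 vs l1 data fitting: estimating a constant
    # from readings containing one gross "electrode error". The l2 fit is the
    # mean, which the outlier drags away; the l1 fit is the median, which is
    # robust -- the property the PDIPM l1 data-norm formulation exploits.
    import statistics

    readings = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 50.0]  # last value: failed electrode

    l2_fit = statistics.mean(readings)    # argmin_c sum (r - c)^2
    l1_fit = statistics.median(readings)  # argmin_c sum |r - c|

    print(abs(l1_fit - 1.0) < 0.1, abs(l2_fit - 1.0) > 1.0)  # True True
    ```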

  6. Inverse electrocardiographic transformations: dependence on the number of epicardial regions and body surface data points.

    PubMed

    Johnston, P R; Walker, S J; Hyttinen, J A; Kilpatrick, D

    1994-04-01

    The inverse problem of electrocardiography, the computation of epicardial potentials from body surface potentials, is influenced by the desired resolution on the epicardium, the number of recording points on the body surface, and the method of limiting the inversion process. To examine the role of these variables in the computation of the inverse transform, Tikhonov's zero-order regularization and singular value decomposition (SVD) have been used to invert the forward transfer matrix. The inverses have been compared in a data-independent manner using the resolution and the noise amplification as endpoints. Sets of 32, 50, 192, and 384 leads were chosen as sets of body surface data, and 26, 50, 74, and 98 regions were chosen to represent the epicardium. The resolution and noise were both improved by using a greater number of electrodes on the body surface. When 60% of the singular values are retained, the results show a trade-off between noise and resolution, with typical maximal epicardial noise levels of less than 0.5% of maximum epicardial potentials for 26 epicardial regions, 2.5% for 50 epicardial regions, 7.5% for 74 epicardial regions, and 50% for 98 epicardial regions. As the number of epicardial regions is increased, the regularization technique effectively fixes the noise amplification but markedly decreases the resolution, whereas SVD results in an increase in noise and a moderate decrease in resolution. Overall the regularization technique performs slightly better than SVD in the noise-resolution relationship. There is a region at the posterior of the heart that was poorly resolved regardless of the number of regions chosen. The variance of the resolution was such as to suggest the use of variable-size epicardial regions based on the resolution.
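    The two inversion schemes compared above can be sketched on a generic ill-conditioned transfer matrix (synthetic, not the actual body-surface-to-epicardium matrix), using the 2-norm of the inverse operator as a proxy for noise amplification:

    ```python
    # Tikhonov (zero-order) vs. truncated SVD inversion of a synthetic
    # ill-conditioned transfer matrix A = U diag(s) V^T. Noise amplification
    # is measured as the 2-norm of the inverse operator. The matrix is
    # generic; real electrocardiographic transfer matrices differ in detail.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 40
    U, _ = np.linalg.qr(rng.standard_normal((n, n)))
    V, _ = np.linalg.qr(rng.standard_normal((n, n)))
    s = np.logspace(0, -6, n)          # singular values spanning six decades
    A = U @ np.diag(s) @ V.T

    def tikhonov_inv(A, lam):
        # regularized inverse: filter factors s / (s^2 + lam^2)
        U, s, Vt = np.linalg.svd(A)
        return Vt.T @ np.diag(s / (s**2 + lam**2)) @ U.T

    def tsvd_inv(A, frac=0.6):
        # truncated SVD: retain the largest 60% of the singular values
        U, s, Vt = np.linalg.svd(A)
        k = int(frac * len(s))
        return Vt.T[:, :k] @ np.diag(1.0 / s[:k]) @ U[:, :k].T

    amp_tik = np.linalg.norm(tikhonov_inv(A, lam=1e-3), 2)  # capped at 1/(2*lam)
    amp_tsvd = np.linalg.norm(tsvd_inv(A), 2)               # 1 / (smallest kept s)
    print(amp_tik < amp_tsvd)
    ```

    The regularization parameter caps the amplification at 1/(2λ) regardless of how small the discarded singular values are, whereas truncation ties it to the smallest retained singular value: a compact view of the noise/resolution trade-off the abstract describes.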

  7. LOR-interleaving image reconstruction for PET imaging with fractional-crystal collimation

    NASA Astrophysics Data System (ADS)

    Li, Yusheng; Matej, Samuel; Karp, Joel S.; Metzler, Scott D.

    2015-01-01

    Positron emission tomography (PET) has become an important modality in medical and molecular imaging. However, in most PET applications, the resolution is still mainly limited by the physical crystal sizes or the detector’s intrinsic spatial resolution. To achieve images with better spatial resolution in a central region of interest (ROI), we have previously proposed using collimation in PET scanners. The collimator is designed to partially mask detector crystals to detect lines of response (LORs) within fractional crystals. A sequence of collimator-encoded LORs is measured with different collimation configurations. This novel collimated scanner geometry makes the reconstruction problem challenging, as both detector and collimator effects need to be modeled to reconstruct high-resolution images from collimated LORs. In this paper, we present a LOR-interleaving (LORI) algorithm, which incorporates these effects and has the advantage of reusing existing reconstruction software, to reconstruct high-resolution images for PET with fractional-crystal collimation. We also develop a 3D ray-tracing model incorporating both the collimator and crystal penetration for simulations and reconstructions of the collimated PET. By registering the collimator-encoded LORs with the collimator configurations, high-resolution LORs are restored based on the modeled transfer matrices using the non-negative least-squares method and EM algorithm. The resolution-enhanced images are then reconstructed from the high-resolution LORs using the MLEM or OSEM algorithm. For validation, we applied the LORI method to a small-animal PET scanner, A-PET, with a specially designed collimator. We demonstrate through simulated reconstructions with a hot-rod phantom and MOBY phantom that the LORI reconstructions can substantially improve spatial resolution and quantification compared to the uncollimated reconstructions. The LORI algorithm is crucial to improve overall image quality of collimated PET, which can have significant implications in preclinical and clinical ROI imaging applications.
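
    The MLEM update at the heart of such reconstructions can be sketched with a toy system matrix. Names and dimensions below are illustrative; the paper's actual model also includes collimator and crystal-penetration effects:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy system matrix P (detector bins x image voxels); illustrative only.
n_bins, n_vox = 40, 16
P = rng.uniform(0.0, 1.0, size=(n_bins, n_vox))
x_true = rng.uniform(0.5, 2.0, size=n_vox)
y = rng.poisson(P @ x_true).astype(float)    # measured counts

# MLEM: x <- x / (P^T 1) * P^T (y / (P x)); multiplicative, so x stays >= 0.
x = np.ones(n_vox)
sens = P.sum(axis=0)                          # sensitivity image P^T 1
for _ in range(200):
    ratio = y / np.maximum(P @ x, 1e-12)
    x *= (P.T @ ratio) / sens

print(x.round(2))
```

    A useful sanity check on any MLEM implementation is that each iteration exactly preserves the total measured counts, i.e. sum(P @ x) equals sum(y).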

  8. CrowdPhase: crowdsourcing the phase problem

    PubMed Central

    Jorda, Julien; Sawaya, Michael R.; Yeates, Todd O.

    2014-01-01

    The human mind innately excels at some complex tasks that are difficult to solve using computers alone. For complex problems amenable to parallelization, strategies can be developed to exploit human intelligence in a collective form: such approaches are sometimes referred to as ‘crowdsourcing’. Here, a first attempt at a crowdsourced approach for low-resolution ab initio phasing in macromolecular crystallography is proposed. A collaborative online game named CrowdPhase was designed, which relies on a human-powered genetic algorithm, where players control the selection mechanism during the evolutionary process. The algorithm starts from a population of ‘individuals’, each with a random genetic makeup, in this case a map prepared from a random set of phases, and tries to cause the population to evolve towards individuals with better phases based on Darwinian survival of the fittest. Players apply their pattern-recognition capabilities to evaluate the electron-density maps generated from these sets of phases and to select the fittest individuals. A user-friendly interface, a training stage and a competitive scoring system foster a network of well trained players who can guide the genetic algorithm towards better solutions from generation to generation via gameplay. CrowdPhase was applied to two synthetic low-resolution phasing puzzles and it was shown that players could successfully obtain phase sets in the 30° phase error range and corresponding molecular envelopes showing agreement with the low-resolution models. The successful preliminary studies suggest that with further development the crowdsourcing approach could fill a gap in current crystallographic methods by making it possible to extract meaningful information in cases where limited resolution might otherwise prevent initial phasing. PMID:24914965
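
    The human-powered selection loop described above can be caricatured in a few lines, with a scoring function standing in for the players' visual inspection of electron-density maps. All names, sizes, and rates here are illustrative, not CrowdPhase's actual mechanics:

```python
import random

random.seed(42)

# A bit string scored against a hidden target stands in for a phase set
# judged by eye; player_score stands in for human pattern recognition.
TARGET = [1, 0, 1, 1, 0, 0, 1, 0]

def player_score(individual):
    return sum(a == b for a, b in zip(individual, TARGET))

def evolve(pop, generations=50, keep=4):
    for _ in range(generations):
        pop.sort(key=player_score, reverse=True)   # "players" pick the fittest
        survivors = pop[:keep]
        children = []
        while len(children) < len(pop) - keep:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(TARGET))
            child = a[:cut] + b[cut:]              # crossover of two parents
            i = random.randrange(len(TARGET))
            if random.random() < 0.3:              # occasional mutation
                child[i] = 1 - child[i]
            children.append(child)
        pop = survivors + children
    return max(pop, key=player_score)

pop0 = [[random.randint(0, 1) for _ in range(8)] for _ in range(12)]
best = evolve(pop0)
print(best, player_score(best))
```

    Because the top individuals survive unmodified, the best score never decreases from one generation to the next, mirroring the game's generation-to-generation improvement.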

  9. Z-Score-Based Modularity for Community Detection in Networks

    PubMed Central

    Miyauchi, Atsushi; Kawase, Yasushi

    2016-01-01

    Identifying community structure in networks is an issue of particular interest in network science. The modularity introduced by Newman and Girvan is the most popular quality function for community detection in networks. In this study, we identify a problem in the concept of modularity and suggest a solution to overcome this problem. Specifically, we obtain a new quality function for community detection. We refer to the function as Z-modularity because it measures the Z-score of a given partition with respect to the fraction of the number of edges within communities. Our theoretical analysis shows that Z-modularity mitigates the resolution limit of the original modularity in certain cases. Computational experiments using both artificial networks and well-known real-world networks demonstrate the validity and reliability of the proposed quality function. PMID:26808270
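
    Newman-Girvan modularity, whose resolution limit motivates the paper, can be computed directly from its definition Q = sum_c [ m_c/m - (d_c/(2m))^2 ]; Z-modularity rescales this quantity by its standard deviation (see the paper for the exact form). A minimal sketch:

```python
# Newman-Girvan modularity of a partition of an undirected graph:
# m = total edges, m_c = edges inside community c, d_c = total degree in c.
def modularity(edges, partition):
    m = len(edges)
    q = 0.0
    for c in set(partition.values()):
        mc = sum(partition[u] == c and partition[v] == c for u, v in edges)
        dc = sum((partition[u] == c) + (partition[v] == c) for u, v in edges)
        q += mc / m - (dc / (2 * m)) ** 2
    return q

# Two triangles joined by a bridge edge: the natural two-community split
# scores much higher than lumping every node into one community.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
two = {0: "a", 1: "a", 2: "a", 3: "b", 4: "b", 5: "b"}
one = {v: "a" for v in range(6)}
print(modularity(edges, two), modularity(edges, one))
```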

  10. Super-resolution fluorescence microscopy by stepwise optical saturation

    PubMed Central

    Zhang, Yide; Nallathamby, Prakash D.; Vigil, Genevieve D.; Khan, Aamir A.; Mason, Devon E.; Boerckel, Joel D.; Roeder, Ryan K.; Howard, Scott S.

    2018-01-01

    Super-resolution fluorescence microscopy is an important tool in biomedical research for its ability to discern features smaller than the diffraction limit. However, due to its difficult implementation and high cost, super-resolution microscopy is not feasible in many applications. In this paper, we propose and demonstrate a saturation-based super-resolution fluorescence microscopy technique that can be easily implemented and requires neither additional hardware nor complex post-processing. The method is based on the principle of stepwise optical saturation (SOS), where M raw fluorescence images are linearly combined to generate an image with a √M-fold increase in resolution compared with conventional diffraction-limited images. For example, linearly combining (scaling and subtracting) two images obtained at regular powers extends the resolution by a factor of 1.4 beyond the diffraction limit. The resolution improvement in SOS microscopy is theoretically infinite but in practice is limited by the signal-to-noise ratio. We perform simulations and experimentally demonstrate super-resolution microscopy with both one-photon (confocal) and multiphoton excitation fluorescence. We show that with the multiphoton modality, SOS microscopy can provide super-resolution imaging deep in scattering samples. PMID:29675306
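
    The scale-and-subtract step can be illustrated on a 1D Gaussian profile with a toy second-order saturation model. The saturation form and coefficients here are illustrative assumptions, not the paper's derivation:

```python
import numpy as np

# Toy two-step SOS on a 1D Gaussian "PSF". With a weakly saturating
# response f(I) = I - a*I**2, scaling and subtracting images taken at
# powers P and 2P cancels the term linear in the excitation profile,
# leaving a psf**2 response that is sqrt(2) narrower.
x = np.linspace(-3, 3, 601)
psf = np.exp(-x**2 / 2)            # diffraction-limited excitation profile
a = 0.1                            # illustrative saturation coefficient

def image(power):
    inten = power * psf
    return inten - a * inten**2    # second-order saturated fluorescence

i1, i2 = image(1.0), image(2.0)
sos = i1 - 0.5 * i2                # linear terms cancel: sos = a * psf**2

def fwhm(profile):
    return np.ptp(x[profile >= profile.max() / 2])

print(fwhm(psf), fwhm(sos))        # widths differ by ~sqrt(2)
```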

  11. Single image super-resolution via an iterative reproducing kernel Hilbert space method.

    PubMed

    Deng, Liang-Jian; Guo, Weihong; Huang, Ting-Zhu

    2016-11-01

    Image super-resolution, a process to enhance image resolution, has important applications in satellite imaging, high definition television, medical imaging, etc. Many existing approaches use multiple low-resolution images to recover one high-resolution image. In this paper, we present an iterative scheme to solve single image super-resolution problems. It recovers a high quality high-resolution image from solely one low-resolution image without using a training data set. We solve the problem from image intensity function estimation perspective and assume the image contains smooth and edge components. We model the smooth components of an image using a thin-plate reproducing kernel Hilbert space (RKHS) and the edges using approximated Heaviside functions. The proposed method is applied to image patches, aiming to reduce computation and storage. Visual and quantitative comparisons with some competitive approaches show the effectiveness of the proposed method.
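
    The edge component above rests on smoothed step functions. One common smooth approximation of the Heaviside step, with a sharpness parameter, is shown below; the paper's exact parameterization may differ:

```python
import numpy as np

# A common smooth approximation of the Heaviside step:
# H_xi(x) = 1/2 + arctan(x/xi)/pi, where xi controls edge sharpness.
# An ideal intensity edge at x0 can then be modeled as c * H_xi(x - x0).
def heaviside_approx(x, xi=0.05):
    return 0.5 + np.arctan(x / xi) / np.pi

x = np.linspace(-1, 1, 5)
print(heaviside_approx(x).round(3))
```

    As xi shrinks, the function approaches the ideal 0/1 step while remaining differentiable, which is what makes it usable inside a variational fit.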

  12. Tuning of successively scanned two monolithic Vernier-tuned lasers and selective data sampling in optical comb swept source optical coherence tomography

    PubMed Central

    Choi, Dong-hak; Yoshimura, Reiko; Ohbayashi, Kohji

    2013-01-01

    Monolithic Vernier tuned super-structure grating distributed Bragg reflector (SSG-DBR) lasers are expected to become one of the most promising sources for swept source optical coherence tomography (SS-OCT) with a long coherence length, reduced sensitivity roll-off, and potential capability for a very fast A-scan rate. However, previous implementations of the lasers suffer from four main problems: 1) frequencies deviate from the targeted values when scanned, 2) large amounts of noise appear associated with abrupt changes in injection currents, 3) optically aliased noise appears due to a long coherence length, and 4) the narrow wavelength coverage of a single chip limits resolution. We have developed a method of dynamical frequency tuning, a method of selective data sampling to eliminate current switching noise, an interferometer to reduce aliased noise, and an excess-noise-free connection of two serially scanned lasers to enhance resolution to solve these problems. An optical frequency comb SS-OCT system was achieved with a sensitivity of 124 dB and a dynamic range of 55-72 dB that depended on the depth at an A-scan rate of 3.1 kHz with a resolution of 15 μm by discretely scanning two SSG-DBR lasers, i.e., L-band (1.560-1.599 μm) and UL-band (1.598-1.640 μm). A few OCT images with excellent image penetration depth were obtained. PMID:24409394

  13. An analysis of I/O efficient order-statistic-based techniques for noise power estimation in the HRMS sky survey's operational system

    NASA Technical Reports Server (NTRS)

    Zimmerman, G. A.; Olsen, E. T.

    1992-01-01

    Noise power estimation in the High-Resolution Microwave Survey (HRMS) sky survey element is considered as an example of a constant false alarm rate (CFAR) signal detection problem. Order-statistic-based noise power estimators for CFAR detection are considered in terms of required estimator accuracy and estimator dynamic range. By limiting the dynamic range of the value to be estimated, the performance of an order-statistic estimator can be achieved by simpler techniques requiring only a single pass of the data. Simple threshold-and-count techniques are examined, and it is shown how several parallel threshold-and-count estimation devices can be used to expand the dynamic range to meet HRMS system requirements with minimal hardware complexity. An input/output (I/O) efficient limited-precision order-statistic estimator with wide but limited dynamic range is also examined.
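
    The single-pass threshold-and-count idea can be sketched for exponentially distributed power samples (a toy stand-in; the HRMS hardware ran several such counters in parallel to widen the dynamic range):

```python
import numpy as np

rng = np.random.default_rng(7)

# For complex Gaussian noise, spectral power samples are exponentially
# distributed with mean sigma2, so P(X < T) = 1 - exp(-T / sigma2).
# A single pass counts samples below a fixed threshold T, estimating
# that CDF value, then inverts it for sigma2.
sigma2_true = 3.0
samples = rng.exponential(sigma2_true, size=200_000)

T = 2.0                                   # fixed threshold, one data pass
frac_below = np.count_nonzero(samples < T) / samples.size
sigma2_est = -T / np.log1p(-frac_below)   # invert 1 - exp(-T/sigma2)

# Order-statistic reference estimate from the sample median:
sigma2_med = np.median(samples) / np.log(2)
print(sigma2_est, sigma2_med)
```

    The threshold-and-count estimate needs no sorting, which is the I/O advantage the abstract describes; its accuracy degrades when T falls far outside the bulk of the distribution, hence the parallel counters.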

  14. Pushing the limits of spatial resolution with the Kuiper Airborne observatory

    NASA Technical Reports Server (NTRS)

    Lester, Daniel

    1994-01-01

    Achieving high spatial resolution in the far-IR is one of the most serious limitations of our work at these wavelengths, which carry information about the luminosity of dusty and obscured sources. At IR wavelengths shorter than 30 microns, ground-based telescopes with large apertures at superb sites achieve diffraction-limited performance close to the seeing limit in the optical. At millimeter wavelengths, ground-based interferometers achieve resolution that is close to this. The inaccessibility of the far-IR from the ground, however, makes it difficult to achieve complementary resolution in the far-IR. The 1983 IRAS survey, while extraordinarily sensitive, provides us with a sky map at a spatial resolution that is limited by detector size, on a spatial scale far larger than that available at other wavelengths from the ground. The survey resolution is of order 4 arcmin in the 100 micron bandpass and 2 arcmin at 60 microns (IRAS Explanatory Supplement, 1988). Information on a scale of 1' is available for some sources from the CPC. Deconvolution and image resolution using this database is one of the subjects of this workshop.

  15. Open data set of live cyanobacterial cells imaged using an X-ray laser

    NASA Astrophysics Data System (ADS)

    van der Schot, Gijs; Svenda, Martin; Maia, Filipe R. N. C.; Hantke, Max F.; Deponte, Daniel P.; Seibert, M. Marvin; Aquila, Andrew; Schulz, Joachim; Kirian, Richard A.; Liang, Mengning; Stellato, Francesco; Bari, Sadia; Iwan, Bianca; Andreasson, Jakob; Timneanu, Nicusor; Bielecki, Johan; Westphal, Daniel; Nunes de Almeida, Francisca; Odić, Duško; Hasse, Dirk; Carlsson, Gunilla H.; Larsson, Daniel S. D.; Barty, Anton; Martin, Andrew V.; Schorb, Sebastian; Bostedt, Christoph; Bozek, John D.; Carron, Sebastian; Ferguson, Ken; Rolles, Daniel; Rudenko, Artem; Epp, Sascha W.; Foucar, Lutz; Rudek, Benedikt; Erk, Benjamin; Hartmann, Robert; Kimmel, Nils; Holl, Peter; Englert, Lars; Loh, N. Duane; Chapman, Henry N.; Andersson, Inger; Hajdu, Janos; Ekeberg, Tomas

    2016-08-01

    Structural studies on living cells by conventional methods are limited to low resolution because radiation damage kills cells long before the necessary dose for high resolution can be delivered. X-ray free-electron lasers circumvent this problem by outrunning key damage processes with an ultra-short and extremely bright coherent X-ray pulse. Diffraction-before-destruction experiments provide high-resolution data from cells that are alive when the femtosecond X-ray pulse traverses the sample. This paper presents two data sets from micron-sized cyanobacteria obtained at the Linac Coherent Light Source, containing a total of 199,000 diffraction patterns. Utilizing this type of diffraction data will require the development of new analysis methods and algorithms for studying structure and structural variability in large populations of cells and to create abstract models. Such studies will allow us to understand living cells and populations of cells in new ways. New X-ray lasers, like the European XFEL, will produce billions of pulses per day, and could open new areas in structural sciences.

  16. Open data set of live cyanobacterial cells imaged using an X-ray laser.

    PubMed

    van der Schot, Gijs; Svenda, Martin; Maia, Filipe R N C; Hantke, Max F; DePonte, Daniel P; Seibert, M Marvin; Aquila, Andrew; Schulz, Joachim; Kirian, Richard A; Liang, Mengning; Stellato, Francesco; Bari, Sadia; Iwan, Bianca; Andreasson, Jakob; Timneanu, Nicusor; Bielecki, Johan; Westphal, Daniel; Nunes de Almeida, Francisca; Odić, Duško; Hasse, Dirk; Carlsson, Gunilla H; Larsson, Daniel S D; Barty, Anton; Martin, Andrew V; Schorb, Sebastian; Bostedt, Christoph; Bozek, John D; Carron, Sebastian; Ferguson, Ken; Rolles, Daniel; Rudenko, Artem; Epp, Sascha W; Foucar, Lutz; Rudek, Benedikt; Erk, Benjamin; Hartmann, Robert; Kimmel, Nils; Holl, Peter; Englert, Lars; Loh, N Duane; Chapman, Henry N; Andersson, Inger; Hajdu, Janos; Ekeberg, Tomas

    2016-08-01

    Structural studies on living cells by conventional methods are limited to low resolution because radiation damage kills cells long before the necessary dose for high resolution can be delivered. X-ray free-electron lasers circumvent this problem by outrunning key damage processes with an ultra-short and extremely bright coherent X-ray pulse. Diffraction-before-destruction experiments provide high-resolution data from cells that are alive when the femtosecond X-ray pulse traverses the sample. This paper presents two data sets from micron-sized cyanobacteria obtained at the Linac Coherent Light Source, containing a total of 199,000 diffraction patterns. Utilizing this type of diffraction data will require the development of new analysis methods and algorithms for studying structure and structural variability in large populations of cells and to create abstract models. Such studies will allow us to understand living cells and populations of cells in new ways. New X-ray lasers, like the European XFEL, will produce billions of pulses per day, and could open new areas in structural sciences.

  17. On Ambiguities in SAR Design

    NASA Technical Reports Server (NTRS)

    Freeman, Anthony

    2006-01-01

    Ambiguities are an aliasing effect caused by the periodic sampling of the scene backscatter inherent to pulsed radar systems such as Synthetic Aperture Radar (SAR). In this paper we take a fresh look at the relationship between SAR range and azimuth ambiguity constraints on the allowable pulse repetition frequency (PRF) and the antenna length. We show that for high squint angles smaller antennas may be feasible in some cases. For some applications, the ability to form a synthetic aperture at high squint angles is desirable, but the size of the antenna causes problems in the design of systems capable of such operation, because the SAR system design is optimized for a side-looking geometry. In two design examples we take a suboptimum antenna size and examine the performance in terms of azimuth resolution and swath width as a function of squint angle. We show that for stripmap SARs, the swath width is usually worse for off-boresight squint angles because it is severely limited by range walk, except in cases where we relax the spatial resolution. We consider the implications for the design of modest-resolution, narrow-swath, scanning SAR scatterometers.

  18. Secure steganography designed for mobile platforms

    NASA Astrophysics Data System (ADS)

    Agaian, Sos S.; Cherukuri, Ravindranath; Sifuentes, Ronnie R.

    2006-05-01

    Adaptive steganography, an intelligent approach to message hiding, integrated with matrix encoding and pn-sequences serves as a promising resolution to recent security assurance concerns. Incorporating the above data hiding concepts with established cryptographic protocols in wireless communication would greatly increase the security and privacy of transmitting sensitive information. We present an algorithm which will address the following problems: 1) low embedding capacity in mobile devices due to fixed image dimensions and memory constraints, 2) compatibility between mobile and land based desktop computers, and 3) detection of stego images by widely available steganalysis software [1-3]. Consistent with the smaller available memory, processor capabilities, and limited resolution associated with mobile devices, we propose a more magnified approach to steganography by focusing adaptive efforts at the pixel level. This deeper method, in comparison to the block processing techniques commonly found in existing adaptive methods, allows an increase in capacity while still offering a desired level of security. Based on computer simulations using high resolution, natural imagery and mobile device captured images, comparisons show that the proposed method securely allows an increased amount of embedding capacity but still avoids detection by varying steganalysis techniques.

  19. A fast mass spring model solver for high-resolution elastic objects

    NASA Astrophysics Data System (ADS)

    Zheng, Mianlun; Yuan, Zhiyong; Zhu, Weixu; Zhang, Guian

    2017-03-01

    Real-time simulation of elastic objects is of great importance for computer graphics and virtual reality applications. The fast mass spring model solver can achieve visually realistic simulation in an efficient way. Unfortunately, this method suffers from resolution limitations and a lack of mechanical realism for a surface geometry model, which greatly restricts its application. To tackle these problems, in this paper we propose a fast mass spring model solver for high-resolution elastic objects. First, we project the complex surface geometry model into a set of uniform grid cells serving as cages, via the mean value coordinates method, to reflect its internal structure and mechanical properties. Then, we replace the original Cholesky decomposition method in the fast mass spring model solver with a conjugate gradient method, which makes the solver more efficient for detailed surface geometry models. Finally, we propose a graphics processing unit accelerated parallel algorithm for the conjugate gradient method. Experimental results show that our method can realize efficient deformation simulation of 3D elastic objects with visual realism and physical fidelity, and has great potential for applications in computer animation.
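
    The Cholesky-to-CG swap concerns the symmetric positive definite linear system of the solver's global step. A plain conjugate gradient on a toy spring-chain system, with illustrative sizes and constants, looks like:

```python
import numpy as np

# Conjugate gradient for an SPD system of the form (M + h^2 k L) x = b,
# as arises in fast mass-spring global solves; the 1D spring chain below
# is a toy stand-in for a high-resolution mesh.
def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

n, h, k = 100, 0.01, 1000.0
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # chain graph Laplacian
A = np.eye(n) + h**2 * k * L                          # SPD system matrix
b = np.ones(n)
x = conjugate_gradient(A, b)
print(np.linalg.norm(A @ x - b))
```

    Because CG only needs matrix-vector products, it avoids the fill-in and refactorization costs of a direct Cholesky solve and parallelizes naturally on a GPU, which is the design choice the abstract describes.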

  20. Open data set of live cyanobacterial cells imaged using an X-ray laser

    PubMed Central

    van der Schot, Gijs; Svenda, Martin; Maia, Filipe R.N.C.; Hantke, Max F.; DePonte, Daniel P.; Seibert, M. Marvin; Aquila, Andrew; Schulz, Joachim; Kirian, Richard A.; Liang, Mengning; Stellato, Francesco; Bari, Sadia; Iwan, Bianca; Andreasson, Jakob; Timneanu, Nicusor; Bielecki, Johan; Westphal, Daniel; Nunes de Almeida, Francisca; Odić, Duško; Hasse, Dirk; Carlsson, Gunilla H.; Larsson, Daniel S.D.; Barty, Anton; Martin, Andrew V.; Schorb, Sebastian; Bostedt, Christoph; Bozek, John D.; Carron, Sebastian; Ferguson, Ken; Rolles, Daniel; Rudenko, Artem; Epp, Sascha W.; Foucar, Lutz; Rudek, Benedikt; Erk, Benjamin; Hartmann, Robert; Kimmel, Nils; Holl, Peter; Englert, Lars; Loh, N. Duane; Chapman, Henry N.; Andersson, Inger; Hajdu, Janos; Ekeberg, Tomas

    2016-01-01

    Structural studies on living cells by conventional methods are limited to low resolution because radiation damage kills cells long before the necessary dose for high resolution can be delivered. X-ray free-electron lasers circumvent this problem by outrunning key damage processes with an ultra-short and extremely bright coherent X-ray pulse. Diffraction-before-destruction experiments provide high-resolution data from cells that are alive when the femtosecond X-ray pulse traverses the sample. This paper presents two data sets from micron-sized cyanobacteria obtained at the Linac Coherent Light Source, containing a total of 199,000 diffraction patterns. Utilizing this type of diffraction data will require the development of new analysis methods and algorithms for studying structure and structural variability in large populations of cells and to create abstract models. Such studies will allow us to understand living cells and populations of cells in new ways. New X-ray lasers, like the European XFEL, will produce billions of pulses per day, and could open new areas in structural sciences. PMID:27479514

  1. Problem of time: facets and Machian strategy.

    PubMed

    Anderson, Edward

    2014-10-01

    The problem of time is that the notions of "time" in ordinary quantum theory and in general relativity are mutually incompatible. This causes difficulties in trying to put these two theories together to form a theory of quantum gravity. The problem of time has eight facets in canonical approaches. I clarify that all but one of these facets already occur at the classical level, and reconceptualize and rename some of them as follows. The frozen formalism problem becomes temporal relationalism, and the thin sandwich problem becomes configurational relationalism, via the notion of best matching. The problem of observables becomes the problem of beables, and the functional evolution problem becomes the constraint closure problem. I also outline how the global and multiple-choice problems of time each have their own plurality of facets. This article additionally contains a local resolution to the problem of time at the conceptual level, which is actually realizable for the relational triangle and minisuperspace models. This resolution is, moreover, Machian, and has three levels: classical, semiclassical, and a combined semiclassical-histories-timeless records scheme. I end by delineating the current frontiers of this program toward resolution of the problem of time in the cases of full general relativity and of slightly inhomogeneous cosmology. © 2014 New York Academy of Sciences.

  2. Hybrid Multiscale Finite Volume method for multiresolution simulations of flow and reactive transport in porous media

    NASA Astrophysics Data System (ADS)

    Barajas-Solano, D. A.; Tartakovsky, A. M.

    2017-12-01

    We present a multiresolution method for the numerical simulation of flow and reactive transport in porous, heterogeneous media, based on the hybrid Multiscale Finite Volume (h-MsFV) algorithm. The h-MsFV algorithm allows us to couple high-resolution (fine-scale) flow and transport models with lower-resolution (coarse) models to locally refine both spatial resolution and transport models. The fine-scale problem is decomposed into various "local" problems solved independently in parallel and coordinated via a "global" problem. This global problem is then coupled with the coarse model to strictly ensure domain-wide coarse-scale mass conservation. The proposed method provides an alternative to adaptive mesh refinement (AMR), due to its capacity to rapidly refine spatial resolution beyond what is possible with state-of-the-art AMR techniques, and its capability to locally swap transport models. We illustrate our method by applying it to groundwater flow and reactive transport of multiple species.

  3. Development and Operation of a High Resolution Positron Emission Tomography System to Perform Metabolic Studies on Small Animals.

    NASA Astrophysics Data System (ADS)

    Hogan, Matthew John

    A positron emission tomography system designed to perform high resolution imaging of small volumes has been characterized. Two large area planar detectors, used to detect the annihilation gamma rays, formed a large aperture stationary positron camera. The detectors were multiwire proportional chambers coupled to high density lead stack converters. Detector efficiency was 8%. The coincidence resolving time was 500 nsec. The maximum system sensitivity was 60 cps/μCi for a solid angle of acceptance of 0.74π sr. The maximum useful coincidence count rate was 1500 cps and was limited by electronic dead time. Image reconstruction was done by performing a 3-dimensional deconvolution using Fourier transform methods. Noise propagation during reconstruction was minimized by choosing a 'minimum norm' reconstructed image. In the stationary detector system (with a limited angle of acceptance for coincident events) statistical uncertainty in the data limited reconstruction in the direction normal to the detector surfaces. Data from a rotated phantom showed that detector rotation will correct this problem. Resolution was 4 mm in planes parallel to the detectors and ~15 mm in the normal direction. Compton scattering of gamma rays within a source distribution was investigated using both simulated and measured data. Attenuation due to scatter was as high as 60%. For small volume imaging the Compton background was identified and an approximate correction was performed. A semiquantitative blood flow measurement to bone in the leg of a cat using the ¹⁸F⁻ ion was performed. The results were comparable to investigations using more conventional techniques. Qualitative scans using ¹⁸F-labelled deoxy-D-glucose to assess brain glucose metabolism in a rhesus monkey were also performed.

  4. Rock Abrasion on Mars: Clues from the Pathfinder and Viking Landing Sites

    NASA Technical Reports Server (NTRS)

    Bridges, N. T.; Parker, T. J.; Kramer, G. M.

    2000-01-01

    A significant discovery of the Mars Pathfinder (MPF) mission was that many rocks exhibit characteristics of ventifacts, rocks that have been sculpted by saltating particles. Diagnostic features identifying the rocks as ventifacts are elongated pits, flutes, and grooves (collectively referred to as "flutes" unless noted otherwise). Faceted rocks or rock portions, circular pits, rills, and possibly polished rock surfaces are also seen and could be due to aeolian abrasion. Many of these features were initially identified in rover images, where spatial resolution generally exceeded that of the IMP (Imager for Mars Pathfinder) camera. These images had two major limitations: 1) only a limited number of rocks were viewed by the rover, biasing flute statistics; and 2) the higher resolution obtained by the rover images and the lack of such pictures at the Viking landing sites hampered comparisons of rock morphologies between the Pathfinder and Viking sites. To avoid this problem, rock morphology and ventifact statistics have been examined using new "super-resolution" IMP and Viking Lander images. Analyses of these images show that: 1) flutes are seen on about 50% or more of the rocks in the near field at the MPF site; 2) the orientation of these flutes is similar to that for flutes identified in rover images; and 3) ventifacts are significantly more abundant at the Pathfinder landing site than at the two Viking landing sites, where rocks have undergone only a limited amount of aeolian abrasion. This is most likely due to the ruggedness of the Pathfinder site and a greater supply of abrading particles available shortly after the Ares and Tiu Valles outflow channel floods.

  5. Using sea surface temperatures to improve performance of single dynamical downscaling model in flood simulation under climate change

    NASA Astrophysics Data System (ADS)

    Chao, Y.; Cheng, C. T.; Hsiao, Y. H.; Hsu, C. T.; Yeh, K. C.; Liu, P. L.

    2017-12-01

    On average, 5.3 typhoons hit Taiwan per year in the last decade. Typhoon Morakot in 2009, the most severe, caused huge damage in Taiwan, including 677 casualties and roughly NT$110 billion (3.3 billion USD) in economic loss. Some studies have documented that typhoon frequency will decrease but intensity will increase in the western North Pacific region. It is usually preferable to use a high-resolution dynamical model to obtain better projections of extreme events, because coarse-resolution models cannot simulate intense extremes. Under that consideration, dynamically downscaled climate data were chosen to describe typhoons satisfactorily; this research used simulation data from the AGCM of the Meteorological Research Institute (MRI-AGCM). Because dynamical downscaling methods consume massive computing power, and the number of typhoons in a single model simulation is very limited, using dynamically downscaled data can introduce uncertainty into disaster risk assessment. To mitigate this problem, this research used four sea surface temperatures (SSTs) to increase the number of climate change scenarios under RCP 8.5. In this way, the MRI-AGCMs project 191 extreme typhoons affecting Taiwan (when a typhoon center enters the 300 km sea area around Taiwan) in the late 21st century. SOBEK, a two-dimensional flood simulation model, was used to assess flood risk under the four SST climate change scenarios in Tainan, Taiwan. The results show that the uncertainty of future flood risk assessment for Tainan, Taiwan in the late 21st century is significantly decreased. Four SSTs can efficiently mitigate the problem of limited typhoon numbers in a single model simulation.

  6. Prevalence and pathways of recovery from drug and alcohol problems in the United States population: Implications for practice, research, and policy.

    PubMed

    Kelly, John F; Bergman, Brandon; Hoeppner, Bettina B; Vilsaint, Corrie; White, William L

    2017-12-01

    Alcohol and other drug (AOD) problems confer a global, prodigious burden of disease, disability, and premature mortality. Even so, little is known regarding how, and by what means, individuals successfully resolve AOD problems. Greater knowledge would inform policy and guide service provision. Probability-based survey of US adult population estimating: 1) AOD problem resolution prevalence; 2) lifetime use of "assisted" (i.e., treatment/medication, recovery services/mutual help) vs. "unassisted" resolution pathways; 3) correlates of assisted pathway use. Participants (response=63.4% of 39,809) responding "yes" to, "Did you use to have a problem with alcohol or drugs but no longer do?" assessed on substance use, clinical histories, problem resolution. Weighted prevalence of problem resolution was 9.1%, with 46% self-identifying as "in recovery"; 53.9% reported "assisted" pathway use. Most utilized support was mutual-help (45.1%,SE=1.6), followed by treatment (27.6%,SE=1.4), and emerging recovery support services (21.8%,SE=1.4), including recovery community centers (6.2%,SE=0.9). Strongest correlates of "assisted" pathway use were lifetime AOD diagnosis (AOR=10.8[7.42-15.74], model R2=0.13), drug court involvement (AOR=8.1[5.2-12.6], model R2=0.10), and, inversely, absence of lifetime psychiatric diagnosis (AOR=0.3[0.2-0.3], model R2=0.10). Compared to those with primary alcohol problems, those with primary cannabis problems were less likely (AOR=0.7[0.5-0.9]) and those with opioid problems were more likely (AOR=2.2[1.4-3.4]) to use assisted pathways. Indices related to severity were related to assisted pathways (R2<0.03). Tens of millions of Americans have successfully resolved an AOD problem using a variety of traditional and non-traditional means. Findings suggest a need for a broadening of the menu of self-change and community-based options that can facilitate and support long-term AOD problem resolution. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Nonnegative constraint quadratic program technique to enhance the resolution of γ spectra

    NASA Astrophysics Data System (ADS)

    Li, Jinglun; Xiao, Wuyun; Ai, Xianyun; Chen, Ye

    2018-04-01

    Two formulations, the nonnegative least squares problem (NNLS) and the linear complementarity problem (LCP), are introduced for resolution enhancement of γ spectra. The corresponding algorithms, the active set method and the primal-dual interior point method, are applied to solve these two problems. Mathematically, the nonnegativity constraint induces sparsity in the optimal solution of the deconvolution, and it is this sparsity that enhances the resolution. Finally, a comparison in peak position accuracy and computation time is made between these two methods and the boosted L-R and Gold methods.
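The resolution-enhancement mechanism this record describes, in which the nonnegativity constraint makes the deconvolved spectrum sparse, can be illustrated with a minimal NumPy/SciPy sketch. The peak positions, amplitudes, and Gaussian detector response below are invented for illustration; this is not the authors' implementation.

```python
import numpy as np
from scipy.optimize import nnls

# Simulated gamma spectrum: two close delta-like peaks (hypothetical
# positions and amplitudes, chosen so the broadened peaks overlap).
n = 120
true = np.zeros(n)
true[40] = 100.0
true[46] = 60.0

# Detector response: Gaussian broadening -> convolution matrix A.
x = np.arange(n)
sigma = 2.0
A = np.exp(-0.5 * ((x[:, None] - x[None, :]) / sigma) ** 2)
A /= A.sum(axis=0)

measured = A @ true  # broadened spectrum in which the peaks merge

# Nonnegative least squares: min ||A s - measured||  s.t.  s >= 0.
# The nonnegativity constraint yields a sparse solution that separates
# the two peaks, which is exactly the resolution-enhancement effect.
s, residual = nnls(A, measured)

print("strongest recovered peak at channel", int(np.argmax(s)))
```

With noisy data, some regularization (or the LCP/interior-point route the record compares against) is typically needed; the noiseless case above isolates the sparsity effect.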

  8. Electronic health record meets digital library: a new environment for achieving an old goal.

    PubMed

    Humphreys, B L

    2000-01-01

    Linking the electronic health record to the digital library is a Web-era reformulation of the long-standing informatics goal of seamless integration of automated clinical data and relevant knowledge-based information to support informed decisions. The spread of the Internet, the development of the World Wide Web, and converging format standards for electronic health data and digital publications make effective linking increasingly feasible. Some existing systems link electronic health data and knowledge-based information in limited settings or limited ways. Yet many challenging informatics research problems remain to be solved before flexible and seamless linking becomes a reality and before systems become capable of delivering the specific piece of information needed at the time and place a decision must be made. Connecting the electronic health record to the digital library also requires positive resolution of important policy issues, including health data privacy, government encouragement of high-speed communications, electronic intellectual property rights, and standards for health data and for digital libraries. Both the research problems and the policy issues should be important priorities for the field of medical informatics.

  9. Electronic Health Record Meets Digital Library

    PubMed Central

    Humphreys, Betsy L.

    2000-01-01

    Linking the electronic health record to the digital library is a Web-era reformulation of the long-standing informatics goal of seamless integration of automated clinical data and relevant knowledge-based information to support informed decisions. The spread of the Internet, the development of the World Wide Web, and converging format standards for electronic health data and digital publications make effective linking increasingly feasible. Some existing systems link electronic health data and knowledge-based information in limited settings or limited ways. Yet many challenging informatics research problems remain to be solved before flexible and seamless linking becomes a reality and before systems become capable of delivering the specific piece of information needed at the time and place a decision must be made. Connecting the electronic health record to the digital library also requires positive resolution of important policy issues, including health data privacy, government encouragement of high-speed communications, electronic intellectual property rights, and standards for health data and for digital libraries. Both the research problems and the policy issues should be important priorities for the field of medical informatics. PMID:10984463

  10. Beyond MOS and Fibers: Wide-FoV Imaging Fourier Transform Spectroscopy - an Instrumentation Proposal for the Present and Future Mexican Telescopes

    NASA Astrophysics Data System (ADS)

    Rosales-Ortega, F. F.; Castillo, E.; Sánchez, S. F.; Iglesias-Páramo, J.; Mollá, J. I. M.; Chávez, M.

    2016-10-01

    In order to extend the current suite of instruments offered in the Observatorio Astrofísico Guillermo Haro (OAGH) in Cananea, Mexico (INAOE), and to explore a second-generation instrument for the future 6.5 m Telescopio San Pedro Martir (TSPM), we propose a prototype instrument that will provide unbiased wide-field (few arcmin) spectroscopic information, with the flexibility of operating at different spectral resolutions (R ~ 1-10^4), with a spatial resolution limited by seeing, and therefore suited to a wide range of astronomical problems. This instrument will make use of the Fourier Transform Spectroscopy technique, which has been shown to be feasible in the optical wavelength range. Here we give a basic technical description of a Fourier transform spectrograph, as well as its technical advantages and weaknesses, and the science cases in which this instrument can be implemented.

  11. The design and evaluation of grazing incidence relay optics

    NASA Technical Reports Server (NTRS)

    Davis, John M.; Chase, R. C.; Silk, J. K.; Krieger, A. S.

    1989-01-01

    X-ray astronomy, both solar and celestial, has many needs for high spatial resolution observations which have to be performed with electronic detectors. If the resolution is not to be detector limited, plate scales in excess of 25 microns/arcsec, corresponding to focal lengths greater than 5 m, are required. In situations where the physical size is restricted, the problem can be solved by the use of grazing incidence relay optics. A system was developed which employs externally polished hyperboloid-hyperboloid surfaces to be used in conjunction with a Wolter-Schwarzschild primary. The secondary is located in front of the primary focus and provides a magnification of 4, while the system has a plate scale of 28 microns/arcsec and a length of 1.9 m. The design, tolerance specification, fabrication and performance at visible and X-ray wavelengths of this optical system are described.

  12. Artificial fluid properties for large-eddy simulation of compressible turbulent mixing

    NASA Astrophysics Data System (ADS)

    Cook, Andrew W.

    2007-05-01

    An alternative methodology is described for large-eddy simulation (LES) of flows involving shocks, turbulence, and mixing. In lieu of filtering the governing equations, it is postulated that the large-scale behavior of a LES fluid, i.e., a fluid with artificial properties, will be similar to that of a real fluid, provided the artificial properties obey certain constraints. The artificial properties consist of modifications to the shear viscosity, bulk viscosity, thermal conductivity, and species diffusivity of a fluid. The modified transport coefficients are designed to damp out high wavenumber modes, close to the resolution limit, without corrupting lower modes. Requisite behavior of the artificial properties is discussed and results are shown for a variety of test problems, each designed to exercise different aspects of the models. When combined with a tenth-order compact scheme, the overall method exhibits excellent resolution characteristics for turbulent mixing, while capturing shocks and material interfaces in a crisp fashion.
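The core idea of this record, damping modes near the resolution limit without corrupting lower modes, can be sketched with a simple high-order spectral filter. This is a schematic stand-in, not Cook's actual artificial transport coefficients; the filter order and cutoff below are invented choices.

```python
import numpy as np

# Toy 1-D illustration of damping wavenumbers near the grid cutoff
# (Nyquist) while leaving well-resolved low-wavenumber modes intact.
N = 256
x = np.linspace(0, 2 * np.pi, N, endpoint=False)

u_smooth = np.sin(3 * x)                  # well-resolved mode (k = 3)
u_noise = 0.3 * np.cos((N // 2 - 1) * x)  # mode at the resolution limit
u = u_smooth + u_noise

k = np.fft.fftfreq(N, d=1.0 / N)          # integer wavenumbers
kmax = N // 2

# Damping factor ~ exp(-(|k|/kc)^p): ~1 for low k, ~0 near the cutoff.
# A high order p confines the dissipation to the highest wavenumbers,
# mimicking transport coefficients that act only near the grid scale.
p, kc = 16, 0.8 * kmax
G = np.exp(-(np.abs(k) / kc) ** p)

u_filtered = np.real(np.fft.ifft(G * np.fft.fft(u)))

low_err = np.max(np.abs(u_filtered - u_smooth))
print(f"residual after filtering: {low_err:.3e}")
```

The grid-scale mode is removed almost entirely while the k = 3 mode is untouched to machine precision, which is the behavior required of the artificial properties.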

  13. EBIC spectroscopy - A new approach to microscale characterization of deep levels in semi-insulating GaAs

    NASA Technical Reports Server (NTRS)

    Li, C.-J.; Sun, Q.; Lagowski, J.; Gatos, H. C.

    1985-01-01

    The microscale characterization of electronic defects in semi-insulating (SI) GaAs has been a challenging issue in connection with materials problems encountered in GaAs IC technology. The main obstacle limiting the applicability of high-resolution electron beam methods such as Electron Beam-Induced Current (EBIC) and cathodoluminescence (CL) is the low concentration of free carriers in SI GaAs. The present paper provides a new photo-EBIC characterization approach which combines the spectroscopic advantages of optical methods with the high spatial resolution and scanning capability of EBIC. A scanning electron microscope modified for electronic characterization studies is shown schematically. The instrument can operate in the standard SEM mode, in the EBIC modes (including photo-EBIC and thermally stimulated EBIC (TS-EBIC)), and in the CL and scanning modes. Attention is given to the use of the CL, photo-EBIC, and TS-EBIC techniques.

  14. Temporal sparsity exploiting nonlocal regularization for 4D computed tomography reconstruction

    PubMed Central

    Kazantsev, Daniil; Guo, Enyu; Kaestner, Anders; Lionheart, William R. B.; Bent, Julian; Withers, Philip J.; Lee, Peter D.

    2016-01-01

    X-ray imaging applications in medical and material sciences are frequently limited by the number of tomographic projections collected. The inversion of the limited projection data is an ill-posed problem and needs regularization. Traditional spatial regularization is not well adapted to the dynamic nature of time-lapse tomography since it discards the redundancy of the temporal information. In this paper, we propose a novel iterative reconstruction algorithm with a nonlocal regularization term to account for time-evolving datasets. The aim of the proposed nonlocal penalty is to collect the maximum relevant information in the spatial and temporal domains. With the proposed sparsity seeking approach in the temporal space, the computational complexity of the classical nonlocal regularizer is substantially reduced (at least by one order of magnitude). The presented reconstruction method can be directly applied to various big data 4D (x, y, z+time) tomographic experiments in many fields. We apply the proposed technique to modelled data and to real dynamic X-ray microtomography (XMT) data of high resolution. Compared to the classical spatio-temporal nonlocal regularization approach, the proposed method delivers reconstructed images of improved resolution and higher contrast while remaining significantly less computationally demanding. PMID:27002902
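The temporal nonlocal weighting that this record's regularizer relies on, collecting information from temporally similar frames while ignoring dissimilar ones, can be sketched in a toy nonlocal-means-style form. The 1-D frames, noise level, and bandwidth h below are invented; this is an illustration of the weighting idea, not the authors' iterative reconstruction algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stack of T "frames" (1-D for brevity): the scene repeats over
# time with one abrupt change, so temporally similar frames are highly
# redundant -- the redundancy a temporal nonlocal penalty exploits.
T, N = 12, 64
clean = np.sin(np.linspace(0, 4 * np.pi, N))
frames = np.tile(clean, (T, 1))
frames[8:] += 1.0                        # scene change at t = 8
noisy = frames + 0.1 * rng.standard_normal((T, N))

# Nonlocal weights between frames: w_ts ~ exp(-||f_t - f_s||^2 / h^2).
# Frames across the scene change get negligible weight, so averaging
# does not smear the change -- unlike a plain temporal mean.
h = 2.4
d2 = ((noisy[:, None, :] - noisy[None, :, :]) ** 2).sum(axis=2)
W = np.exp(-d2 / h ** 2)
W /= W.sum(axis=1, keepdims=True)

denoised = W @ noisy

err_nl = np.abs(denoised - frames).mean()
err_noisy = np.abs(noisy - frames).mean()
print(f"noisy error {err_noisy:.3f} -> nonlocal error {err_nl:.3f}")
```

In the paper this weighting enters as a penalty inside an iterative reconstruction; the sketch only shows why temporally nonlocal averaging preserves time-evolving structure.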

  15. Pitfalls and Limitations in the Interpretation of Geophysical Images for Hydrologic Properties and Processes

    NASA Astrophysics Data System (ADS)

    Day-Lewis, F. D.

    2014-12-01

    Geophysical imaging (e.g., electrical, radar, seismic) can provide valuable information for the characterization of hydrologic properties and monitoring of hydrologic processes, as evidenced in the rapid growth of literature on the subject. Geophysical imaging has been used for monitoring tracer migration and infiltration, mapping zones of focused groundwater/surface-water exchange, and verifying emplacement of amendments for bioremediation. Despite the enormous potential for extraction of hydrologic information from geophysical images, there also is potential for misinterpretation and over-interpretation. These concerns are particularly relevant when geophysical results are used within quantitative frameworks, e.g., conversion to hydrologic properties through petrophysical relations, geostatistical estimation and simulation conditioned to geophysical inversions, and joint inversion. We review pitfalls to interpretation associated with limited image resolution, spatially variable image resolution, incorrect data weighting, errors in the timing of measurements, temporal smearing resulting from changes during data acquisition, support-volume/scale effects, and incorrect assumptions or approximations involved in modeling geophysical or other jointly inverted data. A series of numerical and field-based examples illustrate these potential problems. Our goal in this talk is to raise awareness of common pitfalls and present strategies for recognizing and avoiding them.

  16. Deriving temporally continuous soil moisture estimations at fine resolution by downscaling remotely sensed product

    NASA Astrophysics Data System (ADS)

    Jin, Yan; Ge, Yong; Wang, Jianghao; Heuvelink, Gerard B. M.

    2018-06-01

    Land surface soil moisture (SSM) has important roles in the energy balance of the land surface and in the water cycle. Downscaling of coarse-resolution SSM remote sensing products is an efficient way for producing fine-resolution data. However, the downscaling methods used most widely require full-coverage visible/infrared satellite data as ancillary information. These methods are restricted to cloud-free days, making them unsuitable for continuous monitoring. The purpose of this study is to overcome this limitation to obtain temporally continuous fine-resolution SSM estimations. The local spatial heterogeneities of SSM and multiscale ancillary variables were considered in the downscaling process both to solve the problem of the strong variability of SSM and to benefit from the fusion of ancillary information. The generation of continuous downscaled remote sensing data was achieved via two principal steps. For cloud-free days, a stepwise hybrid geostatistical downscaling approach, based on geographically weighted area-to-area regression kriging (GWATARK), was employed by combining multiscale ancillary variables with passive microwave remote sensing data. Then, the GWATARK-estimated SSM and China Soil Moisture Dataset from Microwave Data Assimilation SSM data were combined to estimate fine-resolution data for cloudy days. The developed methodology was validated by application to the 25-km resolution daily AMSR-E SSM product to produce continuous SSM estimations at 1-km resolution over the Tibetan Plateau. In comparison with ground-based observations, the downscaled estimations showed correlation (R ≥ 0.7) for both ascending and descending overpasses. The analysis indicated the high potential of the proposed approach for producing a temporally continuous SSM product at fine spatial resolution.
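The regression-plus-residual core of such downscaling schemes, without the geographic weighting and area-to-area kriging of GWATARK, can be sketched as follows. The grid sizes, the NDVI-like covariate, and the linear relationship below are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Fine-resolution ancillary variable (a hypothetical NDVI-like
# covariate) on a 40x40 grid; coarse SSM is "observed" on 8x8 blocks
# (block size 5), mimicking a coarse microwave product.
F, B = 40, 5
ancillary = rng.random((F, F))
ssm_fine_true = 0.1 + 0.3 * ancillary    # hidden truth, for checking only

def block_mean(a, b):
    """Aggregate a fine grid to coarse blocks of size b."""
    return a.reshape(a.shape[0] // b, b, a.shape[1] // b, b).mean(axis=(1, 3))

ssm_coarse = block_mean(ssm_fine_true, B)

# 1) Regress coarse SSM on the coarse-aggregated ancillary variable.
slope, intercept = np.polyfit(block_mean(ancillary, B).ravel(),
                              ssm_coarse.ravel(), 1)

# 2) Apply the regression at fine resolution.
trend_fine = intercept + slope * ancillary

# 3) Add back the coarse residual (a crude area-to-point step) so the
#    downscaled field re-aggregates exactly to the observed product.
resid_coarse = ssm_coarse - block_mean(trend_fine, B)
ssm_fine = trend_fine + np.kron(resid_coarse, np.ones((B, B)))

print("max aggregation error:",
      np.max(np.abs(block_mean(ssm_fine, B) - ssm_coarse)))
```

The re-aggregation check is the property that makes residual correction worthwhile: the fine field stays consistent with the coarse product it was derived from.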

  17. Weak data do not make a free lunch, only a cheap meal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, Zhipu; Rajashankar, Kanagalaghatta; Dauter, Zbigniew, E-mail: dauter@anl.gov

    2014-02-01

    Refinement and analysis of four structures with various data resolution cutoffs suggests that at present there are no reliable criteria for judging the diffraction data resolution limit and the condition I/σ(I) = 2.0 is reasonable. However, extending the limit by about 0.2 Å beyond the resolution defined by this threshold does not deteriorate the quality of refined structures and in some cases may be beneficial. Four data sets were processed at resolutions significantly exceeding the criteria traditionally used for estimating the diffraction data resolution limit. The analysis of these data and the corresponding model-quality indicators suggests that the criteria of resolution limits widely adopted in the past may be somewhat conservative. Various parameters, such as R_merge and I/σ(I), optical resolution and the correlation coefficients CC_1/2 and CC*, can be used for judging the internal data quality, whereas the reliability factors R and R_free as well as the maximum-likelihood target values and real-space map correlation coefficients can be used to estimate the agreement between the data and the refined model. However, none of these criteria provide a reliable estimate of the data resolution cutoff limit. The analysis suggests that extension of the maximum resolution by about 0.2 Å beyond the currently adopted limit where the I/σ(I) value drops to 2.0 does not degrade the quality of the refined structural models, but may sometimes be advantageous. Such an extension may be particularly beneficial for significantly anisotropic diffraction. Extension of the maximum resolution at the stage of data collection and structure refinement is cheap in terms of the required effort and is definitely more advisable than accepting a too conservative resolution cutoff, which is unfortunately quite frequent among the crystal structures deposited in the Protein Data Bank.
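The conventional I/σ(I) = 2.0 shell cutoff discussed in this record can be sketched on synthetic data. The Wilson-like intensity falloff, the B value, and the constant noise level below are invented, and real data-processing programs bin reflections and estimate σ far more carefully.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic reflection list: resolution d (Angstrom) and intensities
# whose signal decays with resolution (a crude Wilson-like falloff,
# hypothetical B = 20 A^2) while the noise level stays constant, so
# <I/sigma(I)> drops in the high-resolution shells.
n = 20000
d = rng.uniform(1.2, 20.0, n)
s2 = 1.0 / d ** 2                       # (1/d)^2
I = 1000.0 * np.exp(-10.0 * s2)         # exp(-B * s2 / 2) with B = 20
sigma = np.full(n, 2.0)

# Bin into shells of equal width in 1/d^2 and report the first shell
# whose mean I/sigma(I) falls below the traditional 2.0 threshold.
nshell = 30
edges = np.linspace(s2.min(), s2.max(), nshell + 1)
shell = np.clip(np.digitize(s2, edges) - 1, 0, nshell - 1)

cutoff = None
for j in range(nshell):
    if (I[shell == j] / sigma[shell == j]).mean() < 2.0:
        cutoff = 1.0 / np.sqrt(edges[j])   # d at the shell's low edge
        break

print(f"suggested cutoff: {cutoff:.2f} A")
```

The record's point is precisely that such a cutoff is only a convention: pushing ~0.2 Å past it did not degrade the refined models in the study.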

  18. Retinal Prosthetics, Optogenetics, and Chemical Photoswitches

    PubMed Central

    2015-01-01

    Three technologies have emerged as therapies to restore light sensing to profoundly blind patients suffering from late-stage retinal degenerations: (1) retinal prosthetics, (2) optogenetics, and (3) chemical photoswitches. Prosthetics are the most mature and the only approach in clinical practice. Prosthetic implants require complex surgical intervention and provide only limited visual resolution but can potentially restore navigational ability to many blind patients. Optogenetics uses viral delivery of type 1 opsin genes from prokaryotes or eukaryote algae to restore light responses in survivor neurons. Targeting and expression remain major problems, but are potentially soluble. Importantly, optogenetics could provide the ultimate in high-resolution vision due to the long persistence of gene expression achieved in animal models. Nevertheless, optogenetics remains challenging to implement in human eyes with large volumes, complex disease progression, and physical barriers to viral penetration. Now, a new generation of photochromic ligands or chemical photoswitches (azobenzene-quaternary ammonium derivatives) can be injected into a degenerated mouse eye and, in minutes to hours, activate light responses in neurons. These photoswitches offer the potential for rapidly and reversibly screening the vision restoration expected in an individual patient. Chemical photoswitch variants that persist in the cell membrane could make them a simple therapy of choice, with resolution and sensitivity equivalent to optogenetics approaches. A major complexity in treating retinal degenerations is retinal remodeling: pathologic network rewiring, molecular reprogramming, and cell death that compromise signaling in the surviving retina. Remodeling forces a choice between upstream and downstream targeting, each engaging different benefits and defects. 
Prosthetics and optogenetics can be implemented in either mode, but the use of chemical photoswitches is currently limited to downstream implementations. Even so, given the high density of human foveal ganglion cells, the ultimate chemical photoswitch treatment could deliver cost-effective, high-resolution vision for the blind. PMID:25089879

  19. Resolution experiments using the white light speckle method.

    PubMed

    Conley, E; Cloud, G

    1991-03-01

    Noncoherent light speckle methods have been successfully applied to gauge the motion of glaciers and buildings. Resolution of the optical method was limited by the aberrating turbulent atmosphere through which the images were collected. Sensitivity limitations regarding this particular application of speckle interferometry are discussed and analyzed. Resolution limit experiments that were incidental to glacier flow studies are related to the basic theory of astronomical imaging. Optical resolution of the ice flow measurement technique is shown to be in substantial agreement with the sensitivity predictions of astronomy theory.

  20. For how long can we predict the weather? - Insights into atmospheric predictability from global convection-allowing simulations

    NASA Astrophysics Data System (ADS)

    Judt, Falko

    2017-04-01

    A tremendous increase in computing power has facilitated the advent of global convection-resolving numerical weather prediction (NWP) models. Although this technological breakthrough allows for the seamless prediction of weather from local to global scales, the predictability of multiscale weather phenomena in these models is not very well known. To address this issue, we conducted a global high-resolution (4-km) predictability experiment using the Model for Prediction Across Scales (MPAS), a state-of-the-art global NWP model developed at the National Center for Atmospheric Research. The goals of this experiment are to investigate error growth from convective to planetary scales and to quantify the intrinsic, scale-dependent predictability limits of atmospheric motions. The globally uniform resolution of 4 km allows for the explicit treatment of organized deep moist convection, alleviating grave limitations of previous predictability studies that either used high-resolution limited-area models or global simulations with coarser grids and cumulus parameterization. Error growth is analyzed within the context of an "identical twin" experiment setup: the error is defined as the difference between a 20-day long "nature run" and a simulation that was perturbed with small-amplitude noise, but is otherwise identical. It is found that in convectively active regions, errors grow by several orders of magnitude within the first 24 h ("super-exponential growth"). The errors then spread to larger scales and begin a phase of exponential growth after 2-3 days when contaminating the baroclinic zones. After 16 days, the globally averaged error saturates—suggesting that the intrinsic limit of atmospheric predictability (in a general sense) is about two weeks, which is in line with earlier estimates. 
However, error growth rates differ between the tropics and mid-latitudes as well as between the troposphere and stratosphere, highlighting that atmospheric predictability is a complex problem. The comparatively slower error growth in the tropics and in the stratosphere indicates that certain weather phenomena could potentially have longer predictability than currently thought.
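The identical-twin methodology, a perturbed run diverging from a "nature run" until the error saturates, can be illustrated at toy scale with the classic Lorenz-63 system instead of a global NWP model. The perturbation size and integration length below are arbitrary choices.

```python
import numpy as np

def lorenz_rhs(v, s=10.0, r=28.0, b=8.0 / 3.0):
    """Right-hand side of the Lorenz-63 system."""
    x, y, z = v
    return np.array([s * (y - x), x * (r - z) - y, x * y - b * z])

def rk4_step(v, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = lorenz_rhs(v)
    k2 = lorenz_rhs(v + 0.5 * dt * k1)
    k3 = lorenz_rhs(v + 0.5 * dt * k2)
    k4 = lorenz_rhs(v + dt * k3)
    return v + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

dt, nsteps = 0.01, 4000
nature = np.array([1.0, 1.0, 1.0])
twin = nature + np.array([1e-8, 0.0, 0.0])   # small-amplitude perturbation

err = []
for _ in range(nsteps):
    nature = rk4_step(nature, dt)
    twin = rk4_step(twin, dt)
    err.append(np.linalg.norm(nature - twin))
err = np.array(err)

print(f"initial error {err[0]:.1e}, late-time mean error {err[-500:].mean():.1f}")
```

The error grows from 1e-8 by many orders of magnitude and then saturates at the scale of the attractor, the low-dimensional analogue of the global error saturation after ~16 days in the record.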

  1. How stands collapse I

    NASA Astrophysics Data System (ADS)

    Pearle, Philip

    2007-03-01

    In this volume in honour of GianCarlo Ghirardi, I discuss my involvement with ideas of dynamical collapse of the state vector. Ten problems are introduced, nine of which were seen following my initial work. Four of these problems had a resolution in GianCarlo Ghirardi, Alberto Rimini and Tullio Weber's spontaneous localization (SL) model (which added one more problem). This stimulated a (somewhat different) resolution of these five problems in the continuous spontaneous localization (CSL) model, in which I combined my initial work with SL. In an upcoming volume in honour of Abner Shimony, I shall discuss the status of the remaining five post-CSL problems.

  2. High-resolution regional climate model evaluation using variable-resolution CESM over California

    NASA Astrophysics Data System (ADS)

    Huang, X.; Rhoades, A.; Ullrich, P. A.; Zarzycki, C. M.

    2015-12-01

    Understanding the effect of climate change at regional scales remains a topic of intensive research. Though computational constraints remain a problem, high horizontal resolution is needed to represent topographic forcing, which is a significant driver of local climate variability. Although regional climate models (RCMs) have traditionally been used at these scales, variable-resolution global climate models (VRGCMs) have recently arisen as an alternative for studying regional weather and climate allowing two-way interaction between these domains without the need for nudging. In this study, the recently developed variable-resolution option within the Community Earth System Model (CESM) is assessed for long-term regional climate modeling over California. Our variable-resolution simulations will focus on relatively high resolutions for climate assessment, namely 28km and 14km regional resolution, which are much more typical for dynamically downscaled studies. For comparison with the more widely used RCM method, the Weather Research and Forecasting (WRF) model will be used for simulations at 27km and 9km. All simulations use the AMIP (Atmospheric Model Intercomparison Project) protocols. The time period is from 1979-01-01 to 2005-12-31 (UTC), and year 1979 was discarded as spin up time. The mean climatology across California's diverse climate zones, including temperature and precipitation, is analyzed and contrasted with the WRF model (as a traditional RCM), regional reanalysis, gridded observational datasets and uniform high-resolution CESM at 0.25 degree with the finite volume (FV) dynamical core. The results show that variable-resolution CESM is competitive in representing regional climatology on both annual and seasonal time scales. 
This assessment adds value to the use of VRGCMs for projecting climate change over the coming century and improves our understanding of both past and future regional climate related to fine-scale processes. It is also relevant for addressing the scale limitations of current RCMs and VRGCMs as next-generation model resolution increases to ~10km and beyond.

  3. Care arrangements, grief and psychological problems among children orphaned by AIDS in China.

    PubMed

    Zhao, G; Li, X; Fang, X; Zhao, J; Yang, H; Stanton, B

    2007-10-01

    The China Ministry of Health has estimated that there are at least 100,000 AIDS orphans in China. The UNICEF China Office estimates that between 150,000 and 250,000 additional children will be orphaned by AIDS over the next five years. However, limited data are available regarding the sociodemographic characteristics, care arrangements, barriers to appropriate grief resolution and psychological problems among AIDS orphans in China. In this article, we review secondary data and reports from scientific literature, government, non-governmental organisations and public media regarding children orphaned by AIDS in China to address their living situation, bereavement process and psychological problems. Our review suggests that AIDS orphans in China are living in a stressful environment, with many orphans struggling with psychological problems and unmet basic needs such as food, shelter, education and medical care. Based on our review, we suggest that future studies should address the psychosocial needs of AIDS orphans in China and develop health promotion programmes to mitigate the negative impact of parental death on the physical and psychosocial well-being of these orphans.

  4. Care arrangement, grief, and psychological problems among children orphaned by AIDS in China

    PubMed Central

    Zhao, Guoxiang; Li, Xiaoming; Fang, Xiaoyi; Zhao, Junfeng; Yang, Hongmei; Stanton, Bonita

    2007-01-01

    The China Ministry of Health has estimated that there are at least 100,000 AIDS orphans in China. The UNICEF China Office estimates that between 150,000 and 250,000 additional children will be orphaned by AIDS over the next five years. However, limited data are available regarding the socio-demographic characteristics, care arrangement, barriers to appropriate grief resolution and psychological problems among AIDS orphans in China. In this article, we review secondary data and reports from scientific literature, government, non-governmental organizations, and public media regarding children orphaned by AIDS in China to address their living situation, bereavement process, and psychological problems. Our review suggests that AIDS orphans in China are living in a stressful environment with many orphans struggling with psychological problems and unmet basic needs such as food, shelter, education, and medical care. Based on our review, we suggest that future studies should address the psychosocial needs of AIDS orphans in China and develop health promotion programs to mitigate the negative impact of parental death on the physical and psychosocial well-being of these orphans. PMID:18058390

  5. Toward Peace: Using Literature to Aid Conflict Resolution.

    ERIC Educational Resources Information Center

    Luke, Jennifer L.; Myers, Catherine M.

    1995-01-01

    Children are exposed to violence in media and everyday life, which may promote aggression as a means to solve problems. Skills and strategies of problem solving, conflict resolution, and peace making can be learned through well-organized and frequent exposure to literature. Books that deal with misunderstanding, jealousy, playground skirmishes,…

  6. The formation of quantum images and their transformation and super-resolution reading

    NASA Astrophysics Data System (ADS)

    Balakin, D. A.; Belinsky, A. V.

    2016-05-01

    Images formed by light with suppressed photon fluctuations are interesting objects for studies aimed at increasing their limiting information capacity and quality. Light in such a sub-Poissonian state can be prepared in a resonator filled with a medium with Kerr nonlinearity, in which self-phase modulation takes place. Spatially and temporally multimode light beams are studied and the production of spatial-frequency spectra of suppressed photon fluctuations is described. Efficient operating regimes of the system are found. A particular schematic solution is described that makes maximal use of the squeezed states of light formed during self-phase modulation in a resonator for suppressing amplitude quantum noise in two-dimensional imaging. The efficiency of using light with suppressed quantum fluctuations for computer image processing is studied. An algorithm is described for interpreting measurements so as to increase the resolution beyond the geometrical resolution. A mathematical model characterizing the measurement scheme is constructed and the problem of image reconstruction is solved. The algorithm for the interpretation of images is verified. Conditions are found for the efficient application of sub-Poissonian light for super-resolution imaging. It is found that the image should have low contrast and be maximally transparent.

  7. Single Anisotropic 3-D MR Image Upsampling via Overcomplete Dictionary Trained From In-Plane High Resolution Slices.

    PubMed

    Jia, Yuanyuan; He, Zhongshi; Gholipour, Ali; Warfield, Simon K

    2016-11-01

    In magnetic resonance (MR), hardware limitations, scanning time, and patient comfort often result in the acquisition of anisotropic 3-D MR images. Enhancing image resolution is desired but has been very challenging in medical image processing. Super-resolution reconstruction based on sparse representation and an overcomplete dictionary has lately been employed to address this problem; however, these methods require extra training sets, which may not always be available. This paper proposes a novel single anisotropic 3-D MR image upsampling method via sparse representation and an overcomplete dictionary that is trained from in-plane high resolution slices to upsample in the out-of-plane dimensions. The proposed method, therefore, does not require extra training sets. Extensive experiments, conducted on simulated and clinical brain MR images, show that the proposed method is more accurate than classical interpolation. When compared to a recent upsampling method based on the nonlocal means approach, the proposed method did not show improved results at low upsampling factors with simulated images, but generated comparable results with much better computational efficiency in clinical cases. Therefore, the proposed approach can be efficiently implemented and routinely used to upsample MR images in the out-of-plane views for radiologic assessment and postacquisition processing.

  8. An Attention-Information-Based Spatial Adaptation Framework for Browsing Videos via Mobile Devices

    NASA Astrophysics Data System (ADS)

    Li, Houqiang; Wang, Yi; Chen, Chang Wen

    2007-12-01

    With the growing popularity of personal digital assistant devices and smart phones, more and more consumers are eager to watch videos on mobile devices. However, the limited display size of mobile devices imposes significant barriers to browsing high-resolution videos. In this paper, we present an attention-information-based spatial adaptation framework to address this problem. The whole framework includes two major parts: video content generation and a video adaptation system. During video compression, the attention information in video sequences is detected using an attention model and embedded into bitstreams with the proposed supplemental enhancement information (SEI) structure. Furthermore, we also develop an innovative scheme to adaptively adjust quantization parameters in order to simultaneously improve the quality of overall encoding and the quality of transcoding the attention areas. When the high-resolution bitstream is transmitted to mobile users, a fast transcoding algorithm we developed earlier is applied to generate a new bitstream for attention areas in frames. The new low-resolution bitstream containing mostly attention information, instead of the high-resolution one, is sent to users for display on the mobile devices. Experimental results show that the proposed spatial adaptation scheme is able to improve both subjective and objective video qualities.

  9. Application of the phase extension method in virus crystallography.

    PubMed

    Reddy, Vijay S

    2016-01-01

    The procedure for phase extension (PX) involves gradually extending the initial phases from low resolution (e.g., ~8Å) to the high-resolution limit of a diffraction data set. Structural redundancy present in the viral capsids that display icosahedral symmetry results in a high degree of non-crystallographic symmetry (NCS), which in turn translates into higher phasing power and is critical for improving and extending phases to higher resolution. Greater completeness of the diffraction data and determination of a molecular replacement solution, which entails accurately identifying the virus particle orientation(s) and position(s), are important for the smooth progression of the PX procedure. In addition, proper definition of a molecular mask (envelope) around the NCS-asymmetric unit has been found to be important for the success of density modification procedures, such as density averaging and solvent flattening. Regardless of the degree of NCS, the PX method appears to work well in all space groups, provided an accurate molecular mask is used along with reasonable initial phases. However, in the cases with space group P1, in addition to requiring a molecular mask, starting the phase extension at a higher resolution (e.g., 6Å) overcame the previously reported problems due to Babinet phases and phase flipping errors.

  10. Zero-crossing approach to high-resolution reconstruction in frequency-domain optical-coherence tomography.

    PubMed

    Krishnan, Sunder Ram; Seelamantula, Chandra Sekhar; Bouwens, Arno; Leutenegger, Marcel; Lasser, Theo

    2012-10-01

    We address the problem of high-resolution reconstruction in frequency-domain optical-coherence tomography (FDOCT). The traditional method employed uses the inverse discrete Fourier transform, which is limited in resolution due to the Heisenberg uncertainty principle. We propose a reconstruction technique based on zero-crossing (ZC) interval analysis. The motivation for our approach lies in the observation that, for a multilayered specimen, the backscattered signal may be expressed as a sum of sinusoids, and each sinusoid manifests as a peak in the FDOCT reconstruction. The successive ZC intervals of a sinusoid exhibit high consistency, with the intervals being inversely related to the frequency of the sinusoid. The statistics of the ZC intervals are used for detecting the frequencies present in the input signal. The noise robustness of the proposed technique is improved by using a cosine-modulated filter bank for separating the input into different frequency bands, and the ZC analysis is carried out on each band separately. The design of the filter bank requires the design of a prototype, which we accomplish using a Kaiser window approach. We show that the proposed method gives good results on synthesized and experimental data. The resolution is enhanced, and noise robustness is higher compared with the standard Fourier reconstruction.
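The central observation, that successive zero-crossing intervals of a sinusoid are inversely related to its frequency, can be sketched in a few lines. This is a toy illustration on a noise-free signal with assumed names and parameters, not the paper's filter-bank pipeline:

```python
import numpy as np

def zc_frequency(signal, fs):
    """Estimate the frequency of a (noise-free) sinusoid from its
    zero-crossing intervals: successive crossings are half a period
    apart, so f is roughly 1 / (2 * mean crossing interval)."""
    signs = np.sign(signal)
    crossings = np.where(np.diff(signs) != 0)[0]   # sample indices of sign changes
    intervals = np.diff(crossings) / fs            # seconds between crossings
    return 1.0 / (2.0 * intervals.mean())

fs = 10000.0                      # assumed sampling rate, Hz
t = np.arange(0, 1, 1 / fs)
f_est = zc_frequency(np.sin(2 * np.pi * 50 * t), fs)   # close to 50 Hz
```

In the paper's setting, the signal is first split into subbands by a cosine-modulated filter bank and the interval statistics are computed per band, which is what provides the noise robustness.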

  11. A weighted optimization approach to time-of-flight sensor fusion.

    PubMed

    Schwarz, Sebastian; Sjostrom, Marten; Olsson, Roger

    2014-01-01

    Acquiring scenery depth is a fundamental task in computer vision, with many applications in manufacturing, surveillance, or robotics relying on accurate scenery information. Time-of-flight cameras can provide depth information in real-time and overcome shortcomings of traditional stereo analysis. However, they provide limited spatial resolution, and sophisticated upscaling algorithms are sought after. In this paper, we present a sensor fusion approach to time-of-flight super resolution, based on the combination of depth and texture sources. Unlike other texture-guided approaches, we interpret the depth upscaling process as a weighted energy optimization problem. Three different weights are introduced, employing different available sensor data. The individual weights address object boundaries in depth, depth sensor noise, and temporal consistency. Applied in consecutive order, they form three weighting strategies for time-of-flight super resolution. Objective evaluations show advantages in depth accuracy and for depth-image-based rendering compared with state-of-the-art depth upscaling. Subjective view synthesis evaluation shows a significant increase in viewer preference, by a factor of four, in stereoscopic viewing conditions. To the best of our knowledge, this is the first extensive subjective test performed on time-of-flight depth upscaling. Objective and subjective results prove the suitability of our time-of-flight super resolution approach for depth scenery capture.
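As a rough illustration of texture-guided depth upscaling, the sketch below computes each high-resolution depth value as a weighted average of nearby low-resolution samples, with weights combining spatial distance and guide-image similarity. The function and parameter names are invented for illustration; the paper instead solves a global weighted energy with boundary, noise, and temporal-consistency terms:

```python
import numpy as np

def guided_depth_upscale(depth_lr, guide_hr, factor, sigma_s=2.0, sigma_r=0.1):
    """Toy texture-guided depth upscaling: each high-resolution pixel is a
    weighted average of nearby low-resolution depth samples, weighted by
    spatial distance and guide-image (texture) similarity."""
    H, W = guide_hr.shape
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            cy, cx = y / factor, x / factor        # position in low-res coords
            y0, x0 = int(cy), int(cx)
            wsum = vsum = 0.0
            for j in range(max(0, y0 - 1), min(depth_lr.shape[0], y0 + 2)):
                for i in range(max(0, x0 - 1), min(depth_lr.shape[1], x0 + 2)):
                    ds = (j - cy) ** 2 + (i - cx) ** 2
                    gj = min(int(j * factor), H - 1)
                    gi = min(int(i * factor), W - 1)
                    dr = (guide_hr[y, x] - guide_hr[gj, gi]) ** 2
                    w = np.exp(-ds / (2 * sigma_s ** 2) - dr / (2 * sigma_r ** 2))
                    wsum += w
                    vsum += w * depth_lr[j, i]
            out[y, x] = vsum / wsum
    return out

depth_lr = np.ones((4, 4))                     # flat low-resolution depth map
guide_hr = np.linspace(0, 1, 64).reshape(8, 8) # high-resolution texture guide
up = guided_depth_upscale(depth_lr, guide_hr, factor=2)
```

The texture term keeps depth discontinuities aligned with intensity edges, which is the intuition behind all three of the paper's weighting strategies.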

  12. Impacting the effect of fMRI noise through hardware and acquisition choices - Implications for controlling false positive rates.

    PubMed

    Wald, Lawrence L; Polimeni, Jonathan R

    2017-07-01

    We review the components of time-series noise in fMRI experiments and the effect of image acquisition parameters on the noise. In addition to helping determine the total amount of signal and noise (and thus temporal SNR), the acquisition parameters have been shown to be critical in determining the ratio of thermal to physiologically induced noise components in the time series. Although limited attention has been given to this latter metric, we show that it determines the degree of spatial correlations seen in the time-series noise. The spatial correlations of the physiological noise component are well known, but recent studies have shown that they can lead to a higher than expected false-positive rate in cluster-wise inference based on the parametric statistical methods used by many researchers. Based on an understanding of the effect of acquisition parameters on the noise mixture, we propose several acquisition strategies that might be helpful in reducing this elevated false-positive rate, such as moving to high spatial resolution or using highly accelerated acquisitions where thermal sources dominate. We suggest that the spatial noise correlations at the root of the inflated false-positive rate problem can be limited with these strategies, and the well-behaved spatial auto-correlation functions (ACFs) assumed by the conventional statistical methods are retained if the high-resolution data are smoothed to conventional resolutions. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. Hybrid spectral CT reconstruction

    PubMed Central

    Clark, Darin P.

    2017-01-01

    Current photon counting x-ray detector (PCD) technology faces limitations associated with spectral fidelity and photon starvation. One strategy for addressing these limitations is to supplement PCD data with high-resolution, low-noise data acquired with an energy-integrating detector (EID). In this work, we propose an iterative, hybrid reconstruction technique which combines the spectral properties of PCD data with the resolution and signal-to-noise characteristics of EID data. Our hybrid reconstruction technique is based on an algebraic model of data fidelity which substitutes the EID data into the data fidelity term associated with the PCD reconstruction, resulting in a joint reconstruction problem. Within the split Bregman framework, these data fidelity constraints are minimized subject to additional constraints on spectral rank and on joint intensity-gradient sparsity measured between the reconstructions of the EID and PCD data. Following a derivation of the proposed technique, we apply it to the reconstruction of a digital phantom which contains realistic concentrations of iodine, barium, and calcium encountered in small-animal micro-CT. The results of this experiment suggest reliable separation and detection of iodine at concentrations ≥ 5 mg/ml and barium at concentrations ≥ 10 mg/ml in 2-mm features for EID and PCD data reconstructed with inherent spatial resolutions of 176 μm and 254 μm, respectively (point spread function, FWHM). Furthermore, hybrid reconstruction is demonstrated to enhance spatial resolution within material decomposition results and to improve low-contrast detectability by as much as 2.6 times relative to reconstruction with PCD data only. The parameters of the simulation experiment are based on an in vivo micro-CT experiment conducted in a mouse model of soft-tissue sarcoma. 
Material decomposition results produced from this in vivo data demonstrate the feasibility of distinguishing two K-edge contrast agents with a spectral separation on the order of the energy resolution of the PCD hardware. PMID:28683124

  14. Time-domain induced polarization - an analysis of Cole-Cole parameter resolution and correlation using Markov Chain Monte Carlo inversion

    NASA Astrophysics Data System (ADS)

    Madsen, Line Meldgaard; Fiandaca, Gianluca; Auken, Esben; Christiansen, Anders Vest

    2017-12-01

    The application of time-domain induced polarization (TDIP) is increasing with advances in acquisition techniques, data processing and spectral inversion schemes. An inversion of TDIP data for the spectral Cole-Cole parameters is a non-linear problem, but by applying a 1-D Markov Chain Monte Carlo (MCMC) inversion algorithm, a full non-linear uncertainty analysis of the parameters and the parameter correlations can be accessed. This is essential to understand to what degree the spectral Cole-Cole parameters can be resolved from TDIP data. MCMC inversions of synthetic TDIP data, which yield bell-shaped probability distributions with a single maximum, show that the Cole-Cole parameters can be resolved from TDIP data if an acquisition range above two decades in time is applied. Linear correlations between the Cole-Cole parameters are observed, and as the acquisition range decreases, the correlations increase and become non-linear. It is further investigated how waveform and parameter values influence the resolution of the Cole-Cole parameters. A limiting factor is the value of the frequency exponent, C. As C decreases, the resolution of all the Cole-Cole parameters decreases and the results become increasingly non-linear. While the value of the time constant, τ, must lie within the acquisition range for the parameters to be well resolved, the choice between a 50 per cent and a 100 per cent duty cycle for the current injection does not have an influence on the parameter resolution. The limits of resolution and linearity are also studied in a comparison between the MCMC and a linearized gradient-based inversion approach. The two methods are consistent for resolved models, but the linearized approach tends to underestimate the uncertainties for poorly resolved parameters due to the corresponding non-linear features. Finally, an MCMC inversion of 1-D field data verifies that spectral Cole-Cole parameters can also be resolved from TD field measurements.
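For reference, the Cole-Cole model under discussion is commonly written in the Pelton resistivity form as ρ(ω) = ρ₀[1 − m(1 − 1/(1 + (iωτ)^C))]. A minimal evaluation sketch, with function and parameter names assumed rather than taken from the paper:

```python
def cole_cole_resistivity(omega, rho0, m, tau, c):
    """Complex resistivity of the Cole-Cole model (Pelton form):
    rho0 = DC resistivity, m = chargeability, tau = time constant,
    c = frequency exponent."""
    return rho0 * (1 - m * (1 - 1.0 / (1 + (1j * omega * tau) ** c)))

r_dc = cole_cole_resistivity(0.0, 100.0, 0.5, 1.0, 0.5)   # DC limit: rho0
r_hf = cole_cole_resistivity(1e12, 100.0, 0.5, 1.0, 0.5)  # high-freq limit: rho0 * (1 - m)
```

The chargeability m appears as the relative drop between the DC and high-frequency limits; since the transition is centered on τ, the acquisition time range must bracket τ for the parameters to be resolvable, as the abstract notes.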

  15. Microwave Radiometer and Lidar Synergy for High Vertical Resolution Thermodynamic Profiling in a Cloudy Scenario

    NASA Astrophysics Data System (ADS)

    Barrera Verdejo, M.; Crewell, S.; Loehnert, U.; Di Girolamo, P.

    2016-12-01

    Continuous monitoring of thermodynamic atmospheric profiles is important for many applications, e.g. assessment of atmospheric stability and cloud formation. Nowadays there is a wide variety of ground-based sensors for atmospheric profiling. However, no single instrument is able to simultaneously provide measurements with complete vertical coverage, high vertical and temporal resolution, and good performance under all weather conditions. For this reason, instrument synergies of a wide range of complementary measurements are increasingly considered for improving the quality of atmospheric observations. The current work presents the synergetic use of a microwave radiometer (MWR) and Raman lidar (RL) within a physically consistent optimal estimation approach. On the one hand, lidar measurements provide humidity and temperature measurements with a high vertical resolution, albeit with limited vertical coverage due to overlap function problems, sunlight contamination and the presence of clouds. On the other hand, MWRs obtain humidity, temperature and cloud information throughout the troposphere, however with only a very limited vertical resolution. The benefits of MWR+RL synergy have been previously demonstrated for clear-sky cases. This work expands this approach to cloudy scenarios. Consistent retrievals of temperature, absolute and relative humidity as well as liquid water path are analyzed. In addition, different measures are presented to demonstrate the improvements achieved via the synergy compared to individual retrievals, e.g. degrees of freedom or theoretical error. We also demonstrate that, compared to the lidar, the higher temporal resolution of the MWR presents a strong advantage for capturing the high temporal variability of the liquid water cloud. Finally, the results are compared with independent information sources, e.g. GPS or radiosondes, showing good consistency. The study demonstrates the benefits of the sensor combination, which are especially strong in regions where lidar data are not available, whereas if both instruments are available, the lidar measurements dominate the retrieval.

  16. Multi-Zone Liquid Thrust Chamber Performance Code with Domain Decomposition for Parallel Processing

    NASA Technical Reports Server (NTRS)

    Navaz, Homayun K.

    2002-01-01

    Computational Fluid Dynamics (CFD) has considerably evolved in the last decade. There are many computer programs that can perform computations on viscous internal or external flows with chemical reactions. CFD has become a commonly used tool in the design and analysis of gas turbines, ramjet combustors, turbo-machinery, inlet ducts, rocket engines, jet interaction, and missile and ramjet nozzles. One of the problems of interest to NASA has always been performance prediction for rocket and air-breathing engines. Due to the complexity of flow in these engines, it is necessary to resolve the flowfield into a fine mesh to capture quantities like turbulence and heat transfer. However, calculation on a high-resolution grid incurs prohibitively long computational times that can diminish the value of CFD for practical engineering calculations. The Liquid Thrust Chamber Performance (LTCP) code was developed for NASA/MSFC (Marshall Space Flight Center) to perform liquid rocket engine performance calculations. This code is a 2D/axisymmetric full Navier-Stokes (NS) solver with fully coupled finite-rate chemistry and Eulerian treatment of liquid fuel and/or oxidizer droplets. One of the advantages of this code has been the resemblance of its input file to the JANNAF (Joint Army Navy NASA Air Force Interagency Propulsion Committee) standard TDK code, and its automatic grid generation for JANNAF-defined combustion chamber wall geometry. These options minimize the learning effort for TDK users and make the code a good candidate for performing engineering calculations. Although the LTCP code was developed for liquid rocket engines, it is a general-purpose code and has been used for solving many engineering problems. However, the single-zone formulation of the LTCP has limited the code's applicability to problems with complex geometry.
Furthermore, the computational time becomes prohibitively large for high-resolution problems with chemistry, two-equation turbulence model, and two-phase flow. To overcome these limitations, the LTCP code is rewritten to include the multi-zone capability with domain decomposition that makes it suitable for parallel processing, i.e., enabling the code to run every zone or sub-domain on a separate processor. This can reduce the run time by a factor of 6 to 8, depending on the problem.

  17. Diffraction-Limited Plenoptic Imaging with Correlated Light

    NASA Astrophysics Data System (ADS)

    Pepe, Francesco V.; Di Lena, Francesco; Mazzilli, Aldo; Edrei, Eitan; Garuccio, Augusto; Scarcelli, Giuliano; D'Angelo, Milena

    2017-12-01

    Traditional optical imaging faces an unavoidable trade-off between resolution and depth of field (DOF). To increase resolution, high numerical apertures (NAs) are needed, but the associated large angular uncertainty results in a limited range of depths that can be put in sharp focus. Plenoptic imaging was introduced a few years ago to remedy this trade-off. To this aim, plenoptic imaging reconstructs the path of light rays from the lens to the sensor. However, the improvement offered by standard plenoptic imaging is practical and not fundamental: The increased DOF leads to a proportional reduction of the resolution well above the diffraction limit imposed by the lens NA. In this Letter, we demonstrate that correlation measurements enable pushing plenoptic imaging to its fundamental limits of both resolution and DOF. Namely, we demonstrate maintaining the imaging resolution at the diffraction limit while increasing the depth of field by a factor of 7. Our results represent the theoretical and experimental basis for the effective development of promising applications of plenoptic imaging.

  18. Diffraction-Limited Plenoptic Imaging with Correlated Light.

    PubMed

    Pepe, Francesco V; Di Lena, Francesco; Mazzilli, Aldo; Edrei, Eitan; Garuccio, Augusto; Scarcelli, Giuliano; D'Angelo, Milena

    2017-12-15

    Traditional optical imaging faces an unavoidable trade-off between resolution and depth of field (DOF). To increase resolution, high numerical apertures (NAs) are needed, but the associated large angular uncertainty results in a limited range of depths that can be put in sharp focus. Plenoptic imaging was introduced a few years ago to remedy this trade-off. To this aim, plenoptic imaging reconstructs the path of light rays from the lens to the sensor. However, the improvement offered by standard plenoptic imaging is practical and not fundamental: The increased DOF leads to a proportional reduction of the resolution well above the diffraction limit imposed by the lens NA. In this Letter, we demonstrate that correlation measurements enable pushing plenoptic imaging to its fundamental limits of both resolution and DOF. Namely, we demonstrate maintaining the imaging resolution at the diffraction limit while increasing the depth of field by a factor of 7. Our results represent the theoretical and experimental basis for the effective development of promising applications of plenoptic imaging.

  19. Heidegger, environmental ethics, and the metaphysics of nature: inhabiting the earth in a technological age

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Foltz, B.V.

    1985-01-01

    Previous studies of philosophical problems concerning the human disruption and destruction of the natural environment have tended to presuppose (a) that the problems themselves are adequately defined by the natural sciences, and (b) that the proper philosophical approach is by means of an ethics that restricts itself to determining the character and limits of moral obligation. This dissertation (a) argues that modern natural science, which is expected to define the problem of an environmental crisis, itself employs a concept of nature, derived from the metaphysical tradition, that is generative of the very problems to be resolved; (b) develops, on the basis of Heidegger's rethinking of the traditional question of being, a more adequate understanding of nature; and (c) shows that the resolution of these problems can best be accomplished by means of a more broadly conceived ethics that closes the breach between theory and praxis by articulating an appropriate manner of comportment toward entities as a whole (and not solely human, nor even sentient, entities) which displays an integration of thought and action, and which Heidegger calls inhabitation or dwelling.

  20. Almucantar radio telescope report 1: A preliminary study of the capabilities of large partially steerable paraboloidal antennas

    NASA Technical Reports Server (NTRS)

    Usher, P. D.

    1971-01-01

    The almucantar radio telescope development and characteristics are presented. The radio telescope consists of a paraboloidal reflector free to rotate in azimuth but limited in altitude between two fixed angles from the zenith. The fixed angles are chosen so that sources lying between two small circles parallel to the horizon (almucantars) are accessible at any one instant. Basic geometrical considerations in the almucantar design are presented. The capabilities of the almucantar telescope for source counting and for monitoring, which are essential to a resolution of the cosmological problem, are described.

  1. Fourier Spectral Filter Array for Optimal Multispectral Imaging.

    PubMed

    Jia, Jie; Barnard, Kenneth J; Hirakawa, Keigo

    2016-04-01

    Limitations to existing multispectral imaging modalities include speed, cost, range, spatial resolution, and application-specific system designs that lack the versatility of hyperspectral imaging modalities. In this paper, we propose a novel general-purpose single-shot passive multispectral imaging modality. Central to this design is a new type of spectral filter array (SFA) based not on the notion of spatially multiplexing narrowband filters, but instead aimed at enabling single-shot Fourier transform spectroscopy. We refer to this new SFA pattern as Fourier SFA, and we prove that this design solves the problem of optimally sampling the hyperspectral image data.

  2. Perimetric Complexity of Binary Digital Images: Notes on Calculation and Relation to Visual Complexity

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.

    2011-01-01

    Perimetric complexity is a measure of the complexity of binary pictures. It is defined as the sum of inside and outside perimeters of the foreground, squared, divided by the foreground area, divided by 4π. Difficulties arise when this definition is applied to digital images composed of binary pixels. In this paper we identify these problems and propose solutions. Perimetric complexity is often used as a measure of visual complexity, in which case it should take into account the limited resolution of the visual system. We propose a measure of visual perimetric complexity that meets this requirement.
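The definition can be implemented directly. The edge-counting perimeter below is one simple discrete approximation; the paper is precisely about the pitfalls of such pixel-based estimates, so treat this as a baseline sketch rather than the authors' recommended calculation:

```python
import numpy as np

def perimetric_complexity(img):
    """Perimetric complexity of a binary image: (total perimeter)^2
    / (4 * pi * area). Counting exposed pixel edges captures both the
    outside boundary and any interior hole boundaries."""
    img = np.asarray(img, dtype=bool)
    area = img.sum()
    padded = np.pad(img, 1).astype(int)   # zero border so edge pixels count
    perimeter = sum(np.sum(np.diff(padded, axis=ax) != 0) for ax in (0, 1))
    return perimeter ** 2 / (4 * np.pi * area)

square = np.zeros((20, 20))
square[5:15, 5:15] = 1
pc = perimetric_complexity(square)   # (4*10)^2 / (4*pi*10^2) = 4/pi, about 1.273
```

A circle, the least complex shape, has perimetric complexity 1 under the continuous definition; the discrete square above already shows how pixelated boundaries inflate the estimate, which is one of the difficulties the paper addresses.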

  3. Nonlinear single-spin spectrum analyzer.

    PubMed

    Kotler, Shlomi; Akerman, Nitzan; Glickman, Yinnon; Ozeri, Roee

    2013-03-15

    Qubits have been used as linear spectrum analyzers of their environments. Here we solve the problem of nonlinear spectral analysis, required for discrete noise induced by a strongly coupled environment. Our nonperturbative analytical model shows a nonlinear signal dependence on noise power, resulting in a spectral resolution beyond the Fourier limit as well as frequency mixing. We develop a noise characterization scheme adapted to this nonlinearity. We then apply it using a single trapped ion as a sensitive probe of strong, non-Gaussian, discrete magnetic field noise. Finally, we experimentally compare the performance of equidistant vs Uhrig modulation schemes for spectral analysis.

  4. LANDSAT 4 band 6 data evaluation

    NASA Technical Reports Server (NTRS)

    1984-01-01

    Previously experienced data collection problems were successfully resolved. A limited effort, directed at improved methods of display of TM Band 6 data, has concentrated on implementation of intensity, hue, and saturation (IHS) displays using the Band 6 data to control hue. These displays tend to give the appearance of high-resolution thermal data and make whole-scene thermal interpretation easier by color coding thermal data in a manner that aids visual interpretation. More quantitative efforts were directed at utilizing the reflected bands to define land cover classes and then modifying the thermal displays using long-wave optical properties associated with cover type.

  5. Limited Area Coverage/High Resolution Picture Transmission (LAC/HRPT) data vegetative index calculation processor user's manual

    NASA Technical Reports Server (NTRS)

    Obrien, S. O. (Principal Investigator)

    1980-01-01

    The program, LACVIN, calculates vegetative index numbers on limited area coverage/high resolution picture transmission data for selected IJ grid sections. The IJ grid sections were previously extracted from the full resolution data tapes and stored on disk files.

  6. A Multi-layer Dynamic Model for Coordination Based Group Decision Making in Water Resource Allocation and Scheduling

    NASA Astrophysics Data System (ADS)

    Huang, Wei; Zhang, Xingnan; Li, Chenming; Wang, Jianying

    Management of group decision making is an important issue in the development of water resource management. To overcome the lack of effective communication and cooperation in existing decision-making models, this paper proposes a multi-layer dynamic model for coordination in group decision making for water resource allocation and scheduling. By introducing a scheme-recognized cooperative satisfaction index and a scheme-adjusted rationality index, the proposed model can solve the problem of poor convergence of the multi-round decision-making process in water resource allocation and scheduling. Furthermore, the coordination problem in group decision making under limited resources can be addressed through effective distance-based group conflict resolution. The simulation results show that the proposed model has better convergence than the existing models.

  7. Super-resolution optical microscopy for studying membrane structure and dynamics.

    PubMed

    Sezgin, Erdinc

    2017-07-12

    Investigation of cell membrane structure and dynamics requires high spatial and temporal resolution. The spatial resolution of conventional light microscopy is limited due to the diffraction of light. However, recent developments in microscopy have enabled us to access the nano-scale regime spatially, and thus to elucidate the nanoscopic structures in cellular membranes. In this review, we will explain the resolution limit, address the working principles of the most commonly used super-resolution microscopy techniques and summarise their recent applications in the biomembrane field.

  8. A novel algorithm of super-resolution image reconstruction based on multi-class dictionaries for natural scene

    NASA Astrophysics Data System (ADS)

    Wu, Wei; Zhao, Dewei; Zhang, Huan

    2015-12-01

    Super-resolution image reconstruction is an effective method to improve image quality. It has important research significance in the field of image processing. However, the choice of the dictionary directly affects the efficiency of image reconstruction. Sparse representation theory is introduced into the problem of nearest neighbor selection. Based on the sparse-representation super-resolution image reconstruction method, a super-resolution image reconstruction algorithm based on multi-class dictionaries is analyzed. This method avoids the redundancy problem of training only a single overcomplete dictionary, makes the sub-dictionaries more representative, and replaces the traditional Euclidean distance computation to improve the quality of the whole image reconstruction. In addition, non-local self-similarity regularization is introduced to address the ill-posed problem. Experimental results show that the algorithm achieves much better results than state-of-the-art algorithms in terms of both PSNR and visual perception.

  9. Super-Resolution Microscopy Techniques and Their Potential for Applications in Radiation Biophysics.

    PubMed

    Eberle, Jan Philipp; Rapp, Alexander; Krufczik, Matthias; Eryilmaz, Marion; Gunkel, Manuel; Erfle, Holger; Hausmann, Michael

    2017-01-01

    Fluorescence microscopy is an essential tool for imaging tagged biological structures. Due to the wave nature of light, the resolution of a conventional fluorescence microscope is limited laterally to about 200 nm and axially to about 600 nm, which is often referred to as the Abbe limit. This hampers the observation of important biological structures and dynamics in the nano-scale range of ~10 nm to ~100 nm. Consequently, various methods have been developed circumventing this limit of resolution. Super-resolution microscopy comprises several of those methods employing physical and/or chemical properties, such as optical/instrumental modifications and specific labeling of samples. In this article, we will give a brief insight into a variety of selected optical microscopy methods reaching super-resolution beyond the Abbe limit. We will survey three different concepts in connection with biological applications in radiation research without making a claim to be complete.

  10. Low-count PET image restoration using sparse representation

    NASA Astrophysics Data System (ADS)

    Li, Tao; Jiang, Changhui; Gao, Juan; Yang, Yongfeng; Liang, Dong; Liu, Xin; Zheng, Hairong; Hu, Zhanli

    2018-04-01

    In the field of positron emission tomography (PET), reconstructed images are often blurry and contain noise. These problems are primarily caused by the low resolution of projection data. Solving this problem by improving hardware is an expensive solution, and therefore, we attempted to develop a solution based on optimizing several related algorithms in both the reconstruction and image post-processing domains. As sparse technology is widely used, sparse prediction is increasingly applied to solve this problem. In this paper, we propose a new sparse method to process low-resolution PET images. Two dictionaries (D1 for low-resolution PET images and D2 for high-resolution PET images) are learned from a group of real PET image data sets. Among these two dictionaries, D1 is used to obtain a sparse representation for each patch of the input PET image. Then, a high-resolution PET image is generated from this sparse representation using D2. Experimental results indicate that the proposed method exhibits a stable and superior ability to enhance image resolution and recover image details. Quantitatively, this method achieves better performance than traditional methods. This proposed strategy is a new and efficient approach for improving the quality of PET images.
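The coupled-dictionary idea described above (code a low-resolution patch over D1, then synthesize the high-resolution patch from D2 with the same coefficients) can be sketched as follows. The OMP sparse coder, dictionary sizes, and random data are illustrative assumptions, not the paper's actual training setup:

```python
import numpy as np

def omp(D, x, k):
    """Orthogonal matching pursuit: greedy k-sparse code of x over dictionary D."""
    residual, support = x.copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))   # most correlated atom
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)  # refit on support
        residual = x - D[:, support] @ coef
    code = np.zeros(D.shape[1])
    code[support] = coef
    return code

rng = np.random.default_rng(0)
n_atoms = 40
D1 = rng.standard_normal((16, n_atoms))   # low-resolution patch dictionary
D1 /= np.linalg.norm(D1, axis=0)
D2 = rng.standard_normal((64, n_atoms))   # high-resolution patch dictionary
D2 /= np.linalg.norm(D2, axis=0)

a_true = np.zeros(n_atoms)
a_true[[3, 17]] = [1.0, -0.7]
x_lr = D1 @ a_true            # synthetic low-resolution patch
a_hat = omp(D1, x_lr, k=2)    # sparse code over the LR dictionary
y_hr = D2 @ a_hat             # HR patch synthesized with the shared code
```

In practice the two dictionaries are learned jointly from corresponding LR/HR patch pairs, so that a code computed on the low-resolution side transfers meaningfully to the high-resolution side.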

  11. Deploying a quantum annealing processor to detect tree cover in aerial imagery of California

    PubMed Central

    Basu, Saikat; Ganguly, Sangram; Michaelis, Andrew; Mukhopadhyay, Supratik; Nemani, Ramakrishna R.

    2017-01-01

    Quantum annealing is an experimental and potentially breakthrough computational technology for handling hard optimization problems, including problems of computer vision. We present a case study in training a production-scale classifier of tree cover in remote sensing imagery, using early-generation quantum annealing hardware built by D-Wave Systems, Inc. Beginning within a known boosting framework, we train decision stumps on texture features and vegetation indices extracted from four-band, one-meter-resolution aerial imagery from the state of California. We then impose a regularized quadratic training objective to select an optimal voting subset from among these stumps. The votes of the subset define the classifier. For optimization, the logical variables in the objective function map to quantum bits in the hardware device, while quadratic couplings encode as the strength of physical interactions between the quantum bits. Hardware design limits the number of couplings between these basic physical entities to five or six. To account for this limitation in mapping large problems to the hardware architecture, we propose a truncation and rescaling of the training objective through a trainable metaparameter. The boosting process on our basic 108- and 508-variable problems, thus constituted, returns classifiers that incorporate a diverse range of color- and texture-based metrics and discriminate tree cover with accuracies as high as 92% in validation and 90% on a test scene encompassing the open space preserves and dense suburban build-up of Mill Valley, CA. PMID:28241028
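The voting-subset selection maps to a QUBO (quadratic unconstrained binary optimization), the problem class annealers minimize. A small brute-force sketch with invented accuracies and correlation penalties follows; on real hardware the couplings would additionally be truncated and rescaled as the abstract describes:

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
n = 8                                    # number of weak classifiers (stumps)
acc = rng.uniform(0.5, 0.9, n)           # per-stump accuracies (invented)
corr = rng.uniform(0.0, 0.3, (n, n))     # pairwise redundancy penalties (invented)
corr = (corr + corr.T) / 2

# QUBO: minimize s^T Q s over s in {0,1}^n. The diagonal rewards accurate
# stumps; the upper triangle penalizes selecting correlated (redundant) pairs.
Q = np.diag(-acc) + np.triu(corr, 1)

def brute_force_qubo(Q):
    """Exhaustive minimizer, feasible only for small n; an annealer would
    take over for the 108- and 508-variable instances in the paper."""
    best_s, best_e = None, np.inf
    for bits in itertools.product((0, 1), repeat=Q.shape[0]):
        s = np.array(bits)
        e = float(s @ Q @ s)
        if e < best_e:
            best_s, best_e = s, e
    return best_s, best_e

subset, energy = brute_force_qubo(Q)     # subset[i] == 1 means stump i votes
```

The logical variables s map to qubits and the entries of Q to on-chip biases and couplings, which is why the limited qubit connectivity mentioned in the abstract forces the truncation-and-rescaling step.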

  12. Limit on possible narrow rings around Jupiter

    NASA Technical Reports Server (NTRS)

    Dunham, E.; Elliot, J. L.; Mink, D.; Klemola, A. R.

    1982-01-01

    An upper limit to the optical depth of the Jovian ring at high spatial resolution, determined from stellar occultation data, is reported. The spatial resolution of the observation is limited to about 13 km in Jupiter's equatorial plane by the projection of the Fresnel zone on the equatorial plane in the radial direction. At this resolution, the normal optical depth limit is about 0.008. This limit applies to a strip in the Jovian equatorial plane that crosses the orbits of Amalthea, 1979J1, 1979J3, and the ring. An upper limit on the number density of kilometer-size boulders has been set at one per 11,000 sq km in the equatorial plane.

  13. The Application of High-Resolution Electron Microscopy to Problems in Solid State Chemistry: The Exploits of a Peeping TEM.

    ERIC Educational Resources Information Center

    Eyring, LeRoy

    1980-01-01

    Describes methods for using the high-resolution electron microscope in conjunction with other tools to reveal the identity and environment of atoms. Problems discussed include the ultimate structure of real crystalline solids including defect structure and the mechanisms of chemical reactions. (CS)

  14. Remote sensing of fire and deforestation in the tropics from the International Space Station

    NASA Astrophysics Data System (ADS)

    Hoffman, James W.; Riggan, Philip J.; Brass, James A.

    2000-01-01

    In August of 1999, over 30,000 fire counts were registered by the Advanced Very High Resolution Radiometer aboard NOAA satellites over central Brazil, and an extensive smoke pall produced a health hazard and hindered commercial aviation across large portions of the states of Mato Grosso and Mato Grosso do Sul. Clearly, fire was an important part of the Brazilian environment, but limitations in satellite and airborne remote sensing prevented a clear picture of what was burning, how much biomass was consumed, where the most critical resources were threatened, or exactly what the global environmental impact was. Another important problem that must be addressed is the deforestation of the rain forest by unauthorized logging operations. To detect these illegal clear-cutting activities, continuous, high-resolution monitoring must be initiated. The low-altitude Space Station offers an ideal platform from which to monitor the tropical regions for both fires and deforestation from an equatorial orbit. A new micro-bolometer-based thermal imager, the FireMapper, has been designed to provide a solution for these problems in fire and resource monitoring. In this paper we describe potential applications of the FireMapper aboard the International Space Station for demonstration of space-borne fire detection and measurement.

  15. Dynamic Granger-Geweke causality modeling with application to interictal spike propagation

    PubMed Central

    Lin, Fa-Hsuan; Hara, Keiko; Solo, Victor; Vangel, Mark; Belliveau, John W.; Stufflebeam, Steven M.; Hamalainen, Matti S.

    2010-01-01

    A persistent problem in developing plausible neurophysiological models of perception, cognition, and action is the difficulty of characterizing the interactions between different neural systems. Previous studies have approached this problem by estimating causal influences across brain areas activated during cognitive processing using Structural Equation Modeling (SEM) and, more recently, with Granger-Geweke causality. While SEM is complicated by the need for a priori directional connectivity information, the temporal resolution of dynamic Granger-Geweke estimates is limited because the underlying autoregressive (AR) models assume stationarity over the period of analysis. We have developed a novel method for obtaining optimal data-driven directional causality estimates with high temporal resolution in both the time and frequency domains. This is achieved by simultaneously optimizing the length of the analysis window and the chosen AR model order using the SURE criterion. Dynamic Granger-Geweke causality in the time and frequency domains is subsequently calculated within a moving analysis window. We tested our algorithm by calculating the Granger-Geweke causality of epileptic spike propagation from the right frontal lobe to the left frontal lobe. The results quantitatively suggested that epileptic activity at the left frontal lobe propagated from the right frontal lobe, in agreement with the clinical diagnosis. Our novel computational tool can be used to help elucidate complex directional interactions in the human brain. PMID:19378280
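    The core time-domain quantity can be sketched in a few lines: Granger causality from x to y compares the residual variance of an AR model of y driven by its own past against one that also includes x's past. This is a minimal fixed-window, fixed-order illustration; the paper's method additionally optimizes the window length and AR order via the SURE criterion, which is omitted here, and the function names are hypothetical.

```python
import numpy as np

def _lagged(v, order):
    """Design matrix of v's past: column k holds v[t - k - 1]."""
    n = len(v)
    return np.column_stack([v[order - 1 - k : n - 1 - k] for k in range(order)])

def granger_causality(x, y, order=2):
    """Time-domain Granger-Geweke causality from x to y: log ratio of
    residual sums of squares of the restricted (y's own past) and full
    (y's and x's past) least-squares AR fits."""
    target = y[order:]
    restricted = _lagged(y, order)                      # y's own past only
    full = np.hstack([restricted, _lagged(x, order)])   # plus x's past
    def rss(A):
        coef, *_ = np.linalg.lstsq(A, target, rcond=None)
        r = target - A @ coef
        return r @ r
    return np.log(rss(restricted) / rss(full))
```

    Because the full model nests the restricted one, the statistic is non-negative, and it grows as x's past explains more of y's innovation variance.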

  16. MHD code using multi graphical processing units: SMAUG+

    NASA Astrophysics Data System (ADS)

    Gyenge, N.; Griffiths, M. K.; Erdélyi, R.

    2018-01-01

    This paper introduces the Sheffield Magnetohydrodynamics Algorithm Using GPUs (SMAUG+), an advanced numerical code for solving magnetohydrodynamic (MHD) problems using multi-GPU systems. Multi-GPU systems facilitate the development of accelerated codes and enable us to investigate larger model sizes and/or more detailed computational domain resolutions. This is a significant advancement over the parent single-GPU MHD code, SMAUG (Griffiths et al., 2015). Here, we demonstrate the validity of the SMAUG+ code, describe the parallelisation techniques and investigate performance benchmarks. The initial configuration of the Orszag-Tang vortex simulations is distributed among 4, 16, 64 and 100 GPUs. Furthermore, different simulation box resolutions are applied: 1000 × 1000, 2044 × 2044, 4000 × 4000 and 8000 × 8000. We also tested the code with the Brio-Wu shock tube simulations with a model size of 800, employing up to 10 GPUs. Based on the test results, we observed speed-ups and slow-downs, depending on the granularity and the communication overhead of certain parallel tasks. The main aim of the code development is to provide a massively parallel code without the memory limitation of a single GPU. By using our code, the applied model size can be significantly increased. We demonstrate that we are able to successfully compute numerically valid and large 2D MHD problems.

  17. Identification and characterization of agro-ecological infrastructures by remote sensing

    NASA Astrophysics Data System (ADS)

    Ducrot, D.; Duthoit, S.; d'Abzac, A.; Marais-Sicre, C.; Chéret, V.; Sausse, C.

    2015-10-01

    Agro-Ecological Infrastructures (AEIs) include many semi-natural habitats (hedgerows, grass strips, grasslands, thickets…) and play a key role in biodiversity preservation, water quality and erosion control. Indirect biodiversity indicators based on AEIs are used in many national and European public policies to analyze ecological processes. The identification of these landscape features is difficult and expensive, which limits their use. Remote sensing has great potential to solve this problem. In this study, we propose an operational tool for the identification and characterization of AEIs. The method is based on segmentation, contextual classification and fusion of temporal classifications. Experiments were carried out on satellite data of various temporal and spatial resolutions (20-m, 10-m, 5-m, 2.5-m, 50-cm) over three French regions: the southwest (a hilly, wooded, cultivated plain landscape), the north (open-field) and Brittany (farmland enclosed by hedges). The results give a good idea of the potential of remote sensing image processing methods to map fine agro-ecological objects. At 20-m spatial resolution, only the larger hedgerows and riparian forests are apparent. Classification results show that 10-m resolution is well suited to agricultural and AEI applications, as most hedges, forest edges and thickets can be detected. The results also highlight the importance of multi-temporal data. The future Sentinel satellites, with a very high temporal resolution and a 10-m spatial resolution, should therefore be well suited to AEI detection. At 2.5-m resolution, more details are captured, but processing is more complicated. At 50-cm resolution, the level of detail is even higher, which amplifies the difficulties previously reported. The results obtained allow calculation of statistics and metrics describing landscape structures.

  18. Imaging ultrasonic dispersive guided wave energy in long bones using linear radon transform.

    PubMed

    Tran, Tho N H T; Nguyen, Kim-Cuong T; Sacchi, Mauricio D; Le, Lawrence H

    2014-11-01

    Multichannel analysis of dispersive ultrasonic energy requires a reliable mapping of the data from the time-distance (t-x) domain to the frequency-wavenumber (f-k) or frequency-phase velocity (f-c) domain. The mapping is usually performed with the classic 2-D Fourier transform (FT) with a subsequent substitution and interpolation via c = 2πf/k. The extracted dispersion trajectories of the guided modes lack the resolution in the transformed plane to discriminate wave modes. The resolving power associated with the FT is closely linked to the aperture of the recorded data. Here, we present a linear Radon transform (RT) to image the dispersive energies of the recorded ultrasound wave fields. The RT is posed as an inverse problem, which allows implementation of a regularization strategy to enhance the focusing power. We choose a Cauchy regularization for the high-resolution RT. Three forms of the Radon transform (adjoint, damped least-squares, and high-resolution) are described and compared with respect to robustness using simulated and cervine bone data. The RT also depends on the data aperture, but not as severely as does the FT. With the RT, the resolution of the dispersion panel could be improved by up to around 300% over that of the FT. Among the Radon solutions, the high-resolution RT delineated the guided wave energy with much better imaging resolution (at least 110%) than the other two forms. The Radon operator can also accommodate unevenly spaced records. The results of the study suggest that the high-resolution RT is a valuable imaging tool for extracting dispersive guided wave energies under limited aperture. Copyright © 2014 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.

  19. Muon tomography imaging improvement using optimized limited angle data

    NASA Astrophysics Data System (ADS)

    Bai, Chuanyong; Simon, Sean; Kindem, Joel; Luo, Weidong; Sossong, Michael J.; Steiger, Matthew

    2014-05-01

    Image resolution of muon tomography is limited by the range of zenith angles of cosmic ray muons and the flux rate at sea level. The low flux rate limits the use of advanced data rebinning and processing techniques to improve image quality. By optimizing the limited angle data, however, image resolution can be improved. To demonstrate the idea, physical data of tungsten blocks were acquired on a muon tomography system. The angular distribution and energy spectrum of muons measured on the system were also used to generate simulation data of tungsten blocks in different arrangements (geometries). The data were grouped into subsets using the zenith angle, and volume images were reconstructed from the data subsets using two algorithms: a distributed PoCA (point of closest approach) algorithm and an accelerated iterative maximum-likelihood/expectation-maximization (MLEM) algorithm. Image resolution was compared for different subsets. Results showed that image resolution was better in the vertical direction for subsets with greater zenith angles and better in the horizontal plane for subsets with smaller zenith angles. The overall image resolution appeared to be a compromise between those of the different subsets. This work suggests that the acquired data can be grouped into different limited angle data subsets for optimized image resolution in desired directions. Use of multiple images with resolution optimized in different directions can improve overall imaging fidelity for the intended applications.
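    The geometric kernel behind any PoCA reconstruction is the point of closest approach between the incoming and outgoing muon tracks, modeled as two 3-D lines; scattering density is then accumulated at that point. A minimal sketch of that kernel (the function name and point/direction parametrization are illustrative, not the paper's code):

```python
import numpy as np

def poca(p1, d1, p2, d2):
    """Point of closest approach between two 3-D lines, each given as a
    point `p` and direction `d`. Returns the midpoint of the shortest
    segment joining the lines, or None for (near-)parallel tracks."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w, d2 @ w
    denom = a * c - b * b          # = sin^2(angle between tracks)
    if abs(denom) < 1e-12:         # parallel tracks: no unique PoCA
        return None
    s = (b * e - c * d) / denom    # parameter along line 1
    t = (a * e - b * d) / denom    # parameter along line 2
    return 0.5 * ((p1 + s * d1) + (p2 + t * d2))
```

    For a muon that scatters once inside the object, the two fitted track segments nearly intersect, and the PoCA approximates the scattering location.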

  20. Developmental changes in conflict resolution styles in parent-adolescent relationships: a four-wave longitudinal study.

    PubMed

    Van Doorn, Muriel D; Branje, Susan J T; Meeus, Wim H J

    2011-01-01

    In this study, changes in three conflict resolution styles in parent-adolescent relationships were investigated: positive problem solving, conflict engagement, and withdrawal. Questionnaires about these conflict resolution styles were completed by 314 early adolescents (M = 13.3 years; 50.6% girls) and both parents for four consecutive years. Adolescents' reported use of positive problem solving increased with mothers, but did not change with fathers. Fathers reported an increase of positive problem solving with adolescents, whereas mothers reported no change. Adolescents' use of conflict engagement was found to temporarily increase with mothers, but showed no change with fathers. Mothers and fathers reported a decrease in conflict engagement with adolescents. Adolescents' use of withdrawal with parents increased, although this increase was temporary with mothers. Mothers reported no change in withdrawal, whereas fathers' use of withdrawal increased. Generally, we found that both adolescents and their parents changed in their use of conflict resolution from early to middle adolescence. These results show that conflict resolution in parent-adolescent relationships gradually changes in favor of a more horizontal relationship.

  2. From Constraints to Resolution Rules Part II : chains, braids, confluence and T&E

    NASA Astrophysics Data System (ADS)

    Berthier, Denis

    In this Part II, we apply the general theory developed in Part I to a detailed analysis of the Constraint Satisfaction Problem (CSP). We show how specific types of resolution rules can be defined. In particular, we introduce the general notions of a chain and a braid. As in Part I, these notions are illustrated in detail with the Sudoku example - a problem known to be NP-complete and which is therefore typical of a broad class of hard problems. For Sudoku, we also show how far one can go in "approximating" a CSP with a resolution theory and we give an empirical statistical analysis of how the various puzzles, corresponding to different sets of entries, can be classified along a natural scale of complexity. For any CSP, we also prove the confluence property of some Resolution Theories based on braids and we show how it can be used to define different resolution strategies. Finally, we prove that, in any CSP, braids have the same solving capacity as Trial-and-Error (T&E) with no guessing, and we comment on this result in the Sudoku case.

  3. A generic method for improving the spatial interoperability of medical and ecological databases.

    PubMed

    Ghenassia, A; Beuscart, J B; Ficheur, G; Occelli, F; Babykina, E; Chazard, E; Genin, M

    2017-10-03

    The availability of big data in healthcare and the intensive development of data reuse and georeferencing have opened up perspectives for health spatial analysis. However, fine-scale spatial studies of ecological and medical databases are limited by the change of support problem and thus a lack of spatial unit interoperability. The use of spatial disaggregation methods to solve this problem introduces errors into the spatial estimations. Here, we present a generic, two-step method for merging medical and ecological databases that avoids the use of spatial disaggregation methods while maximizing the spatial resolution. Firstly, a mapping table is created after one or more transition matrices have been defined; the latter link the spatial units of the original databases to the spatial units of the final database. Secondly, the mapping table is validated by (1) comparing the covariates contained in the two original databases, and (2) checking the spatial validity with a spatial continuity criterion and a spatial resolution index. We used our novel method to merge a medical database (the French national diagnosis-related group database, containing 5644 spatial units) with an ecological database (produced by the French National Institute of Statistics and Economic Studies, containing 36,594 spatial units). The mapping table yielded 5632 final spatial units. The mapping table's validity was evaluated by comparing the number of births in the medical and ecological databases in each final spatial unit. The median [interquartile range] relative difference was 2.3% [0; 5.7]. The spatial continuity criterion was low (2.4%), and the spatial resolution index was greater than for most French administrative areas. Our innovative approach improves interoperability between medical and ecological databases and facilitates fine-scale spatial analyses. We have shown that disaggregation models and large aggregation techniques are not necessarily the best ways to tackle the change of support problem.
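    The mapping-table idea above can be illustrated with a toy sketch: each original database's spatial units are mapped onto shared final units via its transition mapping, counts are aggregated per final unit, and the per-unit relative difference serves as the validation metric. All names, data, and the exact form of the relative-difference formula are assumptions for illustration, not the paper's implementation.

```python
from collections import defaultdict

def merge_on_final_units(medical, ecological, med_to_final, eco_to_final):
    """Aggregate per-unit counts from both databases onto shared final
    spatial units using the transition mappings (the 'mapping table')."""
    totals = defaultdict(lambda: [0.0, 0.0])
    for unit, count in medical.items():
        totals[med_to_final[unit]][0] += count
    for unit, count in ecological.items():
        totals[eco_to_final[unit]][1] += count
    return {u: tuple(v) for u, v in totals.items()}

def relative_difference_pct(med, eco):
    """Per-final-unit validation metric: relative difference (%) between
    the counts coming from the two databases."""
    return abs(med - eco) / max(med, eco) * 100.0
```

    Because both databases are only ever aggregated upward onto the final units, no disaggregation (and hence no disaggregation error) is introduced.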

  4. High-resolution coded-aperture design for compressive X-ray tomography using low resolution detectors

    NASA Astrophysics Data System (ADS)

    Mojica, Edson; Pertuz, Said; Arguello, Henry

    2017-12-01

    One of the main challenges in Computed Tomography (CT) is obtaining accurate reconstructions of the imaged object while keeping a low radiation dose in the acquisition process. In order to solve this problem, several researchers have proposed the use of compressed sensing for reducing the amount of measurements required to perform CT. This paper tackles the problem of designing high-resolution coded apertures for compressed sensing computed tomography. In contrast to previous approaches, we aim at designing apertures to be used with low-resolution detectors in order to achieve super-resolution. The proposed method iteratively improves random coded apertures using a gradient descent algorithm subject to constraints in the coherence and homogeneity of the compressive sensing matrix induced by the coded aperture. Experiments with different test sets show consistent results for different transmittances, number of shots and super-resolution factors.

  5. Example-Based Super-Resolution Fluorescence Microscopy.

    PubMed

    Jia, Shu; Han, Boran; Kutz, J Nathan

    2018-04-23

    Capturing biological dynamics with high spatiotemporal resolution demands advances in imaging technologies. Super-resolution fluorescence microscopy offers spatial resolution surpassing the diffraction limit to resolve near-molecular-level details. While various strategies have been reported to improve the temporal resolution of super-resolution imaging, all super-resolution techniques are still fundamentally limited by the trade-off associated with the longer image acquisition time needed to achieve higher spatial information. Here, we demonstrate an example-based computational method that aims to obtain super-resolution images from conventional imaging without increasing the imaging time. Given a low-resolution image as input, the method provides an estimate of its super-resolution counterpart based on an example database that contains paired super- and low-resolution images of the biological structures of interest. The computational imaging of cellular microtubules agrees approximately with experimental super-resolution STORM results. This new approach may offer potential improvements in temporal resolution for experimental super-resolution fluorescence microscopy and provides a new path for large-data-aided biomedical imaging.

  6. An Investigation into Solution Verification for CFD-DEM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fullmer, William D.; Musser, Jordan

    This report presents a study of the convergence behavior of the computational fluid dynamics-discrete element method (CFD-DEM), specifically National Energy Technology Laboratory's (NETL) open source MFiX code (MFiX-DEM) with a diffusion-based particle-to-continuum filtering scheme. In particular, this study focused on determining whether the numerical method has a solution in the high-resolution limit, where the grid size is smaller than the particle size. To address this uncertainty, fixed particle beds of two primary configurations were studied: i) fictitious beds where the particles are seeded with a random particle generator, and ii) instantaneous snapshots from a transient simulation of an experimentally relevant problem. Both problems considered a uniform inlet boundary and a pressure outflow. The CFD grid was refined from a few particle diameters down to 1/6th of a particle diameter. The pressure drop between two vertical elevations, averaged across the bed cross-section, was considered as the system response quantity of interest. A least-squares regression method was used to extrapolate the grid-dependent results to an approximate "grid-free" solution in the limit of infinite resolution. The results show that the diffusion-based scheme does yield a converging solution. However, the convergence is more complicated than encountered in simpler, single-phase flow problems, showing strong oscillations and, at times, oscillations superimposed on top of globally non-monotonic behavior. The challenging convergence behavior highlights the importance of using at least four grid resolutions in solution verification problems so that (over-determined) regression-based extrapolation methods may be applied to approximate the grid-free solution. The grid-free solution is very important in solution verification and VVUQ exercises in general, as the difference between it and the reference solution largely determines the numerical uncertainty.
    By testing different randomized particle configurations of the same general problem (for the fictitious case) or different instances of freezing a transient simulation, the numerical uncertainties appeared to be on the same order of magnitude as ensemble or time-averaging uncertainties. By testing different drag laws, almost all cases studied showed that the model form uncertainty in this one very important closure relation was larger than the numerical uncertainty, at least with a reasonable CFD grid of roughly five particle diameters. In this study, the diffusion width (filtering length scale) was mostly set at a constant six particle diameters. A few exploratory tests showed that similar convergence behavior was observed for diffusion widths greater than approximately two particle diameters. However, this subject was not investigated in great detail because determining an appropriate filter size is really a validation question, which must be answered by comparison to experimental or highly accurate numerical data. Future studies are being considered targeting solution verification of transient simulations as well as validation of the filter size with direct numerical simulation data.
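    The regression-based extrapolation described above can be sketched as a least-squares fit of f(h) ≈ f0 + C·hᵖ over four or more grid spacings h, with f0 taken as the approximate grid-free value. The scan over the observed order p below is an illustrative assumption, not necessarily the regression used in the report.

```python
import numpy as np

def extrapolate_grid_free(h, f, orders=None):
    """Fit f(h) ~ f0 + C * h**p over several grid spacings h. For each
    candidate observed order p, the remaining linear problem in (f0, C)
    is solved by least squares; the best-fitting triple is returned."""
    if orders is None:
        orders = np.linspace(0.5, 4.0, 36)   # candidate observed orders
    best = None
    for p in orders:
        A = np.column_stack([np.ones_like(h), h ** p])
        coef, *_ = np.linalg.lstsq(A, f, rcond=None)
        resid = f - A @ coef
        score = float(resid @ resid)
        if best is None or score < best[0]:
            best = (score, float(coef[0]), float(coef[1]), float(p))
    return best[1], best[2], best[3]        # (f0, C, p)
```

    With four or more resolutions the fit is over-determined, which is what makes the extrapolated f0, and hence the numerical-uncertainty estimate, meaningful even when convergence is noisy.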

  7. Defect of focus in two-line resolution with Hanning amplitude filters

    NASA Astrophysics Data System (ADS)

    Karunasagar, D.; Bhikshamaiah, G.; Keshavulu Goud, M.; Lacha Goud, S.

    In the presence of defocusing, the modified Sparrow limits of resolution for two-line objects have been investigated for a diffraction-limited coherent optical system apodized by generalized Hanning amplitude filters. These limits have been studied as a function of parameters such as the intensity ratio and the order of the filter, for various values of the apodization parameter. Results reveal that in some situations defocusing is effective in enhancing the resolution of an optical system.

  8. Achieving superresolution with illumination-enhanced sparsity.

    PubMed

    Yu, Jiun-Yann; Becker, Stephen R; Folberth, James; Wallin, Bruce F; Chen, Simeng; Cogswell, Carol J

    2018-04-16

    Recent advances in superresolution fluorescence microscopy have been limited by a belief that surpassing two-fold resolution enhancement of the Rayleigh resolution limit requires stimulated emission or the fluorophore to undergo state transitions. Here we demonstrate a new superresolution method that requires only image acquisitions with a focused illumination spot and computational post-processing. The proposed method utilizes the focused illumination spot to effectively reduce the object size and enhance the object sparsity, and consequently increases the resolution and accuracy through nonlinear image post-processing. This method clearly resolves 70 nm resolution test objects emitting ~530 nm light with a 1.4 numerical aperture (NA) objective and, when imaging through a 0.5 NA objective, exhibits high spatial frequencies comparable to a 1.4 NA widefield image, both demonstrating a resolution enhancement above two-fold of the Rayleigh resolution limit. More importantly, we examine how the resolution increases with photon numbers, and show that the more-than-two-fold enhancement is achievable with realistic photon budgets.

  9. Teaching Students with Behavioral Disorders to Use a Negotiation Procedure: Impact on Classroom Behavior and Conflict Resolution Strategy

    ERIC Educational Resources Information Center

    Bullock, Cathy

    2012-01-01

    The impact of the instruction of a six-step problem solving negotiation procedure on the conflict resolution strategies and classroom behavior of six elementary students with challenging behaviors was examined. Moderately positive effects were found for the following negotiation strategies used by students: independent problem solving, problem…

  10. The role of vision processing in prosthetic vision.

    PubMed

    Barnes, Nick; He, Xuming; McCarthy, Chris; Horne, Lachlan; Kim, Junae; Scott, Adele; Lieby, Paulette

    2012-01-01

    Prosthetic vision provides vision which is reduced in resolution and dynamic range compared to normal human vision. This comes about both from residual damage to the visual system caused by the condition that led to vision loss, and from limitations of current technology. Even with these limitations, however, prosthetic vision may still support functional performance sufficient for tasks that are key to restoring independent living and quality of life. Here, vision processing can play a key role by ensuring that information critical to the performance of such tasks is conveyed within the capability of the available prosthetic vision. In this paper, we frame vision processing for prosthetic vision, highlight some key areas which present problems in terms of quality of life, and present examples where vision processing can help achieve better outcomes.

  11. High-Energy Electron-Ion and Photon-Ion Collisions: Status and Challenges

    NASA Technical Reports Server (NTRS)

    Kallman, Timothy R.

    2010-01-01

    Non-LTE plasmas are ubiquitous in objects studied in the UV and X-ray energy bands. Collisional and photoionization cross sections for atoms and ions are fundamental to our ability to model such plasmas. Modeling is key in the X-ray band, where detector properties and finite spectral resolution limit the ability to measure model-independent line strengths or other spectral features. Much of the motivation for studying such collisions, and many of the tools, are not new. However, the motivation for such studies, and their applications, have been affected by the advent of X-ray spectroscopy with the gratings on Chandra and XMM-Newton. In this talk I will review this motivation and describe the tools currently in use for such studies. I will also describe some current unresolved problems and the likely future needs for such data.

  12. [Limitation of therapeutic effort in Paediatric Intensive Care Units: Bioethical knowledge and attitudes of the medical profession].

    PubMed

    Morales Valdés, Gonzalo; Alvarado Romero, Tatiana; Zuleta Castro, Rodrigo

    2016-01-01

    Paediatric intensive care is a relatively new specialty, with significant technological advances that can lead to a prolongation of the dying process. One of the most common bioethical problems is the limitation of treatment, i.e. the adequacy and/or proportionality of treatment, seeking to avoid obstinacy and futility. The aim was to determine the experience of physicians working in Paediatric Intensive Care Units (PICU) when faced with bioethical decisions. An observational, descriptive and cross-sectional study was conducted using an anonymous questionnaire sent to physicians working in PICUs. The data requested related to potential ethical problems generated in the care of the critically ill child, and the procedure for their resolution. The study was approved by the Ethics Research Committee of the Faculty of Medicine UDD CAS. A total of 126 completed questionnaires were received from physicians working in 34 PICUs in Chile. Almost all (98.41%) of them acknowledged having taken therapeutic limitation decisions (TLDs). The most common type of TLD mentioned was the Do Not Resuscitate order (n=119), followed by the withholding of medications (n=113) and limited admission to the PICU (n=81), with the withdrawal of treatment being the least mentioned (n=81). Around one-third (34.13%) felt that there was no ethical difference between introducing or withdrawing certain treatments. Bioethical dilemmas are common in the PICU, and therapeutic limitation decisions are frequent. Many physicians acknowledge a lack of expertise in clinical ethics and a need for continuing education in bioethics. Copyright © 2016 Sociedad Chilena de Pediatría. Publicado por Elsevier España, S.L.U. All rights reserved.

  13. Microsphere-aided optical microscopy and its applications for super-resolution imaging

    NASA Astrophysics Data System (ADS)

    Upputuri, Paul Kumar; Pramanik, Manojit

    2017-12-01

    The spatial resolution of a standard optical microscope (SOM) is limited by diffraction. In the visible spectrum, a SOM can provide ∼200 nm resolution. To break the diffraction limit, several approaches have been developed, including scanning near-field microscopy, metamaterial super-lenses, nanoscale solid immersion lenses, super-oscillatory lenses, confocal fluorescence microscopy, and techniques that exploit the non-linear response of fluorophores, such as stimulated emission depletion microscopy and stochastic optical reconstruction microscopy. Recently, the photonic nanojet generated by a dielectric microsphere was used to break the diffraction limit. The microsphere approach is simple and cost-effective and can be implemented under a standard microscope, and it has therefore gained enormous attention for super-resolution imaging. In this article, we briefly review the microsphere approach and its applications for super-resolution imaging in various optical imaging modalities.

  14. Image-based Modeling of PSF Deformation with Application to Limited Angle PET Data

    PubMed Central

    Matej, Samuel; Li, Yusheng; Panetta, Joseph; Karp, Joel S.; Surti, Suleman

    2016-01-01

    The point-spread-functions (PSFs) of reconstructed images can be deformed due to detector effects such as resolution blurring and parallax error, data acquisition geometry such as insufficient sampling or limited angular coverage in dual-panel PET systems, or reconstruction imperfections/simplifications. PSF deformation decreases quantitative accuracy, and its spatial variation lowers the consistency of lesion uptake measurement across the imaging field-of-view (FOV). This can be a significant problem with dual-panel PET systems even when using TOF data and image reconstruction models of the detector and data acquisition process. To correct for the spatially variant reconstructed PSF distortions we propose to use an image-based resolution model (IRM) that includes such image PSF deformation effects. Originally the IRM was mostly used for approximating data resolution effects of standard PET systems with full angular coverage in a computationally efficient way, but recently it was also used to mitigate effects of simplified geometric projectors. Our work goes beyond this by including in the IRM the reconstruction imperfections caused by the combination of limited angular coverage, parallax errors, and any other (residual) deformation effects, and by testing it on challenging dual-panel data with strongly asymmetric and variable PSF deformations. We applied and tested these concepts using simulated data based on our design for a dedicated breast imaging geometry (B-PET) consisting of dual-panel, time-of-flight (TOF) detectors. We compared two image-based resolution models: (i) a simple spatially invariant approximation to PSF deformation, which captures only the general PSF shape through an elongated 3D Gaussian function, and (ii) a spatially variant model using a Gaussian mixture model (GMM) to more accurately capture the asymmetric PSF shape in images reconstructed from data acquired with the B-PET scanner geometry.
Results demonstrate that while both IRMs decrease the overall uptake bias in the reconstructed image, the second one with the spatially variant and accurate PSF shape model is also able to ameliorate the spatially variant deformation effects to provide consistent uptake results independent of the lesion location within the FOV. PMID:27812222
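
    The contrast between the two IRMs above can be sketched in one dimension: a blur operator whose Gaussian width varies with position is just a matrix whose rows are position-dependent kernels. The sketch below is illustrative only (the sizes and the σ(x) profile are invented, not the paper's B-PET model):

```python
import numpy as np

def variant_blur_matrix(n, sigma_of_x):
    """Rows are normalized Gaussians whose width depends on position:
    a spatially variant image-based resolution model (IRM) in matrix form."""
    x = np.arange(n)
    widths = sigma_of_x(x)[:, None]
    M = np.exp(-0.5 * ((x[None, :] - x[:, None]) / widths) ** 2)
    return M / M.sum(axis=1, keepdims=True)

n = 101
# Hypothetical profile: the PSF broadens away from the field centre, a crude
# stand-in for parallax- and limited-angle-driven deformation.
irm = variant_blur_matrix(n, lambda x: 1.0 + 4.0 * np.abs(x - n // 2) / n)

spikes = np.zeros(n)
spikes[n // 2] = 1.0   # lesion at the centre of the FOV
spikes[10] = 1.0       # identical lesion near the edge
blurred = irm @ spikes
```

    Applying `irm` to the spike pair shows the spatially variant effect directly: the edge lesion comes out broader and dimmer than the central one, which is the uptake-consistency problem the paper's spatially variant model is designed to undo.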

  15. Deconvolving instrumental and intrinsic broadening in core-shell x-ray spectroscopies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fister, T. T.; Seidler, G. T.; Rehr, J. J.

    2007-05-01

    Intrinsic and experimental mechanisms frequently lead to broadening of spectral features in core-shell spectroscopies. For example, intrinsic broadening occurs in x-ray absorption spectroscopy (XAS) measurements of heavy elements where the core-hole lifetime is very short. On the other hand, nonresonant x-ray Raman scattering (XRS) and other energy loss measurements are more limited by instrumental resolution. Here, we demonstrate that the Richardson-Lucy (RL) iterative algorithm provides a robust method for deconvolving instrumental and intrinsic resolutions from typical XAS and XRS data. For the K-edge XAS of Ag, we find nearly complete removal of ~9.3 eV full width at half maximum broadening from the combined effects of the short core-hole lifetime and instrumental resolution. We are also able to remove nearly all instrumental broadening in an XRS measurement of diamond, with the resulting improved spectrum comparing favorably with prior soft x-ray XAS measurements. We present a practical methodology for implementing the RL algorithm in these problems, emphasizing the importance of testing for stability of the deconvolution process against noise amplification, perturbations in the initial spectra, and uncertainties in the core-hole lifetime.
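
    The Richardson-Lucy iteration itself is compact: alternately blur the current estimate, compare with the observation, and apply the back-projected ratio as a multiplicative correction. A minimal 1D sketch, assuming a known Gaussian broadening kernel (the synthetic two-peak "spectrum" below is invented for illustration, not the Ag or diamond data):

```python
import numpy as np

def richardson_lucy(observed, kernel, n_iter=50):
    """Richardson-Lucy deconvolution of a 1D non-negative spectrum with a
    known broadening kernel (normalized internally)."""
    kernel = kernel / kernel.sum()
    kernel_flip = kernel[::-1]
    estimate = np.full_like(observed, observed.mean())
    for _ in range(n_iter):
        blurred_est = np.convolve(estimate, kernel, mode="same")
        ratio = observed / np.maximum(blurred_est, 1e-12)
        estimate = estimate * np.convolve(ratio, kernel_flip, mode="same")
    return estimate

# Synthetic two-peak spectrum broadened by a Gaussian "instrument + lifetime"
# response, then partially restored by RL iterations.
truth = np.zeros(200)
truth[60], truth[120] = 1.0, 0.5
g = np.exp(-0.5 * (np.arange(-25, 26) / 5.0) ** 2)
blurred = np.convolve(truth, g / g.sum(), mode="same")
restored = richardson_lucy(blurred, g, n_iter=200)
```

    The abstract's stability caveat applies here too: with noisy data, running too many iterations amplifies noise, so in practice the iteration count (or a regularized variant) has to be tuned against exactly the perturbation tests the authors describe.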

  16. Equally sloped X-ray microtomography of living insects with low radiation dose and improved resolution capability

    NASA Astrophysics Data System (ADS)

    Yao, Shengkun; Fan, Jiadong; Zong, Yunbing; He, You; Zhou, Guangzhao; Sun, Zhibin; Zhang, Jianhua; Huang, Qingjie; Xiao, Tiqiao; Jiang, Huaidong

    2016-03-01

    Three-dimensional X-ray imaging of living specimens is challenging due to the limited resolution of conventional absorption contrast X-ray imaging and potential irradiation damage of biological specimens. In this letter, we present microtomography of a living specimen combining phase-contrast imaging and a Fourier-based iterative algorithm termed equally sloped tomography. Non-destructive 3D imaging of an anesthetized living yellow mealworm Tenebrio molitor was demonstrated with a relatively low dose using synchrotron generated X-rays. Based on the high-quality 3D images, branching tracheoles and different tissues of the insect in a natural state were identified and analyzed, demonstrating a significant advantage of the technique over conventional X-ray radiography or histotomy. Additionally, the insect survived without problem after a 1.92-s X-ray exposure and subsequent absorbed radiation dose of ˜1.2 Gy. No notable physiological effects were observed after reviving the insect from anesthesia. The improved static tomographic method demonstrated in this letter shows advantage in the non-destructive structural investigation of living insects in three dimensions due to the low radiation dose and high resolution capability, and offers many potential applications in biological science.

  17. Computational microscopy: illumination coding and nonlinear optimization enables gigapixel 3D phase imaging

    NASA Astrophysics Data System (ADS)

    Tian, Lei; Waller, Laura

    2017-05-01

    Microscope lenses can have either large field of view (FOV) or high resolution, not both. Computational microscopy based on illumination coding circumvents this limit by fusing images from different illumination angles using nonlinear optimization algorithms. The result is a Gigapixel-scale image having both wide FOV and high resolution. We demonstrate an experimentally robust reconstruction algorithm based on a 2nd order quasi-Newton's method, combined with a novel phase initialization scheme. To further extend the Gigapixel imaging capability to 3D, we develop a reconstruction method to process the 4D light field measurements from sequential illumination scanning. The algorithm is based on a 'multislice' forward model that incorporates both 3D phase and diffraction effects, as well as multiple forward scatterings. To solve the inverse problem, an iterative update procedure that combines both phase retrieval and 'error back-propagation' is developed. To avoid local minimum solutions, we further develop a novel physical model-based initialization technique that accounts for both the geometric-optic and 1st order phase effects. The result is robust reconstructions of Gigapixel 3D phase images having both wide FOV and super resolution in all three dimensions. Experimental results from an LED array microscope were demonstrated.

  18. Optical system design with wide field of view and high resolution based on monocentric multi-scale construction

    NASA Astrophysics Data System (ADS)

    Wang, Fang; Wang, Hu; Xiao, Nan; Shen, Yang; Xue, Yaoke

    2018-03-01

    As related technologies in the field of optoelectronic information mature, there is great demand for an optical system with both high resolution and a wide field of view (FOV). However, as illustrated in conventional applied optics, there is a contradiction between these two characteristics: the FOV and imaging resolution limit each other. Here, based on the study of typical wide-FOV optical system designs, we propose the monocentric multi-scale system design method to solve this problem. Consisting of a concentric spherical lens and a series of micro-lens arrays, this system achieves an effective improvement in imaging quality. As an example, we designed a typical imaging system with a focal length of 35 mm, an instantaneous field angle of 14.7", and a FOV of 120°. By analyzing the imaging quality, we demonstrate that across the FOV, all MTF values are higher than 0.4 at the Nyquist sampling frequency of 200 lp/mm, in good accordance with our design.

  19. Equally sloped X-ray microtomography of living insects with low radiation dose and improved resolution capability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yao, Shengkun; Fan, Jiadong; Zong, Yunbing

    Three-dimensional X-ray imaging of living specimens is challenging due to the limited resolution of conventional absorption contrast X-ray imaging and potential irradiation damage of biological specimens. In this letter, we present microtomography of a living specimen combining phase-contrast imaging and a Fourier-based iterative algorithm termed equally sloped tomography. Non-destructive 3D imaging of an anesthetized living yellow mealworm Tenebrio molitor was demonstrated with a relatively low dose using synchrotron generated X-rays. Based on the high-quality 3D images, branching tracheoles and different tissues of the insect in a natural state were identified and analyzed, demonstrating a significant advantage of the technique over conventional X-ray radiography or histotomy. Additionally, the insect survived without problem after a 1.92-s X-ray exposure and subsequent absorbed radiation dose of ∼1.2 Gy. No notable physiological effects were observed after reviving the insect from anesthesia. The improved static tomographic method demonstrated in this letter shows advantage in the non-destructive structural investigation of living insects in three dimensions due to the low radiation dose and high resolution capability, and offers many potential applications in biological science.

  20. Wave equation datuming applied to S-wave reflection seismic data

    NASA Astrophysics Data System (ADS)

    Tinivella, U.; Giustiniani, M.; Nicolich, R.

    2018-05-01

    S-wave high-resolution reflection seismic data were processed using the Wave Equation Datuming technique in order to improve the signal/noise ratio by attenuating coherent noise, to improve seismic resolution, and to solve static-correction problems. The application of this algorithm allowed us to obtain a good image of the shallow subsurface geological features. Wave Equation Datuming moves shots and receivers from one surface to another datum (the datum plane), removing time shifts caused by elevation variation and/or velocity changes in the shallow subsoil. The algorithm was developed for, and is currently applied to, P-waves, but it proves capable of highlighting S-wave images when used to resolve thin layers in high-resolution prospecting. A good S-wave image facilitates correlation with well stratigraphies, optimizing the cost/benefit ratio of any drilling. The application of Wave Equation Datuming requires a reliable velocity field, so refraction tomography was adopted. The new seismic image highlights the details of the subsoil reflectors and allows easier integration with borehole information and geological surveys than the seismic section obtained by conventional CMP reflection processing. In conclusion, the analysis of S-waves allowed us to characterize the shallow subsurface, recognizing levels of limited thickness once ground roll, wind and environmental noise had been clearly attenuated.

  1. High resolution through-the-wall radar image based on beamspace eigenstructure subspace methods

    NASA Astrophysics Data System (ADS)

    Yoon, Yeo-Sun; Amin, Moeness G.

    2008-04-01

    Through-the-wall imaging (TWI) is a challenging problem, even if the wall parameters and characteristics are known to the system operator. Proper target classification and correct imaging interpretation require the application of high resolution techniques using limited array size. In inverse synthetic aperture radar (ISAR), signal subspace methods such as Multiple Signal Classification (MUSIC) are used to obtain high resolution imaging. In this paper, we adopt signal subspace methods and apply them to the 2-D spectrum obtained from the delay-and-sum beamforming image. This is in contrast to ISAR, where raw data, in frequency and angle, is directly used to form the estimate of the covariance matrix and array response vector. Using beams rather than raw data has two main advantages, namely, it improves the signal-to-noise ratio (SNR) and can correctly image typical indoor extended targets, such as tables and cabinets, as well as point targets. The paper presents both simulated and experimental results using synthesized and real data. It compares the performance of beam-space MUSIC and the Capon beamformer. The experimental data is collected at the test facility in the Radar Imaging Laboratory, Villanova University.
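
    The subspace step underlying this approach can be sketched in element space (the paper's beam-space variant additionally passes the data through the beamformer first). A minimal 1D MUSIC pseudospectrum for a simulated uniform linear array, with invented source angles and noise level:

```python
import numpy as np

def music_spectrum(X, n_sources, angles_deg, d=0.5):
    """Element-space MUSIC pseudospectrum for a uniform linear array.
    X: complex snapshot matrix (n_sensors x n_snapshots); d: element
    spacing in wavelengths."""
    n_sensors = X.shape[0]
    R = X @ X.conj().T / X.shape[1]            # sample covariance
    _, eigvecs = np.linalg.eigh(R)             # eigenvalues in ascending order
    En = eigvecs[:, : n_sensors - n_sources]   # noise-subspace basis
    k = np.arange(n_sensors)
    p = []
    for theta in np.deg2rad(angles_deg):
        a = np.exp(2j * np.pi * d * k * np.sin(theta))   # steering vector
        p.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return np.array(p)

# Two uncorrelated sources at -20 and +30 degrees, 8 sensors, mild noise.
rng = np.random.default_rng(0)
n_sensors, n_snap = 8, 400
doas = np.deg2rad([-20.0, 30.0])
A = np.exp(2j * np.pi * 0.5 * np.outer(np.arange(n_sensors), np.sin(doas)))
S = rng.standard_normal((2, n_snap)) + 1j * rng.standard_normal((2, n_snap))
N = 0.1 * (rng.standard_normal((n_sensors, n_snap))
           + 1j * rng.standard_normal((n_sensors, n_snap)))
X = A @ S + N
grid = np.arange(-90.0, 90.5, 0.5)
P = music_spectrum(X, 2, grid)
```

    The pseudospectrum `P` shows sharp peaks near the two source directions, well beyond the Rayleigh resolution of an 8-element array; this is the property that makes subspace methods attractive for the limited array sizes available in TWI.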

  2. Large scale superres 3D imaging: light-sheet single-molecule localization microscopy (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Lu, Chieh Han; Chen, Peilin; Chen, Bi-Chang

    2017-02-01

    Optical imaging techniques provide much important information for understanding life science, especially cellular structure and morphology, because "seeing is believing". However, the resolution of optical imaging is limited by the diffraction limit, discovered by Ernst Abbe: λ/(2NA), where NA is the numerical aperture of the objective lens. Fluorescence super-resolution microscopic techniques such as stimulated emission depletion microscopy (STED), photoactivated localization microscopy (PALM), and stochastic optical reconstruction microscopy (STORM) have been invented to enable seeing biological entities down to the molecular level, smaller than the diffraction limit (around 200 nm in lateral resolution). These techniques do not physically violate the Abbe limit of resolution but exploit the photoluminescence properties and labelling specificity of fluorescent molecules to achieve super-resolution imaging. However, these super-resolution techniques limit most of their applications to the 2D imaging of fixed or dead samples due to the high laser power needed or the slow speed of the localization process. Beyond 2D imaging, light-sheet microscopy has been proven to have many applications in 3D imaging at much better spatiotemporal resolution due to its intrinsic optical sectioning and high imaging speed. Herein, we combine the advantages of localization microscopy and light-sheet microscopy to achieve super-resolved cellular imaging in 3D across a large field of view. High-density labelling with a spontaneously blinking fluorophore and the wide-field detection of light-sheet microscopy allow us to construct 3D super-resolution multi-cellular images at high speed (minutes) by light-sheet single-molecule localization microscopy.
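
    The Abbe relation quoted above is a one-liner; a sketch with illustrative wavelength/NA values (not parameters from this work):

```python
def abbe_limit_nm(wavelength_nm, numerical_aperture):
    """Abbe diffraction limit on lateral resolution: d = wavelength / (2 NA)."""
    return wavelength_nm / (2.0 * numerical_aperture)

# Green light through a hypothetical NA 1.4 oil-immersion objective gives
# 190 nm, consistent with the ~200 nm lateral limit quoted above.
d = abbe_limit_nm(532.0, 1.4)
```

    The formula also makes the point in the abstract concrete: no choice of visible wavelength and practical NA pushes `d` much below ~150-200 nm, which is why the listed techniques work around the limit rather than through it.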

  3. Surgical management of temple-related problems following lateral wall rim-sparing orbital decompression for thyroid-related orbitopathy.

    PubMed

    Siah, We Fong; Patel, Bhupendra Ck; Malhotra, Raman

    2016-08-01

    To report a case series of patients with persistent temple-related problems following lateral wall rim-sparing (LWRS) orbital decompression for thyroid-related orbitopathy and to discuss their management. Retrospective review of medical records of patients referred to two oculoplastic centres (Corneoplastic Unit, Queen Victoria Hospital, East Grinstead, UK and Moran Eye Center, University of Utah, Salt Lake City, USA) for intervention to improve/alleviate temple-related problems. All patients were seeking treatment for their persistent, temple-related problems of minimum 3 years' duration post decompression. The main outcome measure was the resolution or improvement of temple-related problems. Eleven orbits of six patients (five females) with a median age of 57 years (range 23-65) were included in this study. Temple-related problems consisted of cosmetically bothersome temple hollowness (n=11; 100%), masticatory oscillopsia (n=8; 73%), temple tenderness (n=4; 36%), 'clicking' sensation (n=4; 36%) and gaze-evoked ocular pain (n=4; 36%). Nine orbits were also complicated by proptosis and exposure keratopathy. Preoperative imaging studies showed the absence of lateral wall in all 11 orbits and evidence of prolapsed lacrimal gland into the wall defect in four orbits. Intervention included the repair of the lateral wall defect with a sheet implant, orbital decompression involving fat, the medial wall or orbital floor and autologous fat transfer or synthetic filler for temple hollowness. Postoperatively, there was full resolution of masticatory oscillation, temple tenderness, 'clicking' sensation and gaze-evoked ocular pain, and an improvement in temple hollowness. Pre-existing diplopia in one patient resolved after surgery while two patients developed new-onset diplopia necessitating strabismus surgery. This is the first paper to show that persistent, troublesome temple-related problems following LWRS orbital decompression can be surgically corrected. 
Patients should be counselled about the potential risk of these complications when considering LWRS orbital decompression. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  4. Peripheral detection and resolution with mid-/long-wavelength and short-wavelength sensitive cone systems.

    PubMed

    Zhu, Hai-Feng; Zele, Andrew J; Suheimat, Marwan; Lambert, Andrew J; Atchison, David A

    2016-08-01

    This study compared neural resolution and detection limits of the human mid-/long-wavelength and short-wavelength cone systems with anatomical estimates of photoreceptor and retinal ganglion cell spacings and sizes. Detection and resolution limits were measured from central fixation out to 35° eccentricity across the horizontal visual field using a modified Lotmar interferometer. The mid-/long-wavelength cone system was studied using a green (550 nm) test stimulus to which S-cones have low sensitivity. To bias resolution and detection to the short-wavelength cone system, a blue (450 nm) test stimulus was presented against a bright yellow background that desensitized the M- and L-cones. Participants were three trichromatic males with normal visual functions. With green stimuli, resolution showed a steep central-peripheral gradient that was similar between participants, whereas the detection gradient was shallower and patterns were different between participants. Detection and resolution with blue stimuli were poorer than for green stimuli. The detection of blue stimuli was superior to resolution across the horizontal visual field and the patterns were different between participants. The mid-/long-wavelength cone system's resolution is limited by midget ganglion cell spacing and its detection is limited by the size of the M- and L-cone photoreceptors, consistent with previous observations. We found that no such simple relationships occur for the short-wavelength cone system between resolution and the bistratified ganglion cell spacing, nor between detection and the S-cone photoreceptor sizes.
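
    The claim that resolution is limited by midget ganglion cell spacing is a Nyquist sampling argument: resolving one grating cycle requires two rows of detectors. A sketch with an illustrative foveal spacing value (not a measurement from this study):

```python
def nyquist_limit_cpd(row_spacing_deg):
    """Nyquist resolution limit, in cycles per degree, of a detector mosaic
    with the given row spacing: one grating cycle needs two detector rows."""
    return 1.0 / (2.0 * row_spacing_deg)

# An illustrative row spacing of ~0.5 arcmin gives the classic
# ~60 cycles/degree ceiling on foveal acuity.
acuity = nyquist_limit_cpd(0.5 / 60.0)
```

    Doubling the spacing halves the limit, which is the simple relationship the study tests against midget ganglion cell density across eccentricity (and finds does not hold for the short-wavelength cone pathway).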

  5. Estimation of a super-resolved PSF for the data reduction of undersampled stellar observations. Deriving an accurate model for fitting photometry with Corot space telescope

    NASA Astrophysics Data System (ADS)

    Pinheiro da Silva, L.; Auvergne, M.; Toublanc, D.; Rowe, J.; Kuschnig, R.; Matthews, J.

    2006-06-01

    Context: Fitting photometry algorithms can be very effective provided that an accurate model of the instrumental point spread function (PSF) is available. When high-precision time-resolved photometry is required, however, the use of point-source star images as empirical PSF models can be unsatisfactory, due to the limits in their spatial resolution. Theoretically-derived models, on the other hand, are limited by the unavoidable adoption of simplifying hypotheses, while the use of analytical approximations is restricted to regularly-shaped PSFs. Aims: This work investigates an innovative technique for space-based fitting photometry, based on the reconstruction of an empirical but properly-resolved PSF. The aim is the exploitation of arbitrary star images, including those produced under intentional defocus. The cases of both MOST and COROT, the first space telescopes dedicated to time-resolved stellar photometry, are considered in the evaluation of the effectiveness and performance of the proposed methodology. Methods: PSF reconstruction is based on a set of star images, periodically acquired and presenting relative subpixel displacements due to motion of the acquisition system, in this case the jitter of the satellite attitude. Higher resolution is achieved through the solution of the inverse problem. The approach can be regarded as a special application of super-resolution techniques, though a specialised procedure is proposed to better meet the specificities of the PSF determination problem. The application of such a model to fitting photometry is illustrated by numerical simulations for COROT and on a complete set of observations from MOST. Results: We verify that, in both scenarios, significantly better resolved PSFs can be estimated, leading to corresponding improvements in photometric results. 
For COROT, subpixel reconstruction enabled the successful use of fitting algorithms despite its rather complex PSF profile, which could hardly be modeled otherwise. For MOST, whose direct-imaging PSF is closer to the ordinary, comparisons to other models and photometry techniques were carried out and confirmed the potential of PSF reconstruction in real observational conditions.
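
    The core super-resolution idea, recovering a finely sampled PSF from several coarsely sampled, sub-pixel-shifted acquisitions, can be sketched in an idealized noiseless 1D form. Real jitter displacements are irregular and noisy, so the paper solves an inverse problem rather than the simple interlacing assumed here:

```python
import numpy as np

# True finely sampled PSF (unknown in practice; a Gaussian chosen for clarity).
fine_grid = np.linspace(-3.0, 3.0, 40)
psf_true = np.exp(-fine_grid**2)

# Four undersampled acquisitions, each keeping every 4th sample with a
# different sub-pixel phase (stand-ins for attitude-jitter displacements).
frames = {shift: psf_true[shift::4] for shift in range(4)}

# Interlace the dithered frames back onto the fine grid.
psf_rec = np.empty_like(psf_true)
for shift, frame in frames.items():
    psf_rec[shift::4] = frame
```

    With ideal phases the interlaced frames tile the fine grid exactly; with arbitrary jitter the same information is spread across a linear system whose least-squares solution plays the role of `psf_rec`.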

  6. Weak data do not make a free lunch, only a cheap meal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, Zhipu; Rajashankar, Kanagalaghatta; Dauter, Zbigniew

    2014-01-17

    Four data sets were processed at resolutions significantly exceeding the criteria traditionally used for estimating the diffraction data resolution limit. The analysis of these data and the corresponding model-quality indicators suggests that the criteria of resolution limits widely adopted in the past may be somewhat conservative. Various parameters, such as Rmerge and I/σ(I), optical resolution and the correlation coefficients CC1/2 and CC*, can be used for judging the internal data quality, whereas the reliability factors R and Rfree as well as the maximum-likelihood target values and real-space map correlation coefficients can be used to estimate the agreement between the data and the refined model. However, none of these criteria provide a reliable estimate of the data resolution cutoff limit. The analysis suggests that extension of the maximum resolution by about 0.2 Å beyond the currently adopted limit where the I/σ(I) value drops to 2.0 does not degrade the quality of the refined structural models, but may sometimes be advantageous. Such an extension may be particularly beneficial for significantly anisotropic diffraction. Extension of the maximum resolution at the stage of data collection and structure refinement is cheap in terms of the required effort and is definitely more advisable than accepting a too conservative resolution cutoff, which is unfortunately quite frequent among the crystal structures deposited in the Protein Data Bank.
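
    The conventional cutoff being questioned here is easy to state in code: scan resolution shells from low to high resolution and stop where the shell-averaged I/σ(I) falls below 2.0. A sketch with invented shell statistics (the paper argues data roughly 0.2 Å beyond this cutoff can still help):

```python
import numpy as np

def resolution_cutoff(d_spacing, i_over_sigma, threshold=2.0):
    """Conventional data-resolution cutoff: scan shells from low to high
    resolution and return the last d-spacing whose shell-averaged
    I/sigma(I) is still at or above the threshold."""
    order = np.argsort(d_spacing)[::-1]           # large d (low resolution) first
    d_sorted = np.asarray(d_spacing, float)[order]
    snr_sorted = np.asarray(i_over_sigma, float)[order]
    cutoff = d_sorted[0]
    for d, snr in zip(d_sorted, snr_sorted):
        if snr < threshold:
            break
        cutoff = d
    return cutoff

# Invented shell statistics (d in Angstrom, shell-averaged I/sigma(I)).
shells_d = [4.0, 3.0, 2.5, 2.2, 2.0, 1.9, 1.8]
shells_snr = [40.0, 25.0, 12.0, 6.0, 2.4, 1.6, 0.9]
cutoff = resolution_cutoff(shells_d, shells_snr)
```

    For these invented shells the I/σ(I) = 2.0 rule stops at 2.0 Å; lowering the threshold (here to 1.5) admits one more shell, which is the kind of modest extension the abstract finds harmless or beneficial.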

  7. Review and Analysis of Peak Tracking Techniques for Fiber Bragg Grating Sensors

    PubMed Central

    2017-01-01

    Fiber Bragg Grating (FBG) sensors are among the most popular elements in fiber optic sensor networks used for the direct measurement of temperature and strain. Modern FBG interrogation setups measure the FBG spectrum in real time and determine the shift of the Bragg wavelength of the FBG in order to estimate the physical parameters. The problem of determining the peak wavelength of the FBG from a spectral measurement limited in resolution and affected by noise is referred to as the peak-tracking problem. In this work, several peak-tracking approaches are reviewed and classified, outlining their algorithmic implementations: methods based on direct estimation, interpolation, correlation, resampling, transforms, and optimization are discussed in all their proposed implementations. Then, a simulation based on coupled-mode theory compares the performance of the main peak-tracking methods in terms of accuracy and signal-to-noise ratio resilience. PMID:29039804
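
    As one concrete instance of the interpolation family of peak-tracking methods, a parabola fitted through the maximum sample and its two neighbours gives a sub-sample peak estimate. The FBG spectrum below is simulated with invented parameters, not the paper's coupled-mode model:

```python
import numpy as np

def parabolic_peak(wavelengths, intensities):
    """Sub-sample peak estimate: fit a parabola through the maximum sample
    and its two neighbours and return the vertex wavelength."""
    i = int(np.argmax(intensities))
    i = min(max(i, 1), len(intensities) - 2)        # keep neighbours in range
    y0, y1, y2 = intensities[i - 1 : i + 2]
    delta = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)  # vertex offset, in samples
    return wavelengths[i] + delta * (wavelengths[1] - wavelengths[0])

# Simulated FBG reflection spectrum, sampled every 10 pm, with a true Bragg
# wavelength that falls between grid points.
wl = np.arange(1549.0, 1551.0, 0.01)            # nm
true_peak = 1550.0037                           # nm
spec = np.exp(-(((wl - true_peak) / 0.05) ** 2))
est = parabolic_peak(wl, spec)
```

    On this noiseless example the parabolic estimate lands within a fraction of a picometre of the true Bragg wavelength, versus a few-picometre error for the raw argmax; the review compares how such gains degrade as noise and resolution limits are added.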

  8. Removable orthodontic appliances: new perspectives on capabilities and efficiency.

    PubMed

    Hamid Zafarmand, A; Mahdi Zafarmand, M

    2013-06-01

    Removable appliances are a dependable choice for many patients but like all orthodontic appliances, they have some limitations in use. Patient selection and appropriate appliance design are two key factors for success. Many patients, especially adults, prefer intra-oral appliances to extra-oral devices. Sometimes a removable intra-oral appliance can solve a dental problem in a shorter period of time compared to fixed treatment, and this has also been repeatedly seen in molar distalisation. From the interceptive perspective, the appliance can prevent or alleviate an impending crowding for erupting permanent incisors. This article describes 5 patients with different orthodontic problems: impending crowding for erupting upper canine with 2 approaches, provision of space for upper cuspids, resolution of chronic attrition of anterior teeth, relief of space shortage for upper canines eruption, and reduction of excess overjet. All subjects were treated with removable appliances of various designs.

  9. Spectral partitioning in equitable graphs.

    PubMed

    Barucca, Paolo

    2017-06-01

    Graph partitioning problems emerge in a wide variety of complex systems, ranging from biology to finance, but can be rigorously analyzed and solved only for a few graph ensembles. Here, an ensemble of equitable graphs, i.e., random graphs with a block-regular structure, is studied, for which analytical results can be obtained. In particular, the spectral density of this ensemble is computed exactly for a modular and bipartite structure. Kesten-McKay's law for random regular graphs is found analytically to apply also for modular and bipartite structures when blocks are homogeneous. An exact solution to graph partitioning for two equal-sized communities is proposed and verified numerically, and a conjecture on the absence of an efficient recovery detectability transition in equitable graphs is suggested. A final discussion summarizes results and outlines their relevance for the solution of graph partitioning problems in other graph ensembles, in particular for the study of detectability thresholds and resolution limits in stochastic block models.
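
    A minimal instance of the spectral approach: for two equal-sized communities, the sign pattern of the Fiedler vector (the eigenvector of the second-smallest Laplacian eigenvalue) recovers the partition. A sketch on an invented two-clique graph, not the equitable-graph ensemble analyzed in the paper:

```python
import numpy as np

def spectral_bisection(adj):
    """Partition a graph into two groups by the sign of the Fiedler vector,
    the eigenvector of the second-smallest Laplacian eigenvalue."""
    laplacian = np.diag(adj.sum(axis=1)) - adj
    _, eigvecs = np.linalg.eigh(laplacian)   # eigenvalues in ascending order
    return eigvecs[:, 1] >= 0.0

# Two 4-node cliques joined by a single edge: the minimum cut separates them.
n = 8
adj = np.zeros((n, n))
adj[:4, :4] = 1.0
adj[4:, 4:] = 1.0
np.fill_diagonal(adj, 0.0)
adj[3, 4] = adj[4, 3] = 1.0
labels = spectral_bisection(adj)
```

    On this clean example the bisection is exact; the detectability and resolution-limit questions discussed above concern precisely when such sign information survives as the community structure weakens relative to the random background.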

  10. Spectral partitioning in equitable graphs

    NASA Astrophysics Data System (ADS)

    Barucca, Paolo

    2017-06-01

    Graph partitioning problems emerge in a wide variety of complex systems, ranging from biology to finance, but can be rigorously analyzed and solved only for a few graph ensembles. Here, an ensemble of equitable graphs, i.e., random graphs with a block-regular structure, is studied, for which analytical results can be obtained. In particular, the spectral density of this ensemble is computed exactly for a modular and bipartite structure. Kesten-McKay's law for random regular graphs is found analytically to apply also for modular and bipartite structures when blocks are homogeneous. An exact solution to graph partitioning for two equal-sized communities is proposed and verified numerically, and a conjecture on the absence of an efficient recovery detectability transition in equitable graphs is suggested. A final discussion summarizes results and outlines their relevance for the solution of graph partitioning problems in other graph ensembles, in particular for the study of detectability thresholds and resolution limits in stochastic block models.

  11. Guest Editorial Image Quality

    NASA Astrophysics Data System (ADS)

    Cheatham, Patrick S.

    1982-02-01

    The term image quality can, unfortunately, apply to anything from a public relations firm's discussion to a comparison between corner drugstores' film processing. If we narrow the discussion to optical systems, we clarify the problem somewhat, but only slightly. We are still faced with a multitude of image quality measures all different, and all couched in different terminology. Optical designers speak of MTF values, digital processors talk about summations of before and after image differences, pattern recognition engineers allude to correlation values, and radar imagers use side-lobe response values measured in decibels. Further complexity is introduced by terms such as information content, bandwidth, Strehl ratios, and, of course, limiting resolution. The problem is to compare these different yardsticks and try to establish some concrete ideas about evaluation of a final image. We need to establish the image attributes which are the most important to perception of the image in question and then begin to apply the different system parameters to those attributes.

  12. A Very High Order, Adaptable MESA Implementation for Aeroacoustic Computations

    NASA Technical Reports Server (NTRS)

    Dydson, Roger W.; Goodrich, John W.

    2000-01-01

    Since computational efficiency and wave resolution scale with accuracy, the ideal would be infinitely high accuracy for problems with widely varying wavelength scales. Currently, many computational aeroacoustics methods are limited to 4th-order accurate Runge-Kutta methods in time, which limits their resolution and efficiency. However, a new procedure for implementing the Modified Expansion Solution Approximation (MESA) schemes, based upon Hermitian divided differences, is presented, which extends the effective accuracy of the MESA schemes to 57th order in space and time when using 128-bit floating point precision. This new approach has the advantages of reducing round-off error and being easy to program, and it is more computationally efficient than previous approaches. Its accuracy is limited only by the floating point hardware. The advantages of this new approach are demonstrated by solving the linearized Euler equations in an open bi-periodic domain. A 500th-order MESA scheme can now be created in seconds, making these schemes ideally suited for the next generation of high performance 256-bit (double quadruple) or higher precision computers. This ease of creation makes it possible to adapt the algorithm to the mesh in time instead of its converse, which is ideal for resolving the varying wavelength scales that occur in noise generation simulations. Finally, the sources of round-off error that affect very high order methods are examined and remedies provided that effectively increase the accuracy of the MESA schemes while using current computer technology.

  13. Applying petrophysical models to radar travel time and electrical resistivity tomograms: Resolution-dependent limitations

    USGS Publications Warehouse

    Day-Lewis, F. D.; Singha, K.; Binley, A.M.

    2005-01-01

    Geophysical imaging has traditionally provided qualitative information about geologic structure; however, there is increasing interest in using petrophysical models to convert tomograms to quantitative estimates of hydrogeologic, mechanical, or geochemical parameters of interest (e.g., permeability, porosity, water content, and salinity). Unfortunately, petrophysical estimation based on tomograms is complicated by limited and variable image resolution, which depends on (1) measurement physics (e.g., electrical conduction or electromagnetic wave propagation), (2) parameterization and regularization, (3) measurement error, and (4) spatial variability. We present a framework to predict how core-scale relations between geophysical properties and hydrologic parameters are altered by the inversion, which produces smoothly varying pixel-scale estimates. We refer to this loss of information as "correlation loss." Our approach upscales the core-scale relation to the pixel scale using the model resolution matrix from the inversion, random field averaging, and spatial statistics of the geophysical property. Synthetic examples evaluate the utility of radar travel time tomography (RTT) and electrical-resistivity tomography (ERT) for estimating water content. This work provides (1) a framework to assess tomograms for geologic parameter estimation and (2) insights into the different patterns of correlation loss for ERT and RTT. Whereas ERT generally performs better near boreholes, RTT performs better in the interwell region. Application of petrophysical models to the tomograms in our examples would yield misleading estimates of water content. Although the examples presented illustrate the problem of correlation loss in the context of near-surface geophysical imaging, our results have clear implications for quantitative analysis of tomograms for diverse geoscience applications. Copyright 2005 by the American Geophysical Union.
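
    The correlation-loss effect can be illustrated with a toy 1D version of the framework: apply a model resolution matrix R (here a simple moving average standing in for the inversion's smoothing) to the true geophysical property and compare core-scale and pixel-scale correlations with the hydrologic parameter. All fields and parameters below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200

# Core-scale geophysical property and a hydrologic parameter tied to it
# through a synthetic petrophysical relation plus local scatter.
geo = rng.standard_normal(n)
hydro = 2.0 * geo + rng.standard_normal(n)

# Model resolution matrix of a smoothing inversion: each pixel-scale
# estimate is a moving average of the true property (rows sum to 1).
width = 15
R = np.zeros((n, n))
for i in range(n):
    lo, hi = max(0, i - width // 2), min(n, i + width // 2 + 1)
    R[i, lo:hi] = 1.0 / (hi - lo)
geo_tomo = R @ geo  # smoothly varying tomogram-scale estimate

def corr(a, b):
    return float(np.corrcoef(a, b)[0, 1])

core_r = corr(geo, hydro)       # core-scale petrophysical correlation
tomo_r = corr(geo_tomo, hydro)  # degraded pixel-scale correlation
```

    The smoothing removes exactly the fine-scale variability that the petrophysical relation depends on, so `tomo_r` falls well below `core_r`: applying the core-scale relation directly to the tomogram would mis-estimate the hydrologic parameter, which is the "correlation loss" the paper quantifies via the resolution matrix.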

  14. Studying the Light Pollution around Urban Observatories: Columbus State University’s WestRock Observatory

    NASA Astrophysics Data System (ADS)

    O'Keeffe, Brendon Andrew; Johnson, Michael

    2017-01-01

    Light pollution plays an ever-increasing role in the operations of observatories across the world. This is especially true in urban environments like Columbus, GA, where Columbus State University’s WestRock Observatory is located. Light pollution’s effects on an observatory include high background levels, which result in a lower signal-to-noise ratio. Overall, this will limit what the telescope can detect, and therefore limit the capabilities of the observatory as a whole. Light pollution has been mapped in Columbus before using VIIRS DNB composites. However, this approach did not provide the detailed resolution required to narrow down the problem areas in the vicinity of the observatory. The purpose of this study is to assess the current state of light pollution surrounding the WestRock Observatory by measuring and mapping the brightness of the sky due to light pollution using light meters and geographic information system (GIS) software. Compared to VIIRS data, this study allows for improved spatial resolution and a direct measurement of the sky background. This assessment will enable future studies to compare their results to the baseline established here, ensuring that any changes in outdoor illumination, and their effects, can be accurately measured and counterbalanced.

  15. Optimal and fast rotational alignment of volumes with missing data in Fourier space.

    PubMed

    Shatsky, Maxim; Arbelaez, Pablo; Glaeser, Robert M; Brenner, Steven E

    2013-11-01

    Electron tomography of intact cells has the potential to reveal the entire cellular content at a resolution corresponding to individual macromolecular complexes. Characterization of macromolecular complexes in tomograms is nevertheless an extremely challenging task due to the high level of noise and the limited tilt angle, which results in missing data in Fourier space. By identifying particles of the same type and averaging their 3D volumes, it is possible to obtain a structure at a more useful resolution for biological interpretation. Currently, classification and averaging of sub-tomograms is limited by the speed of computational methods that optimize alignment between two sub-tomographic volumes. The alignment optimization is hampered by the fact that the missing data in Fourier space have to be taken into account during the rotational search. A similar problem appears in single-particle electron microscopy, where the random conical tilt procedure may require averaging of volumes with a missing cone in Fourier space. We present a fast implementation of a method guaranteed to find an optimal rotational alignment that maximizes the constrained cross-correlation function (cCCF) computed over the actual overlap of data in Fourier space. Copyright © 2013 The Authors. Published by Elsevier Inc. All rights reserved.
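
    The idea of correlating only over the shared Fourier data can be sketched as follows — a minimal NumPy illustration of a constrained cross-correlation, assuming binary sampling masks that are 1 where Fourier data exist and 0 in the missing wedge; the paper's fast rotational-search implementation is not reproduced here:

```python
import numpy as np

def ccc(vol_a, vol_b, mask_a, mask_b):
    """Constrained cross-correlation of two volumes, computed only over
    the overlap of their sampled regions in Fourier space (a sketch of
    the cCCF idea)."""
    overlap = mask_a * mask_b
    fa = np.fft.fftn(vol_a) * overlap
    fb = np.fft.fftn(vol_b) * overlap
    num = np.sum(fa * np.conj(fb)).real
    den = np.sqrt(np.sum(np.abs(fa) ** 2) * np.sum(np.abs(fb) ** 2))
    return num / den if den > 0 else 0.0

# Identical volumes with full sampling correlate perfectly
rng = np.random.default_rng(1)
v = rng.normal(size=(8, 8, 8))
full = np.ones((8, 8, 8))
print(ccc(v, v, full, full))  # -> 1.0 (up to rounding)
```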

  16. A Semi-implicit Method for Resolution of Acoustic Waves in Low Mach Number Flows

    NASA Astrophysics Data System (ADS)

    Wall, Clifton; Pierce, Charles D.; Moin, Parviz

    2002-09-01

    A semi-implicit numerical method for time-accurate simulation of compressible flow is presented. By extending the low Mach number pressure correction method, a Helmholtz equation for pressure is obtained in the case of compressible flow. The method avoids the acoustic CFL limitation, allowing a time step restricted only by the convective velocity and resulting in significant efficiency gains. Use of a discretization that is centered in both time and space results in zero artificial damping of acoustic waves. The method is attractive for problems in which Mach numbers are low, and the acoustic waves of most interest are those having low frequency, such as acoustic combustion instabilities. Both of these characteristics suggest the use of time steps larger than those allowable by an acoustic CFL limitation. In some cases it may be desirable to include a small amount of numerical dissipation to eliminate oscillations due to small-wavelength, high-frequency acoustic modes, which are not of interest; therefore, a provision for doing this in a controlled manner is included in the method. Results of the method for several model problems are presented, and the performance of the method in a large eddy simulation is examined.
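
    The efficiency argument can be made concrete: at low Mach number the acoustic CFL step is smaller than the convective one by roughly a factor of the Mach number. A back-of-the-envelope sketch with assumed values:

```python
# Why avoiding the acoustic CFL limit pays off at low Mach number:
# an explicit scheme needs dt ~ dx/(|u|+c), while a semi-implicit
# treatment of the acoustics needs only dt ~ dx/|u|.
dx = 1.0e-3   # grid spacing, m (assumed)
u = 3.0       # convective velocity, m/s (assumed)
c = 340.0     # speed of sound, m/s
cfl = 0.5

dt_acoustic = cfl * dx / (abs(u) + c)
dt_convective = cfl * dx / abs(u)
speedup = dt_convective / dt_acoustic  # ~ (|u|+c)/|u|, i.e. ~1/Mach
print(dt_acoustic, dt_convective, speedup)
```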

  17. Analysis and design of numerical schemes for gas dynamics 1: Artificial diffusion, upwind biasing, limiters and their effect on accuracy and multigrid convergence

    NASA Technical Reports Server (NTRS)

    Jameson, Antony

    1994-01-01

    The theory of non-oscillatory scalar schemes is developed in this paper in terms of the local extremum diminishing (LED) principle that maxima should not increase and minima should not decrease. This principle can be used for multi-dimensional problems on both structured and unstructured meshes, while it is equivalent to the total variation diminishing (TVD) principle for one-dimensional problems. A new formulation of symmetric limited positive (SLIP) schemes is presented, which can be generalized to produce schemes with arbitrarily high order of accuracy in regions where the solution contains no extrema, and which can also be implemented on multi-dimensional unstructured meshes. Systems of equations lead to waves traveling with distinct speeds and possibly in opposite directions. Alternative treatments using characteristic splitting and scalar diffusive fluxes are examined, together with modification of the scalar diffusion through the addition of pressure differences to the momentum equations to produce full upwinding in supersonic flow. This convective upwind and split pressure (CUSP) scheme exhibits very rapid convergence in multigrid calculations of transonic flow, and provides excellent shock resolution at very high Mach numbers.
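
    The LED principle can be illustrated with a standard minmod-limited upwind scheme for 1D linear advection — a generic textbook example, not the SLIP/CUSP construction of the paper: after any number of steps at CFL ≤ 0.5, the discrete maximum has not increased and the minimum has not decreased.

```python
import numpy as np

def minmod(a, b):
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def advect_step(u, cfl):
    """One forward-Euler step of MUSCL advection (speed +1, periodic BC)
    with a minmod limiter -- a minimal LED/TVD scheme."""
    slope = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)
    flux = u + 0.5 * slope                     # upwind face value at i+1/2
    return u - cfl * (flux - np.roll(flux, 1))

x = np.linspace(0.0, 1.0, 100, endpoint=False)
u = np.where((x > 0.3) & (x < 0.6), 1.0, 0.0)  # step profile
u0_max, u0_min, u0_sum = u.max(), u.min(), u.sum()
for _ in range(50):
    u = advect_step(u, cfl=0.4)
# LED property: no new maxima or minima are created
print(u.max() <= u0_max + 1e-12, u.min() >= u0_min - 1e-12)
```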

  18. Translation-aware semantic segmentation via conditional least-square generative adversarial networks

    NASA Astrophysics Data System (ADS)

    Zhang, Mi; Hu, Xiangyun; Zhao, Like; Pang, Shiyan; Gong, Jinqi; Luo, Min

    2017-10-01

    Semantic segmentation has recently made rapid progress in the fields of remote sensing and computer vision. However, many leading approaches cannot simultaneously translate label maps to possible source images with a limited number of training images. The core issues are insufficient adversarial information to interpret the inverse process and the lack of a proper objective loss function to overcome the vanishing-gradient problem. We propose the use of conditional least-squares generative adversarial networks (CLS-GAN) to delineate visual objects and solve these problems. We trained the CLS-GAN network for semantic segmentation to discriminate dense prediction information either from training images or from generative networks. We show that the optimal objective function of CLS-GAN is a special class of f-divergence and yields a generator that lies on the decision boundary of the discriminator, which mitigates the vanishing-gradient problem. We also demonstrate the effectiveness of the proposed architecture at translating images from label maps during the learning process. Experiments on a limited number of high-resolution images, including close-range and remote sensing datasets, indicate that the proposed method leads to improved semantic segmentation accuracy and can simultaneously generate high-quality images from label maps.
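
    The least-squares adversarial objective that CLS-GAN builds on replaces the usual log loss with a quadratic, so samples far from the decision boundary still receive a useful gradient. A minimal sketch of the generic LSGAN losses (not the full CLS-GAN segmentation objective):

```python
import numpy as np

def lsgan_losses(d_real, d_fake):
    """Least-squares GAN objectives: the discriminator regresses real
    samples toward 1 and fakes toward 0; the generator pushes fakes
    toward 1. d_real/d_fake are discriminator outputs (any shape)."""
    d_loss = 0.5 * np.mean((d_real - 1.0) ** 2) + 0.5 * np.mean(d_fake ** 2)
    g_loss = 0.5 * np.mean((d_fake - 1.0) ** 2)
    return d_loss, g_loss

# A perfectly fooled discriminator (d_fake == 1) gives zero generator loss
d_loss, g_loss = lsgan_losses(np.ones(4), np.ones(4))
print(d_loss, g_loss)  # -> 0.5 0.0
```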

  19. Mastering high resolution tip-enhanced Raman spectroscopy: towards a shift of perception.

    PubMed

    Richard-Lacroix, Marie; Zhang, Yao; Dong, Zhenchao; Deckert, Volker

    2017-07-03

    Recent years have seen tremendous improvement of our understanding of the high resolution reachable in TERS experiments, forcing us to re-evaluate our understanding of the intrinsic limits of this field, but also exposing several inconsistencies. On the one hand, more and more recent experimental results have provided clear indications of spatial resolutions down to a few nanometres or even the subnanometre scale. Moreover, lessons learned from recent theoretical investigations clearly support such high resolutions; conversely, from a purely plasmonic point of view it appears theoretically impossible to avoid high resolution. On the other hand, most of the published TERS results still, to date, claim a resolution on the order of tens of nanometres that would somehow be limited by the tip apex, a statement well accepted for the past two decades. Overall, this now leads the field to a fundamental question: how can this divergence be justified? The answer to this question brings up an equally critical one: how can this gap be bridged? This review aims at raising a fundamental discussion related to the resolution limits of tip-enhanced Raman spectroscopy, at revisiting our comprehension of the factors limiting it, both from a theoretical and an experimental point of view, and at providing indications on how to move the field ahead. It is our belief that a much deeper understanding of the real accessible lateral resolution in TERS and the practical factors that limit it will simultaneously help us to fully explore the potential of this technique for studying nanoscale features in organic, inorganic and biological systems, and to improve both the reproducibility and the accuracy of routine TERS studies. A significant improvement of our comprehension of the accessible resolution in TERS is thus critical for a broad audience, even in contexts where high resolution TERS is not the desired outcome.

  20. Final Technical Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Held, Isaac; V. Balaji; Fueglistaler, Stephan

    We have constructed and analyzed a series of idealized models of tropical convection interacting with large-scale circulations, with 25-50 km resolution and with 1-2 km cloud-resolving resolution, to set the stage for rigorous tests of convection closure schemes in high resolution global climate models. Much of the focus has been on the climatology of tropical cyclogenesis in rotating systems and the related problem of the spontaneous aggregation of convection in non-rotating systems. The PI (Held) will be delivering the honorary Bjerknes lecture at the Fall 2016 AGU meeting in December on this work. We have also provided new analyses of long-standing issues related to the interaction between convection and the large-scale circulation: Kelvin waves in the upper troposphere and lower stratosphere, water vapor transport into the stratosphere, and upper tropospheric temperature trends. The results of these analyses help to improve our understanding of processes and provide tests for future high resolution global modeling. Our final goal of testing new convection schemes in next-generation global atmospheric models at GFDL has been left for future work, owing both to the complexity of the idealized model results (meant as tests for these models) uncovered in this work and to computational resource limitations. 11 papers have been published with support from this grant, 2 are in review, and another major summary paper is in preparation.

  1. High-Speed Microscale Optical Tracking Using Digital Frequency-Domain Multiplexing.

    PubMed

    Maclachlan, Robert A; Riviere, Cameron N

    2009-06-01

    Position-sensitive detectors (PSDs), or lateral-effect photodiodes, are commonly used for high-speed, high-resolution optical position measurement. This paper describes the instrument design for multidimensional position and orientation measurement based on the simultaneous position measurement of multiple modulated sources using frequency-domain-multiplexed (FDM) PSDs. The important advantages of this optical configuration in comparison with laser/mirror combinations are that it has a large angular measurement range and allows the use of a probe that is small in comparison with the measurement volume. We review PSD characteristics and quantitative resolution limits, consider the lock-in amplifier measurement system as a communication link, discuss the application of FDM to PSDs, and make comparisons with time-domain techniques. We consider the phase-sensitive detector as a multirate DSP problem, explore parallels with Fourier spectral estimation and filter banks, discuss how to choose the modulation frequencies and sample rates that maximize channel isolation under design constraints, and describe efficient digital implementation. We also discuss hardware design considerations, sensor calibration, probe construction and calibration, and 3-D measurement by triangulation using two sensors. As an example, we characterize the resolution, speed, and accuracy of an instrument that measures the position and orientation of a 10 mm × 5 mm probe in 5 degrees of freedom (DOF) over a 30-mm cube with 4-μm peak-to-peak resolution at 1-kHz sampling.
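
    The core of FDM demodulation is a digital lock-in: each source is recovered by mixing the summed photocurrent with quadrature references at its modulation frequency and averaging. A minimal sketch with assumed sample rate and bin-aligned modulation frequencies (chosen so the two channels are exactly orthogonal over the record, illustrating the channel-isolation point):

```python
import numpy as np

fs = 100_000.0                 # sample rate, Hz (assumed)
n = 4000                       # record length; fs/n = 25 Hz bin spacing
t = np.arange(n) / fs
f1, f2 = 5_000.0, 7_000.0      # modulation frequencies, bin-aligned (assumed)

# Summed PSD photocurrent from two amplitude-modulated sources
a1, a2 = 0.8, 0.3
sig = a1 * np.sin(2 * np.pi * f1 * t) + a2 * np.sin(2 * np.pi * f2 * t)

def lock_in(sig, f, fs):
    """Digital lock-in: mix with quadrature references at f and average;
    returns the amplitude of the component at frequency f."""
    tt = np.arange(sig.size) / fs
    i = 2.0 * np.mean(sig * np.sin(2 * np.pi * f * tt))
    q = 2.0 * np.mean(sig * np.cos(2 * np.pi * f * tt))
    return np.hypot(i, q)

print(lock_in(sig, f1, fs), lock_in(sig, f2, fs))  # recovers 0.8 and 0.3
```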

  2. 3D undersampled golden-radial phase encoding for DCE-MRA using inherently regularized iterative SENSE.

    PubMed

    Prieto, Claudia; Uribe, Sergio; Razavi, Reza; Atkinson, David; Schaeffter, Tobias

    2010-08-01

    One of the current limitations of dynamic contrast-enhanced MR angiography is the requirement of both high spatial and high temporal resolution. Several undersampling techniques have been proposed to overcome this problem. However, in most of these methods the tradeoff between spatial and temporal resolution is constant for all the time frames and needs to be specified prior to data collection. This is not optimal for dynamic contrast-enhanced MR angiography where the dynamics of the process are difficult to predict and the image quality requirements are changing during the bolus passage. Here, we propose a new highly undersampled approach that allows the retrospective adaptation of the spatial and temporal resolution. The method combines a three-dimensional radial phase encoding trajectory with the golden angle profile order and non-Cartesian Sensitivity Encoding (SENSE) reconstruction. Different regularization images, obtained from the same acquired data, are used to stabilize the non-Cartesian SENSE reconstruction for the different phases of the bolus passage. The feasibility of the proposed method was demonstrated on a numerical phantom and in three-dimensional intracranial dynamic contrast-enhanced MR angiography of healthy volunteers. The acquired data were reconstructed retrospectively with temporal resolutions from 1.2 sec to 8.1 sec, providing a good depiction of small vessels, as well as distinction of different temporal phases.
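
    The golden-angle profile order that enables retrospective selection of the temporal resolution can be sketched directly: rotating each successive profile by 180°/φ ≈ 111.25° (φ the golden ratio) leaves any contiguous run of profiles nearly uniformly spread over the half-circle of angles, so frames of any length remain well sampled.

```python
import numpy as np

phi = (1 + np.sqrt(5)) / 2
golden_angle = 180.0 / phi        # ≈ 111.246 degrees

def max_gap(n):
    """Largest angular gap left by the first n golden-angle profiles."""
    ang = np.sort(np.mod(np.arange(n) * golden_angle, 180.0))
    gaps = np.diff(np.append(ang, ang[0] + 180.0))  # include wrap-around
    return gaps.max()

for n in (8, 13, 21):
    print(n, max_gap(n))  # stays close to the uniform spacing 180/n
```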

  3. Hard-tip, soft-spring lithography.

    PubMed

    Shim, Wooyoung; Braunschweig, Adam B; Liao, Xing; Chai, Jinan; Lim, Jong Kuk; Zheng, Gengfeng; Mirkin, Chad A

    2011-01-27

    Nanofabrication strategies are becoming increasingly expensive and equipment-intensive, and consequently less accessible to researchers. As an alternative, scanning probe lithography has become a popular means of preparing nanoscale structures, in part owing to its relatively low cost and high resolution, and a registration accuracy that exceeds most existing technologies. However, increasing the throughput of cantilever-based scanning probe systems while maintaining their resolution and registration advantages has from the outset been a significant challenge. Even with impressive recent advances in cantilever array design, such arrays tend to be highly specialized for a given application, expensive, and often difficult to implement. It is therefore difficult to imagine commercially viable production methods based on scanning probe systems that rely on conventional cantilevers. Here we describe a low-cost and scalable cantilever-free tip-based nanopatterning method that uses an array of hard silicon tips mounted onto an elastomeric backing. This method, which we term hard-tip, soft-spring lithography, overcomes the throughput problems of cantilever-based scanning probe systems and the resolution limits imposed by the use of elastomeric stamps and tips: it is capable of delivering materials or energy to a surface to create arbitrary patterns of features with sub-50-nm resolution over centimetre-scale areas. We argue that hard-tip, soft-spring lithography is a versatile nanolithography strategy that should be widely adopted by academic and industrial researchers for rapid prototyping applications.

  4. Combining HJ CCD, GF-1 WFV and MODIS Data to Generate Daily High Spatial Resolution Synthetic Data for Environmental Process Monitoring.

    PubMed

    Wu, Mingquan; Huang, Wenjiang; Niu, Zheng; Wang, Changyao

    2015-08-20

    The limitations of satellite data acquisition mean that there is a lack of satellite data with high spatial and temporal resolutions for environmental process monitoring. In this study, we address this problem by applying the Enhanced Spatial and Temporal Adaptive Reflectance Fusion Model (ESTARFM) and the Spatial and Temporal Data Fusion Approach (STDFA) to combine Huanjing satellite charge coupled device (HJ CCD), Gaofen satellite no. 1 wide field of view camera (GF-1 WFV) and Moderate Resolution Imaging Spectroradiometer (MODIS) data to generate daily high spatial resolution synthetic data for land surface process monitoring. Actual HJ CCD and GF-1 WFV data were used to evaluate the precision of the synthetic images using the correlation analysis method. Our method was tested and validated for two study areas in Xinjiang Province, China. The results show that both the ESTARFM and STDFA can be applied to combine HJ CCD and MODIS reflectance data, and GF-1 WFV and MODIS reflectance data, to generate synthetic HJ CCD data and synthetic GF-1 WFV data that closely match actual data with correlation coefficients (r) greater than 0.8989 and 0.8643, respectively. Synthetic red- and near infrared (NIR)-band data generated by ESTARFM are more suitable for the calculation of the Normalized Difference Vegetation Index (NDVI) than the data generated by STDFA.

  5. Combining HJ CCD, GF-1 WFV and MODIS Data to Generate Daily High Spatial Resolution Synthetic Data for Environmental Process Monitoring

    PubMed Central

    Wu, Mingquan; Huang, Wenjiang; Niu, Zheng; Wang, Changyao

    2015-01-01

    The limitations of satellite data acquisition mean that there is a lack of satellite data with high spatial and temporal resolutions for environmental process monitoring. In this study, we address this problem by applying the Enhanced Spatial and Temporal Adaptive Reflectance Fusion Model (ESTARFM) and the Spatial and Temporal Data Fusion Approach (STDFA) to combine Huanjing satellite charge coupled device (HJ CCD), Gaofen satellite no. 1 wide field of view camera (GF-1 WFV) and Moderate Resolution Imaging Spectroradiometer (MODIS) data to generate daily high spatial resolution synthetic data for land surface process monitoring. Actual HJ CCD and GF-1 WFV data were used to evaluate the precision of the synthetic images using the correlation analysis method. Our method was tested and validated for two study areas in Xinjiang Province, China. The results show that both the ESTARFM and STDFA can be applied to combine HJ CCD and MODIS reflectance data, and GF-1 WFV and MODIS reflectance data, to generate synthetic HJ CCD data and synthetic GF-1 WFV data that closely match actual data with correlation coefficients (r) greater than 0.8989 and 0.8643, respectively. Synthetic red- and near infrared (NIR)-band data generated by ESTARFM are more suitable for the calculation of the Normalized Difference Vegetation Index (NDVI) than the data generated by STDFA. PMID:26308017

  6. Numerical methods for systems of conservation laws of mixed type using flux splitting

    NASA Technical Reports Server (NTRS)

    Shu, Chi-Wang

    1990-01-01

    The essentially non-oscillatory (ENO) finite difference scheme is applied to systems of conservation laws of mixed hyperbolic-elliptic type. A flux splitting, with the corresponding Jacobian matrices having real and positive/negative eigenvalues, is used. The hyperbolic ENO operator is applied separately. The scheme is numerically tested on the van der Waals equation in fluid dynamics. Convergence to weak solutions, with good resolution, was observed for various Riemann problems; these solutions are then numerically checked to be admissible as viscosity-capillarity limits. The interesting phenomenon of the shrinking of elliptic regions, if present in the initial conditions, was also observed.
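
    A flux splitting with the split Jacobians having eigenvalues of one sign can be sketched for a scalar law using the global Lax-Friedrichs splitting, a standard choice in ENO practice; the paper's van der Waals system is not reproduced here:

```python
import numpy as np

def lax_friedrichs_split(f, u, alpha):
    """Global Lax-Friedrichs flux splitting f = f+ + f-, with
    df+/du >= 0 and df-/du <= 0 whenever alpha >= max|f'(u)|, so an
    upwind/ENO operator can be applied to each part separately."""
    return 0.5 * (f + alpha * u), 0.5 * (f - alpha * u)

# Burgers flux f(u) = u^2/2 on a few states
u = np.array([-1.0, 0.0, 0.5, 1.0])
f = 0.5 * u ** 2
alpha = np.max(np.abs(u))       # max wave speed, since f'(u) = u
fp, fm = lax_friedrichs_split(f, u, alpha)
print(fp + fm)  # recovers f
```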

  7. A lung sound classification system based on the rational dilation wavelet transform.

    PubMed

    Ulukaya, Sezer; Serbes, Gorkem; Sen, Ipek; Kahya, Yasemin P

    2016-08-01

    In this work, a wavelet-based classification system that aims to discriminate crackle, normal and wheeze lung sounds is presented. While previous works on this problem use constant low-Q-factor wavelets, which have limited frequency resolution and cannot cope with oscillatory signals, the proposed system employs the Rational Dilation Wavelet Transform, whose Q-factor can be tuned. The proposed system yields an accuracy of 95% for crackle, 97% for wheeze, 93.50% for normal and 95.17% across all sound signal types using the energy feature subset, and the proposed approach is superior to conventional low-Q-factor wavelet analysis.

  8. Evaluation of radiometric and geometric characteristics of LANDSAT-D imaging system

    NASA Technical Reports Server (NTRS)

    Bender, L. U.; Podwysocki, M. H.; Rowan, L.; Salisbury, J. (Principal Investigator)

    1983-01-01

    Problems, accomplishments, and significant results associated with the evaluation of the LANDSAT-D thematic mapper system are outlined. The higher resolution (compared with MSS) causes the TM data to approach more closely the quality of high altitude photographs. Thus far, it appears that the data can be used for map inspection and, in certain instances, for limited map revision. Image maps can be made at a scale of 1:100,000 and perhaps up to 1:62,500. It was also shown that TM data can help locate rocks containing minerals with high hydroxyl content, such as clays, gypsum, alunite, and sericite.

  9. REVIEWS OF TOPICAL PROBLEMS: Global phase-stable radiointerferometric systems

    NASA Astrophysics Data System (ADS)

    Dravskikh, A. F.; Korol'kov, Dimitrii V.; Pariĭskiĭ, Yu N.; Stotskiĭ, A. A.; Finkel'steĭn, A. M.; Fridman, P. A.

    1981-12-01

    We discuss from a unified standpoint the possibility of building a phase-stable interferometric system with very long baselines that operates around the clock with real-time data processing. The various problems involved in the realization of this idea are discussed: the methods of suppression of instrumental and tropospheric phase fluctuations, the methods for constructing two-dimensional images and determining the coordinates of radio sources with high angular resolution, and the problem of the optimal structure of the interferometric system. We review in detail the scientific problems from the various branches of natural science (astrophysics, cosmology, geophysics, geodynamics, astrometry, etc.) whose solution requires superhigh angular resolution.

  10. Shot-noise-limited monitoring and phase locking of the motion of a single trapped ion.

    PubMed

    Bushev, P; Hétet, G; Slodička, L; Rotter, D; Wilson, M A; Schmidt-Kaler, F; Eschner, J; Blatt, R

    2013-03-29

    We perform a high-resolution real-time readout of the motion of a single trapped and laser-cooled Ba+ ion. By using an interferometric setup, we demonstrate a shot-noise-limited measurement of thermal oscillations with a resolution of 4 times the standard quantum limit. We apply the real-time monitoring for phase control of the ion motion through a feedback loop, suppressing the photon recoil-induced phase diffusion. Because of the spectral narrowing in the phase-locked mode, the coherent ion oscillation is measured with a resolution of about 0.3 times the standard quantum limit.

  11. NEW INSIGHTS INTO THE PROBLEM OF THE SURFACE GRAVITY DISTRIBUTION OF COOL DA WHITE DWARFS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tremblay, P.-E.; Bergeron, P.; Gianninas, A.

    2010-04-01

    We review at length the longstanding problem in the spectroscopic analysis of cool hydrogen-line (DA) white dwarfs (Teff < 13,000 K), where gravities are significantly higher than those found in hotter DA stars. The first solution that has been proposed for this problem is a mild and systematic helium contamination from convective mixing that would mimic the high gravities. We constrain this scenario by determining the helium abundances in six cool DA white dwarfs using high-resolution spectra from the Keck I 10 m telescope. We obtain no detections, with upper limits as low as He/H = 0.04 in some cases. This allows us to put this scenario to rest for good. We also extend our model grid to lower temperatures using improved Stark profiles with non-ideal gas effects from Tremblay and Bergeron and find that the gravity distribution of cool objects remains suspiciously high. Finally, we find that photometric masses are, on average, in agreement with expected values, and that the high-log g problem is so far unique to the spectroscopic approach.

  12. Stochastic Downscaling of Digital Elevation Models

    NASA Astrophysics Data System (ADS)

    Rasera, Luiz Gustavo; Mariethoz, Gregoire; Lane, Stuart N.

    2016-04-01

    High-resolution digital elevation models (HR-DEMs) are extremely important for the understanding of small-scale geomorphic processes in Alpine environments. In the last decade, remote sensing techniques have experienced a major technological evolution, enabling fast and precise acquisition of HR-DEMs. However, sensors designed to measure elevation data still feature different spatial resolution and coverage capabilities. Terrestrial altimetry allows the acquisition of HR-DEMs with centimeter- to millimeter-level precision, but only within small spatial extents and often with dead-ground problems. Conversely, satellite radiometric sensors are able to gather elevation measurements over large areas, but with limited spatial resolution. In the present study, we propose an algorithm to downscale low-resolution satellite-based DEMs using topographic patterns extracted from HR-DEMs derived, for example, from ground-based and airborne altimetry. The method consists of a multiple-point geostatistical simulation technique able to generate high-resolution elevation data from low-resolution digital elevation models (LR-DEMs). Initially, two collocated DEMs with different spatial resolutions serve as an input to construct a database of topographic patterns, which is also used to infer the statistical relationships between the two scales. High-resolution elevation patterns are then retrieved from the database to downscale a LR-DEM through a stochastic simulation process. The output of the simulations is a set of equally probable DEMs with higher spatial resolution that also depict the large-scale geomorphic structures present in the original LR-DEM. As these multiple models reflect the uncertainty related to the downscaling, they can be employed to quantify the uncertainty of phenomena that depend on fine topography, such as catchment hydrological processes. The proposed methodology is illustrated for a case study in the Swiss Alps. A swissALTI3D HR-DEM (with 5 m resolution) and an SRTM-derived LR-DEM from the Western Alps are used to downscale an SRTM-based LR-DEM from the eastern part of the Alps. The results show that the method is capable of generating multiple high-resolution synthetic DEMs that reproduce the spatial structure and statistics of the original DEM.

  13. Super-Resolution Scanning Laser Microscopy Based on Virtually Structured Detection

    PubMed Central

    Zhi, Yanan; Wang, Benquan; Yao, Xincheng

    2016-01-01

    Light microscopy plays a key role in biological studies and medical diagnosis. The spatial resolution of conventional optical microscopes is limited to approximately half the wavelength of the illumination light as a result of the diffraction limit. Several approaches—including confocal microscopy, stimulated emission depletion microscopy, stochastic optical reconstruction microscopy, photoactivated localization microscopy, and structured illumination microscopy—have been established to achieve super-resolution imaging. However, none of these methods is suitable for the super-resolution ophthalmoscopy of retinal structures because of laser safety issues and inevitable eye movements. We recently experimentally validated virtually structured detection (VSD) as an alternative strategy to extend the diffraction limit. Without the complexity of structured illumination, VSD provides an easy, low-cost, and phase artifact–free strategy to achieve super-resolution in scanning laser microscopy. In this article we summarize the basic principles of the VSD method, review our demonstrated single-point and line-scan super-resolution systems, and discuss both technical challenges and the potential of VSD-based instrumentation for super-resolution ophthalmoscopy of the retina. PMID:27480461

  14. Deep sea mega-geomorphology: Progress and problems

    NASA Technical Reports Server (NTRS)

    Bryan, W. B.

    1985-01-01

    Historically, marine geologists have always worked with mega-scale morphology. This is a consequence both of the scale of the ocean basins and of the low resolution of the observational remote sensing tools available until very recently. In fact, studies of deep sea morphology have suffered from a serious gap in observational scale. Traditional wide-beam echo sounding gave images on a scale of miles, while deep sea photography has been limited to scales of a few tens of meters. Recent development of modern narrow-beam echo sounding coupled with computer-controlled swath mapping systems, and development of high-resolution deep-towed side-scan sonar, are rapidly filling in the scale gap. These technologies also can resolve morphologic detail on a scale of a few meters or less. As has also been true in planetary imaging projects, the ability to observe phenomena over a range of scales has proved very effective in both defining processes and in placing them in proper context.

  15. A design study for the use of a multiple aperture deployable antenna for soil moisture remote sensing satellite applications

    NASA Technical Reports Server (NTRS)

    Foldes, P.

    1986-01-01

    The instrumentation problems associated with the measurement of soil moisture with a meaningful spatial and temperature resolution at a global scale are addressed. For this goal, only medium-term available, affordable technology will be considered. The study, while limited in scope, will utilize a large-scale antenna structure which is presently being developed as an experimental model. The interface constraints presented by a single Space Transportation System (STS) flight will be assumed. The methodology consists of the following steps: review of science requirements; analysis of the effects of these requirements; presentation of basic system engineering considerations and trade-offs related to orbit parameters, number of spacecraft and their lifetime, observation angles, beamwidth, crossover and swath, coverage percentage, beam quality and resolution, instrument quantities, and integration time; bracketing of the key system characteristics; and development of an electromagnetic design of the antenna-passive radiometer system. Several aperture division combinations and feed array concepts are investigated to achieve maximum feasible performance within the stated STS constraints.

  16. Challenges in quantitative crystallographic characterization of 3D thin films by ACOM-TEM.

    PubMed

    Kobler, A; Kübel, C

    2017-02-01

    Automated crystal orientation mapping for transmission electron microscopy (ACOM-TEM) has become an easy-to-use method for the investigation of crystalline materials and complements other TEM methods by adding local crystallographic information over large areas. It fills the gap between high resolution electron microscopy and electron backscatter diffraction in terms of spatial resolution. Recent investigations showed that spot diffraction ACOM-TEM is a quantitative method with respect to sample parameters like grain size, twin density, orientation density and others. It can even be used in combination with in-situ tensile or thermal testing. However, the current method has limitations. In this paper we discuss some of these challenges and present solutions; e.g., we present an ambiguity filter that reduces the number of pixels with a '180° ambiguity problem'. For that purpose, an ACOM-TEM tilt series of nanocrystalline Pd thin films with overlapping crystallites was acquired and analyzed. Copyright © 2017. Published by Elsevier B.V.

  17. Automated Approach to Very High-Order Aeroacoustic Computations. Revision

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.; Goodrich, John W.

    2001-01-01

    Computational aeroacoustics requires efficient, high-resolution simulation tools. For smooth problems, this is best accomplished with methods of very high order in space and time on small stencils. However, the complexity of highly accurate numerical methods can inhibit their practical application, especially in irregular geometries. This complexity is reduced by using a special form of Hermite divided-difference spatial interpolation on Cartesian grids, and a Cauchy-Kowalewski recursion procedure for time advancement. In addition, a stencil constraint tree reduces the complexity of interpolating grid points that are located near wall boundaries. These procedures are used to automatically develop and implement very high-order methods (> 15) for solving the linearized Euler equations that can achieve less than one grid point per wavelength resolution away from boundaries by including spatial derivatives of the primitive variables at each grid point. The accuracy of stable surface treatments is currently limited to 11th order for grid-aligned boundaries and to 2nd order for irregular boundaries.
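    The Cauchy-Kowalewski recursion named above trades time derivatives for spatial ones. A minimal sketch of the idea for the 1-D advection equation u_t = -c u_x (an illustrative stand-in for the linearized Euler equations; the sine initial condition and all parameter values are assumptions, not the paper's setup):

```python
import math

# For u_t = -c u_x, repeated substitution gives d^k u/dt^k = (-c)^k d^k u/dx^k,
# so a single Taylor series in time yields arbitrarily high temporal order.
def ck_advance(x, c, dt, order):
    """Advance u(x, 0) = sin(x) by one step dt via a temporal Taylor series."""
    u = 0.0
    for k in range(order + 1):
        dk = math.sin(x + k * math.pi / 2)   # k-th spatial derivative of sin(x)
        u += ((-c * dt) ** k / math.factorial(k)) * dk
    return u

x, c, dt = 0.3, 1.0, 0.5
exact = math.sin(x - c * dt)                 # exact advected solution u(x, dt)
approx = ck_advance(x, c, dt, order=15)
print(abs(approx - exact))                   # truncation error, far below 1e-12
```

    A 15th-order expansion already reproduces the exact solution to machine-level accuracy for this step size, which is the appeal of exchanging a multi-stage time integrator for one recursion.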

  18. Health courts: an alternative to traditional tort law.

    PubMed

    Miller, Lisa A

    2011-01-01

    The current adversarial, tort-based system of adjudicating malpractice claims is flawed. Alternate methods of compensation for birth injuries related to oxygen deprivation or mechanical injury are being utilized in Virginia and Florida. Although utilization of both of these schemes is limited, and they are not without problems in application, both have been successful in reducing the number of malpractice claims in the tort system and in reducing malpractice premiums. While the Florida and Virginia programs are primarily focused on compensation, other models outside the US include compensation as well as enhanced dispute resolution and the potential for clinical practice change through peer review. Experts in the fields of law and public policy in the United States have evaluated a variety of approaches and have proposed models for administrative health courts that would provide both compensation and dispute resolution for medical and nursing malpractice claims. These alternative models are based on transparency and disclosure, with just compensation for injuries and opportunities for improvements in patient safety.

  19. Single-snapshot DOA estimation by using Compressed Sensing

    NASA Astrophysics Data System (ADS)

    Fortunati, Stefano; Grasso, Raffaele; Gini, Fulvio; Greco, Maria S.; LePage, Kevin

    2014-12-01

    This paper deals with the problem of estimating the directions of arrival (DOA) of multiple source signals from a single observation vector of array data. In particular, four estimation algorithms based on the theory of compressed sensing (CS) are analyzed: the classical ℓ1 minimization (or Least Absolute Shrinkage and Selection Operator, LASSO), the fast smooth ℓ0 minimization, the Sparse Iterative Covariance-based Estimator (SPICE), and the Iterative Adaptive Approach for Amplitude and Phase Estimation (IAA-APES). Their statistical properties are investigated and compared with the classical Fourier beamformer (FB) in different simulated scenarios. We show that unlike the classical FB, a CS-based beamformer (CSB) has some desirable properties typical of the adaptive algorithms (e.g., Capon and MUSIC) even in the single-snapshot case. Particular attention is devoted to the super-resolution property. Theoretical arguments and simulation analysis provide evidence that a CS-based beamformer can achieve resolution beyond the classical Rayleigh limit. Finally, the theoretical findings are validated by processing a real sonar dataset.
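    A minimal sketch of the single-snapshot ℓ1 idea (not the paper's implementation): a uniform linear array, an on-grid steering dictionary, and a plain ISTA solver for the LASSO problem. The array size, grid spacing, source angles, and regularization weight below are illustrative assumptions:

```python
import numpy as np

# Single-snapshot DOA via l1-regularized least squares (LASSO), solved with
# plain ISTA. ULA with half-wavelength spacing; all parameters illustrative.
def steering_matrix(n_sensors, grid_deg):
    """Array manifold A for a ULA with element spacing d = lambda/2."""
    k = np.arange(n_sensors)[:, None]
    return np.exp(1j * np.pi * k * np.sin(np.deg2rad(grid_deg))[None, :])

def ista_lasso(A, y, lam, n_iter=500):
    """Minimize 0.5*||Ax - y||^2 + lam*||x||_1 by soft-thresholded gradient steps."""
    L = np.linalg.norm(A, 2) ** 2                   # Lipschitz constant of grad
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(n_iter):
        z = x - A.conj().T @ (A @ x - y) / L        # gradient step
        mag = np.abs(z)                             # complex soft threshold:
        x = np.where(mag > lam / L, (1.0 - lam / (L * mag + 1e-15)) * z, 0.0)
    return x

rng = np.random.default_rng(0)
grid = np.arange(-90.0, 90.5, 0.5)                  # DOA search grid (degrees)
A = steering_matrix(16, grid)
true_doas = [-20.0, 23.5]                           # two sources, one snapshot
y = sum(A[:, np.argmin(np.abs(grid - d))] for d in true_doas)
y = y + 0.05 * (rng.standard_normal(16) + 1j * rng.standard_normal(16))

x_hat = ista_lasso(A, y, lam=0.5)
peaks = grid[np.abs(x_hat) > 0.3 * np.abs(x_hat).max()]
print(peaks)                                        # clusters near -20 and 23.5
```

    With a single snapshot the sample covariance is rank one, which is why covariance-based adaptive methods struggle; the sparse formulation sidesteps this by working directly on the one observation vector.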

  20. An Automated Approach to Very High Order Aeroacoustic Computations in Complex Geometries

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.; Goodrich, John W.

    2000-01-01

    Computational aeroacoustics requires efficient, high-resolution simulation tools. For smooth problems, this is best accomplished with methods of very high order in space and time on small stencils. However, the complexity of highly accurate numerical methods can inhibit their practical application, especially in irregular geometries. This complexity is reduced by using a special form of Hermite divided-difference spatial interpolation on Cartesian grids, and a Cauchy-Kowalewski recursion procedure for time advancement. In addition, a stencil constraint tree reduces the complexity of interpolating grid points that are located near wall boundaries. These procedures are used to automatically develop and implement very high order methods (>15) for solving the linearized Euler equations that can achieve less than one grid point per wavelength resolution away from boundaries by including spatial derivatives of the primitive variables at each grid point. The accuracy of stable surface treatments is currently limited to 11th order for grid-aligned boundaries and to 2nd order for irregular boundaries.

  1. Hybrid-coded 3D structured illumination imaging with Bayesian estimation (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Chen, Hsi-Hsun; Luo, Yuan; Singh, Vijay R.

    2016-03-01

    Light-induced fluorescence microscopy has long been developed to observe and understand objects at the microscale, such as cellular samples. However, the transfer function of a lens-based imaging system limits the resolution, so that fine and detailed structures of the sample cannot be identified clearly. Resolution enhancement techniques therefore aim to break the resolution limit of a given objective. In the past decades, resolution-enhanced imaging has been investigated through a variety of strategies, including photoactivated localization microscopy (PALM), stochastic optical reconstruction microscopy (STORM), stimulated emission depletion (STED), and structured illumination microscopy (SIM). Of these methods, only SIM can intrinsically improve the resolution limit of a system without taking the structural properties of the object into account. In this paper, we develop a SIM method combined with Bayesian estimation and, furthermore, with the optical sectioning capability rendered by HiLo processing, resulting in high resolution throughout a 3D volume. This 3D SIM provides optical sectioning and resolution enhancement, and is robust to noise owing to the proposed data-driven Bayesian estimation reconstruction. To validate the 3D SIM, we show simulation results of the algorithm and experimental results demonstrating the 3D resolution enhancement.

  2. Printing colour at the optical diffraction limit.

    PubMed

    Kumar, Karthik; Duan, Huigao; Hegde, Ravi S; Koh, Samuel C W; Wei, Jennifer N; Yang, Joel K W

    2012-09-01

    The highest possible resolution for printed colour images is determined by the diffraction limit of visible light. To achieve this limit, individual colour elements (or pixels) with a pitch of 250 nm are required, translating into printed images at a resolution of ∼100,000 dots per inch (d.p.i.). However, methods for dispensing multiple colourants or fabricating structural colour through plasmonic structures have insufficient resolution and limited scalability. Here, we present a non-colourant method that achieves bright-field colour prints with resolutions up to the optical diffraction limit. Colour information is encoded in the dimensional parameters of metal nanostructures, so that tuning their plasmon resonance determines the colours of the individual pixels. Our colour-mapping strategy produces images with both sharp colour changes and fine tonal variations, is amenable to large-volume colour printing via nanoimprint lithography, and could be useful in making microimages for security, steganography, nanoscale optical filters and high-density spectrally encoded optical data storage.
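    The pitch-to-resolution conversion quoted above is simple unit arithmetic; a quick check (illustrative code, not from the paper):

```python
# One inch is 25.4 mm = 25.4e6 nm, so a pixel pitch of 250 nm corresponds
# to 25.4e6 / 250 = 101,600 dots per inch, i.e. the ~100,000 d.p.i. cited.
NM_PER_INCH = 25.4e6

def dpi_from_pitch_nm(pitch_nm):
    return NM_PER_INCH / pitch_nm

print(dpi_from_pitch_nm(250))   # 101600.0
```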

  3. Super-resolution imaging of subcortical white matter using stochastic optical reconstruction microscopy (STORM) and super-resolution optical fluctuation imaging (SOFI).

    PubMed

    Hainsworth, A H; Lee, S; Foot, P; Patel, A; Poon, W W; Knight, A E

    2018-06-01

    The spatial resolution of light microscopy is limited by the wavelength of visible light (the 'diffraction limit', approximately 250 nm). Resolution of sub-cellular structures smaller than this limit is possible with super-resolution methods such as stochastic optical reconstruction microscopy (STORM) and super-resolution optical fluctuation imaging (SOFI). We aimed to resolve subcellular structures (axons, myelin sheaths and astrocytic processes) within intact white matter, using STORM and SOFI. Standard cryostat-cut sections of subcortical white matter from donated human brain tissue and from adult rat and mouse brain were labelled using standard immunohistochemical markers (neurofilament-H, myelin-associated glycoprotein, and glial fibrillary acidic protein, GFAP). Image sequences were processed for STORM (effective pixel size 8-32 nm) and for SOFI (effective pixel size 80 nm). In human, rat and mouse subcortical white matter, high-quality images of axonal neurofilaments, myelin sheaths and filamentous astrocytic processes were obtained. In quantitative measurements, STORM consistently underestimated the width of axons and astrocyte processes (compared with electron microscopy measurements). SOFI provided more accurate width measurements, though with somewhat lower spatial resolution than STORM. Super-resolution imaging of intact cryo-cut human brain tissue is feasible. For quantitation, STORM can underestimate the diameters of thin fluorescent objects; SOFI is more robust. The greatest limitation for super-resolution imaging in brain sections is imposed by sample preparation. We anticipate that improved strategies to reduce autofluorescence and to enhance fluorophore performance will enable rapid expansion of this approach. © 2017 British Neuropathological Society.
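    The ~250 nm figure quoted above is the standard Abbe estimate d = λ/(2·NA); a hedged numeric check (the example wavelengths and apertures are assumptions, not values from the study):

```python
# Abbe diffraction limit: the smallest resolvable distance for emission
# wavelength lambda and objective numerical aperture NA is d = lambda / (2*NA).
def abbe_limit_nm(wavelength_nm, na):
    return wavelength_nm / (2.0 * na)

print(abbe_limit_nm(500, 1.0))   # 250.0 nm -- the figure quoted above
print(abbe_limit_nm(550, 1.4))   # ~196 nm with a high-NA oil objective
```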

  4. Conflict Resolution in Chinese Adolescents' Friendship: Links with Regulatory Focus and Friendship Satisfaction.

    PubMed

    Gao, Qin; Bian, Ran; Liu, Ru-de; He, Yili; Oei, Tian-Po

    2017-04-03

    It is generally acknowledged that people adopt different resolution strategies when facing conflicts with others. However, the mechanisms of conflict resolution are still unclear and under-researched, in particular within the context of Chinese adolescents' same-sex friendships. Thus, the present study investigated for the first time the mediating role of conflict resolution strategies in the relationship between regulatory foci and friendship satisfaction. A total of 653 Chinese adolescents completed the regulatory foci, conflict resolution style, and friendship satisfaction measures. The results of structural equation modeling showed that while promotion focus was positively associated with problem-solving and compliance, prevention focus was positively associated with withdrawal and conflict engagement. In addition, problem-solving mediated the relationship between promotion focus and friendship satisfaction, and conflict engagement mediated the relationship between prevention focus and friendship satisfaction. These findings contribute to understanding Chinese adolescents' use of conflict resolution strategies as well as the relationship between regulatory foci and behavioral strategies in negative situations.

  5. The influence of atmospheric grid resolution in a climate model-forced ice sheet simulation

    NASA Astrophysics Data System (ADS)

    Lofverstrom, Marcus; Liakka, Johan

    2018-04-01

    Coupled climate-ice sheet simulations have been growing in popularity in recent years. Experiments of this type are however challenging, as ice sheets evolve over multi-millennial timescales, which is beyond the practical integration limit of most Earth system models. A common method to increase model throughput is to trade resolution for computational efficiency (compromise accuracy for speed). Here we analyze how the resolution of an atmospheric general circulation model (AGCM) influences the simulation quality in a stand-alone ice sheet model. Four otherwise identical AGCM simulations of the Last Glacial Maximum (LGM) were run at different horizontal resolutions: T85 (1.4°), T42 (2.8°), T31 (3.8°), and T21 (5.6°). These simulations were subsequently used as forcing of an ice sheet model. While the T85 climate forcing reproduces the LGM ice sheets to a high accuracy, the intermediate resolution cases (T42 and T31) fail to build the Eurasian ice sheet. The T21 case fails in both Eurasia and North America. Sensitivity experiments using different surface mass balance parameterizations improve the simulations of the Eurasian ice sheet in the T42 case, but the compromise is a substantial ice buildup in Siberia. The T31 and T21 cases do not improve in the same way in Eurasia, though the latter simulates the continent-wide Laurentide ice sheet in North America. The difficulty of reproducing the LGM ice sheets in the T21 case is in broad agreement with previous studies using low-resolution atmospheric models, and is caused by a substantial deterioration of the model climate between the T31 and T21 resolutions. It is speculated that this deficiency may demonstrate a fundamental problem with using low-resolution atmospheric models in these types of experiments.
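    The truncation-to-resolution pairings quoted above (T85 ≈ 1.4°, T21 ≈ 5.6°, etc.) follow from the common rule of thumb of roughly 3N + 1 longitude points for triangular truncation TN on a quadratic Gaussian grid; a sketch under that assumption (actual model grids are rounded to FFT-friendly sizes):

```python
# Approximate grid spacing for spectral truncation TN, assuming a quadratic
# Gaussian grid with about 3N + 1 longitude points.
def approx_spacing_deg(n):
    return 360.0 / (3 * n + 1)

for n in (85, 42, 31, 21):
    print(f"T{n}: ~{approx_spacing_deg(n):.1f} deg")
# T85 ~1.4, T42 ~2.8, T31 ~3.8, T21 ~5.6 -- matching the values above
```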

  6. The high-resolution regional reanalysis COSMO-REA6

    NASA Astrophysics Data System (ADS)

    Ohlwein, C.

    2016-12-01

    Reanalyses are gaining more and more importance as a source of meteorological information for many purposes and applications. Several global reanalysis projects (e.g., ERA, MERRA, CFSR, JMA) produce and verify these data sets to provide time series that are as long as possible combined with high data quality. Due to spatial resolutions of only 50-70 km and 3-hourly temporal output, they are not suitable for small-scale problems (e.g., regional climate assessment, meso-scale NWP verification, or input for subsequent models such as river runoff simulations). The implementation of regional reanalyses based on a limited-area model along with a data assimilation scheme makes it possible to generate reanalysis data sets with high spatio-temporal resolution. Within the Hans-Ertel-Centre for Weather Research (HErZ), the climate monitoring branch concentrates its efforts on the assessment and analysis of regional climate in Germany and Europe. In joint cooperation with DWD (German Meteorological Service), a high-resolution reanalysis system based on the COSMO model has been developed. The regional reanalysis for Europe matches the domain of the CORDEX EURO-11 specifications, albeit at a higher spatial resolution, i.e., 0.055° (6 km) instead of 0.11° (12 km), and comprises the assimilation of observational data using the existing nudging scheme of COSMO, complemented by a special soil moisture analysis, with boundary conditions provided by ERA-Interim data. The reanalysis data set covers the past 20 years. Extensive evaluation of the reanalysis is performed using independent observations, with special emphasis on precipitation and high-impact weather situations, indicating a better representation of small-scale variability. Further, the evaluation shows an added value of the regional reanalysis with respect to the forcing ERA-Interim reanalysis and compared to a pure high-resolution dynamical downscaling approach without data assimilation.

  7. A high-resolution regional reanalysis for Europe

    NASA Astrophysics Data System (ADS)

    Ohlwein, C.

    2015-12-01

    Reanalyses are gaining more and more importance as a source of meteorological information for many purposes and applications. Several global reanalysis projects (e.g., ERA, MERRA, CFSR, JMA) produce and verify these data sets to provide time series that are as long as possible combined with high data quality. Due to spatial resolutions of only 50-70 km and 3-hourly temporal output, they are not suitable for small-scale problems (e.g., regional climate assessment, meso-scale NWP verification, or input for subsequent models such as river runoff simulations). The implementation of regional reanalyses based on a limited-area model along with a data assimilation scheme makes it possible to generate reanalysis data sets with high spatio-temporal resolution. Within the Hans-Ertel-Centre for Weather Research (HErZ), the climate monitoring branch concentrates its efforts on the assessment and analysis of regional climate in Germany and Europe. In joint cooperation with DWD (German Meteorological Service), a high-resolution reanalysis system based on the COSMO model has been developed. The regional reanalysis for Europe matches the domain of the CORDEX EURO-11 specifications, albeit at a higher spatial resolution, i.e., 0.055° (6 km) instead of 0.11° (12 km), and comprises the assimilation of observational data using the existing nudging scheme of COSMO, complemented by a special soil moisture analysis, with boundary conditions provided by ERA-Interim data. The reanalysis data set covers the past 20 years. Extensive evaluation of the reanalysis is performed using independent observations, with special emphasis on precipitation and high-impact weather situations, indicating a better representation of small-scale variability. Further, the evaluation shows an added value of the regional reanalysis with respect to the forcing ERA-Interim reanalysis and compared to a pure high-resolution dynamical downscaling approach without data assimilation.

  8. Exploring image data assimilation in the prospect of high-resolution satellite data

    NASA Astrophysics Data System (ADS)

    Verron, J. A.; Duran, M.; Gaultier, L.; Brankart, J. M.; Brasseur, P.

    2016-02-01

    Many recent works show the key importance of studying the ocean at fine scales, including the meso- and submesoscales. Satellite observations such as ocean color data provide information on a wide range of scales but do not directly provide information on ocean dynamics. Satellite altimetry provides information on the ocean dynamic topography (SSH), but so far with a limited resolution in space and, even more so, in time. In the near future, however, high-resolution SSH data (e.g. SWOT) will give a vision of the dynamic topography at such fine spatial resolution. This raises some challenging issues for data assimilation in physical oceanography: developing reliable methodologies to assimilate high resolution data, making integrated use of various data sets including biogeochemical data, and, more simply, handling large amounts of data and huge state vectors. In this work, we propose to consider structured information rather than pointwise data. First, we take an image data assimilation approach, studying the feasibility of inverting tracer observations from Sea Surface Temperature and/or Ocean Color datasets to improve the description of mesoscale dynamics provided by altimetric observations. Finite-Size Lyapunov Exponents are used as an image proxy. The inverse problem is formulated in a Bayesian framework and expressed in terms of a cost function measuring the misfit between the two images. Second, we explore the inversion of SWOT-like high resolution SSH data, and more especially the various possible proxies of the actual SSH that could be used to control the ocean circulation at various scales. One focus is on controlling the subsurface ocean from surface-only data. A key point lies in the errors and uncertainties associated with SWOT data.

  9. Study on the Spatial Resolution of Single and Multiple Coincidences Compton Camera

    NASA Astrophysics Data System (ADS)

    Andreyev, Andriy; Sitek, Arkadiusz; Celler, Anna

    2012-10-01

    In this paper we study the image resolution that can be obtained from the Multiple Coincidences Compton Camera (MCCC). The principle of MCCC is based on the simultaneous acquisition of several gamma-rays emitted in cascade from a single nucleus. Contrary to a standard Compton camera, MCCC can theoretically provide the exact location of a radioactive source (based only on the identification of the intersection point of three cones created by a single decay), without complicated tomographic reconstruction. However, practical implementation of the MCCC approach encounters several problems, such as low detection sensitivity, which results in a very low probability of the coincident triple gamma-ray detection necessary for source localization. It is also important to evaluate how the detection uncertainties (finite energy and spatial resolution) influence the identification of the intersection of three cones, and thus the resulting image quality. In this study we investigate how the spatial resolution of images reconstructed using the triple-cone reconstruction (TCR) approach compares to that of images reconstructed from the same data using a standard iterative method based on single cones. Results show that the FWHM for a point source reconstructed with TCR was 20-30% higher than that obtained from standard iterative reconstruction based on the expectation maximization (EM) algorithm and conventional single-cone Compton imaging. Finite energy and spatial resolutions of the MCCC detectors lead to errors in conical surface definitions (“thick” conical surfaces) which are only amplified in image reconstruction when the intersection of three cones is being sought. Our investigations show that, in spite of being conceptually appealing, the identification of the triple-cone intersection constitutes yet another restriction of the multiple coincidence approach, which limits the image resolution that can be obtained with MCCC and the TCR algorithm.

  10. A longitudinal study of the associations among adolescent conflict resolution styles, depressive symptoms, and romantic relationship longevity.

    PubMed

    Ha, Thao; Overbeek, Geertjan; Cillessen, Antonius H N; Engels, Rutger C M E

    2012-10-01

    This study investigated whether adolescents' conflict resolution styles mediated between depressive symptoms and relationship longevity. Data were used from a sample of 80 couples aged 13-19 years old (Mage = 15.48, SD = 1.16). At Time 1, adolescents reported their depressive symptoms and conflict resolution styles. Additionally, time until break-up was assessed. Data were analyzed using actor-partner interdependence models. Results showed no support for conflict resolution styles as mediators. Girls' depressive symptoms were directly related to shorter relationships. Additionally, actor effects were found indicating that boys and girls with more depressive symptoms used negative resolution styles and were less likely to employ positive problem-solving strategies. Finally, one partner effect was found: girls' depressive symptoms related to more positive problem solving in boys. Copyright © 2012 The Foundation for Professionals in Services for Adolescents. Published by Elsevier Ltd. All rights reserved.

  11. Adaptive multi-resolution 3D Hartree-Fock-Bogoliubov solver for nuclear structure

    NASA Astrophysics Data System (ADS)

    Pei, J. C.; Fann, G. I.; Harrison, R. J.; Nazarewicz, W.; Shi, Yue; Thornton, S.

    2014-08-01

    Background: Complex many-body systems, such as triaxial and reflection-asymmetric nuclei, weakly bound halo states, cluster configurations, nuclear fragments produced in heavy-ion fusion reactions, cold Fermi gases, and pasta phases in neutron star crust, are all characterized by large sizes and complex topologies in which many geometrical symmetries characteristic of ground-state configurations are broken. A tool of choice to study such complex forms of matter is an adaptive multi-resolution wavelet analysis. This method has generated much excitement since it provides a common framework linking many diversified methodologies across different fields, including signal processing, data compression, harmonic analysis and operator theory, fractals, and quantum field theory. Purpose: To describe complex superfluid many-fermion systems, we introduce an adaptive pseudospectral method for solving self-consistent equations of nuclear density functional theory in three dimensions, without symmetry restrictions. Methods: The numerical method is based on the multi-resolution and computational harmonic analysis techniques with a multi-wavelet basis. The application of state-of-the-art parallel programming techniques includes sophisticated object-oriented templates which parse the high-level code into distributed parallel tasks with a multi-thread task queue scheduler for each multi-core node. The internode communications are asynchronous. The algorithm is variational and is capable of solving coupled complex-geometric systems of equations adaptively, with functional and boundary constraints, in a finite spatial domain of very large size, limited by existing parallel computer memory. For smooth functions, user-defined finite precision is guaranteed.
Results: The new adaptive multi-resolution Hartree-Fock-Bogoliubov (HFB) solver madness-hfb is benchmarked against a two-dimensional coordinate-space solver hfb-ax that is based on the B-spline technique and a three-dimensional solver hfodd that is based on the harmonic-oscillator basis expansion. Several examples are considered, including the self-consistent HFB problem for spin-polarized trapped cold fermions and the Skyrme-Hartree-Fock (+BCS) problem for triaxial deformed nuclei. Conclusions: The new madness-hfb framework has many attractive features when applied to nuclear and atomic problems involving many-particle superfluid systems. Of particular interest are weakly bound nuclear configurations close to particle drip lines, strongly elongated and dinuclear configurations such as those present in fission and heavy-ion fusion, and exotic pasta phases that appear in neutron star crust.

  12. Signal processing in urodynamics: towards high definition urethral pressure profilometry.

    PubMed

    Klünder, Mario; Sawodny, Oliver; Amend, Bastian; Ederer, Michael; Kelp, Alexandra; Sievert, Karl-Dietrich; Stenzl, Arnulf; Feuer, Ronny

    2016-03-22

    Urethral pressure profilometry (UPP) is used in the diagnosis of stress urinary incontinence (SUI) which is a significant medical, social, and economic problem. Low spatial pressure resolution, common occurrence of artifacts, and uncertainties in data location limit the diagnostic value of UPP. To overcome these limitations, high definition urethral pressure profilometry (HD-UPP) combining enhanced UPP hardware and signal processing algorithms has been developed. In this work, we present the different signal processing steps in HD-UPP and show experimental results from female minipigs. We use a special microtip catheter with high angular pressure resolution and an integrated inclination sensor. Signals from the catheter are filtered and time-correlated artifacts removed. A signal reconstruction algorithm processes pressure data into a detailed pressure image on the urethra's inside. Finally, the pressure distribution on the urethra's outside is calculated through deconvolution. A mathematical model of the urethra is contained in a point-spread-function (PSF) which is identified depending on geometric and material properties of the urethra. We additionally investigate the PSF's frequency response to determine the relevant frequency band for pressure information on the urinary sphincter. Experimental pressure data are spatially located and processed into high resolution pressure images. Artifacts are successfully removed from data without blurring other details. The pressure distribution on the urethra's outside is reconstructed and compared to the one on the inside. Finally, the pressure images are mapped onto the urethral geometry calculated from inclination and position data to provide an integrated image of pressure distribution, anatomical shape, and location. With its advanced sensing capabilities, the novel microtip catheter collects an unprecedented amount of urethral pressure data. 
Through sequential signal processing steps, physicians are provided with detailed information on the pressure distribution in and around the urethra. Therefore, HD-UPP overcomes many current limitations of conventional UPP and offers the opportunity to evaluate urethral structures, especially the sphincter, in context of the correct anatomical location. This could enable the development of focal therapy approaches in the treatment of SUI.
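    The PSF-based deconvolution step described above can be sketched in one dimension; a minimal Wiener-filter example with a synthetic Gaussian PSF (the PSF shape, profile, and noise weighting are illustrative assumptions, not the identified urethral model):

```python
import numpy as np

# Wiener deconvolution: divide by the PSF's transfer function, regularized
# by an assumed noise-to-signal ratio to avoid amplifying noise.
def wiener_deconvolve(measured, psf, nsr=1e-3):
    H = np.fft.fft(psf, n=len(measured))
    G = np.conj(H) / (np.abs(H) ** 2 + nsr)          # Wiener inverse filter
    return np.real(np.fft.ifft(np.fft.fft(measured) * G))

n = 256
x = np.linspace(0.0, 1.0, n)
true_profile = np.exp(-((x - 0.5) / 0.05) ** 2)      # sharp pressure peak

k = np.minimum(np.arange(n), n - np.arange(n))       # circular distance from 0
psf = np.exp(-((k / 4.0) ** 2))
psf /= psf.sum()                                     # normalized Gaussian PSF

measured = np.real(np.fft.ifft(np.fft.fft(true_profile) * np.fft.fft(psf)))
recovered = wiener_deconvolve(measured, psf)
print(np.abs(measured - true_profile).max())         # blurring error
print(np.abs(recovered - true_profile).max())        # much smaller after deconvolution
```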

  13. Hardware problems encountered in solar heating and cooling systems

    NASA Technical Reports Server (NTRS)

    Cash, M.

    1978-01-01

    Numerous problems in the design, production, installation, and operation of solar energy systems are discussed. Hardware problems ranging from the simple to the obscure and complex are described, along with their resolution.

  14. Iterative Region-of-Interest Reconstruction from Limited Data Using Prior Information

    NASA Astrophysics Data System (ADS)

    Vogelgesang, Jonas; Schorr, Christian

    2017-12-01

    In practice, computed tomography and computed laminography applications suffer from incomplete data. In particular, when inspecting large objects with extremely different diameters in longitudinal and transversal directions or when high resolution reconstructions are desired, the physical conditions of the scanning system lead to restricted data and truncated projections, also known as the interior or region-of-interest (ROI) problem. To recover the searched-for density function of the inspected object, we derive a semi-discrete model of the ROI problem that inherently allows the incorporation of geometrical prior information in an abstract Hilbert space setting for bounded linear operators. Assuming that the attenuation inside the object is approximately constant, as for fibre reinforced plastics parts or homogeneous objects where one is interested in locating defects like cracks or porosities, we apply the semi-discrete Landweber-Kaczmarz method to recover the inner structure of the object inside the ROI from the measured data resulting in a semi-discrete iteration method. Finally, numerical experiments for three-dimensional tomographic applications with both an inherent restricted source and ROI problem are provided to verify the proposed method for the ROI reconstruction.
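    The Landweber component of the semi-discrete Landweber-Kaczmarz iteration has a simple fully discrete analogue, x_{k+1} = x_k + ω·Aᵀ(b − A·x_k); a toy sketch on a small linear system (the operator and data below are illustrative, not tomographic):

```python
import numpy as np

# Plain Landweber iteration for a consistent linear system A x = b.
# A relaxation parameter below 1/||A||_2^2 guarantees convergence.
def landweber(A, b, n_iter=2000, relaxation=None):
    if relaxation is None:
        relaxation = 1.0 / np.linalg.norm(A, 2) ** 2   # safe step size
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + relaxation * A.T @ (b - A @ x)         # gradient descent on ||Ax-b||^2
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0], [0.0, 1.0]])     # toy overdetermined operator
x_true = np.array([1.0, -2.0])
b = A @ x_true
x_hat = landweber(A, b)
print(x_hat)                                           # converges to x_true
```

    The Kaczmarz variant sweeps over row blocks (here, sub-problems of the semi-discrete system) instead of applying the full adjoint at once, which is what makes it attractive for large restricted-data geometries.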

  15. Investigation of the limitations of the highly pixelated CdZnTe detector for PET applications

    PubMed Central

    Komarov, Sergey; Yin, Yongzhi; Wu, Heyu; Wen, Jie; Krawczynski, Henric; Meng, Ling-Jian; Tai, Yuan-Chuan

    2016-01-01

    We are investigating the feasibility of a high resolution positron emission tomography (PET) insert device based on the CdZnTe detector with 350 μm anode pixel pitch to be integrated into a conventional animal PET scanner to improve its image resolution. In this paper, we have used a simplified version of the multi pixel CdZnTe planar detector, 5 mm thick with 9 anode pixels only. This simplified 9 anode pixel structure makes it possible to carry out experiments without a complete application-specific integrated circuits readout system that is still under development. Special attention was paid to the double pixel (or charge sharing) detections. The following characteristics were obtained in experiment: energy resolution full-width-at-half-maximum (FWHM) is 7% for single pixel and 9% for double pixel photoelectric detections of 511 keV gammas; timing resolution (FWHM) from the anode signals is 30 ns for single pixel and 35 ns for double pixel detections (for photoelectric interactions only the corresponding values are 20 and 25 ns); position resolution is 350 μm in x,y-plane and ~0.4 mm in depth-of-interaction. The experimental measurements were accompanied by Monte Carlo (MC) simulations to find a limitation imposed by spatial charge distribution. Results from MC simulations suggest the limitation of the intrinsic spatial resolution of the CdZnTe detector for 511 keV photoelectric interactions is 170 μm. The interpixel interpolation cannot recover the resolution beyond the limit mentioned above for photoelectric interactions. However, it is possible to achieve higher spatial resolution using interpolation for Compton scattered events. Energy and timing resolution of the proposed 350 μm anode pixel pitch detector is no better than 0.6% FWHM at 511 keV, and 2 ns FWHM, respectively. 
These MC results should be used as a guide to understand the performance limits of the pixelated CdZnTe detector due to the underlying detection processes, with the understanding of the inherent limitations of MC methods. PMID:23079763

  16. Investigation of the limitations of the highly pixilated CdZnTe detector for PET applications.

    PubMed

    Komarov, Sergey; Yin, Yongzhi; Wu, Heyu; Wen, Jie; Krawczynski, Henric; Meng, Ling-Jian; Tai, Yuan-Chuan

    2012-11-21

    We are investigating the feasibility of a high resolution positron emission tomography (PET) insert device based on the CdZnTe detector with 350 µm anode pixel pitch to be integrated into a conventional animal PET scanner to improve its image resolution. In this paper, we have used a simplified version of the multi-pixel CdZnTe planar detector, 5 mm thick with 9 anode pixels only. This simplified 9-anode-pixel structure makes it possible to carry out experiments without a complete application-specific integrated circuit (ASIC) readout system, which is still under development. Special attention was paid to double pixel (or charge sharing) detections. The following characteristics were obtained in experiment: energy resolution full-width-at-half-maximum (FWHM) is 7% for single pixel and 9% for double pixel photoelectric detections of 511 keV gammas; timing resolution (FWHM) from the anode signals is 30 ns for single pixel and 35 ns for double pixel detections (for photoelectric interactions only, the corresponding values are 20 and 25 ns); position resolution is 350 µm in the x,y-plane and ∼0.4 mm in depth-of-interaction. The experimental measurements were accompanied by Monte Carlo (MC) simulations to find the limitation imposed by the spatial charge distribution. Results from MC simulations suggest the limit on the intrinsic spatial resolution of the CdZnTe detector for 511 keV photoelectric interactions is 170 µm. Interpixel interpolation cannot recover resolution beyond this limit for photoelectric interactions. However, it is possible to achieve higher spatial resolution using interpolation for Compton scattered events. The energy and timing resolutions of the proposed 350 µm anode pixel pitch detector are no better than 0.6% FWHM at 511 keV and 2 ns FWHM, respectively. 
These MC results should be used as a guide to understand the performance limits of the pixelated CdZnTe detector due to the underlying detection processes, with the understanding of the inherent limitations of MC methods.

  17. Computer graphics application in the engineering design integration system

    NASA Technical Reports Server (NTRS)

    Glatt, C. R.; Abel, R. W.; Hirsch, G. N.; Alford, G. E.; Colquitt, W. N.; Stewart, W. A.

    1975-01-01

    The computer graphics aspect of the Engineering Design Integration (EDIN) system and its application to design problems were discussed. Three basic types of computer graphics may be used with the EDIN system for the evaluation of aerospace vehicle preliminary designs: offline graphics systems using vellum-inking or photographic processes, online graphics systems characterized by direct-coupled low cost storage tube terminals with limited interactive capabilities, and a minicomputer-based refresh terminal offering highly interactive capabilities. The offline systems are characterized by high quality (resolution better than 0.254 mm) and slow turnaround (one to four days). The online systems are characterized by low cost, instant visualization of the computer results, slow line speed (300 baud), poor hard copy, and the early limitations on vector graphic input capabilities. The recent acquisition of the Adage 330 Graphic Display system has greatly enhanced the potential for interactive computer aided design.

  18. eGSM: An Extended Sky Model of Diffuse Radio Emission

    NASA Astrophysics Data System (ADS)

    Kim, Doyeon; Liu, Adrian; Switzer, Eric

    2018-01-01

    Both cosmic microwave background and 21cm cosmology observations must contend with astrophysical foreground contaminants in the form of diffuse radio emission. For precise cosmological measurements, these foregrounds must be accurately modeled over the entire sky. Ideally, such full-sky models ought to be primarily motivated by observations. Yet in practice, these observations are limited, with data sets that are observed not only in a heterogeneous fashion, but also over limited frequency ranges. Previously, the Global Sky Model (GSM) took some steps towards solving the problem of incomplete observational data by interpolating over multi-frequency maps using principal component analysis (PCA). In this poster, we present an extended version of the GSM (called eGSM) that includes the following improvements: 1) better zero-level calibration; 2) incorporation of non-uniform survey resolutions and sky coverage; 3) the ability to quantify uncertainties in sky models; and 4) the ability to optimally select spectral models using Bayesian evidence techniques.
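
The PCA interpolation idea behind the GSM can be sketched in a toy model: decompose multi-frequency sky maps into principal spectra, then interpolate those spectra to an unobserved frequency. Everything below is synthetic and illustrative (hypothetical frequencies and component spectra), not the eGSM pipeline itself:

```python
import numpy as np

# Toy sketch of PCA-based spectral interpolation in the spirit of the GSM.
rng = np.random.default_rng(0)
n_pix = 500
freqs = np.array([45.0, 150.0, 408.0, 1420.0])     # MHz, hypothetical surveys

# Synthetic "sky": two spectral components with spatially varying amplitudes
amp1 = rng.lognormal(size=n_pix)
amp2 = 0.3 * rng.lognormal(size=n_pix)
spec1 = (freqs / 408.0) ** -2.6                    # steep synchrotron-like law
spec2 = (freqs / 408.0) ** -2.1                    # flatter component
maps = np.outer(amp1, spec1) + np.outer(amp2, spec2)   # shape (n_pix, n_freq)

# PCA via SVD of the mean-subtracted log-maps (rows: pixels, cols: frequencies)
log_maps = np.log(maps)
mean_spec = log_maps.mean(axis=0)
U, s, Vt = np.linalg.svd(log_maps - mean_spec, full_matrices=False)

# Predict the sky at an unobserved frequency by interpolating the mean
# spectrum and the two leading principal spectra in log-frequency
f_new = 300.0
interp = lambda v: np.interp(np.log(f_new), np.log(freqs), v)
pred_log = interp(mean_spec) + (U[:, :2] * s[:2]) @ np.array(
    [interp(Vt[0]), interp(Vt[1])])
pred_map = np.exp(pred_log)
```

With smooth, nearly power-law spectra, two components capture almost all the variance, and the interpolated map closely tracks the true synthetic sky at 300 MHz.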

  19. Noise Control in Space Shuttle Orbiter

    NASA Technical Reports Server (NTRS)

    Goodman, Jerry R.

    2009-01-01

    Acoustic limits in habitable space enclosures are required to ensure crew safety, comfort, and habitability. Noise control is implemented to ensure compliance with the acoustic requirements. The purpose of this paper is to describe problems with establishing acoustic requirements and noise control efforts, and to present examples of noise control treatments and design applications used in the Space Shuttle Orbiter. Included is the need to implement the design discipline of acoustics early in the design process, and noise control throughout a program, to ensure that limits are met. The use of dedicated personnel to provide expertise and oversight of acoustic requirements and noise control implementation has been shown to be of value in the Space Shuttle Orbiter program. It is concluded that to achieve acceptable and safe noise levels in the crew habitable space, early resolution of acoustic requirements and implementation of effective noise control efforts are needed. Management support of established acoustic requirements and noise control efforts is essential.

  20. Constructing networks with correlation maximization methods.

    PubMed

    Mellor, Joseph C; Wu, Jie; Delisi, Charles

    2004-01-01

    Problems of inference in systems biology are ideally reduced to formulations which can efficiently represent the features of interest. In the case of predicting gene regulation and pathway networks, an important feature which describes connected genes and proteins is the relationship between active and inactive forms, i.e. between the "on" and "off" states of the components. While not optimal at the limits of resolution, these logical relationships between discrete states can often yield good approximations of the behavior in larger complex systems, where exact representation of measurement relationships may be intractable. We explore techniques for extracting binary state variables from measurement of gene expression, and go on to describe robust measures for statistical significance and information that can be applied to many such types of data. We show how statistical strength and information are equivalent criteria in limiting cases, and demonstrate the application of these measures to simple systems of gene regulation.
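
A minimal sketch of the kind of discrete-state analysis described above, assuming a median-split binarization and a permutation test as the measure of statistical strength (illustrative only; the paper's actual measures are more elaborate):

```python
import numpy as np
from itertools import product

# Hypothetical example: binarize two correlated "expression" profiles at
# their medians and score their association with mutual information.
rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = 0.8 * x + 0.6 * rng.normal(size=200)   # correlated partner gene

bx = (x > np.median(x)).astype(int)        # "on"/"off" states
by = (y > np.median(y)).astype(int)

def mutual_information(a, b):
    """Mutual information in bits between two binary state vectors."""
    mi = 0.0
    for va, vb in product((0, 1), repeat=2):
        p_ab = np.mean((a == va) & (b == vb))
        p_a, p_b = np.mean(a == va), np.mean(b == vb)
        if p_ab > 0:
            mi += p_ab * np.log2(p_ab / (p_a * p_b))
    return mi

mi_obs = mutual_information(bx, by)

# Permutation test: how often does a shuffled pairing match the observed MI?
null = [mutual_information(bx, rng.permutation(by)) for _ in range(500)]
p_value = np.mean(np.array(null) >= mi_obs)
```

The permutation p-value is the statistical-strength criterion; in limiting cases it ranks gene pairs the same way the information measure itself does, which is the equivalence the abstract alludes to.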

  1. Shallow Reflection Method for Water-Filled Void Detection and Characterization

    NASA Astrophysics Data System (ADS)

    Zahari, M. N. H.; Madun, A.; Dahlan, S. H.; Joret, A.; Hazreek, Z. A. M.; Mohammad, A. H.; Izzaty, R. A.

    2018-04-01

    Shallow investigation is crucial for characterizing the subsurface voids commonly encountered in civil engineering, and one technique commonly used for this purpose is the seismic-reflection method. An assessment of the effectiveness of such an approach is critical to determine whether the quality of the works meets the prescribed requirements. Conventional quality testing suffers limitations, including limited coverage (both area and depth) and problems with resolution quality. Traditionally, quality assurance measurements use laboratory and in-situ invasive and destructive tests. However, geophysical approaches, which are typically non-invasive and non-destructive, offer a method by which improvement of detection can be measured in a cost-effective way. Of these, seismic reflection has proved useful for assessing void characteristics. This paper evaluates the application of the shallow seismic-reflection method in characterizing the properties of a water-filled void at 0.34 m depth, specifically for the detection and characterization of the void using 2-dimensional tomography.

  2. Quantitative nanoscopy: Tackling sampling limitations in (S)TEM imaging of polymers and composites.

    PubMed

    Gnanasekaran, Karthikeyan; Snel, Roderick; de With, Gijsbertus; Friedrich, Heiner

    2016-01-01

    Sampling limitations in electron microscopy raise the question of whether an analysis is representative of the bulk material, especially when analyzing hierarchical morphologies that extend over multiple length scales. We tackled this problem by automatically acquiring a large series of partially overlapping (S)TEM images with sufficient resolution, which are subsequently stitched together to generate a large-area map using an in-house developed acquisition toolbox (TU/e Acquisition ToolBox) and stitching module (TU/e Stitcher). In addition, we show that quantitative image analysis of the large-scale maps provides representative information that can be related to the synthesis and process conditions of hierarchical materials, which moves electron microscopy analysis towards becoming a bulk characterization tool. We demonstrate the power of such an analysis by examining two different multi-phase materials that are structured over multiple length scales.
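
One core step of such stitching, estimating the translation between two overlapping tiles, can be sketched with FFT phase correlation (synthetic data; the TU/e Stitcher's actual algorithm may differ):

```python
import numpy as np

# Two tiles cut from a synthetic "scene" with a known relative shift.
rng = np.random.default_rng(2)
scene = rng.random((256, 256))

tile_a = scene[:128, :128]
tile_b = scene[40:168, 24:152]          # overlaps tile_a, shifted by (40, 24)

# Phase correlation: normalized cross-power spectrum, inverse FFT,
# peak location gives the translation between the tiles.
fa, fb = np.fft.fft2(tile_a), np.fft.fft2(tile_b)
cross = fa * np.conj(fb)
corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
dy, dx = np.unravel_index(np.argmax(corr), corr.shape)

# Map the circular peak index to a signed shift in [-64, 64)
shift = ((dy + 64) % 128 - 64, (dx + 64) % 128 - 64)
```

Repeating this for every overlapping pair and solving for globally consistent tile positions yields the large-area map; the normalization step makes the peak sharp even when only part of each tile overlaps.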

  3. Airplane detection based on fusion framework by combining saliency model with Deep Convolutional Neural Networks

    NASA Astrophysics Data System (ADS)

    Dou, Hao; Sun, Xiao; Li, Bin; Deng, Qianqian; Yang, Xubo; Liu, Di; Tian, Jinwen

    2018-03-01

    Aircraft detection from very high resolution remote sensing images has gained increasing interest in recent years due to successful civil and military applications. However, several problems still exist: 1) how to extract the high-level features of aircraft; 2) locating objects within such large images is difficult and time-consuming; 3) the common problem of multiple resolutions of satellite images remains. In this paper, inspired by the biological visual mechanism, a fusion detection framework is proposed, which fuses a top-down visual mechanism (a deep CNN model) and a bottom-up visual mechanism (GBVS) to detect aircraft. In addition, we use a multi-scale training method for the deep CNN model to solve the problem of multiple resolutions. Experimental results demonstrate that our method achieves better detection results than the other methods.

  4. Isotope ratio analysis by Orbitrap mass spectrometry

    NASA Astrophysics Data System (ADS)

    Eiler, J. M.; Chimiak, L. M.; Dallas, B.; Griep-Raming, J.; Juchelka, D.; Makarov, A.; Schwieters, J. B.

    2016-12-01

    Several technologies are being developed to examine the intramolecular isotopic structures of molecules (i.e., site-specific and multiple substitution), but various limitations in sample size and type or (for IRMS) resolution have so far prevented the creation of a truly general technique. We will discuss the initial findings of a technique based on Fourier transform mass spectrometry, using the Thermo Scientific Q Exactive GC — an instrument that contains an Orbitrap mass analyzer. Fourier transform mass spectrometry is marked by exceptionally high mass resolutions (the Orbitrap reaches M/ΔM in the range 250,000-1M in the mass range of greatest interest, 50-200 amu). This allows for resolution of a large range of nearly isobaric interferences for isotopologues of volatile and semi-volatile compounds (i.e., involving isotopes of H, C, N, O and S). It also provides potential to solve very challenging mass resolution problems for isotopic analysis of other, heavier elements. Both internal and external experimental reproducibilities of isotope ratio analyses using the Orbitrap typically conform to shot-noise limits down to levels of 0.2 ‰ (1SE), and routinely in the range 0.5-1.0 ‰, with similar accuracy when standardized to concurrently run reference materials. Such measurements can be made without modifications to the ion optics of the Q Exactive GC, but do require specially designed sample introduction devices to permit sample/standard comparison and long integration times. The sensitivity of the Q Exactive GC permits analysis of sub-nanomolar samples and quantification of multiply-substituted species. The site-specific capability of this instrument arises from the fact that mass spectra of molecular analytes commonly contain diverse fragment ion species, each of which samples a specific sub-set of molecular sites. 
We will present applications of this technique to the biological and abiological chemistry of amino acids, forensic identification of hydrocarbon environmental pollutants, and study of the origins of isotope anomalies in meteoritic organics.

  5. Global habitat suitability for framework-forming cold-water corals.

    PubMed

    Davies, Andrew J; Guinotte, John M

    2011-04-15

    Predictive habitat models are increasingly being used by conservationists, researchers and governmental bodies to identify vulnerable ecosystems and species' distributions in areas that have not been sampled. However, in the deep sea, several limitations have restricted the widespread utilisation of this approach. These range from issues with the accuracy of species presences, the lack of reliable absence data and the limited spatial resolution of environmental factors known or thought to control deep-sea species' distributions. To address these problems, global habitat suitability models have been generated for five species of framework-forming scleractinian corals by taking the best available data and using a novel approach to generate high resolution maps of seafloor conditions. High-resolution global bathymetry was used to resample gridded data from sources such as World Ocean Atlas to produce continuous 30-arc second (∼1 km²) global grids for environmental, chemical and physical data of the world's oceans. The increased area and resolution of the environmental variables resulted in a greater number of coral presence records being incorporated into habitat models and higher accuracy of model predictions. The most important factors in determining cold-water coral habitat suitability were depth, temperature, aragonite saturation state and salinity. Model outputs indicated the majority of suitable coral habitat is likely to occur on the continental shelves and slopes of the Atlantic, South Pacific and Indian Oceans. The North Pacific has very little suitable scleractinian coral habitat. Numerous small scale features (i.e., seamounts), which have not been sampled or identified as having a high probability of supporting cold-water coral habitat were identified in all ocean basins. 
Field validation of newly identified areas is needed to determine the accuracy of model results, assess the utility of modelling efforts to identify vulnerable marine ecosystems for inclusion in future marine protected areas and reduce coral bycatch by commercial fisheries.
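
The resampling of coarse gridded atlas data onto a finer grid, as described above, can be illustrated with a minimal bilinear scheme (synthetic values and a hypothetical refinement factor; the study's actual procedure and datasets differ):

```python
import numpy as np

# Coarse environmental grid (e.g. degree-resolution temperature), synthetic
coarse = np.array([[4.0, 5.0, 6.0],
                   [5.0, 6.5, 8.0],
                   [6.0, 8.0, 10.0]])

def bilinear_resample(grid, factor):
    """Resample a 2-D grid onto a `factor`-times finer cell spacing."""
    ny, nx = grid.shape
    yi = np.linspace(0, ny - 1, factor * (ny - 1) + 1)
    xi = np.linspace(0, nx - 1, factor * (nx - 1) + 1)
    # Lower-left source cell for each output point, clamped at the edges
    y0 = np.minimum(np.floor(yi).astype(int), ny - 2)
    x0 = np.minimum(np.floor(xi).astype(int), nx - 2)
    wy = (yi - y0)[:, None]           # fractional weights, broadcastable
    wx = (xi - x0)[None, :]
    g = grid
    return ((1 - wy) * (1 - wx) * g[y0][:, x0]
            + (1 - wy) * wx * g[y0][:, x0 + 1]
            + wy * (1 - wx) * g[y0 + 1][:, x0]
            + wy * wx * g[y0 + 1][:, x0 + 1])

fine = bilinear_resample(coarse, 4)   # 4x finer grid: shape (9, 9)
```

Interpolating each environmental layer onto a common fine grid is what lets more coral presence records fall on cells with defined predictor values, which is the gain in usable records the abstract reports.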

  6. The impact of map and data resolution on the determination of the agricultural utilisation of organic soils in Germany.

    PubMed

    Roeder, Norbert; Osterburg, Bernhard

    2012-06-01

    Due to its nature, agricultural land use depends on local site characteristics such as production potential, costs and external effects. To assess the relevance of the modifiable areal unit problem (MAUP), we investigated how a change in the resolution of both soil and land use data influences the results obtained for different land use indicators. For the assessment we use the example of the greenhouse gas (GHG) emissions from agriculturally used organic soils (mainly fens and bogs). Although less than 5% of the German agricultural area in use is located on organic soils, the drainage of these areas to enable their agricultural utilization causes roughly 37% of the GHG emissions of the German agricultural sector. The abandonment of cultivation and the rewetting of organic soils would be an effective policy to reduce national GHG emissions. To assess the abatement costs, it is essential to know which commodities, and at what quantities, are actually produced on this land. Furthermore, in order to limit windfall profits, information on the differences in profitability among farms is needed. However, high-resolution data regarding land use and soil characteristics are often not available, and their generation is costly or access is strictly limited because of legal constraints. Therefore, in this paper, we analyse how indicators for land use on organic soils respond to changes in the spatial aggregation of the data. In Germany, organic soils are predominantly used for forage cropping. Marked differences between the various regions of Germany are apparent with respect to the dynamics and the intensity of land use. Data resolution mainly affects the derived extent of agriculturally used peatland and the observed intensity gradient, while its impact on the average value of the investigated set of land-use indicators is generally minor.

  7. Actively Coping with Violation: Exploring Upward Dissent Patterns in Functional, Dysfunctional, and Deserted Psychological Contract End States

    PubMed Central

    Schalk, René; De Ruiter, Melanie; Van Loon, Joost; Kuijpers, Evy; Van Regenmortel, Tine

    2018-01-01

    Recently, scholars have emphasized the importance of examining how employees cope with psychological contract violation and how the coping process contributes to psychological contract violation resolution and post-violation psychological contracts. Recent work points to the important role of problem-focused coping. Yet, to date, problem-focused coping strategies have not been conceptualized on a continuum from constructive to destructive strategies. Consequently, potential differences in the use of specific types of problem-focused coping strategies and the role these different strategies play in the violation resolution process has not been explored. In this study, we stress the importance of focusing on different types of problem-focused coping strategies. We explore how employee upward dissent strategies, conceptualized as different forms of problem-focused coping, contribute to violation resolution and post-violation psychological contracts. Two sources of data were used. In-depth interviews with supervisors of a Dutch car lease company provided 23 case descriptions of employee-supervisor interactions after a psychological contract violation. Moreover, a database with descriptions of Dutch court sentences provided eight case descriptions of employee-organization interactions following a perceived violation. Based on these data sources, we explored the pattern of upward dissent strategies employees used over time following a perceived violation. We distinguished between functional (thriving and reactivation), dysfunctional (impairment and dissolution) and deserted psychological contract end states and explored whether different dissent patterns over time differentially contributed to the dissent outcome (i.e., psychological contract end state). The results of our study showed that the use of problem-focused coping is not as straightforward as suggested by the post-violation model. 
While the post-violation model suggests that problem-focused coping will most likely contribute positively to violation resolution, we found that this also depends on the type of problem-focused coping strategy used. That is, more threatening forms of problem-focused coping (i.e., threatening resignation as a way to trigger one’s manager/organization to resolve the violation) mainly contributed to dysfunctional and deserted PC end states. Yet, in some instances the use of these types of active coping strategies also contributed to functional violation resolution. These findings have important implications for the literature on upward dissent strategies and psychological contract violation repair. PMID:29467692

  8. Actively Coping with Violation: Exploring Upward Dissent Patterns in Functional, Dysfunctional, and Deserted Psychological Contract End States.

    PubMed

    Schalk, René; De Ruiter, Melanie; Van Loon, Joost; Kuijpers, Evy; Van Regenmortel, Tine

    2018-01-01

    Recently, scholars have emphasized the importance of examining how employees cope with psychological contract violation and how the coping process contributes to psychological contract violation resolution and post-violation psychological contracts. Recent work points to the important role of problem-focused coping. Yet, to date, problem-focused coping strategies have not been conceptualized on a continuum from constructive to destructive strategies. Consequently, potential differences in the use of specific types of problem-focused coping strategies and the role these different strategies play in the violation resolution process has not been explored. In this study, we stress the importance of focusing on different types of problem-focused coping strategies. We explore how employee upward dissent strategies, conceptualized as different forms of problem-focused coping, contribute to violation resolution and post-violation psychological contracts. Two sources of data were used. In-depth interviews with supervisors of a Dutch car lease company provided 23 case descriptions of employee-supervisor interactions after a psychological contract violation. Moreover, a database with descriptions of Dutch court sentences provided eight case descriptions of employee-organization interactions following a perceived violation. Based on these data sources, we explored the pattern of upward dissent strategies employees used over time following a perceived violation. We distinguished between functional (thriving and reactivation), dysfunctional (impairment and dissolution) and deserted psychological contract end states and explored whether different dissent patterns over time differentially contributed to the dissent outcome (i.e., psychological contract end state). The results of our study showed that the use of problem-focused coping is not as straightforward as suggested by the post-violation model. 
While the post-violation model suggests that problem-focused coping will most likely contribute positively to violation resolution, we found that this also depends on the type of problem-focused coping strategy used. That is, more threatening forms of problem-focused coping (i.e., threatening resignation as a way to trigger one's manager/organization to resolve the violation) mainly contributed to dysfunctional and deserted PC end states. Yet, in some instances the use of these types of active coping strategies also contributed to functional violation resolution. These findings have important implications for the literature on upward dissent strategies and psychological contract violation repair.

  9. Problems as Possibilities: Problem-Based Learning for K-12 Education.

    ERIC Educational Resources Information Center

    Torp, Linda; Sage, Sara

    Problem-based learning (PBL) is an experiential form of learning centered around the collaborative investigation and resolution of "messy, real-world" problems. This book offers opportunities to learn about problem-based learning from the perspectives of teachers, students, parents, administrators, and curriculum developers. Chapter 1 tells…

  10. Discrete Angle Radiative Transfer in Uniform and Extremely Variable Clouds.

    NASA Astrophysics Data System (ADS)

    Gabriel, Philip Mitri

    The transfer of radiant energy in highly inhomogeneous media is a difficult problem that is encountered in many geophysical applications. It is the purpose of this thesis to study some problems connected with the scattering of solar radiation in natural clouds. Extreme variability in the optical density of these clouds is often believed to occur regularly. In order to facilitate the study of very inhomogeneous optical media such as clouds, the difficult angular part of radiative transfer calculations is simplified by considering a series of models in which conservative scattering occurs only in discrete directions. Analytic and numerical results for the radiative properties of these Discrete Angle Radiative Transfer (DART) systems are obtained in the limits of both optically thin and thick media. Specific results include: (a) In thick homogeneous media, the albedo (reflection coefficient), unlike the transmission, cannot be obtained by a diffusion equation. (b) With the aid of an exact analogy with an early model of conductor/superconductor mixtures, it is argued that in inhomogeneous media with embedded holes, neither the transmission nor the albedo can be described by diffusive random walks. (c) Using renormalization methods, it is shown that thin cloud behaviour is sensitive to the scattering phase functions since it is associated with a repelling fixed point, whereas the thick cloud limit is universal in that it is phase-function independent and associated with an attracting fixed point. (d) In fractal media, the optical thickness required for a given albedo or transmission can differ by large factors from that required in the corresponding plane parallel geometry. The relevant scaling exponents have been calculated in a very simple example. (e) Important global meteorological and climatological implications of the above are discussed when applied to the scattering of visible light in clouds. 
In the remote sensing context, an analysis of satellite data reveals that augmenting a satellite's resolution reveals increasingly detailed structures that are found to occupy a decreasing fraction of the image, while simultaneously brightening to compensate. By systematically degrading the resolution of visible and infrared satellite cloud and surface data as well as radar rain data, resolution-independent co-dimension functions were defined which were useful in describing the spatial distribution of image features as well as the resolution dependence of the intensities themselves. The scale invariant functions so obtained fit into theoretically predicted functional forms. These multifractal techniques have implications for our ability to meaningfully estimate cloud brightness fraction, total cloud amount, as well as other remotely sensed quantities.

  11. High-resolution DEM Effects on Geophysical Flow Models

    NASA Astrophysics Data System (ADS)

    Williams, M. R.; Bursik, M. I.; Stefanescu, R. E. R.; Patra, A. K.

    2014-12-01

    Geophysical mass flow models are numerical models that approximate pyroclastic flow events and can be used to assess the volcanic hazards certain areas may face. One such model, TITAN2D, approximates granular-flow physics based on a depth-averaged analytical model using inputs of basal and internal friction, material volume at a coordinate point, and a GIS in the form of a digital elevation model (DEM). The volume of modeled material propagates over the DEM in a way that is governed by the slope and curvature of the DEM surface and the basal and internal friction angles. Results from TITAN2D are highly dependent upon the inputs to the model. Here we focus on a single input: the DEM, which can vary in resolution. High resolution DEMs are advantageous in that they contain more surface details than lower-resolution models, presumably allowing modeled flows to propagate in a way more true to the real surface. However, very high resolution DEMs can create undesirable artifacts in the slope and curvature that corrupt flow calculations. With high-resolution DEMs becoming more widely available and preferable for use, determining the point at which high resolution data is less advantageous compared to lower resolution data becomes important. We find that in cases of high resolution, integer-valued DEMs, very high-resolution is detrimental to good model outputs when moderate-to-low (<10-15°) slope angles are involved. At these slope angles, multiple adjacent DEM cell elevation values are equal due to the need for the DEM to approximate the low slope with a limited set of integer values for elevation. The first derivative of the elevation surface thus becomes zero. In these cases, flow propagation is inhibited by these spurious zero-slope conditions. 
Here we present evidence for this "terracing effect" from 1) a mathematically defined simulated elevation model, to demonstrate the terracing effects of integer valued data, and 2) a real-world DEM where terracing must be addressed. We discuss the effect on the flow model output and present possible solutions for rectification of the problem.
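
The terracing effect described above can be reproduced numerically: a gentle slope stored as integer elevations produces runs of cells with identical values, whose finite-difference slope is exactly zero (a sketch assuming a hypothetical 10 m cell size):

```python
import numpy as np

# A smooth 3-degree incline sampled along one DEM row
cell = 10.0                                   # metres per DEM cell (assumed)
x = np.arange(50) * cell
true_z = x * np.tan(np.radians(3.0))          # continuous elevation profile
int_z = np.round(true_z)                      # integer-valued DEM elevations

true_slope = np.diff(true_z) / cell           # constant everywhere (~0.052)
int_slope = np.diff(int_z) / cell             # mixture of 0.0 and 0.1 steps

frac_flat = np.mean(int_slope == 0.0)         # fraction of spurious flat cells
```

Nearly half of the cells in this example report zero slope, which is exactly the condition that stalls TITAN2D flow propagation; finer vertical quantization (floating-point elevations) or smoothing removes the artifact.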

  12. Linearized image reconstruction method for ultrasound modulated electrical impedance tomography based on power density distribution

    NASA Astrophysics Data System (ADS)

    Song, Xizi; Xu, Yanbin; Dong, Feng

    2017-04-01

    Electrical resistance tomography (ERT) is a promising measurement technique with important industrial and clinical applications. However, with limited effective measurements, it suffers from poor spatial resolution due to the ill-posedness of the inverse problem. Recently, there has been increasing research interest in hybrid imaging techniques that couple physical modalities, because these techniques obtain much more effective measurement information and promise high resolution. Ultrasound modulated electrical impedance tomography (UMEIT) is one of the newly developed hybrid imaging techniques, combining the electric and acoustic modalities. A linearized image reconstruction method based on power density is proposed for UMEIT. The interior data, the power density distribution, are adopted to reconstruct the conductivity distribution with the proposed image reconstruction method. At the same time, by relating the change in power density to the change in conductivity, the Jacobian matrix is employed to convert the nonlinear problem into a linear one. The analytic formulation of this Jacobian matrix is derived and its effectiveness is verified. In addition, different excitation patterns are tested and analyzed, and opposite excitation provides the best performance with the proposed method. Multiple power density distributions are also combined to implement image reconstruction. Finally, image reconstruction is implemented with the linear back-projection (LBP) algorithm. Compared with ERT, with the proposed image reconstruction method, UMEIT can produce reconstructed images with higher quality and better quantitative evaluation results.
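
As a schematic of the linearization step, assuming the power density takes the σ|∇u|² form used in related hybrid modalities such as acousto-electric tomography (the paper's exact formulation may differ):

```latex
% Power density for conductivity \sigma and electric potential u (assumed form):
H(\sigma) = \sigma\,\lvert\nabla u(\sigma)\rvert^{2}.
% A small conductivity perturbation changes H to first order as
\delta H \;\approx\;
\underbrace{\lvert\nabla u\rvert^{2}\,\delta\sigma}_{\text{direct term}}
\;+\;
\underbrace{2\sigma\,\nabla u\cdot\nabla(\delta u)}_{\text{field perturbation}},
% which, after discretization, defines the Jacobian J:
\delta H = J\,\delta\sigma,
\qquad
\delta\sigma_{\mathrm{LBP}} \;\approx\; J^{\mathsf{T}}\,\delta H
\quad \text{(linear back-projection, up to normalization).}
```

The second term requires solving the perturbed forward problem for δu, which is why the analytic Jacobian the abstract mentions is nontrivial to derive.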

  13. ERIC First Analysis: 1980-81 National High School Debate Resolutions (How Can the Interests of United States Consumers Best Be Served?).

    ERIC Educational Resources Information Center

    Wagner, David L.

    The five chapters of this book are intended to prepare high school debaters and their coaches for the efficient investigation of the 1980-81 High School Debate Problem Area and Resolutions. The first chapter contains an overview of the problem area--consumer interests--describing the basic concepts of regulation and risk, the definitions of the…

  14. Divergence identities in curved space-time a resolution of the stress-energy problem

    NASA Astrophysics Data System (ADS)

    Yilmaz, Hüseyin

    1989-03-01

    It is noted that the joint use of two basic differential identities in curved space-time, namely, 1) the Einstein-Hilbert identity (1915), and 2) the identity of P. Freud (1939), permits a viable alternative to general relativity and a resolution of the "field stress-energy" problem of the gravitational theory. (A tribute to Eugene P. Wigner's 1957 presidential address to the APS)

  15. Wilkinson Microwave Anisotropy Probe (WMAP) Battery Operations Problem Resolution Team (PRT)

    NASA Technical Reports Server (NTRS)

    Keys, Denney J.

    2010-01-01

    The NASA Technical Discipline Fellow for Electrical Power was requested to form a Problem Resolution Team (PRT) to help assess the health of the flight battery currently operating aboard NASA's Wilkinson Microwave Anisotropy Probe (WMAP) and to provide recommendations for battery operations to mitigate the risk of impacting science operations for the rest of the mission. This report contains the outcome of the PRT's assessment.

  16. Camera system resolution and its influence on digital image correlation

    DOE PAGES

    Reu, Phillip L.; Sweatt, William; Miller, Timothy; ...

    2014-09-21

    Digital image correlation (DIC) uses images from a camera and lens system to make quantitative measurements of the shape, displacement, and strain of test objects. Despite the method's increasing popularity, little research has examined how imaging-system resolution influences DIC results. This paper investigates the entire imaging system and studies how both the camera and lens resolution influence the DIC results as a function of the system Modulation Transfer Function (MTF). It shows that when making spatial resolution decisions (including speckle size), the resolution-limiting component should be considered. A consequence of the loss of spatial resolution is that the DIC uncertainties will be increased. This is demonstrated using both synthetic and experimental images with varying resolution. The loss of image resolution and DIC accuracy can be compensated for by increasing the subset size or, better, by increasing the speckle size. The speckle size and spatial resolution are then a function of the lens resolution rather than, as is typically assumed, the pixel size. The study demonstrates the tradeoffs associated with limited lens resolution.

  17. Speckle imaging for planetary research

    NASA Technical Reports Server (NTRS)

    Nisenson, P.; Goody, R.; Apt, J.; Papaliolios, C.

    1983-01-01

    The present study of speckle imaging technique effectiveness encompasses image reconstruction by means of a division algorithm for Fourier amplitudes and the Knox-Thompson (1974) algorithm for Fourier phases. Results obtained for Io, Titan, Pallas, Jupiter and Uranus indicate that spatial resolutions a factor of four finer than the seeing limit are obtainable for objects brighter than Uranus. The resolutions obtained are still well above the diffraction limit, due to inadequacies of the video camera employed. A photon-counting camera has been developed to overcome these difficulties, making possible diffraction-limited resolution of objects as faint as Charon.

  18. Limits to magnetic resonance microscopy

    NASA Astrophysics Data System (ADS)

    Glover, Paul; Mansfield, Peter, Sir

    2002-10-01

    The last quarter of the twentieth century saw the development of magnetic resonance imaging (MRI) grow from a laboratory demonstration to a multi-billion dollar worldwide industry. There is a clinical body scanner in almost every hospital of the developed nations. The field of magnetic resonance microscopy (MRM), after mostly being abandoned by researchers in the first decade of MRI, has become an established branch of the science. This paper reviews the development of MRM over the last decade with an emphasis on the current state of the art. The fundamental principles of imaging and signal detection are examined to determine the physical principles which limit the available resolution. The limits are discussed with reference to liquid, solid and gas phase microscopy. In each area, the novel approaches employed by researchers to push back the limits of resolution are discussed. Although the limits to resolution are well known, the developments and applications of MRM have not reached their limit.

  19. Overcoming a hemihedral twinning problem in tetrahydrofolate-dependent O -demethylase crystals by the microseeding method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harada, Ayaka; Sato, Yukari; Kamimura, Naofumi

    2016-11-30

    A tetrahydrofolate-dependent O-demethylase, LigM, from Sphingobium sp. SYK-6 was crystallized by the hanging-drop vapour-diffusion method. However, the obtained P3₁21 or P3₂21 crystals, which diffracted to 2.5–3.3 Å resolution, were hemihedrally twinned. To overcome the twinning problem, microseeding using P3₁21/P3₂21 crystals as microseeds was performed with optimization of the reservoir conditions. As a result, another crystal form was obtained. The newly obtained crystal diffracted to 2.5–3.0 Å resolution and belonged to space group P2₁2₁2, with unit-cell parameters a = 102.0, b = 117.3, c = 128.1 Å. The P2₁2₁2 crystals diffracted to better than 2.0 Å resolution after optimizing the cryoconditions. Phasing using the single anomalous diffraction method was successful at 3.0 Å resolution with a Pt-derivative crystal. This experience suggests that microseeding is an effective method to overcome the twinning problem, even when twinned crystals are utilized as microseeds.

  1. Theoretical Problems in High Resolution Solar Physics, 2

    NASA Technical Reports Server (NTRS)

    Athay, G. (Editor); Spicer, D. S. (Editor)

    1987-01-01

    The Science Working Group for the High Resolution Solar Observatory (HRSO) laid plans beginning in 1984 for a series of workshops designed to stimulate a broadbased input from the scientific community to the HRSO mission. These workshops have the dual objectives of encouraging an early start on the difficult theoretical problems in radiative transfer, magnetohydrodynamics, and plasma physics that will be posed by the HRSO data, and maintaining current discussions of results in high resolution solar studies. This workshop was the second in the series. The workshop format presented invited review papers during the formal sessions and contributed poster papers for discussions during open periods. Both are presented.

  2. Adaptive optics and interferometry

    NASA Technical Reports Server (NTRS)

    Beichman, Charles A.; Ridgway, Stephen

    1991-01-01

    Adaptive optics and interferometry, two techniques that will improve the limiting resolution of optical and infrared observations by factors of tens or even thousands, are discussed. The real-time adjustment of optical surfaces to compensate for wavefront distortions will improve image quality and increase sensitivity. The phased operation of multiple telescopes separated by large distances will make it possible to achieve very high angular resolution and precise positional measurements. Infrared and optical interferometers that will manipulate light beams and measure interference directly are considered. Angular resolutions of single telescopes will be limited to around 10 milliarcseconds even using the adaptive optics techniques. Interferometry would surpass this limit by a factor of 100 or more. Future telescope arrays with 100-m baselines (resolution of 2.5 milliarcseconds at a 1-micron wavelength) are also discussed.

  3. Optical analysis of a compound quasi-microscope for planetary landers

    NASA Technical Reports Server (NTRS)

    Wall, S. D.; Burcher, E. E.; Huck, F. O.

    1974-01-01

    A quasi-microscope concept, consisting of a facsimile camera augmented with an auxiliary lens as a magnifier, was introduced and analyzed. The performance achievable with this concept is primarily limited by a trade-off between resolution and object field; the approach yields a limiting resolution of 20 microns when used with the Viking lander camera (which has an angular resolution of 0.04 deg). An optical system is analyzed which adds a field lens between the camera and the auxiliary lens to overcome this limitation. It is found that this system, referred to as a compound quasi-microscope, can provide improved resolution (to about 2 microns) and a larger object field. However, this improvement comes at the expense of increased complexity, special camera design requirements, and tighter tolerances on the distances between optical components.

  4. Enhanced Seismic Imaging of Turbidite Deposits in Chicontepec Basin, Mexico

    NASA Astrophysics Data System (ADS)

    Chavez-Perez, S.; Vargas-Meleza, L.

    2007-05-01

    We test, as postprocessing tools, a combination of migration deconvolution and geometric attributes to attack the complex problems of reflector resolution and detection in migrated seismic volumes. Migration deconvolution has been empirically shown to be an effective approach for enhancing the illumination of migrated images, which are blurred versions of the subsurface reflectivity distribution, by decreasing imaging artifacts, improving spatial resolution, and alleviating acquisition footprint problems. We utilize migration deconvolution as a means to improve the quality and resolution of 3D prestack time migrated results from Chicontepec basin, Mexico, a very relevant portion of the producing onshore sector of Pemex, the Mexican petroleum company. Seismic data covers the Agua Fria, Coapechaca, and Tajin fields. It exhibits acquisition footprint problems, migration artifacts and a severe lack of resolution in the target area, where turbidite deposits need to be characterized between major erosional surfaces. Vertical resolution is about 35 m and the main hydrocarbon plays are turbidite beds no more than 60 m thick. We also employ geometric attributes (e.g., coherent energy and curvature), computed after migration deconvolution, to detect and map out depositional features, and help design development wells in the area. Results of this workflow show imaging enhancement and allow us to identify meandering channels and individual sand bodies, previously undistinguishable in the original seismic migrated images.

  5. Resolution on the population and food equation and the search for rational and efficient solutions to the problem of Third World debt to ensure that the world can eat, 9 September 1989.

    PubMed

    1989-01-01

    In September 1989, the 82nd Inter-Parliamentary Conference passed a resolution "on the population and food equation and the search for rational and efficient solutions to the problem of Third World debt to ensure that the world can eat." This document contains major portions of that resolution. In the area of population, the resolution affirms family planning (FP) as a basic human right and affirms the right of governments to establish their own population policies. Governments are asked to provide the educational opportunities necessary to secure equality and rights for women. Service delivery systems should be improved to make FP accessible to the 300 million women in need. Governments should reduce infant and maternal mortality, promote child care and birth spacing, and increase population education activities. The resolution also states that the creation of peaceful conditions for development is an essential precondition for solving the world's problems. In the area of food, the eradication of hunger is designated one of the primary tasks of the international community. This will only occur when developing countries increase their food production and achieve self-sufficiency. Such action is a basic and primary responsibility of developing countries but creditor nations can provide low interest rates for food import assistance and funds to strengthen the agricultural sector. The resolution further considers the problem of developing country debt and deplores coercive measures applied by certain developed countries against developing countries. The resolution contains many suggestions for reducing debt in developing countries and achieving a more equitable distribution of wealth in the world. In the area of food resources and sustainable development, the resolution acknowledges that protection of the environment and the earth's resource base for future generations is a collective responsibility. 
Ecological threats to the production of food should be dealt with, industrialized countries should decrease consumption of natural resources, and food production should be ecologically sound.

  6. New Possibilities for the Accurate in Situ Determination of Chalcophile and Siderophile Trace Elements by Laser Ablation Collision and Reaction Cell ICP-MS

    NASA Astrophysics Data System (ADS)

    Mason, P. R.

    2004-05-01

    Our knowledge of how chalcophile and siderophile elements partition in minerals is limited, mainly due to the lack of suitable techniques for their accurate in situ determination. Host minerals (e.g. sulphides) are typically small (<30 μm) and highly heterogeneous in composition, requiring analysis at high spatial resolution. Concentrations of chalcophile elements in silicates and oxides are low (sub-μg g-1) and thus challenging to measure. Laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS), offering high sensitivity and good spatial resolution (10–100 μm), is thus highly suited for this purpose. Unfortunately, the widespread use of this technique has been limited by problems specific to chalcophile and siderophile elements: inaccuracy due to spectral interferences, elemental fractionation during ablation/ionization, and the lack of suitable calibration standards. Polyatomic spectral interferences, present either as a background component (e.g. O2+, ArAr+) or formed by recombination of matrix elements with argon (e.g. ArS+, ArNi+), hinder accurate analysis. They depend on the relative concentrations of the major matrix components and the trace elements to be measured, and are significant in many relevant minerals (e.g. sulphides). The use of collision and reaction cells in ICP-MS is a new method for selective interference attenuation, significantly improving detection limits for elements such as Fe, S and Se by between 1 and 4 orders of magnitude. ArNi+ and ArCu+ interferences in sulphides can be attenuated by at least an order of magnitude, leading to improved accuracy in measuring the platinum-group elements Rh and Ru. Sulphur isotopes can be measured interference-free at m/z = 32 and 34 by eliminating the O2+ background.
These improvements open up new possibilities for the use of LA-ICP-MS in trace element and isotopic studies at the lowest concentration levels or where sample preparation creates additional problems (e.g. NiS fire assay beads). I will give examples of applications for this technique in the study of ore minerals, meteorites and precipitates from hydrothermal vents.

  7. Bayesian superresolution

    NASA Astrophysics Data System (ADS)

    Isakson, Steve Wesley

    2001-12-01

    Well-known principles of physics explain why resolution restrictions occur in images produced by optical diffraction-limited systems. The limitations involved are present in all diffraction-limited imaging systems, including acoustical and microwave. In most circumstances, however, prior knowledge about the object and the imaging system can lead to resolution improvements. In this dissertation I outline a method to incorporate prior information into the process of reconstructing images to superresolve the object beyond the above limitations. This dissertation research develops the details of this methodology. The approach can provide the most-probable global solution employing a finite number of steps in both far-field and near-field images. In addition, in order to overcome the effects of noise present in any imaging system, this technique provides a weighted image that quantifies the likelihood of various imaging solutions. By utilizing Bayesian probability, the procedure is capable of incorporating prior information about both the object and the noise to overcome the resolution limitation present in many imaging systems. Finally I will present an imaging system capable of detecting the evanescent waves missing from far-field systems, thus improving the resolution further.
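The MAP estimate under Gaussian noise and a Gaussian (smoothness) prior has a closed form, which gives a minimal sketch of the Bayesian idea behind such reconstructions. The blur model, prior, and regularization weight below are illustrative assumptions; the dissertation's global-search and near-field machinery is not reproduced here:

```python
import numpy as np

def map_superresolve(H, y, lam, L):
    """MAP estimate under Gaussian noise and a Gaussian smoothness
    prior: argmin ||y - Hx||^2 + lam * ||Lx||^2, solved in closed form."""
    A = H.T @ H + lam * (L.T @ L)
    return np.linalg.solve(A, H.T @ y)

# Toy 1-D problem: 8 high-res samples observed as 4 blurred low-res ones.
n = 8
x_true = np.zeros(n); x_true[3] = 1.0            # a single bright point
H = np.zeros((n // 2, n))
for i in range(n // 2):                          # 2x downsampling + box blur
    H[i, 2 * i:2 * i + 2] = 0.5
L = np.eye(n) - np.eye(n, k=1)                   # first-difference prior
y = H @ x_true
x_map = map_superresolve(H, y, lam=1e-3, L=L)
# The two high-res pixels covering the true point carry almost all the energy.
print(round(x_map[2] + x_map[3], 2))
```

The prior term makes the otherwise underdetermined system invertible, which is exactly the role prior knowledge plays in the abstract's argument.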

  8. Radar studies of the atmosphere using spatial and frequency diversity

    NASA Astrophysics Data System (ADS)

    Yu, Tian-You

    This work provides results from a thorough investigation of atmospheric radar imaging including theory, numerical simulations, observational verification, and applications. The theory is generalized to include the existing imaging techniques of coherent radar imaging (CRI) and range imaging (RIM), which are shown to be special cases of three-dimensional imaging (3D Imaging). Mathematically, the problem of atmospheric radar imaging is posed as an inverse problem. In this study, the Fourier, Capon, and maximum entropy (MaxEnt) methods are proposed to solve the inverse problem. After the introduction of the theory, numerical simulations are used to test, validate, and exercise these techniques. Statistical comparisons of the three methods of atmospheric radar imaging are presented for various signal-to-noise ratio (SNR), receiver configuration, and frequency sampling. The MaxEnt method is shown to generally possess the best performance for low SNR. The performance of the Capon method approaches the performance of the MaxEnt method for high SNR. In limited cases, the Capon method actually outperforms the MaxEnt method. The Fourier method generally tends to distort the model structure due to its limited resolution. Experimental justification of CRI and RIM is accomplished using the Middle and Upper (MU) Atmosphere Radar in Japan and the SOUnding SYstem (SOUSY) in Germany, respectively. A special application of CRI to the observation of polar mesosphere summer echoes (PMSE) is used to show direct evidence of wave steepening and possibly explain gravity wave variations associated with PMSE.
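The Capon (minimum-variance) estimator mentioned above can be sketched for a one-source, uniform-linear-array case. The array geometry, SNR, and scan grid are illustrative assumptions; the study's actual receiver configurations and the MaxEnt method are not reproduced:

```python
import numpy as np

def capon_spectrum(R, steering):
    """Capon (MVDR) angular power estimate: P(theta) = 1 / (a^H R^-1 a)."""
    Rinv = np.linalg.inv(R)
    return np.array([1.0 / np.real(a.conj() @ Rinv @ a) for a in steering])

# Hypothetical uniform linear array: 6 receivers, half-wavelength spacing.
n, snr = 6, 100.0
angles = np.linspace(-60, 60, 121)              # scan grid in degrees
pos = np.arange(n) * 0.5                        # element positions (wavelengths)
def steer(deg):
    return np.exp(2j * np.pi * pos * np.sin(np.radians(deg)))

a0 = steer(20.0)                                # one source at +20 degrees
R = snr * np.outer(a0, a0.conj()) + np.eye(n)   # signal + noise covariance
P = capon_spectrum(R, [steer(d) for d in angles])
print(angles[P.argmax()])  # peak near +20 degrees
```

Unlike the Fourier method, whose beamwidth is fixed by the aperture, the Capon weights adapt to the data covariance, which is why the abstract finds its performance approaching MaxEnt at high SNR.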

  9. Low-cost, high-resolution scanning laser ophthalmoscope for the clinical environment

    NASA Astrophysics Data System (ADS)

    Soliz, P.; Larichev, A.; Zamora, G.; Murillo, S.; Barriga, E. S.

    2010-02-01

    Researchers have sought to gain greater insight into the mechanisms of the retina and the optic disc at high spatial resolutions that would enable the visualization of small structures such as photoreceptors and nerve fiber bundles. The sources of retinal image quality degradation are aberrations within the human eye, which limit the achievable resolution and the contrast of small image details. To overcome these fundamental limitations, researchers have been applying adaptive optics (AO) techniques to correct for the aberrations. Today, deformable mirror based adaptive optics devices have been developed to overcome the limitations of standard fundus cameras, but at prices that are typically unaffordable for most clinics. In this paper we demonstrate a clinically viable fundus camera with auto-focus and astigmatism correction that is easy to use and has improved resolution. We have shown that removal of low-order aberrations results in significantly better resolution and quality images. Additionally, through the application of image restoration and super-resolution techniques, the images present considerably improved quality. The improvements lead to enhanced visualization of retinal structures associated with pathology.

  10. Breaking the acoustic diffraction limit via nonlinear effect and thermal confinement for potential deep-tissue high-resolution imaging

    PubMed Central

    Yuan, Baohong; Pei, Yanbo; Kandukuri, Jayanth

    2013-01-01

    Our recently developed ultrasound-switchable fluorescence (USF) imaging technique showed that it is feasible to conduct high-resolution fluorescence imaging in a centimeter-deep turbid medium. Because the spatial resolution of this technique depends strongly on the ultrasound-induced temperature focal size (UTFS), minimizing UTFS is important for further improving the spatial resolution of the USF technique. In this study, we found that UTFS can be reduced significantly below the diffraction-limited acoustic intensity focal size via nonlinear acoustic effects and thermal confinement by appropriately controlling ultrasound power and exposure time, which can potentially be used for deep-tissue high-resolution imaging. PMID:23479498

  11. Introduction to the virtual special issue on super-resolution imaging techniques

    NASA Astrophysics Data System (ADS)

    Cao, Liangcai; Liu, Zhengjun

    2017-12-01

    Until quite recently, the resolution of optical imaging instruments, including telescopes, cameras and microscopes, was considered to be limited by the diffraction of light and by image sensors. In the past few years, many exciting super-resolution approaches have emerged that demonstrate intriguing ways to bypass the classical limit in optics and detectors. More and more research groups are engaged in the study of advanced super-resolution schemes, devices, algorithms, systems, and applications [1-6]. Super-resolution techniques involve new methods in science and engineering of optics [7,8], measurements [9,10], chemistry [11,12] and information [13,14]. Promising applications, particularly in biomedical research and semiconductor industry, have been successfully demonstrated.

  12. Perspective: Whither the problem list? Organ-based documentation and deficient synthesis by medical trainees.

    PubMed

    Kaplan, Daniel M

    2010-10-01

    The author argues that the well-formulated problem list is essential for both organizing and evaluating diagnostic thinking. He considers evidence of deficiencies in problem lists in the medical record. He observes a trend among medical trainees toward organizing notes in the medical record according to lists of organ systems or medical subspecialties and hypothesizes that system-based documentation may undermine the art of problem formulation and diagnostic synthesis. Citing research linking more sophisticated problem representation with diagnostic success, he suggests that documentation style and clinical reasoning are closely connected and that organ-based documentation may predispose trainees to several varieties of cognitive diagnostic error and deficient synthesis. These include framing error, premature or absent closure, failure to integrate related findings, and failure to recognize the level of diagnostic resolution attained for a given problem. He acknowledges the pitfalls of higher-order diagnostic resolution, including the application of labels unsupported by firm evidence, while maintaining that diagnostic resolution as far as evidence permits is essential to both rational care of patients and rigorous education of learners. He proposes further research, including comparison of diagnostic efficiency between organ- and problem-oriented thinkers. He hypothesizes that the subspecialty-based structure of academic medical services helps perpetuate organ-system-based thinking, and calls on clinical educators to renew their emphasis on the formulation and documentation of complete and precise problem lists and progressively refined diagnoses by trainees.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balakin, D. A.; Belinsky, A. V., E-mail: belinsky@inbox.ru

    Images formed by light with suppressed photon fluctuations are interesting objects for studies aimed at increasing their limiting information capacity and quality. Light in this sub-Poisson state can be prepared in a resonator filled with a medium with Kerr nonlinearity, in which self-phase modulation takes place. Spatially and temporally multimode light beams are studied and the production of spatial frequency spectra of suppressed photon fluctuations is described. The efficient operation regimes of the system are found. A particular schematic solution is described that realizes, to a maximum degree, the potential of squeezed-state formation during self-phase modulation in a resonator for the maximal suppression of amplitude quantum noise in two-dimensional imaging. The efficiency of using light with suppressed quantum fluctuations for computer image processing is studied. An algorithm is described for interpreting measurements to increase the resolution beyond the geometrical resolution. A mathematical model characterizing the measurement scheme is constructed and the problem of image reconstruction is solved. The algorithm for the interpretation of images is verified. Conditions are found for the efficient application of sub-Poisson light for super-resolution imaging. It is found that the image should have low contrast and be maximally transparent.

  14. Comparison of performance of object-based image analysis techniques available in open source software (Spring and Orfeo Toolbox/Monteverdi) considering very high spatial resolution data

    NASA Astrophysics Data System (ADS)

    Teodoro, Ana C.; Araujo, Ricardo

    2016-01-01

    The use of unmanned aerial vehicles (UAVs) for remote sensing applications is becoming more frequent. However, this type of information can result in several software problems related to the huge amount of data available. Object-based image analysis (OBIA) has proven to be superior to pixel-based analysis for very high-resolution images. The main objective of this work was to explore the potentialities of the OBIA methods available in two different open source software applications, Spring and OTB/Monteverdi, in order to generate an urban land cover map. An orthomosaic derived from UAVs was considered, 10 different regions of interest were selected, and two different approaches were followed. The first one (Spring) uses the region growing segmentation algorithm followed by the Bhattacharya classifier. The second approach (OTB/Monteverdi) uses the mean shift segmentation algorithm followed by the support vector machine (SVM) classifier. Two strategies were followed: four classes were considered using Spring and thereafter seven classes were considered for OTB/Monteverdi. The SVM classifier produces slightly better results and presents a shorter processing time. However, the poor spectral resolution of the data (only RGB bands) is an important factor that limits the performance of the classifiers applied.

  15. Heterodyne laser Doppler distance sensor with phase coding measuring stationary as well as laterally and axially moving objects

    NASA Astrophysics Data System (ADS)

    Pfister, T.; Günther, P.; Nöthen, M.; Czarske, J.

    2010-02-01

    Both in production engineering and process control, multidirectional displacements, deformations and vibrations of moving or rotating components have to be measured dynamically, contactlessly and with high precision. Optical sensors would be predestined for this task, but their measurement rate is often fundamentally limited. Furthermore, almost all conventional sensors measure only one measurand, i.e. either out-of-plane or in-plane distance or velocity. To solve this problem, we present a novel phase coded heterodyne laser Doppler distance sensor (PH-LDDS), which is able to determine out-of-plane (axial) position and in-plane (lateral) velocity of rough solid-state objects simultaneously and independently with a single sensor. Due to the applied heterodyne technique, stationary or purely axially moving objects can also be measured. In addition, it is shown theoretically as well as experimentally that this sensor offers concurrently high temporal resolution and high position resolution since its position uncertainty is in principle independent of the lateral object velocity in contrast to conventional distance sensors. This is a unique feature of the PH-LDDS enabling precise and dynamic position and shape measurements also of fast moving objects. With an optimized sensor setup, an average position resolution of 240 nm was obtained.

  16. Mapping disease at an approximated individual level using aggregate data: a case study of mapping New Hampshire birth defects.

    PubMed

    Shi, Xun; Miller, Stephanie; Mwenda, Kevin; Onda, Akikazu; Reese, Judy; Onega, Tracy; Gui, Jiang; Karagas, Margret; Demidenko, Eugene; Moeschler, John

    2013-09-06

    Limited by data availability, most disease maps in the literature are for relatively large and subjectively-defined areal units, which are subject to problems associated with polygon maps. High resolution maps based on objective spatial units are needed to more precisely detect associations between disease and environmental factors. We propose to use a Restricted and Controlled Monte Carlo (RCMC) process to disaggregate polygon-level location data to achieve mapping aggregate data at an approximated individual level. RCMC assigns a random point location to a polygon-level location, in which the randomization is restricted by the polygon and controlled by the background (e.g., population at risk). RCMC allows analytical processes designed for individual data to be applied, and generates high-resolution raster maps. We applied RCMC to the town-level birth defect data for New Hampshire and generated raster maps at the resolution of 100 m. Besides the map of significance of birth defect risk represented by p-value, the output also includes a map of spatial uncertainty and a map of hot spots. RCMC is an effective method to disaggregate aggregate data. An RCMC-based disease mapping maximizes the use of available spatial information, and explicitly estimates the spatial uncertainty resulting from aggregation.
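The restricted-and-controlled placement described above can be sketched at cell level: restriction keeps the point inside the reporting polygon (here a town represented as a set of 100 m grid cells, an illustrative stand-in for a point-in-polygon test), and control weights the draw by the population at risk in each cell. The cell coordinates and populations are hypothetical:

```python
import random

def rcmc_point(town_cells, population, rng):
    """Place one case at a random cell: restricted to the town's cells,
    controlled (weighted) by the population at risk in each cell."""
    cells = list(town_cells)
    weights = [population[c] for c in cells]
    return rng.choices(cells, weights=weights, k=1)[0]

# Toy 100 m grid: a 'town' of 4 cells with uneven population.
population = {(0, 0): 10, (0, 1): 0, (1, 0): 0, (1, 1): 90}
rng = random.Random(42)
draws = [rcmc_point(population.keys(), population, rng) for _ in range(1000)]
share = sum(d == (1, 1) for d in draws) / len(draws)
print(round(share, 2))  # close to 0.9, that cell's population share
```

Repeating such draws many times and mapping the resulting point density is what lets the method produce raster-level risk surfaces, and the spread across repetitions gives the spatial-uncertainty map the abstract mentions.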

  17. AAPM/RSNA physics tutorial for residents: physics of flat-panel fluoroscopy systems: Survey of modern fluoroscopy imaging: flat-panel detectors versus image intensifiers and more.

    PubMed

    Nickoloff, Edward Lee

    2011-01-01

    This article reviews the design and operation of both flat-panel detector (FPD) and image intensifier fluoroscopy systems. The different components of each imaging chain and their functions are explained and compared. FPD systems have multiple advantages such as a smaller size, extended dynamic range, no spatial distortion, and greater stability. However, FPD systems typically have the same spatial resolution for all fields of view (FOVs) and are prone to ghosting. Image intensifier systems have better spatial resolution with the use of smaller FOVs (magnification modes) and tend to be less expensive. However, the spatial resolution of image intensifier systems is limited by the television system to which they are coupled. Moreover, image intensifier systems are degraded by glare, vignetting, spatial distortions, and defocusing effects. FPD systems do not have these problems. Some recent innovations to fluoroscopy systems include automated filtration, pulsed fluoroscopy, automatic positioning, dose-area product meters, and improved automatic dose rate control programs. Operator-selectable features may affect both the patient radiation dose and image quality; these selectable features include dose level setting, the FOV employed, fluoroscopic pulse rates, geometric factors, display software settings, and methods to reduce the imaging time. © RSNA, 2011.

  18. X-ray imaging with sub-micron resolution using large-area photon counting detectors Timepix

    NASA Astrophysics Data System (ADS)

    Dudak, J.; Karch, J.; Holcova, K.; Zemlicka, J.

    2017-12-01

    As X-ray micro-CT became a popular tool for scientific purposes, a number of commercially available CT systems have emerged on the market. Micro-CT systems have, therefore, become widely accessible and the number of research laboratories using them constantly increases. However, even though CT scans with a spatial resolution of several micrometers can be performed routinely, data acquisition with sub-micron precision remains a complicated task. Issues come mostly from the prolongation of scan time inevitably connected with the use of nano-focus X-ray sources. Long exposure times increase the noise level in the CT projections. Furthermore, at sub-micron resolution even effects like source-spot drift, rotation-stage wobble or thermal expansion become significant and can negatively affect the data. The use of dark-current-free photon counting detectors as X-ray cameras for such applications can mitigate the increased image noise in the data, but the mechanical stability of the whole system remains a problem and has to be considered. In this work we evaluate the performance of a micro-CT system equipped with a nano-focus X-ray tube and a large-area photon counting detector Timepix for scans with an effective pixel size below one micrometer.

  19. Modeling the Atmosphere of Solar and Other Stars: Radiative Transfer with PHOENIX/3D

    NASA Astrophysics Data System (ADS)

    Baron, Edward

    The chemical composition of stars is an important ingredient in our understanding of the formation, structure, and evolution of both the Galaxy and the Solar System. The composition of the sun itself is an essential reference standard against which the elemental contents of other astronomical objects are compared. Recently, redetermination of the elemental abundances using three-dimensional, time-dependent hydrodynamical models of the solar atmosphere has led to a reduction in the inferred metal abundances, particularly C, N, O, and Ne. However, this reduction in metals reduces the opacity such that models of the Sun no longer agree with the observed results obtained using helioseismology. Three dimensional (3-D) radiative transfer is an important problem in physics, astrophysics, and meteorology. Radiative transfer is extremely computationally complex and it is a natural problem that requires computation on the exascale. We intend to calculate the detailed compositional structure of the Sun and other stars at high resolution with full NLTE, treating the turbulent velocity flows in full detail in order to compare results from hydrodynamics and helioseismology, and understand the nature of the discrepancies found between the two approaches. We propose to perform 3-D high-resolution radiative transfer calculations with the PHOENIX/3D suite of solar and other stars using 3-D hydrodynamic models from different groups. While NLTE radiative transfer has been treated by the groups doing hydrodynamics, they are necessarily limited in their resolution to the consideration of only a few (4-20) frequency bins, whereas we can calculate full NLTE including thousands of wavelength points, resolving the line profiles, and solving the scattering problem with extremely high angular resolution. The code has been used for the analysis of supernova spectra, stellar and planetary spectra, and for time-dependent modeling of transient objects. 
PHOENIX/3D runs and scales very well on Cray XC-30 and XC-40 machines (tested up to 100,800 CPU cores) and should scale up to several million cores for large simulations. Non-local problems, particularly radiation hydrodynamics problems, are at the forefront of computational astrophysics and we will share our work with the community. Our research program brings a unified modeling strategy to the results of several disparate groups and thus will provide a unifying framework with which to assess the metal abundance of the stars and the chemical evolution of the galaxy. We will bring together 3-D hydrodynamical models, detailed radiative transfer, and astronomical abundance studies. We will also provide results of interest to the atomic physics and plasma physics communities. Our work will use data from NASA telescopes including the Hubble Space Telescope and the James Webb Space Telescope. The ability to work with data from the UV to the far IR is crucial for validating our results. Our work will also extend the exascale computational capabilities, which is a national goal.

  20. Recovery of Sparse Positive Signals on the Sphere from Low Resolution Measurements

    NASA Astrophysics Data System (ADS)

    Bendory, Tamir; Eldar, Yonina C.

    2015-12-01

    This letter considers the problem of recovering a positive stream of Diracs on a sphere from its projection onto the space of low-degree spherical harmonics, namely, from its low-resolution version. We suggest recovering the Diracs via a tractable convex optimization problem. The resulting recovery error is proportional to the noise level and depends on the density of the Diracs. We validate the theory by numerical experiments.
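    A one-dimensional analog conveys the idea: recover a positive spike train on a periodic grid from only its low-degree Fourier coefficients (its "low-resolution version"). In this sketch the letter's convex program on the sphere is replaced by plain nonnegative least squares, a substitution of mine rather than the authors' algorithm; positivity alone pins down well-separated on-grid Diracs:

```python
import numpy as np
from scipy.optimize import nnls

N, K = 64, 8                        # grid size, low-pass cutoff (max degree)
x = np.zeros(N)
x[[10, 30, 47]] = [1.0, 0.5, 2.0]   # positive stream of Diracs

k = np.arange(-K, K + 1)            # keep only low-frequency measurements
F = np.exp(-2j * np.pi * np.outer(k, np.arange(N)) / N)
y = F @ x                           # low-resolution version of x

A = np.vstack([F.real, F.imag])     # stack real/imag so real-valued NNLS applies
b = np.concatenate([y.real, y.imag])
x_rec, residual = nnls(A, b)        # nonnegativity-constrained least squares
```

    Because a positive measure with few atoms is uniquely determined by its low-order trigonometric moments, the nonnegative solution here is exact; on the sphere the role of the low frequencies is played by the low-degree spherical harmonics.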

  1. Microscope Resolution.

    ERIC Educational Resources Information Center

    Higbie, J.

    1981-01-01

    Describes problems using the Jenkins and White approach and standard diffraction theory when dealing with the topic of finite conjugate, point-source resolution and how they may be resolved using the relatively obscure Abbe's sine theorem. (JN)

  2. Driving Parameters for Distributed and Centralized Air Transportation Architectures

    NASA Technical Reports Server (NTRS)

    Feron, Eric

    2001-01-01

    This report considers the problem of intersecting aircraft flows under decentralized conflict avoidance rules. Using an Eulerian standpoint (aircraft flow through a fixed control volume), new air traffic control models and scenarios are defined that enable the study of long-term airspace stability problems. Considering a class of two intersecting aircraft flows, it is shown that airspace stability, defined both in terms of safety and performance, is preserved under decentralized conflict resolution algorithms. Performance bounds are derived for the aircraft flow problem under different maneuver models. Besides analytical approaches, numerical examples are presented to test the theoretical results, as well as to generate some insight about the structure of the traffic flow after resolution. Considering more than two intersecting aircraft flows, simulations indicate that flow stability may not be guaranteed under simple conflict avoidance rules. Finally, a comparison is made with centralized strategies to conflict resolution.

  3. The technical consideration of multi-beam mask writer for production

    NASA Astrophysics Data System (ADS)

    Lee, Sang Hee; Ahn, Byung-Sup; Choi, Jin; Shin, In Kyun; Tamamushi, Shuichi; Jeon, Chan-Uk

    2016-10-01

    The multi-beam mask writer is under development to solve the throughput and patterning resolution problems of VSB mask writers. Theoretically, the writing time is appropriate for future design nodes and the resolution is improved with a multi-beam mask writer. Many previous studies show feasible results for resolution, CD control and registration. Although such technical results from development tools seem to be sufficient for mass production, there are still many unexpected problems in real mass production. In this report, the technical challenges of the multi-beam mask writer are discussed in terms of production and application. The problems and issues are defined based on the performance of the current development tool compared with the requirements of mask quality. Using simulation and experiment, we analyze the specific characteristics of the electron beam in the multi-beam mask writer scheme. Consequently, we suggest necessary specifications for mass production with multi-beam mask writers in the future.

  4. Clue Insensitivity in Remote Associates Test Problem Solving

    ERIC Educational Resources Information Center

    Smith, Steven M.; Sifonis, Cynthia M.; Angello, Genna

    2012-01-01

    Does spreading activation from incidentally encountered hints cause incubation effects? We used Remote Associates Test (RAT) problems to examine effects of incidental clues on impasse resolution. When solution words were seen incidentally 3-sec before initially unsolved problems were retested, more problems were resolved (Experiment 1). When…

  5. Three-Dimensional Printing Based Hybrid Manufacturing of Microfluidic Devices.

    PubMed

    Alapan, Yunus; Hasan, Muhammad Noman; Shen, Richang; Gurkan, Umut A

    2015-05-01

    Microfluidic platforms offer revolutionary and practical solutions to challenging problems in biology and medicine. Even though traditional micro/nanofabrication technologies expedited the emergence of the microfluidics field, recent advances in advanced additive manufacturing hold significant potential for single-step, stand-alone microfluidic device fabrication. One such technology, which holds a significant promise for next generation microsystem fabrication is three-dimensional (3D) printing. Presently, building 3D printed stand-alone microfluidic devices with fully embedded microchannels for applications in biology and medicine has the following challenges: (i) limitations in achievable design complexity, (ii) need for a wider variety of transparent materials, (iii) limited z-resolution, (iv) absence of extremely smooth surface finish, and (v) limitations in precision fabrication of hollow and void sections with extremely high surface area to volume ratio. We developed a new way to fabricate stand-alone microfluidic devices with integrated manifolds and embedded microchannels by utilizing a 3D printing and laser micromachined lamination based hybrid manufacturing approach. In this new fabrication method, we exploit the minimized fabrication steps enabled by 3D printing, and reduced assembly complexities facilitated by laser micromachined lamination method. The new hybrid fabrication method enables key features for advanced microfluidic system architecture: (i) increased design complexity in 3D, (ii) improved control over microflow behavior in all three directions and in multiple layers, (iii) transverse multilayer flow and precisely integrated flow distribution, and (iv) enhanced transparency for high resolution imaging and analysis. Hybrid manufacturing approaches hold great potential in advancing microfluidic device fabrication in terms of standardization, fast production, and user-independent manufacturing.

  6. Three-Dimensional Printing Based Hybrid Manufacturing of Microfluidic Devices

    PubMed Central

    Shen, Richang; Gurkan, Umut A.

    2016-01-01

    Microfluidic platforms offer revolutionary and practical solutions to challenging problems in biology and medicine. Even though traditional micro/nanofabrication technologies expedited the emergence of the microfluidics field, recent advances in advanced additive manufacturing hold significant potential for single-step, stand-alone microfluidic device fabrication. One such technology, which holds a significant promise for next generation microsystem fabrication is three-dimensional (3D) printing. Presently, building 3D printed stand-alone microfluidic devices with fully embedded microchannels for applications in biology and medicine has the following challenges: (i) limitations in achievable design complexity, (ii) need for a wider variety of transparent materials, (iii) limited z-resolution, (iv) absence of extremely smooth surface finish, and (v) limitations in precision fabrication of hollow and void sections with extremely high surface area to volume ratio. We developed a new way to fabricate stand-alone microfluidic devices with integrated manifolds and embedded microchannels by utilizing a 3D printing and laser micromachined lamination based hybrid manufacturing approach. In this new fabrication method, we exploit the minimized fabrication steps enabled by 3D printing, and reduced assembly complexities facilitated by laser micromachined lamination method. The new hybrid fabrication method enables key features for advanced microfluidic system architecture: (i) increased design complexity in 3D, (ii) improved control over microflow behavior in all three directions and in multiple layers, (iii) transverse multilayer flow and precisely integrated flow distribution, and (iv) enhanced transparency for high resolution imaging and analysis. Hybrid manufacturing approaches hold great potential in advancing microfluidic device fabrication in terms of standardization, fast production, and user-independent manufacturing. PMID:27512530

  7. A new high-resolution electromagnetic method for subsurface imaging

    NASA Astrophysics Data System (ADS)

    Feng, Wanjie

    For most electromagnetic (EM) geophysical systems, the contamination of secondary fields by primary fields ultimately limits the capability of controlled-source EM methods. Null coupling techniques were proposed to solve this problem. However, the small orientation errors in null coupling systems greatly restrict the applications of these systems. Another problem encountered by most EM systems is surface interference and geologic noise, which sometimes make the geophysical survey impossible to carry out. In order to solve these problems, the alternating target antenna coupling (ATAC) method was introduced, which greatly removed the influence of the primary field and reduced the surface interference. But this system has limitations on the maximum transmitter moment that can be used. The differential target antenna coupling (DTAC) method was proposed to allow much larger transmitter moments and at the same time maintain the advantages of the ATAC method. In this dissertation, first, the theoretical DTAC calculations were derived mathematically using Born and Wolf's complex magnetic vector. 1D layered and 2D blocked earth models were used to demonstrate that the DTAC method has no responses for 1D and 2D structures. Analytical studies of the plate model influenced by conductive and resistive backgrounds were presented to explain the physical phenomenology behind the DTAC method, namely that the magnetic fields of the subsurface targets are required to be frequency dependent. Then, the advantages of the DTAC method, e.g., high resolution, reduced geologic noise and insensitivity to surface interference, were analyzed using surface and subsurface numerical examples in the EMGIMA software. Next, the theoretical advantages, such as high resolution and insensitivity to surface interference, were verified by designing and developing a low-power (moment of 50 Am2) vertical-array DTAC system and testing it on controlled targets and scaled target coils. 
Finally, a high-power (moment of about 6800 Am2) vertical-array DTAC system was designed, developed and tested on controlled buried targets and surface interference to illustrate that the DTAC system was insensitive to surface interference even with a high-power transmitter, and achieved higher resolution by using the large-moment transmitter. From the theoretical and practical analysis and tests, several characteristics of the DTAC method were found: (1) The DTAC method can null out the effect of 1D layered and 2D structures, because magnetic fields are orientation independent, which leads to no difference among the null vector directions. This characteristic allows for the measurement of smaller subsurface targets; (2) The DTAC method is insensitive to orientation errors. It is a robust EM null coupling method. Even large orientation errors do not affect the measured target responses, when a reference frequency and one or more data frequencies are used; (3) The vertical-array DTAC method is effective in reducing geologic noise and is insensitive to surface interference, e.g., fences, vehicles, power lines and buildings; (4) The DTAC method is a high-resolution EM sounding method. It can distinguish the depth and orientation of subsurface targets; (5) The vertical-array DTAC method can be adapted to a variety of rapidly moving survey applications. The transmitter moment can be scaled for effective study of near-surface targets (civil engineering, water resources, and environmental restoration) as well as deep targets (mining and other natural-resource exploration).

  8. Health Education in Practice: Employee Conflict Resolution Knowledge and Conflict Handling Strategies

    ERIC Educational Resources Information Center

    Hackett, Alexis; Renschler, Lauren; Kramer, Alaina

    2014-01-01

    The purpose of this project was to determine if a brief workplace conflict resolution workshop improved employee conflict resolution knowledge and to examine which conflict handling strategies (Yielding, Compromising, Forcing, Problem-Solving, Avoiding) were most used by employees when dealing with workplace conflict. A pre-test/post-test control…

  9. Sensor fusion to enable next generation low cost Night Vision systems

    NASA Astrophysics Data System (ADS)

    Schweiger, R.; Franz, S.; Löhlein, O.; Ritter, W.; Källhammer, J.-E.; Franks, J.; Krekels, T.

    2010-04-01

    The next generation of automotive Night Vision Enhancement systems offers automatic pedestrian recognition with a performance beyond current Night Vision systems at a lower cost. This will allow high market penetration, covering the luxury as well as compact car segments. Improved performance can be achieved by fusing a Far Infrared (FIR) sensor with a Near Infrared (NIR) sensor. However, fusing with today's FIR systems will be too costly to get a high market penetration. The main cost drivers of the FIR system are its resolution and its sensitivity. Sensor cost is largely determined by sensor die size. Fewer and smaller pixels will reduce die size but also resolution and sensitivity. Sensitivity limits are mainly determined by inclement weather performance. Sensitivity requirements should be matched to the possibilities of low cost FIR optics, especially implications of molding of highly complex optical surfaces. As a FIR sensor specified for fusion can have lower resolution as well as lower sensitivity, fusing FIR and NIR can solve performance and cost problems. To allow compensation of FIR-sensor degradation on the pedestrian detection capabilities, a fusion approach called MultiSensorBoosting is presented that produces a classifier holding highly discriminative sub-pixel features from both sensors at once. The algorithm is applied on data with different resolution and on data obtained from cameras with varying optics to incorporate various sensor sensitivities. As it is not feasible to record representative data with all different sensor configurations, transformation routines on existing high resolution data recorded with high sensitivity cameras are investigated in order to determine the effects of lower resolution and lower sensitivity to the overall detection performance. 
This paper also gives an overview of first results showing that a reduction in FIR sensor resolution can be compensated for using fusion techniques, as can a reduction in sensitivity.

  10. SRRF: Universal live-cell super-resolution microscopy.

    PubMed

    Culley, Siân; Tosheva, Kalina L; Matos Pereira, Pedro; Henriques, Ricardo

    2018-08-01

    Super-resolution microscopy techniques break the diffraction limit of conventional optical microscopy to achieve resolutions approaching tens of nanometres. The major advantage of such techniques is that they provide resolutions close to those obtainable with electron microscopy while maintaining the benefits of light microscopy such as a wide palette of high specificity molecular labels, straightforward sample preparation and live-cell compatibility. Despite this, the application of super-resolution microscopy to dynamic, living samples has thus far been limited and often requires specialised, complex hardware. Here we demonstrate how a novel analytical approach, Super-Resolution Radial Fluctuations (SRRF), is able to make live-cell super-resolution microscopy accessible to a wider range of researchers. We show its applicability to live samples expressing GFP using commercial confocal as well as laser- and LED-based widefield microscopes, with the latter achieving long-term timelapse imaging with minimal photobleaching. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
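    The scale of the claim can be checked against the Abbe diffraction limit, d = λ/(2·NA). A back-of-envelope sketch; the wavelength and numerical aperture below are assumed typical values for GFP imaging, not figures from the paper:

```python
# Abbe diffraction limit: d = wavelength / (2 * NA)
wavelength_nm = 510.0                 # ~GFP emission peak (assumed)
numerical_aperture = 1.4              # high-NA oil-immersion objective (assumed)
d_nm = wavelength_nm / (2 * numerical_aperture)
# d_nm ≈ 182 nm: the conventional limit; "tens of nanometres" is several-fold below it
```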

  11. Time reversal through a solid-liquid interface and super-resolution

    NASA Astrophysics Data System (ADS)

    Tsogka, Chrysoula; Papanicolaou, George C.

    2002-12-01

    We present numerical computations that reproduce the time-reversal experiments of Draeger et al (Draeger C, Cassereau D and Fink M 1998 Appl. Phys. Lett. 72 1567-9), where ultrasound elastic waves are time-reversed back to their source with a time-reversal mirror in a fluid adjacent to the solid. We also show numerically that multipathing caused by random inhomogeneities improves the focusing of the back-propagated elastic waves beyond the diffraction limit seen previously in acoustic wave propagation (Dowling D R and Jackson D R 1990 J. Acoust. Soc. Am. 89 171-81, Dowling D R and Jackson D R 1992 J. Acoust. Soc. Am. 91 3257-77, Fink M 1999 Sci. Am. 91-7, Kuperman W A, Hodgkiss W S, Song H C, Akal T, Ferla C and Jackson D R 1997 J. Acoust. Soc. Am. 103 25-40, Derode A, Roux P and Fink M 1995 Phys. Rev. Lett. 75 4206-9), which is called super-resolution. A theoretical explanation of the robustness of super-resolution is given, along with several numerical computations that support this explanation (Blomgren P, Papanicolaou G and Zhao H 2002 J. Acoust. Soc. Am. 111 238-48). Time reversal with super-resolution can be used in non-destructive testing and, in a different way, in imaging with active arrays (Borcea L, Papanicolaou G, Tsogka C and Berryman J 2002 Inverse Problems 18 1247-79).

  12. Far Infrared Spectroscopy of H II Regions. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Ward, D. B.

    1975-01-01

    The far infrared spectra of H II regions are investigated. A liquid helium cooled grating spectrometer designed to make observations from the NASA Lear Jet is described along with tests of the instrument. The observing procedure on the Lear Jet telescope is described and the method of data analysis is discussed. Results are presented from a search for the (O III) 88.16 micron line. An upper limit on the emission in this line is obtained and line detection is described. Results are compared to theoretical predictions, and future applications of fine structure line observations are discussed. Coarse resolution results are given along with calibration problems. The spectra obtained are compared to models for dust emission.

  13. SVG-Based Web Publishing

    NASA Astrophysics Data System (ADS)

    Gao, Jerry Z.; Zhu, Eugene; Shim, Simon

    2003-01-01

    With the increasing application of the Web in e-commerce, advertising, and publication, new technologies are needed to overcome the limitations of current Web graphics technology. The SVG (Scalable Vector Graphics) technology is a revolutionary solution to the existing problems in current web technology. It provides precise and high-resolution web graphics using plain text format commands. It sets a new standard for web graphic formats, allowing us to present complicated graphics with rich text fonts and colors, high printing quality, and dynamic layout capabilities. This paper provides a tutorial overview of SVG technology and its essential features, capabilities, and advantages. It also reports a comparative study between SVG and other web graphics technologies.

  14. Sub-pixel mapping of hyperspectral imagery using super-resolution

    NASA Astrophysics Data System (ADS)

    Sharma, Shreya; Sharma, Shakti; Buddhiraju, Krishna M.

    2016-04-01

    With the development of remote sensing technologies, it has become possible to obtain an overview of landscape elements, which helps in studying changes on the earth's surface due to climatic, geological, geomorphological and human activities. Remote sensing measures the electromagnetic radiation from the earth's surface and matches the spectral similarity between the observed signature and the known standard signatures of various targets. However, problems arise when image classification techniques assume pixels to be pure. In hyperspectral imagery, images have high spectral resolution but poor spatial resolution. Therefore, the spectra obtained are often contaminated by the presence of mixed pixels, which causes misclassification. To utilise this high spectral information, the spatial resolution has to be enhanced. Many factors make spatial resolution one of the most expensive and hardest properties to improve in imaging systems. To solve this problem, post-processing of hyperspectral images is done to retrieve more information from the already acquired images. The algorithm to enhance the spatial resolution of images by dividing them into sub-pixels is known as super-resolution, and considerable research has been done in this domain. In this paper, we propose a new method for super-resolution based on ant colony optimization and review the popular methods of sub-pixel mapping of hyperspectral images along with their comparative analysis.

  15. Resolution Quality and Atom Positions in Sub-Angstrom Electron Microscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Keefe, Michael A.; Allard, Lawrence F.; Blom, Douglas A.

    2005-02-15

    The ability to determine whether an image peak represents one single atom or several depends on the resolution of the HR-(S)TEM. Rayleigh's resolution criterion, an accepted standard in optics, was derived as a means for judging when two image intensity peaks from two sources of light (stars) are distinguishable from a single source. Atom spacings closer than the Rayleigh limit have been resolved in HR-TEM, suggesting that it may be useful to consider other limits, such as the Sparrow resolution criterion. From the viewpoint of the materials scientist, it is important to be able to use the image to determine whether an image feature represents one or more atoms (resolution), and where the atoms (or atom columns) are positioned relative to one another (resolution quality). When atoms and the corresponding image peaks are separated by more than the Rayleigh limit of the HR-(S)TEM, it is possible to adjust imaging parameters so that relative peak positions in the image correspond to relative atom positions in the specimen. When atoms are closer than the Rayleigh limit, we must find the relationship of the peak position to the atom position by peak fitting or, if we have a suitable model, by image simulation. Our Rayleigh-Sparrow parameter QRS reveals the "resolution quality" of a microscope image. QRS values greater than 1 indicate a clearly resolved twin peak, while values between 1 and 0 mean a lower-quality resolution and an image with peaks displaced from the relative atom positions. The depth of the twin-peak minimum can be used to determine the value of QRS and the true separation of the atom peaks that sum to produce the twin peak in the image. The Rayleigh-Sparrow parameter can be used to refine relative atom positions in defect images where atoms are closer than the Rayleigh limit of the HR-(S)TEM, reducing the necessity for full image simulations from large defect models.
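    The twin-peak dip underlying the Rayleigh/Sparrow distinction is easy to reproduce numerically. A toy sketch using Gaussian peaks as a stand-in for the instrument point-spread function (this is not the paper's QRS definition): the central dip vanishes near a separation of 2σ, a Sparrow-type limit, while larger separations give a clearly resolved twin peak.

```python
import numpy as np

def dip_depth(sep, sigma=1.0):
    """Relative depth of the central minimum between two equal Gaussian
    peaks separated by `sep`; 0 means the peaks have merged into one."""
    x = np.linspace(-3 * sigma - sep, 3 * sigma + sep, 4001)
    f = (np.exp(-(x - sep / 2) ** 2 / (2 * sigma ** 2))
         + np.exp(-(x + sep / 2) ** 2 / (2 * sigma ** 2)))
    midpoint = f[f.size // 2]            # intensity halfway between the atoms
    return (f.max() - midpoint) / f.max()
```

    For example, dip_depth(3.0) gives a clearly resolved pair, while dip_depth(1.0) returns 0: below the Sparrow-type separation the image shows a single peak, and atom positions must come from peak fitting or image simulation, as the abstract notes.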

  16. What is the spatial sampling of MISR?

    Atmospheric Science Data Center

    2014-12-08

    ... spatial resolution of the sensors without exceeding the data transfer quotas, MISR can be operated in two different data acquisition modes: ... data at the full resolution, but only for limited periods of time and therefore for limited regions, typically about 300 km in length (along ...

  17. Quantity not quality: The relationship between fluid intelligence and working memory capacity

    PubMed Central

    Fukuda, Keisuke; Vogel, Edward; Mayr, Ulrich; Awh, Edward

    2010-01-01

    A key motivation for understanding capacity in working memory (WM) is its relationship with fluid intelligence. Recent evidence has suggested a 2-factor model that distinguishes between the number of representations that can be maintained in WM and the resolution of those representations. To determine how these factors relate to fluid intelligence, we conducted an exploratory factor analysis on multiple number-limited and resolution-limited measures of WM ability. The results strongly supported the 2-factor model, with fully orthogonal factors accounting for performance in the number-limited and resolution-limited conditions. Furthermore, the reliable relationship between WM capacity and fluid intelligence was exclusively supported by the number factor (r = .66), while the resolution factor made no reliable contribution (r = −.05). Thus, the relationship between WM capacity and standard measures of fluid intelligence is mediated by the number of representations that can be simultaneously maintained in WM rather than by the precision of those representations. PMID:21037165

  18. Relaxation and Preconditioning for High Order Discontinuous Galerkin Methods with Applications to Aeroacoustics and High Speed Flows

    NASA Technical Reports Server (NTRS)

    Shu, Chi-Wang

    2004-01-01

    This project concerns the development of discontinuous Galerkin finite element methods, for general geometry and triangulations, for solving convection dominated problems, with applications to aeroacoustics. Other related issues in high order WENO finite difference and finite volume methods have also been investigated. These are two classes of high order, high resolution methods suitable for convection dominated simulations with possibly discontinuous or sharp gradient solutions. In [18], we first review these two classes of methods, pointing out their similarities and differences in algorithm formulation, theoretical properties, implementation issues, applicability, and relative advantages. We then present some quantitative comparisons of the third order finite volume WENO methods and discontinuous Galerkin methods for a series of test problems to assess their relative merits in accuracy and CPU timing. In [3], we review the development of the Runge-Kutta discontinuous Galerkin (RKDG) methods for non-linear convection-dominated problems. These robust and accurate methods have made their way into the mainstream of computational fluid dynamics and are quickly finding use in a wide variety of applications. They combine a special class of Runge-Kutta time discretizations, which allows the method to be non-linearly stable regardless of its accuracy, with a finite element space discretization by discontinuous approximations, which incorporates the ideas of numerical fluxes and slope limiters coined during the remarkable development of high-resolution finite difference and finite volume schemes. The resulting RKDG methods are stable, high-order accurate, and highly parallelizable schemes that can easily handle complicated geometries and boundary conditions. 
We review the theoretical and algorithmic aspects of these methods and show several applications including nonlinear conservation laws, the compressible and incompressible Navier-Stokes equations, and Hamilton-Jacobi-like equations.
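    The flavor of the slope limiters mentioned above can be shown on the simplest case: linear advection with a minmod-limited, second-order upwind (MUSCL-type) finite volume scheme. A minimal sketch, not taken from the report:

```python
import numpy as np

def minmod(a, b):
    """Classic minmod limiter: the smaller slope when signs agree, else 0."""
    return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def advect_step(u, cfl=0.5):
    """One step for u_t + u_x = 0 on a periodic grid; minmod limiting keeps
    the scheme total-variation diminishing (no spurious oscillations)."""
    slope = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)
    face = u + 0.5 * (1.0 - cfl) * slope      # limited upwind face value at i+1/2
    return u - cfl * (face - np.roll(face, 1))

u = np.where((np.arange(200) >= 50) & (np.arange(200) < 100), 1.0, 0.0)
for _ in range(100):                           # advect a square pulse 50 cells
    u = advect_step(u)
```

    The discontinuities are transported with no over- or undershoot; an unlimited second-order scheme would oscillate at the jumps, which is exactly what the limiter suppresses.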

  19. Super-resolution Time-Lapse Seismic Waveform Inversion

    NASA Astrophysics Data System (ADS)

    Ovcharenko, O.; Kazei, V.; Peter, D. B.; Alkhalifah, T.

    2017-12-01

    Time-lapse seismic waveform inversion is a technique that allows tracking changes in reservoirs over time. Such monitoring is computationally expensive, and it is therefore barely feasible to perform it on-the-fly. Most of the expense comes from the numerous FWI iterations at high temporal frequencies, which are inevitable since the low-frequency components cannot resolve fine-scale features of a velocity model. Inverted velocity changes are also blurred when there is noise in the data, so the problem of low-resolution images is widely known. One of the problems intensively tackled by the computer vision research community is recovering high-resolution images from their low-resolution versions. Using artificial neural networks to reach super-resolution from a single downsampled image is one of the leading solutions for this problem. Each pixel of the upscaled image is affected by all the pixels of its low-resolution version, which enables the workflow to recover features that are likely to occur in the corresponding environment. In the present work, we adopt a machine learning image-enhancement technique to improve the resolution of time-lapse full-waveform inversion. We first invert the baseline model with conventional FWI. Then we run a few iterations of FWI on a set of the monitoring data to find the desired model changes. These changes are blurred, and we enhance their resolution by using a deep neural network. The network is trained to map low-resolution model updates predicted by FWI into the real perturbations of the baseline model. For supervised training of the network, we generate a set of random perturbations in the baseline model and perform FWI on the noisy data from the perturbed models. We test the approach on a realistic perturbation of the Marmousi II model and demonstrate that it outperforms conventional convolution-based deblurring techniques.
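    The training idea above can be caricatured in one dimension. The sketch below is a hypothetical stand-in, not the authors' implementation: a linear deblurring filter replaces the deep network purely for brevity, and the signal size, blur kernel, and learning rate are illustrative assumptions. It fits the filter to synthetic (blurred, sharp) pairs by stochastic gradient descent, mirroring the supervised setup of mapping blurred model updates to the true perturbations.

```python
import random

random.seed(0)
N, K = 32, 5                      # signal length, filter taps (assumed)
blur = [0.1, 0.2, 0.4, 0.2, 0.1]  # assumed smoothing kernel (the "resolution loss")

def conv(x, k):
    """Same-length convolution with zero padding at the edges."""
    h = len(k) // 2
    return [sum(k[j] * x[i + j - h] for j in range(len(k))
                if 0 <= i + j - h < len(x)) for i in range(len(x))]

# synthetic training pairs (blurred, sharp), mimicking random perturbations
pairs = []
for _ in range(200):
    sharp = [random.gauss(0, 1) for _ in range(N)]
    pairs.append((conv(sharp, blur), sharp))

# fit the deblurring filter w by stochastic gradient descent
w = [0.0] * K
lr, h = 0.01, K // 2
for _ in range(50):
    for low, sharp in pairs:
        pred = conv(low, w)
        for i in range(N):
            err = pred[i] - sharp[i]
            for j in range(K):
                if 0 <= i + j - h < N:
                    w[j] -= lr * err * low[i + j - h]

# held-out check: deblurred signals are closer to the truth than blurred ones
mse_blur = mse_rest = 0.0
for _ in range(50):
    sharp = [random.gauss(0, 1) for _ in range(N)]
    low = conv(sharp, blur)
    rest = conv(low, w)
    mse_blur += sum((a - b) ** 2 for a, b in zip(low, sharp)) / (N * 50)
    mse_rest += sum((a - b) ** 2 for a, b in zip(rest, sharp)) / (N * 50)
```

    The learned operator cannot fully invert the blur (the inverse is ill-posed), but on held-out signals it reduces the error relative to the blurred input, which is the same success criterion the abstract applies to model updates.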

  20. Current Status and Research into Overcoming Limitations of Capsule Endoscopy

    PubMed Central

    Kwack, Won Gun; Lim, Yun Jeong

    2016-01-01

    Endoscopic investigation has a critical role in the diagnosis and treatment of gastrointestinal (GI) diseases. Since 2001, capsule endoscopy (CE) has been available for small-bowel exploration and is under continuous development. During the past decade, CE has achieved impressive improvements in areas such as miniaturization, resolution, and battery life. As a result, CE is currently a first-line tool for the investigation of the small bowel in obscure gastrointestinal bleeding and is a useful alternative to wired enteroscopy. Nevertheless, CE still has several limitations, such as incomplete examination and limited diagnostic and therapeutic capabilities. To resolve these problems, many groups have suggested several models (e.g., controlled CO2 insufflation system, magnetic navigation system, mobile robotic platform, tagging and biopsy equipment, and targeted drug-delivery system), which are in development. In the near future, new technological advances will improve the capabilities of CE and broaden its spectrum of applications not only for the small bowel but also for the colon, stomach, and esophagus. The purpose of this review is to introduce the current status of CE and to review the ongoing development of solutions to address its limitations. PMID:26855917

  1. Multi-focal multiphoton lithography.

    PubMed

    Ritschdorff, Eric T; Nielson, Rex; Shear, Jason B

    2012-03-07

    Multiphoton lithography (MPL) provides unparalleled capabilities for creating high-resolution, three-dimensional (3D) materials from a broad spectrum of building blocks and with few limitations on geometry, qualities that have been key to the design of chemically, mechanically, and biologically functional microforms. Unfortunately, the reliance of MPL on laser scanning limits the speed at which fabrication can be performed, making it impractical in many instances to produce large-scale, high-resolution objects such as complex micromachines, 3D microfluidics, etc. Previously, others have demonstrated the possibility of using multiple laser foci to simultaneously perform MPL at numerous sites in parallel, but use of a stage-scanning system to specify fabrication coordinates resulted in the production of identical features at each focal position. As a more general solution to the bottleneck problem, we demonstrate here the feasibility for performing multi-focal MPL using a dynamic mask to differentially modulate foci, an approach that enables each fabrication site to create independent (uncorrelated) features within a larger, integrated microform. In this proof-of-concept study, two simultaneously scanned foci produced the expected two-fold decrease in fabrication time, and this approach could be readily extended to many scanning foci by using a more powerful laser. Finally, we show that use of multiple foci in MPL can be exploited to assign heterogeneous properties (such as differential swelling) to micromaterials at distinct positions within a fabrication zone.

  2. Using Problem-Based Learning to Enhance Team and Player Development in Youth Soccer

    ERIC Educational Resources Information Center

    Hubball, Harry; Robertson, Scott

    2004-01-01

    Problem-based learning (PBL) is a coaching and teaching methodology that develops knowledge, abilities, and skills. It also encourages participation, collaborative investigation, and the resolution of authentic, "ill-structured" problems through the use of problem definition, teamwork, communication, data collection, decision-making,…

  3. On the application of ENO scheme with subcell resolution to conservation laws with stiff source terms

    NASA Technical Reports Server (NTRS)

    Chang, Shih-Hung

    1991-01-01

    Two approaches are used to extend the essentially non-oscillatory (ENO) schemes to treat conservation laws with stiff source terms. One approach is the application of the Strang time-splitting method. Here the basic ENO scheme and the Harten modification using subcell resolution (SR), ENO/SR scheme, are extended this way. The other approach is a direct method and a modification of the ENO/SR. Here the technique of ENO reconstruction with subcell resolution is used to locate the discontinuity within a cell and the time evolution is then accomplished by solving the differential equation along characteristics locally and advancing in the characteristic direction. This scheme is denoted ENO/SRCD (subcell resolution - characteristic direction). All the schemes are tested on the equation of LeVeque and Yee (NASA-TM-100075, 1988) modeling reacting flow problems. Numerical results show that these schemes handle this intriguing model problem very well, especially with ENO/SRCD which produces perfect resolution at the discontinuity.

  4. Ambient atomic resolution atomic force microscopy with qPlus sensors: Part 1.

    PubMed

    Wastl, Daniel S

    2017-01-01

    Atomic force microscopy (AFM) is a powerful tool for observing nature at the highest resolution and for understanding fundamental processes like friction and tribology on the nanoscale. Atomic resolution of the highest quality was long possible only in well-controlled environments such as ultrahigh vacuum (UHV) or controlled buffer environments (liquid conditions), and in particular for long-term high-resolution analysis at low temperatures (∼4 K) in UHV, where drift is nearly completely absent. Atomic resolution in these environments is possible and widely used. However, in uncontrolled environments like air, with all its pollutants and aerosols, unspecified thin liquid films, as thin as a single molecular water layer of 200 pm or thicker condensation films with thicknesses up to a hundred nanometers, have been a problem for highest resolution since the invention of the AFM. The goal of true atomic resolution on hydrophilic as well as hydrophobic samples was reached recently. In this manuscript we review the concept of ambient AFM with atomic resolution. The reader is introduced to the phenomenology of ambient conditions; the problems are explained and analyzed, and a method for scan parameter optimization is presented. Recently developed concepts and techniques for reaching atomic resolution in air and ultra-thin liquid films are shown and explained in detail, using several examples. Microsc. Res. Tech. 80:50-65, 2017. © 2016 Wiley Periodicals, Inc.

  5. Lensfree on-chip microscopy over a wide field-of-view using pixel super-resolution

    PubMed Central

    Bishara, Waheb; Su, Ting-Wei; Coskun, Ahmet F.; Ozcan, Aydogan

    2010-01-01

    We demonstrate lensfree holographic microscopy on a chip to achieve ~0.6 µm spatial resolution corresponding to a numerical aperture of ~0.5 over a large field-of-view of ~24 mm2. By using partially coherent illumination from a large aperture (~50 µm), we acquire lower resolution lensfree in-line holograms of the objects with unit fringe magnification. For each lensfree hologram, the pixel size at the sensor chip limits the spatial resolution of the reconstructed image. To circumvent this limitation, we implement a sub-pixel shifting based super-resolution algorithm to effectively recover much higher resolution digital holograms of the objects, permitting sub-micron spatial resolution to be achieved across the entire sensor chip active area, which is also equivalent to the imaging field-of-view (24 mm2) due to unit magnification. We demonstrate the success of this pixel super-resolution approach by imaging patterned transparent substrates, blood smear samples, as well as Caenorhabditis elegans. PMID:20588977
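    The core idea behind sub-pixel shifting can be illustrated with a toy 1-D sketch. Everything here is an illustrative assumption (the test signal, grid sizes, and perfectly known shifts); the actual algorithm reconstructs holograms under far less ideal conditions. The point is only that several coarse samplings, each offset by a sub-pixel amount, jointly carry the information of one fine sampling.

```python
import math

def scene(x):
    """Continuous 'object' being imaged (assumed for the demo)."""
    return math.sin(2 * math.pi * x)

FINE = 16          # high-resolution grid points
FACTOR = 4         # upsampling factor = number of shifted captures
COARSE = FINE // FACTOR

# capture FACTOR low-res frames, the k-th shifted by k sub-pixel steps
frames = []
for k in range(FACTOR):
    shift = k / FINE
    frames.append([scene(i / COARSE + shift) for i in range(COARSE)])

# shift-and-add: interleave the frames onto the fine grid
highres = [0.0] * FINE
for k, frame in enumerate(frames):
    for i, v in enumerate(frame):
        highres[i * FACTOR + k] = v

# the interleaved result matches a direct fine sampling of the scene
direct = [scene(i / FINE) for i in range(FINE)]
max_err = max(abs(a - b) for a, b in zip(highres, direct))
```

    In the paper's setting the shifts are estimated rather than known, and the recovery is posed as an inverse problem over the hologram rather than a direct interleaving, but the resolution gain comes from the same diversity of sub-pixel offsets.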

  6. Problem of data quality and the limitations of the infrastructure approach

    NASA Astrophysics Data System (ADS)

    Behlen, Fred M.; Sayre, Richard E.; Rackus, Edward; Ye, Dingzhong

    1998-07-01

    The 'Infrastructure Approach' is a PACS implementation methodology wherein the archive, network and information systems interfaces are acquired first, and workstations are installed later. The approach allows building a history of archived image data, so that most prior examinations are available in digital form when workstations are deployed. A limitation of the Infrastructure Approach is that the deferred use of digital image data defeats many data quality management functions that are provided automatically by human mechanisms when data is immediately used for the completion of clinical tasks. If the digital data is used solely for archiving while reports are interpreted from film, the radiologist serves only as a check against lost films, and another person must be designated as responsible for the quality of the digital data. Data from the Radiology Information System and the PACS were analyzed to assess the nature and frequency of system and data quality errors. The error level was found to be acceptable if supported by auditing and error resolution procedures requiring additional staff time, and in any case was better than the loss rate of a hardcopy film archive. It is concluded that the problem of data quality compromises, but does not negate, the value of the Infrastructure Approach. The Infrastructure Approach is best employed only to a limited extent: any phased PACS implementation should include a substantial complement of workstations dedicated to softcopy interpretation for at least some applications, with full deployment following not long thereafter.

  7. Helmet-mounted pilot night vision systems: Human factors issues

    NASA Technical Reports Server (NTRS)

    Hart, Sandra G.; Brickner, Michael S.

    1989-01-01

    Helmet-mounted displays of infrared imagery (forward-looking infrared (FLIR)) allow helicopter pilots to perform low level missions at night and in low visibility. However, pilots experience high visual and cognitive workload during these missions, and their performance capabilities may be reduced. Human factors problems inherent in existing systems stem from three primary sources: the nature of thermal imagery; the characteristics of specific FLIR systems; and the difficulty of using a FLIR system for flying and/or visually acquiring and tracking objects in the environment. The pilot night vision system (PNVS) in the Apache AH-64 provides a monochrome, 30 by 40 deg helmet-mounted display of infrared imagery. Thermal imagery is inferior to television imagery in both resolution and contrast ratio. Gray shades represent temperature differences rather than brightness variability, and images undergo significant changes over time. The limited field of view, displacement of the sensor from the pilot's eye position, and monocular presentation of a bright FLIR image (while the other eye remains dark-adapted) are all potential sources of disorientation, limitations in depth and distance estimation, sensations of apparent motion, and difficulties in target and obstacle detection. Insufficient information about human perceptual and performance limitations constrains the ability of human factors specialists to provide significantly improved specifications, training programs, or alternative designs. Additional research is required to determine the most critical problem areas and to propose solutions that consider the human as well as the development of technology.

  8. Benefits and Limitations of Prenatal Screening for Prader-Willi Syndrome

    PubMed Central

    Butler, Merlin G.

    2016-01-01

    This review summarizes the status of genetic laboratory testing in Prader-Willi syndrome (PWS) due to different genetic subtypes, most often a paternally derived 15q11-q13 deletion, with benefits and limitations related to prenatal screening. Medical literature was searched for prenatal screening and genetic laboratory testing methods in use or under development and discussed in relationship to PWS. Genetic testing includes six established laboratory diagnostic approaches for PWS with direct application to prenatal screening. Ultrasonographic, obstetric and cytogenetic reports were summarized in relationship to the cause of Prader-Willi syndrome and identification of specific genetic subtypes including maternal disomy 15. Advances in genetic technology were described for diagnosing PWS, specifically DNA methylation and high-resolution chromosomal SNP microarrays as current tools for genetic screening, and incorporating next generation DNA sequencing for noninvasive prenatal testing (NIPT) using cell-free fetal DNA. Positive experiences are reported with NIPT for detection of numerical chromosomal problems (aneuploidies) but not for structural problems (microdeletions). These reports will be discussed along with future directions for genetic screening of PWS. In summary, this review describes and discusses the status of established and ongoing genetic testing options for PWS applicable in prenatal screening including NIPT and future directions for early diagnosis in Prader-Willi syndrome. PMID:27537837

  9. Benefits and limitations of prenatal screening for Prader-Willi syndrome.

    PubMed

    Butler, Merlin G

    2017-01-01

    This review summarizes the status of genetic laboratory testing in Prader-Willi syndrome (PWS) with different genetic subtypes, most often a paternally derived 15q11-q13 deletion, and discusses benefits and limitations related to prenatal screening. Medical literature was searched for prenatal screening and genetic laboratory testing methods in use or under development and discussed in relationship to PWS. Genetic testing includes six established laboratory diagnostic approaches for PWS with direct application to prenatal screening. Ultrasonographic, obstetric and cytogenetic reports were summarized in relationship to the cause of PWS and identification of specific genetic subtypes including maternal disomy 15. Advances in genetic technology were described for diagnosing PWS, specifically DNA methylation and high-resolution chromosomal SNP microarrays as current tools for genetic screening, and incorporating next generation DNA sequencing for noninvasive prenatal testing (NIPT) using cell-free fetal DNA. Positive experiences are reported with NIPT for detection of numerical chromosomal problems (aneuploidies) but not for structural problems (microdeletions). These reports will be discussed along with future directions for genetic screening of PWS. In summary, this review describes and discusses the status of established and ongoing genetic testing options for PWS applicable in prenatal screening including NIPT and future directions for early diagnosis in PWS. © 2016 John Wiley & Sons, Ltd.

  10. Computational high-resolution optical imaging of the living human retina

    NASA Astrophysics Data System (ADS)

    Shemonski, Nathan D.; South, Fredrick A.; Liu, Yuan-Zhi; Adie, Steven G.; Scott Carney, P.; Boppart, Stephen A.

    2015-07-01

    High-resolution in vivo imaging is of great importance for the fields of biology and medicine. The introduction of hardware-based adaptive optics (HAO) has pushed the limits of optical imaging, enabling high-resolution near diffraction-limited imaging of previously unresolvable structures. In ophthalmology, when combined with optical coherence tomography, HAO has enabled a detailed three-dimensional visualization of photoreceptor distributions and individual nerve fibre bundles in the living human retina. However, the introduction of HAO hardware and supporting software adds considerable complexity and cost to an imaging system, limiting the number of researchers and medical professionals who could benefit from the technology. Here we demonstrate a fully automated computational approach that enables high-resolution in vivo ophthalmic imaging without the need for HAO. The results demonstrate that computational methods in coherent microscopy are applicable in highly dynamic living systems.

  11. Thermal studies of a superconducting current limiter using Monte-Carlo method

    NASA Astrophysics Data System (ADS)

    Lévêque, J.; Rezzoug, A.

    1999-07-01

    Considering the increase of fault current levels in electrical networks, current limiters have become very attractive. Superconducting limiters are based on the quasi-instantaneous intrinsic transition from the superconducting state to the normal resistive one. Without fault detection or an external trigger, they reduce the stresses on the electrical installations upstream of the fault. To avoid destruction of the superconducting coil, the temperature must not exceed a certain value. The design of a superconducting coil therefore requires the simultaneous resolution of an electrical equation and a thermal one. This paper deals with the resolution of this coupled problem by the Monte-Carlo method, which allows us to calculate the evolution of the resistance of the coil as well as the limited current. Experimental results are compared with theoretical ones.

  12. Developmental problems and their solution for the Space Shuttle main engine alternate liquid oxygen high-pressure turbopump: Anomaly or failure investigation the key

    NASA Astrophysics Data System (ADS)

    Ryan, R.; Gross, L. A.

    1995-05-01

    The Space Shuttle main engine (SSME) alternate high-pressure liquid oxygen pump experienced synchronous vibration and ball bearing life problems that were program threatening. The success of the program hinged on the ability to solve these development problems. The design and solutions to these problems are grounded in the lessons learned and experiences from prior programs, technology programs, and the ability to properly conduct failure or anomaly investigations. The failure investigation determines the problem cause and is the basis for recommending design solutions. For a complex problem, a comprehensive solution requires that formal investigation procedures be used, including fault trees, resolution logic, and action items worked through a concurrent engineering-multidiscipline team. The normal tendency to use an intuitive, cut-and-try approach will usually prove to be costly, both in money and time, and will reach a less than optimum, poorly understood answer. The SSME alternate high-pressure oxidizer turbopump development has had two complex problems critical to program success: (1) high synchronous vibrations and (2) excessive ball bearing wear. This paper will use these two problems as examples of this formal failure investigation approach. The results of the team's investigation provide insight into the complexity of the interacting turbomachinery technical disciplines and their sensitivities, and the fine balance of competing investigations required to solve problems and guarantee program success. It is very important to the solution process that maximum use be made of the resources that both the contractor and Government can bring to the problem in a supporting and noncompeting way. There is no place for the not-invented-here attitude. The resources include, but are not limited to: (1) specially skilled professionals; (2) supporting technologies; (3) computational codes and capabilities; and (4) test and manufacturing facilities.

  13. Developmental problems and their solution for the Space Shuttle main engine alternate liquid oxygen high-pressure turbopump: Anomaly or failure investigation the key

    NASA Technical Reports Server (NTRS)

    Ryan, R.; Gross, L. A.

    1995-01-01

    The Space Shuttle main engine (SSME) alternate high-pressure liquid oxygen pump experienced synchronous vibration and ball bearing life problems that were program threatening. The success of the program hinged on the ability to solve these development problems. The design and solutions to these problems are grounded in the lessons learned and experiences from prior programs, technology programs, and the ability to properly conduct failure or anomaly investigations. The failure investigation determines the problem cause and is the basis for recommending design solutions. For a complex problem, a comprehensive solution requires that formal investigation procedures be used, including fault trees, resolution logic, and action items worked through a concurrent engineering-multidiscipline team. The normal tendency to use an intuitive, cut-and-try approach will usually prove to be costly, both in money and time, and will reach a less than optimum, poorly understood answer. The SSME alternate high-pressure oxidizer turbopump development has had two complex problems critical to program success: (1) high synchronous vibrations and (2) excessive ball bearing wear. This paper will use these two problems as examples of this formal failure investigation approach. The results of the team's investigation provide insight into the complexity of the interacting turbomachinery technical disciplines and their sensitivities, and the fine balance of competing investigations required to solve problems and guarantee program success. It is very important to the solution process that maximum use be made of the resources that both the contractor and Government can bring to the problem in a supporting and noncompeting way. There is no place for the not-invented-here attitude. The resources include, but are not limited to: (1) specially skilled professionals; (2) supporting technologies; (3) computational codes and capabilities; and (4) test and manufacturing facilities.

  14. Interferometric temporal focusing microscopy using three-photon excitation fluorescence.

    PubMed

    Toda, Keisuke; Isobe, Keisuke; Namiki, Kana; Kawano, Hiroyuki; Miyawaki, Atsushi; Midorikawa, Katsumi

    2018-04-01

    Super-resolution microscopy has become a powerful tool for biological research. However, its spatial resolution and imaging depth are limited, largely due to background light. Interferometric temporal focusing (ITF) microscopy, which combines structured illumination microscopy and three-photon excitation fluorescence microscopy, can overcome these limitations. Here, we demonstrate ITF microscopy using three-photon excitation fluorescence, which has a spatial resolution of 106 nm at an imaging depth of 100 µm with an excitation wavelength of 1060 nm.

  15. Breaking the acoustic diffraction barrier with localization optoacoustic tomography

    NASA Astrophysics Data System (ADS)

    Deán-Ben, X. Luís.; Razansky, Daniel

    2018-02-01

    Diffraction causes blurring of high-resolution features in images and has traditionally been associated with the resolution limit in light microscopy and other imaging modalities. The resolution of an imaging system can generally be assessed via its point spread function, corresponding to the image acquired from a point source. However, the precision in determining the position of an isolated source can greatly exceed the diffraction limit. By combining the estimated positions of multiple sources, localization-based imaging has resulted in groundbreaking methods such as super-resolution fluorescence optical microscopy and has also enabled ultrasound imaging of microvascular structures with unprecedented spatial resolution in deep tissues. Herein, we introduce localization optoacoustic tomography (LOT) and discuss the prospects of using localization imaging principles in optoacoustic imaging. LOT was experimentally implemented by real-time imaging of flowing particles in 3D with a recently-developed volumetric optoacoustic tomography system. Provided the particles were separated by a distance larger than the diffraction-limited resolution, their individual locations could be accurately determined in each frame of the acquired image sequence, and the localization image was formed by superimposing a set of points corresponding to the localized positions of the absorbers. The presented results demonstrate that LOT can significantly enhance the well-established advantages of optoacoustic imaging by breaking the acoustic diffraction barrier in deep tissues and mitigating artifacts due to limited-view tomographic acquisitions.
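    The localization principle described above can be shown with a minimal numerical sketch: the image of an isolated point source is a broad diffraction-limited spot, yet its center can be estimated to a small fraction of the spot width by an intensity-weighted centroid. The Gaussian point spread function, grid, and source position below are illustrative assumptions, not the LOT implementation.

```python
import math

PIXEL = 1.0      # detector pixel size (arbitrary units, assumed)
SIGMA = 3.0      # PSF width: the "diffraction limit" spans several pixels
TRUE_POS = 10.37 # sub-pixel ground-truth source position (assumed)

# sampled intensity of a Gaussian spot centered at TRUE_POS
pixels = range(0, 21)
intensity = [math.exp(-((i * PIXEL - TRUE_POS) ** 2) / (2 * SIGMA ** 2))
             for i in pixels]

# centroid estimate of the source position
total = sum(intensity)
centroid = sum(i * PIXEL * v for i, v in zip(pixels, intensity)) / total

# localization error: far below both the PSF width and the pixel size
error = abs(centroid - TRUE_POS)
```

    Superimposing many such localized positions, frame by frame, is what builds up an image whose effective resolution is set by the localization precision rather than by the diffraction-limited spot size.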

  16. 39 CFR 601.107 - Initial disagreement resolution.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... resolution. Alternative dispute resolution (ADR) procedures may be used to resolve a disagreement. If the use of ADR is agreed upon, the 10-day limitation is suspended. If agreement cannot be reached, the...

  17. 39 CFR 601.107 - Initial disagreement resolution.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... resolution. Alternative dispute resolution (ADR) procedures may be used to resolve a disagreement. If the use of ADR is agreed upon, the 10-day limitation is suspended. If agreement cannot be reached, the...

  18. Super-resolution using a light inception layer in convolutional neural network

    NASA Astrophysics Data System (ADS)

    Mou, Qinyang; Guo, Jun

    2018-04-01

    Recently, several models based on CNN architectures have achieved great results on the Single Image Super-Resolution (SISR) problem. In this paper, we propose an image super-resolution (SR) method using a light inception layer in a convolutional network (LICN). Owing to the strong representation ability of our well-designed inception layer, which can learn richer representations with fewer parameters, we can build our model with a shallow architecture that reduces the effect of the vanishing gradients problem and saves computational cost. Our model strikes a balance between computational speed and the quality of the result. Compared with state-of-the-art results, we produce comparable or better results at faster computational speed.

  19. Formulation of image fusion as a constrained least squares optimization problem

    PubMed Central

    Dwork, Nicholas; Lasry, Eric M.; Pauly, John M.; Balbás, Jorge

    2017-01-01

    Fusing a lower resolution color image with a higher resolution monochrome image is a common practice in medical imaging. By incorporating spatial context and/or improving the signal-to-noise ratio, it provides clinicians with a single frame of the most complete information for diagnosis. In this paper, image fusion is formulated as a convex optimization problem that avoids image decomposition and permits operations at the pixel level. This results in a highly efficient and embarrassingly parallelizable algorithm based on widely available robust and simple numerical methods that realizes the fused image as the global minimizer of the convex optimization problem. PMID:28331885
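    The pixel-level least squares idea can be sketched in miniature. The unconstrained quadratic below is a simplified stand-in for the paper's constrained formulation, and the 1-D "images" and weight `lam` are illustrative assumptions: each fused value trades off fidelity to the high-resolution monochrome image against fidelity to the upsampled low-resolution color channel, with a per-pixel closed form.

```python
# per-pixel objective (assumed, simplified):
#   min_f (f - mono)^2 + lam * (f - color)^2
# stationarity gives the weighted average f = (mono + lam*color) / (1 + lam)

mono = [0.9, 0.8, 0.2, 0.1]    # high-resolution monochrome intensities
color = [0.6, 0.6, 0.3, 0.3]   # low-resolution channel, already upsampled
lam = 0.5                      # weight on color fidelity (assumed)

fused = [(m + lam * c) / (1 + lam) for m, c in zip(mono, color)]
```

    Because the objective is separable across pixels, every fused value is computed independently, which is what makes the approach "embarrassingly parallelizable"; the paper's full formulation adds constraints but keeps this pixel-level structure.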

  20. Non-adversarial justice and the coroner's court: a proposed therapeutic, restorative, problem-solving model.

    PubMed

    King, Michael S

    2008-12-01

    Increasingly courts are using new approaches that promote a more comprehensive resolution of legal problems, minimise any negative effects that legal processes have on participant wellbeing and/or that use legal processes to promote participant wellbeing. Therapeutic jurisprudence, restorative justice, mediation and problem-solving courts are examples. This article suggests a model for the use of these processes in the coroner's court to minimise negative effects of coroner's court processes on the bereaved and to promote a more comprehensive resolution of matters at issue, including the determination of the cause of death and the public health and safety promotion role of the coroner.

  1. Aortic endothelium detection using spectral estimation optical coherence tomography (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Liu, Xinyu; Chen, Si; Luo, Yuemei; Bo, En; Wang, Nanshuo; Yu, Xiaojun; Liu, Linbo

    2016-02-01

    The evaluation of endothelium coverage on the vessel wall is highly sought after by cardiologists. Arterial endothelial cells play a crucial role in keeping low-density lipoprotein and leukocytes from entering the intima. Damage to endothelial cells is considered the first step of atherosclerosis development, and the presence of endothelial cells is an indicator of arterial healing after stent implantation. Intravascular OCT (IVOCT) is the highest-resolution coronary imaging modality, but it is still limited by an axial resolution of 10-15 µm. This limitation in axial resolution hinders our ability to visualize cellular-level details associated with coronary atherosclerosis. Spectral estimation optical coherence tomography (SE-OCT) uses modern spectral estimation techniques and may help reveal microstructures below the resolution limit. In this presentation, we conduct an ex vivo study using SE-OCT to image the endothelial cells on fresh swine aorta. We find that in OCT images with an axial resolution of 10 µm, we can gain visibility of individual endothelial cells by applying autoregressive spectral estimation techniques to enhance the axial resolution. We believe SE-OCT has the potential to evaluate the coverage of endothelial cells using current IVOCT with a 10-µm axial resolution.

  2. Next-generation marine instruments to join plume debate

    NASA Astrophysics Data System (ADS)

    Simons, F. J.; Nolet, G.; Babcock, J.

    2003-12-01

    Whether hot spot volcanism is the consequence of plate tectonics or has a deep origin in a mantle plume is debated. G. Foulger (Geol. Soc. London Lett. Online, accessed 9/3/2003) writes that carefully truncated cross sections, with color scales cranked up, give noisy images the illusion of strong anomalies traversing the mantle. Don Anderson, the big daddy of non-plume hypotheses (R. Kent, Geol. Soc. London Lett. Online, accessed 9/3/2003), has written that the resolution of regional tomography experiments must be improved in order to successfully determine whether (...) the deep mantle is the controlling factor in the formation of proposed hot spots (Keller et al., GRL 27 (24), 2000). In particular for Iceland, at issue is the inherently limited aperture of any land-based seismometer array on the island: (...) the resolution of such images could be increased (...) by using ocean bottom seismometers (...) (ibidem). These problems are not unique to the plume debate. The coverage, resolution and robustness of models of the wave speed distribution in the interior of the Earth obtained by seismic tomographic inversions are limited by the areal distribution of seismic stations. Two thirds of Earth's surface is virtually inaccessible to passive-source seismometry, save indeed for expensive ocean-bottom seismometers or moored hydrophones. Elsewhere at this meeting, Montelli et al. describe how an improved theoretical treatment of the generation and survival of travel-time anomalies and sophisticated parameterization techniques yield unprecedented resolution of the seismic expression of a variety of "plumes" coming from all depths within the mantle. On the other hand, the improved resolution required to settle the debate on the depth of the seismic origin of various hot spots will also result from the collection of previously inaccessible data. Here, we show our progress in the development of an independent hydro-acoustical recording device mounted on SOLO floats. 
Our instrument is able to maintain a constant water column depth below the sound channel and will surface only periodically for position determination and satellite data communication. Using these low-cost, non-recovered floating sensors, the aperture of arrays mounted on oceanic islands can be increased manyfold. Furthermore, adding such instruments to poorly instrumented areas will improve the resolution of deep Earth structure more dramatically than the addition of stations in already densely sampled continental areas. We have also made progress in the design of intelligent algorithms for the automatic identification and discrimination of the seismic phases that are expected to be recorded. We currently recognize teleseismic arrivals in the presence of local P, S, and T phases, ship and whale noise, and other contaminating factors such as airgunning. Our approach combines continuous time-domain processing, spectrogram analysis, and custom-made wavelet methods new to global seismology. The lifespan and cost of the instrument depend critically on its ability to limit power consumption by using a minimum number of processing steps. Hence, we pay particular attention to the numerical implementation and efficiency of our algorithms, which are shown to be accurate while approaching a theoretical limit of efficiency. We show examples on data from ridge-tethered hydrophones and expect preliminary results from a first test deployment in October.
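
    A classic example of the continuous time-domain processing the abstract mentions is the STA/LTA energy-ratio trigger, long used for automatic phase detection on low-power instruments (a hedged sketch only: the window lengths, threshold, and synthetic record are assumptions, not the authors' algorithm):

```python
import numpy as np

def sta_lta(x, nsta, nlta):
    """Ratio of short-term to long-term average signal energy (trailing windows)."""
    e = np.asarray(x, float) ** 2
    c = np.concatenate(([0.0], np.cumsum(e)))     # cumulative energy: O(N) windows
    sta = (c[nsta:] - c[:-nsta]) / nsta
    lta = (c[nlta:] - c[:-nlta]) / nlta
    m = min(len(sta), len(lta))
    return sta[-m:] / (lta[-m:] + 1e-12)          # align both series at the record end

rng = np.random.default_rng(1)
x = 0.1 * rng.standard_normal(2000)               # ambient noise
x[1200:1300] += np.sin(2 * np.pi * 0.05 * np.arange(100))   # synthetic arrival
ratio = sta_lta(x, nsta=20, nlta=400)
triggered = bool(ratio.max() > 5.0)               # declare a detection
```

The cumulative-sum formulation keeps the per-sample cost to a few additions, in the spirit of the power-budget constraint described above.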

  3. Can Social Stories Enhance the Interpersonal Conflict Resolution Skills of Children with LD?

    ERIC Educational Resources Information Center

    Kalyva, Efrosini; Agaliotis, Ioannis

    2009-01-01

    Since many children with learning disabilities (LD) face interpersonal conflict resolution problems, this study examines the efficacy of social stories in helping them choose more appropriate interpersonal conflict resolution strategies. A social story was recorded and played to the 31 children with LD in the experimental group twice a week for a…

  4. Developmental Changes in Conflict Resolution Styles in Parent-Adolescent Relationships: A Four-Wave Longitudinal Study

    ERIC Educational Resources Information Center

    Van Doorn, Muriel D.; Branje, Susan J. T.; Meeus, Wim H. J.

    2011-01-01

    In this study, changes in three conflict resolution styles in parent-adolescent relationships were investigated: positive problem solving, conflict engagement, and withdrawal. Questionnaires about these conflict resolution styles were completed by 314 early adolescents (M = 13.3 years; 50.6% girls) and both parents for four consecutive years.…

  5. Adolescents', mothers', and fathers' gendered coping strategies during conflict: Youth and parent influences on conflict resolution and psychopathology.

    PubMed

    Marceau, Kristine; Zahn-Waxler, Carolyn; Shirtcliff, Elizabeth A; Schreiber, Jane E; Hastings, Paul; Klimes-Dougan, Bonnie

    2015-11-01

    We observed gendered coping strategies and conflict resolution outcomes used by adolescents and parents during a conflict discussion task to evaluate associations with current and later adolescent psychopathology. We studied 137 middle- to upper-middle-class, predominantly Caucasian families of adolescents (aged 11-16 years, 65 males) who represented a range of psychological functioning, including normative, subclinical, and clinical levels of problems. Adolescent coping strategies played key roles both in the extent to which parent-adolescent dyads resolved conflict and in the trajectory of psychopathology symptom severity over a 2-year period. Gender-prototypic adaptive coping strategies were observed in parents but not youth (i.e., more problem solving by fathers than mothers and more regulated emotion-focused coping by mothers than fathers). Youth-mother dyads more often achieved full resolution of conflict than youth-father dyads. There were generally no bidirectional effects among youth and parents' coping across the discussion, except that boys' initial use of angry/hostile coping predicted fathers' angry/hostile coping. The child was more influential than the parent on conflict resolution. This extended to exacerbation/alleviation of psychopathology over 2 years: higher conflict resolution mediated the association of adolescents' use of problem-focused coping with decreases in symptom severity over time; lower conflict resolution mediated the association of adolescents' use of angry/hostile emotion coping with increases in symptom severity over time. Implications of the findings are considered within a broadened context of the nature of coping and conflict resolution in youth-parent interactions, as well as how these processes impact youth well-being and dysfunction over time.

  6. Adolescents’, Mothers’, and Fathers’ Gendered Coping Strategies during Conflict: Youth and Parent Influences on Conflict Resolution and Psychopathology

    PubMed Central

    Marceau, Kristine; Zahn-Waxler, Carolyn; Shirtcliff, Elizabeth A.; Schreiber, Jane E; Hastings, Paul; Klimes-Dougan, Bonnie

    2015-01-01

    We observed gendered coping strategies and conflict resolution outcomes used by adolescents and parents during a conflict discussion task to evaluate associations with current and later adolescent psychopathology. We studied 137 middle-to-upper-middle class predominantly Caucasian families of adolescents (aged 11–16 years, 65 males) who represented a range of psychological functioning including normative (~1/3) sub-clinical (~1/3) and clinical (~1/3) levels of problems. Adolescent coping strategies played key roles both in the extent to which parent-adolescent dyads resolved conflict and in the trajectory of psychopathology symptom severity over a two-year period. Gender-prototypic adaptive coping strategies were observed in parents but not youth, i.e. more problem-solving by fathers than mothers and more regulated emotion-focused coping by mothers than fathers. Youth-mother dyads more often achieved full resolution of conflict than youth-father dyads. There were generally not bidirectional effects among youth and parents’ coping across the discussion except boys’ initial use of angry/hostile coping predicted fathers’ angry/hostile coping. The child was more influential than the parent on conflict resolution. This extended to exacerbation/alleviation of psychopathology over two years: higher conflict resolution mediated the association of adolescents’ use of problem-focused coping with decreases in symptom severity over time. Lower conflict resolution mediated the association of adolescents’ use of angry/hostile emotion coping with increases in symptom severity over time. Implications of findings are considered within a broadened context of the nature of coping and conflict resolution in youth-parent interactions, as well as how these processes impact on youth well-being and dysfunction over time. PMID:26439060

  7. Downscaling of Remotely Sensed Land Surface Temperature with multi-sensor based products

    NASA Astrophysics Data System (ADS)

    Jeong, J.; Baik, J.; Choi, M.

    2016-12-01

    Remotely sensed satellite data provide a bird's-eye view that allows us to understand the spatiotemporal behavior of hydrologic variables at the global scale. In particular, geostationary satellites, which continuously observe specific regions, are useful for monitoring fluctuations of hydrologic variables as well as meteorological factors. However, problems remain regarding spatial resolution: fine-scale land cover may not be well represented at the spatial resolution of the satellite sensor, especially in areas of complex topography. To address this, many researchers have tried to establish relationships among various hydrological factors and to combine images from multiple sensors in order to downscale land surface products. One geostationary satellite, the Communication, Ocean and Meteorological Satellite (COMS), carries a Meteorological Imager (MI) and a Geostationary Ocean Color Imager (GOCI). MI, which performs the meteorological mission, produces Rainfall Intensity (RI), Land Surface Temperature (LST), and many other products every 15 minutes. Despite this high temporal resolution, the low spatial resolution of MI data is treated as a major research problem in many studies. This study suggests a methodology to downscale 4 km LST datasets derived from MI to a finer resolution (500 m) by using GOCI datasets over Northeast Asia. The Normalized Difference Vegetation Index (NDVI), recognized as a variable with a significant relationship to LST, is chosen to estimate LST at the finer resolution. Each pixel of NDVI and LST is stratified according to land cover provided by the MODerate resolution Imaging Spectroradiometer (MODIS) to achieve a more accurate relationship. The downscaled LST is compared with LST observed by the Automated Synoptic Observing System (ASOS) to assess its accuracy. The downscaled LST results of this study, coupled with the advantages of a geostationary satellite, can be applied to observe hydrologic processes efficiently.
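
    The NDVI-based sharpening step can be sketched in a few lines of numpy (a TsHARP-style scheme offered purely as an illustration; the study's actual regression, land-cover stratification, and 4 km to 500 m grids are replaced here by a toy 2x example with synthetic data):

```python
import numpy as np

def downscale_lst(lst_coarse, ndvi_coarse, ndvi_fine, factor):
    """Fit LST ~ NDVI at the coarse scale, predict at the fine scale,
    then add back each coarse cell's residual so coarse means are preserved."""
    b, a = np.polyfit(ndvi_coarse.ravel(), lst_coarse.ravel(), 1)  # slope, intercept
    resid = lst_coarse - (a + b * ndvi_coarse)       # model error per coarse cell
    lst_fine = a + b * ndvi_fine
    lst_fine += np.kron(resid, np.ones((factor, factor)))          # reinject residual
    return lst_fine

# Synthetic demo: 4x4 coarse grid sharpened to an 8x8 fine grid (factor 2)
rng = np.random.default_rng(2)
ndvi_fine = rng.uniform(0.1, 0.8, (8, 8))
ndvi_coarse = ndvi_fine.reshape(4, 2, 4, 2).mean(axis=(1, 3))
lst_true = 320.0 - 30.0 * ndvi_fine                  # cooler where greener
lst_coarse = lst_true.reshape(4, 2, 4, 2).mean(axis=(1, 3))
lst_fine = downscale_lst(lst_coarse, ndvi_coarse, ndvi_fine, factor=2)
```

Because the synthetic LST-NDVI relation is exactly linear, the sharpened field reproduces the fine-scale truth; with real data the residual term carries whatever the regression cannot explain.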

  8. Manage Your Life Online (MYLO): a pilot trial of a conversational computer-based intervention for problem solving in a student sample.

    PubMed

    Gaffney, Hannah; Mansell, Warren; Edwards, Rachel; Wright, Jason

    2014-11-01

    Computerized self-help that has an interactive, conversational format holds several advantages, such as flexibility across presenting problems and ease of use. We designed a new program called MYLO that utilizes the principles of METHOD of Levels (MOL) therapy--based upon Perceptual Control Theory (PCT). We tested the efficacy of MYLO, tested whether the psychological change mechanisms described by PCT mediated its efficacy, and evaluated effects of client expectancy. Forty-eight student participants were randomly assigned to MYLO or a comparison program ELIZA. Participants discussed a problem they were currently experiencing with their assigned program and completed measures of distress, resolution and expectancy preintervention, postintervention and at 2-week follow-up. MYLO and ELIZA were associated with reductions in distress, depression, anxiety and stress. MYLO was considered more helpful and led to greater problem resolution. The psychological change processes predicted higher ratings of MYLO's helpfulness and reductions in distress. Positive expectancies towards computer-based problem solving correlated with MYLO's perceived helpfulness and greater problem resolution, and this was partly mediated by the psychological change processes identified. The findings provide provisional support for the acceptability of the MYLO program in a non-clinical sample although its efficacy as an innovative computer-based aid to problem solving remains unclear. Nevertheless, the findings provide tentative early support for the mechanisms of psychological change identified within PCT and highlight the importance of client expectations on predicting engagement in computer-based self-help.

  9. MT+, integrating magnetotellurics to determine earth structure, physical state, and processes

    USGS Publications Warehouse

    Bedrosian, P.A.

    2007-01-01

    As one of the few deep-earth imaging techniques, magnetotellurics provides information on both the structure and physical state of the crust and upper mantle. Magnetotellurics is sensitive to electrical conductivity, which varies within the earth by many orders of magnitude and is modified by a range of earth processes. As with all geophysical techniques, magnetotellurics has a non-unique inverse problem and has limitations in resolution and sensitivity. As such, an integrated approach, either via the joint interpretation of independent geophysical models, or through the simultaneous inversion of independent data sets is valuable, and at times essential to an accurate interpretation. Magnetotelluric data and models are increasingly integrated with geological, geophysical and geochemical information. This review considers recent studies that illustrate the ways in which such information is combined, from qualitative comparisons to statistical correlation studies to multi-property inversions. Also emphasized are the range of problems addressed by these integrated approaches, and their value in elucidating earth structure, physical state, and processes. © Springer Science+Business Media B.V. 2007.

  10. Application of Fourier-wavelet regularized deconvolution for improving image quality of free space propagation x-ray phase contrast imaging.

    PubMed

    Zhou, Zhongxing; Gao, Feng; Zhao, Huijuan; Zhang, Lixin

    2012-11-21

    New x-ray phase contrast imaging techniques that do not use synchrotron radiation face a common problem: the negative effects of finite source size and limited spatial resolution. These effects swamp the fine phase contrast fringes and make them almost undetectable. In order to alleviate this problem, deconvolution procedures should be applied to the blurred x-ray phase contrast images. In this study, three different deconvolution techniques, including Wiener filtering, Tikhonov regularization and Fourier-wavelet regularized deconvolution (ForWaRD), were applied to simulated and experimental free space propagation x-ray phase contrast images of simple geometric phantoms. These algorithms were evaluated in terms of phase contrast improvement and signal-to-noise ratio. The results demonstrate that the ForWaRD algorithm is the most appropriate of the above-mentioned methods for phase contrast image restoration; it can effectively restore the lost information of the phase contrast fringes while reducing the noise amplified during Fourier regularization.
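
    Of the three techniques compared, Wiener filtering is the simplest to sketch. Below is a minimal 1-D frequency-domain version (illustrative only: the Gaussian blur kernel, noise-to-signal ratio, and synthetic fringes are assumptions, and the paper's ForWaRD method additionally applies wavelet-domain regularization on top of a Fourier step like this one):

```python
import numpy as np

def wiener_deconv(blurred, psf, nsr=1e-6):
    """Frequency-domain Wiener filter: F = H* G / (|H|^2 + NSR)."""
    H = np.fft.fft(psf)
    G = np.fft.fft(blurred)
    return np.real(np.fft.ifft(np.conj(H) * G / (np.abs(H) ** 2 + nsr)))

N = 256
t = np.arange(N)
x = np.zeros(N)
x[100] = x[110] = 1.0                              # two sharp "fringes"
psf = np.exp(-0.5 * (np.minimum(t, N - t) / 1.5) ** 2)   # Gaussian blur (finite source)
psf /= psf.sum()
blurred = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(psf)))
restored = wiener_deconv(blurred, psf)
```

The NSR term keeps the division stable where |H| is tiny; that same stabilization is what amplifies noise and motivates the wavelet shrinkage stage in ForWaRD.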

  11. The Newfoundland Basin - Ocean-continent boundary and Mesozoic seafloor spreading history

    NASA Technical Reports Server (NTRS)

    Sullivan, K. D.

    1983-01-01

    It is pointed out that over the past 15 years there has been considerable progress in the refinement of predrift fits and seafloor spreading models of the North Atlantic. With the widespread acceptance of these basic models has come increasing interest in the resolution of specific paleogeographic and kinematic problems. Two such problems are the initial position of Iberia with respect to North America and the geometry and chronology of early (pre-80 m.y.) relative motions between these two plates. The present investigation is concerned with geophysical data from numerous Bedford Institute/Dalhousie University cruises to the Newfoundland Basin which were undertaken to determine the location of the ocean-continent boundary (OCB) and the Mesozoic spreading history on the western side. From the examination of magnetic data in the Newfoundland Basin, the OCB east of the Grand Banks is defined as the seaward limit of the 'smooth' magnetic domain which characterizes the surrounding continental shelves. A substantial improvement in Iberia-North America paleogeographic reconstructions is achieved.

  12. Sparsity-based super-resolved coherent diffraction imaging of one-dimensional objects.

    PubMed

    Sidorenko, Pavel; Kfir, Ofer; Shechtman, Yoav; Fleischer, Avner; Eldar, Yonina C; Segev, Mordechai; Cohen, Oren

    2015-09-08

    Phase-retrieval problems of one-dimensional (1D) signals are known to suffer from ambiguity that hampers their recovery from measurements of their Fourier magnitude, even when their support (a region that confines the signal) is known. Here we demonstrate sparsity-based coherent diffraction imaging of 1D objects using extreme-ultraviolet radiation produced from high harmonic generation. Using sparsity as prior information removes the ambiguity in many cases and enhances the resolution beyond the physical limit of the microscope. Our approach may be used in a variety of problems, such as diagnostics of defects in microelectronic chips. Importantly, this is the first demonstration of sparsity-based 1D phase retrieval from actual experiments, hence it paves the way for greatly improving the performance of Fourier-based measurement systems where 1D signals are inherent, such as diagnostics of ultrashort laser pulses, deciphering the complex time-dependent response functions (for example, time-dependent permittivity and permeability) from spectral measurements and vice versa.
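
    The 1D ambiguity the authors refer to is easy to demonstrate numerically: a signal, any circular shift of it, and its time-reversed "twin" all share exactly the same Fourier magnitude, so magnitude-only measurements cannot distinguish them (a minimal numpy illustration of the ambiguity itself, not of the paper's sparsity-based recovery):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.standard_normal(64)

shifted = np.roll(x, 5)          # circular shift: identical Fourier magnitude
twin = np.roll(x[::-1], 1)       # time reversal x[(-n) mod N]: the "twin image"

mag = np.abs(np.fft.fft(x))
mag_shift = np.abs(np.fft.fft(shifted))
mag_twin = np.abs(np.fft.fft(twin))
```

A sparsity prior helps precisely because the shifted and twin solutions generally violate the known support or sparsity pattern, leaving (in many cases) a unique candidate.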

  13. Continuous probing of cold complex molecules with infrared frequency comb spectroscopy

    NASA Astrophysics Data System (ADS)

    Spaun, Ben; Changala, P. Bryan; Patterson, David; Bjork, Bryce J.; Heckl, Oliver H.; Doyle, John M.; Ye, Jun

    2016-05-01

    For more than half a century, high-resolution infrared spectroscopy has played a crucial role in probing molecular structure and dynamics. Such studies have so far been largely restricted to relatively small and simple systems, because at room temperature even molecules of modest size already occupy many millions of rotational/vibrational states, yielding highly congested spectra that are difficult to assign. Targeting more complex molecules requires methods that can record broadband infrared spectra (that is, spanning multiple vibrational bands) with both high resolution and high sensitivity. However, infrared spectroscopic techniques have hitherto been limited either by narrow bandwidth and long acquisition time, or by low sensitivity and resolution. Cavity-enhanced direct frequency comb spectroscopy (CE-DFCS) combines the inherent broad bandwidth and high resolution of an optical frequency comb with the high detection sensitivity provided by a high-finesse enhancement cavity, but it still suffers from spectral congestion. Here we show that this problem can be overcome by using buffer gas cooling to produce continuous, cold samples of molecules that are then subjected to CE-DFCS. This integration allows us to acquire a rotationally resolved direct absorption spectrum in the C-H stretching region of nitromethane, a model system that challenges our understanding of large-amplitude vibrational motion. We have also used this technique on several large organic molecules that are of fundamental spectroscopic and astrochemical relevance, including naphthalene, adamantane and hexamethylenetetramine. These findings establish the value of our approach for studying much larger and more complex molecules than have been probed so far, enabling complex molecules and their kinetics to be studied with orders-of-magnitude improvements in efficiency, spectral resolution and specificity.

  14. Prototype Global Burnt Area Algorithm Using a Multi-sensor Approach

    NASA Astrophysics Data System (ADS)

    López Saldaña, G.; Pereira, J.; Aires, F.

    2013-05-01

    One of the main limitations of products derived from remotely sensed data is the length of the data records available for climate studies. The Advanced Very High Resolution Radiometer (AVHRR) long-term data record (LTDR) comprises a daily global atmospherically-corrected surface reflectance dataset at 0.05° spatial resolution and is available for the 1981-1999 time period. The Moderate Resolution Imaging Spectroradiometer (MODIS) instrument has been in orbit on the Terra platform since late 1999 and on Aqua since mid 2002; surface reflectance products, MYD09CMG and MOD09CMG, are available at 0.05° spatial resolution. Fire is a strong cause of land surface change and of greenhouse gas emissions around the globe, and a global long-term identification of areas affected by fire is needed to analyze trends and fire-climate relationships. A burnt area algorithm can be seen as a change point detection problem in which there is an abrupt change in the surface reflectance due to biomass burning. Using the AVHRR-LTDR and the aforementioned MODIS products, a time series of bidirectional reflectance distribution function (BRDF) corrected surface reflectance was generated from the daily observations, constraining the BRDF model inversion with a climatology of BRDF parameters derived from 12 years of MODIS data. The identification of burnt areas was performed using a t-test on the pre- and post-fire reflectance values together with a change point detection algorithm; spectral constraints were then applied to flag changes caused by natural land processes like vegetation seasonality or flooding. Additional temporal constraints are applied, focusing on the persistence of the affected areas. Initial results for the years 1998 to 2002 show spatio-temporal coherence, but further analysis is required, and a formal, rigorous validation will be applied using burn scars identified from high-resolution datasets.
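
    The per-pixel t-test step can be sketched with scipy (a hedged illustration: the reflectance values, sample sizes, and significance level are invented, and the actual algorithm layers spectral and temporal-persistence constraints on top of this test):

```python
import numpy as np
from scipy.stats import ttest_ind

def burnt_flag(pre, post, alpha=0.01):
    """Flag an abrupt reflectance drop: one-sided Welch t-test comparing the
    pre- and post-fire samples of a pixel's reflectance time series."""
    t, p = ttest_ind(post, pre, equal_var=False)
    return bool(t < 0 and p / 2.0 < alpha)     # one-sided: significant decrease only

rng = np.random.default_rng(4)
pre = 0.30 + 0.02 * rng.standard_normal(12)    # pre-fire NIR reflectance samples
burn = 0.12 + 0.02 * rng.standard_normal(12)   # charred surface: sharp drop
```

Requiring a decrease (t < 0) is what separates burning from brightening events; the spectral and persistence checks described above then remove drops caused by flooding or phenology.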

  15. High-Resolution Three-Dimensional Computed Tomography for Assessing Complications Related to Intrathecal Drug Delivery.

    PubMed

    Morgalla, Matthias; Fortunato, Marcos; Azam, Ala; Tatagiba, Marcos; Lepski, Guillherme

    2016-07-01

    The assessment of the functionality of intrathecal drug delivery (IDD) systems remains difficult and time-consuming. Catheter-related problems are still very common and sometimes difficult to diagnose. The aim of the present study is to investigate the accuracy of high-resolution three-dimensional computed tomography (CT) in detecting catheter-related pump dysfunction. An observational, retrospective investigation. Academic medical center in Germany. We used high-resolution three-dimensional (3D) computed tomography with the volume rendering technique (VRT), or fluoroscopy and conventional axial CT, to assess IDD-related complications in 51 patients at our institution who had IDD systems implanted for the treatment of chronic pain or spasticity. Twelve patients (23.5%) presented a total of 22 complications. The main type of complication in our series was catheter-related (50%), followed by pump failure, infection, and inappropriate refilling. Fluoroscopy and conventional CT were used in 12 cases. High-resolution 3D CT VRT scans were used in 35 instances with suspected yet unclear complications. Using 3D CT (VRT), the sensitivity was 58.93% - 100% (CI 95%) and the specificity 87.54% - 100% (CI 95%). The positive predictive value was 58.93% - 100% (CI 95%) and the negative predictive value 87.54% - 100% (CI 95%). Fluoroscopy and axial CT as a combined diagnostic tool had a sensitivity of 8.3% - 91.7% (CI 95%) and a specificity of 62.9% - 100% (CI 95%). The positive predictive value was 19.29% - 100% (CI 95%) and the negative predictive value 44.43% - 96.89% (CI 95%). This study is limited by its observational design and the small number of cases. High-resolution 3D CT VRT is a non-invasive method that can identify IDD-related complications with more precision than axial CT and fluoroscopy.

  16. Natural Environment Characterization Using Hybrid Tomographic Approaches

    NASA Astrophysics Data System (ADS)

    Huang, Yue; Ferro-Famil, Laurent; Reigber, Andreas

    2011-03-01

    SAR tomography (SARTOM) is the extension of the conventional two-dimensional SAR imaging principle to three dimensions [1]. True 3D imaging of a scene is achieved by forming an additional synthetic aperture in elevation and coherently combining images acquired from several parallel flight tracks. This imaging technique allows a direct localization of multiple scattering contributions within the same resolution cell, leading to a refined analysis of volume structures, like forests or dense urban areas. In order to improve the vertical resolution with respect to classical Fourier-based methods, High-Resolution (HR) approaches are used in this paper to perform SAR tomography. Both nonparametric spectral estimators, like Beamforming and Capon, and parametric ones, like MUSIC and Maximum Likelihood, are applied to real data sets and compared in terms of scatterer location accuracy and resolution. It is known that nonparametric approaches are in general more robust to focusing artefacts, whereas parametric approaches are characterized by a better vertical resolution. It has been shown [2], [3] that the performance of these spectral analysis approaches is conditioned by the nature of the scattering response of the observed objects. In the scenario of hybrid environments, where objects with a deterministic response are embedded in a speckle-affected environment, parameter estimation for this type of scatterer becomes a problem of mixed-spectrum estimation. An impenetrable medium, like the ground or a man-made object, possesses an isolated, localized phase center in the vertical direction, leading to a discrete (line) spectrum. This type of scatterer can be considered 'h-localized' and is named an 'Isolated Scatterer' (IS). Natural environments, in contrast, consist of a large number of elementary scatterers continuously distributed in the vertical direction. This type of scatterer can be described as 'h-distributed' and is characterized by a continuous spectrum. 
Therefore, the usual spectral estimators may reach their limits owing to their lack of adaptation to both the statistical features of the backscattered information and the type of spectrum of the considered media. In order to overcome this problem, a tomographic focusing approach based on hybrid spectral estimators is introduced and extended to the polarimetric case. It contains two parallel procedures: one detects and localizes isolated scatterers, and the other characterizes the natural environment by estimating the heights of the ground and of the tree top. These two decoupled procedures permit a more precise characterization of hybrid environments.
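
    The nonparametric estimators discussed above (Beamforming and Capon) can be sketched for a toy multi-baseline configuration (the geometry, wavelength, and noise level below are invented for illustration): both scan a steering vector over candidate heights, with Capon adaptively suppressing leakage from other heights.

```python
import numpy as np

wavelength, r0 = 0.24, 3000.0                  # L-band, slant range (assumed)
b = np.linspace(0.0, 140.0, 8)                 # 8 parallel-track baselines [m]
kz = 4 * np.pi * b / (wavelength * r0)         # height-to-phase wavenumbers

def steer(z):
    """Steering matrix, one unit-norm column per candidate height."""
    return np.exp(1j * np.outer(kz, np.atleast_1d(z))) / np.sqrt(len(kz))

# Two scatterers in one range-azimuth cell, observed over L independent looks
rng = np.random.default_rng(5)
z_true, L = np.array([5.0, 20.0]), 200
s = rng.standard_normal((2, L)) + 1j * rng.standard_normal((2, L))
noise = 0.05 * (rng.standard_normal((8, L)) + 1j * rng.standard_normal((8, L)))
y = steer(z_true) @ s + noise
R = (y @ y.conj().T) / L                       # sample covariance matrix

z = np.linspace(-10.0, 40.0, 501)
A = steer(z)
P_bf = np.real(np.sum(A.conj() * (R @ A), axis=0))                   # Beamforming
P_capon = 1.0 / np.real(np.sum(A.conj() * (np.linalg.inv(R) @ A), axis=0))
```

Note that Capon requires enough looks for an invertible, well-estimated covariance, which is one reason nonparametric methods are applied per multilook cell in practice.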

  17. Improved laser-based triangulation sensor with enhanced range and resolution through adaptive optics-based active beam control.

    PubMed

    Reza, Syed Azer; Khwaja, Tariq Shamim; Mazhar, Mohsin Ali; Niazi, Haris Khan; Nawab, Rahma

    2017-07-20

    Various existing target ranging techniques are limited in terms of dynamic range of operation and measurement resolution. These limitations arise from the particular measurement methodology, the finite processing capability of the hardware components deployed within the sensor module, and the medium through which the target is viewed. Generally, improving the sensor range adversely affects its resolution and vice versa, so a distance sensor is typically designed for an optimal range/resolution setting depending on its intended application. Optical triangulation is broadly classified as a spatial-signal-processing-based ranging technique and measures target distance from the location of the reflected spot on a position sensitive detector (PSD). In most triangulation sensors that use lasers as a light source, beam divergence, which severely affects the sensor measurement range, is often ignored in calculations. In this paper, we first discuss in detail the limitations to ranging imposed by beam divergence, which, in effect, sets the sensor dynamic range. Next, we show how the resolution of laser-based triangulation sensors is limited by the inter-pixel pitch of a finite-sized PSD. Through the use of tunable focus lenses (TFLs), we then propose a novel design of a triangulation-based optical rangefinder that improves both the sensor resolution and its dynamic range through adaptive electronic control of the beam propagation parameters. We present the theory and operation of the proposed sensor and clearly demonstrate a range and resolution improvement with the use of TFLs. Experimental results in support of our claims are shown to be in strong agreement with theory.
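
    The pitch-limited resolution described above follows from the basic triangulation geometry z = f·b/x; a short sketch (the focal length, baseline, and pixel pitch are assumed values for illustration, not those of the paper's sensor):

```python
import numpy as np

FOCAL, BASE, PITCH = 0.025, 0.05, 10e-6   # f [m], baseline [m], PSD pitch [m] (assumed)

def spot_position(z):
    """Forward model: lateral spot displacement on the PSD for a target at range z."""
    return FOCAL * BASE / z

def range_from_spot(x):
    """Invert the model after quantizing the spot to the inter-pixel pitch."""
    xq = np.round(x / PITCH) * PITCH      # finite pitch limits the resolution
    return FOCAL * BASE / xq

z_hat = range_from_spot(spot_position(1.0))
# Range resolution worsens quadratically with distance: dz ~= z^2 * pitch / (f * b)
dz_at_1m = 1.0 ** 2 * PITCH / (FOCAL * BASE)
```

The quadratic growth of dz with z is why a fixed optical design trades range against resolution, and why actively reshaping the beam and imaging optics (as with the paper's TFLs) can shift that trade-off.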

  18. Gradient design for liquid chromatography using multi-scale optimization.

    PubMed

    López-Ureña, S; Torres-Lapasió, J R; Donat, R; García-Alvarez-Coque, M C

    2018-01-26

    In reversed-phase liquid chromatography, the usual solution to the "general elution problem" is the application of gradient elution with programmed changes of organic solvent (or of other properties). Correct quantification of chromatographic peaks in liquid chromatography requires well-resolved signals within a proper analysis time. When the complexity of the sample is high, the gradient program should be accommodated to the local resolution needs of each analyte. This makes the optimization of such situations rather troublesome, since enhancing the resolution for a given analyte may imply a collateral worsening of the resolution of other analytes. The aim of this work is to design multi-linear gradients that maximize the resolution while fulfilling some restrictions: all peaks should elute before a given maximal time, the gradient should be flat or increasing, and sudden changes close to eluting peaks are penalized. Consequently, an equilibrated baseline resolution for all compounds is sought. This goal is achieved by splitting the optimization problem into a multi-scale framework. In each scale κ, an optimization problem is solved with N_κ ≈ 2^κ variables that are used to build the gradients. The N_κ variables define cubic splines written in terms of a B-spline basis. This allows expressing gradients as polygonals of M points approximating the splines. The cubic splines are built using subdivision schemes, a technique for the fast generation of smooth curves that is compatible with the multi-scale framework. Owing to the nature of the problem and the presence of multiple local maxima, the algorithm used in the optimization problem of each scale κ should be "global", such as the pattern-search algorithm. The multi-scale optimization approach is successfully applied to find the best multi-linear gradient for resolving a mixture of amino acid derivatives. Copyright © 2017 Elsevier B.V. All rights reserved.
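
    The subdivision idea, generating a smooth gradient polygonal from a handful of coarse-scale variables, can be sketched with Chaikin's corner-cutting scheme (whose limit curve is a quadratic B-spline; the paper uses cubic B-splines, and the control values below are invented). Because each refined point is a convex combination of neighbors, a flat-or-increasing control polygon stays flat or increasing after refinement:

```python
import numpy as np

def chaikin(points, steps=3):
    """Corner-cutting subdivision: each step doubles the polygonal sampling."""
    p = np.asarray(points, float)
    for _ in range(steps):
        q1 = 0.75 * p[:-1] + 0.25 * p[1:]     # point at 1/4 of each segment
        q2 = 0.25 * p[:-1] + 0.75 * p[1:]     # point at 3/4 of each segment
        p = np.concatenate([[p[0]], np.ravel(np.column_stack([q1, q2])), [p[-1]]])
    return p

# Coarse-scale variables (small kappa): organic solvent %B at equally spaced times
ctrl = np.array([5.0, 5.0, 20.0, 45.0, 80.0])
grad = chaikin(ctrl)          # smooth gradient as a polygonal of M points
```

In the multi-scale framework, the optimizer would adjust the few `ctrl` values at scale κ, refine, and hand the result to the next scale as a warm start.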

  19. Resolution in QCM sensors for the viscosity and density of liquids: application to lead acid batteries.

    PubMed

    Cao-Paz, Ana María; Rodríguez-Pardo, Loreto; Fariña, José; Marcos-Acevedo, Jorge

    2012-01-01

    In battery applications, particularly in automobiles, submarines and remote communications, the state of charge (SoC) is needed in order to manage batteries efficiently. The most widely used physical parameter for this is electrolyte density. However, there is greater dependency between electrolyte viscosity and SoC than that seen for density and SoC. This paper presents a Quartz Crystal Microbalance (QCM) sensor for electrolyte density-viscosity product measurements in lead acid batteries. The sensor is calibrated in H2SO4 solutions in the battery electrolyte range to obtain sensitivity, noise and resolution. Also, real-time tests of charge and discharge are conducted by placing the quartz crystal inside the battery. At the same time, the present theoretical "resolution limit" to measure the square root of the density-viscosity product [Formula: see text] of a liquid medium, or best resolution achievable with a QCM oscillator, is determined. Findings show that the resolution limit only depends on the characteristics of the liquid to be studied and not on frequency. The QCM resolution limit for [Formula: see text] measurements worsens when the density-viscosity product of the liquid is increased, but it cannot be improved by elevating the work frequency.

  20. Picosecond Resolution Time-to-Digital Converter Using Gm-C Integrator and SAR-ADC

    NASA Astrophysics Data System (ADS)

    Xu, Zule; Miyahara, Masaya; Matsuzawa, Akira

    2014-04-01

    A picosecond resolution time-to-digital converter (TDC) is presented. The resolution of a conventional delay chain TDC is limited by the delay of a logic buffer. Various types of recent TDCs are successful in breaking this limitation, but they require a significant calibration effort to achieve picosecond resolution with a sufficient linear range. To address these issues, we propose a simple method to break the resolution limitation without any calibration: a Gm-C integrator followed by a successive approximation register analog-to-digital converter (SAR-ADC). This translates the time interval into charge, and then the charge is quantized. A prototype chip was fabricated in 90 nm CMOS. The measurement results reveal a 1 ps resolution, a -0.6/0.7 LSB differential nonlinearity (DNL), a -1.1/2.3 LSB integral nonlinearity (INL), and a 9-bit range. The measured 11.74 ps single-shot precision is caused by the noise of the integrator. We analyze the noise of the integrator and propose an improved front-end circuit to reduce this noise. The proposal is verified by simulations showing the maximum single-shot precision is less than 1 ps. The proposed front-end circuit can also diminish the mismatch effects.
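
The time-to-charge-to-code chain can be modelled behaviourally. Every numeric value here (Gm, Vref, C, ADC full scale, the 501 ps test interval) is an illustrative assumption, not taken from the prototype; only the 9-bit range matches the abstract.

```python
# Behavioural sketch: a constant current Gm*Vref charges capacitor C for the
# duration of the input time interval, and the resulting voltage is then
# quantized by an ideal SAR ADC (successive approximation = binary search).
GM = 1e-3         # transconductance, S      (illustrative value)
VREF = 0.5        # integrator input, V      (illustrative value)
C = 1e-12         # integration capacitor, F (illustrative value)
FULL_SCALE = 1.0  # ADC full scale, V        (illustrative value)
BITS = 9          # 9-bit range, as in the reported prototype

def time_to_voltage(t_in):
    """Q = Gm * Vref * t, then V = Q / C."""
    return GM * VREF * t_in / C

def sar_adc(v_in, bits=BITS, full_scale=FULL_SCALE):
    """Ideal SAR conversion: set each bit from MSB to LSB, keeping it only
    if the trial code's threshold stays below the input voltage."""
    code = 0
    for b in reversed(range(bits)):
        trial = code | (1 << b)
        if v_in >= trial * full_scale / (1 << bits):
            code = trial
    return code

lsb_time = FULL_SCALE / (1 << BITS) * C / (GM * VREF)  # time per ADC LSB
code = sar_adc(time_to_voltage(501e-12))               # ~501 ps interval
print(round(lsb_time * 1e12, 3), code)
```

With these made-up element values one LSB corresponds to a few picoseconds; the point is that the resolution is set by Gm, C and the ADC depth rather than by a logic buffer delay.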

  1. Using High-Resolution, Regional-Scale Data to Characterize Floating Aquatic Nuisance Vegetation in Coastal Louisiana Navigation Channels

    DTIC Science & Technology

    2014-01-01

    Comparison of footprints from various image sensors used in this study. Landsat (blue) is in the upper left panel, SPOT (yellow) is in the upper right... the higher-resolution sensors evaluated as part of this study are limited to four spectral bands. Moderate resolution processing. ArcGIS... moderate, effective useful coverage may be much more limited for a scene that includes significant amounts of water. Throughout the study period, SPOT 4

  2. Different Procedures for Solving Mathematical Word Problems in High School

    ERIC Educational Resources Information Center

    Gasco, Javier; Villarroel, Jose Domingo; Zuazagoitia, Dani

    2014-01-01

    The teaching and learning of mathematics cannot be understood without considering the resolution of word problems. These kinds of problems not only connect mathematical concepts with language (and therefore with reality) but also promote the learning related to other scientific areas. In primary school, problems are solved by using basic…

  3. Electro-Optic Time-to-Space Converter for Optical Detector Jitter Mitigation

    NASA Technical Reports Server (NTRS)

    Birnbaum, Kevin; Farr, William

    2013-01-01

    A common problem in optical detection is determining the arrival time of a weak optical pulse that may comprise only one to a few photons. Currently, this problem is solved by using a photodetector to convert the optical signal to an electronic signal. The timing of the electrical signal is used to infer the timing of the optical pulse, but error is introduced by random delay between the absorption of the optical pulse and the creation of the electrical one. To eliminate this error, a time-to-space converter separates a sequence of optical pulses and sends them to different photodetectors, depending on their arrival time. The random delay, called jitter, is at least 20 picoseconds for the best detectors capable of detecting the weakest optical pulses, down to a single photon, and can be as great as 500 picoseconds. This limits the resolution with which the timing of the optical pulse can be measured. The time-to-space converter overcomes this limitation. Generally, the time-to-space converter imparts a time-dependent momentum shift to the incoming optical pulses, followed by an optical system that separates photons of different momenta. As an example, an electro-optic phase modulator can be used to apply longitudinal momentum changes (frequency changes) that vary in time, followed by an optical spectrometer (such as a diffraction grating), which separates photons with different momenta into different paths and directs them to impinge upon an array of photodetectors. The pulse arrival time is then inferred by measuring which photodetector receives the pulse. The use of a time-to-space converter mitigates detector jitter and improves the resolution with which the timing of an optical pulse is determined. Also, the application of the converter enables the demodulation of a pulse-position modulation (PPM) signal at higher bandwidths than with previous photodetector technology.
This allows the creation of a receiver for a communication system with high bandwidth and high bits/photon efficiency.

  4. Structural identifiability of cyclic graphical models of biological networks with latent variables.

    PubMed

    Wang, Yulin; Lu, Na; Miao, Hongyu

    2016-06-13

    Graphical models have long been used to describe biological networks for a variety of important tasks such as the determination of key biological parameters, and the structure of a graphical model ultimately determines whether such unknown parameters can be unambiguously obtained from experimental observations (i.e., the identifiability problem). Limited by resources or technical capacities, complex biological networks are usually only partially observed in experiments, which thus introduces latent variables into the corresponding graphical models. A number of previous studies have tackled the parameter identifiability problem for graphical models such as linear structural equation models (SEMs) with or without latent variables. However, the limited resolution and efficiency of existing approaches necessarily calls for further development of novel structural identifiability analysis algorithms. An efficient structural identifiability analysis algorithm is developed in this study for a broad range of network structures. The proposed method adopts Wright's path coefficient method to generate identifiability equations in the form of symbolic polynomials, and then converts these symbolic equations to binary matrices (called identifiability matrices). Several matrix operations are introduced for identifiability matrix reduction with system equivalency maintained. Based on the reduced identifiability matrices, the structural identifiability of each parameter is determined. A number of benchmark models are used to verify the validity of the proposed approach. Finally, the network module for influenza A virus replication is employed as a real example to illustrate the application of the proposed approach in practice. The proposed approach can deal with cyclic networks with latent variables. The key advantage is that it intentionally avoids symbolic computation and is thus highly efficient.
Also, this method is capable of determining the identifiability of each single parameter and is thus of higher resolution in comparison with many existing approaches. Overall, this study provides a basis for systematic examination and refinement of graphical models of biological networks from the identifiability point of view, and it has a significant potential to be extended to more complex network structures or high-dimensional systems.
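
The binary identifiability-matrix idea can be illustrated with a toy sketch. The reduction rule below (an equation involving a single unknown parameter pins that parameter down, and substituting it may expose further single-parameter equations) is a deliberately simplified stand-in for the paper's matrix operations, not a reproduction of them.

```python
import numpy as np

def reduce_identifiability(mat):
    """Toy reduction over a binary identifiability matrix
    (rows = identifiability equations, columns = parameters).
    Repeatedly find rows with exactly one unknown parameter, mark that
    parameter identifiable, and zero its column to 'substitute' it."""
    mat = mat.copy()
    identified = set()
    changed = True
    while changed:
        changed = False
        for row in mat:
            cols = np.flatnonzero(row)
            if len(cols) == 1 and int(cols[0]) not in identified:
                identified.add(int(cols[0]))
                mat[:, cols[0]] = 0   # parameter now known; remove it
                changed = True
    return identified

# Hypothetical equations over parameters p0..p3:
#   one involving p0 alone, one involving p0 and p1, one involving p1, p2, p3.
eqs = np.array([[1, 0, 0, 0],
                [1, 1, 0, 0],
                [0, 1, 1, 1]], dtype=int)
print(sorted(reduce_identifiability(eqs)))
```

Here p0 and p1 come out identifiable, while p2 and p3 remain entangled in a single equation; everything stays in fast binary-matrix arithmetic, which is the efficiency point the abstract makes about avoiding symbolic computation.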

  5. Automatic correspondence detection in mammogram and breast tomosynthesis images

    NASA Astrophysics Data System (ADS)

    Ehrhardt, Jan; Krüger, Julia; Bischof, Arpad; Barkhausen, Jörg; Handels, Heinz

    2012-02-01

    Two-dimensional mammography is the major imaging modality in breast cancer detection. A disadvantage of mammography is the projective nature of this imaging technique. Tomosynthesis is an attractive modality with the potential to combine the high contrast and high resolution of digital mammography with the advantages of 3D imaging. In order to facilitate diagnostics and treatment in the current clinical work-flow, correspondences between tomosynthesis images and previous mammographic exams of the same women have to be determined. In this paper, we propose a method to detect correspondences in 2D mammograms and 3D tomosynthesis images automatically. In general, this 2D/3D correspondence problem is ill-posed, because a point in the 2D mammogram corresponds to a line in the 3D tomosynthesis image. The goal of our method is to detect the "most probable" 3D position in the tomosynthesis images corresponding to a selected point in the 2D mammogram. We present two alternative approaches to solve this 2D/3D correspondence problem: a 2D/3D registration method and a 2D/2D mapping between mammogram and tomosynthesis projection images with a following back projection. The advantages and limitations of both approaches are discussed and the performance of the methods is evaluated qualitatively and quantitatively using a software phantom and clinical breast image data. Although the proposed 2D/3D registration method can compensate for moderate breast deformations caused by different breast compressions, this approach is not suitable for clinical tomosynthesis data due to the limited resolution and blurring effects perpendicular to the direction of projection. The quantitative results show that the proposed 2D/2D mapping method is capable of detecting corresponding positions in mammograms and tomosynthesis images automatically for 61 out of 65 landmarks. 
The proposed method can facilitate diagnosis, visual inspection and comparison of 2D mammograms and 3D tomosynthesis images for the physician.

  6. An Unsplit Monte-Carlo solver for the resolution of the linear Boltzmann equation coupled to (stiff) Bateman equations

    NASA Astrophysics Data System (ADS)

    Bernede, Adrien; Poëtte, Gaël

    2018-02-01

    In this paper, we are interested in the resolution of the time-dependent problem of particle transport in a medium whose composition evolves with time due to interactions. As a constraint, we want to use a Monte-Carlo (MC) scheme for the transport phase. A common resolution strategy consists in a splitting between the MC/transport phase and the time discretization scheme/medium evolution phase. After going over and illustrating the main drawbacks of split solvers in a simplified configuration (monokinetic, scalar Bateman problem), we build a new Unsplit MC (UMC) solver improving the accuracy of the solutions, avoiding numerical instabilities, and less sensitive to time discretization. The new solver is essentially based on a Monte Carlo scheme with time-dependent cross sections, implying the on-the-fly resolution of a reduced model for each MC particle describing the time evolution of the matter along its flight path.

  7. Stream network analysis and geomorphic flood plain mapping from orbital and suborbital remote sensing imagery application to flood hazard studies in central Texas

    NASA Technical Reports Server (NTRS)

    Baker, V. R. (Principal Investigator); Holz, R. K.; Hulke, S. D.; Patton, P. C.; Penteado, M. M.

    1975-01-01

    The author has identified the following significant results. Development of a quantitative hydrogeomorphic approach to flood hazard evaluation was hindered by (1) problems of resolution and definition of the morphometric parameters which have hydrologic significance, and (2) mechanical difficulties in creating the necessary volume of data for meaningful analysis. Measures of network resolution such as drainage density and basin Shreve magnitude indicated that large scale topographic maps offered greater resolution than small scale suborbital imagery and orbital imagery. The disparity in network resolution capabilities between orbital and suborbital imagery formats depends on factors such as rock type, vegetation, and land use. The problem of morphometric data analysis was approached by developing a computer-assisted method for network analysis. The system allows rapid identification of network properties which can then be related to measures of flood response.

  8. Morphological and behavioral limit of visual resolution in temperate (Hippocampus abdominalis) and tropical (Hippocampus taeniopterus) seahorses.

    PubMed

    Lee, Hie Rin; O'Brien, Keely M Bumsted

    2011-07-01

    Seahorses are visually guided feeders that prey upon small fast-moving crustaceans. Seahorse habitats range from clear tropical to turbid temperate waters. How are seahorse retinae specialized to mediate vision in these diverse environments? Most species of seahorse have a specialization in their retina associated with acute vision, the fovea. The purpose of this study was to characterize the fovea of temperate Hippocampus abdominalis and tropical H. taeniopterus seahorses and to investigate their theoretical and behavioral limits of visual resolution. Their foveae were identified and photoreceptor (PR) and ganglion cell (GC) densities determined throughout the retina and topographically mapped. The theoretical limit of visual resolution was calculated using formulas taking into account lens radius and either cone PR or GC densities. Visual resolution was determined behaviorally using reactive distance. Both species possess a rod-free convexiclivate fovea. PR and GC densities were highest along the foveal slope, with a density decrease within the foveal center. Outside the fovea, there was a gradual density decrease towards the periphery. The theoretically calculated visual resolution on the foveal slope was poorer for H. abdominalis (5.25 min of arc) compared with H. taeniopterus (4.63 min of arc) based on PR density. Using GC density, H. abdominalis (9.81 min of arc) had a lower resolution compared with H. taeniopterus (9.04 min of arc). Behaviorally, H. abdominalis had a resolution limit of 1090.64 min of arc, while that of H. taeniopterus was much smaller, 692.86 min of arc. Although both species possess a fovea and the distribution of PR and GC is similar, H. taeniopterus has higher PR and GC densities on the foveal slope and better theoretical and behaviorally measured visual resolution compared to H. abdominalis.
These data indicate that seahorses have a well-developed acute visual system, and tropical seahorses have higher visual resolution compared to temperate seahorses.

  9. Time Series Analysis for Spatial Node Selection in Environment Monitoring Sensor Networks

    PubMed Central

    Bhandari, Siddhartha; Jurdak, Raja; Kusy, Branislav

    2017-01-01

    Wireless sensor networks are widely used in environmental monitoring. The number of sensor nodes to be deployed will vary depending on the desired spatio-temporal resolution. Selecting an optimal number, position and sampling rate for an array of sensor nodes in environmental monitoring is a challenging question. Most of the current solutions are either theoretical or simulation-based where the problems are tackled using random field theory, computational geometry or computer simulations, limiting their specificity to a given sensor deployment. Using an empirical dataset from a mine rehabilitation monitoring sensor network, this work proposes a data-driven approach where co-integrated time series analysis is used to select the number of sensors from a short-term deployment of a larger set of potential node positions. Analyses conducted on temperature time series show 75% of sensors are co-integrated. Using only 25% of the original nodes can generate a complete dataset within a 0.5 °C average error bound. Our data-driven approach to sensor position selection is applicable for spatiotemporal monitoring of spatially correlated environmental parameters to minimize deployment cost without compromising data resolution. PMID:29271880
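
The co-integration screening step might look roughly like this. The check below is a crude AR(1)-based stand-in for a proper Engle-Granger/ADF procedure, run on synthetic series rather than the mine-rehabilitation data: two sensors sharing a common trend co-integrate (one can be reconstructed from the other), while an independent series does not.

```python
import numpy as np

rng = np.random.default_rng(0)

def ar1_coefficient(e):
    """Lag-1 autoregression coefficient of a series (simple OLS)."""
    return np.dot(e[:-1], e[1:]) / np.dot(e[:-1], e[:-1])

def cointegrated(y, x, phi_max=0.9):
    """Crude Engle-Granger-style check: regress y on x, then call the pair
    co-integrated if the residuals mean-revert, i.e. their AR(1)
    coefficient sits well below 1 (a stand-in for a proper ADF test)."""
    X = np.column_stack([x, np.ones_like(x)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return abs(ar1_coefficient(resid)) < phi_max

# Synthetic "temperatures": two sensors share a random-walk trend plus noise;
# a third, independent random walk should fail the check.
trend = np.cumsum(rng.normal(size=2000))
sensor_a = trend + rng.normal(scale=0.5, size=2000)
sensor_b = 0.8 * trend + 3.0 + rng.normal(scale=0.5, size=2000)
independent = np.cumsum(rng.normal(size=2000))

print(cointegrated(sensor_b, sensor_a), cointegrated(independent, sensor_a))
```

Sensors whose readings co-integrate with an already-selected node are redundant and can be dropped from the long-term deployment, which is how the 75% reduction in the abstract arises.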

  10. Aquatic habitat measurement and valuation: imputing social benefits to instream flow levels

    USGS Publications Warehouse

    Douglas, Aaron J.; Johnson, Richard L.

    1991-01-01

    Instream flow conflicts have been analysed from the perspectives offered by policy oriented applied (physical) science, theories of conflict resolution and negotiation strategy, and psychological analyses of the behavior patterns of the bargaining parties. Economics also offers some useful insights in analysing conflict resolution within the context of these water allocation problems. We attempt to analyse the economics of the bargaining process in conjunction with a discussion of the water allocation process. In particular, we examine in detail the relation between certain habitat estimation techniques, and the socially optimal allocation of non-market resources. The results developed here describe the welfare implications implicit in the contemporary general equilibrium analysis of a competitive market economy. We also review certain currently available techniques for assigning dollar values to the social benefits of instream flow. The limitations of non-market valuation techniques with respect to estimating the benefits provided by instream flows and the aquatic habitat contingent on these flows should not deter resource managers from using economic analysis as a basic tool for settling instream flow conflicts.

  11. Mesoscopic-microscopic spatial stochastic simulation with automatic system partitioning.

    PubMed

    Hellander, Stefan; Hellander, Andreas; Petzold, Linda

    2017-12-21

    The reaction-diffusion master equation (RDME) is a model that allows for efficient on-lattice simulation of spatially resolved stochastic chemical kinetics. Compared to off-lattice hard-sphere simulations with Brownian dynamics or Green's function reaction dynamics, the RDME can be orders of magnitude faster if the lattice spacing can be chosen coarse enough. However, strongly diffusion-controlled reactions mandate a very fine mesh resolution for acceptable accuracy. It is common that reactions in the same model differ in their degree of diffusion control and therefore require different degrees of mesh resolution. This renders mesoscopic simulation inefficient for systems with multiscale properties. Mesoscopic-microscopic hybrid methods address this problem by resolving the most challenging reactions with a microscale, off-lattice simulation. However, all methods to date require manual partitioning of a system, effectively limiting their usefulness as "black-box" simulation codes. In this paper, we propose a hybrid simulation algorithm with automatic system partitioning based on indirect a priori error estimates. We demonstrate the accuracy and efficiency of the method on models of diffusion-controlled networks in 3D.

  12. Cell-phone-based platform for biomedical device development and education applications.

    PubMed

    Smith, Zachary J; Chu, Kaiqin; Espenson, Alyssa R; Rahimzadeh, Mehdi; Gryshuk, Amy; Molinaro, Marco; Dwyre, Denis M; Lane, Stephen; Matthews, Dennis; Wachsmann-Hogiu, Sebastian

    2011-03-02

    In this paper we report the development of two attachments to a commercial cell phone that transform the phone's integrated lens and image sensor into a 350× microscope and visible-light spectrometer. The microscope is capable of transmission and polarized microscopy modes and is shown to have 1.5 micron resolution and a usable field-of-view of 150×150 with no image processing, and approximately 350×350 when post-processing is applied. The spectrometer has a 300 nm bandwidth with a limiting spectral resolution of close to 5 nm. We show applications of the devices to medically relevant problems. In the case of the microscope, we image both stained and unstained blood-smears showing the ability to acquire images of similar quality to commercial microscope platforms, thus allowing diagnosis of clinical pathologies. With the spectrometer we demonstrate acquisition of a white-light transmission spectrum through diffuse tissue as well as the acquisition of a fluorescence spectrum. We also envision the devices to have immediate relevance in the educational field.

  14. Consistent three-equation model for thin films

    NASA Astrophysics Data System (ADS)

    Richard, Gael; Gisclon, Marguerite; Ruyer-Quil, Christian; Vila, Jean-Paul

    2017-11-01

    Numerical simulations of thin films of Newtonian fluids down an inclined plane use reduced models for computational cost reasons. These models are usually derived by averaging over the fluid depth the physical equations of fluid mechanics with an asymptotic method in the long-wave limit. Two-equation models are based on the mass conservation equation and either on the momentum balance equation or on the work-energy theorem. We show that there is no two-equation model that is both consistent and theoretically coherent and that a third variable and a three-equation model are required to solve all theoretical contradictions. The linear and nonlinear properties of two- and three-equation models are tested on various practical problems. We present a new consistent three-equation model with a simple mathematical structure which allows an easy and reliable numerical resolution. The numerical calculations agree fairly well with experimental measurements or with direct numerical resolutions for neutral stability curves, speed of kinematic waves and of solitary waves and depth profiles of wavy films. The model can also predict the flow reversal at the first capillary trough ahead of the main wave hump.

  15. An Effective Measured Data Preprocessing Method in Electrical Impedance Tomography

    PubMed Central

    Yu, Chenglong; Yue, Shihong; Wang, Jianpei; Wang, Huaxiang

    2014-01-01

    As an advanced process detection technology, electrical impedance tomography (EIT) has been widely studied in industrial fields. However, EIT techniques are greatly limited by low spatial resolution. This problem may result from incorrect preprocessing of the measured data and the lack of a general criterion to evaluate different preprocessing processes. In this paper, an EIT data preprocessing method is proposed that roots all measured data, and it is evaluated by two constructed indexes based on the all-rooted EIT measured data. By finding the optimums of the two indexes, the proposed method can be applied to improve EIT imaging spatial resolution. In terms of a theoretical model, the optimal rooting exponents for the two indexes range over [0.23, 0.33] and [0.22, 0.35], respectively. Moreover, the factors that affect the correctness of the proposed method are generally analyzed. Measured-data preprocessing is necessary and helpful for any imaging process, so the proposed method can be generally and widely used. Experimental results validate the two proposed indexes. PMID:25165735
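
The "all rooting" preprocessing can be sketched as an element-wise power transform, here interpreting the optimal rooting values as exponents in the reported ranges; this is an assumption about the transform's form, and the paper's two evaluation indexes are not reproduced.

```python
import numpy as np

def all_root(measurements, r):
    """Raise every measured value to the power r (0 < r < 1), preserving
    sign. Rooting compresses the large dynamic range of EIT boundary
    voltages, which is the apparent intent of the preprocessing."""
    return np.sign(measurements) * np.abs(measurements) ** r

# EIT boundary voltages typically span several orders of magnitude
# (hypothetical values for illustration).
v = np.array([2.0e-5, 4.0e-4, 3.0e-3, 9.0e-2, 1.0])
for r in (0.23, 0.33):
    rooted = all_root(v, r)
    print(r, rooted.max() / rooted.min())

print(v.max() / v.min())   # original dynamic range, for comparison
```

After rooting, the ratio between the largest and smallest measurement shrinks from tens of thousands to a few tens, so small boundary voltages carry more weight in the subsequent image reconstruction.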

  16. Femtosecond Electron Wave Packet Propagation and Diffraction: Towards Making the "Molecular Movie"

    NASA Astrophysics Data System (ADS)

    Miller, R. J. Dwayne

    2003-03-01

    Time-resolved electron diffraction harbors great promise for achieving atomic resolution of the fastest chemical processes. The generation of sufficiently short electron pulses to achieve this real time view of a chemical reaction has been limited by problems in maintaining short electron pulses with realistic electron densities to the sample. The propagation dynamics of femtosecond electron packets in the drift region of a photoelectron gun are investigated with an N-body numerical simulation and mean-field model. This analysis shows that the redistribution of electrons inside the packet, arising from space-charge and dispersion contributions, changes the pulse envelope and leads to the development of a spatially linear axial velocity distribution. These results have been used in the design of femtosecond photoelectron guns with higher time resolution and novel electron-optical methods of pulse characterization that are approaching 100 fs timescales. Time-resolved diffraction studies with electron pulses of approximately 500 femtoseconds have focused on solid-liquid phase transitions under far from equilibrium conditions. This work gives a microscopic description of the melting process and illustrates the promise of atomically resolving transition state processes.

  17. Stress-relieved assembly method for a high-resolution airborne optical system

    NASA Astrophysics Data System (ADS)

    Park, Kwang-Woo; Kim, Chang-Woo; Rhee, Hyug-Gyo; Yang, Ho-Soon; Lee, Eun-Jong

    2012-04-01

    In this manuscript, we report on assembly issues of a new high-resolution airborne optical system (HRAOS), which consists of front-end optics, two after-end optical channels (electro-optical and infrared channels), and a connection module. The beam splitter (BS) plate in the connection module divides the output beam from the front-end optics by 50:50 and feeds it into the after-end optical channels. The BS plate has a 116-mm elliptical shape on the major axis while the thickness is only 8 mm to meet the weight limitation of the system. As a result, a small amount of stress on the BS plate causes a relatively large deformation and ultimately leads to a serious deterioration of the image quality. To resolve this problem, we suggest a new assembly method (a four-point-bonding method) and verify it by using a finite-element analysis. After the proposed method had been applied, the final wavefront error of the entire optical system showed an rms (root-mean-square) value of 66 nm. The error of a previous result was about 317 nm. Thermal effects were also observed.

  18. Optical method for high magnification imaging and video recording of live cells at sub-micron resolution

    NASA Astrophysics Data System (ADS)

    Romo, Jaime E., Jr.

    Optical microscopy, the most common technique for viewing living microorganisms, is limited in resolution by Abbe's criterion. Recent microscopy techniques focus on circumventing the light diffraction limit by using different methods to obtain the topography of the sample. Systems like the AFM and SEM provide images with fields of view in the nanometer range with highly resolvable detail; however, these techniques are expensive and limited in their ability to document live cells. The Dino-Lite digital microscope coupled with the Zeiss Axiovert 25 CFL microscope delivers a cost-effective method for recording live cells. Fields of view ranging from 8 microns to 300 microns with fair resolution provide a reliable method for discovering native cell structures at the nanoscale. In this report, cultured HeLa cells are recorded using different optical configurations, resulting in documentation of cell dynamics at high magnification and resolution.

  19. A practical deconvolution algorithm in multi-fiber spectra extraction

    NASA Astrophysics Data System (ADS)

    Zhang, Haotong; Li, Guangwei; Bai, Zhongrui

    2015-08-01

    The deconvolution algorithm is a very promising method in multi-fiber spectroscopy data reduction: it can extract spectra down to the photon noise level as well as improve the spectral resolution, but as mentioned in Bolton & Schlegel (2010), it is limited by its huge computation requirement and thus cannot be implemented directly in actual data reduction. We develop a practical algorithm to solve the computation problem. The new algorithm can deconvolve a 2D fiber spectral image of any size with actual PSFs, which may vary with position. We further consider the influence of noise, which makes deconvolution an intrinsically ill-posed problem. We modify our method with a Tikhonov regularization term to suppress the method-induced noise. A series of simulations based on LAMOST data are carried out to test our method under more realistic situations with Poisson noise and extreme cross-talk, i.e., the fiber-to-fiber distance is comparable to the FWHM of the fiber profile. Compared with the results of traditional extraction methods, i.e., the Aperture Extraction Method and the Profile Fitting Method, our method shows both higher S/N and spectral resolution. The computation time for a noise-added image with 250 fibers and 4k pixels in the wavelength direction is about 2 hours when the fiber cross-talk is not extreme, and 3.5 hours for extreme fiber cross-talk. We finally apply our method to real LAMOST data. We find that the 1D spectrum extracted by our method has both higher S/N and resolution than the traditional methods, but there are still some suspicious weak features possibly caused by the noise sensitivity of the method around the strong emission lines. How to further attenuate the noise influence will be the topic of our future work. 
As we have demonstrated, multi-fiber spectra extracted by our method have higher resolution and signal-to-noise ratio and thus will provide more accurate information (such as higher radial-velocity and metallicity measurement accuracy in stellar physics) to astronomers than traditional methods.
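
A minimal 1D version of Tikhonov-regularised deconvolution illustrates the core idea. The real problem is 2D with position-dependent PSFs; here a fixed Gaussian PSF, the line positions, and all numeric values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# The fiber PSF smears the true spectrum (convolution matrix A); we recover
# the spectrum by solving the regularised normal equations:
#   min ||A x - y||^2 + lam ||x||^2   =>   (A^T A + lam I) x = A^T y
n = 200
pix = np.arange(n)
A = np.exp(-0.5 * ((pix[:, None] - pix[None, :]) / 1.0) ** 2)  # Gaussian PSF
A /= A.sum(axis=1, keepdims=True)                              # unit-flux rows

truth = np.zeros(n)
truth[[50, 60, 130]] = [1.0, 0.7, 1.2]        # three narrow emission "lines"
y = A @ truth + rng.normal(scale=0.005, size=n)

lam = 1e-3                                     # regularisation strength
x = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

err_raw = np.linalg.norm(y - truth)            # error of the blurred data
err_dec = np.linalg.norm(x - truth)            # error after deconvolution
print(err_dec < err_raw)
```

The regularisation term is what keeps the noise-amplifying small singular values of A under control; with lam = 0 the same solve would return a noise-dominated solution, which is the "method-induced noise" the abstract refers to.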

  20. High-resolution scanning precession electron diffraction: Alignment and spatial resolution.

    PubMed

    Barnard, Jonathan S; Johnstone, Duncan N; Midgley, Paul A

    2017-03-01

    Methods are presented for aligning the pivot point of a precessing electron probe in the scanning transmission electron microscope (STEM) and for assessing the spatial resolution in scanning precession electron diffraction (SPED) experiments. The alignment procedure is performed entirely in diffraction mode, minimising probe wander within the bright-field (BF) convergent beam electron diffraction (CBED) disk and is used to obtain high spatial resolution SPED maps. Through analysis of the power spectra of virtual bright-field images extracted from the SPED data, the precession-induced blur was measured as a function of precession angle. At low precession angles, SPED spatial resolution was limited by electronic noise in the scan coils; whereas at high precession angles SPED spatial resolution was limited by tilt-induced two-fold astigmatism caused by the positive spherical aberration of the probe-forming lens. Copyright © 2016 Elsevier B.V. All rights reserved.
